
Kasturi Vasudevan

Analog Communications: Problems and Solutions
Kasturi Vasudevan
Electrical Engineering
Indian Institute of Technology Kanpur
Kanpur, Uttar Pradesh, India

ISBN 978-3-030-50336-9 ISBN 978-3-030-50337-6 (eBook)


https://doi.org/10.1007/978-3-030-50337-6
Jointly published with ANE Books Pvt. Ltd.
In addition to this printed edition, there is a local printed edition of this work available via Ane Books in
South Asia (India, Pakistan, Sri Lanka, Bangladesh, Nepal and Bhutan) and Africa (all countries in the
African subcontinent).
ISBN of the Co-Publisher’s edition: 9789386761811

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publishers, the authors, and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publishers nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publishers remain neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my family
Preface

Analog Communications: Problems and Solutions is suitable for third-year


undergraduates taking a first course in communications. Although most of the
present-day communication systems are digital, they continue to use many basic
concepts like Fourier transform, Hilbert transform, modulation, synchronization,
signal-to-noise ratio analysis, and so on that have roots in analog communications.
The book is richly illustrated with figures and easy to read.
Chapter 1 covers some of the basic concepts like the Fourier and Hilbert
transforms, Fourier series, and the canonical representation of bandpass signals.
Random variables and random processes are covered in Chap. 2. Amplitude
modulation (AM), envelope detection, Costas loop, squaring loop, single sideband
modulation, vestigial sideband modulation, and quadrature amplitude modulation
are covered in Chap. 3. In particular, different implementations of the Costas loop are
considered in Chap. 3. Chapter 4 deals with frequency modulation (FM) and var-
ious methods of generation and demodulation of FM signals. The figure-of-merit
analysis of AM and FM receivers is dealt with in Chap. 5. The principles of
quantization are covered in Chap. 6. Finally, topics on digital communications are
covered in Chap. 7.
I would like to express my gratitude to some of my instructors at IIT Kharagpur
(where I completed my undergraduate studies)—Dr. S. L. Maskara (Emeritus faculty),
Dr. T. S. Lamba (Emeritus faculty), Dr. R. V. Rajkumar and Dr. S. Shanmugavel,
Dr. D. Dutta, and Dr. C. K. Maiti.
During the early stages of my career (1991–1992), I was associated with the
CAD-VLSI Group, Indian Telephone Industries Ltd., Bangalore. I would like to
express my gratitude to Mr. K. S. Raghunathan (formerly a Deputy Chief Engineer
at the CAD-VLSI Group) for his supervision of the implementation of a statistical
fault analyzer for digital circuits. It was from him that I learnt the concepts of good
programming, which I cherish and use to this day.
During the course of my master's degree and Ph.D. at IIT Madras, I had the
opportunity to learn the fundamental concepts of digital communications from my
instructors, Dr. V. G. K. Murthy, Dr. V. V. Rao, Dr. K. Radhakrishna Rao,
Dr. Bhaskar Ramamurthi, and Dr. Ashok Jhunjhunwalla. It is a pleasure to


acknowledge their teaching. I also gratefully acknowledge the guidance of Dr.


K. Giridhar and Dr. Bhaskar Ramamurthi who were jointly my Doctoral supervi-
sors. I also wish to thank Dr. Devendra Jalihal for introducing me to the LATEX
document processing system, without which this book would not have been
complete.
Special mention is also due to Dr. Bixio Rimoldi of the Mobile Communications
Lab, EPFL Switzerland, and Dr. Raymond Knopp, now with Institute Eurecom,
Sophia Antipolis France, for providing me the opportunity to implement some
of the signal processing algorithms in real time, for their software radio platform.
I would like to thank many of my students for their valuable feedback. I thank
my colleagues at IIT Kanpur, in particular Dr. S. C. Srivastava, Dr. V. Sinha
(Emeritus faculty), Dr. Govind Sharma, Dr. Pradip Sircar, Dr. R. K. Bansal,
Dr. K. S. Venkatesh, Dr. A. K. Chaturvedi, Dr. Y. N. Singh, Dr. Ketan Rajawat,
Dr. Abhishek Gupta, and Dr. Rohit Budhiraja for their support and encouragement.
I would also like to thank the following people for encouraging me to write this
book:
• Dr. Surendra Prasad, IIT Delhi, India
• Dr. P. Y. Kam, NUS Singapore
• Dr. John M. Cioffi, Emeritus faculty, Stanford University, USA
• Dr. Lajos Hanzo, University of Southampton, UK
• Dr. Prakash Narayan, University of Maryland, College Park, USA
• Dr. P. P. Vaidyanathan, Caltech, USA
• Dr. Vincent Poor, Princeton, USA
• Dr. W. C. Lindsey, University of Southern California, USA
• Dr. Bella Bose, Oregon State University, USA
• Dr. S. Pal, former President IETE, India
• Dr. G. Panda, IIT Bhubaneswar, India
• Dr. Arne Svensson, Chalmers University of Technology, Sweden
• Dr. Lev B. Levitin, Boston University, USA
• Dr. Lillikutty Jacob, NIT Calicut, India
• Dr. Khoa N. Le, University of Western Sydney, Australia
• Dr. Hamid Jafarkhani, University of California Irvine, USA
• Dr. Aarne Mämmelä, VTT Technical Research Centre, Finland
• Dr. Behnaam Aazhang, Rice University, USA
• Dr. Thomas Kailath, Emeritus faculty, Stanford University, USA
• Dr. Stephen Boyd, Stanford University, USA
• Dr. Rama Chellappa, University of Maryland, College Park, USA
Thanks are also due to the open source community for providing operating
systems like Linux and software like Scilab, LATEX, Xfig, and Gnuplot, without
which this book would not have been complete. I also wish to thank Mr. Jai Raj
Kapoor and his team for their skill and dedication in bringing out this book.

In spite of my best efforts, some errors might have gone unnoticed. Suggestions
for improving the book are welcome.

Kanpur, India Kasturi Vasudevan


Contents

1 Signals and Systems  1
  References  76
2 Random Variables and Random Processes  77
  References  151
3 Amplitude Modulation  153
  References  241
4 Frequency Modulation  243
  References  294
5 Noise in Analog Modulation  295
  Reference  326
6 Pulse Code Modulation  327
  References  356
7 Signaling Through AWGN Channel  357

Index  369
About the Author

Kasturi Vasudevan completed his Bachelor of Technology (Honours) from the


Department of Electronics and Electrical Communication Engineering, IIT
Kharagpur, India, in 1991, and his M.S. and Ph.D. from the Department of Electrical
Engineering, IIT Madras, in 1996 and 2000, respectively. During 1991–1992, he
was employed with Indian Telephone Industries Ltd., Bangalore, India. He was a
Postdoctoral Fellow at Mobile Communications Lab, EPFL, Switzerland, and then
an Engineer at Texas Instruments, Bangalore. Since July 2001, he has been a faculty
member in the Department of Electrical Engineering at IIT Kanpur, where he is now a Professor.
His interests lie in the area of communications.

Notation

a ∧ b          Logical AND of a and b
a ∨ b          Logical OR of a and b
a ≟ b          a may or may not be equal to b
∀              For all
⌊x⌋            Largest integer less than or equal to x
⌈x⌉            Smallest integer greater than or equal to x
j              √−1
≜              Equal to by definition
⋆              Convolution
δ_D(·)         Dirac delta function
δ_K(·)         Kronecker delta function
x̃              A complex quantity
x̂              Estimate of x
x              A vector or matrix
I_M            An M × M identity matrix
S              Complex symbol (note the absence of tilde)
ℜ{·}           Real part
ℑ{·}           Imaginary part
x_I            Real or in-phase part of x̃
x_Q            Imaginary or quadrature part of x̃
E[·]           Expectation
erfc(·)        Complementary error function
[x1, x2]       Closed interval, inclusive of x1 and x2
[x1, x2)       Half-open interval, inclusive of x1 and exclusive of x2
(x1, x2)       Open interval, exclusive of x1 and x2
P(·)           Probability
f_X(·)         Probability density function of X
Hz             Frequency in Hertz
wrt            With respect to
Chapter 1
Signals and Systems

1. We know that the Fourier transform is given by


    G(f) = ∫_{t=−∞}^{∞} g(t) exp(−j 2π f t) dt.                            (1.1)

Using the properties of the Dirac delta function, prove the inverse Fourier transform, that is,

    g(t) = ∫_{f=−∞}^{∞} G(f) exp(j 2π f t) df.                             (1.2)

Hint: Substitute for G( f ) from (1.1).


• Solution: Substituting for G( f ) in the right-hand side of (1.2), we get
    ∫_{f=−∞}^{∞} G(f) exp(j 2π f t) df
       = ∫_{f=−∞}^{∞} [∫_{x=−∞}^{∞} g(x) exp(−j 2π f x) dx] exp(j 2π f t) df
       = ∫_{x=−∞}^{∞} g(x) dx ∫_{f=−∞}^{∞} exp(j 2π f (t − x)) df.         (1.3)

However, we know that

    δ(t) ⇌ 1
    ⇒ ∫_{f=−∞}^{∞} exp(j 2π f t) df = δ(t)
    ⇒ ∫_{f=−∞}^{∞} exp(j 2π f (t − x)) df = δ(t − x).                      (1.4)


Therefore (1.3) becomes


    ∫_{f=−∞}^{∞} G(f) exp(j 2π f t) df = ∫_{x=−∞}^{∞} δ(t − x) g(x) dx
                                       = g(t).                             (1.5)

Thus proved.
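The proof can also be illustrated numerically. The following short Python/NumPy snippet is an added sketch (not part of the original solution); it approximates (1.1) and (1.2) by Riemann sums for a Gaussian test signal and confirms that the inverse transform recovers g(t).

    import numpy as np

    dt = 0.01
    t = np.arange(-10, 10, dt)
    g = np.exp(-np.pi * t**2)            # test signal

    df = 0.01
    f = np.arange(-5, 5, df)
    # forward transform (1.1), approximated by a Riemann sum
    G = np.array([np.sum(g * np.exp(-1j * 2 * np.pi * fk * t)) * dt for fk in f])
    # inverse transform (1.2), approximated the same way
    g_rec = np.array([np.sum(G * np.exp(1j * 2 * np.pi * f * tk)) * df for tk in t])

    print(np.max(np.abs(g_rec.real - g)))   # close to zero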
2. (Simon Haykin 1983) Consider a pulse-like function g(t) that consists of a small
number of straight-line segments. Suppose that this function is differentiated with
respect to time twice so as to generate a sequence of weighted delta functions as
shown by
    d²g(t)/dt² = Σ_i k_i δ(t − t_i),                                       (1.6)

where ki are related to the slopes of the straight-line segments.


(a) Given the values of ki and ti show that the Fourier transform of g(t) is given
by
    G(f) = − (1/(4π² f²)) Σ_i k_i exp(−j 2π f t_i).                        (1.7)

(b) Using the above procedure find the Fourier transform of the pulse in Fig. 1.1.

• Solution: To solve part (a) we note that


    d²g(t)/dt² = ∫_{f=−∞}^{∞} (j 2π f)² G(f) e^{j 2π f t} df,              (1.8)

where G( f ) is the Fourier transform of g(t). In other words

    d²g(t)/dt² = g1(t) ⇌ (j 2π f)² G(f) = G1(f) (say).                     (1.9)

Consequently

    G(f) = − G1(f)/(4π² f²).                                               (1.10)

[Fig. 1.1: the pulse g(t) of height A, with breakpoints at −t_b, −t_a, t_a, and t_b]

In part (a), it is clear that



    G1(f) = Σ_i k_i e^{−j 2π f t_i}.                                       (1.11)

Therefore

    G(f) = − (1/(4π² f²)) Σ_i k_i e^{−j 2π f t_i}.                         (1.12)

To solve part (b), we note that

    d²g(t)/dt² = (A/(t_b − t_a)) (δ(t + t_b) − δ(t + t_a) − δ(t − t_a) + δ(t − t_b)).   (1.13)

This is illustrated in Fig. 1.2. Hence

    G(f) = − A/((t_b − t_a) 4π² f²) [exp(j 2π f t_b) − exp(j 2π f t_a)
                                     − exp(−j 2π f t_a) + exp(−j 2π f t_b)]
    ⇒ G(f) = − A/((t_b − t_a) 2π² f²) [cos(2π f t_b) − cos(2π f t_a)]
    ⇒ G(f) = A/((t_b − t_a) π² f²) sin(π f (t_b + t_a)) sin(π f (t_b − t_a)),           (1.14)

[Fig. 1.2: illustrating g(t) and its derivatives; dg(t)/dt has magnitude A/(t_b − t_a) on the ramps, and d²g(t)/dt² consists of the impulses in (1.13)]

[Fig. 1.3: the periodic train g_p(t) of triangular pulses of height A, base width 2T, and period T0]

where we have used the formula:


   
    cos(B) − cos(A) = 2 sin((A + B)/2) sin((A − B)/2).                     (1.15)
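As an added sketch (not part of the original solution), the closed form (1.14) can be checked against a direct Riemann-sum Fourier transform of the trapezoidal pulse implied by (1.13); the values of A, t_a, and t_b below are illustrative choices.

    import numpy as np

    A, ta, tb = 2.0, 1.0, 3.0                 # illustrative pulse parameters
    dt = 1e-3
    t = np.arange(-tb, tb, dt)
    # trapezoid: flat at A on (-ta, ta), linear ramps to zero at +/- tb
    g = A * np.clip((tb - np.abs(t)) / (tb - ta), 0.0, 1.0)

    f = np.linspace(0.05, 2.0, 50)            # avoid f = 0, where (1.14) is 0/0
    G_num = np.array([np.sum(g * np.exp(-1j * 2 * np.pi * fk * t)) * dt for fk in f])
    G_form = A / ((tb - ta) * np.pi**2 * f**2) \
             * np.sin(np.pi * f * (tb + ta)) * np.sin(np.pi * f * (tb - ta))

    print(np.max(np.abs(G_num.real - G_form)))   # small; imaginary part ~0 by symmetry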

3. (Simon Haykin 1983) Using the above procedure compute the complex Fourier
series representation of the periodic train of triangular pulses shown in Fig. 1.3.
Here T < T0 /2. Hence also find the Fourier transform of g p (t).
• Solution: Consider the pulse:

    g(t) = { g_p(t)   for −T0/2 < t < T0/2
           { 0        elsewhere.                                           (1.16)

Clearly

    d²g(t)/dt² = (A/T) (δ(t + T) − 2δ(t) + δ(t − T)).                      (1.17)

Hence

    G(f) = − A/(4π² f² T) [exp(j 2π f T) − 2 + exp(−j 2π f T)]
         = (A T/(π² f² T²)) sin²(π f T)
         = A T sinc²(f T).                                                 (1.18)

We also know that




    g_p(t) = Σ_{n=−∞}^{∞} c_n exp(j 2π n t/T0),                            (1.19)

where

    c_n = (1/T0) ∫_{t=−∞}^{∞} g(t) exp(−j 2π n t/T0) dt
        = (1/T0) G(n/T0).                                                  (1.20)

Substituting for G(n/T0 ) we have



    g_p(t) = (A T/T0) Σ_{n=−∞}^{∞} sinc²(n T/T0) exp(j 2π n t/T0).         (1.21)

Using the fact that

    exp(j 2π f_c t) ⇌ δ(f − f_c)                                           (1.22)

the Fourier transform of g p (t) is given by

    G_p(f) = (A T/T0) Σ_{n=−∞}^{∞} sinc²(n T/T0) δ(f − n/T0).              (1.23)

4. (Simon Haykin 1983) Consider a signal g p (t) defined by

g p (t) = A0 + A1 cos(2π f 1 t + θ) + A2 cos(2π f 2 t + θ), (1.24)

which is given to be periodic with a period of T0 .


(a) Determine the autocorrelation Rg p (τ ).
(b) Is any information lost in obtaining Rg p (τ )?

• Solution: Since g p (t) is periodic with a period T0 , the autocorrelation of g p (t)


is also periodic with period T0 . We also have
    T0 = n/f1 = m/f2
    ⇒ T0 = (n − m)/(f1 − f2) = (m + n)/(f1 + f2) = 2n/(2 f1) = 2m/(2 f2),  (1.25)

where m and n are integers. Thus, sinusoids at the frequencies f1 − f2, f1 + f2, 2 f1, and 2 f2
are also periodic with period T0. Now, the autocorrelation of g_p(t) is equal to
(note that g p (t) is real-valued)
    R_{g_p}(τ) = (1/T0) ∫_{t=0}^{T0} g_p(t) g_p(t − τ) dt
               = A0² + (A1²/2) cos(2π f1 τ) + (A2²/2) cos(2π f2 τ),        (1.26)

where we have made use of the fact that a sinusoid integrated over one period
is equal to zero. In computing the autocorrelation, we have lost the phase
information.
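A short numerical check (added here; the parameter values are illustrative, not from the text) confirms (1.26) and shows that the result does not depend on the phase θ.

    import numpy as np

    A0, A1, A2, f1, f2, theta = 1.0, 2.0, 0.5, 2.0, 3.0, 0.7   # illustrative values
    T0 = 1.0                      # common period of the 2 Hz and 3 Hz tones
    t = np.arange(0.0, T0, 1e-4)

    def gp(x):
        return (A0 + A1 * np.cos(2 * np.pi * f1 * x + theta)
                   + A2 * np.cos(2 * np.pi * f2 * x + theta))

    for tau in (0.0, 0.1, 0.25):
        R_num = np.mean(gp(t) * gp(t - tau))      # (1/T0) times the integral over one period
        R_form = (A0**2 + 0.5 * A1**2 * np.cos(2 * np.pi * f1 * tau)
                        + 0.5 * A2**2 * np.cos(2 * np.pi * f2 * tau))
        print(tau, R_num, R_form)                 # agree, independent of theta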

5. Consider a signal g p (t) defined by

g p (t) = A0 + A1 sin(2π f 1 t + θ) − A2 cos(2π f 2 t + α), (1.27)

which is given to be periodic with a period of T0 .


(a) Determine the autocorrelation Rg p (τ ).
(b) Is any information lost in obtaining Rg p (τ )?

• Solution: Since g p (t) is periodic with a period T0 , the autocorrelation of g p (t)


is also periodic with period T0 . We also have
n m
T0 = =
f1 f2
n−m m+n 2n 2m
⇒ T0 = = = = , (1.28)
f1 − f2 f1 + f2 2 f1 2 f2

where m and n are integers. Thus, sinusoids at the frequencies f1 − f2, f1 + f2, 2 f1, and 2 f2
are also periodic with period T0. Now, the autocorrelation of g_p(t) is equal to
(note that g p (t) is real-valued)
    R_{g_p}(τ) = (1/T0) ∫_{t=0}^{T0} g_p(t) g_p(t − τ) dt
               = A0² + (A1²/2) cos(2π f1 τ) + (A2²/2) cos(2π f2 τ),        (1.29)
where we have made use of the fact that a sinusoid integrated over one period
is equal to zero. In computing the autocorrelation, we have lost the phase
information.

6. (Simon Haykin 1983) Let G( f ) denote the Fourier transform of a real-valued


signal g(t) and Rg (τ ) its autocorrelation function. Show that
    ∫_{τ=−∞}^{∞} |dR_g(τ)/dτ|² dτ = ∫_{f=−∞}^{∞} 4π² f² |G(f)|⁴ df.        (1.30)

• Solution: We know that

Rg (τ )  |G( f )|2
d Rg (τ )
⇒  j 2π f |G( f )|2 . (1.31)


Let
d Rg (τ )
= g1 (τ ). (1.32)

Let G 1 ( f ) denote the Fourier transform of g1 (t). Thus

G 1 ( f ) = j 2π f |G( f )|2 . (1.33)

By Rayleigh’s energy theorem we have


 ∞  ∞
|g1 (τ )|2 dτ = |G 1 ( f )|2 d f. (1.34)
τ =−∞ f =−∞

Substituting for g1 (t) and G 1 ( f ) in the above equation we get the required
result.
7. A sinusoidal signal of the form A cos(2π f 0 t), A > 0, is full wave rectified to
obtain g p (t).
(a) Using the complex Fourier series representation of g p (t) and the Parseval’s
power theorem, compute
    S = 9 (2/π)² [1 + 2/3² + ··· + 2/(1 − 4n²)² + ···],                    (1.35)

where n is an integer greater than or equal to one.


(b) The full wave rectified signal is passed through an ideal bandpass filter
having a gain of 2 in the passband frequency range of f 0 < | f | < 5 f 0 .
Compute the power of the output signal.

• Solution: The complex Fourier series representation of a periodic signal g p (t),


having a period T1 , is


g p (t) = cn exp( j 2πnt/T1 ), (1.36)
n=−∞

where
 T1 /2
1
cn = g p (t) exp(−j 2πnt/T1 ) dt (1.37)
T1 −T1 /2

denotes the complex Fourier series coefficient. In the given problem:

g p (t) = A| cos(2π f 0 t)|


T1 = 1/(2 f 0 ). (1.38)

Hence
 T1 /2
A
cn = |cos(2π f 0 t)| exp(−j 2πnt/T1 ) dt
T1 −T1 /2
 T1 /2
2A
= 2 cos(2π f 0 t) cos(2πnt/T1 ) dt
2T1 0

A T1 /2
= [cos(2π( f 0 − n/T1 )t) + cos(2π( f 0 + n/T1 )t)] dt
T1 0
A
= [sin(2π( f 0 − n/T1 )T1 /2)/(2π( f 0 − n/T1 ))
T1
+ sin(2π( f 0 + n/T1 )T1 /2)/(2π( f 0 + n/T1 ))]
AT1
= [cos(nπ)/(2π( f 0 T1 − n))
T1
+ cos(nπ)/(2π( f 0 T1 + n))]
A(−1)n
= [1/(1 − 2n) + 1/(1 + 2n)]
π
2 A(−1)n
= . (1.39)
π(1 − 4n 2 )

Observe that in (1.39)

c−n = cn . (1.40)

From Parseval’s power theorem, we have


 ∞

1 T1 /2  
g p (t)2 dt = |cn |2
T1 t=−T1 /2 n=−∞


= |c0 |2 + 2 |cn |2
n=1
  
2A 2 2 2
= 1 + 2 + ··· + + · · · ,
π 3 (1 − 4n 2 )2
(1.41)

where we have used (1.40). Comparing (1.35) with the last equation of (1.41),
we obtain

A = 3. (1.42)

Now, the left-hand side of (1.41) is equal to


 
1 T1 /2   2 T1 /2
g p (t)2 dt = A cos2 (2π f 0 t) dt
T1 t=−T1 /2 T1 t=−T1 /2
A2
= . (1.43)
2
Substituting for A from (1.42), the sum in (1.35) is equal to

S = 4.5. (1.44)

The (periodic) signal at the bandpass filter output is


2
g p1 (t) = 2cn exp( j 2πnt/T1 ), (1.45)
n=−2
n=0

where we have assumed that the gain of the bandpass filter is 2 and cn is given
by (1.39). The signal power at the bandpass filter output is


2
P= 4 |cn |2
n=−2
n=0
= 3.372. (1.46)
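The values S = 4.5 and P = 3.372 can be confirmed numerically from the coefficients (1.39); the following snippet is an added sketch, not part of the original solution.

    import numpy as np

    A = 3.0
    n = np.arange(1.0, 1.0e5)                     # float to avoid integer overflow
    S = (2 * A / np.pi)**2 * (1 + np.sum(2.0 / (1 - 4 * n**2)**2))
    print(S)                                      # approaches 4.5 = A**2 / 2

    # bandpass filter of gain 2 passes only the n = +/-1, +/-2 harmonics
    cn = lambda m: 2 * A * (-1)**m / (np.pi * (1 - 4 * m**2))
    P = sum(4 * abs(cn(m))**2 for m in (-2, -1, 1, 2))
    print(P)                                      # approximately 3.372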

8. (Simon Haykin 1983) Determine the autocorrelation of the Gaussian pulse given
by
 
    g(t) = (1/t0) exp(−π t²/t0²).                                          (1.47)

• Solution: We know that

    e^{−π t²} ⇌ e^{−π f²}
    ⇒ (1/t0) e^{−π t²} ⇌ (1/t0) e^{−π f²}.                                 (1.48)

Using time scaling with a = 1/t0 we get

    g(t) = (1/t0) e^{−π t²/t0²} ⇌ (|t0|/t0) e^{−π f² t0²} = G(f).          (1.49)

Since G( f ) is real-valued, the Fourier transform of the autocorrelation of g(t)


is simply G 2 ( f ). Thus

    G²(f) = e^{−2π f² t0²} = ψ_g(f) (say).                                 (1.50)
[Fig. 1.4: X(f), the Fourier transform of x(t), nonzero over −W ≤ f ≤ W]

Once again we use the time scaling property of the Fourier transform, namely,

|a|g(at)  G( f /a) (1.51)



with 1/a = 2 in (1.49) to get

1
√ e−πt /(2t0 ) ,
2 2
Rg (t) = (1.52)
|t0 | 2

which is the required autocorrelation of the Gaussian pulse. Observe the |t0 |
in the denominator of (1.52), since Rg (0) must be positive.
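As an added sketch (with an illustrative value of t0), the autocorrelation (1.52) can be checked against a discrete correlation of samples of the Gaussian pulse.

    import numpy as np

    t0 = 0.7                                      # illustrative
    dt = 0.01
    t = np.arange(-8, 8, dt)
    g = (1.0 / t0) * np.exp(-np.pi * t**2 / t0**2)

    R_num = np.correlate(g, g, mode="same") * dt  # zero lag lines up with t = 0
    R_form = (1.0 / (abs(t0) * np.sqrt(2))) * np.exp(-np.pi * t**2 / (2 * t0**2))
    print(np.max(np.abs(R_num - R_form)))         # close to zero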
9. (Rodger and William 2002) Assume that the Fourier transform of x(t) has the
shape as shown in Fig. 1.4. Determine and plot the spectrum of each of the
following signals:
(a) x1(t) = (3/4)x(t) + (1/4) j x̂(t)
(b) x2(t) = [(3/4)x(t) + (3/4) j x̂(t)] e^{j 2π f0 t}
(c) x3(t) = [(3/4)x(t) + (1/4) j x̂(t)] e^{j 2π W t},
where f0 ≫ W and x̂(t) denotes the Hilbert transform of x(t).
• Solution: We know that

x̂(t)  −j sgn ( f )X ( f ). (1.53)

Therefore

X 1 ( f ) = (3/4)X ( f ) + (1/4)sgn ( f )X ( f )

⎨ X( f ) for f > 0
= (3/4)A for f = 0 . (1.54)

(1/2)X ( f ) for f < 0

The plot of X 1 ( f ) is given in Fig. 1.5.


To solve part (b) let

m(t) = (3/4)x(t) + (3/4)j x̂(t). (1.55)


[Fig. 1.5: X1(f) over −W ≤ f ≤ W, with marked levels A, 3A/4, and A/2]

[Fig. 1.6: X2(f) over f0 ≤ f ≤ f0 + W, with marked levels 3A/2 and 3A/4]

Then

M( f ) = (3/4)X ( f ) + (3/4)sgn ( f )X ( f )

⎨ (3/2)X ( f ) for f > 0
= (3/4)A for f = 0 . (1.56)

0 for f < 0

Then X 2 ( f ) = M( f − f 0 ) and is given by



⎨ (3/2)X ( f − f 0 ) for f > f 0
X 2 ( f ) = (3/4)A for f = f 0 . (1.57)

0 for f < f 0

The plot of X 2 ( f ) is shown in Fig. 1.6.


To solve part (c) let

m(t) = (3/4)x(t) + (1/4)j x̂(t). (1.58)

Then

M( f ) = (3/4)X ( f ) + (1/4)sgn( f ) X ( f )

⎨ X( f ) for f > 0
= (3/4)A for f = 0 . (1.59)

(1/2)X ( f ) for f < 0
[Fig. 1.7: X3(f) over 0 ≤ f ≤ 2W, with marked levels A, 3A/4, and A/2]

Thus X 3 ( f ) = M( f − W ) and is given by



⎨ X( f − W) for f > W
X 3 ( f ) = (3/4)A for f = W (1.60)

(1/2)X ( f − W ) for f < W

The plot of X 3 ( f ) is given in Fig. 1.7.


10. Consider the input

    x(t) = rect(t/T) cos(2π(f0 + Δf) t)                                    (1.61)

for Δf ≪ f0. When x(t) is input to a filter with impulse response

h(t) = αe−αt cos(2π f 0 t)u(t) (1.62)

find the output by convolving the complex envelopes. Assume that the impulse
response of the filter is of the form:

h(t) = 2h c (t) cos(2π f 0 t) − 2h s (t) sin(2π f 0 t). (1.63)

• Solution: The canonical representation of a bandpass signal g(t) centered at


frequency f c is

g(t) = gc (t) cos(2π f c t) − gs (t) sin(2π f c t), (1.64)

where gc (t) and gs (t) extend over the frequency range [−W, W ], where W 
f c . The complex envelope of g(t) is defined as

g̃(t) = gc (t) + j gs (t). (1.65)

The given signal x(t) can be written as

x(t) = rect (t/T ) [cos(2π f 0 t) cos(2π f t) − sin(2π f 0 t) sin(2π f t)] ,


(1.66)

where  f  f 0 . By inspection, we conclude that the complex envelope of x(t)


is given by


x̃(t) = rect (t/T ) cos(2π f t) + j sin(2π f t)
= rect (t/T )e j 2π f t . (1.67)

Thus
 
x(t) =  x̃(t)e j 2π f0 t . (1.68)

In order to facilitate computation of the filter output using the complex envelopes,
it is given that the filter impulse response is to be represented by

h(t) = 2h c (t) cos(2π f 0 t) − 2h s (t) sin(2π f 0 t), (1.69)

where the complex envelope of the filter is

h̃(t) = h c (t) + j h s (t). (1.70)

Again by inspection, the complex envelope of the channel is

1 −αt
h̃(t) = αe u(t). (1.71)
2
Thus
 
h(t) =  2h̃(t)e j 2π f0 t . (1.72)

Now, the complex envelope of the output is given by

ỹ(t) = x̃(t)  h̃(t)


 ∞
= x̃(τ )h̃(t − τ ) dτ . (1.73)
τ =−∞

Substituting for x̃(·) and h̃(τ ) in the above equation, we get


 T /2
α
ỹ(t) = e j 2π f τ e−α(t−τ ) u(t − τ ) dτ . (1.74)
2 τ =−T /2

Since u(t − τ ) = 0 for t < τ , and since −T /2 < τ < T /2, it is clear that ỹ(t) =
0, and hence y(t) = 0 for t < −T /2.
Now, for −T /2 < t < T /2, we have

α t
ỹ(t) = e j 2π f τ e−α(t−τ ) dτ
2 τ =−T /2
 t
α
= e−αt eτ (α+j 2π f ) dτ
2 τ =−T /2
α

= e−αt et (α+j 2π f ) − e−T /2(α+j 2π f )
2(α + j 2π f )
α
j 2π f t
= e − e−α(t+T /2)−j π f T . (1.75)
2(α + j 2π f )

Let
 
2π f
θ1 = tan−1
α
θ2 = π f T

r = α2 + (2π f )2 . (1.76)

Then for −T /2 < t < T /2 the output is


 
y(t) =  ỹ(t)e j 2π f0 t
α
= cos(2π( f 0 +  f )t − θ1 )
2r 
− e−α(t+T /2) cos(2π f 0 t − θ1 − θ2 ) . (1.77)

Similarly, for t > T /2 we have


 T /2
α
ỹ(t) = e j 2π f τ e−α(t−τ ) dτ
2 τ =−T /2
 T /2
α −αt
= e eτ (α+j 2π f ) dτ
2 τ =−T /2
α

= e−αt eT /2(α+j 2π f ) − e−T /2(α+j 2π f )
2(α + j 2π f )
α
−α(t−T /2) j π f T
= e e − e−α(t+T /2) e−j π f T .
2(α + j 2π f )
(1.78)

Therefore for t > T /2 the output is


 
y(t) =  ỹ(t)e j 2π f0 t
α  −α(t−T /2)
= e cos(2π f 0 t − θ1 + θ2 )
2r 
− e−α(t+T /2) cos(2π f 0 t − θ1 − θ2 ) . (1.79)
[Fig. 1.8: Y(f); triangles of peak 38.4 centered at f = 0 and of peak 19.2 centered at f = ±40, together with the rectangles of height 4 over 17 < |f| < 23]

11. A nonlinear system defined by

y(t) = x(t) + 0.2x 2 (t) (1.80)

has an input signal with a bandpass spectrum given by


   
    X(f) = 4 rect((f − 20)/6) + 4 rect((f + 20)/6).                        (1.81)

Sketch the spectrum at the output labeling all the important frequencies and
amplitudes.
• Solution: The Fourier transform of the output is

Y ( f ) = X ( f ) + 0.2X ( f )  X ( f ). (1.82)

The spectrum of Y ( f ) can be found out by inspection and is shown in Fig. 1.8.
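The levels read off in Fig. 1.8 can be reproduced numerically; the snippet below is an added sketch (not part of the original solution) that evaluates X(f) + 0.2 X(f) ⋆ X(f) on a frequency grid.

    import numpy as np

    df = 0.01
    f = np.arange(-60, 60, df)
    rect = lambda x: (np.abs(x) < 0.5).astype(float)
    X = 4 * rect((f - 20) / 6) + 4 * rect((f + 20) / 6)

    Y = X + 0.2 * np.convolve(X, X, mode="same") * df
    print(Y[np.argmin(np.abs(f - 0))])            # about 38.4
    print(Y[np.argmin(np.abs(f - 40))])           # about 19.2
    print(Y[np.argmin(np.abs(f - 20))])           # 4 (the convolution term vanishes there)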
12. (Simon Haykin 1983) Let R12 (τ ) denote the cross-correlation function of two
energy signals g1 (t) and g2 (t).
(a) Using Fourier transforms, show that
 ∞
(m+n)
R12 (τ ) = (−1) n
h 1 (t)h ∗2 (t − τ ) dt, (1.83)
t=−∞

where

h 1 (t) = g1(m) (t)


h 2 (t) = g2(n) (t) (1.84)

denote the mth and nth derivatives of g1 (t) and g2 (t), respectively.
[Fig. 1.9: the pulses g1(t) and g2(t), both supported on (−3, 3) with amplitudes ±2]

(b) Use the above relation with m = 1 and n = 0 to evaluate and sketch the
cross-correlation function R12 (τ ) of the pulses g1 (t) and g2 (t) shown in
Fig. 1.9.

• Solution: We know that

R12 (t) = g1 (t)  g2∗ (−t)  G 1 ( f )G ∗2 ( f )


(m+n)
⇒ R12 (t)  (j 2π f )m G 1 ( f ) (j 2π f )n G ∗2 ( f )
(m+n)
⇒ R12 (t)  (j 2π f )m G 1 ( f ) (−1)n (−j 2π f )n G ∗2 ( f ). (1.85)

Let

h 2 (t) = g2(n) (t)  (j 2π f )n G 2 ( f )


⇒ h ∗2 (−t)  (−j 2π f )n G ∗2 ( f ). (1.86)

Similarly

h 1 (t) = g (m) (t)  (j 2π f )m G( f ). (1.87)

Using the fact that multiplication in the frequency domain is equivalent to con-
volution in the time domain, we get
(m+n)  
R12 (t) = (−1)n h 1 (t)  h ∗2 (−t)
 ∞
(m+n)
⇒ R12 (τ ) = (−1) n
h 1 (t)h ∗2 (t − τ ) dt. (1.88)
t=−∞

Hence proved. For the signal in Fig. 1.9, using m = 1 and n = 0, we get

h 1 (t) = 2δ(t + 3) − 2δ(t − 3)


h 2 (t) = g2 (t). (1.89)
[Fig. 1.10: sketches of g2(t), R12^(1)(τ), and the cross-correlation R12(τ)]

Thus
 ∞
(1)
R12 (τ ) = h 1 (t)h 2 (t − τ ) dt
t=−∞
= 2g2 (−τ − 3) − 2g2 (3 − τ ). (1.90)
R12^(1)(τ) and R12(τ) are plotted in Fig. 1.10.
13. Let R12 (τ ) denote the cross-correlation function of two energy signals g1 (t) and
g2 (t).
(a) Using Fourier transforms, show that
 ∞
(m+n)
R12 (τ ) = (−1) n
h 1 (t)h ∗2 (t − τ ) dt, (1.91)
t=−∞
[Fig. 1.11: the pulses g1(t) and g2(t), both supported on (0, 2)]

where

h 1 (t) = g1(m) (t)


h 2 (t) = g2(n) (t) (1.92)

denote the mth and nth derivatives of g1 (t) and g2 (t), respectively.
(b) Use the above relation with m = 1 and n = 0 to evaluate and sketch the
cross-correlation function R12 (τ ) of the pulses g1 (t) and g2 (t) shown in
Fig. 1.11.

• Solution: We know that

R12 (t) = g1 (t)  g2∗ (−t)  G 1 ( f )G ∗2 ( f )


(m+n)
⇒ R12 (t)  (j 2π f )m G 1 ( f ) (j 2π f )n G ∗2 ( f )
(m+n)
⇒ R12 (t)  (j 2π f )m G 1 ( f ) (−1)n (−j 2π f )n G ∗2 ( f ). (1.93)

Let

h 2 (t) = g2(n) (t)  (j 2π f )n G 2 ( f )


⇒ h ∗2 (−t)  (−j 2π f )n G ∗2 ( f ). (1.94)

Similarly

h 1 (t) = g (m) (t)  (j 2π f )m G( f ). (1.95)

Using the fact that multiplication in the frequency domain is equivalent to


convolution in the time domain, we get
(m+n)  
R12 (t) = (−1)n h 1 (t)  h ∗2 (−t)
 ∞
(m+n)
⇒ R12 (τ ) = (−1)n h 1 (t)h ∗2 (t − τ ) dt. (1.96)
t=−∞

Hence proved. For the signal in Fig. 1.11, using m = 1 and n = 0, we get

h 1 (t) = δ(t) + δ(t − 1) − 2δ(t − 2)


h 2 (t) = g2 (t). (1.97)

Thus
 ∞
(1)
R12 (τ ) = h 1 (t)h 2 (t − τ ) dt
t=−∞
 ∞
= [δ(t) + δ(t − 1) − 2δ(t − 2)] g2 (t − τ )
t=−∞
= g2 (−τ ) + g2 (1 − τ ) − 2g2 (2 − τ ). (1.98)
R12^(1)(τ) and R12(τ) are plotted in Fig. 1.12.

[Fig. 1.12: sketches of g2(t), R12^(1)(τ), and the cross-correlation R12(τ)]



14. (Simon Haykin 1983) Let x(t) and y(t) be the input and output signals of a linear
time-invariant filter. Using Rayleigh’s energy theorem, show that if the filter is
stable and the input signal x(t) has finite energy, then the output signal y(t) also
has finite energy.
• Solution: Let H ( f ) denote the Fourier transform of h(t). We have
 ∞
H( f ) = h(t)e−j 2π f t dt
t=−∞ 
 ∞ 

⇒ |H ( f )| =  h(t)e −j 2π f t
dt 
t=−∞
 ∞
 
≤ h(t)e−j 2π f t dt 
t=−∞
 ∞
⇒ |H ( f )| ≤ |h(t)| dt, (1.99)
t=−∞

where we have used Schwarz’s inequality. Since h(t) is stable


 ∞
|h(t)| dt < ∞
t=−∞
⇒ |H ( f )| < ∞. (1.100)

Let

A = max |H ( f )|. (1.101)

The energy of the output signal y(t) is


 ∞  ∞
|y(t)| dt =
2
|Y ( f )|2 d f (1.102)
t=−∞ f =−∞

where we have used the Rayleigh’s energy theorem. Using the fact that

Y ( f ) = H ( f )X ( f ) (1.103)

the energy of the output signal is


 ∞  ∞
|H ( f )|2 |X ( f )|2 d f ≤ A2 |X ( f )|2 d f
f =−∞ f =−∞
 ∞  ∞
⇒ |y(t)| dt ≤ A
2 2
|x(t)|2 dt. (1.104)
t=−∞ f =−∞

Since the input signal has finite energy, so does the output signal.

15. (Simon Haykin 1983) Prove the following properties of the complex exponential
Fourier series representation, for a real-valued periodic signal g p (t):
(a) If the periodic function g p (t) is even, that is, g p (−t) = g p (t), then the Fourier
coefficient cn is purely real and an even function of n.
(b) If g p (t) is odd, that is, g p (−t) = −g p (t), then cn is purely imaginary and
an odd function of n.
(c) If g p (t) has half-wave symmetry, that is, g p (t ± T0 /2) = −g p (t), where T0
is the period of g p (t), then cn consists of only odd order terms.

• Solution: The Fourier series for any periodic signal g p (t) is given by



g p (t) = cn e j 2πnt/T0
n=−∞


= a0 + 2 an cos(2πnt/T0 ) + bn sin(2πnt/T0 ), (1.105)
n=1

where

⎨ an − j bn for n > 0
cn = a0 for n = 0 (1.106)

a−n + j b−n for n < 0.

Note that an and bn are real-valued, since g p (t) is real-valued. To prove the
first part, we note that


g p (−t) = cn e−j 2πnt/T0 . (1.107)
n=−∞

Substituting n = −m in the above equation we get

−∞

g p (−t) = c−m e j 2πmt/T0
m=∞


= c−m e j 2πmt/T0 . (1.108)
m=−∞

Since g p (−t) = g p (t), comparing (1.105) and (1.108) we must have c−m = cm
(even function of m). Moreover, from (1.106), cm must be purely real.
To prove the second part, we note from (1.105) and (1.108) that c−m = −cm
(odd function of m) and moreover from (1.106) it is clear that cm must be
purely imaginary.
To prove the third part, we note that



g p (t ± T0 /2) = cn e j 2πn(t±T0 /2)/T0
n=−∞
∞
= cn (−1)n e j 2πnt/T0 . (1.109)
n=−∞

Since it is given that g p (t ± T0 /2) = −g p (t), comparing (1.105) and (1.109)


we conclude that cn = 0 for n = 2m.

16. (Simon Haykin 1983) A signal x(t) of finite energy is applied to a square-law
device whose output y(t) is given by

y(t) = x 2 (t). (1.110)

The spectrum of x(t) is limited to the frequency interval −W ≤ f ≤ W . Show


that the spectrum of y(t) is limited to −2W ≤ f ≤ 2W .
• Solution: We know that multiplication of signals in the time domain is equiv-
alent to convolution in the frequency domain. Let

x(t)  X ( f ). (1.111)

Therefore
 ∞
Y( f ) = X (α)X ( f − α) dα
α=−∞
 W
= X (α)X ( f − α) dα, (1.112)
α=−W

where we have used the fact that X (α) is bandlimited to −W ≤ α ≤ W .


Consequently, we must also have

−W ≤ f −α≤ W (1.113)

so that the product X (α)X ( f − α) is non-zero and contributes to the integral.


The plot of X ( f − α) as a function of α and f is shown in Fig. 1.13. We see
that the convolution integral is non-zero (for this particular example) only for
−2W ≤ f ≤ 2W .
The important point to note here is that the spectrum of Y ( f ) cannot exceed
−2W ≤ f ≤ 2W .
17. A signal x(t) has the Fourier transform shown in Fig. 1.14.
(a) Compute its Hilbert transform x̂(t).
(b) Compare the area under x̂(t) and the value of X̂ (0). Comment on your
answer.
[Fig. 1.13: X(α) and X(f − α) for f = −2W, f = 2W, and f = 0]

[Fig. 1.14: the Fourier transform X(f) of x(t), equal to A for 0 < f < 2B and −A for −2B < f < 0]

• Solution: We know that

A rect(t/T )  AT sinc( f T ). (1.114)

Using duality and substituting 2B for T we have


 
f
A rect  2 AB sinc(2Bt). (1.115)
2B

Now
   
f −B f +B
X ( f ) = A rect − A rect
2B 2B
 2 AB sinc(2Bt)e j 2π Bt − 2 AB sinc(2Bt)e−j 2π Bt
⇒ X ( f )  j 4 AB sinc(2Bt) sin(2π Bt). (1.116)

The Fourier transform of x̂(t) is


    
    X̂(f) = { −j A rect((f − B)/(2B)) − j A rect((f + B)/(2B))   for f ≠ 0
            { 0                                                  for f = 0.   (1.117)

From (1.115) we see that the inverse Fourier transform of X̂ ( f ) is

X̂ ( f )  x̂(t) = −j 2 AB sinc(2t B)e j 2π Bt − j 2 AB sinc(2t B)e−j 2π Bt


= −j 4 AB sinc(2t B) cos(2π Bt)
= −j 4 AB sinc(4t B). (1.118)

Note that
    ∫_{t=−∞}^{∞} x̂(t) dt = −j A ≠ X̂(0) = 0.                               (1.119)

This is because the property


 ∞
g(t) dt = G(0) (1.120)
t=−∞

is valid only when G( f ) is continuous at f = 0.

18. Consider the system shown in Fig. 1.15. Assume that the current in branch AB
is zero. The voltage at point A is v1 (t).
(a) Find out the time-domain expression that relates v1 (t) and vi (t).

[Fig. 1.15: an RC circuit with R = 1 Ω and C = 2 F driven by v_i(t) = 3 cos(6πt) volts; the capacitor voltage v1(t) at node A is differentiated to give v_o(t) = dv1(t)/dt]



(b) Using the Fourier transform, find out the relation between V1 ( f ) and Vi ( f ).
(c) Find the relation between Vo ( f ) and Vi ( f ).
(d) Compute the power dissipated when the output voltage vo (t) is applied
across a 1  resistor.

• Solution: We know that

vi (t) = i(t)R + v1 (t). (1.121)

However
dv1 (t)
i(t) = C . (1.122)
dt
Thus
dv1 (t)
vi (t) = RC + v1 (t). (1.123)
dt
Taking the Fourier transform of both sides, we get

Vi ( f ) = RCj 2π f V1 ( f ) + V1 ( f )
Vi ( f )
⇒ V1 ( f ) = . (1.124)
1 + j 2π f RC

It is given that

dv1 (t)
vo (t) =
dt
⇒ Vo ( f ) = j 2π f V1 ( f )
j 2π f Vi ( f )
= . (1.125)
1 + j 2π f RC

Now
3
Vi ( f ) = [δ( f − 3) + δ( f + 3)] . (1.126)
2
Therefore
3 j 6π 3 −j 6π
Vo ( f ) = δ( f − 3) + δ( f + 3) . (1.127)
2 1 + j 12π 2 1 − j 12π

Taking the inverse Fourier transform we get

vo (t) = c1 e j 6πt + c2 e−j 6πt . (1.128)


[Fig. 1.16: an RL circuit with a 0.5 H inductor in series with a resistor (R = 2 Ω from the solution), driven by v_i(t) = 5 sin(4πt) volts; the resistor voltage v1(t) at node A is differentiated to give v_o(t) = dv1(t)/dt]

Using Parseval’s power theorem we get the power of vo (t) as

    P = |c1|² + |c2|² = (2 × 9/4) · 36π²/(1 + 144π²) ≈ 9/8 W.              (1.129)

19. Consider the system shown in Fig. 1.16. Assume that the current in branch AB
is zero. The voltage at point A is v1 (t).
(a) Find out the time-domain expression that relates vi (t) and i(t).
(b) Using the Fourier transform, find out the relation between V1 ( f ) and Vi ( f ).
(c) Find the relation between Vo ( f ) and Vi ( f ).
(d) Compute vo (t).
(e) Compute the power dissipated when the output voltage vo (t) is applied
across a 1/2  resistor.

• Solution: We know that

vi (t) = v1 (t) + Ldi(t)/dt


= i(t)R + Ldi(t)/dt. (1.130)

Taking the Fourier transform of both sides we get

Vi ( f ) = R I ( f ) + j 2π f L I ( f )
Vi ( f )
⇒ I( f ) = . (1.131)
R + j 2π f L

Therefore

V1 ( f ) = R I ( f )
Vi ( f )
=
1 + j 2π f L/R
Vi ( f )
= , (1.132)
1 + j π f /2

where we have substituted L = 0.5 H and R = 2 . Now

vo (t) = dv1 (t)/dt


⇒ Vo ( f ) = j 2π f V1 ( f )
j 2π f
= Vi ( f ). (1.133)
1 + j π f /2

Since
5
Vi ( f ) = [δ( f − 2) − δ( f + 2)] (1.134)
2j

we get

2π (−2π)
Vo ( f ) = 5δ( f − 2) − 5δ( f + 2) . (1.135)
1 + jπ 1 − jπ

Taking the inverse Fourier transform we get

vo (t) = c1 e j 4πt + c2 e−j 4πt , (1.136)

where
10π
c1 =
1 + jπ
= Aej φ
10π
c2 =
1 − jπ
= Ae−j φ , (1.137)

where
10π
A= √
1 + π2
φ = − tan−1 (π). (1.138)

Substituting (1.137) and (1.138) in (1.136), we obtain

vo (t) = 2 A cos(4πt + φ). (1.139)

Using Parseval’s power theorem we get the power of vo (t) as



P = |c1 |2 + |c2 |2
= 2 A2
200π 2
= W. (1.140)
1 + π2

The power dissipated across the 1/2 Ω resistor is 2P.
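As an added numerical sketch, the power in (1.140) can be checked by constructing v_o(t) from the frequency response derived above and averaging v_o²(t) over time.

    import numpy as np

    R, L = 2.0, 0.5
    f0 = 2.0                                    # v_i(t) = 5 sin(4 pi t) is a 2 Hz tone
    t = np.arange(0.0, 10.0, 1e-4)

    H = lambda f: (1j * 2 * np.pi * f) / (1 + 1j * 2 * np.pi * f * L / R)
    c1 = (5 / 2j) * H(f0)                       # coefficient of exp(+j 4 pi t) in v_o(t)
    vo = 2 * np.real(c1 * np.exp(1j * 4 * np.pi * t))

    print(np.mean(vo**2))                       # mean-square value of v_o(t)
    print(200 * np.pi**2 / (1 + np.pi**2))      # 2 A^2 from (1.140)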

20. Consider a complex-valued signal g(t). Let g1 (t) = g ∗ (−t). Let g1(n) (t) denote
the nth derivative of g1 (t). Consider another signal g2 (t) = g (n) (t).
Is g2∗ (−t) = g1(n) (t)? Justify your answer using Fourier transforms.
• Solution: Let G( f ) denote the Fourier transform of g(t). Then we have

g1 (t) = g ∗ (−t)  G ∗ ( f ) = G 1 ( f ) (say). (1.141)

Therefore

g1(n) (t)  (j 2π f )n G 1 ( f )
⇒ g1(n) (t)  (j 2π f )n G ∗ ( f ). (1.142)

Next consider g2 (t). We have

g2 (t) = g (n) (t)  (j 2π f )n G( f ) = G 2 ( f ) (say)


∗ ∗ n ∗
⇒ g2 (−t)  G 2 ( f ) = (−j 2π f ) G ( f ). (1.143)

Comparing (1.142) and (1.143) we see that

g2∗ (−t) = g1(n) (t) when n is even


g2∗ (−t) = g1(n) (t) when n is odd. (1.144)

21. (Simon Haykin 1983) Consider N stages of the RC-lowpass filter as illustrated
in Fig. 1.17.
(a) Compute the magnitude response |Vo ( f )/Vi ( f )| of the overall cascade con-
nection. The current drawn by the buffers is zero.

[Fig. 1.17: N identical RC lowpass stages separated by unity-gain buffers; V_i(f) drives the first stage and V_o(f) is taken from the last]



(b) Show that the magnitude response in the vicinity of f = 0 approaches a


Gaussian function given by exp(−(N /2)4π 2 f 2 R 2 C 2 ), for large values of N .
• Solution: For a single stage, the input and output voltages are related as fol-
lows:

v1 (t) = i(t)R + v2 (t). (1.145)

However,

dv2 (t)
i(t) = C . (1.146)
dt
Therefore (1.145) can be rewritten as

dv2 (t)
v1 (t) = RC + v2 (t). (1.147)
dt
Taking the Fourier transform of both sides, we get

V1 ( f ) = (j 2π f RC + 1)V2 ( f )
V2 ( f ) 1
⇒ = . (1.148)
V1 ( f ) 1 + j 2π f RC

The magnitude response of the overall cascade connection is


 
 Vo ( f )  1
 
 V ( f )  = (1 + 4π 2 f 2 τ 2 ) N /2 , (1.149)
i 0

where τ0 = RC. Note that


 
 Vo ( f ) 

lim  =1 (1.150)
f →0 Vi ( f ) 

and for large values of N


 
 Vo ( f ) 
 
V (f) → 0 for 4π 2 f 2 τ02 > 1. (1.151)
i

Therefore, we are interested in the magnitude response in the vicinity of f = 0,


for large values of N .
We make use of the Maclaurin series expansion of a function f (x) about
x = 0 as follows (assuming that all the derivatives exist):

f (1) (0) f (2) (0) 2


f (x) = f (0) + x+ x + ··· (1.152)
1! 2!

where

(n) d n f (x) 
f (0) = . (1.153)
d x n x=0

In the present context

1
f (x) = (1.154)
(1 + 4π 2 τ02 x) N /2

with x = f 2 . We have

f (0) = 1

−N 4π 2 τ02 
(1)
f (0) = 
2 (1 + 4π 2 τ02 x) N /2+1 x=0
−N
= (4π 2 τ02 )
2 
   2 2 2 
(2) N N 4π τ0 
f (0) = +1 
2 2 (1 + 4π 2 τ02 x) N /2+2 
x=0
 2
N 2 2
≈ 4π τ0 , (1.155)
2

where we have assumed that for large N

N N
+1≈ . (1.156)
2 2
Generalizing (1.155), we can obtain the nth derivative of the Maclaurin series
as
   
N N N
f (n) (0) = (−1)n + 1 ... +n−1
2 2 2
 2 2 n 
4π τ0 

× 
(1 + 4π 2 τ02 x) N /2+n 
x=0
 n
N
≈ (−1)n 4π 2 τ02 , (1.157)
2

where we have used the fact that for any finite n  N

N N
+n−1≈ . (1.158)
2 2
Thus f (x) in (1.154) can be written as
[Fig. 1.18: (a) a periodic waveform g_p(t) of period T and amplitude 2, nonzero for |t| < 5T/16; (b) the filter magnitude response |H(f)|, with marked levels 4 and 3 and band edges at ±3/(2T) and ±5/(2T); (c) the filter phase response ∠H(f), spanning −π to π]

    f(x) = 1 − (y/1!) x + (y²/2!) x² − ···
         = exp(−x y),                                                      (1.159)

where

y = (N /2)4π 2 τ02 = (N /2)4π 2 R 2 C 2 . (1.160)

Finally, substituting for x we get the desired result which is Gaussian. Observe
that, in order to satisfy the condition n  N , the nth term of the series in
(1.159) must tend to zero for n  N . This can happen only when

xy < 1
⇒ (N /2)4π R C f 2 < 1
2 2 2

1
⇒ |f| < √ , (1.161)
π RC 2N

which is in the vicinity of f = 0 for large N .
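An added numerical sketch (with illustrative R, C, and N) compares the exact magnitude response (1.149) with the Gaussian approximation near f = 0.

    import numpy as np

    R, C, N = 1.0e3, 1.0e-6, 100                # illustrative; tau0 = RC = 1 ms
    tau0 = R * C
    f = np.linspace(0.0, 0.2 / (np.pi * tau0 * np.sqrt(2 * N)), 5)   # well inside (1.161)

    exact = (1 + (2 * np.pi * f * tau0)**2) ** (-N / 2.0)
    gauss = np.exp(-(N / 2.0) * (2 * np.pi * f * tau0)**2)
    print(np.max(np.abs(exact - gauss)))        # small: the responses agree near f = 0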

22. Consider the periodic waveform g p (t) in Fig. 1.18a.


(a) Compute the real Fourier series representation of g p (t). Give the expression
for the nth term in the Fourier series.
(b) Hence compute the Fourier transform of g p (t).

(c) Express the autocorrelation of g p (t) (Rg p (τ )) in terms of the autocorrela-


tion of the generating function g(t) (Rg (τ )). There is no need to derive the
expression.
(d) Sketch Rg (τ ). Label all the important points.
Assume that g(t) is defined from −T /2 to T /2.
(e) Sketch Rg p (τ ) for 0 < τ < T . Label all the important points.
(f) The pulse width of g p (t) in Fig. 1.18a is 5T /8. What is the maximum possible
pulse width, so that no aliasing occurs when Rg p (τ ) is expressed in terms of
Rg (τ ).
(g) If g p (t) is passed through a filter having magnitude and phase response as
shown in Fig. 1.18b, c, determine the filter output.

• Solution: The Fourier series representation for a periodic signal is given by




g p (t) = a0 + 2 an cos(2πnt/T ) + bn sin(2πnt/T ). (1.162)
n=1

For the given problem g p (t) = g p (−t), therefore bn = 0. We have


 T /2
1
a0 = g p (t) dt
T t=−T /2
 5T /16
1
= 2 dt
T t=−5T /16
5
= . (1.163)
4
Similarly,
    a_n = (1/T) ∫_{t=−T/2}^{T/2} g_p(t) cos(2π n t/T) dt
        = (1/T) ∫_{t=−5T/16}^{5T/16} 2 cos(2π n t/T) dt
        = (2/(nπ)) sin(5nπ/8)   for n ≥ 1.                                 (1.164)

The Fourier transform of g p (t) is given by



g p (t)  a0 δ( f ) + an [δ( f − n/T ) + δ( f + n/T )] , (1.165)
n=1

where an for n ≥ 0 is given by (1.163) and (1.164).


The generating function g(t) is defined as follows:
[Fig. 1.19: (a) g_p(t); (b) R_g(τ); (c) superposition of R_g(τ)/T and R_g(τ − T)/T; (d) resultant R_{g_p}(τ) for 0 < τ < T]


g p (t) for −T /2 < t < T /2
g(t) = (1.166)
0 elsewhere.

We know that the autocorrelation of g p (t) (Rg p (τ )) is given by


1 
Rg p (τ ) = Rg (τ + mT ), (1.167)
T m=−∞

where Rg (τ ) is the autocorrelation of the generating function g(t). Rg (τ ) is


plotted in Fig. 1.19b and Rg p (τ ) is plotted in Fig. 1.19d.
The maximum pulse width for no aliasing is T /2.

Clearly, the filter output is a dc component plus the first and second harmonics.
The gain of the dc component is 4 and the phase shift is 0. Hence, the output
dc signal is 4a0 = 5.
[Fig. 1.20: one period of g_p(t): an impulse of weight 2 at t = 0 and an impulse of weight 1 at t = T/2, repeating with period T]

Similarly, the gain of the first harmonic is also 4 and the phase shift is −2π/5.
Finally, the gain of the second harmonic is 3 and the phase shift is −4π/5.
Thus, the filter output is

    x(t) = 5 + (16/π) sin(5π/8) cos(2πt/T − 2π/5)
             + (6/π) sin(10π/8) cos(4πt/T − 4π/5).                         (1.168)
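The coefficients used in (1.168) can be verified numerically; the snippet below is an added sketch that computes a0, a1, a2 for the pulse train of Fig. 1.18a with T = 1.

    import numpy as np

    T = 1.0
    t = np.arange(-T / 2, T / 2, 1e-5)
    gp = 2.0 * (np.abs(t) < 5 * T / 16)

    a = [np.mean(gp * np.cos(2 * np.pi * n * t / T)) for n in range(3)]
    print(a[0], 5.0 / 4)                                  # dc: 4 * a0 = 5 at the filter output
    print(a[1], (2 / np.pi) * np.sin(5 * np.pi / 8))      # first harmonic coefficient
    print(a[2], (2 / (2 * np.pi)) * np.sin(10 * np.pi / 8))   # second harmonic coefficient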
23. Consider the periodic waveform g p (t) given in Fig. 1.20, where T denotes the
period.
(a) Compute the real Fourier series representation of g p (t). Give the expression
for the coefficient of the nth term.
(b) Compute the Fourier transform of g p (t).

• Solution: The Fourier series representation for a periodic signal is given by




g p (t) = a0 + 2 an cos(2πnt/T ) + bn sin(2πnt/T ). (1.169)
n=1

For the given problem g p (t) = g p (−t), therefore bn = 0. We have


 T−
1
a0 = g p (t) dt
T t=0−
 T−
1
= [2δ(t) + δ(t − T /2)] dt
T t=0−
3
= . (1.170)
T

Similarly,
 T−
1
an = g p (t) cos(2πnt/T ) dt
T t=0−
 T−
1
= [2δ(t) + δ(t − T /2)] cos(2πnt/T ) dt
T t=0−
1
= [2 + cos(nπ)] for n ≥ 1. (1.171)
T
The Fourier transform of g p (t) is given by



g p (t)  a0 δ( f ) + an [δ( f − n/T ) + δ( f + n/T )] , (1.172)
n=1

where an for n ≥ 0 is given by (1.170) and (1.171).

24. If g(t) has the Fourier transform G( f ), compute the inverse Fourier transform
of g(a f − f 0 ) for
(a) a = 0,
(b) a = 0.

• Solution: From the duality property of the Fourier transform, we know that

g( f )  G(−t). (1.173)

In other words
 ∞
G(−t) = g( f ) e j 2π f t d f. (1.174)
f =−∞

Let G 1 (t) be the required inverse Fourier transform. Then


 ∞
G 1 (t) = g(a f − f 0 ) e j 2π f t d f. (1.175)
f =−∞

Let

a f − f0 = x
⇒ a d f = d x. (1.176)

Assuming a < 0, we have

|a| d f = −d x. (1.177)

Therefore, (1.175) becomes


 ∞
1
G 1 (t) = g(x) e j 2π(x+ f0 )t/a d x
|a| x=−∞
1
= G(−t/a)e j 2π f0 t/a . (1.178)
|a|

The above result is also valid for a > 0.


When a = 0, the function reduces to a constant g(− f 0 ) whose inverse Fourier
transform is
 ∞
G 1 (t) = g(− f 0 ) e j 2π f t d f
f =−∞
= g(− f 0 )δ(t). (1.179)

The alternate solution is as follows:

g(t)  G( f )
1
⇒ h(t) = g(at)  H ( f ) = G( f /a)
|a|
1
⇒ h(t − t0 /a) = g(at − t0 )  H ( f )e−j 2π f t0 /a = G( f /a)e−j 2π f t0 /a .
|a|
(1.180)

Since

p( f )  P(−t)
1
⇒ g(a f − f 0 )  G(−t/a)ej 2πt f0 /a . (1.181)
|a|

25. (Simon Haykin 1983) Let Rg (t) denote the autocorrelation of an energy signal
g(t).
(a) Using Fourier transforms, show that
 ∞
Rg(m+n) (τ ) = (−1) n
g2 (t)g1∗ (t − τ ) dt, (1.182)
t=−∞

where the asterisk denotes the complex conjugate and

g1 (t) = g (n) (t),


g2 (t) = g (m) (t), (1.183)

where g (m) (t) denotes the mth derivative of g(t).


[Fig. 1.21: the pulse g(t) of amplitude A, supported on (0, 4T) with transitions at 2T and 3T]

(b) Using the above relation, evaluate and sketch the autocorrelation of the signal
in Fig. 1.21. You can use m = 1 and n = 0.

• Solution: We know that

Rg (t) = g(t)  g ∗ (−t)  G( f )G ∗ ( f )


⇒ R (m+n) (t)  (j 2π f )m G( f ) (j 2π f )n G ∗ ( f )
⇒ R (m+n) (t)  (j 2π f )m G( f ) (−1)n (−j 2π f )n G ∗ ( f ), (1.184)

where “” denotes convolution. Let

g1 (t) = g (n) (t)  (j 2π f )n G( f )


⇒ g1∗ (−t)  (−j 2π f )n G ∗ ( f ). (1.185)

Similarly let

g2 (t) = g (m) (t)  (j 2π f )m G( f ). (1.186)

Thus, using the fact that multiplication in the frequency domain is equivalent
to convolution in the time domain, we get
 
Rg(m+n) (t) = (−1)n g2 (t)  g1∗ (−t)
 ∞
⇒ Rg(m+n) (τ ) = (−1)n g2 (t)g1∗ (t − τ ) dt. (1.187)
t=−∞

Hence proved. For the signal in Fig. 1.21, using m = 1 and n = 0 we get

g1 (t) = g(t)
g2 (t) = g (1) (t)
= A (δ(t) − 2δ(t − 2T ) + 2δ(t − 3T ) − δ(t − 4T )) . (1.188)
[Fig. 1.22: sketches of R_g^(1)(t) and R_g(t)]

Hence

Rg(1) (t) = g2 (t)  g ∗ (−t). (1.189)

Using the fact that for any signal x(t)

x(t)  δ(t) = x(t), (1.190)

we get

Rg(1) (t) = A (g(−t) − 2g(−t + 2T ) + 2g(−t + 3T ) − g(−t + 4T )) .


(1.191)

Rg(1) (t) and Rg (t) are plotted in Fig. 1.22.

26. Let Rg (t) denote the autocorrelation of an energy signal g(t) shown in Fig. 1.23.
(a) Express the first derivative of Rg (t) in terms of the derivative of g(t).
(b) Draw the first derivative of Rg (t). Show all the steps.
[Fig. 1.23: an energy signal g(t), a staircase supported on (−1, 2) with a step at t = 0]

(c) Hence sketch Rg (t).

• Solution: We know that

Rg (t) = g(t)  g(−t)  G( f )G ∗ ( f )


⇒ d Rg (t)/dt  (j 2π f G( f )) G ∗ ( f )
⇒ d Rg (t)/dt = dg(t)/dt  g(−t), (1.192)

where “” denotes convolution. Now

dg(t)/dt = δ(t + 1) + δ(t) − 2δ(t − 2). (1.193)

Therefore, from (1.192), we get

d Rg (t)/dt = s(t + 1) + s(t) − 2s(t − 2), (1.194)

where

s(t) = g(−t). (1.195)

The function d Rg (t)/dt is plotted in Fig. 1.24. Finally, the autocorrelation of


g(t) is presented in Fig. 1.25.
27. (Simon Haykin 1983) Let R12 (τ ) and R21 (τ ) denote the cross-correlation func-
tions of two energy signals g1 (t) and g2 (t)
(a) Show that the total area under R12 (τ ) is defined by
    ∫_{τ=−∞}^{∞} R12(τ) dτ = [∫_{t=−∞}^{∞} g1(t) dt] [∫_{t=−∞}^{∞} g2(t) dt]*.   (1.196)

(b) Show that



R12 (τ ) = R21 (−τ ). (1.197)

• Solution: We know that


[Fig. 1.24: obtaining the first derivative of the autocorrelation of g(t) in Fig. 1.23: sketches of s(t) = g(−t), s(t + 1), s(t − 2), and dR_g(t)/dt]

 ∞
R12 (τ ) = g1 (t)g2∗ (t − τ ) dt
t=−∞
= g1 (t)  g2∗ (−t)
 G 1 ( f )G ∗2 ( f ), (1.198)

where “” denotes convolution and we have used the fact that convolution in
the time domain is equivalent to multiplication in the frequency domain. Thus,
we have
[Fig. 1.25: obtaining the autocorrelation of g(t) in Fig. 1.23: dR_g(t)/dt and the resulting R_g(t)]

 ∞
R12 (τ ) exp (−j 2π f τ ) dτ = G 1 ( f )G ∗2 ( f )
τ =−∞
 ∞
⇒ R12 (τ ) = G 1 (0)G ∗2 (0)
τ =−∞
 ∞
= g1 (t) dt
t=−∞
 ∞ ∗
g2 (t) dt . (1.199)
t=−∞

Thus proved. To prove the second part we note that


 ∞
R21 (τ ) = g2 (t)g1∗ (t − τ ) dt
t=−∞
 ∞

⇒ R21 (τ ) = g2∗ (t)g1 (t − τ ) dt. (1.200)
t=−∞

Now we substitute t − τ = x to get


 ∞

R21 (τ ) = g2∗ (x + τ )g1 (x) d x.
x=−∞
 ∞

⇒ R21 (−τ ) = g2∗ (x − τ )g1 (x) d x.
x=−∞
= R12 (τ ). (1.201)

Thus proved.

28. (Simon Haykin 1983) Consider two periodic signals g p1 (t) and g p2 (t), both of
period T0 . Show that the cross-correlation function R12 (τ ) satisfies the Fourier
transform pair:

1 
R12 (τ )  G 1 (n/T0 )G ∗2 (n/T0 )δ( f − n/T0 ), (1.202)
T02 n=−∞

where G 1 (n/T0 ) and G 2 (n/T0 ) are the Fourier transforms of the generating
functions for the periodic functions g p1 (t) and g p2 (t), respectively.
• Solution: For periodic signals we know that
 T0 /2
1
R12 (τ ) = g p1 (t)g ∗p2 (t − τ ) dt. (1.203)
T0 t=−T0 /2

Define

g p1 (t) for −T0 /2 ≤ t < T0 /2
g1 (t) =
0 elsewhere

g p2 (t) for −T0 /2 ≤ t < T0 /2
g2 (t) = (1.204)
0 elsewhere.

Note that g1 (t) and g2 (t) are the generating functions of g p1 (t) and g p2 (t),
respectively. Hence, we have the following relationships:



g p1 (t) = g1 (t − mT0 )
m=−∞


g p2 (t) = g2 (t − mT0 ). (1.205)
m=−∞

Using the first equation of (1.204) and the second equation of (1.205), we get
 T0 /2 ∞

1
R12 (τ ) = g1 (t) g2∗ (t − τ − mT0 ) dt
T0 t=−T0 /2 m=−∞
 ∞ ∞

1
⇒ R12 (τ ) = g1 (t) g2∗ (t − τ − mT0 ) dt. (1.206)
T0 t=−∞ m=−∞

Interchanging the order of the summation and the integration, we get



1 
R12 (τ ) = Rg g (τ + mT0 ). (1.207)
T0 m=−∞ 1 2

Note that
(a) R12 (τ ) is periodic with period T0 .
(b) The span of Rg1 g2 (τ ) may exceed T0 , hence the summation in (1.207) may
result in aliasing (overlapping). Therefore

1
R12 (τ ) = Rg g (τ ) for −T0 /2 ≤ τ < T0 /2. (1.208)
T0 1 2

Since R12 (τ ) is periodic with period T0 , it can be expressed as a complex Fourier


series as follows:


R12 (τ ) = cn exp (j 2πnτ /T0 ) , (1.209)
n=−∞

where
 T0 /2
1
cn = R12 (τ ) exp (−j 2πnτ /T0 ) dτ . (1.210)
T0 τ =−T0 /2

Substituting for R12 (τ ) in the above equation, we get


 T0 /2 ∞

1
cn = 2 Rg1 g2 (τ + mT0 ) exp (−j 2πnτ /T0 ) dτ . (1.211)
T0 τ =−T0 /2 m=−∞

Substituting for Rg1 g2 (τ ) in the above equation, we get


 T0 /2 ∞ 
 ∞
1
cn = g1 (t)g2∗ (t − τ − mT0 ) dt
T02 τ =−T0 /2 m=−∞ t=−∞

exp (−j 2πnτ /T0 ) dτ . (1.212)

Substituting

t − τ − mT0 = x (1.213)

and interchanging the order of the integrals, we get


 ∞ ∞ 
 t+T0 /2−mT0
1
cn = g1 (t) dt g2∗ (x)
T02 t=−∞ m=−∞ x=t−T0 /2−mT0

exp (−j 2πn(t − x − mT0 )/T0 ) d x. (1.214)

Combining the summation and the second integral and rearranging terms, we
get
 ∞
1
cn = 2 g1 (t) exp (−j 2πnt/T0 ) dt
T t=−∞
 0∞
g2∗ (x) exp (j 2πnx/T0 ) d x
x=−∞
= G 1 (n/T0 )G ∗2 (n/T0 ). (1.215)

Hence

1 
R12 (τ ) = G 1 (n/T0 )G ∗2 (n/T0 ) exp (j 2πnτ /T0 )
T02 n=−∞

1 
 G 1 (n/T0 )G ∗2 (n/T0 )δ( f − n/T0 ). (1.216)
T02 n=−∞

Thus proved.
29. Compute the Fourier transform of


    the Hilbert transform of d x(t)/dt,                                    (1.217)
where the “hat” denotes the Hilbert transform. Assume that the Fourier transform
of x(t) is X ( f ).
• Solution: We have
[Fig. 1.26: the pulse g(t), a unit-height trapezoid supported on (−2, 2) and flat over (−1, 1)]

    d x(t)/dt ⇌ j 2π f X(f)

    ⇒ Hilbert transform of d x(t)/dt ⇌ −j sgn(f) (j 2π f) X(f)
                                      = 2π |f| X(f).                       (1.218)

30. Prove the following using Schwarz’s inequality:


    |G(f)| ≤ ∫_{t=−∞}^{∞} |g(t)| dt

    |j 2π f G(f)| ≤ ∫_{t=−∞}^{∞} |dg(t)/dt| dt

    |(j 2π f)² G(f)| ≤ ∫_{t=−∞}^{∞} |d²g(t)/dt²| dt.                       (1.219)

Schwarz’s inequality states that


  
    |∫_{x=x1}^{x2} f(x) dx| ≤ ∫_{x=x1}^{x2} |f(x)| dx.                     (1.220)

Evaluate the three bounds on |G( f )| for the pulse shown in Fig. 1.26.
• Solution: We start from the Fourier transform relation:
 ∞
G( f ) = g(t)e−j 2π f t dt. (1.221)
f =−∞

Invoking Schwarz’s inequality, we have


 ∞  
|G( f )| ≤ g(t)e−j 2π f t  dt
f =−∞
 ∞
⇒ |G( f )| ≤ |g(t)| dt. (1.222)
f =−∞

Similarly, we have
 ∞
dg(t) −j 2π f t
j 2π f G( f ) = e dt
f =−∞ dt
 ∞  
 dg(t) 
⇒ |j 2π f G( f )| ≤  
 dt  dt (1.223)
f =−∞

and
 ∞
d 2 g(t) −j 2π f t
(j 2π f )2 G( f ) = 2
e dt
f =−∞ dt
 ∞  2 
   d g(t) 
⇒ (j 2π f )2 G( f ) ≤  
 dt 2  dt. (1.224)
f =−∞

The various derivatives of g(t) are shown in Fig. 1.27. For the given pulse
 2
|g(t)| dt = 3
t=−2
 2  
 dg(t) 
 
 dt  dt = 2
t=−2
 2+  2 
 d g(t) 
 

 dt 2  dt = 4. (1.225)
t=−2

[Fig. 1.27: sketches of g(t), dg(t)/dt, and d²g(t)/dt²]

[Fig. 1.28: |G(f)| plotted against the three bounds 3, 2/(2π|f|), and 4/(2πf)² for −2 ≤ f ≤ 2]

Therefore, the three bounds are

|G( f )| ≤ 3
2
|G( f )| ≤
2π| f |
4
|G( f )| ≤ . (1.226)
(2π f )2

These bounds on |G( f )| are shown plotted in Fig. 1.28. It can be shown that

    G(f) = [cos(2π f) − cos(4π f)]/(2π² f²).                               (1.227)
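An added numerical sketch confirming that |G(f)| from (1.227) respects the three bounds in (1.226):

    import numpy as np

    f = np.linspace(0.01, 2.0, 500)
    G = (np.cos(2 * np.pi * f) - np.cos(4 * np.pi * f)) / (2 * np.pi**2 * f**2)

    print(np.max(np.abs(G)))                              # stays below 3
    print(np.max(np.abs(G) - 2 / (2 * np.pi * f)))        # negative: below the second bound
    print(np.max(np.abs(G) - 4 / (2 * np.pi * f)**2))     # negative: below the third bound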

31. (Simon Haykin 1983) A signal that is popularly used in communication systems
is the raised cosine pulse. Consider a periodic sequence of these pulses as shown
in Fig. 1.29. Compute the first three terms (n = 0, 1, 2) of the (real) Fourier
series expansion.
• Solution: The (real) Fourier series expansion of a periodic signal g p (t) can be
written as follows:


g p (t) = a0 + 2 an cos(2πnt/T0 ) + bn sin(2πnt/T0 ). (1.228)
n=1
[Fig. 1.29: the periodic raised cosine pulse train g_p(t); each pulse is 1 + cos(2πt) for |t| ≤ 1/2, repeated with period 2]

In the given problem T0 = 2. Moreover, since the given function is even,


bn = 0. Thus, we have

1 1
a0 = g p (t) dt
2 t=−1

1 1/2
= (1 + cos(2πt)) dt
2 t=−1/2
1
= . (1.229)
2
Similarly

    a_n = (1/2) ∫_{t=−1}^{1} g_p(t) cos(2π n t/2) dt
        = (1/2) ∫_{t=−1/2}^{1/2} (1 + cos(2πt)) cos(nπt) dt
        = (1/(nπ)) sin(nπ/2) + (1/2) [sin((n − 2)π/2)/((n − 2)π) + sin((n + 2)π/2)/((n + 2)π)].   (1.230)

Substituting n = 1, 2 in (1.230), we have

    a1 = 4/(3π)
    a2 = 1/4.                                                              (1.231)
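An added numerical check of the coefficients a0 = 1/2, a1 = 4/(3π), and a2 = 1/4:

    import numpy as np

    t = np.arange(-1.0, 1.0, 1e-5)
    gp = np.where(np.abs(t) <= 0.5, 1 + np.cos(2 * np.pi * t), 0.0)

    for n, target in ((0, 0.5), (1, 4 / (3 * np.pi)), (2, 0.25)):
        an = np.mean(gp * np.cos(np.pi * n * t))   # (1/T0) integral over one period, T0 = 2
        print(n, an, target)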

32. (Simon Haykin 1983) Any function g(t) can be expressed as the sum of ge (t)
and go (t), where

1
ge (t) = [g(t) + g(−t)]
2
1
go (t) = [g(t) − g(−t)] . (1.232)
2
(a) Sketch the even and odd parts of
 
    g(t) = A rect(t/T − 1/2).                                              (1.233)

(b) Find the Fourier transform of these two parts.

• Solution: Note that


   
    A rect(t/T − 1/2) = A rect((t − T/2)/T),                               (1.234)

which is nothing but rect (t/T ) shifted by T /2. This is illustrated in Fig. 1.30,
along with the even and odd parts of g(t). We know that

[Fig. 1.30: g(t) and the corresponding even and odd parts g_e(t) and g_o(t)]

    A rect(t/T) ⇌ A T sinc(f T).                                           (1.235)

Therefore by inspection, we have

A
ge (t)  (2T ) sinc ( f (2T )) = AT sinc (2 f T ) . (1.236)
2
Similarly

A A
go (t)  T sinc ( f T ) e−j 2π f T /2 − T sinc ( f T ) ej 2π f T /2
2 2
= −j AT sinc ( f T ) sin(π f T ). (1.237)

33. Let g(t) = e^{−π t²}. Let

    y(t) = g(at) ⋆ δ(t − t0),                                              (1.238)

where “⋆” denotes convolution, and a and t0 are real-valued constants.


(a) Write down the Fourier transform of g(t). No derivation is required.
(b) Derive the Fourier transform of y(t).
(c) Derive the Fourier transform of the autocorrelation of y(t).
(d) Find the autocorrelation of y(t).
Show all the steps. Use Fourier transform properties. Derivation of any Fourier
transform property is not required.
• Solution: We know that

    g(t) = e^{−π t²} ⇌ e^{−π f²} = G(f).                                   (1.239)

Now

y(t) = g(at)  δ(t − t0 )


 ∞
= δ(τ − t0 )g[a(t − τ )] dτ
τ =−∞
= g[a(t − t0 )]
         = e^{−π a² (t − t0)²}.                                            (1.240)

Using the Fourier transform properties of time scaling and time shifting, we
have

1
x1 (t) = g(at)  G( f /a) = X 1 ( f )
|a|
⇒ x1 (t − t0 ) = g[a(t − t0 )] = y(t)  X 1 ( f )e−j 2π f t0
1
= G( f /a)e−j 2π f t0
|a|
= Y( f )
1 −π f 2 /a 2 −j 2π f t0
= e e . (1.241)
|a|

The Fourier transform of the autocorrelation of y(t) is

    |Y(f)|² = (1/a²) e^{−2π f²/a²} = H(f) (say).                           (1.242)

The autocorrelation of y(t) is the inverse Fourier transform of H ( f ). There-


fore

    H(f) = (1/(|a|√2)) (√2/|a|) e^{−π (f√2/a)²}
         ⇌ (1/(|a|√2)) g(ct),                                              (1.243)

where
    c = a/√2.                                                              (1.244)

Thus, the autocorrelation of y(t) is

    h(t) = (1/(|a|√2)) e^{−π a² t²/2}.                                     (1.245)
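An added numerical sketch (with illustrative a and t0) comparing (1.245) with a discrete autocorrelation of y(t):

    import numpy as np

    a, t0 = 1.5, 0.4                                      # illustrative
    dt = 0.01
    t = np.arange(-8, 8, dt)
    y = np.exp(-np.pi * a**2 * (t - t0)**2)

    R_num = np.correlate(y, y, mode="same") * dt
    R_form = (1.0 / (abs(a) * np.sqrt(2))) * np.exp(-np.pi * a**2 * t**2 / 2)
    print(np.max(np.abs(R_num - R_form)))                 # close to zero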

34. Consider the periodic pulse train in Fig. 1.31a. Here T ≤ T0 /2.
(a) Using the Fourier transform property of differentiation in the time domain,
compute the complex Fourier series representation of g p (t) in Fig. 1.31a.
Hence, also find the Fourier transform of g p (t).
(b) If g p (t) is passed through a filter having the magnitude and phase response
as depicted in Fig. 1.31b, c, find the output y(t).
• Solution: Consider the pulse:

g p (t) for −T0 /2 < t < T0 /2
g(t) = (1.246)
0 elsewhere.
[Fig. 1.31: (a) the periodic triangular pulse train g_p(t) of height A and period T0; (b) the filter magnitude response |H(f)|, peaking at 1 for f = 0 and vanishing beyond |f| = 3/(2T0); (c) the filter phase response ∠H(f), spanning ∓π/2 over the same band]

Clearly

        d²g(t)/dt² = (A/T)(δ(t + T) − 2δ(t) + δ(t − T)).          (1.247)

Hence

        G(f) = −(A/(4π²f²T))(exp(j2πfT) − 2 + exp(−j2πfT))
             = (AT/(π²f²T²)) sin²(πfT)
             = AT sinc²(fT).                                      (1.248)

We also know that

        g_p(t) = Σ_{n=−∞}^{∞} c_n exp(j2πnt/T_0),                 (1.249)

where

        c_n = (1/T_0) ∫_{t=−∞}^{∞} g(t) exp(−j2πnt/T_0) dt
            = (1/T_0) G(n/T_0).                                   (1.250)

Substituting for G(n/T_0) we have

        g_p(t) = (AT/T_0) Σ_{n=−∞}^{∞} sinc²(nT/T_0) exp(j2πnt/T_0).   (1.251)

Using the fact that

        exp(j2πf_c t) ⇌ δ(f − f_c)                                (1.252)

the Fourier transform of g_p(t) is given by

        G_p(f) = (AT/T_0) Σ_{n=−∞}^{∞} sinc²(nT/T_0) δ(f − n/T_0).     (1.253)

The filter output will have the frequency components 0, ±1/T_0. From Fig. 1.31b, c, we note that

        H(0) = 1
        H(1/T_0) = (1/3) e^{−jπ/3}
        H(−1/T_0) = (1/3) e^{jπ/3}.                               (1.254)

Hence

        y(t) = AT/T_0 + (AT/(3T_0)) sinc²(T/T_0) e^{j(2πt/T_0 − π/3)}
                      + (AT/(3T_0)) sinc²(T/T_0) e^{j(−2πt/T_0 + π/3)}
             = AT/T_0 + (2AT/(3T_0)) sinc²(T/T_0) cos(2πt/T_0 − π/3).   (1.255)
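The Fourier coefficients in (1.251) can be checked numerically. The sketch below (an added illustration) assumes the generating pulse is a triangle of height A and base 2T, which is consistent with the second derivative in (1.247); A, T, T_0 are arbitrary test values.

import numpy as np

A, T, T0 = 2.0, 0.3, 1.0
t = np.linspace(-T0 / 2, T0 / 2, 400001)
gp = A * np.clip(1 - np.abs(t) / T, 0, None)         # triangular pulse

for n in range(4):
    cn = np.trapz(gp * np.exp(-2j * np.pi * n * t / T0), t) / T0
    cn_theory = (A * T / T0) * np.sinc(n * T / T0) ** 2
    print(n, cn.real, cn_theory)                      # should agree closely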

35. (Simon Haykin 1983) Prove the following Hilbert transform:

        sin(t)/t  −−HT−→  (1 − cos(t))/t.                         (1.256)

    • Solution: We start with the familiar Fourier transform pair:

        A rect(t/T) ⇌ AT sinc(fT).                                (1.257)

      Applying duality and replacing T by B, we get

        A rect(f/B) ⇌ AB sinc(tB).                                (1.258)

      Substituting A = B = 1, we get

        rect(f) ⇌ sinc(t) = sin(πt)/(πt).                         (1.259)

      Time scaling by 1/π, we get

        π rect(πf) ⇌ sin(t)/t = g(t) (say).                       (1.260)

      Now, the Hilbert transform of sin(t)/t is best computed in the frequency domain. Thus

        Ĝ(f) = −j sgn(f) π rect(πf)
        ⇒ ĝ(t) = ∫_{f=−∞}^{∞} Ĝ(f) exp(j2πft) df
        ⇒ ĝ(t) = −jπ ∫_{f=0}^{1/(2π)} exp(j2πft) df + jπ ∫_{f=−1/(2π)}^{0} exp(j2πft) df
               = −jπ (exp(jt) − 1)/(j2πt) + jπ (1 − exp(−jt))/(j2πt)
               = (1 − cos(t))/t.                                  (1.261)
Thus proved.
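This pair can also be checked with the FFT-based Hilbert transformer in SciPy, whose scipy.signal.hilbert returns the analytic signal g + jĝ (same sign convention as above). The Python sketch below is an added illustration; the grid sizes are arbitrary and the agreement degrades near the edges of the finite window.

import numpy as np
from scipy.signal import hilbert

t = np.linspace(-200, 200, 2**16, endpoint=False)
g = np.sinc(t / np.pi)                       # sin(t)/t, since np.sinc(x) = sin(pi x)/(pi x)

g_hat = np.imag(hilbert(g))                  # numerical Hilbert transform
g_hat_theory = np.where(np.abs(t) > 1e-12, (1 - np.cos(t)) / t, 0.0)

mid = np.abs(t) < 50                         # stay away from window edges
print(np.max(np.abs(g_hat[mid] - g_hat_theory[mid])))   # small residual error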
36. (Simon Haykin 1983) Prove the following Hilbert transform:

        1/(1 + t²)  −−HT−→  t/(1 + t²).                           (1.262)

    • Solution: We start with the Fourier transform pair:

        (1/2) exp(−|t|) ⇌ 1/(1 + (2πf)²).                         (1.263)

      Time scaling by 2π, we get

        (1/2) exp(−|2πt|) ⇌ 1/((2π)(1 + f²)).                     (1.264)

      Multiplying both sides by 2π and applying duality, we get

        π exp(−|2πf|) ⇌ 1/(1 + t²) = g(t) (say).                  (1.265)

      Now the Hilbert transform of g(t) in the above equation is best obtained in the frequency domain. Thus

        ĝ(t) = ∫_{f=−∞}^{∞} −j sgn(f) π exp(−|2πf|) exp(j2πft) df
             = −jπ ∫_{f=0}^{∞} exp(−2πf(1 − jt)) df + jπ ∫_{f=−∞}^{0} exp(2πf(1 + jt)) df
             = −jπ/(2π(1 − jt)) + jπ/(2π(1 + jt))
             = t/(1 + t²).                                        (1.266)

Thus proved.
37. (Simon Haykin 1983) Prove the following Hilbert transform:

        rect(t)  −−HT−→  −(1/π) ln|(t − 1/2)/(t + 1/2)|.          (1.267)

    • Solution: This problem can be easily solved in the time domain. From the basic definition of the Hilbert transform, we have

        ĝ(t) = (1/π) ∫_{τ=−∞}^{∞} g(τ)/(t − τ) dτ
             = (1/π) ∫_{τ=−1/2}^{1/2} 1/(t − τ) dτ
             = −(1/π) ln|(t − 1/2)/(t + 1/2)|.                    (1.268)

Thus proved.
38. (Simon Haykin 1983) Let ĝ(t) denote the Hilbert transform of a real-valued
energy signal g(t). Show that the cross-correlation functions of g(t) and ĝ(t) are
given by

Rgĝ (τ ) = − R̂g (τ )
Rĝg (τ ) = R̂g (τ ), (1.269)

where R̂g (τ ) denotes the Hilbert transform of Rg (τ ).


    • Solution: We start from the basic definition of the cross-correlation function:

        R_{g1 g2}(τ) = ∫_{t=−∞}^{∞} g_1(t) g_2*(t − τ) dt
                     = g_1(τ) ∗ g_2*(−τ)
                     = ∫_{f=−∞}^{∞} G_1(f) G_2*(f) exp(j2πfτ) df.      (1.270)

      Here

        g_1(t) = g(t) ⇌ G(f)
        g_2(t) = ĝ(t) ⇌ −j sgn(f) G(f)
        ⇒ g_2*(−t) = ĝ*(−t) = ĝ(−t) ⇌ j sgn(f) G*(f),                 (1.271)

      since g(t) and hence ĝ(t) are real-valued. Hence

        R_{gĝ}(τ) = ∫_{f=−∞}^{∞} j sgn(f) G(f) G*(f) exp(j2πfτ) df
                  = j ∫_{f=−∞}^{∞} sgn(f) |G(f)|² exp(j2πfτ) df.       (1.272)

      However, we know that

        R_g(τ) ⇌ |G(f)|²
        ⇒ R̂_g(τ) ⇌ −j sgn(f) |G(f)|².                                 (1.273)

      From (1.272) and (1.273), it is clear that

        R_{gĝ}(τ) = −R̂_g(τ).                                          (1.274)

The second part can be similarly proved.


39. (Simon Haykin 1983) Consider two bandpass signals g1 (t) and g2 (t) whose
pre-envelopes are denoted by g1+ (t) and g2+ (t), respectively.
    (a) Show that

        ∫_{t=−∞}^{∞} ℜ{g_{1+}(t)} ℜ{g_{2+}(t)} dt = (1/2) ℜ{ ∫_{t=−∞}^{∞} g_{1+}(t) g_{2+}*(t) dt }.   (1.275)
How is this relation modified if g2 (t) is replaced by g2 (−t)?
(b) Assuming that g(t) is a narrowband signal with complex envelope g̃(t) and
carrier frequency f c , use the result of part (a) to show that
        ∫_{t=−∞}^{∞} g²(t) dt = (1/2) ∫_{t=−∞}^{∞} |g̃(t)|² dt.   (1.276)

    • Solution: From the basic definition of the pre-envelope, we have

        g_{1+}(t) = g_1(t) + j ĝ_1(t),
        g_{2+}(t) = g_2(t) + j ĝ_2(t),                            (1.277)

      where g_1(t) and g_2(t) are real-valued signals. Hence, ĝ_1(t) and ĝ_2(t) are also real-valued. Next, we note that the real part of the integral is equal to the integral of the real part. Hence, the right-hand side of (1.275) becomes

        (1/2) ∫_{t=−∞}^{∞} [g_1(t) g_2(t) + ĝ_1(t) ĝ_2(t)] dt.    (1.278)

      Next, we note that

        ∫_{t=−∞}^{∞} g_1(t) g_2(t) dt = g_1(t) ∗ g_2(−t)|_{t=0} = ∫_{f=−∞}^{∞} G_1(f) G_2*(f) df,   (1.279)

      where "∗" denotes convolution. Similarly

        ∫_{t=−∞}^{∞} ĝ_1(t) ĝ_2(t) dt = ĝ_1(t) ∗ ĝ_2(−t)|_{t=0}
                                       = ∫_{f=−∞}^{∞} (−j sgn(f)) G_1(f) (j sgn(f)) G_2*(f) df
                                       = ∫_{f=−∞}^{∞} G_1(f) G_2*(f) df
                                       = ∫_{t=−∞}^{∞} g_1(t) g_2(t) dt,      (1.280)

      where we have used the fact that

        sgn²(f) = 1   for f ≠ 0.                                  (1.281)

      Now, the left-hand side of (1.275) is nothing but

        ∫_{t=−∞}^{∞} g_1(t) g_2(t) dt.                            (1.282)

Thus (1.275) is proved. Now, let us see what happens when g2 (t) is replaced
by g2 (−t). Let

g3 (t) = g2 (−t)
⇒ g3+ (t) = g3 (t) + j ĝ3 (t)
= g2 (−t) + j ĝ2 (−t)

⇒ g3+ (t) = g3 (t) − j ĝ3 (t)
= g2 (−t) − j ĝ2 (−t). (1.283)
      Clearly (1.275) is still valid with g_{2+}(t) replaced by g_{3+}(t) as given in (1.283), that is,

        ∫_{t=−∞}^{∞} ℜ{g_{1+}(t)} ℜ{g_{3+}(t)} dt = (1/2) ℜ{ ∫_{t=−∞}^{∞} g_{1+}(t) g_{3+}*(t) dt }.   (1.284)

      To prove part (b) we substitute

        g_1(t) = g_2(t) = g(t)                                    (1.285)

      in (1.275) and note that

        g_+(t) = g(t) + j ĝ(t)
               = g̃(t) exp(j2πf_c t),                             (1.286)

      where g̃(t) is the complex envelope of g(t). Thus, (1.276) follows immediately.
40. Let a narrowband signal be expressed in the form:

g(t) = gc (t) cos(2π f c t) − gs (t) sin(2π f c t). (1.287)

Using G + ( f ) to denote the Fourier transform of the pre-envelope of g(t), express


the Fourier transforms of the in-phase component gc (t) and the quadrature com-
ponent gs (t) in terms of G + ( f ).
    • Solution: From the basic definition of the pre-envelope

        g_c(t) + j g_s(t) = g_+(t) exp(−j2πf_c t) = g_1(t) (say)
        ⇒ g_c(t) = (1/2)[g_1(t) + g_1*(t)]
          g_s(t) = (1/(2j))[g_1(t) − g_1*(t)].                    (1.288)

      Taking the Fourier transform of both sides in the above equation, we get

        G_c(f) = (1/2)[G_1(f) + G_1*(−f)]
        G_s(f) = (1/(2j))[G_1(f) − G_1*(−f)].                     (1.289)

      Note that G_1(f) is equal to

        G_1(f) = G_+(f + f_c)
        ⇒ G_1*(−f) = G_+*(−f + f_c).                              (1.290)

      Substituting for G_1(f) in (1.289), we get

        G_c(f) = (1/2)[G_+(f + f_c) + G_+*(−f + f_c)]
        G_s(f) = (1/(2j))[G_+(f + f_c) − G_+*(−f + f_c)].         (1.291)

41. (Simon Haykin 1983) The duration of a signal provides a measure for describing
the signal as a function of time. The bandwidth of a signal provides a measure
for describing its frequency content. There is no unique set of definitions for
the duration and bandwidth. However, regardless of the definition we find that
their product is always a constant. The choice of a particular set of definitions
merely changes the value of this constant. This problem is intended to explore
these issues.
    (a) The root mean square (rms) bandwidth of a lowpass energy signal g(t) is defined by

        W_rms = [ ∫_{f=−∞}^{∞} f² |G(f)|² df / ∫_{f=−∞}^{∞} |G(f)|² df ]^{0.5}.   (1.292)

      The corresponding rms duration of the signal is defined by

        T_rms = [ ∫_{t=−∞}^{∞} t² |g(t)|² dt / ∫_{t=−∞}^{∞} |g(t)|² dt ]^{0.5}.   (1.293)

      Show that (Heisenberg–Gabor uncertainty principle)

        T_rms W_rms ≥ 1/(4π).                                     (1.294)

    (b) Consider an energy signal g(t) for which |G(f)| is defined for all frequencies from −∞ to ∞ and symmetric about f = 0 with its maximum value at f = 0. The equivalent rectangular bandwidth is defined by

        W_eq = ∫_{f=−∞}^{∞} |G(f)|² df / (2|G(0)|²).              (1.295)

      The corresponding equivalent duration is defined by

        T_eq = [ ∫_{t=−∞}^{∞} |g(t)| dt ]² / ∫_{t=−∞}^{∞} |g(t)|² dt.   (1.296)

      Show that

        T_eq W_eq ≥ 0.5.                                          (1.297)

    • Solution: To prove

        T_rms W_rms ≥ 1/(4π)                                      (1.298)

      we need to use Schwarz's inequality which states that

        [ ∫_{t=−∞}^{∞} (g_1*(t) g_2(t) + g_1(t) g_2*(t)) dt ]² ≤ 4 ∫_{t=−∞}^{∞} |g_1(t)|² dt ∫_{t=−∞}^{∞} |g_2(t)|² dt.   (1.299)

      For the given problem, we substitute

        g_1(t) = t g(t)
        g_2(t) = dg(t)/dt.                                        (1.300)

      Thus, the right-hand side of Schwarz's inequality in (1.299) becomes

        4 ∫_{t=−∞}^{∞} t² |g(t)|² dt ∫_{t=−∞}^{∞} |dg(t)/dt|² dt.  (1.301)

      Let

        g_3(t) = dg(t)/dt ⇌ j2πf G(f) = G_3(f).                   (1.302)

      Then, by Rayleigh's energy theorem

        ∫_{t=−∞}^{∞} |g_3(t)|² dt = ∫_{f=−∞}^{∞} |G_3(f)|² df
        ⇒ ∫_{t=−∞}^{∞} |dg(t)/dt|² dt = 4π² ∫_{f=−∞}^{∞} f² |G(f)|² df.   (1.303)

      Thus, the right-hand side of Schwarz's inequality in (1.299) becomes

        16π² ∫_{t=−∞}^{∞} t² |g(t)|² dt ∫_{f=−∞}^{∞} f² |G(f)|² df.   (1.304)

      Let us now consider the square root of the left-hand side of Schwarz's inequality in (1.299). We have

        ∫_{t=−∞}^{∞} t [g*(t) dg(t)/dt + g(t) dg*(t)/dt] dt = ∫_{t=−∞}^{∞} t (d/dt)[g(t) g*(t)] dt
                                                            = ∫_{t=−∞}^{∞} t (d/dt)|g(t)|² dt.   (1.305)

      In the above equation, we have used the fact (this can also be proved using Fourier transforms) that

        (dg(t)/dt)* = dg*(t)/dt.                                  (1.306)

      Let

        g_4(t) = |g(t)|² ⇌ G_4(f)
        ⇒ g_5(t) = dg_4(t)/dt ⇌ j2πf G_4(f) = G_5(f)
        ⇒ t g_5(t) ⇌ (j/(2π)) dG_5(f)/df
        ⇒ ∫_{t=−∞}^{∞} t g_5(t) dt = (j/(2π)) dG_5(f)/df |_{f=0}
        ⇒ ∫_{t=−∞}^{∞} t g_5(t) dt = (j/(2π)) (d/df)(j2πf G_4(f)) |_{f=0}
        ⇒ ∫_{t=−∞}^{∞} t (d/dt)|g(t)|² dt = −G_4(0)
        ⇒ ∫_{t=−∞}^{∞} t (d/dt)|g(t)|² dt = −∫_{t=−∞}^{∞} |g(t)|² dt.   (1.307)

      Using Rayleigh's energy theorem, the left-hand side of Schwarz's inequality in (1.299) becomes

        ∫_{t=−∞}^{∞} |g(t)|² dt ∫_{f=−∞}^{∞} |G(f)|² df.          (1.308)

      Taking the square root of (1.304) and (1.308), we get the desired result in (1.298).
      To prove

        T_eq W_eq ≥ 0.5,                                          (1.309)

      we invoke Schwarz's inequality which states that

        | ∫_{t=−∞}^{∞} g(t) dt | ≤ ∫_{t=−∞}^{∞} |g(t)| dt.        (1.310)

      Squaring both sides, we obtain

        | ∫_{t=−∞}^{∞} g(t) dt |² ≤ [ ∫_{t=−∞}^{∞} |g(t)| dt ]²
        ⇒ |G(0)|² ≤ [ ∫_{t=−∞}^{∞} |g(t)| dt ]².                  (1.311)

      Using the above inequality and Rayleigh's energy theorem which states that

        ∫_{t=−∞}^{∞} |g(t)|² dt = ∫_{f=−∞}^{∞} |G(f)|² df,        (1.312)

      we get the desired result

        T_eq W_eq ≥ 0.5.                                          (1.313)
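The Gaussian pulse g(t) = e^{−πt²}, whose Fourier transform is again e^{−πf²}, achieves the bound (1.294) with equality. The short Python sketch below (an added illustration) checks this numerically.

import numpy as np

t = np.linspace(-10, 10, 200001)
g = np.exp(-np.pi * t**2)
f = np.linspace(-10, 10, 200001)
G = np.exp(-np.pi * f**2)          # Fourier transform of the Gaussian pulse

T_rms = np.sqrt(np.trapz(t**2 * g**2, t) / np.trapz(g**2, t))
W_rms = np.sqrt(np.trapz(f**2 * G**2, f) / np.trapz(G**2, f))

print(T_rms * W_rms, 1 / (4 * np.pi))     # both ~0.0796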

42. x(t) is a bandpass signal whose Fourier transform is shown in Fig. 1.32. h(t) is
an LTI system having a bandpass frequency response of the form

h(t) = h c (t) cos(2π f c t). (1.314)

The Fourier transform of h c (t) is given by

Fig. 1.32 A bandpass spectrum X(f): the component 4X_c(f − f_c) occupies f_c − W < f < f_c + W, with the mirror-image component around −f_c

        H_c(f) = { f²  for |f| < B
                 { 0   otherwise,                                 (1.315)

where B > W and f c  B, W . x(t) is input to h(t).


(a) Write down the expression of x(t) in terms of xc (t).
(b) Write down the expression of H ( f ) in terms of Hc ( f ).
(c) Find y(t) (output of h(t)) in terms of xc (t) without performing convolution.

    • Solution: Note that

        X(f) = 4[X_c(f − f_c) + X_c(f + f_c)]
             ⇌ 8 x_c(t) cos(2πf_c t)
             = x(t).                                              (1.316)

      Similarly

        H(f) = [H_c(f − f_c) + H_c(f + f_c)]/2.                   (1.317)

      Now

        Y(f) = X(f) H(f)
             = 2[X_c(f − f_c) H_c(f − f_c) + X_c(f + f_c) H_c(f + f_c)].   (1.318)

      Let

        G_c(f) = X_c(f) H_c(f)
               = { f² X_c(f)  for |f| < W
                 { 0          otherwise
               ⇌ (−1/(4π²)) d²x_c(t)/dt²
               = g_c(t).                                          (1.319)

      Then

        y(t) = 4 g_c(t) cos(2πf_c t)
             = (−1/π²) (d²x_c(t)/dt²) cos(2πf_c t).               (1.320)

43. Two periodic signals g p1 (t) and g p2 (t) are depicted in Fig. 1.33. Let

g p (t) = g p1 (t) + g p2 (t). (1.321)


Fig. 1.33 The periodic waveforms g_p1(t) and g_p2(t)

(a) Sketch g p (t).


(b) Compute the power of dg p (t)/dt.
(c) Compute the Fourier transform of d 2 g p (t)/dt 2 . Assume that the generating
function (of d 2 g p (t)/dt 2 ) extends over [0, T ) where T is the period of
d 2 g p (t)/dt 2 .

    • Solution: The signal g_p(t) is illustrated in Fig. 1.34. Let

        g_p3(t) = dg_p(t)/dt.                                     (1.322)

      Then, the power of g_p3(t) is

        P = (1/T) ∫_{t=0}^{T} g_p3²(t) dt
          = 14/4
          = 3.5.                                                  (1.323)

      Let

        g_p4(t) = d²g_p(t)/dt²                                    (1.324)

      and let g_4(t) denote its generating function. Then

        g_4(t) = 3δ(t) − 5δ(t − 1) + δ(t − 2) + δ(t − 3)
               ⇌ 3 − 5e^{−j2πf} + e^{−j4πf} + e^{−j6πf}
               = G_4(f).                                          (1.325)
Fig. 1.34 Periodic waveforms: g_p1(t), g_p2(t), their sum g_p(t), and the derivatives dg_p(t)/dt and d²g_p(t)/dt²

Now

        g_p4(t) = Σ_{k=−∞}^{∞} g_4(t − kT)
                ⇌ G_4(f) Σ_{k=−∞}^{∞} e^{−j2πfkT}
                = (1/T) Σ_{n=−∞}^{∞} G_4(n/T) δ(f − n/T),         (1.326)

where T = 4 s.

44. Using Fourier transform properties and/or any other relation, compute
  
        ∫_{f=−∞}^{∞} | [A sinc(fT) e^{−jπfT} − A e^{−j4πfT}] / (j2πf) |² df.   (1.327)

    Clearly state which Fourier transform property and/or relation is being used. The constant of any integration may be assumed to be zero.

    • Solution: We know from Parseval's relation that

        ∫_{f=−∞}^{∞} |G(f)|² df = ∫_{t=−∞}^{∞} |g(t)|² dt,        (1.328)

      where

        g(t) ⇌ G(f)                                               (1.329)

      is a Fourier transform pair. Let

        G(f) = [A sinc(fT) e^{−jπfT} − A e^{−j4πfT}] / (j2πf).    (1.330)

      Let

        G_1(f) = j2πf G(f)
               = A sinc(fT) e^{−jπfT} − A e^{−j4πfT}
               ⇌ g_1(t)
               = dg(t)/dt
               = (A/T) rect((t − T/2)/T) − A δ(t − 2T).           (1.331)

      Both g_1(t) and g(t) are plotted in Fig. 1.35. Clearly

        ∫_{t=−∞}^{∞} |g(t)|² dt = ∫_{t=0}^{2T} |g(t)|² dt
                                = 4A²T/3,                         (1.332)

      which is the required answer.
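As an added illustration, the frequency-domain integral (1.327) can also be evaluated numerically and compared with 4A²T/3; A and T below are arbitrary test values, and the point f = 0 (a removable singularity) is simply skipped.

import numpy as np

A, T = 1.5, 0.8
f = np.linspace(-200, 200, 2_000_001)
f = f[np.abs(f) > 1e-9]                      # skip the removable point f = 0

G = (A * np.sinc(f * T) * np.exp(-1j * np.pi * f * T)
     - A * np.exp(-4j * np.pi * f * T)) / (2j * np.pi * f)

print(np.trapz(np.abs(G) ** 2, f), 4 * A**2 * T / 3)   # close (tail truncated at |f| = 200)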
45. Compute the area under 2e^{−2πt²} ∗ e^{−3πt²}, where "∗" denotes convolution. Clearly state which Fourier transform property and/or relation is being used and show all the steps.
Fig. 1.35 Plot of g_1(t) and g(t)

    • Solution: We know that

        e^{−πt²} ⇌ e^{−πf²}.                                      (1.333)

      We also know that if

        g(t) ⇌ G(f),                                              (1.334)

      then, from the time scaling property of the Fourier transform,

        g(at) ⇌ (1/|a|) G(f/a),                                   (1.335)

      where a ≠ 0. Therefore

        e^{−2πt²} ⇌ (1/√2) e^{−πf²/2}
        e^{−3πt²} ⇌ (1/√3) e^{−πf²/3}.                            (1.336)

      Moreover

        2e^{−2πt²} ∗ e^{−3πt²} ⇌ (2/√6) e^{−πf²/2} e^{−πf²/3}.    (1.337)

      Using

        ∫_{t=−∞}^{∞} g(t) dt = G(0)                               (1.338)

      the area under 2e^{−2πt²} ∗ e^{−3πt²} is 2/√6.
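A direct numerical check (added here for illustration): the area under the convolution equals the product of the individual areas, which is 2/√6 ≈ 0.8165.

import numpy as np

dt = 0.005
t = np.arange(-5, 5, dt)
g1 = 2 * np.exp(-2 * np.pi * t**2)
g2 = np.exp(-3 * np.pi * t**2)

conv = np.convolve(g1, g2, mode="full") * dt       # discrete approximation of g1 * g2
print(np.sum(conv) * dt, 2 / np.sqrt(6))           # both ~0.8165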

46. State Rayleigh’s energy theorem. No derivation is required.


    • Solution: Let G(f) denote the Fourier transform of g(t). Then Rayleigh's energy theorem states that

        ∫_{t=−∞}^{∞} |g(t)|² dt = ∫_{f=−∞}^{∞} |G(f)|² df.        (1.339)

47. Let G( f ) denote the Fourier transform of g(t). It is given that G(0) is finite and
non-zero. Does g(t) contain a dc component? Justify your answer.
• Solution: If g(t) has a dc component, then G( f ) contains a Dirac-delta function
at f = 0. However, it is given that G(0) is finite. Therefore, g(t) does not
contain any dc component.
48. Consider the following: if CONDITION then STATEMENT. The CONDITION
is given to be necessary.
Which of the following statement(s) are correct (more than one statement may
be correct).
(a) CONDITION is true implies STATEMENT is always true.
(b) CONDITION is true implies STATEMENT is always false.
(c) CONDITION is true implies STATEMENT may be true or false.
(d) CONDITION is false implies STATEMENT is always true.
(e) CONDITION is false implies STATEMENT is always false.
(f) CONDITION is false implies STATEMENT may be true or false.

• Solution: (c) and (e).


For example, the necessary condition for the sum of an infinite series to exist
is that the nth term must tend to zero as n → ∞. Here:
(a) CONDITION is: n th term goes to zero as n → ∞.
(b) STATEMENT is: The sum of an infinite series exists.
    The sum given by

        S = Σ_{n=1}^{∞} 1/n                                       (1.340)

    does not exist even though the nth term goes to zero as n → ∞.

49. Consider the following: if CONDITION then STATEMENT. The CONDITION


is given to be sufficient.
Which of the following statement(s) are correct (more than one statement may
be correct):
(a) CONDITION is true implies STATEMENT is always true.
(b) CONDITION is true implies STATEMENT is always false.
(c) CONDITION is true implies STATEMENT may be true or false.
(d) CONDITION is false implies STATEMENT is always true.
(e) CONDITION is false implies STATEMENT is always false.
(f) CONDITION is false implies STATEMENT may be true or false.
• Solution: (a) and (f).
For example, Dirichlet’s conditions for the existence of the Fourier transform
are sufficient. In other words, if Dirichlet’s conditions are satisfied, the Fourier
transform is guaranteed to exist. However, if Dirichlet’s conditions are not
satisfied, the Fourier transform may or may not exist. Here:
(a) CONDITION is: Dirichlet’s conditions.
(b) STATEMENT is: The Fourier transform exists.

50. State Parseval’s power theorem for periodic signals in terms of its complex
Fourier series coefficients.
    • Solution: Let g_p(t) be a periodic signal with period T_0. Then g_p(t) can be represented in the form of a complex Fourier series as follows:

        g_p(t) = Σ_{n=−∞}^{∞} c_n e^{j2πnt/T_0}.                  (1.341)

      Then Parseval's power theorem states that

        (1/T_0) ∫_{t=0}^{T_0} |g_p(t)|² dt = Σ_{n=−∞}^{∞} |c_n|².    (1.342)

51. State and derive the Poisson sum formula. Hence show that

        Σ_{n=−∞}^{∞} e^{j2πfnT_0} = (1/T_0) Σ_{n=−∞}^{∞} δ(f − n/T_0).   (1.343)

    • Solution: Let g_p(t) be a periodic signal with period T_0. Then g_p(t) can be expressed in the form of a complex Fourier series as follows:

        g_p(t) = Σ_{n=−∞}^{∞} c_n e^{j2πnt/T_0},                  (1.344)

      where

        c_n = (1/T_0) ∫_{t=0}^{T_0} g_p(t) e^{−j2πnt/T_0} dt.     (1.345)

      Let g(t) be the generating function of g_p(t), that is,

        g(t) = { g_p(t)  for 0 ≤ t < T_0
               { 0       otherwise.                               (1.346)

      Note that

        g_p(t) = Σ_{n=−∞}^{∞} g(t − nT_0).                        (1.347)

      The Fourier transform of g(t) is

        G(f) = ∫_{t=−∞}^{∞} g(t) e^{−j2πft} dt
             = ∫_{t=0}^{T_0} g_p(t) e^{−j2πft} dt.                (1.348)

      From (1.345) and (1.348), we have

        c_n T_0 = G(n/T_0).                                       (1.349)

      Therefore, from (1.344), (1.347), and (1.349) we have

        Σ_{n=−∞}^{∞} g(t − nT_0) = (1/T_0) Σ_{n=−∞}^{∞} G(n/T_0) e^{j2πnt/T_0},   (1.350)

      which proves the Poisson sum formula. Taking the Fourier transform of both sides of (1.350) we obtain

        G(f) Σ_{n=−∞}^{∞} e^{−j2πfnT_0} = (1/T_0) Σ_{n=−∞}^{∞} G(n/T_0) δ(f − n/T_0).   (1.351)

      For the particular case where G(f) = 1, we obtain the required result

        Σ_{n=−∞}^{∞} e^{−j2πfnT_0} = (1/T_0) Σ_{n=−∞}^{∞} δ(f − n/T_0).   (1.352)
T0 n=−∞

52. Clearly define the unit step function. Derive the Fourier transform of the unit step function.

    • Solution: The unit step function is defined as

        u(t) = ∫_{τ=−∞}^{t} δ(τ) dτ
             = { 0    for t < 0
               { 1/2  for t = 0
               { 1    for t > 0,                                  (1.353)

      where δ(·) is the Dirac-delta function. Now consider

        g(t) = { exp(−at)   for t > 0
               { 0          for t = 0
               { −exp(at)   for t < 0,                            (1.354)

      where a > 0 is a real-valued constant. Note that

        lim_{a→0} g(t) = sgn(t),                                  (1.355)

      where

        sgn(t) = { 1   for t > 0
                 { 0   for t = 0
                 { −1  for t < 0.                                 (1.356)

      Now, the Fourier transform of g(t) is

        G(f) = −j4πf / (a² + (2πf)²).                             (1.357)

      Taking the limit a → 0 in (1.357), we get from (1.355)

        sgn(t) ⇌ 1/(jπf).                                         (1.358)

      We know that

        2u(t) − 1 = sgn(t).                                       (1.359)

      Taking the Fourier transform of both sides of (1.359) we get

        2U(f) − δ(f) = 1/(jπf)
        ⇒ U(f) = 1/(j2πf) + δ(f)/2,                               (1.360)

      which is the Fourier transform of the unit step function.
      Note that if

        h(t) = { exp(−at)  for t > 0
               { 1/2       for t = 0
               { 0         for t < 0,                             (1.361)

      where a > 0 is a real-valued constant, then

        lim_{a→0} h(t) = u(t).                                    (1.362)

      Now, the Fourier transform of h(t) in (1.361) is

        H(f) = 1/(a + j2πf).                                      (1.363)

      Taking the limit a → 0 in (1.363), we obtain from (1.362), the Fourier transform of the unit step as

        u(t) ⇌ 1/(j2πf),                                          (1.364)

      which is different from (1.360). However, we note from (1.364) that the right-hand side is purely imaginary, which implies that the unit step is an odd function, which is incorrect. Therefore, the Fourier transform of the unit step is given by (1.360).
53. Compute

        ∫_{t=−∞}^{∞} sinc³(3t) dt,                                (1.365)

    where

        sinc(t) = sin(πt)/(πt).                                   (1.366)

    • Solution: We know that

        A rect(t/T) ⇌ AT sinc(fT).                                (1.367)

      Using duality we get

        A rect(f/B) ⇌ AB sinc(tB).                                (1.368)

      Substituting B = 3 in (1.368) we get

        (1/3) rect(f/3) ⇌ sinc(3t).                               (1.369)

      Using the property that multiplication in the time domain corresponds to convolution in the frequency domain, we get

        G(f) = (1/27)(rect(f/3) ∗ rect(f/3) ∗ rect(f/3)) ⇌ sinc³(3t) = g(t),   (1.370)

      where "∗" denotes convolution. Using the property

        ∫_{t=−∞}^{∞} g(t) dt = G(0),                              (1.371)

      we get

        ∫_{t=−∞}^{∞} sinc³(3t) dt = (1/27) rect(f/3) ∗ 3[(1 − |f|/3) rect(f/6)]|_{f=0}
                                  = (1/9) ∫_{α=−3/2}^{3/2} (1 − |α|/3) dα
                                  = (2/9) ∫_{α=0}^{3/2} (1 − α/3) dα
                                  = 1/4.                          (1.372)
54. A signal g(t) has Fourier transform given by

        G(F) = sin(2πFt_0).                                       (1.373)

    Compute ĝ(t).

    • Solution: Clearly

        g(t) = [δ(t + t_0) − δ(t − t_0)] / (2j),                  (1.374)

      where δ(·) is the Dirac-delta function. The impulse response of the Hilbert transformer is

        h(t) = 1/(πt).                                            (1.375)

      Therefore

        ĝ(t) = g(t) ∗ h(t)
             = (1/(2j)) (h(t + t_0) − h(t − t_0))
             = (1/(2πj)) [1/(t + t_0) − 1/(t − t_0)]
             = (1/(πj)) [−t_0/(t² − t_0²)].                       (1.376)

55. Compute the Hilbert transform of

        g(t) = 24 / (9 + (2πt)²).                                 (1.377)

    • Solution: We start from the Fourier transform pair:

        a e^{−b|t|} ⇌ 2ab / (b² + (2πf)²).                        (1.378)

      Using duality, we get

        G(f) = a e^{−b|f|} ⇌ 2ab / (b² + (2πt)²) = g(t).          (1.379)

      Here a = 4 and b = 3. In this problem, it is best to compute the Hilbert transform from the frequency domain. We have

        Ĝ(f) = −j sgn(f) G(f).                                    (1.380)

      Now ĝ(t) is the inverse Fourier transform of Ĝ(f) as given by

        ĝ(t) = ∫_{f=−∞}^{∞} Ĝ(f) e^{j2πft} df
             = −4j ∫_{f=0}^{∞} e^{−3f} e^{j2πft} df + 4j ∫_{f=−∞}^{0} e^{3f} e^{j2πft} df
             = −4j [ e^{−f(3 − j2πt)} / (−(3 − j2πt)) ]_{f=0}^{∞} + 4j [ e^{f(3 + j2πt)} / (3 + j2πt) ]_{f=−∞}^{0}
             = −4j/(3 − j2πt) + 4j/(3 + j2πt)
             = 16πt / (9 + (2πt)²).                               (1.381)

Fig. 1.36 A signal g(t)

56. Consider the signal g(t) in Fig. 1.36. Which of the following statement(s) are
correct?
    (a) g(t) ∗ g(t)|_{t=−T/3} = 5/(48T).                          (1.382)
    (b) The Fourier transform of g(t) ∗ g(t) is real-valued and even.
    (c) g(t) ∗ g(−t) is real-valued and even.
    (d) g(t) ∗ δ(3t) = g(t)/3.
    Here "∗" denotes convolution.

    • Solution: Consider Fig. 1.37. Clearly

        g(t) ∗ g(t)|_{t=−T/3} = ∫_{τ=−T/3}^{2T/3} g(τ) g(−T/3 − τ) dτ
                              = 5/(48T).                          (1.383)

      Note that g(t) is neither even nor odd. Hence, its Fourier transform G(f) is complex valued. Therefore

        g(t) ∗ g(t) ⇌ G²(f).                                      (1.384)

      Clearly, G²(f) is also complex valued. Next, since g(t) is real-valued, g(t) ∗ g(−t) is the autocorrelation of g(t), and is hence real-valued and even. Finally

Fig. 1.37 The signals g(τ), g(τ − T/3), and g(−τ − T/3)

        g(t) ∗ δ(3t) = ∫_{τ=−∞}^{∞} δ(3τ) g(t − τ) dτ
                     = ∫_{α=−∞}^{∞} (1/3) δ(α) g(t − α/3) dα
                     = g(t)/3.                                    (1.385)

References

Simon Haykin. Communication Systems. Wiley Eastern, second edition, 1983.


Rodger E. Ziemer and William H. Tranter. Principles of Communications. John Wiley, fifth edition,
2002.
Chapter 2
Random Variables and Random
Processes

1. Show that the magnitude of the correlation coefficient |ρ| is always less than or
equal to unity.
    • Solution: For any real number a, we have

        E[(a(X − m_X) − (Y − m_Y))²] ≥ 0
        ⇒ a² σ_X² − 2a E[(X − m_X)(Y − m_Y)] + σ_Y² ≥ 0.          (2.1)

      Since the above quadratic equation in a is nonnegative, its discriminant is nonpositive. Hence

        4E²[(X − m_X)(Y − m_Y)] − 4σ_X² σ_Y² ≤ 0
        ⇒ |E[(X − m_X)(Y − m_Y)]| / (σ_X σ_Y) ≤ 1.                (2.2)

      Hence proved.
2. (Papoulis 1991) Using characteristic functions, show that

E[X 1 X 2 X 3 X 4 ] = C12 C34 + C13 C24 + C14 C23 , (2.3)

where E[X i X j ] = Ci j and the random variables X i are jointly normal (Gaussian)
with zero mean.
    • Solution: The joint characteristic function of X_1, X_2, X_3, and X_4 is given by

        E[e^{j(v_1 X_1 + ··· + v_4 X_4)}].                        (2.4)

      Expanding the exponential in the form of a power series and considering only the fourth power, we have

        E[e^{j(v_1 X_1 + ··· + v_4 X_4)}]
            = ··· + (1/4!) E[(v_1 X_1 + ··· + v_4 X_4)^4] + ···
            = ··· + (24/4!) E[X_1 X_2 X_3 X_4] v_1 v_2 v_3 v_4 + ··· .   (2.5)

      Now, let

        W = v_1 X_1 + ··· + v_4 X_4.                              (2.6)

      Then

        E[W] = 0
        E[W²] = Σ_{i,j} v_i v_j C_{ij} = σ_w².                    (2.7)

      Now, the characteristic function of W is

        E[e^{jW}] = e^{−σ_w²/2} = e^{−(1/2) Σ_{i,j} v_i v_j C_{ij}}.   (2.8)

      Expanding the above exponential once again we have

        E[e^{jW}] = ··· + (1/2!) ((1/2) Σ_{i,j} v_i v_j C_{ij})² + ···
                  = ··· + (8/8)(C_{12} C_{34} + C_{13} C_{24} + C_{14} C_{23}) v_1 v_2 v_3 v_4 + ···   (2.9)

      Equating the coefficients in (2.5) and (2.9) we get the result.
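The moment identity (2.3) is easy to verify by simulation. The Python sketch below is an added illustration; the covariance matrix C is an arbitrary (but valid) choice.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = A @ A.T                                   # an arbitrary valid covariance matrix
X = rng.multivariate_normal(np.zeros(4), C, size=2_000_000)

lhs = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])
rhs = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
print(lhs, rhs)                               # agree to within Monte-Carlo error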
3. (Haykin 1983) A Gaussian distributed random variable X of zero mean and
variance σ 2X is transformed by a half-wave rectifier with input-output relation
given below

X if X ≥ 0
Y = (2.10)
0 if X < 0.

Compute the pdf of Y .


• Solution: We start with the basic equation when a random variable X is trans-
formed to a random variable Y through the relation

Y = g(X ). (2.11)

We note that the mapping between Y and X is one-to-one for X ≥ 0 and an


inverse mapping exists for X ≥ 0

X = g −1 (Y ). (2.12)

Thus the following relation holds



f X (x)
f Y (y) = . (2.13)
|dy/d x| X =g−1 (Y )

For the given problem


dy 1 for X > 0
= (2.14)
dx 0 for X < 0.

Since X < 0 maps to Y = 0, we must have

P(X < 0) = P(Y = 0). (2.15)

      Thus the pdf of Y must have a Dirac delta function at y = 0. Let

        f_Y(y) = { k δ(y) + (1/√(2πσ_X²)) exp(−y²/(2σ_X²))  for y ≥ 0
                 { 0                                         for y < 0,   (2.16)

      where k is a constant such that the pdf of Y integrates to unity. It is easy to see that

        ∫_{y=−∞}^{∞} f_Y(y) dy = k + 1/2 = 1
        ⇒ k = 1/2,                                                (2.17)

where we have used the fact that


 ∞
δ(y) dy = 1. (2.18)
y=−∞

4. Let X be a uniformly distributed random variable between 0 and 2π. Compute


the pdf of Y = sin(X ).
• Solution: Clearly, the mapping between X and Y is not one-to-one. In fact,
two distinct values of X give the same value of Y (excepting at the extreme
points of y = −1 and y = 1). This is illustrated in Fig. 2.1. Let us consider two
points x1 and x2 as shown in the figure. The probability that Y lies in the range
y to y + dy is equal to the probability that X lies in the range [x1 , x1 + d x1 ]
and [x2 , x2 + d x2 ]. Note that d x2 is actually negative and
Fig. 2.1 The transformation Y = sin(X): two distinct values x_1 and x_2 of X map to the same value y of Y


        dy/dx_1 = dy/dx|_{x=x_1}.                                 (2.19)

      Thus we have (since dx_2 is negative)

        f_Y(y) dy = f_X(x_1) dx_1 − f_X(x_2) dx_2
        ⇒ f_Y(y) = (1/(2π))/|dy/dx| |_{x_1=sin^{−1}(y)} + (1/(2π))/|dy/dx| |_{x_2=π−sin^{−1}(y)}
        ⇒ f_Y(y) = 1/(π cos(x)) |_{x=sin^{−1}(y)}
        ⇒ f_Y(y) = 1/(π√(1 − y²))    for −1 ≤ y ≤ 1.              (2.20)

      Note that even though f_Y(±1) = ∞, the probability that Y = ±1 is zero, since

        P(Y = 1) = P(X = π/2) = 0
        P(Y = −1) = P(X = 3π/2) = 0.                              (2.21)
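The arcsine-type density (2.20) is easily confirmed by a histogram of simulated samples; the following Python sketch is an added illustration with arbitrary sample and bin counts.

import numpy as np

rng = np.random.default_rng(1)
y = np.sin(rng.uniform(0, 2 * np.pi, size=1_000_000))

hist, edges = np.histogram(y, bins=50, range=(-0.95, 0.95), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
pdf = 1 / (np.pi * np.sqrt(1 - centres**2))
print(np.max(np.abs(hist - pdf)))        # small (binning/Monte-Carlo error only)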

5. (Haykin 1983) Consider a random process X (t) defined by

X (t) = sin(2π Ft) (2.22)

    in which the frequency F is a random variable with the probability density function

        f_F(f) = { 1/W  for 0 ≤ f ≤ W
                 { 0    elsewhere.                                (2.23)

    Is X(t) WSS?

    • Solution: Let us first compute the mean value of X(t).

        E[X(t)] = (1/W) ∫_{f=0}^{W} sin(2πFt) dF
                = −(1/(2Wπt)) (cos(2πWt) − 1)                     (2.24)

      which is a function of time. Hence X(t) is not WSS.


6. (Haykin 1983) A random process X (t) is defined by

X (t) = A cos(2π f c t), (2.25)

where A is a Gaussian distributed random variable with zero mean and variance
σ 2A . This random process is applied to an ideal integrator producing an output
Y (t) defined by
 t
Y (t) = X (τ ) dτ . (2.26)
τ =0

(a) Determine the probability density function of the output Y (t) at a particular
time tk .
(b) Determine whether Y (t) is WSS.
(c) Determine whether Y (t) is ergodic in the mean and in the autocorrelation.

    • Solution: The random variable Y(t_k) is given by

        Y(t_k) = (A/(2πf_c)) sin(2πf_c t_k).                      (2.27)

      Since all the terms in the above equation excepting A are constants for the time instant t_k, Y(t_k) is also a Gaussian distributed random variable with mean and variance given by

        E[Y(t_k)] = (E[A]/(2πf_c)) sin(2πf_c t_k) = 0
        E[Y²(t_k)] = (E[A²]/(4π²f_c²)) sin²(2πf_c t_k)
                   = (σ_A²/(4π²f_c²)) sin²(2πf_c t_k).            (2.28)

      Since the variance is a function of time, Y(t) is not WSS. Hence it is not ergodic in the autocorrelation. However, it can be shown that the time-averaged mean is zero. Hence Y(t) is ergodic in the mean.

7. (Haykin 1983) Let X and Y be statistically independent Gaussian random vari-


ables with zero mean and unit variance. Define the Gaussian process

Z (t) = X cos(2πt) + Y sin(2πt). (2.29)

(a) Determine the joint pdf of the random variables Z (t1 ) and Z (t2 ).
(b) Is Z (t) WSS?

• Solution: The random process Z (t) has mean and autocorrelation given by

E[Z (t)] = E[X ] cos(2πt) + E[Y ] sin(2πt) = 0


E[Z (t1 )Z (t2 )] = E[(X cos(2πt1 ) + Y sin(2πt2 ))
(X cos(2πt1 ) + Y sin(2πt2 ))]
= cos(2πt1 ) cos(2πt2 ) + sin(2πt1 ) sin(2πt2 )
= cos(2π(t1 − t2 ))
⇒ E[Z 2 (t)] = 1. (2.30)

In the above relations we have used the fact that

E[X ] = E[Y ] = 0
E[X 2 ] = E[Y 2 ] = 1
E[X Y ] = E[X ]E[Y ] = 0. (2.31)

Since the mean is independent of time and the autocorrelation is dependent


only on the time lag, Z (t) is WSS.
Since Z (t) is a linear combination of two Gaussian random variables, it is also
Gaussian. Therefore Z (t1 ) and Z (t2 ) are jointly Gaussian.
      The joint pdf of two real-valued Gaussian random variables y_1 and y_2 is given by

        f_{Y1,Y2}(y_1, y_2) = 1/(2πσ_1σ_2√(1 − ρ²))
            × exp(−[σ_2²(y_1 − m_1)² − 2σ_1σ_2ρ(y_1 − m_1)(y_2 − m_2) + σ_1²(y_2 − m_2)²] / (2σ_1²σ_2²(1 − ρ²))),   (2.32)

      where

        E[y_1] = m_1
        E[y_2] = m_2
        E[(y_1 − m_1)²] = σ_1²
        E[(y_2 − m_2)²] = σ_2²
        E[(y_1 − m_1)(y_2 − m_2)]/(σ_1σ_2) = ρ.                   (2.33)

      Thus, for the given problem

        f_{Z(t_1),Z(t_2)}(z(t_1), z(t_2)) = 1/(2π|sin(2π(t_1 − t_2))|)
            × exp(−[z²(t_1) − 2cos(2π(t_1 − t_2)) z(t_1) z(t_2) + z²(t_2)] / (2 sin²(2π(t_1 − t_2)))).   (2.34)

8. Using ensemble averaging, find the mean and the autocorrelation of the random
process given by


X (t) = Sk p(t − kT − α), (2.35)
k=−∞

where Sk denotes a discrete random variable taking values ±A with equal


probability, p(·) denotes a real-valued waveform, 1/T denotes the bit-rate and
α denotes a random timing phase uniformly distributed in [0, T ). Assume that
Sk is independent of α and Sk is independent of S j for k = j.
    • Solution: Since S_k and α are independent

        E[X(t)] = Σ_{k=−∞}^{∞} E[S_k] E[p(t − kT − α)] = 0,       (2.36)

      where for the given binary phase shift keying (BPSK) constellation

        E[S_k] = (1/2)A + (1/2)(−A) = 0.                          (2.37)

      The autocorrelation of X(t) is

        R_X(τ) = E[X(t)X(t − τ)]
               = E[ Σ_{k=−∞}^{∞} S_k p(t − kT − α) Σ_{j=−∞}^{∞} S_j p(t − τ − jT − α) ]
               = Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} E[S_k S_j] E[p(t − kT − α) p(t − τ − jT − α)]
               = Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} A² δ_K(k − j) (1/T) ∫_{α=0}^{T} p(t − kT − α) p(t − τ − jT − α) dα
               = (A²/T) Σ_{k=−∞}^{∞} ∫_{α=0}^{T} p(t − kT − α) p(t − τ − kT − α) dα,   (2.38)

      where we have assumed that S_k and α are independent and

        δ_K(k − j) = { 1  for k = j
                     { 0  for k ≠ j                               (2.39)

      is the Kronecker delta function. Let

        x = t − kT − α.                                           (2.40)

      Substituting (2.40) in (2.38) we obtain

        R_X(τ) = (A²/T) Σ_{k=−∞}^{∞} ∫_{x=t−kT−T}^{t−kT} p(x) p(x − τ) dx.   (2.41)

      Combining the summation and the integral, (2.41) becomes

        R_X(τ) = (A²/T) ∫_{x=−∞}^{∞} p(x) p(x − τ) dx
               = (A²/T) R_p(τ),                                   (2.42)

      where R_p(·) is the autocorrelation of p(·).
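For the special case of a rectangular pulse p(t) = 1 on [0, T), the result (2.42) reduces to the triangle A²(1 − |τ|/T) for |τ| < T. The Python sketch below is an added illustration: it builds one long sample path of the random binary wave and forms a time-averaged autocorrelation, which (by ergodicity) should approach the same triangle; the symbol count and oversampling factor are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
A, T, ns = 1.0, 1.0, 20                 # ns samples per symbol
bits = rng.choice([-A, A], size=200_000)
x = np.repeat(bits, ns)                 # random binary wave, rectangular pulses

def Rx(lag):                            # time-averaged autocorrelation at tau = lag*T/ns
    return np.mean(x[: len(x) - lag] * x[lag:])

for lag in (0, 5, 10, 15, 20):
    tau = lag * T / ns
    print(tau, Rx(lag), A**2 * max(0.0, 1 - abs(tau) / T))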

9. (Haykin 1983) The square wave x(t) of constant amplitude A, period T0 , and
delay td represents the sample function of a random process X (t). This is illus-
trated in Fig. 2.2. The delay is random and is described by the pdf
Fig. 2.2 A square wave x(t) of amplitude A and period T_0 with a random delay t_d

        f_{T_d}(t_d) = { 1/T_0  for 0 ≤ t_d < T_0
                       { 0      otherwise.                        (2.43)

(a) Determine the mean and autocorrelation of X (t) using ensemble averaging.
(b) Determine the mean and autocorrelation of X (t) using time averaging.
(c) Is X (t) WSS?
(d) Is X (t) ergodic in the mean and the autocorrelation?

• Solution: Instead of solving this particular problem we try to solve a more


general problem. Let p(t) denote an arbitrary pulse shape. Consider a random
process X (t) defined by


X (t) = p(t − kT0 − td ), (2.44)
k=−∞

where td is a uniformly distributed random variable in the range [0, T0 ).


Clearly, in the absence of td , X (t) is no longer a random process and it simply
becomes a periodic waveform. We wish to find out the mean and autocorre-
lation of the random process defined in (2.44). The mean of X (t) is given
by
 ∞


E[X (t)] = E p(t − kT0 − td )
k=−∞
 T0 ∞

1
= p(t − kT0 − td ) dtd . (2.45)
T0 td =0 k=−∞

Interchanging the order of integration and summation and substituting

t − kT0 − td = x (2.46)

we get
∞  t−kT0
1 
E[X (t)] = p(x) d x. (2.47)
T0 k=−∞ x=t−kT0 −T0

Combining the summation and the integral we get


 ∞
1
E[X (t)] = p(x) d x. (2.48)
T0 x=−∞

For the given problem:

A
E[X (t)] = . (2.49)
2
The autocorrelation can be computed as
 ∞

E[X (t)X (t − τ )] = E p(t − kT0 − td )
k=−∞



p(t − τ − j T0 − td )⎦
j=−∞
∞ ∞
1  
=
T0 k=−∞ j=−∞
 T0
p(t − kT0 − td ) p(t − τ − j T0 − td ) dtd . (2.50)
td =0

Substituting

t − kT0 − td = x (2.51)

we get
∞ ∞
1  
E[X (t)X (t − τ )] =
T0 k=−∞ j=−∞
 t−kT0
p(x) p(x + kT0 − τ − j T0 ) d x. (2.52)
x=t−kT0 −T0

Let

kT0 − j T0 = mT0 . (2.53)

Substituting for j we get



∞ ∞
1  
E[X (t)X (t − τ )] =
T0 k=−∞ m=−∞
 t−kT0
p(x) p(x + mT0 − τ ) d x. (2.54)
x=t−kT0 −T0

Now we interchange the order of summation and combine the summation over
k and the integral to obtain
∞  ∞
1 
E[X (t)X (t − τ )] = p(x) p(x + mT0 − τ ) d x
T0 m=−∞ x=−∞

1 
= R p (τ − mT0 ) = R X (τ ), (2.55)
T0 m=−∞

where R p (τ ) is the autocorrelation of p(t). Thus, the autocorrelation of X (t)


is also periodic with a period T0 , hence X (t) is a cyclostationary random
process. This is illustrated in Fig. 2.3c.
Since the random process is periodic, the time-averaged mean is given by
 td +T0
1
< x(t) > = x(t) dt
T0 t=td
A
= (2.56)
2
independent of td . Comparing (2.49) and (2.56) we find that X (t) is ergodic
in the mean.
The time-averaged autocorrelation is given by
 T0 /2
1
< x(t)x(t − τ ) > = x(t)x(t − τ ) dt
T0 t=−T0 /2
∞
1
= Rg (τ − mT0 ), (2.57)
T0 m=−∞

where Rg (τ ) is the autocorrelation of the generating function of x(t) (the


generating function has been discussed earlier in this chapter of Haykin 2nd
ed). The generating function can be conveniently taken to be

p(t − td ) for td ≤ t < td + T0


g(t) = , (2.58)
0 elsewhere

where p(t) is illustrated in Fig. 2.3a. Let P( f ) be the Fourier transform of


p(t). We have
Fig. 2.3 Computing the autocorrelation of a periodic wave with random timing phase: (a) the pulse p(t); (b) its autocorrelation R_p(τ); (c) the autocorrelation R_X(τ)

Fig. 2.4 A periodic waveform x(t) (period T_0) with a random timing phase t_d

Rg (τ ) = g(t)  g(−t)  |P( f )|2  p(t)  p(−t) = R p (τ ). (2.59)

Therefore, comparing (2.55) and (2.57) we find that X (t) is ergodic in the
autocorrelation.
10. A signal x(t) with period T0 and delay td represents the sample function of a
random process X (t). This is illustrated in Fig. 2.4. The delay td is a random
variable which is uniformly distributed in [0, T0 ).
(a) Determine the mean and autocorrelation of X (t) using ensemble averaging.
(b) Determine the mean and autocorrelation of X (t) using time averaging.
(c) Is X (t) WSS?
(d) Is X (t) ergodic in the mean and the autocorrelation?

• Solution: Instead of solving this particular problem we try to solve a more


general problem. Let p(t) denote an arbitrary pulse shape. Consider a random
process X (t) defined by


X (t) = p(t − kT0 − td ), (2.60)
k=−∞

where td is a uniformly distributed random variable in the range [0, T0 ).


Clearly, in the absence of td , X (t) is no longer a random process and it simply
becomes a periodic waveform. We wish to find out the mean and autocorre-
lation of the random process defined in (2.60). The mean of X (t) is given
by
 ∞


E[X (t)] = E p(t − kT0 − td )
k=−∞
 T0 ∞

1
= p(t − kT0 − td ) dtd . (2.61)
T0 td =0 k=−∞

Interchanging the order of integration and summation and substituting

t − kT0 − td = x (2.62)

we get
∞  t−kT0
1 
E[X (t)] = p(x) d x. (2.63)
T0 k=−∞ x=t−kT0 −T0

Combining the summation and the integral we get


 ∞
1
E[X (t)] = p(x) d x. (2.64)
T0 x=−∞

For the given problem

3A
E[X (t)] = . (2.65)
8
The autocorrelation can be computed as
 ∞

E[X (t)X (t − τ )] = E p(t − kT0 − td )
k=−∞



p(t − τ − j T0 − td )⎦
j=−∞
∞ ∞
1  
=
T0 k=−∞ j=−∞
 T0
p(t − kT0 − td ) p(t − τ − j T0 − td ) dtd(2.66)
.
td =0

Substituting

t − kT0 − td = x (2.67)

we get
∞ ∞
1  
E[X (t)X (t − τ )] =
T0 k=−∞ j=−∞
 t−kT0
p(x) p(x + kT0 − τ − j T0 ) d x.(2.68)
x=t−kT0 −T0

Let

kT0 − j T0 = mT0 . (2.69)

Substituting for j we get


∞ ∞
1  
E[X (t)X (t − τ )] =
T0 k=−∞ m=−∞
 t−kT0
p(x) p(x + mT0 − τ ) d x. (2.70)
x=t−kT0 −T0

Now we interchange the order of summation and combine the summation over
k and the integral to obtain
∞  ∞
1 
E[X (t)X (t − τ )] = p(x) p(x + mT0 − τ ) d x
T0 m=−∞ x=−∞

1 
= R p (τ − mT0 ) = R X (τ ), (2.71)
T0 m=−∞

where R p (τ ) is the autocorrelation of p(t). Thus, the autocorrelation of X (t)


is also periodic with a period T0 , hence X (t) is a cyclostationary random
Fig. 2.5 Computing the autocorrelation of a periodic wave with random timing phase: the pulse p(t), its autocorrelation R_p(τ), and the autocorrelation R_X(τ)

process. This is illustrated in Fig. 2.5c.


Since the random process is periodic, the time-averaged mean is given by
 td +T0
1
< x(t) > = x(t) dt
T0 t=td
3A
= (2.72)
8
independent of td . Comparing (2.65) and (2.72) we find that X (t) is ergodic
in the mean.
The time-averaged autocorrelation is given by
 T0 /2
1
< x(t)x(t − τ ) > = x(t)x(t − τ ) dt
T0 t=−T0 /2
∞
1
= Rg (τ − mT0 ), (2.73)
T0 m=−∞

where Rg (τ ) is the autocorrelation of the generating function of x(t) (the


generating function has been discussed earlier in this chapter of Haykin 2nd
ed). The generating function can be conveniently taken to be

p(t − td ) for td ≤ t < td + T0


g(t) = (2.74)
0 elsewhere

where p(t) is illustrated in Fig. 2.5a. Let P( f ) be the Fourier transform of


p(t). We have

Rg (τ ) = g(t)  g(−t)  |P( f )|2  p(t)  p(−t) = R p (τ ). (2.75)

Therefore, comparing (2.71) and (2.73) we find that X (t) is ergodic in the
autocorrelation.

11. For two jointly Gaussian random variables X and Y with means m X and m Y ,
variances σ 2X and σY2 and coefficient of correlation ρ, compute the conditional
pdf of X given Y . Hence compute the values of E[X |Y ] and var(X |Y ).
• Solution: The joint pdf of two real-valued Gaussian random variables X and
Y is given by

1
f X, Y (x, y) = 
2πσ X σY 1 − ρ2
 
σY2 (x − m X )2 − 2σ X σY ρ(x − m X )(y − m Y ) + σ 2X (y − m Y )2
× exp − .
2σ 2X σY2 (1 − ρ2 )
(2.76)

We also know that


 
1 (y − m Y )2
f Y (y) = √ exp − . (2.77)
σY 2π 2σY2

Therefore

f X Y (x, y)
f X |Y (x|y) =
f Y (y)
1
= 
σ X 2π(1 − ρ2 )
 
σ 2 A2 − 2σ X σY ρAB + ρ2 σ 2X B 2
× exp − Y , (2.78)
2σ 2X σY2 (1 − ρ2 )

where we have made the substitution

A = x − mX
B = y − mY . (2.79)

Simplifying (2.78) further we get

1
f X |Y (x|y) = 
σ X 2π(1 − ρ2 )
 
(σY (x − m X ) − ρσ X (y − m Y ))2
× exp −
2σ 2X σY2 (1 − ρ2 )
1
= 
σ X 2π(1 − ρ2 )
 
(x − (m X + ρ(σ X /σY )(y − m Y )))2
× exp − . (2.80)
2σ 2X (1 − ρ2 )

By inspection we conclude from the above equation that X |Y is also a Gaussian


random variable with mean and variance given by
σX
E[X |Y ] = m X + ρ (y − m Y )
σY
var (X |Y ) = σ 2X (1 − ρ2 ). (2.81)

12. If
 
1 (x − m)2
f X (x) = √ exp − (2.82)
σ 2π 2σ 2
   
compute (a) E (X − m)2n and (b) E (X − m)2n−1 .
    • Solution: Clearly

        E[(X − m)^{2n−1}] = 0                                     (2.83)

      since the integrand is an odd function. To compute E[(X − m)^{2n}] we use the method of integration by parts. We have

        E[(X − m)^{2n}] = (1/(σ√(2π))) ∫_{x=−∞}^{∞} (x − m)^{2n−1} (x − m) e^{−(x−m)²/(2σ²)} dx.   (2.84)

      The first function is taken as (x − m)^{2n−1}. The integral of the second function is

        ∫_x (x − m) exp(−(x − m)²/(2σ²)) dx = −σ² exp(−(x − m)²/(2σ²)).   (2.85)

      Therefore (2.84) becomes

        E[(X − m)^{2n}] = (1/(σ√(2π))) [ −σ² (x − m)^{2n−1} exp(−(x − m)²/(2σ²)) |_{x=−∞}^{∞}
                          − ∫_{x=−∞}^{∞} (2n − 1)(x − m)^{2n−2} (−σ²) exp(−(x − m)²/(2σ²)) dx ]
                        = (2n − 1) σ² E[(x − m)^{2n−2}].          (2.86)

      Using the above recursion we get

        E[(X − m)^{2n}] = 1 · 3 · 5 ··· (2n − 1) σ^{2n}.          (2.87)
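A short Monte-Carlo check of (2.87), added here for illustration; the mean, standard deviation, and sample size are arbitrary test values.

import numpy as np

rng = np.random.default_rng(3)
m, sigma = 2.0, 1.5
x = rng.normal(m, sigma, size=5_000_000)

for n in (1, 2, 3):
    emp = np.mean((x - m) ** (2 * n))
    double_fact = np.prod(np.arange(1, 2 * n, 2))      # 1*3*...*(2n-1)
    print(2 * n, emp, double_fact * sigma ** (2 * n))  # agree to Monte-Carlo accuracy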

13. X is a uniformly distributed random variable between zero and one. Find out the
transformation Y = g(X ) such that Y is a Rayleigh distributed random variable.
You can assume that the mapping between X and Y is one-to-one and Y mono-
tonically increases with X . There should not be any unknown constants in your
answer. The Rayleigh pdf is given by
 
        f_Y(y) = (y/σ²) exp(−y²/(2σ²))   for y ≥ 0.               (2.88)

    Note that f_Y(y) is zero for y < 0.

    • Solution: Note that

        f_X(x) = { 1  for 0 ≤ x ≤ 1
                 { 0  elsewhere.                                  (2.89)

      Since the mapping is assumed to be one-to-one and monotonically increasing (dy/dx is always positive), we have

        f_Y(y) dy = f_X(x) dx
        ⇒ ∫_{y=0}^{Y} f_Y(y) dy = ∫_{x=0}^{X} f_X(x) dx.          (2.90)

      Substituting for f_Y(y) and f_X(x) we get

        1 − e^{−Y²/(2σ²)} = X
        ⇒ Y² = −2σ² ln(1 − X), i.e., Y = σ √(−2 ln(1 − X)).       (2.91)
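Equation (2.91) is the classical inverse-transform sampler for the Rayleigh distribution. The Python sketch below (an added illustration, with arbitrary σ and sample size) draws samples this way and compares the histogram with (2.88).

import numpy as np

rng = np.random.default_rng(4)
sigma = 2.0
x = rng.uniform(0, 1, size=1_000_000)
y = sigma * np.sqrt(-2 * np.log(1 - x))            # transformation (2.91)

hist, edges = np.histogram(y, bins=60, range=(0, 8), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
pdf = (centres / sigma**2) * np.exp(-centres**2 / (2 * sigma**2))
print(np.max(np.abs(hist - pdf)))                  # small binning/Monte-Carlo error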

14. Given that Y is a Gaussian distributed random variable with zero mean and
variance σ 2 , use the Chernoff bound to compute the probability that Y ≥ δ for
δ > 0.
    • Solution: We know that

        P[Y ≥ δ] ≤ E[e^{μ(Y−δ)}],                                 (2.92)

      where μ needs to be found out such that the RHS is minimized. Hence we set

        (d/dμ) E[e^{μ(Y−δ)}] = 0.                                 (2.93)

      The optimum value of μ = μ_0 satisfies

        E[Y e^{μ_0 Y}] = δ E[e^{μ_0 Y}]
        ⇒ (1/(σ√(2π))) ∫_{y=−∞}^{∞} y e^{μ_0 y} e^{−y²/(2σ²)} dy = (δ/(σ√(2π))) ∫_{y=−∞}^{∞} e^{μ_0 y} e^{−y²/(2σ²)} dy
        ⇒ μ_0 = δ/σ².                                             (2.94)

      Therefore

        P[Y ≥ δ] ≤ e^{−μ_0 δ} E[e^{μ_0 Y}]
        ⇒ P[Y ≥ δ] ≤ e^{−δ²/σ²} e^{δ²/(2σ²)}
        ⇒ P[Y ≥ δ] ≤ e^{−δ²/(2σ²)}.                               (2.95)
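The Chernoff bound (2.95) can be compared against the exact tail probability estimated by simulation. The Python sketch below is an added illustration with arbitrary σ and δ values.

import numpy as np

rng = np.random.default_rng(6)
sigma = 1.0
y = rng.normal(0, sigma, size=5_000_000)

for delta in (1.0, 2.0, 3.0):
    exact = np.mean(y >= delta)                        # Monte-Carlo estimate of P[Y >= delta]
    bound = np.exp(-delta**2 / (2 * sigma**2))         # Chernoff bound (2.95)
    print(delta, exact, bound)                         # bound always exceeds the estimate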

15. If two Gaussian distributed random variables, X and Y are uncorrelated, are they
statistically independent? Justify your statement.
    • Solution: Since X and Y are uncorrelated

        E[(X − m_X)(Y − m_Y)] = 0
        ⇒ ρ = E[(X − m_X)(Y − m_Y)]/(σ_X σ_Y) = 0.                (2.96)

      Therefore the joint pdf is given by

        f_{XY}(x, y) = (1/(σ_X√(2π))) e^{−(x−m_X)²/(2σ_X²)} · (1/(σ_Y√(2π))) e^{−(y−m_Y)²/(2σ_Y²)}
                     = f_X(x) f_Y(y).                             (2.97)

      Hence X and Y are also statistically independent.

Fig. 2.6 Power spectral density S_N(f) (in Watts/Hz) of a narrowband noise process


16. (Haykin 1983) The power spectral density of a narrowband noise process is
shown in Fig. 2.6. The carrier frequency is 5 Hz.
(a) Plot the power spectral density of Nc (t) and Ns (t).
(b) Plot the cross-spectral density S Nc Ns ( f ).

    • Solution: We know that

        S_{N_c}(f) = S_{N_s}(f) = { S_N(f − f_c) + S_N(f + f_c)  for |f| < f_c
                                  { 0                            otherwise.     (2.98)

      The psd of the in-phase and quadrature components is plotted in Fig. 2.7. We also know that the cross-spectral densities are given by

        S_{N_c N_s}(f) = { j[S_N(f + f_c) − S_N(f − f_c)]  for |f| < f_c
                         { 0                               otherwise
                       = −S_{N_s N_c}(f).                         (2.99)

      The cross-spectral density S_{N_c N_s}(f) is plotted in Fig. 2.8.

17. Let a random variable X have a uniform pdf over −1 ≤ x ≤ 3. Compute the pdf of Z = √|X|. The transformation is plotted in Fig. 2.9.
    • Solution: Using z and x as dummy variables corresponding to Z and X, respectively, we note that

        z² = { x   for x > 0
             { −x  for x < 0.                                     (2.100)

      Differentiating both sides we get


Fig. 2.7 Power spectral densities of N_c(t) and N_s(t)

        2z dz = { dx   for x > 0
                { −dx  for x < 0.                                 (2.101)

      This implies that

        dz/dx = { 1/(2z)   for x > 0
                { −1/(2z)  for x < 0
        ⇒ |dz/dx| = 1/(2z).                                       (2.102)
Fig. 2.8 Cross-spectral densities of N_c(t) and N_s(t)

      Now, in the range −1 ≤ x ≤ 1 corresponding to 0 ≤ z ≤ 1, the mapping from X to Z is surjective (many-to-one) as indicated in Fig. 2.9. Hence

        f_Z(z) = f_X(x)/|dz/dx| |_{x=−z²} + f_X(x)/|dz/dx| |_{x=z²}   for 0 ≤ z ≤ 1
        ⇒ f_Z(z) = 2 (1/4)/(1/(2z))
                 = z   for 0 ≤ z ≤ 1.                             (2.103)

      In the range 1 ≤ x ≤ 3 corresponding to 1 ≤ z ≤ √3 the mapping from X to Z is one-to-one (injective). Hence

Fig. 2.9 The transformation Z = √|X|

        f_Z(z) = f_X(x)/|dz/dx| |_{x=z²}
        ⇒ f_Z(z) = (1/4)/(1/(2z))
                 = z/2   for 1 ≤ z ≤ √3.                          (2.104)

      To summarize

        f_Z(z) = { z    for 0 ≤ z ≤ 1
                 { z/2  for 1 ≤ z ≤ √3.                           (2.105)
for 1 ≤ z ≤ 3.

18. Given that

        F_X(x) = (π + 2 tan^{−1}(x)) / (2π)   for −∞ < x < ∞      (2.106)

    find the pdf of the random variable Z given by

        Z = { X²   for X ≥ 0
            { −1   for X < 0

• Solution: Note that

FX (−∞) = 0
FX (∞) = 1. (2.108)

The pdf of X is given by

d FX (x)
f X (x) =
dx
1
= . (2.109)
π(1 + x 2 )

Note that

dz 2x for x ≥ 0
=
dx 0 for x < 0


dz 2 z for x ≥ 0
⇒ = (2.110)
dx 0 for x < 0.

We also note that

P(Z = −1) = P(X < 0) = FX (0) = 1/2. (2.111)

Therefore f Z (z) must have a delta function at Z = −1. Further, the mapping
from X to Z is one-to-one (injective) in the range 0 ≤ x < ∞. Hence, the pdf
of Z is given by

kδ(z + 1) for z = −1
f Z (z) = (2.112)
f X (x)/(dz/d x)|x=√z for 0 ≤ z < ∞,

where k is a constant such that the pdf of Z integrates to unity. Simplifying


the above expression we get

kδ(z + 1) for z = −1
f Z (z) = √ (2.113)
1/(2π(1 + z) z) for 0 ≤ z < ∞.

Since
 ∞
f Z (z) dz = P(Z > 0) = P(X > 0) = 1 − FX (0) = 1/2 (2.114)
z=0

we must have k = 1/2.

19. (Papoulis 1991) Let X be a nonnegative continuous random variable and let a
be any positive constant. Prove the Markov inequality given by
Fig. 2.10 Illustration of the Markov inequality

Fig. 2.11 The transformation Y = g(X)

P(X ≥ a) ≤ m X /a, (2.115)

where m X = E[X ].
    • Solution: Consider a function g(X) given by

        g(X) = { 1  for X > a
               { 0  for X < a                                     (2.116)

      as shown in Fig. 2.10. Clearly

        g(X) ≤ X/a   for X, a ≥ 0
        ⇒ ∫_{x=0}^{∞} g(x) f_X(x) dx ≤ ∫_{x=0}^{∞} (x/a) f_X(x) dx
        ⇒ ∫_{x=a}^{∞} f_X(x) dx ≤ m_X/a
        ⇒ P(X > a) ≤ m_X/a   (proved).                            (2.117)

20. Let a random variable X have a uniform pdf over −2 ≤ X ≤ 2. Compute the
pdf of Y = g(X ), where

2X 2 for |X | ≤ 1
g(X ) = (2.118)
3|X | − 1 for 1 ≤ |X | ≤ 2.

• Solution: We first note that the mapping g(x) is many-to-one. This is illustrated
in Fig. 2.11. Let y and x denote particular values taken by the RVs Y and X ,
respectively. Therefore we have

[ f X (x) + f X (−x)] |d x| = f Y (y) |dy|. (2.119)

Note that

1/4 for − 2 < x < 2


f X (x) = (2.120)
0 elsewhere.

      Substituting (2.120) in (2.119) we have:

        (1/2) |dx/dy| = f_Y(y).                                   (2.121)

      Next we observe that

        |dx/dy| = { 1/(2√(2y))  for 0 < y < 2
                  { 1/3         for 2 < y < 5.                    (2.122)

      Substituting (2.122) in (2.121) we get

        f_Y(y) = { 1/(4√(2y))  for 0 < y < 2
                 { 1/6         for 2 < y < 5.                     (2.123)

21. A Gaussian distributed random variable X having zero mean and variance σ 2X is
transformed by a square-law device defined by Y = X 2 .
(a) Compute the pdf of Y .
(b) Compute E[Y ].

• Solution: We first note that the mapping is many-to-one. Thus the probability
that Y lies in the range [y, y + dy] is equal to the probability that X lies in the
range [x1 , x1 + d x1 ] and [x2 , x2 + d x2 ], as illustrated in Fig. 2.12. Observe
that d x2 is negative. Mathematically

        f_Y(y) dy = f_X(x_1) dx_1 − f_X(x_2) dx_2
        ⇒ f_Y(y) = f_X(x)/|dy/dx| |_{x=√y} + f_X(x)/|dy/dx| |_{x=−√y},   (2.124)

      where

        dy/dx_1 = dy/dx|_{x=x_1}                                  (2.125)

      with x_1 = √y and x_2 = −√y. Since
Fig. 2.12 The transformation Y = X²: the two values x_1 = √y and x_2 = −√y both map to y

        f_X(x) = (1/(σ_X√(2π))) e^{−x²/(2σ_X²)}
        dy/dx = 2x                                                (2.126)

      we have

        f_Y(y) = (1/(σ_X√(2πy))) e^{−y/(2σ_X²)}   for y ≥ 0.      (2.127)

      Finally

        E[Y] = E[X²] = σ_X².                                      (2.128)
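The density (2.127) (a chi-square density with one degree of freedom, up to the scale σ_X) and the mean (2.128) can be confirmed by simulation. The Python sketch below is an added illustration; σ_X, the sample size, and the histogram range are arbitrary.

import numpy as np

rng = np.random.default_rng(5)
sigma_X = 1.3
y = rng.normal(0, sigma_X, size=2_000_000) ** 2

hist, edges = np.histogram(y, bins=50, range=(0.1, 6.0))
widths = np.diff(edges)
centres = 0.5 * (edges[:-1] + edges[1:])
emp = hist / (len(y) * widths)                      # empirical density on [0.1, 6]
pdf = np.exp(-centres / (2 * sigma_X**2)) / (sigma_X * np.sqrt(2 * np.pi * centres))
print(np.max(np.abs(emp - pdf)))                    # small
print(np.mean(y), sigma_X**2)                       # E[Y] = sigma_X^2, cf. (2.128)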

22. Consider two real-valued random variables A and B.


(a) Prove that E 2 [AB] ≤ E[A2 ]E[B 2 ].
(b) Now, let X (t) = A and X (t + τ + τ p ) − X (t + τ ) = B. Prove that if R X (τ p )
= R X (0) for τ p = 0, then R X (τ ) is periodic with period τ p .

• Solution: Note that


 
E (Ax ± B)2 ≥ 0
   
⇒ x 2 E A2 ± 2x E[AB] + E B 2 ≥ 0. (2.129)

Thus we get a quadratic equation in x whose discriminant is nonpositive.


Therefore
   
E 2 [AB] − E A2 E B 2 ≤ 0 (2.130)

which proves the first part. Substituting

A = X (t)
B = X (t + τ + τ p ) − X (t + τ ) (2.131)

in we obtain

E[AB] = E[X (t)[X (t + τ + τ p ) − X (t + τ )]]


= R X (τ + τ p ) − R X (τ )
 
E A2 = R X (0)
 
E B 2 = 2[R X (0) − R X (τ p )]. (2.132)

Substituting (2.132) in (2.130) we get


 2
R X (τ + τ p ) − R X (τ ) ≤ 2[R X (0) − R X (τ p )]. (2.133)

Since it is given that R X (0) = R X (τ p ), (2.133) becomes


 2
R X (τ + τ p ) − R X (τ ) ≤ 0. (2.134)

However (2.134) cannot be less than zero; it can only be equal to zero. There-
fore

R X (τ + τ p ) = R X (τ ) (2.135)

which implies that R X (τ ) is periodic with period τ p .

23. Let X and Y be two independent and uniformly distributed random variables
over [−a, a].
(a) Compute the pdf of Z = X + Y .
(b) If a = π and X , Y , Z denote the phase over [−π, π], recompute the pdf of
Z . Assume that X and Y are independent.
• Solution: We note that the pdf of the sum of two independent random variables
is equal to the convolution of the individual pdfs. Therefore

f Z (z) = f X (z)  f Y (z)


 ∞
= f X (α) f Y (z − α) dα. (2.136)
α=−∞
Fig. 2.13 The pdf of Z = X + Y: (a) f_X(x); (b) f_Z(z); (c), (d) the pdf of Z wrapped onto [−π, π] when a = π

For the given problem, f Z (z) is illustrated in Fig. 2.13b.


When a = π, we note that the pdf of Z extends over [−2π, 2π]. Since it
is given that the phase extends only over [−π, π], the values of Z over
[−2π, −π] is identical to [0, π]. Similarly, the values of Z over [π, 2π]
is identical to [−π, 0]. Therefore, the modified pdf of Z over [−π, π] is
computed as follows:

f Z (z) + f Z (z − 2π) for 0 ≤ z ≤ π


f Z (z) = (2.137)
f Z (z) + f Z (z + 2π) for − π ≤ z ≤ 0.

This is illustrated in Fig. 2.13c. The resulting pdf is shown in Fig. 2.13d. Thus
we find that the modified pdf of Z is also uniform in [−π, π].
24. Let X and Y be two independent RVs with pdfs given by

1/4 for − 2 ≤ x ≤ 2
f X (x) = (2.138)
0 otherwise

and

Ae−3y for 0 ≤ y < ∞


f Y (y) = (2.139)
0 otherwise.

(a) Find A.
(b) Find the pdf of Z = 3X + 4Y .
• Solution: We have
 ∞
f Y (y) dy = 1
y=0
 ∞
⇒A e−3y dy = 1
y=0
⇒ A = 3. (2.140)

Let U = 3X . Then

1/12 for − 6 ≤ u ≤ 6
fU (u) = (2.141)
0 otherwise.

Similarly, let V = 4Y . Clearly 0 ≤ v < ∞. Since the mapping between V


and Y is one-to-one, we have

f V (v) dv = f Y (y) dy

f Y (y)
⇒ f V (v) =
dv/dy y=v/4
3
⇒ f V (v) = e−3v/4 for 0 ≤ v < ∞
4
3
= e−3v/4 S(v), (2.142)
4
where S(v) denotes the modified unit step function defined by

1 for v ≥ 0
S(v) = (2.143)
0 for v < 0.

Since X and Y are independent, so are U and V . Moreover, Z = U + V ,


therefore −6 ≤ z < ∞. Hence

        f_Z(z) = f_U(z) ∗ f_V(z)
               = ∫_{α=−∞}^{∞} f_U(α) f_V(z − α) dα
               = { (1/16) e^{−3z/4} ∫_{α=−6}^{z} e^{3α/4} dα   for z ≤ 6
                 { (1/16) e^{−3z/4} ∫_{α=−6}^{6} e^{3α/4} dα   for z ≥ 6
               = { (1/12) e^{−3z/4} (e^{3z/4} − e^{−9/2})      for z ≤ 6
                 { (1/12) e^{−3z/4} (e^{9/2} − e^{−9/2})       for z ≥ 6.    (2.144)
Fig. 2.14 Two filters h_1(t) and h_2(t) connected in cascade: X(t) is the input, V(t) the output of h_1(t), and Y(t) the output of h_2(t)

It can be verified that


 ∞
f Z (z) dz = 1. (2.145)
z=−6

25. (Haykin 1983) Consider two linear filters h 1 (t) and h 2 (t) connected in cascade
as shown in Fig. 2.14. Let X (t) be a WSS process with autocorrelation R X (τ ).
(a) Find the autocorrelation function of Y (t).
(b) Find the cross-correlation function RV Y (t).

• Solution: Let

g(t) = h 1 (t)  h 2 (t). (2.146)

Then

Y (t) = X (t)  g(t). (2.147)

Hence

RY (τ ) = E [Y (t)Y (t − τ )]
 ∞  ∞ 
=E g(α)X (t − α) dα g(β)X (t − τ − β) dβ
α=−∞ β=−∞
 ∞  ∞
= g(α)g(β)R X (τ + β − α) dα dβ. (2.148)
α=−∞ β=−∞

The cross-correlation is given by

RV Y (τ ) = E [V (t)Y (t − τ )]
 ∞
=E h 1 (α)X (t − α) dα
α=−∞
 ∞ 
× h 2 (β)V (t − τ − β) dβ
β=−∞
 ∞  ∞
=E h 1 (α)h 2 (β)X (t − α)
α=−∞ β=−∞
∞ 
× h 1 (γ)X (t − τ − β − γ) dγ dα dβ
γ=−∞
 ∞  ∞  ∞
= h 1 (α)h 2 (β)h 1 (γ)
α=−∞ β=−∞ γ=−∞
× R X (τ + β + γ − α) dγ dα dβ. (2.149)

26. (Haykin 1983) Consider a pair of WSS processes X (t) and Y (t). Show that their
cross-correlations R X Y (τ ) and RY X (τ ) have the following properties:
(a) RY X (τ ) = R X Y (−τ ).
(b) |R X Y (τ )| ≤ 21 [R X (0) + RY (0)].
• Solution: We know that

RY X (τ ) = E [Y (t)X (t − τ )] . (2.150)

Now substitute t − τ = α to get

RY X (τ ) = E [Y (α + τ )X (α).] . (2.151)

Substituting τ = −β we get

RY X (−β) = E [Y (α − β)X (α)] = R X Y (β). (2.152)

Thus the first part is proved. To prove the second part we note that
 
E (X (t) ± Y (t − τ ))2 ≥0
1    
⇒ E X (t)2 + E Y (t − τ )2 ≥ ∓E [X (t)Y (t − τ )]
2
1
⇒ [R X (0) + RY (0)] ≥ ∓R X Y (τ )
2
1
⇒ [R X (0) + RY (0)] ≥ |R X Y (τ )|. (2.153)
2
27. (Haykin 1983) Given that a stationary random process X (t) has the autocorre-
lation function R X (τ ) and power spectral density S X ( f ) show that
(a) The autocorrelation function of d X (t)/dt is equal to minus the second
derivative of R X (τ ).
(b) The power spectral density of d X (t)/dt is equal to 4π 2 f 2 S X ( f ).
(c) If S X ( f ) = 2 rect ( f /W ), compute the power of d X (t)/dt.
    • Solution: Let

        Y(t) = dX(t)/dt.                                          (2.154)

      It is clear that Y(t) can be obtained by passing X(t) through an ideal differentiator. We know that the Fourier transform of an ideal differentiator is

        H(f) = j2πf.                                              (2.155)

      Hence the psd of Y(t) is

        S_Y(f) = S_X(f) |H(f)|²
               = 4π²f² S_X(f)
        ⇒ R_Y(τ) = ∫_{f=−∞}^{∞} S_Y(f) exp(j2πfτ) df
                 = ∫_{f=−∞}^{∞} 4π²f² S_X(f) exp(j2πfτ) df.       (2.156)

      Thus, the second part of the problem is proved. To prove the first part, we note that

        R_X(τ) = ∫_{f=−∞}^{∞} S_X(f) exp(j2πfτ) df
        ⇒ d²R_X(τ)/dτ² = ∫_{f=−∞}^{∞} −4π²f² S_X(f) exp(j2πfτ) df.   (2.157)

      Comparing (2.156) and (2.157) we see that

        R_Y(τ) = −d²R_X(τ)/dτ².                                   (2.158)

      The power of Y(t) is given by

        E[Y²(t)] = ∫_{f=−∞}^{∞} S_Y(f) df
                 = ∫_{f=−W/2}^{W/2} 8π²f² df
                 = (2/3) π² W³.                                   (2.159)
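As an added numerical illustration of (2.158)–(2.159): for S_X(f) = 2 rect(f/W) one has R_X(τ) = 2W sinc(Wτ), and the power of the derivative can be obtained either by integrating 4π²f²S_X(f) or as −R_X''(0). The Python sketch below checks both against (2/3)π²W³ for an arbitrary W.

import numpy as np

W = 3.0
f = np.linspace(-W / 2, W / 2, 200001)
power = np.trapz(4 * np.pi**2 * f**2 * 2.0, f)      # integral of 4*pi^2*f^2*S_X(f)
print(power, (2 / 3) * np.pi**2 * W**3)             # both ~177.65

RX = lambda tau: 2 * W * np.sinc(W * tau)           # inverse FT of S_X(f)
h = 1e-4
print(-(RX(h) - 2 * RX(0.0) + RX(-h)) / h**2)       # -R_X''(0), ~ same value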
28. (Haykin 1983) The psd of a random process X (t) is shown in Fig. 2.15.

Fig. 2.15 Power spectral density of a random process X(t): a delta function δ(f) plus a triangle of height 1 over |f| ≤ f_0

(a) Determine R X (τ ).
(b) Determine the dc power in X (t).
(c) Determine the ac power in X (t).
(d) What sampling-rates will give uncorrelated samples of X (t)? Are the sam-
ples statistically independent?
• Solution: The psd of X (t) can be written as
 
|f|
S X ( f ) = δ( f ) + 1 − . (2.160)
f0

We know that

A rect (t/T0 )  AT0 sinc ( f T0 )


⇒ A rect (t/T0 )  A rect (−t/T0 )  A2 T02 sinc2 ( f T0 )
 
|t|
⇒ A T0 1 −
2
 A2 T02 sinc2 ( f T0 ). (2.161)
T0

Applying duality
 
|f|
A2 f 0 1 −  A2 f 02 sinc2 (t f 0 ). (2.162)
f0

Given that A2 f 0 = 1. Hence


 
|f|
1−  f 0 sinc2 (t f 0 ). (2.163)
f0

Thus

R X (τ ) = 1 + f 0 sinc2 ( f 0 τ ). (2.164)

Now, consider a real-valued random process Z (t) given by

Z (t) = A + Y (t), (2.165)

where A is a constant and Y (t) is a zero-mean random process. Clearly

E[Z (t)] = A
R Z (τ ) = E[Z (t)Z (t − τ )] = A2 + RY (τ ). (2.166)

Thus we conclude that if a random process has a DC component equal to A,


then the autocorrelation has a constant component equal to A2 . Hence from
(2.164) we get

E[X (t)] = ±1 = m X (say). (2.167)

Hence the DC power (contributed by the delta function of the psd) is unity.
The AC power (contributed by the triangular part of the psd) is f 0 .
The covariance of X (t) is

K X (τ ) = cov (X (t)X (t − τ )) = E[(X (t) − m X )(X (t − τ ) − m X )]


= f 0 sinc2 ( f 0 τ ). (2.168)

It is clear that

K_X(τ) = f0 for τ = 0, and K_X(τ) = 0 for τ = n(k/f0) with n, k ≠ 0,   (2.169)

where n and k are positive integers. Thus, when X (t) is sampled at a rate
equal to f 0 /k, the samples are uncorrelated. However the samples may not be
statistically independent.

29. The random variable X has a uniform distribution over 0 ≤ x ≤ 2. For the ran-
dom process defined by V (t) = 6e X t compute
(a) E[V(t)]
(b) E[V(t)V(t − τ)]
(c) E[V²(t)]

• Solution: Note that


 
E[V(t)] = 6 E[e^{Xt}]
        = (6/2) ∫_{x=0}^{2} e^{xt} dx
        = (3/t)(e^{2t} − 1).   (2.170)
The autocorrelation is given by
 
E[V(t)V(t − τ)] = 36 E[e^{Xt} e^{X(t−τ)}]
               = (36/2) ∫_{x=0}^{2} e^{x(2t−τ)} dx
               = (36/(2(2t − τ))) (e^{2(2t−τ)} − 1)
               = (18/(2t − τ)) (e^{2(2t−τ)} − 1).   (2.171)

Hence

E[V²(t)] = E[V(t)V(t − τ)]|_{τ=0} = (9/t)(e^{4t} − 1).   (2.172)

Fig. 2.16 Autocorrelation R_Y(τ) of a random process Y(t), shown for −4T ≤ τ ≤ 4T
t

30. (Haykin 1983) A random process Y(t) consists of a DC component equal to √(3/2) V, a periodic component G(t) having a random timing phase, and a random component X(t). Both G(t) and X(t) have zero mean and are independent of each other. The autocorrelation of Y(t) is shown in Fig. 2.16. Compute
(a) The DC power.
(b) The average power of G(t). Sketch a sample function of G(t).
(c) The average power of X (t). Sketch a sample function of X (t).

• Solution: The random process Y (t) can be written as:

Y (t) = A + G(t) + X (t) (2.173)

where G(t) and X (t) have zero mean and are independent of each other. The
autocorrelation of Y (t) can be written as

RY (τ ) = E[Y (t)Y (t − τ )]
= E[(A + G(t) + X (t))(A + G(t − τ ) + X (t − τ ))]
= A2 + RG (τ ) + R X (τ ), (2.174)

where RG (τ ) and R X (τ ) denote the autocorrelation of G(t) and X (t), respec-


tively, and A2 denotes the DC power. Note that since G(t) is periodic with a
period T0 = 2T , we have

RG (τ ) = E[G(t)G(t − τ )]
= E[G(t)G(t − τ − kT0 )]
= RG (τ + kT0 ). (2.175)

Therefore R_G(τ) is also periodic, with period T0. Moreover, since A is given to be equal to √(3/2), we have A² = 3/2. Therefore the DC power is 3/2.
By inspecting Fig. 2.16 we conclude that G(t) and X (t) have the autocor-
relation as indicated in Fig. 2.17a, b, respectively. Thus, the power in the
periodic component is RG (0) = 0.5 and the power in the random component
is R X (0) = 1.
Recall that a periodic signal with period T0 and random timing phase td can
be expressed as a random process:


G(t) = Σ_{k=−∞}^{∞} p(t − kT0 − td),   (2.176)

where td is a uniformly distributed random variable in the range [0, T0 ) and


p(·) is a real-valued waveform. A sample function of G(t) and the corre-
sponding p(t) is shown in Fig. 2.17c. Recall that the autocorrelation of G(t)
is

R_G(τ) = (1/T0) Σ_{m=−∞}^{∞} R_p(τ − mT0),   (2.177)

where R p (·) denotes the autocorrelation of p(t) in (2.176).


Similarly, recall that a random binary wave is given by


X(t) = Σ_{k=−∞}^{∞} S_k p(t − kT0 − α),   (2.178)

where Sk denotes a discrete random variable taking values ±A with equal


probability, p(·) denotes a real-valued waveform, 1/T0 denotes the bit-rate
and α denotes a random timing phase uniformly distributed in [0, T0 ). It is
assumed that Sk is independent of α and Sk is independent of Sj for k ≠ j.
A sample function of X (t) along with the corresponding p(t) is shown in
Fig. 2.17d. Recall that the autocorrelation of X (t) is

R_X(τ) = (A²/T0) R_p(τ),   (2.179)

where R p (·) is the autocorrelation of p(·) in (2.178). In this problem, Sk = ±1.
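Remark: (2.179) is easily illustrated by simulation. The sketch below (rectangular p(t), A = 1, and the sampling grid are assumptions made for the example, not taken from the text) estimates the time-averaged autocorrelation of a random binary wave and compares it with the triangular form A²(1 − |τ|/T0).

import numpy as np

rng = np.random.default_rng(1)
ns_per_bit = 20                                 # samples per bit interval T0
T0, dt = 1.0, 1.0 / 20
bits = rng.choice([-1.0, 1.0], size=50_000)     # S_k = +/-1, equally likely
x = np.repeat(bits, ns_per_bit)                 # rectangular pulse shape p(t)
x = np.roll(x, rng.integers(0, ns_per_bit))     # random timing phase alpha

for lag in range(0, 2 * ns_per_bit + 1, 5):
    r_hat = np.mean(x[lag:] * x[:len(x) - lag])
    tau = lag * dt
    r_theory = max(0.0, 1.0 - abs(tau) / T0)    # (A^2/T0) R_p(tau) with A = 1
    print(f"tau={tau:4.2f}  estimate={r_hat:+.3f}  theory={r_theory:+.3f}")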


31. The moving average of a random signal X (t) is defined as
Y(t) = (1/T) ∫_{λ=t−T/2}^{t+T/2} X(λ) dλ.   (2.180)

Fig. 2.17 a Autocorrelation of G(t) (periodic, with peak value 0.5). b Autocorrelation of X(t) (triangular, with peak value 1 over |τ| ≤ T). c Sample function of G(t) and the corresponding p(t). d Sample function of X(t) and the corresponding p(t)

(a) The output Y (t) can be written as

Y (t) = X (t)  h(t). (2.181)

Identify h(t).
(b) Obtain a general expression for the autocorrelation of Y (t) in terms of the
autocorrelation of X (t) and the autocorrelation of h(t).
(c) Using the above relation, compute RY (τ ).

• Solution: The random process Y (t) can be written as

Y (t) = X (t)  h(t), (2.182)

where
 
h(t) = (1/T) rect(t/T).   (2.183)

We know that
 ∞  ∞
RY (τ ) = h(α)h(β)R X (τ − α + β) dα dβ. (2.184)
α=−∞ β=−∞

Let

α − β = λ. (2.185)

Substituting for β we get


 ∞  ∞
RY (τ ) = h(α)h(α − λ)R X (τ − λ) dα dλ. (2.186)
α=−∞ λ=−∞

Interchanging the order of the integrals we get


R_Y(τ) = ∫_{λ=−∞}^{∞} R_X(τ − λ) [ ∫_{α=−∞}^{∞} h(α)h(α − λ) dα ] dλ
       = ∫_{λ=−∞}^{∞} R_X(τ − λ) R_h(λ) dλ.   (2.187)

For the given problem


R_h(τ) = (1/T)(1 − |τ|/T) for |τ| < T, and 0 elsewhere.   (2.188)

Therefore

R_Y(τ) = (1/T) ∫_{λ=−T}^{T} R_X(τ − λ) (1 − |λ|/T) dλ.   (2.189)
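Remark: for a white input, R_X(τ) behaves like a delta function and (2.189) reduces to R_Y(τ) ∝ R_h(τ), i.e., a triangle of width 2T. The sketch below (unit-variance white samples and a 25-tap discrete-time window are assumptions of the example) verifies this numerically.

import numpy as np

rng = np.random.default_rng(2)
taps = 25                                   # discrete-time stand-in for T
x = rng.standard_normal(1_000_000)          # white, unit variance
y = np.convolve(x, np.ones(taps) / taps, mode="valid")

for lag in (0, 5, 10, 20, 24, 30):
    r_hat = np.mean(y[lag:] * y[:len(y) - lag])
    r_theory = max(0.0, 1.0 - lag / taps) / taps   # triangular, peak 1/taps
    print(f"lag={lag:2d}  estimate={r_hat:.5f}  theory={r_theory:.5f}")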

32. Let a random process X (t) be applied at the input of a filter with impulse response
h(t). Let the output process be denoted by Y (t).
(a) Show that

RY X (τ ) = h(τ )  R X (τ ). (2.190)

(b) Show that SY X ( f ) = H ( f )S X ( f ).

• Solution: Note that


 ∞
Y (t) = h(α)X (t − α) dα. (2.191)
α=−∞

Therefore

RY X (τ ) = E[Y (t)X (t − τ )]
 ∞ 
=E h(α)X (t − α)X (t − τ ) dα . (2.192)
α=−∞

Interchanging the order of the expectation and the integral,we get


 ∞
RY X (τ ) = h(α)E [X (t − α)X (t − τ )] dα
α=−∞
 ∞
= h(α)R X (τ − α) dα
α=−∞
= h(τ )  R X (τ ). (2.193)

Taking the Fourier transform of both sides and noting that convolution in the
time-domain is equivalent to multiplication in the frequency domain, we get

SY X ( f ) = H ( f )S X ( f ). (2.194)

33. Let X (t1 ) and X (t2 ) be two random variables obtained by observing the random
process X (t) at times t1 and t2 , respectively. It is given that X (t1 ) and X (t2 ) are
uncorrelated. Does this imply that X (t) is WSS?
• Solution: Since X (t1 ) and X (t2 ) are uncorrelated we have

E[(X (t1 ) − m X (t1 ))(X (t2 ) − m X (t2 ))] = 0


⇒ R X (t1 , t2 ) = m X (t1 )m X (t2 ), (2.195)
Fig. 2.18 PSD of N(t): S_N(f) = N0/2 for f1 ≤ |f| ≤ f2, and zero elsewhere

where m X (t1 ) and m X (t2 ) are the means at times t1 and t2 , respectively.
From the above equation, it is clear that X (t) need not be WSS.
34. The voltage at the output of a noise generator, whose statistics is known to be a
WSS Gaussian process, is measured with a DC voltmeter and a true rms (root
mean square) meter. The DC voltmeter reads 3 V. The rms meter, when it is AC
coupled (DC component is removed), reads 2 V.
(a) Find out the pdf of the voltage at any time.
(b) What would be the reading of the rms meter if it is DC coupled?

• Solution: The DC meter reads the average (mean) value of the voltage. Hence
the mean of the voltage is m = 3 V.
When the rms meter is AC coupled, it reads the standard deviation (σ). Hence
σ = 2 V.
Thus the pdf of the voltage takes the form

p_V(v) = (1/(σ√(2π))) e^{−(v−m)²/(2σ²)}.   (2.196)

We know that

E[V 2 ] = σ 2 + m 2 = 13. (2.197)



Thus, when the rms meter is DC coupled, the reading would be √13 ≈ 3.61 V.

35. (Ziemer and Tranter 2002) This problem demonstrates the non-uniqueness of
the representation of narrowband noise. Here we show that the in-phase and
quadrature components depend on the choice of the carrier frequency f c .
A narrowband noise process of the form

N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t) (2.198)

has a psd as indicated in Fig. 2.18, where f1, f2 ≫ W. Sketch the psd of Nc(t)
and Ns (t) for the following cases:
(a) f c = f 1

(b) f c = f 2
(c) f c = ( f 1 + f 2 )/2
• Solution: It is given that

f 2 − f 1 = W, (2.199)

where f1, f2 ≫ W. We know that

S_{Nc}(f) = S_{Ns}(f) = S_N(f − fc) + S_N(f + fc) for |f| < fc, and 0 otherwise.   (2.200)

The psd of Nc (t) and Ns (t) for f c = f 1 is indicated in Fig. 2.19. The psd of
Nc (t) and Ns (t) for f c = f 2 is shown in Fig. 2.20. The psd of Nc (t) and Ns (t)
for f c = ( f 1 + f 2 )/2 is shown in Fig. 2.21.

36. Consider the system used for generating narrowband noise, as shown in Fig. 2.22.
The Fourier transform of the bandpass filter is also shown. Assume that the
narrowband noise representation is of the form

N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t). (2.201)

The input W (t) is a zero mean, WSS, white Gaussian noise process with a psd
equal to 2 × 10−8 W/Hz.
(a) Sketch and label the psd of Nc (t) when f c = 22 MHz. Show the steps, start-
ing from S N ( f ).
(b) Write down the pdf of the random variable X obtained by observing Nc (t)
at t = 2 s.
(c) Sketch and label the cross-spectral density S Nc Ns ( f ) when f c = 22 MHz.
(d) Derive the expression for the correlation coefficient (ρ(τ )) between Nc (t)
and Ns (t − τ ) for f c = 22 MHz.
(e) For what carrier frequency f c , is ρ(τ ) = 0 for all values of τ ?
Assume the following:


R Nc Ns (τ ) = E[Nc (t)Ns (t − τ )]

R Nc (τ ) = E[Nc (t)Nc (t − τ )]. (2.202)

• Solution: The psd of N (t) is given by

S N ( f ) = SW ( f )|H ( f )|2
N0
= |H ( f )|2 . (2.203)
2
Fig. 2.19 PSD of Nc(t) and Ns(t) for fc = f1: S_{Nc}(f) = S_{Ns}(f) = N0/2 for |f| ≤ W

The psd of Nc (t) is shown in Fig. 2.23.


Since W (t) is zero mean, WSS Gaussian random process and H ( f ) is an LTI
system, N (t) is also zero mean, WSS Gaussian random process. Thus Nc (t)
and Ns (t) are also zero mean, WSS Gaussian random processes. Therefore
X = Nc (2) is a Gaussian RV with pdf

f_X(x) = (1/(σ√(2π))) e^{−(x−m)²/(2σ²)}.   (2.204)

Here
Fig. 2.20 PSD of Nc(t) and Ns(t) for fc = f2: S_{Nc}(f) = S_{Ns}(f) = N0/2 for |f| ≤ W

m = 0
σ² = ∫_{f=−∞}^{∞} S_{Nc}(f) df = 0.68.   (2.205)

The cross-spectral density S Nc Ns ( f ) is plotted in Fig. 2.24.


The cross-spectral density is of the form

S_{Nc Ns}(f) = j [A1 rect((f − f1)/B) + A2 rect((f − f2)/B) − A1 rect((f + f1)/B) − A2 rect((f + f2)/B)],   (2.206)

where
Fig. 2.21 PSD of Nc(t) and Ns(t) for fc = (f1 + f2)/2: S_{Nc}(f) = S_{Ns}(f) is confined to |f| ≤ W/2
Fig. 2.22 System used for generating narrowband noise: white noise W(t) is applied to a bandpass filter H(f); consistent with the numbers used below, |H(f)| = 3 for 22 ≤ |f| ≤ 23 MHz and |H(f)| = 2 for 21 ≤ |f| ≤ 22 MHz and 23 ≤ |f| ≤ 24 MHz


Fig. 2.23 PSD of Nc(t) for fc = 22 MHz: S_N(f), the shifted spectra S_N(f − fc) and S_N(f + fc), and S_{Nc}(f), which equals 26 × 10⁻⁸ W/Hz for |f| ≤ 1 MHz and 8 × 10⁻⁸ W/Hz for 1 ≤ |f| ≤ 2 MHz

f 1 = 0.5 × 106
f 2 = 1.5 × 106
B = 106
A1 = 10 × 10−8
A2 = 8 × 10−8 . (2.207)

The cross-correlation R Nc Ns (τ ) can be obtained by taking the inverse Fourier


transform of S Nc Ns ( f ) and is given by
Fig. 2.24 The cross-spectral density S_{Nc Ns}(f): S_{Nc Ns}(f)/j equals 10 × 10⁻⁸ for 0 < f < 1 MHz and 8 × 10⁻⁸ for 1 < f < 2 MHz, with odd symmetry about f = 0

 
R_{Nc Ns}(τ) = j A1 B sinc(Bτ) (e^{j2πf1τ} − e^{−j2πf1τ}) + j A2 B sinc(Bτ) (e^{j2πf2τ} − e^{−j2πf2τ})
             = −sinc(Bτ) [0.2 sin(2πf1τ) + 0.16 sin(2πf2τ)].   (2.208)

Since Nc (t) and Ns (t − τ ) are both N (0, σ 2 ) we have


Fig. 2.25 R(τ) versus τ: R(τ) = rect(τ/(2T)) + rect(τ/(4T)), i.e., 2 for |τ| < T, 1 for T < |τ| < 2T, and 0 elsewhere

ρ(τ ) = R Nc Ns (τ )/σ 2 , (2.209)

where σ 2 = 0.68.
R Nc Ns (τ ) = 0 for all τ when f c = 22.5 MHz. Hence ρ(τ ) = 0 for all τ when
f c = 22.5 MHz.

37. Consider the function R(τ ) as illustrated in Fig. 2.25. Is it a valid autocorrelation
function? Justify your answer.
• Solution: Note that
  
R(τ) = rect(τ/(2T)) + rect(τ/(4T)) ⇌ 2T sinc(2fT) + 4T sinc(4fT).   (2.210)

Let 2π f T = θ. Hence

R(τ) ⇌ 2T sin(θ)/θ + 4T sin(2θ)/(2θ)
     = 2T (sin(θ)/θ) (1 + 2cos(θ))
     ≜ S(θ).   (2.211)

Clearly, S(θ) is negative for 3π/2 ≤ |θ| < 2π, as illustrated in Fig. 2.26 (sin(θ)
is negative and cos(θ) is positive). Therefore S(θ) cannot be the power spectral
density, and R(τ ) is not a valid autocorrelation.

38. Consider the function

R(τ ) = rect (τ /T ). (2.212)

Is it a valid autocorrelation function? Justify your answer.


• Solution: The Fourier transform of R(τ ) is

rect(τ/T) ⇌ T sinc(f T)   (2.213)


Fig. 2.26 Fourier transform S(θ) of R(τ) given in Fig. 2.25, plotted for −6 ≤ θ ≤ 6; S(θ) dips below zero

Fig. 2.27 Power spectral density of N1(t): S_{N1}(f) has height a and is confined to |f| ≤ W

which can take negative values. Since the power spectral density cannot be
negative, R(τ ) is not a valid autocorrelation, even though it is an even function.
39. (Haykin 1983) A pair of noise processes are related by

N2 (t) = N1 (t) cos(2π f c t + θ) − N1 (t) sin(2π f c t + θ), (2.214)

where f c is a constant and θ is a uniformly distributed random variable between


0 and 2π. The noise process N1 (t) is stationary and its psd is shown in Fig. 2.27.
Find the psd of N2 (t).

• Solution: The autocorrelation of N2 (t) is given by

E[N2 (t)N2 (t − τ )]
= E [(N1 (t) cos(2π f c t + θ) − N1 (t) sin(2π f c t + θ))
(N1 (t − τ ) cos(2π f c (t − τ ) + θ) − N1 (t − τ ) sin(2π f c (t − τ ) + θ))]
Fig. 2.28 Power spectral density of N2(t): S_{N2}(f) has height a/2 and is centered about ±fc

= (1/2) R_{N1}(τ) cos(2πfcτ) + (1/2) R_{N1}(τ) cos(2πfcτ) + (1/2) R_{N1}(τ) sin(2πfcτ) − (1/2) R_{N1}(τ) sin(2πfcτ)
= R_{N1}(τ) cos(2πfcτ).   (2.215)

Hence
S_{N2}(f) = [S_{N1}(f − fc) + S_{N1}(f + fc)]/2.   (2.216)
The psd for N2 (t) is illustrated in Fig. 2.28.

40. The output of an oscillator is described by

X (t) = A cos(2π f t + θ), (2.217)

where A is a constant, f and θ are independent random variables. The pdf of f


is denoted by f F ( f ) for −∞ < f < ∞ and the pdf of θ is uniformly distributed
between 0 and 2π. It is given that f F ( f ) is an even function of f .
(a) Find the psd of X (t) in terms of f F ( f ).
(b) What happens to the psd of X (t) when f is a constant equal to f c .

• Solution: The autocorrelation of X (t) is given by

E[X(t)X(t − τ)] = A² E[cos(2πft + Θ) cos(2πf(t − τ) + Θ)]
= (A²/2) E[cos(2πfτ) + cos(4πft − 2πfτ + 2Θ)]
= (A²/2) ∫_{f=−∞}^{∞} cos(2πfτ) f_F(f) df + (A²/(4π)) ∫_{f=−∞}^{∞} f_F(f) [ ∫_{θ=0}^{2π} cos(4πft − 2πfτ + 2θ) dθ ] df
= (A²/2) ∫_{f=−∞}^{∞} cos(2πfτ) f_F(f) df
= R_X(τ),   (2.218)

since the inner integral over θ vanishes.

Since R X (τ ) is real-valued, S X ( f ) is an even function. Hence


 ∞
R X (τ ) = S X ( f ) exp (j 2π f τ ) d f
f =−∞
 ∞
= S X ( f ) cos (2π f τ ) d f. (2.219)
f =−∞

Comparing (2.218) and (2.219) we get

S_X(f) = (A²/2) f_F(f).   (2.220)

When f is a constant, say equal to fc, then

R_X(τ) = (A²/2) cos(2πfcτ)
⇒ S_X(f) = (A²/4) [δ(f − fc) + δ(f + fc)].   (2.221)
41. (Haykin 1983) A real-valued stationary Gaussian process X (t) with mean m X ,
variance σ 2X and power spectral density S X ( f ) is passed through two real-valued
linear time invariant filters h 1 (t) and h 2 (t), yielding the output processes Y (t)
and Z (t), respectively.
(a) Determine the joint pdf of Y (t) and Z (t − τ ).
(b) State the conditions, in terms of the frequency response of h 1 (t) and
h 2 (t), that are sufficient to ensure that Y (t) and Z (t − τ ) are statistically
independent.
• Solution: The mean values of Y (t) and Z (t − τ ) are given by
 ∞ 
E[Y (t)] = E h 1 (α)X (t − α) dα
α=−∞
 ∞
= mX h 1 (α) dα
α=−∞
= m X H1 (0)
= mY
 ∞
E[Z (t − τ )] = m X h 2 (α) dα
α=−∞
= m X H2 (0)
= mZ. (2.222)

The variance of Y (t) and Z (t − τ ) are given by:


 
E[(Y (t) − m Y )2 ] = E Y 2 (t) − m 2Y
 ∞
= S X ( f )|H1 ( f )|2 d f − m 2Y
f =−∞

= σY2
 
E[(Z (t − τ ) − m Z )2 ] = E Z 2 (t) − m 2Z
 ∞
= S X ( f )|H2 ( f )|2 d f − m 2Z
f =−∞

= σ 2Z . (2.223)

The cross covariance between Y (t) and Z (t − τ ) is given by

E [(Y (t) − m Y ) (Z (t − τ ) − m Z )]
= E[Y (t)Z (t − τ )] − m Y m Z
 ∞  ∞ 
= E h 1 (α)X (t − α) dα h 2 (β)X (t − τ − β) dβ
α=−∞ β=−∞
− mY m Z
 ∞  ∞
= h 1 (α)h 2 (β)R X (τ + β − α) dα dβ − m Y m Z
α=−∞ β=−∞
= K Y Z (τ ). (2.224)

The correlation coefficient between Y (t) and Z (t) is given by

K Y Z (τ )
ρ= . (2.225)
σY σ Z

Now, the joint pdf of two Gaussian random variables y1 and y2 is given by

pY1 , Y2 (y1 , y2 )
1
= 
2πσ1 σ2 1 − ρ2
 
σ 2 (y1 − m 1 )2 − 2σ1 σ2 ρ(y1 − m 1 )(y2 − m 2 ) + σ12 (y2 − m 2 )2
× exp − 2 .
2σ12 σ22 (1 − ρ2 )
(2.226)

The joint pdf of Y (t) and Z (t − τ ) can be similarly found out by substituting
the appropriate values from (2.222)–(2.225). The random variables Y (t) and
Z (t − τ ) are uncorrelated if their covariance is zero. This is possible when
(a) H1 (0) = 0 or H2 (0) = 0 AND
Fig. 2.29 System block diagram: white noise W(t) passes through a bandpass filter H1(f) (unit gain over a band of width 2B centered at ±fc) to give N(t), which is multiplied by cos(2πfc t) and lowpass filtered by H2(f) (unit gain over |f| ≤ B) to give the output N1(t)

(b) The product H1 ( f )H2 ( f ) = 0 for all f . This can be easily shown by
computing the Fourier transform of K Y Z (τ ), which is equal to (assuming
m Y m Z = 0)

K Y Z (τ )  H1 ( f )H2∗ ( f )S X ( f ). (2.227)

Thus if H1 ( f )H2 ( f ) = 0, then K Y Z (τ ) = 0, implying that Y (t) and Z (t −


τ ) are uncorrelated, and being Gaussian, they are statistically independent.

42. (Haykin 1983) Consider a white Gaussian noise process of zero mean and psd
N0 /2 which is applied to the input of the system shown in Fig. 2.29.
(a) Find the psd of the random process at the output of the system.
(b) Find the mean and variance of this output.

• Solution: We know that narrow-band noise can be represented by

n(t) = n c (t) cos(2π f c t) − n s (t) sin(2π f c t). (2.228)

We also know that the psd of n c (t) and n s (t) are given by

S N ( f − f c ) + S N ( f + f c ) for − B ≤ f ≤ B
S Nc ( f ) = S Ns ( f ) = (2.229)
0 elsewhere,

where it is assumed that S N ( f ) occupies the frequency band f c − B ≤ | f | ≤


f c + B. For the given problem S N ( f ) is shown in Fig. 2.30a. Thus

N0 for − B ≤ f ≤ B
R Nc (τ )  S Nc ( f ) = (2.230)
0 elsewhere.
Fig. 2.30 a Noise psd at the output of H1(f): N0/2 over bands of width 2B centered at ±fc. b Noise psd at the output of H2(f): N0/4 for |f| ≤ B

The output of H2 ( f ) is clearly

H2 ( f )
n(t) cos(2π f c t) −→ n c (t)/2 (2.231)

whose autocorrelation is
 
E[(n_c(t)/2)(n_c(t − τ)/2)] = R_{Nc}(τ)/4 ⇌ S_{Nc}(f)/4.   (2.232)

The psd of noise at the output of H2 ( f ) is illustrated in Fig. 2.30b. The output
noise is zero-mean Gaussian with variance
(N0/4) × 2B = N0 B/2.   (2.233)
43. (Haykin 1983) Consider a narrowband noise process N (t) with its Hilbert trans-
form denoted by N̂ (t). Show that the cross-correlation functions of N (t) and
N̂ (t) are given by

R N N̂ (τ ) = − R̂ N (τ )
R N̂ N (τ ) = R̂ N (τ ), (2.234)

where R̂ N (τ ) is the Hilbert transform of R N (τ ).


• Solution: From the basic definition of the autocorrelation function we have
 
R N N̂ (τ ) = E N (t) N̂ (t − τ )
  
1 ∞ N (α)
= E N (t) dα
π α=−∞ t − τ − α
 ∞
1 E[N (t)N (α)]
= dα
π α=−∞ t − τ − α
 ∞
1 R N (t − α)
= dα
π α=−∞ t − τ − α
 ∞
1 R N (β)
= dβ
π β=−∞ β−τ
= − R̂ N (τ ). (2.235)

The second part can be similarly proved.


44. (Haykin 1983) A random process X (t) characterized by the autocorrelation
function

R X (τ ) = e−2ν|τ | , (2.236)

where ν is a constant, is applied to a first-order RC-lowpass filter. Determine


the psd and the autocorrelation of the random process at the filter output.
• Solution: The psd of X (t) is the Fourier transform of the autocorrelation
function and is given by
 0  ∞
2ντ −j 2π f τ
SX ( f ) = e e dτ + e−2ντ e−j 2π f τ dτ
τ =−∞ τ =0
1  2τ (ν−j π f ) 0
= e τ =−∞
2(ν − j π f )
1  −2τ (ν+j π f ) ∞
− e τ =0
2(ν + j π f )
ν
= 2
ν + π2 f 2
1/ν
= . (2.237)
1 + π 2 f 2 /ν 2

Since the transfer function of the LPF is


H(f) = 1/(1 + j2πfRC).   (2.238)

Therefore the psd of the output random process is

S_Y(f) = S_X(f) |H(f)|²
       = [ν/(ν² + π²f²)] × [1/(1 + (2πfRC)²)]
       = A/(ν² + π²f²) + B/(1 + (2πfRC)²).   (2.239)

Solving for A and B we get



A = ν/(1 − 4R²C²ν²)
B = −4R²C²ν/(1 − 4R²C²ν²).   (2.240)

The autocorrelation of the output random process is the inverse Fourier trans-
form of (2.239), and is given by

R_Y(τ) = (A/ν) e^{−2ν|τ|} + (B/(2RC)) e^{−|τ|/(RC)}.   (2.241)
45. Let X and Y be independent random variables with

αe−αx x > 0
f X (x) =
0 otherwise

βe−β y y > 0
f Y (y) = (2.242)
0 otherwise,

where α and β are positive constants. Find the pdf of Z = X + Y when α ≠ β and when α = β.
• Solution: We know that the pdf of Z is the convolution of the pdfs of X and
Y and is given by (for α ≠ β)
 ∞
f Z (z) = f X (a) f Y (z − a) da
a=−∞
 z
= f X (a) f Y (z − a) da
a=0
z
= αβe−αa e−β(z−a) da
a=0

= (αβ/(α − β)) (e^{−βz} − e^{−αz}) for z > 0, and 0 for z < 0.   (2.243)

When α = β we have
f_Z(z) = ∫_{a=0}^{z} α² e^{−αz} da
       = z α² e^{−αz} for z > 0, and 0 otherwise.   (2.244)
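Remark: both branches of the answer can be checked by Monte Carlo. In the sketch below the values of α, β and z are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

def check(alpha, beta, z, h=0.01):
    zs = rng.exponential(1 / alpha, n) + rng.exponential(1 / beta, n)
    est = np.mean((zs > z - h) & (zs < z + h)) / (2 * h)   # empirical density
    if alpha != beta:
        theory = alpha * beta / (alpha - beta) * (np.exp(-beta * z) - np.exp(-alpha * z))
    else:
        theory = z * alpha**2 * np.exp(-alpha * z)
    print(f"alpha={alpha}, beta={beta}, z={z}: estimate={est:.4f}, theory={theory:.4f}")

check(2.0, 1.0, 1.5)   # alpha != beta, Eq. (2.243)
check(2.0, 2.0, 1.5)   # alpha == beta, Eq. (2.244)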

46. A WSS random process X (t) with psd N0 /2 is passed through a first-order RC
highpass filter. Determine the psd and autocorrelation of the filter output.
• Solution: The transfer function of the first-order RC highpass filter is

H(f) = j2πfRC/(1 + j2πfRC).   (2.245)

Let Y (t) denote the random process at the filter output. Therefore the psd of
the filter output is

S_Y(f) = (N0/2) × (2πfRC)²/(1 + (2πfRC)²)
       = (N0/2) [1 − 1/(1 + (2πfRC)²)].   (2.246)

The autocorrelation of Y (t) is the inverse Fourier transform of SY ( f ). Consider


the Fourier transform pair

(1/2) e^{−|t|} ⇌ 1/(1 + (2πf)²).   (2.247)

Using the time scaling property

g(t) ⇌ G(f)
⇒ g(at) ⇌ (1/|a|) G(f/a)   (2.248)

with a = 1/(RC), (2.247) becomes

(1/2) e^{−|t|/(RC)} ⇌ RC/(1 + (2πfRC)²).   (2.249)

Thus from (2.246) and (2.249) we have

R_Y(τ) = (N0/2) δ(τ) − (N0/(4RC)) e^{−|τ|/(RC)}.   (2.250)
47. Let X be a random variable having pdf

f X (x) = ae−b|x| for a, b > 0, −∞ < x < ∞. (2.251)

(a) Find the relation between a and b.


(b) Let Y = 2X 2 − c, where c > 0. Compute P(Y < 0).

• Solution: We have
2 ∫_{x=0}^{∞} a e^{−bx} dx = 1
⇒ 2a = b.   (2.252)

Now

P(Y < 0) = P(2X² − c < 0)
         = P(X² < c/2)
         = P(|X| < √(c/2))
         = 2 ∫_{x=0}^{√(c/2)} a e^{−bx} dx
         = (2a/b) (1 − e^{−b√(c/2)})
         = 1 − e^{−b√(c/2)}.   (2.253)

48. A narrowband noise process N (t) has zero mean and autocorrelation function
R N (τ ). Its psd S N ( f ) is centered about ± f c . Its quadrature components Nc (t)
and Ns (t) are defined by

Nc (t) = N (t) cos(2π f c t) + N̂ (t) sin(2π f c t)


Ns (t) = N̂ (t) cos(2π f c t) − N (t) sin(2π f c t). (2.254)

Using (2.234) show that

R Nc (τ ) = R Ns (τ ) = R N (τ ) cos(2π f c τ ) + R̂ N (τ ) sin(2π f c τ )
R Nc Ns (τ ) = −R Ns Nc (τ ) = R N (τ ) sin(2π f c τ ) − R̂ N (τ ) cos(2π f c τ ).
(2.255)

• Solution: From the basic definition of the autocorrelation

R Nc (τ ) = E [Nc (t)Nc (t − τ )]
 
= E N (t) cos(2π f c t) + N̂ (t) sin(2π f c t)

N (t − τ ) cos(2π f c (t − τ )) + N̂ (t − τ ) sin(2π f c (t − τ ))
R N (τ )
= [cos(2π f c τ ) + cos(4π f c t − 2π f c τ )]
2
R (τ )
+ N N̂ [sin(4π f c t − 2π f c τ ) − sin(2π f c τ )]
2
R (τ )
+ N̂ N [sin(4π f c t − 2π f c τ ) + sin(2π f c τ )]
2
R (τ )
+ N̂ [cos(2π f c τ ) − cos(4π f c t − 2π f c τ )] . (2.256)
2

Now

S N̂ ( f ) = S N ( f ) |−j sgn ( f )|2


= SN ( f )
⇒ R N̂ (τ ) = R N (τ ). (2.257)

In the above equation, we have assumed that S N (0) = 0. Moreover from


(2.234)

R N̂ N (τ ) = −R N N̂ (τ ) = R̂ N (τ ). (2.258)

Substituting (2.257) and (2.258) in (2.256) we get

R Nc (τ ) = R N (τ ) cos(2π f c τ ) + R̂ N (τ ) sin(2π f c τ ). (2.259)

The expression for R Ns (τ ) can be similarly proved.


Similarly we have

R Nc Ns (τ )
= E [Nc (t)Ns (t − τ )]
 
= E N (t) cos(2π f c t) + N̂ (t) sin(2π f c t)

N̂ (t − τ ) cos(2π f c (t − τ )) − N (t − τ ) sin(2π f c (t − τ ))
R N N̂ (τ )
= [cos(2π f c τ ) + cos(4π f c t − 2π f c τ )]
2
R N (τ )
− [sin(4π f c t − 2π f c τ ) − sin(2π f c τ )]
2
R (τ )
+ N̂ [sin(4π f c t − 2π f c τ ) + sin(2π f c τ )]
2
R (τ )
− N̂ N [cos(2π f c τ ) − cos(4π f c t − 2π f c τ )] . (2.260)
2
Once again substituting (2.257) and (2.258) in (2.260) we get

R Nc Ns (τ ) = R N (τ ) sin(2π f c τ ) − R̂ N (τ ) cos(2π f c τ ). (2.261)

The expression for R Ns Nc (τ ) can be similarly proved.


49. Let X (t) be a stationary process with zero mean, autocorrelation function R X (τ )
and psd S X ( f ). Find a linear filter with impulse response h(t) such that the
filter output is X (t) when the input is white noise with psd N0 /2. What is the
corresponding condition on the transfer function H ( f )?

• Solution: Let Y (t) denote the input process. It is given that

N0
RY (τ ) = δ(τ ). (2.262)
2
The autocorrelation of the output is given by
 ∞  ∞ 
R X (τ ) = E h(α)Y (t − α) dα h(β)Y (t − τ − β) dβ
α=−∞ β=−∞
 ∞  ∞
= h(α)h(β)RY (τ + β − α) dα dβ
α=−∞ β=−∞
  ∞
N0 ∞
= h(α)h(β)δ(τ + β − α) dα dβ
2 α=−∞ β=−∞
 ∞
N0
= h(α)h(α − τ ) dα
2 α=−∞
N0
= (h(t)  h(−t)) . (2.263)
2
The transfer function must satisfy the following relationship:

N0
SX ( f ) = |H ( f )|2 . (2.264)
2
50. White Gaussian noise W (t) of zero mean and psd N0 /2 is applied to a first-
order RC-lowpass filter, producing noise N (t). Determine the pdf of the random
variable obtained by observing N (t) at time tk .
• Solution: Clearly, the first-order RC lowpass filter is stable and linear, hence
N (t) is also a Gaussian random process. Hence N (tk ) is a Gaussian distributed
random variable with pdf given by

p_{N(tk)}(x) = (1/(σ√(2π))) e^{−(x−m)²/(2σ²)},   (2.265)

where m is the mean and σ 2 denotes the variance.


Moreover, since W (t) is WSS, N (t) is also WSS. It only remains to compute
the mean and variance of N (tk ). The mean value of N (tk ) is computed as
follows:
 ∞
N (tk ) = h(τ )W (tk − τ ) dτ
τ =0

⇒ m = E [N (tk )] = h(τ )E[W (tk − τ )] dτ
τ =0
= 0. (2.266)

The variance can be computed from the psd as


σ² = E[N²(tk)] = (N0/2) ∫_{f=−∞}^{∞} |H(f)|² df.   (2.267)

For the given problem H ( f ) is given by

H(f) = [1/(j2πfC)] / [R + 1/(j2πfC)]
     = 1/(1 + j2πfRC)
⇒ |H(f)|² = 1/(1 + (2πfRC)²).   (2.268)

Hence
σ² = E[N²(tk)] = (N0/2) (1/(2πRC)) [tan⁻¹(2πfRC)]_{f=−∞}^{∞} = N0/(4RC).   (2.269)
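Remark: a discrete-time approximation reproduces (2.269). In the sketch below (not from the text) the white noise of psd N0/2 is represented by i.i.d. samples of variance N0/(2Δt) and the RC filter by a one-pole recursion; N0, R and C are arbitrary illustrative values.

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
N0, R, C = 2.0, 1.0e3, 1.0e-6
RC = R * C
dt = RC / 200                          # time step much smaller than RC
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), 2_000_000)
a = np.exp(-dt / RC)                   # one-pole version of 1/(1 + j2*pi*f*RC)
y = lfilter([1 - a], [1, -a], w)
print("simulated variance:", np.var(y[2000:]))   # skip the initial transient
print("theory N0/(4RC)   :", N0 / (4 * RC))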
51. Let X and Y be two independent random variables. X is uniformly distributed
between [−1, 1] and Y is uniformly distributed between [−2, 2]. Find the pdf
of Z = max(X, Y ).
• Solution: We begin by noting that Z lies in the range [−1, 2]. Next, we com-
pute the cumulative distribution function of Z and proceed to compute the pdf
by differentiating the distribution function.
To compute the cumulative distribution function, we note that

X if X > Y
Z= (2.270)
Y if Y > X.

The pdfs of X and Y are given by


1/2 for − 1 ≤ x ≤ 1
f X (x) =
0 otherwise

1/4 for − 2 ≤ y ≤ 2
f Y (y) = (2.271)
0 otherwise.

We also note that since the pdfs of X and Y are different in the range [−1, 1]
and [1, 2], we expect the distribution function of Z to be different in these
two regions.
Using the fact that X and Y are independent and the events X > Y and Y > X
are mutually exclusive we have for −1 ≤ z ≤ 1

P(Z ≤ z) = P(X ≤ z AND X > Y) + P(Y ≤ z AND Y > X)
         = (1/8) ∫_{x=−1}^{z} ∫_{y=−2}^{x} dy dx + (1/8) ∫_{y=−1}^{z} ∫_{x=−1}^{y} dx dy
         = (1/8)(z² + 3z + 2).   (2.272)
When 1 ≤ z ≤ 2, then Z = Y . Hence the cumulative distribution takes the
form
P(Z ≤ z) = P(Z < 1) + (1/8) ∫_{y=1}^{z} ∫_{x=−1}^{1} dx dy
         = (1/8)(z² + 3z + 2)|_{z=1} + (1/8) ∫_{y=1}^{z} ∫_{x=−1}^{1} dx dy
         = (1/8)(2z + 4).   (2.273)
Thus

f_Z(z) = (1/8)(2z + 3) for −1 ≤ z ≤ 1, and 1/4 for 1 ≤ z ≤ 2.   (2.274)
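Remark: (2.274) is readily confirmed by simulation; the sketch below estimates the density of Z = max(X, Y) at a few test points (the points and bin width are arbitrary choices).

import numpy as np

rng = np.random.default_rng(5)
n, h = 2_000_000, 0.01
z = np.maximum(rng.uniform(-1, 1, n), rng.uniform(-2, 2, n))
for z0 in (-0.5, 0.0, 0.5, 1.5):
    est = np.mean((z > z0 - h) & (z < z0 + h)) / (2 * h)
    theory = (2 * z0 + 3) / 8 if z0 <= 1 else 0.25
    print(f"z={z0:+.1f}: estimate={est:.3f}, theory={theory:.3f}")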

52. Consider a random variable given by Z = a X + bY . Here X and Y are statis-


tically independent Gaussian random variables with mean m X and m Y , respec-
tively. The variance of X and Y is σ 2 .
(a) Is Z a Gaussian distributed random variable?
(b) Compute the mean and the variance of Z .

• Solution: Z is Gaussian distributed because it is a linear combination of two


Gaussian random variables. The mean value of Z is

m Z = E[Z ] = am X + bm Y . (2.275)

The variance of Z is
   
E (Z − m Z )2 = E Z 2 − m 2Z
   
= a 2 E X 2 + b2 E Y 2 + 2abE[X ]E[Y ]
− (am X + bm Y )2

= a 2 (σ 2 + m 2X ) + b2 (σ 2 + m 2Y ) + 2abm X m Y
− (am X + bm Y )2
= σ 2 (a 2 + b2 ). (2.276)

53. Let X and Y be two independent random variables. X is uniformly distributed


between [−1, 1] and Y is uniformly distributed between [−2, 2]. Find the pdf
of Z = min(X, Y ).
• Solution: We begin by noting that Z lies in the range [−2, 1]. Next, we com-
pute the probability distribution function of Z and proceed to compute the pdf
by differentiating the cumulative distribution function with respect to z.
To compute the cumulative distribution function, we note that

X if X < Y
Z= (2.277)
Y if Y < X.

The pdfs of X and Y are given by


1/2 for − 1 ≤ x ≤ 1
f X (x) =
0 otherwise

1/4 for − 2 ≤ y ≤ 2
f Y (y) = (2.278)
0 otherwise.

Moreover, since the pdfs of X and Y are different in the region [−2, −1]
and [−1, 1], we also expect the cumulative distribution function of Z to be
different in these two regions.
Let us first consider the region [−2, −1]. Using the fact that X and Y are
independent and the events X < Y and Y < X are mutually exclusive we
have for −2 ≤ z ≤ −1

P(Z ≤ z) = P(X ≤ z AND X < y)


+ P(Y ≤ z AND Y < x). (2.279)

However we note that for −2 ≤ z ≤ −1

P(X ≤ z AND X < y) = 0 (2.280)

since X is always greater than −1. Hence

P(Z ≤ z) = P(Y ≤ z AND Y < x)


  1
1 z
= dy dx
8 y=−2 x=−1
1
= [2z + 4] . (2.281)
8

Similarly we have for −1 ≤ z ≤ 1.

P(Z ≤ z) = P(X ≤ z AND X < y)


+ P(Y ≤ z AND Y < x)
  2
1 z
= P(Z < −1) + dy dx
8 x=−1 y=x
  1
1 z
+ dy dx
8 y=−1 x=y
  1
1 −1
= dy dx
8 y=−2 x=−1
  2
1 z
+ dy dx
8 x=−1 y=x
  1
1 z
+ dy dx
8 y=−1 x=y
1 2 
= −z + 3z + 6 . (2.282)
8
Thus

1/4 for − 2 ≤ z ≤ −1
f Z (z) = . (2.283)
(1/8) [−2z + 3] for − 1 ≤ z ≤ 1

54. (Haykin 1983) Let X (t) be a stationary Gaussian process with zero mean and
variance σ 2 and autocorrelation R X (τ ). This process is applied to an ideal half-
wave rectifier. The random process at the rectifier output is denoted by Y (t).
(a) Compute the mean value of Y (t).
(b) Compute the autocorrelation of Y (t). Make use of the result
∫_{u=0}^{∞} ∫_{v=0}^{∞} uv exp(−u² − v² − 2uv cos(θ)) du dv = (1 − θ cot(θ))/(4 sin²(θ)).   (2.284)

• Solution: We begin by noting that Y (t) is some function of X (t), which we


denote by g(X ). For the given problem

X (t) if X (t) ≥ 0
Y (t) = g(X ) = (2.285)
0 if X (t) < 0.

We also know that since X (t) is given to be stationary, f X (x) is independent


of time, hence
E[g(X)] = ∫_{x=−∞}^{∞} g(x) f_X(x) dx
        = (1/(σ√(2π))) ∫_{x=0}^{∞} x exp(−x²/(2σ²)) dx
        = (σ/√(2π)) ∫_{z=0}^{∞} exp(−z) dz
        = σ/√(2π).   (2.286)
The autocorrelation of Y (t) is given by

E [Y (t)Y (t − τ )] . (2.287)

For convenience of representation, let

X (t) = α
X (t − τ ) = β. (2.288)

To compute the autocorrelation we make use of the result


 ∞  ∞
E [g(α, β)] = g(α, β) f α, β (α, β) dα dβ. (2.289)
α=−∞ β=−∞

The joint pdf of two real-valued Gaussian random variables Y1 and Y2 is given
by

f Y1 , Y2 (y1 , y2 )
1
= 
2πσ1 σ2 1 − ρ2
 
σ 2 (y1 − m 1 )2 − 2σ1 σ2 ρ(y1 − m 1 )(y2 − m 2 ) + σ12 (y2 − m 2 )2
× exp − 2 .
2σ12 σ22 (1 − ρ2 )
(2.290)

For the given problem, the joint pdf of α and β is given by


 
1 α2 − 2ραβ + β 2
f α, β (α, β) =  exp − , (2.291)
2πσ 2 1 − ρ2 2σ 2 (1 − ρ2 )

where we have made use of the fact that



E[α] = E[β] = 0
E[α²] = E[β²] = σ²
ρ = ρ(τ) = E[αβ]/σ² = R_X(τ)/σ².   (2.292)
Thus the autocorrelation of Y (t) can be written as
 ∞  ∞  
1 α2 − 2ραβ + β 2
RY (τ ) =  αβ exp − dα dβ.
2πσ 2 1 − ρ2 α=0 β=0 2σ 2 (1 − ρ2 )
(2.293)
Let
1
A= 
2σ 2 (1 − ρ2 )
1
B= 
2σ 2 (1 − ρ2 )
u = Aα
v = −Bβ. (2.294)

Thus
 ∞  ∞
2σ 2 (1 − ρ2 )3/2  
RY (τ ) = uv exp −(u 2 + 2uvρ + v 2 ) du dv
π u=0 v=0
(2.295)
Using (2.284) we get

R_Y(τ) = (σ²/(2π)) [√(1 − ρ²) − ρ cos⁻¹(ρ)].   (2.296)

55. (Haykin 1983) Let X (t) be a real-valued zero-mean stationary Gaussian process
with autocorrelation function R X (τ ). This process is applied to a square-law
device. Denote the output process as Y (t). Compute the mean and autocovariance
of Y (t).
• Solution: It is given that

Y (t) = X 2 (t). (2.297)

Since X (t) is given to be stationary, we have


 
m Y = E X 2 (t) = R X (0). (2.298)

The autocorrelation of Y (t) is given by


 
RY (τ ) = E [Y (t)Y (t − τ )] = E X 2 (t)X 2 (t − τ ) . (2.299)

Let

X (t) = X 1
X (t − τ ) = X 2 . (2.300)

We know that

E[X 1 X 2 X 3 X 4 ] = C12 C34 + C13 C24 + C14 C23 , (2.301)

where E[X i X j ] = Ci j and the random variables X i are jointly normal (Gaus-
sian) with zero mean. For the given problem we note that X 3 = X 1 and
X 4 = X 2 . Thus

RY (τ ) = 2R 2X (τ ) + R 2X (0). (2.302)

The autocovariance of Y (t) is given by

K Y (τ ) = E[(Y (t) − m Y )(Y (t − τ ) − m Y )]


= RY (τ ) − m 2Y
= 2R 2X (τ ). (2.303)

56. A random variable X is uniformly distributed in [−1, 4] and another random


variable Y is uniformly distributed in [7, 8]. X and Y are statistically indepen-
dent. Find the pdf of Z = |X | − Y .
• Solution: The pdf of X and Y are shown in Fig. 2.31a, b, respectively. Let

U = |X |
V = −Y. (2.304)

Then the pdf of U and V are illustrated in Fig. 2.31c, d. Thus

Z = U + V, (2.305)

where U and V are independent. Clearly Z lies in the range [−8, −3]. More-
over, the pdf of Z is given by the convolution of the pdfs of U and V and is
equal to
 ∞
f Z (z) = fU (α) f V (z − α) dα. (2.306)
α=−∞
Fig. 2.31 PDF of the various random variables: a f_X(x) = 1/5 on [−1, 4]. b f_Y(y) = 1 on [7, 8]. c f_U(u) = 2/5 on [0, 1] and 1/5 on [1, 4]. d f_V(v) = 1 on [−8, −7]

The various steps in the convolution of U and V are illustrated in Fig. 2.32.
We have

f Z (−8) = 0
f Z (−7) = 2/5
f Z (−6) = 1/5
f Z (−4) = 1/5
f Z (−3) = 0. (2.307)

The pdf of Z is illustrated in Fig. 2.33.

57. A random variable X is uniformly distributed in [2, 3] and [−3, −2]. Y is a


discrete random variable which takes values −1 and 7 with probabilities 3/5
and 2/5, respectively. X and Y are statistically independent. Find the pdf of
Z = X 2 + |Y |.
• Solution: Note that

f X (x) = 1/2 for 2 ≤ |x| ≤ 3 (2.308)


Fig. 2.32 Various steps in the convolution of U and V: f_U(α) shown together with f_V(z − α) for z = −8, −7, −6, −4 and −3
Fig. 2.33 PDF of Z: piecewise linear, equal to 0 at z = −8, 2/5 at z = −7, 1/5 at z = −6 and z = −4, and 0 at z = −3
Fig. 2.34 PDF of U = X²: f_U(u) = 1/(2√u) for 4 ≤ u ≤ 9

and
f_Y(y) = (3/5) δ(y + 1) + (2/5) δ(y − 7).   (2.309)
Now, the pdf of V = |Y | is

f_V(v) = (3/5) δ(v − 1) + (2/5) δ(v − 7).   (2.310)

Let U = X 2 . The pdf of U is

f_U(u) = f_X(x)/|du/dx| |_{x=√u} + f_X(x)/|du/dx| |_{x=−√u}
       = 1/(2√u) for 4 ≤ u ≤ 9,   (2.311)

which is illustrated in Fig. 2.34. Therefore

Z = U + V. (2.312)

Since X and Y are independent, U and V are also independent. Moreover, the
pdf of Z is given by the convolution of the pdfs of U and V and is equal to
f_Z(z) = ∫_{α=−∞}^{∞} f_U(α) f_V(z − α) dα
       = (3/5) f_U(z − 1) + (2/5) f_U(z − 7).   (2.313)
The pdf of Z is shown in Fig. 2.35.
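Remark: (2.313) can also be spot-checked by Monte Carlo; in the sketch below the test points and bin width are arbitrary choices, not part of the original solution.

import numpy as np

rng = np.random.default_rng(6)
n, h = 2_000_000, 0.02
x = rng.uniform(2, 3, n) * rng.choice([-1, 1], n)   # uniform over [2,3] and [-3,-2]
y = rng.choice([-1.0, 7.0], n, p=[0.6, 0.4])
z = x**2 + np.abs(y)

def f_U(u):   # (2.311)
    return np.where((u >= 4) & (u <= 9), 1 / (2 * np.sqrt(np.maximum(u, 1e-12))), 0.0)

for z0 in (6.0, 9.0, 12.0, 15.0):
    est = np.mean((z > z0 - h) & (z < z0 + h)) / (2 * h)
    theory = 0.6 * f_U(z0 - 1) + 0.4 * f_U(z0 - 7)
    print(f"z={z0:4.1f}: estimate={est:.4f}, theory={theory:.4f}")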
Fig. 2.35 PDF of Z = X² + |Y|, as given by (2.313)

58. Let X (t) be a zero-mean Gaussian random process with psd N0 /2. X (t) is passed
through a filter with impulse response h(t) = exp(−πt 2 ), followed by an ideal
differentiator. Let the output process be denoted by Y (t). Determine the pdf of
Y (1).
• Solution: At the outset, we emphasize that Y (t) can be considered to be a
random process as well as a random variable. Recall that a random variable is
obtained by observing a random process at a particular instant of time.
Note that both h(t) and the ideal differentiator are LTI filters. Hence Y (t) is
also a Gaussian random process. Since X (t) is WSS, so is Y (t). Therefore the
random variable Y (t) has a Gaussian pdf, independent of t. It only remains
to compute the mean and the variance of Y (t). Since X (t) is zero mean, so is
Y (t). Therefore

E[Y (t)] = 0 (2.314)

independent of t. The variance of Y (t) is computed as follows.


Let H ( f ) denote the Fourier transform of h(t). Clearly

H ( f ) = exp(−π f 2 ). (2.315)

The overall frequency response of h(t) in cascade with the ideal differentiator
is

G( f ) = j 2π f H ( f ). (2.316)

Therefore the psd of the random process Y (t) is

SY ( f ) = (N0 /2)4π 2 f 2 exp(−2π f 2 ). (2.317)



The variance of the random variable Y (t) (this is also the power of the random
process Y (t)) is
 
σY2 = E |Y (t)|2
 ∞
= SY ( f ) d f
f =−∞

= (√2/4) N0 π.   (2.318)
Hence the pdf of the random variable Y (t) is

p_Y(y) = (1/(σ_Y√(2π))) exp(−y²/(2σ_Y²))   (2.319)

independent of t, and is hence also the pdf of the random variable Y (1).
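Remark: the variance in (2.318) can be cross-checked by numerical integration of S_Y(f); N0 below is an arbitrary illustrative value.

import numpy as np

N0 = 2.0
f = np.linspace(-10, 10, 400_001)     # exp(-2*pi*f^2) is negligible beyond |f| = 10
S_Y = (N0 / 2) * 4 * np.pi**2 * f**2 * np.exp(-2 * np.pi * f**2)
print(np.trapz(S_Y, f), np.sqrt(2) / 4 * N0 * np.pi)   # the two agree closely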
59. Let X 1 and X 2 be two independent random variables. X 1 is uniformly distributed
between [−1, 1] and X 2 has a triangular pdf between [−3, 3] with a peak at
zero. Find the pdf of Z = max(X 1 , X 2 ).
• Solution: Let us consider a more general problem given by

Z = max(X 1 , . . . , X n ), (2.320)

where X 1 , . . . , X n are independent random variables. Now

P(Z < z) = P((X 1 < z) AND . . . AND (X n < z))


= P(X 1 < z) . . . P(X n < z). (2.321)

Therefore
d
f Z (z) = P(Z < z)
dz
d
= [P(X 1 < z) . . . P(X n < z)] . (2.322)
dz

In the given problem

P(Z < z) = P(X 1 < z)P(X 2 < z). (2.323)

Moreover, it can be seen that Z lies in the range [−1, 3]. In order to compute
(2.323), we need to partition the range of Z into three intervals, given by [−1, 0],
[0, 1] and [1, 3]. Now

⎨ (z + 1)/2 for − 1 ≤ z ≤ 0
P(X 1 < z) = (z + 1)/2 for 0 ≤ z ≤ 1 , (2.324)

1 for 1 ≤ z ≤ 3

where we have used the fact that


 z
P(X 1 < z) = f X 1 (x1 ) d x1
x1 =−1
 z
1
= d x1
2 x1 =−1
z+1
= . (2.325)
2
Similarly, noting that f X 2 (0) = 1/3, we obtain
⎧ 2
⎨ z /18 + z/3 + 1/2 for − 1 ≤ z ≤ 0
P(X 2 < z) = −z 2 /18 + z/3 + 1/2 for 0 ≤ z ≤ 1 (2.326)
⎩ 2
−z /18 + z/3 + 1/2 for 1 ≤ z ≤ 3.

Substituting (2.324) and (2.326) in (2.323) and using (2.322) we get


⎧ 2
⎨ z /12 + 7z/18 + 5/12 for − 1 ≤ z ≤ 0
f Z (z) = −z 2 /12 + 5z/18 + 5/12 for 0 ≤ z ≤ 1 (2.327)

−z/9 + 1/3 for 1 ≤ z ≤ 3.

60. Let X 1 and X 2 be two independent random variables. X 1 is uniformly distributed


between [−1, 1] and X 2 has a triangular pdf between [−3, 3] with a peak at
zero. Find the pdf of Z = min(X 1 , X 2 ).
• Solution: Let us consider a more general problem given by

Z = min(X 1 , . . . , X n ), (2.328)

where X 1 , . . . , X n are independent random variables. Now

P(Z > z) = P((X 1 > z) AND . . . AND (X n > z))


= P(X 1 > z) . . . P(X n > z). (2.329)

Therefore
d
f Z (z) = P(Z < z)
dz
d
= [1 − P(Z > z)]
dz

d
= [1 − P(X 1 > z) . . . P(X n > z)]
dz
d
= [1 − (1 − P(X 1 < z)) . . . (1 − P(X n < z))] . (2.330)
dz

In the given problem

P(Z > z) = P(X 1 > z)P(X 2 > z). (2.331)

Moreover, it can be seen that Z lies in the range [−3, 1]. In order to com-
pute (2.331), we need to partition the range of Z into three intervals, given by
[−3, −1], [−1, 0] and [0, 1]. Now

P(X 1 > z) = 1 − P(X 1 < z)



⎨1 for − 3 ≤ z ≤ −1
= 1 − (z + 1)/2 for − 1 ≤ z ≤ 0

1 − (z + 1)/2 for 0 ≤ z ≤ 1

⎨1 for − 3 ≤ z ≤ −1
= (1 − z)/2 for − 1 ≤ z ≤ 0 (2.332)

(1 − z)/2 for 0 ≤ z ≤ 1

where we have used the fact that


 z
P(X 1 < z) = f X 1 (x1 ) d x1
x1 =−1
 z
1
= d x1
2 x1 =−1
z+1
= . (2.333)
2
Similarly, noting that f X 2 (0) = 1/3, we obtain

P(X 2 > z) = 1 − P(X 2 < z)



⎨ 1 − z 2 /18 − z/3 − 1/2 for − 3 ≤ z ≤ −1
= 1 − z 2 /18 − z/3 − 1/2 for − 1 ≤ z ≤ 0

1 + z 2 /18 − z/3 − 1/2 for 0 ≤ z ≤ 1
⎧ 2
⎨ −z /18 − z/3 + 1/2 for − 3 ≤ z ≤ −1
= −z 2 /18 − z/3 + 1/2 for − 1 ≤ z ≤ 0 (2.334)
⎩ 2
z /18 − z/3 + 1/2 for 0 ≤ z ≤ 1.

Substituting (2.332) and (2.334) in (2.331) and using (2.330) we get



⎨ z/9 + 1/3 for − 3 ≤ z ≤ −1
f Z (z) = −z 2 /12 − 5z/18 + 5/12 for − 1 ≤ z ≤ 0 (2.335)
⎩ 2
z /12 − 7z/18 + 5/12 for 0 ≤ z ≤ 1.

References

Simon Haykin. Communication Systems. Wiley Eastern, second edition, 1983.


A. Papoulis. Probability, Random Variables and Stochastic Processes. McGraw-Hill, third edition,
1991.
Rodger E. Ziemer and William H. Tranter. Principles of Communications. John Wiley, fifth edition,
2002.
Chapter 3
Amplitude Modulation

1. (Haykin 1983) Suppose that a non-linear device is available for which the output
current i 0 and the input voltage vi are related by:

i 0 (t) = a1 vi (t) + a3 vi3 (t), (3.1)

where a1 and a3 are constants. Explain how this device may be used to provide
(a) a product modulator (b) an amplitude modulator.
• Solution: Let

vi (t) = Ac cos(π f c t) + m(t), (3.2)

where m(t) occupies the frequency band [−W, +W ]. Then

i0(t) = a1 (Ac cos(πfc t) + m(t)) + (a3 Ac³/4)(3 cos(πfc t) + cos(3πfc t))
      + (3/2) a3 Ac² (1 + cos(2πfc t)) m(t) + 3 a3 Ac cos(πfc t) m²(t) + a3 m³(t)
      = (a1 + (3/2) a3 Ac²) m(t) + a3 m³(t) + (a1 Ac + (3/4) a3 Ac³) cos(πfc t)
      + 3 a3 Ac cos(πfc t) m²(t) + (3/2) a3 Ac² m(t) cos(2πfc t) + (a3 Ac³/4) cos(3πfc t).   (3.3)
Using the fact that multiplication in the time domain is equivalent to convo-
lution in the frequency domain, we know that m 2 (t) occupies the frequency

Fig. 3.1 Fourier transform of i0(t): components centered at f = 0 (extending to ±3W), f = fc/2 (extending over fc/2 ± 2W), f = fc (the DSBSC term, extending over fc ± W) and f = 3fc/2, plus their images at negative frequencies

Fig. 3.2 Method of obtaining the DSBSC signal: Ac cos(πfc t) + m(t) is applied to the nonlinear device, and a BPF centered at fc extracts s(t) = (3/2) a3 Ac² cos(2πfc t) m(t)

band [−2W, +2W ] and m 3 (t) occupies [−3W, 3W ]. The Fourier transform
of i 0 (t) is illustrated in Fig. 3.1. To extract the DSBSC component at f c using
a bandpass filter, the following conditions must be satisfied:

f c /2 + 2W < f c − W
f c + W < 3 f c /2. (3.4)

The two inequalities in the above equation can be combined to get

f c > 6W. (3.5)

The procedure for obtaining the DSBSC signal is illustrated in Fig. 3.2.
The procedure for obtaining the AM signal using two identical nonlinear
devices and bandpass filters is illustrated in Fig. 3.3. The amplitude sensitivity
is controlled by A0 .
Note that with this method, the amplitude sensitivity and the carrier power
can be independently controlled. Moreover, the amplitude sensitivity is inde-
pendent of a3 , which is not under user control (a3 is device dependent).
2. (Haykin 1983) In this problem we consider the switching modulator shown
in Fig. 3.4. Assume that the carrier wave c(t) applied to the diode is large in
amplitude compared to |m(t)|, so that the diode acts like an ideal switch, that is
Fig. 3.3 Method of obtaining the AM signal: the upper branch (inputs Ac cos(πfc t) and m(t)) produces s1(t) = (3/2) a3 Ac² cos(2πfc t) m(t), the lower branch (inputs Ac cos(πfc t) and the constant A0) produces s2(t) = (3/2) a3 A0 Ac² cos(2πfc t), and their sum is the AM signal

Fig. 3.4 Block diagram of a switching modulator: v1(t) = c(t) + m(t), with c(t) = Ac cos(2πfc t), is applied to a diode in series with a load resistor Rl, across which v2(t) is taken

v2(t) = v1(t) for c(t) > 0, and 0 for c(t) < 0.   (3.6)

Hence we may write

v2 (t) = (Ac cos(2π f c t) + m(t)) g p (t), (3.7)

where

g_p(t) = 1/2 + (2/π) Σ_{n=1}^{∞} [(−1)^{n−1}/(2n − 1)] cos(2πfc(2n − 1)t).   (3.8)

Find out the AM signal centered at f c contained in v2 (t).



• Solution: We have
v2(t) = (Ac/2) cos(2πfc t)
      + (Ac/π) Σ_{n=1}^{∞} [(−1)^{n−1}/(2n − 1)] (cos(4π(n − 1)fc t) + cos(4πn fc t))
      + m(t)/2 + (2/π) Σ_{n=1}^{∞} [(−1)^{n−1}/(2n − 1)] cos(2π(2n − 1)fc t) m(t).   (3.9)

Clearly, the desired AM signal is

(Ac/2) cos(2πfc t) + (2/π) cos(2πfc t) m(t).   (3.10)
3. (Haykin 1983) Consider the AM signal

s(t) = [1 + 2 cos(2π f m t)] cos(2π f c t). (3.11)

The AM signal s(t) is applied to an ideal envelope detector, producing output


v(t). Determine the real (not complex) Fourier series representation of v(t).

• Solution: The output of the ideal envelope detector is given by

v(t) = |1 + 2 cos(2π f m t)| , (3.12)

which is periodic with a period of 1/ f m . The envelope detector output, v(x)


is plotted in Fig. 3.5 where

x = 2π f m t. (3.13)

Fig. 3.5 Output of the ideal envelope detector, v(x) = |1 + 2cos(x)|, plotted versus x

Now

1 + 2 cos(x) < 0 for π − π/3 < x < π + π/3. (3.14)

This implies that



1 + 2 cos(x) for 0 < |x| < π − π/3
v(x) = (3.15)
−1 − 2 cos(x) for π − π/3 < |x| < π

Since v(x) is an even function, it has a Fourier series representation given by:


v(x) = a0 + 2 an cos(nx), (3.16)
n=1

where
a0 = (2/(2π)) ∫_{x=0}^{π−π/3} (1 + 2cos(x)) dx − (2/(2π)) ∫_{x=π−π/3}^{π} (1 + 2cos(x)) dx
   = 1/3 + 2√3/π.   (3.17)

Similarly

an = (2/(2π)) ∫_{x=0}^{π−π/3} (1 + 2cos(x)) cos(nx) dx − (2/(2π)) ∫_{x=π−π/3}^{π} (1 + 2cos(x)) cos(nx) dx
   = (2/π) [sin(2nπ/3)/n + sin(2(n − 1)π/3)/(n − 1) + sin(2(n + 1)π/3)/(n + 1)].   (3.18)
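Remark: the coefficients can be verified by direct numerical integration of v(x) = |1 + 2cos(x)|, as in the sketch below (n = 1 requires a limiting form of (3.18) and is omitted here).

import numpy as np

x = np.linspace(-np.pi, np.pi, 2_000_001)
v = np.abs(1 + 2 * np.cos(x))

print("a0:", np.trapz(v, x) / (2 * np.pi), 1 / 3 + 2 * np.sqrt(3) / np.pi)
for n in (2, 3, 4):
    an_num = np.trapz(v * np.cos(n * x), x) / (2 * np.pi)
    an_closed = (2 / np.pi) * (np.sin(2 * n * np.pi / 3) / n
                               + np.sin(2 * (n - 1) * np.pi / 3) / (n - 1)
                               + np.sin(2 * (n + 1) * np.pi / 3) / (n + 1))
    print(f"a{n}:", an_num, an_closed)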

4. Consider the AM signal

s(t) = [1 + 3 sin(2π f m t + α)] sin(2π f c t + θ). (3.19)

The AM signal s(t) is applied to an ideal envelope detector, producing output


v(t).
(a) Find v(t).
(b) Find α, 0 ≤ α < π, such that the real Fourier series representation of v(t)
contains only dc and cosine terms.
(c) Determine the real Fourier series representation of v(t) for α obtained in
(b).
Fig. 3.6 Output of the ideal envelope detector for α = π/2, v(x) = |1 + 3cos(x)|, plotted versus x

• Solution: The output of the ideal envelope detector is given by



v(t) = |1 + 3 sin(2πfm t + α)| √(sin²(θ) + cos²(θ))
     = |1 + 3 sin(2πfm t + α)|   (3.20)

which is periodic with a period of 1/ f m . The envelope detector output, v(x)


is plotted in Fig. 3.6 where

x = 2π f m t. (3.21)

Note that v(x) is periodic with a period 2π. If the real Fourier series represen-
tation of v(x) is to contain only dc and cosine terms, we require:

α = π/2. (3.22)

Therefore

v(x) = |1 + 3 cos(x)|
∞
= a0 + 2 an cos(nx). (3.23)
n=1

Let

v(x1 ) = |1 + 3 cos(x1 )|
=0
⇒ x1 = π − cos−1 (1/3)
= 1.911 rad. (3.24)

Now
a0 = (2/(2π)) ∫_{x=0}^{x1} (1 + 3cos(x)) dx − (2/(2π)) ∫_{x=x1}^{π} (1 + 3cos(x)) dx
   = (4x1 − 2π + 12 sin(x1))/(2π)
   = 2.02   (3.25)

and

an = (2/(2π)) ∫_{x=0}^{x1} (1 + 3cos(x)) cos(nx) dx − (2/(2π)) ∫_{x=x1}^{π} (1 + 3cos(x)) cos(nx) dx
   = (2/π) sin(nx1)/n + (3/π) sin((n − 1)x1)/(n − 1) + (3/π) sin((n + 1)x1)/(n + 1) for n > 1   (3.26)

and

a1 = (2/(2π)) ∫_{x=0}^{x1} (1 + 3cos(x)) cos(x) dx − (2/(2π)) ∫_{x=x1}^{π} (1 + 3cos(x)) cos(x) dx
   = (2/π) sin(x1) + (3/π) x1 + (3/π) sin(2x1)/2 − 3/2.   (3.27)
5. (Haykin 1983) The AM signal

s(t) = Ac [1 + ka m(t)] cos(2π f c t) (3.28)

is applied to the system in Fig. 3.7. Assuming that |ka m(t)| < 1 for all t and m(t)
is bandlimited to [−W, W], and that the carrier frequency fc > 2W, show
that m(t) can be obtained from the square rooter output, v3 (t).
Fig. 3.7 Nonlinear demodulation of AM signals: s(t) is applied to a squarer (output v1(t)), a lowpass filter (output v2(t)) and a square rooter (output v3(t)) in cascade
Fig. 3.8 Spectrum of the message signal: M(f) has unit height and is confined to −W ≤ f ≤ W

• Solution: We have

v1(t) = (Ac²/2) [1 + cos(4πfc t)] [1 + ka m(t)]².   (3.29)

We also have (assuming an ideal LPF)

v2(t) = (Ac²/2) [1 + ka m(t)]².   (3.30)

Therefore

v3(t) = (Ac/√2) [1 + ka m(t)].   (3.31)

6. (Haykin 1983) Consider a message signal m(t) with spectrum shown in Fig. 3.8,
with W = 1 kHz. This message is DSB-SC modulated using a carrier of the form
Ac cos(2π f c t), producing the signal s(t). The modulated signal is next applied
to a coherent detector with carrier A0 cos(2π f c t). Determine the spectrum of the
detector output when the carrier is (a) f c = 1.25 kHz (b) f c = 0.75 kHz. Assume
that the LPF in the demodulator is ideal with unity gain. What is the lowest carrier
frequency for which there is no aliasing (no overlap in the frequency spectrum)
in the modulated signal s(t)?

• Solution: We have

s(t) = Ac cos(2πfc t) m(t)
⇒ S(f) = (Ac/2) [M(f − fc) + M(f + fc)].   (3.32)
The block diagram of the coherent demodulator is shown in Fig. 3.9. We have
Fig. 3.9 A coherent demodulator for DSB-SC signals: s(t) is multiplied by A0 cos(2πfc t) to give g(t), which is passed through an ideal LPF of bandwidth W to give h(t)
Fig. 3.10 Spectrum at the outputs of the multiplier and the LPF for fc = 1.25 kHz: G(f) has a baseband component of height Ac A0/2 and components of height Ac A0/4 centered at ±2fc = ±2.5 kHz; H(f) is the undistorted baseband component, confined to |f| ≤ 1 kHz

g(t) = Ac A0 m(t) cos²(2πfc t)
     = (Ac A0 m(t)/2) [1 + cos(4πfc t)]
⇒ G(f) = (Ac A0/2) M(f) + (Ac A0/4) [M(f − 2fc) + M(f + 2fc)].   (3.33)

The spectrum of H ( f ) for f c = 1.25 and f c = 0.75 kHz are shown in


Figs. 3.10 and 3.11, respectively.
For no aliasing the following condition must be satisfied:

2 fc − W > W
⇒ f c > W. (3.34)
Fig. 3.11 Spectrum at the outputs of the multiplier and the LPF for fc = 0.75 kHz: the baseband component and the components centered at ±2fc = ±1.5 kHz overlap, so H(f) is an aliased (distorted) version of M(f)

7. (Ziemer and Tranter 2002) An amplitude modulator has output

s(t) = A cos(π400t) + B cos(π360t) + B cos(π440t). (3.35)

The carrier power is 100 W and the power efficiency (ratio of sideband power
to total power) is 40%. Compute A, B and the modulation factor μ.
• Solution: The general expression for a tone modulated AM signal is:

s(t) = Ac [1 + μ cos(2πfm t)] cos(2πfc t)
     = Ac cos(2πfc t) + (μAc/2) cos(2π(fc − fm)t) + (μAc/2) cos(2π(fc + fm)t).   (3.36)
In the given problem

A = Ac
B = μA/2
fc = 200 Hz
fc − fm = 180 Hz
fc + fm = 220 Hz.   (3.37)

The carrier power is given by:

A²/2 = 100 W
⇒ A = √200.   (3.38)

Since the given AM wave is tone modulated, the power efficiency is given by:

μ²/(2 + μ²) = 2/5
⇒ μ = 1.155.   (3.39)

The constant B is given by:

B = μA/2 = 8.165.   (3.40)
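Remark: the arithmetic is summarized in the short Python sketch below (not part of the original solution).

import numpy as np

carrier_power = 100.0                 # A^2/2 = 100 W
efficiency = 0.40                     # mu^2/(2 + mu^2) = 0.4
A = np.sqrt(2 * carrier_power)        # sqrt(200)
mu = np.sqrt(2 * efficiency / (1 - efficiency))
B = mu * A / 2
print(f"A = {A:.3f}, mu = {mu:.3f}, B = {B:.3f}")   # 14.142, 1.155, 8.165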
8. Consider a modulating wave m(t) such that (1 + ka m(t)) > 0 for all t. Assume
that the spectrum of m(t) is zero for | f | > W . Let

s(t) = Ac [1 + ka m(t)] cos(2π f c t), (3.41)

where f c > W . The modulated wave s(t) is applied to a series combination of


an ideal full-wave rectifier and an ideal lowpass filter. The transfer function of
the ideal lowpass filter is:

1 for | f | < W
H( f ) = (3.42)
0 otherwise.

Compute the time-domain output v(t) of the lowpass filter.


• Solution: The output of the full-wave rectifier is given by:

|s(t)| = Ac [1 + ka m(t)]| cos(2π f c t)|. (3.43)

Let

2π f c t = x. (3.44)

Then | cos(x)| has a Fourier series representation:




|cos(x)| = a0 + 2 Σ_{n=1}^{∞} (an cos(nx) + bn sin(nx)).   (3.45)

Note that bn = 0 since | cos(x)| is an even function. Moreover | cos(x)| is


periodic with period π. The coefficients an are given by:

a0 = (1/π) ∫_{x=−π/2}^{π/2} cos(x) dx = 2/π
an = (1/π) ∫_{x=−π/2}^{π/2} cos(x) cos(nx) dx = 0 for n = 2m + 1, and K_n for n = 2m,   (3.46)

where K n is given by:

K_n = [1/(π(n − 1))] sin((n − 1)π/2) + [1/(π(n + 1))] sin((n + 1)π/2).   (3.47)

Thus the output of the LPF is clearly:

v(t) = Ac [1 + ka m(t)] a0 = (2Ac/π) [1 + ka m(t)].   (3.48)
9. (Haykin 1983) This problem is related to the two-stage approach for generating
SSB (frequency discrimination method) signals. A voice signal m(t) occupying
the frequency band 0.3–3.4 kHz is to be SSB modulated with only the upper
sideband transmitted. Assume the availability of bandpass filters which provide
an attenuation of 50 dB in a transition band that is one percent of the center
frequency of the bandpass filter, as illustrated in Fig. 3.12a. Assume that the first
stage eliminates the lower sideband and the product modulator in the second
stage uses a carrier frequency of 11.6 MHz. The message spectrum is shown
in Fig. 3.12b and the spectrum of the final SSB signal must be as shown in
Fig. 3.12c.
Find the range of the carrier frequencies that can be used by the product modulator
in the first stage, so that the unwanted sideband is attenuated by no less than 50 dB.

• Solution: Firstly, we note that if only a single stage were employed, the
required Q-factor of the BPF would be very high, since

Q-factor ≈ (centre frequency)/(transition bandwidth).   (3.49)
Secondly, we would like to align the center frequency of the bandpass filter
in the center of the message band to be transmitted. The transmitted message
band could be the upper or the lower sideband. Let us denote the carrier
frequency of the first stage by f 1 and that of the second stage by f 2 . Then, the
Fig. 3.12 a Magnitude response of the bandpass filters: 0 dB in the passband, falling to −50 dB over a transition band of width 0.01 fc about each band edge. b Message spectrum, occupying 0.3–3.4 kHz. c Spectrum of the final SSB signal

center frequency of the BPF at the second stage is given by (see Fig. 3.13):

fc2 = (1/2) [f2 + f1 + fa + f2 + f1 + fb].   (3.50)
In the above equation:

f 2 = 11600 kHz
f a = 0.3 kHz
f b = 3.4 kHz. (3.51)

The transition bandwidth of the second BPF is 0.01 f c2 . The actual transition
bandwidth at the output of the first SSB modulator is 2( f 1 + f a ). We require
that

2( f 1 + f a ) ≥ 0.01 f c2
⇒ f 1 (1 − 0.005) ≥ 58 + 0.01 × 0.925 − 0.3
f 1 ≥ 57.999 kHz. (3.52)

The center frequency of the first BPF is given by:

fc1 = (1/2) [f1 + fa + f1 + fb] = f1 + 1.85.   (3.53)
Fig. 3.13 a Message spectrum, occupying fa ≤ |f| ≤ fb. b SSB (upper sideband) signal at the output of the first stage, occupying f1 + fa ≤ |f| ≤ f1 + fb with center frequency fc1. c SSB signal at the output of the second stage, occupying f2 + f1 + fa ≤ |f| ≤ f2 + f1 + fb with center frequency fc2

The transition bandwidth of the first BPF is 0.01 f c1 and the actual transition
bandwidth of the input message is 2 f a . We require that:

2 f a ≥ 0.01 f c1
⇒ f 1 ≤ 58.15 kHz. (3.54)

Thus the range of f 1 is given by 57.999 ≤ f 1 ≤ 58.15 kHz.

10. (Haykin 1983) This problem is related to the two-stage approach for generating
SSB signals. A voice signal m(t) occupying the frequency band 0.3–3 kHz is to
be SSB modulated with only the lower sideband transmitted. Assume the avail-
ability of bandpass filters which provide an attenuation of 60 dB in a transition
band that is two percent of the center frequency of the bandpass filter, as illus-
trated in Fig. 3.14a. Assume that the first stage eliminates the upper sideband and
the product modulator in the second stage uses a carrier frequency of 1 MHz.
Fig. 3.14 a Magnitude response of the bandpass filters: 0 dB in the passband, falling to −60 dB over a transition band of width 0.02 fc about each band edge. b Message spectrum, occupying 0.3–3 kHz. c Spectrum of the final SSB signal

The message spectrum is shown in Fig. 3.14b and the spectrum of the final SSB
signal must be as shown in Fig. 3.14c.

(a) Find the range of carrier frequencies that can be used by the product mod-
ulator in first stage, so that the unwanted sideband is attenuated by no less
than 60 dB.
(b) Write down the expression of the SSB signal s(t) at the output of the second
stage, in terms of m(t). Clearly specify the carrier frequency of s(t) in terms
of the carrier frequency of the two product modulators.

• Solution: Firstly, we note that if only a single stage were employed, the
required Q-factor of the BPF would be very high, since

Q-factor ≈ (centre frequency)/(transition bandwidth).   (3.55)
Secondly, we would like to align the center frequency of the bandpass filter
in the center of the message band to be transmitted. The transmitted message
band could be the upper or the lower sideband. Let us denote the carrier
frequency of the first stage by f 1 and that of the second stage by f 2 . Then, the
center frequency of the BPF at the second stage is given by (see Fig. 3.15):

fc2 = (1/2) [ f2 + f1 − fb + f2 + f1 − fa ].    (3.56)
In the above equation:
Fig. 3.15 a Message spectrum. b SSB signal at the output of the first stage. c SSB signal at the output of the second stage

f 2 = 1000 kHz
f a = 0.3 kHz
f b = 3.0 kHz. (3.57)

The transition bandwidth of the second BPF is 0.02 f c2 . The actual transition
bandwidth at the output of the first SSB modulator is 2( f 1 − f b ). We require
that

2( f 1 − f b ) ≥ 0.02 f c2
⇒ f 1 − f b ≥ 0.01[ f 2 + f 1 − 0.5( f a + f b )]
f 1 ≥ 13.1146 kHz. (3.58)

The center frequency of the first BPF is given by:


Fig. 3.16 Receiver for a frequency division multiplexing system (the input is multiplied by cos(2π f0 t) from a variable frequency oscillator, bandpass filtered, and envelope detected)

Fig. 3.17 Input signal to the FDM receiver in Fig. 3.16 (two AM spectra M1(f) and M2(f), centered at fc1 and fc2, each of bandwidth 2W)

fc1 = (1/2) [ f1 − fa + f1 − fb ] = f1 − 1.65.    (3.59)

The transition bandwidth of the first BPF is 0.02 f c1 and the actual transition
bandwidth of the input message is 2 f a . We require that:

2 f a ≥ 0.02 f c1
⇒ f 1 ≤ 31.65 kHz. (3.60)

Thus, the range of f 1 is given by 13.1146 ≤ f 1 ≤ 31.65 kHz.


The final SSB signal is given by:

s(t) = m(t) cos(2π f c t) + m̂(t) sin(2π f c t), (3.61)

where f c = f 2 + f 1 − f a + f a = f 2 + f 1 .

11. Consider the receiver in Fig. 3.16, for a frequency division multiplexing (FDM)
system. An FDM system is similar to the simultaneous radio broadcast by many
stations. Assume that the BPF is ideal with a bandwidth of 2W = 10 kHz.

(a) Now consider the input signal in Fig. 3.17. Assume that the input signals are
amplitude modulated, f 0 = 1.4 MHz, f c1 = 1.1 MHz, and f c2 = 1.7 MHz.

Assume that the BPF is centered at 300 kHz. Find out the expression for the
signal at the output of the BPF. The frequency f c2 is called the image of f c1 .
(b) Now assume that f c1 = 1.1 MHz is the desired carrier and f c2 = 1.7 MHz
is to be rejected. Assume that f 0 can only be varied between f c1 and f c2 and
that the center frequency of the BPF cannot be less than 200 kHz (this is
required for proper envelope detection) and cannot be greater than 300 kHz
(so that the Q-factor of the BPF is within practical limits). Find out the
permissible range of values f 0 can take. Also find out the corresponding
permissible range of values of the center frequency of the BPF.

• Solution: Let the input signal be given by:

s(t) = A1 [1 + ka1 m 1 (t)] cos(2π f c1 t) + A2 [1 + ka2 m 2 (t)] cos(2π f c2 t).


(3.62)

The output of the multiplier can be written as:

s(t) cos(2π f0 t) = (A1/2) [1 + ka1 m1(t)] [cos(2π( f0 − fc1 )t) + cos(2π( f0 + fc1 )t)]
+ (A2/2) [1 + ka2 m2(t)] [cos(2π( fc2 − f0 )t) + cos(2π( fc2 + f0 )t)].    (3.63)

The output of the BPF is:

y(t) = (A1/2) [1 + ka1 m1(t)] cos(2π( f0 − fc1 )t) + (A2/2) [1 + ka2 m2(t)] cos(2π( fc2 − f0 )t).    (3.64)
Now consider Fig. 3.18. Denote the two difference (IF) frequencies by x and
B − x, where B = f c2 − f c1 . To reject B − x, we must have:

B − x − x > 2W
⇒ x < 295. (3.65)

Thus, the BPF center frequency can vary between 200 and 295 kHz. Corre-
spondingly, f 0 can vary between 1.3 and 1.395 MHz.
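The image-rejection condition can be checked with a few lines of code (a sketch; the candidate centre frequencies are our own test values):

```python
fc1, fc2, two_W = 1100.0, 1700.0, 10.0       # kHz
for x in (200.0, 250.0, 295.0, 300.0):       # candidate BPF centre frequencies
    f0 = fc1 + x                             # LO tuned so that f0 - fc1 = x
    image_if = fc2 - f0                      # the image lands at B - x
    rejected = abs(image_if - x) >= two_W    # image band must clear the 2W-wide BPF
    print(f"x = {x:5.1f} kHz, f0 = {f0/1000:.3f} MHz, "
          f"image at {image_if:5.1f} kHz, rejected: {rejected}")
# Acceptable centre frequencies run from 200 to 295 kHz, i.e. f0 from 1.3 to 1.395 MHz.
```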

12. Consider the Costas loop for AM signals as shown in Fig. 3.19. Let the received
signal r (t) be given by:

r (t) = Ac [1 + ka m(t)] cos(2π f c t + θ) + w(t), (3.66)


Fig. 3.18 Figure to illustrate the condition to be satisfied by the two difference frequencies, so that one of them is rejected by the BPF

Fig. 3.19 Costas loop for AM signals (the received signal r(t) is multiplied by cos(2π fc t + α) and sin(2π fc t + α) from the VCO in the in-phase and quadrature arms, integrated to give z1(T) and z2(T), which drive the phase detector output y′(T))

where w(t) is zero-mean additive white Gaussian noise (AWGN) with psd N0 /2.
The random variable z1(T) is computed as:

z1(T) = (1/T) ∫₀ᵀ x1(t) dt.    (3.67)

The random variable z2(T) is similarly computed. The random variable y′(T) is computed as:

y′(T) = z2(T) / z1(T).    (3.68)

Assume that:
(a) θ and α are uniformly distributed random variables.
(b) α − θ is a constant and close to zero.
(c) w(t) and α are statistically independent.
(d) m(t) has zero mean (zero dc).
(e) T is large, hence the integrator output is a dc term plus noise.
(f) The signal-to-noise ratio (SNR) at the input to the phase detector is high.
Compute the mean and the variance of the random variables z1(T), z2(T) and y′(T).
• Solution: We have
x1(t) = (Ac/2) [1 + ka m(t)] [cos(α − θ) + cos(4π fc t + α + θ)] + a1(t)
x2(t) = (Ac/2) [1 + ka m(t)] [sin(α − θ) + sin(4π fc t + α + θ)] + a2(t),    (3.69)

where

a1 (t) = w(t) cos(2π f c t + α)


a2 (t) = w(t) sin(2π f c t + α). (3.70)

The autocorrelation of a1 (t) and a2 (t) is given by:

Ra1(τ) = E[a1(t) a1(t − τ)] = (N0/4) δ(τ) = Ra2(τ).    (3.71)

Since w(t) and α are statistically independent

E[a1 (t)] = E[a2 (t)] = 0. (3.72)

The integrator outputs are:

z1(T) = (Ac/2) cos(α − θ) + b1
z2(T) = (Ac/2) sin(α − θ) + b2,    (3.73)
where

b1 = (1/T) ∫₀ᵀ a1(t) dt,    b2 = (1/T) ∫₀ᵀ a2(t) dt.    (3.74)

Then

E[b1] = (1/T) ∫₀ᵀ E[a1(t)] dt = 0 = E[b2].    (3.75)

The variance of b1 and b2 are given by:



E[b1²] = (N0/4) ∫_{−∞}^{∞} |H(f)|² df = (N0/4) ∫_{−∞}^{∞} h²(t) dt = (N0/(4T²)) ∫₀ᵀ dt = N0/(4T) = E[b2²],    (3.76)

where h(t) = 1/T for 0 ≤ t ≤ T (and zero otherwise) is the impulse response of the averaging operation in (3.74), and H(f) is its Fourier transform.

In the above equation we have used Rayleigh’s energy theorem. Thus:

E[z1(T)] = (Ac/2) cos(α − θ)
E[z2(T)] = (Ac/2) sin(α − θ)
var[z1(T)] = N0/(4T)
var[z2(T)] = N0/(4T).    (3.77)
The phase detector output is:

y′(T) = [(Ac/2) sin(α − θ) + b2] / [(Ac/2) cos(α − θ) + b1]
≈ α − θ + 2b2/Ac,    (3.78)
Fig. 3.20 Message spectrum (0.3–3.4 kHz)

where we have made the high-SNR approximation. Hence, the mean and variance of y′(T) are:

E[y′(T)] = α − θ
var[y′(T)] = (4/Ac²) · N0/(4T) = N0/(Ac² T).    (3.79)
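The statistics in (3.77) and (3.79) can be verified by simulation. The sketch below (our own sampling rate, tone message and noise level; not part of the original solution) averages over 2000 noise realizations:

```python
import numpy as np

rng = np.random.default_rng(0)
Ac, ka, fc, fm = 2.0, 0.5, 40e3, 1e3
theta, alpha = 0.30, 0.35                     # alpha - theta = 0.05 rad (small)
T, fs, N0 = 20e-3, 400e3, 1e-5                # w(t) has two-sided psd N0/2
t = np.arange(0, T, 1/fs)
m = np.cos(2*np.pi*fm*t)                      # zero-mean message

z1, z2 = [], []
for _ in range(2000):
    w = rng.normal(0, np.sqrt(N0/2*fs), t.size)          # discrete-time AWGN
    r = Ac*(1 + ka*m)*np.cos(2*np.pi*fc*t + theta) + w
    z1.append(np.mean(r*np.cos(2*np.pi*fc*t + alpha)))   # (1/T) integral of x1(t)
    z2.append(np.mean(r*np.sin(2*np.pi*fc*t + alpha)))   # (1/T) integral of x2(t)
z1, z2 = np.array(z1), np.array(z2)
y = z2/z1

print("E[z1]  =", z1.mean(), " theory:", Ac/2*np.cos(alpha-theta))
print("var[z1]=", z1.var(),  " theory:", N0/(4*T))
print("E[y']  =", y.mean(),  " theory:", alpha-theta)
print("var[y']=", y.var(),   " theory:", N0/(Ac**2*T))
```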

13. Consider a message signal m(t) whose spectrum extends over 300 Hz to 3.4 kHz,
as illustrated in Fig. 3.20. This message is SSB modulated to obtain:

s(t) = Ac m(t) cos(2π f c t) + Ac m̂(t) sin(2π f c t). (3.80)

At the receiver, s(t) is demodulated using a carrier of the form cos(2π( fc + Δf )t). Plot the spectrum of the demodulated signal at the output of the lowpass filter when
(a) Δf = 10 Hz,
(b) Δf = −10 Hz.
Assume that the lowpass filter is ideal with unity gain and extends over
[−4, 4] kHz, and f c = 20 kHz. Show all the steps required to arrive at the answer.
Indicate all the important points on the X Y -axes.

• Solution: The multiplier output can be written as:

s1(t) = s(t) cos(2π( fc + Δf )t)
= Ac [m(t) cos(2π fc t) + m̂(t) sin(2π fc t)] cos(2π( fc + Δf )t)
= (Ac/2) m(t) [cos(2π Δf t) + cos(2π(2 fc + Δf )t)]
+ (Ac/2) m̂(t) [sin(2π(2 fc + Δf )t) − sin(2π Δf t)].    (3.81)
The output of the lowpass filter is:
Fig. 3.21 Demodulated message spectrum in the presence of frequency offset: a Δf = 10 Hz (band edges at 0.31 and 3.41 kHz); b Δf = −10 Hz (band edges at 0.29 and 3.39 kHz)

s2(t) = (Ac/2) [m(t) cos(2π Δf t) − m̂(t) sin(2π Δf t)].    (3.82)
When Δf is positive, s2(t) is an SSB signal with carrier frequency Δf and the upper sideband transmitted. This is illustrated in Fig. 3.21a for Δf = 10 Hz. When Δf is negative, s2(t) is an SSB signal with carrier frequency |Δf| and the lower sideband transmitted. This is illustrated in Fig. 3.21b for Δf = −10 Hz.
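The effect is easy to reproduce numerically. In the sketch below (a single-tone stand-in for m(t); the sampling rate and tone frequency are our own choices), a 1 kHz tone reappears at 1 kHz + Δf after demodulation:

```python
import numpy as np

fs, fc, fm = 200e3, 20e3, 1e3
t = np.arange(0, 1.0, 1/fs)
m, mh = np.cos(2*np.pi*fm*t), np.sin(2*np.pi*fm*t)     # tone and its Hilbert transform
s = m*np.cos(2*np.pi*fc*t) + mh*np.sin(2*np.pi*fc*t)   # SSB signal of (3.80) with Ac = 1

f = np.fft.rfftfreq(t.size, 1/fs)
for df in (+10.0, -10.0):
    s1 = s*np.cos(2*np.pi*(fc + df)*t)                 # demodulating carrier with offset df
    S1 = np.abs(np.fft.rfft(s1))
    band = f < 4e3                                     # look only below the 4 kHz LPF cutoff
    print(f"df = {df:+.0f} Hz: recovered tone at {f[band][np.argmax(S1[band])]:.0f} Hz")
# The 1 kHz tone reappears at 1010 Hz for df = +10 Hz and at 990 Hz for df = -10 Hz.
```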

14. Compute the envelope of the following signal:

s(t) = Ac [1 + 2 cos(2π f m t)] cos(2π f c t + θ). (3.83)

• Solution: The signal can be expanded as:

s(t) = Ac [1 + 2 cos(2π f m t)] [cos(2π f c t) cos(θ) − sin(2π f c t) sin(θ)] .


(3.84)
Therefore the envelope is:

a(t) = Ac |1 + 2 cos(2π fm t)| √(cos²(θ) + sin²(θ)) = Ac |1 + 2 cos(2π fm t)|.    (3.85)

15. For the message signal shown in Fig. 3.22 compute the power efficiency (the
ratio of sideband power to total power) in terms of the amplitude sensitivity ka ,
A1 , A2 , and T .
The message is periodic with period T , has zero mean, and is AM modulated
Fig. 3.22 Message signal m(t) (peak A1, minimum −A2, period T)

Fig. 3.23 Message signal m(t), with the instant t1 marked

according to:

s(t) = Ac [1 + ka m(t)] cos(2π f c t). (3.86)

Assume that T ≫ 1/ fc.

• Solution: Since the message has zero mean, we must have:

(1/2) t1 A1 = (1/2) (T − t1) A2
⇒ t1 = A2 T / (A1 + A2),    (3.87)

where t1 is shown in Fig. 3.23. For a general AM signal given by

s(t) = Ac [1 + ka m(t)] cos(2π f c t), (3.88)

the power efficiency is:

η = (Ac² ka² Pm/2) / (Ac²/2 + Ac² ka² Pm/2) = ka² Pm / (1 + ka² Pm),    (3.89)

where Pm denotes the message power. Here we only need to compute Pm . We


have
Fig. 3.24 Block diagram of a chopper-stabilized dc amplifier (m(t) is multiplied by the square wave c(t), passed through the ac amplifier, multiplied by c(t) again, and lowpass filtered to give Am(t); c(t) takes the values ±1 with period 1/fc)

Pm = (1/T) ∫₀ᵀ m²(t) dt,    (3.90)

where

m(t) = A1 t/t1 for 0 < t < t1, and m(t) = A2 (t − T)/(T − t1) for t1 < t < T.    (3.91)

Substituting for m(t) in (3.90) we get:

Pm = (A1²/(T t1²)) ∫₀^{t1} t² dt + (A2²/(T (T − t1)²)) ∫_{t1}^{T} (t − T)² dt
= A1² t1/(3T) + A2² (T − t1)/(3T)
= A1 A2/3,    (3.92)
where we have substituted for t1 from (3.87).
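The closed form Pm = A1 A2/3 can be checked numerically (the values of A1, A2, T and ka below are our own test choices):

```python
import numpy as np

A1, A2, T, ka = 3.0, 1.0, 1e-3, 0.2
t1 = A2*T/(A1 + A2)                          # zero-mean condition, Eq. (3.87)
t = np.linspace(0, T, 1_000_000, endpoint=False)
m = np.where(t < t1, A1*t/t1, A2*(t - T)/(T - t1))

Pm = np.mean(m**2)
print("mean of m(t):", np.mean(m))           # close to zero
print("Pm numerical:", Pm, " closed form A1*A2/3:", A1*A2/3)
print("power efficiency eta:", ka**2*Pm/(1 + ka**2*Pm))   # Eq. (3.89)
```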

16. (Haykin 1983) Figure 3.24 shows the block diagram of a chopper-stabilized dc
amplifier. It uses a multiplier and an ac amplifier, which shifts the spectrum of
the input signal from the vicinity of zero frequency to the vicinity of the carrier
frequency ( f c ). The signal at the output of the ac amplifier is then coherently
demodulated. The carrier c(t) is a square wave as indicated in the figure.

(a) Specify the frequency response of the ac amplifier so that there is no distor-
tion in the LPF output, assuming that m(t) is bandlimited to −W < f < W .
Specify also the relation between f c and W .
(b) Determine the overall gain of the system ( A), assuming that the ac amplifier
has a gain of K and the lowpass filter is ideal having unity gain.
• Solution: The square wave c(t) can be written as:

c(t) = (4/π) Σ_{n=1}^{∞} [(−1)^{n−1}/(2n − 1)] cos(2π fc (2n − 1) t).    (3.93)

The ac amplifier can have the frequency response of an ideal bandpass filter,
with a gain of K and bandwidth f c − W < | f | < f c + W . Observe that for
no aliasing of the spectrum centered at f c , we require:

fc + W < 3 fc − W
⇒ W < fc . (3.94)

Thus, the output of the ac amplifier is:

s(t) = (4K/π) m(t) cos(2π fc t).    (3.95)
The output of the lowpass filter would be:

s(t)c(t) −LPF→ (16K/(2π²)) m(t) = (8K/π²) m(t).    (3.96)

Therefore, the overall gain of the system is 8K /π 2 .
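A short simulation confirms the overall gain (a sketch with our own test tone and FFT-based ideal filters; the square wave is truncated to 25 harmonics, which does not change the result because only the fundamental contributes to the final output):

```python
import numpy as np

fs, fc, W, K = 1e6, 10e3, 1e3, 5.0
t = np.arange(0, 0.1, 1/fs)
m = np.cos(2*np.pi*400*t)                            # test message, 400 Hz < W

# square-wave carrier c(t), Eq. (3.93), truncated to 25 harmonics (all below Nyquist)
c = sum((4/np.pi)*(-1)**(n-1)/(2*n-1)*np.cos(2*np.pi*fc*(2*n-1)*t) for n in range(1, 26))

def brickwall(x, f_lo, f_hi):                        # ideal (FFT) bandpass filter
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1/fs)
    X[(f < f_lo) | (f > f_hi)] = 0
    return np.fft.irfft(X, x.size)

v = K*brickwall(m*c, fc - W, fc + W)                 # ac amplifier: ideal BPF with gain K
y = brickwall(v*c, 0, W)                             # second chopping + lowpass filter
print("measured gain:", np.max(np.abs(y))/np.max(np.abs(m)), " theory 8K/pi^2:", 8*K/np.pi**2)
```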

17. (Haykin 1983) Consider the phase discrimination method of generating an SSB
signal. Let the message be given by:

m(t) = Am cos(2π f m t) (3.97)

and the SSB signal be given by:

s(t) = m(t) cos(2π f c t) − m̂(t) sin(2π f c t). (3.98)

Determine the ratio of the amplitude of the undesired side-frequency component


to that of the desired side-frequency component when the modulator deviates
from the ideal condition due to the following factors considered one at a time:
(a) The Hilbert transformer introduces a phase lag of π/2 − δ instead of π/2.
(b) The terms m(t) and m̂(t) are given by

m(t) = a Am cos(2π f m t)
m̂(t) = b Am sin(2π f m t). (3.99)

(c) The carrier signal applied to the product modulators are not in phase quadra-
ture, that is:

c1 (t) = cos(2π f c t)
c2 (t) = sin(2π f c t + δ). (3.100)

• Solution: In the first case, we have

m̂(t) = Am cos(2π f m t − π/2 + δ)


= Am sin(2π f m t + δ). (3.101)

Thus

s(t) = Am cos(2π fm t) cos(2π fc t) − Am sin(2π fm t + δ) sin(2π fc t)
= (Am/2) [cos(2π( fc + fm )t) + cos(2π( fc − fm )t)]
− (Am/2) [cos(2π( fc − fm )t − δ) − cos(2π( fc + fm )t + δ)].    (3.102)

The desired side frequency is f c + f m and the undesired side frequency is


f c − f m . The phasor diagram for the signal in (3.102) is shown in Fig. 3.25.
Clearly, the required ratio is R2 /R1 where

R1 = √[(Am/2)² (1 + cos(δ))² + (Am sin(δ)/2)²]
R2 = √[(Am/2)² (1 − cos(δ))² + (Am sin(δ)/2)²].    (3.103)

For the second case s(t) is given by:

s(t) = a Am cos(2π fm t) cos(2π fc t) − b Am sin(2π fm t) sin(2π fc t)
= (a Am/2) [cos(2π( fc − fm )t) + cos(2π( fc + fm )t)]
− (b Am/2) [cos(2π( fc − fm )t) − cos(2π( fc + fm )t)].    (3.104)
Thus, the ratio of the amplitude of the undesired sideband to the desired
sideband is
R = (a − b)/(a + b).    (3.105)
Fig. 3.25 Phasor diagram for the signal in (3.102)

For the third case s(t) is given by:

s(t) = Am cos(2π fm t) cos(2π fc t) − Am sin(2π fm t) sin(2π fc t + δ)
= (Am/2) [cos(2π( fc − fm )t) + cos(2π( fc + fm )t)]
− (Am/2) [cos(2π( fc − fm )t + δ) − cos(2π( fc + fm )t + δ)].    (3.106)

Again from the phasor diagram in Fig. 3.26, we see that the ratio of the ampli-
tude of the undesired sideband to the desired sideband is R2 /R1 where R1 and
R2 are given by (3.103).
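For the phase-error cases, the ratio R2/R1 from (3.103) reduces to |tan(δ/2)|, which the following sketch (our own δ values) confirms numerically:

```python
import numpy as np

Am = 1.0
for deg in (0.5, 1.0, 5.0, 10.0):
    d = np.radians(deg)
    R1 = np.hypot((Am/2)*(1 + np.cos(d)), (Am/2)*np.sin(d))   # wanted side frequency
    R2 = np.hypot((Am/2)*(1 - np.cos(d)), (Am/2)*np.sin(d))   # unwanted side frequency
    print(f"delta = {deg:4.1f} deg: R2/R1 = {R2/R1:.5f}, "
          f"tan(delta/2) = {np.tan(d/2):.5f}, suppression = {20*np.log10(R1/R2):.1f} dB")
```

A 1° phase error therefore leaves the unwanted side frequency roughly 41 dB below the wanted one.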

18. Let the message be given by:

m(t) = Am cos(2π f m t) + Am sin(2π f m t), (3.107)

and the SSB signal be given by:

s(t) = m(t) cos(2π f c t) + m̂(t) [sin(2π f c t) + a sin(4π f c t)] . (3.108)


Fig. 3.26 Phasor diagram for the signal in (3.106)

Observe that the quadrature carrier contains harmonic distortion.


Determine the ratio of the power of the undesired frequency component(s) to
that of the desired frequency component(s).
• Solution: We have:

m(t) = Am cos(2π fm t) + Am sin(2π fm t) = Am √2 cos(2π fm t − π/4)
⇒ m̂(t) = Am √2 cos(2π fm t − π/4 − π/2) = Am √2 sin(2π fm t − π/4).    (3.109)

Therefore

s(t) = Am √2 cos(2π fm t − π/4) cos(2π fc t) + Am √2 sin(2π fm t − π/4) [sin(2π fc t) + a sin(4π fc t)]
= Am √2 cos(2π( fc − fm )t + π/4)
+ (a Am/√2) cos(2π(2 fc − fm )t + π/4)
− (a Am/√2) cos(2π(2 fc + fm )t − π/4).    (3.110)
Fig. 3.27 Block diagram of an AM system (the periodic triangular pulse train gp(t), of peak A, half-width T and period T0, is passed through the LPF H(f) with gain 2 over |f| ≤ 4/T0 to give m1(t), through a dc blocking capacitor to give m(t), and then AM modulated to give s(t))

Clearly, the desired frequency is f c − f m and the undesired frequencies are


2 f c ± f m . Therefore the required power ratio is:

R = (a² Am²/2) / (2 Am²/2) = a²/2.    (3.111)
19. Consider the system shown in Fig. 3.27. The input signal g p (t) is periodic with
a period of T0 . Assume that T /T0 = 0.5. The signal g p (t) is passed through an
ideal lowpass filter, that has a gain of two in the passband, as indicated in the
figure. The LPF output m 1 (t) is further passed through a dc blocking capacitor
to yield the message m(t). The capacitor acts as a short for the frequencies of
interest. This message is used to generate an AM signal given by

s(t) = Ac [1 + ka m(t)] cos(2π f c t). (3.112)

What is the critical value of ka , beyond which s(t) gets overmodulated?


• Solution: Consider the pulse:

g(t) = gp(t) for −T0/2 < t < T0/2, and g(t) = 0 elsewhere.    (3.113)

Clearly

d²g(t)/dt² = (A/T) (δ(t + T) − 2δ(t) + δ(t − T)).    (3.114)

Hence
G(f) = −(A/(4π² f² T)) (exp(j2π f T) − 2 + exp(−j2π f T))
= (AT/(π² f² T²)) sin²(π f T)
= AT sinc²(f T).    (3.115)

We also know that

gp(t) = Σ_{n=−∞}^{∞} cn exp(j2πnt/T0),    (3.116)

where

cn = (1/T0) ∫_{−∞}^{∞} g(t) exp(−j2πnt/T0) dt = (1/T0) G(n/T0).    (3.117)

Substituting for G(n/T0) we have

gp(t) = (AT/T0) Σ_{n=−∞}^{∞} sinc²(nT/T0) e^{j2πnt/T0}
= A/2 + (4A/π²) Σ_{m=1}^{∞} [1/(2m − 1)²] cos(2π(2m − 1)t/T0),    (3.118)

where we have used the fact that T /T0 = 0.5. The output of the dc blocking
capacitor is

m(t) = (8A/π²) cos(2πt/T0) + (8A/(9π²)) cos(6πt/T0),    (3.119)
where we have used the fact that the LPF has a gain of two in the frequency
range [−4/T0 , 4/T0 ]. The minimum value of m(t) is

m_min = −80A/(9π²)    (3.120)
and occurs at nT0 + T0 /2. To prevent overmodulation we must have:

1 + ka m_min ≥ 0
⇒ ka ≤ 9π²/(80A).    (3.121)
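A quick numerical check (A and T0 are our own test values) confirms the location and value of the minimum, and hence the critical ka:

```python
import numpy as np

A, T0 = 1.0, 1e-3
t = np.linspace(0, T0, 100001)
m = (8*A/np.pi**2)*np.cos(2*np.pi*t/T0) + (8*A/(9*np.pi**2))*np.cos(6*np.pi*t/T0)
print("numerical minimum:", m.min(), "at t/T0 =", t[np.argmin(m)]/T0)   # -0.9006 at 0.5
print("closed form -80A/(9 pi^2):", -80*A/(9*np.pi**2))
print("critical ka = 9 pi^2/(80A):", 9*np.pi**2/(80*A))
```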
Fig. 3.28 Block diagram of a scrambler (m(t), with spectrum M(f) confined to |f| ≤ 5000 Hz, is multiplied by 2 cos(10000πt) and passed through the lowpass filter H(f), of gain 2 over |f| ≤ 5000 Hz, to give y(t))

20. The system shown in Fig. 3.28 is used for scrambling audio signals. The output
y(t) is the scrambled version of the input m(t). The spectrum (Fourier transform)
of m(t) and the lowpass filter (H ( f )) are as shown in the figure.
(a) Draw the spectrum of y(t). Label all important points on the x- and y-axis.
(b) The output y(t) corresponds to a particular modulation scheme (i.e., AM,
FM, SSB with lower/upper sideband transmitted, VSB, DSB-SC). Which
modulation scheme is it?
(c) Write down the precise expression for y(t) in terms of m(t), corresponding
to the spectrum computed in part (a).
(d) Suggest a method for recovering m(t) (not km(t), where k is a constant)
from y(t).

• Solution: The multiplier output is given by:

y1 (t) = 2m(t) cos(2π f c t), (3.122)

where f c = 5 kHz. The Fourier transform of y1 (t) is

Y1 ( f ) = M( f − f c ) + M( f + f c ) (3.123)

The spectrum of y1 (t) and y(t) is shown in Fig. 3.29.


From the spectrum of y(t), we can conclude that it is an SSB signal centered
at f c , with the lower sideband transmitted. Hence we have

y(t) = km(t) cos(2π f c t) + k m̂(t) sin(2π f c t), (3.124)

where k is a constant to be found out. The Fourier transform of y(t) is


Fig. 3.29 a Y1(f). b Y(f). c Procedure for recovering m(t)

Y(f) = (k/2) [M(f − fc) + M(f + fc)] + (k/2) [−sgn(f − fc) M(f − fc) + sgn(f + fc) M(f + fc)]
= (k/2) M(f − fc) [1 − sgn(f − fc)] + (k/2) M(f + fc) [1 + sgn(f + fc)].    (3.125)

Comparing the above equation with Fig. 3.29b, we conclude that k = 2. There-
fore

y(t) = 2m(t) cos(2π f c t) + 2m̂(t) sin(2π f c t) (3.126)

A method for recovering m(t) is shown in Fig. 3.29c.
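The scrambler and the recovery scheme of Fig. 3.29c can be demonstrated with a two-tone stand-in for the voice signal (a sketch; the sampling rate, tone frequencies and FFT-based filters are our own choices):

```python
import numpy as np

fs, fc = 50e3, 5e3
t = np.arange(0, 0.2, 1/fs)
m  = np.cos(2*np.pi*1000*t) + 0.5*np.cos(2*np.pi*3000*t)
mh = np.sin(2*np.pi*1000*t) + 0.5*np.sin(2*np.pi*3000*t)   # Hilbert transform of m(t)
y = 2*m*np.cos(2*np.pi*fc*t) + 2*mh*np.sin(2*np.pi*fc*t)   # scrambled signal, Eq. (3.126)

def tones(x):                                              # frequencies of dominant peaks
    X = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(x.size, 1/fs)
    return [float(v) for v in sorted(f[X > 0.2*X.max()])]

def lowpass(x, cutoff):                                    # ideal (FFT) lowpass filter
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1/fs)
    X[f > cutoff] = 0
    return np.fft.irfft(X, x.size)

print("tones in y(t):          ", tones(y))                # 2 kHz and 4 kHz: spectrum inverted
r = lowpass(y*np.cos(2*np.pi*fc*t), 5e3)                   # descrambler of Fig. 3.29c
print("tones in recovered m(t):", tones(r))                # 1 kHz and 3 kHz restored
```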

21. Figure 3.30 shows the block diagram of a DSB-SC modulator. Note that the
oscillator generates cos3 (2π f c t). The message spectrum is also shown.
(a) Draw and label the spectrum (Fourier transform) of y(t).
Fig. 3.30 Block diagram of a DSB-SC modulator (m(t), with spectrum M(f) confined to |f| ≤ B, is multiplied by cos³(2π fc t) and filtered to give m(t) cos(2π fc t))
(b) Sketch the Fourier transform of a filter such that the output signal is
m(t) cos(2π f c t), where m(t) is the message. The filter should pass only
the required signal and reject all other frequencies.
(c) What is the minimum usable value of f c ?

• Solution: The multiplier output is given by:

y(t) = (3/4) m(t) cos(2π fc t) + (1/4) m(t) cos(6π fc t).    (3.127)
Therefore
Y(f) = (3/8) [M(f − fc) + M(f + fc)] + (1/8) [M(f − 3fc) + M(f + 3fc)].    (3.128)

The spectrum of y(t) and the bandpass filter is shown in Fig. 3.31. We must
also have the following relationship:

3 fc − B ≥ fc + B
⇒ f c ≥ B. (3.129)

Thus the minimum value of f c is B.

22. Consider the receiver of Fig. 3.32. Both bandpass filters are assumed to be ideal
with unity gain in the passband. The passband of BPF1 is in the range 500–
1500 kHz. The bandwidth of BPF2 is 10 kHz and it selects only the difference
frequency component. The input consists of a series of AM signals of bandwidth
10 kHz and spaced 10 kHz apart, occupying the band 500–1500 kHz as shown
in the figure.
When the LO frequency is 1505 kHz, the output signal is m 1 (t).
(a) Compute the center frequency of BPF2.
Fig. 3.31 Spectrum of y(t) and the bandpass filter

Fig. 3.32 Block diagram of a radio receiver (the input signal passes through BPF1, is mixed with the local oscillator (LO), filtered by BPF2 and envelope detected; the input contains AM signals M1(f), M2(f), ... occupying 500–1500 kHz in 10 kHz slots)

(b) What should be the LO frequency so that the last message (occupying the
band 1490–1500 kHz) is selected.
(c) Can the LO frequency be equal to 1 MHz and the center frequency of BPF2
equal to 495 kHz? Justify your answer.

• Solution: When the LO frequency is 1505 kHz, the difference frequency for
m 1 (t) is 1505 − 505 = 1000 kHz. Therefore, the center frequency of BPF2
must be 1000 kHz.
When the LO frequency is 1495 + 1000 = 2495 kHz, the last message is
selected.
When the LO frequency is 1 MHz and the center frequency of BPF2 is

equal to 495 kHz, then the output signal would be the sum of the first mes-
sage (1000 − 505 = 495 kHz) and the last message (image at 1495 − 1000 =
495 kHz). Hence this combination should not be used.

23. (Haykin 1983) Consider a multiplex system in which four input signals m 0 (t),
m 1 (t), m 2 (t), and m 3 (t) are, respectively, multiplied by the carrier waves

c0 (t) = cos(2π f a t) + cos(2π f b t)


c1 (t) = cos(2π f a t + α1 ) + cos(2π f b t + β1 )
c2 (t) = cos(2π f a t + α2 ) + cos(2π f b t + β2 )
c3 (t) = cos(2π f a t + α3 ) + cos(2π f b t + β3 ) (3.130)

to produce the multiplexed signal


s(t) = Σ_{i=0}^{3} mi(t) ci(t).    (3.131)

All the messages are bandlimited to [−W, W ]. At the receiver, the ith message
is recovered as follows:
s(t)ci(t) −LPF→ mi(t)    (3.132)

where the LPF is ideal with unity gain in the frequency band [−W, W ].
(a) Compute αi and βi , 0 ≤ i ≤ 3 for this to be feasible.
(b) Determine the minimum separation between f a and f b .

• Solution: Note that we require:



ci(t) cj(t) −LPF→ 0.5 [cos(αi − αj) + cos(βi − βj)] = 1 for i = j, and 0 for i ≠ j,    (3.133)

provided

| f b − f a | ≥ 2W
fa ≥ W
f b ≥ W. (3.134)

Thus the minimum frequency separation is 2W . Now, in order to ensure orthog-


onality between carriers we proceed as follows. Let us first consider c0 (t) and
c1 (t). Orthogonality is ensured when

α1 = β1 = π/2. (3.135)

Other solutions are also possible.


Let us now consider c2 (t). We get the following relations:

c2 (t) ⊥ c0 (t) ⇒ cos(α2 ) + cos(β2 ) = 0


c2 (t) ⊥ c1 (t) ⇒ cos(α2 − α1 ) + cos(β2 − β1 ) = 0
⇒ sin(α2 ) + sin(β2 ) = 0. (3.136)

The two equations in (3.136) imply that if α2 is in the 1st quadrant then β2
must be in the 3rd quadrant. Otherwise, if α2 is in the 2nd quadrant, then β2
must be in the 4th quadrant. Let us take α2 = π/4. Then β2 = 5π/4.
Finally we consider c3 (t). We get the relations:

c3 (t) ⊥ c0 (t) ⇒ cos(α3 ) + cos(β3 ) = 0


c3 (t) ⊥ c1 (t) ⇒ cos(α3 − α1 ) + cos(β3 − β1 ) = 0
⇒ sin(α3 ) + sin(β3 ) = 0
c3 (t) ⊥ c2 (t) ⇒ cos(α3 − α2 ) + cos(β3 − β2 ) = 0
⇒ cos(α3 ) − cos(β3 ) + sin(α3 ) − sin(β3 ) = 0. (3.137)

The set of equations in (3.137) can be reduced to:

cos(α3 ) = − sin(α3 )
cos(β3 ) = − sin(β3 ). (3.138)

Thus we conclude that α3 and β3 must be an odd multiple of π/4 and must
lie in the 2nd/4th quadrant and 4th/2nd quadrant, respectively. If we take
α3 = 3π/4 then β3 = 7π/4. Note that there are infinite solutions to αi and βi
(1 ≤ i ≤ 3).
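The phase assignments found above can be verified directly from (3.133) (a sketch using one admissible solution set):

```python
import numpy as np

alpha = [0.0, np.pi/2, np.pi/4, 3*np.pi/4]     # one admissible choice of alpha_i
beta  = [0.0, np.pi/2, 5*np.pi/4, 7*np.pi/4]   # and the corresponding beta_i

G = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        G[i, j] = 0.5*(np.cos(alpha[i] - alpha[j]) + np.cos(beta[i] - beta[j]))
print(np.round(G, 12))    # identity matrix: the four carriers are pairwise orthogonal
```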

24. Consider the system shown in Fig. 3.33a. The signal s(t) is given by:

Fig. 3.33 Demodulation using a Hilbert transformer: a s(t) and its Hilbert transform ŝ(t) form the complex input s(t) + j ŝ(t) to a complex multiplier driven by a local oscillator x(t) at frequency fl, producing y(t); b |M(f)|, confined to [−W, W]

s(t) = A1 m 1 (t) cos(2π f c t) − A2 m 2 (t) sin(2π f c t). (3.139)

Both m 1 (t) and m 2 (t) are real-valued and bandlimited to [−W, W ]. The carrier
frequency fc ≫ W. The inputs to the complex multiplier are s(t) + j ŝ(t) and
x(t) (which could be real or complex-valued and depends on the local oscillator
frequency fl ). The multiplier output is the complex-valued signal y(t). Let

m(t) = A1 m1(t) + j A2 m2(t) ⇌ M(f).    (3.140)

(a) Find fl and x(t) such that y(t) = m(t).


(b) The local oscillator that generates x(t) has a frequency fl = f c +  f . Find
y(t). Sketch |Y ( f )| when |M( f )| is depicted in Fig. 3.33b.

• Solution: We know that

ŝ(t) = A1 m 1 (t) cos(2π f c t − π/2) − A2 m 2 (t) sin(2π f c t − π/2)


= A1 m 1 (t) sin(2π f c t) + A2 m 2 (t) cos(2π f c t). (3.141)

Therefore

s(t) + j ŝ(t) = m(t)e j 2π fc t . (3.142)

If y(t) = m(t), then we must have fl = f c and

x(t) = e−j 2π fc t . (3.143)

If

x(t) = e^{−j2π( fc + Δf )t},    (3.144)

then

y(t) = m(t) e^{−j2π Δf t}
⇒ Y(f) = M(f + Δf)
⇒ |Y(f)| = |M(f + Δf)|.    (3.145)

The spectrum of |Y ( f )| is shown in Fig. 3.34.



25. An AM signal is given by:

s(t) = Ac [1 + ka m(t)] sin(2π f c t + θ). (3.146)

The message m(t) is real-valued, does not have a dc component and its one-
sided bandwidth is W ≪ fc. Explain how km(t), where k is a constant, can be
recovered from s(t) using a Hilbert transformer and other components.
• Solution: When s(t) is passed through a Hilbert transformer, its output is:

ŝ(t) = Ac [1 + ka m(t)] sin(2π f c t + θ − π/2)


= −Ac [1 + ka m(t)] cos(2π f c t + θ). (3.147)

Now

s(t) + j ŝ(t) = Ac [1 + ka m(t)] e j (2π fc t+θ−π/2) . (3.148)

The message m(t) can be recovered using the block diagram in Fig. 3.35. Note
that

y(t) = Ac [1 + ka m(t)] . (3.149)

26. Consider the Costas loop shown in Fig. 3.36. The input signal s(t) is

s(t) = m(t) cos(2π f c t), (3.150)

Fig. 3.34 Spectrum of |Y(f)| = |M(f + Δf)| (centered at −Δf, extending from −Δf − W to −Δf + W)

Fig. 3.35 Demodulating an AM signal using a Hilbert transformer (s(t) and ŝ(t) form s(t) + j ŝ(t), which is multiplied by e^{−j(2π fc t + θ − π/2)}; a dc blocking capacitor then yields Ac ka m(t))

Fig. 3.36 Costas loop (the message spectrum M(f) decreases linearly from 2 at f = 0 to 0 at |f| = W)

where m(t) is a real-valued message signal. The spectrum of m(t) is shown in


Fig. 3.36. The LPFs are ideal with a gain of two in the passband [−W, W ]. It is
given that fc ≫ W.
(a) Determine the signals x I (t), x Q (t), and y.
(b) For what values of θ will y be equal to zero?

• Solution: Clearly

2m(t) cos(2π f c t) cos(2π f c t + θ) = m(t) cos(θ) + 2 f c term


2m(t) cos(2π f c t) sin(2π f c t + θ) = m(t) sin(θ) + 2 f c term. (3.151)

Since the gain of the LPF is two in the passband

x I (t) = 2m(t) cos(θ)


x Q (t) = 2m(t) sin(θ). (3.152)

Hence

y = 2 sin(2θ) ∫_{−∞}^{∞} m²(t) dt
= 2 sin(2θ) ∫_{−∞}^{∞} |M(f)|² df
= 4 sin(2θ) ∫₀^W (−2f/W + 2)² df
= (16/3) W sin(2θ),    (3.153)
Fig. 3.37 Spectrum of each voice signal (0.3–3.3 kHz)

where we have used Rayleigh’s energy theorem. It is clear that y = 0 when
θ = kπ/2, where k is an integer.
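The energy integral in (3.153) is easily checked numerically (W below is our own test value); note that y vanishes at θ = 90°, one of the lock points:

```python
import numpy as np

W = 1000.0                                   # one-sided message bandwidth (test value)
f = np.linspace(0.0, W, 200001)
integral = 4*W*np.mean((-2*f/W + 2)**2)      # 4 * integral over [0, W] of (2 - 2f/W)^2
print("integral:", integral, " closed form 16W/3:", 16*W/3)
for deg in (10, 45, 90):
    th = np.radians(deg)
    print(f"theta = {deg:2d} deg -> y = {integral*np.sin(2*th):8.1f}")
```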

27. It is desired to transmit 400 voice signals using frequency division multiplexing
(FDM). The voice signals are SSB modulated with lower sideband transmitted.
The spectrum of each of the voice signals is illustrated in Fig. 3.37.
(a) What is the minimum carrier spacing required to multiplex all the signals,
such that there is no overlap of spectra.
(b) Using a single-stage approach and assuming a carrier spacing of 7 kHz and
a minimum carrier frequency of 200 kHz, determine the lower and upper
frequency limits occupied by the FDM signal containing 400 voice signals.
(c) Determine the spectrum (lower and upper frequency limits) occupied by the
100th voice signal.
(d) In the two-stage approach, the first stage uses SSB with lower sideband
transmitted. In the first stage, the voice signals are grouped into L blocks,
with each block containing K voice signals. The carrier frequencies in each
block in the first stage is given by 7n kHz, for 1 ≤ n ≤ K . Note that the
FDM signal using the two-stage approach must be identical to the single-
stage approach in (b).
i. Sketch the spectrum of each block at the output of the first stage.
ii. Specify the modulation required for each of the blocks in the second
stage.
iii. How many carriers are required to generate the composite FDM signal?
Express your answer in terms of L and K .
iv. Determine L and K such that the number of carriers is minimized.
v. Give the expression for the carrier frequencies required to modulate
each block in the second stage.

• Solution: The minimum carrier spacing required is 3 kHz.


If the minimum carrier frequency is 200 kHz, then the spectrum of the first
voice signal starts at 200 − 3.3 = 196.7 kHz. The carrier for the 400th voice
signal is at

200 + 399 × 7 = 2993 kHz. (3.154)



The spectrum of the 400th voice signal ends at 2993 − 0.3 = 2992.7 kHz.
Therefore, the spectrum of the FDM signal extends from 196.7 to 2992.7 kHz,
that is, 196.7 ≤ | f | ≤ 2992.7 kHz.
The carrier for the 100th voice signal is at

200 + 99 × 7 = 893 kHz. (3.155)

Hence, the spectrum of the 100th voice signal extends from 893 − 3.3 =
889.7 kHz to 893 − 0.3 = 892.7 kHz, that is, 889.7 ≤ | f | ≤ 892.7 kHz.
The spectrum of each block at the output of the first stage is shown in Fig. 3.38b.
The modulation required for each of the blocks in the second stage is SSB
with upper sideband transmitted.
For the second approach, let us first evaluate the number of carriers required
to obtain each block in the first stage. Clearly, the number of carriers required
is K . In the next stage, L carriers are required to translate each block to the
appropriate frequency band. Thus, the total number of carriers required is
L + K.
Now, we need to minimize L + K subject to the constraint L K = 400. The
problem can be restated as

min_K [400/K + K].    (3.156)

Differentiating with respect to K and setting the result to zero, we get the
solution as K = 20, L = 20.
The carrier frequencies required in the second stage is given by 200 − 7 +
7K (l − 1) for 1 ≤ l ≤ L.
The block diagram of the two-stage approach is given in Fig. 3.38a.

28. It is desired to transmit 500 voice signals using frequency division multiplexing
(FDM). The voice signals are DSB-SC modulated. The spectrum of each of the
voice signals is illustrated in Fig. 3.39.
(a) How many carrier signals are required to multiplex all the 500 voice signals?
(b) What is the minimum carrier spacing required to multiplex all the signals,
such that there is no overlap of spectra.
(c) Using a single-stage approach and assuming a carrier spacing of 10 kHz and
a minimum carrier frequency of 300 kHz, determine the lower and upper
frequency limits occupied by the FDM signal containing 500 voice signals.
(d) Determine the spectrum (lower and upper frequency limits) occupied by the
150th voice signal.
(e) It is desired to reduce the number of carriers using a two-stage approach.
The first stage uses DSB-SC modulation. In the first stage, the voice signals
are grouped into L blocks, with each block containing K voice signals. The
carrier frequencies in each block in the first stage is given by 10n kHz, for
Fig. 3.38 a Two-stage approach for obtaining the final FDM signal. b Spectrum of each block at the output of the first stage

Fig. 3.39 Spectrum of each voice signal (bandlimited to |f| ≤ 4 kHz)

1 ≤ n ≤ K . Note that the FDM signal using the two-stage approach must
be identical to the single-stage approach in (c).
i. Sketch the spectrum of each block at the output of the first stage.
ii. Specify the modulation required for each of the blocks in the second
stage, such that, the minimum carrier frequency used in the second
stage, is closest to 300 kHz.
iii. How many carriers are required to generate the composite FDM signal?
Express your answer in terms of L and K .
iv. Determine L and K such that the number of carriers is minimized.
v. Give the expression for the carrier frequencies required to modulate
each block in the second stage.

• Solution: The number of carriers required to multiplex all the voice signals is
500.
The minimum carrier spacing required is 8 kHz.
If the minimum carrier frequency is 300 kHz, then the spectrum of the first
voice signal starts at 300 − 4 = 296 kHz. The carrier for the 500th voice signal
is at

300 + 499 × 10 = 5290 kHz. (3.157)

The spectrum of the 500th voice signal ends at 5290 + 4 = 5294 kHz. There-
fore, the spectrum of the FDM signal extends from 296 to 5294 kHz, that is
296 ≤ | f | ≤ 5294 kHz.
The carrier for the 150th voice signal is at

300 + 149 × 10 = 1790 kHz. (3.158)

Hence the spectrum of the 150th voice signal extends from 1790 − 4 =
1786 kHz to 1790 + 4 = 1794 kHz, that is 1786 ≤ | f | ≤ 1794 kHz.
The spectrum of each block at the output of the first stage is shown in Fig. 3.40b.
The modulation required for each of the blocks in the second stage is SSB.
In order to ensure that the minimum carrier frequency in the second stage is
closest to 300 kHz, the upper sideband must be transmitted. Thus, the mini-
mum carrier frequency in the second stage is 300 − 10 = 290 kHz.
For the second approach, let us first evaluate the number of carriers required
to obtain each block in the first stage. Clearly, the number of carriers required
is K . In the next stage, L carriers are required to translate each block to the
appropriate frequency band. Thus, the total number of carriers required is
L + K.
Now, we need to minimize L + K subject to the constraint L K = 500. The
problem can be restated as

min_K [500/K + K].    (3.159)
Fig. 3.40 a Two-stage approach for obtaining the final FDM signal. b Spectrum of each block at the output of the first stage. c Various possible values of L and K:

L = 2, K = 250, L + K = 252;  L = 4, K = 125, L + K = 129;  L = 10, K = 50, L + K = 60;  L = 20, K = 25, L + K = 45;  L = 25, K = 20, L + K = 45;  L = 50, K = 10, L + K = 60;  L = 125, K = 4, L + K = 129;  L = 250, K = 2, L + K = 252

Differentiating with respect to K and setting the result to zero, we get the
solution as K = 22.36, which is not an integer. Hence, we need to obtain the
solution manually, as given in Fig. 3.40c. We find that there are two sets of
solutions: L = 20, K = 25, and L = 25, K = 20.
The carrier frequencies required in the second stage is given by 300 − 10 +
10K (l − 1) for 1 ≤ l ≤ L.
The block diagram of the two-stage approach is given in Fig. 3.40a.
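Since LK = 500 forces integer factor pairs, the minimization can also be done by brute force (a sketch reproducing the best entries of Fig. 3.40c):

```python
pairs = []
for K in range(1, 501):
    if 500 % K == 0:            # only integer factorizations L*K = 500 are admissible
        L = 500 // K
        pairs.append((L + K, L, K))
for total, L, K in sorted(pairs)[:4]:
    print(f"L = {L:3d}, K = {K:3d}, L + K = {total}")
# the first two lines are (20, 25) and (25, 20), both giving L + K = 45
```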

29. Let m(t) be a signal having Fourier transform M( f ), with M(0) = 0. When
m(t) is passed through a filter with impulse response h(t), the Fourier transform
of the output can be written as
Y(f) = M(f) e^{−jθ} for f > 0;  Y(f) = 0 for f = 0;  Y(f) = M(f) e^{jθ} for f < 0.    (3.160)

Determine h(t).
• Solution: We know that if (this was done in class)

y(t) = m(t) cos(θ) + m̂(t) sin(θ) (3.161)

the Fourier transform of y(t) is

Y(f) = M(f) cos(θ) − j sgn(f) M(f) sin(θ)
= M(f) e^{−jθ} for f > 0;  0 for f = 0;  M(f) e^{jθ} for f < 0.    (3.162)

From (3.162) it is clear that

h(t) = cos(θ) δ(t) + sin(θ)/(πt).    (3.163)
30. Consider the DSB-SC signal

s(t) = Ac cos(2π f c t)m(t). (3.164)

Assume that the energy signal m(t) occupies the frequency band [−W, W ].
Now, s(t) is applied to a square law device given by

y(t) = s 2 (t). (3.165)

The output y(t) is applied to an ideal bandpass filter with a passband transfer function equal to 1/(Δf), midband frequency of ±2 fc and bandwidth Δf. Assume that Δf → 0. All signals are real-valued.
(a) Determine the spectrum of y(t).
(b) Find the relation between f c and W for no aliasing in the spectrum of y(t).
(c) Find the expression for the signal v(t) at the BPF output.
• Solution: We have

y(t) = (Ac²/2) [1 + cos(4π fc t)] m²(t)
⇒ Y(f) = (Ac²/2) G(f) + (Ac²/4) [G(f − 2fc) + G(f + 2fc)],    (3.166)
where

G(f) = M(f) ⋆ M(f),  where M(f) ⇌ m(t).    (3.167)

In the above equation “⋆” denotes convolution and G(f) extends over


[−2W, 2W ]. For no aliasing in Y ( f ), we require

2 f c − 2W > 2W
⇒ f c > 2W. (3.168)

Observe that, as Δf → 0, the transfer function of the BPF can be expressed as two delta functions at ±2 fc. Therefore, the spectrum at the BPF output is given by:

V(f) = (Ac²/4) G(0) [δ(f − 2fc) + δ(f + 2fc)].    (3.169)
Therefore

v(t) = (Ac²/2) G(0) cos(4π fc t).    (3.170)
Now

G(f) = ∫_{−∞}^{∞} M(x) M(f − x) dx
⇒ G(0) = ∫_{−∞}^{∞} M(x) M(−x) dx.    (3.171)

Since m(t) is real-valued

M(−x) = M*(x)
⇒ G(0) = ∫_{−∞}^{∞} |M(x)|² dx = E,    (3.172)

where we have used Rayleigh’s energy theorem. Thus (3.170) reduces to:

A2c
v(t) = E cos(4π f c t). (3.173)
2
31. (Haykin 1983) Consider the quadrature-carrier multiplex system shown in Fig. 3.41. The multiplexed signal s(t) is applied to a communication channel of frequency response H(f). The channel output is then applied to the receiver input. Here fc denotes the carrier frequency and the message spectra extend over [−W, W]. Find
Fig. 3.41 Quadrature carrier multiplexing system (transmitter: m1(t) and m2(t) modulate Ac cos(2π fc t) and Ac sin(2π fc t) and are summed to form s(t), which passes through the channel H(f) to give y(t); receiver: y(t) is multiplied by 2 cos(2π fc t) and 2 sin(2π fc t) and lowpass filtered to recover Ac m1(t) and Ac m2(t))

(a) the relation between f c and W ,


(b) the condition on H ( f ), and
(c) the frequency response of the LPF
that is necessary for recovery of the message signals m 1 (t) and m 2 (t) at
the receiver output. Assume a real-valued channel impulse response and
Ac = 1.

• Solution: Assuming Ac = 1 we have

s(t) = m1(t) cos(2π fc t) + m2(t) sin(2π fc t)
⇒ S(f) = [M1(f − fc) + M1(f + fc)]/2 + [M2(f − fc) − M2(f + fc)]/(2j).    (3.174)

Let

Y(f) = S(f) H(f) ⇌ y(t).    (3.175)

Now at the receiver:

2y(t) cos(2π fc t) ⇌ Y(f − fc) + Y(f + fc).    (3.176)

Now

Y(f − fc) = H(f − fc) { [M1(f − 2fc) + M1(f)]/2 + [M2(f − 2fc) − M2(f)]/(2j) }
Y(f + fc) = H(f + fc) { [M1(f) + M1(f + 2fc)]/2 + [M2(f) − M2(f + 2fc)]/(2j) }.    (3.177)

Since the LPF eliminates frequency components beyond ±W , the output of


the upper LPF is given by:
 
G1(f) = [M1(f)/2 − M2(f)/(2j)] H(f − fc) + [M1(f)/2 + M2(f)/(2j)] H(f + fc).    (3.178)

From the above equation, it is clear that to recover M1 ( f ) from the upper LPF
(G 1 ( f ) = M1 ( f )) we require

H ( f − fc ) = H ( f + fc ) for −W ≤ f ≤ W

⇒ H ( fc − f ) = H ( f + fc ) for −W ≤ f ≤ W (3.179)

where it is assumed that the channel impulse response is real-valued. It can


be shown that the above condition is also required for the recovery of M2 ( f )
from the lower LPF.
Note that for distortionless recovery of the message signals, the frequency
response of both the LPFs must be:

HLPF(f) = 2/(H(f − fc) + H(f + fc)) for −W ≤ f ≤ W.    (3.180)

Finally, for no aliasing we require f c > W .

32. (Haykin 2001) A particular version of AM stereo uses quadrature multiplexing.


Specifically, the carrier Ac cos(2π f c t) is used to modulate the sum signal

m 1 (t) = V0 + m l (t) + m r (t), (3.181)

where V0 is a dc offset included for the purpose of transmitting the carrier


component, m l (t) is the left hand audio signal, and m r (t) is the right hand audio
signal. The quadrature carrier Ac sin(2π f c t) is used to modulate the difference
signal

m 2 (t) = m l (t) − m r (t). (3.182)



(a) Show that an envelope detector may be used to recover the sum signal
m r (t) + m l (t) from the quadrature multiplexed signal. How would you min-
imize the signal distortion produced by the envelope detector?
(b) Show that a coherent detector can recover the difference m l (t) − m r (t).

• Solution: The quadrature-multiplexed signal can be written as:

s(t) = Ac m 1 (t) cos(2π f c t) + Ac m 2 (t) sin(2π f c t). (3.183)

Ideally, the output of the envelope detector is given by:



y(t) = Ac √(m1²(t) + m2²(t)) = Ac m1(t) √(1 + m2²(t)/m1²(t)).    (3.184)

The desired signal is Ac m 1 (t) and the distortion term is



D(t) = √(1 + m2²(t)/m1²(t)).    (3.185)

The distortion can be minimized by increasing the dc offset V0 . Note, however,


that increasing V0 results in increasing the carrier power, which makes the
system more power inefficient.
Multiplying s(t) by 2 cos(2π f c t) and passing the output through a lowpass
filter yields:

z 1 (t) = Ac m 1 (t)
= Ac (V0 + m l (t) + m r (t)). (3.186)

Similarly, multiplying s(t) by 2 sin(2π f c t) and passing the output through a


lowpass filter yields:

z 2 (t) = Ac m 2 (t)
= Ac (m l (t) − m r (t)). (3.187)

Adding z 1 (t) and z 2 (t) we get

z 1 (t) + z 2 (t) = Ac V0 + 2 Ac m l (t). (3.188)

Subtracting z 2 (t) from z 1 (t) yields:

z 1 (t) − z 2 (t) = Ac V0 + 2 Ac m r (t). (3.189)



The message signals m l (t) and m r (t) typically have zero dc, hence Ac V0 can
be removed by a dc blocking capacitor.
33. (Haykin 1983) Using the message signal

m(t) = 1/(1 + t²),    (3.190)

determine the modulated signal for the following methods of modulation:


(a) AM with 50% modulation. Assume a cosine carrier.
(b) DSB-SC. Assume a cosine carrier.
(c) SSB with upper sideband transmitted.
In all cases, the area under the Fourier transform of the modulated signal must
be unity.
• Solution: Note that the maximum value of m(t) occurs at t = 0. Moreover

m(0) = 1 = ∫_{−∞}^{∞} M(f) df,    (3.191)

where M( f ) is the Fourier transform of m(t). Hence the AM signal with 50%
modulation is given by:
 
s(t) = Ac1 [1 + 0.5/(1 + t²)] cos(2π fc t).    (3.192)

The Fourier transform of s(t) is:

S(f) = (Ac1/2) [δ(f − fc) + δ(f + fc)] + (Ac1/4) [M(f − fc) + M(f + fc)].    (3.193)

Since

∫_{−∞}^{∞} S(f) df = 1,    (3.194)

we require

(Ac1/2) [1 + 1] + (Ac1/4) [1 + 1] = 1
⇒ Ac1 = 2/3,    (3.195)

where we have used (3.191). Note that shifting the spectrum of M( f ) does
not change the area. The DSB-SC modulated signal is given by:

s(t) = [Ac2/(1 + t²)] cos(2π fc t).    (3.196)

The Fourier transform of s(t) is

S(f) = (Ac2/2) [M(f − fc) + M(f + fc)].    (3.197)
Again, due to (3.191) and (3.194) we have

(Ac2/2) [1 + 1] = 1
⇒ Ac2 = 1.    (3.198)

The Hilbert transform of m(t) is given by:

1/(1 + t²) −HT→ t/(1 + t²).    (3.199)

Therefore, the SSB modulated signal with upper sideband transmitted is given
by:
 
s(t) = Ac3 [ (1/(1 + t²)) cos(2π fc t) − (t/(1 + t²)) sin(2π fc t) ].    (3.200)

The spectrum of s(t) is

S(f) = (Ac3/2) M(f − fc) [1 + sgn(f − fc)] + (Ac3/2) M(f + fc) [1 − sgn(f + fc)].    (3.201)
Note that m(t) is real-valued and an even function of time. Hence, M( f ) is
also real-valued and an even function of frequency. Therefore from (3.191)

∫₀^∞ M(f) df = 1/2.    (3.202)

Hence, from (3.194) and (3.202) we again have

(2 × 0.5 Ac3)/2 + (2 × 0.5 Ac3)/2 = 1
⇒ Ac3 = 1.    (3.203)
Fig. 3.42 Message spectrum (0.2–3 kHz, peak amplitude 2)

Note that we can also use the relation

∫_{−∞}^{∞} S(f) df = s(0) = 1    (3.204)

to obtain Ac1 , Ac2 , and Ac3 .


34. Consider a message signal m(t) whose spectrum extends over 200 Hz to 3 kHz,
as illustrated in Fig. 3.42. This message is SSB modulated to obtain:

s(t) = Ac m(t) cos(2π f c t) + Ac m̂(t) sin(2π f c t). (3.205)

At the receiver, s(t) is demodulated using a carrier of the form cos(2π( fc + Δf )t). Plot the spectrum of the demodulated signal at the output of the lowpass filter when
(a) Δf = 20 Hz,
(b) Δf = −10 Hz.
Assume that the lowpass filter is ideal with unity gain and extends over
[−4, 4] kHz, and f c = 20 kHz.
Show all the steps required to arrive at the answer. In the sketch, indicate all the
important points on the X Y -axes.
• Solution: The multiplier output can be written as:

s1(t) = s(t) cos(2π( fc + Δf )t)
= Ac [m(t) cos(2π fc t) + m̂(t) sin(2π fc t)] cos(2π( fc + Δf )t)
= (Ac/2) m(t) [cos(2π Δf t) + cos(2π(2 fc + Δf )t)]
+ (Ac/2) m̂(t) [sin(2π(2 fc + Δf )t) − sin(2π Δf t)].    (3.206)
The output of the lowpass filter is:
Fig. 3.43 Demodulated message spectrum in the presence of frequency offset: a Δf = 20 Hz (band edges at 0.22 and 3.02 kHz); b Δf = −10 Hz (band edges at 0.19 and 2.99 kHz)

Fig. 3.44 An AM signal vi(t) transmitted through a series RLC circuit, producing vo(t)

s2(t) = (Ac/2) [m(t) cos(2π Δf t) − m̂(t) sin(2π Δf t)].    (3.207)
When Δf is positive, s2(t) is an SSB signal with carrier frequency Δf and the upper sideband transmitted. This is illustrated in Fig. 3.43a for Δf = 20 Hz. When Δf is negative, s2(t) is an SSB signal with carrier frequency |Δf| and the lower sideband transmitted. This is illustrated in Fig. 3.43b for Δf = −10 Hz.
35. Consider the series RLC circuit in Fig. 3.44. The resonant frequency of the circuit
is 1 MHz and the Q-factor is 100. The input signal vi (t) is given by:

vi (t) = Ac [1 + μ cos(2π f m t)] cos(2π f c t), (3.208)

where μ < 1, f c = 1 MHz, f m = 5 kHz.


Determine vo (t). Note that the Q-factor of the circuit is defined as
Q = 2π fc L/R = fc/(3-dB bandwidth),    (3.209)

where f c denotes the resonant frequency of the RLC circuit.


• Solution: The input signal vi (t) can be written as:

vi(t) = Ac cos(2π fc t) + (μAc/2) [cos(2π( fc − fm )t) + cos(2π( fc + fm )t)].    (3.210)

The transfer function of the circuit is given by:

Vo(ω)/Vi(ω) = H(ω) = R/(R + jωL − j/(ωC)),    (3.211)

where ω = 2π f. Let us now consider a frequency

ω = ωc + Δω,    (3.212)

where Δω ≪ ωc = 2π fc. Therefore, H(ω) becomes:

H(ω) = R/[R + j(ωc + Δω)L − j/((ωc + Δω)C)]
≈ R/[R + jΔωL + jΔω/(ωc² C)]
= 1/[1 + jΔωL/R + jΔω/(ωc² R C)]
= 1/[1 + j(2Δω)L/R],    (3.213)

where we have used the following relationships:

ωc L = 1/(ωc C),    1/(1 + Δω/ωc) ≈ 1 − Δω/ωc  when Δω ≪ ωc.    (3.214)

Since the Q-factor of the filter is high, we can assume that 2Δω is the 3-dB bandwidth of the filter. Hence

Δω = ωc/(2Q) = 2π × 5000 rad/s,    (3.215)

which is also equal to the message frequency. Thus


H(ωc + Δω) = (1/√2) e^{−jπ/4}
H(ωc − Δω) = (1/√2) e^{jπ/4}
H(ωc) = 1.    (3.216)

The output signal can be written as:

vo(t) = Ac cos(2π fc t) + (μAc/(2√2)) {cos[2π( fc − fm )t + π/4] + cos[2π( fc + fm )t − π/4]}
= Ac [1 + (μ/√2) cos(2π fm t − π/4)] cos(2π fc t).    (3.217)
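The approximate values in (3.216) can be compared with the exact transfer function (the component values below are our own, chosen to give fc = 1 MHz and Q = 100):

```python
import numpy as np

fc, fm, Q, R = 1e6, 5e3, 100.0, 50.0
L = Q*R/(2*np.pi*fc)                 # from Q = 2 pi fc L / R
C = 1/((2*np.pi*fc)**2*L)            # resonance condition

def H(f):                            # exact series-RLC transfer function, Eq. (3.211)
    w = 2*np.pi*f
    return R/(R + 1j*w*L + 1/(1j*w*C))

for f in (fc - fm, fc, fc + fm):
    h = H(f)
    print(f"f = {f/1e3:7.1f} kHz: |H| = {abs(h):.4f}, angle = {np.degrees(np.angle(h)):6.2f} deg")
# Expected: |H| = 1 at fc, and about 0.707 with a phase of +45 deg at fc - fm
# and -45 deg at fc + fm, as in (3.216).
```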

36. Let su (t) denote the SSB wave obtained by transmitting only the upper sideband,
that is


su(t) = Ac [m(t) cos(2π fc t) − m̂(t) sin(2π fc t)],    (3.218)

where m(t) is bandlimited to |f| < W ≪ fc.


Explain how m(t) can be recovered from su (t) using only multipliers, adders/
subtracters and Hilbert transformers. Lowpass filters should not be used.
• Solution: We have


su(t) = Ac [m(t) cos(2π fc t) − m̂(t) sin(2π fc t)]
ŝu(t) = Ac [m(t) sin(2π fc t) + m̂(t) cos(2π fc t)].    (3.219)

From (3.219), we get

m(t) = (1/Ac) [su(t) cos(2π fc t) + ŝu(t) sin(2π fc t)].    (3.220)

37. (Haykin 1983) A method that is used for carrier recovery in SSB modulation
systems involves transmitting two pilot frequencies that are appropriately posi-
tioned with respect to the transmitted sideband. This is shown in Fig. 3.45a for
the case where the lower sideband is transmitted. Here, the two pilot frequencies
are defined by:

f1 = fc − W − Δf
f2 = fc + Δf,    (3.221)

where fc is the carrier frequency, W is the message bandwidth, and Δf is chosen


such that
Fig. 3.45 Carrier recovery for SSB signals with lower sideband transmitted: a spectrum of s(t) with pilot tones at f1 and f2; b block diagram (narrowband filters at f1 and f2 give v1(t) and v2(t); their product is lowpass filtered to give v3(t), frequency divided by n + 2 to give v4(t), multiplied by v2(t), and passed through a narrowband filter at fc to give the output v5(t))

n = W/Δf    (3.222)

where n is an integer. Carrier recovery is accomplished using the scheme shown


in Fig. 3.45b. The outputs of the two narrowband filters centered at f 1 and f 2 are
given by:

v1 (t) = A1 cos(2π f 1 t + φ1 )
v2 (t) = A2 cos(2π f 2 t + φ2 ). (3.223)

The lowpass filter is designed to select the difference frequency component at


the first multiplier output, due to v1 (t) and v2 (t).
(a) Show that the output signal in Fig. 3.45b is proportional to the carrier wave
Ac cos(2π f c t) if the phase angles satisfy:

φ2 = −φ1/(1 + n).    (3.224)

(b) For the case when only the upper sideband is transmitted, the two pilot
frequencies are
210 3 Amplitude Modulation

f1 = fc − Δf
f2 = fc + W + Δf.    (3.225)

How would you modify the carrier recovery scheme in order to deal with this
case. What is the corresponding relation between φ1 and φ2 for the output
to be proportional to the carrier signal?

• Solution: The output of the lowpass filter can be written as:

v3(t) = (A1 A2/2) cos(2π( f2 − f1 )t + φ2 − φ1).    (3.226)
Substituting for f 1 and f 2 we get:

v3(t) = (A1 A2/2) cos(2π(n + 2)W t/n + φ2 − φ1)
= (A1 A2/2) cos(2π(n + 2)W t/n + φ2 − φ1 + 2πk),    (3.227)

for 0 ≤ k < n + 2, where k is an integer. However, v3 (t) in (3.227) can be


written as:
v3(t) = (A1 A2/2) cos(2π(n + 2)W(t − t0)/n),    (3.228)

where

−2π(n + 2)W t0/n = φ2 − φ1 + 2πk    (3.229)
for some value of t0 . The output of the frequency divider is:

v4(t) = (A1 A2/2) cos(2πW(t − t0)/n)
= (A1 A2/2) cos(2πW t/n + (φ2 − φ1)/(n + 2) + 2πk/(n + 2))
= (A1 A2/2) cos(2π Δf t + (φ2 − φ1)/(n + 2) + 2πk/(n + 2)).    (3.230)

Note that frequency division by n + 2 results in a phase ambiguity of 2πk/(n +


2). We need to choose the phase corresponding to k = 0. How this can be done
will be explained later. The output of the second multiplier is given by:
v4(t)v2(t) = (A1 A2²/2) cos(2π Δf t + (φ2 − φ1)/(n + 2)) × cos(2π( fc + Δf )t + φ2).    (3.231)

The output of the narrowband filter centered at f c is

v5(t) = (A1 A2²/4) cos(2π fc t + φ2 − (φ2 − φ1)/(n + 2)),    (3.232)
which is proportional to the carrier frequency when

φ2 − (φ2 − φ1 )/(n + 2) = 0
−φ1
⇒ φ2 = . (3.233)
n+1

Note that φ1 = φ2 = 0 is a trivial solution.


One possible method to choose the correct value of k(= 0) could be as follows.
Obtain v5 (t) using each value of k, for 0 ≤ k < n + 2, and use it to demodulate
the SSB signal. The value of k that maximizes the message energy/power, is
selected.
For the case where the upper sideband is transmitted, the carrier recovery
scheme is shown in Fig. 3.46. Here we again have

v3(t) = (A1 A2/2) cos(2π( f2 − f1 )t + φ2 − φ1)
= (A1 A2/2) cos(2π(n + 2)W t/n + φ2 − φ1 + 2πk),    (3.234)
for 0 ≤ k < n + 2. Using similar arguments, the output of the frequency
divider is (for k = 0):

v4(t) = (A1 A2/2) cos(2πW t/n + (φ2 − φ1)/(n + 2))
= (A1 A2/2) cos(2π Δf t + (φ2 − φ1)/(n + 2)).    (3.235)
2
The output of the second multiplier is given by:

v4(t)v1(t) = (A1² A2/2) cos(2π Δf t + (φ2 − φ1)/(n + 2)) × cos(2π( fc − Δf )t + φ1).    (3.236)
Fig. 3.46 Carrier recovery for SSB signals with upper sideband transmitted: a spectrum of s(t) with pilot tones at f1 and f2; b block diagram (narrowband filters at f1 and f2 give v1(t) and v2(t); their product is lowpass filtered to give v3(t), frequency divided by n + 2 to give v4(t), multiplied by v1(t), and passed through a narrowband filter at fc to give the output v5(t))

The output of the narrowband filter centered at f c is

v5(t) = (A1² A2/4) cos(2π fc t),    (3.237)
provided

(φ2 − φ1)/(n + 2) + φ1 = 0
⇒ φ2 = −(n + 1)φ1.    (3.238)

38. Consider a signal given by:

s(t) = m̂(t) cos(2π f c t) − m(t) sin(2π f c t), (3.239)

where m̂(t) is the Hilbert transform of m(t).


(a) Sketch the spectrum of s(t) when the spectrum of m(t) is shown in Fig. 3.47a.
Label all the important points along the axes.
(b) It is desired to recover m(t) from s(t) using the scheme shown in Fig. 3.47b.
Give the specifications of filter1, filter2, and the value of x.
Fig. 3.47 a Spectrum of the message signal (M(f) of amplitude 2 over fa ≤ |f| ≤ fb). b Scheme to recover the message (s(t) is multiplied by 2 cos(2π fc t) and passed through Filter1, Filter2, and a gain x to give m(t))

• Solution: We know that the Fourier transform of m̂(t) is given by

m̂(t) ⇌ −j sgn(f) M(f).    (3.240)

Then

m̂(t) cos(2π fc t) ⇌ (−j/2) [sgn(f − fc) M(f − fc) + sgn(f + fc) M(f + fc)] = S1(f),    (3.241)

which is plotted in Fig. 3.48b. Similarly

m(t) sin(2π fc t) ⇌ (−j/2) [M(f − fc) − M(f + fc)] = S2(f),    (3.242)

which is plotted in Fig. 3.48c.


Finally, we have

S( f ) = S1 ( f ) − S2 ( f ), (3.243)

which is plotted in Fig. 3.48d.


One possible implementation of the receiver is shown in Fig. 3.48e.

39. It is desired to transmit 100 voice signals using frequency division multiplexing
(FDM). The voice signals are SSB modulated, with the upper sideband trans-
mitted. The spectrum of each of the voice signals is illustrated in Fig. 3.49.
Fig. 3.48 a M(f). b S1(f). c S2(f). d S(f). e One possible receiver: s(t) is multiplied by 2 cos(2π fc t), lowpass filtered (unity gain over |f| ≤ fb) to give m̂(t), Hilbert transformed to give −m(t), and multiplied by −1 to give m(t)
Fig. 3.49 Spectrum of the message signal (0.3–3.3 kHz)

(a) What is the minimum carrier spacing required to multiplex all the signals,
such that there is no overlap of spectra.
(b) Using a single-stage approach and assuming a carrier spacing of 4 kHz and
a minimum carrier frequency of 100 kHz, determine the lower and upper
frequency limits occupied by the FDM signal containing 100 voice signals.
(c) In the two-stage approach, both stages use SSB with upper sideband trans-
mitted. In the first stage, the voice signals are grouped into L blocks, each
containing K voice signals. The carrier frequencies in each block in the first
stage is given by 4n kHz, for 0 ≤ n ≤ K − 1. Note that the FDM signal
using the two-stage approach must be identical to the single-stage approach
in (b).
i. Sketch the spectrum of each block in the first stage.
ii. How many carriers are required to generate the FDM signal? Express
your answer in terms of L and K .
iii. Determine L and K such that the number of carriers is minimized.
iv. Give the expression for the carrier frequencies required to modulate
each block in the 2nd stage.

• Solution: The minimum carrier spacing required is 3 kHz. If the minimum


carrier frequency is 100 kHz, then the spectrum of the first voice signal starts
at 100.3 kHz. The carrier for the 100th voice signal is at

100 + 99 × 4 = 496 kHz. (3.244)

The spectrum of the 100th voice signal starts at 496.3 kHz and ends at
499.3 kHz. Therefore, the spectrum of the composite FDM signal extends
from 100.3 to 499.3 kHz, that is 100.3 ≤ | f | ≤ 499.3 kHz.
The spectrum of each block in the first stage is shown in Fig. 3.50b.
For the second approach, let us first evaluate the number of carriers required
to obtain each block in the first stage. Clearly, the number of carriers required
is K − 1, since the first message in each block is not modulated at all. In the
next stage, L carriers are required to translate each block to the appropriate
frequency band. Thus, the total number of carriers required is L + K − 1.
Now, we need to minimize L + K − 1 subject to the constraint L K = 100.
The problem can be restated as
Fig. 3.50 a Two-stage approach for obtaining the final FDM signal. b Spectrum of each block at the output of the first stage

min_K [100/K + K − 1].    (3.245)

Differentiating with respect to K and setting the result to zero, we get the
solution as K = 10, L = 10.
The carrier frequencies required to modulate each block in the second stage
is given by 100 + 4K l for 0 ≤ l ≤ L − 1.
The block diagram of the two-stage approach is given in Fig. 3.50a.
40. Consider the modified switching modulator shown in Fig. 3.51. Note that

v1 (t) = Ac cos(2π f c t) + m(t)


v2 (t) = v1 (t)g p (t), (3.246)
Fig. 3.51 Block diagram of a modified switching modulator (v1(t) and A1 cos³(2π fc t) are applied to the switching modulator; its output v2(t) is passed through an ideal bandpass filter to give s(t))

where

gp(t) = 1/2 + (2/π) Σ_{n=1}^{∞} [(−1)^{n−1}/(2n − 1)] cos(2π fc (2n − 1) t).    (3.247)

It is desired to obtain an AM signal s(t) centered at 3 f c .


(a) Draw the spectrum of the ideal BPF required to obtain s(t). Assume a BPF
gain of k. The bandwidth of m(t) extends over [−W, W ].
(b) Write down the expression for s(t), assuming a BPF gain of k.
(c) It is given that the carrier power is 10 W with 100% modulation. The absolute
maximum value of m(t) is 2 V. Compute A1 and k.

• Solution: We have
v2(t) = (Ac/2) cos(2π fc t)
+ (Ac/π) Σ_{n=1}^{∞} [(−1)^{n−1}/(2n − 1)] [cos(4π(n − 1) fc t) + cos(4πn fc t)]
+ m(t)/2 + (2/π) Σ_{n=1}^{∞} [(−1)^{n−1}/(2n − 1)] cos(2π(2n − 1) fc t) m(t).    (3.248)

In (3.248), the only term centered at 3 f c is

\[
-\frac{2}{3\pi}\cos(6\pi f_c t)\, m(t), \tag{3.249}
\]

which occurs at n = 2. Similarly

\[
A_1\cos^3(2\pi f_c t) = \frac{A_1}{4}\bigl[3\cos(2\pi f_c t) + \cos(6\pi f_c t)\bigr], \tag{3.250}
\]
which has a component at f c and 3 f c . Thus it is clear that in order to extract the
components at 3 f c , the spectrum of the BPF must be as indicated in Fig. 3.52.
The BPF output is given by:
[Fig. 3.52 Spectrum of the BPF: passbands of gain k over 3fc − W ≤ |f| ≤ 3fc + W]

\[
s(t) = \frac{A_1 k}{4}\cos(6\pi f_c t) - \frac{2k}{3\pi}\, m(t)\cos(6\pi f_c t)
     = \frac{k A_1}{4}\left[1 - \frac{8}{3\pi A_1}\, m(t)\right]\cos(6\pi f_c t). \tag{3.251}
\]

For 100% modulation we require

\[
\frac{8}{3\pi A_1}\max|m(t)| = 1
\;\Rightarrow\; \frac{16}{3\pi A_1} = 1
\;\Rightarrow\; A_1 = 1.7. \tag{3.252}
\]

The carrier power is given by

\[
\frac{k^2 A_1^2}{32} = 10
\;\Rightarrow\; k = 10.5. \tag{3.253}
\]
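The numerical values in (3.252) and (3.253) can be reproduced directly; the Python sketch below is added here only as a check.

```python
import math

# A1 from 16/(3*pi*A1) = 1, then k from the carrier-power condition k^2*A1^2/32 = 10.
A1 = 16 / (3 * math.pi)
k = math.sqrt(10 * 32) / A1
print(round(A1, 4), round(k, 4))   # -> 1.6977 10.5409
```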

41. Consider an AM signal given by:

s(t) = Ac [1 + μ cos(2π f m t)] cos(2π f c t) |μ| < 1. (3.254)

Assume that the envelope detector is implemented using a diode and an RC filter,
as shown in Fig. 3.53. Determine the upper limit on RC such that the capacitor
voltage follows the envelope. Assume that RC ≫ 1/fc, e^{−x} ≈ 1 − x for small
values of x, and fc to be very large.

• Solution: The envelope of s(t) is (see Fig. 3.54)

a(t) = Ac |1 + μ cos(2π f m t)|


= Ac [1 + μ cos(2π f m t)] , (3.255)
[Fig. 3.53 Envelope detector using an RC filter: the source s(t) with resistance Rs drives a diode followed by a parallel RC load across which v(t) appears]

[Fig. 3.54 Plots of s(t), the envelope a(t) and the capacitor voltage v(t)]

since |μ| < 1. Consider any point a(t0 ) on the envelope at time t0 , which
coincides with the carrier peak. The capacitor starts discharging from a(t0 )
upto approximately the next carrier peak. The capacitor voltage v(τ ) can be
written as:

\[
v(\tau) = a(t_0)\,e^{-\tau/RC} \approx a(t_0)\left(1 - \frac{\tau}{RC}\right)\quad\text{for } 0 < \tau < 1/f_c, \tag{3.256}
\]
where it is assumed that the new time variable τ = 0 at t0 . The absolute value
of the slope of the capacitor voltage at t0 is given by

\[
\left|\frac{dv(\tau)}{d\tau}\right|_{\tau=0} = \frac{a(t_0)}{RC}. \tag{3.257}
\]

We require that the absolute value of the slope of the capacitor voltage must
exceed that of the envelope at time t0 . The absolute value of the slope of the
envelope at t0 is

\[
\left|\frac{da(t)}{dt}\right|_{t=t_0} = \mu A_c (2\pi f_m)\,|\sin(2\pi f_m t_0)|. \tag{3.258}
\]

Note that since t0 must coincide with the carrier peak, we must have

\[
t_0 = \frac{k}{f_c} \tag{3.259}
\]

for integer values of k. From (3.257) and (3.258) we get the required condition
as
\[
\frac{a(t_0)}{RC} \ge \mu A_c (2\pi f_m)\,|\sin(2\pi f_m t_0)|
\;\Rightarrow\; RC \le \frac{1 + \mu\cos(2\pi f_m t_0)}{2\pi\mu f_m\,|\sin(2\pi f_m t_0)|} \tag{3.260}
\]

for every t0 given by (3.259). However, as f c becomes very large t0 → t. Thus


the upper limit of RC can be found by computing the minimum value of the
RHS of (3.260). Assuming that sin(2π fm t) is positive, we have

\[
\frac{d}{dt}\left[\frac{1 + \mu\cos(2\pi f_m t)}{2\pi\mu f_m\sin(2\pi f_m t)}\right] = 0
\;\Rightarrow\; \cos(2\pi f_m t) = -\mu. \tag{3.261}
\]

We get the same result when sin(2π f m t) is negative. Substituting (3.261) in


(3.260) we get

\[
RC \le \frac{1 - \mu^2}{2\pi\mu f_m}. \tag{3.262}
\]

Recall that for proper functioning of the envelope detector, we require:

\[
RC \ll \frac{1}{W}, \tag{3.263}
\]
W
where W is the one-sided bandwidth of the message. We find that (3.263) is
valid when μ is close to unity and f m is replaced by W in (3.262).
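To get a feel for the bound in (3.262), the sketch below evaluates it for one illustrative set of numbers (μ, fm and fc are assumptions chosen here, not values from the problem).

```python
import math

# Upper limit on RC from (3.262), together with the lower limit 1/fc.
mu, fm, fc = 0.8, 5e3, 1e6                       # assumed modulation index, tone and carrier
rc_max = (1 - mu**2) / (2 * math.pi * mu * fm)   # -> about 1.43e-5 s
print(1 / fc, rc_max)                            # 1/fc = 1e-6 s << RC <= 14.3 us
```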

42. Consider a VSB signal s(t) obtained by passing

s1 (t) = Ac m(t) cos(2π f c t) (3.264)


[Fig. 3.55 Spectrum of the VSB filter H(f), shown over −fc − W ≤ f ≤ fc + W]

through a VSB filter H(f) as illustrated in Fig. 3.55. Assume m(t) to be
bandlimited to [−W, W] and fc ≫ W. Let

s(t) = sc (t) cos(2π f c t) − ss (t) sin(2π f c t). (3.265)

(a) Determine the bandwidth of s(t).


(b) Determine sc (t) and ss (t).
(c) Draw the block diagram of the phase discrimination method of generating the
VSB signal s(t) for this particular problem. Sketch the frequency response
of Hs ( f )/j. Label all the important points.
(d) In this problem, most of the energy in s(t) is concentrated on the upper side-
band. How should the block diagram in (c) be modified (without changing
Hs ( f )) such that most of the energy is concentrated in the lower sideband.
(e) How should Hs ( f ) in (c) be modified so that s(t) becomes an SSB signal
with upper sideband transmitted? Sketch the new response of Hs ( f )/j. Label
all the important points. Note that in this part only Hs ( f ) is to be modified.
There is no need to derive any formula. All symbols have their usual meaning.
• Solution: The bandwidth of s(t) is 2W .
We know that

Ac M( f )H ( f c ) for −W ≤ f ≤ W
Sc ( f ) = (3.266)
0 otherwise.

In the given problem H ( f c ) = 2. Therefore

sc (t) = 2 Ac m(t). (3.267)

Similarly

Ss ( f ) = Ac M( f )Hs ( f )/2, (3.268)

where
\[
H_s(f) = \begin{cases} \mathrm{j}\,[H(f - f_c) - H(f + f_c)] & \text{for } -W \le f \le W\\ 0 & \text{otherwise.} \end{cases} \tag{3.269}
\]

From Fig. 3.56a–c and (3.269), it is clear that



−j (4 f /W ) for −W ≤ f ≤ W
Hs ( f ) = (3.270)
0 otherwise.

Hs ( f )/j is depicted in Fig. 3.56d. Hence

Ss ( f ) = (−Ac /(πW ))j 2π f M( f ). (3.271)

Taking the inverse Fourier transform of (3.271) we get

\[
s_s(t) = -\frac{A_c}{\pi W}\,\frac{dm(t)}{dt}. \tag{3.272}
\]

The block diagram of the phase discrimination method of generating s(t) is


given in Fig. 3.57.
If most of the energy is to be concentrated in the lower sideband, z(t) in
Fig. 3.57 must be subtracted instead of being added.
If s(t) is to be an SSB signal with upper sideband transmitted, then Hs ( f )/j
must be as shown in Fig. 3.56e.
43. The signal m 1 (t) = B cos(2π f m t), B > 0, is passed through a first-order RC-
filter as shown in Fig. 3.58. It is given that 1/(2π RC) = f m /2. The output m(t)
is amplitude modulated to obtain:

s(t) = Ac [1 + ka m(t)] cos(2π f c t). (3.273)

(a) Find m(t).


(b) Find the limits of ka for no envelope distortion.
(c) Compute the power of s(t).
• Solution: The transfer function of the first-order lowpass filter is

\[
H(f) = \frac{1}{1 + \mathrm{j} f/f_0}, \tag{3.274}
\]

where f 0 is the –3 dB frequency given by:

\[
f_0 = \frac{1}{2\pi RC} = \frac{f_m}{2}. \tag{3.275}
\]
Clearly

m(t) = B|H ( f m )| cos(2π f m t + θ), (3.276)


[Fig. 3.56 Spectrum of the VSB filter and Hs(f)/j: a H(f); b H(f − fc); c H(f + fc); d Hs(f)/j for the VSB case, a straight line from +4 at f = −W to −4 at f = +W; e Hs(f)/j for SSB with upper sideband transmitted]

[Fig. 3.57 Block diagram for generating s(t): m(t) multiplies 2Ac cos(2π fc t) in one arm; in the other arm m(t) is filtered by Hs(f) and multiplies −(Ac/2) sin(2π fc t) to give z(t); the two arms are added to form s(t)]

[Fig. 3.58 First-order RC filter with input m1(t) and output m(t)]

where

\[
\theta = -\tan^{-1}(2), \qquad |H(f_m)| = \frac{1}{\sqrt{5}}. \tag{3.277}
\]

For no envelope distortion we require

\[
|k_a|\,B\,|H(f_m)| \le 1
\;\Rightarrow\; |k_a| \le \frac{\sqrt{5}}{B} \tag{3.278}
\]
with the constraint that ka = 0. In order to compute the power of s(t) we note
that

\[
s(t) = A_c\cos(2\pi f_c t)
+ \frac{A_c k_a B}{2\sqrt{5}}\cos(2\pi(f_c - f_m)t - \theta)
+ \frac{A_c k_a B}{2\sqrt{5}}\cos(2\pi(f_c + f_m)t + \theta). \tag{3.279}
\]

Thus the power of s(t) is just the sum of the power of the individual sinusoids
and is equal to:
[Fig. 3.59 a Message spectrum M(f), nonzero for fa ≤ |f| ≤ fb. b Recovery of the message: s(t) is multiplied by 2 sin(2π fc t) and filtered by H(f) to give m(t)]

\[
P = \frac{A_c^2}{2} + \frac{A_c^2 k_a^2 B^2}{20}. \tag{3.280}
\]
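A direct time-domain average gives the same power as (3.280); the short sketch below (with illustrative values of Ac, B, ka, fm and fc, all assumptions) is only a numerical confirmation.

```python
import numpy as np

Ac, B, ka, fm, fc = 1.0, 2.0, 0.5, 1e3, 20e3     # assumed values
t = np.arange(0, 0.1, 1 / (100 * fc))            # fine grid over many message periods
m = (B / np.sqrt(5)) * np.cos(2 * np.pi * fm * t - np.arctan(2.0))
s = Ac * (1 + ka * m) * np.cos(2 * np.pi * fc * t)
print(np.mean(s**2), Ac**2 / 2 + Ac**2 * ka**2 * B**2 / 20)   # both ~0.55
```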
44. An AM signal

s1 (t) = Ac [1 + ka m(t)] cos(2π f c t) (3.281)

is passed through an ideal bandpass filter (BPF) having a frequency response



3 for f c − f b ≤ | f | ≤ f c − f a
H( f ) = (3.282)
0 otherwise.

The message spectrum is shown in Fig. 3.59a.


(a) Write down the expression of the signal s(t), at the BPF output, in terms of
m(t).
(b) It is desired to recover m(t) (not cm(t) where c is a constant) from s(t) using
the block diagram in Fig. 3.59b.
Give the specifications of H ( f ). Assume ideal filter characteristics.

• Solution: Clearly, the signal at the BPF output is SSB modulated with lower
sideband transmitted. Therefore, the expression for s(t) is

s(t) = km(t) cos(2π f c t) + k m̂(t) sin(2π f c t), (3.283)

where k is a constant that needs to be found out.


Now, the Fourier transform of s(t) is

\[
S(f) = \frac{k}{2}\,M(f - f_c)\bigl[1 - \operatorname{sgn}(f - f_c)\bigr]
     + \frac{k}{2}\,M(f + f_c)\bigl[1 + \operatorname{sgn}(f + f_c)\bigr]. \tag{3.284}
\]

[Fig. 3.60 Spectrum at BPF output: lower sidebands of height 3Ac ka/2 over fc − fb ≤ |f| ≤ fc − fa]

[Fig. 3.61 Weaver's method of generating SSB signals transmitting the upper sideband: m(t) feeds an in-phase channel (multiply by cos(2π f0 t), lowpass filter, multiply by cos(2π fc t)) and a quadrature channel (multiply by sin(2π f0 t), lowpass filter, multiply by sin(2π fc t)); the two products are combined to give the SSB signal e(t); M(f) occupies fa ≤ |f| ≤ fb]

Comparing S( f ) in (3.284) with the spectrum in Fig. 3.60, we get

\[
k = \frac{3 A_c k_a}{2}. \tag{3.285}
\]
In order to recover the message, we need a lowpass filter with unity gain and
passband [− f b , f b ] in cascade with an ideal Hilbert transformer followed by
a gain of −1/k. These three elements can be combined into a single filter
whose frequency response is given by

j (1/k)sgn( f ) for | f | < f b
H( f ) = (3.286)
0 otherwise.

45. (Haykin 1983) Consider the block diagram in Fig. 3.61. The message m(t) is
bandlimited to f a ≤ | f | ≤ f b . The auxiliary carrier applied to the first pair of

product modulators is given by:

\[
f_0 = \frac{f_a + f_b}{2}. \tag{3.287}
\]
The lowpass filters are identical and can be assumed to be ideal with unity
gain and cutoff frequency equal to ( f b − f a )/2. The carrier frequency f c >
( f b − f a )/2.
(a) Plot the spectra of the complex signals a(t) + j b(t), c(t) + j d(t) and the
real-valued signal e(t).
(b) Write down the expression for e(t) in terms of m(t).
(c) How would you modify Fig. 3.61 so that only the lower sideband is trans-
mitted.
• Solution: We have

a(t) + j b(t) = m(t) exp (j 2π f 0 t) = m 1 (t) (say). (3.288)

Thus

m 1 (t)  M1 ( f ) = M( f − f 0 ). (3.289)

The plot of M1 ( f ) is given in Fig. 3.62b. Note that m 1 (t) cannot be regarded as
a pre-envelope, since it has non-zero negative frequencies. The various edge
frequencies in M1 ( f ) are given by:

f 1 = ( f b − f a )/2
f 2 = ( f b + 3 f a )/2
f 3 = ( f a + 3 f b )/2. (3.290)

Now the complex-valued signal m 1 (t) gets convolved with a real-valued low-
pass filter (the lowpass filters have been stated to be identical, hence they
can be considered to be a single real-valued lowpass filter), resulting in a
complex-valued output m 2 (t). The plot of M2 ( f ) is shown in Fig. 3.62c. Let

m 2 (t) = m 2, c (t) + j m 2, s (t) = c(t) + j d(t). (3.291)

Now e(t) is given by

\[
e(t) = \Re\left\{ m_2(t)\,\mathrm{e}^{-\mathrm{j}2\pi f_c t} \right\}
     = \frac{1}{2}\left[ m_2(t)\,\mathrm{e}^{-\mathrm{j}2\pi f_c t} + m_2^{*}(t)\,\mathrm{e}^{\,\mathrm{j}2\pi f_c t} \right]
\rightleftharpoons \frac{1}{2}\left[ M_2(f + f_c) + M_2^{*}(-f + f_c) \right]. \tag{3.292}
\]

[Fig. 3.62 Plot of the signal spectra at various points: a M(f); b M1(f) with edge frequencies −f1, f1, f2, f3; c M2(f), confined to |f| ≤ f1; d E(f), centered at ±fc and extending from fc − f1 to fc + f1]

Since M2 ( f ) is real-valued, we have

\[
e(t) \rightleftharpoons \frac{1}{2}\bigl[M_2(f + f_c) + M_2(-f + f_c)\bigr] = E(f)\ \text{(say)}. \tag{3.293}
\]
E( f ) is plotted in Fig. 3.62d.
Let

f c1 = f c − f 1 − f a . (3.294)

Then

e(t) = m(t) cos(2π f c1 t) − m̂(t) sin(2π f c1 t). (3.295)

In order to transmit the lower sideband, the first product modulator in the
quadrature arm must be fed with − sin(2π f 0 t).

46. (Haykin 2001) The spectrum of a voice signal m(t) is zero outside the inter-
val f a < | f | < f b . To ensure communication privacy, this signal is applied to a
[Fig. 3.63 Block diagram of a scrambler: m(t) → product modulator (cos 2π fc t) → highpass filter → product modulator (cos 2π(fc + fb)t) → lowpass filter → s(t)]

scrambler that consists of the following components in cascade: product modu-


lator, highpass filter, second product modulator, and lowpass filter as illustrated
in Fig. 3.63. The carrier wave applied to the first product modulator has fre-
quency equal to f c , whereas that applied to the second product modulator has
frequency equal to f b + f c . Both the carriers have unit amplitude. The highpass
and lowpass filters are ideal with unity gain and have the same cutoff frequency
at f c . Assume that f c > f b .
(a) Derive an expression for the scrambler output s(t).
(b) Show that the original voice signal m(t) may be recovered from s(t) by
using an unscrambler that is identical to the scrambler.

• Solution: The block diagram of the system is shown in Fig. 3.63. The output
of the first product modulator is given by:

\[
V_1(f) = \frac{1}{2}\bigl[M(f - f_c) + M(f + f_c)\bigr]. \tag{3.296}
\]
The output of the highpass filter is an SSB signal with the upper sideband
transmitted. This is illustrated in Fig. 3.64. Hence:

\[
v_2(t) = \frac{1}{2}\bigl[m(t)\cos(2\pi f_c t) - \hat{m}(t)\sin(2\pi f_c t)\bigr]. \tag{3.297}
\]
The output of the second product modulator is given by:

\[
v_3(t) = v_2(t)\cos(2\pi(f_c + f_b)t)
= \frac{1}{4}\bigl[m(t)\bigl(\cos(2\pi(2 f_c + f_b)t) + \cos(2\pi f_b t)\bigr)
- \hat{m}(t)\bigl(\sin(2\pi(2 f_c + f_b)t) - \sin(2\pi f_b t)\bigr)\bigr], \tag{3.298}
\]

which after lowpass filtering becomes:

\[
s(t) = \frac{1}{4}\bigl[m(t)\cos(2\pi f_b t) + \hat{m}(t)\sin(2\pi f_b t)\bigr]. \tag{3.299}
\]

[Fig. 3.64 Illustrating the spectra of various signals in the scrambler: M(f); V1(f) and the highpass filter cutoff at ±fc; V2(f), the upper sideband about ±fc; S(f), the lower sideband about ±fb; and S1(f) = M(f)/16]

It is clear that s(t) is an SSB signal with lower sideband transmitted and carrier
frequency f b . This is illustrated in Fig. 3.64.
If s(t) is fed to the scrambler, the output would be similar to (3.299), that is:

\[
s_1(t) = \frac{1}{4}\bigl[s(t)\cos(2\pi f_b t) + \hat{s}(t)\sin(2\pi f_b t)\bigr]. \tag{3.300}
\]
In other words, we transmit the lower sideband of s(t) with carrier frequency
f b . Thus

\[
s_1(t) = \frac{1}{16}\,m(t), \tag{3.301}
\]
which is again shown in Fig. 3.64.
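The round trip m(t) → s(t) → s1(t) = m(t)/16 is easy to reproduce numerically. The sketch below applies the input-output relation (3.299) twice to a single tone, using scipy's FFT-based Hilbert transform; the sampling rate, fb and the tone frequency are assumptions, not values from the problem.

```python
import numpy as np
from scipy.signal import hilbert

fs, fb, f_tone = 48_000, 3_400, 1_000            # assumed sampling rate, fb, message tone
t = np.arange(0, 1.0, 1 / fs)

def scramble(x):
    xh = np.imag(hilbert(x))                     # Hilbert transform of x
    return 0.25 * (x * np.cos(2 * np.pi * fb * t) + xh * np.sin(2 * np.pi * fb * t))

m = np.cos(2 * np.pi * f_tone * t)
s1 = scramble(m)                                 # tone moves to fb - f_tone = 2.4 kHz
s2 = scramble(s1)                                # scrambling again restores m(t)/16
print(np.allclose(s2, m / 16, atol=1e-3))        # -> True
```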

47. (Haykin 1983) The single-tone modulating signal

m(t) = Am cos(2π f m t) (3.302)

is used to generate the VSB signal

\[
s(t) = \frac{A_m A_c}{2}\bigl[a\cos(2\pi(f_c + f_m)t) + (1 - a)\cos(2\pi(f_c - f_m)t)\bigr], \tag{3.303}
\]
where 0 ≤ a ≤ 1 is a constant, representing the attenuation of the upper side
frequency.
(a) From the canonical representation of a bandpass signal, find the quadrature
component of s(t).
(b) The VSB signal plus a carrier Ac cos(2π f c t), is passed through an envelope
detector. Determine the distortion produced by the quadrature component.
(c) What is the value of a for which the distortion is maximum.
(d) What is the value of a for which the distortion is minimum.

• Solution: The VSB signal s(t) can be simplified to:

\[
s(t) = \frac{A_m A_c}{2}\bigl[\cos(2\pi f_c t)\cos(2\pi f_m t) + (1 - 2a)\sin(2\pi f_c t)\sin(2\pi f_m t)\bigr]. \tag{3.304}
\]
By comparing with the canonical representation of a bandpass signal, we see
that the quadrature component is:

\[
-\frac{A_m A_c}{2}(1 - 2a)\sin(2\pi f_m t). \tag{3.305}
\]
After the addition of the carrier, the modified signal is:
[Fig. 3.65 Block diagram of a heterodyne spectrum analyzer: g(t) is multiplied by A cos(2π fc t) from a variable-frequency oscillator, the product x(t) is bandpass filtered to give y(t), which is fed to an energy meter]

\[
s(t) = A_c\cos(2\pi f_c t)\left[1 + \frac{A_m}{2}\cos(2\pi f_m t)\right]
     + \frac{A_m A_c}{2}(1 - 2a)\sin(2\pi f_c t)\sin(2\pi f_m t). \tag{3.306}
\]
The envelope is given by:
 
\[
E(t) = A_c\left[1 + \frac{A_m}{2}\cos(2\pi f_m t)\right]\sqrt{1 + D(t)}, \tag{3.307}
\]

where D(t) is the distortion defined by:

\[
D(t) = \left[\frac{(A_m/2)(1 - 2a)\sin(2\pi f_m t)}{1 + (A_m/2)\cos(2\pi f_m t)}\right]^2. \tag{3.308}
\]

Clearly, D(t) is minimum when a = 1/2 and maximum when a = 0, 1.
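The dependence of the distortion on a can also be seen numerically; in the sketch below Am and fm are arbitrary illustrative values (assumptions).

```python
import numpy as np

Am, fm = 0.5, 1e3                                # assumed tone amplitude and frequency
t = np.linspace(0, 1 / fm, 1000, endpoint=False)
for a in (0.0, 0.25, 0.5):
    D = ((Am / 2) * (1 - 2 * a) * np.sin(2 * np.pi * fm * t)
         / (1 + (Am / 2) * np.cos(2 * np.pi * fm * t)))**2
    print(a, D.max())                            # peak distortion shrinks to 0 as a -> 1/2
```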

48. (Haykin 1983) Figure 3.65 shows the block diagram of a heterodyne spectrum
analyzer. The oscillator has an amplitude A and operates over the range f 0 to
f 0 + W where ± f 0 is the midband frequency of the BPF and g(t) extends over
the frequency band [−W, W]. Assume that the BPF bandwidth Δf ≪ W and
f0 ≫ W and that the passband response of the BPF is unity. Determine the value
of the energy meter output for an input signal g(t), for a particular value of the
oscillator frequency, say f c . Assume that g(t) is a real-valued energy signal.
• Solution: Let us denote the output of the product modulator by x(t). Hence

\[
x(t) = A\cos(2\pi f_c t)\,g(t)
\rightleftharpoons \frac{A}{2}\bigl[G(f - f_c) + G(f + f_c)\bigr] = X(f), \tag{3.309}
\]
where f c denotes the oscillator frequency. The BPF output can be written as:
\[
Y(f) = X(f)\left[\operatorname{rect}\!\left(\frac{f - f_0}{\Delta f}\right) + \operatorname{rect}\!\left(\frac{f + f_0}{\Delta f}\right)\right]
= X(f_0)\operatorname{rect}\!\left(\frac{f - f_0}{\Delta f}\right) + X(-f_0)\operatorname{rect}\!\left(\frac{f + f_0}{\Delta f}\right)
= \frac{A}{2}\left[G(f_0 - f_c)\operatorname{rect}\!\left(\frac{f - f_0}{\Delta f}\right) + G(-f_0 + f_c)\operatorname{rect}\!\left(\frac{f + f_0}{\Delta f}\right)\right], \tag{3.310}
\]

where we have assumed that Δf is small enough so that X(f) can be assumed
to be constant in the bandwidth of the filter. The energy of y(t) is given by:

\[
E = \int_{t=-\infty}^{\infty} y^2(t)\,dt = \int_{f=-\infty}^{\infty} |Y(f)|^2\,df \tag{3.311}
\]

where we have used the Rayleigh’s energy theorem. Hence

\[
E = \frac{A^2}{2}\,|G(f_0 - f_c)|^2\,\Delta f \tag{3.312}
\]
where we have used the fact that g(t) is real-valued, hence

|G( f 0 − f c )| = |G(− f 0 + f c )|. (3.313)

Note that as f c varies from f 0 to f 0 + W , we obtain the magnitude spectrum


of g(t).
49. An AM signal is generated using a switching modulator as shown in Fig. 3.66.
The signal v1(t) = Ac cos(2π fc t) + m(t), where Ac ≫ |m(t)|. The diode D has

vD
+ −
D
v1 (t) v2 (t)
iD
iD

R
vD

0 Vγ
i − v characteristics of D

Fig. 3.66 Generation of AM signal using a switching modulator


234 3 Amplitude Modulation

a cut-in voltage equal to Vγ (< Ac). The i–v characteristics of the diode are also


shown.

(a) Assuming that Vγ /Ac = 1/ 2, derive the expression for the AM signal
centered at f c and explain how it can be obtained. Assume also that f c W ,
where W is the two-sided bandwidth of m(t).
(b) Find the minimum value of Ac for no envelope distortion, given that |m(t)| ≤
1 for all t.

• Solution: We have

v1 (t) − Vγ when D is ON
v2 (t) =
0 when D is OFF

v1 (t) − Vγ when v1 (t) ≥ Vγ
⇒ v2 (t) =
0 when v1 (t) < Vγ

v1 (t) − Vγ when Ac cos(2π f c t) ≥ Vγ
⇒ v2 (t) = (3.314)
0 when Ac cos(2π f c t) < Vγ

where we have assumed that v1 (t) ≈ Ac cos(2π f c t). The output voltage v2 (t)
can also be written as:

v2 (t) = (v1 (t) − Vγ )g p (t), (3.315)

where

1 when Ac cos(2π f c t) ≥ Vγ
g p (t) = (3.316)
0 when Ac cos(2π f c t) < Vγ

[Fig. 3.67 Plot of gp(t) and cos(2π fc t) for Vγ/Ac = 1/√2: within each carrier period T, gp(t) = 1 for |t| < T0 and 0 elsewhere]

as illustrated in Fig. 3.67. Note that:

Ac cos(2π f c T0 ) = Vγ

⇒ cos(2π fc T0) = 1/√2
⇒ 2π f c T0 = π/4
⇒ T0 /T = 1/8, (3.317)

where f c = 1/T . The Fourier series expansion for g p (t) is:

\[
g_p(t) = a_0 + 2\sum_{n=1}^{\infty}\left[a_n\cos\left(\frac{2\pi n t}{T}\right) + b_n\sin\left(\frac{2\pi n t}{T}\right)\right], \tag{3.318}
\]

where

\[
a_0 = \frac{1}{T}\int_{-T/2}^{T/2} g_p(t)\,dt = \frac{1}{T}\int_{-T_0}^{T_0} dt = \frac{1}{4}. \tag{3.319}
\]

Since gp(t) is an even function, bn = 0. Now

\[
a_n = \frac{1}{T}\int_{-T/2}^{T/2} g_p(t)\cos(2\pi n t/T)\,dt
    = \frac{1}{T}\int_{-T_0}^{T_0}\cos(2\pi n t/T)\,dt
    = \frac{1}{n\pi}\sin(n\pi/4). \tag{3.320}
\]

Therefore

\[
g_p(t) = \frac{1}{4} + 2\sum_{n=1}^{\infty}\frac{1}{n\pi}\sin(n\pi/4)\cos(2\pi n t/T)
       = \frac{1}{4} + \frac{\sqrt{2}}{\pi}\cos(2\pi f_c t) + \frac{1}{\pi}\cos(4\pi f_c t) + \cdots \tag{3.321}
\]
Substituting (3.321) in (3.315) and collecting terms corresponding to the AM
signal centered at f c , we obtain:
\[
v_2(t) = \frac{A_c}{4}\cos(2\pi f_c t) + \frac{\sqrt{2}}{\pi}\,m(t)\cos(2\pi f_c t)
       - \frac{\sqrt{2}\,V_\gamma}{\pi}\cos(2\pi f_c t) + \frac{A_c}{2\pi}\cos(2\pi f_c t) + \cdots \tag{3.322}
\]

Using a bandpass filter centered at fc with two-sided bandwidth equal to W,
we obtain the desired AM signal as:

\[
s(t) = \left[\frac{A_c}{4} - \frac{\sqrt{2}\,V_\gamma}{\pi} + \frac{A_c}{2\pi} + \frac{\sqrt{2}}{\pi}\,m(t)\right]\cos(2\pi f_c t)
     = A_c\left[\frac{1}{4} - \frac{1}{\pi} + \frac{1}{2\pi} + \frac{\sqrt{2}}{\pi A_c}\,m(t)\right]\cos(2\pi f_c t)
     = A_c\left[\frac{1}{4} - \frac{1}{2\pi} + \frac{\sqrt{2}}{\pi A_c}\,m(t)\right]\cos(2\pi f_c t)
     = A_c\left[0.0908451 + \frac{0.4501582}{A_c}\,m(t)\right]\cos(2\pi f_c t)
     = 0.0908451\,A_c\left[1 + \frac{4.9552301}{A_c}\,m(t)\right]\cos(2\pi f_c t). \tag{3.323}
\]

For no envelope distortion, the minimum value of Ac is 4.9552301.
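The constants appearing in (3.323) follow from Vγ = Ac/√2; the short sketch below merely reproduces them.

```python
import math

carrier = 0.25 - 1 / math.pi + 1 / (2 * math.pi)  # coefficient of Ac in (3.323)
msg = math.sqrt(2) / math.pi                      # coefficient of m(t)
print(round(carrier, 7), round(msg, 7), round(msg / carrier, 7))
# -> 0.0908451 0.4501582 4.9552301
```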

50. A signal x(t) = A sin(2000πt) is full-wave rectified and passed through an
ideal lowpass filter (LPF) having a bandwidth [−4.5, 4.5] kHz and a gain of 2 in
the passband. Let the LPF output be denoted by m(t). Find y(t) = m(t) + j m̂(t)
and sketch its Fourier transform. Compute the power of y(t).
• Solution: Note that the period of x(t) is 1 ms. The full wave rectified sine wave
can be represented by the complex Fourier series coefficients as follows:


\[
|x(t)| = \sum_{n=-\infty}^{\infty} c_n\,\mathrm{e}^{\,\mathrm{j}2\pi n t/T_1}, \tag{3.324}
\]

where T1 = 0.5 ms and

\[
c_n = \frac{A}{T_1}\int_{t=0}^{T_1}\sin(\omega_0 t)\,\mathrm{e}^{-\mathrm{j}2\pi n t/T_1}\,dt, \tag{3.325}
\]

where ω0 = 2π/T0, with T0 = 2T1 = 1 ms. The above equation can be rewritten as:

\[
c_n = \frac{A}{2\mathrm{j}T_1}\int_{t=0}^{T_1}\left(\mathrm{e}^{\,\mathrm{j}\omega_0 t} - \mathrm{e}^{-\mathrm{j}\omega_0 t}\right)\mathrm{e}^{-\mathrm{j}2\pi n t/T_1}\,dt
   = \frac{A}{2\mathrm{j}T_1}\left.\frac{\mathrm{e}^{\,\mathrm{j}(\omega_0 - 2\pi n/T_1)t}}{\mathrm{j}(\omega_0 - 2\pi n/T_1)}\right|_{t=0}^{T_1}
   - \frac{A}{2\mathrm{j}T_1}\left.\frac{\mathrm{e}^{-\mathrm{j}(\omega_0 + 2\pi n/T_1)t}}{-\mathrm{j}(\omega_0 + 2\pi n/T_1)}\right|_{t=0}^{T_1}
   = \frac{2A}{\pi(1 - 4n^2)}. \tag{3.326}
\]

[Fig. 3.68 Fourier transform of y(t): impulses of weight 2c0 at f = 0, 4c1 at f = 1/T1, and 4c2 at f = 2/T1]

The output of the LPF is (gain of the LPF is 2):


\[
m(t) = 2\sum_{n=-2}^{2} c_n\,\mathrm{e}^{\,\mathrm{j}2\pi n t/T_1}
     = 2c_0 + 4c_1\cos(\omega_1 t) + 4c_2\cos(2\omega_1 t), \tag{3.327}
\]

where ω1 = 2π/T1 = 2ω0 . The output of the Hilbert transformer is:

m̂(t) = 4c1 cos(ω1 t − π/2) + 4c2 cos(2ω1 t − π/2)


= 4c1 sin(ω1 t) + 4c2 sin(2ω1 t). (3.328)

Note the absence of c0 in (3.328), since the Hilbert transformer blocks the dc
component. Therefore

\[
y(t) = m(t) + \mathrm{j}\,\hat{m}(t)
     = 2c_0 + 4c_1\,\mathrm{e}^{\,\mathrm{j}\omega_1 t} + 4c_2\,\mathrm{e}^{\,\mathrm{j}2\omega_1 t}. \tag{3.329}
\]

The Fourier transform of y(t) is given by:

Y ( f ) = 2c0 δ( f ) + 4c1 δ( f − 1/T1 ) + 4c2 δ( f − 2/T1 ), (3.330)

which is shown in Fig. 3.68. The power of y(t) is

\[
P = 4c_0^2 + 16c_1^2 + 16c_2^2. \tag{3.331}
\]
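The coefficients cn = 2A/(π(1 − 4n²)) in (3.326) are easily verified by numerical integration over one period; the sketch below assumes A = 1.

```python
import numpy as np

A, T1 = 1.0, 0.5e-3                               # A is assumed; T1 = 0.5 ms as in the text
t = np.linspace(0, T1, 100_000, endpoint=False)
x = np.abs(A * np.sin(np.pi * t / T1))            # full-wave rectified sine, period T1
for n in range(3):
    cn = np.mean(x * np.exp(-2j * np.pi * n * t / T1))
    print(n, round(cn.real, 5), round(2 * A / (np.pi * (1 - 4 * n**2)), 5))
```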



51. Explain the principle of operation of the Costas loop for DSB-SC signals given by
s(t) = Ac m(t) cos(2π f c t). Draw the block diagram. Clearly identify the phase
discriminator. Explain phase ambiguity.

• Solution: See Fig. 3.69. We assume that m(t) is real-valued with energy E and
Fourier transform M( f ). Clearly

x1 (t) = Ac m(t) cos(φ)


x2 (t) = Ac m(t) sin(φ). (3.332)

and
\[
x_3(t) = \frac{1}{2}A_c^2\, m^2(t)\sin(2\phi). \tag{3.333}
\]
Note that
 ∞
m (t) 
2
M(α)M( f − α) dα
α=−∞

= G( f ). (3.334)

Therefore

\[
G(0) = \int_{\alpha=-\infty}^{\infty} |M(\alpha)|^2\,d\alpha = E \tag{3.335}
\]

since M(−α) = M ∗ (α) (m(t) is real-valued), and we have used Rayleigh’s


energy theorem. We also have

\[
\lim_{\Delta f \to 0} H_1(f) = \delta(f). \tag{3.336}
\]

Hence

\[
X_4(f) = X_3(f)\,\delta(f)
       = \frac{1}{2}A_c^2\sin(2\phi)\,E\,\delta(f)
\rightleftharpoons \frac{1}{2}A_c^2\sin(2\phi)\,E = x_4(t). \tag{3.337}
\]

Thus, the (dc) control signal x4 (t) determines the phase of the voltage con-
trolled oscillator (VCO). Clearly, x4 (t) = 0 when
[Fig. 3.69 Block diagram of the Costas loop: s(t) multiplies 2 cos(2π fc t + φ) in the in-phase arm and 2 sin(2π fc t + φ) in the quadrature arm; each product is lowpass filtered by H(f) (unity gain over |f| ≤ W), the filter outputs x1(t) and x2(t) are multiplied in the phase discriminator, the product x3(t) is filtered by the narrowband filter H1(f) (gain 1/Δf over a bandwidth Δf), and the result x4(t) drives the VCO]

[Fig. 3.70 Message signal: over one period, m(t) = a e^{−t} for 0 ≤ t ≤ T/5, then rises linearly from −7 at t = T/5 back to zero at t = T]

−7

2φ = nπ
⇒ φ = nπ/2. (3.338)

Therefore, the Costas loop exhibits 90◦ phase ambiguity.


52. For the message signal shown in Fig. 3.70 compute the power efficiency (the
ratio of sideband power to total power) in terms of the amplitude sensitivity ka ,
and T . Find a in terms of T .
The message is periodic with period T, has zero mean, and is AM modulated

according to:

s(t) = Ac [1 + ka m(t)] cos(2π f c t). (3.339)

Assume that T ≫ 1/fc.
• Solution: Since the message is zero mean, we must have:

\[
\int_{t=0}^{T} m(t)\,dt = 0
\;\Rightarrow\; \int_{t=0}^{T/5} a\,\mathrm{e}^{-t}\,dt = \frac{1}{2}\times 7\times\left(T - \frac{T}{5}\right)
\;\Rightarrow\; a = \frac{14T}{5\left(1 - \mathrm{e}^{-T/5}\right)}. \tag{3.340}
\]
5(1 − e−T /5 )

For a general AM signal given by

s(t) = Ac [1 + ka m(t)] cos(2π f c t), (3.341)

the power efficiency is:

\[
\eta = \frac{A_c^2 k_a^2 P_m/2}{A_c^2/2 + A_c^2 k_a^2 P_m/2} = \frac{k_a^2 P_m}{1 + k_a^2 P_m} \tag{3.342}
\]

where Pm denotes the message power. Here we only need to compute Pm . We


have

\[
P_m = \frac{1}{T}\int_{t=0}^{T} m^2(t)\,dt. \tag{3.343}
\]

Let

\[
P_1 = \frac{1}{T}\int_{t=0}^{T/5} a^2\,\mathrm{e}^{-2t}\,dt = \frac{a^2}{2T}\left(1 - \mathrm{e}^{-2T/5}\right). \tag{3.344}
\]
Let

\[
P_2 = \frac{1}{T}\int_{t=T/5}^{T}(mt + c)^2\,dt
    = \frac{1}{T}\int_{t=T/5}^{T}\left(m^2 t^2 + 2mct + c^2\right)dt
    = \frac{1}{T}\left[\frac{m^2 t^3}{3} + mct^2 + c^2 t\right]_{t=T/5}^{T}
    = \frac{m^2 T^2}{3}\left(1 - \frac{1}{125}\right) + mTc\left(1 - \frac{1}{25}\right) + c^2\left(1 - \frac{1}{5}\right)
    = \left(\frac{35}{4}\right)^2\left[\frac{124}{3\times 125} - \frac{24}{25} + \frac{4}{5}\right]
    = 13.0667, \tag{3.345}
\]

where
\[
m = \frac{35}{4T}, \qquad c = -mT. \tag{3.346}
\]

Now

Pm = P1 + P2 . (3.347)

The power efficiency can be obtained by substituting (3.340), (3.344), (3.345),


and (3.347) in (3.342).

References

Simon Haykin. Communication Systems. Wiley Eastern, second edition, 1983.


Simon Haykin. Communication Systems. Wiley Eastern, fourth edition, 2001.
Rodger E. Ziemer and William H. Tranter. Principles of Communications. John Wiley, fifth edition,
2002.
Chapter 4
Frequency Modulation

1. (Haykin 1983) In a frequency-modulated radar, the instantaneous frequency of


the transmitted carrier f t (t) is varied as given in Fig. 4.1. The instantaneous
frequency of the received echo fr (t) is also shown, where τ is the round-trip
delay time. Assuming that f0 τ ≪ 1, determine the number of beat (difference
frequency) cycles in one second, in terms of the frequency deviation (Δf) of the
carrier frequency, the delay τ , and the repetition frequency f 0 . Assume that f 0
is an integer.
• Solution: Let the transmitted signal be given by

\[
s(t) = A_1\cos\left(2\pi\int_{\tau=0}^{t} f_t(\tau)\,d\tau\right). \tag{4.1}
\]

Let the received signal be given by

\[
r(t) = A_2\cos\left(2\pi\int_{\tau=0}^{t} f_r(\tau)\,d\tau\right). \tag{4.2}
\]

The variation of the beat (difference) frequency ( f t (t) − fr (t)) with time is
plotted in Fig. 4.2b. Note that the number of beat cycles over the time duration
1/f0 is given by

\[
N = \int_{t=t_1}^{t_1 + 1/f_0} |f_t(t) - f_r(t)|\,dt
  = |\text{area of ABCD}| + |\text{area of DEFG}|. \tag{4.3}
\]

Note that

\[
f_1 = f_c + \Delta f - f_2. \tag{4.4}
\]

[Fig. 4.1 Variation of the instantaneous frequency with time in an FM radar: ft(t) sweeps as a triangular wave between fc − Δf and fc + Δf with period 1/f0; the echo fr(t) is the same waveform delayed by τ]

[Fig. 4.2 Variation of the instantaneous difference frequency with time in an FM radar: a ft(t) and fr(t), with the rising segment XY and the frequency f2 marked; b ft(t) − fr(t), which alternates between +f1 and −f1, with the regions ABCD and DEFG indicated]

The equation of the line XY in Fig. 4.2a is

y = mt + c, (4.5)
where

\[
m = 4 f_0\,\Delta f. \tag{4.6}
\]

At t = t1 + τ we have

\[
f_c + \Delta f = 4 f_0\,\Delta f\,(t_1 + \tau) + c. \tag{4.7}
\]

At t = t1 we have

\[
f_2 = 4 f_0\,\Delta f\, t_1 + c = f_c + \Delta f - 4 f_0\,\Delta f\,\tau. \tag{4.8}
\]

Therefore

\[
f_1 = f_c + \Delta f - f_2 = 4 f_0\,\Delta f\,\tau. \tag{4.9}
\]

Now

\[
N = 4\times\frac{1}{2}\cdot\frac{\tau}{2}\,f_1 + 2 f_1\left(\frac{1}{2 f_0} - \tau\right)
  = \tau f_1 + f_1\,\frac{1 - 2\tau f_0}{f_0}
  \approx \tau f_1 + f_1/f_0. \tag{4.10}
\]

Since f 0 is an integer, the number of cycles in one second is

\[
N f_0 = f_1(1 + \tau f_0) \approx f_1 = 4 f_0\,\Delta f\,\tau, \tag{4.11}
\]

which is proportional to τ and hence twice the distance between the target and
the radar. In other words,

τ = 2x/c, (4.12)

where x is the distance between the target and the radar and c is the velocity
of light. Thus the FM radar can be used for ranging.
2. (Haykin 1983) The instantaneous frequency of a cosine wave is equal to fc − Δf
for |t| < T/2 and fc for |t| > T/2. Determine the spectrum of this signal.
• Solution: The Fourier transform of this signal is given by
\[
S(f) = \int_{t=-\infty}^{-T/2}\cos(2\pi f_c t)\,\mathrm{e}^{-\mathrm{j}2\pi f t}\,dt
     + \int_{t=-T/2}^{T/2}\cos(2\pi(f_c + \Delta f)t)\,\mathrm{e}^{-\mathrm{j}2\pi f t}\,dt
     + \int_{t=T/2}^{\infty}\cos(2\pi f_c t)\,\mathrm{e}^{-\mathrm{j}2\pi f t}\,dt
     = \int_{t=-\infty}^{\infty}\cos(2\pi f_c t)\,\mathrm{e}^{-\mathrm{j}2\pi f t}\,dt
     + \int_{t=-T/2}^{T/2}\bigl(\cos(2\pi(f_c + \Delta f)t) - \cos(2\pi f_c t)\bigr)\mathrm{e}^{-\mathrm{j}2\pi f t}\,dt
     = \frac{1}{2}\bigl[\delta(f - f_c) + \delta(f + f_c)\bigr]
     + \frac{T}{2}\bigl[\operatorname{sinc}\bigl((f - f_c - \Delta f)T\bigr) + \operatorname{sinc}\bigl((f + f_c + \Delta f)T\bigr)\bigr]
     - \frac{T}{2}\bigl[\operatorname{sinc}\bigl((f - f_c)T\bigr) + \operatorname{sinc}\bigl((f + f_c)T\bigr)\bigr]. \tag{4.13}
\]
2
3. (Haykin 1983) Single sideband modulation may be viewed as a hybrid form of
amplitude modulation and frequency modulation. Evaluate the envelope and the
instantaneous frequency of an SSB wave, in terms of the message signal and its
Hilbert transform, for the two cases:
(a) When only the upper sideband is transmitted.
(b) When only the lower sideband is transmitted.
Assume that the message signal is m(t), the carrier amplitude is Ac /2 and carrier
frequency is f c .
• Solution: The SSB signal can be written as

Ac  
s(t) = m(t) cos(2π f c t) ± m̂(t) sin(2π f c t) . (4.14)
2
When the upper sideband is to be transmitted, the minus sign is used and when
the lower sideband is to be transmitted, the plus sign is used. The envelope of
s(t) is given by

\[
a(t) = \frac{A_c}{2}\sqrt{m^2(t) + \hat{m}^2(t)} \tag{4.15}
\]
is independent of whether the upper or lower sideband is transmitted. Note
that s(t) can be written as

s(t) = a(t) cos(2π f c t + θ(t)), (4.16)



where θ(t) denotes the instantaneous phase which is given by


 
\[
\theta(t) = \pm\tan^{-1}\left(\frac{\hat{m}(t)}{m(t)}\right). \tag{4.17}
\]

The plus sign in the above equation is used when the upper sideband is trans-
mitted and the minus sign is used when the lower sideband is transmitted. The
total instantaneous phase is given by

θtot (t) = 2π f c t + θ(t). (4.18)

The total instantaneous frequency is given by

\[
f_{\mathrm{tot}}(t) = \frac{1}{2\pi}\,\frac{d\theta_{\mathrm{tot}}(t)}{dt}. \tag{4.19}
\]
When the upper sideband is transmitted, the total instantaneous frequency is
given by

\[
f_{\mathrm{tot}}(t) = f_c + \frac{1}{2\pi}\,\frac{\hat{m}'(t)m(t) - \hat{m}(t)m'(t)}{m^2(t) + \hat{m}^2(t)}, \tag{4.20}
\]

where m  (t) denotes the derivative of m(t) and m̂  (t) denotes the derivative
of m̂(t). When the lower sideband is transmitted, the total instantaneous fre-
quency is given by

\[
f_{\mathrm{tot}}(t) = f_c - \frac{1}{2\pi}\,\frac{\hat{m}'(t)m(t) - \hat{m}(t)m'(t)}{m^2(t) + \hat{m}^2(t)}. \tag{4.21}
\]

4. (Haykin 1983) Consider a narrowband FM signal approximately defined by

s(t) ≈ Ac cos(2π f c t) − β Ac sin(2π f m t) sin(2π f c t). (4.22)

(a) Determine the envelope of s(t). What is the ratio of the maximum to the
minimum value of this envelope.
(b) Determine the total average power of the narrowband FM signal. Determine
the total average power in the sidebands.
(c) Assuming that s(t) in (4.22) can be written as

s(t) = a(t) cos(2π f c t + θ(t)) (4.23)

expand θ(t) in the form of a Maclaurin series. Assume that β < 0.3. What
is the power ratio of the third harmonic to the fundamental component.

• Solution: The envelope is given by


\[
a(t) = A_c\sqrt{1 + \beta^2\sin^2(2\pi f_m t)}. \tag{4.24}
\]

Therefore, the maximum and the minimum values of the envelope are given
by

\[
A_{\max} = A_c\sqrt{1 + \beta^2}, \qquad A_{\min} = A_c
\;\Rightarrow\; \frac{A_{\max}}{A_{\min}} = \sqrt{1 + \beta^2}. \tag{4.25}
\]

The total average power of narrowband FM is equal to

\[
P_{\mathrm{tot}} = \frac{A_c^2}{2} + \frac{2\beta^2 A_c^2}{8}. \tag{4.26}
\]

The total average power in the sidebands is equal to

\[
P_{\mathrm{mes}} = \frac{2\beta^2 A_c^2}{8}. \tag{4.27}
\]
Assuming that β ≪ 1, the narrowband FM signal can be written as

s(t) = a(t) cos(2π f c t + θ(t)), (4.28)

where θ(t) denotes the instantaneous phase of the message component and is
given by

θ(t) ≈ tan−1 (β sin(2π f m t)). (4.29)

Now, the Maclaurin series expansion of tan−1 (x) is (ignoring higher terms)

\[
\tan^{-1}(x) \approx x - \frac{x^3}{3}. \tag{4.30}
\]
Thus
\[
\theta(t) = \beta\sin(2\pi f_m t) - \frac{1}{3}\beta^3\sin^3(2\pi f_m t). \tag{4.31}
\]
Using the fact that

\[
\sin^3(\theta) = \frac{3\sin(\theta) - \sin(3\theta)}{4} \tag{4.32}
\]
(4.31) becomes
[Fig. 4.3 Armstrong-type FM modulator: m(t) drives a narrowband FM modulator (oscillator f0 = 1 MHz), followed by a frequency multiplier n1 and a mixer/BPF stage; the 1 MHz oscillator also feeds a frequency multiplier n2 whose output is applied to the mixer, and the BPF output is the wideband FM signal at fc = 104 MHz]

\[
\theta(t) \approx \beta\sin(2\pi f_m t) - \frac{\beta^3}{12}\bigl(3\sin(2\pi f_m t) - \sin(2\pi(3 f_m)t)\bigr)
= \left(\beta - \frac{\beta^3}{4}\right)\sin(2\pi f_m t) + \frac{\beta^3}{12}\sin(2\pi(3 f_m)t). \tag{4.33}
\]

Therefore, the power ratio of the third harmonic to the fundamental is



\[
R = \left[\frac{\beta^3}{12}\times\frac{4}{4\beta - \beta^3}\right]^2. \tag{4.34}
\]
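For the largest allowed β this ratio is already tiny; the one-line check below uses β = 0.3, the limit quoted in the problem.

```python
beta = 0.3
R = (beta**3 / 12 * 4 / (4 * beta - beta**3))**2
print(R)      # ~5.9e-05, i.e. roughly -42 dB below the fundamental
```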

5. (Proakis and Salehi 2005) To generate wideband FM, we can first generate a
narrowband FM signal, and then use frequency multiplication to spread the signal
bandwidth. This is illustrated in Fig. 4.3, which is called the Armstrong-type FM
modulator. The narrowband FM has a frequency deviation of 1 kHz.
(a) If the frequency of the first oscillator is 1 MHz, determine n 1 and n 2 that
is necessary to generate an FM signal at a carrier frequency of 104 MHz
and a maximum frequency deviation of 75 kHz. The BPF allows only the
difference frequency component.
(b) If the error in the carrier frequency f c for the wideband FM signal is to
be within ±200 Hz, determine the maximum allowable error in the 1 MHz
oscillator.

• Solution: The maximum frequency deviation of the output wideband FM sig-


nal is 75 kHz. The maximum frequency deviation of the narrowband FM
(NBFM) signal is 1 kHz. Therefore

\[
n_1 = \frac{75}{1} = 75. \tag{4.35}
\]
Consequently, the carrier frequency at the output of the first frequency multi-
plier is 75 MHz. However, the required carrier frequency is 104 MHz. Hence,
what we now require is a frequency translation. Therefore, we must have
[Fig. 4.4 Fourier transform of the frequency discriminator for negative frequencies: H(f)/j rises linearly from −πaBT at f = −fc − BT/2 to +πaBT at f = −fc + BT/2; the proposed demodulator passes s(t) through H(f) and then an envelope detector to give z(t)]

1 × n 2 − 75 = 104 MHz
⇒ n 2 = 179. (4.36)

Let us assume that the error in f 0 is x Hz. Hence, the error in the final output
carrier frequency is

n 2 x − n 1 x = ±200 Hz
⇒ x = ±1.92 Hz. (4.37)
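The three numbers obtained above can be checked in a couple of lines (a sketch, nothing more):

```python
n1 = 75e3 / 1e3                        # output deviation 75 kHz / NBFM deviation 1 kHz
n2 = (104 + n1 * 1) / 1                # BPF keeps the difference: n2*1 MHz - 75 MHz = 104 MHz
x = 200 / (n2 - n1)                    # oscillator error giving +/-200 Hz at the output
print(n1, n2, round(x, 2))             # -> 75.0 179.0 1.92
```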

6. Figure 4.4 shows the Fourier transform of a frequency discriminator for negative
frequencies. Here BT denotes the transmission bandwidth of the FM signal and
f c denotes the carrier frequency.
The block diagram of the system proposed for frequency demodulation is also
shown, where

\[
s(t) = A_c\cos\left(2\pi f_c t + 2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau\right), \tag{4.38}
\]

where m(t) denotes the message signal.


(a) Sketch the frequency response of H ( f ) for positive frequencies given that
h(t) is real-valued.
(b) Find the output z(t).
(c) Can the proposed system be used for frequency demodulation? Justify your
answer.

• Solution: Since h(t) is real-valued we must have

H (− f ) = H ∗ ( f ), (4.39)
[Fig. 4.5 Frequency response of the proposed discriminator: over fc − BT/2 < f < fc + BT/2, H(f)/j = 2πa(f − fc), and over −fc − BT/2 < f < −fc + BT/2, H(f)/j = 2πa(f + fc), each ranging from −πaBT to +πaBT]

which is illustrated in Fig. 4.5. Therefore



j 2πa( f − f c ) for f c − BT /2 < f < f c + BT /2
H( f ) = (4.40)
j 2πa( f + f c ) for − f c − BT /2 < f < − f c + BT /2.

In order to compute y(t), we use the complex lowpass equivalent model. We


assume that

\[
h(t) = 2h_c(t)\cos(2\pi f_c t) - 2h_s(t)\sin(2\pi f_c t)
     = \Re\left\{2\tilde{h}(t)\,\mathrm{e}^{\,\mathrm{j}2\pi f_c t}\right\}, \tag{4.41}
\]

where

h̃(t) = h c (t) + j h s (t). (4.42)

We know that the Fourier transform of the complex envelope of h(t) is given
by

H ( f + f c ) for − BT /2 < f < BT /2
H̃ ( f ) = , (4.43)
0 elsewhere

which reduces to

j 2πa f for − BT /2 < f < BT /2
H̃ ( f ) = . (4.44)
0 elsewhere

Let us denote the Fourier transform of the complex envelope of the output by
Ỹ ( f ). Then

Ỹ ( f ) = H̃ ( f ) S̃( f )
= j 2π f a S̃( f ), (4.45)

where S̃(f) is the Fourier transform of the complex envelope of s(t). Note
that

\[
\tilde{s}(t) = A_c\exp\left(\mathrm{j}2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau\right). \tag{4.46}
\]

From (4.45) it is clear that

\[
\tilde{y}(t) = a\,\frac{d\tilde{s}(t)}{dt}
= \mathrm{j}\,a A_c\, 2\pi k_f\, m(t)\exp\left(\mathrm{j}2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau\right)
= 2\pi a A_c k_f\, m(t)\exp\left(\mathrm{j}2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau + \mathrm{j}\pi/2\right). \tag{4.47}
\]

Therefore

\[
y(t) = 2\pi a A_c k_f\, m(t)\cos\left(2\pi f_c t + 2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau + \pi/2\right). \tag{4.48}
\]

The output is

z(t) = | ỹ(t)| = 2πa Ac k f |m(t)|. (4.49)

Thus, the proposed system cannot be used for frequency demodulation.

7. Figure 4.6 shows the Fourier transform of a frequency discriminator for negative
frequencies. Here BT denotes the transmission bandwidth of the FM signal and
f c denotes the carrier frequency.
The block diagram of the system proposed for frequency demodulation is also
shown, where

\[
s(t) = A_c\cos\left(2\pi f_c t + 2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau\right), \tag{4.50}
\]

where m(t) denotes the message signal.


(a) Sketch the frequency response of H ( f ) for positive frequencies given that
h(t) is real-valued.
(b) Find the output z(t).
(c) Can the proposed system be used for frequency demodulation? Justify your
answer.
[Fig. 4.6 Fourier transform of the frequency discriminator for negative frequencies: H(f)/j falls linearly from 0 at f = −fc − BT/2 to −2πaBT at f = −fc + BT/2; the proposed demodulator passes s(t) through H(f) and then an envelope detector to give z(t)]

• Solution: Since h(t) is real-valued we must have

H (− f ) = H ∗ ( f ), (4.51)

which is illustrated in Fig. 4.7. Therefore



−j 2πa( f − f c − BT /2) for f c − BT /2 < f < f c + BT /2
H( f ) =
−j 2πa( f + f c + BT /2) for − f c − BT /2 < f < − f c + BT /2.
(4.52)
In order to compute y(t), we use the complex lowpass equivalent model. We
assume that

\[
h(t) = 2h_c(t)\cos(2\pi f_c t) - 2h_s(t)\sin(2\pi f_c t)
     = \Re\left\{2\tilde{h}(t)\,\mathrm{e}^{\,\mathrm{j}2\pi f_c t}\right\}, \tag{4.53}
\]

[Fig. 4.7 Frequency response of the proposed discriminator: H(f)/j decreases linearly from +2πaBT at f = fc − BT/2 to 0 at f = fc + BT/2, and from 0 at f = −fc − BT/2 to −2πaBT at f = −fc + BT/2]

where

h̃(t) = h c (t) + j h s (t). (4.54)

We know that the Fourier transform of the complex envelope of h(t) is given
by

H ( f + f c ) for − BT /2 < f < BT /2
H̃ ( f ) = , (4.55)
0 elsewhere

which reduces to

−j 2πa( f − BT /2) for − BT /2 < f < BT /2
H̃ ( f ) = . (4.56)
0 elsewhere

Let us denote the Fourier transform of the complex envelope of the output by
Ỹ ( f ). Then

Ỹ ( f ) = H̃ ( f ) S̃( f )
= −j 2πa( f − BT /2) S̃( f ), (4.57)

where S̃( f ) is the Fourier transform of the complex envelope of s(t). Note
that
  t 
s̃(t) = Ac exp j 2πk f m(τ ) dτ . (4.58)
τ =0

From (4.45) it is clear that

\[
\tilde{y}(t) = -a\,\frac{d\tilde{s}(t)}{dt} + \mathrm{j}\pi a B_T\,\tilde{s}(t)
= -\mathrm{j}\,a A_c\, 2\pi k_f\, m(t)\exp\left(\mathrm{j}2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau\right)
 + \mathrm{j}\,a A_c\,\pi B_T\exp\left(\mathrm{j}2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau\right)
= \mathrm{j}\,\pi a A_c\bigl[B_T - 2k_f m(t)\bigr]\exp\left(\mathrm{j}2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau\right). \tag{4.59}
\]

Therefore

\[
y(t) = \pi a A_c\bigl[B_T - 2k_f m(t)\bigr]\cos\left(2\pi f_c t + 2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau + \pi/2\right). \tag{4.60}
\]
The output is
[Fig. 4.8 Delay-line method of demodulating FM signals: s(t) and a delayed replica of s(t) are subtracted, and the difference x(t) is envelope detected to give a(t)]

z(t) = | ỹ(t)| = πa Ac BT [1 − 2k f m(t)/BT ]. (4.61)

Therefore, the proposed system can be used for frequency demodulation pro-
vided
 
2k f m(t)/BT  < 1. (4.62)

8. (Haykin 1983) Consider the frequency demodulation scheme shown in Fig. 4.8
in which the incoming FM signal is passed through a delay line that produces a
delay of T such that 2π f c T = π/2. The delay-line output is subtracted from the
incoming FM signal and the resulting output is envelope detected. This demod-
ulator finds wide application in demodulating microwave FM waves. Assuming
that

s(t) = Ac cos(2π f c t + β sin(2π f m t)) (4.63)

compute a(t) when β < 1 and the delay T is such that

\[
2\pi f_m T \ll 1. \tag{4.64}
\]

• Solution: The signal x(t) can be written as

x(t) = s(t) − s(t − T )


= Ac cos(2π f c t + β sin(2π f m t))
−Ac cos(2π f c (t − T ) + β sin(2π f m (t − T )))
≈ Ac cos(2π f c t + β sin(2π f m t))
−Ac cos(2π f c t + β sin(2π f m t) − 2π f m T β cos(2π f m t) − π/2)
= Ac cos(2π f c t + β sin(2π f m t))
−Ac sin(2π f c t + β sin(2π f m t) − 2π f m T β cos(2π f m t)). (4.65)

Let

θ(t) = β sin(2π f m t)
α(t) = θ(t) − 2π f m T β cos(2π f m t). (4.66)

Then x(t) can be written as



x(t) = Ac cos(2π f c t + θ(t)) − Ac sin(2π f c t + α(t))


= Ac cos(2π f c t) [cos(θ(t)) − sin(α(t))]
−Ac sin(2π f c t) [sin(θ(t)) + cos(α(t))]
= a(t) cos(2π f c t + φ(t)), (4.67)

where
\[
\tan(\phi(t)) = \frac{\sin(\theta(t)) + \cos(\alpha(t))}{\cos(\theta(t)) - \sin(\alpha(t))} \tag{4.68}
\]

and the envelope of x(t) is



\[
a(t) = A_c\sqrt{2 + 2\sin(\theta(t) - \alpha(t))}
\approx A_c\sqrt{2}\,\bigl[1 + (\theta(t) - \alpha(t))/2\bigr]
= A_c\sqrt{2}\,\bigl[1 + \pi f_m T\beta\cos(2\pi f_m t)\bigr], \tag{4.69}
\]

where we have made use of the fact that

\[
2\pi f_m T \ll 1,\quad \beta < 1
\;\Rightarrow\; \theta(t) - \alpha(t) \ll 1
\;\Rightarrow\; \sin(\theta(t) - \alpha(t)) \approx \theta(t) - \alpha(t). \tag{4.70}
\]

Note that a(t) > 0 for all t. The message signal cos(2π f m t) can be recovered
by passing a(t) through a capacitor.
9. Consider the frequency demodulation scheme shown in Fig. 4.9 in which the
incoming FM signal is passed through a delay line that produces a delay of T
such that 2π f c T = π/2. The delay-line output is subtracted from the incoming
FM signal and the resulting output is envelope detected. This demodulator finds
wide application in demodulating microwave FM waves. Assuming that

\[
s(t) = A_c\cos\left(2\pi f_c t + 2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau\right) \tag{4.71}
\]

[Fig. 4.9 Delay-line method of demodulating FM signals: s(t) and a delayed replica of s(t) are subtracted, and the difference x(t) is envelope detected to give a(t)]

compute a(t) when the delay T is such that

\[
2\pi\,\Delta f\, T \ll 1, \tag{4.72}
\]

where Δf denotes the frequency deviation. Assume also that m(t) is constant
over any interval T .
• Solution: The signal x(t) can be written as

x(t) = s(t) − s(t − T ), (4.73)

where

\[
s(t - T) = A_c\cos\left(2\pi f_c (t - T) + 2\pi k_f\int_{\tau=0}^{t-T} m(\tau)\,d\tau\right)
         = A_c\cos\left(2\pi f_c t - \pi/2 + 2\pi k_f\int_{\tau=0}^{t-T} m(\tau)\,d\tau\right)
         = A_c\sin\left(2\pi f_c t + 2\pi k_f\int_{\tau=0}^{t-T} m(\tau)\,d\tau\right). \tag{4.74}
\]

Now

\[
2\pi k_f\int_{\tau=0}^{t-T} m(\tau)\,d\tau
= 2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau - 2\pi k_f\int_{\tau=t-T}^{t} m(\tau)\,d\tau
= 2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau - 2\pi k_f T\, m(t)
= \theta(t) - 2\pi k_f T\, m(t) = \alpha(t)\ \text{(say)}, \tag{4.75}
\]

where

\[
2\pi k_f\int_{\tau=0}^{t} m(\tau)\,d\tau = \theta(t). \tag{4.76}
\]

Therefore x(t) in (4.73) becomes

x(t) = Ac cos(2π f c t + θ(t)) − Ac sin(2π f c t + α(t))


= Ac cos(2π f c t) [cos(θ(t)) − sin(α(t))]
−Ac sin(2π f c t) [sin(θ(t)) + cos(α(t))]
= a(t) cos(2π f c t + φ(t)), (4.77)

where
\[
\tan(\phi(t)) = \frac{\sin(\theta(t)) + \cos(\alpha(t))}{\cos(\theta(t)) - \sin(\alpha(t))} \tag{4.78}
\]

and the envelope of x(t) is



\[
a(t) = A_c\sqrt{2 + 2\sin(\theta(t) - \alpha(t))}
\approx A_c\sqrt{2}\,\bigl[1 + (\theta(t) - \alpha(t))/2\bigr]
= A_c\sqrt{2}\,\bigl[1 + \pi k_f T\, m(t)\bigr], \tag{4.79}
\]

where we have made use of the fact that

\[
\Delta f = \max|k_f m(t)|,\quad 2\pi\,\Delta f\, T \ll 1\ \text{(given)}
\;\Rightarrow\; |\theta(t) - \alpha(t)| \ll 1
\;\Rightarrow\; \sin(\theta(t) - \alpha(t)) \approx \theta(t) - \alpha(t). \tag{4.80}
\]

Note that a(t) > 0 for all t. The message signal m(t) can be recovered by
passing a(t) through a capacitor.
10. A tone-modulated FM signal of the form:

s(t) = Ac sin(2π f c t + β cos(2π f m t)) (4.81)

is passed through an ideal unity gain BPF with center frequency equal to the
carrier frequency and bandwidth equal to 3 f m (±1.5 f m on either side of the
carrier), yielding the signal z(t).
(a) Derive the expression for z(t).
(b) Assuming that z(t) is of the form

z(t) = a(t) cos(2π f c t + θ(t)) (4.82)

compute a(t) and θ(t).


Use the relations

\[
J_n(\beta) = \frac{1}{2\pi}\int_{x=-\pi}^{\pi}\mathrm{e}^{\,\mathrm{j}(\beta\sin(x) - nx)}\,dx,
\qquad J_n(\beta) = (-1)^n J_{-n}(\beta), \tag{4.83}
\]

where Jn (β) denotes the nth order Bessel function of the first kind and
argument β.
• Solution: The input FM signal can be written as
\[
s(t) = A_c\cos(2\pi f_c t + \beta\cos(2\pi f_m t) - \pi/2)
     = \Re\left\{\tilde{s}(t)\,\mathrm{e}^{\,\mathrm{j}2\pi f_c t}\right\}, \tag{4.84}
\]

where

s̃(t) = Ac e j β cos(2π fm t)−j π/2


= −j Ac e j β cos(2π fm t) , (4.85)

which is periodic with a period 1/ f m . Hence, s̃(t) can be expanded in the form
of a complex Fourier series given by


\[
\tilde{s}(t) = \sum_{n=-\infty}^{\infty} c_n\,\mathrm{e}^{\,\mathrm{j}2\pi n f_m t}, \tag{4.86}
\]

where

\[
c_n = f_m\int_{t=-1/(2 f_m)}^{1/(2 f_m)}\tilde{s}(t)\,\mathrm{e}^{-\mathrm{j}2\pi n f_m t}\,dt
    = -\mathrm{j}\,A_c f_m\int_{t=-1/(2 f_m)}^{1/(2 f_m)}\mathrm{e}^{\,\mathrm{j}\beta\cos(2\pi f_m t)}\,\mathrm{e}^{-\mathrm{j}2\pi n f_m t}\,dt. \tag{4.87}
\]

Let

2π f m t = π/2 − x
⇒ 2π f m dt = −d x. (4.88)

Thus

\[
c_n = \frac{\mathrm{j}\,A_c}{2\pi}\int_{x=3\pi/2}^{-\pi/2}\mathrm{e}^{\,\mathrm{j}(\beta\sin(x) - n(\pi/2 - x))}\,dx. \tag{4.89}
\]

Noting that the integrand is periodic with respect to x with a period 2π we


can interchange the limits and integrate from −π to π to get

\[
c_n = \frac{-\mathrm{j}\,A_c}{2\pi}\int_{x=-\pi}^{\pi}\mathrm{e}^{\,\mathrm{j}(\beta\sin(x) - n(\pi/2 - x))}\,dx
    = \frac{A_c}{2\pi}\,\mathrm{e}^{-\mathrm{j}(n+1)\pi/2}\int_{x=-\pi}^{\pi}\mathrm{e}^{\,\mathrm{j}(\beta\sin(x) + nx)}\,dx
    = A_c J_{-n}(\beta)\,\mathrm{e}^{-\mathrm{j}(n+1)\pi/2}. \tag{4.90}
\]

Therefore
\[
\tilde{s}(t) = A_c\sum_{n=-\infty}^{\infty} J_{-n}(\beta)\,\mathrm{e}^{\,\mathrm{j}(2\pi n f_m t - (n+1)\pi/2)}. \tag{4.91}
\]

Thus


\[
s(t) = A_c\sum_{n=-\infty}^{\infty} J_{-n}(\beta)\cos(2\pi f_c t + 2\pi n f_m t - (n+1)\pi/2). \tag{4.92}
\]

The output of the BPF is

z(t) = Ac J0 (β) cos(2π f c t − π/2) + Ac J−1 (β) cos(2π f c t + 2π f m t − π)


+Ac J1 (β) cos(2π f c t − 2π f m t)
= Ac J0 (β) sin(2π f c t) + Ac J1 (β) cos(2π f c t + 2π f m t)
+Ac J1 (β) cos(2π f c t − 2π f m t)
= Ac J0 (β) sin(2π f c t) + 2 Ac J1 (β) cos(2π f c t) cos(2π f m t). (4.93)

Assuming that z(t) is of the form

z(t) = a(t) cos(2π f c t + θ(t))


= a(t) cos(2π f c t) cos(θ(t)) − a(t) sin(2π f c t) sin(θ(t)) (4.94)

we have

a(t) cos(θ(t)) = 2 Ac J1 (β) cos(2π f m t)


a(t) sin(θ(t)) = −Ac J0 (β). (4.95)

Thus, the envelope of the output is



\[
a(t) = A_c\sqrt{(2J_1(\beta)\cos(2\pi f_m t))^2 + J_0^2(\beta)}. \tag{4.96}
\]

The phase is
 
\[
\theta(t) = -\tan^{-1}\left(\frac{J_0(\beta)}{2J_1(\beta)\cos(2\pi f_m t)}\right). \tag{4.97}
\]

11. (Haykin 1983) The bandwidth of an FM signal extends over both sides of the
carrier frequency. However, in the single sideband version of FM, it is possible
to transmit either the upper or the lower sideband.
(a) Assuming that the FM signal is given by

s(t) = Ac cos(2π f c t + φ(t)) (4.98)



explain how we can transmit only the upper sideband. Express your result in
terms of complex envelope of s(t) and Hilbert transforms. Assume that for
all practical purposes, s(t) is bandlimited to fc − BT/2 < |f| < fc + BT/2
and fc ≫ BT.
(b) Verify your answer for single-tone FM modulation when

φ(t) = β sin(2π f m t). (4.99)

Use the Fourier series representation for this FM signal given by




\[
s(t) = A_c\sum_{n=-\infty}^{\infty} J_n(\beta)\cos(2\pi(f_c + n f_m)t). \tag{4.100}
\]

• Solution: Recall that in SSB modulation with upper sideband transmitted, the
signal is given by

\[
s(t) = m(t)\cos(2\pi f_c t) - \hat{m}(t)\sin(2\pi f_c t)
     = \Re\left\{\bigl[m(t) + \mathrm{j}\,\hat{m}(t)\bigr]\mathrm{e}^{\,\mathrm{j}2\pi f_c t}\right\}, \tag{4.101}
\]

where m̂(t) is the Hilbert transform of the message m(t), which is typically
bandlimited between [−W, W] and fc ≫ W. Observe that m(t) in (4.101)
can be complex. For the case of the FM signal given by

\[
s(t) = A_c\cos(2\pi f_c t + \phi(t))
     = A_c\,\Re\left\{\mathrm{e}^{\,\mathrm{j}(2\pi f_c t + \phi(t))}\right\} \tag{4.102}
\]

the complex envelope is given by

s̃(t) = Ac e j φ(t) . (4.103)

Clearly, s̃(t) is bandlimited to −BT /2 < | f | < BT /2.


Using the concept given in (4.101) the required equation for the FM signal
with only the upper sideband transmitted is given by
 
\[
s_1(t) = \Re\left\{\Bigl[\tilde{s}(t) + \mathrm{j}\,\widehat{\tilde{s}}(t)\Bigr]\mathrm{e}^{\,\mathrm{j}2\pi f_c t}\right\}, \tag{4.104}
\]

where $\widehat{\tilde{s}}(t)$ is the Hilbert transform of $\tilde{s}(t)$.
Now in the given example

\[
s(t) = A_c\cos(2\pi f_c t + \beta\sin(2\pi f_m t))
     = A_c\sum_{n=-\infty}^{\infty} J_n(\beta)\cos(2\pi(f_c + n f_m)t). \tag{4.105}
\]

The complex envelope can be written as




\[
\tilde{s}(t) = A_c\sum_{n=-\infty}^{\infty} J_n(\beta)\,\mathrm{e}^{\,\mathrm{j}2\pi n f_m t}
             = A_c\sum_{n=-\infty}^{\infty} J_n(\beta)\bigl(\cos(2\pi n f_m t) + \mathrm{j}\sin(2\pi n f_m t)\bigr). \tag{4.106}
\]

The Hilbert transform of s̃(t) in (4.106) is

\[
\widehat{\tilde{s}}(t) = A_c\sum_{n=-\infty}^{-1} J_n(\beta)\bigl(\cos(2\pi n f_m t + \pi/2) + \mathrm{j}\sin(2\pi n f_m t + \pi/2)\bigr)
 + A_c\sum_{n=1}^{\infty} J_n(\beta)\bigl(\cos(2\pi n f_m t - \pi/2) + \mathrm{j}\sin(2\pi n f_m t - \pi/2)\bigr)
= A_c\sum_{n=-\infty}^{-1} J_n(\beta)\bigl(-\sin(2\pi n f_m t) + \mathrm{j}\cos(2\pi n f_m t)\bigr)
 + A_c\sum_{n=1}^{\infty} J_n(\beta)\bigl(\sin(2\pi n f_m t) - \mathrm{j}\cos(2\pi n f_m t)\bigr) \tag{4.107}
\]

since the Hilbert transformer removes the dc component (n = 0), introduces a


phase shift of π/2 for negative frequencies (n < 0) and a phase shift of −π/2
for positive frequencies (n > 0). Hence


s̃(t) + j
s̃(t) = Ac J0 (β) + 2 Ac Jn (β)e j 2πn fm t (4.108)
n=1

and s1 (t) in (4.104) becomes




\[
s_1(t) = A_c J_0(\beta)\cos(2\pi f_c t) + 2A_c\sum_{n=1}^{\infty} J_n(\beta)\cos(2\pi(f_c + n f_m)t). \tag{4.109}
\]

Thus, we find that only the upper sideband is transmitted, which verifies our
result.

12. (Haykin 1983) Consider the message signal as shown in Fig. 4.10, which is used
to frequency modulate a carrier. Assume a frequency sensitivity of k f Hz/V and
that the FM signal is given by

s(t) = Ac cos(2π f c t + φ(t)). (4.110)


[Fig. 4.10 A periodic square wave corresponding to the message signal: m(t) = +1 V for |t| < T0/4 and −1 V over the remaining half of each period T0]

(a) Sketch the waveform corresponding to the total instantaneous frequency


f (t) of the FM signal for −T0 /4 ≤ t ≤ 3T0 /4. Label all the important points
on the axes.
(b) Sketch φ(t) for −T0 /4 ≤ t ≤ 3T0 /4. Label all the important points on the
axes. Assume that φ(t) has zero mean.
(c) Write down the expression for the complex envelope of s(t) in terms of φ(t).
(d) The FM signal s(t) can be written as


\[
s(t) = \sum_{n=-\infty}^{\infty} c_n\cos(2\pi f_c t + 2n\pi t/T_0), \tag{4.111}
\]

where

\[
c_n = a_n\,\operatorname{sinc}\left(\frac{\beta - n}{2}\right) + b_n\,\operatorname{sinc}\left(\frac{\beta + n}{2}\right), \tag{4.112}
\]

where β = k f T0 . Compute an and bn . Assume the limits of integration (for


computing cn ) to be from −T0 /4 to 3T0 /4.

• Solution: The total instantaneous frequency is depicted in Fig. 4.11b. Note


that

\[
\phi(t) = 2\pi k_f\int m(\tau)\,d\tau. \tag{4.113}
\]

The variation of φ(t) is shown in Fig. 4.11c where

\[
-A + 2\pi k_f\,\frac{T_0}{2} = A
\;\Rightarrow\; A = \pi k_f\,\frac{T_0}{2}. \tag{4.114}
\]

The complex envelope of s(t) is


[Fig. 4.11 Plot of the total instantaneous frequency and φ(t): a the message m(t); b the instantaneous frequency f(t), equal to fc + kf for |t| < T0/4 and fc − kf over the rest of the period; c φ(t), a triangular wave with peaks ±A and period T0]

s̃(t) = Ac e j φ(t) . (4.115)

In order to compute the Fourier coefficients cn we note that



2πk f t for − T0 /4 ≤ t ≤ T0 /4
φ(t) = . (4.116)
2πk f (−t + T0 /2) for T0 /4 ≤ t ≤ 3T0 /4

Therefore
\[
c_n = \frac{1}{T_0}\int_{t=-T_0/4}^{3T_0/4}\tilde{s}(t)\,\mathrm{e}^{-\mathrm{j}2\pi n t/T_0}\,dt
    = \frac{A_c}{T_0}\int_{t=-T_0/4}^{T_0/4}\mathrm{e}^{\,\mathrm{j}2\pi t(k_f - n/T_0)}\,dt
    + \frac{A_c}{T_0}\,\mathrm{e}^{\,\mathrm{j}\pi\beta}\int_{t=T_0/4}^{3T_0/4}\mathrm{e}^{-\mathrm{j}2\pi t(k_f + n/T_0)}\,dt. \tag{4.117}
\]

The first integral in (4.117) evaluates to


 
\[
I = \frac{A_c}{2}\,\operatorname{sinc}\left(\frac{\beta - n}{2}\right). \tag{4.118}
\]

The second integral in (4.117) evaluates to


\[
II = A_c\,\frac{\mathrm{e}^{-\mathrm{j}\beta\pi/2}\,\mathrm{e}^{-\mathrm{j}3n\pi/2} - \mathrm{e}^{\,\mathrm{j}\beta\pi/2}\,\mathrm{e}^{-\mathrm{j}n\pi/2}}{-\mathrm{j}2\pi(\beta + n)}. \tag{4.119}
\]

Now
 
\[
-\frac{3n\pi}{2} = \left(-\frac{3\pi}{2} + \pi\right)n - n\pi = -\frac{n\pi}{2} - n\pi. \tag{4.120}
\]

Similarly
\[
-\frac{n\pi}{2} = \left(-\frac{\pi}{2} + \pi\right)n - n\pi = \frac{n\pi}{2} - n\pi. \tag{4.121}
\]
Substituting (4.120) and (4.121) in (4.119), we get
 
Ac −j nπ β+n
II = e sinc
2 2
 
Ac β+n
= (−1)n sinc . (4.122)
2 2

Therefore

\[
c_n = \frac{A_c}{2}\,\operatorname{sinc}\left(\frac{\beta - n}{2}\right) + \frac{A_c}{2}(-1)^n\,\operatorname{sinc}\left(\frac{\beta + n}{2}\right), \tag{4.123}
\]

which implies that

\[
a_n = \frac{A_c}{2}, \qquad b_n = (-1)^n\,\frac{A_c}{2}. \tag{4.124}
\]
[Fig. 4.12 Variation of the instantaneous frequency with time in an FM radar: ft(t) is a sinusoidal sweep between fc − Δf and fc + Δf with period 1/f0; the received echo fr(t) is the same sweep delayed by τ]

13. In a frequency-modulated radar, the instantaneous frequency of the transmitted


carrier f t (t) is varied as given in Fig. 4.12. The instantaneous frequency of the
received echo fr (t) is also shown, where τ is the round-trip delay time. Assuming
that f0 τ ≪ 1 so that cos(2π f0 τ) ≈ 1 and sin(2π f0 τ) ≈ 2π f0 τ, determine the
number of beat (difference frequency) cycles in one second, in terms of the
frequency deviation (Δf) of the carrier frequency, the delay τ, and the repetition
frequency f 0 . Assume that f 0 is an integer. Note that in Fig. 4.12

\[
f_t(t) = f_c + \Delta f\sin(2\pi f_0 t). \tag{4.125}
\]

• Solution: Let the transmitted signal be given by

\[
s(t) = A_1\cos\left(2\pi\int_{\tau=0}^{t} f_t(\tau)\,d\tau\right). \tag{4.126}
\]

Let the received signal be given by

\[
r(t) = A_2\cos\left(2\pi\int_{\tau=0}^{t} f_r(\tau)\,d\tau\right). \tag{4.127}
\]

The beat (difference) frequency is f t (t) − fr (t). The number of beat cycles
over the time duration 1/ f 0 is given by
\[
N = \int_{t=0}^{1/f_0} |f_t(t) - f_r(t)|\,dt. \tag{4.128}
\]

Note that

\[
f_r(t) = f_c + \Delta f\sin(2\pi f_0 (t - \tau))
       = f_c + \Delta f\bigl[\sin(2\pi f_0 t)\cos(2\pi f_0\tau) - \cos(2\pi f_0 t)\sin(2\pi f_0\tau)\bigr]
  \approx f_c + \Delta f\bigl[\sin(2\pi f_0 t) - 2\pi f_0\tau\cos(2\pi f_0 t)\bigr]. \tag{4.129}
\]

Therefore

\[
f_t(t) - f_r(t) = 2\pi f_0\tau\,\Delta f\cos(2\pi f_0 t). \tag{4.130}
\]

Hence (4.128) becomes


 1/(4 f 0 )
N = 2π f 0 τ  f × 4 cos(2π f 0 t) dt
t=0
 
 sin(2π f 0 t) 1/(4 f0 )

= 8π f 0 τ  f  
2π f 0 t=0
= 4τ  f. (4.131)

Since f 0 is an integer, the number of beat cycles per second is

\[
N f_0 = 4\tau\,\Delta f\, f_0. \tag{4.132}
\]
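As a worked illustration of (4.132) (the numbers below are assumptions, not from the problem), a target 15 km away gives:

```python
c = 3e8                                # speed of light, m/s
x, delta_f, f0 = 15e3, 50e3, 100       # assumed range, deviation and sweep rate
tau = 2 * x / c                        # round-trip delay = 100 microseconds (f0*tau = 0.01 << 1)
print(4 * tau * delta_f * f0)          # -> 2000 beat cycles per second
```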

14. (Haykin 1983) The sinusoidal modulating wave

m(t) = Am cos(2π f m t) (4.133)

is applied to a phase modulator with phase sensitivity k p . The modulated signal


s(t) is of the form Ac cos(2π f c t + φ(t)).
(a) Determine the spectrum of s(t), assuming that the maximum phase deviation
β p = k p Am does not exceed 0.3 rad.
(b) Construct a phasor diagram for s(t).

• Solution: The PM signal is given by


[Fig. 4.13 Phasor diagram for narrowband phase modulation: a carrier phasor of length Ac along the cos(2π fc t) axis with a quadrature component Ac βp cos(2π fm t) along the sin(2π fc t) axis]

\[
s(t) = A_c\cos(2\pi f_c t + k_p A_m\cos(2\pi f_m t))
     = A_c\cos(2\pi f_c t)\cos(k_p A_m\cos(2\pi f_m t)) - A_c\sin(2\pi f_c t)\sin(k_p A_m\cos(2\pi f_m t))
\approx A_c\cos(2\pi f_c t) - A_c\beta_p\sin(2\pi f_c t)\cos(2\pi f_m t)
     = A_c\cos(2\pi f_c t) - \frac{A_c\beta_p}{2}\bigl[\sin(2\pi(f_c - f_m)t) + \sin(2\pi(f_c + f_m)t)\bigr]. \tag{4.134}
\]

Hence, the spectrum is given by

\[
S(f) = \frac{A_c}{2}\bigl[\delta(f - f_c) + \delta(f + f_c)\bigr]
     - \frac{A_c\beta_p}{4\mathrm{j}}\bigl[\delta(f - f_c + f_m) - \delta(f + f_c - f_m)\bigr]
     - \frac{A_c\beta_p}{4\mathrm{j}}\bigl[\delta(f - f_c - f_m) - \delta(f + f_c + f_m)\bigr]. \tag{4.135}
\]

The phasor diagram is shown in Fig. 4.13.


15. Consider a tone-modulated PM signal of the form

s(t) = Ac cos(2π f c t + β p cos(2π f m t)), (4.136)

where β p = k p Am . This modulated signal is applied to an ideal BPF with


unity gain, midband frequency f c , and passband extending from f c − 1.5 f m
to f c + 1.5 f m . Determine the envelope, phase, and instantaneous frequency of
the modulated signal at the filter output as functions of time.
• Solution: The PM signal is given by

s(t) = Ac cos(2π f c t + k p Am cos(2π f m t)). (4.137)

The complex envelope is



s̃(t) = Ac exp( j k p Am cos(2π f m t)), (4.138)

which is periodic with period 1/ f m and hence can be represented in the form
of a Fourier series as follows:


\[
\tilde{s}(t) = \sum_{n=-\infty}^{\infty} c_n\exp(\mathrm{j}\,2\pi n f_m t). \tag{4.139}
\]

The coefficients cn are given by

\[
c_n = A_c f_m\int_{t=-1/(2 f_m)}^{1/(2 f_m)}\exp\bigl(\mathrm{j}\,k_p A_m\cos(2\pi f_m t) - \mathrm{j}\,2\pi n f_m t\bigr)\,dt. \tag{4.140}
\]

Let

2π f m t = π/2 − x. (4.141)

Then

\[
c_n = \frac{-A_c}{2\pi}\int_{x=3\pi/2}^{-\pi/2}\exp\bigl(\mathrm{j}\,k_p A_m\sin(x) - \mathrm{j}(n\pi/2 - nx)\bigr)\,dx. \tag{4.142}
\]

Since the above integrand is periodic with a period of 2π, we can write

\[
c_n = \frac{A_c}{2\pi}\exp(-\mathrm{j}\,n\pi/2)\int_{x=-\pi}^{\pi}\exp\bigl(\mathrm{j}\,k_p A_m\sin(x) + \mathrm{j}\,nx\bigr)\,dx. \tag{4.143}
\]

We know that

\[
J_n(\beta) = \frac{1}{2\pi}\int_{x=-\pi}^{\pi}\exp\bigl(\mathrm{j}\,\beta\sin(x) - \mathrm{j}\,nx\bigr)\,dx, \tag{4.144}
\]

where Jn (β) is the nth-order Bessel function of the first kind and argument β.
Thus

cn = Ac exp(−j nπ/2)J−n (β p ). (4.145)

The transmitted PM signal can be written as



\[
s(t) = \Re\{\tilde{s}(t)\exp(\mathrm{j}\,2\pi f_c t)\}
     = A_c\,\Re\left\{\sum_{n=-\infty}^{\infty}\exp(-\mathrm{j}\,n\pi/2)J_{-n}(\beta_p)\exp(\mathrm{j}\,2\pi n f_m t)\exp(\mathrm{j}\,2\pi f_c t)\right\}
     = A_c\sum_{n=-\infty}^{\infty} J_{-n}(\beta_p)\cos(2\pi(f_c + n f_m)t - n\pi/2). \tag{4.146}
\]

The BPF allows only the components at f c , f c − f m and f c + f m . Thus, the


BPF output is

x(t) = Ac J0 (β p ) cos(2π f c t)
+Ac J−1 (β p ) cos(2π( f c + f m )t − π/2)
+Ac J1 (β p ) cos(2π( f c − f m )t + π/2)
= Ac J0 (β p ) cos(2π f c t)
+Ac J−1 (β p ) sin(2π( f c + f m )t)
−Ac J1 (β p ) sin(2π( f c − f m )t). (4.147)

However

J−1 (β p ) = −J1 (β p ). (4.148)

Hence

x(t) = Ac J0 (β p ) cos(2π f c t)
−Ac J1 (β p ) sin(2π( f c + f m )t)
−Ac J1 (β p ) sin(2π( f c − f m )t)
= Ac J0 (β p ) cos(2π f c t)
−2 Ac J1 (β p ) sin(2π f c t) cos(2π f m t)
= a(t) cos(2π f c t + θi (t)). (4.149)

The envelope of x(t) is



\[
a(t) = A_c\sqrt{J_0^2(\beta_p) + 4J_1^2(\beta_p)\cos^2(2\pi f_m t)}. \tag{4.150}
\]

The total instantaneous phase of x(t) is


\[
\phi_i(t) = 2\pi f_c t + \tan^{-1}\left[\frac{2J_1(\beta_p)}{J_0(\beta_p)}\cos(2\pi f_m t)\right]. \tag{4.151}
\]

The total instantaneous frequency of x(t) is


\[
f_i(t) = f_c + \frac{1}{2\pi}\frac{d}{dt}\tan^{-1}\left[\frac{2J_1(\beta_p)}{J_0(\beta_p)}\cos(2\pi f_m t)\right]
= f_c - \frac{2J_1(\beta_p)/J_0(\beta_p)}{1 + \bigl(2J_1(\beta_p)/J_0(\beta_p)\bigr)^2\cos^2(2\pi f_m t)}\, f_m\sin(2\pi f_m t)
= f_c - \frac{2J_1(\beta_p)J_0(\beta_p)}{J_0^2(\beta_p) + \bigl(2J_1(\beta_p)\bigr)^2\cos^2(2\pi f_m t)}\, f_m\sin(2\pi f_m t). \tag{4.152}
\]

16. (Haykin 1983) A carrier wave is frequency modulated using a sinusoidal signal
of frequency f m and amplitude Am .
(a) Determine the values of the modulation index β for which the carrier com-
ponent of the FM signal is reduced to zero.
(b) In a certain experiment conducted with f m = 1 kHz and increasing Am start-
ing from zero volts, it is found that the carrier component of the FM signal
is reduced to zero for the first time when Am = 2 V. What is the frequency
sensitivity of the modulator? What is the value of Am for which the carrier
component is reduced to zero for the second time?

• Solution: From the table of Bessel functions, we see that J0 (β) is equal to zero
for

β = 2.44
β = 5.52
β = 8.65
β = 11.8. (4.153)

For tone modulation


\[
\beta = \frac{k_f A_m}{f_m}
\;\Rightarrow\; k_f = \frac{\beta f_m}{A_m} = 1.22\ \text{kHz/V}. \tag{4.154}
\]

The value of Am for which the carrier component goes to zero for the second
time is equal to

\[
A_m = \frac{\beta f_m}{k_f} = \frac{5.52}{1.22} = 4.52\ \text{V}. \tag{4.155}
\]

17. An FM signal with modulation index β = 2 is transmitted through an ideal


bandpass filter with midband frequency f c and bandwidth 7 f m , where f c is the
carrier frequency and f m is the frequency of the sinusoidal modulating wave.
Determine the spectrum of the filter output.
• Solution: We know that the spectrum of the tone-modulated FM signal is given
by

\[
S(f) = \frac{A_c}{2}\sum_{n=-\infty}^{\infty} J_n(\beta)\bigl[\delta(f - f_c - n f_m) + \delta(f + f_c + n f_m)\bigr]. \tag{4.156}
\]

The spectrum at the output of the BPF is given by

\[
X(f) = \frac{A_c}{2}\sum_{n=-3}^{3} J_n(2)\bigl[\delta(f - f_c - n f_m) + \delta(f + f_c + n f_m)\bigr], \tag{4.157}
\]

which is illustrated in Fig. 4.14.


18. (Haykin 1983) Consider a wideband PM signal produced by a sinusoidal mod-
ulating wave m(t) = Am cos(2π f m t), using a modulator with phase sensitivity
k p.
(a) Show that if the phase deviation of the PM signal is large compared to one
radian, the bandwidth of the PM signal varies linearly with the modulation
frequency f m .
(b) Compare this bandwidth of the wideband PM signal with that of a wideband
FM signal produced by m(t) and frequency sensitivity k f .

[Fig. 4.14 Spectrum of the signal at the BPF output for Ac = 2: impulses at fc + n fm for |n| ≤ 3, with weights Jn(2)]

• Solution: The PM signal is given by

s(t) = Ac cos(2π f c t + k p Am cos(2π f m t)). (4.158)

The phase deviation is k p Am . The instantaneous frequency due to the message


is
\[
\frac{1}{2\pi}\frac{d}{dt}\bigl[k_p A_m\cos(2\pi f_m t)\bigr] = -k_p A_m f_m\sin(2\pi f_m t). \tag{4.159}
\]
The frequency deviation is

\[
\Delta f_1 = k_p A_m f_m, \tag{4.160}
\]

which is directly proportional to f m . Now, the PM signal in (4.158) can be


considered to be an FM signal with message given by

m 1 (t) = −(k p /k f )Am f m sin(2π f m t). (4.161)

Therefore, the bandwidth of the PM signal in (4.158) is

BT,PM = 2(Δf1 + fm)
= 2 f m (k p Am + 1)
≈ 2 f m k p Am , (4.162)

which varies linearly with f m . However, in the case of FM, the frequency
deviation is

\[
\Delta f_2 = k_f A_m, \tag{4.163}
\]

which is independent of f m . The bandwidth of the FM signal is

BT,FM = 2(Δf2 + fm)
= 2(k f Am + f m ). (4.164)

19. Figure 4.15 shows the block diagram of a system. Here s(t) and h(t) are FM
signals, given by

Fig. 4.15 Block diagram of a system: the product g(t)s(t) is applied to a filter with impulse response h(t), whose output is the system output

s(t) = cos(2π f_c t − π f_m t)
h(t) = cos(2π f_c t + π f_m t). (4.165)

Assume that g(t) is a real-valued lowpass signal with bandwidth [−W, W ],


f_m < W , f_c ≫ W , and h(t) is of the form

h(t) = 2h c (t) cos(2π f c t) − 2h s (t) sin(2π f c t). (4.166)

(a) Using the method of complex envelopes, determine the output of h(t).
(b) Determine the envelope of the output of h(t).

• Solution: Let the signal at the output of the multiplier be denoted by v1 (t).
Then

v1 (t) = g(t) cos(2π f c t − π f m t)


= g(t) [cos(2π f c t) cos(π f m t) + sin(2π f c t) sin(π f m t)] .
(4.167)

Comparing the above equation with that of the canonical representation of a


bandpass signal, we conclude that the complex envelope of v1 (t) is given by

ṽ1 (t) = g(t)e−j π fm t . (4.168)

Note that according to the representation in (4.167), v1 (t) is a bandpass signal,


with carrier frequency f c . Similarly, the complex envelope of the filter is given
by

h̃(t) = (1/2) e^{j π f_m t}, (4.169)
where we have assumed that

h(t) = ℜ[2h̃(t) e^{j 2π f_c t}]. (4.170)

Thus, the complex envelope of the filter output can be written as

ỹ(t) = h̃(t)  ṽ1 (t), (4.171)

where  denotes convolution. Therefore


4 Frequency Modulation 275
ỹ(t) = ∫_{τ=−∞}^{∞} ṽ_1(τ) h̃(t − τ) dτ
     = (1/2) ∫_{τ=−∞}^{∞} g(τ) e^{−j π f_m τ} e^{j π f_m (t−τ)} dτ
     = (1/2) e^{j π f_m t} ∫_{τ=−∞}^{∞} g(τ) e^{−j 2π f_m τ} dτ
     = (1/2) e^{j π f_m t} G(f_m). (4.172)

Therefore, the output of h(t) is

y(t) = ℜ[ỹ(t) e^{j 2π f_c t}]
     = (1/2) G(f_m) cos(2π(f_c + f_m/2)t). (4.173)
Thus, the envelope of the output is given by

|ỹ(t)| = (1/2) |G(f_m)|. (4.174)
2
Hence, the envelope of the output signal is proportional to the magnitude
response of g(t) evaluated at f = f m .

20. (Haykin 1983) Figure 4.16 shows the block diagram of the transmitter and
receiver for stereophonic FM. The input signals l(t) and r (t) represent left-
hand and right-hand audio signals. The difference signal x1 (t) = l(t) − r (t) is
DSB-SC modulated as shown in the figure, with f c = 25 kHz. The DSB-SC
wave, x2 (t) = l(t) + r (t) and the pilot carrier are summed to produce the com-
posite signal m(t). The composite signal m(t) is used to frequency modulate
a carrier and the resulting FM signal is transmitted. Assume that f 2 = 20 kHz,
f 1 = 200 Hz.
(a) Sketch the spectrum of m(t). Label all the important points on the x- and
y-axes. The spectrums of l(t) and r (t) are shown in Fig. 4.16.
(b) Assuming that the frequency deviation of the FM signal is 90 kHz, find the
transmission bandwidth of the FM signal using Carson’s rule.
(c) In the receiver block diagram determine the signal y(t), the input-output
characteristics of the device, and the specifications of filter1, filter2, filter3,
and filter4 in the frequency domain. Assume ideal filter characteristics.

• Solution: Note that

m(t) = x2 (t) + cos(2π f c t) + x1 (t) cos(4π f c t). (4.175)

The spectrum of m(t) is depicted in Fig. 4.17.


From Carson’s rule, the transmission bandwidth of the FM signal is
B_T = 2(Δf + W), (4.176)

where W = 2 f_c + f_2 = 70 kHz is the maximum frequency content in m(t) and Δf = 90 kHz is the frequency deviation. Therefore B_T = 320 kHz.

Fig. 4.16 Transmitter and receiver for stereophonic FM

Fig. 4.17 Spectrum of m(t): the baseband component X_2(f) over |f| ≤ f_2 , pilot impulses of weight 0.5 at ±f_c , and the DSB-SC component of x_1(t) over 2 f_c − f_2 ≤ |f| ≤ 2 f_c + f_2

Fig. 4.18 Variation of phase θ(t) (radians) versus time for an FM signal
At the receiver side y(t) = m(t). The device is a squarer. Filter1 is an LPF with
unity gain and bandwidth [− f 2 , f 2 ]. Filter2 is a BPF with unity gain, center
frequency f c , and bandwidth less than 5 kHz on either side of f c . Filter3 is a
BPF with a gain of 4 with center frequency 2 f c and any suitable bandwidth,
such that the dc component is eliminated. Filter4 is an LPF with unity gain
and bandwidth [− f 2 , f 2 ].

21. Consider an FM signal given by

s(t) = A sin(θ(t)),

where θ(t) is a periodic waveform as shown in Fig. 4.18. The message signal
has zero mean. Compute the carrier frequency.
• Solution: We know that the instantaneous frequency is given by

f(t) = (1/(2π)) dθ/dt = f_c + k_f m(t), (4.177)

which is plotted in Fig. 4.19b. Observe that f (t) is also periodic. Since m(t)
has zero mean, the mean value of f (t) is f c , where

f_c = [(25 × 2 + 50 × 1)/3] × 10^4
    = (100/3) × 10^4 Hz. (4.178)
22. A 10 kHz periodic square wave g p (t) is applied to a first-order R L lowpass filter
as shown in Fig. 4.20. It is given that R/(2πL) = 10 kHz. The output signal m(t)
is FM modulated with frequency deviation equal to 75 kHz.
Determine the bandwidth of the FM signal s(t), using Carson’s rule. Ignore those harmonic terms in m(t) whose (absolute value of the) amplitude is less than 1% of the fundamental.

Fig. 4.19 (a) Phase θ(t) (radians) versus time and (b) instantaneous frequency f(t) (Hz) versus time for the FM signal: f(t) takes the value 25 × 10^4 Hz for 2 μs and 50 × 10^4 Hz for 1 μs in every 3 μs period

Fig. 4.20 Periodic square wave g_p(t) (levels ±B, 1/T = 10 kHz) applied to an RL-lowpass filter; the filter output m(t) drives the FM modulator to produce s(t)
• Solution: We know that

g_p(t) = (4B/π) Σ_{m=0}^{∞} [(−1)^m/(2m + 1)] cos[2π(2m + 1) f_0 t], (4.179)

where f 0 = 1/T = 10 kHz. The transfer function of the filter is

H(ω) = R/(R + jωL) = 1/(1 + jω/ω_0), (4.180)

where ω0 = R/L = 2π f 0 (given) is the −3 dB frequency in rad/s. Therefore



m(t) = (4B/π) Σ_{m=0}^{∞} [(−1)^m/(2m + 1)] A_m cos[2π(2m + 1) f_0 t + θ_m], (4.181)

where

A_m = |H((2m + 1)ω_0)| = 1/√(1 + (2m + 1)^2) (4.182)

and

θm = − tan−1 (2m + 1). (4.183)

We need to ignore those harmonics that satisfy

(4B/π) [1/(2m + 1)] [1/√(1 + (2m + 1)^2)] < 0.01 (4B/π)(1/√2)
⇒ [1/(2m + 1)] [1/√(1 + (2m + 1)^2)] < 0.00707 for m > 0. (4.184)

We find that m ≥ 6 satisfies the inequality in (4.184). Therefore, the (one-


sided) bandwidth of m(t) is (2 × 5 + 1) f 0 = 11 f 0 = 110 kHz.
Hence, the bandwidth of s(t) using Carson’s rule is

B = 2(75 + 110) = 370 kHz. (4.185)
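The harmonic-selection step and Carson's rule can be checked with the short sketch below (illustrative only; no values beyond those in the problem are assumed).

# Sketch: find which odd harmonics of m(t) exceed 1% of the fundamental, then apply Carson's rule.
import numpy as np

f0 = 10e3                 # fundamental of the square wave, Hz
delta_f = 75e3            # FM frequency deviation, Hz

m = np.arange(0, 10)
# relative harmonic amplitude (the common factor 4B/pi cancels): 1/((2m+1) sqrt(1+(2m+1)^2))
rel = 1.0 / ((2*m + 1) * np.sqrt(1.0 + (2*m + 1)**2))
keep = rel >= 0.01 * rel[0]          # retain harmonics >= 1% of the fundamental

highest = (2*m[keep][-1] + 1) * f0   # highest retained harmonic: 11*f0 = 110 kHz
BT = 2 * (delta_f + highest)         # Carson's rule: 370 kHz
print(highest, BT)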

23. A message signal m(t) = Ac cos(2π f m t) with f m = 5 kHz is applied to a fre-


quency modulator. The resulting FM signal has a frequency deviation of 10 kHz.
This FM signal is applied to two frequency multipliers in cascade. The first
frequency multiplier has a multiplication factor of 2 and the second frequency
multiplier has a multiplication factor of 3. Determine the frequency deviation and
the modulation index of the FM signal obtained at the second multiplier output.
What is the frequency separation between two consecutive spectral components
in the spectrum of the output FM signal?
• Solution: Let us denote the instantaneous frequency of the input FM signal
by

f_i(t) = f_c + Δf cos(2π f_m t), (4.186)

where Δf = 10 kHz. The overall multiplication factor of the system is 6,


hence the instantaneous frequency of the output FM signal is
f_o(t) = 6 f_c + 6Δf cos(2π f_m t). (4.187)

Thus, the frequency deviation of the output FM signal is 60 kHz. The modulation index is 60/5 = 12. The frequency separation in the spectrum of the output FM signal is unchanged at 5 kHz.

Fig. 4.21 Wideband frequency modulator using the indirect method: a narrowband FM modulator (oscillator f_1 = 0.1 MHz) is followed by a frequency multiplier n_1 , a mixer with a 9.5 MHz oscillator, and a frequency multiplier n_2
24. (Haykin 1983) Figure 4.21 shows the block diagram of a wideband frequency
modulator using the indirect method. Note that a mixer is essentially a multiplier,
followed by a bandpass filter which allows only the difference frequency com-
ponent. This transmitter is used to transmit audio signals in the range 100 Hz to
15 kHz. The narrowband frequency modulator is supplied with a carrier of fre-
quency f 1 = 0.1 MHz. The second oscillator supplies a frequency of 9.5 MHz.
The system specifications are as follows: carrier frequency at the transmitter
output f_c = 100 MHz with frequency deviation Δf = 75 kHz. Maximum mod-
ulation index at the output of the narrowband frequency modulator is 0.2 rad.
(a) Calculate the frequency multiplication ratios n 1 and n 2 .
(b) Specify the value of the carrier frequency at the output of the first frequency
multiplier.
• Solution: It is given that the frequency deviation of the output FM wave is
equal to 75 kHz. Note that for a tone-modulated FM signal, the frequency
deviation is related to the frequency of the modulating wave as

Δf = β f_m . (4.188)

The frequency deviation at the output of the narrowband FM modulator is


fixed. Thus, the lowest frequency component in the message (100 Hz) produces the maximum modulation index β = 0.2. The frequency deviation at the output of the narrowband FM modulator is therefore

Δf_inp = 0.2 × 0.1 kHz = 0.02 kHz. (4.189)

However, the required frequency deviation of the output FM signal is 75 kHz.


Hence, we need to have

n_1 n_2 = 75/0.02 = 3750. (4.190)

The carrier frequency at the output of the first multiplier is 0.1n 1 MHz. The
carrier frequency at the output of the second multiplier is

n 2 (9.5 − 0.1n 1 ) = 100 MHz. (4.191)

Solving for n 1 and n 2 we get

n 1 = 75
n 2 = 50. (4.192)

The carrier frequency at the output of the first frequency multiplier is

n 1 f 1 = 7.5 MHz. (4.193)
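The pair (n_1 , n_2) can also be obtained by straightforward substitution, as in the sketch below (illustrative only; it simply reproduces the arithmetic of (4.190)–(4.193)).

# Sketch: solve n1*n2 = 3750 together with n2*(9.5 - 0.1*n1) = 100 MHz for the multiplier ratios.
prod = 75e3 / 0.02e3          # n1*n2 from the deviation requirement: 3750
# substitute n2 = prod/n1 into n2*(9.5 - 0.1*n1) = 100:
#   9.5*prod/n1 - 0.1*prod = 100  =>  n1 = 9.5*prod / (100 + 0.1*prod)
n1 = 9.5 * prod / (100 + 0.1 * prod)
n2 = prod / n1
print(n1, n2, 0.1 * n1)       # 75.0, 50.0, carrier after the first multiplier = 7.5 MHz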

25. (Haykin 1983) The equivalent circuit of the frequency-determining network of


a VCO is shown in Fig. 4.22. Frequency modulation is produced by applying the
modulating signal Vm sin(2π f m t) plus a bias Vb to a varactor diode connected
across the parallel combination of a 200-μH inductor (L) and a 100-pF capacitor (C). The capacitance of the varactor diode is related to the voltage V(t) applied across its terminals by

C_i(t) = 100/√V(t) pF. (4.194)

Fig. 4.22 Equivalent circuit of the frequency-determining network of a VCO: L in parallel with C and the varactor capacitance C_i(t)

The instantaneous frequency of oscillation is given by

f_i(t) = 1/(2π √(L(C + C_i(t)))). (4.195)

The unmodulated frequency of oscillation is 1 MHz. The VCO output is applied


to a frequency multiplier to produce an FM signal with carrier frequency of
64 MHz and a modulation index of 5.
(a) Determine the magnitude of the bias voltage Vb .
(b) Find the amplitude Vm of the modulating wave, given that f m = 10 kHz.
Assume that V_b ≫ V_m .
• Solution: The instantaneous frequency of oscillation is given by

f_i(t) = 1/(2π √(L(C + C_i(t)))). (4.196)

The unmodulated frequency of oscillation is given as 1 MHz. Thus

f_c = 1/(2π √(L(C + C_0)))
⇒ C + C_0 = 126.65 pF
⇒ C_0 = 26.651 pF
⇒ 100/√V_b = 26.651
⇒ V_b = 14.078 V. (4.197)

Since the final carrier frequency is 64 MHz, the frequency multiplication factor is 64. Thus, the modulation index at the VCO output is

β = 5/64 = 0.078. (4.198)
Thus, the FM signal at the VCO output is narrowband. The instantaneous
frequency of oscillation at the VCO output is

f_i(t) = 1/(2π √(L(C + 100(V_b + V_m sin(2π f_m t))^{−0.5}))). (4.199)

Since V_b ≫ V_m , the instantaneous frequency can be approximated as

f_i(t) ≈ 1/(2π √(L(C + 100 V_b^{−0.5} (1 − V_m/(2V_b) sin(2π f_m t)))))
       = 1/(2π √(L(C + C_0 (1 − V_m/(2V_b) sin(2π f_m t))))). (4.200)

Hence, the instantaneous frequency becomes

f_i(t) = 1/(2π √(L(C + C_0 − C_0 V_m/(2V_b) sin(2π f_m t))))
       = [1/(2π √(L(C + C_0)))] · 1/√(1 − C_0 V_m/(2V_b (C + C_0)) sin(2π f_m t))
       ≈ f_c [1 + (C_0 V_m/(4V_b (C + C_0))) sin(2π f_m t)]. (4.201)

The modulation index of the narrowband FM signal is given by

β = C_0 V_m f_c / (4V_b f_m (C + C_0)) = 0.078. (4.202)

Substituting f c = 1 MHz, f m = 10 kHz, and Vb = 14.078 V, we get

Vm = 0.2087 V. (4.203)
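The bias and modulating amplitudes can be reproduced numerically; the sketch below is illustrative only and uses just the circuit values given in the problem.

# Sketch: numbers for the varactor-tuned VCO problem (L = 200 uH, C = 100 pF, unmodulated f = 1 MHz).
import numpy as np

L, C = 200e-6, 100e-12
fc, fm = 1e6, 10e3
beta_out, mult = 5.0, 64.0

Ctot = 1.0 / (L * (2*np.pi*fc)**2)            # C + C0 ~ 126.65 pF
C0 = Ctot - C                                 # varactor capacitance at the bias point
Vb = (100e-12 / C0)**2                        # from C0 = 100/sqrt(Vb) pF -> ~14.08 V

beta_vco = beta_out / mult                    # narrowband index at the VCO output
Vm = beta_vco * 4*Vb*fm*(C + C0) / (C0*fc)    # from (4.202) -> ~0.21 V
print(Ctot*1e12, C0*1e12, Vb, Vm)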

26. The equivalent circuit of the frequency-determining network of a VCO is shown


in Fig. 4.23. Frequency modulation is produced by applying the modulating
signal Vm sin(2π f m t) plus a bias Vb to a varactor diode connected across the
parallel combination of a 25-μH inductor (L) and a 200-pF capacitor (C). The
capacitance in the varactor diode is related to the voltage V (t) applied across its
terminals by

C_i(t) = 100/√V(t) pF. (4.204)

Fig. 4.23 Equivalent circuit of the frequency-determining network of a VCO: L in parallel with C and the varactor capacitance C_i(t)

The instantaneous frequency of oscillation is given by

1 1
f i (t) = √ . (4.205)
2π L(C + Ci (t))

The unmodulated frequency of oscillation is 2 MHz. The VCO output is applied


to a frequency multiplier to produce an FM signal with carrier frequency of
128 MHz and a modulation index of 6.
(a) Determine the magnitude of the bias voltage Vb .
(b) Find the amplitude Vm of the modulating wave, given that f m = 20 kHz.
Assume that V_b ≫ V_m .
• Solution: The instantaneous frequency of oscillation is given by

1 1
f i (t) = √ . (4.206)
2π L(C + Ci (t))

The unmodulated frequency of oscillation is given as 2 MHz. Thus

f_c = 1/(2π √(L(C + C_0)))
⇒ C + C_0 = 253.303 pF
⇒ C_0 = 53.303 pF
⇒ 100/√V_b = 53.303
⇒ V_b = 3.5196 V. (4.207)

Since the final carrier frequency is 128 MHz, the frequency multiplication
factor is 128/2 = 64. Thus, the modulation index at the VCO output is

6
β= = 0.09375. (4.208)
64
Thus, the FM signal at the VCO output is narrowband. The instantaneous
frequency of oscillation at the VCO output is

1 1
f i (t) =  . (4.209)
2π L(C + 100(Vb + Vm sin(2π f m t))−0.5 )

Since V_b ≫ V_m , the instantaneous frequency can be approximated as

1 1
f i (t) ≈ 
2π L(C + 100V −0.5 (1 − V /(2V ) sin(2π f t)))
b m b m

1 1
= √ . (4.210)
2π L(C + C0 (1 − Vm /(2Vb ) sin(2π f m t)))

Hence, the instantaneous frequency becomes

1 1
f i (t) = √
2π L(C + C0 − C0 Vm /(2Vb ) sin(2π f m t)))
1 1
= √
2π L(C + C0 )
1

1 − C0 Vm /(2Vb (C + C0 )) sin(2π f m t)
 
C0 Vm
≈ fc 1 + sin(2π f m t) . (4.211)
4Vb (C + C0 )

The modulation index of the narrowband FM signal is given by

C0 Vm f c
β= = 0.09375. (4.212)
4Vb f m (C + C0 )

Substituting f c = 2 MHz, f m = 20 kHz, and Vb = 3.5196 V, we get

Vm = 0.0627 V. (4.213)

27. (Haykin 1983) The FM signal


s(t) = A_c cos[2π f_c t + 2πk_f ∫_{τ=0}^{t} m(τ) dτ] (4.214)

is applied to the system shown in Fig. 4.24. Assume that the resistance R is small
compared to the impedance of C for all significant frequency components of s(t)
and the envelope detector does not load the filter. Determine the resulting signal
at the envelope detector output assuming that k f |m(t)| < f c for all t.
• Solution: The transfer function of the highpass filter is

H(f) = R/(R + 1/(j 2π f C)) ≈ j 2π f RC, (4.215)

provided

R ≪ 1/(2π f C). (4.216)

Fig. 4.24 Frequency demodulation using a highpass filter (series C, shunt R) followed by an envelope detector

Thus, over the range of frequencies in which (4.216) is valid, the highpass
filter acts like an ideal differentiator. Hence, the output of the highpass filter
is given by

x(t) = RC ds(t)/dt
     = −RC A_c [2π f_c + 2πk_f m(t)] sin[2π f_c t + 2πk_f ∫_{τ=0}^{t} m(τ) dτ]. (4.217)

The output of the envelope detector is given by


 
y(t) = 2π RC A_c [f_c + k_f m(t)]. (4.218)

28. Consider the FM signal


s(t) = A_c cos[2π f_c t + 2πk_f ∫_{τ=0}^{t} m(τ) dτ], (4.219)

where k f is 10 kHz/V and

m(t) = 5 cos(ωt) V, (4.220)

where ω = 20,000 rad/s. Compute the frequency deviation and the modulation
index.
• Solution: The total instantaneous frequency is

f i (t) = f c + k f 5 cos(ωt). (4.221)

Therefore, the frequency deviation is

5k f = 50 kHz. (4.222)

The transmitted FM signal is


 
s(t) = A_c cos[2π f_c t + (10πk_f/ω) sin(ωt)]. (4.223)

Hence, the modulation index is

β = 10πk_f/ω = 5π. (4.224)
29. (Haykin 1983) Suppose that the received signal in an FM system contains some
residual amplitude modulation as shown by

s(t) = a(t) cos(2π f c t + φ(t)), (4.225)

where a(t) > 0 and f c is the carrier frequency. The phase φ(t) is related to the
modulating signal m(t) by
φ(t) = 2πk_f ∫_{τ=0}^{t} m(τ) dτ. (4.226)

Assume that s(t) is restricted to a frequency band of width BT centered at f c ,


where BT is the transmission bandwidth of s(t) in the absence of amplitude mod-
ulation, that is, when a(t) = Ac . Also assume that a(t) varies slowly compared to
φ(t) and f c > k f m(t). Compute the output of the ideal frequency discriminator
(ideal differentiator followed by an envelope detector).
• Solution: The output of the ideal differentiator is

s'(t) = ds(t)/dt = a'(t) cos(2π f_c t + φ(t)) − a(t) sin(2π f_c t + φ(t))(2π f_c + φ'(t)). (4.227)

The envelope of s'(t) is given by

y(t) = √[(a'(t))^2 + a^2(t)(2π f_c + φ'(t))^2]. (4.228)

Since it is given that a(t) varies slowly compared to φ(t), we have

|a'(t)| ≪ |φ'(t)|, (4.229)

so s'(t) can be approximated by

s'(t) ≈ −a(t) sin(2π f_c t + φ(t))(2π f_c + 2πk_f m(t)). (4.230)

The output of the envelope detector is



y(t) = 2πa(t)( f c + k f m(t)), (4.231)

where we have assumed that [ f c + k f m(t)] > 0. Thus, we see that there is
distortion due to a(t), at the envelope detector output.
30. (Haykin 1983) Let

s(t) = a(t) cos(2π f c t + φ(t)), (4.232)

where a(t) > 0, be applied to a hard limiter whose output z(t) is defined by

z(t) = sgn[s(t)]
     = +1 for s(t) > 0
     = −1 for s(t) < 0. (4.233)

(a) Show that z(t) can be expressed in the form of a Fourier series as follows:

z(t) = (4/π) Σ_{n=0}^{∞} [(−1)^n/(2n + 1)] cos[2π f_c t (2n + 1) + (2n + 1)φ(t)]. (4.234)

(b) Compute the output when z(t) is applied to an ideal bandpass filter with cen-
ter frequency f c and bandwidth BT , where BT is the transmission bandwidth
of s(t) in the absence of amplitude modulation. Assume that f_c ≫ B_T .

• Solution: Since a(t) > 0, z(t) can be written as

z(t) = sgn[cos(2π f c t + φ(t))]. (4.235)

Let

α(t) = 2π f c t + φ(t). (4.236)

Then z(t) can be rewritten as

z(α(t)) = sgn[cos(α(t))]. (4.237)

Note that z(α(t)) is periodic with respect to α(t), that is,

z(α(t) + 2nπ) = z(α(t)). (4.238)

This is illustrated in Fig. 4.25. Hence, z(t) can be written in the form of a Fourier
series with respect to α(t) as follows:
Fig. 4.25 z(α(t)) is periodic with respect to α(t)

z(α(t)) = 2 Σ_{n=1}^{∞} a_n cos(nα(t)), (4.239)

where
a_n = (1/(2π)) ∫_{α(t)=−π}^{π} z(α(t)) cos(nα(t)) dα(t)
    = (1/π) ∫_{α(t)=0}^{π} z(α(t)) cos(nα(t)) dα(t). (4.240)

For convenience denote α(t) = x. Thus



a_n = (1/π) ∫_{x=0}^{π} z(x) cos(nx) dx
    = (1/π) ∫_{x=0}^{π/2} cos(nx) dx − (1/π) ∫_{x=π/2}^{π} cos(nx) dx
    = 0 for n = 2m
    = 2(−1)^m/(π(2m + 1)) for n = 2m + 1. (4.241)

Thus
z(x) = Σ_{m=0}^{∞} [4(−1)^m/(π(2m + 1))] cos((2m + 1)x)
⇒ z(α(t)) = Σ_{m=0}^{∞} [4(−1)^m/(π(2m + 1))] cos((2m + 1)α(t))
          = Σ_{m=0}^{∞} [4(−1)^m/(π(2m + 1))] cos((2m + 1)2π f_c t + (2m + 1)φ(t)). (4.242)

Observe that the mth harmonic has a carrier frequency at (2m + 1) f c and band-
width (2m + 1)BT , where BT is the bandwidth of s(t) with amplitude modula-
tion removed, that is, a(t) = Ac . If the fundamental component at m = 0 is to
be extracted from z(α(t)), we require

f_c + B_T/2 < (2m + 1) f_c − (2m + 1) B_T/2
⇒ (B_T/2)(1 + 1/m) < f_c for m > 0. (4.243)

The left-hand side of the second equation in (4.243) is maximum for m = 1.


Thus, in the worst case, we require

f c > BT . (4.244)

Now if z(α(t)) is passed through an ideal bandpass filter with center frequency
f c and bandwidth BT , the output is (assuming (4.244) is satisfied)

y(t) = (4/π) cos(2π f_c t + φ(t)), (4.245)
which has no amplitude modulation.
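The Fourier coefficients in (4.242) can be verified numerically; the sketch below is illustrative only and checks that the expansion of sgn(cos x) has coefficients 4(−1)^m/(π(2m + 1)) on the odd harmonics and zero on the even ones.

# Sketch: numerically check the Fourier-series coefficients of z(x) = sgn(cos(x)).
import numpy as np

N = 200000
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
z = np.sign(np.cos(x))

for n in range(1, 6):
    cn = np.sum(z * np.cos(n * x)) * dx / np.pi   # coefficient of cos(nx)
    print(n, round(cn, 4))   # n=1: ~1.2732 = 4/pi, n=2,4: ~0, n=3: ~-0.4244, n=5: ~0.2546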
31. The message signal
m(t) = A^4 [sin(tB)/t]^4 (4.246)

is applied to an FM modulator. Compute the two-sided bandwidth (on both sides


of the carrier) of the FM signal using Carson’s rule. Assume frequency sensitivity
of the modulator to be k f .
• Solution: According to Carson’s rule, the two-sided bandwidth of the FM
signal is

B_T = 2(Δf + W), (4.247)

where

Δf = max k_f |m(t)| (4.248)

is the frequency deviation and W is the one-sided bandwidth of the message.


Clearly

Δf = k_f A^4 B^4 (4.249)

since the maximum value of m(t) is A4 B 4 , which occurs at t = 0. Next, we


make use of the Fourier transform pair:

A sinc(tB) ⇌ (A/B) rect(f/B). (4.250)
Time scaling by 1/π, we obtain


A sinc(tB/π) ⇌ (Aπ/B) rect(fπ/B)
⇒ A sin(tB)/(tB) ⇌ (Aπ/B) rect(fπ/B)
⇒ A sin(tB)/t ⇌ Aπ rect(fπ/B), (4.251)

which has a one-sided bandwidth equal to B/(2π). Therefore, m(t) has a


one-sided bandwidth of W = 2B/π. Hence

B_T = 2(k_f A^4 B^4 + 2B/π). (4.252)

32. Explain the principle of operation of the PLL demodulator for FM signals. Draw
the block diagram and clearly state the signal model and assumptions.
• Solution: Consider the block diagram in Fig. 4.26. Here s(t) denotes the input
FM signal (bandlimited to [ f c − BT /2, f c + BT /2]; f c is the carrier fre-
quency, BT is the bandwidth of s(t)) given by

s(t) = Ac sin(2π f c t + φ1 (t)) V, (4.253)

where
φ_1(t) = 2πk_f ∫_{τ=−∞}^{t} m(τ) dτ, (4.254)

where k f is the frequency sensitivity of the FM modulator in Hz/V and m(·) is


the message signal bandlimited to [−W, W ]. The voltage controlled oscillator
(VCO) output is
Fig. 4.26 Block diagram of the phase locked loop (PLL) demodulator for FM signals: the input s(t) and the VCO output r(t) are multiplied, lowpass filtered to give e(t), and passed through the loop filter h(t) to give the control signal v(t)

r (t) = Av cos(2π f c t + φ2 (t)) V, (4.255)

where
φ_2(t) = 2πk_v ∫_{τ=−∞}^{t} v(τ) dτ, (4.256)

where kv is the frequency sensitivity of the VCO in Hz/V and v(·) is the control
signal at the VCO input. The lowpass filter eliminates the sum frequency
component at the multiplier output and allows only the difference frequency
component. Hence

e(t) = Ac Av km sin(φ1 (t) − φ2 (t)) V, (4.257)


 
where k_m (V^{−1}) is the multiplier gain (a factor of 1/2 is absorbed in k_m ). The
loop filter output is
v(t) = ∫_{τ=−∞}^{∞} e(τ) h(t − τ) dτ V. (4.258)

Note that h(t) is dimensionless. Let

φ_e(t) = φ_1(t) − φ_2(t)
⇒ φ_e(t) = φ_1(t) − 2πk_v ∫_{τ=−∞}^{t} v(τ) dτ
⇒ dφ_e(t)/dt = dφ_1(t)/dt − 2πk_v v(t)
⇒ dφ_e(t)/dt = dφ_1(t)/dt − 2πK_0 ∫_{τ=−∞}^{∞} sin(φ_e(τ)) h(t − τ) dτ, (4.259)

where

K 0 = km kv Ac Av Hz. (4.260)

Now, under steady state

φ_e(t) ≪ 1 rad (4.261)

for all t, therefore

sin(φe (t)) ≈ φe (t) (4.262)

for all t. Hence, the last equation in (4.259) can be written as


dφ_e(t)/dt = dφ_1(t)/dt − 2πK_0 ∫_{τ=−∞}^{∞} φ_e(τ) h(t − τ) dτ. (4.263)

Taking the Fourier transform of both sides, we get

j2π f Φ_e(f) = j2π f Φ_1(f) − 2πK_0 Φ_e(f) H(f)
⇒ Φ_e(f) = j f Φ_1(f)/(j f + K_0 H(f))
⇒ Φ_e(f) = Φ_1(f)/(1 + K_0 H(f)/(j f)). (4.264)

Now φ1 (t) is bandlimited to [−W, W ]. It will be shown later that φ2 (t) is also
bandlimited to [−W, W ]. Therefore, both φe (t) and h(t) are bandlimited to
[−W, W ]. If
 
|K_0 H(f)/f| ≫ 1 for |f| < W (4.265)

then the last equation in (4.264) reduces to

Φ_e(f) = j f Φ_1(f)/(K_0 H(f)). (4.266)

Hence, the Fourier transform of (4.257), after applying (4.262), becomes

E(f) = A_c A_v k_m Φ_e(f) (4.267)

and
V(f) = E(f) H(f)
     = A_c A_v k_m j f Φ_1(f)/K_0
     = (j f/k_v) Φ_1(f). (4.268)

The inverse Fourier transform of (4.268) gives

v(t) = (1/(2πk_v)) dφ_1(t)/dt
     = (k_f/k_v) m(t). (4.269)

From (4.269), it is clear that v(t) and φ2 (t) are bandlimited to [−W, W ].
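The loop behaviour described above can be illustrated with a simple baseband simulation. The sketch below is not part of the original text: it models the mixer plus LPF by their baseband equivalent e(t) = A_c A_v k_m sin(φ_1 − φ_2), takes h(t) = δ(t) (a first-order loop), and uses sampling rate and sensitivity values chosen purely for illustration.

# Sketch: baseband simulation of the PLL FM demodulator of Fig. 4.26 with h(t) = delta(t).
# All numerical values are illustrative, not from the text.
import numpy as np

fs = 1.0e6                              # sampling rate, Hz
t = np.arange(20000) / fs               # 20 ms
fm, kf, kv = 1e3, 1e3, 10e3             # message freq (Hz), FM and VCO sensitivities (Hz/V)
km, Ac, Av = 1.0, 1.0, 1.0
K0 = km * kv * Ac * Av                  # loop gain, Hz

m = np.cos(2 * np.pi * fm * t)                    # message
phi1 = 2 * np.pi * kf * np.cumsum(m) / fs         # FM phase

phi2, v = 0.0, np.zeros_like(t)
for n in range(len(t)):
    e = Ac * Av * km * np.sin(phi1[n] - phi2)     # multiplier + LPF (baseband model)
    v[n] = e                                      # loop filter h(t) = delta(t)
    phi2 += 2 * np.pi * kv * v[n] / fs            # VCO phase accumulation

# After lock, v(t) should track (kf/kv) m(t) = 0.1 m(t)
print(np.max(np.abs(v[10000:] - (kf / kv) * m[10000:])))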

References

Simon Haykin. Communication Systems. Wiley Eastern, second edition, 1983.


J. G. Proakis and M. Salehi. Fundamentals of Communication Systems. Pearson Education Inc.,
2005.
Chapter 5
Noise in Analog Modulation

1. A DSB-SC signal of the form

S(t) = A_c M(t) cos(2π f_c t + Θ) (5.1)

is transmitted over a channel that adds additive Gaussian noise with psd shown
in Fig. 5.1. The message spectrum extends over [−4, 4] kHz and the carrier
frequency is 200 kHz. Assuming that the average power of S(t) is 10 W and
coherent detection, determine the output SNR of the receiver.
Assume that the IF filter is ideal with unity gain in the passband and zero for
other frequencies, and the narrowband representation of noise at the IF filter
output is

N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t), (5.2)

where N_c(t) and N_s(t) are both independent of Θ.


• Solution: The received signal at the output of the IF filter is

X(t) = A_c M(t) cos(2π f_c t + Θ) + N(t)
     = A_c M(t) cos(2π f_c t + Θ) + N_c(t) cos(2π f_c t) − N_s(t) sin(2π f_c t), (5.3)

where M(t) denotes the random process corresponding to the message, Θ is uniformly distributed in [0, 2π), and N(t) is a narrowband noise process. The psd of N(t) is illustrated in Fig. 5.2. If the local oscillator (LO) at the receiver supplies 2 cos(2π f_c t + Θ), then the LPF output is

Y(t) = A_c M(t) + N_c(t) cos(Θ) + N_s(t) sin(Θ). (5.4)

Fig. 5.1 Noise psd S_W(f) (W/Hz) versus f (kHz); the psd reaches 10^{−6} W/Hz at f = ±400 kHz

Fig. 5.2 Psd of narrowband noise at the IF filter output: S_N(f) varies between 0.49 × 10^{−6} and 0.51 × 10^{−6} W/Hz over 196 ≤ |f| ≤ 204 kHz

Hence the noise power at the LPF output is


     
E[N_c^2(t) cos^2(Θ)] + E[N_s^2(t) sin^2(Θ)] = E[N^2(t)]
                                            = ∫_{f=−∞}^{∞} S_N(f) df
                                            = 8000 × 10^{−6} W. (5.5)

Note that

E[N_c(t)N_s(t)] = 0
E[sin(Θ) cos(Θ)] = 0 (5.6)

where we have used the fact that the cross spectral density S Nc Ns ( f ) is an odd
function, therefore R Nc Ns (0) = 0.
The power of the modulated message signal is

A2c P/2 = 10 W. (5.7)

Thus, the power of the demodulated message in (5.4) is A2c P = 20 W. Hence

SNR O = 2.5 × 103 ≡ 33.98 dB. (5.8)
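The numbers can be cross-checked as in the sketch below (illustrative only); it assumes, as the values in Figs. 5.1–5.2 suggest, that S_W(f) rises linearly to 10^{−6} W/Hz at |f| = 400 kHz, which reproduces the 8000 × 10^{−6} W noise power of (5.5).

# Sketch: output SNR for the coherent DSB-SC receiver of Problem 1 (assumed linear noise psd).
import numpy as np

fc, W = 200e3, 4e3
f = np.linspace(fc - W, fc + W, 10001)
Sw = 1e-6 * f / 400e3                      # inferred noise psd over the IF passband, W/Hz
noise_power = 2 * Sw.mean() * (2 * W)      # both sidebands: ~8000e-6 W, matching (5.5)

sig_power = 2 * 10.0                       # demodulated message power Ac^2 * P = 20 W
snr_o = sig_power / noise_power
print(noise_power, 10 * np.log10(snr_o))   # ~0.008 W, ~33.98 dB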


Fig. 5.3 DSB-SC demodulator having a phase error Θ(t): the input A_c M(t) cos(2π f_c t) + N(t) is multiplied by 2 cos(2π f_c t + Θ(t)) and lowpass filtered to give Y(t)

2. (Haykin 1983) In a DSB-SC receiver, the sinusoidal wave generated by the


local oscillator suffers from a phase error Θ(t) with respect to the input carrier wave cos(2π f_c t), as illustrated in Fig. 5.3. Assuming that Θ(t) is a zero-mean Gaussian process of variance σ_Θ^2, and that most of the time |Θ(t)| is small compared to unity, find the mean squared error at the receiver output for DSB-SC modulation.
The mean squared error is defined as the expected value of the squared difference between the receiver output for Θ(t) ≠ 0 and the message signal component of the receiver output when Θ(t) = 0.
Assume that the message M(t), the carrier phase Θ(t), and noise N(t) are
statistically independent of each other. Also assume that N (t) is a zero-mean
narrowband noise process with psd N0 /2 extending over f c − W ≤ | f | ≤ f c +
W and having the representation:

N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t). (5.9)

The psd of M(t) extends over [−W, W ] with power P. The LPF is ideal with
unity gain in [−W, W ].
• Solution: The output of the multiplier is

V(t) = 2 [A_c M(t) cos(2π f_c t) + N(t)] cos(2π f_c t + Θ(t)), (5.10)

where N (t) is narrowband noise. The output of the lowpass filter is

Y(t) = A_c M(t) cos(Θ(t)) + N_c(t) cos(Θ(t)) + N_s(t) sin(Θ(t)). (5.11)

When Θ(t) = 0 the signal component is

Y0 (t) = Ac M(t). (5.12)

Thus, the mean squared error is


 
E[(Y_0(t) − Y(t))^2]
= E[(A_c M(t)(1 − cos(Θ(t))) + N_c(t) cos(Θ(t)) + N_s(t) sin(Θ(t)))^2]
= E[A_c^2 M^2(t)(1 − cos(Θ(t)))^2] + E[N_c^2(t) cos^2(Θ(t))] + E[N_s^2(t) sin^2(Θ(t))]. (5.13)

We now use the following relations (assuming |Θ(t)| ≪ 1 for all t):

(1 − cos(Θ(t)))^2 ≈ Θ^4(t)/4
E[N_c^2(t)] = 2N_0 W
E[N_s^2(t)] = 2N_0 W
E[M^2(t)] = P. (5.14)

Thus
 
E[(Y_0(t) − Y(t))^2] = (A_c^2 P/4) E[Θ^4(t)] + 2N_0 W E[cos^2(Θ(t)) + sin^2(Θ(t))]
                     = 3A_c^2 P σ_Θ^4/4 + 2N_0 W, (5.15)
where we have used the fact that if X is a zero-mean Gaussian random variable with variance σ^2, then

E[X^{2n}] = 1 × 3 × · · · × (2n − 1) σ^{2n}. (5.16)
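The Gaussian fourth-moment identity E[Θ^4] = 3σ_Θ^4 and the resulting mean squared error can be checked by Monte Carlo simulation; the sketch below uses illustrative parameter values of my own choosing.

# Sketch: Monte Carlo check of E[Theta^4] = 3*sigma^4 and of the MSE formula (5.15).
import numpy as np

rng = np.random.default_rng(0)
sigma, Ac, P, N0W = 0.1, 2.0, 1.0, 0.01     # sigma_Theta, carrier amplitude, message power, N0*W

theta = rng.normal(0.0, sigma, 1_000_000)
print(np.mean(theta**4), 3 * sigma**4)       # both ~3e-4

mse = 3 * Ac**2 * P * sigma**4 / 4 + 2 * N0W
print(mse)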

3. (Haykin 1983) Let the message M(t) be transmitted using SSB modulation. The
psd of M(t) is
S_M(f) = a|f|/W for |f| < W
       = 0 elsewhere, (5.17)

where a and W are constants. Let the transmitted signal be of the form:

S(t) = Ac M(t) cos(2π f c t + θ ) + Ac M̂(t) sin(2π f c t + θ ), (5.18)

where θ is a uniformly distributed random variable in [0, 2π ) and independent


of M(t). White Gaussian noise of zero mean and psd N0 /2 is added to the SSB
signal at the receiver input. Compute SNR O assuming coherent detection and
an appropriate unity gain IF filter in the receiver front-end.

• Solution: The average message power is


 W
P= SM ( f ) d f
f =−W
= aW. (5.19)

The received SSB signal at the output of the IF filter is given by

x(t) = Ac M(t) cos(2π f c t + θ ) + Ac M̂(t) sin(2π f c t + θ )


+Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t). (5.20)

Assuming coherent demodulation by 2 cos(2π f c t + θ ), the LPF output is given


by

Y (t) = Ac M(t) + Nc (t) cos(θ ) + Ns (t) sin(θ ). (5.21)

The noise power is


     
E (Nc (t) cos(θ ) + Ns (t) sin(θ ))2 = E Nc2 (t) E cos2 (θ )
   
= +E Ns2 (t) E sin2 (θ )
 
= E N 2 (t)
= N0 W, (5.22)

since
     
E Nc2 (t) = E Ns2 (t) = E N 2 (t)
= N0 W
 2 
E cos (θ ) + sin (θ ) = 1.
2
(5.23)

Therefore, the output SNR is

SNR_O = A_c^2 aW/(N_0 W) = a A_c^2/N_0. (5.24)

4. An unmodulated carrier of amplitude Ac and frequency f c and bandlimited white


noise are summed and passed through an ideal envelope detector. Assume that
the noise psd to be of height N0 /2 and bandwidth 2W centered at f c . Determine
the output SNR when the input carrier-to-noise ratio is high.

• Solution: The input to the envelope detector is

x(t) = Ac cos(2π f c t) + n(t)


= Ac cos(2π f c t)
+n c (t) cos(2π f c t) − n s (t) sin(2π f c t). (5.25)

The output of the envelope detector is



y(t) = √[(A_c + n_c(t))^2 + n_s^2(t)]. (5.26)

It is given that the carrier-to-noise ratio is high. Hence

y(t) ≈ Ac + n c (t). (5.27)

The output signal power is A2c and the output noise power is 2N0 W . Hence

A2c
SNR O = . (5.28)
2N0 W

5. (Haykin 1983) A frequency division multiplexing (FDM) system uses SSB mod-
ulation to combine 12 independent voice channels and then uses frequency mod-
ulation to transmit the composite signal. Each voice signal has an average power
P and occupies the frequency band [−4, 4] kHz. Only the lower sideband is
transmitted. The modulated voice signals used for the first stage of modulation
are defined by

Sk (t) = Ak Mk (t) cos(2π k f 0 t + θ ) + Ak M̂k (t) sin(2π k f 0 t + θ ), (5.29)

for 1 ≤ k ≤ 12, where f 0 = 4 kHz. Note that E[Mk2 (t)] = P. The received sig-
nal consists of the transmitted FM signal plus zero-mean white Gaussian noise
of psd N0 /2.
Assume that the output of the FM receiver is given by

Y (t) = k f S(t) + No (t), (5.30)

where


12
S(t) = Sk (t). (5.31)
k=1

The psd of No (t) is



S_{N_o}(f) = N_0 f^2/A_c^2 for |f| < 48 kHz
           = 0 otherwise, (5.32)

where Ac is the amplitude of the transmitted FM signal.


Find the relationship between the subcarrier amplitudes Ak so that the modulated
voice signals have equal SNRs at the FM receiver output.

• Solution: The power in the kth voice signal is


 
E Sk2 (t) = A2k P. (5.33)

The overall signal at the output of the first stage of modulation is given by


12
S(t) = Sk (t). (5.34)
k=1

The bandwidth of S(t) extends over [−48, 48] kHz. The signal S(t) is given
as input to the second stage, which is a frequency modulator.
The output of the FM receiver is given by

Y (t) = k f S(t) + No (t). (5.35)

The psd of No (t) is



N0 f 2 /A2c for | f | < 48 kHz
S No ( f ) = (5.36)
0 otherwise,

where Ac is the amplitude of the transmitted FM signal. The noise power in the
kth received voice band is
P_{N_k} = 2 ∫_{(k−1)B}^{kB} S_{N_o}(f) df
        = (2N_0 B^3/(3A_c^2)) (3k^2 − 3k + 1), (5.37)

where B = 4 kHz. The power in the kth received voice signal is

Pk = k 2f A2k P. (5.38)

The output SNR for the kth SSB modulated voice signal is

SNR_{O,k} = 3k_f^2 A_k^2 P A_c^2/(2N_0 B^3 (3k^2 − 3k + 1)). (5.39)

We require SNR O, k to be independent of k. Hence the required condition on


Ak is

A2k = C(3k 2 − 3k + 1), (5.40)

where C is a constant.
Fig. 5.4 Multiplying a random process with a sine wave: X(t) is multiplied by 2 cos(2π f_c t + θ) to give Y(t), which is passed through the lowpass filter H(f) to give Z(t)

6. Consider the system shown in Fig. 5.4. Here, H ( f ) is an ideal LPF with unity
gain in the band [−W, W ]. Carefully follow the two procedures outlined below:
(a) Let

X (t) = M(t) cos(2π f c t + θ ), (5.41)

where θ is a uniformly distributed random variable in [0, 2π ), M(t) is a


random process with psd S M ( f ) in the range [−W, W ]. Assume that M(t)
and θ are independent.
Compute the autocorrelation and the psd of Y (t). Hence compute the psd of
Z (t).
(b) Let X (t) be any random process with autocorrelation R X (τ ) and psd S X ( f ).
Compute the psd of Z (t) in terms of S X ( f ).
Now assuming that X (t) is given by (5.41), compute S X ( f ). Substitute this
expression for S X ( f ) into the expression for the psd of Z (t).
(c) Explain the result for the psd of Z (t) obtained using procedure (a) and pro-
cedure (b).

• Solution: In the case of procedure (a)

Y (t) = M(t)[1 + cos(4π f c t + 2θ )]. (5.42)

Therefore

RY (τ ) = E[Y (t)Y (t − τ )]

1
= R M (τ ) 1 + cos(4π f c τ ) . (5.43)
2

Hence
1
SY ( f ) = S M ( f ) + [S M ( f − 2 f c ) + S M ( f + 2 f c )] . (5.44)
4
The psd of Z (t) is

S Z ( f ) = SY ( f )|H ( f )|2
= S M ( f ). (5.45)

Using procedure (b) we have

Y (t) = 2X (t) cos(2π f c t + θ ). (5.46)

Hence

RY (τ ) = 2R X (τ ) cos(2π f c τ )
⇒ SY ( f ) = S X ( f − f c ) + S X ( f + f c ). (5.47)

Therefore

S Z ( f ) = SY ( f )|H ( f )|2
= [S X ( f − f c ) + S X ( f + f c )]|H ( f )|2 . (5.48)

Now, assuming that

X (t) = M(t) cos(2π f c t + θ ), (5.49)

we get

R_X(τ) = (1/2) R_M(τ) cos(2π f_c τ)
⇒ S_X(f) = (1/4) [S_M(f − f_c) + S_M(f + f_c)]. (5.50)
Substituting the above value of S X ( f ) into (5.48) we get

S_Z(f) = (1/4)[S_M(f − 2f_c) + S_M(f) + S_M(f) + S_M(f + 2f_c)] |H(f)|^2
       = (1/2) S_M(f). (5.51)
The reason for the difference in the psd using the two procedures is that in the
first case we are doing coherent demodulation.
However, in the second case coherent demodulation is not assumed. In fact, the
local oscillator supplying any arbitrary phase α would have given the result in
(5.51).

7. Consider the communication system shown in Fig. 5.5. The message is assumed
to be a random process X (t) given by


X(t) = Σ_{k=−∞}^{∞} S_k p(t − kT − α), (5.52)

Fig. 5.5 Block diagram of a communication system: W(t) is added to X(t) and the sum is passed through an ideal LPF to give Z(t); the 4-ary symbol constellation is {−3, −1, 1, 3}

where 1/T denotes the symbol-rate, α is a random variable uniformly distributed


in [0, T ], and

p(t) = sinc (t/T ). (5.53)

The symbols Sk are drawn from a 4-ary constellation as indicated in Fig. 5.5.
The symbols are independent, that is

E[Sk Sk+n ] = E[Sk ]E[Sk+n ] for n = 0 (5.54)

and equally likely, that is

P(−3) = P(−1) = P(1) = P(3). (5.55)

Also assume that Sk and α are independent. The term W (t) denotes an additive
white noise process with psd N0 /2. The LPF is ideal with unity gain in the
bandwidth [−W, W ]. Assume that the LPF does not distort the message.
(a) Derive the expression for the autocorrelation and the psd of X (t).
(b) Compute the signal-to-noise ratio at the LPF output.
(c) What should be the value of W so that the SNR at the LPF output is maximized
without distorting the message?

• Solution: The autocorrelation of X (t) is given by

R_X(τ) = E[X(t)X(t − τ)]
       = E[Σ_{i=−∞}^{∞} S_i p(t − iT − α) Σ_{j=−∞}^{∞} S_j p(t − τ − jT − α)]
       = Σ_{i=−∞}^{∞} Σ_{j=−∞}^{∞} E[S_i S_j] E[p(t − iT − α) p(t − τ − jT − α)]. (5.56)

Now

E[Si S j ] = 5δ K (i − j), (5.57)

where

δ_K(i − j) = 1 for i = j
           = 0 for i ≠ j. (5.58)

Similarly

E [ p(t − i T − α) p(t − τ − j T − α)]



1 T
= p(t − i T − α) p(t − τ − j T − α) dα. (5.59)
T α=0

Let

t − iT − α = z (5.60)

Thus
∞ ∞
5  
R X (τ ) = δ K (i − j)
T i=−∞ j=−∞
 t−i T
p(z) p(z + i T − τ − j T ) dz
z=t−i T −T
∞  t−i T
5
= p(z) p(z − τ ) dz
T i=−∞ z=t−i T −T
 ∞
5
= p(z) p(z − τ ) dz
T z=−∞
5
= R pp (τ ), (5.61)
T
where R pp (τ ) is the autocorrelation of p(t). The power spectral density of X (t)
is given by

S_X(f) = (5/T) |P(f)|^2
       = 5T rect(fT)
       ⇌ 5 sinc(τ/T). (5.62)
Fig. 5.6 (a) Block diagram of a DSB-SC receiver: R(t) is applied to an IF filter with gain k over f_c − W ≤ |f| ≤ f_c + W, multiplied by A_c cos(2π f_c t + θ), and lowpass filtered (unity gain over [−W, W]) to give V(t); (b) message psd S_M(f) = 5 W/kHz for |f| ≤ 4 kHz

Therefore

R_X(τ) = 5 sinc(τ/T). (5.63)

The power in X (t) is


 ∞
R X (0) = SX ( f ) d f
f =−∞
= 5. (5.64)

The noise power at the LPF output is

N0
PN = × 2W = N0 W. (5.65)
2
Therefore
SNR_O = 5/(N_0 W). (5.66)

Clearly, SNR O is maximized when W = 1/(2T ).

8. Consider the coherent DSB-SC receiver shown in Fig. 5.6a. The signal R(t) is
given by

R(t) = S(t) + W (t)


= Ac M(t) cos(2π f c t + θ ) + W (t), (5.67)

where M(t) has the psd as shown in Fig. 5.6b, θ is a uniformly distributed random
variable in [0, 2π ), and W (t) is a zero-mean random process with psd

SW ( f ) = a f 2 W/kHz for − ∞ < f < ∞. (5.68)

The power of S(t) is 160 W. The representation of narrowband noise is

N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t). (5.69)

Assume that M(t), θ , N (t) are independent of each other.


(a) Compute Ac .
(b) Derive the expression for the signal power at the receiver output.
(c) Derive the expression for the noise power at the receiver output.
(d) Compute the SNR in dB at the receiver output, if a = 10−3 , f c = 8 kHz, and
W = 4 kHz.

• Solution: It is given that the power of S(t) is 160 W. Thus

E[S^2(t)] = A_c^2 E[M^2(t)] E[cos^2(2π f_c t + θ)] = A_c^2 P/2 = 160, (5.70)

where

P = E[M^2(t)] = ∫_{f=−4}^{4} S_M(f) df = 20 W. (5.71)

Therefore

Ac = 4. (5.72)

The IF filter output is

Y (t) = k Ac M(t) cos(2π f c t + θ ) + N (t). (5.73)

The psd of N (t) is

S N ( f ) = ak 2 f 2 for f c − W ≤ | f | ≤ f c + W. (5.74)

The receiver output is

V(t) = k A_c^2 M(t)/2 + (A_c/2) N_c(t) cos(θ) + (A_c/2) N_s(t) sin(θ). (5.75)
Fig. 5.7 DSB-SC demodulator having a phase error Θ(t): the input A_c M(t) cos(2π f_c t) + N(t) is multiplied by 2 cos(2π f_c t + Θ(t)) and lowpass filtered to give Y(t)

The signal power at the receiver output is

k 2 A4c P/4 = 1280k 2 . (5.76)

The noise power at the receiver output is

A2c   2   2      A2  
E Nc (t) E cos (θ ) + E Ns2 (t) E sin2 (θ ) = c E N 2 (t) .
4 4
(5.77)
Now
E[N^2(t)] = 2ak^2 ∫_{f=f_c−W}^{f_c+W} f^2 df
          = (4ak^2/3) (3 f_c^2 W + W^3). (5.78)
Therefore, the noise power at the receiver output becomes

(ak^2 A_c^2/3)(3 f_c^2 W + W^3) = 4.4373 k^2. (5.79)
The SNR at the receiver output in dB is

SNR = 10 log(1280/4.4373) = 24.6 dB. (5.80)
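The arithmetic can be reproduced as in the sketch below (illustrative only; frequencies are in kHz so that the psd constant a, in W/kHz when multiplied by f^2, integrates directly to watts).

# Sketch: numbers for the DSB-SC receiver of Problem 8.
import numpy as np

a, fc, W, P = 1e-3, 8.0, 4.0, 20.0     # psd constant, carrier and message bandwidth (kHz), message power (W)
Ac = np.sqrt(2 * 160.0 / P)            # from Ac^2 * P / 2 = 160 W  ->  Ac = 4

sig = Ac**4 * P / 4                    # output signal power / k^2 = 1280
noise = a * Ac**2 / 3 * (3 * fc**2 * W + W**3)     # output noise power / k^2 ~ 4.437
print(Ac, sig, noise, 10 * np.log10(sig / noise))  # ~24.6 dB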

9. In a DSB-SC receiver, the sinusoidal wave generated by the local oscillator


suffers from a phase error Θ(t) with respect to the input carrier wave cos(2π f_c t), as illustrated in Fig. 5.7. Assuming that Θ(t) is uniformly distributed in [−δ, δ], where δ ≪ 1, find the mean squared error at the receiver output.
The mean squared error is defined as the expected value of the squared difference between the receiver output for Θ(t) ≠ 0 and the message signal component of the receiver output when Θ(t) = 0.
Assume that the message M(t), the carrier phase Θ(t), and noise N(t) are
statistically independent of each other. Also assume that N (t) is a zero-mean
narrowband noise process with psd N0 /2 extending over f c − W ≤ | f | ≤ f c +
W and having the representation:

N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t). (5.81)

The psd of M(t) extends over [−W, W ] with power P. The LPF is ideal with
unity gain in [−W, W ].

• Solution: The output of the multiplier is

V (t) = 2 [Ac M(t) cos(2π f c t) + N (t)] cos(2π f c t + (t)), (5.82)

where N (t) is narrowband noise. The output of the lowpass filter is

Y (t) = Ac M(t) cos((t)) + Nc (t) cos((t)) + Ns (t) sin((t)). (5.83)

When (t) = 0 the signal component is

Y0 (t) = Ac M(t). (5.84)

Thus, the mean squared error is


 
E (Y0 (t) − Y (t))2
= E [(Ac M(t)(1 − cos((t)))

+ Nc (t) cos((t)) + Ns (t) sin((t)))2
   
= E A2c M 2 (t)(1 − cos((t)))2 + E Nc2 (t) cos2 ((t))
 
+E Ns2 (t) sin2 ((t)) . (5.85)

We now use the following relations (assuming |(t)|  1 for all t):

4 (t)
(1 − cos((t)))2 ≈
  4
E Nc2 (t) = 2N0 W
 
E Ns2 (t) = 2N0 W
 
E M 2 (t) = P. (5.86)

Thus
 
E[(Y_0(t) − Y(t))^2] = (A_c^2 P/4) E[Θ^4(t)] + 2N_0 W E[cos^2(Θ(t)) + sin^2(Θ(t))]
                     = A_c^2 P δ^4/20 + 2N_0 W, (5.87)
where we have used the fact that
E[Θ^4(t)] = (1/(2δ)) ∫_{α=−δ}^{δ} α^4 dα
          = δ^4/5. (5.88)
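The uniform fourth moment can be checked by Monte Carlo simulation; the sketch below uses an illustrative value of δ.

# Sketch: Monte Carlo check of E[Theta^4] = delta^4/5 for Theta uniform in [-delta, delta].
import numpy as np

rng = np.random.default_rng(1)
delta = 0.05
theta = rng.uniform(-delta, delta, 1_000_000)
print(np.mean(theta**4), delta**4 / 5)   # both ~1.25e-6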
10. (Haykin 1983) Consider a phase modulation (PM) system, with the received
signal at the output of the IF filter given by

x(t) = Ac cos(2π f c t + k p m(t)) + n(t), (5.89)

where

n(t) = n c (t) cos(2π f c t) − n s (t) sin(2π f c t). (5.90)

Assume that the carrier-to-noise ratio of x(t) to be high, the message power to
be P, the message bandwidth to extend over [−W, W ], and the transmission
bandwidth of the PM signal to be BT . The psd of n(t) is equal to N0 /2 for
f c − BT /2 ≤ | f | ≤ f c + BT /2 and zero elsewhere.
(a) Find the output SNR. Show all the steps.
(b) Determine the figure-of-merit of the system.
(c) If the PM system uses a pair of pre-emphasis and de-emphasis filters defined
by

Hpe ( f ) = 1 + j f / f 0 = 1/Hde ( f ) (5.91)

determine the improvement in the output SNR.

• Solution: Let φ(t) = k p m(t) and

n(t) = r (t) cos(2π f c t + ψ(t)). (5.92)

Then the received signal x(t) can be written as (see Fig. 5.8):

x(t) = a(t) cos(2π f c t + θ (t)), (5.93)

Fig. 5.8 Phasor diagram for x(t): the carrier phasor A_c at angle φ(t) plus the noise phasor r(t) at angle ψ(t) gives the resultant a(t) at angle θ(t)

where
 
θ(t) = φ(t) + tan^{-1}[r(t) sin(ψ(t) − φ(t))/(A_c + r(t) cos(ψ(t) − φ(t)))]. (5.94)

The phase detector consists of the cascade of a hard limiter, bandpass filter,
frequency discriminator, and an integrator. The hard limiter and bandpass filter
remove envelope variations in x(t) to yield

x1 (t) = cos(2π f c t + θ (t)). (5.95)

The frequency discriminator (cascade of differentiator, envelope Detector, and


dc blocking capacitor) output is

x2 (t) = dθ (t)/dt, (5.96)

assuming that

2π f c + dθ (t)/dt > 0, (5.97)

for all t. The output of the integrator is

x_3(t) = θ(t)
       ≈ φ(t) + r(t) sin(ψ(t) − φ(t))/A_c
       = φ(t) + n_s'(t)/A_c, (5.98)

where we have assumed that A_c ≫ r(t) (high carrier-to-noise ratio). It is reasonable to assume that the psd of n_s'(t) is identical to that of n_s(t), which is equal to

S_{N_s}(f) = N_0 for |f| < B_T/2
           = 0 elsewhere. (5.99)

The noise power at the output of the postdetection (baseband) lowpass filter is
2N0 W/A2c . The output SNR is

SNR_O = k_p^2 P A_c^2/(2N_0 W). (5.100)

The average power of the PM signal is A2c /2 and the average noise power in the
message bandwidth is N0 W . Therefore, the channel signal-to-noise ratio is

A2c
SNRC = . (5.101)
2N0 W

Therefore, the figure-of-merit of the receiver is

SNR O
= k 2p P. (5.102)
SNRC

When pre-emphasis and de-emphasis is used, the noise psd at the output of the
de-emphasis filter is

N0
S No ( f ) = | f | < W. (5.103)
A2c (1 + ( f / f 0 )2 )

Therefore, the noise power at the output of the de-emphasis filter is


 W
S No ( f ) d f = 2N0 f 0 tan−1 (W/ f 0 )/A2c . (5.104)
f =−W

Therefore, the improvement in the output SNR is

D = (W/f_0)/tan^{-1}(W/f_0). (5.105)

11. (Haykin 1983) Suppose that the transfer functions of the pre-emphasis and de-
emphasis filters of an FM system are scaled as follows:
 
H_pe(f) = k (1 + jf/f_0)
H_de(f) = (1/k) · 1/(1 + jf/f_0). (5.106)

The scaling factor k is chosen so that the average power of the emphasized signal
is the same as the original message M(t).
(a) Find the value of k that satisfies this requirement for the case when the psd
of the message is
  
1/ 1 + ( f / f 0 )2 for − W ≤ f ≤ W
SM ( f ) = (5.107)
0 otherwise.

(b) What is the corresponding value of the improvement factor obtained by using
this pair of pre-emphasis and de-emphasis filters.
• Solution: The message power is
 W
P= SM ( f ) d f
f =−W
 
−1 W
= 2 f 0 tan . (5.108)
f0

The message psd at the output of the pre-emphasis filter is



k 2 (1+( f / f 0 )2 )
for − W ≤ f ≤ W
SM ( f) = 1+( f / f 0 )2
0 otherwise

k 2 for − W ≤ f ≤ W
= (5.109)
0 otherwise.

The message power at the output of the pre-emphasis filter is


 W

P = SM ( f ) d f = k 2 2W. (5.110)
f =−W

Solving for k we get


   1/2
f0 W
k= tan−1 . (5.111)
W f0

The improvement factor due to this pair of pre-emphasis and de-emphasis


combination is

D = k^2 (W/f_0)^3 / (3[(W/f_0) − tan^{-1}(W/f_0)])
  = (W/f_0)^2 tan^{-1}(W/f_0) / (3[(W/f_0) − tan^{-1}(W/f_0)]). (5.112)

12. Consider an AM receiver using a square-law detector whose output is equal to


the square of the input as indicated in Fig. 5.9. The AM signal is defined by

s(t) = Ac [1 + μ cos(2π f m t)] cos(2π f c t) (5.113)

Fig. 5.9 Square-law detector in the presence of noise: x(t) = s(t) + n(t) is squared, lowpass filtered to give y(t), and square-rooted to give z(t)



and n(t) is narrowband noise given by

n(t) = n c (t) cos(2π f c t) − n s (t) sin(2π f c t). (5.114)

Assume that
(a) The bandwidth of the LPF is large enough so as to reject only those compo-
nents centered at 2 f c .
(b) The channel signal-to-noise ratio at the receiver input is high.
(c) The psd of n(t) is flat with a height of N0 /2 in the range f c − W ≤ | f | ≤
fc + W .
Compute the output SNR.

• Solution: The squarer output is

x^2(t) = s^2(t) + n^2(t) + 2s(t)n(t)
       = A_c^2 [1 + μ cos(2π f_m t)]^2 (1 + cos(4π f_c t))/2
         + n_c^2(t) (1 + cos(4π f_c t))/2 + n_s^2(t) (1 − cos(4π f_c t))/2
         − 2n_c(t)n_s(t) cos(2π f_c t) sin(2π f_c t)
         + 2A_c n_c(t)[1 + μ cos(2π f_m t)] (1 + cos(4π f_c t))/2
         − 2A_c n_s(t)[1 + μ cos(2π f_m t)] cos(2π f_c t) sin(2π f_c t). (5.115)

The output of the lowpass filter is

y(t) = (A_c^2/2) [1 + μ cos(2π f_m t)]^2 + n_c^2(t)/2 + n_s^2(t)/2 + A_c n_c(t)[1 + μ cos(2π f_m t)]
     ≈ (A_c^2/2) [1 + μ cos(2π f_m t)]^2 + A_c n_c(t)[1 + μ cos(2π f_m t)], (5.116)
where we have made the high channel SNR approximation. The output of the
square rooter is

z(t) = √{(A_c^2/2) [1 + μ cos(2π f_m t)]^2 + A_c n_c(t)[1 + μ cos(2π f_m t)]}
     = (A_c/√2) [1 + μ cos(2π f_m t)] √{1 + 2n_c(t)/(A_c (1 + μ cos(2π f_m t)))}
     ≈ (A_c/√2) [1 + μ cos(2π f_m t)] + n_c(t)/√2. (5.117)

The signal power is

A2c μ2
P= . (5.118)
4
The noise power is

2N0 W
PN = . (5.119)
2
The output SNR is

SNR_O = A_c^2 μ^2/(4N_0 W). (5.120)

13. (Haykin 1983) Consider the random process

X (t) = A + W (t), (5.121)

where A is a constant and W (t) is a zero-mean WSS random process with psd
N0 /2. The signal X (t) is passed through a first-order RC-lowpass filter. Find
the expression for the output SNR, with the dc component at the LPF output
regarded as the signal of interest.

• Solution: The signal component at the output of the lowpass filter is A. Assum-
ing that 1/ f 0 = 2π RC, the noise power at the LPF output is
P_N = (N_0/2) ∫_{f=−∞}^{∞} 1/(1 + (f/f_0)^2) df = N_0/(4RC). (5.122)

Therefore the output SNR is:

4RC A2
SNR O = . (5.123)
N0

14. Consider the modified receiver for detecting DSB-SC signals, as illustrated in
Fig. 5.10. Note that in this receiver configuration, the IF filter is absent, hence
w(t) is zero-mean AWGN with psd N0 /2. The DSB-SC signal s(t) is given by

Fig. 5.10 Modified receiver for DSB-SC signals: s(t) + w(t) is multiplied by cos(2π f_c t + θ) and lowpass filtered to give y(t)

s(t) = Ac m(t) cos(2π f c t + θ ), (5.124)

where θ is a uniformly distributed random variable in [0, 2π ]. The message


m(t) extends over the band [−W, W ] and has a power P. Assume the LPF to be
ideal, with unity gain over [−W, W ]. Assume that θ and w(t) are statistically
independent.
Compute the figure-of-merit of this receiver. Does this mean that the IF filter is
redundant? Justify your answer.

• Solution: The noise autocorrelation at the output of the multiplier is

E[w(t)w(t − τ) cos(2π f_c t + θ) cos(2π f_c (t − τ) + θ)] = (N_0/2) δ(τ) cos(2π f_c τ)/2
                                                          = (N_0/4) δ(τ). (5.125)
The average noise power at the LPF output is

N0 N0 W
PN = 2W = . (5.126)
4 2
The signal component at the LPF output is

s1 (t) = Ac m(t)/2. (5.127)

The average signal power at the LPF output is


   
E s12 (t) = A2c E m 2 (t) /4 = A2c P/4. (5.128)

Therefore, SNR at the LPF output is

A2c P
SNR O = . (5.129)
2N0 W

The average power of the modulated signal is

  A2 P
E s 2 (t) = c . (5.130)
2
The average noise power in the message bandwidth for baseband transmis-
sion is
N0
PN1 = (2W ) = N0 W. (5.131)
2
Thus, the channel SNR is

A2c P
SNRC = . (5.132)
2N0 W

Hence the figure-of-merit of the receiver is 1.


The above result does not imply that the IF filter is redundant. In fact, the main
purpose of the IF filter in a superheterodyne receiver is to reject undesired
adjacent stations. If we had used a bandpass filter in the RF section for this
purpose, the Q-factor of the BPF would have to be very high, since the adjacent
stations are spaced very closely. In fact, the required Q-factor of the RF BPF
to reject adjacent stations would have been

Q ≈ f_{RF,max} (= 1605 kHz) / bandwidth of message (= 10 kHz)
  = 160.5, (5.133)

which is very high.


However, the Q-factor requirement of the IF BPF is

Q ≈ f_{IF} (= 455 kHz) / bandwidth of message (= 10 kHz)
  = 45.5, (5.134)

which is reasonable.
Note that the IF filter cannot reject image stations. The image stations are
rejected by the RF bandpass filter. The frequency spacing between the image
stations is 2 f IF , which is quite large, hence the Q-factor requirement of the RF
bandpass filter is not very high.
15. Consider an FM demodulator in the presence of noise. The input signal to the
demodulator is given by
X(t) = A_c cos[2π f_c t + 2πk_f ∫_{τ=0}^{t} M(τ) dτ] + N(t) (5.135)

where N (t) denotes narrowband noise process with psd as illustrated in Fig. 5.11,
and BT is the bandwidth of the FM signal. The psd of the message is

a f 2 for | f | < W
SM ( f ) = (5.136)
0 otherwise.

(a) Write down the expression for the signal at the output of the FM discriminator
(cascade of a differentiator, envelope detector, dc blocking capacitor, and a
gain of 1/(2π )). No derivation is required.
(b) Compute the SNR at the output of the FM demodulator.
Fig. 5.11 PSD of N(t): S_N(f) = N_0/2 for f_c − B_T/2 ≤ |f| ≤ f_c + B_T/2

• Solution: The output of the FM discriminator is

V(t) = k_f M(t) + N_d(t)
     = k_f M(t) + (1/(2π A_c)) dN_s'(t)/dt, (5.137)

where N_s'(t) has the same statistical properties as N_s(t). Here N_s(t) denotes the quadrature component of N(t), as given by

N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t). (5.138)

Thus the psd of Ns (t) is identical to that of Ns (t) and is given by



S_{N_s}(f) = S_N(f − f_c) + S_N(f + f_c) for |f| < B_T/2, 0 otherwise
           = N_0 for |f| < B_T/2, 0 otherwise. (5.139)

The noise psd at the demodulator output is


S_{N_o}(f) = (f^2/A_c^2) S_{N_s}(f) for |f| < W, 0 otherwise
           = f^2 N_0/A_c^2 for |f| < W, 0 otherwise. (5.140)

The noise power at the demodulator output is


∫_{f=−W}^{W} S_{N_o}(f) df = 2N_0 W^3/(3A_c^2). (5.141)
Fig. 5.12 Demodulation of an AM signal using nonideal filters: r(t) passes through the bandpass filter H_1(f), is multiplied by 2 cos(2π f_c t), and passes through the lowpass filter H_2(f) to give m_o(t) + n_o(t)

The signal power at the demodulator output is


k_f^2 ∫_{f=−W}^{W} S_M(f) df = 2k_f^2 a W^3/3. (5.142)

Therefore, the output SNR is

SNR_O = a A_c^2 k_f^2/N_0. (5.143)

16. (Haykin 1983) Consider the AM receiver shown in Fig. 5.12 where

r (t) = [1 + m(t)] cos(2π f c t) + w(t), (5.144)

where w(t) has zero mean and power spectral density N0 /2 and m(t) is WSS
with psd S M ( f ). Note that H1 ( f ) and H2 ( f ) in Fig. 5.12 are nonideal filters,
h 1 (t) and h 2 (t) are real-valued, and
 
h_1(t) = ℜ[2 h̃_1(t) e^{j 2π f_c t}]. (5.145)

It is also given that H1 ( f ) has conjugate symmetry about f c , that is

H1 ( f c + f ) = H1∗ ( f c − f ) for 0 ≤ f ≤ B. (5.146)

The two-sided bandwidth of H1 ( f ) about f c is 2B. The one-sided bandwidth of


H2 ( f ) about zero frequency is B. The one-sided bandwidth of S M ( f ) may be
greater than B. Assume that m(t) and m o (t) do not contain any dc component
and any other dc component at the receiver output is removed by a capacitor. As
a measure of distortion introduced by H1 ( f ), H2 ( f ) and noise, we define the
mean squared error as
 
E = E (m o (t) + n o (t) − m(t))2 . (5.147)

Note that m o (t), n o (t) and m(t) are real-valued. All symbols have their usual
meaning.

(a) Show that


E = ∫_{f=−∞}^{∞} [|1 − H(f)|^2 S_M(f) + N_0 |H(f)|^2] df, (5.148)

where

H ( f ) = H̃1 ( f )H2 ( f ). (5.149)

(b) Given that

S_M(f) = S_0/(1 + (f/f_0)^2), |f| < ∞, f_c ≫ f_0 (5.150)

and

H(f) = 1 for |f| ≤ B, f_c ≫ B
     = 0 otherwise (5.151)

find B such that E is minimized. Ignore the effects of aliasing due to the 2 f c
component during demodulation.

• Solution: Given that


(a)
 
h_1(t) = ℜ[2 h̃_1(t) e^{j 2π f_c t}]. (5.152)

(b)

H1 ( f c + f ) = H1∗ ( f c − f ) for 0 ≤ f ≤ B. (5.153)

(c)

S_M(f) = S_0/(1 + (f/f_0)^2), |f| < ∞, f_c ≫ f_0. (5.154)

(d)

H(f) = 1 for |f| ≤ B, f_c ≫ B
     = 0 otherwise. (5.155)

The Fourier transform of (5.152) is

H1 ( f ) = H̃1 ( f − f c ) + H̃1∗ (− f − f c ), (5.156)



where H̃1 ( f ) denotes the Fourier transform of the complex envelope, h̃ 1 (t).
Substituting (5.156) in (5.153) we obtain

H̃1 ( f ) = H̃1∗ (− f ) for 0 ≤ f ≤ B


⇒ h̃ 1 (t) = h 1, c (t) (5.157)

that is, h̃ 1 (t) is real-valued. The signal component at the bandpass filter output
is
 
y(t) = ℜ[ỹ(t) e^{j 2π f_c t}], (5.158)

where

ỹ(t) = (1 + m(t)) ⋆ h̃_1(t)
     = (1 + m(t)) ⋆ h_{1,c}(t)
     = y_c(t), (5.159)

where “⋆” denotes convolution. Substituting (5.159) in (5.158) we get

y(t) = yc (t) cos(2π f c t). (5.160)

The signal component at the lowpass filter output is

yc (t) h 2 (t) = (1 + m(t)) h 1, c (t) h 2 (t)


= (1 + m(t)) h(t), (5.161)

where

h(t) = h 1, c (t) h 2 (t)


⇒ H ( f ) = H̃1 ( f )H2 ( f ). (5.162)

The dc term (1 h(t)) in (5.161) is removed by a capacitor. Therefore, the


message component at the lowpass filter output is

m o (t) = m(t) h(t). (5.163)

Therefore, the message psd at the lowpass filter output is

S Mo ( f ) = S M ( f )|H ( f )|2 . (5.164)

Let us now turn our attention to the noise component. The noise psd at the
bandpass filter output is

N0
SN ( f ) = |H1 ( f )|2 . (5.165)
2
The narrowband noise at the bandpass filter output is given by

N (t) = Nc (t) cos(2π f c t) − Ns (t) sin(2π f c t). (5.166)

The noise component at the output of the lowpass filter is

n o (t) = Nc (t) h 2 (t). (5.167)

The noise psd at the lowpass filter output is

S No ( f ) = S Nc ( f )|H2 ( f )|2 , (5.168)

where S Nc ( f ) is the psd of Nc (t) and is given by



S_{N_c}(f) = S_N(f − f_c) + S_N(f + f_c) for |f| < B, 0 otherwise
           = (N_0/2) [|H_1(f − f_c)|^2 + |H_1(f + f_c)|^2] for |f| < B, 0 otherwise
           = (N_0/2) [|H̃_1^*(−f)|^2 + |H̃_1(f)|^2] for |f| < B, 0 otherwise
           = N_0 |H̃_1(f)|^2 for |f| < B, 0 otherwise, (5.169)

where we have used (5.165), (5.156), and (5.157). Therefore (5.168) becomes
 2
 
S No ( f ) = N0 |H2 ( f )|2  H̃1 ( f )
= N0 |H ( f )|2 , (5.170)

where we have used (5.162).


Now
 
E = E (m o (t) + n o (t) − m(t))2
 
= E m 2o (t) + n 2o (t) + m 2 (t) − 2m(t)m o (t) , (5.171)

where we have assumed that the noise and message are independent, and that
the noise has zero mean. Simplifying (5.171) we obtain
E = ∫_{f=−∞}^{∞} [S_{M_o}(f) + S_{N_o}(f) + S_M(f) − S_M(f)H(f) − S_M(f)H^*(f)] df
  = ∫_{f=−∞}^{∞} [|1 − H(f)|^2 S_M(f) + N_0 |H(f)|^2] df, (5.172)

where we have used the following relations


E[m_o^2(t)] = ∫_{f=−∞}^{∞} S_{M_o}(f) df
E[n_o^2(t)] = ∫_{f=−∞}^{∞} S_{N_o}(f) df
E[m^2(t)] = ∫_{f=−∞}^{∞} S_M(f) df
E[m(t)m_o(t)] = E[m(t) ∫_{α=−∞}^{∞} h(α)m(t − α) dα]
             = ∫_{α=−∞}^{∞} h(α)E[m(t) m(t − α)] dα
             = ∫_{α=−∞}^{∞} h(α)R_M(α) dα
             = ∫_{f=−∞}^{∞} H(f)S_M(f) df
             = ∫_{f=−∞}^{∞} H^*(f)S_M(f) df. (5.173)

The last two equations in (5.173) are obtained as follows


 ∞
h(α)R M (α) dα = h(−t) R M (t)|t=0
α=−∞
= h(t) R M (−t)|t=0 , (5.174)

which is also equal to the inverse Fourier transform evaluated at t = 0.


The second part may be solved by substituting S M ( f ) in (5.154) and H ( f ) in
(5.155) into (5.172). Therefore (5.172) becomes
E = 2 ∫_{f=B}^{∞} S_0/(1 + (f/f_0)^2) df + 2N_0 B
  = 2S_0 f_0 [π/2 − tan^{-1}(B/f_0)] + 2N_0 B. (5.175)

In order to minimize E , we differentiate with respect to B and set the result to


zero. Thus we obtain

Fig. 5.13 Plot of E versus B

Fig. 5.14 Block diagram of a system: S(t) + W(t) is applied to an RC lowpass filter (series R, shunt C) whose output is S1(t) + W1(t)

dE/dB = −2 S_0 / (1 + (B/f_0)^2) + 2 N_0 = 0
⇒ B = f_0 √(S_0/N_0 − 1).    (5.176)

The plot of E versus B is shown in Fig. 5.13, for S0 = 1.0 W/Hz, N0 =


0.1 W/Hz, and f 0 = 1.0 Hz.
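
As a quick numerical cross-check (this sketch is not part of the original solution; the function name error_power is ours), the Python snippet below evaluates (5.175) on a grid for the stated values and compares the minimizer with the closed form (5.176).

```python
# Verify the optimum bandwidth B from (5.176) against a grid search of (5.175).
import numpy as np

S0, N0, f0 = 1.0, 0.1, 1.0

def error_power(B):
    # E(B) = 2*S0*f0*(pi/2 - arctan(B/f0)) + 2*N0*B, from (5.175)
    return 2.0 * S0 * f0 * (np.pi / 2 - np.arctan(B / f0)) + 2.0 * N0 * B

B_grid = np.linspace(0.01, 10.0, 100000)
B_numeric = B_grid[np.argmin(error_power(B_grid))]
B_closed = f0 * np.sqrt(S0 / N0 - 1.0)          # (5.176)

print("B (grid search) =", B_numeric)            # about 3.0 Hz
print("B (closed form) =", B_closed)             # 3.0 Hz
print("minimum E       =", error_power(B_closed))  # about 1.24, consistent with Fig. 5.13
```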

17. Consider the block diagram in Fig. 5.14. Here S(t) is a narrowband FM signal
given by

S(t) = Ac cos(2π f c t + θ ) − β Ac sin(2π f c t + θ ) sin(2π f m t + α), (5.177)

where θ, α are independent random variables, uniformly distributed in [0, 2π).
The psd of W(t) is N_0/2. Compute E[S_1^2(t)] and E[W_1^2(t)].

• Solution: Now S(t) can be written as



S(t) = A_c cos(2πf_c t + θ)
       − (β A_c/2) cos(2πf_1 t + θ − α)
       + (β A_c/2) cos(2πf_2 t + θ + α),    (5.178)
where

f1 = fc − fm
f2 = fc + fm . (5.179)

The autocorrelation of S(t) is

R_S(τ) = E[S(t) S(t − τ)]
       = (A_c^2/2) cos(2πf_c τ)
         + (β^2 A_c^2/8) cos(2πf_1 τ)
         + (β^2 A_c^2/8) cos(2πf_2 τ).    (5.180)
The psd of S(t) is the Fourier transform of R S (τ ) in (5.180), and is given by

S_S(f) = (A_c^2/4) [δ(f − f_c) + δ(f + f_c)]
         + (β^2 A_c^2/16) [δ(f − f_1) + δ(f + f_1)]
         + (β^2 A_c^2/16) [δ(f − f_2) + δ(f + f_2)].    (5.181)
The psd of S1 (t) is

S_S1(f) = (A_c^2/4) |H(f_c)|^2 [δ(f − f_c) + δ(f + f_c)]
          + (β^2 A_c^2/16) |H(f_1)|^2 [δ(f − f_1) + δ(f + f_1)]
          + (β^2 A_c^2/16) |H(f_2)|^2 [δ(f − f_2) + δ(f + f_2)],    (5.182)
where
|H(f)|^2 = 1 / (1 + (2πf RC)^2)
⇒ |H(f)|^2 = |H(−f)|^2.    (5.183)

The power of S1(t) is

E[S_1^2(t)] = ∫_{f=−∞}^{∞} S_S1(f) df
            = (A_c^2/2) |H(f_c)|^2 + (β^2 A_c^2/8) |H(f_1)|^2 + (β^2 A_c^2/8) |H(f_2)|^2.    (5.184)

Finally, the power of W1(t) is

E[W_1^2(t)] = (N_0/2) ∫_{f=−∞}^{∞} 1/(1 + (2πf RC)^2) df
            = (N_0/(4πRC)) ∫_{x=−∞}^{∞} 1/(1 + x^2) dx
            = (N_0/(2πRC)) ∫_{x=0}^{∞} 1/(1 + x^2) dx
            = (N_0/(2πRC)) [tan^{−1}(x)]_{x=0}^{∞}
            = N_0/(4RC).    (5.185)
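
As an illustrative sanity check of (5.185) (not in the original text; the numerical values of N_0, R, and C below are arbitrary choices), the integral of the noise psd at the RC filter output can be approximated numerically and compared with N_0/(4RC).

```python
# Integrate (N0/2)/(1 + (2*pi*f*R*C)^2) over frequency and compare with N0/(4*R*C).
import numpy as np

N0, R, C = 2.0e-3, 1.0e3, 1.0e-7            # arbitrary example values (RC = 1e-4 s)
f = np.linspace(-1e6, 1e6, 2_000_001)        # wide enough to capture the tails
psd = (N0 / 2.0) / (1.0 + (2.0 * np.pi * f * R * C) ** 2)

power_numeric = np.sum(psd) * (f[1] - f[0])  # simple Riemann sum
power_closed = N0 / (4.0 * R * C)

print(power_numeric, power_closed)           # both close to 5.0 W
```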



Chapter 6
Pulse Code Modulation

1. A speech signal has a total duration of 10 s. It is sampled at a rate of 8 kHz and


then encoded. The SNR Q must be greater than 40 dB. Calculate the minimum
storage capacity required to accommodate this digitized speech signal. Assume
that the speech signal has a Gaussian pdf with zero mean and variance σ 2 and
the overload factor is 5. Assume that the quantizer is uniform and of the mid-rise
type.

• Solution: We know that

SNR_Q = 12σ^2/Δ^2,    (6.1)

where the step-size Δ is given by

Δ = 2 x_max / 2^n.    (6.2)

Here x_max = 5σ is the maximum input the quantizer can handle and n is the
number of bits used to encode any representation level. Substituting for Δ,
the SNR Q becomes

SNR_Q = 12 × 2^{2n} / 100 = 10^4
⇒ n = 8.17.    (6.3)

Since n has to be an integer and SNR Q must be greater than 40 dB, we take
n = 9.
Now, the number of samples obtained in 10 s is 8 × 104 . The number of bits
obtained is 9 × 8 × 104 , which is the storage capacity required for the speech
signal.
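
The same numbers can be reproduced with a small helper (the function name pcm_storage and its interface are ours, not the book's; it simply applies (6.1)-(6.3)).

```python
# Bits per sample and storage for a target quantization SNR (mid-rise uniform quantizer).
import math

def pcm_storage(duration_s, fs_hz, snr_db, overload_factor):
    # SNR_Q = 12*sigma^2/Delta^2 with Delta = 2*xmax/2^n and xmax = overload_factor*sigma
    # => SNR_Q = 3*2^(2n)/overload_factor^2
    snr_lin = 10.0 ** (snr_db / 10.0)
    n_exact = 0.5 * math.log2(snr_lin * overload_factor ** 2 / 3.0)
    n = math.ceil(n_exact)                    # smallest integer number of bits
    samples = duration_s * fs_hz
    return n_exact, n, n * samples            # bits of storage

print(pcm_storage(10, 8000, 40, 5))           # approx (8.17, 9, 720000 bits)
```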


2. A PCM system uses a uniform quantizer followed by a 7-bit binary encoder.


The bit-rate of the system is 56 Mbps.
(a) What is the maximum message bandwidth for which the system operates satisfactorily?
(b) Determine the SNR Q when a full-load sinusoidal signal of frequency 1 MHz
is applied to the input.

• Solution: The sampling-rate is

56/7 = 8 MHz.    (6.4)
Hence the maximum message bandwidth is 4 MHz.
For a sinusoidal signal of amplitude A, the power is A2 /2. Since the sinusoid
is stated to be at full-load, xmax = A. The step-size is

Δ = 2A/2^7.    (6.5)
The mean-square quantization error is

σ_Q^2 = Δ^2/12.    (6.6)
Hence the SNR Q is

SNR_Q = 6 × 128^2/4 ≡ 43.91 dB.    (6.7)
3. Twenty-four voice signals are sampled uniformly and then time division multi-
plexed (TDM). The sampling operation uses flat-top samples with 1 µs duration.
The multiplexing operation includes provision for synchronization by adding
an extra pulse also of 1 µs duration. The highest frequency component of each
voice signal is 3.4 kHz.
(a) Assuming a sampling-rate of 8 kHz and uniform spacing between pulses,
calculate the spacing (the time gap between the ending of a pulse and the
starting of the next pulse) between successive pulses of the multiplexed
signal.
(b) Repeat your calculation using Nyquist-rate sampling.

• Solution: Since the sampling-rate is 8 kHz, the time interval between two con-
secutive samples of the same message is 10^6/8000 = 125 µs. Thus, we can
visualize a “frame” of duration 125 µs containing samples of the 24 voice
signals plus the extra synchronization pulse. Thus the spacing between the
starting points of 2 consecutive pulses is 125/25 = 5 µs. Since the pulse-
width is 1 µs, the spacing between consecutive pulses is 4 µs.

When Nyquist-rate sampling is used, the frame duration is 10^6/6800 =


147.059 µs. Therefore the spacing between the starting points of two consec-
utive pulses is 147.059/25 = 5.88 µs. Hence the spacing between the pulses
is 4.88 µs.
4. Twelve different message signals, each with a bandwidth of [−W, W ] where
W = 10 kHz, are to be multiplexed and transmitted. Determine the minimum
bandwidth required for each method if the modulation/multiplexing method
used is
(a) SSB, FDM.
(b) Pulse amplitude modulation (PAM), TDM with Dirac-delta samples fol-
lowed by Nyquist pulse shaping.
• Solution: Bandwidth required for SSB/FDM is 12 × 10 = 120 kHz.
The sampling rate required for each message is 20 kHz. Thus, the output of
a PAM transmitter for a single message is 20 ksamples/s. The samples are
weighted Dirac-delta functions. The output of the TDM is 240 ksamples/s.
Using Nyquist pulse shaping, the transmission bandwidth required is 120 kHz.
5. (Vasudevan 2010) Compute the power spectrum of NRZ unipolar signals as
shown in Fig. 6.1. Assume that the symbols are equally likely, statistically inde-
pendent and WSS.
• Solution: Consider the system model in Fig. 6.2. Here the input X (t) is given
by

Fig. 6.1 NRZ unipolar signals (example bit pattern 1 0 1 1 0 0 0 1, amplitude A, bit duration T)

Fig. 6.2 System model for computing the psd of linearly modulated digital signals: X(t) drives the transmit filter h(t), a rectangular pulse of amplitude A and duration T, to produce Y(t); the binary constellation points are 0 and 1



X(t) = Σ_{k=−∞}^{∞} S_k δ(t − kT − α),    (6.8)

where Sk denotes a symbol occurring at time kT , drawn from an M-ary PAM


constellation, 1/T denotes the symbol-rate and α is a uniformly distributed
RV in [0, T ]. Since the symbols are independent and WSS

E[Sk S j ] = E[Sk ]E[S j ] = m 2S , (6.9)

where m S denotes the mean value of the symbols in the constellation and is
equal to


m_S = Σ_{i=1}^{M} P(S_i) S_i,    (6.10)

where Si denotes the ith symbol in the constellation and P(Si ) denotes the
probability of occurrence of Si , and M is the number of symbols in the con-
stellation. In the given problem M = 2, P(Si ) = 0.5, and S1 = 0 and S2 = 1,
hence m S = 0.5.
The output Y (t) is given by


Y(t) = Σ_{k=−∞}^{∞} S_k h(t − kT − α),    (6.11)

where h(t) is the impulse response of the transmit filter. We know that the
psd of Y (t) is given by

S_Y(f) = [(P − m_S^2)/T] |H(f)|^2 + (m_S^2/T^2) Σ_{n=−∞}^{∞} |H(n/T)|^2 δ(f − n/T),    (6.12)

where H ( f ) is the Fourier transform of h(t) and


P = Σ_{i=1}^{M} P(S_i) S_i^2 = 0.5    (6.13)

is the average power of the constellation. Here H ( f ) is equal to

H ( f ) = e−j π f T AT sinc ( f T )
⇒ |H ( f )|2 = A2 T 2 sinc2 ( f T ). (6.14)

Therefore

Table 6.1 The 15-segment μ-law companding characteristic

Segment number | Step-size | Projections of the segment end points onto the horizontal axis | Representation levels
0              | 2         | ±31                                                             | 0, . . . , ±15
1a, 1b         | 4         | ±95                                                             | ±16, . . . , ±31
2a, 2b         | 8         | ±223                                                            | ±32, . . . , ±47
3a, 3b         | 16        | ±479                                                            | •
4a, 4b         | 32        | ±991                                                            | •
5a, 5b         | 64        | ±2015                                                           | •
6a, 6b         | 128       | ±4063                                                           |
7a, 7b         | 256       | ±8159                                                           |


S_Y(f) = (A^2 T^2 sinc^2(fT))/(4T) + (A^2/4) Σ_{n=−∞}^{∞} sinc^2(n) δ(f − n/T).    (6.15)

Since

sinc(fT) = { 1 for f = 0
           { 0 for f = n/T, n ≠ 0    (6.16)

and δ( f − n/T ) is not defined for f = n/T , it is assumed that the product
sinc(n)δ(f − n/T) is zero for n ≠ 0. The reason is as follows. Consider an
analogy in the time-domain. If aδ(t) is input to a filter with impulse response
h(t), the output is ah(t). This implies that if a = 0, then the output is also
zero, which in turn is equivalent to zero input. Thus we conclude that 0δ( f )
is equivalent to zero.
Thus

S_Y(f) = (A^2 T sinc^2(fT))/4 + (A^2/4) δ(f).    (6.17)
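
As a quick numerical check (not in the original text; the amplitude and bit duration below are arbitrary), the psd in (6.17) should integrate to the signal power A^2/2: the continuous part contributes A^2/4 and the impulse at f = 0 contributes another A^2/4.

```python
# Integrate the continuous part of S_Y(f) and add the weight of the dc impulse.
import numpy as np

A, T = 2.0, 1.0e-3                           # arbitrary amplitude and bit duration
f = np.linspace(-50.0 / T, 50.0 / T, 2_000_001)
continuous_part = (A ** 2 * T / 4.0) * np.sinc(f * T) ** 2   # np.sinc(x) = sin(pi*x)/(pi*x)
power_cont = np.sum(continuous_part) * (f[1] - f[0])

impulse_weight = A ** 2 / 4.0                # area of the delta at f = 0
print(power_cont + impulse_weight)           # close to A**2/2 = 2.0
print(A ** 2 / 2.0)
```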
6. Consider the 15-segment piecewise linear characteristic used for μ-law com-
panding (μ = 255), shown in Table 6.1. Assume that the input signal lies in the
range [−8159, 8159] mV. Compute the representation level corresponding to an
input of 500 mV. Note that in Table 6.1 the representation levels are not properly
scaled, hence c(x_max) ≠ x_max.

• Solution: From Table 6.1 we see that 500 mV lies in segment 4a. The first
representation level corresponding to segment 4a is 64. The step-size for 4a is
32. Hence the end point of the first uniform segment in 4a is 479 + 32 = 511.
Thus the representation level is 64.

Fig. 6.3 Piecewise linear approximation of the μ-law characteristic: the segment end points along the x-axis are at α, 3α, 7α, and 15α = 1, and the corresponding values of y(x) are 0.25, 0.50, 0.75, and 1.0

7. Consider an 8-segment piecewise linear characteristic which approximates the


μ-law for companding. Four segments are in the first quadrant and the other four
are in the third quadrant (odd symmetry). The μ-law is given by

ln(1 + μx)
c(x) = for 0 ≤ x ≤ 1. (6.18)
ln(1 + μ)

The projections of all segments along the y-axis are spaced uniformly. For the
segments in the first quadrant, the projections of the segments along the x-axis
are such that the length of the projection is double that of the previous segment.
(a) Compute μ.
(b) Let e(x) denote the difference between the values obtained by the μ-law
and that obtained by the 8-segment approximation. Determine e(x) for the
second segment in the first quadrant.

• Solution: Let the projection of the first segment in the first quadrant along
the x-axis be denoted by α. Then the projection of the second segment in the
first quadrant along the x-axis is 2α and so on. The projection of each of the
segments along the y-axis is of length 1/4 since c(1) = 1. This is illustrated
in Fig. 6.3. Thus

α + 2α + 4α + 8α = 1
⇒ α = 1/15. (6.19)

Also

ln(1 + μα)/ln(1 + μ) = 1/4
ln(1 + 3μα)/ln(1 + μ) = 1/2
⇒ (1 + μα)^2 = 1 + 3μα
⇒ μ = 1/α
    = 15.    (6.20)

Fig. 6.4 Message pdf fX(x): triangular with peak A at x = 0 and support [−3, 3]; the points ±1 are also marked

The expression for the second segment in the first quadrant is

y(x) = 15x/8 + 1/8 for 1/15 ≤ x ≤ 3/15. (6.21)

Hence

e(x) = c(x) − y(x) for 1/15 ≤ x ≤ 3/15. (6.22)
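
A short numerical check of (6.20)-(6.22) (this snippet is ours, not part of the book's solution): with μ = 15 and α = 1/15, the segment end points of the second linear segment match the μ-law exactly, and e(x) vanishes at both ends of the segment.

```python
# Evaluate the mu-law and its second linear segment on [1/15, 3/15].
import numpy as np

mu, alpha = 15.0, 1.0 / 15.0

def c(x):                                   # mu-law, (6.18)
    return np.log(1.0 + mu * x) / np.log(1.0 + mu)

def y(x):                                   # second segment, (6.21)
    return 15.0 * x / 8.0 + 1.0 / 8.0

x = np.linspace(alpha, 3.0 * alpha, 10001)
e = c(x) - y(x)
print(c(alpha), c(3.0 * alpha))             # 0.25 and 0.5, as required
print(e[0], e[-1], e.max())                 # zero at both ends, maximum in between
```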

8. A DPCM system uses a second-order predictor of the form

x̂(n) = p1 x(n − 1) + p2 x(n − 2). (6.23)

The autocorrelation of the input is given by: R X (0) = 1, R X (1) = 0.8, R X (2) =
0.6.
Compute the optimum forward prediction coefficients and the prediction gain.

• Solution: We know that

[ R_X(0)  R_X(1) ] [ p_1 ]   [ R_X(1) ]
[ R_X(1)  R_X(0) ] [ p_2 ] = [ R_X(2) ].    (6.24)

Solving for p1 and p2 we get p1 = 8/9, p2 = −1/9.


The prediction error is

σ 2E = σ 2X − p1 R X (1) − p2 R X (2) = 3.2/9 = 0.355. (6.25)

The prediction gain is

σ_X^2/σ_E^2 = 1/σ_E^2 = 2.8125.    (6.26)
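
The normal equations (6.24) are also easy to solve numerically (a cross-check, not part of the book's solution).

```python
# Solve the 2x2 normal equations for R_X(0) = 1, R_X(1) = 0.8, R_X(2) = 0.6.
import numpy as np

R0, R1, R2 = 1.0, 0.8, 0.6
p1, p2 = np.linalg.solve(np.array([[R0, R1], [R1, R0]]),
                         np.array([R1, R2]))
sigma_e2 = R0 - p1 * R1 - p2 * R2            # prediction error variance, (6.25)
print(p1, p2)                                # 8/9 and -1/9
print(sigma_e2, R0 / sigma_e2)               # 0.3556 and 2.8125
```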

9. Consider the message pdf shown in Fig. 6.4. The decision thresholds of a non-
uniform quantizer are at 0, ±1, and ±3.
Compute SNR Q at the output of the expander.

• Solution: Since the area under the pdf must be unity, we must have A = 1/3.
Moreover

Fig. 6.5 Message pdf fX(x): triangular with peak A at x = 0 and support [−3, 3]

f_X(x) = −x/9 + 1/3 for 0 ≤ x ≤ 3.    (6.27)

The representation levels at the output of the expander are at ±0.5 and ±2.
Here f_X(x) cannot be considered a constant in any decision region, therefore

σ_Q^2 = 2 ∫_{x=0}^{1} (x − 1/2)^2 f_X(x) dx + 2 ∫_{x=1}^{3} (x − 2)^2 f_X(x) dx
      = 2 [ −x^4/36 + 4x^3/27 − 13x^2/72 + x/12 ]_{x=0}^{1}
        + 2 [ −x^4/36 + 7x^3/27 − 16x^2/18 + 4x/3 ]_{x=1}^{3}
      = 0.19444.    (6.28)

10. Consider the message pdf shown in Fig. 6.5.


(a) Compute the representation levels of the optimum 2-level Lloyd-Max quan-
tizer.
(b) What is the corresponding value of σ 2Q ?

• Solution: Since the area under the pdf is unity, we must have A = 1/3. Due to
symmetry of the pdf, one of the decision thresholds is x1 = 0. The other two
decision thresholds are x0 = −3 and x2 = 3. The corresponding representa-
tion levels are y1 and y2 which are related by y1 = −y2 . Thus the variance of
the quantization error is

σ_Q^2 = 2 ∫_{x=0}^{3} (x − y_2)^2 f_X(x) dx,    (6.29)

which can be minimized by differentiating wrt y_2. Thus

y_2 = ∫_{x=0}^{3} x f_X(x) dx / ∫_{x=0}^{3} f_X(x) dx.    (6.30)

The numerator of (6.30) is



Fig. 6.6 Block diagram of a system using a prediction filter: w(t) is passed through an RC lowpass filter (series R, shunt C) to give x(t), which is sampled at t = kT to give x_k; a second-order prediction filter forms x̂_k and the prediction error is e_k = x_k − x̂_k

∫_{x=0}^{3} x f_X(x) dx = ∫_{x=0}^{3} x(−x/9 + 1/3) dx
                        = [−x^3/27 + x^2/6]_{x=0}^{3}
                        = 1/2.    (6.31)

The denominator of (6.30) is

∫_{x=0}^{3} f_X(x) dx = (1/2) × 3 × (1/3) = 1/2.    (6.32)

Thus y2 = 1.
The minimum variance of the quantization error is

σ_{Q,min}^2 = 2 ∫_{x=0}^{3} (x − 1)^2 f_X(x) dx
            = 2 ∫_{x=0}^{3} (x − 1)^2 (−x/9 + 1/3) dx
            = 0.5.    (6.33)
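
A numerical confirmation of this 2-level Lloyd-Max quantizer (the sketch below is ours, not the book's): y2 is the centroid of (0, 3], and the minimum quantization noise follows from (6.33).

```python
# Centroid and minimum quantization noise for the triangular pdf on [-3, 3].
import numpy as np

x = np.linspace(0.0, 3.0, 300001)
fx = -x / 9.0 + 1.0 / 3.0                    # pdf for 0 <= x <= 3
dx = x[1] - x[0]

y2 = np.sum(x * fx) * dx / (np.sum(fx) * dx)
sigma_q2 = 2.0 * np.sum((x - y2) ** 2 * fx) * dx

print(y2)          # 1.0
print(sigma_q2)    # 0.5
```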

11. Consider the block diagram in Fig. 6.6. Assume that w(t) is zero-mean AWGN
with psd N_0/2 = 0.5 × 10^{−4} W/Hz. Assume that RC = 10^{−4} s and T = RC/4 s.
Let

x̂k = p1 xk−1 + p2 xk−2 . (6.34)

(a) Compute p1 and p2 so that the prediction error variance, E[ek2 ], is mini-
mized.
(b) What is the minimum value of the prediction error variance?

• Solution: We know that


S_X(f) = (N_0/2) |H(f)|^2 = (N_0/2) / (1 + (2πf RC)^2).    (6.35)

Now

H(f) = 1/(1 + j2πf RC) ⇌ h(t) = (1/RC) e^{−t/RC} u(t).    (6.36)

We also use the following relation:

|H(f)|^2 ⇌ h(t) ⋆ h(−t).    (6.37)

Thus the autocorrelation function of x(t) is

E[x(t)x(t − τ)] = R_X(τ) = (N_0/2) (h(τ) ⋆ h(−τ))
                = (N_0/2) ∫_{t=−∞}^{∞} h(t) h(t − τ) dt
                = (N_0/(2(RC)^2)) ∫_{t=−∞}^{∞} e^{−t/RC} u(t) e^{(τ−t)/RC} u(t − τ) dt.    (6.38)

When τ > 0

E[x(t)x(t − τ)] = (N_0/(2(RC)^2)) ∫_{t=τ}^{∞} e^{−t/RC} e^{(τ−t)/RC} dt
                = (N_0/(4RC)) exp(−τ/RC).    (6.39)
When τ < 0

E[x(t)x(t − τ)] = (N_0/(2(RC)^2)) ∫_{t=0}^{∞} e^{−t/RC} e^{(τ−t)/RC} dt
                = (N_0/(4RC)) exp(τ/RC).    (6.40)

Thus R X (τ ) is equal to

E[x(t)x(t − τ)] = R_X(τ) = (N_0/(4RC)) exp(−|τ|/RC).    (6.41)

The autocorrelation of the samples of x(t) is


E[x(kT)x(kT − mT)] = R_X(mT) = (N_0/(4RC)) exp(−|mT|/RC)
⇒ R_X(m) = (1/4) exp(−|m|/4).    (6.42)

The relevant autocorrelation values are

R X (0) = 0.25
R_X(1) = 0.25 e^{−1/4} = 0.1947
R_X(2) = 0.25 e^{−1/2} = 0.1516.    (6.43)

We know that

[ R_X(0)  R_X(1) ] [ p_1 ]   [ R_X(1) ]
[ R_X(1)  R_X(0) ] [ p_2 ] = [ R_X(2) ].    (6.44)

Solving for p1 and p2 we get p1 = 0.7788, p2 = 0.


The minimum prediction error is

σ_E^2 = σ_X^2 − p_1 R_X(1) − p_2 R_X(2) = 0.25(1 − e^{−1/2}) = 0.09837.    (6.45)
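
The same result can be obtained numerically (an illustrative check, not part of the book's solution): build R_X(m) from (6.42) and solve (6.44).

```python
# Optimum second-order predictor for the sampled RC-filtered noise of this problem.
import numpy as np

N0, RC = 1.0e-4, 1.0e-4
T = RC / 4.0

def Rx(m):
    return (N0 / (4.0 * RC)) * np.exp(-abs(m) * T / RC)

R = np.array([[Rx(0), Rx(1)], [Rx(1), Rx(0)]])
r = np.array([Rx(1), Rx(2)])
p1, p2 = np.linalg.solve(R, r)
sigma_e2 = Rx(0) - p1 * Rx(1) - p2 * Rx(2)

print(p1, p2)        # about 0.7788 and 0.0
print(sigma_e2)      # about 0.09837
```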

12. A delta modulator is designed to operate on speech signals limited to 3.4 kHz.
The specifications of the modulator are
(a) Sampling-rate is ten times the Nyquist-rate of the speech signal.
(b) Step-size  = 100 mV.
The modulator is tested with a 1 kHz sinusoidal signal. Determine the max-
imum amplitude of this test signal required to avoid slope overload.

• Solution: To avoid slope overload we require Δ f_s ≥ 2π f_m A_m, that is

A_m < Δ f_s / (2π f_m)
    = (0.1 × 68 × 10^3) / (2π × 10^3)
    = 1.08 V.    (6.46)

13. Consider a message pdf given by

f X (x) = a e−|x| for −∞ < x < ∞. (6.47)

(a) Compute the representation levels of the optimum 2-level Lloyd-Max quan-
tizer.
(b) What is the minimum value of σ 2Q ?

• Solution: Since the area under the pdf is unity, we must have

∫_{x=0}^{∞} 2a e^{−x} dx = 1
⇒ a = 1/2.    (6.48)

Due to symmetry of the pdf, one of the decision thresholds is x1 = 0. The


other two decision thresholds are x0 = −∞ and x2 = ∞. The corresponding
representation levels are y1 and y2 which are related by y1 = −y2 . Thus the
variance of the quantization error is

σ_Q^2 = 2 ∫_{x=0}^{∞} (x − y_2)^2 f_X(x) dx,    (6.49)

which can be minimized by differentiating wrt y_2. Thus

y_2 = ∫_{x=0}^{∞} x f_X(x) dx / ∫_{x=0}^{∞} f_X(x) dx.    (6.50)

The numerator of (6.50) is

∫_{x=0}^{∞} x f_X(x) dx = ∫_{x=0}^{∞} (x/2) e^{−x} dx = 1/2.    (6.51)

The denominator of (6.50) is

∫_{x=0}^{∞} f_X(x) dx = 1/2.    (6.52)

Thus y2 = 1.
The minimum variance of the quantization error is

σ_{Q,min}^2 = 2 ∫_{x=0}^{∞} (x − 1)^2 f_X(x) dx = 1.    (6.53)

14. A message signal having a pdf shown in Fig. 6.7 is applied to a μ-law compressor.
The compressor characteristic is given by
c(|x|) = [x_max/ln(1 + μ)] ln(1 + μ|x|/x_max) for 0 ≤ |x| ≤ x_max.    (6.54)

Derive the expression for the signal-to-quantization noise ratio at the expander
output.

Fig. 6.7 Message pdf fX(x): triangular with peak 1/x_max at x = 0 and support [−x_max, x_max]

Assume that the number of representation levels is L and the overload level is
xmax .

• Solution: We know that

SNR_Q = 3 L^2 σ_X^2 / ( x_max^2 ∫_{x=−x_max}^{x_max} f_X(x) (dc/dx)^{−2} dx ).    (6.55)

The pdf of the message is



f_X(x) = (1/x_max)(1 − |x|/x_max).    (6.56)

Since

c(x) = { B ln(1 + μx/x_max)    for 0 ≤ x ≤ x_max
       { −B ln(1 − μx/x_max)   for −x_max ≤ x ≤ 0,    (6.57)

where
B = x_max / ln(1 + μ),    (6.58)

we have

dc/dx = [μ/ln(1 + μ)] · 1/(1 + μ|x|/x_max) for 0 ≤ |x| ≤ x_max.    (6.59)

Since both f_X(x) and dc/dx are even functions of x we have

∫_{x=−x_max}^{x_max} f_X(x)(dc/dx)^{−2} dx = 2 ∫_{x=0}^{x_max} f_X(x)(dc/dx)^{−2} dx = I (say).    (6.60)

Substituting for f X (x) and dc/d x we get


I = K ∫_{x=0}^{x_max} (1 − x/x_max)(1 + μx/x_max)^2 dx
  = K ∫_{x=0}^{x_max} [ 1 + μ^2 x^2/x_max^2 + 2μx/x_max − x/x_max − μ^2 x^3/x_max^3 − 2μx^2/x_max^2 ] dx
  = K [ x + μ^2 x^3/(3 x_max^2) + μ x^2/x_max − x^2/(2 x_max) − μ^2 x^4/(4 x_max^3) − 2μ x^3/(3 x_max^2) ]_{x=0}^{x_max}
  = C [ 1 + μ^2/3 + μ − 0.5 − μ^2/4 − 2μ/3 ]
  = C [ 0.5 + μ^2/12 + μ/3 ],    (6.61)

where

K = (2/x_max) [ln(1 + μ)/μ]^2
C = K x_max.    (6.62)

The signal power is given by

σ_X^2 = ∫_{x=−x_max}^{x_max} x^2 f_X(x) dx
      = (2/x_max) ∫_{x=0}^{x_max} x^2 (1 − x/x_max) dx
      = (2/x_max) [ x^3/3 − x^4/(4 x_max) ]_{x=0}^{x_max}
      = x_max^2/6.    (6.63)
Therefore

SNR_Q = L^2/(2I).    (6.64)
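
The closed form (6.61)-(6.64) can be evaluated and cross-checked against a direct numerical integration of (6.55). The sketch below is an illustrative check (not part of the book's solution); the choice μ = 255 and L = 256 is ours, corresponding to 8-bit quantization.

```python
# Evaluate SNR_Q = L^2/(2I) and compare with a direct numerical integration.
import numpy as np

mu, L, xmax = 255.0, 256.0, 1.0

K = (2.0 / xmax) * (np.log(1.0 + mu) / mu) ** 2
C = K * xmax
I = C * (0.5 + mu ** 2 / 12.0 + mu / 3.0)         # (6.61)
snr_closed = L ** 2 / (2.0 * I)                    # (6.64)

x = np.linspace(0.0, xmax, 200001)
fx = (1.0 / xmax) * (1.0 - x / xmax)
dcdx = (mu / np.log(1.0 + mu)) / (1.0 + mu * x / xmax)
I_num = 2.0 * np.sum(fx / dcdx ** 2) * (x[1] - x[0])
sigma_x2 = xmax ** 2 / 6.0
snr_num = 3.0 * L ** 2 * sigma_x2 / (xmax ** 2 * I_num)

print(10.0 * np.log10(snr_closed), 10.0 * np.log10(snr_num))   # both about 38 dB
```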
15. A message signal having a pdf shown in Fig. 6.8 is applied to a uniform mid-step
quantizer also shown in the same figure. Compute a and the pdf of the quantizer
output Y . Assume that the overload level of the quantizer is ±3.

• Solution: Firstly we have

2a ∫_{x=0}^{3} x^2 dx = 1
⇒ a = 1/18.    (6.65)

Fig. 6.8 Message pdf fX(x) = ax², −3 ≤ x ≤ 3, and the transfer characteristic (output Y versus input X) of the uniform mid-step quantizer

Moreover, Y is a discrete random variable which takes only three values


(L = 3). The step-size of the quantizer is

Δ = 2 x_max/L = 6/3 = 2.    (6.66)
The various quantizer levels are indicated in Fig. 6.9. Thus

P(Y = 0) = P(−1 < X < 1)
         = (2/18) ∫_{x=0}^{1} x^2 dx
         = 1/27
P(Y = 2) = P(1 < X < 3)
         = (1/18) ∫_{x=1}^{3} x^2 dx
         = 13/27
         = P(Y = −2).    (6.67)

Therefore

Fig. 6.9 Message pdf fX(x) = ax² and quantizer levels: decision thresholds at ±1 and ±3, representation levels at 0 and ±2

Fig. 6.10 Message pdf fX(x): piecewise constant, equal to a for |x| ≤ 2 and a/4 for 2 ≤ |x| ≤ 10

f_Y(y) = (13/27) δ(y + 2) + (1/27) δ(y) + (13/27) δ(y − 2).    (6.68)
16. The probability density function f X (x) of a message signal is given in Fig. 6.10.
The message is quantized by a 3-bit quantizer such that all the eight reconstruc-
tion levels occur with equal probability.
(a) Determine a.
(b) Determine the power of the message signal.
(c) Compute the decision thresholds.
(d) For each partition cell, compute the reconstruction level such that the quan-
tization noise power is minimized.
(e) Determine the overall quantization noise power.

• Solution: Since the area under the pdf is unity we must have
2 (2a + 8 · (a/4)) = 1
⇒ a = 1/8.    (6.69)
The power in the message is

2 ∫_{x=0}^{10} x^2 f_X(x) dx = 21.33.    (6.70)

Since the message pdf is symmetric, we expect the decision thresholds also
to be symmetric about the origin. Let us denote the decision thresholds on
the positive x-axis as x0 , x1 , x2 , x3 , and x4 . The decision thresholds along the
negative x-axis are −x1 , −x2 , −x3 , and −x4 . Note that x4 = 10 and x0 = 0.
Let us first compute x3 . Since each of the reconstruction levels occur with
probability equal to 1/8, we have

P(x_3 ≤ x ≤ 10) = 1/8
⇒ (a/4)(10 − x_3) = 1/8
⇒ x_3 = 6.    (6.71)

Similarly we get x2 = 2 and x1 = 1.


Let us denote the reconstruction levels along the positive x-axis as y1 , y2 , y3 ,
and y4 . Due to symmetry, the reconstruction levels along the negative x-axis
are −y1 , −y2 , −y3 and −y4 . The quantization noise power for the ith partition
cell is

Q_i = ∫_{x=x_{i−1}}^{x_i} (x − y_i)^2 f_X(x) dx for 1 ≤ i ≤ 4.    (6.72)

The overall quantization noise power is

Q = 2 Σ_{i=1}^{4} Q_i.    (6.73)

In order to minimize the overall quantization noise power, we need to mini-


mize Q i . For minimizing Q i in (6.72) we differentiate wrt yi and set the result
to zero. Thus we get

dQ_i/dy_i = −2 ∫_{x=x_{i−1}}^{x_i} (x − y_i) f_X(x) dx = 0
⇒ y_i = ∫_{x=x_{i−1}}^{x_i} x f_X(x) dx / ∫_{x=x_{i−1}}^{x_i} f_X(x) dx,    (6.74)

Fig. 6.11 Message pdf fX(x): triangular with peak a at x = 0 and support [−16, 16]

which is the centroid of each partition cell. For the given problem f X (x) is
constant for each partition cell, therefore

y_i = (x_{i−1} + x_i)/2.    (6.75)
Therefore the reconstruction levels are at y1 = 1/2, y2 = 3/2, y3 = 4, and
y4 = 8.
The overall quantization noise power is

Q = 2[Q 1 + Q 2 + Q 3 + Q 4 ]
= 2[2/96 + 2/6]
= 17/24
= 0.70833. (6.76)
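
The decision thresholds, centroids, and overall quantization noise power can be reproduced numerically (this check is ours, not the book's).

```python
# Equal-probability cells, centroid representation levels, and noise power for Fig. 6.10.
import numpy as np

def fx(x):
    return np.where(np.abs(x) <= 2.0, 1.0 / 8.0,
                    np.where(np.abs(x) <= 10.0, 1.0 / 32.0, 0.0))

thresholds = [0.0, 1.0, 2.0, 6.0, 10.0]       # x0..x4 from the solution
Q = 0.0
for lo, hi in zip(thresholds[:-1], thresholds[1:]):
    x = np.linspace(lo, hi, 200001)
    dx = x[1] - x[0]
    p = np.sum(fx(x)) * dx                    # should be 1/8 for every cell
    y = np.sum(x * fx(x)) * dx / p            # centroid (reconstruction level)
    Q += np.sum((x - y) ** 2 * fx(x)) * dx
    print(lo, hi, p, y)

print("overall quantization noise power:", 2.0 * Q)   # about 0.70833
```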

17. The probability density function f X (x) of a message signal is given in Fig. 6.11.
The message is quantized by a 3-bit quantizer such that all the eight reconstruc-
tion levels occur with equal probability.
(a) Determine a.
(b) Determine the power of the message signal.
(c) Compute the decision thresholds.
(d) For each partition cell, compute the reconstruction level such that the quan-
tization noise power is minimized.
(e) Determine the overall quantization noise power.

• Solution: Consider Fig. 6.12. Since the area under the pdf is unity we must
have
2 × (1/2) × 16a = 1
⇒ a = 1/16.    (6.77)
The power in the message is

Fig. 6.12 Message pdf fX(x) (triangular with peak a at x = 0 and support [−16, 16]) with the decision thresholds x0, x1, x2, x3, x4 marked

2 ∫_{x=0}^{16} x^2 f_X(x) dx = 2 ∫_{x=0}^{16} x^2 (1/16 − x/256) dx
                             = 128/3.    (6.78)
Since the message pdf is symmetric, we expect the decision thresholds also
to be symmetric about the origin. Let us denote the decision thresholds on
the positive x-axis as x0 , x1 , x2 , x3 , and x4 . The decision thresholds along the
negative x-axis are −x1 , −x2 , −x3 , and −x4 . Note that x4 = 16 and x0 = 0.
Let us first compute x3 . Since each of the reconstruction levels occur with
probability equal to 1/8, we have

P(x_3 ≤ x ≤ 16) = 1/8
⇒ (1/2)(1/16 − x_3/256)(16 − x_3) = 1/8
⇒ x_3 = 8.    (6.79)

Similarly we get

P(x_2 ≤ x ≤ x_3) = 1/8
⇒ (1/2)(1/32 + 1/16 − x_2/256)(x_3 − x_2) = 1/8
⇒ x_2 = 4.6862915    (6.80)

and

P(0 ≤ x ≤ x_1) = 1/8
⇒ (1/2)(1/16 + 1/16 − x_1/256) x_1 = 1/8
⇒ x_1 = 2.1435935.    (6.81)

Let us denote the reconstruction levels along the positive x-axis as y1 , y2 ,


y3 , and y4 . Due to symmetry, the reconstruction levels along the negative x-

axis are −y1 , −y2 , −y3 , and −y4 . The quantization noise power for the ith
partition cell is

Q_i = ∫_{x=x_{i−1}}^{x_i} (x − y_i)^2 f_X(x) dx for 1 ≤ i ≤ 4.    (6.82)

The overall quantization noise power is

Q = 2 Σ_{i=1}^{4} Q_i.    (6.83)

In order to minimize the overall quantization noise power, we need to mini-


mize Q i . For minimizing Q i in (6.82) we differentiate wrt yi and set the result
to zero. Thus we get

dQ_i/dy_i = −2 ∫_{x=x_{i−1}}^{x_i} (x − y_i) f_X(x) dx = 0
⇒ y_i = ∫_{x=x_{i−1}}^{x_i} x f_X(x) dx / ∫_{x=x_{i−1}}^{x_i} f_X(x) dx
       = 8 ∫_{x=x_{i−1}}^{x_i} x (1/16 − x/256) dx
       = 8 [ x^2/32 − x^3/768 ]_{x_{i−1}}^{x_i}  for 1 ≤ i ≤ 4,    (6.84)

which is the centroid of each partition cell. Therefore the reconstruction levels
are at

y1 = 1.0461463
y2 = 3.3721317
y3 = 6.2483887
y4 = 10.666667. (6.85)

From (6.82), the quantization noise in the ith partition cell is

Q_i = ∫_{x=x_{i−1}}^{x_i} (x − y_i)^2 (1/16 − x/256) dx  for 1 ≤ i ≤ 4
    = [ (x − y_i)^3/48 − (1/256)( x^4/4 + y_i^2 x^2/2 − 2 y_i x^3/3 ) ]_{x=x_{i−1}}^{x_i},    (6.86)

which evaluates to

Q 1 = 0.0477823
Q 2 = 0.0671179
Q 3 = 0.1132596
Q 4 = 0.4444444. (6.87)

Therefore

Q = 2(Q 1 + Q 2 + Q 3 + Q 4 )
= 1.3452084. (6.88)

18. A speech signal has a total duration of 20 s. It is sampled at a rate of 8 kHz and
then encoded. The SNR Q must be greater than 50 dB. Calculate the minimum
storage capacity required to accommodate this digitized speech signal. Assume
that the speech signal has a Gaussian pdf with zero mean and variance σ 2 and
the overload factor is 6. Assume that the quantizer is uniform and of the mid-rise
type.
• Solution: We know that

SNR_Q = 12σ^2/Δ^2,    (6.89)

where the step-size Δ is given by

Δ = 2 x_max / 2^n.    (6.90)

Here, x_max = 6σ is the maximum input the quantizer can handle and n is the
number of bits used to encode any representation level. Substituting for Δ,
the SNR_Q becomes

SNR_Q = 12 × 2^{2n} / 144 = 10^5
⇒ n = 10.097.    (6.91)

Since n has to be an integer and SNR Q must be greater than 50 dB, we take
n = 11.
Now, the number of samples obtained in 20 s is 16 × 104 . The number of
bits obtained is 11 × 16 × 104 = 176 × 104 , which is the storage capacity
required for the speech signal.
19. A DPCM system uses a second-order predictor of the form

x̂(n) = p1 x(n − 1) + p2 x(n − 2). (6.92)



The autocorrelation of the input is given by: R X (0) = 2, R X (1) = 1.8, R X (2) =
1.6.
Compute the optimum prediction coefficients and the prediction gain.
• Solution: We know that

[ R_X(0)  R_X(1) ] [ p_1 ]   [ R_X(1) ]
[ R_X(1)  R_X(0) ] [ p_2 ] = [ R_X(2) ].    (6.93)

Solving for p1 and p2 we get p1 = 0.9473684, p2 = −0.0526316.


The prediction error is

σ 2E = σ 2X − p1 R X (1) − p2 R X (2) = 0.3789474. (6.94)

The prediction gain is

σ_X^2/σ_E^2 = 2/σ_E^2 = 5.28.    (6.95)

20. A random process X (t) is defined by

X (t) = Y cos(2π f c t + θ), (6.96)

where Y and θ are independent random variables. Y is uniformly distributed in


the range [−3, 3] and θ is uniformly distributed in the range [0, 2π).
(a) Compute the autocorrelation and psd of X (t).
(b) X (t) is fed to a uniform mid-rise type quantizer. Compute the number of
bits per sample required so that the SQNR is at least 40 dB with no overload
distortion.

• Solution: Clearly

E[X (t)] = E[Y ]E[cos(2π f c t + θ)]


= 0. (6.97)

The autocorrelation of X (t) is given by

E[X(t)X(t − τ)] = R_X(τ)
                = E[Y cos(2πf_c t + θ) Y cos(2πf_c(t − τ) + θ)]
                = E[Y^2] E[cos(2πf_c t + θ) cos(2πf_c(t − τ) + θ)]
                = (3/2) cos(2πf_c τ).    (6.98)
The psd is

S_X(f) = (3/4) [δ(f − f_c) + δ(f + f_c)].    (6.99)
The signal power is 3/2. The quantization noise power is

σ_Q^2 = Δ^2/12,    (6.100)

where

Δ = 2 m_max / 2^n = 6/2^n,    (6.101)
where m max = 3 denotes the maximum amplitude of the random process and
n is the number of bits per sample.
Therefore the SQNR is

SQNR = (3/2) × (2^{2n}/3)
     = 2^{2n−1}
⇒ 10 log_10(2^{2n−1}) ≥ 40
⇒ n ≥ 7.14
⇒ n = 8.    (6.102)

21. The pdf of a full-load message signal is uniformly distributed in [−6, 6]. Com-
pute the SNR Q in dB, due to a 2-bit, uniform mid-rise quantizer.
• Solution: The signal power is

σ_X^2 = ∫_{x=−6}^{6} x^2 f_X(x) dx
      = (2/12) ∫_{x=0}^{6} x^2 dx
      = 12.    (6.103)

Since a 2-bit quantizer has four representation levels, the step-size is

Δ = 2 x_max / 4
  = (2 × 6)/4
  = 3.    (6.104)

Therefore the decision thresholds are at x0 = −6, x1 = −3, x2 = 0, x3 = 3,


x4 = 6 and the representation levels are at the mid points of two consecutive

decision thresholds, that is, y1 = −9/2, y2 = −3/2, y3 = 3/2, and y4 = 9/2.


The quantization noise power is

σ_Q^2 = Σ_{i=1}^{4} ∫_{x=x_{i−1}}^{x_i} (x − y_i)^2 f_X(x) dx
      = (2/12) ∫_{x=0}^{3} (x − 3/2)^2 dx + (2/12) ∫_{x=3}^{6} (x − 9/2)^2 dx
      = 3/4.    (6.105)

Hence we get

SNR_Q = σ_X^2/σ_Q^2 = 16
⇒ SNR_Q (in dB) = 10 log_10(16) = 12.0412 dB.    (6.106)
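
A small numerical check of this result (ours, not the book's): quantize a fine grid with the 2-bit mid-rise characteristic, weight by the uniform pdf, and compare with 12.04 dB.

```python
# SNR_Q for a 2-bit mid-rise quantizer and a uniform pdf on [-6, 6].
import numpy as np

xmax, L = 6.0, 4
delta = 2.0 * xmax / L                                    # step-size, 3
x = np.linspace(-xmax, xmax, 2_000_001)
fx = np.full_like(x, 1.0 / (2.0 * xmax))                  # uniform pdf
dx = x[1] - x[0]

# mid-rise representation levels: odd multiples of delta/2
y = (np.floor(x / delta) + 0.5) * delta
y = np.clip(y, -xmax + delta / 2.0, xmax - delta / 2.0)   # keep the edge sample in range

sigma_x2 = np.sum(x ** 2 * fx) * dx
sigma_q2 = np.sum((x - y) ** 2 * fx) * dx
print(sigma_x2, sigma_q2)                                 # 12 and 0.75
print(10.0 * np.log10(sigma_x2 / sigma_q2))               # about 12.04 dB
```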

22. The μ-law compressor characteristic is given by


c(|x|) = [x_max/ln(1 + μ)] ln(1 + μ|x|/x_max) for 0 ≤ |x| ≤ x_max.    (6.107)

Derive and compute the companding gain for μ = 255.


• Solution: The companding gain is defined as

G_c = Δ_u / Δ_{r,min},    (6.108)

where Δ_u is the step-size of the uniform quantizer and Δ_{r,min} is the minimum
step-size of the robust quantizer having the same overload level (xmax ) and
representation levels (L). Now

Δ_u = 2 x_max / L.    (6.109)
Similarly, for the kth segment of the robust quantizer we have

(dc(x)/dx)_k = 2 x_max / (L Δ_k),    (6.110)

where Δ_k is the length of the kth segment along the x-axis (input). Note that
the length of each segment along the y-axis is identical and equal to 2 x_max/L.
From (6.110) it is clear that Δ_k is minimum when dc/dx is maximum, which

occurs at the origin. Substituting (6.109) and (6.110) in (6.108) we get

G_c = dc(x)/dx |_{x=0}
    = μ/ln(1 + μ)
    = 45.985904    (6.111)

for μ = 255.
23. The A-law compressor characteristic is given by

c(|x|)/x_max = { A|x|/(K x_max)           for 0 ≤ |x|/x_max ≤ 1/A
              { (1 + ln(A|x|/x_max))/K    for 1/A ≤ |x|/x_max ≤ 1,    (6.112)

where K = 1 + ln(A). Derive and compute the companding gain for A = 87.56.

• Solution: The companding gain is defined as

G_c = Δ_u / Δ_{r,min},    (6.113)

where Δ_u is the step-size of the uniform quantizer and Δ_{r,min} is the minimum
step-size of the robust quantizer having the same overload level (xmax ) and
representation levels (L). Now

Δ_u = 2 x_max / L.    (6.114)
Similarly, for the kth segment of the robust quantizer we have

(dc(x)/dx)_k = 2 x_max / (L Δ_k),    (6.115)

where Δ_k is the length of the kth segment along the x-axis (input). Note that
the length of each segment along the y-axis is identical and equal to 2 x_max/L.
From (6.115) it is clear that Δ_k is minimum when dc/dx is maximum, which
occurs at the origin. Substituting (6.114) and (6.115) in (6.113) we get

G_c = dc(x)/dx |_{x=0}
    = A/(1 + ln(A))
    = 16.000514    (6.116)

for A = 87.56.

Fig. 6.13 Illustrating the transfer characteristics of the compressor, the uniform quantizer, and the expander: the example compressor is piecewise-linear with x_max = 12, input segment end points at 1, 3, 6, and 12, and uniform output steps; the decision thresholds (x_k, x_{k+1}), the segment length Δ_k, and the representation level y_k are marked

24. (Haykin 1988) derived the ideal compressor characteristic. Clearly state the
objective of the compressor. Can the ideal compressor be used in practice? Give
reasons.
• Solution: Recall that in the case of a uniform quantizer, the SNR Q is directly
proportional to the input signal power. Therefore, if the input signal power
decreases, the SNR Q also decreases. However, in the case of a robust quan-
tizer, the SNR Q remains constant over a wide range of input signal power. In
order to achieve this feature, the robust quantizer uses a compressor. This is
shown in Fig. 6.13. For ease of illustration, the compressor characteristic is
assumed to be piecewise-linear. The compressor and uniform quantizer are
located at the transmitter, whereas the expander is located at the receiver.
Note that in the case of the compressor, both input and output amplitudes
are continuous. However, in the case of the expander, both input and output
amplitudes are discrete.
The representation level yk is related to the decision thresholds xk and xk+1
as follows:

y_k = (1/2)(x_k + x_{k+1}).    (6.117)
The quantization error for the kth decision region is defined as

qk = x − yk , (6.118)

where x denotes the compressor input. The compressor characteristic must


satisfy the following property:

c(−x) = −c(x). (6.119)

It is also convenient to have

c(x) = { x_max    for x = x_max
       { 0        for x = 0
       { −x_max   for x = −x_max.    (6.120)

Let L denote the number of decision regions or representation levels. Let
Δ_k, for 0 ≤ k ≤ L − 1, denote the length of the kth decision region at the
compressor input. We also observe from Fig. 6.13 that the compressor output
is divided into equal intervals. Hence

(dc(x)/dx)_k = 2 x_max / (L Δ_k)
⇒ Δ_k = 2 x_max / (L (dc(x)/dx)_k).    (6.121)

We now make the following assumptions regarding the input x.


(a) The pdf of x, f X (x), is an even function of x. This ensures that x has zero-
mean.
(b) In each interval Δ_k, f_X(x) is approximately a constant. Hence

f_X(x) ≈ f_X(y_k) for x_k ≤ x ≤ x_{k+1}.    (6.122)

This ensures that the quantization error is uniformly distributed in [−Δ_k/2, Δ_k/2].
(c) The input does not overload the compressor.
Now, the variance of the quantization error in the kth interval is

σ_{Q,k}^2 = Δ_k^2/12.    (6.123)



The overall quantization noise power is

σ_Q^2 = Σ_{k=0}^{L−1} P_k σ_{Q,k}^2
      = (1/12) Σ_{k=0}^{L−1} P_k Δ_k^2,    (6.124)

where

P_k = f_X(y_k) Δ_k    (6.125)

denotes the probability that x lies in the region Δ_k. Substituting (6.121) in
(6.124), we get

σ_Q^2 = (1/12) Σ_{k=0}^{L−1} P_k · 4 x_max^2 / (L^2 (dc(x)/dx)_k^2).    (6.126)

Now, for a given xmax , as L becomes large, we obtain

P_k → f_X(x) dx
(dc(x)/dx)_k → dc(x)/dx    (6.127)

and the summation in (6.126) can be replaced by an integral, as follows:



σ_Q^2 = (x_max^2/(3L^2)) ∫_{x=−x_max}^{x_max} f_X(x)/(dc(x)/dx)^2 dx.    (6.128)

Therefore

SNR_Q = σ_X^2/σ_Q^2
      = (1/σ_Q^2) ∫_{x=−x_max}^{x_max} x^2 f_X(x) dx    (6.129)

is independent of the input signal power when

dc(x)/dx = K/x
⇒ c(x) = K ln(x) + C_0 for x > 0,    (6.130)

where K and C0 are constants. Using (6.120) we obtain



C0 = xmax − K ln(xmax ). (6.131)

Hence the ideal compressor characteristic is

c(x) = K ln(x/x_max) + x_max for x > 0.    (6.132)

Note that when x < 0, (6.119) needs to be applied. The ideal c(x) is not used
in practice, since c(x) → −∞ as x → 0+ .
25. The probability density function of a message signal is

f X (x) = ae−3|x| for −∞ < x < ∞. (6.133)

The message is applied to a 4-representation level Lloyd-Max quantizer. The


initial set of representation levels are given by: y1, 0 = −4, y2, 0 = −1, y3, 0 = 1,
y4, 0 = 4, where the second subscript denotes the 0th iteration. Compute the next
set of decision thresholds and representation levels (in the 1st iteration).
• Solution: Since the area under the pdf is unity we must have

∫_{x=−∞}^{∞} a e^{−3|x|} dx = ∫_{x=0}^{∞} 2a e^{−3x} dx = 1
⇒ a = 3/2.    (6.134)

Let the decision thresholds in the 1st iteration be given by x0, 1 , . . . , x4, 1 . We
have

x_{0,1} = −∞
x_{1,1} = (y_{1,0} + y_{2,0})/2 = −2.5
x_{2,1} = (y_{2,0} + y_{3,0})/2 = 0
x_{3,1} = (y_{3,0} + y_{4,0})/2 = 2.5
x_{4,1} = ∞.    (6.135)

The representation levels in the 1st iteration are given by

y_{k,1} = ∫_{x=x_{k−1,1}}^{x_{k,1}} x f_X(x) dx / ∫_{x=x_{k−1,1}}^{x_{k,1}} f_X(x) dx for 1 ≤ k ≤ 4.    (6.136)

Solving (6.136) we get

y_{1,1} = −2.83 = −y_{4,1}
y_{2,1} = −0.33194 = −y_{3,1}.    (6.137)
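
One iteration of the Lloyd-Max algorithm can also be carried out numerically on a fine grid (an illustrative sketch, not the book's code).

```python
# One Lloyd-Max iteration for the Laplacian pdf f_X(x) = 1.5*exp(-3|x|).
import numpy as np

x = np.linspace(-20.0, 20.0, 4_000_001)         # wide grid approximating (-inf, inf)
fx = 1.5 * np.exp(-3.0 * np.abs(x))
dx = x[1] - x[0]

y = np.array([-4.0, -1.0, 1.0, 4.0])            # initial representation levels
# Step 1: decision thresholds are midpoints of adjacent representation levels
t = np.concatenate(([-np.inf], (y[:-1] + y[1:]) / 2.0, [np.inf]))
# Step 2: new representation levels are the centroids of each region
y_new = []
for lo, hi in zip(t[:-1], t[1:]):
    mask = (x >= lo) & (x < hi)
    y_new.append(np.sum(x[mask] * fx[mask]) / np.sum(fx[mask]))

print(t[1:-1])       # -2.5, 0.0, 2.5
print(y_new)         # about [-2.83, -0.332, 0.332, 2.83]
```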

References

Simon Haykin. Digital Communications. John Wiley & Sons, first edition, 1988.
K. Vasudevan. Digital Communications and Signal Processing, Second edition (CDROM included).
Universities Press (India), Hyderabad, www.universitiespress.com, 2010.
Chapter 7
Signaling Through AWGN Channel

1. For the transmit filter with impulse response given in Fig. 7.1, draw the output of
the matched filter. Assume that the matched filter has an impulse response p(−t).

• Solution: The matched filter output is plotted in Fig. 7.2.


2. Consider a communication system shown in Fig. 7.3. Here, an input bit bk = 1
is represented by the signal x1 (t) and the bit bk = 0 is represented by the signal
x2(t). Note that x1(t) and x2(t) are orthogonal, that is

∫_{t=0}^{T} x_1(t) x_2(t) dt = 0.    (7.1)

(a) It is given that the impulse response of the transmit filter is



p(t) = { 1 for 0 ≤ t ≤ T/4
       { 0 otherwise,    (7.2)

where 1/T denotes the bit-rate of the input data stream bk .


The “bit manipulator” converts the input bit bk = 0 into a sequence of Dirac-
delta functions weighted by symbols from a binary constellation. Similarly
for input bit bk = 1. Thus the output of the bit manipulator is given by


y_1(t) = Σ_{k=−∞}^{∞} a_k δ(t − kT/4),    (7.3)

where ak denotes symbols from a binary constellation. The final PCM signal
y(t) is


Fig. 7.1 Impulse response p(t) of the transmit filter: p(t) = 1 for 0 ≤ t < T/2 and p(t) = −0.5 for T/2 ≤ t ≤ T

Fig. 7.2 Matched filter output: peak value 5T/8 at t = 0, value −T/4 near t = ±T/2, and zero at t = ±T



y(t) = Σ_{k=−∞}^{∞} a_k p(t − kT/4).    (7.4)

Note that the symbol-rate of ak is 4/T , that is four times the input bit-rate.
Draw the constellation and write down the sequence of symbols that are
generated corresponding to an input bit bk = 0 and an input bit bk = 1.
(b) The received signal is given by

u(t) = y(t) + w(t), (7.5)

where y(t) is the transmitted signal and w(t) is a sample function of a zero-
mean AWGN process with psd N0 /2. Compute the mean and variance of
z 1 and z 2 , given that 1 (x1 (t)) was transmitted in the interval [0, T ]. Also
compute cov(z 1 , z 2 ).
(c) Derive the detection rule for the optimal detector.
(d) Derive the average probability of error.

Fig. 7.3 Block diagram of a communication system. Transmitter: the input bits bk (1s and 0s) from the quantizer enter a bit manipulator whose output y1(t) drives the transmit filter to produce the PCM signal y(t). The waveforms x1(t) and x2(t) take the values ±A/2 over [0, T] (see (7.7) and (7.8)). Receiver: u(t) = y(t) + w(t) is applied to the matched filters x1(−t) and x2(−t), whose outputs are sampled at t = kT to give z1 and z2, which feed the optimum detector

• Solution: The impulse response of the transmit filter is



p(t) = { 1 for 0 ≤ t ≤ T/4
       { 0 otherwise.    (7.6)

The input bit 0 gets converted to the symbol sequence:

{A/2, −A/2, −A/2, A/2}. (7.7)

The input bit 1 gets converted to the symbol sequence:

{A/2, A/2, −A/2, −A/2}. (7.8)



Given that 1 (x1 (t)) has been transmitted in the interval [0, T ], the received
signal can be written as

u(t) = x1 (t) + w(t). (7.9)

The output of the upper MF is

z_1 = u(t) ⋆ x_1(−t)|_{t=0}
    = ∫_{τ=−∞}^{∞} u(τ) x_1(−t + τ) dτ |_{t=0}
    = ∫_{t=0}^{T} u(t) x_1(t) dt
    = A^2 T/4 + w_1,    (7.10)
where

w_1 = ∫_{t=0}^{T} w(t) x_1(t) dt.    (7.11)

Similarly

z_2 = u(t) ⋆ x_2(−t)|_{t=0}
    = ∫_{t=0}^{T} u(t) x_2(t) dt
    = w_2,    (7.12)

where we have used the fact that x1(t) and x2(t) are orthogonal and

w_2 = ∫_{t=0}^{T} w(t) x_2(t) dt.    (7.13)

Since w(t) is zero mean, w1 and w2 are also zero mean. Since x1 (t) and x2 (t)
are LTI filters, w1 and w2 are Gaussian distributed RVs. Moreover

E[w_1 w_2] = E[ ∫_{t=0}^{T} w(t) x_1(t) dt ∫_{τ=0}^{T} w(τ) x_2(τ) dτ ]
           = ∫_{t=0}^{T} ∫_{τ=0}^{T} x_1(t) x_2(τ) (N_0/2) δ(t − τ) dt dτ
           = (N_0/2) ∫_{t=0}^{T} x_1(t) x_2(t) dt
           = 0.    (7.14)

Thus w1 and w2 are uncorrelated, and being Gaussian, they are also statistically
independent. This implies that z 1 and z 2 are also statistically independent. The
variance of w1 and w2 is

E[w_1^2] = E[ ∫_{t=0}^{T} w(t) x_1(t) dt ∫_{τ=0}^{T} w(τ) x_1(τ) dτ ]
         = ∫_{t=0}^{T} ∫_{τ=0}^{T} x_1(t) x_1(τ) (N_0/2) δ(t − τ) dt dτ
         = (N_0/2) ∫_{t=0}^{T} x_1^2(t) dt
         = N_0 A^2 T/8
         = E[w_2^2]
         = var(z_1) = var(z_2) = σ^2 (say).    (7.15)

The mean values of z 1 and z 2 are (given that 1 was transmitted)

E[z_1] = A^2 T/4 = m_{1,1} (say)
E[z_2] = 0 = m_{2,1} (say),    (7.16)

where m 1, 1 and m 2, 1 denote the means of z 1 and z 2 , respectively, when 1 is


transmitted. Similarly it can be shown that when 0 is transmitted

E[z_1] = 0 = m_{1,0} (say)
E[z_2] = A^2 T/4 = m_{2,0} (say).    (7.17)
Let

z = [z_1  z_2]^T.    (7.18)

The maximum likelihood (ML) detector computes the probabilities:

P( j|z) for j = 0, 1 (7.19)

and decides in favor of that bit for which the probability is maximum. Using
Bayes’ rule we have

P(j|z) = f_Z(z|j) P(j) / f_Z(z),    (7.20)

where P( j) denotes the probability that bit j ( j = 0, 1) was transmitted.


Since P( j) = 0.5 and f Z (z) are independent of j they can be ignored and
computing the maximum of P( j|z) is equivalent to computing the maximum
of f Z (z| j) which can be conveniently written as

max_j f_Z(z|j) for j = 0, 1.    (7.21)

Since z 1 and z 2 are statistically independent (7.21) can be written as

max_j f_{Z_1}(z_1|j) f_{Z_2}(z_2|j)
⇒ max_j (1/(2πσ^2)) exp( −[(z_1 − m_{1,j})^2 + (z_2 − m_{2,j})^2]/(2σ^2) ).    (7.22)

Taking the natural logarithm and ignoring constants (7.22) reduces to

min_j (z_1 − m_{1,j})^2 + (z_2 − m_{2,j})^2 for j = 0, 1.    (7.23)

To compute the average probability of error we first compute the probability


of detecting 0 when 1 is transmitted (P(0|1)). This happens when

(z 1 − m 1, 0 )2 + (z 2 − m 2, 0 )2 < (z 1 − m 1, 1 )2 + (z 2 − m 2, 1 )2 . (7.24)

Let

e_1 = m_{1,1} − m_{1,0}
e_2 = m_{2,1} − m_{2,0}
e_1^2 + e_2^2 = d^2.    (7.25)

Thus the detector makes an error when

2e1 w1 + 2e2 w2 < −d 2 . (7.26)

Let

Z = 2e1 w1 + 2e2 w2 . (7.27)

Then

E[Z] = 2 e_1 E[w_1] + 2 e_2 E[w_2] = 0
E[Z^2] = 4 e_1^2 σ^2 + 4 e_2^2 σ^2
       = 4 d^2 σ^2
       = σ_Z^2 (say).    (7.28)

Thus the probability of detecting 0 when 1 is transmitted is

P(Z < −d^2) = (1/(σ_Z √(2π))) ∫_{−∞}^{−d^2} exp(−z^2/(2σ_Z^2)) dz
            = (1/2) erfc( d^2/(σ_Z √2) )
            = (1/2) erfc( √(d^2/(8σ^2)) ).    (7.29)

3. Consider the passband PAM system in Fig. 7.4. The bits 1 and 0 from the quantizer
are equally likely. The signal b(t) is given by


b(t) = Σ_{k=−∞}^{∞} a_k δ(t − kT),    (7.30)

where ak denotes a symbol at time kT taken from a binary constellation shown in


Fig. 7.4 and δ(t) is the Dirac-Delta function. The mapping of the bits to symbols
is also shown. The transmit filter is a pulse corresponding to the root-raised
cosine (RRC) spectrum. The roll-off factor of the raised cosine (RC) spectrum

Fig. 7.4 Block diagram of a passband PAM system. The binary constellation maps bit 0 to A0 = −d and bit 1 to A1 = d. Transmitter: the bit manipulator output b(t) drives the transmit filter p(t), and the result y(t) is multiplied by cos(2πfct + θ) to give s(t). Receiver: s1(t) is multiplied by cos(2πfct + θ), filtered by the matched filter p(−t), and sampled at t = kT to give r(kT), which feeds the optimum detector producing the output bit stream



(from which the root-raised cosine spectrum is obtained) is α = 0.5. Assume that
the bit-rate 1/T = 1 kbps, the energy of p(t) = 2, θ is a uniformly distributed
random variable in [0, 2π] and θ and w(t) are statistically independent. The
received signal is given by

s1 (t) = s(t) + w(t), (7.31)

where w(t) is zero-mean AWGN with psd N0 /2.


(a) Compute the two-sided bandwidth (bandwidth on either side of the carrier)
of the psd of the transmitted signal s(t).
(b) Derive the expression for r (kT ).
(c) Derive the ML detection rule. Start from the MAP rule

max_j P(A_j|r_k) for j = 0, 1,    (7.32)

where for brevity rk = r (kT ).


(d) Assume that the transmitted bit at time kT is 1. Compute the probability of
making an erroneous decision, in terms of d and N0 .
• Solution: The two-sided bandwidth is

B = 2 × 500(1 + α)
= 1500 Hz. (7.33)

The signal at the multiplier output is given by

s_1(t) cos(2πf_c t) = (y(t)/2)(1 + cos(4πf_c t)) + w_1(t),    (7.34)
where w1 (t) is

w1 (t) = w(t) cos(2π f c t). (7.35)

The autocorrelation of w1 (t) is

E[w_1(t)w_1(t − τ)] = E[w(t)w(t − τ)] E[cos(2πf_c t + θ) cos(2πf_c(t − τ) + θ)]
                    = (N_0/2) δ(τ) (1/2) cos(2πf_c τ)
                    = (N_0/4) δ(τ).    (7.36)
Since the MF eliminates the signal at 2 f c , the effective signal at the output of the
multiplier is
y_1(t) = (1/2) Σ_{i=−∞}^{∞} a_i p(t − iT) + w_1(t).    (7.37)

The MF output is

r(t) = y_1(t) ⋆ p(−t)
     = (1/2) Σ_{i=−∞}^{∞} a_i g(t − iT) + z(t),    (7.38)

where ⋆ denotes convolution and

g(t) = p(t) ⋆ p(−t)
z(t) = w_1(t) ⋆ p(−t).    (7.39)

Note that g(t) is a pulse corresponding to the raised cosine spectrum, hence it
satisfies the Nyquist criterion for zero intersymbol interference (ISI). The MF
output sampled at time kT is

r(kT) = (1/2) a_k g(0) + z(kT)
      = a_k + z(kT).    (7.40)

The above equation can be written more concisely as

r k = ak + z k . (7.41)

Note that z k is a zero-mean Gaussian random variable with variance

E[z_k^2] = (N_0/4) g(0) = N_0/2 = σ^2 (say).    (7.42)
At time kT given that ak = A j was transmitted, the mean value of rk is

E[rk |A j ] = A j . (7.43)

Note that


E[r_k] = Σ_{j=0}^{1} E[r_k|A_j] P(A_j) = 0.    (7.44)

In other words, the unconditional mean is zero. At time kT , the MAP detector
computes the probabilities P(A j |rk ) for j = 0, 1 and decides in favor of that
symbol for which the probability is maximum. Using Bayes’ rule, the MAP
detector can be re-written as

max_j f_{R_k|A_j}(r_k|A_j) P(A_j) / f_{R_k}(r_k) for j = 0, 1,    (7.45)

where f Rk |A j (rk |A j ) is the conditional pdf of rk given that A j was transmitted.


Since the symbols are equally likely P(A j ) = 1/2, and is independent of j.
Moreover f Rk (rk ) is also independent of j. Thus the MAP detector reduces to an
ML detector given by

max_j f_{R_k|A_j}(r_k|A_j) for j = 0, 1.    (7.46)

Substituting for the conditional pdf we get

max_j (1/(σ√(2π))) e^{−(r_k − A_j)^2/(2σ^2)} for j = 0, 1.    (7.47)

Ignoring constants and noting that maximizing e^x is equivalent to maximizing x,
the ML detection rule simplifies to

max_j −(r_k − A_j)^2 for j = 0, 1.    (7.48)

However for x > 0, maximizing −x is equivalent to minimizing x. Hence the
ML detection rule can be rewritten as

min_j (r_k − A_j)^2 for j = 0, 1.    (7.49)

Given that 1 was transmitted at time kT , the ML detector makes an error when

(r_k − A_0)^2 < (r_k − A_1)^2
⇒ (z_k + d_1)^2 < z_k^2
⇒ z_k^2 + d_1^2 + 2 z_k d_1 < z_k^2
⇒ 2 z_k d_1 < −d_1^2
⇒ z_k < −d_1/2,    (7.50)

where

d1 = (A1 − A0 ) = 2d. (7.51)

Thus the probability of detecting 0 when 1 is transmitted is



Fig. 7.5 Noise samples at the output of a root-raised cosine filter: w(t) is applied to the filter p(−t), whose output z(t) is sampled at t = kT to give z(kT)

P(z_k < −d_1/2) = (1/(σ√(2π))) ∫_{z_k=−∞}^{−d_1/2} e^{−z_k^2/(2σ^2)} dz_k
                = (1/2) erfc( √(d_1^2/(8σ^2)) )
                = (1/2) erfc( √(d^2/N_0) ).    (7.52)
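
The error probability in (7.52) is easy to verify by Monte-Carlo simulation of the discrete model r_k = a_k + z_k (this sketch is ours, not part of the book; the values of d and N_0 are arbitrary choices).

```python
# Compare the simulated BER of the minimum-distance detector with 0.5*erfc(sqrt(d^2/N0)).
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
d, N0, num_bits = 1.0, 0.5, 2_000_000

bits = rng.integers(0, 2, num_bits)
a = np.where(bits == 1, d, -d)                     # A1 = d, A0 = -d
z = rng.normal(0.0, sqrt(N0 / 2.0), num_bits)      # sigma^2 = N0/2, from (7.42)
r = a + z

decisions = (r > 0).astype(int)                    # equivalent to min_j (r - A_j)^2
ber_sim = np.mean(decisions != bits)
ber_theory = 0.5 * erfc(sqrt(d ** 2 / N0))

print(ber_sim, ber_theory)                         # both about 2.3e-2 for d^2/N0 = 2
```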

5. Consider the block diagram in Fig. 7.5. Here w(t) is a sample function of a zero-
mean AWGN process with psd N0 /2. Let p(t) denote the pulse corresponding to
the root-raised cosine spectrum. Let P( f ) denote the Fourier transform of p(t)
and let the energy of p(t) be 2. Assume that w(t) is WSS.
Compute E[z(kT )z(kT − mT )].
• Solution: The psd of z(t) is

S_Z(f) = (N_0/2) |P(f)|^2,    (7.53)

where |P( f )|2 has a raised-cosine spectrum. Note that since w(t) is WSS,
z(t) is also WSS. This implies that the autocorrelation of z(t) is

E[z(t)z(t − τ)] = R_Z(τ) = (N_0/2) g(τ),    (7.54)
where g(τ ) is the inverse Fourier transform of the raised cosine spectrum.
Therefore
E[z(kT)z(kT − mT)] = R_Z(mT) = (N_0/2) g(mT)
                   = { N_0 for m = 0
                     { 0   otherwise.    (7.55)
Index

A Compressor, 338
Ac coupled, 117 Condition
Aliasing, 32 necessary, 68
Amplifier sufficient, 69
ac, 177 Conjugate
dc, 177 complex, 36
chopper stabilized, 177 symmetry, 319
Amplitude modulation (AM)
Constellation
amplitude sensitivity, 154
BPSK, 83
modulation factor, 162
Convolution, 12
overmodulation, 182
power efficiency, 162 Correlation
residual, 287 auto, 5
Attenuation, 164 coefficient, 77
cross, 15
Costas loop, 170
B phase ambiguity, 238
Bandpass signal phase discriminator, 238
canonical representation, 12 Cross covariance, 128
Bayes’ rule, 361 Cross-spectral density, 96
Bessel function, 258 Cumulative distribution function, 137

C
Capacitor, 182 D
Carrier Dc coupled, 117
frequency, 56
Dc voltmeter, 117
recovery, 208
Delta function
spacing, 193
Dirac, 1
Carrier-to-noise ratio, 299
Centroid, 344 Kronecker, 84
Characteristic function, 77 Delta modulator, 337
Chernoff bound, 95 Differentiator, 147
Communication privacy, 228 Dirichlet’s conditions, 69
Companding Discriminant, 77
A-law, 351 Distortion, 178
µ-law, 331 DPCM, 333

E Carson’s rule, 275


Echo, 243 delay line, 255
Ensemble average, 83 frequency deviation, 243
Envelope frequency discriminator, 250
complex, 12 frequency sensitivity, 262
detector, 156 microwave, 255
using diode, 218 modulation index, 271
pre, 56 narrowband, 247
Ergodic PM
in autocorrelation, 81 phase sensitivity, 267
in mean, 81 radar, 243
Expander, 333 SSB, 260
stereophonic, 275
wideband, 249
F indirect method, 280
Figure-of-merit, 310 Function
Filter even, 21
bandpass, 7 odd, 21
center frequency, 164
Q-factor, 164
G
cascade connection, 107
Gaussian pulse, 9
de-emphasis, 310
highpass, 132
IF, 295
H
linear time invariant, 127
Half-wave symmetry, 21
lowpass, 28 Hard limiter, 288
postdetection, 311 Harmonic, 33
matched, 357 distortion, 181
narrowband, 209 Heisenberg–Gabor uncertainty, 59
pre-emphasis, 310 Heterodyne spectrum analyzer, 232
stable, 20 Hilbert transform, 10
transmit, 330
Fourier series
coefficient, 7 I
complex, 4 Image, 170
real, 31 Inductor, 281
Fourier transform, 1 Integrator, 81
area under, 203 Intersymbol interference (ISI), 365
differentiation in time, 2 Nyquist criterion, 365
duality, 35
time scaling, 10
Frequency M
beat, 243 Maclaurin series, 29
deviation, 243 MAP detector, 364
discriminator, 250 Markov inequality, 100
divider, 210 Mean squared error, 297
division multiplexing, 169 Mixer, 280
multiplier, 249 ML detector, 361
offset, 175 Modulator
pilot, 208 amplitude, 153
repetition, 243 stereo, 201
translation, 249 DSBSC, 154
Frequency modulation (FM) FM

Armstrong-type, 249 Lloyd-Max, 334


product, 153 non-uniform, 333
SSB, 164 overload factor, 327
frequency discrimination, 164 overload level, 340
phase discrimination, 178 partition cell, 342
Weaver’s method, 226 reconstruction level, 342
switching, 154 representation level, 327
VSB, 220 robust, 350
phase discrimination, 221 uniform, 327
Multiplex, 188 mid-rise, 327
quadrature -carrier, 199 mid-step, 340
Multiplier, 161

R
N Radio, 169
Noise Random process, 80
AWGN, 171 cyclostationary, 87
Gaussian, 118 Random timing phase, 83
generator, 117 Random variable, 77
narrowband, 96 discrete, 83
white, 118 Gaussian, 77
Non-linear device, 153 independent, 95
Rayleigh distributed, 94
transformation of a, 78
O
uncorrelated, 95
Orthogonality, 188
uniformly distributed, 79
Oscillator, 126
Rayleigh’s energy theorem, 20
local, 295
Rectifier
full wave, 7
P half wave, 78
Parseval’s power theorem, 7 Response
Phase, 6 magnitude, 32
ambiguity, 210 phase, 32
error, 297 RLC circuit
Phasor, 179 series, 206
Poisson sum formula, 69 resonant frequency, 206
Power spectral density (psd), 96
Prediction
coefficients, 333 S
gain, 333 Sampling, 327
Probability density function (pdf), 78 Nyquist-rate, 328
conditional, 92 Schwarz’s inequality, 45
joint, 82 Scrambling, 184
Pulse amplitude modulation (PAM), 329 Sideband, 162
Pulse shaping lower, 164
Nyquist, 329 upper, 164
Signal
audio, 184
Q energy, 15
Quadrant, 189 narrowband, 56
Quadratic equation, 77 NRZ unipolar, 329
Quantizer periodic, 21
decision threshold, 333 power, 9

speech, 327 T
digitized, 327 Time division multiplexed (TDM), 328
Transition bandwidth, 165
Signal-to-noise ratio (SNR), 172
True rms meter, 117
Slope overload, 337
SNR Q , 327
Spectrum, 10 U
RC, 363 Unit step function, 71
roll-off factor, 363
RRC, 363 V
square-law device, 22 Varactor diode, 281
Square rooter, 159 bias voltage, 282
Superheterodyne, 317 Voltage controlled oscillator (VCO), 238
Surjective mapping, 98
System W
nonlinear, 15 Wide sense stationary (WSS), 81
