CONTINUOUS AND DISCRETE
SIGNALS AND SYSTEMS
SECOND EDITION

SAMIR S. SOLIMAN
QUALCOMM Incorporated
San Diego, California

MANDYAM D. SRINATH
Southern Methodist University
Dallas, Texas

Prentice-Hall of India Private Limited
New Delhi-110 001

This Indian Reprint
(Original U.S. Edition)

CONTINUOUS AND DISCRETE SIGNALS AND SYSTEMS, 2nd Ed.

by Samir S. Soliman and Mandyam D. Srinath

© 1998 by Prentice-Hall, Inc. (now known as Pearson Education, Inc.), One Lake Street, Upper
Saddle River, New Jersey 07458, U.S.A. All rights reserved. No part of this book may be
reproduced in any form, by mimeograph or any other means, without permission in writing from
the publisher.

The author and publisher of this book have used their best efforts in preparing this book.
These efforts include the development, research, and testing of the theories and programs to
determine their effectiveness. The author and publisher make no warranty of any kind, expressed
or implied, with regard to these programs or the documentation contained in this book. The
author and publisher shall not be liable in any event for incidental or consequential damages
in connection with, or arising out of, the furnishing, performance, or use of these programs.

ISBN 81-20…

Published by Asoke K. Ghosh, Prentice-Hall of India Private Limited, M-97, Connaught Circus,
New Delhi-110001 and printed by V.K. Batra at Pearl Offset Press Private Limited,
New Delhi-110015.
Contents

PREFACE xiii

1 REPRESENTING SIGNALS 1

1.1 Introduction 1
1.2 Continuous-Time vs. Discrete-Time Signals 2
1.3 Periodic vs. Aperiodic Signals 4
1.4 Energy and Power Signals 7
1.5 Transformations of the Independent Variable 10
    1.5.1 The Shifting Operation, 10
    1.5.2 The Reflection Operation, 13
    1.5.3 The Time-Scaling Operation, 17
1.6 Elementary Signals 19
    1.6.1 The Unit Step Function, 19
    1.6.2 The Ramp Function, 21
    1.6.3 The Sampling Function, 22
    1.6.4 The Unit Impulse Function, 22
    1.6.5 Derivatives of the Impulse Function, 30
1.7 Other Types of Signals 32
1.8 Summary 33
1.9 Checklist of Important Terms 35
1.10 Problems 35

2 CONTINUOUS-TIME SYSTEMS 41

2.1 Introduction 41
2.2 Classification of Continuous-Time Systems 42
    2.2.1 Linear and Nonlinear Systems, 42
    2.2.2 Time-Varying and Time-Invariant Systems, 46
    2.2.3 Systems with and without Memory, 47
    2.2.4 Causal Systems, 48
    2.2.5 Invertibility and Inverse Systems, 50
    2.2.6 Stable Systems, 51
2.3 Linear Time-Invariant Systems 52
    2.3.1 The Convolution Integral, 52
    2.3.2 Graphical Interpretation of Convolution, 58
2.4 Properties of Linear, Time-Invariant Systems 64
    2.4.1 Memoryless LTI Systems, 64
    2.4.2 Causal LTI Systems, 64
    2.4.3 Invertible LTI Systems, 65
    2.4.4 Stable LTI Systems, 65
2.5 Systems Described by Differential Equations 67
    2.5.1 Linear, Constant-Coefficient Differential Equations, 67
    2.5.2 Basic System Components, 68
    2.5.3 Simulation Diagrams for Continuous-Time Systems, 70
    2.5.4 Finding the Impulse Response, 73
2.6 State-Variable Representation 76
    2.6.1 State Equations, 77
    2.6.2 Time-Domain Solution of the State Equations, 78
    2.6.3 State Equations in First Canonical Form, 84
    2.6.4 State Equations in Second Canonical Form, 87
    2.6.5 Stability Considerations, 91
2.7 Summary 94
2.8 Checklist of Important Terms 96
2.9 Problems 96

3 FOURIER SERIES 106

3.1 Introduction 106
3.2 Orthogonal Representations of Signals 107
3.3 The Exponential Fourier Series 112
3.4 Dirichlet Conditions 122
3.5 Properties of Fourier Series 125
    3.5.1 Least Squares Approximation Property, 125
    3.5.2 Effects of Symmetry, 127
    3.5.3 Linearity, 129
    3.5.4 Product of Two Signals, 130
    3.5.5 Convolution of Two Signals, 131
    3.5.6 Parseval's Theorem, 132
    3.5.7 Shift in Time, 133
    3.5.8 Integration of Periodic Signals, 134
3.6 Systems with Periodic Inputs 135
3.7 The Gibbs Phenomenon 142
3.8 Summary 145
3.9 Checklist of Important Terms 148
3.10 Problems 148
3.11 Computer Problems 160

4 THE FOURIER TRANSFORM 162

4.1 Introduction 162
4.2 The Continuous-Time Fourier Transform 163
    4.2.1 Development of the Fourier Transform, 163
    4.2.2 Existence of the Fourier Transform, 165
    4.2.3 Examples of the Continuous-Time Fourier Transform, 166
4.3 Properties of the Fourier Transform 171
    4.3.1 Linearity, 171
    4.3.2 Symmetry, 173
    4.3.3 Time Shifting, 175
    4.3.4 Time Scaling, 175
    4.3.5 Differentiation, 177
    4.3.6 Energy of Aperiodic Signals, 179
    4.3.7 Convolution, 181
    4.3.8 Duality, 184
    4.3.9 Modulation, 185
4.4 Applications of the Fourier Transform 190
    4.4.1 Amplitude Modulation, 190
    4.4.2 Multiplexing, 192
    4.4.3 The Sampling Theorem, 194
    4.4.4 Signal Filtering, 200
4.5 Duration-Bandwidth Relationships 204
    4.5.1 Definitions of Duration and Bandwidth, 204
    4.5.2 The Uncertainty Principle, 208
4.6 Summary 211
4.7 Checklist of Important Terms 212
4.8 Problems 212

5 THE LAPLACE TRANSFORM 224

5.1 Introduction 224
5.2 The Bilateral Laplace Transform 225
5.3 The Unilateral Laplace Transform 228
5.4 Bilateral Transforms Using Unilateral Transforms 229
5.5 Properties of the Unilateral Laplace Transform 231
    5.5.1 Linearity, 232
    5.5.2 Time Shifting, 232
    5.5.3 Shifting in the s Domain, 233
    5.5.4 Time Scaling, 234
    5.5.5 Differentiation in the Time Domain, 234
    5.5.6 Integration in the Time Domain, 237
    5.5.7 Differentiation in the s Domain, 238
    5.5.8 Modulation, 239
    5.5.9 Convolution, 240
    5.5.10 Initial-Value Theorem, 243
    5.5.11 Final-Value Theorem, 244
5.6 The Inverse Laplace Transform 246
5.7 Simulation Diagrams for Continuous-Time Systems 250
5.8 Applications of the Laplace Transform 257
    5.8.1 Solution of Differential Equations, 257
    5.8.2 Application to RLC Circuit Analysis, 258
    5.8.3 Application to Control, 260
5.9 State Equations and the Laplace Transform 263
5.10 Stability in the s Domain 266
5.11 Summary 268
5.12 Checklist of Important Terms 270
5.13 Problems 270

6 DISCRETE-TIME SYSTEMS 278

6.1 Introduction 278
    6.1.1 Classification of Discrete-Time Signals, 279
    6.1.2 Transformations of the Independent Variable, 281
6.2 Elementary Discrete-Time Signals 282
    6.2.1 Discrete Impulse and Step Functions, 283
    6.2.2 Exponential Sequences, 284
6.3 Discrete-Time Systems 287
6.4 Periodic Convolution 294
6.5 Difference-Equation Representation of Discrete-Time Systems 298
    6.5.1 Homogeneous Solution of the Difference Equation, 299
    6.5.2 The Particular Solution, 302
    6.5.3 Determination of the Impulse Response, 305
6.6 Simulation Diagrams for Discrete-Time Systems 306
6.7 State-Variable Representation of Discrete-Time Systems 310
    6.7.1 Solution of State-Space Equations, 313
    6.7.2 Impulse Response of Systems Described by State Equations, 316
6.8 Stability of Discrete-Time Systems 317
6.9 Summary 318
6.10 Checklist of Important Terms 320
6.11 Problems 320

7 FOURIER ANALYSIS OF DISCRETE-TIME SYSTEMS 329

7.1 Introduction 329
7.2 Fourier-Series Representation of Discrete-Time Periodic Signals 331
7.3 The Discrete-Time Fourier Transform 340
7.4 Properties of the Discrete-Time Fourier Transform 345
    7.4.1 Periodicity, 345
    7.4.2 Linearity, 345
    7.4.3 Time and Frequency Shifting, 345
    7.4.4 Differentiation in Frequency, 346
    7.4.5 Convolution, 346
    7.4.6 Modulation, 350
    7.4.7 Fourier Transform of Discrete-Time Periodic Sequences, 350
7.5 Fourier Transform of Sampled Continuous-Time Signals 351
    7.5.1 Reconstruction of Sampled Signals, 356
    7.5.2 Sampling-Rate Conversion, 359
    7.5.3 A/D and D/A Conversion, 364
7.6 Summary 367
7.7 Checklist of Important Terms 369
7.8 Problems 369

8 THE Z-TRANSFORM 375

8.1 Introduction 375
8.2 The Z-Transform 376
8.3 Convergence of the Z-Transform 378
8.4 Properties of the Z-Transform 383
    8.4.1 Linearity, 385
    8.4.2 Time Shifting, 386
    8.4.3 Frequency Scaling, 387
    8.4.4 Differentiation with Respect to z, 388
    8.4.5 Initial Value, 389
    8.4.6 Final Value, 389
    8.4.7 Convolution, 390
8.5 The Inverse Z-Transform 392
    8.5.1 Inversion by a Power-Series Expansion, 394
    8.5.2 Inversion by Partial-Fraction Expansion, 395
8.6 Z-Transfer Functions of Causal Discrete-Time Systems 399
8.7 Z-Transform Analysis of State-Variable Systems 402
8.8 Relation Between the Z-Transform and the Laplace Transform 410
8.9 Summary 411
8.10 Checklist of Important Terms 414
8.11 Problems 414

9 THE DISCRETE FOURIER TRANSFORM 419

9.1 Introduction 419
9.2 The Discrete Fourier Transform and Its Inverse 421
9.3 Properties of the DFT 422
    9.3.1 Linearity, 422
    9.3.2 Time Shifting, 422
    9.3.3 Alternative Inversion Formula, 423
    9.3.4 Time Convolution, 423
    9.3.5 Relation to the Discrete-Time Fourier and Z-Transforms, 424
    9.3.6 Matrix Interpretation of the DFT, 425
9.4 Linear Convolution Using the DFT 426
9.5 Fast Fourier Transforms 428
    9.5.1 The Decimation-in-Time Algorithm, 429
    9.5.2 The Decimation-in-Frequency Algorithm, 433
9.6 Spectral Estimation of Analog Signals Using the DFT 436
9.7 Summary 445
9.8 Checklist of Important Terms 448
9.9 Problems 448

10 DESIGN OF ANALOG AND DIGITAL FILTERS 452

10.1 Introduction 452
10.2 Frequency Transformations 455
10.3 Design of Analog Filters 457
    10.3.1 The Butterworth Filter, 458
    10.3.2 The Chebyshev Filter, 462
10.4 Digital Filters 468
    10.4.1 Design of IIR Digital Filters Using Impulse Invariance, 469
    10.4.2 IIR Design Using the Bilinear Transformation, 473
    10.4.3 FIR Filter Design, 475
    10.4.4 Computer-Aided Design of Digital Filters, 481
10.5 Summary 482
10.6 Checklist of Important Terms 483
10.7 Problems 483

APPENDIX A COMPLEX NUMBERS 485

A.1 Definition 485
A.2 Arithmetic Operations 487
    A.2.1 Addition and Subtraction, 487
    A.2.2 Multiplication, 487
    A.2.3 Division, 488
A.3 Powers and Roots of Complex Numbers 489
A.4 Inequalities 490

APPENDIX B MATHEMATICAL RELATIONS 491

B.1 Trigonometric Identities 491
B.2 Exponential and Logarithmic Functions 492
B.3 Special Functions 493
    B.3.1 Gamma Functions, 493
    B.3.2 Incomplete Gamma Functions, 494
    B.3.3 Beta Functions, 494
B.4 Power-Series Expansion 494
B.5 Sums of Powers of Natural Numbers 495
    B.5.1 Sums of Binomial Coefficients, 496
    B.5.2 Series of Exponentials, 496
B.6 Definite Integrals 496
B.7 Indefinite Integrals 498

APPENDIX C ELEMENTARY MATRIX THEORY 502

C.1 Basic Definition 502
C.2 Basic Operations 503
    C.2.1 Matrix Addition, 503
    C.2.2 Differentiation and Integration, 503
    C.2.3 Matrix Multiplication, 503
C.3 Special Matrices 504
C.4 The Inverse of a Matrix 506
C.5 Eigenvalues and Eigenvectors 507
C.6 Functions of a Matrix 508

APPENDIX D PARTIAL FRACTIONS 512

D.1 Case I: Nonrepeated Linear Factors 513
D.2 Case II: Repeated Linear Factors 514
D.3 Case III: Nonrepeated Irreducible Second-Degree Factors 515
D.4 Case IV: Repeated Irreducible Second-Degree Factors 517

BIBLIOGRAPHY 519

INDEX 521
Preface

The second edition of Continuous and Discrete Signals and Systems is a modified version of the first edition, based on our experience in using it as a textbook in the introductory course on signals and systems at Southern Methodist University, as well as the comments of numerous colleagues who have used the book at other universities. The result, we hope, is a book that provides an introductory, but comprehensive, treatment of the subject of continuous and discrete-time signals and systems. One of the changes that we have made to enhance the quality of the book is to move the section on orthogonal representations of signals from Chapter 1 to the beginning of Chapter 3 on Fourier series, which permits us to treat Fourier series as a special case of more general representations. Other features are the addition of sections on practical reconstruction filters, sampling-rate conversion, and A/D and D/A converters to Chapter 7. We have also added several problems in various chapters emphasizing computer usage. However, we have not suggested or required the use of any specific mathematical software packages, as we feel that this choice should be left to the preference of the instructor. Overall, about a third of the problems and about a fifth of the examples in the book have been changed.
As noted in the first edition, the aim of building complex systems that perform sophisticated tasks imposes on engineering students a need to enhance their knowledge of signals and systems, so that they are able to use effectively the rich variety of analysis and synthesis techniques that are available. Thus, signals and systems is a core course in the electrical engineering curriculum in most schools. In writing this book, we have tried to present the most widely used techniques of signal and system analysis in an appropriate fashion for instruction at the junior or senior level in electrical engineering. The concepts and techniques that form the core of the book are of fundamental importance and should prove useful also to engineers wishing to update or extend their understanding of signals and systems through self-study.
The book is divided into two major parts. In the first part, a comprehensive treatment of continuous-time signals and systems is presented. In the second part, the results are extended to discrete-time signals and systems. In our experience, we have found that covering both continuous-time and discrete-time systems together frequently confuses students: they often are not clear as to whether a particular concept or technique applies to continuous-time or discrete-time systems, or both. The result is that they often use solution techniques that simply do not apply to particular problems. Since most students are familiar with continuous-time signals and systems from the basic courses leading up to this course, they are able to follow the development of the theory and analysis of continuous-time systems without difficulty. Once they have become familiar with this material, which is covered in the first five chapters, students should be ready to handle discrete-time signals and systems.
The book is organized such that all the chapters are distinct but closely related, with smooth transitions between chapters, thereby providing considerable flexibility in course design. By appropriate choice of material, the book can be used as a text in several courses, such as transform theory (Chapters 1, 3, 4, 5, 7, and 8), continuous-time signals and systems (Chapters 1, 2, 3, 4, and 5), discrete-time signals and systems (Chapters 6, 7, 8, and 9), and signals and systems: continuous and discrete (Chapters 1, 2, 3, 4, 6, 7, and 8). We have been using the book at Southern Methodist University for a one-semester course covering both continuous-time and discrete-time systems, and it has proved successful.
Normally, a signals and systems course is taught in the third year of a four-year undergraduate curriculum. Although the book is designed to be self-contained, a knowledge of calculus through integration of trigonometric functions, as well as some knowledge of differential equations, is presumed. A prior exposure to matrix algebra as well as a course in circuit analysis is preferable but not necessary. These prerequisite skills should be mastered by all electrical engineering students by their junior year. No prior experience with system analysis is required. While we use mathematics extensively, we have done so, not rigorously, but in an engineering context. We use examples extensively to illustrate the theoretical material in an intuitive manner.
As with all subjects involving problem solving, we feel that it is imperative that a student sees many solved problems related to the material covered. We have included a large number of examples that are worked out in detail to illustrate concepts and to show the student the application of the theory developed in the text. In order to make the student aware of the wide range of applications of the principles that are covered, applications with practical significance are mentioned. These applications are selected to illustrate key concepts, stimulate interest, and bring out connections with other branches of electrical engineering.
It is well recognized that the student does not fully understand a subject of this nature unless he or she is given the opportunity to work out problems in using and applying the basic tools that are developed in each chapter. This not only reinforces the understanding of the subject matter, but, in some cases, allows for the extension of various concepts discussed in the text. In certain cases, even new material is introduced via the problem sets. Consequently, over 260 end-of-chapter problems have been included. These problems are of various types, some being straightforward applications of the basic ideas presented in the chapters, and are included to ensure that the student understands the material fully. Some are moderately difficult, and other problems require that the student apply the theory he or she learned in the chapter to problems of practical importance.
The relative amount of "Design" work in various courses is always a concern for the engineering faculty. The inclusion in this text of analog- and digital-filter design as well as other design-related material is in direct response to that concern.
At the end of each chapter, we have included an item-by-item summary of all the important concepts and formulas covered in that chapter, as well as a checklist of the important terms discussed. This list serves as a reminder to the student of material that deserves special attention.
Throughout the book, the emphasis is on linear time-invariant systems. The focus in Chapter 1 is on signals. This material, which is basic to the remainder of the book, considers the mathematical representation of signals. In this chapter, we cover a variety of subjects such as periodic signals, energy and power signals, transformations of the independent variable, and elementary signals.
Chapter 2 is devoted to the time-domain characterization of continuous-time (CT) linear time-invariant (LTIV) systems. The chapter starts with the classification of continuous-time systems and then introduces the impulse-response characterization of LTIV systems and the convolution integral. This is followed by a discussion of systems characterized by linear constant-coefficient differential equations. Simulation diagrams for such systems are presented and used as a stepping stone to introduce the state-variable concept. The chapter concludes with a discussion of stability.
To this point, the focus is on the time-domain description of signals and systems. Starting with Chapter 3, we consider frequency-domain descriptions. We begin the chapter with a consideration of the orthogonal representation of arbitrary signals. The Fourier series are then introduced as a special case of the orthogonal representation for periodic signals. Properties of the Fourier series are presented. The concept of line spectra for describing the frequency content of such signals is given. The response of linear systems to periodic inputs is illustrated. The chapter concludes with a discussion of the Gibbs phenomenon.
Chapter 4 begins with the development of the Fourier transform. Conditions under which the Fourier transform exists are presented and its properties discussed. Applications of the Fourier transform in areas such as amplitude modulation, multiplexing, sampling, and signal filtering are considered. The use of the transfer function in determining the response of LTIV systems is discussed. The Nyquist sampling theorem is derived from the impulse-modulation model for sampling. The several definitions of bandwidth are introduced and duration-bandwidth relationships discussed.
Chapter 5 deals with the Laplace transform. Both unilateral and bilateral Laplace transforms are defined. Properties of the Laplace transform are derived, and examples are given to demonstrate how these properties are used to evaluate new Laplace transform pairs or to find the inverse Laplace transform. The concept of the transfer function is introduced, and several applications of the Laplace transform, such as the solution of differential equations, circuit analysis, and control systems, are presented. The state-variable representation of systems in the frequency domain and the solution of the state equations using Laplace transforms are discussed.
The treatment of continuous-time signals and systems ends with Chapter 5, and a course emphasizing only CT material can be ended at this point. By the end of this chapter, the reader should have acquired a good understanding of continuous-time signals and systems and should be ready for the second half of the book, in which discrete-time signals and systems analysis are covered.
We start our consideration of discrete-time systems in Chapter 6 with a discussion of elementary discrete-time signals. The impulse-response characterization of discrete-time systems is presented, and the convolution sum for determining the response to arbitrary inputs is derived. The difference-equation representation of discrete-time systems and their solution is given. As in CT systems, simulation diagrams are discussed as a means of obtaining the state-variable representation of discrete-time systems.
Chapter 7 considers the Fourier analysis of discrete-time signals. The Fourier series for periodic sequences and the Fourier transform for arbitrary signals are derived. The similarities and differences between these and their continuous-time counterparts are brought out, and their properties and applications discussed. The relation between the continuous-time and discrete-time Fourier transforms of sampled analog signals is derived and used to obtain the impulse-modulation model for sampling that is considered in Chapter 4. Reconstruction of sampled analog signals using practical reconstruction devices such as the zero-order hold is considered. Sampling-rate conversion by decimation and interpolation of sampled signals is discussed. The chapter concludes with a brief description of A/D and D/A conversion.
Chapter 8 discusses the Z-transform of discrete-time signals. The development follows closely that of Chapter 5 for the Laplace transform. Properties of the Z-transform are derived, and their application in the analysis of discrete-time systems developed. The solution of difference equations and the analysis of state-variable systems using the Z-transform are also discussed. Finally, the relation between the Laplace and the Z-transforms of sampled signals is derived, and the mapping of the s-plane into the z-plane is discussed.
Chapter 9 introduces the discrete Fourier transform (DFT) for analyzing finite-length sequences. The properties of the DFT are derived, and the differences with the other transforms discussed in the book are noted. The interpretation of the DFT as a matrix operation on a data vector is used to briefly note its relation to other orthogonal transforms. The application of the DFT to linear system analysis and to spectral estimation of analog signals is discussed. Two popular fast Fourier transform (FFT) algorithms for the efficient computation of the DFT are presented.
The final chapter, Chapter 10, considers some techniques for the design of analog and digital filters. Techniques for the design of two low-pass analog filters, namely, the Butterworth and the Chebyshev filters, are given. The impulse-invariance and bilinear techniques for designing digital IIR filters are derived. Design of FIR digital filters using window functions is also discussed. An example to illustrate the application of FIR filters to approximate nonconventional filters is presented. The chapter concludes with a very brief overview of computer-aided techniques.

In addition, four appendices are included. They should prove useful as a readily available source for some of the background material in complex variables and matrix algebra necessary for the course. A somewhat extensive list of frequently used formulas is also included.
We wish to acknowledge the many people who have helped us in writing this book, especially the students on whom much of this material was classroom tested, and the reviewers whose comments were very useful. We have tried to incorporate most of their comments in preparing this second edition of the book. We wish to thank Dyan Muratalla, who typed a substantial part of the manuscript. Finally, we would like to thank our wives and families for their patience during the completion of this book.

S. S. Soliman
M. D. Srinath
Chapter 1

Representing Signals

1.1 INTRODUCTION

Signals are detectable physical quantities or variables by means of which messages or information can be transmitted. A wide variety of signals are of practical importance in describing physical phenomena. Examples include the human voice, television pictures, teletype data, and atmospheric temperature. Electrical signals are the most easily measured and the most simply represented type of signals. Therefore, many engineers prefer to transform physical variables to electrical signals. For example, many physical quantities, such as temperature, humidity, speech, wind speed, and light intensity, can be transformed, using transducers, to time-varying current or voltage signals. Electrical engineers deal with signals that have a broad range of shapes, amplitudes, durations, and perhaps other physical properties. For example, a radar-system designer analyzes high-energy microwave pulses, a communication-system engineer who is concerned with signal detection and signal design analyzes information-carrying signals, a power engineer deals with high-voltage signals, and a computer engineer deals with millions of pulses per second.
Mathematically, signals are represented as functions of one or more independent variables. For example, time-varying current or voltage signals are functions of one variable (time), the vibration of a rectangular membrane can be represented as a function of two spatial variables (x and y coordinates), the electrical field intensity can be looked upon as a function of two variables (time and space), and, finally, an image signal can be regarded as a function of two variables (x and y coordinates). In this introductory course on signals and systems, we focus attention on signals involving one independent variable, which we take to be time, although it can be different in some specific applications.
We begin this chapter with an introduction to two classes of signals that we are concerned with throughout the text, namely, continuous-time and discrete-time signals. Then, in Section 1.3, we define periodic signals. Section 1.4 deals with the issue of power and energy signals. A number of transformations of the independent variable are discussed in Section 1.5. In Section 1.6, we introduce several important elementary signals that not only occur frequently in applications, but also serve as a basis for representing other signals. Other types of signals that are of importance to engineers are mentioned in Section 1.7.

1.2 CONTINUOUS-TIME VS. DISCRETE-TIME SIGNALS

One way to classify signals is according to the nature of the independent variable. If the independent variable is continuous, the corresponding signal is called a continuous-time signal and is defined for a continuum of values of the independent variable. A telephone or radio signal as a function of time and an atmospheric pressure as a function of altitude are examples of continuous-time signals. (See Figure 1.2.1.)

Figure 1.2.1 Examples of continuous-time signals.

Corresponding to any instant t₁ and an infinitesimally small positive real number ε, let us denote the instants t₁ − ε and t₁ + ε by t₁⁻ and t₁⁺, respectively. If x(t₁⁻) = x(t₁⁺) = x(t₁), we say that x(t) is continuous at t = t₁. Otherwise it is discontinuous at t₁, and the amplitude of signal x(t) has a jump at that point. Signal x(t) is said to be continuous if it is continuous for all t. A signal that has only a finite or a countably infinite number of discontinuities is said to be piecewise continuous if the jump in amplitude at each discontinuity is finite.
There are many continuous-time signals of interest that are not continuous. An example is the rectangular pulse function rect(t/τ) (see Figure 1.2.2), which is defined as

rect(t/τ) = { 1,  |t| < τ/2        (1.2.1)
            { 0,  |t| > τ/2

Figure 1.2.2 A rectangular pulse signal.

Figure 1.2.3 A pulse train.

This signal is piecewise continuous, since it is continuous everywhere except at t = ±τ/2, and the magnitude of the jump at these points is 1. Another example is the pulse train shown in Figure 1.2.3. This signal is continuous at all t except t = 0, ±1, ±2, ....
At a point of discontinuity t₁, the value of the signal x(t) is usually considered to be undefined. However, in order to be able to consider both continuous and piecewise continuous signals in a similar manner, we will assign the value

x(t₁) = (1/2)[x(t₁⁺) + x(t₁⁻)]        (1.2.2)

to x(t) at the point of discontinuity t = t₁.
If the independent variable takes on only discrete values t = kT, where T is a fixed positive real number and k ranges over the set of integers (i.e., k = 0, ±1, ±2, etc.), the corresponding signal x(kT) is called a discrete-time signal. Discrete-time signals arise naturally in many areas of business, economics, science, and engineering. Examples are the amount of a loan payment in the kth month, the weekly Dow Jones stock index, and the output of an information source that produces one of the digits 1, 2, ..., M every T seconds. We consider discrete-time signals in more detail in Chapter 6.

1.3 PERIODIC VS. APERIODIC SIGNALS

Any continuous-time signal that satisfies the condition

x(t) = x(t + nT),  n = 1, 2, 3, ...        (1.3.1)

where T > 0 is a constant known as the fundamental period, is classified as a periodic signal. A signal x(t) that is not periodic is referred to as an aperiodic signal. Familiar examples of periodic signals are the sinusoidal functions. A real-valued sinusoidal signal can be expressed mathematically by a time-varying function of the form

x(t) = A sin(ω₀t + φ)        (1.3.2)

where

A = amplitude
ω₀ = radian frequency in rad/s
φ = initial phase angle with respect to the time origin in rad

This sinusoidal signal is periodic with fundamental period T = 2π/ω₀ for all values of ω₀.
The sinusoidal time function described in Equation (1.3.2) is usually referred to as a sine wave. Examples of physical phenomena that approximately produce sinusoidal signals are the voltage output of an electrical alternator and the vertical displacement of a mass attached to a spring under the assumption that the spring has negligible mass and no damping. The pulse train shown in Figure 1.2.3 is another example of a periodic signal, with fundamental period T = 2. Notice that if x(t) is periodic with fundamental period T, then x(t) is also periodic with period 2T, 3T, 4T, .... The fundamental frequency, in radians (radian frequency), of the periodic signal x(t) is related to the fundamental period by the relationship

ω₀ = 2π/T        (1.3.3)

Engineers and most mathematicians refer to the sinusoidal signal with radian frequency ωₖ = kω₀ as the kth harmonic. For example, the signal shown in Figure 1.2.3 has a fundamental radian frequency ω₀ = π, a second-harmonic radian frequency ω₂ = 2π, and a third-harmonic radian frequency ω₃ = 3π. Figure 1.3.1 shows the first, second, and third harmonics of signal x(t) in Eq. (1.3.2) for specific values of A, ω₀, and φ. Note that the waveforms corresponding to each harmonic are distinct.

Figure 1.3.1 Harmonically related sinusoids.

In theory, we can associate an infinite number of distinct harmonic signals with a given sinusoidal waveform.
Periodic signals occur frequently in physical problems. In this section, we discuss the mathematical representation of such signals. In Chapter 3, we show how to represent any periodic signal in terms of simple ones, such as sine and cosine.

Example 1.3.1
Harmonically related continuous-time exponentials are sets of complex exponentials with fundamental frequencies that are all multiples of a single positive frequency ω₀. Mathematically,

φₖ(t) = exp[jkω₀t],  k = 0, ±1, ±2, ...        (1.3.4)

We show that for k ≠ 0, φₖ(t) is periodic with fundamental period 2π/|kω₀| or fundamental frequency |kω₀|.
In order for signal φₖ(t) to be periodic with period T > 0, we must have

exp[jkω₀(t + T)] = exp[jkω₀t]

or, equivalently,

T = 2π/|kω₀|        (1.3.5)

Note that since a signal that is periodic with period T is also periodic with period lT for any positive integer l, then all signals φₖ(t) have a common period of 2π/ω₀.

The sum of two periodic signals may or may not be periodic. Consider the two periodic signals x(t) and y(t) with fundamental periods T₁ and T₂, respectively. We investigate under what conditions the sum

z(t) = ax(t) + by(t)

is periodic and what the fundamental period of this signal is if the signal is periodic. Since x(t) is periodic with period T₁, it follows that

x(t) = x(t + kT₁)

Similarly,

y(t) = y(t + lT₂)

where k and l are integers such that

z(t) = ax(t + kT₁) + by(t + lT₂)

In order for z(t) to be periodic with period T, one needs

ax(t + T) + by(t + T) = ax(t + kT₁) + by(t + lT₂)

We therefore must have

T = kT₁ = lT₂

or, equivalently,

T₁/T₂ = l/k

In other words, the sum of two periodic signals is periodic only if the ratio of their respective periods can be expressed as a rational number.

Example 1.3.2
We wish to determine which of the following signals are periodic:

(a) x₁(t) = sin(2π/3)t
(b) x₂(t) = sin(2πt/5) + sin(14πt/15)
(c) x₃(t) = sin 3t
(d) x₄(t) = x₁(t) − 2x₃(t)

For these signals, x₁(t) is periodic with period T₁ = 3. Signal x₂(t) is the sum of two sinusoids with periods T₂₁ = 15/3 and T₂₂ = 15/7. Since 3T₂₁ = 7T₂₂, it follows that x₂(t) is periodic with period T₂ = 15. Signal x₃(t) is periodic with period T₃ = 2π/3. Since we cannot find integers k and l such that kT₁ = lT₃, it follows that x₄(t) is not periodic.
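The rational-ratio test above is easy to check numerically. The following minimal Python sketch uses the standard library's fractions module; the helper name common_period is ours and is purely illustrative.

```python
from fractions import Fraction
import math

def common_period(T1, T2, max_den=100, tol=1e-9):
    """Fundamental period of a*x(t) + b*y(t) when x, y have periods T1, T2.

    The sum is periodic only if T1/T2 equals a ratio of integers l/k;
    the common period is then k*T1 = l*T2.  Returns None otherwise."""
    ratio = Fraction(T1 / T2).limit_denominator(max_den)
    if abs(float(ratio) - T1 / T2) > tol:
        return None                      # ratio is not rational (within tolerance)
    return ratio.denominator * T1        # k*T1, which equals l*T2

# Example 1.3.2(b): sinusoids with periods 15/3 and 15/7 -> common period 15.
print(common_period(15 / 3, 15 / 7))        # -> 15.0
# Example 1.3.2(d): T1 = 3 and T3 = 2*pi/3 have an irrational ratio.
print(common_period(3, 2 * math.pi / 3))    # -> None (not periodic)
```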

Note that if x(t) and y(t) have the same period T, then z(t) = x(t) + y(t) is periodic with period T; i.e., linear operations (addition in this case) do not affect the periodicity of the resulting signal. Nonlinear operations on periodic signals (such as multiplication) produce periodic signals with different fundamental periods. The following example demonstrates this fact.

Example 1.3.3
Let x(t) = cos ω₁t and y(t) = cos ω₂t. Consider the signal z(t) = x(t)y(t). Signal x(t) is periodic with period 2π/ω₁, and signal y(t) is periodic with period 2π/ω₂. The fact that z(t) = x(t)y(t) has two components, one with radian frequency ω₂ − ω₁ and the other with radian frequency ω₂ + ω₁, can be seen by rewriting the product x(t)y(t) as

cos ω₁t cos ω₂t = (1/2)[cos(ω₂ − ω₁)t + cos(ω₂ + ω₁)t]

If ω₁ = ω₂ = ω₀, then z(t) will have a constant term (1/2) and a second-harmonic term (1/2)cos 2ω₀t. In general, nonlinear operations on periodic signals can produce higher-order harmonics.

Since a periodic signal is a signal of infinite duration that should start at t = −∞ and go on to t = ∞, it follows that all practical signals are aperiodic. Nevertheless, the study of the system response to periodic inputs is essential (as we shall see in Chapter 4) in the process of developing the system response to all practical inputs.

1.4 ENERGY AND POWER SIGNALS

Let x(t) be a real-valued signal. If x(t) represents the voltage across a resistance R, it produces a current i(t) = x(t)/R. The instantaneous power of the signal is p(t) = x²(t)/R, and the energy expended during the incremental interval dt is p(t) dt = [x²(t)/R] dt. In general, we do not know whether x(t) is a voltage or a current signal, and in order to normalize power, we assume that R = 1 ohm. Hence, the instantaneous power associated with signal x(t) is x²(t). The signal energy over a time interval of length 2L is defined as

E_{2L} = ∫_{−L}^{L} |x(t)|² dt        (1.4.1)

and the total energy in the signal over the range t ∈ (−∞, ∞) can be defined as

E = lim_{L→∞} ∫_{−L}^{L} |x(t)|² dt        (1.4.2)

The average power can then be defined as

P = lim_{L→∞} [ (1/2L) ∫_{−L}^{L} |x(t)|² dt ]        (1.4.3)

Although we have used electrical signals to develop Equations (1.4.2) and (1.4.3), these equations define the energy and power, respectively, of any arbitrary signal x(t).
When the limit in Equation (1.4.2) exists and yields 0 ≤ E < ∞, signal x(t) is said to be an energy signal. Inspection of Equation (1.4.3) reveals that energy signals have zero power. On the other hand, if the limit in Equation (1.4.3) exists and yields 0 < P < ∞, then x(t) is a power signal. Power signals have infinite energy.
As stated earlier, periodic signals are assumed to exist for all time from −∞ to +∞ and, therefore, have infinite energy. If it happens that these periodic signals have finite average power (which they do in most cases), then they are power signals. In contrast, bounded finite-duration signals are energy signals.
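These definitions are easy to check numerically. The following minimal sketch (assuming NumPy is available; the two test signals are chosen only for illustration) approximates E_{2L} and the average power over increasingly long windows for a decaying exponential and for a sinusoid.

```python
import numpy as np

def energy_and_power(x, L, dt=0.001):
    """Approximate E_2L of Eq. (1.4.1) and the average power E_2L / (2L)."""
    t = np.arange(-L, L, dt)
    e = np.sum(np.abs(x(t)) ** 2) * dt
    return e, e / (2 * L)

decaying = lambda t: 2.0 * np.exp(-t) * (t >= 0)     # A e^{-t} u(t), with A = 2
sinusoid = lambda t: 2.0 * np.sin(3.0 * t + 0.5)     # A sin(w0 t + phi), with A = 2

for L in (10.0, 100.0, 1000.0):
    e1, p1 = energy_and_power(decaying, L)
    e2, p2 = energy_and_power(sinusoid, L)
    print(f"L = {L:6.0f}:  exponential E = {e1:.3f}, P = {p1:.2e}   "
          f"sinusoid E = {e2:.1f}, P = {p2:.3f}")

# The exponential's energy converges to A^2/2 = 2 and its power goes to zero,
# so it is an energy signal; the sinusoid's energy grows without bound while
# its power approaches A^2/2 = 2, so it is a power signal.
```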

Example 1.4.1
In this example, we show that for a periodic signal with period T, the average power is

P = (1/T) ∫₀^T |x(t)|² dt        (1.4.4)

If x(t) is periodic with period T, then the integral in Equation (1.4.3) is the same over any interval of length T. Allowing the limit to be taken in a manner such that 2L is an integral multiple of the period (i.e., 2L = mT), we find that the total energy of x(t) over an interval of length 2L is m times the energy over one period. The average power is then

P = lim_{m→∞} (1/mT) [ m ∫₀^T |x(t)|² dt ] = (1/T) ∫₀^T |x(t)|² dt

Figure 1.4.1 Signals for Example 1.4.2.

Example 1.4.2
Consider the signals in Figure 1.4.1. We wish to determine whether these signals are energy or power signals. The signal in Figure 1.4.1(a) is aperiodic, with total energy

E = ∫₀^∞ A² exp[−2t] dt = A²/2

which is finite. Therefore, this signal is an energy signal with energy A²/2. The average power is

P = lim_{L→∞} (1/2L) ∫₀^L A² exp[−2t] dt = lim_{L→∞} A²(1 − exp[−2L])/(4L) = 0

and is zero, as expected.
The energy in the signal in Figure 1.4.1(b) is found as

E = lim_{L→∞} [ ∫_{−L}^0 A² dt + ∫₀^L A² exp[−2t] dt ] = lim_{L→∞} [ A²L + (A²/2)(1 − exp[−2L]) ]

which is clearly unbounded. Thus, this signal is not an energy signal. Its power can be found as

P = lim_{L→∞} (1/2L) [ ∫_{−L}^0 A² dt + ∫₀^L A² exp[−2t] dt ] = A²/2

so that this is a power signal with average power A²/2.

Example 1.4.3
Consider the sinusoidal signal

x(t) = A sin(ω₀t + φ)

This signal is periodic with period

T = 2π/ω₀

The average power of the signal is

P = (1/T) ∫₀^T A² sin²(ω₀t + φ) dt
  = (1/T) ∫₀^T [ A²/2 − (A²/2) cos(2ω₀t + 2φ) ] dt
  = A²/2

The last step follows because the signal cos(2ω₀t + 2φ) is periodic with period T/2, and the area under a cosine signal over any interval of length lT, where l is a positive integer, is always zero. (You should have no trouble confirming this result if you draw two complete periods of cos(2ω₀t + 2φ).)

Example 1.4.4
Consider the two aperiodic signals shown in Figure 1.4.2. These two signals are examples of energy signals. The rectangular pulse shown in Figure 1.4.2(a) is strictly time limited, since x₁(t) is identically zero outside the duration of the pulse. The other signal is asymptotically time limited in the sense that x₂(t) → 0 as t → ±∞. Such signals may also be described loosely as "pulses." In either case, the average power equals zero. The energy for signal x₁(t) is

E₁ = lim_{L→∞} ∫_{−L}^{L} x₁²(t) dt = ∫_{−τ/2}^{τ/2} A² dt = A²τ

For x₂(t),

E₂ = lim_{L→∞} ∫_{−L}^{L} A² exp[−2a|t|] dt = lim_{L→∞} (A²/a)(1 − exp[−2aL]) = A²/a

Figure 1.4.2 Signals for Example 1.4.4: (a) x₁(t), a rectangular pulse; (b) x₂(t) = A exp[−a|t|].

Since E₁ and E₂ are finite, x₁(t) and x₂(t) are energy signals. Almost all time-limited signals of practical interest are energy signals.

1.5 TRANSFORMATIONS OF THE INDEPENDENT VARIABLE

A number of important operations are often performed on signals. Most of these operations involve transformations of the independent variable. It is important that the reader know how to perform such operations and understand the physical meaning of each one. The three operations we discuss in this section are shifting, reflecting, and time scaling.

1.5.1 The Shifting Operation

Signal x(t − t₀) represents a time-shifted version of x(t); see Figure 1.5.1. The shift in time is t₀. If t₀ > 0, then the signal is delayed by t₀ seconds. Physically, t₀ cannot take on negative values, but from the analytical viewpoint, x(t − t₀), t₀ < 0, represents an advanced replica of x(t). Signals that are related in this fashion arise in applications such as radar, sonar, communication systems, and seismic signal processing.

Example 1.5.1
Consider the signal x(t) shown in Figure 1.5.2. We want to plot x(t − 2) and x(t + 3). It can easily be seen that

x(t) = { t + 1,  −1 ≤ t ≤ 0
       { 1,      0 < t ≤ 2
       { −t + 3, 2 < t ≤ 3
       { 0,      otherwise

To perform the time-shifting operation, replace t by t − 2 in the expression for x(t):

x(t − 2) = { (t − 2) + 1,  −1 ≤ t − 2 ≤ 0
           { 1,            0 < t − 2 ≤ 2
           { −(t − 2) + 3, 2 < t − 2 ≤ 3
           { 0,            otherwise

Figure 1.5.1 The shifting operation.

or, equivalently,

x(t − 2) = { t − 1,  1 ≤ t ≤ 2
           { 1,      2 < t ≤ 4
           { −t + 5, 4 < t ≤ 5
           { 0,      otherwise

The signal x(t − 2) is plotted in Figure 1.5.3(a) and can be described as x(t) shifted two units to the right on the time axis. Similarly, it can be shown that

x(t + 3) = { t + 4,  −4 ≤ t ≤ −3
           { 1,      −3 < t ≤ −1
           { −t,     −1 < t ≤ 0
           { 0,      otherwise

The signal x(t + 3) is plotted in Figure 1.5.3(b) and represents a shifted version of x(t), shifted three units to the left.
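The piecewise bookkeeping above can also be mirrored numerically: to plot x(t − t₀) we simply evaluate the original x at t − t₀. A minimal sketch (assuming NumPy and Matplotlib are available) does this for the signal of Example 1.5.1.

```python
import numpy as np
import matplotlib.pyplot as plt

def x(t):
    """The signal of Example 1.5.1 (zero outside [-1, 3])."""
    t = np.asarray(t, dtype=float)
    return np.piecewise(
        t,
        [(t >= -1) & (t <= 0), (t > 0) & (t <= 2), (t > 2) & (t <= 3)],
        [lambda t: t + 1, 1.0, lambda t: -t + 3, 0.0],
    )

t = np.linspace(-6, 7, 1301)
for shift, label in [(0, "x(t)"), (2, "x(t - 2)"), (-3, "x(t + 3)")]:
    # A delay by t0 is obtained by evaluating x at t - t0.
    plt.plot(t, x(t - shift), label=label)
plt.legend()
plt.xlabel("t")
plt.grid(True)
plt.show()
```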

Example 1.5.2
Vibration sensors are mounted on the front and rear axles of a moving vehicle to pick up vibrations due to the roughness of the road surface. The signal from the front sensor is x(t) and is shown in Figure 1.5.4. The signal from the rear-axle sensor is modeled as x(t − 120). If the sensors are placed 6 ft apart, it is possible to determine the speed of the vehicle by comparing the signal from the rear-axle sensor with the signal from the front-axle sensor.

Figure 1.5.2 Plot of x(t) for Example 1.5.1.

Figure 1.5.3 The shifting of x(t) of Example 1.5.1.

Figure 1.5.4 Front axle sensor signal for Example 1.5.2.

Figure 1.5.5 Rear axle sensor signal for Example 1.5.2.

Figure 1.5.5 illustrates the time-delayed version of x(t), where the delay is 120 ms, or 0.12 s. The delay τ between the sensor signals from the front and rear axles is related to the distance d between the two axles and the speed v of the vehicle by

d = vτ

so that

v = d/τ = 6/0.12 = 50 ft/s
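In practice the delay is estimated rather than known in advance, typically by cross-correlating the two sensor records and locating the peak. A minimal sketch of this idea follows (assuming NumPy; the pulse shape used for the road signal is made up purely for illustration).

```python
import numpy as np

fs = 1000.0                        # sampling rate, samples per second
t = np.arange(0.0, 2.0, 1.0 / fs)

# A made-up road-roughness pulse picked up by the front sensor ...
front = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 40 * t)
# ... and the same pulse delayed by 0.12 s at the rear sensor.
delay_samples = int(0.12 * fs)
rear = np.roll(front, delay_samples)

# The lag that maximizes the cross-correlation estimates the delay tau.
lags = np.arange(-len(t) + 1, len(t))
xcorr = np.correlate(rear, front, mode="full")
tau = lags[np.argmax(xcorr)] / fs

d = 6.0                            # axle separation in feet
print(f"estimated delay = {tau:.3f} s, speed = {d / tau:.1f} ft/s")   # ~50 ft/s
```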

Example 1.5.3
A radar placed to detect aircraft at a range R of 45 nautical miles (nmi) (1 nautical mile = 6076.115 ft) transmits the pulse-train signal shown in Figure 1.5.6. If there is a target, the transmitted signal is reflected back to the radar's receiver. The radar operates by measuring the time delay between each transmitted pulse and the corresponding return, or echo. The velocity of propagation of the radar signal, c, is equal to 161,875 nmi/s. The round-trip delay is

τ = 2R/c = 2(45)/161,875 = 0.556 ms

Therefore, the received pulse train is the same as the transmitted pulse train, but shifted to the right by 0.556 ms; see Figure 1.5.7.

Figure 1.5.6 Radar-transmitted signal for Example 1.5.3.

Figure 1.5.7 Transmitted and received pulse train of Example 1.5.3.

1.5.2 The Reflection Operation

The signal x(−t) is obtained from the signal x(t) by a reflection about t = 0 (i.e., by reversing x(t)), as shown in Figure 1.5.8. Thus, if x(t) represents a signal out of a video recorder, then x(−t) is the signal out of a video player when the rewind switch is pushed on (assuming that the rewind and play speeds are the same).

Figure 1.5.8 The reflection operation.

Example 1.5.4
We want to draw x(−t) and x(3 − t) if x(t) is as shown in Figure 1.5.9(a). The signal x(t) can be written as

x(t) = { t + 1, −1 ≤ t ≤ 0
       { 1,     0 < t ≤ 2
       { 0,     otherwise

The signal x(−t) is obtained by replacing t by −t in the last equation, so that

x(−t) = { −t + 1, −1 ≤ −t ≤ 0
        { 1,      0 < −t ≤ 2
        { 0,      otherwise

or, equivalently,

x(−t) = { −t + 1, 0 ≤ t ≤ 1
        { 1,      −2 ≤ t < 0
        { 0,      otherwise

The signal x(−t) is illustrated in Figure 1.5.9(b) and can be described as x(t) reflected about the vertical axis. Similarly, it can be shown that

x(3 − t) = { 4 − t, 3 ≤ t ≤ 4
           { 1,     1 ≤ t < 3
           { 0,     otherwise

The signal x(3 − t) is shown in Figure 1.5.9(c) and can be viewed as x(t) reflected and then shifted three units to the right. This result is obtained as follows:

x(3 − t) = x(−(t − 3))

Figure 1.5.9 Plots of x(−t) and x(3 − t) for Example 1.5.4.

Note that if we first shift x(t) by three units and then reflect the shifted signal, the result is x(−t − 3), which is shown in Figure 1.5.9(d). Therefore, the operations of shifting and reflecting are not commutative.

In addition to its use in representing physical phenomena such as that in the video-recorder example, reflection is extremely useful in examining the symmetry properties that the signal may possess. A signal x(t) is referred to as an even signal, or is said to be even symmetric, if it is identical to its reflection about the origin, that is, if

x(−t) = x(t)        (1.5.1)

A signal is referred to as odd symmetric if

x(−t) = −x(t)        (1.5.2)

An arbitrary signal x(t) can always be expressed as a sum of even and odd signals as

x(t) = x_e(t) + x_o(t)        (1.5.3)

where x_e(t) is called the even part of x(t) and is given by (see Problem 1.14)

x_e(t) = (1/2)[x(t) + x(−t)]        (1.5.4)

and x_o(t) is called the odd part of x(t) and is expressed as

x_o(t) = (1/2)[x(t) − x(−t)]        (1.5.5)
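Equations (1.5.4) and (1.5.5) translate directly into a numerical decomposition when the signal is sampled on a time grid that is symmetric about t = 0, so that reversing the sample order plays the role of replacing t with −t. A minimal sketch (assuming NumPy) follows, using a one-sided exponential as a check (cf. Example 1.5.6 below).

```python
import numpy as np

def even_odd_parts(x):
    """Even and odd parts of a sampled signal, Eqs. (1.5.4)-(1.5.5).

    Assumes x is sampled on a grid symmetric about t = 0, so that reversing
    the sample order corresponds to replacing t with -t."""
    x_rev = x[::-1]
    return 0.5 * (x + x_rev), 0.5 * (x - x_rev)

A, a = 1.0, 2.0
t = np.linspace(-5, 5, 2000)              # symmetric grid that skips t = 0
x = A * np.exp(-a * t) * (t > 0)          # A e^{-at} for t > 0, zero for t < 0

xe, xo = even_odd_parts(x)
# The even part should equal (A/2) e^{-a|t|} (cf. Example 1.5.6 below).
print(np.max(np.abs(xe - 0.5 * A * np.exp(-a * np.abs(t)))))   # essentially zero
print(np.allclose(x, xe + xo))                                  # True
```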

Example 1.5.5
Consider the signal x(t) defined by

x(t) = { 1, t > 0
       { 0, t < 0

The even and odd parts of this signal are, respectively,

x_e(t) = 1/2,  all t except t = 0

x_o(t) = { −1/2, t < 0
         {  1/2, t > 0

The only problem here is the value of these functions at t = 0. If we define x(0) = 1/2 (the definition here is consistent with our definition of the signal at a point of discontinuity), then

x_e(0) = 1/2  and  x_o(0) = 0

Signals x_e(t) and x_o(t) are plotted in Figure 1.5.10.

Figure 1.5.10 Plots of x_e(t) and x_o(t) for x(t) in Example 1.5.5.

Example 1.5.6
Consider the signal

x(t) = { A exp[−at], t > 0
       { 0,          t < 0

The even part of the signal is

x_e(t) = { (1/2)A exp[−at], t > 0
         { (1/2)A exp[at],  t < 0

       = (1/2)A exp[−a|t|]

The odd part of x(t) is

x_o(t) = { (1/2)A exp[−at],  t > 0
         { −(1/2)A exp[at],  t < 0

Signals x_e(t) and x_o(t) are as shown in Figure 1.5.11.

Figure 1.5.11 Plots of x_e(t) and x_o(t) for Example 1.5.6.

Figure 1.5.12 The time-scaling operation.

1.5.3 The Time-Scaling Operation

Consider the signals x(t), x(3t), and x(t/2), as shown in Figure 1.5.12. As is seen in the figure, x(3t) can be described as x(t) contracted by a factor of 3. Similarly, x(t/2) can be described as x(t) expanded by a factor of 2. Both x(3t) and x(t/2) are said to be time-scaled versions of x(t). In general, if the independent variable is scaled by a parameter α, then x(αt) is a compressed version of x(t) if |α| > 1 (the signal exists in a smaller time interval) and is an expanded version of x(t) if |α| < 1 (the signal exists in a larger time interval). If we think of x(t) as the output of a videotape recorder, then x(3t) is the signal obtained when the recording is played back at three times the speed at which it was recorded, and x(t/2) is the signal obtained when the recording is played back at half speed.

Example 1.5.7
Suppose we want to plot the signal x(3t − 6), where x(t) is the signal shown in Figure 1.5.2. Using the definition of x(t) in Example 1.5.1, we obtain

x(3t − 6) = { 3t − 5,  5/3 ≤ t ≤ 2
            { 1,       2 < t ≤ 8/3
            { −3t + 9, 8/3 < t ≤ 3
            { 0,       otherwise

A plot of x(3t − 6) versus t is illustrated in Figure 1.5.13 and can be viewed as x(t) compressed by a factor of 3 (or time scaled by a factor of 1/3) and then shifted two units of time to the right. Note that if x(t) is shifted first and then time scaled by a factor of 1/3, we will obtain a different signal; therefore, shifting and time scaling are not commutative. The result we did get can be justified as follows:

Figure 1.5.13 Plot of x(3t − 6) of Example 1.5.7.

x(3t − 6) = x(3(t − 2))

This equation indicates that we perform the scaling operation first and then the shifting operation.

Example 1.5.8
We often encounter signals of the type

x(t) = 1 − A exp[−at] cos(ω₀t + φ)

Figure 1.5.14 shows x(t) for typical values of A, a, and ω₀. As can be seen, this signal eventually goes to a steady-state value of 1 as t becomes infinite. In practice, it is assumed that the signal has settled down to a final value when it stays within a specified percentage of its final theoretical value. This percentage is usually chosen to be 5%, and the time tₛ after which the signal stays within this range is defined as the settling time. As can be seen from Figure 1.5.14, tₛ can be determined by solving

1 + A exp[−atₛ] = 1.05

so that

tₛ = −(1/a) ln[0.05/A]

Figure 1.5.14 Signal x(t) for Example 1.5.8.

Let

x(t) = 1 − 2.3 exp[−10.356t] cos[5t]

We will find tₛ for x(t), x(t/2), and x(2t).
For x(t), since A = 2.3 and a = 10.356, we get tₛ = 0.3697 s.
Since

x(t/2) = 1 − 2.3 exp[−5.178t] cos[2.5t]

and

x(2t) = 1 − 2.3 exp[−20.712t] cos[10t]

we get tₛ = 0.7394 s and tₛ = 0.1849 s for x(t/2) and x(2t), respectively. These results are expected, since x(t) is expanded by a factor of 2 in the first case and compressed by the same factor in the second case.
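The settling-time formula is easy to evaluate directly. A short sketch (assuming NumPy) reproduces the three values quoted above.

```python
import numpy as np

def settling_time(A, a, tol=0.05):
    """Time after which the envelope A*exp(-a*t) stays below tol (Example 1.5.8)."""
    return -np.log(tol / A) / a

A, a = 2.3, 10.356
for scale, label in [(1.0, "x(t)"), (0.5, "x(t/2)"), (2.0, "x(2t)")]:
    # Scaling time by `scale` scales the decay rate, so t_s scales by 1/scale.
    print(f"{label:7s}: t_s = {settling_time(A, a * scale):.4f} s")
# -> 0.3697 s, 0.7394 s, 0.1849 s
```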

In conclusion, for any general signal x(t), the transformation αt + β of the independent variable can be performed as follows:

x(αt + β) = x(α(t + β/α))        (1.5.6)

where α and β are assumed to be real numbers. The operations should be performed in the following order:

1. Scale by α. If α is negative, reflect about the vertical axis.
2. Shift to the right by β/α if β and α have different signs, and to the left by β/α if β and α have the same sign.

Note that the operation of reflecting and time scaling is commutative, whereas the operation of shifting and reflecting or shifting and time scaling is not.
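Numerically, x(αt + β) is obtained simply by evaluating x at αt + β, and the identity in Eq. (1.5.6) can be checked directly. A minimal sketch (assuming NumPy) does so for x(3t − 6) of Example 1.5.7.

```python
import numpy as np

def x(t):
    """Signal of Example 1.5.1."""
    t = np.asarray(t, dtype=float)
    return np.piecewise(
        t,
        [(t >= -1) & (t <= 0), (t > 0) & (t <= 2), (t > 2) & (t <= 3)],
        [lambda t: t + 1, 1.0, lambda t: -t + 3, 0.0],
    )

alpha, beta = 3.0, -6.0
t = np.linspace(0, 4, 2001)

direct = x(alpha * t + beta)                       # x(3t - 6) evaluated directly
scale_then_shift = x(alpha * (t + beta / alpha))   # Eq. (1.5.6): scale by 3, shift right by 2
scale_only = x(alpha * t)                          # scaling alone, for comparison

print(np.allclose(direct, scale_then_shift))       # True: x(3t - 6) = x(3(t - 2))
print(np.allclose(direct, scale_only))             # False: the shift is still needed
```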

1.6 ELEMENTARY SIGNALS

Several important elementary signals that occur frequently in applications also serve as a basis for representing other signals. Throughout the book, we will find that representing signals in terms of these elementary signals allows us to better understand the properties of both signals and systems. Furthermore, many of these signals have features that make them particularly useful in the solution of engineering problems and, therefore, of importance in our subsequent studies.

1.6.1 The Unit Step Function

The continuous-time unit step function is defined as

u(t) = { 1, t > 0        (1.6.1)
       { 0, t < 0

and is shown in Figure 1.6.1.
Figure 1.6.1 Continuous-time unit step function.

This signal is an important signal for analytic studies, and it also has many practical applications. Note that the unit step function is continuous for all t except at t = 0, where there is a discontinuity. According to our earlier discussion, we define u(0) = 1/2. An example of a unit step function is the output of a 1-V dc voltage source in series with a switch that is turned on at time t = 0.

Example 1.6.1
The rectangular pulse signal shown in Figure 1.6.2 is the result of an on-off switching operation of a constant voltage source in an electric circuit.
In general, a rectangular pulse that extends from −a to +a and has an amplitude A can be written as a difference between appropriately shifted step functions, i.e.,

A rect(t/2a) = A[u(t + a) − u(t − a)]        (1.6.2)

In our specific example,

2 rect(t/2) = 2[u(t + 1) − u(t − 1)]
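The relationship in Eq. (1.6.2) is easy to verify by sampling both sides. A short sketch (assuming NumPy; u(0) is taken as 1/2, consistent with the discussion above) follows.

```python
import numpy as np

def u(t):
    """Unit step with u(0) = 1/2, consistent with Eq. (1.6.1)."""
    return np.where(t > 0, 1.0, np.where(t < 0, 0.0, 0.5))

def rect(t):
    """rect(t): 1 for |t| < 1/2, 0 for |t| > 1/2, 1/2 at the edges."""
    return u(t + 0.5) - u(t - 0.5)

A, a = 2.0, 1.0
t = np.linspace(-3, 3, 1201)

pulse_from_steps = A * (u(t + a) - u(t - a))   # right-hand side of Eq. (1.6.2)
pulse_direct = A * rect(t / (2 * a))           # left-hand side, A rect(t/2a)

print(np.allclose(pulse_from_steps, pulse_direct))   # True
```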

Example 1.6.2
Consider the signum function (written sgn) shown in Figure 1.6.3. The unit sgn function is defined by

sgn t = {  1, t > 0        (1.6.3)
        {  0, t = 0
        { −1, t < 0

Figure 1.6.2 Rectangular pulse signal of Example 1.6.1.

Figure 1.6.3 The signum function.

The signum function can be expressed in terms of the unit step function as

sgn t = −1 + 2u(t)

The signum function is one of the most often used signals in communication and in control theory.

1.6.2 The Ramp Function

The ramp function shown in Figure 1.6.4 is defined by

r(t) = { t, t ≥ 0        (1.6.4)
       { 0, t < 0

The ramp function is obtained by integrating the unit step function:

∫_{−∞}^{t} u(τ) dτ = r(t)

The device that accomplishes this operation is called an integrator. In contrast to both the unit step and the signum functions, the ramp function is continuous at t = 0. Time scaling a unit ramp by a factor α corresponds to a ramp function with slope α. (A unit ramp function has a slope of unity.) An example of a ramp function is the linear-sweep waveform of a cathode-ray tube.

Figure 1.6.4 The ramp function.

Example 1.6.3
Let x(t) = u(t + 2) − 2u(t + 1) + 2u(t) − u(t − 2) − 2u(t − 3) + 2u(t − 4). Let y(t) denote its integral. Then

y(t) = r(t + 2) − 2r(t + 1) + 2r(t) − r(t − 2) − 2r(t − 3) + 2r(t − 4)

Signal y(t) is sketched in Figure 1.6.5.

Figure 1.6.5 The signal used in Example 1.6.3.

1.6.3 The Sampling Function

A function frequently encountered in spectral analysis is the sampling function Sa(x), defined by

Sa(x) = (sin x)/x        (1.6.5)

Since the denominator is an increasing function of x and the numerator is bounded (|sin x| ≤ 1), Sa(x) is simply a damped sine wave. Figure 1.6.6(a) shows that Sa(x) is an even function of x having its peak at x = 0 and zero-crossings at x = ±nπ. The value of the function at x = 0 is established by using l'Hôpital's rule. A closely related function is sinc x, which is defined by

sinc x = (sin πx)/(πx) = Sa(πx)        (1.6.6)

and is shown in Figure 1.6.6(b). Note that sinc x is a compressed version of Sa(x); the compression factor is π.
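The distinction between Sa(x) and sinc x matters in practice because numerical libraries typically implement the normalized form. For instance, NumPy's np.sinc(x) computes sin(πx)/(πx), so Sa(x) is obtained as np.sinc(x/π); a brief sketch follows.

```python
import numpy as np

def Sa(x):
    """Sampling function Sa(x) = sin(x)/x, Eq. (1.6.5).

    np.sinc(y) = sin(pi*y)/(pi*y), so Sa(x) = np.sinc(x / pi)."""
    return np.sinc(x / np.pi)

x = np.linspace(-10, 10, 2001)
print(Sa(0.0))                                          # 1.0, by l'Hopital's rule
print(np.allclose(Sa(np.pi * np.arange(1, 4)), 0.0))    # zero-crossings at n*pi
print(np.allclose(np.sinc(x), Sa(np.pi * x)))           # sinc x = Sa(pi x), Eq. (1.6.6)
```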

1.6.4 The Unit Impulse Function

The unit impulse signal δ(t), often called the Dirac delta function or, simply, the delta function, occupies a central place in signal analysis. Many physical phenomena such as point sources, point charges, concentrated loads on structures, and voltage or current sources acting for very short times can be modeled as delta functions. Mathematically, the Dirac delta function is defined by

∫_{t₁}^{t₂} x(t) δ(t) dt = x(0),  t₁ < 0 < t₂        (1.6.7)

provided that x(t) is continuous at t = 0. The function δ(t) is depicted graphically by a spike at the origin, as shown in Figure 1.6.7, and possesses the following properties:
Figure 1.6.6 The sampling function: (a) Sa(x); (b) sinc x.

Figure 1.6.7 Representation of the unit impulse function δ(t).

1. δ(0) → ∞
2. δ(t) = 0, t ≠ 0
3. ∫_{−∞}^{∞} δ(t) dt = 1
4. δ(t) is an even function; i.e., δ(t) = δ(−t)

As just defined, the δ function does not conform to the usual definition of a function. However, it is sometimes convenient to consider it as the limit of a conventional function as some parameter ε approaches zero. Several examples are shown in Figure 1.6.8; all such functions have the following properties for "small" ε:
Figure 1.6.8 Engineering models for δ(t).

1. The value at t = 0 is very large and becomes infinity as ε approaches zero.
2. The duration is relatively very short and becomes zero as ε becomes zero.
3. The total area under the function is constant and equal to 1.
4. The functions are all even.

Example 1.6.4
Consider the function defined as

p(t) = lim_{ε→0⁺} (1/πε) [ sin(t/ε) / (t/ε) ]²

This function satisfies all the properties of a delta function, as can be shown by rewriting it as

p(t) = lim_{ε→0⁺} (ε/π) [ sin(t/ε) / t ]²

so that

1. p(0) = lim_{ε→0⁺} 1/(πε) = ∞. Here we used the well-known limit lim_{x→0} (sin x)/x = 1.

2. For values of t ≠ 0,

   p(t) = [ lim_{ε→0⁺} ε/(πt²) ] [ lim_{ε→0⁺} sin²(t/ε) ]

   The second limit is bounded by 1, but the first limit vanishes as ε → 0⁺; therefore,

   p(t) = 0,  t ≠ 0

3. To show that the area under p(t) is unity, we note that

   ∫_{−∞}^{∞} p(t) dt = lim_{ε→0⁺} ∫_{−∞}^{∞} (1/πε) [ sin(t/ε) / (t/ε) ]² dt = (1/π) ∫_{−∞}^{∞} (sin²τ)/τ² dτ

   where the last step follows by using the change of variables τ = t/ε. Since (see Appendix B)

   ∫_{−∞}^{∞} (sin²τ)/τ² dτ = π

   it follows that

   ∫_{−∞}^{∞} p(t) dt = 1

4. It is clear that p(t) = p(−t); therefore, p(t) is an even function.

Three important properties repeatedly used when operating with delta functions are the sifting property, the sampling property, and the scaling property.

Sifting Property. The sifting property is expressed in the equation

∫_{t₁}^{t₂} x(t) δ(t − t₀) dt = { x(t₀), t₁ < t₀ < t₂        (1.6.8)
                               { 0,      otherwise

This can be seen by using the change of variables τ = t − t₀ to obtain

∫_{t₁}^{t₂} x(t) δ(t − t₀) dt = ∫_{t₁−t₀}^{t₂−t₀} x(τ + t₀) δ(τ) dτ = x(t₀),  t₁ < t₀ < t₂

by Equation (1.6.7). Notice that the right-hand side of Equation (1.6.8) can be looked at as a function of t₀. This function is discontinuous at t₀ = t₁ and t₀ = t₂. Following our notation, the value of the function at t₁ or t₂ should be given by

∫_{t₁}^{t₂} x(t) δ(t − t₀) dt = (1/2) x(t₀),  t₀ = t₁ or t₀ = t₂        (1.6.9)

The sifting property is usually used in lieu of Equation (1.6.7) as the definition of a delta function located at t₀. In general, the property can be written as

x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ        (1.6.10)

which implies that the signal x(t) can be expressed as a continuous sum of weighted impulses. This result can be interpreted graphically if we approximate x(t) by a sum of rectangular pulses, each of width Δ seconds and of varying heights, as shown in Figure 1.6.9. That is,

x̂(t) = Σ_{k=−∞}^{∞} x(kΔ) rect((t − kΔ)/Δ)
Figure 1.6.9 Approximation of signal x(t).

which can be written as

x̂(t) = Σ_{k=-∞}^{∞} [x(kΔ)(1/Δ) rect((t - kΔ)/Δ)][kΔ - (k - 1)Δ]

Now, each term in the sum represents the area under the kth pulse in the approximation x̂(t). We thus let Δ → 0 and replace kΔ by τ, so that kΔ - (k - 1)Δ = dτ, and the summation becomes an integral. Also, as Δ → 0, (1/Δ) rect((t - kΔ)/Δ) approaches δ(t - τ), and Equation (1.6.10) follows. The representation of Equation (1.6.10), along with the superposition principle, is used in Chapter 2 to study the behavior of a special and important class of systems known as linear time-invariant systems. A short numerical sketch of the pulse approximation x̂(t) follows.
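A minimal numerical sketch of this construction (the signal, time grid, and step sizes are arbitrary choices) builds the staircase approximation x̂(t) and shows that it converges to x(t) as Δ → 0:

```python
import numpy as np

def staircase(x, t, delta):
    # x_hat(t) = sum_k x(k*delta) rect((t - k*delta)/delta): hold x(k*delta) on each pulse.
    k = np.round(t / delta)            # index of the rectangular pulse that contains each t
    return x(k * delta)

x = lambda t: np.exp(-t) * np.sin(3 * t)
t = np.linspace(0.0, 4.0, 2001)
for delta in (0.5, 0.1, 0.01):
    err = np.max(np.abs(staircase(x, t, delta) - x(t)))
    print(delta, err)                  # the approximation error shrinks as delta -> 0
```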

Sampling Property. If x(t) is continuous at t0, then

x(t)δ(t - t0) = x(t0)δ(t - t0)    (1.6.11)

Graphically, this property can be illustrated by approximating the impulse signal by a rectangular pulse of width Δ and height 1/Δ, as shown in Figure 1.6.10, and then allowing Δ to approach zero, to obtain

lim_{Δ→0} x(t)(1/Δ) rect((t - t0)/Δ) = x(t0)δ(t - t0)

Figure 1.6.10 The sampling property of the impulse function.

Mathematically, two functions f1(δ(t)) and f2(δ(t)) are equivalent over the interval (t1, t2) if, for any continuous function y(t),

∫_{t1}^{t2} y(t)f1(δ(t)) dt = ∫_{t1}^{t2} y(t)f2(δ(t)) dt

Therefore, x(t)δ(t - t0) and x(t0)δ(t - t0) are equivalent, since

∫_{t1}^{t2} y(t)x(t)δ(t - t0) dt = y(t0)x(t0) = ∫_{t1}^{t2} y(t)x(t0)δ(t - t0) dt

Note the difference between the sifting property and the sampling property: the right-hand side of Equation (1.6.8) is the value of the function evaluated at some point, whereas the right-hand side of Equation (1.6.11) is still a delta function with strength equal to the value of x(t) evaluated at t = t0.

Scaling Property. The scaling property is given by

δ(at + b) = (1/|a|) δ(t + b/a)    (1.6.12)

This result is interpreted by considering δ(t) as the limit of a unit-area pulse p(t) as some parameter ε tends to zero. The pulse p(at) is a compressed (expanded) version of p(t) if a > 1 (a < 1), and its area is 1/|a|. (Note that the area is always positive.) By taking the limit as ε → 0, the result is a delta function with strength 1/|a|. We show this by considering the two cases a > 0 and a < 0 separately. For a > 0, we have to show that

∫_{t1}^{t2} x(t)δ(at + b) dt = ∫_{t1}^{t2} x(t)(1/|a|)δ(t + b/a) dt,   t1 < -b/a < t2

Applying the sifting property to the right-hand side yields

(1/a) x(-b/a)

To evaluate the left-hand side, we use the transformation of variables

τ = at + b

Then dt = (1/a)dτ, and the range t1 < t < t2 becomes at1 + b < τ < at2 + b. The left-hand side now becomes

∫_{at1+b}^{at2+b} x((τ - b)/a) δ(τ) (1/a) dτ = (1/a) x(-b/a)

which is the same as the right-hand side.
When a < 0, we have to show that

∫_{t1}^{t2} x(t)δ(at + b) dt = ∫_{t1}^{t2} x(t)(1/|a|)δ(t + b/a) dt,   t1 < -b/a < t2

Using the sifting property, we evaluate the right-hand side. The result is

(1/|a|) x(-b/a)

For the left-hand side, we use the transformation τ = at + b, so that

dt = (1/a) dτ = -(1/|a|) dτ

and the range of τ becomes -|a|t1 + b > τ > -|a|t2 + b, resulting in

∫_{t1}^{t2} x(t)δ(at + b) dt = -(1/|a|) ∫_{-|a|t1+b}^{-|a|t2+b} x((τ - b)/a) δ(τ) dτ
= (1/|a|) ∫_{-|a|t2+b}^{-|a|t1+b} x((τ - b)/a) δ(τ) dτ
= (1/|a|) x(-b/a)

Notice that before using the sifting property in the last step, we interchanged the limits of integration and changed the sign of the integrand, since -|a|t2 + b < -|a|t1 + b.
Example 1.6.5
Consider the Gaussian pulse

p(t) = (1/√(2πε²)) exp[-t²/(2ε²)]

The area under this pulse is always 1; that is,

∫_{-∞}^{∞} (1/√(2πε²)) exp[-t²/(2ε²)] dt = 1

It can be shown that p(t) approaches δ(t) as ε → 0. (See Problem 1.19.) Let a > 1 be any constant. Then p(at) is a compressed version of p(t). It can be shown that the area under p(at) is 1/a, and as ε approaches 0, p(at) approaches δ(at).

Example 1.6.6
Suppose we want to evaluate the following integrals:

a. ∫_{-2}^{1} (t + t²) δ(t - 3) dt

b. ∫_{-2}^{4} (t + t²) δ(t - 3) dt

c. ∫_{0}^{3} exp[t - 2] δ(2t - 4) dt

d. ∫_{-∞}^{t} δ(τ) dτ

a. Using the sifting property yields

∫_{-2}^{1} (t + t²) δ(t - 3) dt = 0

since t = 3 is not in the interval -2 < t < 1.

b. Using the sifting property yields

∫_{-2}^{4} (t + t²) δ(t - 3) dt = 3 + 3² = 12

since t = 3 is within the interval -2 < t < 4.

c. Using the scaling property and then the sifting property yields

∫_{0}^{3} exp[t - 2] δ(2t - 4) dt = ∫_{0}^{3} exp[t - 2] (1/2) δ(t - 2) dt = (1/2) exp[0] = 1/2

d. Consider the following two cases:

Case 1: t < 0
In this case, the point τ = 0 is not within the interval -∞ < τ < t, and the result of the integral is zero.

Case 2: t > 0
In this case, τ = 0 lies within the interval -∞ < τ < t, and the value of the integral is 1.

Summarizing, we obtain

∫_{-∞}^{t} δ(τ) dτ = 1,   t > 0
= 0,   t < 0

But this is by definition a unit step function; therefore, the functions δ(t) and u(t) form an integral-derivative pair. That is,

d u(t)/dt = δ(t)    (1.6.13)

∫_{-∞}^{t} δ(τ) dτ = u(t)    (1.6.14)

The unit impulse is one of a class of functions known as singularity functions. Note that the definition of δ(t) in Equation (1.6.7) does not make sense if δ(t) is an ordinary function. It is meaningful only if δ(t) is interpreted as a functional, i.e., as a process of assigning the value x(0) to the signal x(t). The integral notation is used merely as a convenient way of describing the properties of this functional, such as linearity, shifting, and scaling. We now consider how to represent the derivatives of the impulse function.

1.6.5 Derivatives of the Impulse Function


The (first) derivative of the impulse function, or unit doublet, denoted by δ′(t), is defined by

∫_{t1}^{t2} x(t)δ′(t - t0) dt = -x′(t0),   t1 < t0 < t2    (1.6.15)

provided that x(t) possesses a derivative x′(t0) at t = t0. This result can be demonstrated using integration by parts as follows:

∫_{t1}^{t2} x(t)δ′(t - t0) dt = ∫_{t1}^{t2} x(t) d[δ(t - t0)]
= x(t)δ(t - t0) |_{t1}^{t2} - ∫_{t1}^{t2} x′(t)δ(t - t0) dt
= 0 - 0 - x′(t0)
since δ(t) = 0 for t ≠ 0. It can be shown that δ′(t) possesses the following properties:

1. x(t)δ′(t - t0) = x(t0)δ′(t - t0) - x′(t0)δ(t - t0)

2. ∫_{-∞}^{t} δ′(τ - t0) dτ = δ(t - t0)

3. δ′(at + b) = (1/(a|a|)) δ′(t + b/a)
Higher order derivatives of δ(t) can be defined by extending the definition of δ′(t). For example, the nth-order derivative of δ(t) is defined by

∫_{t1}^{t2} x(t)δ^(n)(t - t0) dt = (-1)^n x^(n)(t0),   t1 < t0 < t2    (1.6.16)

provided that such a derivative exists at t = t0. The graphical representation of δ′(t) is shown in Figure 1.6.11.

Figure 1.6.11 Representation of δ′(t).

Example 1.6.7
The current through a 1-mH inductor is i(t) = 10 exp[-2t]u(t) - δ(t) amperes. The voltage drop across the inductor is given by

v(t) = 10^-3 d/dt [10 exp[-2t]u(t) - δ(t)]
= -2 × 10^-2 exp[-2t]u(t) + 10^-2 exp[-2t]δ(t) - 10^-3 δ′(t) volts
= -2 × 10^-2 exp[-2t]u(t) + 10^-2 δ(t) - 10^-3 δ′(t) volts

where the last step follows from Equation (1.6.11).
Figures 1.6.12(a) and (b) demonstrate the behavior of the inductor current i(t) and the voltage v(t), respectively.

Figure 1.6.12 The signals i(t) and v(t) for Example 1.6.7.



Note that the derivative of x(t)u(t) is obtained using the product rule of differentiation, i.e.,

d/dt [x(t)u(t)] = x(t)δ(t) + x′(t)u(t)

whereas the derivative of x(t)δ(t) is

d/dt [x(t)δ(t)] = d/dt [x(0)δ(t)] = x(0)δ′(t)

This result cannot be obtained by direct differentiation of the product, because δ(t) is interpreted as a functional rather than an ordinary function.

Example 1.6.8
We will evaluate the following integrals:

(a) ∫_{-1}^{3} (t² - 2t) δ′(-(1/2)t + 1/2) dt

(b) ∫_{-1}^{3} t exp[-4t] δ′(t - 1) dt

For (a), the scaling property gives δ′(-(1/2)t + 1/2) = (1/(a|a|)) δ′(t - 1) with a = -1/2, so that

∫_{-1}^{3} (t² - 2t) δ′(-(1/2)t + 1/2) dt = -4 ∫_{-1}^{3} (t² - 2t) δ′(t - 1) dt
= 4 [d/dt (t² - 2t)]_{t=1} = 4(2t - 2)|_{t=1} = 0

For (b), we have, by Equation (1.6.15),

∫_{-1}^{3} t exp[-4t] δ′(t - 1) dt = -[d/dt (t exp[-4t])]_{t=1} = (4t - 1) exp[-4t]|_{t=1} = 3 exp[-4]

1.7 OTHER TYPES OF SIGNALS


There are many other types of signals that electrical engineers work with very often. Signals can be classified broadly as random and nonrandom, or deterministic. The study of random signals is well beyond the scope of this text, but some of the ideas and techniques that we discuss are basic to more advanced topics. Random signals do not have the kind of totally predictable behavior that deterministic signals do. Voice, music, computer output, TV, and radar signals are neither pure sinusoids nor pure periodic waveforms. If they were, then by knowing one period of the signal, we could predict what the signal would look like for all future time. Any signal that is capable of carrying meaningful information is in some way random. In other words, in order to contain information, a signal must, in some manner, change in a nondeterministic fashion.
Signals can also be classified as analog or digital signals. In science and engineering, the word "analog" means to act similarly, but in a different domain. For example, the electric voltage at the output terminals of a stereo amplifier varies in exactly the same way as does the sound that activated the microphone that is feeding the amplifier. In other words, the electric voltage v(t) at every instant of time is proportional (analog) to the air pressure that is rapidly varying with time. Simply, an analog signal is a physical quantity that varies with time, usually in a smooth and continuous fashion.
The values of a discrete-time signal can be continuous or discrete. If a discrete-time signal takes on all possible values on a finite or infinite range, it is said to be a continuous-amplitude discrete-time signal. Alternatively, if the discrete-time signal takes on values from a finite set of possible values, it is said to be a discrete-amplitude discrete-time signal, or, simply, a digital signal. Examples of digital signals are digitized images, computer input, and signals associated with digital information sources.
Most of the signals that we encounter in nature are analog signals. A basic reason for this is that physical systems cannot respond instantaneously to changing inputs. Moreover, in many cases, the signal is not available in electrical form, thus requiring the use of a transducer (mechanical, electrical, thermal, optical, and so on) to provide an electrical signal that is representative of the system signal. Transducers generally cannot respond instantaneously to changes and tend to smooth out the signals.
Digital signal processing has developed rapidly over the past two decades, chiefly because of significant advances in digital computer technology and integrated-circuit fabrication. In order to process signals digitally, the signals must be in digital form (discrete in time and discrete in amplitude). If the signal to be processed is in analog form, it is first converted to a discrete-time signal by sampling at discrete instants in time. The discrete-time signal is then converted to a digital signal by a process called quantization.
Quantization is the process of converting a continuous-amplitude signal into a discrete-amplitude signal and is basically an approximation procedure. The whole procedure is called analog-to-digital (A/D) conversion, and the corresponding device is called an A/D converter.
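As an informal sketch of these two steps (the sampling rate, tone frequency, and 3-bit word length are arbitrary assumptions, not values from the text), sampling followed by uniform quantization can be written as:

```python
import numpy as np

def quantize(x, n_bits, x_min=-1.0, x_max=1.0):
    # Uniform quantizer: map each sample to the midpoint of one of 2**n_bits cells.
    levels = 2 ** n_bits
    q = (x_max - x_min) / levels                      # quantization step size
    idx = np.clip(np.floor((x - x_min) / q), 0, levels - 1)
    return x_min + (idx + 0.5) * q

fs = 1000.0                                           # sampling rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)                        # discrete sampling instants
samples = np.sin(2 * np.pi * 200 * t)                 # discrete-time, continuous-amplitude signal
digital = quantize(samples, n_bits=3)                 # discrete-time, discrete-amplitude (digital)
print(np.max(np.abs(samples - digital)))              # quantization error is at most q/2 = 0.125
```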

1.8 SUMMARY

• Signals can be classified as continuous-time or discrete-time signals.
• Continuous-time signals that satisfy the condition x(t) = x(t + T) are periodic with fundamental period T.
• The fundamental radian frequency of the periodic signal is related to the fundamental period T by the relationship
ω0 = 2π/T
• The complex exponential x(t) = exp[jω0 t] is periodic with period T = 2π/ω0 for all ω0.
• Harmonically related continuous-time exponentials
x_k(t) = exp[jkω0 t]
are periodic with common period T = 2π/ω0.
• The energy E of the signal x(t) is defined by
E = lim_{T→∞} ∫_{-T/2}^{T/2} |x(t)|² dt
• The power P of the signal x(t) is defined by
P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} |x(t)|² dt
• The signal x(t) is an energy signal if 0 < E < ∞.
• The signal x(t) is a power signal if 0 < P < ∞.
• The signal x(t - t0) is a time-shifted version of x(t). If t0 > 0, then the signal is delayed by t0 seconds. If t0 < 0, then x(t - t0) represents an advanced replica of x(t).
• The signal x(-t) is obtained by reflecting x(t) about t = 0.
• The signal x(ct) is a scaled version of x(t). If c > 1, then x(ct) is a compressed version of x(t), whereas if 0 < c < 1, then x(ct) is an expanded version of x(t).
• The signal x(t) is even symmetric if
x(t) = x(-t)
• The signal x(t) is odd symmetric if
x(t) = -x(-t)
• Unit impulse, unit step, and unit ramp functions are related by
u(t) = ∫_{-∞}^{t} δ(τ) dτ
r(t) = ∫_{-∞}^{t} u(τ) dτ
• The sifting property of the δ function is
∫_{t1}^{t2} x(t)δ(t - t0) dt = x(t0),  t1 < t0 < t2;  = 0, otherwise
• The sampling property of the δ function is
x(t)δ(t - t0) = x(t0)δ(t - t0)



1.9 CHECKLIST OF IMPORTANT TERMS


Aperiodic signals
Continuous-time signals
Discrete-time signals
Elementary signals
Energy signals
Periodic signals
Power signals
Rectangular pulse
Reflection operation
Sampling function
Scaling operation
Shifting operation
Signum function
Sinc function
Unit impulse function
Unit ramp function
Unit step function

1.10 PROBLEMS
1.1. Find thc fundamental period Iof each ofthe following signals:

cos (rrr), sin (2rrr ). cos (3nr), sin (4rll.


"o.
(l ,). t'" (; ,),

-,(T,),',"(T,),*.(X,),.,"(?,),*,('1,)
1.2. Sketch the follorving signals:
(a) r(,)='*(;r+zo')
(b) r(t) =t+e1' Ost=2
(t+2 ts-2
(c).r(r)= {o -2=t<2
[,-z z<,
(d) x(t) = 2 exp[-t],  0 ≤ t ≤ 1,  and x(t + 1) = x(t) for all t
1.3. Show that if x(t) is periodic with period T, then it is also periodic with period nT, n = 2, 3, ....
1.4. Show that if x1(t) and x2(t) have period T, then x3(t) = ax1(t) + bx2(t) (a, b constant) has the same period T.
1.5. Use Euler's form, exp[jωt] = cos ωt + j sin ωt, to show that exp[jωt] is periodic with period T = 2π/ω.
1.6, Are the following signals periodic? If so' find their periods.
(a) r(r) =.,,(1,) * r*'(8{,)
(b) .t(t)
I 71,]1{ cxPL,
[.5n I
=
"*o[i o e,l
[7rrl t5 I
(c) x(t) = e*pli e t.l + exn[u t]

(d).r(t) = exPli
[5zrl lrr I
?,]* "*Pl.6,1
/3rr \ /3\
(e) .r(t) = 2sin(:* ,/ + cos\or/

1.7. If x(t) is a periodic signal with period T, show that x(at), a > 0, is a periodic signal with period T/a, and x(t/b), b > 0, is a periodic signal with period bT. Verify these results for x(t) = sin t, a = b = 2.
It Determine whether the foflowing signals are power or energy signars or neither. Justi$
your answers.
(e).r()=4.1nr, -@<r<e
O) r(1 = A[u(t - a) - u(t + a)l
(c) .r(t) = r(t) - r(t - 1)
(d) rO= exp[-ar]a(r), a> 0
(e) r(t) = tuo
(I) :(t) = 21r;
(g).r0)=Aexplbtl, D>0
L9. Repeat Problem 1.8 for the following signats:

(a) r(r) = r..(;,


O) :0) = exp[-2lrl]sin(rrr)
(c) r(r) = exP[4lrl]
(d).tO =
"*[r?],
(e) .r0): r*"(+r"..(?,
(f).r()=1' r<0
exp[3r], 0 s,
L10. Show that if .r(r) is periodic with period I, then

[""<oa'l=tlfi
where P is the average power of the signal.
LlL IJt
:(t)= -r*r, -1sr<0
,, O=t<z
2, 2=t<3
0, otherwise
(a) Sketch.tO.
(b) Sketch.r(r -2),x(t+ 3),r(-3, -zl,anax(Jt* j)*afradtheanalyticarexpres-
sions for these functions.
LtL Repeat Problem 1.11 for

r(t):21 a2, -lsr<O


2r-2, 0sr<1
Ll3. Sketch the following signals:
(8) rr(r) = u(r) + 5z( 1) tu(t 2l - - -
-
O) +(t) = r(r) r(r 1) u(t 2) - - -
(c) .rr(r) = exp[-r]z(r)
(d) r4(r) = ?tt(t) + 6(, - 1)

(e) xr(r) = u(t)u(t - a), a>0


(f) r50) = u(t)u(a - t), a> 0
(g) .tz0) = a(cosr)

ftl :,tr>,(r + ])
,, ,,(-; * l),,t, - rt
() rr()xr(2 - t)
Ll4 (a) Show that
I
x"(t)=ilx(t)+.r(-r)l
is an even signal.
(b) Show that
1
*"(t)-i[.r0)-.r(-r)]
is an odd signal.
1.15. Consider the simple FM stereo transmitter shown in Figure P1.15.
(a) Sketch the signals L + R and L - R.
(b) If the outputs of the two adders are added, sketch the resulting waveform.
(c) If signal L - R is inverted and added to signal L + R, sketch the resulting waveform.
Figure P1.15

L16 For each of the signals shown in Figure P1.16, write an expression in terms of unit step and
unit ramp basic functions.
Ll7. If the duration of x(t) is defined as the time at which.r() drops to l/e of the value at the
origin, find the duration of the following signals:
(r) rr0) = Aexpl-tlTlu(t)
(b) rz(t) = rr(3r)

Figure P1.16

(c) r3(t) = xJtl2)


(d) .ro(t) = blt)
1.18. The signal x(t) = rect(t/2) is transmitted through the atmosphere and is reflected by different objects located at different distances. The received signal is

y(t) = x(t) + 0.5x(t - T) + 0.25x(t - 2T),   T >> 2

Signal y(t) is processed as shown in Figure P1.18.
(a) Sketch y(t) for T = 10.
(b) Sketch z(t) for T = 10.
1.19. Check whether each of the following can be used as a mathematical model of a delta
function:
1
(a) pr(t) =
lm zr"
"..r,tr1"l

Figure P1.18

(b) pr(,) =
Is J**r[;rt']
(c) p,(r) =!,\#.7
(d) po(r):
H +;ri"
(e) p50) = lim e exp[-elrll

(o poo) =
!,$ ,1":l
Evaluate the following integrals:

n, [. (3,- ])ut, - rra,


or f' tr - ,)'(3,-;)"
o J" [",nt-, * ,l *.in(f ,)]r(,- ;)"
{ar
f' [*nt-r
* ry *.in(],)]r(, -;)"
(e)
/- exp[-sr + l]6'(, - s)d,

The probability that a random variable.r is tess than o is found by integrating the Proba-
bility density function /(.r) to obtain

p(x
= c) = l"- rcl*
Given that

,f(r) = 0.26(.t + 2) + 0.38(r) + 0.26(.r - l) + 0.1[u(.r - 3) - a(r - 5)]

find
(a) P(.r s -3)
(b) P(r s l.s)
(d) P(.r < 6)
IZL The velaity of I g of mass is
?r(r) = exp[-(, + l)lu(, + l) + 6(, - l)
(a) Plot o(r)
(b) Evaluate the force

t(t) = nfiO<t\
(c) If there is a spring connected to the mass with constant * = I N/m, End the force
tt
f/t) =k
J_-o(r)dt
123. Sketch the fint and second derivatives of the following signals:
(a) .r(t) = t (r) 'l- 5t (, - l) - zu(t - 2l
@) r(1 = r(ri - r(t - l) + 2u(' - 2)
(c) .r(r) =
{f,:r, ;L=, llo
1.11 COMPUTER PROBLEMS

1.24. The integral

∫_{t1}^{t2} x(t)y(t) dt

can be approximated by a summation of rectangular strips, each of width Δt, as follows:

∫_{t1}^{t2} x(t)y(t) dt ≈ Σ_{n=1}^{N} x(nΔt)y(nΔt)Δt

Here, Δt = (t2 - t1)/N. Write a program to verify that

(1/√(2πΔ)) exp[-t²/(2Δ)]

can be used as a mathematical model for the delta function by approximating the following integrals by a summation:

(a) ∫_{-1}^{1} (t + 1) (1/√(2πΔ)) exp[-t²/(2Δ)] dt

(b) ∫_{-1}^{2} (t + 1) (1/√(2πΔ)) exp[-t²/(2Δ)] dt

(c) ∫_{2}^{3} (t + 1) (1/√(2πΔ)) exp[-t²/(2Δ)] dt
1.25. Repeat Problem 1.24 for the following integrals:

(a) ∫_{-1}^{1} exp[-t] (Δ/π)/(Δ² + t²) dt

(b) ∫_{-1}^{2} exp[-t] (Δ/π)/(Δ² + t²) dt

(c) ∫_{2}^{3} exp[-t] (Δ/π)/(Δ² + t²) dt
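A possible sketch for Computer Problem 1.24 follows (the Gaussian pulse and the integration limits are written as read from this printing and may differ from the original; Python/NumPy is simply one convenient choice of language):

```python
import numpy as np

def gaussian_pulse(t, d):
    # Candidate delta-function model: unit-area Gaussian that narrows as d -> 0.
    return np.exp(-t ** 2 / (2 * d)) / np.sqrt(2 * np.pi * d)

def strip_integral(f, t1, t2, n=100000):
    # Rectangular-strip approximation of the integral of f over [t1, t2].
    dt = (t2 - t1) / n
    t = t1 + (np.arange(n) + 0.5) * dt
    return np.sum(f(t)) * dt

for d in (1.0, 0.1, 0.001):
    val = strip_integral(lambda tt: (tt + 1) * gaussian_pulse(tt, d), -1.0, 1.0)
    print(d, val)   # approaches (t + 1) evaluated at t = 0, i.e., 1, as d -> 0
```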
Chapter 2

Continuous-Time Systems
2.1 INTRODUCTION
Every physical system is broadly characterized by its ability to accept an input such as voltage, force, pressure, displacement, etc., and to produce an output in response to this input. For example, a radar receiver is an electronic system whose input is the reflection of an electromagnetic signal from the target and whose output is a video signal displayed on the radar screen. Similarly, a robot is a system whose input is an electric control signal and whose output is a motion or action on the part of the robot. A third example is a filter, whose input is a signal corrupted by noise and interference and whose output is the desired signal. In brief, a system can be viewed as a process that results in transforming input signals into output signals.
We are interested in both continuous-time and discrete-time systems. A continuous-time system is a system in which continuous-time input signals are transformed into continuous-time output signals. Such a system is represented pictorially as shown in Figure 2.1.1(a), where x(t) is the input and y(t) is the output. A discrete-time system is a system that transforms discrete-time inputs into discrete-time outputs. (See Figure 2.1.1(b).) Continuous-time systems are treated in this chapter, and discrete-time systems are discussed in Chapter 6.
In studying the behavior of systems, the procedure is to model mathematically each element that comprises the system and then to consider the interconnection of elements. The result is described mathematically either in the time domain, as in this chapter, or in the frequency domain, as in Chapters 3 and 4.
In this chapter, we show that the analysis of linear systems can be reduced to the study of the response of the system to basic input signals.
Figure 2.1.1 Examples of continuous-time and discrete-time systems.

2.2 CLASSIFICATION OF CONTINUOUS-TIME SYSTEMS
Our intent in this section is to lend additional substance to the concept of systems by discussing their classification according to the way the system interacts with the input signal. This interaction, which defines the model for the system, can be linear or nonlinear, time invariant or time varying, memoryless or with memory, causal or noncausal, stable or unstable, and deterministic or nondeterministic. For the most part, we are concerned with linear, time-invariant, deterministic systems. In this section, we briefly examine the properties of each of these classes.

2.2.1 Linear and Nonlinear Systems


When the system is linear, the superposition principle can be applied. This important
fact is precisely the reason that the techniques of linear-system analysis have been so
well developed. Superposition simply implies that the response resulting from several
input signals can be computed as the sum of the responses resulting from each input
signal acting alone. Mathematically, the superposition principle can be stated as fol-
lows: Let y,(l) be the response of a continuous-time system to an input r,(r) and yro
be the response corresponding to the input xr(t). Then the system is linear (follows the
principle of superposition) if
1. the response torr(r) +.rrO isy,(l) + yr(l); and
2. the response to ax,(t) is ay,(l), where a is any arbitrary constant,
The first property is referred to as the additivity property; the second is referred to as
the homogeneity property. These two properties defining a linear system cao be com-
bined into a single statement as
ax,(r) + pxr(t) + ayr(t) + pyr(t) (2.2.1)
where the notation x(r) -+ y0) represents the inpuuoutput relation of a continuous-
time system. A system is said to be nonlinear if Equation (2.2.1) is not valid for at least
one set of r,(l) , xr(t), a, and B.

Example 2.2.1
Consider the voltage divider shown in Figure 2.2.1 with R1 = R2. For input x(t) and output y(t), this is a linear system. The input/output relation can be explicitly written as

y(t) = R2/(R1 + R2) x(t) = (1/2) x(t)

Figure 2.2.1 System for Example 2.2.1.

i.e., the transformation involves only multiplication by a constant. To prove that the system is indeed linear, one has to show that Equation (2.2.1) is satisfied. Consider the input x(t) = ax1(t) + bx2(t). The corresponding output is

y(t) = (1/2) x(t) = (1/2)[ax1(t) + bx2(t)] = a (1/2) x1(t) + b (1/2) x2(t) = ay1(t) + by2(t)

where

y1(t) = (1/2) x1(t)   and   y2(t) = (1/2) x2(t)

On the other hand, if R1 is a voltage-dependent resistor such that R1 = R2 x(t), then the system is nonlinear. The input/output relation can then be written as

y(t) = R2/(R1 + R2) x(t) = x(t)/(x(t) + 1)

For an input of the form

x(t) = ax1(t) + bx2(t)

the output is

y(t) = [ax1(t) + bx2(t)] / [ax1(t) + bx2(t) + 1]

This system is nonlinear because

[ax1(t) + bx2(t)] / [ax1(t) + bx2(t) + 1]  ≠  a x1(t)/(x1(t) + 1) + b x2(t)/(x2(t) + 1)

for some x1(t), x2(t), a, and b (try x1(t) = x2(t) and a = 1, b = 2).

Example 2.2.2
Suppose we want to determine which of the following systems is linear:

(a) y(t) = K dx(t)/dt    (2.2.2)

(b) y(t) = exp[x(t)]    (2.2.3)

For part (a), consider the input

x(t) = ax1(t) + bx2(t)    (2.2.4)

The corresponding output is

y(t) = K d/dt [ax1(t) + bx2(t)]

which can be written as

y(t) = Ka d/dt x1(t) + Kb d/dt x2(t) = ay1(t) + by2(t)

where

y1(t) = K d/dt x1(t)   and   y2(t) = K d/dt x2(t)

so that the system described by Equation (2.2.2) is linear.
Comparing Equation (2.2.2) with

v(t) = L di(t)/dt

we conclude that an ideal inductor with input i(t) (current through the inductor) and output v(t) (voltage across the inductor) is a linear system (element). Similarly, we can show that a system that performs integration is a linear system. (See Problem 2.1(f).) Hence, an ideal capacitor is a linear system (element).
For part (b), we investigate the response of the system to the input in Equation (2.2.4):

y(t) = exp[ax1(t) + bx2(t)] = exp[ax1(t)] exp[bx2(t)] ≠ ay1(t) + by2(t)

Therefore, the system characterized by Equation (2.2.3) is nonlinear.

Example 2.2.3
Consider the RL circuit shown in Figure 2.2.2. This circuit can be viewed as a continuous-time system with input x(t) equal to voltage source e(t) and with output y(t) equal to the current in the inductor. Assume that at time t0, i_L(t0) = y(t0) = I0.

Figure 2.2.2 RL circuit for Example 2.2.3.

Applying Kirchhoff's current law at node a, we obtain

[v_a(t) - x(t)]/R1 + v_a(t)/R2 + y(t) = 0

Since

v_a(t) = v_L(t) = L dy(t)/dt

it follows that

(L/R1) dy(t)/dt + (L/R2) dy(t)/dt + y(t) = x(t)/R1

so that

dy(t)/dt + [R1R2/(L(R1 + R2))] y(t) = [R2/(L(R1 + R2))] x(t)    (2.2.5)

The differential equation, Equation (2.2.5), is called the input/output differential equation describing the system. To compute an explicit expression for y(t) in terms of x(t), we must solve the differential equation for an arbitrary input x(t) applied for t ≥ t0. The complete solution is of the form

y(t) = y(t0) exp[-(R1R2/(L(R1 + R2)))(t - t0)]
+ [R2/(L(R1 + R2))] ∫_{t0}^{t} exp[-(R1R2/(L(R1 + R2)))(t - τ)] x(τ) dτ,   t ≥ t0    (2.2.6)

According to Equation (2.2.1), this system is nonlinear unless y(t0) = 0. To prove this, consider the input x(t) = αx1(t) + βx2(t). The corresponding output is

y(t) = y(t0) exp[-(R1R2/(L(R1 + R2)))(t - t0)]
+ α [R2/(L(R1 + R2))] ∫_{t0}^{t} exp[-(R1R2/(L(R1 + R2)))(t - τ)] x1(τ) dτ
+ β [R2/(L(R1 + R2))] ∫_{t0}^{t} exp[-(R1R2/(L(R1 + R2)))(t - τ)] x2(τ) dτ
≠ αy1(t) + βy2(t)

This may seem surprising, since inductors and resistors are linear elements. However, the system in Figure 2.2.2 violates a very important property of linear systems, namely, that zero input should yield zero output. Therefore, if I0 = 0, then the system is linear.

The concept of linearity is very important in systems theory. The principle of superposition can be invoked to determine the response of a linear system to an arbitrary input if that input can be decomposed into the (possibly infinite) sum of several basic signals. The response to each basic signal can be computed separately and added to obtain the overall system response. This technique is used repeatedly throughout the text and in most cases yields closed-form mathematical results, which is not possible for nonlinear systems.
Many physical systems, when analyzed in detail, demonstrate nonlinear behavior. In such situations, a solution for a given set of initial conditions and excitation can be found either analytically or with the aid of a computer. Frequently, it is required to determine the behavior of the system in the neighborhood of this solution. A common technique of treating such problems is to approximate the system by a linear model that is valid in the neighborhood of the operating point. This technique is referred to as linearization. Some important examples are the small-signal analysis technique applied to transistor circuits and the small-signal model of a simple pendulum.

2.2.2 Time-Varying and Time-Invariant Systems


A system is said to be time invariant if a time shift in the input signal causes an identical time shift in the output signal. Specifically, if y(t) is the output corresponding to input x(t), a time-invariant system will have y(t - t0) as the output when x(t - t0) is the input. That is, the rule used to compute the system output does not depend on the time at which the input is applied.
The procedure for testing whether a system is time invariant is summarized in the following steps (a short numerical sketch of this test follows the list):
1. Let y1(t) be the output corresponding to x1(t).
2. Consider a second input, x2(t), obtained by shifting x1(t),
x2(t) = x1(t - t0)
and find the output y2(t) corresponding to the input x2(t).
3. From step 1, find y1(t - t0) and compare with y2(t).
4. If y2(t) = y1(t - t0), then the system is time invariant; otherwise, it is a time-varying system.

Example 2.2.4
We wish to determine whether the systems described by the following equations are time invariant:

(a) y(t) = cos x(t)

(b) dy(t)/dt = -t y(t) + x(t),   t > 0,   y(0) = 0

Consider the system in part (a), y(t) = cos x(t). From the steps listed before:

1. For input x1(t), the output is

y1(t) = cos x1(t)    (2.2.7)

2. Consider the second input, x2(t) = x1(t - t0). The corresponding output is

y2(t) = cos x2(t) = cos x1(t - t0)    (2.2.8)

3. From Equation (2.2.7),

y1(t - t0) = cos x1(t - t0)    (2.2.9)

4. Comparison of Equations (2.2.8) and (2.2.9) shows that the system y(t) = cos x(t) is time invariant.

Now consider the system in part (b).

1. If the input is x1(t), it can be easily verified by direct substitution in the differential equation that the output y1(t) is given by

y1(t) = ∫_0^t exp[-(t² - τ²)/2] x1(τ) dτ    (2.2.10)

2. Consider the input x2(t) = x1(t - t0). The corresponding output is

y2(t) = ∫_0^t exp[-(t² - τ²)/2] x2(τ) dτ = ∫_0^t exp[-(t² - τ²)/2] x1(τ - t0) dτ    (2.2.11)

3. From Equation (2.2.10),

y1(t - t0) = ∫_0^{t-t0} exp[-((t - t0)² - τ²)/2] x1(τ) dτ    (2.2.12)

4. Comparison of Equations (2.2.11) and (2.2.12) leads to the conclusion that the system is not time invariant.

2.2.3 Systems with and without Memory

For most systems, the inputs and outputs are functions of the independent variable. A system is said to be memoryless, or instantaneous, if the present value of the output depends only on the present value of the input. For example, a resistor is a memoryless system, since with input x(t) taken as the current and output y(t) taken as the voltage, the input/output relationship is

y(t) = Rx(t)

where R is the resistance. Thus, the value of y(t) at any instant depends only on the value of x(t) at that instant. On the other hand, a capacitor is an example of a system with memory. With input taken as the current and output as the voltage, the input/output relationship in the case of the capacitor is

y(t) = (1/C) ∫_{-∞}^{t} x(τ) dτ

where C is the capacitance. It is obvious that the output at any time t depends on the entire past history of the input.
If a system is memoryless, or instantaneous, then the input/output relationship can be written in the form

y(t) = f(x(t))    (2.2.13)

For linear systems, this relation reduces to

y(t) = k(t)x(t)

and if the system is also time invariant, we have

y(t) = kx(t)

where k is a constant.
An example of a linear, time-invariant, memoryless system is the mechanical damper. The linear dependence between force f(t) and velocity v(t) is

v(t) = (1/D) f(t)

where D is the damping constant.
A system whose response at the instant t is completely determined by the input signals over the past T seconds (the interval from t - T to t) is a finite-memory system having a memory of length T units of time.

Example 2.2.5
The output of a communication channel y(t) is related to its input x(t) by

y(t) = Σ_{i=0}^{N} a_i x(t - T_i)

It is clear that the output y(t) of the channel at time t depends not only on the input at time t, but also on the past history of x(t), e.g.,

y(0) = a_0 x(0) + a_1 x(-T_1) + ... + a_N x(-T_N)

Therefore, this system has a finite memory of T = max_i(T_i).

2.2.4 Causal Systems


A system is causal, or nonanticipatory (also known as physically realizable), if the output at any time t0 depends only on values of the input for t ≤ t0. Equivalently, if two inputs to a causal system are identical up to some time t0, the corresponding outputs must also be equal up to this same time, since a causal system cannot predict if the two inputs will be different after t0 (in the future). Mathematically, if

x1(t) = x2(t),   t ≤ t0

and the system is causal, then

y1(t) = y2(t),   t ≤ t0

A system is said to be noncausal or anticipatory if it is not causal. Causal systems are also referred to as physically realizable systems.

Example 2.2.6
In several applications, we are interested in the value of a signal x(t), not at the present time t, but at some time in the future, t + α, or at some time in the past, t - β. The signal y(t) = x(t + α) is called a prediction of x(t), while the signal y(t) = x(t - β) is the delayed version of x(t). The first system is called an ideal predictor, while the second system is an ideal delay.
Clearly, the predictor is noncausal, since the output depends on future values of the input. We can also verify this mathematically as follows. Consider the inputs

x1(t) = 1,   t ≤ 5
= exp(-t),   t > 5

and

x2(t) = 1,   t ≤ 5
= 0,   t > 5

so that x1(t) and x2(t) are identical up to t0 = 5.
Suppose α = 3. The corresponding outputs are

y1(t) = 1,   t ≤ 2
= exp[-(t + 3)],   t > 2

and

y2(t) = 1,   t ≤ 2
= 0,   t > 2

If the system is causal, y1(t) = y2(t) for all t ≤ 5. But y1(3) = exp(-6), while y2(3) = 0. Thus, the system is noncausal.
The ideal delay is causal, since its output depends only on past values of the input signal.
Example 2.2.7
We are often requirccl to dctcrnrinc lhc irvcrilgc valuc ol :t sigrrirl itl cach time instanl l'
we do this hy deiining thc r.r,,rirr.q ar.r,rr,.q(..r,'(r ) oI signul.r (r )..r" (, ) can b€ compulcd in
several waYs. for examPlc.

."(,)=.f/,r(r)r/r 12.2.t11
50 Continuous-Timo Systems Chapter 2

*"1t1= !rl'4 4,.1a., e.z.ts)


L,et r,(t) and .rr(t) be two signals which are identical for r lo but are differenr from each
other for , > rn. Then, for the system of Equation (2.2.14),
=

:i"00) =
+t. xr(t )dr (2.2.16(a))

I t',
i Jn-,
x2$)dr =.ri'(to)

Thus this system is causal.


For the system of Equation (2.2.15)
t
ri'(ro) = lr[)'i *,r,,0, (22.16(b))

which is not equal to

rj'(,,; =
i[_i"nr* (2.2.16(c))

since.r,(l) and.rr(t) are not the same for, > ,0. This s,'stem tr.,1r.t.;org, nsncafisel.

2.2.5 Invertibility and Inverse Systems


A system is invertible if, by observing the output, we can determine its input. That is, we can construct an inverse system that, when cascaded with the given system, as illustrated in Figure 2.2.3, yields an output equal to the original input to the given system. In other words, the inverse system "undoes" what the given system does to input x(t). So the effect of the given system can be eliminated by cascading it with its inverse system. Note that if two different inputs result in the same output, then the system is not invertible. The inverse of a causal system is not necessarily causal; in fact, it may not exist at all in any conventional sense. The use of the concept of a system inverse in the following chapters is primarily for mathematical convenience and does not require that such a system be physically realizable.

Figure 2.2.3 Concept of an inverse system.

Example 2.2.8
We want to determine if each of the following systems is invertible. If it is, we will construct the inverse system. If it is not, we will find two input signals to the system that have the same output.

(a) y(t) = 2x(t)

(b) y(t) = cos x(t)

(c) y(t) = ∫_{-∞}^{t} x(τ) dτ;   y(-∞) = 0

(d) y(t) = x(t + 1)

For part (a), the system y(t) = 2x(t) is invertible, with the inverse

z(t) = (1/2) y(t)

This idea is demonstrated in Figure 2.2.4.

Figure 2.2.4 Inverse system for part (a) of Example 2.2.8.

For part (b), the system y(t) = cos x(t) is noninvertible, since x(t) and x(t) + 2π give the same output.
For part (c), the system y(t) = ∫_{-∞}^{t} x(τ) dτ, y(-∞) = 0, is invertible, and the inverse system is the differentiator

z(t) = dy(t)/dt

For part (d), the system y(t) = x(t + 1) is invertible, and the inverse system is the one-unit delay

z(t) = y(t - 1)

In some applications, it is necessary to perform preliminary processing on the received signal to transform it into a signal that is easy to work with. If the preliminary processing is invertible, it can have no effect on the performance of the overall system. (See Problem 2.13.)

2.2.6 Stable Systems


One of the most important concepts in the study of systems is the notion of stability. Whereas many different types of stability can be defined, in this section we consider only one type, namely, bounded-input bounded-output (BIBO) stability. BIBO stability involves the behavior of the output response resulting from the application of a bounded input.
Signal x(t) is said to be bounded if its magnitude does not grow without bound, i.e.,

|x(t)| ≤ B < ∞,   for all t

A system is BIBO stable if, for any bounded input x(t), the response y(t) is also bounded. That is,

|x(t)| ≤ B1 < ∞   implies   |y(t)| ≤ B2 < ∞


Example 2.2.9
We want to determine which of these systems is stable:

(a) y(t) = exp[x(t)]

(b) y(t) = ∫_{-∞}^{t} x(τ) dτ

For the system of part (a), a bounded input x(t), such that |x(t)| ≤ B, results in an output y(t) with magnitude

|y(t)| = |exp[x(t)]| = exp[x(t)] ≤ exp[B] < ∞

Therefore, the output is also bounded, and the system is stable.
For part (b), consider as input x(t) the unit step function u(t). The output y(t) is then equal to

y(t) = ∫_{-∞}^{t} u(τ) dτ = r(t)

Thus, the bounded input u(t) produces an unbounded output r(t), and the system is not stable.

This example serves to emphasize that, for a system to be stable, all bounded inputs must give rise to bounded outputs. If we can find even one bounded input for which the output is not bounded, the system is unstable.

2.3 LINEAR TIME-INVARIANT SYSTEMS

In the previous section, we discussed a number of basic system properties. Two of these, linearity and time invariance, play a fundamental role in signal and system analysis because of the many physical phenomena that can be modeled by linear time-invariant systems and because a mathematical analysis of the behavior of such systems can be carried out in a fairly straightforward manner. In this section, we develop an important and useful representation for linear time-invariant (LTI) systems. This forms the foundation for linear-system theory and the different transforms encountered throughout the text.
A fundamental problem in system analysis is determining the response to some specified input. Analytically, this can be answered in many different ways. One obvious way is to solve the differential equation describing the system, subject to the specified input and initial conditions. In the following section, we introduce a second method that exploits the linearity and time invariance of the system. This development results in an important integral known as the convolution integral. In Chapters 3 and 4, we consider frequency-domain techniques to analyze LTI systems.

2.3.1 The Convolution Integral

Linear systems are governed by the superposition principle. Let the responses of the system to two inputs x1(t) and x2(t) be y1(t) and y2(t), respectively. The system is linear if the response to the input x(t) = a1x1(t) + a2x2(t) is equal to y(t) = a1y1(t) + a2y2(t). More generally, if the input x(t) is the weighted sum of any set of signals x_i(t), and if the response to x_i(t) is y_i(t), then, if the system is linear, the output y(t) will be the weighted sum of the responses y_i(t). That is, if

x(t) = a1x1(t) + a2x2(t) + ... + aN xN(t) = Σ_i a_i x_i(t)

we will have

y(t) = a1y1(t) + a2y2(t) + ... + aN yN(t) = Σ_i a_i y_i(t)

In Section 1.6, we demonstrated that the unit-step and unit-impulse functions can be used as building blocks to represent arbitrary signals. In fact, the sifting property of the δ function,

x(t) = ∫_{-∞}^{∞} x(τ) δ(t - τ) dτ    (2.3.1)

shows that any signal x(t) can be expressed as a continuum of weighted impulses.
Now consider a continuous-time system with input x(t). Using the superposition property of linear systems (Equation (2.2.1)), we can express output y(t) as a linear combination of the responses of the system to shifted impulse signals; that is,

y(t) = ∫_{-∞}^{∞} x(τ) h(t, τ) dτ    (2.3.2)

where h(t, τ) denotes the response of a linear system to the shifted impulse δ(t - τ). In other words, h(t, τ) is the output of the system at time t in response to the input δ(t - τ) applied at time τ. If, in addition to being linear, the system is also time invariant, then h(t, τ) should depend not on τ, but rather on t - τ; i.e., h(t, τ) = h(t - τ). This is because the time-invariance property implies that if h(t) is the response to δ(t), then the response to δ(t - τ) is simply h(t - τ). Thus, Equation (2.3.2) becomes

y(t) = ∫_{-∞}^{∞} x(τ) h(t - τ) dτ    (2.3.3)

The function h(t) is called the impulse response of the LTI system and represents the output of the system at time t due to a unit-impulse input occurring at t = 0 when the system is relaxed (zero initial conditions).
The integral relationship expressed in Equation (2.3.3) is called the convolution integral of signals x(t) and h(t) and relates the input and output of the system by means of the system impulse response. This operation is represented symbolically as

y(t) = x(t) * h(t)    (2.3.4)

One consequence of this representation is that the LTI system is completely characterized by its impulse response. It is important to know that the convolution y(t) = x(t) * h(t)

does not exist for all possible signals. The sufficient conditions for the convolution of two signals x(t) and h(t) to exist are:
1. Both x(t) and h(t) must be absolutely integrable over the interval (-∞, 0].
2. Both x(t) and h(t) must be absolutely integrable over the interval [0, ∞).
3. Either x(t) or h(t) or both must be absolutely integrable over the interval (-∞, ∞).
The signal x(t) is called absolutely integrable over the interval [a, b] if

∫_a^b |x(t)| dt < ∞    (2.3.5)

For example, the convolutions sin ωt * cos ωt, exp[t] * exp[t], and exp[t] * exp[-t] do not exist. In practice, when a closed form is not required, the convolution integral is often evaluated numerically on a discrete grid, as in the sketch below.
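A minimal sketch of such a numerical evaluation: since y(nΔt) ≈ Σ_k x(kΔt) h((n - k)Δt) Δt, the convolution integral becomes a discrete convolution scaled by Δt. The signal pair here is chosen to match Example 2.3.4 below (with a = 2 and b = 1); the grid spacing is an arbitrary choice.

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
x = np.exp(-2 * t)                      # x(t) = exp(-2t)u(t)
h = np.exp(-t)                          # h(t) = exp(-t)u(t)
y = np.convolve(x, h)[:len(t)] * dt     # Riemann-sum approximation of x(t)*h(t) on 0 <= t < 10

y_exact = np.exp(-t) - np.exp(-2 * t)   # closed form (see Example 2.3.4 with a = 2, b = 1)
print(np.max(np.abs(y - y_exact)))      # small discretization error; shrinks with dt
```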
Continuous-time convolution satisfies the following important properties:

Commutativity.

x(t) * h(t) = h(t) * x(t)

This property is proved by substitution of variables. The property implies that the roles of the input signal and the impulse response are interchangeable.

Associativity.

x(t) * h1(t) * h2(t) = [x(t) * h1(t)] * h2(t) = x(t) * [h1(t) * h2(t)]

This property is proved by changing the orders of integration. Associativity implies that a cascade combination of LTI systems can be replaced by a single system whose impulse response is the convolution of the individual impulse responses.

Distributivity.

x(t) * [h1(t) + h2(t)] = [x(t) * h1(t)] + [x(t) * h2(t)]

This property follows directly as a result of the linear property of integration. Distributivity states that a parallel combination of LTI systems is equivalent to a single system whose impulse response is the sum of the individual impulse responses in the parallel configuration. All three properties are illustrated in Figure 2.3.1.
Some interesting and useful additional properties of convolution integrals can be obtained by considering convolution with singularity signals, particularly the unit step, unit impulse, and unit doublet. From the defining relationships given in Chapter 1, it can be shown that

x(t) * δ(t) = ∫_{-∞}^{∞} x(τ) δ(t - τ) dτ = x(t)    (2.3.6)

Therefore, an LTI system with impulse response h(t) = δ(t) is the identity system. Now,

x(t) * u(t) = ∫_{-∞}^{∞} x(τ) u(t - τ) dτ = ∫_{-∞}^{t} x(τ) dτ    (2.3.7)
Figure 2.3.1 Properties of continuous-time convolution.

Consequently, an LTI system with impulse response h(t) = u(t) is a perfect integrator. Also,

x(t) * δ′(t) = ∫_{-∞}^{∞} x(τ) δ′(t - τ) dτ = x′(t)    (2.3.8)

so that an LTI system with impulse response h(t) = δ′(t) is a perfect differentiator. The previous discussions and the discussions in Chapter 1 point out the differences between the following three operations:

x(t)δ(t - a) = x(a)δ(t - a)

∫_{-∞}^{∞} x(t)δ(t - a) dt = x(a)

x(t) * δ(t - a) = x(t - a)

The result of the first (sampling property of the delta function) is a δ-function with strength x(a). The result of the second (sifting property of the delta function) is the value of the signal x(t) at t = a, and the result of the third (convolution property of the delta function) is a shifted version of x(t).

Example 2.3.1
Let the signal x(t) = aδ(t) + bδ(t - t0) be input to an LTI system with impulse response h(t) = K exp[-ct]u(t). The input is thus the weighted sum of two shifted δ-functions. Since the system is linear and time invariant, it follows that the output, y(t), can be expressed as the weighted sum of the responses to these δ-functions. By definition, the response of the system to a unit-impulse input is equal to h(t), so that

y(t) = a h(t) + b h(t - t0) = aK exp[-ct]u(t) + bK exp[-c(t - t0)]u(t - t0)

Example 2.3.2
The output y(t) of an optimum receiver in a communication system is related to its input x(t) by

y(t) = ∫_{t-T}^{t} x(τ) s(T - t + τ) dτ,   0 ≤ t ≤ T    (2.3.9)

where s(t) is a known signal with duration T. Comparison of Equation (2.3.9) with Equation (2.3.3) yields

h(t - τ) = s(T - t + τ),   0 ≤ t - τ ≤ T
= 0,   elsewhere

or

h(t) = s(T - t),   0 ≤ t ≤ T
= 0,   elsewhere

Such a system is called a matched filter. The system impulse response is s(t) reflected and shifted by T (the system is matched to s(t)).

Example 2.3.3
Consider the system described by

y(t) = (1/T) ∫_{t-T/2}^{t+T/2} x(τ) dτ

As noted earlier, this system computes the running average of signal x(t) over the interval (t - T/2, t + T/2).
We now let x(t) = δ(t) to find the impulse response of this system as

h(t) = (1/T) ∫_{t-T/2}^{t+T/2} δ(τ) dτ = 1/T,   -T/2 ≤ t ≤ T/2
= 0,   otherwise

where the last step follows from the sifting property, Equation (1.6.8), of the impulse function.

Example 2.3.4
Consider the LTI system with impulse response

h(t) = exp[-at]u(t),   a > 0

If the input to the system is

x(t) = exp[-bt]u(t),   b ≠ a

then the output y(t) is

y(t) = ∫_{-∞}^{∞} exp[-bτ]u(τ) exp[-a(t - τ)]u(t - τ) dτ

Note that

u(τ)u(t - τ) = 1,   0 ≤ τ ≤ t
= 0,   otherwise

Therefore,

y(t) = ∫_0^t exp[-at] exp[(a - b)τ] dτ = (1/(a - b)) [exp(-bt) - exp(-at)] u(t)
Example 2.3.5
Let us find the impulse response of the system shown in Figure 2.3.2 if

h1(t) = exp[-2t]u(t)
h2(t) = 2 exp[-t]u(t)
h3(t) = exp[-3t]u(t)
h4(t) = 4δ(t)

By using the associative and distributive properties of the impulse response, it follows that h(t) for the system of Figure 2.3.2 is

h(t) = h1(t) * h2(t) + h3(t) * h4(t)
= 2[exp(-t) - exp(-2t)]u(t) + 4 exp(-3t)u(t)

where the last step follows from Example 2.3.4 and the fact that x(t) * δ(t) = x(t).

Figure 2.3.2 System of Example 2.3.5.

Example 2.3.6
The convolution has the property that the area of the convolution integral is equal to the product of the areas of the two signals entering into the convolution. The area can be computed by integrating Equation (2.3.3) over the interval -∞ < t < ∞, giving

∫_{-∞}^{∞} y(t) dt = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x(τ) h(t - τ) dτ dt

Interchanging the orders of integration results in

∫_{-∞}^{∞} y(t) dt = ∫_{-∞}^{∞} x(τ) [∫_{-∞}^{∞} h(t - τ) dt] dτ
= ∫_{-∞}^{∞} x(τ) [area under h(t)] dτ
= area under x(t) × area under h(t)

This result is generalized later when we discuss Fourier and Laplace transforms, but for the moment, we can use it as a tool to quickly check the answer of a convolution integral.
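A quick numerical check of this area property (with an arbitrarily chosen pair of one-sided exponentials; the grid and window are also arbitrary) might look as follows:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 20, dt)
x = np.exp(-t)                    # area under x(t) is 1
h = 2 * np.exp(-4 * t)            # area under h(t) is 1/2
y = np.convolve(x, h) * dt        # full-length numerical convolution

print(np.sum(x) * dt * np.sum(h) * dt)   # product of the two areas, about 0.5
print(np.sum(y) * dt)                    # area under y(t), also about 0.5
```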

2.3.2 Graphical Interpretation of Convolution

Calculating x(t) * h(t) is conceptually no more difficult than ordinary integration when the two signals are continuous for all t. Often, however, one or both of the signals is defined in a piecewise fashion, and the graphical interpretation of convolution becomes especially helpful. We list in what follows the steps of this graphical aid to computing the convolution integration. These steps demonstrate how the convolution is computed graphically in the interval t_{i-1} ≤ t ≤ t_i, where the interval [t_{i-1}, t_i] is chosen such that the product x(τ)h(t - τ) is described by the same mathematical expression over the interval.

Step 1. For an arbitrary, but fixed, value of t in the interval [t_{i-1}, t_i], plot x(τ), h(t - τ), and the product g(t, τ) = x(τ)h(t - τ) as a function of τ. Note that h(t - τ) is a folded and shifted version of h(τ) and is equal to h(-τ) shifted to the right by t seconds.

Step 2. Integrate the product g(t, τ) as a function of τ. Note that the integrand g(t, τ) depends on t and τ, the latter being the variable of integration, which disappears after the integration is completed and the limits are imposed on the result. The integration can be viewed as the area under the curve represented by the integrand.

This procedure is illustrated by the following four examples.

Example 2.3.7
Consider the signals in Figure 2.3.3(a), where

x(t) = A exp[-t],   0 ≤ t < ∞
h(t) = t/T,   0 ≤ t ≤ T

Figure 2.3.3(b) shows x(τ), h(t - τ), and x(τ)h(t - τ) with t < 0. The value of t always equals the distance from the origin of x(τ) to the shifted origin of h(-τ) indicated by the dashed line. We see that the signals do not overlap; hence, the integrand equals zero, and

x(t) * h(t) = 0,   t < 0

When 0 ≤ t ≤ T, as shown in Figure 2.3.3(c), the signals overlap for 0 ≤ τ ≤ t, so t becomes the upper limit of integration, and

x(t) * h(t) = ∫_0^t A exp[-τ] ((t - τ)/T) dτ = (A/T){t - 1 + exp[-t]},   0 ≤ t ≤ T

Finally, when t > T, as shown in Figure 2.3.3(d), the signals overlap for t - T ≤ τ ≤ t, and

x(t) * h(t) = ∫_{t-T}^{t} A exp[-τ] ((t - τ)/T) dτ = (A/T){T - 1 + exp[-T]} exp[-(t - T)],   t > T

The complete result is plotted in Figure 2.3.3(e). For this example, the plot shows that convolution is a smoothing operation in the sense that x(t) * h(t) is smoother than either of the original signals.

Figure 2.3.3 Graphical interpretation of the convolution for Example 2.3.7.

Example 2.3.8
Let us determine the convolution

y(t) = rect(t/2a) * rect(t/2a)

Figure 2.3.4 illustrates the overlapping of the two rectangular pulses for different values of t and the resulting signal y(t). The result is expressed analytically as

y(t) = 0,   t < -2a
= 2a + t,   -2a ≤ t < 0
= 2a - t,   0 ≤ t < 2a
= 0,   t ≥ 2a

or, in more compact form,

y(t) = 2a - |t|,   |t| ≤ 2a
= 0,   |t| > 2a
= 2a Λ(t/2a)

This signal appears frequently in our discussion and is called the triangular signal. We use the notation Λ(t/2a) to denote the triangular signal that is of unit height, centered around t = 0, and has a base of length 4a.

Figure 2.3.4 Graphical solution of Example 2.3.8.

Example 2.3.9
Let us compute the convolution x(t) * h(t), where x(t) and h(t) are as shown in Figure 2.3.5.
Figure 2.3.5 demonstrates the overlapping of the two signals x(τ) and h(t - τ). We can see that, for t < -2, the product x(τ)h(t - τ) is always zero. For -2 ≤ t < -1, the product is a triangle with base t + 2 and height t + 2; therefore, the area is

y(t) = (1/2)(t + 2)²,   -2 ≤ t < -1

For -1 ≤ t < 0, the product is shown in Figure 2.3.5(c), and the area is

y(t) = 1 - t²/2,   -1 ≤ t < 0

For 0 ≤ t < 1, the product is a rectangle with base 1 - t and height 1; therefore, the area is

y(t) = 1 - t,   0 ≤ t < 1

For t ≥ 1, the product is always zero. Summarizing, we have

y(t) = 0,   t < -2
= (t + 2)²/2,   -2 ≤ t < -1
= 1 - t²/2,   -1 ≤ t < 0
= 1 - t,   0 ≤ t < 1
= 0,   t ≥ 1

Figure 2.3.5 Graphical interpretation of x(t) * h(t) for Example 2.3.9.

Example 2.3.10
The convolution of the two signals shown in Figure 2.3.6 is evaluated using the graphical interpretation.
From Figure 2.3.6, we can see that, for t < 0, the product x(τ)h(t - τ) is always zero for all τ; therefore, y(t) = 0. For 0 ≤ t < 1, the product is a triangle with base t and height t; therefore, y(t) = t²/2. For 1 ≤ t < 2, the area under the product is equal to

y(t) = 1 - (1/2)[(t - 1)² + (2 - t)²],   1 ≤ t < 2

For 2 ≤ t < 3, the product is a triangle with base 3 - t and height 3 - t; therefore, y(t) = (3 - t)²/2. For t > 3, the product x(τ)h(t - τ) is always zero. Summarizing, we have

y(t) = 0,   t < 0
= t²/2,   0 ≤ t < 1
= 1 - (1/2)[(t - 1)² + (2 - t)²],   1 ≤ t < 2
= (3 - t)²/2,   2 ≤ t < 3
= 0,   t > 3

Figure 2.3.6 Convolution of x(t) and h(t) in Example 2.3.10.

2.4 PROPERTIES OF LINEAR, TIME-INVARIANT SYSTEMS

The impulse response of an LTI system represents a complete description of the characteristics of the system. In this section, we examine the system properties discussed in Section 2.2 in terms of the system impulse response.

2.4.1 Memoryless LTI Systems

In Section 2.2.3, we defined a system to be memoryless if its output at any time depends only on the value of the input at the same time. There we saw that a memoryless, time-invariant system obeys an input/output relation of the form

y(t) = Kx(t)    (2.4.1)

for some constant K. By setting x(t) = δ(t) in Equation (2.4.1), we see that this system has the impulse response

h(t) = Kδ(t)    (2.4.2)

2.4.2 Causal LTI Systems

As was mentioned in Section 2.2.4, the output of a causal system depends only on the present and past values of the input. Using the convolution integral, we can relate this property to a corresponding property of the impulse response of an LTI system. Specifically, for a continuous-time system to be causal, y(t) must not depend on x(τ) for τ > t. From Equation (2.3.3), we can see that this will be so if

h(t) = 0   for   t < 0    (2.4.3)

In this case, the convolution integral becomes

y(t) = ∫_{-∞}^{t} x(τ) h(t - τ) dτ = ∫_0^{∞} h(τ) x(t - τ) dτ    (2.4.4)

As an example, the system h(t) = u(t) is causal, but the system h(t) = δ(t + t0), t0 > 0, is noncausal.
In general, x(t) is called a causal signal if

x(t) = 0,   t < 0

The signal is anticausal if x(t) = 0 for t ≥ 0. Any signal that does not contain any singularities (a delta function or its derivatives) at t = 0 can be written as the sum of a causal part x+(t) and an anticausal part x-(t), i.e.,

x(t) = x+(t) + x-(t)

For example, the exponential x(t) = exp[-t] can be written as

x(t) = exp[-t]u(t) + exp[-t]u(-t)

where the first term represents the causal part of x(t) and the second term represents the anticausal part of x(t). Note that multiplying the signal by the unit step ensures that the resulting signal is causal.

2.4.3 Invertible LTI Systems

Consider a continuous-time LTI system with impulse response h(t). In Section 2.2.5, we mentioned that a system is invertible only if we can design an inverse system that, when connected in cascade with the original system, yields an output equal to the system input. If h_i(t) represents the impulse response of the inverse system, then in terms of the convolution integral, we must, therefore, have

y(t) = h_i(t) * h(t) * x(t) = x(t)

From Equation (2.3.6), we conclude that h_i(t) must satisfy

h_i(t) * h(t) = h(t) * h_i(t) = δ(t)    (2.4.5)

As an example, the LTI system h_i(t) = δ(t + t0) is the inverse of the system h(t) = δ(t - t0).

2.4.4 Stable LTI Systems

A continuous-time system is stable if and only if every bounded input produces a bounded output. In order to relate this property to the impulse response of LTI systems, consider a bounded input x(t), i.e., |x(t)| ≤ B for all t. Suppose that this input is applied to an LTI system with impulse response h(t). Using Equation (2.3.3), we find that the magnitude of the output is

|y(t)| = |∫_{-∞}^{∞} h(τ) x(t - τ) dτ|
≤ ∫_{-∞}^{∞} |h(τ)| |x(t - τ)| dτ
≤ B ∫_{-∞}^{∞} |h(τ)| dτ    (2.4.6)

Therefore, the system is stable if

∫_{-∞}^{∞} |h(τ)| dτ < ∞    (2.4.7)

i.e., a sufficient condition for bounded-input, bounded-output stability of an LTI system is that its impulse response be absolutely integrable.
This is also a necessary condition for stability. That is, if the condition is violated, we can find bounded inputs for which the corresponding outputs are unbounded. For instance, let us fix t and choose as input the bounded signal x(τ) = sgn[h(t - τ)] or, equivalently, x(t - τ) = sgn[h(τ)]. Then

y(t) = ∫_{-∞}^{∞} h(τ) sgn[h(τ)] dτ = ∫_{-∞}^{∞} |h(τ)| dτ

Clearly, if h(t) is not absolutely integrable, y(t) will be unbounded.
As an example, the system with h(t) = exp[-t]u(t) is stable, whereas the system with h(t) = u(t) is unstable.
Exanple 2.4.1
we will determine if tbe system with impulse responses as shown are caueal or noncausal,
with or without memory' and stable or unstable:
(D hlt)= 1s*r1-'ra(t) + exp[3'lr(-0 + 6(' - l)
(ii) = -3exP[2r]u(t)
(iii) 'rz(t) = 550 + 5)
"3(r)
(iv) Iro(r) = ls
tilsrl
systems (i), (iii) aniliv) are noncausal sincs for r < 0, Ir,() * 0, i = 1, 3,4. Thur only sys-
tem (ii) is causal.
Since &() is trot of the form K 6(r) for any of the systems, it follo*r that dl the systems
have memorY.
To determine which of the systems are stable, we note that

+ exp[rd *, = E
[- -ln,f,lla= l,' rexpt-zrlar Jo

l- -lrrrtlla, = t- *xp@ptis
unbounded.

=s
['-ln,<ola,
and

J-
ro{da = zo
f--lao<ola'=
Thus Systems (i), (ii) and (iv) are stable. while System (ii) is unstable'
Sec. 2.5 Systems Described by Ditlerential Equations 67

2.5 SYSTEMS DESCRIBED BY DIFFERENTIAL


EQUATIONS
The response of the RC circuit in Example 7.2.7 was described in terms of a differen-
tial equation. In fact, the response of many physical systems can be described by dif-
ferential equations. Examples of such systems are electric netrvorks comprising ideal
resistors, capacitors, and inductors. and mechanical systems made of small sPrhgs,
dampers. and the like. In Secrion 2.5.1. we consider systems with linear input/output
differential equations with constant coefficients and show that such systems can be
realized (or simulated) using adders, multipliers, and integrators. We shall give also
some examples to dcmonstrate how to determine the impulse response of LTI systems
described by linear, constant-coefficient differential equations. In Chapters 4 and 5, we
shall prcsent an easier, more straightforward method of determining the impulse
response of LTI systems. namely, transforms.

2.6.1 Linear, Constant-Coefficient Ilifierential Equations


Consider the continuous-time systcm described by the inpuUoutput differential equation
,r#, * =
Zo,r' ip
(2.s.t)
F_ "!to:t
where the coefficients a,, i = 1,2, ..., N - 1, bi, i= 1, ..., ,DI are real constants and
N > M.In operator form, thc last equation can be written as

(r" - i o,o,)t(t)= (Sr,r,),r,r (2.s.2)

where D represents the differentiation operator that transforms -v(t) into is derivative
y'(r). To solve Equation (2.5.2), one needs the N initial conditions
y(ro). y'00), ..., y('-')(ro)
where ro is some instant at which the input.r(l) is applied to thc system and yi(t) is the
ith derivative of /(r).
The integer N is the order or dimension of the system. Note that if the ith deriva-
tive of the input r(t) contains an impulse or a derivative of an impulse, then, to solve
Equation (2.5.2) for, > ,0, it is necessary to knorv the initial conditions at time t = 6.
The reason is that the output.1,(t) and its derivatives up to order N - l can change
instantaneously at time r = Io. So initial conditions must be taken just prior to time lu.
Although we assume that the reader has some exPosure to solulion techniques for
ordinary linear differential cquatitrns, rve rvork out a first-ordcr case (N: 1) to review
the usuaI method of solving linear, constant-coefficient differential equations.

Example 2.6.1
Considcr the first-order LTI system that is described by the first-order differential equation

4v$) + ay(,) = bx(t) (2.s.3)


dt
68. Conilnuous-Time Systems Chapter 2

where a and D are arbitrary constants. The complete solution of Equation (2.5.3) consists
of the sum of the particular solution, yr(r), and the homogeneous solution, yrO:
y(t)=yr(t)+yr(t) (2.5.1)
The homogeneous differential eriuation
dv(t\
riz+oY(t)=o
has a solulion in the form

Yr(t) = c exPl-atl
Using the integrating factor method, we frnd that the particular solution is

,,111
' = Ju[exn[- a(t-,r)lbx(t)dr, ,=ro
Therefore. the general solution is

y(r)=Cexp[ -ar1 + ['exp1- a(r-r)lbr|)d1 ,>h (2S.5)

Note that in Equation (2.5.5), the constant C has not been determined yet. In order to
have the output completely determiued, we have to know the initial condition y(lo). Let
y0J = yo
Then, from Equation (2.5.5),
Yo = CexP[-aro]
Therefore,forr>6,
y(r) = y6exp[-a(, - ro)] + ['exp[-a(r
Jh
- t)lbtr)dt
If. for t < to, :(t) = 0, then the solution consists of only the homogeneous part:
y(t) =yoexp[-a(r - ro)], r(lo
Combining the solutions for I > to and , < lo, we have

y0) = yoexp[-a( - r0)l* {/'*r1-rO -,r)lbik)&}a(, -,0) e.s.6l


Since a linear system has the property that zero input produces zero output, the previ-
ous system is nonlinear if yo # 0. This can tre easily seen by lelting r(,) = 0 in Equation
(25.6) to yield
y(r) = yo exp [-a(, - ,o)]
Ifyo = 6, 16. system is not only linear, but also time invariant. (Verify this.)

2.62 Basic System Conponents

Any finite-dimensional, linear, time-invariant. continuous-time system given by Equa-


tion (25.1) with M < N can be realized or simulated using adders, subtractors, scalar
multipliers, and integrators. These components can be implemented using resistors,
capacitors, and operational amplifiers. .
Sec. 2.5 Syst€ms Described by Difler€ntal Equations 69

,u,{f-* r tt) = reot +i xkt at


ro Flgure 2S.l The integrator.

The Integrator. A basic element in the theory and practice ofsystem engi-
neering is the integrator. Mathematically, the input/output relation describing the inte-
grator. shown in Figure 2.5.1, is

y(t) = y\i + [' xe)dr, !E ro e.s.l)


Jh
The input/output differential equation of the integrator is
dv(rl
-i'= ,(r) (2.5.8)

If y0d = 0, then the integrator is said to be at rest.

Adders, subtractors, and scalar Multiplierc. These operators are illus-


trated in Figure 2.5.2.

.:1(l) .r, (r) + 12 (r) r, (r) :s (l) - .r1(r)

x2(()
(a)

,,,,-{IFye)-Kx(')
(c)

Flgure 2.52 Basic components: (a) adder, (b) subtracror, and (c) scalar
multiplier.

Erample 2.62
We will find the differential equation describing the syslem of Figure 2.5.3. [.et us denote
the output of the first summer as zr,(t). that of the second summer as ur(r) and that of the
first integrator as y, (r). Then
Dr(t) = y'(t) = .yr(r) + 4y(t) + 4x(t) (2.5.9)
Differentiale this equation and note that yi() = ?rr(r) to ger
y'(t) = ai(\ = or(r) + 4y'(t) + 4x'Q) (25.10)
which on substituting u,(l) = -y(t) + 2r(r) yields
y"(t) = 4y'(t) - y(,) + 4x'(t\ + Lt(t) (2.5.11)
70 Coniinuous-Time Systems Chapter 2

Hgure 2.5J Realization of the system in Example 2.5.2.

253 Simulation Diagrane for Continuous"Time Systeme

Consider the linear, time-invariant system described by Equation (2'5.2). This system
can be realized in several different ways. Depending on the application, a particular
one of these realizations may be preferable. In this section, we derive two different
canonical realizations; each canonical form leads to a dilferent realization, but the two
are equivalent. To derive the fint canonical form, we assume that M = N and rewrite
Equation (2.5.2) as
DNO - Dn4) + aN-t1an-r! - bn-,r) + "'
+ D(ary - Drr) + aov - Dox : 0 (2.s.12',)

Multiplying by D-n and rearranging gives


| = bxx + D-r(bN-rr - a,v-rI) + "'
* ,-trv-t)(D,.r - a.1,) + D-r(b* - oO) (2.s.13)

from which the flow diagram in Figure 2.5,4 can be drawn' starting with output y(t) at
the right and working to the left. The operator D-r stands for integrating & times.
Another useful simulation diagram can be obtained by converting the Nth-order dif-
ferential equation into two equivalent equations. Let

("'. ,?,
a,Di)u(t): x(t) (2.s.14)

Then

v0) = (i.','')'1a (2.s.ls)


Sec. 2.5 Systems Described by Ditlerential Equations 71

Flgure 2S.4 Simutation diagram using the frrst canonical form.

To verify that these two equations are equivalent to Equation (2.5,2), we substitute
Equation (2.5.15) into the left side of Equation (2.5.2) to obtain

(,, * \,,u)t<o: (io r,r".' * 5] ,, i ,,r"."),1,1

= (i ''('".' * i'''o"-"))'<'l
= (i
'''')'t'r
The variables o('v- r)(r), ..., ?(t) that are used in constructing .v(r ) and .r(t) in Equations
(2.5.14) and (2.5.15), respectively, are produced by successively integrating u(n)(t)'Th"
iimulaiion diagram corrisponding to Equations (2.5.141and (2.5.15) is given in Figure
2.5.5. We reteito this form of rePresentation as the second canonical form'
Note that in the second canonical form, the input of any inlegrator is exactly the
same as the output of the preceding integrator. For example, if the outPuts of two suc-
cessive integratbrs (counting from the right-hand side) are dcnoted by a. and a., 1,
respectively, then
a;(t) = a",. r(t)
This fact is used in Section 2.6,4 to develop state-variable representations that have
useful properties.

Exanple 254
We obrain a simulation diagram for the LTI system described hy the linear constanl-coe[-
ficient differential equation:
o
st
(.1

E
o
tr
!
I
tr
o
(.l
o)

!)
c)
tr
t
E
(!
hb
a!
t
tr
o
(!

E
tt)
vl
arl
C'
a,
E
E!
lz

72
S6c. 2.5 Systems Described by Ditlerontial Equations 73

v'(r) + 3v'(r) + 4lQ) = LY"(t) - 3x'(t) + x(t )

where l(t) is the output, and x(r) is the input to the system. To gct the first canonic form.
we rewrite this equation as

Dzy (t) = b(t) + D-tl-3.r(r) - 3y(r)l + D-'z[.r'(r) - +y (t)]


Integrating both sides twice with resPect to , felds
y(t) =u(t) + D-'[-3.r0) - 3y(r)l + D-2[.r1r) - qy$)l
The simulation diagram for this repres€ntation is given in Figure 2.5.6(a). For the second
canonic form. we set
u'(r) + 3o'0) + 4u(t) =:(1)
and

v(t) = 21)',(t) - 3a' (rl + a(t)


which leads to the simulation diagram of Figure 2.5.6(b).
In Section 2.6, we demonstrate how to use the two canonical forms just described to
derive two different state-variable representations,

2.6.4 Finrring the Impulse Response


The system impulse response can be determined from the differential equation describ-
ing the system. In later chapters, we find the impulse response using transform tech-
niques. We defined the impulse response lr(r) as the response y(r) when r0) = 6(t)
and y(t) = 0, cc < , < 0. The following examples demonslrate the procedure for
-
determining the impulse response from the system differential equation.

Example 2.8.4
Consider the system governed by
2y'(t)+ty1t1 =3v111
Setting.r(t) = 6(l) results in the response y(t) = h(t). Thereiorc. l(t) should satisfy the
differential equation
2h'(\ + ah?) = 36(t) (2s.t6)
The homogeneous part of the solution to this first-order differential equation is
h(t) = c (2.5.r7)
"*or-rrurr't
We predict that the panicular solution is zero, the motivation for this being that h(t) can-
not contain a delta function. Otherwise. /r'(r) would have a derivative of a delta function
that is not a part of the right-hand side of Equation (2.5.16). To find the constant C' we
substitute Equation (2..5.17) into Equation (2.5.15) to get

z expl-zrlu0)) + 4cexp[-Z]a(r) = 3 E(,)


!,C
Simplifying this expression results in
74 Continuous-Time Systems Chaptsr 2

(b)

Flgure 25.6 Simulation diagram of the second-order system in Example


2.53.

2C expl-2tl6(,) = -j6(,)
which. after applying the sampling property of the 6 function, is equivalent to

2C 6(t) = 3511;
Sec. 2.5 Systems Described by Ditlerentlal Equations 75

so that C = 1.5, We therefore have

h(t) = 1.5 exp[-2t]z(t)

In general, it can be shown that for r(t) : 6(t), the particular solution of Equation
(2.5.2) is of the form

Coto0r, M z N
he() =

{r M<N

where 6{i)1r) is the ith derivative of the 6 function. Since, in most cases of practical
interest, N > M, it follows that the particular solution is at most a 6 function.

E=n'rrple 2.6.6
C-onsider the first-order system

r'(t)+ry1t1 =211,1
The system impulse response should satis$ the follovirrg rlifferential equation:
h'(t)+3h(t)=26(,)
The homogeneous solution of this equation is of the form C,exp [-31]a(l). Let us assume
a particular solution of the form ftr(l) = Cr60). The general solution is therefore

h(t) = Crexp[-3r]u(t) + c260)


Substituting in the differential equation gives
C,[-3exp ( -3t)z(r) + erp(-3r)6(r)]
+ Cz6'(t) + 3[C, exp (-3r) u(t) + Cr6(t)l : 26(t)
Equate coefficients of 6(l) and a() on both sides and use thc sifting prop€rty of the
&function to get
Cr= 2, Cz= 0
This is in conformity with our previous discussion since M < N in rhis example and so we
should expect that Cz = O. We therefore have
h(t) = Ze*O1-r,tur,, (25.18)

f,'.rarnple 2.6.0

Consider the second-order system


y(t)+2y'(,)+2y(t): i/0) +3r'0) + 3r(,)
The characteristic roots of this differential equation are equal to - 1 + 71 so that the homo-
geneous solution is of the form [C,exp(-l)cost + C;exp(-r)sin llzO. Since M = N n
this case, we should expect that ir(t) = Ca60). Thus the impulse response is of the form
.:it, t:,lrji;g'i:i 'I
tq * il ,': ,rr li; ,i rll fl li
t4 ,T .4, ,{ r.T Continuous-Time Systems Chaptor 2

ft(r) = [C,exp(-t)cosr + Crexp(-t)sint]u(r) + Cr6(r)


so that
h'(t)=[C, - C,lexp(-t)cosru(t) - lcz+ CJexp(-t)sintz(t) + Cr60) + Ca6'(r)
and
h' (t) = - 2gr"*p ( - t)cos t u () + 2C,exp ( - l)sint u ()
+ (Cz - C,)D(r) + Cr6'0) + CaE"(r)
We now substitute these in the system differential equation. Collecting like terms in 6(r),
8(t) and 8(t) and solving for the coefficients C, gives
Ct=1, Cz=0. and Cr=l
so that the impulse response is

a0) = exp[-tlcos, u(t) + 6(r) (2.s.1e)

In Chapters 4 and 5, we use transform methods to find the impulse resPonse in a much
easier manner.

2,6 STATE-VARIABLE REPRESENTATION


In our previous discussions, we characterized linear time-invariant systems either by
their impulse response functions or by differential equations relating their inputs and
outputs. Frequently, the impulse response is the most convenient method of descriF
ing a system. By knowing the input of the system over the interval - @ < , < @, we can
obtain the output of the system by forming the convolution integral. In the case of the
differential-equation representation, the output is determined in terms of a set of ini-
tial conditions. If we want to find the output over some interval lo - I ( 11, w€ rlltst
know not only the input over this interval, but alss a certain number of initial condi-
tions that must be sufEcient to describe how any past inputs (i.e., for , < ,o) affect the
output of the system during that interval.
In this section, we discuss the method of state-variable representation of systems.
The representation of systems in this form has many advantages:
I. It provides an insight into the behavior of the system that neither the impulse
response nor the differential-equation method does.
2. It can be easily adapted for solution by analog or digital computer techniques.
3. It can be extended to nonlinear and time-varying systems.
4. It allows us to handle systems with many inputs and outputs.
The computer solution feature by itself is the reason that state-variable methods are
widely used in analyzing highly complex systems.
We define the state of the system as the minimal amount of information that is suf-
ficient to determine the output of the system for all t > tr, provided that the input to
the system is also known for all times , > ,0. The variables that contain this informa-
Sec.2.6 State-VariableFlepresentation

tion are called the state variables. Given the state of the.system at ,0 and the input ftom
,0 to ,r, we can find both the output and the state at r]lNote that t"his definition of the
state of the system applies only to causal systems (systems in which future inputs can-
not affect the output).

z.AJ State Equations

Consider the single-input, single-output, second-order, continuous-time system


described by Equation (2.5.11). Figure 2.5.3 depicts a realization of the system. Since
integlators are elements with memory (i.e,, they contain infornration about the past
history of the system), it is natural to choose the outputs of integrators as the state of
the system at any time ,. Note that a continuous-Iime system of dimension N is real-
ized by N integrators and is, therefore, completely represented by N state variables. It
isoften advantageous to think of the state variables as the components of an Ndimen-
sional vector referred to as state vector v(r). Throughout the book, boldface lowercase
letters are used to denote vectors, and boldface uppercase letters are used to denote
matrices. In the example under consideration, we defrne the components of the state
vector v(t) as

ar(t) = y(t)
ar(t) = uiQ)
a'r(t) = -n "'r1t) - art;r(t) + box(t)
Expressing v'(l) in terms of v(t) yields

[;;l;l]
=
[-'.. -: ] [;:l]l . [],].,,, (2.6.1')

The output y(l) can be expressed in terms of the state vector v(r) as

y(r)=r1 ,[;j[:]] (2.6.2)

ln this representation, Equation (2.6.Ij is called the state equation and Equation (2.6.2)
is called the output equation.
In general, a state-variable description of an N-dimensional. single-input. single-out-
put linear, time-invariant system is written in the form
v'(,)=Av(r)+bro (2.6.3)

y(t)=cv(t)+dx(t) (2.6.4)

where A is an N X N square matrix that has constant elements. b is an N X 1 column


vector, and c is a I x N row veclor. In the most general case. A, b. c, and d are func'
tions of time, so that we have.a timc-varying system. The solution of the state equation
for such systems generally requires the use of a computer. In this book, we restrict our
attention to time-invariant systems in which all the coefficients arc constant.
78 Contlnuous-Time Systems Chapter 2

v(tl

ffgre Z6.f RLC circuit for


Example 2.6.1.

Exanple 2.8.1
Consider the RLC series circuit shos'n in Figure 2.6.1. By choosing the voltage a(r6s the
capacitor and the current through the inductor as the state variables, we obtain the fol'
lowing sute equations:

cuf; = wtt)

'ry=:(r)-Ro'o-u'(r)
v(t) : u,(t)
In matrix form, these become

v'(r) =
[-', ]'"],u,. i:1,u,
Lz t) LzJ
v(') = [t
o]vo
If we assume that C = I /2 and L =R=l,wehave

v'(r) =
t-? -?]'t'r . [f]'t't
Y(t) = [1 0]v(t)

2.62 Tlne-Dornain Solution of t:he State Equatione


consider the single-input, single-output, linear, time-invariant, continuous-time system
described by the following state equations:
v'(r)=a"1r;+b:o (2.6.s)

y(t)=cv(t)+dx(t) (2.6.6)

The stare vecror v(r)is an explicit function of time, but it also depends implicitly on the
initial srate v(ro) = vu, the initial time ru. and the input r(t). Solving the state equations
Se. 2.6 State-Variable Repr€sentation 79

means finding that functional dependence. wc can then compute the outPut y(r) by
using Equation (2.6.6).
As a natural generalization of the solution to the scalar first-order differential equa-
tion, we would Jxpect the solution to the homogeneous matrix-differential equation to
be of the form
v(r) = exp [At]vo
where exp [Ar] is an N X N matrix exponential of functions of time and is defined by
the matrix power series

exp[Ar] = I + A, * + "' + Ar i; . (2.6.7)


^'*.* ^'f
where I is the N X N identity matrix. Using this definition, we can establish the fol-
lowing properties:
exp[A(r, + ,z)] = exp[Al,]exp[Arr] (2.6.8)

[exp[Ar]l-' = exp[-Ar] Q-6.9)

To prove Equation (2.6.8), we expand exp[Ar,] and exp [Atr] in power series and
multiply out terns to obtain

exp[At,]exp[Atr] =
[t
* nr, *
^'*.*
"'+ l*l;i +
l.
* *, * n'*.+ A'4 + "'+ .. .]
[, "-f
= exp[A(tr + tr) ]

BY setting tz : -\ = ,, it follows that


exp[-Ar]exp[At] = I
so that
exp[-Ar] = [exp[tu]l-'
is well known that the scalar exponential exp[al] is the only function which
pos-
It
sesses the property that its derivative and integral are also exponential
functions- with
scaea amititudes. This observation holds true for the matrix exponential as'well. We
require that A have an inverse for the integrat to exist. To show that the derivative
of ixp[Al] is also a matrix exponential, we differentiate Equation (2.6.7) with respect
to , to get

fiexplst)=
0+A *X.n'* Tn' * "' * \,' Ao * "'
: * n, * n'7,. +"' + Art' .']"
[, "'iI
=n[I+A,+A2'i.*n'f * "*"-;i. ']
80 Conlinuous-Time Systoms Chapter 2

Thus,

explArlA = aexplArl (2.6.10)


ftexplt^tl=
Nou multiplying Equation (2.65) on the left by exp [-Al] and rearrangingterms,we obtain

exp[-Ar][v'(t) - A = exp[-At]bxo
"(t)]
Using Equation (2.6.10), we can write the last equation as

(""R[-a4r(r)) = exp[-Ar]hr(r) (2.6.1t)


,4
Integrating both sides of Equation (2.6.11) between ro and I gives

exp[-Ar]vO + exp[-tuotvo = ['exp1-A"]hx(t)dr (2.6.12)


J4

Multiplying Equation (2.6.12) by exp [At] and rearranging terms, we obtain the com-
plete solution of Equation (2.6.5) in the form

vO = exp[A(r - ro)]vo + f exptl( - t)lbx(r)dr (2.6.t3)

The matrix exponential exp [At] is called the state transition matrix and is denoted by
O(t). The complete output response y(t) is obtained by substituting Equation (2.6.13)
into Equation (2.6.6). The result is

y(r) =co(r- ro)vo+ ['"o1r-t)bx(r)dr + dx(t), t>h


th
(2.6.14)

Using the sifting property of the unit impulse 6(r), we can rewrite Equation (2.6.14) as

y(t):cO(-6)vo+ |rl lctD(t-r)b+d6(r-r)lr(t)dt, t>to (2.6.15)


Jro

Observe that the ccmplete solution is the sum of two terms. The first term is the
response when the input.rO is zero and is called the zero-input response. The second
term is the response when the initial state vo is zero and is called the zero-state
response. Further inspection of the zero-state response r€veals that this term is the con-
volution of input .r(r) with cO(r) + d 6(r). Comparing this result wirh Equation (2.3.3),
we conclude that the impulse response of the system is

,(,)= Ico(t)b+d6(,) r>o (2.6.16)


t0 otherwise
That is, the impulse response is composed of two terms. The first term is due to the
contribution of the state-transition matrix, and the second term is a straight-through
path frorq input to output. Equation (2.6.16) can be used to compute the impulse
response directly from the coefficient matrices of the state model of the system.
S€c.2.6 State-VadableRepresentation 8l

Erample2.63
Consider the linear. time-invariant. continuous-time system described by the differen-
tial equation
v"(tr + y'{t) -zy(t1 = 111y

The state-space model for this system is

.
,,(,) =
[: _ l] ,u,
[?] ,u,
y(r) = [t 0] v0)
so that

c=r,
"=[: -l]' '=[l]' and or

To determine the zero-input response of the system to a specified initial-condition vector

",= [;]
we have to calculate O(t). The powers of ihe matrix A are

* =l-i -l] : :] "'= [-,: ,;]


"'= t
so that

'., ; ] t ; #'1.
o0, =
[l :]. [; ;]. [ fl.
T) [.,'
L-r' L" +".1.L,j" *".J.'
l+r'-r*a*... ,-1*i-ut'*.1I
fFt4t2tr5
=l u + f 1,0 *... r t +3r,'
l" - - - -1,t*]i,"* ._l

The zero-input response of the system is

rr ,
vo) =
[, -, ),'.,,: ii,- , ,.'r,j-.ri,.:;,',: ] tl
=t+tz_:_;.
The impulse response of the system is
s2 Continuous-Timo Systems Chapter 2

| ,. u-;-;."' ,-'i. *'i -*r. tl [rl


+3r,' -2,' *fit *'l L,l
,r(r)=u 0l I s
r - t
lr-,'+tr-lrrto+"'
tj 5,
=t- t2z+r- ur+"'
Note that for.r() = 0, the slate at time t is
v(') = o(') vo

I - ll
t* '"- t4
r*z *. .l
_t
-1"-u +f-l1to .l
Exanple 2.63
Given the continuous-time sYstem
l--r o ol ltl
<,r
',u,= s _i ;l 'ar. [1]
L
y(r) = I-t 2 0l vo
wecomPutethetransitionmatrixandtheimpulseresponseofthesystem.ByusingEqua.
tion (2.6.7). we have
t: 00
ool [-r 0 ol 2
.', = r ol* lo-a, 6tz -atz
+ ..'
[i o ,-l Lo -l 1]. 0
0 ?rz -?J2

1 - 4,+ 6t2 +... 4, - Er2 + "'


-t+2t2+... l-2t2+"'
Using Equation (2.6.16), we find the system impulse response to be
Irl
no)=t-r 2 ot oG)lil=r-,,*3jP+"'
Lr-l

Itis clear from Equations (2.6.13), (2.6.15), and (2.6.16) that in order to-determine'
-have
v(r), y(r), or ft (t), we to first obtain exp [Ar]. The preceding two examples demon-
S€c.2.6 State-VariableRepreseniation 5tl

strate how to use the power serics method to find O(r) = exp [Atl. Although the
method is straightforward and the form is acceptable, the major problem is that it is
usually not possible to recognize a closed form corresponding to this solution. Another
method that can be used comes from the Cayley-Hamilton theorem, which states that
any arbitrary N x N matrix A satisfies its characteristic equation, that is,
det(A -,u) :0
The Cayley-Hamilton theorem gives a means of expressing any power of a matrix A
in terms of a linear combination of A for m = 0, 1, ..., N - l.

Exanpte 2.6.4
Given

^=[i ;]
it follows that
det(A-[)=,t2-7i+6
and the given matrix satisfies

Az-7A+6I:o
Therefore, A2 can bc expressed in terms of A and I by
A2 = 7A - 6I (2.6.171

Also. Ar can be found by multiplying Equation (2.6.17) by A and then using Equation
(2.6.17) again:

A3 = 7A2 - 6A:7(7A - 6I) - 6A


= 43A - 421
Similarly, any power of A can be found as a linear combination of A and I by this method.
Further, we can determine A-r. if it exists, by multiplying Equation (2.6.17) by A-r and
rearranging terms to obtain
1

A-'=;[7r - A]

It follows from our previous discussion that we can use thc Cayley-Hamilton the-
orem to write exp[Ar] as a linear combination of the terms (Ar)i, i = 0, 1,2, ...,
N - 1, so that
Ar- t
exp[Ar] = ) r,(r) l' (2.6.18)
i=0
If A has distinct eigeuvalues ,1,. we can obtain 7,(t) by solving the set of equations
N-t
exp[I,r]=)r,ft)'r.i i=1,...,ts (2.6.19)
i=0
U Continuous-Time Systems Chapter 2

For ihe case of repeated eigenvalues, the procedure is a little more complex, as we will
learn later. (See Appendix C for details.)

&anple 2"6.6
Suppose that we want to 6ad the transition matrix for the system with

[-r I ol
A=l' I -3 ol
L o o -3_J
nsing the Cayley-Hamilton method. First, we calculate the eigenvalues of A as.l, = -2,
t', = -3, and 13 = -4. It follows from the Cayley-Hamilton theorem that we can write
erp [Al] as
erp[Ar] = 7o(r)t + n(r)A + 7r()A2
where the coefficiens 7oO, n(l), and 'y2() arc the solution of the set of equations
expl-Al = ro(t) - 21 r(l + 41 rQ)
exp[-3r] = .yo(t) - 3rr(r) + hr(t)
exp[-4r] = ro(t) - 4tr(r) + 16.y2(r)
ftom which
1o(t) = 3exp[-4t] - Sexp[-3t] + 6 exp[-a]
q1
1,(t) = 6expl-3t] +
iexp[-4tl- rexp[-2t]
I
,r1t1 = i(exn[- 4tl - 2expl-lt] + exp[-2.r1)
Thus, exp [At] is
[r o ol [-s r ol l-ro -5 0l
exp [Ar] = 1o(,)lo r ol+.y,1r11 r -3 ol+1,1111-o 10 0l
Loorl Lo o-3J Lo oeJ

l"*f-oO *).a1-21 -rrexpl-tt'l +rrexpl-al0

-lexefefl + )expt-zrl l "*,-oO + lexpl-t:l 0

00 exp [- 3t]

EwarrFle 2.0.8
[-et us repeat Example 2.6.5 for the system with
[-r o ol
A=l o -4 4l
Lo -r o_j
Sec. 2.6 State.Variable Bepresentation 85

This matrix has ,1, = - I, and ,1, = ,lr = -2. Thus,


O0) = exp[Arl = 7o(r) I + 7,(r) A + 7r(r) Ar
The coefficients yo(t), yr(l), and 7r(l) are obtained by using

exp[,rrl = 7o(r) + 7,(),t + yr(tl[2 (2.6.20)

However, when we use I = -1, -2, and -2 in this equalion, we get

exp[-4 ='yo(r) - ]r0) + r20)


exp[-2rf = ro(t) - z.rlt) + 4.\120)
exp[-2t] = ro(t) - 21,(t) + 41r(t)
Since one eigenvalue is repeated. we have only two equations in threc unknowns. To com-
pletely determine 7o(t), 1(t), and 7r(t). we need anolher equation. which we catr gener-
ate by differentiating Equation (2.6.20) with respecl to,l to obtain
t exp [,ltf = 7,() + 2yr()A
Thus, the coefficients 70(l), 7r(r), and yr(t) are obtained as the solution to the following
three equations:

exp[-tl = ro(t)
- 1'O + 1r(t)
exp[-2t] = ro(t) - 21 lt) + 412()
t expl-?tl : 1,(t) - 41r(l)
Solving for 7(t) lelds
ro(r) = 4exP[-t] - 3exp[-2t] - 2rexpl-2rl
rr(r) = 4 exp[-r] - 4 exp[-2tl - 3rexp[-2t]
rz(t) = exp[-t] - exp[-2t] - texp[-2t]
so that

ftool [-t ool [t o o-l


o(r)=ro(r)10 I 0l+r,(r)l 0 -4 al+1r(r)10 12 -16
[o 4 -4)
I

Loool Lo-rol
[-exp[-r] 0 o I
=| o exp[-2r] -2texpl-2rl 4texpl-2tl I

[O -rexp[-2r] -4exp[-r] + 4exp[-2,] + 4texpl-2rl)

Other methods for calculating O(t) also are available. The readcr should keep in mind
that no one method is easiest for all applications.
The state transition matrix possesses several properties, some of which are as follows:

1. Transition property
o(r, - lo) = o(r, - tr)o(lr - to) Q'6'21)
86 Continuous-Time Systems Chapter 2

2. Inversion property
O(ro - r): O-t(t - lo) (2.6.22)
3. Separation property
O(t-to)=O(,)O-'(,J (2.6.21)

These properties can be easily established by using the properties of the matrix expo-
nentiat exp[Ar], namely, Equations (2.6.8) and (2.6.9). For instance, the transition
proprty follows from
o0z-b)=exPlA(q-to)l
= exp[A(lz - t' + t, - to)]
: exp[A(, - t,)]exp[A(rr - ,o)]
: .b(tz - tr)o(rr - ,o)
The inversion prop€rty follows directly from Equation (2.5.9). Finally, the separation
property is obtained by substituting ,z = t and ,r = 0 in Equation (2.6.21) and then using
the inversion property.

2.63 State Equatione in First Canonical Form

In Section 2.5, we discussed techniques for deriving two different canonical simulation
diagrams for an LTI system. These diagrams can be used to develop two state-variable
representations. The state equation in the first canonical form is obtained by choosing
as a state variable the output of each integrator in Figure 2.5.4. In this case, the state
equations have the form
y(t)=o,(t)+bn.r(r)
oi1t1 - -on-ry(r) + ?,r(l) + bn-rr(r)
oi[) = - ar-ry() + zr(t) + bn-rx(t)
:

: -aty0) + o,v0) + brx(r)


oi-,(t)
oi(): -aov[)+box(t) (2.6.24)

By using the first equation in Equation (2.6.24) to eliminate y(t), the differential equa-
tions for the state variables can be written in the matrix form

i(t ) -att-t I O "'01 ur (r) bn-, - an -rbr


-ax-z 0l "' Ol
a1
az() br -, - ar -rbn
ai;('i.) : : : : :l r(r) (2.6.25)
,t
'"i r)
,U,, -o, ;;': il
0 o "'Ol
rriv- r(t) b, - arbn
--ao - ","(r) - _ bo- aobr

tains ones above the diagonal, and the first column of matrix A consists of the nega'
Sec. 2.6 Slai€-Variable Representation 87

rives of the cocfficients a,. Also. the output y(t) can bc writtcn in tcrms of the state vec-
tor v(t) as

v(,) = t1o , + bNx(,, (2626)


[:ll]
Observe that this form of state-variable rePresentation can be written down directly
from the original Equation (2.5.1)

Example 2.6.7
The [irst-canonical-form state-variable representation of the LTI system described by
zy"(t) + Ay'(t) + 3y0) = Ax'(t\ + zx(t)

=
rit',,,
t;ir:rt
t; lr;;r;tl.
.v(r)=rr ,[;;t]l

kample 2.6.8
rhe Lrr o"*;l;ol
"""* ,nur+ v'(r) + av(r) = r11; * 5''1',

haslhennt**"'i;l;:""'i1
r or[o,61r I zr

L;tBl: [-l I ;]L;:lll.L-11.."'


Irr,(r)l
y(r) = [r o oll,,i,i l*,t,1
L,,,r(r)l

2.6.4 State Equations in Second Canonical Form


Another state-variable form can be obtained from the simulation diagram of Figure
2.5.5. Here, again, the state variables are chosen to be the output of each integrator-
The equations for the state variables are now
Continuous-Time Systems Chapter 2

uiQ) = ur(t)
a''(t) = tt'(t)

oh-r()= 0,v(0

= -an-run() - an-ran-r(t) - ... - aoo,(t)


?r.ln(r) +.ro
y(t) = boar(t) + brar(t) + ..' + Dx-r on(t) +
6"(r(t) - auur(t) - arar(t) -'.' - a,r-, o,r(r))
In matrix form, Equation (2.6.27) can be written as

0 I o 0-
Ir,(,)l 001:0
4l ',t'l I - (2628,
" 1,r,,-l ;;;:i
_-% -a2 t3].[l..,
-ar -att_t_
Ir,o) I
y(r) : [(bo - asby)(b1- arbp)...(b,,-r - a*-,bill "1" | + 0,".r(r) (2.6.2s)

Lr,t,ll
This representation is called the second canonical form. Note that here the ones are
above the diagonal, but the a's go across the bottom row of the N x N transition
matrix. The second canonical state representation form can be written directly upon
inspection of the original differential equation describing the system.

Elrarnple 2.63
The second canonical form of the state equation of the system described by
y'(t) - zy"(t) + y'() + at() = 1'1,1* trr,,
is

ol [o,(r)
i
[;i[;i.l=si r ll ,,(,)
L,io)l L-+ -r 2l Lu,(r) l.[l]..,,
Ir,,(r)l
,(,) = [-3 -1 zll a,@ | + r(t)
Lr,(,ll

The frrst and second canonical forms are only two of many possible state-variable
representations of a continuous-time system. [n other words, the state-variable repre-
sentation of a continuous-time system is not unique. For an N-dimensional system,
Sec. 2.6 Slate-Variable Representation 39

there are an infinite number of state models that represent thal systcm. Howcvci, aii
N-dimensional state models are equivalent in thc scnse that lhey ltavc cxactly thc sarnc
input/output relationship. Mathenratically, a set of state equation'i \\'ith strtc vector
v1i; can be lransformed to a new sct with state v:ctor q(r) by using a transtorlrlation
P such that
q(t) = P v(t) (2.6.30)

where P is an invertible N x N matrix so that Y(r) can be obtaincd trom q(t). It can be
shown (see Problem 2.34) that the new state and output equations arc
q'(r) = Ar q(t) + b,-r(t) (2.6.31)

y(r) = cr q(l) + d, r(t) (2.6.32)

Ar=PAP-r, b,=Pb, c,=6P-r,7, =ri (2.6.33)

The only restriction on P is that its inverse exist. Since there are an infinite number of
such matrices, we conclude that rve can generate an infinite numhcr of equivalent N-
dimensional state models.
If we envisage v(r) as a vector with N coordinates. the transformation in Equation
(2.6.30) ..pr"."--nt. transformation that takes the old state coordinates and
".*rdinate
mapS them to thc new statc Coordinates. The new state model can have one or more
of the coefficients A,, b,, and c, in a special form. Such forms result in a significant sim-
plification in the solution of certain classes of problems: examples of these forms are
the diagonal form and the two canonical forms discussed in this chapter'

Example 2.6.1O

The state equations of a certain system are given by

[;;ll] =
[: :r [;:[]l . [l].,,,
We need to tind the state equations for this system in terrns of lhc new state vadablcs qt
and qr. where

[;lll] =
[l l][;:l;l]
The equation for the state variable q is given by Equation (2 611)' rvherc

^,
=
'nt-'= [l ilt;;ltl-ll
[r rl
=[l iltiilliil
lz :l
_[o ol
-Lo ,)
90 Continuous-Time Sysiems Chapter 2

and

br=Pb=
tllltll= til
E=ernple 2.6.11
Let us find the matrix P that transforms the second-canonical-form state equations

[;;8] =
t : ll[l[;i].
into the fint-canonical-form state equations
[?].u,

[;;8]= [-i iJ[;:[i]. []1.,,,


We desire the transformation such that PAP-| = Ar or
pA = Arp
Substituling for A and Aq, we obtain

3:,:^ i:lt : ll= [-l il1::, i:]


Equating the four elements on the two sides yields

-2pp= -lpn * pzr

pt - 3pn= -3pnt pu
-2Pa = -\Pn
Pa - 3Pd= -2Pn
The reader will immediately recognize that the second and third equations are identical.
Similarly, lhe first and fourth equations are identical. Hence, two equations may be dis-
carded. This leaves us with only two equations and four unknowns. Note, however, that
the constraint Pb = br provides us with the following two additional equalions:
prz=7
pz=2
Solving the four equations simultaneously yields

,=[-3 ]l

Exanple 2.6.12
If A, is a diagonal malrir with entries 1,, il can easily be verified that the transilion malrix
exp[A,t] is also a diagonal matrix with entries exp [,t,rl and is hence easily evaluated. We
can use this result to find lhe fransition matrix for any other representation with A =
PA,P-r, since
Sec.2.6 Stat€-VariableRopresentation 91

exp[Arf = I + Ar +
], n',' * "'

= I + PArP-rr + 1l;tAiP-'r'+...

=PII*n,,* ,1ntu' * ]r-' = P exp [ArrlP-l


For the matrices A and A, of Example 2.6.10, it follows that

0I
exp IA,t] -_ l-exp(6r)
L o exp(2r)l

so that

_ [exp(6r) 0 I
exp IAt]
L 0 exp(z)l
_ lf eu + e'l t* - "'1
2lee - eL eu + e')

2.6.6 StabilityConeiderations
Earlier in this section, we found a general expression for the state vector v(r) of the
system with state matrix A and initial state vo. The solution of this system consisS.bf
two components, the first (zero-input) due to the initial state vo and the second (zero-
state) due to input r(r). For the continuous-time system to be stable, it is required that
not only the oulput, but also all signals internal to the system, remain bounded when
a bounded input is applied. If at least one of the state variables grows without bound,
then the system is unstable.
Since the set of eigenvalues of the matrix A determines the behavior of exp [Al]' and
since exp[At] is used in evaluating the two components in the expression for the state
vector v(l), we expect the eigenvalues of A to play an important role in determining
the stability of the system. Indeed, there exists a technique to tcst lhe stability of con-
tinuous-time systems wlthout solving for the state vector. This tcchnique follows from
the Cayley-Hamilton theorem. We saw earlier that, using this thcorem, we can write
the elements of exp [At], and hence the components of the state vector, as functions of
the exponintials exp[,1,t], exp[Arr], ..., exp [,trt], where t,,i = 1.2...., N, are the eigen-
values of the matrix A. For thesc terms to be bounded, the real part of rlu i: 1,2, ...,
N, must be negative. Thus, the condition for stability of a continuous-time system is
that all eigenvalue$sf the state-transition matrix should have negative real parts.
The foregoing conclusion also follows from the fact that the eigenvalues of A are
identical with the roots of the characteristic equation associated with the differential
equation describing the model.
92 Continuous-Time Syslems Chaptor 2

Example 2.6.13
Consider the continuous-time system whose state matrix is
[z -rl
n=Lo -rJ
The eigenvalues of A are \ = -2 and lz = l, and hence. the system is unstable.

Exanple 2.&14
Consider the system described by the equations

,,(,) =
[-1 -:]"u,. [l],u,
y0) = [ llv(r)
A simulation diagram of this system is shown in Figure 2.6.2. The system can thus be con-
sidered as the cascade ofthe two systems shown inside the dashed lines.
The eigenvalues of A are lr = 1 and Az = -2. Henc€, the system is unstable. The tran-
sition matrix of the system is
exp[Arl = yo(r)I + zr0)A (2.6.v)
where yo(l) and 1(l) are the solutions of

explrl : 1o(l) + 1,(l)


exp[-21 = 1o(t) - 21,(r)
Solving these two equations simultaneously yields

ro(0 = JexPPl* |"*P1-21

r,(r) = |exptrl - |expt-zrl


Substituting into Equation (2.6.y), we obtain

exPl't1= l-exPl'l o
L-exp[r] + erp[-2r] ' ^ '-l
exp[-z]J
[Jt us now look at the responlte of the sptem to a unit-step input when the system is
initially (at time 6 = 0) relaxed, i.e., the initial state vector vo is the zero vector. The out-
put of the system is then

y1r) =
f cexptA(r - r)lbr(t)dt

= (; - ; exet- t:t)u(t)
The state vector at any time l > 0 is
\o
e.l
q)
c.
E

E1

C
OJ

q,)

oo
(!
!
o

.A
6l
\o
N
tu

E!
E,

93
94 Continuous-Timo Systems Chapter 2

v0): erptn (r - t)l b.r(t)dr


f
[1exp[r]- l)z(r) I
l<ttz - exp[r] - t/2 expl - aD4l))
It is clear by looking at y(r) that the output of the system is bounded, whereas inspection
of the state variables reveals that the internal signals in the system are not bounded. To
pave the way to explain what has happened, let us look at the inpuuoutput differential
equation. From the output and state equations of the model, we have

,,,,__::f
,:i,:ii!,,u,._,,1,,i?u,,,,,,,,
= -zy(t) + x(t)
The solution of the last first-order differential equation does not contain any terms that
grow without bouad. It is thus clear that the unstable term exp [ll thal appeaIs in the state
variables o,(t) and ur(t) does not appear in the output yO. This term has, in some sense.
been *cancelled out' at the output of the second system.

The preceding example demonstrates again the importance of the state-variable


representation. State-variable models allow us to examine the internal nature of the
system. Often, many important aspects of the system may go unnoticed in the compu-
tation or observation of only the output variable. In short, the state-variable techniques
have the advantage that all internal components of the system can be made apparent.

2,7 SUMMARY
A continuous-time system is a transformation that operates on a continuous-time
input signal to produce a continuous-time output sigral.
a A sptem is linear if it follows the principle of superposition.
I A system is time invariant if a time shift in the input signal causes an identical time
shift in the output.
A system is memoryless if the present value of the output y(r) depends only on the
present value of the input r(t).
a A system is causal if the output y(10) depends only on values of the input x(l) for r s ro.
a A system is invertible if, by observing the output, we can determine the input.
a A system is BIBO stable if bounded inputs result in bounded outputs.
a A linear, time-invariant (LTI) system is completely characterized by its impulse
re.sponse i (t).
The output y(l) of an LTI system is the convolution of the input r(r) with the
impulse response of the system:
Ssc. 2.7 Summary 95

/(r) = .r(,) t, h(t) = J|_._ xft)h(t - t)ttt


. The convolution operation gives only the zero-state responsc of the system.
o The convolution operator is commutative, associative. and distributive.
. The step response of a linear system with impulse responsc /r(t) is

,14 = ['_- n6'1a,


J

e An LTI system is causal if h(t) = 0 for t < 0. The system is stable if and only if
.
f.ln{"ila, -.
o An LTI system is described by a linear, constant-coefficient, differential equation
of the form

(r, * !. o,o,)r(t)= (5 r,r)..r,r


o A simulation diagram is a block-diagram rePresentation of a system with oomPo-
nents consisting of scalar multipliers (amplifiers), summers. or integrators.
. A system can be simulated or realized in several different ways. All these realiza-
tions are cquivalent. Dcpcnding on the application, a particular one of these real-
izations may be preferable.
. The state equation of an LTI system in state-variable form is
v'(t) = 4 v(r) + br(,)
o The output equation of an LTI system in state-variable form is

v(r) = c v(,) + dr(,)


r The matrix O(r) = .*O 14r] is called the statc-transition matrix.
. The state-transition matrix has the following properties:
Transition property: O(t, - tn) = tD(tz - t')O(t, - t,)
Inversion ProPerty: O(ru - ,) = O-'(t - to)
Separation property: O(, - h) = o(l)O-t(1,)
o The time-domain solution of lhe state equation is

y(r) = co(r - to)vu + ['co(r.- t)bx(r)dr + d't(r). l>lo


. rD(l) can be evaluated using the Cayley-Hamilton theorem. which states that any
matrix A satisfies its own characteristic equation.
o The matrix A in the first canonical form coniains ones above the diagonal, and the
first column consists of the negatives of the coefficients a, in Equation (2'5.2)'
!.

r The matrix A in the second canonical form contains ones above the diagonal, and
the a,'s go across the bottom row.
. A continuous-time system is stable if and only if all the eigenvalues of the transition
matrix A have negative real parts.

2.8 CHECKLIST OF IMPORTANT TERMS


Caueal syotem Multpller
Cayley4lamllton theorem Output equatlon
Convoluton lntegral Scalar multlpller
Flrst canonlcal lorm Second canonlcal lorm
lmpulse responso ot llnear syetem Slmulatlon dlagram
Impulse r€sponse ol LTI system Stable system
!ntegrstor Stale-transluon mstdr
lnvsnse aystem State varlable
Unear system Subtractor
Unear, Ume.lnyadant system Summer
Memoryloee oysiem Tlmc'lnyarlant systgm

2.9 PROBLEMS
2.L Determine whether the systems described by the following input/ourpur relationships are
linear or nonlinear, causal or noncausal, time invariant or time variant, and memorylass
or with memory.
(a) y(t) = 7:(t) + 3
(b) vG) = bz(t) + 3x(,)
(c) Y(t) = Ar(tl
(d) y(t) = AaQ)
(e) y(r) =
{:,;l;,, ;::
tt
(O .y(t) =
l__x(tldr
t'
(oy(r)=lorft)a,.,>0
(h) Y(t) = r(t - 5)
(l) y(r) = exp [.r(t)]
0) v(r) = x(t) x(t - 2)
(k) y(r) = il,')i,,nro,
0 ++zy(t)=bz(t)
Sec. 2,9 Probl€ms 97

aA Use the model for 5(t) given by

s(,) = l$-l recr (,/a )

to prove Equation (2.3.I )


Lj. Evaluate the following convolutions:
(a) rect (t - a/a)t 6(, - 1r)
(b) rect (t/a) * rect(tla)
(c) rect (/a) * a(r)
(d) rect (t/a) * 5gn(t)
(e) u(t) * x(r)
(I) t[u(t) - a(, - l)l* a(t)
(gl rect(t/al * r(1'1
(h) r(t) * [sgn(t) + u(-t - l)l
(i) [u(t + l) - u(t - l)lsgn(r) r a(r1
0) u(t) * 6'1;;
2.4 Graphically deiermine the convolution of the pairs of signals shown in Figure P24.

(b)

(c)

Figure P2.4
Continuous-Time Sysiems Chapter 2

Use the convolution integral to find the response y(t) of an LTI system with impulse
response ft() to input r(t):
(a) .r(t) : exp[-rlzo h(t) = t*'1-or"r',
(b) r(t) = t exp [-tlzo h(t) =
"1'7
(c) .r(t) = exp[-4u0) + u(t) h(t) = u(t)
(d) .r(t) = z(t) h(t) = eapl-Altt () + 6(r)
(e) -rO = exP[-at]zo h(t) = u(t) - expl-ulu(t - b)
(f) .r(t) = 6(, - l) + exP[-t]z(t) fr(t) = exP [-u]z(t)
a6. The cross correlation of two different signals is defined as

R ,0) : f--,Ol Y<, - t)dr =


I- ,n * t) Y(rldt
(a) Show that
R,y(r):r(t)*y(-t)
O) Show that the cross correlation does not obey the commutative law.
(c) Show that R r(r) is symmetric (R rG) = Ry,(-r)).
e7. Find the cross correlation between a signal .r(t) and the signal y(t) = r(t - l) + z(t) for
B/A = 0,O.1, and 1, where.r(r) and z(t) are as shown in Figure P2.7.

Figore HL7

a& The autocorrelation is a special case of cross correlation with y() : :(t). In this case.
t"
R,(r) = R,0) = | r(r)r(r + r)dr
t__
(a) Show that
R,(0) = 5, the energy of r(r)
(b) Show that
R,(r) s R,(0) (use the Schwarz inequality)

(c) Show that the autoconelation of z(t) = :0) + y(t) is


R.() = R,(r) + Ry(r) + &,0) + R,(r)
Sec. 2.9 Problems 99

a9. . Consider an LTI sysrem whose impulse response is rl(r). Ler r(, ) and y() be the input and
output of the system. respcclively. Show that
R,(t) = R.(0 * h(t) * h(-t)
al0. The input to an LTI system with impulse response i (r) is the complex exponenlial
exp !'r,rll. Show that the corresponding ourput is

y(t) = expljutl H(a)


where

r'
H(@) = exp[-yrorldr
l__h(t)
2ll. Determine whether the continuous-time LTI systems characterized by the following
impulse responses ale causal or noncausal, and stable or unstable. Justify your answers.
(a) A(t) = exp[-3tl sin(t)r(r)
(b) ,,(r) = exp [4tlu( -t)
(c) ft(t) = (-r) exp[-rla(-r)
(d) (') = exP[-lzl]
'',r(r)
(e) = l(r - ztlexp[-lzrll
(f) ,(r) = rcctlt,lzl
(g) ft(t) = 6O + exp[-3t]u(t)
:
(h) ,(r) 6'(t) + exp [-2rl
(i) fi(,) = 6'(,) + exp[-l2rl]
0) ,t(r) = (l - r) rect(r/3)
212 For each of the following impulse rcsponses, determine whethcr it is invertible. For those
that are, find the inverse system.
(al h(t) = 511 a 21
(b) l'(t) = uo
(c) ft(t) = 611 - 3;
(d) n(,) = rcct(t/ )
(e) It(r) = exp [- rlz (r )
2.13. Consider the two syslems shown in Figures P2.13(a) and P2.l-3(b). System I op€rates on
r(l) to give an output /t (r) that is optimum according to some dcsired criterion. System II
first operates on r(t) with an invertible operation (subsystem I) to obtain z0) and then
operates on z(t) to obtain an output y20) by an operation that is optimum according to
the same criterion as in system I.
(a) Can system II perform better than system I? (Remember the assumption that system
I is the optimum operation on r(t).)
(b) Replace the optimum operation on z(r) by two subsystems, as shown in Figure
P2.13(c). Now the overall system works as rvell as system I. Can the new system be
better than system II? (Remember thal system II perfornts the optimum operation
on z (r ).)
(c) What do you conclude from parts (a) and (b)?
(d) Does the system have to be linear for part (c) to be true?
100 Continuous-Time Syst€ms Chapter 2

r--'------l ---------'l
System I System ll I
I

.r (r)
I

I
I vt0l

i----------J t' I

(a) (b)

z (r't )r(,)

(c)

Figure 8 .13

L14 Delermine whether the system in Figure P2.14 is BIBO slable.

Figure EL14
h,(t) = expl-2rlu(tl
hr(tl = exPl-7tlu(r)
,rr(t) = exp[- t]u(t)
440) : D(,)
,rs() = exP [- 3tlu(t)
2"15. The input and outPul y(r) of a linear, time-invariant system are as shown in Figure
r(r)
P2.15. Sketch the resF)nses to the following inputs:
(s) :0 + 2)
O) 2r(r) + 3x(-l)
(c) .r(, - l/2) - x(t + t/21
... &(t)
(o, a
S€c. 2.9 Problems 101

ffgure PZIS

Lt6" Find the impulse response of the inilially relaxed system shown in Figure P2.16.

i-----;------l

.t(r)' l,(, ) /(r) = r,R (r)

ll
l___________-J
Flgure HLf6

217. Find the impulse response of the initially relaxed system shown in Figure P2.17. Use this
result to find the output of the system when the input is
/ 0\
(e) r. (, - 2/
or,(. l)
(c) rect 0/0), where d = l/RC

i;1
r(r) = u(r) )= uc (r)

ri
rl
l-----------J

Ftgure YLIT
102 Continuous-Time Systems Chapter 2

2.18. Repeat Problem 2.17 for the circuit shown in Figure P2.18.

I
I
I

x(r) . €(r) v (r) = rh (r)

I
I
I I
L- _--________J

Bigure HLlt
2.19. Show that any system that can be described by a differential equation of the form

W.p*"or #=2r,<offf)
is linear. (Assume zero initial conditions.)
220. Show that any system that can be described by the differential equation in Problem 2.19
is time invarianl. Assume that all the coefficients are constants.
22L A vehicle of mass M is traveling on a paved surface with coefficient of friction k. Assume
that lhe position of the car at time ,, relative lo some reference, is y(t) and the driving force
applied to the vehicle is r(r). Use Newton's second law of motion to wrile the differential
equation describing the system Show that the system is an LTI system. Can this system
be time varying?
22L Consider a pendulum of length I and mass M as shown in Figwe V|'.X\.T\e displacement
from the equilibrium position is ld; hence, the acceleratioh is Id'. The input x(t) is the
force applied to the ma$s M tangential to the direction of motion of the mass. The restor-
ing force is the tangential component Mg sin 0. Neglect the mass of the rod and the air
resistance. Use Newton's second law of motion to write the differential equation describ-
ing lhe system. Is this system linear? As an approximation, assume that dis small enough
that sin , : ,. Is the system now linear?

Mass ll Figue P222


Sec. 2.9 Problems tgg

2.23. For the system realized by the interconnection shown in Figure P2.23. find the differential
equation relating the input .r(r) lo the oulput y(r).

Flgure P2.Zl

2.4. For the system simulated by the diagram shorvn in Figtre Y2.24, (lctermine the differen-
tial equation describing the system.

tigure P2.24

2.25. Consider the series RLC circuit shown in Figure P2.25.


(a) Derive the second-order differential equation that dcscribes the system.
(b) Determine the [irst- and second-canonical-form simulation diagrams.
1U Continuous-Tims Systems Chapter 2

r (t)

Itgore P:225
2,26. Givet an LTI system described by
y-(t) + 3f(t) - y'(,) - zy(t) = 31'111- ,r,,
Find the first- and second-canonical-forrr simulation diagrams.
,Jr. Find the imprrlse resfpnse of the initially relaxed system shown in Figure p2.27.

r(r) = u(r) y (r) = up(r) ,u,--.-|-lr,*n


l-----rar

ii
Flgure HL27

L& Find the state equations in the first and second canonical forms for the system described
by the dilferential equation
y"(t) + 2.sy'(t) + y(t) =.r'0) + r(,)
2.$. For the circuit shown in Figure P2.29, choose the inductor current and the capacitor vott-
age as state variables, and write the state equations.

L=2H

Ifgure HL29
230. Repeat Problem 2.28 for the s]'stem described by the differential equation

f(t) + f(t) - zy(t): x,(t) - u(t)


Sec. 2.9 Problems 105

Z3l. Calculate exp[Arl for the following matrices. Use both the serics-cxpansion and Cayley-
Hamilton methods.
[-r o ol
(a)A=ltt 0 -z 0l
L o o -31
[-r
tt 2 -11
(b)A=l o -r ol
L o o -ll
[-r 1 -ll
tcll=l o t -rl
L o'-31
2.32 Using state-variable techniques, find the impulse respoirse for the system described by the
differential equation
y"(t) + 6y'(t) + 8y(r) = r'(r) + r(r)
Assume that the system is initially relaxed, i.e., y'(0) = 0 and y"(0) = 0.
243. Use state-variable techniques to find the impulse response of lhc system described by
y'(t) + 7y'(t) + lzy(t) = /(r) -
3r'(r) + 4.r(I)
Assume that the system is initially relaxed, i.e., y'(0) : 0 and y(0) = 0.
23. Consider the system describcd by
v'0) = A v(t) + b.r(t)
y0)=cv(t)+dr0)
Select the change of variable given by
z(t) : P v(l)
where P is a square matrix with inverse P-1. Show that the new state equations are
z'(t) = Ar z14 + b' r(t)
y(t) = cr z(t) + drx(t)
where
Ar = PAP-r
br=Pb
ct = cP-l
dt= d
235. Consider the system described by the differential equation
y"(t) + 3y'(t) + 21'() =.r'(r) -.r(r)
(a) Write the state equations in the fint canonical form.
(b) Write the state equations in the second canonical form.
(c) Use Problem 2.34 to find the matrix P which will transform the firsl canonical form
into the second.
(d) Find the state equations if we transform the second canonical form usiitg the matrix

p=l'
L-l
t-l
-ll
Chapter 3

Fourier Series

3.1 INTRODUCTION
As we have seen in the previous chapter, we can obtain the response of a linear sys-
tem to an arbitrary input by representing it in terms of basic signals. The specific sig-
nals used were the shifted 6-functions. Often, it is convenient to choose a set of
orthogonal waveforms as the basic signals. There are several reasons for doing this.
First, it is mathematically convenient to represent an arbitrary signal as a weighted sum
of orthogonal waveforms, since many of the calculations involving signals are simpli-
Iied by using such a relresentation. Second, it is possible to visualize the signal as a
vector in an orthogonal coordinate system, with the orthogonal waveforms being coor-
dinates. Finally, representation in terms of orthogonal basis functions provides a con-
venient means of solving for the response of linear systems to arbitrary inpus. In this
chapter, we will consider the representation of an arbitrary signal ove: a finite interval
in terms of some set of orthogonal basis functions.
For periodic signals, a convenient choice for an orthogonal basis is the set of har-
monically related complex exponentials. The choice of these waveforms is appropri-
ate. since such complex exponentials are periodic, are relatively easy to manipulate
mathematically. and yield results that have a meaningful physical interpretation. The
representation of a periodic signal in terms of complex exponentials, or equivalently,
in terms of sine and cosine waveforms, leads to the Fourier series that are used exten-
sively in all fields of science and engineering. The Fourier series is named after the
French physicist Jean Baptiste Fourier (1768-1830), who was the first to suggest that
periodic sigrals could be represented by a sum of sinusoids.
So far, we have only considered time-domain descriptions of continuous-time signals
and systems. In this chapter, we introduce the concept of frequency-domain reprisen-

106
Sec. 3.2 Orthogonal Bepresentations ot Signals 1O7

tations. We learn how to decompose periodic signals into their [requency components.
The results can be extended to aperiodic signals, as will be shorvn in chapter 4.
periodic signals occur in a rvide range of physical phenomcna. A few examples of
such signals alre acoustic and electromagnetic rvaves of most types, the
vertical dis-
placem-ent of a mechanical pendulum, the periodic vihrations o[ musical instruments,
and the beautiful pattems of crystal structures.
In the present lhapter, we discuss basic concepts, facts, and tcchniques in connec-
tion with itourier serLs. Illustrative examples and some importirnt engineering appli-
in Section 3'2'
cations are included. We begin by considering orthogonal basis functions
In Section 3.3, rve consider l.rioaic signals and develop procedurcs for_resolving such
3'4, we
iignals into a iinear combination of complex exponential functi.ns. In Section
di-scuss the sufficient condilions for a periodic signal to bc
represcnted in terms of a
all
Fourier series. These conditions are known as the Dirichlet conditions. Fortunately,
the periodic signals that we deal with in practice obey these conditions. As
with any
properties' These prop-
other mathema-ticat tool, Fourier series possess several useful
helP_s us. move eas
erties are developed in Section 3.5. Understanding such properties
itf frorn the time domain to the frequency domain and vice ve rsa. In Section 3.6, we
periodic
ule tne properties of the Fourier series to find the response of LTI systems to
signals. The effects of truncating the Fourier series and the Gibbs
phenomenon are dis-
crissed in Section 3.7. We will iee that whenever we attempt to reconstruct
a discon-
in the form of
iinuou, signal from its Fourier series, we encounter a strange behavior
signal overshoot at the discont in uities. This overshoot effecl does not
go away even
,r-h"n *a increase the number of terms used in reconslructing the signal.

3,2 ORTHOGONAL REPRESENTATIONS


OF SIGNAL
engt-
orthogonal representations of signals are of general importance in solving many
*.rin"g proUf.*s. Two of the ,"uion, this is so are that it is mathcnlatically convenient
i. ,"pr".,I* arbitrary signals as a weighted sum of orthogonal waveforms, since many and
of thl calculations involving signals aie simplified by using such a representation
ihat it is possible to visualizi thi signal as a vector in an orthogonll coordinate system,
with the orthogonal waveforms being the unit coordinates'
Asetofsign'also,,i=0,-t-1,-r2,...,issaidtobeorthogonalovcraninterval(a,D)if

I r,,,,.1r, = {f-' ', -oo


= Er6(l - k) (3.2.1)

where Sf (r) stands for the complex conjugate of the signal and 6(/ - &)' called the
Kronecker delta function, is defined as
[t-
6(r-k)=10 t=k (3.2.2)
t+k
108 Fourier Series Chapler 3

If $,(l) corresponds to a voltage or a current waveform associated with a l-ohm resis-


tive load, then, from Equation (1.4.1), Ek is the energy dissipated in the load in b - a
seconds due to signal QrG). If the constants E1 are all equal to I, the 6i(r) are said to
be orthonormal pignals. Normalizing any set of signals g,(t) is achieved by dividing
each signal by V4.
Exanple 32.1
The signals Q,,(r) = sir,at,nr = 1,2,3,..., form an orrhogonal set on the interval
-rr<r<zrbecause

[", o^<,lo: <o = /-" Ginzrrxsinnr)


ar dr

=iL,*r^ - ")tdt -;f, cas(m + n)tdt

_[", m=n
[0, m* n
Since the energy in each sigml equals tr, the following set of signals constitutes an ortho-
normal set over the interval -t <,< ,rr:
sinr sin2r sin 3r
\r;' -G- ' \F'"
Example 8.2.2
The signals go() = exp[i (2rkt)/Tl, k = 0, *1, form an orthogonal set on rhe
interval (0, I) because
=2,...,

J' o,{,)0,'o),, = I,'


*rlffil *, ['94] "

'and
hence, the signals O
=
{l
ll-D expl]2r kt)/Tl constitute an orthonormal set over the
intewal 0<r<f. "iI
trernple 893
The three signals shown in Figure 3.2.1 are orthonormal, since they are mutually orthog-
onal and each has unit energy.

Orthonormal sets are useful in that they lead to a series representation of signals in
a relatively simple fashion. Let 0,(r) be an orthonormal set of signals on an interval
a < t < D, and let.r(l) be a given signal with finite energy over the same interval. We
can reprqsent x(t) in terms of [0rl by a convergent series as
Sec. 3.2 Orthogonal Representations of Signals 1@

0t(r) Q2ltt dlltt

Ilgure 321 Three orthonormal signals.

.r(r) = ) c,g,(r) (3.2.3)


i- -=
where

co = | xQ)g!(t)dt, k -0.'+1,+2.... (3.2.4)

Equation (3.2.4) follows by multiplying Equation (3.2.3) by Qf (r) and integrating the
result over the range of definition ofx(t). Note that the coefficients can be computed
independently of each other. If the sct g,(t) is only an orlhogonal sel, then Equation
(3.2.4) takes the form (see Problem 3.5)

(3.2.s)
',=Il"''ooi(r)dt
The series representation of Equation (3.2.3) is called a generalized Fourier series
ofx(l), and the constants c,, i = 0, *1, *2,..., are called the Fouricr coefficients with
respect to the orthogonal set [0,(r)].
In general, the representalion of an arbitrary signal in a series expansion of the form
of Equation (3.2.3) requires that the sum on the right side be an infinite sum. In prac.
tice, however, we can use only a finite number of terms on the right side. When we trun-
cate the infinite sum on the right to a finite number of terms, we get an approximation
i(r) to the original signal r(r). When we use only M terms, the representation enor is
M
enU) = x(r) - r=l
) c,6,0) (3.2.6)

The energy in this error is


rh rb M

J.lrrQ)l'a = J lrG) - )c,6,(r)l'zdr


E,(M) = (3.2.7)

It can be shown that for aoy M, the choice of co according to Equation (3.2.4) mini-
mizes the energy in the error er. (See Problem 3.4.)
Certain classes of signals-finite- length digital communication signals, for exam-
ple-permit expansion in terms of a finite number of orthogonal funclions l0r(r)1. In
110 Fourier Series Chapter 3

this case, i = 1,2,... , N, where N is the dimension of the set of signals. The series rep-
resentation is then reduced to
r0) = xro(') (3.2.E)

where the vectors x and O0) are defined as


I = [c,, cr, ... c1y]r
o(r) = [0,(,),0z(r), ...0r(r)]' (3.2.e)

and the superscript Idenotes vector transposition. The normalized energy ofr(t) over
the interval a< t<bis
E,=
I lx(t)l2dr=
t ,* c,$,(t'1lzdt
Al /v .h
= coci 6,111oi1r1ar
->,:_, J,

= ) l",l'e, (3.2.10)

This result relates the energy of the signal x(r) to the sum of the squares of the orthog-
onal series coefficients, modified by the energy in each coordinate. Er. If orthonormal
signals are used, we have E, = 1, and Equation (3.2.10) reduces 1o
N
E" = )
i- l
1",1,

In terms of the coefficient vector x. this can be written as

5. = (x*)rx = xt x (3.2. r l )

where t denotes the complex conjugate transpose [( )*]I. This is a special case ofwhat
is known as Parseval's theorem. which we discuss in more detail in Section 3.-5.6.

Example 3.2.4
tn this example. we eramine the representation of a finite-duration signal in terms of an
orrhog,onal set of basis signals. Consider the four signals defined over thc interval (0. 3).
as shown in Figure 3.2.2(a). Thcse signals are not orthogonal. hut it is prssihle lo repre-
sent them in terms of the three orlhogonal signals shown in Figure .i.2.2(h). since combi-
nations of these three basis signals can be used lo reprcsent any of lhc four signals in
Figure 3.2.2(a).
The coefficients rhar represent the signal .rr(r). obtained by using Equation (3.2.4). arc
f-1
c,,= | .r,(r)rfjx(t').lt =2
Jn

.!
c,. = Jo| .t,(r)dj'(r)rlr = tt
fl
c,., = lt| .t, (r )6.,t (tltlt = t
Sec. 3.2 Orthogonal Beprssentations ol Signals 111

rt (r) r2 (r)

0 0

x3(l) rt(,)

-l

(a)
0s (r) 0zU\

-
0!(r)

t--
(b)

Flgure 322 Orthogonat representations of digital signals.

In vector notation, xr : [2, 0, llr. Similarly, we can calculate the coefficients for
xz(t).:r0), and :o(r), and these become
rzr = l, xn = 1, rz.l = 0, or 12 = [l' 1' 0]r
rrr = 0, xn = 1, rrr = l, or xl = [0' l' l]'1
'r,rr = l, roz = - l, xat = 2' or xr = [l' - l' 2lr

Since there are only three basis signals required to completely reptescnt .r,(r), i = 1'2.3'
4, we now can thini ofthese four iignals ai in three-dimensionrl space. We would
""ctor.
112 Fourier Series Chapter 3

like to emphasize that the choice- of the basis is not unique, and many other possibilities
exist. For ixample, if we choos!
11
6,() =-6xr(t), 0,(t) =
fi\tuA - -
l) !(t) - u (r - 3)l
and

0,G) =
rra,o1r1
then

,,=ln,-+',+l'. r, = 1Vi,o,olr

x4 = [0,0, V6]r
"=l+,+'+)''
In closing this section, we should emphasize that the results presented are general,
and the main purpose of the section is to introduce the reader to a way of represent-
ing sipals in terms of other bases in a formal way. In Chapter 4, we will see that if the
signal satisfies some restrictions, then we can write it in terms of an orthonormal basis
(interpolating sigrals), with the series coefficients being samples of the sigpal obtained
at appropriate time intervals.

3.3 THE EXPONENTIAL FOURIER SERIES


Recall from Chapter 1 that a signal is periodic if, for some positive nonzero value of ?,
x(t) = x(t + nT), n = L,2,... (3.3.1)
The quantity I for which Equation (3.3.1) is satisfied is referred to as the fundamen-
tal period, whereas 2r/T is referred to as the fundamental radian frequency and is
denoted by roo. The graph of a periodic sigral is obtained by periodic repetition of its
graph in any interval of length I, as shown in Figure 3.3.1. From Equation (3.3.1). it
follows that2T,3T, ...,arc also periods of r(t). As was demonstrated in Chapter l, if
two signals, r, (t) and xr(t), are periodic with period T, then
x (t)

Iigure 33.1 Periodic signal.


Sec. 3.3 fie Exponential Fourier Series 113

xr(t)=a.r,(t)+bxr(t) (3.3.2)

is also periodic with period L


Familiar examples of periodic signals are the sine. cosine, and complex exPonential
functions. Note that a constant signal r(l) = c is also a periodic signal in the sense of
the definition, because Equation (3.3.1) is satisfied for every positive L
In this section, we consider thc representation of periodic signals by an orthogonal
set of basis functions. We saw in Section 3.2 that the set of complex exPonentials
0,(l) = explj2nnt/Tl forms an orthogonal set. If we select such a sel as basis func-
tions, then, according to Equation (3.2.3),

,(,) =.i-.,.*l,Tl (3.3.3)

where, from Equation (3.2.4), the c, are complex constants and are given by

,, = lrlr' ,rr"-p[-l T). (3.3.4)

Each term of the series has a period T and fundamental radian ttequency 2t fT = ,0.
Hence, if the series converges, its sum is periodic with period 7. Such a series is called
the complex exponential Fourier series, and the c, are called the Fourier coeffrcients.
Note that because of the periodicity of the integrand, the interval of integration in
Equation (3.3.4) can be replaced by any other interval of length l-for instance, by the
*
interval ,0 <, s lo ?, where to is arbitrary. We denote integration over an interval
of length I by the symbol /,r1. We observe that even though an infinite number of fre-
quencies are used to synthesiie the original signal in the Fourier-series expansion, they
do not constitute a continuum; each frequency term is a multiple of ao/2r. The fre-
:
quency corresponding to n 1 is called the fundamental, or first, harmonicl z.= 2 cor-
responds to the second harmonic, and so on. The ccefficients c, define a
complex-valued function of the discrete frequencies nor6, wherc n = 0, +1, !2, ... ,
The dc component, or the full-cycle time average, of r(t) is equal to co and is obtained
by setting n = 0 in Equation (3.3.4). Calculated values of co can be checked by inspect-
ing r(r), a recommended practice to test the validity of the result obtained by integra'
tion. The plot of lc, I versus nr,rn displays the amplitudes of the various frequency
components constituting r(r). Such a plot is therefore called the amplitude, or magni-
tude spectrum, of the periodic signal .r(r). The locus of the tips of the magnitude lines
is called the envelope of the magnitude spectrum. Similarly, the phase of the sinusoidal
components making up r(r) is equal to l, cn and the plot of 4 c,, vcrsus nroo is called
the phase spectrum ofx(t). In sum, the amplitude and phase spectra of any given peri-
odic signal are defined in terms of the magnitude and phase of c,,. Since the spectra
consist of a set of lines representing the magnitude and phasc at a = nr,l.., they are
referred to as line spectra.
114 Fouder Series Chapter 3

For real-valued (noncomplex) signals, the complex conjugate of c, is

,: =
l+ I,^,
rt *olalalo,l"
= l,l,
'u"-olf,?!!)o'
= c-n (3.3.5)

Hence,

l"-, I =1", I and N,c-n=-4cn (3.3.6)

which means that the amplitude spectrum has even symmetry and the phase spectrum
has odd symmetry. This property for real-valued signals allows us to regroup the expo-
nential series into complex-conjugate pairs, except for co, as follows:

r(r)=co * *ry+)
^i__,^*ry+)*P,*
=. * P, ,-,"*oliff!] ..i, ,.*pwf
= * p__, (, -..-rlflf-t] . *r [,+{])
" "
: * *.izne{c,*r['T,]]
= *.i (z n"t.,;
"o,4{ -
2tmlc,l""T) (3.3.2)

Here, Rel . I and Iml ' I denote the real and imaginary parts of the arguments, resPec-
tively. Equation (3.3.7) can be written as

= an.,i "o"4il o,,"rn4f]


x(t\ * (3.3.8)
[n,
The expression for x(t) in Equation (3.3.8) is called the trigonometric Fourier series
for the periodic signal .r(t). The coefficients au, an, and D, are given by

o,=.co=l[,rruro, (3.3.ea)

a, = 2Relc,l : r, [,rrr, o, (3.3.9b)


"or4{
bn: -2lmlc,l = at (3.3.9c)
?[,nrlr"in4'f
In terms of the magnitude and phase of c,, the real-valued signal r(t) can be
expressed as
Sec. 3.3 The Exponential Fourier Serles 115

x(r) : co + ) zl,,l *'(?F * +.,)


='o +,i n,.or(f * o,) (3.3.10)

where
A, = 2lc,l (33.11)
and
0n = 4c, (33.t2)
Equation (3.3.10) represents an alternative form of the Fourier series that is more com-
pact and meaningful than Equation (3.3.8). Each term in the series represens an oscil'
lator needed to generate the periodic signal r(l).
A display of lc, I and 4 c,, versus n or naofor both positive and negative values of
n is called a two-sided amplitude spectrum, A display of A, and Q,, versus positive r or
nroo is called a one-sided spectrum. Two-sided spectra are encountered most often in
theoretical treatments because of the convenient nature of the complex Fourier serie.s,
It must be emphasized that the existence of a line at a negative frequency does uot
imply that the signal is made of negative frequency comPonents, since, for every com'
ponent cB explj}nrtlTl, there is an associated one of the form c-, expl- i?:trr.t/Tl-
These complex signals combinc lo create the rcal comPoncnt a, cos(?sttt/T\ +
b,sin(?tttrt/T). Note that, from the definition of a definite integral, it follows that if
r(r) is continuous or even merely piecewise continuous (continuous except for Enitely
many jumps in the interval of integration), the Fourier coefficients exist' and we can
compute them by the indicated integrals.
Let us illustrate the practical use of the previous equations by the following exarn-
ples. We will see numerous other examples in subsequent sections.

Exanple 83.1
Suppose we want to find the line spectra for the periodic signal shown in Figure 33.2' The
signal r(t) has the analytic represeDtalion

llgure 33.2 Signal .r(l) for


Example 3.3.1.
116 Fourier Serles Chapter 3

t-x. -l<r<o
.rtt)=[r, 0<r<1
and r(, + 2) = .r(t). Therefore, oo = 2n/2 = tt. Signals of this type can <rcur as external
forces acting on mechanical systenul, as electromotive forces in electric circuits, etc. The
Fourier coeffr cients are

",
=
f,l_,. <,1 expl- inn tldt
=
; tl, - K exp(- inntlat + l' x "*p1-ln,fiarf

_K (l - explinrl * exp[-yznl - 1)
2\ int -yn
=
#{r- }("rntr,,,,,r+
exP [- r"1)) (3.3.13)

( zr. n odd
=llntr (33.14)
n even
[r,
The amplitude spe ctrum is

nodd
,".,:[ffi' ,r even
[0,
The dc component, or the average value of the periodic signal :(r), is obtained by setting
z = 0. When we substitute n = 0 into Equation (33.13), we obtain an undefined result
This can be circumvented by using I'H6pital's rule, yielding co = 0. This can be checked
by noticing that r(l) is an odd function and that the area under the curve represented by
.r(t) over one period of the signal is zero.
The phase spectrum of .r(t) is given by

_,,1t n=(h-t),m=7,2,...
--:{ 0, n = Lm.m = 0. 1.2 ...

7t
,'
n= -(1rz-l),m=r,2,...
The line spectra ofr(r) are displayed in Figure 3.3.3. Note that the amplitude spectrum
has even symmetry, the phase spectrum odd symmetry.

Ercnple 8.83
A siausoidal voltage E sinool is passed through a half-wave rectifrer that clips the nega-
tive portion of the waveform, as shown in Figure 33.4. Such signals may be encountered
ts l..r
I
.:
o0
(!
z
-
":
(.)
qJ

r!
G
cl
x
lrl
o

x
Etr
oo

cl
:o
r!, .-'o
o!
o-L
()o
I
Eq)
.i o.
O6r
a.l rdG
I
6a-
o. -o
o!
I trtrtit
'-
E
-ti
qE
I
-: cL
El,
=o
baE
I

. 117
118 FouderSerles ChaflerS

r (r) E E dn oro,

-2n't0g 2tt
rJg oJ6 {rl9 rdq

Flgue 33.4 Sipal:(t) for Example 3.3.2.


in rectifier design problems. Reaifien are circuits that produce direct curent (dc) from
alternating cunent (ac).
The analytic representation of .r() is

lo. when-i<l<o
0'o
r(') = {
whenocrca
Irrio.or,
and r(r + 2tt /a) = x(l). Since.r(l) = 0 when -r /tro<, < 0, we obtain, from Equa-
tion (3.3.4),

",
=
|Jtr.,nr r,.rel-ifffa,
= * f" - exp[-jroor]l exp[:lhroot]dr
+[exp[jroor]
_ E.o l"t_. ,
=
dJ" [erp[-ior6(a - 1)t]-exp[-itoo@ + t)tldt

=ryF#("*[-n-] .*of',])
E
= ?r.i: nr)
cos (nt/2lexpl-im/21' n+ !7 (33.15)

l*+,
=lr,
aeven
(33.16)
zodd, n+lt
S€tting n = 0, we obtain the dc component, or the average value of the periodic qignal, as
co = E I n . This result can be verified by calculating the area under one half cycle of a sine
wave and dividing by f. To determine the coefEcients cl atrd c- I which correspond to the first
harmonic, we note that we cannot subsdmte ,, = + I in Equation (3.3.15), since this yields
an indeterminate quantity. So we use Equation (33.4) instead with n = + 1, which resuls in
E _E
cr =d. and ct= li
The line spectra of .r(r) are displayed in Figure 3.3.5.
In general, a rectifier is used to convert an ac signal to a dc signal. Ideally, rectified out-
put r(t) should consist only of a dc component. Any ac component contributes to the rip-
Sec. 3.3 The Exponential Fourier Series '.'t'

-4 -3 -2 -l 0 I

(a)

(b)

Flgure 335 Line spectra for.r(t) of Example 3.3.2. (a) Magnitude spec-
trum and (b) phase spectrum.

ple {deviation from pure dc) in the sig[al. As can be seen from Figure 3.3.5, the ampli'
tudes of the harmonics decrease rapidty as n increases, so that the main contribution to
the ripple comes from the firct harmonic. The ratio of the amplitudes of the first harmonic
to the dc component can be used as a measure of the amount of ripple in the rectified sig-
nal, In this example, the ratio is e qual lo r/4. More complex circuits can be used that pro-
duce less ripple. (See Example 3.6.4).

Eranple 83.8
Consider the square-wave stgnal shown in Figure 3'3'6' The analytic re[-resentation of 'r(r) is
( -T -t
f0. when;-<t<-,
| -r I
r0) =
1K, whenl:<r<;
l"t'
|.0. whenr<t<;
and x(t + f)= :(r). Signals ot this type can be produced by pulsc generators and are
used extensively in radar and sonar systems. From Equation (3.-3 {). rve obtain
120 Fourier Serles Chapter 3

-, r
| -T -,.i -2z 0 r T
n-t T
r+!7
-T- 1 1 1'-T
Flgure 33.5 Signal .r(t) for Example 3.33.

I rrn ,<,> *ol1fc)"


,,= Tl_ro *ol4!l*
= + f:, *

=#l*,V+l-*,[T ']l
: K ntr
-Sln -
nnl
Kt n't
= rstncT (33.17)

where sinc ()r) = sin (,rI)/nI. The sinc function plays an important role in Fourier analy-
sis and in the study of LTI systems. It has a maximum value at tr = 0 and approaches zero
as I approaches infinity, oscillating through positive and negative values. It goes through
zero at tr = +7,.-2t ,,. .
l-t us investigate the effect of changing Ion the ftequency spectrum of r(l). For 6xed
I
t, increasing reduces the amplitude of each harmonic as well as the fundamental fre-
quency and, hence, the spacing between harmonics. Hovever, the shape of the spearum
is dependent only on the shape of the pulse and does not change as I increases, except for
the amplitude factor. A convenient measure ofthe frequency spread (known as the band-
width) is the distance from the origin to the first zero-crossing of the sinc function. This
distance is equal to2r/t atd is independent of f. Other measures of the frequency width
of the spectrum are discussed in detail in Section 4.5.
We conclude that as the perod increases, the amplitude becomes smaller and the spec-
trum becomes denser, whereas the shape of the spectrum remains the same and does not
depend on the repetition period T. The amplitude spectra ofr(r) with r = 1 and = 5, I
10, and l5 are displayed in Figure 3.3.7.

f,'rrernple 33.4

In this example, we show that r(r) = t2, -tt < , < ?r, with r( + 2tl =.r() has the
Fourier series representation

,Ol =
f-a(*rr - |.o.zr * |.orl, - ...
)
(3.3.18)

Note that.r() is periodic with period 2zr and frmdamental frequency oo = 1. The complex
Fourier series coefficients are
Sec. 3.3 The Exponential Fourier Series 121

-15 - l0 0

(a)

- l0 0

(b)

-45 -30 -15 0 15 lo 45

(c)

Flgure 33,7 Line spectra for the:(t) in Example 3.3.3. (a) Magnitude
spectrum forr = I and I = 5. (b) Magnitude spectrum fort = 1 and T=
10. (c) Magnitude spectrum for z = I and I = 15.

t2 expl- jntldt
cn =
i; L"
Integrating by parls twice yiclds
2 cosnrr
cn = -- n-,--, n+O

The term co is obtained from


I r"
t2.dt
2;1 "
n2
3

From Equations (3.3.9b) and (3.3.9c),


122 FouderSerles Chaptgr3

A
o, = 2Relcn| = -?cosrrr

b.: -2lm{c,f = I
because c, is real. Substituting into Equation (3.3.8), we obtain Equation (3.3.18).

eranple 83.6
Consider

r(r) = I + z'r"(f r)-l"*(?, - sin(7zrr) . .".(+,


I
It can easily be veritied that this signal is periodic with period = 5Il s so that roo = 7tt/3.
Thus 7rr = ?aoand2Ett /3 = 4roo. lVhile we can frnd the Fourier series coefficients for this
example by using Equation (3.3.4), it is much easier in this case to represent the sine and
cosine signals in terms of exponentials and write x() in the form

r(r) = 1 + ]exp[jroor] - lexp[-;,or] - Jexpti,orl-lexpt-i,orl


+
| expli3.tstl - 11
exp [ -73r,0 | +) expll*ror] +
] exp [ - j*ror]

Comparison with Equation (3.3.3) yields


co= I

c, = c!1= -(i.r,)
cr= clr= -L
co = c:4 =,

with all other cn being zero.


Since the amplitude spectrum of .r(r) conrains only a finite number of components, it
is called a bandJimited signal.

3,4 DIRICHLET CONDITIONS


The work of Fourier in representing a periodic signal as a trigonometric series is a
remarkable accomplishment. His results indicate that a periodic signal, such as the sig-
nal with discontinuities in Figure 3.3.2, ao be expressed as a sum of sinusoids. Since
sinusoids are infinitely smooth signals (i.e., they have ordinary derivatives of arbitrary
high order), it is difficult to believe that these discontinuous signals can be expressed
in that manner. Of course, the key here is that the sum is an infinite sum, and the sig-
nal has to satisfy some general conditions. Fourier believed that any periodic signal
Sec. 3.4 Dirichlet Conditions 123

could be expressed as a sum of sinusoids. However, this turned out not to be the case.
Fortunately, the class of functions which can be rePresented by a Fourier series is large
and sufficiintly general that most conceivable periodic signals arising in engineering
applications
- do have a Fouricr-series representation.
For the Fourier series to converge, the signal r(l) must possess the following prop-
erties, which are known as the Dirichlet conditions, over any period:

l. x(t) is absolutely integrable: that is,


-h+T

l, l'r(r)lar < -
2. x(r) has only a finite number of maxima and minima.
3. The number of discontinuities in x(l) must be finite.
These conditions are sufficient, but not necessary. Thus if a signal x(t) satisfies
the
Dirichlet conditions, then the corresponding Fourier series is convergent and its sum
is r(r), except at any point ro at which x(r) ii discontinuous. At the points of
disconti-
ndti, tt. sum ot ttre ieries is the average of the left- and right-hand limits of x(t) at
,o; that is,

] r,ro x(6)l
r(ro) = + (3.4.1)

Example 8.4.1
Consider the periodic signat in Example 3.3.1. The trigonometric Fourier series coeffi-
cients are given bY
a" = 2Relc,l = 0

b,= -Ztm|j =1+(r - cosnt)

(tx
I
I
-a:.
n7t '
n odd
:l
I
I 0. z even
\
so that r(r) can be written as
4Kl r
,"rr+...'l
.r(t)=?lsinzrt+isin3rt*"'*;sin""'
I
t '-
e.4.2)'

we notice that at , = 0 and t=1, two points of discontinuity of r('), the sum in
Equation
(3;r, h; a value of -I( and K
zero, whictr is equal to the arithmetic mean of the values
of rliy. furtn.rmore, since the signal satisfies the Dirichtet conditions, the series con-
,ergei und x(r) is equal to the slm of the infinite series. Setting t = lD in Equation
(3.4.2), we obtain

i. i +...(-rr-r r^'-1 * ")


-
'0=*=1#('
124 Fourier Series Chapler 3

or

,i r-,y,-, ;l_ i=X


Example 3.42
Consider the periodic signal'in Example 3.3.3 with r = I and T = 2. T\e trigonometric
Fourier-series coefficients are given by

a,=2Relc,| =6tin"I
b"= -Z Im[c,l = 0
Thus. ao = Klz,a,-o when a iseven,a,, :2K/nr when n = 1' 5, 9, ... ' a, = -2K/nt
:
when n = 3,'1,11,..., and b, 0 forn = 1.2, .... Hence,x(t) can be written as

,(i=:* ?{[.or,r, - ]r.*r",r + + "'l (3.4.3)


]coss",r
Since.r(l) satisfies the Dirichlet conditions over the interval [- I, ll, the sum in Equation
(3.4.3) converges at all points in that interval, except at t = !U2, the points of disconti-
nuity of the signal. At the points of discontinuity, the righGhand side in Equation (3.4.3)
has the value K/2. tthich is the arithmetic average of the values K and zero of:(t).

Il.learrple 8.43
Consider the periodic signal:(r) ln Example 3.3.4. The trigonometric Fourier-series coef-
ficiens are
tt2
ao=
T
4
an= _,c,,sn1I., n*o
b,=0
Hence. .z(r) can be written as

,(i = + + +,i L,,-lI cosnr (3.4.4)

For this example, the Dirichlet conditions are satisfied. Further,.r() is continuous at all ,.
Thus the sum in Equation (3.4.4) converges to r ( ) at all points. Evaluating r(, ) at, = t gives

:(zr) = r: =1, * e),)i

*-l:"1
frn' 6
Ssc. 3.5 Propertiss of Fouri€r Series 125

is important to realize that the Fourier-series rePresentation of a periodic signat


It
r(r) cannot be differentiated term by tenn to give the representation of dt(t)/dt.To
demonstrate this fact, consider the signal in Example 3.3.3. The derivative of this sig-
nal in one period is a pair of oppositely directed 6 functions. These are not obtainable
by direct differentiation of the Fourier-series representation of r(t). However, a
Flurier series can be integrated term by term to yield a valid reprcsentation of I x(\dt.

ERTIES FOURIER ERIES

In this section, we consider a number of properties of the Fourier series. These prop-
erties provide us with a better understanding of the notion of thc frequency sPectrum
of a continuous-time signat. In addition, many of the properties are often useful in
reducing the complexity involved in computing the Fourier-series coefficients.

8.6.1 Least Squares APProximation PropertSr


If we were to construct a periodic signal r(t) from a set of exponentials, how many
terms must we use to obtain a reasonable approximation? If ,r (r ; is a bandJimited sig-
nal, we can use a finite number of exponentials. otherwise, using only a finite number
of terms-say, M-results in an approximation of .r(r). The diffcrence between x(t)
and its approximation is the error in the apprgximation. Wc want the approximation
to be ..cGe" to x(t) in some sense. For the best approximation, we must minimize
some measure of the error. A useful and mathematically tractable criterion we use iS
the average of the total squared elror over one period, also known as the mean'
squared value of the error. This criterion of approximation is also known as the
aiproximation of r(t) by least squares. The least-squares approximation proPerty of
the Fourier series relates quantitatively the energy of the differcnce signal to the error
difference between the specified signal r(t) and its truncated Fourier-series approxi-
mation. Specifically, the prop€rty shows that the Fourier-series coefficients are the best
choice (in the mean-square sense) for the coefficients of the truncated series.
Now suppose that;(r) can be approximated by a truncated series of exPonentials
in the form
N
.r,u(r) = ) d,exp[7zoor] (3.s.1)
a--N
We want to select coefficients d, such that the enor, x(t) - r,v(r), has a minimum
mean-square value. If we use the Fourier series representation for x(t), we can write
the error signal as
e(t): xG) - r,*,(r)
,N
) c, expljnonLl - )-N d exp[into,,r] (3.s.2)

kt us define coefficients
126 Fourier Series Chapter g

l"l
'" = ["*
E- 'ry
lcn-d,, -N<z<N
so that Equation (3.5.2) can be written as

: sn exp[7n roor] (3.5.4)


"(r) ?
Now' s(r)-isa periodic signal with period r = 2tt /t to, since each term in the summa-
tion is periodic with the same period. It therefore foliows that Equation (3.5.4) repre-
sents the Fourier-series expansion of e0). As a measure -of how *"it ,r1r;
approximates .r (t), we use the mean-square eror, defined as

MSE = il,rl"UrPr,
Sub,stituting for e(l) from Equation (3.5.4), we can write

MSE =
I l, rl2-r "exn tr,,rl) (,, i-.gi expl- in orl) ar
i i tA{ll
nd-am=-e tr J(?)
,*pti(, - m)ootldtl
)
(3.s.s)
since the term in braces on the right-hand side is zero for n * m aadisl form=n.
Equation (3..5.5) reduces to

*r" =,P_ le,l'


^/
n--N
1",-dnl'+
l,l>lv
1",1, ) (3.5.6)

Each term in Equation (3.5.6) is positive; so, to minimize the MSE, we must select
dr= Cn (3.s.7)
This makes the first summation vanish. and the resulting error is

(MSE),in = ) l",l' (3.5.8)


lrl>,v
Equation (3.5.8) demonsrrates the fact that the mean-square eror is minimized by
selecting the coefficients d, in the finite exponential seriis of Equation (3.5.1) to be
identical with the Fourier-series coefficientsq. That is, if the Fourier-series
expansion
of the signal r(r) is truncated at any given value of N, it approximates r() with smaller
mean-square error than any other exponential series with the same nrmbe, of terms.
Furthermore, since the error is the sum of positive terms, the error decteaseg monot-
onicallv as the number of terms used in the approximation increases.

Example 35.1
Consider rhe approximarion of the periodic signal x(r) shown in Figure 3.4.2 by a set of
2rv + I exponentials. In order to see how the approximation error varies with thi number
S€c. 3.5 Properties of Fourier Series 127

of lerms, we consider the approximation of .t(l) based tu threc terms, then seven terms,
then nine terms, and so on. (Note lhat.:(r) contains only odrl harmonics.) ForN = 1 (rhrcr
terms), the minimum mean-square error is

(MSE)"i, = ) l.,l'
l"l,t
=
,.p, ,1s ''"i
'-14i s
,, 1ft,. n,
I

,l odd

8K2 l3n2
= ;7.|\t4'- I

= 0.189K2
Similarly, for N = 3, it can be shown thar

(MsE)*" =
|{: ftz 11
- 0.01K2

9.6.2 Efrects of S5mnretry


Unnecessary work (and corresponding sources of errors) in determining Fourier coef-
ficients of periodic signals can be avoided if the signals posscss any type of symmetry.
The important types of symmetry are:

1. even symmetry, r(r) = r(-r),


2. odd symmetry.x(r) = -.r(-r),
3. half-wave odd symmetry, r(r) = -r(, -l ;)
Each of these is illustrated in Figure 3.5.1.
Recognizing the existence of one or more of these symrnerries simplifies the conr-
Putation of the Fourier-series coefficients. For example, the Fourier series of an even
signal r(l) having period I is a "Fourier cosine series."

x(t)=an*2,o,"or\!
with coefficiens

lrrzx(t)dt, und n,,=


2 4 trt2 Znrt
%=;1, iJn x(t)cos=f -
dt

whereas the Fourier series ofan odd signal.r(t) having penocl 7 is a "Fourier sine serics."
128 Fourier Series Chapter 3

r(t) r(r)
symmetry,
T=3

-3 -2 -l 0 123t
(a)

.r(r)

0l

(c)

Figure 3.5.1 Types of symmetry.

,(,) =,i o,sin4f


with coefficients

u,=
+[: 'rt)sin4ldt
The effects of these symmetries are summarized in Table 3.1, in which entries such as
ao * 0 and bzn*r * 0 are to be interpreted to mean that these coefficients are not nec-
essarity zero, but may be so in specific examples.
In Example 3.3.1 x() is an odd signal, and therefore, the c,t are imaginary (an = 0),
whereas in Example 3.3.3 the c, are real (b, = 0) because.r(r) is an even signal.

TABLE 3.7
E lBc.ts o, Symm€t y
Symmsfy b" Remarks

Even ao# O a,*O b,=o Integrate over Il2 only.


and multiply the coefficients by 2.
odd ao=0 or= O b,+o lntegrate over 12 only,
and multiply the coefficients by 2.
Half-wave odd ao=0 azr=O bz,=o Integrale over 12 only.
aut, * O bL,,t + O and multiply the coefficients by 2.
Sec. 3.5 Properties ol Fourier Series 't29

Figure 3.5,2 Signal r(r) for


Example 3..5.2.

E-arnple 35.2
Consider the signal

(o-!t. o<t<r/2
r
'rrl={"
l!,-to. rtz<r< l
t/
which is shown in Figure 3..5.2.
Notice thal .{(r) is both an cven and a half-u'ave odd signal. Therefore, ao = 0,4 = 0,
and we expect lo have no cven harmonics. Computing a,, we ohtain

". = +1,''' (^ -'|t)"o'hl'a,


, .!4=tl-cos(aTr)), n + tl
ln?I )'
(
|
0, n even
='l-qq
nodd
[ (nr)'
Observe that ao, which corresponds to the dc term (rhe zero harmonic), is zero because
the area under one period of .r(l) evaluates to zero.

8.6.8 Linearity
Suppose that r(r) and y(r) arc periodic with the same period. Lct their Fourier-series
expansions be given by

r(r) =) B.exp[7nr,r,,r] (3.5.9a)


n= --

y(r) =) 1,exp[rno6tl (3.s.eb)


1gO Fourier Serles Chaptor 3

and let

.(t) = kf (t) - kry(t)


where &, and k, arc arbitrary constants. Then we can write

s(r)= i,=:" (t'F,+ kr1)expfint,utl


=
,I, "'exP[7hto"t]
The last equation implies that the Fourier coefficients of e (t) are

a,,: krB,, + kr1,, (3.5.10)

3.6.4 MuctofTbo Signals

If .r(r) and .v(t) are periodic signals rvith the same period as in Equation (3.5'9)'
their product is
z(t; -- ,1,,r',r)

= 0,, exp[7n<ontl,,i, r,, explimu,ntl


,,i,
= P,,1,,, exp[i(, + nr)our]
,,P,,,,>:_

=
,i(,,i, o,-,,,r,,,)exp[iro,,,r] (3.s.r1)

The sum in parentheses is known as the convolution sum of the two sequences p,,, and
1,,,. (More on the convolution sum is presented in Chapter 6.) Equation (3.5.11) indi-
cates that the Fourier coefficients of the product signal z (t ) are equal to the con volution
sum of the two sequences generated by the Fourier coefficients of -r(l) and.v(t). That is'

i P'-"' ''"' : | exP [-/to"r]ril


1,.
""."t)
If y(r) is replaced ar'riUr. we <rbtain

z(,)= > (i
7= -z \n1- -z
P,,,,,rr)exp[/r,,,,tl

and

='i],rr(r).v*(t)exp[-lltr,,tJdt (3'5'12)
,,,i,.F,-,,rfi
Sec. 3.5 Properties ol Fourier Ser ies 191

3.5.6 Convolution of Ttvo Signals


For periodic signals with the same period, a special iorm ofconvolution. known as peri-
udic ur citculat curlt(riutiotl, is dcrined bv the integral
lr
:(I) '- -I | ;;(r),v(l - t)dr (3.5.r3)
J(h
where the integral is taken over one period L It is easy ro show rhat z(r) is periodic
with period Tand the periodic convolution is commutative and associative. (See prob-
lem 3'22). Thus, we can rvrite r (r ) in a Fourier-series representation with coefficients

: I ir e (l) exp [-7hor,,l ldt


", i J,,
I rtt (t)v(r
=V
)J, 't - r) exp [ -7ar''r otldt dr

= !i t*p [-7nr'rnt i r,, - t ) cx, [ - - r t, at}a'r (3.5.14)


/.''',', t | ['
lrr ro u{t

Using the change of variablcs o = r - t in the second integral, we obtain

,,, = .lf'.r,r, cxp[- lnr,,nr] -jrrt,,,.,lao]a"


lil .-.r(rr)cxpf
Since y(t) is periodic. the inner intcgral is independent of a shitt r and is equal to the
Fourier-series coefficients ofy(r), 1,. It follows that

or: FnJ, (3.s.15)


where B, are the Fourier-series coefticients of .r(r;.

Exqrnple 3.6.3
In this example. we compute thc Fourier-series cocflicients of thc product and of the peri.
odic convolurion of the rrvu signals shown in Figurc 3.5.3.
The anall,tic representrrion of .t1r) is

.((r) -- r, 0<r<4. r(r)=.t(r+4)

r -5 -4 -3
-J -t 0 I

Figure 3.5.3 Sirnals .r(r) and _u(i) for Examplc 3.5.3.


132 Fourier Series Chapter 3

The Fourier-series coefficients of x(r) are

u"= if ,*o[-T]"
: ?!-
nn
For the sigral y(r), the analytic representation is given in Example 3.33 with t =2 aad T =
4. The Fourier-series coefficients are

,"=ll,x*vlt;)a,
Knr'Kn
= -_sm7 =

'smc'
From Equation (3.5.15), the Fourier coefficients of the convolution signal are
2iK nn
aa =
b:r)zs,I 2

and from Equation (3.5.11), the coefficiens of the product signal are

"' = i Lo-^'t^

;";*
-7 1 .m,,
=
-?-^@ - ^)" 2

8.6.6 Pareoval'sTheorem
In Chapter l, it was shown that the average power of a periodic signal r(t) is

P =:lt Jo't lxlt)l'zdt


The square root of the average power, called the root-mean-square (or rms) value of
.r(l), is a useful measure of the amplitude of a complicated waveform. For example, the
complex exponential signal x(t) = c" exp [7hort] with frequency n too has lcn l' as its
average power. The relationship between the average power of a periodic signal and
the power in its harmonics is one form (the mnventional one) of Parseval's theorem.
We have seen that if x(r) and y(t) are periodic signals with the same period I and
Fourier-series coefficients p, and 1,, respectively, then the product of .t(t) and y(t) has
Fourier-series coeffrcients (see Equation (3.5.12))

o, =i ,nd -a
go*^^r*

The dc component, or the full-cycle aveiage of the product over time, is


Sec. 3.5 Properties ol Fourier Series 13{}

"": lf Y'v(t)dt

= i "u'
8,,,,; (3.5.16)

If we let y(t) = .r(t) in this expression, then 8,, = .y", and Equarion (3.5.16) becomes

'il,,rl* <'lP o' =,,,i- I 8,, l' (3.5.17)

The left-hand side is thc average power ofthe periodic signal .r(r). The result indicates
that the total average power o[.r(t) is the sum of the average ptrwer in each harmonic
component. Even though power is a nonlinear quantity, we can use superposition of
ayerage powers in this particular situation, provided that all thc individua.l components
are harmonically related.
We now have two different ways of finding the average power of any periodic sig-
nal x(r): in the time domain, using the left-hand side of Equation (3.5.17), and in the
frequency domain, using the right-hand side of the sanre equalion.

S.6.7 Shift in Time


If .r(l)has the Fourier-series coefficients c,,, then the signal .r(r - t) has coeffi-
cicnts d,,, rvhere

o, = !r[,rr r, -l expl-in .ootldt

= exp[-itoor]
] {rr
rt,rl"*nt - inu.uo)tt o

= c, exp [-
oor ]
7h (3.s.r8)
Thus, if the Fourier-series representation of a periodic signal r(t ) is known relative to
one origin, the representation relative to another origin shifted by t is obtained by
adding the phase shift n Gror to the phase of the Fourier coefficicnts ofr(r).

n=anple 3.6.4
Consider the periodic signal r(l) shown in Figure 3.5.4. The sig.nal can be written as the
sum of the two periodic signals rr (r) and rr (r), each with period 2n/r,ro, where .r, (r) is the

.r(r)=flsin..,)orl

tigure 3S.4 Signal -r(t) for Example 3.5..1.


134 Fourier Series Chapl€r 3

half-wave rectified signal of Example 3.3.2 and x2(r) = :r (, - r/(r0)' Therefore. if p, and
'y, are thc Fourier coefficients of .r, (r) and -rr(r), respectively, then, according to Equation
(3.s.18),

,, = B, a]
"*p [-1n..
= p,, exp[-lnrr] = (- lfp,
From Equation (3.5.10). the Fourier-series coefficients of .r(t) are
d,=p,+(-1f9,
: [2P,, n even
[0. n odd

where the Fourier-series coefficients of the periodic signal .r1(t) can be determined as io
Equation (3.3.16) as

U, = Esinorsrexp[7ho6t]dt
*f
(e
|
;o--;'r' ,, even

l-ilE. n=tt
lo otherwise
|.0,
Thus,
2E
r(l - n') -
n even
o"=
0. n odd

This result can be verifred by directly computing the Fourier-series coefficients ofr(,).

3.6.8 Integration of Periodic Signale


If a periodic signal contains a nonzero average value (co + 0), then the integration of
this signal proJu".. a component that increases lirrearly with time, and therefore, the
resulr;nt signal is aperiodic. However, if co = 6, then the integrated signal is periodic'
but might c-ontain u d" .orpon"nt. Integrating both sides of Equation (3.3.3) yields

= n * o (3.s.1e)
f_,,1,'ta, ^2_lh*p[inronr],
The relative amplitudes of the harmonics of the integrated signal compared with its
fundamental or" 1"5 than those for the original, unintegrated signal. In other words,
integration attenuates (deemphasizes) the magnitude of the high-frequency comPo-
Sec. 3.6 Systems with P€dodic lnputs 135

nents of the signal. High-frequency components of the signal are the main contributgrs
to its sharp details, such as those occurring at the points of discontinuity or at discon-
tinuous derivatives of the signal. Hence, integration smooths the signal, and this is one
of the reasons it is sometimes called a smoothing operation.

3.6 SYSTEMS WITH PERIODIC INPUTS


Consider a linear, time-invariant. continuous-time system with impulse response Il(t).
From Chapter 2, we know that the resPonse resulting from an input r(t) is

y(tt=[ hg)x(t-ldr
J_^

For complex exponential inputs of the form


r(t) = exp[ior]
the output of the system is

y0): [ /r(r)exp[io(r -r)]dr

h(r)expl- ir,orldt
= exptTrrl
f
By defrning

H(o,) = r,1"1"*pt- ionldt (3.6.1)


/-
we can write
y0) = tl(o) exp[jror] (3.6.2)

II(ro) is called the system (transfer) function and is a constant for fixed ro. Equation
(3.62) is of fundamental importance because it tells us that the system resPonse to a oom-
plex exponential is also a complex exponential, with the same frequency ro, scaled by the
quantity H(or). The magnitude llr(,o)l is called the magnitude function of the system,
ana + H(.) is known as the phase function of the system. Knorving H(ro), we catr detet-
mine whether the system amplifies or attenuates a given sinusoidal component of the
input and how much of a phase shift the system adds to that particular component.
To determine the response y(l) of ao LTI system to a periodic input.r(t) with the
Fourier-series representation of Equation (3.3.3), we use the linearity proPerty and
Equation (3.6.2) to obtain

y0) = nd2-a H(nor)c,exp[lnoot] (3.6.3)

Equation (3.6.3) tells us that the output signal is the summation of exPonentials with
coeffrcients
136 Fourier Serles Chapter 3

d,: H (n,or,\c, (3.6.4)


These coefficients are the outputs of the system in response to c,, exp Llno0rl. Note that
since H(nton) is a complex constant for each n, it follows that the output is also peri-
odic with Fourier-series coefficients d. In addition, since the fundamental frequency
of y(t) is tos, which is the fundamental frequency of the inputx(r), the period of y(r)
is equal to the period of .r(t). Hence, the response ofan LTI system to a periodic input
I
with period is periodic with the same period.

n-nrnple B.G.l
Consider the system described by the input/output differential equation

y(")(r) + 5P,Yt')(,) =in,""'1,1


For input r() = exp[iot], the corresponding output is .v0) = ll(o)exp[irot]. Since
every input and output should satis$ the system differential equation, substituting into the
latter yields

* exp[iror] = ioa,(I.)'exRU.,rl
[O.f !r,fr.ul']at.)
Solving for l/(ro), we obtain

) q,(i')'
H(r,r) = ---d-;=-
(jo)' + ) p,(ito)'

Exanple 3.6.2
Let us find the output voltage y(l) of the system shown in Figure 3.6.1 if the input voltage
is the periodic signal

r(t) = 4*t' - 2 cos?t

t-------------'l
ll
t L=t I

Itgure 3.6.1 System for Example


3.6.2.
Ssc. 3.6 Syslems wlth Periodic lnputs 137

Applying Kirchhofl's voltage law to lhe circuit yields


o#! *1,r,=rr,u,

If we set r(r) = expUr,rtl in this equation, the output voltage is y(t) = I{(or)exp[lot].
Using the system differential equation. we obtain
qLexp
iorH(ro) exp [jro11 +
f 41.1.*p 1 i.,l = [lr,rr]

Solving for H(r,r) yields

H(.\= ei/h.
Al any frequency o, = tr o0, the system function is

H(noJ = --t!,"*
For this example. oro - I and R/L = l,so that the output is

y(i = -j,exp[ir] +
t'V",el-itl inexp[i2rl *ery;r1 ial
- la6 cos(a - el')
= 2{icos(t - 4s")

fample &6.9
Consider the circuit shown in Figure 3.62. The differential equation goveming lhe system is

to = cff +yf
For an input of the form i(t) = exp[iot], we expecl thc outPut u(t) to be
a(t) = /I(r,r) exp[iot]. Substituting into the differential equation yields

expfirorl = CjroH(ro)expt;rrl + exp[orr]


]41<,,1
Canceling the exp[lorll term and solving for H(o), we have

H(.)= lnll,c
+

r(11= 11r1
) c R u(t) = l,(t)

Flgure 3.6.2 Circuit for Exaople


3.6.3.
138 Fourier Serles Chapter 3

Let us investigatc lhe response of the system,to a more conrplex inpui. consider an input
that is given by the periodic signal .r(r) in Example,3.3.l. The input signal is periodic wirh
period 2 and r,re = z, and we have found that
(zx
l:-.
Lnn zodd
c,= 1'
I o. n even
t
From Equation (3.5.3), the output of the system in response to this periodic input is

v(i=
"2-?! t/a-]
l, odd
jn,cexplinntl

F-ample 8.6.4
consider the system shown in Figure 3.6.3. Apptying Kirchhofls volrage law. we find thar
the differential equation describing the system is

x(t)=y*(++c4+D)+y(t\
which can be wri en as

Lc y"(t) * f,,,@ + y(t\ = x(tl


For an input in the form.t(r) = exp[ior], the outpur voltage is y(r) = H(t:)exp[jorr].
Using the system differential equation, we obtain

(ja\'1LC H(ot) + j,n*Hkn) + H(a) = 1


Solving for H(ro) yields

H(r,r) =
-I + --.1-----
jaL/R - a2 LC
with

Ill(r,)l =

r(r) + C R r(r)

- Figure 3.63 Circuit for Example


3.6.4.
Sec. 3.6 Systems with Periodic lnputs 139

and

Now, suppose that the input lt;:*-:,,',1-. -r-*;e rectiried sigpar in Exampre
3.3.2. Then the output of the system is periodic. rvith the Fourier-series representation giverf
by Equation (3.6.3). Let us investigate the effect of the system on the harmonics of the input
signal .t(r). Suppose that t,l0 = laht, LC = 1.0 x l0-a. andL/R = 1.0 x 10-4. For these
values, the amplitude and phase of H(roo) can be approximated respectively by

1
lH(zo6)l =
ir,1:-a7
and

l.
{H(no6) = ,;rRC

Note that the amplitude of H(n too) decreases as rapidly as lln2.The amplitudes of the frrst
few components d,, tt = 0, |, 2. 3, 4, in the Fourier-series representation of y(t) are as
follows:

l,t,l = :, lr,,l = z.e x ,o'f. ld, l=t.t, rc-r3


5n

la..l = o, I
l,t,l = 4.4x Io ij"
Thc dc component of the input r(r) has becn passed rvithout an) attenuation, whereas the
first- and higher order harmonics have had in their amplitudcs reduced. The amount of
reduction increases as the order of the harmonic increases. As a rnatter of fact, the func-
tion of this circuit is to attenuate all the ac conrponents of thc hulf-wave rectifred signal.
Such an operation is an example of smoothing. or filtering. Thc ratio of the amplitudes of
the first harmonic and the dc cornponent is 7.6 x l0-2zr/4, in comparison with a value of
rr/4 tor the un[iltered halI rvave rcctilied waveform. As we nlcntioned before. complex
circuits can be designed to produce better reclified signals. 'l hc tlesigner is always faced
with a trade-off between complexity and performance.

We have seen so far that when signal x(t) is transmitted thr-trugh an LTI system (a
communication system, an amplifier, etc.) with lransfer function 1/(r,r). the output y(t)
is, in general. different from.r(r) and is said to be distorted. In conl.rast, an LTI system
is said to be distortionless if the shapes of the input and thc ()utput are identical, to
within a multiplicative constanl. A delayed output that rctains the shape of the inPut
signal is also considered distortionless. Thus, the input/output lclationship for a dis-
tortionless LTI should satisfy the cquation

,r,(t)=6..,,-,r, (3.6.s)

The corresponding transfer [unclion H(to) of thu distortionless svstem will be of the form
H(ut) -- Kexp[-lrot,] (3.6.6)
140 Fourier Serles Chapter 3

1 H (.nl

Flgure 3.6.4 Magnitude and phase characteristis of a distortionless


syslem.

Thus, the magnitude lA(o)l is constant for all to, while the phase shift is a linear func-
tion of frequency of the form -trto.
Let the input to a distortionless system be a periodic signal with Fourier series coef-
ficients c,. It follows from Equation (3.6.4) that the corresponding Fourier series coef-
ficients for the output are given by
d, = K expl-inlr,ut,1lc,, (3.6.7)

Thus, for a distortionless system, the quantities la,,lllr,,l andlfur- must be &l/n
conslant for all n.
In practice, we cannot have a system that is distortionless over the entire range
-co < t,t < o. Figure 3.6.4 shows the magnitude and phase characteristics of an LTI
system that is distortionless in the frequency range -or. ( ro ( ro..

Flrample &65
The input and ourput of an LTI system are
.r(l) = 8 exp[i (too, + 30')l + 6 exp[l(3root - 15")] - 2 exp[i(5oor + 45")l
y(r) = aerp[i(orot - 15')] - 3exPU(3orot - 30')l + exp[i(Soot)l
We want to determine whether these iwo signals have the same shape. Note that the ratio
of the magnitudes of corresponding harmonics, ld,l/lr,l, has a value of l2 for all the
harmonics. To compare the phases, we note thar the quantity l$c, -
4dl/z evaluates
to 30' -(-15") = 45' for the fundamental, (- 15' -
30" + 180')/3 = 45' for the third
harmonic, and (45' + lEO')/s = 45' for the filth harmonic. It therefore follows that the
two signals.r() and y(r) have the same shape, except for a scale factor of l2 and a Phase
t
shift of t/4. This phase shift corresponds to a time shift of = t /4t'to. Hence, y(l) can
be written as

y(,,=i,(,-#;)
The system is therefore distortionless for this choice ofr(t).
Sec. 3.6 Systems with Periodic lnputs 141

Example 3.6.6
Let.r(l) and y(t) be the input and the output, respectively, of the simple RCcircuit sbowu
in Figure 3.6.5. Applying Kirchhoff s voltage law, we obtain
dv(t\
'-!--!!
1 1
+
iay$) RC.t0)
T------------]
rl
i R=t0kO I

Flgure 3.65 Circuit for Exarnple


3.6.6.

Setting.rG) = exp[jtorl and recognizing that y1t1 = H(o) expIlr,rtl, we have

i ofl (a)exp jtor| *


I ntr; exp [jor! : exp ti,r]
Sr:lving for H(o) yields
^[ ^!a

nlu) = yp6+
llRC rl
=
1_ r.r.
wlrcre
I
'' loo x l0- ll
= 107 s-r

Hence,

ln(,)l =Vrh
*H("t)=-tun-t9-
The amplitude and phase spectra of H(o) are shown in Figure 3.b.6. Note that for ro ( q,
H(o; = 1

and

4H(0,) = -9-
rl
That is. the magnitude and phase characteristics are practically ideal. For example, for inPut
142 Fourl€r Serlos Chapter 3

| ,r(sr) | 1 H(ul

l.o
0.707

Itgure 3.6.6 Magnitude and phase spectra of H(ro).

x(r) = 4exP[ildt]
the slatem is practically distortionless with output

v(t) = H(td)A erP[ildt]

= exP[ildr]
,]!. n"'
-Aexpllld(r- 1o-7)l

Hence, the time delay is 10-7s.

3.7 THE GIBBS PHENOMENON


Consider the signal in Example 3.3.1, where we have shown that x(t) could be
expressed as

,(0 =#.i_ ) expt;,,r1


a odd

We wish to investigate the effect of truncating the infinite series. For this purpose, con-
sider the truncated series

r,v(,) = ?;
,i-!expllnntl
rodd

The truncated series is shown in Figure 3.7.1 for lV = 3 and 5. Note that even with
N = 3, rr(t) resembles the pulse train in Figure 3.3.2. Increasing N to 39, we obtain the
approximation shown in Figure 3.7.2. It is clear that, except for the overshoot at the
points of discontinuity, the latter figure is a much closer approximation to the pulse
train x(r) than is.rr(l). [n general, as N increases, the mean-squate etror between the
approximation ano the given signal decreases, and the approximation to the given sig-
J:-'
Sec.3.7 The Gibbs Phenomenon I

Flgure 3.7.1 Sigrrals xr0 ) andrt()'

nal improves everywhere except in the immediate.vicinity of a finite discontinuity.


In
tiie neigfrUorfr"od of points of discontinuity in x(r), the Fourier-scries representation
fuil" toionr".g", even though the mean-square error in the represcntation approaches
zero. A carefu-l examinatioi of the plot in Figurc 3.7.2 reveals that the magnitude
of
the overshoot is approximately 9vo higher than the signal x (r ). In fact, the 9vo over-
shoot is always priient and is independenr of the number of terms used to approxi-
mrte.ignat r(r). This observalion was first made by the mathematical physicist Josiah
Willard Gibbs.
of a
To obtain an explanation of this phenomenon, let us consider the general form
truncated Fourier series:
lv
x,v(r) =) c,exP[Throst]

=,i,+L x(r) exp [ - inutotldr exP [lntool]

+[. ,nrt,i, - "llla,


exp [jn roo(t (3.7. r )
=

It can be shown (see Problem 3.39) that the sum in braces is equal to
144 Fourier Series Chapter 3

r39 (r)

Figure 3.72 Signal r,r().

N .r;l(ru. ]),.4 -.'r]


g(r - t)A,-) exp[ynon(r - t)] = (3.7.2)
-/V
.,"(.,?)
The signal g(o) with N = 6 is plorted in Figure 3.7.3. Notice the oscillarory behavior of
the signal and the peaks at points o = mr,m = 1,2, ....
Substituting into Equation (3.7.1) yields

r^,(,)=lIrt,t ''"[(" *
]),,u ..--,)] .
--- da
t Jo\
.ir(",.1;)

='rl,n'u - ",'r[("1 l)*"],,


' (3.7.3)
.i" (.,,
;)
Sec. 3.8 Summary 145

Ilgure 3.73 Signal3(o) for N = 6.

In Section 3.5.1, we showed that xr(t) converges tox(t) (in the mean-square sense) as
N -+ :o. In particular, for suitably large valucs of N, rr(t) shoulrl he a close approxi-
mation tor(t). Equation (3.7.3) demonstrates the Gibbs phenomenon mathematically,
by showing that truncating a Fourier series is the same as convolving the given r(t)
with the signal g(t) defined in Equation (3.7.2). The oscillating nature of the signal g(t)
causes the ripples at the points of discontinuity.
Notice that. for any signal. the high-frequency components ( high-order harmonics)
of its Fourier series are the main contributors to the sharp details, such as those occur-
ring at the points of discontinuity or at discontinuous derivatives of the signal.

3.8 SUMMARY
o Two functions $,(t) and $,(t) are orthogonal over an interval (a, D) if

f'o,t,lo,.rrla, = {f,,' ',j',


and are orthonormal over an interval (a, b) if E, = I for all i.
r Any arbitrary signal r(t) can be expanded over an interval (a, b) in terms of the
orthogonal basis functions (dr(t)l as

.r(r) =) c,g,(r)
la -a

where

= t'
'' f,'u'o'*(')d'l
146 Fourier Sories Chapter 3

r The complex exponentials

d,0) = *ol+ *l
are orthogonal over the interval [0, 7].
n A periodic signal r(t), of period I, can be expanded in an exponential Fourier series as

,(,) =,i_ ,"*rlry)


o The fundamental radian frequency of a periodic signal is related to the fundamen-
tal period by
2tr
_o=
i
. The coefficients c, are called Fourier-series coefficients and are given by

,, = !, @,-vl- P't;tl,
l,r,
o The fundamental frequency rou is called the lirst harmonic frequency, the frequency
2roo is the second harmonic frequency, and so on,
. The plot of lc,l versus nroo is called lhe magnilude sPectrum. The locus of thc tips
of the magnitude lines is called the envelope of the magnitude spectrum.
r The plot of {,c, versus n r,ro is called the phase spectrum.
. For periodic signals, both the magnitude and phase sPectra are line spectra. For real-
valued signals, the magnitude spectrum has even symmetry. and the phase spectrum
has odd symmetry.
. If signal x(r) is a real-valued signal, then it can be expanded in a trigonometric series
of the form

x(r): ao.,i (r" + o,"inhl')


"orhf'
o The relation between the trigonometric-series coefficients and the exponential-
series coefficients is given by
ao= co

a,, = 2 Re[q,]
t,, = -Zlmlc,,l
I
c,,= ib,,l
,(a,,-
o An alternative form of the Fourier scries is

x(tl = A,, * * 0,,)


fi,n,*r(2i'
with
Sec. 3.8 Summary 147

Ao= co

and
A, = Zlc,l, 0,, : 4.,,
. For the Fourier series to converge, the signal.r(r) musl be absolutely integrable, have
only a finite number of maxima and mininra, and have a finitc number of disconti-
nuities over any period. This set of conditions is known as the Dirichlet conditions.
o If the signal .r(t) has even symmetry, then

b, -- 0. n = 1.2....
2r
=
oo
i J,r,r,'(')o'
4r 2ntt
o" =
i ),r,r,'(l)cos j:
rlt

r If the signal .r(t) has odd symmetry, then

4,,=0, n:0'l'2...
4r 2nrt
u"= d'
T
r If the signal .r (t
'J"'""""n
) has half-wavc odd symmetry. then
az'=O'n=0'1""
o2,,+t = il)"'' ,,,
| I,r,r,.rurro'?Qn
bzu=0' n=l'2'"'
= (, rrnreL \: !,,,
b z, tr
| [,r,r,,
r
_u
o If B, and 1, are, respectively, the exponential Fourier-series coefficients for two
periodic signals.r(r) and y(r) with lhe same period. then thc Fourier-series coeffi-
cients for z(t) = krx(t) + kr-v(t) are
a,=krP,,+kr1n
whereas the Fourier-series coefficients for z(t) = x(t)y(t) are

o, =,,,i. B,-,,'Y,,

o For periodic signals.r(r) and y(r) with the same period f. thc periodic convolution
is defined as

[ .r(t)r(r - t)dr
.trl = 1-I ltn
148 FourierSedes Chapterg

o The Fourier-series coefficiens of the periodic convolution of x(r) and y(t) arc
o, = 9,1,
o One form of Parseval's theorem states that the average power in the signal x(t) is
related to the Fourier-series coefficients g, as

P=
,i-l,-"|'
. The s),stem (transfer) function of an LTI system is defined as

H(r,r) = [- 41";"*p1- j.ilrldr


J _-

o The magnitude of l/(ro) is called the magnitude function (magnitude characteristic)


of the system, and {fl(to) is known as the phase function (phase characteristic) of
the system.
e The response y(t) ofan LTI sysrem to the periodic inputr(r) is

y(r) =i H(nao)c,exp[Throor]

where oo is the fundamentaf frlriency ancl c, are thc Fourier series coefficients of
the input.r(t).
. Represen':ng x(t) by a finite series results in an overshoot behavior at the points of
discontinuity. 1)re magnitude of the overshoot is approximately 97o. This phenom-
enon is known as the Gibbs phenomenon.

.9 CHE KLI OF IMPORTANT TERM


Absolutely lntegrable elgnal Mean-square error
Dlrlchlet clndltlong Mlnlmum msan-aquare errot
Dletoltonless eystem Odd harmonlc
Even harmonlc Orthogonal tuncuons
Erponentlal Fdurler ssrles Orthonormal tunctlons
Fouiler c@fflclents Pareeval's theorem
Glbbs phenomenon Perlodlc convolutlon
Hall-wave odd symmetry Perlodlc slgnals
lrboratory torm ol Fourler earles Phase spectrum
Least squares approrlmallon Transler lunctlon
Magnltuds opoctrum Trlgonometrlc Fourlor aerles

3.10 PROBLEMS
3.1. Express the sct of signals shown in Figure Pli.l in ierms of the orthonormal basis signals
$,(t) and gr(t).
3.2. Given an arbitrary set of functions r, (r), i = l, 2, ... , detined over an interval [ro, rrl, we
can generate a set of orthogonal functions r|l,(r) by followingthe Gram-schmidt onhogo-
ndization procedure. Let us choose as the lirst basis function
Sec.3.10 Problems 149

rt (r) r2ir) x!(r)

t 0

-l
-2

0r (r)

E ll
\,1 2 ,/i
0 0
-fr
Figure P3.l

$$\psi_1(t) = x_1(t)$$
We then choose as our second basis function
$$\psi_2(t) = x_2(t) + a_1\psi_1(t)$$
where a₁ is determined so as to make ψ₂(t) orthogonal to ψ₁(t). We can continue this procedure by choosing
$$\psi_3(t) = x_3(t) + b_1\psi_1(t) + b_2\psi_2(t)$$
with b₁ and b₂ determined from the requirement that ψ₃(t) must be orthogonal to both ψ₁(t) and ψ₂(t). Subsequent functions can be generated in a similar manner.
For any two signals x(t), y(t), let
$$\langle x(t), y(t)\rangle = \int_{t_0}^{t_1} x(t)y(t)\,dt$$
and let $E_i = \langle x_i(t), x_i(t)\rangle$.
(a) Verify that the coefficients a₁, b₁, and b₂ are given by
$$a_1 = -\frac{\langle x_1(t), x_2(t)\rangle}{E_1}$$
$$b_1 = -\frac{\langle x_1(t), x_3(t)\rangle}{E_1}$$
$$b_2 = \frac{E_1\langle x_2(t), x_3(t)\rangle - \langle x_1(t), x_2(t)\rangle\langle x_1(t), x_3(t)\rangle}{\langle x_1(t), x_2(t)\rangle^2 - E_1 E_2}$$
(b) Use your results from Part (a) to generate a set of orthogonal functions from the set of signals shown in Figure P3.2.
(c) Obtain a set of orthonormal functions φ_i(t), i = 1, 2, 3, from the set ψ_i(t) that you determined in Part (b).

[Figure P3.2: the signals x₁(t), x₂(t), x₃(t)]

3.3. Consider the set of functions
$$x_1(t) = e^{-t}u(t), \qquad x_2(t) = e^{-2t}u(t), \qquad x_3(t) = e^{-3t}u(t)$$
(a) Use the method of Problem 3.2 to generate a set of orthonormal functions φ_i(t) from x_i(t), i = 1, 2, 3.
(b) Let $\hat{x}(t) = \sum_{i=1}^{3} c_i\phi_i(t)$ be the approximation of x(t) = 3e^{-4t}u(t) in terms of φ_i(t), and let e(t) denote the approximation error. Determine the accuracy of x̂(t) by computing the ratio of the energies in e(t) and x(t).
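A numerical sketch of the procedure of Problem 3.2 applied to exponential signals of this type is given below (the time grid, the truncation of [0, ∞) at t = 10, and the particular exponents are assumptions made only for illustration):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 200001)     # [0, 10] stands in for [0, infinity)
dt = t[1] - t[0]

def inner(x, y):
    # <x, y> = integral of x(t) y(t) dt, approximated by a Riemann sum
    return np.sum(x * y) * dt

x1, x2, x3 = np.exp(-t), np.exp(-2 * t), np.exp(-3 * t)   # assumed exponents

# Gram-Schmidt steps of Problem 3.2
psi1 = x1
a1 = -inner(x1, x2) / inner(psi1, psi1)
psi2 = x2 + a1 * psi1
b1 = -inner(x1, x3) / inner(psi1, psi1)
b2 = -inner(psi2, x3) / inner(psi2, psi2)
psi3 = x3 + b1 * psi1 + b2 * psi2

# Normalize to obtain the orthonormal functions phi_i(t)
phi = [p / np.sqrt(inner(p, p)) for p in (psi1, psi2, psi3)]
gram = np.array([[inner(u, v) for v in phi] for u in phi])
print(np.round(gram, 4))               # close to the 3 x 3 identity matrix
```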
3.4. (a) Assuming that all φ_i are real-valued functions, prove that Equation (3.2.4) minimizes the energy in the error given by Equation (3.2.1). (Hint: Differentiate Equation (3.2.7) with respect to some particular c_i, set the result equal to zero, and solve.)
(b) Can you extend the result in Part (a) to complex functions?
3.5. Show that if the set φ_k(t), k = 0, ±1, ±2, ..., is an orthogonal set over the interval (0, T) and
$$x(t) = \sum_k c_k\phi_k(t) \qquad (P3.5)$$
then
$$c_k = \frac{1}{E_k}\int_0^T x(t)\phi_k^*(t)\,dt$$
where
$$E_k = \int_0^T |\phi_k(t)|^2\,dt$$

3.6. Walsh functions are a set of orthonormal functions defined over the interval [0, 1) that take on values of ±1 over this interval. Walsh functions are characterized by their sequency, which is defined as one-half the number of zero-crossings of the function over the interval [0, 1). Figure P3.6 shows the first seven Walsh-ordered Walsh functions wal_w(k, t), arranged in order of increasing sequency.

[Figure P3.6: the first seven Walsh-ordered Walsh functions wal_w(k, t), k = 0, 1, ..., 6, on the interval [0, 1)]

(a) Verify that the Walsh functions shown are orthonormal over [0, 1).
(b) Suppose we want to represent the signal x(t) = t[u(t) − u(t − 1)] in terms of the Walsh functions as
$$x_N(t) = \sum_{k=0}^{N} c_k\,\mathrm{wal}_w(k, t)$$
Find the coefficients c_k for N = 6.
(c) Sketch x_N(t) for N = 3 and 6.
3.7. For the periodic signal
$$x(t) = 2 + \tfrac{1}{2}\cos(t + 45°) + 2\cos(3t) - 2\sin(4t + 30°)$$
(a) Find the exponential Fourier series.
(b) Sketch the magnitude and phase spectra as a function of ω.
3.8. The signal shown in Figure P3.8 is created when a cosine voltage or current waveform is rectified by a single diode, a process known as half-wave rectification. Deduce the exponential Fourier-series expansion for the half-wave rectified signal.
3.9. Find the trigonometric Fourier-series expansion for the signal in Problem 3.8.
3.10. The signal shown in Figure P3.10 is created when a sine voltage or current waveform is rectified by a circuit with two diodes, a process known as full-wave rectification. Deduce the exponential Fourier-series expansion for the full-wave rectified signal.
3.11. Find the trigonometric Fourier-series expansion for the signal in Problem 3.10.
[Figure P3.8: half-wave rectified cosine waveform]
[Figure P3.10: full-wave rectified sine waveform]

3.12. Find the exponential Fourier-series representations of the signals shown in Figure P3.12. Plot the magnitude and phase spectrum for each case.
3.13. Find the trigonometric Fourier-series representations of the signals shown in Figure P3.12.
3.14. (a) Show that if a periodic signal is absolutely integrable, then |c_n| < ∞.
(b) Does the periodic signal x(t) = sin(2π/t) have a Fourier-series representation? Why?
(c) Does the periodic signal x(t) = tan 2πt have a Fourier-series representation? Why?
3.15. (a) Show that x(t) = t², −π < t ≤ π, x(t + 2π) = x(t), has the Fourier series
$$x(t) = \frac{\pi^2}{3} - 4\left(\cos t - \frac{1}{4}\cos 2t + \frac{1}{9}\cos 3t - \cdots\right)$$
(b) Set t = 0 to obtain
$$\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^2} = \frac{\pi^2}{12}$$
3.16. The Fourier coefficients of a periodic signal with period T are
$$c_n = \frac{1}{T}\exp[-jn\omega_0]$$
Does this represent a real signal? Why or why not? From the form of c_n, deduce the time signal x(t). (Hint: Use
$$\int \exp[-jn\omega_0 t]\,\delta(t - 1)\,dt = \exp[-jn\omega_0].)$$
[Figure P3.12: signals (a)-(h) for Problems 3.12 and 3.13]

3.17. (a) Plot the signal
$$x(t) = \frac{1}{2} + \sum_{n=1}^{M}\frac{2}{n\pi}\sin\frac{n\pi}{2}\cos 2n\pi t$$
for M = 1, 3, and 5.
(b) Predict the form of x(t) as M → ∞.
3.18. Find the exponential Fourier series for the impulse trains shown in Figure P3.18.
3.19. The waveforms in Problem 3.18 can be considered to be periodic with period N for N any integer. Find the exponential Fourier-series coefficients for the case N = 3.
3.20. The Fourier-series coefficients for a periodic signal x(t) with period T are
$$c_n = \frac{\sin(n\pi/T)}{n\pi}$$
[Figure P3.18: impulse trains for Problem 3.18]
(a) Find T such that c₅ = 1/150 if T is large, so that sin(nπ/T) ≈ nπ/T.
(b) Determine the energy in x(t) and in
$$\hat{x}(t) = \sum_{n=-2}^{2} c_n\exp[jn\omega_0 t]$$
3.21. Specify the types of symmetry for the signals shown in Figure P3.21. Specify also which terms in the trigonometric Fourier series are zero.

[Figure P3.21: signals (a)-(e) for Problem 3.21]
3.22. Periodic or circular convolution is a special case of general convolution. For periodic signals with the same period T, periodic convolution is defined by the integral
$$z(t) = \frac{1}{T}\int_T x(\tau)y(t-\tau)\,d\tau$$
(a) Show that z(t) is periodic. Find its period.
(b) Show that periodic convolution is commutative and associative.
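When the signals are available as samples of one period, the integral above can be approximated by a circular convolution of the sample sequences. A minimal Python sketch (the two square pulses used as data are placeholders, not the signals of Figure P3.23):

```python
import numpy as np

def periodic_convolution(x, y, T):
    """z(t) = (1/T) * integral over one period of x(tau) y(t - tau) dtau,
    approximated from N uniformly spaced samples of one period of x and y."""
    N = len(x)
    dtau = T / N
    k = np.arange(N)
    z = np.array([np.sum(x * y[(n - k) % N]) for n in range(N)]) * dtau / T
    return z

T, N = 2.0, 400
t = np.arange(N) * T / N
x = (t < 1.0).astype(float)            # placeholder square pulse
y = (t < 1.0).astype(float)
z = periodic_convolution(x, y, T)
print(z.max())                         # triangular result; peak value 0.5 here
```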
3.23. Find the periodic convolution z(t) = x(t) ⊛ y(t) of the two signals shown in Figure P3.23. Verify Equation (3.5.15) for these signals.
[Figure P3.23: the signals x(t) and y(t) for Problem 3.23]

3.24. Consider the periodic signal x(t) that has the exponential Fourier-series expansion
$$x(t) = \sum_{n=-\infty}^{\infty} c_n\exp[jn\omega_0 t], \qquad c_0 = 0$$
(a) Integrate term by term to obtain the Fourier-series expansion of y(t) = ∫x(t)dt, and show that y(t) is periodic, too.
(b) How do the amplitudes of the harmonics of y(t) compare to the amplitudes of the harmonics of x(t)?
(c) Does integration deemphasize or accentuate the high-frequency components?
(d) From Part (c), is the integrated waveform smoother than the original waveform?
3.25. The Fourier-series representation of the triangular signal in Figure P3.25(a) is
$$x(t) = \frac{8}{\pi^2}\left[\sin t - \frac{1}{9}\sin 3t + \frac{1}{25}\sin 5t - \frac{1}{49}\sin 7t + \cdots\right]$$
Use this result to obtain the Fourier series for the signal in Figure P3.25(b).
[Figure P3.25: (a) triangular signal; (b) related signal]
3.26. A voltage x(t) is applied to the circuit shown in Figure P3.26. If the Fourier coefficients of x(t) are given by
$$c_n = \frac{1}{n^2 + 1}\exp\left[\frac{jn\pi}{2}\right]$$
(a) Prove that x(t) must be a real signal of time.
(b) What is the average value of the signal?
(c) Find the first three nonzero harmonics of y(t).
(d) What does the circuit do to the high-frequency terms of the input?
(e) Repeat Parts (c) and (d) for the case where y(t) is the voltage across the resistor instead.
[Figure P3.26: RC circuit with R = 1 Ω; y(t) is the voltage across the capacitor]

3.27. Find the voltage y(t) across the capacitor in Figure P3.26 if the input is
$$x(t) = 1 + 3\cos(t + 30°) + \cos(2t)$$

3.28. The input
$$x(t) = \sum_{n=-\infty}^{\infty} c_n\exp[jn\omega_0 t]$$
is applied to four different systems. The resulting outputs are
$$y_1(t) = \sum_{n} 10|c_n|\exp[j(n\omega_0 t + \theta_n - 3n\omega_0)]$$
$$y_2(t) = \sum_{n} c_n\exp[j(n\omega_0(t - t_0) - 3n\omega_0)]$$
$$y_3(t) = \sum_{n}\exp[-\omega_0|n|]\,c_n\exp[j(n\omega_0 t - 3n\omega_0)]$$
$$y_4(t) = \sum_{n}\exp[-j\omega_0|n|]\,c_n\exp[jn\omega_0 t]$$
where θ_n is the phase of c_n. Determine what type of distortion, if any, each system introduces.
3.29. For the circuit shown in Figure P3.29,
(a) Determine the transfer function H(ω).
(b) Sketch both |H(ω)| and ∠H(ω).
(c) Consider the input x(t) = 10 exp[jωt]. What is the highest frequency ω you can use such that
$$\left|\frac{|y(t)|}{|x(t)|} - 1\right| < 0.01$$
(d) What is the highest frequency ω you can use such that ∠H(ω) deviates from the ideal linear characteristic by less than 0.02?
[Figure P3.29: RC circuit with input x(t) and output y(t); component values as given in the figure]

3.30. Nonlinear devices can be used to generate harmonics of the input frequency. Consider the nonlinear system described by
$$y(t) = Ax(t) + Bx^2(t)$$
Find the response of the system to x(t) = a₁cos ω₁t + a₂cos ω₂t. List all new harmonics generated by the system, along with their amplitudes.
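The new components can be verified numerically by passing a two-tone signal through the square-law term and inspecting the spectrum. A Python sketch (the tone amplitudes, frequencies, and the constants A and B are arbitrary choices used only for illustration):

```python
import numpy as np

A, B = 1.0, 0.5                        # system constants in y = A x + B x^2
a1, a2, f1, f2 = 2.0, 1.0, 3.0, 5.0    # tone amplitudes and frequencies (Hz)

fs, Tdur = 200.0, 10.0                 # 10 s of data -> 0.1 Hz resolution
t = np.arange(0.0, Tdur, 1 / fs)
x = a1 * np.cos(2 * np.pi * f1 * t) + a2 * np.cos(2 * np.pi * f2 * t)
y = A * x + B * x**2

freqs = np.fft.rfftfreq(len(y), 1 / fs)
amps = 2 * np.abs(np.fft.rfft(y)) / len(y)
amps[0] /= 2                           # dc term should not be doubled

# Expect components at dc, f1, f2, 2*f1, 2*f2, f1 + f2, and |f1 - f2|
for f, a in zip(freqs, amps):
    if a > 1e-3:
        print(f"{f:6.1f} Hz   amplitude {a:.3f}")
```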
3.31. The square-wave signal of Example 3.3.3 with A = 1, T = 500 μs, and τ = 100 μs is passed through an ideal low-pass filter with cutoff frequency 4.2 kHz and applied to the LTI system whose frequency response H(ω) is shown in Figure P3.31. Find the response of the system.
[Figure P3.31: magnitude |H(ω)| and phase ∠H(ω) of the frequency response, plotted versus ω (rad/s × 10³)]
3.32. The triangular waveform of Example 3.5.2 with period T = 4 and peak amplitude A = 10 is applied to a series combination of a resistor R = 100 Ω and an inductor L = 0.1 H. Determine the power dissipated in the resistor.
3.33. A first-order system is modeled by the differential equation
$$\frac{dy(t)}{dt} + 3y(t) = x(t)$$
If the input is the waveform of Example 3.3.2, find the amplitudes of the first three harmonics in the output.
3.34. Repeat Problem 3.33 for the system
$$y''(t) + 3y'(t) + 2y(t) = x'(t) + x(t)$$
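Once the transfer function is known, the amplitude of the nth output harmonic is |H(nω₀)| times the amplitude of the nth input harmonic. The sketch below uses the system of Problem 3.34; the fundamental frequency and the input coefficient magnitudes are placeholders standing in for those of Example 3.3.2, which is not reproduced here.

```python
import numpy as np

def H(w):
    # Problem 3.34: y'' + 3y' + 2y = x' + x  =>  H(w) = (jw + 1) / ((jw)^2 + 3jw + 2)
    return (1j * w + 1) / ((1j * w) ** 2 + 3 * (1j * w) + 2)

w0 = 2 * np.pi                         # assumed fundamental frequency (placeholder)
c = {1: 0.45, 2: 0.00, 3: 0.15}        # placeholder input Fourier-coefficient magnitudes

for n in (1, 2, 3):
    out_amp = abs(H(n * w0)) * c[n]
    print(f"harmonic {n}: |H(n*w0)| = {abs(H(n * w0)):.4f}, output amplitude = {out_amp:.4f}")
```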


3.35. For the system shown in Figure P3.35, the input x(t) is periodic with period T. Show that at any time t > T₁ after the input is switched on, y_c(t) and y_s(t) approximate Re[c_n] and Im[c_n], respectively. Indeed, if T₁ is an integer multiple of the period T of the input signal x(t), then the outputs are precisely equal to the desired values. Discuss the outputs for the following cases:
(a) T₁ = T
(b) T₁ = nT, n an integer
(c) T₁ >> T, but T₁ ≠ nT
[Figure P3.35: x(t) is multiplied by cos nω₀t and by sin nω₀t, ω₀ = 2π/T, and each product is averaged over the interval (0, T₁) to produce y_c(t) and y_s(t)]
3.36. Consider the circuit shown in Figure P3.36. The input is the half-wave rectified signal of Problem 3.8. Find the amplitude of the second and fourth harmonics of the output y(t).
[Figure P3.36: circuit with R₁ = 500 Ω, C = 100 μF, and R₂ = 500 Ω; output y(t)]

3.37. Consider the circuit shown in Figure P3.37. The input is the half-wave rectified signal of Problem 3.8. Find the amplitude of the second and fourth harmonics of the output y(t).
[Figure P3.37: circuit with L = 0.1 H, C = 100 μF, and R = 1 kΩ; output y(t)]
3.38. (a) Determine the dc component and the amplitude of the second harmonic of the output signal y(t) in the circuits in Figures P3.36 and P3.37 if the input is the full-wave rectified signal of Problem 3.10.
(b) Find the first harmonic of the output signal y(t) in the circuits in Figures P3.36 and P3.37 if the input is the triangular waveform of Problem 3.32.
3.39. Show that the following is an identity:
$$(a)\quad \sum_{n=-N}^{N}\exp[jn\omega_0 t] = \frac{\sin\left[\left(N + \tfrac{1}{2}\right)\omega_0 t\right]}{\sin(\omega_0 t/2)}$$
3.40. For the signal x(t) depicted in Example 3.3.3, keep T fixed and discuss the effect of varying τ (with the restriction τ < T) on the Fourier coefficients.
3.41. Consider the signal x(t) shown in Figure 3.3.6. Determine the effect on the amplitude of the second harmonic of x(t) when there is a very small error in measuring τ. To do this, let τ = τ₀ − ε, where ε << τ₀, and find the second-harmonic dependence on ε. Find the percentage change in |c₂| when T = 10, τ = 1, and ε = 0.1.
3.42. A truncated sinusoidal waveform is shown in Figure P3.42.
[Figure P3.42: sinusoid of amplitude A truncated at level B]
(a) Determine the Fourier-series coefficients.
(b) Calculate the amplitude of the third harmonic for B = A/2.
(c) Solve for t₀ such that |c₃| is maximum. This method is used to generate harmonic content from a sinusoidal waveform.
3.43. For the signal x(t) shown in Figure P3.43, find the following:
(a) Determine the Fourier-series coefficients.
(b) Solve for the optimum value of t₀ for which |c₃| is maximum.
(c) Compare the result with Part (c) of Problem 3.42.
[Figure P3.43: periodic signal built from segments of |sin t|, with the parameter t₀ marked]

3.44. The signal x(t) shown in Figure P3.44 is the output of a smoothed half-wave rectified signal. The constants t₁, t₂, and A satisfy the following relations:
$$\omega t_1 = \pi - \tan^{-1}(\omega RC)$$
$$A = \sin\omega t_1\exp\left[\frac{t_1}{RC}\right]$$
$$A\exp\left[-\frac{t_2}{RC}\right] = \sin\omega t_2$$
$$RC = 0.1\ \mathrm{s}$$
$$\omega = 2\pi\times 60 = 377\ \mathrm{rad/s}$$
(a) Verify that ωt₁ = 1.5973 rad, A = 1.0429, and ωt₂ = 7.316 rad.
(b) Determine the exponential Fourier-series coefficients.
(c) Find the ratio of the amplitudes of the first harmonic and the dc component.
[Figure P3.44: smoothed half-wave rectified signal, consisting of A sin ωt segments joined by A exp(−t/RC) decay segments; ωt₁ and ωt₂ marked]

3.11 COMPUTER PROBLEMS

3.45. The Fourier-series coefficients can be computed numerically. This becomes advantageous when an analytical expression for x(t) is not known and x(t) is available as numerical data, or when the integration is difficult to perform. Show that
$$a_0 \approx \frac{1}{M}\sum_{m=1}^{M} x(m\,\Delta t)$$
$$a_n \approx \frac{2}{M}\sum_{m=1}^{M} x(m\,\Delta t)\cos\frac{2\pi mn}{M}$$
$$b_n \approx \frac{2}{M}\sum_{m=1}^{M} x(m\,\Delta t)\sin\frac{2\pi mn}{M}$$
where x(mΔt) are M equally spaced data points representing x(t) over (0, T), and Δt is the interval between data points such that
$$\Delta t = T/M$$
(Hint: Approximate the integral with a summation of rectangular strips, each of width Δt.)
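A direct implementation of these sums is given below, assuming the samples x(mΔt) are already stored in an array. The triangular test waveform, with A = π/2 and T = 2π as in Problem 3.46, is only a convenient stand-in for whatever data are at hand.

```python
import numpy as np

def trig_fourier_coeffs(x_samples, n_max):
    """Numerical trigonometric Fourier coefficients from M equally spaced samples
    of one period, using rectangular-strip integration (Problem 3.45)."""
    M = len(x_samples)
    m = np.arange(1, M + 1)
    a0 = np.sum(x_samples) / M
    a = np.array([2 / M * np.sum(x_samples * np.cos(2 * np.pi * m * n / M))
                  for n in range(1, n_max + 1)])
    b = np.array([2 / M * np.sum(x_samples * np.sin(2 * np.pi * m * n / M))
                  for n in range(1, n_max + 1)])
    return a0, a, b

M, T = 100, 2 * np.pi
t = np.arange(1, M + 1) * T / M
x = np.pi / 2 - 0.5 * np.abs(t - np.pi)        # triangular test waveform
a0, a, b = trig_fourier_coeffs(x, n_max=5)
print(a0, np.round(a, 4), np.round(b, 4))
```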
3.46. Consider the triangular signal of Example 3.5.2 with A = π/2 and T = 2π.
(a) Use the method of Problem 3.45 to compute numerically the first five harmonics from N equally spaced points per period of this waveform. Assume that N = 100.
(b) Compare the numerical values obtained in (a) with the actual values.
(c) Repeat for values of N = 20, 40, 60, and 80. Comment on your results.
3.47. The signal of Figure P3.47 can be represented as
$$x(t) = \frac{4}{\pi}\sum_{n\ \mathrm{odd}}\frac{1}{n}\sin n\pi t$$
Using the approximation
$$\hat{x}_N(t) = \frac{4}{\pi}\sum_{\substack{n=1\\ n\ \mathrm{odd}}}^{N}\frac{1}{n}\sin n\pi t$$
write a computer program to calculate and sketch the error function
$$e_N(t) = x(t) - \hat{x}_N(t)$$
from t = 0 to t = 2 for N = 1, 3, 5, and 7.
[Figure P3.47: square wave represented by the series above]

3.48. The integral-squared error (error energy) remaining in the approximation of Problem 3.47 after N terms is
$$\int_0^2 |e_N(t)|^2\,dt = \int_0^2 |x(t)|^2\,dt - \sum_{\substack{n=1\\ n\ \mathrm{odd}}}^{N} |b_n|^2$$
where the b_n are the coefficients in the series of Problem 3.47. Calculate the integral-squared error for N = 11, 21, 31, 41, 51, 101, and 201.
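Both sides of this relation are easy to evaluate numerically for the square wave to which the series of Problem 3.47 converges. A short Python sketch (the sampling grid and the set of N values printed are arbitrary choices):

```python
import numpy as np

T = 2.0
t = np.linspace(0.0, T, 200001)
dt = t[1] - t[0]
x = np.where((t % T) < 1.0, 1.0, -1.0)          # square wave of Problem 3.47

def partial_sum(t, N):
    s = np.zeros_like(t)
    for n in range(1, N + 1, 2):                # odd harmonics only
        s += (4 / (np.pi * n)) * np.sin(n * np.pi * t)
    return s

total = np.sum(x**2) * dt                       # integral of |x(t)|^2 over (0, 2)
for N in (1, 3, 5, 7, 11, 51, 201):
    direct = np.sum((x - partial_sum(t, N))**2) * dt
    parseval = total - sum((4 / (np.pi * n))**2 for n in range(1, N + 1, 2))
    print(f"N = {N:3d}: direct = {direct:.4f}   via Parseval = {parseval:.4f}")
```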
3.49. Write a program to compute numerically the coefficients of the series expansion, in terms of wal_w(k, t), 0 ≤ k ≤ 6, of the signal x(t) = t[u(t) − u(t − 1)]. Compare your results with those of Problem 3.6.
Chapter 4

The Fourier Transform

4.1 INTRODUCTION

We saw in Chapter 3 that the Fourier series is a powerful tool in treating various problems involving periodic signals. We first illustrated this fact in Section 3.6, where we demonstrated how an LTI system processes a periodic input to produce the output response. More precisely, at any frequency nω₀, we showed that the amplitude of the output is equal to the product of the amplitude of the periodic input signal, |c_n|, and the magnitude of the system function |H(ω)| evaluated at ω = nω₀, and the phase of the output is equal to the sum of the phase of the periodic input signal, ∠c_n, and the system phase ∠H(ω) evaluated at ω = nω₀.
In Chapter 3, we were able to decompose any periodic signal with period T in terms of infinitely many harmonically related complex exponentials of the form exp[jnω₀t]. All such harmonics have the common period T = 2π/ω₀. In this chapter, we consider another powerful mathematical technique, called the Fourier transform, for describing both periodic and nonperiodic signals for which no Fourier series exists. Like the Fourier-series coefficients, the Fourier transform specifies the spectral content of a signal, thus providing a frequency-domain description of the signal. Besides being useful in analytically representing aperiodic signals, the Fourier transform is a valuable tool in the analysis of LTI systems.
It is perhaps difficult to see how some typical aperiodic signals, such as
$$u(t), \qquad \exp[-t]u(t), \qquad \mathrm{rect}(t/T)$$
could be made up of complex exponentials. The problem is that complex exponentials exist for all time and have constant amplitudes, whereas typical aperiodic signals do not possess these properties. In spite of this, we will see that such aperiodic signals do
have harmonic content; that is, they can be expressed as the superposition of harmonically related exponentials.
In Section 4.2, we use the Fourier series as a stepping-stone to develop the Fourier transform and show that the latter can be considered an extension of the Fourier series. In Section 4.3, we consider the properties of the Fourier transform that make it useful in LTI system analysis and provide examples of the calculation of some elementary transform pairs. In Section 4.4, we discuss some applications related to the use of Fourier-transform theory in communication systems, signal processing, and control systems. In Section 4.5, we introduce the concepts of bandwidth and duration of a signal and discuss several measures for these quantities. Finally, in the same section, the uncertainty principle is developed and its significance is discussed.

4.2 THE CONTINUOUS-TIME FOURIER TRANSFORM

In Chapter 3, we presented the Fourier series as a method for analyzing periodic signals. We saw that the representation of a periodic signal in terms of a weighted sum of complex exponentials was useful in obtaining the steady-state response of stable, linear, time-invariant systems to periodic inputs. Fourier-series analysis has somewhat limited application in that it is restricted to inputs which are periodic, while many signals of interest are aperiodic. We can develop a method, known as the Fourier transform, for representing aperiodic signals by decomposing such signals into a set of weighted exponentials, in a manner analogous to the Fourier-series representation of periodic signals. We will use a heuristic development invoking physical arguments where necessary, to circumvent rigorous mathematics. As we see in the next subsection, in the case of aperiodic signals, the sum in the Fourier series becomes an integral and each exponential has essentially zero amplitude, but the totality of all these infinitesimal exponentials produces the aperiodic signal.

4.2.1 Development of the Fourier Transform

The generalization of the Fourier series to aperiodic signals was suggested by Fourier himself and can be deduced from an examination of the structure of the Fourier series for periodic signals as the period T approaches infinity. In making the transition from the Fourier series to the Fourier transform, where necessary, we use a heuristic development invoking physical arguments to circumvent some very subtle mathematical concepts. After taking the limit, we will find that the magnitude spectrum of an aperiodic signal is not a line spectrum (as with a periodic signal), but instead occupies a continuum of frequencies. The same is true of the corresponding phase spectrum.
To clarify how the change from discrete to continuous spectra takes place, consider the periodic signal x̃(t) shown in Figure 4.2.1. Now think of keeping the waveform of one period of x̃(t) unchanged, but carefully and intentionally increase T. In the limit as T → ∞, only a single pulse remains, because the nearest neighbors have been moved to infinity. We saw in Chapter 3 that increasing T has two effects on the spectrum of
Figure 4.2.1 Allowing the period T to increase to obtain the aperiodic signal.

x̃(t): The amplitude of the spectrum decreases as 1/T, and the spacing between lines decreases as 2π/T. As T approaches infinity, the spacing between lines approaches zero. This means that the spectral lines move closer, eventually becoming a continuum. The overall shapes of the magnitude and phase spectra are determined by the shape of the single pulse that remains in the new signal x(t), which is aperiodic.
To investigate what happens mathematically, we use the exponential form of the Fourier-series representation for x̃(t); i.e.,
$$\tilde{x}(t) = \sum_{n=-\infty}^{\infty} c_n\exp[jn\omega_0 t] \qquad (4.2.1)$$
where
$$c_n = \frac{1}{T}\int_{-T/2}^{T/2}\tilde{x}(t)\exp[-jn\omega_0 t]\,dt \qquad (4.2.2)$$
In the limit as T → ∞, we see that ω₀ = 2π/T becomes an infinitesimally small quantity, dω, so that
$$\frac{1}{T} = \frac{d\omega}{2\pi}$$
We argue that in the limit, nω₀ should be a continuous variable ω. Then, from Equation (4.2.2), the Fourier coefficients per unit frequency interval are
$$Tc_n = \int_{-\infty}^{\infty}\tilde{x}(t)\exp[-j\omega t]\,dt \qquad (4.2.3)$$
Substituting Equation (4.2.3) into Equation (4.2.1), and recognizing that in the limit the sum becomes an integral and x̃(t) approaches x(t), we obtain
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} x(t)\exp[-j\omega t]\,dt\right]\exp[j\omega t]\,d\omega \qquad (4.2.4)$$
The inner integral, in brackets, is a function of ω only, not t. Denoting this integral by X(ω), we can write Equation (4.2.4) as
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\exp[j\omega t]\,d\omega \qquad (4.2.5)$$
where
$$X(\omega) = \int_{-\infty}^{\infty} x(t)\exp[-j\omega t]\,dt \qquad (4.2.6)$$
Equations (4.2.5) and (4.2.6) constitute the Fourier-transform pair for aperiodic signals that most electrical engineers use. (Some communications engineers prefer to write the frequency variable in hertz rather than rad/s; this can be done by an obvious change of variables.) X(ω) is called the Fourier transform of x(t) and plays the same role for aperiodic signals that c_n plays for periodic signals. Thus, X(ω) is the spectrum of x(t) and is a continuous function defined for all values of ω, whereas c_n is defined only for discrete frequencies. Therefore, as mentioned earlier, an aperiodic signal has a continuous spectrum rather than a line spectrum. X(ω) specifies the weight of the complex exponentials used to represent the waveform in Equation (4.2.5) and, in general, is a complex function of the variable ω. Thus, it can be written as
$$X(\omega) = |X(\omega)|\exp[j\phi(\omega)] \qquad (4.2.7)$$
The magnitude of X(ω) plotted against ω is called the magnitude spectrum of x(t), and |X(ω)|² is called the energy spectrum. The angle of X(ω) plotted versus ω is called the phase spectrum.
In Chapter 3, we saw that for any periodic signal x(t), there is a one-to-one correspondence between x(t) and the set of Fourier coefficients c_n. Here, too, it can be shown that there is a one-to-one correspondence between x(t) and X(ω), denoted by
$$x(t) \leftrightarrow X(\omega)$$
which is meant to imply that for every x(t) having a Fourier transform, there is a unique X(ω) and vice versa. Some sufficient conditions for the signals to have a Fourier transform are discussed later. We emphasize that while we have used a real-valued signal x(t) as an artifice in the development of the transform pair, the Fourier-transform relations hold for complex signals as well. With few exceptions, however, we will be concerned primarily with real-valued signals of time.
As a notational convenience, X(ω) is often denoted by ℱ{x(t)} and is read "the Fourier transform of x(t)." In addition, we adhere to the convention that the Fourier transform is represented by a capital letter that is the same as the lowercase letter denoting the time signal. For example,
$$\mathcal{F}\{h(t)\} = H(\omega) = \int_{-\infty}^{\infty} h(t)\exp[-j\omega t]\,dt$$
Before we examine further the general properties of the Fourier transform and its physical meaning, let us introduce a set of sufficient conditions for the existence of the Fourier transform.

4.2.2 Existence of the Fourier Transform

The signal x(t) is said to have a Fourier transform in the ordinary sense if the integral in Equation (4.2.6) converges (i.e., exists). Since
$$|x(t)\exp[-j\omega t]| = |x(t)|\,|\exp[-j\omega t]|$$
and |exp[-jωt]| = 1, it follows that the integral in Equation (4.2.6) exists if

1. x(t) is absolutely integrable and
2. x(t) is "well behaved."

The first condition means that
$$\int_{-\infty}^{\infty}|x(t)|\,dt < \infty \qquad (4.2.8)$$
A class of signals that satisfy Equation (4.2.8) is energy signals. Such signals, in general, are either time limited or asymptotically time limited in the sense that x(t) → 0 as t → ±∞. The Fourier transform of power signals (a class of signals defined in Chapter 1 to have infinite energy content, but finite average power) can also be shown to exist, but to contain impulses. Therefore, any signal that is either a power or an energy signal has a Fourier transform.
"Well behaved" means that the signal is not too "wiggly" or, more correctly, that it is of bounded variation. This, simply stated, means that x(t) can be represented by a curve of finite length in any finite interval of time, or alternatively, that the signal has a finite number of discontinuities, minima, and maxima within any finite interval of time. At a point of discontinuity, t₀, the inversion integral in Equation (4.2.5) converges to ½[x(t₀⁺) + x(t₀⁻)]; otherwise it converges to x(t). Except for impulses, most signals of interest are well behaved and satisfy Equation (4.2.8).
The conditions just given for the existence of the Fourier transform of x(t) are sufficient conditions. This means that there are signals that violate either one or both conditions and yet possess a Fourier transform. Examples are power signals (unit-step signal, periodic signals, etc.) that are not absolutely integrable over an infinite interval and impulse trains that are not "well behaved" and are neither power nor energy signals, but still have Fourier transforms. We can include signals that do not have Fourier transforms in the ordinary sense by generalization to transforms in the limit. For example, to obtain the Fourier transform of a constant, we consider x(t) = rect(t/τ) and let τ → ∞ after obtaining the Fourier transform.

4.2.3 Examples of the Continuous-Time Fourier Transform

In this section, we compute the transform of some commonly encountered time signals.

Example 4.2.1
The Fourier transform of the rectangular pulse x(t) = rect(t/τ) is
$$X(\omega) = \int_{-\infty}^{\infty} x(t)\exp[-j\omega t]\,dt = \int_{-\tau/2}^{\tau/2}\exp[-j\omega t]\,dt = \frac{1}{j\omega}\left(\exp\left[\frac{j\omega\tau}{2}\right] - \exp\left[-\frac{j\omega\tau}{2}\right]\right)$$
This can be simplified to
$$X(\omega) = \frac{2}{\omega}\sin\frac{\omega\tau}{2} = \tau\,\mathrm{sinc}\,\frac{\omega\tau}{2\pi} = \tau\,\mathrm{Sa}\,\frac{\omega\tau}{2}$$
Since X(ω) is a real-valued function of ω, its phase is zero for all ω. X(ω) is plotted in Figure 4.2.2 as a function of ω.

Figure 4.2.2 Fourier transform of a rectangular pulse: x(t) = rect(t/τ) and X(ω) = τ sinc(ωτ/2π).

Clearly, the spectrum of the rectangular pulse extends over the range −∞ < ω < ∞. However, from Figure 4.2.2, we see that most of the spectral content of the pulse is contained in the interval −2π/τ < ω < 2π/τ. This interval is labeled the main lobe of the sinc signal. The other portion of the spectrum represents what are called the side lobes of the spectrum. Increasing τ results in a narrower main lobe, whereas a smaller τ produces a Fourier transform with a wider main lobe.
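The pair can be checked by approximating the Fourier integral with a Riemann sum. A brief Python sketch (τ = 2 and the frequency points are arbitrary choices; note that np.sinc(u) denotes sin(πu)/(πu)):

```python
import numpy as np

tau = 2.0
t = np.linspace(-5.0, 5.0, 100001)              # rect(t/tau) vanishes for |t| > tau/2
dt = t[1] - t[0]
x = (np.abs(t) <= tau / 2).astype(float)

def fourier_transform(x, t, w):
    # X(w) = integral of x(t) exp(-j w t) dt, approximated as a Riemann sum
    return np.array([np.sum(x * np.exp(-1j * wk * t)) * dt for wk in w])

w = np.linspace(-10.0, 10.0, 9)
X_num = fourier_transform(x, t, w)
X_exact = tau * np.sinc(w * tau / (2 * np.pi))  # tau * sin(w tau / 2) / (w tau / 2)
print(np.max(np.abs(X_num - X_exact)))          # small discretization error
```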

Example 4.2.2
Consider the triangular pulse defined as
$$\Lambda(t/\tau) = \begin{cases} 1 - \dfrac{|t|}{\tau}, & |t| \le \tau \\ 0, & |t| > \tau\end{cases}$$
This pulse is of unit height, centered about t = 0, and of width 2τ. Its Fourier transform is
$$X(\omega) = \int_{-\infty}^{\infty}\Lambda(t/\tau)\exp[-j\omega t]\,dt = \int_{-\tau}^{0}\left(1 + \frac{t}{\tau}\right)\exp[-j\omega t]\,dt + \int_{0}^{\tau}\left(1 - \frac{t}{\tau}\right)\exp[-j\omega t]\,dt$$
$$= 2\int_{0}^{\tau}\left(1 - \frac{t}{\tau}\right)\cos\omega t\,dt$$
After performing the integration and simplifying the expression, we obtain
$$\Lambda(t/\tau) \leftrightarrow \tau\,\mathrm{sinc}^2\frac{\omega\tau}{2\pi} = \tau\,\mathrm{Sa}^2\frac{\omega\tau}{2}$$

Example 4.2.3
The Fourier transform of the one-sided exponential signal
$$x(t) = \exp[-at]u(t), \qquad a > 0$$
is obtained from Equation (4.2.6) as
$$X(\omega) = \int_{-\infty}^{\infty}\exp[-at]u(t)\exp[-j\omega t]\,dt = \int_{0}^{\infty}\exp[-(a + j\omega)t]\,dt = \frac{1}{a + j\omega} \qquad (4.2.9)$$
Example 4.2.4
In this example, we evaluate the Fourier transform of the two-sided exponential signal
$$x(t) = \exp[-a|t|], \qquad a > 0$$
From Equation (4.2.6), the transform is
$$X(\omega) = \int_{-\infty}^{0}\exp[at]\exp[-j\omega t]\,dt + \int_{0}^{\infty}\exp[-at]\exp[-j\omega t]\,dt = \frac{1}{a - j\omega} + \frac{1}{a + j\omega} = \frac{2a}{a^2 + \omega^2}$$

Example 4.2.5
The Fourier transform of the impulse function is readily obtained from Equation (4.2.6) by making use of Equation (1.6.7):
$$\mathcal{F}\{\delta(t)\} = \int_{-\infty}^{\infty}\delta(t)\exp[-j\omega t]\,dt = 1$$
We thus have the pair
$$\delta(t) \leftrightarrow 1 \qquad (4.2.10)$$
Using the inversion formula, we must clearly have
$$\delta(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\exp[j\omega t]\,d\omega \qquad (4.2.11)$$
Equation (4.2.11) states that the impulse signal theoretically consists of equal-amplitude sinusoids of all frequencies. This integral is obviously meaningless, unless we interpret δ(t) as a function specified by its properties rather than an ordinary function having definite values for every t, as we demonstrated in Chapter 1. Equation (4.2.11) can also be written in the limit form
$$\delta(t) = \lim_{\alpha\to\infty}\frac{\sin\alpha t}{\pi t} \qquad (4.2.12)$$
This result can be established by writing Equation (4.2.11) as
$$\delta(t) = \frac{1}{2\pi}\lim_{\alpha\to\infty}\int_{-\alpha}^{\alpha}\exp[j\omega t]\,d\omega = \lim_{\alpha\to\infty}\frac{1}{2\pi}\,\frac{2\sin\alpha t}{t} = \lim_{\alpha\to\infty}\frac{\sin\alpha t}{\pi t}$$
Example 4.2.6
We can easily show that $(1/2\pi)\int_{-\infty}^{\infty}\exp[j\omega t]\,d\omega$ "behaves" like the unit-impulse function by putting it inside an integral; i.e., we evaluate an integral of the form
$$\int_{-\infty}^{\infty}\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp[j\omega t]\,d\omega\right]g(t)\,dt$$
where g(t) is any arbitrary well-behaved signal that is continuous at t = 0 and possesses a Fourier transform G(ω). Interchanging the order of integration, we have
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} g(t)\exp[j\omega t]\,dt\right]d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} G(-\omega)\,d\omega$$
From the inversion formula, it follows that
$$\frac{1}{2\pi}\int_{-\infty}^{\infty} G(-\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} G(\omega)\,d\omega = g(0)$$
That is, $(1/2\pi)\int_{-\infty}^{\infty}\exp[j\omega t]\,d\omega$ "behaves" like an impulse at t = 0.

Another transform pair follows from interchanging the roles of t and ω in Equation (4.2.11). The result is
$$\delta(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\exp[j\omega t]\,dt$$
or
$$1 \leftrightarrow 2\pi\delta(\omega) \qquad (4.2.13)$$
In words, the Fourier transform of a constant is an impulse in the frequency domain. The factor 2π arises because we are using radian frequency. If we were to write the transform in terms of frequency in hertz, the factor 2π would disappear (δ(ω) = δ(f)/2π).
Example 4.2.7
In this example, we use Equation (4.2.12) and Example 4.2.1 to prove Equation (4.2.13). By letting τ go to ∞ in Example 4.2.1, we find that the signal x(t) approaches 1 for all values of t. On the other hand, from Equation (4.2.12), the limit of the transform of rect(t/τ) becomes
$$\lim_{\tau\to\infty}\frac{2\sin(\omega\tau/2)}{\omega} = 2\pi\delta(\omega)$$

Example 4.2.8
Consider the exponential signal x(t) = exp[jω₀t]. The Fourier transform of this signal is
$$X(\omega) = \int_{-\infty}^{\infty}\exp[j\omega_0 t]\exp[-j\omega t]\,dt = \int_{-\infty}^{\infty}\exp[-j(\omega - \omega_0)t]\,dt$$
Using the result leading to Equation (4.2.13), we obtain
$$\exp[j\omega_0 t] \leftrightarrow 2\pi\delta(\omega - \omega_0) \qquad (4.2.14)$$
This is expected, since exp[jω₀t] has energy concentrated at ω₀.

Periodic signals are power signals, and we anticipate, according to the discussion in Section 4.2.2, that their Fourier transforms contain impulses (delta functions). In Chapter 3, we examined the spectrum of periodic signals by computing the Fourier-series coefficients. We found that the spectrum consists of a set of lines located at nω₀, where ω₀ is the fundamental frequency of the periodic signal. In the following examples, we find the Fourier transform of periodic signals and show that the spectra of periodic signals consist of trains of impulses.

Example 4.2.9
Consider the periodic signal x(t) with period T; thus, ω₀ = 2π/T. Assume that x(t) has the Fourier-series representation
$$x(t) = \sum_{n=-\infty}^{\infty} c_n\exp[jn\omega_0 t]$$
Hence, taking the Fourier transform of both sides yields
$$X(\omega) = \sum_{n=-\infty}^{\infty} c_n\,\mathcal{F}\{\exp[jn\omega_0 t]\}$$
Using Equation (4.2.14), we obtain
$$X(\omega) = \sum_{n=-\infty}^{\infty} 2\pi c_n\,\delta(\omega - n\omega_0) \qquad (4.2.15)$$
Thus, the Fourier transform of a periodic signal is simply an impulse train with impulses located at ω = nω₀, each of which has a strength 2πc_n, and all impulses are separated from each other by ω₀. Note that because the signal x(t) is periodic, the magnitude spectrum |X(ω)| is a train of impulses of strength 2π|c_n|, whereas the spectrum obtained through the use of the Fourier series is a line spectrum with lines of finite amplitude |c_n|. Note that the Fourier transform is not a periodic function: Even though the impulses are separated by the same amount, their weights are all different.

Example 4.2.10
Consider the periodic signal
$$x(t) = \sum_{n=-\infty}^{\infty}\delta(t - nT)$$
which has period T. To find the Fourier transform, we first have to compute the Fourier-series coefficients. From Equation (3.3.4), the Fourier-series coefficients are
$$c_n = \frac{1}{T}\int_{-T/2}^{T/2}\delta(t)\exp\left[-j\frac{2\pi nt}{T}\right]dt = \frac{1}{T}$$
since x(t) = δ(t) in any interval of length T. Thus, the impulse train has the Fourier-series representation
$$x(t) = \sum_{n=-\infty}^{\infty}\frac{1}{T}\exp\left[j\frac{2\pi nt}{T}\right]$$
By using Equation (4.2.14), we find that the Fourier transform of the impulse train is
$$X(\omega) = \frac{2\pi}{T}\sum_{n=-\infty}^{\infty}\delta\left(\omega - \frac{2\pi n}{T}\right) \qquad (4.2.16)$$
That is, the Fourier transformation of a sequence of impulses in the time domain yields a sequence of impulses in the frequency domain.

A brief listing of some other Fourier pairs is given in Table 4.1.

4.3 PROPERTIES OF THE FOURIER TRANSFORM

A number of useful properties of the Fourier transform allow some problems to be solved almost by inspection. In this section, we shall summarize many of these properties, some of which may be more or less obvious to the reader.

4.3.1 Linearity

If
$$x_1(t) \leftrightarrow X_1(\omega)$$
$$x_2(t) \leftrightarrow X_2(\omega)$$
then
$$ax_1(t) + bx_2(t) \leftrightarrow aX_1(\omega) + bX_2(\omega) \qquad (4.3.1)$$
TABLE 4.1 Some Selected Fourier Transform Pairs

x(t) ↔ X(ω)

1. $1 \leftrightarrow 2\pi\delta(\omega)$
2. $u(t) \leftrightarrow \pi\delta(\omega) + \dfrac{1}{j\omega}$
3. $\delta(t) \leftrightarrow 1$
4. $\delta(t - t_0) \leftrightarrow \exp[-j\omega t_0]$
5. $\mathrm{rect}(t/\tau) \leftrightarrow \tau\,\mathrm{sinc}\,\dfrac{\omega\tau}{2\pi} = \dfrac{2\sin(\omega\tau/2)}{\omega}$
6. $\dfrac{\sin\omega_B t}{\pi t} \leftrightarrow \mathrm{rect}(\omega/2\omega_B)$
7. $\mathrm{sgn}\,t \leftrightarrow \dfrac{2}{j\omega}$
8. $\exp[j\omega_0 t] \leftrightarrow 2\pi\delta(\omega - \omega_0)$
9. $\displaystyle\sum_n a_n\exp[jn\omega_0 t] \leftrightarrow 2\pi\sum_n a_n\,\delta(\omega - n\omega_0)$
10. $\cos\omega_0 t \leftrightarrow \pi[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)]$
11. $\sin\omega_0 t \leftrightarrow \dfrac{\pi}{j}[\delta(\omega - \omega_0) - \delta(\omega + \omega_0)]$
12. $(\cos\omega_0 t)u(t) \leftrightarrow \dfrac{\pi}{2}[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)] + \dfrac{j\omega}{\omega_0^2 - \omega^2}$
13. $(\sin\omega_0 t)u(t) \leftrightarrow \dfrac{\pi}{2j}[\delta(\omega - \omega_0) - \delta(\omega + \omega_0)] + \dfrac{\omega_0}{\omega_0^2 - \omega^2}$
14. $\cos\omega_0 t\,\mathrm{rect}(t/\tau) \leftrightarrow \dfrac{\tau}{2}\left[\mathrm{sinc}\,\dfrac{(\omega - \omega_0)\tau}{2\pi} + \mathrm{sinc}\,\dfrac{(\omega + \omega_0)\tau}{2\pi}\right]$
15. $\exp[-at]u(t),\ \mathrm{Re}[a] > 0 \leftrightarrow \dfrac{1}{a + j\omega}$
16. $t\exp[-at]u(t),\ \mathrm{Re}[a] > 0 \leftrightarrow \dfrac{1}{(a + j\omega)^2}$
17. $\dfrac{t^{n-1}}{(n-1)!}\exp[-at]u(t),\ \mathrm{Re}[a] > 0 \leftrightarrow \dfrac{1}{(a + j\omega)^n}$
18. $\exp[-a|t|],\ a > 0 \leftrightarrow \dfrac{2a}{a^2 + \omega^2}$
4oj.
19. lrl exp[-alrl], Rela] > o
a2+az
Sec. 4.3 Properties of the Fourier Translorm 173

TABLE 4.1 (@ntinued)

r(r) x(,)

20.
I
> 0 I exp [- alr,r l]
;4,:,Re{al
n. Re[a]>o :4rryIPl:-dell
F+, 2a

a>o
ti [-r2l
?2. expl- at2l,
V;*p[ * J

, r@T
23. 6(t/r) T SlnC- :-
ln
24. > s(r-nI) ?.!.'(, T)
where a and b are arbitrary constants. This property is the direct result of the linearity of the operation of integration. The linearity property can easily be extended to a linear combination of an arbitrary number of components and simply means that the Fourier transform of a linear combination of an arbitrary number of signals is the same linear combination of the transforms of the individual components.

Example 4.3.1
Suppose we want to find the Fourier transform of cos ω₀t. The cosine signal can be written as a sum of two exponentials as follows:
$$\cos\omega_0 t = \frac{1}{2}\left[\exp[j\omega_0 t] + \exp[-j\omega_0 t]\right]$$
From Equation (4.2.14) and the linearity property of the Fourier transform,
$$\mathcal{F}\{\cos\omega_0 t\} = \pi[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)]$$
Similarly, the Fourier transform of sin ω₀t is
$$\mathcal{F}\{\sin\omega_0 t\} = \frac{\pi}{j}[\delta(\omega - \omega_0) - \delta(\omega + \omega_0)]$$

4.3.2 Symmetry

If x(t) is a real-valued time signal, then
$$X(-\omega) = X^*(\omega) \qquad (4.3.2)$$
where * denotes the complex conjugate. This property, referred to as conjugate symmetry, follows from taking the conjugate of both sides of Equation (4.2.6) and using the fact that x(t) is real.
Now, if we express X(ω) in the polar form, we have
$$X(\omega) = |X(\omega)|\exp[j\phi(\omega)] \qquad (4.3.3)$$
Taking the complex conjugate of both sides of Equation (4.3.3) yields
$$X^*(\omega) = |X(\omega)|\exp[-j\phi(\omega)]$$
Replacing each ω by −ω in Equation (4.3.3) results in
$$X(-\omega) = |X(-\omega)|\exp[j\phi(-\omega)]$$
By Equation (4.3.2), the left-hand sides of the last two equations are equal. It then follows that
$$|X(\omega)| = |X(-\omega)| \qquad (4.3.4)$$
$$\phi(\omega) = -\phi(-\omega) \qquad (4.3.5)$$
i.e., the magnitude spectrum is an even function of frequency, and the phase spectrum is an odd function of frequency.

Example 4.3.2
From Equations (4.3.4) and (4.3.5), the inversion formula, Equation (4.2.5), which is written in terms of complex exponentials, can be changed to an expression involving real cosinusoidal signals. Specifically, for real x(t),
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\exp[j\omega t]\,d\omega$$
$$= \frac{1}{2\pi}\int_{-\infty}^{0} X(\omega)\exp[j\omega t]\,d\omega + \frac{1}{2\pi}\int_{0}^{\infty} X(\omega)\exp[j\omega t]\,d\omega$$
$$= \frac{1}{2\pi}\int_{0}^{\infty}|X(\omega)|\left(\exp[j(\omega t + \phi(\omega))] + \exp[-j(\omega t + \phi(\omega))]\right)d\omega$$
$$= \frac{1}{2\pi}\int_{0}^{\infty} 2|X(\omega)|\cos[\omega t + \phi(\omega)]\,d\omega$$
Equations (4.3.4) and (4.3.5) ensure that the exponentials of the form exp[jωt] combine properly with those of the form exp[−jωt] to produce real sinusoids of frequency ω for use in the expansion of real-valued time signals. Thus, a real-valued signal x(t) can be written in terms of the amplitudes and phases of real sinusoids that constitute the signal.

Example 4.3.3
Consider an even and real-valued signal x(t). Its transform X(ω) is
$$X(\omega) = \int_{-\infty}^{\infty} x(t)\exp[-j\omega t]\,dt = \int_{-\infty}^{\infty} x(t)(\cos\omega t - j\sin\omega t)\,dt$$
Since x(t)cos ωt is an even function of t and x(t)sin ωt is an odd function of t, we have
$$X(\omega) = 2\int_{0}^{\infty} x(t)\cos\omega t\,dt$$
which is a real and even function of ω. Therefore, the Fourier transform of an even and real-valued signal in the time domain is an even and real-valued signal in the frequency domain.
4.3.3 Time Shifting

If
$$x(t) \leftrightarrow X(\omega)$$
then
$$x(t - t_0) \leftrightarrow X(\omega)\exp[-j\omega t_0] \qquad (4.3.6a)$$
Similarly,
$$x(t)\exp[j\omega_0 t] \leftrightarrow X(\omega - \omega_0) \qquad (4.3.6b)$$
The proofs of these properties follow from Equation (4.2.6) after suitable substitution of variables. Using the polar form, Equation (4.3.3), in Equation (4.3.6a) yields
$$\mathcal{F}\{x(t - t_0)\} = |X(\omega)|\exp[j(\phi(\omega) - \omega t_0)]$$
The last equation indicates that shifting in time does not alter the amplitude spectrum of the signal. The only effect of such shifting is to introduce a phase shift in the transform that is a linear function of ω. The result is reasonable because we have already seen that, to delay or advance a sinusoid, we have only to adjust the phase. In addition, the energy content of a waveform does not depend on its position in time.
4.3.4 Time Scaling

If
$$x(t) \leftrightarrow X(\omega)$$
then
$$x(at) \leftrightarrow \frac{1}{|a|}X\left(\frac{\omega}{a}\right) \qquad (4.3.7)$$
where a is a real constant. The proof of this follows directly from the definition of the Fourier transform and the appropriate substitution of variables.
Aside from the amplitude factor of 1/|a|, linear scaling in time by a factor of a corresponds to linear scaling in frequency by a factor of 1/a. The result can be interpreted physically by considering a typical signal x(t) and its Fourier transform X(ω), as shown in Figure 4.3.1. If |a| < 1, x(at) is expanded in time, and the signal varies more slowly (becomes smoother) than the original. These slower variations deemphasize the high-frequency components and manifest themselves in more appreciable low-frequency sinusoidal components. That is, expansion in the time domain implies compression in
Figure 4.3.1 Examples of the time-scaling property: (a) the original signal and its magnitude spectrum, (b) the time-expanded signal and its magnitude spectrum, and (c) the time-compressed signal and the resulting magnitude spectrum.

the frequency domain and vice versa. If |a| > 1, x(at) is compressed in time and must vary rapidly. Faster variations in time are manifested by the presence of higher frequency components.
The notion of time expansion and frequency compression has found application in areas such as data transmission from space probes to receiving stations on Earth. To reduce the amount of noise superimposed on the required signal, it is necessary to keep the bandwidth of the receiver as small as possible. One means of accomplishing this is to reduce the bandwidth of the signal, store the data collected by the probe, and then play the data back at a slower rate. Because the time-scaling factor is known, the signal can be reproduced at the receiver.
Example 4.3.4
Suppose we want to determine the Fourier transform of the pulse x(t) = a rect(at/τ), a > 0. The Fourier transform of rect(t/τ) is, by Example 4.2.1,
$$\mathcal{F}\{\mathrm{rect}(t/\tau)\} = \tau\,\mathrm{sinc}\,\frac{\omega\tau}{2\pi}$$
By Equation (4.3.7), the Fourier transform of a rect(at/τ) is
$$\mathcal{F}\{a\,\mathrm{rect}(at/\tau)\} = \tau\,\mathrm{sinc}\,\frac{\omega\tau}{2a\pi}$$
Note that as we increase the value of the parameter a, the rectangular pulse becomes narrower and higher and approaches an impulse as a → ∞. Correspondingly, the main lobe of the Fourier transform becomes wider, and in the limit X(ω) approaches a constant value for all ω. On the other hand, as a approaches zero, the rectangular signal approaches 1 for all t, and the transform approaches a delta signal. (See Example 4.2.7.)

The inverse relationship between time and frequency is encountered in a wide variety of science and engineering applications. In Section 4.5, we will cover one application of this relationship, namely, the uncertainty principle.

4.3.5 Differentiation

If
$$x(t) \leftrightarrow X(\omega)$$
then
$$\frac{dx(t)}{dt} \leftrightarrow j\omega X(\omega) \qquad (4.3.8)$$
The proof of this property is obtained by direct differentiation of both sides of Equation (4.2.5) with respect to t. The differentiation property can be extended to yield
$$\frac{d^n x(t)}{dt^n} \leftrightarrow (j\omega)^n X(\omega) \qquad (4.3.9)$$
We must be careful when using the differentiation property. First of all, the property does not ensure the existence of ℱ{dx(t)/dt}. However, if the transform exists, it is given by jωX(ω). Second, one cannot always infer that X(ω) = ℱ{dx(t)/dt}/(jω).
Since differentiation in the time domain corresponds to multiplication by jω in the frequency domain, one might conclude that integration in the time domain should involve division by jω in the frequency domain. However, this is true only for a certain class of signals. To demonstrate this, consider the signal y(t) = ∫_{−∞}^{t} x(τ)dτ. With Y(ω)

as its transform, we conclude from dy(t)/dt = r(l) and Equation (4.3.8) that
iroY(o) = X(<o). For Y(to) to exist, y(t) should satisfy the conditions listed in Section
4.Z.2.Thisis equivalent toy(co) = 0, i.e., I_- x@)dT = X(0) = 0. In this case,
J

(4.3.r0)
f__x@ar--1x1,;
This equation implies that integration in the time domain attenuates (deemphasizes)
the magnitude of the high-frequency components of the signal. Hence, an integrated
signal is smoother than the original signal. This is why integration is sometimes called
a smoothing operation.
If X(0) + 0, then signal x(t) has a dc component, so that according to Equation
(4.2.13), the transform will contain an impulse. As we will show later (see Example
4.3.10), in this case

f-.rt 1o, er rrX(0)6(to) + tL x(,) (4.3.1 l )

Example 4.3.5
Consider the unit-step function. As we saw in Section 1.6, this function can be written as
$$u(t) = \frac{1}{2} + \frac{1}{2}\,\mathrm{sgn}\,t$$
The first term has πδ(ω) as its transform. Although sgn t does not have a derivative in the regular sense, in Section 1.6 we defined the derivatives of discontinuous signals in terms of the delta function. As a consequence,
$$\frac{d}{dt}\left[\frac{1}{2}\,\mathrm{sgn}\,t\right] = \delta(t)$$
Since sgn t has a zero dc component (it is an odd signal), applying Equation (4.3.10) yields
$$j\omega\,\mathcal{F}\left\{\frac{1}{2}\,\mathrm{sgn}\,t\right\} = 1$$
or
$$\mathcal{F}\left\{\frac{1}{2}\,\mathrm{sgn}\,t\right\} = \frac{1}{j\omega} \qquad (4.3.12)$$
By the linearity of the Fourier transform, we obtain
$$u(t) \leftrightarrow \pi\delta(\omega) + \frac{1}{j\omega} \qquad (4.3.13)$$
Therefore, the Fourier transform of the unit-step function contains an impulse at ω = 0 corresponding to the average value of 1/2. It also has all the high-frequency components of the signum function, reduced by one-half.
4.3.6 Energy of Aperiodic Signals

In Section 3.5.6, we related the total average power of a periodic signal to the average power of each frequency component in the Fourier series of the signal. We did this through Parseval's theorem. Now we would like to find the analogous relationship for aperiodic signals, which are energy signals. Thus, in this section, we show that the energy of aperiodic signals can be computed from their transform X(ω). The energy is defined as
$$E = \int_{-\infty}^{\infty}|x(t)|^2\,dt = \int_{-\infty}^{\infty} x(t)x^*(t)\,dt$$
Using Equation (4.2.5) in this equation results in
$$E = \int_{-\infty}^{\infty} x(t)\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} X^*(\omega)\exp[-j\omega t]\,d\omega\right]dt$$
Interchanging the order of integration gives
$$E = \frac{1}{2\pi}\int_{-\infty}^{\infty} X^*(\omega)\left[\int_{-\infty}^{\infty} x(t)\exp[-j\omega t]\,dt\right]d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2\,d\omega$$
We can therefore write
$$\int_{-\infty}^{\infty}|x(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2\,d\omega \qquad (4.3.14)$$
This relation is Parseval's relation for aperiodic signals. It says that the energy of an aperiodic signal can be computed in the frequency domain by computing the energy per unit frequency, ℰ(ω) = |X(ω)|²/2π, and integrating over all frequencies. For this reason, ℰ(ω) is often referred to as the energy-density spectrum, or, simply, the energy spectrum of the signal, since it measures the frequency distribution of the total energy of x(t). We note that the energy spectrum of a signal depends on the magnitude of the spectrum and not on the phase. This fact implies that there are many signals that may have the same energy spectrum. However, for a given signal, there is only one energy spectrum. The energy in an infinitesimal band of frequencies dω is, then, ℰ(ω)dω, and the energy contained within a band ω₁ ≤ ω ≤ ω₂ is
$$\Delta E = \int_{\omega_1}^{\omega_2}\frac{|X(\omega)|^2}{2\pi}\,d\omega \qquad (4.3.15)$$
That is, |X(ω)|² not only allows us to calculate the total energy of x(t) using Parseval's relation, but also permits us to calculate the energy in any given frequency band. For real-valued signals, |X(ω)|² is an even function, and Equation (4.3.14) can be reduced to
$$E = \frac{1}{\pi}\int_{0}^{\infty}|X(\omega)|^2\,d\omega \qquad (4.3.16)$$


Periodic signals, as defined in Chapter 1, have infinite energy, but finite average power. A function that describes the distribution of the average power of the signal as a function of frequency is called the power-density spectrum, or, simply, the power spectrum. In the following, we develop an expression for the power spectral density of power signals, and in Section 4.3.9 we give an example to demonstrate how to compute the power spectral density of a periodic signal. Let x(t) be a power signal, and define x_τ(t) as
$$x_\tau(t) = \begin{cases} x(t), & |t| \le \tau \\ 0, & \text{otherwise}\end{cases} = x(t)\,\mathrm{rect}(t/2\tau)$$
We also assume that
$$x_\tau(t) \leftrightarrow X_\tau(\omega)$$
The average power in the signal x(t) is
$$P = \lim_{\tau\to\infty}\left[\frac{1}{2\tau}\int_{-\tau}^{\tau}|x(t)|^2\,dt\right] = \lim_{\tau\to\infty}\left[\frac{1}{2\tau}\int_{-\infty}^{\infty}|x_\tau(t)|^2\,dt\right] \qquad (4.3.17)$$
where the last equality follows from the definition of x_τ(t). Using Parseval's relation, we can write Equation (4.3.17) as
$$P = \lim_{\tau\to\infty}\left[\frac{1}{2\tau}\cdot\frac{1}{2\pi}\int_{-\infty}^{\infty}|X_\tau(\omega)|^2\,d\omega\right] = \frac{1}{2\pi}\int_{-\infty}^{\infty}\lim_{\tau\to\infty}\left[\frac{|X_\tau(\omega)|^2}{2\tau}\right]d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\,d\omega \qquad (4.3.18)$$
where
$$S(\omega) = \lim_{\tau\to\infty}\left[\frac{|X_\tau(\omega)|^2}{2\tau}\right] \qquad (4.3.19)$$
S(ω) is referred to as the power-density spectrum, or, simply, power spectrum, of the signal x(t) and represents the distribution, or density, of the power of the signal with frequency ω. As in the case of the energy spectrum, the power spectrum of a signal depends only on the magnitude of the spectrum and not on the phase.

Example 4.3.6
Consider the one-sided exponential signal
$$x(t) = \exp[-t]u(t)$$
From Equation (4.2.9),
$$|X(\omega)|^2 = \frac{1}{1 + \omega^2}$$
The total energy in this signal is equal to 1/2 and can be obtained by using either Equation (1.4.2) or Equation (4.3.14). The energy in the frequency band −4 < ω < 4 is
$$\Delta E = \frac{2}{2\pi}\int_{0}^{4}\frac{1}{1 + \omega^2}\,d\omega = \frac{1}{\pi}\tan^{-1}\omega\Big|_{0}^{4} = 0.422$$
Thus, approximately 84% of the total energy content of the signal lies in the frequency band −4 < ω < 4. Note that the previous result could not be obtained with a knowledge of x(t) alone.
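The same number is easy to reproduce from the frequency-domain side of Parseval's relation. A short numerical check in Python:

```python
import numpy as np

# x(t) = exp(-t) u(t):  |X(w)|^2 = 1/(1 + w^2), total energy = 1/2
w = np.linspace(-4.0, 4.0, 400001)
dw = w[1] - w[0]
band_energy = np.sum(1.0 / (1.0 + w**2)) * dw / (2 * np.pi)

print(band_energy)          # about 0.422 = (1/pi) * arctan(4)
print(band_energy / 0.5)    # about 0.84 of the total energy
```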

4.3.7 Convolution

Convolution plays an important role in the study of LTI systems and their applications. The property is expressed as follows: If
$$x(t) \leftrightarrow X(\omega)$$
and
$$h(t) \leftrightarrow H(\omega)$$
then
$$x(t) * h(t) \leftrightarrow X(\omega)H(\omega) \qquad (4.3.20)$$
The proof of this statement follows from the definition of the convolution integral, namely,
$$\mathcal{F}\{x(t) * h(t)\} = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} x(\tau)h(t - \tau)\,d\tau\right]\exp[-j\omega t]\,dt$$
Interchanging the order of integration and noting that x(τ) does not depend on t, we have
$$\mathcal{F}\{x(t) * h(t)\} = \int_{-\infty}^{\infty} x(\tau)\left[\int_{-\infty}^{\infty} h(t - \tau)\exp[-j\omega t]\,dt\right]d\tau$$
By the shifting property, Equation (4.3.6a), the bracketed term is simply H(ω)exp[−jωτ]. Thus,
$$\mathcal{F}\{x(t) * h(t)\} = \int_{-\infty}^{\infty} x(\tau)\exp[-j\omega\tau]H(\omega)\,d\tau = H(\omega)\int_{-\infty}^{\infty} x(\tau)\exp[-j\omega\tau]\,d\tau = H(\omega)X(\omega)$$
Hence, convolution in the time domain is equivalent to multiplication in the frequency domain, which, in many cases, is convenient and can be done by inspection. The use of the convolution property for LTI systems is demonstrated in Figure 4.3.2. The amplitude and phase spectrum of the output y(t) are related to those of the input x(t) and the impulse response h(t) in the following manner:
Figure 4.3.2 Convolution property of LTI system response: x(t) applied to an LTI system with impulse response h(t) produces y(t) = x(t) * h(t), so that Y(ω) = X(ω)H(ω).

$$|Y(\omega)| = |X(\omega)|\,|H(\omega)|$$
$$\angle Y(\omega) = \angle X(\omega) + \angle H(\omega)$$
Thus, the amplitude spectrum of the input is modified by |H(ω)| to produce the amplitude spectrum of the output, and the phase spectrum of the input is changed by ∠H(ω) to produce the phase spectrum of the output.
The quantity H(ω), the Fourier transform of the system impulse response, is generally referred to as the frequency response of the system.
As we have seen in Section 4.2.2, for H(ω) to exist, h(t) has to satisfy two conditions. The first condition requires that the impulse response be absolutely integrable. This, in turn, implies that the LTI system is stable. Thus, assuming that h(t) is "well behaved," as are essentially all signals of practical significance, we conclude that the frequency response of a stable LTI system exists. If, however, an LTI system is unstable, that is, if
$$\int_{-\infty}^{\infty}|h(t)|\,dt = \infty$$
then the response of the system to complex exponential inputs may be infinite, and the Fourier transform may not exist. Therefore, Fourier analysis is used to study LTI systems with impulse responses that possess Fourier transforms. Other, more general transform techniques are used to examine those unstable systems that do not have finite-value frequency responses. In Chapter 5, we discuss the Laplace transform, which is a generalization of the continuous-time Fourier transform.

Example 4.3.7
In this example, we demonstrate how to use the convolution property of the Fourier transform. Consider an LTI system with impulse response
$$h(t) = \exp[-at]u(t)$$
whose input is the unit-step function u(t). The Fourier transform of the output is
$$Y(\omega) = \mathcal{F}\{u(t)\}\,\mathcal{F}\{\exp[-at]u(t)\} = \left[\pi\delta(\omega) + \frac{1}{j\omega}\right]\frac{1}{a + j\omega}$$
$$= \frac{\pi}{a}\delta(\omega) + \frac{1}{j\omega(a + j\omega)} = \frac{1}{a}\left[\pi\delta(\omega) + \frac{1}{j\omega}\right] - \frac{1}{a}\cdot\frac{1}{a + j\omega}$$
Taking the inverse Fourier transform of both sides results in
$$y(t) = \frac{1}{a}u(t) - \frac{1}{a}\exp[-at]u(t) = \frac{1}{a}\left[1 - \exp[-at]\right]u(t)$$
Example 4.3.8
The Fourier transform of the triangle signal Λ(t/τ) can be obtained by observing that the signal is the convolution of the rectangular pulse (1/√τ)rect(t/τ) with itself; that is,
$$\Lambda(t/\tau) = \frac{1}{\sqrt{\tau}}\,\mathrm{rect}(t/\tau) * \frac{1}{\sqrt{\tau}}\,\mathrm{rect}(t/\tau)$$
From Example 4.2.1 and Equation (4.3.20), it follows that
$$\mathcal{F}\{\Lambda(t/\tau)\} = \left(\mathcal{F}\left\{\frac{1}{\sqrt{\tau}}\,\mathrm{rect}(t/\tau)\right\}\right)^2 = \tau\left(\mathrm{sinc}\,\frac{\omega\tau}{2\pi}\right)^2$$

Example 4.3.9
An LTI system has an impulse response
$$h(t) = \exp[-at]u(t)$$
and output
$$y(t) = \left[\exp[-bt] - \exp[-ct]\right]u(t)$$
Using the convolution property, we find that the transform of the input is
$$X(\omega) = \frac{Y(\omega)}{H(\omega)} = \frac{(c - b)(j\omega + a)}{(j\omega + b)(j\omega + c)} = \frac{D}{j\omega + b} + \frac{E}{j\omega + c}$$
where
$$D = a - b \qquad\text{and}\qquad E = c - a$$
Therefore,
$$x(t) = \left[(a - b)\exp[-bt] + (c - a)\exp[-ct]\right]u(t)$$

Example 4.3.10
In this example, we use the relation
$$\int_{-\infty}^{t} x(\tau)\,d\tau = x(t) * u(t)$$
and the transform of u(t) to prove the integration property, Equation (4.3.11). From Equation (4.3.13) and the convolution property, we have
$$\mathcal{F}\left\{\int_{-\infty}^{t} x(\tau)\,d\tau\right\} = \mathcal{F}\{x(t) * u(t)\} = X(\omega)\left[\pi\delta(\omega) + \frac{1}{j\omega}\right] = \pi X(0)\delta(\omega) + \frac{X(\omega)}{j\omega}$$
The last equality follows from the sampling property of the delta function.

Another important relation follows as a consequence of using the convolution property to represent the spectrum of the output of an LTI system; that is,
$$Y(\omega) = X(\omega)H(\omega)$$
We then have
$$|Y(\omega)|^2 = |X(\omega)H(\omega)|^2 = |X(\omega)|^2|H(\omega)|^2 \qquad (4.3.21)$$
This equation shows that the energy-spectrum density of the response of an LTI system is the product of the energy-spectrum density of the input signal and the square of the magnitude of the system function. The phase characteristic of the system does not affect the energy-spectrum density of the output, in spite of the fact that, in general, H(ω) is a complex quantity.

4.3.8 Duality

We sometimes have to find the Fourier transform of a time signal that has a form similar to an entry in the transform column in the table of Fourier transforms. We can find the desired transform by using the table backwards. To accomplish that, we write the inversion formula in the form
$$\int_{-\infty}^{\infty} X(\omega)\exp[j\omega t]\,d\omega = 2\pi x(t)$$
Notice that there is a symmetry between this equation and Equation (4.2.6): The two equations are identical except for a sign change in the exponential, a factor of 2π, and an interchange of the variables involved. This type of symmetry leads to the duality property of the Fourier transform. This property states that if x(t) has a transform X(ω), then
$$X(t) \leftrightarrow 2\pi x(-\omega) \qquad (4.3.22)$$
We prove Equation (4.3.22) by replacing t with −t in Equation (4.2.5) to get
$$2\pi x(-t) = \int_{-\infty}^{\infty} X(\omega)\exp[-j\omega t]\,d\omega = \int_{-\infty}^{\infty} X(\lambda)\exp[-j\lambda t]\,d\lambda$$
since ω is just a dummy variable of integration. Now replacing t by ω and λ by t gives Equation (4.3.22).

Example 4.3.11
Consider the signal
$$x(t) = \mathrm{Sa}\,\frac{\omega_B t}{2} = \mathrm{sinc}\,\frac{\omega_B t}{2\pi}$$
From Equation (4.2.6),
$$\mathcal{F}\left\{\mathrm{Sa}\,\frac{\omega_B t}{2}\right\} = \int_{-\infty}^{\infty}\mathrm{Sa}\,\frac{\omega_B t}{2}\exp[-j\omega t]\,dt$$
This is a very difficult integral to evaluate directly. However, we found in Example 4.2.1 that
$$\mathrm{rect}(t/\tau) \leftrightarrow \tau\,\mathrm{Sa}\,\frac{\omega\tau}{2}$$
Then, according to Equation (4.3.22),
$$\mathcal{F}\left\{\mathrm{Sa}\,\frac{\omega_B t}{2}\right\} = \frac{2\pi}{\omega_B}\,\mathrm{rect}(-\omega/\omega_B) = \frac{2\pi}{\omega_B}\,\mathrm{rect}(\omega/\omega_B)$$
because the rectangular pulse is an even signal. Note that the transform X(ω) is zero outside the range −ω_B/2 ≤ ω ≤ ω_B/2, but that the signal x(t) is not time limited. Signals with Fourier transforms that vanish outside a given frequency band are called band-limited signals (signals with no spectral content above a certain maximum frequency, in this case, ω_B/2). It can be shown that time limiting and frequency limiting are mutually exclusive phenomena; i.e., a time-limited signal x(t) always has a Fourier transform that is not band limited. On the other hand, if X(ω) is band limited, then the corresponding time signal is never time limited.

Example 4.3.12
Differentiating Equation (4.2.6) n times with respect to ω, we readily obtain
$$(-jt)^n x(t) \leftrightarrow \frac{d^n X(\omega)}{d\omega^n} \qquad (4.3.23)$$
that is, multiplying a time signal by t is equivalent to differentiating the frequency spectrum, which is the dual of differentiation in the time domain.

The previous two examples demonstrate that, in addition to its consequences in reducing the complexity of the calculation involved in determining some Fourier transforms, duality also implies that every property of the Fourier transform has a dual.

4.3.9 Modulation

If
$$x(t) \leftrightarrow X(\omega)$$
$$m(t) \leftrightarrow M(\omega)$$
then
$$x(t)m(t) \leftrightarrow \frac{1}{2\pi}\left[X(\omega) * M(\omega)\right] \qquad (4.3.24)$$
Convolution in the frequency domain is carried out exactly like convolution in the time domain. That is,
$$X(\omega) * M(\omega) = \int_{-\infty}^{\infty} X(\sigma)M(\omega - \sigma)\,d\sigma = \int_{-\infty}^{\infty} M(\sigma)X(\omega - \sigma)\,d\sigma$$
This property is a direct result of combining two properties, the duality and the convolution properties, and it states that multiplication in the time domain corresponds to convolution in the frequency domain. Multiplication of the desired signal x(t) by m(t) is equivalent to altering or modulating the amplitude of x(t) according to the variations in m(t). This is the reason that the multiplication of two signals is often referred to as modulation. The symmetrical nature of the Fourier transform is clearly reflected in Equations (4.3.20) and (4.3.24): Convolution in the time domain is equivalent to multiplication in the frequency domain, and multiplication in the time domain is equivalent to convolution in the frequency domain. The importance of this property is that the spectrum of a signal such as x(t)cos ω₀t can be easily computed. These types of signals arise in many communications systems, as we shall see later. Since
$$\cos\omega_0 t = \frac{1}{2}\left[\exp[j\omega_0 t] + \exp[-j\omega_0 t]\right]$$
it follows that
$$\mathcal{F}\{x(t)\cos\omega_0 t\} = \frac{1}{2}\left[X(\omega - \omega_0) + X(\omega + \omega_0)\right]$$
This result constitutes the fundamental property of modulation and is useful in the spectral analysis of signals obtained from multipliers and modulators.
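The frequency shift is easy to visualize numerically: multiplying a low-pass pulse by cos ω₀t splits its spectrum into two half-amplitude copies centered at ±ω₀. A Python sketch (the Gaussian-shaped pulse and the 100 Hz carrier are arbitrary choices used only for illustration):

```python
import numpy as np

fs = 1000.0                                      # sample rate, Hz
t = np.arange(-1.0, 1.0, 1 / fs)
x = np.exp(-50 * t**2)                           # smooth low-pass pulse
f0 = 100.0                                       # carrier frequency, Hz
xm = x * np.cos(2 * np.pi * f0 * t)

f = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
X = np.fft.fftshift(np.abs(np.fft.fft(x)))
Xm = np.fft.fftshift(np.abs(np.fft.fft(xm)))

print("peak of |X(f)|  at", f[np.argmax(X)], "Hz")                   # 0 Hz
print("peaks of |Xm(f)| at", sorted(f[np.argsort(Xm)[-2:]]), "Hz")   # about -100 and +100 Hz
```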

Example 4.3.13
Consider the signal
$$x_s(t) = x(t)p(t)$$
where p(t) is the periodic impulse train with equal-strength impulses, as shown in Figure 4.3.3. Analytically, p(t) can be written as
$$p(t) = \sum_{n=-\infty}^{\infty}\delta(t - nT)$$

Figure 4.3.3 Periodic impulse train used in Example 4.3.13.

Using the sampling property of the delta function, we obtain
$$x_s(t) = \sum_{n=-\infty}^{\infty} x(nT)\delta(t - nT)$$
That is, x_s(t) is a train of impulses spaced T seconds apart, the strength of the impulses being equal to the sample values of x(t). Recall from Example 4.2.10 that the Fourier transform of the periodic impulse train p(t) is itself a periodic impulse train; specifically,
$$P(\omega) = \frac{2\pi}{T}\sum_{n=-\infty}^{\infty}\delta\left(\omega - \frac{2\pi n}{T}\right)$$
Consequently, from the modulation property,
$$X_s(\omega) = \frac{1}{2\pi}\left[X(\omega) * P(\omega)\right] = \frac{1}{T}\sum_{n=-\infty}^{\infty} X\left(\omega - \frac{2\pi n}{T}\right)$$
That is, X_s(ω) consists of periodically repeated replicas of X(ω).

Example 4.3.14
Consider the system depicted in Figure 4.3.4, where
$$X(\omega) = \mathrm{rect}(\omega/\omega_B)$$
$$p(t) = \sum_{n=-\infty}^{\infty}\delta(t - nT)$$
$$H(\omega) = T\,\mathrm{rect}(\omega/3\omega_B)$$

Figure 4.3.4 System for Example 4.3.14: x(t) is multiplied by p(t), and the product is applied to the filter H(ω) to produce y(t).

The Fourier transform of x(t) is the rectangular pulse with width ω_B, and the Fourier transform of the product x(t)p(t) consists of the periodically repeated replicas of X(ω), as shown in Figure 4.3.5. Similarly, the Fourier transform of h(t) is a rectangular pulse with width 3ω_B. According to the convolution property, the transform of the output of the system is
$$Y(\omega) = X_s(\omega)H(\omega) = X(\omega)$$
or
$$y(t) = x(t)$$
188 The Fourier Translorm Chapler 4

Figure 4.3.5 Spectra associated with signals for Example 4.3.14.

Note that since the system h(t) blocked (i.e., filtered out) all the undesired components of x_s(t) in order to obtain a scaled version of x(t), we refer to such a system as a filter. Filters are important components of any communication or control system. In Chapter 10, we study the design of both analog and digital filters.

Example 4.3.15
In this example, we use the modulation property to show that the power spectrum of the periodic signal x(t) with period T is

S(ω) = 2π Σ_{n=-∞}^{∞} |c_n|² δ(ω - nω_0)

where c_n are the Fourier coefficients of x(t) and

ω_0 = 2π/T

We begin by defining the truncated signal x_τ(t) as the product x(t) rect(t/2τ). Using the modulation property, we find that

X_τ(ω) = (1/2π)[2τ Sa(ωτ) * X(ω)]

       = (τ/π) ∫_{-∞}^{∞} Sa(ρτ) X(ω - ρ) dρ

Substituting Equation (4.2.15) for X(ω) and forming the function |X_τ(ω)|²/2τ, we have

|X_τ(ω)|²/2τ = Σ_{n=-∞}^{∞} Σ_{m=-∞}^{∞} 2τ c_n c_m* Sa[(ω - nω_0)τ] Sa[(ω - mω_0)τ]

The power-density spectrum of the periodic signal x(t) is obtained by taking the limit of the last expression as τ → ∞. It has been observed earlier that as τ → ∞, the transform of the rectangular signal approaches δ(ω); therefore, we anticipate that the two sampling functions in the previous expression approach δ(ω - kω_0), where k = m and n. Also, observing that

δ(ω - nω_0)δ(ω - mω_0) = δ(ω - nω_0) for n = m, and 0 otherwise

we calculate that the power-density spectrum of the periodic signal is

S(ω) = lim_{τ→∞} |X_τ(ω)|²/2τ

     = 2π Σ_{n=-∞}^{∞} |c_n|² δ(ω - nω_0)

For convenience, a summary of the foregoing properties of the Fourier transform is


given in Table 4.2. These properties are used repeatedly in this chapter, and they
should be thoroughly understood.
TABLE 4.2
Some Selected Properties of the Fourier Transform

 1. Linearity               Σ_{n=1}^{N} a_n x_n(t)      Σ_{n=1}^{N} a_n X_n(ω)          (4.3.1)
 2. Complex conjugation     x*(t)                       X*(-ω)                          (4.2.6)
 3. Time shift              x(t - t_0)                  X(ω) exp[-jωt_0]                (4.3.6a)
 4. Frequency shift         x(t) exp[jω_0 t]            X(ω - ω_0)                      (4.3.6b)
 5. Time scaling            x(at)                       (1/|a|) X(ω/a)                  (4.3.7)
 6. Differentiation         d^n x(t)/dt^n               (jω)^n X(ω)                     (4.3.9)
 7. Integration             ∫_{-∞}^{t} x(τ) dτ          X(ω)/(jω) + πX(0)δ(ω)           (4.3.11)
 8. Parseval's relation     ∫_{-∞}^{∞} |x(t)|² dt = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω        (4.3.14)
 9. Convolution             x(t) * h(t)                 X(ω)H(ω)                        (4.3.20)
10. Duality                 X(t)                        2π x(-ω)                        (4.3.22)
11. Multiplication by t     (-jt)^n x(t)                d^n X(ω)/dω^n                   (4.3.23)
12. Modulation              x(t)m(t)                    (1/2π) X(ω) * M(ω)              (4.3.24)

4.4 APPLICATIONS OF THE FOURIER TRANSFORM

The continuous-time Fourier transform and its discrete counterpart, the discrete-time Fourier transform, which we study in detail in Chapter 7, are tools that find extensive applications in communication systems, signal processing, control systems, and many other varieties of physical and engineering disciplines. The important processes of amplitude modulation and frequency multiplexing provide examples of the use of Fourier-transform theory in the analysis and design of communication systems. The sampling theorem is considered to have the most profound effect on information transmission and signal processing, especially in the digital area. The design of filters and compensators that are employed in control systems cannot be done without the help of the Fourier transform. In this section, we discuss some of these applications in more detail.

4.4.1 Amplitude Modulation


The goal of all communication systems is to convey information from one point to another. Prior to sending the information signal through the transmission channel, the signal is converted to a useful form through what is known as modulation. Among the many reasons for employing this type of conversion are the following:
1. to transmit information efficiently
2. to overcome hardware limitations
3. to reduce noise and interference
4. to utilize the electromagnetic spectrum efficiently.
Consider the signal multiplier shown in Figure 4.4.1. The output is the product of the information-carrying signal x(t) and the signal m(t), which is referred to as the carrier signal. This scheme is known as amplitude modulation, which has many forms, depending on m(t). We concentrate only on the case when m(t) = cos ω_0 t, which represents a practical form of modulation and is referred to as double-sideband (DSB) amplitude modulation. We will now examine the spectrum of the output (the modulated signal) in terms of the spectrum of both x(t) and m(t).
The output of the multiplier is

y(t) = x(t) cos ω_0 t

Since y(t) is the product of two time signals, convolution in the frequency domain can be used to obtain its spectrum. The result is

Y(ω) = (1/2π) X(ω) * π[δ(ω - ω_0) + δ(ω + ω_0)]

     = (1/2)[X(ω - ω_0) + X(ω + ω_0)]

Figure 4.4.1 Signal multiplier.

Figure 4.4.2 Magnitude spectra of information signal and modulated signal.

The magnitude spectra of x(t) and y(t) are illustrated in Figure 4.4.2. The part of the spectrum of Y(ω) centered at +ω_0 is the result of convolving X(ω) with δ(ω - ω_0), and the part centered at -ω_0 is the result of convolving X(ω) with δ(ω + ω_0). This process of shifting the spectrum of the signal by ω_0 is necessary because low-frequency (baseband) information signals cannot be propagated easily by radio waves.
The process of extracting the information signal from the modulated signal is referred to as demodulation. In effect, demodulation shifts back the message spectrum to its original low-frequency location. Synchronous demodulation is one of several techniques used to perform amplitude demodulation. A synchronous demodulator consists of a signal multiplier, with the multiplier inputs being the modulated signal and cos ω_0 t. The output of the multiplier is

z(t) = y(t) cos ω_0 t

Hence,

Z(ω) = (1/2)[Y(ω - ω_0) + Y(ω + ω_0)]

     = (1/2)X(ω) + (1/4)X(ω - 2ω_0) + (1/4)X(ω + 2ω_0)

The result is shown in Figure 4.4.3(a). To extract the original information signal x(t), the signal z(t) is passed through the system with frequency response H(ω) shown in Figure 4.4.3(b). Such a system is referred to as a low-pass filter, since it passes only low-frequency components of the input signal and filters out all frequencies higher than ω_c, the cutoff frequency of the filter. The output of the low-pass filter is illustrated in Figure 4.4.3(c). Note that if |H(ω)| = 1, |ω| < ω_c, and there were no transmission losses

Figure 4.4.3 Demodulation process: (a) magnitude spectrum of z(t); (b) the low-pass-filter frequency response; and (c) the extracted information spectrum.

involved, then the energy of the final signal is one-fourth that of the original signal, because the total demodulated signal contains energy located at ω = 2ω_0 that is eventually discarded by the receiver.
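The following brief sketch runs through this modulation and synchronous-demodulation chain numerically; the message pulse, carrier frequency, and low-pass cutoff are arbitrary illustrative choices.

import numpy as np

# DSB amplitude modulation followed by synchronous demodulation.
fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sinc(40.0 * (t - 0.5))                 # low-pass message; np.sinc(u) = sin(pi u)/(pi u)
f0 = 1000.0                                   # carrier frequency, Hz
y = x * np.cos(2 * np.pi * f0 * t)            # modulated signal
z = y * np.cos(2 * np.pi * f0 * t)            # demodulator output: x/2 + (x/2) cos(2 w0 t)

# Ideal low-pass filter applied in the frequency domain (cutoff well below 2*f0).
Z = np.fft.fft(z)
f = np.fft.fftfreq(len(t), 1.0 / fs)
Z[np.abs(f) > 200.0] = 0.0                    # keep only |f| <= 200 Hz
x_hat = 2.0 * np.real(np.fft.ifft(Z))         # factor 2 undoes the 1/2 from demodulation

print("max reconstruction error:", np.max(np.abs(x_hat - x)))   # small (spectral leakage only)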

4.4.2 Multiplexing

A very useful technique for simultaneously transmitting several information signals involves the assignment of a portion of the frequency spectrum to each signal. This technique is known as frequency-division multiplexing (FDM), and we encounter it almost daily, often without giving it much thought. Larger cities usually have several AM radio and television stations, fire engines, police cruisers, taxicabs, mobile telephones, citizen band radios, and many other sources of radio waves. All these sources are frequency multiplexed into the radio spectrum by means of assigning distinct frequency bands to each signal. FDM is very similar to amplitude modulation. Consider three

Figure 4.4.4 Magnitude spectra for x_1(t), x_2(t), and x_3(t) for the FDM system.

band-limited signals with Fourier transforms as shown in Figure 4.4.4. (Extension to N signals follows in a straightforward manner.)
If we modulate x_1(t) with cos ω_1 t, x_2(t) with cos ω_2 t, and x_3(t) with cos ω_3 t, then, summing the three modulated signals, we obtain

y(t) = x_1(t) cos ω_1 t + x_2(t) cos ω_2 t + x_3(t) cos ω_3 t

The frequency spectrum of y(t) is

Y(ω) = (1/2)[X_1(ω - ω_1) + X_1(ω + ω_1)]

     + (1/2)[X_2(ω - ω_2) + X_2(ω + ω_2)]

     + (1/2)[X_3(ω - ω_3) + X_3(ω + ω_3)]

which has a spectrum similar to that in Figure 4.4.5. It is important here to make sure that the spectra do not overlap, that is, that ω_1 + W_1 < ω_2 - W_2 and ω_2 + W_2 < ω_3 - W_3. At the receiving end, some operations must be performed to recover the individual spectra.
Because of the form of |Y(ω)|, in order to capture the spectrum of x_1(t), we would need a system whose frequency response is equal to 1 for ω_1 - W_1 ≤ ω ≤ ω_1 + W_1 and zero otherwise. Such a system is called a band-pass filter, since it passes only frequencies in the band ω_1 - W_1 ≤ ω ≤ ω_1 + W_1 and suppresses all other frequencies.

Figure 4.4.5 Magnitude spectrum of y(t) for the FDM system.


Figure 4.4.6 Frequency-division multiplexing (FDM) system. BPF = band-pass filter; LPF = low-pass filter.

The output of this filter is then processed as in the case of synchronous amplitude demodulation. A similar procedure can be used to extract x_2(t) or x_3(t). The overall system of modulation, multiplexing, transmission, demultiplexing, and demodulation is illustrated in Figure 4.4.6.

4.4.3 The Sampling Theorem

Of all the theorems and techniques pertaining to the Fourier transform, the one that has had the most impact on information transmission and processing is the sampling theorem. For a low-pass signal x(t) which is band limited such that it has no frequency components above ω_B rad/s, the sampling theorem says that x(t) is uniquely determined by its values at equally spaced points in time T seconds apart, provided that T < π/ω_B. The sampling theorem allows us to completely reconstruct a band-limited signal from instantaneous samples taken at a rate ω_s = 2π/T, provided that ω_s is at least as large as 2ω_B, which is twice the highest frequency present in the band-limited signal x(t). The minimum sampling rate 2ω_B is known as the Nyquist rate.
The process of obtaining a set of samples from a continuous function of time x(t) is referred to as sampling. The samples can be considered to be obtained by passing x(t) through a sampler, which is a switch that closes and opens instantaneously at sampling instants nT. When the switch is closed, we obtain a sample x(nT). At all other times, the output of the sampler is zero. This ideal sampler is a fictitious device, since, in practice, it is impossible to obtain a switch that closes and opens instantaneously. We denote the output of the sampler by x_s(t).
In order to arrive at the sampling theorem, we model the sampler output as

x_s(t) = x(t)p(t)    (4.4.1)

where

p(t) = Σ_{n=-∞}^{∞} δ(t - nT)    (4.4.2)

Figure 4.4.7 The ideal sampling process.

is the periodic impulse train. We provide a justification of this model later, in Chapter 8, where we discuss the sampling of continuous-time signals in greater detail. As can be seen from the equation, the sampled signal is considered to be the product (modulation) of the continuous-time signal x(t) and the impulse train p(t) and, hence, is usually referred to as the impulse modulation model for the sampling operation. This is illustrated in Figure 4.4.7.
From Example 4.2.10, it follows that

P(ω) = (2π/T) Σ_{n=-∞}^{∞} δ(ω - 2πn/T) = (2π/T) Σ_{n=-∞}^{∞} δ(ω - nω_s)    (4.4.3)

and hence,

X_s(ω) = (1/2π) X(ω) * P(ω)

       = (1/2π) ∫_{-∞}^{∞} X(σ) P(ω - σ) dσ

       = (1/T) Σ_{n=-∞}^{∞} X(ω - nω_s)    (4.4.4)

The signals x(t), p(t), and x_s(t) are depicted together with their magnitude spectra in Figure 4.4.8, with x(t) being a band-limited signal, that is, X(ω) is zero for |ω| > ω_B. As can be seen, x_s(t), which is the sampled version of the continuous-time signal x(t), consists of impulses spaced T seconds apart, each having an area equal to the sampled value of x(t) at the respective sampling instant. The spectrum X_s(ω) of the sampled signal is obtained as the convolution of the spectrum X(ω) with the impulse train P(ω) and, hence, consists of the periodic repetition at intervals ω_s of X(ω), as shown in the figure. For the case shown, ω_s is large enough so that the different components of X_s(ω) do not overlap. It is clear that if we pass the sampled signal x_s(t) through an ideal low-pass filter which passes only those frequencies contained in x(t), the spectrum of the filter output will be identical to X(ω), except for an amplitude scale factor of 1/T introduced by the sampling operation. Thus, to recover x(t), we pass x_s(t) through a filter with frequency response

H(ω) = { T,   |ω| ≤ ω_B
       { 0,   otherwise
     = T rect(ω/2ω_B)    (4.4.5)

Figure 4.4.8 Time-domain signals and their respective magnitude spectra.

This filter is called an ideal reconstruction filter.

As the sampling frequency is reduced, the different components in the spectrum of X_s(ω) start coming closer together and eventually will overlap. As shown in Figure 4.4.9(a), if ω_s - ω_B > ω_B, the components do not overlap, and the signal x(t) can be recovered from x_s(t) as described previously. If ω_s - ω_B = ω_B, the components just touch each other, as shown in Figure 4.4.9(b). If ω_s - ω_B < ω_B, the components will overlap, as shown in Figure 4.4.9(c). Then the resulting spectrum obtained by adding the overlapping components together no longer resembles X(ω) (Figure 4.4.9(d)), and x(t) can no longer be recovered from the sampled signal. Thus, to recover x(t) from the sampled signal, it is clear that the sampling rate should be such that

ω_s - ω_B > ω_B

Hence, signal x(t) can be recovered from its samples only if

ω_s > 2ω_B    (4.4.6)

This is the sampling theorem (usually called the Nyquist theorem) that we referred to earlier. The minimum permissible value of ω_s is called the Nyquist rate.
The maximum time spacing between samples that can be used is

T = π/ω_B    (4.4.7)

Figure 4.4.9 Effect of reducing the sampling frequency on X_s(ω).

If T does not satisfy Equation (4.4.7), the different components of X_s(ω) overlap, and we will not be able to recover x(t) exactly. This is referred to as aliasing. If x(t) is not band limited, there will always be aliasing, irrespective of the chosen sampling rate.

Example 4.4.1
The spectrum of a signal (for example, a speech signal) is essentially zero for all frequencies above 5 kHz. The Nyquist sampling rate for such a signal is

ω_s = 2ω_B = 2(2π × 5 × 10³)

    = 2π × 10⁴ rad/s

The sample spacing T is equal to 2π/ω_s = 0.1 ms.

Example 4.4.2
Instead of sampling the previous signal at the Nyquist rate of 10 kHz, let us sample it at a rate of 8 kHz. That is,

ω_s = 2π × 8 × 10³ rad/s

The sampling interval T is equal to 2π/ω_s = 0.125 ms. If we filter the sampled signal x_s(t) using a low-pass filter with a cutoff frequency of 4 kHz, the output spectrum contains high-frequency components of x(t) superimposed on the low-frequency components, i.e., we have aliasing and x(t) cannot be recovered.
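The effect described in these two examples is easy to reproduce numerically. In the following sketch, a 4.5-kHz tone (an arbitrary choice inside the 5-kHz band) is sampled at 10 kHz and then at 8 kHz; at the lower rate the tone appears at the alias frequency 8000 - 4500 = 3500 Hz.

import numpy as np

def dominant_frequency(xn, fs):
    # Positive frequency (Hz) of the largest FFT bin of the samples xn.
    X = np.fft.rfft(xn)
    f = np.fft.rfftfreq(len(xn), 1.0 / fs)
    return f[np.argmax(np.abs(X))]

f_tone = 4500.0                               # tone inside the 5-kHz band of Example 4.4.1
for fs in (10000.0, 8000.0):                  # Nyquist-rate vs. sub-Nyquist sampling
    n = np.arange(int(fs))                    # one second of samples
    xn = np.cos(2 * np.pi * f_tone * n / fs)
    print("fs =", fs, "Hz -> apparent tone at", dominant_frequency(xn, fs), "Hz")

# Expected output: 4500 Hz at fs = 10 kHz, but 3500 Hz (= 8000 - 4500) at fs = 8 kHz.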

In theory, if a signal x(t) is not band limited, we can eliminate aliasing by low-pass filtering the signal before sampling it. Clearly, we will need to use a sampling frequency which is twice the bandwidth of the filter, ω_c. In practice, however, aliasing cannot be completely eliminated because, first, we cannot build a low-pass filter that cuts off all frequency components above a certain frequency and, second, in many applications, x(t) cannot be low-pass filtered without removing information from it. In such cases, we can reduce aliasing effects by sampling the signal at a high enough frequency that aliased components do not seriously distort the reconstructed signal. In some cases, the sampling frequency can be as large as 8 or 10 times the signal bandwidth.

Example 4.4.3
An analog bandpass signal x_a(t), which is bandlimited to the range 800 < f < 1200 Hz, is input to the system in Figure 4.4.10(a), where H(ω) is an ideal low-pass filter with cutoff frequency of 200 Hz. Assume that the spectrum of x_a(t) has a triangular shape symmetric about the center frequency, as shown in Figure 4.4.10(b).
Figure 4.4.10(c) shows X_m(ω), the spectrum of the modulated signal x_m(t), while X_b(ω), that of the output of the low-pass filter (baseband signal) x_b(t), is shown in Figure 4.4.10(d). If we now sample x_b(t) at intervals T with T < 1/400 secs, as discussed earlier, the resulting spectrum X_s(ω) will be the aliased version of X_b(ω) and will thus consist of a set of triangular shaped pulses centered at frequencies ω = 2πk/T, k = 0, ±1, ±2, etc. If one of these pulses is centered at 2π × 1000 rad/s, we can clearly recover X_a(ω) and hence x_a(t) by passing the sampled signal through an ideal bandpass filter with center frequency 2000π rad/s and bandwidth of 800π rad/s. Figure 4.4.10(e) shows the spectrum of the sampled signal for T = 1 msec.
In general, we can recover x_a(t) from the sampled signal by using a band-pass filter if 2πk/T = ω_0, that is, if 1/T is an integer submultiple of the center frequency in Hz.

The fact that a band-limited signal that has been sampled at the Nyquist rate can be recovered from its samples can also be illustrated in the time domain using the concept of interpolation. From our previous discussion, we have seen that, since x(t) can be obtained by passing x_s(t) through the ideal reconstruction filter of Equation (4.4.5), we can write

X(ω) = H(ω)X_s(ω)    (4.4.8)

The impulse response corresponding to H(ω) is

Figure 4.4.10 Spectra of the signals of Example 4.4.3.

h(t) = (Tω_B/π) Sa(ω_B t)

Taking the inverse Fourier transform of both sides of Equation (4.4.8), we obtain

x(t) = x_s(t) * h(t)

     = [Σ_{n=-∞}^{∞} x(nT) δ(t - nT)] * (Tω_B/π) Sa(ω_B t)

     = (Tω_B/π) Σ_{n=-∞}^{∞} x(nT) Sa(ω_B(t - nT))    (4.4.9)

Equation (4.4.9) can be interpreted as using interpolation to reconstruct x(t) from its samples x(nT). The functions Sa[ω_B(t - nT)] are called interpolating, or sampling, functions. Interpolation using sampling functions, as in Equation (4.4.9), is commonly referred to as band-limited interpolation.
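Equation (4.4.9) can be tried out directly. The following sketch reconstructs an arbitrary band-limited test signal on a fine grid from its samples x(nT), taken with the spacing T = π/ω_B allowed by Equation (4.4.7), so that Tω_B/π = 1.

import numpy as np

def Sa(u):
    # Sampling function Sa(u) = sin(u)/u with Sa(0) = 1; np.sinc(v) = sin(pi v)/(pi v).
    return np.sinc(u / np.pi)

wB = 10.0                                     # reconstruction-filter cutoff, rad/s
x = lambda t: Sa(0.5 * wB * t) + 0.5 * Sa(0.8 * wB * (t - 1.0))   # bandwidth 0.8*wB < wB

T = np.pi / wB                                # sample spacing, Equation (4.4.7)
n = np.arange(-400, 401)                      # enough samples for the slowly decaying tails
samples = x(n * T)

t = np.linspace(-2.0, 3.0, 501)               # reconstruction grid
x_rec = np.array([np.sum(samples * Sa(wB * (ti - n * T))) for ti in t])

print("max interpolation error:", np.max(np.abs(x_rec - x(t))))   # small; due only to truncating the sum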

4.4.4 Signal Filtering

Filtering is the process by which the essential and useful part of a signal is separated from extraneous and undesirable components that are generally referred to as noise. The term "noise" used here refers to either the undesired part of the signal, as in the case of amplitude modulation, or interference signals generated by the electronic devices themselves.
The idea of filtering using LTI systems is based on the convolution property of the Fourier transform discussed in Section 4.3, namely, that for LTI systems, the Fourier transform of the output is the product of the Fourier transform of the input and the frequency response of the system. An ideal frequency-selective filter is a filter that passes certain frequencies without any change and stops the rest. The range of frequencies that pass through is called the passband of the filter, whereas the range of frequencies that do not pass is referred to as the stop band. In the ideal case, |H(ω)| = 1 in a passband, while |H(ω)| = 0 in a stop band. Frequency-selective filters are classified according to the functions they perform. The most common types of filters are the following:
1. Low-pass filters are those characterized by a passband that extends from ω = 0 to ω = ω_c, where ω_c is called the cutoff frequency of the filter. (See Figure 4.4.11(a).)
2. High-pass filters are characterized by a stop band that extends from ω = 0 to ω = ω_c and a passband that extends from ω = ω_c to infinity. (See Figure 4.4.11(b).)
3. Band-pass filters are characterized by a passband that extends from ω = ω_1 to ω = ω_2, and all other frequencies are stopped. (See Figure 4.4.11(c).)
4. Band-stop filters stop frequencies extending from ω_1 to ω_2 and pass all other frequencies. (See Figure 4.4.11(d).)
Figure 4.4.11 Most common classes of filters.


As is usual with spectra of real-valued signals, in Figure 4.4.11 we have shown |H(ω)| only for values of ω > 0, since |H(ω)| = |H(-ω)| for such signals.

Example 4.4.4
Consider the ideal low-pass filter with frequency response

H_lp(ω) = { 1,   |ω| < ω_c
          { 0,   otherwise

The impulse response of this filter corresponds to the inverse Fourier transform of the frequency response H_lp(ω) and is given by

h_lp(t) = sin(ω_c t)/(πt)

Clearly, this filter is noncausal and, hence, is not physically realizable.

The filters described so far are referred to as ideal filters because they pass one set of frequencies without any change and completely stop others. Since it is impossible to realize filters with characteristics like those shown in Figure 4.4.11, with abrupt changes from passband to stop band and vice versa, most of the filters we deal with in practice have some transition band, as shown in Figure 4.4.12.


Example 4.4.6
Consider the following RC circuit:

[RC circuit: input x(t), output y(t) taken as the voltage across the capacitor]

The impulse response of this circuit is (see Problem 2.17)

h(t) = (1/RC) exp[-t/RC] u(t)

and the frequency response is

H(ω) = 1/(1 + jωRC)

The amplitude spectrum is given by

|H(ω)|² = 1/(1 + (ωRC)²)

and is shown in Figure 4.4.13. It is clear that the RC circuit with the output taken as the voltage across the capacitor performs as a low-pass filter. The frequency ω_c at which the magnitude spectrum |H(ω_c)| = |H(0)|/√2 (3 dB below H(0)) is called the band edge, or the 3-dB cutoff frequency of the filter. (The transition between the passband and the stop band occurs near ω_c.) Setting |H(ω_c)| = 1/√2, we obtain

ω_c = 1/RC

Figure 4.4.13 Magnitude spectrum of a low-pass RC circuit.

If we interchange the positions of the capacitor and the resistor, we obtain a system with impulse response (see Problem 2.18)

h(t) = δ(t) - (1/RC) exp[-t/RC] u(t)

and frequency response

H(ω) = jωRC/(1 + jωRC)

The amplitude spectrum is given by

|H(ω)|² = (ωRC)²/(1 + (ωRC)²)

and is shown in Figure 4.4.14. It is clear that the RC circuit with output taken as the voltage across the resistor performs as a high-pass filter. Again, by setting |H(ω_c)| = 1/√2, the cutoff frequency of this high-pass filter can be determined as

ω_c = 1/RC

Figure 4.4.14 Magnitude spectrum of a high-pass RC circuit.
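A quick numerical look at these two frequency responses (with an arbitrary RC value) confirms the complementary low-pass and high-pass behavior and the common 3-dB frequency ω_c = 1/RC.

import numpy as np

RC = 1.0e-3                                    # time constant, s (arbitrary)
w = np.logspace(1, 6, 2001)                    # rad/s

H_lp = 1.0 / (1.0 + 1j * w * RC)               # output taken across the capacitor
H_hp = (1j * w * RC) / (1.0 + 1j * w * RC)     # output taken across the resistor

wc = 1.0 / RC
print("|H_lp(wc)| =", abs(1.0 / (1.0 + 1j * wc * RC)))              # 1/sqrt(2) ~ 0.707
print("|H_hp(wc)| =", abs((1j * wc * RC) / (1.0 + 1j * wc * RC)))   # 1/sqrt(2) ~ 0.707
print("|H_lp| at 100*wc :", np.abs(H_lp[np.argmin(np.abs(w - 100 * wc))]))    # ~0.01, rolls off
print("|H_hp| at 0.01*wc:", np.abs(H_hp[np.argmin(np.abs(w - 0.01 * wc))]))   # ~0.01, rolls off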

Filters can be classified as passive or active. Passive filters are made of passive elements (resistors, capacitors, and inductors), and active filters use operational amplifiers together with capacitors and resistors. The decision to use a passive filter in preference to an active filter in a certain application depends on several factors, such as the following:

1. The range of frequency of operation of the filter. Passive filters can operate at higher frequencies, whereas active filters are usually used at lower frequencies.
2. The weight and size of the filter realization. Active filters can be realized as an integrated circuit on a chip. Thus, they are superior when considerations of weight and size are important. This is a factor in the design of filters for low-frequency applications where passive filters require large inductors.
3. The sensitivity of the filter to parameter changes and stability. Components used in circuits deviate from their nominal values due to tolerances related to their manufacture or due to chemical changes because of thermal and aging effects. Passive filters are always superior to active filters when it comes to sensitivity.
4. The availability of voltage sources for operational amplifiers. Operational amplifiers require voltage sources ranging from 1 to about 12 volts for their proper operation. Whether such voltages are available without maintenance is an important consideration.
We consider the design of analog and discrete-time filters in more detail in Chapter 10.

4.5 DURATION-BANDWIDTH RELATIONSHIPS


In Section 4.3, we discussed the time-scaling property of the Fourier transform. We noticed that expansion in the time domain implies compression in the frequency domain, and conversely. In the current section, we give a quantitative measure to this observation. The width of the signal, in time or in frequency, can be formally defined in many different ways. No one way or set of ways is best for all purposes. As long as we use the same definition when working with several signals, we can compare their durations and spectral widths. If we change definitions, "conversion factors" are needed to compare the durations and spectral widths involved. The principal purpose of this section is to show that the width of a time signal in seconds (duration) is inversely related to the width of the Fourier transform of the signal in hertz (bandwidth). The spectral width of signals is a very important concept in communication systems and signal processing, for two main reasons. First, more and more users are being assigned to increasingly crowded radio frequency (RF) bands, so that the spectral width required for each band has to be considered carefully. Second, the spectral width of signals is important from the equipment design viewpoint, since the circuits have to have sufficient bandwidth to accommodate the signal, but reject the noise. The remarkable observation is that, independent of shape, there is a lower bound on the duration-bandwidth product of a given signal; this relationship is known as the uncertainty principle.

4.5.1 Definitions of Duration and Bandwidth

As we mentioned earlier, spectral representation is an efficient and convenient method of representing physical signals. Not only does it simplify some operations, but it also reveals the frequency content of the signal. One characterization of the signal is its spread in the frequency domain, or, simply, its bandwidth.
We will give some engineering definitions for the bandwidth of an arbitrary real-valued time signal. Some of these definitions are fairly generally applicable, and others are restricted to a particular application. The reader should keep in mind that there are also other definitions that might be useful, depending on the application.
The signal x(t) is called a baseband (low-pass) signal if |X(ω)| = 0 for |ω| > ω_B and is called a band-pass signal centered at ω_0 if |X(ω)| = 0 for |ω - ω_0| > ω_B. (See

Figure 4.5.1 Amplitude spectra for baseband and band-pass signals.

Figure 4.5.1.) For baseband signals, we measure the bandwidth in terms of the positive
frequency portion only.

Absolute Bandwidth. This notion is used in conjunction with band-limited signals and is defined as the region outside of which the spectrum is zero. That is, if x(t) is a baseband signal and |X(ω)| is zero outside the interval |ω| ≤ ω_B, then

B = ω_B    (4.5.1)

But if x(t) is a band-pass signal and |X(ω)| is zero outside the interval ω_1 ≤ ω ≤ ω_2, then

B = ω_2 - ω_1    (4.5.2)

Example 4.5.1
The signal x(t) = sin(ω_B t)/(πt) is a baseband signal and has the Fourier transform rect(ω/2ω_B). The bandwidth of this signal is then ω_B.

3-dB (Half-Power) Bandwidth. This idea is used with baseband signals that have only one maximum, located at the origin. The 3-dB bandwidth is defined as the frequency ω_1 such that

|X(ω_1)|/|X(0)| = 1/√2    (4.5.3)

Note that inside the band 0 ≤ ω ≤ ω_1, the magnitude |X(ω)| falls no lower than 1/√2 of its value at ω = 0. The 3-dB bandwidth is also known as the half-power bandwidth because a voltage or current attenuation of 3 dB is equivalent to a power attenuation by a factor of 2.

Example 4.5.2
The signal x(t) = exp[-t/τ]u(t) is a baseband signal and has the Fourier transform

X(ω) = 1/(1/τ + jω)
The magnitude spectrum of this signal is shown in Figure 4.5.2. Clearly, X(0) = τ, and the 3-dB bandwidth is

B = 1/τ

Figure 4.5.2 Magnitude spectrum for the signal in Example 4.5.2.
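As a numerical cross-check (with an arbitrary value of τ), the first frequency at which |X(ω)| falls to |X(0)|/√2 for X(ω) = 1/(1/τ + jω) is found to be 1/τ.

import numpy as np

tau = 0.25                                     # time constant, s (arbitrary)
X = lambda w: 1.0 / (1.0 / tau + 1j * w)       # transform of exp(-t/tau) u(t)

w = np.linspace(0.0, 20.0 / tau, 200001)
mag = np.abs(X(w))
w_3dB = w[np.argmax(mag <= mag[0] / np.sqrt(2.0))]   # first frequency where the test is True
print("3-dB bandwidth:", w_3dB, "rad/s;  1/tau =", 1.0 / tau, "rad/s")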

Equivalent Bandwidth. This definition is used in association with band-pass signals with unimodal spectra whose maxima are at the center of the frequency band. The equivalent bandwidth is the width of a fictitious rectangular spectrum such that the energy in that rectangular band is equal to the energy associated with the actual spectrum. In Section 4.3.6, we saw that the energy density is proportional to the square of the magnitude of the signal spectrum. If ω_0 is the frequency at which the magnitude spectrum has its maximum, we let the energy in the equivalent rectangular band be

Equivalent energy = 2 B_eq |X(ω_0)|²    (4.5.4)

The actual energy in the signal is

Actual energy = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω = (1/π) ∫_{0}^{∞} |X(ω)|² dω    (4.5.5)

Setting Equation (4.5.4) equal to Equation (4.5.5), we have the formula that gives the equivalent bandwidth in hertz:

B_eq = (1/(2π|X(ω_0)|²)) ∫_{0}^{∞} |X(ω)|² dω    (4.5.6)

Example 4.5.3
The equivalent bandwidth of the signal in Example 4.5.2 is

B_eq = (1/(2πτ²)) [τ tan⁻¹(ωτ)]_{0}^{∞} = 1/(4τ)
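The following sketch evaluates Equation (4.5.6) numerically for the signal of Example 4.5.2 (with an arbitrary τ) and compares the result with 1/(4τ).

import numpy as np

tau = 0.25
w = np.linspace(0.0, 5000.0, 2000001)          # |X(w)|^2 decays as 1/w^2, so truncation error is small
dw = w[1] - w[0]
X2 = 1.0 / (1.0 / tau**2 + w**2)               # |X(w)|^2 for X(w) = 1/(1/tau + jw)

B_eq = np.sum(X2) * dw / (2.0 * np.pi * X2[0]) # Equation (4.5.6), evaluated numerically
print("numerical B_eq:", B_eq, "Hz;  1/(4 tau) =", 1.0 / (4.0 * tau), "Hz")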

Null-to-Null (Zero-Crossing) Bandwidth. This concept applies to non-band-limited signals and is defined as the distance between the first null in the envelope of the magnitude spectrum above ω_0 and the first null in the envelope below ω_0, where ω_0 is the radian frequency at which the magnitude spectrum is maximum. For baseband signals, the spectrum maximum is at ω = 0, and the bandwidth is the distance between the first null and the origin.

Example 4.5.4
In Example 4.2.1, we showed that the signal x(t) = rect(t/T) has the Fourier transform

X(ω) = T Sa(ωT/2)

The magnitude spectrum of this signal is shown in Figure 4.5.3. From the figure, the null-to-null bandwidth is

B = 2π/T

Figure 4.5.3 Magnitude spectrum for the signal in Example 4.5.4.

z% Bandwidth. This is defined such that

∫_{-ω_z}^{ω_z} |X(ω)|² dω = (z/100) ∫_{-∞}^{∞} |X(ω)|² dω    (4.5.7)

For example, z = 99 defines the frequency band in which 99% of the total energy resides. This is similar to the Federal Communications Commission (FCC) definition of the occupied bandwidth, which states that the energy above the upper band edge ω_u is 0.5% and the energy below the lower band edge ω_l is 0.5%, leaving 99% of the total energy within the occupied band. The z% bandwidth is implicitly defined.

RMS (Gabor) Bandwidth. Probably the most analytically useful definitions of bandwidth are given by various moments of X(ω), or, even better, of |X(ω)|². The rms bandwidth of the signal is defined as

B²_rms = ∫_{-∞}^{∞} ω² |X(ω)|² dω / ∫_{-∞}^{∞} |X(ω)|² dω    (4.5.8)

A dual characterization of a signal x(t) can be given in terms of its duration T, which is a measure of the extent of x(t) in the time domain. As with bandwidth, duration can be defined in several ways. The particular definition to be used depends on the application. Three of the more common definitions are as follows:
1. Distance between successive zeros. As an example, the signal

x(t) = sin(2πWt)/(πt)

has duration T = 1/W.
2. Time at which x(t) drops to a given value. For example, the exponential signal

x(t) = exp[-t/Δ]u(t)

has duration T = Δ, measured as the time at which x(t) drops to 1/e of its value at t = 0.
3. Radius of gyration. This measure is used with signals that are concentrated around t = 0 and is defined as

T = 2 × radius of gyration

  = 2 [∫_{-∞}^{∞} t²|x(t)|² dt / ∫_{-∞}^{∞} |x(t)|² dt]^{1/2}    (4.5.9)

For example, the signal

x(t) = (1/(√(2π)σ)) exp[-t²/(2σ²)]

has a duration of

T = 2 [∫_{-∞}^{∞} t²|x(t)|² dt / ∫_{-∞}^{∞} |x(t)|² dt]^{1/2} = √2 σ
4.5.2 The Uncertainty Principle

The uncertainty principle states that for any real signal x(t) that vanishes at infinity faster than 1/√t, that is,

lim_{t→±∞} √t x(t) = 0    (4.5.10)

and for which the duration is defined as in Equation (4.5.9) and the bandwidth is defined as in Equation (4.5.8), the product TB should satisfy the inequality

TB ≥ 1    (4.5.11)

In words, T and B cannot simultaneously be arbitrarily small: A short duration implies a large bandwidth, and a small-bandwidth signal must last a long time. This constraint

has a wide domain of applications in communication systems, radar, and signal and speech processing.
The proof of Equation (4.5.11) follows from Parseval's formula, Equation (4.3.14), and Schwarz's inequality,

[∫_{-∞}^{∞} y_1(t) y_2(t) dt]² ≤ ∫_{-∞}^{∞} |y_1(t)|² dt ∫_{-∞}^{∞} |y_2(t)|² dt    (4.5.12)

where the equality holds if and only if y_2(t) is proportional to y_1(t), that is,

y_2(t) = k y_1(t)    (4.5.13)

Schwarz's inequality can be easily derived from

0 ≤ ∫ [θ y_1(t) - y_2(t)]² dt = θ² ∫ |y_1(t)|² dt - 2θ ∫ y_1(t) y_2(t) dt + ∫ |y_2(t)|² dt

This equation is a nonnegative quadratic form in the variable θ. For the quadratic to be nonnegative for all values of θ, its discriminant must be nonpositive. Setting this condition establishes Equation (4.5.12). If the discriminant equals zero, then for some value θ = k, the quadratic equals zero. This is possible only if k y_1(t) - y_2(t) = 0, and Equation (4.5.13) follows.
By using Parseval's formula, we can write the bandwidth of the signal as

B² = ∫_{-∞}^{∞} |x'(t)|² dt / ∫_{-∞}^{∞} |x(t)|² dt    (4.5.14)

Combining Equation (4.5.14) with Equation (4.5.9) gives

(TB)² = 4 ∫_{-∞}^{∞} t²|x(t)|² dt ∫_{-∞}^{∞} |x'(t)|² dt / [∫_{-∞}^{∞} |x(t)|² dt]²    (4.5.15)

We apply Schwarz's inequality to the numerator of Equation (4.5.15) to obtain

TB ≥ 2 |∫_{-∞}^{∞} t x(t) x'(t) dt| / ∫_{-∞}^{∞} |x(t)|² dt    (4.5.16)

But the fraction on the right in Equation (4.5.16) is identically equal to 1/2 (as can be seen by integrating the numerator by parts and noting that x(t) must vanish faster than 1/√t as t → ±∞), which gives the desired result.
To obtain equality in Schwarz's inequality, we must have

k t x(t) = dx(t)/dt

or

dx(t)/x(t) = k t dt

Integrating, we have

ln[x(t)] = k t²/2 + constant

or

x(t) = C exp[kt²/2]    (4.5.17)

If k is a negative real number, x(t) is an acceptable pulselike signal and is referred to as the Gaussian pulse. Thus, among all signals, the Gaussian pulse has the smallest duration-bandwidth product in the sense of Equations (4.5.8) and (4.5.9).
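The bound is easy to check numerically. The following sketch evaluates T from Equation (4.5.9) and B from Equation (4.5.8), using the equivalent time-domain form B² = ∫|x'(t)|² dt / ∫|x(t)|² dt that follows from Parseval's relation, for a Gaussian pulse and for an arbitrary comparison pulse; the Gaussian gives TB = 1, and the comparison pulse gives TB > 1.

import numpy as np

def duration_bandwidth_product(x, t):
    # T from Eq. (4.5.9), B from Eq. (4.5.8) written as B^2 = int |x'|^2 / int |x|^2.
    dt = t[1] - t[0]
    xp = np.gradient(x, dt)
    E = np.sum(x**2) * dt
    T = 2.0 * np.sqrt(np.sum(t**2 * x**2) * dt / E)
    B = np.sqrt(np.sum(xp**2) * dt / E)
    return T * B

t = np.linspace(-200.0, 200.0, 400001)
print("Gaussian exp(-t^2/2)   TB =", duration_bandwidth_product(np.exp(-t**2 / 2.0), t))   # ~1.0
print("Lorentzian 1/(1+t^2)   TB =", duration_bandwidth_product(1.0 / (1.0 + t**2), t))    # > 1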

Example 4.5.5
Writing the Fourier transform in the polar form

X(ω) = A(ω) exp[jφ(ω)]

we show that, among all signals with the same amplitude A(ω), the one that minimizes the duration of x(t) has zero (linear) phase. From Equation (4.3.23), we obtain

(-jt)x(t) ↔ dX(ω)/dω = [dA(ω)/dω + jA(ω) dφ(ω)/dω] exp[jφ(ω)]    (4.5.18)

From Equations (4.3.14) and (4.5.18), we have

∫_{-∞}^{∞} t²|x(t)|² dt = (1/2π) ∫_{-∞}^{∞} {[dA(ω)/dω]² + A²(ω)[dφ(ω)/dω]²} dω    (4.5.19)

Since the left-hand side of Equation (4.5.19) measures the duration of x(t), we conclude that a high ripple in the amplitude spectrum or in the phase angle of X(ω) results in signals with long duration. A high ripple results in large absolute values of the derivatives of both the amplitude and phase spectrum, and among all signals with the same amplitude A(ω), the one that minimizes the left-hand side of Equation (4.5.19) has zero (linear) phase.

Example 4.5.6
A convenient measure of the duration of x(t) is the quantity

T = (1/x(0)) ∫_{-∞}^{∞} x(t) dt

In this formula, the duration T can be interpreted as the ratio of the area of x(t) to its height. Note that if x(t) represents the impulse response of an LTI system, then T is a measure of the rise time of the system, which is defined as the ratio of the final value of the step response to the slope of the step response at some appropriate point t_0 along the rise (t_0 = 0 in this case). If we define the bandwidth of x(t) by

B = (1/X(0)) ∫_{-∞}^{∞} X(ω) dω

it is easy to show that

BT = 2π

4.6 SUMMARY

• The Fourier transform of x(t) is defined by

  X(ω) = ∫_{-∞}^{∞} x(t) exp[-jωt] dt

• The inverse Fourier transform of X(ω) is defined by

  x(t) = (1/2π) ∫_{-∞}^{∞} X(ω) exp[jωt] dω

• X(ω) exists if x(t) is "well behaved" and is absolutely integrable. These conditions are sufficient, but not necessary.
• The magnitude of X(ω) plotted against ω is called the magnitude spectrum of x(t), and |X(ω)|² is called the energy spectrum.
• The angle of X(ω) plotted versus ω is called the phase spectrum.
• Parseval's theorem states that

  ∫_{-∞}^{∞} |x(t)|² dt = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω

• The energy of x(t) within the frequency band ω_1 ≤ ω ≤ ω_2 is given by

  E = (1/π) ∫_{ω_1}^{ω_2} |X(ω)|² dω

• The total energy of the aperiodic signal x(t) is

  E = (1/π) ∫_{0}^{∞} |X(ω)|² dω

• The power-density spectrum of x(t) is defined by

  S(ω) = lim_{τ→∞} [|X_τ(ω)|²/(2τ)]

  where

  x_τ(t) ↔ X_τ(ω)    and    x_τ(t) = x(t) rect(t/2τ)

• The convolution property of the Fourier transform states that

  y(t) = x(t) * h(t) ↔ Y(ω) = X(ω)H(ω)

• If X(ω) is the Fourier transform of x(t), then the duality property of the transform is expressed as

  X(t) ↔ 2π x(-ω)

• Amplitude modulation, multiplexing, filtering, and sampling are among the important applications of the Fourier transform.
• If x(t) is a band-limited signal such that

  X(ω) = 0,  |ω| > ω_B

  then x(t) is uniquely determined by its values (samples) at equally spaced points in time, provided that T < π/ω_B. The radian frequency ω_s = 2π/T is called the sampling frequency. The minimum sampling rate is 2ω_B and is known as the Nyquist rate.
• The bandwidth B of x(t) is a measure of the frequency content of the signal.
• There are several definitions for the bandwidth of the signal x(t) that are useful in particular applications.
• The duration T of x(t) is a measure of the extent of x(t) in time.
• The product of the bandwidth and the duration of x(t) is greater than or equal to a constant that depends on the definition of both B and T.

4.7 CHECKLIST OF IMPORTANT TERMS

Aliasing                          Parseval's theorem
Amplitude modulation              Periodic pulse train
Band-pass filter                  Phase spectrum
Bandwidth                         Power-density spectrum
Duality                           Rectangular pulse
Duration                          RMS bandwidth
Energy spectrum                   Sampling frequency
Equivalent bandwidth              Sampling function
Half-power (3-dB) bandwidth       Sampling theorem
High-pass filter                  Signal filtering
Low-pass filter                   Sinc function
Magnitude spectrum                Triangular pulse
Multiplexing                      Two-sided exponential
Nyquist rate                      Uncertainty principle

4.8 PROBLEMS

4.1. Find the Fourier transform of the following signals in terms of X(ω), the Fourier transform of x(t).
(a) x(-t)

(b) 'r"(r) = 40 +'(-')

(c) r,(r) = 4O-- i(:')


(d) r"0)
- r0)
+-r*0)
(e) Re(:(r)l

tD Imk(t)l
43 Determine which of the following signals have a Fourier transform. Why?
(a) :0) = exP[-2t]z(r)
O).tG) = lrlu(r)
(c) x(t) = cos (rrlt)
(d) .r(t) = :
1

(e) .r(r) = t2 exPl-?tlu(t)


43. Show that the Fourier transform of .r(t) can be written as

x(a) = i tilm,+
where

m^= n = 0,1.2....
f-"x(tldt,
(Hinl: Expand exp [-ltor] around, = 0 and integrate termwisc.)
44 Using Equation (4.2.12), show that

[' d, = 2n6(r)
"or.ot
Use the result to prove Equation (4.2.13).
45. Let X((,) = 1g.1[(, l)/2]. Find rhe rransform of the following functions,. using the
-
prope(ies of the Fourier transform:
(s) r(-t)
o) a1t1
(c) r(t+l)
(d).r(-2+a)
(e)(-l)r(t+l)
,rr 4#
(t\
tgti d-t

(h) r(z - l) exp[-j2tl


(l) .t(t) exp[-jzr]
{j) altl expl-jal

(&) (I - l).r(r - l) exp[-l2tl


(l) t' .r (t ) dr
J_-
4.6. Lgt r(r,1 = .*01-rlu(r) and let.v(r) = -r(r + I) + r(r - l). Find l'(o).
4.7. Using Euler's form,
exp [ltotl = coso, + j sin t,r,

interpret the integral

;-[exeli,rtldLo=0(r)
(Hintr Think of the integral as the sum of a large nunrber of cosines and sines of various
frequencies.)
4t. Consider ihe two related signals shown in Figure P4.8. Use linearity, time-shifting, and
integration properties. along rvith the transform of a reclangular signal. to lind X(ro) and
Y(,).

.r(r)

Flgure P4.t

4.9. Find the energy of the following signals. using Parseval's theorem.
(a) .r(t) = exP [- 2rl,I0)
(b) .r(l) = n(t) - a(r - 5)
(c) r(t) - 6(tl4)
(d) r0) = !ln;Yl)
4.10. A Caussian-shaped signal

x(t) = I exP [- ar 2]

is applied io a system whose input/output relationship is

Y(r) =.r2(r)
(a) Find the Fourier transform of the output y().

(b) If y(r) is applied to a second system, which is l'TI rvith imprrl"'response


h(t) = I exP[-br:]
find the output of thc second system.
(c) How does the ourPut change if we inlerchange the order of thc two systems?
4.11. The two formulas

x1o; =
/' r(r)tit and ,(o) =
21, [' ,xt-"t'
are special cases of the Fourier-transform pair. These two formulas can be used to evalu-
ronr" detinire integrals. choose the appropriate .t (r ) and x(r,r) to verify the following
","
idr'nlities.

r"r j" rif ae = r

ta
(b) exp [- rr0'lde = t
J_ -
n;"L-ft,,a,=,
,u, ; I" -r":'*Lffi o, = t
4.12. Use the relation

= i;
[- -x(t)y.(t)dt I- -x(r,r)Y*'(to)z/o
and a known transform pair to show that
t"ln
t\rdt = 4oi
(a)
Jo 1oz +
.-. f' sincl
(b)
J.
-+ = I - exP[-rra]
;z ;;
trdt
r' sinll
(cl
l_- ,-at = 3r1
r' sin{, 2t
ldl J_--;'4 dt = 1
4,13. Consider the signal

'r(r) = e*O 1-
"""t"
(a) Find X(ro).
(b)WriteX(to)asX(o)=R(r,r)+j/(r,r),whereR(to)and/(t,r)arctherealandimaginary
parts of X(ro), resPectivelY'
(c) Take the limit as e -+ 0 of part (b), and show that
I
9llin1 exp[-cllu(t)l = 116(o) + --
Hinr.' Note that

.. a [0. ro*0
llm --=-------= = (
.to e" I ot [o, o=0

f' ." @".dr=n


J-- e" +
The signal r(r) = exp[-ar]a(l) is inpur inro a system with impulse response
h(t) = sirrlVr,rnrr.
(a) Find the Fourier transform Y(ro) of the output.
(b) For what value of a does the energy h the output signal equal one-half the input-sig-
nal energy?
415. A signal has Fourier transform

+
xkl,r -+ i4,.D 2
= -ct'tj4u+3
'n2

Find the transforms of each of the following sigpats:


(a) r(-2r + r)
(b) :0) exp[-/]
@4#
(d) .rO sin(nt)
(e) r(t) * 6(, - 1)
(f) .r(t) *.r(t - 1)
4.1& (a) Show that if r0) is band limited, rhat is,

x(o) = 6, for l-l , r"


then

,(r).#=r(r), c>ro.
(b) Use part (a) to show that

Isin r
l1- c>1
sinct sin(r- r) o'='l
, l;'
;l_- a t-r
Ir"inor, lol -r
4.17. show that with current as input and voltage as output, the frequency response of an induc-
tor with inductance L is jrol and that of a capacitor of capacitance Cisl/jotC.
4.1& Consider the system shown in Figure P4.18 with RC = 10.
(a) Find tl(r,r) if the output is the voltage across the capacitor, and sketch lA1t.ryl as a
function of o.
(b) Repeat Part (a) if the output is the resistor voltage. Commenr on your results.

Flgure P4.lt

4.19. The input to rhe syslem shown in Figure P4.19 has lhe spectrum shown' ['et
p(t) = costort, otu )) to,,

Find the spcctrum Y(ro) of the outPut if


sin to .l
hr(tl=-;
Consider lhe cases o- ) oa and ro,, s tor.

r(rl ,rt(r) ,li (, ) ) (,)

pl/,l ,, (r)

X(ro)

It, (t :l

6)0 a)

tigure P4.19

rL20. Repeat Problem 4.19 if


3la!
x1r; = sa

Ael. Ol The Hilbert transform of a signal.r(r) is obtained by passing rhe signal through an LTI
system with impulse response h(tl = l/zrt. What is H(r,r)?
(b) What is the Hilbert transform of the signal .r(r) = cosrrr?
4ZL The autocorrelalion function of signal r(r) is defined as

t'
R,(t) = + t)dr
J_".rr(r)x(r
Show that

&(,) +, lx(o,) l'z


4.41. Suppose that r(r) is the input to a linear system with impulse response ft0).
(a) Show that
&(,): &0) + (r) r ft(-r)
where y(l) is the output of the system.
(b) Show that

R."(r) <-+ | x1o1 l'z I a1rol l,


424. For the system shown in Figure P4.24, assume that

. sinro,l sin ro.l


r(r) = -l- * *, ro, ( or,

(a) Find y(t) if 0 < ro, ( to,.


(b) Findy() if ro, < or1< ot.
(c) Find y(t) if tq < r,rr.

H (tol

,u,{_*,,,,
'btl O -l tn

Flgure P4.20

425. Cbnsider the standard amplitude-modulation system shown in Figure P4.25.


(a) Sketch rhe spectrum of .r(r).
(b) Sketch the spectrum ofy(r) for the following cases:
(l) 0sro.(@o-o-
(ll) oo - rr,,, s ttr. < &ro
(iii) ro" > @o * o,
(c) Sketch the spectrum of z (t) for the following cases:
(l) 0so.(00-o,,
(li) roo - ro,, s (D. ( roo
(lii) ro. > 0ro + od

(d) Sketch the spectrum of u1t) if o. ) oo * ro,, and


(i) o1 < t'r,,,
(il) <o1 < 2an - ,on,
(lll) tor > Zan 't a,,

x (r) r (r) I;rltcr


m(t\ l, (r)

.4 cos (r0, cos ojo ,

H clu\ lrl (u) ul(,lt

--b)c
b)c 0 ojc (,
o;c 'aia 0 or. os -(nt
-a, O u1
@l o
t's

Flgure P425

426. As discussed in Section 4.4.1, AM demodulation consists of multiplying the received sig-
nal y (l ) bv a replica, zl cos r,r,,I. of lhe calricr and lorv-pass filtc rins t h'r rcsultinE signal : ( ).
Such a scheme is called synchronous dcmodulation and assumcs lhllt the phase of the car-
rier is known at the receiver. If the carrier phase is not known. ; (t ) hecomes
z(r) = y(t)Acos(toor + 0)
where 0 is the assumed phase of the carrier.
(a) Assume that the signal .r(l) is band limited to <o-. and find the output;(r) of the
demodulator.
(b) How does i(r) compare rvith the desired output r(t)?
4Zr. A single-sideband, amplitude-modulated signal is generated usinu the system shown in
Figure P4.27.
(a) Sketch the spectrum ofy(r) for @t= @^.
@) Write a mathematical expression for hrQ).ls it a realizable filter?
iI(o) lllkol

n(tl

-(.,O
cos LJ" ,

Flgure P4.27

42& Consider the system shown in Figure P4.28(a). The systems /r1(r ) and hr(t) respectively
have frequency responses

M(ul n(t, E{u)

.r1(r)

(8)

Flgurc P4Jt(a)

I
H,(o) =
)lno{, - oo) + H6(ro + roo)l

unO

-t I/o(or + roo)t
H2@t) = Wo(L,l - oo) -
4
(a) Sketch the spectrum ofy(t).
O) Repeat part (a) for the Ho(ro) eho*r in Figure P4.2t(b).

Ho(j.,.t

-qb - .+ -arg 0 0,O (ro i (,h a',

(b) Ilguro P4rtO)

+lr, Ler.r(t) and y(l) be low-paes eignals with bandwidthe of 150 Hz and 350 Hz' rospec'
tively, and let e1r; = .r(l)y(t). The signal e(t) ls oampled uslng an ldeal eampler at inter-
vale of I
secs.
(q) What is the maxlmum value that I, can take wlthout inuoducing aliaring?
(b) rf

r(r) = sin(1$-l]),rt,l =."(Y)


sketch the sPectrum of the samPled signal for (i) 7i = 0.5 ms and (ii) T, = 2 m*
In natural sampling, the eignal .r(t) is multiplied by a train of rectangular pulses, aB shown
in Flgure P4.30.
(r) Find and Bketch the epectrum ofr,(t),
G) Can r(t) be recovered wilhout any dlBtorlion?

X(ul

n(r) xr(l)

-27
P0'l

Flgure P4.A)

rl3l. In flat-top sampling, the amplitudc of each pulse in the pulse train .r, (r ) is consunt during
the pulse, but is determined by an instantaneous sample of r(t). as illustrated in Figure
P431(a). The instantaneous sample is chosen to occur at the center of the pulse for con'
venience. This choice is not necessary in general.
(a) Write an expression for x, (t).
O) FindX,(or),
(c) How is this r$ult different from the result in part (a) of Problenr 4.30?
(d) Ueing only a low-pass filter, can r(t) be recovered without any distortion?
(e) Show that you can recover x(l) without any distortion if another filter, H", (or)' is
added, as shown in Figure P4.31(b), where I

n(,) =
{1, ki,i:;
H",(,) =
*'til;'l;1, l,l <,"
= arbitrary, elsewhere

X(or)

xr(')

- tu. 0 ola

(a)

rr(, )

(b)

Flgure P4.31 Flat-top sampling of x(t).



432. Figure P4.32 diagrams the FDM system thal generates the baseband signal for FM stereo-
phonic broadcasting. The left-speaker and right-speaker signals are processed to produce
.rr_(t) + -rr(r) and.r1.(r) - .rfl(I), respcctively,
(a) Skctch the spectrum of ) (I)
(b) Sketch the spectrum of z (r). u(t), and z,(l).
(c) Shorv how to recover both 11(r) and -r*(l).

XLU,I + XRk ) X 1|t:l - X pk tl

lo to

r,; X 103 r.r X 103

(a)

xr (r) + rn (r)

r! (r) - rR (r) LPF


h3ul w(t)
0-15 kHz

cos 2rol l cos oJl, cos 2@l I L= t9kHz


(b)

Figure P432

433. Show that the transform of a train of doublets is a train o[ impulses:

jri,$
2t
roo=7
) a'(, - nT) <-+ ) n 6(or - zoo).

4J4. Find the indicated bandwidth of the following signals:

(a) sinc
-, ' (absolute bandwidth)
3Wr

(b) exp[-3tlrr(t) (3-dBbandwidth)


(c) exp[-3rla(t) (957e bandwidth)

fal ,/" exP [- or


'] (rms ban<lwidth)

435. The signal X.(o) shorvn in Figure 4.4.10(b) is a periodic signal in or.
(a) Find the Fourier-series coefficienis of lhis signal.

(b) Show that

x,(o) =,2.11 -.,r,.-o["j],]


(c) Using Equation (4.4.1 ) along with the shifting property of the Fourier transform. shorv that

'o = .?.+-@Dtl#3:+l)t
435. Calculate the time-bandwidth product for the following signals:

(a) r (r) =
I exP [ ,']
-,--.
Y 2tr L" I z'l'
I
I

(Use the radius oIgyration measure for Iand lhe equivalcnt bandwidth measure for B')
sin1ur.Wt
(b) .t(r1 =
----'
(Use the distance between zeros as a measure of I and the absolute bandwidth as a
measure of B.)
(c) .r(r) = Aexpl-orlrr(r). (use the time ar which r(r) drops to l/e of its value at, = 0
as a measure of Iand the 3-dB bandwidth as a measure o[ r9.)
Chapter 5

The Laplace Transform

5.1 INTRODUCTION

In Chapters 3 and 4, we saw how frequency-domain methods are extremely useful in the study of signals and LTI systems. In those chapters, we demonstrated that Fourier analysis reduces the convolution operation required to compute the output of LTI systems to just the product of the Fourier transform of the input signal and the frequency response of the system. One of the problems we can run into is that many of the input signals we would like to use do not have Fourier transforms. Examples are exp[at]u(t), a > 0; exp[-at], -∞ < t < ∞; tu(t); and other time signals that are not absolutely integrable. If we are confronted, say, with a system that is driven by a ramp-function input, is there any method of solution other than the time-domain techniques of Chapter 2? The difficulty could be resolved by extending the Fourier transform so that the signal x(t) is expressed as a sum of complex exponentials, exp[st], where the frequency variable is s = σ + jω and thus is not restricted to the imaginary axis only. This is equivalent to multiplying the signal by an exponential convergent factor. For example, exp[-σt] exp[ct]u(t) satisfies Dirichlet's conditions for σ > c and, therefore, should have a generalized or extended Fourier transform. Such an extended transform is known as the bilateral Laplace transform, named after the French mathematician Pierre Simon de Laplace. In this chapter, we define the bilateral Laplace transform (Section 5.2) and use the definition to determine a set of bilateral transform pairs for some basic signals.
As mentioned in Chapter 2, any signal x(t) can be written as the sum of causal and noncausal signals. The causal part of x(t), x(t)u(t), has a special Laplace transform that we refer to as the unilateral Laplace transform or, simply, the Laplace transform. The unilateral Laplace transform is more often used than the bilateral Laplace trans-

form, not only because most of the signals occurring in practice are causal signals, but also because the response of a causal LTI system to a causal input is causal. In Section 5.3, we define the unilateral Laplace transform and provide some examples to illustrate how to evaluate such transforms. In Section 5.4, we demonstrate how to evaluate the bilateral Laplace transform using the unilateral Laplace transform.
As with other transforms, the Laplace transform possesses a set of valuable properties that are used repeatedly in various applications. Because of their importance, we devote Section 5.5 to the development of the properties of the Laplace transform and give examples to illustrate their use.
Finding the inverse Laplace transform is as important as finding the transform itself. The inverse Laplace transform is defined in terms of a contour integral. In general, such an integral is not easy to evaluate and requires the use of some theorems from the subject of complex variables that are beyond the scope of this text. In Section 5.6, we use the technique of partial fractions to find the inverse Laplace transform for the class of signals that have rational transforms (i.e., that can be expressed as the ratio of two polynomials).
In Section 5.7, we develop techniques for determining the simulation diagrams of continuous-time systems. In Section 5.8, we discuss some applications of the Laplace transform, such as in the solution of differential equations, applications to circuit analysis, and applications to control systems. In Section 5.9, we cover the solution of the state equations in the frequency domain. Finally, in Section 5.10, we discuss the stability of LTI systems in the s domain.

5.2 THE BILATERAL LAPLACE TRANSFORM

The bilateral, or two-sided, Laplace transform of the real-valued signal x(t) is defined as

X_B(s) ≜ ∫_{-∞}^{∞} x(t) exp[-st] dt    (5.2.1)

where the complex variable s is, in general, of the form s = σ + jω, with σ and ω the real and imaginary parts, respectively. When σ = 0, s = jω, and Equation (5.2.1) becomes the Fourier transform of x(t), while with σ ≠ 0, the bilateral Laplace transform is the Fourier transform of the signal x(t) exp[-σt]. For convenience, we sometimes denote the bilateral Laplace transform in operator form as ℒ_B[x(t)] and denote the transform relationship between x(t) and X_B(s) as

x(t) ↔ X_B(s)    (5.2.2)

Let us now evaluate a number of bilateral Laplace transforms to illustrate the relationship between them and Fourier transforms.

Example 5.2.1
Consider the signal x(t) = exp[-at]u(t). From the definition of the bilateral Laplace transform,

X_B(s) = ∫_{-∞}^{∞} exp[-at] exp[-st] u(t) dt

       = ∫_{0}^{∞} exp[-(s + a)t] dt

       = 1/(s + a)

As stated earlier, we can look at this bilateral Laplace transform as the Fourier transform of the signal exp[-at] exp[-σt]u(t). This signal has a Fourier transform only if σ > -a. Thus, X_B(s) exists only if Re{s} > -a.
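As a quick numerical illustration, the defining integral of Equation (5.2.1) for x(t) = exp[-at]u(t) can be evaluated at a point inside the ROC and compared with 1/(s + a); the values of a and s below are arbitrary.

import numpy as np

a = 0.5
s = 1.0 + 2.0j                                 # Re{s} = 1 > -a, so s lies inside the ROC

t = np.linspace(0.0, 60.0, 600001)             # integrand ~ exp(-(Re{s}+a)t), negligible beyond t ~ 60
dt = t[1] - t[0]
X_numeric = np.sum(np.exp(-a * t) * np.exp(-s * t)) * dt

print("numerical integral:", X_numeric)
print("1/(s + a)         :", 1.0 / (s + a))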

In general, the bilateral Laplace transform converges for some values of Re{s} and not for others. The set of values of s for which it converges, i.e., for which

∫_{-∞}^{∞} |x(t)| exp[-Re{s}t] dt < ∞    (5.2.3)

is called the region of absolute convergence or, simply, the region of convergence and is abbreviated as ROC. It should be stressed that the region of convergence depends on the given signal x(t). For instance, in the preceding example, the ROC is defined by Re{s} > -a whether a is positive or negative. Note also that even though the bilateral Laplace transform exists for all values of a, the Fourier transform exists only if a > 0.
If we restrict our attention to time signals whose Laplace transforms are rational functions of s, i.e., X_B(s) = N(s)/D(s), then clearly, X_B(s) does not converge at the zeros of the polynomial D(s) (poles of X_B(s)), which leads us to conclude that, for rational Laplace transforms, the ROC should not contain any poles.

Example 5.2.2
In this example, we show that two signals can have the same algebraic expression for their bilateral Laplace transform, but different ROCs. Consider the signal

x(t) = -exp[-at]u(-t)

Its bilateral Laplace transform is

X_B(s) = -∫_{-∞}^{∞} exp[-(s + a)t] u(-t) dt

       = -∫_{-∞}^{0} exp[-(s + a)t] dt

For this integral to converge, we require that Re{s + a} < 0, or Re{s} < -a, and the bilateral Laplace transform is

X_B(s) = 1/(s + a)

In spite of the fact that the algebraic expressions for the bilateral Laplace transforms of the two signals in Examples 5.2.1 and 5.2.2 are the same, the two transforms have different ROCs. From these examples, we can conclude that, for signals that exist for positive time only, the behavior of the signal puts a lower bound on the allowable values of Re{s}, whereas for signals that exist for negative time, the behavior of the signal puts an upper bound on the allowable values of Re{s}. Thus, for a given X_B(s), there can be more than one corresponding x(t), depending on the ROC; in other words, the correspondence between x(t) and X_B(s) is not one to one unless the ROC is specified.

A convenient way to display the ROC is in the complex s plane, as shown in Figure 5.2.1. The horizontal axis is usually referred to as the σ axis, and the vertical axis is normally referred to as the jω axis. The shaded region in Figure 5.2.1(a) represents the set of points in the s plane corresponding to the region of convergence for the signal in Example 5.2.1, and the shaded region in Figure 5.2.1(b) represents the region of convergence for the signal in Example 5.2.2.

Figure 5.2.1 s-plane representation of the bilateral Laplace transform.

The ROC can also provide us with information about whether x(t) is Fourier trans-
formable or not. Since the Fourier transform is obtained from the bilateral Laplace
transform by setting σ = 0, the region of convergence in this case is a single line (the
jω axis). Therefore, if the ROC for X_B(s) includes the jω axis, x(t) is Fourier trans-
formable, and X(ω) can be obtained by replacing s in X_B(s) by jω.

Example 5.2.3
Consider the sum of two real exponentials:

    x(t) = 3 exp[-2t]u(t) + 4 exp[t]u(-t)

Note that for signals that exist for both positive and negative time, the behavior of the sig-
nal for negative time puts an upper bound on the allowable values of Re{s}, and the behav-
ior for positive time puts a lower bound on the allowable Re{s}. Therefore, we expect to
obtain a strip as the ROC for such signals. The bilateral Laplace transform of x(t) is

    X_B(s) = ∫_{0}^{∞} 3 exp[-(s + 2)t] dt + ∫_{-∞}^{0} 4 exp[-(s - 1)t] dt

The first integral converges for Re{s} > -2, the second integral converges for Re{s} < 1,
and the algebraic expression for the bilateral Laplace transform is

    X_B(s) = 3/(s + 2) - 4/(s - 1) = (-s - 11)/[(s + 2)(s - 1)],    -2 < Re{s} < 1

5.3 THE UNILATERAL LAPLACE TRANSFORM


Similar to our definition of Equation (5.2.1), we can define a unilateral, or one-sided,
transform of a signal x(t) as

    X(s) = ∫_{0⁻}^{∞} x(t) exp[-st] dt    (5.3.1)

Some texts use t = 0⁺ or t = 0 as a lower limit. All three lower limits are equivalent if
x(t) does not contain a singularity function at t = 0. This is because there is no contri-
bution to the area under the function x(t) exp[-st] at t = 0 even if x(t) is discontinu-
ous at the origin.
The unilateral transform is of particular interest when we are dealing with causal
signals. Recall from our definition in Chapter 2 that if the signal x(t) is causal, we have
x(t) = 0 for t < 0. Thus, the bilateral transform X_B(s) of a causal signal is the same as
the unilateral transform of the signal. Our discussion in Section 5.2 showed that, given
a transform X_B(s), the corresponding time function x(t) is not uniquely specified and
depends on the ROC. For causal signals, however, there is a unique correspondence
between the signal x(t) and its unilateral transform X(s). This makes for considerable
simplification in analyzing causal signals and systems. In what follows, we will omit the
word "unilateral" and simply refer to X(s) as the Laplace transform of x(t), except
when it is not clear from the context which transform is being used.

Example 5.3.1
In this example, we find the unilateral Laplace transforms of the following signals:

    x₁(t) = u(t),  x₂(t) = δ(t),  x₃(t) = exp[j2t],  x₄(t) = cos 2t,  x₅(t) = sin 2t

From Equation (5.3.1),

    X₁(s) = ∫_{0⁻}^{∞} exp[-st] dt = 1/s,    Re{s} > 0

    X₂(s) = ∫_{0⁻}^{∞} δ(t) exp[-st] dt = 1,    for all s

    X₃(s) = ∫_{0⁻}^{∞} exp[j2t] exp[-st] dt = 1/(s - j2) = s/(s² + 4) + j 2/(s² + 4),    Re{s} > 0

Since cos 2t = Re{exp[j2t]} and sin 2t = Im{exp[j2t]}, using the linearity of the integral
operation, we have

    X₄(s) = Re{1/(s - j2)} = s/(s² + 4),    Re{s} > 0

    X₅(s) = Im{1/(s - j2)} = 2/(s² + 4),    Re{s} > 0

Table 5.1 lists some of the important unilateral Laplace-transform pairs. These are
used repeatedly in applications.
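For routine work, pairs such as those in Table 5.1 can also be generated with a computer algebra system. A minimal sketch using sympy's laplace_transform follows; the particular signals are arbitrary choices for illustration.

import sympy as sp

t, s = sp.symbols('t s', positive=True)

for f in (1, sp.cos(2*t), sp.sin(2*t), sp.exp(-3*t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', sp.simplify(F))
# 1        -> 1/s
# cos(2t)  -> s/(s**2 + 4)
# sin(2t)  -> 2/(s**2 + 4)
# exp(-3t) -> 1/(s + 3)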

5.4 BILATERAL TRANSFORMS USING UNILATERAL TRANSFORMS

The bilateral Laplace transform can be evaluated using the unilateral Laplace trans-
form if we express the signal x(t) as the sum of two signals. The first part represents
the behavior of x(t) in the interval (-∞, 0), and the second part represents the behav-
ior of x(t) in the interval [0, ∞). In general, any signal that does not contain any singu-
larities (a delta function or its derivatives) at t = 0 can be written as the sum of a causal
part x₊(t) and a noncausal part x₋(t), i.e.,

    x(t) = x₊(t)u(t) + x₋(t)u(-t)    (5.4.1)

Taking the bilateral Laplace transform of both sides, we have

    X_B(s) = X₊(s) + ∫_{-∞}^{0} x₋(t) exp[-st] dt

Using the substitution t = -τ yields

    X_B(s) = X₊(s) + ∫_{0}^{∞} x₋(-τ) exp[sτ] dτ

If x(t) does not have any singularities at t = 0, then the lower limit in the second term
can be replaced by 0⁻, and the bilateral Laplace transform becomes

    X_B(s) = X₊(s) + ℒ{x₋(-t)u(t)}|_{s→-s}    (5.4.2)



TABLE 5.1
Some Selected Unilateral Laplace-Transform Pairs

      Signal                          Transform                      ROC

  1.  u(t)                            1/s                            Re{s} > 0
  2.  u(t) - u(t - a)                 (1 - exp[-as])/s               Re{s} > 0
  3.  δ(t)                            1                              for all s
  4.  δ(t - a)                        exp[-as]                       for all s
  5.  tⁿ u(t), n = 1, 2, ...          n!/sⁿ⁺¹                        Re{s} > 0
  6.  exp[-at] u(t)                   1/(s + a)                      Re{s} > -a
  7.  tⁿ exp[-at] u(t)                n!/(s + a)ⁿ⁺¹                  Re{s} > -a
  8.  cos ω₀t u(t)                    s/(s² + ω₀²)                   Re{s} > 0
  9.  sin ω₀t u(t)                    ω₀/(s² + ω₀²)                  Re{s} > 0
 10.  cos² ω₀t u(t)                   (s² + 2ω₀²)/[s(s² + 4ω₀²)]     Re{s} > 0
 11.  sin² ω₀t u(t)                   2ω₀²/[s(s² + 4ω₀²)]            Re{s} > 0
 12.  exp[-at] cos ω₀t u(t)           (s + a)/[(s + a)² + ω₀²]       Re{s} > -a
 13.  exp[-at] sin ω₀t u(t)           ω₀/[(s + a)² + ω₀²]            Re{s} > -a
 14.  t cos ω₀t u(t)                  (s² - ω₀²)/(s² + ω₀²)²         Re{s} > 0
 15.  t sin ω₀t u(t)                  2ω₀s/(s² + ω₀²)²               Re{s} > 0

where ℒ{·} stands for the unilateral Laplace transform. Note that if x₋(-t)u(t) has
an ROC defined by Re{s} > σ₀, then x₋(t)u(-t) should have an ROC defined by
Re{s} < -σ₀.

Example 5.4.1
The bilateral Laplace transform of the signal x(t) = exp[at]u(-t), a > 0, is

    X_B(s) = ℒ{exp[-at]u(t)}|_{s→-s}

           = (1/(s + a))|_{s→-s} = -1/(s - a),    Re{s} < a

Note that the unilateral Laplace transform of exp[at]u(-t) is zero.

Example 5.4.2
According to Equation (5.4.2), the bilateral Laplace transform of

    x(t) = A exp[-at]u(t) + B t² exp[-bt]u(-t),    a and b > 0

is

    X_B(s) = A/(s + a) + ℒ{B(-t)² exp[bt]u(t)}|_{s→-s}

           = A/(s + a) + (2B/(s - b)³)|_{s→-s},    Re{s} > -a and Re{s} < -b

           = A/(s + a) - 2B/(s + b)³,    -a < Re{s} < -b

where ℒ{B(-t)² exp[bt]u(t)} follows from entry 7 in Table 5.1.

Not all signals possess a bilateral Laplace transform. For example, the periodic
exponential exp[jω₀t] does not have a bilateral Laplace transform, because

    ℒ_B{exp[jω₀t]} = ∫_{-∞}^{∞} exp[-(s - jω₀)t] dt

                   = ∫_{-∞}^{0} exp[-(s - jω₀)t] dt + ∫_{0}^{∞} exp[-(s - jω₀)t] dt

For the first integral to converge, we need Re{s} < 0, and for the second integral to
converge, we need Re{s} > 0. These two restrictions are contradictory, and there is no
value of s for which the transform converges.
In the remainder of this chapter, we restrict our attention to the unilateral Laplace
transform, which we simply refer to as the Laplace transform.

5.5 PROPERTIES OF THE UNILATERAL LAPLACE TRANSFORM

There are a number of useful properties of the unilateral Laplace transform that will
allow some problems to be solved almost by inspection. In this section we summarize
many of these properties, some of which may be more or less obvious to the reader. By

using these properties, it is possible to derive many of the transform pairs in Table 5.1.
In this section, we list several of these properties and provide outlines of their proofs.

5.5.1 Linearity

If

    x₁(t) ↔ X₁(s)
    x₂(t) ↔ X₂(s)

then

    a x₁(t) + b x₂(t) ↔ a X₁(s) + b X₂(s)    (5.5.1)

where a and b are arbitrary constants. This property is the direct result of the linear
operation of integration. The linearity property can be easily extended to a linear com-
bination of an arbitrary number of components and simply means that the Laplace
transform of a linear combination of an arbitrary number of signals is the same linear
combination of the transforms of the individual components. The ROC associated with
a linear combination of terms is the intersection of the ROCs for the individual terms.

Example 5.5.1
Suppose we want to find the Laplace transform of

    (A + B exp[-bt])u(t)

From Table 5.1, we have the transform pairs

    u(t) ↔ 1/s    and    exp[-bt]u(t) ↔ 1/(s + b)

Thus, using linearity, we obtain the transform pair

    A u(t) + B exp[-bt]u(t) ↔ A/s + B/(s + b)

The ROC is the intersection of Re{s} > -b and Re{s} > 0 and, hence, is given by
Re{s} > max(-b, 0).

5.5.2 Time Shifting

If x(t) ↔ X(s), then for any positive real number t₀,

    x(t - t₀)u(t - t₀) ↔ exp[-t₀s]X(s)    (5.5.2)

The signal x(t - t₀)u(t - t₀) is a t₀-second right shift of x(t)u(t). Therefore, a shift in
time to the right corresponds to multiplication by exp[-t₀s] in the Laplace-transform
domain. The proof follows from Equation (5.3.1) with x(t - t₀)u(t - t₀) substituted for
x(t), to obtain

    ℒ{x(t - t₀)u(t - t₀)} = ∫_{0⁻}^{∞} x(t - t₀)u(t - t₀) exp[-st] dt

                          = ∫_{t₀}^{∞} x(t - t₀) exp[-st] dt

Using the transformation of variables t = τ + t₀, we have

    ℒ{x(t - t₀)u(t - t₀)} = ∫_{0⁻}^{∞} x(τ) exp[-s(τ + t₀)] dτ

                          = exp[-t₀s] ∫_{0⁻}^{∞} x(τ) exp[-sτ] dτ

                          = exp[-t₀s]X(s)

Note that all values of s in the ROC of x(t) are also in the ROC of x(t - t₀). Therefore,
the ROC associated with x(t - t₀) is the same as the ROC associated with x(t).

Example 5.5.2
Consider the rectangular pulse x(t) = rect((t - a)/2a). This signal can be written as

    rect((t - a)/2a) = u(t) - u(t - 2a)

Using linearity and time shifting, we find that the Laplace transform of x(t) is

    X(s) = 1/s - exp[-2as]/s = (1 - exp[-2as])/s,    Re{s} > 0

It should be clear that the time-shifting property holds for a right shift only. For exam-
ple, the Laplace transform of x(t + t₀), for t₀ > 0, cannot be expressed in terms of the
Laplace transform of x(t). (Why?)
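The result of this example is easy to confirm symbolically. The sketch below (sympy; the symbols are assumptions introduced only for the check) integrates exp[-st] over the support of the pulse, which is all that the unilateral transform sees, and reproduces (1 - exp[-2as])/s.

import sympy as sp

t = sp.symbols('t', positive=True)
s, a = sp.symbols('s a', positive=True)

# rect((t - a)/2a) = u(t) - u(t - 2a) equals 1 on [0, 2a) and 0 elsewhere,
# so its Laplace transform reduces to an integral over [0, 2a].
X = sp.integrate(sp.exp(-s*t), (t, 0, 2*a))
print(sp.simplify(X))    # -> (1 - exp(-2*a*s))/s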

5.5.3 Shifting in the s Domain

If

    x(t) ↔ X(s)

then

    exp[s₀t]x(t) ↔ X(s - s₀)    (5.5.3)

The proof follows directly from the definition of the Laplace transform. Since the new
transform is a shifted version of X(s), for any s that is in the ROC of x(t), the values
s + Re{s₀} are in the ROC of exp[s₀t]x(t).

Example 5.5.3
From entry 8 in Table 5.1 and Equation (5.5.3), the Laplace transform of

    x(t) = A exp[-at] cos(ω₀t + θ)u(t)

is

    X(s) = ℒ{A exp[-at](cos ω₀t cos θ - sin ω₀t sin θ)u(t)}

         = ℒ{A exp[-at] cos ω₀t cos θ u(t)} - ℒ{A exp[-at] sin ω₀t sin θ u(t)}

         = A(s + a) cos θ/[(s + a)² + ω₀²] - A ω₀ sin θ/[(s + a)² + ω₀²]

         = A[(s + a) cos θ - ω₀ sin θ]/[(s + a)² + ω₀²],    Re{s} > -a

5.5.4 Time Scaling

If

    x(t) ↔ X(s),    Re{s} > σ₁

then for any positive real number a,

    x(at) ↔ (1/a)X(s/a),    Re{s} > aσ₁    (5.5.4)

The proof follows directly from the definition of the Laplace transform and the appro-
priate substitution of variables.
Aside from the amplitude factor of 1/a, linear scaling in time by a factor of a cor-
responds to linear scaling in the s plane by a factor of 1/a. Also, for any value of s in
the ROC of x(t), the value s/a will be in the ROC of x(at); that is, the ROC associated
with x(at) is a compressed (a > 1) or expanded (a < 1) version of the ROC of x(t).

Example 5.5.4
Consider the time-scaled unit-step signal u(at), where a is an arbitrary positive number.
The Laplace transform of u(at) is

    ℒ{u(at)} = (1/a)(1/(s/a)) = 1/s,    Re{s} > 0

This result is anticipated, since u(at) = u(t) for a > 0.

5.5.5 Differentiation in the Time Domain

If

    x(t) ↔ X(s)

then

    dx(t)/dt ↔ sX(s) - x(0⁻)    (5.5.5)


The proof of this property is obtained by computing the transform of dx(t)/dt. This
transform is

    ℒ{dx(t)/dt} = ∫_{0⁻}^{∞} (dx(t)/dt) exp[-st] dt

Integrating by parts yields

    ℒ{dx(t)/dt} = x(t) exp[-st] |_{0⁻}^{∞} + s ∫_{0⁻}^{∞} x(t) exp[-st] dt

                = lim_{t→∞} [exp[-st]x(t)] - x(0⁻) + sX(s)

The assumption that X(s) exists implies that

    lim_{t→∞} [exp[-st]x(t)] = 0

for s in the ROC. Thus,

    ℒ{dx(t)/dt} = sX(s) - x(0⁻)

Therefore, differentiation in the time domain is equivalent to multiplication by s in the
s domain. This permits us to replace operations of calculus by simple algebraic opera-
tions on transforms.
The differentiation property can be extended to yield

    dⁿx(t)/dtⁿ ↔ sⁿX(s) - sⁿ⁻¹x(0⁻) - sⁿ⁻²x'(0⁻) - ... - x⁽ⁿ⁻¹⁾(0⁻)    (5.5.6)

Generally speaking, differentiation in the time domain is the most important property
(next to linearity) of the Laplace transform. It makes the Laplace transform useful in
applications such as solving differential equations. Specifically, we can use the Laplace
transform to convert any linear differential equation with constant coefficients into an
algebraic equation.
As mentioned earlier, for rational Laplace transforms, the ROC does not contain
any poles. Now, if X(s) has a first-order pole at s = 0, multiplying by s, as in Equation
(5.5.5), may cancel that pole and result in a new ROC that contains the ROC of x(t).
Therefore, in general, the ROC associated with dx(t)/dt contains the ROC
associated with x(t) and can be larger if X(s) has a first-order pole at s = 0.

Example 5.5.5
The unit step function x(t) = u(t) has the transform X(s) = 1/s, with an ROC defined by
Re{s} > 0. The derivative of u(t) is the unit impulse function, whose Laplace transform
is unity for all s, with associated ROC extending over the entire s plane.

Example 5.5.6
Let x(t) = sin²ωt u(t), for which x(0⁻) = 0. Note that

    x'(t) = 2ω sin ωt cos ωt u(t) = ω sin 2ωt u(t)

From Table 5.1,

    ℒ{sin 2ωt u(t)} = 2ω/(s² + 4ω²)

and therefore,

    ℒ{sin²ωt u(t)} = (1/s)ℒ{x'(t)} = 2ω²/[s(s² + 4ω²)]

Example 5.5.7
One of the important applications of the Laplace transform is in solving differential equa-
tions with specified initial conditions. As an example, consider the differential equation

    y''(t) + 3y'(t) + 2y(t) = 0,    y(0⁻) = 3,  y'(0⁻) = 1

Let Y(s) = ℒ{y(t)} be the Laplace transform of the (unknown) solution y(t). Using the
differentiation-in-time property, we have

    ℒ{y'(t)} = sY(s) - y(0⁻) = sY(s) - 3

    ℒ{y''(t)} = s²Y(s) - sy(0⁻) - y'(0⁻) = s²Y(s) - 3s - 1

If we take the Laplace transform of both sides of the differential equation and use the last
two expressions, we obtain

    s²Y(s) + 3sY(s) + 2Y(s) = 3s + 10

Solving algebraically for Y(s), we get

    Y(s) = (3s + 10)/[(s + 2)(s + 1)] = 7/(s + 1) - 4/(s + 2)

From Table 5.1, we see that

    y(t) = 7 exp[-t]u(t) - 4 exp[-2t]u(t)
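The three steps of this example (transform, solve algebraically, invert) can be mirrored directly in a computer algebra system. A minimal sympy sketch for the same equation and initial conditions follows; the symbol Y stands for the unknown transform.

import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')

y0, yp0 = 3, 1                                   # y(0-) and y'(0-) from the example
# differentiation-in-time property applied term by term
eq = sp.Eq((s**2*Y - s*y0 - yp0) + 3*(s*Y - y0) + 2*Y, 0)
Ysol = sp.solve(eq, Y)[0]
print(sp.apart(Ysol, s))                         # -> 7/(s + 1) - 4/(s + 2)
print(sp.inverse_laplace_transform(Ysol, s, t))  # -> 7*exp(-t) - 4*exp(-2*t)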

Example 5.5.8
Consider the RC circuit shown in Figure 5.5.1(a). The input is the rectangular signal shown
in Figure 5.5.1(b). The circuit is assumed initially relaxed (zero initial conditions).

Figure 5.5.1  Circuit for Example 5.5.8.

The differential equation governing the circuit is

    R i(t) + (1/C) ∫_{0}^{t} i(τ) dτ = v(t)

The input v(t) can be represented in terms of unit-step functions as

    v(t) = V₀[u(t - a) - u(t - b)]

Taking the Laplace transform of both sides of the differential equation yields

    R I(s) + I(s)/(Cs) = (V₀/s)(exp[-as] - exp[-bs])

Solving for I(s), we obtain

    I(s) = (V₀/R) · 1/(s + 1/RC) · (exp[-as] - exp[-bs])

By using the time-shift property, we obtain the current

    i(t) = (V₀/R){exp[-(t - a)/RC] u(t - a) - exp[-(t - b)/RC] u(t - b)}

The solution is shown in Figure 5.5.2.

Figure 5.5.2  The current waveform in Example 5.5.8.

5.5.6 Integration in the Time Domain

Since differentiation in the time domain corresponds to multiplication by s in the s
domain, one might conclude that integration in the time domain should involve divi-
sion by s. This is always true if the integral of x(t) does not grow faster than an expo-
nential of the form A exp[at], that is, if

    lim_{t→∞} exp[-st] ∫_{0}^{t} x(τ) dτ = 0

for all s such that Re{s} > a.

The integration property can be stated as follows: For any causal signal x(t), if

    y(t) = ∫_{0}^{t} x(τ) dτ

then

    Y(s) = (1/s)X(s)    (5.5.7)

To prove this result, we start with

    X(s) = ∫_{0⁻}^{∞} x(t) exp[-st] dt

Dividing both sides by s yields

    X(s)/s = ∫_{0⁻}^{∞} x(t) (exp[-st]/s) dt

Integrating the right-hand side by parts, we have

    X(s)/s = y(t) exp[-st]/s |_{0⁻}^{∞} + ∫_{0⁻}^{∞} y(t) exp[-st] dt

The first term on the right-hand side evaluates to zero at both limits (at the upper limit
by assumption and at the lower limit because y(0⁻) = 0), so that

    X(s)/s = Y(s)

Thus, integration in the time domain is equivalent to division by s in the s domain. Inte-
gration and differentiation in the time domain are two of the most commonly used
properties of the Laplace transform. They can be used to convert the integration and
differentiation operations into division or multiplication by s, respectively, which are
algebraic operations and, hence, much easier to perform.

5.5.7 Differentiation in the s Domain

Differentiating both sides of Equation (5.3.1) with respect to s, we have

    dX(s)/ds = ∫_{0⁻}^{∞} (-t)x(t) exp[-st] dt

Consequently,

    -t x(t) ↔ dX(s)/ds    (5.5.8)

Since differentiating X(s) does not add new poles (it may increase the order of some
existing poles), the ROC associated with -t x(t) is the same as the ROC associated
with x(t).
By repeated application of Equation (5.5.8), it follows that

    (-t)ⁿ x(t) ↔ dⁿX(s)/dsⁿ    (5.5.9)

Example 5.5.9
The Laplace transform of the unit ramp function x(t) = t u(t) can be obtained using Equa-
tion (5.5.8) as

    ℒ{t u(t)} = -(d/ds)ℒ{u(t)} = -(d/ds)(1/s) = 1/s²

Applying Equation (5.5.9), we have, in general,

    tⁿ u(t) ↔ n!/sⁿ⁺¹    (5.5.10)

5.5.8 Modulation

If

    x(t) ↔ X(s)

then for any real number ω₀,

    x(t) cos ω₀t ↔ (1/2)[X(s + jω₀) + X(s - jω₀)]    (5.5.11)

    x(t) sin ω₀t ↔ (1/2j)[X(s - jω₀) - X(s + jω₀)]    (5.5.12)

The proof follows from Euler's formula,

    exp[jω₀t] = cos ω₀t + j sin ω₀t

and the application of the shifting property in the s domain.

Example 5.5.10
The Laplace transform of (cos ω₀t)u(t) is obtained from the Laplace transform of u(t)
using the modulation property as follows:

    ℒ{(cos ω₀t)u(t)} = (1/2)[1/(s + jω₀) + 1/(s - jω₀)] = s/(s² + ω₀²)

Similarly, the Laplace transform of exp[-at] sin ω₀t u(t) is obtained from the Laplace
transform of exp[-at]u(t) and the modulation property as

    ℒ{exp[-at](sin ω₀t)u(t)} = (1/2j)[1/(s + a - jω₀) - 1/(s + a + jω₀)]

                             = ω₀/[(s + a)² + ω₀²]

5.5.9 Convolution

This property is one of the most widely used properties in the study and analysis of lin-
ear systems. Its use reduces the complexity of evaluating the convolution integral to
simple multiplication. The convolution property states that if

    x(t) ↔ X(s)
    h(t) ↔ H(s)

then

    x(t) * h(t) ↔ X(s)H(s)    (5.5.13)

where the convolution of x(t) and h(t) is

    x(t) * h(t) = ∫_{-∞}^{∞} x(τ)h(t - τ) dτ

Since both h(t) and x(t) are causal signals, the convolution in this case can be reduced to

    x(t) * h(t) = ∫_{0⁻}^{t} x(τ)h(t - τ) dτ

Taking the Laplace transform of both sides results in the transform pair

    x(t) * h(t) ↔ ∫_{0⁻}^{∞} [ ∫_{0⁻}^{t} x(τ)h(t - τ) dτ ] exp[-st] dt

Interchanging the order of the integrals, we have

    x(t) * h(t) ↔ ∫_{0⁻}^{∞} x(τ) [ ∫_{τ}^{∞} h(t - τ) exp[-st] dt ] dτ

Using the change of variables p = t - τ in the second integral and noting that h(p) = 0
for p < 0 yields

    x(t) * h(t) ↔ ∫_{0⁻}^{∞} x(τ) exp[-sτ] [ ∫_{0⁻}^{∞} h(p) exp[-sp] dp ] dτ

or

    x(t) * h(t) ↔ X(s)H(s)

The ROC associated with X(s)H(s) is the intersection of the ROCs of X(s) and
H(s). However, because of the multiplication process involved, a zero-pole cancella-
tion can occur that results in a larger ROC than the intersection of the ROCs of X(s)
and H(s). In general, the ROC of X(s)H(s) includes the intersection of the ROCs of
X(s) and H(s) and can be larger if zero-pole cancellation occurs in the process of mul-
tiplying the two transforms.

Example 5.5.11
The integration property can be proved using the convolution property, since

    ∫_{0}^{t} x(τ) dτ = x(t) * u(t)

Therefore, the transform of the integral of x(t) is the product of X(s) and the transform
of u(t), which is 1/s.

Example 5.5.12
Let x(t) be the rectangular pulse rect((t - a)/2a), centered at t = a and with width 2a. The
convolution of this pulse with itself can be obtained easily with the help of the convolu-
tion property.
From Example 5.5.2, the transform of x(t) is

    X(s) = (1 - exp[-2as])/s

The transform of the convolution is

    Y(s) = [X(s)]² = [(1 - exp[-2as])/s]²

         = 1/s² - 2 exp[-2as]/s² + exp[-4as]/s²

Taking the inverse Laplace transform of both sides and recognizing that 1/s² is the trans-
form of t u(t) yields

    y(t) = x(t) * x(t) = t u(t) - 2(t - 2a)u(t - 2a) + (t - 4a)u(t - 4a)

This signal is illustrated in Figure 5.5.3 and is a triangular pulse, as expected.

Figure 5.5.3  Convolution of two rectangular signals.
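The convolution property can also be checked numerically: sampling the pulse, convolving it with itself, and scaling by the sample spacing approximates the convolution integral and produces the expected triangle. A small numpy sketch (the pulse width and step size are arbitrary choices made here for the check) is:

import numpy as np

a = 1.0                                       # pulse half-width used for the check
dt = 1e-3
t = np.arange(0, 6*a, dt)
x = ((t >= 0) & (t < 2*a)).astype(float)      # rect pulse of width 2a starting at t = 0

y = np.convolve(x, x)[:t.size] * dt           # discrete approximation of x(t) * x(t)
print(y.max())                                # -> close to 2a, the triangle's peak value
print(abs(y[int(2*a/dt)] - 2*a) < 1e-2)       # the peak occurs near t = 2a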

In Equation (5.5.13), H(s) is called the transfer function of the system whose
impulse response is h(t). This function is the s-domain representation of the LTI sys-
tem and describes the "transfer" from the input in the s domain, X(s), to the output in
the s domain, Y(s), assuming no initial energy in the system at t = 0⁻. Dividing both
sides of Equation (5.5.13) by X(s), provided that X(s) ≠ 0, gives

    H(s) = Y(s)/X(s)    (5.5.14)

That is, the transfer function is equal to the ratio of the transform Y(s) of the output
to the transform X(s) of the input. Equation (5.5.14) allows us to determine the
impulse response of the system from a knowledge of the response y(t) to any nonzero
input x(t).

Example 5.5.13
Suppose that the input x(t) = exp[-2t]u(t) is applied to a relaxed (zero initial conditions)
LTI system. The output of the system is

    y(t) = (2/3)(exp[-t] + exp[-2t] - exp[-3t])u(t)

Then

    X(s) = 1/(s + 2)

and

    Y(s) = 2/[3(s + 1)] + 2/[3(s + 2)] - 2/[3(s + 3)]

Using Equation (5.5.14), we conclude that the transfer function H(s) of the system is

    H(s) = (2/3)(s + 2)[1/(s + 1) + 1/(s + 2) - 1/(s + 3)]

         = 2(s² + 6s + 7)/[3(s + 1)(s + 3)]

         = (2/3)[1 + 1/(s + 1) + 1/(s + 3)]

from which it follows that

    h(t) = (2/3)δ(t) + (2/3)[exp[-t] + exp[-3t]]u(t)

Example 5.5.14
Consider the LTI system described by the differential equation

    y'''(t) + 2y''(t) - y'(t) + 5y(t) = 3x'(t) + x(t)

Assuming that the system was initially relaxed, and taking the Laplace transform of both
sides, we obtain

    s³Y(s) + 2s²Y(s) - sY(s) + 5Y(s) = 3sX(s) + X(s)

Solving for H(s) = Y(s)/X(s), we have

    H(s) = (3s + 1)/(s³ + 2s² - s + 5)

5.5.10 Initial-Value Theorem

Let x(t) be infinitely differentiable on an interval around x(0⁺) (an infinitesimal
interval); then

    x(0⁺) = lim_{s→∞} sX(s)    (5.5.15)

Equation (5.5.15) implies that the behavior of x(t) for small t is determined by the
behavior of X(s) for large s. This is another aspect of the inverse relationship between
time- and frequency-domain variables. To establish this result, we expand x(t) in a
Maclaurin series (a Taylor series about t = 0⁺) to obtain

    x(t) = [x(0⁺) + x'(0⁺)t + ... + x⁽ⁿ⁾(0⁺) tⁿ/n! + ...]u(t)

where x⁽ⁿ⁾(0⁺) denotes the nth derivative of x(t) evaluated at t = 0⁺. Taking the
Laplace transform of both sides yields

    X(s) = x(0⁺)/s + x'(0⁺)/s² + ... + x⁽ⁿ⁾(0⁺)/sⁿ⁺¹ + ...

         = Σ_{n=0}^{∞} x⁽ⁿ⁾(0⁺)/sⁿ⁺¹

Multiplying both sides by s and taking the limit as s → ∞ proves the initial-value theo-
rem. As a generalization, multiplying by sⁿ⁺¹ and taking the limit as s → ∞ yields

    x⁽ⁿ⁾(0⁺) = lim_{s→∞} [sⁿ⁺¹X(s) - sⁿx(0⁺) - sⁿ⁻¹x'(0⁺) - ... - s x⁽ⁿ⁻¹⁾(0⁺)]    (5.5.16)

This more general form of the initial-value theorem is simplified if x⁽ⁿ⁾(0⁺) = 0 for
n < N. In that case,

    x⁽ᴺ⁾(0⁺) = lim_{s→∞} sᴺ⁺¹X(s)    (5.5.17)

This property is useful, since it allows us to compute the initial value of the signal x(t)
and its derivatives directly from the Laplace transform X(s) without having to find the
inverse x(t). Note that the right-hand side of Equation (5.5.15) can exist without the
existence of x(0⁺). Therefore, the initial-value theorem should be applied only when
x(0⁺) exists. Note also that the initial-value theorem produces x(0⁺), not x(0⁻).

Example 5.5.15
The initial value of the signal whose Laplace transform is given by

    X(s) = cs/[(s - a)(s - b)],    a ≠ b

is

    x(0⁺) = lim_{s→∞} cs²/[(s - a)(s - b)] = c

The result can be verified by determining x(t) first and then substituting t = 0⁺. For this
example, the inverse Laplace transform of X(s) is

    x(t) = (c/(a - b))[a exp[at] - b exp[bt]]u(t)

so that x(0⁺) = c. Note that x(0⁻) = 0.
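A quick symbolic check of the theorem on the transform used in this example (as reconstructed above, an assumption worth flagging) follows; the limit of sX(s) as s → ∞ returns the constant c.

import sympy as sp

s = sp.symbols('s')
a, b, c = sp.symbols('a b c', positive=True)

X = c*s/((s - a)*(s - b))           # transform of the reconstructed example signal
x0_plus = sp.limit(s*X, s, sp.oo)
print(x0_plus)                      # -> c, in agreement with the initial-value theorem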

5.5.11 Final-Value Theorem

The final-value theorem allows us to compute the limit of the signal x(t) as t → ∞ from its
Laplace transform as follows:

    lim_{t→∞} x(t) = lim_{s→0} sX(s)    (5.5.18)

The final-value theorem is useful in some applications, such as control theory, where
we may need to find the final value (steady-state value) of the output of the system
without solving for the time-domain function. Equation (5.5.18) can be proved using
the differentiation-in-time property. We have

    ∫_{0⁻}^{∞} x'(t) exp[-st] dt = sX(s) - x(0⁻)    (5.5.19)

Taking the limit as s → 0 of both sides of Equation (5.5.19) yields

    lim_{s→0} ∫_{0⁻}^{∞} x'(t) exp[-st] dt = lim_{s→0} [sX(s) - x(0⁻)]

or

    ∫_{0⁻}^{∞} x'(t) dt = lim_{s→0} [sX(s) - x(0⁻)]

Assuming that lim_{t→∞} x(t) exists, this becomes

    lim_{t→∞} x(t) - x(0⁻) = lim_{s→0} sX(s) - x(0⁻)

which, after simplification, results in Equation (5.5.18). One must be careful in using
the final-value theorem, since lim_{s→0} sX(s) can exist even though x(t) does not have a
limit as t → ∞. Hence, it is important to know that lim_{t→∞} x(t) exists before applying the
final-value theorem. For example, if

    X(s) = s/(s² + ω²)

then

    lim_{s→0} sX(s) = lim_{s→0} s²/(s² + ω²) = 0

But x(t) = cos ωt, which does not have a limit as t → ∞ (cos ωt oscillates between +1
and -1). Why do we have a discrepancy? To use the final-value theorem, we need the
point s = 0 to be in the ROC of sX(s). (Otherwise we cannot substitute s = 0 in
sX(s).) We have seen earlier that for rational-function Laplace transforms, the ROC
does not contain any poles. Therefore, to use the final-value theorem, all the poles of
sX(s) must be in the left-hand side of the s plane. In our example, sX(s) has two poles
on the imaginary axis.

Example 5.5.16
The input x(t) = A u(t) is applied to an automatic position-control system whose transfer
function is

    H(s) = c/[s(s + b) + c]

The final value of the output y(t) is obtained as

    lim_{t→∞} y(t) = lim_{s→0} sY(s) = lim_{s→0} sX(s)H(s)

                  = lim_{s→0} s (A/s) · c/[s(s + b) + c]

                  = A

assuming that the zeros of s² + bs + c are in the left half plane. Thus, after a sufficiently
long time, the output follows (tracks) the input x(t).
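The same computation can be scripted. The sketch below (sympy; the form of H(s) follows the reconstruction of this example) confirms that the final value of the step response is A.

import sympy as sp

s = sp.symbols('s')
A, b, c = sp.symbols('A b c', positive=True)

X = A/s                              # step input of height A
H = c/(s*(s + b) + c)                # plant transfer function assumed from the example
y_final = sp.limit(s*X*H, s, 0)
print(y_final)                       # -> A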

Example 5.5.17
Suppose we are interested in the value of the integral

    ∫_{0}^{∞} tⁿ exp[-at] dt

Consider the integral

    y(t) = ∫_{0}^{t} τⁿ exp[-aτ] dτ = ∫_{0}^{t} x(τ) dτ

Note that the final value of y(t) is the quantity of interest; that is,
    lim_{t→∞} y(t) = lim_{s→0} sY(s) = lim_{s→0} X(s)

From Table 5.1,

    X(s) = n!/(s + a)ⁿ⁺¹

Therefore,

    ∫_{0}^{∞} tⁿ exp[-at] dt = n!/aⁿ⁺¹

Table 5.2 summarizes the properties of the Laplace transform. These properties,
along with the transform pairs in Table 5.1, can be used to derive other transform pairs.

TABLE 5.2
Some Selected Properties of the Laplace Transform

  1.  Linearity              Σᵢ aᵢxᵢ(t)             Σᵢ aᵢXᵢ(s)                            (5.5.1)
  2.  Time shift             x(t - t₀)u(t - t₀)     X(s) exp(-st₀)                        (5.5.2)
  3.  Frequency shift        exp(s₀t)x(t)           X(s - s₀)                             (5.5.3)
  4.  Time scaling           x(at), a > 0           (1/a)X(s/a)                           (5.5.4)
  5.  Differentiation        dx(t)/dt               sX(s) - x(0⁻)                         (5.5.5)
  6.  Integration            ∫₀ᵗ x(τ) dτ            (1/s)X(s)                             (5.5.7)
  7.  Multiplication by t    t x(t)                 -dX(s)/ds                             (5.5.8)
  8.  Modulation             x(t) cos ω₀t           (1/2)[X(s + jω₀) + X(s - jω₀)]        (5.5.11)
                             x(t) sin ω₀t           (1/2j)[X(s - jω₀) - X(s + jω₀)]       (5.5.12)
  9.  Convolution            x(t) * h(t)            X(s)H(s)                              (5.5.13)
 10.  Initial value          x(0⁺)                  lim_{s→∞} sX(s)                       (5.5.15)
 11.  Final value            lim_{t→∞} x(t)         lim_{s→0} sX(s)                       (5.5.18)

5.6 THE INVERSE LAPLACE TRANSFORM

We saw in Section 5.2 that with s = σ + jω such that Re{s} is inside the ROC, the
Laplace transform of x(t) can be interpreted as the Fourier transform of the exponen-
tially weighted signal x(t) exp[-σt]; that is,

    X(σ + jω) = ∫_{-∞}^{∞} x(t) exp[-σt] exp[-jωt] dt

Using the inverse Fourier-transform relationship given in Equation (4.2.5), we can find
x(t) exp[-σt] as

    x(t) exp[-σt] = (1/2π) ∫_{-∞}^{∞} X(σ + jω) exp[jωt] dω

Multiplying by exp[σt], we obtain

    x(t) = (1/2π) ∫_{-∞}^{∞} X(σ + jω) exp[(σ + jω)t] dω

Using the change of variables s = σ + jω, we get the inverse Laplace-transform equation

    x(t) = (1/2πj) ∫_{σ-j∞}^{σ+j∞} X(s) exp[st] ds    (5.6.1)

The integral in Equation (5.6.1) is evaluated along the straight line σ + jω in the com-
plex plane from σ - j∞ to σ + j∞, where σ is any fixed real number for which Re{s}
= σ is a point in the ROC of X(s). Thus, the integral is evaluated along a straight line
that is parallel to the imaginary axis and at a distance σ from it.
Evaluation of the integral in Equation (5.6.1) requires the use of contour integration
in the complex plane, which is not only difficult, but also outside the scope of this
text; hence, we will avoid using Equation (5.6.1) to compute the inverse Laplace trans-
form. In many cases of interest, the Laplace transform can be expressed in the form

    X(s) = N(s)/D(s)    (5.6.2)

where N(s) and D(s) are polynomials in s given by

    N(s) = b_m s^m + b_{m-1} s^{m-1} + ... + b₁s + b₀
    D(s) = a_n s^n + a_{n-1} s^{n-1} + ... + a₁s + a₀,    a_n ≠ 0

The function X(s) given by Equation (5.6.2) is said to be a rational function of s, since
it is a ratio of two polynomials. We assume that m < n; that is, the degree of N(s) is
strictly less than the degree of D(s). In this case, the rational function is proper in s.
If m ≥ n, i.e., when the rational function is improper, we can use long division to
reduce it to a proper rational function. For proper rational transforms, the inverse
Laplace transform can be determined by utilizing partial-fraction expansion tech-
niques. Actually, this is what we did in some simple cases, ad hoc and without diffi-
culty. Appendix D is devoted to the subject of partial fractions. We recommend that
the reader not familiar with partial fractions review that appendix before studying the
following examples.

Example 5.6.1
To find the inverse Laplace transform of

    X(s) = (2s + 1)/(s³ + 3s² - 4s)

we factor the polynomial D(s) = s³ + 3s² - 4s = s(s + 4)(s - 1) and use the partial-fraction form

    X(s) = A₁/s + A₂/(s + 4) + A₃/(s - 1)

Using Equation (D.2), we find that the coefficients Aᵢ, i = 1, 2, 3, are

    A₁ = -1/4,    A₂ = -7/20,    A₃ = 3/5

and the inverse Laplace transform is

    x(t) = -(1/4)u(t) - (7/20)exp[-4t]u(t) + (3/5)exp[t]u(t)

Example 5.6.2
In this example, we consider the case where we have repeated factors. Suppose the
Laplace transform is given by

    X(s) = (2s² - 3s)/(s³ - 4s² + 5s - 2)

The denominator D(s) = s³ - 4s² + 5s - 2 can be factored as

    D(s) = (s - 2)(s - 1)²

Since we have a repeated factor of order 2, the corresponding partial-fraction form is

    X(s) = B/(s - 2) + A₂/(s - 1)² + A₁/(s - 1)

The coefficient B can be found using Equation (D.2); we obtain

    B = 2

The coefficients Aᵢ, i = 1, 2, are found using Equations (D.3) and (D.4); we get

    A₂ = 1

and

    A₁ = d/ds [(2s² - 3s)/(s - 2)] |_{s=1} = [(s - 2)(4s - 3) - (2s² - 3s)]/(s - 2)² |_{s=1} = 0

so that

    X(s) = 2/(s - 2) + 1/(s - 1)²

The inverse Laplace transform is therefore

    x(t) = 2 exp[2t]u(t) + t exp[t]u(t)
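Partial-fraction expansion and the resulting inverse transform can be checked with a computer algebra system. The sketch below (sympy) applies apart and inverse_laplace_transform to the transform of Example 5.6.2.

import sympy as sp

s, t = sp.symbols('s t', positive=True)

X = (2*s**2 - 3*s)/(s**3 - 4*s**2 + 5*s - 2)
print(sp.apart(X, s))                          # -> 2/(s - 2) + 1/(s - 1)**2
print(sp.inverse_laplace_transform(X, s, t))   # -> 2*exp(2*t) + t*exp(t)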

Example 5.6.3
In this example, we treat the case of complex-conjugate poles (irreducible second-degree
factors). Let

    X(s) = (s + 3)/(s² + 4s + 13)

Since we cannot factor the denominator, we complete the square as follows:

    D(s) = (s + 2)² + 3²

Then

    X(s) = (s + 2)/[(s + 2)² + 3²] + (1/3) · 3/[(s + 2)² + 3²]

By using the shifting property of the transform, or alternatively, by using entries 12 and
13 in Table 5.1, we find the inverse Laplace transform to be

    x(t) = exp[-2t](cos 3t)u(t) + (1/3) exp[-2t](sin 3t)u(t)

Example 5.6.4
As an example of repeated complex-conjugate poles, consider the rational function

    X(s) = (5s³ - 3s² + 7s - 3)/(s² + 1)²

Writing X(s) in partial-fraction form, we have

    X(s) = (A₁s + B₁)/(s² + 1) + (A₂s + B₂)/(s² + 1)²

and therefore,

    5s³ - 3s² + 7s - 3 = (A₁s + B₁)(s² + 1) + A₂s + B₂

Comparing the coefficients of the different powers of s, we obtain

    A₁ = 5,  B₁ = -3,  A₂ = 2,  B₂ = 0

Thus,

    X(s) = 5s/(s² + 1) - 3/(s² + 1) + 2s/(s² + 1)²

and the inverse Laplace transform can be determined from Table 5.1 to be

    x(t) = (5 cos t - 3 sin t + t sin t)u(t)



5.7 SIMULATION DIAGRAMS FOR CONTINUOUS-TIME SYSTEMS

In Section 2.5.3, we introduced two canonical forms to simulate (realize) LTI systems
and showed that, since simulation is basically a synthesis problem, there are several
ways to simulate LTI systems, but all are equivalent. Now consider the Nth-order sys-
tem described by the differential equation

    ( dᴺ/dtᴺ + Σ_{i=0}^{N-1} aᵢ dⁱ/dtⁱ ) y(t) = ( Σ_{i=0}^{M} bᵢ dⁱ/dtⁱ ) x(t)    (5.7.1)

Assuming that the system is initially relaxed, and taking the Laplace transform of both
sides, we obtain

    ( sᴺ + Σ_{i=0}^{N-1} aᵢsⁱ ) Y(s) = ( Σ_{i=0}^{M} bᵢsⁱ ) X(s)    (5.7.2)

Solving for Y(s)/X(s), we get the transfer function of the system:

    H(s) = ( Σ_{i=0}^{M} bᵢsⁱ ) / ( sᴺ + Σ_{i=0}^{N-1} aᵢsⁱ )    (5.7.3)

Assuming that N = M, we can express Equation (5.7.2) as

    sᴺ[Y(s) - b_N X(s)] + s^{N-1}[a_{N-1}Y(s) - b_{N-1}X(s)] + ... + a₀Y(s) - b₀X(s) = 0

Dividing through by sᴺ and solving for Y(s) yields

    Y(s) = b_N X(s) + (1/s)[b_{N-1}X(s) - a_{N-1}Y(s)] + ...

           + (1/s^{N-1})[b₁X(s) - a₁Y(s)] + (1/sᴺ)[b₀X(s) - a₀Y(s)]    (5.7.4)

Thus, Y(s) can be generated by adding all the components on the right-hand side of
Equation (5.7.4). Figure 5.7.1 demonstrates how H(s) is simulated using this technique.
Notice that the figure is similar to Figure 2.5.4, except that each integrator is replaced
by its transfer function 1/s.

Figure 5.7.1  Simulation diagram (first canonical form) for Equation (5.7.4).

The transfer function in Equation (5.7.3) can also be realized in the second canoni-
cal form if we express Equation (5.7.2) as

    Y(s) = [ ( Σ_{i=0}^{M} bᵢsⁱ ) / ( sᴺ + Σ_{i=0}^{N-1} aᵢsⁱ ) ] X(s)

         = ( Σ_{i=0}^{M} bᵢsⁱ ) V(s)    (5.7.5)

where
I
v(s) = x(s) (s.7.6)
sil+ ) o,si

or

("* !'r,r') r'1ry = X(s) (s.7.7)

Therefore. we can generate Y(s) in two steps: First. we generate V(s) from Equation
(5.7.7) and then use Equation (5.7.5) to generate Y(.s) from V(s). The result is shown
in Figure 5.7.2. Again, this figure is similar to Figure 2.5.5, except that each integrator
is replaced by its transfer function l/s.

Example 5.7.1
The two canonical realization forms for the system with the transfer function

    H(s) = (s² - 3s + 2)/(s³ + 6s² + 11s + 6)

are shown in Figures 5.7.3 and 5.7.4.

Figure 5.7.3  Simulation diagram using the first canonical form for Example 5.7.1.

Figure 5.7.4  Simulation diagram using the second canonical form for Example 5.7.1.
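Canonical realizations can also be generated numerically. The sketch below uses scipy.signal.tf2ss to obtain a state-space realization of the transfer function of Example 5.7.1; scipy returns a controller (companion-form) realization, which is equivalent to, though not necessarily drawn the same way as, the diagrams in Figures 5.7.3 and 5.7.4.

import numpy as np
from scipy import signal

num = [1, -3, 2]          # s**2 - 3*s + 2
den = [1, 6, 11, 6]       # s**3 + 6*s**2 + 11*s + 6
A, B, C, D = signal.tf2ss(num, den)
print(A)                  # companion-form matrix built from the denominator coefficients
print(C, D)

# Round trip back to a transfer function as a sanity check.
num2, den2 = signal.ss2tf(A, B, C, D)
print(np.allclose(den2, den))    # -> True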

As we saw earlier, the Laplace transform is a useful tool for computing the system
transfer function if the system is described by its differential equation or if the output
is expressed explicitly in terms of the input. The situation changes considerably in cases
where a large number of components or elements are interconnected to form the com-
plete system. In such cases, it is convenient to represent the system by suitably inter-
connected subsystems, each of which can be separately and easily analyzed. Three of
the most common such subsystems involve series (cascade), parallel, and feedback
interconnections.
In the case of cascade interconnections, as shown in Figure 5.7.5,

    Y₁(s) = H₁(s)X(s)

and

    Y₂(s) = H₂(s)Y₁(s) = [H₁(s)H₂(s)]X(s)

which shows that the combined transfer function is given by

    H(s) = H₁(s)H₂(s)    (5.7.8)

We note that Equation (5.7.8) is valid only if there is no initial energy in either sys-
tem. It is also implied that connecting the second system to the first does not affect the
output of the latter. In short, the transfer function of the first subsystem, H₁(s), is com-
puted under the assumption that the second subsystem with transfer function H₂(s) is
not connected. In other words, the input/output relationship of the first subsystem
remains unchanged, regardless of whether H₂(s) is connected to it. If this assumption
is not satisfied, H₁(s) must be computed under loading conditions, i.e., when H₂(s) is
connected to it.
If there are N systems connected in cascade, then their overall transfer function is

    H(s) = H₁(s)H₂(s) ... H_N(s)    (5.7.9)

Figure 5.7.5  Cascade interconnection of two subsystems.

Figure 5.7.6  Parallel interconnection of two subsystems.

Using the convolution property, the impulse response of the overall system is

    h(t) = h₁(t) * h₂(t) * ... * h_N(t)    (5.7.10)

If two subsystems are connected in parallel, as shown in Figure 5.7.6, and each sub-
system has no initial energy, then the output is

    Y(s) = Y₁(s) + Y₂(s)

         = H₁(s)X(s) + H₂(s)X(s)

         = [H₁(s) + H₂(s)]X(s)

and the overall transfer function is

    H(s) = H₁(s) + H₂(s)    (5.7.11)

For N subsystems connected in parallel, the overall transfer function is

    H(s) = H₁(s) + H₂(s) + ... + H_N(s)    (5.7.12)

From the linearity of the Laplace transform, the impulse response of the overall system is

    h(t) = h₁(t) + h₂(t) + ... + h_N(t)    (5.7.13)

These two results are consistent with those obtained in Chapter 2 for the same
interconnections.

Example 5.7.2
The transfer function of the system described in Example 5.7.1 can also be written as

    H(s) = [(s - 1)/(s + 1)] · [(s - 2)/(s + 2)] · [1/(s + 3)]

This system can be realized as a cascade of three subsystems, as shown in Figure 5.7.7.
Each subsystem is composed of a pole-zero combination. The same system can be realized
in parallel, too. This can be done by expanding H(s) using the method of partial fractions
as follows:

    H(s) = 3/(s + 1) - 12/(s + 2) + 10/(s + 3)

A parallel interconnection is shown in Figure 5.7.8.

Figure 5.7.7  Cascade-form simulation for Example 5.7.2.

Figure 5.7.8  Parallel-form simulation for Example 5.7.2.

The connection in Figure 5.7.9 is called a positive feedback system. The output of
the first system H₁(s) is fed back to the summer through the system H₂(s); hence the
name "feedback connection." Note that if the feedback loop is disconnected, the trans-
fer function from X(s) to Y(s) is H₁(s), and hence H₁(s) is called the open-loop trans-
fer function. The system with transfer function H₂(s) is called a feedback system. The
whole system is called a closed-loop system.

Figure 5.7.9  Feedback connection.

We assume that each system has no initial energy and that the feedback system does
not load the open-loop system. Let e(t) be the input signal to the system with transfer
function H₁(s). Then

    Y(s) = E(s)H₁(s)

    E(s) = X(s) + H₂(s)Y(s)

so that

    Y(s) = H₁(s)[X(s) + H₂(s)Y(s)]

Solving for the ratio Y(s)/X(s) yields the transfer function of the closed-loop system:

    H(s) = H₁(s)/[1 - H₁(s)H₂(s)]    (5.7.14)

Thus, the closed-loop transfer function is equal to the open-loop transfer function
divided by 1 minus the product of the transfer functions of the open-loop and feedback
systems. If the adder in Figure 5.7.9 is changed to a subtractor, the system is called a
negative feedback system, and the closed-loop transfer function changes to

    H(s) = H₁(s)/[1 + H₁(s)H₂(s)]    (5.7.15)
5.8 APPLICATIONS OF THE LAPLACE TRANSFORM

The Laplace transform can be applied to a number of problems in system analysis and
design. These applications depend on the properties of the Laplace transform, espe-
cially those associated with differentiation, integration, and convolution.
In this section, we discuss three applications, beginning with the solution of differ-
ential equations.

5.8.1 Solution of Differential Equations

One of the most common uses of the Laplace transform is to solve linear, constant-
coefficient differential equations. As we saw in Section 2.5, such equations are used to
model continuous-time LTI systems. Solving these equations depends on the differen-
tiation property of the Laplace transform. The procedure is straightforward and sys-
tematic, and we summarize it in the following steps:

1. For a given set of initial conditions, take the Laplace transform of both sides of the
   differential equation to obtain an algebraic equation in Y(s).
2. Solve the algebraic equation for Y(s).
3. Take the inverse Laplace transform to obtain y(t).

Example 5.8.1
Consider the second-order, linear, constant-coefficient differential equation

    y''(t) + 5y'(t) + 6y(t) = exp[-t]u(t),    y'(0⁻) = 1 and y(0⁻) = 2

Taking the Laplace transform of both sides results in

    [s²Y(s) - 2s - 1] + 5[sY(s) - 2] + 6Y(s) = 1/(s + 1)

Solving for Y(s) yields

    Y(s) = (2s² + 13s + 12)/[(s + 1)(s² + 5s + 6)]

         = 1/[2(s + 1)] + 6/(s + 2) - 9/[2(s + 3)]

Taking the inverse Laplace transform, we obtain

    y(t) = ((1/2)exp[-t] + 6 exp[-2t] - (9/2)exp[-3t])u(t)

Higher-order differential equations can be solved using the same procedure.

Higher order differential equations can be solved using the same procedure.

5.8.2 Application to RLC Circuit Analysis

In the analysis of circuits, the Laplace transform can be carried one step further by
transforming the circuit itself rather than the differential equation. The s-domain cur-
rent-voltage equivalent relations for arbitrary R, L, and C are as follows:

Resistors. The s-domain current-voltage characterization of a resistor with
resistance R is obtained by taking the Laplace transform of the current-voltage rela-
tionship in the time domain, R i_R(t) = v_R(t). This yields

    V_R(s) = R I_R(s)    (5.8.1)

Inductors. For an inductor with inductance L and time-domain current-volt-
age relationship L di_L(t)/dt = v_L(t), the s-domain characterization is

    V_L(s) = sL I_L(s) - L i_L(0⁻)    (5.8.2)

That is, an energized inductor (an inductor with nonzero initial conditions) at t = 0⁻
is equivalent to an unenergized inductor at t = 0⁻ in series with an impulsive voltage
source with strength L i_L(0⁻). This impulsive source is called an initial-condition gen-
erator. Alternatively, Equation (5.8.2) can be written as

    I_L(s) = V_L(s)/(sL) + i_L(0⁻)/s    (5.8.3)

That is, an energized inductor at t = 0⁻ is equivalent to an unenergized inductor at
t = 0⁻ in parallel with a step-function current source. The height of the step function
is i_L(0⁻).

Capacitors. For a capacitor with capacitance C and time-domain current-volt-
age relationship C dv_C(t)/dt = i_C(t), the s-domain characterization is

    I_C(s) = sC V_C(s) - C v_C(0⁻)    (5.8.4)

That is, a charged capacitor (a capacitor with nonzero initial conditions) at t = 0⁻ is
equivalent to an uncharged capacitor at t = 0⁻ in parallel with an impulsive current
source. The strength of the impulsive source is C v_C(0⁻), and the source itself is called
an initial-condition generator. Equation (5.8.4) can also be written as

    V_C(s) = I_C(s)/(sC) + v_C(0⁻)/s    (5.8.5)

Thus, a charged capacitor can be replaced by an uncharged capacitor in series with a
step-function voltage source. The height of the step function is v_C(0⁻).
We can similarly write Kirchhoff's laws in the s domain. The equivalent statement
of the current law is that at any node of an equivalent circuit, the algebraic sum of the
currents in the s domain is zero; i.e.,

    Σ_k I_k(s) = 0    (5.8.6)

The voltage law states that around any loop in an equivalent circuit, the algebraic sum
of the voltages in the s domain is zero; i.e.,

    Σ_k V_k(s) = 0    (5.8.7)

Caution must be exercised when assigning the polarity of the initial-condition generators.

Example 5.8.2
Consider the circuit shown in Figure 5.8.1(a), with i_L(0⁻) = -2, v_C(0⁻) = 2, and x(t) =
u(t). The equivalent s-domain circuit, including the initial-condition generators, is shown
in Figure 5.8.1(b).

Figure 5.8.1  Circuit for Example 5.8.2 (R₁ = 2 Ω, C = 1 F, R₂ = 1 Ω; X(s) = 1/s).

Writing the node equation at node 1 in the s domain and solving for Y(s), we obtain

    Y(s) = (2s² + 6s + 1)/[s(s² + 3s + 3)]

         = 1/(3s) + (5s/3 + 5)/[(s + 3/2)² + (√3/2)²]

         = 1/(3s) + (5/3)(s + 3/2)/[(s + 3/2)² + (√3/2)²] + (5/√3)(√3/2)/[(s + 3/2)² + (√3/2)²]

Taking the inverse Laplace transform of both sides results in

    y(t) = (1/3)u(t) + (5/3)exp[-3t/2](cos(√3 t/2))u(t) + (5/√3)exp[-3t/2](sin(√3 t/2))u(t)

The analysis of any circuit can be carried out using this procedure.

5.8.3 Application to Control

One of the major applications of the Laplace transform is in the study of control sys-
tems. Many important and practical problems can be formulated as control problems.
Examples can be found in many areas, such as communications systems, radar systems,
and speed control.
Consider the control system shown in Figure 5.8.2. The system is composed of two
subsystems. The first subsystem is called the plant and has a known transfer function
H(s). The second subsystem is called the controller, with transfer function H_c(s), and is
designed to achieve a certain system performance. The input to the system is the refer-
ence signal r(t). The signal w(t) is introduced to model any disturbance (noise) in the
system. The difference between the reference and the output signals is an error signal

    e(t) = r(t) - y(t)

The error signal is applied to the controller, whose function is to force e(t) to zero as
t → ∞; that is,

    lim_{t→∞} e(t) = 0

This condition implies that the system output follows the reference signal r(t). This
type of system performance is called tracking in the presence of the disturbance w(t).

Figure 5.8.2  Block diagram of a control system.


The following example demonstrates how to design the controller to achieve the track-
ing effect.

Example 5.8.3
Suppose that the LTI system we have to control has the transfer function

    H(s) = N(s)/D(s)    (5.8.8)

Let the input be r(t) = A u(t) and the disturbance be w(t) = B u(t), where A and B are
constants. Because of linearity, we can divide the problem into two simpler problems, one
with input r(t) and the other with input w(t). That is, the output y(t) is expressed as the
sum of two components. The first component is due to r(t) when w(t) = 0 and is labeled
y₁(t). It can be easily verified that

    Y₁(s) = [H_c(s)H(s)/(1 + H_c(s)H(s))] R(s)

where R(s) is the Laplace transform of r(t). The second component is due to w(t) when
r(t) = 0 and has the Laplace transform

    Y₂(s) = [H(s)/(1 + H_c(s)H(s))] W(s)

where W(s) is the Laplace transform of the disturbance w(t). The complete output has
the Laplace transform

    Y(s) = Y₁(s) + Y₂(s)

         = [H_c(s)H(s)/(1 + H_c(s)H(s))] R(s) + [H(s)/(1 + H_c(s)H(s))] W(s)

         = H(s)[H_c(s)A + B] / (s[1 + H_c(s)H(s)])    (5.8.9)

We have to design H_c(s) such that y(t) tracks r(t); that is,

    lim_{t→∞} y(t) = A

Let H_c(s) = N_c(s)/D_c(s). Then we can write

    Y(s) = N(s)[N_c(s)A + D_c(s)B] / (s[D(s)D_c(s) + N(s)N_c(s)])

Let us assume that the real parts of all the zeros of D(s)D_c(s) + N(s)N_c(s) are strictly
negative. Then, by using the final-value theorem, it follows that

    lim_{t→∞} y(t) = lim_{s→0} sY(s)

                  = lim_{s→0} N(s)[N_c(s)A + D_c(s)B] / [D(s)D_c(s) + N(s)N_c(s)]    (5.8.10)

For this to be equal to A, one needs that lim_{s→0} D_c(s) = 0, or D_c(s) has a zero at s = 0. Sub-
stituting in the expression for Y(s), we obtain

    lim_{t→∞} y(t) = N(0)N_c(0)A / [N(0)N_c(0)] = A

Example 5.8.4
Consider the control system shown in Figure 5.8.3. This system represents an automatic
position-control system that can be used in a tracking antenna or in an antiaircraft gun
mount. The input x(t) is the desired angular position of the object to be tracked, and the
output is the position of the antenna.

Figure 5.8.3  Block diagram of a tracking antenna.

The first subsystem is an amplifier with transfer function H₁(s) = 8, and the second sub-
system is a motor with transfer function H₂(s) = 1/[s(s + α)], where 0 < α < √32. Let us
investigate the step response of the system as the parameter α changes. The output Y(s) is

    Y(s) = [H₁(s)H₂(s)/(1 + H₁(s)H₂(s))] X(s)

         = 8/[s(s² + αs + 8)]

         = 1/s - (s + α)/(s² + αs + 8)

The restriction 0 < α < √32 is chosen to ensure that the roots of the polynomial s² + αs
+ 8 are complex numbers and lie in the left half plane. The reason for this will become
clear in Section 5.10.
The step response of this system is obtained by taking the inverse Laplace transform
of Y(s) to yield

    y(t) = ( 1 - exp[-αt/2]{ cos[(√(32 - α²)/2)t] + (α/√(32 - α²)) sin[(√(32 - α²)/2)t] } )u(t)

The step response y(t) for two values of α, namely, α = 2 and α = 3, is shown in Figure
5.8.4. Note that the response is oscillatory with overshoots of 30% and 14%, respectively.
The time required for the response to rise from 10% to 90% of its final value is called the
rise time. The first system has a rise time of 0.48 s, and the second system has a rise time
of 0.60 s. Systems with longer rise times are inferior (sluggish) to those with shorter rise
times. Reducing the rise time increases the overshoot, however, and high overshoots may
not be acceptable in some applications.

Figure 5.8.4  Step response of an antenna tracking system.
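The overshoot and rise-time figures quoted above can be reproduced numerically. The sketch below (scipy/numpy) computes the step response of the closed-loop transfer function 8/(s² + αs + 8) for α = 2 and α = 3 and estimates both quantities; the printed values should come out close to 30% and 14% overshoot and roughly 0.48 s and 0.60 s rise time.

import numpy as np
from scipy import signal

for alpha in (2.0, 3.0):
    sys = signal.TransferFunction([8.0], [1.0, alpha, 8.0])
    t, y = signal.step(sys, T=np.linspace(0, 10, 5001))

    overshoot = (y.max() - 1.0) * 100.0          # final value of the step response is 1
    t10 = t[np.argmax(y >= 0.1)]                 # first crossing of 10% of the final value
    t90 = t[np.argmax(y >= 0.9)]                 # first crossing of 90% of the final value
    print(f"alpha={alpha}: overshoot ~ {overshoot:.0f}%, rise time ~ {t90 - t10:.2f} s")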

5.9 STATE EQUATIONS AND THE LAPLACE TRANSFORM

We saw that the Laplace transform is an efficient and convenient tool for solving dif-
ferential equations. In Chapter 2, we introduced the concept of state variables and
demonstrated that any LTI system can be described by a set of first-order differential
equations called state equations.
Using the time-domain differentiation property of the Laplace transform, we can
reduce this set of differential equations to a set of algebraic equations. Consider the
LTI system described by

    v'(t) = Av(t) + bx(t)    (5.9.1)

    y(t) = cv(t) + dx(t)    (5.9.2)

Taking the Laplace transform of Equation (5.9.1), we obtain

    sV(s) - v(0⁻) = AV(s) + bX(s)

which can be written as

    (sI - A)V(s) = v(0⁻) + bX(s)

where I is the unit matrix. Left multiplying both sides by the inverse of (sI - A), we obtain

    V(s) = (sI - A)⁻¹ v(0⁻) + (sI - A)⁻¹ bX(s)    (5.9.3)

The Laplace transform of the output equation is

    Y(s) = cV(s) + dX(s)

Substituting for V(s) from Equation (5.9.3), we obtain

    Y(s) = c(sI - A)⁻¹ v(0⁻) + [c(sI - A)⁻¹b + d]X(s)    (5.9.4)

The first term is the transform of the output when the input is set to zero and is iden-
tified as the Laplace transform of the zero-input component of y(t). The second term
is the Laplace transform of the output when the initial state vector is zero and is iden-
tified as the Laplace transform of the zero-state component of y(t). In Chapter 2, we
saw that the solution to Equation (5.9.1) is given by

    v(t) = exp[At] v(0⁻) + ∫_{0}^{t} exp[A(t - τ)]bx(τ) dτ    (5.9.5)

(See Equation (2.6.13) with t₀ = 0⁻.) The integral on the right side of Equation (5.9.5)
represents the convolution of the signals exp[At] and bx(t). Thus, the Laplace trans-
formation of Equation (5.9.5) yields

    V(s) = ℒ{exp[At]} v(0⁻) + ℒ{exp[At]} bX(s)    (5.9.6)

A comparison of Equations (5.9.3) and (5.9.6) shows that

    ℒ{exp[At]} = (sI - A)⁻¹ = Φ(s)    (5.9.7)

where Φ(s) represents the Laplace transform of the state-transition matrix exp[At].
Φ(s) is usually referred to as the resolvent matrix.
Equation (5.9.7) gives us a convenient alternative method for determining exp[At]:
We first form the matrix sI - A and then take the inverse Laplace transform of
(sI - A)⁻¹.
With zero initial conditions, Equation (5.9.4) becomes

    Y(s) = [c(sI - A)⁻¹b + d]X(s)    (5.9.8)

and hence, the transfer function of the system can be written as

    H(s) = c[sI - A]⁻¹b + d = cΦ(s)b + d    (5.9.9)

Example 5.9.1
Consider the system described by

    v'(t) = [ -3   4 ] v(t) + [ 1 ] x(t)
            [ -2   3 ]        [ 3 ]

    y(t) = [-1  -1] v(t) + 2x(t)

with

    v(0⁻) = [ -1 ]
            [  3 ]

The resolvent matrix of this system is

    Φ(s) = [ s + 3    -4   ]⁻¹
           [   2     s - 3 ]

Using Appendix C, we obtain

    Φ(s) = (1/[(s + 3)(s - 3) + 8]) [ s - 3     4    ]
                                    [  -2     s + 3  ]

         = [ (s - 3)/[(s + 1)(s - 1)]       4/[(s + 1)(s - 1)]     ]
           [  -2/[(s + 1)(s - 1)]        (s + 3)/[(s + 1)(s - 1)]  ]

The transfer function is obtained using Equation (5.9.9):

    H(s) = [-1  -1] Φ(s) [ 1 ] + 2
                         [ 3 ]

         = (2s² - 4s - 18)/[(s + 1)(s - 1)]

Taking the inverse Laplace transform, we obtain

    h(t) = 2[δ(t) + 3 exp[-t]u(t) - 5 exp[t]u(t)]

The zero-input response of the system is

    Y₁(s) = c(sI - A)⁻¹ v(0⁻) = -2(s + 13)/[(s + 1)(s - 1)]

and the zero-state response is

    Y₂(s) = c(sI - A)⁻¹ bX(s) + 2X(s) = [(2s² - 4s - 18)/((s + 1)(s - 1))] X(s)

The overall response is

    Y(s) = -2(s + 13)/[(s + 1)(s - 1)] + [(2s² - 4s - 18)/((s + 1)(s - 1))] X(s)

The step response of this system is obtained by substituting X(s) = 1/s, so that

    Y(s) = -2(s + 13)/[(s + 1)(s - 1)] + (2s² - 4s - 18)/[s(s + 1)(s - 1)]

         = (-30s - 18)/[s(s + 1)(s - 1)]

         = 18/s + 6/(s + 1) - 24/(s - 1)

Taking the inverse Laplace transform of both sides yields

    y(t) = [18 + 6 exp[-t] - 24 exp[t]]u(t)

Example 5.9.2
Let us find the state-transition matrix of the system in Example 5.9.1. The resolvent matrix is

    Φ(s) = [ (s - 3)/[(s + 1)(s - 1)]       4/[(s + 1)(s - 1)]     ]
           [  -2/[(s + 1)(s - 1)]        (s + 3)/[(s + 1)(s - 1)]  ]

The various elements of φ(t) are obtained by taking the inverse Laplace transform of each
entry in the matrix Φ(s). Doing so, we obtain

    φ(t) = [ 2 exp[-t] - exp[t]      -2 exp[-t] + 2 exp[t] ]
           [  exp[-t] - exp[t]        -exp[-t] + 2 exp[t]  ] u(t)
.10 STABILITY lN THE s DOMAIN


stability is an importut issue in system design. In chapter 2, we showed that the sta-
bility ofa system can be examined either through the impulse response ofthe system
or through the eigenvalues of the state-transition matrix. Specifically, we demonstrated
that for a stable system, the output, as well as all internal variables, should remain
bounded for any bounded input. A system that satisfies this condition is called a
bounded-input, bounded-output (BIBO) stable system.
Stability can also be examined in the s domain through the transfer function H(s).
The transfer function for any LTI system of the type we have been discussing always
has the form of a ratio of two polynomials in s, Since any polynomial can be factored
in terms of its roots, the rational transfer function can always be written in the follow-
ing form (assuming that the degree of N(s) is less than the degree of D(s)):

H(s)' = -4-'-'* A' *... + A- (s.l0.r)


s-sl s-J2 J -s^r
The zeros s* of the denominator are the poles of H(s ) and, in general. may be complex
numbers. If the coefficients of the goveming differential equation are real. then the
complex roots oocur in conjugate pairs. If all the poles are distinct. then they are sim-
Sec. 5.10 Stabiliry in the s Domain 267

ple poles. If one of the poles corresponds to a repeated facror of the fonir (s - sr)',
then it is a multiple-order pole with order rn. The impulse response of the system, i(t),
is obtained by taking the inverse Laplace transform of Equation (5.10.1). From entry
6 in Table 5.1. the &th pole contributes the term ho$) = Ao exp [.r*r] to i (t). Thus, the
behavior of the system depends on the location of the pole in the s plane. A pole can
be in the left half of the s plane, on the imaginary axis, or in the right half of the s plane.
Also, it may be a simple or multiple-order pole. The following is a discussion of the
effects of the location and order of the pole on the stability of [,TI systems,
L. Simple Poles in the Left Half Plane. In this case, the pole has the form
s*=ooljro*. oo(0
and the impulse-response component of the system, ho(t), corresponding to this pole is
hoQ) = Aoexp[(oo + jtro)r] + Af exp[(oo - lr*)rl
= l,4ol exp[oor](exp[i(toor + 9r)] + exp[-i(oror + 9r)])
= Zlerl exp[oot] cos(orot + p*). or < 0 (s.10.2)

where
Ar= lA*l exp[rprl
As, increases, this component of the impulse response dccays to zero and thus
results in a stable system. Thereforc, systcrns with only sinrplc poles in the left half
plane are stable.
2. Simple Poles on the Imaginary Axis. This case can be considcred a special case of
Equation (5.10.2) with oo = 0. The kth component in the impulse response is then
holt'1 :zlerl cos(ur*t + B^)
Note that there is no exponcntial dampingl that is, the rcsponse does not decay as
time progresses. It may appear that lhe response to the bounded input is also
boundcd. This is not truc if the system is excited by a cosinc function with the same
frequency to^. In that case, a multiple-order pole of the fornr

__ B"
1s2 + ol12
appears in the L:place transform of the output. This term gives rise to a time response
B
stn
2ro ' 'ot
that increases without bound as I increases. Physically', o^ is the natural frequency
of the system. If the input frequency matches the natural lrcquency, the system res-
onates and the output grorvs without bound. An example is the lossless (nonresis-
tive) LC circuit. A system rvith polcs on the imaginary axis is sometimes called a
marginally stable system.
3. Simple Poles in the Right Half Plane. If the system function has poles in the right
half plane, then the sys:em response is of the form
268 The Laplace Translorm Chapter s

h*(t) :2lArl explootlcos(oor + po), qr ) 0


Because of the increasing exponential term, the output of the system increases with-
out bound, even for bounded input. Systems for which potes are in the right half
plane are unstable,
4.' Multiple-order Poles in the Lefi Half Plane. A pole of order rn in the left half plane
gives rise to a response of the form (see entry 7 in Table 5.1)
h* = lA*l r- exp [oor] cos (trl*, + pr ), oo ( 0
For negative values ofo1, the exponential function decreases faster than the polyno-
mial t"'. Thus, the response decays as t fuicreaies, and a system with such poles is stable.
5. Multiple-order Poles on the Imaginary Axb, In this case, the response of the system
takes the form
hk= lAkl t-cos(root + p*)
This term increases with time, and therefore, the system is unstabte.
6. Multiple-order Poles in rhe Right Half Plane.Tlre system response is
hr = lAol fl exp[ort] cos(orr, + pr), oo ) 0
Because o^ > 0. the response increases with time, and therefore, the system is unstable.
In sum, an LTI (causal) system is stable if all its poles are in the open left half plane
(the region of the complex plane consisting of all points to the left of, but not includ-
ing, the jω-axis). An LTI system is marginally stable if it has simple poles on the jω-axis.
An LTI system is unstable if it has poles in the right half plane or multiple-order poles on
the jω-axis.
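As a quick numerical illustration of this pole-location test, the following sketch (hypothetical denominator polynomials, using NumPy; not part of the original text) finds the poles of a rational H(s) from its denominator coefficients and classifies the system. A tolerance is needed because numerically computed roots are only approximate.

```python
import numpy as np

def classify_stability(den_coeffs, tol=1e-6):
    """Classify an LTI system from the denominator of H(s).

    den_coeffs: denominator coefficients in descending powers of s,
    e.g. s^2 + 3s + 2 -> [1, 3, 2].
    """
    poles = np.roots(den_coeffs)
    re = poles.real
    if np.all(re < -tol):
        return "stable"                      # all poles in the open left half plane
    if np.any(re > tol):
        return "unstable"                    # at least one right-half-plane pole
    # remaining poles lie on (or numerically very near) the jw-axis;
    # marginally stable only if each such pole is simple
    on_axis = poles[np.abs(re) <= tol]
    for p in on_axis:
        if np.sum(np.abs(poles - p) <= tol) > 1:
            return "unstable"                # multiple-order pole on the jw-axis
    return "marginally stable"

print(classify_stability([1, 3, 2]))          # poles at -1, -2        -> stable
print(classify_stability([1, 0, 4]))          # poles at +/- j2        -> marginally stable
print(classify_stability([1, 0, 8, 0, 16]))   # double poles at +/- j2 -> unstable
```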

5.11 SUMMARY

• The bilateral Laplace transform of x(t) is defined by

   X_B(s) = ∫_{−∞}^{∞} x(t) exp[−st] dt

• The values of s for which X(s) converges (X(s) exists) constitute the region of con-
vergence (ROC).
• The transformation x(t) ↔ X(s) is not one to one unless the ROC is specified.
• The unilateral Laplace transform is defined as

   X(s) = ∫_{0⁻}^{∞} x(t) exp[−st] dt

• The bilateral and the unilateral Laplace transforms are related by

   X_B(s) = X_I(s) + ℒ_I{x_−(−t)u(t)}|_{s → −s}

where X_I(s) is the unilateral Laplace transform of the causal part of x(t) and x_−(t)
is the noncausal part of x(t).

• Differentiation in the time domain is equivalent to multiplication by s in the s
domain; that is,

   ℒ{dx(t)/dt} = sX(s) − x(0⁻)

• Integration in the time domain is equivalent to division by s in the s domain; that is,

   ℒ{∫_{−∞}^{t} x(τ) dτ} = X(s)/s + (1/s)∫_{−∞}^{0⁻} x(τ) dτ

• Convolution in the time domain is equivalent to multiplication in the s domain; that is,

   y(t) = x(t) * h(t)  ↔  Y(s) = H(s)X(s)

• The initial-value theorem allows us to compute the initial value of the signal x(t)
and its derivatives directly from X(s):

   x^(n)(0⁺) = lim_{s→∞} [s^{n+1}X(s) − s^n x(0⁺) − s^{n−1}x′(0⁺) − ⋯ − s x^{(n−1)}(0⁺)]

• The final-value theorem enables us to find the final value of x(t) from X(s):

   lim_{t→∞} x(t) = lim_{s→0} sX(s)

• Partial-fraction expansion can be used to find the inverse Laplace transform of sig-
nals whose Laplace transforms are rational functions of s.
• There are many applications of the Laplace transform; among them are the solution
of differential equations, the analysis of electrical circuits, and the design and analy-
sis of control systems.
• If two subsystems with transfer functions H₁(s) and H₂(s) are connected in parallel,
then the overall transfer function H(s) is

   H(s) = H₁(s) + H₂(s)
• If two subsystems with transfer functions H₁(s) and H₂(s) are connected in series,
then the overall transfer function H(s) is

   H(s) = H₁(s)H₂(s)

• The closed-loop transfer function of a negative-feedback system with open-loop
transfer function H₁(s) and feedback transfer function H₂(s) is

   H(s) = H₁(s) / (1 + H₁(s)H₂(s))

• Simulation diagrams for LTI systems can be obtained in the frequency domain.
These diagrams can be used to obtain representations in terms of state variables.
• The solution to the state equation can be written in the s domain as

   V(s) = Φ(s)v(0⁻) + Φ(s)bX(s)
   Y(s) = cV(s) + dX(s)

• The matrix

   Φ(s) = (sI − A)⁻¹ = ℒ{exp[At]}

is called the resolvent matrix.
• The transfer function of a system can be written as

   H(s) = cΦ(s)b + d

• An LTI system is stable if and only if all its poles are in the open left half plane. An
LTI system is marginally stable if it has only simple poles on the jω-axis; otherwise,
it is unstable.

5.12 CHECKLIST OF IMPORTANT TERMS

Bilateral Laplace transform        Noncausal part of x(t)
Cascade interconnection            Parallel interconnection
Causal part of x(t)                Partial-fraction expansion
Controller                         Plant
Convolution property               Poles of H(s)
Feedback interconnection           Positive feedback
Final-value theorem                Rational function
Initial-conditions generator       Region of convergence
Initial-value theorem              Simple pole
Inverse Laplace transform          Simulation diagram
Kirchhoff's current law            s plane
Kirchhoff's voltage law            Transfer function
Left half plane                    Unilateral Laplace transform
Multiple-order pole                Zero-input response
Negative feedback                  Zero-state response

5.13 PROBLEMS
5.1. Find the bilateral Laplace transform and the ROC of the following functions:
(a) exp[t + 1]
(b) exp[bt]u(−t)
(c) |t|
(d) (1 − |t|)
(e) exp[−2|t|]
(f) tⁿ exp[−t]u(−t)
(g) (cos at)u(−t)
(h) (sinh at)u(−t)
5.2. Use the definition in Equation (5.3.1) to determine the unilateral Laplace transforms of
the following signals:
(i) x₁(t) = t rect[(t − 1)/2]

(ii).rr(r) = r,(f) + i 60)


Ir]
(iii),r(,) = recr[r.l
5.3. Use Equation (5.4.2) to evaluate the bilateral Laplace transform of the signals in Problem 5.1.
5.4. The Laplace transform of a signal x(t) that is zero for t < 0 is

s3+2s2+3s+2
.X(.t ) =;.r+^j +1,.1'+b+2
Determine the Laplace transform of the following signals:

(a) y0) =
(b) y0) = tr(t)
"(i)
(c) y(t) = tr( - l)
dr (t\ _i.
(dl y() =
(e) y(r) = (r - l)x(r - D * d';!)
rt
(f)y(t)=lx(r)dt
J6

5.5. Derive entry 5 in Table 5.1.


5.6. Show that

   ℒ{t^ν u(t)} = Γ(ν + 1)/s^{ν+1}

where

   Γ(ν) = ∫_{0}^{∞} τ^{ν−1} exp[−τ] dτ
5.7. Use the property
   Γ(ν + 1) = νΓ(ν)
to show that the result in Problem 5.6 reduces to entry 5 in Table 5.1.
5.8. Derive formulas 8 and 9 in Table 5.1 using integration by parts.
5.9. Use entries 8 and 9 in Table 5.1 to find the Laplace transforms of sinh(ω₀t)u(t) and
cosh(ω₀t)u(t).
5.10. Determine the initial and final values of each of the signals whose unilateral Laplace trans-
forms are as follows, without computing the inverse Laplace transform. If there is no final
value, state why not.

(a)-, t
J+A
I
(b)
i" * o1;
6
(e
fu.rzi

(d)
sri;
- s2+s+3
(e)F+4s,+zsE
o F:+-,
5.11. Find x(t) for the following Laplace transforms:
(a) , s*2 ^
s--s-z
(b)#+
(c) 2s3+3s2+6s+4
G,J;xs,+r+2)
c2
(d)
",i+
s2-s+1
2

(e)
f _ 2s7J;

.', f,#=
2s2-6s + 3
(8)
s, _ 3s;,

rorsffl 7
:

o) (BE,
u,#6
5.12. Find the following convolutions using Laplace transforms:
(a) exp[at]u(t) * exp[bt]u(t), a ≠ b
(b) exp[at]u(t) * exp[at]u(t)
(c) rect(t/2) * u(t)
(d) ,l,(t) r exP[at]z(t)
(e) exp[−bt]u(t) * u(t)
(f) sin(at)u(t) * cos(bt)u(t)
(g) exp[−2t]u(t) * rect[(t − 1)/2]
(h) [exp(−2t)u(t) + δ(t)] * u(t − 1)
5.13. (a) Use the convolution property to find the time signals corresponding to the following
Laplace transforms:

r,l #;r- (tr) ("+


(b) Can you infer the inverse Laplace transform of 1/(s − a)³ from your answers in part (a)?
5.14. We have seen that the output of an LTI system can be determined as Y(s) = H(s)X(s),
where the system transfer function H(s) is the Laplace transform of the system impulse

response h(t). Let H(s) = N(s)/D(s), where N(s) and D(s) are polynomials in s. The
roots of N(s) are the zeros of H(s), while the roots of D(s) are the poles.
(a) For the transfer function
s2+3s+2
H(r) = _
si sr-i s, _l
plot the locations of the poles and zeros in the complex s plane.
(b) What is h(t) for this system? Is h(t) real?
(c) Show that if h(t) is real, H(s*) = H*(s). Hence show that if s = s₀ is a pole (zero) of
H(s), so is s = s₀*. That is, poles and zeros occur in complex conjugate pairs.
(d) Verify that the given H(s) satisfies (c).
5.15. Find the system transfer functions for each of the systems in Figure P5.15. (Hint: You may
have to move the pickoff, or summation, point.)

Figure P5.15

5.16. Draw the simulation diagrams in the first and second canonical forms for the LTI system
described by the transfer function

HG) =,, #|.ri-,


.G,

5.17. Repeat Problem 5.16 for the system described by

   H(s) = (s² + 3s + 1)/(s³ + 3s² + s)

5.18. Find the transfer function of the system described by

   2y″(t) + 3y′(t) + 4y(t) = x′(t) − x(t)

(Assume zero initial conditions.) Find the system impulse response.


5.19. Find the transfer function of the system shown in Figure P5.19.

Figure P5.19 (interconnection of subsystems H₁(s) through H₅(s))

5.20. Solve the following differential equations:

(a) y′(t) + 2y(t) = u(t),   y(0⁻) = 1
(b) y′(t) + 2y(t) = (cos t)u(t),   y(0⁻) = 1
(c) y′(t) + 2y(t) = exp[−3t]u(t),   y(0⁻) = 1
(d) y″(t) + 4y′(t) + 3y(t) = u(t),   y(0⁻) = 2, y′(0⁻) = 1
(e) y″(t) + 4y′(t) + 3y(t) = exp[−3t]u(t),   y(0⁻) = 0, y′(0⁻) = 1
(f) y‴(t) + 3y″(t) + 2y′(t) − 6y(t) = exp[−2t]u(t),   y(0⁻) = y′(0⁻) = y″(0⁻) = 0
5.21. Find the impulse response h(t) for the systems described by the following differential equations:
(a) y′(t) + 5y(t) = x(t) + 2x′(t)
(b) y″(t) + 4y′(t) + 3y(t) = 2x(t) − 3x′(t)
(c) y‴(t) + y″(t) − 2y(t) = x″(t) + x′(t) + 2x(t)
5.22. One major problem in systems theory is system identification. Observing the output of an
LTI system in response to a known input can provide us with the impulse response of the
system. Find the impulse response of the systems whose input and output are as follows:

(s) -r(t) = 2exPl-2rlu(tl


y(r) = (1 -, + exp[-,1 + exp[-Z])z(r)
(b) .r(t) = 2x 11;
y(t) = n() - exp[-2r]rr(r)
(c) .r(t) = exP [-2rla(t)
y(r) = exp[-r] - 3 exp[-2r])u(r)
(d) r(r) = s11;
y(r) = 0, - 2 exp[-3r])z(r)
lel r(tl = 71111
y(r) = exp[-2r] cos(4r + 135")z(r)
(f) r(r; = 3u,1r,
y(t) = exp[-4r][cos(4t + 135') - 2sin(4r + 135")]r(,)
5.23. For the circuit shown in Figure P5.23, let v_c(0⁻) = 1 volt, i_L(0⁻) = 2 amperes, and x(t) =
u(t). Find y(t). (Incorporate the initial energy for the inductor and the capacitor in your
transformed model.)
Figure P5.23

5.24. For the circuit shown in Figure P5.24, let v_c(0⁻) = 1 volt, i_L(0⁻) = 2 amperes, and x(t) =
u(t). Find y(t). (Incorporate the initial energy for the inductor and the capacitor in your
transformed model.)

Figure P5.24

5.25. Repeat Problem 5.23 for x(t) = (cos t)u(t).


5.26. Repeat Problem 5.24 for x(t) = (sin 2t)u(t).
5.27. Repeat Problem 5.23 for the circuit shown in Figure P5.27.
5.28. Consider the control system shown in Figure P5.28. For x(t) = u(t), H₁(s) = K, and H₂(s)
= 1/(s(s + a)), find the following:
(a) Y(s)
(b) y() for( = 29,a = 5,a = 3, and a = I
5.29. Consider the control system shown in Figure P5.29.
Let
   x(t) = Au(t)

Figure P5.27
Figure P5.28
Figure P5.29

   H_c(s) = (s + 1)/s,   H(s) = 1/(s + 2)
(a) Show that lim_{t→∞} y(t) = A.
(b) Determine the error signal e(t).
(c) Does the system track the input if H.(s) = If not, why?
U - ,1,
(d) Does the system work if H.(s) = u*r,
ffi,
5.30. Find exp[At] using the Laplace transform for the following matrices:

(,) A:[l N 61
Fr -rl
e=[z o]

,", n=[l ?]
(,,) A=ti ?]
I r ool fz 1ll
tel l=l-t l rl ro n=lo 3 rl
L-r o ol Lo -r rl

5.31. Consider the circuit shown in Figure P5.31. Select the capacitor voltage and the inductor
current as state variables. Assume zero initial conditions.
(a) Write the state equations in the transform domain.
(b) Find Y(s) if the input x(t) is the unit step.
(c) What is y(t)?

Figure P5.31

5.32. Use the Laplace-transform method to find the solution of the following state equations:

, [;it]l [-t -3][;;8] [t[s.]l [l]


= =

, [;i[l]l [? -l][18] [;:[s-]l : t-?l


=

5.33. Check the stability of the systems shown in Figure P5.33.

Figure P5.33
Chapter 6

Discrete-Time Systems

6.1 INTRODUCTION
In the preceding chapters, we discussed techniques for the analysis of analog or con-
tinuous-time signals and systems. In this and subsequent chapters, we consider corre-
sponding techniques for the analysis of discrete-time signals and systems.
Discrete-time signals, as the name implies, are signals that are defined only at dis-
crete instants of time. Examples of such signals are the number of children born on a
specific day in a year, the population of the United States as obtained by a census, the
interest on a bank account, etc. A second type of discrete-time signal occurs when an
analog signal is converted into a discrete-time signal by the process of sampling. (We
will have more to say about sampling later.) An example is the digital recording of
audio signals. Another example is a telemetering system in which data from several
measurement sensors are transmitted over a single channel by time-sharing.
In either case, we represent the discrete-time signal as a sequence of values x(t_n),
where the t_n correspond to the instants at which the signal is defined. We can also write
the sequence as x(n), with n assuming only integer values.
As with continuous-time signals. we usually rePresent discrete-time signals in func-
tional form-for example,

.r(n) = (6.1.1)
].o.rn
Alternatively, if a signal is nonzero only over a finite interval, we can list the values of
the signal as the elements of a sequence. Thus, the function shown in Figure 6.1.1 can
be written as

Figure 6.1.1 Example of a discrete-time sequence.

l'I I I 3
(6.t.2)
.r(r ) =
t+'z':'o'o' ;)
1

where the arrow indicates the value for n = 0. In this notation, it is assumed that all
values not listed are zero. For causal sequences, in which the first entry represents the
value at n = 0, we omit the arrow.
The sequence shown in Equation (6.1.2) is an example of a finite-length sequence.
The length of the sequence is given by the number of terms in the sequence. Thus,
Equation (6.1.2) represents a six-point sequence.

6.1.1 Classification of Discrete-Time Signals

As with continuous-time signals, discrete-time signals can be classified into different
categories. For example, we can define the energy of a discrete-time signal x(n) as

   E = lim_{N→∞} Σ_{n=−N}^{N} |x(n)|²        (6.1.3)

The average power of the signal is

   P = lim_{N→∞} (1/(2N + 1)) Σ_{n=−N}^{N} |x(n)|²        (6.1.4)

The signal x(n) is an energy signal if E is finite. It is a power signal if E is not finite, but
P is finite. Since P = 0 when E is finite, all energy signals are also power signals. How-
ever, if P is finite, E may or may not be finite. Thus, not all power signals are energy
signals. If neither E nor P is finite, the signal is neither an energy nor a power signal.
The signal x(n) is periodic if, for some integer N > 0,

   x(n + N) = x(n)   for all n        (6.1.5)

The smallest value of N that satisfies this relation is the fundamental period of the signal.
If there is no integer N that satisfies Equation (6.1.5), x(n) is an aperiodic signal.

Example 6.1.1
Consider the signal
   x(n) = A sin(2πf₀n + φ₀)
Then
   x(n + N) = A sin(2πf₀(n + N) + φ₀)
            = A sin(2πf₀n + φ₀) cos(2πf₀N) + A cos(2πf₀n + φ₀) sin(2πf₀N)
Clearly, x(n + N) will be equal to x(n) if
   N = m/f₀
where m is some integer. The fundamental period is obtained by choosing m as the small-
est integer that yields an integer value for N. For example, if f₀ = 3/5, we can choose m =
3 to get N = 5.
On the other hand, if f₀ is irrational, N will not be an integer, and thus, x(n) is aperiodic.

Let x(n) be the sum of two periodic sequences x₁(n) and x₂(n), with periods N₁ and
N₂, respectively. Let p and q be two integers such that
   pN₁ = qN₂ = N        (6.1.6)
Then x(n) is periodic with period N, since
   x(n + N) = x₁(n + pN₁) + x₂(n + qN₂) = x₁(n) + x₂(n)
Because we can always find integers p and q to satisfy Equation (6.1.6), it follows that
the sum of two discrete-time periodic sequences is also periodic.
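In practice, the smallest such N is the least common multiple of N₁ and N₂; a minimal sketch (values chosen to match Example 6.1.2 below) is:

```python
from math import gcd

def common_period(N1, N2):
    """Smallest N with p*N1 == q*N2 == N for integers p, q (the lcm)."""
    return N1 * N2 // gcd(N1, N2)

# Periods from Example 6.1.2: N1 = 18, N2 = 14 -> N = 126
print(common_period(18, 14))   # 126
```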

Example 6.1.2
Let

'(') = ""'(T). ''(T.;)


It can be easily verified, as in Example 6.1.1, that the two terms in x(n) are both periodic,
with periods N₁ = 18 and N₂ = 14, respectively, so that x(n) is periodic with period N = 126.

The signal x(n) is even if

   x(n) = x(−n)   for all n        (6.1.7)

and is odd if

   x(n) = −x(−n)   for all n        (6.1.8)

The even part of x(n) can be determined as

   x_e(n) = ½[x(n) + x(−n)]        (6.1.9)

whereas its odd part is given by

   x_o(n) = ½[x(n) − x(−n)]        (6.1.10)



6.1.2 Transformations of the Independent Variable


For integer values of k, the sequence x(n − k) represents the sequence x(n) shifted by
k samples. The shift is to the right if k > 0 and to the left if k < 0. Similarly, the signal
x(−n) corresponds to reflecting x(n) around the time origin n = 0. As in the continu-
ous-time case, the operations of shifting and reflecting are not commutative.
While amplitude scaling is no different than in the continuous-time case, time scal-
ing must be interpreted with care in the discrete-time case, since the signals are defined
only for integer values of the time variable. We illustrate this with a few examples.

Example 6.1.3
Let
   x(n) = exp[−n/2] u(n)
and suppose we want to find (i) 2x(5n/3) and (ii) x(2n).
With y(n) = 2x(5n/3), we have
   y(0) = 2x(0) = 2,  y(1) = 2x(5/3) = 0,  y(2) = 2x(10/3) = 0,
   y(3) = 2x(5) = 2 exp(−5/2),  y(4) = 2x(20/3) = 0,  etc.
Here we have assumed that x(n) is zero if n is not an integer. It is clear that the general
expression for y(n) is
   y(n) = { 2 exp[−5n/6],   n = 0, 3, 6, etc.
          { 0,              otherwise
Similarly, with z(n) = x(2n), we have
   z(0) = x(0) = 1,  z(1) = x(2) = exp[−1],  z(3) = x(6) = exp[−3],  etc.
The general expression for z(n) is therefore
   z(n) = { exp[−n],   n ≥ 0
          { 0,         n < 0
The preceding example shows that for discrete-time signals, time scaling does not yield
just a stretched or compressed version of the original signal, but may give a totally dif-
ferent waveform.

Example 6.1.4
Let
   x(n) = { 1,    n even
          { −1,   n odd
Then
   y(n) = x(2n) = 1   for all n

Example 6.1.5
Consider the waveform shown in Figure 6.1.2(a), and let

   y(n) = x(−n/3 + 2/3)

Figure 6.1.2 Signals for Example 6.1.5: (a) x(n), (b) x(n/3), (c) x(−n/3), and (d)
x(−n/3 + 2/3).

We determine y(n) by writing it as
   y(n) = x[−(n − 2)/3]
We first scale x(n) by a factor of 1/3 to obtain x(n/3) and then reflect this about the ver-
tical axis to obtain x(−n/3). The result is shifted to the right by two samples to obtain
y(n). These steps are illustrated in Figures (6.1.2b)–(6.1.2d). The resulting sequence is
   y(n) = {−2, 0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, −1}

6.2 ELEMENTARY DISCRETE-TIME SIGNALS


Thus far, we have seen that continuous-time signals can be represented in terms of ele-
mentary signals such as the delta function, unit-step function, exponentials, and sine
and cosine waveforms. We now consider the discrete-time equivalents of these signals.
We will see that these discrete-time signals have characteristics similar to those of their

continuous-time counterparts, but with some significant differences. As with continu-


ous-time systems, the analysis of the responses of discrete-time linear systems to arbi-
trary inputs is considerably simplified by expressing the inputs in terms of elementary
time functions.

6.2.1 Discrete Impulse and Step Functions


We define the unit-impulse function in discrete time as

   δ(n) = { 1,   n = 0
          { 0,   n ≠ 0        (6.2.1)

as shown in Figure 6.2.1. We refer to δ(n) as the unit sample occurring at n = 0 and
the shifted function δ(n − k) as the unit sample occurring at n = k. That is,

   δ(n − k) = { 1,   n = k
              { 0,   n ≠ k        (6.2.2)

Whereas δ(n) is somewhat similar to the continuous-time impulse function δ(t), we
note that the magnitude of the discrete impulse is always finite. Thus, there are no ana-
lytical difficulties in defining δ(n).
The unit-step sequence shown in Figure 6.2.2 is defined as

   u(n) = { 1,   n ≥ 0
          { 0,   n < 0        (6.2.3)

The discrete-time delta and step functions have properties somewhat similar to their con-
tinuous-time counterparts. For example, the first difference of the unit-step function is

   u(n) − u(n − 1) = δ(n)        (6.2.4)

If we compute the sum from −∞ to n of the δ function, as can be seen from Figure 6.2.3,
we get the unit-step function:

Figure 6.2.1 (a) The unit sample, or δ function. (b) The shifted δ function.
Figure 6.2.2 The unit-step function.
Figure 6.2.3 Summing the δ function: (a) n < 0, (b) n > 0.

   Σ_{k=−∞}^{n} δ(k) = { 0,   n < 0
                       { 1,   n ≥ 0        (6.2.5)
   = u(n)
By replacing k by n − k, we can write Equation (6.2.5) as

   Σ_{k=0}^{∞} δ(n − k) = u(n)        (6.2.6)

From Equations (6.2.4) and (6.2.5), we see that in discrete-time systems, the first dif-
ference, in a sense, takes the place of the first derivative in continuous-time systems,
and the sum operator replaces the integral.
Other analogous properties of the δ function follow easily. For any arbitrary
sequence x(n), we have
   x(n)δ(n − k) = x(k)δ(n − k)        (6.2.7)

Since we can write x(n) as

   x(n) = ⋯ + x(−1)δ(n + 1) + x(0)δ(n) + x(1)δ(n − 1) + ⋯

it follows that

   x(n) = Σ_{k=−∞}^{∞} x(k)δ(n − k)        (6.2.8)

Thus, Equation (6.2.6) is a special case of Equation (6.2.8).

6.2.2 Exponential Sequences
The exponential sequence in discrete time is given by
   x(n) = Cαⁿ        (6.2.9)
where, in general, C and α are complex numbers. The fact that this is a direct analog
of the exponential function in continuous time can be seen by writing α = exp β, so that
   x(n) = C exp[βn]        (6.2.10)
For C and α real, x(n) increases with increasing n if |α| > 1. Similarly, if |α| < 1, we
have a decreasing exponential.

Consider the complex exponential signal in continuous time,

   x(t) = C exp[jω₀t]        (6.2.11)

Suppose we sample x(t) at equally spaced intervals T to get the discrete-time signal

   x(n) = C exp[jω₀Tn]        (6.2.12)

By replacing ω₀T in this equation by Ω₀, we obtain the complex exponential in dis-
crete time,

   x(n) = C exp[jΩ₀n]        (6.2.13)

Recall that ω₀ is the frequency of the continuous-time signal x(t). Correspondingly, we
will refer to Ω₀ as the frequency of the discrete-time signal x(n). It can be seen, how-
ever, that whereas the continuous-time or analog frequency ω₀ has units of radians per
second, the discrete-time frequency Ω₀ has units of radians.
Furthermore, while the signal x(t) is periodic with period T = 2π/ω₀ for any ω₀, in
the discrete-time case, since the period is constrained to be an integer, not all values of
Ω₀ correspond to a periodic signal. To see this, suppose x(n) in Equation (6.2.13) is
periodic with period N. Then, since x(n) = x(n + N), we must have
   exp[jΩ₀N] = 1
For this to hold, Ω₀N must be an integer multiple of 2π, so that
   Ω₀N = 2πm,   m = 0, ±1, ±2, etc.
or
   Ω₀/2π = m/N
for m any integer. Thus, x(n) will be periodic only if Ω₀/2π is a rational number. The
period is given by N = 2πm/Ω₀, with the fundamental period corresponding to the
smallest possible value for m.
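The rational-number test translates directly into a short computation; the sketch below (illustrative only, using Python's Fraction, with the rational value of Ω₀/2π supplied directly) returns the fundamental period.

```python
from fractions import Fraction

def fundamental_period(omega0_over_2pi: Fraction) -> int:
    """Fundamental period of exp(j*Omega0*n), given Omega0/(2*pi) as a Fraction.

    Omega0/(2*pi) = m/N in lowest terms  =>  fundamental period N.
    (An irrational Omega0/(2*pi) has no period and is not representable here.)
    """
    return omega0_over_2pi.denominator

# Example 6.2.1 below: Omega0 = 7*pi/9, so Omega0/(2*pi) = 7/18 -> N = 18
print(fundamental_period(Fraction(7, 18)))   # 18
```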

Example 6.2.1
Let
   x(n) = exp[j(7π/9)n]
so that
   Ω₀/2π = 7/18 = m/N
Thus, the sequence is periodic, and the fundamental period, obtained by choosing m = 7,
is given by N = 18.

Example 6.2.2
For the sequence
   x(n) = exp[j(7/9)n]
we have
   Ω₀/2π = 7/(18π)
which is not rational. Thus, the sequence is not periodic.

Let x_k(n) define the set of functions

   x_k(n) = exp[jkΩ₀n],   k = 0, ±1, ±2, …        (6.2.14)

with Ω₀ = 2π/N, so that x_k(n) represents the kth harmonic of the fundamental signal
x₁(n). In the case of continuous-time signals, we saw that the set of harmonics
exp[jk(2π/T)t], k = 0, ±1, ±2, … are all distinct, so that we have an infinite number
of harmonics. However, in the discrete-time case, since
   x_{k+N}(n) = exp[j(k + N)Ω₀n] = exp[j2πn] exp[jkΩ₀n] = x_k(n)        (6.2.15)
there are only N distinct waveforms in the set given by Equation (6.2.14). These cor-
respond to the frequencies Ω_k = 2πk/N for k = 0, 1, …, N − 1. Since Ω_{k+N} = Ω_k + 2π,
waveforms separated in frequency by 2π radians are identical. As we shall see later,
this has implications in the Fourier analysis of discrete-time, periodic signals.

Example 6.2.3
Consider the continuous-time signal

   x(t) = Σ_{k=−2}^{2} c_k exp[jkω₀t]

where c₀ = 1, c₁ = (1 + j1) = c₋₁*, and c₂ = c₋₂* = 3/2.
Let us sample x(t) uniformly to get the sampled signal

   x(n) = Σ_{k=−2}^{2} c_k exp[jkΩ₀n]

where Ω₀ = 4(2π/3). Thus, x(n) represents a sum of harmonic signals with fundamental
period N = 2πm/Ω₀. Choosing m = 4 then yields N = 3. It follows, therefore, that there
are only three distinct harmonics, and hence, the summation can be reduced to one con-
sisting of only three terms.
To see this, we note that, from Equation (6.2.15), we have exp(j2Ω₀n) = exp(−jΩ₀n)
and exp(j(−2Ω₀)n) = exp(jΩ₀n), so that grouping like terms together gives

   x(n) = Σ_{k=−1}^{1} d_k exp[jkΩ₀n]

where

   d₀ = c₀,   d₁ = c₁ + c₋₂,   d₋₁ = c₋₁ + c₂ = d₁*

6.3 DISCRETE-TIME SYSTEMS


A discrete-time system is a system in which all the signals are discrete-time signals.
That is, a discrete-time system transforms discrete-time inputs into discrete-time out-
puts. Such concepts as linearity, time invariance, causality, etc., which we defined for
continuous-time systems, carry over to discrete-time systems. As in our discussion of
continuous-time systems, we consider only linear, time-invariant (or shift-invariant)
systems in discrete time.
Again, as with continuous-time systems, we can use either a time-domain or a fre-
quency-domain characterization of a discrete-time system. In this section, we examine
the time-domain characterization of discrete-time systems using (a) the impulse-
response and (b) the difference-equation representations.
Consider a linear, shift-invariant, discrete-time system with input x(n). We saw in
Section 6.2.1 that any arbitrary signal x(n) can be written as the weighted sum of
shifted unit-sample functions:

   x(n) = Σ_{k=−∞}^{∞} x(k)δ(n − k)        (6.3.1)
It follows, therefore, that we can use the linearity property of the system to determine
its response to x(n) in terms of its response to a unit-sample input. Let h(n) denote
the response of the system measured at time n to a unit impulse applied at time zero.
If we apply a shifted impulse δ(n − k) occurring at time k, then, by the assumption of
shift invariance, the response of the system at time n is given by h(n − k). If the input
is amplitude scaled by a factor x(k), then, again, by linearity, so is the output. If we
now fix n, let k vary from −∞ to ∞, and take the sum, it follows from Equation (6.3.1)
that the output of the system at time n is given in terms of the input as

   y(n) = Σ_{k=−∞}^{∞} x(k)h(n − k)        (6.3.2)
As in the case of continuous-time systems, the impulse response is determined assum-
ing that the system has no initial energy; otherwise the linearity property does not hold
(why?), so that y(n), as determined by using Equation (6.3.2), corresponds to only the
forced response of the system.
The right-hand side of Equation (6.3.2) is referred to as the convolution sum of the
two sequences x(n) and h(n) and is represented symbolically as x(n) * h(n). By
replacing k by n − k in the equation, the output can also be written as
Discr€t+Time Systems Chapter 6

   y(n) = Σ_{k=−∞}^{∞} x(n − k)h(k)
        = h(n) * x(n)        (6.3.3)
Thus, the convolution operation is commutative.
For causal systems, it is clear that

   h(n) = 0,   n < 0        (6.3.4)

so that Equation (6.3.2) can be written as

   y(n) = Σ_{k=−∞}^{n} x(k)h(n − k)        (6.3.5)

or, in the equivalent form,

   y(n) = Σ_{k=0}^{∞} x(n − k)h(k)        (6.3.6)
For continuous-time systems, we saw that the impulse response is, in general, the sum
of several complex exponentials. Consequently, the impulse response is nonzero over
any finite interval of time (except, possibly, at isolated points) and is generally referred
to as an infinite impulse response (IIR). With discrete-time systems, on the other hand,
the impulse response can become identically zero after a few samples. Such systems
are said to have a finite impulse response (FIR). Thus, discrete-time systems can be
either IIR or FIR.
We can interpret Equation (6.3.2) in a manner similar to the continuous-time case.
For a fixed value of n, we consider the product of the two sequences x(k) and
h(n − k), where h(n − k) is obtained from h(k) by first reflecting h(k) about the ori-
gin and then shifting to the right by n if n is positive or to the left by |n| if n is nega-
tive. This is illustrated in Figure 6.3.1. The output y(n) for this value of n is
determined by summing the values of the sequence x(k)h(n − k).
Figure 6.3.1 Convolution operation of Equation (6.3.2): (a) x(k), (b) h(k),
(c) h(n − k), and (d) x(k)h(n − k).
We note that the convolution of h(n) with δ(n) is, by definition, equal to h(n). That
is, the convolution of any function with the δ function gives back the original function.
We now consider a few examples.

Example 6.3.1
When an input x(n) = 3δ(n − 2) is applied to a causal, linear time-invariant system, the
output is found to be
   y(n) = 3[(−1/2)^{n−2} + (1/4)^{n−2}],   n ≥ 2
Find the impulse response h(n) of the system.
By definition, h(n) is the response of the system to the input δ(n). Since the system is
LTI, it follows that
   h(n) = (1/3) y(n + 2)
We note that the output can be written as
   y(n) = 3[(−1/2)^{n−2} + (1/4)^{n−2}] u(n − 2)
so that
   h(n) = [(−1/2)ⁿ + (1/4)ⁿ] u(n)

Example 6.3.2
Let
   x(n) = αⁿu(n),   h(n) = βⁿu(n)
Then
   y(n) = Σ_{k=−∞}^{∞} αᵏu(k) β^{n−k} u(n − k)
Since u(k) = 0 for k < 0, and u(n − k) = 0 for k > n, we can rewrite the summation as
   y(n) = Σ_{k=0}^{n} αᵏβ^{n−k} = βⁿ Σ_{k=0}^{n} (αβ⁻¹)ᵏ
Clearly, y(n) = 0 if n < 0.
For n ≥ 0, if α = β, we have
   y(n) = βⁿ Σ_{k=0}^{n} 1 = (n + 1)βⁿ
If α ≠ β, the sum can be put in closed form by using the formula (see Problem 6.5)
   Σ_{k=0}^{n} aᵏ = (1 − a^{n+1})/(1 − a),   a ≠ 1        (6.3.7)
Assuming that αβ⁻¹ ≠ 1, we can write
   y(n) = βⁿ (1 − (αβ⁻¹)^{n+1})/(1 − αβ⁻¹) = (α^{n+1} − β^{n+1})/(α − β)
As a special case of this example, let α = 1, so that x(n) is the unit step. The step response
of this system obtained by setting α = 1 in the last expression for y(n) is
   y(n) = (1 − β^{n+1})/(1 − β)

In general, as can be seen by letting x(n) = u(n) in Equation (6.3.3), the step
response of a system whose impulse response is h(n) is given by

   s(n) = Σ_{k=−∞}^{n} h(k)        (6.3.8)

For a causal system, this reduces to

   s(n) = Σ_{k=0}^{n} h(k)        (6.3.9)

It follows that, given the step response s(n) of a system, we can find the impulse
response as

   h(n) = s(n) − s(n − 1)        (6.3.10)
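Equations (6.3.9) and (6.3.10) are just a running sum and a first difference; a minimal NumPy sketch (hypothetical values) shows the round trip for a causal impulse response.

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25, 0.125])   # hypothetical causal impulse response
s = np.cumsum(h)                         # step response, Equation (6.3.9)

# Recover h(n) = s(n) - s(n-1), Equation (6.3.10); s(-1) = 0 for a causal system
h_back = np.diff(s, prepend=0.0)
print(s)        # [1.    1.5   1.75  1.875]
print(h_back)   # [1.    0.5   0.25  0.125]
```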

Example 6.3.3
We want to find the step response of the system with impulse response
   h(n) = 2(1/2)ⁿ cos(nπ/3) u(n)
By writing h(n) as
   h(n) = [((1/2)e^{jπ/3})ⁿ + ((1/2)e^{−jπ/3})ⁿ] u(n)
it follows from the last equation in Example 6.3.2 that the step response is equal to
   s(n) = [ (1 − ((1/2)e^{jπ/3})^{n+1})/(1 − (1/2)e^{jπ/3})
          + (1 − ((1/2)e^{−jπ/3})^{n+1})/(1 − (1/2)e^{−jπ/3}) ] u(n)
which can be simplified as
   s(n) = 2 + (2/√3)(1/2)ⁿ sin(nπ/3),   n ≥ 0
Sec. 6.3 Discrete-Time System. 291

We can use Equation (6.3.10) to determine the impulse
response as
   h(n) = s(n) − s(n − 1)
        = (2/√3)(1/2)ⁿ sin(nπ/3) − (2/√3)(1/2)^{n−1} sin((n − 1)π/3)
which simplifies to the expression for h(n) in the problem statement.

The following examples consider the convolution of two finite-length sequences.

Example 6.3.4
Let x(n) be a finite sequence that is nonzero for n ∈ [N₁, N₂] and h(n) be a finite
sequence that is nonzero for n ∈ [N₃, N₄]. Then for fixed n, h(n − k) is nonzero for
k ∈ [n − N₄, n − N₃], whereas x(k) is nonzero only for k ∈ [N₁, N₂], so that the prod-
uct x(k)h(n − k) is zero if n − N₃ < N₁ or if n − N₄ > N₂. Thus, y(n) is nonzero only for
n ∈ [N₁ + N₃, N₂ + N₄].
Let M = N₂ − N₁ + 1 be the length of the sequence x(n) and N = N₄ − N₃ + 1 be the
length of the sequence h(n). The length of the sequence y(n), which is (N₂ + N₄) −
(N₁ + N₃) + 1, is thus equal to M + N − 1. That is, the convolution of an M-point sequence
and an N-point sequence results in an (M + N − 1)-point sequence.

Example 6.3.5
Let h(n) = {1, 2, 0, −1, 1} and x(n) = {1, 3, −1, −2} be two causal sequences. Since h(n)
is a five-point sequence and x(n) is a four-point sequence, from the results of Example
6.3.4, y(n) is an eight-point sequence that is zero for n < 0 or n > 7.
Since both sequences are finite, we can perform the convolution easily by setting up a
table of values of h(k) and x(n − k) for the relevant values of n and using
   y(n) = Σ_k h(k)x(n − k)
as shown in Table 6.1. The entries for x(n − k) in the table are obtained by first reflecting
x(k) about the origin to form x(−k) and successively shifting the resulting sequence by 1
to the right. All entries not explicitly shown are assumed to be zero. The output y(n) is
determined by multiplying the entries in the rows corresponding to h(k) and x(n − k) and
summing the results. Thus, to find y(0), multiply the entries in rows 2 and 4; for y(1), mul-
tiply rows 2 and 5; and so on. The last two columns list n and y(n), respectively.
From the last column in the table, we see that
   y(n) = {1, 5, 5, −5, −6, 4, 1, −2}

Example 6.3.6
We can use an alternative tabular form to determine y(n) by noting that
   y(n) = h(0)x(n) + h(1)x(n − 1) + h(2)x(n − 2) + ⋯
        + h(−1)x(n + 1) + h(−2)x(n + 2) + ⋯

TABLE 6.1
Convolution Table for Example 6.3.5

  k         -3  -2  -1   0   1   2   3   4   5   6   7     n   y(n)
  h(k)                   1   2   0  -1   1
  x(k)                   1   3  -1  -2
  x(-k)     -2  -1   3   1                                  0     1
  x(1-k)        -2  -1   3   1                              1     5
  x(2-k)            -2  -1   3   1                          2     5
  x(3-k)                -2  -1   3   1                      3    -5
  x(4-k)                    -2  -1   3   1                  4    -6
  x(5-k)                        -2  -1   3   1              5     4
  x(6-k)                            -2  -1   3   1          6     1
  x(7-k)                                -2  -1   3   1      7    -2

We consider the convolution of the sequences
   h(n) = {−2, 2, 0, −1, 1}   and   x(n) = {−1, 3, −1, −2}
where the first entry of h(n) corresponds to n = −1, so that h(n) is nonzero for −1 ≤ n ≤ 3,
and x(n) is causal.
The convolution table is shown in Table 6.2. Rows 2 through 6 list x(n − k) for the rele-
vant values of k, namely, k = −1, 0, 1, 2, and 3. Values of h(k)x(n − k) are shown in rows
7 through 11, and y(n) for each n is obtained by summing these entries in each column.

TABLE 6.2
Convolution Table for Example 6.3.6

  n             -1    0    1    2    3    4    5    6
  x(n+1)        -1    3   -1   -2
  x(n)               -1    3   -1   -2
  x(n-1)                  -1    3   -1   -2
  x(n-2)                       -1    3   -1   -2
  x(n-3)                            -1    3   -1   -2
  h(-1)x(n+1)    2   -6    2    4
  h(0)x(n)           -2    6   -2   -4
  h(1)x(n-1)                0    0    0    0
  h(2)x(n-2)                     1   -3    1    2
  h(3)x(n-3)                         -1    3   -1   -2
  y(n)           2   -8    8    3   -8    4    1   -2

Finally, we note that just as with the convolution integral, the convolution sum
defined in Equation (6.3.2) is additive, distributive, and commutative. This enables us
to determine the impulse response of series or parallel combinations of systems in
terms of their individual impulse responses, as shown in Figure 6.3.2.

Figure 6.3.2 Impulse responses of series and parallel combinations.

Example 6.3.7
Consider the system shown in Figure 6.3.3 with
   h₁(n) = δ(n) − αδ(n − 1)
   h₂(n) = (1/2)ⁿ u(n)
   h₃(n) = αⁿ u(n)
   h₄(n) = (n − 1) u(n)

Figure 6.3.3 System for Example 6.3.7.

and
   h₅(n) = δ(n) + n u(n − 1) + δ(n − 2)
It is clear from the figure that
   h(n) = h₁(n) * h₂(n) * h₃(n) * [h₅(n) − h₄(n)]
To evaluate h(n), we first form the convolution h₁(n) * h₃(n):
   h₁(n) * h₃(n) = [δ(n) − αδ(n − 1)] * αⁿu(n)
                 = αⁿu(n) − αⁿu(n − 1) = δ(n)
Also,
   h₅(n) − h₄(n) = δ(n) + δ(n − 2) + n u(n − 1) − (n − 1)u(n)
                 = δ(n) + δ(n − 2) + u(n)
so that
   h(n) = δ(n) * h₂(n) * [δ(n) + δ(n − 2) + u(n)]
        = h₂(n) + h₂(n − 2) + s₂(n)
where s₂(n) represents the step response corresponding to h₂(n). (See Equation (6.3.9).)
We have, therefore,
   h(n) = (1/2)ⁿu(n) + (1/2)^{n−2}u(n − 2) + [2 − (1/2)ⁿ]u(n)
which can be put in closed form, using Equation (6.3.7), as
   h(n) = (1/2)^{n−2}u(n − 2) + 2u(n)

6.4 PERIODIC CONVOLUTION


In certain applications, it is desirable to consider the convolution of two periodic
sequences x₁(n) and x₂(n), with common period N. However, the convolution of two
periodic sequences in the sense of Equation (6.3.2) or (6.3.3) does not converge. This
can be seen by letting k = rN + m in Equation (6.3.2) and rewriting the sum over k as
a double sum over r and m:
   y(n) = Σ_{k=−∞}^{∞} x₁(k)x₂(n − k) = Σ_{r=−∞}^{∞} Σ_{m=0}^{N−1} x₁(rN + m)x₂(n − rN − m)
Since both sequences on the right side are periodic with period N, we have
   y(n) = Σ_{r=−∞}^{∞} Σ_{m=0}^{N−1} x₁(m)x₂(n − m)
For a fixed value of n, the inner sum is a constant; thus, the infinite sum on the right
does not converge.

In order to get around this problem, as in continuous time, we define a different
form of convolution for periodic signals, namely, periodic convolution:

   y(n) = Σ_{k=0}^{N−1} x₁(k)x₂(n − k)        (6.4.1)

Note that the sum on the right has only N terms. We denote this operation as

   y(n) = x₁(n) ⊛ x₂(n)        (6.4.2)

By replacing k by n − k in Equation (6.4.1), we obtain the equivalent form,

   y(n) = Σ_{k=0}^{N−1} x₁(n − k)x₂(k)        (6.4.3)

We emphasize that periodic convolution is defined only for sequences with the same
period. Recall that, since the convolution of Equation (6.3.2) represents the output of
a linear system, it is usual to call it a linear convolution in order to distinguish it from
the convolution of Equation (6.4.1).
It is clear that y(n) as defined in Equation (6.4.1) is periodic, since

   y(n + N) = Σ_{k=0}^{N−1} x₁(n + N − k)x₂(k) = y(n)        (6.4.4)

so that y(n) has to be evaluated only for 0 ≤ n ≤ N − 1. It can also be easily verified
that the sum can be taken over any one period. (See Problem 6.12.) That is,

   y(n) = Σ_{k=N₀}^{N₀+N−1} x₁(k)x₂(n − k)        (6.4.5)
The convolution operation of Equation (6.4.1) involves the shifted sequence x₂(n − k),
which is obtained from x₂(n) by successive shifts to the right. However, we are inter-
ested only in values of n in the range 0 ≤ n ≤ N − 1. On each successive shift, the first
value in this range is replaced by the value at −1. Since the sequence is periodic, this
is the same as the value at N − 1, as shown in the example in Figure 6.4.1. We can
assume, therefore, that on each successive shift, each entry in the sequence moves one
place to the right, and the last entry moves into the first place. Such a shift is known as
a periodic, or circular, shift.
From Equation (6.4.1), y(n) can be explicitly written as
   y(n) = x₁(0)x₂(n) + x₁(1)x₂(n − 1) + ⋯ + x₁(N − 1)x₂(n − N + 1)
We can use the tabular form of Example 6.3.6 to calculate y(n). However, since the
sum is taken only over values of n from 0 to N − 1, the table has to have only N
columns. We present an example to illustrate this.

Example 6.4.1
Consider the convolution of the periodic extensions of the two sequences

   x(n) = {1, 2, 0, −1}   and   h(n) = {1, 3, −1, −2}


Figure 6.4.1 Shifting of periodic sequences.

It follows that y(n) is periodic with period N = 4. The convolution table of Table 6.3 illus-
trates the steps involved in determining y(n). For n = 0, 1, 2, 3, rows 2 through 5 list the
values of x(n − k) obtained by circular shifts of x(n). Rows 6 through 9 list the values of
h(k)x(n − k). The output y(n) is determined by summing the entries in each column
corresponding to these rows.

TABLE 6.3
Periodic Convolution of Example 6.4.1

  n              0    1    2    3
  x(n)           1    2    0   -1
  x(n - 1)      -1    1    2    0
  x(n - 2)       0   -1    1    2
  x(n - 3)       2    0   -1    1
  h(0)x(n)       1    2    0   -1
  h(1)x(n - 1)  -3    3    6    0
  h(2)x(n - 2)   0    1   -1   -2
  h(3)x(n - 3)  -4    0    2   -2
  y(n)          -6    6    7   -5

While we have defined periodic convolution in terms of periodic sequences, given
two finite-length sequences, we can define a periodic convolution of the two sequences
in a similar manner. Thus, given two N-point sequences x₁(n) and x₂(n), we define
their N-point periodic convolution as

   y_p(n) = Σ_{k=0}^{N−1} x₁(k)x₂(n − k)        (6.4.6)

where x₂(n − k) denotes that the shift is periodic.


In order to distinguish y(n) discussed in the previous section from y_p(n), y(n) is usu-
ally referred to as the linear convolution of the sequences x₁(n) and x₂(n), since it cor-
responds to the output of a linear system driven by an input.
It is clear that y_p(n) in Equation (6.4.6) is the same as the periodic convolution of
the periodic extensions of the signals x₁(n) and x₂(n), so that y_p(n) can also be con-
sidered periodic with period N. If the two sequences are not of the same length, we can
still define their convolution by augmenting the shorter sequence with zeros to make
the two sequences the same length. This is known as zero-padding or zero-augmenta-
tion. Since zero-augmentation of a finite-length sequence does not change the
sequence, given two sequences of length N₁ and N₂, we can define their periodic con-
volution of arbitrary length M, denoted [y_p(n)]_M, provided that M ≥ Max[N₁, N₂]. We
illustrate this in the following example.

Example 6.4.2
Consider the periodic convolution of the sequences h(n) = {1, 2, 0, −1, 1} and x(n) =
{1, 3, −1, −2} of Example 6.3.5.
We can find the M-point periodic convolution of the two
sequences for M ≥ 5 by zero-padding the sequences appropriately and following the pro-
cedure of Example 6.4.1. Thus, for M = 5, we form

   x_a(n) = {1, 3, −1, −2, 0}

so that both h(n) and x_a(n) are five points long. It can then easily be verified that

   [y_p(n)]₅ = {5, 6, 3, −5, −6}

Comparing this result with y(n) obtained in Example 6.3.5, we note that while the first
three values of y(n) and [y_p(n)]₅ are different, the next two values are the same. In fact,

   [y_p(0)]₅ = y(0) + y(5),   [y_p(1)]₅ = y(1) + y(6),   [y_p(2)]₅ = y(2) + y(7)

It can similarly be verified that the eight-point circular convolution of x(n) and h(n),
obtained by considering the augmented sequences

   x_a(n) = {1, 3, −1, −2, 0, 0, 0, 0}

and

   h_a(n) = {1, 2, 0, −1, 1, 0, 0, 0}

is given by

   [y_p(n)]₈ = {1, 5, 5, −5, −6, 4, 1, −2}

which is exactly the same as y(n) obtained in Example 6.3.5.

The preceding example shows that the periodic convolution y_p(n) of two finite-
length sequences is related to their linear convolution y(n). We will explore this rela-
tionship further in Section 9.4.
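This relationship is easy to check numerically; the sketch below (illustrative, using NumPy) implements the M-point periodic convolution by circular shifts and reproduces the results of Examples 6.4.1 and 6.4.2.

```python
import numpy as np

def periodic_convolution(x1, x2, M=None):
    """M-point periodic (circular) convolution of two finite sequences.

    Shorter sequences are zero-padded to length M (zero-augmentation).
    """
    M = M or max(len(x1), len(x2))
    a = np.pad(np.asarray(x1, float), (0, M - len(x1)))
    b = np.pad(np.asarray(x2, float), (0, M - len(x2)))
    # y(n) = sum_k a(k) * b((n - k) mod M)
    return np.array([np.sum(a * np.roll(b[::-1], n + 1)) for n in range(M)])

x = [1, 2, 0, -1]; h = [1, 3, -1, -2]
print(periodic_convolution(h, x))            # Example 6.4.1: [-6, 6, 7, -5]

h2 = [1, 2, 0, -1, 1]; x2 = [1, 3, -1, -2]
print(periodic_convolution(h2, x2, M=5))     # Example 6.4.2: [5, 6, 3, -5, -6]
print(periodic_convolution(h2, x2, M=8))     # equals the linear convolution
print(np.convolve(h2, x2))                   # [1, 5, 5, -5, -6, 4, 1, -2]
```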

6.5 DIFFERENCE-EQUATION REPRESENTATION
OF DISCRETE-TIME SYSTEMS
Earlier, we saw that we can characterize a continuous-time system in terms of a dif-
ferential equation relating the output and its derivatives to the input and its derivatives.
The discrete-time counterpart of this characterization is the difference equation, which,
for linear, time-invariant systems, is of the form

   Σ_{k=0}^{N} a_k y(n − k) = Σ_{k=0}^{M} b_k x(n − k),   n ≥ 0        (6.5.1)

where a_k and b_k are known constants.
By defining the operator

   D^k y(n) = y(n − k)        (6.5.2)


we can write Equation (6.5.1) in operator notation as
NM
)-0 aoDk y(n) = l-0
&
) toDk x(n\ (6.s.3)

Note that an alternative form of Equation (6.5.1) is sometimes given as


NM a

2ooy(.n + k): > box(n+ k), n>0 (6.5.4)


t-0 &-0
In this form, if the system is causal, we must have M s N.
The solution to either Equation (6.5.1) or Equation (6.5.4) can be determined, by
analogy with the differential equation, as the sum of two components: the homoge-
neous solution, which depends on the initial conditions that are assumed to be known,
and the particular solution, which depends on the input.
Before we explore this approach to finding the solution to Equation (6.5.1), let us
consider an alternative approach by rewriting that equation as

   y(n) = (1/a₀)[Σ_{k=0}^{M} b_k x(n − k) − Σ_{k=1}^{N} a_k y(n − k)]        (6.5.5)

In this equation, x(n − k) are known. If y(n − k) are also known, then y(n) can be
determined. Setting n = 0 in Equation (6.5.5) yields

   y(0) = (1/a₀)[Σ_{k=0}^{M} b_k x(−k) − Σ_{k=1}^{N} a_k y(−k)]        (6.5.6)

The quantities y(−k), for k = 1, 2, …, N, represent the initial conditions for the dif-
ference equation and are therefore assumed to be known. Thus, since all the terms on
the right-hand side are known, we can determine y(0).
We now let n = 1 in Equation (6.5.5) to get

   y(1) = (1/a₀)[Σ_{k=0}^{M} b_k x(1 − k) − Σ_{k=1}^{N} a_k y(1 − k)]

and use the value of y(0) determined earlier to solve for y(l ). This process can be
repeated for successive values of n to determine y(n) by iteration.
Using an argument similar to the previous one, we can see that the initial conditions
needed to solve Equation (6.5.4) are y(0), y(1), ..., y(N -
1). Starting with these ini-
tial conditions, Equation (6.5.4) can be solved iteratively in a similar manner.
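This iteration is easy to mechanize; the following sketch (generic coefficients, not tied to any particular example in the text) evaluates Equation (6.5.5) for n = 0, 1, 2, … given the initial conditions y(−1), …, y(−N).

```python
def iterate_difference_equation(a, b, x, y_init, n_max):
    """Iterate y(n) = (1/a[0]) * (sum_k b[k] x(n-k) - sum_k a[k] y(n-k)).

    a, b   : coefficient lists a[0..N], b[0..M] of Equation (6.5.1)
    x      : function returning x(n) for integer n
    y_init : [y(-1), y(-2), ..., y(-N)] initial conditions
    """
    N = len(a) - 1
    y = {-k: y_init[k - 1] for k in range(1, N + 1)}   # known past values
    for n in range(n_max + 1):
        forced = sum(b[k] * x(n - k) for k in range(len(b)))
        recur = sum(a[k] * y.get(n - k, 0) for k in range(1, N + 1))
        y[n] = (forced - recur) / a[0]
    return [y[n] for n in range(n_max + 1)]

# Hypothetical first-order example: y(n) - 0.5 y(n-1) = x(n), x(n) = u(n), y(-1) = 0
print(iterate_difference_equation([1, -0.5], [1],
                                  lambda n: 1.0 if n >= 0 else 0.0, [0.0], 5))
# [1.0, 1.5, 1.75, 1.875, 1.9375, 1.96875]
```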

Example 6.5.1
Consider the difference equation

   y(n) − (3/4)y(n − 1) + (1/8)y(n − 2) = (1/2)ⁿ,   n ≥ 0

with

   y(−1) = 1,   y(−2) = 0

Then

   y(n) = (3/4)y(n − 1) − (1/8)y(n − 2) + (1/2)ⁿ

so that

   y(0) = (3/4)y(−1) − (1/8)y(−2) + 1 = 7/4
   y(1) = (3/4)y(0) − (1/8)y(−1) + 1/2 = 27/16
   y(2) = (3/4)y(1) − (1/8)y(0) + 1/4 = 83/64
etc.

Whereas we can use the iterative procedure described before to obtain y(n) for sev-
eral values of n, the procedure does not, in general, yield an analytical expression for
evaluating y(n) for any arbitrary n. The procedure, however, is easily implemented on
a digital computer. We now consider the analytical solution of the difference equation
by determining the homogeneous and particular solutions of Equation (6.5.1).

6.5.1 Homogeneous Solution of the Difference Equation

The homogeneous equation corresponding to Equation (6.5.1) is

   Σ_{k=0}^{N} a_k y(n − k) = 0        (6.5.7)

By analogy with our discussion of the continuous-time case, we assume that the solu-
tion to this equation is given by the exponential function

   y_h(n) = Aαⁿ
Substituting into the difference equation yields
   Σ_{k=0}^{N} a_k A α^{n−k} = 0
Thus, any homogeneous solution must satisfy the algebraic equation

   Σ_{k=0}^{N} a_k α^{−k} = 0        (6.5.8)

Equation (6.5.8) is the characteristic equation for the difference equation, and the val-
ues of α that satisfy this equation are the characteristic values. It is clear that there are
N characteristic roots α₁, α₂, …, α_N, and these roots may or may not be distinct. If they
are distinct, the corresponding characteristic solutions are independent, and we can
obtain the homogeneous solution y_h(n) as a linear combination of terms of the type
α_iⁿ, so that
   y_h(n) = A₁α₁ⁿ + A₂α₂ⁿ + ⋯ + A_Nα_Nⁿ        (6.5.9)
If any of the roots are repeated, then we generate N independent solutions by multi-
plying the corresponding characteristic solution by the appropriate power of n. For
example, if α₁ has a multiplicity of P₁, while the other N − P₁ roots are distinct, we
assume a homogeneous solution of the form
   y_h(n) = A₁α₁ⁿ + A₂nα₁ⁿ + ⋯ + A_{P₁}n^{P₁−1}α₁ⁿ
          + A_{P₁+1}α_{P₁+1}ⁿ + ⋯ + A_Nα_Nⁿ        (6.5.10)
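Since Equation (6.5.8) multiplied through by α^N is just a polynomial in α, the characteristic roots, and then the constants A_i, can be found numerically; a small sketch (using NumPy, with the coefficients and initial conditions of Example 6.5.2 below) is:

```python
import numpy as np

# Characteristic polynomial of Example 6.5.2 below:
# alpha^3 - (13/12)alpha^2 + (3/8)alpha - 1/24 = 0
roots = np.roots([1, -13/12, 3/8, -1/24])
print(np.sort(roots))            # approximately [0.25, 0.333..., 0.5]

# The constants A_i follow from the initial conditions y(-1), y(-2), y(-3)
V = np.array([[r**-1 for r in roots],
              [r**-2 for r in roots],
              [r**-3 for r in roots]])
A = np.linalg.solve(V, [6, 6, -2])
print(A)                          # matches A1 = 7, A2 = -10/3, A3 = 1/2 (up to root ordering)
```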

trranple 6.5.2
Consider the equation

y@) -Ey@- r1 +
fr(z - 4 - *cy@- 3) = o
with.
y(-l)=6, y(-2)= 6 y(-3) = -2
The characteristic equation is

r-|1"-'+f"-' -Loa=o
or

o,-Eo,*i"-*=o
which can be factored as

("-)("-i)("-i)=.
Sec. 6.5 Diflerence-Equation Representation ol Discrete-Time Systems gOi

so that the characteristic roots are

tlt or=i, ar=4


"r=),
Since {hese roots are distinct, the homogeneous solulion is of the form

nt t = e,(l). n,(1)'. r,(l)"


Substitution of the initial conditions then gives the following equalions for the unknown
constants .r4
r, ./42, and z4r:

zAt+3A2+4A3=6
4At+9A2+164=6
8At + nAz + 64A, = -)
The simultaneous solution of these equations yields

Ar=7, n,= -l:, A.,=:


The hornogeneous solution, therefore, is equal to

vd) -?G) T(]I i(ii


Example 6.5.3
Consider the equation

   y(n) − (5/4)y(n − 1) + (1/2)y(n − 2) − (1/16)y(n − 3) = 0

with the same initial conditions as in the previous example. The characteristic equation is

   1 − (5/4)α⁻¹ + (1/2)α⁻² − (1/16)α⁻³ = 0

with roots

   α₁ = 1/2,   α₂ = 1/2,   α₃ = 1/4

Therefore, we write the homogeneous solution as

   y_h(n) = A₁(1/2)ⁿ + A₂n(1/2)ⁿ + A₃(1/4)ⁿ

Substituting the initial conditions and solving the resulting equations gives

   A₁ = 9/2,   A₂ = 5/4,   A₃ = −1/8

so that the homogeneous solution is

   y_h(n) = (9/2)(1/2)ⁿ + (5/4)n(1/2)ⁿ − (1/8)(1/4)ⁿ

6.5.2 The Particular Solution

We now consider the determination of the particular solution for the difference equation

   Σ_{k=0}^{N} a_k y(n − k) = Σ_{k=0}^{M} b_k x(n − k)        (6.5.11)

We note that the right side of this equation is the weighted sum of the input x(n) and
its delayed versions. Therefore, we can obtain y_p(n), the particular solution to Equa-
tion (6.5.11), by first determining ŷ(n), the particular solution to the equation

   Σ_{k=0}^{N} a_k y(n − k) = x(n)        (6.5.12)

Use of the principle of superposition then enables us to write

   y_p(n) = Σ_{k=0}^{M} b_k ŷ(n − k)        (6.5.13)

To find ŷ(n), we assume that it is a linear combination of x(n) and its delayed versions
x(n − 1), x(n − 2), etc. For example, if x(n) is a constant, so is x(n − k) for any k.
Therefore, ŷ(n) is also a constant. Similarly, if x(n) is an exponential function of the
form βⁿ, ŷ(n) is an exponential of the same form. If
   x(n) = sin Ω₀n
then
   x(n − k) = sin Ω₀(n − k) = cos Ω₀k sin Ω₀n − sin Ω₀k cos Ω₀n
Correspondingly, we have
   ŷ(n) = A sin Ω₀n + B cos Ω₀n
We get the same form for ŷ(n) when
   x(n) = cos Ω₀n
We can determine the unknown constants in the assumed solution by substituting into
the difference equation and equating like terms.
As in the solution of differential equations, the assumed form for the particular solu-
tion has to be modified by multiplying by an appropriate power of n if the forcing func-
tion is of the same form as one of the characteristic solutions.
Example 6.5.4
Consider the difference equation

   y(n) − (3/4)y(n − 1) + (1/8)y(n − 2) = 2 sin(nπ/2)

with initial conditions

   y(−1) = 2   and   y(−2) = 4

We assume the particular solution to be

   y_p(n) = A sin(nπ/2) + B cos(nπ/2)

Then

   y_p(n − 1) = A sin((n − 1)π/2) + B cos((n − 1)π/2)

By using trigonometric identities, it can easily be verified that

   sin((n − 1)π/2) = −cos(nπ/2)   and   cos((n − 1)π/2) = sin(nπ/2)

so that

   y_p(n − 1) = −A cos(nπ/2) + B sin(nπ/2)

Similarly, y_p(n − 2) can be shown to be

   y_p(n − 2) = −A sin(nπ/2) − B cos(nπ/2)

Substitution into the difference equation yields

   (A − (3/4)B − (1/8)A) sin(nπ/2) + (B + (3/4)A − (1/8)B) cos(nπ/2) = 2 sin(nπ/2)

Equating like terms gives the following equations for the unknown constants A and B:

   A − (3/4)B − (1/8)A = 2
   B + (3/4)A − (1/8)B = 0

Solving these equations simultaneously, we obtain

   A = 112/85   and   B = −96/85

so that the particular solution is

   y_p(n) = (112/85) sin(nπ/2) − (96/85) cos(nπ/2)

To find the homogeneous solution, we write the characteristic equation for the difference
equation as

   1 − (3/4)α⁻¹ + (1/8)α⁻² = 0

Since the characteristic roots are

   α₁ = 1/2   and   α₂ = 1/4

the homogeneous solution is

   y_h(n) = A₁(1/2)ⁿ + A₂(1/4)ⁿ

so that the total solution is

   y(n) = A₁(1/2)ⁿ + A₂(1/4)ⁿ + (112/85) sin(nπ/2) − (96/85) cos(nπ/2)

We can now substitute the given initial conditions to solve for the constants A₁ and A₂ as

   A₁ = 13/5   and   A₂ = −8/17

so that

   y(n) = (13/5)(1/2)ⁿ − (8/17)(1/4)ⁿ + (112/85) sin(nπ/2) − (96/85) cos(nπ/2)
Example 6.5.5
Consider the difference equation

   y(n) − (3/4)y(n − 1) + (1/8)y(n − 2) = x(n) + (1/2)x(n − 1)

with

   x(n) = 2 sin(nπ/2)

From our earlier discussion, we can determine the particular solution for this equation in
terms of the particular solution y_p(n) of Example 6.5.4 as

   y(n) = y_p(n) + (1/2)y_p(n − 1)
        = (112/85) sin(nπ/2) − (96/85) cos(nπ/2) + (56/85) sin((n − 1)π/2) − (48/85) cos((n − 1)π/2)
        = (64/85) sin(nπ/2) − (152/85) cos(nπ/2)

6.5.3 Determination of the Impulse Response

We conclude this section by considering the determination of the impulse response of
systems described by the difference equation, Equation (6.5.1). Recall that the impulse
response is the response of the system to a unit-sample input with zero initial condi-
tions, so that the impulse response is just the particular solution to the difference equa-
tion when the input x(n) is a δ function. We thus consider the equation

   Σ_{k=0}^{N} a_k y(n − k) = Σ_{k=0}^{M} b_k δ(n − k)        (6.5.14)

with y(−1), y(−2), etc., set equal to zero.
Clearly, for n > M, the right side of Equation (6.5.14) is zero, so that we have a
homogeneous equation. The N initial conditions required to solve this equation are
y(M), y(M − 1), …, y(M − N + 1). Since N > M for a causal system, we have to deter-
mine only y(0), y(1), …, y(M). By successively letting n take on the values 0, 1, 2, …,
M in Equation (6.5.14) and using the fact that y(k) is zero if k < 0, we get the set of
M + 1 equations

   Σ_{k=0}^{j} a_k y(j − k) = b_j,   j = 0, 1, 2, …, M        (6.5.15)

or equivalently, in matrix form,

   [ a₀    0      ⋯   0
     a₁    a₀     ⋯   0
     ⋮                  
     a_M   a_{M−1} ⋯  a₀ ] [ y(0); y(1); ⋯; y(M) ] = [ b₀; b₁; ⋯; b_M ]        (6.5.16)

The initial conditions obtained by solving these equations are now used to determine
the impulse response as the solution to the homogeneous equation

   Σ_{k=0}^{N} a_k y(n − k) = 0,   n > M        (6.5.17)

Example 6.5.6
Consider the system

   y(n) − (5/4)y(n − 1) + (1/2)y(n − 2) − (1/16)y(n − 3) = x(n) + (1/3)x(n − 1)

so that N = 3 and M = 1. It follows that the impulse response is determined as the solu-
tion to the equation

   y(n) − (5/4)y(n − 1) + (1/2)y(n − 2) − (1/16)y(n − 3) = 0,   n ≥ 2

and is therefore of the form (see Example 6.5.3)

   h(n) = A₁(1/2)ⁿ + A₂n(1/2)ⁿ + A₃(1/4)ⁿ,   n ≥ 2

The initial conditions needed to determine the constants A₁, A₂, and A₃ are y(−1), y(0),
and y(1). By assumption, y(−1) = 0. We can determine y(0) and y(1) by using Equation
(6.5.16) to get

   [  1    0 ] [ y(0) ]   [  1  ]
   [ −5/4  1 ] [ y(1) ] = [ 1/3 ]

so that y(0) = 1, y(1) = 19/12. Use of these initial conditions gives the impulse response as

   h(n) = −(4/3)(1/2)ⁿ + (10/3)n(1/2)ⁿ + (7/3)(1/4)ⁿ,   n ≥ 0

This is an infinite impulse response as defined in Section 6.3.

Example 6.5.7
Consider the following special case of Equation (6.5.1) in which all the coefficients on the
left-hand side are zero except for a₀, which is assumed to be unity:

   y(n) = Σ_{k=0}^{M} b_k x(n − k)        (6.5.18)

We let x(n) = δ(n) and solve for y(n) iteratively to get
   y(0) = b₀
   y(1) = b₁
   ⋮
   y(M) = b_M
Clearly, y(n) = 0 for n > M, so that
   h(n) = {b₀, b₁, b₂, …, b_M}        (6.5.19)
This result can be confirmed by comparing Equation (6.5.18) with Equation (6.3.3), which
yields h(k) = b_k. The impulse response becomes identically zero after M values, so that
the system is a finite-impulse-response system as defined in Section 6.3.

6.6 SIMULATION DIAGRAMS FOR DISCRETE-TIME SYSTEMS
We can obtain simulation diagrams for discrete-time systems by developing such dia-
grams in a manner similar to that for continuous-time systems. The simulation diagram
in this case is obtained by using summers, coefficient multipliers, and unit delays. The
first two are the same as in the continuous-time case, and the unit delay takes the place
of the integrator. As in the case of continuous-time systems, we can obtain several dif-
ferent simulation diagrams for the same system. We illustrate this by considering two
approaches to obtaining the diagrams, similar to the two approaches we used for con-
tinuous-time systems in Chapter 2. In Chapter 8, we explore other methods for deriv-
ing simulation diagrams.

Example 6.6.1
We obtain a simulation diagram for the system described by the difference equation

   y(n) − 0.25y(n − 1) − 0.25y(n − 2) + 0.0625y(n − 3)
      = x(n) + 0.5x(n − 1) − x(n − 2) + 0.25x(n − 3)        (6.6.1)

If we now solve for y(n) and group like terms together, we can write

   y(n) = x(n) + D[0.5x(n) + 0.25y(n)] + D²[−x(n) + 0.25y(n)] + D³[0.25x(n) − 0.0625y(n)]

where D represents the unit-delay operator defined in Equation (6.5.2). To obtain the sim-
ulation diagram for this system, we assume that y(n) is available and first form the signal

   v₄(n) = 0.25x(n) − 0.0625y(n)

We pass this signal through a unit delay and add −x(n) + 0.25y(n) to form

   v₃(n) = D[0.25x(n) − 0.0625y(n)] + [−x(n) + 0.25y(n)]

We now delay this signal and add 0.5x(n) + 0.25y(n) to it to get

   v₂(n) = D²[0.25x(n) − 0.0625y(n)] + D[−x(n) + 0.25y(n)] + [0.5x(n) + 0.25y(n)]

If we now pass v₂(n) through a unit delay and add x(n), we get

   v₁(n) = D³[0.25x(n) − 0.0625y(n)] + D²[−x(n) + 0.25y(n)] + D[0.5x(n) + 0.25y(n)] + x(n)

Clearly, v₁(n) is the same as y(n), so that we can complete the simulation diagram by
equating the two expressions. The simulation diagram is shown in Figure 6.6.1.

Consider the general Nth-order difference equation

   y(n) + a₁y(n − 1) + ⋯ + a_N y(n − N)
      = b₀x(n) + b₁x(n − 1) + ⋯ + b_N x(n − N)        (6.6.2)

By following the approach given in the last example, we can construct the simulation
diagram shown in Figure 6.6.2.
To derive an alternative simulation diagram for the system of Equation (6.6.2), we
rewrite the equation in terms of a new variable v(n) as

Figure 6.6.1 Simulation diagram for Example 6.6.1.
Figure 6.6.2 Simulation diagram for a discrete-time system of order N.

   v(n) + Σ_{j=1}^{N} a_j v(n − j) = x(n)        (6.6.3a)

   y(n) = Σ_{m=0}^{N} b_m v(n − m)        (6.6.3b)

Note that the left side of Equation (6.6.3a) is of the same form as the left side of Equa-
tion (6.6.2), and the right side of Equation (6.6.3b) is of the form of the right side of
Equation (6.6.2).

To verify that these two equations are equivalent to Equation (6.6.2), we substitute
Equation (6.6.3b) into the left side of Equation (6.6.2) to obtain

   y(n) + Σ_{j=1}^{N} a_j y(n − j) = Σ_{m=0}^{N} b_m v(n − m) + Σ_{j=1}^{N} a_j [Σ_{m=0}^{N} b_m v(n − m − j)]
      = Σ_{m=0}^{N} b_m [v(n − m) + Σ_{j=1}^{N} a_j v(n − m − j)]
      = Σ_{m=0}^{N} b_m x(n − m)

where the last step follows from Equation (6.6.3a).
To generate the simulation diagram, we first determine the diagram for Equation
(6.6.3a). If we have v(n) available, we can generate v(n − 1), v(n − 2), etc., by pass-
ing v(n) through successive unit delays. To generate v(n), we note from Equation
(6.6.3a) that

   v(n) = x(n) − Σ_{j=1}^{N} a_j v(n − j)        (6.6.4)

To complete the simulation diagram, we generate y(n) as in Equation (6.6.3b) by suit-
ably combining v(n), v(n − 1), etc. The complete diagram is shown in Figure 6.6.3.

Figure 6.6.3 Alternative simulation diagram for an Nth-order system.

Note that both simulation diagrams can be obtained in a straightforward manner


from the corresponding difference equations.

Example 6.6.2
The alternative simulation diagram for the system of Equation (6.6.1) is obtained by writ-
ing the equation as
   v(n) − 0.25v(n − 1) − 0.25v(n − 2) + 0.0625v(n − 3) = x(n)
and
   y(n) = v(n) + 0.5v(n − 1) − v(n − 2) + 0.25v(n − 3)
Figure 6.6.4 gives the simulation diagram using these two equations.

Figure 6.6.4 Alternative simulation diagram for Example 6.6.2.

6.7 STATE-VARIABLE REPRESENTATION
OF DISCRETE-TIME SYSTEMS

As with continuous-time systems, the use of state variables permits a more complete
description of a system in discrete time. We define the state of a discrete-time system
as the minimum amount of information needed to determine the future outputs
of the system. If we denote the state by the N-dimensional vector
   v(n) = [v₁(n)  v₂(n)  ⋯  v_N(n)]ᵀ        (6.7.1)
then the state-space description of a single-input, single-output, time-invariant, discrete-
time system with input x(n) and output y(n) is given by the vector-matrix equations
   v(n + 1) = Av(n) + bx(n)        (6.7.2a)
   y(n) = cv(n) + dx(n)        (6.7.2b)
where A is an N × N matrix, b is an N × 1 column vector, c is a 1 × N row vector, and
d is a scalar.
As with continuous-time systems, in deriving a set of state equations for a system,
we can start with a simulation diagram of the system and use the outputs of the delays
as the states. We illustrate this in the following example.

Erample 6.7.1
Consider the problem of Example 6.6.1, and use the simulation diagrams that we obtained
(Figures 6.6.1 and 6.6.4) to derive two state descriptions. For convenience, the two dia-
Sec. 6.7 State-Variable Representation ol Oiscret€-Time Systems 31 1

grams are repealed in Figures 6.7.1(a) and 6.7.1(b). For our first dcscription. we use lhr.'
outputs of the delays in Figure 6.7.1(a) as states lo get
) = ur(n) +.r(,r)
.l'(n
(ti.7.3a )

rr,(n + l) = az@) + 0.25.r,(n) + 0.5r(r) (6 7 thr


= 0.25ur@) + u,(l) + 0.75r(rt)
u,(n + l) = u.(n) + 0.25i(r,) - x(,?)
= Q.fJ1,(n) + u.(n) - 0.75.r(rt ) (6.7.3c)

o.,(z + l) = -0.625Y(n) + 0'25r(n)


= -O.M2Sut@) + 0.1875.t(n ) (6.7.3d)

.r(n)

.'J(a) r,, (rr)

r (r)----r

+-
(br

Figure 6.7.1 Simulation diagrams for Exampls ().7 1 .


Discret€-Time Systems Chapter 5

ln vsstor-matri.\ format. lhese equations can be lvritlen as

fo.zs 1ol [o.zs I


vr,r+ l)=l.,.zs 0 t lv(zt+ l-o.zs lxtnt g.7.4)
L o.oozs o o-l [o.rszs-]
l,(n) : [l 0 0l v(n) + r(n)
so that

[o.zs r ol [o.zs I
5=lo.zs 0 ll 5=l-0.7sl, "=U 0 01. d=r (6.7.s)
[-o.r.rozs o o.l Lo.rsTs]
As in continuous time, rve refer to this form as ihe first canonical form. For our second
representation, we have. from Figure 6.7.1(b),
itrln + t1 = itrln'1 (6.7'6a)
02,r' + t) = i{n) (6'7.b)
6.(n + l) = -o.o625ir(n ) +o.25i2b) + 0.25ir(n ) +r(n) (6.7.6c)

y(n) = o.25iln) - ir(n) + 0.5ir(n) + i3(n + l)


= -0.1875i,(n) -0.7562@) +0.75ir(n) +.r(n) (6.1.a)
rvhere the last step follows by substituting Equation (6.7.6c). In vector-matrix format, we
can rvrite

I o 1 ol [o-l
i(n +r)=l 0 o r lltnt+lo l..tnt G.7.7)
[-o.oozs o.2s o.2sl Lrl
y(r) = [-0.1875 -0.75 0.7s] i(n) + .r(n )
so that

^ t o r ol lol
n=l o o I l. b=lol.
L-o.oozs o.2s o.zs)
L,l
c=[-0.187s -0.75 0.7s], d=r (6.7.8)

This is thc second canonical form of the state equalions.

By generalizing the results of the last example to the system of Equation (6.6.2), we
can show that the first form of the state equations yields

-at I "' O-
-:'0...: tl'
, , c= :l d=bo g.,.e)

--o" O ": ; [: :: ;]
Sec. 6.7 State-Vanable Representalion orDiscreie-Time Systems itl J

whereas for the second form, we get


0 I 0 0
n I 0 t:ll
A=
0
-a:,t_l
I
- llt
'Ll
--aN
T
b n - a,rh,,
bx-r -
c= .aru-rhu d=b' (6.7. r0)

b, - arb,,

These two forms can be directly obtained by inspection of the diffcrence equation t-rr

Figures 6.6.2 and 6.6.3. I.€t


v(z+l)=Av(z)+bx(n) (6.7.11)

Y(n)=cv(n)+dx(n)
and
i(n+1)=Ai'(n)+br(rr) (6.7.t2)

.v(n)=6i(n) +2x@)
be two alternative state-space descriptions of a system. Then thcrc cxists a nonsingu-
lar matrix P of dimension N x N such that
v(a) -- Pi(n) (6.7.13)

It can easily be verified that the lollowing relations hold:


A=pAp-r. u=p-'t, i=cP, i=a (6,7.14)

6.7.1 Solution of State-Space Equatione


We can find the solution of the equation
v(r + 1) = Av(n) + br(n), n=0: v(0) = vu (6.7.1s)

by iteration. Thus, setting n = 0 in Equation (6.7.15) gives


v(l)=Av(0)+bx(o)
For n : l, we have
v(2):Av(l)+bx(l)
: A[Av(o) + br(0)l + bx(l)
= A'?v(0) + Ab.r(0) + bx(l)
314 Dlscrete-Time Systems Chapter 6

which can be written as

v(2) : azr1s, * j o'-'-'*1r'1


j-o
By continuing this procedure, it is clear that the solution for general z is

v(n ): a'"16; +!l'-i-'u,g1


j'o
$'7'16)

The first term in the solution corresponds to the initial-condition response, and the sec-
ond term, which is the convolution sum of An-r and bx(z), corresPonds to the forced
response of the system. The quantity An, which defines how the state changes as time
progresses, represents the state-transition matrix for the discrete-time system O(n). In
terms of tD(n), Equation (6.7.16) can be written as
a-l
v(n):q1r;v(o) + )a(z -l- l)bx(7) $.7.1t)
i'o
Clearty, the frrst step in obtaining the solution of the state equations is the determina-
tion of A". We can use the Cayley-Hamilton theorem for this PurPose.

Example 6.79
Consider the system
ur(n + 7) = or(n\ (6.7.18)

ll
ar(n + ll = Sr,(r) - Oar(n)
+ x(n)

y(z) = u'(n )
By using.the Cayley-Hamilton theorem as in Chapter 2, we can write
4" = ao(z)I + c'(n)A (6.7.19)

Replacing A in Equation (6.?.19) by its eigenvalues, -l *a I, leads to the equations


co(n) - ,l",t,= (-l)'
and

c1(z)+1","r=(1)'
so thal

oo(n)=3(i)'.i(-il
or(z)=;(i)'-;l;l
Sec, 6.7 State-Variable Bepresentation ol Dlscrete-Time Syslems 315

Substituting into Equation (6.7.16) gives

^ [ilil::.rl
Lo\a/
il
-a\-zl rlrl*r(-zl.1
r\a/
;l rl l G72.,

Example 6.7.3
Let us determine the unit-step response of the system of Example 6.7.2 for the case whcn
v(0) = [l -llr. Substituting into Equation (6.7.16) gives

v(z) = 4' . i n'-' '[f]t'r


[-',]
[ :ril.:(-)l *[i(i)"' -i(-])"-'l
=
L :o'-:(-)l.*Li(i)'.'
.3(-l)"'l
Putting the second term in closed form yields

'' [.i[;] .i[il].[i.i[i -lli]


The first term corresponds to rhe initial-condition response and thc second term to the
forced response. Combining the two terms yields the total response:
[s z! r r\n zz ttl'1
",,,=[;j[]l=l;_X)_;{ _i)l( I n>0 (672,,

Ls ra\ z/ rs\+/J
The output is given by

y(a)=u,(n)=
;.?(-1)'-it(l),', r,=0 (6'?.22)

We conclude this section with a brief summary of the propertics of the state-transi-
tion matrix. These properties, which are easily verified. are sontewhat similar to the
corresponding ones in continuous time:
l.
rD(n + 1) = Ao(n) (6.7.23a)

2.

o(0) = 1 (6.7.23b)
] iiie: g-,i"[ ;1.i i] li$f'ft'rrigeJ,11ir r:iii

-
A ,.u, tFI I
i.j*-5i6-f+.---'-:--"--'--'
-
-t
Dlscrele-Time Systems Chapter 6

.1. Transition property:


o(n - /<) -- o(tt - i)@(i - k) (6.7.21c)
4. Inversion property:
' o-'(n ) = o(-z) if the inverse exists. (6J.23d)
Note that unlike the situation in continuous time, the transition matrix car be singu-
lar and hence noninvertible. Clearly, Q-t(r) is nonsingular (invertible) only if A is
nonsingular.

6.72 Impulse Rosponse of Systems Describod


by State Equations

We can frnd the impulse response of the system described by Equation (6.7.2) by setting
v,, = 0 and x(z ) :
D(n ) in the solution to the state equation, Equation (6.7.16), to get

v(n)=a'-t6 (6.7.24)
The impulse response is then obtained from Equation (6.7,2b) as

ft(n):3a'-tb+dE1n; (6.1.?s)

Example 6.7.4
The impulse response of the system of Example 6.7.2 easily follows from our previ-
ous results as

[z /r\'-' - 1/-1\'-' I /rY-' - 1/-1Y-'l


h(n"'\,
L:iij :i ;i 'iff .ti ii. tI
]
=i(i)"-i(-l)'-" 'r=o (6'7'261

STABILITY
As with continuous-time systems, an important property associated with discrete-time
systems is system stability. We can extend our defrnition of stability to the discrete-time
case by saying that a discrete-time system is inpuVoutput stable if a bounded input pro-
duces a bounded output. That is, if

lx@)lsu<- (6.8.1)
thcn

ly(n)l <r<-
S€c. 6.8 Stability ol Discrele-Time 7

By using a similar procedure as in r1

a condition for stability in terms ot tne system lmputse rcsl,(rr'.( . ' vcrr . c.,s'.-,,, ",..'l
impulse rcspurlsL: l(rr), let.r(l) bc such that l.r(rr)i '-,!/. Thurr r,, , .:rl.,Lt .v(rr) isgrrut
by the convolution sum:

-v(n)= k') hk)x(tr-k\ ((r.tt.2 t


-'.
so that

lv(n)l = | >,1(k)t(rr
k=-*
-k)l
s ) ltttll lr(n - k)l
t= -t

< /vr > l/r(A.)l


l=-z
Thus, a sufficient condition for the system to bc stable is tltaL ilr , impulse response
must be absolutely summable: that is,

) la1tll " '" (6.rj.3 )

That it is also a neccssary condilii,n can lre seen hy considcrir'r ,' itr[rui thc boundecl
signal x(/<) : sgnUr(n - t)1. or equivalcntly, .t(n - k) '- 'r,til(A')1, with corre-
sponding outpul

i'(n): ) ,'
l(/<)sgn[r(/<)1= )
k=-r
lr,1r r

^
Clearly, if &(n) is not ahsolutely summable,v(a) will be unhotrn.i ',1.
For causal systems, the condition for stability bccomes

i lrroll .- (6.{i...1)

We can obtain equivalent conditions in terms of tlte locatiorrs trl r r elt:ttitt jclistic val'
ues of the syslem. Recall that for a causal systern described hr ;, ,l,i';ct-cnce equati(rn,
the solution consists of termsof tltelbrnrnro",* - 0. 1,.... ful . \', lr(.r\' ct Licnotcs I chir-
acteristic value of multiplicity I/. lt isclearthat if l<rl == l.thc r!11,,)i,ir'is not l-rotrndcd
for all inputs. Thus, for a systcnl to be stahle, irll the charirctl. lr\rrr' ,':llues rnusl ltavc
magnitude less than l. That is. thcy ntust all lie inside a circlc ol Ltr, r. ' r'utlitts ir the corn-
plex plane.
Forthe statc-r'ariable represcntilt i()n, lvL'sawthatthr-solutit'r: itl't'ttdsonthestarr'-
transition matrix A". The fornt ol A" is detcrmined b1/ thc cigr:rr' ;tlucs or charr.clct:\-
tic values oI the nlatrix A. Suppose wc obtain the dill'crcircr' !(lLli]tii)!l r:c'lating t tc
outputy(r)totheinput.t'(r,)byclirninatingthcstatevariahlL:,tIrrtL:.quirtiorrs(6.7.Ja)
and (6.7.2b). It can hc verified that the characterislic valttcs ol ttri.; r.,:uatirlF rrre cxacr.ly
- 818 OlscretFTlme Systems ChaPter 6

the same as those of the matrix A. (We leave the proof of this relation as an exercise
for the reader; see Problem 6.31.) It follows, therefore, that a system described by state
equations is stable if the eigenvalues of A lie inside the unit cfucle in the complex plane.

Example 68.1
Determine if the follos'ing causal, time-invariant systems are slable:
(i) Sptem with imPulse response

,(,) =
[,(-])'. z(])'],t,r
(ii) System described by the difference equation

v@ -* v@ - D - lvrl, - q + !t@- 3) = :(n) + ?t(n - 2)


(itt)System described by the state equations
[r
-ll,r,.
1l
,r,*,r=ll,
La a.l [i]",
rr'r=[r -3]"'
For the frnt syttem. we have

,i_ lrr,ll
=,;.,O'* r(l)" = u *;
so that the systems is stable.
For the second system, the characteristic equation is

., - !r1o, - l" *I =,
and the characteristic roots are qr E: 2,a, = -112 and c, = l/3. Since l"r | > t, ttris sys'
tem is unstable.
It can easily be verified that the eigetrvalues of the A matrix in the last syBtem are
equal to 3/2 t i 1/2. Since both have a magnitude Sleater than l, it follows that the sys'
tem is unstable.

a A discrete-time (DT) sipal is delined only at discrete instants of time.


a A DT sigral is usually represented as a sequence of values .r (z ) for integer values of z.
a A DT signal .r(n) is periodic with period N if x(n + N) = x(n ) for some integer N.
S€c.6.9 Summary 319

r The DT unit-step and impulse functions are related as

,1n1 = j altt
k- -o
6(n) = a1r; - u(n - l)
o Any DT signal .t(n ) can be cxpressed in tcrms of shifted impulse functions as

,@)= L r(k)s(r-k)
t'-a
. The complex exponential r(n) = exp [l0,,n ] is periodic only if Au/Zr is a ratio-
nal number.
. The set of harmonic signals r* (n ) = exp [/.O,,n ] consists of only N distinct waveforms.
r Time scaling of DT signals may yield a signal that is completely different from the
original signal.
o Concepts such as linearity, memory, time invariance, and causality ir DT systems
are similar to those in continuous-time (CT) systems.
o A DT LTI system is completely characterized by its impulse response.
r The output y(n) of an LTI DT system is obtained as the convolution ofthe input
x(z) and the s)'stem impulse response h(n );

y(n)=h(n)*x(n): i n61r1r- n,7


,ttd_,

o The convotution sum gives only the forced rcsponse of the system.
o The periodic convolution of two periodic sequences x,(n ) and rr(r) is
N-l
.r,(n) el xz@) = )
.r,0, &)rr(k) -
l'0
r An altemative representation of a DT sptem is in terms of the difference equation (DE)
N l.l
> bPh-
2ooY@-t)= l-0 k)' n:-0
A-0
o The DE can be solved either analytically or by iterating from known initial condi-
tions. The analytical solution consists of trvo parts: the homogeneous (zero-input)
solution and the particular (zero-state) solution. The homogeneous solution is
determined by the roots of the characteristic equation. Thc particular solution is of
the same form as the input r(rr) and its delayed versions.
o The impulse response is obtained by solving the system DE rvith input.r(a) = E(r)
and all initial conditions zero.
r The simulation diagram for a DT system can be obtained f rom the DE using sum-
mers, coefficient multipliers, and delays as building blocks.
. The state equations for an LTI DT system can be obtained fronr the simulation dia-
gram by assigning a state to the output of each delay. The equations are of the form
32q L,r:scrare-Time Systems Chapter 6

v(rr + 1) = Av(rr) + b-r(n)


.v(rt) - cv(rr) )- :(n)
As in the CT case, for a given DT s1,stem, we can obtain several equlvalent simula-
tioo diagrams and, hence; several equivalent statc re presentations.
The solution of the state equation is
rr..l
v(n) = 61r;v(n) + ),=(, ibin - i - l)b.r(7)

y(n) = cv(n ) + dr(n)


rvhere

O(n) = 4'
state-transition matrix and can be cvaluated using the Cayley-Hamilton theorem.
is the
o The following conditions for tlre BIBO stability of a DT LTI system are equivalent:
(a) ) la(*)l <-
k=-t
(b) The roots of the characteristic cquation are inside the unit circle.
(c) Thc eigenvalues of A are inside the unit circle.

6 10 CHECKLIST OF IMPORTANT TERMS


Cayley-Hamllton theorem Partlcular solutlon
Characterlstc equailon Perlodlc convolutlon
Coefllclent multlpller Perlodlc slgnal
Complex oxponenilal Slmulatlon dlagram
Convolutlon sum State equallons
Delay State varlables
Dlflerenco equallon Summer
Dlecrete-tlme slgnal Trsnsltlon matrlx
Homogeneous solutlon Unlt-lmpulse luncllon
lmpulse responae Unlt-step functlon

6.1 1 PROBLEMS
6.1. F'or thc discrstc-time signal shorvn in Figurc P6.1. sketch each of the following:
(a) .r(2 - rt)
(tr).r(3rr * 4)
(c) .r(i rr + l)
/ ,, I tl\
lo),r(- I /
(e) .r (a t)
Sec. 6.11 Problems 321

-.1 Flgure P6.l r(n ) for Problem 6.1.

(I) x.(n )
(g) .ro(n )
(h) .r(2 - n) + x(3,t - 4)
Repeat Problem 6.1 if

,r,l = {- r, i,., -;, - , }


',
t
Determine whether each of the following signals is periodic, and if it is, find its period.

(s) x(n)=.,"(T.;)
o) r(n) = ''.(1i') -'t(1,)
(c) r(n ) = .'" (lX.) .', (l ")
(d) r(r) *o[?,]
=

(e) .r(z) = *r[rT,]

(f) :(n) = pt,


- 2-) 26(n - -r 3m)l
,i.
(e) r(n) =.*(li') . *'(1,)

The srgnal x(t) = 5 cos (120r - r/3) is sampled ro yield uniforntlv spaced samples 7 sec
onds apart. What values of '/" cause the resulting discrele-timc scquence to be periodic?
What is the period?
6.5. Repeat Problem 6.4 if .t (t) = 3 sin l(trrrr + 4 cos 120r.
6.6. The following equalities are used several places in the tex!. Provc their validity.
( | - on
i-; ..+I
(a) 5'o.=,1
n'o q-l
[,v
rt

a=0 | - q
322 Discrete-Tlme Syst€ms Chapter 6

(") i o'= oo',: o'l-', c * I


I -(l
6.7. Such conccpts from continuous-time systems as nrcmory. time invariance, lineariry, and
causality carry over to discrete-time systems. tn the following, x(a) refers to the input to
a syslem and.v(a) refers to the output. Derermine whether the systems are (i) linear.
(ii)memoryless, (iii) shifi invarianr, and (iv) causat. Justify your answer in each case.
(o) y(r) = log[.r(n)l
O) y(n) = x(nlx(n - 2)
(c) y(n) = 3nt(n)
(d) y(z) = nx(n) + 3
(e) Y(n ) =:(n - l)
(f) y(a) = r(r) + 7:(n - l)
(0y(,1)=irttl
l-0
(h)y(n)=irtrl
t'0
(r) y(n) = _ o,
i,;,,
0) y(z) = it,\;rntrrt, - oy
(k) y(a) = median lr(n - l)..r(a).r(n + t)l
o) y(a) =
[:,xi;,, ; :3
(rn)y(n) =
6.8. (a) Find
[:,;l;,,
the convolution
ili:;
.y ln) = h(n) r r (n ) of the following signals:
[_r _5sn< _l
(t):(a)=t,' o=n=4
h(n\ = 2u1nS
0r) x(n) = (])",r,t
h(n) = 51n1+ 5(n - rr - (f)',t,1
(Ul) r(a ) = x12;
h(a)=1 0s"s9
(rv)'r(n)=(i)"'t'l
/r\a
ft(a,y= 5,111 * (iJrtrl
(v) x(n ) = (])',.,. )

[(r) = 51r; *
0)'",,
Sec. 6.11 Problems 323

(vl) -r(n) = nu(n)


h(n)= 41n1 - u(n - l0)
(b) Use any mathematical software package to verify your resuhs.
(c, Plot )(n ) vs n.
6.9' Find the c<rnvolution y (n ) = h(n) * x(n) for each of the foUowing pairs of finite sequenc€s:

(a) r(a) =
{, -] I -i :}, h(n) = 11, -r. rr,-r}
f
(b).r('t)= 11,2..1.o.-1.1, hln) = 12,-1,3.1.-21
(c) r(,') =
{,
j I ,.r} nat = , |,-}}
{2.-
1

(d).r(n) = h@) = tt.l,1,r,rl


{-,,;,;,-i,,},
(e) Verity your results using any mathematical software package.
(f) Plot the resulting y(n ).
6.10. (a) Find the impulse response of ihe system shown in Figure P6.10. Assume that

h,(n)=h2@)=(i)"1,)
h{n) = u(n)
n,ot =
G)"at
(b) Find the response of the system to a unit-step input.

h tl l h2Ot)

Flgure P6,10 System for Problem 6.10.

6.1L (o) Repeat Problem 6.10 if

,,t,,r = (l)',t,r
h2(a) = 6@)

h,1n'1= ho@)= (i)"r,r


(b) Find the response of the system to a unit-step input.
324 Discrste-Time Systems Chapter 6

6.12. Let rq (a ) and r.(n ) be two periodic sequences with period N. Show that

) r,(/<)r.(n - &) = )'r,(k).r3(rr - k)


l=0 I ',,u
6.f3. (a) Find the penodic convolurion .vr(r ) of the flntte-length sequences of Problem 6.9 bv
zero-padding lhe shorter sequence. How is 17(n) related lo the y(a) that you deter-
mined in Problem 6.9.
(b) Veri$ your results using any mathematical software package.
6.1d. (e) In solving differential equations on a computer, we can approximate the derivatives
of successive order by the corresponding differences at discrete time increments ?.
That is, we replace
d'(t)
y(,) =
dt
with

y(nT) _
x(nT) - x((n - t)r)
T
and
dv(t) d2x(t)
z0) dt dt2

with

z(nr) = tWP - x@r) - ?:((n - \r) + x((n - 2)T)


,

Use thls approximation to derive the equation you would emPloy to solve the differ-
ential equation

,4#*y(t)=x(r)
(b) Repeat part (a) using the forward-difference approximation
q9:'t((z+1)I)-r(nt)
dtT
6.15. We can use a similar procedure as in Problems 6.13 and 6.14 to evaluate the integral of
coDtinuous-time functions. That is, if we want to find

y()= +r(o)
f,"r(t1dt
we can write

aP =,r,
If we use the backward-difference approximation for y(l)' we get

y(nT) = Tx(nT) + y((n - l\r), y(0) = ,t(to)


whereas the fon*'ard-difference approximation gives
Sec. 6.1 1 Problerns 325

r{(,rr Iill 'i.\U'f r(,r7t. r.(()}


)+ .r(r,l
(a) Lisr thcsc irppr()\rntalt()rr l, dcternlinc thc rnlcgrill ()f thc,:i.r:!inuous-time function
shown tn Figurc Ptr l5 l r r rn lhe rarrgt, [t). .il. r\ssunrc th;,r ,l . u.02 s. What is the
-.rr,r .1. r,i .1..'/t., \rl.r -) 5.
(b) Rcpcat part (irl k-,r 7' = (l(ll s. conllncnt ()n vour fesults.

Figure P6.15 lnput for Problem


ot:r (sccunJs I 6.1-5-

6.16. A better approximation lt-l thc intL.gral in Problem 6..l-5 can hu rrlrtained by the trape-
zoidal rule
71..1rf)
.r,(nf) = 2
+ -r(,l - I)f ) +.v((, - t)i )

Determine the inteeral of the function in Problem 6.15 using this rulc.
6.17. (a) Solvc the following diffcrcnce equations bl,iteration:
(i) l(n) +),(x - I) + lolln - 2) =.r(n). n>0
.r'(- l) - 0. -v(-3) - 1. .r(n) = 111,, 1

1t
(ii) l(,r) -'o!@- I){;r'(z -2)=-r(r). ,r>0
](- rr = r, y(-2) = t). ,t,t -- (l)',,r,1
(iil) t,(r ) + .vtn
f - rt + Jltr - 2)=t(n), x
=(l
.v( - l) =0, y l-2 I = tt. ,t,= (l[ rl (r, )

I 1
(iv) .y(n + l) +
,I' (n- I) = x (n) - ,r(n - l), ,r=()

](0) = l, .r Qr )= [) u(n)
(v) vQt) = r(n) + l..r n- l) + Zr(fl.- 2). r=0
J
.r (rr ) = &(r)
(b) Using any rnathematical software package, verii lour resulls lor'l in lhe range 0 to
20. Obtarn a plot r)l l,(n ) vs. r.
g26 Discr€te-Time Systoms Chapter 6

6.18. Determine tha characteristic roots and the homogeneous solutions of the following dif-
ference equations:

(l) r(n) -.y(n - g+)t(n -2) = x(n). n=0


Y(- t) = 0' Y(-2) = 1

(lt) y(r) - |r,, - rl - l.rt, -2l= x(n)*trrln- t'1. z=0


.Y(- l) = I' Y(-2) = s
(lil)y(n) -.v(n - t7*'Or<, -2)= x(n), n >0
.Y(- 1) = l. 1'(-2) = 1

1t
(iv)y(n) -it@-D+ S@-2)=r(n). n>o
v(- 1) = 2. Y(-2\ = o
tl
(v) y(a) - att" - 1)-ry(, - 2) = x(n), n=0
Y(- r) = r' Y(-2): - 1

6.19. Fiod the total sotution to the following difference equations:

(a) y(n)*|rt, - r)-*yt, - 2)= r(n)*!,6- r'1. n =0


if y(- l) = I' y(-Z) : O, and x(n) : 2 cos3-fi
r :0 /ly a(a)
and r(z) = (-2l
(b) y(a) + ;t@ - l) = r(z) if y(- l)

(c) y(z) * ;lt<n - r) = r(n) if y(- t) :I and r(z) = (i);Ol

(d) y(n)*flrt, - 11 +frt, -2)=r(n), r,=o


ify(-l) = y(-2) = | and rrr= (i)"
(e) y(n + 2)-|iy@+r)-fir(z)=:(n)+|r1a+ r), z>0
if Y(1) = Y(0) = o arrd ,r, = (l)"
62t1. Find the impulse response of the systems in Problem 6.18.
621. Find lhe impulse respons:s of the systems in Problem 6'19.
6,til we can find the difference-equation characterization of a system wu=ith a given impulse
respnse ft(n ) by assuming a difference equation of apPropriate order with unknown coef-
fit-ients. We can substituie the given A (n ) in the difference equation and solve for the coef-
fir:ienrs. Use this procedure to finrl rhe difference-equation representation of the system
with impulse r".pln* h(n) = ('til"u(n) + (l)'u(n) by assuming that

-v(n) + ay(n - l) + by(n - 2) = u(n) + dr(n - 1)

and find a, D, c, and /.


Sec. 6.11 Problems 327

6.ani. Repeat Problem 6.22 if

or,r= (-l)',,t,r - (-j)',r, - rr

6J4 We can frnd the impulse response /r(z) of the system of Equation (6.5.11) by first linding
the impulse response fto(a ) of the system

2
t.0
ory(n - k) = x(n) (P5.1)

and then using superposition to find


M
/l(n)=)b,ho@-k)
t -0
(a) Verify that the inilial condition for solving Equation (P6.1) is

!
v(o) = al

(b) Use this method to find the impulse response of the system of Example 5'5.6.
(c) Find the impulse responscs of the systems of Problem 6.17 by using this merhod.
6li. Find the two canonical simulation diagrams for the systems of Prohlem 6.17.
615. Find the corresponding forms of the state equations for the systems of Problem 6.17 by
using the simulation diagrams that you determined in Problem 6,2.5.
627. Repeat Problems 6.25 and 6.76 for the syslems of Problem 5.1E.
6r& (a) Find an appropriate sei of state equations for the systems describcd by the following
difference equations:

(r) y(n )- fitr" - rl * ]ot<" - z) - iqy@- 3) =.t(n)


(ll) y(n) + 0.70't y(n - l) + y(n - 2) = x(n') * *
lrO - r1 j.r1n - z1

(|ff) y(n) - 3y(n - t) + zt(n - 2) = x(n)


(b) Find A'for the preceding systems.
(c) Verify your results using any mathematical soltware packagc.
6g). (a) Find the unit-step response of the systems in Problem 6.2E if v(0) = n.
(b) Verify your results using any mathematical software pa:kage
Consider the state-space systems with

A= r=[-"]. c=il rt. r/=o


(s) Veri$ that the eigenvalues of A are I and - | .
o) Let i(n ) = Pv(n). Find P such that the state representation in terms of i(z) has

[i
i=li ,l o.l

L'-ll
B2B Discret€-Time Systems Chapter 6

(This.is the diagonal form o[ the srarc cquations.) Find the corresponding values for
b. i. d. and i(0).
(c) Find the unit-step response ofrhe system representation thal you obtained in part (b).
(d) Find the unit-step response of rhe original system.
(e) Verify your results using any mathematical software package.
63L By using the second canonical form of the state equations, show thar the characteristic val-
uesof the difference-equation representation of a system are the same as the eigenvalues
of the A matrix in the state-space characterization.
631 Determine which of the following sysrems are stable:
(a)

"'= {[ll ;-,


(b)

nr,,=[3', o<a<t(x)
10.
otherwise
(c)

n>o
,,,, _ [(l)""..'.
Iz"cosrz. n<o
(d) y(z) =:(n) + zx(n - *
1)
)16 - z\
(e) y(z) -2y(n - l) +y(z - 2) = :(n) + x(r - l)
(r) .v(a + z) -1y@ - D - lyb - 2) = x(n)
I t rl
rer
'r, + rr = l ? ] 1,",. [_ i] ,,,,, y(n) = rr olv(z)
L-o')
(h)v(,,+,)=[-l l],r,1.[f] .r,r y@)=tz rlv(a)
Chapter 7

Fourier Analysis
of Discrete-Time Systems

7.1 INTRODUCTION
In the prcvious chapter. wc considcred techniques for the tintr:-dornain analysis oldis-
crete-time systems. Recall that. as in the case of continuous-tinre systems. the primary
characterization of a linear. time-invariant. discrete-time system that we used was in
terms of the response of lhe system to the unit impulse. In this lrrd subsequcnt chap-
teni, we consider frequency-domain techniques for analyzing discrete-time systems, We
start our discussion of these techniques with an examination of thc Fourier analysis of
discrete-time signals. As we might suspecl, the results that we ohtain closely parallel
those for continuous-time systems.
To motivate our discussion of frequency-domain techniqucs, Iet us consider the
response of a linear, time-invariant, discrete-time system to a complex exponential
input of the form
r(n) = ." (7. r.r )

where e is a complex number. lf the impulse response of the system is ft(rl), the out-
put of the system is determined by the convolution sum as

y(r)= X lr(/<)r(n-k)
*= --

=Z h?\2,-k
k=--

= z') hG\z-k (7.t.2)


k=--
329
ggo Fouri€r Analysis ot Discrete-Time Systems Chapter 7

For a fixed 4, the summation is just a constant, which we denote by H(z): that is,

H(z)=
l- --
i ofo',r-o (7.1.3)

so that

y(n) = H(z\x(n) (7.1.4)

As can be seen from Equation (7.1.4), the outputy(n) is just the input.r(n) multiplied
by a scaling factor I/(e).
We can extend this result to the case where the input to the system consists of a linear
combination of complex exponentials of the form of Equation (7.1.1). Specifically. let
,v
x@)=)alzi (7.1s)
It= I

It then follows from the superposition property and Equation (7.1.3) that the ouput is
N
y(a) = ) arH(zt)zi
l-l
iV

= ) b,zi
&-l
e.t.6)

That is, the output is also a linear combination of the complex exponentials in the
input. The coefficient bo associated with the function z[ in the output is just the corre-
sponding coeffrcient a* multiplied by the scaling factor H (21).

trrample 7.1.1
Suppose we want to find the output of the system with impulse response

ar,r = (i).,r,r
when the input is

x(n) = 2*'2a n

To frnd the outpul, we fint find

/{(z) =
.i (:).. =.4 (i. ,l = . <1
;_ ,_, lj
where we have used Equation (5.3.7). The input can be expressed as

*<,t = .
"*eliz],] *, [-, T,]
so that we have
, . Sec. 7.2 Fourler-Series Represontation of Discrete-Time periodic Signals 331

',
= *o[,?]' =
', "-o[-;:u],
at=i]'=l
and

H(;,) = =
I
t - ;exp[-j(2r/3)l {?'*or-;*t'
I , \/t
H\22) = - -- 1-
-- --- = :2z
"xpUil, O = tan-,
t- rexplj(2n/3)l vt 5

It follorvs from Equation (7.1.5) that the output is

v@= * ?l -,, ':


Acxpllol*rl,?,] 14 '*n [ "]
=:""'(f , * o)

A special case occurs when the input is of the form exp [jO^ ], where Oa is a real, con-
tinuous variable. This corresponds to the case lz1 | = 1. For this input, the output is
y(n) - H(eitt) exp[7oon] (7.t.7)
where, from Equation (7.1.3),

H(eia1= H(O) = ) a(r)exp[-j{)n] (7.1.8)

7.2 FOURIER-SERIES REPRESENTATION


OF DISCRETE-TIME PERIODIC SIGNALS
Often. as wc have seen in our considerations of continuous-time systemsr we are inter-
ested in the response of linear systems to periodic inputs. Recall that a discrete time
signal x(a) is periodic with period N if
r(n)=r1r*r, (7.2.1)

for some positive integer N. lt follows from our discussion in the previous section that
if x(n) can be expressed as the sum of several complex exponcntials. the response of
the system is easily determined. By analogy with our representation of periodic signals
in continuous time, we can expect that we can obtain such a representation in terms of
the harmonics corresponding to the fundamental frequency 2r/N.That is, we seek a
representation for x(r) of the form

,(r) = )tAa1,exp[j0rnl= ) a*xo(rr) (7.2.2)


Fourier Analysis ot Discrete-Time Systems Chapter 7

where ()o = 2rk/N.It is clear that the xo(n) arc periodic, since Oo/2tr is a rational
number. Also, from our discussions in Chapter 6, there are only N distinct waveforms
in this set, corresponding to k = 0, l. 2. .... N l. since -
xaQr) = rr*,r(n ), for all k (7.2.3)
Therefore, we have to include only N terms in the summation on the right side of
Equation (7.2.2). This sum can be taken over any N consecutive values of &. We indicate
this by expressing the range of summation as i! = (M. However, for the most part, we
consider the range 0 s & < N - l. The representation forx(n) can now be written as

,(r) = ,r*oli'#o^f (7.2.4)


o)-,,
Equation (7 ,2.0 is the discrete-time Fourier-series representation of the periodic
sequence x(n) with coefficients a*.
To determine the coefficients al, we replace the summation variable & by nr on the
right side of Equation (7 .?.4) and multiply both sides by expl- j2rkn /Nl to get

,1,r; e*p[-;'; *]= <^ - o,t"] (7.2.s)


^\*o^"-rfi'ff
Then we sum over values of n in [0, N - 1] to get

,e,","*p[-r'i*]=,r*-2.*,- *rli'#o - onf e.2.6)

By interchanging the order of summation in Equation (7 .2.6), we can write

5',r,1*p[-i'#*)=,,e,e o^"*ol1!o - ovf (7.2.7)

From Equation (6.3.7), we note that


/V-l r A,

)o':l-s - o*1 (7.2.8)


ilo I-a
For a = l. we have

)c':N
,-0
(7.2.e)

lf m - k is not an integer multiple of N (i.e., m k rN for - * r = 0, +1, t2, etc.), we


can let a = expljZr.(m t)/Nl in Equation (7.2.8) to get
-
*o[r* r)4t o
F*
{m - on)= !-+r-Ie(?C,4)@l = (,.z.to)
lf. m - k is an integer multiple of N, we can use Equation (7.2.9), so ihat we have

.>* "*[r'# 1^ - *1^)= it (7.2.11)


sec. 7.2 Fourier-series Representation of Discrete-Time periodic srgnals agil

Combining Equations (7.2.10) and (7.2.11), we wrire

"0[r'I ,^ -ol,] = ru1,,


- k - rN) (7.2.t2)
,r*
where E(rz - k - rN) is the unit sample occurrinl ar m = t + rN. substitution into
Equation (7.2.7) then yields

,P.,,,,, "*p[-i'i *] =,,}, r,,,0{,zr - A- rN) (7.2.13\

since the summation on the right is carried out over N consecutive values of m for a
fixed value ol k, it is clear thar the only value that r can take in the range of summa-
tion is r = 0. Thus, the only nonzero value in the sum corresponrJs to k,and the i=
right hand side of Equarion (7.2.13) evaluates to Nar, so thar

,^ = r(r) .-p ,f kr] (7.2.14)


* ,Z [-,
Because each of the terms in the summation in Equation (7.2.14) is periodic with
period N, the summation can be taken over any N sucessive values of r. we thus have
the pair of equations

r(n) = ) ,r*r[i';- orf (7.2.15)


r =(M
and

,o : ',T *] (7.2.16)
i,,?r..(r1.xp[
which together form the discrcte-time Fourier-series pair.
Since r^ , r(n) = .rl(a), it is clear that

o**N = a* (7.2.17)
Because the Fourier series for discrete-time periodic signals is a finite sum defined
entirely b'; the values of the signal over one period, the series always converges. The
Fourier series provides an eract alternative representalion ol thc time signal, and issues
such as convergence or the Gibbs phenomenon do not arise.

Example 7.2.1
Let.r(z) = exp [lKf[nl for some K with O0 = 2rr/N. so rhat.r(1) is periodic wirh period
N. By writing.r(r) as

,t,l=.*o[7'f r,]o=nSn - r.

ii follows lrom Equation (7.2.1-5) that in rhe range 0 s k < N l. only a^ = l, with all
other a, bcing zero. Since d1 *,y = (,6, thc spcclrum of .r(n) is a linc spectrum consisting of
discrcte irnpulses of magnitude I repeated ar intcrvals N(),,. as shorvn in Figure 7.2.1 .
3U Fourler Analysis of DlscreleTlme Systems Chapter 7

(N - tr)Oo 0 /(oo K)Oo


(N + K)Oo I

Ilgure 721 Spectrum of complex exponential for Example 7.2.1.

Example 72J
L-t r(z) be the signal

'(,): "*(T) ..t(T -;)


As we saw in Example 6.1.2, this signal is periodic with period N = 726 arrd fundamental

frequency {ln = 2n /126, so rhat n/9 ana correspond to l40o and l8fh respectively.
}
Since -&f!o corresponds ro (N - *)f!0, it follows thal -; ana - | can be replaced by
l(ts0, and ll2.tlr. We csn therefore write

,@) =:[dr + e-r11 +


]Vta't
+ e-ite-i';j

=la,*" *4,uon, *|r,*, +1a,,**


so that

,* = I = anz.arE : -- other a. = 0,0


= k = t?s
17 "l^,all

Frarnple 7.23
Consider the discrete-time periodic square wave shown in Figure 7.2.2. From Equation
(72.16). the Fourier coefficients can be evaluated as

,. = f
.i "*o[-rr" *]
- O lL'

ftgure 7J2 Periodic square wave of Example 7.2.3.


Sec. 7.2 Fourier-Series Representation ol Discrete-Time Periodic Signals 3il5

Fork = 0.

o, =
|,t,',trr =?4-P
t + 0, \ve can use Equation (6.3.7) to Bet
For

t exp[j(2t/N)kMl - expl- j(zr/N)k(M + t))


at=
it - r --.*pl-]tz"tnkl

=L
N
"y-- a" / *L lzl[:: Il':':y @ : tn--- *o [ - i t' " /Y ('
t
expl-j(2r/N)(k/2)llexplj(2r/N)(k/2)l - expl- j(2r/N)(k/2)ll
jl]
_,.,|,#("lj)] k=r'2"N-I
=''qt';l:l' ,

We can, thcrefore, write an expession for the coefficients ar in tcrnrs ()f the sample values
of the function
sin[(]M- 1 lXq4l
t,r,,.'
\'" =
' sin (o/2)
as

1 l2rk\
'-=n{ru/
The function /( . ) is similar to the sampling tunction (sin x)/-r that u'c have encountered
in the continuous-timc case. Whereas the sinc funclion is not periodic. the function/(O),
being the ratio of two sinusoidal signals with commensurate frcqucncies, is periodic with
period 2zr. Figure 7.2.3 shorvs a plot of the Fourier-series coefficicnts for M = 3, for val-
ues of N corresponding to 10. 20. and 30.
Figure 7.2.4 shows the partial sums xr(z) of the Fourier-series cspansion for this exam-
ple for N = Il and M = 3 and for values ofp = 1, 2,3,4,and 5. rvhcre

,n(n)=
ofoo**rli'#o^l
As can be seen from the figure, the partial sum is exactly the origrnal sequence forp = 5.

Erample 72.4
l.-et.r(n) be the periodic extcnsion of the sequence

[2. -1. 1.2]

The period is N = 4, so that exp [- ih / Nl = -1. The coefficients rra are therefore given by

I
a,,=.r(2-l+l+2)=l
of Dlscrote.Time sysrems Chapter 7
f.r:i, Ir^'s

N=20

.ry =J0

tlgue 723 Fourier series coefficiens for the periodic square wave of Example 7.23.

o,=Irr+i-t-zn=l-il )

or=le*r*r-zl=l
o,=Ie-i - r + ,,t =I* il=
"l
In general, if .r(a) is a real periodic sequence. then
at, = ai-* (7.LtE)
ll)
(!

Q)
(E
a
cr
(J
!,
.9,
L
{,,
:(!
uo
c
o
tr
lll
o.
x(l)
(u
t-
q,

c)

=
o
E.
!u

:U:

(,
A.

9
Ft
F
oa

ba
It

337
Fourier Analysis ot Discrete-Time Systems Chapter 7

Example 72.6
Consider the periodic sequence with the following Fourier-series cocfficienls:

,^ = j.inT .,'r.orki. o<ft s tl


Tlre signal .t (r ) can be determined as

.*1n; = j a^ ex eli'zri rrl


=*[".r,r,*"/6)l-rypl-j(k"/6\l +9xpli(ktr/J)l_!rexpl-i(kr/2lll*r[i];*1
= ot' * - ot" -
,? [,L {"-o['1; 'rl "-p[r ]1 'r]]
. )[*r[,i;or, * rr] * *pfi2ri,,(, - 3)]]]
Using Equation (7.2.12),we can write this equation as

.r, = + r) - lr,, - rl *
l11,
*rr * ]s(n -3)
;6(n
The valucs of the sequcnce.r(rr) in onc pcriod arc thcrefore given by

{. - ;.r.j.o o.r.o.o.l.o.1}
rvhere we have used the fact that.r(N + *; = .11;a;.

It follows from the definition of Equation .2.16) that. given two sequences x, (n)
(7
and -r2 (n). both of period N. with Fourier-series coefficients a,^ and aro. the coefficients
for the sequence A,r, (l) + Bxr(n) are equal to Aor* I Ba.,r.
For a periodic sequence with coefficients at. we can find the coefficients D. corre-
sponCing to the shifted sequence.r(rr - m) as

b^ = i,,E=,r,,, - ^l*el-izl *,f (7.2.te)

-
By replacing n m by a and noting that the summalion is taken over any N succes-
sive values of n. we can write

,^ = (i,,)=,,.,(,,.*p[-r ? o,])"-r[-,'J r,f= .*p[-i'; o^f"r (7.2.20)

Let the periodic sequence x(n). with Fourier coefficients a*. be the input to a linear
system with impulse response ft (a). where /r(n) is not periodic. [Note that if fi(z) is also
periodic, the linear convolution ofr(r) and ft(n) is not defined.] Since. from Equation
(7.1.7). the response y^(n ) to input a* explj(Zr/N)knlis
Sec.7.2 Fourier-Series Representation of Discrete-Time Periodic Signals 339

v*(n\ = ,r,(+t) .-n


l; f; t,l \7.2.21\

it follows that the response to r(n) can be written as

v@): oZ-vr(n)
=
*P-- "rr('*'o).-o[;2f ",]
(7.2.22)

where H(Zrk/N)is obtained by evaluating H(O) in Equation (7.1.8) at A = ZtklN.


If x,(z) and xr(n) are both periodic sequences with the same period and with
Fourier coefficients aroand a21,, it can be shown that the Fouricr-series coefficients of
their periodic convolution is (see Problem 7.7b) Nar2ar1. That is.
.r,(n) @ x2(n) <-+ Naroay
Also, the Fourier-series coefficicnts for their product is (see Prohlcrm 7.7a1 aro @ a*
.r' (r ).rr(n ) o atk t*) au

These properties are summarized in Table 7-1.

TABLE 7-1
Proportlea ot Dlscr€teTlme Fourler S€deB

l. Fourier coefficients .t,(n) periodic with period N ,,- :,1;..,1,,y",pI t?i,o] (7.2.t4)

2. Linearity Axr(n) + Bxr(n) Aar* * Bau


f .- 1
| ,Ltt ,
3. Time shift x(n - m) expl't-i Kn)nt
I
(7.2.20)

4. Convolution x(n\ * fi1n1' ft(n ) not periodic a,.It\; k) (7.2.2t)

H(o) = cxp[-jo,l
,i_r'(n)
5. Periodic convolution r, (n) O .rr(z) Nar*a* (7.2.23)

6. Modulation tr(n)xr(n) atl @) ozt (7.2.24)

Example 7.2.6
Consider the system with impulse responsc h(n\ = (1/3)'u(r). Suppose tvant to find thc
Fourier-series representation for the output y(a) when the input.t(n) is the periodic
extension of the sequence z. - 1,1,21. From Equation (7.2.22),it follows thal we can writc
y(n ) in a Fourier series as

y(,) =
*r^"-o[i'Jo,]
with
340 Fourior Analysis ol Discrete-Time Systems Chapter 7

b' = k)
"'H(T
From Example 7 .2.4, we have

ao = r. ', = ,l = )-ii. ,, ='i


Using Equation (7.1.8), we can wrile

H(o) (l)'*r,-,*, =
=,2Eo\e' .+--
l-iexp[-jo]
so that with N = 4, we have

'('; r) = , - i*o[-i]t]
tt follows that

\= n@a--3;

t, = ,(t),, =tLt:i?)
br= HQr)ar=f,

, Dt,r = 3(1 +j2)


Dt= -'nn

7.3 THE DISCRETE-TIME FOURIER TRANSFORM


We now consider the frequency-domain representation of discrete-time signals that are
not necessarily periodic. For continuous-time signals, we obtained such a representa-
tion by defining the Fourier transform of a signal .r(l) as
r"
x(u,)=g[x(t)l=lr(t)exp[-jtot]dr
J--
(7.3.1)

with respect to the transform (frequency) variable r'r. For discrete-time signals, we con-
sider an analogous definition. To motivate this definition, let us sample x(t) uoiformly
every Iseconds to obtain the samples:(nT). Recall from Equations (4.4.1) and (4.4.2)
that the sampled signal can be written as

r,(r) : x(r) j s1, - ,r1 (7.3.2)


n1-b
Sec. 7.3 The Discrete-Time Fourier Translorm 341

so that its Fourier transform is giren by

x,(,) =
[' -r,1t7e-i-' dt
= I
J-,
x(r) >
n=-e
6(t - nT)e-,''dt

= (7.3.3)
,i_*x(nT)e-i'r'
where the last step follows from the sifting property of the 6 funclion.
If we replace ro7 in the previous equation by the discrete-timc :requency variable
O, we get the discrete-time Fourier transform, X(O), of the discrctc-time signal r(r),
obtained by sampling.r(t), as

x(o) : *lx(n)l: i ,(nl exp[-iorr] (7.3.4\

Equation (7.3.4), in fact, defines ,h. di.";;-;*e Fourier transforrn of any discrete-
time signal x(z). The transform exists if x(n ) satisfies a relation of the type

) l.r(n)l <- or ) lx(n)1'z<- (7.3.s)

These condition. ur" ,ir,.,"nt to guarantee ,nr, *.


sequence has a discrete-time
Fourier transform. As in the case of continuous-time signals, there are signals that nei-
ther are absolutely summable nor have finite energy, but still have discrete-time
Fourier transforms.
We reiterate that although ro has units of radians/second. O has units of radians.
Since exp [loz ] is periodic with period 2n, it follows that X(O) is also pcriodic with
the same period, since

X(O+2z-)= i r(nlexp[-l(O +2n)nl


"=:-
: exp[-ion] = x(o) (7.3.6)
,I_r(r)
As a consequence, while in the continuous-time case we have to consider values of o
over the entire real axis, in the discrete-time case we have to considcr values ofO only
over the range [0,2r].
To find the inverse relation betrveen X(O) and x(rr). we replac,: the variable n in
Equation (7.3.4) by p to get

x(o): i ,fplexp[-lop] ('1.3.7)


P=-6

Next, we multiply both sides of Equation (7.3.7) by exp [j0n] and intcgrele over thc
range [0,2n] to get
g4? Fourier Analysis ot Discrete-Time Systems Chapter 7

exp[ion]r/o = exp[io(n - p)ldo (7.38)


,[=,,r,n, I,i*"=,, r}*_rrp)
Interchanging the order of summation and integration on the right side of Equation
(7.3.8) then gives

f" ,tnl exp[ion]rlo = ,|-,rorf"


exp[;o(n - dlda (7'3e)

It can be verifred (see Problem 7.10) that

- p\tdo= {3: :;i


(7.3.10)
f expt;o(n

so that the right-hand side of Equation (7.3.9) evaluates to 2fr(z). We can therefore write

,fO =
l,f" xtol exp[ion]do (7'3'11)

Again, since the integrand in Equation (7.3.11) is periodic with period 2r, the integra-
tiJn can be carried out ovel any interval of length 2zr. Thus, the discrete-time Fourier-
transform relations can be written as

x(o)= ir{'texP[-ion] (7'3'12\

,Ul =
** 1,r,,
X(o)exp[ion]do (7.3.13)

Example 7.3.1
Consider the sequence
x(n) = o"u1n'' l"l t
'
For this sequence,

x(o) = i
n-0
c'exp[-ion] = G+i-Fj
The magnitude is given bY

lxtoll =
\4 +;, _ 2a-cosO

and the phase by

Argx(o): -tan-rd*k
Figure 7.3.1 shows the magnitude and phase spectra of this signal for c > 0. Note that
these functions are periodic with period 2t.
Sec. 7.3 The Discrete-Time Fourier Transtorm 343

| .lrflr i Ar! .\ ll,1 )

-2r'r0nlnO
tigure 73.1 Fourier spectra of signal for Examplc 7.3.1.

E=ernple 73.2
Lrt
r(n) = sl"l, l"l . t
We obtain the Fourier tranform of :(n ) as

x(o) : ol'lexp[-ionl
,I"
-t -
=) o-'exp[-lr)z] + ) o"expl-i{)nl
a -0
which can be put in closed [orm, by using Equation (6.3.7), as

x(o) = ___l
I - c-rexp[-70] I - oexp[-lO]
l-o'-
=--l-2ocosf,)+s2
In this case, X(O) is rehl, so that the phase is identically zero. Thc magnitude is plotted in
Figure 7 .3.2.

Eranple 73.8
Consider the sequence .r(n) = exp [i Oon l, with f]o arbitrary. 1'hus. .r(n) is not necessarily
a periodic signal. Then
u4 Fourier Analysis ol Discreie-Time Systems Chapter 7

I .Y(r2) I

Figure 732 Magnitude spectrum


2nO of signal for Example 73.2.

x(o)= i 2116(o- ttn-2nm) (7.3.14)

ln the range [0, 2rrl, X(O) consists of a 6 function of strength 2n, occurring at O = fh. As
can be expected, and as indicated by Equation (7.3.14), X(O) is a periodic extension, with
period 2rr, of this 6 function. (See Figure 7.3.3.) To establish Equarion (7.3.14), we use the
inverse Fourier relation of Equation (7.3.13) as

e-r1,,n, =
.r(n) =
*" f "x(o) exp[jon]do
=
i; L"[,,i_*u,n - n" - z,,rz)]exp[ioz]do
= exp [jOon]
where the last step follows because the dnly pemissible value for rz in the range of inte-
gration is ,n = 0.
We can modify the results of this example to determine the Fourier transform of an
exPonential signal that is periodic. Thus, let.r(n ) = exp[Tkfloalbe such that Oo = 2tlN.
We can write the Fourier transform from Equation (7.3.72) as

x(o)

f,2o- 4n Oo-2n Oo Qo+Zt Os*4r


Figure 733 Spectrum of exp [0rz].
Sec. 7.4 Properties o, the Discrete-Time Fouder Transtorm g5

x(o) =,,i, 2z16(o - kttn - Ztrm)

Replacing 2r by NOn yields

x(O) => 2t6(() - tQ,- N{),,nr)

Thar is. the specrrum consists of an Lnn,,. .., of ,rpulses o[ strength 2rr centered at tOn.
(t I N)q. (k '$ 2N)(h.etc. This can be compared to ihe result rvc obtained in Example
7.2.1. where we considered the Fourier-series representalion for 'r(rr). The difference, as
in continuous time. is lhat in the Fourier-series represenlation thc frequency variable
takes on only discrele values, whereas in the Fourier lransform the flequency variable is
continuous.

7.4 PROPERTIES OF THE DISCRETE-TIME


FOURIER TRANSFORM
The properties of the discrete-time Fourier transform closely parallel those of the con-
tinuous-time transform. These properties can prove useful in the analysis of signals and
systems and in simplifying our manipulations of both the forward and inverse trans'
forms. In this scction, rve considcr some of the more uscful propcrties.

7.4.1 Periodicity
We saw that the discrete-time Fourier uansfolm is periodic in O with Period 2T, so that
X(o+2r)=x(o) (7.4.1)

7.42 Linearity
Let x,(a) and.rr(n) be two sequences with Fourier transforms X,(O) and Xr(O)'
resp€ctively. Then
Tlarxr(n) + arx2(n)l = a,X, (O) + arXr(A) Q.42)
for any constants al and a2.

7.4.8 fime and Frequency Shifting


By direct substitution into the defining equations of the Fourier transfonn, it can eas-
ily be shown that
?lx(n - no)l = exP[-i Ono]x(O) (7.4s)

and
9[exp[j0or]x(n)l = x(O - oJ (7.4.4)
346 Fourier Anatysis o, Discr€te-Time Systems Chapter 7

7.4.4 Difrerentiation in Frequcncy


Since

x(o) =,j_ x(n)exp[-ion]


it follows that if we differentiate both sides with respect to O, we get

4X(a\ -a
d; '?-?in)x(n) exP[-ion]
=

from which we can write

e[nt(n)l= i *(nlexp[-loz] =i4{lop (7.4.s)

BTarrrple 2.4.1
Letr(z) = no'u(a),rvith lcl < 1. Then, by using the resulrs of Example 7.3.1, we can wrire

-he
x @ =i @"u(n)t = i rt l-+r_,n;
_ o exp[- jO]
(1 - a exp[-i0])z

7.4.6 Convolution
l*t y(n) represent the convolution of two discrete-time signals r(n) and ft(n ); thar is,
y(n\ = h(n) ," r(n) (7.4.6)
Then
y(o) = H(o)x(o) (7.4.7)
This result can easily be established by using the definition of the convolution opera-
tion given in Equation (6.3.2) and the definition of the Fourier transform:

y(o) = i ytrlexp[-ion]

h(k)x(n- *)]expt-lonl
,i_ Li_
=
_i,r,o,[,i _,(n
- r)expt-ion]]
Here, the last step follows by interchanging the order of summation. Now we replace
n- k by n.in the inner sum to get
Sec. 7.4 Properlies of the Discrete-Time Fourier Transform
347

v(o) =,I., (o)1,,i, -r(n)exp [-ion]] exp t - rrrrl


so that

r(o) = i n61xp1exp[-lok]

= H(o)x(o)
As in the case of continuous-time systems, this property rs extrcmely useful in the
analysis of discrete-time linear sysrems. The function I1(o) is rcf"rrej to as the
/re.
quency rcsponse of the system.

Example 7.4.2
A pure delay is described by rhe input/output relation

y(n)=x(n-no)
Taking the Fourier transformarion of both sides, using Equation (7.4.j), yields

Y(O) = s,(r1-;onolX(O)
The frequency response of a pure delay is thercfore

H(O) = exP[-l0no]
Since H(o) has unity gain for all frequencies and a linear phase, it rs distortionless.

Example 7.43
Lrt

nat =
0).,at
,r,r = (])",,r"r

Their respective Fourier transforms are given by

H(O) = ---.
r - jexpl-io1

x(o):- --l--
1- lexpt-lrll
so that
w Fouri€r Analysis of Discrete-Time Systems Chapter 7

y(o) = H(o)x(o) =
r - jexpt-;ol r - ]expt-7ol

r - j expt-lol r - lexpt-iol
By comparing the two tenns in the previous equation with X(O) in Example 7.3.1, we see
that y(n) can be writlen down as

v(n) = ,(;)' u(n) - ,(i)'u(n)

E-e'nple 7.4.4
As a modification of the problem in Example 7.3.2,\ea
&(n) = ql'-'"|' -@ < n < @

represent the impulse response ofa discrete-time system. It is clear that this is a noncausal
IIR system. By following the same procedure as in Example 7.3.2,i1can easily be verified
that the frequency response of the system is

H(o) =
I --*;;$ *
",
"*nt-ro,,]
The magnitude function lA1Oll is the same as X(O) in Example 7.3.2 and is plotted in
Figure 7.3.2. The phase is given by
Arg H(O) = - zog
Thus, H(O) represents a linear-phase system, with the associated delay equal to no. It can
be shown that, in general, a system will have a linear phase, if ft(a) satisfres
h(n) = 7t17ro - nr, -co < n < co

Ifthe syslem is an IIR system, this condition implies that the system is noncausal. Since a
continuous-time system is always IIR, we cannot have a linear phase in a continuous-time
causal system. For an FIR discrete-time system, for which the impulse response is an N-
point sequence, we can find a causal i(n) lo satisfy the linear-phase condition by letting
delay zo be equal to (N - l)/2.It can easily be verified that ll(z) then satisfies
h(n)= 111Y - I - n)' 0<n sN- I

E=e,nple 7.4.6
Irt

"(o)={;: l.1tl*
That is, H(O) represents the transfer function of an ideal low-pass discrete-time lilter
with a cutoff of O. radians. We can find the impulse respoDse of this ftlter by using Equa-
tion (7.3.11):
Sec. 7.4 Properties ol the Discrete.Time Fourier Translorm 349

1 ro.
h1al = -:- | exp [i Oa]dQ
Ll J_{t,

_ sin O.n
7tn

Exanple 7.4.6
We will find the output y(z) of the system with

- /rrz\
t(n) = 5'1'; - "'lT
7tn
/

when the input is

lrr.n\* lrn * I\
r(n) = cos(
e-l
,inl7 ,/
From Example 7.2.2 and Equation (7.4.12), it follows that, with {),, = n /OA, n the range
0<O<2n
x(o) =2,[:s(o- t4oo) *
f utn- tsoo) n'i,t to- rosn,,) +
]srn - u2o,)].
Now

H(o) = I - *.,(#), -r s lol ... r


so that in the range 0 < A < 2r we have

,,n,=l' i=o'llrr
otherwise
Io

y(o) = s161y,r'rol = z'[f, oro - rach) * u,,, - ro&rh)]


?]
and
.tl .- tl
'--
y (n) = "- ' 4ta+ + 4tffn'

which can be simplified as

,t,l =,,"("i * l)
350 Fouder Analysis of Discrete'Time Systoms Chapter 7

7.4.A Modulation
Let y(a) be the product of the two sequences x,(n ) and xr(n) with transforms Xt(O)
and Xr(O), respectively. Then

' Y(o): i x,(n)x2@)exp[-j0n]

If we use the inverse pou.i.r-tron.rnln ,.t"tion to write r, (n ) in terms of is Fourier


transform, we have

v(o) = ]de]x,(n) exp[-lon]


,i. [r't [,.,*,(r)exp[jon
Interchanging the order of summation and integration yields

v(()) = j, exp[-l(o - elnt]ao


[,,,x,te) {,i_,,(,r)
so that

v(o) =
l[,r,,*,rrr*,(o - e)do (7.4.8)

7.4.7 Fourier Tlansform of Digcrete-fime


Periodic Sequences

Let .r(n ) be a periodic sequence with period N, so that we can express .r(z) in a
Fourier-series expansion as

,(r) = (7.4.e)
eaoexpfik0on]
where

n =
rh -2n (7.4.10)
N
Then

x(o) = hlx(n)l: *F- a* exPfuk'nnnl]


= !' o*r1exp[fi{\n]l
We saw in Example 7.3.3. that. in the range [0, 2rr],
a*lexpljkdl"nll :2rr6(O - &q)
so that
S€c. 7.5 Fourier Translorm ol Sampled Continuous-Time Signals 351

1nu1, lzu1 2ta, 1ra,, ?ta1 2r.a1 2luu 7ta, \u'

- 3rl0 -2sl(r -!lrr 0 Qu 2Qo .lQn 4Qu 5Q(,

Figure 7.4.1 Spectrum of a periodic signal N = 3.

NI
X(O) = ) 2ra*6(O -k0o), 0sO<2n (7.4.11)

Since the discrete-time Fourier transform is periodic with period 2n, it follows that
X(O) consists of a set of N impulses of strength 2ra*, k = 0, l. 2, ..., N - 1, repeated
at intervals of NOo = 2r. Thus, X(O) can be compactly written as
NI
X(O) = 2na*6(O k0o), forallO
) - (7.4.12)
I={)
This is illustrated in Figure 7.4-l for the case N = 3.
Table 7 -2 summarizes the properties of the discrete-time Fourier transform, while
Table 7-3 lists the transforms o[ some common sequences.

TABLE7.2
Proporllas of the Dlgcreto-Tlme Fourlet Translotm

1. Linearity Axt(n) + Bx2h) AX.(A) + BXr(Q) (7.4.2)

(7.4.3)
2. Time shift x(n - nol exp [ -jO nolX (O )
3. Frequency shitt .r(n) exp IjO,,n] x(() - oo) (7.4.4)

4. Convolution .r, (n ) u .t2 (rt ) x,(o)&(o) (7.4.7)

lt
5. Modulation .r, (n ).x, (n)
zn Jr,x'{r)x'ttt
- P)dP (7.4.8)

6. Periodic signals .r(n) periodic with period N ! 2rrars(o - A{)o) (7.4.11)

n :2' ,, = ,l )..t,i cxp [-lfton I

7.5 FOURIER TRANSFORM OF SAMPLED


CONTINUOUS-TIME SIGNAL
We conclude this chapter with a discussion of the Fourier transform of sampled con-
tinuous-time (analog) signals. Recall that we can obtain a discretc-time signal by sam-
p/ing a continuous-time signal. Le t.r,(t) be the analog signal that is sampled at equally
spaced intervals 7:
Fourler Analysis ot Discrete-Time Syst€ms Chapier 7

TABLE 7.3
Sorne Common Dlscrete-Tlme Fourler Tranelorm Paltg

Slgnal Fourler Tranalorm (pododlc ln (}, perlod 2a)

6(a) I
I 2n6(O)
exp[iOrz], ()oarbitrary 2zt6 (O - ()o)
N-l /V-l
) ao exp[j&fton],
t-0
Nfh = 2rr ) 2taoD(O - &f!o)
l-0

a.nu(n), l"l . t
1 - a exp[-lO]
ot,l, l"l . t
l-o2
I- 2q cos f) + d2
no."u(a), l"l . t c exp[-iol
(l - c exp[-jol)'?

rect(n/Nr)
sin(O/2)
sinO"n
tn recr(o/2o.)

x(n) = x,(nT) (7.5.r)


In some applications, the signals in parts of a system can be discrete-time signals,
whereas other signals in the system are analog signals. An example is a system in which
a microprocessor is used to process the signals in the system or to provide on-line con-
trol. In such hybrid. or sampled-data systems, it is advantageous to consider the sam-
pled sigral as feing a continuous-time signal so that all the signals in the system can be
treated in the same manner. when we consider the sampled signal in this manner, we
denote it by x,(r).
We can write the analog signal x,(r) in terms of its Fourier transform X, (o) as

,,(i = * f exp[jor]dto (7.s.2)


_X,(ro)
Sample values.r(n ) can be determined by setting t : nT in Equation (7.5.2), yielding

x(n) : x,(nl) :
* f"X,(o) explj otnTldut (7.s.3)

However, since x(n) is a discrete-time signal, we can write it in terms of its discrete-
rrne Fourier transform X(O) as

:(n) =
lr" X(O)
exp[lOn]do (7.s.4)
," l_"
S€c. 7.5 Fourier Transform ol Sampled Continuous'Timo Signals 353

Both Equations (7.5.3) and (7.5.4) rePresent the same sequence.r(n). Hence, the trans-
forms must also be related. In order to find this relation, let us divide the range
-cp ( ro ( co inro equal intervals of length 2tt/T and express the right-hand side
of
Equation (7.5.3) as a sum of integrals over these intervals:

,ot = ]; exp[rronrld,o (7.s.s)


,2.1,i-,'"i)*.(o)
If we now replace o by a + 2rr/T, we can write Equation (7.5.5) as

,<o =
); ,>-ll,',,*"(,
-T )*p[,(' *';')"fa' (7'5'6)

Interchanging the orders of summation and integration, and noting that


expli2t rnT /T] = 1, we can write Equation (7'5.6) as

, <o =
* I 1,"1,>*-*.(, * +r)] exp t;., rl a, (7.s.7)

If we make the change of variable o = AlT, Equation (7.5.7) becomes

,@=
* I _,li .Z--"(+ . 7,)] *o 1in,1,o (7.s.8)

A comparison of Equations (7.5.8) and (7.5.4) then yields

x(o) = )r,>_*.(*.+,) (7.s.e)

We can express this relation in terms of the frequency variable or by setting

,=7 (7.s.10)

With this change of variable, the left-hand side of Equation (7.5.9) can be identilied as
the continuous--time Fourier transform of the sampled signal and is therefore equal to
X,(o), the Fourier transform of the signal .r,(t). That is'
x,(r): r(n)ln_",, (7.s.11)

Also, since the sampling interval is I, the sampling frequency o, is equal to}rlT ndls.
We can therefore wfite Equation (7.5.9) as

x,(.)=!,,i.at,*'..1 (7s.t2l

This is the result that we obtained in Chapter 4 when we were discussing the Fourier
transform of sampled signals. It is clear from Equation (7.5.12) that X,(o) is_ the peri-
odic extension, with peiod to,, of the continuous-time Fourier transform Xr(r,r) of
the
analog signal x,(r), amplitudi scaled by a factor l/L Suppose that.ro(r) is a low'pas
signafruih thri its rpeitru. is zero for to ) to,,. Figure 7.5.1 shows the spectra of a typ'
in
ical band-limited analog signal and the conesponding sampled signal. As discussed
354 Fourier Analysis of Discrete-Tim"e Systems Chapter 7

I Xs(cr) I

- (.r0 o-o
(a)

I Xr{or) I

tlr

-or T -oo 0 oro

-ou - @r o0-ur or, - aJo oo * cr,


(b)

rx(olr

(-asT -.o,)T (a6- a,\T llo"- alT (.o.+.D,IT


(c)

Iigure 75.1 Spectra of sampled signals. (a) Analog specrrum. (b) Spec-
trum of x, (r). (c) Spectrum of x(z).

chapter 4, and as can be seen from the figure. there is no overlap of the spectrat com-
pon€nts in x-(.) if to, - rrr1, ) to,,. we can then recover x,,(r) from the sampred
signat
xJt) by passing.r,(r) rhrough an ideal low-pass filter with i cutoff at rrr,, radls'and a
of 7. Thus, there is no aliasing distortion if the sampring frequency is such that lain
qrr-0ro>(r)rl
or
o, ) 2t'ro (7.s.13)
This is a restatement of the Nyquist sampling theorem that we encountered in
chap
ler 4 ard specifies the minimum sampling frequency that must be used to recover a
continuous-time signal from ils samples. cleariy. if i,1r; is not band limited.
there is
always an overlap (aliasing).
Sec. 7.5 Fourier Translorm of Sampted Continuous-Time Signats

Equation (7.5.10) describes the mapping berween the analog frequency ro and the
digital frequency O. It follows from this equation that, whereas the ur,its oiro are rad/s.
those for O are just rad.
From Equation (7.5.11) and rhe definition of X(o), it follorvs that the Fourier rrans-
form of the signal .r,(r) is

x,(.) = T] (7.s.14)
,fi_rfurrlexp[-lr,rn
we can use Equation (7.5.14) to justify the impulse-modularion model for sanpled
signals that we employed in Chapter 4. From the sifting property of the 6 function, we
can write Equation (7.5.14) as

x,(.) =
il_-..Ur,i,u,, - nr'yexpl-jutldt
from which it follows that

.r"(r) =.r,(r) ) o1r - zr) (7.s.rs)

That is, the sampled signal x,(r) can be considered to be the product of the analog sig-
nal .r,(l) and the impulse train ) 6(r - nI).
To summarize our discussi#iTo far, when an analog signal .r,(r) is sampted, the
samplcd signal may be considcrcd to be either a discrete-limc signal r(n) or a contin-
uous-time signal r,(r), as given by Equations (7.5.1) and (7,5.15), respecrively. When
the sampled signal is considered to be the discrete-time signal .r(n), we can find its dis-
crete-time Fourier transform

x(o) i .r(n)e-in,
= n=-a (7.s.16)

If we consider the sampled signal to be the continuous-time signal x, (t), we can find
its continuous-time Fourier transform by using either Equation (7.5.12) or (7.5.14).
However, Equation (7.5.12),being in the form of an infinite sum, is not useful in deter-
mining X,(o) in closed form. Still, it enables us to derive the Nyquist sampling theo-
rem, which specifies the minimum sampling frequency or, that must be useo so that
there is no aliasing distortion. From Equation (7.5.11), it follorvs that, to obtain X(O)
from X, (to), we must scale the frequency axis. Therefore, wilh reference to Figure 2.5.1
(b), to find X(O), we replace to in Figure 7.5.1 (c) by oT.
If there is no aliasing, X,(o) is just the periodic repetition of X,(o) at intervals of
<o,, amplitude scaled by the factor l/I, so that

x,(r) = Lr*,,r, -(,,s < ar


= t,lr (7.s.17)

Since X(O) is a frequenry-scaled version of X,(ro) with 0 - orl it follows that


x(o) -zsOsn
=
i"(?) (7.s.r8)
356 Fourier Analysis of Discrete.Time Systems Chapter 7

n=arnple 7.6.1
We consider the analog signal .r, () with spectrum as shown in Figure 7.5 2 (a). The sig-
nal has a one-sided bandwidth /o = 50fi) Ha or equivalently, roo = zrfo = 10,( In rad/sec.
The minimum sampling frequency that can be used without introducing aliasing is
[ro,I,n = 2r,ro = 29,6*, rad/sec. Thus, the maximum sampling rate that can be used is
T*= l/(2fs): l1p
',.ec.
I X"(a) |

I X" (otl I
4xld

-tr x ld -zx Id rx ld Ezx ld


(b)

rx(o)r

v-tf (c)OJL4
Flgure 75, Spectra for Example 7.5.1.

I
Suppose we sample the sigral at a rate = 25 psec. Then (Dr = 8rT x ld
rad/sec. Fig-
ure 75.2 (b) shows the spectrum X, (o) of the sampled signal. The spectrurn is periodic with
perird rrr,. To get X(O), we simply scale the frequency axis, replacing ro by O = r,rT, as
shown in Figure 75.2(c). The resutting spectrum is, as expected, periodic with period 2n.

7.6.1 Reconstnrction of Sampted Sigtrals


If there is no aliasing, we can re@ver the analog sigpal .r"(t) from the samples x.(nI)
by using a reconstnntion fikenThe input to the reconstruction filter is a discrete-time
sigral, whereas its output is a continuous-time signal. As discussed in Chapter 4, the
reconstruction 6lter is an ideal low-pass frlter. Referring to Figure 7.5.1. we see that if
we pass.r"(t) through a filter with frequency response function

,.-t:[r,
t0,
I'l ",
otherwise
(7.s.te)
Sec. 7.5 Fourier Translorm of Sampled Continuous-Time Signats 357

with <o, chosen to lie between o0 and rltr - o0, the spectrum of thu filter output will be
identical to X,(o), so that the output is equal to.r,(l). For a signal sampled at the Nyquist
rate, orj = 2roo, so that the bandwidth of the reconstruction filter must be equat to
a" = a,/2 = r /7.ln this case, the reconstruction filter is said to be matched to the sam-
pler. The reconstructed output can be determined by using Equation (4.4.9) to obtain

*,(0 - 17.s.20)
"2*,,@o"+is#;)
Since the ideal low-pass filter is not causal and hence not physically realizable, in prac-
tice we cannot exactly recover x,(r) from its sample values. Thus, any practical recon-
struction filter can only give an approximation to the analog signal. Indeed, as can be
seen from Equation. (7.5.20), in order to reconstruct ra(l) exactly. we need all sample
values r,(nI) for r in the range (--,-). However, any realizable filter can use only
past samples to reconstruct r,(r). Among such realizable filters are the hold circuiu,
which are based on approximating .r,(l) in the range nT t < (n +
= l)Iin
a series as

I
i,(t) = x.(nT) + .r'"(nT)(t - nT) +
,.xi@T)(t - nT)2 +... (75.21)

The derivatives are approximated in terms of past sampled values: for example
xi@7-l = l.r.(nT) - x,((n - t)r)l/ r.
The most widely used of these filters is the zero-order hold. rvhich can be easily
implemented. The zero-order hold corresponds to retaining only the first term on the
right-hand sidc in Eq. (7.5.21). That is, the output of the hold is given by
i,(t): x,(nT\ nT=t<(n + l)T (7.s.22)
ln other words, the zero-order hold provides a staircase approximation to the analog
signal, as shown in Fig. 7.5.3.
Let goo(t) denote the impulse response of the zero-order hold, obtained by apply-
ing a unit impulse 6(n) to the circuit Since all values of the input are zero except at
n - 0, it follows that

i"(r)

---- Analog signal

Output of zero-orrler hold

-
OI2T3T4T5T6T'17
Figure 753 Recontruction of sampled signal using zero-order hold.
Fourier Analysis ot Discret+Time Systems Chapter 7

0<r<T
o,,,trl = otherwise
(7.s.23)
{1.
with corresponding transfer function

G,o(S) = !--' (7.s.24)


J

In order to compare the zero-order hold with the ideal reconstruction filter, let us
replace s by 7'r,r in Eq. (7.5.22) to get

G,o(,)= t# -e
='#lei@rt2) 2i
-i{-r/21

_ sin (trr,J/ro,)
a s-itru/a,l (7.s.?s)
T@/.n,
where we have used T = 2t/a,.
Figure 7.5.4 shows the magnitude and phase spectra of the zero-order hold as a func-
tion of <o. The figure also shows the magnitude and phase spectra of the ideal recon-
struction filter matched to o.. The presence of the srle lobes ia Goo(to) introduces
distortion in the reconstructed signal, even rvhen there is no aliasing distortion during
sampling. Since the energy in the side lobes is much less in the case of higher order
hold circuits, the rcconstructed signal obtained rvith thcse filters is much closer to the
original analog signal.

I G1,6(ar) |

19a@>

Figure 75.4 (a) Magnitude


spectrum and (b) phase spectrum of
zero-order hold.
Sec. 7.5 Fourier Transtorm ol Sampled Continuous-Time Signals

. An aiternative scheme that is also easy to implemenl ohtains the reconstructed sig-
nal i,(r) in the inrerval
[(n - l)7,nirl as the straight linc joining. the values
x,l@"- i; f1 ana x,@T). This interpolator is called a linear interpolator and is
described by the input'output relation

i,(r) = r,(nf1[r . x.t@ - rtrtlL-{],r, -', r < t < nr (7.s.26)

=T)-
It can be easily verified that lhe impulse response of the linear interpolator is

lr- I4T' Irl=r


(7.s.27)
s,(r) = {
otherwise
[0.
with corresponding frequency function

Gr(,) = rltit<e!!\') (7.s.28)

Note that this interpolator is noncausal. Nonetheless, it applies in areas such as the Pro-
cessing of still image frames, in which interpolation is done in the spatial domain.

7.6.2 Sarnpling-Rate Conversion


We will conclude our consideration of sampled analog symbols with a brief discussion
of changing the sampling rate of a sampled signal. In many applications, we may have
to chan[e the sampling rate of a signal as the signal undergoes successive stages of pro-
cessing by digital hlters. For example, if we process the samplcd signal by a low-pass
digitaifltier, the bandwidth of the filter output will be less than that of the input. Thus,
it Is not necessary to retain the same sampling rate at the output. As another example,
in many telecommunications systems, the signals involved are of different types with
dif-
ferent Landwidths. These signals will therefore have to be proccssed at different rates'
one method of changing rhe sampling rate is to pass the sampled signal through a
reconstruction filter and reiample the resulting signat. Herc. wc will explore an alter-
native, which is to change the effective sampling rate in thc digital domain. The two
basic operation, n"..r.iry for accomplishing such a convcrsion ate decimation
(or
dow nsamp ling) arld i nt e r p o la t i o n (ot up s a m p I ing).
Suppoie *i htr. an inalog signal band limited to a frequcncy'o'*-ht9! has been
tu.pi.d at a rate T ro get thg discrete-time signal r(n), wirh x(n) = x,(nT.)' Decima-
tion involves reducing- the sampling frequency so that lhc new sampling rate is
T' = MT.The new sampling frequency will thus be r,rj : (l.),/M. we will restrict our-
selves to integer values of M, so that decimation is equivalent to retaining one of
every
M samples oithe sampled signal r(n )' The decimated signal is given by
xa@)=x(Mn)=x.(nMT) (7.s.ze)

Since the effective sampling rate is now T' = MT. for no aliasing in the sampled sig-
nal, we must have
360 Fourier Analysls ol Discrete-Time Systems Chapter 7

7" !- 0)o

or equivalentlv.

MT-L (,)o
(7.5.30)

For a fixed r, Equation (7.5.30) provides an upper limit on the maximrrn value that
M can take.
If there is no aliasing in the decimated signal, we can use Equation (7.5.1g) to write

xd(a)=+.,(*). -n<osn
_ l ../1o\
= tar*'\ud' -nsosn
Since x(o), the disclete-time Fourier transform of the analog signal sampled at the
rate T, is equal to

x(o)= +-,(+),-n<osn
it follows that

xd(a):
i.(#) (7.s.3r)

That is, Xr(o) is equal to x(o) amplitude scaled by a factor LIM atdfrequency scaled
by the same factor. This is illustrared in Figure 7.5.5 for the case where r = i.er /ro
andT :27.
Increasing the effective sampling rate of an anatog signal implies that, given a signal
x(n ) obtained by sampling an analog signalxo(r) at a rate r, we want to deiermine aiig-
nal x,(n) that corresponds to sampling r,O at arate T" = f lL, where L > 1. That is,

x,(z) = x(n/L): 4@T/L'1 (7.s.32)


This process is known as interpolation, since, for each n, it involves reconstructing the
missing samples x(n + mL),m : -
1,2,... , L l. We will again restrict oursetvLs to
integer values of L.
The spectrum of the interpolated signal can be found by replacing T uath r/L in
Equation (7.5.18). Thus, in the range -rr < O n,
=
x,(o) =
*r.(r*)
l r*sn1. tot =T
=l,
.lnl .,
(7.s.33)

f
Sec. 7.5 Founer Translorm ol Sampled Conlinuous'Time Signals 361

L\,,1rr-rt i

-dtt (l @tt

(it)

-uoT ll uoT
(h)

tx,/(!)) I
Figure 7-5.5 Illustration of
vT' decimation. (a ) Spectrum of analog
signal. (b) Spcctrum of r(n) with
sampling ratc f. (c) Spectrum of
i) dccimatc(l \ir'nill correspondinB to
- tt -tOrrT anT' tr
0
rate T = M-\ . Figures correspond
(c) to T = O.4n ltooand M = 2.

As a first step in determining .t,(n) from.r(n), let us replacc the missing samPles by
zeros lo form the signal
n:0,-+L,!2L,... (7.s.34)
=
',r^, {;:"'"' otherwise

Then

&(O): 2 ie -r
xln\s-ttt"

=,i_*x(n/L)e-rb

= ) x1t<1e-iteL=X(LA) (7.s.3s)

so that X,(O) is a frequency-scaled version of X(O). The relation betrveen these vari-
ous spectra is shown in Figure 7.5.6, for the case when 7 = 0.4t llonand L
:2.
From the figure. it is clear that if we pass x,(n) through a lorv-p5ss digital filter with
gain L and cutoff frequency auT/ L, the output will correspond to -r,(n). Interpolation
-
by a factor L therefore consists of interspersing L 1 zeros betwcen samPles and then
low-pass filtering the resulting signal.
362 Fourier Analysis ot Discrete-Time Sysiems Chapter 7

I X"(al I

-aO 0
(a)

rx(o) r

-'tr -o\T g .ngT t


= 0.4l = -O.4r
(b)

lxi(o)r

-trOT" O @OT"

rxr(o) |

-t -@oT" O @oT" t A
(d)

Iigure 75.6 Illustratinn of interpolation. (a) Spectrum of analog signal.


(b) Spectrum ofr(n) with sampling rare f. (c) Spectrum of interpoiired
signal corresponding to rate T" : T/L. (d) Spectrum of signal .r,(z). Fig_
ures correspond to T = O.4t /lod,oand L = 2.

Example 7.6.2
Consider the signal of Example 7.5.1, which was band limited to roo = 10,ffi)n rad/sec, so
f
that loat = 100Fs. Suppose x,(r) is sampled at = 25$ to obtain the signal r(n) with
spectrum x(o) as shown in Figure 7.5.2 (b). If we want lo decimare.r(z) by a faitor M
without introducing aliasing, it is clear from Equation (2.5.30) thar M (t
/tooT) = 4.
:
Suppose we decimate r(n\ by M 3. so rhir the effecrive sampling=rate is ?.-= 75ps.
It follows from Equation (7.5.31) and Figure 2.5.5(c) rhat the specrrum of rhe decimaied
Sec. 7.5 Fourier Translorm o, Sampled Conlinuous-Time Signals 363

rx(o) |

t7
0 T
(b)

I xd(Q )l
4\ rd
.3

-!r
-v- 0
(c)

'',ln'L,
Iigure 7S.7 Spectra for Example
7.5.2.(al Analog spectrum.
(b) Spectrum of sampled signal.
-3zr (c) Spectrunr of decimated signal.
t 0
(d) Spectrum oI interpolaled signal
(d) after decimation.

signal, Xo(O), is found by amplitude and frequency scaling X(J)) hy the factor 1/3. The
resulting spectrum is shown in Figure 7.5.7(c).
Let us now interpolate the decimated signal -rr(z) by L:2 to form the interpolated
signal .t,(n ). It follows from Equation (7.5.33) that

a^T
< -,L =
3:r'

xr(o)
L8
=
. lol .,
Figure 7.5.7 (d) shows the spectrum of the interpolated signal. Fr()m our earlier discussion.
it follows that interpolation is achieved by interspersing a zero hctrvcen each two samples
of .r., (n) and low-pass filtering the result with a filter with gain 2 and cutoff frequency
(3rl8) rad/sec.
Fourier Analysis ol Discrete-Time Systems Chapter 7

Note that the combination o[ dccimation and inLcrpolation gives us an elfective sam-
pling rateof. T' : MT/ L = 37.5ps. In general. by suitably choosing M and L. we can
change the sampling rate by any rational multiple of it.

7.6.3 A/D and D/A Conversron


The application of digital signal processing techniques in areas such as communication,
speech, and image processing, to cite only a few, has been significantly facilitated by
the rapid development of new technologies and some important theoretical contribu-
tions. In such applications, the underlying analog signals must be converted into a
sequenoe of samples before they can be processed. After processing, the discrete-time
signals must be converted back into analog form. The term "digital signal processing"
usually implies that after time sampling, the amplitude of the signal is quantized into a
finite nurnber of levels and converted into binary form suitable for processing using for
example, a digital computer. The process of converting an analog signal to a binary rep-
resentation is referred to as analog-to-digilal (ND) conversion, and the process of con-
verting the binary representation back into an analog signal is called digiul-to-analog
(D/A) conversion. Figure 7.5.8 shows a functional block diagram for procesing an ana-
log signal using digital signal-processing techniques.
As we have seen, the sampling operation converts the analog signal into a discrete-
time signal whose amplitude can take on a continuum of values; that is, the amplitude
is represented by an infinite-precision number. The quantization operation converts the
amplitude inlo a finite-prccisrbn number. The binary coder converts this finite preci-
sion number into a string of ones and zeros. While each of the operations depicted in
the figure can introduce errors in the representation of the analog signal in digital forrn,
in most applications the encoding and decoding processes can be carried out without
any significant error. We will, therefore, not consider these processes any further. We
have already discussed the sampling operation, associated errors, and methods for
reducing these errors. We have also considered various schemes for the reconstruction
of sampled signals. In this section, we will briefly look at the quantization of the ampli-
tude of a discrete-time signal.
The quantizer is essentially a device that assigns one of a finite set of values to a sig-
nal which can take on a continuum of values over its range. Let [:r, xr,] denote the
range of values, D, of the signal .r,(r). We divide the range into intervals
[x,-r,:J,i=l,2....N.with.r,=r,andrr=rn.Wethenassignavaluey;,i=1,2...,

Analog-ro-Digital Convenor Digital-to-Analog Convertor

,v,(l)

Figure 75.E Functional block diagram of the A/D and D/A processes.
Ssc. 7.5 Fourier Translorm of Sampled Continuous-1ime Signals 365

Nto the signal whenever.r,_ r


= .r.(t) < of levels of
.r,. Thus, N represents the number
the quantizer. The r, are known as the decision levels, and the l', are known as the
reconstruction levels. Even though the dynamic range of the input signal is usually not
known exactly, and the values x, and x, are educated guesses, it nrav be expected that
values of x,(t) outside this range occur not too frequently. All values xo(t) < \are
assigned to r,, while all values x,(t) > xn are assigned to x,,.
For best performance. the decision and reconstruction levcls must be chosen to
match the characteristics of the input signals. This is, in general, a fairly complex pro-
cedure. However, optimal quantizers have been designed for certain classes of signats.
Untform quantizers are often used in practice because they are easy to implement. In
such quantizers, the differences .r, - r,_r and y, - yi_r arc choscn to be the same
value-say, A-which is referred to as the step size. The step size is related to the
dynamic range D and the number of levels N as

a=-DN (7.s.36)

Figures 7.5.9 (a) and (b) show two variations of rhe uniform quanrizer-namely, the
midriser and the midtead. The difference between the two is rhat the output in the
midriser quantizer is not assigned a value of zero. The midtread quantizer is useful in
situations where the signal level is very close to zero for significant lengths of time-
for example, the level of the error signal in a conlrol system.
Since therc are eight and seven output levels, rcspectively, for thc quantizers shown
in Figures 7 .5.9(a) and (b), if we use a fixed-lenglh code word, each output vatue can
be represented by a three-bit code word, with one code word left ovcr for the midtread
quantizer. In what follows, we will restrict our discussion to thc midriser quantizer. In
that case, for a quantizer with N levels, each output level can bc rcpresented by a code
word of length

(a) (br

Flgure 75.9 lnput/output relation for uniform quantizer. (a) Midriser


quantizcr. (b) Midtread quantizer.
366 Fourier Analysis of Discrete-Time Systems Chapier 7

(i + l)A

A I
,A+; A

iA i
lll
rtl
T1
or,
(a)

A
2

-A
T Figure 75.10 Quantization error.
(a) Quantizer input. (b) Error.

D
B - logzN = togz (7.s.37)
a
'fhe proper analysis of the errors introduced by quantization requires the use of
techniques that are outside the scope of this book. However, we can get a fairly good
understanding of these errors by assuming that the input to the quantizer is a signal
which increases linearly with time at a rate S units/s. Then the input assumes values in
any specilic range of the quantizer-say, [iA, (i + 1)A]-for a duration [f,, with t],
T, - Tr: A/S as shown in Figure 7.5.10. The quantizer input over this time can be
easily verified to be

x"(t) =
r#(r - r,) +,a
while the output is

xo(t)=i^++
The quantization error, e (t), is defined as the difference between the input and the out-
put. We have
Li
e(r) = x,(r) - xn(t) =
+r,tt - r,) - (7.5.38)

= o+"]
'T:'\l'-
It is clear that e (r) also increases linearly from - L/2 to A/2 during the interval lTr,Tzl.
The mean-square value of the error signal is therefore given by (see Problem 7.27)
Sec. 7.6 Summary 367

I tr, - A2
E = ,= ' r,
)r,
e'u)41 =- (7.s.3e)

D2 n-zo
- iL
where the last step follows from Equation (7.5.37). E is usually referred to as the quan-
tization noise power.
It can be shown that if the number of quantizer levels, N, is very large, Equation
(7.5.39) still provides a very good approximation to the mean-square value of the quan-
tization error for a wide variety of input signals.
In conclusion, we note that a quantitative measure of the quality of a quantizer is
the signa.lto-noise ratio (SNR), which is defined as the ratio of the quantizer input sig-
nal power P, to the quantizer noise power E. From Equation (7.5.39)' we can write

SNR=? =rzP,D'2228 (7.5.210)

In decibels,
(SNR)dB = l0loglsSNR
= l0log,o(12) + 10log,oP, - 20log,rD + 20Blog,o(2) (7.s.41)

That is,
(SNR)dB = 10.79 + l0log,oP, + 6.028 - 2Olog',,D (7'5'42)

As can be seen from the last equation, increasing the code-word length by one bit
results in an approximately 6-dB improvement in the quantizcr SNR. The equation
also shows that the assumed dynamic range of the quantizer must be matched to the
input signal. The choice of a very large value for D reduces thc SNR.

Examplo 7.63
Let the input to the quantizer be the signal
.r,(t) = ,4 sin root

The dynamic range of this signal is 2.r4, and the siBnal power is P. = A212. The use of
Equation (7.5.41) gives the SNR for this input as
(sNR)dB = 20logro(l'5) + 6'aB = l'76 + 6028
Note that in this case D was exactly equal to the dynamic rangc of the inPut signal. The
SNR is independent of the amplitude .A of the signal'

7,6 SUMMARY
. A periodic discrete-time signal .r(n) with period N can be represented by the dis-
crete-time Fourier series (DTFS)

.r(n) = *]
^>_,
".-o[iaro
368 Fourier Analysis ol Discreie-Time Systems Chaptor 7

. The DTFS coefficients a, are given by

= i>*''t"l*nf-if;m]
'-
. The coefficients ai are periodic with period N, so that ao : at t N,
o The DTFS is a finite sum over only N terms. It provides an exact alteraative repre-
sentation of the time signal, and issues such ar convergenoe or the Gibbs phenom-
enon do not arise.
t lt ar, are the DTFS coefficients of the signal .r(n), then the coefficiens of. x(n - m)
are equal to ao expl- j(2tt / N ) kml.
r If the periodic sequence.r(n) with DTFS coefficiens ao is input into an LTI system
with impulse response &(n ), the DTFS coefficiens Do of the output y(n) are given by

or = r)
"rn(fr
where

fl(o) =L oOlexp[-jon]
na-6

r The discrete-time Fourier transform (DTFT) of an aperiodic sequence r(n) is given by

x(o)= i,(rtexp[-loz]
na -@

r The inverse relationship is given by

onlda
,<^> =
*f" x1o1exp1;

o The DTFT variable O has units of radians.


o The DTFT is periodic in O with period 2r, so that X(O) = X(O + 2t).
. Other properties of the DTFT are similar to those of the continuous-time Fourier
ransform. In particular, if y(n) is the convolution of x(n) and ft(a), then,
Y(o) = H(o)x(o)
r When the analog signal r,(t) is sampled, the resulting signal can be considered to
be either a CT signal r,(t) with Fourier transform X,(o) or a DT sequence.r(n) with
DTFT X(O). The relation between the two is
X"(or) = X(O) 1n..,
r The preceding equation can be used to derive the impulse modulation model
for sampling:

.r"(r)=x,(r) j 41,-rr1
-.ri,1 '1
7r'i - : 1.f..,,., , ..li,..i ,,E--

f' ,"1 ,*. ..rr li t Ii


'369
Sec. 7.8 Problems

o The transform X.(o) is related to X,(to) by

x,(,) = |,i.r,,, *,,,,


r We can use the last equation to derive lhe sampling theorem. rvhich gives the mini-
mum rate at which an analog signal must be sampled to permit error-free recon-
struction. If the signal has a bandwidth of or. . lhen T < 2rla,..
. The ideal reconstruction filter is an ideal low-pass filter. Hold circuits are practical
reconstruction filters that approximate the analog signal.
o The zero-order hold provides a staircase approximation to the analog signal.
r Changing the sampling rate involves decimation and interpolation. Decimation by
a factor M implies retaining only one of every M samples. Inte rpolation by a factor
-
L requires reconstruction of L I missing samples between every two samples of
the original sampled signal.
. By using a suitable combination of M and L. the sampling ratc can be changed by
any rational factor.
. The process of representing an analog signal by a string of binary numbers is known
as analog-to-digital (AD) conversion. Conceptually, the process consists of sam'
pling, amplitude quantization. and conversion to a binary codc.
o The process of digital-to-analog (D/A) conversion consists of decoding the digital
sequence and passing it through a reconstruction filter.
. A quantizer outputs one of a finite set of values corresponding to the amplitude of
the input signal.
. Uniform quantizers are widely used in practice. In such quanlizcrs, the outPut val-
ues differ by a constant value. The SNR of a uniform quantizcr increases by apProx-
imately 6 dB per bit.

7.7 CHECKLIST OF IMPORTANT TERMS


Allaslng lnterPolatlon
Convergenco ot DTFS lnverse DTFT
Declmatlon Perlodlclty of DTFS coalflclentg
Dlocretetlme Fourler serles Perlodlctty ot DTFT
DlscrotFtlme Fourler translorm Sampllng ot analog slgnals
DTFS coefllclents Sampllng theorem
lmpulse-modulatlon model Zero-order hold

7.8 PROBLEMS
7.1. Determine the Fourier-series representation for each of thc follosing discretetime sig-
nals. Plot the magnitude and phasc of the Fouricr coefficients a^.

(a) ,r(n) = cos3r.n


4
S7O. Fourier Analysis ol Dlscrete.Time Syslems Chapter 7

(b) x(n) = acos ,in


f fi
(c) .r(z) is the periodic extension of the sequence (1. -l.0, l. - ll.
(d) .r(z) is periodic with period 8, and

rt4, : [t. o=n=3


10, 4s n=7
(e) x(n) is periodic with period 6, and

ln, 0sn<3
r(r,r=lO, 4=as5
(0 .r(n) =,i
t- -o
t-1)t6(r - *y + co.z+
72 Given a periodic sequence r(n ) with the following Fourier-series coefficients, determine
the sequence:

(a) at = r *
]'*! *'"""!, osksB

"'} "={;: 2=I:1


(c) ar = exPl-jrk/al' o= k<7
(d) ar = [,0, -1,0, 11
73. l-et ar represenl the Fourier series coefficientsof the periodic sequence x(a) with period
N. Find the Fourier-series coefficients of each of the following signals in terms of a.:
(a) x(a - no)
(b) r(-n )
(c) (- I )"r(n)
(d)
x@),
I zt even
rtzt =
[0, zodd

(Hinx y(n)can be wriuen u. + (- l),.r(n)1.)


] Irtn)
(e)

y(n, = [t(n), n odd


10, r even
1ll y(n): x,(n)
@l y(n) = r"(n)
7.4. Show thal for a real periodic sequence r(n).a^ = a[-n.
75. Find the Fourier-series representation of the output y (a) when each of the periodic sig-
nals in Problem 7.1 is input into a system with impulse response nO = (l) r!l.
7.6. Repeat Problem 7.5 if ,,(r) = (l)r"l
sec' 7'8 Problems 371

7.7. l*t x(n),hln ), and v(n) be periodic sequences rvith the same period ,V. and let ar. br. and
c^ be the respective Fourier-serics coefficients.
(a) Let y(n) = .r(n)h(n).Show that

co=)a-b*-,=2ar-^b-
r.9 (M

= a"@ b*
(b) Lrt y(n) = x(r) @ /r(n). Shorv that
co: Narbn

7.& Consider the periodic extensions of the following pain of signals:


(a) x(n) = 11.0, 1.0,0, ll
&(n) = [], - I, l.0, - I, ll
nn
(b) r(n) = cos j

ft(n) = {1, - I, I, l, - I. ll
1n
(c) r(n) = 2cos -2

n{n) =
t- -l I -ll
lt,-{,q,-s I
(d) .r(n) = I' O=n=7
+ l. 0<n=3
It(z/= [n 4=n=7
l-n+8,
Let y(a) = ft(a)r(n ). Use the results of Problem 7.7 to find the Fourier-series coeffi-
cients for y(z).
7.9. Repeat Problem 7.8 if y(n) = ,r (n) 6, :(n)
7.10. Show that

.1 ['" exp[;o( n - k)ldo = E(n -


Z7t Jo
k)

7.11. By successively differentiating Equation (7.3.2) rvith respect to {}. show that

elnPx(n)l = j'4:fJ9
7.112. Use the properties of the discrete-time transform to determinc X(O) for the follow-
ing sequences:

(a) .t(r) = [t'


o=nsxo
[l' orherwise
rr r l,l
ttl tt") =,(3J
(c) .t(n) = a'cos0/ru(n)' l,l 't
(d) r(n ) = exP[l3al
372 Fourier Analysis ol Discrete-Time Systems Chapter 7

(e) r(n) =
"-o[r;,]
'(O r(z) =
lsinrz + 4cos In
(g) :(n) = a'[u(n) - u(n - n)l
' sin (rrnl3)
(h).r(n)
sin{mnl3):r!(rn/2)
(l) .r(n) -
sinhrn/3!sln(rn/2)
0) x(n) -
(t) x(n ) = (n + 7)a'u(n), lrl t .
7.13. Find the discrete-time sequence .r (n) with transforms in the range 0 = A < 2r as follows:

(a) x(o) = -r,o(o -


]) . 'o(o - +) . 'o(o - T) .i"t(o - 11
(b) X(O) = 4sin5o + 2coso
4
(c) x(o) =
1r"o1..6r_r,
r - expt-iol
f
(d) x(o) =
I + jexpt-;ol- jexpt-izol
7.14. Show that

.i_ t,t,rt' = * f"tx(o)l'zdo


7.15. The following signals are input into a system with impulse response & (n )= (r' z(n ). Use

Fourier transforms to find the output y(z) in each case.

(e) r(n) = (il[- T)"r,


(b) :(z) = (|)'.r"(f),r,r

(") ,(,) = (l)r"l

(d) .r(z) =
"(l)''',r,
7.16. Repeat Problem T.tstth(n)= 5(n - ,1 . (|)'rtrl
7.17. For the LTI system with impulse response

h@)=Y#2
6nd the output if the input .r(n) is as follows:
Sec. 7.8 Problems 373

(a) A periodic square wave with pcriod 6 such that

[t. o s z < 3
.r(n)=10. 4=n<S
(b).r(n)= ) ta(n-2k)-6(rr -1-2k)l
k- --.
7.1& Repeat koblem 7.17 if
o(^) = 2"!#9
7.19. (a) Use the time-shift property of the Fourier transform to find l/(O) for the systems in
Problem 6.18.
(b) Find fi (n) for these systems by inverse tranforming H(O).
720. The frequenry response ofa discrete-time system is given bv

| * 1- *P1-;o;
H@)= --'si
I+
[-exp[-lo]+ iexp[-l2o]
(a) Find the impulse response of the system.
O) Find the difference equation representation of the system.
(c) Find lhe responsc of thc syslem if rhe input is the signal (j)'r,,,,
72L A discrete-time system has a frequency response

d(o) =
rs*Ep"l-pht#1. lol
.r
Assume that p is fixed. Find u such that H(O) is an all-pass funcrion-that is, lH(iO)l is
a constant for all O. (Do not assume that p is real.)
1ZL (al Consider the causal system with frequency response
I + aexp[_ iO.].+ bexp[-l1Q]
\",, _
,,n,,
b + aexp[-j0] + exp[-j20l
Show that this is an all-pass function if. a and b are real.
O) t€t H(O) = N(O)/D(O), where N(O) and D(O) are polynonrials in exp [-lO]. Can
you generalize your result in part (a) to find the relation betrvccn N(O) and D(O) so
that H(O) is an all-pass function?
7J3,. An analog signal .r,(r) = 5cos(2@nt - 30") is sampled at a frequcncy f,intlz
(a) Plot the Fourier spectrum of the sampled signal if f is (i) 150 llz(ii)250H2.
(b) Explain whether.r,(t) can be recovered from the samples, and il so, how.
72A. Deive Equation (7.5.27) tot the impulse response of the linear intcrpolator of Equation
(7.5.26), and show that the corresponding frequency function is as Eiivcn in Equation (7.5.2E).
72J,. A low-pass signal with a bandwidth of 1.5 kHz is sampled at a ratc of 10,ffi sampleVs.
(a) We want to decimate the sampled signal by a factor M. How largc can M be without
introducing aliasing distortion in the decimated signal?
(b) Expiain how you can change the sampling rate from 10,000 sanrpleVs to 4(H sampleds.
374 Fourier Analysis ol Discrete-Time Systems Chapter 7

726. An analog signal rvith spectrum

is sampled at a frequency ro,= 10,000 radls.


(a) Sketch the spectrum of the sampled signa[.
(b) If it is desired to decimate the signal by a factor M,what is the largest value of Mthat
can be used without introducing aliasing distortion?
(c) Sketch the spectrum of the decimated signal if M = 4.
(d) The decimated signal in (c) is to be processed by an interpolator to obtain a sampling
frequency of 75C10 rad/s. Sketch the spectrum of the interpolated signal.
7.t1. veify for lhe uniform quanlizer that the mean-square value of the error' E, is equal to
A2/12. where A is the step size.
7,?A. A uniform quantizer is to be designed for a signal with amplitude assumed to lie in
the range *20.
(e) Find the number of quantizer levels needed if the quantizer SNR must be at least 5
dB. Assume that the signal power is 10.
(b) If the dynamic range of the signal is [-10, l0], what is the resulting SNR?
7J9. Repeat Problem 7.27 if the quantizer SNR is to be at least 10 db.
Chapter B

The Z-Transform

INTROD
In this chapter, we study the Z-transform. which is the discrete-tinle counterpart of thc
Laplace transform that we studied in Chapter 5. Just as the Laplacc transform provides
ur fr"qu"n.y-domain tcchnique for analyzing signals for which thc Fourier transforrn
" noi exisi. the Z-transform enables us to analyze cerlain tliscr.'te-time signals that
does
do not have a discrete-time Fouricr transform. As nlight be expcctcd, the properties of
the Z-transform closely resemble those of the Laplace transfortn, so that the results are
similar to those of Chapter 5. Horvever, as with Fourier transtirrms of continuous and
discrete-time signals. there are cerlain differences.
The relationihip between the taplace transform and the Z-trans[ornr can bc cstab-
lished bv considering the sequence of samples obtained by sanrpling an analog signal
ro(t). In our discussion of samplcd signals in Chapter 7, we sa\\ that lhe outbut of the
simpler could be considered to be either the continuous-time signal

...(r) :,)-ru(xf)S(r - nT) (8.1.1)

or the discrete-time signal


rfu):.r,,(n7') (ii.1 .2 )

Thus, thc l:place transform of .r.(r) is

r-,,2. .r, (n I) exp


X.(.s) = [ -. sr] r/t

=) x.@T)cxp[-'n7.rlr/r (8.1.3)

375
976 The Z_Trans{orm Chaptor g

where the last step follows from the sifting property of the S.function. If we make the
i
substitution = exp[Ts], rhen

.Y,(S)1.=*pr,rl = (E.1.4)
,,i.x,(nT)z-^
The summation on the right side of Equation (E.1.4) is usuaily written as X(e) and
delines the Z-trarrsform of the discrete-time signal r(n ).
We have,in fact, alr,, Ji urrcountered the Z-transform in Section 7.1, where we dis-
cussed the respons. rrf ;. linear, discrete-time, time-invariant system to exponential
inputs. There we sa'.. hat if th; input to the system was.r(n ) : 3", the output was
'

y(n) = H(z)z' (8.1.5)


where H(z) was defined in terms of the impulse response h(z) of the system as

H(z)= i -* n(r)r'
n=
(E.1.6)

Equation (8.1.6) thus defines the Z-transfornr of the sequence &(z). We will formalize
this definition in the next section and subsequently investigate the properties and look
at the applications of the Z-transform.

8.2 THE Z-TRANSFORM


The Z-transform of a discrete-time sequence x(z) is defined as

x(z) =\ r(n)z-'
where e is a complex variable. For convenience, we sometimes denote the Z-transfornr
as Z[r(n)1. For causal sequences, the Z-transform becomes

X(z) = )
r-0
x@)z-" (8.2.2)

To distinguish between the two definitions, as with the taplace transform, the trans-
form in Equation (8.2.1) is usually referred to as the bilateral transform, and the
transform in Equation (8.2.2) is referred to as the unilateral transform.

Example 8.2.1
C-onsider the unirsample sequence

,(r) = j;: :;Z (823)

The use of Equation (8.2.1) yields


X(z) = 1.7o = 1 (E2.4)
Sec. 8.2 The Z-Transform

Example 8.2.2
Let.r(n) he the sequence obtained hy sampling thc conrinuous,limc function
.r(t )= exp[-arlr,(r) (8.2.-s)

every I seconds. Then

r(a) = exp[-arr7'la(n) (8.2.6)


so that, from Equation (8.2.2). rve have

xtzf = j lexp 1- ,r 1-- ,l',


i-tt expl- anrlz-" = n.O
Using Equation (6.3.7). we can rvrite this in closed form as

x(:)= I lz (8'2J)
-expi-arlz'=.-"*i1-,r;
E-a'rrple t2.3
Consider the lwo sequences

.r(n)
fo"
=1 '
rr>o
(E.2.E)
n <o
|.0,
and
(nv a<o
(8.2.e)
't,,r={-(iJ' n=0
10,
Using the definition of the Z-rransform, we can write

x(z) =.e (:1.'=;.(j. )' (8.2.,0)

We can obtain a closed-form expression for X(z) by again using Equation (6.3.7), so that

x(z)=,_1,-= z (8.2.11)
. zZ-, z-l
Similarly, we ger

- (:)' .. = (j. ,)' = -


,i,,,o",
Y@ =
.i. .t Gz.,z)

which yields rhe closed form

y(7)=-;!'^_). = j--, (g.2.13)


| tz z_;
.i, .l The Z-Translorm ChaPier 8

,\s can he seeu. the exprcssions lor the two transforms. x(z) and Y(z). are identical.
Seemingly. rhe rwo tr)raily different sequences.r(n) and y(n) have the same Z-transform'
Thc c.lifieience. of course. as rvith the Laplace transform, is in the two different regions of
convcrgence for x(z) and Y(;), where the region of convergence is those values of z lbr
rvhich tie powcr series in Equation (8.2.1 ) or (8.2.2) exists-that is. has a finite value. Since
Equati.n iO.:t.;l ir n geomerric series. the sum can be put in closed fornl only when the
summand has a nragnitude less than unity. Thus. the exPression for x(z) given in Equa'
tron (8.2.11) is valid (that is. X(3) exists) only if

l]r-'| < t "' l.l , j (8.2.14)

Similarly. from Equation (8.2. l3). Y(z) exists if


lzzl < t or lzl <l (E.2.ls)

Equations 18.2.1a) anrt (8.2.15) define the regions of convergence for x(z\ and Y(z),
reipecrively. These regions are plotted in the comptex z'plane in Figure 8'2'l'

lmz

Flgure &21
Regions of
convergence (ROCs) of the
Z-transforms for Example 8.2'3.
(a) ROC for X(z). (b) ROC for
Y (zl.

From Example 8.2.3, it follows that, in order to uniquely relate a Z-transform to a


time function, we must specify the region of convergence of the z-transform.

CONVERGEN
Ct'rnsider a sequence .r(n ) with Z-transform

x(z)=
aE't
i ,@'12-' (E.3.1)
'
We want to determine the values of z for which X(Z) exists. In order to do so, we
represent z in polar form as z : r exp[10] and write
Sec. 8.3 Convergence of the Z-Transform 379

*(r) = expIlgl) "


,,?-.r(a)(r
-1
= ).r(n)r-"expl-7rrul (5.-i. j

Let -r* (n ) and -r - (n ) denote the causal and anticausal Parts o[.r (,r ). rc spectivel,,*. Tha L ii.
n*(n ) = x(n)u(n)
x-(n): x(n)u(-n - 1) (rt.3.3 )

We subslitute Equation (8.3.3) into Equation (8.3.2) to gct


-t
X(r)= ) r-(n)r-"exp[-ln0l + I*-(rr)r'"cx1,i ir01
,ll

= ).r-1-rr)/"exp[jme] + ) r, (rr)r "cr1il 7'rr(rl

= ) lr-(-rr)ll + > l.r.1rr)lr-" (s..i l)


For X(z) to exist. each of the two tcnns on the right-hand side of l:quation (8.3.4) nrust
be finite. Suppose there exist constants M, N, R-, and R* suclr tlrirl

lx-1n11 < run: forn < 0, l.r*(n1l < Nnl lt;rrr = 0 (i1.3.-5i

We can substitute thesc bounds in Equation (8.3.4) to obtain

X(z) = M ) R-n'r^ + N ) R'ir-" (8.3.6)


trt=l

Clearly, the first sum in Equation (8.3.6) is finite it rlR- < l. irtttl the sec(rnl1 strr,r rs
finite if R*/, < l. We can combine the two relations to detcrtrtrrt. rhe regir;n oI c,,I
vergence for X(z) as
R.<r<R
Figure E.3.1 shorvs the region of convergence in the r planc as Lire annular regiotr
between the circles of radii R- and R*. The part of the transforrl ctrrresponding to the
causal sequence .r*(n) converges in the region for which r -.' oI, equivale'ntlv.
,
lal R..That is, the region of convergence is outside the circle ^.
l ith radius I(., Sirrr
ilarly, the transform corresponding to the anticausal sequen,:c '. 1rl) ctrnr'rrtii ii'
r ( R- or, equivalently, lr l a n-, so that the region of coni'cr{;t:.c is irtsitit ii.': ..,,
cle of radius R-. X(z) does not cxist if R- < R-.
We recall from out' discussion of the Fourier translor m of ,.ll:L tcla-l,t'r -' ':lgrlzll 't'
Chapter 7 that the frequency variable O takes on values in ltt. 1r; l. For ii [i'.t o ' :tti:t
of I it follorvs from a comparison of Equation (8.3.2) with htgiretion 17.-'.1-:. tr,-,'
-"
X(z) can be interpreted as the discrete-time Fourier lranslbrtrt {:l thc srqrral 't(rr)r
This corresponds to evaluating X(z) along the circle of radius r in the i r)llne L
we set r = I, that is. for values of z along the circle with urrit;' r,,'.lrtr: .\'r'l i lr--t-I.,:.
380 The Z-Translorm Chapt€r I

Ilgore &3,1 Region of


oonverg,ence for a general noncausal
sequence.

to the discrete-time Fourier transform of x(n), assuming that the transform exists.
That is, for sequences which possses both a discrete-time Fourier transform and a Z-
trausform, we have
X(O) = X(a) l.-.,p1-ior (8.3.7)
The circle with radius unity is referred to as the unit circle.
In general, if x(n) is the sum of several sequences, X(z) exists only if there is a set
of values of z for which the transforms of each of the sequences forming the sum con-
verge. The region of convergence is thus the intersection of the individual regions of
convergence. If there is no common region of convergence, then X(z) do€s not exist.

E:vanrple 83.1

Consider the function

,r"r = (])"r,r
Clearly, R* = 1/3 and R- = 6, so that the region of convergence is

t.l ,l
The Z-transform of .t(a) is

x(z)=-4-.:-3.
t z - ;z-' - i
which has a pole at z = 1I3. The region of convergence is thus outside the circle enclosing
the pole of X(a). Now let us consider the function

,r,r = (|)',or. (|)',r,r


Sec. 8.3 Convergence of the Z-Transtorm 381

which is the sum of the two functions

r,(n) = {;f
/l\"rr(tt) and =
,r'
\./
.r2(rr} i. | ,;r,,:
From the preceding example, the region of convergcnce for ,\', (.: I i'
I
lzl > z

whereas for Xr(z) it is

I
Izl > r
Thus, the region of convergence for X(z) is the intcrsection of thcsr: trYo recions and is

l.l , =;
""-(;.1)
It can easily be verified that
? , 222 -l;
X(7) = --: = .- + '
z-1, z-l k-))(z-\)
-'-.
--e-
Hence, the region of convergence is oulside the circle that includcs both poles of X(z).

The foregoing example shows that for a causal sequence, the regrou of convergence is
outside of a circle rvhich is such lhat all the poles of the transfornr .Y(t) are witlrin this
circle. We may similarly conclude that for an anticausal function, the region of con-
vergence is inside a circle such that all the poles are external to thc circle.
If the region ofconvergencc is an annular region. then the polcs of X(z) outside this
annulus correspond to the anticausal part of the function. rvhilc thc poles inside the
annulus correspond to the causal part.

Eqa'nple 832
The function

3", n<0
,"r: (ll tt = o.2,4.erc

{ (jl ,, = r,3,5.etc

has the Z-transfornr

..
x(21=
,,i-_t.2.. ; G)". .e
nodJ
(j
Let n = -rr in the first sum, n : 2rn irr lhe second rr.. *.1 ,1 - 2r:r i i,r lhc titird sum. Ii'tn
382 The Z-Translorm ChaPter I

x(r) =
P,(i.)-
. int';t\' *'-' i (i. l-
1: --.l----lt'-
- r-1. I -fz-z I - z-z

z_3 ,r_,1 ,r_i


As can be seen, x(z) has poles ai z = 3, lt3, -1/3, l/2. aad -llZ.The pole at 3 corre-
sponds to the anticausal parr. and the others are causal poles. Figure 8.3.2 shor[s the loca-
tions of these poles in thl i plane, as wetl as the region of convergence. which, accordingl
to our previous discussion' is l/2 < lzl < 3.
lm:

Articausal pole

Flgure E32 Pole locations and


region of convergence for ExamPle
8.3.2.

Example 833
Let.t(a) be a finite sequence that is zero for a < no and n) n,'Then
X(z) = .11r,,);.-* i x(no * l)3-(""* tt r "' r x(nr)z-'r
Since X(z) is a polynomial in z(or z-l). X(z) converges for all finite values of z' exaept
n, > 0'
z = 0 for nt> tl. The poles of X(z) are at infinity i[ ao < 0 and at the origin if

From the previous example. it can be seen that if we form a sequence y(n) by adding
a finite-length sequence to a sequence x(n), the region of convergence of Y(z) is
the
same as that of X(z), except possibly for z = 0.

Erample 8.3.4
Consider the righr-sided sequence

,(") = ,(;)' rr(n + 5)


Sec. 8.4 Properties ot the Z-Transtom 383

By rvriting )(r) -
as the sum of the finite sequence 3(l/2\"lt(n + 5) u(z)] and the
sequcnce t (n'1 = 7112rr,r)"r(n), it becomes clear that the R()('ot f(r) is the same as
thar of X(:). namely, l: I > l/2.
Sinilcr!y, :he sequence

v(n)=-(j)',r-,*,,
can be considered to he the sum of the s!'quence x(n1 = -)2(l/2)'z(-n - l) and
the finite sequence - 32(l/2)'lu(n) - rr(l - 6)1. lt follows rhat Y(z) converges for
0< lzl < trz.

In the rest of this chapter, we restrict ourselves to causal signals and systems' for
which we will be concemed only with the unilateral transform. In the next section, we
discuss some of the relevant properties of the unilateral Z-transform. Many of these
properties carry over to the bilateral transform.

PROPERTIE OF THE Z-TRANSFO


Recall the definition of the Z-transform of a causal sequence -r(n):

x(z)-)-r(n)2" (8.4.1)
,t-0
We can directly use Equation (8.4.1) to derivc, the Z-transforms o[ common discrete-
time signals, as the following example shows.

Example 8.4.1
(a) For thc 6 function, we saw that the Z-transform is

216(n\l= 1.zo= l (8.4.2)

(h) Let
:(n) = o"'1n,
Then
rl
x(z) = ) a'z-'=',
,o
=:'-.1.:1,lcl
l-oi l---, z-q
(8.4.3)

By letting c = t, we obtain the transform of the unit-step function:

zlu(n)l =
11 ;, l.l
, r (8.4.4)

(c) Let
.t (n) = c6.61orr,,,, (E.4.s)

By writing .r Qr) as
rI:!] The Z-Transform Chapter g

I
x(n) = + exp [-lfioz]lz(z)
)[exg[jfiozl
and using the result of (b). it follows that

' ,r,',:!2 expljttol'-L


^\'''
Z
z- 2z -exp[-jfto]

=
zG:ls&)_ (8'4'6)
IL zzcosrh + t
Sinilarly, the Z-transform of the sequence
:(z) = s6q-,r, (E.4.7)

is

z sinOo
x(z) = (8.4.8)
z2 - 2z ccf,lo +I

Let .r(n) be noncausal, with .r* (z) and r_ (r) denoting its causal and anticausal parts,
respectively, as in Equation (8.3.3). Then
x(z): X,(z) + X-(z) (8.4.9)
Now,

x-(z)= j r-1r)e-' (E.4.10)


4d -@

By making the change of variable m = -n and noting that r- (0) = 0, we can write
x-(z)= i x-(-n)z^
m-O
(8.4.11)

l-et us denote r -(-m)by xr(m). It then follows that


X-(z) = Xr(z-') (8.4'12)
wbere X1(z) denotes the Z-transform of the cazsal sequence .r_ (-n )
Eraraple t.4.2
kr
,r, = (;)'''
Then

,.",= (l)", n>0

' "'= (l)-', z<0


Sec. 8.4 Properties ol the Z-Transform 385

and

-r,(n) = 5-1-,; = (])' r,


"

= (])',r,- u,,y
From Example E.4.1. we can write

x,({ = -}-.
1.
lzl > I

and

x.,(z)= L-- I= rl
z-i z-', l.l
-2-,
so that

x-(7)=-;- l:.1.2

and

Thus, a table of transforms of causal time functions can be uscd to find the Z-ians-
forms of noncausal functions. Table 8.1 lists the transform pairs derived in Example
8.4.1, as well as a few others. The additional pairs can be derived dircctly by using Equa-
tion (8.4.1) or by using the properties of the Z-transform. We discuss a few of these prop-
erties next. Since they are similar to those of the other transforms we have discussed so
far, we state many of the more familiar properties and do not dcrive them in detail.

8.4.1 Linearity
If r, (n ) and.rr(n) are two sequences with transforms X, (z) and X,(:). respectively, then

Z[ar.rr(n) + arxr(n)l= arXr(z) + arXr(z) (8.4.13)

where a, and ararc arbitrary constants.


386 The Z-Translorm Chapt€r 8

8.1.2 fime Shifting


Let r(n) be a causal sequence and let X(z) denote its transform. Then, for any inte-
gct' rt,, ) 0,

Zlx(n +no)l = > x(n + no)t-"


n=O

= \ r(my;tn't,t
E
t rll

"!rr1rn1z-,,]
= z,ulio,ln)a-,,, -
[ 'b:r I
= z\lx(z) - )or@)r-"1 (8.4.14)

Similarly,

Zlx(n -no)l : ).r(n - nr)z-'


n =O

=! '1-;t-''.,n''
= t"li,r<^).-'* (*)r-^f
-1,,,..
= r- "lx <rl +
j,. r{-) z-,,] (8.4.1s)
,,

Example 8.4.3
Consider the difference equalion

I
Y(n) - ry(n - l) = 6(z)
with the initial condition

.y(- l) = 3
In order to find y (n) for z 0, we use Equation (E.4.15) and take transforms on both side.
=
of the difference equation, Betlting
1

Y(zl - 2z-tlY?)
+ .r'(- l)31 = t

We now substitute the initial condition and rearrange terms, so lhai


t
|-t'z-t 2:-"
Sec.8.4 Properties ol the Z-Transform 387

It follows from Example E.4.1(b) that


5/l\"
y(n\=
2\z). r=o

Example t.4.4
Solve the difference equation

v(n + 2l - y(n +l) + |r(n) =.r(nr

fory(n).n -
0, it.r(n) = u(n)'v(l) = l, and v(0) = l'
Using Equation (8.4.14). we have

zlly4) -y(0) - y(l)z-rl-zlY(r) - y(o)l * 1v(:) = x(z)


Substituting X(l\ = zlk - l) and using the given initial conditions' rve get

(.' -. * i)",., =
r'-,.* r'=." :-' l'
Writing Y(z) as

z1 'z*l
Y(z\=
"" z'(z-l)(z-llt.-i1
-
,
and expanding the fractional term in partial fractions yields
r: ', t
vt.l=.Lil,*.__l_r_i
.-t *'r,
=,1. :-l - z-]
7,2

From Example 8.4.1(b). it follows that

y1n1 =e;u1n1. i(i)',r, - ,(])""r"r

8.4.3 FrequencyScaling
The Z-transform of a sequence a'r(n) is

Zla" x(n\l = ! xPrl(a


i a'x(n\z-" = x=0 I
:)-'
,,=0
= X(a-t z) (8.4.16)

Example 8.4.6
We can use the scaling property to derive the transform o[ thc signal
y(n) = (a'cosOon)u(n)
388 The Z-Translorm Chapter I
from the transfornr of
.r(z) = (cos l)on)a (z)
rvhich, from Equation (8.a.6), is

v/-\
,"r., _
-
z(z - cosoo)
zz -i iosfrr+T
Th us.

rt.r=;f;{fi*ffi 1

= zz_:12_-_r:elrb)
- 2a cos{loz + a2
Similarly, the transform of
y(z) = a'(sin()na)rr(n)
is, from Equalion (8.4.8),

y(.) =
F;!;'j#;7
4,4.4 l)iffereatiation with Respect to z
If we differentiate both sides of Equation (8.4.1) with respect to z, we obtain
dX(z\ : (_
) n)x(n)z-"-l
u<' n=o

= -z-t 2 *b\z-"
'l-0
from which it follows that

z[nr(n)) = -rt*el (8.4.17)

By successively differentiating with respect to z, this result can be gerreralized as

Ztnkx(n)t: (-,*)r *ru (8.4.18)

Example 8.4.6
Let us find the transform of the function

y (n) = n(n + l)u(n)


Fronr Equation (E.4.17), we have

zlnu(n)t = -, ft ,p611 = -, ** =
;*
Sec. 8.4 Propertes ol the Z-Transform 389

and

Z[nzu(a)t =
*)"rrnl = - z lrr-, l, ru all
(-,
_ _d z z(z+l)
--'A(z-tr=(FT;'
so that

8.45 InitialValue
For a causal sequence x(n), we can write Equation (8.4.1) explicitly as
x(z) : r(0) + r(l)e-t + x(z)z-2 r ... + x(n):-" * ... (8.4.19)
It can be seen that as z -+ co, the term z-n -s0 for each fl > 0. so that

Jrlg
x(a) = -r(o) (8.4.20)

Example 8.4.7
We will determine rhe initial value r(0) for thc signal with transform

x(z\ : _: zr_1zr+22_5.
__. _1
(z-1)(z-!)Q'z-rz+t)
Use the initial value theorem gives

r(0) = 1;' x(z) = t


tJc

The initial value theorem is a convenient tool for checking i[ thc z-rransform of a given
signal is in error. Partial fraction expansion of X(z) gives

x(z) = J-+ -j. - ---:)


z-l z-ti z2-t1z+ I
so that

x(n) = u(n) * (|)',t"r _ (i)'*,(l ,)


The initial value is.r(0) = l which agrees with ihe result above.

t.4.6 FinalValue
From the time-shift theorem, we have
Zfx(n) -.r(n - l)l = (t - z-t)xk) (8.4.21)
The left-hand side of Equation (8.a.21) can be written as
390 The Z-Transform Chapter I

) [.r1r1 -.\'(r - l)1.: "'- ]int ) [.r(rr) -.r(rr - l)]r""


tt ll ''' a'(t

lf we now let : -+ I. Eqtrati,-'n {N I 2l) c;rn lre !r71i11s.


^*

l$ tl - r-')x(;) = I,* ,I, [.r(r)'- r(n - l)]

= lim .r(N) = x(:c) 18.4.22)

assuming.r(cc) exists.

f,sernple E.4.8
By applying the final value theorem, we can find the final value of the signal of Exam-
plc 8.4.7 as

r(a) = 1* , *r,r= g [.,]'_-,f;t'i,.1i)


,t
so th-.
r(cr; = 1

which again agrees with the final value ofx(n) given in the previous example.

Example 8.4.9
[,et us considcr lhe signal x(n) = 2 rr1nl*ith Z-transform given by
3
X(z) = z-2
Application of the final value theorem yields

.t(,)=lirn z: l z..=1
:jt Z Z_z
Clearly this result is incorrect since.r(a) grows without bound and hence has no final value.
This example shows that the final value theorem must be used with care. As noted ear-
lier it gives the correct result only if the tinal value exists.

8.4.7 Convolution
If y(n) is the convolution of two sequences.r(n) and lr(rr), then, in a manner analogous
to our derivation of the convolution property for the discrete-time Fourier transform.
we can show that

Y(z) = Htz)X(z) (8.4.23)

Recall that
Sec. 8.4 Properties ol the Z-Transform 391

Y(z\ = i y@)z-'

so that y(n) is the coefficient of the z,th term in the power-series expansion of Y(z).
It follows that when we multiply two power series or polynomials X (z) and H(z), the
coefficients of the resulting polynomial are the convolutions of the coefficients in
.r(n) and ft (n).

Example 8.4.10
We want to use lhe Z-transform to find the convolution of the [ollowing tvo sequences,
which rvere considered in Example 6.3.4:

h(n) = 11.2,0, - I. ll and r(n) = [.3. - t. -2|'

The respective transforms are

H(z)=l+22-t-z-'+z-o

and

i/(1) = 1 + 3z-r - z-' - Zz-'

so that

Y(z)= 1+52 I+ 52-2 - 5z-r - 6z-o + 4z-\ + z u -22-1

It follows that the resulting sequence is

.v(z) = (1, 5, 5, - s. - 6.4, l, - 2l

This is the same answer that was obtained in Example 6.3.4.

The Z-transform properties discussed in this section are summarized in Table 8-1.
Table 8-2, which is a table of Z-transform pairs of causal time functions, gives. in addi-
tion to the transforms of discrete-time sequences, the transforms of several sampled-
time functions. These transforms can be obtained by fairly obvious modifications of the
derivations discussed in this section and are left as exercises for lhe reader.
392 The Z-Transform Chapter I
TABLE &1
Z-Tranatorm Propertl6
l. Linearity arxr(n) + arxr(n) arXr(z) + arXr(z) (8.4.13)

2. Time shift r(a + z6) ,"1*rr, -9",o12'^) (8.4.14)

r(n - ,t!) z-"lxe)+


L
j ,1,.ya-,1 (E.4.ls)
m--an J

3. Frequency scaling a'r(n) X(a-tz) (8.4.16)

4. Multiplication by n nx(n) -r4*ut


az
(8.4.17)

nk x(n) (8.4.18)
1_2ft)rxat
5. Convolution .r,(n) r.rr(z) Xr(z)Xzk) (8.4.23)

8.5 THE INVERSE Z-TRANSFORM


There are several methods for finding a sequence.r(n ), given its Z-transform X(z). The
most direct is by using lhe inversion integral;

= (85.1)
'@) *j{rx(z)z'-'dz
f
where fs- represents integration along the closed contour in the counterclockwise
direction in the z plane. The contour must be chosen to lie in the region of conver-
gence of X(z).
Equation (8.5.1) can be derived from Equation (8.a.1) by multiplying both sides by
zt-l and integrating over f so that

r,r,r -' (n) zk -'-' dz


* f,* "
=
+ f,flt
By the Cauchy integral theorem,

{r'o-'-'o'= {3:' I;:


so that

= hix(k)
fr*{,),|-'o,
from which it follows that

'o =
*f,xk)zo-'az
TABLE &,2
Z-Transtorm Palr8
Radlua o, GonYargsnco
x(4
{rr)tota>0 lzl 'a
1.6(z) I 0
2.6(n - m) z-,n 0

3. u(n) I
z-l
z
4.n I
G:IT
5. n2 4!-+-,r) I
(z - l)'
z
6. an lol
z-o
az
7. na" lrl
(z - o)'
22
+ l)a" lol
E. (r7
Q -;f
Za+t
lrl
d -En
z(z -_c91!h)-
10. cos flsn I
z2 - 2z cosfh + I
f,lrn !u-q.--- 1
11. sin
---
z2 cos(h + I
2z
z(z_- a cosfh)
12, a" cos(l6n
z2 - 2zo cos 0o + a2
lrl
za sin Oo
13. a' sin flen
zz - 2za cos(lo + a2
l,l
z
14. expl- anTl lexp [-ar]
z - expl- aTl |

Tz
nT I
15.
Grtf
Tz expl- aTl
16. nT expl- anTl lexp [-ar]l
lz - exgl-aTll2
z(z - cosoro 7) I
17. cosaor6I
22 - 2z cosaoT +I
z sinoo I I
18. sinzrool
z2-2zcoso4T+l
z [z - exp [- all cosooT]
19. expl- anTl cos n r,re T
+ expl-ZaTl iexp [- aI] |
zz - 2z expl- aT] cosool

I lerp[- ar]l
20. expl- anTl sin n tos
@rt
393
394 The z-Transtom chapter E

We can evaluate the integral on the right side of Equation (8.5.1) by using the residue
theorem. However. in many cases. this is not necessary. and rve can obtain the inverse
transform by using other methods.
We assume that X(e) is a rational function in I of the torm

x(z) =u)u**uol,'r.- ::2::::, tur


=N
(8.5.2)

with region of convergence outside all the poles of X(z).

8.6.f Inversion by a Power-Series Expansion

If we express X(z) in a power series in z-t, x(n) can easily be determined by identify-
ing it with the coefficient of 2-" in the power-series expansion. The power series can
be obtained by arranging the numeraror and denominator of X(z) in descending pow-
ers of z and then dividing the numerator by the denominator using long division.

g.rarrple 8.5.1
Determine the inverse Z-transform of the function

x(:)=z-lo-i' lzl >o.t


Since we want a power-series expansion in powers of z-1. we divide the numerator by the
denominator to obtain
I + 0.lz-r + (0.lfz-'?+ (0.1)13-r + "'
z -0.11 z
z-0.1
0.1
0.t - (o.l;:. - t

(o.l )!z - |
(0.1)22-'
(o.l Yr. -:
We can write, therefore,

X(z) = 1 + 0.lz-r + (0.1)22-r + (0.1)13-r +...


so that

r(0) = 1. :(1) = s.t, r(2) = (0.1)2. ,r(3) = (0.1)r. etc.


It can easily be seen that this corresponds to the sequence
r(n) = (0.1)'rr(n)

Although we were able to identify the general expression for:(n) in the last example,
in most cases it is not easy to identify the general term from the first few sample val-
ues' However. in those cases where we are interested in only a few sample values of
Sec.8.5 The lnverse Z-Transform 395

x(z), this technique can readily be applied. For example, if .r(n) in the last example rep-
resented a system impulse response, then, since .r(n) decreases very rapidly to zero, we
can for all practical purposes evaluate just the first few values of r (n) and assume that
the rest ate zero. The resulting error in our analysis of the system should prove to be
negligible in most cases.
It is clear from our definition of the Z-transform that the series expinsion of the
transform of a causal sequence can have only negative powers of <. A mnsequence of
this result is that, if r(n ) is causal, the degree of the denominator polynomial in the
expression for X(z) in Equation (8.5.2) must be greater than or equal to the degree of
the numerator polynomial. That is, N > M.

Example 8.5.2
We want lo find the inverse transform of

x(z) =
z3-z'+z-i ,l
..t -5-2 r !- _ _L' l.l
4. t2. 16

Carrying out the long division yields the series expansion

x(z) : | * !,-, *ii.- * s|r- * ...


from which it follows that
s:4,
.r(0) = 1, ,(r) = 1, ,(4 =
reE,
,Q) = etc.

In this example, it is not easy to determine the general expression for.r(n), which, as we
see in the next section, is

.r(n) = s(n) - s(|)',r,1 * s^(l),at.,(])" u(n)

85"2 Invereion by Partial-Itaction Expansion


For rational functions, we can obtain a partial-fraction expansion of X(z) over its poles
just as in the case of the Laplace transform. We can then, in view of the uniqueness of
the Z-transform, use the table of Z-transform pairs to identity the sequences corre-
sponding to the terms in the partial-fraction expansion.

Example 8.63
Consider X(z) of Example 8.5.2:

x(z) =
z'-z'+z-!s ,j
-3 _5--2.. l- _ r' lzl
'4<'2'16
In order to obtain the partial-fraction expansion, we first write X(z) as the sum of a con-
stant and a term in which the degree of the numerator is less than that of the denominator:
,396 The izTransform Chapter I

r.;:i._,z' +_i_r
t,
\z
x(z\ =
4' '2' 16

In factored form, this becomes

x(z)=1.#r)
We can make a partial-fraction expansion of the second term and try to identify terms
from Table 8-2. However, the entries in the table have a factor z in the numerator. We
therefore write X(z) as

x(7)=1.,ffrfi
lf we now make a partial-fraction expansion of the fractional term, we obtain
t -o
X(z) = 1+ zl--:-
-\.-i + I"..- + s- i |
Q-i)' ,-'il
=r-e' z-i +s-i!?-ae_,z-
z-'; k-i)'
From Table 8-2. we can now write

r(n) = 61n; - ,(i),r,1 . t^()),o. r(j)',r,1

f,'rample 8.6.4
Solve the difference equation

. y(n\ -|tA -r1 + |r(n -2) =zsin|


with initial conditions

/(- 1) = 2 and y(-2) = 4


This is the problem considered in Example 6.5.4. To solve it, we first frnd Y(z) by trans-
forming rhe difference equation, with the use of Equation (8.4.15):

vk) -lz-,w(z) + 2zl *f,r-r1v1r1+ 422 + 2zl =


;i
Collecting terms in Y(z) gives

| -1.' * |.-')ret = 1-1.--, -;i


trom which it follows thar
z' - Iz
Ytz) =;lfJ *;;;*:T-2z! r.,' lzl > I
. 4.,8 r rr(.(,
\4 r-s,
Sec.8.5 The lnverso Z-Translorm 397

Carrying out a partial-fraction expansion of the ternlson the right side along the lines of
lhe previous example yields

i",;l!.1
Y(z) = ,_._1. := _
* '4.i*
l3i8zll2z96z'
5z-i tlz-to 8522+1 8512+l
The frrst two terms on the right side correspond to the homogeneous solution, and lhe last
two terns correspond to the particular solution. From Table 8-2, it follows that

13/l\" 8 /lY ll2 nn % nl


v<a= l\z) -rz[a/ + 8s sint-Ecos 2' n
=0
which agrees with our earlier solution.

Ertarnple &65
[-et us find the inverse transform of the function

x(z)=- *--,..
(z-jXz-i) l.l ,j
Direct pa rt ia I- fracl io n expansion yields

xk)=:,_*
which can be written as

x(z): z-t--4:
12'4 ,
- or-'
:!_1
We can now use the table of transforms and the time-shift theorem, Equation (8.4.15), to
wrile the inverse transform as

,(,): o(l) ,(n - r) - r(j)'-',r, - rr

Alternatively, we can write

xdt=
zk-ik-t)
and expand in partial fractions along the lines of the previous example to get

xol=r(9*-q---f9-)=8+ 8t, l6t,


\z z-) z-'rl z-', - r-j
We can directly use Table 8-2 to write r(a) as

r(n) = 8s(n). a(i)",r,r - re(f)",r,1


To verify that this is the same solution as obtained previously, we note thal fora = 0,r e have
398 The Z-Transtorm Chapter I
.r(0)=8+E-16=0
For a = l, we gel

,(,)=r(;) _,.(i). =^(;) ' -.(il '

Thus, either method gives the same answer.

8.6 Z-TRANSFER FUNCTIONS OF CAUSAL


DISCRETE-TIME SYSTEMS
We saw that, given a causal system rvith impulse response h(z), output corresponding
to aoy input r(n ) is given by the convolution sum:

y(n):)h(k)x(n-k) (8.6.1)
t-0
In terms of the respective Z-transforms, the output can be written as

Y(z) = H(z)x(z) (8.6.2)


where

H(z) = zlh(n)l = Yk) (8.6.3)


x(z)
represents the system transfer function.
As can be seen from Equation (8.6.2), the Z-transform of a causal function contains
only negative powers of s. Consequently, when the transfer function H(z) of a causal
system is expressed as the ratio of two polynomials in z, the degree of the denomina-
tor polynomial must be at least as large as the degree of the numerator polynomial.
That is, if

rrt.\:\MzM + PM-FM'| + "'+ 9( + 9o


,"\'/ (8.6.4)
o nzN + or-rz'-' + ... + arz + ao

then N > M if the system is causal. On the other hand, if we write If (z) as the ratio of
two polynomials in z-r, i.e.,

I,z +...+ ou_,t :!!E:


rr+ ,
H(z) = : (E.6.s)
* "' * a'v-12-N'r + o''z-n
then if the r^,"r;r;"r" ,iir**".'.'
Given a system described by the difference equation

5
&-0
oor@- /,) = 5 b6@ - k)
k=0
(8.6.6)

we can End the transfer function of the sysrem by mking rhe Z-transform on both sides
of the equation. We note that in finding the impulse response of a system. and conse-
Soc. 8.6 Z-Trunsler' Functions of Causal Discrete-Time Systems 399

quently, in finding the transfer function, the system must be iniriallv relaxed. Thus. rt
we assume zero initial conditions, we can use the shift theorenr trr gel
fM I t-N
klX(z) I
12 b*r-,lv(z) = l\ arz (6.6.7r
Lt--o I Lr---u - I
so that
M

2 bo'-o
u(z)=#- - (s.6.s)
2
k=0
oo'-r

The corresponding impulse response can be found as

h(n) = z-tlH(z\) (s.6.e)

It is clear that the poles of the system transfer function are the sarne as the character-
istic values of the corresponding difference equation. From our discussion of stability
in Chapter 6, it follows that for the system to be stable, the poles must lie within the
unit circle in the e plane. Consequently, for a stable, causal function, the ROC includes
the unit circle.
We illustrate these results by the following examples.

n-rnple t.0.1
Let the step response of a linear. time-invariant, causal systcm bc

:
y@
l,t,r - f (j)',t,r * fr (- j)',t,r
To find the transfer funclion H(z) of this system, rye note that

'\" s(z-r)
y(z\ =9 z- * ? -- ,
--1 -l3e-))' ls(r*l)
-3 _ la-2
< .1.

(z-r)(z-jlt.+|l
Since

x(z\ = -f '
z- L
it follows that

H@=#=#j (8.6.10)

=2- , *!-l
3z+l 3:-l
Thus, the impulse response of the system is
400 The Z-Transform Chapier I
h@ =l(- l)',,,, . l(l)",r,
Since both poles of the system are within the unit circle, the system is stable.
We can find the difference-equation representation of the system by rewriting Equa-
tion (8.6.10) as
y@. I - |z-t
r(z) (t - jz-')(1 + lz-')
=

=,-*rrl
, 4. 8.,
Cross multiplying yields

[' - 1'-'- ]'-']'t" = [r - ]'-']xr'r


Taking the inverse transformation of both sides of this equation, we obtain

v@) - f,Y@- r) - lY(, - 2) = x(n) - 1,6 - 11

Example 8.6.2
Consider the system described by the difference equation
y(n) - 2y(n - t) + 2y(n - 2) = r(n) + |r(n - l)
We can find the transfer function of the system by Z-transforming both skles of this equa-
tion. With all initial conditions assumed to be zero, the use of Equation (8.4.15) gives
i
Y(z) - 2z-tY(z) + z-zYQ) = x(zl + lz-tx(z) :
so that

u,-,-Y(z)- t+ll-'
" \" xzl | - 27-r a 2r-z
-z L L-
'2t i
=-t
z2-22+2
The zeros of this system are atz = 0andz = - (l/2), while the poles are at z = I arl.
Since lhe poles are outside the unit circle, the system is unstable. Figure 8.6.1 shows the
location of the poles and zeros of H(z) in the z plane. The graph is called a pole-zero plot.
The impul5e response of the system found by writing H(z) as

Hd\ =
.:\1_;;t] ,.1 v _:;;_n
and using Table 8-2 is

h@) = ({i), co'(X,),", * I rrar.i,(1,),r"r


Sec. 8.6 Z-Transler Functions o, Causal Discrele-Time Systems +!, '

t.6.1 P,rlc-zero plo( I'or


tigure
Example 8.6.2.

E-n'nple t.03
Consider the system shown in Figure 8.6.2. in which
0.8tr(^:
a(z)=1.-s3y1r-0.5)
where K is a constant gain.
The transfcr function of thc system can be derived by noting that thc output o[ lhe sunt-
mer can be written as
E(z)=x(z)-Y(z)
so that the system output is

Y(z) = x1r161",
= Ix(z) _ y(z)lH(z)
Substituting for ll(z) and simplifying yields

,o = #i)tr) x(z) = r. _ o.rli.qg*) * o.sr, l'(z)


The transfer function for the feedback system is therefore

' \" :
,,., .Y(i)_ 0._8_Kz__
xQ) z2 + (0.8K - 1.3)t + o.o.l
The poles of the system can be determined as lhe rools of thc cquation

22 + (0.8K - 1.3)z + 0.04 = 0

[igure t.6.2 Ulock diagram of


control s1'ste m of Erample [1.6.1.
402 The Z-Translorm Chapter I

For K = l.lhe two roots are


zr = 0'l and z= = O.4

Since both roots are inside the unit circle. the system is stable' With K= 4. however.
the roots are
zr = 0.0213 and z: = 1.87875

Since one of the roots is now outside the unit circle. the system is unstable.

8.7 Z-TRANSFORM ANALYSIS


OF STATE-VARIABLE S MS

As we have seen in many of our discussions, the use of frequency-domain techniques


considerably simplifies the analysis of linear. time-invariant systems. In this section. we
consider thi Z-tiansform analysis of discrete-time systems that are represented by a
set of state equations of the form
v(n + 1) = Av(n) + bx(n), v(0) = v,, (8.7.1)

y(n)=cv(n)+dx(nl
As we will see, the use o[ Z-transforms is useful both in deriving state-variable repre-
sentations from the transfer function of the system and in obtaining the solution to the
slate equations.
In Chapter 6. starting from the difference-equation rePresentation. we-derived two
alternativi state-space rePresentations. Here, we start with the transfer'function rep'
resentation and dirive two more rePresentations. namely. the parallel and cascade
forms. In order to show how this can be done. let us consider a simple first-order sys'
tem described by the state-variable equations
u(n + l) : aa(nl + bx(n ) (8.7.2)

Y(n'1: a(n) + dr(n)


From these equations it follows that

v(zl = -L
z-a x(z)
Thus, the system can be represented by the block diagram of Figure 8.7.1. Note that as
far as rhe relation between Y(z) and X(e) is concerned. the gains D and c at the input
and output can be arbitrary as long as their product is equal to bc.
we use this block diagram and the corresponding equaiion. Equation (8.7'2). to
obtain the state-variable representation for a general system by writing ll(e) as a com-
bination of such blocks and associating a state variable with the output of each block.
As in continuous-time systems, if we use a Partial-fraction exPansion over the poles of
H (z), we get the parallel form of the state equations. whereas if we represent H(z) as
a cascadebf such blocks. we get the cascade representation. To obtain the two forms
Sec. 8.7 Z-Translorm Analysis ol State-Variable Systems 403

r,( rr * l)

Flgure &7.1 Block diagram of a first-order state-spacc system.

discussed in Chapter 6, we represent the system as a cascade of trvo blocks, with one
block consisting of all the poles and the other block all the zeros. II the poles are in the
first block and the zeros in the second block, we get the second canonical form. The
first canonical form also can be derived, by putting the zeros in thc first block and the
poles in the second block. However, this derivation is not very straightforward, since it
involves manipulating the first block to eliminate terms involving positive powers of z.

Esarrple 8.7.1
Consider the system with transfer function

H (z\ = _j:*-L : 3z+


z, +loz- I (.*lltz-ll
Expanding H(z) by partial fractions, we can write

]--
H(z)=---t
z+i-+ z-i
with the corresponding block-diagram representalion shown in Figure 8.7.2(a). By using
the state variables identified in the figure, we obtain the following set of equstions:

(,.l)n,ut = x(z)

k -i)'un = zx(z)
Y(z)=V,(z)+2Vzk)
The corresponding equations in the time domain are

o,(n+l)=-1t,(n)+r(n)

ur(n + t) =lurrn, + 2r(r)


404 ThEZ-Tlanslorm ChapterS

Vz(:l

v'2(:l X, (:)
Vtlzt. Ylzl

V t(zl
Y(z)

(c)

Figure 8.72 BIock-diagram representations for Example 8.7.1.

y0r)=o,(n)+zaz(n)
lI we use the block-diagram representation ofFig. 8.7.2(b), with the states as shown, we have

(. -'ot)r,,., = xr(z)

x,1zy = (rz .ta)n^u

(, *l)v,at = x1..1

Y(z) = vrk)
which. in the time domain, are cquivalent to

u'(n+l)=l''(')**'(n)

.r,(n) = 3a2@+ r1 + ar1n1


11

1rr(z + l) = -lor1n1 * r1n\


.v(n) = u, (n)
Eliminating r, (n ) and rearranging the equations yields
Sec. 8.7 Z-Translorm Analysis ol State-Variable Systems 405

l-3
u,(n + l) = A,',(") -;u,(x) + -l-r(,r)
I
r'r(n + l) = - lt,z@\ * x(n)

y(n) = u1(z)
To get the second canonical form, we use the block diagram of Figure 8.7.2(c) to get

(u .1,- l)r,,.,: r,.,


vp1 = (tz. J)n,r.l
By defining

zV,(z\ = V2k)
we can write

zv,(z) +
Ir^U -l r,(.) = *,.,
1
Y(z)=-ovt(z)+3Vr(z)

Thus, the corresponding siate equations are


?rr(n + l) = ?r'(n)

+
u2tu + t)=
ir,,r, - f,u,@'1
t1,t1

1
y(n)=iur\n)+3o2@)

As in the continuous-time case, in order lo avoid working with complex numbers, for sls-
tems with complex conjugate poles or zeros, we can combine conjugate pairs. The repre-
sentation for the resulting second-order term can then be obtained in either the first or the
second canonical form. As an example, for the second-order system described by

Y@ =
b++++i!l xo (8.7.3)
t+atz'+azz'
we can obtain the simulation diagram by writing

Y(z) = (bo t brz-t + bzz-z)V(z\ (8.7.4a)

where
I
v(zr) =
l+ qJ-'i 1-o ; X(z)

or equivalently,
V(z) = -o,r-tnk) - arz-zV(z) + X(t\ (E.7.4b)
406 The Z-Translorm Chapter I

.Y(il Y(:)

Flgure 8.73 Simulation diagram for a second-order discrete-time system.

we gencrate Y(i) as the sum ot X(z\, - arz-tV(z),and - a2z-2V (z) and form Y(z) as the
sum of bolz(z) snd bzz-zv(z) to get the simulation diagram shown in Figure 8.7.3.

Example 8.72
Consider the system with transfer function
l+2.52-t+z-2
H(2) =
(t + 0.52-r + 0.Ez-2)(1 + 0.32-r)
By treating this as the cascade combination of the two systems

H,(2) =
I + 0.52-r
H,(z)=i:#
1+0.52-r+0.82-2'
we can draw the simulation diagram using Figure 8.7.3, as shown in Figure 8.7.4'
Using the outpus of the delays as state variables, we get the following equations:
i (z) = zv,(z) = -O'lvlz) + xt(zl
X,(z)=V(z)+O5V2Q)
zVr(z) = v(z) = -g'5v,12)-o'8v3?) + X(z)
zVlz) = lt'171
y(z) = i(z) + ZV,(z)
Eliminating t/(z) and 7(z) and writing the equivalent time-domain equations yields
..i
(:, \ t:
no
C)
c
(E

IJ.I

o
E
EI)
((,
E
c
i!

E
a^

!F
d
c,

b!
lL

407
408 The Z-Transform Chapter I
t,(n + I) = -0.3r'r(n) - 0.9u.(n) +.r(r)
ur(n + 1) = -0.5u:(n) - 0.8u.(n) + x(n)
uj@ + 11 = It.(nl
y(n): 1.lu,(n) - 0.8u.(n) +.r(n)
Clearly, by using different combinations of first- and second-order sections, we can obtain
several different realizations of a given transfer function.

We now consider the frequency-domain solution of the state equations of Equation


(8.7.1), which we repeat for convenience.
v(n + 1) : Av(n ) + b.r(a), v(0) = vo (8.7.5a)
y(n): cv(n) + dx(n) (8.7.5b)
Z-transforming both sides of Equation (8.7.5a) yields
:[v(z) - vo] = AV(z) + bX(z) (8.7.6)
Solving for V(3), we get
v(z) = z(zl - A)-rv,, + (eI - A)-rbx(z) (E.7.7)
It follows from Equation (8.7.5b) that
Y(z) = cz(zt - A)-,'o + c(zl - A)-tbx(z) + dX(z) (8.7.8)
We can determine the transfer function of this sysrem by setting v(0) = g 1o t",

Y(z) = [c(zl - A)-'t + dlx(z) (8.7.9)


It follows that

H(z)=ii:l=c(zl-A)-'|b+d (8'7'10)

Recall from Equation (6.7.16) that the time-domain solution of the state equations is

v(n):oQr)vo* i*1r-
l-o
I -i)b:g) (8.7.11)

Z-transforming both sides of Equation (8.7.11) yields


V(z) = O(z)vo + z-'O(z)bx(z) (8.7.t2)
Comparing Equations (8.7.12) and (8.7.7). we obtain
.D(z) = z(al - A)-r (8.7.13)
or cquivalently,
o(n) = A" - Z"tlz(zl - A)-'l (8.7.14)
Equation (8.7.14) gives us an alternative method of determining A,,.
Sec. 8.7 Z-Transform Analysis of State-Variable Systems 4rl9

Example 8.7.3
Consider the system
zr,(n+l)=u,(n)
2,2(n + l) =
l r,,r, - la.(l; + rrrrl

v (n) = a, (n)
which was discussed in Examples 6.7.2,6.7.3, and 6.7.4. We find tltc unit-step rcsPonse (,i
this system for the case when v(0) = [l
- l]r. Since
A=
lo rl
L; -rl
it follows that

(zl

so we can write
- n';-'= |L-s
T
z,
ir :l
-2 ! .r-l
+ r rl
-_l
.-a z+')
O(z) = .1.1 - A)-r = z !
_! _ _ 6
I
t'l
,*l
.
- _1
14 . l_l
We therefore have

A.= oor) i(-])' 1{i)'- l( )'l


[3{ll.
L:u-:(-l)' l(i)'.i( )L
which is the same result that was obtained in Example 6.7.2. From Eciuation (8.7.7), for
the given initial condition,

v(3) = (zr - ,, '[-l] . ,.r - ,,-'[l] - -l


Multiplying terms and expanding in partial fractions yields
1- c- L't- n-1
v(r)=lI r-
l=.;;i-;-rl
'r..,'-' 'rr-o I
ca t8< tR4 |

so that
L;'-;.;-.-ll
[s 23t r
\' _ 22rl\"1
z) e l+/
',,,=[;;[;]]=l:.;)
-
|

[e rs\- )^ lt(t)"1
410 TheZ-Transfom ChapterB

To find the output, we note that

y(z) = [l , [l:[i] = vtz)


and
y@i = ar(n)
These are exactly the same alt the results in Example 6,7.3. Finally, we. have, from Equa-
tion (8.7.10),

r{(z) = r, ,r*j;_5[. ir :][l]


_1
(z+lxz_l)
!1
=3-3
z-l z+l
so that

3\4/ -1I'.-1)'-'.
fr(,)=1/1)'-' 3\ 2l n>t
Since lr(0) = 0, we can write the last equation as

,(,,=+(i) -1(-r", r>0


which is lhe result obtahed in Example 6.7.4:

8.8 RELATION BETWEEN THE Z-TRANSFORM


E TRAN

The relation between the Laplace transform and the Z-transform of the sequence of
samples obtained by sampling analog sigial x,(t) can easily be developed ftom our dis-
cussion of sampled sigrals in Chapter 7. There we saw that the outPut of the sampler
could be considered to be either the continuous-time signal

,,(r)= i x"(nT)6(t-nT) (r.8.1)


,tE-@

or the discrete-time siggal


x(n) = x.(nT) (8.8.2)

The [-aplace transformation of Equation (8.8.1) yields

X"(s) =,i (E.8.3)


-r"@r)exp[-nls]
Sec.8.9 Summary 411

If we make the substitution z = exp IIs], then


X,(s) l.= *ptnt= )
,le-,
x,(nT)z-' (8.8.4)

We recognize that the righrhand side of Equation (E.8.4) is the Z{ransform, X(z), of
the sequence x(n). Thus, the Z-transform can be viewed as lhe LaPlace transform of
the sampled function x,(t) with the change of variable
z = exp [Tsl (8.8.s)
Equation (8.8.5) defines a mapping of the s plane to the z plane. To determine the
nature of this mapping, Iet s = o + i(o, so that
e :
exp[oI]exp[iorI]
Since lzl = exp[oT], it is clear that if o < 0, lzl < l. Thus, any point in the left half
of the s plane is mapped into a point inside the unit circle in the z plane. Similarly'
since, foi o ) 0, we have lz | > 1, a point in the right half of the s plane is mapped into
a point ouside the unit circte in thez plane. For o = O, l.l :
l, so that the loaxis of
the s plane is mapped into the unit circle in the z plane. The origin in the s plane cor'
responds to the point z = 1.
Finally, let s* denote a set of points that are spaced vertically apart from any point
so by multiples of the sampling frequency ro, = 2r lT. That is,

so:so*jkto,, k = 0,-+7, !2,"'


Then we have
d* = explTs1,1 :ik-'' = exp [Isn] : 2,,
er(s"+

since exp[7*or,Tl = explj}kn]. That is, the points s1 all map into the same point
z6 = exp IIso] in the z plane. We can thus divide the s plane into horizontal strips, each
of width r,r,. Each of these strips is then mapped onto the entire z plane. For conve-
nience, we choose the strips to be symmetric about the horizontal axis. This is sum-
marized in Figure 8.8.1, which shows the mapping of the s plane into the z plane.
We have atready seen that X,(or) is periodic with period to,. Equivalently, X(O) is
periodic with period 2zr. This is easily seen to be a consequence of the result that the
process of sampling essentially divides the s plane into a set of identical horizontal
strips of width r,r,. The fact that the mapping from this plane to the z plane is not unique
(the same point in the z plane corresPonds to several points in the s plane) is a conse'
quence of the fact that we can associate any one of several analog signals with a given
set of sample values.

. The Z-transform is the discrete-time counterPart of the Laplace transform.


r The bitateral Z-transform of the discrete-time sequence x(n ) is defined as

x(z) = nd'r) x@)z-'


I
.1 .., )
i

,\
,1 l::.

+
o.
(l.,

t
o
(il
E
o
6
t:
o. o
0
!
q,)

(!
o.
a)

o
E
o
o
tr
tt,
CL

al al q,

3 3
3 o
3 I I I EO

o.
o
at,

.{
aa
c,

EA
k

412
Sec. 8.9 Summary 413

r The unilateral Z-tr

,r\.t _
ftn^r,,r"
The region of convergence (ROC) of the Z-transform consists o[ those values of i
for which the sum converges.
For causal sequences, the ROC in the z plane lies outside a circlc containing all the
poles of x(z). For anticausal signals, the Roc is inside the circle such that all poles
lrx(z) are external to this circle. If r(n) consists ofboth a causal and an anticausal
part,ihen the ROC is an annular region, such that the poles outside this region cor-
i"spond to the anticausal part of:(n), and the poles inside the annulus correspond
to the causal part.
The Z-transform of an anticausal sequence.r-(n) can be d(.'te Inlined tiom a table of
unilateral transforms as
X-(z) = Zlx-(-n)l
Expanding X(z) in partial fractions and identifying the inversc of each term from a
table of Zltransforms is the most convenient method for determining x(n). If only
the first few terms of the sequence are of interest, x(n) can he obtained by expand-
ing X(z) in a power series in r-r by a Proccss of long division'
The propcrttes of the Z-transform are similar to those of the Laplace transform.
Among ihc applications of the Z-transform are the solulion ot difference equations
and the evaluation of the convolution of trvo discrete sequcnces.
a The time-shift property of the Z-transform can be used to solve difference equations.
O Ify(n) represents the convolution of two discrete sequenccs r(n) and lz(n)' then
Y(z) = It(z)Xlz)
The transfer firnction H(z) of a systenl with input r(n). impulse response &(z). and
output y(n) is

It(z) = zlh(n)l: IE\


Simulation diagrams for discrete-time systems in the z domairl can be used to obtain
state-variablc iepresentati<,ns. 'fhe solutions to the state equations in the Z domain
are given by
v(z) = :(:I - A)-'rn + Ql - A)-rbx(z)
)'(:):cv(z)+dX(z)
The transfer function is givcn hY

H(z) : c(zl - A) 'b + d


The state- transition matrix can he obtained as

Q(rr) : A" = Z-t [:.(;I - ^l;-rl


414 The Z-Transform Chapter I
" The relation between the Laplace transform and the Z-transform of the sampled
analog signal x,(t) is

X(z) l.=".ptrd = X"(s)


" The transformation z = exp Ifs] represents a mapping from the s plane to the z
plane in which the left half of the s plane is mapped inside the unit circle in the z
plane, the y'o-axis is mapped into the unit circle, and the right half of the s plane is
mapped outside the unit circle. The mapping efffectively divides the s plane into
horizontal strips of width to,, each of which is mapped into the entire z plane.

8.10 CHECKLIST OF IMPORTANT TERMS


Bllateral Z-tanetorm Solutlon ol dlflerence equatons
Mapplng o, tho s plene lnto the z plane Stsls-translflon matrll
Partlal-tracUon erpanslon State-varlable representadone
Power*erleo expanslon Transter functlon
Reglon of conyergenco Unllateral Z-translorm
Slmulaflon dlagrams

8.11 PROBLEMS
&L Determine the Z-transforms and the regions of convergence for the fo[owing sequences:
(a) x(a) = (-3)'z(-n - l)
ror,t,r={i, ilft=s
z> o
(c) .r(r) I(JI
l:', a<o
\
(d) -r(a) :26(n) - 2;u(n)
82 The Z-transform of a sequence x(a) is
z3+4zt-u,
x(z) =
z'+lr'-1r*l
(a) Plot the l0cations of the poles and zeros of X(z).
(b) Identify the causal pole(s) if the ROC is (i) lzl < tiil lzl > z
I,
(c) Find.r(n) in both cases.
8J. Use the definition and the properties of the Z-transform to find X(z) for the following
causal sequences:
(a).r(z)=zansinOon
(b) r(z) = n2cos(htt
(c) :(n) ="(:)" +("-r)(1)'
Sec. 8.11 Problems 415

(d) r(n) = 6(n - 2) + nu(n)


(e) .t(z) = 2expl-nlr'"(;"r)
&4. Determine thc Z-transform of the sequenccs that result whcn thc following causal con-
tinuous-time signals are sampled uniformly every I seconds:
(e) .r(t) = ,cos 1000',,
(b) .t(r) = texP[-3(t - 1)]
tJ. Find the inverse of each of the following Z-transforms hy means of (i) power series expan-
sion and (ii) partial-fraction expansion. Use a mathematical software packagc to verify
your partial-fraclion expansions. Assume that all sequences are causal.
(a)
i - l--l
x(z):, _;l_t-j
, 44 ' g4
(b)
(z+l)(z+l)
x(z) =
(c)
iz-- 5(z - )
x(z) =
-l-
(z - j)'
(d)

_41+ ?t_
z2+42+3
&5. Flnd the inverse transform ot
X(z) = 16t11 - t.-',
by the following methods:
(a) Use the series expansion

rog(1 - a) = }i lol . r

(b) Differentiate X(z) and use the properties of the Z-transform.


&7. Use Z-transforms to find the convolution of the following causal scquences:
(a)

,,(,)= (;)., ,r,= {;: nffiJ,


(b)

,(,) = (l) , ,r,= {?: :=; : :


(c)

l(n) = [], -1,2, -1, l], .r(n) = 11.0..'1.31


416 The Z-Translorm Chaptsr I
t.& Find the step response of the system with transfer function

nat=:;i_;-!_,
' ' 6{ 6

&9. (a) Solve the following difference equations ushg Z-transforms:


(l) y(z)- j(n - l) + y(n - 2) = x(n)
y(-l) = l, y(-2) = o, x(nl = (11"u(n)
(ll) y(r) - lyb - r) - luyb - 2) = x(n) - j.r(1 - 1)
y(- l) = o, y(-2) = o, .t(z) = (l)"u(n )
(b) Verify your result using any mathematical software package.
&I0. Solve the difference equations of Problem 6.17 using Z-transforms.
&f L (a) Find the transfer functions of the systems in Prohlem 6.17, and plot the pole-zero loca-
tions in the e plane.
(b) What is the cor..:.ponding impulse response?
&12. (a) When input x(n) = u(n) + 1- f)'a(a) is applied to a linear, causal, time-invariant
system, the output is

y(,) = 6(- o(-


i)',t,- i)',t,r
Find the transfer function of the system.
O) What is the difference-equation representation of lhe system?
&[t. Find tlre transfer function of the system

i, (rr )

r(z) h 3(z) r'(z )

h2ul

::,',::, =':,;Ifl ,) + s(z _ 2)


/l\'
n,@ =
\r)u(n)
E.14. (a) Show that a simplified criterion for the polynomial F(z) = z2 + arz + a2 to have all
its poles within the unit circle in the : plane is

lrto)l . l, F(- r) > o, r(r) > o


(b) Use this criterion to find the values of K for which the system of Example 8.6.3 is stable.
E.Ui. The transfer function of a linear, causal, time-invariant system is

H(:) =
.-r. ,* - f1';i."1 1, - *r;
Sec. 8.1 1 Probl€ms 417

where K and o are constant. Find the range of values of K and a tor which the system is
stable, and plot this region in the K-o plane.
&16. Obtain a reali'zation of the tollowing transfer function as a combination of firsG an<t sec.
ond-order sections in (a) cascade and (b) parallel.

19!._lJ o.oo. '){r + r.z:-, + o.7J: 2)


H(z) =
i(i + 0.42-r + 0.82-:)(, - g)5;i_ g 12-5- i;
&17. (a) Find the state-transition matrix for the sysrcms of Problem 6.28, using the Z-trans-
form.
(b) Use the frequency-domain technique to find the unirstep response of these systems.
assuming lhat v(0) = 0.
(c) Find the transfer function from the state represenration.
(d) Verify your result using any mathemathical software package.
&l& Repeat Problem 8.17 for the state equations obtained in Problcm 6.26.
&19. A low-pass analog signal.r,(r ) with a bandwidth of 3 kHz is sampled ar an appropriate rate
to leld the discrete-time sequence.r(n I).
(e) What sampling rate should be chosen to avoid aliasing?
(b) For the sampling rate chosen in part (a), determine the primary and secondary strips
in the s plane.
&20. The signal .r,(l) = l0 cos 600zrt sin 24002r, is sampled al rates of (i) 800 tlz, (ii) l@ Ha
and (iii) 3200 Hz.
(e) Plot the frequencies present in r,(l ) as polcs at the appropriatr: krcations in the s plane.
(b) Determine the frequencies present in the sampled signal lor cach sampling rate. On
the s plane, indicate the primary and secondary strips for each case, and plot the fre-
quencies in the sampled signal as poles at appropriate locations.
(c) From your plots in part (b), can you determine which sampling rate yill enable an
error-free reconstruclion of the analog signal?
(d) Verify your answer in part (c) by plotting the spectra of the anakrg and sampled signals.
E2L In the text, we saw that the zcro-order represents an easily implcmcntable approximation
to the ideal reconstruction filter. However, the zero-order hold givcs only a staircase
approximation to the analog signal. In the first-order hold, rhe output in the interval
nT= t< (n+ l)Iisgivenby

y(t) = x"(nr) +t--{p"1,r) - +(n7'- r-\l

Find the transfer function Gr,(s) of the first order hold, and compare its frequbncy
resporuie wirh that of the ideal reconstruction filter matched to rhe ratc 7"
t2a As we saw in Chapter 4. filters are used to modify the frequencv content of signals in an
appropriate manner. A technique for designing digital lilters is hased on transforming an
analog filter into an equivalent digital lllter. In order to do so, rvc have to obtain a rela-
lion between the Laplace and Z-transform variables. In Section 8.S. $,e discussed one such
relation based on equaling the sample values of an analog signal rvith a discrete-time sig-
nal. The relation obtained was

r = exp [f.rl
4lg The Z_Transtorm Chapter g

We can obtain other such relations by using different equivalences. For example, by equat-
ing the s-domain transfer function of the derivative operaior and the Z-domain transfer
function of its backward-difference approximation, we can write
-l
s=-I -rz
or equivalently,

I
' '- iir
Similarly, equating the integral operator with the trapezoidal approximation (see problem
6.15) yields

2l - z-l
' TL*r-'
or
L+ (T/2ls
z=
7=@121s
(a) Derive the two alternative relations between the s and e planes just given.
(b) Discuss the mapping of the s plane into the z plane using the two relations.
Chapter 9

The Discrete
Fourier Transform

DUCTION
From our discussions so far, we see that transform techniques play a very useful role
in the analysis of linear. time-invariant systems. Among the many applications of these
techniques are the spectral analysis of signals, the solution of differential or difference
equations, and the analysis of systems in terms of a frequency response or transfer
function. With the tremendous increase in the use of digital hardware in recent years,
interest has centered upon transforms that are especially suited for machine computa-
tion, In this chapter we study one such transform, namely, the discrete Fourier trans-
form (DFT), which can be viewed as a logical extension of the Fourier transforms
discussed earlier.
In order to motivate our definition of the DFT, let us assume that we are interested
in frnding the Fourier transform of an analog sigral r,(t) using a digital computer. Since
such a computer can store and manipulate only a finite set of numbers, it is necessary
to represent r, (t) by a finite set of values. The first step in doing so is to sample the sig-
nal to obtain a discrete sequence.t,(n ). Because the analog signal may not be time lim-
ited, the next step is to obtain a finite set of samples of the discrete sequence by means
of truncation. Without toss of generality, we can assume that these samples are deEned
for n in the range [0, N - U. [.et us denote this finite sequence hy r(n), which we can
consider to be the product of the infrnite sequence x, (n ) and the window function
(t. 0<nsN-l
= (e.l.l)
't') to, otherwise
so that
x(tt) : x"(n)w(n) (e.1.2)

419
42O. The Discrete Fou.ier Transtorm Chapler g

Srnce we now have a discrete sequence, wb can take the discrete-trme Fourier trans-
form of the sequence as

N-l
x(o) r(n) exp[-jfla] (e.r.3)
n-O

This is still not in a form suitable for machine computation, since O is a continuous vari-
able taking values in [0, 2t]. The final step, therefore, is to evaluate X(O) at only a frnite
number of values Oo by a process of sampling uniformly in the range [0,22r]. We obtain

/V-l
:
X(or) ) r(r)exp[-lorz], k=0,1, ...,M - | (e.1.4)
, -0

where

ao=2ik (e.l.s)

The number of frequency samples. M, cao be any value, However, we choose it to be


the same as the number of time samples, N. With this modification, and writing X(f,!* )
as X(&), we finally have

x(k\ = (e.1.6)
5'",*rl-i'#^o)
An assumption that is implicit in our derivations is thatr(n) can take any value in the
range (-to, co)-that is, that.r(n) can be repre.sented to infinite precision. However,
the computer can use only a finite rvord-length representation. Thus, we quantize lhe
dynamic range of the signal to a finite number of levels. In many applications, the enor
that arises in representing an infinite-precision number by a finite word can be made
small, in comparison to the errors introduced by sampling, by a suitable choice of quan-
tization levels. We therefore assume that.r(n ) can assume any value in (-co, co).
Although Equation (9.1.6) can be considered to be an dpproximation to the contin-
uous-time Fourier transform of the signal x,(t), it defines the discrete Fourier trans-
form of the N-point sequence r (n). We will investigate the nature of this
approximation in Section 9.6, where we consider the spectral estimation of analog sig-
nals using the DFT. However, as we will see in subsequent sections, although the DFT
is similar to the discrete-time Fourier transform that we studied in Chapter 7, some of
its properties are quite different.
One of the reasons for the widespread use of the DFT and other discrete transforms
is the existence of algorithms for their fast and efficient computation on a computer.
For the DFT, these algorithms collectively go under the name of fast Fourier transform
(FFT) algorithms. We discuss two popular versions of the FFT in Section 9.5.
Sec. 9.2 The Discrete Fourier Translorm and lls lnvers€ 421

9,2 THE DISCRETE FOURIER TRANSFORM


AND ITS INVERSE
Letr(n),n = 0, 1.2,....N- l, be an N-point sequence. we define the discrete Fourier
transform of .r (n) as

x(r)=b,r,l*rl-t',J,0] o.z.t)

The inversc discrcte Fou riur-lransform (IDFT) relation is given by

.(,) =;5'rul.-o[,?#"0] o.2.2)

To derive this relation, we replace nby p in the right side of Equation (9.2.1) and mul-
tiply by exp [l2nrr*/N] to gct

xo "-pL'?#
,r] : 5''or *o[iTor" - or] p.2.3)

If we now sum over /< in the range [0, N - l], we obtair

5 ,ror "* L, T ,-] = Fj 5' ,rpr *, [, ? A(,, - p)] e.2.4)

In Equation (7.2.12) we sarv that

P*
*t[,,# /'(' - P)] =
{}, ::i
so that the right-hand side of Equation (9.2.4) evaluates to Nx (n), and Equation
(9.2.2) follows.
We saw that .t (0) is periodic in O with period 2rr; thus, X(O* ) = X(Oo + 2c). This
can be written as

.Y(/i) = x(ok) =x(o* + z"\ = x(z;(k + rv)) = x(A - N) (e.2.s)

That is, .Y(k) is periodic with period N.


We norv show that.r(n), as detcrmined from Equation (9.2.2). is also periodic with
period N. From that equation, rve have

r(,r + N) : 5' ,,or *, ,f (n + N)r]


[,

=
^r-",0,"'o[r?*]
= .r(n) (9.2.6)
The Discrete Fourier Translorm Chapter 9

That is, the IDFT operation yields a periodic sequence, of which only the first N val-
ues, coresponding to one period, are evaluated. Hence, in all operations involving the
DFT and the IDFT, we are effectively replacing the finite sequence x(n) by its peri-
odic extension. We can therefore expect that there is a connection between the
Fourier-series expansion of periodic discrete+ime sequences that we discussed in
Chapter 7 and the DFT. In fact. a comparison of Equations (9.2.1) and (9.2.2) with
Equations (7.2.15) and (7.2.16) shows that the DFT X(t) of finite sequence .r(n ) can
be interpreled as the coefficient ao in the Fourier series representation of its periodic
extension ro(n ). multiplied by the period N. (The two can be made identical by includ-
ing the factor l/N with the DFT rather than with the IDFI.)

9.3 PROPERTIES OF THE DFT


We now consider some of the more important properties of the DFT. As might be
expected, they closely parallel properties of the discrete-time Fourier transform. In
considering the properties of the DFT, it is helpful to remember that we are in essence
replacing an N-point sequence by its periodic extension. Thus, operations such as time
shifting must be considered to be operations on a periodic sequence. However, we are
interested only in the range [0, N - 1], so that the shift can be interpreted ari a cLcu-
lar shift, as explained in Section 6.4.
Since the DFT is evaluated at frequencies in lhe range [0,2n], which are spaced
aparl by 2t / N, in considering the DFT of two signals simultaneously, the ftequencies
corresponding to the DFT must be the same for any operation to be meaningful. This
means that the length of the sequences considered must be the same. If this is not the
case, it is usual to augment the signals by an appropriate number of zeros, so that all
the signals considered are of the same length. (Since it is assumed that the signals are
of finite length, adding zeros does not change the essential nature of the sipal.)

9.8.1 Ltnearity
Let Xr(k) and Xr(k) be the DFTs of the two sequences .r, (n) and rr(n ). Then
DFT[a,rr(n) + anxr(n)l= arXrlk) + arXr(k) (e.3.1)
for any constants a, and ar.

9.82 Time Shifting


For any real integer zo,

DFT[x(z + ro)] : 5'rtr,+ rrl*n[-lf;u]


= ),r-).*n[ -i2fl rw -
^1f
=:*[,l]*"]'ttl (e.3.2)

where, as explained before. the shift is a circular shift.


S€o. 9.3 Properlios ot the DFT 425

0.8.3 Alter:nativelnversionFormula
By writing the IDFT formula, Equation (9.2.2\, as

'(') = i [; x-o'-n[-if; ,,t]].


-- f torrtx.t*)ll- (e.3.3)

we can interpret x(n) as the complex conjugate of the DFT of X* (k ) multiplied by 1/N.
Thus, the same algorithm used to calculate the DFT can be used to evaluate the IDFT.

0.9.4 Time Convolutiou


We saw in our earlier discussions of different transforms that the inverse transform
of the product of two transforms corresponds to a convolution of the corresponding
time functions. With this in view, let us determine IDFT of the function Y(k) =
H(k)X(k). We have
y(n) = IDFrlr(k)l

=ieY(k)*P1-,2i;'e]

=
* P* H(k)x(k)'-o [, ?'o]
Using the definition of II(k), we get

: 5' (5 *rl- 2$,r]


v(,) * ot.r izfi *ol)xttr.-nfi
Interchanging the order of summation and using Equation (9.2.2), we obtain

yf"l = 5' h@)r(n -


moO
m) (e.3.4)

A comparison with Equation (6,4.1) shows that the right-hand side of Equation (93.a)
corresponds to the periodic convolution of the two sequences.r(n) and i(n).

Example 03.1
Consider the periodic convolution y(n) of the two sequences

ft(r) = ll,3, -1, -2J' and .r(n) = (1,2,0, - I I

Here N = 4, so that exp [i(zr /N)l = i. By using Equation (9.2.1), we can calculate the
DFTs of the two sequences as

fl(o) = 1,161 + ft(l) + h(2\ + h(3) = |


Tho Discrete Fourler Transform Chapier 9

H(r1 = 1,p1 * ng * rlzt exp[-lrr] * ,1rr.*of-r 3f)=, - tt


"*pl-iil
H(2) = 111q + fi(l)exp[-izrl+ h(2)expl- j2rl + ](3)exp[-i3zr] = -l
II(3) = 1 161 +t rr I *p -i
[ T) * r rrtexp [-l3nl + nplexpl-i \f = z * is

and
x(o) = r(o) +.r(1) + x(2) + x(3) = 2

x(1) = x(0) + rlrl exp[-7 * ,,r, exp[-jrr] +.1111"*ef-1f] = , - ,,


]]
X(2) = 1161 + .r(l) exp[-jzr] + x(2) exp[-i2rr] + x(3) exp[-i3r] = g
x(3) = 316y +,1r1exp[-;]] + r(z)expt-1r,,1+ r1r;exp[-i\f =r * t1

so that
v(0)=H(0)x(0)=2
Y(t) = x11717(-1) : -13 - ill
Y(2)= H(2)x(2)=0
v(3) = H(3)x(3) = -13 +111
We can now use Equation (9.2.2) to frnd.v(a) as
y(o) = l[Y(0) + v111 + Y(2) + y(3)] = -6
ylry = l[r1oy + r(r)exe[i;] + rlzpxplirr - ror-nli]]]= o

y(2) = 1(l'(0) + v(l) explirrl + Y(z)expU?d + Y(3)exp[i3r]) = 7

rrrl = |(rtol + r(r)expljT]* y(3)exp[rT] = -t


",rn*o,i3,,,t+
which is the same answer as was obtt ined in Example 6.4.1.

9.8.6 Relation to the Diecrete-Time Fourier


and Z-Ilansfortre

From Equation (9.2.1), we note that the DFT of an N-point sequence x(n ) can be written as

x(e) = 5 r<rl
z-0
exp[-ion]1,1-,p1 (9.3.s)

= x(O)lo=ilr
That is, the DFT of the sequence r(n) is its discrete-time Fourier transform X(O) eval-
uated at N equally spaced points in the range [0, 2n).
Sec. 9.3 Properties ol lhe DFT 425

For a sequence for which both the discrete-time Fourier transform and the Z-trans
form exist. it follows from Equation (8.3.7) that
x(k) = x(z)l:-erptirzorrrrt (9.3.6)

so that the DFT is the Z-transform evaluated at Nequally spaced points along the unit
circle in the e plane.

04.8 Matrix Interpretation of the Df"T

We can express the DFT relation of Equation (9.2.1) compactly as a matrix oPeration
t
on the data vector = [:(0)x(l)...r(N l)]r. For convenience, let us denote
-
expl-ilr/Nl by W,y. We can then write
x(k)=)x(n)wtfi ft:0,1,...,N-l (e3.7)

-ww w
Let W be the matrix whose (t, n)th element [W]0, is equal to Wf . That is,
w
W w'n w'r wN-l
W=l (e.3.8)

-WN W#-' W2(N-r) .


yg-'irn 'r-
It then follows that the transform vector X = [X(0)X(1). .X(N- 1)]7 can be
obtained as
X=YYx (e.3.e)

The matrix W is usually referred to as the DFT matrix. Clearly. [W]u = [W]*, so that
W is symmetric (W = Wr).
From Equation (9.2.2),we can write

,(r) = i.>*, @)W" (e.3.10)

Since W;r = Wi, where * represents the complex conjugate, it follows that the IDFT
relation can be written in matrix form as

(e.3.11)
' = fiw-x
Solving for x from Equation (9.3.9) gives
x -- W-rX (e.3.t2)

It therefore follows that

tn-'= (e.3.r3)
**-
426 The Discr€te Fourier Transrorm Chapter g

or equivalently,
WXW = NIN (e.3.14)

rvhere I," is the identity matrix of dimension N x N. Since W is a symmetric matrix, we


can write Equation (9.3.14) as

w'$rw : NI/v , (9.3.1si


In general, a matrix A that satisfies A*rA = Iiscalled a unitary matrix. A reat matrix
A that satisfies ArA = I is said to be an orthogonal marrfu. The matrix lY, as defined
in Equation (9.3.8), is not strictly a unitary matrix, as can be seen from Equation
(9.3.15), However, it was pointed out in Section 9.2 that the factor l/N could bp used
with either the DFT or the IDFT relation. Thus, if we associare a factor l/VN with
both the DFT and IDFT relations and let Wn = I /VN exp [- j2r / Nl in the definition
of the matrix W in Equation (9.3.8), it follows that
X=Wx and x=W*X (e.3.16)

with W being a unitary matrix, The DFT is therefore a unilary trawlormi often. how-
ever, it is simply referred to as an orthogonal transform.
Other useful orthogonal transforms can be defined by replacing the DFT matrix in
Equation (9.3.8) by other unitary or orthogonal matrices. Examples are the Walsh-
Hadamard transform and the discrete cosine transform, which have applications in
areas such as speech and image processing. As with the DFT, the utility of these trans-
forms arises from the existence of fast and efficient algorithms for their computation.

9.4 LINEAR CONVOLUTION USING THE DFT


We have seen that one the primary uses of transforms is in the analysis of linear time-
invariant systems. For a linear, discrete-time-system with impulse response t(z), the
output due to any input -r(n) is given by the linear convolution of the two sequences.
However, the product H (k) X (k) of the two DFTs corresponds to a periodb convolu-
tion of &(r) and r(n ). A question that uaturally arises is whether the DFT can be used
to perform a linear convolution. In order to answer this question, let us assume that
h(z) and r(z) are of length M and N, respectively, so that /r(n ) is zero outside the range
10, M - 11, and -r(n) is zero outside the range [0, N -
U. We atso assume that M < N.
Recall that in Chapter 6 we defined the periodic convolution of two finiteJengh
sequences of equal length as the convolution of their periodic extensions. For the two
sequences &(n) and x(n) considered here, we can zero-pad both to the length
K > Max(M, N), to form the augmented sequences /r,(n) and r,(n), respectively. We
can now define the K-point periodic convolution, lr@), of the sequences as the con-
volution of their periodic extensions. We note that while )p(r) is a K-point sequence,
the linear convolution of h(z) and.r(n), yln), has length L = M + N - l.
As Example 6.4.2 shows, for K s L, yo@) corresponds to the sequence obtained by
-
adding in, or'time-aliasing, the last L K values of y,(n) to the first L K poins. Thus,
-
the first L - .K points of yr(n) will not correspond to.v1(a), while the remaining 2K L
-
Sec. 9.4 Linear Convotution Using the DFT 427

points witl be the same in both sequences. Clearly, if we choose K = L, yo@) and y,(n)
rvill be identical.
Most available routines for the efficient comPutation of thc DFf assume that the
length ot the sequence is a power of 2. In that case, K is chosen as the smallest power
of 2 that is larger than L. When K > L, the first L points of .r'r(rr) will be identical to
y1(n ), while the remaining K - L values will be zero.
We will now show that the K-point periodic convolution of h (n) and x(n) is identi-
cal to the linear convolution of the two functions it K = L. We note that

y,@)= ) h(m)x(n-m) (s.4.1)


,nE -!

Now, ft(m) is zero for m el0,M - 1], and.r(n - rn) is zero for (n -m) e [0,N - l]'
so that we have the following.
O<n=M-l:
n
yr(n)= ) h(m)x(n-m)
= Ia,r,r, + h(t)x(n- 1) + "'+ /r(n).t(0)
M=n=N-l:
n

m=n-M+l
= h(n - M + 7)x(M - l\ + h(n - M + Z)x(M - 2)
+ '..+ h(n)x(0)
N=n=M+N-2:

-n-M+l
=h(n-M+t)x(M-t)
+.'.+/,(N+ l).t(n-N+ l) (9.4.2)

On the other hand,


&-l
) h,(nt)x,(n- m)
to?r\: m=O (9.4'3)

Since the sequence xo(n - m) is obtained by circularly shifting.t,(n), it follows that

[r,(r-m+K), n+7=m=K-1
so that

lo@) = h"(o)x.(n) + "' + h"(n)x,(o) + h,(n + I ) r,,(K - l)


+ h,(n + 2)x,(K - 2) + "'+ h,(K - l)t,,(n + 7) (9'4'5)
425 The Discrete Fouri€r Transform. Chapter 9

Now, if we use the fact that

h"(n) = Ino), O=n<lvl-l


lo. othenvise

xo(n) =
(xtu\, 0=n<N-l (e.4.6)
io. otherwise
we can easily verify thatyr(n) is exactly the same asy,(n) for 0 < z <N+M - Z and
iszerofor N + M - I = n< K - l.
In sum, in order to use the DFT to perform the linear convotution of the M-point
sequence h(n) and the N-point sequence r(z), we augment both sequences with zeros
to form the K-point sequences ho(n) and x,(n), with K > M + IV - 1. We determine
the product of the corresponding DFTs, H,(k) and X,(t). Then
yr(n) = IDF'I IH,(k) X,(k)l (e.4.7)

9.5 FAST FOUR]ER TRANSFORMS


As indicated earlier, one of the main reasons for the popularity of the DFT is the exis-
tence of effrcient algorithms for its computation. Recall that the DFT of the sequence
x(z) is given by

x (k) = 5',t,)*p [ -,'# *], k= 0,r,2,...,N- I (9.5.1)

For convenience, let us denote exp[-l2r/Nlby [7,", so that


/Y- I
x(k) =
,-0
) x@)wff , k= 0,r,2,...,N- I (e.s.2)

which can be explicitly written as

x(k) = r(0) wfl + x(t)wf, + x(z)wff


+ "'+ r(N - 1)W$-tr* (e.s.3)
It follows from Equation (9.5.3) that the determination of each X(k) requires N com-
plex multiplications and iv complex additions, where we have also included trivial mul-
tiplications by +1 or tj io the count, in order to have a simple method for comparing
the computational complexity of different algorithms. Since we have to evaluate X(&)
for k = 0, 1, ..., N 1, it follows that the direct determination of the DFI requires N2
-
complex multiplications and N2 complex additions, which can become prohibitive for
large values of N. Thus, procedures that reduce the computational burden are of con-
siderable interest. These procedures are known as fast Fourier-transform (FFT) algo-
rithms. The basic idea in all of the approaches is to divide the given sequence into
subsequences of smaller length. we then combine these smaller DFfs suitably to
obtain the DFT of the original sequence.
In this section, we derive two versions of the FFT algorithm, assuming that the data
length N is a power of2, so that N is of the form N = 2P, where Pisa positive integer.
Sec. 9.5 Fast Fourier Transrorms

That is, N = 2,4,8. 16,32, etc. Accordingly, the algorithms are referred to as radix-2
algorithms.

9.6.1 The Decimation.in.TineAlgorithm


In the decimation-in-timd (DIT) algorithm, we divide x(n) into two subsequenoes, each
of length N/2, b,l grouping the even-indexed samples and the odd-indexed samples
together. We can then rvrite X (kl in Equation (9.5.3) as

x(t)= 2r@)W +)r(n)ffi (e.s.4)


ncdd

Letting r = 2r in thc first sum and n : 2r + | in lhe second sum, we can write
,v/2- I N/2-l
,t'(t)= ) xQr1w'z;k+ ) xQr+l)lYf'tttt
: N/2-l
) g(')wff*+yyi
Nlz-l
> helw2ik (e.5.s)

where g(r) = x(2r) n1r1"=x(2r + 1).


^rA
Note that for an N/2-point sequence y(n), the DFI is given by
N/2- |

Y(t) = ,,-0
> yln)wiirz

:r'f' ,rnr*'r* (e.s.6)


n=ll
where the last step follows from the relation

w,,,, :.*p -' ?in) =(*o[-i'#])' = ** (e.s.7)


f
Thus, Equation (9.5.5) can be written as
x(k) = G(k) + wfrH(k), t = 0,1,...,N - I (e5.E)
where G(& ) and H (k) denote the N /Z-point DFTs of the sequences g(r) and h(r),
respectively. Since G(k) and H(ft) are periodlc with period N/2,we can write Equa-
tion (9.5.8) as

x(k)=G(t)+ wfrH(k), k=0,1,....]-,

.r(t * *) : c&) + wfi+Ni211171 (e.s.e)


\ 2l
The steps involved in determining X(k) can be illustrated by drarving a signal-flow
graph conesponding to Equation (9.5.9). In the graph, we associate a zode with each
signal. The interrelalions bctween the nodes are indicated by drawing appropriate lines
430 The Dlscreto FouderTranslorm Chaptor g

(branches) with arrows pointing in the direction of the signal flow. Hence, each brranch
has an input sigral and an output signal. We associate a weight with each branch that
determines the transmittance between the input and output signals. When not indi-
cated on the graph, the transmittance of any branch is ass,'med to be 1. The sigpal at
any node is the sum of the outPuts of all the branches entering the node. These con-
cepts are illustrated in Figure 9.5.1, which shows the sigpal-flow graph for the compu-
tations involved in Equation (9.5.9) for a particular value of &. Figure 9.5.2 shows the
signal-flow graph for computing X(,t) for an eight-point sequence. As can be seen from
the graph, to determine X(k), we first compute the two four-point DFTs G(&) and
H(lc)of thesequencess(r) = [(0),x(2),.r(a),x(6)land/l(r) = [.r(1),r(3),r(5),x(7)]
and combine them appropriately.
We can determine the number of computations required to find X(/c) using this pro-
cedure. Each of the two DFTs requires (N/2)2 complex multiplications and (N/2)?

G(kl x(kl

H(k) * N\ ngure 9S.f Sigpal-flov graph for


hrfr+Nt2 '(* 1) Equation (9.5.9)

x(0)

x(l)

r(4) x(2)

x(6) x(3)
I

x@\

x(5)
4 -ooint
2'
DFT
x(6)

: (7) x(7t

Flgne 952 Flow graph for first stage of DIT algorithm for N = E.
Sec. 9.5 Fasl Fourler Transforms 431

complex additions. Combining the two DFTs requires N complex multiplications and
N complex additions. Thus, the computation ot X(k) using Equation (9.5.8) requires
N + N2/2 complex additions and multiplications, compared to A'r complex multiplica-
tions and additions for direct computation.
Since N/2 is also even, we can consider using the same proceclure for determining
the N/2-point DFTs G(k) and H(k) by first determining the N/4-point DFTs of
appropriately chosen sequences and combining them. For N = 8. this involves divid-
ing the sequence g(r) into the two sequences lr(0), r(a)) and {r(2). r(6)l and the
sequence /r (r) into lr(1), x(5)| and (.r(3),.r(7)|. The resulting computarions for find-
ing G(/<) and H(&) are illustrated in Figure 9.5.3.
Clearly, this procedure can be continued by further subdividing the subsequences
until we get a set of two-point sequences. Figure 9.5.4 illustrates the computation of
:
the DFT of a two-point sequence y(n ) (y(O), y(t)1. The complere flow graph for the
computation of an eight-point DFT is shown in Figure 9.5.5.
A careful examination of the flow graph in the latter figure leads to several obser-
vations. First, the number of stages in the graph is 3, which equals logr8. In general,

r(0) c (0)

.r (4) c(l)
l,

x(2) QQI

r (6) 6(3)

(a)

x(l) H (O)

lr"9r:
r (3) H(l')
I

.r(5) H (2)

r(7) H (3t

(b)

Figure 953 Florv graph for compuration of four-poinr DFT.


432 The Dlscrete FouriErffanslom Chapter g

v(0)

Y(l) ngrre9S.4 Flowgraphfor


Wz= -l computation of two-poitrt DFT.

.r(0) x(o)

r(4) .r0)

x(2) x(21

r(6) x(31

x(l)

:(5) x(5)

:(3) x(6)

.r(7) x(7t
wfi wfr wlt

Flgure 955 C.omplete flow graph for computation of the DFT tor N = E,

the number of stages ii equal to log2N. Second, each stage of the computatiou requires
eight complex multiplications and additions. For general N, we require N complex mul-
tiplications and additions, leading to a total of N logrN operations.
The ordering of the input to the flow graph, which is 0,4, 2, 6, 1, 5, 3,7, is deter-
mined by bit revershg the natural numberc 0, 1,2,3,4,5,6,7. To obtain the bit-
reyersed order, we revenie the bits in the binary representation of the numbers in their
natural order and obtain their decimal equivalents, as illustrated in Table 9-1.
Finally, the procedure permits in-place computation; that is, the results of the com-
putations at any stage can be stored in the same locations hs those of the hput to that
stage. To illustrate this, let us consider the computation of X(0) and X(4). Both of
these computations require the quantities G(0) and II(0) as inputs. Since G(0) and
I1(0) are nbt required for determining any other value of X(&), once X(0) and X( )
have been determined, they can be stored in the same locatinns as G(0) and II(0). Sim-
Sec. 9.5 Fast Fourier Transforms 43ri]

TABLE 91
Blt-rove?sed ordsr tor lY = 8

Iroclmal Elnary BltRever8od Dealmal


Number Reprosentatlon Reprosentatlon Equlvalent

000 m0 o
001 lm 4
010 010 2
0ll ll0 6
100 001 I
101 101 5
il0 011 3
lll 111 7

ilarly, the locations of G(1) and H(1) can be used to stsre X(l) and X(5), and so on.
Thus, only 2N storage locations are needed to complete the computatioos.

0.6.2 The Decination-in-Frequenoy Algorithn


The decimation-in-frequency (DIF) algortthtt, is obtelned e;sentially by dividing the
outPut sequence X(&), rather than input sequence.r(lc); lnto smaller subsequences. To
derive this algorithm, we group the first N/2 points and the last N/2 poins of the
sequence r(n) together and write

x(k)=
'X | ,(n)rv# + \
lNlzl - ,t-.1
*(fiwff
E0 n- N/2

=r2' *rrr* + wy'o''[' ,(,.I)*f (e.s.1o)

A comparison with Equation (9.5.6) shows that evetr though the two sums in the right
side of Equation (9.5.10) are taken over N/2 values of n, they do no.t represent DFTs.
WecancombinethetwotermsinEqristlun(9.5.10)bynotingthatWff/z a (-1)r'toget

x(k) =',j*' [,., + (- r)tx(z - l)]*,f (e.s.1l)

Let

g(n) = x(n)- r(r . #)


and

h(n) = -,(^ . #)]*u (e.5.12)


[,r,
where0<n<(N/2)-1.
4U The Discrete Fourier Transrorm Chapter 9

For k even. we can set k = 2r and write'Equation (9.5.11) as


(,\/2,-t (N/2)-l
(e.s.13)

Similarly, setting k = 2r +I gives the expression for odd values of &:

x(Zr + r) ='"3-' hlnywr;, ='"3-' h(n)wi{,, (9.5.14)

Equations (9.5.13) and (9.5.14) represent the (N/2)-point DFTs of the sequences G(t)
and H(k). respectively. Thus, the computation of X(&) involves first forming the
sequences g(n) and lr(n) and then computing their DFTs to obtain the even and odd
values of X(t ). This is illustrated in Figure 9.5.6 for the case where N :
8. From the
figure, we see that c(0) = x(0), c(1) = x(2), G(2) = x(4), G(3) = x(6), H(0) =
x(1), H(r) = X(3), H(2) = x(5), and H(3) = v171.
We can proceed to determine the two (N/2)-point DFTs G(t) and H(&) by com-
puting the even and odd values separately using a similar procedure. That is, we form
the sequences

sln) =r(r). s(, . #)

x(0)

x t2l

x(4r

.t(3) x(6)

,\'(4) x(r)

r(5) x(3\

-r (6) x(s)

x(7) x(7\

Flgure 95.6 Firsi stage of DIF graph.


Sec. 9.5 Fasl Fourier Transforms 435

Bz@) = - s(^ * l)fwp,, (e._s. rs)


[et"r
and

N\
h,(n) = n@) + h(n +
4)
N
hzb) =lnat - n(,.t!)fwn,,
4
(e.s.t6)

Then the (N/4)-point DFTs, G, (/<), G2(k) and Ht(k), H2&\ correspond to the even
and odd values of G(t) and H(k), respectively, as shown in Figure 9.5.? for N = 8.
We can continue this procedure until we have a set of two-point sequenoes. which,
as can be seen from Figure 9.5.4, are implemented by adding and subtracting the input
values. Figure 9.5.8 shows the complete flow graph for the computation of an eight-

8(0) Glo) . ,r(0)

8(l ) Gl2l = X14,

sQ) 6( l). X(l)


;r wfrrz= wfl

8(3) Gt-t1 - 116,

n(0) ttit= xrrl


,r )
,'(l) ,l(l) = .tr|5,

h(2't llll)- Xt3l


;l v,!rt= w.tl
/r'(l)
h(3) ll(31= X (71

(b)

Flgure 9J.7 Flow graph for the A//4-point DFTs of G(& ) and H(k),
N=8.
436 The Discrete Fourler Translorm Chapier 9

.r(0) x(0)

.t(l) x(41

r (1, x(2)

.t(3) x(61

,r(4) x0)

r(5) x(5)

r(0) x(3)

r(71 x (7)
-l -l -I
Flgure 95.t Complele flow graph for DIF algorithm, N = 8.

point DFT. As can be seen from the figure, the input in this case is in its natural order,
and the output is in bit-reversed order. However, the other observations made in ref-
erence to the DIT algorithm, such as the number of comPutations, and the in-place
nature of lhe computations apply to the DIF algorithm also. We can modify the sigpal-
flow graph of Figure 9.5.7 to get a DIF algorithm in which the input is in scrambled
(bit-reversed) order and the output is in the natural order. We can also obtain a DIT
algorithm for which the input is in the natural order. In both cases, we can modify the
graphs to give an algorithm in which both the input and the ouput are in their natural
order. However, in this case, the in-place ProPerty of the algorithm will no longer hold.
Finally, as noted earlier (see Equation (9.3.3)), the FFT algorithm can be used to find
the IDFT in an efficient manner.

.$ SPECTHAL ESTIMATION OF ANALOG


SIGNALS USING THE DFT
The DFT represents a transformation of a finite-length discrete+ime signal r(n) into
the frequency domain and is similar to the other frequency-domain transforms that we
have discussed. with some significant differences. As we have seen, however, for ana-
log signals r,(r), we can consider the DFT as an approximation to the continuous-time
Fourier transform X"(r,r). It is therefore of interest to study how closely the DFT
approximates the true spectrum of the signal.
Sec. 9.6 Spectral Estimation of Analog Signals Using the DFT

As noted earlier, the first step in obtaining the DFT of signal .r,(r) is to convert it
into a discrete-time signal r"(r) by sampling at a uniform rate. The process of sampling,
as we saw, can be modeled by multiplying the signal .r"(l) by the impulse train

pr(t)= j r1r-nr1
nE -c

so that we have

r"O = x,(t)pr(r) (e.6.1)

The corresponding Fourier transform is obtained from Equation (1 .5.12):

x,(.) = ts X o(a + mott) (e.6.2)

These steps and the others involved in obtaining the DFT of the signal r,(r) are illus-
trated in Figure 9.6.1. The figures on the left correspond to the time functions, and the
figures on the right correspond to their Fourier transforms. Figure 9.6.1(a) shows a typ-
ical analog signal that is multiplied by the impulse sequence shown in Figure 9.6.1(b)
to leld the sampled signal of Figure 9.6.1(c). The Fourier transform of the impulse
sequeucepr(t), also shown in Figure 9.6.1(b), is a sequence of impulses ofstrength 1/I
in the frequency domain, with spacing o,. The spectrum of the sampled signal is the
convolution of the transform-domain functions in Figures 9.6.1(a) and 9.6.1(b) and is
thus an aliased version of the spectrum of the analog signal, as shorvn in Figure 9.6,1(c).
Thus, the spectrum of the sampled signal is a periodic repetition, with period o", of the
spectrum of the analog signal .r,(l).
If the signal x,(t) is band-limited, we can avoid aliasing errors by sampling at a rate
that is above the Nyquist rate. If the signal is not band limited, aliasing effects cannot
be avoided. They can, however, be minimized by choosing the sampling rate to be the
maximum feasible. In many applications, it is usual to low-pass filter the analog signal
prior to sampling in order to minimize aliasing errors.
The second step in the procedure is to truncate the sampled signal by multiplying
by the window function o(l). The length of the data window Io is related to the num-
ber of data points N and sampling interval by I
To: NT (e.6.3)

Figure 9.5.1(d) shows the rectangular window function

-I='"'- !'
*-(,) =f'' otherwise
(e.6.4)

[0,
The shift of T/2 from the origin is introduced in order to avoid having 61n12 samples at
points of discontinuity of the window function. The Fourier transform is

wx(o) = ,o'E{*ur[-,,rrPl (e.6.s)


@
(a,

wp (l) I wRk,J) I

^Tt
2 (d)

I X,(o) o Ws(ul I

@
(e)

@
(f) al z,

@
(s)

Figure 9.6.1 Discrete Fourier transform of an analog signal. (Adapted


witl: permission from E. Oran Brigham, The Fouriet Transform, Prentice-
Hall. 1987.)
'r38
S€c. 9.6 Sp€ctral Estimation ol Analog Signals Using ths DFT /t39

and Figure 9.6.1(e) shows the truncated sampled function. The corresponding Fourier
transform is obtained as the convolution of the two transforms X,(o) and Xr(to)- The
effect of this convolution is to introduce a ripple into the sPectrum.
The finat step is to sample the spectrum at equally spaced points in the frequency
(
domain, Since the number of frequency points in the range 0 < (l, to, is equal to the
number of data points N, the spacing between frequency samples is to,/N, or equiva-
lenlly,2n/Tn, as cdn be seen by using Equation (9.6.3). Just as we assumed that the
sampted sigral in the time domain could be modeled as the modulation (multiplica-
tion) of the analog signal x,(t) by the impulse train pr(t), the sampling operation in the
frequency domain can be modeled as the multiplication of the transform
X"(.) * Wr(r) by the impulse train in the frequency domain:

Pr"ki ='; -,,?) (e.6.6)


"2_r(,
Note that the inverse transform of p7 (r,r) is also an impulse train, as shown in Fig-
ure 9.5.1(f):

pr"(t)= i
m- -r
uA -mTo) (e.6.7)

Since multiplication in the frequency domain corresponds to convolution in the time


domain, the sampling operation in the frequency domain yields thc convolution of the
signal .r,(t)arr(t) and the impulse train p, (t). The result, as shown in Figure 9.6.1(9)'
isltre pitoaG exrension of the signal x,(r)roro), with period il,. This result also fol-
lows from the symmetry between time-domain and frequency-domain operations, from
which it can be expected that sampling in the frequency domain causes aliasing io the
time domain. This is a restatement of our earlier conclusion that in operations involv-
ing the DFT, the original sequence is replaced by its periodic extension.
As can be seen from Figure 9.6.1, for a general analog signal .r,,(t)' the sPectrum as
obtained by the DFT is somewhat different from the true spectrum X"(ro). There are
two principal sources of error introduced in the process of determining the DFT of
-r, (). The frrst, of course, is the aliasing error introduced by sampling. As discussed
earlier, we can reduce aliasing errors either by increasing the sampling rate or by pre-
filteriog the signal to eliminate its high-frequency comPonents.
The second source of error is the windowing operation, which is equivalent to con-
volving the spectrum of the sampled sigral with the Fourier transform of the window
signat. Unfortunately, this introduces ripples into the spectrum, due to the convolution
operation causing the signat component in r,(r) at any frequency to be spread over, or
to laa& into, othei frequencies. For there to be no leakage, the Fourier transform of the
window function must be a delta function. This corresponds to a window function that
is constant for all time, which implies no windowing. Thus, rvindowing necessarily
causes leakage. We can seek to minimize leakage by choosing a window function whose
Fourier tranaform is as close to a delta function as possible. The rectangular window
function is not generally used, since it does not approximate a delta function very well.
For the rectangular window defined as
MO The Discrete Fourier Transfonn Chapter 9

_2r 2r i
NN
Figure 9.62 Magnitude spectrum of a rectangular window.

,_(,)={l hffiJ-, (e.6.8)

the frequency response is

(ry;!]
r,,n(o) = .*o
[
-,n #rU (e.6.e)

Figure 9.6.2 shows I W* (O) I , which consists of a main lobe extending from
O : -2n/N to2r./N and a set of side lobes. The area under the side lobes, which is
a significant percentage of the area under the main lobe, contributes to the smearing
of the DFT spectrum.
It can be shown that window functions which taper smoothly to zero at both ends
give much better results. For these windows, the area under the side lobes is a much
smaller percentage of the area under the main lobes. An example is the Hamming win-
dow, defined as

wnb) = 0.54 - 0.46cosn3, o <n<N- I (e.5.10)

Figure 9.6.3(a) compares the rectangular and Hamming windows. Figures 9.6.3(b) and
9.6.3(c) show the magnitude spectra of the rectangular and Hamming windows,
respectively. These are conventionally plotted in units of decibels (dB). As can be seen
from the figure, whereas the rectangular window has a narrower main lobe than the
Hamming window, the attenuation of the side lobes is much higher with the Hamming
window.
A factor that has to be considered is the frequency resohttion, which refers to the
spacing between samples in the frequency domain. If the frequency resolution is too
low, the frequency samples may be too far apart, and we may miss critical informa-
tion in the spectrum. For example, we may assume that a single peak exists at a fre-
quency where there actually are two closely spaced peaks in the spectrum. The
frequenry resolution is
Sec. 9.6 Spectral Estimation of Analog Signals Using the DFT 441

rrlrl
I (,t

0tt
I l.rrr rrrrng

0(,

0.4

0.:

l
(a)

-t0
=
-40

-60
o
o -80

- 100

-20
=
G
_40
s
-@
o
o -80

- 100

(c)

Figure 9.6.3 Comparison of rectangular and Hamming windows. (a)


Time functions. (b) Spcctrum of rectangular windorv. (c) Spectrum of
Hamming windorv.
42 The Dlscrete Fourlor Translorm Chapter I

o,_2n _ Ztt
Aro = (e.6.11)
NNTTO
where Iu refers to the length of the data window. It is clear ftom Equation (9.6.11) that,
to improve the frequency resolution, ive have to use a longer data record, If the record
length is fixed and we need a higher resolution in the spectrum, we can consider
padding the data sequence with zeros, thereby increasing the number of sanples from
N to some new value No > N. This is equivalent to using a window of longer duration
T, > To on the modified sigral now defined as

,,,, = 0<r<fo (e.6.t2)


{i,,',' To<t=Tl
f,'ra'nple 0.6.1
Suppose we want to use the DFT to find the spectrum of an analog siEnal that has been
prefiltered by passing it through a low-pass filter with a cutoff of 10 kHz The desired fre-
quency resolution is less than 0.1 flz.
The sampling theorem gives the minimum sampling frequency for this signal as/, = 20
kHz, so that
I< 0.05 ms

The duration of the data window can be determined from the desired frequency resolu-
tion A/as
1
To = 10s
,=
from which it follows that

x =!>zx rG

Assuming that we want to use a radix-2 FFT routine, we chooe N to b Xi\l$ (= 2t\.
which is the smallest power of 2 satisfyitrg the constraint on N. If we chmse /" = n kHz.
fo must be chosen to be 13.10?2 s.

Ere'nple 0.6.2
In tbis example, we illustrate the use of the DFT in frnding the Fourier spoctruo of ana-
log signals. Let us consider the sigral
r"(t) = go5400i I
Since the signal consists of a single frequenc?, is continuous-time Fourier transfora is a
pair of 6 functions occurring at !2N Hz-
Figure 9.6.4 shows the magnitude of the DFT spectrum X(/<) of the signal for data
lengths of 32, 64, and 128 samples obtained by using a reaangular window. The sigtal was
sampled at a rate of 2klfz. which is considerably higher than the Nyquist rate of 4fl) [Iz
As can be seen from the figure, the DFT spectrum erhibits two peaks in each case. If we
let &, denote the location of the first peak, the gecond peak eun at N -
&e itr all cas€&
This is to be expected, sinceX(-t) -
= X(N *). The aualog frequeacies conrsponding
to the two peaks can be determined to be /o = lkpTlN.
S€c. 9.6 Spectral Estimaiion of Analog Slgnals Using the DFT 443

lx(r) r

l5

(a)

t2

t0
I X(&) |

m
tx(r) |
l5

l0

0
60 80 t00

(c)

Ilgure 9.6.4 DFT spectrum of analog signal ,r,(t) using rectangular win-
dow. (a)N = 32. (b)N = 6a. (c) N = 128.
444 The Discrete Fourier Transform Chaptet I
.
Figure 9.6.5 shows lhe results of using a Hamming window on the sampled signal for
data lengths of 32,64,and 128 samples. The DFT spectrum again exhibits two peaks at the
same locations as before.

4
3.5

rx(&)r 3

2.5
,,

1.5

I
0.5
0
t0 l5 20 25 30

(a)

6
r x(r) r
5

0E
o

(b)

t2

t0
r x(r) |
8

or0

(c)

Iigore 9.65 DFT spectrum of analog signal .r,(t) using Hamming win-
dow. (a) N = 32. (b)N = el. (c) N = 128.
Sec. 9.7 Summary 445

With both lhe rectangular and Hamming windorvs. the first pe lk occurs at k, = 3. (r.
and l3 for N = 32. 64, and 128 sarnples, respcctivelv. Thesc ctr.rcspond to analog tre-
quencies of 187'5 Hz. 187.5 Hz. and 190.0625 Hz. Thus. as rhe number of data samples
increases. the peak moves closer to the actual analog frequencr. Note thal the peaks
become sharper as N (and hence the resolution in th.: digital [rcquencv domain) increases.
The figures also show thal the spectrum obtained using the Hamming window is some-
what smoother than that resulting from the rectangular window.
Suppose we add anolher frequency to our analog signal. so lhilt the signal is norv

.t,,(r) = cosz[00n, + cos440Tr


To resolve the two sinusoids in the signal, the frequency resolurion .[/must be less than
20 Hz. The duration of the data window, r0, must therelore be chosen to be greater rhan
1/20 s. If the sampling rale is 2 kHz, the number N o[ discrere-tirne samples needed to
resolve the two frequencies must be chosen to be larger than l(X).
Figurc 9.6.6 shows the DFT spectrum of the signal for <Iata lenr:rhs of 61, l2g, and 2s6
samples using a rectangular window, while Figure 9.6.7 shorvs rhc corresponding results
obtained using a Hamming rvindorv. with both rvindorvs, lhc 6-l-point DFT is unable to
resolve thc two peaks. For a rvindorv lengrh of l2ti, there are two Iaree values of
lx(k)|
at values of k = 13 and 14. The corresponding frequencies in thc anllog domain are equal
lo 203-125 Hz and 218.75 Hz, respectively. Thus, even though thc rrvo lrequencies do not
appear as peaks in the DFf spectrum. it is neverthelcss possibL: t(, identify ihem.
For N = 256, there are lrvo clearly identifiable peaks in the spcclrum at k = 26 and 28.
These again corrcspond to 20-3.125 Hz and 218.75 Hz irr the :rnalor Ircqucncy domain.

MMARY
o The discrete Fourier transform (DFT) of lhe finite-lenglh scc;uence .r(n) of lengrh
N is defined as

x(k) x(n)wi!
where

,, =.'o[-i?-]
r The inverse discrete Fourier transform (IDFI) is defined irv
1 lv_ |
.'tr) = xtr)w;'a
,,r,],
The DFT oI an N-point sequcnce is related to its Z-transfornt :rs

X(k) = X(r)1.-u.l
The sequence X(k), k = 0, l, 2. ..., N - l. is pcriodic wilh pcrrotl N. The sequence
x(n ) obtained by determining the IDFT of X(k) is also periodic wirh period N.
46 The Discrete Fourler Translorm Chapter 9

20
IE
t6
I X(&) |
l4
t2
l0
E

6
4
2
0

30

?5
r x(e) r

20

l5 --fr
t0
II
5

0
. l\, 80 tn

50
45
q
r x(*) |
35
30
25
zo
t5
l0
5
0
0

(c)

Iigure 9.66DFT spectrum of analog signal .rD (, ) using reciangular win-


dow. (a) N = 6a. (b)N = 128. (c) N: 256.
Sec. 9.7 Summary 47

20
I8
t6
I X(r) |
l4
t2
t0
8
6
4

30

25
I X(l) r

20

t5

lo

50
45

rx(r) r .10
3s
30
25
20
t5
t0
5

0
lm t50 250

k
(c)

Iigure 9.6.7 DFT spectrum of analog signal:r(l) using Hamming win'


dow. (a) N = 6a. (b)N = 128. (c) N = 256.
444 The Discrete Founer Translorm Chapter 9

' In all operations involving the DFT and the IDFT, the sequence.r(n ) is effectively
replaced by its periodic extension rp(n ).
. X(k) is equal to Nao, where a* is the coefficient of the discrete-time Fourier-series
representation of. x r(n),
o The properties of the DFT are similar to those of the other Fourier transforms, with
some significant differences. In particular, the DFT performs cyclic or periodic con-
volution instead of the linear convolution needed for the analysis of LTI systems.
. To perform a linear convolution of an N-point sequenc€ with an M-point sequence,
the sequences must be padded with zeros so that both are of length N + M - L.
. Algorithms for efficient and fast machine computation of the DFT are known as fast
Fourier-transform (FFT) algorithms.
o For sequences whose length is an integer power of 2, the most commonly used FFT
algorithms are the decimation-in-time (DIT) and decimation-in-frequency (DIF)
algorithms.
. For in-Place computation using either the DI'l' or the DIF algorithm, either the
input or the output must be in bit-reversed order.
o The DFT provides a convenient method for the approximate determination of the
sPectra of analog signals. Care must be taken, however, to minimize errors caused
by sampling and windowing the analog signal to obtain a finite-length discrete-
time sequence.
r Aliasing errors can be reduced by choosing a higher sampling rate or by prefilter-
ing the analog signal. Windowing errors can be reduced by choosing a window func-
tion that tapers smoothly to zero at both ends.
r The sPectral resolution in the analog domain is directly proportional to the data length.

9.8 CHECKLIST OF IMPORTANT TERMS


Allaslng lnveree dlscrete Fouder tranetom (lDFf)
Analog specirum Llnear conyolutlon
Blt-rcversed ordor Perlodlc conYolutlon
I)eclmaton-ln-trequency algorlthm Perlodlclty ol DFT and IDFT
Declmadon-ln-tme algorlthm Prefllterlng
Dlscrete Fourler translorm (DFf) Spoctral reaoluuon
Eror reducflon Wlndowlng
Fast Fouder transform (FFT) Zero paddlng
ln-placa computadon

9.1. Compute the DFT of the following N-point sequences:

tet.trzl = Il' n= no' 0<nocrv-l


t0, otherwise
(b) r(a) = (- lf
Sec. 9.9 Problems 449

(c) x(n) =
It. n even
{o orherwise
92. Show that if .r(n) is a real sequence, X(/V - t) = X*(t).
93. Let .r(a) be an N-point sequence with DFf X(k). Find the DFI of the followiog
sequences in term of X(& ):

reven
(B) y,(z) = I,(;),
lo,
\
n odd

(b) vz(n) =.r(N-n - l).


0<n<rv- I
<N- I
(c) )r(/,) = r(zn), 0=x
(r(nl.
' 0<nsN-l
(d)yo(a) =
tr, N-n=2N_l
9.4. Letr(n) be a real eight-point-sequence, and let
(r(n\, O<n<7
rtr) =
[..1n - 8), 8=z<15
Find Y(k ), given that
x(0) = 1' x(t) = 1 + 2j; x(2)= I - jl; x(3) = 1 + jt: andX(4\1 =2-

9.5. (e) Use the DFT to find the periodic convolution of the following sequences:
(l) r(a) = ll, -1, -1, l, -1, 1l and&(n) = 11,2,3,3,2,t|
(ll) :(n) = lr, -2, -1,1l and ft(n) = (1, 0, 0, 1[
(b) Verify your results using any mathematical software package.
9.5. Repeat Problem 9.5 for the linear convolution of the sequences in the Problem.
9.7. Le,l X(O) denote the Fourier transform of the sequence r(n) = (1/3)nu(n), atrd lety(n)
denote an eight-point sequence such that its DFT, (k), corresponds to eight equally spaced
samples of X(O). That is,

Y@=x(+k) o=0,1,"?
What is y(a)?
9.& Derive Parseval's relation for the DFT:
/v-l I N-l
,), lrtr)l' lxttl l'
=
.)*
^,
9.9. Suppose we want to evaluate the discrete-time Fourier transform of an N-point sequence
r(n) at M equally spaced points in the range [0,2n]. Explain how we can use the DFT to
do this if (a) M > N aad (b) M < N.
9.10. Let.r(a) be an N-point sequence. It is desired to find 12E equally spaced samPles of the
spectrum X(O) in the range 7r/16< O= 15n/16, using a radix-2 FFT algorithm'
Describe a procedure for doing so if (i) N = 1000, (ii) N = 120.
9,11. Suppose we want to evaluate the DFT of an N-point sequence .r(n) using a hardware
processor that can only do M-point FFTs, where M is an integer multiple of N. Assuming
that additional facilities for storage, addition, or multiplication are available, show how
this can be done.
4il Tho Discrote Fourler Translorm Chapbr g

9.tL Given a six-point sequence r(z), we can seek to lind its DFT by suMividing it into three
two-point DFTs that can then be combined to give X(&). Draw a signal-flow graph lo eval-
uate X(k) using this procedure.
9.13. Draw a signal-flow graph for computing a nine-point DFT as the sum of three three-
Point DFTs.
9.14 Analog data that has been prefiltered to 20 kHz must be spectrum analyzed to a resolution
of les than 0.25 Hz using a radix.2 algorithm. Determine the necessary data length Io.
9.15. For the analog signal in Problem 9.14, what is the frequency resolution if the sigral is sam-
pled at 40 kHz to obtain l()96 samples?
9.16. The analog signal ro(l) of duration 24 s is sampled 8t the rate of 421E2and the DFTof
the resulting samples taken.
(a) What is the frequency resolution in the analog domain?
(b) What is the digital frequency spacing for the DFT taken?
(c) What is the highest analog frequency that does not ca"se aliasing?
9.17. The following represent the DFT values X(k) of an analog sigral r,(r) that has been san-
pled to yield 16 samples:
x(o) = 2, 1113' = a - ia, x$) = -2, x(8) = - I, x1r r1 = -2. x(t3) = 4 + i4
All other values are ?sto.
(a) Find the corresponding r(a).
O) What is the digital freguency resolution?
(c) Assuming lhat the sampling intewal is 0.25 s. find the analog frequency resolution.
What is the duration Io of the analog signal?
(d) For the sampling rate in part (c), what is the highest analog frequensl that can be pre-
sent in ra(r) without causing aliasing?
(e) Find f0 to give an analog frequenca resolution that is twice that in part (c).
9.I& Given two real N.point sequences /(n) and g(a), we can find their DFTs simultaneously
by computing a single N-point DFf of the complex sequence
x(nl=l(n)+js(n)
We show how to do lhis in the following:
(a) kt ,,(n) be any real N-point sequence. Show that

Relr(k)l = H,(k) = U@: !12 tt--A

ImlH(&)l = H"(k) = U$l----!'U :!)


G) frt t(n) be purely imaginary. Show that

Relfr(t)l = H.(k)
ImUr(*)l = H,(kl
(c) Use your resulrs in Parts (a) and (b) to show that

r(e)=1o111+ixb$)
G(kl=Xp(k)-ixp,(kl
S@.9.9 Problems 451

where X".(&) and X""(k) represent the even and odd parts of Xr(k), the real part of
X(&), and X,"(&) and X,o(t) represent the even and odd parts of Xr(*) tl1s irnnginary
parr of x(t ).
9.M. (a) The signal x,(r) = 4cos(2nt/31is sampled at discrete insrants I to generate 32 points
of the sequence :(r). Find the DFT of the sequence if. T = 15t16, and plot the magpiftde
and phase of the sequence. Use a rectangular window in trnding the DFT.
@) Determine lhe Fourier transform of r,(t), and compare its magnitude atrd phase with
the results of Part (a).
I
(c) Repeat Parts (a) and (b) if = 0.1 s.
9.a). Repeat problem 9.19 with a Hamming window. Comment on your results.
92L We want to determine the Fourier transform of the amplitude-modulated signal
ro(l) = 19 cos 12mnr) cos(100?rr) using the DFT. Choose an appropriate duration Io
over which the signal must be observed in order to clearly distinguish all the frequencies
I
in r,(t). Asume a sampling interval of = 0.4 ms.
(r)
Use a rectangular window, and lind the DFT of the sampled signal for N = 128,
N=
256, and N = 512 samples.
@) Determine the Fourier transform of ro(l), and compare its magnitude and phase vith
the results of Part (a).
9.2L Repeat Problem 9.21 with a Hamming window. Comment on your resulb.
Chapter '1 0

Design of Analog
and Digital Filters

10.1 JNTRODUCTION
Earlier we saw that when we apply an input to a system, it is modified or transformed
at the output. Typically, we would like to design the system such that it modities the
input in a specified manner. When the system is designed to remove certain unwanted
components of the input signal, it is usually referred to as a filter. When the unwanted
components are described in terms of their frequency content, the filters, as discussed
in Chapter 4, are said to be frequency selective. Although many applications require
only simple filters that can be designed using a brute-force method, the desigr of more
complicated filters requires the use of sophisticated techniques. In this chapter, we con-
sider some techniques for the design of both continuous-time and discrete-time fre-
quency-selective fi lters.
As noted in Chapter 4, an ideal frequency-selective filter passes certain frequencies
without any change and completely stops the other frequencies. The range of fre-
quencies that are passed without attenuation is the passband ofthe filter, and the range
of frequencies that are not passed constitutes the stop band. Thus, for ideal continu-
ous-time filters, the magnitude transfer function of the filter is given by lH(ro) | = 1 ;n
the passband ana la1<r)l = 0 in the stop band. Frequency-selective filters are classi-
fied as low-pass, high-pass, band-pass, or band-stop filters, depending on the band of
frequencies that either are passed through without attenuation or are completely
stopped. Figure 10.1.1 shows the characteristics of these filters.
Similar definitions carry over to discrete-time filters, with the distinction that the
frequenry range of interest in this case is 0 O < 2n, since If(O) is now a periodic
=
function with period 2zr. Figure 10.1.2 shows the discrete-time counterparts of the fil-
ters shown in Fig. 10.1.1.

452
Sec. 10.1 lntroduclion 453

I fl(or) I I ,/(o) I

0
(a)

I lr(o) I I lr(o) I

(c) (d)

Iigure l0.l.l Ideal conlinuous-time frequcncy-sclccrivc filters.

I fl(O) I trl('}) |

-2t -1 0a -aOi
(a) ( b)

I ,,(O) I lll(sl) |

Ott -?0t
(c) (d)

Figure l0.lJ ldeal discrete+ime frequency-sclective filrers.

In practicc, we cannot obtain filter characteristics with abrupt transitions between


passbands and stop bands. as shown in Figures l0.l.l and 10.1.2. This can easily be s€en
by considering the impulse response of the ideal low-pass filter. which is noncausal and
hence not physically realizable. To obtain practical filters, we rherefore have to relax
4il Dosign ol Analog and Dlgttal Flltem Chapter 10

IH(tt)I

I + 6r
I -6r

52

tlgure 10.13 Specification for practical low-pass frlter.

our requirements on lH(rD) | (or I a(O) | ) in the passbands and stop bands, by per-
mitting deviations from the ideal response, as well as specifying a transition band
between the passbands and stop bands. Thus, for a continuous-time low-pass filter, the
specifications can be of the form
I -E,s la1rll <t +0,, lrl s., (10.1.1)

ln1,1l < or, l.u, lsto


where ro, and ro, are the passband and stop band cutoff frequencies, respectively. The
range o[ frequencies between o, and o" is the transition band, depicted in Fig'
ure 10.1.3.
Often, the filter is specified to have a peak gain of unity. The corresponding speci'
fications for the filter frequency resPonse can be easily determined from Figure 10.1.3
by amplitude scaling by a factor of 1/(l + 6,). Specifications for discrete-time filters
are given in a similar manner as

llatoll-rl<6,, lol =o, (10.1.2)

lrr(o)l = s,, o"= lol ="'


Given a set of specifications, filter desigr consisLs of obtaining an analytical approxi-
mation to the desired filter characteristics in the form of a frlter transfer function II(s)
for continuous-time systems and f/(e) for discrete-time systems. Once the transfer
function has been determined, we can obtain a realization of the filter, as discussed in
earlier chapters. We consider the design of two standard analog filters in Section 103
and examine digital frlter design in Section 10.4.
In our discussion of filter design, we confine ourselves to low-Pass filters' since, as
is shown in the next section, a tow-pass frlter can be converted to one of the other types
of frlters by using appropriate frequency transformations. Thus. given a specilication
for any other type of filter, we can convert this specification into an equivalent one for
Sec. 10.2 FrequencyTranslormations 455

a low-pass filter, obtain the corresponding transfer function H(.r) { or //(e)), and con-
vert the transfer function back into the desired range.

1O.2 FREQUENCY TRANSFORMATIONS


As indicated before, frequency transformations are useful for converting a frequency-
selective frlter from one type to another. For example, supPose u'e are given a contin-
uous-time low-pass filter transfer function H(s) with a normalized cutoff frequencv of
unity. We now verify that the transformation which converts it into a low-pass filter
with a cutoff frequency ro. is
sn = Jo. (10.2.1)
where s' represents the transformed frequency variable. Since
u
(r) = tl(o. (10.2.2\

it is clear that the frequency range 0 - lrl - 1 is mapped into the range
0 s lor'l s r,r.. Thus, H(st) represents a low-pass filter with a cutoff frequency of to..
More generally, the transformation

,':r4 or.
(10.2.3)

transforms a low-pass filter with a cutoff frequency r,l. to a low-pass filter with a cutoff
frequency of ro!. Simitarly, the transformation

-o-9c (10.2.4)
s

transforms a normalized low-pass filter to a high-pass filter with a cutoff frequency of


o". This can be easily verified by noting that in this case we have
or
rrr' =- 0.)
(10.2.s)

so that the point lorl = 1 corresponds to the point lrol = ,.. Also, the range lt'rl s 1
is mapped onto the ranges defined by." -
lr,rol = -.
Next rve consider the transformation of the normalized low-pass filter to a band-pass
filter with lower and upper cutoff frequencies given by to,j, arld o]r.,, respectively. The
required transformation is given in terms of the bandwidth of the filter,
BW=o..-ro., (10.2.6)

atrd the frequency,


tuu ffo".-orn (10.2,7)

ali

'= #(;;. p) (10.2.8)


Design of Analog and Digiial Filters Chapter 10

This transformation maps ro = 0intothe points r,r0 = + co, and the segment lrrrl slto
the segments ro., > ltool = to.,.
Finally, the bimd-stop filtei is obtained through the transformation
BW
J= (10.2.e)
*(*.3)
where BW and o, are defined similarly to the way they were in the case of the band-
pass filter. The transformations are summarized in Table 10-1.

TABLE 1Gl
Fr€quency translormEllons lrom low-pass analog llllor roapon8g.

Fllte, Typo Transtormallon

s'
Low Pass lic

High Pass lt
Band Pass # (* . p), r,rs = \6.J+

BandStop l"*L;,* BW = o., - ro.,


,.l.* 7/
We now consider similar transformations for the discrete-time problem. Thus, sup
pose that we are given a discrete-time low-pass 6lter with a cutoff ftequency O", and we
want to obtain a low-pass frlter with cutoff ftequency Of . The required transformation is

,'=fi (10.2.10)

More conventionally, this is written as

l,r\-l:z-l-o
tz")':l_az-, (10.2.11)

By setting z = exp [iO] in the right side of Equation (10.2.11), it foltows that

.'=",p[i,un'##H*] rrc.2.t2)
Thus, the transformation maps the unit circle in the z plane into the unit circle in the
et plane. The required value ofc can be determined by setting zr = expUOS] and
O = O. in Equation (10.2.11), lelding

a=
+drri
^_sin[(O"-oj)/2]
sinid (10'2'13)
Sec. 10.3 Design ot Analog Filt€rs 457
.
TAELE 1G2
Frcquencry tsanslomadons Irom los-pa88 dlgtlEl llltor'rpsponso.

Fllter Type Transtormauon Assoclatod Formul,ss

. o.-o:
srn -', -'
Low Pass (z')-r =
i-*+ -
sin
O +0!
-L;-a
Ol = desired cutoff frequency
oi-o_
z-l + o -' 2--
cos
High Pass - r;;=i c=-
o!+o-
cos-'7-
2ok . k-l
-r+=-- O:. + o:,
' k+l' k+l t*2
Band Pass
" = t;. - n:,
-'-2 -
O!-niir o
k= cor-1t- tan
f
o:,, o.t, = desired lower and upper
cuioff frequencics, respectively
t_- k O.'. + O:,
,-, - -Lr-, I l+t --
cos
t- '-
Band Stop
" = ---o=n:.
cos --
J
Oi-n!'' r.r
k = lan
2 tanf

Transformations for converting a low-pass filter into a high-pass, band-pass, or band-


stop filter can be similarly defined and are summarized in Table 10-2.

The design of practical filters starts with a prescribed set of specifications, such as those
given in Equation (10.1.1) or depicted in Figure 10.1.2. Whereas procedures are avail-
able for the design of several different analog filters, we consider the desigp of two
standard filters. namely, the Butterworth and Chebyshev filters. The Butterworth fil-
ter provides an approximation to a low-pass characteristic that approaches zero
smoothly. Tbe Chebyshev filter provides an approximation that oscillates in the pass-
band, but monotonically decreases in the transition and stop bands.
458 Design ot Analog and Digital Filters Chapt€r 10

10.3.1 Ihe Butterworth Filter


The Butterworth filter is characterized by the magnitude function

la(.)1, =
ilrr-* (10.3.1)

where N denotes the order of the filter. It is clear from this equation that the magni-
tude is a monotonically decreasing function of to, with its ma$mum vatue of uiity
:
occurring at ro 0. For o = l, the magnitude is equal to l/\/r, for all values of N.
Thus, the normalized Butterworth filter has a 3-dB cutoff frequency of unity.
Figure 10.3.1 shows a plot of the magnitude characteristic of this'filter as a function
of ro for various values of N. The parameter N determines how closely the Butterworth
characteristic approximates the ideal filter. clearly. the approximation improves as N
is increased.
The Butterworth approximation is called a maximally fiat approximation, since, for
a given N, the maximal number of derivatives of the magnitude function is zero at the
- :
origin. In fact,.the first 2N 1 derivatives of lfflroyl are zero ar o, 0, ali we can see
by expanding la1rll in a power series about ro = 6:

la(r)l' = t- i.il + lro+'v -... (10.3.2)


To obtain the filter transfer function l/(s), we use

I Ir(ro) |

Ideal response

A/= 4
M-3
N-2
Lrv=t

t23
Flgure 103.1 Magnitude plot of normalized Butterworth filrer.
Sec. 10.3 Design of Analog Filters 459

' (10.3.3)
11(s) // t - s) l, - p= lH( ,u)l'
I

l+
i",?']'
so that

H(s)H(-s) =- l,.m (10.3.4)


,.(i)
From Equation (10.3.4), it is clear rhat the poles of H(s) are givcn by the roots of
the equation

(;)- = -, (10.35)

= expU(zk - l)"1, k= 0, l,2.... ,2N - 1

It follows that the roots are given by


s* = exp[j(2k + N - t)r/2N] t = 0.1,2.....2N -I (103.5)
By substituting sr : orr + 7to1, we can write the real and imaginary parts :ls

or =
l2k+N-t It)\
cosl--r,-
.l2k-lrr.\
=""\-r- t/
,* = sin
l2k+N-l \
(toj.7)
l-ZN ")
/2k-lt\
= cosl\-
N-rJ
As can be seen from Equation (10.3.6), Equation (10.3.5) has 2N roots spaced uni-
formly around the unit circle at intervals of n/2N radians. Since 2/< - 1 cannot be even,
it is clear that there are no roots on the 7ro axis, so that there are exactly N roots each
in the left and right half planes. Now, the poles and zeros of H(.s) are rhe mirror images
of the poles and zeros of H(-s). Thus. in order to get a stable transfer function, rve
simply associate the roots in the lcft half plane rvith H(s).
As an example. for N = 3, from Equation (10.3.6), the roots are located at

sr,
[ "tlj.
= exnll. s, =
l.zrl
exO[i:i:]. sz = exp[lnl,

', =.-nfilil. '. =.-n[,T]. ss =r


as shown in Figure 10.3.2.
, 460r Design ol Analog and Digital Filtsrs Chapt€r iO

I
I
x-
I
t
\

Flgure 1032 Roots of the


Butterworth polynomial for N = l.
To get a stable transfer function, we choose as the poles of fl(s) the left-half plane
roots, so that

fl(s): (10.3.8)
ls - explj?r /3ll[s - exp[lzr]l[s - exp[ianl3]l
The denominator can be expanded to yield
I
H(s) = .,' 1) (10.3.e)
Gr. "lix"
Table l0-3 lists the denominator of the Bunerworth transfer function in factored form
for values of N ranging from lY = I to N = 8. When these factors are multiplied, the
result is a polynomial of the form
s(s): ansfl + a,r-,C-r * "'* a,s *I (10.3.10)
These coefficients are listed in Table 104 for N = I to N 8. :
To obtain a filter with 3-dB cutoff at toc, we replace s in II(s) by s/to.. The corre-
sponding magnitude characteristic is

TABLE 10€
Eutlsrwoih polynotnlalo (tadored lorm)

I s+1
2 s2 \6s + 1
+
3 (s2+s+f)(s+l)
4 (s2 + 0.7653s + l)(s2 + l.B476s + l)
5 (s + l)(s2 + 0.6180r + 1)(r2 + l.6lEtu + l)
6 (.s2 + 0.5176s + 1)(s2 + V2s + l)(s2 + 1.931& + t)
7 (s + l)(s2 + 0.4450r + 1)(s2 + 1.2455s + l)(s2 + 1.8()22s + l)
E (s2 + 0.3986s + l)(s'? + l.lllos + lxs2 + 1.6630r + l)(s2 + t.g62x + l)
Sec. 10.3 Design ot Analog Filters 481

TABLE 10.4
Bullemorth polynomlal8
a. ar as a, c
I
\/i 1

2 2 I
2.613 3.414 2.613 1

3.?36 5.236 5.236 3.2% 1

3.864 7.4U 9.141 7.&4 3.W 1

4.494 10.103 14.6(b 14.6M 10.103 4.494 I


5.126 13.128 21.828 25.691 21.84E r3.13E 5.126

la(.)l'=r*#Jil (103.11)

I-et us now consider the design of a low-pass Butterworth filter that satisfies the fol-
lowing specifications:

la1.;l >t-0,, l.l =,, (10.3.12)

s Ez, lrl ,.,


Since the Butterworth filter is defined by the parameters iV and o", we need fwo equa-
tions to determine these quantities. From the monotonic nature of the magpitude
respome, it is clear that the specifrcations are satisfied if we choose

la(ror)l =l-Er (10.3.13)

and
la1o,;l : t, (10.3.14)

Subatituting these relations into Equation (10.3.11) yields

('J.)*=(+)'-,
and

(**)"=#-,
Eliminating o. from these two equations and solving for N regults in

,-,[*ctrHbl
*::
'L
(10.3.15)
"=
l
..462.. Deslgn ol Analog and Digital Fllters. . Chapter tO

Since N must be an integer, we round up the value of ,itr' obtained frbm Equation
(10.3.15) to the nearest integer. This value of N can now be used in either Equa-
tiol (103.13) or Equation (10.3.14) ro determine ro.. If ro. is determined ftom Equation
(10.3.13), the passband specifications are met exactly, whereas the stopband specifrca-
tions are exceeded. But if we use Equation (10.3.14).to determine to". the reverse is
true. The steps in finding II(s) are summarized as follows:
1. Determine N from Equation (10.3.15), using the values of 6,, 5r, ror, and o,, and
round-up to the nearest integer.
2. Determine o., using either Equation (10.3.13) or Equation (10.3.14).
3. For the value of N calculated in Step l, determinp the denominator polynomial of
the normnlized Butterworth filter, using either Tdble 10-3 or Tabte 104 (for values
ofN < 8) or using Equation (10.3.8), and form t/(s).
4. Find the unnormalized transfer function by replacing s in H(s) found in Step 3 by
s/o.. The filter so obtained will have a dc gain of unity. If .some other dc gain is
desired, H(s) must be multiplied by the desired gain.

Erample l0J.l
We will design Butterworth filter to have an attetruation of no more than .l dB for
lrol s 2000 radis and at least 15 dB for l,ol = SOOO rad/s. From rhe specilications
20log,o(l - Er): - I and 20lo9,o6, = -15
so :
that Er 0.10E7 and Ez = 0.1778. substituting these values into Equation (103.15) yields
a value of 2.6045 for /v. Thus we choose N to be 3 and obtain the normalized frlter from
Table 10-3 as
'1
H("): s, + 2.a + A+ 1

Use of Equation (10.3.14) yields r,r. : 2826.8 radds.


The unnormalized filter is therefore equal to

H(s) =
(s /2826.8)1 + 2(s /2826.8)2 + 2(s / 2826.8) +t
_128r9!I_
sr + 2(2E26.8)s2 + 2(2826.8)2s + (2826.8)3
Figwe 103.3 shows a plot of the magnitude of the filter as a funciion of o. As can be seen
from the plot, the filter meets the spot-band specifications, and the passband specifications
are exceeded.

10.8.2 the Ghebyshev f ilter

The Butterworth filter provides a good approximation to the ideal low-pass character-
istic for values of or near zero, but has a low faltoff rate in the transition band. we now
consider the chebyshev filter, which has ripples in the passband, but has a sharper cut-
off in the transition band. Thus. for filters of the same order, the chebyshev filter has
Sec. 10.3 Design of Analog Filters 463

IH (ull

Figure 10.3.3 Nlagnitude function


of the Butterrvorth filter of Example
6 artrad/s 10.3.1.

a smaller transition band than lhe Butterworth filter. Since the derivation of the
Chebyshev approximation is quite complicated, we do not give the details here, but
only present the steps needed to determine H(s) from the specifications.
The Chebyshev filter is bascd on Chebyshev cosine polynomials, defined as
C,u(r) = cos(Ncos-to)' lrl s t

= cosh (N cosh-t or), lr,rl , t (10.3.16)

Chebyshev polynomials are also defined by the recursion formula


C,r(r) = 2oCn-,(or) - Cr-z(.) (10.3.17)

with Cs(to) = 1 and C1(<,r) = 6.


The Chebyshev low-pass characteristic of order N is defined in terms of Cx(o) as

la(,)l'=;-.-
t-t e'zcfr(to)
(10.3.18)

To determine the behavior of this characteristic, we note thal for any N, the zeros ol'
C^,(ro) are located in the interval l.l = t. Further, for lol ' t. lCrloll < l, and for
l.l , t, lC"(r)l increases rapidly as lrl becomes large. It follorvs that in the inter-
val l.ol - t, lf 1o1l'? oscillares about unity such that the maximum value is l and the
minimum is 1/(l + e'?). es l.ol increases, lg(r) l' approaches zero rapidly, thus pro-
viding an approximation to the ideal low-pass characteristic.
The magnitude characteristic corresponding to the Chebyshev filter is shown in
Figure 10.3.4. As can be seen from the figure, lH(r,r) | ripples between I and
l/!t + e2. Since Ci(l) = I for all N, it fotlows that for or = l.
lstrll =*+ (r 0.3.1e)
o
(!
E
-4,
EO
(J CL
<G
-EL
a,

.o
o)
q!- E
3
oq)

o
()

c)
J()
all
GI
(,
oJ
!a
c
it6
I 2
J -t
.i
c
a)

'E Ea
-Ir

4il
SEc. 10.3 Design ol Analog Fitters

For large values of ,-that is, values in the stop band-we can appr.ximare lalr) | as

lnr,)l = --',1 --
E C,\' ( u,
(10 3'rlr)

The dB attenualion (or loss) from the value at r,r = 0 can thus hc wrirren as

loss : - 20 log,,, lH{t,r; I

: 20 log e + 20 log Cr(or) (10.3.21)


For large o, Cn,(to) can be approximated by 2N- rt,lN,
so thar we have
loss = 20 loge + 6(N - 1) + 20N logo (.'10.3.22)

Equations (10.3.19) and (10.3.22) can he used lo determine rhc rrvo parameters N
and e required for the chebyshev filter. The parameter e is dercrnrined by using the
passband specifications in Equarion (10.3.19). This value is rhen used in nquirion
(10.3.22), along with the stop-band specifications, to determine N. In order ro find
H(s), we introduce the parameter

F : (10.3.23)
fisinr,-'1
The poles of H(s), s, = o* -r f ro^, & : 0, l, ..., N - 1, are givcn b1,

"- =,i"(?51)],i,r,o
,, =.",(4;,)] *,nu (10.3.24)

It follows that the poles are locared on an ellipse in the s plane givcn by
ol - ofi
= (10.3.25)
rinh'g '
"o.tr:B
The major semiaxis of the ellipse is on the lo axis, the minor senriaxis is on the o axis.
:
and the foci are at or + l, as shown in Figure 10.3.5. Ttre 3-dB cutolf frequency occurs
at the point wh.-re the ellipse intersects the it,l axis-that is, ar t,,r = cosh B.
It is clear from Equation (10.3.24) rhar the chebyshev poles are relared to the Bur-
terworth poles of the same order. The relation between these poles is shown in Figure
10.3.6 for IV = 3 and can be uscd to determine the locations ol rhe chebyshev poles
geometrically. The corresponding H(s) is obtained from lhe lefr-half plane poles.

Elrenrple 1032
we consider the design of a chcbyshev filter to have an attenuariorr o[ ntr more than I dB
for lr,r | - l([0 rads/s and at leasi l0 dB for l, | = 50)0 railsls.
We will first normalize or, to I, so that to, = 5. From the pas:;hand specifications, rve
have. from Equarion ( 10.3. l9 )
466 Design ol Analog and Digital Filters Chapter 10

lct
coslt 6

fir
\
sinh P

\, )
Flgure 103.5 Poles of the
Chebyshev filter.
i/f
,t'1

:'t, Q Buncrworth poles


X
-x- 'n--
...' { Chebyshev poles
z1t
't -x-
,i.
-:t, lV t.
lr
t t,. r! \
,ii
I
'1
sinh P
-x
l6t
,t/l
J-
i-l-+-,
ilt
Vt i
t

I /(--_ Buttereorth
pole locus
-x-

Ilgure 103.6 Relation between the Chebyshev and Butterworth poles for
N: 3.

I
20loero;-; = -l
It follows that e = 0.509. From Equation 10.3.22.

l0 = 20 logro0.509 + 6(N - t) + 20N tog,65


so thal N = l.G)43. Thus. we use a value of N = 2. The parameter p can be determined
from Equation (103.23) to b€ equal to 0.714. To find the Chebyshev poles. we determine
the poles for the corresponding Butterworth filter of the same order and multiply the real
parts by sinh p and the imaginary parts by coshp. From Table l0-3. the poles of the nor.
malized Butterworth filter are given by
Sec. 10.3 Dssign ol Anatog Fitters 467

1_ I
@P
lj ;
\/, v2
where r,rr = 10C0. The Chebyshev poles are. rhen.

, = -# (sinho.7l4) .,#(cosho.7t4)
= _545.31 + j892.92

Hence,

H(r) =
(s + 545.31)'z + (892.92)2
--1
The corresponding filter with a dc gain of unity is given by

(s4s.3ly + (892.q2)2
H(s; =
(s + 545.31)2 + (8ct2.92)'z

The magnitude characteristic for this filter is shown in Figure 10.3.7.

I Hlti) I

t.t2
I

Ilgure 103.7 Magnitude


characteristic of the Chebyshev
qJ ( kHz) filter of Example 10.3.2.

An approximation to the ideal low-pass characteristic, which, for a given order of


frlter has an even smaller transition band than the Chebyshev filrer. can be obtained in
terms of Jacobi elliptic sine functions. The resulting filter is called an elliptic filter. The
design of this filter is somewhat complicated and is not discussed here. We note, how-
ever, that the magnitude characteristic of the elliptic filter has ripples in both the pass-
band and the stop band. Figure 10.3.8 shows a typical elliptic-filrer characteristic.
468 Design ot Analog and Digilal Filters Chapter 10

I H(utt 12 t ttlot l:

I I
l;? I - .'

'ic
N odd N even

Figure 103.8 Magnitude characteristic of an elliptic Jilter.

ITAL FILTERS
In recent years, digital filters have supplanted analog filters in many applications
because of their higher reliability. flexibility, and superior performance. The digital fil-
ter is designed to alter the spectral characteristics of a discrete-time input signal in a
specified manner, in much the same rvay as the analog filter does lor continuous-time
signals. The specifications for the digital filter are given in terms of the discrete-time
Fourier-transform variable o. and the design procedure consists of determining the dis-
crete-time transfer function H(l) that meets these specifications. We refer to H(z) as
the digital filter.
In certain applications in which a continuous-time signal is to be filtered, the analog
filter is implemented as a digital filter for the reasons given. Such an implementation
involves an analog-to digital conversion of the continuous-time signal, to obtain a dig-
ital signal that is filtered using a digiral filter. The outpur of the digital filter is then con-
verted back into a continuous-time signal by a digital-to-analog converter. In obtaining
this equivalent digital realization of an analog filter, the specifications for the analog
filter, which are in terms of the continuous-time Fourier-transform variable o, must be
transformed into an equivalent set of specifications in terms of the variable O.
As we saw earlier. digital sysrems (and, hence, digital filters) can bc either FIR or
IIR filters. The FIR digital filter. of course, has no counterpart in the analog domain.
However, as we saw in previous sections, there are several well-established techniques
for designing IIR filters. It would appear reasonable, therefore, to try and use these
techniques for the design of IIR digital filters. In the next section, we discuss two com-
monly used methods for designing IIR digital filters based on analog-filter design tech-
niques. For reasons discussed in the previous section, we confine our discussions to the
design of low-pass filters. The procedure essentially involves converting the given dig-
ital-filter specifications to equivalent analog specifications, designing an analog filter
that mects these specifications, and finally, converting the analog-filter transfer func-
tion H.(s) into an equivalent discrete-time transfer function I/(a).
Sec. 10.4 Digital Filters 469

10.4.1 Design of IIR Digital Filters Using


Impulse Invaria"ce

A fairly straightfonvard method for establishing an equivalence betrveen a discrete'


time system and a corresponding analog system is to require that the responses of the
two systems to a test input match in a certain sense. To obtain a meaningful match, we
assume that the output y,(t) of the continuous-time system is sampled at an appropri-
ate rate T. We can then require that the sampled output y,(nf) be equal to the output
y(n ) of the discrete-time system. If we now choose the test input as a unit imPulse, we
require that the impulse responses of the two systems be the same at the sampling
instants. so that
h,(nT) = h(n) (10.4.1)

The technique is thus referred to as impulse'invariant design.


It follows from Equation (10.4.1) and our discussions in Section 7.5 that the relation
between the digital frequency O and the analog frequency to undcr this equivalence is
given by Equation (7.5.10),
o (t0.4.2)
'=i
Equation (10.4.2) can be used to convert the digital-Iilter specifications to equivalent
analog-filter specifications. Once the analog filter H,(s) is dclcrrnined, we can obtain
the digitat filter H(z) by finding the sampled impulse response h,(nT) and taking its
Z-transform. In most casesi we can go directly from H,(s) to H(z) by expanding H.(s)
in partial fractions and determining the corresponding Z-transform of each term from
a table of transforms, as shown in Table 10-5. The steps can be summarized as follows:

1. From the specified passband and stop-band cutoff frequcncics. Q and O" respec-
tively, determine the equivalent analog frequencies, oo and r,r,.
2. Determine the analog transfer function H,(s), using the techniques of Section 10.3.
3. Expand H,(s) in partial fractions, and determine the Z-transform of each term from
a table of transforms. Combine the terms to obtain fI(z).
while the impulse-invariant technique fairly straightforward to use, it suffers from
is
one disadvantage, namely, that we are in essence obtaining a discrcte-time system from
a continuous-time system by the process of sampling. We recall that samPling intro-
duces aliasing and that the frequency response corresponding to the sequence h,(n?')
is obtained from Equation (7.5.9) as

H(O) =
;_i. a"(a.2] r) (10.4.3)

so that

H(o) = ,ir"rn, (10.4.4)

only if
470 Oosign ol Analog and Digital Filtors Chapter 10

TABLE 1(}5
Laplaco bansrorms and thelr Z-transtorm equlvalents

Laplaco Transtoh, Z-Transtorm,


H(s) H14

I Tz
,
s- (.-tl,
2 r'rS:-U
sl (z - l)'

s*a
l- z
z - expl- aTl
I Tz expl- aTl
G*"i' (z - exp[- aTl)'1
I 1 l_
(s+a)(s+b) (b-o)\z - exp[-al] z - expl- bTl
a Tz __
s2(s + a) (z- lF_ _(r_-Ip_1-o4)z
a1! - 9[-
"*o-1-n7p
1 Tz exp[- aTl
tr * ,l: (z - exp[-aI])2
a2 z aT expl- aTlz
s(s-+ljl z-l z - exp [-aI] -
(z expl-aTl)2

_ z sinool
- -9e -
s2+-j z2 - 2z cosooT + I
s
-;- -_-; _z (z - cosoo_ I)
t' * rrro' z2 - 2z costl,oT + I

__.._.9o.. 3 exp[:rfl_fiqgsl_
(s+a)2+(o; z2 - all cos roo T + expl-?aTl
2z expf-
s+d _
-- _ z' - z expl-rll "gryLl__
(s+a):+(o3 z2 - 2z expl- oI] cosorl + expl- 2aTl

H,(.) = 0. lrl = T 7t
(10.4.s)

which is not the case with practical low-pass filters. Thus. the resulting digital filter does
not exactly meet the original design specifications.
It may appear that one way to reduce aliasing effects is to decrease the sampling
interval T. However, since the analog passband cutoff frequency is given by
?r..: Ar/ T, decreasing 7 has the effect of increasing <or. thereby increasing aliasing. It
follows, therefore, that the choice of r has no effect oir the performance of the digital
filter and can be chosen to be unity.
For implementing an analog filter as a digital filter, we can follow exactly the same
procedure as beforc, except that Step 1 is not required. since the specifications are now
Sec. 1 0.4 Digital Filters 471

given directly in the analog domain. From Equation (10.4.-l). rrhen rhe analog filter is
sufficiently band linrited. lhe corresponding digiral filter has a grin of l/I. which can
become extrenrely high for low values of L Gencrally. thereforc. the resulting trans-
fer function H(t) is multiplied by 7'. The choice of I is usuall,"- determined by hard-
ware considcrations. We illustrarc the procedure by the follorving cxample.

Example lo.4.l
Find the digital equivalent of the analog Butterworlh filter derived in Example 10.3,t using
the impu lse-inva rian t method.
From Example 10.3.1. with ro,. = 2826.E, thc filter transfer function is

H(s) =
si + 2(2826.E)srl"r'lrll .r,* + (2sr6.s).
2826.8 2826.8(.r + 1413.4)+ 0 s(2826.8F
= s + ZSlO.g + 1413.4)2 + (2448.1): + lJl3..l)r fdd.tl'
1.s 15
We can determine lhe equivalent Z-transfer function from Tablc l().5 as
t - - ' sin(2A8.lf[l
H(:) = 2826.8[.
- "',.',0, .r] - z2-
_ zc-r{r'1'{r[cos(22148.17)
2-. ,.,ii;;;;.(;44s.rr) + e-2826.8r I
If the sampling interval I is assumed to be I ms, we get

n(z) = 2s:o.sf :".".


lz - 0.2433
- :',,- :l:*o"t'
+ O.37422 (,.()5921
I

which can be amplitude normalized as desired.

n=ample 1O.42
l.et us consider the design of a Butterworth lorv-pass digital filt,:r rhar mees the follow-
ing specifications. The passband magnitude should be constanr to within 2 dB for fre-
quencies helow 0.2rr radians, and the stop-band magnitudc in thc range 0.4n < 0 < n
should be lcss lhan -10 dB. Assume that the magnitude at O = () is normalized to unity.
With ()/ = 0.2rr and O. = 0.4n, since the Butterworth filtcr has a monotonic magni-
tude characteristic, it is clear that to meet thc specifications. wc rlrust have
20logro lH(0.21r 1l = -2. or lfl(o.zrr l l' -- to-o:
and
20 logrolr/(0.4r)l = - or
lH1tt.+" 11t = 19-'
10,

For the impulse-invarianl dcsign techniquc. we obtain the equiralent analog domain spec-
ifications by setting r,r = Of. rvith I = l. so that
la"1o.zr'11'= to'o'
la"1o.ln;lr = to-'
For the Butterworth filtcr.
I
l1ltiu,tl:-
I r ur '/r"' )' '
rvhere t,, uttd Nntust Lrc tL:1,-'r t,tirt,rr.'l ito,rt li:c r,rrc1tlj..1i;,r;i, li, -r ie ltls t hc l$o cqualions
472 Design ot Analog and Digttal Fllters Chapter 10

r* (94)- l'r,P,

r*l,o.o")*=,0
\ t'1. /
Solving for lV gives N = 1.918, so that we choose N : 2. With this value of ly, we can solve
for of the last two equations. If we ,se the first equation, we just meet the
to. from either
passband specifications, but more than meet the stop-band specifications, whereas if we
use the second equation, the rcverse is true. Assuming we use the first equation, we get
ro. = 0.7185 rads
The corresponding Butterworth filter is given by
o.5162
I{,(s) =
(s/o.)2+!21t1-,1 +t sz+1.016s+0.5162
with impulse response
h,(t) = 1.01 exp [-0.5081] sin0.508r z (r)
The impulse response of the digital filter obtained by sampling ft"() with I =I is
&(a) = 1.61 exp [-0.508r] sin0.50E n z(n)
By taking the corresponding Z-rransform and normalizing so that the magnitude at O =
Ois unity, we obtain
0.58542
H(z\ =
zz-l.Ostz+01162
Figure 10.4.1 shows a ptot of lH(o)l for () in the range [0, rrl2]. For rhis parricutar
example, the analog filter is sufficiently band limited, so that the effects of aliasing are not
noticeable. This is not true in general, however. one possibility in such a case is to choose
a higher value of N than is obtained from the specifications.

I,,(O) I

0.89t

ngure f0Af Magpitude function


of the Butterworth desigp of
O (rad) Example 10.4.2.
Sec. 10.4 Digiial Filters 473

10.4.2 IIR Desigrr Using the Bilinear Transforrnation


As slated earlier. digital-filter dcsign based on analoq fillers involvcs converting dis-
crete-domain specifications into the analog domain. The impulse -invariant design does
this by using the lransformation
ot = A/T
or equivalently.
z = exp [?ns]
We saw, however. that because of the nalure of this mapping, :rs discussed in Section
8.8, the impulse-invariant design leads to aliasing problems. ()nc approach to over-
coming aliasing is to use a lransformation that maps the Z-domain onto a domain thal
is similar to the.r domain, in that the unit circle in the e plane ntaps into the vertical
axis in the new domain, the interior of the unit circle maps onto the open left half
plane, and the exterior of the circle maps onto the open right half plane. We can then
treat this new plane as if it were the analog domain and use standard techniques for
obtaining the equivalent analog filier. The specific transformation that we use is

2l-zl (10.4.6)
' Tl*r-'
or equivalently,

t + (T/Z\s
' |- (T/z)s {1o.4.7)

where Tnis a parameter lhat can be chosen to be any convenient value. It can easily be
verified by setting Z = r-t exp[jO] that this transformation, which is referred to as the
bilinear transformation, does indeed satisfy the three requiremcnts lhat we mentioncd
earlier. We have

s=o+i_:?t#_ffi_L#
2 7-12
.' +i:2 2r sin 0
TI + r2 + 2r cos0 TI + r2 + 2r cos ()
For r < l, clearly, o > 0, and for r ) l, we have o < 0. For r = l..r is purely imag-
inary, with

sin O 2A
Il+coso = 7'2
(r) = tan

This relationship is plotted in Figure 10.4.2.


The procedure for obtaining the digital filter //(i) can be sunrnrarized as follows:
I . From the given digital-tilter spc-cifications. find thc correspon(lin!l analog-filter spec-
ifications by using the relation
474 Design ol Analog and Oigital Filters Chapter 10

Flgure 10,42 Relation between O


and ro under lhe bilinear
transformation.

a=
20 (10.4.8)
ltanl
where f can be chosen arbitrarily, e.9., T = 2.
2. Find the corresponding analog-filter function H.(s). Then find the equivalent dig-
ital filter as

H(z) = H,(s) l,=;l;:: (r0.4.e)

The following example illustrates the use of the bilinear transform ln digital filter design.

Exampte 10.4.3
We consider the problem of Example 10.4.2, but will now obtain a Butterworth design
using the bilinear iransform method. With f = 2, we determine the corresponding pass-
band and stop-band cutoff frequencies in the analog domain as

,,)p= $nY- = 0.3249

."=tanT=0.7265
To meet the specifications, we now set

1*/9.142\*:16-.,
\ro./
,*/o7265),N_1s_r
\ro"/
and solve for N to get N = 1.695. Choosing N= 2 and determining o. as before gives
(,,). = 0'4195
The corresponding analog filter is

at'l = 7;fl0re3,6* qn6


We can now obtain the digital filter H(r) with gain at O = 0 normalized to be unity as
Sec. 10.4 Oigital Fillers 475

{).1355(3+ l):
H(z) = H,,(s)i.-l. i= _r
- 2.1712 + 1.7 16
Figure 10.4.3 shorvs the magnitude characteristic of the digital liltcr for 0 in the range
lo,rnl.
I,,(O) I

Figure 10.4,-1 Magnitude


charactcristic rrf the filter [or
Exanrplc 10..1.3 using the bilinear
0.4n 0.f zr f,! (rad) method.

10.4.3 FIR Filter Design

In our earlier discussions, we noted that it is desirable that a filtcr have a linear phase
characteristic. Although an IIR digital filter does not in gencrirl havc a linear phase,
we can obtain such a characteristic with a FIR digital filter. In rhis section, we consider
a technique for the design of FIR digital filters.
We first establish that a FIR digital filter of length N has a lirrcrrr phase character-
istic, provided that its impulse response satisfies the symmetrv condition
h(n)=11111 -t-n) (10.4.10)

This can be easily verified by determining H(O). We consider thc case of N even and
N odd separately. For N even, we write
N- |

H(o): ) t(")cxp[-ion]
n =O

N12- |
: 2 n@lexp[-ioz] + ) a (n) e.rp [-lon ]
x=0
Now we replace n by N -n- 1 in the second term in the last equation and use Equa-
tion (10.4.10) to get
476 Design ot Analog and Digital Filters Chapte, 10

\N/71- | tNlzt - |
+> h(n)exp[-lo(N-1 -z)]
n=o

which can be written as

H(o) =
{r2"'*''*'[n(' ?)l)"*[-,"(?)]
Similarly, for N odd, we can show that

H(o) =
{, (?) .':i" zr,r ... [o(, - ?)] ]
*,
[-,"(?)]
In both these cases, the term in braces is real, so that the phase of H(O) is given by the
complex exponential. It follows that the system has a linear phase shift, with a corre-
-
sponding delay of (N l)/2 samples.
Given a desired frequency response H_d(Ω), such as an ideal low-pass characteristic, which is symmetric about the origin, the corresponding impulse response h_d(n) is symmetric about the point n = 0 but, in general, is of infinite duration. The most direct way of obtaining an equivalent FIR filter of length N is to just truncate this infinite sequence. The truncation operation, as in our earlier discussion of the DFT in Chapter 9, can be considered to result from multiplying the infinite sequence by a window sequence w(n). If h_d(n) is symmetric about n = 0, we get a linear-phase filter that is, however, noncausal. We can get a causal impulse response by shifting the truncated sequence to the right by (N − 1)/2 samples. The desired digital filter H(z) is then determined as the Z-transform of this truncated, shifted sequence. We summarize these steps as follows (a short numerical sketch follows the list):

1. From the desired frequency-response characteristic H_d(Ω), find the corresponding impulse response h_d(n).
2. Multiply h_d(n) by the window function w(n).
3. Find the impulse response of the digital filter as

      h(n) = h_d[n − (N − 1)/2] w(n)

   and determine H(z). Alternatively, we can find the Z-transform H'(z) of the sequence h_d(n)w(n) and find H(z) as

      H(z) = z^{−(N−1)/2} H'(z)
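The following short Python sketch (not in the original text) walks through these three steps for an arbitrary choice of cutoff and filter length, using the Hamming window of Equation (10.4.11d).

    # Sketch of steps 1-3: truncate the ideal impulse response, window it, and shift it.
    import numpy as np

    N = 21                      # filter length (assumed odd here)
    Omega_c = 0.2 * np.pi       # desired cutoff (arbitrary choice)
    M = (N - 1) // 2

    n = np.arange(-M, M + 1)
    hd = (Omega_c / np.pi) * np.sinc(Omega_c * n / np.pi)            # step 1: ideal low-pass h_d(n)
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(N) / (N - 1))     # Hamming window, Eq. (10.4.11d)
    h = hd * w                  # steps 2-3: window; indexing 0..N-1 gives the shifted, causal h(n)

    # Frequency response of the causal filter (linear phase, delay (N-1)/2 samples)
    Omega = np.linspace(0, np.pi, 512)
    H = np.array([np.sum(h * np.exp(-1j * Ok * np.arange(N))) for Ok in Omega])
    idx = np.argmin(np.abs(Omega - Omega_c))
    print(np.abs(H[0]), np.abs(H[idx]))   # roughly 1 at DC and about 0.5 near the cutoff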


As noted before, we encountered the truncation operation in step 2 in our earlier discussion of the DFT in Chapter 9. There it was pointed out that truncation causes the frequency response of the filter to be smeared.
In general, windows with wide main lobes cause more spreading than those with narrow main lobes. Figure 10.4.4 shows the effect of using the rectangular window on the ideal low-pass filter characteristic. As can be seen, the transition width of the resulting filter is approximately equal to the main-lobe width of the window function and is, hence, inversely proportional to the window length N. The choice of N, therefore, involves a compromise between transition width and filter length.
Figure 10.4.4  Frequency response obtained by using a rectangular window on the ideal filter response.

The following are some commonly used window functions.

Rectangular:

   w_R(n) = 1,   0 ≤ n ≤ N − 1
          = 0,   elsewhere                                                     (10.4.11a)

Bartlett:

   w_B(n) = 2n/(N − 1),        0 ≤ n ≤ (N − 1)/2
          = 2 − 2n/(N − 1),    (N − 1)/2 ≤ n ≤ N − 1
          = 0,                 elsewhere                                       (10.4.11b)

Hanning:

   w_Han(n) = ½[1 − cos(2πn/(N − 1))],   0 ≤ n ≤ N − 1
            = 0,                         elsewhere                             (10.4.11c)

Hamming:

   w_Ham(n) = 0.54 − 0.46 cos(2πn/(N − 1)),   0 ≤ n ≤ N − 1
            = 0,                              elsewhere                        (10.4.11d)

Blackman:

   w_Bl(n) = 0.42 − 0.5 cos(2πn/(N − 1)) + 0.08 cos(4πn/(N − 1)),   0 ≤ n ≤ N − 1
           = 0,                                                      elsewhere (10.4.11e)

Kaiser:

   w_K(n) = I₀{α[((N − 1)/2)² − (n − (N − 1)/2)²]^{1/2}} / I₀[α(N − 1)/2],   0 ≤ n ≤ N − 1
          = 0,                                                                elsewhere (10.4.11f)

where I₀(x) is the modified zero-order Bessel function of the first kind, given by I₀(x) = (1/2π) ∫₀^{2π} exp[x cos θ] dθ, and α is a parameter that affects the relative widths of the main and side lobes. When α is zero, we get the rectangular window, and for α = 5.414, we get the Hamming window. In general, as α becomes larger, the main lobe becomes wider and the side lobes smaller. Of the windows described previously, the most commonly used is the Hamming window, and the most versatile is the Kaiser window.
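As a quick numerical comparison (not part of the text), the sketch below generates several of these windows directly from their formulas; the length N = 51 and the Kaiser parameter are arbitrary choices, and NumPy's kaiser routine uses its own beta parametrization, which may differ from the α used above.

    # Generate window sequences from Eqs. (10.4.11a)-(10.4.11e) and compare a simple figure of merit.
    import numpy as np

    N = 51
    n = np.arange(N)
    rect     = np.ones(N)
    bartlett = 1 - np.abs(2 * n / (N - 1) - 1)
    hanning  = 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))
    hamming  = 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))
    blackman = 0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1)) + 0.08 * np.cos(4 * np.pi * n / (N - 1))
    kaiser   = np.kaiser(N, 5.4414)   # NumPy's beta parametrization; roughly Hamming-like side lobes

    for name, w in [("rectangular", rect), ("hamming", hamming),
                    ("blackman", blackman), ("kaiser", kaiser)]:
        enbw = N * np.sum(w**2) / np.sum(w)**2   # equivalent noise bandwidth (bins); grows as side lobes shrink
        print(f"{name:12s} ENBW = {enbw:.3f} bins")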

Erernple 1O.4.4

I-et us consider the design of a nine-point FIR digital filter to approximate an ideal low-pass
digital filter with a cutoff frequency A, = O.2n. The impulse response of the desired filter is

h,,(o) = f'z expllan'ldo = Ino'2'1


^! J _.2a
zn nn
For a rectangular window of length 9, lhc corresponding inrpulse rcsponse is obtained by
evaluating /r.,(n) for -4 - n s 4. We obtain

to.t47 0.117 0.475 0.588 0.588


- 0.475 0.317 0.147
--, - l 1
h,tln)=\
-Lzr1Tn,Ifft?[nfrl -,1,
1

The filter function is

u'171 =0'147
n?lTSn
-'* 0'1!Z
,, *9'47s.., a 9'588. * I

+ 0.588
2., +
0.475 . 0.317 0.147
z-, + --_ z-, + --_- z-c
Tr7tIt?I
so that

H(z) = 7-es'P1 =
o't'1?
'i
1r * .-rl * o*' e-t + 2-t.1

t o.475
(z-t + z-") +
O't*,r-, +7-s1 +r-a
7t It
For N = 9. the Hamming window defined in Equation (l0.4.lld) is given by the sequence

ta(n) = [0.081, 0.215, 0.541,0.865, l, 0.865, 0.541, 0.215, 0.0811


t
Hence, we have

   h_d(n)w(n) = {0.012/π, 0.068/π, 0.257/π, 0.508/π, 1/5, 0.508/π, 0.257/π, 0.068/π, 0.012/π}

The filter function is

   H'(z) = (0.012/π)z⁴ + (0.068/π)z³ + (0.257/π)z² + (0.508/π)z + 1/5
           + (0.508/π)z⁻¹ + (0.257/π)z⁻² + (0.068/π)z⁻³ + (0.012/π)z⁻⁴

Finally,

   H(z) = z⁻⁴H'(z) = (0.012/π)(1 + z⁻⁸) + (0.068/π)(z⁻¹ + z⁻⁷) + (0.257/π)(z⁻² + z⁻⁶)
                     + (0.508/π)(z⁻³ + z⁻⁵) + (1/5)z⁻⁴

The frequency responses of the filters obtained using both the rectangular and Hamming windows are shown in Figure 10.4.5, with the gain at Ω = 0 normalized to unity. As can be seen from the figure, the response corresponding to the Hamming window is smoother than the one for the rectangular window.

Figure 10.4.5  Response of the FIR digital filter of Example 10.4.4. (a) Rectangular window. (b) Hamming window.
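The numbers quoted in this example are easy to verify; the following short Python check (not in the original text) reproduces the truncated and windowed coefficients.

    # Check of Example 10.4.4: 9-point windowed-sinc FIR low-pass, cutoff 0.2*pi.
    import numpy as np

    N = 9
    n = np.arange(-4, 5)
    hd = 0.2 * np.sinc(0.2 * n)                                     # ideal h_d(n), -4 <= n <= 4
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(N) / (N - 1))    # Hamming window of length 9
    print(np.round(hd, 4))       # 0.0468 0.1009 0.1514 0.1871 0.2 ... = 0.147/pi, 0.317/pi, 0.475/pi, 0.588/pi, 1/5, ...
    print(np.round(w, 3))        # 0.08 0.215 0.54 0.865 1.0 ... (the sequence quoted in the text)
    print(np.round(hd * w, 4))   # windowed coefficients; the causal h(n) is this sequence shifted right by 4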
Example 10.4.5

FIR digital filters can be used to approximate filters such as the ideal differentiator or the Hilbert transformer, which cannot be implemented in the analog domain. In the analog domain, the ideal differentiator is described by the frequency response

   H(ω) = jω

while the Hilbert transformer is described by the frequency response

   H(ω) = −j sgn(ω)

To design a discrete-time implementation of such filters, we start by specifying the desired response in the frequency domain as

   H_d(ω) = H(ω),   −ω_s/2 ≤ ω ≤ ω_s/2

where ω_s = 2π/T for some choice of T. Equivalently,

   H_d(Ω) = H(Ω/T),   −π ≤ Ω ≤ π

where Ω = ωT. Since H_d(Ω) is periodic in Ω with period 2π, we can expand it in a Fourier series as

   H_d(Ω) = Σ_n h_d(n) e^{−jΩn}

where the coefficients h_d(n) are the corresponding impulse-response samples, given by

   h_d(n) = (1/2π) ∫_{−π}^{π} H_d(Ω) e^{jΩn} dΩ

As we have seen earlier, if the desired frequency function H_d(Ω) is purely real, the impulse response is even and symmetric; that is, h_d(n) = h_d(−n). On the other hand, if the frequency response is purely imaginary, the impulse response is odd and symmetric, so that h_d(n) = −h_d(−n).

We can now design a FIR digital filter by following the procedure given earlier. We will illustrate this for the case of the Hilbert transformer. This transformer is used to generate signals that are in phase quadrature to an input sinusoidal signal (or, more generally, an input narrow-band waveform). That is, if the input to a Hilbert transformer is the signal x(t) = cos ω₀t, the output is y(t) = sin ω₀t. The Hilbert transformer is used in communication systems in various modulation schemes.

The impulse response for the Hilbert transformer is obtained as

   h_d(n) = (1/2π) ∫_{−π}^{π} −j sgn(Ω) e^{jΩn} dΩ
          = 0,        n even
          = 2/(nπ),   n odd

For a rectangular window of length 15, we obtain

   h_d(n) = {−2/(7π), 0, −2/(5π), 0, −2/(3π), 0, −2/π, 0, 2/π, 0, 2/(3π), 0, 2/(5π), 0, 2/(7π)},   −7 ≤ n ≤ 7

which can be realized with a delay of seven samples by the transfer function

   H(z) = (2/π)[−(1/7) − (1/5)z⁻² − (1/3)z⁻⁴ − z⁻⁶ + z⁻⁸ + (1/3)z⁻¹⁰ + (1/5)z⁻¹² + (1/7)z⁻¹⁴]

The frequency response H(Ω) of this filter is shown in Figure 10.4.6. As can be seen, the response exhibits considerable ripple. As discussed previously, the ripples can be reduced by using window functions other than the rectangular. Also shown in the figure is the response corresponding to the Hamming window.

Figure 10.4.6  Frequency response of the Hilbert transformer.
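A sketch of the same construction in Python follows (not in the original text); it forms the length-15 rectangular-windowed Hilbert transformer and, for comparison, a Hamming-windowed version, and checks that the midband magnitude is close to the ideal value of 1.

    # 15-tap FIR Hilbert transformer: h_d(n) = 2/(n*pi) for n odd, 0 for n even, truncated to |n| <= 7.
    import numpy as np

    M = 7
    n = np.arange(-M, M + 1)
    hd = np.where(n % 2 != 0, 2.0 / (np.pi * np.where(n == 0, 1, n)), 0.0)
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(2 * M + 1) / (2 * M))   # Hamming window

    for name, h in [("rectangular", hd), ("hamming", hd * w)]:
        Omega = np.linspace(0.05 * np.pi, 0.95 * np.pi, 10)
        H = np.array([np.sum(h * np.exp(-1j * Ok * n)) for Ok in Omega])
        # The ideal Hilbert transformer has H(Omega) = -j for 0 < Omega < pi, i.e., |H| = 1.
        print(name, np.round(np.abs(H), 3))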

10.4.4 Computer-Aided Design of Digital Filters

In recent years, the use of computer-aided techniques for the design of digital filters has become widespread, and several software packages are available for such design. Techniques have been developed for both FIR and IIR filters that, in general, involve the minimization of a suitably chosen cost function. Given a desired frequency-response characteristic H_d(Ω), a filter of either the FIR or IIR type and of fixed order is selected. We express the frequency response of this filter, H(Ω), in terms of the vector a of filter coefficients. The difference between the two responses, which represents the deviation from the desired response, is a function of a. We associate a cost function with this difference and seek the set of filter coefficients a that minimizes this cost function. A typical cost function is of the form

   J(a) = ∫_{−π}^{π} W(Ω) |H_d(Ω) − H(Ω)|² dΩ                                  (10.4.12)

where W(Ω) is a nonnegative weighting function that reflects the significance attached to the deviation from the desired response in a particular range of frequencies. W(Ω) is chosen to be relatively large over that range of frequencies considered to be important.
Quite often, instead of minimizing the deviation at all frequencies, as in Equation (10.4.12), we can choose to do so only at a finite number of frequencies. The cost function then becomes

   J(a) = Σ_{i=1}^{M} W(Ω_i) |H_d(Ω_i) − H(Ω_i)|²                              (10.4.13)

where Ω_i, 1 ≤ i ≤ M, are a set of frequency samples over the range of interest. Typically, the minimization problem is quite complex, and the resulting equations cannot be solved analytically. An iterative search procedure is usually employed to determine the optimum set of filter coefficients. We start with an arbitrary initial choice for the filter coefficients and successively adjust them such that the resulting cost function is reduced at each step. The procedure stops when a further adjustment of the coefficients does not result in a reduction in the cost function. Several standard algorithms and software packages are available for determining the optimum filter coefficients.
A popular technique for the design of FIR filters is based on the fact that the frequency response of a linear-phase FIR filter can be expressed as a trigonometric polynomial similar to the Chebyshev polynomial. The filter coefficients are chosen to minimize the maximum deviation from the desired response. Again, computer programs are available to determine the optimum filter coefficients.
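Standard packages implement this minimax (equiripple) idea; for instance, SciPy's remez routine uses the Parks-McClellan algorithm. The sketch below (not part of the text, with arbitrarily chosen band edges and length) designs a linear-phase low-pass filter this way and reports the worst-case deviations.

    # Equiripple (minimax) FIR design via the Parks-McClellan algorithm (assumes SciPy).
    import numpy as np
    from scipy import signal

    numtaps = 31
    # Pass band 0-0.1 and stop band 0.2-0.5 in cycles/sample (edges 0.2*pi and 0.4*pi rad).
    h = signal.remez(numtaps, bands=[0.0, 0.1, 0.2, 0.5], desired=[1.0, 0.0], fs=1.0)

    w, H = signal.freqz(h, worN=1024)
    print("max passband deviation:", np.max(np.abs(np.abs(H[w <= 0.2 * np.pi]) - 1)))
    print("max stopband magnitude:", np.max(np.abs(H[w >= 0.4 * np.pi])))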

10.5 SUMMARY

• Frequency-selective filters are classified as low-pass, high-pass, band-pass, or band-stop filters.
• The passband of a filter is the range of frequencies that are passed without attenuation. The stop band is the range of frequencies that are completely attenuated.
• Filter specifications usually specify the permissible deviation from the ideal characteristic in both passband and stop band, as well as specifying a transition band between the two.
• Filter design consists of obtaining an analytical approximation to the desired filter characteristic in the form of a filter transfer function, given as H(s) for analog filters and H(z) for digital filters.
• Frequency transformations can be used for converting one type of filter to another.
• Two popular low-pass analog filters are the Butterworth and Chebyshev filters. The Butterworth filter has a monotonically decreasing characteristic that goes to zero smoothly. The Chebyshev filter has a ripple in the passband, but is monotonically decreasing in the transition and stop bands.
• A given set of specifications can be met by a Chebyshev filter of lower order than a Butterworth filter.
• The poles of the Butterworth filter are spaced uniformly around the unit circle in the s plane. The poles of the Chebyshev filter are located on an ellipse in the s plane and can be obtained geometrically from the Butterworth poles.
• Digital filters can be either IIR or FIR.
• Digital IIR filters can be obtained from equivalent analog designs by using either the impulse-invariant technique or the bilinear transformation.
• Digital filters designed using impulse invariance exhibit distortion due to aliasing. No aliasing distortion arises from the use of the bilinear transformation method.
• Digital FIR filters are often chosen to have a linear phase characteristic. One method of obtaining an FIR filter is to determine the impulse response h_d(n) corresponding to the desired filter characteristic H_d(Ω) and to truncate the resulting sequence by multiplying it by an appropriate window function.
• For a given filter length, the transition band depends on the window function.

10.6 CHECKLIST OF IMPORTANT TERMS

Aliasing error                     Frequency-selective filter
Analog filter                      Frequency transformations
Band-pass filter                   High-pass filter
Band-stop filter                   Impulse invariance
Bilinear transformation            IIR filter
Butterworth filter                 Linear phase characteristic
Chebyshev filter                   Low-pass filter
Digital filter                     Passband
Filter specifications              Transition band
FIR filter                         Window function

10.7 PROBLEMS

10.1. Design an analog low-pass Butterworth filter to meet the following specifications: the attenuation is to be less than 1.5 dB up to 1 kHz and to be at least 15 dB for frequencies greater than 4 kHz.
10.2. Use the frequency transformations of Section 10.2 to obtain an analog Butterworth filter with an attenuation of less than 1.5 dB for frequencies up to 3 kHz, from your design in Problem 10.1.
10.3. Design a Butterworth band-pass filter to meet the following specifications:
      ω_c1 = lower cutoff frequency = 200 Hz
      ω_c2 = upper cutoff frequency = 300 Hz
      The attenuation in the passband is to be less than 1 dB. The attenuation in the stop band is to be at least 10 dB.
10.4. A Chebyshev low-pass filter is to be designed to have a passband ripple ≤ 2 dB and a cutoff frequency of 1500 Hz. The attenuation for frequencies greater than 5000 Hz must be at least 20 dB. Find ε, N, and H(s).
10.5. Consider the third-order Butterworth and Chebyshev filters with the 3-dB cutoff frequency normalized to 1 in both cases. Compare and comment on the corresponding characteristics in both passbands and stop bands.
10.6. In Problem 10.5, what order of Butterworth filter compares to the Chebyshev filter of order 3?
10.7. Design a Chebyshev filter to meet the specifications of Problem 10.1. Compare the frequency response of the resulting filter to that of the Butterworth filter of Problem 10.1.
10.8. Obtain the digital equivalent of the low-pass filter of Problem 10.1 using the impulse-invariant method. Assume a sampling frequency of (a) 6 kHz, (b) 10 kHz.
10.9. Plot the frequency responses of the digital filters of Problem 10.8. Comment on your results.
10.10. The bilinear transform technique enables us to design IIR digital filters using standard analog designs. However, if we want to replace an analog filter by an equivalent A/D-digital filter-D/A combination, we have to prewarp the given cutoff frequencies before designing the analog filter. Thus, if we want to replace an analog Butterworth filter by a digital filter, we first design the analog filter by replacing the passband and stop-band cutoff frequencies, ω_p and ω_s, respectively, by

      ω_p′ = (2/T) tan(ω_p T/2)

      ω_s′ = (2/T) tan(ω_s T/2)

      The equivalent digital filter is then obtained from the analog design by using Equation (10.4.9). Use this method to obtain a digital filter to replace the analog filter in Problem 10.1. Assume that the sampling frequency is 3 kHz.
10.11. Repeat Problem 10.10 for the band-pass filter of Problem 10.3.
10.12. (a) Show that the frequency response H(Ω) of a filter is (i) purely real if the impulse response h(n) is even and symmetric (i.e., h(n) = h(−n)) and (ii) purely imaginary if h(n) is odd and symmetric (i.e., h(n) = −h(−n)).
       (b) Use your result in Part (a) to determine the phase of an N-point FIR filter if (i) h(n) = h(N − 1 − n) and (ii) h(n) = −h(N − 1 − n).
10.13. (a) The ideal differentiator has frequency response

              H_d(Ω) = jΩ,   0 ≤ |Ω| ≤ π

           Show that the Fourier-series coefficients for H_d(Ω) are

              h_d(n) = (−1)ⁿ/n,   n ≠ 0

       (b) Hence, design a 10-point differentiator using both rectangular and Hanning windows.
10.14. (a) Design an 11-tap FIR filter (N = 12) to approximate the ideal low-pass characteristic with cutoff π/6 radians.
       (b) Plot the frequency response of the filter you designed in Part (a).
       (c) Use the Hanning window to modify the results of Part (a). Plot the frequency response of the resulting filter, and comment on it.
Appendix A

Complex Numbers

Many engineering problems can be treated and solved by methods of complex analysis. Roughly speaking, these problems can be subdivided into two large classes. The first class consists of elementary problems for which the knowledge of complex numbers and calculus is sufficient. Applications of this class of problems are in differential equations, electric circuits, and the analysis of signals and systems. The second class of problems requires detailed knowledge of the theory of complex analytic functions. Problems in areas such as electrostatics, electromagnetics, and heat transfer belong to this category.
In this appendix, we concern ourselves with problems of the first class. Problems of the second class are beyond the scope of the text.

A.1 DEFINITION

A complex number z = x + jy, where j = √−1, consists of two parts, a real part x and an imaginary part y.¹ This form of representation for complex numbers is called the rectangular or Cartesian form, since z can be represented in rectangular coordinates by the point (x, y), as shown in Figure A.1.
The horizontal x axis is called the real axis, and the vertical y axis is called the imaginary axis. The x-y plane in which the complex numbers are represented in this way is called the complex plane. Two complex numbers are equal if their real parts are equal and their imaginary parts are equal.
The complex number z can also be written in polar form. The polar coordinates r and θ are related to the Cartesian coordinates x and y by

¹Mathematicians use i to represent √−1, but electrical engineers use j for this purpose because i is usually used to represent current in electric circuits.
Figure A.1  The complex number z = x + jy in the complex plane (real axis horizontal, imaginary axis vertical).

   x = r cos θ   and   y = r sin θ                                             (A.1)

Hence, a complex number z = x + jy can be written as

   z = r cos θ + jr sin θ                                                      (A.2)

This is known as the polar form, or trigonometric form, of a complex number. By using Euler's identity,

   exp[jθ] = cos θ + j sin θ

we can express the complex number z in Equation (A.2) in exponential form as

   z = r exp[jθ]                                                               (A.3)

where r, the magnitude of z, is denoted by |z|. From Figure A.1,

   |z| = r = (x² + y²)^{1/2}                                                   (A.4)

   θ = arctan(y/x) = arcsin(y/r) = arccos(x/r)                                 (A.5)

The angle θ is called the argument of z, denoted arg z, and is measured in radians. The argument is defined only for nonzero complex numbers and is determined only up to integer multiples of 2π. The value of θ that lies in the interval

   −π < θ ≤ π

is called the principal value of the argument of z. Geometrically, |z| is the length of the vector from the origin to the point z in the complex plane, and θ is the directed angle from the positive x axis to z.

Example A.1

For the complex number z = 1 + j√3,

   r = [1² + (√3)²]^{1/2} = 2   and   arg z = arctan √3 = π/3 + 2nπ

The principal value of arg z is π/3, and therefore,

   z = 2(cos π/3 + j sin π/3)
The complex conjugate of z is defined as

   z* = x − jy                                                                 (A.6)

Since

   z + z* = 2x   and   z − z* = 2jy                                            (A.7)

it follows that

   Re{z} = x = ½(z + z*)   and   Im{z} = y = (1/2j)(z − z*)                    (A.8)

Note that if z = z*, then the number is real, and if z = −z*, then the number is purely imaginary.

A.2 ARITHMETIC OPERATIONS

A.2.1 Addition and Subtraction

The sum and difference of two complex numbers are respectively defined by

   z₁ + z₂ = (x₁ + x₂) + j(y₁ + y₂)                                            (A.9)

and

   z₁ − z₂ = (x₁ − x₂) + j(y₁ − y₂)                                            (A.10)

These are demonstrated geometrically in Figure A.2 and can be interpreted in accordance with the "parallelogram law" by which forces are added in mechanics.

Figure A.2  Addition and subtraction of complex numbers.

A.2.2 Multiplication

The product z₁z₂ is given by

   z₁z₂ = (x₁ + jy₁)(x₂ + jy₂) = (x₁x₂ − y₁y₂) + j(x₁y₂ + x₂y₁)                (A.11)

In polar form, this becomes

   z₁z₂ = r₁ exp[jθ₁] r₂ exp[jθ₂] = r₁r₂ exp[j(θ₁ + θ₂)]                       (A.12)

That is, the magnitude of the product of two complex numbers is the product of the magnitudes of the two numbers, and the angle of the product is the sum of the two angles.

A.2.3 Division

Division is defined as the inverse of multiplication. The quotient z₁/z₂ is obtained by multiplying both the numerator and denominator by the conjugate of z₂:

   z₁/z₂ = (x₁ + jy₁)/(x₂ + jy₂)
         = (x₁ + jy₁)(x₂ − jy₂)/(x₂² + y₂²)
         = (x₁x₂ + y₁y₂)/(x₂² + y₂²) + j(x₂y₁ − x₁y₂)/(x₂² + y₂²)              (A.13)

Division is performed easily in polar form as follows:

   z₁/z₂ = r₁ exp[jθ₁]/(r₂ exp[jθ₂]) = (r₁/r₂) exp[j(θ₁ − θ₂)]                 (A.14)

That is, the magnitude of the quotient is the quotient of the magnitudes, and the angle of the quotient is the difference of the angle of the numerator and the angle of the denominator.
For any complex numbers z₁, z₂, and z₃, we have the following:

• Commutative laws:

   z₁ + z₂ = z₂ + z₁
   z₁z₂ = z₂z₁                                                                 (A.15)

• Associative laws:

   (z₁ + z₂) + z₃ = z₁ + (z₂ + z₃)
   z₁(z₂z₃) = (z₁z₂)z₃                                                         (A.16)

• Distributive law:

   z₁(z₂ + z₃) = z₁z₂ + z₁z₃                                                   (A.17)


A.3 POWERS AND ROOTS OF COMPLEX NUMBERS

The nth power of the complex number

   z = r exp[jθ]

is

   zⁿ = rⁿ exp[jnθ] = rⁿ(cos nθ + j sin nθ)

from which we obtain the so-called formula of De Moivre:²

   (cos θ + j sin θ)ⁿ = cos nθ + j sin nθ                                      (A.18)

For example,

   (1 + j1)⁵ = (√2 exp[jπ/4])⁵ = 4√2 exp[j5π/4] = −4 − j4

The nth root of a complex number z is the number w such that wⁿ = z. Thus, to find the nth root of z, we must solve the equation

   wⁿ − |z| exp[jθ] = 0

which is of degree n and, therefore, has n roots. These roots are given by

   w₁ = |z|^{1/n} exp[jθ/n]
   w₂ = |z|^{1/n} exp[j(θ + 2π)/n]
   w₃ = |z|^{1/n} exp[j(θ + 4π)/n]
   ⋮
   wₙ = |z|^{1/n} exp[j(θ + 2(n − 1)π)/n]                                      (A.19)

For example, the five roots of 32 exp[jπ] are

   w₁ = 2 exp[jπ/5],   w₂ = 2 exp[j3π/5],   w₃ = 2 exp[jπ],
   w₄ = 2 exp[j7π/5],  w₅ = 2 exp[j9π/5]

Notice that the roots of a complex number lie on a circle in the complex-number plane. The radius of the circle is |z|^{1/n}. The roots are uniformly distributed around the circle, and the angle between adjacent roots is 2π/n radians. The five roots of 32 exp[jπ] are shown in Figure A.3.

²Abraham De Moivre (1667-1754) was a French mathematician who introduced imaginary quantities in trigonometry and contributed to the theory of mathematical probability.
Figure A.3  Roots of 32 exp[jπ]: five roots equally spaced on a circle of radius 2.
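The root formula of Equation (A.19) is easy to evaluate numerically; the short Python sketch below (not part of the appendix) lists the five fifth roots of 32 exp[jπ] and verifies each one.

    # Fifth roots of z = 32*exp(j*pi) using Eq. (A.19): w_k = |z|^(1/n) * exp(j*(theta + 2*k*pi)/n).
    import cmath

    z = 32 * cmath.exp(1j * cmath.pi)
    n = 5
    r, theta = abs(z), cmath.phase(z)
    for k in range(n):
        w = r ** (1 / n) * cmath.exp(1j * (theta + 2 * k * cmath.pi) / n)
        print(k, complex(round(w.real, 4), round(w.imag, 4)),
              "check w^5 =", complex(round((w ** n).real, 3), round((w ** n).imag, 3)))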

A.4 INEQUALITIES

For complex numbers, we observe the important triangle inequality,

   |z₁ + z₂| ≤ |z₁| + |z₂|                                                     (A.20)

That is, the magnitude of the sum of two complex numbers is at most equal to the sum of the magnitudes of the numbers. This inequality follows by noting that the points 0, z₁, and z₁ + z₂ are the vertices of the triangle shown in Figure A.4, with sides |z₁|, |z₂|, and |z₁ + z₂|, and the fact that one side of a triangle cannot exceed the sum of the other two sides. Other useful inequalities are

   |Re{z}| ≤ |z|   and   |Im{z}| ≤ |z|                                         (A.21)

These follow by noting that for any z = x + jy, we have

   |z| = (x² + y²)^{1/2} ≥ |x|

and, similarly,

   |z| ≥ |y|

Figure A.4  Triangle inequality.


Appendix B

Mathematical Relations

Some of the mathematical relations encountered in electrical engineering are listed in this section for convenient reference. However, this appendix is not intended as a substitute for more comprehensive handbooks.

B.1 TRIGONOMETRIC IDENTITIES

   exp[±jθ] = cos θ ± j sin θ
   cos θ = ½(exp[jθ] + exp[−jθ]) = sin(θ + π/2)
   sin θ = (1/2j)(exp[jθ] − exp[−jθ]) = cos(θ − π/2)
   sin²θ + cos²θ = 1
   cos²θ − sin²θ = cos 2θ
   cos²θ = ½(1 + cos 2θ)
   sin²θ = ½(1 − cos 2θ)
   cos³θ = ¼(3 cos θ + cos 3θ)
   sin³θ = ¼(3 sin θ − sin 3θ)
   sin(α ± β) = sin α cos β ± sin β cos α
   cos(α ± β) = cos α cos β ∓ sin α sin β
   tan(α ± β) = (tan α ± tan β)/(1 ∓ tan α tan β)
   sin α sin β = ½[cos(α − β) − cos(α + β)]
   cos α cos β = ½[cos(α − β) + cos(α + β)]
   sin α cos β = ½[sin(α − β) + sin(α + β)]
   sin α + sin β = 2 sin((α + β)/2) cos((α − β)/2)
   cos α + cos β = 2 cos((α + β)/2) cos((α − β)/2)
   cos α − cos β = −2 sin((α + β)/2) sin((α − β)/2)
   A cos α + B sin α = (A² + B²)^{1/2} cos(α − arctan(B/A))
   sinh α = ½(exp[α] − exp[−α])
   cosh α = ½(exp[α] + exp[−α])
   tanh α = sinh α / cosh α
   cosh²α − sinh²α = 1
   cosh α + sinh α = exp[α]
   cosh α − sinh α = exp[−α]
   sinh(α ± β) = sinh α cosh β ± cosh α sinh β
   cosh(α ± β) = cosh α cosh β ± sinh α sinh β
   tanh(α ± β) = (tanh α ± tanh β)/(1 ± tanh α tanh β)
   sinh²α = ½(cosh 2α − 1)
   cosh²α = ½(cosh 2α + 1)

B.2 EXPONENTIAL AND LOGARITHMIC FUNCTIONS

   exp[α] exp[β] = exp[α + β]
   exp[α]/exp[β] = exp[α − β]
   (exp[α])^β = exp[αβ]
   ln αβ = ln α + ln β
   ln(α/β) = ln α − ln β
   ln α^β = β ln α

ln α is the inverse of exp[α]; that is,

   exp[ln α] = α   and   exp[−ln α] = exp[ln(1/α)] = 1/α

   log α = M ln α,   M = log e ≈ 0.4343
   ln α = (1/M) log α,   1/M ≈ 2.3026

log α is the inverse of 10^α; that is,

   10^{log α} = α   and   10^{−log α} = 1/α

Figure B.1  Natural logarithmic and exponential functions.

B.3 SPECIAL FUNCTIONS

B.3.1 Gamma Functions

   Γ(α) = ∫₀^∞ t^{α−1} exp[−t] dt

   Γ(α + 1) = αΓ(α)
   Γ(k + 1) = k!,   k = 0, 1, 2, ...
   Γ(½) = √π

B.3.2 Incomplete Gamma Functions

   γ(α, β) = ∫₀^β t^{α−1} exp[−t] dt

   Γ(α, β) = ∫_β^∞ t^{α−1} exp[−t] dt

   Γ(α) = γ(α, β) + Γ(α, β)

B.3.3 Beta Functions

   β(μ, ν) = ∫₀¹ t^{μ−1}(1 − t)^{ν−1} dt,   μ > 0, ν > 0

   β(μ, ν) = Γ(μ)Γ(ν)/Γ(μ + ν)

B.4 POWER-SERIES EXPANSIONS

   (1 + x)ⁿ = 1 + nx + n(n − 1)x²/2! + ... + (n choose k)x^k + ... + xⁿ

   exp[x] = 1 + x + x²/2! + ... + xⁿ/n! + ...

   ln(1 + x) = x − x²/2 + x³/3 − ... + (−1)^{n+1} xⁿ/n + ...,   |x| < 1

   sin x = x − x³/3! + x⁵/5! − ... + (−1)ⁿ x^{2n+1}/(2n + 1)! + ...

   cos x = 1 − x²/2! + x⁴/4! − ... + (−1)ⁿ x^{2n}/(2n)! + ...

   tan x = x + x³/3 + 2x⁵/15 + ...,   |x| < π/2

   a^x = 1 + x ln a + (x ln a)²/2! + ...

   sinh x = x + x³/3! + x⁵/5! + ... + x^{2n+1}/(2n + 1)! + ...

   cosh x = 1 + x²/2! + x⁴/4! + ... + x^{2n}/(2n)! + ...

   (1 + x)^α = 1 + αx + α(α − 1)x²/2! + α(α − 1)(α − 2)x³/3! + ...,   |x| < 1

where α is negative or a fraction.

   (1 + x)⁻¹ = 1 − x + x² − x³ + ... = Σ_{k=0}^{∞} (−1)^k x^k,   |x| < 1

   (1 + x)^{1/2} = 1 + x/2 − x²/8 + x³/16 − 5x⁴/128 + ...,   |x| < 1
B.5 SUMS OF POWERS OF NATURAL NUMBERS

   Σ_{k=1}^{N} k = N(N + 1)/2

   Σ_{k=1}^{N} k² = N(N + 1)(2N + 1)/6

   Σ_{k=1}^{N} k³ = N²(N + 1)²/4

   Σ_{k=1}^{N} k⁴ = N(N + 1)(2N + 1)(3N² + 3N − 1)/30

   Σ_{k=1}^{N} k⁵ = N²(N + 1)²(2N² + 2N − 1)/12

   Σ_{k=1}^{N} k⁶ = N(N + 1)(2N + 1)(3N⁴ + 6N³ − 3N + 1)/42

   Σ_{k=1}^{N} k⁷ = N²(N + 1)²(3N⁴ + 6N³ − N² − 4N + 2)/24

   Σ_{k=1}^{N} (2k − 1) = N²

   Σ_{k=1}^{N} (2k − 1)² = N(4N² − 1)/3

   Σ_{k=1}^{N} (2k − 1)³ = N²(2N² − 1)

   Σ_{k=1}^{N} k(k + 1)² = N(N + 1)(N + 2)(3N + 5)/12

   Σ_{k=1}^{N} k(k!) = (N + 1)! − 1

S.S.f Sums of Binomial Coefficlents

e(';-) =r.,:i')
(;) . (;). :2n-'
'.
(l) .(;).(!) . =2n-'

* ,,(il = zw-'\t(n +2)


$,,0
5 t-rl-(flo,= (-l)NN! N=o

e(t'= ("il)
B.5.2 Series of Exponentials

   Σ_{n=0}^{N−1} exp[j2πkn/N] = 0,   1 ≤ k ≤ N − 1
                              = N,   k = 0, N

   Σ_{n=0}^{∞} αⁿ = 1/(1 − α),   |α| < 1

   Σ_{n=0}^{∞} nαⁿ = α/(1 − α)²,   |α| < 1

   Σ_{n=0}^{∞} n²αⁿ = α(1 + α)/(1 − α)³,   |α| < 1
8.6 DEFINITE INTEGRALS

exp [- o-r2]dr :

.['r".*p[-.,r14. =.,11! n r(]


r2 exp[-o.r'1d, = ,ln
/ ot
,o
exp[-cxl cospxdr = c>0
J p, i or,
r'u
exo[-".v; sinP.r dr
lo = p, I or. u - u
f exp[-ox'?]cosprdr = .,';"*[-(r*)']
z1
f'sinccrd-r n
J. -; =
, sgno

f 'l - cosc-r - orr


J,, ,'-.o': z' o>o
'1 - cosox
f
i, ,(r-B) a'=nsincB
g" o>O', Brea[, b+0
r9!9iar=rnl. a>0,
/-cosS:Y- B>o
s_stlE=frn*I dr : ao rn c> o. >0
/- fi, B

cosB.r.L1 (F ,,)r.
['coso'r = rt . lr
h .--,
,\'' ;_r

|r- --__
adx _ rt
)n 12 +.r2- 2

1" sin(2n - I ),r


|Jo ---':- - -d.t = rr
stn.r

r" sin-.---"
2ru
| d.r:0
)o sln.r

1"/2 sin (2n - l).r ,


|Jo ----sin.r ax = n2.-
f"/2 sin2rr.r I +I (- I )', '
IJo slnr rLr=I- -...+
3 5 2n'-l
f" cos (2n + 1).r
|Ju 'cos.r d.r = (- l)"rr

1oi2 sin 2ar cosr . =


ot
T
J, tin' ,
r2o
(l - cosr)" sin n'r d'r : o
J,

I
fza
,, - cos.r)'cosnrdr = (-D'#
n m
cosrnr
I ro sinnx

'r'*
* = [o'
U,' I3I:. :::"y:*
=

R7 INDEFINITE INTEGRALS

["a"=uu-[ra,
r7 n*-r
)r"*=;|1r'*t+C,
/ exptxl dx = explxl
+ C

I
t explaxl a, = \ (ar - rlexp [ax] + C

I
x" explaxl * - lr" exp [ar] - i I *-, exp [ax] dr
ff=rnl.rl +c
llurar=xtnlrl -x+C

JI x, hx dx: -jn*t
(rilf-[(z + 1)lnl'rl-t]+ c
dx =tnlrn.rl + c
I#
I cosrd, = sinx + C
J

[;nxat=-cosx+c

Is#rdx=tanx+C
oira* = -cot.r + C
I
tanxax: +
t ln lsecrl C

cotxtlx = In lsin.rl + C
I
+ tan.rl + C
"ecra, = lsecr
ln
|
cscr a, : ln lcscr - cotrl + C
/
lsecxtanxdr=secr*C

/ o"r.o,, dr
: -cscr * C

I sn'zx dx: l' - i sin2-r + C


dx: ], * ]sin2r + C
I coszx

.fln'ra, = tanr -.r + C


I
Ico*xdx=-cotr -xt-C
:
I s#xdx -!tz+ sin2,r) cos.r *c

I cos3, ax = lQ * cos2.r) sinx * C

: -lrlnn-'r.*, * L1 sin,-2.rd-r
sin'rdr'
/ /
l-.or,,-'rsin.r I L
cos', ax: + -
I I cor,-r, d,
I xsinx dx = sinx - .r cosr J. C

,.o... r/.r = cos.r * .r sin.r + C


J'

I
J
r" sin., r/.v = -.r" cos.r + ,r J .r" cos.r,,Lr

/ ,' .os,r dx = x" sinr - n / -r"-r sinx dr

J sirrh t,1.: :o:li.t -c

costr.r d.r = sinhr + C


/
rrnt , r/.r = ln cosh.r -t- C
/
cott , dx: lnltanh.rl + c
/
tan-rlsinhrl + C
/sectr.rdr=
I .:.,;i,.'
j . .. ;,. ,... .,...

sechs.rd.r = tanh.r + C
/
csch'?r r/.r = -cothx + C
/
:
/ ,e.t, tanh.r r/.r -sech.r + C.

o"t , cothx dr = - cschr + C


/
r ..t tt-r I

J ;'*"i;= ;,(,-t bx - n ln lxl) + c


I ;*: ,]*K, + bx)z - 4a(a + bx) + b2tila + D.rll + c
rdrll.rl :
J ,G; a*r ,'n 1,, * a..l * c

I#=rnlx+ t/?-71+c

d.x
lt --....=:-_ \/7- +C
J vz1/rz - oz a?t
fdtr
ldr-W - oz1J7 -; + c
t dx Lx
I a +7= -tan-'- +c
tdx
lm==lnl:l +!a2+x2+c
I dx t.l{d+rr+"1
J;ffii=,rnl_-, J+c
-
d,
I ------___::.:\/7;? -r c
I r2!a2 + xz a2x

ldxr
J Gr+W= 7\/-+r,* c
I dx ,r
) t5-*, = sin-'- + c
I dt
-,a-x
JA-=cos-'-;--c
rxfu
lffi= -\/ir - x, * acos-r9-o_x + c
dr
f _------.---.rL \/2a;-
J x!?ax - 12 ax
I dr I .-r
l;@i= -sec-'- + c

-} a, = x - \/i; -}+ f
J2{ux
a
[ cos-'
T *,
I r\/2",
- dx =b' - g, - 3o' \/2n -t + cos-,
o -u_o
*
r4
"
Appendix C

Elementary Matrix Theory

This appendix presents the minimum amount of matrix theory needed to comprehend the material in Chapters 2 and 6 of the text. It is recommended that even those well versed in matrix theory read the material herein to become familiar with the notation. For those not so well versed, the presentation is terse and oriented toward them. For a more comprehensive presentation of matrix theory, we suggest the study of textbooks solely concerned with the subject.

C.1 BASIC DEFINITION

A matrix, denoted by a capital boldface letter such as A or Φ, or by the notation [a_ij], is a rectangular array of elements. Such arrays occur in various branches of applied mathematics. Matrices are useful because they enable us to consider an array of many numbers as a single object and to perform calculations on these objects in a compact form. Matrix elements can be real numbers, complex numbers, polynomials, or functions. A matrix that contains only one row is called a row matrix, and a matrix that contains only one column is called a column matrix. Square matrices have the same number of rows and columns. A matrix A is of order m × n (read m by n) if it has m rows and n columns.
The complex conjugate of a matrix A is obtained by conjugating every element in A and is denoted by A*. A matrix is real if all elements of the matrix are real. Clearly, for real matrices, A* = A. Two matrices are equal if their corresponding elements are equal: A = B means [a_ij] = [b_ij] for all i and j. The matrices should be of the same order.

C.2 BASIC OPERATIONS

C.2.1 Matrix Addition

A matrix C = A + B is formed by adding corresponding elements; that is,

   [c_ij] = [a_ij] + [b_ij]                                                    (C.1)

Matrix subtraction is analogously defined. The matrices must be of the same order. Matrix addition is commutative and associative.

C.2.2 Differentiation and Integration

The derivative or integral of a matrix is obtained by differentiating or integrating each element of the matrix.

C.2.3 Matrix Multiplication

Matrix multiplication is an extension of the dot product of vectors. Recall that the dot product of the two N-dimensional vectors u and v is defined as

   u · v = Σ_{i=1}^{N} u_i v_i

Elements [c_ij] of the product matrix C = AB are found by taking the dot product of the ith row of the matrix A and the jth column of the matrix B, so that

   [c_ij] = Σ_{k=1}^{n} a_ik b_kj                                              (C.2)

The process of matrix multiplication is, therefore, conveniently referred to as the multiplication of rows into columns, as demonstrated in Figure C.1.
This definition requires that the number of columns of A be the same as the number of rows of B. In that case, the matrices A and B are said to be compatible. Otherwise the product is undefined. Matrix multiplication is associative [(AB)C = A(BC)], but not, in general, commutative (AB ≠ BA).

Figure C.1  Matrix multiplication: c_ij is formed from row i of A and column j of B.

As an example, let

   A = [3  −2; 1  5]   and   B = [−4  1; 1  6]

Then, by Equation (C.2),

   AB = [(3)(−4) + (−2)(1)   (3)(1) + (−2)(6); (1)(−4) + (5)(1)   (1)(1) + (5)(6)] = [−14  −9; 1  31]

and

   BA = [(−4)(3) + (1)(1)   (−4)(−2) + (1)(5); (1)(3) + (6)(1)   (1)(−2) + (6)(5)] = [−11  13; 9  28]
Matrix multiplication has the following properties:

   (kA)B = k(AB) = A(kB)                                                       (C.3a)
   A(BC) = (AB)C                                                               (C.3b)
   (A + B)C = AC + BC                                                          (C.3c)
   C(A + B) = CA + CB                                                          (C.3d)
   AB ≠ BA, in general                                                         (C.3e)
   AB = 0 does not necessarily imply A = 0 or B = 0                            (C.3f)

These properties hold, provided that A, B, and C are such that the expressions on the left are defined (k is any number). An example of (C.3f) is

   [1  1; 0  0][−1  1; 1  −1] = [0  0; 0  0]

The properties expressed by Equations (C.3e) and (C.3f) are quite unusual because they have no counterparts in the standard multiplication of numbers and should, therefore, be carefully observed. As with vectors, there is no matrix division.

C.3 SPECIAL MATRICES

Zero Matrix.  The zero matrix, denoted by 0, is a matrix whose elements are all zero.

Diagonal Matrix.  The diagonal matrix, denoted by D, is a square matrix whose off-diagonal elements are all zeros.

Unit Matrix.  The unit matrix, denoted by I, is a diagonal matrix whose diagonal elements are all ones. (Note: AI = IA = A, where A is any compatible matrix.)

Upper Triangular Matrix.  The upper triangular matrix has all zeros below the main diagonal.

Lower Triangular Matrix.  The lower triangular matrix has all zeros above the main diagonal.

In upper or lower triangular matrices, the diagonal elements need not be zero. An upper triangular matrix added to or multiplied by an upper triangular matrix results in an upper triangular matrix, and similarly for lower triangular matrices. For example, the matrices

   T₁ = [1  4  2; 0  3  −2; 0  0  −5]   and   T₂ = [1  0  0; 2  1  0; −1  0  7]

are upper and lower triangular matrices, respectively.

Transpose Matrix.  The transpose matrix, denoted by Aᵀ, is the matrix resulting from an interchange of rows and columns of a given matrix A. If A = [a_ij], then Aᵀ = [a_ji], so that the element in the ith row and jth column of A becomes the element in the jth row and ith column of Aᵀ.

Complex Conjugate Transpose Matrix.  The complex conjugate transpose matrix, denoted by A†, is the matrix whose elements are complex conjugates of the elements of Aᵀ. Note that

   (AB)ᵀ = BᵀAᵀ   and   (AB)† = B†A†

The following definitions apply to square matrices:

Symmetric Matrix.  Matrix A is symmetric if A = Aᵀ.

Hermitian Matrix.  Matrix A is Hermitian if A = A†.

Skew-Symmetric Matrix.  Matrix A is skew symmetric if A = −Aᵀ.

For example, the matrices

   A = [−1  2  4; 2  5  −3; 4  −3  6]   and   B = [0  −3  4; 3  0  −7; −4  7  0]

are symmetric and skew-symmetric matrices, respectively.

Normal Matrix.  Matrix A is normal if A†A = AA†.

Unitary Matrix.  Matrix A is unitary if A†A = I.

A real unitary matrix is called an orthogonal matrix.
506 Elem€ntary Matrix Theory Appendlx C

C.4 THE INVERSE OF A MATRIX

The inverse of an n × n matrix A is denoted by A⁻¹ and is an n × n matrix such that

   AA⁻¹ = A⁻¹A = I

where I is the n × n unit matrix. If the determinant of A is zero, then A has no inverse and is called singular; on the other hand, if the determinant is nonzero, the inverse exists, and A is called a nonsingular matrix.
In general, finding the inverse of a matrix is a tedious process. For some special cases, the inverse is easily determined. For a 2 × 2 matrix

   A = [a₁₁  a₁₂; a₂₁  a₂₂]

we have

   A⁻¹ = 1/(a₁₁a₂₂ − a₁₂a₂₁) [a₂₂  −a₁₂; −a₂₁  a₁₁]                            (C.4)

provided that a₁₁a₂₂ ≠ a₁₂a₂₁. For a diagonal matrix, we have

   A = diag(a₁₁, a₂₂, ..., aₙₙ),   A⁻¹ = diag(1/a₁₁, 1/a₂₂, ..., 1/aₙₙ)        (C.5)

provided that aᵢᵢ ≠ 0 for any i.
The inverse of the inverse is the given matrix A; that is,

   (A⁻¹)⁻¹ = A                                                                 (C.6)

The inverse of a product AC can be obtained by inverting each factor and multiplying the results in reverse order:

   (AC)⁻¹ = C⁻¹A⁻¹                                                             (C.7)

For higher order matrices, the inverse is computed using Cramer's rule:

   A⁻¹ = adj A / det A                                                         (C.8)

Here, det A is the determinant of A, and adj A is the adjoint matrix of A. The following is a summary of the steps needed to calculate the inverse of an n × n square matrix A (a short numerical check follows the list):

1. Calculate the matrix of minors. (A minor of the element a_ij, denoted by det M_ij, is the determinant of the matrix formed by deleting the ith row and the jth column of the matrix A.)
2. Calculate the matrix of cofactors. (A cofactor of the element a_ij, denoted by c_ij, is related to the minor by c_ij = (−1)^{i+j} det M_ij.)
3. Calculate the adjoint matrix of A by transposing the matrix of cofactors of A:

      adj A = [c_ij]ᵀ

4. Calculate the determinant of A using

      det A = Σ_{i=1}^{n} a_ij c_ij   for any column j

   or

      det A = Σ_{j=1}^{n} a_ij c_ij   for any row i

5. Use Equation (C.8) to calculate A⁻¹.
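As an illustration (not part of the appendix), the cofactor construction can be carried out in a few lines of Python and compared with a library inverse; the 3 × 3 matrix below is an arbitrary nonsingular example.

    # Inverse of a 3x3 matrix by the adjoint/cofactor steps above, checked against numpy.linalg.inv.
    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    n = A.shape[0]
    C = np.zeros_like(A)                   # matrix of cofactors
    for i in range(n):
        for j in range(n):
            M_ij = np.delete(np.delete(A, i, axis=0), j, axis=1)   # minor: delete row i, column j
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M_ij)

    detA = np.sum(A[0, :] * C[0, :])       # expansion along the first row
    A_inv = C.T / detA                     # adj A = C^T, Eq. (C.8)
    print(np.allclose(A_inv, np.linalg.inv(A)))   # True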

C.5 EIGENVALUES AND EIGENVECTORS

The eigenvalues of an n × n matrix A are the solutions to the equation

   Ax = λx                                                                     (C.9)

Equation (C.9) can be written as (A − λI)x = 0. Nontrivial solution vectors x exist only if det(A − λI) = 0. This is an algebraic equation of degree n in λ and is called the characteristic equation of the matrix. There are n roots for this equation, although some may be repeated. An eigenvalue of the matrix A is said to be distinct if it is not a repeated root of the characteristic equation. The polynomial g(λ) = det[A − λI] is called the characteristic polynomial of A. Associated with each eigenvalue λᵢ is a nonzero solution vector xᵢ of the eigenvalue equation Axᵢ = λᵢxᵢ. This solution vector is called an eigenvector. For example, the eigenvalues of the matrix

   A = [3  4; 1  3]

are obtained by solving the equation

   det [3 − λ   4; 1   3 − λ] = 0

or

   (3 − λ)(3 − λ) − 4 = 0

This second-degree equation has two real roots, λ₁ = 1 and λ₂ = 5. There are two eigenvectors. The eigenvector associated with λ₁ = 1 is the solution to

   [3  4; 1  3][x₁; x₂] = 1·[x₁; x₂]

or

   [2  4; 1  2][x₁; x₂] = [0; 0]

Then 2x₁ + 4x₂ = 0 and x₁ + 2x₂ = 0, from which it follows that x₁ = −2x₂. By choosing x₂ = 1, we find that the eigenvector is

   x₁ = [−2; 1]

The eigenvector associated with λ₂ = 5 is the solution to

   [3  4; 1  3][x₁; x₂] = 5[x₁; x₂]

or

   [−2  4; 1  −2][x₁; x₂] = [0; 0]

which has the solution x₁ = 2x₂. Choosing x₂ = 1 gives

   x₂ = [2; 1]
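For comparison (not part of the appendix), a numerical routine gives the same eigenvalues and eigenvectors, up to scaling:

    # Eigenvalues/eigenvectors of A = [[3, 4], [1, 3]] (compare with the hand calculation above).
    import numpy as np

    A = np.array([[3.0, 4.0], [1.0, 3.0]])
    vals, vecs = np.linalg.eig(A)
    print(vals)                            # eigenvalues 5 and 1 (order may differ)
    for k in range(2):
        v = vecs[:, k] / vecs[1, k]        # scale so the second component is 1, as in the text
        print(vals[k], np.round(v, 4))     # eigenvectors proportional to [2, 1] and [-2, 1]
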
C.6 FUNCTIONS OF A MATRIX

Any analytic scalar function f(t) of a scalar t can be uniquely expressed in a convergent Maclaurin series as

   f(t) = Σ_{k=0}^{∞} [d^k f(t)/dt^k]_{t=0} (t^k/k!)

The same type of expansion can be used to define functions of matrices. Thus, the function f(A) of the n × n matrix A can be expanded as

   f(A) = Σ_{k=0}^{∞} [d^k f(t)/dt^k]_{t=0} (A^k/k!)                           (C.10)

For example,

   sin A = (sin 0)I + (cos 0)A + (−sin 0)A²/2! + ... = A − A³/3! + A⁵/5! − ...

and

   exp[At] = I + At + A²t²/2! + ... + Aⁿtⁿ/n! + ...

The Cayley-Hamilton (C-H) theorem states that any matrix satisfies its own characteristic equation. That is, given an arbitrary n × n matrix A with characteristic polynomial g(λ) = det(A − λI), it follows that g(A) = 0. As an example, if

   A = [3  4; 1  3]

so that

   det[A − λI] = g(λ) = λ² − 6λ + 5

then, by the Cayley-Hamilton theorem, we have

   g(A) = A² − 6A + 5I = 0

or

   A² = 6A − 5I                                                                (C.11)

In general, the Cayley-Hamilton theorem enables us to express any power of a matrix in terms of a linear combination of A^k for k = 0, 1, 2, ..., n − 1. For example, A³ can be found from Equation (C.11) by multiplying both sides by A to obtain

   A³ = 6A² − 5A = 6[6A − 5I] − 5A = 31A − 30I

Similarly, higher powers of A can be obtained by this method. Multiplying Equation (C.11) by A⁻¹, we obtain

   A⁻¹ = (6I − A)/5

assuming that A⁻¹ exists. As a consequence of the C-H theorem, it follows that any function f(A) can be expressed as

   f(A) = Σ_{k=0}^{n−1} γ_k A^k

The calculation of γ₀, γ₁, ..., γ_{n−1} can be carried out by the iterative method used in the calculation of Aⁿ and Aⁿ⁺¹. It can be shown that if the eigenvalues of A are distinct, then the set of coefficients γ₀, γ₁, ..., γ_{n−1} satisfies the following equations:
   f(λ₁) = γ₀ + γ₁λ₁ + ... + γ_{n−1}λ₁^{n−1}
   f(λ₂) = γ₀ + γ₁λ₂ + ... + γ_{n−1}λ₂^{n−1}
   ⋮
   f(λₙ) = γ₀ + γ₁λₙ + ... + γ_{n−1}λₙ^{n−1}

As an example, let us calculate exp[At], where

   A = [3  4; 1  3]

The eigenvalues of A are λ₁ = 1 and λ₂ = 5, with f(A) = exp[At]. Then

   exp[At] = Σ_{k=0}^{1} γ_k(t) A^k = γ₀(t)I + γ₁(t)A

where γ₀(t) and γ₁(t) are the solutions to

   exp[t] = γ₀(t) + γ₁(t)
   exp[5t] = γ₀(t) + 5γ₁(t)

so that

   γ₁(t) = ¼(exp[5t] − exp[t])
   γ₀(t) = ¼(5 exp[t] − exp[5t])

Therefore,

   exp[At] = ¼(5 exp[t] − exp[5t])[1  0; 0  1] + ¼(exp[5t] − exp[t])[3  4; 1  3]

           = [½ exp[t] + ½ exp[5t]      exp[5t] − exp[t];
              ¼ exp[5t] − ¼ exp[t]      ½ exp[t] + ½ exp[5t]]
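As a quick check (not part of the appendix), the same matrix exponential can be computed numerically and compared with the Cayley-Hamilton construction just obtained; the sketch below assumes SciPy for expm and uses an arbitrary value of t.

    # Compare exp[At] from the Cayley-Hamilton construction with scipy.linalg.expm at t = 0.3.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[3.0, 4.0], [1.0, 3.0]])
    t = 0.3
    g0 = 0.25 * (5 * np.exp(t) - np.exp(5 * t))      # gamma_0(t)
    g1 = 0.25 * (np.exp(5 * t) - np.exp(t))          # gamma_1(t)
    E_ch = g0 * np.eye(2) + g1 * A                   # gamma_0 I + gamma_1 A
    print(np.allclose(E_ch, expm(A * t)))            # True
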
If the eigenvalues are not distinct, then we have fewer equations than unknowns. By differentiating the equation corresponding to the repeated eigenvalue with respect to λ, we obtain a new equation that can be used to solve for γ₀(t), γ₁(t), ..., γ_{n−1}(t). For example, consider

   A = [−1  0  0; 0  −4  4; 0  −1  0]

This matrix has eigenvalues λ₁ = −1 and λ₂ = λ₃ = −2. The coefficients γ₀(t), γ₁(t), and γ₂(t) are obtained as the solution to the following set of equations:

   exp[−t] = γ₀(t) − γ₁(t) + γ₂(t)
   exp[−2t] = γ₀(t) − 2γ₁(t) + 4γ₂(t)
   t exp[−2t] = γ₁(t) − 4γ₂(t)

Solving for the γᵢ yields

   γ₀(t) = 4 exp[−t] − 3 exp[−2t] − 2t exp[−2t]
   γ₁(t) = 4 exp[−t] − 4 exp[−2t] − 3t exp[−2t]
   γ₂(t) = exp[−t] − exp[−2t] − t exp[−2t]

Thus,

   exp[At] = γ₀(t)[1  0  0; 0  1  0; 0  0  1] + γ₁(t)[−1  0  0; 0  −4  4; 0  −1  0]
             + γ₂(t)[1  0  0; 0  12  −16; 0  4  −4]

           = [exp[−t]   0                          0;
              0         exp[−2t] − 2t exp[−2t]     4t exp[−2t];
              0         −t exp[−2t]                exp[−2t] + 2t exp[−2t]]
Appendix D

Partial Fractions

Expansion in partial fractions is a technique used to reduce proper rational functions¹ of the form N(s)/D(s) into sums of simple terms. Each term by itself is a proper rational function with denominator of degree 2 or less. More specifically, if N(s) and D(s) are polynomials, and the degree of N(s) is less than the degree of D(s), then it follows from the theorem of algebra that

   N(s)/D(s) = F₁ + F₂ + ... + F_r                                             (D.1)

where each F_i has one of the forms

   A/(s + b)^μ   or   (Bs + C)/(s² + ps + q)^ν

where the polynomial s² + ps + q is irreducible, and μ and ν are nonnegative integers. The sum on the right-hand side of Equation (D.1) is called the partial-fraction decomposition of N(s)/D(s), and each F_i is called a partial fraction. By using long division, improper rational functions can be written as a sum of a polynomial of degree M − N and a proper rational function, where M is the degree of the polynomial N(s) and N is the degree of the polynomial D(s). For example, given

   (s⁴ + 3s³ − 5s² − 1)/(s³ + 2s² − s + 1)

we obtain, by long division,

¹A proper rational function is a ratio of two polynomials, with the degree of the numerator less than the degree of the denominator.
   (s⁴ + 3s³ − 5s² − 1)/(s³ + 2s² − s + 1) = s + 1 − (6s² + 2)/(s³ + 2s² − s + 1)

The partial-fraction decomposition is then found for the remaining proper rational term. Partial fractions are very useful in integration and also in finding the inverse of many transforms, such as Laplace, Fourier, and Z-transforms. All these operators share one property in common: linearity.
The first step in the partial-fraction technique is to express D(s) as a product of factors s + b or irreducible quadratic factors s² + ps + q. Repeated factors are then collected, so that D(s) is a product of different factors of the form (s + b)^μ or (s² + ps + q)^ν, where μ and ν are nonnegative integers. The form of the partial fractions depends on the type of factors we have for D(s). There are four different cases.

D.1 CASE I: NONREPEATED LINEAR FACTORS

To every nonrepeated factor s + b of D(s), there corresponds a partial fraction A/(s + b). In general, the rational function can be written as

   N(s)/D(s) = A/(s + b) + N₁(s)/D₁(s)

where

   A = [(s + b) N(s)/D(s)]_{s=−b}                                              (D.2)

Example D.1

Consider the rational function

   (37 − 11s)/(s³ − 4s² + s + 6)

The denominator has the factored form (s + 1)(s − 2)(s − 3). All these factors are linear nonrepeated factors. Thus, for the factor s + 1, there corresponds a partial fraction of the form A/(s + 1). Similarly, for the factors s − 2 and s − 3, there correspond partial fractions B/(s − 2) and C/(s − 3), respectively. The decomposition of Equation (D.1) then has the form

   (37 − 11s)/(s³ − 4s² + s + 6) = A/(s + 1) + B/(s − 2) + C/(s − 3)

The values of A, B, and C are obtained using Equation (D.2):

   A = [(37 − 11s)/((s − 2)(s − 3))]_{s=−1} = 4

   B = [(37 − 11s)/((s + 1)(s − 3))]_{s=2} = −5

   C = [(37 − 11s)/((s + 1)(s − 2))]_{s=3} = 1

The partial-fraction decomposition is, therefore,

   (37 − 11s)/(s³ − 4s² + s + 6) = 4/(s + 1) − 5/(s − 2) + 1/(s − 3)

Example D.2

Let us find the partial-fraction decomposition of

   (2s + 1)/(s³ + 3s² − 4s)

We factor the polynomial as

   D(s) = s³ + 3s² − 4s = s(s + 4)(s − 1)

and then use the partial-fraction form

   (2s + 1)/(s³ + 3s² − 4s) = A/s + B/(s + 4) + C/(s − 1)

Using Equation (D.2), we find that the coefficients are

   A = [(2s + 1)/((s + 4)(s − 1))]_{s=0} = −1/4

   B = [(2s + 1)/(s(s − 1))]_{s=−4} = −7/20

   C = [(2s + 1)/(s(s + 4))]_{s=1} = 3/5

The partial-fraction decomposition is, therefore,

   (2s + 1)/(s³ + 3s² − 4s) = −1/(4s) − 7/(20(s + 4)) + 3/(5(s − 1))
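Routines for this expansion are widely available; for instance, the residues of Example D.2 can be checked in Python with SciPy's residue function (a sketch, not part of the appendix).

    # Partial fractions of (2s + 1)/(s^3 + 3s^2 - 4s) using scipy.signal.residue.
    import numpy as np
    from scipy.signal import residue

    num = [2, 1]                    # 2s + 1
    den = [1, 3, -4, 0]             # s^3 + 3s^2 - 4s
    r, p, k = residue(num, den)
    for ri, pi in zip(r, p):
        print(f"{ri.real:+.4f} / (s - ({pi.real:+.4f}))")
    # Expected residues: -1/4 at s = 0, -7/20 at s = -4, 3/5 at s = 1 (order may vary).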

D.2 CASE II: REPEATED LINEAR FACTORS

To each repeated factor (s + b)^p, there corresponds the partial fraction

   A₁/(s + b) + A₂/(s + b)² + ... + A_p/(s + b)^p

The coefficients A_k can be determined by the formulas

   A_p = [(s + b)^p N(s)/D(s)]_{s=−b}                                          (D.3)

   A_k = 1/(p − k)! [d^{p−k}/ds^{p−k} ((s + b)^p N(s)/D(s))]_{s=−b},   k = 1, 2, ..., p − 1   (D.4)

Example D.3

Consider the rational function

   (2s² − 25s − 33)/(s³ − 3s² − 9s − 5)

The denominator has the factored form D(s) = (s + 1)²(s − 5). For the factor s − 5, there corresponds a partial fraction of the form B/(s − 5). The factor (s + 1)² is a linear, repeated factor, to which there corresponds a partial fraction of the form A₁/(s + 1) + A₂/(s + 1)². The decomposition of Equation (D.1) then has the form

   (2s² − 25s − 33)/(s³ − 3s² − 9s − 5) = B/(s − 5) + A₁/(s + 1) + A₂/(s + 1)²   (D.5)

The values of B, A₁, and A₂ are obtained using Equations (D.2), (D.3), and (D.4) as follows:

   B = [(2s² − 25s − 33)/(s + 1)²]_{s=5} = −3

   A₂ = [(2s² − 25s − 33)/(s − 5)]_{s=−1} = 1

   A₁ = [d/ds ((2s² − 25s − 33)/(s − 5))]_{s=−1} = [(2s² − 20s + 158)/(s − 5)²]_{s=−1} = 5

Hence, the rational function in Equation (D.5) can be written as

   (2s² − 25s − 33)/(s³ − 3s² − 9s − 5) = −3/(s − 5) + 5/(s + 1) + 1/(s + 1)²

Example D.4

Let us find the partial-fraction decomposition of

   (3s³ − 18s² + 29s − 4)/(s⁴ − 5s³ + 6s² + 4s − 8)

The denominator can be factored as (s + 1)(s − 2)³. Since we have a repeated factor of order 3, the corresponding partial fraction is

   (3s³ − 18s² + 29s − 4)/((s + 1)(s − 2)³) = B/(s + 1) + A₁/(s − 2) + A₂/(s − 2)² + A₃/(s − 2)³   (D.6)

The coefficient B can be obtained using Equation (D.2):

   B = [(3s³ − 18s² + 29s − 4)/(s − 2)³]_{s=−1} = 2

The coefficients A_i, i = 1, 2, 3, are obtained using Equations (D.3) and (D.4). First,

   A₃ = [(3s³ − 18s² + 29s − 4)/(s + 1)]_{s=2} = 2

   A₂ = [d/ds ((3s³ − 18s² + 29s − 4)/(s + 1))]_{s=2} = −3

Similarly, A₁ can be found using Equation (D.4).
In many cases, it is much easier to use the following technique, especially after finding all but one coefficient: Multiplying both sides of Equation (D.6) by (s + 1)(s − 2)³ gives

   3s³ − 18s² + 29s − 4 = B(s − 2)³ + A₁(s + 1)(s − 2)² + A₂(s + 1)(s − 2) + A₃(s + 1)

If we compare the coefficients of s³ on both sides, we obtain

   3 = B + A₁

Since B = 2, it follows that A₁ = 1. The resulting partial-fraction decomposition is then

   (3s³ − 18s² + 29s − 4)/((s + 1)(s − 2)³) = 2/(s + 1) + 1/(s − 2) − 3/(s − 2)² + 2/(s − 2)³

D.3 CASE III: NONREPEATED IRREDUCIBLE SECOND-DEGREE FACTORS

In the case of a nonrepeated irreducible second-degree polynomial, we set up fractions of the form

   (As + B)/(s² + ps + q)                                                      (D.7)

The best way to find the coefficients is to equate the coefficients of different powers of s, as is demonstrated in the following example.

Example D.5

Consider the rational function

   (s² − s − 21)/(2s³ − s² + 8s − 4)

We factor the polynomial as D(s) = (s² + 4)(2s − 1) and use the partial-fraction form

   (s² − s − 21)/(2s³ − s² + 8s − 4) = (As + B)/(s² + 4) + C/(2s − 1)

Multiplying by the lowest common denominator gives

   s² − s − 21 = (As + B)(2s − 1) + C(s² + 4)                                  (D.8)

The values of A, B, and C can be found by comparing the coefficients of different powers of s or by substituting values of s that make various factors zero. For example, substituting s = 1/2, we obtain (1/4) − (1/2) − 21 = C(17/4), which has the solution C = −5.
The remaining coefficients can be found by comparing different powers of s. Rearranging the right-hand side of Equation (D.8) gives

   s² − s − 21 = (2A + C)s² + (−A + 2B)s − B + 4C

Comparing the coefficients of s² on both sides, we see that 2A + C = 1. Knowing C results in A = 3. Similarly, comparing the constant terms yields −B + 4C = −21, or B = 1. Thus, the partial-fraction decomposition of the rational function is

   (s² − s − 21)/(2s³ − s² + 8s − 4) = (3s + 1)/(s² + 4) − 5/(2s − 1)

D.4 CASE IV: REPEATED IRREDUCIBLE SECOND-DEGREE FACTORS

For repeated irreducible second-degree factors, we have partial fractions of the form

   (A₁s + B₁)/(s² + ps + q) + (A₂s + B₂)/(s² + ps + q)² + ... + (A_νs + B_ν)/(s² + ps + q)^ν   (D.9)

Again, the best way to find the coefficients is to equate the different powers of s.

Example D.6

As an example of repeated irreducible second-degree factors, consider

   (s² − 6s + 7)/(s² − 4s + 5)²                                                (D.10)

Note that the denominator can be written as [(s − 2)² + 1]². Therefore, applying Equation (D.9) with ν = 2, we can write the partial fractions for Equation (D.10) as

   (s² − 6s + 7)/(s² − 4s + 5)² = (A₁s + B₁)/(s² − 4s + 5) + (A₂s + B₂)/(s² − 4s + 5)²   (D.11)

Multiplying both sides of Equation (D.11) by [(s − 2)² + 1]² and rearranging terms, we obtain

   s² − 6s + 7 = A₁s³ + (B₁ − 4A₁)s² + (5A₁ − 4B₁ + A₂)s + 5B₁ + B₂            (D.12)

The constants A₁, B₁, A₂, and B₂ can be determined by comparing the coefficients of s in the left- and right-hand sides of Equation (D.12). The coefficient of s³ yields A₁ = 0; the coefficient of s² yields

   1 = B₁ − 4A₁,   or   B₁ = 1

Comparing the coefficients of s on both sides, we obtain

   −6 = 5A₁ − 4B₁ + A₂,   or   A₂ = −2

Comparing the constant terms yields

   7 = 5B₁ + B₂,   or   B₂ = 2

Finally, the partial-fraction decomposition of Equation (D.10) is

   (s² − 6s + 7)/(s² − 4s + 5)² = 1/(s² − 4s + 5) + (−2s + 2)/(s² − 4s + 5)²
Bibliography

1. Brigham, E. Oran. The Fast Fourier Transform and Its Applications. Englewood Cliffs, NJ: Prentice-Hall, 1988.
2. Gabel, Robert A., and Richard A. Roberts. Signals and Linear Systems, 3d ed. New York: Wiley, 1987.
3. Johnson, Johnny R. Introduction to Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
4. Lathi, B. P. Signals and Systems. Carmichael, CA: Berkeley-Cambridge Press, 1987.
5. McGillem, Claire D., and George R. Cooper. Continuous and Discrete Signal and System Analysis, 2d ed. New York: Holt, Rinehart and Winston, 1984.
6. O'Flynn, Michael, and Eugene Moriarity. Linear Systems: Time Domain and Transform Analysis. New York: Harper and Row, 1987.
7. Oppenheim, Alan V., and Ronald W. Schafer. Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
8. Oppenheim, Alan V., Alan S. Willsky, and S. Hamid Nawab. Signals and Systems, 2d ed. Englewood Cliffs, NJ: Prentice-Hall, 1997.
9. Papoulis, Athanasios. The Fourier Integral and Its Applications. New York: McGraw-Hill, 1962.
10. Phillips, Charles L., and John Parr. Signals, Systems and Transforms. Englewood Cliffs, NJ: Prentice-Hall, 1995.
11. Poularikas, Alexander D., and Samuel Seely. Elements of Signals and Systems. Boston: PWS-Kent, 1988.
12. Proakis, John G., and Dimitris G. Manolakis. Introduction to Digital Signal Processing. New York: Macmillan, 1988.
13. Scott, Donald E. An Introduction to Circuit Analysis: A Systems Approach. New York: McGraw-Hill, 1987.
14. Siebert, William M. Circuits, Signals, and Systems. New York: McGraw-Hill, 1986.
15. Strum, Robert D., and Donald E. Kirk. First Principles of Discrete Systems and Digital Signal Processing. Reading, MA: Addison-Wesley, 1988.
16. Swisher, George M. Introduction to Linear Systems Analysis. Beaverton, OR: Matrix, 1976.
17. Ziemer, Roger E., William H. Tranter, and D. Ronald Fannin. Signals and Systems: Continuous and Discrete, 2d ed. New York: Macmillan, 1989.
lndex

A Circular convolution. see Periodic


convolution
A/D conversion 33,364 (see also Sampling)
Circular shift, see Pcriodic shift
Adders, 69, 306
Amplitude modulation, l9() Classification oI continuous-time sysrems, 42
Analog signals, 33 Closed-loop system. 256
Anticipatory system, 48 Coefficient multipliers, 306
Aperiodic signal, 4 Cofactor, 507
Average power, 7 Complex exponcntial, 4, 285
Complex numbers, .185
arithmetical opcrations. zl{17
B conjugate,.l8T
Band edge, 202 polar form, 485
BandJimited sign al, 122, 185, 197 powers and roots.489
Band-pass filter, 200 trigonometric [orm, el86
Bandwidth: Conlinuous-time signals, 2
absolute, 205 piecewise continuous, 3
defDition, 204 Continuous-lime systems:
equivalcnt,206 causal,4li
half-power (3-dB), 205 classification. .ll-52
null-to-null, 206 detinition,.53
rms, 207 ditferential equation representation, 67
27o,207 impulse responsc representatioa, 53
Basic system components, 6E invertibility and invcrse, 50
Bilateral l-aplace transform, see Laplacr linear and nonlinenr, 42
Transform with and rryithour memory. 47
Bilateral Z-transform, see Z-transform simulation diagrams, 70
Bilinear lransform. 473 stable,5'l
Binomial coefficients, 496 stability con:;idcrations, 91
Bounded-input/bounded-outpur
state-variable rcprcsentation, 76
slability, see Stable sysrem
lime-varying and t ime-invariant, t[6
Buttervorth Iilter, 458 Convolution intcgral:
de{inition, -52
c graphical intcrprctation. 58
Canonical forms, continuous-time: properlies, 54
first form, 70, 250 Convolution propcrry:
second form, 71, 52 continuous-limc Fourier transform, l8l
Canonical forms, discrete-lime: discrele Fourier transform. 423
Iint form, 307-308 discrete-Timc Fourier trans[orm, 346
second form, 30E-309 Fourier series, ll I
Cascade interconnection, 252 Z-transform. 375
Causal signal, 65 Convolution sum:
Causal sptem, tE, 64 defrnition. 287
Cayley-Hamilton theorem, 83. 91. 314. 509 graphical interpretalion, 2E8
clraracteristic equation, 91 properties, 293
Chebyshev filter, ,152 tabular forms tor finite sequences, 291-92


D frequenry function, 347


D/A conversion, 364 inverse relation, 342
periodic property, 341
Decimation, 359
properties, 345-51
Detinite integrals. 496
6-function: table of properties, 351
definition, 22 table of transform pairs, 352
derivatives, 30 use in convolution, 346
properties, 5-29 Discrete-time sign al, 3, 27 E
De Moiwe &eorem, zE9 Discrete-time systems, 41, 287
Design of analog filters: difference-equation representation, 298
Butterwosth tilter. 458 finite impulse response, 288
Chebyshev filler,462 impulse response, 287
frequency transformations, 455 infinite impulse response, 288
ideal filters, 452-53 simulation diagrams, 307
specifications for low-pass filter. 454 stabiliry ol 316
Design of digital filten, 468 state-variable representatiotr, 310
computer-aided design, zEl Distortionless sy,stem, 139, 347
FIR design using windows, 475 Down Sampling, 359
frequency transformations, 456 Duration, 2M, 208
IIR filter design by bilinear
transformation, 473 E
IIR filter design by impulse invariance, Eigenvalues, 83, 9l
469
Eigenvectors, 507
linear-phase filten, 475
Elementary signals, 19, ?52
Diagonal matrix, 5(X Energy signal, 7
Difference equations:
Euler's form, 35, zlE6
characteristic equation, 300
Exponential function, 5, 492
characteristic roots, 300
Exponential sequence, 284
homogeneous solution, 300
impulse response from, 305{E
initial conditions, 299 F
particular solution, 302 Fast Fourier transforms (FFf):
solution by iteration, 298-29 bit-reversal in, 433
Differential equations, see a/so Continuous- decimation-in-frequency (DIF) algorirhm,
I
time systems 4t1
solution, 67 decimation-in-time (DIT) algorithm, 429
Digiral sitnals, 33 in-place mmpuiation, 432
Dirichlet Conditions. 122 signal-flow graph for DIF, 436
Discrete Fourier lransform: signal-flow graph for DIT, 412
circular shift, 422 spectral estimation of analog signals, 436
definition, 421 Feedback connection, 256
inverse transform (lDFf), 421 Feedback sptem, 256
linear convolulion using the DF"I,426 Filters, 200, see alsa Design of filters
matrix interpretation, 425 Final-value theorem:
periodicity of DFT and IDFT, 421 laplace transform, 244, 269
properties, 422-25 Z-transform, 389
zero-augmenting, 426 Finite impulse response system, 2E8
Discrete impulse function, 283 First difference, 283
Discrete step function, see Unit step function Fourier series:
Discrete-time Fourier series (DTFS): coeflicienrs, 109, 113
convergence of, 333 exponential, I l2
evaluation of co€fficients, 333 generaliz-ed, 109
prop,erties, 338-39 of periodic sequences, s€e
representation of periodic sequences, 331 Discrete-time Fourier series
table of properties, 339 properties:
Discrete-time Fourier transform (DTFT): convolution of two signals, 13l
delinition. 340 integration, 134

    least squares approximation, 125
    linearity, 129
    product of two signals, 130
    shift in time, 133
    symmetry, 127
  trigonometric, 114
Fourier transform:
  applications of:
    amplitude modulation, 190
    multiplexing, 192
    sampling theorem, 194
    signal filtering, 200
  definition, 164-65
  development, 163
  examples, 166
  existence, 165
  properties:
    convolution, 181
    differentiation, 177
    duality, 184
    linearity, 171
    modulation, 185
    symmetry, 173
    time scaling, 175
    time shifting, 175
  discrete-time, see Discrete-time Fourier transform

G

Gaussian pulse, 28, 210
Gibbs phenomenon, 142
Gram-Schmidt orthogonalization, 148

H

Harmonic content, 159
Harmonic signals:
  continuous-time, 4
  discrete-time, 286
Hermitian matrix, 505
Hold circuits, 357

I

Ideal filters, 198, 200
Ideal sampling, 194
IIR filter, see Design of digital filters
Impulse function, see δ-function
  derivative of, 30
Impulse train, 186, 194
Impulse modulation model, 195, 355
Impulse response, 73
Indefinite integrals, 498-501
Initial-value theorem:
  Laplace transform, 243, 259
  Z-transform, 389
Integrator, 69
Interpolation, 199, 359
Inverse Laplace transform, 246-47
Inverse of a matrix, 506
Inverse system, 50
Inverse Z-transform, 392
Inversion property, 116
Invertible LTI systems, 65

K

Kirchhoff's current law, 259
Kirchhoff's voltage law, 137, 259
Kronecker delta, 107

L

Laplace transform:
  applications:
    control, 260
    RLC circuit analysis, 258
    solution of differential equations, 257
  bilateral, 225
  bilateral using unilateral
  inverse, 246-47
  properties:
    convolution, 240
    differentiation in the s-domain
    differentiation in time domain
    final-value theorem, 244
    initial-value theorem, 243
    integration in time domain
    linearity, 232
    modulation, 239
    shifting in the s-domain
    time scaling, 234
    time shifting, 232
  region of convergence, 226
  unilateral, 228
Left half-plane, 267
Linear constant-coefficient differential equations, 67
Linear convolution, 295
Linearity, 42, 287
Linear time-invariant system, 52
  properties, 64
Logarithmic function, 492

M

Marginally stable system, 267
Matched filter, 56
Matrices:
  definition, 502
  determinant, 506
  inverse, 506
  minor, 507
  operations, 503
  special kinds, 504
Memoryless systems, 47, 64
Modulation, see Amplitude modulation
Multiple-order poles, 268
Multiplication of matrices, 503

N

Nonanticipatory system, 48
  (see also Causal system)
Noncausal system, see Anticipatory system
Nonperiodic signals, see Aperiodic signals

O

Orthogonal representation of signals, 107-112
Orthogonal signals, 107
Orthonormal signals, 108
Overshoot, 143, 262

P

Parallel interconnection, 255, 293
Parseval's theorem, 132, 179
Partial fraction expansion, 247, 512
Passband, 200, 453
Period, 4, 285
Periodic convolution:
  continuous-time, 131
  discrete-time, 295-96
Periodic sequence:
  definition, 285
  fundamental period, 285
Periodic shift, 296
Periodic signals:
  definition, 4
  fundamental period, 4
  representation, 113
Plant, 260
Power signals, 7
Principal value, 486

Q

Quantization, 33, 364

R

Ramp function, 21
Random signals, 32
Rational function, 226, 247, 394
Reconstruction filters:
  ideal, 195-96, 356
  practical, 357
Rectangular function, 3, 20
Reflection operation, 13
Region of convergence (ROC):
  Laplace transform, 226
  Z-transform, 378-80
Rectifier, 118
Residue theorem, 394
Rise time, 262

S

Sampled-data system, 352
Sampling function, 22
Sampling of continuous-time functions:
  aliasing in, 197, 354
  Fourier transform of, 351
  impulse-modulation model, 195, 355
  Nyquist rate, 195
  Nyquist sampling theorem, 196, 354
Sampling property, see δ-function
Sampling rate conversion, 359
Scalar multipliers, 80
Scaling property, see δ-function
Schwartz's inequality, 209
Separation property, 86
Series of exponentials, 496
Shifting operation, 10
Sifting property, see δ-function
Signum function, 21, 178
Simple-order poles, 267
Simulation diagrams:
  continuous-time systems, 70
  discrete-time systems, 306
  in the Z-domain, 402-8
Sinc function, 22
Singularity functions, 29
Sinusoidal signal, 4
Special functions:
  beta, 494
  gamma, 493
  incomplete gamma, 494
Spectrum:
  amplitude, 113
  energy, 179, 184
  estimation, see Discrete Fourier transform
  line, 113
  phase, 113
  power-density, 180
  two-sided, 115
State equations, continuous-time:
  definition, 77
  first canonical form, 86
  second canonical form, 87
  time-domain solution, 78
State equations, discrete-time:
  first canonical form, 310-13
  frequency-domain solution, 408
  parallel and cascade forms
  second canonical form, 310-13
  time-domain solution, 313
  Z-transform analysis, 402
State-transition matrix, continuous-time:
  definition, 80
  determination using Laplace transform, 263-65
  properties, 85, 86
  time-domain evaluation, 81-85
State-transition matrix, discrete-time:
  definition, 315
  determination using Z-transform, 408
  properties, 315-16
  relation to impulse response, 315
  time-domain evaluation of, 314
State-variable representation, 76, 310
  equivalence, 89, 313
Stable LTI systems, 65
Stable system, 51, 91
Stability considerations, 91
Stability in the s-domain, 266
Stop band, 200
Subtractors, 69
Summers, 306
Symmetry:
  effects of, 127
  even symmetric signal, 15
  odd symmetric signal, 15
System:
  causal, 48
  continuous-time and discrete-time, 41
  distortionless, 139
  function, 135
  inverse, 50
  linear and nonlinear, 42
  memoryless, 47
  with periodic inputs, 135
  time-varying and time-invariant, 46

T

Tables:
  effects of symmetry, 128
  Fourier series properties:
    discrete-time, 339
  Fourier transform pairs:
    continuous-time, 172-73
    discrete-time, 352
  Fourier transform properties:
    continuous-time, 189
    discrete-time, 351
  Frequency transformations:
    analog, 456
    digital, 457
  Laplace transform:
    pairs, 230
    properties, 246
  Laplace transforms and their Z-transform equivalents, 470
  Z-transform:
    pairs, 393
    properties, 392
Time average, 49-56
Time-domain solution, 78
Time limited, 9
Transfer function, 135, 242
  open-loop, 256
Transformation:
  of independent variable, 281
  of state vector, 89
Transition matrix, see State-transition matrix
Transition property
Triangle inequality, 490
Triangular pulse, 60
  Fourier transform of, 167
Trigonometric identities, 491-92
Time-scaling:
  continuous-time signals, 17
  discrete-time signals, 281
Two-sided exponential, 168

U

Uncertainty principle, 208
Unilateral Laplace transform, see Laplace transform
Unilateral Z-transform, see Z-transform
Uniform quantizer, 365
Unit delay, 307
Unit doublet, 30
Unit impulse function, see δ-function
Unit step function:
  continuous-time, 19
  discrete-time, 283
Up sampling, 359

W

Walsh functions, 150-51
Window functions:
  in FIR digital filter design, 476-77
  in spectral estimation, 439-41

Z

Z-transform:
  convolution property, 390
  definition, 376
  inversion by series expansion, 394
  inversion integral, 402
  inversion by partial-fraction expansion, 395
  properties of the unilateral Z-transform, 383
  region of convergence, 378-79
  relation to Laplace transform, 410
  solution of difference equations, 386
  table of properties, 392
  table of transforms, 393
Zero-input component
Zero-order hold, 357
Zero padding, see also Discrete Fourier transform
Zero-state component, 264
