CONTINUOUS AND DISCRETE
SIGNALS AND SYSTEMS
SECOND EDITION
SAMIR S. SOLIMAN
QUALCOMM Incorporated
San Diego, California
MANDYAM D. SRINATH
Southern Methodist University
Dallas, Texas
© 1998 by Prentice-Hall, Inc. (now known as Pearson Education, Inc.), One Lake Street, Upper Saddle River, New Jersey 07458, U.S.A. All rights reserved. No part of this book may be reproduced in any form, by mimeograph or any other means, without permission in writing from the publisher.

The authors and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The authors and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The authors and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Published by Asoke K. Ghosh, Prentice-Hall of India Private Limited, M-97, Connaught Circus, New Delhi-110001, and printed by V.K. Batra at Pearl Offset Press Private Limited, New Delhi-110015.
Contents
PREFACE xiii

1 REPRESENTING SIGNALS 1
1.1 Introduction 1
1.2 Continuous-Time vs. Discrete-Time Signals
1.3 Periodic vs. Aperiodic Signals 4
1.4 Energy and Power Signals 7
1.5 Transformations of the Independent Variable 10
1.5.1 The Shifting Operation, 10
1.5.2 The Reflection Operation, 13
1.5.3 The Time-Scaling Operation, 17
1.6 Elementary Signals 19
1.6.1 The Unit Step Function, 19
1.6.2 The Ramp Function, 21
1.6.3 The Sampling Function, 22
1.6.4 The Unit Impulse Function, 22
1.6.5 Derivatives of the Impulse Function, 30
1.7 Other Types of Signals 32
1.8 Summary 33
1.9 Checklist of Important Terms 35
1.10 Problems 35
2 CONNNUOU$NMESYSTEMS 4I
2,1 Introduction 4l
2.2 Classification of Continuous-Time Systems 42
22.1 Lineat and Nonlinear Sysums, 42
2.2.2 Tbne-Varying ond TimeJnvariant Systems, ,16
2.2.3 Systems with and without Memory, 47
2.2.4 CausolSysetta,4E
2.25 Invenibility and lnverce Sysums,50
2.26 Sublc Systerns,5l
2.3 Linear Time-Invariant Systems 52
2.3.1 The Convolution Integral 52
2.3.2 Graphical Inrerpretation ol Convoluiot, 58
2.4 Properties of Linear, Time-Invariant Systems U
2,4.1 Memorylas LTI Systems, O4
2.42 Causal LTI Systems, &
2.43 Invertible LTI Systems, 65
2.4.4 Stoble LTI Systems, 65
2.6 State-VariableRepresentation 76
2.6.1 Sute Equations, V
2.6.2 Time-Domah Solwion ol the State Equations, TE
2.63 State Equations h Fint Canonical Form, M
2.6.4 State Equztions h Second Canonical Fon4 E7
2.65 Stability Consideruions, 9l
2.7 Srrmmary 94
2.8 Checklist of Important Terms 96
2.9 Problems 96
3 FOURIER SERIES
5 THE LAPLACE TRANSFORM 224
6 DISCRETE-TIME SYSTEMS 278
8 THE Z-TRANSFORM 375
9 THE DISCRETE FOURIER TRANSFORM 419
9.1 Introduction 419
9.2 The Discrete Fourier Transform and Its Inverse 421
9.3 Properties of the DFT 422
9.3.1 Linearity, 422
9.3.2 Time Shifting, 422
9.3.3 Alternative Inversion Formula, 423
9.3.4 Time Convolution, 423
9.3.5 Relation to the Discrete-Time Fourier and Z-Transforms, 424
9.3.6 Matrix Interpretation of the DFT, 425
9.4 Linear Convolution Using the DFT 426
9.5 Fast Fourier Transforms 428
9.5.1 The Decimation-in-Time Algorithm, 429
9.5.2 The Decimation-in-Frequency Algorithm, 433
9.7 Summary 445
9.8 Checklist of Important Terms 448
9.9 Problems 448

10 DESIGN OF ANALOG AND DIGITAL FILTERS 452
BIBLIOGRAPHY 519
INDEX 521
Preface
The second edition of Continuous and Discrete Signals and Systems is a modified version of the first edition, based on our experience in using it as a textbook in the introductory course on signals and systems at Southern Methodist University, as well as the comments of numerous colleagues who have used the book at other universities. The result, we hope, is a book that provides an introductory, but comprehensive, treatment of the subject of continuous and discrete-time signals and systems. Among the changes that we have made to enhance the quality of the book is to move the section on orthogonal representations of signals from Chapter 1 to the beginning of Chapter 3 on Fourier series, which permits us to treat Fourier series as a special case of more general representations. Other features are the addition of sections on practical reconstruction filters, sampling-rate conversion, and A/D and D/A converters to Chapter 7. We have
also added several problems in various chapters, emphasizing computer usage. However, we have not suggested or required the use of any specific mathematical software packages, as we feel that this choice should be left to the preference of the instructor. Overall, about a third of the problems and about a fifth of the examples in the book have been changed.
As noted in the first edition, the aim of building complex systems that perform sophisticated tasks imposes on engineering students a need to enhance their knowledge of signals and systems, so that they are able to use effectively the rich variety of analysis and synthesis techniques that are available. Thus, signals and systems is a core course in the electrical engineering curriculum in most schools. In writing this book, we have tried to present the most widely used techniques of signal and system analysis in a fashion appropriate for instruction at the junior or senior level in electrical engineering. The concepts and techniques that form the core of the book are of fundamental importance and should prove useful also to engineers wishing to update or extend their understanding of signals and systems through self-study.
The book is divided into two major parts. In the first part, a comprehensive treatment of continuous-time signals and systems is presented. In the second part, the results are extended to discrete-time signals and systems. In our experience, we have found that covering both continuous-time and discrete-time systems together frequently confuses students: they often are not clear as to whether a particular concept or technique applies to continuous-time or discrete-time systems, or both. The result is that they often use solution techniques that simply do not apply to particular problems. Since most students are familiar with continuous-time signals and systems from the basic courses leading up to this course, they are able to follow the development of the theory and analysis of continuous-time systems without difficulty. Once they have become familiar with this material, which is covered in the first five chapters, students should be ready to handle discrete-time signals and systems.

The book is organized such that all the chapters are distinct but closely related, with smooth transitions between chapters, thereby providing considerable flexibility in course design. By appropriate choice of material, the book can be used as a text in several courses, such as transform theory (Chapters 1, 3, 4, 5, 7, and 8), continuous-time signals and systems (Chapters 1, 2, 3, 4, and 5), discrete-time signals and systems (Chapters 6, 7, 8, and 9), and signals and systems: continuous and discrete (Chapters 1, 2, 3, 4, 6, 7, and 8). We have been using the book at Southern Methodist University for a one-semester course covering both continuous-time and discrete-time systems, and it has proved successful.
Normally, a signals and systems course is taught in the third year of a four-year
undergraduate curriculum. Although the book is designed to be self-contained, a
knowledge of calculus through integration of trigonometric functions, as well as some
knowledge of differential equations, is presumed. A prior exposure to matrix algebra
as well as a course in circuit analysis is preferable but not necessary. These prerequi-
site skills should be mastered by all electrical engineering students by their junior year.
No prior experience with system analysis is required. While we use mathematics exten-
sively, we have done so, not rigorously, but in an engineering context. We use exam-
ples extensively to illustrate the theoretical material in an intuitive manner.
As with all subjects involving problem solving, we feel that it is imperative that a
student sees many solved problems related to the material covered. We have included
a large number of examples that are worked out in detail to illustrate concepts and to
show the student the application of the theory developed in the text. In order to make
the student aware of the wide range of applications of the principles that are covered,
applications with practical significance are mentioned. These applications are selected
to illustrate key concepts, stimulate interest, and bring out connections with other
branches of electrical engineering.
It is well recognized that the student does not fully understand a subject of this
nature unless he or she is given the opportunity to work out problems in using and
applying the basic tools that are developed in each chapter. This not only reinforces
the understanding of the subject matter but, in some cases, allows for the extension of various concepts discussed in the text. In certain cases, even new material is introduced via the problem sets. Consequently, over 260 end-of-chapter problems have been included. These problems are of various types: some are straightforward applications of the basic ideas presented in the chapters and are included to ensure that the student understands the material fully; some are moderately difficult; and other problems require that the student apply the theory he or she learned in the chapter to problems of practical importance.
The relative amount of "design" work in various courses is always a concern for the engineering faculty. The inclusion in this text of analog- and digital-filter design, as well as other design-related material, is in direct response to that concern.

At the end of each chapter, we have included an item-by-item summary of all the important concepts and formulas covered in that chapter, as well as a checklist of all the technical terms discussed. This list serves as a reminder to the student of material that deserves special attention.
Throughout the book, the emphasis is on linear time-invariant systems. The focus in Chapter 1 is on signals. This material, which is basic to the remainder of the book, considers the mathematical representation of signals. In this chapter, we cover a variety of subjects, such as periodic signals, energy and power signals, transformations of the independent variable, and elementary signals.

Chapter 2 is devoted to the time-domain characterization of continuous-time (CT) linear time-invariant (LTI) systems. The chapter starts with the classification of continuous-time systems and then introduces the impulse-response characterization of LTI systems and the convolution integral. This is followed by a discussion of systems characterized by linear constant-coefficient differential equations. Simulation diagrams for such systems are presented and used as a stepping stone to introduce the state-variable concept. The chapter concludes with a discussion of stability.
To this point, the focus is on the time-domain description of signals and systems. Starting with Chapter 3, we consider frequency-domain descriptions. We begin the chapter with a consideration of the orthogonal representation of arbitrary signals. The Fourier series is then introduced as a special case of the orthogonal representation for periodic signals. Properties of the Fourier series are presented. The concept of line spectra for describing the frequency content of such signals is given. The response of linear systems to periodic inputs is illustrated. The chapter concludes with a discussion of the Gibbs phenomenon.
Chapter 4 begins with the development of the Fourier transform. Conditions under which the Fourier transform exists are presented and its properties discussed. Applications of the Fourier transform in areas such as amplitude modulation, multiplexing, sampling, and signal filtering are considered. The use of the transfer function in determining the response of LTI systems is discussed. The Nyquist sampling theorem is derived from the impulse-modulation model for sampling. The several definitions of bandwidth are introduced and duration-bandwidth relationships discussed.
Chapter 5 deals with the Laplace transform. Both unilateral and bilateral Laplace transforms are defined. Properties of the Laplace transform are derived, and examples are given to demonstrate how these properties are used to evaluate new Laplace transform pairs or to find the inverse Laplace transform. The concept of the transfer function is introduced, and such applications of the Laplace transform as the solution of differential equations, circuit analysis, and control systems are presented. The state-variable representation of systems in the frequency domain and the solution of the state equations using Laplace transforms are discussed.
The treatment of continuous-time signals and systems ends with Chapter 5, and a course emphasizing only CT material can be ended at this point. By the end of this chapter, the reader should have acquired a good understanding of continuous-time signals and systems and should be ready for the second half of the book, in which discrete-time signals and systems analysis is covered.
We start our consideration of discrete-time systems in Chapter 6 with a discussion of elementary discrete-time signals. The impulse-response characterization of discrete-time systems is presented, and the convolution sum for determining the response to arbitrary inputs is derived. The difference-equation representation of discrete-time systems and their solution is given. As in CT systems, simulation diagrams are discussed as a means of obtaining the state-variable representation of discrete-time systems.
Chapter 7 considers the Fourier analysis of discrete-time signals. The Fourier series for periodic sequences and the Fourier transform for arbitrary signals are derived. The similarities and differences between these and their continuous-time counterparts are brought out, and their properties and applications are discussed. The relation between the continuous-time and discrete-time Fourier transforms of sampled analog signals is derived and used to obtain the impulse-modulation model for sampling that is considered in Chapter 4. Reconstruction of sampled analog signals using practical reconstruction devices such as the zero-order hold is considered. Sampling-rate conversion by decimation and interpolation of sampled signals is discussed. The chapter concludes with a brief description of A/D and D/A conversion.
Chapter 8 discusses the Z-transform of discrete-time signals. The development follows closely that of Chapter 5 for the Laplace transform. Properties of the Z-transform are derived, and their application in the analysis of discrete-time systems is developed. The solution of difference equations and the analysis of state-variable systems using the Z-transform are also discussed. Finally, the relation between the Laplace and the Z-transforms of sampled signals is derived, and the mapping of the s-plane into the z-plane is discussed.
Chapter 9 introduces the discrete Fourier transform (DFT) for analyzing finite-length sequences. The properties of the DFT are derived, and the differences with the other transforms discussed in the book are noted. The interpretation of the DFT as a matrix operation on a data vector is used to briefly note its relation to other orthogonal transforms. The application of the DFT to linear system analysis and to spectral estimation of analog signals is discussed. Two popular fast Fourier transform (FFT) algorithms for the efficient computation of the DFT are presented.
The final chapter, Chapter 10, considers some techniques for the design of analog and digital filters. Techniques for the design of two low-pass analog filters, namely, the Butterworth and the Chebyshev filters, are given. The impulse-invariance and bilinear techniques for designing digital IIR filters are derived. Design of FIR digital filters using window functions is also discussed. An example to illustrate the application of FIR filters to approximate nonconventional filters is presented. The chapter concludes with a very brief overview of computer-aided techniques.
In addition, four appendices are included. They should prove useful as a readily available source for some of the background material in complex variables and matrix algebra necessary for the course. A somewhat extensive list of frequently used formulas is also included.

We wish to acknowledge the many people who have helped us in writing this book, especially the students on whom much of this material was classroom tested, and the reviewers whose comments were very useful. We have tried to incorporate most of their comments in preparing this second edition of the book. We wish to thank Dyan Muratalla, who typed a substantial part of the manuscript. Finally, we would like to thank our wives and families for their patience during the completion of this book.
S. S. Soliman
M. D. Srinath
Chapter 1
Representing Signals
1.1 INTRODUCTION
Signals are detectable physical quantities or variables by means of which messages or information can be transmitted. A wide variety of signals are of practical importance in describing physical phenomena. Examples include the human voice, television pictures, teletype data, and atmospheric temperature. Electrical signals are the most easily measured and the most simply represented type of signals. Therefore, many engineers prefer to transform physical variables to electrical signals. For example, many physical quantities, such as temperature, humidity, speech, wind speed, and light intensity, can be transformed, using transducers, to time-varying current or voltage signals. Electrical engineers deal with signals that have a broad range of shapes, amplitudes, durations, and perhaps other physical properties. For example, a radar-system designer analyzes high-energy microwave pulses, a communication-system engineer who is concerned with signal detection and signal design analyzes information-carrying signals, a power engineer deals with high-voltage signals, and a computer engineer deals with millions of pulses per second.
Mathematically, signals are represented as functions of one or more independent variables. For example, time-varying current or voltage signals are functions of one variable (time), the vibration of a rectangular membrane can be represented as a function of two spatial variables (x and y coordinates), the electrical field intensity can be looked upon as a function of two variables (time and space), and, finally, an image signal can be regarded as a function of two variables (x and y coordinates). In this introductory course on signals and systems, we focus attention on signals involving one independent variable, which we take to be time, although it can be different in some specific applications.
We begin this chapter with an introduction to two classes of signals that we are concerned with throughout the text, namely, continuous-time and discrete-time signals. Then, in Section 1.3, we define periodic signals. Section 1.4 deals with the issue of power and energy signals. A number of transformations of the independent variable are discussed in Section 1.5. In Section 1.6, we introduce several important elementary signals that not only occur frequently in applications, but also serve as a basis for representing other signals. Other types of signals that are of importance to engineers are mentioned in Section 1.7.
rect(t/τ) = { 1,  |t| < τ/2
            { 0,  |t| > τ/2        (1.2.1)

Figure 1.2.3 A pulse train.

(By convention, the value assigned to x(t₁) at a point of discontinuity t = t₁ is the average of the limits from the left and from the right.)
If the independent variable takes on only discrete values t = kT, where T is a fixed positive real number and k ranges over the set of integers (i.e., k = 0, ±1, ±2, etc.), the corresponding signal x(kT) is called a discrete-time signal. Discrete-time signals arise naturally in many areas of business, economics, science, and engineering. Examples are the amount of a loan payment in the kth month, the weekly Dow Jones stock index, and the output of an information source that produces one of the digits 1, 2, ..., M every T seconds. We consider discrete-time signals in more detail in Chapter 6.
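The relationship between a continuous-time signal and the discrete-time signal obtained by evaluating it at t = kT can be sketched in a few lines of Python (the function `sample` and the particular signal are our own illustration, not from the text):

```python
import math

def sample(x, T, ks):
    """Return the discrete-time signal x(kT) for each integer k in ks."""
    return [x(k * T) for k in ks]

# Illustrative choice (ours): x(t) = sin(2*pi*t) sampled every T = 0.25 s.
x = lambda t: math.sin(2 * math.pi * t)
samples = sample(x, 0.25, range(5))   # x(0), x(T), x(2T), x(3T), x(4T)
```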
A signal x(t) that satisfies

x(t) = x(t + T)  for all t        (1.3.1)

where T > 0 is a constant known as the fundamental period, is classified as a periodic signal. A signal x(t) that is not periodic is referred to as an aperiodic signal. Familiar examples of periodic signals are the sinusoidal functions. A real-valued sinusoidal signal can be expressed mathematically by a time-varying function of the form

x(t) = A sin(ω₀t + φ)        (1.3.2)

where

A = amplitude
ω₀ = radian frequency in rad/s
φ = initial phase angle with respect to the time origin in rad

This sinusoidal signal is periodic with fundamental period T = 2π/ω₀ for all values of ω₀.
The sinusoidal time function described in Equation (1.3.2) is usually referred to as a sine wave. Examples of physical phenomena that approximately produce sinusoidal signals are the voltage output of an electrical alternator and the vertical displacement of a mass attached to a spring, under the assumption that the spring has negligible mass and no damping. The pulse train shown in Figure 1.2.3 is another example of a periodic signal, with fundamental period T = 2. Notice that if x(t) is periodic with fundamental period T, then x(t) is also periodic with period 2T, 3T, 4T, .... The fundamental frequency, in radians per second (the radian frequency), of the periodic signal x(t) is related to the fundamental period by the relationship

ω₀ = 2π/T        (1.3.3)
Engineers and most mathematicians refer to the sinusoidal signal with radian frequency ωₖ = kω₀ as the kth harmonic. For example, the signal shown in Figure 1.2.3 has a fundamental radian frequency ω₀ = π, a second-harmonic radian frequency ω₂ = 2π, and a third-harmonic radian frequency ω₃ = 3π. Figure 1.3.1 shows the first, second, and third harmonics of the signal x(t) in Eq. (1.3.2) for specific values of A, ω₀, and φ. Note that the waveforms corresponding to each harmonic are distinct. In theory, we can associate an infinite number of distinct harmonic signals with a given sinusoidal waveform.

Figure 1.3.1 The first, second, and third harmonics of a sinusoidal signal.
Periodic signals occur frequently in physical problems. In this section, we discuss the mathematical representation of such signals. In Chapter 3, we show how to represent any periodic signal in terms of simple ones, such as sine and cosine.
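The fact that every harmonic repeats with the common period T = 2π/ω₀ of the fundamental can be checked numerically; the following sketch (function names ours) builds the kth harmonic of a sinusoid of the form in Eq. (1.3.2):

```python
import math

def harmonic(A, w0, phi, k):
    """Return the kth harmonic t -> A*sin(k*w0*t + phi) of a sinusoid."""
    return lambda t: A * math.sin(k * w0 * t + phi)

w0 = math.pi              # fundamental radian frequency of the pulse train in Figure 1.2.3
T = 2 * math.pi / w0      # fundamental period, from Eq. (1.3.3)
x3 = harmonic(1.0, w0, 0.0, 3)   # third harmonic, radian frequency 3*pi

# x3 repeats every T/3, and therefore also every T: x3(t + T) == x3(t).
```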
Example 1.3.1
Harmonically related continuous-time exponentials are sets of complex exponentials with fundamental frequencies that are all multiples of a single positive frequency ω₀. Mathematically,

φₖ(t) = exp[jkω₀t],   k = 0, ±1, ±2, ...        (1.3.4)

We show that for k ≠ 0, φₖ(t) is periodic with fundamental period 2π/|kω₀| or fundamental frequency |kω₀|.
In order for the signal φₖ(t) to be periodic with period T > 0, we must have

exp[jkω₀(t + T)] = exp[jkω₀t]

or, equivalently,

T = 2π/|kω₀|        (1.3.5)

Note that since a signal that is periodic with period T is also periodic with period lT for any positive integer l, all signals φₖ(t) have a common period of 2π/ω₀.
The sum of two periodic signals may or may not be periodic. Consider the two periodic signals x(t) and y(t) with fundamental periods T₁ and T₂, respectively. We investigate under what conditions the sum

z(t) = ax(t) + by(t)

is periodic and what the fundamental period of this signal is if the signal is periodic. Since x(t) is periodic with period T₁, it follows that

x(t) = x(t + kT₁)

Similarly,

y(t) = y(t + lT₂)

where k and l are integers such that

z(t) = ax(t + kT₁) + by(t + lT₂)

In order for z(t) to be periodic with period T, one needs

ax(t + T) + by(t + T) = ax(t + kT₁) + by(t + lT₂)

We therefore must have

T = kT₁ = lT₂
or, equivalently,

T₁/T₂ = l/k

In other words, the sum of two periodic signals is periodic only if the ratio of their respective periods can be expressed as a rational number.
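When the two periods are known exactly as rational numbers, the common period T = kT₁ = lT₂ can be computed with exact arithmetic. The sketch below (our own, using Python's `fractions` module) writes T₁/T₂ in lowest terms as l/k and returns kT₁:

```python
from fractions import Fraction

def sum_period(T1, T2):
    """Fundamental period of a sum of signals with rational periods T1, T2."""
    r = Fraction(T1) / Fraction(T2)       # T1/T2 = l/k in lowest terms
    l, k = r.numerator, r.denominator
    return k * Fraction(T1)               # T = k*T1 = l*T2

# Periods 1/2 and 1/3 have ratio 3/2, so the sum repeats every 1 second.
T = sum_period(Fraction(1, 2), Fraction(1, 3))
```

A signal pair whose period ratio is irrational has no such common period; that case simply cannot be represented with exact `Fraction` inputs.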
Example 1.3.2
We wish to determine which of the following signals are periodic:

(a) x₁(t) = sin (2π/3)t
(b) x₂(t) = sin (2/5)t + sin (2/15)t
(c) x₃(t) = sin 3t
(d) x₄(t) = x₁(t) − 2x₃(t)

Of these signals, x₁(t) is periodic with period T₁ = 3. We write x₂(t) as the sum of two sinusoids with periods T₂₁ = 15π/3 and T₂₂ = 15π. Since 3T₂₁ = T₂₂, it follows that x₂(t) is periodic with period T₂ = 15π. x₃(t) is periodic with period T₃ = 2π/3. Since we cannot find integers k and l such that kT₁ = lT₃, it follows that x₄(t) is not periodic.
Note that if x(t) and y(t) have the same period T, then z(t) = x(t) + y(t) is periodic with period T; i.e., linear operations (addition in this case) do not affect the periodicity of the resulting signal. Nonlinear operations on periodic signals (such as multiplication) produce periodic signals with different fundamental periods. The following example demonstrates this fact.
Example 1.3.3
Let x(t) = cos ω₁t and y(t) = cos ω₂t. Consider the signal z(t) = x(t)y(t). Signal x(t) is periodic with period 2π/ω₁, and signal y(t) is periodic with period 2π/ω₂. The fact that z(t) = x(t)y(t) has two components, one with radian frequency ω₂ − ω₁ and the other with radian frequency ω₂ + ω₁, can be seen by rewriting the product x(t)y(t) as

cos ω₁t cos ω₂t = (1/2)[cos(ω₂ − ω₁)t + cos(ω₂ + ω₁)t]

If ω₁ = ω₂ = ω₀, then z(t) will have a constant term (1/2) and a second-harmonic term (1/2) cos 2ω₀t. In general, nonlinear operations on periodic signals can produce higher order harmonics.
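The product-to-sum identity used in Example 1.3.3 is easy to verify numerically; this small check (ours) compares both sides at arbitrary points:

```python
import math

def product_side(w1, w2, t):
    return math.cos(w1 * t) * math.cos(w2 * t)

def sum_side(w1, w2, t):
    # (1/2)[cos((w2 - w1)t) + cos((w2 + w1)t)]
    return 0.5 * (math.cos((w2 - w1) * t) + math.cos((w2 + w1) * t))

# The two sides agree for every choice of w1, w2, and t.
```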
Since a periodic signal is a signal of infinite duration that starts at t = −∞ and goes on to t = ∞, it follows that all practical signals are aperiodic. Nevertheless, the study of the system response to periodic inputs is essential (as we shall see in Chapter 4) in the process of developing the system response to all practical inputs.
1.4 ENERGY AND POWER SIGNALS
and the total energy in the signal over the range t ∈ (−∞, ∞) can be defined as

E = lim_{L→∞} ∫ from −L to L |x(t)|² dt        (1.4.2)

The average power can then be defined as

P = lim_{L→∞} [ (1/2L) ∫ from −L to L |x(t)|² dt ]        (1.4.3)
Although we have used electrical signals to develop Equations (1.4.2) and (1.4.3), these equations define the energy and power, respectively, of any arbitrary signal x(t).
When the limit in Equation (1.4.2) exists and yields 0 < E < ∞, signal x(t) is said to be an energy signal. Inspection of Equation (1.4.3) reveals that energy signals have zero power. On the other hand, if the limit in Equation (1.4.3) exists and yields 0 < P < ∞, then x(t) is a power signal. Power signals have infinite energy.
As stated earlier, periodic signals are assumed to exist for all time from −∞ to +∞ and, therefore, have infinite energy. If it happens that these periodic signals have finite average power (which they do in most cases), then they are power signals. In contrast, bounded finite-duration signals are energy signals.
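The classification can be explored numerically by truncating the integrals in Equations (1.4.2) and (1.4.3) to a large interval [−L, L]. The Riemann-sum sketch below (function names and tolerances ours) confirms that a one-sided decaying exponential is an energy signal with E = 1/2 and zero average power:

```python
import math

def energy(x, L=50.0, dt=1e-3):
    """Riemann-sum approximation of the energy integral over [-L, L]."""
    n = int(2 * L / dt)
    return sum(abs(x(-L + i * dt)) ** 2 for i in range(n)) * dt

def avg_power(x, L=50.0, dt=1e-3):
    """Approximate average power (1/2L) * integral of |x(t)|^2 over [-L, L]."""
    return energy(x, L, dt) / (2 * L)

# x(t) = exp(-t) for t >= 0, zero otherwise: E = 1/2 exactly, so P -> 0.
x = lambda t: math.exp(-t) if t >= 0 else 0.0
```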
Example 1.4.1
In this example, we show that for a periodic signal with period T, the average power is

P = (1/T) ∫ from 0 to T |x(t)|² dt        (1.4.4)

If x(t) is periodic with period T, then the integral in Equation (1.4.3) is the same over any interval of length T. Allowing the limit to be taken in a manner such that 2L is an integral multiple of the period (i.e., 2L = mT), we find that the total energy of x(t) over an interval of length 2L is m times the energy over one period. The average power is then

P = lim_{m→∞} (1/mT) [ m ∫ from 0 to T |x(t)|² dt ] = (1/T) ∫ from 0 to T |x(t)|² dt
Figure 1.4.1 Signals for Example 1.4.2: (a) x₁(t); (b) x₂(t).
Example 1.4.2
Consider the signals in Figure 1.4.1. We wish to determine whether these signals are energy or power signals. The signal in Figure 1.4.1(a) is aperiodic, with total energy

E = ∫ from 0 to ∞ A² exp[−2t] dt = A²/2

which is finite. Therefore, this signal is an energy signal with energy A²/2. The average power is

P = lim_{L→∞} (1/2L) ∫ from 0 to L A² exp[−2t] dt = lim_{L→∞} A²(1 − exp[−2L])/(4L) = 0

and is zero, as expected.
The energy in the signal in Figure 1.4.1(b) is found as

E = lim_{L→∞} [ ∫ from −L to 0 A² dt + ∫ from 0 to L A² exp[−2t] dt ] = lim_{L→∞} [ A²L + (A²/2)(1 − exp[−2L]) ]

which is clearly unbounded. Thus, this signal is not an energy signal. Its power can be found as

P = lim_{L→∞} (1/2L) [ ∫ from −L to 0 A² dt + ∫ from 0 to L A² exp[−2t] dt ] = A²/2

so that this is a power signal with average power A²/2.
Example 1.4.3
Consider the sinusoidal signal

x(t) = A sin(ω₀t + φ)

This signal is periodic with period

T = 2π/ω₀

The average power of the signal is

P = (1/T) ∫ from 0 to T A² sin²(ω₀t + φ) dt = (1/T) ∫ from 0 to T A²[1/2 − (1/2) cos(2ω₀t + 2φ)] dt = A²/2

The last step follows because the signal cos(2ω₀t + 2φ) is periodic with period T/2 and the area under a cosine signal over any interval of length lT, where l is a positive integer, is always zero. (You should have no trouble confirming this result if you draw two complete periods of cos(2ω₀t + 2φ).)
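Equation (1.4.4) makes the A²/2 result easy to confirm numerically for any ω₀ and φ; the following sketch (ours, with arbitrary illustrative parameter values) averages x²(t) over exactly one period:

```python
import math

def avg_power_one_period(x, T, n=100_000):
    """Approximate (1/T) * integral of x(t)^2 over one period via a Riemann sum."""
    dt = T / n
    return sum(x(i * dt) ** 2 for i in range(n)) * dt / T

A, w0, phi = 3.0, 2 * math.pi * 5, 0.7     # illustrative values (ours)
x = lambda t: A * math.sin(w0 * t + phi)
T = 2 * math.pi / w0
# avg_power_one_period(x, T) is A**2 / 2 = 4.5, independent of w0 and phi.
```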
Example 1.4.4
Consider the two aperiodic signals shown in Figure 1.4.2. These two signals are examples of energy signals. The rectangular pulse shown in Figure 1.4.2(a) is strictly time limited, since x₁(t) is identically zero outside the duration of the pulse. The other signal is asymptotically time limited in the sense that x₂(t) → 0 as t → ±∞. Such signals may also be described loosely as "pulses." In either case, the average power equals zero. The energy for signal x₁(t) is

E₁ = lim_{L→∞} ∫ from −L to L x₁²(t) dt = ∫ from −τ/2 to τ/2 A² dt = A²τ

For x₂(t),

E₂ = lim_{L→∞} ∫ from −L to L A² exp[−2a|t|] dt = lim_{L→∞} (A²/a)(1 − exp[−2aL]) = A²/a

Figure 1.4.2 Signals for Example 1.4.4: (a) x₁(t), a rectangular pulse of amplitude A and duration τ; (b) x₂(t) = A exp[−a|t|].

Since E₁ and E₂ are finite, x₁(t) and x₂(t) are energy signals. Almost all time-limited signals of practical interest are energy signals.
Example 1.5.1
Consider the signal x(t) shown in Figure 1.5.2. We want to plot x(t − 2) and x(t + 3). It can easily be seen that

x(t) = { t + 1,   −1 ≤ t ≤ 0
       { 1,       0 < t ≤ 2
       { −t + 3,  2 < t ≤ 3
       { 0,       otherwise

Figure 1.5.1 The shifting operation.
or, equivalently,

x(t − 2) = { t − 1,   1 ≤ t ≤ 2
           { 1,       2 < t ≤ 4
           { −t + 5,  4 < t ≤ 5
           { 0,       otherwise

The signal x(t − 2) is plotted in Figure 1.5.3(a) and can be described as x(t) shifted two units to the right on the time axis. Similarly, it can be shown that

x(t + 3) = { t + 4,   −4 ≤ t ≤ −3
           { 1,       −3 < t ≤ −1
           { −t,      −1 < t ≤ 0
           { 0,       otherwise

The signal x(t + 3) is plotted in Figure 1.5.3(b) and represents a shifted version of x(t), shifted three units to the left.
Example 1.5.2
Vibration sensors are mounted on the front and rear axles of a moving vehicle to pick up vibrations due to the roughness of the road surface. The signal from the front sensor is x(t) and is shown in Figure 1.5.4. The signal from the rear-axle sensor is modeled as x(t − 100 ms). If the sensors are placed 5 ft apart, it is possible to determine the speed of the vehicle by comparing the signal from the rear-axle sensor with the signal from the front-axle sensor. Figure 1.5.5 illustrates the time-delayed version of x(t), where the delay is 100 ms. The speed of the vehicle is

v = d/τ = 5 ft / 0.1 s = 50 ft/s
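The arithmetic of Example 1.5.2 amounts to dividing the sensor spacing by the measured delay; a one-line sketch (function name ours):

```python
def vehicle_speed(d_ft, delay_s):
    """Speed of the vehicle from sensor spacing d and front-to-rear delay."""
    return d_ft / delay_s

v = vehicle_speed(5.0, 0.100)   # 5 ft spacing, 100 ms delay -> 50 ft/s
```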
Example 1.5.3
A radar placed to detect aircraft at a range R of 45 nautical miles (nmi) (1 nautical mile = 6076.115 ft) transmits the pulse-train signal shown in Figure 1.5.6. If there is a target, the transmitted signal is reflected back to the radar's receiver. The radar operates by measuring the time delay between each transmitted pulse and the corresponding return, or echo. The velocity of propagation of the radar signal, c, is equal to 161,875 nmi/s. The round-trip delay is

τ = 2R/c = 2(45)/161,875 = 0.556 ms

Therefore, the received pulse train is the same as the transmitted pulse train, but shifted to the right by 0.556 ms; see Figure 1.5.7.

Figure 1.5.7 Transmitted and received pulse trains of Example 1.5.3.
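The round-trip delay computation generalizes to any range; a small sketch (ours), with the propagation speed taken from the example:

```python
def round_trip_delay(R_nmi, c_nmi_per_s=161_875.0):
    """Round-trip delay tau = 2R/c for a target at range R."""
    return 2.0 * R_nmi / c_nmi_per_s

tau = round_trip_delay(45.0)    # approximately 0.556 ms for R = 45 nmi
```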
Figure 1.5.8 The reflection operation.

Example 1.5.4
We want to draw x(−t) and x(3 − t) if x(t) is as shown in Figure 1.5.9(a). The signal x(t) can be written as

$$x(t)=\begin{cases} t+1, & -1 \le t \le 0\\ 1, & 0 < t \le 2\\ 0, & \text{otherwise}\end{cases}$$

so that

$$x(-t)=\begin{cases} -t+1, & -1 \le -t \le 0\\ 1, & 0 < -t \le 2\\ 0, & \text{otherwise}\end{cases}$$
or, equivalently,
$$x(-t)=\begin{cases} -t+1, & 0 \le t \le 1\\ 1, & -2 \le t < 0\\ 0, & \text{otherwise}\end{cases}$$
The signal x(−t) is illustrated in Figure 1.5.9(b) and can be described as x(t) reflected about the vertical axis. Similarly, it can be shown that

$$x(3-t)=\begin{cases} 4-t, & 3 \le t \le 4\\ 1, & 1 \le t < 3\\ 0, & \text{otherwise}\end{cases}$$

The signal x(3 − t) is shown in Figure 1.5.9(c) and can be viewed as x(t) reflected and then shifted three units to the right. This result is obtained as follows:

$$x(3-t)=x(-(t-3))$$

Note that if we first shift x(t) by three units and then reflect the shifted signal, the result is x(−t − 3), which is shown in Figure 1.5.9(d). Therefore, the operations of shifting and reflecting are not commutative.
In addition to its use in representing physical phenomena such as that in the video-recorder example, reflection is extremely useful in examining the symmetry properties that a signal may possess. A signal x(t) is referred to as an even signal, or is said to be even symmetric, if it is identical to its reflection about the origin, that is, if

$$x(-t)=x(t) \qquad (1.5.1)$$

Similarly, a signal x(t) is said to be odd symmetric if

$$x(-t)=-x(t) \qquad (1.5.2)$$

An arbitrary signal x(t) can always be expressed as a sum of even and odd signals as

$$x(t)=x_e(t)+x_o(t) \qquad (1.5.3)$$

where x_e(t) is called the even part of x(t) and is given by (see Problem 1.14)

$$x_e(t)=\frac{1}{2}[x(t)+x(-t)] \qquad (1.5.4)$$

and x_o(t) is called the odd part of x(t) and is expressed as

$$x_o(t)=\frac{1}{2}[x(t)-x(-t)] \qquad (1.5.5)$$
Example 1.5.5
Consider the signal x(t) defined by

$$x(t)=\begin{cases} 1, & t > 0\\ 0, & t < 0\end{cases}$$

The even and odd parts of this signal are, respectively,

$$x_e(t)=\frac{1}{2},\quad \text{all } t \text{ except } t=0$$

$$x_o(t)=\begin{cases} -\frac{1}{2}, & t < 0\\ \frac{1}{2}, & t > 0\end{cases}$$

The only problem here is the value of these functions at t = 0. If we define x(0) = 1/2 (the definition here is consistent with our definition of the signal at a point of discontinuity), then x_e(0) = 1/2 and x_o(0) = 0.

Figure 1.5.10 Plots of x_e(t) and x_o(t) for x(t) in Example 1.5.5.
Example 1.5.6
Consider the signal x(t) = exp(−at)u(t), a > 0. The even part of x(t) is

$$x_e(t)=\begin{cases} \frac{1}{2}\exp(-at), & t > 0\\ \frac{1}{2}\exp(at), & t < 0\end{cases}=\frac{1}{2}\exp(-a|t|)$$

The odd part of x(t) is

$$x_o(t)=\begin{cases} \frac{1}{2}\exp(-at), & t > 0\\ -\frac{1}{2}\exp(at), & t < 0\end{cases}$$

Signals x_e(t) and x_o(t) are as shown in Figure 1.5.11.
Example 1.5.7
Suppose we want to plot the signal x(3t − 6), where x(t) is the signal shown in Figure 1.5.2. Using the definition of x(t) in Example 1.5.1, we obtain

$$x(3t-6)=\begin{cases} 3t-5, & \frac{5}{3} \le t \le 2\\ 1, & 2 < t \le \frac{8}{3}\\ -3t+9, & \frac{8}{3} < t \le 3\\ 0, & \text{otherwise}\end{cases}$$

A plot of x(3t − 6) versus t is illustrated in Figure 1.5.13 and can be viewed as x(t) compressed by a factor of 3 (or time scaled by a factor of 1/3) and then shifted two units of time to the right. Note that if x(t) is shifted first and then time scaled by a factor of 1/3, we will obtain a different signal; therefore, shifting and time scaling are not commutative. The result we did get can be justified as follows:

$$x(3t-6)=x(3(t-2))$$
Example 1.5.8
We often encounter signals of the type

$$x(t)=1-A\exp(-\alpha t)\cos(\omega_0 t+\phi)$$

Figure 1.5.14 shows x(t) for typical values of A, α, and ω₀. As can be seen, this signal eventually goes to a steady-state value of 1 as t becomes infinite. In practice, it is assumed that the signal has settled down to a final value when it stays within a specified percentage of its theoretical final value. This percentage is usually chosen to be 5%, and the time after which the signal stays within this range is defined as the settling time t_s. As can be seen from Figure 1.5.14, t_s can be determined by solving

$$1+A\exp(-\alpha t_s)=1.05$$

so that

$$t_s=-\frac{1}{\alpha}\ln\!\left[\frac{0.05}{A}\right]$$

Let

$$x(t)=1-2.3\exp(-10.356t)\cos 5t$$

Then the settling time is t_s = −(1/10.356) ln(0.05/2.3) ≈ 0.37 s.
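The settling-time formula can be checked with a short computation. The following Python sketch is not from the text; the function name and the default 5% tolerance are illustrative assumptions.

```python
import math

# Settling time t_s = -(1/alpha) * ln(tol / A) for a signal of the form
# x(t) = 1 - A*exp(-alpha*t)*cos(w0*t + phi), taken to have settled once the
# exponential envelope stays within `tol` (5% by default) of the final value 1.
def settling_time(A, alpha, tol=0.05):
    return -(1.0 / alpha) * math.log(tol / A)

# Signal from the example: x(t) = 1 - 2.3*exp(-10.356*t)*cos(5t)
ts = settling_time(2.3, 10.356)
print(round(ts, 2))  # about 0.37 s
```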
In conclusion, for any general signal x(t), the transformation αt + β of the independent variable can be performed as follows:

$$x(\alpha t+\beta)=x\!\left(\alpha\left(t+\frac{\beta}{\alpha}\right)\right) \qquad (1.5.6)$$

where α and β are assumed to be real numbers. The operations should be performed in the following order:

1. Scale by α. If α is negative, reflect about the vertical axis.
2. Shift to the right by |β/α| if β and α have different signs, and to the left by |β/α| if β and α have the same sign.

Note that the operation of reflecting and time scaling is commutative, whereas the operation of shifting and reflecting or shifting and time scaling is not.
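Equation (1.5.6) can be verified numerically. The sketch below assumes NumPy and uses the smooth test signal x(t) = exp(−t²), which is not from the text.

```python
import numpy as np

# Check of Equation (1.5.6): x(alpha*t + beta) = x(alpha*(t + beta/alpha)),
# i.e. scaling by alpha first and then shifting by beta/alpha gives the same
# signal as substituting alpha*t + beta directly.
def x(t):
    return np.exp(-np.asarray(t, dtype=float) ** 2)

t = np.linspace(-5.0, 5.0, 1001)
alpha, beta = 3.0, -6.0          # x(3t - 6): compress by 3, then shift right by 2

direct   = x(alpha * t + beta)
two_step = x(alpha * (t + beta / alpha))

print(bool(np.allclose(direct, two_step)))  # True
```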
1.6 ELEMENTARY SIGNALS

Several important elementary signals that occur frequently in applications also serve as a basis for representing other signals. Throughout the book, we will find that representing signals in terms of these elementary signals allows us to better understand the properties of both signals and systems. Furthermore, many of these signals have features that make them particularly useful in the solution of engineering problems and, therefore, of importance in our subsequent studies.
1.6.1 The Unit Step Function

The unit step function u(t) is defined as

$$u(t)=\begin{cases} 1, & t > 0\\ 0, & t < 0\end{cases}$$
This signal is an important signal for analytic studies, and it also has many practical applications. Note that the unit step function is continuous for all t except at t = 0, where there is a discontinuity. According to our earlier discussion, we define u(0) = 1/2. An example of a unit step function is the output of a 1-V dc voltage source in series with a switch that is turned on at time t = 0.
Example 1.6.1
The rectangular pulse signal shown in Figure 1.6.2 is the result of an on-off switching operation of a constant voltage source in an electric circuit. In general, a rectangular pulse that extends from −a to +a and has an amplitude A can be written as a difference between appropriately shifted step functions, i.e.,

$$A\,\mathrm{rect}\!\left(\frac{t}{2a}\right)=A[u(t+a)-u(t-a)]$$
Example 1.6.2
Consider the signum function (written sgn) shown in Figure 1.6.3. The unit sgn function is defined by

$$\mathrm{sgn}\,t=\begin{cases} 1, & t > 0\\ 0, & t = 0\\ -1, & t < 0\end{cases} \qquad (1.6.3)$$

The signum function can be expressed in terms of the unit step function as

$$\mathrm{sgn}\,t=-1+2u(t)$$

The signum function is one of the most often used signals in communication and in control theory.
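The relation sgn t = −1 + 2u(t), with u(0) = 1/2 so that sgn 0 = 0, can be checked directly. The NumPy sketch below is illustrative and not from the text.

```python
import numpy as np

# Unit step with u(0) = 1/2, and signum, as defined in the text.
def u(t):
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, 1.0, np.where(t < 0, 0.0, 0.5))

def sgn(t):
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, 1.0, np.where(t < 0, -1.0, 0.0))

t = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(bool(np.array_equal(sgn(t), -1.0 + 2.0 * u(t))))  # True
```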
$$\int_{-\infty}^{t}u(\tau)\,d\tau=r(t)$$

The device that accomplishes this operation is called an integrator. In contrast to both the unit step and the signum functions, the ramp function is continuous at t = 0. Time scaling a unit ramp by a factor α corresponds to a ramp function with slope α. (A unit ramp function has a slope of unity.) An example of a ramp function is the linear-sweep waveform of a cathode-ray tube.
Example 1.6.3
Let x(t) = u(t + 2) − 2u(t + 1) + 2u(t) − u(t − 2) − 2u(t − 3) + 2u(t − 4), and let y(t) denote its integral. Then

$$y(t)=r(t+2)-2r(t+1)+2r(t)-r(t-2)-2r(t-3)+2r(t-4)$$

The sinc function is defined as

$$\mathrm{sinc}\,t=\frac{\sin \pi t}{\pi t} \qquad (1.6.5)$$

and is shown in Figure 1.6.6(b). Note that sinc t is a compressed version of the sampling function Sa(t) = (sin t)/t; the compression factor is π.
1. δ(0) → ∞
2. δ(t) = 0 for t ≠ 0
3. $\int_{-\infty}^{\infty}\delta(t)\,dt=1$
4. δ(t) is an even function; i.e., δ(t) = δ(−t)

As just defined, the δ function does not conform to the usual definition of a function. However, it is sometimes convenient to consider it as the limit of a conventional function as some parameter ε approaches zero. Several examples are shown in Figure 1.6.8; all such functions have the following properties for "small" ε:
Figure 1.6.8 Engineering models for δ(t).
Example 1.6.4
Consider the function defined as

$$p(t)=\lim_{\varepsilon\to 0^+}\frac{1}{\varepsilon}\left(\frac{\sin(\pi t/\varepsilon)}{\pi t/\varepsilon}\right)^2$$

This function satisfies all the properties of a delta function, as can be shown as follows:

1. $p(0)=\lim_{\varepsilon\to 0^+}(1/\varepsilon)=\infty$. Here we used the well-known limit $\lim_{x\to 0}(\sin x)/x=1$.

2. For values of t ≠ 0,

$$p(t)=\lim_{\varepsilon\to 0^+}\frac{\varepsilon}{\pi^2t^2}\sin^2\frac{\pi t}{\varepsilon}=\left(\lim_{\varepsilon\to 0^+}\frac{\varepsilon}{\pi^2t^2}\right)\left(\lim_{\varepsilon\to 0^+}\sin^2\frac{\pi t}{\varepsilon}\right)$$

The second limit is bounded by 1, but the first limit vanishes as ε → 0⁺; therefore,

$$p(t)=0,\quad t\neq 0$$

3. To show that the area under p(t) is unity, we note that

$$\int_{-\infty}^{\infty}p(t)\,dt=\lim_{\varepsilon\to 0^+}\int_{-\infty}^{\infty}\frac{1}{\varepsilon}\left(\frac{\sin(\pi t/\varepsilon)}{\pi t/\varepsilon}\right)^2 dt=\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\sin^2 x}{x^2}\,dx$$

Since

$$\int_{-\infty}^{\infty}\frac{\sin^2 x}{x^2}\,dx=\pi$$

it follows that

$$\int_{-\infty}^{\infty}p(t)\,dt=1$$
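These limiting properties can be explored numerically. The sketch below assumes NumPy; the grid and the values of ε are illustrative assumptions.

```python
import numpy as np

# Numeric sketch of Example 1.6.4: p(t) = (1/eps)*[sin(pi t/eps)/(pi t/eps)]^2
# has (approximately) unit area for every eps and concentrates near t = 0 as
# eps shrinks.  np.sinc(z) computes sin(pi z)/(pi z), so
# p(t) = (1/eps) * np.sinc(t/eps)**2.
def p(t, eps):
    return (1.0 / eps) * np.sinc(t / eps) ** 2

t = np.linspace(-50.0, 50.0, 400_001)
dt = t[1] - t[0]
for eps in (0.5, 0.05):
    area = p(t, eps).sum() * dt                       # approximates total area
    near = p(t[np.abs(t) <= 1.0], eps).sum() * dt     # mass within |t| <= 1
    print(abs(area - 1.0) < 0.01, near > 0.9)
```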
Three important properties repeatedly used when operating with delta functions are the sifting property, the sampling property, and the scaling property.

The sifting property states that

$$\int_{t_1}^{t_2}x(t)\,\delta(t-t_0)\,dt=\begin{cases} x(t_0), & t_1 < t_0 < t_2\\ 0, & \text{otherwise}\end{cases} \qquad (1.6.8)$$

This follows since

$$\int_{t_1}^{t_2}x(t)\,\delta(t-t_0)\,dt=\int_{t_1-t_0}^{t_2-t_0}x(\tau+t_0)\,\delta(\tau)\,d\tau=x(t_0),\quad t_1 < t_0 < t_2$$

by Equation (1.6.7). Notice that the right-hand side of Equation (1.6.8) can be looked at as a function of t₀. This function is discontinuous at t₀ = t₁ and t₀ = t₂. Following our notation, the value of the function at t₁ or t₂ should be given by

$$\int_{t_1}^{t_2}x(t)\,\delta(t-t_0)\,dt=\frac{1}{2}x(t_0),\quad t_0=t_1 \text{ or } t_0=t_2 \qquad (1.6.9)$$
The sifting property is usually used in lieu of Equation (1.6.7) as the definition of a delta function located at t₀. In general, the property can be written as

$$x(t)=\int_{-\infty}^{\infty}x(\tau)\,\delta(t-\tau)\,d\tau \qquad (1.6.10)$$

which implies that the signal x(t) can be expressed as a continuous sum of weighted impulses. This result can be interpreted graphically if we approximate x(t) by a sum of rectangular pulses, each of width Δ seconds and of varying heights, as shown in Figure 1.6.9. That is,

$$\hat{x}(t)=\sum_{k=-\infty}^{\infty}x(k\Delta)\left[\frac{1}{\Delta}\,\mathrm{rect}\!\left(\frac{t-k\Delta}{\Delta}\right)\right]\Delta$$

Now, each term in the sum represents the area under the kth pulse in the approximation x̂(t). We thus let Δ → 0 and replace kΔ by τ, so that kΔ − (k − 1)Δ = Δ becomes dτ, and the summation becomes an integral. Also, as Δ → 0, (1/Δ) rect((t − kΔ)/Δ) approaches δ(t − τ), and Equation (1.6.10) follows. The representation of Equation (1.6.10), along with the superposition principle, is used in Chapter 2 to study the behavior of a special and important class of systems known as linear time-invariant systems.
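The staircase construction behind Equation (1.6.10) can be sketched numerically. NumPy and the test signal x(t) = exp(−t²) are assumptions, not from the text.

```python
import numpy as np

# Staircase approximation behind Equation (1.6.10): x_hat(t) holds the value
# x(k*Delta) on the k-th pulse of width Delta; as Delta -> 0 the staircase
# approaches x(t).
def x(t):
    return np.exp(-t ** 2)

def staircase(t, delta):
    k = np.round(t / delta)          # index of the rect pulse covering each t
    return x(k * delta)              # constant height x(k*Delta) on that pulse

t = np.linspace(-4.0, 4.0, 8001)
for delta in (0.5, 0.05, 0.005):
    err = np.max(np.abs(staircase(t, delta) - x(t)))
    print(delta, bool(err < delta))  # error shrinks roughly in proportion to Delta
```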
Mathematically, two functions f₁(δ(t)) and f₂(δ(t)) are equivalent over the interval (t₁, t₂) if, for any continuous function y(t),

$$\int_{t_1}^{t_2}y(t)f_1(\delta(t))\,dt=\int_{t_1}^{t_2}y(t)f_2(\delta(t))\,dt$$

In this sense, the sampling property

$$x(t)\,\delta(t-t_0)=x(t_0)\,\delta(t-t_0) \qquad (1.6.11)$$

holds, since

$$\int_{t_1}^{t_2}y(t)x(t)\,\delta(t-t_0)\,dt=y(t_0)x(t_0)=\int_{t_1}^{t_2}y(t)x(t_0)\,\delta(t-t_0)\,dt$$

Note the difference between the sifting property and the sampling property: The right-hand side of Equation (1.6.8) is the value of the function evaluated at some point, whereas the right-hand side of Equation (1.6.11) is still a delta function with strength equal to the value of x(t) evaluated at t = t₀.
The scaling property states that

$$\delta(at+b)=\frac{1}{|a|}\,\delta\!\left(t+\frac{b}{a}\right) \qquad (1.6.12)$$

This result is interpreted by considering δ(t) as the limit of a unit-area pulse p(t) as some parameter ε tends to zero. The pulse p(at) is a compressed (expanded) version of p(t) if a > 1 (a < 1), and its area is 1/|a|. (Note that the area is always positive.) By taking the limit as ε → 0, the result is a delta function with strength 1/|a|. We show this by considering the two cases a > 0 and a < 0 separately. For a > 0, we have to show that

$$\int_{t_1}^{t_2}x(t)\,\delta(at+b)\,dt=\int_{t_1}^{t_2}x(t)\,\frac{1}{|a|}\,\delta\!\left(t+\frac{b}{a}\right)dt,\quad t_1<-\frac{b}{a}<t_2$$

Applying the sifting property to the right-hand side yields

$$\frac{1}{a}\,x\!\left(\frac{-b}{a}\right)$$

To evaluate the left-hand side, we use the transformation of variables

$$\tau=at+b$$

Then dt = (1/a) dτ, and the range t₁ < t < t₂ becomes at₁ + b < τ < at₂ + b. The left-hand side now becomes

$$\int_{t_1}^{t_2}x(t)\,\delta(at+b)\,dt=\frac{1}{a}\int_{at_1+b}^{at_2+b}x\!\left(\frac{\tau-b}{a}\right)\delta(\tau)\,d\tau=\frac{1}{a}\,x\!\left(\frac{-b}{a}\right)$$

which is the same as the right-hand side.
When a < 0, we have to show that

$$\int_{t_1}^{t_2}x(t)\,\delta(at+b)\,dt=\int_{t_1}^{t_2}x(t)\,\frac{1}{|a|}\,\delta\!\left(t+\frac{b}{a}\right)dt,\quad t_1<-\frac{b}{a}<t_2$$

Using the sifting property, we evaluate the right-hand side. The result is

$$\frac{1}{|a|}\,x\!\left(\frac{-b}{a}\right)$$

For the left-hand side, we use the transformation τ = at + b, so that

$$dt=\frac{1}{a}\,d\tau=-\frac{1}{|a|}\,d\tau$$

and the range of τ becomes −|a|t₂ + b < τ < −|a|t₁ + b, resulting in

$$\int_{t_1}^{t_2}x(t)\,\delta(at+b)\,dt=\frac{1}{|a|}\int_{-|a|t_2+b}^{-|a|t_1+b}x\!\left(\frac{\tau-b}{a}\right)\delta(\tau)\,d\tau=\frac{1}{|a|}\,x\!\left(\frac{-b}{a}\right)$$

Notice that before using the sifting property in the last step, we interchanged the limits of integration and changed the sign of the integrand, since

$$-|a|t_2+b<-|a|t_1+b$$
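The scaling property can be checked numerically by modelling the impulse with a narrow pulse. The NumPy sketch below, with x(t) = cos t and assumed values of a, b, and ε, is illustrative only.

```python
import numpy as np

# Numeric sketch of the scaling property (1.6.12),
# delta(at + b) = (1/|a|) delta(t + b/a),
# modelling the impulse by a narrow Gaussian pulse (see Example 1.6.5).
def delta_eps(t, eps):
    return np.exp(-t ** 2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

def x(t):
    return np.cos(t)

t = np.linspace(-20.0, 20.0, 2_000_001)
dt = t[1] - t[0]
a, b, eps = -2.0, 3.0, 1e-4

lhs = np.sum(x(t) * delta_eps(a * t + b, eps)) * dt   # integral of x(t) delta(at+b)
rhs = x(-b / a) / abs(a)                              # (1/|a|) x(-b/a)
print(bool(abs(lhs - rhs) < 1e-3))  # True
```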
Example 1.6.5
Consider the Gaussian pulse

$$p(t)=\frac{1}{\sqrt{2\pi\varepsilon}}\exp\!\left[\frac{-t^2}{2\varepsilon}\right]$$

The area under this pulse is always 1; that is,

$$\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\varepsilon}}\exp\!\left[\frac{-t^2}{2\varepsilon}\right]dt=1$$

It can be shown that p(t) approaches δ(t) as ε → 0. (See Problem 1.19.) Let a > 1 be any constant. Then p(at) is a compressed version of p(t). It can be shown that the area under p(at) is 1/a, and as ε approaches 0, p(at) approaches δ(at).
Example 1.6.6
Suppose we want to evaluate the following integrals:

(a) $\int_{-2}^{1}(t+t^2)\,\delta(t-3)\,dt$

(b) $\int_{-2}^{4}(t+t^2)\,\delta(t-3)\,dt$

(c) $\int_{0}^{3}\exp[t-2]\,\delta(2t-4)\,dt$

(d) $\int_{-\infty}^{t}\delta(\tau)\,d\tau$

a. Using the sifting property yields

$$\int_{-2}^{1}(t+t^2)\,\delta(t-3)\,dt=0$$

since t = 3 is not in the interval −2 < t < 1.

b. Using the sifting property yields

$$\int_{-2}^{4}(t+t^2)\,\delta(t-3)\,dt=3+3^2=12$$

since t = 3 is within the interval −2 < t < 4.

c. Using the scaling property and then the sifting property yields

$$\int_{0}^{3}\exp[t-2]\,\delta(2t-4)\,dt=\int_{0}^{3}\exp[t-2]\,\frac{1}{2}\,\delta(t-2)\,dt=\frac{1}{2}\exp[0]=\frac{1}{2}$$

d. Consider the following two cases:

Case 1: t < 0. In this case, the point τ = 0 is not within the interval −∞ < τ < t, and the result of the integral is zero.

Case 2: t > 0. In this case, τ = 0 lies within the interval −∞ < τ < t, and the value of the integral is 1.

Summarizing, we obtain

$$\int_{-\infty}^{t}\delta(\tau)\,d\tau=\begin{cases} 1, & t > 0\\ 0, & t < 0\end{cases}$$

But this is by definition a unit step function; therefore, the functions δ(t) and u(t) form an integral-derivative pair. That is,

$$\frac{d}{dt}u(t)=\delta(t) \qquad (1.6.13)$$
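Parts (b) and (c) of Example 1.6.6 can be checked numerically by modelling δ(t) with the narrow Gaussian pulse of Example 1.6.5. The NumPy sketch below, including the choice of ε and the grids, is an illustrative assumption.

```python
import numpy as np

# Numeric sketch of Example 1.6.6(b) and (c), with delta(t) modelled by the
# narrow Gaussian pulse of Example 1.6.5 (eps is an assumed small value).
def delta_eps(t, eps=1e-5):
    return np.exp(-t ** 2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

# (b) integral of (t + t^2) delta(t - 3) over (-2, 4): sifting gives 3 + 9 = 12
t = np.linspace(-2.0, 4.0, 600_001)
dt = t[1] - t[0]
val_b = np.sum((t + t ** 2) * delta_eps(t - 3.0)) * dt

# (c) integral of exp(t - 2) delta(2t - 4) over (0, 3): scaling + sifting give 1/2
t2 = np.linspace(0.0, 3.0, 300_001)
dt2 = t2[1] - t2[0]
val_c = np.sum(np.exp(t2 - 2.0) * delta_eps(2.0 * t2 - 4.0)) * dt2

print(round(float(val_b), 2), round(float(val_c), 2))  # 12.0 0.5
```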
The unit impulse is one of a class of functions known as singularity functions. Note that the definition of δ(t) in Equation (1.6.7) does not make sense if δ(t) is an ordinary function. It is meaningful only if δ(t) is interpreted as a functional, i.e., as a process of assigning the value x(0) to the signal x(t). The integral notation is used merely as a convenient way of describing the properties of this functional, such as linearity, shifting, and scaling. We now consider how to represent the derivatives of the impulse function.

$$\int_{t_1}^{t_2}x(t)\,\delta'(t-t_0)\,dt=-x'(t_0),\quad t_1<t_0<t_2 \qquad (1.6.15)$$

provided that x(t) possesses a derivative x'(t₀) at t = t₀. This result can be demonstrated using integration by parts as follows:

$$\int_{t_1}^{t_2}x(t)\,\delta'(t-t_0)\,dt=\int_{t_1}^{t_2}x(t)\,d[\delta(t-t_0)]=x(t)\,\delta(t-t_0)\Big|_{t_1}^{t_2}-\int_{t_1}^{t_2}x'(t)\,\delta(t-t_0)\,dt=0-x'(t_0)$$

since δ(t) = 0 for t ≠ 0. It can be shown that δ'(t) possesses, among others, the scaling property

$$\delta'(at+b)=\frac{1}{a|a|}\,\delta'\!\left(t+\frac{b}{a}\right)$$

Higher order derivatives of δ(t) can be defined by extending the definition of δ'(t). For example, the nth-order derivative of δ(t) is defined by

$$\int_{t_1}^{t_2}x(t)\,\delta^{(n)}(t-t_0)\,dt=(-1)^n\,x^{(n)}(t_0),\quad t_1<t_0<t_2$$

provided that such derivative exists at t = t₀. The graphical representation of δ'(t) is shown in Figure 1.6.11.
Example 1.6.7
The current through an inductor of 1 mH is i(t) = 10 exp[−2t]u(t) − δ(t) amperes. The voltage drop across the inductor is given by

$$v(t)=10^{-3}\,\frac{d}{dt}\{10\exp[-2t]u(t)-\delta(t)\}=-2\times 10^{-2}\exp[-2t]u(t)+10^{-2}\exp[-2t]\,\delta(t)-10^{-3}\,\delta'(t)\ \text{volts}$$

Note that the derivative of x(t)u(t) is obtained using the product rule of differentiation, i.e.,

$$\frac{d}{dt}[x(t)u(t)]=x(t)\,\delta(t)+x'(t)\,u(t)$$

whereas the derivative of x(t)δ(t) is

$$\frac{d}{dt}[x(t)\,\delta(t)]=\frac{d}{dt}[x(0)\,\delta(t)]=x(0)\,\delta'(t)$$

This result cannot be obtained by direct differentiation of the product, because δ(t) is interpreted as a functional rather than an ordinary function.
Example 1.6.8
We will evaluate the following integrals:

(a) $\int_{-\infty}^{\infty}x(t-2)\,\delta'\!\left(-\tfrac{1}{2}t+1\right)dt$

(b) $\int_{-1}^{\infty}t\exp[-t]\,\delta'(t-1)\,dt$

For (a), the scaling property of δ' with a = −1/2 and b = 1 gives δ'(−t/2 + 1) = −4δ'(t − 2), so that

$$\int_{-\infty}^{\infty}x(t-2)\,\delta'\!\left(-\tfrac{1}{2}t+1\right)dt=-4\int_{-\infty}^{\infty}x(t-2)\,\delta'(t-2)\,dt=4x'(0)$$

For (b), Equation (1.6.15) yields

$$\int_{-1}^{\infty}t\exp[-t]\,\delta'(t-1)\,dt=-\frac{d}{dt}\big[t\exp(-t)\big]\Big|_{t=1}=-(1-t)\exp(-t)\Big|_{t=1}=0$$
1.8 SUMMARY

• Signals can be classified as continuous-time or discrete-time signals.
• Continuous-time signals that satisfy the condition x(t) = x(t + T) are periodic with fundamental period T.
• The fundamental radian frequency of the periodic signal is related to the fundamental period T by the relationship

$$\omega_0=\frac{2\pi}{T}$$

• The complex exponential x(t) = exp[jω₀t] is periodic with period T = 2π/ω₀ for all ω₀.
• Harmonically related continuous-time exponentials

$$x_k(t)=\exp[jk\omega_0 t]$$

are periodic with common period T = 2π/ω₀.
• The energy E of the signal x(t) is defined by

$$E=\lim_{T\to\infty}\int_{-T/2}^{T/2}|x(t)|^2\,dt$$

• The power P of the signal x(t) is defined by

$$P=\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}|x(t)|^2\,dt$$

• The signal x(t) is an energy signal if 0 < E < ∞.
• The signal x(t) is a power signal if 0 < P < ∞.
• The signal x(t − t₀) is a time-shifted version of x(t). If t₀ > 0, then the signal is delayed by t₀ seconds. If t₀ < 0, then x(t − t₀) represents an advanced replica of x(t).
• The signal x(−t) is obtained by reflecting x(t) about t = 0.
• The signal x(αt) is a scaled version of x(t). If α > 1, then x(αt) is a compressed version of x(t), whereas if 0 < α < 1, then x(αt) is an expanded version of x(t).
• The signal x(t) is even symmetric if x(t) = x(−t).
• The signal x(t) is odd symmetric if x(t) = −x(−t).
• Unit impulse, unit step, and unit ramp functions are related by

$$u(t)=\int_{-\infty}^{t}\delta(\tau)\,d\tau \qquad\qquad r(t)=\int_{-\infty}^{t}u(\tau)\,d\tau$$

• The sifting property of the δ function is

$$\int_{-\infty}^{\infty}x(t)\,\delta(t-t_0)\,dt=x(t_0)$$
1.10 PROBLEMS

1.1. Find the fundamental period T of each of the following signals:
-,(T,),',"(T,),*.(X,),.,"(?,),*,('1,)
1.2. Sketch the following signals:
(a) r(,)='*(;r+zo')
(b) r(t) =t+e1' Ost=2
(c) $x(t)=\begin{cases} t+2, & t \le -2\\ 0, & -2 \le t < 2\\ t-2, & 2 < t\end{cases}$
(d) x(t) = 2 exp[−t], 0 ≤ t ≤ 1, and x(t + 1) = x(t) for all t
1.3. Show that if x(t) is periodic with period T, then it is also periodic with period nT, n = 2, 3, ... .
1.4. Show that if x₁(t) and x₂(t) have period T, then x₃(t) = ax₁(t) + bx₂(t) (a, b constant) has the same period T.
1.5. Use Euler's formula, exp[jωt] = cos ωt + j sin ωt, to show that exp[jωt] is periodic with period T = 2π/ω.
1.6. Are the following signals periodic? If so, find their periods.
(a) r(r) =.,,(1,) * r*'(8{,)
(b) .t(t)
I 71,]1{ cxPL,
[.5n I
=
"*o[i o e,l
[7rrl t5 I
(c) x(t) = e*pli e t.l + exn[u t]
(d).r(t) = exPli
[5zrl lrr I
?,]* "*Pl.6,1
/3rr \ /3\
(e) .r(t) = 2sin(:* ,/ + cos\or/
1.7. If x(t) is a periodic signal with period T, show that x(at), a > 0, is a periodic signal with period T/a, and x(t/b), b > 0, is a periodic signal with period bT. Verify these results for x(t) = sin t, a = b = 2.
1.8. Determine whether the following signals are power or energy signals or neither. Justify your answers.
(a) x(t) = A sin t, −∞ < t < ∞
(b) x(t) = A[u(t − a) − u(t + a)]
(c) x(t) = r(t) − r(t − 1)
(d) x(t) = exp[−at]u(t), a > 0
(e) x(t) = tu(t)
(f) x(t) = u(t)
(g) x(t) = A exp[bt], b > 0
1.9. Repeat Problem 1.8 for the following signals:
[""<oa'l=tlfi
where P is the average power of the signal.
1.11. Let

$$x(t)=\begin{cases} -t+1, & -1 \le t < 0\\ t, & 0 \le t < 2\\ 2, & 2 \le t < 3\\ 0, & \text{otherwise}\end{cases}$$

(a) Sketch x(t).
(b) Sketch x(t − 2), x(t + 3), x(−3t − 2), and x(3t + 1/2), and find the analytical expressions for these functions.
1.12. Repeat Problem 1.11 for
ftl :,tr>,(r + ])
,, ,,(-; * l),,t, - rt
() rr()xr(2 - t)
1.14. (a) Show that

$$x_e(t)=\frac{1}{2}[x(t)+x(-t)]$$

is an even signal.
(b) Show that

$$x_o(t)=\frac{1}{2}[x(t)-x(-t)]$$

is an odd signal.
1.15. Consider the simple FM stereo transmitter shown in Figure P1.15.
(a) Sketch the signals L + R and L − R.
(b) If the outputs of the two adders are added, sketch the resulting waveform.
(c) If signal L − R is inverted and added to signal L + R, sketch the resulting waveform.

Figure P1.15
1.16. For each of the signals shown in Figure P1.16, write an expression in terms of unit step and unit ramp basic functions.
1.17. If the duration of x(t) is defined as the time at which x(t) drops to 1/e of the value at the origin, find the duration of the following signals:
(a) x₁(t) = A exp[−t/T]u(t)
(b) x₂(t) = x₁(3t)

Figure P1.16
Figure P1.18
(b) pr(,) =
Is J**r[;rt']
(c) $p_3(t)=\lim_{\varepsilon\to 0}\frac{1}{\pi}\,\frac{\varepsilon}{t^2+\varepsilon^2}$
(d) po(r):
H +;ri"
(e) p50) = lim e exp[-elrll
(o poo) =
!,$ ,1":l
1.20. Evaluate the following integrals:
1.21. The probability that a random variable x is less than a is found by integrating the probability density function f(x) to obtain

$$P(x \le a)=\int_{-\infty}^{a}f(x)\,dx$$
Given that
find
(a) P(x ≤ −3)
(b) P(x ≤ 1.5)
J, . .{i r$ l,
.t t
,"r,riu
(d) P(x ≤ 6)
1.22. The velocity of 1 g of mass is

$$v(t)=\exp[-(t+1)]u(t+1)+\delta(t-1)$$

(a) Plot v(t).
(b) Evaluate the force

$$f(t)=m\,\frac{d}{dt}v(t)$$

(c) If there is a spring connected to the mass with constant k = 1 N/m, find the force

$$f_1(t)=k\int_{-\infty}^{t}v(\tau)\,d\tau$$
1.23. Sketch the first and second derivatives of the following signals:
(a) x(t) = u(t) + 5u(t − 1) − 2u(t − 2)
(b) x(t) = r(t) − r(t − 1) + 2u(t − 2)
(c) .r(r) =
{f,:r, ;L=, llo
1.11 COMPUTER PROBLEMS

1.24. The integral

$$\int_{t_1}^{t_2}x(t)\,y(t)\,dt$$

can be approximated by a summation of rectangular strips, each of width Δt, as follows:

$$\int_{t_1}^{t_2}x(t)\,y(t)\,dt\approx\sum_{n}x(n\,\Delta t)\,y(n\,\Delta t)\,\Delta t$$
The Gaussian pulse

$$\frac{1}{\sqrt{2\pi\varepsilon}}\exp\!\left[\frac{-t^2}{2\varepsilon}\right]$$

can be used as a mathematical model for the delta function by approximating the following integrals by a summation:
l-i o)"
r"r /', t, + rr ffiexn
ru /_,0+ rrffiexn[-#I]"
cr J't,+r)\ffiexpl-#).
1.25. Repeat Problem 1.24 for the following integrals:
(al
J',expt-rl A*+"
(b)
/',exn[-r; j+#A +1,dl
tcr
J2
exp[-r1 j+#i=frdt
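The rectangular-strip approximation in Problem 1.24 can be sketched as follows. NumPy, the midpoint placement of the strips, and the test functions x(t) = t, y(t) = exp(−t) are assumptions, not from the text.

```python
import numpy as np

# Rectangular-strip approximation of Problem 1.24:
#   integral of x(t) y(t) over (t1, t2)  ~=  sum_n x(n*dt) y(n*dt) * dt.
# Checked on x(t) = t, y(t) = exp(-t) over (0, 1), whose exact value is
# 1 - 2/e by integration by parts.
def strip_integral(xf, yf, t1, t2, n=100_000):
    t = t1 + (np.arange(n) + 0.5) * (t2 - t1) / n     # midpoint of each strip
    return np.sum(xf(t) * yf(t)) * (t2 - t1) / n

approx = strip_integral(lambda t: t, lambda t: np.exp(-t), 0.0, 1.0)
exact = 1.0 - 2.0 / np.e
print(bool(abs(approx - exact) < 1e-6))  # True
```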
Chapter 2

Continuous-Time Systems

2.1 INTRODUCTION

Every physical system is broadly characterized by its ability to accept an input such as voltage, force, pressure, displacement, etc., and to produce an output in response to this input. For example, a radar receiver is an electronic system whose input is the reflection of an electromagnetic signal from the target and whose output is a video signal displayed on the radar screen. Similarly, a robot is a system whose input is an electric control signal and whose output is a motion or action on the part of the robot. A third example is a filter, whose input is a signal corrupted by noise and interference and whose output is the desired signal. In brief, a system can be viewed as a process that results in transforming input signals into output signals.

We are interested in both continuous-time and discrete-time systems. A continuous-time system is a system in which continuous-time input signals are transformed into continuous-time output signals. Such a system is represented pictorially as shown in Figure 2.1.1(a), where x(t) is the input and y(t) is the output. A discrete-time system is a system that transforms discrete-time inputs into discrete-time outputs. (See Figure 2.1.1(b).) Continuous-time systems are treated in this chapter, and discrete-time systems are discussed in Chapter 6.

In studying the behavior of systems, the procedure is to model mathematically each element that comprises the system and then to consider the interconnection of elements. The result is described mathematically either in the time domain, as in this chapter, or in the frequency domain, as in Chapters 3 and 4.

In this chapter, we show that the analysis of linear systems can be reduced to the study of the response of the system to basic input signals.
Example 2.2.1
Consider the voltage divider shown in Figure 2.2.1 with R₁ = R₂. For input x(t) and output y(t), this is a linear system. The input/output relation can be explicitly written as

$$y(t)=\frac{R_2}{R_1+R_2}\,x(t)=\frac{1}{2}\,x(t)$$

i.e., the transformation involves only multiplication by a constant. To prove that the system is indeed linear, one has to show that Equation (2.2.1) is satisfied. Consider the input x(t) = ax₁(t) + bx₂(t). The corresponding output is

$$y(t)=\frac{1}{2}\,x(t)=\frac{1}{2}[ax_1(t)+bx_2(t)]=a\,\frac{1}{2}\,x_1(t)+b\,\frac{1}{2}\,x_2(t)=ay_1(t)+by_2(t)$$

where

$$y_1(t)=\frac{1}{2}\,x_1(t)\quad\text{and}\quad y_2(t)=\frac{1}{2}\,x_2(t)$$

On the other hand, if R₁ is a voltage-dependent resistor such that R₁ = R₂x(t), then the system is nonlinear. The input/output relation can then be written as

$$y(t)=\frac{R_2}{R_2x(t)+R_2}\,x(t)=\frac{x(t)}{x(t)+1}$$

This system is nonlinear because

$$\frac{ax_1(t)+bx_2(t)}{ax_1(t)+bx_2(t)+1}\neq a\,\frac{x_1(t)}{x_1(t)+1}+b\,\frac{x_2(t)}{x_2(t)+1}$$

for some x₁(t), x₂(t), a, and b (try x₁(t) = x₂(t) and a = 1, b = 2).
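The superposition test of Example 2.2.1 can be run numerically. The NumPy sketch below, with test inputs kept positive so the nonlinear divider's denominator never vanishes, is illustrative and not from the text.

```python
import numpy as np

# Numeric sketch of Example 2.2.1: y = x/2 satisfies superposition, while the
# voltage-dependent divider y = x/(x + 1) does not.
t = np.linspace(0.0, 1.0, 101)
x1 = np.sin(2 * np.pi * t) + 2.0
x2 = x1.copy()                       # try x1 = x2, as suggested in the text
a, b = 1.0, 2.0

linear = lambda x: x / 2.0
nonlin = lambda x: x / (x + 1.0)

print(bool(np.allclose(linear(a * x1 + b * x2),
                       a * linear(x1) + b * linear(x2))))   # True
print(bool(np.allclose(nonlin(a * x1 + b * x2),
                       a * nonlin(x1) + b * nonlin(x2))))   # False
```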
Example 2.2.2
Suppose we want to determine which of the following systems is linear:

$$\text{(a)}\quad y(t)=A\,\frac{dx(t)}{dt} \qquad (2.2.2)$$

$$\text{(b)}\quad y(t)=\exp[x(t)] \qquad (2.2.3)$$

For part (a), the response to the input x(t) = ax₁(t) + bx₂(t) is

$$y(t)=A\,\frac{d}{dt}[ax_1(t)+bx_2(t)]=a\,A\,\frac{dx_1(t)}{dt}+b\,A\,\frac{dx_2(t)}{dt}=ay_1(t)+by_2(t)$$

so that the system described by Equation (2.2.2) is linear. Comparing Equation (2.2.2) with

$$v(t)=L\,\frac{di(t)}{dt}$$

we conclude that an ideal inductor with input i(t) (current through the inductor) and output v(t) (voltage across the inductor) is a linear system (element). Similarly, we can show that a system that performs integration is a linear system. (See Problem 2.1(f).) Hence, an ideal capacitor is a linear system (element).

For part (b), we investigate the response of the system to the input in Equation (2.2.4):

$$y(t)=\exp[ax_1(t)+bx_2(t)]=\exp[ax_1(t)]\exp[bx_2(t)]\neq ay_1(t)+by_2(t)$$

Therefore, the system characterized by Equation (2.2.3) is nonlinear.
Example 2.2.3
Consider the RL circuit shown in Figure 2.2.2. This circuit can be viewed as a continuous-time system with input x(t) equal to the voltage source e(t) and with output y(t) equal to the current in the inductor. Assume that at time t₀, i_L(t₀) = y(t₀) = y₀. Applying Kirchhoff's current law at node a, we obtain

$$\frac{v_a(t)-x(t)}{R}+i_L(t)=0$$

Since

$$v_a(t)=L\,\frac{di_L(t)}{dt}$$

it follows that

$$\frac{L}{R}\,\frac{dy(t)}{dt}+y(t)=\frac{x(t)}{R}$$

or

$$\frac{dy(t)}{dt}+\frac{R}{L}\,y(t)=\frac{1}{L}\,x(t) \qquad (2.2.5)$$

The differential equation, Equation (2.2.5), is called the input/output differential equation describing the system. To compute an explicit expression for y(t) in terms of x(t), we must solve the differential equation for an arbitrary input x(t) applied for t ≥ t₀. The complete solution is of the form

$$y(t)=y_0\exp\!\left[-\frac{R}{L}(t-t_0)\right]+\frac{1}{L}\int_{t_0}^{t}\exp\!\left[-\frac{R}{L}(t-\tau)\right]x(\tau)\,d\tau,\quad t\ge t_0 \qquad (2.2.6)$$

According to Equation (2.2.1), this system is nonlinear unless y(t₀) = 0. To prove this, consider the input x(t) = αx₁(t) + βx₂(t). The corresponding output is

$$y(t)=y_0\exp\!\left[-\frac{R}{L}(t-t_0)\right]+\frac{\alpha}{L}\int_{t_0}^{t}\exp\!\left[-\frac{R}{L}(t-\tau)\right]x_1(\tau)\,d\tau+\frac{\beta}{L}\int_{t_0}^{t}\exp\!\left[-\frac{R}{L}(t-\tau)\right]x_2(\tau)\,d\tau\neq\alpha y_1(t)+\beta y_2(t)$$

This may seem surprising, since inductors and resistors are linear elements. However, the system in Figure 2.2.2 violates a very important property of linear systems, namely, that zero input should yield zero output. Therefore, if y₀ = 0, then the system is linear.
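The role of the initial condition in Example 2.2.3 can be seen numerically. The sketch below assumes NumPy; R, L, the step size, the forward-Euler integration, and the test inputs are all illustrative assumptions.

```python
import numpy as np

# Numeric sketch of Example 2.2.3: the RL-circuit equation
# dy/dt + (R/L) y = x/L obeys superposition only when y0 = 0.
R, L = 2.0, 0.5
t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]

def response(x, y0=0.0):
    y = np.empty_like(t)
    y[0] = y0
    for n in range(len(t) - 1):      # Euler step of dy/dt = -(R/L) y + x/L
        y[n + 1] = y[n] + dt * (-(R / L) * y[n] + x[n] / L)
    return y

x1 = np.sin(3 * t)
x2 = np.cos(5 * t)

zero_state = response(2 * x1 + 3 * x2)               # y0 = 0: superposition holds
print(bool(np.allclose(zero_state, 2 * response(x1) + 3 * response(x2))))  # True

biased = response(2 * x1 + 3 * x2, y0=1.0)           # y0 != 0: it does not
print(bool(np.allclose(biased, 2 * response(x1, 1.0) + 3 * response(x2, 1.0))))  # False
```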
The concept of linearity is very important in systems theory. The principle of superposition can be invoked to determine the response of a linear system to an arbitrary input if that input can be decomposed into the (possibly infinite) sum of several basic signals. The response to each basic signal can be computed separately and added to obtain the overall system response. This technique is used repeatedly throughout the text and in most cases yields closed-form mathematical results, which is not possible for nonlinear systems.

Many physical systems, when analyzed in detail, demonstrate nonlinear behavior. In such situations, a solution for a given set of initial conditions and excitation can be found either analytically or with the aid of a computer. Frequently, it is required to determine the behavior of the system in the neighborhood of this solution. A common technique of treating such problems is to approximate the system by a linear model that is valid in the neighborhood of the operating point. This technique is referred to as linearization. Some important examples are the small-signal analysis technique applied to transistor circuits and the small-signal model of a simple pendulum.
Example 2.2.4
We wish to determine whether the systems described by the following equations are time invariant:

(a) y(t) = cos x(t)

(b) $\dfrac{dy(t)}{dt}=-t\,y(t)+x(t),\quad t>0,\quad y(0)=0$

Consider the system in part (a), y(t) = cos x(t). From the steps listed before, the response to x₁(t) is y₁(t) = cos x₁(t). Consider the second input, x₂(t) = x₁(t − t₀). The corresponding output is

$$y_2(t)=\cos x_2(t)=\cos x_1(t-t_0) \qquad (2.2.8)$$

Since y₂(t) = y₁(t − t₀), the system is time invariant.

For the system of part (b), solving the differential equation with y(0) = 0 gives the response to x₁(t):

$$y_1(t)=\int_{0}^{t}\exp\!\left[-\frac{t^2-\tau^2}{2}\right]x_1(\tau)\,d\tau \qquad (2.2.10)$$

The response to the shifted input x₂(t) = x₁(t − t₀) is

$$y_2(t)=\int_{0}^{t}\exp\!\left[-\frac{t^2-\tau^2}{2}\right]x_1(\tau-t_0)\,d\tau=\int_{-t_0}^{t-t_0}\exp\!\left[-\frac{t^2-(\sigma+t_0)^2}{2}\right]x_1(\sigma)\,d\sigma \qquad (2.2.11)$$

From Equation (2.2.10),

$$y_1(t-t_0)=\int_{0}^{t-t_0}\exp\!\left[-\frac{(t-t_0)^2-\tau^2}{2}\right]x_1(\tau)\,d\tau\neq y_2(t)$$

so the system is time varying.
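The time-varying behavior of the system in part (b) can also be observed numerically. The sketch below assumes NumPy; the delayed-step test input, the grid, and the forward-Euler integration are illustrative assumptions.

```python
import numpy as np

# Numeric sketch of Example 2.2.4(b): for dy/dt = -t*y + x(t), y(0) = 0, the
# response to x1(t - t0) is not y1(t - t0), so the system is time varying.
t = np.linspace(0.0, 6.0, 6001)
dt = t[1] - t[0]

def response(x):
    y = np.zeros_like(t)
    for n in range(len(t) - 1):      # Euler step of dy/dt = -t*y + x
        y[n + 1] = y[n] + dt * (-t[n] * y[n] + x[n])
    return y

x1 = (t >= 1.0).astype(float)        # step applied at t = 1
t0 = 2.0
shift = int(round(t0 / dt))

y1 = response(x1)
y2 = response(np.concatenate([np.zeros(shift), x1[:-shift]]))   # input x1(t - t0)
y1_shifted = np.concatenate([np.zeros(shift), y1[:-shift]])     # y1(t - t0)

print(bool(np.allclose(y2, y1_shifted, atol=1e-6)))  # False: not time invariant
```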
For most systems, the inputs and outputs are functions of the independent variable. A system is said to be memoryless, or instantaneous, if the present value of the output depends only on the present value of the input. For example, a resistor is a memoryless system, since with input x(t) taken as the current and output y(t) taken as the voltage, the input/output relationship is

$$y(t)=Rx(t)$$

where R is the resistance. Thus, the value of y(t) at any instant depends only on the value of x(t) at that instant. On the other hand, a capacitor is an example of a system with memory. With input taken as the current and output as the voltage, the input/output relationship in the case of the capacitor is

$$y(t)=\frac{1}{C}\int_{-\infty}^{t}x(\tau)\,d\tau$$

where C is the capacitance. It is obvious that the output at any time t depends on the entire past history of the input.
If a system is memoryless, or instantaneous, then the input/output relationship can be written in the form

$$y(t)=f(x(t)) \qquad (2.2.13)$$

If, in addition, the system is linear, then

$$y(t)=k(t)\,x(t)$$

and if the system is also time invariant, we have

$$y(t)=k\,x(t)$$

where k is a constant.

An example of a linear, time-invariant, memoryless system is the mechanical damper. The linear dependence between force f(t) and velocity v(t) is

$$v(t)=\frac{1}{D}\,f(t)$$

where D is the damping constant.

A system whose response at the instant t is completely determined by the input signals over the past T seconds (the interval from t − T to t) is a finite-memory system having a memory of length T units of time.
Example 2.2.5
The output of a communication channel y(t) is related to its input x(t) by

$$y(t)=\sum_{i=1}^{N}a_i\,x(t-T_i)$$

It is clear that the output y(t) of the channel at time t depends not only on the input at time t, but also on the past history of x(t), e.g.,

$$y(0)=a_1x(-T_1)+a_2x(-T_2)+\cdots+a_Nx(-T_N)$$

Therefore, this system has a finite memory of T = max_i(T_i).
must also be equal up to this same time, since a causal system cannot predict if the two inputs will be different after t₀ (in the future). Mathematically, if

$$x_1(t)=x_2(t),\quad t\le t_0$$

and the system is causal, then

$$y_1(t)=y_2(t),\quad t\le t_0$$

A system is said to be noncausal, or anticipatory, if it is not causal. Causal systems are also referred to as physically realizable systems.

Example 2.2.6
In several applications, we are interested in the value of a signal x(t), not at the present time t, but at some time in the future, t + α, or at some time in the past, t − β. The signal y(t) = x(t + α) is called a prediction of x(t), while the signal y(t) = x(t − β) is the delayed version of x(t). The first system is called an ideal predictor, while the second system is an ideal delay.

Clearly, the predictor is noncausal, since the output depends on future values of the input. We can also verify this mathematically as follows. Consider the inputs

$$x_1(t)=\begin{cases} 1, & t \le 5\\ \exp(-t), & t > 5\end{cases}$$

and

$$x_2(t)=\begin{cases} 1, & t \le 5\\ 0, & t > 5\end{cases}$$

so that x₁(t) and x₂(t) are identical up to t₀ = 5. Suppose α = 3. The corresponding outputs are

$$y_1(t)=\begin{cases} 1, & t \le 2\\ \exp[-(t+3)], & t > 2\end{cases}$$

and

$$y_2(t)=\begin{cases} 1, & t \le 2\\ 0, & t > 2\end{cases}$$

If the system is causal, y₁(t) = y₂(t) for all t ≤ 5. But y₁(3) = exp(−6), while y₂(3) = 0. Thus, the system is noncausal.

The ideal delay is causal, since its output depends only on past values of the input signal.
Example 2.2.7
We are often required to determine the average value of a signal at each time instant t. We do this by defining the running average x_av(t) of a signal x(t). x_av(t) can be computed in several ways, for example,

$$x_{av}(t)=\frac{1}{T}\int_{t-T}^{t}x(\tau)\,d\tau$$

or

$$x_{av}(t)=\frac{1}{2T}\int_{t-T}^{t+T}x(\tau)\,d\tau$$

The first average is causal: for two inputs x₁(t) and x₂(t) that are identical up to time t₀,

$$\frac{1}{T}\int_{t_0-T}^{t_0}x_1(\tau)\,d\tau=\frac{1}{T}\int_{t_0-T}^{t_0}x_2(\tau)\,d\tau$$

The centered average, in contrast, uses values of the input up to time t₀ + T,

$$x_{av}(t_0)=\frac{1}{2T}\int_{t_0-T}^{t_0+T}x(\tau)\,d\tau$$

and, in general, gives different outputs for the two inputs, since x₁(t) and x₂(t) are not the same for t > t₀. This system is, therefore, noncausal.
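The contrast between the two averages can be demonstrated numerically. The NumPy sketch below, including the grid, the window length T, and the test inputs, is an illustrative assumption.

```python
import numpy as np

# Numeric sketch of Example 2.2.7: two inputs that agree up to t0 give the same
# causal running average up to t0, but the centered average already differs
# before t0 because it looks T seconds ahead.
t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
T = 1.0
w = int(round(T / dt))

def running_sums(x):
    return np.concatenate([[0.0], np.cumsum(x) * dt])

def causal_avg(x):                    # (1/T) * integral over (t - T, t]
    c = running_sums(x)
    return (c[w:] - c[:-w]) / T

def centered_avg(x):                  # (1/2T) * integral over (t - T, t + T]
    c = running_sums(x)
    return (c[2 * w:] - c[:-2 * w]) / (2 * T)

t0 = 5.0
n0 = int(round(t0 / dt))
x1 = np.sin(t)
x2 = np.where(t <= t0, np.sin(t), 0.0)          # agrees with x1 only up to t0

print(bool(np.allclose(causal_avg(x1)[: n0 - w],
                       causal_avg(x2)[: n0 - w])))          # True
seg = slice(n0 - 2 * w + 1, n0 - w + 1)         # centers within T of t0, before t0
print(bool(np.allclose(centered_avg(x1)[seg],
                       centered_avg(x2)[seg])))             # False
```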
Example 2.2.8
We want to determine if each of the following systems is invertible. If it is, we will construct the inverse system. If it is not, we will find two input signals to the system that have the same output.

(a) y(t) = 2x(t)
(b) y(t) = cos x(t)
(c) $y(t)=\int_{-\infty}^{t}x(\tau)\,d\tau$
(d) y(t) = x(t + 1)

For part (a), the system y(t) = 2x(t) is invertible, and the inverse system is

$$z(t)=\frac{1}{2}\,y(t)=x(t)$$

For part (b), the system y(t) = cos x(t) is noninvertible, since x(t) and x(t) + 2π give the same output.

For part (c), the system y(t) = ∫₋∞ᵗ x(τ)dτ, with y(−∞) = 0, is invertible, and the inverse system is the differentiator

$$z(t)=\frac{d}{dt}\,y(t)$$

For part (d), the system y(t) = x(t + 1) is invertible, and the inverse system is the one-unit delay

$$z(t)=y(t-1)$$
Example 2.2.9
We want to determine which of these systems is stable:

(a) y(t) = exp[x(t)]
(b) $y(t)=\int_{-\infty}^{t}x(\tau)\,d\tau$

For the system of part (a), a bounded input x(t) such that |x(t)| < B results in an output y(t) with magnitude

$$|y(t)|=\exp[x(t)]<\exp[B]$$

so that the output is bounded and the system is stable.

For the system of part (b), consider the bounded input x(t) = u(t). The output is

$$y(t)=\int_{-\infty}^{t}u(\tau)\,d\tau=r(t)$$

Thus, the bounded input u(t) produces an unbounded output r(t), and the system is not stable.

This example serves to emphasize that for a system to be stable, all bounded inputs must give rise to bounded outputs. If we can find even one bounded input for which the output is not bounded, the system is unstable.
More generally, suppose the input x(t) is the weighted sum of a set of signals x_i(t) and the response to x_i(t) is y_i(t). If the system is linear, the output y(t) will be the weighted sum of the responses y_i(t). That is, if

x(t) = Σ_i a_i x_i(t)

we will have

y(t) = Σ_i a_i y_i(t)
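The superposition property above can be checked numerically for any linear operation. The sketch below (not part of the original text) uses a discrete running sum as the linear system; the signal lengths and weights are arbitrary choices:

```python
import numpy as np

# A linear system: the running sum (a discrete analogue of an integrator).
def running_sum(x):
    return np.cumsum(x)

rng = np.random.default_rng(0)
x1 = rng.standard_normal(50)
x2 = rng.standard_normal(50)
a1, a2 = 2.0, -3.0          # arbitrary weights

# Response to the weighted sum of inputs ...
y_combined = running_sum(a1 * x1 + a2 * x2)
# ... equals the weighted sum of the individual responses.
y_superposed = a1 * running_sum(x1) + a2 * running_sum(x2)

assert np.allclose(y_combined, y_superposed)
```

Any nonlinear system (for example, y(t) = exp[x(t)] from Example 2.2.9) would fail this check.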
In Section 1.6, we demonstrated that the unit-step and unit-impulse functions can be used as building blocks to represent arbitrary signals. In fact, the sifting property of the δ function,

x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ

represents any signal x(t) as a weighted sum of shifted impulses. If h(t) denotes the response of a linear time-invariant system to the unit impulse δ(t), it follows from linearity and time invariance that the response to x(t) is

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ   (2.3.3)
The function h(t) is called the impulse response of the LTI system and represents the output of the system at time t due to a unit-impulse input occurring at t = 0 when the system is relaxed (zero initial conditions).
The integral relationship expressed in Equation (2.3.3) is called the convolution integral of signals x(t) and h(t) and relates the input and output of the system by means of the system impulse response. This operation is represented symbolically as

y(t) = x(t) * h(t)   (2.3.4)
One consequence of this representation is that the LTI system is completely characterized by its impulse response. It is important to know that the convolution

y(t) = x(t) * h(t)
does not exist for all possible signals. The sufficient conditions for the convolution of two signals x(t) and h(t) to exist are:

1. Both x(t) and h(t) must be absolutely integrable over the interval (−∞, 0].
2. Both x(t) and h(t) must be absolutely integrable over the interval [0, ∞).
3. Either x(t) or h(t) or both must be absolutely integrable over the interval (−∞, ∞).
The signal x(t) is called absolutely integrable over the interval [a, b] if

∫_a^b |x(t)| dt < ∞
For example, the convolutions sin ωt * cos ωt, exp[t] * exp[t], and exp[t] * exp[−t] do not exist.
Continuous-time convolution satisfies the following important properties:

Commutativity.

x(t) * h(t) = h(t) * x(t)
This property is proved by substitution of variables. The property implies that the roles of the input signal and the impulse response are interchangeable.
Associativity.

x(t) * h1(t) * h2(t) = [x(t) * h1(t)] * h2(t)
                     = x(t) * [h1(t) * h2(t)]
This property is proved by changing the order of integration. Associativity implies that a cascade combination of LTI systems can be replaced by a single system whose impulse response is the convolution of the individual impulse responses.
Distributivity.

x(t) * [h1(t) + h2(t)] = [x(t) * h1(t)] + [x(t) * h2(t)]
This property follows directly as a result of the linear property of integration. Distributivity states that a parallel combination of LTI systems is equivalent to a single system whose impulse response is the sum of the individual impulse responses in the parallel configuration. All three properties are illustrated in Figure 2.3.1.
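All three properties carry over exactly to discrete convolution, so they can be verified with NumPy's `convolve` on arbitrary finite-length signals (a sketch, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
x  = rng.standard_normal(32)
h1 = rng.standard_normal(16)
h2 = rng.standard_normal(16)

# Commutativity: x * h1 == h1 * x
assert np.allclose(np.convolve(x, h1), np.convolve(h1, x))

# Associativity: (x * h1) * h2 == x * (h1 * h2)
assert np.allclose(np.convolve(np.convolve(x, h1), h2),
                   np.convolve(x, np.convolve(h1, h2)))

# Distributivity: x * (h1 + h2) == x * h1 + x * h2
assert np.allclose(np.convolve(x, h1 + h2),
                   np.convolve(x, h1) + np.convolve(x, h2))
```

The associativity check is the discrete counterpart of replacing a cascade of two LTI systems by a single system, and the distributivity check corresponds to collapsing a parallel combination.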
Some interesting and useful additional properties of convolution integrals can be
obtained by considering convolution with singularity signals, particularly the unit step, unit impulse, and unit doublet. From the defining relationships given in Chapter 1, it can be shown that

x(t) * δ(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ = x(t)   (2.3.6)
Therefore, an LTI system with impulse response h(t) = δ(t) is the identity system. Now consider convolution with the unit step:

x(t) * u(t) = ∫_{−∞}^{∞} x(τ) u(t − τ) dτ = ∫_{−∞}^{t} x(τ) dτ   (2.3.7)

[Figure 2.3.1: block-diagram illustrations of (a) commutativity, (b) associativity, and (c) distributivity.]
Consequently, an LTI system with impulse response h(t) = u(t) is a perfect integrator. Also,
x(t) δ(t − a) = x(a) δ(t − a)

∫_{−∞}^{∞} x(t) δ(t − a) dt = x(a)

x(t) * δ(t − a) = x(t − a)
The result of the first (sampling property of the delta function) is a δ-function with strength x(a). The result of the second (sifting property of the delta function) is the value of the signal x(t) at t = a, and the result of the third (convolution property of the delta function) is a shifted version of x(t).
Example 2.3.1

Let the signal x(t) = aδ(t) + bδ(t − t0) be input to an LTI system with impulse response h(t) = K exp[−ct]u(t). The input is thus the weighted sum of two shifted δ-functions.
Since the system is linear and time invariant, it follows that the output, y(t), can be expressed as the weighted sum of the responses to these δ-functions. By definition, the response of the system to a unit-impulse input is equal to h(t), so that

y(t) = a h(t) + b h(t − t0)
     = aK exp[−ct]u(t) + bK exp[−c(t − t0)]u(t − t0)
Example 2.3.2

The output y(t) of an optimum receiver in a communication system is related to its input x(t) by

y(t) = ∫_{t−T}^{t} x(τ) s(T − t + τ) dτ,   0 ≤ t ≤ T   (2.3.9)
where s(t) is a known signal with duration T. Comparison of Equation (2.3.9) with Equation (2.3.3) yields

h(t − τ) = s(T − t + τ),   0 ≤ t − τ ≤ T
         = 0,   elsewhere
or

h(t) = s(T − t),   0 ≤ t ≤ T
     = 0,   elsewhere

Such a system is called a matched filter. The system impulse response is s(t) reflected and shifted by T (the system is matched to s(t)).
Example 2.3.3

Consider the system described by

y(t) = (1/T) ∫_{t−T/2}^{t+T/2} x(τ) dτ

As noted earlier, this system computes the running average of signal x(t) over the interval (t − T/2, t + T/2).
We now let x(t) = δ(t) to find the impulse response of this system as

h(t) = (1/T) ∫_{t−T/2}^{t+T/2} δ(τ) dτ
     = 1/T,   −T/2 ≤ t ≤ T/2
     = 0,   otherwise

where the last step follows from the sifting property, Equation (1.6.8), of the impulse function.
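This impulse response can be recovered numerically by driving a discretized running average with a narrow, unit-area pulse standing in for δ(t). The sketch below is an illustration only (the step size `dt` and window `T` are arbitrary choices, not from the text):

```python
import numpy as np

T, dt = 1.0, 0.001
t = np.arange(-2, 2, dt)

def running_average(x):
    # y(t) = (1/T) * integral of x over (t - T/2, t + T/2),
    # approximated by a centered discrete sum.
    w = int(round(T / dt))
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

# Unit impulse approximated by a tall, narrow pulse of unit area.
x = np.zeros_like(t)
x[len(t) // 2] = 1.0 / dt

h = running_average(x)

# The impulse response is 1/T inside (-T/2, T/2) and 0 outside.
assert np.isclose(h[len(t) // 2], 1.0 / T)
assert np.isclose(h[np.searchsorted(t, 0.75)], 0.0)
```

The recovered h integrates to 1, consistent with a moving-average (smoothing) filter of unit dc gain.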
Example 2.3.4

Consider the LTI system with impulse response

h(t) = exp[−at]u(t),   a > 0
Sec. 2.3    Linear Time-Invariant Systems    57
and the input

x(t) = exp[−bt]u(t),   b ≠ a

The output is

y(t) = ∫_{−∞}^{∞} exp[−bτ]u(τ) exp[−a(t − τ)]u(t − τ) dτ
Note that

u(τ)u(t − τ) = 1,   0 ≤ τ ≤ t
             = 0,   otherwise
Therefore,

y(t) = ∫_0^t exp[−at] exp[(a − b)τ] dτ
     = (1/(a − b))[exp(−bt) − exp(−at)]u(t)
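The closed form for this two-exponential convolution is easy to verify by discretizing the convolution integral (a NumPy sketch; the values a = 2, b = 1 and the step size are arbitrary choices, not from the text):

```python
import numpy as np

a, b = 2.0, 1.0          # a > 0, b != a
dt = 0.001
t = np.arange(0, 5, dt)

x = np.exp(-b * t)       # x(t) = exp[-bt]u(t), sampled for t >= 0
h = np.exp(-a * t)       # h(t) = exp[-at]u(t)

# Discrete convolution times dt approximates the convolution integral.
y_num = np.convolve(x, h)[: len(t)] * dt

# Closed form from the example: y(t) = (exp[-bt] - exp[-at]) / (a - b)
y_exact = (np.exp(-b * t) - np.exp(-a * t)) / (a - b)

assert np.max(np.abs(y_num - y_exact)) < 5e-3
```

The residual shrinks roughly in proportion to `dt`, as expected for a rectangular-rule approximation of the integral.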
Example 2.3.5

Let us find the impulse response of the system shown in Figure 2.3.2 if

h1(t) = exp[−2t]u(t)
h2(t) = 2 exp[−t]u(t)
h3(t) = exp[−3t]u(t)
h4(t) = 4δ(t)

By using the associative and distributive properties of the impulse response, it follows that h(t) for the system of Figure 2.3.2 is

h(t) = h1(t) * h2(t) + h3(t) * h4(t)
     = 2[exp(−t) − exp(−2t)]u(t) + 4 exp(−3t)u(t)

where the last step follows from Example 2.3.4 and the fact that x(t) * δ(t) = x(t).
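The interconnection rules used in this example (cascade = convolution of impulse responses, parallel = sum of impulse responses) can be checked numerically. The sketch below is an illustration, not the book's method; the test input is an arbitrary pulse:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 5, dt)
h1 = np.exp(-2 * t)      # exp[-2t]u(t)
h2 = 2 * np.exp(-t)      # 2exp[-t]u(t)
h3 = np.exp(-3 * t)      # exp[-3t]u(t)

x = np.where(t < 1, 1.0, 0.0)            # test input: a unit pulse

def conv(a, b):
    # Causal convolution, truncated to the simulation horizon.
    return np.convolve(a, b)[: len(t)] * dt

# Cascade: passing x through h1 then h2 equals convolving with h1 * h2.
y_cascade = conv(conv(x, h1), h2)
y_single  = conv(x, conv(h1, h2))
assert np.max(np.abs(y_cascade - y_single)) < 1e-9

# Parallel: summing the outputs of h1 and h3 equals convolving with h1 + h3.
y_parallel = conv(x, h1) + conv(x, h3)
y_sum      = conv(x, h1 + h3)
assert np.max(np.abs(y_parallel - y_sum)) < 1e-9
```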
Example 2.3.6

The convolution has the property that the area of the convolution integral is equal to the product of the areas of the two signals entering into the convolution. The area can be computed by integrating Equation (2.3.3) over the interval −∞ < t < ∞, giving

∫_{−∞}^{∞} y(t) dt = ∫_{−∞}^{∞} x(τ) [∫_{−∞}^{∞} h(t − τ) dt] dτ
                   = ∫_{−∞}^{∞} x(τ)[area under h(t)] dτ
                   = [area under x(t)][area under h(t)]
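The area property holds exactly for discrete convolution as well (the sum of a discrete convolution equals the product of the sums), so it can be demonstrated with a short NumPy sketch; the two exponential pulses below are arbitrary choices, not from the text:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
x = np.exp(-t)               # area ~ 1 over t >= 0
h = 2 * np.exp(-4 * t)       # area ~ 0.5

y = np.convolve(x, h) * dt   # full convolution on [0, 20)

area_x = np.sum(x) * dt
area_h = np.sum(h) * dt
area_y = np.sum(y) * dt

# Area of the convolution equals the product of the areas.
assert abs(area_y - area_x * area_h) < 1e-9
```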
Step 1. For a fixed t, form the product g(t, τ) = x(τ)h(t − τ) as a function of τ.

Step 2. Integrate the product g(t, τ) as a function of τ. Note that the integrand g(t, τ) depends on t and τ, the latter being the variable of integration, which disappears after the integration is completed and the limits are imposed on the result. The integration can be viewed as the area under the curve represented by the integrand.
This procedure is illustrated by the following four examples.
Example 2.3.7

Consider the signals in Figure 2.3.3(a), where

x(t) = A exp[−t],   0 ≤ t < ∞
h(t) = t/T,   0 ≤ t ≤ T

Figure 2.3.3(b) shows x(τ), h(t − τ), and x(τ)h(t − τ) with t < 0. The value of t always
[Figure 2.3.3: (a) x(τ) = A exp[−τ] and h(τ); (b)–(d) the product x(τ)h(t − τ) for t < 0, 0 ≤ t < T, and t ≥ T; (e) the result y(t) = x(t) * h(t).]
equals the distance from the origin of x(τ) to the shifted origin of h(−τ) indicated by the dashed line. For t < 0, the signals do not overlap; hence, the integrand equals zero and y(t) = 0. For 0 ≤ t < T,

y(t) = (A/T)(t − 1 + exp[−t])

and for t ≥ T,

y(t) = (A/T){T − 1 + exp[−T]} exp[−(t − T)],   t ≥ T

The complete result is plotted in Figure 2.3.3(e). For this example, the plot shows that convolution is a smoothing operation in the sense that x(t) * h(t) is smoother than either of the original signals.
Example 2.3.8

Let us determine the convolution of the rectangular pulse x(t) = u(t + a) − u(t − a) with itself. Carrying out the graphical procedure (Figure 2.3.4) gives

y(t) = 2a − |t|,   |t| ≤ 2a
     = 0,   |t| > 2a
     = 2a Δ(t/2a)

This signal appears frequently in our discussion and is called the triangular signal. We use the notation Δ(t/2a) to denote the triangular signal that is of unit height, centered around t = 0, and has a base of length 4a.
Figure 2.3.4 Graphical solution of Example 2.3.8.
Example 2.3.9

Let us compute the convolution x(t) * h(t), where x(t) and h(t) are as shown in Figure 2.3.5. The figure demonstrates the overlapping of the two signals x(τ) and h(t − τ). We can see that for t < −2, the product x(τ)h(t − τ) is always zero. For −2 ≤ t < −1, the product is a triangle with base t + 2 and height t + 2; therefore, the area is

y(t) = (t + 2)²/2,   −2 ≤ t < −1
[Figure 2.3.5: the product x(τ)h(t − τ) for the cases (a) t < −2, (b) −2 ≤ t < −1, (c) −1 ≤ t < 0, and (d) 0 ≤ t < 1.]
For −1 ≤ t < 0, the product is shown in Figure 2.3.5(c), and the area is

y(t) = 1 + t/2,   −1 ≤ t < 0

For 0 ≤ t < 1, the product is a rectangle with base 1 − t and height 1; therefore, the area is

y(t) = 1 − t,   0 ≤ t < 1

For t ≥ 1, the product is always zero. Summarizing, we have

y(t) = 0,   t < −2
     = (t + 2)²/2,   −2 ≤ t < −1
     = 1 + t/2,   −1 ≤ t < 0
     = 1 − t,   0 ≤ t < 1
     = 0,   t ≥ 1
Example 2.3.10

The convolution of the two signals shown in Figure 2.3.6 is evaluated using graphical interpretation. From Figure 2.3.6, we can see that for t < 0, the product x(τ)h(t − τ) is always zero for all τ; therefore, y(t) = 0. For 0 ≤ t < 1, the product is a triangle with base t and height t; therefore, y(t) = t²/2. For 1 ≤ t < 2, the area under the product is

y(t) = −t² + 3t − 3/2,   1 ≤ t < 2

For 2 ≤ t < 3, the area is (3 − t)²/2, and for t ≥ 3 the product is always zero.
[Figure 2.3.6: the product x(τ)h(t − τ) for (a) t < 0, (b) 0 ≤ t < 1, (c) 1 ≤ t < 2, (d) 2 ≤ t < 3, (e) t ≥ 3; (f) the result y(t).]
Summarizing, we have

y(t) = 0,   t < 0
     = t²/2,   0 ≤ t < 1
     = −t² + 3t − 3/2,   1 ≤ t < 2
     = (3 − t)²/2,   2 ≤ t < 3
     = 0,   t ≥ 3
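The piecewise result can be checked numerically. The figure itself is not reproduced here, so the sketch below assumes (consistent with the piecewise answer) that x(t) is a unit-height triangle on [0, 2] and h(t) a unit-height rectangle on [0, 1]; those signal shapes are an assumption:

```python
import numpy as np

# Assumed signals (reconstructed from the piecewise result, not the figure):
dt = 0.001
t = np.arange(0, 4, dt)
x = np.where(t < 1, t, np.where(t < 2, 2 - t, 0.0))   # triangle on [0, 2]
h = np.where(t < 1, 1.0, 0.0)                          # rectangle on [0, 1]

y = np.convolve(x, h)[: len(t)] * dt

def y_exact(tt):
    # Piecewise result worked out in the example.
    if tt < 1:
        return tt ** 2 / 2
    if tt < 2:
        return -tt ** 2 + 3 * tt - 1.5
    if tt < 3:
        return (3 - tt) ** 2 / 2
    return 0.0

for tt in (0.5, 1.5, 2.5, 3.5):
    n = int(round(tt / dt))
    assert abs(y[n] - y_exact(tt)) < 1e-2
```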
As was mentioned in Section 2.2.4, the output of a causal system depends only on the present and past values of the input. Using the convolution integral, we can relate this property to a corresponding property of the impulse response of an LTI system. Specifically, for a continuous-time system to be causal, y(t) must not depend on x(τ) for τ > t. From Equation (2.3.3), we can see that this will be so if

h(t) = 0   for t < 0   (2.4.3)
In this case, the convolution integral becomes

y(t) = ∫_{−∞}^{t} x(τ) h(t − τ) dτ
     = ∫_0^{∞} h(τ) x(t − τ) dτ   (2.4.4)
Sec. 2.4    Properties of Linear, Time-Invariant Systems
As an example, the system h(t) = u(t) is causal, but the system h(t) = u(t + t0), t0 > 0, is noncausal.
In general, x(t) is called a causal signal if

x(t) = 0,   t < 0
The signal is anticausal if x(t) = 0 for t ≥ 0. Any signal that does not contain any singularities (a delta function or its derivatives) at t = 0 can be written as the sum of a causal part x⁺(t) and an anticausal part x⁻(t); i.e.,

x(t) = x⁺(t) + x⁻(t)
For example, the exponential x(t) = exp[−t] can be written as

x(t) = exp[−t]u(t) + exp[−t]u(−t)

where the first term represents the causal part of x(t) and the second term represents the anticausal part of x(t). Note that multiplying the signal by the unit step ensures that the resulting signal is causal.
To relate stability to the impulse response, let the input be bounded, |x(t)| ≤ B for all t. The magnitude of the output then satisfies

|y(t)| = |∫_{−∞}^{∞} h(τ) x(t − τ) dτ|
       ≤ ∫_{−∞}^{∞} |h(τ)| |x(t − τ)| dτ
       ≤ B ∫_{−∞}^{∞} |h(τ)| dτ   (2.4.6)
Thus, the output is bounded if the impulse response is absolutely integrable, that is, if

∫_{−∞}^{∞} |h(τ)| dτ < ∞   (2.4.7)

This condition is also necessary: for the bounded input x(t − τ) = sgn[h(τ)], the output is

y(t) = ∫_{−∞}^{∞} h(τ) sgn[h(τ)] dτ = ∫_{−∞}^{∞} |h(τ)| dτ
Clearly, if h(t) is not absolutely integrable, y(t) will be unbounded. As an example, the system with h(t) = exp[−t]u(t) is stable, whereas the system with h(t) = u(t) is unstable.
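The contrast between these two systems is easy to see numerically by driving both with the same bounded input (a NumPy sketch, not from the text; the step size and horizon are arbitrary):

```python
import numpy as np

dt, T = 0.01, 50.0
t = np.arange(0, T, dt)

h_stable   = np.exp(-t)          # exp[-t]u(t): absolutely integrable
h_unstable = np.ones_like(t)     # u(t): not absolutely integrable

# Bounded input |x(t)| <= 1 (a constant).
x = np.ones_like(t)

y_stable   = np.convolve(x, h_stable)[: len(t)] * dt
y_unstable = np.convolve(x, h_unstable)[: len(t)] * dt

# The stable system's output settles near the area under h (= 1);
# the unstable system's output grows like t.
assert np.max(np.abs(y_stable)) < 1.1
assert np.max(np.abs(y_unstable)) > T / 2
```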
Example 2.4.1

We will determine whether the systems with the impulse responses shown are causal or noncausal, with or without memory, and stable or unstable:

(i) h1(t) = exp[−2t]u(t) + exp[3t]u(−t) + δ(t − 1)
(ii) h2(t) = −3 exp[2t]u(t)
(iii) h3(t) = 5δ(t + 5)
(iv) h4(t) = 10,   |t| ≤ 1

Systems (i), (iii), and (iv) are noncausal, since, for t < 0, h_i(t) ≠ 0, i = 1, 3, 4. Thus, only system (ii) is causal.

Since h(t) is not of the form Kδ(t) for any of the systems, it follows that all the systems have memory.
To determine which of the systems are stable, we note that

∫_{−∞}^{∞} |h1(τ)| dτ = ∫_0^∞ exp[−2τ] dτ + ∫_{−∞}^0 exp[3τ] dτ + 1 = 11/6

∫_{−∞}^{∞} |h2(τ)| dτ = 3 ∫_0^∞ exp[2τ] dτ, which is unbounded,

∫_{−∞}^{∞} |h3(τ)| dτ = 5

and

∫_{−∞}^{∞} |h4(τ)| dτ = 20

Thus, systems (i), (iii), and (iv) are stable, while system (ii) is unstable.
Sec. 2.5    Systems Described by Differential Equations    67
where D represents the differentiation operator that transforms y(t) into its derivative y'(t). To solve Equation (2.5.2), one needs the N initial conditions

y(t0), y'(t0), ..., y^(N−1)(t0)

where t0 is some instant at which the input x(t) is applied to the system and y^(i)(t) is the ith derivative of y(t).
The integer N is the order or dimension of the system. Note that if the ith derivative of the input x(t) contains an impulse or a derivative of an impulse, then, to solve Equation (2.5.2) for t > t0, it is necessary to know the initial conditions at time t = t0⁻. The reason is that the output y(t) and its derivatives up to order N − 1 can change instantaneously at time t = t0, so initial conditions must be taken just prior to time t0.
Although we assume that the reader has some exposure to solution techniques for ordinary linear differential equations, we work out a first-order case (N = 1) to review the usual method of solving linear, constant-coefficient differential equations.
Example 2.5.1

Consider the first-order LTI system that is described by the first-order differential equation

y'(t) + a y(t) = b x(t)   (2.5.3)

where a and b are arbitrary constants. The complete solution of Equation (2.5.3) consists of the sum of the particular solution, y_p(t), and the homogeneous solution, y_h(t):

y(t) = y_p(t) + y_h(t)   (2.5.4)
The homogeneous differential equation

dy(t)/dt + a y(t) = 0

has a solution of the form

y_h(t) = C exp[−at]
Using the integrating-factor method, we find that the particular solution is

y_p(t) = ∫_{t0}^{t} exp[−a(t − τ)] b x(τ) dτ,   t ≥ t0
Therefore, the general solution is

y(t) = C exp[−at] + ∫_{t0}^{t} exp[−a(t − τ)] b x(τ) dτ,   t ≥ t0   (2.5.5)
Note that in Equation (2.5.5), the constant C has not yet been determined. In order to have the output completely determined, we have to know the initial condition y(t0). Let

y(t0) = y0

Then, from Equation (2.5.5),

y0 = C exp[−at0]

Therefore, for t > t0,
y(t) = y0 exp[−a(t − t0)] + ∫_{t0}^{t} exp[−a(t − τ)] b x(τ) dτ
If, for t < t0, x(t) = 0, then the solution consists of only the homogeneous part:

y(t) = y0 exp[−a(t − t0)],   t < t0

Combining the solutions for t > t0 and t < t0 gives the complete response for all t.
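The closed-form solution for t ≥ t0 can be checked against a direct numerical integration of the differential equation. The sketch below uses forward Euler (an illustration only; a, b, y0, and the step input are arbitrary choices, not from the text):

```python
import numpy as np

a, b, y0 = 2.0, 1.0, 2.0
dt = 1e-4
t = np.arange(0, 3, dt)

# Forward-Euler integration of y'(t) + a y(t) = b x(t) with x(t) = u(t), t0 = 0.
y_num = np.empty_like(t)
y_num[0] = y0
for n in range(len(t) - 1):
    y_num[n + 1] = y_num[n] + dt * (-a * y_num[n] + b * 1.0)

# Closed form from the example with t0 = 0 and x(t) = u(t):
# y(t) = y0 exp[-at] + (b/a)(1 - exp[-at])
y_exact = y0 * np.exp(-a * t) + (b / a) * (1 - np.exp(-a * t))

assert np.max(np.abs(y_num - y_exact)) < 1e-3
```

The first term is the homogeneous (initial-condition) response; the second is the particular (forced) response driven by the step.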
The Integrator. A basic element in the theory and practice of system engineering is the integrator. Mathematically, the input/output relation describing the integrator, shown in Figure 2.5.1, is

y(t) = ∫_{t0}^{t} x(τ) dτ
Figure 2.5.2 Basic components: (a) adder, (b) subtractor, and (c) scalar multiplier.
Example 2.5.2

We will find the differential equation describing the system of Figure 2.5.3. Let us denote the output of the first summer as v1(t), that of the second summer as v2(t), and that of the first integrator as y1(t). Then

v2(t) = y'(t) = y1(t) + 4y(t) + 4x(t)   (2.5.9)

Differentiating this equation and noting that y1'(t) = v1(t), we get

y''(t) = v2'(t) = v1(t) + 4y'(t) + 4x'(t)   (2.5.10)

which, on substituting v1(t) = −y(t) + 2x(t), yields

y''(t) = 4y'(t) − y(t) + 4x'(t) + 2x(t)   (2.5.11)
Consider the linear, time-invariant system described by Equation (2.5.2). This system can be realized in several different ways. Depending on the application, a particular one of these realizations may be preferable. In this section, we derive two different canonical realizations; each canonical form leads to a different realization, but the two are equivalent. To derive the first canonical form, we assume that M = N and rewrite Equation (2.5.2) as

D^N(y − b_N x) + D^{N−1}(a_{N−1} y − b_{N−1} x) + ··· + D(a_1 y − b_1 x) + a_0 y − b_0 x = 0   (2.5.12)
from which the flow diagram in Figure 2.5.4 can be drawn, starting with output y(t) at the right and working to the left. The operator D⁻ᵏ stands for integrating k times.
Another useful simulation diagram can be obtained by converting the Nth-order differential equation into two equivalent equations. Let

(D^N + Σ_{i=0}^{N−1} a_i D^i) v(t) = x(t)   (2.5.14)

Then

y(t) = (Σ_{i=0}^{N} b_i D^i) v(t)   (2.5.15)
To verify that these two equations are equivalent to Equation (2.5.2), we substitute Equation (2.5.15) into the left side of Equation (2.5.2). Because constant-coefficient differential operators commute, we obtain

(D^N + Σ_{i=0}^{N−1} a_i D^i) y(t) = (Σ_{i=0}^{N} b_i D^i)(D^N + Σ_{i=0}^{N−1} a_i D^i) v(t) = (Σ_{i=0}^{N} b_i D^i) x(t)

which is the right-hand side of Equation (2.5.2).
The variables v^{(N−1)}(t), ..., v(t) that are used in constructing y(t) and x(t) in Equations (2.5.15) and (2.5.14), respectively, are produced by successively integrating v^{(N)}(t). The simulation diagram corresponding to Equations (2.5.14) and (2.5.15) is given in Figure 2.5.5. We refer to this form of representation as the second canonical form.
Note that in the second canonical form, the input of any integrator is exactly the same as the output of the preceding integrator. For example, if the outputs of two successive integrators (counting from the right-hand side) are denoted by v_n and v_{n+1}, respectively, then

v_n'(t) = v_{n+1}(t)

This fact is used in Section 2.6.4 to develop state-variable representations that have useful properties.
Example 2.5.3

We obtain a simulation diagram for the LTI system described by the linear constant-coefficient differential equation:
where y(t) is the output and x(t) is the input to the system. To get the first canonical form, we rewrite this equation as
Example 2.5.4

Consider the system governed by

2y'(t) + 4y(t) = 3x(t)

Setting x(t) = δ(t) results in the response y(t) = h(t). Therefore, h(t) should satisfy the differential equation

2h'(t) + 4h(t) = 3δ(t)   (2.5.16)
The homogeneous part of the solution to this first-order differential equation is

h(t) = C exp[−2t]u(t)   (2.5.17)
We predict that the particular solution is zero, the motivation being that h(t) cannot contain a delta function; otherwise, h'(t) would have a derivative of a delta function that is not a part of the right-hand side of Equation (2.5.16). To find the constant C, we substitute Equation (2.5.17) into Equation (2.5.16) to get

2C exp[−2t]δ(t) = 3δ(t)

which, after applying the sampling property of the δ function, is equivalent to

2C δ(t) = 3δ(t)

so that C = 3/2 and h(t) = (3/2) exp[−2t]u(t).
In general, it can be shown that for x(t) = δ(t), the particular solution of Equation (2.5.2) is of the form

h_p(t) = Σ_{i=0}^{M−N} C_i δ^{(i)}(t),   M ≥ N
       = 0,   M < N

where δ^{(i)}(t) is the ith derivative of the δ function. Since, in most cases of practical interest, N > M, it follows that the particular solution is at most a δ function.
Example 2.5.5

Consider the first-order system

y'(t) + 3y(t) = 2x(t)

The system impulse response should satisfy the following differential equation:

h'(t) + 3h(t) = 2δ(t)

The homogeneous solution of this equation is of the form C1 exp[−3t]u(t). Let us assume a particular solution of the form h_p(t) = C2 δ(t). The general solution is therefore

h(t) = C1 exp[−3t]u(t) + C2 δ(t)
Example 2.5.6
In Chapters 4 and 5, we use transform methods to find the impulse resPonse in a much
easier manner.
tion are called the state variables. Given the state of the system at t0 and the input from t0 to t1, we can find both the output and the state at t1. Note that this definition of the state of the system applies only to causal systems (systems in which future inputs cannot affect the output).
v1(t) = y(t)
v2(t) = v1'(t)
v2'(t) = −a0 v1(t) − a1 v2(t) + b0 x(t)

Expressing v'(t) in terms of v(t) yields

[v1'(t); v2'(t)] = [0  1; −a0  −a1][v1(t); v2(t)] + [0; b0] x(t)   (2.6.1)
The output y(t) can be expressed in terms of the state vector v(t) as

y(t) = [1  0] v(t)   (2.6.2)

In this representation, Equation (2.6.1) is called the state equation, and Equation (2.6.2) is called the output equation.
In general, a state-variable description of an N-dimensional, single-input, single-output linear, time-invariant system is written in the form

v'(t) = A v(t) + b x(t)   (2.6.3)
y(t) = c v(t) + d x(t)   (2.6.4)

where v(t) is the N × 1 state vector, A is an N × N matrix, b is an N × 1 column vector, c is a 1 × N row vector, and d is a scalar.
Example 2.6.1

Consider the RLC series circuit shown in Figure 2.6.1. By choosing the voltage v_C(t) across the capacitor and the current i_L(t) through the inductor as the state variables, we obtain the following state equations:

C dv_C(t)/dt = i_L(t)
L di_L(t)/dt = x(t) − R i_L(t) − v_C(t)
y(t) = v_C(t)

In matrix form, with v(t) = [v_C(t); i_L(t)], these become

v'(t) = [0  1/C; −1/L  −R/L] v(t) + [0; 1/L] x(t)
y(t) = [1  0] v(t)

If we assume that C = 1/2 and L = R = 1, we have

v'(t) = [0  2; −1  −1] v(t) + [0; 1] x(t)
y(t) = [1  0] v(t)
y(t)=cv(t)+dx(t) (2.6.6)
The state vector v(t) is an explicit function of time, but it also depends implicitly on the initial state v(t0) = v0, the initial time t0, and the input x(t). Solving the state equations
Sec. 2.6    State-Variable Representation    79
means finding that functional dependence. We can then compute the output y(t) by using Equation (2.6.6).
As a natural generalization of the solution to the scalar first-order differential equation, we would expect the solution to the homogeneous matrix differential equation to be of the form

v(t) = exp[At] v0

where exp[At] is an N × N matrix exponential of functions of time and is defined by the matrix power series

exp[At] = I + At + A²(t²/2!) + ··· + Aᵏ(tᵏ/k!) + ···   (2.6.7)
To prove Equation (2.6.8), we expand exp[At1] and exp[At2] in power series and multiply out terms to obtain

exp[At1] exp[At2] = [I + At1 + A²(t1²/2!) + ···][I + At2 + A²(t2²/2!) + ···]
                  = I + A(t1 + t2) + A²((t1 + t2)²/2!) + ···
                  = exp[A(t1 + t2)]   (2.6.8)
Similarly, differentiating the power series for exp[At] term by term gives

(d/dt) exp[At] = 0 + A + A²t + A³(t²/2!) + ··· + Aᵏ(t^{k−1}/(k−1)!) + ···
               = A[I + At + A²(t²/2!) + ···]
               = A exp[At]   (2.6.9)

and, since A commutes with its own powers,

A exp[At] = exp[At] A   (2.6.10)
Thus,

exp[−At][v'(t) − A v(t)] = exp[−At] b x(t)

Using Equation (2.6.10), we can write the last equation as

(d/dt){exp[−At] v(t)} = exp[−At] b x(t)   (2.6.12)
Integrating Equation (2.6.12) from t0 to t, multiplying by exp[At], and rearranging terms, we obtain the complete solution of Equation (2.6.5) in the form

v(t) = exp[A(t − t0)] v0 + ∫_{t0}^{t} exp[A(t − τ)] b x(τ) dτ   (2.6.13)
The matrix exponential exp[At] is called the state-transition matrix and is denoted by Φ(t). The complete output response y(t) is obtained by substituting Equation (2.6.13) into Equation (2.6.6):

y(t) = c Φ(t − t0) v0 + ∫_{t0}^{t} c Φ(t − τ) b x(τ) dτ + d x(t)   (2.6.14)

Using the sifting property of the unit impulse δ(t), we can rewrite Equation (2.6.14) as

y(t) = c Φ(t − t0) v0 + ∫_{t0}^{t} [c Φ(t − τ) b + d δ(t − τ)] x(τ) dτ   (2.6.15)
Observe that the complete solution is the sum of two terms. The first term is the response when the input x(t) is zero and is called the zero-input response. The second term is the response when the initial state v0 is zero and is called the zero-state response. Further inspection of the zero-state response reveals that this term is the convolution of the input x(t) with cΦ(t)b + dδ(t). Comparing this result with Equation (2.3.3), we conclude that the impulse response of the system is

h(t) = c Φ(t) b + d δ(t)   (2.6.16)
Example 2.6.2

Consider the linear, time-invariant, continuous-time system described by the differential equation

y''(t) + y'(t) − 2y(t) = x(t)

With the state variables v1(t) = y(t) and v2(t) = y'(t), the state and output equations are

v'(t) = [0  1; 2  −1] v(t) + [0; 1] x(t)
y(t) = [1  0] v(t)

so that

A = [0  1; 2  −1],   b = [0; 1],   c = [1  0],   d = 0
We have to calculate Φ(t). The powers of the matrix A are

A² = [2  −1; −2  3],   A³ = [−2  3; 6  −5], ...

so that

Φ(t) = I + At + A²(t²/2!) + A³(t³/3!) + ···

     = [ 1 + t² − t³/3 + ···        t − t²/2 + t³/2 − ···
         2t − t² + t³ − ···         1 − t + 3t²/2 − 5t³/6 + ··· ]
The impulse response of the system is
h(t) = cΦ(t)b = t − t²/2 + t³/2 − ···
Example 2.6.3

Given the continuous-time system

v'(t) = [−1  0  0; 0  −4  4; 0  −1  0] v(t) + b x(t)
y(t) = [−1  2  0] v(t)

we compute the transition matrix of the system. By using Equation (2.6.7), we have

exp[At] = [1  0  0; 0  1  0; 0  0  1] + [−t  0  0; 0  −4t  4t; 0  −t  0] + [t²/2  0  0; 0  6t²  −8t²; 0  2t²  −2t²] + ···
It is clear from Equations (2.6.13), (2.6.15), and (2.6.16) that in order to determine v(t), y(t), or h(t), we have to first obtain exp[At]. The preceding two examples demon-
strate how to use the power-series method to find Φ(t) = exp[At]. Although the method is straightforward, the major problem is that it is usually not possible to recognize a closed form corresponding to the resulting series. Another method that can be used comes from the Cayley-Hamilton theorem, which states that any arbitrary N × N matrix A satisfies its own characteristic equation

det(A − λI) = 0

The Cayley-Hamilton theorem gives a means of expressing any power of the matrix A in terms of a linear combination of Aᵐ for m = 0, 1, ..., N − 1.
Example 2.6.4

Given a matrix A with the characteristic equation

det(A − λI) = λ² − 7λ + 6 = 0

it follows that the matrix satisfies

A² − 7A + 6I = 0

Therefore, A² can be expressed in terms of A and I by

A² = 7A − 6I   (2.6.17)

Also, A⁻¹ can be found by multiplying Equation (2.6.17) by A⁻¹ and rearranging:

A⁻¹ = (1/6)[7I − A]
It follows from our previous discussion that we can use the Cayley-Hamilton theorem to write exp[At] as a linear combination of the terms Aⁱ, i = 0, 1, 2, ..., N − 1, so that

exp[At] = Σ_{i=0}^{N−1} γ_i(t) Aⁱ   (2.6.18)

If A has distinct eigenvalues λ_i, we can obtain γ_i(t) by solving the set of equations

exp[λ_j t] = Σ_{i=0}^{N−1} γ_i(t) λ_jⁱ,   j = 1, ..., N   (2.6.19)
For ihe case of repeated eigenvalues, the procedure is a little more complex, as we will
learn later. (See Appendix C for details.)
Example 2.6.5

Suppose that we want to find the transition matrix for the system with

A = [−3  1  0; 1  −3  0; 0  0  −3]

using the Cayley-Hamilton method. First, we calculate the eigenvalues of A as λ1 = −2, λ2 = −3, and λ3 = −4. It follows from the Cayley-Hamilton theorem that we can write exp[At] as

exp[At] = γ0(t)I + γ1(t)A + γ2(t)A²

where the coefficients γ0(t), γ1(t), and γ2(t) are the solution of the set of equations

exp[−2t] = γ0(t) − 2γ1(t) + 4γ2(t)
exp[−3t] = γ0(t) − 3γ1(t) + 9γ2(t)
exp[−4t] = γ0(t) − 4γ1(t) + 16γ2(t)

from which

γ0(t) = 3 exp[−4t] − 8 exp[−3t] + 6 exp[−2t]
γ1(t) = (5/2) exp[−4t] − 6 exp[−3t] + (7/2) exp[−2t]
γ2(t) = (1/2)(exp[−4t] − 2 exp[−3t] + exp[−2t])

Thus, exp[At] is

exp[At] = γ0(t)[1  0  0; 0  1  0; 0  0  1] + γ1(t)[−3  1  0; 1  −3  0; 0  0  −3] + γ2(t)[10  −6  0; −6  10  0; 0  0  9]

        = [ (1/2)(exp[−2t] + exp[−4t])   (1/2)(exp[−2t] − exp[−4t])   0
            (1/2)(exp[−2t] − exp[−4t])   (1/2)(exp[−2t] + exp[−4t])   0
            0                            0                            exp[−3t] ]
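The eigenvalue-interpolation procedure generalizes directly to code. The sketch below (a numerical illustration, not the book's notation) solves the Vandermonde system for the γ coefficients and checks the result against the closed form above, using a symmetric 3 × 3 matrix with eigenvalues −2, −3, −4 as in Example 2.6.5:

```python
import numpy as np

def expm_cayley_hamilton(A, t):
    # Cayley-Hamilton / eigenvalue-interpolation method.
    # Assumes the eigenvalues of A are distinct.
    lam = np.linalg.eigvals(A)
    N = len(lam)
    # Solve exp[lam_j t] = sum_i gamma_i(t) lam_j^i  (Vandermonde system).
    V = np.vander(lam, N, increasing=True)
    gamma = np.linalg.solve(V, np.exp(lam * t))
    Phi = np.zeros_like(A, dtype=float)
    Ak = np.eye(N)
    for g in gamma:
        Phi += np.real(g) * Ak
        Ak = Ak @ A
    return Phi

A = np.array([[-3.0, 1.0, 0.0],
              [1.0, -3.0, 0.0],
              [0.0, 0.0, -3.0]])
t = 0.7
Phi = expm_cayley_hamilton(A, t)

# Closed form worked out in the example:
e2, e3, e4 = np.exp(-2 * t), np.exp(-3 * t), np.exp(-4 * t)
expected = np.array([[(e2 + e4) / 2, (e2 - e4) / 2, 0.0],
                     [(e2 - e4) / 2, (e2 + e4) / 2, 0.0],
                     [0.0, 0.0, e3]])
assert np.allclose(Phi, expected)
```

Unlike the truncated power series, this form yields exp[At] exactly (up to round-off) for any t.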
Example 2.6.6

Let us repeat Example 2.6.5 for the system with

A = [−1  0  0; 0  −4  4; 0  −1  0]

The eigenvalues of A are λ1 = −1 and λ2 = λ3 = −2. Because the eigenvalue −2 is repeated, the set of equations for the γ_i(t) becomes

exp[−t] = γ0(t) − γ1(t) + γ2(t)
exp[−2t] = γ0(t) − 2γ1(t) + 4γ2(t)
t exp[−2t] = γ1(t) − 4γ2(t)

where the last equation is obtained by differentiating the second with respect to the eigenvalue. Solving for γ(t) yields

γ0(t) = 4 exp[−t] − 3 exp[−2t] − 2t exp[−2t]
γ1(t) = 4 exp[−t] − 4 exp[−2t] − 3t exp[−2t]
γ2(t) = exp[−t] − exp[−2t] − t exp[−2t]

so that

exp[At] = [ exp[−t]   0                             0
            0         exp[−2t] − 2t exp[−2t]        4t exp[−2t]
            0         −t exp[−2t]                   exp[−2t] + 2t exp[−2t] ]
Other methods for calculating Φ(t) are also available. The reader should keep in mind that no one method is easiest for all applications.
The state-transition matrix possesses several properties, some of which are as follows:

1. Transition property:

Φ(t2 − t0) = Φ(t2 − t1)Φ(t1 − t0)   (2.6.21)

2. Inversion property:

Φ(t0 − t) = Φ⁻¹(t − t0)   (2.6.22)

3. Separation property:

Φ(t − t0) = Φ(t)Φ⁻¹(t0)   (2.6.23)
These properties can be easily established by using the properties of the matrix exponential exp[At], namely, Equations (2.6.8) and (2.6.9). For instance, the transition property follows from

Φ(t2 − t0) = exp[A(t2 − t0)]
           = exp[A(t2 − t1 + t1 − t0)]
           = exp[A(t2 − t1)] exp[A(t1 − t0)]
           = Φ(t2 − t1)Φ(t1 − t0)

The inversion property follows directly from Equation (2.6.8). Finally, the separation property is obtained by substituting t2 = t and t1 = 0 in Equation (2.6.21) and then using the inversion property.
In Section 2.5, we discussed techniques for deriving two different canonical simulation diagrams for an LTI system. These diagrams can be used to develop two state-variable representations. The state equation in the first canonical form is obtained by choosing as a state variable the output of each integrator in Figure 2.5.4. In this case, the state equations have the form

y(t) = v1(t) + b_N x(t)
v1'(t) = −a_{N−1} y(t) + v2(t) + b_{N−1} x(t)
v2'(t) = −a_{N−2} y(t) + v3(t) + b_{N−2} x(t)
⋮
By using the first equation in Equation (2.6.24) to eliminate y(t), the differential equations for the state variables can be written in matrix form. The resulting matrix A contains ones above the diagonal, and the first column of A consists of the negatives of the coefficients a_i. Also, the output y(t) can be written in terms of the state vector v(t) as

y(t) = [1  0  ···  0] v(t) + b_N x(t)
Example 2.6.7

The first-canonical-form state-variable representation of the LTI system described by

2y''(t) + 4y'(t) + 3y(t) = 4x'(t) + 2x(t)

is obtained by first normalizing the leading coefficient,

y''(t) + 2y'(t) + (3/2)y(t) = 2x'(t) + x(t)

so that

[v1'(t); v2'(t)] = [−2  1; −3/2  0][v1(t); v2(t)] + [2; 1] x(t)

y(t) = [1  0][v1(t); v2(t)]
Example 2.6.8

Consider an LTI system described by Equation (2.5.2) with M = N. Choosing the outputs of the integrators in the second-canonical-form simulation diagram as the state variables gives

v1'(t) = v2(t)
v2'(t) = v3(t)
⋮
v'_{N−1}(t) = v_N(t)
v_N'(t) = −a0 v1(t) − a1 v2(t) − ··· − a_{N−1} v_N(t) + x(t)

In matrix form,

v'(t) = [ 0    1    0   ···  0
          0    0    1   ···  0
          ⋮
          −a0  −a1  −a2 ··· −a_{N−1} ] v(t) + [0; 0; ⋮; 1] x(t)   (2.6.28)

y(t) = [(b0 − a0 b_N)  (b1 − a1 b_N)  ···  (b_{N−1} − a_{N−1} b_N)] v(t) + b_N x(t)   (2.6.29)
This representation is called the second canonical form. Note that here the ones are above the diagonal, but the a's go across the bottom row of the N × N matrix A. The second canonical state-representation form can be written down directly upon inspection of the original differential equation describing the system.
Example 2.6.9

The second canonical form of the state equation of the system described by

y'''(t) − 2y''(t) + y'(t) + 4y(t) = x'''(t) + x(t)

is

[v1'(t); v2'(t); v3'(t)] = [0  1  0; 0  0  1; −4  −1  2][v1(t); v2(t); v3(t)] + [0; 0; 1] x(t)

y(t) = [−3  −1  2][v1(t); v2(t); v3(t)] + x(t)
The first and second canonical forms are only two of many possible state-variable representations of a continuous-time system. In other words, the state-variable representation of a continuous-time system is not unique. For an N-dimensional system,
there are an infinite number of state models that represent that system. However, all N-dimensional state models are equivalent in the sense that they have exactly the same input/output relationship. Mathematically, a set of state equations with state vector v(t) can be transformed to a new set with state vector q(t) by using a transformation P such that

q(t) = P v(t)   (2.6.30)

where P is an invertible N × N matrix, so that v(t) can be obtained from q(t). It can be shown (see Problem 2.34) that the new state and output equations are

q'(t) = A1 q(t) + b1 x(t)   (2.6.31)
y(t) = c1 q(t) + d x(t)   (2.6.32)

where A1 = PAP⁻¹, b1 = Pb, and c1 = cP⁻¹. The only restriction on P is that its inverse exist. Since there are an infinite number of such matrices, we conclude that we can generate an infinite number of equivalent N-dimensional state models.
If we envisage v(t) as a vector with N coordinates, the transformation in Equation (2.6.30) represents a coordinate transformation that takes the old state coordinates and maps them to the new state coordinates. The new state model can have one or more of the coefficients A1, b1, and c1 in a special form. Such forms result in a significant simplification in the solution of certain classes of problems; examples of these forms are the diagonal form and the two canonical forms discussed in this chapter.
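That equivalent state models share the same input/output behaviour can be verified numerically. The sketch below (an illustration; the model and the transformation P are arbitrary choices) checks that a similarity transformation preserves both the eigenvalues and the quantities c Aᵏ b, which determine the impulse response:

```python
import numpy as np

# A state model (A, b, c) and an invertible transformation P.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])

P = np.array([[1.0, 1.0],
              [1.0, 2.0]])

# Transformed model: A1 = P A P^-1, b1 = P b, c1 = c P^-1
Pinv = np.linalg.inv(P)
A1, b1, c1 = P @ A @ Pinv, P @ b, c @ Pinv

# Equivalence: both models have the same eigenvalues ...
assert np.allclose(np.sort(np.linalg.eigvals(A).real),
                   np.sort(np.linalg.eigvals(A1).real))

# ... and the same parameters c A^k b, which fix the impulse response.
for k in range(5):
    m  = c @ np.linalg.matrix_power(A, k) @ b
    m1 = c1 @ np.linalg.matrix_power(A1, k) @ b1
    assert np.allclose(m, m1)
```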
Example 2.6.10

Consider the system

[v1'(t); v2'(t)] = [4  2; 2  4][v1(t); v2(t)] + [1; 1] x(t)

We need to find the state equations for this system in terms of the new state variables q1 and q2, where

[q1(t); q2(t)] = [1  1; 1  −1][v1(t); v2(t)]

The equation for the state vector q is given by Equation (2.6.31), where

A1 = PAP⁻¹ = [1  1; 1  −1][4  2; 2  4]·(1/2)[1  1; 1  −1]
           = [6  0; 0  2]

and

b1 = Pb = [1  1; 1  −1][1; 1] = [2; 0]
Example 2.6.11

Let us find the matrix P that transforms the second-canonical-form state equations of a given system into the first-canonical-form state equations. Writing out the matrix equation PA = A1P element by element gives four equations in the four unknown entries p_ij of P. The reader will immediately recognize that the second and third equations are identical; similarly, the first and fourth equations are identical. Hence, two equations may be discarded. This leaves us with only two equations and four unknowns. Note, however, that the constraint Pb = b1 provides the two additional equations needed. Solving the four equations simultaneously yields P.
Example 2.6.12

If A1 is a diagonal matrix with entries λ_i, it can easily be verified that the transition matrix exp[A1t] is also a diagonal matrix with entries exp[λ_i t] and is hence easily evaluated. We can use this result to find the transition matrix for any other representation with A = PA1P⁻¹, since

exp[At] = I + At + (1/2!)A²t² + ···
        = I + PA1P⁻¹t + (1/2!)PA1²P⁻¹t² + ··· = P exp[A1t] P⁻¹

For the system of Example 2.6.10,

exp[A1t] = [exp(6t)  0; 0  exp(2t)]

so that

exp[At] = P exp[A1t] P⁻¹
        = (1/2)[ exp(6t) + exp(2t)   exp(6t) − exp(2t)
                 exp(6t) − exp(2t)   exp(6t) + exp(2t) ]
2.6.6 Stability Considerations
Earlier in this section, we found a general expression for the state vector v(t) of the system with state matrix A and initial state v0. The solution of this system consists of two components, the first (zero-input) due to the initial state v0 and the second (zero-state) due to the input x(t). For the continuous-time system to be stable, it is required that not only the output, but also all signals internal to the system, remain bounded when a bounded input is applied. If at least one of the state variables grows without bound, then the system is unstable.
Since the set of eigenvalues of the matrix A determines the behavior of exp[At], and since exp[At] is used in evaluating the two components in the expression for the state vector v(t), we expect the eigenvalues of A to play an important role in determining the stability of the system. Indeed, there exists a technique to test the stability of continuous-time systems without solving for the state vector. This technique follows from the Cayley-Hamilton theorem. We saw earlier that, using this theorem, we can write the elements of exp[At], and hence the components of the state vector, as functions of the exponentials exp[λ1t], exp[λ2t], ..., exp[λNt], where λ_i, i = 1, 2, ..., N, are the eigenvalues of the matrix A. For these terms to be bounded, the real part of λ_i, i = 1, 2, ..., N, must be negative. Thus, the condition for stability of a continuous-time system is that all eigenvalues of the state matrix A should have negative real parts.
The foregoing conclusion also follows from the fact that the eigenvalues of A are
identical with the roots of the characteristic equation associated with the differential
equation describing the model.
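This eigenvalue test is straightforward to carry out numerically. A minimal sketch (Python with NumPy; both matrices are assumed examples, the second being a realization of the kind used in Example 2.6.14):

```python
import numpy as np

def is_stable(A):
    """Continuous-time stability test: all eigenvalues of the state
    matrix A must have strictly negative real parts."""
    return bool(np.all(np.real(np.linalg.eigvals(A)) < 0))

# Assumed examples: a stable system (poles at -1, -2) and an
# unstable one (poles at 1 and -2).
A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])
A_unstable = np.array([[1.0, 0.0], [-3.0, -2.0]])

assert is_stable(A_stable)
assert not is_stable(A_unstable)
```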
Example 2.6.13

Consider the continuous-time system whose state matrix is

A = [−2  −1
      0   1]

The eigenvalues of A are λ₁ = −2 and λ₂ = 1, and hence, the system is unstable.
Example 2.6.14

Consider the system described by the equations

v′(t) = [ 1   0] v(t) + [1] x(t)
        [−3  −2]        [0]

y(t) = [1  1] v(t)

A simulation diagram of this system is shown in Figure 2.6.2. The system can thus be considered as the cascade of the two systems shown inside the dashed lines.

The eigenvalues of A are λ₁ = 1 and λ₂ = −2. Hence, the system is unstable. The transition matrix of the system is

exp[At] = γ₀(t)I + γ₁(t)A

where γ₀(t) and γ₁(t) are the solutions of exp[λᵢt] = γ₀(t) + γ₁(t)λᵢ, i = 1, 2, so that

exp[At] = [       exp[t]               0
           −exp[t] + exp[−2t]      exp[−2t]]
Let us now look at the response of the system to a unit-step input when the system is initially (at time t₀ = 0) relaxed, i.e., the initial state vector v₀ is the zero vector. The output of the system is then

y(t) = ∫₀ᵗ c exp[A(t − τ)] b x(τ) dτ

     = (1/2 − (1/2) exp[−2t]) u(t)

The state vector at any time t > 0 is
v(t) = ∫₀ᵗ exp[A(t − τ)] b dτ = [        exp[t] − 1
                                 3/2 − exp[t] − (1/2) exp[−2t]]

Figure 2.6.2  Simulation diagram for the system of Example 2.6.14 as a cascade of two subsystems.
Differentiating y(t) = v₁(t) + v₂(t) and using the state equations, we obtain

y′(t) = v′₁(t) + v′₂(t)

      = −2y(t) + x(t)

The solution of the last first-order differential equation does not contain any terms that grow without bound. It is thus clear that the unstable term exp[t] that appears in the state variables v₁(t) and v₂(t) does not appear in the output y(t). This term has, in some sense, been "cancelled out" at the output of the second system.
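The cancellation can be illustrated numerically. The sketch below (Python with NumPy) integrates an assumed realization of Example 2.6.14 with a crude forward-Euler scheme: the first state variable grows like exp[t], while the output settles at 1/2.

```python
import numpy as np

# Assumed realization from Example 2.6.14: v' = A v + b x, y = c v,
# driven by a unit step, integrated with simple forward Euler.
A = np.array([[1.0, 0.0], [-3.0, -2.0]])
b = np.array([1.0, 0.0])
c = np.array([1.0, 1.0])

dt, T = 1e-4, 5.0
v = np.zeros(2)
for _ in range(int(T / dt)):
    v = v + dt * (A @ v + b * 1.0)   # x(t) = u(t) = 1 for t >= 0

y = c @ v
# The state contains the unstable mode exp(t), but the output tends to 1/2.
assert v[0] > 100.0                  # v1(t) = exp(t) - 1 grows without bound
assert abs(y - 0.5) < 1e-2           # y(t) -> (1/2)(1 - exp(-2t)) -> 1/2
```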
2.7 SUMMARY
• A continuous-time system is a transformation that operates on a continuous-time input signal to produce a continuous-time output signal.
• A system is linear if it follows the principle of superposition.
• A system is time invariant if a time shift in the input signal causes an identical time shift in the output.
• A system is memoryless if the present value of the output y(t) depends only on the present value of the input x(t).
• A system is causal if the output y(t₀) depends only on values of the input x(t) for t ≤ t₀.
• A system is invertible if, by observing the output, we can determine the input.
• A system is BIBO stable if bounded inputs result in bounded outputs.
• A linear, time-invariant (LTI) system is completely characterized by its impulse response h(t).
• The output y(t) of an LTI system is the convolution of the input x(t) with the impulse response of the system:

  y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
• An LTI system is causal if h(t) = 0 for t < 0. The system is stable if and only if

  ∫_{−∞}^{∞} |h(τ)| dτ < ∞

• An LTI system is described by a linear, constant-coefficient differential equation of the form

  Σ_{i=0}^{N} aᵢ (dⁱy(t)/dtⁱ) = Σ_{i=0}^{M} bᵢ (dⁱx(t)/dtⁱ)

• The matrix A in the second canonical form contains ones above the diagonal, and the aᵢ's go across the bottom row.
• A continuous-time system is stable if and only if all the eigenvalues of the matrix A have negative real parts.
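The convolution relation summarized above can be checked by approximating the integral with a Riemann sum. A sketch with assumed example signals x(t) = u(t) and h(t) = exp[−t]u(t), whose convolution is (1 − exp[−t])u(t):

```python
import numpy as np

# Discrete approximation of y(t) = integral of x(tau) h(t - tau) dtau.
# Assumed example: x(t) = u(t), h(t) = exp(-t)u(t), so that
# y(t) = (1 - exp(-t))u(t).
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = np.ones_like(t)          # unit step on t >= 0
h = np.exp(-t)               # impulse response exp(-t)u(t)

y = np.convolve(x, h)[: len(t)] * dt   # Riemann-sum approximation

assert np.allclose(y, 1.0 - np.exp(-t), atol=2e-3)
```

The step size dt controls the accuracy of the approximation; halving dt roughly halves the error.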
2.9 PROBLEMS
2.1. Determine whether the systems described by the following input/output relationships are linear or nonlinear, causal or noncausal, time invariant or time variant, and memoryless or with memory.
(a) y(t) = x(t) + 3
(b) vG) = bz(t) + 3x(,)
(c) Y(t) = Ar(tl
(d) y(t) = AaQ)
(e) y(r) =
{:,;l;,, ;::
(f) y(t) = ∫_{−∞}^{t} x(τ) dτ
(g) y(t) = ∫₀ᵗ x(τ) dτ,  t > 0
(h) y(t) = x(t − 5)
(i) y(t) = exp[x(t)]
(j) y(t) = x(t) x(t − 2)
(k) y(r) = il,')i,,nro,
0 ++zy(t)=bz(t)
Figure P2.4
2.5. Use the convolution integral to find the response y(t) of an LTI system with impulse response h(t) to input x(t):
(a) x(t) = exp[−t]u(t),  h(t) = exp[−at]u(t)
(b) x(t) = t exp[−t]u(t),  h(t) = u(t)
(c) x(t) = exp[−t]u(t) + u(t),  h(t) = u(t)
(d) x(t) = u(t),  h(t) = exp[−2t]u(t) + δ(t)
(e) x(t) = exp[−at]u(t),  h(t) = u(t) − exp[−at]u(t − b)
(f) x(t) = δ(t − 1) + exp[−t]u(t),  h(t) = exp[−at]u(t)
2.6. The cross correlation of two different signals x(t) and y(t) is defined as

R_xy(τ) = ∫_{−∞}^{∞} x(t) y(t + τ) dt

Figure P2.7
2.8. The autocorrelation function is a special case of the cross correlation with y(t) = x(t). In this case,

R_x(τ) = R_xx(τ) = ∫_{−∞}^{∞} x(t) x(t + τ) dt

(a) Show that

R_x(0) = E, the energy of x(t)

(b) Show that

R_x(τ) ≤ R_x(0)  (use the Schwarz inequality)
2.9. Consider an LTI system whose impulse response is h(t). Let x(t) and y(t) be the input and output of the system, respectively. Show that

R_y(τ) = R_x(τ) * h(τ) * h(−τ)

2.10. The input to an LTI system with impulse response h(t) is the complex exponential exp[jωt]. Show that the corresponding output is H(ω) exp[jωt], where

H(ω) = ∫_{−∞}^{∞} h(τ) exp[−jωτ] dτ
2.11. Determine whether the continuous-time LTI systems characterized by the following impulse responses are causal or noncausal, and stable or unstable. Justify your answers.
(a) h(t) = exp[−3t] sin(t) u(t)
(b) h(t) = exp[4t] u(−t)
(c) h(t) = (−t) exp[−t] u(−t)
(d) h(t) = exp[−|t|]
(e) h(t) = (t − 2) exp[−|2t|]
(f) h(t) = rect(t/2)
(g) h(t) = δ(t) + exp[−3t] u(t)
(h) h(t) = δ′(t) + exp[−2t]
(i) h(t) = δ′(t) + exp[−|2t|]
(j) h(t) = (1 − t) rect(t/3)
2.12. For each of the following impulse responses, determine whether it is invertible. For those that are, find the inverse system.
(a) h(t) = δ(t + 2)
(b) h(t) = u(t)
(c) h(t) = δ(t − 3)
(d) h(t) = rect(t/ )
(e) h(t) = exp[−t] u(t)
2.13. Consider the two systems shown in Figures P2.13(a) and P2.13(b). System I operates on x(t) to give an output y₁(t) that is optimum according to some desired criterion. System II first operates on x(t) with an invertible operation (subsystem I) to obtain z(t) and then operates on z(t) to obtain an output y₂(t) by an operation that is optimum according to the same criterion as in system I.
(a) Can system II perform better than system I? (Remember the assumption that system I is the optimum operation on x(t).)
(b) Replace the optimum operation on z(t) by two subsystems, as shown in Figure P2.13(c). Now the overall system works as well as system I. Can the new system be better than system II? (Remember that system II performs the optimum operation on z(t).)
(c) What do you conclude from parts (a) and (b)?
(d) Does the system have to be linear for part (c) to be true?
Figure P2.13
Figure P2.14

2.14. Find the impulse response of the overall system shown in Figure P2.14, where

h₁(t) = exp[−2t]u(t)
h₂(t) = exp[−7t]u(t)
h₃(t) = exp[−t]u(t)
h₄(t) = δ(t)
h₅(t) = exp[−3t]u(t)
2.15. The input x(t) and output y(t) of a linear, time-invariant system are as shown in Figure P2.15. Sketch the responses to the following inputs:
(a) x(t + 2)
(b) 2x(t) + 3x(−t)
(c) x(t − 1/2) − x(t + 1/2)
(d) dx(t)/dt

Figure P2.15
2.16. Find the impulse response of the initially relaxed system shown in Figure P2.16.

Figure P2.16
2.17. Find the impulse response of the initially relaxed system shown in Figure P2.17. Use this result to find the output of the system when the input is
(a) u(t − 2)
(b) u(t + 1)
(c) rect(t/a), where a = 1/RC

Figure P2.17
2.18. Repeat Problem 2.17 for the circuit shown in Figure P2.18.

Figure P2.18
2.19. Show that any system that can be described by a differential equation of the form

Σ_{i=0}^{N} aᵢ(t) (dⁱy(t)/dtⁱ) = Σ_{i=0}^{M} bᵢ(t) (dⁱx(t)/dtⁱ)

is linear. (Assume zero initial conditions.)

2.20. Show that any system that can be described by the differential equation in Problem 2.19 is time invariant. Assume that all the coefficients are constants.
2.21. A vehicle of mass M is traveling on a paved surface with coefficient of friction k. Assume that the position of the car at time t, relative to some reference, is y(t) and the driving force applied to the vehicle is x(t). Use Newton's second law of motion to write the differential equation describing the system. Show that the system is an LTI system. Can this system be time varying?
2.22. Consider a pendulum of length l and mass M, as shown in Figure P2.22. The displacement from the equilibrium position is lθ; hence, the acceleration is lθ″. The input x(t) is the force applied to the mass M tangential to the direction of motion of the mass. The restoring force is the tangential component Mg sin θ. Neglect the mass of the rod and the air resistance. Use Newton's second law of motion to write the differential equation describing the system. Is this system linear? As an approximation, assume that θ is small enough that sin θ ≈ θ. Is the system now linear?
2.23. For the system realized by the interconnection shown in Figure P2.23, find the differential equation relating the input x(t) to the output y(t).

Figure P2.23

2.24. For the system simulated by the diagram shown in Figure P2.24, determine the differential equation describing the system.

Figure P2.24

Figure P2.25
2.26. Given an LTI system described by

y‴(t) + 3y″(t) − y′(t) − 2y(t) = 3x′(t) − x(t)

find the first- and second-canonical-form simulation diagrams.
2.27. Find the impulse response of the initially relaxed system shown in Figure P2.27.

Figure P2.27
2.28. Find the state equations in the first and second canonical forms for the system described by the differential equation

y″(t) + 2.5y′(t) + y(t) = x′(t) + x(t)
2.29. For the circuit shown in Figure P2.29, choose the inductor current and the capacitor voltage as state variables, and write the state equations.

Figure P2.29  (L = 2 H)
2.30. Repeat Problem 2.28 for the system described by the differential equation

2.31. Calculate exp[At] for the following matrices. Use both the series-expansion and Cayley-Hamilton methods.
(a) A = [−1   0   0
          0  −2   0
          0   0  −3]

(b) A = [−1   2  −1
          0  −1   0
          0   0  −1]

(c) A = [−1   1  −1
          0   1  −1
          0   0  −3]
2.32. Using state-variable techniques, find the impulse response for the system described by the differential equation

y″(t) + 6y′(t) + 8y(t) = x′(t) + x(t)

Assume that the system is initially relaxed, i.e., y(0) = 0 and y′(0) = 0.

2.33. Use state-variable techniques to find the impulse response of the system described by

y″(t) + 7y′(t) + 12y(t) = x″(t) − 3x′(t) + 4x(t)

Assume that the system is initially relaxed, i.e., y(0) = 0 and y′(0) = 0.
2.34. Consider the system described by

v′(t) = A v(t) + b x(t)
y(t) = c v(t) + d x(t)

Select the change of variable given by

z(t) = P v(t)

where P is a square matrix with inverse P⁻¹. Show that the new state equations are

z′(t) = A₁ z(t) + b₁ x(t)
y(t) = c₁ z(t) + d₁ x(t)

where

A₁ = PAP⁻¹
b₁ = Pb
c₁ = cP⁻¹
d₁ = d
2.35. Consider the system described by the differential equation

y″(t) + 3y′(t) + 2y(t) = x′(t) − x(t)

(a) Write the state equations in the first canonical form.
(b) Write the state equations in the second canonical form.
(c) Use Problem 2.34 to find the matrix P which will transform the first canonical form into the second.
(d) Find the state equations if we transform the second canonical form using the matrix
P = [ 1  −1
     −1  −1]
Chapter 3
Fourier Series
3.1 INTRODUCTION
As we have seen in the previous chapter, we can obtain the response of a linear system to an arbitrary input by representing the input in terms of basic signals. The specific signals used were the shifted δ-functions. Often, it is convenient to choose a set of orthogonal waveforms as the basic signals. There are several reasons for doing this.
First, it is mathematically convenient to represent an arbitrary signal as a weighted sum of orthogonal waveforms, since many of the calculations involving signals are simplified by using such a representation. Second, it is possible to visualize the signal as a vector in an orthogonal coordinate system, with the orthogonal waveforms being coordinates.
Finally, representation in terms of orthogonal basis functions provides a convenient means of solving for the response of linear systems to arbitrary inputs. In this chapter, we will consider the representation of an arbitrary signal over a finite interval in terms of some set of orthogonal basis functions.
For periodic signals, a convenient choice for an orthogonal basis is the set of harmonically related complex exponentials. The choice of these waveforms is appropriate, since such complex exponentials are periodic, are relatively easy to manipulate mathematically, and yield results that have a meaningful physical interpretation. The representation of a periodic signal in terms of complex exponentials, or equivalently, in terms of sine and cosine waveforms, leads to the Fourier series that are used extensively in all fields of science and engineering. The Fourier series is named after the French physicist Jean Baptiste Fourier (1768-1830), who was the first to suggest that periodic signals could be represented by a sum of sinusoids.
So far, we have only considered time-domain descriptions of continuous-time signals and systems. In this chapter, we introduce the concept of frequency-domain representations. We learn how to decompose periodic signals into their frequency components. The results can be extended to aperiodic signals, as will be shown in Chapter 4.
Periodic signals occur in a wide range of physical phenomena. A few examples of such signals are acoustic and electromagnetic waves of most types, the vertical displacement of a mechanical pendulum, the periodic vibrations of musical instruments, and the beautiful patterns of crystal structures.
In the present chapter, we discuss basic concepts, facts, and techniques in connection with Fourier series. Illustrative examples and some important engineering applications are included. We begin by considering orthogonal basis functions in Section 3.2.
In Section 3.3, we consider periodic signals and develop procedures for resolving such signals into a linear combination of complex exponential functions. In Section 3.4, we discuss the sufficient conditions for a periodic signal to be represented in terms of a Fourier series. These conditions are known as the Dirichlet conditions. Fortunately, all the periodic signals that we deal with in practice obey these conditions. As with any other mathematical tool, Fourier series possess several useful properties. These properties are developed in Section 3.5. Understanding such properties helps us move easily from the time domain to the frequency domain and vice versa. In Section 3.6, we use the properties of the Fourier series to find the response of LTI systems to periodic signals. The effects of truncating the Fourier series and the Gibbs phenomenon are discussed in Section 3.7. We will see that whenever we attempt to reconstruct a discontinuous signal from its Fourier series, we encounter a strange behavior in the form of signal overshoot at the discontinuities. This overshoot effect does not go away even when we increase the number of terms used in reconstructing the signal.
3.2 ORTHOGONAL REPRESENTATIONS OF SIGNALS

A set of signals φⱼ(t) is orthogonal over an interval a ≤ t ≤ b if

∫ₐᵇ φⱼ(t) φ*ₖ(t) dt = Eⱼ δ(j − k)          (3.2.1)

where φ*ₖ(t) stands for the complex conjugate of the signal and δ(j − k), called the Kronecker delta function, is defined as

δ(j − k) = { 1,  j = k
           { 0,  j ≠ k          (3.2.2)

If Eⱼ = 1 for all j, the set is orthonormal.
Example 3.2.1

The signals sin nt, n = 1, 2, 3, ..., are mutually orthogonal over the interval −π ≤ t ≤ π, since

∫_{−π}^{π} sin mt sin nt dt = { π,  m = n
                              { 0,  m ≠ n

Since the energy in each signal equals π, the following set of signals constitutes an orthonormal set over the interval −π ≤ t ≤ π:

sin t/√π,  sin 2t/√π,  sin 3t/√π, ...
Example 3.2.2

The signals φₖ(t) = exp[j(2πkt)/T], k = 0, ±1, ±2, ..., form an orthogonal set on the interval (0, T), and hence, the signals (1/√T) exp[j(2πkt)/T] constitute an orthonormal set over the interval 0 ≤ t ≤ T.
Example 3.2.3

The three signals shown in Figure 3.2.1 are orthonormal, since they are mutually orthogonal and each has unit energy.

Orthonormal sets are useful in that they lead to a series representation of signals in a relatively simple fashion. Let φᵢ(t) be an orthonormal set of signals on an interval a ≤ t ≤ b, and let x(t) be a given signal with finite energy over the same interval. We can represent x(t) in terms of {φᵢ} by a convergent series as

x(t) = Σᵢ cᵢ φᵢ(t)          (3.2.3)

where

cᵢ = ∫ₐᵇ x(t) φ*ᵢ(t) dt          (3.2.4)
Equation (3.2.4) follows by multiplying Equation (3.2.3) by φ*ⱼ(t) and integrating the result over the range of definition of x(t). Note that the coefficients can be computed independently of each other. If the set φᵢ(t) is only an orthogonal set, then Equation (3.2.4) takes the form (see Problem 3.5)

cᵢ = (1/Eᵢ) ∫ₐᵇ x(t) φ*ᵢ(t) dt          (3.2.5)

The series representation of Equation (3.2.3) is called a generalized Fourier series of x(t), and the constants cᵢ, i = 0, ±1, ±2, ..., are called the Fourier coefficients with respect to the orthogonal set {φᵢ(t)}.
In general, the representation of an arbitrary signal in a series expansion of the form of Equation (3.2.3) requires that the sum on the right side be an infinite sum. In practice, however, we can use only a finite number of terms on the right side. When we truncate the infinite sum on the right to a finite number of terms, we get an approximation x̂(t) to the original signal x(t). When we use only M terms, the representation error is

e_M(t) = x(t) − Σ_{i=1}^{M} cᵢ φᵢ(t)          (3.2.6)

It can be shown that for any M, the choice of cᵢ according to Equation (3.2.4) minimizes the energy in the error e_M. (See Problem 3.4.)
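As a numerical sketch of Equation (3.2.4), the basis pair and signal below are assumed examples (not taken from the text): projecting x(t) = 3 sin t + sin 2t onto the orthonormal signals sin t/√π and sin 2t/√π on (0, 2π) should give the coefficients 3√π and √π.

```python
import numpy as np

# Assumed example: orthonormal pair phi1, phi2 on (0, 2*pi) and a signal
# built from them; c_i = integral of x(t) phi_i(t) dt, as in Eq. (3.2.4).
dt = 1e-5
t = np.arange(0.0, 2 * np.pi, dt)
phi1 = np.sin(t) / np.sqrt(np.pi)
phi2 = np.sin(2 * t) / np.sqrt(np.pi)
x = 3 * np.sin(t) + np.sin(2 * t)

c1 = np.sum(x * phi1) * dt     # expect 3*sqrt(pi)
c2 = np.sum(x * phi2) * dt     # expect sqrt(pi)

assert abs(c1 - 3 * np.sqrt(np.pi)) < 1e-3
assert abs(c2 - np.sqrt(np.pi)) < 1e-3
```

Note that each coefficient is computed independently of the others, exactly as the text points out.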
Certain classes of signals (finite-length digital communication signals, for example) permit expansion in terms of a finite number of orthogonal functions {φᵢ(t)}. In this case, i = 1, 2, ..., N, where N is the dimension of the set of signals. The series representation is then reduced to

x(t) = Σ_{i=1}^{N} cᵢ φᵢ(t)          (3.2.7)

In vector notation,

x(t) = xᵀ φ(t)          (3.2.8)
and the superscript T denotes vector transposition. The normalized energy of x(t) over the interval a ≤ t ≤ b is

E_x = ∫ₐᵇ |x(t)|² dt = ∫ₐᵇ |Σᵢ cᵢ φᵢ(t)|² dt

    = Σᵢ Σⱼ cᵢ c*ⱼ ∫ₐᵇ φᵢ(t) φ*ⱼ(t) dt

    = Σᵢ |cᵢ|² Eᵢ          (3.2.10)

This result relates the energy of the signal x(t) to the sum of the squares of the orthogonal-series coefficients, modified by the energy in each coordinate, Eᵢ. If orthonormal signals are used, we have Eᵢ = 1, and Equation (3.2.10) reduces to

E_x = Σ_{i=1}^{N} |cᵢ|² = (x*)ᵀ x = x† x          (3.2.11)

where † denotes the complex conjugate transpose [(·)*]ᵀ. This is a special case of what is known as Parseval's theorem, which we discuss in more detail in Section 3.5.6.
Example 3.2.4

In this example, we examine the representation of a finite-duration signal in terms of an orthogonal set of basis signals. Consider the four signals defined over the interval (0, 3), as shown in Figure 3.2.2(a). These signals are not orthogonal, but it is possible to represent them in terms of the three orthogonal signals shown in Figure 3.2.2(b), since combinations of these three basis signals can be used to represent any of the four signals in Figure 3.2.2(a).

The coefficients that represent the signal x₁(t), obtained by using Equation (3.2.4), are

c₁₁ = ∫₀³ x₁(t) φ*₁(t) dt = 2

c₁₂ = ∫₀³ x₁(t) φ*₂(t) dt = 0

c₁₃ = ∫₀³ x₁(t) φ*₃(t) dt = 1
Figure 3.2.2  (a) The four signals x₁(t), x₂(t), x₃(t), x₄(t) of Example 3.2.4; (b) the three orthogonal basis signals φ₁(t), φ₂(t), φ₃(t).
In vector notation, x₁ = [2, 0, 1]ᵀ. Similarly, we can calculate the coefficients for x₂(t), x₃(t), and x₄(t), and these become

c₂₁ = 1, c₂₂ = 1, c₂₃ = 0,  or  x₂ = [1, 1, 0]ᵀ
c₃₁ = 0, c₃₂ = 1, c₃₃ = 1,  or  x₃ = [0, 1, 1]ᵀ
c₄₁ = 1, c₄₂ = −1, c₄₃ = 2,  or  x₄ = [1, −1, 2]ᵀ

Since only three basis signals are required to completely represent xᵢ(t), i = 1, 2, 3, 4, we can now think of these four signals as vectors in three-dimensional space. We would
like to emphasize that the choice of the basis is not unique, and many other possibilities exist. For example, if we choose

φ₁(t) = (1/√2) x₂(t),   φ₂(t) = (1/√2)[u(t − 1) − u(t − 3)],   φ₃(t) = (1/√6) x₄(t)

then

x₂ = [√2, 0, 0]ᵀ   and   x₄ = [0, 0, √6]ᵀ

with the coefficient vectors for x₁(t) and x₃(t) changing accordingly.
In closing this section, we should emphasize that the results presented are general, and the main purpose of the section is to introduce the reader to a way of representing signals in terms of other bases in a formal way. In Chapter 4, we will see that if the signal satisfies some restrictions, then we can write it in terms of an orthonormal basis (interpolating signals), with the series coefficients being samples of the signal obtained at appropriate time intervals.
3.3 THE EXPONENTIAL FOURIER SERIES

A periodic signal x(t) with period T can be represented by the series

x(t) = Σ_{n=−∞}^{∞} cₙ exp[j2πnt/T]

where, from Equation (3.2.4), the cₙ are complex constants and are given by

cₙ = (1/T) ∫₀ᵀ x(t) exp[−j2πnt/T] dt          (3.3.4)
Each term of the series has a period T and fundamental radian frequency 2π/T = ω₀. Hence, if the series converges, its sum is periodic with period T. Such a series is called the complex exponential Fourier series, and the cₙ are called the Fourier coefficients. Note that because of the periodicity of the integrand, the interval of integration in Equation (3.3.4) can be replaced by any other interval of length T, for instance, by the interval t₀ < t ≤ t₀ + T, where t₀ is arbitrary. We denote integration over an interval of length T by the symbol ∫_T. We observe that even though an infinite number of frequencies are used to synthesize the original signal in the Fourier-series expansion, they do not constitute a continuum; each frequency term is a multiple of ω₀/2π. The frequency corresponding to n = 1 is called the fundamental, or first, harmonic; n = 2 corresponds to the second harmonic, and so on. The coefficients cₙ define a complex-valued function of the discrete frequencies nω₀, where n = 0, ±1, ±2, ....
The dc component, or the full-cycle time average, of x(t) is equal to c₀ and is obtained by setting n = 0 in Equation (3.3.4). Calculated values of c₀ can be checked by inspecting x(t), a recommended practice to test the validity of the result obtained by integration. The plot of |cₙ| versus nω₀ displays the amplitudes of the various frequency components constituting x(t). Such a plot is therefore called the amplitude, or magnitude, spectrum of the periodic signal x(t). The locus of the tips of the magnitude lines is called the envelope of the magnitude spectrum. Similarly, the phase of the sinusoidal components making up x(t) is equal to ∠cₙ, and the plot of ∠cₙ versus nω₀ is called the phase spectrum of x(t). In sum, the amplitude and phase spectra of any given periodic signal are defined in terms of the magnitude and phase of cₙ. Since the spectra consist of a set of lines representing the magnitude and phase at ω = nω₀, they are referred to as line spectra.
For a real-valued signal x(t),

c*ₙ = [(1/T) ∫_T x(t) exp[−jnω₀t] dt]*

    = (1/T) ∫_T x(t) exp[jnω₀t] dt

    = c₋ₙ          (3.3.5)

Hence,

|c₋ₙ| = |cₙ|   and   ∠c₋ₙ = −∠cₙ

which means that the amplitude spectrum has even symmetry and the phase spectrum has odd symmetry. This property for real-valued signals allows us to regroup the exponential series into complex-conjugate pairs, except for c₀, as follows:
x(t) = c₀ + Σ_{n=−∞}^{−1} cₙ exp[j2πnt/T] + Σ_{n=1}^{∞} cₙ exp[j2πnt/T]

     = c₀ + Σ_{n=1}^{∞} (c₋ₙ exp[−j2πnt/T] + cₙ exp[j2πnt/T])

     = c₀ + Σ_{n=1}^{∞} 2 Re{cₙ exp[j2πnt/T]}

     = c₀ + Σ_{n=1}^{∞} (2 Re{cₙ} cos(2πnt/T) − 2 Im{cₙ} sin(2πnt/T))          (3.3.7)

Here, Re{·} and Im{·} denote the real and imaginary parts of the arguments, respectively. Equation (3.3.7) can be written as
x(t) = a₀ + Σ_{n=1}^{∞} (aₙ cos nω₀t + bₙ sin nω₀t)          (3.3.8)

with aₙ = 2 Re{cₙ} and bₙ = −2 Im{cₙ}, and

a₀ = c₀ = (1/T) ∫_T x(t) dt          (3.3.9a)

Combining the sine and cosine terms of the same frequency, we can also write

x(t) = c₀ + Σ_{n=1}^{∞} Aₙ cos(nω₀t + θₙ)          (3.3.10)

where

Aₙ = 2|cₙ|          (3.3.11)

and

θₙ = ∠cₙ          (3.3.12)

Equation (3.3.10) represents an alternative form of the Fourier series that is more compact and meaningful than Equation (3.3.8). Each term in the series represents an oscillator needed to generate the periodic signal x(t).
A display of |cₙ| and ∠cₙ versus n or nω₀ for both positive and negative values of n is called a two-sided amplitude spectrum. A display of Aₙ and θₙ versus positive n or nω₀ is called a one-sided spectrum. Two-sided spectra are encountered most often in theoretical treatments because of the convenient nature of the complex Fourier series. It must be emphasized that the existence of a line at a negative frequency does not imply that the signal is made of negative-frequency components, since, for every component cₙ exp[j2πnt/T], there is an associated one of the form c₋ₙ exp[−j2πnt/T]. These complex signals combine to create the real component aₙ cos(2πnt/T) + bₙ sin(2πnt/T). Note that, from the definition of a definite integral, it follows that if x(t) is continuous or even merely piecewise continuous (continuous except for finitely many jumps in the interval of integration), the Fourier coefficients exist, and we can compute them by the indicated integrals.
Let us illustrate the practical use of the previous equations by the following examples. We will see numerous other examples in subsequent sections.
Example 3.3.1

Suppose we want to find the line spectra for the periodic signal shown in Figure 3.3.2. The signal x(t) has the analytic representation

x(t) = { −K,  −1 < t < 0
       {  K,   0 < t < 1

and x(t + 2) = x(t). Therefore, ω₀ = 2π/2 = π. Signals of this type can occur as external forces acting on mechanical systems, as electromotive forces in electric circuits, etc. The Fourier coefficients are

cₙ = (1/2) ∫_{−1}^{1} x(t) exp[−jnπt] dt

   = (1/2) [∫_{−1}^{0} (−K) exp[−jnπt] dt + ∫_{0}^{1} K exp[−jnπt] dt]

   = (K/2) [(1 − exp[jnπ])/(jnπ) + (exp[−jnπ] − 1)/(−jnπ)]

   = (K/(jnπ)) (1 − (1/2)(exp[jnπ] + exp[−jnπ]))          (3.3.13)

   = { 2K/(jnπ),  n odd          (3.3.14)
     { 0,         n even

The amplitude spectrum is

|cₙ| = { 2K/(|n|π),  n odd
       { 0,          n even
The dc component, or the average value of the periodic signal x(t), is obtained by setting n = 0. When we substitute n = 0 into Equation (3.3.13), we obtain an undefined result. This can be circumvented by using l'Hôpital's rule, yielding c₀ = 0. This can be checked by noticing that x(t) is an odd function and that the area under the curve represented by x(t) over one period of the signal is zero.

The phase spectrum of x(t) is given by

∠cₙ = { −π/2,  n = 2m − 1,    m = 1, 2, ...
      {  0,    n = 2m,        m = 0, 1, 2, ...
      {  π/2,  n = −(2m − 1), m = 1, 2, ...

The line spectra of x(t) are displayed in Figure 3.3.3. Note that the amplitude spectrum has even symmetry and the phase spectrum has odd symmetry.
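The coefficients of Example 3.3.1 can be verified by numerical integration of Equation (3.3.4); a sketch with the assumed value K = 1:

```python
import numpy as np

# Square wave of Example 3.3.1 (K = 1 assumed): x(t) = -K on (-1, 0),
# +K on (0, 1), period T = 2. Expect c_n = 2K/(j n pi) for odd n, 0 for even n.
K, T = 1.0, 2.0
w0 = 2 * np.pi / T
dt = 1e-5
t = np.arange(-1.0, 1.0, dt)
x = np.where(t < 0, -K, K)

def c(n):
    """Riemann-sum approximation of (1/T) * integral over one period."""
    return np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T

assert abs(c(1) - 2 * K / (1j * np.pi)) < 1e-4
assert abs(c(3) - 2 * K / (1j * 3 * np.pi)) < 1e-4
assert abs(c(2)) < 1e-4
```

The purely imaginary coefficients correspond to the −π/2 and π/2 phases in the phase spectrum above.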
Example 3.3.2

A sinusoidal voltage E sin ω₀t is passed through a half-wave rectifier that clips the negative portion of the waveform, as shown in Figure 3.3.4. Such signals may be encountered in ac-to-dc conversion (rectifier) circuits.

Figure 3.3.4  Half-wave rectified sinusoid.

The rectified signal is

x(t) = { 0,           −π/ω₀ < t < 0
       { E sin ω₀t,    0 < t < π/ω₀
and x(t + 2π/ω₀) = x(t). Since x(t) = 0 when −π/ω₀ < t < 0, we obtain, from Equation (3.3.4),

cₙ = (ω₀/2π) ∫₀^{π/ω₀} E sin ω₀t exp[−jnω₀t] dt

   = (ω₀E/4πj) ∫₀^{π/ω₀} (exp[jω₀t] − exp[−jω₀t]) exp[−jnω₀t] dt

   = (ω₀E/4πj) ∫₀^{π/ω₀} (exp[−jω₀(n − 1)t] − exp[−jω₀(n + 1)t]) dt

   = (E/(π(1 − n²))) cos(nπ/2) exp[−jnπ/2],   n ≠ ±1          (3.3.15)

   = { E/(π(1 − n²)),   n even          (3.3.16)
     { 0,               n odd, n ≠ ±1
Setting n = 0, we obtain the dc component, or the average value of the periodic signal, as c₀ = E/π. This result can be verified by calculating the area under one half cycle of a sine wave and dividing by T. To determine the coefficients c₁ and c₋₁, which correspond to the first harmonic, we note that we cannot substitute n = ±1 in Equation (3.3.15), since this yields an indeterminate quantity. So we use Equation (3.3.4) instead with n = ±1, which results in

c₁ = E/(4j)   and   c₋₁ = −E/(4j)

The line spectra of x(t) are displayed in Figure 3.3.5.

Figure 3.3.5  Line spectra for x(t) of Example 3.3.2. (a) Magnitude spectrum and (b) phase spectrum.

In general, a rectifier is used to convert an ac signal to a dc signal. Ideally, the rectified output x(t) should consist only of a dc component. Any ac component contributes to the ripple (deviation from pure dc) in the signal. As can be seen from Figure 3.3.5, the amplitudes of the harmonics decrease rapidly as n increases, so that the main contribution to the ripple comes from the first harmonic. The ratio of the amplitude of the first harmonic to the dc component can be used as a measure of the amount of ripple in the rectified signal. In this example, the ratio is equal to π/4. More complex circuits can be used that produce less ripple. (See Example 3.6.4.)
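A quick numerical check of this example (with assumed values E = 1 and ω₀ = 1) confirms c₀ = E/π, |c₁| = E/4, and the ripple ratio π/4:

```python
import numpy as np

# Half-wave rectified sine of Example 3.3.2 (E = 1, w0 = 1 assumed).
E, w0 = 1.0, 1.0
T = 2 * np.pi / w0
dt = 1e-5
t = np.arange(0.0, T, dt)
x = np.where(np.sin(w0 * t) > 0, E * np.sin(w0 * t), 0.0)

def c(n):
    """Riemann-sum approximation of the Fourier coefficient c_n."""
    return np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T

assert abs(c(0) - E / np.pi) < 1e-4          # dc component E/pi
assert abs(abs(c(1)) - E / 4) < 1e-4         # first-harmonic magnitude E/4
assert abs(abs(c(1)) / c(0).real - np.pi / 4) < 1e-3   # ripple ratio
```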
Example 3.3.3

Consider the square-wave signal shown in Figure 3.3.6. The analytic representation of x(t) is

x(t) = { 0,  −T/2 < t < −τ/2
       { K,  −τ/2 < t < τ/2
       { 0,   τ/2 < t < T/2

and x(t + T) = x(t). Signals of this type can be produced by pulse generators and are used extensively in radar and sonar systems. From Equation (3.3.4), we obtain
Figure 3.3.6  Signal x(t) for Example 3.3.3.
cₙ = (K/T) ∫_{−τ/2}^{τ/2} exp[−jnω₀t] dt

   = (K/(jnω₀T)) (exp[jnω₀τ/2] − exp[−jnω₀τ/2])

   = (K/(nπ)) sin(nπτ/T)

   = (Kτ/T) sinc(nτ/T)          (3.3.17)

where sinc(λ) = sin(πλ)/(πλ). The sinc function plays an important role in Fourier analysis and in the study of LTI systems. It has a maximum value at λ = 0 and approaches zero as λ approaches infinity, oscillating through positive and negative values. It goes through zero at λ = ±1, ±2, ....
Let us investigate the effect of changing T on the frequency spectrum of x(t). For fixed τ, increasing T reduces the amplitude of each harmonic as well as the fundamental frequency and, hence, the spacing between harmonics. However, the shape of the spectrum is dependent only on the shape of the pulse and does not change as T increases, except for the amplitude factor. A convenient measure of the frequency spread (known as the bandwidth) is the distance from the origin to the first zero-crossing of the sinc function. This distance is equal to 2π/τ and is independent of T. Other measures of the frequency width of the spectrum are discussed in detail in Section 4.5.
We conclude that as the period increases, the amplitude becomes smaller and the spectrum becomes denser, whereas the shape of the spectrum remains the same and does not depend on the repetition period T. The amplitude spectra of x(t) with τ = 1 and T = 5, 10, and 15 are displayed in Figure 3.3.7.
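Equation (3.3.17) can be checked numerically; a sketch with the assumed values K = 1, τ = 1, and T = 5. (NumPy's np.sinc uses the same convention sinc(λ) = sin(πλ)/(πλ) as the text.)

```python
import numpy as np

# Rectangular pulse train of Example 3.3.3 (K = 1, tau = 1, T = 5 assumed).
# Expect c_n = (K*tau/T) * sinc(n*tau/T).
K, tau, T = 1.0, 1.0, 5.0
w0 = 2 * np.pi / T
dt = 1e-5
t = np.arange(-T / 2, T / 2, dt)
x = np.where(np.abs(t) < tau / 2, K, 0.0)

def c(n):
    """Riemann-sum approximation of the Fourier coefficient c_n."""
    return np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T

for n in range(5):
    assert abs(c(n) - (K * tau / T) * np.sinc(n * tau / T)) < 1e-4
```

Increasing T in this sketch lowers all the coefficients by the factor τ/T and packs the spectral lines closer together, while the sinc envelope keeps its shape, exactly as described above.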
Example 3.3.4

In this example, we show that x(t) = t², −π < t < π, with x(t + 2π) = x(t), has the Fourier series representation

x(t) = π²/3 − 4(cos t − (1/4) cos 2t + (1/9) cos 3t − ⋯)          (3.3.18)

Note that x(t) is periodic with period 2π and fundamental frequency ω₀ = 1. The complex Fourier series coefficients are
Figure 3.3.7  Line spectra for the x(t) in Example 3.3.3. (a) Magnitude spectrum for τ = 1 and T = 5. (b) Magnitude spectrum for τ = 1 and T = 10. (c) Magnitude spectrum for τ = 1 and T = 15.
cₙ = (1/2π) ∫_{−π}^{π} t² exp[−jnt] dt

Integrating by parts twice yields

cₙ = (2/n²) cos nπ,  n ≠ 0

with c₀ = π²/3, and

aₙ = 2 Re{cₙ} = (4/n²) cos nπ
bₙ = −2 Im{cₙ} = 0

because cₙ is real. Substituting into Equation (3.3.8), we obtain Equation (3.3.18).
Example 3.3.5

Consider a real signal with Fourier coefficients

c₁ = c*₋₁,  c₂ = c*₋₂,  c₄ = c*₋₄
could be expressed as a sum of sinusoids. However, this turned out not to be the case. Fortunately, the class of functions which can be represented by a Fourier series is large and sufficiently general that most conceivable periodic signals arising in engineering applications do have a Fourier-series representation.
For the Fourier series to converge, the signal x(t) must possess the following properties, which are known as the Dirichlet conditions, over any period:

1. ∫_T |x(t)| dt < ∞
2. x(t) has only a finite number of maxima and minima.
3. The number of discontinuities in x(t) must be finite.

These conditions are sufficient, but not necessary. Thus, if a signal x(t) satisfies the Dirichlet conditions, then the corresponding Fourier series is convergent and its sum is x(t), except at any point t₀ at which x(t) is discontinuous. At the points of discontinuity, the sum of the series is the average of the left- and right-hand limits of x(t) at t₀; that is,

x(t₀) = (1/2)[x(t₀⁻) + x(t₀⁺)]          (3.4.1)
Example 3.4.1
Consider the periodic signal in Example 3.3.1. The trigonometric Fourier-series coefficients are given by

a_n = 2 Re{c_n} = 0

b_n = −2 Im{c_n} = 4K/(nπ) for n odd, and 0 for n even

so that x(t) can be written as

x(t) = (4K/π)[sin πt + (1/3) sin 3πt + (1/5) sin 5πt + ···]   (3.4.2)
We notice that at t = 0 and t = 1, two points of discontinuity of x(t), the sum in Equation (3.4.2) has a value of zero, which is equal to the arithmetic mean of the values −K and K of x(t). Furthermore, since the signal satisfies the Dirichlet conditions, the series converges and x(t) is equal to the sum of the infinite series. Setting t = 1/2 in Equation (3.4.2), we obtain

K = (4K/π)[1 − 1/3 + 1/5 − 1/7 + ···]

or

π/4 = 1 − 1/3 + 1/5 − 1/7 + ···
a_n = 2 Re{c_n} = (2K/nπ) sin(nπ/2)

b_n = −2 Im{c_n} = 0

Thus, a₀ = K/2, a_n = 0 when n is even, a_n = 2K/nπ when n = 1, 5, 9, …, a_n = −2K/nπ when n = 3, 7, 11, …, and b_n = 0 for n = 1, 2, …. Hence, x(t) can be written as
Example 3.4.3
Consider the periodic signal x(t) in Example 3.3.4. The trigonometric Fourier-series coefficients are

a₀ = π²/3

a_n = (4/n²) cos nπ,  n ≠ 0

b_n = 0

Hence, x(t) can be written as

x(t) = π²/3 + 4 Σ from n = 1 to ∞ of (cos nπ/n²) cos nt   (3.4.4)

For this example, the Dirichlet conditions are satisfied. Further, x(t) is continuous at all t. Thus, the sum in Equation (3.4.4) converges to x(t) at all points. Evaluating x(t) at t = π gives

Σ from n = 1 to ∞ of 1/n² = π²/6
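The sum just obtained is the classical Basel result, and a quick numerical check (plain Python) confirms it:

```python
import math

# Partial sum of 1/n^2, which Eq. (3.4.4) evaluated at t = pi says equals pi^2/6
partial = sum(1 / n ** 2 for n in range(1, 100001))
print(partial, math.pi ** 2 / 6)
```

The tail of the sum beyond n = 100000 is smaller than 10⁻⁵, so the two printed values agree to four decimal places.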
3.5 PROPERTIES OF FOURIER SERIES
In this section, we consider a number of properties of the Fourier series. These properties provide us with a better understanding of the notion of the frequency spectrum of a continuous-time signal. In addition, many of the properties are often useful in reducing the complexity involved in computing the Fourier-series coefficients.
Let us define coefficients

ε_n = c_n − d_n for −N ≤ n ≤ N, and ε_n = c_n otherwise

so that Equation (3.5.2) can be written as

MSE = (1/T) ∫_T |e(t)|² dt

Substituting for e(t) from Equation (3.5.4), we can write

MSE = (1/T) ∫_T ( x(t) − Σ from n = −N to N of d_n exp[jnω₀t] ) ( x*(t) − Σ from m = −N to N of d*_m exp[−jmω₀t] ) dt   (3.5.5)

Since (1/T) ∫_T exp[j(n − m)ω₀t] dt is zero for n ≠ m and unity for m = n, Equation (3.5.5) reduces to

MSE = Σ from n = −N to N of |c_n − d_n|² + (1/T) ∫_T |x(t)|² dt − Σ from n = −N to N of |c_n|²   (3.5.6)

Each term in Equation (3.5.6) is positive; so, to minimize the MSE, we must select

d_n = c_n   (3.5.7)

This makes the first summation vanish, and the resulting error is

(MSE)_min = (1/T) ∫_T |x(t)|² dt − Σ from n = −N to N of |c_n|² = Σ over |n| > N of |c_n|²   (3.5.8)
Example 3.5.1
Consider the approximation of the periodic signal x(t) shown in Figure 3.4.2 by a set of 2N + 1 exponentials. In order to see how the approximation error varies with the number of terms, we consider the approximation of x(t) based on three terms, then seven terms, then nine terms, and so on. (Note that x(t) contains only odd harmonics.) For N = 1 (three terms), the minimum mean-square error is

(MSE)_min = Σ over |n| > 1 of |c_n|²
          = K² − 2(2K/π)²
          = K²(1 − 8/π²)
          ≈ 0.189K²

Similarly, for N = 3, it can be shown that

(MSE)_min = K²[1 − (8/π²)(1 + 1/9)] ≈ 0.1K²
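These numbers are easy to reproduce. For the square wave, |c_n| = 2K/(nπ) for odd n and zero otherwise, so the minimum mean-square error of Equation (3.5.8) is simply the power left in the discarded harmonics. A short sketch (plain Python; the amplitude is taken as K = 1 for illustration):

```python
import math

def mse_min(N, K=1.0):
    # (MSE)_min = K^2 - sum over |n| <= N of |c_n|^2, |c_n| = 2K/(n*pi), n odd;
    # the factor 2 accounts for the +n and -n terms.
    captured = sum(2 * (2 * K / (n * math.pi)) ** 2 for n in range(1, N + 1, 2))
    return K ** 2 - captured

print(round(mse_min(1), 3))  # 0.189
print(round(mse_min(3), 3))  # 0.099
```

The error drops quickly at first but then slowly, since each added pair of harmonics contributes only 8K²/(n²π²).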
x(t) = a₀ + Σ from n = 1 to ∞ of a_n cos(2πnt/T)

with coefficients

a₀ = (2/T) ∫ from 0 to T/2 of x(t) dt  and  a_n = (4/T) ∫ from 0 to T/2 of x(t) cos(2πnt/T) dt

whereas the Fourier series of an odd signal x(t) having period T is a "Fourier sine series,"

x(t) = Σ from n = 1 to ∞ of b_n sin(2πnt/T)

with

b_n = (4/T) ∫ from 0 to T/2 of x(t) sin(2πnt/T) dt

Figure 3.5.1 Signals with different symmetries (the signal in part (a) has period T = 3).

The effects of these symmetries are summarized in Table 3.1, in which entries such as a₀ ≠ 0 and b_{2n+1} ≠ 0 are to be interpreted to mean that these coefficients are not necessarily zero, but may be so in specific examples.
In Example 3.3.1, x(t) is an odd signal, and therefore, the c_n are imaginary (a_n = 0), whereas in Example 3.3.3 the c_n are real (b_n = 0) because x(t) is an even signal.

TABLE 3.1 Effects of Symmetry (columns: Symmetry, a_n, b_n, Remarks)
Example 3.5.2
Consider the signal

x(t) = A − (4A/T)t for 0 ≤ t < T/2, and (4A/T)t − 3A for T/2 ≤ t < T

which is shown in Figure 3.5.2.
Notice that x(t) is both an even and a half-wave odd signal. Therefore, a₀ = 0, b_n = 0, and we expect to have no even harmonics. Computing a_n, we obtain
3.5.3 Linearity

Suppose that x(t) and y(t) are periodic with the same period. Let their Fourier-series expansions be given by

x(t) = Σ from n = −∞ to ∞ of β_n exp[jnω₀t],  y(t) = Σ from n = −∞ to ∞ of γ_n exp[jnω₀t]   (3.5.9)

and let

z(t) = k₁x(t) + k₂y(t)

Then the Fourier-series coefficients of z(t) are

α_n = k₁β_n + k₂γ_n   (3.5.10)

If x(t) and y(t) are periodic signals with the same period as in Equation (3.5.9), their product is

z(t) = x(t)y(t) = Σ from n = −∞ to ∞ of ( Σ from m = −∞ to ∞ of β_m γ_{n−m} ) exp[jnω₀t]   (3.5.11)

The sum in parentheses is known as the convolution sum of the two sequences β_m and γ_m. (More on the convolution sum is presented in Chapter 6.) Equation (3.5.11) indicates that the Fourier coefficients of the product signal z(t) are equal to the convolution sum of the two sequences generated by the Fourier coefficients of x(t) and y(t). Similarly, for the product x(t)y*(t),

α_n = (1/T) ∫_T x(t) y*(t) exp[−jnω₀t] dt = Σ from m = −∞ to ∞ of β_{n+m} γ*_m   (3.5.12)
Example 3.5.3
In this example, we compute the Fourier-series coefficients of the product and of the periodic convolution of the two signals shown in Figure 3.5.3.
The Fourier-series coefficients of x(t) are

β_n = (1/T) ∫_T x(t) exp[−jnω₀t] dt = 2j/(nπ)

For the signal y(t), the analytic representation is given in Example 3.3.3 with τ = 2 and T = 4. The Fourier-series coefficients are

γ_n = (K/nπ) sin(nπ/2) = (K/2) sinc(n/2)

From Equation (3.5.15), the Fourier coefficients of the convolution signal are

λ_n = β_n γ_n = (2jK/(n²π²)) sin(nπ/2)

and from Equation (3.5.11), the coefficients of the product signal are

α_n = Σ from m = −∞ to ∞ of β_{n−m} γ_m = Σ from m = −∞ to ∞ of (2jK/((n − m)mπ²)) sin(mπ/2)
3.5.5 Parseval's Theorem

In Chapter 1, it was shown that the average power of a periodic signal x(t) is

P = (1/T) ∫_T |x(t)|² dt

More generally, if x(t) and y(t) are periodic with the same period and have Fourier-series coefficients β_n and γ_n, then

(1/T) ∫_T x(t) y*(t) dt = Σ from n = −∞ to ∞ of β_n γ*_n   (3.5.16)
If we let y(t) = x(t) in this expression, then γ_n = β_n, and Equation (3.5.16) becomes

(1/T) ∫_T |x(t)|² dt = Σ from n = −∞ to ∞ of |β_n|²   (3.5.17)

The left-hand side is the average power of the periodic signal x(t). The result indicates that the total average power of x(t) is the sum of the average power in each harmonic component. Even though power is a nonlinear quantity, we can use superposition of average powers in this particular situation, provided that all the individual components are harmonically related.
We now have two different ways of finding the average power of any periodic signal x(t): in the time domain, using the left-hand side of Equation (3.5.17), and in the frequency domain, using the right-hand side of the same equation.
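Equation (3.5.17) can be verified directly for the square wave of Example 3.3.1: |x(t)| = K everywhere, so the time-domain power is K², while the frequency-domain side sums |β_n|² = (2K/nπ)² over the odd harmonics (both positive and negative n). A minimal check in Python (K = 2 is an arbitrary choice):

```python
import math

K = 2.0
time_power = K ** 2  # (1/T) * integral of |x(t)|^2 over one period
# factor 2 accounts for the +n and -n coefficients
freq_power = sum(2 * (2 * K / (n * math.pi)) ** 2 for n in range(1, 20001, 2))
print(time_power, freq_power)  # the sum converges to K^2 from below
```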
If x̃(t) = x(t − τ), then the Fourier-series coefficients of x̃(t) are

c̃_n = (1/T) ∫_T x(t − τ) exp[−jnω₀t] dt
    = exp[−jnω₀τ] (1/T) ∫_T x(σ) exp[−jnω₀σ] dσ
    = c_n exp[−jnω₀τ]   (3.5.18)

Thus, if the Fourier-series representation of a periodic signal x(t) is known relative to one origin, the representation relative to another origin shifted by τ is obtained by adding the phase shift −nω₀τ to the phase of the Fourier coefficients of x(t).
Example 3.5.4
Consider the periodic signal x(t) = E|sin ω₀t| shown in Figure 3.5.4. The signal can be written as the sum of the two periodic signals x₁(t) and x₂(t), each with period 2π/ω₀, where x₁(t) is the half-wave rectified signal of Example 3.3.2 and x₂(t) = x₁(t − π/ω₀). Therefore, if β_n and γ_n are the Fourier coefficients of x₁(t) and x₂(t), respectively, then, according to Equation (3.5.18),

γ_n = β_n exp[−jnω₀(π/ω₀)] = β_n exp[−jnπ] = (−1)ⁿ β_n

From Equation (3.5.10), the Fourier-series coefficients of x(t) are

α_n = β_n + (−1)ⁿ β_n = 2β_n for n even, and 0 for n odd

where the Fourier-series coefficients of the periodic signal x₁(t) can be determined as in Equation (3.3.16) as

β_n = (1/T) ∫ from 0 to T/2 of E sin ω₀t exp[−jnω₀t] dt
    = E/(π(1 − n²)) for n even, ∓jE/4 for n = ±1, and 0 otherwise

Thus,

α_n = 2E/(π(1 − n²)) for n even, and 0 for n odd

This result can be verified by directly computing the Fourier-series coefficients of x(t).
If the periodic signal x(t) has c₀ = 0, so that its integral is also periodic, then integrating the Fourier series term by term gives

∫ x(t) dt = Σ over n ≠ 0 of (c_n/(jnω₀)) exp[jnω₀t],  n ≠ 0   (3.5.19)

The relative amplitudes of the harmonics of the integrated signal compared with its fundamental are less than those for the original, unintegrated signal. In other words, integration attenuates (deemphasizes) the magnitude of the high-frequency components of the signal. High-frequency components of the signal are the main contributors to its sharp details, such as those occurring at the points of discontinuity or at discontinuous derivatives of the signal. Hence, integration smooths the signal, and this is one of the reasons it is sometimes called a smoothing operation.
3.6 SYSTEMS WITH PERIODIC INPUTS

Consider a linear time-invariant system with impulse response h(t), so that the output is

y(t) = ∫ from −∞ to ∞ of h(τ) x(t − τ) dτ

For the complex exponential input x(t) = exp[jωt],

y(t) = ∫ from −∞ to ∞ of h(τ) exp[jω(t − τ)] dτ = exp[jωt] ∫ from −∞ to ∞ of h(τ) exp[−jωτ] dτ

By defining

H(ω) = ∫ from −∞ to ∞ of h(τ) exp[−jωτ] dτ   (3.6.1)

we can write

y(t) = H(ω) exp[jωt]   (3.6.2)

H(ω) is called the system (transfer) function and is a constant for fixed ω. Equation (3.6.2) is of fundamental importance because it tells us that the system response to a complex exponential is also a complex exponential, with the same frequency ω, scaled by the quantity H(ω). The magnitude |H(ω)| is called the magnitude function of the system, and ∠H(ω) is known as the phase function of the system. Knowing H(ω), we can determine whether the system amplifies or attenuates a given sinusoidal component of the input and how much of a phase shift the system adds to that particular component.
To determine the response y(t) of an LTI system to a periodic input x(t) with the Fourier-series representation of Equation (3.3.3), we use the linearity property and Equation (3.6.2) to obtain

y(t) = Σ from n = −∞ to ∞ of c_n H(nω₀) exp[jnω₀t]   (3.6.3)

Equation (3.6.3) tells us that the output signal is the summation of exponentials with coefficients

d_n = c_n H(nω₀)   (3.6.4)
Example 3.6.1
Consider the system described by the input/output differential equation

dᴺy(t)/dtᴺ + Σ from i = 0 to N−1 of p_i dⁱy(t)/dtⁱ = Σ from i = 0 to M of q_i dⁱx(t)/dtⁱ

Setting x(t) = exp[jωt] and y(t) = H(ω) exp[jωt], and using the fact that each differentiation of exp[jωt] multiplies it by jω, we obtain

[(jω)ᴺ + Σ p_i (jω)ⁱ] H(ω) exp[jωt] = [Σ q_i (jω)ⁱ] exp[jωt]

Solving for H(ω), we obtain

H(ω) = [Σ from i = 0 to M of q_i (jω)ⁱ] / [(jω)ᴺ + Σ from i = 0 to N−1 of p_i (jω)ⁱ]
Example 3.6.2
Let us find the output voltage y(t) of the system shown in Figure 3.6.1 (R in series with L = 1 H) if the input voltage is the periodic signal

x(t) = 4 cos t = 2 exp[jt] + 2 exp[−jt]

The differential equation describing the circuit is

dy(t)/dt + (R/L) y(t) = (R/L) x(t)

If we set x(t) = exp[jωt] in this equation, the output voltage is y(t) = H(ω) exp[jωt], and we obtain

jωH(ω) exp[jωt] + (R/L) H(ω) exp[jωt] = (R/L) exp[jωt]

so that

H(ω) = (R/L)/(jω + R/L)

At any frequency ω = nω₀, the system function is

H(nω₀) = (R/L)/(jnω₀ + R/L)

For this example, ω₀ = 1 and R/L = 1, so that the output is

y(t) = (2/(1 + j)) exp[jt] + (2/(1 − j)) exp[−jt] = 2√2 cos(t − 45°)
Example 3.6.3
Consider the circuit shown in Figure 3.6.2. The differential equation governing the system is

i(t) = C dv(t)/dt + v(t)/R

For an input of the form i(t) = exp[jωt], we expect the output v(t) to be v(t) = H(ω) exp[jωt]. Substituting into the differential equation yields

H(ω) = R/(1 + jωRC)
Let us investigate the response of the system to a more complex input. Consider an input that is given by the periodic signal x(t) in Example 3.3.1. The input signal is periodic with period 2 and ω₀ = π, and we have found that

c_n = 2K/(jnπ) for n odd, and 0 for n even

From Equation (3.6.3), the output of the system in response to this periodic input is

v(t) = Σ over n odd of (2K/(jnπ)) [R/(1 + jnπRC)] exp[jnπt]
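The output series above can be summed numerically. The sketch below assumes K = 1, R = 1, and C = 0.5 (values chosen only for illustration) and evaluates the truncated sum of Equation (3.6.3); the result is the familiar exponentially rounded square wave of an RC-filtered input:

```python
import cmath
import math

def v_out(t, N=199, K=1.0, R=1.0, C=0.5):
    """Truncated Eq. (3.6.3): d_n = c_n * R/(1 + j*n*pi*R*C), odd n only."""
    v = 0.0
    for n in range(-N, N + 1, 2):  # -199, -197, ..., 197, 199 (all odd, no n=0)
        c_n = 2 * K / (1j * n * math.pi)
        H = R / (1 + 1j * n * math.pi * R * C)
        v += (c_n * H * cmath.exp(1j * n * math.pi * t)).real
    return v

print(v_out(0.5))  # smoothed value near the middle of the positive half-cycle
```

Because only odd harmonics are present, the output inherits the half-wave symmetry of the input: v(t + 1) = −v(t).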
Example 3.6.4
Consider the system shown in Figure 3.6.3. Applying Kirchhoff's voltage law, we find that the differential equation describing the system is

x(t) = LC d²y(t)/dt² + (L/R) dy(t)/dt + y(t)

so that the system function is

H(ω) = 1/(1 + jωL/R − ω²LC)

with

|H(ω)| = 1/√[(1 − ω²LC)² + (ωL/R)²]

and

∠H(ω) = −tan⁻¹[(ωL/R)/(1 − ω²LC)]
Now, suppose that the input is the half-wave rectified signal in Example 3.3.2. Then the output of the system is periodic, with the Fourier-series representation given by Equation (3.6.3). Let us investigate the effect of the system on the harmonics of the input signal x(t). Suppose that ω₀ = 120π, LC = 1.0 × 10⁻⁴, and L/R = 1.0 × 10⁻⁴. For these values, the amplitude and phase of H(nω₀) can be approximated for n ≥ 1 respectively by

|H(nω₀)| ≈ 1/(n²ω₀²LC)

and

∠H(nω₀) ≈ −π + 1/(nω₀RC)

Note that the amplitude of H(nω₀) decreases as rapidly as 1/n², so that the amplitudes of the components d_n in the Fourier-series representation of y(t) fall off rapidly with n. The dc component of the input x(t) has been passed without any attenuation, whereas the first- and higher-order harmonics have had their amplitudes reduced. The amount of reduction increases as the order of the harmonic increases. As a matter of fact, the function of this circuit is to attenuate all the ac components of the half-wave rectified signal. Such an operation is an example of smoothing, or filtering. The ratio of the amplitudes of the first harmonic and the dc component is 7.6 × 10⁻² π/4, in comparison with a value of π/4 for the unfiltered half-wave rectified waveform. As we mentioned before, complex circuits can be designed to produce better rectified signals. The designer is always faced with a trade-off between complexity and performance.
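The 7.6 × 10⁻² factor quoted above follows directly from the exact magnitude expression for H(ω); a two-line check with the stated values:

```python
import math

w0 = 120 * math.pi
LC, L_over_R = 1.0e-4, 1.0e-4
# |H(w0)| = 1 / sqrt((1 - w0^2 LC)^2 + (w0 L/R)^2)
H1 = 1 / math.hypot(1 - w0 ** 2 * LC, w0 * L_over_R)
print(round(H1, 3))  # attenuation of the first harmonic, about 7.6e-2
```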
We have seen so far that when a signal x(t) is transmitted through an LTI system (a communication system, an amplifier, etc.) with transfer function H(ω), the output y(t) is, in general, different from x(t) and is said to be distorted. In contrast, an LTI system is said to be distortionless if the shapes of the input and the output are identical, to within a multiplicative constant. A delayed output that retains the shape of the input signal is also considered distortionless. Thus, the input/output relationship for a distortionless LTI system should satisfy the equation

y(t) = K x(t − t_d)   (3.6.5)

The corresponding transfer function H(ω) of the distortionless system will be of the form

H(ω) = K exp[−jωt_d]   (3.6.6)

Thus, the magnitude |H(ω)| is constant for all ω, while the phase shift is a linear function of frequency of the form −t_dω.
Let the input to a distortionless system be a periodic signal with Fourier-series coefficients c_n. It follows from Equation (3.6.4) that the corresponding Fourier-series coefficients for the output are given by

d_n = K exp[−jnω₀t_d] c_n   (3.6.7)

Thus, for a distortionless system, the quantities |d_n|/|c_n| and (∠d_n − ∠c_n)/n must be constant for all n.
In practice, we cannot have a system that is distortionless over the entire range −∞ < ω < ∞. Figure 3.6.4 shows the magnitude and phase characteristics of an LTI system that is distortionless in the frequency range −ω_c < ω < ω_c.
Example 3.6.5
The input and output of an LTI system are

x(t) = 8 exp[j(ω₀t + 30°)] + 6 exp[j(3ω₀t − 15°)] − 2 exp[j(5ω₀t + 45°)]

y(t) = 4 exp[j(ω₀t − 15°)] − 3 exp[j(3ω₀t + 30°)] + exp[j5ω₀t]

We want to determine whether these two signals have the same shape. Note that the ratio of the magnitudes of corresponding harmonics, |d_n|/|c_n|, has a value of 1/2 for all the harmonics. To compare the phases, we note that the quantity (∠c_n − ∠d_n)/n evaluates to 30° − (−15°) = 45° for the fundamental, (−15° − 30° + 180°)/3 = 45° for the third harmonic, and (45° + 180°)/5 = 45° for the fifth harmonic. It therefore follows that the two signals x(t) and y(t) have the same shape, except for a scale factor of 1/2 and a phase shift of 45° per harmonic. This phase shift corresponds to a time shift of t_d = π/(4ω₀). Hence, y(t) can be written as

y(t) = (1/2) x(t − π/(4ω₀))

The system is therefore distortionless for this choice of x(t).
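The check in this example amounts to verifying d_n = K exp[−jnω₀t_d] c_n with K = 1/2 and ω₀t_d = 45°. Doing the comparison in complex arithmetic avoids the ±180° bookkeeping for the negative-amplitude terms:

```python
import cmath
import math

deg = math.radians
# (c_n, d_n) pairs for n = 1, 3, 5; minus signs kept in the complex amplitudes
harmonics = {
    1: (8 * cmath.exp(1j * deg(30)), 4 * cmath.exp(1j * deg(-15))),
    3: (6 * cmath.exp(1j * deg(-15)), -3 * cmath.exp(1j * deg(30))),
    5: (-2 * cmath.exp(1j * deg(45)), 1.0 + 0j),
}
K, shift = 0.5, deg(45)  # gain 1/2, phase delay of 45 degrees per harmonic
for n, (c, d) in harmonics.items():
    predicted = K * cmath.exp(-1j * n * shift) * c
    print(n, abs(predicted - d) < 1e-12)  # True for every harmonic
```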
Example 3.6.6
Let x(t) and y(t) be the input and the output, respectively, of the simple RC circuit shown in Figure 3.6.5, with R = 10 kΩ. Applying Kirchhoff's voltage law, we obtain

RC dy(t)/dt + y(t) = x(t)

so that

H(ω) = (1/RC)/(jω + 1/RC) = 1/(1 + jω/ω_c)

where

ω_c = 1/RC = 1/(10⁴ × 10⁻¹¹) = 10⁷ s⁻¹

Hence,

|H(ω)| = 1/√(1 + (ω/ω_c)²)

∠H(ω) = −tan⁻¹(ω/ω_c)

The amplitude and phase spectra of H(ω) are shown in Figure 3.6.6. Note that for ω << ω_c,

|H(ω)| ≈ 1  and  ∠H(ω) ≈ −ω/ω_c

That is, the magnitude and phase characteristics are practically ideal. For example, for input

x(t) = A exp[jω_d t],  ω_d << ω_c

the system is practically distortionless, with output

y(t) = (1/(1 + jω_d/ω_c)) A exp[jω_d t] ≈ A exp[jω_d (t − 10⁻⁷)]
3.7 THE GIBBS PHENOMENON

We wish to investigate the effect of truncating the infinite Fourier series. For this purpose, consider the truncated series

x_N(t) = Σ over |n| ≤ N, n odd of (2K/(jnπ)) exp[jnπt]

The truncated series is shown in Figure 3.7.1 for N = 3 and 5. Note that even with N = 3, x_N(t) resembles the pulse train in Figure 3.3.2. Increasing N to 39, we obtain the approximation shown in Figure 3.7.2. It is clear that, except for the overshoot at the points of discontinuity, the latter figure is a much closer approximation to the pulse train x(t) than is x₃(t). In general, as N increases, the mean-square error between the approximation and the given signal decreases, and the approximation to the given signal improves correspondingly.
It can be shown (see Problem 3.39) that the sum in braces is equal to

sin[(N + 1/2)ω₀t] / sin(ω₀t/2)

so that the truncated series can be written as the convolution

x_N(t) = (1/T) ∫_T x(τ) { sin[(N + 1/2)ω₀(t − τ)] / sin[ω₀(t − τ)/2] } dτ   (3.7.3)
In Section 3.5.1, we showed that x_N(t) converges to x(t) (in the mean-square sense) as N → ∞. In particular, for suitably large values of N, x_N(t) should be a close approximation to x(t). Equation (3.7.3) demonstrates the Gibbs phenomenon mathematically, by showing that truncating a Fourier series is the same as convolving the given x(t) with the signal g(t) defined in Equation (3.7.2). The oscillating nature of the signal g(t) causes the ripples at the points of discontinuity.
Notice that, for any signal, the high-frequency components (high-order harmonics) of its Fourier series are the main contributors to the sharp details, such as those occurring at the points of discontinuity or at discontinuous derivatives of the signal.
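The overshoot is easy to observe numerically. Using the square-wave partial sum in its trigonometric form, the peak just to the right of the discontinuity at t = 0 stays near 9% of the jump (here the jump is 2K = 2) no matter how large N becomes:

```python
import math

def xN(t, N, K=1.0):
    # truncated series (4K/pi) * sum over odd n <= N of sin(n*pi*t)/n
    return (4 * K / math.pi) * sum(math.sin(n * math.pi * t) / n
                                   for n in range(1, N + 1, 2))

for N in (39, 199):
    # sample finely just to the right of t = 0, where the first ripple peaks
    peak = max(xN(i / (200.0 * N), N) for i in range(1, 400))
    overshoot = (peak - 1.0) / 2.0  # fraction of the jump from -K to K
    print(N, round(100 * overshoot, 1))  # about 9 percent in both cases
```

Note that the peak does not shrink as N grows; it only moves closer to the discontinuity, which is exactly the Gibbs phenomenon.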
3.8 SUMMARY
• Two functions φ₁(t) and φ₂(t) are orthogonal over an interval (a, b) if

∫ from a to b of φ₁(t) φ₂*(t) dt = 0

• A signal x(t) can be expanded over an interval in terms of a set of orthogonal functions φ_n(t) as

x(t) = Σ over n of c_n φ_n(t)

where

c_n = [∫ from a to b of x(t) φ_n*(t) dt] / [∫ from a to b of |φ_n(t)|² dt]

• The functions

φ_n(t) = exp[j2πnt/T]

are orthogonal over the interval [0, T].
• A periodic signal x(t), of period T, can be expanded in an exponential Fourier series as

x(t) = Σ from n = −∞ to ∞ of c_n exp[jnω₀t],  ω₀ = 2π/T

with coefficients

c_n = (1/T) ∫_T x(t) exp[−jnω₀t] dt
• The fundamental frequency ω₀ is called the first harmonic frequency, the frequency 2ω₀ is the second harmonic frequency, and so on.
• The plot of |c_n| versus nω₀ is called the magnitude spectrum. The locus of the tips of the magnitude lines is called the envelope of the magnitude spectrum.
• The plot of ∠c_n versus nω₀ is called the phase spectrum.
• For periodic signals, both the magnitude and phase spectra are line spectra. For real-valued signals, the magnitude spectrum has even symmetry, and the phase spectrum has odd symmetry.
• If x(t) is a real-valued signal, then it can be expanded in a trigonometric series of the form

x(t) = a₀ + Σ from n = 1 to ∞ of (a_n cos nω₀t + b_n sin nω₀t)

where

a_n = 2 Re{c_n},  b_n = −2 Im{c_n},  c_n = (1/2)(a_n − jb_n)

• An alternative form of the Fourier series is

x(t) = A₀ + Σ from n = 1 to ∞ of A_n cos(nω₀t + θ_n)

where

A₀ = c₀  and  A_n = 2|c_n|,  θ_n = ∠c_n
• For the Fourier series to converge, the signal x(t) must be absolutely integrable, have only a finite number of maxima and minima, and have a finite number of discontinuities over any period. This set of conditions is known as the Dirichlet conditions.
• If the signal x(t) has even symmetry, then

b_n = 0,  n = 1, 2, …
a₀ = (2/T) ∫ from 0 to T/2 of x(t) dt
a_n = (4/T) ∫ from 0 to T/2 of x(t) cos(2πnt/T) dt

• If the signal x(t) has odd symmetry, then

a_n = 0,  n = 0, 1, 2, …
b_n = (4/T) ∫ from 0 to T/2 of x(t) sin(2πnt/T) dt

• If the signal x(t) has half-wave odd symmetry, then

a_{2n} = 0,  n = 0, 1, …
a_{2n+1} = (4/T) ∫ from 0 to T/2 of x(t) cos(2π(2n + 1)t/T) dt
b_{2n} = 0,  n = 1, 2, …
b_{2n+1} = (4/T) ∫ from 0 to T/2 of x(t) sin(2π(2n + 1)t/T) dt
• If β_n and γ_n are, respectively, the exponential Fourier-series coefficients for two periodic signals x(t) and y(t) with the same period, then the Fourier-series coefficients for z(t) = k₁x(t) + k₂y(t) are

α_n = k₁β_n + k₂γ_n

whereas the Fourier-series coefficients for z(t) = x(t)y(t) are

α_n = Σ from m = −∞ to ∞ of β_{n−m} γ_m

• For periodic signals x(t) and y(t) with the same period T, the periodic convolution is defined as

z(t) = (1/T) ∫_T x(τ) y(t − τ) dτ

• The Fourier-series coefficients of the periodic convolution of x(t) and y(t) are

α_n = β_n γ_n

• One form of Parseval's theorem states that the average power in the signal x(t) is related to the Fourier-series coefficients β_n as

P = Σ from n = −∞ to ∞ of |β_n|²
• The system (transfer) function of an LTI system is defined as

H(ω) = ∫ from −∞ to ∞ of h(t) exp[−jωt] dt

and the response of the system to a periodic input x(t) is

y(t) = Σ from n = −∞ to ∞ of H(nω₀) c_n exp[jnω₀t]

where ω₀ is the fundamental frequency and c_n are the Fourier-series coefficients of the input x(t).
• Representing x(t) by a finite series results in an overshoot behavior at the points of discontinuity. The magnitude of the overshoot is approximately 9% of the height of the discontinuity. This phenomenon is known as the Gibbs phenomenon.
3.10 PROBLEMS
3.1. Express the set of signals shown in Figure P3.1 in terms of the orthonormal basis signals φ₁(t) and φ₂(t).
3.2. Given an arbitrary set of functions x_i(t), i = 1, 2, …, defined over an interval [t₀, t₁], we can generate a set of orthogonal functions ψ_i(t) by following the Gram-Schmidt orthogonalization procedure. Let us choose as the first basis function
Figure P3.1

Figure P3.2
then

c_i = (1/E_i) ∫ from t₀ to t₁ of x(t) ψ_i(t) dt

where

E_i = ∫ from t₀ to t₁ of |ψ_i(t)|² dt
3.6. Walsh functions are a set of orthonormal functions defined over the interval [0, 1) that take on values of ±1 over this interval. Walsh functions are characterized by their sequency, which is defined as one-half the number of zero-crossings of the function over the interval [0, 1). Figure P3.6 shows the first seven Walsh-ordered Walsh functions wal_w(k, t), arranged in order of increasing sequency.
Figure P3.6
(a) Verify that the Walsh functions shown are orthonormal over [0, 1).
(b) Suppose we want to represent the signal x(t) = t[u(t) − u(t − 1)] in terms of the Walsh functions as

x_N(t) = Σ from k = 0 to N of c_k wal_w(k, t)

Find the coefficients c_k for N = 6.
(c) Sketch x_N(t) for N = 3 and 6.
3.7. For the periodic signal
Figure P3.7

Figure P3.10
3.12. Find the exponential Fourier-series representations of the signals shown in Figure P3.12. Plot the magnitude and phase spectrum for each case.
3.13. Find the trigonometric Fourier-series representations of the signals shown in Figure P3.12.
3.14. (a) Show that if a periodic signal is absolutely integrable, then |c_n| < ∞.
(b) Does the periodic signal x(t) = sin … have a Fourier-series representation? Why?
3.15. (a) Show that

x(t) = π²/3 − 4(cos t − (1/4) cos 2t + (1/9) cos 3t − ···)

(b) Set t = 0 to obtain

Σ from n = 1 to ∞ of (−1)ⁿ⁺¹/n² = π²/12
3.16. The Fourier coefficients of a periodic signal with period T are

c_n = …,  n ≠ 0,  c₀ = 0

Does this represent a real signal? Why or why not? From the form of c_n, deduce the time signal x(t). Hint: Use

∫_T exp[−jnω₀t] δ(t − τ) dt = exp[−jnω₀τ]
Figure P3.12
c_n = [ … ]²

Figure P3.18

(a) Find T such that c₅ = 1/150 if T is large, so that sin(nπ/T) ≈ nπ/T.
(b) Determine the energy in x(t) and in x_N(t).
Figure P3.21
3.22. Periodic or circular convolution is a special case of general convolution. For periodic signals with the same period T, periodic convolution is defined by the integral

z(t) = (1/T) ∫_T x(τ) y(t − τ) dτ

(a) Show that z(t) is periodic. Find its period.
(b) Show that periodic convolution is commutative and associative.
3.23. Find the periodic convolution z(t) = x(t) ⊛ y(t) of the two signals shown in Figure P3.23. Verify Equation (3.5.15) for these signals.
Figure P3.23
3.24. Consider the periodic signal x(t) that has the exponential Fourier-series expansion

x(t) = Σ from n = −∞ to ∞ of c_n exp[jnω₀t],  c₀ = 0

(a) Integrate term by term to obtain the Fourier-series expansion of y(t) = ∫ x(t) dt, and show that y(t) is periodic, too.
(b) How do the amplitudes of the harmonics of y(t) compare to the amplitudes of the harmonics of x(t)?
(c) Does integration deemphasize or accentuate the high-frequency components?
(d) From Part (c), is the integrated waveform smoother than the original waveform?
3.25. The Fourier-series representation of the triangular signal in Figure P3.25(a) is

x(t) = (8/π²)[sin t − (1/9) sin 3t + (1/25) sin 5t − (1/49) sin 7t + ···]

Use this result to obtain the Fourier series for the signal in Figure P3.25(b).

Figure P3.25
3.26. A voltage x(t) is applied to the circuit shown in Figure P3.26. If the Fourier coefficients of x(t) are given by

c_n = (1/(n² + 1)) exp[ … ]

(a) Prove that x(t) must be a real signal of time.
(b) What is the average value of the signal?
(c) Find the first three nonzero harmonics of y(t).
(d) What does the circuit do to the high-frequency terms of the input?
(e) Repeat Parts (c) and (d) for the case where y(t) is the voltage across the resistor instead.

R = 1 Ω
Figure P3.26
3.27. Find the voltage y(t) across the capacitor in Figure P3.26 if the input is

x(t) = Σ over n of c_n exp[jnω₀t]

…

(d) What is the highest frequency ω_c you can use such that ∠H(ω) deviates from the ideal linear characteristic by less than 0.02?

Figure P3.29
3.30. Nonlinear devices can be used to generate harmonics of the input frequency. Consider the nonlinear system described by …

Figure P3.31
3.32. The triangular waveform of Example 3.5.2 with period T = 4 and peak amplitude A = 10 is applied to a series combination of a resistor R = 100 Ω and an inductor L = 0.1 H. Determine the power dissipated in the resistor.
3.33. A first-order system is modeled by the differential equation

dy(t)/dt + 2y(t) = x(t)

If the input is the waveform of Example 3.3.2, find the amplitudes of the first three harmonics in the output.
3.34. Repeat Problem 3.33 for the system

…

Figure P3.35 [x(t) is multiplied by cos nω₀t and by sin nω₀t, with ω₀ = 2π/T, producing the outputs y_c(t) and y_s(t)]
3.36. Consider the circuit shown in Figure P3.36. The input is the half-wave rectified signal of Problem 3.8. Find the amplitude of the second and fourth harmonics of the output y(t).

R₁ = 500 Ω
Figure P3.36

3.37. Consider the circuit shown in Figure P3.37. The input is the half-wave rectified signal of Problem 3.8. Find the amplitude of the second and fourth harmonics of the output y(t).
… = 0.1
Figure P3.37
3.38. (a) Determine the dc component and the amplitude of the second harmonic of the output signal y(t) in the circuits in Figures P3.36 and P3.37 if the input is the full-wave rectified signal of Problem 3.10.
(b) Find the first harmonic of the output signal y(t) in the circuits in Figures P3.36 and P3.37 if the input is the triangular waveform of Problem 3.32.
3.39. Show that the following are identities:
(a) Σ from n = −N to N of exp[jnω₀t] = sin[(N + 1/2)ω₀t] / sin(ω₀t/2)
3.40. For the signal x(t) depicted in Example 3.3.3, keep T fixed and discuss the effect of varying τ (with the restriction τ < T) on the Fourier coefficients.
3.41. Consider the signal x(t) shown in Figure 3.3.6. Determine the effect on the amplitude of the second harmonic of x(t) when there is a very small error in measuring τ. To do this, let τ = τ₀ − ε, where ε << τ₀, and find the second-harmonic dependence on ε. Find the percentage change in |c₂| when T = 10, τ = 1, and ε = 0.1.
3.42. A truncated sinusoidal waveform is shown in Figure P3.42.

Figure P3.42
3.43. For the signal x(t) shown in Figure P3.43, find the following:
(a) Determine the Fourier-series coefficients.
(b) Solve for the optimum value of t₀ for which |c₂| is maximum.
(c) Compare the result with part (c) of Problem 3.42.

Figure P3.43
3.44. The signal x(t) shown in Figure P3.44 is the output of a smoothed half-wave rectified signal. The constants t₁, t₂, and A satisfy the following relations:

ωt₁ = π − tan⁻¹(ωRC)
A = sin(ωt₁) exp[t₁/(RC)]
A exp[−t₂/(RC)] = sin ωt₂
RC = 0.1 s
ω = 2π × 60 = 377 rad/s

(a) Verify that ωt₁ = 1.5973 rad, A = 1.0429, and ωt₂ = 7.316 rad.
(b) Determine the exponential Fourier-series coefficients.
(c) Find the ratio of the amplitudes of the first harmonic and the dc component.
3.47. The Fourier-series coefficients of x(t) can be computed approximately from M samples x(mΔt) taken over one period; for example,

a_n ≈ (2/M) Σ from m = 0 to M−1 of x(mΔt) cos(2πmn/M)

Figure P3.47
3.48. The integral-squared error (error energy) remaining in the approximation of Problem 3.47 after N terms is

∫ from 0 to T of |e_N(t)|² dt = ∫ from 0 to T of |x(t)|² dt − T Σ from n = −N to N of |c_n|²

Calculate the integral-squared error for N = 11, 21, 31, 41, 51, 101, and 201.
3.49. Write a program to compute numerically the coefficients of the series expansion in terms of wal_w(k, t), 0 ≤ k ≤ 6, of the signal x(t) = t[u(t) − u(t − 1)]. Compare your results with those of Problem 3.6.
Chapter 4

The Fourier Transform
4.1 INTRODUCTION
We saw in Chapter 3 that the Fourier series is a powerful tool in treating various problems involving periodic signals. We first illustrated this fact in Section 3.6, where we demonstrated how an LTI system processes a periodic input to produce the output response. More precisely, at any frequency nω₀, we showed that the amplitude of the output is equal to the product of the amplitude of the periodic input signal, |c_n|, and the magnitude of the system function |H(ω)| evaluated at ω = nω₀, and the phase of the output is equal to the sum of the phase of the periodic input signal, ∠c_n, and the system phase ∠H(ω) evaluated at ω = nω₀.
In Chapter 3, we were able to decompose any periodic signal with period T in terms of infinitely many harmonically related complex exponentials of the form exp[jnω₀t]. All such harmonics have the common period T = 2π/ω₀. In this chapter, we consider another powerful mathematical technique, called the Fourier transform, for describing both periodic signals and aperiodic signals for which no Fourier series exists. Like the Fourier-series coefficients, the Fourier transform specifies the spectral content of a signal, thus providing a frequency-domain description of the signal. Besides being useful in analytically representing aperiodic signals, the Fourier transform is a valuable tool in the analysis of LTI systems.
It is perhaps difficult to see how some typical aperiodic signals, such as

u(t),  exp[−at]u(t),  rect(t/T)

could be made up of complex exponentials. The problem is that complex exponentials exist for all time and have constant amplitudes, whereas typical aperiodic signals do not possess these properties. In spite of this, we will see that such aperiodic signals do
have harmonic content; that is, they can be expressed as the superposition of harmonically related exponentials.
In Section 4.2, we use the Fourier series as a stepping-stone to develop the Fourier transform and show that the latter can be considered an extension of the Fourier series. In Section 4.3, we consider the properties of the Fourier transform that make it useful in LTI system analysis and provide examples of the calculation of some elementary transform pairs. In Section 4.4, we discuss some applications related to the use of Fourier-transform theory in communication systems, signal processing, and control systems. In Section 4.5, we introduce the concepts of bandwidth and duration of a signal and discuss several measures for these quantities. Finally, in the same section, the uncertainty principle is developed and its significance is discussed.
4.2 THE CONTINUOUS-TIME FOURIER TRANSFORM

Two features of the line spectrum of the periodic extension x̃(t) are apparent: the amplitude of the spectrum decreases as 1/T, and the spacing between lines decreases as 2π/T. As T approaches infinity, the spacing between lines approaches zero. This means that the spectral lines move closer, eventually becoming a continuum. The overall shapes of the magnitude and phase spectra are determined by the shape of the single pulse that remains in the new signal x(t), which is aperiodic.
To investigate what happens mathematically, we use the exponential form of the Fourier-series representation for x̃(t); i.e.,

x̃(t) = Σ from n = −∞ to ∞ of c_n exp[jnω₀t]   (4.2.1)

where

c_n = (1/T) ∫ from −T/2 to T/2 of x̃(t) exp[−jnω₀t] dt   (4.2.2)

and ω₀ = 2π/T.
We argue that in the limit, nω₀ should be a continuous variable ω. Then, from Equation (4.2.2), the Fourier coefficients per unit frequency interval are

Tc_n = ∫ from −T/2 to T/2 of x̃(t) exp[−jωt] dt   (4.2.3)

Substituting Equation (4.2.3) into Equation (4.2.1), and recognizing that in the limit the sum becomes an integral and x̃(t) approaches x(t), we obtain

x(t) = (1/2π) ∫ from −∞ to ∞ of X(ω) exp[jωt] dω   (4.2.5)
where
X(ω) = ∫ from −∞ to ∞ of x(t) exp[−jωt] dt   (4.2.6)
Equations (4.2.5) and (4.2.6) constitute the Fourier-transform pair for aperiodic signals that most electrical engineers use. (Some communications engineers prefer to write the frequency variable in hertz rather than rad/s; this can be done by an obvious change of variables.) X(ω) is called the Fourier transform of x(t) and plays the same role for aperiodic signals that c_n plays for periodic signals. Thus, X(ω) is the spectrum of x(t) and is a continuous function defined for all values of ω, whereas c_n is defined only for discrete frequencies. Therefore, as mentioned earlier, an aperiodic signal has a continuous spectrum rather than a line spectrum. X(ω) specifies the weight of the complex exponentials used to represent the waveform in Equation (4.2.5) and, in general, is a complex function of the variable ω. Thus, it can be written as

X(ω) = |X(ω)| exp[jθ(ω)]   (4.2.7)

The magnitude of X(ω) plotted against ω is called the magnitude spectrum of x(t), and |X(ω)|² is called the energy spectrum. The angle of X(ω) plotted versus ω is called the phase spectrum.
In Chapter 3, we saw that for any periodic signal x(t), there is a one-to-one corre-
spondence between x(t) and the set of Fourier coefficients c_n. Here, too, it can be
shown that there is a one-to-one correspondence between x(t) and X(ω), denoted by

x(t) ↔ X(ω)

which is meant to imply that for every x(t) having a Fourier transform, there is a
unique X(ω) and vice versa. Some sufficient conditions for the signals to have a Fourier
transform are discussed later. We emphasize that while we have used a real-valued sig-
nal x(t) as an artifice in the development of the transform pair, the Fourier-transform
relations hold for complex signals as well. With few exceptions, however, we will be
concerned primarily with real-valued signals of time.
As a notational convenience, X(ω) is often denoted by ℱ{x(t)} and is read "the
Fourier transform of x(t)." In addition, we adhere to the convention that the Fourier
transform is represented by a capital letter that is the same as the lowercase letter
denoting the time signal. For example, ℱ{y(t)} = Y(ω). A sufficient condition for x(t) to
have a Fourier transform is that it be absolutely integrable:

∫_{-∞}^{∞} |x(t)| dt < ∞    (4.2.8)
A class of signals that satisfy Equation (4.2.8) is energy signals. Such signals, in gen-
eral, are either time limited or asymptotically time limited in the sense that x(t) → 0
as t → ±∞. The Fourier transform of power signals (a class of signals defined in Chap-
ter 1 to have infinite energy content, but finite average power) can also be shown to
exist, but to contain impulses. Therefore, any signal that is either a power or an energy
signal has a Fourier transform.
"Well behaved" means that the signal is not too "wiggly" or, more correctly, that it
is of bounded variation. This, simply stated, means that x(t) can be represented by a
curve of finite length in any finite interval of time, or alternatively, that the signal has
a finite number of discontinuities, minima, and maxima within any finite interval of
time. At a point of discontinuity, t₀, the inversion integral in Equation (4.2.5) converges
to (1/2)[x(t₀⁺) + x(t₀⁻)]; otherwise it converges to x(t). Except for impulses, most signals
of interest are well behaved and satisfy Equation (4.2.8).
The conditions just given for the existence of the Fourier transform of x(t) are suf-
ficient conditions. This means that there are signals that violate either one or both con-
ditions and yet possess a Fourier transform. Examples are power signals (the unit-step
signal, periodic signals, etc.) that are not absolutely integrable over an infinite interval
and impulse trains that are not "well behaved" and are neither power nor energy sig-
nals, but still have Fourier transforms. We can include signals that do not have Fourier
transforms in the ordinary sense by generalization to transforms in the limit. For exam-
ple, to obtain the Fourier transform of a constant, we consider x(t) = rect(t/τ) and let
τ → ∞ after obtaining the Fourier transform.
Example 4.2.1
The Fourier transform of the rectangular pulse x(t) = rect(t/τ) is

X(ω) = ∫_{-∞}^{∞} x(t) exp[-jωt] dt
     = ∫_{-τ/2}^{τ/2} exp[-jωt] dt
     = (1/jω)(exp[jωτ/2] - exp[-jωτ/2])
so that

X(ω) = (2/ω) sin(ωτ/2) = τ Sa(ωτ/2) = τ sinc(ωτ/2π)

Since X(ω) is a real-valued function of ω, its phase is zero for all ω. X(ω) is plotted in Fig-
ure 4.2.2 as a function of ω.
Figure 4.2.2 The rectangular pulse x(t) = rect(t/τ) and its Fourier transform X(ω) = τ sinc(ωτ/2π).
Clearly, the spectrum of the rectangular pulse extends over the range -∞ < ω < ∞.
However, from Figure 4.2.2, we see that most of the spectral content of the pulse is con-
tained in the interval -2π/τ < ω < 2π/τ. This interval is labeled the main lobe of the
sinc signal. The other portion of the spectrum represents what are called the side lobes of
the spectrum. Increasing τ results in a narrower main lobe, whereas a smaller τ produces
a Fourier transform with a wider main lobe.
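The rect ↔ sinc pair in Example 4.2.1 can be checked numerically. The following sketch is not from the text; it assumes Python with NumPy and approximates Equation (4.2.6) for x(t) = rect(t/τ) with a midpoint Riemann sum, comparing the result against the closed form τ sinc(ωτ/2π).

```python
# Hedged sketch: numerically verify Example 4.2.1. Note np.sinc(x) = sin(pi x)/(pi x),
# so tau * np.sinc(w * tau / (2 pi)) equals (2/w) sin(w tau / 2).
import numpy as np

tau = 2.0
N = 20000
dt = tau / N
t = -tau / 2 + (np.arange(N) + 0.5) * dt         # midpoint grid on the pulse support

for w in [0.0, 1.0, 3.7, 10.0]:
    numeric = np.sum(np.exp(-1j * w * t)) * dt   # rect(t/tau) = 1 on this grid
    closed = tau * np.sinc(w * tau / (2 * np.pi))
    assert abs(numeric - closed) < 1e-5
print("rect(t/tau) <-> tau sinc(w tau/2pi) verified numerically")
```

Because the grid is symmetric about t = 0, the imaginary part of the numerical transform cancels, consistent with X(ω) being real for this even pulse.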
Example 4.2.2
Consider the triangular pulse defined as

Δ(t/τ) = 1 - |t|/τ for |t| ≤ τ, and 0 otherwise

This pulse is of unit height, centered about t = 0, and of width 2τ. Its Fourier transform is

X(ω) = 2 ∫_{0}^{τ} (1 - t/τ) cos ωt dt
168 The Fourier Transform Chapter 4
Carrying out the integration, we obtain the transform pair

Δ(t/τ) ↔ τ sinc²(ωτ/2π) = τ Sa²(ωτ/2)
Example 4.2.3
The Fourier transform of the one-sided exponential signal x(t) = exp[-at]u(t), a > 0, is

X(ω) = ∫_{-∞}^{∞} exp[-at]u(t) exp[-jωt] dt
     = ∫_{0}^{∞} exp[-(a + jω)t] dt
     = 1/(a + jω)    (4.2.9)
Example 4.2.4
In this example, we evaluate the Fourier transform of the two-sided exponential signal
x(t) = exp[-a|t|], a > 0. Proceeding as in Example 4.2.3, we obtain

X(ω) = 2a/(a² + ω²)    (4.2.10)
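Both exponential pairs can be confirmed by direct numerical integration. A small sketch, not from the text (it assumes NumPy; the values a = 1.5 and ω = 2 are illustrative choices):

```python
# Hedged sketch: check Examples 4.2.3 and 4.2.4. For x(t) = exp(-at)u(t) the
# transform should be 1/(a + jw); for exp(-a|t|) it should be 2a/(a^2 + w^2).
import numpy as np

a, w = 1.5, 2.0
N = 400000
T = 40.0                                   # exp(-1.5*40) ~ 1e-26, so the tail is negligible
dt = T / N
t = (np.arange(N) + 0.5) * dt              # midpoint grid on [0, T]

one_sided = np.sum(np.exp(-(a + 1j * w) * t)) * dt
assert abs(one_sided - 1 / (a + 1j * w)) < 1e-6

# exp(-a|t|) is even, so its transform is twice the real (cosine) integral.
two_sided = 2 * np.sum(np.exp(-a * t) * np.cos(w * t)) * dt
assert abs(two_sided - 2 * a / (a**2 + w**2)) < 1e-6
print("exponential transform pairs verified numerically")
```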
Example 4.2.5
The Fourier transform of the impulse function is readily obtained from Equation (4.2.5)
by making use of Equation (1.6.7):

δ(t) = (1/2π) ∫_{-∞}^{∞} exp[jωt] dω    (4.2.11)

Equation (4.2.11) states that the impulse signal theoretically consists of equal-amplitude
sinusoids of all frequencies. This integral is obviously meaningless, unless we interpret δ(t)
as a function specified by its properties rather than an ordinary function having definite
values for every t, as we demonstrated in Chapter 1. Equation (4.2.11) can also be written
in the limit form

δ(t) = lim_{σ→∞} (sin σt)/(πt)    (4.2.12)

since

δ(t) = lim_{σ→∞} (1/2π) ∫_{-σ}^{σ} exp[jωt] dω
     = lim_{σ→∞} (1/2π)(2 sin σt)/t
     = lim_{σ→∞} (sin σt)/(πt)
Example 4.2.6
We can easily show that (1/2π)∫_{-∞}^{∞} exp[jωt] dω "behaves" like the unit-impulse function by
putting it inside an integral; i.e., we evaluate an integral of the form

∫_{-∞}^{∞} [(1/2π) ∫_{-∞}^{∞} exp[jωt] dω] g(t) dt

where g(t) is any arbitrary well-behaved signal that is continuous at t = 0 and possesses a
Fourier transform G(ω). Interchanging the order of integration, we have

(1/2π) ∫_{-∞}^{∞} [∫_{-∞}^{∞} g(t) exp[jωt] dt] dω = (1/2π) ∫_{-∞}^{∞} G(-ω) dω = g(0)

That is, (1/2π)∫_{-∞}^{∞} exp[jωt] dω "behaves" like an impulse at t = 0.
Another transform pair follows from interchanging the roles of t and ω in Equation
(4.2.11). The result is

δ(ω) = (1/2π) ∫_{-∞}^{∞} exp[-jωt] dt

or

1 ↔ 2π δ(ω)    (4.2.13)

In words, the Fourier transform of a constant is an impulse in the frequency domain. The
factor 2π arises because we are using radian frequency. If we were to write the transform
in terms of frequency in hertz, the factor 2π would disappear (δ(ω) = δ(f)/2π).
Example 4.2.7
In this example, we use Equation (4.2.12) and Example 4.2.1 to prove Equation (4.2.13). By
letting τ go to ∞ in Example 4.2.1, we find that the signal x(t) approaches 1 for all values of
t. On the other hand, from Equation (4.2.12), the limit of the transform of rect(t/τ) becomes

lim_{τ→∞} (2 sin(ωτ/2))/ω = 2π δ(ω)
Example 4.2.8
Consider the exponential signal x(t) = exp[jω₀t]. The Fourier transform of this signal is

X(ω) = ∫_{-∞}^{∞} exp[jω₀t] exp[-jωt] dt
     = ∫_{-∞}^{∞} exp[-j(ω - ω₀)t] dt

Comparing this integral with Equation (4.2.13), we conclude that

exp[jω₀t] ↔ 2π δ(ω - ω₀)    (4.2.14)
Periodic signals are power signals, and we anticipate, according to the discussion in
Section 4.2.2, that their Fourier transforms contain impulses (delta functions). In Chap-
ter 3, we examined the spectrum of periodic signals by computing the Fourier-series
coefficients. We found that the spectrum consists of a set of lines located at nω₀,
where ω₀ is the fundamental frequency of the periodic signal. In the following exam-
ples, we find the Fourier transform of periodic signals and show that the spectra of peri-
odic signals consist of trains of impulses.
Example 4.2.9
Consider the periodic signal x(t) with period T; thus, ω₀ = 2π/T. Assume that x(t) has
the Fourier-series representation

x(t) = Σ_{n=-∞}^{∞} c_n exp[jnω₀t]

Hence, taking the Fourier transform of both sides yields

X(ω) = Σ_{n=-∞}^{∞} c_n ℱ{exp[jnω₀t]}

Using Equation (4.2.14), we obtain

X(ω) = 2π Σ_{n=-∞}^{∞} c_n δ(ω - nω₀)    (4.2.15)

That is, the spectrum is a train of impulses located at the harmonic frequencies nω₀, sepa-
rated from each other by ω₀. Note that because the signal x(t) is periodic, the magnitude spectrum
|X(ω)| is a train of impulses of strength 2π|c_n|, whereas the spectrum obtained through
the use of the Fourier series is a line spectrum with lines of finite amplitude |c_n|. Note
that the Fourier transform is not a periodic function: even though the impulses are sepa-
rated by the same amount, their weights are all different.
Example 4.2.10
Consider the periodic signal

x(t) = Σ_{n=-∞}^{∞} δ(t - nT)

which has period T. To find the Fourier transform, we first have to compute the Fourier-
series coefficients. From Equation (3.3.4), the Fourier-series coefficients are

c_n = (1/T) ∫_{-T/2}^{T/2} δ(t) exp[-jnω₀t] dt = 1/T

By using Equation (4.2.14), we find that the Fourier transform of the impulse train is

X(ω) = (2π/T) Σ_{n=-∞}^{∞} δ(ω - 2πn/T)

That is, the Fourier transformation of a sequence of impulses in the time domain yields a
sequence of impulses in the frequency domain.
4.3.1 Linearity

If

x₁(t) ↔ X₁(ω) and x₂(t) ↔ X₂(ω)

then

ax₁(t) + bx₂(t) ↔ aX₁(ω) + bX₂(ω)    (4.3.1)
TABLE 4.1
Some Selected Fourier Transform Pairs

x(t)        X(ω)
1.  1        2πδ(ω)
2.  u(t)        πδ(ω) + 1/(jω)
3.  δ(t)        1
6.  (sin ω₀t)/(πt)        rect(ω/2ω₀)
7.  sgn t        2/(jω)
14. cos(ω₀t) rect(t/τ)        (τ/2)[sinc((ω - ω₀)τ/2π) + sinc((ω + ω₀)τ/2π)]
15. exp[-at]u(t), Re[a] > 0        1/(a + jω)
16. t exp[-at]u(t), Re[a] > 0        1/(a + jω)²
17. (t^(n-1)/(n-1)!) exp[-at]u(t), Re[a] > 0        1/(a + jω)ⁿ
18. exp[-a|t|], a > 0        2a/(a² + ω²)
19. t exp[-a|t|], Re[a] > 0        -4ajω/(a² + ω²)²
Sec. 4.3 Properties of the Fourier Transform 173

x(t)        X(ω)
20. 1/(a² + t²), Re[a] > 0        (π/a) exp[-a|ω|]
21. 1/(a² + t²)², Re[a] > 0        (π/2a³)(1 + a|ω|) exp[-a|ω|]
22. exp[-at²], a > 0        √(π/a) exp[-ω²/4a]
23. Δ(t/τ)        τ sinc²(ωτ/2π)
24. Σ_{n=-∞}^{∞} δ(t - nT)        (2π/T) Σ_{n=-∞}^{∞} δ(ω - 2πn/T)
where a and b are arbitrary constants. This property is the direct result of the linearity
of the operation of integration. The linearity property can be easily extended to a lin-
ear combination of an arbitrary number of components and simply means that the
Fourier transform of a linear combination of an arbitrary number of signals is the same
linear combination of the transforms of the individual components.
Example 4.3.1
Suppose we want to find the Fourier transform of cos ω₀t. The cosine signal can be writ-
ten as a sum of two exponentials as follows:

cos ω₀t = (1/2)(exp[jω₀t] + exp[-jω₀t])

From Equation (4.2.14) and the linearity property of the Fourier transform,

ℱ{cos ω₀t} = π[δ(ω - ω₀) + δ(ω + ω₀)]

Similarly, the Fourier transform of sin ω₀t is

ℱ{sin ω₀t} = (π/j)[δ(ω - ω₀) - δ(ω + ω₀)]
4.3.2 Symmetry

If x(t) is a real-valued signal, then its transform satisfies X(-ω) = X*(ω). Writing X(ω) in the polar form

X(ω) = |X(ω)| exp[jθ(ω)]    (4.3.3)

this conjugate symmetry means that

|X(-ω)| = |X(ω)|    (4.3.4)

θ(-ω) = -θ(ω)    (4.3.5)

That is, the magnitude spectrum is an even function of ω, and the phase spectrum is an odd function of ω.
Example 4.3.2
From Equations (4.3.4) and (4.3.5), the inversion formula, Equation (4.2.5), which is writ-
ten in terms of complex exponentials, can be changed to an expression involving real co-
sinusoidal signals. Specifically, for real x(t),

x(t) = (1/2π) ∫_{-∞}^{∞} X(ω) exp[jωt] dω
     = (1/2π) ∫_{-∞}^{∞} |X(ω)| exp[j(ωt + θ(ω))] dω
     = (1/π) ∫_{0}^{∞} |X(ω)| cos(ωt + θ(ω)) dω

where the last step follows because |X(ω)| is an even function of ω while ωt + θ(ω) is odd in ω.
Example 4.3.3
Consider an even and real-valued signal x(t). Its transform X(ω) is

X(ω) = ∫_{-∞}^{∞} x(t) exp[-jωt] dt
     = ∫_{-∞}^{∞} x(t)(cos ωt - j sin ωt) dt
Since x(t) cos ωt is an even function of t and x(t) sin ωt is an odd function of t, we have

X(ω) = 2 ∫_{0}^{∞} x(t) cos ωt dt

which is a real and even function of ω. Therefore, the Fourier transform of an even and real-
valued signal in the time domain is an even and real-valued signal in the frequency domain.
4.3.3 Time Shifting

If x(t) ↔ X(ω), then, for any real constant t₀,

x(t - t₀) ↔ X(ω) exp[-jωt₀]    (4.3.6a)

Similarly,

x(t) exp[jω₀t] ↔ X(ω - ω₀)    (4.3.6b)
The proofs of these properties follow from Equation (4.2.6) after suitable substitution
of variables. Using the polar form, Equation (4.3.3), in Equation (4.3.6a) yields

ℱ{x(t - t₀)} = |X(ω)| exp[j(θ(ω) - ωt₀)]

The last equation indicates that shifting in time does not alter the amplitude spectrum
of the signal. The only effect of such shifting is to introduce a phase shift in the trans-
form that is a linear function of ω. The result is reasonable because we have already
seen that, to delay or advance a sinusoid, we have only to adjust the phase. In addition,
the energy content of a waveform does not depend on its position in time.
4.3.4 Time Scaling

If x(t) ↔ X(ω), then

x(at) ↔ (1/|a|) X(ω/a)    (4.3.7)

where a is a real constant. The proof of this follows directly from the definition of the
Fourier transform and the appropriate substitution of variables.

Aside from the amplitude factor of 1/|a|, linear scaling in time by a factor of a cor-
responds to linear scaling in frequency by a factor of 1/a. The result can be interpreted
physically by considering a typical signal x(t) and its Fourier transform X(ω), as shown
in Figure 4.3.1. If |a| < 1, x(at) is expanded in time, and the signal varies more slowly
(becomes smoother) than the original. These slower variations deemphasize the high-
frequency components and manifest themselves in more appreciable low-frequency
sinusoidal components. That is, expansion in the time domain implies compression in
Figure 4.3.1 Examples of the time-scaling property: (a) the original sig-
nal and its magnitude spectrum; (b) the time-expanded signal and its mag-
nitude spectrum; and (c) the time-compressed signal and the resulting
magnitude spectrum.
the frequency domain and vice versa. If |a| > 1, x(at) is compressed in time and must
vary rapidly. Faster variations in time are manifested by the presence of higher fre-
quency components.
The notion of time expansion and frequency compression has found application in
areas such as data transmission from space probes to receiving stations on Earth. To
reduce the amount of noise superimposed on the required signal, it is necessary to keep
the bandwidth of the receiver as small as possible. One means of accomplishing this is
to reduce the bandwidth of the signal, store the data collected by the probe, and then
play the data back at a slower rate. Because the time-scaling factor is known, the sig-
nal can be reproduced at the receiver.
Example 4.3.4
Suppose we want to determine the Fourier transform of the pulse x(t) = α rect(αt/τ),
α > 0. The Fourier transform of rect(t/τ) is, by Example 4.2.1,

ℱ{rect(t/τ)} = τ sinc(ωτ/2π)

Applying the scaling property, Equation (4.3.7), we obtain

ℱ{α rect(αt/τ)} = τ sinc(ωτ/2πα)

Note that as we increase the value of the parameter α, the rectangular pulse becomes nar-
rower and higher and approaches an impulse as α → ∞. Correspondingly, the main lobe
of the Fourier transform becomes wider, and in the limit X(ω) approaches a constant
value for all ω. On the other hand, as α approaches zero, the pulse becomes shorter and
wider, and the transform becomes increasingly concentrated at ω = 0, like a delta signal.
(See Example 4.2.7.)
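As a quick consistency check on Equation (4.3.7), one can compare ℱ{x(at)}, computed directly, with (1/|a|)X(ω/a) for the two-sided exponential of Example 4.2.4. A sketch, not from the text (assumes NumPy; a = 2 and ω = 3 are illustrative choices):

```python
# Hedged sketch: scaling property for x(t) = exp(-|t|), whose transform is
# X(w) = 2/(1 + w^2). Then x(2t) = exp(-2|t|); its transform is computed by
# midpoint integration and compared against (1/2) X(w/2).
import numpy as np

a_scale, w = 2.0, 3.0
N = 400000
T = 30.0
dt = T / N
t = (np.arange(N) + 0.5) * dt              # midpoint grid on [0, T]

# exp(-2|t|) is even: transform = 2 * integral of exp(-2t) cos(wt) over [0, inf)
numeric = 2 * np.sum(np.exp(-a_scale * t) * np.cos(w * t)) * dt
closed = (1 / a_scale) * 2.0 / (1.0 + (w / a_scale) ** 2)   # (1/|a|) X(w/a)
assert abs(numeric - closed) < 1e-6
print("F{x(at)} = (1/|a|) X(w/a) verified for the two-sided exponential")
```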
The inverse relationship between time and frequency is encountered in a wide variety
of science and engineering applications. In Section 4.5, we will cover one application
of this relationship, namely, the uncertainty principle.
4.3.5 Differentiation

If

x(t) ↔ X(ω)

then

dx(t)/dt ↔ jωX(ω)    (4.3.8)

The proof of this property is obtained by direct differentiation of both sides of Equa-
tion (4.2.5) with respect to t. The differentiation property can be extended to yield

dⁿx(t)/dtⁿ ↔ (jω)ⁿ X(ω)    (4.3.9)

We must be careful when using the differentiation property. First of all, the property
does not ensure the existence of ℱ{dx(t)/dt}. However, if the transform exists, it is given by
jωX(ω). Second, one cannot always infer that X(ω) = ℱ{dx(t)/dt}/(jω).
Since differentiation in the time domain corresponds to multiplication by jω in the
frequency domain, one might conclude that integration in the time domain should
involve division by jω in the frequency domain. However, this is true only for a certain
class of signals. To demonstrate this, consider the signal y(t) = ∫_{-∞}^{t} x(τ)dτ. With Y(ω)
as its transform, we conclude from dy(t)/dt = x(t) and Equation (4.3.8) that
jωY(ω) = X(ω). For Y(ω) to exist, y(t) should satisfy the conditions listed in Section
4.2.2. This is equivalent to y(∞) = 0, i.e., ∫_{-∞}^{∞} x(τ)dτ = X(0) = 0. In this case,

∫_{-∞}^{t} x(τ)dτ ↔ (1/jω) X(ω)    (4.3.10)

This equation implies that integration in the time domain attenuates (deemphasizes)
the magnitude of the high-frequency components of the signal. Hence, an integrated
signal is smoother than the original signal. This is why integration is sometimes called
a smoothing operation.

If X(0) ≠ 0, then the signal x(t) has a dc component, so that, according to Equation
(4.2.13), the transform will contain an impulse. As we will show later (see Example
4.3.10), in this case

∫_{-∞}^{t} x(τ)dτ ↔ (1/jω) X(ω) + πX(0)δ(ω)    (4.3.11)
Example 4.3.5
Consider the unit-step function. As we saw in Section 1.6, this function can be written as

u(t) = 1/2 + (1/2) sgn t

From the transform of a constant, Equation (4.2.13), and the pair sgn t ↔ 2/(jω),

ℱ{1/2} = πδ(ω),    ℱ{(1/2) sgn t} = 1/(jω)    (4.3.12)

so that, by linearity,

ℱ{u(t)} = πδ(ω) + 1/(jω)    (4.3.13)
4.3.6 Energy of Aperiodic Signals

The energy of an aperiodic signal x(t) is, from Chapter 1,

E = ∫_{-∞}^{∞} |x(t)|² dt = ∫_{-∞}^{∞} x(t)x*(t) dt
Using Equation (4.2.5) in this equation results in

E = ∫_{-∞}^{∞} x(t) [(1/2π) ∫_{-∞}^{∞} X*(ω) exp[-jωt] dω] dt
  = (1/2π) ∫_{-∞}^{∞} X*(ω) [∫_{-∞}^{∞} x(t) exp[-jωt] dt] dω
  = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω

We can therefore write

∫_{-∞}^{∞} |x(t)|² dt = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω    (4.3.14)
This relation is Parseval's relation for aperiodic signals. It says that the energy of an
aperiodic signal can be computed in the frequency domain by computing the energy
per unit frequency, Ẽ(ω) = |X(ω)|²/2π, and integrating over all frequencies. For this
reason, Ẽ(ω) is often referred to as the energy-density spectrum, or, simply, the energy
spectrum of the signal, since it measures the frequency distribution of the total energy
of x(t). We note that the energy spectrum of a signal depends on the magnitude of the
spectrum and not on the phase. This fact implies that there are many signals that may
have the same energy spectrum. However, for a given signal, there is only one energy
spectrum. The energy in an infinitesimal band of frequencies dω is, then, Ẽ(ω)dω, and
the energy contained within a band ω₁ ≤ ω ≤ ω₂ is

ΔE = ∫_{ω₁}^{ω₂} (|X(ω)|²/2π) dω    (4.3.15)

That is, |X(ω)|² not only allows us to calculate the total energy of x(t) using Parseval's
relation, but also permits us to calculate the energy in any given frequency band. For real-
valued signals, |X(ω)|² is an even function, and Equation (4.3.14) can be reduced to

E = (1/π) ∫_{0}^{∞} |X(ω)|² dω
Periodic signals, as defined in Chapter 1, have infinite energy, but finite average
power. A function that describes the distribution of the average power of the signal as
a function of frequency is called the power-density spectrum, or, simply, the power
spectrum. In the following, we develop an expression for the power spectral density of
power signals, and in Section 4.3.9 we give an example to demonstrate how to com-
pute the power spectral density of a periodic signal. Let x(t) be a power signal, and
define x_τ(t) as

x_τ(t) = x(t) for |t| < τ, and 0 otherwise
       = x(t) rect(t/2τ)

Then the average power of x(t) is

P = lim_{τ→∞} (1/2τ) ∫_{-τ}^{τ} |x(t)|² dt = lim_{τ→∞} (1/2τ) ∫_{-∞}^{∞} |x_τ(t)|² dt    (4.3.17)

where the last equality follows from the definition of x_τ(t). Using Parseval's relation,
we can write Equation (4.3.17) as

P = (1/2π) ∫_{-∞}^{∞} S(ω) dω

where

S(ω) = lim_{τ→∞} |X_τ(ω)|²/(2τ)
S(ω) is referred to as the power-density spectrum, or, simply, power spectrum, of the
signal x(t) and represents the distribution, or density, of the power of the signal with
frequency ω. As in the case of the energy spectrum, the power spectrum of a signal
depends only on the magnitude of the spectrum and not on the phase.
Example 4.3.6
Consider the one-sided exponential signal x(t) = exp[-t]u(t). From Example 4.2.3,

|X(ω)|² = 1/(1 + ω²)

The total energy in this signal is equal to 1/2 and can be obtained by using either Equa-
tion (1.4.2) or Equation (4.3.14). The energy in the frequency band -4 < ω < 4 is
ΔE = (2/2π) ∫_{0}^{4} dω/(1 + ω²)
   = (1/π) [tan⁻¹ ω]₀⁴ = 0.422

Thus, approximately 84% of the total energy content of the signal lies in the frequency
band -4 < ω < 4. Note that the previous result could not be obtained with a knowledge
of x(t) alone.
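The 84% figure in Example 4.3.6 is easy to reproduce numerically from |X(ω)|² alone. A sketch, not from the text (assumes NumPy):

```python
# Hedged sketch: energy of x(t) = exp(-t)u(t) in the band -4 < w < 4, computed
# as (1/2pi) * integral of |X(w)|^2 dw, with |X(w)|^2 = 1/(1 + w^2).
import numpy as np

N = 200000
dw = 8.0 / N
w = -4.0 + (np.arange(N) + 0.5) * dw       # midpoint grid on [-4, 4]
dE = np.sum(1.0 / (1.0 + w**2)) * dw / (2 * np.pi)

assert abs(dE - np.arctan(4.0) / np.pi) < 1e-8   # closed form of the band energy
assert abs(dE / 0.5 - 0.844) < 1e-3              # ~84% of the total energy 1/2
print(f"fraction of energy in |w| < 4: {dE / 0.5:.3f}")
```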
4.3.7 Convolution

Convolution plays an important role in the study of LTI systems and their applications.
The property is expressed as follows: If

x(t) ↔ X(ω)

and

h(t) ↔ H(ω)

then

x(t) * h(t) ↔ X(ω)H(ω)    (4.3.20)

The proof of this statement follows from the definition of the convolution integral, namely,

ℱ{x(t) * h(t)} = ∫_{-∞}^{∞} [∫_{-∞}^{∞} x(τ)h(t - τ) dτ] exp[-jωt] dt

Interchanging the order of integration and noting that x(τ) does not depend on t, we have

ℱ{x(t) * h(t)} = ∫_{-∞}^{∞} x(τ) [∫_{-∞}^{∞} h(t - τ) exp[-jωt] dt] dτ

By the shifting property, Equation (4.3.6a), the bracketed term is simply H(ω)
exp[-jωτ]. Thus,

ℱ{x(t) * h(t)} = ∫_{-∞}^{∞} x(τ) H(ω) exp[-jωτ] dτ = H(ω)X(ω)

Hence, convolution in the time domain is equivalent to multiplication in the frequency
domain, which, in many cases, is convenient and can be done by inspection. The use of
the convolution property for LTI systems is demonstrated in Figure 4.3.2. The ampli-
tude and phase spectrum of the output y(t) are related to those of the input x(t) and
the impulse response h(t) in the following manner:
|Y(ω)| = |X(ω)| |H(ω)|
∠Y(ω) = ∠X(ω) + ∠H(ω)

Thus, the amplitude spectrum of the input is modified by |H(ω)| to produce the ampli-
tude spectrum of the output, and the phase spectrum of the input is changed by ∠H(ω)
to produce the phase spectrum of the output.
The quantity H(ω), the Fourier transform of the system impulse response, is gen-
erally referred to as the frequency response of the system.

As we have seen in Section 4.2.2, for H(ω) to exist, h(t) has to satisfy two condi-
tions. The first condition requires that the impulse response be absolutely integrable.
This, in turn, implies that the LTI system is stable. Thus, assuming that h(t) is "well
behaved," as are essentially all signals of practical significance, we conclude that the
frequency response of a stable LTI system exists. If, however, an LTI system is unsta-
ble, that is, if

∫_{-∞}^{∞} |h(t)| dt = ∞

then the response of the system to complex exponential inputs may be infinite, and the
Fourier transform may not exist. Therefore, Fourier analysis is used to study LTI sys-
tems with impulse responses that possess Fourier transforms. Other, more general
transform techniques are used to examine those unstable systems that do not have
finite-valued frequency responses. In Chapter 5, we discuss the Laplace transform,
which is a generalization of the continuous-time Fourier transform.
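The convolution property is also the basis of fast convolution in discrete-time practice: convolving two sampled signals directly gives the same result as multiplying their DFTs and transforming back. A sketch, not from the text (assumes NumPy; the random test signals are purely illustrative):

```python
# Hedged sketch: discrete analogue of Equation (4.3.20). Zero-padding both
# signals to length len(x) + len(h) - 1 makes the circular convolution implied
# by the DFT equal to ordinary (linear) convolution.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(64)

n = len(x) + len(h) - 1
direct = np.convolve(x, h)                          # time-domain convolution
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

assert np.allclose(direct, via_fft)
print("time-domain convolution == frequency-domain multiplication")
```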
Example 4.3.7
In this example, we demonstrate how to use the convolution property of the Fourier trans-
form. Consider an LTI system with impulse response

h(t) = exp[-at]u(t)

whose input is the unit step function u(t). The Fourier transform of the output is

Y(ω) = ℱ{u(t)} ℱ{exp[-at]u(t)}
     = [πδ(ω) + 1/(jω)] [1/(a + jω)]
     = (π/a)δ(ω) + 1/(jω(a + jω))
     = (1/a)[πδ(ω) + 1/(jω)] - (1/a)[1/(a + jω)]

so that

y(t) = (1/a)(1 - exp[-at])u(t)
Example 4.3.8
The Fourier transform of the triangle signal Δ(t/τ) can be obtained by observing that the
signal is the convolution of the rectangular pulse (1/√τ) rect(t/τ) with itself; that is,

Δ(t/τ) = (1/√τ) rect(t/τ) * (1/√τ) rect(t/τ)

From Example 4.2.1 and Equation (4.3.20), it follows that

Δ(t/τ) ↔ (1/τ)[τ sinc(ωτ/2π)]² = τ sinc²(ωτ/2π)
Example 4.3.9
An LTI system has an impulse response h(t) = exp[-at]u(t), and the transform of its out-
put is found to be

Y(ω) = (c - b)(jω + a)/((jω + b)(jω + c))
     = D/(jω + b) + E/(jω + c)

where

D = a - b and E = c - a

Therefore,

y(t) = [(a - b) exp[-bt] + (c - a) exp[-ct]]u(t)
Example 4.3.10
In this example, we use the relation

∫_{-∞}^{t} x(τ) dτ = x(t) * u(t)
and the transform of u(t) to prove the integration property, Equation (4.3.11). From
Equation (4.3.13) and the convolution property, we have

ℱ{∫_{-∞}^{t} x(τ) dτ} = X(ω)[πδ(ω) + 1/(jω)]
                     = πX(ω)δ(ω) + X(ω)/(jω)
                     = πX(0)δ(ω) + X(ω)/(jω)

The last equality follows from the sampling property of the delta function.
4.3.8 Duality

We sometimes have to find the Fourier transform of a time signal that has a form sim-
ilar to an entry in the transform column in the table of Fourier transforms. We can find
the desired transform by using the table backwards. To accomplish that, we write the
inversion formula in the form

2π x(t) = ∫_{-∞}^{∞} X(ω) exp[jωt] dω

Notice that there is a symmetry between this equation and Equation (4.2.6): The two
equations are identical except for a sign change in the exponential, a factor of 2π, and
an interchange of the variables involved. This type of symmetry leads to the duality
property of the Fourier transform. This property states that if x(t) has a transform
X(ω), then

X(t) ↔ 2π x(-ω)    (4.3.22)

We prove Equation (4.3.22) by replacing t with -t in Equation (4.2.5) to get

2π x(-t) = ∫_{-∞}^{∞} X(ω) exp[-jωt] dω
         = ∫_{-∞}^{∞} X(σ) exp[-jσt] dσ
since ω is just a dummy variable of integration. Now replacing t by ω and σ by t gives
Equation (4.3.22).
Example 4.3.11
Consider the signal

x(t) = Sa(ω_B t/2) = sinc(ω_B t/2π)

From Example 4.2.1, with τ replaced by ω_B,

rect(t/ω_B) ↔ ω_B Sa(ω ω_B/2)

Then, according to Equation (4.3.22),

ℱ{ω_B Sa(ω_B t/2)} = 2π rect(-ω/ω_B) = 2π rect(ω/ω_B)

so that

ℱ{Sa(ω_B t/2)} = (2π/ω_B) rect(ω/ω_B)

because the rectangular pulse is an even signal. Note that the transform X(ω) is zero out-
side the range -ω_B/2 ≤ ω ≤ ω_B/2, but that the signal x(t) is not time limited. Signals with
Fourier transforms that vanish outside a given frequency band are called band-limited sig-
nals (signals with no spectral content above a certain maximum frequency, in this case,
ω_B/2). It can be shown that time limiting and frequency limiting are mutually exclusive
phenomena; i.e., a time-limited signal x(t) always has a Fourier transform that is not band
limited. On the other hand, if X(ω) is band limited, then the corresponding time signal is
never time limited.
Example 4.3.12
Differentiating Equation (4.2.6) n times with respect to ω, we readily obtain

(-jt)ⁿ x(t) ↔ dⁿX(ω)/dωⁿ

that is, multiplying a time signal by t is equivalent to differentiating the frequency spec-
trum, which is the dual of differentiation in the time domain.
4.3.9 Modulation

If

x(t) ↔ X(ω)

and

m(t) ↔ M(ω)
then

x(t)m(t) ↔ (1/2π)[X(ω) * M(ω)]

Convolution in the frequency domain is carried out exactly like convolution in the time
domain. That is,

X(ω) * M(ω) = ∫_{-∞}^{∞} X(σ) M(ω - σ) dσ

This result constitutes the fundamental property of modulation and is useful in the
spectral analysis of signals obtained from multipliers and modulators.
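Numerically, multiplying a low-frequency cosine by a carrier splits its single spectral line into two lines at the carrier frequency plus and minus the message frequency, exactly as (1/2)[X(ω - ω₀) + X(ω + ω₀)] predicts. A sketch, not from the text (assumes NumPy; the sample rate and frequencies are illustrative):

```python
# Hedged sketch: modulation property on sampled data. One second at fs = 1000 Hz
# gives a 1 Hz DFT bin spacing, so all tones fall on exact bins (no leakage).
import numpy as np

fs, f1, f0 = 1000.0, 5.0, 100.0            # sample rate, message, carrier (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * f1 * t)             # message: single line at 5 Hz
y = x * np.cos(2 * np.pi * f0 * t)         # modulated signal

freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_x = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
peaks_y = freqs[np.argsort(np.abs(np.fft.rfft(y)))[-2:]]

assert peak_x == 5.0
assert sorted(peaks_y.tolist()) == [95.0, 105.0]   # f0 - f1 and f0 + f1
print("spectral line moved to f0 +/- f1, as the modulation property predicts")
```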
Example 4.3.13
Consider the signal

x_s(t) = x(t)p(t)

where p(t) is the periodic impulse train with equal-strength impulses, as shown in Figure
4.3.3. Analytically, p(t) can be written as

p(t) = Σ_{n=-∞}^{∞} δ(t - nT)

so that

x_s(t) = Σ_{n=-∞}^{∞} x(nT) δ(t - nT)

That is, x_s(t) is a train of impulses spaced T seconds apart, the strength of the impulses
being equal to the sample values of x(t). Recall from Example 4.2.10 that the Fourier
transform of the periodic impulse train p(t) is itself a periodic impulse train; specifically,

P(ω) = (2π/T) Σ_{n=-∞}^{∞} δ(ω - 2πn/T)

Consequently, from the modulation property,

X_s(ω) = (1/T) Σ_{n=-∞}^{∞} X(ω - 2πn/T)
Example 4.3.14
Consider the system depicted in Figure 4.3.4, in which the product x(t)p(t) is applied to a
filter with impulse response h(t). The Fourier transform of x(t) is a rectangular pulse with
width ω_B, and the Fourier transform of the product x(t)p(t) consists of the periodically
repeated replicas of X(ω), as shown in Figure 4.3.5. Similarly, the Fourier transform of h(t)
is a rectangular pulse with width 3ω_B. According to the convolution property, the trans-
form of the output of the system is

Y(ω) = X_s(ω)H(ω)
     = X(ω)

or

y(t) = x(t)
Figure 4.3.5 Spectra associated with signals for Example 4.3.14.
Note that since the system h(t) blocked (i.e., filtered out) all the undesired components
of x_s(t) in order to obtain a scaled version of x(t), we refer to such a system as a filter. Fil-
ters are important components of any communication or control system. In Chapter 10,
we study the design of both analog and digital filters.
Example 4.3.15
In this example, we use the modulation property to show that the power spectrum of the
periodic signal x(t) with period T is

S(ω) = 2π Σ_{n=-∞}^{∞} |c_n|² δ(ω - nω₀)

Substituting Equation (4.2.15) for X(ω) and forming the function |X_τ(ω)|²/2τ, we have
1. Linearity    Σ_n a_n x_n(t) ↔ Σ_n a_n X_n(ω)    (4.3.1)
7. Integration    ∫_{-∞}^{t} x(τ) dτ ↔ X(ω)/(jω) + πX(0)δ(ω)    (4.3.11)
8. Parseval's relation    ∫_{-∞}^{∞} |x(t)|² dt = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω    (4.3.14)
Figure 4.4.2 Magnitude spectra of information signal and modulated
signal.
Y(ω) = (1/2π) X(ω) * π[δ(ω - ω₀) + δ(ω + ω₀)]
     = (1/2)[X(ω - ω₀) + X(ω + ω₀)]
The magnitude spectra of x(t) and y(t) are illustrated in Figure 4.4.2. The part of the
spectrum of Y(ω) centered at +ω₀ is the result of convolving X(ω) with δ(ω - ω₀),
and the part centered at -ω₀ is the result of convolving X(ω) with δ(ω + ω₀). This
process of shifting the spectrum of the signal by ω₀ is necessary because low-frequency
(baseband) information signals cannot be propagated easily by radio waves.

The process of extracting the information signal from the modulated signal is
referred to as demodulation. In effect, demodulation shifts the message spectrum back
to its original low-frequency location. Synchronous demodulation is one of several
techniques used to perform amplitude demodulation. A synchronous demodulator
consists of a signal multiplier, with the multiplier inputs being the modulated signal and
cos ω₀t. The output of the multiplier is

z(t) = y(t) cos ω₀t
Hence,

Z(ω) = (1/2)[Y(ω - ω₀) + Y(ω + ω₀)]
     = (1/2)X(ω) + (1/4)X(ω - 2ω₀) + (1/4)X(ω + 2ω₀)
The result is shown in Figure 4.4.3(a). To extract the original information signal x(t),
the signal z(t) is passed through the system with frequency response H(ω) shown in
Figure 4.4.3(b). Such a system is referred to as a low-pass filter, since it passes only low-
frequency components of the input signal and filters out all frequencies higher than ω_c,
the cutoff frequency of the filter. The output of the low-pass filter is illustrated in Fig-
ure 4.4.3(c). Note that if |H(ω)| = 1 for |ω| < ω_c and there were no transmission losses
Figure 4.4.3 Demodulation process: (a) magnitude spectrum of z(t);
(b) the low-pass-filter frequency response; and (c) the extracted information
spectrum.
involved, then the energy of the final signal is one-fourth that of the original signal, because
the total demodulated signal contains energy located at ω = ±2ω₀ that is eventually dis-
carded by the receiver.
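The whole modulate-demodulate chain is short to simulate. In this sketch, not from the text (assumes NumPy; the carrier, cutoff, and message frequencies are illustrative), an ideal low-pass filter is imitated by zeroing DFT bins above the cutoff, standing in for the H(ω) of Figure 4.4.3(b):

```python
# Hedged sketch of synchronous demodulation: z(t) = y(t)cos(w0 t)
# = x(t)/2 + x(t)cos(2 w0 t)/2; low-pass filtering keeps the x(t)/2 term.
import numpy as np

fs, f0 = 5000.0, 500.0                     # sample rate and carrier (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)   # message
y = x * np.cos(2 * np.pi * f0 * t)         # modulated signal
z = y * np.cos(2 * np.pi * f0 * t)         # demodulator multiplier output

Z = np.fft.rfft(z)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
Z[freqs > 50.0] = 0.0                      # ideal low-pass, 50 Hz cutoff
recovered = 2 * np.fft.irfft(Z, t.size)    # factor 2 undoes the 1/2 loss

assert np.max(np.abs(recovered - x)) < 1e-8
print("message recovered by synchronous demodulation")
```

The residual terms at 2ω₀ are exactly the components the low-pass filter discards, which is why the unscaled output carries only one-fourth of the original energy.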
4.4.2 Multiplexing

A very useful technique for simultaneously transmitting several information signals
involves the assignment of a portion of the final frequency band to each signal. This tech-
nique is known as frequency-division multiplexing (FDM), and we encounter it almost
daily, often without giving it much thought. Larger cities usually have several AM radio
and television stations, fire engines, police cruisers, taxicabs, mobile telephones, citizen-
band radios, and many other sources of radio waves. All these sources are fre-
quency multiplexed into the radio spectrum by means of assigning distinct frequency
bands to each signal. FDM is very similar to amplitude modulation. Consider three
Sec. 4.4 Applications of the Fourier Transform 193
Figure 4.4.4 Magnitude spectra of x₁(t), x₂(t), and x₃(t) for the FDM
system.

band-limited signals with Fourier transforms as shown in Figure 4.4.4. (Extension to
n signals follows in a straightforward manner.)
If we modulate x₁(t) with cos ω₁t, x₂(t) with cos ω₂t, and x₃(t) with cos ω₃t, then, sum-
ming the three modulated signals, we obtain

y(t) = x₁(t) cos ω₁t + x₂(t) cos ω₂t + x₃(t) cos ω₃t

The frequency spectrum of y(t) is

Y(ω) = Σ_{i=1}^{3} (1/2)[X_i(ω - ω_i) + X_i(ω + ω_i)]

To recover x₁(t), the received signal y(t) is first passed through a bandpass filter centered
at ω₁.
The output of this filter is then processed as in the case of synchronous amplitude
demodulation. A similar procedure can be used to extract x₂(t) or x₃(t). The overall
system of modulation, multiplexing, transmission, demultiplexing, and demodulation
is illustrated in Figure 4.4.6.
4.4.3 Sampling

The sampling operation can be modeled as the product

x_s(t) = x(t)p(t)

where

p(t) = Σ_{n=-∞}^{∞} δ(t - nT)

is the periodic impulse train. We provide a justification of this model later, in Chapter
8, where we discuss the sampling of continuous-time signals in greater detail. As can
be seen from the equation, the sampled signal is considered to be the product (modu-
lation) of the continuous-time signal x(t) and the impulse train p(t) and, hence, is usu-
ally referred to as the impulse modulation model for the sampling operation. This is
illustrated in Figure 4.4.7.
From Example 4.2.10, it follows that

P(ω) = (2π/T) Σ_{n=-∞}^{∞} δ(ω - nω_s),    ω_s = 2π/T    (4.4.3)

and hence,

X_s(ω) = (1/2π) X(ω) * P(ω)
       = (1/2π) ∫_{-∞}^{∞} X(σ) P(ω - σ) dσ
       = (1/T) Σ_{n=-∞}^{∞} X(ω - nω_s)    (4.4.4)
The signals x(t), p(t), and xs(t) are depicted together with their magnitude spectra in Figure 4.4.8, with x(t) being a band-limited signal; that is, X(ω) is zero for |ω| > ωB. As can be seen, xs(t), which is the sampled version of the continuous-time signal x(t), consists of impulses spaced T seconds apart, each having an area equal to the sampled value of x(t) at the respective sampling instant. The spectrum Xs(ω) of the sampled signal is obtained as the convolution of the spectrum X(ω) with the impulse train P(ω) and, hence, consists of the periodic repetition, at intervals ωs, of X(ω), as shown in the figure. For the case shown, ωs is large enough that the different components of Xs(ω) do not overlap. It is clear that if we pass the sampled signal xs(t) through an ideal low-pass filter which passes only those frequencies contained in x(t), the spectrum of the filter output will be identical to X(ω), except for an amplitude scale factor of 1/T introduced by the sampling operation. Thus, to recover x(t), we pass xs(t) through a filter with frequency response
H(ω) = { T,  |ω| ≤ ωB
       { 0,  otherwise    (4.4.5)

For the repeated copies of X(ω) in Xs(ω) not to overlap, the sampling frequency must satisfy

ωs ≥ 2ωB    (4.4.6)
196 The Fourier Transform Chapter 4
This is the sampling theorem (usually called the Nyquist theorem) that we referred to earlier. The minimum permissible value of ωs is called the Nyquist rate. The maximum time spacing between samples that can be used is

T = π/ωB    (4.4.7)
If T does not satisfy Equation (4.4.7), the different components of Xs(ω) overlap, and we will not be able to recover x(t) exactly. This is referred to as aliasing. If x(t) is not band limited, there will always be aliasing, irrespective of the chosen sampling rate.
Example 4.4.1
The spectrum of a signal (for example, a speech signal) is essentially zero for all frequencies above 5 kHz. The Nyquist sampling rate for such a signal is therefore 10 kHz, or ωs = 2π × 10^4 rad/s.
Example 4.4.2
Instead of sampling the previous signal at the Nyquist rate of 10 kHz, let us sample it at a rate of 8 kHz. That is,

ωs = 2π × 8 × 10^3 rad/s

The sampling interval T is equal to 2π/ωs = 0.125 ms. If we filter the sampled signal xs(t) using a low-pass filter with a cutoff frequency of 4 kHz, the output spectrum contains high-frequency components of x(t) superimposed on the low-frequency components; i.e., we have aliasing, and x(t) cannot be recovered.
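The aliasing in Example 4.4.2 is easy to verify numerically: a 5 kHz tone sampled at 8 kHz produces exactly the same samples as a 3 kHz tone (|5 − 8| = 3 kHz), so the two are indistinguishable after sampling:

```python
import numpy as np

fs = 8000.0                      # sampling rate from Example 4.4.2 (8 kHz)
n = np.arange(64)
samples_5k = np.cos(2 * np.pi * 5000 * n / fs)   # 5 kHz tone, undersampled
samples_3k = np.cos(2 * np.pi * 3000 * n / fs)   # its 3 kHz alias

# The two sample sequences are identical: after sampling, the 5 kHz
# component cannot be told apart from a 3 kHz one.
print(np.allclose(samples_5k, samples_3k))   # True
```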
In theory, if a signal x(t) is not band limited, we can eliminate aliasing by low-pass filtering the signal before sampling it. Clearly, we will need to use a sampling frequency that is twice the bandwidth of the filter. In practice, however, aliasing cannot be completely eliminated because, first, we cannot build a low-pass filter that cuts off all frequency components above a certain frequency and, second, in many applications, x(t) cannot be low-pass filtered without removing information from it. In such cases, we can reduce aliasing effects by sampling the signal at a high enough frequency that aliased components do not seriously distort the reconstructed signal. In some cases, the sampling frequency can be as large as 8 or 10 times the signal bandwidth.
Example 4.4.3
An analog bandpass signal xa(t), which is band limited to the range 800 < f < 1200 Hz, is input to the system in Figure 4.4.10(a), where H(ω) is an ideal low-pass filter with cutoff frequency of 200 Hz. Assume that the spectrum of xa(t) has a triangular shape symmetric about the center frequency, as shown in Figure 4.4.10(b).

Figure 4.4.10(c) shows Xm(ω), the spectrum of the modulated signal xm(t), while Xb(ω), that of the output of the low-pass filter (baseband signal) xb(t), is shown in Figure 4.4.10(d). If we now sample xb(t) at intervals T with T < 1/400 s, as discussed earlier, the resulting spectrum Xs(ω) will be the aliased version of Xb(ω) and will thus consist of a set of triangular-shaped pulses centered at frequencies ω = 2πk/T, k = 0, ±1, ±2, etc. If one of these pulses is centered at 2π × 1000 rad/s, we can clearly recover Xa(ω), and hence xa(t), by passing the sampled signal through an ideal bandpass filter with center frequency 2000π rad/s and bandwidth of 800π rad/s. Figure 4.4.10(e) shows the spectrum of the sampled signal for T = 1 ms.

In general, we can recover xa(t) from the sampled signal by using a bandpass filter if 2πk/T = ωc, that is, if 1/T is an integer submultiple of the center frequency in Hz.
The fact that a band-limited signal that has been sampled at the Nyquist rate can be recovered from its samples can also be illustrated in the time domain using the concept of interpolation. From our previous discussion, we have seen that, since x(t) can be obtained by passing xs(t) through the ideal reconstruction filter of Equation (4.4.5), we can write

X(ω) = H(ω) Xs(ω)    (4.4.8)
The impulse response of the reconstruction filter of Equation (4.4.5) is

h(t) = (ωB T/π) Sa(ωB t)
Taking the inverse Fourier transform of both sides of Equation (4.4.8), we obtain

x(t) = xs(t) * h(t)
     = [Σn x(nT) δ(t − nT)] * (ωB T/π) Sa(ωB t)
     = (ωB T/π) Σn x(nT) Sa(ωB (t − nT))
     = Σn x(nT) Sa(ωB (t − nT))    (4.4.9)

where the last step follows for sampling at the Nyquist rate, so that ωB T = π.
Equation (4.4.9) can be interpreted as using interpolation to reconstruct x(t) from its samples x(nT). The functions Sa[ωB(t − kT)] are called interpolating, or sampling, functions. Interpolation using sampling functions, as in Equation (4.4.9), is commonly referred to as band-limited interpolation.
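Equation (4.4.9) can be checked numerically. The sketch below reconstructs a band-limited tone from its Nyquist-rate samples; the sampling interval and the test frequency are arbitrary choices, and the infinite sum is necessarily truncated:

```python
import numpy as np

def sa(x):
    # Sampling function Sa(x) = sin(x)/x, with Sa(0) = 1.
    return np.sinc(x / np.pi)      # np.sinc(u) = sin(pi*u)/(pi*u)

T = 1e-3                           # sampling interval (assumed), seconds
wB = np.pi / T                     # reconstruction cutoff at the Nyquist rate
n = np.arange(-200, 201)           # truncate the infinite sum to 401 terms

x = lambda t: np.cos(2 * np.pi * 200 * np.asarray(t))  # 200 Hz < wB/(2*pi) = 500 Hz
samples = x(n * T)

# Band-limited interpolation, Equation (4.4.9): x(t) = sum_n x(nT) Sa(wB (t - nT))
t_fine = np.linspace(-0.005, 0.005, 101)
x_hat = np.array([np.sum(samples * sa(wB * (tt - n * T))) for tt in t_fine])

err = np.max(np.abs(x_hat - x(t_fine)))
print(err)   # small; shrinks further as more terms are kept in the sum
```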
As is usual with spectra of real-valued signals, in Figure 4.4.11 we have shown H(ω) only for values of ω > 0, since |H(ω)| = |H(−ω)| for such signals.
Example 4.4.4
Consider the ideal low-pass filter with frequency response

Hlp(ω) = { 1,  |ω| ≤ ωc
         { 0,  otherwise

The impulse response of this filter corresponds to the inverse Fourier transform of the frequency response Hlp(ω) and is given by

hlp(t) = sin(ωc t)/(πt) = (ωc/π) Sa(ωc t)
The filters described so far are referred to as ideal filters because they pass one set of frequencies without any change and completely stop others. Since it is impossible to realize filters with characteristics like those shown in Figure 4.4.11, with abrupt changes from passband to stop band and vice versa, most of the filters we deal with in practice have some transition band, as shown in Figure 4.4.12.
Example 4.4.6
Consider an RC circuit with input x(t) and with the output y(t) taken as the voltage across the capacitor.
The impulse response of this circuit is (see Problem 2.17)

h(t) = (1/RC) exp[−t/RC] u(t)

and the frequency response is

H(ω) = 1/(1 + jωRC)

The amplitude spectrum is given by

|H(ω)|² = 1/(1 + (ωRC)²)

and is shown in Figure 4.4.13. It is clear that the RC circuit with the output taken as the voltage across the capacitor performs as a low-pass filter. The frequency ωc at which the magnitude spectrum |H(ωc)| = |H(0)|/√2 (3 dB below |H(0)|) is called the band edge, or the 3-dB cutoff frequency, of the filter. (The transition between the passband and the stop band occurs near ωc.) Setting |H(ω)| = 1/√2, we obtain

ωc = 1/RC
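The 3-dB behavior of the RC low-pass filter can be confirmed directly from H(ω) = 1/(1 + jωRC); the component values below are arbitrary:

```python
import numpy as np

R, C = 1.0e3, 1.0e-6          # example values: 1 kOhm, 1 uF, so RC = 1 ms
wc = 1 / (R * C)              # 3-dB cutoff, rad/s (1000 rad/s here)

H = lambda w: 1 / (1 + 1j * w * R * C)   # frequency response of the RC low-pass

print(round(abs(H(0)), 4))        # 1.0    (dc passes unattenuated)
print(round(abs(H(wc)), 4))       # 0.7071 (= 1/sqrt(2), i.e., 3 dB down)
print(abs(H(100 * wc)) < 0.02)    # True   (strong attenuation well above cutoff)
```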
If we interchange the positions of the capacitor and the resistor, we obtain a system with frequency response (see Problem 2.18)

H(ω) = jωRC/(1 + jωRC)

The amplitude spectrum is given by

|H(ω)|² = (ωRC)²/(1 + (ωRC)²)

and is shown in Figure 4.4.14. It is clear that the RC circuit with the output taken as the voltage across the resistor performs as a high-pass filter. Again, by setting |H(ω)| = 1/√2, the cutoff frequency of this high-pass filter can be determined as

ωc = 1/RC
Filters can be classified as passive or active. Passive filters are made of passive elements (resistors, capacitors, and inductors), and active filters use operational amplifiers together with capacitors and resistors. The decision to use a passive filter in preference to an active filter in a certain application depends on several factors, such as the following:

1. The range of frequency of operation of the filter. Passive filters can operate at higher frequencies, whereas active filters are usually used at lower frequencies.

2. The weight and size of the filter realization. Active filters can be realized as an integrated circuit on a chip. Thus, they are superior when considerations of weight and size are important. This is a factor in the design of filters for low-frequency applications, where passive filters require large inductors.

3. The sensitivity of the filter to parameter changes and stability. Components used in circuits deviate from their nominal values due to tolerances related to their manufacture or due to chemical changes caused by thermal and aging effects. Passive filters are always superior to active filters when it comes to sensitivity.

4. The availability of voltage sources for operational amplifiers. Operational amplifiers require voltage sources ranging from 1 to about 12 volts for their proper operation. Whether such voltages are available without maintenance is an important consideration.
We consider the design of analog and discrete-time filters in more detail in Chapter 10.
Figure 4.5.1.) For baseband signals, we measure the bandwidth in terms of the positive-frequency portion only. But if x(t) is a band-pass signal and |X(ω)| is zero outside the interval ω1 ≤ ω ≤ ω2, then

B = ω2 − ω1    (4.5.2)
Example 4.5.1
The signal x(t) = sin ωB t/(πt) is a baseband signal and has the Fourier transform rect(ω/2ωB). The bandwidth of this signal is then ωB.

Another common measure is the 3-dB bandwidth, defined as the frequency ω1 at which

|X(ω1)|/|X(0)| = 1/√2    (4.5.3)

Note that inside the band 0 ≤ ω ≤ ω1, the magnitude |X(ω)| falls no lower than 1/√2 of its value at ω = 0. The 3-dB bandwidth is also known as the half-power bandwidth because a voltage or current attenuation of 3 dB is equivalent to a power attenuation by a factor of 2.
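The equivalence of the two names is just arithmetic: an amplitude ratio of 1/√2 and a power ratio of 1/2 give the same decibel value.

```python
import numpy as np

# A voltage (or current) ratio of 1/sqrt(2) is a power ratio of 1/2;
# both correspond to the same (approximately 3 dB) attenuation:
v_ratio = 1 / np.sqrt(2)
print(round(20 * np.log10(v_ratio), 2))      # -3.01 dB, from the amplitude
print(round(10 * np.log10(v_ratio**2), 2))   # -3.01 dB, from the power
```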
Example 4.5.2
The signal x(t) = exp[−t/τ]u(t) is a baseband signal and has the Fourier transform

X(ω) = 1/(1/τ + jω)

The magnitude spectrum of this signal is shown in Figure 4.5.2. Clearly, X(0) = τ, and the 3-dB bandwidth is

B = 1/τ
Actual energy = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω = (1/π) ∫_0^∞ |X(ω)|² dω    (4.5.5)

Setting Equation (4.5.4) equal to Equation (4.5.5), we have the formula that gives the equivalent bandwidth in hertz:

Beq = ∫_0^∞ |X(ω)|² dω / (2π |X(ωm)|²)    (4.5.6)
Example 4.5.3
The equivalent bandwidth of the signal in Example 4.5.2 is Beq = 1/(4τ) Hz.

A third measure is the null-to-null bandwidth, the distance between the first spectral nulls on either side of ωm, where ωm is the radian frequency at which the magnitude spectrum is maximum. For baseband signals, the spectrum maximum is at ω = 0, and the bandwidth is the distance between the first null and the origin.
Example 4.5.4
In Example 4.2.1, we showed that the signal x(t) = rect(t/τ) has the Fourier transform

X(ω) = τ Sa(ωτ/2)

The magnitude spectrum of this signal is shown in Figure 4.5.3. From the figure, the null-to-null bandwidth is

B = 2π/τ

Figure 4.5.3 Magnitude spectrum for the signal in Example 4.5.4.
Still another measure is the γ% bandwidth, the band within which γ% of the total signal energy resides:

(1/2π) ∫_{−Bγ}^{Bγ} |X(ω)|² dω = (γ/100) · (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω    (4.5.7)

For example, γ = 99 defines the frequency band in which 99% of the total energy resides. This is similar to the Federal Communications Commission (FCC) definition of the occupied bandwidth, which states that the energy above the upper band edge ωu is ½% and the energy below the lower band edge ωl is ½%, leaving 99% of the total energy within the occupied band. The γ% bandwidth Bγ is implicitly defined by

γ/100 = ∫_{−Bγ}^{Bγ} |X(ω)|² dω / ∫_{−∞}^{∞} |X(ω)|² dω    (4.5.8)
A dual characterization of a signal x(t) can be given in terms of its duration T, which is a measure of the extent of x(t) in the time domain. As with bandwidth, duration can be defined in several ways. The particular definition to be used depends on the application. Three of the more common definitions are as follows:

1. Distance between successive zeros. As an example, the signal

x(t) = sin(πWt)/(πt)

has duration T = 1/W.

2. Time at which x(t) drops to a given value. For example, the exponential signal

x(t) = exp[−t/Δ] u(t)

has duration T = Δ, measured as the time at which x(t) drops to 1/e of its value at t = 0.

3. Radius of gyration. This measure is used with signals that are concentrated around t = 0 and is defined as

T = 2 × radius of gyration = 2 [∫_{−∞}^{∞} t² |x(t)|² dt / ∫_{−∞}^{∞} |x(t)|² dt]^{1/2}    (4.5.9)
As an example, the Gaussian signal

x(t) = (1/(√(2π) σ)) exp[−t²/(2σ²)]

has a duration of

T = 2 [∫ t² |x(t)|² dt / ∫ |x(t)|² dt]^{1/2} = √2 σ
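The result T = √2 σ can be checked by evaluating the radius-of-gyration measure of Equation (4.5.9) numerically for an arbitrary σ:

```python
import numpy as np

sigma = 0.7                      # arbitrary width parameter
t = np.linspace(-20 * sigma, 20 * sigma, 200_001)
x = np.exp(-t**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# Radius-of-gyration duration, Equation (4.5.9); the grid spacing cancels
# in the ratio of the two sums.
T = 2 * np.sqrt(np.sum(t**2 * x**2) / np.sum(x**2))

print(np.isclose(T, np.sqrt(2) * sigma, rtol=1e-4))   # True
```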
For signals for which the duration is defined as in Equation (4.5.9) and the bandwidth is defined as in Equation (4.5.8), the product TB should satisfy the inequality

TB ≥ 1    (4.5.11)

In words, T and B cannot simultaneously be arbitrarily small: A short duration implies a large bandwidth, and a small-bandwidth signal must last a long time. This constraint has a wide domain of applications in communication systems, radar, and signal and speech processing.
The proof of Equation (4.5.11) follows from Parseval's formula, Equation (4.3.14), and Schwarz's inequality,

|∫ y1(t) y2(t) dt|² ≤ ∫ |y1(t)|² dt · ∫ |y2(t)|² dt

where the equality holds if and only if y1(t) is proportional to y2(t). Using Parseval's formula, the bandwidth can be written as

B² = ∫_{−∞}^{∞} |x′(t)|² dt / ∫_{−∞}^{∞} |x(t)|² dt    (4.5.14)

Combining Equation (4.5.14) with Equation (4.5.9) gives

(TB)² = 4 ∫ t² |x(t)|² dt · ∫ |x′(t)|² dt / [∫ |x(t)|² dt]²    (4.5.15)

Applying Schwarz's inequality with y1(t) = t x(t) and y2(t) = x′(t), we obtain

TB ≥ 2 |∫ t x(t) x′(t) dt| / ∫ |x(t)|² dt    (4.5.16)

But the fraction on the right in Equation (4.5.16) is identically equal to 1/2 (as can be seen by integrating the numerator by parts and noting that x(t) must vanish faster than 1/√t as t → ±∞), which gives the desired result.
To obtain equality in Schwarz's inequality, we must have

dx(t)/dt = k t x(t)

or

(dx(t)/dt)/x(t) = k t

Integrating, we have

ln[x(t)] = k t²/2 + constant

or

x(t) = C exp[k t²/2]    (4.5.17)
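For k < 0, Equation (4.5.17) describes a Gaussian pulse, which attains the lower bound TB = 1. A quick numerical check (σ arbitrary), computing T = 2[∫t²x² dt / ∫x² dt]^{1/2} per Equation (4.5.9) and B = [∫(x′)² dt / ∫x² dt]^{1/2} per Equation (4.5.14):

```python
import numpy as np

sigma = 1.3
t = np.linspace(-20 * sigma, 20 * sigma, 400_001)
dt = t[1] - t[0]
x = np.exp(-t**2 / (2 * sigma**2))   # Gaussian: the equality case of (4.5.11)

xp = np.gradient(x, dt)              # numerical derivative x'(t)
T = 2 * np.sqrt(np.sum(t**2 * x**2) / np.sum(x**2))   # Eq. (4.5.9)
B = np.sqrt(np.sum(xp**2) / np.sum(x**2))             # Eq. (4.5.14)

print(T * B)   # ~= 1, the lower bound of the uncertainty relation
```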
Example 4.5.5
Writing the Fourier transform in the polar form

X(ω) = A(ω) exp[jφ(ω)]

we show that, among all signals with the same amplitude A(ω), the one that minimizes the duration of x(t) has zero (linear) phase. From Equation (4.3.23), we obtain

∫_{−∞}^{∞} t² |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} { [dA(ω)/dω]² + A²(ω) [dφ(ω)/dω]² } dω    (4.5.19)

Since the left-hand side of Equation (4.5.19) measures the duration of x(t), we conclude that a high ripple in the amplitude spectrum or in the phase angle of X(ω) results in signals with long duration. A high ripple results in large absolute values of the derivatives of both the amplitude and phase spectrum, and among all signals with the same amplitude A(ω), the one that minimizes the left-hand side of Equation (4.5.19) has zero (linear) phase.
Example 4.5.6
A convenient measure of the duration of x(t) is the quantity

T = (1/x(0)) ∫_{−∞}^{∞} x(t) dt

In this formula, the duration T can be interpreted as the ratio of the area of x(t) to its height. Note that if x(t) represents the impulse response of an LTI system, then T is a measure of the rise time of the system, which is defined as the ratio of the final value of the step response to the slope of the step response at some appropriate point t0 along the rise (t0 = 0 in this case). If we define the bandwidth of x(t) analogously by

B = (1/X(0)) ∫_{−∞}^{∞} X(ω) dω

it is easy to show that

BT = 2π
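The identity BT = 2π follows because ∫X(ω)dω = 2π x(0). A numerical illustration for a Gaussian (width arbitrary), computing X(ω) by direct integration:

```python
import numpy as np

sigma = 0.5
t = np.linspace(-20, 20, 20_001)
dt = t[1] - t[0]
x = np.exp(-t**2 / (2 * sigma**2))   # Gaussian with x(0) = 1

w = np.linspace(-40, 40, 1_601)
dw = w[1] - w[0]
# Numerical Fourier transform X(w) = int x(t) exp(-j w t) dt; x is even,
# so the cosine part suffices and X(w) is real.
X = np.array([np.sum(x * np.cos(wi * t)) * dt for wi in w])

T = np.sum(x) * dt / x[len(t) // 2]   # area over height in time
B = np.sum(X) * dw / X[len(w) // 2]   # same measure applied to X(w)

print(T * B)   # ~= 2*pi ~= 6.2832, independent of sigma
```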
4.6 SUMMARY
• The Fourier transform of x(t) is defined by

X(ω) = ∫_{−∞}^{∞} x(t) exp[−jωt] dt

• The inverse Fourier transform of X(ω) is defined by

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) exp[jωt] dω
• X(ω) exists if x(t) is "well behaved" and is absolutely integrable. These conditions are sufficient, but not necessary.
• The magnitude of X(ω) plotted against ω is called the magnitude spectrum of x(t), and |X(ω)|² is called the energy spectrum.
• The angle of X(ω) plotted versus ω is called the phase spectrum.
• Parseval's theorem states that

∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω

• The energy of x(t) within the frequency band ω1 ≤ ω ≤ ω2 is given by

ΔE = (1/π) ∫_{ω1}^{ω2} |X(ω)|² dω

• The total energy of the aperiodic signal x(t) is

E = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω

• The power-density spectrum of x(t) is defined by

S(ω) = lim_{τ→∞} [ |Xτ(ω)|² / (2τ) ]

where

xτ(t) ↔ Xτ(ω)

and

xτ(t) = x(t) rect(t/2τ)
4.8 PROBLEMS

4.1. Find the Fourier transform of the following signals in terms of X(ω), the Fourier transform of x(t).
(c) x(−t)
(f) Im{x(t)}
4.3. Determine which of the following signals have a Fourier transform. Why?
(a) x(t) = exp[−2t] u(t)
(b) x(t) = |t| u(t)
(c) x(t) = cos ω0t
(d) x(t) = 1/t

Show that the Fourier transform of x(t) can be expanded as

X(ω) = Σ_{n=0}^{∞} ((−jω)^n / n!) m_n

where

m_n = ∫_{−∞}^{∞} t^n x(t) dt,   n = 0, 1, 2, ...

(Hint: Expand exp[−jωt] around t = 0 and integrate termwise.)
4.4. Using Equation (4.2.12), show that

∫_{−∞}^{∞} cos ωt dω = 2π δ(t)

Use the result to prove Equation (4.2.13).
4.5. Let X(ω) = rect[(ω − 1)/2]. Find the transform of the following functions, using the properties of the Fourier transform:
(a) x(−t)
(b) x(t)
(c) x(t + 1)
(d) x(−2t + 4)
(e) (t − 1) x(t + 1)
(f) dx(t)/dt
(g) t dx(t)/dt
Show that

(1/2π) ∫_{−∞}^{∞} exp[jωt] dω = δ(t)

(Hint: Think of the integral as the sum of a large number of cosines and sines of various frequencies.)
4.8. Consider the two related signals shown in Figure P4.8. Use the linearity, time-shifting, and integration properties, along with the transform of a rectangular signal, to find X(ω) and Y(ω).

Figure P4.8
4.9. Find the energy of the following signals, using Parseval's theorem.
(a) x(t) = exp[−2t] u(t)
(b) x(t) = u(t) − u(t − 5)
(c) x(t) = δ(t/4)
(d) x(t) = sin(Wt)/(πt)
4.10. A Gaussian-shaped signal

x(t) = A exp[−at²]

is passed through a squaring device whose output is

y(t) = x²(t)

(a) Find the Fourier transform of the output y(t).
4.11. The relations

X(0) = ∫_{−∞}^{∞} x(t) dt   and   x(0) = (1/2π) ∫_{−∞}^{∞} X(ω) dω

are special cases of the Fourier-transform pair. These two formulas can be used to evaluate some definite integrals. Choose the appropriate x(t) and X(ω) to verify the following identities:
(a) ∫_{−∞}^{∞} exp[−πθ²] dθ = 1
(b) (1/π) ∫_{−∞}^{∞} a/(a² + ω²) dω = 1,   a > 0
4.12. Use the relation

∫_{−∞}^{∞} x(t) y*(t) dt = (1/2π) ∫_{−∞}^{∞} X(ω) Y*(ω) dω

and a known transform pair to show that

(a) ∫_0^∞ dt/(a² + t²)² = π/(4a³)
(b) ∫_0^∞ sin(πt)/(t(a² + t²)) dt = (π/(2a²))(1 − exp[−πa])
(c) ∫_{−∞}^{∞} (sin t/t)² dt = π
(d) ∫_{−∞}^{∞} (sin t/t)⁴ dt = 2π/3
4.13. Consider the signal

x(t) = exp[−εt] u(t),   ε > 0

(a) Find X(ω).
(b) Write X(ω) as X(ω) = R(ω) + jI(ω), where R(ω) and I(ω) are the real and imaginary parts of X(ω), respectively.
(c) Take the limit as ε → 0 of part (b), and show that

lim_{ε→0} F{exp[−εt] u(t)} = π δ(ω) + 1/(jω)

(Hint: Note that

lim_{ε→0} ε/(ε² + ω²) = { 0,  ω ≠ 0
                        { ∞,  ω = 0  )
4.14. Find the signal x(t) whose Fourier transform is

X(ω) = 2/(−ω² + j4ω + 3)
4.15. (a) Show that if X(ω) = 0 for |ω| > c, then

x(t) * (sin ct)/(πt) = x(t)

(b) Use part (a) to show that

∫_{−∞}^{∞} (sin cτ)/(πτ) · (sin(t − τ))/(π(t − τ)) dτ = { (sin t)/(πt),   c ≥ 1
                                                        { (sin ct)/(πt),  c < 1
4.17. Show that with current as input and voltage as output, the frequency response of an inductor with inductance L is jωL and that of a capacitor of capacitance C is 1/(jωC).

4.18. Consider the system shown in Figure P4.18 with RC = 10.
(a) Find H(ω) if the output is the voltage across the capacitor, and sketch |H(ω)| as a function of ω.
(b) Repeat part (a) if the output is the resistor voltage. Comment on your results.

Figure P4.18
4.19. The input to the system shown in Figure P4.19 has the spectrum shown. Let

p(t) = cos ω0t,   ω0 ≫ ωM

Figure P4.19
4.20. (a) The Hilbert transform of a signal x(t) is obtained by passing the signal through an LTI system with impulse response h(t) = 1/(πt). What is H(ω)?
(b) What is the Hilbert transform of the signal x(t) = cos ω0t?

4.21. The autocorrelation function of a signal x(t) is defined as

R_x(τ) = ∫_{−∞}^{∞} x(t) x(t + τ) dt
Show that

F{R_x(τ)} = |X(ω)|²
'btl O -l tn
Flgure P4.20
--b)c
b)c 0 ojc (,
o;c 'aia 0 or. os -(nt
-a, O u1
@l o
t's
Flgure P425
4.26. As discussed in Section 4.4.1, AM demodulation consists of multiplying the received signal y(t) by a replica, A cos ω0t, of the carrier and low-pass filtering the resulting signal z(t). Such a scheme is called synchronous demodulation and assumes that the phase of the carrier is known at the receiver. If the carrier phase is not known, z(t) becomes

z(t) = y(t) A cos(ω0t + θ)

where θ is the assumed phase of the carrier.
(a) Assume that the signal x(t) is band limited to ωM, and find the output x̂(t) of the demodulator.
(b) How does x̂(t) compare with the desired output x(t)?
4.27. A single-sideband, amplitude-modulated signal is generated using the system shown in Figure P4.27.
(a) Sketch the spectrum of y(t) for ω1 = ωM.
(b) Write a mathematical expression for h2(t). Is it a realizable filter?

Figure P4.27
4.28. Consider the system shown in Figure P4.28(a). The systems h1(t) and h2(t) respectively have frequency responses

H1(ω) = (1/2)[H0(ω − ω0) + H0(ω + ω0)]

and

H2(ω) = (1/2j)[H0(ω − ω0) − H0(ω + ω0)]

(a) Sketch the spectrum of y(t).
(b) Repeat part (a) for the H0(ω) shown in Figure P4.28(b).

Figure P4.28
4.29. Let x(t) and y(t) be low-pass signals with bandwidths of 150 Hz and 350 Hz, respectively, and let z(t) = x(t)y(t). The signal z(t) is sampled using an ideal sampler at intervals of T_s seconds.
(a) What is the maximum value that T_s can take without introducing aliasing?
(b) If X(ω) is as shown in Figure P4.30, sketch the spectrum of the sampled signal.

Figure P4.30
4.31. In flat-top sampling, the amplitude of each pulse in the pulse train x_s(t) is constant during the pulse, but is determined by an instantaneous sample of x(t), as illustrated in Figure P4.31(a). The instantaneous sample is chosen to occur at the center of the pulse for convenience. This choice is not necessary in general.
(a) Write an expression for x_s(t).
(b) Find X_s(ω).
(c) How is this result different from the result in part (a) of Problem 4.30?
(d) Using only a low-pass filter, can x(t) be recovered without any distortion?
(e) Show that you can recover x(t) without any distortion if another filter, H_eq(ω), is added, as shown in Figure P4.31(b), where

H(ω) = { 1,  |ω| < ωM
       { 0,  otherwise

H_eq(ω) = (ωτ/2)/sin(ωτ/2),  |ω| < ωM
        = arbitrary,  elsewhere

Figure P4.31
4.32. Figure P4.32 diagrams the FDM system that generates the baseband signal for FM stereophonic broadcasting. The left-speaker and right-speaker signals are processed to produce x_L(t) + x_R(t) and x_L(t) − x_R(t), respectively.
(a) Sketch the spectrum of y(t).
(b) Sketch the spectra of z(t), u(t), and v(t).
(c) Show how to recover both x_L(t) and x_R(t).

Figure P4.32
4.33. Show that, with ω0 = 2π/T,

Σn δ(t − nT) ↔ ω0 Σn δ(ω − nω0)
4.34. The signal X_s(ω) shown in Figure 4.4.10(e) is a periodic signal in ω.
(a) Find the Fourier-series coefficients of this signal.
4.35. Calculate the time-bandwidth product for the following signals:
(a) x(t) = (1/√(2π)) exp[−t²/2]. (Use the radius-of-gyration measure for T and the equivalent-bandwidth measure for B.)
(b) x(t) = sin(2πWt)/(πt). (Use the distance between zeros as a measure of T and the absolute bandwidth as a measure of B.)
(c) x(t) = A exp[−at] u(t). (Use the time at which x(t) drops to 1/e of its value at t = 0 as a measure of T and the 3-dB bandwidth as a measure of B.)
Chapter 5

The Laplace Transform

5.1 INTRODUCTION
In Chapters 3 and 4, we saw how frequency-domain methods are extremely useful in the study of signals and LTI systems. In those chapters, we demonstrated that Fourier analysis reduces the convolution operation required to compute the output of LTI systems to just the product of the Fourier transform of the input signal and the frequency response of the system. One of the problems we can run into is that many of the input signals we would like to use do not have Fourier transforms. Examples are exp[ct]u(t), c > 0; exp[−at], −∞ < t < ∞; tu(t); and other time signals that are not absolutely integrable. If we are confronted, say, with a system that is driven by a ramp-function input, is there any method of solution other than the time-domain techniques of Chapter 2? The difficulty can be resolved by extending the Fourier transform so that the signal x(t) is expressed as a sum of complex exponentials, exp[st], where the frequency variable is s = σ + jω and thus is not restricted to the imaginary axis only. This is equivalent to multiplying the signal by an exponential convergence factor. For example, exp[−σt] exp[ct]u(t) satisfies Dirichlet's conditions for σ > c and, therefore, should have a generalized or extended Fourier transform. Such an extended transform is known as the bilateral Laplace transform, named after the French mathematician Pierre Simon de Laplace. In this chapter, we define the bilateral Laplace transform (Section 5.2) and use the definition to determine a set of bilateral transform pairs for some basic signals.

As mentioned in Chapter 2, any signal x(t) can be written as the sum of causal and noncausal signals. The causal part of x(t), x(t)u(t), has a special Laplace transform that we refer to as the unilateral Laplace transform or, simply, the Laplace transform.
The unilateral Laplace transform is more often used than the bilateral Laplace transform, not only because most of the signals occurring in practice are causal signals, but
also because the response of a causal LTI system to a causal input is causal. In Section 5.3, we define the unilateral Laplace transform and provide some examples to illustrate how to evaluate such transforms. In Section 5.4, we demonstrate how to evaluate the bilateral Laplace transform using the unilateral Laplace transform.

As with other transforms, the Laplace transform possesses a set of valuable properties that are used repeatedly in various applications. Because of their importance, we devote Section 5.5 to the development of the properties of the Laplace transform and give examples to illustrate their use.

Finding the inverse Laplace transform is as important as finding the transform itself. The inverse Laplace transform is defined in terms of a contour integral. In general, such an integral is not easy to evaluate and requires the use of some theorems from the subject of complex variables that are beyond the scope of this text. In Section 5.6, we use the technique of partial fractions to find the inverse Laplace transform for the class of signals that have rational transforms (i.e., that can be expressed as the ratio of two polynomials).

In Section 5.7, we develop techniques for determining the simulation diagrams of continuous-time systems. In Section 5.8, we discuss some applications of the Laplace transform, such as the solution of differential equations, applications to circuit analysis, and applications to control systems. In Section 5.9, we cover the solution of the state equations in the frequency domain. Finally, in Section 5.10, we discuss the stability of LTI systems in the s domain.
5.2 THE BILATERAL LAPLACE TRANSFORM

The bilateral Laplace transform of the signal x(t) is defined as

X_B(s) = ∫_{−∞}^{∞} x(t) exp[−st] dt    (5.2.1)

where the complex variable s is, in general, of the form s = σ + jω, with σ and ω the real and imaginary parts, respectively. When σ = 0, s = jω, and Equation (5.2.1) becomes the Fourier transform of x(t), while with σ ≠ 0, the bilateral Laplace transform is the Fourier transform of the signal x(t) exp[−σt]. For convenience, we sometimes denote the bilateral Laplace transform in operator form as L_B{x(t)} and denote the transform relationship between x(t) and X_B(s) as

x(t) ↔ X_B(s)
Let us now evaluate a number of bilateral Laplace transforms to illustrate the relationship between them and Fourier transforms.
Example 5.2.1
Consider the signal x(t) = exp[−at]u(t). From the definition of the bilateral Laplace transform,

X_B(s) = ∫_0^∞ exp[−at] exp[−st] dt = 1/(s + a)

As stated earlier, we can look at this bilateral Laplace transform as the Fourier transform of the signal exp[−at] exp[−σt]u(t). This signal has a Fourier transform only if σ > −a. Thus, X_B(s) exists only if Re{s} > −a.
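The transform pair of Example 5.2.1 can be spot-checked by numerical integration at a single point inside the ROC (the values of a and s below are arbitrary test values):

```python
import numpy as np

a = 2.0
s = 1.5 + 3.0j                 # test point with Re{s} = 1.5 > -a, inside the ROC
t = np.linspace(0, 20, 400_001)
dt = t[1] - t[0]

# Numerical version of X_B(s) = int_0^inf exp[-at] exp[-st] dt
X_B = np.sum(np.exp(-(s + a) * t)) * dt

print(np.isclose(X_B, 1 / (s + a), atol=1e-4))   # True: X_B(s) = 1/(s + a)
```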
In general, the bilateral Laplace transform converges for some values of Re{s} and not for others. The set of values of s for which it converges, i.e., for which

∫_{−∞}^{∞} |x(t)| exp[−σt] dt < ∞

is called the region of absolute convergence or, simply, the region of convergence, and is abbreviated ROC. It should be stressed that the region of convergence depends on the given signal x(t). For instance, in the preceding example, the ROC is defined by Re{s} > −a whether a is positive or negative. Note also that even though the bilateral Laplace transform exists for all values of a, the Fourier transform exists only if a > 0.

If we restrict our attention to time signals whose Laplace transforms are rational functions of s, i.e., X_B(s) = N(s)/D(s), then clearly, X_B(s) does not converge at the zeros of the polynomial D(s) (the poles of X_B(s)), which leads us to conclude that for rational Laplace transforms, the ROC should not contain any poles.
Example 5.2.2
In this example, we show that two signals can have the same algebraic expression for their bilateral Laplace transform, but different ROCs. Consider the signal

x(t) = −exp[−at]u(−t)

Its bilateral Laplace transform is

X_B(s) = −∫_{−∞}^0 exp[−(s + a)t] dt

For this integral to converge, we require that Re{s + a} < 0, or Re{s} < −a, and the bilateral Laplace transform is

X_B(s) = 1/(s + a)
A convenient way to display the ROC is in the complex s plane, as shown in Figure 5.2.1. The horizontal axis is usually referred to as the σ axis, and the vertical axis is normally referred to as the jω axis. The shaded region in Figure 5.2.1(a) represents the set of points in the s plane corresponding to the region of convergence for the signal in Example 5.2.1, and the shaded region in Figure 5.2.1(b) represents the region of convergence for the signal in Example 5.2.2.
The ROC can also provide us with information about whether x(t) is Fourier transformable or not. Since the Fourier transform is obtained from the bilateral Laplace transform by setting σ = 0, the region of convergence in this case is a single line (the jω axis). Therefore, if the ROC for X_B(s) includes the jω axis, x(t) is Fourier transformable, and X(ω) can be obtained by replacing s in X_B(s) by jω.
Example 5.2.3
Consider the sum of two real exponentials:

x(t) = exp[−2t]u(t) + exp[t]u(−t)

Note that for signals that exist for both positive and negative time, the behavior of the signal for negative time puts an upper bound on the allowable values of Re{s}, and the behavior for positive time puts a lower bound on the allowable Re{s}. Therefore, we expect to obtain a strip as the ROC for such signals. The bilateral Laplace transform of x(t) is

X_B(s) = 1/(s + 2) − 1/(s − 1) = −3/((s + 2)(s − 1)),   −2 < Re{s} < 1
5.3 THE UNILATERAL LAPLACE TRANSFORM

The unilateral Laplace transform of the signal x(t) is defined as

X(s) = ∫_{0⁻}^∞ x(t) exp[−st] dt    (5.3.1)

Example 5.3.1
In this example, we find the unilateral Laplace transforms of the following signals:

x1(t) = u(t),  x2(t) = δ(t),  x3(t) = exp[j2t],  x4(t) = cos 2t,  x5(t) = sin 2t

From Equation (5.3.1),

X1(s) = ∫_0^∞ exp[−st] dt = 1/s,   Re{s} > 0
X2(s) = ∫_{0⁻}^∞ δ(t) exp[−st] dt = 1,   for all s

X3(s) = ∫_0^∞ exp[j2t] exp[−st] dt = 1/(s − j2) = s/(s² + 4) + j 2/(s² + 4),   Re{s} > 0

Since cos 2t = Re{exp[j2t]} and sin 2t = Im{exp[j2t]}, using the linearity of the integral operation, we have

X4(s) = s/(s² + 4)  and  X5(s) = 2/(s² + 4),   Re{s} > 0
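These pairs are easy to spot-check numerically at a real test point s inside the ROC (the value of s below is arbitrary):

```python
import numpy as np

s = 0.8                        # real test point with Re{s} > 0
t = np.linspace(0, 60, 1_200_001)
dt = t[1] - t[0]

X4 = np.sum(np.cos(2 * t) * np.exp(-s * t)) * dt   # L{cos 2t}
X5 = np.sum(np.sin(2 * t) * np.exp(-s * t)) * dt   # L{sin 2t}

print(np.isclose(X4, s / (s**2 + 4), atol=1e-4))   # True
print(np.isclose(X5, 2 / (s**2 + 4), atol=1e-4))   # True
```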
Table 5.1 lists some of the important unilateral Laplace-transform pairs. These are used repeatedly in applications.

If x(t) does not have any singularities at t = 0, then the lower limit in the second term can be replaced by 0⁻, and the bilateral Laplace transform becomes

X_B(s) = L{x(t)u(t)} + L{x(−t)u(t)}|_{s→−s}    (5.4.2)
TABLE 5.1
Some Selected Unilateral Laplace-Transform Pairs

Signal                          Transform                           ROC
1.  u(t)                        1/s                                 Re{s} > 0
2.  u(t) − u(t − a)             (1 − exp[−as])/s                    Re{s} > 0
10. cos² ω0t u(t)               (s² + 2ω0²)/(s(s² + 4ω0²))          Re{s} > 0
12. exp[−at] cos ω0t u(t)       (s + a)/((s + a)² + ω0²)            Re{s} > −a
13. exp[−at] sin ω0t u(t)       ω0/((s + a)² + ω0²)                 Re{s} > −a
15. t sin ω0t u(t)              2ω0s/(s² + ω0²)²                    Re{s} > 0
Example 5.4.1
The bilateral Laplace transform of the signal x(t) = exp[at]u(−t), a > 0, is

X_B(s) = L{exp[−at]u(t)}|_{s→−s} = −1/(s − a),   Re{s} < a
Example 5.4.2
According to Equation (5.4.2), the bilateral Laplace transform of

x(t) = A exp[−at]u(t) + B t² exp[−bt]u(−t),   a and b > 0

is

X_B(s) = A/(s + a) + L{B(−t)² exp[bt]u(t)}|_{s→−s}
       = A/(s + a) + (2B/(s − b)³)|_{s→−s},   Re{s} > −a ∩ Re{s} < −b
       = A/(s + a) − 2B/(s + b)³,   −a < Re{s} < −b

where L{B(−t)² exp[bt]u(t)} follows from entry 7 in Table 5.1.
Not all signals possess a bilateral Laplace transform. For example, the periodic exponential exp[jω0t] does not have a bilateral Laplace transform because

L_B{exp[jω0t]} = ∫_{−∞}^0 exp[−(s − jω0)t] dt + ∫_0^∞ exp[−(s − jω0)t] dt

For the first integral to converge, we need Re{s} < 0, and for the second integral to converge, we need Re{s} > 0. These two restrictions are contradictory, and there is no value of s for which the transform converges.
In the remainder of this chapter, we restrict our attention to the unilateral Laplace transform, which we simply refer to as the Laplace transform.
Using these properties, it is possible to derive many of the transform pairs in Table 5.1. In this section, we list several of these properties and provide outlines of their proofs.
5.5.1 Linearity
If

x_1(t) \leftrightarrow X_1(s)
x_2(t) \leftrightarrow X_2(s)

then

a_1 x_1(t) + a_2 x_2(t) \leftrightarrow a_1 X_1(s) + a_2 X_2(s)
Example 5.5.1
Suppose we want to find the Laplace transform of

(A + B\exp[-bt])u(t)

From Table 5.1, we have the transform pairs u(t) \leftrightarrow 1/s and \exp[-bt]u(t) \leftrightarrow 1/(s+b), so that, by linearity,

(A + B\exp[-bt])u(t) \leftrightarrow \frac{A}{s} + \frac{B}{s+b}

5.5.2 Time Shifting
For a shift to the right by t_0 > 0,

\mathcal{L}\{x(t - t_0)u(t - t_0)\} = \int_{t_0}^{\infty} x(t - t_0)\exp[-st]\,dt = \exp[-st_0]X(s)

Note that all values of s in the ROC of x(t) are also in the ROC of x(t - t_0). Therefore, the ROC associated with x(t - t_0) is the same as the ROC associated with x(t).
Example 5.5.2
Consider the rectangular pulse x(t) = \mathrm{rect}((t - a)/2a). This signal can be written as

\mathrm{rect}((t - a)/2a) = u(t) - u(t - 2a)

Using linearity and time shifting, we find that the Laplace transform of x(t) is

X(s) = \frac{1 - \exp[-2as]}{s}
It should be clear that the time-shifting property holds for a right shift only. For example, the Laplace transform of x(t + t_0), t_0 > 0, cannot be expressed in terms of the Laplace transform of x(t). (Why?)
The proof follows directly from the definition of the Laplace transform. Since the new transform is a shifted version of X(s), for any s that is in the ROC of x(t), the values s + Re[s_0] are in the ROC of \exp[s_0 t]x(t).
The Laplace Transform    Chapter 5

Example 5.5.3
From entry 8 in Table 5.1 and Equation (5.5.3), the Laplace transform of

x(t) = A\exp[-at]\cos(\omega_0 t + \theta)u(t)

is

X(s) = \mathcal{L}\{A\exp[-at]\cos\omega_0 t\cos\theta\,u(t)\} - \mathcal{L}\{A\exp[-at]\sin\omega_0 t\sin\theta\,u(t)\}

= \frac{A(s+a)\cos\theta}{(s+a)^2 + \omega_0^2} - \frac{A\omega_0\sin\theta}{(s+a)^2 + \omega_0^2}
The proof follows directly from the definition of the Laplace transform and the appropriate substitution of variables.
Aside from the amplitude factor of 1/a, linear scaling in time by a factor of a corresponds to linear scaling in the s plane by a factor of 1/a. Also, for any value of s in the ROC of x(t), the value as will be in the ROC of x(at); that is, the ROC associated with x(at) is an expanded (a > 1) or compressed (a < 1) version of the ROC of x(t).
Example 5.5.4
Consider the time-scaled unit-step signal u(at), where a is an arbitrary positive number. The Laplace transform of u(at) is

\mathcal{L}\{u(at)\} = \frac{1}{a}\cdot\frac{1}{s/a} = \frac{1}{s}

as expected, since u(at) = u(t) for any a > 0.
The proof of this property is obtained by computing the transform of dx(t)/dt. This transform is

\mathcal{L}\left\{\frac{dx(t)}{dt}\right\} = \int_{0^-}^{\infty}\frac{dx(t)}{dt}\exp[-st]\,dt

Integrating by parts yields

\mathcal{L}\left\{\frac{dx(t)}{dt}\right\} = \exp[-st]x(t)\Big|_{0^-}^{\infty} + s\int_{0^-}^{\infty}x(t)\exp[-st]\,dt

= \lim_{t \to \infty}[\exp[-st]x(t)] - x(0^-) + sX(s)

For s inside the ROC, the limit is zero, so that

\mathcal{L}\left\{\frac{dx(t)}{dt}\right\} = sX(s) - x(0^-)
Therefore, differentiation in the time domain is equivalent to multiplication by s in the s domain. This permits us to replace operations of calculus by simple algebraic operations on transforms.
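The differentiation property can be checked symbolically. The following sketch (our own illustration) verifies \mathcal{L}\{x'(t)\} = sX(s) - x(0^-) for the assumed test signal x(t) = \exp[-2t], t \geq 0.

```python
# Sketch: checking L{x'(t)} = s X(s) - x(0^-) for the (assumed) test signal
# x(t) = exp(-2t), t >= 0.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
x = sp.exp(-2*t)

X  = sp.laplace_transform(x, t, s, noconds=True)             # X(s) = 1/(s+2)
Xp = sp.laplace_transform(sp.diff(x, t), t, s, noconds=True) # L{x'(t)}
check = sp.simplify(Xp - (s*X - x.subs(t, 0)))
print(check)
```

The difference simplifies to zero, confirming the property for this signal.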
The differentiation property can be extended to yield

\mathcal{L}\left\{\frac{d^n x(t)}{dt^n}\right\} = s^n X(s) - s^{n-1}x(0^-) - s^{n-2}x'(0^-) - \cdots - x^{(n-1)}(0^-)   (5.5.6)
Generally speaking, differentiation in the time domain is the most important property (next to linearity) of the Laplace transform. It makes the Laplace transform useful in applications such as solving differential equations. Specifically, we can use the Laplace transform to convert any linear differential equation with constant coefficients into an algebraic equation.
As mentioned earlier, for rational Laplace transforms, the ROC does not contain any poles. Now, if X(s) has a first-order pole at s = 0, multiplying by s, as in Equation (5.5.5), may cancel that pole and result in a new ROC that contains the ROC of x(t). Therefore, in general, the ROC associated with dx(t)/dt contains the ROC associated with x(t) and can be larger if X(s) has a first-order pole at s = 0.
Example 5.5.5
The unit step function x(t) = u(t) has the transform X(s) = 1/s, with an ROC defined by Re[s] > 0. The derivative of u(t) is the unit impulse function, whose Laplace transform is unity for all s, with an associated ROC extending over the entire s plane.
Example 5.5.6
Let x(t) = \sin^2\omega t\,u(t), for which x(0^-) = 0. Note that

x'(t) = 2\omega\sin\omega t\cos\omega t\,u(t) = \omega\sin 2\omega t\,u(t)

and therefore, from the differentiation property, sX(s) = \mathcal{L}\{\omega\sin 2\omega t\,u(t)\}, so that

\mathcal{L}\{\sin^2\omega t\,u(t)\} = \frac{1}{s}\cdot\frac{2\omega^2}{s^2 + 4\omega^2} = \frac{2\omega^2}{s(s^2 + 4\omega^2)}
Example 5.5.7
One of the important applications of the Laplace transform is in solving differential equations with specified initial conditions. As an example, consider the differential equation

y''(t) + 3y'(t) + 2y(t) = 0, \qquad y(0^-) = 3, \quad y'(0^-) = 1

Let Y(s) = \mathcal{L}\{y(t)\} be the Laplace transform of the (unknown) solution y(t). Using the differentiation-in-time property, we have

\mathcal{L}\{y'(t)\} = sY(s) - y(0^-) = sY(s) - 3
\mathcal{L}\{y''(t)\} = s^2 Y(s) - sy(0^-) - y'(0^-) = s^2 Y(s) - 3s - 1

If we take the Laplace transform of both sides of the differential equation and use the last two expressions, we obtain

s^2 Y(s) + 3sY(s) + 2Y(s) = 3s + 10

Solving algebraically for Y(s), we get

Y(s) = \frac{3s + 10}{(s+2)(s+1)} = \frac{7}{s+1} - \frac{4}{s+2}

From Table 5.1, we see that

y(t) = 7\exp[-t]u(t) - 4\exp[-2t]u(t)
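The solution can be confirmed by direct substitution. The sketch below (ours, not the book's) checks that y(t) = 7e^{-t} - 4e^{-2t} satisfies both the differential equation and the initial conditions.

```python
# Sketch: verifying the solution of Example 5.5.7, y(t) = 7e^{-t} - 4e^{-2t},
# against the differential equation and its initial conditions.
import sympy as sp

t = sp.symbols('t', nonnegative=True)
y = 7*sp.exp(-t) - 4*sp.exp(-2*t)

residual = sp.simplify(sp.diff(y, t, 2) + 3*sp.diff(y, t) + 2*y)  # should be 0
y0, yp0 = y.subs(t, 0), sp.diff(y, t).subs(t, 0)                  # should be 3 and 1
print(residual, y0, yp0)
```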
Example 5.5.8
Consider the RC circuit shown in Figure 5.5.1(a). The input is the rectangular signal shown in Figure 5.5.1(b). The circuit is assumed initially relaxed (zero initial conditions).

[Figure 5.5.1 (a) RC circuit; (b) rectangular input signal.]

Transforming the loop equation of the circuit yields an output transform containing the factor \exp[-as] - \exp[-bs], corresponding to the two edges of the input pulse.
If

y(t) = \int_{0^-}^{t} x(\tau)\,d\tau

then

\mathcal{L}\{y(t)\} = \int_{0^-}^{\infty}\left[\int_{0^-}^{t} x(\tau)\,d\tau\right]\exp[-st]\,dt

Integrating the right-hand side by parts, we have

\mathcal{L}\{y(t)\} = \left[-\frac{\exp[-st]}{s}\,y(t)\right]_{0^-}^{\infty} + \frac{1}{s}\int_{0^-}^{\infty} x(t)\exp[-st]\,dt

The first term on the right-hand side evaluates to zero at both limits (at the upper limit by assumption and at the lower limit because y(0^-) = 0), so that

\mathcal{L}\left\{\int_{0^-}^{t} x(\tau)\,d\tau\right\} = \frac{X(s)}{s}
Thus, integration in the time domain is equivalent to division by s in the s domain. Integration and differentiation in the time domain are two of the most commonly used properties of the Laplace transform. They can be used to convert integration and differentiation operations into division or multiplication by s, respectively, which are algebraic operations and, hence, much easier to perform.
-t\,x(t) \leftrightarrow \frac{dX(s)}{ds}   (5.5.8)

Since differentiating X(s) does not add new poles (it may increase the order of some existing poles), the ROC associated with -t\,x(t) is the same as the ROC associated with x(t).
By repeated application of Equation (5.5.8), it follows that

(-t)^n x(t) \leftrightarrow \frac{d^n X(s)}{ds^n}   (5.5.9)
Sec. 5.5 Properties of the Unilateral Laplace Transform
Example 5.5.9
The Laplace transform of the unit ramp function r(t) = t\,u(t) can be obtained using Equation (5.5.8) as

R(s) = -\frac{d}{ds}\mathcal{L}\{u(t)\} = -\frac{d}{ds}\frac{1}{s} = \frac{1}{s^2}

Applying Equation (5.5.9), we have, in general,

t^n u(t) \leftrightarrow \frac{n!}{s^{n+1}}
5.5.8 Modulation
If

x(t) \leftrightarrow X(s)

then for any real number \omega_0,

x(t)\cos\omega_0 t \leftrightarrow \frac{1}{2}[X(s - j\omega_0) + X(s + j\omega_0)]

x(t)\sin\omega_0 t \leftrightarrow \frac{1}{2j}[X(s - j\omega_0) - X(s + j\omega_0)]   (5.5.12)
Example 5.5.10
The Laplace transform of (\cos\omega_0 t)u(t) is obtained from the Laplace transform of u(t) using the modulation property as follows:

\mathcal{L}\{(\cos\omega_0 t)u(t)\} = \frac{1}{2}\left[\frac{1}{s - j\omega_0} + \frac{1}{s + j\omega_0}\right] = \frac{s}{s^2 + \omega_0^2}

Similarly, the Laplace transform of \exp[-at]\sin\omega_0 t\,u(t) is obtained from the Laplace transform of \exp[-at]u(t) and the modulation property as

\mathcal{L}\{\exp[-at](\sin\omega_0 t)u(t)\} = \frac{1}{2j}\left[\frac{1}{s + a - j\omega_0} - \frac{1}{s + a + j\omega_0}\right] = \frac{\omega_0}{(s+a)^2 + \omega_0^2}
5.5.9 Convolution
This property is one of the most widely used properties in the study and analysis of linear systems. Its use reduces the complexity of evaluating the convolution integral to simple multiplication. The convolution property states that if

x(t) \leftrightarrow X(s)
h(t) \leftrightarrow H(s)

then

x(t) * h(t) \leftrightarrow X(s)H(s)   (5.5.13)

where the convolution of x(t) and h(t) is

x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)h(t - \tau)\,d\tau

Since both h(t) and x(t) are causal signals, the convolution in this case can be reduced to

x(t) * h(t) = \int_{0^-}^{t} x(\tau)h(t - \tau)\,d\tau

so that the transform of the convolution is the product of the unilateral transforms X(s) and H(s). In general, the ROC of X(s)H(s) includes the intersection of the ROCs of X(s) and H(s) and can be larger if pole-zero cancellation occurs in the process of multiplying the two transforms.
Example 5.5.11
The integration property can be proved using the convolution property, since

\int_{0^-}^{t} x(\tau)\,d\tau = x(t) * u(t)

Therefore, the transform of the integral of x(t) is the product of X(s) and the transform of u(t), which is 1/s.
Example 5.5.12
Let x(t) be the rectangular pulse \mathrm{rect}((t - a)/2a), centered at t = a and with width 2a. The convolution of this pulse with itself can be obtained easily with the help of the convolution property.
From Example 5.5.2, the transform of x(t) is

X(s) = \frac{1 - \exp[-2as]}{s}

so that

Y(s) = X^2(s) = \frac{[1 - \exp[-2as]]^2}{s^2} = \frac{1}{s^2} - \frac{2\exp[-2as]}{s^2} + \frac{\exp[-4as]}{s^2}

Taking the inverse Laplace transform of both sides and recognizing that 1/s^2 is the transform of t\,u(t) yields

y(t) = x(t) * x(t) = t\,u(t) - 2(t - 2a)u(t - 2a) + (t - 4a)u(t - 4a)

which is a triangular pulse of width 4a.
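A quick numerical check of this result (our own sketch, with the assumed value a = 1, so the pulse is 1 on [0, 2]): the self-convolution should be a triangle of height 2 peaking at t = 2.

```python
# Numerical sketch of Example 5.5.12 with a = 1: the self-convolution of the
# unit pulse on [0, 2] is a triangle of height 2 peaking at t = 2.
import numpy as np

dt = 0.001
t = np.arange(0.0, 6.0, dt)
x = ((t >= 0.0) & (t <= 2.0)).astype(float)

y = np.convolve(x, x)[:t.size] * dt      # Riemann-sum approximation of x * x
peak, t_peak = y.max(), t[y.argmax()]
print(peak, t_peak)
```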
In Equation (5.5.13), H(s) is called the transfer function of the system whose impulse response is h(t). This function is the s-domain representation of the LTI system and describes the "transfer" from the input in the s domain, X(s), to the output in the s domain, Y(s), assuming no initial energy in the system at t = 0^-. Dividing both sides of Equation (5.5.13) by X(s), provided that X(s) \neq 0, gives

H(s) = \frac{Y(s)}{X(s)}   (5.5.14)

That is, the transfer function is equal to the ratio of the transform Y(s) of the output to the transform X(s) of the input. Equation (5.5.14) allows us to determine the impulse response of the system from a knowledge of the response y(t) to any nonzero input x(t).
Example 5.5.13
Suppose that the input x(t) = \exp[-2t]u(t) is applied to a relaxed (zero initial conditions) LTI system. The output of the system is

y(t) = \frac{2}{3}(\exp[-t] + \exp[-2t] - \exp[-3t])u(t)

Then

X(s) = \frac{1}{s+2}

and

Y(s) = \frac{2}{3}\left[\frac{1}{s+1} + \frac{1}{s+2} - \frac{1}{s+3}\right]

Using Equation (5.5.14), we conclude that the transfer function H(s) of the system is

H(s) = \frac{Y(s)}{X(s)} = \frac{2(s^2 + 6s + 7)}{3(s+1)(s+3)} = \frac{2}{3}\left[1 + \frac{1}{s+1} + \frac{1}{s+3}\right]

from which it follows that

h(t) = \frac{2}{3}\delta(t) + \frac{2}{3}[\exp[-t] + \exp[-3t]]u(t)
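The algebra in this example is easy to reproduce symbolically; the sketch below (ours) recomputes H(s) = Y(s)/X(s) and its partial-fraction form.

```python
# Sketch: recomputing H(s) = Y(s)/X(s) for Example 5.5.13 with sympy.
import sympy as sp

s = sp.symbols('s')
X = 1/(s + 2)
Y = sp.Rational(2, 3)*(1/(s + 1) + 1/(s + 2) - 1/(s + 3))

H = sp.simplify(Y/X)
H_pf = sp.apart(H, s)                    # partial-fraction form of H(s)
print(H_pf)
```

The expansion reproduces the constant term 2/3 (the delta in h(t)) and the two simple poles at -1 and -3.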
Example 5.5.14
Consider the LTI system described by the differential equation

y'''(t) + 2y''(t) - y'(t) + 5y(t) = 3x'(t) + x(t)

Assuming that the system was initially relaxed, and taking the Laplace transform of both sides, we obtain

H(s) = \frac{3s + 1}{s^3 + 2s^2 - s + 5}
Equation (5.5.15) implies that the behavior of x(t) for small t is determined by the behavior of X(s) for large s. This is another aspect of the inverse relationship between time- and frequency-domain variables. To establish this result, we expand x(t) in a Maclaurin series (a Taylor series about t = 0^+) to obtain

x(t) = \left[x(0^+) + x'(0^+)t + \cdots + x^{(n)}(0^+)\frac{t^n}{n!} + \cdots\right]u(t)

where x^{(n)}(0^+) denotes the nth derivative of x(t) evaluated at t = 0^+. Taking the Laplace transform of both sides yields

X(s) = \frac{x(0^+)}{s} + \frac{x'(0^+)}{s^2} + \cdots + \frac{x^{(n)}(0^+)}{s^{n+1}} + \cdots   (5.5.16)

This more general form of the initial-value theorem is simplified if x^{(n)}(0^+) = 0 for n < N. In that case,

x^{(N)}(0^+) = \lim_{s \to \infty} s^{N+1}X(s)   (5.5.17)
This property is useful, since it allows us to compute the initial value of the signal x(t) and its derivatives directly from the Laplace transform X(s) without having to find the inverse x(t). Note that the right-hand side of Equation (5.5.15) can exist without the existence of x(0^+). Therefore, the initial-value theorem should be applied only when x(0^+) exists. Note also that the initial-value theorem produces x(0^+), not x(0^-).
Example 5.5.15
The initial value of the signal whose Laplace transform is given by

X(s) = \frac{1}{(s+a)(s+b)}, \qquad a \neq b

is

x(0^+) = \lim_{s \to \infty}\frac{s}{(s+a)(s+b)} = 0

The result can be verified by determining x(t) first and then substituting t = 0^+. For this example, the inverse Laplace transform of X(s) is

x(t) = \frac{1}{b-a}(\exp[-at] - \exp[-bt])u(t)

so that x(0^+) = 0, as expected.
The final-value theorem is useful in some applications, such as control theory, where we may need to find the final value (steady-state value) of the output of a system without solving for the time-domain function. Equation (5.5.18) can be proved using

\int_{0^-}^{\infty} x'(t)\exp[-st]\,dt = sX(s) - x(0^-)   (5.5.19)

which, after letting s \to 0 and simplifying, results in Equation (5.5.18). One must be careful in using the final-value theorem, since \lim_{s \to 0} sX(s) can exist even though x(t) does not have a
limit as t \to \infty. Hence, it is important to know that \lim_{t \to \infty} x(t) exists before applying the final-value theorem. For example, if

X(s) = \frac{s}{s^2 + \omega^2}

then

\lim_{s \to 0} sX(s) = \lim_{s \to 0}\frac{s^2}{s^2 + \omega^2} = 0

But x(t) = \cos\omega t, which does not have a limit as t \to \infty (\cos\omega t oscillates between +1 and -1). Why do we have a discrepancy? To use the final-value theorem, we need the point s = 0 to be in the ROC of sX(s). (Otherwise we cannot substitute s = 0 in sX(s).) We have seen earlier that for rational Laplace transforms, the ROC does not contain any poles. Therefore, to use the final-value theorem, all the poles of sX(s) must be in the left half of the s plane. In our example, sX(s) has two poles on the imaginary axis.
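This cautionary case can be reproduced in a few lines (our sketch, taking \omega = 1): the limit of sX(s) exists, yet the poles of sX(s) sit on the imaginary axis, so the theorem does not apply.

```python
# Sketch of the cautionary case: X(s) = s/(s^2 + 1) (w = 1).  The limit of
# s X(s) as s -> 0 exists, yet x(t) = cos t has no final value, because
# s X(s) has poles on the imaginary axis.
import sympy as sp

s = sp.symbols('s')
X = s/(s**2 + 1)
fv_candidate = sp.limit(s*X, s, 0)               # exists and equals 0
poles = sp.solve(sp.denom(sp.cancel(s*X)), s)    # poles of s X(s): +/- j
print(fv_candidate, poles)
```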
Example 5.5.16
The input x(t) = Au(t) is applied to an automatic position-control system whose transfer function is

H(s) = \frac{c}{s(s+b) + c}

The final value of the output y(t) is obtained as

\lim_{t \to \infty} y(t) = \lim_{s \to 0} sY(s) = \lim_{s \to 0} s\cdot\frac{c}{s(s+b)+c}\cdot\frac{A}{s} = A

assuming that the zeros of s^2 + bs + c are in the left half plane. Thus, after a sufficiently long time, the output follows (tracks) the input x(t).
Example 5.5.17
Suppose we are interested in the value of the integral

I = \int_{0}^{\infty} t\exp[-at]\,dt

Define

y(t) = \int_{0}^{t}\tau\exp[-a\tau]\,d\tau = \int_{0}^{t} x(\tau)\,d\tau

with x(t) = t\exp[-at]u(t). Note that the final value of y(t) is the quantity of interest; that is,

I = \lim_{t \to \infty} y(t) = \lim_{s \to 0} sY(s) = \lim_{s \to 0} s\cdot\frac{X(s)}{s} = X(0)
TABLE 5.2  Some Selected Properties of the Laplace Transform

Modulation: x(t)\sin\omega_0 t \leftrightarrow \frac{1}{2j}[X(s - j\omega_0) - X(s + j\omega_0)]   (5.5.12)
9. Convolution: x(t) * h(t) \leftrightarrow X(s)H(s)   (5.5.13)
10. Initial value: x(0^+) = \lim_{s \to \infty} sX(s)   (5.5.15)
Since

X(s) = \frac{1}{(s+a)^2}

we obtain

I = \int_{0}^{\infty} t\exp[-at]\,dt = X(0) = \frac{1}{a^2}
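The value of the integral can also be checked by direct evaluation (our sketch):

```python
# Sketch: evaluating the integral of Example 5.5.17 directly.
import sympy as sp

t, a = sp.symbols('t a', positive=True)
I = sp.integrate(t*sp.exp(-a*t), (t, 0, sp.oo))  # should equal 1/a**2
print(I)
```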
Table 5.2 summarizes the properties of the Laplace transform. These properties, along with the transform pairs in Table 5.1, can be used to derive other transform pairs.
We saw in Section 5.2 that with s = \sigma + j\omega such that Re[s] is inside the ROC, the Laplace transform of x(t) can be interpreted as the Fourier transform of the exponentially weighted signal x(t)\exp[-\sigma t]; that is,

X(\sigma + j\omega) = \mathcal{F}\{x(t)\exp[-\sigma t]\}
Sec. 5.6 The Inverse Laplace Transform
Example 5.6.1
To find the inverse Laplace transform of

X(s) = \frac{N(s)}{s^3 + 3s^2 - 4s}

we factor the polynomial D(s) = s^3 + 3s^2 - 4s = s(s+4)(s-1) and use the partial-fraction form

X(s) = \frac{A_1}{s} + \frac{A_2}{s+4} + \frac{A_3}{s-1}

Using Equation (D.2), we find that the coefficients A_i, i = 1, 2, 3, are

A_1 = -\frac{1}{4}, \qquad A_2 = \frac{7}{20}, \qquad A_3 = \frac{3}{5}
Example 5.6.2
In this example, we consider the case where we have repeated factors. Suppose the Laplace transform is given by

X(s) = \frac{s^2 - 3s}{s^3 - 4s^2 + 5s - 2}

The denominator D(s) = s^3 - 4s^2 + 5s - 2 can be factored as

D(s) = (s-2)(s-1)^2

Since we have a repeated factor of order 2, the corresponding partial-fraction form is

X(s) = \frac{B}{s-2} + \frac{A_2}{(s-1)^2} + \frac{A_1}{s-1}

The coefficient B can be found using Equation (D.2); we obtain

B = \frac{s^2 - 3s}{(s-1)^2}\Big|_{s=2} = -2

The coefficients A_i, i = 1, 2, are found using Equations (D.3) and (D.4); we get

A_2 = \frac{s^2 - 3s}{s-2}\Big|_{s=1} = 2

and

A_1 = \frac{d}{ds}\left[\frac{s^2 - 3s}{s-2}\right]_{s=1} = \frac{(2s-3)(s-2) - (s^2 - 3s)}{(s-2)^2}\Big|_{s=1} = 3

so that

X(s) = \frac{-2}{s-2} + \frac{3}{s-1} + \frac{2}{(s-1)^2}
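The repeated-factor expansion can be confirmed with sympy's `apart` (our own check):

```python
# Sketch: checking the repeated-factor expansion of Example 5.6.2 with apart().
import sympy as sp

s = sp.symbols('s')
X = (s**2 - 3*s)/(s**3 - 4*s**2 + 5*s - 2)
X_pf = sp.apart(X, s)          # partial fractions, including the (s-1)^2 term
print(X_pf)
```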
Example 5.6.3
In this example, we treat the case of complex conjugate poles (irreducible second-degree factors). Let

X(s) = \frac{N(s)}{s^2 + 4s + 13}

Since we cannot factor the denominator, we complete the square:

s^2 + 4s + 13 = (s+2)^2 + 3^2

so that X(s) can be written as a combination of the two terms

\frac{s+2}{(s+2)^2 + 3^2} \qquad \text{and} \qquad \frac{3}{(s+2)^2 + 3^2}

By using the shifting property of the transform, or, alternatively, by using entries 12 and 13 in Table 5.1, we find the inverse Laplace transform to be the corresponding combination of \exp[-2t]\cos 3t\,u(t) and \exp[-2t]\sin 3t\,u(t).
Example 5.6.4
As an example of repeated complex conjugate poles, consider the rational function

X(s) = \frac{s^3 - 3s^2 + 7s - 3}{(s^2 + 1)^2}

Dividing the numerator by s^2 + 1 gives s^3 - 3s^2 + 7s - 3 = (s - 3)(s^2 + 1) + 6s, and therefore,

X(s) = \frac{s}{s^2 + 1} - \frac{3}{s^2 + 1} + \frac{6s}{(s^2 + 1)^2}

and the inverse Laplace transform can be determined from Table 5.1 to be

x(t) = [\cos t - 3\sin t + 3t\sin t]u(t)
\left(\sum_{i=0}^{N} a_i D^i\right)y(t) = \left(\sum_{i=0}^{M} b_i D^i\right)x(t), \qquad a_N = 1   (5.7.1)

Assuming that the system is initially relaxed, and taking the Laplace transform of both sides, we obtain

\left(s^N + \sum_{i=0}^{N-1} a_i s^i\right)Y(s) = \left(\sum_{i=0}^{M} b_i s^i\right)X(s)   (5.7.2)

so that

H(s) = \frac{\sum_{i=0}^{M} b_i s^i}{s^N + \sum_{i=0}^{N-1} a_i s^i}   (5.7.3)

Assuming that N = M, we can express Equation (5.7.2) as

Y(s) = b_N X(s) + \frac{1}{s}[b_{N-1}X(s) - a_{N-1}Y(s)] + \cdots + \frac{1}{s^N}[b_0 X(s) - a_0 Y(s)]   (5.7.4)
Thus, Y(s) can be generated by adding all the components on the right-hand side of Equation (5.7.4). Figure 5.7.1 demonstrates how H(s) is simulated using this technique. Notice that the figure is similar to Figure 2.5.4, except that each integrator is replaced by its transfer function 1/s.
The transfer function in Equation (5.7.3) can also be realized in the second canonical form if we express Equation (5.7.2) as
Y(s) = \frac{\sum_{i=0}^{M} b_i s^i}{s^N + \sum_{i=0}^{N-1} a_i s^i}\,X(s) = \left(\sum_{i=0}^{M} b_i s^i\right)V(s)   (5.7.5)
[Figure 5.7.1 Simulation diagram using the first canonical form.]
where

V(s) = \frac{1}{s^N + \sum_{i=0}^{N-1} a_i s^i}\,X(s)   (5.7.6)

or

s^N V(s) = X(s) - \sum_{i=0}^{N-1} a_i s^i V(s)   (5.7.7)
Therefore, we can generate Y(s) in two steps: First, we generate V(s) from Equation (5.7.7), and then we use Equation (5.7.5) to generate Y(s) from V(s). The result is shown in Figure 5.7.2. Again, this figure is similar to Figure 2.5.5, except that each integrator is replaced by its transfer function 1/s.
Example 5.7.1
The two canonical realization forms for the system with the transfer function

H(s) = \frac{s^2 - 3s + 2}{s^3 + 6s^2 + 11s + 6}

are shown in Figures 5.7.3 and 5.7.4.
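A state-space realization of this H(s), corresponding to a canonical simulation diagram (up to the ordering of the states), can be generated with scipy; this sketch is our illustration, not the book's.

```python
# Sketch: scipy produces a controllable-canonical state-space realization of
# H(s) = (s^2 - 3s + 2)/(s^3 + 6s^2 + 11s + 6) directly.
import numpy as np
from scipy import signal

num = [1, -3, 2]                 # s^2 - 3s + 2
den = [1, 6, 11, 6]              # s^3 + 6s^2 + 11s + 6
A, B, C, D = signal.tf2ss(num, den)
poles = np.sort(np.linalg.eigvals(A).real)   # system poles: -3, -2, -1
print(poles)
```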
As we saw earlier, the Laplace transform is a useful tool for computing the system transfer function if the system is described by its differential equation or if the output is expressed explicitly in terms of the input. The situation changes considerably in cases where a large number of components or elements are interconnected to form the complete system. In such cases, it is convenient to represent the system by suitably interconnected subsystems, each of which can be separately and easily analyzed. Three of the most common such subsystems involve series (cascade), parallel, and feedback interconnections.
In the case of cascade interconnections, as shown in Figure 5.7.5,

Y_1(s) = H_1(s)X(s)

and

Y_2(s) = H_2(s)Y_1(s) = [H_1(s)H_2(s)]X(s)

which shows that the combined transfer function is given by

H(s) = H_1(s)H_2(s)   (5.7.8)

We note that Equation (5.7.8) is valid only if there is no initial energy in either system. It is also implied that connecting the second system to the first does not affect the output of the latter. In short, the transfer function of the first subsystem, H_1(s), is computed under the assumption that the second subsystem, with transfer function H_2(s), is not connected. In other words, the input/output relationship of the first subsystem must remain unchanged after the connection is made.
[Figure 5.7.2 Simulation diagram using the second canonical form.]
[Figure 5.7.3 Simulation diagram using the first canonical form for Example 5.7.1.]
[Figure 5.7.4 Simulation diagram using the second canonical form for Example 5.7.1.]
[Figure 5.7.6 Parallel interconnection of two subsystems.]
Using the convolution property, the impulse response of the overall system is

h(t) = h_1(t) * h_2(t) * \cdots * h_N(t)   (5.7.10)
If two subsystems are connected in parallel, as shown in Figure 5.7.6, and each subsystem has no initial energy, then the output is

Y(s) = Y_1(s) + Y_2(s) = H_1(s)X(s) + H_2(s)X(s) = [H_1(s) + H_2(s)]X(s)

and the overall transfer function is

H(s) = H_1(s) + H_2(s)   (5.7.11)

From the linearity of the Laplace transform, the impulse response of the overall system is

h(t) = h_1(t) + h_2(t) + \cdots + h_N(t)   (5.7.13)

These two results are consistent with those obtained in Chapter 2 for the same interconnections.
Example 5.7.2
The transfer function of the system described in Example 5.7.1 also can be written as

H(s) = \frac{s-1}{s+1}\cdot\frac{s-2}{s+2}\cdot\frac{1}{s+3}

This system can be realized as a cascade of three subsystems, as shown in Figure 5.7.7. Each subsystem is composed of a pole-zero combination. The same system can be realized in parallel, too. This can be done by expanding H(s) using the method of partial fractions as follows:

H(s) = \frac{3}{s+1} - \frac{12}{s+2} + \frac{10}{s+3}

A parallel interconnection is shown in Figure 5.7.8.
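The parallel realization rests on the partial-fraction expansion above; sympy's `apart` reproduces the coefficients 3, -12, and 10 (our own check):

```python
# Sketch: checking the expansion used for the parallel form of Example 5.7.2.
import sympy as sp

s = sp.symbols('s')
H = (s - 1)*(s - 2)/((s + 1)*(s + 2)*(s + 3))
H_pf = sp.apart(H, s)
print(H_pf)
```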
The connection in Figure 5.7.9 is called a positive feedback system. The output of the first system, H_1(s), is fed back to the summer through the system H_2(s); hence the name "feedback connection." Note that if the feedback loop is disconnected, the transfer function from X(s) to Y(s) is H_1(s), and hence, H_1(s) is called the open-loop transfer function. The system with transfer function H_2(s) is called the feedback system. The whole system is called a closed-loop system.

[Figure 5.7.9 Feedback connection.]
We assume that each system has no initial energy and that the feedback system does not load the open-loop system. Let e(t) be the input signal to the system with transfer function H_1(s). Then

Y(s) = E(s)H_1(s)
E(s) = X(s) + H_2(s)Y(s)

so that

H(s) = \frac{Y(s)}{X(s)} = \frac{H_1(s)}{1 - H_1(s)H_2(s)}   (5.7.14)

Thus, the closed-loop transfer function is equal to the open-loop transfer function divided by 1 minus the product of the transfer functions of the open-loop and feedback systems. If the adder in Figure 5.7.9 is changed to a subtractor, the system is called a negative feedback system, and the closed-loop transfer function changes to

H(s) = \frac{H_1(s)}{1 + H_1(s)H_2(s)}   (5.7.15)
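Equation (5.7.14) can be derived by machine as well; this sketch (ours) eliminates E(s) from the two loop equations symbolically.

```python
# Sketch: deriving the closed-loop transfer function (5.7.14) by eliminating
# E(s) from Y = E*H1 and E = X + H2*Y (positive feedback).
import sympy as sp

H1, H2, X, Y, E = sp.symbols('H1 H2 X Y E')
sol = sp.solve([sp.Eq(Y, E*H1), sp.Eq(E, X + H2*Y)], [Y, E], dict=True)[0]
H_closed = sp.simplify(sol[Y]/X)     # should be H1/(1 - H1*H2)
print(H_closed)
```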
Example 5.8.1
Consider the second-order, linear, constant-coefficient differential equation

y''(t) + 5y'(t) + 6y(t) = \exp[-t]u(t), \qquad y'(0^-) = 1 \text{ and } y(0^-) = 2

Taking the Laplace transform of both sides results in

Y(s) = \frac{2s^2 + 13s + 12}{(s+1)(s^2 + 5s + 6)} = \frac{1}{2(s+1)} + \frac{6}{s+2} - \frac{9}{2(s+3)}

so that

y(t) = \left[\frac{1}{2}\exp[-t] + 6\exp[-2t] - \frac{9}{2}\exp[-3t]\right]u(t)

Higher-order differential equations can be solved using the same procedure.
That is, an energized inductor (an inductor with a nonzero initial current) at t = 0^- is equivalent to an unenergized inductor at t = 0^- in series with an impulsive voltage source of strength L i_L(0^-). This impulsive source is called an initial-condition generator. Alternatively, Equation (5.8.2) can be written as

I_L(s) = \frac{1}{sL}V_L(s) + \frac{i_L(0^-)}{s}   (5.8.3)

That is, an energized inductor at t = 0^- is equivalent to an unenergized inductor at t = 0^- in parallel with a step-function current source. The height of the step function is i_L(0^-).
Similarly, for a capacitor,

V_C(s) = \frac{1}{sC}I_C(s) + \frac{v_C(0^-)}{s}   (5.8.5)
Sec. 5.8 Applications of the Laplace Transform
The current law states that at any node of an equivalent circuit, the algebraic sum of the currents in the s domain is zero; i.e.,

\sum_k I_k(s) = 0   (5.8.6)

The voltage law states that around any loop in an equivalent circuit, the algebraic sum of the voltages in the s domain is zero; i.e.,

\sum_k V_k(s) = 0   (5.8.7)

Caution must be exercised when assigning the polarity of the initial-condition generators.
Example 5.8.2
Consider the circuit shown in Figure 5.8.1(a), with i_L(0^-) = -2, v_C(0^-) = 2, and x(t) = u(t), so that X(s) = 1/s. The equivalent s-domain circuit is shown in Figure 5.8.1(b).

[Figure 5.8.1 (a) Circuit for Example 5.8.2; (b) equivalent s-domain circuit.]

Writing the node equation at node 1 and solving in the s domain yields the response Y(s).
The following example demonstrates how to design the controller H_c(s) to achieve the tracking effect.
Example 5.8.3
Suppose that the LTI system we have to control has the transfer function H(s) = N(s)/D(s), and let the controller have the transfer function H_c(s) = N_c(s)/D_c(s). Let the input be r(t) = Au(t) and the disturbance be w(t) = Bu(t), where A and B are constants. Because of linearity, we can divide the problem into two simpler problems, one with input r(t) and the other with input w(t). That is, the output y(t) is expressed as the sum of two components. The first component is due to r(t) when w(t) = 0 and is labeled y_1(t). It can be easily verified that

Y_1(s) = \frac{H_c(s)H(s)}{1 + H_c(s)H(s)}\,R(s)

where R(s) is the Laplace transform of r(t). The second component is due to w(t) when r(t) = 0 and has the Laplace transform

Y_2(s) = \frac{H(s)}{1 + H_c(s)H(s)}\,W(s)

where W(s) is the Laplace transform of the disturbance w(t). The complete output has the Laplace transform

Y(s) = Y_1(s) + Y_2(s)

Applying the final-value theorem with R(s) = A/s and W(s) = B/s gives

\lim_{t \to \infty} y(t) = \frac{N(s)N_c(s)A + N(s)D_c(s)B}{D(s)D_c(s) + N(s)N_c(s)}\Big|_{s=0}   (5.8.10)

If the controller contains an integrator, so that D_c(0) = 0, this limit reduces to A; that is, the output tracks the input.
Example 5.8.4
Consider the control system shown in Figure 5.8.3. This system represents an automatic position-control system that can be used in a tracking antenna or in an antiaircraft gun mount. The input x(t) is the desired angular position of the object to be tracked, and the output is the position of the antenna.
The first subsystem is an amplifier with transfer function H_1(s) = 8, and the second subsystem is a motor with transfer function H_2(s) = 1/[s(s + \alpha)], where 0 < \alpha < \sqrt{32}. Let us investigate the step response of the system as the parameter \alpha changes. The output Y(s) is

Y(s) = \frac{H_1(s)H_2(s)}{1 + H_1(s)H_2(s)}\,X(s) = \frac{8}{s(s^2 + \alpha s + 8)} = \frac{1}{s} - \frac{s + \alpha}{s^2 + \alpha s + 8}

The restriction 0 < \alpha < \sqrt{32} is chosen to ensure that the roots of the polynomial s^2 + \alpha s + 8 are complex numbers and lie in the left half plane. The reason for this will become clear in Section 5.10.
The step response of this system is obtained by taking the inverse Laplace transform of Y(s). The step response y(t) for two values of \alpha, namely, \alpha = 2 and \alpha = 3, is shown in Figure 5.8.4. Note that the response is oscillatory, with overshoots of 30% and 14%, respectively. The time required for the response to rise from 10% to 90% of its final value is called the rise time. The first system has a rise time of 0.48 s, and the second system has a rise time of 0.60 s. Systems with longer rise times are inferior (sluggish) to those with shorter rise times. Reducing the rise time increases the overshoot, however, and high overshoots may not be acceptable in some applications.
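The overshoot figures quoted in the text can be reproduced by simulation; the following sketch (ours) steps the closed-loop system 8/(s^2 + \alpha s + 8) for \alpha = 2 and \alpha = 3.

```python
# Simulation sketch of Example 5.8.4: step response of 8/(s^2 + a s + 8) and
# its percent overshoot for a = 2 and a = 3 (the text reports about 30% and 14%).
import numpy as np
from scipy import signal

def percent_overshoot(a):
    sys = signal.TransferFunction([8.0], [1.0, a, 8.0])
    t = np.linspace(0.0, 10.0, 5001)
    _, y = signal.step(sys, T=t)
    return 100.0*(y.max() - 1.0)     # the final value of the step response is 1

os2, os3 = percent_overshoot(2.0), percent_overshoot(3.0)
print(os2, os3)
```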
Sec. 5.9 State Equations and the Laplace Transform
[Figure 5.8.4 Step responses of the system for \alpha = 2 and \alpha = 3.]
y(t) = \mathbf{c}\mathbf{v}(t) + dx(t)   (5.9.2)

(See Equation (2.6.13) with t_0 = 0^-.) The integral on the right side of Equation (5.9.5) represents the convolution of the signals \exp[\mathbf{A}t] and \mathbf{b}x(t). Thus, the Laplace transformation of Equation (5.9.5) yields

\mathbf{V}(s) = \mathcal{L}\{\exp[\mathbf{A}t]\}\mathbf{v}(0^-) + \mathcal{L}\{\exp[\mathbf{A}t]\}\mathbf{b}X(s)   (5.9.6)

A comparison of Equations (5.9.3) and (5.9.6) shows that

\mathcal{L}\{\exp[\mathbf{A}t]\} = (s\mathbf{I} - \mathbf{A})^{-1} = \Phi(s)   (5.9.7)

where \Phi(s) represents the Laplace transform of the state-transition matrix \exp[\mathbf{A}t]. \Phi(s) is usually referred to as the resolvent matrix.
Equation (5.9.7) gives us a convenient alternative method for determining \exp[\mathbf{A}t]: we first form the matrix s\mathbf{I} - \mathbf{A} and then take the inverse Laplace transform of (s\mathbf{I} - \mathbf{A})^{-1}.
With zero initial conditions, Equation (5.9.4) becomes

Y(s) = [\mathbf{c}(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{b} + d]X(s)   (5.9.8)

and hence, the transfer function of the system can be written as

H(s) = \mathbf{c}(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{b} + d = \mathbf{c}\Phi(s)\mathbf{b} + d
Example 5.9.1
Consider the system described by

\mathbf{v}'(t) = \begin{bmatrix} -3 & 4 \\ -2 & 3 \end{bmatrix}\mathbf{v}(t) + \begin{bmatrix} 1 \\ 3 \end{bmatrix}x(t)

y(t) = [-1 \;\; -1]\mathbf{v}(t) + 2x(t)

with

\mathbf{v}(0^-) = \begin{bmatrix} -1 \\ 3 \end{bmatrix}

The resolvent matrix of this system is

\Phi(s) = \begin{bmatrix} s+3 & -4 \\ 2 & s-3 \end{bmatrix}^{-1}

Using Appendix C, we obtain

\Phi(s) = \frac{1}{(s+3)(s-3)+8}\begin{bmatrix} s-3 & 4 \\ -2 & s+3 \end{bmatrix} = \begin{bmatrix} \dfrac{s-3}{(s+1)(s-1)} & \dfrac{4}{(s+1)(s-1)} \\ \dfrac{-2}{(s+1)(s-1)} & \dfrac{s+3}{(s+1)(s-1)} \end{bmatrix}

The transfer function is

H(s) = \mathbf{c}\Phi(s)\mathbf{b} + d = \frac{2s^2 - 4s - 18}{(s+1)(s-1)}

Taking the inverse Laplace transform, we obtain the impulse response

h(t) = 2[\delta(t) + 3\exp[-t]u(t) - 5\exp[t]u(t)]

The zero-input response of the system is

Y_{zi}(s) = \mathbf{c}\Phi(s)\mathbf{v}(0^-) = \frac{12}{s+1} - \frac{14}{s-1}

so that y_{zi}(t) = [12\exp[-t] - 14\exp[t]]u(t). The step response of this system is obtained by substituting X(s) = 1/s, so that

Y(s) = \mathbf{c}\Phi(s)\mathbf{v}(0^-) + \frac{H(s)}{s} = -\frac{30s + 18}{s(s+1)(s-1)} = \frac{18}{s} + \frac{6}{s+1} - \frac{24}{s-1}

Taking the inverse Laplace transform of both sides yields

y(t) = [18 + 6\exp[-t] - 24\exp[t]]u(t)
Example 5.9.2
Let us find the state-transition matrix of the system in Example 5.9.1. The resolvent matrix is

\Phi(s) = \begin{bmatrix} \dfrac{s-3}{(s+1)(s-1)} & \dfrac{4}{(s+1)(s-1)} \\ \dfrac{-2}{(s+1)(s-1)} & \dfrac{s+3}{(s+1)(s-1)} \end{bmatrix}

The various elements of \phi(t) are obtained by taking the inverse Laplace transform of each entry in the matrix \Phi(s). Doing so, we obtain

\phi(t) = \begin{bmatrix} 2\exp[-t] - \exp[t] & -2\exp[-t] + 2\exp[t] \\ \exp[-t] - \exp[t] & -\exp[-t] + 2\exp[t] \end{bmatrix}u(t)
simple poles. If one of the poles corresponds to a repeated factor of the form (s - s_k)^m, then it is a multiple-order pole with order m. The impulse response of the system, h(t), is obtained by taking the inverse Laplace transform of Equation (5.10.1). From entry 6 in Table 5.1, the kth pole contributes the term h_k(t) = A_k\exp[s_k t] to h(t). Thus, the behavior of the system depends on the location of the pole in the s plane. A pole can be in the left half of the s plane, on the imaginary axis, or in the right half of the s plane. Also, it may be a simple or a multiple-order pole. The following is a discussion of the effects of the location and order of the poles on the stability of LTI systems.
1. Simple Poles in the Left Half Plane. In this case, the pole has the form

s_k = \sigma_k \pm j\omega_k, \qquad \sigma_k < 0

and the impulse-response component of the system, h_k(t), corresponding to this pole is

h_k(t) = A_k\exp[(\sigma_k + j\omega_k)t] + A_k^*\exp[(\sigma_k - j\omega_k)t]

= |A_k|\exp[\sigma_k t](\exp[j(\omega_k t + \varphi_k)] + \exp[-j(\omega_k t + \varphi_k)])

= 2|A_k|\exp[\sigma_k t]\cos(\omega_k t + \varphi_k), \qquad \sigma_k < 0   (5.10.2)

where

A_k = |A_k|\exp[j\varphi_k]

As t increases, this component of the impulse response decays to zero and thus results in a stable system. Therefore, systems with only simple poles in the left half plane are stable.
2. Simple Poles on the Imaginary Axis. This case can be considered a special case of Equation (5.10.2) with \sigma_k = 0. The kth component in the impulse response is then

h_k(t) = 2|A_k|\cos(\omega_k t + \varphi_k)

Note that there is no exponential damping; that is, the response does not decay as time progresses. It may appear that the response to a bounded input is also bounded. This is not true if the system is excited by a cosine function with the same frequency \omega_k. In that case, a multiple-order pole of the form

\frac{B_k}{(s^2 + \omega_k^2)^2}

appears in the Laplace transform of the output. This term gives rise to a time response

\frac{B_k}{2\omega_k}\,t\sin\omega_k t

that increases without bound as t increases. Physically, \omega_k is the natural frequency of the system. If the input frequency matches the natural frequency, the system resonates, and the output grows without bound. An example is the lossless (nonresistive) LC circuit. A system with poles on the imaginary axis is sometimes called a marginally stable system.
3. Simple Poles in the Right Half Plane. If the system function has poles in the right half plane, then the system response contains components of the form 2|A_k|\exp[\sigma_k t]\cos(\omega_k t + \varphi_k) with \sigma_k > 0, which grow without bound as t increases; such a system is unstable.
5.11 SUMMARY
• The bilateral Laplace transform of x(t) is defined by

X_B(s) = \int_{-\infty}^{\infty} x(t)\exp[-st]\,dt

• The values of s for which X(s) converges (X(s) exists) constitute the region of convergence (ROC).
• The transformation x(t) \leftrightarrow X(s) is not one to one unless the ROC is specified.
• The unilateral Laplace transform is defined as

X(s) = \int_{0^-}^{\infty} x(t)\exp[-st]\,dt

• The bilateral and the unilateral Laplace transforms are related by

X_B(s) = X_I(s) + \mathcal{L}\{x_-(-t)u(t)\}\Big|_{s \to -s}

where X_I(s) is the unilateral Laplace transform of the causal part of x(t) and x_-(t) is the noncausal part of x(t).
Sec. 5.11 Summary
• Integration in the time domain is equivalent to division by s in the s domain:

\mathcal{L}\left\{\int_{0^-}^{t} x(\tau)\,d\tau\right\} = \frac{X(s)}{s}

• Convolution in the time domain is equivalent to multiplication in the s domain; that is,

y(t) = x(t) * h(t) \leftrightarrow Y(s) = X(s)H(s)

• The initial-value theorem allows us to compute the initial value of the signal x(t) and its derivatives directly from X(s):

x^{(n)}(0^+) = \lim_{s \to \infty}\left[s^{n+1}X(s) - s^n x(0^+) - s^{n-1}x'(0^+) - \cdots - s\,x^{(n-1)}(0^+)\right]
• The final-value theorem enables us to find the final value of x(t) from X(s):

\lim_{t \to \infty} x(t) = \lim_{s \to 0} sX(s)
• Partial-fraction expansion can be used to find the inverse Laplace transform of signals whose Laplace transforms are rational functions of s.
• There are many applications of the Laplace transform; among them are the solution of differential equations, the analysis of electrical circuits, and the design and analysis of control systems.
• If two subsystems with transfer functions H_1(s) and H_2(s) are connected in parallel, then the overall transfer function H(s) is

H(s) = H_1(s) + H_2(s)

• If two subsystems with transfer functions H_1(s) and H_2(s) are connected in series, then the overall transfer function H(s) is

H(s) = H_1(s)H_2(s)

• The closed-loop transfer function of a negative-feedback system with open-loop transfer function H_1(s) and feedback transfer function H_2(s) is

H(s) = \frac{H_1(s)}{1 + H_1(s)H_2(s)}
• Simulation diagrams for LTI systems can be obtained in the frequency domain. These diagrams can be used to obtain representations of state variables.
• The solution to the state equation can be written in the s domain as

\mathbf{V}(s) = \Phi(s)\mathbf{v}(0^-) + \Phi(s)\mathbf{b}X(s)
Y(s) = \mathbf{c}\mathbf{V}(s) + dX(s)
• The matrix

\Phi(s) = (s\mathbf{I} - \mathbf{A})^{-1} = \mathcal{L}\{\exp[\mathbf{A}t]\}

is called the resolvent matrix.
• The transfer function of a system can be written as

H(s) = \mathbf{c}\Phi(s)\mathbf{b} + d

• An LTI system is stable if and only if all its poles are in the open left half plane. An LTI system is marginally stable if it has only simple poles on the j\omega-axis; otherwise, it is unstable.
5.13 PROBLEMS
5.1. Find the bilateral Laplace transform and the ROC of the following functions:
(a) \exp[t + 1]
(b) \exp[bt]u(-t)
(c) |t|
(d) (1 - |t|)
(e) \exp[-2|t|]
(f) t^n\exp[-t]u(-t)
(g) (\cos at)u(-t)
(h) (\sinh at)u(-t)
5.2. Use the definition in Equation (5.3.1) to determine the unilateral Laplace transforms of the following signals:
(a) x₁(t) = t rect[(t − 1)/2]
Sec. 5.13 Problems 271
The Laplace transform of a signal x(t) is

X(s) = (s³ + 2s² + 3s + 2)/D(s)

where D(s) is a fourth-order polynomial in s. Determine the Laplace transform of the following signals:
(a) y(t) = x(t/2)
(b) y(t) = t x(t)
(c) y(t) = t x(−t)
(d) y(t) = dx(t)/dt
(e) y(t) = (t − 1)x(t − 1) * dx(t)/dt
(f) y(t) = ∫₀ᵗ x(τ) dτ
5.6. Show that

ℒ{tᵛ u(t)} = Γ(ν + 1)/s^{ν+1}

where

Γ(ν) = ∫₀^∞ τ^{ν−1} exp[−τ] dτ
5.7. Use the property

Γ(ν + 1) = νΓ(ν)

to show that the result in Problem 5.6 reduces to entry 5 in Table 5.1.
5.8. Derive formulas 8 and 9 in Table 5.1 using integration by parts.
5.9. Use entries 8 and 9 in Table 5.1 to find the Laplace transforms of sinh(ω₀t)u(t) and cosh(ω₀t)u(t).
5.10. Determine the initial and final values of each of the signals whose unilateral Laplace transforms are as follows, without computing the inverse Laplace transform. If there is no final value, state why not.
(a) 1/(s + a)
(b) 1/(s + a)²
(c) 6/(s(s + 2))
(d) 1/(s² + 1)
(e) (s² + s + 3)/(s³ + 4s² + 2s)
(f) 1/(s(s + 1))
5.11. Find x(t) for the following Laplace transforms:
(a) (s + 2)/(s² − s − 2)
(b) 1/(s² + 4)
(c) (2s³ + 3s² + 6s + 4)/((s² + 2)(s² + s + 2))
(d) s²/(s² − s + 1)
(e) 2/(s² − 2s + 2)
(f) (2s + 1)/(s² + 2s + 2)
(g) (2s² − 6s + 3)/(s³ − 3s² + 2s)
(h) (s + 1)/s²
(i) 1/(s(s + 1)²)
(j) 1/(s²(s + 2))
5.12. Find the following convolutions using Laplace transforms:
(a) exp[at]u(t) * exp[bt]u(t), a ≠ b
(b) exp[at]u(t) * exp[at]u(t)
(c) rect(t/2) * u(t)
(d) δ(t) * exp[at]u(t)
(e) exp[−bt]u(t) * u(t)
(f) sin(at)u(t) * cos(bt)u(t)
(g) exp[−2t]u(t) * rect[(t − 1)/2]
(h) [exp(−2t)u(t) + δ(t)] * u(t − 1)
5.13. (a) Use the convolution property to find the time signals corresponding to the following Laplace transforms:
response h(t). Let H(s) = N(s)/D(s), where N(s) and D(s) are polynomials in s. The roots of N(s) are the zeros of H(s), while the roots of D(s) are the poles.
(a) For the transfer function

H(s) = (s² + 3s + 2)/(s³ + s² + s + 1)

plot the locations of the poles and zeros in the complex s-plane.
(b) What is h(t) for this system? Is h(t) real?
(c) Show that if h(t) is real, H(s*) = H*(s). Hence show that if s = s₀ is a pole (zero) of H(s), so is s = s₀*. That is, poles and zeros occur in complex-conjugate pairs.
(d) Verify that the given H(s) satisfies (c).
5.15. Find the system transfer functions for each of the systems in Figure P5.15. (Hint: You may have to move the pickoff, or summation, point.)

Figure P5.15
5.16. Draw the simulation diagrams in the first and second canonical forms for the LTI system described by the transfer function

H(s) = (s² + 3s + 1)/(s³ + 3s² + s)
Figure P5.19
Figure P5.23

5.24. For the circuit shown in Figure P5.24, let v_C(0−) = 1 volt, i_L(0−) = 2 amperes, and x(t) = u(t). Find y(t). (Incorporate the initial energy for the inductor and the capacitor in your transformed model.)

Figure P5.24
Figure P5.27

Figure P5.28

Figure P5.29
5.29. Consider a feedback system with controller

H_c(s) = (s + 1)/s

and plant

H(s) = 1/(s + 2)

(a) Show that lim_{t→∞} y(t) = 4.
(b) Determine the error signal e(t).
(c) Does the system track the input if H_c(s) = (s + 1)/(s + 2)? If not, why?
(d) Does the system work if H_c(s) = 1/(s + 1)?
5.30. Find exp[At], using the Laplace transform, for the following matrices:
(a) A = [1 0; 0 6]
(b) A = [1 −1; 2 0]
(c) A = [1 1; 0 1]
(d) A = [0 1; 1 0]
(e) A = [1 0 0; −1 1 1; −1 0 0]
(f) A = [2 1 1; 0 3 1; 0 −1 1]
5.31. Consider the circuit shown in Figure P5.31. Select the capacitor voltage and the inductor current as state variables. Assume zero initial conditions.
(a) Write the state equations in the transform domain.
(b) Find Y(s) if the input x(t) is the unit step.
(c) What is y(t)?

Figure P5.31
5.33. Use the Laplace-transform method to find the solution of the following state equations:

Figure P5.33
Chapter 6

Discrete-Time Systems

6.1 INTRODUCTION
In the preceding chapters, we discussed techniques for the analysis of analog or continuous-time signals and systems. In this and subsequent chapters, we consider corresponding techniques for the analysis of discrete-time signals and systems.
Discrete-time signals, as the name implies, are signals that are defined only at discrete instants of time. Examples of such signals are the number of children born on a specific day in a year, the population of the United States as obtained by a census, the interest on a bank account, etc. A second type of discrete-time signal occurs when an analog signal is converted into a discrete-time signal by the process of sampling. (We will have more to say about sampling later.) An example is the digital recording of audio signals. Another example is a telemetering system in which data from several measurement sensors are transmitted over a single channel by time-sharing.
In either case, we represent the discrete-time signal as a sequence of values x(t_n), where the t_n correspond to the instants at which the signal is defined. We can also write the sequence as x(n), with n assuming only integer values.
As with continuous-time signals, we usually represent discrete-time signals in functional form; for example,

x(n) = ⋯   (6.1.1)

Alternatively, if a signal is nonzero only over a finite interval, we can list the values of the signal as the elements of a sequence. Thus, the function shown in Figure 6.1.1 can be written as
x(n) = {⋯}   (6.1.2)

where the arrow indicates the value for n = 0. In this notation, it is assumed that all values not listed are zero. For causal sequences, in which the first entry represents the value at n = 0, we omit the arrow.
The sequence shown in Equation (6.1.2) is an example of a finite-length sequence. The length of the sequence is given by the number of terms in the sequence. Thus, Equation (6.1.2) represents a six-point sequence.
The signal x(n) is an energy signal if E is finite. It is a power signal if E is not finite, but P is finite. Since P = 0 when E is finite, all energy signals are also power signals. However, if P is finite, E may or may not be finite. Thus, not all power signals are energy signals. If neither E nor P is finite, the signal is neither an energy nor a power signal.
The signal x(n) is periodic if, for some integer N > 0,

x(n + N) = x(n) for all n   (6.1.5)

The smallest value of N that satisfies this relation is the fundamental period of the signal. If there is no integer N that satisfies Equation (6.1.5), x(n) is an aperiodic signal.
Example 6.1.1
Consider the signal

x(n) = A sin(2πf₀n + φ₀)

Then, for x(n) to be periodic with period N, we must have

N = m/f₀

where m is some integer. The fundamental period is obtained by choosing m as the smallest integer that yields an integer value for N. For example, if f₀ = 3/5, we can choose m = 3 to get N = 5.
On the other hand, if f₀ = √2/4, N will not be an integer for any choice of m, and thus, x(n) is aperiodic.
Let x(n) be the sum of two periodic sequences x₁(n) and x₂(n), with periods N₁ and N₂, respectively. Let p and q be two integers such that

pN₁ = qN₂ = N   (6.1.6)
Example 6.1.2
A signal x(n) is even if

x(n) = x(−n) for all n   (6.1.7)

and is odd if

x(n) = −x(−n) for all n   (6.1.8)
Example 6.1.3
Let

x(n) = {1, n ≥ 0; 0, n < 0}

and suppose we want to find (i) y(n) = 2x(5n/3) and (ii) x(2n).
With y(n) = 2x(5n/3), we have

y(0) = 2x(0) = 2, y(1) = 2x(5/3) = 0, y(2) = 2x(10/3) = 0

since x(·) is defined only for integer arguments. Thus,

y(n) = 2x(5n/3), n = 0, 3, 6, etc.
     = 0, otherwise
The preceding example shows that for discrete-time signals, time scaling does not yield just a stretched or compressed version of the original signal, but may give a totally different waveform.
Example 6.1.4
Let

x(n) = {1, n even; −1, n odd}

Then

y(n) = x(2n) = 1 for all n
Example 6.1.5
Consider the waveform shown in Fig. 6.1.2(a), and let

y(n) = x(−(n − 2)/3) = x(−n/3 + 2/3)

We first scale x(n) by a factor of 1/3 to obtain x(n/3) and then reflect this about the vertical axis to obtain x(−n/3). The result is shifted to the right by two samples to obtain y(n). These steps are illustrated in Figs. 6.1.2(b)-(d). The resulting sequence is

y(n) = {−2, 0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, −1}

Figure 6.1.2 Signals for Example 6.1.5: (a) x(n), (b) x(n/3), (c) x(−n/3), and (d) x(−n/3 + 2/3).
6.2 ELEMENTARY DISCRETE-TIME SIGNALS

The unit sample function is defined as

δ(n) = {1, n = 0; 0, n ≠ 0}   (6.2.1)

as shown in Figure 6.2.1. We refer to δ(n) as the unit sample occurring at n = 0 and the shifted function δ(n − k) as the unit sample occurring at n = k.
The discrete-time delta and step functions have properties somewhat similar to their continuous-time counterparts. For example, the first difference of the unit-step function is

u(n) − u(n − 1) = δ(n)   (6.2.4)

If we compute the sum from −∞ to n of the δ function, as can be seen from Figure 6.2.3, we get the unit step function:
Σ_{k=−∞}^{n} δ(k) = u(n)   (6.2.5)

Figure 6.2.1 (a) The unit sample, or δ, function. (b) The shifted δ function.
By replacing k by n − k, we can write Equation (6.2.5) as

Σ_{k=0}^{∞} δ(n − k) = u(n)   (6.2.6)
From Equations (6.2.4) and (6.2.5), we see that in discrete-time systems, the first difference, in a sense, takes the place of the first derivative in continuous-time systems, and the sum operator replaces the integral.
Other analogous properties of the δ function follow easily. For any arbitrary sequence x(n), we have

x(n)δ(n − k) = x(k)δ(n − k)   (6.2.7)
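The first-difference, running-sum, and sampling properties above are easy to confirm numerically. A minimal NumPy sketch (the index range −5..5 is an arbitrary choice for illustration):

```python
import numpy as np

n = np.arange(-5, 6)                 # sample indices -5..5
u = (n >= 0).astype(int)             # unit step u(n)
delta = (n == 0).astype(int)         # unit sample (delta) function

# First difference of the step recovers the unit sample: u(n) - u(n-1) = delta(n).
first_diff = u - np.concatenate(([0], u[:-1]))   # u(n-1), taking u(-6) = 0
assert np.array_equal(first_diff, delta)

# Running sum of the unit sample recovers the step (Eq. 6.2.5).
assert np.array_equal(np.cumsum(delta), u)

# Sampling property (Eq. 6.2.7): x(n)*delta(n-k) = x(k)*delta(n-k), here with x(n) = n^2, k = 3.
x = n ** 2
assert np.array_equal(x * (n == 3), x[n == 3][0] * (n == 3))
```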
6.2.2 Exponential Sequences

The exponential sequence in discrete time is given by

x(n) = Cαⁿ   (6.2.9)

where, in general, C and α are complex numbers. The fact that this is a direct analog of the exponential function in continuous time can be seen by writing α = exp[β], so that

x(n) = C exp[βn]   (6.2.10)

For C and α real, x(n) increases with increasing n if |α| > 1. Similarly, if |α| < 1, we have a decreasing exponential.
Sec. 6.2 Elementary Discrete-Time Signals
By replacing ω₀t in this equation by Ω₀n, we obtain the complex exponential in discrete time,

x(n) = exp[jΩ₀n]

This sequence is periodic with period N if

Ω₀/2π = m/N

for m any integer. Thus, x(n) will be periodic only if Ω₀/2π is a rational number. The period is given by N = 2πm/Ω₀, with the fundamental period corresponding to the smallest possible value for m.
Example 6.2.1
Let

x(n) = exp[j(7π/9)n]

so that

Ω₀/2π = 7/18 = m/N

Thus, the sequence is periodic, and the fundamental period, obtained by choosing m = 7, is given by N = 18.
Example 6.2.2
For the sequence

x(n) = exp[j(7/18)n]

we have

Ω₀/2π = 7/36π

which is not rational. Thus, the sequence is not periodic.
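The rationality test above lends itself to a one-line computation: if Ω₀/2π is a fraction m/N in lowest terms, the fundamental period is the denominator N. A small sketch using Python's exact fractions (the helper name is ours):

```python
from fractions import Fraction

def fundamental_period(ratio):
    """Fundamental period of exp(j*Omega0*n), where ratio = Omega0/(2*pi)
    is given as an exact Fraction. N = m/ratio, so the smallest integer N
    is the denominator of the ratio in lowest terms."""
    return Fraction(ratio).denominator

# Example 6.2.1: Omega0 = 7*pi/9, so Omega0/(2*pi) = 7/18 -> N = 18.
assert fundamental_period(Fraction(7, 18)) == 18

# Example 6.1.1: f0 = 3/5 -> N = 5.
assert fundamental_period(Fraction(3, 5)) == 5

# Example 6.2.2: Omega0 = 7/18 gives Omega0/(2*pi) = 7/(36*pi), which is
# irrational, so no Fraction representation exists and the sequence is aperiodic.
```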
there are only N distinct waveforms in the set given by Equation (6.2.14). These correspond to the frequencies Ω_k = 2πk/N for k = 0, 1, ..., N − 1. Since Ω_{k+N} = Ω_k + 2π, waveforms separated in frequency by 2π radians are identical. As we shall see later, this has implications in the Fourier analysis of discrete-time, periodic signals.
Example 6.2.3
Consider the continuous-time signal

x(t) = Σ_{k=−2}^{2} c_k exp[jkω₀t]

where c₀ = 1, c₁ = (1 + j1) = c*₋₁, and c₂ = c*₋₂ = 3/2.
Let us sample x(t) uniformly to get the sampled signal

x(n) = Σ_{k=−2}^{2} c_k exp[jkΩ₀n]

where Ω₀ = 8π/3. Thus, x(n) represents a sum of harmonic signals with fundamental period N = 2πm/Ω₀. Choosing m = 4 then yields N = 3. It follows, therefore, that there are only three distinct harmonics, and hence, the summation can be reduced to one consisting of only three terms.
To see this, we note that, from Equation (6.2.15), we have exp(j2Ω₀n) = exp(−jΩ₀n) and exp(j(−2Ω₀)n) = exp(jΩ₀n), so that grouping like terms together gives
Sec. 6.3 Discrete-Time Systems 287
x(n) = Σ_{k=−1}^{1} d_k exp[jkΩ₀n]

where the coefficients d_k follow by combining the corresponding c_k.
y(n) = Σ_{k=−∞}^{∞} x(n − k)h(k) = h(n) * x(n)   (6.3.3)

Thus, the convolution operation is commutative.
For causal systems, it is clear that

h(n) = 0,  n < 0   (6.3.4)

so that Equation (6.3.2) can be written as

y(n) = Σ_{k=−∞}^{n} x(k)h(n − k)   (6.3.5)

or, in the equivalent form,

y(n) = Σ_{k=0}^{∞} x(n − k)h(k)   (6.3.6)
For continuous-time systems, we saw that the impulse response is, in general, the sum of several complex exponentials. Consequently, the impulse response is nonzero over any finite interval of time (except, possibly, at isolated points) and is generally referred to as an infinite impulse response (IIR). With discrete-time systems, on the other hand, the impulse response can become identically zero after a few samples. Such systems are said to have a finite impulse response (FIR). Thus, discrete-time systems can be either IIR or FIR.
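The convolution sum of Equation (6.3.2) can be evaluated directly for finite-length sequences. A minimal sketch (the FIR impulse response and input below are hypothetical examples):

```python
import numpy as np

def convolve_sum(x, h):
    """Direct evaluation of the convolution sum y(n) = sum_k x(k) h(n-k)
    for two causal, finite-length sequences (both starting at n = 0)."""
    y = np.zeros(len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        # x(k) scales a copy of h shifted to start at n = k.
        y[k:k + len(h)] += xk * np.asarray(h, dtype=float)
    return y

# An FIR system: h(n) is identically zero after a few samples.
h = [1.0, 0.5, 0.25]
x = [1.0, 1.0, 1.0, 1.0]
y = convolve_sum(x, h)
assert np.allclose(y, np.convolve(x, h))   # matches NumPy's linear convolution
```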
We can interpret Equation (6.3.2) in a manner similar to the continuous-time case. For a fixed value of n, we consider the product of the two sequences x(k) and h(n − k), where h(n − k) is obtained from h(k) by first reflecting h(k) about the origin and then shifting to the right by n if n is positive or to the left by |n| if n is negative. This is illustrated in Figure 6.3.1. The output y(n) for this value of n is determined by summing the values of the sequence x(k)h(n − k).
Figure 6.3.1 (a) h(k); (b) x(k); (c) h(n − k); (d) x(k)h(n − k).
" We note that the convolution of h(n) with 6(n) is, by definition, equal toi(n).That
is, the convolution of any function with the 6 function gives back the original function.
We now consider a few examples.
Example 6.3.1
When an input x(n) = 3δ(n − 2) is applied to a causal, linear time-invariant system, the output is found to be

y(n) = 3(−1/2)^{n−2},  n ≥ 2

Find the impulse response h(n) of the system.
By definition, h(n) is the response of the system to the input δ(n). Since the system is LTI, it follows that

h(n) = (1/3)y(n + 2)

so that
h(n) = (−1/2)ⁿ u(n)

Example 6.3.2
Let

x(n) = αⁿu(n) and h(n) = βⁿu(n)

Then

y(n) = Σ_{k=−∞}^{∞} αᵏu(k) β^{n−k}u(n − k)

Since u(k) = 0 for k < 0, and u(n − k) = 0 for k > n, we can rewrite the summation as

y(n) = Σ_{k=0}^{n} αᵏβ^{n−k}

For α = β, this reduces to y(n) = (n + 1)βⁿ. If α ≠ β, the sum can be put in closed form by using the formula (see Problem 6.5)

y(n) = βⁿ [1 − (αβ⁻¹)^{n+1}]/(1 − αβ⁻¹) = (β^{n+1} − α^{n+1})/(β − α),  n ≥ 0

As a special case of this example, let α = 1, so that x(n) is the unit step. The step response of this system, obtained by setting α = 1 in the last expression for y(n), is

y(n) = (1 − β^{n+1})/(1 − β),  n ≥ 0
In general, as can be seen by letting x(n) = u(n) in Equation (6.3.3), the step response of a system whose impulse response is h(n) is given by

s(n) = Σ_{k=−∞}^{n} h(k)   (6.3.9)

It follows that, given the step response s(n) of a system, we can find the impulse response as

h(n) = s(n) − s(n − 1)
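Equation (6.3.9) is just a running sum, and recovering h(n) from s(n) is a first difference. A short sketch (the FIR impulse response below is a hypothetical example):

```python
import numpy as np

# Step response as the running sum of the impulse response (Eq. 6.3.9).
h = np.array([1.0, -0.5, 0.25, -0.125])   # hypothetical causal FIR h(n)
s = np.cumsum(h)                          # s(n) = sum_{k<=n} h(k)

# First difference h(n) = s(n) - s(n-1) recovers the impulse response.
h_back = np.diff(s, prepend=0.0)
assert np.allclose(h_back, h)
```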
Example 6.3.3
We want to find the step response of the system with impulse response

h(n) = 2(1/2)ⁿ cos(nθ) u(n)

By writing h(n) as

h(n) = [((1/2)e^{jθ})ⁿ + ((1/2)e^{−jθ})ⁿ] u(n)

it follows from the last equation in Example 6.3.2 that the step response is equal to

s(n) = [1 − ((1/2)e^{jθ})^{n+1}]/(1 − (1/2)e^{jθ}) + [1 − ((1/2)e^{−jθ})^{n+1}]/(1 − (1/2)e^{−jθ})

which can be simplified as
Example 6.3.4
Let x(n) be a finite sequence that is nonzero for n ∈ [N₁, N₂] and h(n) be a finite sequence that is nonzero for n ∈ [N₃, N₄]. Then, for fixed n, h(n − k) is nonzero for k ∈ [n − N₄, n − N₃], whereas x(k) is nonzero only for k ∈ [N₁, N₂], so that the product x(k)h(n − k) is zero if n − N₃ < N₁ or if n − N₄ > N₂. Thus, y(n) is nonzero only for n ∈ [N₁ + N₃, N₂ + N₄].
Let M = N₂ − N₁ + 1 be the length of the sequence x(n) and N = N₄ − N₃ + 1 be the length of the sequence h(n). The length of the sequence y(n), which is (N₂ + N₄) − (N₁ + N₃) + 1, is thus equal to M + N − 1. That is, the convolution of an M-point sequence and an N-point sequence results in an (M + N − 1)-point sequence.
Example 6.3.5
Let h(n) = {1, 2, 0, −1, 1} and x(n) = {1, 3, −1, −2} be two causal sequences. Since h(n) is a five-point sequence and x(n) is a four-point sequence, from the results of Example 6.3.4, y(n) is an eight-point sequence that is zero for n < 0 or n > 7.
Since both sequences are finite, we can perform the convolution easily by setting up a table of values of h(k) and x(n − k) for the relevant values of n and using

y(n) = Σ_k h(k)x(n − k)

as shown in Table 6.1. The entries for x(n − k) in the table are obtained by first reflecting x(k) about the origin to form x(−k) and successively shifting the resulting sequence by 1 to the right. All entries not explicitly shown are assumed to be zero. The output y(n) is determined by multiplying the entries in the rows corresponding to h(k) and x(n − k) and summing the results. Thus, to find y(0), multiply the entries in rows 2 and 4; for y(1), multiply rows 2 and 5; and so on. The last two columns list n and y(n), respectively.
From the last column in the table, we see that

y(n) = {1, 5, 5, −5, −6, 4, 1, −2}
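The tabular computation can be checked directly, since NumPy's np.convolve evaluates exactly this linear convolution of the example's two sequences:

```python
import numpy as np

# Sequences from Example 6.3.5: a five-point h(n) and a four-point x(n).
h = [1, 2, 0, -1, 1]
x = [1, 3, -1, -2]

y = np.convolve(h, x)   # eight points, as M + N - 1 = 5 + 4 - 1 = 8
assert list(y) == [1, 5, 5, -5, -6, 4, 1, -2]
```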
Example 6.3.6
We can use an alternative tabular form to determine y(n) by noting that

y(n) = h(0)x(n) + h(1)x(n − 1) + h(2)x(n − 2) + ···
     + h(−1)x(n + 1) + h(−2)x(n + 2) + ···
TABLE 6.1 Convolution Table for Example 6.3.5

k          −3  −2  −1   0   1   2   3   4   5   6   7    n   y(n)
h(k)                    1   2   0  −1   1
x(k)                    1   3  −1  −2
x(−k)      −2  −1   3   1                                0    1
x(1 − k)       −2  −1   3   1                            1    5
x(2 − k)           −2  −1   3   1                        2    5
x(3 − k)               −2  −1   3   1                    3   −5
x(4 − k)                   −2  −1   3   1                4   −6
x(5 − k)                       −2  −1   3   1            5    4
x(6 − k)                           −2  −1   3   1        6    1
x(7 − k)                               −2  −1   3   1    7   −2
TABLE 6.2 Convolution Table for Example 6.3.6
(rows: x(n + 1), x(n), x(n − 1), x(n − 2), x(n − 3); h(−1)x(n + 1), h(0)x(n), h(1)x(n − 1), h(2)x(n − 2), h(3)x(n − 3); y(n))
Finally, we note that just as with the convolution integral, the convolution sum defined in Equation (6.3.2) is associative, distributive, and commutative. This enables us to determine the impulse response of series or parallel combinations of systems in terms of their individual impulse responses, as shown in Figure 6.3.2.
Figure 6.3.2 Impulse responses of series and parallel combinations of systems.
Example 6.3.7
Consider the system shown in Figure 6.3.3 with

h₁(n) = δ(n) − aδ(n − 1)
h₂(n) = (1/2)ⁿu(n)
h₃(n) = aⁿu(n)
h₄(n) = (n − 1)u(n)

and

h₅(n) = δ(n) + nu(n − 1) + δ(n − 2)

It is clear from the figure that

h(n) = h₁(n) * h₂(n) * h₃(n) * [h₅(n) − h₄(n)]

To evaluate h(n), we first form the convolution h₁(n) * h₃(n):

h₁(n) * h₃(n) = [δ(n) − aδ(n − 1)] * aⁿu(n) = aⁿu(n) − aⁿu(n − 1) = δ(n)

Also,

h₅(n) − h₄(n) = δ(n) + δ(n − 2) + nu(n − 1) − (n − 1)u(n) = δ(n) + δ(n − 2) + u(n)

so that

h(n) = δ(n) * h₂(n) * [δ(n) + δ(n − 2) + u(n)] = h₂(n) + h₂(n − 2) + s₂(n)

where s₂(n) represents the step response corresponding to h₂(n). (See Equation (6.3.9).) We have, therefore,

h(n) = (1/2)ⁿu(n) + (1/2)^{n−2}u(n − 2) + [2 − (1/2)ⁿ]u(n)

which can be put in closed form, using Equation (6.3.7), as

h(n) = (1/2)^{n−2}u(n − 2) + 2u(n)
Note that the sum on the right has only N terms. We denote this operation as

y(n) = x₁(n) ⊛ x₂(n)

We emphasize that periodic convolution is defined only for sequences with the same period. Recall that, since the convolution of Equation (6.3.2) represents the output of a linear system, it is usual to call it a linear convolution in order to distinguish it from the convolution of Equation (6.4.1).
It is clear that y(n) as defined in Equation (6.4.1) is periodic with period N.
The convolution operation of Equation (6.4.1) involves the shifted sequence x₂(n − k), which is obtained from x₂(n) by successive shifts to the right. However, we are interested only in values of n in the range 0 ≤ n ≤ N − 1. On each successive shift, the first value in this range is replaced by the value at −1. Since the sequence is periodic, this is the same as the value at N − 1, as shown in the example in Figure 6.4.1. We can assume, therefore, that on each successive shift, each entry in the sequence moves one place to the right, and the last entry moves into the first place. Such a shift is known as a periodic, or circular, shift.
From Equation (6.4.1), y(n) can be explicitly written as

y(n) = x₁(0)x₂(n) + x₁(1)x₂(n − 1) + ··· + x₁(N − 1)x₂(n − N + 1)

We can use the tabular form of Example 6.3.6 to calculate y(n). However, since the sum is taken only over values of n from 0 to N − 1, the table has to have only N columns. We present an example to illustrate this.
Example 6.4.1
Consider the convolution of the periodic extensions of the two sequences

h(n) = {1, 3, −1, −2} and x(n) = {1, 2, 0, −1}

It follows that y(n) is periodic with period N = 4. The convolution table of Table 6.3 illustrates the steps involved in determining y(n). For n = 0, 1, 2, 3, rows 2 through 5 list the values of x(n − k) obtained by circular shifts of x(n). Rows 6 through 9 list the values of h(k)x(n − k). The output y(n) is determined by summing the entries in each column corresponding to these rows.
TABLE 6.3 Periodic Convolution of Example 6.4.1

n                0    1    2    3
x(n)             1    2    0   −1
x(n − 1)        −1    1    2    0
x(n − 2)         0   −1    1    2
x(n − 3)         2    0   −1    1
h(0)x(n)         1    2    0   −1
h(1)x(n − 1)    −3    3    6    0
h(2)x(n − 2)     0    1   −1   −2
h(3)x(n − 3)    −4    0    2   −2
y(n)            −6    6    7   −5
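The circular shifts in Table 6.3 amount to indexing x modulo N, so the periodic convolution can be evaluated in a few lines. The sketch below uses the sequences of Example 6.4.1 and also checks the result via the DFT, since circular convolution corresponds to multiplication of DFTs (a relationship the text takes up in Section 9.4):

```python
import numpy as np

def circular_convolve(h, x):
    """N-point periodic (circular) convolution of two equal-length sequences,
    evaluated directly from Equation (6.4.1)."""
    N = len(x)
    return [sum(h[k] * x[(n - k) % N] for k in range(N)) for n in range(N)]

# One period of each sequence from Example 6.4.1:
h = [1, 3, -1, -2]
x = [1, 2, 0, -1]
assert circular_convolve(h, x) == [-6, 6, 7, -5]   # matches Table 6.3

# Same result via the DFT: circular convolution <-> product of DFTs.
y_fft = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
assert np.allclose(y_fft, [-6, 6, 7, -5])
```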
In order to distinguish y(n) discussed in the previous section from y_p(n), y(n) is usually referred to as the linear convolution of the sequences x₁(n) and x₂(n), since it corresponds to the output of a linear system driven by an input.
It is clear that y_p(n) in Equation (6.4.6) is the same as the periodic convolution of the periodic extensions of the signals x₁(n) and x₂(n), so that y_p(n) can also be considered periodic with period N. If the two sequences are not of the same length, we can still define their convolution by augmenting the shorter sequence with zeros to make the two sequences the same length. This is known as zero-padding or zero-augmentation. Since zero-augmentation of a finite-length sequence does not change the sequence, given two sequences of lengths N₁ and N₂, we can define their periodic convolution of arbitrary length M, denoted [y_p(n)]_M, provided that M ≥ max[N₁, N₂]. We illustrate this in the following example.
Example 6.4.2
Consider the periodic convolution of the sequences h(n) = {1, 2, 0, −1, 1} and x(n) = {1, 3, −1, −2} of Example 6.3.5. We can find the M-point periodic convolution of the two sequences for M ≥ 5 by zero-padding the sequences appropriately and following the procedure of Example 6.4.1. Thus, for M = 5, we form

x(n) = {1, 3, −1, −2, 0}

so that both h(n) and x(n) are five points long. It can then easily be verified that

[y_p(n)]₅ = {5, 6, 3, −5, −6}

Comparing this result with y(n) obtained in Example 6.3.5, we note that while the first three values of y(n) and [y_p(n)]₅ are different, the next two values are the same. In fact, [y_p(n)]₅ = y(n) + y(n + 5) for 0 ≤ n ≤ 4.
It can similarly be verified that the eight-point circular convolution of x(n) and h(n), obtained by considering the augmented sequences

h(n) = {1, 2, 0, −1, 1, 0, 0, 0}

and

x(n) = {1, 3, −1, −2, 0, 0, 0, 0}

is given by

[y_p(n)]₈ = {1, 5, 5, −5, −6, 4, 1, −2}

The preceding example shows that the periodic convolution [y_p(n)]_M of two finite-length sequences is related to their linear convolution y(n). We will explore this relationship further in Section 9.4.
In this equation, the x(n − k) are known. If the y(n − k) are also known, then y(n) can be determined. Setting n = 0 in Equation (6.5.5) yields

y(0) = (1/a₀)[Σ_{k=0}^{M} b_k x(−k) − Σ_{k=1}^{N} a_k y(−k)]   (6.5.6)

The quantities y(−k), for k = 1, 2, ..., N, represent the initial conditions for the difference equation and are therefore assumed to be known. Thus, since all the terms on the right-hand side are known, we can determine y(0).
We now let n = 1 in Equation (6.5.5) and use the value of y(0) determined earlier to solve for y(1). This process can be repeated for successive values of n to determine y(n) by iteration.
Using an argument similar to the previous one, we can see that the initial conditions needed to solve Equation (6.5.4) are y(0), y(1), ..., y(N − 1). Starting with these initial conditions, Equation (6.5.4) can be solved iteratively in a similar manner.
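The iteration just described translates directly into a short program: at each step, y(n) is the weighted input sum minus the weighted past outputs, divided by a₀. The sketch below is generic; the first-order equation and step input used at the end are hypothetical illustrations, not from the text:

```python
def solve_iteratively(a, b, x, y_init, n_max):
    """Iterate a[0]y(n) + a[1]y(n-1) + ... + a[N]y(n-N)
             = b[0]x(n) + ... + b[M]x(n-M) forward from n = 0.

    a, b   : coefficient lists [a0..aN] and [b0..bM]
    x      : function returning the input x(n)
    y_init : initial conditions [y(-1), y(-2), ..., y(-N)]
    """
    N = len(a) - 1
    y = {-k: y_init[k - 1] for k in range(1, N + 1)}
    for n in range(n_max + 1):
        rhs = sum(bk * x(n - k) for k, bk in enumerate(b))
        rhs -= sum(a[k] * y[n - k] for k in range(1, N + 1))
        y[n] = rhs / a[0]
    return [y[n] for n in range(n_max + 1)]

# Hypothetical example: y(n) - 0.5 y(n-1) = x(n), x(n) = u(n), y(-1) = 0.
vals = solve_iteratively([1.0, -0.5], [1.0],
                         lambda n: 1.0 if n >= 0 else 0.0, [0.0], 3)
# vals -> [1.0, 1.5, 1.75, 1.875]
```

As the text notes, this is easy to run on a computer but does not by itself give a closed-form expression for y(n).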
Example 6.5.1
Consider the difference equation

y(n) − (3/4)y(n − 1) + (1/8)y(n − 2) = (1/4)ⁿ,  n ≥ 0

with

y(−1) = 1, y(−2) = 0

Then

y(0) = (3/4)y(−1) − (1/8)y(−2) + 1 = 7/4

y(1) = (3/4)y(0) − (1/8)y(−1) + 1/4 = 23/16

etc.
Whereas we can use the iterative procedure described before to obtain y(n) for several values of n, the procedure does not, in general, yield an analytical expression for evaluating y(n) for any arbitrary n. The procedure, however, is easily implemented on a digital computer. We now consider the analytical solution of the difference equation by determining the homogeneous and particular solutions of Equation (6.5.1).
The homogeneous equation is

Σ_{k=0}^{N} a_k y(n − k) = 0   (6.5.7)

By analogy with our discussion of the continuous-time case, we assume that the solution to this equation is given by the exponential function
y_h(n) = Aαⁿ

Substituting into the difference equation yields

Σ_{k=0}^{N} a_k Aα^{n−k} = 0

Thus, any homogeneous solution must satisfy the algebraic equation

Σ_{k=0}^{N} a_k α^{−k} = 0   (6.5.8)

Equation (6.5.8) is the characteristic equation for the difference equation, and the values of α that satisfy this equation are the characteristic values. It is clear that there are N characteristic roots α₁, α₂, ..., α_N, and these roots may or may not be distinct. If they are distinct, the corresponding characteristic solutions are independent, and we can obtain the homogeneous solution y_h(n) as a linear combination of terms of the type αᵢⁿ, so that

y_h(n) = A₁α₁ⁿ + A₂α₂ⁿ + ··· + A_Nα_Nⁿ   (6.5.9)
If any of the roots are repeated, then we generate N independent solutions by multiplying the corresponding characteristic solution by the appropriate power of n. For example, if α₁ has a multiplicity of P₁, while the other N − P₁ roots are distinct, we assume a homogeneous solution of the form

y_h(n) = A₁α₁ⁿ + A₂nα₁ⁿ + ··· + A_{P₁}n^{P₁−1}α₁ⁿ + A_{P₁+1}α_{P₁+1}ⁿ + ··· + A_Nα_Nⁿ   (6.5.10)
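Finding the characteristic values amounts to finding polynomial roots, which is a one-line numerical computation. A sketch with a hypothetical second-order equation y(n) − (5/6)y(n − 1) + (1/6)y(n − 2) = 0, whose characteristic equation is α² − (5/6)α + 1/6 = 0:

```python
import numpy as np

# Coefficients of the characteristic polynomial, highest power first.
roots = np.roots([1.0, -5.0 / 6.0, 1.0 / 6.0])

# The polynomial factors as (alpha - 1/2)(alpha - 1/3).
assert np.allclose(sorted(roots), [1.0 / 3.0, 1.0 / 2.0])

# Distinct roots, so the homogeneous solution is A1*(1/2)**n + A2*(1/3)**n,
# with A1, A2 fixed by the initial conditions.
```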
Example 6.5.2
Consider the equation

y(n) − (13/12)y(n − 1) + (3/8)y(n − 2) − (1/24)y(n − 3) = 0

with

y(−1) = 6, y(−2) = 6, y(−3) = −2

The characteristic equation is

1 − (13/12)α⁻¹ + (3/8)α⁻² − (1/24)α⁻³ = 0

or

α³ − (13/12)α² + (3/8)α − 1/24 = 0

which can be factored as

(α − 1/2)(α − 1/3)(α − 1/4) = 0
Sec. 6.5 Difference-Equation Representation of Discrete-Time Systems
The homogeneous solution is therefore of the form y(n) = A₁(1/2)ⁿ + A₂(1/3)ⁿ + A₃(1/4)ⁿ. Substituting the initial conditions gives

2A₁ + 3A₂ + 4A₃ = 6
4A₁ + 9A₂ + 16A₃ = 6
8A₁ + 27A₂ + 64A₃ = −2

The simultaneous solution of these equations yields

A₁ = 7, A₂ = −10/3, A₃ = 1/2

so that

y(n) = 7(1/2)ⁿ − (10/3)(1/3)ⁿ + (1/2)(1/4)ⁿ
Example 6.5.3
Consider the equation

y(n) − (5/4)y(n − 1) + (1/2)y(n − 2) − (1/16)y(n − 3) = 0

with the same initial conditions as in the previous example. The characteristic equation is

1 − (5/4)α⁻¹ + (1/2)α⁻² − (1/16)α⁻³ = 0

with roots

α₁ = 1/2, α₂ = 1/2, α₃ = 1/4

Since the root 1/2 is repeated, we write the homogeneous solution as

y(n) = A₁(1/2)ⁿ + A₂n(1/2)ⁿ + A₃(1/4)ⁿ

Substituting the initial conditions and solving the resulting equations yields

A₁ = 9/2, A₂ = 5/4, A₃ = −1/8
We note that the right side of this equation is the weighted sum of the input x(n) and its delayed versions. Therefore, we can obtain y_p(n), the particular solution to Equation (6.5.11), by first determining ŷ(n), the particular solution to the equation

Σ_{k=0}^{N} a_k y(n − k) = x(n)   (6.5.12)

and then forming

y_p(n) = Σ_{k=0}^{M} b_k ŷ(n − k)   (6.5.13)

To find ŷ(n), we assume that it is a linear combination of x(n) and its delayed versions x(n − 1), x(n − 2), etc. For example, if x(n) is a constant, so is x(n − k) for any k. Therefore, ŷ(n) is also a constant. Similarly, if x(n) is an exponential function of the form βⁿ, ŷ(n) is an exponential of the same form. If
x(n) = sin Ω₀n

then

x(n − k) = sin Ω₀(n − k) = cos Ω₀k sin Ω₀n − sin Ω₀k cos Ω₀n

Correspondingly, we have

ŷ(n) = A sin Ω₀n + B cos Ω₀n

We get the same form for ŷ(n) when

x(n) = cos Ω₀n

We can determine the unknown constants in the assumed solution by substituting into the difference equation and equating like terms.
As in the solution of differential equations, the assumed form for the particular solu-
tion has to be modified by multiplying by an appropriate power of n if the forcing func-
tion is of the same form as one of the characteristic solutions.
Example 6.5.4
Consider the difference equation

y(n) − (3/4)y(n − 1) + (1/8)y(n − 2) = x(n)

with x(n) = sin(nπ/2). We assume the particular solution

ŷ(n) = A sin(nπ/2) + B cos(nπ/2)

Then

ŷ(n − 1) = A sin((n − 1)π/2) + B cos((n − 1)π/2) = −A cos(nπ/2) + B sin(nπ/2)

and

ŷ(n − 2) = A sin((n − 2)π/2) + B cos((n − 2)π/2) = −A sin(nπ/2) − B cos(nπ/2)

Substitution into the difference equation yields

(7/8)A − (3/4)B = 1
(3/4)A + (7/8)B = 0

Solving these equations simultaneously, we obtain

A = 56/85 and B = −48/85

so that the particular solution is

ŷ(n) = (56/85) sin(nπ/2) − (48/85) cos(nπ/2)

To find the homogeneous solution, we write the characteristic equation for the difference equation as

1 − (3/4)α⁻¹ + (1/8)α⁻² = 0

Since the characteristic roots are 1/2 and 1/4, the homogeneous solution is

y_h(n) = A₁(1/2)ⁿ + A₂(1/4)ⁿ

so that the total solution is

y(n) = A₁(1/2)ⁿ + A₂(1/4)ⁿ + (56/85) sin(nπ/2) − (48/85) cos(nπ/2)

We can now substitute the given initial conditions to solve for the constants A₁ and A₂.
Example 6.5.5
Consider the difference equation

y(n) − (3/4)y(n − 1) + (1/8)y(n − 2) = x(n) + (1/2)x(n − 1)

with

x(n) = 2 sin(nπ/2)

From our earlier discussion, we can determine the particular solution for this equation in terms of the particular solution ŷ(n) of Example 6.5.4 as

y_p(n) = 2ŷ(n) + ŷ(n − 1)
       = (112/85) sin(nπ/2) − (96/85) cos(nπ/2) + (56/85) sin((n − 1)π/2) − (48/85) cos((n − 1)π/2)
       = (64/85) sin(nπ/2) − (152/85) cos(nπ/2)
Σ_{k=0}^{N} a_k y(n − k) = Σ_{k=0}^{M} b_k δ(n − k)   (6.5.14)

with y(−1), y(−2), etc., set equal to zero.
Clearly, for n > M, the right side of Equation (6.5.14) is zero, so that we have a homogeneous equation. The N initial conditions required to solve this equation are y(M), y(M − 1), ..., y(M − N + 1). Since N > M for a causal system, we have to determine only y(0), y(1), ..., y(M). By successively letting n take on the values 0, 1, 2, ..., M in Equation (6.5.14) and using the fact that y(k) is zero if k < 0, we get the set of M + 1 equations

Σ_{k=0}^{j} a_k y(j − k) = b_j,  j = 0, 1, 2, ..., M   (6.5.15)

For n > M, the impulse response satisfies the homogeneous equation

Σ_{k=0}^{N} a_k h(n − k) = 0,  n > M   (6.5.17)
Example 6.5.6
Consider the system

y(n) − (5/4)y(n − 1) + (1/2)y(n − 2) − (1/16)y(n − 3) = x(n) + (1/3)x(n − 1)

so that N = 3 and M = 1. It follows that the impulse response is determined as the solution to the equation

y(n) − (5/4)y(n − 1) + (1/2)y(n − 2) − (1/16)y(n − 3) = 0,  n ≥ 2

and is therefore of the form (see Example 6.5.3)

h(n) = A₁(1/2)ⁿ + A₂n(1/2)ⁿ + A₃(1/4)ⁿ,  n ≥ 2

The initial conditions needed to determine the constants A₁, A₂, and A₃ are y(−1), y(0), and y(1). By assumption, y(−1) = 0. We can determine y(0) and y(1) by using Equation (6.5.16), so that y(0) = 1 and y(1) = 19/12. Use of these initial conditions gives the impulse response.
Example 6.5.7
Consider the following special case of Equation (6.5.1), in which all the coefficients on the left-hand side are zero except for a₀, which is assumed to be unity:

y(n) = Σ_{k=0}^{M} b_k x(n − k)   (6.5.18)

We let x(n) = δ(n) and solve for y(n) iteratively to get

y(0) = b₀
y(1) = b₁
⋮
y(M) = b_M

Clearly, y(n) = 0 for n > M, so that

h(n) = {b₀, b₁, b₂, ..., b_M}   (6.5.19)

This result can be confirmed by comparing Equation (6.5.18) with Equation (6.3.3), which yields h(k) = b_k. The impulse response becomes identically zero after M values, so that the system is a finite-impulse-response system as defined in Section 6.3.
6.6 SIMULATION DIAGRAMS FOR DISCRETE-TIME SYSTEMS
We can obtain simulation diagrams for discrete-time systems by developing such diagrams in a manner similar to that for continuous-time systems. The simulation diagram in this case is obtained by using summers, coefficient multipliers, and unit delays. The
Example 6.6.1
We obtain a simulation diagram for the system described by the difference equation (6.6.1). If we now solve for y(n) and group like terms together, we can write the equation in a form suitable for realization. We now delay this signal and add 0.5x(n) + 0.25y(n) to it. If we now pass v₁(n) through a unit delay and add x(n), we get y(n).
By following the approach given in the last example, we can construct the simulation diagram shown in Figure 6.6.2.
To derive an alternative simulation diagram for the system of Equation (6.6.2), we rewrite the equation in terms of a new variable v(n) as
308 Discrele-Time Systems Chapter 6
v(n) + Σ_{j=1}^{N} a_j v(n − j) = x(n)   (6.6.3a)

y(n) = Σ_{m=0}^{M} b_m v(n − m)   (6.6.3b)

Note that the left side of Equation (6.6.3a) is of the same form as the left side of Equation (6.6.2), and the right side of Equation (6.6.3b) is of the form of the right side of Equation (6.6.2).
To verify rhal these two equations are equivalent to Equation (6.6.2). rvc substitute
Equation (6.6.3b) into the left side of Equation (6.6.2) to ohtain
,i),u^I,, -
.,- n]
_r,"r, -
= m\ +
=lb_x(n-m\
where the last step follows from Equation (6.6.3a).
To generate the simulation diagram, we first determine the diagram for Equation (6.6.3a). If we have v(n) available, we can generate v(n − 1), v(n − 2), etc., by passing v(n) through successive unit delays. To generate v(n), we note from Equation (6.6.3a) that

v(n) = x(n) − Σ_{j=1}^{N} a_j v(n − j)    (6.6.4)

To complete the simulation diagram, we generate y(n) as in Equation (6.6.3b) by suitably combining v(n), v(n − 1), etc. The complete diagram is shown in Figure 6.6.3.
Example 6.6.2

The alternative simulation diagram for the system of Equation (6.6.1) is obtained by writing the equation as

v(n) − 0.25v(n − 1) − 0.25v(n − 2) + 0.0625v(n − 3) = x(n)

and

y(n) = v(n) + 0.5v(n − 1) − v(n − 2) − 0.125v(n − 3)

Figure 6.6.4 gives the simulation diagram using these two equations.
6.7 STATE-VARIABLE REPRESENTATION OF DISCRETE-TIME SYSTEMS

Example 6.7.1
Consider the problem of Example 6.6.1, and use the simulation diagrams that we obtained (Figures 6.6.2 and 6.6.4) to derive two state descriptions. For convenience, the two diagrams are repeated in Figures 6.7.1(a) and 6.7.1(b). For our first description, we use the outputs of the delays in Figure 6.7.1(a) as states to get
y(n) = v1(n) + x(n)    (6.7.3a)

so that

A = [  0.25    1   0
       0.25    0   1
      −0.0625  0   0 ],   b = [  0.25
                                −0.75
                                 0.1875 ],   c = [1  0  0],   d = 1    (6.7.5)
As in continuous time, we refer to this form as the first canonical form. For our second representation, we have, from Figure 6.7.1(b),

v̂1(n + 1) = v̂2(n)    (6.7.6a)

v̂2(n + 1) = v̂3(n)    (6.7.6b)

v̂3(n + 1) = −0.0625v̂1(n) + 0.25v̂2(n) + 0.25v̂3(n) + x(n)    (6.7.6c)

so that

v̂(n + 1) = [  0       1     0
              0       0     1
             −0.0625  0.25  0.25 ] v̂(n) + [ 0
                                             0
                                             1 ] x(n)    (6.7.7)

y(n) = [−0.1875  −0.75  0.75] v̂(n) + x(n)

so that

Â = [  0       1     0
       0       0     1
      −0.0625  0.25  0.25 ],   b̂ = [ 0
                                      0
                                      1 ],   ĉ = [−0.1875  −0.75  0.75],   d̂ = 1    (6.7.8)
By generalizing the results of the last example to the system of Equation (6.6.2), we can show that the first form of the state equations yields

A = [ −a_1      1  0  ⋯  0
      −a_2      0  1  ⋯  0
       ⋮
      −a_{N−1}  0  0  ⋯  1
      −a_N      0  0  ⋯  0 ],   b = [ b_1 − a_1 b_0
                                      b_2 − a_2 b_0
                                       ⋮
                                      b_N − a_N b_0 ],   c = [1  0  ⋯  0],   d = b_0    (6.7.9)
These two forms can be directly obtained by inspection of the difference equation. Let

v(n + 1) = Av(n) + bx(n)

y(n) = cv(n) + dx(n)

and

v̂(n + 1) = Âv̂(n) + b̂x(n)    (6.7.12)

ŷ(n) = ĉv̂(n) + d̂x(n)

be two alternative state-space descriptions of a system. Then there exists a nonsingular matrix P of dimension N × N such that

v(n) = Pv̂(n)    (6.7.13)
The first term in the solution corresponds to the initial-condition response, and the second term, which is the convolution sum of A^{n−1} and bx(n), corresponds to the forced response of the system. The quantity Aⁿ, which defines how the state changes as time progresses, represents the state-transition matrix Φ(n) for the discrete-time system. In terms of Φ(n), Equation (6.7.16) can be written as

v(n) = Φ(n)v(0) + Σ_{j=0}^{n−1} Φ(n − j − 1)bx(j)    (6.7.17)

Clearly, the first step in obtaining the solution of the state equations is the determination of Aⁿ. We can use the Cayley-Hamilton theorem for this purpose.
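The closed-form solution in Equation (6.7.17) can be checked numerically against stepping the recursion v(n + 1) = Av(n) + bx(n) one sample at a time. The matrix, input, and initial state below are arbitrary illustrative values, not the book's example.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-0.125, 0.75]])   # illustrative, stable 2x2 matrix
b = np.array([0.0, 1.0])
v0 = np.array([1.0, -1.0])
x = lambda j: 1.0                            # unit-step input

def closed_form(n):
    # v(n) = Phi(n) v(0) + sum_{j=0}^{n-1} Phi(n-j-1) b x(j), Phi(n) = A^n
    phi = np.linalg.matrix_power
    forced = sum(phi(A, n - j - 1) @ b * x(j) for j in range(n))
    return phi(A, n) @ v0 + forced

def recursion(n):
    v = v0.copy()
    for j in range(n):
        v = A @ v + b * x(j)                 # v(n+1) = A v(n) + b x(n)
    return v

assert np.allclose(closed_form(5), recursion(5))
```

The agreement holds for any n, since the closed form is just the recursion unrolled.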
Example 6.7.2

Consider the system

v1(n + 1) = v2(n)    (6.7.18)

v2(n + 1) = (1/8)v1(n) − (1/4)v2(n) + x(n)

y(n) = v1(n)

By using the Cayley-Hamilton theorem as in Chapter 2, we can write

Aⁿ = α0(n)I + α1(n)A    (6.7.19)

where, since the characteristic values of A are 1/4 and −1/2,

α0(n) + (1/4)α1(n) = (1/4)ⁿ

α0(n) − (1/2)α1(n) = (−1/2)ⁿ

so that

α0(n) = (2/3)(1/4)ⁿ + (1/3)(−1/2)ⁿ

α1(n) = (4/3)(1/4)ⁿ − (4/3)(−1/2)ⁿ
Substituting into Equation (6.7.19) gives

Aⁿ = [ (2/3)(1/4)ⁿ + (1/3)(−1/2)ⁿ     (4/3)(1/4)ⁿ − (4/3)(−1/2)ⁿ
       (1/6)(1/4)ⁿ − (1/6)(−1/2)ⁿ     (1/3)(1/4)ⁿ + (2/3)(−1/2)ⁿ ]    (6.7.21)
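The Cayley-Hamilton computation above can be verified numerically: the scalars α0(n) and α1(n) built from the eigenvalues 1/4 and −1/2 must reproduce the exact matrix power. The matrix below assumes the system reconstruction used in this example (state matrix with eigenvalues 1/4 and −1/2).

```python
import numpy as np

A = np.array([[0.0, 1.0], [1/8, -1/4]])   # eigenvalues 1/4 and -1/2

def alpha0(n):
    return (2/3) * 0.25**n + (1/3) * (-0.5)**n

def alpha1(n):
    return (4/3) * (0.25**n - (-0.5)**n)

def A_power(n):
    # Cayley-Hamilton for a 2x2 matrix: A^n = alpha0(n) I + alpha1(n) A
    return alpha0(n) * np.eye(2) + alpha1(n) * A

assert np.allclose(A_power(6), np.linalg.matrix_power(A, 6))
```

The same two-term expansion works for any function of a 2 × 2 matrix, because the Cayley-Hamilton theorem lets every higher power of A be reduced to a linear combination of I and A.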
Example 6.7.3

Let us determine the unit-step response of the system of Example 6.7.2 for the case when v(0) = [1  −1]ᵀ. Substituting into Equation (6.7.16) gives the state vector v(n). The output is given by

y(n) = v1(n) = 8/9 − (22/9)(1/4)ⁿ + (23/9)(−1/2)ⁿ,   n ≥ 0    (6.7.22)
We conclude this section with a brief summary of the properties of the state-transition matrix. These properties, which are easily verified, are somewhat similar to the corresponding ones in continuous time:

1. Φ(n + 1) = AΦ(n)    (6.7.23a)

2. Φ(0) = I    (6.7.23b)
We can find the impulse response of the system described by Equation (6.7.2) by setting v(0) = 0 and x(n) = δ(n) in the solution to the state equation, Equation (6.7.16), to get

v(n) = A^{n−1}b    (6.7.24)

The impulse response is then obtained from Equation (6.7.2b) as

h(n) = cA^{n−1}b + dδ(n)    (6.7.25)
Example 6.7.4

The impulse response of the system of Example 6.7.2 easily follows from our previous results as

h(n) = cA^{n−1}b = (4/3)[(1/4)^{n−1} − (−1/2)^{n−1}],   n ≥ 1

with h(0) = 0.
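Equation (6.7.25) can be sketched in code by comparing the closed-form impulse response cA^{n−1}b + dδ(n) with a direct simulation driven by x(n) = δ(n) from zero initial state. The (A, b, c, d) values below are arbitrary illustrative choices, not the book's.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-0.125, 0.75]])   # illustrative stable matrix
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.5])
d = 2.0

def h_closed(n):
    # Equation (6.7.25): h(0) = d, h(n) = c A^{n-1} b for n >= 1
    return d if n == 0 else float(c @ np.linalg.matrix_power(A, n - 1) @ b)

def h_simulated(n_max):
    v, out = np.zeros(2), []
    for n in range(n_max + 1):
        xn = 1.0 if n == 0 else 0.0          # x(n) = delta(n), v(0) = 0
        out.append(float(c @ v + d * xn))    # y(n) = c v(n) + d x(n)
        v = A @ v + b * xn                   # v(n+1) = A v(n) + b x(n)
    return out

assert np.allclose(h_simulated(6), [h_closed(n) for n in range(7)])
```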
6.8 STABILITY

As with continuous-time systems, an important property associated with discrete-time systems is system stability. We can extend our definition of stability to the discrete-time case by saying that a discrete-time system is input/output stable if a bounded input produces a bounded output. That is, if

|x(n)| ≤ M < ∞    (6.8.1)

then

|y(n)| ≤ L < ∞    (6.8.2)
We now derive a condition for stability in terms of the system impulse response. Given a system with impulse response h(n), let x(n) be such that |x(n)| ≤ M. Then the output y(n) is given by the convolution sum:

|y(n)| = | Σ_{k=−∞}^{∞} h(k)x(n − k) |

≤ Σ_{k=−∞}^{∞} |h(k)| |x(n − k)|

≤ M Σ_{k=−∞}^{∞} |h(k)|

so that y(n) is bounded if h(n) is absolutely summable, that is, if

Σ_{k=−∞}^{∞} |h(k)| < ∞    (6.8.3)

That this is also a necessary condition can be seen by considering as input the bounded signal x(k) = sgn[h(n − k)], or equivalently, x(n − k) = sgn[h(k)], with corresponding output

y(n) = Σ_{k=−∞}^{∞} h(k) sgn[h(k)] = Σ_{k=−∞}^{∞} |h(k)|

Clearly, if h(n) is not absolutely summable, y(n) will be unbounded.
For causal systems, the condition for stability becomes

Σ_{k=0}^{∞} |h(k)| < ∞    (6.8.4)

We can obtain equivalent conditions in terms of the locations of the characteristic values of the system. Recall that for a causal system described by a difference equation, the solution consists of terms of the form nʳαⁿ, r = 0, 1, …, M − 1, where α denotes a characteristic value of multiplicity M. It is clear that if |α| ≥ 1, the output is not bounded for all inputs. Thus, for a system to be stable, all the characteristic values must have magnitude less than 1. That is, they must all lie inside a circle of unit radius in the complex plane.
For the state-variable representation, we saw that the solution depends on the state-transition matrix Aⁿ. The form of Aⁿ is determined by the eigenvalues or characteristic values of the matrix A. Suppose we obtain the difference equation relating the output y(n) to the input x(n) by eliminating the state variables from Equations (6.7.2a) and (6.7.2b). It can be verified that the characteristic values of this equation are exactly the same as those of the matrix A. (We leave the proof of this relation as an exercise for the reader; see Problem 6.31.) It follows, therefore, that a system described by state equations is stable if the eigenvalues of A lie inside the unit circle in the complex plane.
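The three equivalent stability tests (absolute summability of h(n), characteristic roots inside the unit circle, eigenvalues of A inside the unit circle) can be sketched on one simple system. The system y(n) = 0.5y(n − 1) + x(n) below is an illustrative choice, not an example from the text.

```python
import numpy as np

# (a) absolute summability: h(n) = 0.5^n u(n), so sum |h| = 2 < infinity
h = 0.5 ** np.arange(200)
assert abs(h.sum() - 2.0) < 1e-6

# (b) characteristic equation alpha - 0.5 = 0: root inside the unit circle
roots = np.roots([1.0, -0.5])
assert np.all(np.abs(roots) < 1)

# (c) state matrix A = [[0.5]]: eigenvalue inside the unit circle
eigs = np.linalg.eigvals(np.array([[0.5]]))
assert np.all(np.abs(eigs) < 1)
```

For a first-order system all three checks reduce to the same number, 0.5; for higher-order systems they involve the same characteristic values but are computed from different representations.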
Example 6.8.1

Determine if the following causal, time-invariant systems are stable:

(i) System with impulse response

h(n) = [(−1/2)ⁿ + 2(1/2)ⁿ]u(n)

(ii) System described by the difference equation

y(n) − (11/6)y(n − 1) − (1/2)y(n − 2) + (1/3)y(n − 3) = x(n)

For the first system,

Σ_{n=−∞}^{∞} |h(n)| ≤ Σ_{n=0}^{∞} (1/2)ⁿ + 2 Σ_{n=0}^{∞} (1/2)ⁿ = 6 < ∞

so that the system is stable.

For the second system, the characteristic equation is

α³ − (11/6)α² − (1/2)α + 1/3 = 0

and the characteristic roots are α1 = 2, α2 = −1/2, and α3 = 1/3. Since |α1| > 1, this system is unstable.

It can easily be verified that the eigenvalues of the A matrix in the last system are equal to 3/2 ± j1/2. Since both have a magnitude greater than 1, it follows that the system is unstable.
• u(n) = Σ_{k=−∞}^{n} δ(k)

• δ(n) = u(n) − u(n − 1)
• Any DT signal x(n) can be expressed in terms of shifted impulse functions as

x(n) = Σ_{k=−∞}^{∞} x(k)δ(n − k)
• The complex exponential x(n) = exp[jΩ₀n] is periodic only if Ω₀/2π is a rational number.

• The set of harmonic signals x_k(n) = exp[jkΩ₀n] consists of only N distinct waveforms.
• Time scaling of DT signals may yield a signal that is completely different from the original signal.
• Concepts such as linearity, memory, time invariance, and causality in DT systems are similar to those in continuous-time (CT) systems.
• A DT LTI system is completely characterized by its impulse response.
• The output y(n) of an LTI DT system is obtained as the convolution of the input x(n) and the system impulse response h(n):

y(n) = x(n) * h(n) = Σ_{k=−∞}^{∞} x(k)h(n − k)

• The convolution sum gives only the forced response of the system.
• The periodic convolution of two periodic sequences x1(n) and x2(n) is

x1(n) ⊛ x2(n) = Σ_{k=0}^{N−1} x1(k)x2(n − k)
• An alternative representation of a DT system is in terms of the difference equation (DE)

Σ_{k=0}^{N} a_k y(n − k) = Σ_{k=0}^{M} b_k x(n − k),   n ≥ 0
• The DE can be solved either analytically or by iterating from known initial conditions. The analytical solution consists of two parts: the homogeneous (zero-input) solution and the particular (zero-state) solution. The homogeneous solution is determined by the roots of the characteristic equation. The particular solution is of the same form as the input x(n) and its delayed versions.

• The impulse response is obtained by solving the system DE with input x(n) = δ(n) and all initial conditions zero.
• The simulation diagram for a DT system can be obtained from the DE using summers, coefficient multipliers, and delays as building blocks.
• The state equations for an LTI DT system can be obtained from the simulation diagram by assigning a state to the output of each delay. The equations are of the form

v(n + 1) = Av(n) + bx(n),   y(n) = cv(n) + dx(n)

Φ(n) = Aⁿ is the state-transition matrix and can be evaluated using the Cayley-Hamilton theorem.
• The following conditions for the BIBO stability of a DT LTI system are equivalent:
(a) Σ_{k=−∞}^{∞} |h(k)| < ∞
(b) The roots of the characteristic equation are inside the unit circle.
(c) The eigenvalues of A are inside the unit circle.
6.11 PROBLEMS
6.1. For the discrete-time signal shown in Figure P6.1, sketch each of the following:
(a) x(2 − n)
(b) x(3n − 4)
(c) x(n/3 + 1)
/ ,, I tl\
lo),r(- I /
(e) .r (a t)
(f) x_e(n)
(g) x_o(n)
(h) x(2 − n) + x(3n − 4)
6.2. Repeat Problem 6.1 if
(s) x(n)=.,"(T.;)
o) r(n) = ''.(1i') -'t(1,)
(c) r(n ) = .'" (lX.) .', (l ")
(d) r(r) *o[?,]
=
6.4. The signal x(t) = 5 cos(120πt − π/3) is sampled to yield uniformly spaced samples T seconds apart. What values of T cause the resulting discrete-time sequence to be periodic? What is the period?
6.5. Repeat Problem 6.4 if x(t) = 3 sin 180πt + 4 cos 120πt.
6.6. The following equalities are used in several places in the text. Prove their validity:

(a) Σ_{n=0}^{N−1} αⁿ = (1 − α^N)/(1 − α),  α ≠ 1;  = N,  α = 1

(b) Σ_{n=0}^{∞} αⁿ = 1/(1 − α),  |α| < 1
h(n) = δ(n) + (1/2)ⁿ u(n)
(a) x(n) = {1, −1, 1, −1},  h(n) = {1, −1, 1, −1}

(b) x(n) = {1, 2, 3, 0, −1, 1},  h(n) = {2, −1, 3, 1, −2}

(c) x(n) = {2, −1, 1, 1/2},  h(n) = {1, −1/2}
h1(n) = h2(n) = (1/2)ⁿ u(n)

h1(n) = u(n),   h2(n) = (1/2)ⁿ u(n)

(b) Find the response of the system to a unit-step input.

h1(n) = (1/2)ⁿ u(n),   h2(n) = δ(n)
6.12. Let x1(n) and x2(n) be two periodic sequences with period N. Show that
y(nT) = [x(nT) − x((n − 1)T)] / T
and

dy(t)/dt + 2y(t) = d²x(t)/dt²

Use this approximation to derive the equation you would employ to solve the differential equation

dy(t)/dt + y(t) = x(t)
(b) Repeat part (a) using the forward-difference approximation

dy(t)/dt |_{t=nT} = [y((n + 1)T) − y(nT)] / T
6.15. We can use a procedure similar to that in Problems 6.13 and 6.14 to evaluate the integral of continuous-time functions. That is, if we want to find

y(t) = ∫₀ᵗ x(τ) dτ + y(0)

we can write

dy(t)/dt = x(t)

If we use the backward-difference approximation for dy(t)/dt, we get

y(nT) = y((n − 1)T) + Tx(nT)
6.16. A better approximation to the integral in Problem 6.15 can be obtained by the trapezoidal rule

y(nT) = (T/2)[x(nT) + x((n − 1)T)] + y((n − 1)T)

Determine the integral of the function in Problem 6.15 using this rule.
6.17. (a) Solve the following difference equations by iteration:

(i) y(n) + y(n − 1) + (1/4)y(n − 2) = x(n),  n ≥ 0
    y(−1) = 0,  y(−2) = 1,  x(n) = u(n)

(ii) y(n) − (1/4)y(n − 1) + (1/8)y(n − 2) = x(n),  n ≥ 0
    y(−1) = 1,  y(−2) = 0,  x(n) = (1/2)ⁿ u(n)

(iii) y(n) + y(n − 1) + (1/4)y(n − 2) = x(n),  n ≥ 0
    y(−1) = 0,  y(−2) = 0,  x(n) = (1/2)ⁿ u(n)

(iv) y(n + 1) + (1/2)y(n − 1) = x(n) − x(n − 1),  n ≥ 0
    y(0) = 1,  x(n) = u(n)

(v) y(n) = x(n) + 2x(n − 1) + 2x(n − 2),  n ≥ 0
    x(n) = u(n)

(b) Using any mathematical software package, verify your results for n in the range 0 to 20. Obtain a plot of y(n) vs. n.
6.18. Determine the characteristic roots and the homogeneous solutions of the following difference equations:

(iv) y(n) − (3/4)y(n − 1) + (1/8)y(n − 2) = x(n),  n ≥ 0
    y(−1) = 2,  y(−2) = 0

(v) y(n) − (1/4)y(n − 1) − (1/8)y(n − 2) = x(n),  n ≥ 0
    y(−1) = 1,  y(−2) = −1
6.24. We can find the impulse response h(n) of the system of Equation (6.5.11) by first finding the impulse response h₀(n) of the system

Σ_{k=0}^{N} a_k y(n − k) = x(n)    (P6.1)

(a) Show that h(n) = Σ_{m=0}^{M} b_m h₀(n − m).
(b) Use this method to find the impulse response of the system of Example 6.5.6.
(c) Find the impulse responses of the systems of Problem 6.17 by using this method.
6.25. Find the two canonical simulation diagrams for the systems of Problem 6.17.
6.26. Find the corresponding forms of the state equations for the systems of Problem 6.17 by using the simulation diagrams that you determined in Problem 6.25.
6.27. Repeat Problems 6.25 and 6.26 for the systems of Problem 6.18.
6.28. (a) Find an appropriate set of state equations for the systems described by the following difference equations:
[i
i=li ,l o.l
L'-ll
(This is the diagonal form of the state equations.) Find the corresponding values for b̂, ĉ, d̂, and v̂(0).
(c) Find the unit-step response of the system representation that you obtained in part (b).
(d) Find the unit-step response of the original system.
(e) Verify your results using any mathematical software package.
6.31. By using the second canonical form of the state equations, show that the characteristic values of the difference-equation representation of a system are the same as the eigenvalues of the A matrix in the state-space characterization.
6.32. Determine which of the following systems are stable:
(a) h(n) = 3ⁿ,  0 < n < 100;  = 0,  otherwise
(c) h(n) = (1/2)ⁿ cos πn,  n ≥ 0;  = 2ⁿ cos πn,  n < 0
(d) y(n) = x(n) + 2x(n − 1) + x(n − 2)
(e) y(n) − 2y(n − 1) + y(n − 2) = x(n) + x(n − 1)
(f) y(n + 2) − (1/4)y(n − 1) − (1/8)y(n − 2) = x(n)
I t rl
rer
'r, + rr = l ? ] 1,",. [_ i] ,,,,, y(n) = rr olv(z)
L-o')
(h)v(,,+,)=[-l l],r,1.[f] .r,r y@)=tz rlv(a)
Chapter 7
Fourier Analysis
of Discrete-Time Systems
7.1 INTRODUCTION
In the previous chapter, we considered techniques for the time-domain analysis of discrete-time systems. Recall that, as in the case of continuous-time systems, the primary characterization of a linear, time-invariant, discrete-time system that we used was in terms of the response of the system to the unit impulse. In this and subsequent chapters, we consider frequency-domain techniques for analyzing discrete-time systems. We start our discussion of these techniques with an examination of the Fourier analysis of discrete-time signals. As we might suspect, the results that we obtain closely parallel those for continuous-time systems.
To motivate our discussion of frequency-domain techniques, let us consider the response of a linear, time-invariant, discrete-time system to a complex exponential input of the form

x(n) = zⁿ    (7.1.1)

where z is a complex number. If the impulse response of the system is h(n), the output of the system is determined by the convolution sum as

y(n) = Σ_{k=−∞}^{∞} h(k)x(n − k) = Σ_{k=−∞}^{∞} h(k)z^{n−k}    (7.1.2)

For a fixed z, the summation is just a constant, which we denote by H(z); that is,

H(z) = Σ_{k=−∞}^{∞} h(k)z^{−k}    (7.1.3)
so that

y(n) = H(z)zⁿ    (7.1.4)

As can be seen from Equation (7.1.4), the output y(n) is just the input x(n) multiplied by a scaling factor H(z).
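The eigenfunction property of Equation (7.1.4) can be checked numerically: convolving a causal impulse response with zⁿ reproduces H(z)zⁿ once the sums have converged. The impulse response h(n) = 0.5ⁿu(n) below is an illustrative choice, not an example from the text.

```python
import numpy as np

z = 0.9 * np.exp(1j * 0.3)                 # a complex number with |z| < 1

# Equation (7.1.3), truncated: H(z) = sum_k h(k) z^{-k}, h(k) = 0.5^k u(k)
H = sum(0.5**k * z**(-k) for k in range(200))

# Convolution sum at n = 50 for the input x(n) = z^n
n = 50
y = sum(0.5**k * z**(n - k) for k in range(n + 1))

assert abs(y - H * z**n) < 1e-6            # y(n) = H(z) z^n, Equation (7.1.4)
```

The truncation errors are negligible here because |0.5/z| < 1, so both sums converge geometrically.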
We can extend this result to the case where the input to the system consists of a linear combination of complex exponentials of the form of Equation (7.1.1). Specifically, let

x(n) = Σ_{k=1}^{N} a_k z_kⁿ    (7.1.5)

It then follows from the superposition property and Equation (7.1.3) that the output is

y(n) = Σ_{k=1}^{N} a_k H(z_k) z_kⁿ = Σ_{k=1}^{N} b_k z_kⁿ    (7.1.6)

That is, the output is also a linear combination of the complex exponentials in the input. The coefficient b_k associated with the function z_kⁿ in the output is just the corresponding coefficient a_k multiplied by the scaling factor H(z_k).
Example 7.1.1

Suppose we want to find the output of the system with impulse response

h(n) = (1/2)ⁿ u(n)

when the input is

x(n) = 2 cos (2πn/3)

From Equation (7.1.3),

H(z) = Σ_{n=0}^{∞} (1/2)ⁿ z^{−n} = 1 / (1 − (1/2)z^{−1}),   |z| > 1/2

where we have used Equation (6.3.7). The input can be expressed as

x(n) = exp[j(2π/3)n] + exp[−j(2π/3)n]

so that we have

z1 = exp[j(2π/3)],   z2 = exp[−j(2π/3)],   a1 = a2 = 1

and

H(z1) = 1 / (1 − (1/2)exp[−j(2π/3)]) = (2/√7) exp[−jθ]

H(z2) = 1 / (1 − (1/2)exp[j(2π/3)]) = (2/√7) exp[jθ],   θ = tan⁻¹(√3/5)

The output is therefore

y(n) = (2/√7){exp[−jθ]exp[j(2π/3)n] + exp[jθ]exp[−j(2π/3)n]} = (4/√7) cos(2πn/3 − θ)
A special case occurs when the input is of the form exp[jΩ₀n], where Ω₀ is a real, continuous variable. This corresponds to the case |z_k| = 1. For this input, the output is

y(n) = H(Ω₀) exp[jΩ₀n]    (7.1.7)

where, from Equation (7.1.3),

H(Ω₀) = Σ_{k=−∞}^{∞} h(k) exp[−jΩ₀k]
7.2 FOURIER-SERIES REPRESENTATION OF DISCRETE-TIME PERIODIC SIGNALS

Let x(n) be a periodic signal, so that x(n + N) = x(n) for some positive integer N. It follows from our discussion in the previous section that if x(n) can be expressed as the sum of several complex exponentials, the response of the system is easily determined. By analogy with our representation of periodic signals in continuous time, we can expect that we can obtain such a representation in terms of the harmonics corresponding to the fundamental frequency 2π/N. That is, we seek a representation for x(n) of the form

x(n) = Σ_k a_k x_k(n),   x_k(n) = exp[jΩ_k n]    (7.2.2)
where Ω_k = 2πk/N. It is clear that the x_k(n) are periodic, since Ω_k/2π is a rational number. Also, from our discussions in Chapter 6, there are only N distinct waveforms in this set, corresponding to k = 0, 1, 2, …, N − 1, since

x_{k+N}(n) = x_k(n),   for all k    (7.2.3)

Therefore, we have to include only N terms in the summation on the right side of Equation (7.2.2). This sum can be taken over any N consecutive values of k. We indicate this by expressing the range of summation as k = ⟨N⟩. However, for the most part, we consider the range 0 ≤ k ≤ N − 1. The representation for x(n) can now be written as
x(n) = Σ_{k=0}^{N−1} a_k exp[j(2π/N)kn]    (7.2.9)

Since the summation on the right is carried out over N consecutive values of m for a fixed value of k, it is clear that the only value that r can take in the range of summation is r = 0. Thus, the only nonzero value in the sum corresponds to k = m, and the right-hand side of Equation (7.2.13) evaluates to Na_k, so that

x(n) = Σ_{k=⟨N⟩} a_k exp[j(2π/N)kn]    (7.2.15)

a_k = (1/N) Σ_{n=⟨N⟩} x(n) exp[−j(2π/N)kn]    (7.2.16)

which together form the discrete-time Fourier-series pair.
Since x_{k+N}(n) = x_k(n), it is clear that

a_{k+N} = a_k    (7.2.17)

Because the Fourier series for discrete-time periodic signals is a finite sum defined entirely by the values of the signal over one period, the series always converges. The Fourier series provides an exact alternative representation of the time signal, and issues such as convergence or the Gibbs phenomenon do not arise.
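The analysis and synthesis equations (7.2.16) and (7.2.15) can be sketched directly in code; since both are finite sums, the reconstruction is exact. The period-5 test sequence below is an arbitrary illustrative choice.

```python
import numpy as np

def dtfs_coeffs(x):
    """Equation (7.2.16): a_k = (1/N) sum_n x(n) exp(-j 2*pi*k*n/N)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) / N
                     for k in range(N)])

def dtfs_synthesize(a):
    """Equation (7.2.15): x(n) = sum_k a_k exp(j 2*pi*k*n/N)."""
    N = len(a)
    k = np.arange(N)
    return np.array([np.sum(a * np.exp(2j * np.pi * k * n / N))
                     for n in range(N)])

x = np.array([1.0, -2.0, 0.5, 3.0, 0.0])   # one period of a real signal
a = dtfs_coeffs(x)
assert np.allclose(dtfs_synthesize(a), x)  # exact reconstruction
assert np.allclose(a[1], np.conj(a[-1]))   # a_k = a*_{N-k} for real x(n)
```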
Example 7.2.1

Let x(n) = exp[jKΩ₀n] for some K, with Ω₀ = 2π/N, so that x(n) is periodic with period N. By writing x(n) as

x(n) = exp[j(2π/N)Kn],   0 ≤ n ≤ N − 1

it follows from Equation (7.2.15) that in the range 0 ≤ k ≤ N − 1, only a_K = 1, with all other a_k being zero. Since a_{k+N} = a_k, the spectrum of x(n) is a line spectrum consisting of discrete impulses of magnitude 1 repeated at intervals NΩ₀, as shown in Figure 7.2.1.
Example 7.2.2
Let x(n) be the signal
frequency {ln = 2n /126, so rhat n/9 ana correspond to l40o and l8fh respectively.
}
Since -&f!o corresponds ro (N - *)f!0, it follows thal -; ana - | can be replaced by
l(ts0, and ll2.tlr. We csn therefore write
Example 7.2.3

Consider the discrete-time periodic square wave shown in Figure 7.2.2. From Equation (7.2.16), the Fourier coefficients can be evaluated as

a_k = (1/N) Σ_{n=−M}^{M} exp[−j(2π/N)kn]

For k = 0,

a_0 = (1/N) Σ_{n=−M}^{M} 1 = (2M + 1)/N

For k ≠ 0, we can use Equation (6.3.7) to get

a_k = (1/N) · { exp[j(2π/N)kM] − exp[−j(2π/N)k(M + 1)] } / { 1 − exp[−j(2π/N)k] }

= (1/N) · { exp[j(2π/N)k(M + 1/2)] − exp[−j(2π/N)k(M + 1/2)] } / { exp[j(2π/N)(k/2)] − exp[−j(2π/N)(k/2)] }

= (1/N) · sin[(2M + 1)(πk/N)] / sin(πk/N),   k = 1, 2, …, N − 1

We can, therefore, write an expression for the coefficients a_k in terms of the sample values of the function

f(Ω) = sin[(2M + 1)(Ω/2)] / sin(Ω/2)

as

a_k = (1/N) f(2πk/N)
The function f(·) is similar to the sampling function (sin x)/x that we have encountered in the continuous-time case. Whereas the sinc function is not periodic, the function f(Ω), being the ratio of two sinusoidal signals with commensurate frequencies, is periodic with period 2π. Figure 7.2.3 shows a plot of the Fourier-series coefficients for M = 3, for values of N corresponding to 10, 20, and 30.

Figure 7.2.4 shows the partial sums x_p(n) of the Fourier-series expansion for this example for N = 11 and M = 3 and for values of p = 1, 2, 3, 4, and 5, where

x_p(n) = Σ_{k=−p}^{p} a_k exp[j(2π/N)kn]

As can be seen from the figure, the partial sum is exactly the original sequence for p = 5.
Example 7.2.4

Let x(n) be the periodic extension of the sequence {2, −1, 1, 2}. The period is N = 4, so that Ω₀ = 2π/4 = π/2 and exp[−jΩ₀] = −j. The coefficients a_k are therefore given by

a_0 = (1/4)(2 − 1 + 1 + 2) = 1
Figure 7.2.3 Fourier-series coefficients for the periodic square wave of Example 7.2.3 (N = 10, 20, 30).
a_1 = (1/4)(2 + j − 1 + 2j) = (1/4)(1 + 3j)

a_2 = (1/4)(2 + 1 + 1 − 2) = 1/2

a_3 = (1/4)(2 − j − 1 − 2j) = (1/4)(1 − 3j)
In general, if x(n) is a real periodic sequence, then

a_k = a*_{N−k}    (7.2.18)
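The coefficients computed above (for the sequence {2, −1, 1, 2}, as reconstructed here) can be checked numerically, along with the conjugate-symmetry property (7.2.18):

```python
import numpy as np

x = np.array([2.0, -1.0, 1.0, 2.0])        # one period, N = 4, Omega_0 = pi/2
n = np.arange(4)
a = np.array([np.sum(x * np.exp(-1j * np.pi * k * n / 2)) / 4
              for k in range(4)])

assert np.allclose(a, [1.0, (1 + 3j) / 4, 0.5, (1 - 3j) / 4])
assert np.allclose(a[1], np.conj(a[3]))    # a_k = a*_{N-k}, Equation (7.2.18)
```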
Figure 7.2.4 Partial sums x_p(n) of the Fourier-series expansion of the periodic square wave of Example 7.2.3.
Example 7.2.5

Consider the periodic sequence with the following Fourier-series coefficients:
.r, = + r) - lr,, - rl *
l11,
*rr * ]s(n -3)
;6(n
The values of the sequence x(n) in one period are therefore given by
{. - ;.r.j.o o.r.o.o.l.o.1}
where we have used the fact that a_{k+N} = a_k.
It follows from the definition of Equation (7.2.16) that, given two sequences x1(n) and x2(n), both of period N, with Fourier-series coefficients a_{1k} and a_{2k}, the coefficients for the sequence Ax1(n) + Bx2(n) are equal to Aa_{1k} + Ba_{2k}.

For a periodic sequence with coefficients a_k, we can find the coefficients b_k corresponding to the shifted sequence x(n − m) as

b_k = (1/N) Σ_{n=⟨N⟩} x(n − m) exp[−j(2π/N)kn]

By replacing n − m by n and noting that the summation is taken over any N successive values of n, we can write

b_k = exp[−j(2π/N)km] a_k
Let the periodic sequence x(n), with Fourier coefficients a_k, be the input to a linear system with impulse response h(n), where h(n) is not periodic. [Note that if h(n) is also periodic, the linear convolution of x(n) and h(n) is not defined.] Since, from Equation (7.1.7), the response y_k(n) to the input a_k exp[j(2π/N)kn] is

y_k(n) = a_k H(2πk/N) exp[j(2π/N)kn]

it follows that

y(n) = Σ_k y_k(n) = Σ_{k=⟨N⟩} a_k H(2πk/N) exp[j(2π/N)kn]    (7.2.22)
TABLE 7-1
Properties of Discrete-Time Fourier Series

1. Fourier coefficients: x(n) periodic with period N ↔ a_k = (1/N) Σ_{n=⟨N⟩} x(n) exp[−j(2π/N)kn]    (7.2.14)
2. Linearity: Ax1(n) + Bx2(n) ↔ Aa_{1k} + Ba_{2k}
3. Time shift: x(n − m) ↔ exp[−j(2π/N)km] a_k
4. Filtering by h(n): y(n) = h(n) * x(n) ↔ a_k H(2πk/N)    (7.2.22)
5. Periodic convolution: x1(n) ⊛ x2(n) ↔ N a_{1k} a_{2k}    (7.2.23)
Example 7.2.6

Consider the system with impulse response h(n) = (1/3)ⁿu(n). Suppose we want to find the Fourier-series representation for the output y(n) when the input x(n) is the periodic extension of the sequence {2, −1, 1, 2}. From Equation (7.2.22), it follows that we can write y(n) in a Fourier series as

y(n) = Σ_{k=0}^{3} b_k exp[j(π/2)kn]

with

b_k = a_k H(2πk/N)

From Example 7.2.4, we have the coefficients a_k, and

H(Ω) = Σ_{n=0}^{∞} (1/3)ⁿ exp[−jΩn] = 1 / (1 − (1/3)exp[−jΩ])

so that with N = 4, we have

H(2πk/4) = 1 / (1 − (1/3)exp[−j(π/2)k])

It follows that

b_0 = H(0)a_0 = 3/2

b_1 = H(π/2)a_1 = [1/(1 + j/3)] (1/4)(1 + 3j) = (9 + 12j)/20

b_2 = H(π)a_2 = (3/4)(1/2) = 3/8
7.3 THE DISCRETE-TIME FOURIER TRANSFORM

The continuous-time Fourier transform represents an aperiodic signal as a density with respect to the transform (frequency) variable ω. For discrete-time signals, we consider an analogous definition. To motivate this definition, let us sample x(t) uniformly every T seconds to obtain the samples x(nT). Recall from Equations (4.4.1) and (4.4.2) that the transform of the sampled signal can be written as

X_s(ω) = ∫_{−∞}^{∞} x_s(t) e^{−jωt} dt

= ∫_{−∞}^{∞} x(t) Σ_{n=−∞}^{∞} δ(t − nT) e^{−jωt} dt

= Σ_{n=−∞}^{∞} x(nT) e^{−jωnT}    (7.3.3)

where the last step follows from the sifting property of the δ function.

If we replace ωT in the previous equation by the discrete-time frequency variable Ω, we get the discrete-time Fourier transform, X(Ω), of the discrete-time signal x(n), obtained by sampling x(t), as

X(Ω) = Σ_{n=−∞}^{∞} x(n) exp[−jΩn]    (7.3.4)

Equation (7.3.4), in fact, defines the discrete-time Fourier transform of any discrete-time signal x(n). The transform exists if x(n) satisfies a relation of the type

Σ_{n=−∞}^{∞} |x(n)| < ∞
Next, we multiply both sides of Equation (7.3.7) by exp[jΩn] and integrate over the range [0, 2π]. The right-hand side of Equation (7.3.9) evaluates to 2πx(n). We can therefore write

x(n) = (1/2π) ∫₀^{2π} X(Ω) exp[jΩn] dΩ    (7.3.11)

Again, since the integrand in Equation (7.3.11) is periodic with period 2π, the integration can be carried out over any interval of length 2π. Thus, the discrete-time Fourier-transform relations can be written as

X(Ω) = Σ_{n=−∞}^{∞} x(n) exp[−jΩn]    (7.3.12)

x(n) = (1/2π) ∫_{2π} X(Ω) exp[jΩn] dΩ    (7.3.13)
Example 7.3.1

Consider the sequence

x(n) = αⁿu(n),   |α| < 1

For this sequence,

X(Ω) = Σ_{n=0}^{∞} αⁿ exp[−jΩn] = 1 / (1 − α exp[−jΩ])

The magnitude is given by

|X(Ω)| = 1 / √(1 + α² − 2α cos Ω)

and the phase by

Arg X(Ω) = −tan⁻¹ [ α sin Ω / (1 − α cos Ω) ]

Figure 7.3.1 shows the magnitude and phase spectra of this signal for α > 0. Note that these functions are periodic with period 2π.
Figure 7.3.1 Fourier spectra of the signal of Example 7.3.1.
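The closed form derived in Example 7.3.1 can be checked against a truncated version of the defining sum (7.3.12). The values α = 0.5 and Ω = 1.3 below are arbitrary illustrative choices.

```python
import numpy as np

alpha, omega = 0.5, 1.3

# Truncated transform sum: converges geometrically since |alpha| < 1
X_sum = sum(alpha**n * np.exp(-1j * omega * n) for n in range(200))

# Closed form from Example 7.3.1
X_closed = 1 / (1 - alpha * np.exp(-1j * omega))
assert abs(X_sum - X_closed) < 1e-9

# Magnitude formula |X| = 1 / sqrt(1 + alpha^2 - 2 alpha cos(omega))
mag = 1 / np.sqrt(1 + alpha**2 - 2 * alpha * np.cos(omega))
assert abs(abs(X_closed) - mag) < 1e-12
```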
Example 7.3.2

Let

x(n) = α^{|n|},   |α| < 1

We obtain the Fourier transform of x(n) as

X(Ω) = Σ_{n=−∞}^{∞} α^{|n|} exp[−jΩn]

= Σ_{n=−∞}^{−1} α^{−n} exp[−jΩn] + Σ_{n=0}^{∞} αⁿ exp[−jΩn]

which can be put in closed form, by using Equation (6.3.7), as

X(Ω) = 1/(1 − α exp[−jΩ]) − 1/(1 − α⁻¹ exp[−jΩ])

= (1 − α²) / (1 − 2α cos Ω + α²)

In this case, X(Ω) is real, so that the phase is identically zero. The magnitude is plotted in Figure 7.3.2.
Example 7.3.3

Consider the sequence x(n) = exp[jΩ₀n], with Ω₀ arbitrary. Thus, x(n) is not necessarily a periodic signal. Then

X(Ω) = Σ_{m=−∞}^{∞} 2πδ(Ω − Ω₀ − 2πm)    (7.3.14)

In the range [0, 2π], X(Ω) consists of a δ function of strength 2π, occurring at Ω = Ω₀. As can be expected, and as indicated by Equation (7.3.14), X(Ω) is a periodic extension, with period 2π, of this δ function. (See Figure 7.3.3.) To establish Equation (7.3.14), we use the inverse Fourier relation of Equation (7.3.13) as

x(n) = (1/2π) ∫₀^{2π} X(Ω) exp[jΩn] dΩ

= (1/2π) ∫₀^{2π} [ Σ_{m=−∞}^{∞} 2πδ(Ω − Ω₀ − 2πm) ] exp[jΩn] dΩ

= exp[jΩ₀n]

where the last step follows because the only permissible value for m in the range of integration is m = 0.
We can modify the results of this example to determine the Fourier transform of an exponential signal that is periodic. Thus, let x(n) = exp[jkΩ₀n] be such that Ω₀ = 2π/N. We can write the Fourier transform from Equation (7.3.12) as

X(Ω) = Σ_{m=−∞}^{∞} 2πδ(Ω − kΩ₀ − 2πm)

That is, the spectrum consists of an infinite set of impulses of strength 2π centered at kΩ₀, (k ± N)Ω₀, (k ± 2N)Ω₀, etc. This can be compared to the result we obtained in Example 7.2.1, where we considered the Fourier-series representation for x(n). The difference, as in continuous time, is that in the Fourier-series representation the frequency variable takes on only discrete values, whereas in the Fourier transform the frequency variable is continuous.
7.4 PROPERTIES OF THE DISCRETE-TIME FOURIER TRANSFORM

7.4.1 Periodicity

We saw that the discrete-time Fourier transform is periodic in Ω with period 2π, so that

X(Ω + 2π) = X(Ω)    (7.4.1)
7.4.2 Linearity

Let x1(n) and x2(n) be two sequences with Fourier transforms X1(Ω) and X2(Ω), respectively. Then

𝔉[a1x1(n) + a2x2(n)] = a1X1(Ω) + a2X2(Ω)    (7.4.2)

for any constants a1 and a2.

7.4.3 Time and Frequency Shifting

𝔉[x(n − n₀)] = exp[−jΩn₀]X(Ω)    (7.4.3)

and

𝔉[exp[jΩ₀n]x(n)] = X(Ω − Ω₀)    (7.4.4)
7.4.5 Differentiation in Frequency

Differentiating Equation (7.3.12) with respect to Ω gives

dX(Ω)/dΩ = Σ_{n=−∞}^{∞} (−jn)x(n) exp[−jΩn]

so that

𝔉[nx(n)] = j dX(Ω)/dΩ    (7.4.5)

Example 7.4.1

Let x(n) = nαⁿu(n), with |α| < 1. Then, by using the results of Example 7.3.1, we can write

X(Ω) = 𝔉[nαⁿu(n)] = j d/dΩ [ 1/(1 − α exp[−jΩ]) ]

= α exp[−jΩ] / (1 − α exp[−jΩ])²
7.4.6 Convolution

Let y(n) represent the convolution of two discrete-time signals x(n) and h(n); that is,

y(n) = h(n) * x(n)    (7.4.6)

Then

Y(Ω) = H(Ω)X(Ω)    (7.4.7)

This result can easily be established by using the definition of the convolution operation given in Equation (6.3.2) and the definition of the Fourier transform:

Y(Ω) = Σ_{n=−∞}^{∞} y(n) exp[−jΩn]

= Σ_{n=−∞}^{∞} [ Σ_{k=−∞}^{∞} h(k)x(n − k) ] exp[−jΩn]

= Σ_{k=−∞}^{∞} h(k) [ Σ_{n=−∞}^{∞} x(n − k) exp[−jΩn] ]

Here, the last step follows by interchanging the order of summation. Now we replace n − k by m in the inner sum to get

Y(Ω) = Σ_{k=−∞}^{∞} h(k) X(Ω) exp[−jΩk] = H(Ω)X(Ω)

As in the case of continuous-time systems, this property is extremely useful in the analysis of discrete-time linear systems. The function H(Ω) is referred to as the frequency response of the system.
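The convolution property (7.4.7) can be checked numerically at a single frequency using finite-length sequences, for which all the sums are exact. The sequences below are arbitrary illustrative values.

```python
import numpy as np

x = np.array([1.0, 2.0, -1.0, 0.5])
h = np.array([0.5, -0.25, 1.0])
y = np.convolve(h, x)                    # y(n) = h(n) * x(n)

def dtft(seq, omega):
    """Finite-sum DTFT: X(omega) = sum_n seq[n] exp(-j omega n)."""
    n = np.arange(len(seq))
    return np.sum(seq * np.exp(-1j * omega * n))

omega = 0.7
assert np.allclose(dtft(y, omega), dtft(h, omega) * dtft(x, omega))
```

Repeating the check on a grid of frequencies would verify the identity Y(Ω) = H(Ω)X(Ω) pointwise; one frequency already exercises the full derivation above.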
Example 7.4.2

A pure delay is described by the input/output relation

y(n) = x(n − n₀)

Taking the Fourier transform of both sides, using Equation (7.4.3), yields

Y(Ω) = exp[−jΩn₀]X(Ω)

The frequency response of a pure delay is therefore

H(Ω) = exp[−jΩn₀]

Since H(Ω) has unity gain for all frequencies and a linear phase, it is distortionless.
Example 7.4.3

Let

h(n) = (1/2)ⁿ u(n)

x(n) = (1/4)ⁿ u(n)

Then

H(Ω) = 1 / (1 − (1/2)exp[−jΩ])

X(Ω) = 1 / (1 − (1/4)exp[−jΩ])

so that

Y(Ω) = H(Ω)X(Ω) = 1 / { (1 − (1/2)exp[−jΩ])(1 − (1/4)exp[−jΩ]) }

= 2/(1 − (1/2)exp[−jΩ]) − 1/(1 − (1/4)exp[−jΩ])

By comparing the two terms in the previous equation with X(Ω) in Example 7.3.1, we see that y(n) can be written down as

y(n) = [2(1/2)ⁿ − (1/4)ⁿ] u(n)
Example 7.4.4

As a modification of the problem in Example 7.3.2, let

h(n) = α^{|n−n₀|},   −∞ < n < ∞

represent the impulse response of a discrete-time system. It is clear that this is a noncausal IIR system. By following the same procedure as in Example 7.3.2, it can easily be verified that the frequency response of the system is

H(Ω) = [ (1 − α²) / (1 − 2α cos Ω + α²) ] exp[−jΩn₀]

The magnitude function |H(Ω)| is the same as X(Ω) in Example 7.3.2 and is plotted in Figure 7.3.2. The phase is given by

Arg H(Ω) = −n₀Ω

Thus, H(Ω) represents a linear-phase system, with the associated delay equal to n₀. It can be shown that, in general, a system will have a linear phase if h(n) satisfies

h(n) = h(2n₀ − n),   −∞ < n < ∞

If the system is an IIR system, this condition implies that the system is noncausal. Since a continuous-time system is always IIR, we cannot have a linear phase in a continuous-time causal system. For an FIR discrete-time system, for which the impulse response is an N-point sequence, we can find a causal h(n) to satisfy the linear-phase condition by letting the delay n₀ be equal to (N − 1)/2. It can easily be verified that h(n) then satisfies

h(n) = h(N − 1 − n),   0 ≤ n ≤ N − 1
Example 7.4.5

Let

H(Ω) = 1,  |Ω| ≤ Ω_c;  = 0,  Ω_c < |Ω| ≤ π

That is, H(Ω) represents the transfer function of an ideal low-pass discrete-time filter with a cutoff of Ω_c radians. We can find the impulse response of this filter by using Equation (7.3.11):

h(n) = (1/2π) ∫_{−Ω_c}^{Ω_c} exp[jΩn] dΩ

= sin Ω_c n / πn
Example 7.4.6

We will find the output y(n) of the system with
- /rrz\
t(n) = 5'1'; - "'lT
7tn
/
lrr.n\* lrn * I\
r(n) = cos(
e-l
,inl7 ,/
From Example 7.2.2 and Equation (7.4.12), it follows that, with {),, = n /OA, n the range
0<O<2n
x(o) =2,[:s(o- t4oo) *
f utn- tsoo) n'i,t to- rosn,,) +
]srn - u2o,)].
Now
,,n,=l' i=o'llrr
otherwise
Io
,t,l =,,"("i * l)
7.4.7 Modulation

Let y(n) be the product of the two sequences x1(n) and x2(n) with transforms X1(Ω) and X2(Ω), respectively. Then

Y(Ω) = (1/2π) ∫_{2π} X1(θ)X2(Ω − θ) dθ    (7.4.8)
Let .r(n ) be a periodic sequence with period N, so that we can express .r(z) in a
Fourier-series expansion as
,(r) = (7.4.e)
eaoexpfik0on]
where
n =
rh -2n (7.4.10)
N
Then
NI
X(O) = ) 2ra*6(O -k0o), 0sO<2n (7.4.11)
Since the discrete-time Fourier transform is periodic with period 2n, it follows that
X(O) consists of a set of N impulses of strength 2ra*, k = 0, l. 2, ..., N - 1, repeated
at intervals of NOo = 2r. Thus, X(O) can be compactly written as
NI
X(O) = 2na*6(O k0o), forallO
) - (7.4.12)
I={)
This is illustrated in Figure 7.4-l for the case N = 3.
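The finite DTFS sum behind this impulse structure is easy to check numerically. A small sketch (with made-up sample values) confirms that the N coefficients a_k of Equations (7.4.9)–(7.4.10) resynthesize x(n) exactly:

```python
import numpy as np

# DTFS coefficients a_k of a periodic sequence with period N = 3
# (hypothetical sample values): a_k = (1/N) sum_n x(n) exp(-j k Omega0 n).
N = 3
x = np.array([1.0, 2.0, 0.5])            # one period of x(n)
Omega0 = 2 * np.pi / N
n = np.arange(N)
a = np.array([np.sum(x * np.exp(-1j * k * Omega0 * n)) / N for k in range(N)])

# Resynthesize x(n) = sum_k a_k exp(j k Omega0 n): the finite sum is exact,
# so no convergence issues arise.
x_rec = np.array([np.sum(a * np.exp(1j * np.arange(N) * Omega0 * m)) for m in n])
print(np.max(np.abs(x_rec - x)))         # ~0
```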
Table 7-2 summarizes the properties of the discrete-time Fourier transform, while
Table 7-3 lists the transforms of some common sequences.

TABLE 7-2 Properties of the Discrete-Time Fourier Transform

2. Time shift:       x(n − n₀)          exp[−jΩn₀] X(Ω)                          (7.4.3)
3. Frequency shift:  x(n) exp[jΩ₀n]     X(Ω − Ω₀)                                (7.4.4)
5. Modulation:       x₁(n) x₂(n)        (1/2π) ∫_{2π} X₁(θ) X₂(Ω − θ) dθ         (7.4.8)

TABLE 7-3 Some Common Discrete-Time Fourier Transform Pairs

δ(n)                                        1
1                                           2πδ(Ω)
exp[jΩ₀n], Ω₀ arbitrary                     2πδ(Ω − Ω₀)
Σ_{k=0}^{N−1} a_k exp[jkΩ₀n], NΩ₀ = 2π      Σ_{k=0}^{N−1} 2πa_k δ(Ω − kΩ₀)
aⁿu(n), |a| < 1                             1/(1 − a exp[−jΩ])
a^{|n|}, |a| < 1                            (1 − a²)/(1 − 2a cos Ω + a²)
n aⁿ u(n), |a| < 1                          a exp[−jΩ]/(1 − a exp[−jΩ])²
rect(n/N₁)                                  sin[Ω(N₁ + 1/2)]/sin(Ω/2)
sin(Ω_c n)/(πn)                             rect(Ω/2Ω_c)
x(n) = x_a(nT) = (1/2π) ∫_{−∞}^{∞} X_a(ω) exp[jωnT] dω   (7.5.3)

However, since x(n) is a discrete-time signal, we can write it in terms of its discrete-
time Fourier transform X(Ω) as

x(n) = (1/2π) ∫_{−π}^{π} X(Ω) exp[jΩn] dΩ   (7.5.4)
Both Equations (7.5.3) and (7.5.4) represent the same sequence x(n). Hence, the trans-
forms must also be related. In order to find this relation, let us divide the range
−∞ < ω < ∞ into equal intervals of length 2π/T and express the right-hand side of
Equation (7.5.3) as a sum of integrals over these intervals:

x(n) = (1/2π) Σ_{r=−∞}^{∞} ∫_{−π/T}^{π/T} X_a(ω + 2πr/T) exp[j(ω + 2πr/T)nT] dω   (7.5.6)

Since exp[j2πrn] = 1, this becomes

x(n) = (1/2π) ∫_{−π/T}^{π/T} [Σ_{r=−∞}^{∞} X_a(ω + 2πr/T)] exp[jωnT] dω   (7.5.7)

With the substitution Ω = ωT, we get

x(n) = (1/2π) ∫_{−π}^{π} (1/T) [Σ_{r=−∞}^{∞} X_a(Ω/T + 2πr/T)] exp[jΩn] dΩ   (7.5.8)

Comparing Equation (7.5.8) with Equation (7.5.4), we conclude that

X(Ω)|_{Ω=ωT} = (1/T) Σ_{r=−∞}^{∞} X_a(ω + 2πr/T)   (7.5.9)

where

Ω = ωT   (7.5.10)

With this change of variable, the left-hand side of Equation (7.5.9) can be identified as
the continuous-time Fourier transform of the sampled signal and is therefore equal to
X_s(ω), the Fourier transform of the signal x_s(t). That is,

X_s(ω) = X(Ω)|_{Ω=ωT}   (7.5.11)

Also, since the sampling interval is T, the sampling frequency ω_s is equal to 2π/T rad/s.
We can therefore write Equation (7.5.9) as

X_s(ω) = (1/T) Σ_{r=−∞}^{∞} X_a(ω + rω_s)   (7.5.12)
This is the result that we obtained in Chapter 4 when we were discussing the Fourier
transform of sampled signals. It is clear from Equation (7.5.12) that X_s(ω) is the peri-
odic extension, with period ω_s, of the continuous-time Fourier transform X_a(ω) of the
analog signal x_a(t), amplitude scaled by a factor 1/T. Suppose that x_a(t) is a low-pass
signal such that its spectrum is zero for ω ≥ ω₀. Figure 7.5.1 shows the spectra of a typ-
ical band-limited analog signal and the corresponding sampled signal.
Figure 7.5.1 Spectra of sampled signals. (a) Analog spectrum. (b) Spectrum of x_s(t). (c) Spectrum of x(n).
As discussed in Chapter 4, and as can be seen from the figure, there is no overlap of the spectral com-
ponents in X_s(ω) if ω_s − ω₀ > ω₀. We can then recover x_a(t) from the sampled signal
x_s(t) by passing x_s(t) through an ideal low-pass filter with a cutoff at ω₀ rad/s and a gain
of T. Thus, there is no aliasing distortion if the sampling frequency is such that

ω_s − ω₀ > ω₀

or

ω_s > 2ω₀   (7.5.13)

This is a restatement of the Nyquist sampling theorem that we encountered in Chap-
ter 4 and specifies the minimum sampling frequency that must be used to recover a
continuous-time signal from its samples. Clearly, if x_a(t) is not band limited, there is
always an overlap (aliasing).
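The overlap can be seen directly in the sample values. In the sketch below (frequencies chosen for illustration), a 100 Hz cosine sampled at only 150 samples/s yields exactly the same sequence as a 50 Hz cosine, so the two analog signals cannot be told apart after sampling:

```python
import numpy as np

# Aliasing sketch: with f_s = 150 Hz < 2 * 100 Hz, the 100 Hz component
# folds down to 150 - 100 = 50 Hz, so the samples coincide.
fs = 150.0
T = 1 / fs
n = np.arange(32)
x_100 = np.cos(2 * np.pi * 100 * n * T)
x_50 = np.cos(2 * np.pi * 50 * n * T)
print(np.max(np.abs(x_100 - x_50)))   # ~0: the sample sequences are identical
```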
Equation (7.5.10) describes the mapping between the analog frequency ω and the
digital frequency Ω. It follows from this equation that, whereas the units of ω are rad/s,
those for Ω are just rad.

From Equation (7.5.11) and the definition of X(Ω), it follows that the Fourier trans-
form of the signal x_s(t) is

X_s(ω) = Σ_{n=−∞}^{∞} x_a(nT) exp[−jωnT]   (7.5.14)

We can use Equation (7.5.14) to justify the impulse-modulation model for sampled
signals that we employed in Chapter 4. From the sifting property of the δ function, we
can write Equation (7.5.14) as

X_s(ω) = ∫_{−∞}^{∞} x_a(t) [Σ_{n=−∞}^{∞} δ(t − nT)] exp[−jωt] dt
from which it follows that

x_s(t) = x_a(t) Σ_{n=−∞}^{∞} δ(t − nT)   (7.5.15)

That is, the sampled signal x_s(t) can be considered to be the product of the analog sig-
nal x_a(t) and the impulse train Σ_n δ(t − nT).

To summarize our discussion so far, when an analog signal x_a(t) is sampled, the
sampled signal may be considered to be either a discrete-time signal x(n) or a contin-
uous-time signal x_s(t), as given by Equations (7.5.1) and (7.5.15), respectively. When
the sampled signal is considered to be the discrete-time signal x(n), we can find its dis-
crete-time Fourier transform

X(Ω) = Σ_{n=−∞}^{∞} x(n) e^{−jΩn}   (7.5.16)

If we consider the sampled signal to be the continuous-time signal x_s(t), we can find
its continuous-time Fourier transform by using either Equation (7.5.12) or (7.5.14).
However, Equation (7.5.12), being in the form of an infinite sum, is not useful in deter-
mining X_s(ω) in closed form. Still, it enables us to derive the Nyquist sampling theo-
rem, which specifies the minimum sampling frequency ω_s that must be used so that
there is no aliasing distortion. From Equation (7.5.11), it follows that, to obtain X(Ω)
from X_s(ω), we must scale the frequency axis. Therefore, with reference to Figure 7.5.1(b),
to find X(Ω), we replace ω in Figure 7.5.1(c) by ωT.

If there is no aliasing, X_s(ω) is just the periodic repetition of X_a(ω) at intervals of
ω_s, amplitude scaled by the factor 1/T.
Example 7.5.1

We consider the analog signal x_a(t) with spectrum as shown in Figure 7.5.2(a). The sig-
nal has a one-sided bandwidth f₀ = 5000 Hz or, equivalently, ω₀ = 2πf₀ = 10,000π rad/s.
The minimum sampling frequency that can be used without introducing aliasing is
[ω_s]_min = 2ω₀ = 20,000π rad/s. Thus, the maximum sampling period that can be used is
T_max = 1/(2f₀) = 100 μs.

Figure 7.5.2 Spectra for Example 7.5.1.

Suppose we sample the signal with period T = 25 μs. Then ω_s = 8π × 10⁴ rad/s. Fig-
ure 7.5.2(b) shows the spectrum X_s(ω) of the sampled signal. The spectrum is periodic with
period ω_s. To get X(Ω), we simply scale the frequency axis, replacing ω by Ω = ωT, as
shown in Figure 7.5.2(c). The resulting spectrum is, as expected, periodic with period 2π.
Consider the ideal low-pass reconstruction filter

H(ω) = T, |ω| < ω_c; 0, otherwise   (7.5.19)
With ω_c chosen to lie between ω₀ and ω_s − ω₀, the spectrum of the filter output will be
identical to X_a(ω), so that the output is equal to x_a(t). For a signal sampled at the Nyquist
rate, ω_s = 2ω₀, so that the bandwidth of the reconstruction filter must be equal to
ω_c = ω_s/2 = π/T. In this case, the reconstruction filter is said to be matched to the sam-
pler. The reconstructed output can be determined by using Equation (4.4.9) to obtain

x_a(t) = Σ_{n=−∞}^{∞} x_a(nT) sin[(π/T)(t − nT)] / [(π/T)(t − nT)]   (7.5.20)

Since the ideal low-pass filter is not causal and hence not physically realizable, in prac-
tice we cannot exactly recover x_a(t) from its sample values. Thus, any practical recon-
struction filter can only give an approximation to the analog signal. Indeed, as can be
seen from Equation (7.5.20), in order to reconstruct x_a(t) exactly, we need all sample
values x_a(nT) for n in the range (−∞, ∞). However, any realizable filter can use only
past samples to reconstruct x_a(t). Among such realizable filters are the hold circuits,
which are based on approximating x_a(t) in the range nT ≤ t < (n + 1)T in a series as

x̂_a(t) = x_a(nT) + x'_a(nT)(t − nT) + (1/2!) x''_a(nT)(t − nT)² + ...   (7.5.21)

The derivatives are approximated in terms of past sampled values; for example,

x'_a(nT) = [x_a(nT) − x_a((n − 1)T)]/T
The most widely used of these filters is the zero-order hold, which can be easily
implemented. The zero-order hold corresponds to retaining only the first term on the
right-hand side in Eq. (7.5.21). That is, the output of the hold is given by

x̂_a(t) = x_a(nT), nT ≤ t < (n + 1)T   (7.5.22)

In other words, the zero-order hold provides a staircase approximation to the analog
signal, as shown in Fig. 7.5.3.
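A minimal sketch of the staircase in Eq. (7.5.22) (illustrative signal and rates, not from the text): each sample value is simply held for the following T seconds.

```python
import numpy as np

# Samples x_a(nT) of an illustrative signal, T = 0.1 s.
T = 0.1
nT = np.arange(0, 1, T)
samples = np.sin(2 * np.pi * nT)

# Zero-order hold on a grid 10x finer than T: hold each sample for 10 steps,
# i.e., x_hat(t) = x_a(nT) for nT <= t < (n+1)T.
x_hat = np.repeat(samples, 10)
print(x_hat[:12])   # first interval holds samples[0], second holds samples[1], ...
```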
Let g_{zo}(t) denote the impulse response of the zero-order hold, obtained by apply-
ing a unit impulse δ(n) to the circuit. Since all values of the input are zero except at
n = 0, it follows that
Figure 7.5.3 Reconstruction of sampled signal using zero-order hold.
g_{zo}(t) = 1, 0 ≤ t < T; 0, otherwise   (7.5.23)

with corresponding transfer function

G_{zo}(s) = (1 − exp[−sT])/s   (7.5.24)

In order to compare the zero-order hold with the ideal reconstruction filter, let us
replace s by jω in Eq. (7.5.24) to get

G_{zo}(ω) = (1 − exp[−jωT])/(jω)
         = (T/2) (exp[jωT/2] − exp[−jωT/2])/(jωT/2) · exp[−jωT/2]
         = [sin(πω/ω_s)/(πω/ω_s)] T exp[−jπω/ω_s]   (7.5.25)

where we have used T = 2π/ω_s.
Figure 7.5.4 shows the magnitude and phase spectra of the zero-order hold as a func-
tion of ω. The figure also shows the magnitude and phase spectra of the ideal recon-
struction filter matched to ω_s. The presence of the side lobes in G_{zo}(ω) introduces
distortion in the reconstructed signal, even when there is no aliasing distortion during
sampling. Since the energy in the side lobes is much less in the case of higher order
hold circuits, the reconstructed signal obtained with these filters is much closer to the
original analog signal.
An alternative scheme that is also easy to implement obtains the reconstructed sig-
nal x̂_a(t) in the interval [(n − 1)T, nT] as the straight line joining the values
x_a((n − 1)T) and x_a(nT). This interpolator is called a linear interpolator and is
described by the input-output relation

x̂_a(t) = x_a((n − 1)T) + [x_a(nT) − x_a((n − 1)T)] (t − (n − 1)T)/T, (n − 1)T ≤ t < nT   (7.5.26)

It can easily be verified that the impulse response of the linear interpolator is

g_{li}(t) = 1 − |t|/T, |t| ≤ T; 0, otherwise   (7.5.27)

Note that this interpolator is noncausal. Nonetheless, it applies in areas such as the pro-
cessing of still image frames, in which interpolation is done in the spatial domain.
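The straight-line rule of the linear interpolator is exactly what NumPy's `np.interp` computes; a short sketch with an illustrative signal:

```python
import numpy as np

# Two-point linear interpolation: on [(n-1)T, nT] the reconstruction is the
# straight line joining x_a((n-1)T) and x_a(nT). (Signal chosen for illustration.)
T = 0.25
nT = np.arange(0, 2 + T, T)
samples = nT**2                       # illustrative x_a(t) = t^2

t = 0.375                             # midpoint between the samples at 0.25 and 0.5
x_hat = np.interp(t, nT, samples)
print(x_hat)                          # average of 0.0625 and 0.25, i.e., 0.15625
```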
Since the effective sampling rate is now T' = MT, for no aliasing in the sampled sig-
nal, we must have
π/(MT) ≥ ω₀

or, equivalently,

M ≤ π/(ω₀T)   (7.5.30)

For a fixed T, Equation (7.5.30) provides an upper limit on the maximum value that
M can take.
If there is no aliasing in the decimated signal, we can use Equation (7.5.18) to write

X_d(Ω) = (1/T') X_a(Ω/T'), −π ≤ Ω ≤ π
       = (1/MT) X_a(Ω/(MT)), −π ≤ Ω ≤ π

Since X(Ω), the discrete-time Fourier transform of the analog signal sampled at the
rate T, is equal to

X(Ω) = (1/T) X_a(Ω/T), −π ≤ Ω ≤ π

it follows that

X_d(Ω) = (1/M) X(Ω/M)   (7.5.31)

That is, X_d(Ω) is equal to X(Ω) amplitude scaled by a factor 1/M and frequency scaled
by the same factor. This is illustrated in Figure 7.5.5 for the case where T = 0.4π/ω₀
and T' = 2T.
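In the time domain, the decimation itself is just index selection; a small sketch (made-up values) of x_d(n) = x(nM) with M = 3:

```python
import numpy as np

M = 3
x = np.arange(12.0)        # stand-in for a sampled sequence x(n)
x_d = x[::M]               # keep every M-th sample: effective period T' = M*T
print(x_d)                 # [0. 3. 6. 9.]
```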
Increasing the effective sampling rate of an analog signal implies that, given a signal
x(n) obtained by sampling an analog signal x_a(t) at a rate T, we want to determine a sig-
nal x_i(n) that corresponds to sampling x_a(t) at a rate T'' = T/L, where L > 1.
Figure 7.5.5 Illustration of decimation. (a) Spectrum of analog signal. (b) Spectrum of x(n) with sampling rate T. (c) Spectrum of decimated signal corresponding to rate T' = MT. Figures correspond to T = 0.4π/ω₀ and M = 2.
As a first step in determining x_i(n) from x(n), let us replace the missing samples by
zeros to form the signal

x_i(n) = x(n/L), n = 0, ±L, ±2L, ...; 0, otherwise   (7.5.34)

Then

X_i(Ω) = Σ_{n=−∞}^{∞} x_i(n) e^{−jΩn} = Σ_{n=0,±L,±2L,...} x(n/L) e^{−jΩn}
       = Σ_{k=−∞}^{∞} x(k) e^{−jΩkL} = X(LΩ)   (7.5.35)
so that X_i(Ω) is a frequency-scaled version of X(Ω). The relation between these vari-
ous spectra is shown in Figure 7.5.6, for the case when T = 0.4π/ω₀ and L = 2.
From the figure, it is clear that if we pass x_i(n) through a low-pass digital filter with
gain L and cutoff frequency ω₀T/L, the output will correspond to the desired interpolated
signal. Interpolation by a factor L therefore consists of interspersing L − 1 zeros between
samples and then low-pass filtering the resulting signal.
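The zero-interspersing step and the identity X_i(Ω) = X(LΩ) can be checked numerically (sequence values below are made up):

```python
import numpy as np

# Interpolation step 1, Eq. (7.5.34): intersperse L - 1 zeros between samples.
# The DTFT then satisfies X_i(Omega) = X(L * Omega), Eq. (7.5.35).
L = 2
x = np.array([1.0, -0.5, 0.25, 2.0])
x_i = np.zeros(L * len(x))
x_i[::L] = x                           # x_i(n) = x(n/L) at multiples of L, else 0

def dtft(seq, w):
    n = np.arange(len(seq))
    return np.sum(seq * np.exp(-1j * w * n))

w = 0.3
print(abs(dtft(x_i, w) - dtft(x, L * w)))   # ~0: identical by construction
```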
Figure 7.5.6 Spectra for interpolation. (a) Analog spectrum. (b) Spectrum of x(n). (c) Spectrum of the zero-interspersed signal x_i(n). (d) Spectrum of the interpolated signal.
Example 7.5.2

Consider the signal of Example 7.5.1, which was band limited to ω₀ = 10,000π rad/s, so
that T_max = 100 μs. Suppose x_a(t) is sampled at T = 25 μs to obtain the signal x(n) with
spectrum X(Ω) as shown in Figure 7.5.2(b). If we want to decimate x(n) by a factor M
without introducing aliasing, it is clear from Equation (7.5.30) that M ≤ π/(ω₀T) = 4.

Suppose we decimate x(n) by M = 3, so that the effective sampling rate is T' = 75 μs.
It follows from Equation (7.5.31) and Figure 7.5.5(c) that the spectrum of the decimated
Figure 7.5.7 Spectra for Example 7.5.2. (a) Analog spectrum. (b) Spectrum of sampled signal. (c) Spectrum of decimated signal. (d) Spectrum of interpolated signal after decimation.

signal, X_d(Ω), is found by amplitude and frequency scaling X(Ω) by the factor 1/3. The
resulting spectrum is shown in Figure 7.5.7(c).
Let us now interpolate the decimated signal x_d(n) by L = 2 to form the interpolated
signal x_i(n). It follows from Equation (7.5.35) that

X_i(Ω) = X_d(2Ω)

with the spectral images now confined to |Ω| < ω₀T'/L = 3π/8. Figure 7.5.7(d) shows the
spectrum of the interpolated signal. From our earlier discussion, it follows that interpolation
is achieved by interspersing a zero between each two samples of x_d(n) and low-pass
filtering the result with a filter with gain 2 and cutoff frequency 3π/8 rad.
Note that the combination of decimation and interpolation gives us an effective sam-
pling rate of T'' = MT/L = 37.5 μs. In general, by suitably choosing M and L, we can
change the sampling rate by any rational multiple of it.

Figure 7.5.8 Functional block diagram of the A/D and D/A processes.
Δ = D/N   (7.5.36)

Figures 7.5.9(a) and (b) show two variations of the uniform quantizer, namely, the
midriser and the midtread. The difference between the two is that the output in the
midriser quantizer is not assigned a value of zero. The midtread quantizer is useful in
situations where the signal level is very close to zero for significant lengths of time,
for example, the level of the error signal in a control system.

Since there are eight and seven output levels, respectively, for the quantizers shown
in Figures 7.5.9(a) and (b), if we use a fixed-length code word, each output value can
be represented by a three-bit code word, with one code word left over for the midtread
quantizer. In what follows, we will restrict our discussion to the midriser quantizer. In
that case, for a quantizer with N levels, each output level can be represented by a code
word of length
Figure 7.5.10 Quantization error. (a) Quantizer input. (b) Error.
B = log₂ N = log₂(D/Δ)   (7.5.37)
The proper analysis of the errors introduced by quantization requires the use of
techniques that are outside the scope of this book. However, we can get a fairly good
understanding of these errors by assuming that the input to the quantizer is a signal
which increases linearly with time at a rate S units/s. Then the input assumes values in
any specific range of the quantizer, say, [iΔ, (i + 1)Δ], for a duration T₂ − T₁ = Δ/S,
as shown in Figure 7.5.10. The quantizer input over this time can be easily verified to be

x_a(t) = [Δ/(T₂ − T₁)](t − T₁) + iΔ

while the output is

x_q(t) = iΔ + Δ/2

The quantization error, e(t), is defined as the difference between the input and the out-
put. We have

e(t) = x_a(t) − x_q(t) = [Δ/(T₂ − T₁)](t − T₁) − Δ/2   (7.5.38)

It is clear that e(t) increases linearly from −Δ/2 to Δ/2 during the interval [T₁, T₂].
The mean-square value of the error signal is therefore given by (see Problem 7.27)
E = [1/(T₂ − T₁)] ∫_{T₁}^{T₂} e²(t) dt = Δ²/12 = (D² 2^{−2B})/12   (7.5.39)
where the last step follows from Equation (7.5.37). E is usually referred to as the quan-
tization noise power.

It can be shown that if the number of quantizer levels, N, is very large, Equation
(7.5.39) still provides a very good approximation to the mean-square value of the quan-
tization error for a wide variety of input signals.
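A numerical sketch of Eq. (7.5.39), using a midriser characteristic and a dense ramp input (step size and range chosen for illustration):

```python
import numpy as np

# Midriser quantizer with step Delta: input in [i*Delta, (i+1)*Delta) maps to
# the level i*Delta + Delta/2. For a ramp-like input, the mean-square
# quantization error should approach Delta**2 / 12.
Delta = 0.1
x = np.linspace(-1, 1, 200001)                  # dense ramp sweeping the range
xq = (np.floor(x / Delta) + 0.5) * Delta        # midriser output levels
E = np.mean((x - xq) ** 2)
print(E, Delta**2 / 12)                         # close agreement
```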
In conclusion, we note that a quantitative measure of the quality of a quantizer is
the signal-to-noise ratio (SNR), which is defined as the ratio of the quantizer input sig-
nal power P_x to the quantizer noise power E. From Equation (7.5.39), we can write

SNR = P_x/E = 12 · 2^{2B} P_x/D²   (7.5.40)

In decibels,

(SNR)dB = 10 log₁₀ SNR
        = 10 log₁₀(12) + 10 log₁₀ P_x − 20 log₁₀ D + 20B log₁₀(2)   (7.5.41)

That is,

(SNR)dB = 10.79 + 10 log₁₀ P_x + 6.02B − 20 log₁₀ D   (7.5.42)

As can be seen from the last equation, increasing the code-word length by one bit
results in an approximately 6-dB improvement in the quantizer SNR. The equation
also shows that the assumed dynamic range of the quantizer must be matched to the
input signal. The choice of a very large value for D reduces the SNR.
Example 7.5.3

Let the input to the quantizer be the signal

x_a(t) = A sin ω₀t

The dynamic range of this signal is 2A, and the signal power is P_x = A²/2. The use of
Equation (7.5.42) gives the SNR for this input as

(SNR)dB = 10 log₁₀(1.5) + 6.02B = 1.76 + 6.02B

Note that in this case D was exactly equal to the dynamic range of the input signal. The
SNR is independent of the amplitude A of the signal.
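The 1.76 + 6.02B rule can be verified by direct simulation (quantizer and signal parameters below are illustrative):

```python
import numpy as np

# Quantize a full-scale sinusoid A*sin(w0 t) with a B-bit midriser quantizer
# matched to the dynamic range D = 2A, and compare the measured SNR with
# the rule-of-thumb 1.76 + 6.02*B dB.
A, B = 1.0, 10
D = 2 * A
Delta = D / 2**B
t = np.linspace(0, 1, 1_000_001)
x = A * np.sin(2 * np.pi * 5 * t)
xq = np.clip((np.floor(x / Delta) + 0.5) * Delta, -A, A)
snr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))
print(snr_db, 1.76 + 6.02 * B)    # agree to within a fraction of a dB
```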
7.6 SUMMARY

• A periodic discrete-time signal x(n) with period N can be represented by the dis-
crete-time Fourier series (DTFS)

x(n) = Σ_{k=0}^{N−1} a_k exp[j(2π/N)kn]

with coefficients

a_k = (1/N) Σ_{n=0}^{N−1} x(n) exp[−j(2π/N)kn]

• The coefficients a_k are periodic with period N, so that a_k = a_{k+N}.
• The DTFS is a finite sum over only N terms. It provides an exact alternative repre-
sentation of the time signal, and issues such as convergence or the Gibbs phenom-
enon do not arise.
• If a_k are the DTFS coefficients of the signal x(n), then the coefficients of x(n − m)
are equal to a_k exp[−j(2π/N)km].
• If the periodic sequence x(n) with DTFS coefficients a_k is input into an LTI system
with impulse response h(n), the DTFS coefficients b_k of the output y(n) are given by

b_k = a_k H(2πk/N)

where

H(Ω) = Σ_{n=−∞}^{∞} h(n) exp[−jΩn]

• The discrete-time Fourier transform of a sequence x(n) is

X(Ω) = Σ_{n=−∞}^{∞} x(n) exp[−jΩn]

with inverse

x(n) = (1/2π) ∫_{2π} X(Ω) exp[jΩn] dΩ

• A sampled signal can be modeled as the continuous-time signal

x_s(t) = x_a(t) Σ_{n=−∞}^{∞} δ(t − nT)
7.8 PROBLEMS

7.1. Determine the Fourier-series representation for each of the following discrete-time sig-
nals. Plot the magnitude and phase of the Fourier coefficients a_k.

(a) x(n) = 1, 0 ≤ n ≤ 3; 0, 4 ≤ n ≤ 5
(0 .r(n) =,i
t- -o
t-1)t6(r - *y + co.z+
7.2. Given a periodic sequence x(n) with the following Fourier-series coefficients, determine
the sequence:
(a) at = r *
]'*! *'"""!, osksB
7.7. Let x(n), h(n), and y(n) be periodic sequences with the same period N, and let a_k, b_k, and
c_k be the respective Fourier-series coefficients.

(a) Let y(n) = x(n)h(n). Show that

c_k = Σ_{m=0}^{N−1} a_m b_{k−m} = Σ_{m=0}^{N−1} a_{k−m} b_m = a_k ⊛ b_k

(b) Let y(n) = x(n) ⊛ h(n). Show that

c_k = N a_k b_k
h(n) = {1, −1, 1, 1, −1, 1}
(c) x(n) = 2 cos(πn/2)
n{n) =
t- -l I -ll
lt,-{,q,-s I
(d) x(n) = 1, 0 ≤ n ≤ 7

h(n) = n + 1, 0 ≤ n ≤ 3; −n + 8, 4 ≤ n ≤ 7

Let y(n) = h(n)x(n). Use the results of Problem 7.7 to find the Fourier-series coeffi-
cients for y(n).

7.9. Repeat Problem 7.8 if y(n) = x(n) ⊛ h(n).
7.10. Show that
7.11. By successively differentiating Equation (7.3.2) with respect to Ω, show that

F[nᵖ x(n)] = jᵖ dᵖX(Ω)/dΩᵖ

7.12. Use the properties of the discrete-time Fourier transform to determine X(Ω) for the follow-
ing sequences:
(e) r(n) =
"-o[r;,]
'(O r(z) =
lsinrz + 4cos In
(g) x(n) = aⁿ[u(n) − u(n − N)]
(h) x(n) = sin(πn/3)/(πn)
(i) x(n) = [sin(πn/3)/(πn)] sin(πn/2)
(j) x(n) = [sin(πn/3)/(πn)]²
(k) x(n) = (n + 1)aⁿu(n), |a| < 1
7.13. Find the discrete-time sequence x(n) with transform X(Ω) in the range 0 ≤ Ω < 2π as follows:
(d) .r(z) =
"(l)''',r,
7.16. Repeat Problem 7.15 if h(n) = δ(n − 1) + (1/2)ⁿ u(n).

7.17. For the LTI system with impulse response

h(n) = sin(πn/2)/(πn)

find the output if the input x(n) is as follows:
(a) x(n) = 1, 0 ≤ n ≤ 3; 0, 4 ≤ n ≤ 5
(b) x(n) = Σ_{k=−∞}^{∞} [δ(n − 2k) − δ(n − 1 − 2k)]

7.18. Repeat Problem 7.17 if

h(n) = 2 sin(πn/2)/(πn)

7.19. (a) Use the time-shift property of the Fourier transform to find H(Ω) for the systems in
Problem 6.18.
(b) Find h(n) for these systems by inverse transforming H(Ω).
7.20. The frequency response of a discrete-time system is given by

H(Ω) = (1 + (1/2) exp[−jΩ]) / (1 + (1/4) exp[−jΩ] + (1/8) exp[−j2Ω])

(a) Find the impulse response of the system.
(b) Find the difference-equation representation of the system.
(c) Find the response of the system if the input is the signal (1/2)ⁿ u(n).
7.21. A discrete-time system has a frequency response

H(Ω) = (α + exp[−jΩ]) / (1 − p exp[−jΩ])

Assume that p is fixed. Find α such that H(Ω) is an all-pass function, that is, |H(Ω)| is
a constant for all Ω. (Do not assume that p is real.)
7.22. (a) Consider the causal system with frequency response

H(Ω) = (1 + a exp[−jΩ] + b exp[−j2Ω]) / (b + a exp[−jΩ] + exp[−j2Ω])

Show that this is an all-pass function if a and b are real.
(b) Let H(Ω) = N(Ω)/D(Ω), where N(Ω) and D(Ω) are polynomials in exp[−jΩ]. Can
you generalize your result in part (a) to find the relation between N(Ω) and D(Ω) so
that H(Ω) is an all-pass function?
7.23. An analog signal x_a(t) = 5 cos(200πt − 30°) is sampled at a frequency f_s Hz.
(a) Plot the Fourier spectrum of the sampled signal if f_s is (i) 150 Hz, (ii) 250 Hz.
(b) Explain whether x_a(t) can be recovered from the samples, and if so, how.

7.24. Derive Equation (7.5.27) for the impulse response of the linear interpolator of Equation
(7.5.26), and show that the corresponding frequency function is as given in Equation (7.5.28).

7.25. A low-pass signal with a bandwidth of 1.5 kHz is sampled at a rate of 10,000 samples/s.
(a) We want to decimate the sampled signal by a factor M. How large can M be without
introducing aliasing distortion in the decimated signal?
(b) Explain how you can change the sampling rate from 10,000 samples/s to 4000 samples/s.
The Z-Transform

8.1 INTRODUCTION

In this chapter, we study the Z-transform, which is the discrete-time counterpart of the
Laplace transform that we studied in Chapter 5. Just as the Laplace transform provides
us a frequency-domain technique for analyzing signals for which the Fourier transform
does not exist, the Z-transform enables us to analyze certain discrete-time signals that
do not have a discrete-time Fourier transform. As might be expected, the properties of
the Z-transform closely resemble those of the Laplace transform, so that the results are
similar to those of Chapter 5. However, as with Fourier transforms of continuous and
discrete-time signals, there are certain differences.

The relationship between the Laplace transform and the Z-transform can be estab-
lished by considering the sequence of samples obtained by sampling an analog signal
x_a(t). In our discussion of sampled signals in Chapter 7, we saw that the output of the
sampler could be considered to be either the continuous-time signal x_s(t) or the
discrete-time sequence x_a(nT). Transforming x_s(t) yields

X_s(s) = Σ_{n} x_a(nT) exp[−nTs]   (8.1.3)
where the last step follows from the sifting property of the δ-function. If we make the
substitution z = exp[Ts], then

X_s(s)|_{z = exp[Ts]} = Σ_{n} x_a(nT) z^{−n}   (8.1.4)

The summation on the right side of Equation (8.1.4) is usually written as X(z) and
defines the Z-transform of the discrete-time signal x(n).

We have, in fact, already encountered the Z-transform in Section 7.1, where we dis-
cussed the response of a linear, discrete-time, time-invariant system to exponential
inputs. There we saw that if the input to the system was x(n) = zⁿ, the output was
y(n) = H(z)zⁿ, where

H(z) = Σ_{n=−∞}^{∞} h(n) z^{−n}   (8.1.6)

Equation (8.1.6) thus defines the Z-transform of the sequence h(n). We will formalize
this definition in the next section and subsequently investigate the properties and look
at the applications of the Z-transform.
8.2 THE Z-TRANSFORM

The Z-transform of a sequence x(n) is defined as

X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}   (8.2.1)

where z is a complex variable. For convenience, we sometimes denote the Z-transform
as Z[x(n)]. For causal sequences, the Z-transform becomes

X(z) = Σ_{n=0}^{∞} x(n) z^{−n}   (8.2.2)

To distinguish between the two definitions, as with the Laplace transform, the trans-
form in Equation (8.2.1) is usually referred to as the bilateral transform, and the
transform in Equation (8.2.2) is referred to as the unilateral transform.
Example 8.2.1

Consider the unit-sample sequence δ(n). Since its only nonzero value is δ(0) = 1, we have
Z[δ(n)] = 1.

Example 8.2.2

Let x(n) be the sequence obtained by sampling the continuous-time function

x(t) = exp[−at] u(t)   (8.2.5)

every T seconds. Then

X(z) = 1/(1 − exp[−aT] z^{−1}) = z/(z − exp[−aT])   (8.2.7)
Example 8.2.3

Consider the two sequences

x(n) = (1/2)ⁿ, n ≥ 0; 0, n < 0   (8.2.8)

and

y(n) = −(1/2)ⁿ, n < 0; 0, n ≥ 0   (8.2.9)

Using the definition of the Z-transform, we can write

X(z) = Σ_{n=0}^{∞} (1/2)ⁿ z^{−n}   (8.2.10)

We can obtain a closed-form expression for X(z) by again using Equation (6.3.7), so that

X(z) = 1/(1 − (1/2)z^{−1}) = z/(z − 1/2)   (8.2.11)

Similarly, we get

Y(z) = z/(z − 1/2)
As can be seen, the expressions for the two transforms, X(z) and Y(z), are identical.
Seemingly, the two totally different sequences x(n) and y(n) have the same Z-transform.
The difference, of course, as with the Laplace transform, is in the two different regions of
convergence for X(z) and Y(z), where the region of convergence is those values of z for
which the power series in Equation (8.2.1) or (8.2.2) exists, that is, has a finite value. Since
Equation (8.2.10) is a geometric series, the sum can be put in closed form only when the
summand has a magnitude less than unity. Thus, the expression for X(z) given in Equa-
tion (8.2.11) is valid (that is, X(z) exists) only if

|z| > 1/2   (8.2.14)

Similarly, Y(z) exists only for

|z| < 1/2   (8.2.15)

Equations (8.2.14) and (8.2.15) define the regions of convergence for X(z) and Y(z),
respectively. These regions are plotted in the complex z-plane in Figure 8.2.1.
Figure 8.2.1 Regions of convergence (ROCs) of the Z-transforms for Example 8.2.3. (a) ROC for X(z). (b) ROC for Y(z).
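The convergence claim can be checked numerically: inside the ROC, the partial sums of the power series settle to the closed form (the evaluation point below is an arbitrary choice with |z| > 1/2):

```python
import numpy as np

# Partial sums of X(z) = sum_{n>=0} (1/2)^n z^{-n} at a point inside the
# ROC |z| > 1/2 converge to the closed form z/(z - 1/2).
z = 2.0 * np.exp(1j * 0.7)                 # |z| = 2 > 1/2
n = np.arange(200)
partial = np.sum((0.5 ** n) * z ** (-n.astype(float)))
closed = z / (z - 0.5)
print(abs(partial - closed))               # ~0: the series has converged
```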
8.3 CONVERGENCE OF THE Z-TRANSFORM

Consider a sequence x(n) with Z-transform

X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}   (8.3.1)

We want to determine the values of z for which X(z) exists. In order to do so, we
represent z in polar form as z = r exp[jθ] and write

X(z) = Σ_{n=−∞}^{∞} x(n) r^{−n} exp[−jθn]   (8.3.2)
Let x₊(n) and x₋(n) denote the causal and anticausal parts of x(n), respectively. That is,

x₊(n) = x(n) u(n)
x₋(n) = x(n) u(−n − 1)   (8.3.3)

Suppose that x(n) is exponentially bounded, so that

|x₋(n)| ≤ N₋ R₋ⁿ for n < 0,  |x₊(n)| ≤ N₊ R₊ⁿ for n ≥ 0   (8.3.5)

Then

|X(z)| ≤ N₋ Σ_{n=1}^{∞} (r/R₋)ⁿ + N₊ Σ_{n=0}^{∞} (R₊/r)ⁿ   (8.3.6)

Clearly, the first sum in Equation (8.3.6) is finite if r/R₋ < 1, and the second sum is
finite if R₊/r < 1. We can combine the two relations to determine the region of con-
vergence for X(z) as

R₊ < r < R₋

Figure 8.3.1 shows the region of convergence in the z-plane as the annular region
between the circles of radii R₊ and R₋. The part of the transform corresponding to the
causal sequence x₊(n) converges in the region for which r > R₊ or, equivalently,
|z| > R₊. That is, the region of convergence is outside the circle with radius R₊. Sim-
ilarly, the transform corresponding to the anticausal sequence x₋(n) converges if
r < R₋ or, equivalently, |z| < R₋, so that the region of convergence is inside the cir-
cle of radius R₋. X(z) does not exist if R₋ < R₊.
We recall from our discussion of the Fourier transform of discrete-time signals in
Chapter 7 that the frequency variable Ω takes on values in [−π, π]. For a fixed r,
it follows from a comparison of Equation (8.3.2) with Equation (7.3.2) that
X(z) can be interpreted as the discrete-time Fourier transform of the signal x(n)r^{−n}.
This corresponds to evaluating X(z) along the circle of radius r in the z-plane. If
we set r = 1, that is, for values of z along the circle with unit radius, X(z) reduces
to the discrete-time Fourier transform of x(n), assuming that the transform exists.
That is, for sequences which possess both a discrete-time Fourier transform and a Z-
transform, we have

X(Ω) = X(z)|_{z = exp[jΩ]}   (8.3.7)

The circle with radius unity is referred to as the unit circle.
In general, if x(n) is the sum of several sequences, X(z) exists only if there is a set
of values of z for which the transforms of each of the sequences forming the sum con-
verge. The region of convergence is thus the intersection of the individual regions of
convergence. If there is no common region of convergence, then X(z) does not exist.

Example 8.3.1

Let

x(n) = (1/3)ⁿ u(n)

Clearly, R₊ = 1/3 and R₋ = ∞, so that the region of convergence is

|z| > 1/3

The Z-transform of x(n) is

X(z) = 1/(1 − (1/3)z^{−1}) = z/(z − 1/3)

which has a pole at z = 1/3. The region of convergence is thus outside the circle enclosing
the pole of X(z). Now let us consider the function
x(n) = x₁(n) + x₂(n), where x₁(n) = (1/2)ⁿ u(n) and x₂(n) = (1/3)ⁿ u(n)

From the preceding example, the region of convergence for X₁(z) is

|z| > 1/2

and that for X₂(z) is

|z| > 1/3

Thus, the region of convergence for X(z) is the intersection of these two regions and is

|z| > 1/2

It can easily be verified that

X(z) = z/(z − 1/2) + z/(z − 1/3) = z(2z − 5/6)/[(z − 1/2)(z − 1/3)]

Hence, the region of convergence is outside the circle that includes both poles of X(z).
The foregoing example shows that for a causal sequence, the region of convergence is
outside of a circle which is such that all the poles of the transform X(z) are within this
circle. We may similarly conclude that for an anticausal function, the region of con-
vergence is inside a circle such that all the poles are external to the circle.

If the region of convergence is an annular region, then the poles of X(z) outside this
annulus correspond to the anticausal part of the function, while the poles inside the
annulus correspond to the causal part.
Example 8.3.2

The function

x(n) = 3ⁿ, n < 0; (1/2)ⁿ, n = 0, 2, 4, etc.; (1/3)ⁿ, n = 1, 3, 5, etc.

has the transform

X(z) = Σ_{n=−∞}^{−1} 3ⁿ z^{−n} + Σ_{n even, n≥0} (1/2)ⁿ z^{−n} + Σ_{n odd, n≥1} (1/3)ⁿ z^{−n}

Let n = −m in the first sum, n = 2m in the second, and n = 2m + 1 in the third sum. Then

X(z) = Σ_{m=1}^{∞} (z/3)^m + Σ_{m=0}^{∞} (1/4)^m z^{−2m} + (1/3)z^{−1} Σ_{m=0}^{∞} (1/9)^m z^{−2m}
     = (z/3)/(1 − z/3) + 1/(1 − (1/4)z^{−2}) + (1/3)z^{−1}/(1 − (1/9)z^{−2})

The first term converges for |z| < 3 (anticausal pole), while the remaining terms converge
for |z| > 1/2 and |z| > 1/3, respectively, so that the region of convergence is 1/2 < |z| < 3.
Example 8.3.3

Let x(n) be a finite sequence that is zero for n < n₀ and n > n₁. Then

X(z) = x(n₀)z^{−n₀} + x(n₀ + 1)z^{−(n₀+1)} + ... + x(n₁)z^{−n₁}

Since X(z) is a polynomial in z (or z^{−1}), X(z) converges for all finite values of z, except
z = 0 for n₁ > 0. The poles of X(z) are at infinity if n₀ < 0 and at the origin if n₁ > 0.

From the previous example, it can be seen that if we form a sequence y(n) by adding
a finite-length sequence to a sequence x(n), the region of convergence of Y(z) is the
same as that of X(z), except possibly for z = 0.
Example 8.3.4

Consider the right-sided sequence

y(n) = 3(1/2)ⁿ u(n + 5)

By writing y(n) as the sum of the finite sequence 3(1/2)ⁿ[u(n + 5) − u(n)] and the
sequence x(n) = 3(1/2)ⁿ u(n), it becomes clear that the ROC of Y(z) is the same as
that of X(z), namely, |z| > 1/2.

Similarly, the sequence

y(n) = −(1/2)ⁿ u(−n + 5)

can be considered to be the sum of the sequence x(n) = −(1/2)ⁿ u(−n − 1) and
the finite sequence −(1/2)ⁿ[u(n) − u(n − 6)]. It follows that Y(z) converges for
0 < |z| < 1/2.
In the rest of this chapter, we restrict ourselves to causal signals and systems, for
which we will be concerned only with the unilateral transform. In the next section, we
discuss some of the relevant properties of the unilateral Z-transform. Many of these
properties carry over to the bilateral transform.

8.4 PROPERTIES OF THE Z-TRANSFORM

For causal signals, the Z-transform is

X(z) = Σ_{n=0}^{∞} x(n) z^{−n}   (8.4.1)

We can directly use Equation (8.4.1) to derive the Z-transforms of common discrete-
time signals, as the following example shows.
Example 8.4.1
(a) For thc 6 function, we saw that the Z-transform is
(h) Let
:(n) = o"'1n,
Then
rl
x(z) = ) a'z-'=',
,o
=:'-.1.:1,lcl
l-oi l---, z-q
(8.4.3)
zlu(n)l =
11 ;, l.l
, r (8.4.4)
(c) Let
x(n) = (cos Ω₀n)u(n)    (8.4.5)
By writing x(n) as
x(n) = (1/2)[exp[jΩ₀n] + exp[-jΩ₀n]]u(n)
and using the result of (b), it follows that
X(z) = z(z - cos Ω₀)/(z² - 2z cos Ω₀ + 1)    (8.4.6)
Similarly, the Z-transform of the sequence
x(n) = (sin Ω₀n)u(n)    (8.4.7)
is
X(z) = z sin Ω₀/(z² - 2z cos Ω₀ + 1)    (8.4.8)
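The geometric-series pair in part (b) can be checked numerically: truncating the defining sum Σ aⁿz^{-n} at a point z inside the ROC should reproduce z/(z - a). The test values a = 0.5 and z = 2 below are arbitrary choices, not taken from the text.

```python
# Partial-sum check of Z[a^n u(n)] = z/(z - a) for |z| > |a|.
# a and z are arbitrary test values (assumptions, not from the book).
a, z = 0.5, 2.0
partial = sum(a**n * z**(-n) for n in range(200))  # truncated transform sum
closed = z / (z - a)
print(partial, closed)
```

Because |a/z| = 1/4 here, the truncation error after 200 terms is far below floating-point precision.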
Let x(n) be noncausal, with x₊(n) and x₋(n) denoting its causal and anticausal parts, respectively, as in Equation (8.3.3). Then
X(z) = X₊(z) + X₋(z)    (8.4.9)
Now,
X₋(z) = Σ_{n=-∞}^{0} x₋(n)z^{-n}    (8.4.10)
By making the change of variable m = -n and noting that x₋(0) = 0, we can write
X₋(z) = Σ_{m=0}^∞ x₋(-m)z^m    (8.4.11)
so that X₋(z) can be determined by finding the transform of the causal sequence x₋(-m) and then replacing z by 1/z.

Example 8.4.2
Consider the two-sided sequence
x(n) = (1/2)^{|n|} = (1/2)ⁿu(n) + (1/2)^{-n}u(-n - 1)
From Example 8.4.1, we can write
X₊(z) = z/(z - 1/2),  |z| > 1/2
and
Z[x₋(-m)] = z/(z - 1/2) - 1 = (1/2)/(z - 1/2),  |z| > 1/2
so that, replacing z by 1/z,
X₋(z) = -z/(z - 2),  |z| < 2
and
X(z) = z/(z - 1/2) - z/(z - 2),  1/2 < |z| < 2
Thus, a table of transforms of causal time functions can be used to find the Z-transforms of noncausal functions. Table 8-2 lists the transform pairs derived in Example 8.4.1, as well as a few others. The additional pairs can be derived directly by using Equation (8.4.1) or by using the properties of the Z-transform. We discuss a few of these properties next. Since they are similar to those of the other transforms we have discussed so far, we state many of the more familiar properties and do not derive them in detail.
8.4.1 Linearity
If x₁(n) and x₂(n) are two sequences with transforms X₁(z) and X₂(z), respectively, then
Z[a₁x₁(n) + a₂x₂(n)] = a₁X₁(z) + a₂X₂(z)    (8.4.13)
for any constants a₁ and a₂.

8.4.2 Time Shifting

For the advanced sequence x(n + n₀), we have
Z[x(n + n₀)] = Σ_{n=0}^∞ x(n + n₀)z^{-n} = Σ_{m=n₀}^∞ x(m)z^{-(m-n₀)}
= z^{n₀}[Σ_{m=0}^∞ x(m)z^{-m} - Σ_{m=0}^{n₀-1} x(m)z^{-m}]
= z^{n₀}[X(z) - Σ_{m=0}^{n₀-1} x(m)z^{-m}]    (8.4.14)
Similarly,
Z[x(n - n₀)] = Σ_{n=0}^∞ x(n - n₀)z^{-n} = z^{-n₀} Σ_{m=-n₀}^∞ x(m)z^{-m}
= z^{-n₀}[X(z) + Σ_{m=1}^{n₀} x(-m)z^m]    (8.4.15)
Example 8.4.3
Consider the difference equation
y(n) - (1/2)y(n - 1) = δ(n)
with the initial condition
y(-1) = 3
In order to find y(n) for n ≥ 0, we use Equation (8.4.15) and take transforms on both sides of the difference equation, getting
Y(z) - (1/2)z^{-1}[Y(z) + y(-1)z] = 1
so that
Y(z) = (5/2) z/(z - 1/2)
and
y(n) = (5/2)(1/2)ⁿ,  n ≥ 0
Example 8.4.4
Solve the difference equation
y(n + 2) - (3/2)y(n + 1) + (1/2)y(n) = x(n)
for y(n), n ≥ 0, if x(n) = u(n), y(0) = 1, and y(1) = 1.
Using Equation (8.4.14), we have
(z² - (3/2)z + 1/2)Y(z) = z/(z - 1) + z² - (1/2)z
Writing Y(z) as
Y(z) = (z³ - (3/2)z² + (3/2)z)/((z - 1)²(z - 1/2))
and expanding the fractional term in partial fractions yields
Y(z) = -3z/(z - 1) + 2z/(z - 1)² + 4z/(z - 1/2)
so that
y(n) = -3 + 2n + 4(1/2)ⁿ,  n ≥ 0
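A transform-domain solution of a difference equation can always be cross-checked by iterating the recursion directly. The sketch below uses an assumed first-order equation, y(n) - (1/2)y(n - 1) = u(n) with y(-1) = 0 (not one of the book's examples), whose closed-form solution by the Z-transform method is y(n) = 2 - (1/2)ⁿ.

```python
# Iterate y(n) = 0.5*y(n-1) + u(n) and compare with the closed form
# y(n) = 2 - (0.5)**n obtained by the transform method.
y_prev = 0.0          # initial condition y(-1) = 0 (assumed example)
err = 0.0
for n in range(25):
    y = 0.5 * y_prev + 1.0      # u(n) = 1 for n >= 0
    err = max(err, abs(y - (2.0 - 0.5**n)))
    y_prev = y
print(err)
```

Agreement of the two sequences to machine precision confirms both the algebra and the handling of the initial condition.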
8.4.3 Frequency Scaling

The Z-transform of the sequence aⁿx(n) is
Z[aⁿx(n)] = Σ_{n=0}^∞ x(n)(z/a)^{-n} = X(z/a)    (8.4.16)
Example 8.4.5
We can use the scaling property to derive the transform of the signal
y(n) = (aⁿ cos Ω₀n)u(n)
from the transform of
x(n) = (cos Ω₀n)u(n)
which, from Equation (8.4.6), is
X(z) = z(z - cos Ω₀)/(z² - 2z cos Ω₀ + 1)
Thus,
Y(z) = X(z/a) = z(z - a cos Ω₀)/(z² - 2az cos Ω₀ + a²)
Similarly, the transform of
y(n) = aⁿ(sin Ω₀n)u(n)
is, from Equation (8.4.8),
Y(z) = az sin Ω₀/(z² - 2az cos Ω₀ + a²)
8.4.4 Differentiation with Respect to z

If we differentiate both sides of Equation (8.4.1) with respect to z, we obtain
dX(z)/dz = Σ_{n=0}^∞ (-n)x(n)z^{-n-1} = -z^{-1} Σ_{n=0}^∞ nx(n)z^{-n}
from which it follows that
Z[nx(n)] = -z dX(z)/dz    (8.4.17)
Example 8.4.6
Let us find the transform of the function x(n) = n²u(n). From Equation (8.4.17),
Z[nu(n)] = -z (d/dz)Z[u(n)] = -z (d/dz)[z/(z - 1)] = z/(z - 1)²
and
Z[n²u(n)] = -z (d/dz)Z[nu(n)] = -z (d/dz)[z/(z - 1)²] = z(z + 1)/(z - 1)³
8.4.5 Initial Value
For a causal sequence x(n), we can write Equation (8.4.1) explicitly as
X(z) = x(0) + x(1)z^{-1} + x(2)z^{-2} + ··· + x(n)z^{-n} + ···    (8.4.19)
It can be seen that as z → ∞, the term z^{-n} → 0 for each n > 0, so that
lim_{z→∞} X(z) = x(0)    (8.4.20)
Example 8.4.7
We will determine the initial value x(0) for the signal with transform
X(z) = (2z³ + z² + 2z - 5)/((z - 1)(z - 1/2)(2z² - z + 1))
Use of the initial-value theorem gives
x(0) = lim_{z→∞} X(z) = 0
since the degree of the denominator exceeds that of the numerator.
The initial-value theorem is a convenient tool for checking if the Z-transform of a given signal is in error. Partial-fraction expansion of X(z) and term-by-term inversion give x(n) explicitly and confirm this value.
8.4.6 Final Value

From the time-shift theorem, we have
Z[x(n) - x(n - 1)] = (1 - z^{-1})X(z)    (8.4.21)
The left-hand side of Equation (8.4.21) can be written as
lim_{N→∞} Σ_{n=0}^{N} [x(n) - x(n - 1)]z^{-n}
Letting z → 1, the sum telescopes, so that
lim_{z→1} (1 - z^{-1})X(z) = lim_{N→∞} x(N) = x(∞)    (8.4.22)
assuming x(∞) exists.
Example 8.4.8
By applying the final value theorem, we can find the final value of the signal of Example 8.4.7 as
x(∞) = lim_{z→1} (1 - z^{-1})X(z) = 0
which again agrees with the final value of x(n) found in the previous example.
Example 8.4.9
Let us consider the signal x(n) = 2ⁿu(n), with Z-transform given by
X(z) = z/(z - 2)
Application of the final value theorem yields
x(∞) = lim_{z→1} ((z - 1)/z)·(z/(z - 2)) = 0
Clearly, this result is incorrect, since x(n) grows without bound and hence has no final value. This example shows that the final value theorem must be used with care. As noted earlier, it gives the correct result only if the final value exists.
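Both limit theorems can be checked numerically by evaluating X(z) near the relevant limit. The transform below, X(z) = z/(z - 0.5) with x(n) = (0.5)ⁿu(n), is an assumed illustration, not one of the book's signals; its final value does exist, so both theorems apply.

```python
# Initial value: X(z) as z -> infinity should approach x(0) = 1.
# Final value: (1 - 1/z)X(z) as z -> 1 should approach x(inf) = 0.
X = lambda z: z / (z - 0.5)
x0_est = X(1e9)
xinf_est = (1.0 - 1.0 / (1.0 + 1e-9)) * X(1.0 + 1e-9)
print(x0_est, xinf_est)
```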
8.4.7 Convolution

If y(n) is the convolution of two sequences x(n) and h(n), then, in a manner analogous to our derivation of the convolution property for the discrete-time Fourier transform, we can show that
Y(z) = X(z)H(z)    (8.4.23)
Recall that
Y(z) = Σ_{n=0}^∞ y(n)z^{-n}
so that y(n) is the coefficient of the z^{-n} term in the power-series expansion of Y(z). It follows that when we multiply two power series or polynomials X(z) and H(z), the coefficients of the resulting polynomial are the convolutions of the coefficients in x(n) and h(n).
Example 8.4.10
We want to use the Z-transform to find the convolution of the following two sequences, which were considered in Example 6.3.4. One of them has the transform
H(z) = 1 + 2z^{-1} - z^{-3} + z^{-4}
Multiplying this polynomial by the transform X(z) of the second sequence gives Y(z) = H(z)X(z), whose coefficients are the values of the convolution y(n).
The Z-transform properties discussed in this section are summarized in Table 8-1. Table 8-2, which is a table of Z-transform pairs of causal time functions, gives, in addition to the transforms of discrete-time sequences, the transforms of several sampled-time functions. These transforms can be obtained by fairly obvious modifications of the derivations discussed in this section and are left as exercises for the reader.
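The convolution property can be seen directly by multiplying coefficient lists: polynomial multiplication of X(z) and H(z) in powers of z^{-1} is exactly the convolution sum. The two short sequences below are arbitrary illustrations, not those of Example 8.4.10.

```python
# Polynomial multiplication of coefficient lists = convolution of sequences.
def conv(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [1, 2, 0, -1]   # X(z) = 1 + 2z^-1 - z^-3   (arbitrary sequence)
h = [1, 1, 1]       # H(z) = 1 + z^-1 + z^-2    (arbitrary sequence)
print(conv(x, h))
```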
392 The Z-Transform Chapter I
TABLE 8-1  Z-Transform Properties

1. Linearity:          a₁x₁(n) + a₂x₂(n)  ↔  a₁X₁(z) + a₂X₂(z)    (8.4.13)
2. Time shift:         x(n - n₀)  ↔  z^{-n₀}[X(z) + Σ_{n=1}^{n₀} x(-n)zⁿ]    (8.4.15)
3. Frequency scaling:  aⁿx(n)  ↔  X(z/a)    (8.4.16)
4. Differentiation:    n^k x(n)  ↔  (-z d/dz)^k X(z)    (8.4.18)
5. Convolution:        x₁(n) * x₂(n)  ↔  X₁(z)X₂(z)    (8.4.23)
8.5 THE INVERSE Z-TRANSFORM

The inverse Z-transform is given by
x(n) = (1/2πj) ∮_Γ X(z)z^{n-1} dz    (8.5.1)
where ∮_Γ represents integration along the closed contour Γ in the counterclockwise direction in the z plane. The contour must be chosen to lie in the region of convergence of X(z).
Equation (8.5.1) can be derived from Equation (8.4.1) by multiplying both sides by z^{k-1} and integrating over Γ, so that
∮_Γ X(z)z^{k-1} dz = ∮_Γ Σ_{n=0}^∞ x(n)z^{k-n-1} dz = 2πj x(k)
from which it follows that
x(k) = (1/2πj) ∮_Γ X(z)z^{k-1} dz
TABLE 8-2  Z-Transform Pairs

     x(n), n ≥ 0              X(z)                                                              Radius of convergence |z| >
1.   δ(n)                     1                                                                 0
2.   δ(n - m)                 z^{-m}                                                            0
3.   u(n)                     z/(z - 1)                                                         1
4.   n                        z/(z - 1)²                                                        1
5.   n²                       z(z + 1)/(z - 1)³                                                 1
6.   aⁿ                       z/(z - a)                                                         |a|
7.   naⁿ                      az/(z - a)²                                                       |a|
8.   (n + 1)aⁿ                z²/(z - a)²                                                       |a|
9.   n²aⁿ                     az(z + a)/(z - a)³                                                |a|
10.  cos Ω₀n                  z(z - cos Ω₀)/(z² - 2z cos Ω₀ + 1)                                1
11.  sin Ω₀n                  z sin Ω₀/(z² - 2z cos Ω₀ + 1)                                     1
12.  aⁿ cos Ω₀n               z(z - a cos Ω₀)/(z² - 2az cos Ω₀ + a²)                            |a|
13.  aⁿ sin Ω₀n               az sin Ω₀/(z² - 2az cos Ω₀ + a²)                                  |a|
14.  exp[-anT]                z/(z - exp[-aT])                                                  |exp[-aT]|
15.  nT                       Tz/(z - 1)²                                                      1
16.  nT exp[-anT]             Tz exp[-aT]/(z - exp[-aT])²                                       |exp[-aT]|
17.  cos ω₀nT                 z(z - cos ω₀T)/(z² - 2z cos ω₀T + 1)                              1
18.  sin ω₀nT                 z sin ω₀T/(z² - 2z cos ω₀T + 1)                                   1
19.  exp[-anT] cos ω₀nT       z(z - exp[-aT] cos ω₀T)/(z² - 2z exp[-aT] cos ω₀T + exp[-2aT])    |exp[-aT]|
20.  exp[-anT] sin ω₀nT       z exp[-aT] sin ω₀T/(z² - 2z exp[-aT] cos ω₀T + exp[-2aT])         |exp[-aT]|
We can evaluate the integral on the right side of Equation (8.5.1) by using the residue theorem. However, in many cases, this is not necessary, and we can obtain the inverse transform by using other methods.
We assume that X(z) is a rational function in z of the form
X(z) = (b₀z^M + b₁z^{M-1} + ··· + b_M)/(a₀z^N + a₁z^{N-1} + ··· + a_N)    (8.5.2)
If we express X(z) in a power series in z^{-1}, x(n) can easily be determined by identifying it with the coefficient of z^{-n} in the power-series expansion. The power series can be obtained by arranging the numerator and denominator of X(z) in descending powers of z and then dividing the numerator by the denominator using long division.
Example 8.5.1
Determine the inverse Z-transform of the function
X(z) = z/(z - 0.1),  |z| > 0.1
Dividing the numerator by the denominator by long division gives the power series
X(z) = 1 + (0.1)z^{-1} + (0.1)²z^{-2} + (0.1)³z^{-3} + ···
We can write, therefore,
x(n) = (0.1)ⁿu(n)
Although we were able to identify the general expression for x(n) in the last example, in most cases it is not easy to identify the general term from the first few sample values. However, in those cases where we are interested in only a few sample values of x(n), this technique can readily be applied. For example, if x(n) in the last example represented a system impulse response, then, since x(n) decreases very rapidly to zero, we can for all practical purposes evaluate just the first few values of x(n) and assume that the rest are zero. The resulting error in our analysis of the system should prove to be negligible in most cases.
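The long-division procedure can be mechanized: repeatedly divide the leading remainder coefficient by the leading denominator coefficient to emit one sample of x(n) at a time. The transform below, X(z) = z/(z - 0.1), is an assumed illustration whose samples should come out as (0.1)ⁿ.

```python
# Power-series (long-division) inversion of a rational X(z) = z/(z - 0.1).
num = [1.0, 0.0]    # z        -> coefficients of z^1, z^0
den = [1.0, -0.1]   # z - 0.1
x = []
r = num[:]          # running remainder of the division
for _ in range(6):
    q = r[0] / den[0]                           # next series coefficient
    x.append(q)
    r = [r[i] - q * den[i] for i in range(2)]   # subtract q * denominator
    r = r[1:] + [0.0]                           # shift: multiply by z^{-1}
print(x)
```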
It is clear from our definition of the Z-transform that the series expansion of the transform of a causal sequence can have only negative powers of z. A consequence of this result is that, if x(n) is causal, the degree of the denominator polynomial in the expression for X(z) in Equation (8.5.2) must be greater than or equal to the degree of the numerator polynomial. That is, N ≥ M.
Example 8.5.2
We want to find the inverse transform of
X(z) = (z³ - z² + z - 1/16)/(z³ - (5/4)z² + (1/2)z - 1/16),  |z| > 1/2
In this example, it is not easy to determine the general expression for x(n), which, as we see in the next section, is
x(n) = δ(n) + (5n - 9)(1/2)ⁿ + 9(1/4)ⁿ,  n ≥ 0
Example 8.5.3
Consider X(z) of Example 8.5.2:
X(z) = (z³ - z² + z - 1/16)/(z³ - (5/4)z² + (1/2)z - 1/16),  |z| > 1/2
In order to obtain the partial-fraction expansion, we first write X(z) as the sum of a constant and a term in which the degree of the numerator is less than that of the denominator:
X(z) = 1 + ((1/4)z² + (1/2)z)/(z³ - (5/4)z² + (1/2)z - 1/16)
We can make a partial-fraction expansion of the second term and try to identify terms from Table 8-2. However, the entries in the table have a factor z in the numerator. We therefore write X(z) as
X(z) = 1 + z·((1/4)z + 1/2)/((z - 1/2)²(z - 1/4))
If we now make a partial-fraction expansion of the fractional term, we obtain
X(z) = 1 + z[-9/(z - 1/2) + (5/2)/(z - 1/2)² + 9/(z - 1/4)]
     = 1 - 9z/(z - 1/2) + 5(1/2)z/(z - 1/2)² + 9z/(z - 1/4)
From Table 8-2, we can now write
x(n) = δ(n) + (5n - 9)(1/2)ⁿ + 9(1/4)ⁿ
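For distinct poles, the residues in a partial-fraction expansion can be computed by the cover-up rule: evaluate the remaining factors at the pole. The sketch below applies it to an assumed transform, X(z) = 1/((z - 1/2)(z - 1/4)), expanding X(z)/z so that each recovered term carries the factor z required by Table 8-2.

```python
# Residues of X(z)/z = 1/(z(z - 1/2)(z - 1/4)) at its three distinct poles,
# then reconstruction of x(n) = A*delta(n) + B*(1/2)**n + C*(1/4)**n.
poles = [0.0, 0.5, 0.25]
def residue(p):
    others = [q for q in poles if q != p]
    return 1.0 / ((p - others[0]) * (p - others[1]))
A, B, C = (residue(p) for p in poles)
x = [A * (n == 0) + B * 0.5**n + C * 0.25**n for n in range(6)]
print(A, B, C, x[:3])
```

The first two samples come out as zero, as they must, since X(z) behaves like z^{-2} for large z.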
Example 8.5.4
Solve a difference equation whose transform Y(z) is a rational function of z.
Carrying out a partial-fraction expansion of the terms on the right side along the lines of the previous example yields Y(z) as a sum of simple terms. The first two terms on the right side correspond to the homogeneous solution, and the last two terms correspond to the particular solution. From Table 8-2, the inverse transform of each term, and hence y(n), follows directly.
Example 8.5.5
Let us find the inverse transform of the function
X(z) = 1/((z - 1/2)(z - 1/4)),  |z| > 1/2
Direct partial-fraction expansion yields
X(z) = 4/(z - 1/2) - 4/(z - 1/4)
which can be written as
X(z) = z^{-1}[4z/(z - 1/2) - 4z/(z - 1/4)]
We can now use the table of transforms and the time-shift theorem, Equation (8.4.15), to write the inverse transform as
x(n) = 4[(1/2)^{n-1} - (1/4)^{n-1}]u(n - 1)
Alternatively, we can write
X(z)/z = 1/(z(z - 1/2)(z - 1/4))
and expand in partial fractions along the lines of the previous example to get the same result.
8.6 TRANSFER FUNCTIONS OF CAUSAL DISCRETE-TIME SYSTEMS

For a causal system with impulse response h(n), the output y(n) corresponding to an input x(n) is given by the convolution sum
y(n) = Σ_{k=0}^∞ h(k)x(n - k)    (8.6.1)
In terms of the respective Z-transforms, the output can be written as
Y(z) = H(z)X(z)
If we write the transfer function H(z) as the ratio of two polynomials in z, with numerator degree M and denominator degree N, then N ≥ M if the system is causal. On the other hand, if we write H(z) as the ratio of two polynomials in z^{-1}, i.e., if the system is described by the difference equation
Σ_{k=0}^{N} a_k y(n - k) = Σ_{k=0}^{M} b_k x(n - k)    (8.6.6)
we can find the transfer function of the system by taking the Z-transform on both sides of the equation. We note that in finding the impulse response of a system, and consequently, in finding the transfer function, the system must be initially relaxed. Thus, if we assume zero initial conditions, we can use the shift theorem to get
[Σ_{k=0}^{N} a_k z^{-k}] Y(z) = [Σ_{k=0}^{M} b_k z^{-k}] X(z)    (8.6.7)
so that
H(z) = Y(z)/X(z) = Σ_{k=0}^{M} b_k z^{-k} / Σ_{k=0}^{N} a_k z^{-k}    (8.6.8)
It is clear that the poles of the system transfer function are the same as the characteristic values of the corresponding difference equation. From our discussion of stability in Chapter 6, it follows that for the system to be stable, the poles must lie within the unit circle in the z plane. Consequently, for a stable, causal function, the ROC includes the unit circle.
We illustrate these results by the following examples.
Example 8.6.1
Let the step response of a linear, time-invariant, causal system be
y(n) = (9/16)u(n) + (1/8)(1/3)ⁿu(n) + (5/16)(-1/3)ⁿu(n)
To find the transfer function H(z) of this system, we note that
Y(z) = (9/16)z/(z - 1) + (1/8)z/(z - 1/3) + (5/16)z/(z + 1/3)
     = z²(z - 1/2)/((z - 1)(z - 1/3)(z + 1/3))
Since
X(z) = z/(z - 1)
it follows that
H(z) = Y(z)/X(z) = z(z - 1/2)/((z - 1/3)(z + 1/3))    (8.6.10)
     = z[-(1/4)/(z - 1/3) + (5/4)/(z + 1/3)]
Thus, the impulse response of the system is
h(n) = (5/4)(-1/3)ⁿu(n) - (1/4)(1/3)ⁿu(n)
Since both poles of the system are within the unit circle, the system is stable.
We can find the difference-equation representation of the system by rewriting Equation (8.6.10) as
Y(z)/X(z) = (1 - (1/2)z^{-1})/((1 - (1/3)z^{-1})(1 + (1/3)z^{-1})) = (1 - (1/2)z^{-1})/(1 - (1/9)z^{-2})
Cross multiplying yields
y(n) - (1/9)y(n - 2) = x(n) - (1/2)x(n - 1)
Example 8.6.2
Consider the system described by the difference equation
y(n) - 2y(n - 1) + 2y(n - 2) = x(n) + (1/2)x(n - 1)
We can find the transfer function of the system by Z-transforming both sides of this equation. With all initial conditions assumed to be zero, the use of Equation (8.4.15) gives
Y(z) - 2z^{-1}Y(z) + 2z^{-2}Y(z) = X(z) + (1/2)z^{-1}X(z)
so that
H(z) = Y(z)/X(z) = (1 + (1/2)z^{-1})/(1 - 2z^{-1} + 2z^{-2}) = z(z + 1/2)/(z² - 2z + 2)
The zeros of this system are at z = 0 and z = -(1/2), while the poles are at z = 1 ± j. Since the poles are outside the unit circle, the system is unstable. Figure 8.6.1 shows the location of the poles and zeros of H(z) in the z plane. The graph is called a pole-zero plot.
The impulse response of the system, found by writing H(z) as
H(z) = z(z - 1)/(z² - 2z + 2) + (3/2)·z/(z² - 2z + 2)
and using Table 8-2, is
h(n) = (√2)ⁿ[cos(nπ/4) + (3/2) sin(nπ/4)]u(n)
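The stability conclusion can be checked mechanically from the pole magnitudes. For the denominator z² - 2z + 2 above, the quadratic formula gives the poles 1 ± j, whose common magnitude √2 exceeds 1:

```python
import cmath

# Poles of H(z) = z(z + 1/2)/(z**2 - 2*z + 2); stable only if all |pole| < 1.
a, b, c = 1.0, -2.0, 2.0
d = cmath.sqrt(b * b - 4 * a * c)
ps = [(-b + d) / (2 * a), (-b - d) / (2 * a)]
print(ps, [abs(p) for p in ps])
```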
Example 8.6.3
Consider the system shown in Figure 8.6.2, in which
H(z) = 0.8Kz/(z² - 1.3z + 0.04)
where K is a constant gain.
The transfer function of the system can be derived by noting that the output of the summer can be written as
E(z) = X(z) - Y(z)
so that the system output is
Y(z) = E(z)H(z) = [X(z) - Y(z)]H(z)
Substituting for H(z) and simplifying yields
Y(z)/X(z) = 0.8Kz/(z² + (0.8K - 1.3)z + 0.04)
The poles of the system can be determined as the roots of the equation
z² + (0.8K - 1.3)z + 0.04 = 0
For K = 1, the roots are z₁ = 0.1 and z₂ = 0.4. Since both roots are inside the unit circle, the system is stable. With K = 4, however, the roots are
z₁ = -0.0213 and z₂ = -1.8787
Since one of the roots is now outside the unit circle, the system is unstable.
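The effect of the gain on stability can be explored by sweeping K and computing the closed-loop pole magnitudes from the characteristic equation z² + (0.8K - 1.3)z + 0.04 = 0; the probe values K = 1 and K = 4 below bracket the stable and unstable cases.

```python
import cmath

# Closed-loop poles versus gain K for z**2 + (0.8*K - 1.3)*z + 0.04 = 0.
def poles(K):
    b = 0.8 * K - 1.3
    d = cmath.sqrt(b * b - 4 * 0.04)
    return [(-b + d) / 2, (-b - d) / 2]

stable = {K: all(abs(p) < 1 for p in poles(K)) for K in (1.0, 4.0)}
print(stable)
```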
8.7 Z-TRANSFORM ANALYSIS OF STATE-VARIABLE SYSTEMS

In Chapter 6, we introduced the state-variable representation
v(n + 1) = Av(n) + bx(n)
y(n) = cv(n) + dx(n)    (8.7.1)
As we will see, the use of Z-transforms is useful both in deriving state-variable representations from the transfer function of the system and in obtaining the solution to the state equations.
In Chapter 6, starting from the difference-equation representation, we derived two alternative state-space representations. Here, we start with the transfer-function representation and derive two more representations, namely, the parallel and cascade forms. In order to show how this can be done, let us consider a simple first-order system described by the state-variable equations
v(n + 1) = av(n) + bx(n)
y(n) = cv(n)    (8.7.2)
Taking Z-transforms gives
V(z) = (b/(z - a))X(z),  so that  Y(z) = (cb/(z - a))X(z)
Thus, the system can be represented by the block diagram of Figure 8.7.1. Note that as far as the relation between Y(z) and X(z) is concerned, the gains b and c at the input and output can be arbitrary as long as their product is equal to bc.
We use this block diagram and the corresponding equation, Equation (8.7.2), to obtain the state-variable representation for a general system by writing H(z) as a combination of such blocks and associating a state variable with the output of each block. As in continuous-time systems, if we use a partial-fraction expansion over the poles of H(z), we get the parallel form of the state equations, whereas if we represent H(z) as a cascade of such blocks, we get the cascade representation. To obtain the two forms
discussed in Chapter 6, we represent the system as a cascade of two blocks, with one block consisting of all the poles and the other block all the zeros. If the poles are in the first block and the zeros in the second block, we get the second canonical form. The first canonical form also can be derived, by putting the zeros in the first block and the poles in the second block. However, this derivation is not very straightforward, since it involves manipulating the first block to eliminate terms involving positive powers of z.
Example 8.7.1
Consider the system with transfer function
H(z) = 3z/((z + 1/4)(z - 1/2)) = 1/(z + 1/4) + 2/(z - 1/2)
with the corresponding block-diagram representation shown in Figure 8.7.2(a). By using the state variables identified in the figure, we obtain the following set of equations:
(z + 1/4)V₁(z) = X(z)
(z - 1/2)V₂(z) = X(z)
Y(z) = V₁(z) + 2V₂(z)
The corresponding equations in the time domain are
v₁(n + 1) = -(1/4)v₁(n) + x(n)
v₂(n + 1) = (1/2)v₂(n) + x(n)
y(n) = v₁(n) + 2v₂(n)
If we use the block-diagram representation of Fig. 8.7.2(b), with the states as shown, we have
(z - 1/2)V₁(z) = X₁(z)
(z + 1/4)V₂(z) = X(z)
Y(z) = V₁(z)
which, in the time domain, are equivalent to
v₁(n + 1) = (1/2)v₁(n) + x₁(n),  with x₁(n) = 3v₂(n + 1), so that
v₁(n + 1) = (1/2)v₁(n) - (3/4)v₂(n) + 3x(n)
v₂(n + 1) = -(1/4)v₂(n) + x(n)
y(n) = v₁(n)
To get the second canonical form, we use the block diagram of Figure 8.7.2(c) to get
zV₁(z) = V₂(z)
zV₂(z) - (1/4)V₂(z) - (1/8)V₁(z) = X(z)
Y(z) = 3V₂(z)
so that
v₁(n + 1) = v₂(n)
v₂(n + 1) = (1/8)v₁(n) + (1/4)v₂(n) + x(n)
y(n) = 3v₂(n)
As in the continuous-time case, in order to avoid working with complex numbers for systems with complex conjugate poles or zeros, we can combine conjugate pairs. The representation for the resulting second-order term can then be obtained in either the first or the second canonical form. As an example, for the second-order system described by
Y(z) = [(b₀ + b₁z^{-1} + b₂z^{-2})/(1 + a₁z^{-1} + a₂z^{-2})] X(z)    (8.7.3)
we can obtain the simulation diagram by writing
Y(z) = (b₀ + b₁z^{-1} + b₂z^{-2})V(z)    (8.7.4a)
where
V(z) = [1/(1 + a₁z^{-1} + a₂z^{-2})] X(z)
or equivalently,
V(z) = -a₁z^{-1}V(z) - a₂z^{-2}V(z) + X(z)    (8.7.4b)
We generate V(z) as the sum of X(z), -a₁z^{-1}V(z), and -a₂z^{-2}V(z), and form Y(z) as the sum of b₀V(z), b₁z^{-1}V(z), and b₂z^{-2}V(z), to get the simulation diagram shown in Figure 8.7.3.
Example 8.7.2
Consider the system with transfer function
H(z) = (1 + 2.5z^{-1} + z^{-2})/((1 + 0.5z^{-1} + 0.8z^{-2})(1 + 0.3z^{-1}))
By treating this as the cascade combination of the two systems
H₁(z) = 1/(1 + 0.3z^{-1})
and
H₂(z) = (1 + 2.5z^{-1} + z^{-2})/(1 + 0.5z^{-1} + 0.8z^{-2})
we can draw the simulation diagram using Figure 8.7.3, as shown in Figure 8.7.4.
Using the outputs of the delays as state variables, we get the following equations:
X̂(z) = zV₁(z) = -0.3V₁(z) + X₁(z)
X₁(z) = V(z) + 0.5V₂(z)
zV₂(z) = V(z) = -0.5V₂(z) - 0.8V₃(z) + X(z)
zV₃(z) = V₂(z)
Y(z) = X̂(z) + 2V₃(z)
Eliminating V(z) and X̂(z) and writing the equivalent time-domain equations yields
Figure 8.7.4  Simulation diagram for Example 8.7.2.
v₁(n + 1) = -0.3v₁(n) - 0.9v₃(n) + x(n)
v₂(n + 1) = -0.5v₂(n) - 0.8v₃(n) + x(n)
v₃(n + 1) = v₂(n)
y(n) = 1.1v₁(n) - 0.8v₃(n) + x(n)
Clearly, by using different combinations of first- and second-order sections, we can obtain
several different realizations of a given transfer function.
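That the different realizations describe the same system can be verified by simulation: the impulse response of the cascade of the two sections must match that of the single direct-form realization of the product. The small routine below is a generic direct-form difference-equation evaluator written for this check, applied to the transfer function of Example 8.7.2.

```python
# Impulse response of the cascade H1(z)H2(z) versus a single realization of
# the product; the combined denominator is (1+0.5z^-1+0.8z^-2)(1+0.3z^-1)
# = 1 + 0.8z^-1 + 0.95z^-2 + 0.24z^-3.
def filt(b, a, x):
    # direct-form difference equation, a[0] assumed to be 1
    y = []
    for n in range(len(x)):
        s = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
        s -= sum(a[i] * y[n - i] for i in range(1, len(a)) if n - i >= 0)
        y.append(s)
    return y

imp = [1.0] + [0.0] * 19
h1 = filt([1.0], [1.0, 0.3], imp)                          # first section
h2 = filt([1.0, 2.5, 1.0], [1.0, 0.5, 0.8], h1)            # second section
hd = filt([1.0, 2.5, 1.0], [1.0, 0.8, 0.95, 0.24], imp)    # combined form
maxdiff = max(abs(u - v) for u, v in zip(h2, hd))
print(maxdiff)
```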
H(z) = Y(z)/X(z) = c(zI - A)^{-1}b + d    (8.7.10)
Recall from Equation (6.7.16) that the time-domain solution of the state equations is
v(n) = Φ(n)v(0) + Σ_{j=0}^{n-1} Φ(n - 1 - j)bx(j)    (8.7.11)
Example 8.7.3
Consider the system
v₁(n + 1) = v₂(n)
v₂(n + 1) = (1/8)v₁(n) - (1/4)v₂(n) + x(n)
y(n) = v₁(n)
which was discussed in Examples 6.7.2, 6.7.3, and 6.7.4. We find the unit-step response of this system for the case when v(0) = [1  -1]ᵀ. Since
A = [ 0     1  ]
    [ 1/8  -1/4]
it follows that
(zI - A)^{-1} = (1/((z - 1/4)(z + 1/2))) [ z + 1/4   1 ]
                                         [ 1/8       z ]
so we can write
Φ(z) = z(zI - A)^{-1}
Inverting each entry of Φ(z) gives Φ(n), and carrying out the operations indicated in Equation (8.7.11) then gives the step response. In particular, the transfer function is
H(z) = c(zI - A)^{-1}b = 1/((z - 1/4)(z + 1/2))
so that
h(n) = (4/3)[(1/4)^{n-1} - (-1/2)^{n-1}],  n ≥ 1
Since h(0) = 0, we can write the last equation as
h(n) = (4/3)[(1/4)^{n-1} - (-1/2)^{n-1}]u(n - 1)
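The matrix solution can be checked against brute-force iteration of the state equations. The matrices below, A = [[0, 1], [1/8, -1/4]], b = [0, 1]ᵀ, c = [1, 0], are an assumed second-order example in the spirit of the discussion above; its transfer function is 1/((z - 1/4)(z + 1/2)), so the impulse response is (4/3)[(1/4)^{n-1} - (-1/2)^{n-1}] for n ≥ 1.

```python
# Iterate v(n+1) = A v(n) + b x(n), y(n) = v1(n) for an impulse input and
# compare with the closed-form h(n) (A, b, c are an assumed example).
A = [[0.0, 1.0], [1.0 / 8.0, -1.0 / 4.0]]
b = [0.0, 1.0]
v = [0.0, 0.0]
ys = []
for n in range(12):
    x = 1.0 if n == 0 else 0.0
    ys.append(v[0])                           # y(n) = v1(n)
    v = [A[0][0] * v[0] + A[0][1] * v[1] + b[0] * x,
         A[1][0] * v[0] + A[1][1] * v[1] + b[1] * x]
closed = [0.0] + [(4.0 / 3.0) * (0.25**(n - 1) - (-0.5)**(n - 1))
                  for n in range(1, 12)]
err = max(abs(u - w) for u, w in zip(ys, closed))
print(err)
```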
8.8 RELATION BETWEEN THE Z-TRANSFORM AND THE LAPLACE TRANSFORM

The relation between the Laplace transform and the Z-transform of the sequence of samples obtained by sampling an analog signal x_a(t) can easily be developed from our discussion of sampled signals in Chapter 7. There we saw that the output of the sampler could be considered to be either the continuous-time signal
x_s(t) = Σ_{n=0}^∞ x_a(nT)δ(t - nT)
or the discrete-time sequence x(n) = x_a(nT). Taking the Laplace transform of x_s(t) gives
X_s(s) = Σ_{n=0}^∞ x_a(nT) exp[-nsT]    (8.8.4)
We recognize that the right-hand side of Equation (8.8.4) is the Z-transform, X(z), of the sequence x(n). Thus, the Z-transform can be viewed as the Laplace transform of the sampled function x_s(t) with the change of variable
z = exp[sT]    (8.8.5)
Equation (8.8.5) defines a mapping of the s plane into the z plane. To determine the nature of this mapping, let s = σ + jω, so that
z = exp[σT] exp[jωT]
Since |z| = exp[σT], it is clear that if σ < 0, then |z| < 1. Thus, any point in the left half of the s plane is mapped into a point inside the unit circle in the z plane. Similarly, since, for σ > 0, we have |z| > 1, a point in the right half of the s plane is mapped into a point outside the unit circle in the z plane. For σ = 0, |z| = 1, so that the jω-axis of the s plane is mapped into the unit circle in the z plane. The origin in the s plane corresponds to the point z = 1.
Finally, let s_k denote a set of points that are spaced vertically apart from any point s₀ by multiples of the sampling frequency ω_s = 2π/T. That is,
s_k = s₀ + jkω_s,  k = 0, ±1, ±2, ...
Then
exp[s_kT] = exp[s₀T] exp[jkω_sT] = exp[s₀T]
since exp[jkω_sT] = exp[j2kπ] = 1. That is, the points s_k all map into the same point z₀ = exp[Ts₀] in the z plane. We can thus divide the s plane into horizontal strips, each of width ω_s. Each of these strips is then mapped onto the entire z plane. For convenience, we choose the strips to be symmetric about the horizontal axis. This is summarized in Figure 8.8.1, which shows the mapping of the s plane into the z plane.
We have already seen that X_s(ω) is periodic with period ω_s. Equivalently, X(Ω) is periodic with period 2π. This is easily seen to be a consequence of the result that the process of sampling essentially divides the s plane into a set of identical horizontal strips of width ω_s. The fact that the mapping from the s plane to the z plane is not unique (the same point in the z plane corresponds to several points in the s plane) is a consequence of the fact that we can associate any one of several analog signals with a given set of sample values.
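The mapping z = exp[sT] can be probed numerically: points with σ < 0 land inside the unit circle, points on the jω-axis land on it, and any two s-plane points separated by jω_s land on the same z. The sample points and T = 0.1 below are arbitrary choices.

```python
import cmath, math

# z = exp(sT): magnitude classifies left/right half-plane; adding j*omega_s
# to s leaves z unchanged (aliasing of the strips).
T = 0.1
ws = 2 * math.pi / T
mags = {s: abs(cmath.exp(s * T)) for s in (-1 + 2j, 0 + 5j, 2 - 3j)}
alias = abs(cmath.exp(((-1 + 2j) + 1j * ws) * T) - cmath.exp((-1 + 2j) * T))
print(mags, alias)
```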
Figure 8.8.1  Mapping of the s plane into the z plane.
8.9 SUMMARY
• The Z-transform of the sequence x(n) is defined as
X(z) = Σ_{n=-∞}^∞ x(n)z^{-n}
• The region of convergence (ROC) of the Z-transform consists of those values of z for which the sum converges.
• For causal sequences, the ROC in the z plane lies outside a circle containing all the poles of X(z). For anticausal signals, the ROC is inside the circle such that all poles of X(z) are external to this circle. If x(n) consists of both a causal and an anticausal part, then the ROC is an annular region, such that the poles outside this region correspond to the anticausal part of x(n), and the poles inside the annulus correspond to the causal part.
• The Z-transform of an anticausal sequence x₋(n) can be determined from a table of unilateral transforms as
X₋(z) = Z[x₋(-n)]|_{z→1/z}
• Expanding X(z) in partial fractions and identifying the inverse of each term from a table of Z-transforms is the most convenient method for determining x(n). If only the first few terms of the sequence are of interest, x(n) can be obtained by expanding X(z) in a power series in z^{-1} by a process of long division.
• The properties of the Z-transform are similar to those of the Laplace transform. Among the applications of the Z-transform are the solution of difference equations and the evaluation of the convolution of two discrete sequences.
• The time-shift property of the Z-transform can be used to solve difference equations.
• If y(n) represents the convolution of two discrete sequences x(n) and h(n), then
Y(z) = H(z)X(z)
• The transfer function H(z) of a system with input x(n), impulse response h(n), and output y(n) is
H(z) = Y(z)/X(z) = Z[h(n)]
8.11 PROBLEMS
8.1. Determine the Z-transforms and the regions of convergence for the following sequences:
(a) x(n) = (-3)ⁿu(-n - 1)
ror,t,r={i, ilft=s
z> o
(c) .r(r) I(JI
l:', a<o
\
(d) x(n) = 2δ(n) - 2ⁿu(n)
8.2. The Z-transform of a sequence x(n) is
z3+4zt-u,
x(z) =
z'+lr'-1r*l
(a) Plot the locations of the poles and zeros of X(z).
(b) Identify the causal pole(s) if the ROC is (i) |z| < 1, (ii) |z| > 2.
(c) Find x(n) in both cases.
8.3. Use the definition and the properties of the Z-transform to find X(z) for the following causal sequences:
(a) x(n) = naⁿ sin Ω₀n
(b) x(n) = n² cos Ω₀n
(c) :(n) ="(:)" +("-r)(1)'
Sec. 8.11 Problems 415
8.4. Find the inverse Z-transform of
X(z) = (4z + 2)/(z² + 4z + 3)
8.5. Find the inverse transform of
X(z) = log(1 - (1/2)z^{-1}),  |z| > 1/2
by the following methods:
(a) Use the series expansion
log(1 - a) = -Σ_{n=1}^∞ aⁿ/n,  |a| < 1
(b) Differentiate X(z) with respect to z and use the differentiation property of the Z-transform.
H(:) =
.-r. ,* - f1';i."1 1, - *r;
where K and α are constants. Find the range of values of K and α for which the system is stable, and plot this region in the K-α plane.
8.16. Obtain a realization of the following transfer function as a combination of first- and second-order sections in (a) cascade and (b) parallel.
Find the transfer function G_h(s) of the first-order hold, and compare its frequency response with that of the ideal reconstruction filter matched to the rate 1/T.
8.20. As we saw in Chapter 4, filters are used to modify the frequency content of signals in an appropriate manner. A technique for designing digital filters is based on transforming an analog filter into an equivalent digital filter. In order to do so, we have to obtain a relation between the Laplace and Z-transform variables. In Section 8.8, we discussed one such relation, based on equating the sample values of an analog signal with a discrete-time signal. The relation obtained was
z = exp[sT]
We can obtain other such relations by using different equivalences. For example, by equating the s-domain transfer function of the derivative operator and the Z-domain transfer function of its backward-difference approximation, we can write
s = (1 - z^{-1})/T
or equivalently,
z = 1/(1 - sT)
Similarly, equating the integral operator with the trapezoidal approximation (see Problem 6.15) yields
s = (2/T)·(1 - z^{-1})/(1 + z^{-1})
or
z = (1 + (T/2)s)/(1 - (T/2)s)
(a) Derive the two alternative relations between the s and z planes just given.
(b) Discuss the mapping of the s plane into the z plane under the two relations.
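As a numerical sketch of part (b), the two substitutions can be applied to a sample stable pole s₀ = -2 + 3j with T = 0.1 (values chosen arbitrarily): both map the left-half-plane point inside the unit circle, and the trapezoidal (bilinear) rule maps jω-axis points exactly onto the unit circle.

```python
# Backward-difference and trapezoidal s-to-z substitutions applied to an
# arbitrary stable pole and to a j-omega point.
T = 0.1
s0 = complex(-2.0, 3.0)
z_bd = 1 / (1 - s0 * T)                            # backward difference
z_bl = (1 + (T / 2) * s0) / (1 - (T / 2) * s0)     # trapezoidal (bilinear)
on_axis = (1 + (T / 2) * 5j) / (1 - (T / 2) * 5j)  # a j-omega axis point
print(abs(z_bd), abs(z_bl), abs(on_axis))
```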
Chapter 9

The Discrete Fourier Transform

9.1 INTRODUCTION
From our discussions so far, we see that transform techniques play a very useful role
in the analysis of linear. time-invariant systems. Among the many applications of these
techniques are the spectral analysis of signals, the solution of differential or difference
equations, and the analysis of systems in terms of a frequency response or transfer
function. With the tremendous increase in the use of digital hardware in recent years,
interest has centered upon transforms that are especially suited for machine computa-
tion, In this chapter we study one such transform, namely, the discrete Fourier trans-
form (DFT), which can be viewed as a logical extension of the Fourier transforms
discussed earlier.
In order to motivate our definition of the DFT, let us assume that we are interested in finding the Fourier transform of an analog signal x_a(t) using a digital computer. Since such a computer can store and manipulate only a finite set of numbers, it is necessary to represent x_a(t) by a finite set of values. The first step in doing so is to sample the signal to obtain a discrete sequence x_a(n). Because the analog signal may not be time limited, the next step is to obtain a finite set of samples of the discrete sequence by means of truncation. Without loss of generality, we can assume that these samples are defined for n in the range [0, N - 1]. Let us denote this finite sequence by x(n), which we can consider to be the product of the infinite sequence x_a(n) and the window function
w(n) = {1, 0 ≤ n ≤ N - 1; 0, otherwise}    (9.1.1)
so that
x(n) = x_a(n)w(n)    (9.1.2)
Since we now have a discrete sequence, we can take the discrete-time Fourier transform of the sequence as
X(Ω) = Σ_{n=0}^{N-1} x(n) exp[-jΩn]    (9.1.3)
This is still not in a form suitable for machine computation, since Ω is a continuous variable taking values in [0, 2π]. The final step, therefore, is to evaluate X(Ω) at only a finite number of values Ω_k by a process of sampling uniformly in the range [0, 2π]. We obtain
X(Ω_k) = Σ_{n=0}^{N-1} x(n) exp[-jΩ_k n],  k = 0, 1, ..., M - 1    (9.1.4)
where
Ω_k = 2πk/M    (9.1.5)
Choosing M = N gives
X(k) = Σ_{n=0}^{N-1} x(n) exp[-j(2π/N)nk]    (9.1.6)
An assumption that is implicit in our derivations is that x(n) can take any value in the range (-∞, ∞); that is, x(n) can be represented to infinite precision. However, the computer can use only a finite word-length representation. Thus, we quantize the dynamic range of the signal into a finite number of levels. In many applications, the error that arises in representing an infinite-precision number by a finite word can be made small, in comparison to the errors introduced by sampling, by a suitable choice of quantization levels. We therefore assume that x(n) can assume any value in (-∞, ∞).
Although Equation (9.1.6) can be considered to be an approximation to the continuous-time Fourier transform of the signal x_a(t), it defines the discrete Fourier transform of the N-point sequence x(n). We will investigate the nature of this approximation in Section 9.6, where we consider the spectral estimation of analog signals using the DFT. However, as we will see in subsequent sections, although the DFT is similar to the discrete-time Fourier transform that we studied in Chapter 7, some of its properties are quite different.
One of the reasons for the widespread use of the DFT and other discrete transforms
is the existence of algorithms for their fast and efficient computation on a computer.
For the DFT, these algorithms collectively go under the name of fast Fourier transform
(FFT) algorithms. We discuss two popular versions of the FFT in Section 9.5.
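Equation (9.1.6) can be evaluated directly by two nested sums. As a sketch, the test signal exp[j2π·3n/N], an arbitrary choice, should produce a single nonzero DFT bin at k = 3 with magnitude N (an FFT computes the same values, only faster).

```python
import cmath, math

# Direct evaluation of the N-point DFT of Equation (9.1.6).
N = 8
x = [cmath.exp(2j * math.pi * 3 * n / N) for n in range(N)]
X = [sum(x[n] * cmath.exp(-2j * math.pi * n * k / N) for n in range(N))
     for k in range(N)]
print([round(abs(v), 6) for v in X])
```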
9.2 THE DISCRETE FOURIER TRANSFORM AND ITS INVERSE
X(k) = Σ_{n=0}^{N-1} x(n) exp[-j(2π/N)nk]    (9.2.1)
The inverse relation is
x(n) = (1/N) Σ_{k=0}^{N-1} X(k) exp[j(2π/N)nk]    (9.2.2)
To derive this relation, we replace n by p in the right side of Equation (9.2.1), multiply by exp[j(2π/N)pk], and sum over k to get
Σ_{k=0}^{N-1} X(k) exp[j(2π/N)pk] = Σ_{k=0}^{N-1} Σ_{n=0}^{N-1} x(n) exp[j(2π/N)(p - n)k]    (9.2.3)
Interchanging the order of summation and using the identity
Σ_{k=0}^{N-1} exp[j(2π/N)k(n - p)] = {N, n = p; 0, n ≠ p}    (9.2.4)
we see that the right-hand side of Equation (9.2.3) evaluates to Nx(p), and Equation (9.2.2) follows.
We saw that X(Ω) is periodic in Ω with period 2π, so that X(Ω_k) = X(Ω_k + 2π). Similarly, the sequence obtained from Equation (9.2.2) is periodic with period N:
x(n + N) = (1/N) Σ_{k=0}^{N-1} X(k) exp[j(2π/N)(n + N)k] = x(n)    (9.2.6)
That is, the IDFT operation yields a periodic sequence, of which only the first N values, corresponding to one period, are evaluated. Hence, in all operations involving the DFT and the IDFT, we are effectively replacing the finite sequence x(n) by its periodic extension. We can therefore expect that there is a connection between the Fourier-series expansion of periodic discrete-time sequences that we discussed in Chapter 7 and the DFT. In fact, a comparison of Equations (9.2.1) and (9.2.2) with Equations (7.2.15) and (7.2.16) shows that the DFT X(k) of the finite sequence x(n) can be interpreted as the coefficient a_k in the Fourier-series representation of its periodic extension x_p(n), multiplied by the period N. (The two can be made identical by including the factor 1/N with the DFT rather than with the IDFT.)
9.3 PROPERTIES OF THE DFT

9.3.1 Linearity

Let X₁(k) and X₂(k) be the DFTs of the two sequences x₁(n) and x₂(n). Then
DFT[a₁x₁(n) + a₂x₂(n)] = a₁X₁(k) + a₂X₂(k)    (9.3.1)
for any constants a₁ and a₂.
9.3.2 Alternative Inversion Formula

By writing the IDFT formula, Equation (9.2.2), as
x(n) = (1/N)[Σ_{k=0}^{N-1} X*(k) exp[-j(2π/N)nk]]*
we can interpret x(n) as the complex conjugate of the DFT of X*(k), multiplied by 1/N. Thus, the same algorithm used to calculate the DFT can be used to evaluate the IDFT.
9.3.3 Periodic Convolution

Let Y(k) = H(k)X(k). Then
y(n) = (1/N) Σ_{k=0}^{N-1} Y(k) exp[j(2π/N)nk] = (1/N) Σ_{k=0}^{N-1} H(k)X(k) exp[j(2π/N)nk]
Using the definition of H(k), we get
y(n) = Σ_{m=0}^{N-1} h(m)x(n - m)    (9.3.4)
where the argument n - m is interpreted modulo N. A comparison with Equation (6.4.1) shows that the right-hand side of Equation (9.3.4) corresponds to the periodic convolution of the two sequences x(n) and h(n).
Example 9.3.1
Consider the periodic convolution y(n) of two sequences x(n) and h(n) with period N = 4, so that exp[j(2π/N)] = j. By using Equation (9.2.1), we can calculate the DFTs of the two sequences; for example,
X(0) = x(0) + x(1) + x(2) + x(3) = 2
Proceeding in the same way for the remaining values, we get
Y(0) = H(0)X(0) = 2
Y(1) = X(1)H(1) = -(3 - j)
Y(2) = H(2)X(2) = 0
Y(3) = H(3)X(3) = -(3 + j)
We can now use Equation (9.2.2) to find y(n) as
y(0) = (1/4)[Y(0) + Y(1) + Y(2) + Y(3)] = -1
y(1) = (1/4)[Y(0) + Y(1)exp[jπ/2] + Y(2)exp[jπ] + Y(3)exp[j3π/2]] = 0
From Equation (9.2.1), we note that the DFT of an N-point sequence x(n) can be written as

X(k) = Σ_{n=0}^{N-1} x(n) exp[-jΩn] |_{Ω = 2πk/N}        (9.3.5)
     = X(Ω)|_{Ω = 2πk/N}

That is, the DFT of the sequence x(n) is its discrete-time Fourier transform X(Ω) evaluated at N equally spaced points in the range [0, 2π).
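This relation can be checked directly: sampling the discrete-time Fourier transform of x(n) at Ω = 2πk/N reproduces the DFT bin values. A small sketch (function names are ours):

```python
import cmath

def dtft(x, omega):
    """X(Omega) = sum_n x(n) exp(-j Omega n) for a finite-length sequence."""
    return sum(v * cmath.exp(-1j * omega * n) for n, v in enumerate(x))

def dft(x):
    """X(k) = sum_n x(n) W_N^(nk), with W_N = exp(-j 2 pi / N)."""
    N = len(x)
    W = cmath.exp(-2j * cmath.pi / N)
    return [sum(x[n] * W ** (n * k) for n in range(N)) for k in range(N)]

x = [1.0, 0.5, -0.25, 2.0]
N = len(x)
X = dft(x)
# DTFT sampled at Omega = 2*pi*k/N: should equal the DFT bins
samples = [dtft(x, 2 * cmath.pi * k / N) for k in range(N)]
```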
Sec. 9.3 Properties of the DFT 425
For a sequence for which both the discrete-time Fourier transform and the Z-transform exist, it follows from Equation (8.3.7) that

X(k) = X(z)|_{z = exp[j2πk/N]}        (9.3.6)

so that the DFT is the Z-transform evaluated at N equally spaced points along the unit circle in the z plane.
We can express the DFT relation of Equation (9.2.1) compactly as a matrix operation on the data vector x = [x(0) x(1) ... x(N-1)]^T. For convenience, let us denote exp[-j2π/N] by W_N. We can then write

X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk},   k = 0, 1, ..., N-1        (9.3.7)

Let W be the matrix whose (k, n)th element [W]_{kn} is equal to W_N^{kn}. That is,

        | 1   1            1            ...  1               |
        | 1   W_N          W_N^2        ...  W_N^{N-1}       |
W =     | 1   W_N^2        W_N^4        ...  W_N^{2(N-1)}    |        (9.3.8)
        | .                                                  |
        | 1   W_N^{N-1}    ...               W_N^{(N-1)^2}   |

The matrix W is usually referred to as the DFT matrix. Clearly, [W]_{kn} = [W]_{nk}, so that W is symmetric (W = W^T).
In terms of the DFT matrix, Equation (9.3.7) becomes

X = Wx        (9.3.9)

From Equation (9.2.2), we can write

x(n) = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-nk}        (9.3.10)

Since W_N^{-nk} = (W_N^{nk})*, where * represents the complex conjugate, it follows that the IDFT relation can be written in matrix form as

x = (1/N) W* X        (9.3.11)

Solving for x from Equation (9.3.9) gives

x = W^{-1} X        (9.3.12)

A comparison of Equations (9.3.11) and (9.3.12) shows that

W^{-1} = (1/N) W*        (9.3.13)
or equivalently,

W*W = N I_N        (9.3.14)

with W being a unitary matrix (to within the scale factor N). The DFT is therefore a unitary transform; often, however, it is simply referred to as an orthogonal transform.
Other useful orthogonal transforms can be defined by replacing the DFT matrix in
Equation (9.3.8) by other unitary or orthogonal matrices. Examples are the Walsh-
Hadamard transform and the discrete cosine transform, which have applications in
areas such as speech and image processing. As with the DFT, the utility of these trans-
forms arises from the existence of fast and efficient algorithms for their computation.
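The matrix relations above can be verified for a small N. The sketch below builds the DFT matrix, checks its symmetry, and confirms that W W* = N·I_N (so that W/√N is unitary). All names are illustrative:

```python
import cmath

def dft_matrix(N):
    """The N x N DFT matrix of Equation (9.3.8): [W]_{kn} = W_N^(kn)."""
    W = cmath.exp(-2j * cmath.pi / N)
    return [[W ** (k * n) for n in range(N)] for k in range(N)]

def matmul(A, B):
    """Plain triple-loop matrix product for small matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

N = 4
Wm = dft_matrix(N)
Wconj = [[w.conjugate() for w in row] for row in Wm]
P = matmul(Wm, Wconj)   # should equal N * I_N, per Equation (9.3.14)
```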
points will be the same in both sequences. Clearly, if we choose K = L, y_p(n) and y_l(n) will be identical.
Most available routines for the efficient computation of the DFT assume that the length of the sequence is a power of 2. In that case, K is chosen as the smallest power of 2 that is larger than L. When K > L, the first L points of y_p(n) will be identical to y_l(n), while the remaining K - L values will be zero.
We will now show that the K-point periodic convolution of h(n) and x(n) is identical to the linear convolution of the two functions if K = L. We note that h(m) is zero for m ∉ [0, M-1], and x(n-m) is zero for (n-m) ∉ [0, N-1], so that we have the following.

0 ≤ n ≤ M-1:

y_l(n) = Σ_{m=0}^{n} h(m)x(n-m)
       = h(0)x(n) + h(1)x(n-1) + ... + h(n)x(0)

M ≤ n ≤ N-1:

y_l(n) = Σ_{m=0}^{M-1} h(m)x(n-m)
       = h(0)x(n) + h(1)x(n-1) + ... + h(M-1)x(n-M+1)

N ≤ n ≤ M+N-2:

y_l(n) = Σ_{m=n-N+1}^{M-1} h(m)x(n-m)
       = h(n-N+1)x(N-1) + ... + h(M-1)x(n-M+1)        (9.4.2)
h((n - m))_K = h(n - m + K),   n + 1 ≤ m ≤ K - 1

so that

x_p(n) = { x(n),  0 ≤ n ≤ N - 1        (9.4.6)
         { 0,     otherwise

We can easily verify that y_p(n) is exactly the same as y_l(n) for 0 ≤ n ≤ N + M - 2 and is zero for N + M - 1 ≤ n ≤ K - 1.
In sum, in order to use the DFT to perform the linear convolution of the M-point sequence h(n) and the N-point sequence x(n), we augment both sequences with zeros to form the K-point sequences h_a(n) and x_a(n), with K ≥ M + N - 1. We determine the product of the corresponding DFTs, H_a(k) and X_a(k). Then

y_l(n) = IDFT[H_a(k) X_a(k)]        (9.4.7)
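The zero-padding recipe can be sketched as follows (direct O(K²) DFTs for clarity; a real implementation would use an FFT; the function names are ours):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * n * k / N) for k in range(N)) / N
            for n in range(N)]

def linear_conv_via_dft(h, x):
    """Zero-pad both sequences to K >= M + N - 1 and multiply their DFTs."""
    K = len(h) + len(x) - 1
    ha = list(h) + [0.0] * (K - len(h))
    xa = list(x) + [0.0] * (K - len(x))
    H, X = dft(ha), dft(xa)
    return [round(v.real, 9) for v in idft([Hk * Xk for Hk, Xk in zip(H, X)])]

y = linear_conv_via_dft([1, 2, 3], [1, 1, 1, 1])
```

For these 3-point and 4-point sequences, K = 6 ≥ M + N − 1, and the result matches the direct linear convolution {1, 3, 6, 6, 5, 3}.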
That is, N = 2, 4, 8, 16, 32, etc. Accordingly, the algorithms are referred to as radix-2 algorithms.
Letting n = 2r in the first sum and n = 2r + 1 in the second sum, we can write

X(k) = Σ_{r=0}^{N/2-1} x(2r) W_N^{2rk} + Σ_{r=0}^{N/2-1} x(2r+1) W_N^{(2r+1)k}
     = Σ_{r=0}^{N/2-1} g(r) W_{N/2}^{rk} + W_N^k Σ_{r=0}^{N/2-1} h(r) W_{N/2}^{rk}        (9.5.5)
(branches) with arrows pointing in the direction of the signal flow. Hence, each branch has an input signal and an output signal. We associate a weight with each branch that determines the transmittance between the input and output signals. When not indicated on the graph, the transmittance of any branch is assumed to be 1. The signal at any node is the sum of the outputs of all the branches entering the node. These concepts are illustrated in Figure 9.5.1, which shows the signal-flow graph for the computations involved in Equation (9.5.9) for a particular value of k. Figure 9.5.2 shows the signal-flow graph for computing X(k) for an eight-point sequence. As can be seen from the graph, to determine X(k), we first compute the two four-point DFTs G(k) and H(k) of the sequences g(n) = {x(0), x(2), x(4), x(6)} and h(n) = {x(1), x(3), x(5), x(7)} and combine them appropriately.
We can determine the number of computations required to find X(k) using this procedure. Each of the two DFTs requires (N/2)^2 complex multiplications and (N/2)^2
Figure 9.5.2 Flow graph for the first stage of the DIT algorithm for N = 8: the two 4-point DFTs G(k) and H(k) are combined to give X(k).
Sec. 9.5 Fast Fourier Transforms 431
complex additions. Combining the two DFTs requires N complex multiplications and N complex additions. Thus, the computation of X(k) using Equation (9.5.8) requires N + N^2/2 complex additions and multiplications, compared to N^2 complex multiplications and additions for direct computation.
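The decimation-in-time idea translates directly into a short recursive routine. This is a sketch of the principle only (the book's flow graphs correspond to an iterative, in-place version; names are ours):

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    G = fft(x[0::2])           # N/2-point DFT of the even-indexed samples
    H = fft(x[1::2])           # N/2-point DFT of the odd-indexed samples
    X = [0] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * H[k]   # twiddle W_N^k * H(k)
        X[k] = G[k] + t                  # combine: X(k) = G(k) + W_N^k H(k)
        X[k + N // 2] = G[k] - t         # second half: W_N^(k+N/2) = -W_N^k
    return X

def dft_direct(x):
    """Reference O(N^2) DFT for comparison."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```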
Since N/2 is also even, we can consider using the same procedure for determining the N/2-point DFTs G(k) and H(k) by first determining the N/4-point DFTs of appropriately chosen sequences and combining them. For N = 8, this involves dividing the sequence g(n) into the two sequences {x(0), x(4)} and {x(2), x(6)} and the sequence h(n) into {x(1), x(5)} and {x(3), x(7)}. The resulting computations for finding G(k) and H(k) are illustrated in Figure 9.5.3.
Clearly, this procedure can be continued by further subdividing the subsequences until we get a set of two-point sequences. Figure 9.5.4 illustrates the computation of the DFT of a two-point sequence y(n) = {y(0), y(1)}. The complete flow graph for the computation of an eight-point DFT is shown in Figure 9.5.5.
A careful examination of the flow graph in the latter figure leads to several observations. First, the number of stages in the graph is 3, which equals log2 8. In general,
Figure 9.5.3 Flow graphs for computing (a) G(k) and (b) H(k) from N/4-point DFTs, N = 8.
Figure 9.5.5 Complete flow graph for computation of the DFT for N = 8.
the number of stages is equal to log2 N. Second, each stage of the computation requires eight complex multiplications and additions. For general N, we require N complex multiplications and additions per stage, leading to a total of N log2 N operations.
The ordering of the input to the flow graph, which is 0, 4, 2, 6, 1, 5, 3, 7, is determined by bit reversing the natural numbers 0, 1, 2, 3, 4, 5, 6, 7. To obtain the bit-reversed order, we reverse the bits in the binary representation of the numbers in their natural order and obtain their decimal equivalents, as illustrated in Table 9-1.
Finally, the procedure permits in-place computation; that is, the results of the computations at any stage can be stored in the same locations as those of the input to that stage. To illustrate this, let us consider the computation of X(0) and X(4). Both of these computations require the quantities G(0) and H(0) as inputs. Since G(0) and H(0) are not required for determining any other value of X(k), once X(0) and X(4) have been determined, they can be stored in the same locations as G(0) and H(0). Similarly, the locations of G(1) and H(1) can be used to store X(1) and X(5), and so on. Thus, only 2N storage locations are needed to complete the computations.

TABLE 9-1  Bit-reversed order for N = 8

Natural (binary)   Bit-reversed (binary)   Decimal
000                000                     0
001                100                     4
010                010                     2
011                110                     6
100                001                     1
101                101                     5
110                011                     3
111                111                     7
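The bit-reversed ordering in Table 9-1 is easy to generate programmatically; a small sketch:

```python
def bit_reverse(n, bits):
    """Reverse the low-order `bits` bits of the integer n."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (n & 1)   # shift the next bit of n into r
        n >>= 1
    return r

# Input ordering for the N = 8 DIT flow graph (Table 9-1):
order = [bit_reverse(n, 3) for n in range(8)]
```

Note that bit reversal is its own inverse, which is why the same permutation scrambles the input of the DIT graph and unscrambles the output of the DIF graph.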
X(k) = Σ_{n=0}^{N/2-1} x(n) W_N^{nk} + Σ_{n=N/2}^{N-1} x(n) W_N^{nk}        (9.5.10)

A comparison with Equation (9.5.6) shows that even though the two sums on the right side of Equation (9.5.10) are taken over N/2 values of n, they do not represent DFTs. We can combine the two terms in Equation (9.5.10) by noting that W_N^{kN/2} = (-1)^k to get
Let

g(n) = x(n) + x(n + N/2)
h(n) = [x(n) - x(n + N/2)] W_N^n,   0 ≤ n ≤ N/2 - 1

Equations (9.5.13) and (9.5.14) represent the (N/2)-point DFTs of the sequences g(n) and h(n), respectively. Thus, the computation of X(k) involves first forming the sequences g(n) and h(n) and then computing their DFTs to obtain the even and odd values of X(k). This is illustrated in Figure 9.5.6 for the case where N = 8. From the figure, we see that G(0) = X(0), G(1) = X(2), G(2) = X(4), G(3) = X(6), H(0) = X(1), H(1) = X(3), H(2) = X(5), and H(3) = X(7).
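The first DIF stage can be verified numerically: forming g(n) and h(n) as above and taking their N/2-point DFTs reproduces the even- and odd-indexed DFT values of x(n). A sketch with a direct-sum DFT (names are ours):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, -1.0, 0.5, 3.0, -2.0, 0.0, 1.5]
N = len(x)
W = cmath.exp(-2j * cmath.pi / N)

g = [x[n] + x[n + N // 2] for n in range(N // 2)]               # -> even bins
h = [(x[n] - x[n + N // 2]) * W ** n for n in range(N // 2)]    # -> odd bins

G, H = dft(g), dft(h)          # the two N/2-point DFTs
X = dft(x)
even_bins = [X[2 * k] for k in range(N // 2)]
odd_bins = [X[2 * k + 1] for k in range(N // 2)]
```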
We can proceed to determine the two (N/2)-point DFTs G(k) and H(k) by computing the even and odd values separately using a similar procedure. That is, we form the sequences
Figure 9.5.6 Flow graph for the first stage of the DIF algorithm, N = 8.
g_1(n) = g(n) + g(n + N/4)
g_2(n) = [g(n) - g(n + N/4)] W_{N/2}^n        (9.5.15)

h_1(n) = h(n) + h(n + N/4)
h_2(n) = [h(n) - h(n + N/4)] W_{N/2}^n        (9.5.16)
Then the (N/4)-point DFTs G_1(k), G_2(k) and H_1(k), H_2(k) correspond to the even and odd values of G(k) and H(k), respectively, as shown in Figure 9.5.7 for N = 8.
We can continue this procedure until we have a set of two-point sequences, which, as can be seen from Figure 9.5.4, are implemented by adding and subtracting the input values. Figure 9.5.8 shows the complete flow graph for the computation of an eight-point DFT.

Figure 9.5.7 Flow graph for the N/4-point DFTs of G(k) and H(k), N = 8.

Figure 9.5.8 Complete flow graph for the DIF algorithm, N = 8.

As can be seen from the figure, the input in this case is in its natural order,
and the output is in bit-reversed order. However, the other observations made in reference to the DIT algorithm, such as the number of computations and the in-place nature of the computations, apply to the DIF algorithm also. We can modify the signal-flow graph of Figure 9.5.8 to get a DIF algorithm in which the input is in scrambled (bit-reversed) order and the output is in the natural order. We can also obtain a DIT algorithm for which the input is in the natural order. In both cases, we can modify the graphs to give an algorithm in which both the input and the output are in their natural order. However, in this case, the in-place property of the algorithm will no longer hold.
Finally, as noted earlier (see Equation (9.3.3)), the FFT algorithm can be used to find the IDFT in an efficient manner.
As noted earlier, the first step in obtaining the DFT of a signal x_a(t) is to convert it into a discrete-time signal x_s(t) by sampling at a uniform rate. The process of sampling, as we saw, can be modeled by multiplying the signal x_a(t) by the impulse train

p_T(t) = Σ_{n=-∞}^{∞} δ(t - nT)

so that we have
These steps and the others involved in obtaining the DFT of the signal x_a(t) are illustrated in Figure 9.6.1. The figures on the left correspond to the time functions, and the figures on the right correspond to their Fourier transforms. Figure 9.6.1(a) shows a typical analog signal that is multiplied by the impulse sequence shown in Figure 9.6.1(b) to yield the sampled signal of Figure 9.6.1(c). The Fourier transform of the impulse sequence p_T(t), also shown in Figure 9.6.1(b), is a sequence of impulses of strength 1/T in the frequency domain, with spacing ω_s. The spectrum of the sampled signal is the convolution of the transform-domain functions in Figures 9.6.1(a) and 9.6.1(b) and is thus an aliased version of the spectrum of the analog signal, as shown in Figure 9.6.1(c). Thus, the spectrum of the sampled signal is a periodic repetition, with period ω_s, of the spectrum of the analog signal x_a(t).
If the signal x_a(t) is band-limited, we can avoid aliasing errors by sampling at a rate that is above the Nyquist rate. If the signal is not band-limited, aliasing effects cannot be avoided. They can, however, be minimized by choosing the sampling rate to be the maximum feasible. In many applications, it is usual to low-pass filter the analog signal prior to sampling in order to minimize aliasing errors.
The second step in the procedure is to truncate the sampled signal by multiplying it by the window function w(t). The length of the data window T_0 is related to the number of data points N and the sampling interval T by

T_0 = NT        (9.6.3)

For the rectangular window,

w_R(t) = { 1,  -T/2 ≤ t < T_0 - T/2        (9.6.4)
         { 0,  otherwise

The shift of T/2 from the origin is introduced in order to avoid having signal samples at points of discontinuity of the window function. The Fourier transform is
[Figure 9.6.1(d)-(g): the window function w_R(t), the truncated sampled signal, and their Fourier transforms.]
and Figure 9.6.1(e) shows the truncated sampled function. The corresponding Fourier transform is obtained as the convolution of the two transforms X_s(ω) and W_R(ω). The effect of this convolution is to introduce a ripple into the spectrum.
The final step is to sample the spectrum at equally spaced points in the frequency domain. Since the number of frequency points in the range 0 ≤ ω < ω_s is equal to the number of data points N, the spacing between frequency samples is ω_s/N, or equivalently, 2π/T_0, as can be seen by using Equation (9.6.3). Just as we assumed that the sampled signal in the time domain could be modeled as the modulation (multiplication) of the analog signal x_a(t) by the impulse train p_T(t), the sampling operation in the frequency domain can be modeled as the multiplication of the transform X_s(ω) * W_R(ω) by the impulse train in the frequency domain:
p(ω) = Σ_{m=-∞}^{∞} δ(ω - m(2π/T_0))        (9.6.7)
Figure 9.6.2 Magnitude spectrum of a rectangular window.
W_R(Ω) = exp[-jΩ(N - 1)/2] [sin(NΩ/2) / sin(Ω/2)]        (9.6.9)
Figure 9.6.2 shows |W_R(Ω)|, which consists of a main lobe extending from Ω = -2π/N to 2π/N and a set of side lobes. The area under the side lobes, which is a significant percentage of the area under the main lobe, contributes to the smearing of the DFT spectrum.
It can be shown that window functions which taper smoothly to zero at both ends give much better results. For these windows, the area under the side lobes is a much smaller percentage of the area under the main lobes. An example is the Hamming window, defined over the record length T_0 as

w_H(t) = 0.54 - 0.46 cos(2πt/T_0),   0 ≤ t ≤ T_0

and zero otherwise.
Figure 9.6.3(a) compares the rectangular and Hamming windows. Figures 9.6.3(b) and 9.6.3(c) show the magnitude spectra of the rectangular and Hamming windows, respectively. These are conventionally plotted in units of decibels (dB). As can be seen from the figure, whereas the rectangular window has a narrower main lobe than the Hamming window, the attenuation of the side lobes is much higher with the Hamming window.
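The side-lobe behavior can be quantified with a rough numerical experiment: evaluate each window's transform magnitude on a frequency grid and compare the largest level found outside the main lobes. The cutoff 5π/N used below is an ad hoc choice that clears the main lobe of both windows; the window length and grid density are arbitrary:

```python
import math, cmath

def dtft_mag(w, omega):
    """|W(Omega)| for a finite window w(n)."""
    return abs(sum(v * cmath.exp(-1j * omega * n) for n, v in enumerate(w)))

N = 32
rect = [1.0] * N
hamming = [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def peak_sidelobe_db(w):
    """Largest level outside Omega < 5*pi/N, relative to the peak at Omega = 0 (dB)."""
    main = dtft_mag(w, 0.0)
    grid = [math.pi * i / 512 for i in range(1, 513)]        # frequencies in (0, pi]
    side = max(dtft_mag(w, om) for om in grid if om > 5 * math.pi / N)
    return 20 * math.log10(side / main)

rect_db = peak_sidelobe_db(rect)       # rectangular window: high side lobes
ham_db = peak_sidelobe_db(hamming)     # Hamming window: much lower side lobes
```

The Hamming figure comes out tens of dB below the rectangular one, consistent with the dB plots in Figure 9.6.3.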
A factor that has to be considered is the frequency resolution, which refers to the spacing between samples in the frequency domain. If the frequency resolution is too low, the frequency samples may be too far apart, and we may miss critical information in the spectrum. For example, we may assume that a single peak exists at a frequency where there actually are two closely spaced peaks in the spectrum. The frequency resolution is
Sec. 9.6 Spectral Estimation of Analog Signals Using the DFT 441
Figure 9.6.3 (a) Rectangular and Hamming windows. (b) Magnitude spectrum (dB) of the rectangular window. (c) Magnitude spectrum (dB) of the Hamming window.
Δω = ω_s/N = 2π/(NT) = 2π/T_0        (9.6.11)
where T_0 refers to the length of the data window. It is clear from Equation (9.6.11) that, to improve the frequency resolution, we have to use a longer data record. If the record length is fixed and we need a higher resolution in the spectrum, we can consider padding the data sequence with zeros, thereby increasing the number of samples from N to some new value N_0 > N. This is equivalent to using a window of longer duration T_0' > T_0 on the modified signal, now defined as

x_0(n) = { x(n),  0 ≤ n ≤ N - 1
         { 0,     N ≤ n ≤ N_0 - 1

The duration of the data window can be determined from the desired frequency resolution Δf as

T_0 = 1/Δf = 10 s

from which it follows that

N ≥ f_s T_0 = 100,000

Assuming that we want to use a radix-2 FFT routine, we choose N to be 131,072 (= 2^17), which is the smallest power of 2 satisfying the constraint on N. If we choose f_s = 10 kHz, T_0 must be chosen to be 13.1072 s.
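The record-length arithmetic here can be reproduced in a few lines. The resolution (0.1 Hz) and sampling rate (10 kHz) below are our reading of the example's specification, which is garbled in this printing:

```python
import math

# Assumed specification: desired frequency resolution 0.1 Hz, sampling rate 10 kHz.
delta_f = 0.1        # Hz
fs = 10_000.0        # Hz

T0_min = 1.0 / delta_f                  # minimum record length, T0 = 1/df
N_min = fs * T0_min                     # minimum number of samples
N = 2 ** math.ceil(math.log2(N_min))    # smallest power of 2 >= N_min (radix-2 FFT)
T0 = N / fs                             # actual record length for this N
```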
Example 9.6.2
In this example, we illustrate the use of the DFT in finding the Fourier spectrum of analog signals. Let us consider the signal

x_a(t) = cos(400πt)

Since the signal consists of a single frequency, its continuous-time Fourier transform is a pair of δ functions occurring at ±200 Hz.
Figure 9.6.4 shows the magnitude of the DFT spectrum X(k) of the signal for data lengths of 32, 64, and 128 samples obtained by using a rectangular window. The signal was sampled at a rate of 2 kHz, which is considerably higher than the Nyquist rate of 400 Hz. As can be seen from the figure, the DFT spectrum exhibits two peaks in each case. If we let k_p denote the location of the first peak, the second peak occurs at N - k_p in all cases. This is to be expected, since X(k) = X*(N - k) for real x(n). The analog frequencies corresponding to the two peaks can be determined to be f_0 = ±k_p/(NT).
Figure 9.6.4 DFT spectrum of analog signal x_a(t) using rectangular window. (a) N = 32. (b) N = 64. (c) N = 128.
Figure 9.6.5 shows the results of using a Hamming window on the sampled signal for data lengths of 32, 64, and 128 samples. The DFT spectrum again exhibits two peaks at the same locations as before.
Figure 9.6.5 DFT spectrum of analog signal x_a(t) using Hamming window. (a) N = 32. (b) N = 64. (c) N = 128.
Sec. 9.7 Summary 445
With both the rectangular and Hamming windows, the first peak occurs at k_p = 3, 6, and 13 for N = 32, 64, and 128 samples, respectively. These correspond to analog frequencies of 187.5 Hz, 187.5 Hz, and 203.125 Hz. Thus, as the number of data samples increases, the peak moves closer to the actual analog frequency. Note that the peaks become sharper as N (and hence the resolution in the digital frequency domain) increases. The figures also show that the spectrum obtained using the Hamming window is somewhat smoother than that resulting from the rectangular window.
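The peak locations quoted above can be reproduced numerically. The sketch below samples x_a(t) = cos(400πt) at 2 kHz, computes the DFT magnitude by direct summation, and locates the first peak for each record length:

```python
import math, cmath

fs = 2000.0        # sampling rate, Hz
f0 = 200.0         # signal frequency: x_a(t) = cos(400*pi*t)

def dft_mag(x):
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N)))
            for k in range(N)]

def first_peak(N):
    """Index of the largest DFT bin in 0 <= k < N/2 for an N-sample record."""
    x = [math.cos(2 * math.pi * f0 * n / fs) for n in range(N)]
    half = dft_mag(x)[: N // 2]
    return max(range(len(half)), key=half.__getitem__)

peaks = {N: first_peak(N) for N in (32, 64, 128)}
freqs = {N: k * fs / N for N, k in peaks.items()}   # f0 estimate = k_p * fs / N
```

The peaks fall at k_p = 3, 6, and 13, and the N = 128 estimate k_p f_s/N is 203.125 Hz.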
Suppose we add another frequency to our analog signal, so that the signal is now
9.7 SUMMARY
• The discrete Fourier transform (DFT) of the finite-length sequence x(n) of length N is defined as

  X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk}

  where

  W_N = exp[-j(2π/N)]

• The inverse discrete Fourier transform (IDFT) is defined by

  x(n) = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-nk}

• The DFT of an N-point sequence is related to its Z-transform as

  X(k) = X(z)|_{z = exp[j(2πk/N)]}

• The sequence X(k), k = 0, 1, 2, ..., N-1, is periodic with period N. The sequence x(n) obtained by determining the IDFT of X(k) is also periodic with period N.
• In all operations involving the DFT and the IDFT, the sequence x(n) is effectively replaced by its periodic extension x_p(n).
• X(k) is equal to Na_k, where a_k is the coefficient of the discrete-time Fourier-series representation of x_p(n).
• The properties of the DFT are similar to those of the other Fourier transforms, with some significant differences. In particular, the DFT performs cyclic or periodic convolution instead of the linear convolution needed for the analysis of LTI systems.
• To perform a linear convolution of an N-point sequence with an M-point sequence, the sequences must be padded with zeros so that both are of length N + M - 1.
• Algorithms for efficient and fast machine computation of the DFT are known as fast Fourier-transform (FFT) algorithms.
• For sequences whose length is an integer power of 2, the most commonly used FFT algorithms are the decimation-in-time (DIT) and decimation-in-frequency (DIF) algorithms.
• For in-place computation using either the DIT or the DIF algorithm, either the input or the output must be in bit-reversed order.
• The DFT provides a convenient method for the approximate determination of the spectra of analog signals. Care must be taken, however, to minimize errors caused by sampling and windowing the analog signal to obtain a finite-length discrete-time sequence.
• Aliasing errors can be reduced by choosing a higher sampling rate or by prefiltering the analog signal. Windowing errors can be reduced by choosing a window function that tapers smoothly to zero at both ends.
• The spectral resolution in the analog domain is directly proportional to the data length.
(c) x(n) = { 1,  n even
           { 0,  otherwise
9.2. Show that if x(n) is a real sequence, X(N - k) = X*(k).
9.3. Let x(n) be an N-point sequence with DFT X(k). Find the DFT of the following sequences in terms of X(k):

(b) y_2(n) = { x(n/2),  n even
             { 0,       n odd
9.5. (a) Use the DFT to find the periodic convolution of the following sequences:
(i) x(n) = {1, -1, -1, 1, -1, 1} and h(n) = {1, 2, 3, 3, 2, 1}
(ii) x(n) = {1, -2, -1, 1} and h(n) = {1, 0, 0, 1}
(b) Verify your results using any mathematical software package.
9.6. Repeat Problem 9.5 for the linear convolution of the sequences in that problem.
9.7. Let X(Ω) denote the Fourier transform of the sequence x(n) = (1/3)^n u(n), and let y(n) denote an eight-point sequence such that its DFT, Y(k), corresponds to eight equally spaced samples of X(Ω). That is,

Y(k) = X(2πk/8),   k = 0, 1, ..., 7

What is y(n)?
9.8. Derive Parseval's relation for the DFT:

Σ_{n=0}^{N-1} |x(n)|^2 = (1/N) Σ_{k=0}^{N-1} |X(k)|^2
9.9. Suppose we want to evaluate the discrete-time Fourier transform of an N-point sequence x(n) at M equally spaced points in the range [0, 2π]. Explain how we can use the DFT to do this if (a) M > N and (b) M < N.
9.10. Let x(n) be an N-point sequence. It is desired to find 128 equally spaced samples of the spectrum X(Ω) in the range π/16 ≤ Ω ≤ 15π/16, using a radix-2 FFT algorithm. Describe a procedure for doing so if (i) N = 1000, (ii) N = 120.
9.11. Suppose we want to evaluate the DFT of an N-point sequence x(n) using a hardware processor that can only do M-point FFTs, where N is an integer multiple of M. Assuming that additional facilities for storage, addition, or multiplication are available, show how this can be done.
9.12. Given a six-point sequence x(n), we can seek to find its DFT by subdividing it into three two-point DFTs that can then be combined to give X(k). Draw a signal-flow graph to evaluate X(k) using this procedure.
9.13. Draw a signal-flow graph for computing a nine-point DFT as the sum of three three-point DFTs.
9.14. Analog data that has been prefiltered to 20 kHz must be spectrum analyzed to a resolution of less than 0.25 Hz using a radix-2 algorithm. Determine the necessary data length T_0.
9.15. For the analog signal in Problem 9.14, what is the frequency resolution if the signal is sampled at 40 kHz to obtain 4096 samples?
9.16. The analog signal x_a(t) of duration 24 s is sampled at the rate of 42 Hz, and the DFT of the resulting samples is taken.
(a) What is the frequency resolution in the analog domain?
(b) What is the digital frequency spacing for the DFT taken?
(c) What is the highest analog frequency that does not cause aliasing?
9.17. The following represent the DFT values X(k) of an analog signal x_a(t) that has been sampled to yield 16 samples:

X(0) = 2, X(3) = 4 - j4, X(5) = -2, X(8) = -1, X(11) = -2, X(13) = 4 + j4

All other values are zero.
(a) Find the corresponding x(n).
(b) What is the digital frequency resolution?
(c) Assuming that the sampling interval is 0.25 s, find the analog frequency resolution. What is the duration T_0 of the analog signal?
(d) For the sampling rate in part (c), what is the highest analog frequency that can be present in x_a(t) without causing aliasing?
(e) Find T_0 to give an analog frequency resolution that is twice that in part (c).
9.18. Given two real N-point sequences f(n) and g(n), we can find their DFTs simultaneously by computing a single N-point DFT of the complex sequence

x(n) = f(n) + jg(n)

We show how to do this in the following:
(a) Let h(n) be any real N-point sequence. Show that

Re[H(k)] = H_e(k)
Im[H(k)] = H_o(k)

(c) Use your results in Parts (a) and (b) to show that

F(k) = X_Re(k) + jX_Io(k)
G(k) = X_Ie(k) - jX_Ro(k)
Sec. 9.9 Problems 451
where X_Re(k) and X_Ro(k) represent the even and odd parts of X_R(k), the real part of X(k), and X_Ie(k) and X_Io(k) represent the even and odd parts of X_I(k), the imaginary part of X(k).
9.19. (a) The signal x_a(t) = 4 cos(2πt/3) is sampled at intervals T to generate 32 points of the sequence x(n). Find the DFT of the sequence if T = 15/16 s, and plot the magnitude and phase of the sequence. Use a rectangular window in finding the DFT.
(b) Determine the Fourier transform of x_a(t), and compare its magnitude and phase with the results of Part (a).
(c) Repeat Parts (a) and (b) if T = 0.1 s.
9.20. Repeat Problem 9.19 with a Hamming window. Comment on your results.
9.21. We want to determine the Fourier transform of the amplitude-modulated signal x_a(t) = 10 cos(20πt) cos(100πt) using the DFT. Choose an appropriate duration T_0 over which the signal must be observed in order to clearly distinguish all the frequencies in x_a(t). Assume a sampling interval of T = 0.4 ms.
(a) Use a rectangular window, and find the DFT of the sampled signal for N = 128, N = 256, and N = 512 samples.
(b) Determine the Fourier transform of x_a(t), and compare its magnitude and phase with the results of Part (a).
9.22. Repeat Problem 9.21 with a Hamming window. Comment on your results.
Chapter '1 0
Design of Analog
and Digital Filters
10.1 INTRODUCTION
Earlier we saw that when we apply an input to a system, it is modified or transformed at the output. Typically, we would like to design the system such that it modifies the input in a specified manner. When the system is designed to remove certain unwanted components of the input signal, it is usually referred to as a filter. When the unwanted components are described in terms of their frequency content, the filters, as discussed in Chapter 4, are said to be frequency selective. Although many applications require only simple filters that can be designed using a brute-force method, the design of more complicated filters requires the use of sophisticated techniques. In this chapter, we consider some techniques for the design of both continuous-time and discrete-time frequency-selective filters.
As noted in Chapter 4, an ideal frequency-selective filter passes certain frequencies without any change and completely stops the other frequencies. The range of frequencies that are passed without attenuation is the passband of the filter, and the range of frequencies that are not passed constitutes the stop band. Thus, for ideal continuous-time filters, the magnitude transfer function of the filter is given by |H(ω)| = 1 in the passband and |H(ω)| = 0 in the stop band. Frequency-selective filters are classified as low-pass, high-pass, band-pass, or band-stop filters, depending on the band of frequencies that either are passed through without attenuation or are completely stopped. Figure 10.1.1 shows the characteristics of these filters.
Similar definitions carry over to discrete-time filters, with the distinction that the frequency range of interest in this case is 0 ≤ Ω < 2π, since H(Ω) is now a periodic function with period 2π. Figure 10.1.2 shows the discrete-time counterparts of the filters shown in Fig. 10.1.1.
Sec. 10.1 Introduction 453

Figure 10.1.1 Ideal continuous-time filters: (a) low-pass; (b) high-pass; (c) band-pass; (d) band-stop.

Figure 10.1.2 Ideal discrete-time filters: (a) low-pass; (b) high-pass; (c) band-pass; (d) band-stop.

[Tolerance scheme for a practical low-pass filter: passband limits 1 + δ_1 and 1 - δ_1, stopband limit δ_2.]
our requirements on |H(ω)| (or |H(Ω)|) in the passbands and stop bands, by permitting deviations from the ideal response, as well as specifying a transition band between the passbands and stop bands. Thus, for a continuous-time low-pass filter, the specifications can be of the form

1 - δ_1 ≤ |H(ω)| ≤ 1 + δ_1,   |ω| ≤ ω_p        (10.1.1)
a low-pass filter, obtain the corresponding transfer function H(s) (or H(z)), and convert the transfer function back into the desired range.
It is clear that the frequency range 0 ≤ |ω| ≤ 1 is mapped into the range 0 ≤ |ω'| ≤ ω_c. Thus, H(s') represents a low-pass filter with a cutoff frequency of ω_c.
More generally, the transformation

s' = (ω_c/ω'_c) s        (10.2.3)

transforms a low-pass filter with a cutoff frequency ω_c to a low-pass filter with a cutoff frequency of ω'_c. Similarly, the transformation

s' = ω_c/s        (10.2.4)

converts the low-pass filter into a high-pass filter, so that the point |ω| = 1 corresponds to the point |ω'| = ω_c. Also, the range |ω| ≤ 1 is mapped onto the ranges defined by ω_c ≤ |ω'| ≤ ∞.
Next we consider the transformation of the normalized low-pass filter to a band-pass filter with lower and upper cutoff frequencies ω_l and ω_u, respectively. The required transformation is given in terms of the bandwidth of the filter,

BW = ω_u - ω_l        (10.2.6)

and the center frequency ω_0 = √(ω_l ω_u), as

s' = (s^2 + ω_0^2)/(BW s)

This transformation maps ω = 0 into the points ω' = ±ω_0, and the segment |ω| ≤ 1 into the segments ω_l ≤ |ω'| ≤ ω_u.
Finally, the band-stop filter is obtained through the transformation

s' = BW s / (s^2 + ω_0^2)        (10.2.9)

where BW and ω_0 are defined similarly to the way they were in the case of the band-pass filter. The transformations are summarized in Table 10-1.
TABLE 10-1  Frequency transformations from the low-pass analog filter response.

Low pass:   s' = (ω_c/ω'_c) s
High pass:  s' = ω_c/s
Band pass:  s' = (s^2 + ω_0^2)/(BW s),   ω_0 = √(ω_l ω_u)
Band stop:  s' = BW s/(s^2 + ω_0^2)

For digital filters, the low-pass-to-low-pass transformation is

(z')^{-1} = (z^{-1} - α)/(1 - α z^{-1})        (10.2.11)
By setting z = exp[jΩ] in the right side of Equation (10.2.11), it follows that

z' = exp[j tan^{-1}( (1 - α^2) sin Ω / ((1 + α^2) cos Ω - 2α) )]        (10.2.12)

Thus, the transformation maps the unit circle in the z plane into the unit circle in the z' plane. The required value of α can be determined by setting z' = exp[jΩ'_c] and Ω = Ω_c in Equation (10.2.11), yielding

α = sin[(Ω_c - Ω'_c)/2] / sin[(Ω_c + Ω'_c)/2]        (10.2.13)
Sec. 10.3 Design of Analog Filters 457
TABLE 10-2  Frequency transformations from the low-pass digital filter response.

Low pass:
  (z')^{-1} = (z^{-1} - α)/(1 - α z^{-1})
  α = sin[(Ω_c - Ω'_c)/2] / sin[(Ω_c + Ω'_c)/2]
  Ω'_c = desired cutoff frequency

High pass:
  (z')^{-1} = -(z^{-1} + α)/(1 + α z^{-1})
  α = -cos[(Ω_c + Ω'_c)/2] / cos[(Ω_c - Ω'_c)/2]

Band pass:
  (z')^{-1} = -[z^{-2} - (2αk/(k+1)) z^{-1} + (k-1)/(k+1)] / [((k-1)/(k+1)) z^{-2} - (2αk/(k+1)) z^{-1} + 1]
  α = cos[(Ω'_u + Ω'_l)/2] / cos[(Ω'_u - Ω'_l)/2]
  k = cot[(Ω'_u - Ω'_l)/2] tan(Ω_c/2)
  Ω'_l, Ω'_u = desired lower and upper cutoff frequencies, respectively

Band stop:
  (z')^{-1} = [z^{-2} - (2α/(k+1)) z^{-1} + (1-k)/(1+k)] / [((1-k)/(1+k)) z^{-2} - (2α/(k+1)) z^{-1} + 1]
  α = cos[(Ω'_u + Ω'_l)/2] / cos[(Ω'_u - Ω'_l)/2]
  k = tan[(Ω'_u - Ω'_l)/2] tan(Ω_c/2)
The design of practical filters starts with a prescribed set of specifications, such as those given in Equation (10.1.1) or depicted in Figure 10.1.2. Whereas procedures are available for the design of several different analog filters, we consider the design of two standard filters, namely, the Butterworth and Chebyshev filters. The Butterworth filter provides an approximation to a low-pass characteristic that approaches zero smoothly. The Chebyshev filter provides an approximation that oscillates in the passband but decreases monotonically in the transition and stop bands.
458 Design of Analog and Digital Filters    Chapter 10
|H(ω)|^2 = 1 / (1 + ω^{2N})        (10.3.1)

where N denotes the order of the filter. It is clear from this equation that the magnitude is a monotonically decreasing function of ω, with its maximum value of unity occurring at ω = 0. For ω = 1, the magnitude is equal to 1/√2 for all values of N. Thus, the normalized Butterworth filter has a 3-dB cutoff frequency of unity.
Figure 10.3.1 shows a plot of the magnitude characteristic of this'filter as a function
of ro for various values of N. The parameter N determines how closely the Butterworth
characteristic approximates the ideal filter. clearly. the approximation improves as N
is increased.
The Butterworth approximation is called a maximally flat approximation, since, for a given N, the maximal number of derivatives of the magnitude function is zero at the origin. In fact, the first 2N - 1 derivatives of |H(ω)| are zero at ω = 0, as we can see by expanding |H(ω)| in a power series about ω = 0.
[Figure 10.3.1 Magnitude plot of the normalized Butterworth filter for N = 1, 2, 3, 4, together with the ideal response.]
To obtain the transfer function H(s) of the filter, we note that

H(s)H(-s)|_{s=jω} = |H(ω)|^2 = 1/(1 + ω^(2N))    (10.3.3)

so that the poles of H(s)H(-s) satisfy

(s/j)^(2N) = -1    (10.3.5)

The roots of this equation are

s_k = exp[j(2k + N - 1)π/(2N)],  k = 1, 2, ..., 2N    (10.3.6)

so that the pole locations s_k = σ_k + jω_k are given by

σ_k = cos[(2k + N - 1)π/(2N)] = -sin[(2k - 1)π/(2N)]
ω_k = sin[(2k + N - 1)π/(2N)] = cos[(2k - 1)π/(2N)]    (10.3.7)
As can be seen from Equation (10.3.6), Equation (10.3.5) has 2N roots spaced uniformly around the unit circle at intervals of π/N radians. Since 2k - 1 cannot be even, it is clear that there are no roots on the jω axis, so that there are exactly N roots each in the left and right half planes. Now, the poles and zeros of H(s) are the mirror images of the poles and zeros of H(-s). Thus, in order to get a stable transfer function, we simply associate the roots in the left half plane with H(s).
As an example, for N = 3, from Equation (10.3.6), the left-half-plane roots are located at

s_1 = exp[j2π/3],  s_2 = exp[jπ],  s_3 = exp[j4π/3]

so that

H(s) = 1/{[s - exp(j2π/3)][s - exp(jπ)][s - exp(j4π/3)]}    (10.3.8)
The denominator can be expanded to yield

H(s) = 1/(s^3 + 2s^2 + 2s + 1)    (10.3.9)
Table 10-3 lists the denominator of the Butterworth transfer function in factored form for values of N ranging from N = 1 to N = 8. When these factors are multiplied, the result is a polynomial of the form

B(s) = a_N s^N + a_{N-1} s^(N-1) + ... + a_1 s + 1    (10.3.10)

These coefficients are listed in Table 10-4 for N = 1 to N = 8.
To obtain a filter with 3-dB cutoff at ω_c, we replace s in H(s) by s/ω_c. The corresponding magnitude characteristic is
TABLE 10-3
Butterworth polynomials (factored form)

1  s + 1
2  s^2 + √2 s + 1
3  (s^2 + s + 1)(s + 1)
4  (s^2 + 0.7654s + 1)(s^2 + 1.8478s + 1)
5  (s + 1)(s^2 + 0.6180s + 1)(s^2 + 1.6180s + 1)
6  (s^2 + 0.5176s + 1)(s^2 + √2 s + 1)(s^2 + 1.9319s + 1)
7  (s + 1)(s^2 + 0.4450s + 1)(s^2 + 1.2470s + 1)(s^2 + 1.8019s + 1)
8  (s^2 + 0.3902s + 1)(s^2 + 1.1111s + 1)(s^2 + 1.6629s + 1)(s^2 + 1.9616s + 1)
TABLE 10-4
Butterworth polynomial coefficients

N    a_1      a_2      a_3
1    1
2    √2
3    2        2
4    2.613    3.414    2.613
|H(ω)|^2 = 1/(1 + (ω/ω_c)^(2N))    (10.3.11)
Let us now consider the design of a low-pass Butterworth filter that satisfies the following specifications:

|H(ω_p)| = 1 - δ_1  and  |H(ω_s)| = δ_2

Substituting into Equation (10.3.11) gives

(ω_p/ω_c)^(2N) = (1/(1 - δ_1))^2 - 1    (10.3.13)

and

(ω_s/ω_c)^(2N) = (1/δ_2)^2 - 1    (10.3.14)

Eliminating ω_c from these two equations and solving for N results in

N = log{[(1/δ_2)^2 - 1]/[(1/(1 - δ_1))^2 - 1]} / [2 log(ω_s/ω_p)]    (10.3.15)
Since N must be an integer, we round up the value of N obtained from Equation (10.3.15) to the nearest integer. This value of N can now be used in either Equation (10.3.13) or Equation (10.3.14) to determine ω_c. If ω_c is determined from Equation (10.3.13), the passband specifications are met exactly, whereas the stopband specifications are exceeded. But if we use Equation (10.3.14) to determine ω_c, the reverse is true. The steps in finding H(s) are summarized as follows:

1. Determine N from Equation (10.3.15), using the values of δ_1, δ_2, ω_p, and ω_s, and round up to the nearest integer.
2. Determine ω_c, using either Equation (10.3.13) or Equation (10.3.14).
3. For the value of N calculated in Step 1, determine the denominator polynomial of the normalized Butterworth filter, using either Table 10-3 or Table 10-4 (for values of N ≤ 8) or using Equation (10.3.8), and form H(s).
4. Find the unnormalized transfer function by replacing s in the H(s) found in Step 3 by s/ω_c. The filter so obtained will have a dc gain of unity. If some other dc gain is desired, H(s) must be multiplied by the desired gain.
Example 10.3.1
We will design a Butterworth filter to have an attenuation of no more than 1 dB for |ω| ≤ 2000 rad/s and at least 15 dB for |ω| ≥ 5000 rad/s. From the specifications,

20 log_10(1 - δ_1) = -1  and  20 log_10 δ_2 = -15

so that δ_1 = 0.1087 and δ_2 = 0.1778. Substituting these values into Equation (10.3.15) yields a value of 2.6045 for N. Thus, we choose N to be 3 and obtain the normalized filter from Table 10-3 as

H(s) = 1/(s^3 + 2s^2 + 2s + 1)

Determining ω_c from Equation (10.3.14) gives ω_c = 2826.8 rad/s, so that

H(s) = 1/[(s/2826.8)^3 + 2(s/2826.8)^2 + 2(s/2826.8) + 1]
     = (2826.8)^3/[s^3 + 2(2826.8)s^2 + 2(2826.8)^2 s + (2826.8)^3]
Figure 10.3.3 shows a plot of the magnitude of the filter as a function of ω. As can be seen from the plot, the filter meets the stop-band specifications, and the passband specifications are exceeded.
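The design steps above can be carried through numerically. The sketch below (pure Python; the helper name is mine, not the book's) reproduces Example 10.3.1, using Equation (10.3.14) so that the stop-band edge is met exactly.

```python
import math

def butterworth_order(delta1, delta2, wp, ws):
    """Equation (10.3.15): minimum Butterworth order for passband ripple
    delta1 at wp and stopband attenuation delta2 at ws."""
    num = (1 / delta2) ** 2 - 1
    den = (1 / (1 - delta1)) ** 2 - 1
    return math.log10(num / den) / (2 * math.log10(ws / wp))

# Specifications of Example 10.3.1: at most 1 dB ripple up to 2000 rad/s,
# at least 15 dB attenuation beyond 5000 rad/s.
delta1 = 1 - 10 ** (-1 / 20)      # about 0.1087
delta2 = 10 ** (-15 / 20)         # about 0.1778
n_exact = butterworth_order(delta1, delta2, 2000.0, 5000.0)
N = math.ceil(n_exact)            # round up, giving N = 3

# Equation (10.3.14): choose wc so the stop-band edge is met exactly
wc = 5000.0 / ((1 / delta2) ** 2 - 1) ** (1 / (2 * N))
print(N, round(n_exact, 4), round(wc, 1))
```

This reproduces N ≈ 2.6045 (rounded up to 3) and ω_c ≈ 2826.8 rad/s.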
The Butterworth filter provides a good approximation to the ideal low-pass characteristic for values of ω near zero, but has a low falloff rate in the transition band. We now consider the Chebyshev filter, which has ripples in the passband, but has a sharper cutoff in the transition band. Thus, for filters of the same order, the Chebyshev filter has
a smaller transition band than the Butterworth filter. Since the derivation of the Chebyshev approximation is quite complicated, we do not give the details here, but only present the steps needed to determine H(s) from the specifications.
The Chebyshev filter is based on the Chebyshev cosine polynomials, defined as

C_N(ω) = cos(N cos^(-1) ω),  |ω| ≤ 1

The magnitude-squared characteristic of the filter is

|H(ω)|^2 = 1/(1 + ε^2 C_N^2(ω))    (10.3.18)
To determine the behavior of this characteristic, we note that for any N, the zeros of C_N(ω) are located in the interval |ω| ≤ 1. Further, for |ω| ≤ 1, |C_N(ω)| ≤ 1, and for |ω| > 1, |C_N(ω)| increases rapidly as |ω| becomes large. It follows that in the interval |ω| ≤ 1, |H(ω)|^2 oscillates about unity such that the maximum value is 1 and the minimum is 1/(1 + ε^2). As |ω| increases, |H(ω)|^2 approaches zero rapidly, thus providing an approximation to the ideal low-pass characteristic.
The magnitude characteristic corresponding to the Chebyshev filter is shown in Figure 10.3.4. As can be seen from the figure, |H(ω)| ripples between 1 and 1/√(1 + ε^2). Since C_N(1) = 1 for all N, it follows that for ω = 1,

|H(1)| = 1/√(1 + ε^2)    (10.3.19)
For large values of ω, that is, values in the stop band, we can approximate |H(ω)| as

|H(ω)| ≈ 1/(ε C_N(ω))    (10.3.21)

The dB attenuation (or loss) from the value at ω = 0 can thus be written as

loss = -20 log_10 |H(ω)| ≈ 20 log_10 ε + 20 log_10 C_N(ω)    (10.3.22)
Equations (10.3.19) and (10.3.22) can be used to determine the two parameters N and ε required for the Chebyshev filter. The parameter ε is determined by using the passband specifications in Equation (10.3.19). This value is then used in Equation (10.3.22), along with the stop-band specifications, to determine N. In order to find H(s), we introduce the parameter

β = (1/N) sinh^(-1)(1/ε)    (10.3.23)

The poles of H(s), s_k = σ_k + jω_k, k = 0, 1, ..., N - 1, are given by

σ_k = -sin[(2k + 1)π/(2N)] sinh β
ω_k = cos[(2k + 1)π/(2N)] cosh β    (10.3.24)
It follows that the poles are located on an ellipse in the s plane given by

σ_k^2/sinh^2 β + ω_k^2/cosh^2 β = 1    (10.3.25)

The major semiaxis of the ellipse is on the jω axis, the minor semiaxis is on the σ axis, and the foci are at ω = ±1, as shown in Figure 10.3.5. The 3-dB cutoff frequency occurs at the point where the ellipse intersects the jω axis, that is, at ω = cosh β.
It is clear from Equation (10.3.24) that the Chebyshev poles are related to the Butterworth poles of the same order. The relation between these poles is shown in Figure 10.3.6 for N = 3 and can be used to determine the locations of the Chebyshev poles geometrically. The corresponding H(s) is obtained from the left-half-plane poles.
[Figure 10.3.5 Poles of the Chebyshev filter.]

[Figure 10.3.6 Relation between the Chebyshev and Butterworth poles for N = 3.]

Example 10.3.2
We consider the design of a Chebyshev filter to have an attenuation of no more than 1 dB for |ω| ≤ 1000 rad/s and at least 10 dB for |ω| ≥ 5000 rad/s.
We will first normalize ω_p to 1, so that ω_s = 5. From the passband specifications, we have, from Equation (10.3.19),

20 log_10 [1/√(1 + ε^2)] = -1

It follows that ε = 0.509. Using this value in Equation (10.3.22), along with the stop-band specifications, gives N = 2, so that, from Equation (10.3.23), β = (1/2) sinh^(-1)(1/0.509) = 0.714. The Chebyshev poles are then

s = -(ω_p/√2) sinh(0.714) ± j(ω_p/√2) cosh(0.714)
  = -545.31 ± j892.92

where ω_p = 1000. Hence,

H(s) = 1/[(s + 545.31)^2 + (892.92)^2]

The corresponding filter with a dc gain of unity is given by

H(s) = [(545.31)^2 + (892.92)^2]/[(s + 545.31)^2 + (892.92)^2]
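The pole computation of Equations (10.3.23) and (10.3.24) is easy to script. The sketch below (the helper name is mine) reproduces the setup of Example 10.3.2; small differences from the rounded values quoted in the example are expected.

```python
import math

def chebyshev_poles(eps, N, wp=1.0):
    """Left-half-plane Chebyshev poles from Equations (10.3.23)-(10.3.24),
    scaled by the passband edge wp."""
    beta = math.asinh(1 / eps) / N            # Equation (10.3.23)
    poles = []
    for k in range(N):
        ang = (2 * k + 1) * math.pi / (2 * N)
        sigma = -math.sin(ang) * math.sinh(beta)
        omega = math.cos(ang) * math.cosh(beta)
        poles.append(complex(wp * sigma, wp * omega))
    return beta, poles

# 1-dB passband ripple: eps = sqrt(10**0.1 - 1), about 0.509
eps = math.sqrt(10 ** 0.1 - 1)
beta, poles = chebyshev_poles(eps, N=2, wp=1000.0)
print(round(beta, 3), poles)
```

This yields β ≈ 0.714 and poles near -549 ± j895; each pole can be checked to lie on the ellipse of Equation (10.3.25).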
[Figure: the Chebyshev dc gain is |H(0)| = 1 for N odd and |H(0)| = 1/√(1 + ε^2) for N even.]
10.4 DIGITAL FILTERS
In recent years, digital filters have supplanted analog filters in many applications because of their higher reliability, flexibility, and superior performance. The digital filter is designed to alter the spectral characteristics of a discrete-time input signal in a specified manner, in much the same way as the analog filter does for continuous-time signals. The specifications for the digital filter are given in terms of the discrete-time Fourier-transform variable Ω, and the design procedure consists of determining the discrete-time transfer function H(z) that meets these specifications. We refer to H(z) as the digital filter.
In certain applications in which a continuous-time signal is to be filtered, the analog filter is implemented as a digital filter for the reasons given. Such an implementation involves an analog-to-digital conversion of the continuous-time signal, to obtain a digital signal that is filtered using a digital filter. The output of the digital filter is then converted back into a continuous-time signal by a digital-to-analog converter. In obtaining this equivalent digital realization of an analog filter, the specifications for the analog filter, which are in terms of the continuous-time Fourier-transform variable ω, must be transformed into an equivalent set of specifications in terms of the variable Ω.
As we saw earlier, digital systems (and, hence, digital filters) can be either FIR or IIR filters. The FIR digital filter, of course, has no counterpart in the analog domain. However, as we saw in previous sections, there are several well-established techniques for designing IIR filters. It would appear reasonable, therefore, to try and use these techniques for the design of IIR digital filters. In the next section, we discuss two commonly used methods for designing IIR digital filters based on analog-filter design techniques. For reasons discussed in the previous section, we confine our discussions to the design of low-pass filters. The procedure essentially involves converting the given digital-filter specifications to equivalent analog specifications, designing an analog filter that meets these specifications, and finally, converting the analog-filter transfer function H_a(s) into an equivalent discrete-time transfer function H(z).
1. From the specified passband and stop-band cutoff frequencies, Ω_p and Ω_s, respectively, determine the equivalent analog frequencies, ω_p and ω_s.
2. Determine the analog transfer function H_a(s), using the techniques of Section 10.3.
3. Expand H_a(s) in partial fractions, and determine the Z-transform of each term from a table of transforms. Combine the terms to obtain H(z).
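Step 3 amounts to mapping each simple pole p of H_a(s) to a discrete-time pole exp(pT) with the same residue, so that a term r/(s - p) becomes r·z/(z - exp(pT)). A minimal sketch (the helper name is mine):

```python
import cmath
import math

def impulse_invariant(residues, poles, T=1.0):
    # Each partial-fraction term r/(s - p) of H_a(s) maps to r*z/(z - exp(p*T))
    return [(r, cmath.exp(p * T)) for r, p in zip(residues, poles)]

# H_a(s) = 1/(s + 1), sampled with T = 1: H(z) = z/(z - e^{-1})
terms = impulse_invariant([1.0], [-1.0], T=1.0)
print(terms)
```

For complex-conjugate pole pairs the same mapping applies termwise; combining the pair gives the real second-order forms tabulated later in this section.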
While the impulse-invariant technique is fairly straightforward to use, it suffers from one disadvantage, namely, that we are in essence obtaining a discrete-time system from a continuous-time system by the process of sampling. We recall that sampling introduces aliasing and that the frequency response corresponding to the sequence h_a(nT) is obtained from Equation (7.5.9) as

H(Ω) = (1/T) Σ_{k=-∞}^{∞} H_a((Ω - 2πk)/T)    (10.4.3)

so that

H(Ω) = (1/T) H_a(Ω/T),  |Ω| ≤ π    (10.4.4)

only if
TABLE 10-5
Laplace transforms and their Z-transform equivalents

H(s)                           H(z)
1/s                            z/(z - 1)
1/s^2                          Tz/(z - 1)^2
1/(s + a)                      z/(z - exp[-aT])
1/(s + a)^2                    Tz exp[-aT]/(z - exp[-aT])^2
1/[(s + a)(s + b)]             [1/(b - a)]{z/(z - exp[-aT]) - z/(z - exp[-bT])}
a/[s^2(s + a)]                 Tz/(z - 1)^2 - (1 - exp[-aT])z/{a(z - 1)(z - exp[-aT])}
a^2/[s(s + a)^2]               z/(z - 1) - z/(z - exp[-aT]) - aT exp[-aT]z/(z - exp[-aT])^2
ω_0/(s^2 + ω_0^2)              z sin ω_0 T/(z^2 - 2z cos ω_0 T + 1)
s/(s^2 + ω_0^2)                z(z - cos ω_0 T)/(z^2 - 2z cos ω_0 T + 1)
ω_0/[(s + a)^2 + ω_0^2]        z exp[-aT] sin ω_0 T/(z^2 - 2z exp[-aT] cos ω_0 T + exp[-2aT])
(s + a)/[(s + a)^2 + ω_0^2]    z(z - exp[-aT] cos ω_0 T)/(z^2 - 2z exp[-aT] cos ω_0 T + exp[-2aT])
H_a(ω) = 0,  |ω| ≥ π/T    (10.4.5)
which is not the case with practical low-pass filters. Thus, the resulting digital filter does not exactly meet the original design specifications.
It may appear that one way to reduce aliasing effects is to decrease the sampling interval T. However, since the analog passband cutoff frequency is given by ω_p = Ω_p/T, decreasing T has the effect of increasing ω_p in the same proportion, so that the amount of aliasing is unchanged. It follows, therefore, that the choice of T has no effect on the performance of the digital filter, and T can be chosen to be unity.
For implementing an analog filter as a digital filter, we can follow exactly the same procedure as before, except that Step 1 is not required, since the specifications are now given directly in the analog domain. From Equation (10.4.4), when the analog filter is sufficiently band limited, the corresponding digital filter has a gain of 1/T, which can become extremely high for low values of T. Generally, therefore, the resulting transfer function H(z) is multiplied by T. The choice of T is usually determined by hardware considerations. We illustrate the procedure by the following example.
Example 10.4.1
Find the digital equivalent of the analog Butterworth filter derived in Example 10.3.1, using the impulse-invariant method.
From Example 10.3.1, with ω_c = 2826.8, the filter transfer function is

H(s) = (2826.8)^3/[s^3 + 2(2826.8)s^2 + 2(2826.8)^2 s + (2826.8)^3]
     = 2826.8/(s + 2826.8) - 2826.8(s + 1413.4)/[(s + 1413.4)^2 + (2448.1)^2] + 2826.8(1413.4)/[(s + 1413.4)^2 + (2448.1)^2]

We can determine the equivalent Z-transfer function from Table 10-5 as

H(z) = 2826.8 z/(z - exp[-2826.8T])
     - 2826.8 z(z - exp[-1413.4T] cos(2448.1T))/(z^2 - 2z exp[-1413.4T] cos(2448.1T) + exp[-2826.8T])
     + 1632.0 z exp[-1413.4T] sin(2448.1T)/(z^2 - 2z exp[-1413.4T] cos(2448.1T) + exp[-2826.8T])

If the sampling interval T is assumed to be 1 ms, the numerical transfer function follows by substituting T = 10^(-3) into this expression.
Example 10.4.2
Let us consider the design of a Butterworth low-pass digital filter that meets the following specifications: The passband magnitude should be constant to within 2 dB for frequencies below 0.2π radians, and the stop-band magnitude in the range 0.4π ≤ Ω ≤ π should be less than -10 dB. Assume that the magnitude at Ω = 0 is normalized to unity.
With Ω_p = 0.2π and Ω_s = 0.4π, since the Butterworth filter has a monotonic magnitude characteristic, it is clear that to meet the specifications, we must have

20 log_10 |H(0.2π)| ≥ -2,  or  |H(0.2π)|^2 ≥ 10^(-0.2)

and

20 log_10 |H(0.4π)| ≤ -10,  or  |H(0.4π)|^2 ≤ 10^(-1)

For the impulse-invariant design technique, we obtain the equivalent analog-domain specifications by setting ω = Ω/T, with T = 1, so that

|H_a(0.2π)|^2 ≥ 10^(-0.2)
|H_a(0.4π)|^2 ≤ 10^(-1)

For the Butterworth filter,

|H_a(ω)|^2 = 1/[1 + (ω/ω_c)^(2N)]

where ω_c and N must be determined from the specifications. This yields the two equations
1 + (0.2π/ω_c)^(2N) = 10^(0.2)
1 + (0.4π/ω_c)^(2N) = 10
Solving for N gives N = 1.972, so that we choose N = 2. With this value of N, we can solve for ω_c from either of the last two equations. If we use the first equation, we just meet the passband specifications, but more than meet the stop-band specifications, whereas if we use the second equation, the reverse is true. Assuming we use the first equation, we get

ω_c = 0.7185 rad/s

The corresponding Butterworth filter is given by

H_a(s) = 1/[(s/ω_c)^2 + √2(s/ω_c) + 1] = 0.5162/(s^2 + 1.016s + 0.5162)

with impulse response

h_a(t) = 1.016 exp[-0.508t] sin(0.508t) u(t)

The impulse response of the digital filter obtained by sampling h_a(t) with T = 1 is

h(n) = 1.016 exp[-0.508n] sin(0.508n) u(n)

By taking the corresponding Z-transform and normalizing so that the magnitude at Ω = 0 is unity, we obtain

H(z) = 0.311z/(z^2 - 1.051z + 0.362)
Figure 10.4.1 shows a plot of |H(Ω)| for Ω in the range [0, π/2]. For this particular example, the analog filter is sufficiently band limited, so that the effects of aliasing are not noticeable. This is not true in general, however. One possibility in such a case is to choose a higher value of N than is obtained from the specifications.
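The numbers in this example can be reproduced with a few lines of Python (a sketch assuming T = 1; the variable names are mine):

```python
import math

# Specs in the analog domain: 2 dB at 0.2*pi, 10 dB at 0.4*pi
wp, ws = 0.2 * math.pi, 0.4 * math.pi
n_exact = math.log10((10 - 1) / (10 ** 0.2 - 1)) / (2 * math.log10(ws / wp))
N = math.ceil(n_exact)                      # -> 2
wc = wp / (10 ** 0.2 - 1) ** (1 / (2 * N))  # passband edge met exactly

# Second-order Butterworth poles lie at -wc/sqrt(2) +/- j*wc/sqrt(2)
a = wc / math.sqrt(2)                       # decay rate and frequency of h_a(t)

# Z-transform of exp(-a*n)*sin(a*n) has denominator
# z^2 - 2*exp(-a)*cos(a)*z + exp(-2a)
b1 = 2 * math.exp(-a) * math.cos(a)
b0 = math.exp(-2 * a)
gain = 1 - b1 + b0                          # numerator scale for unit dc gain
print(round(n_exact, 3), N, round(wc, 4), round(b1, 3), round(b0, 3), round(gain, 3))
```

This gives N ≈ 1.97 (rounded up to 2), ω_c ≈ 0.7185, the denominator z^2 - 1.051z + 0.362, and a unity-dc-gain numerator of about 0.311z.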
s = (2/T)(1 - z^(-1))/(1 + z^(-1))    (10.4.6)

or equivalently,

z = [1 + (T/2)s]/[1 - (T/2)s]    (10.4.7)

where T is a parameter that can be chosen to be any convenient value. It can easily be verified by setting z = r exp[jΘ] that this transformation, which is referred to as the bilinear transformation, does indeed satisfy the three requirements that we mentioned earlier. We have

s = σ + jω = (2/T)(r^2 - 1)/(1 + r^2 + 2r cos Θ) + j(2/T)(2r sin Θ)/(1 + r^2 + 2r cos Θ)

For r < 1, clearly, σ < 0, and for r > 1, we have σ > 0. For r = 1, s is purely imaginary, with
ω = (2/T) sin Θ/(1 + cos Θ) = (2/T) tan(Θ/2)    (10.4.8)

The design procedure is then as follows:

1. Convert the digital specifications to analog specifications, using Equation (10.4.8), where T can be chosen arbitrarily, e.g., T = 2.
2. Find the corresponding analog-filter function H_a(s). Then find the equivalent digital filter as

H(z) = H_a(s)|_{s = (2/T)(1 - z^(-1))/(1 + z^(-1))}    (10.4.9)
The following example illustrates the use of the bilinear transform in digital-filter design.

Example 10.4.3
We consider the problem of Example 10.4.2, but will now obtain a Butterworth design using the bilinear-transform method. With T = 2, we determine the corresponding passband and stop-band cutoff frequencies in the analog domain as

ω_p = tan(0.1π) = 0.3249  and  ω_s = tan(0.2π) = 0.7265

To meet the specifications, we now set

1 + (0.3249/ω_c)^(2N) = 10^(0.2)
1 + (0.7265/ω_c)^(2N) = 10

and solve for N to get N = 1.695. Choosing N = 2 and determining ω_c as before gives

ω_c = 0.4195

The corresponding analog filter is

H_a(s) = (0.4195)^2/[s^2 + √2(0.4195)s + (0.4195)^2]

and the equivalent digital filter is

H(z) = H_a(s)|_{s = (1 - z^(-1))/(1 + z^(-1))} = 0.0995(1 + z^(-1))^2/(1 - 0.9316z^(-1) + 0.3294z^(-2))
Figure 10.4.3 shows the magnitude characteristic of the digital filter for Ω in the range [0, π].
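The bilinear-transform algebra of this example can be checked numerically; the closed-form coefficient expressions below are mine, obtained by clearing the (1 + z^(-1))^2 factor after the substitution:

```python
import math

# Example 10.4.3: bilinear transform with T = 2, so omega = tan(Theta/2)
wp, ws = math.tan(0.1 * math.pi), math.tan(0.2 * math.pi)
n_exact = math.log10(9 / (10 ** 0.2 - 1)) / (2 * math.log10(ws / wp))
N = math.ceil(n_exact)                  # -> 2
wc = ws / 9 ** (1 / (2 * N))            # stop-band edge met exactly

# H_a(s) = wc^2/(s^2 + sqrt(2)*wc*s + wc^2); substitute s = (1 - z^-1)/(1 + z^-1)
c0, c1 = wc * wc, math.sqrt(2) * wc
den0 = 1 + c1 + c0          # z^0 denominator term before normalization
a1 = (2 * c0 - 2) / den0    # normalized z^-1 coefficient of the denominator
a2 = (1 - c1 + c0) / den0   # normalized z^-2 coefficient
b = c0 / den0               # gain multiplying (1 + z^-1)^2
print(round(ws, 4), N, round(wc, 4), round(b, 4), round(a1, 4), round(a2, 4))
```

The substitution maps the analog dc point s = 0 to z = 1, so the digital filter inherits the analog filter's unity dc gain without renormalization.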
In our earlier discussions, we noted that it is desirable that a filter have a linear phase characteristic. Although an IIR digital filter does not in general have a linear phase, we can obtain such a characteristic with a FIR digital filter. In this section, we consider a technique for the design of FIR digital filters.
We first establish that a FIR digital filter of length N has a linear phase characteristic, provided that its impulse response satisfies the symmetry condition

h(n) = h(N - 1 - n)    (10.4.10)
This can be easily verified by determining H(Ω). We consider the case of N even and N odd separately. For N even, we write

H(Ω) = Σ_{n=0}^{N-1} h(n) exp[-jΩn]
     = Σ_{n=0}^{N/2-1} h(n) exp[-jΩn] + Σ_{n=N/2}^{N-1} h(n) exp[-jΩn]

Now we replace n by N - 1 - n in the second term in the last equation and use Equation (10.4.10) to get

H(Ω) = Σ_{n=0}^{N/2-1} h(n){exp[-jΩn] + exp[-jΩ(N - 1 - n)]}
     = {2 Σ_{n=0}^{N/2-1} h(n) cos[Ω(n - (N - 1)/2)]} exp[-jΩ(N - 1)/2]
Similarly, for N odd, we can show that

H(Ω) = {h((N - 1)/2) + 2 Σ_{n=0}^{(N-3)/2} h(n) cos[Ω(n - (N - 1)/2)]} exp[-jΩ(N - 1)/2]
In both these cases, the term in braces is real, so that the phase of H(Ω) is given by the complex exponential. It follows that the system has a linear phase shift, with a corresponding delay of (N - 1)/2 samples.
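The linear-phase property is easy to confirm numerically: for any sequence satisfying the symmetry condition (10.4.10), H(Ω) exp[jΩ(N - 1)/2] is purely real. A short sketch with arbitrary symmetric taps:

```python
import cmath

def freq_resp(h, omega):
    # H(Omega) = sum_n h(n) * exp(-j*Omega*n)
    return sum(hn * cmath.exp(-1j * omega * n) for n, hn in enumerate(h))

# Arbitrary taps with h(n) = h(N-1-n), N = 6
h = [1.0, 3.0, -2.0, -2.0, 3.0, 1.0]
N = len(h)
for omega in [0.1, 0.7, 1.3, 2.9]:
    # Removing the linear-phase factor should leave a purely real number
    rotated = freq_resp(h, omega) * cmath.exp(1j * omega * (N - 1) / 2)
    assert abs(rotated.imag) < 1e-9
print("linear phase confirmed: delay of (N-1)/2 samples")
```

The same check with antisymmetric taps, h(n) = -h(N - 1 - n), would leave a purely imaginary number, corresponding to an additional 90-degree phase shift.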
Given a desired frequency response H_d(Ω), such as an ideal low-pass characteristic, which is symmetric about the origin, the corresponding impulse response h_d(n) is symmetric about the point n = 0, but, in general, is of infinite duration. The most direct way of obtaining an equivalent FIR filter of length N is to just truncate this infinite sequence. The truncation operation, as in our earlier discussion of the DFT in Chapter 9, can be considered to result from multiplying the infinite sequence by a window sequence w(n). If h_d(n) is symmetric about n = 0, we get a linear-phase filter that is, however, noncausal. We can get a causal impulse response by shifting the truncated sequence to the right by (N - 1)/2 samples. The desired digital filter H(z) is then determined as the Z-transform of this truncated, shifted sequence. We summarize these steps as follows: truncate h_d(n) with a window of length N, shift the result to the right by (N - 1)/2 samples to obtain a causal sequence, and take its Z-transform.
[Figure 10.4.4 Frequency response obtained by using rectangular window on ideal filter response.]
Several window functions are in common use; each is defined for 0 ≤ n ≤ N - 1 and is zero elsewhere.

Rectangular:

w_R(n) = 1    (10.4.11a)

Bartlett:

w_Bart(n) = 2n/(N - 1),  0 ≤ n ≤ (N - 1)/2
          = 2 - 2n/(N - 1),  (N - 1)/2 ≤ n ≤ N - 1    (10.4.11b)

Hanning:

w_Han(n) = (1/2)[1 - cos(2πn/(N - 1))]    (10.4.11c)

Hamming:

w_H(n) = 0.54 - 0.46 cos(2πn/(N - 1))    (10.4.11d)

Blackman:

w_Bl(n) = 0.42 - 0.5 cos(2πn/(N - 1)) + 0.08 cos(4πn/(N - 1))    (10.4.11e)

Kaiser:

w_K(n) = I_0{α√[((N - 1)/2)^2 - (n - (N - 1)/2)^2]} / I_0[α(N - 1)/2]    (10.4.11f)
where I_0(x) is the modified zero-order Bessel function of the first kind, given by I_0(x) = (1/2π) ∫_0^{2π} exp[x cos θ] dθ, and α is a parameter that affects the relative widths of the main and side lobes. When α is zero, we get the rectangular window, and for α = 5.414, we get the Hamming window. In general, as α becomes larger, the main lobe becomes wider and the side lobes smaller. Of the windows described previously, the most commonly used is the Hamming window, and the most versatile is the Kaiser window.
Example 10.4.4
Let us consider the design of a nine-point FIR digital filter to approximate an ideal low-pass digital filter with a cutoff frequency Ω_c = 0.2π. The impulse response of the desired filter is

h_d(n) = sin(0.2πn)/(πn),  with h_d(0) = 0.2

so that

H'(z) = (0.147/π)(z^4 + z^(-4)) + (0.317/π)(z^3 + z^(-3)) + (0.475/π)(z^2 + z^(-2)) + (0.588/π)(z + z^(-1)) + 0.2

and

H(z) = z^(-4) H'(z) = (0.147/π)(1 + z^(-8)) + (0.317/π)(z^(-1) + z^(-7)) + (0.475/π)(z^(-2) + z^(-6)) + (0.588/π)(z^(-3) + z^(-5)) + 0.2z^(-4)

For N = 9, the Hamming window defined in Equation (10.4.11d) is given by the sequence

w_H(n) = {0.08, 0.215, 0.54, 0.865, 1, 0.865, 0.54, 0.215, 0.08}

Applying this window to the coefficients of H'(z) gives

H'(z) = (0.012/π)(z^4 + z^(-4)) + (0.068/π)(z^3 + z^(-3)) + (0.257/π)(z^2 + z^(-2)) + (0.508/π)(z + z^(-1)) + 0.2

Finally,

H(z) = z^(-4) H'(z) = (0.012/π)(1 + z^(-8)) + (0.068/π)(z^(-1) + z^(-7)) + (0.257/π)(z^(-2) + z^(-6)) + (0.508/π)(z^(-3) + z^(-5)) + 0.2z^(-4)

The frequency responses of the filters obtained using both the rectangular and Hamming windows are shown in Figure 10.4.5, with the gain at Ω = 0 normalized to unity. As can be seen from the figure, the response corresponding to the Hamming window is smoother than the one for the rectangular window.
[Figure 10.4.5 Response of the FIR digital filter of Example 10.4.4. (a) Rectangular window. (b) Hamming window.]
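The coefficients of this example can be generated programmatically. The sketch below (helper names are mine) builds the truncated, shifted ideal low-pass response and applies the Hamming window of Equation (10.4.11d):

```python
import math

def ideal_lowpass(n, cutoff):
    # h_d(n) = sin(cutoff*n)/(pi*n), with h_d(0) = cutoff/pi
    return cutoff / math.pi if n == 0 else math.sin(cutoff * n) / (math.pi * n)

def hamming(n, N):
    # Equation (10.4.11d)
    return 0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1))

N, cutoff = 9, 0.2 * math.pi
M = (N - 1) // 2
h_rect = [ideal_lowpass(n - M, cutoff) for n in range(N)]     # truncated, shifted
h_hamm = [hn * hamming(n, N) for n, hn in enumerate(h_rect)]  # windowed
print([round(c * math.pi, 3) for c in h_hamm])
```

Multiplying the windowed taps by π recovers the coefficients 0.012, 0.068, 0.257, and roughly 0.508 quoted in the example, with the unwindowed center tap 0.2.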
Example 10.4.5
FIR digital filters can be used to approximate filters such as the ideal differentiator or the Hilbert transformer, which cannot be implemented in the analog domain. In the analog domain, the ideal differentiator is described by the frequency response

H(ω) = jω

while the Hilbert transformer is described by the frequency response

H(ω) = -j sgn(ω)

To design a discrete-time implementation of such filters, we start by specifying the desired response in the frequency domain as

H_d(Ω) = Σ_{n=-∞}^{∞} h_d(n) exp[-jΩn]

where the coefficients h_d(n) are the corresponding impulse-response samples, given by

h_d(n) = (1/2π) ∫_{-π}^{π} H_d(Ω) exp[jΩn] dΩ

As we have seen earlier, if the desired frequency function H_d(Ω) is purely real, the impulse response is even and symmetric; that is, h_d(n) = h_d(-n). On the other hand, if the frequency response is purely imaginary, the impulse response is odd and symmetric, so that h_d(n) = -h_d(-n).
We can now design a FIR digital filter by following the procedure given earlier. We will illustrate this for the case of the Hilbert transformer. This transformer is used to generate signals that are in phase quadrature to an input sinusoidal signal (or, more generally, an input narrow-band waveform). That is, if the input to a Hilbert transformer is the signal x(t) = cos ω_0 t, the output is y(t) = sin ω_0 t. The Hilbert transformer is used in communication systems in various modulation schemes.
The impulse response for the Hilbert transformer is obtained as

h_d(n) = (1/2π) ∫_{-π}^{π} -j sgn(Ω) exp[jΩn] dΩ
       = 0,       n even
       = 2/(nπ),  n odd

For a rectangular window of length 15, we obtain

h_d(n) = {-2/(7π), 0, -2/(5π), 0, -2/(3π), 0, -2/π, 0, 2/π, 0, 2/(3π), 0, 2/(5π), 0, 2/(7π)},  -7 ≤ n ≤ 7

which can be realized with a delay of seven samples by the transfer function

H(z) = -(2/7π) - (2/5π)z^(-2) - (2/3π)z^(-4) - (2/π)z^(-6) + (2/π)z^(-8) + (2/3π)z^(-10) + (2/5π)z^(-12) + (2/7π)z^(-14)
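A quick numerical check of the truncated Hilbert transformer (a sketch; the helper name is mine): at Ω = π/2, where the ideal response is -j, the 15-tap approximation should come out close to -j.

```python
import cmath
import math

def hilbert_taps(length):
    # h_d(n) = 2/(n*pi) for n odd, 0 for n even, truncated to |n| <= (length-1)/2
    M = (length - 1) // 2
    return [0.0 if n % 2 == 0 else 2 / (n * math.pi) for n in range(-M, M + 1)]

h = hilbert_taps(15)
# Evaluate the (noncausal, zero-centered) frequency response at Omega = pi/2
H = sum(hn * cmath.exp(-1j * math.pi / 2 * n) for n, hn in zip(range(-7, 8), h))
print(H)
```

The real part cancels exactly by the odd symmetry of the taps, and the imaginary part is within about 10 percent of the ideal value -1; a longer window or a tapered window would reduce the Gibbs-type error further.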
Another approach to FIR design is to choose the filter coefficients to minimize a measure of the deviation from the desired response, such as the cost function

J = ∫_{-π}^{π} W(Ω)|H_d(Ω) - H(Ω)|^2 dΩ    (10.4.12)

where W(Ω) is a nonnegative weighting function that reflects the significance attached to the deviation from the desired response in a particular range of frequencies. W(Ω) is chosen to be relatively large over the range of frequencies considered to be important.
Quite often, instead of minimizing the deviation at all frequencies, as in Equation (10.4.12), we can choose to do so only at a finite number of frequencies. The cost function then becomes

J = Σ_{i=1}^{M} W(Ω_i)|H_d(Ω_i) - H(Ω_i)|^2    (10.4.13)
where Ω_i, 1 ≤ i ≤ M, are a set of frequency samples over the range of interest. Typically, the minimization problem is quite complex, and the resulting equations cannot be solved analytically. An iterative search procedure is usually employed to determine the optimum set of filter coefficients. We start with an arbitrary initial choice for the filter coefficients and successively adjust them such that the resulting cost function is reduced at each step. The procedure stops when a further adjustment of the coefficients does not result in a reduction in the cost function. Several standard algorithms and software packages are available for determining the optimum filter coefficients.
A popular technique for the design of FIR filters is based on the fact that the frequency response of a linear-phase FIR filter can be expressed as a trigonometric polynomial similar to the Chebyshev polynomial. The filter coefficients are chosen to minimize the maximum deviation from the desired response. Again, computer programs are available to determine the optimum filter coefficients.
• A given set of specifications can be met by a Chebyshev filter of lower order than a Butterworth filter.
• The poles of the Butterworth filter are spaced uniformly around the unit circle in the s plane. The poles of the Chebyshev filter are located on an ellipse in the s plane and can be obtained geometrically from the Butterworth poles.
• Digital filters can be either IIR or FIR.
• Digital IIR filters can be obtained from equivalent analog designs by using either the impulse-invariant technique or the bilinear transformation.
• Digital filters designed using impulse invariance exhibit distortion due to aliasing. No aliasing distortion arises from the use of the bilinear transformation method.
• Digital FIR filters are often chosen to have a linear phase characteristic. One method of obtaining an FIR filter is to determine the impulse response h_d(n) corresponding to the desired filter characteristic H_d(Ω) and to truncate the resulting sequence by multiplying it by an appropriate window function.
• For a given filter length, the transition band depends on the window function.
10.7 PROBLEMS

10.1. Design an analog low-pass Butterworth filter to meet the following specifications: the attenuation is to be less than 1.5 dB up to 1 kHz and to be at least 15 dB for frequencies greater than 4 kHz.
10.2. Use the frequency transformations of Section 10.2 to obtain an analog Butterworth filter with an attenuation of less than 1.5 dB for frequencies up to 3 kHz, from your design in Problem 10.1.
10.3. Design a Butterworth band-pass filter to meet the following specifications:

ω_l = lower cutoff frequency = 200 Hz
ω_u = upper cutoff frequency = 300 Hz

The attenuation in the passband is to be less than 1 dB. The attenuation in the stop band is to be at least 10 dB.
10.4. A Chebyshev low-pass filter is to be designed to have a passband ripple ≤ 2 dB and a cutoff frequency of 1500 Hz. The attenuation for frequencies greater than 5000 Hz must be at least 20 dB. Find ε, N, and H(s).
10.5. Consider the third-order Butterworth and Chebyshev filters with the 3-dB cutoff frequency normalized to 1 in both cases. Compare and comment on the corresponding characteristics in both passbands and stop bands.
10.6. In Problem 10.5, what order of Butterworth filter compares to the Chebyshev filter of order 3?
10.7. Design a Chebyshev filter to meet the specifications of Problem 10.1. Compare the frequency response of the resulting filter to that of the Butterworth filter of Problem 10.1.
10.8. Obtain the digital equivalent of the low-pass filter of Problem 10.1 using the impulse-invariant method. Assume a sampling frequency of (a) 6 kHz, (b) 10 kHz.
10.9. Plot the frequency responses of the digital filters of Problem 10.8. Comment on your results.
10.10. The bilinear-transform technique enables us to design IIR digital filters using standard analog designs. However, if we want to replace an analog filter by an equivalent A/D-digital filter-D/A combination, we have to prewarp the given cutoff frequencies before designing the analog filter. Thus, if we want to replace an analog Butterworth filter by a digital filter, we first design the analog filter by replacing the passband and stopband cutoff frequencies, ω_p and ω_s, respectively, by

ω'_p = (2/T) tan(ω_p T/2)
ω'_s = (2/T) tan(ω_s T/2)

The equivalent digital filter is then obtained from the analog design by using Equation (10.4.9). Use this method to obtain a digital filter to replace the analog filter in Problem 10.1. Assume that the sampling frequency is 3 kHz.
10.11. Repeat Problem 10.10 for the bandpass filter of Problem 10.3.
10.12. (a) Show that the frequency response H(Ω) of a filter is (i) purely real if the impulse response h(n) is even and symmetric (i.e., h(n) = h(-n)) and (ii) purely imaginary if h(n) is odd and symmetric (i.e., h(n) = -h(-n)).
(b) Use your result in Part (a) to determine the phase of an N-point FIR filter if (i) h(n) = h(N - 1 - n) and (ii) h(n) = -h(N - 1 - n).
10.13. (a) The ideal differentiator has frequency response

H_d(Ω) = jΩ,  0 ≤ |Ω| ≤ π

Show that the Fourier series coefficients for H_d(Ω) are

h_d(n) = (-1)^n/n,  n ≠ 0

(b) Hence, design a 16-point differentiator using both rectangular and Hanning windows.
10.14. (a) Design a 12-tap FIR filter (N = 12) to approximate the ideal low-pass characteristic with cutoff π/6 radians.
(b) Plot the frequency response of the filter you designed in Part (a).
(c) Use the Hanning window to modify the results of Part (a). Plot the frequency response of the resulting filter, and comment on it.
Appendix A
Complex Numbers
Many engineering problems can be treated and solved by methods of complex analysis. Roughly speaking, these problems can be subdivided into two large classes. The first class consists of elementary problems for which the knowledge of complex numbers and calculus is sufficient. Applications of this class of problems are in differential equations, electric circuits, and the analysis of signals and systems. The second class of problems requires detailed knowledge of the theory of complex analytic functions. Problems in areas such as electrostatics, electromagnetics, and heat transfer belong to this category.
In this appendix, we concern ourselves with problems of the first class. Problems of the second class are beyond the scope of the text.
A.1 DEFINITION
A complex number z = x + jy, where j = √-1, consists of two parts, a real part x and an imaginary part y.† This form of representation for complex numbers is called the rectangular or Cartesian form, since z can be represented in rectangular coordinates by the point (x, y), as shown in Figure A.1.
The horizontal x axis is called the real axis, and the vertical y axis is called the imaginary axis. The x-y plane in which the complex numbers are represented in this way is called the complex plane. Two complex numbers are equal if their real parts are equal and their imaginary parts are equal.
The complex number z can also be written in polar form. The polar coordinates r
and 0 are related to the Cartesian coordinates x and y by
I
Mathematicians use i to tepres€nl V- I, bur engineers use,, tor this purpose because i is usually
used to rcpresent currenl in electric circuits. "l."iricrl
Figure A.1 Representation of the complex number z = x + jy in the complex plane.
z = r exp[jθ]   (A.3)

where r, the magnitude of z, is denoted by |z|. From Figure A.1,

|z| = r = √(x² + y²)   (A.4)

The value of the angle θ that lies in the range

−π < θ ≤ π   (A.5)

is called the principal value of the argument of z. Geometrically, |z| is the length of the vector from the origin to the point z in the complex plane, and θ is the directed angle from the positive x axis to z.
Example A.1
For the complex number z = 1 + j√3,

r = √(1² + (√3)²) = 2  and  θ = arctan √3 = π/3 + 2nπ

The principal value of θ is π/3, and therefore,

z = 2(cos π/3 + j sin π/3)
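The conversion in this example is easy to check numerically with Python's cmath module (a minimal sketch; cmath.polar returns the magnitude and the principal argument):

```python
import cmath
import math

z = 1 + 1j * math.sqrt(3)
r, theta = cmath.polar(z)        # r = |z|, theta = principal argument
z_back = cmath.rect(r, theta)    # rebuild z from its polar form
```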
A.2 ARITHMETIC OPERATIONS

The complex conjugate of z = x + jy is defined as

z* = x − jy   (A.6)

Since

z + z* = 2x  and  z − z* = j2y   (A.7)

it follows that

Re{z} = x = ½(z + z*)  and  Im{z} = y = (1/2j)(z − z*)   (A.8)

Note that if z = z*, then the number is real, and if z = −z*, then the number is purely imaginary.
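The conjugate identities above can be confirmed for a sample value (a minimal Python sketch using the built-in complex type):

```python
z = 3 - 4j
z_conj = z.conjugate()

re_part = (z + z_conj) / 2        # Re{z} = (z + z*)/2
im_part = (z - z_conj) / 2j       # Im{z} = (z - z*)/(2j)
```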
A.2.1 Addition and Subtraction

The sum of two complex numbers is obtained by adding their real parts and their imaginary parts separately: z1 + z2 = (x1 + x2) + j(y1 + y2). Subtraction is defined analogously, and both operations can be carried out graphically by the parallelogram rule.

Figure A.2 Addition and subtraction of complex numbers.

A.2.2 Multiplication

In polar form, the product of z1 = r1 exp[jθ1] and z2 = r2 exp[jθ2] is

z1 z2 = r1 r2 exp[j(θ1 + θ2)]

That is, the magnitude of the product of two complex numbers is the product of the magnitudes of the two numbers, and the angle of the product is the sum of the two angles.
A.2.3 Division
Division is defined as the inverse of multiplication. The quotient zr/zris obtained by
multiplying both the numerator and denominator by the conjugate of er:
z, (r, +/.y,)
=
z2 @z + jYr)
- (\ + iy)@2
x22 + yl
- iyz)
x,xz *
- -i. lJz
_
rt-
,xzlr - xrlz
-, t- r?i vZ (A.13)
z, _ r,exp[ie,J
z2 r, exp[jOr]
: lexp[l(0, - 0r)] (A.14) I
That is, the magnitude of the quotient is the quotient of the magnitudes, and the angle
of the quotient is the difference of the angle of the numerator and the angle of the
denominator.
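The product and quotient rules can be verified numerically (a sketch using cmath; the sample magnitudes and angles are arbitrary):

```python
import cmath

z1 = cmath.rect(2.0, 0.3)   # r1 = 2, theta1 = 0.3 rad
z2 = cmath.rect(5.0, 1.1)   # r2 = 5, theta2 = 1.1 rad

prod = z1 * z2              # expect magnitude 2*5 = 10, angle 0.3 + 1.1 = 1.4
quot = z1 / z2              # expect magnitude 2/5 = 0.4, angle 0.3 - 1.1 = -0.8
```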
For any complex numbers z1, z2, and z3, we have the following:

• Commutative laws:
z1 + z2 = z2 + z1
z1 z2 = z2 z1   (A.15)

• Associative laws:
(z1 + z2) + z3 = z1 + (z2 + z3)
(z1 z2) z3 = z1 (z2 z3)

• Distributive law:
z1 (z2 + z3) = z1 z2 + z1 z3
A.3 POWERS AND ROOTS

By De Moivre's theorem,² the nth power of z = r exp[jθ] is zⁿ = rⁿ exp[jnθ]. The n distinct nth roots w_k of z are

w_k = |z|^(1/n) exp[j(θ + 2πk)/n],  k = 0, 1, ..., n − 1   (A.19)

For example, the five fifth roots of 32 exp[jπ] are

w0 = 2 exp[jπ/5], w1 = 2 exp[j3π/5], w2 = 2 exp[jπ] = −2, w3 = 2 exp[j7π/5], w4 = 2 exp[j9π/5]
Notice that the roots of a complex number lie on a circle in the complex plane. The radius of the circle is |z|^(1/n). The roots are uniformly distributed around the circle,
²Abraham De Moivre (1667–1754) was a French mathematician who introduced imaginary quantities into trigonometry and contributed to the theory of mathematical probability.
Figure A.3 Roots of 32 exp[jπ].
and the angle between adjacent roots is 2π/n radians. The five roots of 32 exp[jπ] are shown in Figure A.3.
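Equation (A.19) translates directly into code; the sketch below reproduces the five fifth roots of 32 exp[jπ] (the function name nth_roots is ours):

```python
import cmath
import math

def nth_roots(z, n):
    # w_k = |z|**(1/n) * exp[j (theta + 2 pi k)/n],  k = 0, ..., n-1
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1.0 / n), (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(32 * cmath.exp(1j * math.pi), 5)
```

Each root has magnitude 32^(1/5) = 2, and raising any of them to the fifth power recovers −32.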
A.4 INEQUALITIES

For complex numbers, we observe the important triangle inequality,

|z1 + z2| ≤ |z1| + |z2|
Appendix B
Mathematical Relations
tan(α ± β) = (tan α ± tan β)/(1 ∓ tan α tan β)
sin α sin β = ½[cos(α − β) − cos(α + β)]
cos α cos β = ½[cos(α − β) + cos(α + β)]
sin α cos β = ½[sin(α − β) + sin(α + β)]
exp[α − β] = exp[α]/exp[β]
(exp[α])^β = exp[αβ]
ln αβ = ln α + ln β
ln(α/β) = ln α − ln β
ln α^β = β ln α
ln α is the inverse of exp[α]; that is, exp[ln α] = α. Similarly, 10^(log α) = α and 10^(−log α) = α⁻¹.

B.3 SPECIAL FUNCTIONS

Γ(α) = ∫₀^∞ τ^(α−1) exp[−τ] dτ
Γ(α + 1) = α Γ(α)
Γ(k + 1) = k!,  k = 0, 1, 2, ...
Γ(½) = √π
sin x = x − x³/3! + x⁵/5! − ⋯ + (−1)ⁿ x^(2n+1)/(2n + 1)! + ⋯

Σ_{k=1}^{N} k = N(N + 1)/2
Σ_{k=1}^{N} k² = N(N + 1)(2N + 1)/6
Σ_{k=1}^{N} k⁵ = N²(N + 1)²(2N² + 2N − 1)/12
Σ_{k=1}^{N} (2k − 1) = N²
Σ_{k=1}^{N} (2k − 1)² = N(4N² − 1)/3
Σ_{k=1}^{N} k(k!) = (N + 1)! − 1
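The closed-form sums above can be spot-checked by brute force (a quick Python sketch; N = 12 is an arbitrary test value):

```python
from math import factorial

N = 12
ks = range(1, N + 1)

s1 = sum(ks)                                # N(N+1)/2
s2 = sum(k ** 2 for k in ks)                # N(N+1)(2N+1)/6
s5 = sum(k ** 5 for k in ks)                # N^2 (N+1)^2 (2N^2 + 2N - 1)/12
odd1 = sum(2 * k - 1 for k in ks)           # N^2
odd2 = sum((2 * k - 1) ** 2 for k in ks)    # N(4N^2 - 1)/3
sfact = sum(k * factorial(k) for k in ks)   # (N+1)! - 1, a telescoping sum
```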
Σ_k (m choose k)(n choose p − k) = (m + n choose p)
(n choose 0) + (n choose 2) + (n choose 4) + ⋯ = 2^(n−1)
(n choose 1) + (n choose 3) + (n choose 5) + ⋯ = 2^(n−1)
Σ_{k=m}^{n} (k choose m) = (n + 1 choose m + 1)
B.5.2 Series of Exponentials

Σ_{n=0}^{N−1} exp[j2πkn/N] = { 0,  1 ≤ k ≤ N − 1
                             { N,  k = 0, N

Σ_{n=0}^{∞} xⁿ = 1/(1 − x),  |x| < 1
Σ_{n=0}^{∞} n xⁿ = x/(1 − x)²,  |x| < 1
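The orthogonality sum of exponentials, which underlies the discrete Fourier series, and the geometric-series results can all be checked numerically (a sketch; N = 8 is an arbitrary test length):

```python
import cmath
import math

def exp_sum(k, N):
    # sum over n = 0, ..., N-1 of exp[j 2 pi k n / N]
    return sum(cmath.exp(2j * math.pi * k * n / N) for n in range(N))
```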
B.6 DEFINITE INTEGRALS

∫₀^∞ exp[−αx²] dx = ½√(π/α)
∫₀^∞ exp[−αx] cos βx dx = α/(α² + β²)
∫₀^∞ exp[−αx] sin βx dx = β/(α² + β²)
∫₀^∞ α dx/(α² + x²) = π/2,  α > 0
∫₀^∞ (sin x/x) dx = π/2
∫₀^{2π} (1 − cos x)ⁿ cos nx dx = (−1)ⁿ π/2^(n−1)
∫₀^{2π} cos mx cos nx dx = ∫₀^{2π} sin mx sin nx dx = { 0,  m ≠ n
                                                      { π,  m = n ≠ 0
∫₀^{2π} sin mx cos nx dx = 0
B.7 INDEFINITE INTEGRALS

∫ u dv = uv − ∫ v du
∫ xⁿ dx = x^(n+1)/(n + 1) + C,  n ≠ −1
∫ exp[x] dx = exp[x] + C
∫ x exp[ax] dx = (1/a²)(ax − 1) exp[ax] + C
∫ xⁿ exp[ax] dx = (1/a) xⁿ exp[ax] − (n/a) ∫ x^(n−1) exp[ax] dx
∫ dx/x = ln|x| + C
∫ ln x dx = x ln|x| − x + C
∫ xⁿ ln x dx = (x^(n+1)/(n + 1)²)[(n + 1) ln|x| − 1] + C
∫ dx/(x ln x) = ln|ln x| + C
∫ cos x dx = sin x + C
∫ sin x dx = −cos x + C
∫ sec² x dx = tan x + C
∫ csc² x dx = −cot x + C
∫ tan x dx = ln|sec x| + C
∫ cot x dx = ln|sin x| + C
∫ sec x dx = ln|sec x + tan x| + C
∫ csc x dx = ln|csc x − cot x| + C
∫ sec x tan x dx = sec x + C
∫ csc x cot x dx = −csc x + C
∫ sinⁿ x dx = −(1/n) sin^(n−1) x cos x + ((n − 1)/n) ∫ sin^(n−2) x dx
∫ cosⁿ x dx = (1/n) cos^(n−1) x sin x + ((n − 1)/n) ∫ cos^(n−2) x dx
∫ x sin x dx = sin x − x cos x + C
∫ xⁿ sin x dx = −xⁿ cos x + n ∫ x^(n−1) cos x dx
∫ sech² x dx = tanh x + C
∫ csch² x dx = −coth x + C
∫ sech x tanh x dx = −sech x + C
∫ dx/√(x² ± a²) = ln|x + √(x² ± a²)| + C
∫ dx/(x²√(x² − a²)) = √(x² − a²)/(a²x) + C
∫ dx/(x√(x² − a²)) = (1/a) sec⁻¹(x/a) + C
∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a) + C
∫ dx/√(a² + x²) = ln|x + √(a² + x²)| + C
∫ dx/(x√(a² + x²)) = −(1/a) ln|(a + √(a² + x²))/x| + C
∫ dx/(x²√(a² + x²)) = −√(a² + x²)/(a²x) + C
∫ dx/(a² + x²)^(3/2) = x/(a²√(a² + x²)) + C
∫ dx/√(a² − x²) = sin⁻¹(x/a) + C
∫ dx/√(2ax − x²) = cos⁻¹((a − x)/a) + C
∫ x dx/√(2ax − x²) = −√(2ax − x²) + a cos⁻¹((a − x)/a) + C
∫ dx/(x√(2ax − x²)) = −√(2ax − x²)/(ax) + C
∫ √(2ax − x²) dx = ((x − a)/2)√(2ax − x²) + (a²/2) cos⁻¹((a − x)/a) + C
∫ x√(2ax − x²) dx = −((3a² + ax − 2x²)/6)√(2ax − x²) + (a³/2) cos⁻¹((a − x)/a) + C
Appendix C
Elementary Matrix Theory

This appendix presents the minimum amount of matrix theory needed to comprehend the material in Chapters 2 and 6 of the text. It is recommended that even those well versed in matrix theory read the material herein to become familiar with the notation. For those not so well versed, the presentation, though terse, is oriented toward them. For a more comprehensive presentation of matrix theory, we suggest the study of textbooks solely concerned with the subject.
C.2.1 Addition and Subtraction

[c_ij] = [a_ij] + [b_ij]   (C.1)

Matrix subtraction is analogously defined. The matrices must be of the same order. Matrix addition is commutative and associative.
C.2.2 Differentiation and Integration

The derivative or integral of a matrix is obtained by differentiating or integrating each element of the matrix.
C.2.3 Matrix Multiplication

Matrix multiplication is an extension of the dot product of vectors. Recall that the dot product of the two N-dimensional vectors u and v is defined as

u · v = Σ_{i=1}^{N} u_i v_i

Elements c_ij of the product matrix C = AB are found by taking the dot product of the ith row of the matrix A and the jth column of the matrix B, so that

c_ij = Σ_{k=1}^{n} a_ik b_kj   (C.2)

where n, the number of columns of A, must equal the number of rows of B.
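Equation (C.2) is exactly the inner loop of a textbook matrix-multiply routine (a sketch; matmul is our helper name, with matrices stored as lists of rows):

```python
def matmul(A, B):
    # c_ij = dot product of the ith row of A and the jth column of B (C.2).
    n = len(B)           # inner dimension: columns of A = rows of B
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]
```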
In upper or lower triangular matrices, the diagonal elements need not be zero. An upper triangular matrix added to or multiplied by an upper triangular matrix results in an upper triangular matrix, and similarly for lower triangular matrices. For example, the matrices

A1 = [ 1  4  2 ]      A2 = [  1  0  0 ]
     [ 0  3 -2 ]           [  2  1  0 ]
     [ 0  0 -5 ]           [ -1  0  7 ]

are upper and lower triangular matrices, respectively.
The matrices

A = [ -1  2  4 ]      B = [  0 -3  4 ]
    [  2  5 -3 ]          [  3  0 -7 ]
    [  4 -3  6 ]          [ -4  7  0 ]

are symmetric and skew-symmetric matrices, respectively.
A A⁻¹ = A⁻¹ A = I

where I is the n × n unit matrix. If the determinant of A is zero, then A has no inverse and is called singular; on the other hand, if the determinant is nonzero, the inverse exists, and A is called a nonsingular matrix.
In general, finding the inverse of a matrix is a tedious process. For some special cases, the inverse is easily determined. For a 2 × 2 matrix

A = [ a11  a12 ]
    [ a21  a22 ]

we have

A⁻¹ = (1/(a11 a22 − a12 a21)) [  a22  -a12 ]
                              [ -a21   a11 ]   (C.4)
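Equation (C.4) in code (a sketch; the helper name inv2 is ours, and it raises an error for a singular matrix):

```python
def inv2(a11, a12, a21, a22):
    # Equation (C.4): inverse of a 2 x 2 matrix via its determinant.
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular")
    return [[a22 / det, -a12 / det],
            [-a21 / det, a11 / det]]

M = inv2(3, 4, 1, 3)    # det = 5
```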
For a diagonal matrix

A = [ a11   0   ⋯   0  ]
    [  0   a22  ⋯   0  ]
    [  ⋮    ⋮   ⋱   ⋮  ]
    [  0    0   ⋯  a_nn ]

the inverse is

A⁻¹ = [ 1/a11    0     ⋯    0    ]
      [   0    1/a22   ⋯    0    ]
      [   ⋮      ⋮     ⋱    ⋮    ]
      [   0      0     ⋯  1/a_nn ]   (C.5)

provided that a_ii ≠ 0 for any i.
The inverse of the inverse is the given matrix A; that is,

(A⁻¹)⁻¹ = A   (C.6)

The inverse of a product AC can be obtained by inverting each factor and multiplying the results in reverse order:

(AC)⁻¹ = C⁻¹ A⁻¹   (C.7)
For higher order matrices, the inverse is computed using Cramer's rule:

A⁻¹ = (1/det A) adj A   (C.8)

Here, det A is the determinant of A, and adj A is the adjoint matrix of A. The following is a summary of the steps needed to calculate the inverse of an n × n square matrix A:
1. Calculate the matrix of minors. (A minor of the element a_ij, denoted by det M_ij, is the determinant of the matrix formed by deleting the ith row and the jth column of the matrix A.)
2. Calculate the matrix of cofactors. (A cofactor of the element a_ij, denoted by c_ij, is related to the minor by c_ij = (−1)^(i+j) det M_ij.)
3. Calculate the adjoint matrix of A by transposing the matrix of cofactors of A:

adj A = [c_ij]^T

4. Calculate the determinant of A using

det A = Σ_{j=1}^{n} a_ij c_ij  (expansion along any row i)
C.5 EIGENVALUES AND EIGENVECTORS

The eigenvalues λ of an n × n matrix A are the roots of the characteristic equation det(A − λI) = 0. For example, the eigenvalues of

A = [ 3  4 ]
    [ 1  3 ]

are obtained by solving the equation

det [ 3 − λ    4   ] = 0
    [   1    3 − λ ]

or

(3 − λ)(3 − λ) − 4 = 0

This second-degree equation has two real roots, λ1 = 1 and λ2 = 5. There are two eigenvectors. The eigenvector associated with λ1 = 1 is the solution to
[ 3  4 ] [ x1 ] = [ x1 ]
[ 1  3 ] [ x2 ]   [ x2 ]

or

[ 2  4 ] [ x1 ] = [ 0 ]
[ 1  2 ] [ x2 ]   [ 0 ]

Then 2x1 + 4x2 = 0 and x1 + 2x2 = 0, from which it follows that x1 = −2x2. By choosing x2 = 1, we find that the eigenvector is

[ -2 ]
[  1 ]

The eigenvector associated with λ2 = 5 is the solution to

[ 3  4 ] [ x1 ] = 5 [ x1 ]
[ 1  3 ] [ x2 ]     [ x2 ]

or

[ -2   4 ] [ x1 ] = [ 0 ]
[  1  -2 ] [ x2 ]   [ 0 ]

which has the solution x1 = 2x2. Choosing x2 = 1 gives

[ 2 ]
[ 1 ]
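For the 2 × 2 example above, the eigenvalues and an eigenvector can be checked with nothing more than the quadratic formula (a sketch; no linear-algebra library needed):

```python
import math

# A = [[3, 4], [1, 3]]; det(A - lam*I) = 0 gives lam^2 - tr*lam + det = 0.
a, b, c, d = 3, 4, 1, 3
tr = a + d
det = a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
lam1 = (tr - disc) / 2
lam2 = (tr + disc) / 2

# Eigenvector for lam1 = 1: (A - I) x = 0 has the solution x = [-2, 1].
x = [-2.0, 1.0]
Ax = [a * x[0] + b * x[1], c * x[0] + d * x[1]]
```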
C.6 FUNCTIONS OF A MATRIX

Any analytic scalar function f(t) of a scalar t can be uniquely expressed in a convergent Maclaurin series as

f(t) = Σ_{k=0}^{∞} [d^k f/dt^k]_{t=0} (t^k/k!)

The same type of expansion can be used to define functions of matrices. Thus, the function f(A) of the n × n matrix A can be expanded as

f(A) = Σ_{k=0}^{∞} [d^k f/dt^k]_{t=0} (A^k/k!)

For example,

exp[At] = I + At + A²t²/2! + ⋯ + Aⁿtⁿ/n! + ⋯
The Cayley–Hamilton (C-H) theorem states that any matrix satisfies its own characteristic equation. That is, given an arbitrary n × n matrix A with characteristic polynomial g(λ) = det(A − λI), it follows that g(A) = 0. As an example, if

A = [ 3  4 ]
    [ 1  3 ]

then g(λ) = λ² − 6λ + 5, so that

g(A) = A² − 6A + 5I = 0

or

A² = 6A − 5I   (C.11)
In general, the Cayley–Hamilton theorem enables us to express any power of a matrix in terms of a linear combination of A^k for k = 0, 1, 2, ..., n − 1. For example, A³ can be found from Equation (C.11) by multiplying both sides by A to obtain

A³ = 6A² − 5A
   = 6[6A − 5I] − 5A
   = 31A − 30I
Similarly, higher powers of A can be obtained by this method. Multiplying Equation (C.11) by A⁻¹, we obtain

A⁻¹ = (6I − A)/5
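The Cayley–Hamilton relations A² = 6A − 5I and A³ = 31A − 30I are easy to confirm by direct multiplication (a sketch; matmul2 is our helper name):

```python
def matmul2(X, Y):
    # 2 x 2 matrix product.
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
             X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
             X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

A = [[3, 4], [1, 3]]
A2 = matmul2(A, A)
A3 = matmul2(A2, A)
```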
assuming that A⁻¹ exists. As a consequence of the C-H theorem, it follows that any function f(A) can be expressed as

f(A) = Σ_{k=0}^{n−1} γ_k A^k

The calculation of γ0, γ1, ..., γ_{n−1} can be carried out by the iterative method used in the calculation of Aⁿ and A^(n+1). It can be shown that if the eigenvalues of A are distinct, then the set of coefficients γ0, γ1, ..., γ_{n−1} satisfies the following equations:

f(λi) = Σ_{k=0}^{n−1} γ_k λi^k,  i = 1, 2, ..., n
Consider again the matrix

A = [ 3  4 ]
    [ 1  3 ]

The eigenvalues of A are λ1 = 1 and λ2 = 5, with f(A) = exp[At]. Then

exp[At] = Σ_{k=0}^{1} γ_k(t) A^k = γ0(t) I + γ1(t) A

where γ0(t) and γ1(t) are the solutions to

exp[t] = γ0(t) + γ1(t)
exp[5t] = γ0(t) + 5γ1(t)

so that

γ0(t) = ¼(5 exp[t] − exp[5t])
γ1(t) = ¼(exp[5t] − exp[t])
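The coefficients γ0(t) and γ1(t) can be checked against the defining Maclaurin series of exp[At] (a numerical sketch at one arbitrary value of t):

```python
import math

def mm(X, Y):
    # 2 x 2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3.0, 4.0], [1.0, 3.0]]
t = 0.1

# gamma coefficients from exp[t] = g0 + g1 and exp[5t] = g0 + 5 g1:
g0 = (5 * math.exp(t) - math.exp(5 * t)) / 4
g1 = (math.exp(5 * t) - math.exp(t)) / 4
expAt = [[g0 * (i == j) + g1 * A[i][j] for j in range(2)] for i in range(2)]

# Truncated Maclaurin series I + At + (At)^2/2! + ... for comparison.
At = [[t * A[i][j] for j in range(2)] for i in range(2)]
S = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 25):
    P = mm(term, At)
    term = [[P[i][j] / k for j in range(2)] for i in range(2)]
    S = [[S[i][j] + term[i][j] for j in range(2)] for i in range(2)]

err = max(abs(expAt[i][j] - S[i][j]) for i in range(2) for j in range(2))
```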
If the eigenvalues are not distinct, then we have fewer equations than unknowns. By differentiating, with respect to λ, the equation corresponding to the repeated eigenvalue, we obtain a new equation that can be used to solve for γ0(t), γ1(t), ..., γ_{n−1}(t). For example, consider

A = [ -1   0   0 ]
    [  0  -4   4 ]
    [  0  -1   0 ]

whose eigenvalues are λ1 = −1 and λ2 = λ3 = −2. With f(A) = exp[At],

exp[At] = γ0(t) I + γ1(t) A + γ2(t) A²

where the coefficients satisfy

exp[−t] = γ0 − γ1 + γ2
exp[−2t] = γ0 − 2γ1 + 4γ2
t exp[−2t] = γ1 − 4γ2

the last equation being obtained by differentiating f(λ) = γ0 + γ1 λ + γ2 λ² with respect to λ at the repeated eigenvalue λ = −2. Carrying out the computation yields

exp[At] = [ exp[-t]          0                   0          ]
          [   0      (1 - 2t) exp[-2t]      4t exp[-2t]     ]
          [   0         -t exp[-2t]     (1 + 2t) exp[-2t]   ]
Appendix D
Partial Fractions
A proper rational function¹ N(s)/D(s) can be expanded in the form

N(s)/D(s) = F1 + F2 + ⋯ + Fr   (D.1)

where each term Fi is of the form

A/(s + b)^ρ   or   (Bs + C)/(s² + ps + q)^ν

where the polynomial s² + ps + q is irreducible, and ρ and ν are nonnegative integers.
The sum on the right-hand side of Equation (D.1) is called the partial-fraction decomposition of N(s)/D(s), and each term is called a partial fraction. By using long division, an improper rational function can be written as the sum of a polynomial of degree M − N and a proper rational function, where M is the degree of the polynomial N(s) and N is the degree of the polynomial D(s). For example, given

(s⁴ + 3s³ − 5s² − 1)/(s³ + 2s² − s + 1)

we obtain, by long division,

¹A proper rational function is a ratio of two polynomials, with the degree of the numerator less than the degree of the denominator.
(s⁴ + 3s³ − 5s² − 1)/(s³ + 2s² − s + 1) = s + 1 − (6s² + 2)/(s³ + 2s² − s + 1)

The partial-fraction decomposition is then found for −(6s² + 2)/(s³ + 2s² − s + 1).
Partial fractions are very useful in integration and also in finding the inverses of many transforms, such as the Laplace, Fourier, and Z-transforms. All these operators share one property in common: linearity.
The first step in the partial-fraction technique is to express D(s) as a product of linear factors s + b and irreducible quadratic factors s² + ps + q. Repeated factors are then collected, so that D(s) is a product of distinct factors of the form (s + b)^ρ or (s² + ps + q)^ν, where ρ and ν are nonnegative integers. The form of the partial fractions depends on the type of factors we have for D(s). There are four different cases.
D.1 CASE I: NONREPEATED LINEAR FACTORS

To each nonrepeated linear factor s − p of D(s), there corresponds a partial fraction A/(s − p), so that

N(s)/D(s) = ⋯ + A/(s − p) + ⋯

where

A = [(s − p) N(s)/D(s)]_{s=p}   (D.2)
Example D.1
Consider the rational function

(37 − 11s)/(s³ − 4s² + s + 6)

The denominator has the factored form (s + 1)(s − 2)(s − 3). All these factors are linear nonrepeated factors. Thus, for the factor s + 1, there corresponds a partial fraction of the form A/(s + 1). Similarly, for the factors s − 2 and s − 3, there correspond partial fractions B/(s − 2) and C/(s − 3), respectively. The decomposition of Equation (D.1) then has the form

(37 − 11s)/(s³ − 4s² + s + 6) = A/(s + 1) + B/(s − 2) + C/(s − 3)

Using Equation (D.2), we obtain

A = [(37 − 11s)/((s − 2)(s − 3))]_{s=−1} = 4
B = [(37 − 11s)/((s + 1)(s − 3))]_{s=2} = −5
C = [(37 − 11s)/((s + 1)(s − 2))]_{s=3} = 1

The partial-fraction decomposition is, therefore,

(37 − 11s)/(s³ − 4s² + s + 6) = 4/(s + 1) − 5/(s − 2) + 1/(s − 3)
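The cover-up computation in Example D.1 mechanizes nicely (a sketch; the helper name residue is ours):

```python
def residue(num, roots, p):
    # Heaviside "cover-up": evaluate N(s)/D(s) at s = p with the factor
    # (s - p) removed from the denominator D(s) = prod of (s - r).
    val = float(num(p))
    for r in roots:
        if r != p:
            val /= (p - r)
    return val

num = lambda s: 37 - 11 * s
roots = [-1, 2, 3]
coeffs = [residue(num, roots, p) for p in roots]
```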
Example D.2
Let us find the partial-fraction decomposition of

(2s + 1)/(s³ + 3s² − 4s)

We factor the polynomial as s(s + 4)(s − 1) and write

(2s + 1)/(s³ + 3s² − 4s) = A/s + B/(s + 4) + C/(s − 1)

Using Equation (D.2), we find that the coefficients are

A = [(2s + 1)/((s + 4)(s − 1))]_{s=0} = −1/4
B = [(2s + 1)/(s(s − 1))]_{s=−4} = −7/20
C = [(2s + 1)/(s(s + 4))]_{s=1} = 3/5

The partial-fraction decomposition is, therefore,

(2s + 1)/(s³ + 3s² − 4s) = −1/(4s) − 7/(20(s + 4)) + 3/(5(s − 1))
D.2 CASE II: REPEATED LINEAR FACTORS

To a repeated linear factor (s + p)^ρ of D(s), there corresponds the set of partial fractions A1/(s + p) + A2/(s + p)² + ⋯ + Aρ/(s + p)^ρ. The coefficients are given by

Aρ = [(s + p)^ρ N(s)/D(s)]_{s=−p}   (D.3)

and

A_{ρ−i} = (1/i!) (d^i/ds^i)[(s + p)^ρ N(s)/D(s)]_{s=−p},  i = 1, 2, ..., ρ − 1   (D.4)
Example D.3
Consider the rational function

(2s² − 25s − 33)/(s³ − 3s² − 9s − 5)

The denominator has the factored form D(s) = (s + 1)²(s − 5). For the factor s − 5, there corresponds a partial fraction of the form B/(s − 5). The factor (s + 1)² is a linear, repeated factor to which there corresponds the pair of partial fractions A1/(s + 1) + A2/(s + 1)². The decomposition of Equation (D.1) then has the form

(2s² − 25s − 33)/(s³ − 3s² − 9s − 5) = B/(s − 5) + A1/(s + 1) + A2/(s + 1)²   (D.5)

The values of B, A1, and A2 are obtained using Equations (D.2), (D.3), and (D.4) as follows:

B = [(2s² − 25s − 33)/(s + 1)²]_{s=5} = −3

A2 = [(2s² − 25s − 33)/(s − 5)]_{s=−1} = 1

A1 = (d/ds)[(2s² − 25s − 33)/(s − 5)]_{s=−1} = [(2s² − 20s + 158)/(s − 5)²]_{s=−1} = 5

Hence, the rational function in Equation (D.5) can be written as

(2s² − 25s − 33)/(s³ − 3s² − 9s − 5) = −3/(s − 5) + 5/(s + 1) + 1/(s + 1)²
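A decomposition with repeated factors is conveniently verified by recombining it at a few test points away from the poles (a quick Python sketch for Example D.3):

```python
def lhs(s):
    # Original rational function of Example D.3.
    return (2 * s ** 2 - 25 * s - 33) / ((s + 1) ** 2 * (s - 5))

def rhs(s):
    # Its partial-fraction decomposition.
    return -3 / (s - 5) + 5 / (s + 1) + 1 / (s + 1) ** 2
```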
Example D.4
Let us find the partial-fraction decomposition of

(3s³ − 18s² + 29s − 4)/(s⁴ − 5s³ + 6s² + 4s − 8)

The denominator can be factored as (s + 1)(s − 2)³. Since we have a repeated factor of order 3, the corresponding partial fractions are

(3s³ − 18s² + 29s − 4)/(s⁴ − 5s³ + 6s² + 4s − 8) = B/(s + 1) + A1/(s − 2) + A2/(s − 2)² + A3/(s − 2)³   (D.6)

The coefficient B can be obtained using Equation (D.2):

B = [(3s³ − 18s² + 29s − 4)/(s − 2)³]_{s=−1} = 2

The coefficients Ai, i = 1, 2, 3, are obtained using Equations (D.3) and (D.4). First,

A3 = [(3s³ − 18s² + 29s − 4)/(s + 1)]_{s=2} = 2

and

A2 = (d/ds)[(3s³ − 18s² + 29s − 4)/(s + 1)]_{s=2} = [(6s³ − 9s² − 36s + 33)/(s + 1)²]_{s=2} = −3

Similarly, A1 can be found using Equation (D.4). In many cases, it is much easier to use the following technique, especially after finding all but one coefficient: Multiplying both sides of Equation (D.6) by (s + 1)(s − 2)³ gives

3s³ − 18s² + 29s − 4 = B(s − 2)³ + A1(s + 1)(s − 2)² + A2(s + 1)(s − 2) + A3(s + 1)

If we compare the coefficients of s³ on both sides, we obtain

3 = B + A1

Since B = 2, it follows that A1 = 1. The resulting partial-fraction decomposition is then

(3s³ − 18s² + 29s − 4)/(s⁴ − 5s³ + 6s² + 4s − 8) = 2/(s + 1) + 1/(s − 2) − 3/(s − 2)² + 2/(s − 2)³
D.3 CASE III: NONREPEATED IRREDUCIBLE SECOND-DEGREE FACTORS

To each nonrepeated irreducible factor s² + ps + q of D(s), there corresponds a partial fraction of the form (As + B)/(s² + ps + q).

Example D.5
Consider the rational function

(s² − s − 21)/(2s³ − s² + 8s − 4)

We factor the polynomial as D(s) = (s² + 4)(2s − 1) and use the partial-fraction form

(s² − s − 21)/(2s³ − s² + 8s − 4) = (As + B)/(s² + 4) + C/(2s − 1)

Multiplying by the lowest common denominator gives

s² − s − 21 = (As + B)(2s − 1) + C(s² + 4)   (D.8)
Setting s = 1/2 in Equation (D.8) gives −85/4 = (17/4)C, or C = −5. Comparing the coefficients of s² and the constant terms then gives A = 3 and B = 1. Hence,

(s² − s − 21)/(2s³ − s² + 8s − 4) = (3s + 1)/(s² + 4) − 5/(2s − 1)
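As with the earlier examples, the result is easily verified by recombining at test points away from the pole at s = 1/2 (sketch):

```python
def lhs(s):
    # Original rational function of Example D.5.
    return (s ** 2 - s - 21) / (2 * s ** 3 - s ** 2 + 8 * s - 4)

def rhs(s):
    # Its partial-fraction decomposition.
    return (3 * s + 1) / (s ** 2 + 4) - 5 / (2 * s - 1)
```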
D.4 CASE IV: REPEATED IRREDUCIBLE SECOND-DEGREE FACTORS

To a repeated irreducible factor (s² + ps + q)^ν of D(s), there corresponds the set of partial fractions

(A1 s + B1)/(s² + ps + q) + (A2 s + B2)/(s² + ps + q)² + ⋯ + (Aν s + Bν)/(s² + ps + q)^ν   (D.9)
Example D.6
As an example of repeated irreducible second-degree factors, consider

(s⁴ − 6s + 7)/(s² − 4s + 5)²   (D.10)

Since the numerator and the denominator are of the same degree, we first write

(s⁴ − 6s + 7)/(s² − 4s + 5)² = 1 + (A1 s + B1)/(s² − 4s + 5) + (A2 s + B2)/(s² − 4s + 5)²   (D.11)

Multiplying both sides of Equation (D.11) by [(s − 2)² + 1]² = (s² − 4s + 5)² and rearranging terms, we obtain

s⁴ − 6s + 7 = (s² − 4s + 5)² + (A1 s + B1)(s² − 4s + 5) + A2 s + B2

Comparing the coefficients of s³ gives A1 = 8, and comparing the coefficients of s² then gives B1 = 6. Comparing the coefficients of s gives A2 = 18, and comparing the constant terms yields B2 = −48. Finally, the partial-fraction decomposition of Equation (D.10) is

(s⁴ − 6s + 7)/(s² − 4s + 5)² = 1 + (8s + 6)/((s − 2)² + 1) + (18s − 48)/[(s − 2)² + 1]²
Bibliography

1. Brigham, E. Oran. The Fast Fourier Transform and Its Applications. Englewood Cliffs, NJ: Prentice-Hall, 1988.
2. Gabel, Robert A., and Richard A. Roberts. Signals and Linear Systems, 3d ed. New York: Wiley, 1987.
3. Johnson, Johnny R. Introduction to Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
4. Lathi, B. P. Signals and Systems. Carmichael, CA: Berkeley-Cambridge Press, 1987.
5. McGillem, Clare D., and George R. Cooper. Continuous and Discrete Signal and System Analysis, 2d ed. New York: Holt, Rinehart and Winston, 1984.
6. O'Flynn, Michael, and Eugene Moriarty. Linear Systems: Time Domain and Transform Analysis. New York: Harper and Row, 1987.
7. Oppenheim, Alan V., and Ronald W. Schafer. Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
8. Oppenheim, Alan V., Alan S. Willsky, and S. Hamid Nawab. Signals and Systems, 2d ed. Englewood Cliffs, NJ: Prentice-Hall, 1997.
9. Papoulis, Athanasios. The Fourier Integral and Its Applications. New York: McGraw-Hill, 1962.
10. Phillips, Charles L., and John Parr. Signals, Systems and Transforms. Englewood Cliffs, NJ: Prentice-Hall, 1995.
11. Poularikas, Alexander D., and Samuel Seely. Elements of Signals and Systems. Boston: PWS-Kent, 1988.
12. Proakis, John G., and Dimitris G. Manolakis. Introduction to Digital Signal Processing. New York: Macmillan, 1988.
13. Scott, Donald E. An Introduction to Circuit Analysis: A Systems Approach. New York: McGraw-Hill, 1987.
14. Siebert, William M. Circuits, Signals, and Systems. New York: McGraw-Hill, 1986.
15. Strum, Robert D., and Donald E. Kirk. First Principles of Discrete Systems and Digital Signal Processing. Reading, MA: Addison-Wesley, 1988.
16. Swisher, George M. Introduction to Linear Systems Analysis. Beaverton, OR: Matrix, 1976.
17. Ziemer, Roger E., William H. Tranter, and D. Ronald Fannin. Signals and Systems: Continuous and Discrete, 2d ed. New York: Macmillan, 1989.
Index

State-transition matrix:
  definition, 315
  determination using Z-transform, 408
  properties, 315-16
  relation to impulse response, 315
  time-domain evaluation of, 314
State-variable representation, 76, 310
  equivalence, 89, 313
Stable LTI systems, 65
Stable system, 51, 91
Stability considerations, 91
Stability in the s-domain, 266
Stop band, 20
Subtractors, 69
Summers, 306
Symmetry:
  effects of, 127
  even symmetric signal, 15
  odd symmetric signal, 15
System:
  causal, 48
  continuous-time and discrete-time, 41
  distortionless, 139
  function, 135
  inverse, 50
  linear and nonlinear, 42
  memoryless, 47
  with periodic inputs, 135
  time-varying and time-invariant, 46

T
Tables:
  effects of symmetry, 128
  Fourier series properties: discrete-time, 339
  Fourier transform pairs: continuous-time, 172-73; discrete-time, 352
  Fourier transform properties: continuous-time, 189; discrete-time, 351
  frequency transformations: analog, 456; digital, 457
  Laplace transform: pairs, 230; properties, 246
  Laplace transforms and their Z-transform equivalents, 470
  Z-transform: pairs, 393; properties, 392
Time average, 49-56
Time domain solution, 78
Time limited, 9
Time-scaling:
  continuous-time signals, 17
  discrete-time signals, 281
Transfer function, 135, 242, 26
  open-loop, 256
Transformation:
  of independent variable, 281
  of state vector, 89
Transition matrix, see State-transition matrix
Transition property
Triangle inequality, 490
Triangular pulse, 60
  Fourier transform of, 167
Trigonometric identities, 491-92
Two-sided exponential, 168

U
Uncertainty principle, 204
Unilateral Laplace transform, see Laplace transform
Unilateral Z-transform, see Z-transform
Uniform quantizer, 365
Unit delay, 307
Unit doublet, 30
Unit impulse function, see δ-function
Unit step function:
  continuous-time, 19
  discrete-time, 283
Up sampling, 359

W
Walsh functions, 150-51
Window functions:
  in FIR digital filter design, 476-77
  in spectral estimation, 439-41

Z
Z-transform:
  convolution property, 390
  definition, 376
  inversion by partial-fraction expansion, 395
  inversion by series expansion, 394
  inversion integral, 402
  properties of the unilateral Z-transform, 383
  region of convergence, 378-79
  relation to Laplace transform, 410
  solution of difference equations, 386
  table of properties, 392
  table of transforms, 393
Zero-input component, 2
Zero-order hold, 357
Zero padding, 288, see also Discrete Fourier transform
Zero-state component, 264