



INTRODUCTION TO SIGNALS

2.8-6 If the two halves of one period of a periodic signal are of identical shape except that one is the negative of the other, the periodic signal is said to have half-wave symmetry. If a periodic signal g(t) with a period T0 satisfies the half-wave symmetry condition, then

g(t − T0/2) = −g(t)

In this case, show that all the even-numbered harmonics vanish, and that the odd-numbered harmonic coefficients are given by

an = (4/T0) ∫_0^{T0/2} g(t) cos nω0t dt

and

bn = (4/T0) ∫_0^{T0/2} g(t) sin nω0t dt

Using these results, find the Fourier series for the periodic signals in Fig. P2.8-6.

2.9-1 For each of the periodic signals in Fig. P2.8-4, find the exponential Fourier series and sketch the corresponding spectra.

2.9-2 A periodic signal g(t) is expressed by the following Fourier series:

g(t) = 3 cos t + …

(a) Sketch the amplitude and phase spectra for the trigonometric series.
(b) By inspection of the spectra in part (a), sketch the exponential Fourier series spectra.
(c) By inspection of the spectra in part (b), write the exponential Fourier series for g(t).

2.9-3 Figure P2.9-3 shows the trigonometric Fourier spectra of a periodic signal g(t).


Electrical engineers instinctively think of signals in terms of their frequency spectra and of systems in terms of their frequency responses. Even teenagers know about audio signals having a bandwidth of 20 kHz and about good-quality loudspeakers responding up to 20 kHz. This is basically thinking in the frequency domain. In the last chapter we discussed spectral representation of periodic signals (Fourier series). In this chapter we extend this spectral representation to aperiodic signals.

(a) By inspection of Fig. P2.9-3, find the trigonometric Fourier series representing g(t).
(b) By inspection of Fig. P2.9-3, sketch the exponential Fourier spectra of g(t).
(c) By inspection of the exponential Fourier spectra obtained in part (b), find the exponential Fourier series for g(t).
(d) Show that the series found in parts (a) and (c) are equivalent.

Figure P2.9-3

2.9-4 Show that the coefficients of the exponential Fourier series of an even periodic signal are real and those of an odd periodic signal are imaginary.

Applying a limiting process, we now show that an aperiodic signal can be expressed as a continuous sum (integral) of everlasting exponentials. To represent an aperiodic signal g(t), such as the one shown in Fig. 3.1a, by everlasting exponential signals, let us construct a new periodic signal gT0(t) formed by repeating the signal g(t) every T0 seconds, as shown in Fig. 3.1b. The period T0 is made long enough to avoid overlap between the repeating pulses. The periodic signal gT0(t) can be represented by an exponential Fourier series. If we let T0 → ∞, the pulses in the periodic signal repeat after an infinite interval, and therefore

lim_{T0→∞} gT0(t) = g(t)

Thus, the Fourier series representing gT0(t) will also represent g(t) in the limit T0 → ∞. The exponential Fourier series for gT0(t) is given by

gT0(t) = Σ_{n=−∞}^{∞} Dn e^{jnω0t}

in which
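The limiting process above can be previewed numerically. In the sketch below (a Python illustration with a unit-width gate pulse of our own choosing), the scaled coefficients T0·Dn of the periodic repetition fall on one fixed envelope, sampled more and more densely (at spacing ω0 = 2π/T0) as T0 grows; that envelope is what Chapter 3 identifies as the Fourier transform of g(t).

```python
import numpy as np

# Test pulse (our choice): unit gate of width 1. For it the envelope is
# T0*Dn = 2*sin(w/2)/w evaluated at w = n*w0.
def envelope(w):
    return 1.0 if w == 0 else 2 * np.sin(w / 2) / w

def Dn(n, T0, N=200_000):
    # numerical Fourier coefficient of the T0-periodic repetition
    t = np.linspace(-T0 / 2, T0 / 2, N, endpoint=False)
    g = (np.abs(t) <= 0.5).astype(float)
    w0 = 2 * np.pi / T0
    return np.sum(g * np.exp(-1j * n * w0 * t)) * (T0 / N) / T0

errs = []
for T0 in (4.0, 16.0, 64.0):
    w0 = 2 * np.pi / T0
    errs.append(max(abs(T0 * Dn(n, T0) - envelope(n * w0)) for n in range(8)))
```

For every period T0 the samples T0·Dn sit on the same envelope; only their spacing changes.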




2.8-2 (a) Sketch the signal g(t) = t and find the trigonometric Fourier series to represent g(t) over the interval (−π, π). Sketch the Fourier series φ(t) for all values of t.
(b) Verify Parseval's theorem [Eq. (2.90)] for this case, given that

Σ_{n=1}^{∞} 1/n² = π²/6
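A numerical sketch of this verification (Python; the coefficients are computed by direct integration rather than taken from a table): for g(t) = t the series has only sine terms, with bn = 2(−1)^{n+1}/n, so Parseval's theorem forces Σ 1/n² = π²/6.

```python
import numpy as np

T0 = 2 * np.pi
N = 100_000
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
dt = T0 / N
g = t
P_time = np.sum(g ** 2) * dt / T0          # power of g(t): pi^2/3
# sine coefficients b_n (a_n = 0 by odd symmetry), n = 1..500
bn = np.array([(2 / T0) * np.sum(g * np.sin(n * t)) * dt
               for n in range(1, 501)])
P_series = 0.5 * np.sum(bn ** 2)           # C_n = |b_n| here
```

The partial series power approaches π²/3, the power computed directly in the time domain.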


2.8-3 If a periodic signal satisfies certain symmetry conditions, the evaluation of the Fourier series components is somewhat simplified. Show that:
(a) If g(t) = g(−t) (even symmetry), then all the sine terms in the Fourier series vanish (bn = 0).
(b) If g(t) = −g(−t) (odd symmetry), then the dc and all the cosine terms in the Fourier series vanish (a0 = an = 0).

Further, show that in each case the Fourier coefficients can be evaluated by integrating the periodic signal over a half-cycle only. This is because the entire information of one cycle is implicit in a half-cycle due to symmetry.

Hint: If ge(t) and go(t) are even and odd functions, respectively, of t, then (assuming no impulse or its derivative at the origin)

∫_{−a}^{a} ge(t) dt = 2 ∫_0^{a} ge(t) dt

and

∫_{−a}^{a} go(t) dt = 0

Also, the product of an even and an odd function is an odd function, the product of two odd functions is an even function, and the product of two even functions is an even function.

2.8-4 For each of the periodic signals shown in Fig. P2.8-4, find the compact trigonometric Fourier series and sketch the amplitude and phase spectra. If either the sine or the cosine terms are absent in the Fourier series, explain why.

2.8-5 (a) Show that an arbitrary function g(t) can be expressed as a sum of an even function ge(t) and an odd function go(t):





g(t) = ge(t) + go(t)

Hint:

g(t) = (1/2)[g(t) + g(−t)] + (1/2)[g(t) − g(−t)]

(b) Determine the odd and the even components of the following functions: (i) u(t); (ii) e^{−at}u(t); (iii) …
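The hint can be exercised numerically. The sketch below (Python) decomposes case (ii) with a = 1 and confirms the closed forms ge(t) = (1/2)e^{−|t|} and go(t) = (1/2)sgn(t)e^{−|t|}, as well as the even/odd symmetries themselves.

```python
import numpy as np

# Even/odd decomposition of g(t) = e^{-t} u(t) (case (ii), a = 1)
t = np.linspace(-5, 5, 10_000)      # symmetric grid avoiding t = 0 exactly
g = np.exp(-t) * (t >= 0)           # e^{-t} u(t)
g_rev = np.exp(t) * (t <= 0)        # g(-t)
ge = 0.5 * (g + g_rev)              # even part
go = 0.5 * (g - g_rev)              # odd part
```

Reversing the grid (t → −t) leaves ge unchanged and negates go, and the two parts add back to g.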














Figure P2.8-4




Figure P2.8-6

δ(f) = 2πδ(ω), where ω = 2πf

Hint: Show that

∫_{−∞}^{∞} ψ(t) δ(at) dt = (1/|a|) ψ(0)

If z(t) = x(t) ± y(t), then show that

Ez = Ex + Ey ± (Exy + Eyx)
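The scaling hint can be checked with a nascent delta. In the sketch below (Python; the narrow Gaussian standing in for δ and the smooth test function ψ are our own choices), the integral ∫ ψ(t) δ(at) dt comes out as ψ(0)/|a| even for negative a.

```python
import numpy as np

eps = 1e-3
def nascent_delta(x):
    # narrow unit-area Gaussian approximating delta(x)
    return np.exp(-0.5 * (x / eps) ** 2) / (eps * np.sqrt(2 * np.pi))

t = np.linspace(-1, 1, 2_000_001)
dt = t[1] - t[0]
psi = np.cos(t) + 0.5 * t          # arbitrary smooth test function, psi(0) = 1
a = -2.5
lhs = np.sum(psi * nascent_delta(a * t)) * dt
rhs = 1.0 / abs(a)                 # psi(0) / |a|
```

The absolute value matters: the Gaussian is even, so the sign of a cannot survive, only its magnitude.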



2.5-6 Let x1(t) and x2(t) be two unit-energy signals orthogonal over an interval from t = t1 to t2. We can represent x1(t) and x2(t) by two unit-length, orthogonal vectors (x1, x2). Consider a signal g(t) = c1x1(t) + c2x2(t). This signal can be represented as a vector g by a point (c1, c2) in the x1-x2 plane.

(a) Determine the vector representation of the following six signals in this two-dimensional vector space:
(i) g1(t) = 2x1(t)
(ii) g2(t) = −x1(t) + 2x2(t)
(iii) g3(t) = −x2(t)
(iv) g4(t) = x1(t) + 2x2(t)
(v) g5(t) = 2x1(t) + x2(t)
(vi) g6(t) = 3x1(t)

Derive Eq. (2.26) in an alternate way by observing that e = (g − cx), and

|e|² = (g − cx)·(g − cx) = |g|² + c²|x|² − 2c g·x

2.5-2 For the signals g(t) and x(t) shown in Fig. P2.5-2, find the component of the form x(t) contained in g(t). In other words, find the optimum value of c in the approximation g(t) ≈ cx(t) so that the error signal energy is minimum. What is the error signal energy?
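The coordinates in such a signal space are just inner products with the basis signals. The sketch below (Python; the particular orthonormal pair sin t/√π and cos t/√π over (0, 2π) is our own choice, not the problem's figures) recovers the vector (2, 1) for signal (v).

```python
import numpy as np

# Orthonormal basis over (0, 2*pi): unit energy, mutually orthogonal
t = np.linspace(0.0, 2 * np.pi, 200_000, endpoint=False)
dt = t[1] - t[0]
x1 = np.sin(t) / np.sqrt(np.pi)
x2 = np.cos(t) / np.sqrt(np.pi)
inner = lambda f, h: np.sum(f * h) * dt   # signal inner product

g5 = 2 * x1 + x2                   # signal (v); its vector should be (2, 1)
c1, c2 = inner(g5, x1), inner(g5, x2)
```

Because the basis is orthonormal, each coordinate is found independently of the other, which is the whole point of the vector analogy.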








Figure P2.5-4




2.5-5 Energies of the two energy signals x(t) and y(t) are Ex and Ey, respectively.
(a) If x(t) and y(t) are orthogonal, then show that the energy of the signal x(t) + y(t) is identical to the energy of the signal x(t) − y(t), and is given by Ex + Ey.
(b) If x(t) and y(t) are orthogonal, find the energies of the signals c1x(t) + c2y(t) and c1x(t) − c2y(t).
(c) We define Exy, the cross energy of the two energy signals x(t) and y(t), as

Exy = ∫_{−∞}^{∞} x(t) y*(t) dt

Figure P2.6-1

2.8-1 (a) Sketch the signal g(t) = t² and find the trigonometric Fourier series to represent g(t) over the interval (−1, 1). Sketch the Fourier series φ(t) for all values of t.
(b) Verify Parseval's theorem [Eq. (2.90)] for this case, given that

Σ_{n=1}^{∞} 1/n⁴ = π⁴/90
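The energy additivity claimed for orthogonal signals is easy to confirm numerically. The sketch below (Python; the orthogonal pair sin t and cos 2t over (0, 2π) is a concrete stand-in for the problem's signals) checks parts (a) and (b).

```python
import numpy as np

T = 2 * np.pi
t = np.linspace(0.0, T, 400_000, endpoint=False)
dt = T / len(t)
x, y = np.sin(t), np.cos(2 * t)          # orthogonal over (0, 2*pi)
E = lambda s: np.sum(s ** 2) * dt        # signal energy over the interval
Ex, Ey = E(x), E(y)                      # each equals pi here
E_sum, E_diff = E(x + y), E(x - y)       # part (a): both equal Ex + Ey
c1, c2 = 3.0, -2.0
E_combo = E(c1 * x + c2 * y)             # part (b): c1^2*Ex + c2^2*Ey
```

The cross term 2∫xy dt vanishes by orthogonality, which is exactly why the sum and difference signals carry identical energy.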


2.1-8 Determine the power and the rms value for each of the following signals:
(a) 10 cos(100t + π/3)
(b) 10 cos(100t + π/3) + 16 sin(150t + π/5)
(c) (10 + 2 sin 3t) cos 10t
(d) 10 cos 5t cos 10t
(e) 10 sin 5t cos 10t
(f) e^{jαt} cos ω0t

Comment: An everlasting exponential e^{−a1t} starting at t = −∞ is neither an energy nor a power signal for any real value of a1. However, if a1 is imaginary, it is a power signal with power Pg = 1 regardless of the value of a1.

2.3-1 For the signal g(t) shown in Fig. P2.3-1, express the signals g2(t), g3(t), g4(t), and g5(t) in terms of g(t) and its time-shifted, time-scaled, or time-inverted versions. For instance, g2(t) = g(t − T) for some suitable value of T. Similarly, both g3(t) and g4(t) can be expressed as g(t − T) + g(t + T) for some suitable value of T, and g5(t) can be expressed as g(t) time-shifted, time-scaled, and then multiplied by a constant. (These operations may be performed in any order.)

Figure P2.3-1

2.3-2 For the signal g(t) shown in Fig. P2.3-2, sketch: (a) g(t − 4); (b) g(t/1.5); (c) g(2t − 4); (d) g(2 − t). Hint: Recall that replacing t with t − T delays the signal by T. Thus, g(2t − 4) is g(2t) with t replaced by t − 2. Similarly, g(2 − t) is g(−t) with t replaced by t − 2.

Figure P2.3-2

2.3-3 For the signal g(t) shown in Fig. P2.3-3, sketch: (a) g(−t); (b) g(t + 6); (c) g(3t); (d) g(6 − t).

Figure P2.3-3

2.3-4 For an energy signal g(t) with energy Eg, show that the energy of any one of the signals −g(t), g(−t), and g(t − T) is Eg. Show also that the energy of g(at) as well as g(at − b) is Eg/a. This shows that time inversion and time shifting do not affect signal energy, whereas compression of a signal by a factor a reduces the energy by the factor a. What is the effect on signal energy if the signal is (a) time-expanded by a factor a (a > 1); (b) multiplied by a constant a?

2.4-1 Simplify the following expressions:
(a) [sin t / (t² + 2)] δ(t)
(b) [(jω + 2)/(ω² + 9)] δ(ω)
(c) [e^{−t} cos(3t − 60°)] δ(t)
(d) [sin π(t − 2) / (t² + 4)] δ(t − 1)
(e) [1/(jω + 2)] δ(ω + 3)
(f) [(sin kω)/ω] δ(ω)
For part (f) use L'Hôpital's rule.

2.4-2 Evaluate the following integrals:
(a) ∫_{−∞}^{∞} g(τ) δ(t − τ) dτ
(b) ∫_{−∞}^{∞} δ(τ) g(t − τ) dτ
(c) ∫_{−∞}^{∞} δ(t) e^{−jωt} dt
(d) ∫_{−∞}^{∞} δ(t − 2) sin πt dt
(e) ∫_{−∞}^{∞} δ(t + 3) e^{−t} dt
(f) ∫_{−∞}^{∞} (t³ + 4) δ(1 − t) dt
(g) ∫_{−∞}^{∞} g(2 − t) δ(3 − t) dt
(h) ∫_{−∞}^{∞} e^{(x−1)} cos [π(x − 5)/2] δ(x − 3) dx
Hint: δ(x) is located at x = 0. For example, δ(1 − t) is located at 1 − t = 0, that is, at t = 1, and so on.

2.4-3 Prove that

δ(at) = (1/|a|) δ(t)

2.1-1 (a) Find the energies of the signals x(t) and y(t) shown in Fig. P2.1-2a. Sketch the signals x(t) + y(t) and x(t) − y(t), and show that the energy of either of these two signals is equal to Ex + Ey. Repeat the procedure for the signal pair of Fig. P2.1-2b.
(b) Repeat the procedure for the signal pair of Fig. P2.1-2c. Are the energies of the signals x(t) + y(t) and x(t) − y(t) identical in this case?

Figure P2.1-2

2.1-4 Find the power of a sinusoid C cos(ω0t + θ) by averaging the signal energy over one period 2π/ω0 (rather than averaging over the infinitely large interval).

2.1-5 Find the power of the periodic signal g(t) shown in Fig. P2.1-5. Find also the powers and the rms values of: (a) −g(t); (b) 2g(t); (c) cg(t). Comment.

2.1-6 Find the power and the rms value for the signals in: (a) Fig. P2.1-2a; (b) Fig. P2.1-2c; (c) Fig. P2.8-4a.

2.1-7 Show that the power of a signal

g(t) = Σ_{k=1}^{n} Dk e^{jωkt},  ωi ≠ ωk for all i ≠ k

is (Parseval's theorem)

Pg = Σ_{k=1}^{n} |Dk|²

Show also that if ω1 = ω2, the power of g(t) = C1 cos(ω1t + θ1) + C2 cos(ω2t + θ2) is [C1² + C2² + 2C1C2 cos(θ1 − θ2)]/2, which is not equal to (C1² + C2²)/2.

Computer Example C2.1 (continued): converting the exponential coefficients to trigonometric form and listing the first ten amplitudes and angles (in degrees):

  Amplitudes=[C0;Cn]; Angles=Dnangle(1:M)*(180/pi);
  disp('Amplitudes  Angles')
  [Amplitudes Angles]
  k=0:length(Amplitudes)-1; k=k';
  subplot(211), stem(k,Amplitudes)
  subplot(212), stem(k,Angles)

  ans =
  Amplitudes   Angles
    0.5043       0
    0.2446     -75.96
    0.1251     -82.87
    0.0837     -85.24
    0.0629     -86.42
    0.0503     -87.14
    0.0419     -87.61
    0.0359     -87.95
    0.0314     -88.21
    0.0279     -88.41

REFERENCES
1. Lathi, B. P., Signal Processing and Linear Systems, Berkeley-Cambridge Press, Carmichael, CA, 1998.
2. Churchill, R. V., and J. W. Brown, Fourier Series and Boundary Value Problems, 3rd ed., McGraw-Hill, New York, 1978.
3. Walker, P. L., The Theory of Fourier Series and Integrals, Wiley-Interscience, New York, 1986.
4. Papoulis, A., The Fourier Integral and Its Applications, McGraw-Hill, New York, 1962.

the power is given by (see Prob. 2.1-7)

Pg = Σ_{n=−∞}^{∞} |Dn|²   (2.91a)

For a real g(t), |D−n| = |Dn|. Hence

Pg = D0² + 2 Σ_{n=1}^{∞} |Dn|²   (2.91b)

Comments: Parseval's theorem occurs in many different forms, such as in Eqs. (2.59), (2.90), and (2.91). Although these forms appear different, they all state the same principle: that the square of the length of a vector equals the sum of the squares of its orthogonal components. The form (2.59) applies to energy signals, the form (2.90) applies to periodic signals represented by the trigonometric Fourier series, and the form (2.91) applies to periodic signals represented by the exponential Fourier series. Yet another form appears later, in Chapter 6.

2.10 NUMERICAL COMPUTATION OF Dn

We can compute Dn numerically using the discrete Fourier transform (DFT), which uses the samples of a periodic signal g(t) over one period. The sampling interval is Ts seconds; hence, there are N0 = T0/Ts samples in one period T0. To find the relationship between Dn and the samples of g(t), consider the expression for Dn [Eq. (2.80)]:

Dn = (1/T0) ∫_{T0} g(t) e^{−jnω0t} dt = lim_{Ts→0} (1/T0) Σ_{k=0}^{N0−1} g(kTs) e^{−jnω0kTs} Ts = lim_{Ts→0} (1/N0) Σ_{k=0}^{N0−1} g(kTs) e^{−jnΩ0k}   (2.92)

where g(kTs) is the kth sample of g(t) and

N0 = T0/Ts,  Ω0 = ω0Ts = 2π/N0   (2.93)

In practice, it is impossible to make Ts → 0 in computing the right-hand side of Eq. (2.92). We can make Ts small, but not zero, because that would increase the data without limit. Thus, we shall ignore the limit on Ts in Eq. (2.92) with the implicit understanding that Ts is reasonably small. Nonzero Ts, however, will result in some computational error, which is inevitable in any numerical evaluation of an integral. The error resulting from nonzero Ts is called the aliasing error, which is discussed in more detail in Chapter 6. Thus, we can express Eq. (2.92) as

Dn = (1/N0) Σ_{k=0}^{N0−1} g(kTs) e^{−jnΩ0k}   (2.94)

Equation (2.94) yields the Fourier spectrum Dn repeating periodically with period N0; that is, Dn+N0 = Dn. This results in overlapping of various spectral components. The DFT (or FFT) gives the coefficients Dn for n = 0 up to n = N0/2. Beyond n = N0/2, the coefficients represent the values for negative n because of the periodicity property Dn+N0 = Dn. For instance, when N0 = 32, D17 = D−15, D31 = D−1, and the cycle repeats again from n = 32 on. We shall see later (in Chapter 6) that the overlapping appears as if the spectrum above the (N0/2)th harmonic had folded back at this frequency. This is called spectral folding. To reduce the effect of this spectral folding, we should make sure that Dn for n ≥ N0/2 is negligible; that is, we need to increase N0 as much as practicable. We can use the efficient fast Fourier transform (FFT) to compute the right-hand side of Eq. (2.94). For this purpose, it is preferable (although not necessary) that N0 be a power of 2, that is, N0 = 2^m, where m is an integer.

COMPUTER EXAMPLE C2.1
Compute and plot the trigonometric and exponential Fourier spectra for the periodic signal in Fig. 2.21b (Example 2.7).

The samples of g(t) start at t = 0, and the last (N0th) sample is at t = T0 − Ts. (The last sample is not at t = T0 because the sample at t = 0 is identical to the sample at t = T0, and the next cycle begins at t = T0.) At the points of discontinuity, the sample value is taken as the average of the values of the function on the two sides of the discontinuity. To determine N0, we require Dn for n ≥ N0/2 to be negligible. Because g(t) has a jump discontinuity, Dn decays rather slowly, as 1/n. Hence, a choice of N0 = 200 is acceptable because the (N0/2)th (100th) harmonic is about 0.01 (about 1%) of the fundamental. However, we also require N0 to be a power of 2; hence, we shall take N0 = 256. Because of the jump discontinuity at t = 0,
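The book's program is written in MATLAB; the following Python/NumPy sketch performs the same computation and compares the FFT result with the analytical coefficients Dn = 0.504/(1 + j4n) from Example 2.10.

```python
import numpy as np

# NumPy equivalent of the MATLAB program c21.m (a sketch)
T0 = np.pi
N0 = 256
Ts = T0 / N0
M = 10
t = np.arange(N0) * Ts
g = np.exp(-t / 2)
g[0] = 0.604                        # average across the jump at t = 0
Dn = np.fft.fft(g) / N0             # Eq. (2.94)
Dnmag = np.abs(Dn)
Dnangle = np.degrees(np.angle(Dn))
C0 = Dnmag[0]                       # trigonometric dc term
Cn = 2 * Dnmag[1:M]                 # trigonometric amplitudes
```

The computed D0 ≈ 0.504 and |D1| ≈ 0.122 with angle ≈ −75.96°, matching the analytical values to within the small aliasing error discussed above.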

In this case Dn is real, and we can do without the phase plot if we plot Dn vs. ω instead of the amplitude spectrum (|Dn| vs. ω), as shown in Fig. 2.26. Compare this spectrum to the trigonometric spectrum shown in Fig. 2.22b: the dc components are identical, and the exponential spectrum amplitudes are half those in the trigonometric spectrum for ω > 0. All phases are zero.

Figure 2.26 Exponential Fourier spectrum of the square pulse periodic signal.

EXAMPLE 2.12 Find the exponential Fourier series and sketch the corresponding spectra for the impulse train δT0(t) shown in Fig. 2.27.

The exponential Fourier series is given by

δT0(t) = Σ_{n=−∞}^{∞} Dn e^{jnω0t},  ω0 = 2π/T0   (2.87)

where

Dn = (1/T0) ∫_{T0} δT0(t) e^{−jnω0t} dt

Choosing the interval of integration (−T0/2, T0/2) and recognizing that over this interval δT0(t) = δ(t),

Dn = (1/T0) ∫_{−T0/2}^{T0/2} δ(t) e^{−jnω0t} dt

In this integral, the impulse is located at t = 0. From the sampling property of the impulse function, the integral on the right-hand side is the value of e^{−jnω0t} at t = 0 (where the impulse is located), that is, 1. Therefore

Dn = 1/T0   (2.88)

and

δT0(t) = (1/T0) Σ_{n=−∞}^{∞} e^{jnω0t},  ω0 = 2π/T0   (2.89)

Equation (2.89) shows that the exponential spectrum is uniform (Dn = 1/T0) for all the frequencies, as shown in Fig. 2.27. The spectrum, being real, requires only the amplitude plot; all phases are zero. Compare this spectrum with the trigonometric spectrum in Fig. 2.24b: the dc components are identical, and the exponential spectrum amplitudes are half those in the trigonometric spectrum for ω > 0.

Figure 2.27 Impulse train and its exponential Fourier spectra.

Parseval's Theorem
A periodic signal g(t) is a power signal, and every term in its Fourier series is also a power signal. The power Pg of g(t) is equal to the power of its Fourier series. Because the Fourier series consists of terms that are mutually orthogonal over one period, the power of the Fourier series is equal to the sum of the powers of its Fourier components. This follows from Parseval's theorem. Thus, for the trigonometric Fourier series

g(t) = C0 + Σ_{n=1}^{∞} Cn cos(nω0t + θn)

the power of g(t) is given by

Pg = C0² + (1/2) Σ_{n=1}^{∞} Cn²   (2.90)

We have already demonstrated this result in Example 2.2 for the trigonometric Fourier series. It is also valid for the exponential Fourier series. For the exponential Fourier series

g(t) = D0 + Σ_{n=−∞, n≠0}^{∞} Dn e^{jnω0t}

The dc components D0 and C0 are identical in both spectra. Moreover, the exponential amplitude spectrum |Dn| is half the trigonometric amplitude spectrum Cn for n ≥ 1, and the exponential angle spectrum ∠Dn is identical to the trigonometric phase spectrum θn for n ≥ 0. We can therefore produce the exponential spectra merely by inspection of the trigonometric spectra, and vice versa. For the series in Example 2.10, for instance,

D1 = 0.504/(1 + j4) = 0.122e^{−j75.96°}  ⟹  |D1| = 0.122, ∠D1 = −75.96°
D−1 = 0.504/(1 − j4) = 0.122e^{j75.96°}  ⟹  |D−1| = 0.122, ∠D−1 = 75.96°
D2 = 0.504/(1 + j8) = 0.0625e^{−j82.87°}  ⟹  |D2| = 0.0625, ∠D2 = −82.87°
D−2 = 0.504/(1 − j8) = 0.0625e^{j82.87°}  ⟹  |D−2| = 0.0625, ∠D−2 = 82.87°

and so on. Note that Dn and D−n are conjugates, as expected [see Eq. (2.85)]:

Dn = |Dn|e^{jθn} and D−n = |Dn|e^{−jθn}   (2.86)

Figure 2.25 shows the frequency spectra (amplitude and angle) of the exponential Fourier series for the periodic signal φ(t) in Fig. 2.21b. We notice some interesting features of these spectra. First, the spectra exist for positive as well as negative values of ω (the frequency). Second, the amplitude spectrum is an even function of ω and the angle spectrum is an odd function of ω, as expected [see Eqs. (2.85)]. These spectra are closely connected to the trigonometric spectra of φ(t) (Fig. 2.21c and d).

Figure 2.25 Exponential Fourier spectra for the signal in Fig. 2.21a.

What Is a Negative Frequency?
The existence of the spectrum at negative frequencies is somewhat disturbing because, by definition, the frequency (number of repetitions per second) is a positive quantity. How do we interpret a negative frequency? Using a trigonometric identity, a sinusoid of a negative frequency −ω0 can be expressed as

cos(−ω0t + θ) = cos(ω0t − θ)

This clearly shows that the frequency of a sinusoid cos(ω0t + θ) is |ω0|, which is a positive quantity. The same conclusion is reached by observing that

e^{±jω0t} = cos ω0t ± j sin ω0t

Thus, the frequency of the exponentials e^{±jω0t} is indeed |ω0|. How do we then interpret the spectral plots for negative values of ω? A healthier way of looking at the situation is to say that exponential spectra are a graphical representation of coefficients Dn as a function of ω. Existence of the spectrum at ω = −nω0 is merely an indication of the fact that an exponential component e^{−jnω0t} exists in the series. We know that a sinusoid of frequency nω0 can be expressed in terms of a pair of exponentials e^{jnω0t} and e^{−jnω0t} [see Eq. (2.84b)].

EXAMPLE 2.11 Find the exponential Fourier series for the periodic square wave w(t) shown in Fig. 2.22a.

We have

w(t) = Σ_{n=−∞}^{∞} Dn e^{jnω0t}

where

Dn = (1/T0) ∫_{T0} w(t) e^{−jnω0t} dt = (1/T0) ∫_{−T0/4}^{T0/4} e^{−jnω0t} dt = −(1/jnω0T0)(e^{−jnω0T0/4} − e^{jnω0T0/4}) = (2/nω0T0) sin(nω0T0/4) = (1/nπ) sin(nπ/2)

where we used ω0T0 = 2π. Therefore D0 = 1/2 and, for n ≠ 0, Dn = (1/nπ) sin(nπ/2).

The exponential Fourier series is basically another form of the trigonometric Fourier series. Each sinusoid of frequency ω can be expressed as a sum of two exponentials, e^{jωt} and e^{−jωt}. This results in the exponential Fourier series, consisting of components of the form e^{jnω0t} with n varying from −∞ to ∞. The exponential form proves more convenient than the trigonometric form: the form of the series is more compact, and the mathematical expression for deriving the coefficients of the series is also compact. In system analysis it is also much more convenient to handle the exponential series than the trigonometric series. For these reasons we shall use the exponential (rather than trigonometric) representation of signals in the rest of the book.

In order to see its close connection with the trigonometric series, we shall rederive the exponential Fourier series from the trigonometric Fourier series. A sinusoid in the trigonometric series can be expressed as a sum of two exponentials using Euler's formula:

Cn cos(nω0t + θn) = (Cn/2)[e^{j(nω0t + θn)} + e^{−j(nω0t + θn)}] = [(Cn/2)e^{jθn}] e^{jnω0t} + [(Cn/2)e^{−jθn}] e^{−jnω0t} = Dn e^{jnω0t} + D−n e^{−jnω0t}   (2.81)

where

Dn = (1/2)Cn e^{jθn} and D−n = (1/2)Cn e^{−jθn}   (2.82)

The use of Eq. (2.81) in the compact trigonometric Fourier series (and letting C0 = D0) yields

g(t) = D0 + Σ_{n=1}^{∞} (Dn e^{jnω0t} + D−n e^{−jnω0t})   (2.84a)

   = Σ_{n=−∞}^{∞} Dn e^{jnω0t}   (2.84b)

which is precisely the exponential Fourier series in Eq. (2.79) derived earlier, with Dn given by Eq. (2.80). Comparison with Eq. (2.80) (for n = 0) shows that D0 = a0 = C0. Equations (2.82) show the close connection between the coefficients of the trigonometric and the exponential Fourier series, and demonstrate very clearly the principal virtue of the exponential form: its compactness. Equations (2.82) also show that for a real periodic signal the twin coefficients Dn and D−n are conjugates, and

|Dn| = |D−n| = (1/2)Cn   (2.85a)

∠Dn = θn and ∠D−n = −θn   (2.85b)

EXAMPLE 2.10 Find the exponential Fourier series for the signal in Fig. 2.21b (Example 2.7).

In this case T0 = π, ω0 = 2π/T0 = 2, and

φ(t) = Σ_{n=−∞}^{∞} Dn e^{j2nt}

where

Dn = (1/π) ∫_{T0} φ(t) e^{−j2nt} dt = (1/π) ∫_0^π e^{−t/2} e^{−j2nt} dt = (1/π) ∫_0^π e^{−(1/2 + j2n)t} dt = −(1/π) [1/(1/2 + j2n)] e^{−(1/2 + j2n)t} |_0^π = 0.504/(1 + j4n)

(using e^{−j2nπ} = 1), and therefore

φ(t) = 0.504 Σ_{n=−∞}^{∞} [1/(1 + j4n)] e^{j2nt} = 0.504 [1 + (1/(1 + j4))e^{j2t} + (1/(1 + j8))e^{j4t} + (1/(1 + j12))e^{j6t} + ··· + (1/(1 − j4))e^{−j2t} + (1/(1 − j8))e^{−j4t} + (1/(1 − j12))e^{−j6t} + ···]

Observe that the coefficients Dn are complex, and Dn and D−n are conjugates, as expected.

2.9.1 Exponential Fourier Spectra
In exponential spectra, we plot the coefficients Dn as a function of ω. But since Dn is complex in general, we need two plots: the real and the imaginary parts of Dn, or the amplitude (magnitude) and the angle of Dn. We prefer the latter because of its close connection to the amplitudes and phases of corresponding components of the trigonometric Fourier series. We therefore plot |Dn| vs. ω and ∠Dn vs. ω. This requires that the coefficients Dn be expressed in polar form as |Dn|e^{j∠Dn}. From Eqs. (2.85) it follows that the amplitude spectrum (|Dn| vs. ω) is an even function of ω and the angle spectrum (∠Dn vs. ω) is an odd function of ω when g(t) is a real signal.

Because Cn can be positive as well as negative, the spectrum is called the amplitude spectrum rather than the magnitude spectrum.

Effect of Symmetry
The Fourier series for the signal g(t) in Fig. 2.21 (Example 2.7) consists of sine and cosine terms, but the series for the signal w(t) in Fig. 2.22a (Example 2.8) consists of cosine terms only. In some cases the Fourier series consists of sine terms only. This is no accident. It can be shown that the Fourier series of any even periodic function g(t) consists of cosine terms only, and the series of any odd periodic function g(t) consists of sine terms only (see Prob. 2.8-3).

EXAMPLE 2.9 Find the trigonometric Fourier series and sketch the corresponding spectra for the periodic impulse train δT0(t) shown in Fig. 2.24a.

The trigonometric Fourier series for δT0(t) is given by

δT0(t) = C0 + Σ_{n=1}^{∞} Cn cos(nω0t + θn),  ω0 = 2π/T0

We first compute a0, an, and bn:

a0 = (1/T0) ∫_{−T0/2}^{T0/2} δ(t) dt = 1/T0

an = (2/T0) ∫_{−T0/2}^{T0/2} δ(t) cos nω0t dt = 2/T0

bn = (2/T0) ∫_{−T0/2}^{T0/2} δ(t) sin nω0t dt = 0

These results follow from the sampling property (2.19) of the impulse function. Therefore C0 = 1/T0, Cn = 2/T0, θn = 0, and

δT0(t) = (1/T0)[1 + 2(cos ω0t + cos 2ω0t + cos 3ω0t + ···)],  ω0 = 2π/T0   (2.76)

Figure 2.24b shows the amplitude spectrum; the phase spectrum is zero.

Figure 2.24 Impulse train and its Fourier spectrum.

Another useful function related to the periodic square wave is the bipolar square wave w0(t) shown in Fig. 2.23a. We encounter this signal in switching applications. Note that w0(t) is basically w(t) minus its dc component. It is easy to see that w0(t) = 2[w(t) − 1/2]. Hence, from Eq. (2.75) it follows that

w0(t) = (4/π)(cos ω0t − (1/3) cos 3ω0t + (1/5) cos 5ω0t − (1/7) cos 7ω0t + ···)   (2.77)

Comparison of this equation with Eq. (2.75) shows that the Fourier components of w0(t) are identical to those of w(t) in every respect except for doubling of the amplitudes and loss of dc.

Figure 2.23 Bipolar square pulse periodic signal.

2.9 EXPONENTIAL FOURIER SERIES
It is shown in Appendix A.2 that the set of exponentials e^{jnω0t} (n = 0, ±1, ±2, …) is orthogonal over any interval of duration T0 = 2π/ω0; that is,

∫_{T0} e^{jmω0t}(e^{jnω0t})* dt = ∫_{T0} e^{j(m−n)ω0t} dt = 0 for m ≠ n, and T0 for m = n   (2.78)

Moreover, this set is a complete set. From Eqs. (2.57) and (2.58) it follows that a signal g(t) can be expressed over an interval of duration T0 seconds as an exponential Fourier series

g(t) = Σ_{n=−∞}^{∞} Dn e^{jnω0t}   (2.79)

where [see Eq. (2.57)]

Dn = (1/T0) ∫_{T0} g(t) e^{−jnω0t} dt   (2.80)

and hence possesses a convergent Fourier series. Thus, the physical possibility of a periodic waveform is a valid and sufficient condition for the existence of a convergent series.

EXAMPLE 2.8 Find the compact trigonometric Fourier series for the periodic square wave w(t) shown in Fig. 2.22a, and sketch its amplitude and phase spectra.

Here

w(t) = a0 + Σ_{n=1}^{∞} (an cos nω0t + bn sin nω0t),  ω0 = 2π/T0

where

a0 = (1/T0) ∫_{T0} w(t) dt

Observation of Fig. 2.22a shows that the best choice for the region of integration is from −T0/2 to T0/2. Because w(t) = 1 only over (−T0/4, T0/4) and w(t) = 0 over the remaining segment,

a0 = (1/T0) ∫_{−T0/4}^{T0/4} dt = 1/2   (2.74a)

We could have found a0, the average value of w(t), to be 1/2 merely by inspection of w(t) in Fig. 2.22a. Also,

an = (2/T0) ∫_{−T0/4}^{T0/4} cos nω0t dt = (4/nω0T0) sin(nω0T0/4) = (2/nπ) sin(nπ/2)

where we used ω0T0 = 2π. Therefore

an = 0 for n even, 2/nπ for n = 1, 5, 9, 13, …, and −2/nπ for n = 3, 7, 11, 15, …   (2.74b)

and

bn = (2/T0) ∫_{−T0/4}^{T0/4} sin nω0t dt = 0   (2.74c)

Observe that bn = 0 and all the sine terms are zero; only the cosine terms appear in the trigonometric series:

w(t) = 1/2 + (2/π)(cos ω0t − (1/3) cos 3ω0t + (1/5) cos 5ω0t − (1/7) cos 7ω0t + ···)   (2.75)

The series is therefore already in compact form, except that the amplitudes of alternating harmonics are negative. Now by definition, amplitudes Cn are positive [see Eq. (2.67a)]. The negative sign can be accommodated by a phase of π radians, as seen from the trigonometric identity

−cos x = cos(x − π)

Using this fact, we can express the series in Eq. (2.75) as

w(t) = 1/2 + (2/π)[cos ω0t + (1/3) cos(3ω0t − π) + (1/5) cos 5ω0t + (1/7) cos(7ω0t − π) + (1/9) cos 9ω0t + ···]

This is the desired form of the compact trigonometric Fourier series. The amplitudes are

Cn = 0 for n even, and 2/nπ for n odd

and the phases are

θn = 0 for all n except n = 3, 7, 11, 15, …, for which θn = −π

In the preceding derivation we could have chosen the phase π or −π; in fact, cos(x ± Nπ) = −cos x for any odd integral value of N, so the phase can be chosen as ±Nπ, where N is any convenient odd integer.

Figure 2.22 Square pulse periodic signal and its Fourier spectra.

We could plot amplitude and phase spectra using these values. We can, however, simplify our task in this special case if we allow Cn to take on negative values. If this is allowed, we do not need a phase of −π to account for the sign. This means the phases of all components are zero,
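The compact spectrum just described can be read off a DFT of one period of w(t) (a Python sketch with a grid of our own choosing): the odd amplitudes come out as 2/nπ, the n = 1, 5, 9, … phases as 0, and the n = 3, 7, 11, … phases as ±π.

```python
import numpy as np

# Compact trigonometric spectrum of the square wave of Example 2.8
N = 2 ** 16
T0 = 2 * np.pi
t = np.arange(N) * T0 / N
w = ((t < T0 / 4) | (t > 3 * T0 / 4)).astype(float)
D = np.fft.fft(w) / N
C = 2 * np.abs(D)        # Cn = 2|Dn| for n >= 1
theta = np.angle(D)      # phase spectrum
```

A phase of magnitude π on the third harmonic is exactly the "negative amplitude" the text chooses to absorb into Cn.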

Let us denote the trigonometric Fourier series on the right-hand side of Eq. (2.68) by φ(t):

    φ(t) = C_0 + Σ_{n=1}^{∞} C_n cos(nω_0 t + θ_n)        for all t

Then

    φ(t + T_0) = C_0 + Σ_{n=1}^{∞} C_n cos[nω_0(t + T_0) + θ_n]
               = C_0 + Σ_{n=1}^{∞} C_n cos(nω_0 t + 2nπ + θ_n)
               = C_0 + Σ_{n=1}^{∞} C_n cos(nω_0 t + θ_n)
               = φ(t)        for all t                                (2.71)

where we used the fact that ω_0 T_0 = 2π. This shows that the trigonometric Fourier series is a periodic function of period T_0 (the period of its fundamental). Thus, when we represent a signal g(t) by the trigonometric Fourier series over a certain interval of duration T_0, the function g(t) and its Fourier series φ(t) need only be equal over that interval of T_0 seconds. Outside this interval, the series repeats periodically with period T_0. For example, the segment of g(t) in Fig. 2.21a over the interval 0 ≤ t ≤ π repeats periodically every π seconds, as shown in Fig. 2.21b.

Now, if the function g(t) were itself to be periodic with period T_0, then a Fourier series representing g(t) over an interval T_0 will also represent g(t) for all t (not just over the interval T_0). Moreover, such a periodic signal g(t) can be generated by a periodic repetition of any of its segments of duration T_0. Therefore, the trigonometric Fourier series representing a segment of g(t) of duration T_0 starting at any instant represents g(t) for all t. This means that in computing the coefficients a_0, a_n, and b_n, we may perform the integration over any interval of T_0 seconds:

    a_0 = (1/T_0) ∫_{T_0} g(t) dt                                     (2.72a)
    a_n = (2/T_0) ∫_{T_0} g(t) cos nω_0 t dt      n = 1, 2, 3, ...    (2.72b)
    b_n = (2/T_0) ∫_{T_0} g(t) sin nω_0 t dt      n = 1, 2, 3, ...    (2.72c)

where ∫_{T_0} means that the integration is performed over any interval of T_0 seconds; that is, we may use any value for t_1 in Eqs. (2.65).

The Fourier Spectrum
The compact trigonometric Fourier series in Eq. (2.68) indicates that a periodic signal g(t) can be expressed as a sum of sinusoids of frequencies 0 (dc), ω_0, 2ω_0, ..., nω_0, ..., whose amplitudes are C_0, C_1, C_2, ..., C_n, ... and whose phases are 0, θ_1, θ_2, ..., θ_n, .... We can readily plot amplitude C_n vs. ω (the amplitude spectrum) and θ_n vs. ω (the phase spectrum). These two plots together are the frequency spectra of g(t). These spectra tell us at a glance the frequency composition of g(t), that is, the amplitudes and phases of the various sinusoidal components of g(t). Knowing the frequency spectra, we can reconstruct or synthesize g(t). The frequency spectra therefore provide an alternative description—the frequency-domain description of g(t). For instance, Fig. 2.21c and d show the amplitude and phase spectra for the periodic signal φ(t), whose time-domain description is shown in Fig. 2.21b. A signal, therefore, has a dual identity: the time-domain identity φ(t) and the frequency-domain identity (Fourier spectra). The two identities complement each other; taken together, they provide a better understanding of a signal.

Series Convergence at Jump Discontinuities
When there is a jump discontinuity in a periodic signal g(t), its Fourier series at the point of discontinuity converges to an average of the left-hand and right-hand limits of g(t) at the instant of discontinuity.* In Fig. 2.21b, for instance, the periodic signal φ(t) is discontinuous at t = 0, with φ(0+) = 1 and φ(0-) = e^{-π/2} = 0.208. The corresponding Fourier series converges to a value of (1 + 0.208)/2 = 0.604 at t = 0.

*In reality, the series convergence in the vicinity of a point of discontinuity shows about 9% overshoot (Gibbs phenomenon). This behavior of the Fourier series is dictated by its error-energy-minimization property, discussed in Sec. 2.7.

Existence of the Fourier Series: Dirichlet Conditions
There are two basic conditions for the existence of the Fourier series.

1. For the series to exist, the coefficients a_0, a_n, and b_n must be finite. From Eqs. (2.65) it follows that the existence of these coefficients is guaranteed if g(t) is absolutely integrable over one period; that is,

    ∫_{T_0} |g(t)| dt < ∞                                             (2.73)

This is known as the weak Dirichlet condition. If a function g(t) satisfies the weak Dirichlet condition, the existence of a Fourier series is guaranteed, but the series may not converge at every point. For example, if a function g(t) is infinite at some point, then obviously the series representing the function will be nonconvergent at that point. Similarly, if a function has an infinite number of maxima and minima in one period, then the function contains an appreciable amount of components of frequencies approaching infinity. Consequently, the higher coefficients in the series do not decay rapidly, so that the series will not converge rapidly or uniformly. Thus, for a convergent Fourier series, in addition to condition (2.73), we require that:

2. The function g(t) may have only a finite number of maxima and minima in one period, and it may have only a finite number of finite discontinuities in one period.

These two conditions are known as the strong Dirichlet conditions. We note here that any periodic waveform that can be generated in a laboratory satisfies the strong Dirichlet conditions.

Using the identity

    a_n cos nω_0 t + b_n sin nω_0 t = C_n cos(nω_0 t + θ_n)

the trigonometric Fourier series in Eq. (2.62) can be expressed in the compact form of the trigonometric Fourier series as

    g(t) = C_0 + Σ_{n=1}^{∞} C_n cos(nω_0 t + θ_n)                    (2.68)

where the coefficients C_n and θ_n are computed from a_n and b_n using Eqs. (2.67). Equation (2.65a) shows that a_0 (or C_0) is the average value of g(t) (averaged over one period). This value can often be determined by inspection of g(t).

EXAMPLE 2.7 Find the compact trigonometric Fourier series for the exponential e^{-t/2} shown in Fig. 2.21a over the interval 0 ≤ t ≤ π.

Because we are required to represent g(t) by the trigonometric Fourier series over the interval 0 ≤ t ≤ π, T_0 = π, and the fundamental frequency is

    ω_0 = 2π/T_0 = 2 rad/s

Therefore

    g(t) = a_0 + Σ_{n=1}^{∞} a_n cos 2nt + b_n sin 2nt        0 ≤ t ≤ π

where [from Eq. (2.65a)]

    a_0 = (1/π) ∫_0^π e^{-t/2} dt = 0.504

and

    a_n = (2/π) ∫_0^π e^{-t/2} cos 2nt dt = 0.504 (2/(1 + 16n²))

    b_n = (2/π) ∫_0^π e^{-t/2} sin 2nt dt = 0.504 (8n/(1 + 16n²))

Therefore

    g(t) = 0.504 [1 + Σ_{n=1}^{∞} (2/(1 + 16n²)) (cos 2nt + 4n sin 2nt)]        0 ≤ t ≤ π    (2.69)

To find the compact Fourier series, we compute its coefficients using Eqs. (2.67):

    C_0 = a_0 = 0.504

    C_n = √(a_n² + b_n²) = 0.504 (2/√(1 + 16n²))                      (2.70a)

    θ_n = tan⁻¹(-b_n/a_n) = tan⁻¹(-4n) = -tan⁻¹ 4n                    (2.70b)

The amplitudes and phases of the dc component and the first seven harmonics are computed from Eqs. (2.70) and displayed in Table 2.1.

Table 2.1
    n      0       1        2        3        4        5        6        7
    C_n    0.504   0.244    0.125    0.084    0.063    0.0504   0.042    0.036
    θ_n    0       -75.96°  -82.87°  -85.24°  -86.42°  -87.14°  -87.61°  -87.95°

Using these numerical values, we can express g(t) in the compact trigonometric Fourier series as

    g(t) = 0.504 + 0.504 Σ_{n=1}^{∞} (2/√(1 + 16n²)) cos(2nt - tan⁻¹ 4n)        0 ≤ t ≤ π

      = 0.504 + 0.244 cos(2t - 75.96°) + 0.125 cos(4t - 82.87°) + 0.084 cos(6t - 85.24°) + 0.063 cos(8t - 86.42°) + ...

Periodicity of the Trigonometric Fourier Series
We have shown how an arbitrary signal g(t) may be expressed as a trigonometric Fourier series over any interval of T_0 seconds. The Fourier series is equal to g(t) over this interval alone. Outside this interval, the series is not necessarily equal to g(t). It would be interesting to find out what happens to the Fourier series outside this interval. We now show that the trigonometric Fourier series is a periodic function of period T_0 (the period of the fundamental).

Figure 2.21 Periodic signal and its Fourier spectra.
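The entries of Table 2.1 can be reproduced from the closed forms C_n = 0.504·2/√(1 + 16n²) and θ_n = -tan⁻¹ 4n. A minimal check in Python (the function names are ours; `A` is the exact value of the constant written as 0.504 in the text):

```python
import math

A = (2 / math.pi) * (1 - math.exp(-math.pi / 2))   # the book's 0.504

def C(n):
    """Amplitude of the nth harmonic, Eq. (2.70a)."""
    return A if n == 0 else A * 2 / math.sqrt(1 + 16 * n * n)

def theta_deg(n):
    """Phase of the nth harmonic in degrees, Eq. (2.70b)."""
    return 0.0 if n == 0 else -math.degrees(math.atan(4 * n))

for n in range(8):
    print(n, round(C(n), 4), round(theta_deg(n), 2))
```

The printed rows should match Table 2.1 to the precision shown there.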

The signal representation by generalized Fourier series shows that signals are vectors in every sense. Just as a vector can be represented as a sum of its components in a variety of ways, depending on the choice of a coordinate system, a signal can be represented as a sum of its components in a variety of ways. Just as we have vector coordinate systems formed by mutually orthogonal vectors, such as rectangular, cylindrical, and spherical, we also have signal coordinate systems (basis signals) formed by a variety of sets of mutually orthogonal signals.

In vector space we know that the square of the length of a vector is equal to the sum of the squares of the lengths of its orthogonal components. Parseval's theorem [Eq. (2.59)] is the statement of this fact as applied to signals.

Some Examples of Generalized Fourier Series
There exists a large number of orthogonal signal sets that can be used as basis signals for generalized Fourier series. Some well-known signal sets are trigonometric (sinusoid) functions, exponential functions, Walsh functions, Bessel functions, Legendre polynomials, Laguerre functions, Jacobi polynomials, Hermite polynomials, and Chebyshev polynomials. The functions that concern us most in this book are the trigonometric and the exponential sets, discussed in the rest of this chapter.

2.8 TRIGONOMETRIC FOURIER SERIES
Consider a signal set:

    {1, cos ω_0 t, cos 2ω_0 t, ..., cos nω_0 t, ...; sin ω_0 t, sin 2ω_0 t, ..., sin nω_0 t, ...}    (2.60)

A sinusoid of frequency nω_0 is called the nth harmonic of the sinusoid of frequency ω_0 when n is an integer. The sinusoid of frequency ω_0 serves as an anchor in this set and is called the fundamental; the remaining terms are harmonics. Note that the constant term 1 is the 0th harmonic in this set because cos(0 × ω_0 t) = 1. We can show that this set is orthogonal over any interval of duration T_0 = 2π/ω_0, which is the period of the fundamental. This follows from the equations (proved in Appendix A):

    ∫_{T_0} cos nω_0 t cos mω_0 t dt = { 0       n ≠ m
                                         T_0/2   n = m ≠ 0 }          (2.61a)

    ∫_{T_0} sin nω_0 t sin mω_0 t dt = { 0       n ≠ m
                                         T_0/2   n = m ≠ 0 }          (2.61b)

    ∫_{T_0} sin nω_0 t cos mω_0 t dt = 0        for all n and m       (2.61c)

The notation ∫_{T_0} means integral over an interval from t = t_1 to t = t_1 + T_0 for any value of t_1. These equations show that the set (2.60) is orthogonal over any contiguous interval of duration T_0. This is the trigonometric set, which can be shown to be a complete set. Therefore, we can express a signal g(t) by a trigonometric Fourier series over any interval of duration T_0 seconds as

    g(t) = a_0 + a_1 cos ω_0 t + a_2 cos 2ω_0 t + ... + b_1 sin ω_0 t + b_2 sin 2ω_0 t + ...    (2.62a)

or

    g(t) = a_0 + Σ_{n=1}^{∞} (a_n cos nω_0 t + b_n sin nω_0 t)        t_1 ≤ t ≤ t_1 + T_0    (2.62b)

where

    ω_0 = 2π/T_0                                                      (2.63)

Using Eq. (2.57), we can determine the Fourier coefficients a_n and b_n. Thus

    a_n = [∫_{t_1}^{t_1+T_0} g(t) cos nω_0 t dt] / [∫_{t_1}^{t_1+T_0} cos² nω_0 t dt]    (2.64)

The integral in the denominator of Eq. (2.64), as seen from Eq. (2.61a) (with m = n), is T_0/2 when n ≠ 0. Moreover, for n = 0 the denominator is T_0. Hence

    a_0 = (1/T_0) ∫_{t_1}^{t_1+T_0} g(t) dt                           (2.65a)

    a_n = (2/T_0) ∫_{t_1}^{t_1+T_0} g(t) cos nω_0 t dt      n = 1, 2, 3, ...    (2.65b)

Using a similar argument, we obtain

    b_n = (2/T_0) ∫_{t_1}^{t_1+T_0} g(t) sin nω_0 t dt      n = 1, 2, 3, ...    (2.65c)

Compact Trigonometric Fourier Series
The trigonometric Fourier series in Eq. (2.62) contains sine and cosine terms of the same frequency. We can combine the two terms in a single term of the same frequency using the trigonometric identity

    a_n cos nω_0 t + b_n sin nω_0 t = C_n cos(nω_0 t + θ_n)           (2.66)

where

    C_n = √(a_n² + b_n²)                                              (2.67a)

    θ_n = tan⁻¹(-b_n/a_n)                                             (2.67b)

For consistency we denote the dc term a_0 by C_0; that is,

    C_0 = a_0                                                         (2.67c)
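The orthogonality relations (2.61a–c) are easy to confirm numerically. A minimal sketch (ours, not the book's; the midpoint-rule integrator and step count are arbitrary choices):

```python
import math

def inner(f, g, T0, steps=10000):
    """Midpoint-rule approximation of the integral of f(t)g(t) over [0, T0]."""
    dt = T0 / steps
    return sum(f((k + 0.5) * dt) * g((k + 0.5) * dt) for k in range(steps)) * dt

T0 = 2 * math.pi
w0 = 2 * math.pi / T0
cos_n = lambda n: (lambda t: math.cos(n * w0 * t))
cos_3 = cos_n(3)
sin_n = lambda n: (lambda t: math.sin(n * w0 * t))

# Distinct harmonics integrate to zero; a harmonic against itself gives T0/2.
print(inner(cos_n(2), cos_3, T0))      # cross term, Eq. (2.61a) with n != m
print(inner(cos_n(2), cos_n(2), T0))   # self term, Eq. (2.61a) with n = m
print(inner(sin_n(1), cos_n(4), T0))   # sine-cosine term, Eq. (2.61c)
```

For smooth periodic integrands over a full period, the midpoint rule is essentially exact, so the three printed values should be very close to 0, T0/2, and 0, respectively.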

In this case, the error in the approximation is zero when g is approximated in terms of three mutually orthogonal vectors x_1, x_2, x_3. This is because g is a three-dimensional vector, and the vectors x_1, x_2, and x_3 represent a complete set of orthogonal vectors in three-dimensional space. Completeness here means that it is impossible to find in this space another vector x_4 that is orthogonal to all three vectors x_1, x_2, and x_3. Any vector in this space can then be represented (with zero error) in terms of these three vectors. Such vectors are known as basis vectors. If a set of vectors {x_i} is not complete, the error in the approximation will generally not be zero. Thus, in the three-dimensional case discussed, it is generally not possible to represent a vector g in terms of only two basis vectors without an error.

The choice of basis vectors is not unique. In fact, a set of basis vectors corresponds to a particular choice of coordinate system. Thus, a three-dimensional vector g may be represented in many different ways, depending on the coordinate system used.

To summarize: if a set of vectors {x_i} is mutually orthogonal, that is, if

    x_m · x_n = 0        m ≠ n

then any vector g in this space can be expressed as

    g = c_1 x_1 + c_2 x_2 + c_3 x_3                                   (2.53)

where the constants c_i are given by

    c_i = (g · x_i)/(x_i · x_i) = (1/|x_i|²) g · x_i        i = 1, 2, 3    (2.54)

2.7.2 Orthogonal Signal Space
We continue with our signal approximation problem, using clues and insights developed for vector approximation. As before, we define the orthogonality of a signal set x_1(t), x_2(t), ..., x_N(t) over the interval [t_1, t_2] as

    ∫_{t_1}^{t_2} x_m(t) x_n*(t) dt = { 0     m ≠ n
                                        E_n   m = n }                  (2.55)

If the energies E_n = 1 for all n, then the set is normalized and is called an orthonormal set. An orthogonal set can always be normalized by dividing x_n(t) by √E_n. Now, consider the problem of approximating a signal g(t) over the interval [t_1, t_2] by a set of N mutually orthogonal signals x_1(t), x_2(t), ..., x_N(t):

    g(t) ≈ c_1 x_1(t) + c_2 x_2(t) + ... + c_N x_N(t)                 (2.56a)
         = Σ_{n=1}^{N} c_n x_n(t)                                     (2.56b)

It can be shown that E_e, the energy of the error signal e(t) in this approximation, is minimized if we choose

    c_n = [∫_{t_1}^{t_2} g(t) x_n(t) dt] / [∫_{t_1}^{t_2} x_n²(t) dt]
        = (1/E_n) ∫_{t_1}^{t_2} g(t) x_n(t) dt        n = 1, 2, ..., N    (2.57)

Moreover, if the orthogonal set is complete, then the error energy E_e → 0, and the representation in (2.56) is no longer an approximation, but an equality:

    g(t) = c_1 x_1(t) + c_2 x_2(t) + ... + c_n x_n(t) + ...           (2.58)

where the coefficients c_n are given by Eq. (2.57).

The series on the right-hand side of Eq. (2.58) is called the generalized Fourier series of g(t) with respect to the set {x_n(t)}. When the set {x_n(t)} is such that the error energy E_N → 0 as N → ∞ for every member of some particular class of signals, we say that the set {x_n(t)} is complete on [t_1, t_2] for that class of g(t), and the set {x_n(t)} is called a set of basis functions or basis signals. Unless otherwise mentioned, in the future we shall consider only the class of energy signals.

One subtle point that must be understood clearly is the meaning of equality in Eq. (2.58). The equality here is not an equality in the ordinary sense, but in the sense that the error energy, that is, the energy of the difference between the two sides of Eq. (2.58), approaches zero. If the equality exists in the ordinary sense, the error energy is always zero; but the converse is not necessarily true. The error energy can approach zero even though e(t), the difference between the two sides, is nonzero at some isolated instants. This is because even if e(t) is nonzero at such instants, the area under e²(t) is still zero. Thus, the Fourier series on the right-hand side of Eq. (2.58) may differ from g(t) at a finite number of points. In fact, when g(t) has a jump discontinuity at t = t_0, the corresponding Fourier series at t_0 converges to the mean of g(t_0+) and g(t_0-).

Parseval's Theorem
Recall that the signal energy (the area under the squared value of a signal) is analogous to the square of the length of a vector in the vector-signal analogy. Recall also that the energy of the sum of orthogonal signals is equal to the sum of their energies. Therefore, the energy of the right-hand side of Eq. (2.58) is the sum of the energies of the individual orthogonal components c_1 x_1(t), c_2 x_2(t), c_3 x_3(t), .... The energy of a component c_n x_n(t) is c_n² E_n. Because the error signal energy approaches zero, the energy of g(t) is now equal to the sum of the energies of its orthogonal components. Equating the energies of the two sides of Eq. (2.58) yields

    E_g = c_1² E_1 + c_2² E_2 + c_3² E_3 + ... = Σ_n c_n² E_n          (2.59)

This important result is called Parseval's theorem.
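The convergence of the generalized Fourier series and Parseval's theorem can be illustrated numerically. The sketch below is ours (the test signal, the sine basis, and the integration settings are arbitrary choices): it approximates g(t) = t(π − t) on [0, π] using the mutually orthogonal set {sin nt} (for which E_n = π/2 there) and tracks the residual error energy E_g − Σ c_n² E_n as more basis signals are kept.

```python
import math

def energy(f, t1, t2, steps=20000):
    """Midpoint-rule energy of f over [t1, t2]."""
    dt = (t2 - t1) / steps
    return sum(f(t1 + (k + 0.5) * dt) ** 2 for k in range(steps)) * dt

def coeff(g, xn, t1, t2, steps=20000):
    """cn = (1/En) * integral of g(t) xn(t) dt, as in Eq. (2.57)."""
    dt = (t2 - t1) / steps
    num = sum(g(t1 + (k + 0.5) * dt) * xn(t1 + (k + 0.5) * dt)
              for k in range(steps)) * dt
    return num / energy(xn, t1, t2, steps)

g = lambda t: t * (math.pi - t)        # an arbitrary test signal
t1, t2 = 0.0, math.pi
Eg = energy(g, t1, t2)

def residual(N):
    """Error energy Eg - sum of cn^2 * En after keeping N sine basis signals."""
    r = Eg
    for n in range(1, N + 1):
        xn = lambda t, n=n: math.sin(n * t)
        r -= coeff(g, xn, t1, t2) ** 2 * (math.pi / 2)
    return r
```

Because the sine set is complete on [0, π] for this class of signals, `residual(N)` should shrink toward zero as N grows, in agreement with Eq. (2.59).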

Using the correlation coefficient of Eq. (2.45) to compare the transmitted pulse g(t) with the received pulse z(t) runs into a difficulty: the integral would yield a zero value even when the pulses are identical but have a relative time shift, because the pulses are then disjoint (nonoverlapping in time), as shown in Fig. 2.19. To avoid this difficulty, we compare the transmitted pulse g(t) with the received pulse z(t) shifted by τ. Instead of the integral of Eq. (2.45), we use the modified integral ψ_gz(τ), the cross-correlation function of two real signals g(t) and z(t), defined by

    ψ_gz(τ) = ∫_{-∞}^{∞} g(t) z(t + τ) dt                             (2.48)

Here z(t + τ) is the pulse z(t) left-shifted (advanced) by τ seconds.* Therefore, ψ_gz(τ) is an indication of the similarity (correlation) of g(t) with z(t) advanced (left-shifted) by τ seconds. If for some value of τ there is a strong correlation, we not only detect the presence of the pulse, but we also detect the relative time shift of z(t) with respect to g(t).

*For complex signals we define

    ψ_gz(τ) = ∫_{-∞}^{∞} g*(t) z(t + τ) dt

Autocorrelation Function
The correlation of a signal with itself is called autocorrelation. The autocorrelation function ψ_g(τ) of a real signal g(t) is defined as

    ψ_g(τ) = ∫_{-∞}^{∞} g(t) g(t + τ) dt                              (2.50)

In Chapter 3, we shall show that the autocorrelation function provides valuable spectral information about the signal.

Figure 2.19 Physical explanation of the cross-correlation function: (a) g(t); (b) z(t), a time-shifted pulse.

2.7 SIGNAL REPRESENTATION BY ORTHOGONAL SIGNAL SET
In this section we show a way of representing a signal as a sum of orthogonal signals. Here again we can benefit from the insight gained from a similar problem with vectors. We know that a vector can be represented as the sum of orthogonal vectors, which form the coordinate system of a vector space. The problem with signals is analogous, and the results for signals are parallel to those for vectors. For this reason, let us review the case of vector representation.

2.7.1 Orthogonal Vector Space
Consider a three-dimensional Cartesian vector space described by three mutually orthogonal vectors x_1, x_2, and x_3, as shown in Fig. 2.20. First, we shall seek to approximate a three-dimensional vector g in terms of two mutually orthogonal vectors x_1 and x_2:

    g ≈ c_1 x_1 + c_2 x_2

The error e in this approximation is

    e = g - (c_1 x_1 + c_2 x_2)

or

    g = c_1 x_1 + c_2 x_2 + e

As in the earlier geometrical argument, the constants c_1 and c_2 are given by Eq. (2.26), and c_1 x_1 and c_2 x_2 are the projections (components) of g on x_1 and x_2, respectively. The error is minimized when e is perpendicular to the x_1–x_2 plane; we see from Fig. 2.20 that the length of e is then minimum. Note that the error is not zero in general, because g is a three-dimensional vector.

Now let us determine the best approximation to g in terms of all three mutually orthogonal vectors x_1, x_2, and x_3:

    g ≈ c_1 x_1 + c_2 x_2 + c_3 x_3                                   (2.51)

Figure 2.20 shows that a unique choice of c_1, c_2, and c_3 exists, for which Eq. (2.51) is no longer an approximation but an equality:

    g = c_1 x_1 + c_2 x_2 + c_3 x_3

Figure 2.20 Representation of a vector in three-dimensional space.
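The time-shift-detection property of the cross-correlation function can be demonstrated numerically. The sketch below is ours (the pulse shapes, shift grid, and integration settings are arbitrary choices): z(t) is a copy of g(t) delayed by 3 seconds, and ψ_gz(τ) of Eq. (2.48) peaks at the advance τ = 3 that realigns the two pulses.

```python
def cross_correlation(g, z, taus, t_lo, t_hi, steps=4000):
    """psi_gz(tau) = integral of g(t) z(t + tau) dt, evaluated by a
    midpoint Riemann sum for each candidate shift tau."""
    dt = (t_hi - t_lo) / steps
    out = []
    for tau in taus:
        acc = 0.0
        for k in range(steps):
            t = t_lo + (k + 0.5) * dt
            acc += g(t) * z(t + tau)
        out.append(acc * dt)
    return out

g = lambda t: 1.0 if 0 <= t <= 1 else 0.0   # unit pulse on [0, 1]
z = lambda t: 1.0 if 3 <= t <= 4 else 0.0   # the same pulse, delayed by 3 s

taus = [i * 0.5 for i in range(-2, 13)]      # candidate shifts from -1 to 6
psi = cross_correlation(g, z, taus, -2.0, 8.0)
best = taus[psi.index(max(psi))]             # shift with strongest correlation
```

Here `best` should come out as 3.0: advancing z(t) by 3 seconds aligns it with g(t), which is exactly the relative time shift the correlator is meant to reveal.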

Comments: Because g_1(t) = x(t), the two signals have the maximum possible similarity, and c_n = 1. The signal g_2(t) also shows the maximum possible similarity, with c_n = 1. This is reasonable because g_2(t) is identical to x(t) in shape; only the amplitude (strength) is different. We have defined c_n to measure the similarity of the wave shapes, and it is independent of the amplitude (strength) of the signals compared. The signal g_3(t), on the other hand, has the maximum possible dissimilarity with x(t) because it is equal to -x(t). This gives the highest dissimilarity, c_n = -1.

For g_4(t), c_n = 0.961, implying a high degree of similarity with x(t). This is reasonable because g_4(t) is very similar to x(t) over the duration of x(t) (for 0 ≤ t < 5): both signals always remain positive and show no oscillations, and both have zero or negligible strength beyond t = 5. Just by inspection, we notice that g_5(t) is similar to x(t), but not as similar as g_4(t). There is still considerable similarity—both signals remain positive and show no oscillations—but the variations or changes in both x(t) and g_4(t) are at similar rates, whereas the variations in g_5(t) are generally at a higher rate than those in x(t). This is why c_n = 0.628 for g_5(t).

The signal g_6(t) is orthogonal to x(t), which results in c_n = 0. This appears to indicate that the dissimilarity in this case is not as strong as that of g_3(t), for which c_n = -1. This may seem odd because g_3(t) appears more dissimilar to x(t) than does g_6(t). However, the dissimilarity between x(t) and g_3(t) is of the nature of antipathy (the worst enemy): in a way they are very similar, but in opposite ways. The dissimilarity of x(t) with g_6(t), on the other hand, stems from the fact that they are almost of different species, or from different planets; it is of the nature of being strangers to each other. Thus, maximum dissimilarity (c_n = -1) is different from unrelatedness (c_n = 0).

2.6.2 Correlation Functions
Consider the application of correlation to signal detection in a radar, where a signal pulse is transmitted in order to detect a suspected target. If a target is present, the pulse will be reflected by it. If a target is not present, there will be no reflected pulse, just noise. By detecting the presence or absence of the reflected pulse, we confirm the presence or absence of a target. By measuring the time delay between the transmitted and the received (reflected) pulses, we determine the distance of the target. The crucial problem in this procedure is to detect the heavily attenuated, reflected pulse (of known waveform) buried in the unwanted noise signal. Correlation of the received pulse with the transmitted pulse can be of great help in this situation. Correlation is an extremely important concept, widely used for signal processing applications in radar, sonar, digital communication, electronic warfare, and many others.

A similar situation exists in digital communication, where we are required to detect the presence of one of two known waveforms in the presence of noise. Consider the case of binary communication, where two known waveforms are received in a random sequence. Each time we receive a pulse, our task is to determine which of the two (known) waveforms has been received. To make the detection easier, we must make the two pulses as dissimilar as possible; ideally, one pulse should be so dissimilar to the other that the correlation between them is negative. In the antipodal scheme, we select one pulse as the negative of the other: the two pulses are p(t) and -p(t). The correlation coefficient c_n of these pulses is -1, the maximum possible dissimilarity.

The receiver consists of a correlator that computes the correlation between p(t) and the received pulse, followed by a threshold detector, which decides that p(t) is received if the correlation is positive, and that -p(t) is received if the correlation is negative. Assume, for the moment, that there is no noise nor any other imperfection in the transmission. If p(t) is transmitted, the correlation is 1, the maximum possible, and the detector decides that p(t) has been received; if -p(t) is transmitted, the correlation is -1, and the detector decides that -p(t) has been received. In this ideal case, the margin provided by the correlation c_n for distinguishing the two pulses is 2 (from 1 to -1 and vice versa).

In practice, however, there are several imperfections. There is always an unwanted signal (noise) superimposed on the received pulses. Moreover, during transmission pulses get distorted and dispersed (spread out) in time; pulse distortion and overlapping (spreading) from other pulses reduce the distinguishability of the two waveforms. (In later chapters we shall discuss pulse dispersion and pulse distortion during transmission.) Because of the noise and pulse distortion, the correlation is no longer ±1, but has a smaller magnitude, and the margin is reduced. In some extreme situation, the detector could decide that p(t) was transmitted when in fact -p(t) was transmitted, thus causing a detection error. That is why it is important to start with as large a margin as possible. Because of the maximum possible dissimilarity between the two pulses, the antipodal scheme has the best performance in terms of guarding against channel noise and pulse distortion. In practice, other schemes, such as an orthogonal scheme (for which c_n = 0), are also used—because of some other reasons—even when they provide a smaller margin (from 0 to 1 and vice versa) in distinguishing the pulses. We must also make sure that the transmitted pulses have sufficient energy for the damage caused by noise and other imperfections to remain within a limit, so that the error probability stays below some acceptable bound. Quantitative discussion of correlation in digital signal detection, as well as the calculation of error probability in the presence of noise, is given in Chapters 13 and 14.
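The correlator-plus-threshold receiver described above can be sketched in a few lines. This is our illustrative simulation, not the book's (the sample-level inner product stands in for the correlation integral, and the pulse shape, noise level, and trial count are arbitrary choices):

```python
import math
import random

def correlate(received, template):
    """Sample-level inner product of the received waveform with the template."""
    return sum(r * s for r, s in zip(received, template))

def detect(received, template):
    """Threshold detector of the antipodal scheme: decide +1 (p(t) sent)
    if the correlation is positive, -1 (-p(t) sent) if it is negative."""
    return +1 if correlate(received, template) >= 0 else -1

random.seed(1)
p = [math.sin(2 * math.pi * k / 50) for k in range(50)]   # template pulse
errors = 0
trials = 200
for _ in range(trials):
    bit = random.choice([+1, -1])                          # p(t) or -p(t)
    noisy = [bit * s + random.gauss(0, 0.5) for s in p]    # channel noise
    if detect(noisy, p) != bit:
        errors += 1
```

With this pulse energy and noise level the margin is large, so `errors` should be zero or nearly so; raising the noise standard deviation shrinks the margin and detection errors begin to appear, which is exactly the effect discussed above.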

Yet doubling g doubles the value of c (whereas doubling x halves the value of c), as seen from Eq. (2.26). Our measure is clearly faulty. Similarity between two vectors is indicated by the angle θ between the vectors. The smaller the θ, the larger is the similarity, and vice versa. The amount of similarity can therefore be conveniently measured by cos θ. Thus, a suitable measure would be c_n = cos θ, which is given by

    c_n = cos θ = (g · x)/(|g||x|)                                    (2.43)

We can readily verify that this measure is independent of the lengths of g and x. The larger the cos θ, the larger is the similarity. If the two vectors are aligned, the similarity is maximum (c_n = 1). Two vectors aligned in opposite directions have maximum dissimilarity (c_n = -1). If the two vectors are orthogonal, the similarity is zero.

We use the same argument in defining a similarity index (the correlation coefficient) for signals. For signals, the appropriate similarity index c_n analogous to Eq. (2.43) is given by

    c_n = (1/√(E_g E_x)) ∫_{-∞}^{∞} g(t) x(t) dt                      (2.45)

We shall consider the signals over the entire time interval from -∞ to ∞. Observe that multiplying either g(t) or x(t) by any constant has no effect on this index; it is independent of the sizes (energies) of g(t) and x(t). Using Schwarz's inequality (proved in Appendix B),* we can show that the magnitude of c_n is never greater than unity:

    -1 ≤ c_n ≤ 1                                                      (2.46)

We can readily verify that if g(t) = Kx(t), then c_n = 1 when K is any positive constant, and c_n = -1 when K is any negative constant. Also, c_n = 0 if g(t) and x(t) are orthogonal. Thus, the maximum similarity [when g(t) = Kx(t)] is indicated by c_n = 1, and the maximum dissimilarity [when g(t) = -Kx(t)] is indicated by c_n = -1. When the two signals are orthogonal, the similarity is zero; qualitatively speaking, we may view orthogonal signals as unrelated signals. This similarity measure c_n is known as the correlation coefficient. Note that maximum dissimilarity is different from unrelatedness. Thus, we have the best friends (c_n = 1), the worst enemies (c_n = -1), and complete strangers (c_n = 0). The enemies are not strangers, but people who think like us—only in opposite ways. The strangers, on the other hand, do not care whether we exist or not. We can readily extend this discussion to complex signal comparison. We generalize the definition of c_n to include complex signals as

    c_n = (1/√(E_g E_x)) ∫_{-∞}^{∞} g(t) x*(t) dt                     (2.47)

*The Schwarz inequality states that for two real energy signals g(t) and x(t),

    [∫_{-∞}^{∞} g(t) x(t) dt]² ≤ E_g E_x

with equality if and only if x(t) = Kg(t), where K is an arbitrary constant. There is also a similar inequality for complex signals.

EXAMPLE 2.6 Find the correlation coefficient c_n between the pulse x(t) and each of the pulses g_1(t), g_2(t), ..., g_6(t) shown in Fig. 2.18.

Figure 2.18 Signals for Example 2.6: x(t) is a unit-amplitude pulse over 0 ≤ t ≤ 5; g_1(t) = x(t); g_2(t) = 0.5 over 0 ≤ t ≤ 5; g_3(t) = -x(t); g_4(t) = e^{-t/5} over 0 ≤ t ≤ 5; g_5(t) = e^{-t}; g_6(t) = sin 2πt over 0 ≤ t ≤ 5.

We shall compute c_n using Eq. (2.45) for each of the six cases. Let us first compute the energies of all the signals:

    E_x = ∫_0^5 x²(t) dt = ∫_0^5 dt = 5

In the same way we find E_g1 = 5, E_g2 = 1.25, and E_g3 = 5. To determine E_g4 and E_g5, we first determine the energy E of e^{-at} u(t) over the interval t = 0 to T:

    E = ∫_0^T e^{-2at} dt = (1/2a)(1 - e^{-2aT})

For g_4(t), a = 1/5 and T = 5, so that E_g4 = 2.1617. For g_5(t), a = 1 and T = ∞, so that E_g5 = 0.5. The energy of g_6(t) is

    E_g6 = ∫_0^5 sin² 2πt dt = 2.5

Using Eq. (2.45), the correlation coefficients for the six cases are found as:

    (1)  c_n = (1/√(5 × 5)) ∫_0^5 dt = 1
    (2)  c_n = (1/√(1.25 × 5)) ∫_0^5 0.5 dt = 1
    (3)  c_n = (1/√(5 × 5)) ∫_0^5 (-1) dt = -1
    (4)  c_n = (1/√(2.1617 × 5)) ∫_0^5 e^{-t/5} dt = 0.961
    (5)  c_n = (1/√(0.5 × 5)) ∫_0^5 e^{-t} dt = 0.628
    (6)  c_n = (1/√(2.5 × 5)) ∫_0^5 sin 2πt dt = 0
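The six correlation coefficients of Example 2.6 can be reproduced numerically from Eq. (2.45). The sketch below is ours (the midpoint-rule integrator and the truncation of g_5's infinite tail at t = 30, where it is negligible, are implementation choices):

```python
import math

def cn(g, x, t_lo=0.0, t_hi=30.0, steps=30000):
    """Correlation coefficient of Eq. (2.45), via midpoint Riemann sums."""
    dt = (t_hi - t_lo) / steps
    ts = [t_lo + (k + 0.5) * dt for k in range(steps)]
    Eg = sum(g(t) ** 2 for t in ts) * dt
    Ex = sum(x(t) ** 2 for t in ts) * dt
    gx = sum(g(t) * x(t) for t in ts) * dt
    return gx / math.sqrt(Eg * Ex)

x  = lambda t: 1.0 if 0 <= t <= 5 else 0.0                      # unit pulse
g4 = lambda t: math.exp(-t / 5) if 0 <= t <= 5 else 0.0
g5 = lambda t: math.exp(-t)                                     # for t >= 0
g6 = lambda t: math.sin(2 * math.pi * t) if 0 <= t <= 5 else 0.0
```

Evaluating `cn(g4, x)`, `cn(g5, x)`, and `cn(g6, x)` should reproduce the values 0.961, 0.628, and 0 found analytically above.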

In this case, x(t) = sin t, and

    E_x = ∫_0^{2π} sin² t dt = π

From Eq. (2.31), we find

    c = (1/π) ∫_0^{2π} g(t) sin t dt = (1/π) [∫_0^π sin t dt + ∫_π^{2π} (-sin t) dt] = 4/π    (2.39)

Therefore

    g(t) ≈ (4/π) sin t                                                (2.40)

represents the best approximation of g(t) by the function sin t, which minimizes the error energy. This sinusoidal component of g(t) is shown shaded in Fig. 2.17. In vector terminology, we say that the square function g(t) shown in Fig. 2.17 has a component of signal sin t, and that the magnitude of this component is 4/π.

2.5.3 Orthogonality in Complex Signals
So far we have restricted ourselves to real functions of t. To generalize the results to complex functions of t, consider again the problem of approximating a function g(t) by a function x(t) over an interval (t_1 ≤ t ≤ t_2):

    g(t) ≈ cx(t)                                                      (2.41)

where g(t) and x(t) now can be complex functions of t. Recall that the energy E_x of the complex signal x(t) over an interval [t_1, t_2] is

    E_x = ∫_{t_1}^{t_2} |x(t)|² dt

In this case, both the coefficient c and the error

    e(t) = g(t) - cx(t)                                               (2.42)

are complex (in general). For the best approximation, we choose c so that E_e, the energy of the error signal, is minimum. Now,

    E_e = ∫_{t_1}^{t_2} |g(t) - cx(t)|² dt

Recall also that

    |u + v|² = (u + v)(u* + v*) = |u|² + |v|² + u*v + uv*

Using this result, we can, after some manipulation, express the integral E_e as

    E_e = ∫_{t_1}^{t_2} |g(t)|² dt - (1/E_x) |∫_{t_1}^{t_2} g(t) x*(t) dt|²
          + |c √E_x - (1/√E_x) ∫_{t_1}^{t_2} g(t) x*(t) dt|²

Since the first two terms on the right-hand side are independent of c, it is clear that E_e is minimized by choosing c such that the third term is zero. This yields

    c = (1/E_x) ∫_{t_1}^{t_2} g(t) x*(t) dt

In light of this result, we need to redefine orthogonality for the complex case as follows: two complex functions x_1(t) and x_2(t) are orthogonal over an interval (t_1 ≤ t ≤ t_2) if

    ∫_{t_1}^{t_2} x_1(t) x_2*(t) dt = 0    or    ∫_{t_1}^{t_2} x_1*(t) x_2(t) dt = 0

Either equality suffices. This is a general definition of orthogonality, which reduces to Eq. (2.32) when the functions are real.

Energy of the Sum of Orthogonal Signals
We know that in vector space the square of the length of the sum of two orthogonal vectors is equal to the sum of the squares of the lengths of the two vectors: if vectors x and y are orthogonal, and if z = x + y, then |z|² = |x|² + |y|². We have a similar result for signals: the energy of the sum of two orthogonal signals is equal to the sum of the energies of the two signals. Thus, if signals x(t) and y(t) are orthogonal over an interval [t_1, t_2], and if z(t) = x(t) + y(t), then

    E_z = E_x + E_y

We now prove this result for complex signals, of which real signals are a special case. From the identity for |u + v|² above,

    ∫_{t_1}^{t_2} |x(t) + y(t)|² dt = ∫_{t_1}^{t_2} |x(t)|² dt + ∫_{t_1}^{t_2} |y(t)|² dt
                                      + ∫_{t_1}^{t_2} x(t) y*(t) dt + ∫_{t_1}^{t_2} x*(t) y(t) dt
                                    = ∫_{t_1}^{t_2} |x(t)|² dt + ∫_{t_1}^{t_2} |y(t)|² dt

The last result follows from the fact that, because of orthogonality, the two integrals of the cross products x(t)y*(t) and x*(t)y(t) are zero. This result can be extended to the sum of any number of mutually orthogonal signals.

2.6 SIGNAL COMPARISON: CORRELATION
Section 2.5 has prepared the foundation for signal comparison. Here again, we can benefit by considering the concept of vector comparison. Two vectors g and x are similar if g has a large component along x. In other words, if c in Eq. (2.26) is large, the vectors g and x are similar. We could consider c a quantitative measure of the similarity between g and x. Such a measure, however, would be defective: the amount of similarity between g and x should be independent of the lengths of g and x. If we double the length of g, for example, the amount of similarity between g and x should not change.
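The energy relation E_z = E_x + E_y is easy to verify numerically for a concrete orthogonal pair. A minimal sketch (ours; sin t and cos t are orthogonal over a full period, and the midpoint-rule step count is arbitrary):

```python
import math

def energy(f, t1, t2, steps=20000):
    """Midpoint-rule energy of f over [t1, t2]."""
    dt = (t2 - t1) / steps
    return sum(f(t1 + (k + 0.5) * dt) ** 2 for k in range(steps)) * dt

x = lambda t: math.sin(t)
y = lambda t: math.cos(t)
z = lambda t: x(t) + y(t)

t1, t2 = 0.0, 2 * math.pi
Ex = energy(x, t1, t2)
Ey = energy(y, t1, t2)
Ez = energy(z, t1, t2)
```

Here `Ez` should equal `Ex + Ey` (both π + π = 2π) because the cross term ∫ x(t) y(t) dt vanishes over the full period; replacing cos t by a signal not orthogonal to sin t breaks the equality.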

2.5.2 Component of a Signal

The concepts of vector component and orthogonality can be extended to signals. Consider the problem of approximating a real signal g(t) in terms of another real signal x(t) over an interval [t1, t2]:

    g(t) ≈ c x(t)        t1 ≤ t ≤ t2        (2.28)

The error e(t) in this approximation is

    e(t) = g(t) - c x(t) for t1 ≤ t ≤ t2, and 0 otherwise        (2.29)

Taking our clue from vectors, we now select some criterion for the "best approximation." We know that the signal energy is one possible measure of a signal's size. For the best approximation, we need to minimize the error signal, that is, minimize its size, which is its energy Ee over the interval [t1, t2]:

    Ee = ∫_{t1}^{t2} e²(t) dt = ∫_{t1}^{t2} [g(t) - c x(t)]² dt

Note that the right-hand side is a definite integral with t as the dummy variable. Hence, Ee is a function of the parameter c (not t), and Ee is minimum for some choice of c. To minimize Ee, a necessary condition is

    dEe/dc = 0

that is,

    d/dc [ ∫_{t1}^{t2} [g(t) - c x(t)]² dt ] = 0

Expanding the squared term inside the integral, we obtain

    d/dc [ ∫_{t1}^{t2} g²(t) dt ] - d/dc [ 2c ∫_{t1}^{t2} g(t)x(t) dt ] + d/dc [ c² ∫_{t1}^{t2} x²(t) dt ] = 0

from which we obtain

    -2 ∫_{t1}^{t2} g(t)x(t) dt + 2c ∫_{t1}^{t2} x²(t) dt = 0

and

    c = [ ∫_{t1}^{t2} g(t)x(t) dt ] / [ ∫_{t1}^{t2} x²(t) dt ] = (1/Ex) ∫_{t1}^{t2} g(t)x(t) dt        (2.30)

We observe a remarkable similarity between the behavior of vectors and signals, as indicated by Eqs. (2.26) and (2.30). It is evident from these two parallel expressions that the area under the product of two signals corresponds to the inner (scalar or dot) product of two vectors. In fact, the area under the product of g(t) and x(t) is called the inner product of g(t) and x(t), and is denoted by (g, x). The energy of a signal is the inner product of the signal with itself, and corresponds to the vector length squared (which is the inner product of the vector with itself).

To summarize our discussion, if a signal g(t) is approximated by another signal x(t) as

    g(t) ≈ c x(t)

then the optimum value of c that minimizes the energy of the error signal in this approximation is given by Eq. (2.30). Keeping an eye on the vector analogy, we say that a signal g(t) contains a component c x(t), and that c x(t) is the projection of g(t) on x(t). Note that in vector terminology, c x(t) is the projection of g(t) on x(t). Continuing with the analogy, we say that if the component of a signal g(t) of the form x(t) is zero (that is, c = 0), then g(t) has a zero component along x(t), and the signals g(t) and x(t) are orthogonal over the interval [t1, t2]. In other words, we define the real signals g(t) and x(t) to be orthogonal* over the interval [t1, t2] if

    ∫_{t1}^{t2} g(t)x(t) dt = 0        (2.31)

* For complex signals the definition is modified, as discussed in Sec. 2.5.3.

EXAMPLE 2.5

For the square signal g(t) shown in Fig. 2.17, find the component in g(t) of the form sin t. In other words, approximate g(t) in terms of sin t:

    g(t) ≈ c sin t        0 ≤ t ≤ 2π

so that the energy of the error signal is minimum.

In this case, x(t) = sin t, and

    Ex = ∫_{0}^{2π} sin² t dt = π

Figure 2.17  Approximation of a square signal in terms of a single sinusoid.
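The orthogonality condition of Eq. (2.31) is easy to test numerically. A small sketch (plain Python; the midpoint-rule helper and the choice of sin t and sin 2t are my own, not from the text) confirms that sin t and sin 2t are orthogonal over one period, while the inner product of sin t with itself gives its energy π:

```python
import math

def integral(f, t1, t2, n=200000):
    # Midpoint-rule numerical integration
    dt = (t2 - t1) / n
    return sum(f(t1 + (k + 0.5) * dt) for k in range(n)) * dt

# Cross product integrates to zero over [0, 2*pi]: sin t and sin 2t are orthogonal there.
cross = integral(lambda t: math.sin(t) * math.sin(2 * t), 0.0, 2 * math.pi)

# Inner product of sin t with itself: its energy over one period (= pi).
self_product = integral(lambda t: math.sin(t) ** 2, 0.0, 2 * math.pi)

print(cross, self_product)
```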

Observe that the area from -∞ to t under the limiting form of δ(t) is zero if t < 0 and unity if t ≥ 0, so that

    ∫_{-∞}^{t} δ(τ) dτ = u(t)        (2.20a)

From this result it follows that

    du/dt = δ(t)        (2.20b)

Figure 2.14  (a) Unit step function u(t). (b) Causal exponential e^{-at} u(t).

2.5 SIGNALS AND VECTORS

There is a perfect analogy between signals and vectors. The analogy is so strong that the term "analogy" understates the reality: signals are not just like vectors; signals are vectors. A signal can be represented as a sum of its components in a variety of ways, just as a vector can be represented as a sum of its components in a variety of ways, depending on the choice of coordinate system. Let us begin with some basic vector concepts and then apply those concepts to signals.

2.5.1 Component of a Vector

A vector is specified by its magnitude and its direction. We shall denote all vectors by boldface type. For example, x is a certain vector with magnitude or length |x|. Consider two vectors g and x, as shown in Fig. 2.15. Let the component of g along x be c x. Geometrically, the component of g along x is the projection of g on x, and is obtained by drawing a perpendicular from the tip of g on the vector x, as shown in Fig. 2.15. What is the mathematical significance of a component of a vector along another vector? As seen from Fig. 2.15, the vector g can be expressed in terms of the vector x as

    g = c x + e        (2.21)

However, this is not the only way to express g in terms of x. Figure 2.16 shows two of the infinite other possibilities. In each of these three representations (Figs. 2.15 and 2.16), g is represented in terms of x plus another vector, called the error vector. If we approximate g by c x,

    g ≈ c x        (2.22)

the error in this approximation is the vector e = g - c x. Similarly, the errors in the approximations of Figs. 2.16a and b are e1 and e2. What is unique about the approximation in Fig. 2.15 is that its error vector is the smallest. We can now define mathematically the component of a vector g along a vector x to be c x, where c is chosen to minimize the length of the error vector e = g - c x.

For convenience we define the dot (inner or scalar) product of two vectors g and x as

    g · x = |g||x| cos θ        (2.23)

where θ is the angle between the vectors g and x. Using this definition, we can express |x|, the length of a vector x, as

    |x|² = x · x        (2.24)

Now, the length of the component of g along x is |g| cos θ, but it is also c|x|. Therefore

    c|x| = |g| cos θ

Multiplying both sides by |x| yields

    c|x|² = |g||x| cos θ = g · x        (2.25)

and

    c = (g · x)/(x · x) = (1/|x|²) g · x        (2.26)

Figure 2.15  Component (projection) of a vector along another vector.
Figure 2.16  Approximation of a vector in terms of another vector.
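Eq. (2.26) can be illustrated with ordinary 2-D vectors. A minimal sketch (plain Python; the vectors g and x are made-up example values, not taken from the figures): the component of g along x is c x with c = (g · x)/|x|², and the resulting error e = g - c x is perpendicular to x.

```python
# Component of g along x for 2-D vectors: c = (g.x)/(x.x),
# and the error e = g - c*x is perpendicular to x (e.x = 0).
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

g = (3.0, 4.0)
x = (1.0, 0.0)

c = dot(g, x) / dot(x, x)                      # Eq. (2.26)
e = tuple(gi - c * xi for gi, xi in zip(g, x)) # error vector

print(c, e, dot(e, x))   # 3.0 (0.0, 4.0) 0.0
```

Any other multiple of x leaves a longer error vector, which is the geometric content of Fig. 2.15.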

If g(t) = e^{t/2}, then g(-t) = e^{-t/2}, as shown in Fig. 2.12b. The instants -1 and -5 in g(t) are mapped into the instants 1 and 5 in g(-t).

Figure 2.12  Example of time inversion.

2.4 UNIT IMPULSE FUNCTION

The unit impulse function δ(t) is one of the most important functions in the study of signals and systems. We define a unit impulse as a function for which the area under its product with a function φ(t) is equal to the value of the function φ(t) at the instant where the impulse is located. This function was first defined by P. A. M. Dirac as

    ∫_{-∞}^{∞} φ(t) δ(t) dt = φ(0)        (2.17)

provided φ(t) is continuous at t = 0. The impulse function is zero everywhere except at t = 0, where it is undefined:

    δ(t) = 0        t ≠ 0        (2.18a)

    ∫_{-∞}^{∞} δ(t) dt = 1        (2.18b)

We can visualize an impulse as a tall, narrow rectangular pulse of unit area, as shown in Fig. 2.13a: its width is some very small value ε, and its height is a very large value 1/ε, in the limit as ε → 0. Since the impulse exists only at t = 0, and at this only interesting part of its range it is undefined, a unit impulse is represented by the spearlike symbol in Fig. 2.13b.

Figure 2.13  Unit impulse and its approximation.

The unit impulse therefore can be regarded as a rectangular pulse with a width that has become infinitesimally small, a height that has become infinitely large, and an overall area that has been maintained at unity. The impulse function can also be approximated by other pulses, such as a triangular pulse, an exponential pulse, or a Gaussian pulse. Similarly, the impulse δ(t - T) is located at t = T.

Multiplication of a Function by an Impulse

Let us now consider what happens when we multiply the unit impulse δ(t) by a function φ(t) that is known to be continuous at t = 0. Since the impulse exists only at t = 0, and the value of φ(t) at t = 0 is φ(0), we obtain

    φ(t) δ(t) = φ(0) δ(t)        (2.19a)

provided φ(t) is continuous at t = 0. Similarly, if φ(t) is multiplied by an impulse δ(t - T) (an impulse located at t = T), then

    φ(t) δ(t - T) = φ(T) δ(t - T)        (2.19b)

provided φ(t) is continuous at t = T.

Sampling Property of the Unit Impulse Function

From Eq. (2.19a) it follows that

    ∫_{-∞}^{∞} φ(t) δ(t) dt = φ(0) ∫_{-∞}^{∞} δ(t) dt = φ(0)

provided φ(t) is continuous at t = 0. This result means that the area under the product of a function with an impulse δ(t) is equal to the value of that function at the instant where the unit impulse is located. This property is very important and useful, and is known as the sampling, or sifting, property of the unit impulse. Similarly,

    ∫_{-∞}^{∞} φ(t) δ(t - T) dt = φ(T)

Equation (2.19b) is just another form of the sampling or sifting property: the area under the product of a function with the impulse δ(t - T) is φ(T), the value of the function at the instant where the impulse is located (at t = T). In these derivations we have assumed that the function is continuous at the instant where the impulse is located.

Unit Impulse as a Generalized Function

The definition above is not rigorous: δ(t) is not even a true function in the ordinary sense. An ordinary function is specified by its values for all time t, whereas the impulse function is zero everywhere except at t = 0, and at this only interesting part of its range it is undefined. Moreover, the classical (Dirac) definition in Eq. (2.17) leads to a nonunique function. In a more rigorous approach, the impulse function is defined not as an ordinary function but as a generalized function: it is defined in terms of the effect it has on a test function φ(t). We say nothing about what the impulse function is or what it looks like. Instead, the sampling property [Eqs. (2.19)] itself defines the impulse function in the generalized function approach; recall that in the classical approach the sampling property is the consequence of the Dirac definition in Eq. (2.17).

Unit Step Function u(t)

Another familiar and useful function is the unit step function u(t), defined by (Fig. 2.14a)

    u(t) = 1 for t ≥ 0, and u(t) = 0 for t < 0

If we want a signal to start at t = 0 (so that it has a value of zero for t < 0), we only need to multiply the signal by u(t). A signal that does not start before t = 0 is called a causal signal. In other words, g(t) is a causal signal if

    g(t) = 0        t < 0

The signal e^{-at} represents an exponential that starts at t = -∞. If we want this signal to start at t = 0 (the causal form), it can be described as e^{-at} u(t) (Fig. 2.14b).
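The sampling property can be checked numerically by replacing δ(t) with the finite-width rectangular pulse described above. A sketch (plain Python; the test function cos t and the pulse widths are my own choices): as ε shrinks, the integral of φ(t) times the pulse approaches φ(0) = 1.

```python
import math

def rect_impulse(eps):
    # Rectangular pulse of width eps and height 1/eps: unit area, a stand-in for delta(t)
    return lambda t: 1.0 / eps if 0 <= t < eps else 0.0

def integral(f, t1, t2, n=200000):
    # Midpoint-rule numerical integration
    dt = (t2 - t1) / n
    return sum(f(t1 + (k + 0.5) * dt) for k in range(n)) * dt

phi = math.cos          # a test function, continuous at t = 0 with phi(0) = 1

for eps in (0.1, 0.01, 0.001):
    pulse = rect_impulse(eps)
    val = integral(lambda t: phi(t) * pulse(t), -1.0, 1.0)
    print(eps, val)     # approaches phi(0) = 1 as eps shrinks
```

The same limit is obtained with triangular or Gaussian pulses of unit area, which is why the particular pulse shape is immaterial.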

Figure 2.9c shows g(t/2), which is g(t) expanded in time by a factor of 2. In general, if g(t) is compressed in time by a factor a (a > 1), the resulting signal φ(t) is given by

    φ(t) = g(at)        (2.14)

Using a similar argument, we can show that g(t) expanded (slowed down) in time by a factor a (a > 1) is given by

    φ(t) = g(t/a)        (2.15)

In summary, to time-scale a signal by a factor a, we replace t with at. If a > 1, the scaling is compression, and if a < 1, the scaling is expansion. If g(t) were recorded on a tape and played back at twice the normal recording speed, we would obtain g(2t).

EXAMPLE 2.3

Figures 2.10a and b show the signals g(t) and z(t), respectively. Sketch (a) g(3t) and (b) z(t/2).

(a) g(3t) is g(t) compressed by a factor of 3. The values of g(t) at the instants t = 6, 12, and 24 occur in g(3t) at the instants t = 2, 4, and 8, respectively.

(b) z(t/2) is z(t) expanded (slowed down) by a factor of 2. The values of z(t) at the instants t = 1, -1, and -3 occur in z(t/2) at the instants 2, -2, and -6, respectively.

Figure 2.10  Examples of time compression and time expansion of signals. (a) g(3t) is g(t) compressed by a factor of 3. (b) z(t/2) is z(t) expanded (slowed down) by a factor of 2.

2.3.3 Time Inversion (Time Reversal)

Consider the signal g(t) in Fig. 2.11a. We can view g(t) as a rigid wire frame hinged at the vertical axis. To invert g(t), we rotate this frame 180° about the vertical axis. This time inversion, or folding [the mirror image of g(t) about the vertical axis], gives us the signal φ(t) (Fig. 2.11b). Observe that whatever happens in Fig. 2.11a at some instant t also happens in Fig. 2.11b at the instant -t, so that

    φ(-t) = g(t)    and therefore    φ(t) = g(-t)        (2.16)

Thus, to time-invert a signal we replace t with -t; the time inversion of a signal g(t) yields g(-t). Consequently, the mirror image of g(t) about the vertical axis is g(-t). Recall also that the mirror image of g(t) about the horizontal axis is -g(t). Time inversion may be considered a special case of time scaling, with a = -1 in Eq. (2.14). Note also that a signal remains anchored at t = 0 during the scaling operation (expanding or compressing), because g(at) = g(0) at t = 0; the signal at t = 0 remains unchanged.

EXAMPLE 2.4

For the signal g(t) shown in Fig. 2.12a, sketch g(-t).
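The scaling and inversion rules can be sketched directly as substitutions on t. A minimal example (plain Python; the triangular pulse is a hypothetical signal, not the g(t) of the figures):

```python
# Time operations as substitutions on t: g(2t) compresses by 2,
# g(t/2) expands by 2, and g(-t) mirrors about the vertical axis.
def g(t):
    return max(0.0, 1.0 - abs(t))      # triangular pulse supported on [-1, 1]

compressed = lambda t: g(2 * t)        # support shrinks to [-0.5, 0.5]
expanded = lambda t: g(t / 2)          # support grows to [-2, 2]
inverted = lambda t: g(-t)             # mirror image about the vertical axis

print(g(0.5), compressed(0.25), expanded(1.0), inverted(-0.5))   # all 0.5
```

Whatever value g takes at the instant t, the compressed signal takes at t/2, the expanded signal at 2t, and the inverted signal at -t, exactly as Eqs. (2.14) through (2.16) state.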

Also, because of periodic repetition, periodic signals for which the area under |g(t)|² over one period is finite are power signals; however, not all power signals are periodic.

2.2.5 Deterministic and Random Signals

A signal whose physical description is known completely, in either a mathematical form or a graphical form, is a deterministic signal. A signal that is known only in terms of a probabilistic description, such as its mean value, its mean squared value, and so on, rather than its complete mathematical or graphical description, is a random signal. Most of the noise signals encountered in practice are random signals. All message signals are random signals because, as will be shown later, a signal must have some uncertainty (randomness) about it to convey information. The treatment of random signals will be discussed in Chapter 11.

2.3 SOME USEFUL SIGNAL OPERATIONS

We discuss here three useful signal operations: shifting, scaling, and inversion. Since the independent variable in our signal description is time, these operations are discussed as time shifting, time scaling, and time inversion (or folding). However, this discussion is valid for functions having independent variables other than time (e.g., frequency or distance).

2.3.1 Time Shifting

Consider a signal g(t) (Fig. 2.8a) and the same signal delayed by T seconds (Fig. 2.8b), which we shall denote by φ(t). Whatever happens in g(t) (Fig. 2.8a) at some instant t also happens in φ(t) (Fig. 2.8b) T seconds later, at the instant t + T. Therefore

    φ(t + T) = g(t)        (2.11)

and

    φ(t) = g(t - T)        (2.12)

Therefore, to time-shift a signal by T, we replace t with t - T. Thus g(t - T) represents g(t) time-shifted by T seconds. If T is positive, the shift is to the right (delay). If T is negative, the shift is to the left (advance). Thus, g(t - 2) is g(t) delayed (right-shifted) by 2 seconds, and g(t + 2) is g(t) advanced (left-shifted) by 2 seconds.

Figure 2.8  Time shifting a signal.

2.3.2 Time Scaling

The compression or expansion of a signal in time is known as time scaling. Consider the signal g(t) of Fig. 2.9a. The signal φ(t) in Fig. 2.9b is g(t) compressed in time by a factor of 2. Therefore, whatever happens in g(t) at some instant t must also happen in φ(t) at the instant t/2, so that

    φ(t/2) = g(t)    and    φ(t) = g(2t)        (2.13)

Figure 2.9  Time scaling a signal.
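Time shifting amounts to replacing t with t - T, which can be sketched in a couple of lines. A minimal example (plain Python; the unit pulse and T = 2 are made-up, not the signals of Fig. 2.8):

```python
# Time shifting: phi(t) = g(t - T) delays g(t) by T seconds (shift right for T > 0).
def g(t):
    return 1.0 if 0 <= t < 1 else 0.0   # unit pulse on [0, 1)

T = 2.0
delayed = lambda t: g(t - T)    # occupies [2, 3): g(t) delayed by 2 seconds
advanced = lambda t: g(t + T)   # occupies [-2, -1): g(t) advanced by 2 seconds

print(g(0.5), delayed(2.5), advanced(-1.5), delayed(0.5))   # 1.0 1.0 1.0 0.0
```

Whatever g(t) does at the instant t, the delayed signal does T seconds later, at t + T, as in Eq. (2.11).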

Figure 2.5  Examples of signals: (a) analog, continuous time; (b) digital, continuous time; (c) analog, discrete time; (d) digital, discrete time.

Observe that a periodic signal shifted by an integral multiple of the period To remains unchanged. Therefore, g(t) may also be considered a periodic signal with period mTo, where m is any integer. However, by definition, the period is the smallest interval that satisfies the periodicity condition (2.7); therefore, To is the period.

The second important property of a periodic signal g(t) is that g(t) can be generated by periodic extension of any segment of g(t) of duration To (the period). Figure 2.7 shows a periodic signal g(t) of period To = 6. The shaded portion of Fig. 2.7a shows a segment of g(t) starting at t = -1 and having a duration of one period (6 seconds). This segment, when repeated forever in either direction, results in the periodic signal g(t). Figure 2.7b shows another shaded segment of g(t) of duration To, starting at t = 0. Again we see that this segment, when repeated forever on either side, results in g(t). The reader can verify that this is possible with any segment of g(t) starting at any instant, as long as the segment duration is one period. This means that we can generate g(t) from any segment of g(t) with a duration of one period by placing this segment and the reproductions thereof end to end ad infinitum on either side.

Figure 2.7  Generation of a periodic signal by periodic extension of its segment of one-period duration.

2.2.4 Energy and Power Signals

A signal with finite energy is an energy signal, and a signal with finite power is a power signal. In other words, a signal g(t) is an energy signal if

    ∫_{-∞}^{∞} |g(t)|² dt < ∞        (2.8)

Similarly, a signal is a power signal if

    0 < lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} |g(t)|² dt < ∞        (2.9)

In other words, a signal with a finite and nonzero power (mean square value) is a power signal. The signals in Figs. 2.3a and b are examples of energy and power signals, respectively. Observe that power is the time average of the energy. Since the averaging is over an infinitely large interval, a signal with finite energy has zero power, and a signal with finite power has infinite energy. Therefore, a signal cannot be both an energy and a power signal: if it is one, it cannot be the other. On the other hand, there are signals that are neither energy nor power signals; the ramp signal is such an example.

Comments

Every signal that can be generated in a lab has a finite energy. In other words, every signal observed in real life is an energy signal. A power signal, on the other hand, must necessarily have an infinite duration; otherwise its power, which is its average energy (averaged over an infinitely large interval), will not approach a (nonzero) limit. Obviously it is impossible to generate a true power signal in practice, because such a signal would have infinite duration and infinite energy.
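Periodic extension of a one-period segment can be sketched with a modulo operation. A minimal example (plain Python; To = 6 matches Fig. 2.7, but the segment shape is my own illustration):

```python
# Periodic extension: build a To-periodic g(t) from one segment defined on [0, To).
To = 6.0

def segment(t):
    return t / To            # hypothetical one-period shape, defined for 0 <= t < To

def g(t):
    return segment(t % To)   # repeat the segment end to end on both sides

print(g(1.0), g(7.0), g(-5.0))   # all equal, since g(t) = g(t + To)
```

Because Python's modulo maps negative arguments into [0, To) as well, the extension runs in both directions, matching the "ad infinitum on either side" construction above.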

We shall show in Sec. 2.5 that only under a certain condition (called the orthogonality condition) is the power (or energy) of g1(t) + g2(t) equal to the sum of the powers (or energies) of g1(t) and g2(t).

2.2 CLASSIFICATION OF SIGNALS

There are several classes of signals. Here we shall consider only the following classes, which are suitable for the scope of this book:

1. Continuous-time and discrete-time signals
2. Analog and digital signals
3. Periodic and aperiodic signals
4. Energy and power signals
5. Deterministic and probabilistic signals

2.2.1 Continuous-Time and Discrete-Time Signals

A signal that is specified for every value of time t (Fig. 2.4a) is a continuous-time signal, and a signal that is specified only at discrete values of t (Fig. 2.4b) is a discrete-time signal. Telephone and video camera outputs are continuous-time signals, whereas the quarterly gross national product (GNP), monthly sales of a corporation, and stock market daily averages are discrete-time signals.

Figure 2.4  (a) Continuous-time and (b) discrete-time signals.

2.2.2 Analog and Digital Signals

The concept of continuous time is often confused with that of analog. The two are not the same. The same is true of the concepts of discrete time and digital. The terms continuous time and discrete time qualify the nature of a signal along the time (horizontal) axis. The terms analog and digital, on the other hand, qualify the nature of the signal amplitude (vertical axis). A signal whose amplitude can take on any value in a continuous range is an analog signal. This means that an analog signal amplitude can take on an infinite number of values. A digital signal, on the other hand, is one whose amplitude can take on only a finite number of values. Signals associated with a digital computer are digital because they take on only two values (binary signals). For a signal to qualify as digital, the number of values need not be restricted to two; it can be any finite number. A digital signal whose amplitudes can take on M values is an M-ary signal, of which binary (M = 2) is a special case. It is clear that analog is not necessarily continuous time, and digital need not be discrete time. Figure 2.5c shows an example of an analog but discrete-time signal. An analog signal can be converted into a digital signal [analog-to-digital (A/D) conversion] through quantization (rounding off), as explained in Chapter 6.

2.2.3 Periodic and Aperiodic Signals

A signal g(t) is said to be periodic if for some positive constant To

    g(t) = g(t + To)        for all t        (2.7)

The smallest value of To that satisfies the periodicity condition (2.7) is the period of g(t). The signal in Fig. 2.3b is a periodic signal with period 2. A signal is aperiodic if it is not periodic; the signal in Fig. 2.3a is aperiodic.

By definition, a periodic signal g(t) remains unchanged when time-shifted by one period. This means that a periodic signal must start at t = -∞, because if it started at some finite instant, say t = 0, the time-shifted signal g(t + To) would start at t = -To, and g(t + To) would not be the same as g(t). Therefore, a periodic signal, by definition, must start at -∞ and continue forever, as shown in Fig. 2.6.

Figure 2.6  Periodic signal of period To.
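Quantization, the rounding-off step of A/D conversion mentioned above, can be sketched in a few lines. A minimal example (plain Python; the choice of M = 4 uniform levels on [0, 1] is my own, not from the text):

```python
# Quantization: an M-ary digital signal restricts each sample amplitude
# to one of M allowed levels (binary is the special case M = 2).
M = 4
levels = [k / (M - 1) for k in range(M)]          # 0, 1/3, 2/3, 1

def quantize(x):
    return min(levels, key=lambda v: abs(v - x))  # snap to the nearest allowed level

print([quantize(x) for x in (0.1, 0.4, 0.8)])
```

Before quantization the samples form an analog (possibly discrete-time) signal; after it, each sample is one of finitely many values, which is exactly what makes the signal digital.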

We can simplify the procedure for periodic signals by observing that a periodic signal repeats regularly each period (2 seconds in this case). Therefore, averaging g²(t) over an infinitely large interval is identical to averaging it over one period. Thus

    Pg = (1/2) ∫_{-1}^{1} t² dt = 1/3

Recall that the signal power is the square of its rms value; the rms value of this signal is 1/√3.

EXAMPLE 2.2

Determine the power and the rms value of:

(a) g(t) = C cos (ω0 t + θ)
(b) g(t) = C1 cos (ω1 t + θ1) + C2 cos (ω2 t + θ2)        ω1 ≠ ω2
(c) g(t) = D e^{jω0 t}

(a) This is a periodic signal with period To = 2π/ω0. The suitable measure of this signal is its power. Because it is a periodic signal, we may compute its power by averaging its energy over one period To. However, for the sake of generality, we shall solve this problem by averaging over an infinitely large time interval, using Eq. (2.3):

    Pg = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} C² cos² (ω0 t + θ) dt
       = lim_{T→∞} (C²/2T) ∫_{-T/2}^{T/2} [1 + cos (2ω0 t + 2θ)] dt
       = lim_{T→∞} (C²/2T) ∫_{-T/2}^{T/2} dt + lim_{T→∞} (C²/2T) ∫_{-T/2}^{T/2} cos (2ω0 t + 2θ) dt

The first term on the right-hand side is equal to C²/2. The second term is zero, because the integral appearing in this term represents the area under a sinusoid over a very large time interval T with T → ∞. Now, the area under any sinusoid over a large time interval is at most equal to the area under half a cycle, because of cancellations of the positive and negative areas of a sinusoid. The second term is this area multiplied by C²/2T with T → ∞; clearly, this term is zero, and

    Pg = C²/2        (2.6a)

The rms value is C/√2. Therefore, a sinusoid of amplitude C has a power C²/2 regardless of the value of its frequency ω0 (ω0 ≠ 0) and phase θ. If the signal frequency is zero (dc, or a constant signal of amplitude C), the power is C².

(b) In this case

    Pg = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} [C1 cos (ω1 t + θ1) + C2 cos (ω2 t + θ2)]² dt
       = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} C1² cos² (ω1 t + θ1) dt + lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} C2² cos² (ω2 t + θ2) dt
         + lim_{T→∞} (2C1C2/T) ∫_{-T/2}^{T/2} cos (ω1 t + θ1) cos (ω2 t + θ2) dt

The first and second integrals on the right-hand side are the powers of the two sinusoids, which are C1²/2 and C2²/2, as found in part (a). We now show that the third term on the right-hand side is zero. Using a trigonometric identity, the integrand of the third term, which is the product of the two sinusoids, is equal to the sum of two sinusoids of frequencies ω1 + ω2 and ω1 - ω2. Thus, the third term is 2C1C2/T times the sum of the areas under two sinusoids. This area is at most equal to the area of half a cycle, because of cancellations of the positive and negative areas of a sinusoid, so the third term vanishes as T → ∞. Therefore

    Pg = C1²/2 + C2²/2        (2.6b)

and the rms value is √((C1² + C2²)/2).

We can readily extend this result to a sum of any number of sinusoids with distinct frequencies. Thus, if

    g(t) = Σ_{n=1}^{∞} Cn cos (ωn t + θn)

where no two of the sinusoids have identical frequencies, then

    Pg = (1/2) Σ_{n=1}^{∞} Cn²        (2.6c)

(c) In this case the signal is complex, and we use Eq. (2.4) to compute the power:

    Pg = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} |D e^{jω0 t}|² dt

Recall that |e^{jω0 t}| = 1, so that |D e^{jω0 t}|² = |D|², and

    Pg = |D|²        (2.6d)

The rms value is |D|.

Comments: In part (b) we have shown that the power of the sum of two sinusoids is equal to the sum of the powers of the sinusoids. It may appear that the power of g1(t) + g2(t) is always Pg1 + Pg2. Be cautioned against such a generalization: it is not true in general. All we have proved here is that this is true if the two signals g1(t) and g2(t) happen to be sinusoids of distinct frequencies; it is not true even for sinusoids if the two sinusoids are of the same frequency.
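The result Pg = C²/2 of part (a) can be confirmed numerically by averaging g² over one period. A sketch (plain Python; the values of C, ω0, and θ are arbitrary made-up choices):

```python
import math

# Power of g(t) = C cos(w0 t + theta): averaging g^2 over one period gives C^2/2,
# independent of the frequency w0 and the phase theta.
C, w0, theta = 3.0, 2 * math.pi, 0.7
T0 = 2 * math.pi / w0                      # one period

n = 100000
dt = T0 / n
Pg = sum((C * math.cos(w0 * ((k + 0.5) * dt) + theta)) ** 2 for k in range(n)) * dt / T0
rms = math.sqrt(Pg)
print(Pg, rms)   # approximately C**2 / 2 = 4.5 and C / sqrt(2)
```

Changing w0 or theta leaves the computed power unchanged, which is the frequency- and phase-independence noted above.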
If the signal frequency is rem'(dc:: or a constant signal of amplitude C). and the integral iil the third term ~' 00 as T -> 00. Thus.. (26d) reader '1 Pg = liniT. the rms value of this signal is 1/. + • This is true only if @.. IT til. However.It appears that the power of gl (J) + g2 (t) is Ps. 'Z' . because this signal is periodic.T .C2 cos2 (ltJ(jt . C2.4) to compute the power. for the sake of generality.) '0=1 2.2 . . =. the can show that the power is.. Pg = . :/: 0) .2 =ID[2.".

The energy (or power) of e(t.. but to indicate the signal size. it follows that E 9 may alsobe interpreted as the energy dissipated . the measure of signal energy. the signal energy Eg could be interpreted as the energy dissipated in a unit resistor if a. ExAMPLE figure 2.3a ~ 0 as It I -+ Isits energy E. and its power Pg has units of volts squared.3.3 + 8(t) R Rg<!) if R 1 00 g2(i) E -. (2:3) or Eq. and the instantaneous power dissipated would be v(t)i(t) = g2(t)/ R.2 Computation of the actual energy dissipat~across a load. (2. -00 Therefore.for this signal. be interpreted as the energy dissipated in a normalized load of a l-ohm resistor. and The mean of an entity averaged over a large time interval approaching infinity exists if the entity is either periodic or has a statistical regularity.z(t). 4 t .1. 2. Parallel observation applies to signal power as defined in. during transmission over a channel. 2.16 INllRODUCTlON TO SIGNALS 2. respectively.5) o 2.1 . it is periodic.2) does not indicate the actual energy of the signal because the sigoaI energy depends not only on the signal. 2.1) and (2.g(t) .. If get) is a voltage signal. If such a condition is not satisfied. is Energy dissipated! = Comments 2'.voltage get). R The signal in Fig. these units will be amperes squared seconds. the average may not exist. message signals are corrupted by unwanted signals (noise).2) are not correct dimensionally. and therefore its power exists... For instance. This is because here we are using the term energy not in its conventional sense. (a) Signal with finite energy. and not the actual energy. Figure 2.The energy dissipated. -. t increases indefinitely as It I ~ 00.of the received signal is judged by the relative sizes of the desired signal and the unwanted signal ~noise). We can use Eq. and amperes squared. From Fig. The signal in Fig. = 1. were applied across this unit resistor.4) for power.3) to determine its power. If get) is a current signal. 2.1) or Eq.3) and{2. 
if we approximate a signal get) by another signal z(t). It also allows us to determine if one apPJ:'O~tion is better than the other. For this reason the concepts of conservation of energy sheuld not beapplied to. the suitable measure for this signal ~ (a) (b) fi2(t)dt=1. For instance.g given by Eg=]"" . and neither the energy nor the power exists . In this case the ratio of the message signal and the noise signal powers (SNR) isa good indication of the received signal quality. The quality . We can simplify the procedure . being the integral of the instantaneous power.Thus.3b does not ~ 0 as It I -+ 00.1 Determin~ the suitable measures of the signals in Fig. the energy dissipated in the resistor is Eg.iin a unit resistor if a current get) were passed through this unit resistor.o(2)2dt+ -1 Jo roo 4e-'dt=4-t4=8 Figure 2. It provides us with a quantitative measure of determining the closeness of the approximation. Examples of signals.4). These measures are but convenient indicators of the -signal size. (2. a camp signal get) . In communication systems. Units of Energy Power: Equations (2.R-ohm resistor. 00. If a voltage get) is applied across an . Eq.. but also on the load.2a).) is a convenient indicator of the goodness of the approximation. (2.the error in the approximation is ee. However. The signal energy as defined inEq.2b. (2. (2. It can. however. dt = _J_ -1 -00 R R (2..) =. Signal for Example 2. In-the present context the units af energy and power depend on the nature of the signal g(t). its energy Eg has units of volts squared seconds. The same observationsapply to Eqs. the current through the resistor is g(t)/ R (Fig.11Size ofa Signal 17 The measure of "energy" (or "power") is therefore indicative of the energy (or power) capability of the signal. 2. (b) Signal with finite power.
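The energy computed in Example 2.1 for the signal of Fig. 2.3a can be reproduced numerically. A sketch (plain Python), assuming, per the integrals above, that g(t) = 2 on (-1, 0) and g(t) = 2e^{-t/2} for t ≥ 0:

```python
import math

# Energy of the Example 2.1 signal: Eg = integral of g^2 = 4 + 4 = 8.
# The infinite tail is truncated at t = 40, where e^{-40} is negligible.
def g(t):
    if -1 <= t < 0:
        return 2.0
    if t >= 0:
        return 2.0 * math.exp(-t / 2)
    return 0.0

n = 400000
t1, t2 = -1.0, 40.0
dt = (t2 - t1) / n
Eg = sum(g(t1 + (k + 0.5) * dt) ** 2 for k in range(n)) * dt   # midpoint rule
print(round(Eg, 4))   # 8.0
```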

2. or it maybe an algorithm that computes an output from an input signal (software realization) .' .. time.1 . Otherwise the integral in Eq.~ . D9t only more tractable mathematically. A necessary condition for the energy to be finite is thai the signal amplitude -+ 0 as It I ~ 00 (Fig.. Thus.the radar signal (the input). mechanical. This difficulty can be ~ by definmg the SIgnal size asthe area under g2(t). When an electrical charge is distributed over a surface.. for instance.. rather than time. discuss ce.. ee . If we wish to be a little more ~'. the antiaircraft gun operator knows the past location and velocity of the target By' properly processing . V . The discussion. density. are processed by systems. the signal energy is infinite. this ~nbe a defective measure because get) could be a large signal. We call this measure the signal energy Eg. as in electrical. 2. system may be made up of physical components. s.this product over the entire length of the person. but also the duration.e.size in such a case would be the time average of the energy (if it exists). the' signal amplitude varies with time.._'. if we are to devise a aingle number Vasa measure of the size of a H. . . . cc T 2. . which is the average power Pg defined (for a real signal) by Pg We can generalize Signa) Power = T . l1(t) dt T-T/2 (2. we Should ~verage. sha1l start with explaining the terms signals and systems . . . Generally speaking. or hydraulic systems (hardware realization).rtain basic signal concepts. Ig(t)12 dt .. The s!gnal energy must be finite foritto beameaningfulmeasure of the signal size.This is not always the case. we can approximately estimate the future location of the target. a function' of space .2) i: f~ g2(t) dt (2.1 a). Indeed. Eg. but also the height. is. which may modify them or extract additional information from them: For example. Signals as .. yet its positive and negative areas co~d cance. whichris always positive.edfwther by systeJllS. 
Signals We..t j ~ize or s~gnal str~ngth? SU:h a measure must consider not only the signal amplitude. Systems Signals may be process. then a reasonable measure of the size of a person of height H 18 the-person's volume V. value of get). because it . the Dow Jones averages) . The abov~ energy measure. .. however.'. ".ODUCTION TO '.4) .1 Size of a Signal 15 ..1 SIZE OF A SIGNAL .wnan be~g. however..is the mean-squared 14 . .= A siglial. but is also more meaningful (as shown later) in the sense that it is indicative of theenergy that can be extracted from the signal. defined (for a reallsignals) as .this chapter n. Knowing the radar signal. . monthly sales of a corporation.1) will not converge. Pormstance. the signals are functions of the indepeildent'iiariable time . Signal Energy. Ai~g in this-manner. 1 ITI2 lim . iadlcating a signal of smalll size. I T12 -T12 . but also Its.. Iftheamplitndeofg(t)doesnot~ Oasltl ~ 00 (Fig.. given by_ . the term implies. which is being tracked by' a radar. we may consider the area under a signal get) its a possible measure of. we mu.'·:1 '''¥ SIGNALS I we 21NTIR. A more meaningful measure of the signal.1) There are also other possible measures of signal size. an antiaircraft gun operator may want toknow the future location of a hostile moving target. A. .lts ~t. such as the area under Ig(t)l. Examples include a telephone or a television signal. =1t Jo (H r2(h): dh .. however. which v~es w!~ the height h of the person.1b). ! R 2. If we m~e th~ sunp~g assumption that the shape of aJ person is a cylinder of radius r.l each ~ther. the signal is the charge. Obs~e that.The size of any entity is a number that indicates the largeness or strength of that entity. ill this book we deal almost exclusively with signals that are functions of . (2.g. or the· daily closing prices ofa stock marketIe.duration.(2.In all these examples.3) Uris definition for a complex signal g(t) as Pg = lim -1 T . 
the square toot of P~ is the familiar root mean square (nos) value of g(t). How can a signal that exists over a certain time' interval with varying· amplitude be measured by one number that will indicate the signal that the' signal power Pg is the time average (mean) of the signal amplitude squared. = This definition can be generalized . applies equally well Ito other independent variables.st ~deqlOt only his or her width (girth). is. a system is an entity that processes a set of signals (inputs) to yield another set of signals (outputs). The ~t of ~ and height IS a reasonable measure of the size of a person. a set of information or data. to it complex valued signal g( r) as Ig(t}12dt (2.. However.takes account of not only the 'amplitude.
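The definitions in Eqs. (2.1) and (2.3) can be checked numerically. The sketch below (helper names `energy` and `power` are ours, not the text's) approximates the energy of the Example 2.1 signal, g(t) = 2 on −1 ≤ t < 0 and 2e^(−t/2) for t ≥ 0, so that g²(t) = 4e^(−t), and the rms value of a cosine:

```python
import math

def energy(g, t0, t1, n=200_000):
    """Approximate Eg = ∫ g(t)^2 dt over [t0, t1] with a midpoint Riemann sum."""
    dt = (t1 - t0) / n
    return sum(g(t0 + (k + 0.5) * dt) ** 2 for k in range(n)) * dt

def power(g, T, n=200_000):
    """Approximate Pg = (1/T) ∫ g(t)^2 dt over [-T/2, T/2]."""
    dt = T / n
    return sum(g(-T / 2 + (k + 0.5) * dt) ** 2 for k in range(n)) * dt / T

# Signal of Example 2.1(a): g(t) = 2 for -1 <= t < 0, and 2*exp(-t/2) for t >= 0.
g = lambda t: 2.0 if t < 0 else 2.0 * math.exp(-t / 2)

# Truncate the infinite upper limit; the tail of 4*exp(-t) beyond t = 40 is negligible.
Eg = energy(g, -1.0, 40.0)
print(round(Eg, 3))  # ≈ 8.0, matching 4 + 4 = 8

# A power signal, e.g. g(t) = cos(t): Pg = 1/2, so rms = 1/sqrt(2) ≈ 0.707.
Pg = power(math.cos, 2 * math.pi * 1000)  # average over many whole periods
print(round(math.sqrt(Pg), 3))
```

The cosine result illustrates the rms remark above: the square root of the computed Pg comes out near 1/√2, the textbook rms value of a unit-amplitude sinusoid.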

Simultaneous Transmission of Several Signals

Consider the case of several radio stations broadcasting audio baseband signals directly, without any modification. They would interfere with each other because the spectra of all the signals occupy more or less the same bandwidth. Thus, it would be possible to broadcast from only one radio or television station at a time. This is wasteful, because the channel bandwidth may be much larger than that of the signal. One way to solve this problem is to use modulation. We can use various audio signals to modulate different carrier frequencies, thus translating each signal to a different frequency range. If the various carriers are chosen sufficiently far apart in frequency, the spectra of the modulated signals will not overlap, and thus will not interfere with each other. At the receiver, one can use a tunable bandpass filter to select the desired station or signal. This method of transmitting several signals simultaneously is known as frequency-division multiplexing (FDM). Here the bandwidth of the channel is shared by various signals without any overlapping.

Another method of multiplexing several signals is known as time-division multiplexing (TDM). This method is suitable when a signal is in the form of a pulse train (as in PCM). The pulses are made narrower, and the spaces that are left between pulses are used for pulses from other signals. Thus, in effect, the transmission time is shared by a number of signals by interleaving the pulse trains of the various signals in a specified order. At the receiver, the pulse trains corresponding to the various signals are separated.

Effecting the Exchange of SNR with Bandwidth

We have shown earlier that it is possible to exchange SNR with the bandwidth of transmission. FM or PM can effect such an exchange. The amount of modulation (to be defined later) used controls the exchange of SNR and the transmission bandwidth.

RANDOMNESS, REDUNDANCY, AND CODING

Randomness plays an important role in communication. As noted earlier, one of the limiting factors in the rate of communication is noise, which is a random signal. Randomness is also closely associated with information; indeed, randomness is the essence of communication. Randomness means unpredictability, or uncertainty, of the outcome. If a source had no unpredictability, its output would be known beforehand and would convey no information. If a person winks, it conveys some information in a given context. But if a person were to wink continuously with the regularity of a clock, the winking would be predictable and would convey no information. The unpredictability of the winking is what gives the information to the signal. Thus, information is associated with uncertainty, and uncertainty is measured by probability: probability is the measure of certainty, and information is associated with probability.

The information of a message, from the engineering point of view, is defined as a quantity proportional to the minimum time needed to transmit it. Consider the Morse code, for example. In this code, various combinations of marks and spaces (code words) are assigned to each letter. In order to minimize the transmission time, shorter code words are assigned to more frequently occurring (more probable) letters (such as e, t, and a), and longer code words are assigned to rarely occurring (less probable) letters (such as x, q, and z). It will be shown in Chapter 15 that for digital signals the overall transmission time is minimized if a message (or symbol) of probability P is assigned a code word with a length proportional to log (1/P). Hence, from the engineering point of view, the information of a message with probability P is proportional to log (1/P).

Redundancy also plays an important role in communication. It is essential for reliable communication: because of redundancy, we are able to decode a message accurately despite errors in the received message. All languages are redundant. For example, English is about 50 percent redundant; that is, on the average, we may throw out half of the letters or words without destroying the message. This also means that in any English message, the speaker or the writer has free choice over half the letters or words, on the average; the remaining half is determined by the statistical structure of the language. If all the redundancy of English were removed, it would take about half the time to transmit a telegram or telephone conversation, but if an error occurred at the receiver, it would be rather difficult to make sense out of the received message. Redundancy thus helps combat the noise in the channel.

This same principle of redundancy applies in coding messages, where a deliberate redundancy is used to combat noise. For example, in order to transmit samples with L = 16 quantizing levels, we may use a group of four binary pulses (see the accompanying figure). In this coding scheme, no redundancy exists: if an error occurs in the reception of even one of the pulses, the receiver will produce a wrong value. Here we may use redundancy to eliminate the effect of possible errors caused by channel noise or imperfections. Thus, if we add to each code word one more pulse of such polarity as to make the number of positive pulses even, we have a code that can detect a single error in any place. For example, to the code word 0001 we add a fifth pulse, of positive polarity, to make a new code word, 00011. Now the number of positive pulses is 2 (even). If a single error occurs in any position, this parity will be violated. The receiver knows that an error has been made and can request retransmission of the message. This is a very simple coding scheme: it can only detect an error, it cannot locate the error, and it cannot detect an even number of errors. By introducing more redundancy, it is possible not only to detect but also to correct errors. For example, for L = 16, it can be shown that properly adding three pulses will not only detect but also correct a single error occurring at any location. This subject of error-correcting codes will be discussed in Chapter 16.
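The single-error-detecting scheme just described is easy to simulate. A minimal sketch (function names `encode` and `check` are ours, not the text's): each 4-bit code word for L = 16 levels gets a fifth bit chosen so that the count of 1s ("positive pulses") is even, and the receiver flags any word whose parity comes out odd.

```python
def encode(word4):
    """Append an even-parity bit so the total count of 1s is even (0001 -> 00011)."""
    parity = sum(word4) % 2
    return word4 + [parity]

def check(word5):
    """True if parity is intact; any single flipped bit makes this False."""
    return sum(word5) % 2 == 0

codeword = encode([0, 0, 0, 1])
print(codeword)  # [0, 0, 0, 1, 1] -- two 1s, even parity

corrupted = codeword[:]
corrupted[2] ^= 1  # channel noise flips one pulse
print(check(codeword), check(corrupted))  # True False -- error detected

# Limitation noted in the text: an even number of errors goes undetected.
double = codeword[:]
double[0] ^= 1
double[1] ^= 1
print(check(double))  # True -- two errors cancel in the parity sum
```

Note that `check` reports only that some error occurred, not where; locating and correcting the error requires the extra redundancy (e.g., three added pulses for L = 16) mentioned above.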

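The claim that assigning a symbol of probability P a code word of length proportional to log (1/P) minimizes the average transmission time can be illustrated numerically. In this sketch the four-symbol alphabet and its probabilities are invented for illustration only; the chosen lengths {1, 2, 3, 3} satisfy the Kraft inequality, so a real prefix code with these lengths exists (e.g., 0, 10, 110, 111).

```python
import math

# Hypothetical four-symbol source; probabilities are illustrative only.
probs = {"e": 0.5, "t": 0.25, "x": 0.125, "q": 0.125}

# Fixed-length code: 2 bits per symbol regardless of probability.
fixed_len = {s: 2 for s in probs}

# Morse-style code: length proportional to log2(1/P) -- e:1, t:2, x:3, q:3.
var_len = {s: int(math.log2(1 / p)) for s, p in probs.items()}

# Average code length = expected transmission time per symbol.
avg = lambda lens: sum(probs[s] * lens[s] for s in probs)
print(avg(fixed_len))  # 2.0 bits/symbol
print(avg(var_len))    # 1.75 bits/symbol -- shorter on average
```

Frequent symbols get short words and rare symbols get long ones, so the probability-matched code beats the fixed-length code on average, exactly the economy the Morse code exploits.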