# Notes for Signals and Systems

Version 1.0

Wilson J. Rugh

These notes were developed for use in 520.214, Signals and Systems, Department of Electrical and Computer Engineering, Johns Hopkins University, over the period 2000-2005. As indicated by the Table of Contents, the notes cover traditional, introductory concepts in the time-domain and frequency-domain analysis of signals and systems. Not as complete or polished as a book, though perhaps subject to further development, these notes are offered on an as-is, use-at-your-own-risk basis.

Prerequisites for the material are the arithmetic of complex numbers, differential and integral calculus, and a course in electrical circuits. (Circuits are used as examples in the material, and the last section treats circuits by Laplace transform.) Concurrent study of multivariable calculus is helpful, for on occasion a double integral or partial derivative appears. A course in differential equations is not required, though some very simple differential equations appear in the material.

The material includes links to demonstrations of various concepts. These and other demonstrations can be found at http://www.jhu.edu/~signals/ . Email comments to rugh@jhu.edu are welcome.

Copyright 2000-2005, Johns Hopkins University and Wilson J. Rugh. All rights reserved. Use of this material is permitted for personal or non-profit educational purposes only. Use of this material for business or commercial purposes is prohibited.


0. Introduction
   0.1. Introductory Comments
   0.2. Background in Complex Arithmetic
   0.3. Analysis Background
   Exercises
1. Signals
   1.1. Mathematical Definitions of Signals
   1.2. Elementary Operations on Signals
   1.3. Elementary Operations on the Independent Variable
   1.4. Energy and Power Classifications
   1.5. Symmetry-Based Classifications of Signals
   1.6. Additional Classifications of Signals
   1.7. Discrete-Time Signals: Definitions, Classifications, and Operations
   Exercises
2. Continuous-Time Signal Classes
   2.1. Continuous-Time Exponential Signals
   2.2. Continuous-Time Singularity Signals
   2.3. Generalized Calculus
   Exercises
3. Discrete-Time Signal Classes
   3.1. Discrete-Time Exponential Signals
   3.2. Discrete-Time Singularity Signals
   Exercises
4. Systems
   4.1. Introduction to Systems
   4.2. System Properties
   4.3. Interconnections of Systems
   Exercises
5. Discrete-Time LTI Systems
   5.1. DT LTI Systems and Convolution
   5.2. Properties of Convolution - Interconnections of DT LTI Systems
   5.3. DT LTI System Properties
   5.4. Response to Singularity Signals
   5.5. Response to Exponentials (Eigenfunction Properties)
   5.6. DT LTI Systems Described by Linear Difference Equations
   Exercises
6. Continuous-Time LTI Systems
   6.1. CT LTI Systems and Convolution


   6.2. Properties of Convolution - Interconnections of CT LTI Systems
   6.3. CT LTI System Properties
   6.4. Response to Singularity Signals
   6.5. Response to Exponentials (Eigenfunction Properties)
   6.6. CT LTI Systems Described by Linear Differential Equations
   Exercises
7. Introduction to Signal Representation
   7.1. Introduction to CT Signal Representation
   7.2. Orthogonality and Minimum ISE Representation
   7.3. Complex Basis Signals
   7.4. DT Signal Representation
   Exercises
8. Periodic CT Signal Representation (Fourier Series)
   8.1. CT Fourier Series
   8.2. Real Forms, Spectra, and Convergence
   8.3. Operations on Signals
   8.4. CT LTI Frequency Response and Filtering
   Exercises
9. Periodic DT Signal Representation (Fourier Series)
   9.1. DT Fourier Series
   9.2. Real Forms, Spectra, and Convergence
   9.3. Operations on Signals
   9.4. DT LTI Frequency Response and Filtering
   Exercises
10. Fourier Transform Representation for CT Signals
   10.1. Introduction to CT Fourier Transform
   10.2. Fourier Transform for Periodic Signals
   10.3. Properties of Fourier Transform
   10.4. Convolution Property and LTI Frequency Response
   10.5. Additional Fourier Transform Properties
   10.6. Inverse Fourier Transform
   10.7. Fourier Transform and LTI Systems Described by Differential Equations
   10.8. Fourier Transform and Interconnections of LTI Systems
   Exercises
11. Unilateral Laplace Transform
   11.1. Introduction
   11.2. Properties of the Laplace Transform
   11.3. Inverse Transform
   11.4. Systems Described by Differential Equations
   11.5. Introduction to Laplace Transform Analysis of Systems
   Exercises
12. Application to Circuits
   12.1. Circuits with Zero Initial Conditions
   12.2. Circuits with Nonzero Initial Conditions
   Exercises


0. Introduction

0.1 Introductory Comments

What is "Signals and Systems?" Easy, but perhaps unhelpful, answers include

• the α and the ω,
• the question and the answer,
• the fever and the cure,
• calculus and complex arithmetic for fun and profit.

More seriously, signals are functions of time (continuous-time signals) or sequences in time (discrete-time signals) that presumably represent quantities of interest. Systems are operators that accept a given signal (the input signal) and produce a new signal (the output signal). Of course, this is an abstraction of the processing of a signal.

From a more general viewpoint, systems are simply functions that have domain and range that are sets of functions of time (or sequences in time). It is traditional to use a fancier term, such as operator or mapping, in place of function, to describe such a situation. However we will not be so formal with our viewpoints or terminologies. Simply remember that signals are abstractions of time-varying quantities of interest, and systems are abstractions of processes that modify these quantities to produce new time-varying quantities of interest.

These notes are about the mathematical representation of signals and systems. The most important representations we introduce involve the frequency domain, a different way of looking at signals and systems, and a complement to the time-domain viewpoint. Indeed, engineers and scientists often think of signals in terms of frequency content, and of systems in terms of their effect on the frequency content of the input signal. Some of the associated mathematical concepts and manipulations are challenging, but the mathematics leads to a new way of looking at the world!

0.2 Background in Complex Arithmetic

We assume easy familiarity with the arithmetic of complex numbers. In particular, the rectangular form of a complex number c is written

  c = a + jb

where a and b are real numbers, and is most convenient for addition and subtraction:

  c1 + c2 = a1 + jb1 + a2 + jb2 = (a1 + a2) + j(b1 + b2)

The polar form of a complex number c, written as

  c = |c| e^{j∠c}

is most convenient for multiplication and division:

  c1 c2 = |c1| e^{j∠c1} |c2| e^{j∠c2} = |c1| |c2| e^{j(∠c1 + ∠c2)}

Of course, connections between the two forms of a complex number c include

  |c| = |a + jb| = √(a² + b²) ,  ∠c = ∠(a + jb) = tan⁻¹(b/a)

and, the other way round,

  a = Re{c} = |c| cos(∠c) ,  b = Im{c} = |c| sin(∠c)

Note especially that the quadrant ambiguity of the inverse tangent must be resolved in making these computations. For example,

  ∠(1 − j) = tan⁻¹(−1/1) = −π/4  while  ∠(−1 + j) = tan⁻¹(1/(−1)) = 3π/4

It is important to be able to mentally compute the sine, cosine, and tangent of angles that are integer multiples of π/4, since many problems will be set up this way to avoid the distraction of calculators. You should also be familiar with Euler's formula,

  e^{jθ} = cos(θ) + j sin(θ)

and the complex exponential representation for trigonometric functions:

  cos(θ) = (e^{jθ} + e^{−jθ}) / 2 ,  sin(θ) = (e^{jθ} − e^{−jθ}) / (2j)

Notions of complex numbers extend to notions of complex-valued functions (of a real variable) in the obvious way. For example, we can think of a complex-valued function of time, x(t), in the rectangular form

  x(t) = Re{x(t)} + j Im{x(t)}

In a simpler notation this can be written as

  x(t) = xR(t) + j xI(t)

where xR(t) and xI(t) are real-valued functions of t. Or we can consider the polar form,

  x(t) = |x(t)| e^{j∠x(t)}

where |x(t)| and ∠x(t) are real-valued functions of t (with, of course, |x(t)| nonnegative for all t). In terms of these forms, multiplication and addition of complex functions can be carried out in the obvious way, with polar form most convenient for multiplication and rectangular form most convenient for addition. In almost all cases, signals we encounter are functions of the real variable t, while complex-valued functions of t arise as mathematical conveniences. We will not deal with functions of a complex variable until near the end of the course.

0.3 Analysis Background

We will use the notation x[n] for a real or complex-valued sequence (discrete-time signal) defined for integer values of n. This notation is intended to emphasize the similarity of our treatment of functions of a continuous variable (time) and our treatment of sequences (in time). But use of the square brackets is intended to remind us that the similarity should not be overdone! Summation notation, for example,

  Σ_{k=1}^{3} x[k] = x[1] + x[2] + x[3]

is extensively used.
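These rectangular/polar conversions are easy to spot-check numerically. The following sketch is our own illustration, not part of the original notes (the helper name `to_polar` is ours); it uses Python's `math.atan2`, which resolves the quadrant ambiguity of the inverse tangent automatically:

```python
import cmath
import math

def to_polar(c):
    # magnitude and angle of a complex number; atan2 resolves
    # the quadrant ambiguity of the inverse tangent
    return abs(c), math.atan2(c.imag, c.real)

# the two angle examples from the text
_, ang1 = to_polar(1 - 1j)     # expect -pi/4
_, ang2 = to_polar(-1 + 1j)    # expect 3*pi/4

# Euler's formula: e^{j theta} = cos(theta) + j sin(theta)
theta = math.pi / 3
euler_lhs = cmath.exp(1j * theta)
euler_rhs = complex(math.cos(theta), math.sin(theta))

# polar form multiplies magnitudes (and adds angles)
c1, c2 = 1 - 1j, -1 + 1j
prod_mag = abs(c1 * c2)        # equals |c1| * |c2|
```

A plain calculator-style `tan⁻¹(b/a)` would report the same angle for 1 − j and −1 + j; `atan2(b, a)` keeps them a half-turn apart, which is exactly the quadrant resolution discussed above.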

A typical case is

  Σ_{k=1}^{3} x[n − k]

Let j = n − k (relying on context to distinguish the new index from the imaginary unit j) to rewrite the sum as

  Σ_{j=n−3}^{n−1} x[j]

Of course, addition is commutative, and so we conclude that

  Σ_{k=1}^{3} x[k] = Σ_{k=3}^{1} x[k]

Care must be exercised in consulting other references, since some use the convention that a summation is zero if the upper limit is less than the lower limit. And of course this summation limit reversal is not to be confused with the integral limit reversal formula:

  ∫_1^3 x(t) dt = − ∫_3^1 x(t) dt

Sometimes we will encounter multiple summations, often as a result of a product of summations. For example,

  (Σ_{k=1}^{4} x[k]) (Σ_{j=0}^{5} z[j]) = Σ_{k=1}^{4} Σ_{j=0}^{5} x[k] z[j] = Σ_{j=0}^{5} Σ_{k=1}^{4} x[k] z[j]

The order of summations here is immaterial. It is important to manage summation indices to avoid collisions. For example,

  z[k] Σ_{k=1}^{3} x[k]

is not the same thing as

  Σ_{k=1}^{3} z[k] x[k]

but it is the same thing as

  Σ_{j=1}^{3} z[k] x[j]

All these observations are involved in changes of variables of summation. Look ahead to be sure to avoid index collisions by changing index names when needed. For example, write

  (Σ_{k=1}^{4} x[k]) (Σ_{k=0}^{5} z[k]) = (Σ_{k=1}^{4} x[k]) (Σ_{j=0}^{5} z[j])

before proceeding as above.

These considerations also arise, in slightly different form, when integral expressions are manipulated. For example, changing the variable of integration in the expression

  ∫_0^t x(t − τ) dτ

to σ = t − τ gives

  ∫_t^0 x(σ) (−dσ) = ∫_0^t x(σ) dσ

We encounter multiple integrals on rare occasions, usually as a result of a product of integrals, and collisions of integration variables must be avoided by renaming. For example,

  (∫_0^3 x(t) dt) (∫_{−1}^3 z(t) dt) = (∫_0^3 x(t) dt) (∫_{−1}^3 z(τ) dτ) = ∫_{−1}^3 ∫_0^3 x(t) z(τ) dt dτ

The Fundamental Theorem of Calculus arises frequently:

  (d/dt) ∫_{−∞}^t x(τ) dτ = x(t)

For finite sums, or integrals of well-behaved (e.g., continuous) functions with finite integration limits, there are no particular technical concerns about existence of the sum or integral, or about interchange of order of integration or summation. However, for infinite sums or improper integrals (over an infinite range) we should be concerned about convergence, and then about various manipulations involving change of order of operations. For summations such as

  Σ_{k=−∞}^{∞} x[k]

a rather obvious necessary condition for convergence is that |x[k]| → 0 as k → ±∞. For integrals such as

  ∫_{−∞}^{∞} x(t) dt

an obvious necessary condition for convergence is that |x(t)| → 0 as t → ±∞. Typically we will not worry about general sufficient conditions; rather we leave consideration of convergence to particular cases. We especially will ignore conditions under which the order of a double (infinite) summation, or the order of a double (improper) integral, can be interchanged. Indeed, many of the mathematical magic tricks that appear in our subject are explainable only by taking a very rigorous view of these issues. Such rigor is beyond our scope, and we will be a bit cavalier about this.

For complex-valued functions of time, operations such as differentiation and integration are carried out in the usual fashion with j viewed as a constant. It sometimes helps to think of the function in rectangular form to justify this view: for example, if x(t) = xR(t) + j xI(t), then

  ∫_{−∞}^t x(τ) dτ = ∫_{−∞}^t xR(τ) dτ + j ∫_{−∞}^t xI(τ) dτ

Similar comments apply to complex summations and sequences.
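The index manipulations above can be verified numerically. A minimal sketch (our own; the test sequences `x` and `z` are arbitrary choices):

```python
# arbitrary test sequences, indexed by integers via dictionaries
x = {i: (i + 1) ** 2 for i in range(-10, 11)}
z = {i: 3 * i - 1 for i in range(-10, 11)}

n = 4

# sum_{k=1}^{3} x[n - k]
lhs = sum(x[n - k] for k in range(1, 4))

# after the change of variable j = n - k: sum_{j=n-3}^{n-1} x[j]
rhs = sum(x[j] for j in range(n - 3, n))

# a product of sums equals the corresponding double sum
prod = sum(x[k] for k in range(1, 5)) * sum(z[j] for j in range(0, 6))
double = sum(x[k] * z[j] for k in range(1, 5) for j in range(0, 6))
```

Note that Python's `range(a, b)` runs over a, a+1, ..., b−1, so the summation limits shift by one at the upper end; this is itself a small exercise in index management.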

Except for generalized functions, to be discussed in the sequel, we will happily adjust the value of a function at isolated values of t for purposes of convenience and simplicity.

Example Consider the function

  x(t) = { 1, t = 0 ; 0, else }

Certainly the integral of x(t) between any two limits is zero, there being no area under a single point. The derivative of x(t) is zero for any t ≠ 0, but is undefined in the usual calculus sense at t = 0, there being no reasonable notion of "slope." How do we deal with this? The answer is to view x(t) as equivalent to the identically-zero function, and there is little alternative than to simplify x(t) to the zero signal.

In a similar fashion, consider

  u(t) = { 1, t > 0 ; 0, t < 0 }

which probably is familiar as the unit-step function. The derivative of u(t) is zero for all t ≠ 0, but the derivative is undefined at t = 0. What value should we assign to u(0)? Again, the answer is that we choose u(0) for convenience. For some purposes, setting u(0) = 1/2 is most suitable; for other purposes u(0) = 1 is best. But in every instance we freely choose the value of u(0) to fit the purpose at hand. More generally, fussy matters such as signals that have inconvenient values at isolated points will be handled informally by simply adjusting values to achieve convenience.

There is, however, an intuitive notion that a jump upward has infinite slope (and a jump downward has slope −∞); by comparison, the signal x(t) in the example above effectively exhibits two simultaneous jumps at t = 0. We will capture this notion using generalized functions and a notion of generalized calculus in the sequel. Pathologies that sometimes arise in the calculus, such as everywhere continuous but nowhere differentiable functions (signals), are of no interest to us! On the other hand, certain generalized notions of functions, particularly the impulse function, will be very useful for representing special types of signals and systems. Because we do not provide a careful mathematical background for generalized functions, we will take a very formulaic approach to working with them. Impulse functions aside, we typically work in the context of piecewise-continuous functions, and permit only simple, finite jumps as discontinuities.

Exercises

1. Compute the rectangular form of the complex numbers 2 e^{j5π/4} and e^{−jπ} + e^{j6π}.

2. Compute the polar form of the complex numbers e^{j(1+j)} and (1 + j) e^{−jπ/2}.

3. Evaluate, the easy way, the magnitude |(2 − j2)³| and the angle ∠(−1 − j)².

4. Using Euler's relation, e^{jθ} = cos θ + j sin θ, derive the expression

  cos θ = (1/2) e^{jθ} + (1/2) e^{−jθ}

5. If z1 and z2 are complex numbers, and a star denotes complex conjugate, express the following quantities in terms of the real and imaginary parts of z1 and z2:

  Re[z1 − z1*] ,  Im[z1 z2] ,  Re[z1 / z2]

6. What is the relationship among the three expressions below?

  ∫_{−∞}^{∞} x(σ) dσ ,  2 ∫_{−∞}^{∞} x(2σ) dσ ,  ∫_{−∞}^{∞} x(−σ) dσ

7. Simplify the three expressions below.

  (d/dt) ∫_{−∞}^{t} x(σ) dσ ,  (d/dt) ∫_{−∞}^{−t} x(σ) dσ ,  (d/dσ) ∫_0^{t} x(σ) dσ

1. Signals

1.1 Mathematical Definitions of Signals

A continuous-time signal is a quantity of interest that depends on an independent variable, where we usually think of the independent variable as time. Two examples are the voltage at a particular node in an electrical circuit and the room temperature at a particular spot, both as functions of time. A crude representation of such a signal is a sketch, versus t. A more precise, mathematical definition is the following.

A continuous-time signal is a function x(t) of the real variable t defined for −∞ < t < ∞.

On planet earth, physical quantities take on real numerical values, though it turns out that sometimes it is mathematically convenient to consider complex-valued functions of t. However, the default is real-valued x(t), and indeed the type of sketch exhibited above is valid only for real-valued signals. A sketch of a complex-valued signal x(t) requires an additional dimension or multiple sketches, for example, a sketch of the real part, Re{x(t)}, versus t and a sketch of the imaginary part, Im{x(t)}, versus t.

Remarks:
• A continuous-time signal is not necessarily a continuous function, in the sense of calculus. Discontinuities (jumps) in a signal are indicated by a vertical line, as drawn above.
• The default domain of definition is always the whole real line, a convenient abstraction that ignores various big-bang theories. We use ellipses as shown above to indicate that the signal "continues in a similar fashion," with the meaning presumably clear from context. If a signal is of interest only over a particular interval in the real line, then we usually define it to be zero outside of this interval so that the domain of definition remains the whole real line. In some cases a signal defined on a finite interval is extended to the whole real line by endlessly repeating the signal (in both directions).
• An important subclass of signals is the class of unilateral or right-sided signals that are zero for negative arguments. These are used to represent situations where there is a definite starting time, usually designated t = 0 for convenience.
• The independent variable need not be time; it could be distance, for example. But for simplicity we will always consider it to be time.

A discrete-time signal is a sequence of values of interest, where the integer index can be thought of as a time index, and the values in the sequence represent some physical quantity of interest. Because many discrete-time signals arise as equally-spaced samples of a continuous-time signal, it is often more convenient to think of the index as the "sample number." Examples are the closing Dow-Jones stock average each day and the room temperature at 6 pm each day. In these cases, the sample number would be day 0, day 1, day 2, and so on. Other conventions are possible. We use the following mathematical definition.

A discrete-time signal is a sequence x[n] defined for all integers −∞ < n < ∞.

We display x[n] graphically as a string of lollypops of appropriate height. Of course there is no concept of continuity in this setting. However, all the remarks about domains of definition extend to the discrete-time case in the obvious way. In addition, complex-valued discrete-time signals often are mathematically convenient, though the default assumption is that x[n] is a real sequence. In due course we discuss converting a signal from one domain to the other, sampling and reconstruction, also called analog-to-digital (A/D) and digital-to-analog (D/A) conversion.

1.2 Elementary Operations on Signals

Several basic operations by which new signals are formed from given signals are familiar from the algebra and calculus of functions.

• Amplitude Scale: y(t) = a x(t), where a is a real (or possibly complex) constant
• Amplitude Shift: y(t) = x(t) + b, where b is a real (or possibly complex) constant
• Addition: y(t) = x(t) + z(t)
• Multiplication: y(t) = x(t) z(t)

With a change in viewpoint, these operations can be viewed as simple examples of systems, a topic discussed at length in the sequel. In particular, if a and b are assumed real, and z(t) is assumed to be a fixed, real signal, then each operation describes a system with input signal x(t) and output signal y(t). This viewpoint often is not particularly useful for such simple situations, though sometimes such an alternate view is adopted. The description of these operations for the case of discrete-time signals is completely analogous.

1.3 Elementary Operations on the Independent Variable

Transformations of the independent variable are additional, basic operations by which new signals are formed from a given signal. Because these involve the independent variable, that is, the argument (t), the operations sometimes subtly involve our customary notation for functions. These operations can be viewed as somewhat less simple examples of systems. As is typical in calculus, we use the notation x(t) to denote both the entire signal and the value of the signal at a value of the independent variable called t. The interpretation depends on context. This is simpler than adopting a special notation, such as x(·), to describe the entire signal. Subtleties that arise from our dual view will be discussed in the particular context.

• Time Scale: Suppose y(t) = x(at), where a is a real constant. The case a = 0 is trivial, giving the constant signal y(t) = x(0) that is only slightly related to x(t). For a ≠ 0, the operation is invertible; that is, x(t) can be recovered from y(t). By sketching simple examples, it becomes clear that if a > 1, the result is a time-compressed signal, and if 0 < a < 1, the result is time dilation. If a < 0, then, in addition to compression or dilation, there is a time reversal. Notice that, for example, the `beginning time' or `ending time' of a pulse-type signal will be changed in the new time scale. The recommended approach to sketching time-scaled signals is simply to evaluate y(t) for a selection of values of t until the result becomes clear.

• Time Shift: Suppose y(t) = x(t − T), where T is a real constant. If T > 0, the shift is a right shift in time, or a time delay. If T is negative, we have a left shift, or a time advance.

• Combination Scale and Shift: Suppose y(t) = x(at − T). It is tempting to think about this as two operations in sequence: a scale followed by a shift, or a shift followed by a scale. This is dangerous in that a wrong choice leads to incorrect answers. The recommended approach is to ignore shortcuts, and figure out the result by brute-force graphical methods: substitute various values of t until y(t) becomes clear.

Example The most important scale and shift combination for the sequel is the case where a = −1 and the sign of T is changed, to write y(t) = x(T − t). We refer to this transformation as the flip and shift. It is accomplished graphically by reversing time and then shifting the reversed signal T units to the right if T > 0, or to the left if T < 0. You should verify this interpretation of the flip and shift by hand sketches of a few examples. The flip and shift operation can also be explored in the "flip and shift" applet at the demonstrations site.

1.4 Energy and Power Classifications

The total energy of a continuous-time signal x(t), where x(t) is defined for −∞ < t < ∞, is

  E∞ = ∫_{−∞}^{∞} x²(t) dt = lim_{T→∞} ∫_{−T}^{T} x²(t) dt
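The brute-force evaluation recommended in Section 1.3 for scale-and-shift combinations is easy to automate: simply tabulate y(t) = x(at − T) for a range of t. A minimal sketch (our own; the rectangular pulse x and the particular values of a and T are arbitrary test choices):

```python
def x(t):
    # a rectangular pulse: 1 for 0 <= t < 1, else 0
    return 1.0 if 0 <= t < 1 else 0.0

def scale_shift(x, a, T):
    # y(t) = x(a*t - T)
    return lambda t: x(a * t - T)

# flip and shift: a = -1 with the sign of T changed,
# so y(t) = x(2 - t) corresponds to a = -1, T = -2 below
flip_shift = scale_shift(x, -1.0, -2.0)

# tabulate y(t) on a grid; the reversed pulse occupies 1 < t <= 2
samples = {t / 4: flip_shift(t / 4) for t in range(0, 13)}
```

Tabulating a handful of values like this is exactly the "substitute various values of t" method the text recommends, and it avoids the scale-then-shift versus shift-then-scale pitfall entirely.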

In many situations, this quantity is proportional to a physical notion of energy, for example, if x(t) is the current through, or voltage across, a resistor. If a signal has finite energy, then the signal values must approach zero as t approaches positive and negative infinity.

The time-average power of a signal is

  P∞ = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} x²(t) dt

For example, the constant signal x(t) = 1 (for all t) has time-average power of unity.

With these definitions, we can place most, but not all, continuous-time signals into one of two classes:

• An energy signal is a signal with finite E∞. For example, x(t) = e^{−|t|} and, trivially, x(t) = 0, for all t, are energy signals. For an energy signal, P∞ = 0.
• A power signal is a signal with finite, nonzero P∞. An example is x(t) = 1, for all t, though more interesting examples are not obvious and require analysis.

Example Most would suspect that x(t) = sin(t) is not an energy signal, but in any case we first compute

  ∫_{−T}^{T} sin²(t) dt = ∫_{−T}^{T} (1/2 − (1/2) cos(2t)) dt = T − (1/2) sin(2T)

Letting T → ∞ confirms our suspicions: E∞ = ∞, since the limit doesn't exist. The second step of the power-signal calculation gives

  P∞ = lim_{T→∞} (1/(2T)) (T − (1/2) sin(2T)) = 1/2

and we conclude that x(t) is a power signal.

Example The unit-step function, defined by

  u(t) = { 1, t > 0 ; 0, t < 0 }

is a power signal, since

  lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} u²(t) dt = lim_{T→∞} (1/T) ∫_0^{T/2} 1 dt = lim_{T→∞} (1/T)(T/2) = 1/2

Example There are signals that belong to neither of these classes. For example, x(t) = e^t is a signal with both E∞ and P∞ infinite. A more unusual example is

  x(t) = { t^{−1/2}, t ≥ 1 ; 0, t < 1 }

This signal has infinite energy but zero average power.
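The limits in these energy and power definitions can be watched converging numerically. A rough Riemann-sum sketch (our own; the step size and truncation points are arbitrary choices) reproduces P∞ = 1/2 for sin(t) and E∞ = 1 for e^{−|t|}:

```python
import math

def avg_power(x, T, dt=1e-3):
    # (1/(2T)) * integral_{-T}^{T} x(t)^2 dt, via a left Riemann sum
    n = int(2 * T / dt)
    total = sum(x(-T + i * dt) ** 2 for i in range(n)) * dt
    return total / (2 * T)

def energy(x, T, dt=1e-3):
    # integral_{-T}^{T} x(t)^2 dt, via a left Riemann sum
    n = int(2 * T / dt)
    return sum(x(-T + i * dt) ** 2 for i in range(n)) * dt

p_sin = avg_power(math.sin, 200.0)                  # tends to 1/2
e_exp = energy(lambda t: math.exp(-abs(t)), 20.0)   # tends to 1
```

Note that E∞ of e^{−|t|} is ∫ e^{−2|t|} dt = 1, and the sin(2T) term in the power calculation is swallowed by the 1/(2T) factor, just as in the analysis above.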

The RMS (root-mean-square) value of a power signal x(t) is defined as √P∞. These energy and power definitions also can be used for complex-valued signals, in which case we replace x²(t) by |x(t)|².

1.5 Symmetry-Based Classifications of Signals

A signal x(t) is called an even signal if

  x(−t) = x(t), for all t

If x(−t) = −x(t), for all t, then x(t) is called an odd signal. For any signal x(t) we can write a decomposition as

  x(t) = xev(t) + xod(t)

where the even part of x(t) is defined as

  xev(t) = (x(t) + x(−t)) / 2

and the odd part of x(t) is

  xod(t) = (x(t) − x(−t)) / 2

The even part of a signal is an even signal, since

  xev(−t) = (x(−t) + x(t)) / 2 = xev(t)

and a similar calculation shows that the odd part of a signal is an odd signal. These concepts are most useful for real signals. For complex-valued signals, a symmetry concept that sometimes arises is conjugate symmetry, characterized by

  x(t) = x*(−t)

where the superscript star denotes complex conjugate.

1.6 Additional Classifications of Signals

• Boundedness: A signal x(t) is called bounded if there is a finite constant K such that |x(t)| ≤ K, for all t. (Here the absolute value is interpreted as magnitude if the signal is complex valued.) Otherwise a signal is called unbounded; that is, a signal is unbounded if no such K exists. For example, x(t) = sin(3t) is a bounded signal, and we can take K = 1, while x(t) = t sin(3t) is unbounded.

• Periodicity: A signal x(t) is called periodic if there is a positive constant T such that

  x(t) = x(t + T), for all t

Such a T is called a period of the signal, and sometimes we say a signal is T-periodic. Of course, if a periodic signal has period T, then it also has period 2T, 3T, and so on. The smallest value of T for which x(t) = x(t + T), for all t, is called the fundamental period of the signal, and often is denoted To. Note also that a constant signal, for example x(t) = 3, is periodic with period any T > 0, and the fundamental period is not well defined (there is no smallest positive number).
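The even/odd decomposition is easy to check numerically. A minimal sketch (our own; the helper names `even_part` and `odd_part` and the test signal are arbitrary):

```python
import math

def even_part(x):
    # x_ev(t) = (x(t) + x(-t)) / 2
    return lambda t: (x(t) + x(-t)) / 2

def odd_part(x):
    # x_od(t) = (x(t) - x(-t)) / 2
    return lambda t: (x(t) - x(-t)) / 2

x = lambda t: math.exp(-t) * math.cos(2 * t) + t   # arbitrary test signal
xev, xod = even_part(x), odd_part(x)

ts = [(-1) ** k * 0.3 * k for k in range(10)]
recon_ok = all(abs(xev(t) + xod(t) - x(t)) < 1e-12 for t in ts)
even_ok = all(abs(xev(-t) - xev(t)) < 1e-12 for t in ts)
odd_ok = all(abs(xod(-t) + xod(t)) < 1e-12 for t in ts)
```

The three checks mirror the three claims in the text: the parts sum back to x(t), the even part satisfies xev(−t) = xev(t), and the odd part satisfies xod(−t) = −xod(t).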

Examples To determine periodicity of the signal x(t) = sin(3t), we apply the periodicity condition

  sin(3(t + T)) = sin(3t), −∞ < t < ∞

Rewriting this as

  sin(3t + 3T) = sin(3t), −∞ < t < ∞

it is clear that the condition holds if and only if 3T is an integer multiple of 2π, that is, T is a positive integer multiple of 2π/3. Thus the signal is periodic, and the fundamental period is To = 2π/3. As a second example, we regard

  x(t) = u(t) + u(−t), −∞ < t < ∞

as periodic, by assuming for convenience the value u(0) = 1/2. The signal is then periodic with period any T > 0, but there is no fundamental period.

Given a literal expression for a signal, solving the equation x(t) = x(t + T), for all t, for the smallest value of T, if one exists, can be arbitrarily difficult. Sometimes the best approach is to plot out the signal and try to determine periodicity and the fundamental period by inspection. Such a conclusion is not definitive, however, since there are signals that are very close to, but not, periodic, and this cannot be discerned from a sketch.

Periodic signals are an important subclass of all signals. Physical examples include the ocean tides, an at-rest ECG, and musical tones (but not tunes). Typically we consider the period of a periodic signal to have units of seconds, and the fundamental frequency of a periodic signal is defined by

  ωo = 2π / To

with units of radians/second. We will use radian frequency throughout, though some other sources use frequency in Hertz, denoted by the symbol fo. The relation between radian frequency and Hertz is

  fo = ωo / (2π) = 1 / To

The main difference that arises between the use of the two frequency units involves the placement of 2π factors in various formulas.

Note that the average power per period of a T-periodic signal x(t) becomes

  P = (1/T) ∫_{−T/2}^{T/2} x²(t) dt

or, more generally, for any constant to,

  P = (1/T) ∫_{to}^{to+T} x²(t) dt

To see this, perform the variable change τ = t − to − T/2 in the latter integral. Thus the integration can be performed over any interval of length T, and we write

  P = (1/T) ∫_T x²(t) dt

The average power per period is the same as the time-average power P∞ of the periodic signal. Therefore the RMS value of a T-periodic signal x(t) is

  √P∞ = ( (1/T) ∫_T x²(t) dt )^{1/2}

Example Ordinary household electrical power is supplied as a 60 Hertz sinusoid with RMS value about 110 volts,

  x(t) = A cos(120π t)

and the fundamental period is To = 1/60 sec. The amplitude A is such that

  110 = ( 60 ∫_0^{1/60} A² cos²(120π t) dt )^{1/2}

from which we compute A = 110 √2 ≈ 156.

1.7 Discrete-Time Signals: Definitions, Classifications, and Operations

For discrete-time signals, we simply need to convert the various notions from the setting of functions to the setting of sequences.

• Energy and Power: The total energy of a discrete-time signal x[n] is defined by

  E∞ = Σ_{n=−∞}^{∞} x²[n] = lim_{N→∞} Σ_{n=−N}^{N} x²[n]

The time-average power is

  P∞ = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} x²[n]

and discrete-time classifications of energy signals and power signals are defined exactly as in the continuous-time case.

Examples The unit pulse signal,

  δ[n] = { 1, n = 0 ; 0, n ≠ 0 }

is an energy signal, with E∞ = 1. The unit-step signal,

  u[n] = { 1, n ≥ 0 ; 0, n < 0 }

is a power signal with time-average power

  P∞ = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} u²[n] = lim_{N→∞} (1/(2N+1)) Σ_{n=0}^{N} 1 = lim_{N→∞} (N+1)/(2N+1) = 1/2

• Periodicity: The signal x[n] is periodic if there is a positive integer N, called a period, such that

  x[n + N] = x[n]

for all integer n. The smallest period of a signal, that is, the least value of N such that the periodicity condition is satisfied, is called the fundamental period of the signal. The fundamental period is denoted No, though sometimes the subscript is dropped in particular contexts.

Example To check periodicity of the signal x[n] = sin(3n), we check if there is a positive integer N such that

  sin(3(n + N)) = sin(3n), n = 0, ±1, ±2, …

that is,

  sin(3n + 3N) = sin(3n), n = 0, ±1, ±2, …

This condition holds if and only if 3N is an integer multiple of 2π, a condition that cannot be met by integer N. Thus the signal is not periodic, unlike the continuous-time case of sin(3t).

• Elementary operations: Elementary operations, for example addition and scalar multiplication, on discrete-time signals are obvious conversions from the continuous-time case. Elementary transformations of the independent variable also are easy, though it must be remembered that only integer argument values are permitted in the discrete-time case.

• Time Scale: Suppose y[n] = x[an], where a is a positive or negative integer (so that the product, an, is an integer for all integer n). If a = −1, this is a time reversal. But for any case beyond a = ±1, be aware that loss of information in the signal occurs, unlike the continuous-time case.

Example For a = 2, compare the time-scaled signal with the original: y[n] = x[2n] retains only the even-indexed samples of x[n], and the odd-indexed samples are lost.
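The partial averages (N+1)/(2N+1) in the unit-step power computation can be watched approaching 1/2 numerically. A minimal sketch (our own):

```python
def partial_avg_power(x, N):
    # (1/(2N+1)) * sum_{n=-N}^{N} x[n]^2
    return sum(x(n) ** 2 for n in range(-N, N + 1)) / (2 * N + 1)

u = lambda n: 1 if n >= 0 else 0   # discrete-time unit step

# each value is (N+1)/(2N+1); the sequence decreases toward 1/2
trend = [partial_avg_power(u, N) for N in (10, 100, 1000)]
```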

• Time Shift: Suppose y[n] = x[ n − N ] , where N is a fixed integer. If N is positive, then this is a right shift, or delay, and if N is negative, it is a left shift or advance. • Combination Scale and Shift: Suppose y[n] = x[an − N ] , where a is a nonzero integer and N is an integer. As in the continuous-time case, the safest approach to interpreting the result is to simply plot out the signal y[n]. Example: Suppose y[n] = x[ N − n] . This is a flip and shift, and occurs sufficiently often that it is worthwhile verifying and remembering the shortcut: y[n] can be plotted by time-reversing (flipping) x[n] and then shifting the reversed signal to move the original value at n = 0 to n = N . That is, shift N samples to the right if N > 0, and |N| samples to the left if N < 0.
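The shortcut is easy to check numerically. The sketch below uses an arbitrary finite-duration test signal (not one from the notes) and compares y[n] = x[N − n] against the flip-then-shift construction:

```python
# y[n] = x[N - n]: flip x[n] to get x[-n], then shift the flipped
# signal N samples to the right, i.e. replace n by n - N,
# giving x[-(n - N)] = x[N - n].

def x(n):                                  # arbitrary test signal
    return {0: 1, 1: 2, 2: -1}.get(n, 0)

def flipped(n):                            # time reversal of x
    return x(-n)

N = 3
for n in range(-10, 11):
    assert flipped(n - N) == x(N - n)      # shortcut agrees
print("flip-and-shift verified for N =", N)
```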

Exercises 1. Given the signal shown below,

sketch the signal y(t ) =
(a) x(t ) − x(t − 1)
(b) x(−2t )
(c) x(t − 1) u(1 − t )
(d) x(2t ) + x(−3t )
(e) x(3t − 1)
2. Determine if the following signals are power signals or energy signals, and compute the total energy or time-average power, as appropriate.
(a) x(t ) = sin(2t ) u(t )


(b) x(t ) = e^(−|t|)
(c) x(t ) = t u(t )
(d) x(t ) = 5e^(−3t) u(t )
3. For an energy signal x(t ), prove that the total energy is the sum of the total energy of the even part of x(t ) and the total energy of the odd part of x(t ).
4. If a given signal x(t ) has total energy E = 5 , what is the total energy of the signal y(t ) = 2x(3t − 4) ?
5. Under what conditions on the real constant α is the continuous-time signal x(t ) = e^(αt) u(−t ) an energy signal? When your conditions are satisfied, what is the energy of the signal?
6. Sketch the even and odd parts of the signals below.
(a)

(b)

7. Suppose that for a signal x(t ) it is known that Ev{x(t )} = Od{x(t )} = 1 for t > 0 . What is x(t ) ?
8. Determine which of the following signals are periodic, and specify the fundamental period.
(a) x(t ) = e^(jπ) cos(2π t + π )
(b) x(t ) = sin²(t )
(c) x(t ) = Σ_{k=−∞}^{∞} [ u(t − 2k ) − u(t − 1 − 2k ) ]
(d) x(t ) = 3e^(−j3π t / 2)

9. Suppose that x1 (t ) and x2 ( t ) are periodic signals with respective fundamental periods T1 and T2 . Show that if there are positive integers m and n such that


T1 / T2 = m / n
(that is, the ratio of fundamental periods is a rational number), then x(t ) = x1(t ) + x2(t ) is periodic. If the condition holds, what is the fundamental period of x(t ) ?
10. Determine which of the following signals are bounded, and specify a smallest bound.
(a) x(t ) = e^(3t) u(t )
(b) x(t ) = e^(3t) u(−t )
(c) x(t ) = 4e^(−6|t|)
(d) x(t ) = −2t e^(−3t) sin(t⁵) u(t )
11. Given the signal x[n] = δ[n] − δ[n − 1] , sketch y[n] =
(a) x[4n − 1]
(b) x[n] u[1 − n]
(c) 3x[−2n + 3]
(d) x[2n] − x[1 − n]
12. Determine whether the following signals are periodic, and if so determine the fundamental period.
(a) x[n] = u[n] + u[−n]
(b) x[n] = e^(−j3πn)
(c) x[n] = (−1)^n + e^(jπn/2)
(d) x[n] = cos(πn/4)

13. Suppose x[n] is a discrete-time signal, and let y[n]=x[2n]. (a) If x[n] is periodic, is y[n] periodic? If so, what is the fundamental period of y[n] in terms of the fundamental period of x[n]? (b) If y[n] is periodic, is x[n] periodic? If so, what is the fundamental period of x[n] in terms of the fundamental period of y[n]? 14. Under what condition is the sum of two periodic discrete-time signals periodic? When the condition is satisfied, what is the fundamental period of the sum, in terms of the fundamental periods of the summands? 15. Is the signal x[n] = 3(−1) n u[n] an energy signal, power signal, or neither? 16. Is the signal x[n] = e− j 2π n + e jπ n periodic? If so, what is the fundamental period? 17. Answer the following questions about the discrete-time signal x[n] = e − j (π / 2) n . (a) Is x[n] periodic? If so, what is its fundamental period?


(b) Is x[n] an even signal? Is it an odd signal?
(c) Is x[n] an energy signal? Is it a power signal?
18. Which of the following signals are periodic? For those that are periodic, what is the fundamental period?
(a) x[n] = e^(jπn²/4)
(b) x[n] = e^(j8πn)
(c) x[n] = e^(−j7π(n − 1)/8)

Much of our discussion will focus on two broad classes of signals: the class of complex exponential signals and the class of singularity signals. Though it is far from obvious, it turns out that essentially all signals of interest can be addressed in terms of these two classes.

2.1 The Class of CT Exponential Signals

There are several ways to represent the complex-valued signal

x(t ) = c e^(at) , −∞ < t < ∞

where both c and a are complex numbers. A convenient approach is to write c in polar form, and a in rectangular form,

c = |c| e^(jφo) , a = σo + jωo

where φo = ∠c and where we have chosen notations for the rectangular form of a that are customary in the field of signals and systems. In addition, the subscript o's are intended to emphasize that the quantities are fixed real numbers. Then

x(t ) = |c| e^(jφo) e^((σo + jωo)t) = |c| e^(σo t) e^(j(ωo t + φo))

Using Euler's formula, we can write the signal in rectangular form as

x(t ) = |c| e^(σo t) cos(ωo t + φo) + j |c| e^(σo t) sin(ωo t + φo)

There are two special cases that are of most interest.

Special Case 1: Suppose both c and a are real. That is, ωo = 0 and φo is either 0 or π. Then we have the familiar exponentials

x(t ) = |c| e^(σo t) , if φo = 0 ; x(t ) = −|c| e^(σo t) , if φo = π

Or, more simply,

x(t ) = c e^(σo t)

Special Case 2: Suppose c is complex and a is purely imaginary. That is, σo = 0 . Then

x(t ) = |c| e^(j(ωo t + φo)) = |c| cos(ωo t + φo) + j |c| sin(ωo t + φo)

A signal of this form is often called a phasor. Both the real and imaginary parts of x(t ) are periodic signals, with fundamental period

To = 2π / |ωo|

Since the independent variable, t, is viewed as time, units of ωo typically are radians/second and units of To naturally are seconds.
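A quick numerical sanity check of this polar/rectangular decomposition; the particular values of c and a below are arbitrary illustrative choices:

```python
import cmath, math

c = 2 - 1j                      # arbitrary complex amplitude
a = -0.3 + 2.0j                 # sigma_o = -0.3, omega_o = 2.0
phi_o = cmath.phase(c)
sigma_o, omega_o = a.real, a.imag

for t in (-1.0, 0.0, 0.7, 3.2):
    direct = c * cmath.exp(a * t)
    rect = abs(c) * math.exp(sigma_o * t) * (
        math.cos(omega_o * t + phi_o) + 1j * math.sin(omega_o * t + phi_o))
    assert abs(direct - rect) < 1e-12
print("rectangular form agrees with c e^{at}")
```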

Also, we can check directly that given any ωo, x(t ) is periodic with period To = 2π / |ωo| . Left in exponential form,

x(t + To ) = |c| e^(j[ωo(t + To) + φo]) = |c| e^(j(ωo t + φo)) e^(±j2π) = |c| e^(j(ωo t + φo)) = x(t )

Furthermore, simply by attempting to satisfy the periodicity condition with any smaller, positive value for the period, it is clear that To is the fundamental period of the signal. Of course, if ωo = 0, then the signal is a constant, and it is not surprising that the notion of a fundamental period falls apart.

We can view a phasor signal as a vector at the origin of length |c| rotating in the complex plane with angular frequency ωo radians/second, beginning with the angle φo at t = 0 . If ωo > 0, then the rotation is counter-clockwise. If ωo < 0, then the rotation is clockwise. The applet in the link below illustrates this rotating-vector interpretation, and also displays the imaginary part of the phasor, that is, the projection on the vertical axis.

One Phasor

Sums of phasors that have different frequencies are also very important, and we first discuss this for sums of two phasors. These are best visualized using the "head-to-tail" construction of vector addition. The applet below illustrates.

Sum of Two Phasors

The question of periodicity becomes much more interesting for phasor sums. Consider

x(t ) = c1 e^(jω1 t) + c2 e^(jω2 t)

with c1, c2 ≠ 0 and ω1, ω2 ≠ 0 . The values of c1 and c2 are not essential factors in the periodicity question, but the values of ω1 and ω2 are. It is worthwhile to provide a formal statement and proof of the result, with assumptions designed to rule out trivialities and needless complexities.

Theorem  The complex-valued signal

x(t ) = c1 e^(jω1 t) + c2 e^(jω2 t)

is periodic if and only if there exists a positive frequency ω0 and integers k and l such that

ω1 = kω0 , ω2 = lω0    (2.1)

Furthermore, if ω0 is the largest frequency for which (2.1) can be satisfied, in which case it is called the fundamental frequency for x(t ), then the fundamental period of x(t ) is To = 2π / ω0 .

Proof  First we assume that positive ω0 and the integers k, l satisfy (2.1). Choosing T = 2π / ω0, we see that

x(t + T ) = c1 e^(jkω0(t + T)) + c2 e^(jlω0(t + T)) = e^(jk2π) c1 e^(jkω0 t) + e^(jl2π) c2 e^(jlω0 t) = c1 e^(jkω0 t) + c2 e^(jlω0 t) = x(t )

for all t, and x(t ) is periodic. It is easy to see that if ω0 is the largest frequency such that (2.1) is satisfied, then the fundamental period of x(t ) is To = 2π / ω0, simply by attempting to satisfy the periodicity with any smaller, positive value for the period. (Perhaps we are beginning to abuse the subscript oh's and zeros…)

Now suppose that x(t ) is periodic, and T > 0 is such that x(t + T ) = x(t ) for all t. That is,

c1 e^(jω1(t + T)) + c2 e^(jω2(t + T)) = c1 e^(jω1 t) + c2 e^(jω2 t)

for all t. This implies

(e^(jω1 T) − 1) c1 + (e^(jω2 T) − 1) c2 e^(j(ω2 − ω1)t) = 0

for all t. Picking the particular times t = 0 and t = π/(ω2 − ω1) gives the two algebraic equations

(e^(jω1 T) − 1) c1 + (e^(jω2 T) − 1) c2 = 0
(e^(jω1 T) − 1) c1 − (e^(jω2 T) − 1) c2 = 0

By adding these two equations, and also subtracting the second from the first, we obtain

e^(jω1 T) = e^(jω2 T) = 1

Therefore both frequencies must be integer multiples of frequency 2π / T. Such frequency terms are often called harmonically related.

Example  The signal x(t ) = 4e^(j2t) − 5e^(j3t) is periodic with fundamental frequency ωo = 1, and thus fundamental period 2π. The signal x(t ) = 4e^(j2t) − 5e^(jπt) is not periodic, since the frequencies 2 and π cannot be integer multiples of a fixed frequency.

The theorem generalizes to the case where x(t ) is a sum of any number of complex exponential terms: the signal is periodic if and only if there exists ω0 such that every frequency present in the sum can be written as an integer multiple of ω0. The applet below can be used to visualize sums of several harmonically related phasors, and the imaginary part exhibits the corresponding periodic, real signal.

Phasor Sums
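When the frequencies are rational, the largest ω0 in the theorem can be computed exactly as the greatest common divisor of the frequencies. The sketch below checks the first example above; the gcd-of-fractions helper is an illustrative construction, not something from the notes:

```python
from fractions import Fraction
import cmath, math

def gcd_frac(p, q):
    """gcd of two positive rationals: gcd(a/b, c/d) = gcd(ad, cb)/(bd)."""
    return Fraction(math.gcd(p.numerator * q.denominator,
                             q.numerator * p.denominator),
                    p.denominator * q.denominator)

# Frequencies of x(t) = 4 e^{j2t} - 5 e^{j3t}.
w0 = gcd_frac(Fraction(2), Fraction(3))
assert w0 == 1                    # fundamental frequency 1, so To = 2*pi

To = 2 * math.pi / float(w0)
x = lambda t: 4 * cmath.exp(2j * t) - 5 * cmath.exp(3j * t)
for t in (0.0, 0.4, 1.7):
    assert abs(x(t + To) - x(t)) < 1e-9   # numerical periodicity check
print("x(t) repeats with period 2*pi")
```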

2.2 The Class of CT Singularity Signals

The basic singularity signal is the unit impulse, δ(t ), a signal we invent in order to have the following sifting property with respect to ordinary signals x(t ):

∫_{−∞}^{∞} x(t ) δ(t ) dt = x(0)    (2.2)

Here x(t ) is any continuous-time signal that is a continuous function at t = 0, so that the value of x(t ) at t = 0 is well defined. That is, δ(t ) causes the integral to "sift out" the value of x(0). A signal with a finite jump at t = 0, as occurs in the unit step signal, would not be eligible for use in the sifting property. (However, some treatments do allow a finite jump in x(t ) at t = 0, and the sifting property is defined to give the mid-point of the jump,

∫_{−∞}^{∞} x(t ) δ(t ) dt = ( x(0+) + x(0−) ) / 2

For example, if the signal is the unit step, then the sift would yield 1/2 .)

A little thought, reviewed in detail below, shows that δ(t ) cannot be a function in the ordinary sense. This makes clear the fact that we are dealing with something outside the realm of basic calculus. We define δ(t ) by the sifting property, while insisting that in other respects δ(t ) behave in a manner consistent with the usual rules of arithmetic and calculus of ordinary functions. In the remainder of this section, we develop further properties of the unit impulse by focusing on implications of the sifting property.

• Time values
By considering x(t ) to be any signal that is continuous at t = 0 with x(0) = 0, for example the signals x(t ) = t, t², t³, …, we see the unit impulse must satisfy

δ(t ) = 0 , for t ≠ 0    (2.3)

That is, the unit impulse is zero everywhere except t = 0, and yet, as shown next, has unit area. This indicates that the impulse must be zero for nonzero arguments; obviously δ(0) cannot be zero, and indeed it must have, in some sense, infinite value. (The signal x(t ) = 1/t, which is not continuous at t = 0, would not be eligible for use in the sifting property.)

• Area
Considering the sifting property with the signal x(t ) = 1, for all t, gives

∫_{−∞}^{∞} δ(t ) dt = 1    (2.4)

Since the impulse is zero for nonzero arguments, it can be shown that there is no contribution to the integral in (2.2) for nonzero values of the integration variable. Notice also that these first two properties imply that

∫_{−a}^{a} δ(t ) dt = 1 for any a > 0 .
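The area and sifting properties can be illustrated numerically by standing in a narrow unit-area pulse for δ(t ); this anticipates the approximation sequences discussed later in this section. The pulse width, integration grid, and test signal below are all arbitrary choices for the sketch:

```python
import math

def delta_approx(t, w=1e-2):
    """Unit-area rectangular pulse of width w, a stand-in for delta(t)."""
    return 1.0 / w if -w / 2 < t < w / 2 else 0.0

def integrate(f, a=-1.0, b=1.0, steps=200000):
    """Midpoint-rule integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) * h for k in range(steps))

area = integrate(delta_approx)
sift = integrate(lambda t: math.cos(t) * math.exp(t) * delta_approx(t))
assert abs(area - 1.0) < 1e-2    # approximates (2.4)
assert abs(sift - 1.0) < 1e-2    # approximates x(0) = 1 in (2.2)
print(area, sift)
```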

• Scalar multiplication
We treat the scalar multiplication of an impulse the same as the scalar multiplication of an ordinary signal. To interpret the sifting property for aδ(t ), where a is a constant, note that the properties of integration imply

∫_{−∞}^{∞} x(t ) [aδ(t )] dt = a ∫_{−∞}^{∞} x(t ) δ(t ) dt = a x(0)

The usual terminology is that aδ(t ) is an "impulse of area a," based on choosing x(t ) = 1 in the sifting expression.

• Time shift
We treat the time shift of an impulse the same as the time shift of any other signal. To interpret the sifting property for the time-shifted unit impulse, δ(t − to ), a change of integration variable from t to τ = t − to gives

∫_{−∞}^{∞} x(t ) δ(t − to ) dt = ∫_{−∞}^{∞} x(τ + to ) δ(τ ) dτ = x(to )

• Signal Multiplication
When a unit impulse is multiplied by a signal z(t ), which is assumed to be continuous at t = 0, the sifting property gives

∫_{−∞}^{∞} x(t ) [z(t )δ(t )] dt = ∫_{−∞}^{∞} [x(t ) z(t )] δ(t ) dt = x(0) z(0)

This is the same as the result obtained when the unit impulse is multiplied by the constant z(0):

∫_{−∞}^{∞} x(t ) [z(0)δ(t )] dt = z(0) ∫_{−∞}^{∞} x(t ) δ(t ) dt = z(0) x(0)

Therefore we conclude the signal multiplication property

z(t )δ(t ) = z(0)δ(t )

This property, together with the time-shift property, gives the more general statement

z(t )δ(t − to ) = z(to )δ(t − to )

where to is any real constant and z(t ) is any ordinary signal that is a continuous function of t at t = to .

• Time scale
Since an impulse is zero for all nonzero arguments, time scaling an impulse has impact only with regard to the sifting property, where, for any nonzero constant a,

∫_{−∞}^{∞} x(t ) δ(at ) dt = (1/|a|) x(0) , a ≠ 0

To justify this expression, assume first that a > 0. Then the sifting property must obey, by the principle of consistency with the usual rules of integration, and in particular with the change of integration variable from t to τ = at,

∫_{−∞}^{∞} x(t ) δ(at ) dt = (1/a) ∫_{−∞}^{∞} x(τ/a) δ(τ ) dτ = (1/a) x(0) , a > 0

A similar calculation for a < 0, where now the change of integration variable yields an interchange of limits, gives

∫_{−∞}^{∞} x(t ) δ(at ) dt = (1/a) ∫_{∞}^{−∞} x(τ/a) δ(τ ) dτ = −(1/a) ∫_{−∞}^{∞} x(τ/a) δ(τ ) dτ = −(1/a) x(0) , a < 0

These two cases can be combined into the one expression given above. Thus the sifting property leads to the definition:

δ(at ) = (1/|a|) δ(t ) , a ≠ 0

• Symmetry
Note that the case a = −1 in time scaling gives the result that δ(−t ) acts in the sifting property exactly as δ(t ), so we regard the unit impulse as an "even function." Other interpretations are possible, for example using the calculus consistency principle to figure out how to interpret z(t )δ(at ), or δ(at − to ), but we will not go there. We only need the properties justified above.

We graphically represent an impulse by an arrow, as shown below. (If the area of the impulse is negative, sometimes the arrow is drawn pointing south.)

We could continue this investigation of properties of the impulse, but we only need the properties justified above, and two additional properties that are simply wondrous. These include an extension of the sifting property that violates the continuity condition:

• Special Property 1

∫_{−∞}^{∞} δ(τ ) δ(t − τ ) dτ = δ(t )

Note here that the integration variable is τ, and t is any real value.

Even more remarkable is an expression that relates impulses and complex exponentials:

• Special Property 2

δ(t ) = (1/2π) ∫_{−∞}^{∞} e^(jtω) dω

Note here that the integral simply does not converge in the usual sense of basic calculus, since |e^(jtω)| = 1 for any (real) values of t and ω, and the limit does not exist in any usual sense. However, to provide a bit of explanation, with little rigor, we briefly discuss one of the mathematical approaches to the subject.

To arrive at the unit impulse, consider the possibility of an infinite sequence of functions, d_n(t ), n = 1, 2, 3, …, that have the unit-area property

∫_{−∞}^{∞} d_n(t ) dt = 1 , n = 1, 2, 3, …

and also have the property that for any other function x(t ) that is continuous at t = 0,

lim_{n→∞} ∫_{−∞}^{∞} d_n(t ) x(t ) dt = x(0)

Here the limit involves a sequence of numbers defined by ordinary integrals. However we next interchange the order of the limit and the integration, without proper justification, and view δ(t ) as "some sort" of limit:

δ(t ) = lim_{n→∞} d_n(t )

This view is useful for intuition purposes, but is dangerous if pursued too far by elementary means.

Examples  Consider the rectangular-pulse signals

d_n(t ) = { n , −1/(2n) < t < 1/(2n) ; 0 , else } , n = 1, 2, 3, …

The pulses get taller and thinner as n increases, but clearly every d_n(t ) is unit area, and the mean-value theorem can be used to show

∫_{−∞}^{∞} d_n(t ) x(t ) dt = n ∫_{−1/(2n)}^{1/(2n)} x(t ) dt ≈ x(0)

with the approximation getting better as n increases. Thus we can casually view a unit impulse as the limit, as n → ∞, of these unit-area rectangles.

But it turns out that a more interesting example is to use the sinc function defined by

sinc(t ) = sin(π t ) / (π t )

and let

d_n(t ) = n sinc(nt ) , n = 1, 2, 3, …

It can be shown, by evaluating an integral that is not quite elementary, that these signals all have unit area, and that the sifting property

∫_{−∞}^{∞} d_n(t ) x(t ) dt ≈ x(0)
A similar example is to take d n (t ) to be a triangle of height n. 2. centered at the origin. to provide a bit of explanation. the limit does not exist in any usual sense. n = 1.

and provides a picture of how the impulse might arise as W increases. freely assignable following our general policy. dW (t ) = 1 ∫ cos(ω t ) dω π 0 W sin(Wt ) =1 π t = W sinc( Wt ) π π This dW (t ) can be shown to have unit area for every W > 0. 2πδ (t ) . Therefore the Special Property 2 might be expected. Integration leads to t ⎧0. t < 0 ∫ δ (τ ) dτ = ⎨1. t > 0 ⎩ −∞ which is the familiar unit-step function. where the jump occurs. The applet below shows a plot of dW (t ) as W is varied. width 1/n . as a limit of these functions. rectangular pulses as n increases. and you can get a pictorial view of how an impulse might arise from sinc’s as n increases. in much the same way as the impulse arises from height n. Another Sinc Family • Additional Singularity Signals From the unit impulse we generate additional singularity signals using a generalized form of calculus.) The “running integral” in this expression actually can be interpreted graphically in very nice way. (We leave the value of u(0). u (t ) . and again the sifting property is approximated when W is large. Family of Sincs Remark Special Property 2 can be intuitively understood in terms of our casual view of impulses as follows.is a better and better approximation as n grows without bound. Therefore we can view an impulse of area 2π. This sequence of sinc signals is displayed in the applet below for a range of n. And a variable change from τ to σ = t − τ gives the alternate expression 30 . Let dW (t ) = 1 ∫ e jω t dω 2π = 1 ∫ [cos(ω t ) + j sin(ω t )] dω 2π j = 1 ∫ cos(ω t ) dω + sin(ω t ) dω 2π 2π ∫ −W −W −W W W −W W W Using the fact that a sinusoid is an odd function of its argument. again by a non-elementary integration. that is.

u(t ) = ∫_{0}^{∞} δ(t − σ ) dσ

Analytically this can be viewed as an application of the sifting property applied to the case x(t ) = u(t ):

∫_{0}^{∞} δ(t − σ ) dσ = ∫_{−∞}^{∞} u(σ ) δ(t − σ ) dσ = ∫_{−∞}^{∞} u(t − σ ) δ(σ ) dσ = u(t )

This is not, strictly speaking, legal for t = 0, because of the discontinuity there in u(t ), but we ignore this issue. By considering the running integral of the unit-step function, we arrive at the unit ramp:

∫_{−∞}^{t} u(τ ) dτ = { 0 , t < 0 ; t , t ≥ 0 } = t u(t )

Often we write this as r(t ). Notice that the unit ramp is a continuous function of time, though it is unbounded. Continuing, we go further:

∫_{−∞}^{t} r(τ ) dτ = { 0 , t < 0 ; t²/2 , t ≥ 0 } = (t²/2) u(t )

which might be called the unit parabola, p(t ). We stop here, as further integrations yield signals little used in the sequel.

We can turn matters around, using differentiation and the fundamental theorem of calculus, to write

(d/dt) p(t ) = (d/dt) ∫_{−∞}^{t} r(τ ) dτ = r(t )

and this is a perfectly legal application of the fundamental theorem since the integrand, r(t ), is a continuous function of time. Cheating a bit on the assumptions, since the unit step is not continuous, we also write

(d/dt) r(t ) = (d/dt) ∫_{−∞}^{t} u(τ ) dτ = u(t )

That this cheating is not unreasonable follows from a plot of the unit ramp, and then a plot of the slope at each value of t. Cheating more, we go further and write

(d/dt) u(t ) = (d/dt) ∫_{−∞}^{t} δ(τ ) dτ = δ(t )

Again, a graphical interpretation makes this seem less unreasonable.

We can also consider "derivatives" of the unit impulse. The approach is again to demand consistency with other rules of calculus, and use integration by parts to interpret the "sifting property" that should be satisfied. We need go no further than the first derivative, where

∫_{−∞}^{∞} x(t ) δ′(t ) dt = x(t )δ(t ) |_{−∞}^{∞} − ∫_{−∞}^{∞} x′(t ) δ(t ) dt = −x′(0)

This assumes, of course, that x(t ) is continuously differentiable at t = 0, that is, x′(t ) is continuous at t = 0. This unit-impulse derivative is usually called the unit doublet, and denoted δ′(t ). Various properties can be deduced, just as for the unit impulse. For example, choosing x(t ) = 1, −∞ < t < ∞, the sifting property for the doublet gives

∫_{−∞}^{∞} δ′(t ) dt = 0

In other words, the doublet has zero area: a true ghost. We sketch the unit doublet as shown below.

It is also easy to verify the property

∫_{−∞}^{∞} x(t ) δ′(t − to ) dt = −x′(to )

and, as another example, the chain rule gives

(d/dt) u(t − to ) = δ(t − to )

Remark  These matters can be taken too far, to a point where ambiguities begin to overwhelm and worrisome liberties must be taken. For example, using the product rule for differentiation, and ignoring the fact that u²(t ) is the same signal as u(t ),

(d/dt) u²(t ) = u′(t )u(t ) + u(t )u′(t ) = 2u(t )δ(t )

The multiplication rule for impulses does not apply, since u(t ) is not continuous at t = 0, and so we are stuck. However, if we interpret u(0) as ½, the midpoint of the jump, we get a result consistent with u′(t ) = δ(t ). We will not need to take matters this far, since we use these generalized notions only for rather simple signals. All of the "generalized calculus" properties can be generalized in various ways.

2.3 Linear Combinations of Singularity Signals and Generalized Calculus

For simple signals, that is, signals with uncomplicated wave shapes, it is convenient for many purposes to use singularity signals for representation and calculation. For example, the product rule gives

(d/dt)[t u(t )] = 1·u(t ) + t δ(t ) = u(t )

where we have used the multiplication rule to conclude that tδ(t ) = 0 for all t.

Example  The signal shown below can be written as a sum of step functions.

x(t ) = 4u(t + 1) − 4u(t − 1)

Another representation is

x(t ) = 4u(t + 1)u(1 − t )

This uses the unit steps as "cutoff" functions, and sometimes this usage is advantageous. Differentiation of the first expression for x(t ) gives, using our generalized notion of differentiation,

x′(t ) = 4δ(t + 1) − 4δ(t − 1)

This signal is shown below.

The same result can be obtained by differentiating the "step cutoff" representation for x(t ), though usage of the product rule and interpretation of the final result makes the derivative calculation more difficult. That is,

x′(t ) = 4δ(t + 1)u(1 − t ) − 4u(t + 1)δ(1 − t ) = 4δ(t + 1) − 4δ(1 − t ) = 4δ(t + 1) − 4δ(t − 1)

(The first step makes use of the product rule for differentiation, the second step uses the signal-multiplication rule for impulses, and the last step uses the evenness of the impulse.) Of course, regardless of the approach taken, graphical methods easily verify

∫_{−∞}^{t} x′(τ ) dτ = x(t )

in this example. Note that the constant of integration is taken to be zero since it is known that the signal x(t ) is zero for t < −1.

Example  The signal shown below can be written as

x(t ) = 2r(t ) − 2r(t − 1) − 2u(t − 3)

Again, the derivative is straightforward to compute:

x′(t ) = 2u(t ) − 2u(t − 1) − 2δ(t − 3)

and sketch. Graphical interpretations of differentiation support these computations, and the impulse in x′(t ) is crucial in verifying the relationship

∫_{−∞}^{t} x′(τ ) dτ = x(t )

Example  A right-sided cosine signal can be written as

x(t ) = cos(2t ) u(t )

Then differentiation using the product rule, followed by the impulse multiplication rule, gives

x′(t ) = −2 sin(2t ) u(t ) + cos(2t )δ(t ) = −2 sin(2t ) u(t ) + δ(t )

You should graphically check that this is a consistent result.

While representation in terms of linear combinations of singularity signals leads to convenient shortcuts for some purposes, caution should be exercised. In the examples so far, well-behaved energy signals have been represented as linear combinations of signals that are power signals, singularity signals, and unbounded signals. Sometimes we use these generalized calculus ideas for signals that are nonzero for infinite time intervals. This can introduce complications in some contexts.

Example  The periodic signal shown below can be written as

x(t ) = Σ_{k=−∞}^{∞} [ 4u(t + 1 − 3k ) − 4u(t − 1 − 3k ) ]

Then

x′(t ) = Σ_{k=−∞}^{∞} [ 4δ(t + 1 − 3k ) − 4δ(t − 1 − 3k ) ]

by generalized differentiation, and a sketch of x′(t ) is shown below.

Exercises
1. Determine whether the following signals are periodic, and if so determine the fundamental period.
(a) x(t ) = e^(−(π − 2j)t )
(b) x(t ) = e^(−π + 2jt )
(c) x(t ) = 3e^(j4t ) − 2e^(j5t )
(d) x(t ) = e^(j7t/2) + e^(j7t )
(e) x(t ) = 6e^(−j2t ) + 4e^(j7(t − 1)) − 3e^(j6(t + 1))
(f) x(t ) = Σ_{k=−∞}^{∞} [ e^(−(t − 2k)) u(t − 2k ) − e^(−(t − 2k)) u(t − 2k − 1) ]
2. Simplify the following expressions and sketch the signal.
(a) x(t ) = δ(t − 2) r(t ) + (d/dt)[u(t ) − r(t − 1)]
(b) x(t ) = δ(t )δ(t − 1) − e^(t + 3) δ(t + 2) + u(1 − t ) r(t )
(c) x(t ) = ∫_{−∞}^{t} u(τ − 3) dτ − 2r(t − 4) + r(t − 5)
(d) x(t ) = ∫_{−∞}^{t} δ(τ + 2) dτ + (d/dt)[u(t + 2) r(t )]
3. Sketch the signal x(t ) and compute and sketch ∫_{−∞}^{t} x(σ ) dσ . Check your integration by "graphical differentiation."

(a) x(t ) = u(t ) − u(t − 1) + 2δ(t − 2) − 3δ(t − 3) + δ(t − 4)
(b) x(t ) = 3u(t ) − 3u(t − 4) − 12δ(t − 5)
(c) x(t ) = −2u(t + 1) + 3u(t ) − u(t − 2)
(d) x(t ) = u(t ) + δ(t − 1) − 2δ(t − 2) + δ(t − 4)
4. Sketch the signal x(t ) and compute and sketch x′(t ), the generalized derivative of x(t ). Check your derivative by "graphical integration."
(a) x(t ) = r(t + 1) − r(t ) + u(t ) − 3u(t − 1) + u(t − 2)
(b) x(t ) = 2r(t + 1) − 4r(t ) + 2r(t − 1)
(c) x(t ) = 2u(t − 1) − (2/3) r(t − 1) + (2/3) r(t − 4)
5. Write a mathematical expression for the signal x(t ) shown below.

3.1 The Class of DT Exponential Signals

Consider a discrete-time signal

x[n] = c e^(an)

where both c and a are complex numbers and, as usual, the integer index, or sample number, n, covers the range −∞ < n < ∞. To more conveniently interpret x[n], write c in polar form, and a in rectangular form,

c = |c| e^(jφo) , a = σo + jωo

where φo = ∠c and we have chosen customary notations for the rectangular form of a. Then

x[n] = |c| e^(jφo) e^((σo + jωo)n) = |c| e^(σo n) e^(j(ωo n + φo))

Using Euler's formula, we can write this signal in rectangular form as

x[n] = |c| e^(σo n) cos(ωo n + φo) + j |c| e^(σo n) sin(ωo n + φo)

This expression is similar to the continuous-time case, but differences appear upon closer inspection. We need only consider three cases in detail.

Special Case 1: Suppose both c and a are real. That is, ωo = 0 and φo has value either 0 or π. Then, with the obvious consideration of the sign of c,

x[n] = c e^(σo n)

and we have the familiar interpretation of an exponentially-decreasing (if σo < 0) or exponentially-increasing (if σo > 0) sequence of values.

Special Case 2: Suppose c is real and a has the special form a = σo + j(2k + 1)π, where k is an integer. Then

x[n] = c e^([σo + j(2k+1)π]n) = c e^(σo n) e^(j(2k+1)πn) = c e^(σo n) (−1)^n = c (−e^(σo))^n

Or, more simply, we can write

x[n] = c α^n

where α = −e^(σo) is a real negative number. The appearance of x[n] is a bit different in this case, as shown below for c = 1 .

Of course, these first two special cases can be combined by considering that α might be either positive or negative.

Special Case 3: Suppose c is complex and a is purely imaginary; that is, σo = 0 . Then

x[n] = |c| e^(jφo) e^(jωo n) = |c| e^(j(ωo n + φo)) = |c| cos(ωo n + φo) + j |c| sin(ωo n + φo)

and we are considering a discrete-time phasor. Since the independent variable, n, is viewed as a sample index, units of ωo typically are radians/sample for discrete-time signals. In order to simplify further discussion, we assume that c = 1 .

The first difference from the continuous-time case involves the issue of periodicity. The signal

x[n] = e^(jωo n)

is periodic if and only if there is a positive integer N such that

e^(jωo (n + N)) = e^(jωo n)

for all integer n. This will hold if and only if N satisfies e^(jωo N) = 1; that is, if and only if ωo N = m2π for some integer m, or

ωo = (m/N) 2π

for some integers m and N. In words, x[n] is periodic if and only if the frequency ωo is a rational multiple of 2π. If this is satisfied, then the fundamental period of the signal is obtained when the smallest value of N is found for which this expression holds. Obviously, this occurs when m, N are relatively prime, that is, m and N have no common integer factors other than unity.

Example  x[n] = e^(j2n) is not periodic, since ωo = 2 cannot be written as a rational multiple of 2π .

Example  x[n] = e^(j(π/8)n) is periodic, with fundamental period 16, since π/8 = (1/16) 2π .
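The rational-multiple-of-2π test is mechanical to apply. The sketch below checks the two examples just given using exact rational arithmetic for ωo/(2π); the helper name is an illustrative choice:

```python
from fractions import Fraction
import cmath, math

def fundamental_period(w_over_2pi):
    """For w_o = (m/N)*2*pi with m/N in lowest terms, the
    fundamental period is the denominator N."""
    return Fraction(w_over_2pi).denominator

# x[n] = e^{j(pi/8)n}: w_o/(2*pi) = 1/16, so fundamental period 16.
N = fundamental_period(Fraction(1, 16))
assert N == 16
for n in range(-20, 20):
    assert abs(cmath.exp(1j * (math.pi / 8) * (n + N))
               - cmath.exp(1j * (math.pi / 8) * n)) < 1e-12

# x[n] = e^{j2n}: w_o = 2 is not a rational multiple of 2*pi;
# numerically e^{j*2*N} never returns to 1 for small integer N.
assert min(abs(cmath.exp(2j * k) - 1) for k in range(1, 1000)) > 1e-6
print("period of e^{j(pi/8)n} is", N)
```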

The second major difference between continuous and discrete-time complex exponentials concerns the issue of frequency, ωo. In continuous time, as the frequency ωo increases, the phasor rotates more rapidly. But in discrete time, consider what happens when the frequency is raised or lowered by 2π:

e^(j(ωo ± 2π)n) = e^(jωo n) e^(±j2πn) = e^(jωo n)

That is, raising or lowering the frequency by any integer multiple of 2π has the same (absence of) effect: the complex exponential signal doesn't change. Thus we need only consider discrete-time frequencies in a single 2π range. Most often the range of choice is −π < ωo ≤ π for reasons of symmetry, though sometimes the range 0 ≤ ωo < 2π is convenient.

As a final observation, in continuous time a phasor rotates counterclockwise if the frequency ωo is positive, and clockwise if the frequency is negative. However there is no clear visual interpretation of direction of rotation depending on the sign of frequency in the discrete-time case. The applet below illustrates the behavior of discrete-time phasors for various choices of frequency, and suggests a number of exercises.

Discrete-Time Frequency

To balance these complications in comparison with the continuous-time case, the issue of periodicity of sums of periodic discrete-time phasors is relatively simple.

Theorem  For the complex-valued signal

x[n] = c1 e^(jω1 n) + c2 e^(jω2 n)

suppose both frequencies are rational multiples of 2π, say

ω1 = (m1/N1) 2π , ω2 = (m2/N2) 2π

Then x[n] is periodic with period (not necessarily fundamental period) given by N = N1 N2 .

Proof  By direct calculation, using the claimed period N,

x[n + N] = c1 e^(jω1(n + N)) + c2 e^(jω2(n + N)) = c1 e^(jω1 n) e^(j(m1/N1)2π N1 N2) + c2 e^(jω2 n) e^(j(m2/N2)2π N1 N2) = c1 e^(jω1 n) e^(jm1 N2 2π) + c2 e^(jω2 n) e^(jm2 N1 2π) = c1 e^(jω1 n) + c2 e^(jω2 n) = x[n]

for any n.

Remark  The various analyses of phasor properties can also be carried out for real trigonometric signals, though the reasoning often requires trig identities and is more involved than for phasors. For example, suppose

x[n] = cos(ωo n)

is periodic with period N. This can be written as

cos[ωo (n + N)] = cos(ωo n)
cos(ωo n + ωo N) = cos(ωo n)
cos(ωo n) cos(ωo N) − sin(ωo n) sin(ωo N) = cos(ωo n) , for all integers n

We conclude from this that N must be such that cos(ωo N) = 1 and sin(ωo N) = 0. Thus ωo N = m2π for some integer m; that is, ωo must be a rational multiple of 2π .

3.2 The Class of DT Singularity Signals

The basic discrete-time singularity signal is the unit pulse,

δ[n] = { 1 , n = 0 ; 0 , otherwise }

Contrary to the continuous-time case, there is nothing "generalized" about this simple signal. Graphically, δ[n] is a lonely lollypop at the origin:

The discrete-time unit-step function is

u[n] = { 1 , n ≥ 0 ; 0 , n < 0 }

and again there are no technical issues here. In particular, the value of u[0] is unity, unlike the continuous-time case where we decline to assign an immoveable value to u(0). Graphically, we have

The unit step can be viewed as the running sum of the unit pulse. That is,

u[n] = Σ_{k=−∞}^{n} δ[k]

Changing summation variable from k to l = n − k gives an alternate expression

u[n] = Σ_{l=0}^{∞} δ[n − l]

This process can be continued to define the unit ramp as the running sum of the unit step,

Discrete-time singularity signals also have sifting and multiplication properties similar to the continuous-time case. It is straightforward to verify that

Σ_{n=−∞}^{∞} x[n] δ[n−no] = x[no]

which is analogous to the continuous-time sift

∫_{−∞}^{∞} x(t) δ(t−to) dt = x(to)

Also,

x[n] δ[n−no] = x[no] δ[n−no]

is a discrete-time version of the multiplication rule

x(t) δ(t−to) = x(to) δ(t−to)

However, unlike the continuous-time case, the time-scaled unit pulse δ[an], where a is a nonzero integer, is identical to δ[n], as is easily verified.

Exercises
1. Determine if the following signals are periodic, and if so compute the fundamental period.
(a) x[n] = e^{j(20π/3)n}
(b) x[n] = e^{j4πn} − e^{−j(π/4)n}
(c) x[n] = 3e^{j(7/3)n}
(d) x[n] = 1 + e^{j(5π/4)n}
(e) x[n] = e^{j(7π/5)n} + e^{−j(3π/4)n}

2. Consider the signal x[n] = c1 e^{jω1 n} + c2 e^{jω2 n}, where both frequencies are rational multiples of 2π,

ω1 = (m1/N1) 2π,  ω2 = (m2/N2) 2π

as used in the theorem in Section 3.1. Suppose that N is a positive integer such that N = k1 N1 = k2 N2 for some integers k1, k2. Show that x[n] has period N. (Typically N < N1 N2.)

3. Simplify the following expressions and sketch the signal.
(a) x[n] = Σ_{k=−∞}^{∞} 3δ[k−2] + δ[n+1] cos(πn)
(b) x[n] = Σ_{k=−∞}^{n} 4δ[n−k] e^{3k}
(c) x[n] = r[n−3] δ[n−5] + Σ_{k=−∞}^{n} 3δ[n−k]
(d) x[n] = cos(πn) (δ[n] − δ[n−1]) − δ³[n] + Σ_{k=−∞}^{n} u[k−3]
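The sifting and multiplication rules for the unit pulse can be spot-checked numerically before attempting simplifications like those above. A minimal sketch (not from the notes), assuming NumPy; the test signal and the shift no = 3 are arbitrary choices:

```python
import numpy as np

n = np.arange(-10, 11)                    # index window
rng = np.random.default_rng(0)
x = rng.integers(-5, 6, size=n.size)      # an arbitrary test signal
n0 = 3                                    # arbitrary shift

pulse = (n == n0).astype(int)             # delta[n - n0] on this window

# Sifting: sum_n x[n] delta[n - n0] = x[n0]
sift = int(np.sum(x * pulse))
x_at_n0 = int(x[n == n0][0])

# Multiplication rule: x[n] delta[n - n0] = x[n0] delta[n - n0]
mult_ok = np.array_equal(x * pulse, x_at_n0 * pulse)
print(sift, x_at_n0, mult_ok)
```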

Notes for Signals and Systems

4.1 Introduction to Systems

A continuous-time system produces a new continuous-time signal (the output signal) from a provided continuous-time signal (the input signal). Attempting a formal definition of a system is a tedious exercise in avoiding circularity, in part because the notion of a system is so general that it is difficult to include all the details, and in part because the mathematical description of a system might presume certain properties of allowable input signals. So we will abandon precise articulation and rely on the intuition that develops from examples.

We represent a system "S" with input signal x(t) and output signal y(t) by a labeled box, as shown below. Since a system maps signals into signals, the notation is designed to emphasize that for the input signal "x" the output signal is "S(x)," and the value of this output signal at, say, t = 0, is y(0) = S(x)(0). We use the mathematical notation

y(t) = S(x)(t)

to emphasize this fact.

Example The running integral is an example of a system:

y(t) = ∫_{−∞}^{t} x(τ) dτ

A physical interpretation is a capacitor with input signal the current x(t) through the capacitor and output signal the voltage y(t) across the capacitor, assuming unit capacitance. In this case, the output at any time t1 depends on input values for all t ≤ t1; that is, y(t1) is the accumulated net area under x(t) for −∞ < t ≤ t1.

Remark There are many notations for a system in the literature. For example, some try to emphasize that a system maps signals into signals by using square brackets, y(t) = S[x(t)]. Perhaps this continues to tempt the interpretation that, for example, y(0) depends only on x(0). But, in general, the output signal at any time t can depend on the input signal values at all times. In the discrete-time case, we use an entirely similar notation, writing y[n] = S(x)[n] and representing a system by the diagram shown below.

4.2 System Properties

The discussion of properties of systems will be a bit tentative at this point.

We can be considerably more precise when we consider specific classes of systems that admit particular types of mathematical descriptions. In the interim the intent is mainly to establish some intuition concerning properties of systems in general. We proceed with a list of properties, phrased in the continuous-time case.

• Memoryless System
A system is memoryless if the output value at any time t depends only on the input signal value at that same time. Examples of memoryless systems are

y(t) = 2x(t),  y(t) = x²(t),  y(t) = t e^{x(t)}

• Causal System
A system is causal if the output signal value at any time t depends only on input signal values for times no larger than t. Examples of causal systems are

y(t) = 3x(t−2),  y(t) = ∫_{−∞}^{t} x(τ) dτ,  y(t) = x³(t)

Examples of systems that are not causal are

y(t) = x(2),  y(t) = 3x(t+2),  y(t) = ∫_{−∞}^{t+1} x(τ) dτ

A memoryless system is always causal, though the reverse is, of course, untrue.

• Time-Invariant System
A system is time invariant if for every input signal x(t) and corresponding output signal y(t) the following property holds: given any constant to, the input signal x̃(t) = x(t−to) yields the output signal ỹ(t) = y(t−to). This is sometimes called "shift invariance," since any time shift of an input signal results in the exact same shift of the output signal. Examples of time-invariant systems are

y(t) = sin(x(t)),  y(t) = 3x(t−2),  y(t) = ∫_{−∞}^{t} x(τ) dτ

Examples of systems that are not time invariant are

y(t) = sin(t) x(t),  y(t) = ∫_{−∞}^{t} τ x(τ) dτ

To check whether a system is time invariant requires application of the defining condition. For example, for

y(t) = ∫_{−∞}^{t} τ x(τ) dτ

we consider the input signal x̃(t) = x(t−to), where to is any constant. The corresponding response computation begins with

ỹ(t) = ∫_{−∞}^{t} τ x̃(τ) dτ = ∫_{−∞}^{t} τ x(τ−to) dτ

To compare this to y(t−to), it is convenient to change the variable of integration to σ = τ − to. This gives

ỹ(t) = ∫_{−∞}^{t−to} (σ + to) x(σ) dσ

which is not the same as

y(t−to) = ∫_{−∞}^{t−to} τ x(τ) dτ

Therefore the system is not time invariant.
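This failure of time invariance can also be seen numerically by discretizing the integral as a Riemann sum. A rough sketch (an illustration only), assuming NumPy; the grid, the Gaussian test input, and the shift to = 2 are arbitrary choices:

```python
import numpy as np

dt = 0.01
t = np.arange(-10.0, 10.0, dt)
x = np.exp(-t ** 2)                  # smooth test input, negligible outside the grid

def S(sig):
    # Riemann-sum approximation of y(t) = integral_{-inf}^{t} tau * sig(tau) dtau
    return np.cumsum(t * sig) * dt

t0 = 2.0
k = int(round(t0 / dt))              # the shift measured in samples
x_shifted = np.concatenate((np.zeros(k), x[:-k]))        # x(t - t0) on the grid

response_to_shifted_input = S(x_shifted)
shifted_response = np.concatenate((np.zeros(k), S(x)[:-k]))   # y(t - t0)

# For a time-invariant system these would agree; here they differ markedly,
# consistent with the extra t0-dependent term found in the calculation above.
mismatch = np.max(np.abs(response_to_shifted_input - shifted_response))
print(mismatch)
```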

• Linear System
A system is linear if for every pair of input signals x1(t), x2(t), with corresponding output signals y1(t), y2(t), the following holds: for every constant b, the response to the input signal x(t) = b x1(t) + x2(t) is y(t) = b y1(t) + y2(t). (This is more concise than popular two-part definitions of linearity in the literature. Taking b = 1 yields the additivity requirement that the response to x(t) = x1(t) + x2(t) be y(t) = y1(t) + y2(t). And taking x2(t) = x1(t) gives the homogeneity requirement that the response to x(t) = (b+1) x1(t) should be y(t) = (b+1) y1(t), for any constant b.) Examples of linear systems are

y(t) = e^t x(t),  y(t) = sin(t) x(t),  y(t) = ∫_{−∞}^{t} x(τ) dτ

Examples of systems that are "nonlinear" are

y(t) = ∫_{−∞}^{t} x²(σ) dσ,  y(t) = 1 + x(t),  y(t) = cos(x(t))

Remark It should be noted that for a linear system the response to the zero input signal is the zero output signal. To see this, simply take x1(t) = x2(t) (so that y1(t) = y2(t)) and b = −1 in the definition of linearity.

• Stable System
Recalling the definition of a bounded signal in Section 1.6, a system is stable (or bounded-input, bounded-output stable) if every bounded input signal yields a bounded output signal. In detail: for any input signal x(t) such that |x(t)| < M for all t, where M is a constant, there is another constant P such that the corresponding output signal satisfies |y(t)| < P for all t. Examples of stable systems are

y(t) = e^{x(t)},  y(t) = x(t−2)/(t² + 1)

Examples of "unstable" systems are

y(t) = e^t x(t),  y(t) = ∫_{−∞}^{t} x(τ) dτ

• Invertible System
A system is invertible if the input signal can be uniquely determined from knowledge of the output signal. Examples of invertible systems are

y(t) = x³(t),  y(t) = 1 + x(t)

Invertibility of a mathematical operation requires two features: the operation must be one-to-one and also onto. Since we have not established a class of input signals that we consider for systems, or a corresponding class of output signals, the issue of "onto" is left vague. And since we have decided to ignore or reassign values of a signal at isolated points in time, for reasons of simplicity or convenience, even the issue of "one-to-one" is unsettled.
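Definitions such as linearity lend themselves to numerical spot checks on sampled signals (a check can refute a property, though never prove it). A small sketch (not from the notes), assuming NumPy; the grid, the random inputs, and the constant b are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
x1, x2 = rng.normal(size=t.size), rng.normal(size=t.size)
b = 2.5

# y(t) = sin(t) x(t): passes the linearity test (linear, though time varying)
S_lin = lambda x: np.sin(t) * x
lin_ok = np.allclose(S_lin(b * x1 + x2), b * S_lin(x1) + S_lin(x2))

# y(t) = 1 + x(t): fails the test, so it is "nonlinear" in the sense used here
S_aff = lambda x: 1.0 + x
aff_ok = np.allclose(S_aff(b * x1 + x2), b * S_aff(x1) + S_aff(x2))

print(lin_ok, aff_ok)
```

Note that y(t) = 1 + x(t) fails linearity precisely because its zero-input response is not the zero signal, matching the remark above.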

Determining invertibility of a given system can be quite difficult. Perhaps the easiest situation is showing that a system is not invertible by exhibiting two legitimately different input signals that yield the same output signal. For example, y(t) = x²(t) is not invertible because the constant input signals x(t) = 1 and x̃(t) = −1, for all t, yield identical output signals. As another example, the differentiator system y(t) = (d/dt) x(t) is not invertible, since x̃(t) = 1 + x(t) yields the same output signal as x(t). As a final example, in a benign setting the running-integrator system

y(t) = ∫_{−∞}^{t} x(τ) dτ

is invertible by the fundamental theorem of calculus:

(d/dt) ∫_{−∞}^{t} x(τ) dτ = x(t)

But the fact remains that technicalities are required for this conclusion. The input signal to a running-integrator system must be sufficiently well behaved that the integral is defined, and if two input signals differ only at isolated points in time, the output signals will be identical, and thus the system is not invertible if we consider such input signals to be legitimately different.

Regardless of the time domain, it is important to note that these are input-output properties of systems. That is, everything is stated in terms of input signals and corresponding output signals; nothing is being stated about the internal workings of the system. All of these properties translate easily to discrete-time systems. Little more is required than to replace parentheses by square brackets and t by n.

Finally, it is worthwhile to think of how you would ascertain whether a given physical system, for which you do not have a mathematical description, has each of the properties we consider. What input signals would you apply, what measurements of the response would you take, and what use would you make of these measurements?

4.3 Interconnections of Systems – Block Diagrams

Engineers often connect systems, sometimes called subsystems in this context, together to form new systems. An immediate question is how to mathematically describe the input-output behavior of the overall system in terms of the subsystem descriptions. Of course these connections correspond to mathematical operations on the functions describing the subsystems, and there are a few main cases to consider. We begin with the two subsystems shown below as "blocks" ready for connection, and consider the following types of connections.

• Additive parallel connection
In this block diagram, the circular element shown represents the signed addition of the output signals of the two subsystems. This connection describes a new system with output given by

y(t) = S1(x)(t) − S2(x)(t) = (S1 − S2)(x)(t)

Thus the overall system is represented by the function (S1 − S2)(x) = S1(x) − S2(x), and we can represent it as a single block:

• Series or cascade connection
The mathematical description of the overall system corresponds to the composition of the functions describing the subsystems, according to

y(t) = S2(S1(x))(t)

Thus the overall system can be represented as the block diagram

• Feedback connection

The feedback connection is more complicated than the others, but extremely important. To attempt development of a mathematical description of the system described by this connection, we begin by noting that the output signal is the response of the system S1 with input signal that is the output of the summer. That is, the input signal to S1 is x(t) − S2(y)(t). Therefore,

y(t) = S1(x − S2(y))(t)

and we see that the feedback connection yields an equation involving both the input and output signals. Unfortunately, without further mathematical assumptions on the two functions S1 and S2, we cannot solve this equation for y(t) in terms of x(t) to obtain a mathematical description of the overall system of the form

y(t) = S(x)(t)

This indicates the sophisticated and subtle nature of the feedback connection of systems, and we return to this issue in the sequel.

Remark In our discussion of interconnections of systems, an underlying assumption is that the behavior of the various subsystems is not altered by the connection to other subsystems. In electrical engineering, this is often expressed as "there are no loading effects." This assumption is not always satisfied in practice. A second assumption is that domain and range conditions are met. That is, in the cascade connection, the output of S1 must satisfy any assumption required on the input to S2. For example, if the system S2 takes the square root of the input signal, S2(x)(t) = √(x(t)), then the output of S1 must never become negative.

The basic interconnections of discrete-time systems are completely similar to the continuous-time case.

Exercises
1. Determine if each of the following systems is causal, memoryless, time invariant, linear, or stable. Justify your answer.
(a) y(t) = 2x(t) + 3x²(t−1)
(b) y(t) = cos²(t) x(t)
(c) y(t) = 2 + ∫_{−∞}^{t} x(t−τ) dτ
(d) y(t) = e^{−t} ∫_{−∞}^{t} e^{τ} x(τ) dτ
(e) y(t) = ∫_{t−1}^{t} x(2τ) dτ
(f) y(t) = x(−t)
(g) y(t) = x(3t)
(h) y(t) = ∫_{−∞}^{t} e^{(t−σ)} x²(σ) dσ

(i) y(t) = x(t) ∫_{−∞}^{t} e^{−τ} x(τ) dτ
(j) y(t) = 3x(t+1) − 4
(k) y(t) = −3x(t) + ∫_{0}^{t} 3x(σ) dσ
(l) y(t) = 3x(t) − |x(t−3)|
2. Determine if each of the following systems is causal, memoryless, time invariant, linear, or stable. Justify your answer.
(a) y[n] = 3x[n] x[n−1]
(b) y[n] = Σ_{k=n−2}^{n+2} x[k]
(c) y[n] = 4x[3n−2]
(d) y[n] = Σ_{k=−∞}^{n} e^{k} u[k] x[n−k]
(e) y[n] = Σ_{k=n−3}^{n} cos(x[k])
3. Determine if each of the following systems is invertible. If not, specify two different input signals that yield the same output. If so, give an expression for the inverse system.
(a) y(t) = cos[x(t)]
(b) y(t) = x³(t−4)

4. Determine if each of the following systems is invertible. If not, specify two different input signals that yield the same output. If so, give an expression for the inverse system.
(a) y[n] = Σ_{k=−∞}^{n} x[k]
(b) y[n] = (n−1) x[n]
(c) y[n] = x[n] − x[n−1]

5. For each pair of systems S1, S2 specified below, give a mathematical description of the cascade connection S2(S1).
(a) y1[n] = x1²[n−2],  y2[n] = 3x2[n+2]
(b) y1[n] = Σ_{k=−∞}^{n} δ[k] x1[n−k],  y2[n] = Σ_{k=−∞}^{n} 2δ[n−k] x2[k]

5.1 DT LTI Systems and Convolution

Discrete-time systems that are linear and time invariant are often referred to as LTI systems. LTI systems comprise a very important class of systems, and they can be described by a standard mathematical formalism. To each LTI system there corresponds a signal h[n] such that the input-output behavior of the system is described by

y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k]

This expression is called the convolution sum representation for LTI systems. In addition, the sifting property easily shows that h[n] is the response of the system to a unit-pulse input signal. That is, for x[n] = δ[n],

y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k] = Σ_{k=−∞}^{∞} δ[k] h[n−k] = h[n]

Thus the input-output behavior of a discrete-time, linear, time-invariant system is completely described by the unit-pulse response of the system. If h[n] is known, then the response to any input can be computed from the convolution sum.
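A direct transcription of the convolution sum for finite-length signals (taken to be zero outside their windows) can be compared against NumPy's built-in convolution. A sketch for illustration; the particular x and h are arbitrary:

```python
import numpy as np

def convolution_sum(x, h):
    """y[n] = sum_k x[k] h[n-k], for signals supported on indices 0..len-1."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 0.0, -3.0])
h = np.array([1.0, 0.5, 0.25])

y = convolution_sum(x, h)
matches_numpy = np.allclose(y, np.convolve(x, h))

# A unit-pulse input returns h itself: h[n] is the unit-pulse response.
pulse_response = convolution_sum(np.array([1.0]), h)
pulse_ok = np.allclose(pulse_response, h)
print(matches_numpy, pulse_ok)
```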
Derivation It is straightforward to show that a system described by the convolution sum, with specified h[n], is a linear and time-invariant system. Linearity is clear. To show time invariance, consider the shifted input signal x̂[n] = x[n−no]. The system response to this input signal is given by

ŷ[n] = Σ_{k=−∞}^{∞} x̂[k] h[n−k] = Σ_{k=−∞}^{∞} x[k−no] h[n−k]

To rewrite this expression, change the summation index from k to l = k − no, to obtain

ŷ[n] = Σ_{l=−∞}^{∞} x[l] h[n−no−l] = y[n−no]

This establishes time invariance.

It is less straightforward to show that essentially any LTI system can be represented by the convolution sum. But the convolution representation for linear, time-invariant systems can be developed by adopting a particular representation for the input signal and then enforcing the properties of linearity and time invariance on the corresponding response. The details are as follows. Often we will represent a given signal as a linear combination of "basis" signals that have certain desirable properties for the purpose at hand. To develop a representation for discrete-time LTI systems, it is convenient to represent the input signal as a linear combination of shifted unit-pulse signals: δ[n], δ[n−1], δ[n+1], δ[n−2], … . Indeed it is easy to verify the expression


x[n] = Σ_{k=−∞}^{∞} x[k] δ[n−k]

Here the coefficient of the signal δ[n−k] in the linear combination is the value x[k]. Thus, for example, if n = 3, then the right side is evaluated by the sifting property to verify

Σ_{k=−∞}^{∞} x[k] δ[3−k] = x[3]

We can use this signal representation to derive an LTI system representation as follows. The response of an LTI system to a unit-pulse input, x[n] = δ[n], is given the special notation y[n] = h[n]. Then, by time invariance, the response to a k-shifted unit pulse, x[n] = δ[n−k], is y[n] = h[n−k]. Furthermore, by linearity, the response to a linear combination of shifted unit pulses is the linear combination of the responses to the shifted unit pulses. That is, the response to x[n], as written above, is

y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k]

Thus we have arrived at the convolution sum representation for LTI systems. The convolution representation follows directly from linearity and time invariance; no other properties of the system are assumed (though there are some convergence issues that we have ignored). An alternate expression for the convolution sum is obtained by changing the summation variable from k to l = n − k:

y[n] = Σ_{l=−∞}^{∞} h[l] x[n−l]

It is clear from the convolution representation that if we know the unit-pulse response of an LTI system, then we can compute the response to any other input signal by evaluating the convolution sum. Indeed, we specifically label LTI systems with the unit-pulse response in drawing block diagrams, as shown below.

The demonstration below can help with visualizing and understanding the convolution representation. Joy of Convolution (Discrete Time)

Response Computation

Evaluation of the convolution expression, given x[n] and h[n], is not as simple as might be expected because it is actually a collection of summations, over the index k, that can take different forms for different values of n. There are several strategies that can be used for evaluation, and the main ones are reviewed below. • Analytic evaluation When x[n] and h[n] have simple, neat analytical expressions, and the character of the summation doesn’t change in complicated ways as n changes, sometimes y[n] can be computed analytically.
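As an illustration of analytic evaluation, for h[n] = a^n u[n] driven by a unit step, the geometric-series identity Σ_{k=0}^{n} a^k = (1 − a^{n+1})/(1 − a) gives a closed form that can be compared against a numerical convolution. A sketch (not from the notes), assuming NumPy; a = 0.5 and the window length are arbitrary choices:

```python
import numpy as np

a = 0.5
N = 30
n = np.arange(N)
h = a ** n                      # h[n] = a^n u[n], truncated to the window
x = np.ones(N)                  # unit step u[n] on the window

# The first N samples of the full convolution involve only in-window indices,
# so the truncation does not disturb them.
y_numeric = np.convolve(x, h)[:N]

# Analytic evaluation: y[n] = sum_{k=0}^{n} a^k = (1 - a^(n+1)) / (1 - a)
y_analytic = (1 - a ** (n + 1)) / (1 - a)

print(np.max(np.abs(y_numeric - y_analytic)))
```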


Example Suppose the unit-pulse response of an LTI system is a unit ramp,

h[n] = r[n] = n u[n]

To compute the response of this system to a unit-step input, x[n] = u[n], write the convolution representation as

y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k] = Σ_{k=−∞}^{∞} u[k] (n−k) u[n−k] = Σ_{k=0}^{∞} (n−k) u[n−k]

Note that in the second step the unit step u[k] in the summand is removed, but the lower limit of the sum is raised to zero, and this is valid regardless of the value of n. Now, if n < 0, then the argument of the step function in the summand is negative for every k ≥ 0. Therefore y[n] = 0 for n < 0. But, for n ≥ 0, we can remove the step u[n−k] from the summand if we lower the upper limit to n. Then

y[n] = Σ_{k=0}^{n} (n−k) = n + (n−1) + ⋯ + 2 + 1 + 0

Using the very old trick of pairing the n with the 0, the (n−1) with the 1, and so on, we see that each pair sums to n. Counting the number of pairs for even n and for odd n gives

y[n] = { n(n+1)/2, n ≥ 0; 0, n < 0 }

or, more compactly,

y[n] = (n(n+1)/2) u[n]

Remark Because of the prevalence of exponential signals in discrete-time signal calculations, the following geometric-series formulas are useful for analytic evaluation of convolution. For any complex number α ≠ 1,

Σ_{n=0}^{N−1} α^n = (1 − α^N)/(1 − α)

For any complex number satisfying |α| < 1,

Σ_{n=0}^{∞} α^n = 1/(1 − α)

• Graphical method
This method is useful for more complicated cases. We simply plot the two signals, x[k] and h[n−k] (flip and shift), in the summand versus k for the value of n of interest, then perform "lollypop-by-lollypop" multiplication and plot the summand, and then add up the lollypop values to obtain y[n].

Example To rework the previous example by the graphical method, writing

y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k]

first plot h[k] as shown. For n = 3, to facilitate the multiplication, we plot h[3−k] and, immediately below, plot x[k]. Then lollypop-by-lollypop multiplication gives a plot of the summand, and adding up the lollypop values gives

y[3] = 3 + 2 + 1 = 6

To compute, say, y[4], slide the plot of h[3−k] one sample to the right to obtain a plot of h[4−k] and repeat the multiplication with x[k] and the addition. In simple cases such as this, there is little need to redraw because the pattern is clear. Even in complicated cases, it is often easy to identify ranges of n where y[n] = 0, because the plots of x[k] and h[n−k] are "non-overlapping"; in the example above, this clearly holds for n < 0.

• LTI cleverness
The third method makes use of the properties of linearity and time invariance, and is well suited for the case where x[n] has only a few nonzero values. Indeed, it is simply a specialization of the approach we took in the derivation of the convolution sum.

Example With an arbitrary h[n], suppose that the input signal comprises three nonzero lollypops and can be written as

x[n] = δ[n] + 2δ[n−1] − 3δ[n−3]

Then linearity and time invariance dictate that

y[n] = h[n] + 2h[n−1] − 3h[n−3]

Depending on the form of the unit-pulse response, this can be evaluated analytically, or by graphical addition of plots of the unit-pulse response and its shifts and amplitude scales.

Remark The convolution operation can explode (fail to be well defined) for particular choices of input signal and unit-pulse response. For example, with x[n] = h[n] = 1 for all n, there is no value of n for which y[n] is defined, because the convolution sum is infinite for every n. The diligent student must always be on the lookout for such anomalies.
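The worked example (unit-ramp h, unit-step input, y[n] = n(n+1)/2 for n ≥ 0) is easy to confirm numerically. A minimal sketch, assuming NumPy; the window length is an arbitrary choice:

```python
import numpy as np

N = 20
n = np.arange(N)
u = np.ones(N)                  # unit step on n = 0, ..., N-1
r = n.astype(float)             # unit ramp r[n] = n u[n]

y = np.convolve(u, r)[:N]       # first N samples are free of truncation effects
closed_form = n * (n + 1) / 2

print(y[3])                     # the graphical method above gave y[3] = 6
```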

In our derivation, we did not worry about convergence of the summation. This again is a consequence of our decision not to be precise about classes of allowable signals, domains, and ranges. Indeed, there are LTI systems that cannot be described by a convolution sum, though these are more in the nature of mathematical oddities than engineering legitimacies; this is the reason we say that "essentially" any LTI system can be described by the convolution sum.

In any case, there are two different ways to describe in words the role of the unit-pulse response values in the input-output behavior of an LTI system. The value of h[n−k] determines how the nth value of the output signal depends on the kth value of the input signal. Or, the value of h[q] determines how the value of y[n] depends on the value of x[n−q].

5.2 Properties of Convolution – Interconnections of DT LTI Systems

The convolution of two signals yields a signal, and this obviously is a mathematical operation: a sort of "weird multiplication" of signals. This mathematical operation obeys certain algebraic properties, and these properties can be interpreted as properties of systems and their interconnections. To simplify matters, we adopt a shorthand "star" notation for convolution and write

y[n] = (x ∗ h)[n] = Σ_{k=−∞}^{∞} x[k] h[n−k]

Note that since, for any n, the value of y[n] in general depends on all values of the signals x[n] and h[n], we do not write y[n] = x[n] ∗ h[n], because of the temptation to conclude that, for example, y[2] = x[2] ∗ h[2]. Rather, we use the more general operator notation style. Algebraic properties of the "star operation" are discussed below.

• Commutativity
Convolution is commutative. That is, for all n,

(x ∗ h)[n] = (h ∗ x)[n], or, in complete detail, Σ_{k=−∞}^{∞} x[k] h[n−k] = Σ_{k=−∞}^{∞} h[k] x[n−k]

The proof of this involves the first standard rule for proofs in this course: use change of variable of summation. Beginning with the left side, replace the summation variable k by q = n − k. Then as k → ±∞, q → ∓∞, and

(x ∗ h)[n] = Σ_{k=−∞}^{∞} x[k] h[n−k] = Σ_{q=∞}^{−∞} x[n−q] h[q] = Σ_{q=−∞}^{∞} h[q] x[n−q] = (h ∗ x)[n]

Note that (unlike integration) it does not matter whether we sum from left-to-right or right-to-left.

• Associativity
Convolution is associative:

(x ∗ (h1 ∗ h2))[n] = ((x ∗ h1) ∗ h2)[n]

The proof of this property is a messy exercise in manipulating summations, and it is omitted.

• Distributivity
Convolution is distributive (with respect to addition):

(x ∗ (h1 + h2))[n] = (x ∗ h1)[n] + (x ∗ h2)[n]

Of course, distributivity is a restatement of part of the linearity property of LTI systems, and so no proof is needed. The remaining part of the linearity condition is written in the new notation as follows. For any constant b,

((bx) ∗ h)[n] = b (x ∗ h)[n]

• Shift Property
This is simply a restatement of the time-invariance property, though the notation makes it a bit awkward. For any integer no, if x̂[n] = x[n−no], then

(x̂ ∗ h)[n] = (x ∗ h)[n−no]

• Identity
It is worth noting that the "star" operation has the unit pulse as an identity element. That is,

(x ∗ δ)[n] = x[n]

This can be interpreted in system-theoretic terms as the fact that the identity system, y[n] = x[n], has the unit-pulse response h[n] = δ[n]. Also we can write (δ ∗ δ)[n] = δ[n], an expression that says nothing more than: the unit pulse is the unit-pulse response of the system whose unit-pulse response is a unit pulse.

These algebraic properties of the mathematical operation of convolution lead directly to methods for describing the input-output behavior of interconnections of LTI systems. Of course we use block diagram representations to describe interconnections, but for LTI systems we label each block with the corresponding unit-pulse response.

Distributivity implies that the interconnection below has the same input-output behavior as the system

Commutativity and associativity imply that the interconnections both have the same input-output behavior as the system

Finally, the feedback connection. With the intermediate signal e[n] labeled as shown, we analyze the feedback connection as follows, in an attempt to obtain a description for its input-output behavior. We can write

y[n] = (g ∗ e)[n] and e[n] = x[n] − (h ∗ y)[n]

as descriptions of the interconnection. Substituting the second into the first gives

y[n] = (g ∗ x)[n] − (g ∗ h ∗ y)[n]

or, writing y[n] = (δ ∗ y)[n],

((δ − g ∗ h) ∗ y)[n] = (g ∗ x)[n]

However, we cannot solve for y[n] on the left side unless we know that the LTI system with unit-pulse response (δ − g ∗ h)[n] is invertible. Let's stop here, and return to the problem of describing the feedback connection after developing more tools.

But for systems without feedback, the algebraic rules for the convolution operation provide an easy formalism for simplifying block diagrams. Typically it is easiest to start at the output signal and write descriptions of the intermediate signals (labeled if needed) while working back toward the input signal.
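For feedforward diagrams, this block-diagram algebra can be spot-checked numerically: convolve with np.convolve and add with zero padding. A sketch (not from the notes) for a diagram with a direct path h4 plus cascades h1 into h2 and h1 into h3 combined with signs; the four unit-pulse responses and the input are arbitrary random choices:

```python
import numpy as np

def add_padded(a, b):
    # Add finite signals that both start at n = 0, zero-padding the shorter one.
    out = np.zeros(max(len(a), len(b)))
    out[:len(a)] += a
    out[:len(b)] += b
    return out

rng = np.random.default_rng(2)
h1, h2, h3, h4 = (rng.normal(size=m) for m in (3, 4, 2, 5))
x = rng.normal(size=6)

# Path-by-path: y = h4*x + h2*h1*x - h3*h1*x
y_paths = add_padded(np.convolve(h4, x),
                     add_padded(np.convolve(np.convolve(h2, h1), x),
                                -np.convolve(np.convolve(h3, h1), x)))

# Single equivalent block: h_eq = h4 + h2*h1 - h3*h1
h_eq = add_padded(h4, add_padded(np.convolve(h2, h1), -np.convolve(h3, h1)))
y_block = np.convolve(h_eq, x)

agree = np.allclose(y_paths, y_block)
print(agree)
```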

Example For the interconnected system shown below, there is no need to label internal signals, as the structure is reasonably transparent. The output signal can be written as

y[n] = (h4 ∗ x)[n] + (h2 ∗ h1 ∗ x)[n] − (h3 ∗ h1 ∗ x)[n] = ((h4 + h2 ∗ h1 − h3 ∗ h1) ∗ x)[n]

Thus the input-output behavior of the system is identical to the input-output behavior of the system with unit-pulse response (h4 + h2 ∗ h1 − h3 ∗ h1)[n]:

5.3 DT LTI System Properties

Since the input-output behavior of a discrete-time LTI system is completely characterized by its unit-pulse response, h[n], via the convolution expression

y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k] = Σ_{k=−∞}^{∞} h[k] x[n−k]

the input-output properties of the system can be characterized very precisely in terms of properties of h[n].

• Causal System
An LTI system is causal if and only if h[n] = 0 for n < 0, that is, if and only if h[n] is right sided. The proof of this is quite easy from the convolution expression. If the unit-pulse response is right sided, then the convolution expression simplifies to

y[n] = Σ_{k=0}^{∞} h[k] x[n−k]

and, at any value of n, the value of y[n] depends only on the current and earlier values of the input signal. If the unit-pulse response is not right sided, then it is easy to see that the value of y[n] at a particular n depends on future values of the input signal.

• Memoryless System
An LTI system is memoryless if and only if h[n] = cδ[n] for some constant c. Again, a proof is quite easy to argue from the convolution expression.

• Stable System
An LTI system is (bounded-input, bounded-output) stable if and only if the unit-pulse response is absolutely summable, that is,

Σ_{n=−∞}^{∞} |h[n]| is finite.

To prove that absolute summability implies stability, suppose x[n] is a bounded input, that is, there is a constant M such that |x[n]| ≤ M for all n. Then the absolute value of the output signal satisfies

|y[n]| = |Σ_{k=−∞}^{∞} h[k] x[n−k]| ≤ Σ_{k=−∞}^{∞} |h[k]| |x[n−k]| ≤ M Σ_{k=−∞}^{∞} |h[k]|

Therefore, if the absolute summability condition holds, y[n] is bounded; the output signal is bounded for any bounded input signal, and we have shown that the system is stable. To prove that stability of the system implies absolute summability requires considerable cleverness. Consider the input x[n] defined by

x[−n] = { 1, h[n] ≥ 0; −1, h[n] < 0 }

Clearly x[n] is a bounded input signal, and the corresponding response y[n] at n = 0 is

y[0] = Σ_{k=−∞}^{∞} h[k] x[−k] = Σ_{k=−∞}^{∞} |h[k]|

Since the system is stable, y[0] is bounded, and therefore the unit-pulse response is absolutely summable.

Example The system with unit-pulse response h[n] = (0.5)^n u[n] is a stable system, since

Σ_{n=−∞}^{∞} |h[n]| = Σ_{n=0}^{∞} (0.5)^n = 1/(1 − 0.5) = 2

On the other hand, the system with unit-pulse response h[n] = u[n−1] is unstable.

• Invertible System
There is no simple characterization of invertibility in terms of the unit-pulse response. However, in particular examples it is sometimes possible to compute the unit-pulse response of an inverse system, hI[n], from the requirement

(h ∗ hI)[n] = δ[n]

This condition expresses the natural requirement that a system in cascade with its inverse should be the identity system.
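The cascade requirement (h ∗ hI)[n] = δ[n] is easy to test numerically for a candidate inverse pair. A sketch (not from the notes), assuming NumPy, using the running summer h[n] = u[n] and the candidate inverse hI[n] = δ[n] − δ[n−1]; the window length is arbitrary:

```python
import numpy as np

N = 50
h = np.ones(N)                      # running summer: h[n] = u[n], truncated
hI = np.array([1.0, -1.0])          # candidate inverse: delta[n] - delta[n-1]

cascade = np.convolve(h, hI)[:N]    # (h * hI)[n] on n = 0, ..., N-1
delta = np.zeros(N)
delta[0] = 1.0
identity_ok = np.allclose(cascade, delta)

# Equivalently: a first difference undoes a running sum.
rng = np.random.default_rng(3)
x = rng.normal(size=N)
recovered = np.convolve(np.cumsum(x), hI)[:N]
recovers_input = np.allclose(recovered, x)
print(identity_ok, recovers_input)
```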

Example To compute an inverse of the running summer, that is, the LTI system with unit-pulse response h[n] = u[n], we must find hI[n] that satisfies

Σ_{k=−∞}^{∞} u[k] hI[n−k] = δ[n]

Simplifying the summation gives

Σ_{k=0}^{∞} hI[n−k] = δ[n]

It is clear that we should take hI[n] = 0 for n < 0. For n = 0 the requirement is

Σ_{k=0}^{∞} hI[−k] = hI[0] = 1

For n = 1 the requirement is

Σ_{k=0}^{∞} hI[1−k] = hI[1] + hI[0] = 0

which gives hI[1] = −1. Continuing for further values of n, it is clear that the inverse-system requirement is satisfied by taking all remaining values of hI[n] to be zero. Thus the inverse system has the unit-pulse response

hI[n] = δ[n] − δ[n−1]

Of course, it is easy to see that in general the output of this inverse system is the first difference of the input signal.

5.4 Response to Singularity Signals

The response of a DT LTI system to the basic singularity signals is quite easy to compute. If the system is described by

y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k]

then the unit-pulse response is simply y[n] = h[n]. If the input signal is a unit step, x[n] = u[n], then

y[n] = Σ_{k=−∞}^{∞} u[k] h[n−k] = Σ_{k=0}^{∞} h[n−k] = Σ_{l=−∞}^{n} h[l]

In words, the unit-step response is the running sum of the unit-pulse response. Using this result, if the system is causal, that is, the unit-pulse response is right sided with h[n] = 0 for n < 0, then

Of course, if the system is causal, that is, if the unit-pulse response is right sided, then

y[n] = ∑_{k=0}^{n} h[k] for n ≥ 0, and y[n] = 0 for n < 0; that is, y[n] = (∑_{k=0}^{n} h[k]) u[n]

For input signals that have only a small number of nonzero values, the basic approach of "LTI cleverness" can be applied to evaluate the response. That is, if the input signal can be written as

x[n] = x[n₀]δ[n−n₀] + x[n₁]δ[n−n₁] + ⋯ + x[n_m]δ[n−n_m]

then the response is given by

y[n] = x[n₀]h[n−n₀] + x[n₁]h[n−n₁] + ⋯ + x[n_m]h[n−n_m]

5.5 Response to Exponentials (Eigenfunction Properties)
For important classes of LTI systems, the responses to certain types of exponential input signals have particularly simple forms. These simple forms underlie many approaches to the analysis of LTI systems, and we consider several variants, each of which requires slightly different assumptions on the LTI system. For historical reasons in mathematics, an input signal x[n] is called an eigenfunction of the system if the corresponding output signal is simply a constant multiple of the input signal. (We do permit the constant to be complex, when considering complex input signals.)

• Real Eigenfunctions
The response of a causal, stable LTI system to a growing exponential input signal is a constant multiple of the exponential, where the constant depends on the exponent. To work this out in detail, suppose the LTI system

y[n] = ∑_{k=−∞}^{∞} h[k] x[n−k]

is causal and stable, that is, h[n] is right-sided and absolutely summable, and suppose the input signal is the real exponential signal

x[n] = e^{σ_o n}, −∞ < n < ∞

where σ_o ≥ 0. Then the response computation becomes

y[n] = ∑_{k=−∞}^{∞} h[k] e^{σ_o (n−k)} = (∑_{k=−∞}^{∞} h[k] e^{−σ_o k}) e^{σ_o n}

Using the causality and stability assumptions on the system, the assumption that σ_o ≥ 0, and basic properties of absolute values,

|∑_{k=0}^{∞} h[k] e^{−σ_o k}| ≤ ∑_{k=0}^{∞} |h[k]| e^{−σ_o k} ≤ ∑_{k=0}^{∞} |h[k]| < ∞

That is, the summation converges to a real number, which we write as H(σ_o) to show the dependence of the real number on the value chosen for σ_o. Thus the output signal is a scalar multiple of the input signal,

y[n] = H(σ_o) e^{σ_o n}

where

H(σ_o) = ∑_{k=0}^{∞} h[k] e^{−σ_o k}
Of course the exponential input, which begins as a vanishingly small signal as n → −∞, grows without bound as n increases, as does the response, unless σ_o = 0 or H(σ_o) = 0.
Example For the LTI system with unit-pulse response

h[n] = (0.5)^n u[n]

given any σ_o ≥ 0 we can compute

H(σ_o) = ∑_{k=0}^{∞} (0.5)^k e^{−σ_o k} = ∑_{k=0}^{∞} (e^{−σ_o}/2)^k = 1/(1 − e^{−σ_o}/2)

Therefore the response of the system to the input

x[n] = e^{σ_o n}, −∞ < n < ∞

is

y[n] = (2/(2 − e^{−σ_o})) e^{σ_o n}
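The closed form for H(σ_o) in this example can be checked by truncating the defining sum, as in this numerical sketch (the truncation length is an arbitrary choice):

```python
import math

# Check H(sigma_o) = 1/(1 - e^{-sigma_o}/2) for h[n] = (0.5)^n u[n]
# by truncating the defining geometric sum at 200 terms.
sigma_o = 0.3
H_sum = sum((0.5 ** k) * math.exp(-sigma_o * k) for k in range(200))
H_closed = 1.0 / (1.0 - math.exp(-sigma_o) / 2.0)
print(abs(H_sum - H_closed) < 1e-12)  # True
```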

It is important to note that only one summation must be evaluated to compute the eigenfunction response. Contrast this with the usual convolution, which typically involves a family of summations, with the nature of the summation changing with the value of n.

• Complex Eigenfunctions
It is mathematically convenient to consider complex input signals to LTI systems, though of course the unit-pulse response, h[n], is assumed to be real. If a complex input signal is written in rectangular form as

x[n] = x_R[n] + j x_I[n]

where, for each n,

x_R[n] = Re{x[n]}, x_I[n] = Im{x[n]}

then the corresponding, complex output signal is

y[n] = ∑_{k=−∞}^{∞} x[k] h[n−k] = ∑_{k=−∞}^{∞} x_R[k] h[n−k] + j ∑_{k=−∞}^{∞} x_I[k] h[n−k]

That is,

Re{y[n]} = Re{(x ∗ h)[n]} = (x_R ∗ h)[n], Im{y[n]} = Im{(x ∗ h)[n]} = (x_I ∗ h)[n]

and so we get two real input-output calculations for a single complex calculation. (We note in passing that linearity of an LTI system holds for complex-coefficient linear combinations of complex input signals.) As an important application of this fact, suppose the LTI system is stable, and consider the input signal

x[n] = e^{jω_o n}, −∞ < n < ∞

where ω_o is a real number. The corresponding output signal is

y[n] = ∑_{k=−∞}^{∞} h[k] e^{jω_o (n−k)} = (∑_{k=−∞}^{∞} h[k] e^{−jω_o k}) e^{jω_o n}

Since the system is stable, and |e^{−jω_o k}| = 1 for every k,

|∑_{k=−∞}^{∞} h[k] e^{−jω_o k}| ≤ ∑_{k=−∞}^{∞} |h[k] e^{−jω_o k}| ≤ ∑_{k=−∞}^{∞} |h[k]| < ∞

and we have, for any frequency ω_o, convergence of the sum to a (complex) number that we write as H(ω_o). Then

y[n] = H(ω_o) e^{jω_o n}

where

H(ω_o) = ∑_{k=−∞}^{∞} h[k] e^{−jω_o k}

Again, there is not a family of summations to evaluate for this complex input signal (as in convolution, in which different values of n can lead to different forms of the sum); rather, there is a single summation to evaluate!
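This eigenfunction relation can be checked directly, as in the following numerical sketch; the unit-pulse response h[n] = (0.5)^n u[n] and the truncation length are illustrative choices.

```python
import numpy as np

# Check y[n] = H(w0) e^{j w0 n} for h[n] = (0.5)^n u[n], with the
# infinite sums truncated at 200 terms.
w0 = np.pi / 3
k = np.arange(200)
h = 0.5 ** k
H = np.sum(h * np.exp(-1j * w0 * k))   # H(w0) = sum_k h[k] e^{-j w0 k}

n_vals = np.arange(10)
# direct evaluation of the convolution sum for a phasor input
y = np.array([np.sum(h * np.exp(1j * w0 * (n - k))) for n in n_vals])
print(np.allclose(y, H * np.exp(1j * w0 * n_vals)))  # True
```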
Example If

x[n] = cos(ω_o n) = Re{e^{jω_o n}}

and the system is stable, then

y[n] = Re{H(ω_o) e^{jω_o n}}

To evaluate this expression, write the complex number H(ω_o) in polar form as

H(ω_o) = |H(ω_o)| e^{j∠H(ω_o)}

Then

y[n] = Re{|H(ω_o)| e^{j(ω_o n + ∠H(ω_o))}} = |H(ω_o)| cos(ω_o n + ∠H(ω_o))

That is, the response of a stable system to a cosine input signal is a cosine with the same frequency, but with amplitude adjustment and phase shift. Another way to write this response follows from writing H(ω_o) in rectangular form, but we leave this as an exercise.

• Steady-State Eigenfunctions
Suppose that the system is causal as well as stable, and that the input signal is a right-sided complex exponential,

x[n] = e^{jω_o n} u[n]

Then y[n] = 0 for n < 0, and for n ≥ 0,

y[n] = ∑_{k=−∞}^{∞} h[k] e^{jω_o (n−k)} u[n−k] = (∑_{k=0}^{n} h[k] e^{−jω_o k}) e^{jω_o n}

As n → ∞, remembering that the unit-pulse response is right sided by causality, and absolutely summable by stability,

∑_{k=0}^{n} h[k] e^{−jω_o k} → H(ω_o) = ∑_{k=0}^{∞} h[k] e^{−jω_o k}

and therefore, as n increases, y[n] approaches the "steady-state response"

y_ss[n] = H(ω_o) e^{jω_o n}

That is, for large values of n, y[n] ≈ y_ss[n]. This is a "one-sided input, steady-state output" version of the eigenfunction property of complex exponentials. Of course a similar property for one-sided sine and cosine inputs is implied via the operations of taking real and imaginary parts.
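The approach to steady state can be seen numerically. In this sketch, a one-sided phasor is applied to the causal, stable system with h[n] = (0.5)^n u[n] (an illustrative choice), and the transient dies out after a modest number of samples:

```python
import numpy as np

# Steady-state eigenfunction property: for x[n] = e^{j w0 n} u[n] and
# causal, stable h[n] = (0.5)^n u[n], y[n] approaches H(w0) e^{j w0 n}.
w0 = 0.4 * np.pi
k = np.arange(400)
h = 0.5 ** k
H = np.sum(h * np.exp(-1j * w0 * k))

n = np.arange(400)
x = np.exp(1j * w0 * n)                 # one-sided phasor, starts at n = 0
y = np.convolve(h, x)[:400]
y_ss = H * np.exp(1j * w0 * n)

print(abs(y[0] - y_ss[0]))              # noticeable transient at n = 0
print(abs(y[100] - y_ss[100]) < 1e-12)  # True: transient has died out
```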
5.6 DT LTI Systems Described by Linear Difference Equations

Systems described by constant-coefficient, linear difference equations are LTI systems. In exploring this fact, it is important to keep in mind that our default setting is that all signals are defined for −∞ < n < ∞. This brings about significant differences (!) with other treatments of difference equations. Suppose we have a system whose input and output signals are related by

y[n] + ay[n − 1] = bx[n] , − ∞ < n < ∞

where a and b are real constants. This is called a first-order, constant-coefficient, linear difference equation. Given an input signal x[n], this can be viewed as an equation that must be solved for y[n], and we leave to other courses the argument that for each input signal, x[n] , there is a unique solution for the output signal, y[n] . We simply make the claim that the solution is

y[n] = ∑_{k=−∞}^{n} (−a)^{n−k} b x[k]

and verify this solution as follows. Using the assumed y[n],

a y[n−1] = a ∑_{k=−∞}^{n−1} (−a)^{n−1−k} b x[k] = −∑_{k=−∞}^{n−1} (−a)^{n−k} b x[k]

Therefore

y[n] + a y[n−1] = ∑_{k=n}^{n} (−a)^{n−k} b x[k] = b x[n]

and the claimed solution is verified. Of course, the solution can be written as

y[n] = ∑_{k=−∞}^{∞} (−a)^{n−k} b u[n−k] x[k]

so it is clear that the difference equation describes an LTI system with unit-pulse response

h[n] = (−a)^n b u[n]

That this is the unit-pulse response also can be verified directly, by showing that

h[n] + a h[n−1] = b δ[n]

From the form of h[n] it follows that a first-order, constant-coefficient, linear difference equation defines a causal LTI system. Furthermore, the system is memoryless if and only if a = 0, and stable if and only if |a| < 1.

Right-Sided Setting
Often we are interested in right-sided inputs, where the causality property of systems described by difference equations implies that the corresponding outputs are also right sided. This means that the difference equation need only be addressed for n ≥ 0, though we must view the output as zero for negative values of n. Therefore the "initial conditions" are y[−1] = y[−2] = ⋯ = 0. That is, since we are focusing on systems whose input-output behavior is linear, we must require zero initial conditions in a right-sided setting. Nonzero initial conditions, as considered in mathematics or other engineering courses dealing with difference equations, cannot arise in the context of LTI systems, for a consequence of input-output linearity is that the identically zero input signal must yield the identically zero output signal.

Results are similar for systems described by second-order,

y[n] + a₁ y[n−1] + a₂ y[n−2] = b x[n], −∞ < n < ∞

as well as higher order, constant-coefficient, linear difference equations. In summary, such equations describe causal LTI systems. However, it is more difficult to compute the unit-pulse response, and to connect the stability property to the coefficients in the difference equation.
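In the right-sided setting, the recursive solution of the first-order difference equation and the convolution with h[n] = (−a)^n b u[n] agree sample for sample. The following sketch checks this for arbitrary illustrative values of a, b, and the input:

```python
import numpy as np

# y[n] + a y[n-1] = b x[n] with zero initial conditions, solved
# recursively and by convolution with h[n] = (-a)^n b u[n].
a, b = -0.5, 2.0
N = 64
rng = np.random.default_rng(0)
x = rng.standard_normal(N)           # an arbitrary right-sided input

y_rec = np.zeros(N)                  # recursion with y[-1] = 0
y_prev = 0.0
for n in range(N):
    y_rec[n] = -a * y_prev + b * x[n]
    y_prev = y_rec[n]

h = b * (-a) ** np.arange(N)         # unit-pulse response samples
y_conv = np.convolve(h, x)[:N]
print(np.allclose(y_rec, y_conv))    # True
```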

1 ≤ n ≤ 3. h[n] = β nu[− n] (What additional assumption on α and β is needed?) (c) x[n] = δ [n − 1] . h[n] = ⎨ ⎩0. else (c) x[n] = 1 . What is the response of the system to the input signal shown below. h[n] = (1/ 2) n u[n − 1] (g) x[n] = r[n] . compute and sketch y[n] = (h * x)[n] for the following signals. h[ n] = 4u[3 − n] (d) x[n] = α n − 2u[n − 2]. h[n] shown below (h) x[n] = u[n] . 0 ≤ n ≤ 3 ⎧1. h[n] = δ [n] − 2δ [n − 1] + δ [n − 2] (d) x[n] = u[n − 1] − u[n − 3]. h[n] = e− nu[n] (f) x[n] = u[n] . h[n] = 3δ [n + 1] − 3δ [n − 3] ⎧1. (a) x[n] = α nu[n] . Using the analytical method. else ⎩0. h[n] = −u[n] + u[n − 3] (b) x[n] = ⎨ (e) x[n] = en ( u[n] − u[n − 2]) . h[ n] = (−1) n u[n] x[n] = r[n]. h[n] = β (What additional assumption on α and β is needed?) 4. for all n. h[n] = δ [n] − 2δ [n − 1] + δ [n + 1] 3. 65 .(a) xa [n] = 3u[n − 1] − 3u[n − 3] (b) xb [n] = u[n] − u[n − 1] − u[n − 2] + u[ n − 3] (c) xc [n] = u[n] − u[n − 4] 2. where α and β are distinct real numbers. 7 ≤ n ≤ 9 . h[n] = β nu[n] (b) x[n] = α n . (i) (j) h[n] = r[n] u[3 − n] x[n] = u[n − 1] − u[n − 4]. Using the graphical method. An LTI system with a unit-step input signal has the response y[n] = (1/ 2) n u[n] . compute and sketch y[n] = (h * x)[n] for (a) x[n] = δ [n] − δ [n − 3] .

5. Suppose y[n] = (x ∗ h)[n]. For each of the pairs of signals given below, show how ŷ[n] = (x̂ ∗ ĥ)[n] is related to y[n].
(a) x̂[n] = x[n−3], ĥ[n] = h[n+3]
(b) x̂[n] = x[n−3], ĥ[n] = h[n−3]
(c) x̂[n] = x[−n], ĥ[n] = h[−n]
(d) x̂[n] = x[−1−n], ĥ[n] = h[1−n]

6. Consider a discrete-time signal h[n] that is zero outside the interval n₀ ≤ n ≤ n₁ and a signal x[n] that is zero outside the interval n₂ ≤ n ≤ n₃. Show how to define n₄ and n₅ such that (h ∗ x)[n] = 0 outside the interval n₄ ≤ n ≤ n₅.

7. Compute the overall unit-pulse response for the interconnections of DT LTI systems shown below.
(a) (b)

8. Determine if the DT LTI systems with the following unit-pulse responses are causal and/or stable.
(a) h[n] = (1/2)^n u[n+1]
(b) h[n] = (1/2)^n u[−n]
(c) h[n] = 2^n u[3−n]
(d) h[n] = 2^n r[−n]

9. For the DT LTI system with unit-pulse response h[n] = (1/2)^n u[n], use the eigenfunction properties to compute the response to the input signals

(a) x[n] = 1
(b) x[n] = (−1)^n
(c) x[n] = 2cos(πn/2)
(d) x[n] = 3sin(−3πn/2)

6.1 CT LTI Systems and Convolution
The treatment of the continuous-time case parallels the discrete-time case, except that some facts are more difficult to prove. It is easy to check that, given a continuous-time signal h(t), a system described by the convolution integral

y(t) = ∫_{−∞}^{∞} x(τ) h(t−τ) dτ

is an LTI system. (We are assuming here, as usual, that h(t) and the input signal x(t) are such that the integral is defined.) Indeed, combining the characterizing conditions for linearity and time invariance, we need only check the following: for any input signals x₁(t) and x₂(t), with corresponding responses y₁(t) and y₂(t), and for any constants a and t_o, the response to x̂(t) = a x₁(t) + x₂(t − t_o) should be ŷ(t) = a y₁(t) + y₂(t − t_o). So, for a system described by convolution, we compute the response to x̂(t) as

ŷ(t) = ∫_{−∞}^{∞} x̂(τ) h(t−τ) dτ = ∫_{−∞}^{∞} [a x₁(τ) + x₂(τ − t_o)] h(t−τ) dτ = a ∫_{−∞}^{∞} x₁(τ) h(t−τ) dτ + ∫_{−∞}^{∞} x₂(τ − t_o) h(t−τ) dτ

Changing the variable of integration from τ to σ = τ − t_o in the last integral gives

ŷ(t) = a ∫_{−∞}^{∞} x₁(τ) h(t−τ) dτ + ∫_{−∞}^{∞} x₂(σ) h(t − t_o − σ) dσ = a y₁(t) + y₂(t − t_o)

Thus a system described by convolution is an LTI system. Furthermore, the characterizing signal h(t) is the unit-impulse response of the system, as is easily verified by a sifting calculation: if x(t) = δ(t), then

y(t) = ∫_{−∞}^{∞} x(τ) h(t−τ) dτ = ∫_{−∞}^{∞} δ(τ) h(t−τ) dτ = h(t)

It is also true that the input-output behavior of essentially all continuous-time, linear, time-invariant systems can be described by the convolution expression. However, to prove this fact involves delicate arguments that we will skip.

Evaluation of the Convolution Integral
Calculating the response of a system to a given input signal by evaluation of the convolution integral is not as simple as might be expected. This is because a family of integrations, parametrized by t, must be evaluated, and the character of the integration can change with the parameter. There are three main approaches.
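In practice the convolution integral can also be approximated numerically by discretizing it as a Riemann sum. The following sketch uses h(t) = e^{−t} u(t) and x(t) = u(t), illustrative choices for which the exact answer y(t) = 1 − e^{−t} for t ≥ 0 is easy to verify:

```python
import numpy as np

# Riemann-sum approximation of y(t) = int x(tau) h(t - tau) d tau,
# with h(t) = e^{-t} u(t) and x(t) = u(t); exact answer 1 - e^{-t}.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
h = np.exp(-t)                       # e^{-t} u(t), sampled on t >= 0
x = np.ones_like(t)                  # u(t), sampled on t >= 0

y = np.convolve(h, x)[:t.size] * dt  # discretized convolution integral
y_exact = 1.0 - np.exp(-t)
print(np.max(np.abs(y - y_exact)) < 1e-2)  # True for this step size
```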

Analytical Method
When both the impulse response h(t) and the input signal x(t) have simple analytical descriptions, the convolution integral sometimes can be evaluated by analytical means.

Example If the input signal and the unit-impulse response are unit-step signals, then the response calculation is

y(t) = ∫_{−∞}^{∞} x(τ) h(t−τ) dτ = ∫_{−∞}^{∞} u(τ) u(t−τ) dτ

Of course, the integrand is zero for negative τ, and so

y(t) = ∫_{0}^{∞} u(t−τ) dτ

Now the integrand is zero for τ > t, but this simplification involves the value of t. In fact,

y(t) = 0 for t ≤ 0, and y(t) = ∫_{0}^{t} 1 dτ = t for t > 0

Summarizing, we can write the response as the unit ramp: y(t) = t u(t) = r(t).

Example If the unit-impulse response is a unit-step function and the input signal is the constant signal x(t) = 1, then the response calculation is

y(t) = ∫_{−∞}^{∞} x(τ) h(t−τ) dτ = ∫_{−∞}^{∞} u(t−τ) dτ

Of course the conclusion is that the response is undefined for any value of t! This is a reminder that convolution expressions must be checked to make sure they are meaningful.

Graphical Method
For more complicated cases, a graphical approach is valuable for keeping track of the calculations that must be done. Basically, we plot the two signals in the integrand, x(τ) and h(t−τ), versus τ, for the value of t of interest. Then multiplying the two signals provides the integrand, and the net area must be computed.

Example We compute

y(t) = ∫_{−∞}^{∞} x(τ) h(t−τ) dτ

for the input signal and unit-impulse response shown below. First the impulse response is flipped and shifted on the τ axis to a convenient value of t. Then the input signal is plotted in the variable τ immediately below to facilitate the multiplication of signals:

It is easy to see that for t < 2 and for t > 5 the product of the two signals is identically zero, and so the response is y(t) = 0 for these two ranges of t. For 2 ≤ t ≤ 3,

y(t) = ∫_{2}^{t} 2 dτ = 2t − 4

For 3 ≤ t ≤ 4,

y(t) = ∫_{t−1}^{t} 2 dτ = 2

For 4 < t ≤ 5,

y(t) = ∫_{t−1}^{4} 2 dτ = 10 − 2t

Of course these calculations are also obvious from consideration of the multiplication of the two signals in the various ranges and sketches of the resulting integrand. In any case, sketching the response yields

LTI Cleverness Method
If one of the signals in the convolution can be written as a linear combination of simple, shifted signals, then by the properties of linearity and time invariance, the response can be computed from a single convolution involving the simple signals.

Example In the example given above, x(t) is a linear combination of shifted step functions; that is, we can write x(t) = u(t−2) − u(t−4). If we compute the convolution

ỹ(t) = ∫_{−∞}^{∞} u(τ) h(t−τ) dτ

then the response we seek is given by

y(t) = ỹ(t−2) − ỹ(t−4)

The reader is encouraged to work the details and sketch y(t).
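The piecewise answer above can be checked numerically. Since the figures are not reproduced here, this sketch assumes, consistently with the integration limits used in the example, that h(t) = 2(u(t) − u(t−1)) and x(t) = u(t−2) − u(t−4):

```python
import numpy as np

# Numerical check of the piecewise answer, assuming (from the integration
# limits above) h(t) = 2(u(t) - u(t-1)) and x(t) = u(t-2) - u(t-4).
dt = 1e-3
t = np.arange(0.0, 8.0, dt)
h = 2.0 * ((t >= 0) & (t < 1))
x = 1.0 * ((t >= 2) & (t < 4))
y = np.convolve(h, x)[:t.size] * dt

def y_piecewise(tv):
    if 2 <= tv <= 3:
        return 2 * tv - 4
    if 3 <= tv <= 4:
        return 2.0
    if 4 < tv <= 5:
        return 10 - 2 * tv
    return 0.0

checks = [2.5, 3.5, 4.5, 6.0]
ok = all(abs(y[int(tv / dt)] - y_piecewise(tv)) < 1e-2 for tv in checks)
print(ok)  # True
```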

The Web lecture linked below can be consulted for further discussion.

6.2 Properties of Convolution – Interconnections of CT LTI Systems
This topic also is covered in the interactive Web lecture.

ILM: LTI Systems and Convolution

But be aware that the notation in the Web lecture is a bit different from, and not as good as, what we have been using in class. In our class notation, where we write y(t) = (h ∗ x)(t), the various properties of convolution appear as follows:
• Commutativity: (x ∗ h)(t) = (h ∗ x)(t)
• Distributivity: (x ∗ (h₁ + h₂))(t) = (x ∗ h₁)(t) + (x ∗ h₂)(t)
• Associativity: ((x ∗ h₁) ∗ h₂)(t) = (x ∗ (h₁ ∗ h₂))(t)
Additional properties include one that follows from linearity: for any constant b,

((bh) ∗ x)(t) = b (h ∗ x)(t)

and one that follows from time invariance, though the notation is a bit awkward: for any time t_o, if x̂(t) = x(t − t_o), then

(x̂ ∗ h)(t) = (x ∗ h)(t − t_o)

All of these properties can be interpreted in terms of block diagram manipulations involving LTI systems. Finally, writing the unit impulse function as δ(t), we note that (δ ∗ x)(t) = x(t), for x(t) continuous at the relevant value of t, and furthermore (δ ∗ δ)(t) = δ(t). This last expression invokes Special Property 1 of Section 2.2, and ignores continuity requirements of the sifting property. But in the present context Special Property 1 states the plainly simple fact that the unit-impulse response of the system whose unit-impulse response is a unit impulse is a unit impulse.

6.3 CT LTI System Properties
The input-output behavior of a continuous-time LTI system is described by its unit-impulse response, h(t), via the convolution expression

y(t) = ∫_{−∞}^{∞} x(τ) h(t−τ) dτ = ∫_{−∞}^{∞} h(τ) x(t−τ) dτ

Therefore the input-output properties of an LTI system can be characterized in terms of properties of h(t), just as in the discrete-time case. The basic results are similar to the discrete-time case, though, as usual in continuous time, there are unmentioned technical assumptions to guarantee that the integrals are defined.

• Memoryless System
An LTI system is memoryless if and only if h(t) = 0 for t ≠ 0. If h(t) = 0 for t ≠ 0, then unless h(t) is an impulse, the response of the system to every input signal will be y(t) = 0 for all t. (Here we rely on the fact that the response of an LTI system to the identically-zero input signal is identically zero.) On the other hand, if h(t) ≠ 0 for some t = t_a ≠ 0, then the unit-impulse input, which is nonzero only at t = 0, yields a response that is nonzero at the nonzero time t_a; thus the system is not memoryless. It follows from this discussion that a memoryless LTI system is characterized by an impulse response of the form h(t) = b δ(t), where b is a real constant, and since

y(t) = ∫_{−∞}^{∞} h(τ) x(t−τ) dτ

it follows that y(t) can depend only on x(t).

• Causal System
An LTI system is causal if and only if h(t) = 0 for t < 0, that is, if and only if h(t) is right sided. This is because the unit-impulse input is zero for t < 0, and up to the time t = 0 a causal system does not know whether the input signal continues to be zero.

• Stable System
An LTI system is (bounded-input, bounded-output) stable if and only if the unit-impulse response is absolutely integrable, that is,

∫_{−∞}^{∞} |h(t)| dt

is finite. This should be believable from the corresponding fact about sums. To prove that absolute integrability implies stability, suppose x(t) is a bounded input, with |x(t)| ≤ M for all t, and suppose ∫_{−∞}^{∞} |h(t)| dt ≤ K. We use the fact that the absolute value of an integral with upper limit greater than lower limit is bounded by the integral of the absolute value of the integrand. Then the absolute value of the output signal satisfies

|y(t)| ≤ ∫_{−∞}^{∞} |h(τ)| |x(t−τ)| dτ ≤ M ∫_{−∞}^{∞} |h(τ)| dτ ≤ MK

for all t, and therefore the system is stable. To prove that stability of the system implies absolute integrability of h(t), we use the same sort of cleverness as in the discrete-time case. Consider the bounded input signal

x(t) = { 1, h(−t) ≥ 0 ; −1, h(−t) < 0 }

Then the corresponding output signal, y(t), is bounded, say by the constant K. Explicitly,

K ≥ y(0) = ∫_{−∞}^{∞} h(τ) x(−τ) dτ = ∫_{−∞}^{∞} |h(τ)| dτ

Thus h(t) is absolutely integrable.

• Invertible System
First note that the identity system in continuous time, y(t) = x(t), has the unit impulse response h(t) = δ(t), and the inverse system for an LTI system must be an LTI system. Then we can make the following statement: an LTI system described by h(t) is invertible if and only if there exists a signal h_I(t) (the impulse response of the inverse system) such that

(h ∗ h_I)(t) = δ(t)

Such an h_I(t) might not exist, and if it does, it might be difficult to compute. We will not pursue this further.

6.4 Response to Singularity Signals
It is easy to show that the response of a CT LTI system to a unit-step input signal is the running integral of the unit-impulse response. Indeed, if x(t) = u(t), then

y(t) = ∫_{−∞}^{∞} h(τ) u(t−τ) dτ = ∫_{−∞}^{t} h(τ) dτ

It takes a bit more work to show that the unit-ramp response can be written as an iterated running integral of the unit-impulse response. With x(t) = r(t), we can write

y(t) = ∫_{−∞}^{∞} h(τ₁) r(t−τ₁) dτ₁ = ∫_{−∞}^{t} h(τ₁)(t−τ₁) dτ₁

where a subscripted variable of integration has been used to make the end result pretty. Applying integration-by-parts to this expression gives

y(t) = (t−τ₁) ∫_{−∞}^{τ₁} h(τ₂) dτ₂ |_{τ₁=−∞}^{τ₁=t} − ∫_{−∞}^{t} ∫_{−∞}^{τ₁} h(τ₂) dτ₂ (−dτ₁)

Evaluating the first term at τ₁ = t and τ₁ = −∞ shows that it vanishes, leaving

y(t) = ∫_{−∞}^{t} ∫_{−∞}^{τ₁} h(τ₂) dτ₂ dτ₁

For an input signal that has a regular geometric shape and can be represented conveniently as a linear combination of singularity signals, these expressions can be used to write the corresponding response as a linear combination of running integrals of the unit-impulse response. Of course, the utility of such an expression depends on the complexity of h(t).

Example For the input signal

x(t) = 2u(t) − u(t−1) + 3r(t−2)

the response of a CT LTI system can be written as

y(t) = 2 ∫_{−∞}^{t} h(τ) dτ − ∫_{−∞}^{t−1} h(τ) dτ + 3 ∫_{−∞}^{t−2} ∫_{−∞}^{τ₁} h(τ₂) dτ₂ dτ₁

6.5 Response to Exponentials (Eigenfunction Properties)
For important classes of LTI systems, the responses to certain types of exponential input signals have particularly simple forms. These simple forms motivate many approaches to the analysis of LTI systems, and we consider several variants, each of which requires slightly different assumptions on the system. As in the discrete-time case, an input signal is called an eigenfunction of the system if the response is simply a constant multiple of the input signal.

• Real Eigenfunctions
Suppose the LTI system is causal and stable, and suppose the input signal is the real, growing exponential

x(t) = e^{σ_o t}, −∞ < t < ∞

where σ_o ≥ 0. Then

y(t) = ∫_{−∞}^{∞} h(τ) e^{σ_o (t−τ)} dτ = (∫_{−∞}^{∞} h(τ) e^{−σ_o τ} dτ) e^{σ_o t} = (∫_{0}^{∞} h(τ) e^{−σ_o τ} dτ) e^{σ_o t}

where the lower limit has been raised to zero since the unit-impulse response is right sided. Convergence of the integral is guaranteed by the stability assumption, and by the fact that |e^{−σ_o τ}| ≤ 1 for τ ≥ 0. The details rely on the fact that the absolute value of an integral is less than the integral of the absolute value (so long as the upper limit is greater than the lower limit):

|∫_{0}^{∞} h(τ) e^{−σ_o τ} dτ| ≤ ∫_{0}^{∞} |h(τ) e^{−σ_o τ}| dτ ≤ ∫_{0}^{∞} |h(τ)| dτ < ∞

Therefore we can define the constant H(σ_o) by

H(σ_o) = ∫_{0}^{∞} h(τ) e^{−σ_o τ} dτ

and write

y(t) = H(σ_o) e^{σ_o t}, −∞ < t < ∞

It is important to observe that only one integral must be evaluated to compute this response, in contrast with the general convolution calculation that involves a family of integrals.

• Complex Eigenfunctions
Though we consider only real LTI systems, that is, systems with a real unit-impulse response h(t), it is convenient for mathematical purposes to permit complex-valued input signals. To see how the general calculation proceeds, write a complex input signal in rectangular form

x(t) = x_R(t) + j x_I(t)

where, for each t,

x_R(t) = Re{x(t)}, x_I(t) = Im{x(t)}

Then, since h(t) is real,

y(t) = ∫_{−∞}^{∞} x(τ) h(t−τ) dτ = ∫_{−∞}^{∞} x_R(τ) h(t−τ) dτ + j ∫_{−∞}^{∞} x_I(τ) h(t−τ) dτ

That is,

Re{y(t)} = Re{(x ∗ h)(t)} = (x_R ∗ h)(t), Im{y(t)} = Im{(x ∗ h)(t)} = (x_I ∗ h)(t)

This means that with one complex calculation we include two real calculations. The most important application of complex inputs is the case where the LTI system is stable and the input is a phasor,

x(t) = e^{jω_o t}, −∞ < t < ∞

where ω_o is a real number. The response is given by

y(t) = ∫_{−∞}^{∞} h(τ) e^{jω_o (t−τ)} dτ = (∫_{−∞}^{∞} h(τ) e^{−jω_o τ} dτ) e^{jω_o t}

Here we define the complex constant

H(ω_o) = ∫_{−∞}^{∞} h(τ) e^{−jω_o τ} dτ

where convergence of the integral is guaranteed by the stability assumption and the fact that |e^{−jω_o τ}| = 1 for −∞ < τ < ∞. Thus we have

y(t) = H(ω_o) e^{jω_o t}, −∞ < t < ∞

Example Suppose the signal

x(t) = sin(ω_o t), −∞ < t < ∞

is applied to a stable LTI system with unit-impulse response h(t). Since x(t) = Im{e^{jω_o t}}, we immediately have that

y(t) = Im{H(ω_o) e^{jω_o t}}, −∞ < t < ∞

This expression can be made more explicit by writing H(ω_o) in polar form:

H(ω_o) = |H(ω_o)| e^{j∠H(ω_o)}

Then

y(t) = Im{|H(ω_o)| e^{j∠H(ω_o)} e^{jω_o t}} = |H(ω_o)| Im{e^{j(ω_o t + ∠H(ω_o))}} = |H(ω_o)| sin(ω_o t + ∠H(ω_o)), −∞ < t < ∞

An alternate expression follows from writing H(ω_o) in rectangular form,

H(ω_o) = Re{H(ω_o)} + j Im{H(ω_o)}

Then

y(t) = Im{[Re{H(ω_o)} + j Im{H(ω_o)}][cos(ω_o t) + j sin(ω_o t)]} = Re{H(ω_o)} sin(ω_o t) + Im{H(ω_o)} cos(ω_o t)

• Steady-State Eigenfunctions
Suppose the LTI system is causal as well as stable, and the input signal is the right-sided phasor

x(t) = e^{jω_o t} u(t)

Then y(t) = 0 for t < 0, by causality, and for t ≥ 0,

y(t) = ∫_{0}^{t} h(τ) e^{jω_o (t−τ)} dτ = (∫_{0}^{t} h(τ) e^{−jω_o τ} dτ) e^{jω_o t}

Therefore, as t increases, y(t) more and more closely approximates the steady-state response

y_ss(t) = H(ω_o) e^{jω_o t}

where

H(ω_o) = ∫_{−∞}^{∞} h(τ) e^{−jω_o τ} dτ = ∫_{0}^{∞} h(τ) e^{−jω_o τ} dτ

by causality, and again the stability assumption guarantees that H(ω_o) is well defined. It is interesting to compare this steady-state property with the previous case, where the phasor input signal defined for −∞ < t < ∞ results in the phasor output at every value of t. In that setting, since the input began at t = −∞, every value of t is a "steady-state" value in that an infinite period of time has elapsed since the beginning of the input signal. In either case, the key fact is the following: if the input is a sinusoid (or phasor) of frequency ω_o, then the steady-state response of a causal and stable LTI system is a sinusoid (or phasor) with the same frequency, although the amplitude and phase angle, relative to the input sinusoid, are altered by the system.

6.6 CT LTI Systems Described by Linear Differential Equations
Systems described by constant-coefficient, linear differential equations are LTI systems. However, while stating this fact it is important to keep in mind that by LTI we mean "input-output linear" systems, and that our default time interval is −∞ < t < ∞. Because of this setting, our treatment may not be as similar to other treatments as you might expect. Consider a system where the input and output signals are related by

ẏ(t) + a y(t) = b x(t), −∞ < t < ∞

where a and b are real constants. This is called a first-order, constant-coefficient, linear differential equation. Once x(t) is specified, this can be viewed as an equation that must be solved for y(t). It can be shown that there is only one solution, and we will demonstrate that this solution can be written as

y(t) = ∫_{−∞}^{t} e^{−a(t−τ)} b x(τ) dτ

The demonstration involves substituting into the differential equation, and proceeds in an elementary fashion by writing

y(t) = e^{−at} ∫_{−∞}^{t} e^{aτ} b x(τ) dτ

In this form, the calculation of ẏ(t) is a simple matter of the product rule and the fundamental theorem of calculus:

ẏ(t) = −a e^{−at} ∫_{−∞}^{t} e^{aτ} b x(τ) dτ + e^{−at} e^{at} b x(t) = −a y(t) + b x(t)

and the solution is verified. By inserting the appropriate unit-step function, we can write y(t) in the form

y(t) = ∫_{−∞}^{∞} b e^{−a(t−τ)} u(t−τ) x(τ) dτ

and it is clear that the differential equation describes an LTI system with unit-impulse response

h(t) = b e^{−at} u(t)

Remark It is interesting to show directly that this impulse response satisfies the differential equation (for all t) when x(t) = δ(t). The verification involves using generalized calculus to compute

ḣ(t) = −ab e^{−at} u(t) + b e^{−at} δ(t) = −ab e^{−at} u(t) + b δ(t)

Then it is easy to see that ḣ(t) + a h(t) = b δ(t).

From the form of the unit-impulse response, it follows that the LTI system described by the first-order linear differential equation is causal and is not memoryless. The system is stable if and only if a > 0. For a second-order, constant-coefficient, linear differential equation,

ÿ(t) + a₁ ẏ(t) + a₀ y(t) = b x(t)

and also for higher-order linear differential equations, the situation is similar to the first-order case. Such equations describe causal LTI systems. However, it is more difficult to compute the unit-impulse response, h(t), and to characterize stability properties in terms of the coefficients of the differential equation.
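The equivalence between the differential equation and convolution with h(t) = b e^{−at} u(t) can be seen numerically. This sketch compares a forward-Euler simulation of the equation against a discretized convolution, for arbitrary illustrative values of a, b, and the input:

```python
import numpy as np

# Forward-Euler simulation of y'(t) + a y(t) = b x(t), compared with
# discretized convolution against h(t) = b e^{-at} u(t); zero ICs.
a, b = 2.0, 3.0
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
x = np.sin(t)                        # an arbitrary input with x(0) = 0

y_euler = np.zeros_like(t)
for n in range(t.size - 1):
    y_euler[n + 1] = y_euler[n] + dt * (-a * y_euler[n] + b * x[n])

h = b * np.exp(-a * t)
y_conv = np.convolve(h, x)[:t.size] * dt
print(np.max(np.abs(y_euler - y_conv)) < 5e-2)  # True
```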

Right-Sided Setting
In other courses you may have encountered linear differential equations defined for t ≥ 0, with initial conditions specified at t = 0⁻. For example, in the first-order case, consider

ẏ(t) + a y(t) = b x(t), t ≥ 0

with y(0⁻) and x(t), t ≥ 0, specified. This setting can be embedded into our framework by considering the input signal to be zero for t < 0. Then, by causality, the output signal is zero for t < 0, and in particular, y(0⁻) must be zero. (Recall that if the input signal to an LTI system is zero for all t, then the output signal must be zero for all t.) Put another way, a constant-coefficient, linear differential equation with right-sided input signals describes an LTI system if and only if all initial conditions are zero.

Example Suppose a voltage signal, x(t), is applied to the terminals of the series R-C circuit shown below, and the output signal of interest, y(t), is the voltage across the capacitor, C. Kirchhoff's voltage law gives the circuit description as a first-order differential equation

ẏ(t) + (1/(RC)) y(t) = (1/(RC)) x(t), −∞ < t < ∞

This describes an LTI system with unit-impulse response

h(t) = (1/(RC)) e^{−t/(RC)} u(t)

If we are interested in the response of this system to sinusoidal inputs with frequency ω_o, we consider the input signal

x(t) = e^{jω_o t}, −∞ < t < ∞

and compute

H(ω_o) = ∫_{−∞}^{∞} h(τ) e^{−jω_o τ} dτ = ∫_{0}^{∞} (1/(RC)) e^{−τ/(RC)} e^{−jω_o τ} dτ = (−1/(1 + jRCω_o)) e^{−(1/(RC) + jω_o)τ} |_{0}^{∞} = 1/(1 + jRCω_o)

(Notice that the implicit assumption that R and C are positive is crucial in the evaluation of the integral. This is the stability requirement: with positive R and C, h(t) is absolutely integrable.) Thus the response to the phasor input signal is

y(t) = (1/(1 + jRCω_o)) e^{jω_o t}, −∞ < t < ∞

From this basic fact, we can extract the response to various sinusoidal input signals. For example, if the voltage input signal is

x(t) = cos(ω_o t) u(t)

then the steady-state response of the circuit can be written as

For example, if the voltage input signal is x(t) = cos(ωo t) u(t), then the steady-state response of the circuit can be written as

yss(t) = Re{ [1/(1 + jRCωo)] e^{jωo t} }
       = Re{ [1/√(1 + R²C²ωo²)] e^{j(ωo t − tan⁻¹(RCωo))} }
       = [1/√(1 + R²C²ωo²)] cos(ωo t − tan⁻¹(RCωo))

If the input frequency ωo is small, then the steady-state response is similar in amplitude to the input signal. On the other hand, if the input frequency is large, then the steady-state voltage across the capacitor will be small. The phase angle of the response, relative to the input signal, also depends on the frequency. Furthermore, if the input signal is a linear combination of sinusoids at various frequencies, then the steady-state response will contain the same set of frequencies, but with the amplitudes and phase angles influenced according to H(ωo) at the various values of ωo. This is the basis of frequency-selective filtering.

Exercises

1. Using the graphical method, compute and sketch y(t) = (h ∗ x)(t) for
(a) h(t) = e^{−t} u(t), x(t) = 2u(t) − 2u(t−1)
(b) h(t) = e^{−|t|}, x(t) = u(t−2)
(c) h(t) = e^{t} u(−t), x(t) = u(3−t)
(d) h(t) = e^{−t} u(t), x(t) = e^{t−1} u(t)
(e) h(t) = e^{−2t} u(t), x(t) = u(t)
(f) h(t) = e^{t}, x(t) = δ(t) − u(t)

2. An LTI system has the impulse response shown below. For an input signal of the form

x(t) = Σ_{k=0}^{∞} ak δ(t − kT)

sketch the output signal if
(a) T = 3, ak = 1 for all k ≥ 0
(b) T = 2, ak = 1 for all k ≥ 0
(c) T = 3, ak = (1/2)^k for all k ≥ 0

3. Using the analytical method, compute and sketch y(t) = (h ∗ x)(t) for
(a) h(t) = e^{t}, x(t) = δ(t) − u(t)

4. Express ỹ(t) = (h̃ ∗ x̃)(t) in terms of y(t) = (h ∗ x)(t) for the following signal choices:
(a) x̃(t) = x(t−1), h̃(t) = h(t−2)
(b) x̃(t) = x(t−2), h̃(t) = h(t+2)
(c) x̃(t) = x(2t), h̃(t) = h(−2t)
(d) x̃(t) = x(3t), h̃(t) = h(3t)
(e) x̃(t) = x(−t), h̃(t) = h(−t)

5. Suppose the continuous-time signal h(t) is zero outside the interval t0 ≤ t ≤ t1 and the signal x(t) is zero outside the interval t2 ≤ t ≤ t3. Show how to define t4 and t5 such that (h ∗ x)(t) = 0 outside the interval t4 ≤ t ≤ t5.

6. Determine if the LTI systems described by the following unit-impulse responses are stable and/or causal. Justify your answers.
(a) h(t) = e^{−2t} u(t+3)
(b) h(t) = e^{3t} u(−4−t)
(c) h(t) = e^{−4|t|}
(d) h(t) = e^{t} u(t−3)

7. Determine if the following statements about LTI systems are true or false. Justify your answers.
(a) If h(t) is right sided and bounded, then the system is stable.
(b) If h(t) is periodic and not identically zero, then the system is unstable.
(c) The cascade connection of a causal LTI system and a non-causal LTI system is always noncausal.
(d) A memoryless LTI system is always stable.

8. Consider a system described by

y(t) = ∫_{−∞}^{t} e^{−2(t−τ)} x(τ−1) dτ

(a) Show that this is an LTI system.
(b) Compute the unit-impulse response of the system.
(c) Compute the response of the system to x(t) = u(t) − u(t−1) by convolution, and then by using the fact that the unit-step response of an LTI system is the running integral of the unit-impulse response.

) 81 . and R = 4. L = 4 . For the R-L circuit shown below. For the LTI system with unit-impulse response h(t ) = et u (t ) . (c) the response to x(t ) = 1 . compute the response to the input signal x(t ) = e3t cos(2t ) . 10. compute (a) the steady-state response to the input signal x(t ) = 3cos(t )u (t ) . with input and output current signals as shown.9. (Hint: We did not discuss all the eigenfunction properties that LTI systems have. (b) the steady-state response to x(t ) = 2sin(3t )u (t ) .

7.1 Introduction to CT Signal Representation

A fundamental idea in signal analysis is to represent signals in terms of linear combinations of "basis" signals. That is, given a signal x(t), we choose a set of basis signals φ0(t), φ1(t), …, φK−1(t) that are relatively simple, have useful properties, and are well suited to the class of signals to be represented. Then we compute scalar coefficients a0, a1, …, aK−1 such that

x(t) ≈ a0 φ0(t) + a1 φ1(t) + ⋯ + aK−1 φK−1(t)

There are many reasons for this approach. Certainly it makes sense for signal processing by LTI systems, particularly if the basis signals have nice properties as input signals to such systems. Also, once a set of basis signals has been selected, storage or transmission of a signal can be accomplished by storage or transmission of the coefficients a0, a1, …, aK−1.

There are many basic questions to be addressed in developing this approach: What properties of basis sets would be useful? How many basis signals are needed? What is the appropriate nature of the approximation "≈"? Answers to these questions are developed in some detail in the next few sections. Those wishing to omit the general discussion can proceed directly to Section 8.1, where the representation of main interest is introduced in an ad-hoc fashion.

Some examples can motivate the discussion.

Example Recalling Taylor's formula, consider the basis set of three signals that are zero outside the interval −1 ≤ t ≤ 1,

φ0(t) = 1, φ1(t) = t, φ2(t) = t²/2, −1 ≤ t ≤ 1

and the signal

x(t) = e^{−t}, −1 ≤ t ≤ 1, and x(t) = 0 elsewhere

(Indeed, it would be simpler to dispense with our default domain of definition of signals and simply work on the interval −1 ≤ t ≤ 1 as the domain of definition. We retain the default mainly for emphasis.) We can choose a0 = x(0) = 1, a1 = x'(0) = −1, a2 = x''(0) = 1 to obtain the representation

x(t) ≈ φ0(t) − φ1(t) + φ2(t) = 1 − t + t²/2, −1 ≤ t ≤ 1, and zero elsewhere

Thus Taylor's formula fits the framework we are considering, the computation of the coefficients is rather simple, and we have some notion of the sense of approximation: the approximation will be good for values of t close to t = 0, and exact only at zero. Of course the class of signals to be represented in this way must be twice differentiable at t = 0. Furthermore, if we want to refine the representation by adding additional basis signals, for example φ3(t) = t³/(3!), it is obvious how to compute the coefficient a3, though the signal must be
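The Taylor-basis example can be reproduced numerically. This small NumPy sketch evaluates x(t) = e^{−t} and the representation 1 − t + t²/2 on a grid over [−1, 1] and confirms that the error vanishes at t = 0 while the worst error occurs at the interval ends.

```python
import numpy as np

# Taylor-basis representation of x(t) = exp(-t) on [-1, 1]:
# a0 = x(0) = 1, a1 = x'(0) = -1, a2 = x''(0) = 1.
t = np.linspace(-1.0, 1.0, 2001)
x = np.exp(-t)
approx = 1.0 - t + t**2 / 2.0

err_at_zero = abs(x[1000] - approx[1000])   # t[1000] is exactly 0
worst_err = np.max(np.abs(x - approx))      # largest error, at the ends of the interval
print(err_at_zero, worst_err)
```

The worst-case error, about 0.22 at t = −1, illustrates the "good near t = 0, exact only at zero" nature of this approximation.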

Also. Recalling polynomial interpolation. aK −1 are computed to minimize the integral square error. φ1 (t ). but suppose we now require that the approximation have zero error at the three values t = −1. first three coefficients in the representation do not change.186 t + 0. say φ3 (t ) = t 3 / 3! . t = −1.thrice differentiable at t = 0 . l ≠ k . is the following. 7.063 a0 = 1.186φ1 (t ) + 1. a2 = = 1.2 Orthogonality and Minimum ISE Representation A popular choice of the nature of the approximation in signal representation. … . we proceed by setting x(t ) = a0 φ0 (t ) + a1 φ1 (t ) + a2 φ2 (t ).186. suppose the coefficients a0 . 1 . and we require zero error at a fourth point to be consistent with polynomial interpolation capabilities. Example Consider the same signal and basis set. we require that the basis set satisfy the condition ∞ −∞ ∞ ⎛ K −1 2 ∫ φl (t ) φk (t ) dt = 0. An advantage of the setup is that in computing the fourth coefficient. Both of these features are sensible for signal representation. then the representation must be recomputed from the beginning as the values of the first three coefficients will change. else ⎩ This representation is perhaps better for some purposes than the Taylor’s formula. k = 0. the representation error at each point in time is positively weighted in the process of minimization. … .063φ2 (t ) ⎧1 − 1. … . K − 1 83 . Given an energy signal x(t ) and a set of basis energy signals φ0 (t ). a1. ⎞ I = ∫ ⎜ x(t ) − ∑ ak φk (t ) ⎟ dt k =0 ⎠ −∞ ⎝ Because of the square. and larger errors are more severely penalized. φ K −1 (t ) . a1 = e 2e and the resulting representation is x(t ) ≈ φ0 (t ) − 1.503 t 2 . though it is inconvenient to have to solve equations for the coefficients. − 1 ≤ t ≤ 1 ⎪ =⎨ ⎪ 0. and the approximation we focus on in the sequel. 1 e = a0 − a1 + a2 / 2 1 = a0 e−1 = a0 + a1 + a2 / 2 This yields three equations in three unknowns: Solving this set of equations gives e 2 − 2e + 1 1 − e2 = −1. 
• Orthogonality Partly to ease the problem of computing the minimizing coefficients. 0. 0. if a fourth basis signal is added to the set.

A basis set that satisfies this condition is called orthogonal. It is notationally convenient to denote the energy of the k-th basis signal by

Ek = ∫_{−∞}^{∞} φk²(t) dt, k = 0, 1, …, K−1

If all the Ek are unity in an orthogonal basis set, the basis set is called orthonormal. It is important to emphasize that orthogonality and orthonormality are properties of basis sets. Also, if the time interval of interest is changed, the orthogonality condition might not hold for the new interval. If a new basis signal is added to an orthogonal set, the check that the new set is orthogonal involves all the signals in the set.

Example The basis set used in the examples in Section 7.1,

φ0(t) = 1, φ1(t) = t, φ2(t) = t²/2, −1 ≤ t ≤ 1

with all signals zero outside this interval, is not orthogonal since, for example,

∫_{−∞}^{∞} φ0(t) φ2(t) dt = ∫_{−1}^{1} t²/2 dt ≠ 0

On the other hand, the basis set of rectangular pulses shown below is orthogonal. This is clear because the basis signals are non-overlapping in the obvious sense. Furthermore, the basis set is orthonormal since each φk(t) is a unit-area rectangular pulse. (Of course, overlapping signals also can be orthogonal, but it is usually not obvious!)

• Minimum ISE

Given x(t) and an orthogonal basis set φ0(t), …, φK−1(t) with basis signal energies E0, E1, …, EK−1, the minimum ISE coefficients are given by

ak = (1/Ek) ∫_{−∞}^{∞} x(t) φk(t) dt, k = 0, 1, …, K−1

To prove this result, first expand the quadratic integrand in I and distribute the integral over the sum of terms. One consequence of orthogonality is that when the integrand in I is expanded, a large number of cross-terms disappear. Using orthogonality and the notation for basis signal energy, this gives

I = ∫_{−∞}^{∞} x²(t) dt − 2 Σ_{k=0}^{K−1} [ ∫_{−∞}^{∞} x(t) φk(t) dt ] ak + Σ_{k=0}^{K−1} Ek ak²

To minimize I, set the derivative of I with respect to each coefficient ak to zero:

∂I/∂ak = 0, k = 0, 1, …, K−1

yielding

−2 ∫_{−∞}^{∞} x(t) φk(t) dt + 2 Ek ak = 0, k = 0, 1, …, K−1

This expression easily rearranges to the claimed formula for the coefficients. To show that this indeed provides a minimum, it can be shown that the matrix of second partials is positive definite. Because I is a quadratic polynomial in the coefficients, this is quite easy, an exercise left to the reader.

Remark Orthogonality is such a useful property of basis sets that non-orthogonal sets are seldom encountered. Indeed, families of orthogonal basis sets of very different natures, suitable for representing widely varying classes of signals, have been discovered and cataloged. These sets often are named after the discoverer.

Example Consider again the signal x(t) = e^{−t}, −1 ≤ t ≤ 1, where we abandon the artifice of defining the given signal to be zero for |t| > 1, and simply work on the specified interval. This time we choose the first three Legendre basis signals:

φ0(t) = 1, φ1(t) = t, φ2(t) = (3t² − 1)/2, −1 ≤ t ≤ 1

and leave verification of orthogonality as an exercise, as well as verification that

E0 = 2, E1 = 2/3, E2 = 2/5

The minimum integral-squared-error representation is specified by the coefficients

a0 = (1/2) ∫_{−1}^{1} x(t) φ0(t) dt = (1/2) ∫_{−1}^{1} e^{−t} dt = (e − e⁻¹)/2 = 1.18
a1 = (3/2) ∫_{−1}^{1} x(t) φ1(t) dt = (3/2) ∫_{−1}^{1} t e^{−t} dt = −3e⁻¹ = −1.10
a2 = (5/2) ∫_{−1}^{1} x(t) φ2(t) dt = (5/2) ∫_{−1}^{1} ((3t² − 1)/2) e^{−t} dt = (5/2)(e² − 7)/e = 0.36

yielding the representation

x(t) ≈ 1.18 φ0(t) − 1.10 φ1(t) + 0.36 φ2(t) = 1.18 − 1.10 t + 0.18 (3t² − 1), −1 ≤ t ≤ 1

where the approximation is understood to be minimum integral-squared error using the first three Legendre basis signals. This representation can be refined by adding additional basis signals; the fourth Legendre basis signal is

φ3(t) = (5/2)t³ − (3/2)t, with E3 = 2/7

and if orthogonality of the basis set is preserved we need only compute the new coefficient

a3 = (7/2) ∫_{−1}^{1} x(t) φ3(t) dt
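The Legendre coefficients for this example can be checked numerically. The sketch below integrates x(t) φk(t) over [−1, 1] with a simple trapezoid rule and divides by the basis energies Ek = 2/(2k+1).

```python
import numpy as np

def integrate(f, t):
    # composite trapezoid rule
    return np.sum((f[:-1] + f[1:]) / 2 * np.diff(t))

# Legendre-basis coefficients of x(t) = exp(-t) on [-1, 1],
# with E0 = 2, E1 = 2/3, E2 = 2/5.
t = np.linspace(-1.0, 1.0, 200001)
x = np.exp(-t)
a0 = integrate(x * np.ones_like(t), t) / 2.0
a1 = integrate(x * t, t) / (2.0 / 3.0)
a2 = integrate(x * (3 * t**2 - 1) / 2, t) / (2.0 / 5.0)
print(round(a0, 2), round(a1, 2), round(a2, 2))
```

The printed values agree with the closed forms (e − e⁻¹)/2, −3e⁻¹, and (5/2)(e² − 7)/e.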

Then, skipping the actual evaluation of the coefficient a3, the resulting representation is

x(t) ≈ 1.18 φ0(t) − 1.10 φ1(t) + 0.36 φ2(t) + a3 φ3(t) = 1.18 − 1.10 t + 0.18(3t² − 1) + (a3/2)(5t³ − 3t), −1 ≤ t ≤ 1

Example We could also represent the signal in the preceding example using the orthonormal basis of rectangular pulses considered before, as shown below. In this case the minimum integral-squared-error coefficients are given by

a0 = ∫_{−∞}^{∞} x(t) φ0(t) dt = √(3/2) ∫_{−1}^{−1/3} e^{−t} dt = √(3/2) (e − e^{1/3})
a1 = √(3/2) ∫_{−1/3}^{1/3} e^{−t} dt = √(3/2) (e^{1/3} − e^{−1/3})
a2 = √(3/2) ∫_{1/3}^{1} e^{−t} dt = √(3/2) (e^{−1/3} − e^{−1})

The nature of the resulting representation is shown below. Note that in this case it is not clear how to add basis signals to maintain orthonormality and improve the representation. Also, it is clear that this basis set is not particularly well suited to smoothly varying signals.

Example Consider the first three basis signals in the Walsh basis set, as shown below. These typically are defined on the time interval 0 ≤ t ≤ 1, as is the signal to be represented, and we assume the basis signals are zero outside this interval. It is straightforward to verify that this is an orthonormal basis set by time slicing the integrals involved. For example,

∫_{0}^{1} φ0(t) φ1(t) dt = ∫_{0}^{1/2} 1 dt + ∫_{1/2}^{1} (−1) dt = 0

Using this basis set to represent the signal x(t) = e^{−t}, 0 ≤ t ≤ 1, yields the coefficients

a0 = ∫_{0}^{1} e^{−t} dt = 1 − e⁻¹ = 0.632
a1 = ∫_{0}^{1/2} e^{−t} dt − ∫_{1/2}^{1} e^{−t} dt = 1 − 2e^{−1/2} + e⁻¹ = 0.155
a2 = ∫_{0}^{1/4} e^{−t} dt − ∫_{1/4}^{3/4} e^{−t} dt + ∫_{3/4}^{1} e^{−t} dt = 1 − 2e^{−1/4} + 2e^{−3/4} − e⁻¹ = 0.019

and the representation shown below. In this case there is a natural continuation of the basis set that will refine the staircase approximation evident in the representation.

7.3 Complex Basis Signals

Even though we are interested in representing real signals, it turns out that complex basis signals can be mathematically convenient. A suitable notation for a complex basis set is to number the basis signals as

φ−K(t), φ−(K−1)(t), …, φ−1(t), φ0(t), φ1(t), …, φK(t)

where φ0(t) is real, and we require that conjugate basis signals be included in the set:

φ−k(t) = φk*(t), k = 1, 2, …, K

These 2K+1 basis signals should be considered on the same interval as the signals to be represented. For the purpose of exposition we simply choose the default interval, −∞ < t < ∞, with everything set to zero outside this interval. As we will see below, the condition that conjugate basis signals be included in the set yields the pleasing result that the approximation to a real signal is real.

The appropriate definition of integral-squared error must account for the possibility of a complex representation. Thus we use the magnitude squared, instead of the square, in the integrand, and write

I = ∫_{−∞}^{∞} | x(t) − Σ_{k=−K}^{K} ak φk(t) |² dt
  = ∫_{−∞}^{∞} ( x(t) − Σ_{k=−K}^{K} ak φk(t) ) ( x(t) − Σ_{k=−K}^{K} ak φk(t) )* dt
  = ∫_{−∞}^{∞} ( x(t) − Σ_{k=−K}^{K} ak φk(t) ) ( x(t) − Σ_{k=−K}^{K} ak* φk*(t) ) dt

It turns out that the appropriate definition of orthogonality in this case is the condition

∫_{−∞}^{∞} φl(t) φk*(t) dt = 0, l ≠ k

While we will not justify it in detail, it is clear that this condition eliminates cross-terms among the basis signals in expanding the quadratic integrand of I. Also, we let

Ek = ∫_{−∞}^{∞} |φk(t)|² dt = ∫_{−∞}^{∞} φk(t) φk*(t) dt, k = 0, ±1, …, ±K

With this setup, it can be shown that the coefficients a−K, …, aK that minimize the integral-squared error for a real signal x(t) are given by

ak = (1/Ek) ∫_{−∞}^{∞} x(t) φk*(t) dt, k = 0, ±1, …, ±K

Remarks
• The denominator of this expression, the real, nonnegative number Ek, typically is precomputed for standard basis sets.
• Since x(t) is real, the complex conjugate of the k-th coefficient satisfies

ak* = (1/Ek) ( ∫_{−∞}^{∞} x(t) φk*(t) dt )* = (1/Ek) ∫_{−∞}^{∞} x(t) φk(t) dt = (1/Ek) ∫_{−∞}^{∞} x(t) φ−k*(t) dt = a−k

Therefore

a−k φ−k(t) = ak* φk*(t) = [ ak φk(t) ]*

That is, though the signal x(t) is assumed to be real and the coefficients may be complex, this property implies that the complex conjugate of each term in the representation also is in the representation. Thus the minimum integral-square-error approximation of a real signal is a real signal. Furthermore, only K+1 coefficients, not 2K+1, need to be computed explicitly, and it is perfectly sensible to write

x(t) ≈ Σ_{k=−K}^{K} ak φk(t)

where the approximation is in the sense of minimum integral-square error.

7.4 DT Signal Representation

Just as in the continuous-time case, it is convenient to represent discrete-time signals as linear combinations of specified basis signals,

φ0[n], φ1[n], …, φM[n]

where we assume that the signal to be represented and all the basis signals are defined on the same sample range, with the default range −∞ < n < ∞. Though we are interested in representing real signals, sometimes complex basis signals again are mathematically convenient, and we require that conjugates be included. However, in the discrete-time case it is not traditional to adopt the same numbering system as the continuous-time case. Since we require that the basis set be self-conjugate, if φk[n] is complex, then for some m we

To minimize I it is convenient to write the (possibly) complex coefficients am in rectangular form. Since we require that the basis set be self conjugate. though the signal x(t ) is assumed to be real. then the corresponding coefficients in the minimum SSE representation satisfy ∗ am = n =−∞ n =−∞ ∞ ∞ ∑ x[n]φm [n] ∗ φm [n]φm [n] ∑ ∞ = n =−∞ n =−∞ ∗ ∑ x[n]φk [n] ∗ φk [n]φk [n] ∑ ∞ = ak The resulting representation 89 . M This expression should appear natural from the continuous-time case Further computation. Orthogonality simplifies this process considerably. in the discrete-time case it is not traditional to adopt the same numbering system as the continuous-time case. A very convenient property of basis sets is orthogonality. However. and the result is that the minimizing coefficients are given by am = n =−∞ n =−∞ ∗ ∑ x[n]φm [n] ∗ φm [n]φm [n] ∞ ∑ ∞ . the basis set is called orthonormal if the following condition also holds: n =−∞ ∗ ∑ φm [n]φm [n] = 1 . 1.… . k ≠ m ∞ Furthermore. 1.) The analysis of this problem proceeds in a manner very similar to the continuous-time case. that is for each k there is an m such that ∗ φk [n] = φm [n] . shows that the second-derivative test for minimality is satisfied. m = 0. which in the discrete-time case is defined by the condition n =−∞ ∞ ∗ ∑ φk [n]φm [n] = 0 . then magnitude signs are superfluous. assuming that none of the basis signals is identically zero. a1.… .… . M Clearly this quantity must be a positive number. Once this important condition is addressed. A meaningful objective is to choose coefficients a0 . and set the derivatives with respect to each real and imaginary part to zero. also omitted. and the condition that the positive number be unity is indeed a normalization of the basis signals. m = 0. 
aM to minimize the sum-squared error I = ∑ | x[n] − ∑ amφm [n] |2 n =−∞ m=0 ∞ M (Magnitude signs are used for the summand because at this point we have not specified that the representation be real.∗ have φm [n] = φk [n] for all n . expand the expression for I.
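The discrete-time coefficient formula can be exercised on a small example. The pair of length-4 signals below is an illustrative orthogonal basis chosen for this sketch (it is not a basis from the notes); the code computes the minimum-SSE coefficients and confirms that perturbing a coefficient cannot reduce the sum-squared error.

```python
import numpy as np

# Illustrative orthonormal pair on n = 0, 1, 2, 3 (an assumption for this sketch).
x = np.array([4.0, 2.0, 1.0, -1.0])
phi0 = np.array([1.0, 1.0, 1.0, 1.0]) / 2.0
phi1 = np.array([1.0, 1.0, -1.0, -1.0]) / 2.0
assert np.isclose(np.sum(phi0 * np.conj(phi1)), 0.0)   # orthogonality check

a0 = np.sum(x * np.conj(phi0)) / np.sum(phi0 * np.conj(phi0))
a1 = np.sum(x * np.conj(phi1)) / np.sum(phi1 * np.conj(phi1))

def sse(c0, c1):
    return np.sum(np.abs(x - c0 * phi0 - c1 * phi1) ** 2)

print(a0, a1, sse(a0, a1))
```

For this data the formula gives a0 = a1 = 3, and any perturbation of either coefficient increases the SSE, as the orthogonality argument predicts.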

−T ≤ t ≤ T . is ϕ k (t ) . 90 . is the new basis set orthogonal on −∞ < t < ∞ .… . On what interval. The fourth Walsh basis signal is defined as shown below 2 1 2 For the signal x(t ) = e−t . 2.… . xod (t ) = Od{x(t )} form an orthogonal basis set over any interval of the form −T ≤ t ≤ T . 3. k = 0. (a) Show that for any x(t ) . k = 0. φ1 (t ) = t . and defined to be zero outside this interval: φ (t ) = sin( π t ) . compute and sketch the (a) minimum integral square error representation using φ0 (t ) .… . if any. If we change the third signal to φ2 (t ) = t 2 . − 1 ≤ t ≤ 1 . K − 1 an orthogonal set? k = 0.… . On what interval time interval. φ (t ) = r (t ) − 2r (t − 2) 0 Is this an orthogonal set on the interval 0 ≤ t ≤ 4 ? 4. xev (t ) = Ev{x(t )} . (c) minimum integral square error representation using the first 4 Walsh basis signals. φ (t ) = u (t − 1) − u (t − 3) . (b) minimum integral square error representation using φ2 (t ) . k = 0. is an orthogonal basis set on the interval −∞ < t < ∞ . K − 1 .a0φ0 [n] + a1φ1[n] + + aM φ M [ n ] is a real signal since for every k there is an m such that the corresponding summands satisfy ∗ ∗ ak φk [n] = amφm [n] and the sums of these pairs of terms is real. is ϕ k (t ) . Let ϕ k (t ) = φk (3t − 1) . φ1 (t ) . K − 1 an orthogonal set? (c) Suppose φk (t ) . Let ϕ k (t ) = φk (t / 3) . K − 1 . 0 ≤ t ≤ 1 . φ2 (t ) = 3 t 2 − 1 2 2 with the signals zero outside of this interval. Show that the basis set that is defined on −1 ≤ t ≤ 1 by φ0 (t ) = 1 .… . φ3 (t ) . if any. (b) Suppose φk (t ) . Consider the set of signals defined as shown for 0 ≤ t ≤ 4 .… . Exercises 1. k = 0. K − 1 is an orthogonal basis set on the interval 0 ≤ t ≤ 1 . K − 1 is an orthogonal basis set on the interval −1 ≤ t ≤ 1 . k = 0.

5. Determine values of the coefficients a, b, and c so that the signals

φ0(t) = a e^{−t}, φ1(t) = b e^{−t} + c e^{−2t}

form an orthonormal basis set on the time interval 0 ≤ t < ∞.
This gives. t1 +To t1 − jlω t ∫ x(t )e o dt = ∑ X k k =−∞ ∞ t1 +To t1 j ( k − l )ωo t dt ∫ e Using the easily-verified fact that t1 +To 1 j ( k − l )ωo t dt = ⎨ ∫ e ⎩To . visit Listen to Fourier Series We offer two approaches to developing the subject mathematically. multiply both sides by the complex − jlωo t exponential signal e . we provide a shortcut. periodic signal x(t ) . the Fourier series representation involves writing a periodic signal as a linear combination of harmonically-related sinusoids. and another is the nature of approximation when a truncated series is used: x(t ) ≈ ∑ X k e jkωo t k =− K K 92 . k ≠ l we obtain Xl = 1 To t1 +To t1 − jlω t ∫ x(t )e o dt This shortcut provides a formula for the Fourier series coefficients of a periodic signal. This is a surprising yet familiar notion.1 CT Fourier Series Informally. ωo = 2π / To . the basic fact is that a real. The coefficients X k in general are complex. k = l t ⎧ 0. can be written as ∞ x(t ) = ∑ X k e jkωo t k =−∞ where ωo is the fundamental frequency of x(t ) . For those who have skipped over the general introduction to signal representation in Chapter 7. For an introduction based on audio signals. One of these is the nature of convergence of the infinite series. • Shortcut It is often more convenient to represent a periodic signal as a linear combination of harmonically-related complex exponentials. In these terms.Notes for Signals and Systems 8. For those interested in a deeper understanding based on the notions in Chapter 7. rather than trigonometric functions. we present the Fourier series as a special case of an orthogonal representation using a particular set of complex basis signals. To see how to determine these coefficients. with fundamental period To . though a number of issues and questions are left aside. where l is an integer and then integrate over one period. for any value of t1 .

We note here only that the approximation is real. the Fourier series representation for real. we compute. the signals φ2 (t ) and φ−2 (t ) have fundamental period To / 2 . since the coefficients obey the conjugacy relation X l∗ = X −l and thus the complex congugate of the k = l term is the k = −l term. orthogonal basis set. k = 0. for l ≠ k . and so on. Suppose x(t ) is real and periodic with fundamental period To and fundamental frequency ωo = 2π / To . and any t1 . for l = k . 93 . This complex-form Fourier series is at first less intuitive than real forms. since φ0 (t ) = 1 and. for nonzero k. φk (t + To ) = e jkωo (t +To ) = e jkωo t e To = φk (t )e jk 2π = φk (t ) Furthermore.… . t1 +To ∗ φl (t )φk (t ) dt = t1 +To t1 jlωo t − jkωo t t1 +To t1 j (l − k )ωo t dt ∫ e jk 2π To ∫ ∫ e e dt = t1 = t +T 1 e j (l − k )ωo t 1 o j (l − k )ωo t1 j ( l − k )ωot1 j ( l − k )ωot1 =e =e j (l − k )ωo ⎣ j (l − k )ωo ⎣ ⎡e j (l − k )ωoTo − 1⎤ ⎦ ⎡e j (l − k )2π − 1⎤ ⎦ =0 Also. • The basis set is orthogonal on any time interval of length To . where the basis signals are periodic with the same period as the signal being represented. but it offers significant mathematical advantages.) • Every basis signal has period To . the signals φ1 (t ) and φ−1 (t ) have fundamental period To . we often move negative signs around for convenience of interpretation. ∗ φk (t ) = e jkωo t ( ) ∗ = e− jkωo t = e j ( − k )ωo t = φ− k (t ) (As in the exponent in this calculation. • The basis set is self-conjugate. To show this. We then choose a basis set of harmonically related phasors according to φk (t ) = e jkωo t . • The Fourier Basis Set From a mathematical viewpoint. periodic signals can be based on minimum integral-square-error representation using a complex. which is included in the sum. ± K This basis set has several properties. ± 1 .

for any integer k,

Ek = ∫_{t1}^{t1+To} φk(t) φk*(t) dt = ∫_{t1}^{t1+To} e^{jkωo t} e^{−jkωo t} dt = ∫_{t1}^{t1+To} 1 dt = To

so that the basis set is orthonormal when To = 1.

Using these properties, we can compute the minimum integral-square-error representation for x(t) over one fundamental period. Adopting a special notation for the coefficients, and using the general formula for coefficients that minimize integral square error, let

Xk = (1/To) ∫_{t1}^{t1+To} x(t) φk*(t) dt = (1/To) ∫_{t1}^{t1+To} x(t) e^{−jkωo t} dt

This gives the representation

x(t) ≈ Σ_{k=−K}^{K} Xk φk(t) = Σ_{k=−K}^{K} Xk e^{jkωo t}

where the approximation is understood to be in the sense of minimum integral square error using 2K+1 basis signals. Notice again that the representation is real, since

Xk* = ( (1/To) ∫_{t1}^{t1+To} x(t) φk*(t) dt )* = (1/To) ∫_{t1}^{t1+To} x(t) φk(t) dt = (1/To) ∫_{t1}^{t1+To} x(t) φ−k*(t) dt = X−k

Along with the relation φk*(t) = φ−k(t), this implies that the complex conjugate of each term, Xk e^{jkωo t}, in the representation is also included in the representation.

Then, since the signal and every term in the representation repeat with period To, we have the minimum integral-square-error representation over any number of fundamental periods, and in fact over the interval −∞ < t < ∞ in a reasonable sense. Of course, if there is nonzero integral square error over one period, then there will be infinite integral square error over the infinite interval.

Example The periodic signal shown below is a repeated version of a signal considered in earlier examples:

Clearly To = 3, and therefore ωo = 2π/3. The value of To fixes the fundamental frequency, and also fixes the basis signals. Choosing to integrate over the fundamental period −1 ≤ t ≤ 2, on which x(t) = e^{−t} for −1 ≤ t ≤ 1 and x(t) = 0 for 1 < t < 2, the coefficients are given by

Xk = (1/To) ∫_{−1}^{2} x(t) e^{−jkωo t} dt = (1/3) ∫_{−1}^{1} e^{−t} e^{−jk(2π/3)t} dt
   = (1/3) ∫_{−1}^{1} e^{−(1 + jk2π/3)t} dt = [ e^{(1 + jk2π/3)} − e^{−(1 + jk2π/3)} ] / ( 3(1 + jk2π/3) )

These coefficients are used in the expression

x(t) = Σ_{k=−∞}^{∞} Xk e^{jkωo t}

We should not expect that the Fourier series coefficients will be simple or pretty in all cases! However, in some examples a little additional work yields an improved formula.

Example Consider the periodic rectangular-pulse signal shown below, where of course the pulse width and fundamental period satisfy T1 < To/2. The value of To fixes the fundamental frequency ωo = 2π/To, and also fixes the basis signals. We compute the minimum ISE representation on the one-period time interval −To/2 < t ≤ To/2. It is necessary to separate out the k = 0 case, to obtain

X0 = (1/To) ∫_{−T1}^{T1} 1 dt = 2T1/To
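The closed-form coefficients of the repeated-exponential example can be verified by direct numerical integration over the nonzero part of the period:

```python
import numpy as np

def integrate(f, t):
    # composite trapezoid rule, works for complex-valued samples
    return np.sum((f[:-1] + f[1:]) / 2 * np.diff(t))

# x(t) = exp(-t) on -1 <= t <= 1, zero on 1 < t < 2, period To = 3:
# X_k = (e^{1 + j k wo} - e^{-(1 + j k wo)}) / (3 (1 + j k wo)), wo = 2*pi/3.
To = 3.0
wo = 2 * np.pi / To

def X_formula(k):
    s = 1.0 + 1j * k * wo
    return (np.exp(s) - np.exp(-s)) / (3.0 * s)

t = np.linspace(-1.0, 1.0, 200001)   # the only part of the period where x is nonzero
X0_num = integrate(np.exp(-t), t) / To
X2_num = integrate(np.exp(-t) * np.exp(-1j * 2 * wo * t), t) / To
print(abs(X0_num - X_formula(0)), abs(X2_num - X_formula(2)))
```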

Then, for k ≠ 0,

Xk = (1/To) ∫_{−T1}^{T1} e^{−jkωo t} dt = [ −e^{−jkωo t} / (jkωoTo) ]_{−T1}^{T1} = ( e^{jkωoT1} − e^{−jkωoT1} ) / (jkωoTo)
   = (1/(kπ)) ( e^{jkωoT1} − e^{−jkωoT1} ) / (2j) = sin(kωoT1)/(kπ) = 2 sin(kωoT1)/(kωoTo)

This can also be expressed in terms of the fundamental period, To, as

Xk = sin(k 2πT1/To) / (kπ)

It turns out that this expression for the coefficients usually is presented in terms of a standard mathematical function, sinc(θ), defined by

sinc(θ) = sin(πθ)/(πθ)

Minor rewriting of Xk gives, again for k ≠ 0,

Xk = sin(k 2πT1/To)/(kπ) = (2T1/To) · sin(π 2kT1/To)/(π 2kT1/To) = (2T1/To) sinc(k 2T1/To)

Furthermore, a simple application of L'Hospital's rule shows that sinc(0) = 1. Therefore this expression for the coefficients is suitable for all values of k, and we can write

x(t) = Σ_{k=−∞}^{∞} (2T1/To) sinc(k 2T1/To) e^{jk(2π/To)t}

Alternately, we can write this representation in terms of the fundamental frequency as

x(t) = Σ_{k=−∞}^{∞} (ωoT1/π) sinc(k ωoT1/π) e^{jkωo t}

In any case, as a matter of good style, Fourier series coefficients that are in fact real should always be manipulated into real form!

8.2 Real Forms, Spectra, and Convergence

• Real Forms

The complex-form Fourier series

x(t) ≈ Σ_{k=−K}^{K} Xk e^{jkωo t}

can be rewritten in at least three different ways. First, we can group complex-conjugate terms to write

x(t) = X0 + Σ_{k=1}^{K} [ Xk e^{jkωo t} + X−k e^{−jkωo t} ] = X0 + Σ_{k=1}^{K} [ Xk e^{jkωo t} + ( Xk e^{jkωo t} )* ] = X0 + Σ_{k=1}^{K} 2 Re{ Xk e^{jkωo t} }

Expressing Xk in polar form as Xk = |Xk| e^{j∠Xk} yields

x(t) = X0 + Σ_{k=1}^{K} 2 Re{ |Xk| e^{j(kωo t + ∠Xk)} } = X0 + Σ_{k=1}^{K} 2 |Xk| cos(kωo t + ∠Xk)

This is called the cosine trigonometric form. If we write Xk in rectangular form as Xk = Re{Xk} + j Im{Xk}, then

x(t) = X0 + Σ_{k=1}^{K} 2 Re{ ( Re{Xk} + j Im{Xk} ) e^{jkωo t} } = X0 + Σ_{k=1}^{K} 2 [ Re{Xk} cos(kωo t) − Im{Xk} sin(kωo t) ]

This is called the sine-cosine form of the Fourier series. A third form follows by again writing Xk in polar form. Using the facts that X0 is real, and the real part of a sum is the sum of the real parts, this yields

x(t) = Re{ X0 + Σ_{k=1}^{K} 2 |Xk| e^{j(kωo t + ∠Xk)} }

That is, x(t) is expressed as the real part of a sum of harmonic phasors.

Each of these forms is useful in particular situations, but the complex-form Fourier series offers a general mathematical convenience and economy that is especially useful for electrical engineers. Therefore we will focus on this form, though alternate interpretations of the topics we discuss are available for the other forms as well.

• Spectra

The magnitude spectrum of a To-periodic signal x(t) is a line chart showing |Xk| vs. kωo on a real frequency ω axis, or vs. k on a real axis. The phase spectrum of x(t) is a line chart showing ∠Xk vs. kωo on a real frequency ω axis, or vs. k on a real axis. When the signal is such that Xk is real for all k, we often simply plot a line chart of Xk, more often vs. kωo, and this is referred to as an amplitude spectrum.

Example For the periodic rectangular pulse signal, we computed the Fourier series coefficients

Xk = (2T1/To) sinc(k 2T1/To)

where the sinc function is defined as sinc(θ) = sin(πθ)/(πθ).
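The equivalence of the complex form and the cosine trigonometric form can be demonstrated numerically. The sketch below uses arbitrary random coefficients X1, …, XK (an assumption for illustration) together with X−k = Xk*, and confirms that the complex-form sum is exactly real and matches X0 + Σ 2|Xk| cos(kωo t + ∠Xk).

```python
import numpy as np

# Arbitrary illustrative coefficients with the conjugacy relation X_{-k} = conj(X_k).
rng = np.random.default_rng(0)
K = 5
Xpos = rng.normal(size=K) + 1j * rng.normal(size=K)   # X_1 .. X_K
X0 = 0.7
wo = 2 * np.pi / 3
t = np.linspace(0.0, 3.0, 1000)

complex_form = X0 + sum(Xpos[k - 1] * np.exp(1j * k * wo * t) +
                        np.conj(Xpos[k - 1]) * np.exp(-1j * k * wo * t)
                        for k in range(1, K + 1))
cosine_form = X0 + sum(2 * np.abs(Xpos[k - 1]) * np.cos(k * wo * t + np.angle(Xpos[k - 1]))
                       for k in range(1, K + 1))

print(np.max(np.abs(complex_form.imag)),
      np.max(np.abs(complex_form.real - cosine_form)))
```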

This function provides an envelope for the Fourier series coefficients, and leads to an amplitude spectrum for the rectangular pulse train. First we show a plot of the envelope (2T1/To) sinc(θ), where of course 2T1/To < 1, and then a plot of the values of the Xk using this envelope. Translating the horizontal axis into a frequency axis gives the amplitude spectrum. From this amplitude spectrum we can easily plot the magnitude and phase spectra. Since |X−k| = |Xk*| = |Xk|, the magnitude spectrum is symmetric about the vertical axis, and since ∠X−k = ∠Xk* = −∠Xk, the phase spectrum can be chosen to be anti-symmetric about the vertical axis by choosing the angle range from −π to π.

• Convergence Issues

We have shown that the coefficient choice

X_k = (1/T_o) ∫_{T_o} x(t) e^{−jkω_o t} dt,   k = 0, ±1, ±2, …, ±K

minimizes the integral square error

I_{2K+1} = ∫_{T_o} [ x(t) − Σ_{k=−K}^{K} X_k e^{jkω_o t} ]² dt

where we have added a subscript to emphasize that this is the integral square error with 2K+1 basis signals. The convergence issue we address is whether the integral square error approaches zero as K increases. That is, under what conditions do we have

lim_{K→∞} I_{2K+1} = 0

This is a difficult question, and there are different types of convergence that can be considered; this is not the type of convergence typically studied in beginning calculus. We will simply state the best-known sufficient condition, called the Dirichlet condition:

Theorem If x(t) is periodic with fundamental period T_o, and if
(a) ∫_{T_o} |x(t)| dt < ∞,
(b) x(t) has at most a finite number of maxima and minima in one period,
(c) x(t) has at most a finite number of finite discontinuities in one period,
then
(i) lim_{K→∞} I_{2K+1} = 0,
(ii) at each value of t where x(t) is continuous, x(t) = Σ_{k=−∞}^{∞} X_k e^{jkω_o t},
(iii) at each value of t where x(t) has a discontinuity, Σ_{k=−∞}^{∞} X_k e^{jkω_o t} takes the value of the mid-point of the discontinuity.

Notice that the Dirichlet condition is a sufficient condition for a type of convergence of the Fourier series. Because it is a sufficient condition, there are signals that do not satisfy the Dirichlet condition but nonetheless have Fourier series with similar convergence properties. The nature of convergence of Fourier series results in an important phenomenon called the Gibbs Effect when a truncated (finite) Fourier series is used as an approximation to the signal, though we will not consider these issues further.
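The convergence statement can be illustrated numerically. The sketch below uses a hypothetical rectangular pulse (height 1 on |t| ≤ 1, period 4, so the X_k are real) and checks that the integral square error shrinks as K grows, that the symmetric partial sum takes the mid-point value 1/2 at the discontinuity, and that the Gibbs overshoot near the discontinuity exceeds the signal's maximum:

```python
import math

# Hypothetical pulse: x(t) = 1 for |t| <= T1, 0 elsewhere, with period To
T1, To = 1.0, 4.0
wo = 2 * math.pi / To

def Xk(k):
    # real coefficients of the unit-height pulse: (2*T1/To) * sinc(2*k*T1/To)
    th = 2 * k * T1 / To
    return (2 * T1 / To) if k == 0 else (2 * T1 / To) * math.sin(math.pi * th) / (math.pi * th)

def partial_sum(K, t):
    # cosine form of the symmetric truncated series, valid since Xk is real
    return Xk(0) + sum(2 * Xk(k) * math.cos(k * wo * t) for k in range(1, K + 1))

def sq_error(K, steps=2000):
    # midpoint-rule approximation of the integral square error over one period
    dt = To / steps
    total = 0.0
    for i in range(steps):
        t = -To / 2 + (i + 0.5) * dt
        x = 1.0 if abs(t) <= T1 else 0.0
        total += (x - partial_sum(K, t)) ** 2 * dt
    return total
```

The error decreases monotonically with K, but the peak overshoot just before the jump does not shrink toward zero, which is exactly the Gibbs behavior mentioned above.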

Convergence issues aside, it is remarkable how well a few terms of the Fourier series can approximate a periodic signal. To get an appreciation of this, see the Web lecture Harmonic Phasors and Fourier Series and also the demonstration Phasor Phactory. Both of these use the phasor representation of the Fourier series. Sketch in various signals and notice how 4 or 5 harmonics of the Fourier series can render a good approximation. For details on the Gibbs Effect, and on the important notion of windowing the coefficients to remove the effect, consult the demonstration Fourier Series Approximation. This demonstration uses the cosine-trigonometric form of the Fourier series.

8.3 Fourier Series Interpretations of Operations on Signals

Periodic signals are determined, to desired accuracy in terms of integral square error, by knowledge of the fundamental frequency, ω_o, and a suitable number of the complex-form Fourier series coefficients, X_k, namely, the coefficients of the various harmonic frequencies that make up the signal. Thus the time-domain view of periodic signals is complemented by a "frequency domain" view. This raises the possibility of performing or interpreting operations on signals by performing or interpreting operations on the frequency domain representation. We will not go through a long list of operations, since this topic will reappear when we consider a more general frequency-domain viewpoint that includes aperiodic signals as well. However, we consider a few examples.

Example Given a signal

x(t) = Σ_{k=−K}^{K} X_k e^{jkω_o t}

suppose a new signal is formed by amplitude transformation, x̂(t) = a x(t) + b, where a ≠ 0 and b are real constants. It is clear that x̂(t) is periodic, with the same fundamental period/frequency as x(t), and indeed it is easy to determine the Fourier series coefficients of x̂(t) by inspection, with an obvious, appropriate modification of the notions of magnitude and phase spectra. We simply write

x̂(t) = Σ_{k=−K}^{K} X̂_k e^{jkω_o t} = b + a Σ_{k=−K}^{K} X_k e^{jkω_o t}

and conclude that

X̂_k = { aX_0 + b,  k = 0
       { aX_k,      k ≠ 0
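This conclusion is easy to confirm numerically. The sketch below uses an arbitrary hypothetical test signal, x(t) = 1 + cos(ω_o t) with T_o = 2π, and hypothetical constants a = 2, b = 3; the coefficients of the transformed signal are computed from the defining integral and compared with aX_k (plus b at k = 0):

```python
import cmath
import math

To = 2 * math.pi
wo = 2 * math.pi / To  # = 1

a, b = 2.0, 3.0  # hypothetical amplitude-transformation constants

def coeff(signal, k, steps=4000):
    # numerical Fourier series coefficient: (1/To) * integral over one period
    dt = To / steps
    acc = 0j
    for i in range(steps):
        t = (i + 0.5) * dt
        acc += signal(t) * cmath.exp(-1j * k * wo * t) * dt
    return acc / To

x = lambda t: 1 + math.cos(wo * t)   # test signal: X_0 = 1, X_{+-1} = 1/2
xhat = lambda t: a * x(t) + b        # the amplitude transformation
```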

and the magnitude spectrum of the signal unchanged. at least approximately. ω . Suppose the system has unit-impulse response h(t ) . ωo . 8. leaves the fundamental frequency unchanged. a = − | a | . especially for more complicated operations. The complex-form Fourier series coefficients are given by | a| ˆ ˆ ˆ X k = 1 ∫ x(t )e− jkωo t dt = ˆ To To 0 ˆ To To /|a| − jk |a|ωo t dt ∫ x(at )e To proceed. Then x(t ) is periodic with ˆ ˆ fundamental period To = To / | a | and fundamental frequency ωo =| a | ωo . as a particular example. If a < 0 . Rather than think of a fixed frequency. time scale by a positive constant leaves the Fourier series coefficients unchanged. is to begin with the expression for the Fourier series coefficients of the new signal.4 CT LTI Frequency Response and Filtering We can combine the Fourier series representation for periodic signals with the eigenfunction property for stable LTI systems to represent system responses to periodic input signals. though of course the fundamental frequency is changed. that is. namely −To ( ) ˆ Xk = Xk That is. we need to separate the cases of positive and negative a. which is time reversal. by the Fourier series expression −∞ ∞ x(t ) = ∑ X k e jkωo t k =− K K linearity and the eigenfunction property give the output expression y (t ) = ∑ H (kωo ) X k e jkωo t k =− K K 101 . ˆ ˆ Example Suppose x(t ) = x(at ) . we think of frequency as a variable. where a is a nonzero constant. On the other hand.This approach relies on the fact that the terms in a Fourier series for a periodic signal are unique. a safer approach. but changes the phase spectrum. the stability assumption guarantees that H (ω ) is well defined for all ω . time scale by a = −1 . 
then the change of integration variable from t to τ = at = − | a | t gives −T 0 |a| o ˆ Xk = x(τ )e− jk |a|ωoτ / a dτ = 1 ∫ x(τ )e jkωoτ dτ To ∫ −|a| To 0 0 −To 0 ∗ ∗ = 1 ∫ x(τ ) e− jkωoτ dτ = X k = X − k To It is left as an exercise to show that for a > 0 a somewhat different result is obtained. it is convenient to change our earlier notation. However. and relate it to the expression for coefficients of the original signal. Since we will be considering different frequencies. Then given a periodic input signal described. and define the frequency response function of the system by H (ω ) = ∫ h(t )e − jω t dt Of course. a fact that should be clear since each coefficient is determined independently of the others.

Letting Y_k = H(kω_o) X_k, this leads to the conclusion that the Y_k's are Fourier series coefficients for the periodic output signal. Noting the conjugacy property that H(−ω) = H*(ω), we see that the Y_k coefficients satisfy Y_{−k} = Y_k*. This expression for y(t) can be converted to various real forms in the usual way. Of course, the output signal typically has the same fundamental frequency as the input signal, though not always, since the frequency response function can be zero at particular frequencies. This property also carries over to the case of causal, stable LTI systems with "right-sided periodic" input signals: the steady-state response is periodic and is as described above.

We can consider the magnitude of the frequency response function as a frequency-dependent gain of the system. That is, |H(kω_o)| is the gain of the system at frequency kω_o, the factor by which the amplitude of the k-th harmonic of the input signal is increased or decreased. This is the basis of frequency selective filtering, where an LTI system is designed to have desired effects on the frequency components of the input signal. To show the filtering properties of a system, we often display a plot of the magnitude of the frequency response function, |H(ω)|, vs. ω.

Example Consider again the R-C circuit in Section 6.6, with R = 1, C = 1. The unit-impulse response is

h(t) = (1/(RC)) e^{−t/(RC)} u(t) = e^{−t} u(t)

Therefore the frequency response function for the circuit is

H(ω) = ∫_{−∞}^{∞} e^{−t} u(t) e^{−jωt} dt = ∫_0^{∞} e^{−(1+jω)t} dt = 1/(1+jω)

Since |H(ω)| = 1/√(1+ω²), it is straightforward to sketch the magnitude of the frequency response function:

Clearly the circuit acts as a low-pass filter: high-frequency input signal components are attenuated much more than low-frequency components. To be specific, suppose

x(t) = 1 + cos(t) + cos(30t)

Since x(t) = Re{e^{j0t}} + Re{e^{jt}} + Re{e^{j30t}}, linearity and the eigenfunction property can be used to write the response as

y(t) = Re{H(0)e^{j0t}} + Re{H(1)e^{jt}} + Re{H(30)e^{j30t}}
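These frequency response values can be evaluated directly; the sketch below just computes H(ω) = 1/(1+jω) from the circuit example at the three input frequencies:

```python
import cmath
import math

def H(w):
    # frequency response of the R-C circuit with R = C = 1
    return 1 / (1 + 1j * w)

# gain and phase at the three input frequencies 0, 1, and 30 rad/s
gains = {w: abs(H(w)) for w in (0, 1, 30)}
phases = {w: cmath.phase(H(w)) for w in (0, 1, 30)}
```

The gain falls from 1 at dc to 1/√2 at ω = 1 and to roughly 1/30 at ω = 30, which is the low-pass behavior noted above.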

Then the computations

H(0) = 1,  H(1) = (1/√2) e^{−jπ/4},  H(30) ≈ (1/30) e^{−jπ/2}

give

y(t) ≈ 1 + (1/√2) cos(t − π/4) + (1/30) cos(30t − π/2)

Exercises

1. Compute the complex-form Fourier series coefficients and sketch the magnitude and phase spectra for
(a) the signal x(t) that has fundamental period T_o = 1, with x(t) = e^{−t}, 0 ≤ t ≤ 1,
(b) the signal x(t) shown below
(c) the signal x(t) = Σ_{k=−∞}^{∞} (−1)^k δ(t − 2k)
(d) the signal x(t) shown below

2. Suppose the signal x(t) has fundamental period T_o and complex-form Fourier series coefficients X_k. Show that
(a) if x(t) is odd, x(t) = −x(−t), then X_k = −X_{−k} for all k,
(b) if x(t) is "half-wave odd," x(t) = −x(t + T_o/2), then X_k = 0 for every even integer k,
(c) if x(t) is even, x(t) = x(−t), then X_k = X_{−k} for all k.

3. Suppose x(t) is periodic with fundamental period T_o and complex-form Fourier series coefficients X_k. Derive expressions for the complex-form Fourier series coefficients of the following signals in terms of X_k.
(a) x̂(t) = 2x(t − 3) + 1
(b) x̂(t) = x(1 − t)

(c) x̂(t) = (d/dt) x(t)
(d) x̂(t) = ∫_{−∞}^{t} x(τ) dτ (What additional assumption is required on the Fourier series coefficients of x(t)?) Hint: X̂_k = (1/T_o) ∫_0^{T_o} ( ∫_{−∞}^{t} x(τ) dτ ) e^{−jkω_o t} dt, and integration-by-parts can be used to write this in a way that X_k can be recognized.
(e) x̂(t) = x(t + T_o/2)

4. Given the LTI system with unit-impulse response h(t) = e^{−4|t|}, compute the Fourier series representation for the response y(t) of the system to the input signal
(a) x(t) = Σ_{n=−∞}^{∞} δ(t − n)
(b) x(t) = Σ_{n=−∞}^{∞} (−1)^n δ(t − n)

5. A continuous-time periodic signal x(t) has Fourier series coefficients

X_k = { (6/(jk)) e^{jkπ/4},  k = ±1, ±3
      { 0,                   else

Compute and sketch the magnitude and phase spectra of the signal.

6. Answer, with justification, the following questions about the response of the stable LTI system with frequency response function

H(ω) = 5/(3 + jω)

(a) For a periodic input signal x(t) that has fundamental period T_o = 2π, what harmonics will appear in the output signal y(t) with diminished magnitude? That is, what values of k yield |Y_k| < |X_k|?
(b) For a periodic input signal that has fundamental period T_o = π, what harmonics will appear in the output signal with diminished magnitude?

7. Answer, with justification, the following questions about the response of the LTI system that has impulse response h(t) = 25 t e^{−3t} u(t).
(a) For a periodic input signal x(t) that has fundamental period T_o = 2, what harmonics will appear in the output signal y(t) with diminished magnitude? That is, what values of k yield |Y_k| < |X_k|?
(b) For a periodic input signal that has fundamental period T_o = 4π, what harmonics will appear in the output signal with diminished magnitude?

8. Consider the LTI system that has impulse response h(t) = δ(t) − e^{−2t} u(t), and suppose the input signal is x(t) = 1 + 2cos(t) + 3cos(2t). Compute the response y(t).

9. Consider the LTI system that has impulse response h(t) = t e^{−t} u(t) + 2e^{−t} u(t) − δ(t), and suppose the input signal is x(t) = 2 + 2cos(t). Compute the response y(t).

9.1 Periodic DT Signal Representation (Fourier Series)

Suppose x[n] is a real, periodic signal with fundamental period N_o and, of course, fundamental frequency ω_o = 2π/N_o. We choose a basis set of N_o harmonically related discrete-time phasors, called the discrete-time Fourier basis set:

φ_k[n] = e^{jkω_o n},   k = 0, 1, …, N_o − 1

(Often these signals are written out in the form e^{jk(2π/N_o)n}, but we make use of the fundamental frequency to simplify the appearance of the exponent.) This basis set has several properties:

• There are exactly N_o distinct basis signals of this type (not an infinite number as in the continuous-time case), since

φ_{k+N_o}[n] = e^{j(k+N_o)ω_o n} = e^{jkω_o n} e^{jN_o ω_o n} = φ_k[n]

of course, since e^{jN_o ω_o n} = e^{j2πn} = 1.

• The basis set is self-conjugate, since for k in the range,

φ_k*[n] = e^{−jkω_o n} = e^{−jkω_o n} e^{jN_o ω_o n} = e^{j(N_o−k)ω_o n} = φ_{N_o−k}[n]

• Each basis signal is periodic with period (not necessarily fundamental period) N_o:

φ_k[n+N_o] = e^{jkω_o (n+N_o)} = e^{jkω_o n} e^{jk2π} = φ_k[n]

• The basis set is orthogonal over any range of length N_o in n. To show this, consider the range r ≤ n ≤ r + N_o − 1, where r is any integer, and compute

Σ_{n=r}^{r+N_o−1} φ_k[n] φ_m*[n] = Σ_{n=r}^{r+N_o−1} e^{jkω_o n} e^{−jmω_o n}

Changing the summation index from n to l = n − r gives

Σ_{n=r}^{r+N_o−1} φ_k[n] φ_m*[n] = Σ_{l=0}^{N_o−1} e^{jkω_o (l+r)} e^{−jmω_o (l+r)} = e^{j(k−m)ω_o r} Σ_{l=0}^{N_o−1} e^{j(k−m)ω_o l}

Next we apply the identity

Σ_{l=0}^{N_o−1} α^l = { N_o,                     α = 1
                      { (1 − α^{N_o})/(1 − α),  α ≠ 1

which holds for any complex number α.
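This identity can be spot-checked numerically, both for the roots-of-unity case that arises here (where the sum is N_o when α = 1, i.e. k = m, and 0 otherwise) and for an arbitrary complex α. The sketch below uses hypothetical test values N_o = 6 and α = 0.5 + 0.25j:

```python
import cmath
import math

def geometric_sum(alpha, No):
    # direct evaluation of sum_{l=0}^{No-1} alpha^l
    return sum(alpha ** l for l in range(No))

def closed_form(alpha, No):
    # the identity from the text, valid for any complex alpha
    return No if alpha == 1 else (1 - alpha ** No) / (1 - alpha)

No = 6
wo = 2 * math.pi / No
# alpha = e^{j(k-m) wo} for each possible difference d = k - m
unity_sums = {d: geometric_sum(cmath.exp(1j * d * wo), No) for d in range(-5, 6)}
```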

Applying this identity with α = e^{j(k−m)ω_o} to the summation, we obtain

Σ_{n=r}^{r+N_o−1} φ_k[n] φ_m*[n] = e^{j(k−m)ω_o r} · { N_o,                                              k = m
                                                    { (1 − e^{j(k−m)ω_o N_o})/(1 − e^{j(k−m)ω_o}),      k ≠ m
                                                  = { N_o,  k = m
                                                    { 0,    k ≠ m

since e^{j(k−m)ω_o N_o} = e^{j(k−m)2π} = 1. Thus we have established orthogonality, and in particular

Σ_{n=r}^{r+N_o−1} φ_m[n] φ_m*[n] = N_o

From these properties and the general formulas for minimum sum-squared-error representations in Section 7.4, we can conclude that to represent the N_o-periodic, real signal x[n] with minimum sum squared error per period using the Fourier basis set, the coefficients are given by

X_m = (1/N_o) Σ_{n=r}^{r+N_o−1} x[n] φ_m*[n] = (1/N_o) Σ_{n=r}^{r+N_o−1} x[n] e^{−jmω_o n}

Here r is any integer, and we again use the special notation X_m to denote the m-th Fourier series coefficient for the signal x[n]. Therefore the representation is written as

x[n] ≈ Σ_{m=0}^{N_o−1} X_m e^{jmω_o n}

where the approximation is in the sense of minimum sum-squared error. Of course it is important to immediately note that, in addition to the conjugacy relation

φ_m*[n] = e^{−jmω_o n} = e^{−jmω_o n} e^{jN_o ω_o n} = e^{j(N_o−m)ω_o n} = φ_{N_o−m}[n]

we have

X_m* = (1/N_o) Σ_{n=r}^{r+N_o−1} x[n] e^{jmω_o n} = (1/N_o) Σ_{n=r}^{r+N_o−1} x[n] e^{jmω_o n} e^{−jN_o ω_o n} = (1/N_o) Σ_{n=r}^{r+N_o−1} x[n] e^{−j(N_o−m)ω_o n} = X_{N_o−m}

Therefore the conjugate of every term X_m e^{jmω_o n} is included as another term in the sum, and so the minimum sum-squared-error representation is real.

Next, before working examples, we will compute a surprising expression for the minimum value of the sum squared error per period. Begin by writing the representation as

Σ_{m=0}^{N_o−1} X_m e^{jmω_o n} = Σ_{m=0}^{N_o−1} [ (1/N_o) Σ_{l=0}^{N_o−1} x[l] e^{−jmω_o l} ] e^{jmω_o n}

We can interchange the order of summation to write

Σ_{m=0}^{N_o−1} X_m e^{jmω_o n} = (1/N_o) Σ_{l=0}^{N_o−1} x[l] Σ_{m=0}^{N_o−1} e^{−jlω_o m} e^{jnω_o m} = (1/N_o) Σ_{l=0}^{N_o−1} x[l] Σ_{m=0}^{N_o−1} φ_l*[m] φ_n[m]

But

Σ_{m=0}^{N_o−1} φ_l*[m] φ_n[m] = { 0,    n ≠ l
                                 { N_o,  n = l

by orthogonality, and thus

Σ_{m=0}^{N_o−1} X_m e^{jmω_o n} = x[n]

That is, the sum squared error per period is zero! This means that our approximate representation actually is an exact representation.

Example This example is almost too simple, but it illustrates the calculation of the discrete-time Fourier series. Consider

x[n] = (−1)^n

In this case, N_o = 2, ω_o = π, and the two basis signals are e^{j0πn} = 1 and e^{jπn}, and there is danger of confusion since e^{jπn} is, of course, (−1)^n. The Fourier coefficients are given by

X_0 = (1/2) Σ_{n=0}^{1} (−1)^n = 0
X_1 = (1/2) Σ_{n=0}^{1} (−1)^n e^{−jπn} = (1 − e^{−jπ})/2 = 1

Thus the Fourier series representation is

x[n] = Σ_{m=0}^{1} X_m e^{jmπn} = [0 + e^{jπn}] = e^{jπn}

Certainly this is no surprise.

Example Consider a periodic train of clumps of 2N_1 + 1 pulses of unit height repeating with fundamental period N_o, as shown below.

The Fourier coefficients can be computed from

X_m = (1/N_o) Σ_{n=−N_1}^{N_o−N_1−1} x[n] e^{−jmω_o n} = (1/N_o) Σ_{n=−N_1}^{N_1} e^{−jmω_o n}

Clearly X_0 = (2N_1 + 1)/N_o, and for nonzero m we replace the summation index n by k = n + N_1 to write

X_m = (1/N_o) Σ_{k=0}^{2N_1} e^{−jmω_o (k−N_1)} = (1/N_o) e^{jmω_o N_1} Σ_{k=0}^{2N_1} e^{−jmω_o k}

Recognizing the right-most summation to be of the form

Σ_{k=0}^{2N_1} (e^{−jmω_o})^k = (1 − e^{−jmω_o(2N_1+1)})/(1 − e^{−jmω_o})

gives

X_m = (1/N_o) e^{jmω_o N_1} (1 − e^{−jmω_o(2N_1+1)})/(1 − e^{−jmω_o})
    = (1/N_o) [ e^{−jmω_o/2} (e^{jmω_o N_1} e^{jmω_o/2} − e^{−jmω_o N_1} e^{−jmω_o/2}) ] / [ e^{−jmω_o/2} (e^{jmω_o/2} − e^{−jmω_o/2}) ]
    = (1/N_o) sin(mω_o(N_1 + 1/2)) / sin(mω_o/2)

(As usual, when a coefficient can be written in real form we continue computing until we obtain a real expression.) Now an easy calculation with L'Hospital's rule shows that this formula is valid for m = 0 as well. This expression of the Fourier coefficients is often written in terms of evaluations of an envelope function as

X_m = (1/N_o) sin[(2N_1+1)ω/2] / sin(ω/2) |_{ω = mω_o}

and sometimes the ratio of sine functions is called the aliased sinc.
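A numerical check of the aliased-sinc formula against the direct defining sum; this is a sketch with hypothetical values N_1 = 2, N_o = 10:

```python
import cmath
import math

N1, No = 2, 10               # hypothetical clump half-width and period
wo = 2 * math.pi / No

def X_direct(m):
    # (1/No) * sum of x[n] e^{-jm wo n} over one period, with x[n] = 1 for |n| <= N1
    return sum(cmath.exp(-1j * m * wo * n) for n in range(-N1, N1 + 1)) / No

def X_formula(m):
    # aliased-sinc envelope evaluated at w = m*wo;
    # the limit value (2*N1+1)/No applies where sin(w/2) = 0
    w = m * wo
    if abs(math.sin(w / 2)) < 1e-12:
        return (2 * N1 + 1) / No
    return math.sin((2 * N1 + 1) * w / 2) / (No * math.sin(w / 2))
```

The agreement holds for every integer m, including negative m and m beyond one period, since both expressions repeat with period N_o.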

There are two important observations about the DTFS, the first of which is something we discussed previously:

Remark 1 Since both x[n] and e^{−jmω_o n} are periodic sequences in n with period N_o, for any m, we can compute

X_m = (1/N_o) Σ_{n=0}^{N_o−1} x[n] e^{−jmω_o n}

by summing over any N_o consecutive values of the index n. This is often written as

X_m = (1/N_o) Σ_{n=<N_o>} x[n] e^{−jmω_o n}

where the special angle brackets denote an index range of "N_o consecutive values."

Remark 2 The DTFS coefficients X_0, …, X_{N_o−1} can be extended in either direction to form a sequence that repeats according to X_{m+N_o} = X_m, for all m. This follows from the calculation

X_{m+N_o} = (1/N_o) Σ_{n=0}^{N_o−1} x[n] e^{−j(m+N_o)ω_o n} = (1/N_o) Σ_{n=0}^{N_o−1} x[n] e^{−jm(2π/N_o)n} e^{−j2πn} = X_m

A consequence is that, for any integer k,

X_k e^{jkω_o n} = X_{N_o+k} e^{j(N_o+k)ω_o n}

Thus we can write

x[n] = Σ_{m=0}^{N_o−1} X_m e^{jmω_o n} = Σ_{m=<N_o>} X_m e^{jmω_o n}

where again the angle-bracket notation indicates a summation over N_o consecutive values of the index m, not necessarily the values from zero to N_o − 1.

Example The signal x[n] = sin(2πn/3) is periodic with fundamental period N_o = 3. We can calculate the DTFS coefficients as follows, though some details are omitted:

X_0 = (1/3) Σ_{n=0}^{2} sin(2πn/3) = 0
X_1 = (1/3) Σ_{n=0}^{2} sin(2πn/3) e^{−j(2π/3)n} = 1/(2j)
X_2 = (1/3) Σ_{n=0}^{2} sin(2πn/3) e^{−j2(2π/3)n} = −1/(2j)

However, there is a shortcut available. Simply write

sin(2πn/3) = (1/(2j)) e^{j(2π/3)n} − (1/(2j)) e^{−j(2π/3)n}

and compare this to the expression

x[n] = Σ_{m=<3>} X_m e^{jm(2π/3)n}

Choosing the index range <3> = −1, 0, 1, we see that

X_{−1} = −1/(2j),  X_0 = 0,  X_1 = 1/(2j)

These two results can be reconciled by noting that X_2 = X_{−1+3} = X_{−1} and, for other values of k, X_{k+3} = X_k. Indeed, the discrete-time phasor corresponding to the coefficient X_2 is e^{j2(2π/3)n}, and this is identical to the phasor e^{−j(2π/3)n} corresponding to X_{−1}, as is easily verified.

The first applet in the demonstration linked below permits you to explore the DTFS for signals with period 5. Select the "Input x[n]" option and set the speed to "slow." Then you can sketch in a signal and "play" the DTFS frequency components. Other options permit you to enter the coefficients in the DTFS or enter the magnitude and phase spectra. See the main demonstrations page for other versions of the applet. (You will need to use MSIE 5.5+ with the MathPlayer plugin to use this link.)

Discrete-Time Fourier Series

9.2 Spectra of DT Signals

For a real, periodic, DT signal x[n], the frequency content of the signal is revealed by the coefficients in the DTFS expression

x[n] = Σ_{m=<N_o>} X_m e^{jmω_o n}

The following graphical displays of these coefficients define various spectra of x[n], and these will be useful later in this chapter. The magnitude spectrum of x[n] is a line plot of |X_m| vs the index m, or vs mω_o on a frequency axis. The phase spectrum of x[n] is a similar plot of ∠X_m, in terms of a frequency axis, usually on an angular range from −π to π. Finally, when the DTFS coefficients are all real, the amplitude spectrum of x[n] is simply a plot of the coefficients X_m.

Example In Section 9.1 we computed the DTFS coefficients of x[n] = (−1)^n as X_0 = 0, X_1 = 1, and X_{k+2} = X_k. In this case the amplitude spectrum of the signal is simply a plot of these coefficients vs the index m, or, since ω_o = π, the same plot vs mω_o on a frequency axis.
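Both DTFS examples above, and the exactness of the representation established in Section 9.1, can be verified with a short analysis-synthesis round trip. This sketch just implements the standard DTFS sums:

```python
import cmath
import math

def dtfs(x, No):
    # analysis: X_m = (1/No) * sum_n x[n] e^{-jm wo n}
    wo = 2 * math.pi / No
    return [sum(x[n] * cmath.exp(-1j * m * wo * n) for n in range(No)) / No
            for m in range(No)]

def synthesize(X, No, n):
    # synthesis: x[n] = sum_m X_m e^{jm wo n}
    wo = 2 * math.pi / No
    return sum(X[m] * cmath.exp(1j * m * wo * n) for m in range(No))

alt = [(-1) ** n for n in range(2)]                       # x[n] = (-1)^n, No = 2
sine = [math.sin(2 * math.pi * n / 3) for n in range(3)]  # x[n] = sin(2*pi*n/3), No = 3

X_alt = dtfs(alt, 2)
X_sine = dtfs(sine, 3)
```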

Since π corresponds to the highest frequency in discrete time, we note the obvious fact that x[n] = (−1)^n is a high-frequency signal. For this simple case the DTFS coefficients are real, the magnitude spectrum is identical to the amplitude spectrum, and the phase spectrum is zero. Finally, it should be noted that these spectra repeat outside the frequency ranges shown.

Example The second example in Section 9.1, that is, a periodic train of width 2N_1 + 1 clumps of unit-height lollypops, is considerably more complicated. Choosing N_1 = 2, N_o = 10, the coefficients are given as an evaluation of an aliased-sinc envelope by

X_m = (1/10) sin(5ω/2) / sin(ω/2) |_{ω = mπ/5}

The envelope function is zero when 5ω/2 = kπ, for nonzero integer k, that is, for ω = 2kπ/5. Sketching this envelope function yields the amplitude spectrum shown below. Correspondingly, though again the DTFS coefficients are real, the magnitude and phase spectra are shown below. While there is some high-frequency content in x[n], in particular the component at frequency π, there is more low-frequency content, as indicated by the components near the frequencies zero and 2π.

To explore the notion of spectra in more detail, consult the demonstration Discrete-Time Fourier Series linked at the end of Section 9.1.

9.3 Operations on Signals

Discrete-time, periodic signals are completely determined by the fundamental frequency, ω_o, or the fundamental period, N_o, and any N_o consecutive Fourier coefficients, say X_0, X_1, …, X_{N_o−1}. Thus the signal is described in terms of its frequency content. This raises the possibility of interpreting various time-domain operations on signals, such as time-index shift, as operations on the frequency-domain description. Rather than give a lengthy treatment of this issue, we will simply discuss a few examples.

Example 1 Given a signal

x[n] = Σ_{k=<N_o>} X_k e^{jkω_o n}

suppose a new signal is obtained by the index shift x̂[n] = x[n − n_o], where n_o is a fixed integer. Clearly a shift does not change periodicity, and the fundamental frequency does not change. Therefore we can compute the Fourier coefficients for x̂[n] from the standard formula:

X̂_k = (1/N_o) Σ_{n=<N_o>} x̂[n] e^{−jkω_o n} = (1/N_o) Σ_{n=<N_o>} x[n − n_o] e^{−jkω_o n}

Changing the summation index from n to m = n − n_o,

X̂_k = (1/N_o) Σ_{m=<N_o>} x[m] e^{−jkω_o (m + n_o)} = e^{−jkω_o n_o} (1/N_o) Σ_{m=<N_o>} x[m] e^{−jkω_o m} = e^{−jkω_o n_o} X_k

Notice that the magnitude spectrum of the signal is unchanged by time shift. In simple cases, it is possible to ascertain the effect of the operation on the Fourier coefficients by inspection of the representation. Indeed, it is clear that

x̂[n] = x[n − n_o] = Σ_{k=<N_o>} X_k e^{jkω_o (n − n_o)} = Σ_{k=<N_o>} e^{−jkω_o n_o} X_k e^{jkω_o n}

and we simply recognize the form of the expression and the corresponding Fourier coefficients X̂_k. In particular, for a shift by one period, n_o = N_o, we get X̂_k = e^{−jkω_o N_o} X_k = X_k, regardless of the integer value of k, as expected.

Example 2 Suppose x̂[n] = x[−n]. Again, the fundamental period, or fundamental frequency, does not change, and the Fourier coefficients for x̂[n] are given by

X̂_k = (1/N_o) Σ_{n=<N_o>} x̂[n] e^{−jkω_o n} = (1/N_o) Σ_{n=<N_o>} x[−n] e^{−jkω_o n}

Changing the summation index to m = −n gives

X̂_k = (1/N_o) Σ_{m=<N_o>} x[m] e^{−jkω_o (−m)} = (1/N_o) Σ_{m=<N_o>} x[m] e^{−j(−k)ω_o m} = X_{−k}

This conclusion also could be reached by inspection of the representation. Further discussion of operations on discrete-time signals can be found in the demonstration

DTFS Properties

9.4 DT LTI Frequency Response and Filtering

For a stable DT LTI system with periodic input signal, the DTFS and the eigenfunction property provide a way to compute and interpret the response. If the unit-pulse response of the system is h[n], we define the frequency response function of the system as

H(ω) = Σ_{n=−∞}^{∞} h[n] e^{−jωn}

This is a slight change in notation from Section 5.5 in that we show frequency as a variable. Note also that

H(ω + 2π) = Σ_{n=−∞}^{∞} h[n] e^{−j(ω+2π)n} = H(ω)

so in discrete time the frequency response function repeats every 2π radians in frequency. Since these functions repeat with period 2π, we often show plots of |H(ω)| and ∠H(ω) vs ω for only the range −π < ω ≤ π, for example.

If the input signal x[n] is periodic, with fundamental period N_o and corresponding fundamental frequency ω_o, we can write

x[n] = Σ_{k=<N_o>} X_k e^{jkω_o n}

and the eigenfunction property gives

y[n] = Σ_{k=<N_o>} H(kω_o) X_k e^{jkω_o n}

Thus the frequency response function of the system describes the effect of the system on various frequency components of the input signal.

Example Suppose an LTI system has the unit pulse response

h[n] = (1/2)δ[n] + (1/2)δ[n−1]

Obviously the system is stable, and the frequency response function is

H(ω) = 1/2 + (1/2)e^{−jω} = e^{−jω/2} cos(ω/2)

The response of the system to an input of frequency ω_o, for example

x[n] = cos(ω_o n) = Re{e^{jω_o n}}

is given by a now-standard calculation using the eigenfunction property:

y[n] = Re{H(ω_o) e^{jω_o n}} = |H(ω_o)| cos(ω_o n + ∠H(ω_o))

In this case, |H(ω)| = cos(ω/2), −π < ω ≤ π, and from the plot below we see that the system is a low-pass filter.

Example Suppose an LTI system is described by the difference equation

y[n] + a y[n−1] = b x[n]

where, to guarantee stability, we assume |a| < 1. Then h[n] = (−a)^n b u[n] and the frequency response function is

H(ω) = Σ_{n=−∞}^{∞} (−a)^n b u[n] e^{−jωn} = b Σ_{n=0}^{∞} (−a)^n e^{−jωn} = b/(1 + a e^{−jω})

In this case,

|H(ω)| = |b| / |1 + a e^{−jω}| = |b| / √((1 + a e^{−jω})(1 + a e^{jω})) = |b| / √(1 + 2a cos(ω) + a²)

If 0 < a < 1 and b = 1, then the magnitude of the frequency response is shown below, and this system is a high-pass filter.

Exercises

1. Compute the discrete-time Fourier series coefficients for the signals below and sketch the magnitude and phase spectra.
(a) x[n] = 1 + cos(πn/3)
(b)

(c)
(d) x[n] = Σ_{k=−∞}^{∞} δ(n − 4k − 1)

2. Given the fundamental period N_o and the magnitude and phase spectra as shown for a real, periodic, discrete-time signal x[n], determine the corresponding signal.
(a) N_o = 5

3. For the sets of DTFS coefficients given below, determine the corresponding real, periodic, discrete-time signal.
(a) X_k = { 1/2, k even ; −1/2, k odd },  ω_o = π
(b) X_k = 1/2, for all k,  ω_o = π
(c) X_0 = −1, X_1 = 0, X_2 = 1, X_3 = −2, X_4 = 1, X_5 = 0, X_{k+6} = X_k, for all k,  ω_o = π/3

4. Suppose x[n] is periodic with even fundamental period N_o and DTFS coefficients X_k. If x[n] also satisfies x[n] = −x[n + N_o/2], for all n, show that X_k = 0 if k is even.

5. Suppose x[n] has fundamental period N_o, an even integer, and discrete-time Fourier series coefficients X_k. What are the Fourier series coefficients for

(a) x̂[n] = x[n + N_o/2]
(b) x̂[n] = (−1)^n x[n] (Assume that N̂_o = N_o and give an example to show why this assumption is needed.)

6. For the LTI systems specified below, sketch the magnitude of the frequency response function and determine if the system is a low-pass or high-pass filter.
(a) h[n] = (1/2)δ[n] − (1/2)δ[n−1]
(b) h[n] = δ[n] − (1/2)^n u[n]
(c) h[n] = (1/2)^n u[n]

10.1 Introduction to the CT Fourier Transform

Analysis of a limit process by which a periodic signal approaches an aperiodic signal is difficult to undertake. Therefore we skip mathematical details and simply provide a motivational argument leading to a description of the frequency content of aperiodic signals.

If x(t) is T_o-periodic, we can write the Fourier series description

x(t) = Σ_{k=−∞}^{∞} X_k e^{jkω_o t}    (10.1)

where ω_o = 2π/T_o and

X_k = (1/T_o) ∫_{−T_o/2}^{T_o/2} x(t) e^{−jkω_o t} dt,   k = 0, ±1, ±2, …    (10.2)

Consider what happens as we let T_o grow without bound, that is, letting ω_o shrink toward 0. In a sense, as T_o → ∞, x(t) approaches an aperiodic signal, and in any given frequency range, say −1 ≤ ω ≤ 1, there are more and more frequency components of x(t), since ω_o becomes smaller and smaller. From a frequency content viewpoint, perhaps it is not surprising that a truly aperiodic signal typically contains frequency components at all frequencies, not just at integer multiples of a fundamental frequency.

Define, for all ω, the complex-valued function X(ω) by

X(ω) = ∫_{−T_o/2}^{T_o/2} x(t) e^{−jωt} dt    (10.3)

The Fourier series coefficients (10.2) for x(t) can be written as evaluations of this "envelope" function:

X_k = (1/T_o) X(kω_o) = (1/2π) X(kω_o) ω_o

so we can write

x(t) = (1/2π) Σ_{k=−∞}^{∞} X(kω_o) e^{jkω_o t} ω_o

Letting T_o grow without bound, the difference (k+1)ω_o − kω_o = ω_o shrinks toward 0, so that any given real number can be approximated by kω_o, for suitable k. Therefore we can view ω_o as a differential dω and consider that kω_o takes on the character of a real variable, ω. Then the sum transfigures to an integral, yielding

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω    (10.4)
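The key relation X_k = (1/T_o) X(kω_o) behind this limiting argument can be checked numerically for a fixed pulse embedded in ever-longer periods. This is a sketch; the pulse x(t) = 1 for |t| ≤ 1 is a hypothetical choice, not one from the text:

```python
import cmath
import math

def X_envelope(w):
    # envelope over the fixed pulse: int_{-1}^{1} e^{-jwt} dt = 2 sin(w)/w
    return 2.0 if w == 0 else 2 * math.sin(w) / w

def fourier_coeff(To, k, steps=10000):
    # (1/To) * integral over one period of the pulse of width 2 centered at 0
    wo = 2 * math.pi / To
    dt = To / steps
    acc = 0j
    for i in range(steps):
        t = -To / 2 + (i + 0.5) * dt
        if abs(t) <= 1:
            acc += cmath.exp(-1j * k * wo * t) * dt
    return acc / To
```

For each period T_o, the coefficient X_k matches the fixed envelope sampled at kω_o and scaled by 1/T_o; as T_o grows, the samples kω_o fill in the ω axis more and more densely.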

In this expression, since we let T_o → ∞, (10.3) becomes

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt    (10.5)

and x(t) is given by the inverse Fourier transform expression (10.4), which is analogous to the expression of a T_o-periodic signal in terms of its Fourier series. The function X(ω) is defined as the Fourier transform of x(t), and in a very useful sense it describes the frequency content of the aperiodic signal. Immediate questions are: When is X(ω) a meaningful representation for x(t)? Are standard operations on x(t) easy to interpret as operations on X(ω)? Before addressing these, we work a few examples.

To perform a sanity check on these claims of a transformation and an inverse transformation, suppose that given x(t) we compute

X(ω) = ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ

where we have used a different name for the integration variable in order to avoid a looming notational collision. Substituting this into the right side of (10.4) gives

(1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ e^{jωt} dω

Interchanging the order of integration gives

(1/2π) ∫_{−∞}^{∞} x(τ) ∫_{−∞}^{∞} e^{jω(t−τ)} dω dτ

In this expression we recognize that

∫_{−∞}^{∞} e^{jω(t−τ)} dω = 2π δ(t−τ)

from Special Property 2 in Section 2.2. Then the integration with respect to τ is evaluated by the sifting property to give

(1/2π) ∫_{−∞}^{∞} x(τ) 2π δ(t−τ) dτ = x(t)

In other words, taking the Fourier transform and then the inverse transform indeed returns the original signal.

Example For x(t) = e^{−3t} u(t),

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt = ∫_0^{∞} e^{−3t} e^{−jωt} dt = [−1/(3+jω)] e^{−(3+jω)t} |_0^{∞} = 1/(3+jω)
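A numeric spot check of this transform; the sketch truncates the integral at t = 40, where e^{−3t} is negligible:

```python
import cmath
import math

def X_numeric(w, T=40.0, steps=100000):
    # midpoint-rule approximation of int_0^T e^{-3t} e^{-jwt} dt
    dt = T / steps
    acc = 0j
    for i in range(steps):
        t = (i + 0.5) * dt
        acc += math.exp(-3 * t) * cmath.exp(-1j * w * t) * dt
    return acc

def X_closed(w):
    # the closed-form transform from the example
    return 1 / (3 + 1j * w)
```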

It is important to note that the evaluation at t = ∞ yields zero; if we change the sign of the exponent in the signal, the Fourier transform would not exist.

In fact the Fourier transform X(ω) describes the frequency content of the signal x(t), and we display this content using the following, familiar plots. The magnitude spectrum of a signal x(t) is a plot of |X(ω)| vs ω, and the phase spectrum is a plot of ∠X(ω) vs ω. If X(ω) is a real-valued function, sometimes we plot the amplitude spectrum of the signal as X(ω) vs ω. For the example above,

|X(ω)| = 1/√(9+ω²),  ∠X(ω) = −tan⁻¹(ω/3)

and the magnitude and phase spectra are shown below.

Remark The symmetry properties of the magnitude and phase spectra of a real signal x(t) are easy to justify in general from the fact that

X*(ω) = [ ∫_{−∞}^{∞} x(t) e^{−jωt} dt ]* = ∫_{−∞}^{∞} [x(t) e^{−jωt}]* dt = ∫_{−∞}^{∞} x(t) e^{jωt} dt = ∫_{−∞}^{∞} x(t) e^{−j(−ω)t} dt = X(−ω)

This gives |X(−ω)| = |X(ω)| and ∠X(−ω) = −∠X(ω), and thus the magnitude spectrum is an even function of ω, while the phase spectrum is an odd function of ω.

Example A simple though somewhat extreme example is the unit impulse, x(t) = δ(t). The sifting property immediately gives

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt = 1

In this case the magnitude (or amplitude) spectrum of the signal is a constant, indicating that the unit impulse is made up of equal amounts of all frequencies!
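The symmetry properties can be spot-checked on the transform from the preceding example, X(ω) = 1/(3+jω). A minimal sketch:

```python
import cmath
import math

def X(w):
    # transform of e^{-3t} u(t) from the example above
    return 1 / (3 + 1j * w)

# conjugate symmetry X(-w) = conj(X(w)) for a real signal,
# hence even magnitude and odd phase
checks = [(X(-w), X(w).conjugate()) for w in (0.5, 1.0, 3.0, 10.0)]
```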

Example  Consider the rectangular pulse signal shown below. The Fourier transform computation involves a bit of work to put the answer in a nice form, but the calculations are not unfamiliar:

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt = ∫_{−T1}^{T1} e^{−jωt} dt = (−1/jω) e^{−jωt} |_{−T1}^{T1}
     = (1/jω)(e^{jωT1} − e^{−jωT1}) = 2T1 sin(ωT1)/(ωT1) = 2T1 sinc(ωT1/π)

Thus the amplitude spectrum of x(t) is as sketched below. As with Fourier series calculations, if the Fourier transform can be written as a real function, then it is important to express it in real form.

Convergence Issues  It is difficult to explicitly characterize the class of signals for which the Fourier transform is well defined. Furthermore, the uniqueness of the Fourier transform for a given signal is an important issue: to each signal x(t) there should correspond exactly one X(ω). (This neglects trivial changes in the signal, or the transform, for example adjusting the value of x(t) at isolated values of t. Such a change does not affect the result of the integration leading to the transform.) There are various sufficient conditions that can be stated for a signal to have a unique Fourier transform, and we present only the best known of these.

Dirichlet Condition  Suppose a signal x(t) is such that
(a) x(t) is absolutely integrable, that is, ∫_{−∞}^{∞} |x(t)| dt < ∞,
(b) x(t) has no more than a finite number of minima and maxima in any finite time interval, and
(c) x(t) has no more than a finite number of discontinuities in any finite time interval, and these discontinuities are finite.
Then there exists a unique Fourier transform X(ω) corresponding to x(t).

It is important to remember that this is a sufficient condition, and there are signals that do not satisfy the condition yet have a unique Fourier transform. For example, the unit impulse would be thought of as having an infinite discontinuity at t = 0, and yet the sifting property gives a Fourier

However. that is. with evaluation by the sifting property. because of the failure of the integral to converge. Indeed. A quick check of the inverse transform reassures: x(t ) = 1 ∫ X (ω )e jω t dω = 1 ∫ 2πδ (ω )e jω t dω 2π 2π −∞ −∞ ∞ ∞ =1 Indeed.2.2 Fourier Transform for Periodic Signals If x(t ) is To -periodic. one approach to computing a Fourier transform when the Dirichlet condition is not satisfied or generalized functions might be involved is to guess the transform and verify by use of the inverse transform formula. 122 . we can check the calculation by applying the inverse transform. With X (ω ) = 1 . then it is clear that the Fourier transform X (ω ) = ∫ x(t )e− jω t dt −∞ ∞ does not exist in the usual sense. a reasonable conclusion. we compute x(t ) = 1 ∫ X (ω )e jω t dω = 1 ∫ e jω t dω = δ (t ) 2π 2π −∞ −∞ ∞ ∞ where we have used again Special Property 2 from Section 2.transform. ∞ jω t ∫ e dω = 2π δ (t ) −∞ Example As an additional example. 10. we can use the special property with the roles of t and ω interchanged to compute the Fourier transform of x(t ) = 1 : X (ω ) = ∫ e− jω t dt = ∫ e j ( −ω )t dt = 2πδ (−ω ) = 2πδ (ω ) −∞ −∞ ∞ ∞ This indicates that all the frequency content in the signal is concentrated at zero frequency. The Fourier series expression ∞ x(t ) = ∑ X k e jkωo t k =−∞ indicates that the key is to develop the Fourier transform of the complex signal jωo t x(t ) = e ∞ One approach is to use the Special Property 2 in Section 2. we can take an indirect approach and use notions of generalized functions to extend the Fourier transform to periodic signals in a way that captures the Fourier series expression in a fashion consistent with ordinary Fourier transforms.2 again: X (ω ) = ∫ e jωo t e− jω t dt = ∫ e j (ωo −ω )t dt = 2πδ (ωo − ω ) −∞ −∞ ∞ = 2πδ (ω − ωo ) That this is reasonable is easily verified using the inverse Fourier transform.

Following this calculation, we can compute the Fourier transform of a signal expressed by a Fourier series in a straightforward manner:

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt = ∫_{−∞}^{∞} [ Σ_{k=−∞}^{∞} Xk e^{jkωo t} ] e^{−jωt} dt
     = Σ_{k=−∞}^{∞} Xk ∫_{−∞}^{∞} e^{jkωo t} e^{−jωt} dt = Σ_{k=−∞}^{∞} 2π Xk δ(ω − kωo)

In words, to compute the Fourier transform of a periodic signal, first compute the Fourier series coefficients, Xk, and then simply substitute this data into the X(ω) expression above. Of course the Fourier transform expression is invertible by inspection, in the sense that the Fourier series for x(t) can be written by inspection from X(ω). From another viewpoint, we essentially have built the Fourier series into the transform.

We need to reformat the notions of spectra of x(t) in consonance with this new framework. First, the magnitude spectrum (or amplitude spectrum, if every Xk is real) becomes a plot of impulse functions, occurring at integer multiples of the fundamental frequency. Instead of a line plot, we have a plot of impulses labeled with the impulse-area magnitudes. For any value of ω, at most one summand in the expression for X(ω) can be nonzero. Because the summands are non-overlapping in this sense, the magnitude of the sum is the sum of the magnitudes. Second, the phase spectrum computes in a similar fashion, since the angle of a sum of non-overlapping terms is the sum of the angles. Therefore the phase spectrum is interpreted as a line plot of the angles of the areas of impulses vs frequency. That is, the phase spectrum has exactly the same form as in the context of Fourier series representations.

Example  A couple of special cases are interesting to note. Based on the Fourier transform of a phasor, for

x(t) = cos(ωo t) = (1/2) e^{jωo t} + (1/2) e^{−jωo t}

we have

X(ω) = π δ(ω − ωo) + π δ(ω + ωo)

and for x(t) = sin(ωo t),

X(ω) = −jπ δ(ω − ωo) + jπ δ(ω + ωo)

Example  Other approaches to computing the Fourier transform of periodic signals can give "correct," but difficult to interpret, results. For the signal

x(t) = Σ_{k=−∞}^{∞} δ(t − k)

we directly compute

X(ω) = ∫_{−∞}^{∞} [ Σ_{k=−∞}^{∞} δ(t − k) ] e^{−jωt} dt = Σ_{k=−∞}^{∞} ∫_{−∞}^{∞} δ(t − k) e^{−jωt} dt = Σ_{k=−∞}^{∞} e^{−jωk}

On the other hand, the recommended procedure is to compute the Fourier series data for x(t), which easily yields ωo = 2π and Xk = 1, for all k. Then by inspection we obtain the Fourier transform

X(ω) = Σ_{k=−∞}^{∞} 2π δ(ω − k2π)

Needless to say, it is difficult to show by elementary means that these two expressions for X(ω) are the same. In any case, we always prefer the second.

10.3 Properties of the Fourier Transform

We now consider a variety of familiar operations on a signal x(t), and interpret the effect of these operations on the corresponding Fourier transform X(ω). Of course, we should verify that each operation considered yields a signal that has a Fourier transform. Often this is obvious, and will not be mentioned. For generalized functions, or signals such as periodic signals that have generalized-function transforms, further interpretation typically is needed. In any case, existence of the Fourier transform is assumed. The proofs we offer of the various properties are close to being rigorous for ordinary signals, for example those satisfying the Dirichlet condition, but care is needed in a couple of cases. Throughout we use the following notation for the Fourier transform and inverse Fourier transform, where F denotes a "Fourier transform operator:"

X(ω) = F[x(t)] = ∫_{−∞}^{∞} x(t) e^{−jωt} dt

x(t) = F⁻¹[X(ω)] = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω

Linearity  If F[x(t)] = X(ω) and F[z(t)] = Z(ω), then for any constant a,

F[x(t) + a z(t)] = X(ω) + a Z(ω)

This property follows directly from the definition.

Time Shifting  If F[x(t)] = X(ω), then for any constant to,

F[x(t − to)] = e^{−jωto} X(ω)

The calculation verifying this is by now quite standard. Begin with

F[x(t − to)] = ∫_{−∞}^{∞} x(t − to) e^{−jωt} dt

and change integration variable from t to τ = t − to to obtain

F[x(t − to)] = ∫_{−∞}^{∞} x(τ) e^{−jω(τ+to)} dτ = e^{−jωto} ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ = e^{−jωto} X(ω)

Time Scaling  If F[x(t)] = X(ω), then for any constant a ≠ 0,

F[x(at)] = (1/|a|) X(ω/a)

This is another familiar calculation, beginning with

F[x(at)] = ∫_{−∞}^{∞} x(at) e^{−jωt} dt

though the cases of positive and negative a are conveniently handled separately. If a > 0, the variable change from t to τ = at yields

F[x(at)] = ∫_{−∞}^{∞} x(τ) e^{−jω(τ/a)} (1/a) dτ = (1/a) X(ω/a)

If a < 0, then it is convenient to write a = −|a| and use the variable change τ = −|a| t as follows:

F[x(at)] = ∫_{∞}^{−∞} x(τ) e^{−jω(τ/a)} (−1/|a|) dτ = (1/|a|) ∫_{−∞}^{∞} x(τ) e^{−jω(τ/a)} dτ = (1/|a|) X(ω/a)

Finally we note that both cases are covered by the claimed formula.

Example  An interesting case is a = −1, and the scaling property yields

F[x(−t)] = X(−ω)

As noted previously, |X(−ω)| = |X(ω)|, and therefore we notice the interesting fact that a signal and its time reversal have the same magnitude spectra!

Example  When both a scale and a shift are involved, the safest approach is to work out the result from the basic definition. To illustrate, the Fourier transform of x(3t − 2) can be computed in terms of the transform of x(t) via the variable change τ = 3t − 2 as shown in the following:

F[x(3t − 2)] = ∫_{−∞}^{∞} x(3t − 2) e^{−jωt} dt = ∫_{−∞}^{∞} x(τ) e^{−jω(τ+2)/3} (1/3) dτ
             = (1/3) e^{−j2ω/3} ∫_{−∞}^{∞} x(τ) e^{−j(ω/3)τ} dτ = (1/3) e^{−j2ω/3} X(ω/3)
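A small numerical check of the scaling property for a negative scale factor (added here, not from the notes): take x(t) = e^{−3t}u(t) with X(ω) = 1/(3 + jω) and a = −2, so that x(−2t) = e^{6t}u(−t), whose transform works out by hand to 1/(6 − jω):

```python
import numpy as np

X = lambda w: 1.0 / (3.0 + 1j * w)      # transform of x(t) = e^{-3t} u(t)
a = -2.0

for w in np.linspace(-5.0, 5.0, 11):
    direct = 1.0 / (6.0 - 1j * w)       # transform of x(-2t) = e^{6t} u(-t), computed by hand
    scaled = (1.0 / abs(a)) * X(w / a)  # (1/|a|) X(w/a) from the scaling property
    assert abs(direct - scaled) < 1e-12
print("scaling property holds for a = -2")
```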

Differentiation  If F[x(t)] = X(ω), and the time-derivative signal ẋ(t) has a Fourier transform, then

F[ẋ(t)] = jω X(ω)

To justify this property, we directly compute the transform, using integration-by-parts:

F[ẋ(t)] = ∫_{−∞}^{∞} ẋ(t) e^{−jωt} dt = x(t) e^{−jωt} |_{−∞}^{∞} − ∫_{−∞}^{∞} x(t)(−jω) e^{−jωt} dt = jω X(ω)

To make this rigorous, we need to justify the fact that x(t) approaches zero as t → ±∞. When x(t) satisfies the Dirichlet condition, the fact is clear, though in other cases it is less so.

Remark  When we apply this property to signals that are not, strictly speaking, differentiable, generalized calculus must be used. For example, beginning with the easily-verified transform

F[x(t)] = F[e^{−t} u(t)] = 1/(1 + jω)

the differentiation property gives

F[ẋ(t)] = jω/(1 + jω)

To check this, we first compute ẋ(t), using generalized differentiation:

ẋ(t) = (d/dt)(e^{−t} u(t)) = −e^{−t} u(t) + e^{−t} δ(t) = −e^{−t} u(t) + δ(t)

Then the Fourier transform is easy by linearity:

F[ẋ(t)] = −1/(1 + jω) + 1 = jω/(1 + jω)

Integration  If F[x(t)] = X(ω), then

F[ ∫_{−∞}^{t} x(τ) dτ ] = (1/jω) X(ω) + π X(0) δ(ω)

This property is difficult to derive, but we can observe that the running integral is the inverse of differentiation, except for an undetermined additive constant. Thus the first term is expected, and the second term accounts for the constant.

Example  The relationships

F[δ(t)] = 1,   u(t) = ∫_{−∞}^{t} δ(τ) dτ

and the integration property give the Fourier transform of the unit-step function as

F[u(t)] = 1/(jω) + π δ(ω)

We can check this with the differentiation property (using generalized differentiation):

F[δ(t)] = F[u̇(t)] = jω [1/(jω) + π δ(ω)] = 1 + jπω δ(ω) = 1

though the rule for multiplying an impulse with an ordinary function must be invoked.
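The generalized-differentiation check above reduces to a complex-arithmetic identity, spot-checked here numerically (an added sketch, not in the notes):

```python
import numpy as np

for w in np.linspace(-10.0, 10.0, 21):
    lhs = 1j * w / (1.0 + 1j * w)        # jw X(w) from the differentiation property
    rhs = -1.0 / (1.0 + 1j * w) + 1.0    # F[-e^{-t}u(t)] + F[delta(t)] by linearity
    assert abs(lhs - rhs) < 1e-12
print("differentiation property consistent with the generalized derivative")
```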

However. though. The effect of various operations on a signal and on the magnitude and phase spectra of the signal can be explored using the Web demonstration below.Example The properties we are discussing sometimes can be used in clever ways to compute Fourier transforms based on a small table of known transforms. sometimes the answer can appear in a form where interpretation is required to simplify the result. Convolution If x(t ) and h(t ) have Fourier transforms X (ω ) and H (ω ) . again.4 Convolution Property and Frequency Response of LTI Systems Perhaps the most important property of the Fourier transform is that convolution in the time domain becomes multiplication of transforms. To illustrate. x(t ) = u (t + 1) − u (t − 1) Using linearity and time-shift properties. This means that for many purposes it is convenient to view LTI systems in the Fourier (frequency) domain. application of the rule for multiplying and impulse function by an ordinary function is involved. consider the simple rectangular pulse. then F [( x ∗ h)(t )] = X (ω ) H (ω ) A direct calculation involving a change in order of integration can be used to establish this property: ∞ F [( x ∗ h)(t )] = ∫ ( x ∗ h)(t )e− jω t dt ⎤ = ∫ ⎢ ∫ x(τ )h(t − τ ) dτ ⎥ e− jω t dt ⎥ −∞ ⎢ −∞ ⎣ ⎦ ∞ ⎡∞ ⎤ = ∫ x(τ ) ⎢ ∫ h(t − τ ) e− jω t dt ⎥dτ ⎢ −∞ ⎥ −∞ ⎣ ⎦ We can change the integration variable in the inner integration from t to σ = t − τ to obtain ∞ ⎡∞ ⎤ F [( x ∗ h)(t )] = ∫ x(τ ) ⎢ ∫ h(σ ) e− jω (σ +τ ) dσ ⎥dτ ⎢ −∞ ⎥ −∞ ⎣ ⎦ −∞ ∞ ⎡∞ and factoring the exponential in τ out of the inner integration gives 127 . we can immediately write X (ω ) = e jω ⎡ 1 + πδ (ω ) ⎤ − e− jω ⎡ 1 + πδ (ω ) ⎤ ⎢ jω ⎥ ⎢ jω ⎥ ⎣ ⎦ ⎣ ⎦ = 1 ⎡e jω − e− jω ⎤ + π ⎡ e jω − e− jω ⎤ δ (ω ) jω ⎣ ⎦ ⎣ ⎦ = 2sin(ω ) ω = 2sinc(ω /π ) This agrees with our earlier conclusion. CTFT Properties 10.

F[(x ∗ h)(t)] = ∫_{−∞}^{∞} x(τ) e^{−jωτ} H(ω) dτ = X(ω) H(ω)

Remark  We did not check that the convolution of Fourier transformable signals yields a Fourier transformable signal. This in fact is true for signals that satisfy the Dirichlet condition, but difficulties can arise when some of our signals with generalized-function transforms are involved. For example, the transforms of the signals x(t) = 1 and h(t) = u(t) are

X(ω) = 2π δ(ω),   H(ω) = 1/(jω) + π δ(ω)

The convolution (x ∗ h)(t) is undefined in this case. Fortunately the product of the transforms indicates that something is amiss, in that the square of an impulse function at ω = 0 appears, and also an impulse multiplied by a function that is dramatically discontinuous at ω = 0. But such a clear indication is not always provided.

Example  For a stable LTI system, the eigenfunction property states that the response to x(t) = e^{jωo t} is y(t) = H(ωo) e^{jωo t}, where

H(ωo) = ∫_{−∞}^{∞} h(t) e^{−jωo t} dt

In terms of Fourier transforms, the convolution property implies that the response to X(ω) = 2π δ(ω − ωo) is

Y(ω) = H(ω) 2π δ(ω − ωo) = H(ωo) 2π δ(ω − ωo)

where H(ω) is the Fourier transform of the unit-impulse response h(t). This is simply the eigenfunction property in terms of Fourier transforms: the output transform is a constant (typically complex) multiple of the input transform.

This example is easily extended to represent the response of a stable LTI system to a periodic input signal. We view the periodic input signal in terms of its Fourier series,

x(t) = Σ_{k=−∞}^{∞} Xk e^{jkωo t}

Then we immediately have

X(ω) = Σ_{k=−∞}^{∞} 2π Xk δ(ω − kωo)

and

Y(ω) = H(ω) X(ω) = Σ_{k=−∞}^{∞} 2π Xk H(ω) δ(ω − kωo) = Σ_{k=−∞}^{∞} 2π Xk H(kωo) δ(ω − kωo)

The inverse Fourier transform gives

y(t) = Σ_{k=−∞}^{∞} Xk H(kωo) e^{jkωo t}

which typically is the Fourier series representation of the output signal.
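To see the convolution property at work numerically (an added example with signals chosen here, not taken from the notes), let x(t) = e^{−t}u(t) and h(t) = e^{−2t}u(t). Then X(ω)H(ω) = 1/[(1+jω)(2+jω)] = 1/(1+jω) − 1/(2+jω), whose inverse transform is (e^{−t} − e^{−2t})u(t); a discretized convolution agrees with this closed form:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)
x = np.exp(-t)                          # e^{-t} u(t)
h = np.exp(-2.0 * t)                    # e^{-2t} u(t)

# Trapezoid-rule approximation of the continuous convolution (x*h)(t)
raw = np.convolve(x, h)[: t.size]
conv = dt * (raw - 0.5 * (x[0] * h + h[0] * x))

closed = np.exp(-t) - np.exp(-2.0 * t)  # inverse transform of X(w)H(w)
assert np.max(np.abs(conv - closed)) < 1e-4
print("time-domain convolution matches the product of transforms")
```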

If an LTI system is stable, then the frequency response function H(ω), which we now recognize as the Fourier transform of the unit-impulse response, is well defined, and, assuming the input signal has a Fourier transform, we can view the input-output behavior of the system in terms of

Y(ω) = H(ω) X(ω)

Then the magnitude spectrum of the output signal is related to the magnitude spectrum of the input signal by

|Y(ω)| = |H(ω)| |X(ω)|

Thus the magnitude of the frequency response function can be viewed as a frequency-dependent gain of the system.

Example  An LTI system is said to exhibit distortionless transmission if the output signal is simply an amplitude scaled (positively) and time-delayed version of the input signal. That is, there are positive constants a and to such that for any input x(t) the output signal is y(t) = a x(t − to). Then

Y(ω) = a e^{−jωto} X(ω)

and thus we see that for distortionless transmission the frequency response function has the form

H(ω) = Y(ω)/X(ω) = a e^{−jωto}

Often this is stated as: the frequency response function must have "flat magnitude" and phase that is a linear function of frequency, at least for the frequency range of interest.

Example  An ideal filter should transmit without distortion all frequencies in the specified frequency range, and remove all frequencies outside this range. For example, an ideal low-pass filter should have the frequency response function

H(ω) = e^{−jωto} for |ω| ≤ ωc,   H(ω) = 0 for |ω| > ωc

In this expression, ωc > 0 is the cutoff frequency, and for convenience we have set the constant gain to unity. To interpret this filter in the time domain, we can compute the impulse response via the inverse Fourier transform:

h(t) = (1/2π) ∫_{−∞}^{∞} H(ω) e^{jωt} dω = (1/2π) ∫_{−ωc}^{ωc} e^{jω(t−to)} dω = (ωc/π) sinc[ωc(t − to)/π]

A key point is that no matter how large we permit the delay time to to be, this unit-impulse response is not right sided, and thus the ideal low-pass filter is not a causal system. Despite this drawback, the ideal low-pass filter remains a useful concept.

Example  The LTI system described by

ẏ(t) + ωc y(t) = ωc x(t)

with ωc > 0, is a low-pass filter that can be implemented by an R-C circuit. Simple, familiar calculations give

H(ω) = ωc/(ωc + jω)

Expressing this frequency response function in polar form, as

H(ω) = [ωc/√(ωc² + ω²)] e^{−j tan⁻¹(ω/ωc)}

the characteristics of this filter can be compared to the ideal low-pass filter via the magnitude and phase plots shown below. (For the ideal filter, for illustration, we choose to = π/(4ωc) to obtain an angle of −π/4 at ω = ωc.) Clearly the first-order differential equation is a rather poor approximation to an ideal filter, but higher-order differential equations can perform closer to the ideal.

10.5 Additional Fourier Transform Properties

Frequency-Domain Convolution  If x(t) and z(t) have Fourier transforms X(ω) and Z(ω), then

F[x(t) z(t)] = (1/2π) ∫_{−∞}^{∞} X(ξ) Z(ω − ξ) dξ

That is, the Fourier transform of a product of signals is the convolution of the transforms, (1/2π)(X ∗ Z)(ω). To prove this property, we directly compute the inverse transform of (1/2π)(X ∗ Z)(ω):

F⁻¹[(1/2π)(X ∗ Z)(ω)] = (1/2π) ∫_{−∞}^{∞} [ (1/2π) ∫_{−∞}^{∞} X(ξ) Z(ω − ξ) dξ ] e^{jωt} dω
                      = (1/2π) ∫_{−∞}^{∞} X(ξ) (1/2π) ∫_{−∞}^{∞} Z(ω − ξ) e^{jωt} dω dξ

Changing the variable of integration in the inner integral from ω to η = ω − ξ gives

F⁻¹[(1/2π)(X ∗ Z)(ω)] = (1/2π) ∫_{−∞}^{∞} X(ξ) (1/2π) ∫_{−∞}^{∞} Z(η) e^{j(η+ξ)t} dη dξ
                      = (1/2π) ∫_{−∞}^{∞} X(ξ) e^{jξt} [ (1/2π) ∫_{−∞}^{∞} Z(η) e^{jηt} dη ] dξ
                      = (1/2π) ∫_{−∞}^{∞} X(ξ) e^{jξt} z(t) dξ = x(t) z(t)

Example  The most important application of this property is the so-called modulation or frequency shifting property. If z(t) = e^{jωo t}, then the sifting property of impulses gives

F[e^{jωo t} x(t)] = (1/2π) ∫_{−∞}^{∞} X(ξ) 2π δ(ω − ξ − ωo) dξ = X(ω − ωo)

We can illustrate this property by considering the structure of AM radio signals.

Amplitude Modulated Signals  An amplitude modulated (AM) signal of the most basic type has the form

x(t) = [1 + k m(t)] cos(ωc t)

where cos(ωc t) is called the carrier signal, m(t) is called the message signal, and the constant k is called the modulation index. We assume that the modulation index is such that 1 + k m(t) ≥ 0, for all t. We also assume that the message signal is bandlimited by ωm, where ωm ≪ ωc. That is, the Fourier transform M(ω) of the message signal is zero outside the frequency range −ωm ≤ ω ≤ ωm, and we will represent the magnitude spectrum of the message signal as shown below. These are standard situations in practice.

We could sketch a typical case in the time domain: x(t) has the form of a high-frequency (rapidly oscillating) sinusoid with a relatively slowly-varying amplitude envelope. But such a sketch would be rather uninformative as to the special properties of AM signals that make them so useful in communications. To reveal these properties, we turn to the frequency domain via the Fourier transform. Using the Fourier transform

F[cos(ωc t)] = π δ(ω − ωc) + π δ(ω + ωc)

and the frequency-domain convolution

F[m(t) cos(ωc t)] = (1/2π) ∫_{−∞}^{∞} M(ξ)[π δ(ω − ξ − ωc) + π δ(ω − ξ + ωc)] dξ
                  = (1/2) M(ω − ωc) + (1/2) M(ω + ωc)

we conclude that the transform of the AM signal is

X(ω) = π δ(ω − ωc) + π δ(ω + ωc) + (k/2) M(ω − ωc) + (k/2) M(ω + ωc)

To see the properties of an AM signal, we consider the magnitude spectrum, |X(ω)|. Typically it is difficult to compute the magnitude of a sum, but because of the assumptions on the highest message frequency and the carrier frequency, at most one term in the sum is nonzero at every frequency, except at the carrier frequency. (At the carrier frequency, we have an impulse and an ordinary value.) Because of this special structure of the terms, the magnitude of this particular sum essentially is the sum of the magnitudes, and we can display this situation graphically in the obvious way.

From this plot it is clear that AM modulation is used to shift the spectral content of a message to a frequency range reserved for a particular transmitter. By assigning different carrier frequencies to different transmitters, with a separation of at least 2ωm in the different carriers, the messages are kept distinct.

Parseval's Theorem  If x(t) is a real energy signal with Fourier transform X(ω), then the total energy of the signal is given by

∫_{−∞}^{∞} x²(t) dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω

To establish this result, we substitute the inverse Fourier transform expression for one of the x(t)'s in the time-domain energy expression to obtain

∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} x(t) (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω dt = (1/2π) ∫_{−∞}^{∞} X(ω) ∫_{−∞}^{∞} x(t) e^{jωt} dt dω

The inner integral in this expression can be recognized as the conjugate of the Fourier transform of x(t), X*(ω), and therefore

∫_{−∞}^{∞} x²(t) dt = (1/2π) ∫_{−∞}^{∞} X(ω) X*(ω) dω = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω

The importance of Parseval's theorem is that energy can be associated with frequency content.
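Parseval's theorem can be spot-checked numerically (an added sketch, not part of the notes) with the earlier example x(t) = e^{−3t}u(t): the time-domain energy is ∫e^{−6t}dt = 1/6, and (1/2π)∫dω/(9 + ω²) approaches the same value:

```python
import numpy as np

# time-domain energy of x(t) = e^{-3t} u(t)
t = np.linspace(0.0, 10.0, 100001)
x2 = np.exp(-6.0 * t)                                   # x(t)^2
e_time = np.sum((x2[1:] + x2[:-1]) / 2.0) * (t[1] - t[0])

# frequency-domain energy: (1/2pi) * integral of |1/(3+jw)|^2 = 1/(9+w^2)
w = np.linspace(-2000.0, 2000.0, 2000001)
S = 1.0 / (9.0 + w**2)
e_freq = np.sum((S[1:] + S[:-1]) / 2.0) * (w[1] - w[0]) / (2.0 * np.pi)

assert abs(e_time - 1.0 / 6.0) < 1e-6
assert abs(e_freq - 1.0 / 6.0) < 1e-3
print("both sides of Parseval's theorem give 1/6")
```

The frequency-domain integral is truncated at ±2000, which accounts for the looser tolerance there.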

For example,

(1/2π) ∫_{−3}^{3} |X(ω)|² dω

is the portion of the energy of x(t) that resides in the frequency band −3 ≤ ω ≤ 3.

Duality Property  If x(t) has Fourier transform X(ω), then

F[X(t)] = 2π x(−ω)

This property can be recognized from an inspection of the Fourier and inverse Fourier transform expressions. However, we will be pedantic and list out the appropriate sequence of variable changes. Beginning with

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω

change the variable of integration from ω to t̂ and then replace the variable t by −ω. This gives

x(−ω) = (1/2π) ∫_{−∞}^{∞} X(t̂) e^{−jω t̂} dt̂

Now change variables from t̂ to t (erase the hats) to obtain

x(−ω) = (1/2π) ∫_{−∞}^{∞} X(t) e^{−jωt} dt = (1/2π) F[X(t)]

One use of the duality property is in reading tables of Fourier transforms backwards to generate additional Fourier transforms!

Examples  Since F[δ(t)] = 1, the duality property gives F[1] = 2π δ(−ω) = 2π δ(ω). A more usual example is that since

F[e^{−t} u(t)] = 1/(1 + jω)

the duality property gives

F[1/(1 + jt)] = 2π e^{ω} u(−ω)

Thus we see that since many Fourier transforms are complex, the duality property often provides Fourier transforms for complex time signals.

10.6 Inverse Fourier Transform

Given a Fourier transform X(ω), one approach to computing the corresponding time signal is via the inverse transform formula,

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω

Another approach is table lookup, making use of the wide collection of tables of Fourier transform pairs that have been established. However, it turns out that in many situations X(ω) can be written in the form of a proper rational function in the argument (jω), where the term

proper refers to a rational function in which the degree of the numerator polynomial is no greater than the degree of the denominator polynomial. Thus we have

X(ω) = [bn (jω)ⁿ + b_{n−1} (jω)ⁿ⁻¹ + ⋯ + b0] / [(jω)ⁿ + a_{n−1} (jω)ⁿ⁻¹ + ⋯ + a0]

This general form is not in the tables, but we can use partial-fraction expansion to write X(ω) as a sum of the simpler rational functions that are listed in all tables. The linearity property and table lookup then yield the corresponding x(t). Any of the partial-fraction expansion methods can be used, but the denominator polynomial must be put in factored form, and for many it is convenient to switch from the argument (jω) to a more convenient notation. Also, in the table lookup phase, some creativity might be required to recognize the inverse transforms of various terms. To explain some of the mechanics, and mild subtleties that arise, it is convenient to consider simple examples. The first example addresses the issue that often a Fourier transform does not present itself in the nicest form.

Example 1  Given

X(ω) = 1 / [−2a ω² + j(a² ω − ω³)]

we can use the substitution ωᵏ = (jω)ᵏ / jᵏ as follows:

X(ω) = 1 / [−2a (jω)²/j² + j( a² (jω)/j − (jω)³/j³ )] = 1 / [(jω)³ + 2a (jω)² + a² (jω)]

Example 2  To write the Fourier transform in Example 1 in more convenient notation, we substitute s for (jω), and proceed as follows (assuming a ≠ 0):

1/(s³ + 2a s² + a² s) = 1/[s(s + a)²] = (1/a²)/s − (1/a²)/(s + a) − (1/a)/(s + a)²

Thus

X(ω) = (1/a²)/(jω) − (1/a²)/(a + jω) − (1/a)/(a + jω)²

The second and third terms can be found in even the shortest table, but the first term might require interpretation. From the standard transforms

F[u(t)] = 1/(jω) + π δ(ω),   F[1] = 2π δ(ω)

and the linearity property, we see that 1/(jω) corresponds to the time signal

u(t) − 1/2 = (1/2) sgn(t)

where we have written the result in terms of the so-called signum function, which is 1/2 − (−1/2) = 1 for t > 0 and −1 for t < 0. Thus we have, if a > 0,

x(t) = (1/(2a²)) sgn(t) − (1/a²) e^{−at} u(t) − (1/a) t e^{−at} u(t)

again under the assumption that a is positive.
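The expansion in Example 2 can be confirmed by evaluating both sides at a few points s = jω (a quick added check; the value a = 4 is an arbitrary choice made here):

```python
# Verify: 1/(s(s+a)^2) = (1/a^2)/s - (1/a^2)/(s+a) - (1/a)/(s+a)^2
a = 4.0
for w in (-3.0, 0.7, 10.0):
    s = 1j * w
    lhs = 1.0 / (s * (s + a) ** 2)
    rhs = 1.0 / (a**2 * s) - 1.0 / (a**2 * (s + a)) - 1.0 / (a * (s + a) ** 2)
    assert abs(lhs - rhs) < 1e-12
print("partial-fraction expansion of 1/(s(s+a)^2) verified")
```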

This brings up an additional manipulation. Perusal of tables of Fourier transforms indicates that most of the simple rational transforms that are covered are strictly-proper rational functions of (jω), that is, the numerator degree is strictly less than the denominator degree. When a transform is proper but not strictly proper, it is convenient to first divide the denominator polynomial into the numerator polynomial to write X(ω) as a constant plus a strictly-proper rational function.

Example 3  Given

X(ω) = [(jω)² + 2(jω) + 2] / [(jω)² + 2(jω) + 1]

this is another calculation where a change of variable to, say, s instead of (jω) might be convenient. In any case, it is easy to verify that the result is

X(ω) = 1 + 1/[(jω)² + 2(jω) + 1] = 1 + 1/(1 + jω)²

This gives, from standard tables,

x(t) = δ(t) + t e^{−t} u(t)

Remark  We will have a standard table for use in 520.214. This table, reprinted below and linked on the course webpage, will be provided in exams. Use of any other table is not permitted; anything not on the official table must be derived from the official table or from first principles.

Official 520.214 CT Fourier Transform Table

x(t)                                      X(ω)
δ(t)                                      1
1                                         2πδ(ω)
e^{jωo t}                                 2πδ(ω − ωo)
cos(ωo t)                                 πδ(ω − ωo) + πδ(ω + ωo)
sin(ωo t)                                 −jπδ(ω − ωo) + jπδ(ω + ωo)
u(t)                                      1/(jω) + πδ(ω)
e^{−at} u(t), a > 0                       1/(a + jω)
t e^{−at} u(t), a > 0                     1/(a + jω)²
e^{−a|t|}, a > 0                          2a/(a² + ω²)
u(t + T1) − u(t − T1)                     2T1 sin(ωT1)/(ωT1)
Σ_{k=−∞}^{∞} δ(t − kT)                    ωo Σ_{k=−∞}^{∞} δ(ω − kωo), ωo = 2π/T
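The polynomial division in Example 3 is a one-line identity, since (jω)² + 2(jω) + 1 = (1 + jω)²; a numerical spot-check (added here) at a few values of ω:

```python
# Verify: (s^2 + 2s + 2)/(s^2 + 2s + 1) = 1 + 1/(1+s)^2 for s = jw
for w in (-2.0, 0.5, 3.0):
    s = 1j * w
    lhs = (s**2 + 2 * s + 2) / (s**2 + 2 * s + 1)
    rhs = 1.0 + 1.0 / (1.0 + s) ** 2
    assert abs(lhs - rhs) < 1e-12
print("Example 3 division verified")
```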

10.7 Fourier Transform and LTI Systems Described by Differential Equations

If a system is described by a first-order, linear differential equation,

ẏ(t) + a y(t) = b x(t),   −∞ < t < ∞

with a > 0, then from Section 6.6 we have that the system is linear and time invariant, and the unit-impulse response is given by

h(t) = b e^{−at} u(t)

Therefore we can readily compute the frequency response function of the system,

H(ω) = b/(a + jω)

However, this is valid only if the system is stable, that is, a > 0. (If a < 0, then the Fourier transform of h(t) does not exist, and if a = 0, then the Fourier transform has a different form.)

More directly, we can express the relation between time signals in the differential equation as a relation between Fourier transforms. Letting X(ω) = F[x(t)] and Y(ω) = F[y(t)], linearity and the differentiation property give

jω Y(ω) + a Y(ω) = b X(ω)

This can be solved algebraically to obtain

Y(ω) = b X(ω)/(a + jω)

From this we recognize H(ω), and the obvious inverse Fourier transform gives h(t), the unit-impulse response of the system. Again, this is valid only for a > 0; the stability condition is not explicit in the algebraic manipulations leading to the frequency response function, and a danger is that this condition is not apparent until the inverse Fourier transform is attempted.

Because H(ω) is a strictly-proper rational function, if the input signal has a proper rational Fourier transform, computation of the response y(t) is simply a matter of computing the inverse Fourier transform of

Y(ω) = H(ω) X(ω)

by partial-fraction expansion and table lookup. In other words, for a large class of input signals, the response computation is completely algebraic.

It should be clear that this approach applies to higher-order, linear differential equations that correspond to stable systems. The frequency response function in such a case can be written in the form

H(ω) = [bm (jω)ᵐ + ⋯ + b1 (jω) + b0] / [(jω)ⁿ + a_{n−1} (jω)ⁿ⁻¹ + ⋯ + a0]

where m < n. So, again, computation of the response to a large class of input signals is completely algebraic (though checking the stability condition is more subtle, and is omitted).

Example  When the input signal has a Fourier transform that is not a proper rational function, the calculations become slightly more complicated and involve some recognition of combinations of terms. Suppose a = 2, b = 1, and the input signal is a unit step function. Then

(Again. the negative sign on the feedback line at the summer is traditional. something that we were unable to accomplish in the time domain. Namely. for additive parallel connections. in these cases it is clear that stability of the overall system follows from stability of the individual subsystems. but at least the Fourier transform representation permits us to achieve an explicit representation for the overall system. where the overall unit-impulse response is the convolution of the subsystem unit-impulse responses. The situation is more complicated for the feedback connection of stable LTI systems.) 137 . the feedback connection below gives the following algebraic relationship between the Fourier transforms of the input and output signals. Assuming this. table lookup gives the output signal as y (t ) = 1 u (t ) − 1 e −2t u (t ) 2 2 10. and for cascade connections.8 Fourier Transform and Interconnections of LTI Systems Interconnections of stable LTI systems are conveniently described in terms of frequency response functions. block diagram equivalences in terms of frequency response functions follow from the time domain results. though it must be guaranteed that the overall system also is stable for the overall frequency response function to be meaningful.Y (ω ) = = ⎤ 1 ⎡ 1 1 π ⎢ jω + π δ (ω ) ⎥ = ( jω )(2 + jω ) + 2 + jω δ (ω ) 2 + jω ⎣ ⎦ 1/ 2 jω − 1/ 2 2 + jω + π 2 δ (ω ) Grouping together the first and last terms. Beginning with the output. where the overall unit-impulse response is the sum of the subsystem unit-impulse responses. we immediately have Of course. at least for the first two cases.

Y(ω) = H1(ω) [X(ω) − H2(ω) Y(ω)] = H1(ω) X(ω) − H1(ω) H2(ω) Y(ω)

Solving for Y(ω) by algebraic manipulation gives

Y(ω) = H1(ω) X(ω) / [1 + H1(ω) H2(ω)]

That is, the feedback connection above is equivalent to a single block with frequency response function H1(ω)/[1 + H1(ω) H2(ω)]. Of course the overall system, called the closed-loop system in this context, must be stable for the frequency response function shown to be meaningful. Unfortunately, the feedback connection of stable systems does not always yield a stable closed-loop system, so that further pursuit of this topic first requires the development of stability criteria for feedback systems.

Example  If

H1(ω) = 3/(2 + jω),   H2(ω) = k

where k is a constant, then the frequency response of the closed-loop system is

Hcl(ω) = [3/(2 + jω)] / [1 + 3k/(2 + jω)] = 3/[(3k + 2) + jω]

This is a valid frequency response if k > −2/3, in which case

hcl(t) = 3 e^{−(3k+2)t} u(t)

Indeed, by choice of k we can achieve arbitrarily fast exponential decay of the closed-loop system's unit-impulse response! But it is important to note that for k < −2/3 the closed-loop system is not stable. Thus it does not have a meaningful frequency response function, and the Hcl(ω) we computed is a fiction.

Exercises

1. From the basic definition, compute the Fourier transforms of the signals
(a) x(t) = e^{−(t−2)} u(t − 3)
(b) x(t) = e^{−|t+1|}
(c) x(t) = 0 for t ≤ 0; x(t) = 2 for 0 < t < 1; x(t) = 2e^{−(t−1)} for t ≥ 1
(d) x(t) = Σ_{k=0}^{∞} aᵏ δ(t − k), |a| < 1
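Returning to the feedback example above, the general formula Hcl(ω) = H1(ω)/[1 + H1(ω)H2(ω)] can be checked against the simplified expression 3/[(3k + 2) + jω] numerically (an added check; k = 1 is an arbitrary stable choice made here):

```python
import numpy as np

k = 1.0                                   # any k > -2/3 gives a stable closed loop
H1 = lambda w: 3.0 / (2.0 + 1j * w)       # forward path; feedback path is H2(w) = k

for w in np.linspace(-10.0, 10.0, 21):
    Hcl = H1(w) / (1.0 + H1(w) * k)       # general feedback formula
    assert abs(Hcl - 3.0 / ((3.0 * k + 2.0) + 1j * w)) < 1e-12
print("closed-loop frequency response matches 3/((3k+2) + jw)")
```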

2. By inspection of the defining formulas for the Fourier transform and inverse Fourier transform, compute the signals corresponding to the Fourier transforms
(a) X(ω) = 2π e^{−|ω|}
(b) |X(ω)| = 2π for −2 ≤ ω ≤ 2, 0 else; ∠X(ω) = −ω for −2 ≤ ω ≤ 2, 0 else
(c) X(ω) = e^{−ω} u(ω)
(d) X(ω) specified by the sketches below:
3. Compute the Fourier transforms of the signals
(a) x(t) = Σ_{k=−∞}^{∞} 2(−1)^k δ(t − 3k)
(b) x(t) = Σ_{k=−∞}^{∞} e^{−(t−3k)} [u(t − 3k) − u(t − 3k − 1)]
4. Without computing the Fourier transform, evaluate the following quantities for the signal shown below.
(a) ∫_{−∞}^{∞} X(ω) dω
(b) ∫_{−∞}^{∞} X(ω) e^{jω} dω
(c) X(0)
5. Compute the Fourier transform of the signal x(t) shown below, and use the properties of the Fourier transform to determine the transforms of the following signals without calculation.
(a)

(b)
(c)
6. A signal x(t) has the Fourier transform
X(ω) = (3 − jω)/(3 + jω)
(a) Sketch the magnitude spectrum of the signal.
(b) Sketch the phase spectrum of the signal.
(c) Find the signal x(t) by using the properties of the Fourier transform.
7. From the basic definitions and properties of the Fourier transform and inverse Fourier transform, answer the following questions about the transform of the signal shown below (without calculating the transform).
(a) What is ∠X(ω)? (Hint: An even signal has a real Fourier transform.)
(b) What is X(0)?
(c) What is ∫_{−∞}^{∞} X(ω) dω?
8. Given that the Fourier transform of the signal x(t) = t e^{−2t} u(t) is

X(ω) = 1/(2 + jω)²

sketch the magnitude and phase spectra for the signals
(a) y(t) = d/dt x(t)
(b) y(t) = ∫_{−∞}^{t} x(τ) dτ
(c) y(t) = x(−2t + 4)
(d) y(t) = 2x′(t) + x(t)
9. Two LTI systems are specified by the unit-impulse responses h1(t) = −2δ(t) + 5e^{−2t} u(t) and h2(t) = 2t e^{−t} u(t). Compute the responses of the two systems to the input signal x(t) = cos(t).
10. An input signal x(t) applied to the LTI system with frequency response function
H(ω) = jω/(1 + jω)
yields the output signal
y(t) = δ(t) + 3e^{−t} u(t) − 7e^{−2t} u(t)
What is x(t)?
11. Suppose y(t) = x(t) cos(t) and the Fourier transform of y(t) is described in terms of unit-step functions as
Y(ω) = u(ω + 2) − u(ω − 2)
What is x(t)?
12. Suppose x(t) has Fourier transform described in terms of unit-ramp functions as
X(ω) = r(ω + 1) − 2r(ω) + r(ω − 1)
and suppose p(t) is periodic with fundamental frequency ωo and Fourier series coefficients Xk, k = 0, ±1, ±2, ….
(a) If y(t) = x(t) p(t), determine an expression for Y(ω).
(b) Sketch the amplitude spectrum of y(t) if p(t) = cos(t/2).
(c) Sketch the amplitude spectrum of y(t) if p(t) = cos(t).
13. Given that the Fourier transform of x(t) = e^{−|t|} is
X(ω) = 2/(1 + ω²)
compute and sketch the magnitude and phase spectra for
y(t) = e^{j3t} (d/dt) e^{−|t|}
14. Compute the Fourier transform for the signal

x(t) = sin(ωo t) u(t)
(Hint: You may have to use special properties of impulses in the calculation.)
15. A continuous-time LTI system is described by the frequency response function
H(ω) = 2/(2 − ω² + j3ω)
and the input signal has the Fourier transform
X(ω) = e^{−j3ω}
Compute the response y(t).
16. Use partial fraction expansion to compute the inverse Fourier transform for
(a) X(ω) = (5jω + 12)/((jω)² + 5jω + 6)
(b) X(ω) = 4/(3 − ω² + 4jω)
17. Compute the inverse Fourier transform for
(a) X(ω) = e^{j(π − πω)} (5 − ω² + j4ω) / ((9 − ω² + j6ω)(2 + jω))
(b) X(ω) = e^{−j2ω} + (10 + 10e^{−jω})/(2 − ω² + j3ω)
18. Compute the overall frequency response function H(ω) = Y(ω)/X(ω) for the systems shown below. (Of course, assume that the subsystems and the overall system are stable.)
(a)

Notes for Signals and Systems

11.1 Introduction to the Unilateral Laplace Transform

From Chapter 10 it is clear that there is one main limitation on the use of the Fourier transform: the signal must be such that the transform integral converges. It turns out that this limitation can be avoided, particularly for right-sided signals, by including a damping factor in the integral. For a right-sided, or unilateral, signal x(t), sometimes written as x(t)u(t) to emphasize that the signal is zero for t < 0, the unilateral Laplace transform is defined as

X(s) = ∫_{0−}^{∞} x(t) e^{−st} dt

Here s is a complex variable, often written in rectangular form using the standard notation s = σ + jω. The lower limit of integration is shown as 0− to emphasize the fact that an impulse or doublet at t = 0 is included in the range of integration, but often we leave this understood and simply write the lower limit as 0. If we consider σ = Re{s} > 0, then

|e^{−st}| = |e^{−σt} e^{−jωt}| = |e^{−σt}| → 0 as t → ∞

Therefore X(s) can be well defined even though x(t) does not go to zero as t goes to ∞. Indeed, if x(t) is of "exponential order," that is, if there exist real constants K, c such that

|x(t)| ≤ K e^{ct}, t ≥ 0

then X(s) exists if we think of s as satisfying Re{s} > c. In other words, for signals of exponential order the unilateral Laplace transform always exists for a half-plane of complex values of s. Because signals encountered in the sequel will always be of exponential order, or such that we can apply generalized function techniques to arrive at a transform (for example, the case of periodic signals), we can be a bit cavalier and ignore detailed analysis of the regions of convergence, confident in the knowledge that there is a whole half-plane of values of s for which X(s) is well defined.
Indeed, the actual numerical values of s for which the integral converges turn out to be of no interest for our purposes.
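As a quick sanity check of the definition, the defining integral can be approximated numerically for a simple decaying exponential and compared against its known closed form 1/(s + 3). The following sketch is not part of the original notes; the step size and truncation point are arbitrary choices.

```python
import math

# Approximate the unilateral Laplace integral X(s) = int_0^inf x(t) e^{-st} dt
# by a left-endpoint Riemann sum, truncated at t = T.  For x(t) = e^{-3t} u(t)
# the exact transform is 1/(s+3), valid on the half-plane Re{s} > -3.
def laplace_numeric(x, s, T=40.0, n=200000):
    dt = T / n
    return sum(x(k * dt) * math.exp(-s * k * dt) * dt for k in range(n))

for s in (0.5, 1.0, 2.0):
    assert abs(laplace_numeric(lambda t: math.exp(-3.0 * t), s) - 1 / (s + 3)) < 1e-3
print("Riemann sum matches 1/(s+3) at real test points")
```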

Example The signal x(t) = e^{t²} u(t) is not of exponential order, since for any given values of K and c,

e^{t²} > K e^{ct} for t sufficiently large

The signal x(t) = e^{5t} u(t) is of exponential order, and we can take K = 1, c = 5 to prove it.

Example The Laplace transform of x(t) = e^{−3t} u(t) is

X(s) = ∫_{0−}^{∞} e^{−3t} u(t) e^{−st} dt = ∫_{0}^{∞} e^{−(s+3)t} dt = [−1/(s + 3)] e^{−(s+3)t} |_{0}^{∞} = 1/(s + 3)

Here a half-plane of convergence is given by Re{s} > −3, and indeed this condition is crucial in evaluating the integral at the upper limit. As mentioned above, we will not insist on keeping track of the convergence region. For the signal x(t) = e^{3t} u(t), a similar calculation gives

X(s) = 1/(s − 3)

and the half-plane of convergence in this case is Re{s} > 3.

Example The Laplace transform of the impulse, x(t) = δ(t), is easily evaluated using the sifting property:

X(s) = ∫_{0−}^{∞} δ(t) e^{−st} dt = e^{−s·0} = 1

Example The Laplace transform of the unit step function is

X(s) = ∫_{0−}^{∞} u(t) e^{−st} dt = ∫_{0}^{∞} e^{−st} dt = 1/s

where in this case a half-plane of convergence is given by Re{s} > 0.

For signals involving generalized functions, the notion of exponential order does not apply, but in typical cases the special properties of generalized functions can be used to evaluate the Laplace transform.

Remark If a right-sided signal x(t) has a unilateral Laplace transform that converges for Re{s} = 0, then, taking σ = 0, we can write

X(s) = ∫_{0}^{∞} x(t) e^{−st} dt = ∫_{0}^{∞} x(t) e^{−jωt} dt = F[x(t)]

That is, the Laplace transform with σ = 0 is the Fourier transform for right-sided signals. (Sometimes this is written as X(s)|_{s=jω} = X(jω), which leads to a different notation for the Fourier transform than we have used. Namely, we write the Fourier transform as X(ω), rather than X(jω), absorbing the imaginary unit j into the function rather than displaying it in the argument. This unfortunate notational collision should be viewed as a mild inconvenience, and it should not be permitted to obscure the relationship between the Fourier and Laplace transforms of right-sided signals.)

Example For x(t) = e^{−3t} u(t) we see that the half-plane of convergence includes σ = 0, and from above we have

X(ω) = 1/(3 + jω)

For the unit-step function we have X(s) = 1/s, but in this case the region of convergence does not include σ = 0, and indeed the Fourier transform of the unit step is not simply 1/(jω).

11.2 Properties of the Unilateral Laplace Transform

We now consider a variety of familiar operations on a right-sided signal x(t), and interpret the effect of these operations on the corresponding unilateral Laplace transform X(s). We assume that signals are of exponential order so that existence of the Laplace transform is assured. Of course, the operations we consider must yield right-sided signals, and we should verify that each operation considered yields a signal that also is of exponential order. Often this is obvious, but care is needed in a couple of cases. Throughout we use the following notation for the Laplace transform, where L denotes a "Laplace transform operator":

X(s) = L[x(t)] = ∫_{0}^{∞} x(t) e^{−st} dt

Linearity If L[x(t)] = X(s) and L[z(t)] = Z(s), then for any constant a,

L[a x(t) + z(t)] = a X(s) + Z(s)

This property follows directly from the definition.

Time Delay If L[x(t)] = X(s), then for any constant to ≥ 0,

L[x(t − to) u(t − to)] = e^{−s to} X(s)

The calculation verifying this is by now quite standard. Begin with

L[x(t − to) u(t − to)] = ∫_{0}^{∞} x(t − to) u(t − to) e^{−st} dt

and change the integration variable from t to τ = t − to to obtain the result. Notice that we use the unit-step notation to make explicit the fact that the right-shifted signal, x(t − to), is zero for t < to.

Example For a rectangular pulse, x(t) = K u(t) − K u(t − to), to > 0, we can use linearity and the delay property to write

X(s) = K/s − (K/s) e^{−to s} = K (1 − e^{−to s})/s

Time Scaling If L[x(t)] = X(s), then for any constant a > 0,

L[x(at)] = (1/a) X(s/a)

This is another familiar calculation, and the details will be skipped. The assumption that a > 0 is required so that the scaled signal is right sided.

The next two properties require a more careful interpretation of the lower limit in the Laplace transform definition, and we write that limit as 0−.

Differentiation If L[x(t)] = X(s), then

L[x′(t)] = s X(s) − x(0−)

To justify this property, directly compute the transform, using integration-by-parts:

L[x′(t)] = ∫_{0−}^{∞} x′(t) e^{−st} dt = x(t) e^{−st} |_{0−}^{∞} + ∫_{0−}^{∞} x(t) s e^{−st} dt

We assume that σ = Re{s} is such that x(t) e^{−st} → 0 as t → ∞, and we interpret the evaluation at the lower limit as x(0−) e^{−s·0−} = x(0−) to arrive at the claimed result.

Example Beginning with L[u(t)] = 1/s, the differentiation property confirms that

L[δ(t)] = s L[u(t)] − u(0−) = s (1/s) − 0 = 1

Note that we can iterate the differentiation property to obtain the Laplace transform for higher derivatives. For example,

L[x″(t)] = s L[x′(t)] − x′(0−) = s² L[x(t)] − s x(0−) − x′(0−)

Integration If L[x(t)] = X(s), and

z(t) = [∫_{0−}^{t} x(τ) dτ] u(t)

where the unit-step is appended simply for emphasis, then

Z(s) = (1/s) X(s)

The proof of this is another integration by parts, outlined below:

L[z(t)] = ∫_{0−}^{∞} [∫_{0−}^{t} x(τ) dτ] e^{−st} dt = [∫_{0−}^{t} x(τ) dτ] (−1/s) e^{−st} |_{0−}^{∞} + (1/s) ∫_{0−}^{∞} x(t) e^{−st} dt = (1/s) X(s)

The evaluations of the first term resulting from the integration-by-parts are both zero, but for different reasons. It can be shown that the running integral of a signal of exponential order is of exponential order, and so we can assume that σ = Re{s} is such that as t → ∞ the product of the running integral and the exponential goes to zero. The evaluation at t = 0− yields zero for more obvious reasons.

Convolution If x(t) and h(t) are right-sided signals with unilateral Laplace transforms X(s) and H(s), then the convolution (h ∗ x)(t) yields a right-sided signal that can be written as

y(t) = [∫_{0}^{t} x(τ) h(t − τ) dτ] u(t)

with Laplace transform

Y(s) = H(s) X(s)

The proof of this property is very similar to the Fourier-transform case, and therefore is omitted.

Final Value Theorem If L[x(t)] = X(s) and the limits lim_{t→∞} x(t) and lim_{s→0} s X(s) both exist, then

lim_{t→∞} x(t) = lim_{s→0} s X(s)

Rather than prove this result, we present an example that illustrates the danger in applying it recklessly.

Example A straightforward calculation gives

L[sin(t) u(t)] = ∫_{0}^{∞} sin(t) e^{−st} dt = ∫_{0}^{∞} [(e^{jt} − e^{−jt})/(2j)] e^{−st} dt = (1/2j)/(s − j) − (1/2j)/(s + j) = 1/(s² + 1)

Obviously,

lim_{s→0} s X(s) = lim_{s→0} s/(s² + 1) = 0
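The two limits in this cautionary example can be examined numerically. The following sketch (not part of the original notes) confirms that sX(s) tends to 0 along real s, while sin(t) keeps oscillating at large t:

```python
import math

# s X(s) = s/(s^2 + 1) goes to zero as s -> 0 along the real axis...
for s in (1e-1, 1e-3, 1e-6):
    assert abs(s / (s**2 + 1)) <= s

# ...but samples of sin(t) at large t still swing over nearly the full
# range [-1, 1], so lim_{t->inf} x(t) does not exist and the theorem's
# hypotheses fail.
samples = [math.sin(1000.0 + shift) for shift in (0.0, math.pi / 2, math.pi)]
assert max(samples) - min(samples) > 1.5
print("s X(s) -> 0, yet sin(t) u(t) never settles")
```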

However, lim_{t→∞} [sin(t) u(t)] does not exist, and we see that the Final Value Theorem can give misleading results when loosely applied!

11.3 Inverse Unilateral Laplace Transform

Inspection of the Laplace transforms we have computed indicates that the signals typically encountered have transforms that are strictly-proper rational functions, that is, ratios of polynomials in s with the degree of the numerator polynomial less than the degree of the denominator polynomial. As might be expected from the Fourier-transform case, partial fraction expansion, followed by table lookup, is the main tool for computing the time signal corresponding to a given transform. (There is a more general inverse transform formula, but it involves line integrals in the complex plane and we will not make use of it.)

Remark Shown below is the standard table of Laplace transforms for use in 520.214. Anything not on this official table must be derived from entries on the table or from the definition of the transform. Use of any other table in exams or homework assignments is not permitted.

Official 520.214 Unilateral Laplace Transform Table

x(t) : X(s)
δ(t) : 1
u(t) : 1/s
r(t) : 1/s²
e^{−at} u(t) : 1/(s + a)
t e^{−at} u(t) : 1/(s + a)²
cos(ωo t) u(t) : s/(s² + ωo²)
sin(ωo t) u(t) : ωo/(s² + ωo²)
e^{−at} cos(ωo t) u(t) : (s + a)/((s + a)² + ωo²)
e^{−at} sin(ωo t) u(t) : ωo/((s + a)² + ωo²)

We illustrate the calculation of inverse transforms with two examples.

Example Given

X(s) = (s² + s + 1)/(s² + 1)

which is a proper, but not strictly-proper, rational function, we can divide the numerator by the denominator to write

X(s) = 1 + s/(s² + 1)

Using linearity of the Laplace transform, we can treat the terms separately. Partial fraction expansion of the second term gives

s/(s² + 1) = s/((s + j)(s − j)) = (1/2)/(s + j) + (1/2)/(s − j)

From the table of transforms,

L⁻¹[s/(s² + 1)] = (1/2) e^{−jt} u(t) + (1/2) e^{jt} u(t) = cos(t) u(t)

Therefore

x(t) = δ(t) + cos(t) u(t)

Another case that is straightforward to handle is when there are "delay factors" in the transform.

Example Given

X(s) = (s e^{−2s} + e^{−s})/(s² + 1)

we can write

X(s) = e^{−2s} [s/(s² + 1)] + e^{−s} [1/(s² + 1)]

Using the linearity, delay, and derivative properties in conjunction with the previous example, we obtain

x(t) = cos(t − 2) u(t − 2) + sin(t − 1) u(t − 1)

11.4 Systems Described by Linear Differential Equations

Consider a system where the input and output signals are related by the first-order differential equation

y′(t) + a y(t) = b x(t)

Assuming that the input signal is right sided, and assuming that the initial condition at t = 0 is zero, the output signal is right sided and the system is linear and time invariant. (In particular, the assumption of zero initial condition is important, since an LTI system with identically zero input must have identically zero output.) In the setting of right-sided input and output signals, regardless of the values of the constants a and b, and in particular regardless of the stability property of the system, the system can be described in terms of unilateral Laplace transforms. From the basic definition, we can compute the Laplace transform of the unit-impulse response h(t) = b e^{−at} u(t) to obtain

H(s) = b/(s + a)
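The transform pair h(t) = b e^{−at} u(t) ↔ b/(s + a) can be cross-checked numerically. This is a sketch with arbitrary example values of a and b, not part of the original notes:

```python
import math

a, b = 2.0, 3.0  # example coefficients (arbitrary; a > 0 gives a stable system)

# Left-endpoint Riemann sum approximation of the unilateral Laplace integral.
def laplace_numeric(x, s, T=40.0, n=200000):
    dt = T / n
    return sum(x(k * dt) * math.exp(-s * k * dt) * dt for k in range(n))

# Compare the numerical transform of h(t) = b e^{-at} u(t) with b/(s+a).
for s in (0.5, 1.0, 4.0):
    approx = laplace_numeric(lambda t: b * math.exp(-a * t), s)
    assert abs(approx - b / (s + a)) < 1e-2
print("transform of the unit-impulse response matches b/(s+a)")
```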

Rather than the term frequency response function, this is called the transfer function of the system, and in terms of the Laplace transforms X(s) and Y(s) of the right-sided input and output signals the system is described by

Y(s) = H(s) X(s)

Another approach is to equate the Laplace transforms of the left and right sides of the differential equation. Using the linearity and differentiation properties gives

(s + a) Y(s) = b X(s)

Thus we obtain

Y(s)/X(s) = H(s) = b/(s + a)

and this approach has the advantage of not requiring knowledge of the unit-impulse response. If the input signal has a proper rational Laplace transform, then the output signal will have a proper rational Laplace transform, and we can solve for the response to a wide class of input signals by the algebraic process of partial fraction expansion and table lookup. Again, in contrast to the Fourier transform approach, an advantage of the Laplace-transform approach in this unilateral setting is that systems with unbounded input signals and/or output signals, or systems that are unstable, can be treated.

Example For the case where a = −1, b = 1, and where the input signal is x(t) = e^{3t} u(t), that is, an unstable system with an unbounded input signal, we immediately obtain

Y(s) = 1/((s − 1)(s − 3))

Partial fraction expansion easily leads to

y(t) = −(1/2) e^{t} u(t) + (1/2) e^{3t} u(t)

For systems described by higher-order linear differential equations, again with unilateral input and output signals and zero initial conditions,

y^{(n)}(t) + a_{n−1} y^{(n−1)}(t) + ··· + a_0 y(t) = b_{n−1} x^{(n−1)}(t) + ··· + b_0 x(t)

it is straightforward to equate the Laplace transforms of the right and left sides to show that the corresponding transfer function is

H(s) = (b_{n−1} s^{n−1} + ··· + b_1 s + b_0) / (s^n + a_{n−1} s^{n−1} + ··· + a_0)

This is a strictly-proper rational function of s, and the solution procedure for the output signal is again algebraic, though of course the roots of the denominator must be computed for the partial fraction expansion. Thus for input signals that have proper rational Laplace transforms, the output signal again has a proper rational Laplace transform.

11.5 Introduction to Laplace Transform Analysis of LTI Systems

We consider LTI systems with right-sided input signals in this section, and furthermore we assume that the system transfer function is a strictly-proper rational function. Thus we can think of the system as arising from a differential equation description, though that is not necessary. In any case, we introduce some standard methods based on the transfer function description of the system. When we write such a transfer function,

H(s) = (b_{n−1} s^{n−1} + ··· + b_1 s + b_0) / (s^n + a_{n−1} s^{n−1} + ··· + a_0)

we will assume that there are no common roots of the numerator and denominator polynomials; that is, the numerator and denominator polynomials are assumed to be relatively prime. This assumption is made to avoid equivalent forms of the transfer function or transform that superficially appear different.

Definition The poles of a rational transfer function, or more generally any strictly-proper rational Laplace transform, are the roots of the denominator polynomial, and the zeros are the roots of the numerator polynomial. In counting the poles and zeros, we use the standard terminology associated with repeated roots of a polynomial.

Example The transfer function

H(s) = 5(s + 2)/(s³ + 4s² + 5s + 2) = 5(s + 2)/((s + 2)(s + 1)²) = 5/(s + 1)²

has no zeros and two poles at s = −1 (often stated as a multiplicity-2 pole at s = −1).

There are two important results that can be stated immediately using this definition, and the proofs are essentially obvious applications of partial fraction expansion and inspection of the transform table for the types of terms that arise from partial fraction expansion.

Theorem An LTI system with strictly-proper rational transfer function is stable if and only if all poles of the transfer function have negative real parts.

Theorem A right-sided signal x(t) with strictly-proper rational Laplace transform is bounded if and only if all poles of the transform have non-positive real parts and those with zero real parts have multiplicity 1.

Example Consider a system with transfer function

H(s) = 20/(s(s + 2))

and suppose the input signal is a unit-step function, X(s) = 1/s. Then

Y(s) = 20/(s²(s + 2)) = 10/s² − 5/s + 5/(s + 2)

and the output signal is given by

y(t) = 10 r(t) − 5 u(t) + 5 e^{−2t} u(t)

This response is unbounded, because of the ramp component, which is not unexpected since the system is not stable. Notice, however, that among our typical input signals the only bounded input signal that produces an unbounded response is a step input.

Example Consider a system with transfer function

H(s) = (s − 3)/((s + 1)(s + 2))

and suppose the input signal is described by

X(s) = 1/(s − 3)

that is, the input is the unbounded signal x(t) = e^{3t} u(t). Then

Y(s) = 1/((s + 1)(s + 2)) = 1/(s + 1) − 1/(s + 2)

and the output is the bounded signal

y(t) = e^{−t} u(t) − e^{−2t} u(t)

This example gives an interpretation of the zeros of a transfer function in terms of growing exponential inputs that are "swallowed" by the system!

Remark It should be clear that the transfer function description of interconnections of LTI systems is very similar in appearance to the frequency response function description based on the Fourier transform discussed in Section 10.8. However, the Laplace transform approach is not beset by the stability limitations that are implicit in applying the Fourier transform.

Consider the feedback system shown below, where H1(s) and H2(s) are the subsystem transfer functions. Straightforward calculations give the unsurprising result that the overall system is described by the transfer function

H(s) = H1(s) / (1 + H1(s) H2(s))

Under reasonable hypotheses we can show that H(s) is a strictly-proper rational function, as follows. Suppose that H1(s) is a strictly-proper rational function and H2(s) is a proper rational function. Then we can write these transfer functions in terms of their numerator and denominator polynomials as

H1(s) = n1(s)/d1(s),  H2(s) = n2(s)/d2(s)

where deg n1(s) < deg d1(s) and deg n2(s) ≤ deg d2(s). Then,

H(s) = n1(s) d2(s) / (d1(s) d2(s) + n1(s) n2(s))

and it follows that H(s) is a strictly-proper rational function. Thus, regardless of stability issues, the transfer function representation is valid and can be used for response calculations and other purposes; in particular, the response of the overall system to various input signals can be calculated in the usual way.

Example Repeating the example from Section 10.8, if

H1(s) = 3/(s + 2),  H2(s) = k

then

H(s) = 3/(s + 2 + 3k)

If the input to the feedback system is a unit step, X(s) = 1/s, then the response is described by

Y(s) = 3/(s(s + 2 + 3k))

If k = −2/3, then Y(s) = 3/s² and y(t) = 3 r(t). Otherwise, an easy partial fraction expansion calculation gives

y(t) = (3/(2 + 3k)) u(t) − (3/(2 + 3k)) e^{−(2 + 3k)t} u(t)

Inspection of these responses indicates, and the stability criterion described above confirms, that the system is stable if and only if 2 + 3k > 0.

Exercises
1. Using either the basic definition or tables and properties, compute the Laplace transforms of
(a) x(t) = e^{−2t} u(t − 1)
(b) x(t) = δ(t − 1) + δ(t) + e^{−2(t+3)} u(t − 1)
(c)
(d)

2. For each of the following Laplace transforms, use the final value theorem to determine lim_{t→∞} x(t), and state whether the conclusion is valid.
(a) X(s) = (2s + 4)/(s² + 5s + 6)
(b) X(s) = (2s + 4)/(s³ + 5s² + 6s)
(c) X(s) = e^{−s}/(s³ + 5s² + 6s)
(d) X(s) = (s − 1)²/s
3. Given that the Laplace transform of x(t) = cos(2t) u(t) is X(s) = s/(s² + 4), compute the Laplace transform of dx(t)/dt by two methods: First, use the differentiation property. Second, differentiate the signal and use the tables.
4. Determine the final value of the signal x(t) corresponding to
(a) X(s) = (2s² + 3)/(s² + 5s + 6)
(b) X(s) = (2s² + 3)/(s³ + 5s² + 6s)
(c) X(s) = 3/(s² − 1)
5. For the system with transfer function
Y(s)/X(s) = (s − 3)/(s + 4)²
compute the steady-state response yss(t), that is, the time function that y(t) approaches asymptotically as t → ∞, for the input signals
(a) x(t) = e^{−3t} u(t)
(b) x(t) = e^{3t} u(t)
(c) x(t) = 2 sin(3t) u(t)
(d) x(t) = δ(t)
(e) x(t) = u(t)
(Hint: Do not calculate quantities you don't need! And you may skip calculation of the phase angle in (c) – simply call it θ.)
6. Compute the signal x(t) corresponding to
(a) X(s) = (5s + 4)/(s³ + 3s² + 2s)

of the system? What is the unit-impulse response of the system? 155 .(b) X ( s ) = (c) e−4 s ( s + 1) s3 − s 10( s + 2) ( s + 5)( s3 + 2s 2 + s ) X ( s) = 7. Find a differential equation description for the LTI system described by (a) H ( s ) = 3s ( s + 1)( s + 2) (b) h(t ) = [2 − 2e−t ] u (t ) 8. An LTI system with input signal x(t ) = [et + e−2t ] u (t ) has the response y (t ) = [2 − 3t ]e−2t u (t ) What is the transfer function. H ( s ) .

12.1 Unilateral Laplace Transform – Application to Circuits

When considering RLC circuits, one approach is to write the differential equation (or integro-differential equation) for the circuit, and then solve the equation using the Laplace transform. However, a typically more efficient approach is to consider the circuit directly in terms of Laplace transform representations. We assume in doing this that the input voltage or current for the circuit is a right-sided signal. For the case of zero initial conditions, this permits describing the behavior of each circuit element in terms of a transfer function, as defined in terms of the unilateral Laplace transform.

A resistor, as shown below, is the simplest case. The voltage-current relation is

vR(t) = R iR(t), t ≥ 0

and this can be represented as a "resistance transfer function,"

VR(s)/IR(s) = R

or a "conductance transfer function,"

IR(s)/VR(s) = 1/R

For an inductor, the voltage-current relation is

vL(t) = L diL(t)/dt, t ≥ 0

With zero initial conditions, this gives the transfer function descriptions

VL(s)/IL(s) = Ls and IL(s)/VL(s) = 1/(Ls)

we can represent the voltage-current behavior in the Laplace-transform domain by a block diagram, with appropriate labels depending on the choice of input or output.

For a capacitor, with voltage-current relation

vC(t) = (1/C) ∫_{0}^{t} iC(τ) dτ

we have

VC(s) = (1/(Cs)) IC(s)

and thus the impedance is Z(s) = 1/(Cs) while the admittance is Y(s) = Cs.

The terminology that goes with these transfer functions can be formally defined as follows.

Definition For a two-terminal electrical circuit with all independent sources in the circuit set to zero and all initial conditions zero, the impedance Z(s) of the circuit is the transfer function

Z(s) = V(s)/I(s)

and the admittance Y(s) of the circuit is the transfer function

Y(s) = I(s)/V(s)

Using this definition, the impedance of a resistor is the resistance, and the admittance of a resistor is the conductance, 1/R. The impedance of an inductor is Z(s) = Ls, and the admittance of an inductor is Y(s) = 1/(Ls).
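The element relations above can be collected into small helper functions. The following sketch (element values are arbitrary examples, not from the notes) checks that each admittance is the reciprocal of the corresponding impedance:

```python
R, L, C = 10.0, 0.2, 1e-3  # example element values (arbitrary)

# Impedances of the three basic elements as functions of the complex variable s.
def Z_resistor(s):  return complex(R)
def Z_inductor(s):  return L * s
def Z_capacitor(s): return 1 / (C * s)

s = 2.0 + 3.0j  # any test point with s != 0
checks = ((Z_resistor, 1 / R), (Z_inductor, 1 / (L * s)), (Z_capacitor, C * s))
for Z, Y_expected in checks:
    Y = 1 / Z(s)  # admittance = reciprocal of impedance
    assert abs(Y - Y_expected) < 1e-12
print("element impedance/admittance pairs are consistent")
```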

Once we represent a circuit in this way, a key observation is that the basic circuit analysis tools, such as Kirchhoff's laws, can be applied in the transform domain. If i1(t), …, iK(t) are the currents entering a node, then the current law

Σ_{k=1}^{K} ik(t) = 0

can alternately be expressed in the transform domain as

Σ_{k=1}^{K} Ik(s) = 0

Obviously a similar statement can be made about the sum of voltages across a number of circuit elements in a loop. This means that circuit analysis in the Laplace domain proceeds much as does resistive circuit analysis in the time domain: it is algebraic in nature, with the impedance and admittance of circuit elements playing roles similar to resistance and conductance.

Example Consider the parallel connection of circuit elements described by their admittances, as shown below. Kirchhoff's current law at the top node gives

I(s) = I1(s) + I2(s) = Y1(s) V(s) + Y2(s) V(s) = [Y1(s) + Y2(s)] V(s)

Thus the admittance of the overall circuit is

Y(s) = I(s)/V(s) = Y1(s) + Y2(s)

It is easy to see that this calculation extends to any number of admittances in parallel, and leads to the statement that "admittances in parallel add." From the circuit admittance, it is an easy calculation to obtain the impedance:

Z(s) = V(s)/I(s) = 1/(Y1(s) + Y2(s)) = 1/(1/Z1(s) + 1/Z2(s)) = Z1(s) Z2(s)/(Z1(s) + Z2(s))

These expressions have familiar forms from the analysis of resistive circuits when we interpret impedance as resistance and admittance as conductance.

Example To calculate the impedance of the circuit we first redraw it as a Laplace-domain diagram with admittance labels. Then

Y(s) = I(s)/V(s) = Y1(s) + Y2(s) + Y3(s) = Cs + 1/R + 1/(Ls)

and the circuit impedance is

Z(s) = 1/(Cs + 1/R + 1/(Ls)) = (1/C) s/(s² + (1/(RC)) s + 1/(LC))

Notice that if R, L, C > 0, the usual case, then the circuit impedance is a (bounded-input, bounded-output) stable system, since the poles of Z(s) will have negative real parts. Of course, to compute the voltage for a given current i(t), we compute I(s), and then the terminal voltage (Laplace transform) is given by

V(s) = Z(s) I(s)

Finally, assuming that I(s) is a proper rational function, v(t) can be computed by partial fraction expansion and table lookup.

Example Consider a series connection of circuit elements described by their impedances, as shown below. From Kirchhoff's voltage law,

Example The impedance of the circuit is computed from the Laplace transform equivalent as Z ( s ) = R + Ls + 1 Ls 2 + Rs + 1/ C = Cs s It is interesting to note that this impedance is an “improper” rational function of s . Also. Consider the circuit 160 . a unit step in current produces a ramp in voltage. The analysis of circuits via the transformed circuit also applies to other transfer functions of interest. of course. the techniques apply for other choices of input and output signals. with.V ( s ) = V1 ( s ) + V2 ( s ) = Z1 ( s ) I ( s ) + Z 2 ( s) I ( s ) = [ Z1 ( s ) + Z1 ( s )]I ( s ) Therefore the impedance of the circuit is Z ( s) = and the admittance is V (s) = Z1 ( s ) + Z 2 ( s ) I ( s) I ( s) 1 = V ( s ) Z1 ( s) + Z 2 ( s ) Y (s) = This calculation extends to a series connection of any number of circuit elements in the obvious manner. all initial conditions zero. a unit-step in current to the circuit will produce an impulse in voltage. In particular. That is. For example. in addition to the impedance and admittance. the circuit is unstable since Z ( s ) has a pole at s = 0 . in that the numerator degree is higher than the denominator degree.

where the voltage transfer function Vout(s)/Vin(s) is of interest. Clearly

Vin(s) = V1(s) + V2(s) + V3(s) = [Z1(s) + Z2(s) + Z3(s)] I(s)
which gives

I(s) = Vin(s) / (Z1(s) + Z2(s) + Z3(s))

Then the output voltage transform is

Vout(s) = Z3(s) I(s) = [Z3(s) / (Z1(s) + Z2(s) + Z3(s))] Vin(s)

and the voltage transfer function is evident. Clearly this is the analog of the resistive voltage-divider circuit. In a similar manner, the current-divider circuit

yields transfer functions from the input current to the j th − branch current via the calculations

Iin(s) = I1(s) + I2(s) + I3(s) = [Y1(s) + Y2(s) + Y3(s)] Vin(s)
yielding

Ij(s) = Yj(s) Vin(s) = [Yj(s) / (Y1(s) + Y2(s) + Y3(s))] Iin(s)

The notion of source transformations in resistive circuits also can be applied in the setting of the transformed circuit. These transformations replace voltage sources in series with impedances by current sources in parallel with impedances, or the reverse. For the circuit shown below,


the voltage divider rule immediately gives

V2(s) = [ Z2(s) / (Z1(s) + Z2(s)) ] Vin(s)

Assuming Z1(s) ≠ 0, this expression can be rearranged as

V2(s) = [ Z1(s)Z2(s) / (Z1(s) + Z2(s)) ] · Vin(s)/Z1(s) = [ 1 / (Y1(s) + Y2(s)) ] · Vin(s)/Z1(s)

This is readily seen to correspond to the circuit shown below:

The equivalence of these two circuit structures is often convenient in facilitating application of voltage or current division.
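The claimed equivalence can likewise be spot-checked numerically. In the Python sketch below (not part of the original notes), the impedances Z1(s) and Z2(s) are arbitrary illustrative choices, and the source-transformed circuit is represented by the current source Vin(s)/Z1(s) driving Y1(s) and Y2(s) in parallel:

```python
# Check that the voltage-divider form and the source-transformed
# (admittance) form give the same V2(s) at a sample point. Z1 and Z2
# are arbitrary illustrative impedances (an R-L branch and a capacitor).
s = 0.5 + 2j
Z1, Z2 = 3.0 + s, 1.0/s
Y1, Y2 = 1.0/Z1, 1.0/Z2
Vin = 2.0

v2_divider = Z2/(Z1 + Z2) * Vin
v2_transformed = (1.0/(Y1 + Y2)) * (Vin/Z1)   # current source Vin/Z1
assert abs(v2_divider - v2_transformed) < 1e-12
```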
Example Consider the circuit shown below, where initial conditions are zero, the input current is a unit-step signal, and the output signal is the voltage across the capacitor:

Converting this to the transform equivalent circuit gives

A source transformation leads to another equivalent circuit in the transform domain, where the current source is replaced by a transform voltage source

V(s) = Z1(s) I(s) = Ls · (1/s) = L


Next the voltage divider relation gives

VC(s) = [ Z2(s) / (Z1(s) + Z2(s)) ] V(s) = [ (1/(Cs)) / (Ls + 1/(Cs)) ] L = (1/C) / (s^2 + 1/(LC)) = √(L/C) · √(1/(LC)) / (s^2 + 1/(LC))

and table lookup gives the inverse Laplace transform

vC(t) = √(L/C) sin( √(1/(LC)) t ) u(t)
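As an independent check of this example (not in the original notes), the Python snippet below numerically evaluates the unilateral Laplace integral of the claimed vC(t) and compares it with VC(s) = (1/C)/(s^2 + 1/(LC)) at a real test point. The values L = 2, C = 0.5 are arbitrary:

```python
import math

# Numerically check that vC(t) = sqrt(L/C) * sin(t/sqrt(LC)) * u(t)
# has unilateral Laplace transform VC(s) = (1/C)/(s^2 + 1/(LC)).
# L and C are arbitrary example values; s is a real test point.
L, C = 2.0, 0.5
s = 1.0

def vC(t):
    return math.sqrt(L/C) * math.sin(t/math.sqrt(L*C))

# Trapezoidal approximation of the integral of vC(t)*exp(-s*t) over [0, T]
dt, T = 1e-3, 40.0
n = int(T/dt)
total = 0.5*(vC(0.0) + vC(T)*math.exp(-s*T))
for k in range(1, n):
    t = k*dt
    total += vC(t)*math.exp(-s*t)
total *= dt

exact = (1.0/C)/(s**2 + 1.0/(L*C))
assert abs(total - exact) < 1e-4
```

The exponential weight e^{-st} makes the truncated integral converge quickly, so a modest upper limit T suffices.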

Example These basic approaches are quite efficient, even for reasonably complicated circuits. Consider the circuit below, where the objective is to compute the transfer function Iout(s)/Vin(s):

Converting to the transform equivalent circuit gives

where we have chosen impedance labels for the various portions of the overall circuit. Next, a source transformation gives


By current division,

Iout(s) = [ Y3(s) / (Y1(s) + Y2(s) + Y3(s)) ] Iin(s) = [ s(s + 1) / (3s^4 + 2s^3 + 6s^2 + 3s + 2) ] Vin(s)

where Iin(s) is the current of the transformed source.

The notion of a transform equivalent circuit can be used in other settings that are not restricted to RLC circuits. We illustrate this by considering a circuit involving an ideal operational amplifier.

Example Beginning directly in the Laplace domain, consider the ideal op amp with impedances Z(s) and Zf(s) as shown:

In order to compute the voltage transfer function Vout(s)/Vin(s), we use the virtual-short property of the ideal op amp to conclude that

Vout(s) = −Zf(s) I(s)

and also

Vin(s) = Z(s) I(s)

Thus it is easy to see that

Vout(s)/Vin(s) = −Zf(s)/Z(s)

For example, if we choose a capacitor in the feedback path and a resistor in the input path, then Zf(s) = 1/(Cs) and Z(s) = R. This gives

Vout(s)/Vin(s) = −(1/(RC))/s

and in the time domain we recognize that the circuit is a running integrator.
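The running-integrator interpretation can be checked with a direct time-domain simulation. In this Python sketch (not part of the original notes), the values R = 1 kΩ and C = 1 μF are arbitrary, and forward-Euler integration stands in for the continuous circuit:

```python
# Simulate the inverting integrator in the time domain: the transfer
# function -1/(RCs) should turn a unit-step input into the ramp -t/(RC).
# R and C are arbitrary example values.
R, C = 1.0e3, 1.0e-6             # 1 kOhm, 1 uF, so RC = 1 ms
dt, T = 1.0e-6, 5.0e-3

vout = 0.0
for _ in range(int(T/dt)):
    vin = 1.0                    # unit-step input for t >= 0
    vout += -vin/(R*C) * dt      # forward-Euler step of vout' = -vin/(RC)

assert abs(vout - (-T/(R*C))) < 1e-6
```

The simulated output at t = T matches the ramp −T/(RC) predicted by inverting −(1/(RC))/s applied to a step.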

12.2 Circuits with Nonzero Initial Conditions

For circuit elements with nonzero initial stored energy, that is, nonzero initial conditions, we can develop Laplace transform equivalent circuits that represent the initial conditions as voltage or current sources. For an inductor with initial current iL(0−) in the indicated direction, the voltage-current relation in the time domain remains

vL(t) = L diL(t)/dt ,  t ≥ 0

However, the unilateral Laplace transform differentiation property, taking account of the initial condition, yields

VL(s) = L[ sIL(s) − iL(0−) ] = LsIL(s) − LiL(0−)

This corresponds to the transform equivalent circuit shown below, where the initial condition term is represented as a voltage source with appropriate polarity:

Of course, an alternate approach is to write

IL(s) = (1/(Ls)) VL(s) + iL(0−)/s

This leads to an admittance version of the transform equivalent circuit, where the initial condition is represented as a current source with appropriate polarity:
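As a concrete check of the inductor equivalents (not in the original notes), take the illustrative choice iL(t) = e^{−t}, so that iL(0−) = 1, IL(s) = 1/(s + 1), and vL(t) = L diL/dt = −L e^{−t}, whose transform is −L/(s + 1):

```python
# Check VL(s) = L*s*IL(s) - L*iL(0-) on the example iL(t) = exp(-t):
# then iL(0-) = 1, IL(s) = 1/(s+1), and vL(t) = L*diL/dt = -L*exp(-t),
# whose transform is -L/(s+1). L is an arbitrary example value.
L = 2.0
iL0 = 1.0

for s in [0.5, 1.0, 3.0 + 2.0j]:
    IL = 1.0/(s + 1.0)
    VL_property = L*s*IL - L*iL0     # differentiation property
    VL_direct = -L/(s + 1.0)         # transform of -L*exp(-t)
    assert abs(VL_property - VL_direct) < 1e-12
```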

Similar calculations for a capacitor are almost apparent. With an initial voltage vC(0−), the capacitor with polarity as marked is described by

C dvC(t)/dt = iC(t) ,  t ≥ 0

The Laplace transform differentiation property gives

C[ sVC(s) − vC(0−) ] = IC(s)

or

IC(s) = CsVC(s) − CvC(0−)

This corresponds to the transform equivalent circuit shown below, where the initial condition is represented by a current source. An alternative is to write

VC(s) = (1/(Cs)) IC(s) + vC(0−)/s

which corresponds to the circuit

where a voltage source accounts for the initial condition.

Example Consider the circuit shown below, where the input voltage is vin(t) = 4u(t), the initial current in the inductor is iL(0−) = 1, and the initial voltage on the capacitor is zero. To compute the output, vout(t), we first sketch the transform equivalent circuit:

The two voltage sources can be combined, and impedances in parallel can be combined according to

Z4(s) = Z2(s)Z3(s) / (Z2(s) + Z3(s)) = 4/(2s + 5)

This gives the equivalent circuit shown below.

Now a straightforward voltage-divider calculation gives Vout(s):

Vout(s) = [ (4/(2s + 5)) / (2s + 4/(2s + 5)) ] (4/s + 2) = (2s + 4) / (s^3 + (5/2)s^2 + s) = 2 / (s(s + 1/2))

and partial fraction expansion leads to

vout(t) = 4u(t) − 4e^(−t/2) u(t)
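The partial fraction step can be verified numerically. This Python check (not in the original notes) assumes the transform Vout(s) = 2/(s(s + 1/2)), which is consistent with the final answer vout(t) = 4u(t) − 4e^(−t/2)u(t), and confirms that 4/s − 4/(s + 1/2) recombines to it:

```python
# Verify the partial fraction expansion: 4/s - 4/(s + 1/2) should equal
# 2/(s*(s + 1/2)) at every s, consistent with the inverse transform
# vout(t) = 4u(t) - 4exp(-t/2)u(t).
for s in [0.3, 1.0, 2.0 + 1.0j]:
    combined = 2.0/(s*(s + 0.5))
    expanded = 4.0/s - 4.0/(s + 0.5)
    assert abs(combined - expanded) < 1e-12
```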

Exercises

1. For the circuit shown below, with L = 1 and C = 1, suppose the input current is iin(t) = 2u(t), the initial current in the inductor is iL(0−) = 1 in the direction shown, and the initial voltage on the capacitor is zero. Compute the voltage output, vC(t).

2. Consider the electrical circuit shown below, where the input voltage is vin(t) = u(t) and the initial conditions are iL(0−) = 1, vC(0−) = 0. Compute the current through the resistor, iR(t), for t ≥ 0.

3. Compute the impedance Z(s) for the circuit shown below.