1 Signals and Systems
Abstract:
The concept and theory of signals and systems are needed in almost all electrical engineering
fields and in many other engineering and scientific disciplines as well. In this chapter, we
introduce the mathematical description and representation of signals and systems, classifi-
cations of signals (periodic signals, even and odd signals), signal transformations, complex
exponential signals, and system properties. We also define several important basic signals
essential to our studies such as unit impulse and unit step functions.
The concept of a signal refers to the time, space, or other types of variation in the physical state of an object, phenomenon, entity, etc. The quantification of this state is used to represent, store, or transmit a message.
A signal is a function representing a physical quantity or variable that conveys information about the behavior or nature of a phenomenon. For instance, in an RC circuit the signal may represent the voltage across the capacitor or the current flowing through the resistor. Mathematically, a signal is represented as a function of an independent variable t, where t usually represents time. Thus, a signal is denoted by x(t). Examples of signals include:
• Acoustic signals: Acoustic pressure (sound) over time,
• Mechanical signals: Velocity, acceleration of a car over time, and
• Video signals: Intensity level of a pixel (camera, video) over time.
A DT signal x[n] may represent a phenomenon for which the independent variable is inherently discrete. For instance, the daily closing stock market average is by its nature a signal that evolves at discrete points in time. On the other hand, a DT signal x[n] may be obtained by sampling a CT signal x(t), such as

x(t_0), x(t_1), ..., x(t_n), ...,

or in a discrete form as

x[0], x[1], ..., x[n], ...,

where

x_n = x[n] = x(t_n),

and the x_n's are called samples; the time interval between them is called the sampling interval. When the sampling intervals are equal (uniform sampling), then

x_n = x[n] = x(nT_s),

where T_s is the sampling interval. A DT signal can also be written as a sequence of numbers, e.g.

x_n = (..., 0, 0, 1, 2, 2, 1, 0, 1, 0, 2, 0, 0, ...).
1.3 Energy and Power Signals

Consider the voltage v(t) across a resistor R carrying current i(t). The instantaneous power dissipated in the resistor is

p(t) = v(t)i(t) = (1/R) v²(t). (1.1)
The total energy E dissipated in the resistor over the time interval t1 ≤ t ≤ t2 is

E = ∫_{t1}^{t2} p(t) dt = ∫_{t1}^{t2} (1/R) v²(t) dt. (1.2)
The average power P dissipated in the resistor over the time interval t1 ≤ t ≤ t2 is
P = (1/(t2 − t1)) ∫_{t1}^{t2} p(t) dt = (1/(t2 − t1)) ∫_{t1}^{t2} (1/R) v²(t) dt. (1.3)
Similarly, for a DT signal x[n], the normalized total energy content E of x[n] over n1 ≤ n ≤ n2
is defined as
E = Σ_{n=n1}^{n2} |x[n]|². (1.6)
For a given CT signal, when we consider the total energy over the infinite time interval
−∞ ≤ t ≤ ∞, E∞ is given by
E∞ = lim_{T→∞} ∫_{−T}^{T} |x(t)|² dt, (1.8)

and the corresponding average power is

P∞ = lim_{T→∞} (1/2T) ∫_{−T}^{T} |x(t)|² dt. (1.9)
For a given DT signal, when we consider the total energy over the infinite time interval
−∞ ≤ n ≤ ∞, E∞ is given by
E∞ = lim_{N→∞} Σ_{n=−N}^{N} |x[n]|², (1.10)

and the corresponding average power is

P∞ = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} |x[n]|². (1.11)
Based on definitions of Eqs. (1.8 to 1.11), the following classes of signals are defined:
1. A signal is said to be an energy signal if and only if 0 < E∞ < ∞, and so P∞ = 0, since

P∞ = lim_{T→∞} E∞/(2T) = 0.
2. Similarly, a signal is said to be a power signal if and only if 0 < P∞ < ∞, thus implying that E∞ = ∞.
3. Signals that satisfy neither property are referred to as neither energy signals nor power
signals.
NOTE: There are some signals for which both P∞ and E∞ are infinite.
Periodic and random signals are usually power signals (infinite energy and finite average power) when their energy content per period is finite; the average power of such a signal then need only be calculated over one period. However, signals that are both deterministic and aperiodic (bounded, of finite duration) are usually energy signals.
A signal is deterministic if we can define its value at each time point by a mathematical function (e.g., a sine wave). A signal is random if it cannot be described by a mathematical function and only its statistics can be defined (e.g., the electrical noise generated in the amplifier of a radio/TV receiver).
Example 1.1.
x(t) = A sin(ω0 t + ϕ) is a periodic signal with period T = 2π/ω0. Determine whether the signal is an energy signal, a power signal, or neither.
Solution 1.1.
Since the signal x(t) is periodic, it must be a power signal:

P = (1/T) ∫_0^T |x(t)|² dt = (1/T) ∫_0^T A² sin²(ω0 t + ϕ) dt,

where sin²(x) = (1/2)(1 − cos(2x)), so

P = (1/T) ∫_0^T (A²/2)[1 − cos(2ω0 t + 2ϕ)] dt,

= (A² ω0/2π) ∫_0^{2π/ω0} (1/2)[1 − cos(2ω0 t + 2ϕ)] dt = A²/2.
Example 1.2.
The signal given below is shown in Figure 1.2. Determine whether it is an energy signal, a power signal, or neither:

x(t) = { Ae^{−t}, t ≥ 0,
         0,       t < 0.
Solution 1.2.
Solving for the total energy:

E = ∫_{−∞}^{∞} |x(t)|² dt = ∫_0^∞ A² e^{−2t} dt,

= −(A²/2) e^{−2t} |_0^∞ = −(A²/2)[0 − 1] = A²/2,

or, evaluated as a limit,

= lim_{L→∞} −(A²/2)[e^{−2L} − 1] = A²/2.
Solving for the average power using the limit:

P = lim_{L→∞} (1/2L) ∫_{−L}^{L} |x(t)|² dt = lim_{L→∞} (1/2L) ∫_0^L A² e^{−2t} dt,

= lim_{L→∞} −(A²/4L)[e^{−2L} − 1] = 0.
Since E = A²/2 < ∞ and P = 0, the signal is an energy signal.
Example 1.3.
The signal given below is shown in Figure 1.3. Determine whether it is an energy signal, a power signal, or neither (with a > 0):

x(t) = { Ae^{−at}, t ≥ 0,
         Ae^{at},  t ≤ 0.
Figure 1.3 The two-sided exponential signal x(t) = Ae^{−a|t|}.
Solution 1.3.
Solving for the total energy:

E = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^0 A² e^{2at} dt + ∫_0^∞ A² e^{−2at} dt,

= (A²/2a) e^{2at} |_{−∞}^0 − (A²/2a) e^{−2at} |_0^∞,

= (A²/2a)[(1 − 0) − (0 − 1)] = A²/a.
Solving for the average power:

P = lim_{L→∞} (1/2L) ∫_{−L}^{L} |x(t)|² dt = lim_{L→∞} (1/2L) ∫_{−L}^{L} A² e^{−2a|t|} dt,

= lim_{L→∞} (1/2L)(A²/a)(1 − e^{−2aL}) = 0.
Since E = A²/a < ∞ and P = 0, the signal is an energy signal.
Example 1.4.
The signal given below is shown in Figure 1.4. Determine whether it is an energy signal, a power signal, or neither:

x(t) = { Ae^{−t}, t ≥ 0,
         A,       t < 0.
Solution 1.4.
Solving for the total energy:
E = lim_{L→∞} ∫_{−L}^{L} |x(t)|² dt = lim_{L→∞} [∫_{−L}^0 A² dt + ∫_0^L A² e^{−2t} dt],

= lim_{L→∞} [A² L + (A²/2)(1 − e^{−2L})] = ∞.
The average power is P = lim_{L→∞} (1/2L)[A² L + (A²/2)(1 − e^{−2L})] = A²/2. Since E = ∞ and 0 < P = A²/2 < ∞, the signal is a power signal.
Example 1.5.
The signal given below is shown in Figure 1.5. Determine whether it is an energy signal, a power signal, or neither:

x(t) = { A, −τ/2 ≤ t ≤ τ/2,
         0, elsewhere.
Figure 1.5 Rectangular pulse of amplitude A and duration τ.
Solution 1.5.
Solving for the total energy:

E = ∫_{−τ/2}^{τ/2} A² dt = A² τ < ∞.

Solving for the average power:

P = lim_{L→∞} (1/2L) A² t |_{−τ/2}^{τ/2} = lim_{L→∞} A² τ/(2L) = 0.

Since E = A² τ < ∞ and P = 0, the signal is an energy signal.
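The classifications in Examples 1.1 and 1.2 can be reproduced numerically by approximating E∞ and P∞ over a large finite window. This is only a sketch: the window length, grid density, and the helper name `energy_and_power` are arbitrary choices, and the finite window merely stands in for the limit T → ∞.

```python
import numpy as np

def energy_and_power(x, t_max=1e4, n=1_000_001):
    """Approximate total energy and average power of x(t) on [-t_max, t_max].

    The finite window stands in for the limit T -> infinity in Eqs. (1.8)
    and (1.9), so the result is a numerical estimate, not a proof.
    """
    t = np.linspace(-t_max, t_max, n)
    dt = t[1] - t[0]
    energy = np.sum(np.abs(x(t)) ** 2) * dt
    power = energy / (2 * t_max)
    return energy, power

# Example 1.2 with A = 2: x(t) = 2 e^{-t} u(t)  ->  energy signal, E = A^2/2 = 2
E1, P1 = energy_and_power(lambda t: 2.0 * np.exp(-np.clip(t, 0.0, None)) * (t >= 0))

# Example 1.1 with A = 2: x(t) = 2 sin(t)  ->  power signal, P = A^2/2 = 2
E2, P2 = energy_and_power(lambda t: 2.0 * np.sin(t))
```

For the decaying exponential the estimated energy stays near A²/2 while the power estimate shrinks toward 0 as the window grows; for the sinusoid the energy grows without bound while the power estimate settles near A²/2.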
1.4 Transformations of the Independent Variable

A central concept in signal analysis is the transformation of one signal into another. Of particular interest are simple transformations that involve the time axis only. A linear transformation of the time axis is given by

y(t) = x(α t + β),

where the parameter α represents the signal scaling (stretching or compression) and reversal, and β represents a signal offset from 0. Given x(α t + β), depending on the values of α and β we have
• time shift → β
• time scaling → α
• time reversal → the sign of time t
We will investigate x(α t + β ) given x(t) for different values of α and β .
• if 0 < α < 1, then the signal is linearly stretched (see Figure 1.7).
• if α > 1, then the signal is linearly compressed (see Figure 1.7).
• if α < 0, then the signal is reversed in time (see Figure 1.6).
• if β > 0, then the signal is advanced in time (shifted left), (see Figure 1.9).
• if β < 0, then the signal is delayed in time (shifted right), (see Figure 1.8).
Note: To find y(t) = x(α t + β), it is important to shift first by β and then to scale (compress/stretch) by α.
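The shift-first-then-scale rule can be checked numerically. The triangular pulse and the grid below are hypothetical choices; the point is that composing a shift by β followed by a scale by α reproduces x(αt + β), while the reverse order does not.

```python
import numpy as np

# Hypothetical triangular test pulse supported on [-1, 1].
def x(t):
    return np.maximum(1.0 - np.abs(t), 0.0)

alpha, beta = 2.0, 3.0              # target: y(t) = x(2t + 3)
t = np.linspace(-6.0, 6.0, 1201)

y_direct = x(alpha * t + beta)      # evaluate the target directly

# Correct order: shift first by beta, then scale by alpha.
v = lambda s: x(s + beta)           # v(t) = x(t + beta)
y_shift_then_scale = v(alpha * t)   # v(alpha t) = x(alpha t + beta)

# Wrong order: scaling first and then shifting gives x(alpha(t + beta)).
w = lambda s: x(alpha * s)          # w(t) = x(alpha t)
y_scale_then_shift = w(t + beta)    # w(t + beta) = x(alpha t + alpha beta)
```

Only the shift-then-scale composition matches the direct evaluation of x(αt + β); the scale-then-shift order lands the pulse at a different location on the t axis.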
A simple and very important example of transforming the independent variable of a signal is
the time shift.
Example 1.6.
Consider the triangular pulse x(t) shown in Figure 1.6(a). Find the reflected version of x(t)
about the amplitude axis.
Solution 1.6.
Replacing the independent variable t in x(t) with −t, we get the reflected signal y(t) = x(−t) shown in Figure 1.6(b).
Figure 1.6 Operation of signal reflection. (a) CT signal x(t). (b) Reflected version of x(t) about the origin
Example 1.7.
Figure 1.7(a) shows a signal x(t) of unit amplitude and unit duration. Find y(t) = x(t/2) and y(t) = x(2t).
Solution 1.7.
The signal y(t) = x(t/2) is obtained by stretching the signal x(t) by a factor of 2, as shown in Figure 1.7(b). The signal y(t) = x(2t) is a compressed version of x(t) by a factor of 2, as shown in Figure 1.7(c).
Example 1.8.
Figure 1.8(a) shows a rectangular pulse x(t) of unit amplitude and unit duration. Find y(t) =
x(t − 2).
Figure 1.7 Time-scaling operation. (a) CT signal x(t). (b) Stretched version of x(t). (c) Compressed version
of x(t).
Solution 1.8.
In this example, the time shift t0 equals 2 time units (a time delay). Hence, by shifting x(t) to the right by 2 time units we get the rectangular pulse y(t) shown in Figure 1.8(b). The pulse y(t) has exactly the same shape as the original pulse x(t); it is merely shifted along the time axis.
Figure 1.8 Time-shifting operation. (a) CT signal x(t). (b) Time-shifted version of x(t).
Example 1.9.
Consider the triangular pulse x(t) shown in Figure 1.9(a). Find v(t) = x(t + 3) and y(t) =
v(2t).
Solution 1.9.
In this example, the time shift t0 equals 3 time units (a time advance). Hence, by shifting x(t) to the left by 3 time units we get the pulse v(t) shown in Figure 1.9(b). The signal y(t) = v(2t) is a compressed version of v(t) by a factor of 2, as shown in Figure 1.9(c).
Example 1.10.
Consider the signal x(t) shown in Figure 1.10. Find the corresponding version of the signal.
1. x(t + 1)
Figure 1.9 Time-shifting and scaling operations. (a) Rectangular pulse x(t), symmetric about the origin. (b) v(t) is a time-shifted version of x(t). (c) y(t) is a compressed version of v(t).
2. x(−t + 1)
3. x(3t/2)
4. x(3t/2 + 1)
Figure 1.10 CT signal x(t).
Solution 1.10.
The signal x(t) in Figure 1.10 is time-advanced (shifted to the left), time-reversed, and time-scaled, as illustrated in the following figures.
1. The signal x(t + 1) corresponds to a time advance (shift to the left) by one unit along the t axis, as illustrated in Figure 1.11.
Figure 1.11 Time-advanced signal x(t + 1).
Figure 1.12 Time-reversed signal x(−t + 1).
2. The signal x(−t + 1) corresponds to a time-reversed version of x(t + 1) along the t axis. This may be obtained by first advancing (shifting to the left) x(t) by 1 unit and then reversing the advanced signal along the t axis, as illustrated in Figure 1.12.
Figure 1.13 Time-scaled signal x(3t/2).
Figure 1.14 Time-scaled and shifted signal x(3t/2 + 1).
An important class of signals is the class of periodic signals. A signal is periodic if it repeats itself after a fixed period; otherwise the signal is called non-periodic or aperiodic. A CT signal x(t) is periodic if there is a positive value of T for which

x(t) = x(t + T), for all t. (1.13)
From Eq. 1.13, we can deduce that if x(t) is periodic with period T , then x(t) = x(t + mT ) for
all t and for all integers m. Thus, x(t) is also periodic with period 2T, 3T, ...... The fundamental
period T0 of x(t) is the smallest positive value of T for which Eq. 1.13 holds. A constant
signal x(t) = 5 is periodic with any real period. However, the fundamental period T0 of x(t)
is undefined.
A DT signal x[n] is periodic with period N if it is unchanged by a time shift of N, where N is a positive integer:

x[n] = x[n + N], N > 0, for all n. (1.14)
If Eq. 1.14 holds for all values of n, then x[n] = x[n + mN] for all n and for all integers m. Thus, x[n] is also periodic with period 2N, 3N, .... The fundamental period N0 of x[n] is the smallest positive integer N for which Eq. 1.14 holds. A constant signal x[n] = 5 is periodic with any positive integer period; however, its fundamental period is N0 = 1.
The DT signals cos(ω0 n), sin(ω0 n) and e^{jω0 n} are periodic only if ω0/2π is a rational number, where ω0 = 2π f is the radian frequency, which has the units of radians/s, and f is the frequency in Hertz.

1. If ω0/2π is rational, then cos(ω0 n), sin(ω0 n) and e^{jω0 n} are periodic.
   • The samples fall at the same points in each "super period" of cos(ω0 t); it may take several periods of cos(ω0 t) to make one period of cos(ω0 n).
   • For ω0/2π to be rational, ω0 must contain the factor π.
2. If ω0/2π is rational, we can find the fundamental period by writing ω0/2π = m/N, where m, N ∈ Z, the set of integers.
   • If m/N is in reduced form, then m and N have no common factors and N is the fundamental period.
Example 1.11.
Determine whether or not each of the following DT signals is periodic. If it is, what is the fundamental period?
1. x[n] = cos(8π n/31)
2. x[n] = cos(n/6)
Solution 1.11.
1. ω0 = 8π/31 ⇒ ω0/2π = 4/31, a rational value. Writing ω0/2π = m/N = 4/31 in reduced form, the signal is periodic with fundamental period N0 = 31.
2. ω0 = 1/6 ⇒ ω0/2π = 1/12π, an irrational value (ω0 does not contain the factor π). Therefore, the signal is not periodic.
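The rational-ratio test for DT periodicity can be sketched in code using exact fractions. The helper name below is hypothetical, and the caller must supply ω0/2π as an exact Fraction, since a floating-point value can never certify rationality.

```python
from fractions import Fraction

def dt_fundamental_period(omega0_over_2pi):
    """Fundamental period of cos(omega0*n) when omega0/(2*pi) is rational.

    Fraction automatically reduces m/N, so the fundamental period N is the
    denominator of the reduced form. Returns None when the ratio is
    irrational (the signal is not periodic).
    """
    if omega0_over_2pi is None:
        return None
    return Fraction(omega0_over_2pi).denominator

# Example 1.11(1): omega0 = 8*pi/31 -> omega0/(2*pi) = 4/31 -> N0 = 31
N1 = dt_fundamental_period(Fraction(4, 31))

# Example 1.11(2): omega0 = 1/6 -> omega0/(2*pi) = 1/(12*pi), irrational
N2 = dt_fundamental_period(None)
```

Passing None for the irrational case is a convention of this sketch; the essential point is that periodicity hinges on an exact rational ratio, not on a numerical approximation.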
Example 1.12.
Determine whether or not each of the following CT signals is periodic. If it is, what is the fundamental period?
1. x(t) = cos(4t) + 2sin(8t)
Solution 1.12.
1. Angular frequency ω1 = 4 ⇒ f1 = ω1/2π = 4/2π = 2/π, ∴ T1 = π/2.
   Angular frequency ω2 = 8 ⇒ f2 = ω2/2π = 8/2π = 4/π, ∴ T2 = π/4.
   T1/T2 = (π/2)/(π/4) = 2 ⇒ rational value.
   The signal is periodic with fundamental period T = LCM(π/2, π/4) = π/2.

2. Angular frequency ω1 = 4 ⇒ f1 = ω1/2π = 2/π, ∴ T1 = π/2.
   Angular frequency ω2 = π ⇒ f2 = ω2/2π = 1/2, ∴ T2 = 2.
   T1/T2 = (π/2)/2 = π/4 ⇒ irrational value. Therefore, the signal is aperiodic.

3. Angular frequency ω1 = 3π ⇒ f1 = ω1/2π = 3/2, ∴ T1 = 2/3.
   Angular frequency ω2 = 4π ⇒ f2 = ω2/2π = 2, ∴ T2 = 1/2.
   T1/T2 = (2/3)/(1/2) = 4/3 ⇒ rational value.
   The signal is periodic with fundamental period T = LCM(2/3, 1/2) = 2.

4. Angular frequency ω0 = 2 ⇒ f0 = ω0/2π = 1/π, ∴ T0 = π.
   Hence, the signal is periodic with fundamental period T = π.

5. Angular frequency ω1 = 3√2 ⇒ f1 = ω1/2π = 3√2/2π, ∴ T1 = 2π/(3√2).
   Angular frequency ω2 = 6√2 ⇒ f2 = ω2/2π = 3√2/π, ∴ T2 = π/(3√2).
   T1/T2 = [2π/(3√2)]/[π/(3√2)] = 2 ⇒ rational value.
   The signal is periodic with fundamental period T = LCM(2π/(3√2), π/(3√2)) = 2π/(3√2).
   Alternatively, with T1 k = T2 m: (2π/(3√2))(1) = (π/(3√2))(2) = 2π/(3√2).
   Therefore, the fundamental period is T = 2π/(3√2) and the fundamental frequency is ω = 3√2.
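The LCM-of-periods step used throughout this solution can be sketched with exact fractions. The helper names are hypothetical, and periods involving π are handled by working in units of π.

```python
from fractions import Fraction
from math import gcd

def lcm_int(a, b):
    return a * b // gcd(a, b)

def lcm_of_periods(T1, T2):
    """LCM of two rational periods. Fractions are kept in lowest terms, so
    the LCM is (lcm of numerators) / (gcd of denominators)."""
    return Fraction(lcm_int(T1.numerator, T2.numerator),
                    gcd(T1.denominator, T2.denominator))

# Part 3: T1 = 2/3, T2 = 1/2  ->  fundamental period T = 2
T_part3 = lcm_of_periods(Fraction(2, 3), Fraction(1, 2))

# Part 1: T1 = pi/2, T2 = pi/4, computed in units of pi  ->  T = pi/2
T_part1_in_pi = lcm_of_periods(Fraction(1, 2), Fraction(1, 4))
```

Working in units of π keeps the arithmetic exact; the ratio T1/T2 being rational is exactly the condition that this LCM exists.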
In addition to representing physical phenomena, such as the time shift in a radar signal or the reversal of an audio tape, transformations of the independent variable are extremely useful in examining some important properties that a signal may possess; signals with these properties can be even or odd signals.
An even signal x(t) is identical to its time-reversed counterpart, x(−t) = x(t); it can be reflected about the vertical axis and is equal to the original, as shown in Figure 1.15(a), e.g. x(t) = cos(t).
An odd signal x(t) is identical to its negated, time-reversed signal, x(−t) = −x(t); it is equal to the negative of its reflected signal, as shown in Figure 1.15(b), e.g. x(t) = sin(t).
Odd signals must necessarily be zero at t = 0, since x(−t) = −x(t) implies x(0) = −x(0), and only x(0) = 0 satisfies this condition.
Figure 1.15 (a) Even signal cos(t); (b) Odd signal sin(t).
An important fact is that any signal can be decomposed into a sum of two signals, one of
which is even and one of which is odd.
Ev{x(t)} = (1/2)[x(t) + x(−t)], (1.17)
Odd{x(t)} = (1/2)[x(t) − x(−t)]. (1.18)
The odd part of a signal is an odd signal, since

Odd{x(−t)} = (1/2)[x(−t) − x(t)] = −(1/2)[x(t) − x(−t)] = −Odd{x(t)}.
For any signal x(t), we can decompose the signal into the sum of its even part Ev{x(t)} and its odd part Odd{x(t)}:

x(t) = Ev{x(t)} + Odd{x(t)}.
Properties:
• If x(t) is odd, then x(0) = 0.
• If x(t) is even, then ∫_{−a}^{a} x(t)dt = 2 ∫_{0}^{a} x(t)dt.
• If x(t) is odd, then ∫_{−a}^{a} x(t)dt = 0.
• even × even = even.
• even × odd = odd.
• odd × odd = even.
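The decomposition in Eqs. (1.17) and (1.18) is easy to verify numerically. The test signal below is a hypothetical choice whose even and odd parts are known in closed form: exp(−t²) is even and t is odd.

```python
import numpy as np

def even_odd_parts(x, t):
    """Even and odd parts of x on the grid t, per Eqs. (1.17) and (1.18)."""
    xe = 0.5 * (x(t) + x(-t))
    xo = 0.5 * (x(t) - x(-t))
    return xe, xo

# Hypothetical test signal: x(t) = exp(-t^2) + t (neither even nor odd).
x = lambda t: np.exp(-t ** 2) + t
t = np.linspace(-4.0, 4.0, 801)      # symmetric grid about t = 0
xe, xo = even_odd_parts(x, t)
```

The computed parts recover exp(−t²) and t respectively, and their sum reconstructs x(t) exactly, which is the decomposition property stated above.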
1.5 Exponential and Sinusoidal Signals

Exponential and sinusoidal signals arise frequently in many applications, and many other signals can be constructed from them.
Figure 1.16 CT real exponential signals Ce^{at}: growing (a > 0) and decaying (a < 0).
Thus, the signals e^{jω0 t} and e^{−jω0 t} have the same fundamental period.
A signal closely related to the periodic complex exponential is the sinusoidal signal

x(t) = A cos(ω0 t + ϕ),

where ω0 = 2π f0 is the radian frequency, which has the units of radians/s, ϕ is the phase angle in radians, and f0 is the frequency in Hertz. Like the complex exponential signal, the sinusoidal signal is also a periodic signal with fundamental period T0, as illustrated in Figure 1.17.
Using Euler's relation, a complex exponential can be expressed in terms of sinusoidal signals with the same fundamental period:

e^{j(ω0 t + ϕ)} = cos(ω0 t + ϕ) + j sin(ω0 t + ϕ).
Similarly, a sinusoidal signal can also be expressed in terms of periodic complex exponentials
with the same fundamental period:
A cos(ω0 t + ϕ) = (A/2)[e^{j(ω0 t+ϕ)} + e^{−j(ω0 t+ϕ)}] = (A/2) e^{jϕ} e^{jω0 t} + (A/2) e^{−jϕ} e^{−jω0 t}. (1.26)
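Relation (1.26) can be checked numerically for arbitrary illustrative values of A, ω0, and ϕ (the values below are hypothetical):

```python
import numpy as np

A, omega0, phi = 2.0, 3.0, 0.7      # arbitrary illustrative values
t = np.linspace(0.0, 10.0, 1001)

# Left side of Eq. (1.26): the real sinusoid.
lhs = A * np.cos(omega0 * t + phi)

# Right side: the sum of the two counter-rotating complex exponentials.
rhs = (A / 2) * np.exp(1j * phi) * np.exp(1j * omega0 * t) \
    + (A / 2) * np.exp(-1j * phi) * np.exp(-1j * omega0 * t)
```

The two complex exponentials are conjugates, so their sum is real (up to floating-point rounding) and equals the sinusoid on the left.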
Figure 1.17 Sinusoidal signal x(t) = A cos(ω0 t + ϕ) with amplitude A and fundamental period T0 = 2π/ω0.
Here ℜe{·} denotes the real part of a signal and Im{·} denotes the imaginary part.
Periodic signals, such as the sinusoidal signals, provide important examples of signals with infinite total energy but finite average power. Let us calculate the total energy and average power of the complex exponential e^{jω0 t} over one period:
E_period = ∫_0^{T0} |e^{jω0 t}|² dt = ∫_0^{T0} 1 dt = T0. (1.29)

P_period = (1/T0) E_period = (1/T0) ∫_0^{T0} |e^{jω0 t}|² dt = (1/T0) ∫_0^{T0} 1 dt = 1. (1.30)
Since there are an infinite number of periods as t ranges from −∞ to +∞, the total energy
integrated over all time is infinite. However, each period of the signal looks exactly the same.
The average power is finite: since the average power over one period is 1, averaging over multiple periods also yields an average power of 1. The complex periodic exponential signal has finite average power equal to
P∞ = lim_{T→∞} (1/2T) ∫_{−T}^{T} |e^{jω0 t}|² dt = 1. (1.31)
Thus, for r = 0, the real and imaginary parts of a complex exponential are sinusoidal. For r > 0, the sinusoidal signals are multiplied by a growing exponential, as shown in Figure 1.18(a); such signals arise in unstable systems. For r < 0, the sinusoidal signals are multiplied by a decaying exponential, referred to as damped sinusoids, as shown in Figure 1.18(b); such signals arise in stable systems, e.g. RLC circuits or mass-spring-friction systems, where the energy is dissipated due to the resistors or friction, respectively.
Figure 1.18 (a) Growing sinusoidal signal; (b) Decaying sinusoidal signal.
1.6 The Unit Impulse and Unit Step Functions

The unit step and unit impulse functions in continuous and discrete time are considerably important in signal and system analysis. Furthermore, the unit impulse function is one of the most useful functions in all of applied mathematics and in the study of linear systems; it can be used to sample another function at a specific point.
The CT unit step function u(t) is shown in Figure 1.19 and given by
u(t) = { 1, t ≥ 0,
         0, t < 0. (1.34)
The CT unit step function is the running integral of the unit impulse (Dirac delta) function δ(τ):

u(t) = ∫_{−∞}^{t} δ(τ) dτ. (1.35)
Figure 1.19 CT unit step function u(t).
The CT unit impulse δ(t) can also be considered as the first derivative of the CT unit step function:

δ(t) = du(t)/dt. (1.36)
However, u(t) is discontinuous at t = 0 and consequently is formally not differentiable. This can be interpreted by considering an approximation to the unit step, u∆(t), as illustrated in Figure 1.20, which rises from 0 to 1 over a short time interval of length ∆.
Figure 1.20 (a) Continuous approximation to the unit step u△ (t); (b) Derivative of u△ (t).
An ideal unit impulse function is a function that is zero everywhere and infinitely high at the origin; however, the area of the impulse is finite. Graphically, δ(t) is represented by an arrow pointing to infinity at t = 0. Even though we use an arrow in the graph, this is not a vector; the arrow represents the fact that the function height is infinite at t = 0. We draw "1" next to the arrow to represent that the area of the impulse is one, as shown in Figure 1.21.
• Because δ (t) is the limit of graphs of area 1, the area under its graph is 1. More precisely:
∫_{a}^{b} δ(t) dt = { 1, a < 0 < b,
                     0, otherwise.
δ(t)x(t) = δ(t)x(0),

and

∫_{a}^{b} δ(t)x(t) dt = { x(0), a < 0 < b,
                         0, otherwise.
The first statement follows because δ (t) = 0 everywhere except at t = 0 and the second
statement follows from the first statement and property (2).
• We can place the delta function over any value of t: δ(t − τ) = 0 everywhere but at t = τ. Its total area remains 1, the spike is shifted to be over t = τ, and we have

δ(t − τ)x(t) = δ(t − τ)x(τ),

∫_{a}^{b} δ(t − τ)x(t) dt = { x(τ), a < τ < b,
                             0, otherwise.
• δ(t) = du(t)/dt, where u(t) is the unit step function. Because u(t) has a jump at 0, δ(t) is not a derivative in the usual sense, but is called a generalized derivative.
• In practical terms, you should think of δ (t) as any function of unit area, concentrated very
near t = 0.
• u(t) = ∫_{−∞}^{t} δ(τ) dτ, or u(t) = ∫_{0}^{∞} δ(t − τ) dτ.
• x(t)δ(t − t0) = x(t0)δ(t − t0): δ(t) is a sampling function.
• ∫_{−∞}^{∞} x(t)δ(t − t0) dt = x(t0).
• x(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ.
Consider first the ramp function r(t) shown in the upper left of Figure 1.22. It is zero for t < 0 and one for t > T, and goes linearly from 0 to 1 as time goes from 0 to T. If we let T → 0, we get the unit step function u(t) (upper right). If we take the derivative of the ramp function r(t), we get a rectangular pulse with height 1/T (the slope of the line) and width T (lower left). This rectangular pulse has an area of one (height × width). If we take the limit as T → 0, we get a pulse of infinite height and zero width (lower right), but still with an area of one; this is the unit impulse and we represent it by δ(t). Since we can't show the height of the impulse on our graph, we use the vertical axis to show the area. The unit impulse has area 1, so that is the height shown.
Figure 1.22 Relation between ramp function r(t), unit step function u(t), and unit impulse function δ (t).
Example 1.13.
Simplify and evaluate the following functions
1. ∫_{−∞}^{∞} x(t)δ(t) dt
2. ∫_{a}^{b} x(t)δ(t) dt
3. ∫_{a}^{b} δ(t)x(t − T) dt
Solution 1.13.
1. We can simplify this integral by noting that the unit impulse δ(t) is zero everywhere except at t = 0, so we can replace δ(t)x(t) by δ(t)x(0):

∫_{−∞}^{∞} x(t)δ(t) dt = ∫_{−∞}^{∞} x(0)δ(t) dt = x(0) ∫_{−∞}^{∞} δ(t) dt = x(0).
2. More generally, by the same reasoning as in the previous solution, for b > a we can write

∫_{a}^{b} x(t)δ(t) dt = ∫_{a}^{b} x(0)δ(t) dt = x(0) ∫_{a}^{b} δ(t) dt = { x(0), a < 0 < b,
                                                                        0, otherwise.
3. Noting that the unit impulse δ(t) is zero everywhere except at t = 0, we have δ(t)x(t − T) = δ(t)x(0 − T) = δ(t)x(−T), so

∫_{a}^{b} δ(t)x(t − T) dt = ∫_{a}^{b} δ(t)x(−T) dt = x(−T) ∫_{a}^{b} δ(t) dt = { x(−T), a < 0 < b,
                                                                               0, otherwise.
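The sifting behavior in this example can be illustrated numerically by replacing δ(t) with a narrow rectangular pulse of unit area. The width and grid below are arbitrary choices; the approximation approaches x(t0) only as the width shrinks.

```python
import numpy as np

def delta_approx(t, width):
    """Rectangular pulse of unit area centered at t = 0."""
    return (np.abs(t) < width / 2) / width

t = np.linspace(-5.0, 5.0, 200_001)
dt = t[1] - t[0]
x = np.cos(t)

# Sifting: integral of x(t) * delta(t - t0) dt -> x(t0) as width -> 0
t0 = 1.0
approx = np.sum(x * delta_approx(t - t0, width=0.01)) * dt
```

Because the pulse is narrow, the integral effectively averages x(t) over a tiny neighborhood of t0, which is the "sampling function" role of δ(t) described above.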
There are many reasons for studying and understanding a system. For example, we may want
to design a system to remove the noise or remove echoes in an audio recording system or
sharpen an out-of-focus image. The system might have a distortion or interfering effect that you need to characterize or measure. Unfortunately, the input signal to a transmission line (the system) is usually not identical to the output signal. However, if we know the characteristics of the transmission line, we may be able to compensate for its effect. In other cases, the system may represent some physical process that you want to study or analyze; e.g., a radar operates by comparing the transmitted and reflected signals to find the characteristics of a remote object.
In terms of system theory, the problem is to find the system that changes the transmitted
signal into the received signal.
A system is an entity that operates on input signals to produce output signals; it can be a device, a program, or a natural system. A system is a mathematical model of a physical process that relates the input signal to the output signal. Let x and y be the input and output
signals of a system, respectively. Then the system is viewed as a transformation (mapping) of
x into y. This transformation is represented by the mathematical notation
y = T x, (1.38)
where T is the operator representing some defined rule by which the input signal x at any
time is transformed into the output signal y at any time.
If the input and output signals x and y are CT signals, then the system is called a CT system as
shown in Figure 1.23(a). However, if the input and output signals are DT signals or sequences,
then the system is called a DT system as shown in Figure 1.23(b).
Figure 1.23 (a) CT system with operator T; (b) DT system with operator T.
The RC circuit is a simple example of a system as shown in Figure 1.24. The current i(t) is
proportional to the voltage drop across the resistor:
Figure 1.24 RC circuit with source voltage vs(t) and capacitor voltage vc(t).
i(t) = [vs(t) − vc(t)]/R. (1.39)
The current through the capacitor is
i(t) = C dvc(t)/dt. (1.40)
Equating the right-hand sides of Eqs. 1.39 and 1.40, we obtain a differential equation describ-
ing the relationship between the input and output:
dvc(t)/dt + (1/RC) vc(t) = (1/RC) vs(t). (1.41)
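Equation (1.41) can be simulated with a simple forward-Euler step. The component values, step size, and duration below are hypothetical choices, and forward Euler is just one (not especially accurate) integration scheme.

```python
# Forward-Euler simulation of dvc/dt = (vs - vc)/(RC), a sketch of Eq. (1.41).
R, C = 1e3, 1e-6          # hypothetical values: 1 kOhm, 1 uF -> RC = 1 ms
dt = 1e-6                 # integration step, in seconds
steps = 5000              # simulate 5 ms = 5 time constants

vs = 1.0                  # unit-step source applied at t = 0
vc = 0.0                  # capacitor initially discharged
for _ in range(steps):
    vc += dt * (vs - vc) / (R * C)

# After 5 time constants, vc should be close to 1 - e^{-5} ~ 0.9933
```

This reproduces the familiar first-order step response: the capacitor voltage rises exponentially toward the source voltage with time constant RC.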
Figure 1.25 shows several ways of interconnecting subsystems to form new systems that serve specific applications. The input-output behavior of the overall system can be evaluated mathematically in terms of the subsystem descriptions.
1.8 Basic System Properties

In general, a system is an operator that takes a function as its input and produces a function as its output. The discussion of system properties is tentative at this point, because the notion of a system is so general that it is difficult to include all the details, and the mathematical description of a system might presume certain properties of allowable input signals. For example, the input signal to a running-integrator system must be sufficiently well behaved that the integral is defined. We can be considerably more precise when we consider specific classes of systems that admit particular types of mathematical descriptions. In the short term, the objective is mainly to establish some intuition for system properties.
It is extremely helpful to classify systems in terms of formal properties, as these properties can greatly simplify the analysis. The properties considered include:
Figure 1.25 Interconnection of systems. (a) A series or cascade interconnection of two systems; (b) A parallel
interconnection of two systems; (c) Combination of both series and parallel systems; (d) Feedback intercon-
nection.
1.8.1 Systems with and without Memory
A system is memoryless if its output value at any time t depends only on the input signal value at that same time. A memoryless system is always causal, but not vice versa. One particularly simple example of a memoryless system is the identity system, whose output is identical to its input, y(t) = x(t). A resistor is also a memoryless CT system, since the input current and output voltage in Figure 1.26 have the relationship:

v(t) = R i(t).
Figure 1.26 The relationship between the input current and output voltage drop across a resistor.
Figure 1.27 The relationship between the input current and output voltage drop at a capacitor.
An example of a DT system with memory is the accumulator:

y[n] = Σ_{k=−∞}^{n} x[k] = Σ_{k=−∞}^{n−1} x[k] + x[n] = y[n − 1] + x[n],

or equivalently

y[n] − y[n − 1] = x[n].
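The recursive form y[n] = y[n − 1] + x[n] makes the accumulator's memory explicit: a single stored value plays the role of y[n − 1]. A minimal sketch, assuming the input is zero for n < 0:

```python
import numpy as np

x = np.array([1, 2, 0, -1, 3])

y = np.zeros_like(x)
prev = 0                      # the memory cell: y[n - 1], with y[-1] = 0
for n, xn in enumerate(x):
    y[n] = prev + xn          # y[n] = y[n - 1] + x[n]
    prev = y[n]
```

The loop reproduces the running sum: without the stored value `prev`, the current output could not be computed from the current input alone, which is exactly what it means for the system to have memory.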
Example 1.14.
Determine whether each of the following systems is memoryless.
1. y(t) = x(t) + 5
2. y(t) = x(t + 5)
3. y(t) = (t + 5)x(t)
4. y(t) = [x(t + 5)]²
5. y(t) = x(5)
6. y(t) = x(2t)
7. y(t) = { x(t) + x(t − 2), t ≥ 0,
            0,               t < 0.
Solution 1.14.
1. y(t) = x(t) + 5 ⇒ The system is memoryless, because the added constant 5 does not affect the time argument.
2. y(t) = x(t + 5) ⇒ The system has memory, because it involves a time shift, which affects the time argument. Let t = 0; then y(0) = x(5).
3. y(t) = (t + 5)x(t) ⇒ The system is memoryless, because the factor (t + 5) only scales the current input and does not affect the time argument.
4. y(t) = [x(t + 5)]² ⇒ The system has memory, because it involves a time shift, which affects the time argument.
5. y(t) = x(5) ⇒ The system has memory, because the output depends on the input at the fixed time t = 5, regardless of the current time. Let t = 0; then y(0) = x(5).
6. y(t) = x(2t) ⇒ The system has memory, because it involves time scaling, which affects the time argument. Let t = 1; then y(1) = x(2).
7. y(t) = { x(t) + x(t − 2), t ≥ 0,
            0,               t < 0.
The system has memory, because the output y(t) is a function of x(t − 2). Let t = 3; then y(3) = x(3) + x(1). Since y(3) depends on x(1), the system has memory.
1.8.2 Invertibility and Inverse Systems

For an invertible system, distinct inputs lead to distinct outputs. If a system is invertible, then an inverse system exists which, when cascaded with the original system, yields an output equal to the original input of the first system, as shown in Figure 1.28. Consider the following systems with input x and output y:
y(t) = 2x(t), y[n] = Σ_{k=−∞}^{n} x[k],

with corresponding inverse systems

w(t) = y(t)/2, w[n] = y[n] − y[n − 1].
Figure 1.28 Cascading each system with its inverse recovers the input: x(t) → y(t) = 2x(t) → w(t) = 0.5 y(t) = x(t), and x[n] → y[n] = Σ_{k=−∞}^{n} x[k] → w[n] = y[n] − y[n − 1] = x[n].
An encoder in a communication system is an example of an invertible system, where the input to the encoder is exactly recoverable at the output. The running-integrator system is invertible by the fundamental theorem of calculus:

y(t) = ∫_{−∞}^{t} x(τ) dτ, with inverse (d/dt) ∫_{−∞}^{t} x(τ) dτ = x(t).
Determining the invertibility of a given system can be quite difficult. Perhaps the easiest situation is showing that a system is not invertible by exhibiting two legitimately different input signals that yield the same output signal. For example, y(t) = x²(t) is not invertible because the sign of the input cannot be determined from knowledge of the output: the constant input signals x(t) = 1 and x(t) = −1, for all t, yield identical output signals. Also, y(t) = 0 is not invertible because the system produces the zero output for any input. As another example, the system y(t) = dx(t)/dt is not invertible, since x(t) and x(t) + 1 yield the same output signal.
Example 1.15.
Determine whether the following systems are invertible.
1. y(t) = x(t + 2)
2. y(t) = x2 (t)
3. y(t) = x(2t)
Solution 1.15.
1. y(t) = x(t + 2) ⇒ The system is invertible. Since y(t) = x(t + 2), we have y(t − 2) = x(t), so the inverse system is z(t) = y(t − 2) = x((t − 2) + 2) = x(t).
2. y(t) = x²(t) ⇒ The system is non-invertible, because we can determine only the magnitude of the input, not its sign, from knowledge of the output.
3. y(t) = x(2t) ⇒ The system is invertible. Since y(t) = x(2t), we have y(t/2) = x(t), so the inverse system is z(t) = y(t/2) = x(2(t/2)) = x(t).
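The accumulator/first-difference pair discussed above can be checked directly: `np.cumsum` implements the running sum and `np.diff` the first difference (the `prepend=0` argument supplies y[−1] = 0).

```python
import numpy as np

x = np.array([3, -1, 4, 1, -5, 9])

# Accumulator y[n] = sum of x[k] for k <= n (input assumed zero for n < 0),
y = np.cumsum(x)

# cascaded with the first-difference system w[n] = y[n] - y[n - 1].
w = np.diff(y, prepend=0)
```

The cascade returns the original input exactly, which is the defining property of an inverse system.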
1.8.3 Causality
A system is causal if the output does not anticipate future values of the input, i.e., if the
output at any time depends only on the values of the input at the present time and in the past.
All memoryless systems are causal because the output responds only to the current value of
input. However, being causal does not necessarily imply that a system is memoryless, the
output at t = t0 depends only on input at t ≤ t0 . All physical real-time systems are causal
because they cannot anticipate the future. Causality is not a meaningful constraint for spatially varying signals or for systems processing recorded signals, e.g., a taped sports game as opposed to a live broadcast.
Non-causal systems are useful in various applications where the independent variable is not
time.
The RC circuit in Figure 1.24 is causal, since the capacitor voltage responds only to the
present and past values of the source voltage. Also, the motion of a car is causal, since it does
not anticipate future actions of the driver.
Example 1.16.
Determine whether the following systems are causal:
1. y[n] = x[n] − x[n + 1]
2. y(t) = x(t + 1)
3. y[n] = x[−n]
4. y(t) = x(t)cos(t + 1)
5. y(t0) = ∫_{−∞}^{t0+a} x(t) dt
6. y(t) = x(t) + x(t − 2) for t ≥ 0, and y(t) = 0 for t < 0
Solution 1.16.
1. y[n] = x[n] − x[n + 1] ⇒ The system is non-causal: when n = 0, y[0] = x[0] − x[1], so the output at this time depends on a future value of the input.
2. y(t) = x(t + 1) ⇒ The system is non-causal: when t = 0, y(0) = x(1), so the output at this time depends on a future value of the input.
3. y[n] = x[−n] ⇒ The system is non-causal, since when n < 0, e.g. n = −1 , we see that
y[−1] = x[1], so the output at this time depends on a future value of the input.
4. y(t) = x(t) cos(t + 1) ⇒ The system is causal, because the output at any time equals the input at the same time multiplied by the time-varying factor cos(t + 1).
5. y(t0) = ∫_{−∞}^{t0+a} x(t) dt ⇒ The system is non-causal in general: if a > 0, the output depends on future values of the input.
6. y(t) = x(t) + x(t − 2) for t ≥ 0, and y(t) = 0 for t < 0.
The system is causal, when t < 0, the output does not depend on the input. When t ≥ 0,
the output y(t) depends on current input x(t) and past input x(t − 2). The system is causal
because the output does not depend on a future input.
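The definition can also be probed numerically: feed a system two inputs that agree up to some time n0 and see whether the outputs already differ at n0. A sketch for part 1, y[n] = x[n] − x[n + 1] (the window and test inputs are illustrative):

```python
import numpy as np

def system(x):
    # y[n] = x[n] - x[n+1]; append a zero so the last sample is defined
    xp = np.append(x, 0.0)
    return xp[:-1] - xp[1:]

n0 = 4
x1 = np.zeros(10)
x2 = np.zeros(10)
x2[n0 + 1:] = 1.0                     # x1 and x2 agree for all n <= n0

y1, y2 = system(x1), system(x2)
assert np.allclose(y1[:n0], y2[:n0])  # identical pasts ...
assert y1[n0] != y2[n0]               # ... yet outputs differ at n0: non-causal
```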
1.8.4 Stability
For a system to be stable, small inputs must lead to responses that do not diverge; compare a pendulum (stable) with an inverted pendulum (unstable). If the input to a stable system is bounded, then the output must also be bounded. A CT system is bounded-input bounded-output (BIBO) stable if for every bounded input function x(t), the output function y(t) = T x(t) is also bounded.
Example 1.17.
Determine whether the following systems are stable:
1. y(t) = tx(t)
2. y(t) = e^{x(t)}
3. i(t) = C dVc(t)/dt
4. y(t) = x(t) + x(t − 2) for t ≥ 0, and y(t) = 0 for t < 0
Solution 1.17.
1. y(t) = tx(t) ⇒ The system is not stable: the bounded constant input x(t) = 1 yields the output y(t) = t, which is unbounded; no matter what finite constant we pick, y(t) will exceed it for some t.
2. y(t) = e^{x(t)} ⇒ The system is stable: assume the input is bounded, −B < x(t) < B for all t; then the output is bounded, e^{−B} < y(t) < e^{B}.
3. i(t) = C dVc(t)/dt ⇒ Let i(t) = B1 u(t), where B1 ≠ 0. Then
Vc(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ = (1/C) ∫_{−∞}^{t} B1 u(τ) dτ = (1/C) ∫_{0}^{t} B1 dτ = (B1/C) t.
Vc(t) = (B1/C) t grows linearly with t, and as t → ∞, Vc(t) → ∞. A bounded input gives an unbounded output, so a capacitor is not BIBO stable.
4. y(t) = x(t) + x(t − 2) for t ≥ 0, and y(t) = 0 for t < 0.
The system is stable. Assume the input is bounded: there is a B > 0 with |x(t)| ≤ B for all t.
When t < 0, y(t) = 0, so |y(t)| = 0 < 2B.
When t ≥ 0, |y(t)| = |x(t) + x(t − 2)| ≤ |x(t)| + |x(t − 2)| ≤ B + B = 2B.
So |y(t)| ≤ 2B for all t, i.e., y(t) is bounded. Since every bounded input produces a bounded output, the system is stable.
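A short numerical sketch of parts 1 and 2 (the sampling grid and test inputs are illustrative): the bounded input x(t) = 1 pushes y(t) = tx(t) past any fixed bound as the window grows, while y(t) = e^{x(t)} stays inside [e^{−B}, e^{B}]:

```python
import numpy as np

t = np.linspace(0.0, 100.0, 1001)

# Part 1: y(t) = t*x(t) with the bounded input x(t) = 1.
x = np.ones_like(t)
y = t * x
assert y.max() == 100.0      # the output tracks the window length: unbounded

# Part 2: y(t) = exp(x(t)) with any |x(t)| <= B stays within [e^-B, e^B].
B = 1.0
xb = np.sin(t)               # bounded test input, |xb| <= 1
yb = np.exp(xb)
assert np.all((yb >= np.exp(-B)) & (yb <= np.exp(B)))
```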
1.8.5 Time-Invariance
A system is time-invariant if its behavior does not depend on what time it is: the behavior
and characteristics of the system are fixed over time. A time shift in the input must result in
an identical time shift in the output signal. Mathematically, if the system output is y(t) when
the input is x(t), a continuous-time time-invariant system will have the output y(t − t0) when the input is x(t − t0), for all t0 and all input functions x(t). That is, with y(t) = T x(t), it is always the case that
y(t − t0) = T {x(t − t0)}.
An encoder in a communication system is an example of an invertible system: the input to the encoder must be exactly recoverable from the output. A useful consequence of time-invariance is that if the input to a time-invariant system is periodic, then the output is periodic with the same period.
Example 1.18.
Determine whether the following systems are time-invariant:
1. y(t) = x(2t)
2. y(t) = tx(t)
3. y(t) = x²(t)
4. y(t) = x(t) + x(t − 2) for t ≥ 0, and y(t) = 0 for t < 0
Solution 1.18.
1. y(t) = x(2t) → Consider an arbitrary input x1(t) as shown in Figure 1.29(a); the resulting output y1(t) is depicted in Figure 1.29(b):
y1(t) = x1(2t),
y1(t − 2) = x1(2(t − 2)) = x1(2t − 4),
x2(t) = x1(t − 2),
y2(t) = x2(2t) = x1(2t − 2).
Since y2(t) ≠ y1(t − 2), the system is time-variant.
[Figure 1.29: an example input x1(t) and the corresponding outputs of the system y(t) = x(2t).]
Figure 1.30 Illustration block diagram of inputs and outputs of the system y(t) = x(2t).
2. y(t) = tx(t) → Consider the input x1(t) = u(t) as shown in Figure 1.31(a); the resulting output y1(t) = tu(t) is depicted in Figure 1.31(b):
y1(t − 2) = (t − 2)x1(t − 2) = (t − 2)u(t − 2),
x2(t) = x1(t − 2) = u(t − 2),
y2(t) = tx2(t) = tu(t − 2).
Since y2(t) ≠ y1(t − 2), the system is time-variant.
[Figure 1.31: the input u(t) and the corresponding outputs of the system y(t) = tx(t).]
3. y(t) = x²(t) → Consider an arbitrary input x1(t); the resulting output y1(t) is given by:
y1(t) = x1²(t),
y1(t − 2) = x1²(t − 2),
x2(t) = x1(t − 2),
y2(t) = x2²(t) = x1²(t − 2).
Since y2(t) = y1(t − 2), the system is time-invariant.
Figure 1.32 Illustration block diagram of inputs and outputs of the system y(t) = x2 (t).
4. y(t) = x(t) + x(t − 2) for t ≥ 0, and y(t) = 0 for t < 0.
Consider the input x1(t) = u(t); the resulting output is y1(t) = u(t) + u(t − 2) for t ≥ 0. Then:
y1(t − 2) = u(t − 2) + u(t − 4),
x2(t) = x1(t − 2) = u(t − 2),
y2(t) = x2(t) + x2(t − 2) = u(t − 2) + u(t − 4) for t ≥ 0.
Since y2(t) = y1(t − 2), the system is time-invariant.
Figure 1.33 Illustration block diagram of inputs and outputs of the system y(t) = x(t) + x(t − 2).
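The shift-then-apply versus apply-then-shift comparison in Example 1.18 can be repeated on sampled signals. A sketch with an illustrative grid and Gaussian test input, using interpolation to form the shifted output:

```python
import numpy as np

t = np.linspace(-10, 10, 2001)
shift = 2.0
x1 = lambda t: np.exp(-t**2)          # smooth, decaying test input

# System 3: y(t) = x^2(t) is time-invariant.
y1 = x1(t)**2
y1_shifted = np.interp(t - shift, t, y1, left=0.0, right=0.0)  # y1(t - 2)
y2 = x1(t - shift)**2                 # response to the shifted input
assert np.allclose(y2, y1_shifted, atol=1e-6)

# System 1: y(t) = x(2t) is time-varying: x1(2t - 2) differs from x1(2(t - 2)).
y2c = x1(2*t - shift)                 # response to x1(t - 2)
y1c_shifted = x1(2*(t - shift))       # the shifted original output
assert not np.allclose(y2c, y1c_shifted)
```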
1.8.6 Linearity
A system is called linear if it has two mathematical properties: homogeneity (the scaling property) and additivity. Together these make up the superposition property; if a system has both, it is linear. Likewise, if we can show that a system lacks one or both properties, then the system is not linear. As illustrated
in Figure 1.34, homogeneity means that a change in the input signal’s amplitude results in
a corresponding change in the output signal’s amplitude. Therefore, a system is said to be
homogeneous if an amplitude change in the input results in an identical amplitude change
in the output. In mathematical terms, if an input signal of x(t) results in an output signal of
y(t) and an input signal of kx(t) results in an output signal of ky(t), for any input signal and
constant k, then the system is homogeneous.
A simple resistor provides a good example of both homogeneous and non-homogeneous systems. If the input to the system is the voltage v(t) across the resistor and the output is the current i(t) through the resistor, then the system is homogeneous. Ohm's law guarantees the homogeneity property: if the voltage is increased or decreased, there is a corresponding increase or decrease in the current. Consider another system where the input signal is the voltage v(t) across the resistor but the output signal is the power p(t) dissipated in the resistor. Since power is proportional to the square of the voltage, if the input signal is increased by a factor of two, then the output signal increases by a factor of four. This system is not homogeneous and therefore cannot be linear.
The property of additivity is illustrated in Figure 1.35. Consider a system where an input
x1 (t) produces an output of y1 (t). Further suppose that a different input x2 (t) produces another
output y2 (t). The system is said to be additive if an input of x1 (t)+x2 (t) results in an output of
[Figure 1.34: homogeneity. If x(t) produces y(t), then kx(t) produces ky(t).]
y1 (t) + y2 (t), for all possible input signals. In other words, signals added at the input produce
signals that are added at the output.
[Figure 1.35: additivity. If x1(t) produces y1(t) and x2(t) produces y2(t), then x1(t) + x2(t) produces y1(t) + y2(t).]
A good example of a non-additive circuit is the mixer stage in a radio transmitter. Two signals are present: an audio signal that contains the voice or music, and a carrier wave that can propagate through space when applied to an antenna. The two signals are added and applied to a non-linearity, such as a pn-junction diode. A third signal results from merging the two: a modulated radio wave capable of carrying the information over great distances.
If the operator T in y(t) = T x(t) satisfies the following two conditions, then T is called a linear operator and the system represented by T is called a linear system:
1. Additivity:
T [x1(t) + x2(t)] = y1(t) + y2(t), for any input signals x1(t) and x2(t), (1.42)
2. Homogeneity or scaling:
T [α x(t)] = α y(t), for any input signal x(t) and any scalar α , (1.43)
Any system that does not satisfy Eq. 1.42 and/or Eq. 1.43 is classified as a non-linear sys-
tem. Eq. 1.42 and 1.43 can be combined into a single condition to form the superposition
property as
T [α1 x1 (t) + α2 x2 (t)] = α1 y1 (t) + α2 y2 (t), (1.44)
where α1 and α2 are arbitrary scalars.
All of these properties translate easily to DT systems; one only needs to replace the parentheses by square brackets and t by n. Regardless of the time domain, it is important to note that these are input-output properties of systems. In particular, nothing is being stated about the internal workings of the system; everything is stated in terms of input signals and corresponding output signals. For a linear system, zero input leads to zero output.
Example 1.19.
Determine whether the following systems are linear:
1. y(t) = x(2t)
2. y(t) = tx(t)
3. y(t) = x²(t)
4. y(t) = x(t) + x(t − 2) for t ≥ 0, and y(t) = 0 for t < 0
Solution 1.19.
Let x3(t) = α1 x1(t) + α2 x2(t) in each case.
1. y(t) = x(2t) → y3(t) = x3(2t) = α1 x1(2t) + α2 x2(2t) = α1 y1(t) + α2 y2(t). Superposition holds, so the system is linear.
2. y(t) = tx(t) → y3(t) = tx3(t) = t[α1 x1(t) + α2 x2(t)] = α1 y1(t) + α2 y2(t), so the system is linear.
3. y(t) = x²(t) → y3(t) = x3²(t) = [α1 x1(t) + α2 x2(t)]² = α1² x1²(t) + 2α1 α2 x1(t) x2(t) + α2² x2²(t) ≠ α1 y1(t) + α2 y2(t). The cross term breaks superposition, so the system is non-linear.
4. For t ≥ 0, y3(t) = x3(t) + x3(t − 2) = α1 [x1(t) + x1(t − 2)] + α2 [x2(t) + x2(t − 2)] = α1 y1(t) + α2 y2(t), and for t < 0 all outputs are zero, so the system is linear.
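Superposition (Eq. 1.44) can also be checked numerically on sampled signals; the grid, scalars, and test inputs below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)
x1, x2 = np.sin(2*np.pi*t), rng.standard_normal(t.size)
a1, a2 = 2.0, -3.0

# y(t) = t*x(t): superposition holds, so the system is linear.
S = lambda x: t * x
assert np.allclose(S(a1*x1 + a2*x2), a1*S(x1) + a2*S(x2))

# y(t) = x^2(t): the cross term 2*a1*a2*x1*x2 breaks superposition.
Q = lambda x: x**2
assert not np.allclose(Q(a1*x1 + a2*x2), a1*Q(x1) + a2*Q(x2))
```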
Problems of Chapter 1
1.1. Determine whether the following signals are energy signals, power signals or neither
a. x(t) = A for −1 ≤ t ≤ 1, and 0 elsewhere
b. x(t) = t for 0 ≤ t ≤ 1; x(t) = 2 − t for 1 ≤ t ≤ 2; and 0 elsewhere
c. x(t) = √t for t > 1, and 0 for t ≤ 1
d. x(t) = A eat
f. x(t) = 5 e−2|t|
g. x(t) = (2/3) cos(ωt + θ)
h. x(t) = 2 cos(ωt) for −1 ≤ t ≤ 1, and 0 elsewhere
j. x(t) = t u(t)
[Figure P1.2: a pulse x(t) of amplitude 1 on −1/2 ≤ t ≤ 1/2.]
c. w(t) = y(−t + 2)
d. f(t) = x(−(1/2)t − 1)
e. g(t) = u (t) − z (t + 3)
[Figure P1.3: a signal x(t) of amplitude 1 on −1 ≤ t ≤ 1.]
b. z(t) = x (2t)
c. w(t) = x((1/2)t − 1/2)
d. f (t) = y (t + 1)
e. g(t) = f (−t + 1)
1.5. Sketch and label the signal y(t) that is related to the signal x(t) in problem 1.4 as follows:
a. y(t) = x (8t + 4)
b. y(t) = x (8t − 4)
c. y(t) = x (4t)
d. y(t) = x((1/5)t)
1.7. Determine whether the following CT signals are periodic. If the signals are periodic,
determine the fundamental period.
a. x(t) = cos(t + π/2)
1.8. Determine whether the following DT signals are periodic. If the signals are periodic,
determine the fundamental period.
a. x[n] = cos[(1/4)n]
b. x[n] = 5 cos[(2/3)n + 4/3]
c. x[n] = cos[(π/4)n]
d. x[n] = cos²[(π/4)n]
e. x[n] = sin[(2π/3)n + 2]
f. x[n] = cos[(π/4)n] + sin[(π/3)n]
g. x[n] = cos[(1/4)n] + sin[(π/3)n]
h. x[n] = cos[(1/4)n] + sin[(1/3)n]
1.9. Determine whether the following systems are memoryless, invertible, causal, stable,
time-invariant, and linear. The systems have the input-output relation given by
a. y(t) = x(t + 2) sin (ω t + 2)
b. y(t) = x(−t)
c. y(t) = t x(t + 2)
d. y(t) = x²(t)
f. y[n] = x²[n + 2]
g. y[n] = x[n − 1]
h. y[n] = n x[n]
1.10. Causality of linear systems:
a. Show that a CT linear system is causal if the following statement is satisfied: for any input x(t) and any time t0, if x(t) = 0 for t ≤ t0, then the output y(t) = 0 for t ≤ t0.
b. Find a nonlinear system that is causal but does not satisfy this condition.
c. Find a nonlinear system that is not causal but satisfies this condition.
References
1. P. Gupta and P.R. Kumar. “The capacity of wireless networks”, IEEE Transactions on Information Theory,
vol. 46, no. 2, pp. 388-404, March 2000.
2. A. Papoulis and S.U. Pillai. Probability, Random Variables, and Stochastic Processes. 4th Edition, New
York, USA: McGraw-Hill, 2002.
3. A.V. Oppenheim, A.S. Willsky, S.H. Nawab. Signals and Systems. 2nd Edition, New Jersey, USA: Prentice
Hall, 1997.
4. S. Haykin and B. Van Veen. Signals and Systems. 2nd Edition, New Jersey, USA: John Wiley & Sons,
2003.
Chapter 2
Linear Time Invariant (LTI) Systems
Abstract :
A system is linear time-invariant (LTI) if it satisfies the properties of linearity and time-invariance. LTI systems are the easiest and most tractable systems to deal with from the analysis and design perspectives, and they play a fundamental role in signal and system analysis because many physical phenomena can be modeled by them. An LTI system is completely characterized by its unit-impulse response h(t), from which the output y(t) can be computed for any input.
2.1 Introduction
The defining properties of any LTI system are linearity and time-invariance. If the two properties hold, then the system is LTI. In this chapter, we apply linearity and time-invariance to develop the fundamental input-output relationship, which will be described in terms of the convolution operation. The importance of the convolution operation in LTI systems stems from the fact that we can find the system response y(t) to any input signal x(t) given only the unit-impulse response h(t) of the LTI system.
Linearity:
Linearity means that the relationship between the input and the output of the system is linear
and the system must satisfy the two properties as shown in Figure 2.1.
Figure 2.1 CT linear system y(t) = T x(t).
Additivity:
y1(t) = T x1(t); y2(t) = T x2(t),
y1(t) + y2(t) = T x1(t) + T x2(t) = T [x1(t) + x2(t)].
Scaling:
y(t) = T x(t) → ay(t) = T [ax(t)].
Superposition:
a1 y1(t) + a2 y2(t) = T [a1 x1(t) + a2 x2(t)]. (2.1)
Time-Invariant:
Time-invariant indicates that the behavior and characteristics of the system are fixed over
time. Any time shift in the input results in the same shift in the output as shown in Figure 2.2.
Figure 2.2 CT invariant system.
Example 2.1.
The following signals are the input x(t) and the output y(t) of a CT LTI system.
[Figure 2.3: the input x(t), of amplitude 1 on −1 ≤ t ≤ 1, and the output y(t), of amplitude 2 on −1 ≤ t ≤ 3.]
Figure 2.3 CT LTI system for example 2.1.
Find the output of the system for each of the following inputs:
a. x(t + 1)
b. x(t − 1)
c. (1/2) x(t)
d. x(2t)
Solution 2.1.
a. Since the system is time-invariant, the output is y(t + 1).
b. Since the system is time-invariant, the output is y(t − 1).
c. Since the system is linear, the output is (1/2) y(t).

2.2 CT LTI Systems: Convolution Integral
The response y(t) of a CT LTI system can be computed by convolving the unit-impulse response h(t) of the system with the input signal x(t) using the convolution integral; convolution is one of the most frequently applied operations describing the input-output relationship of an LTI system. Good examples of LTI systems are electrical circuits made up of resistors, capacitors, and inductors.
The impulse response h(t) of a CT LTI system (represented by the operator T) is defined to be the response of the system when the input is δ(t):
h(t) = T {δ(t)}.
Since the system is linear, the response y(t) of the system to an arbitrary input x(t) can be
expressed as
y(t) = T {x(t)} = T { ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ } = ∫_{−∞}^{∞} x(τ) T {δ(t − τ)} dτ. (2.5)
h(t − τ ) = T {δ (t − τ )} . (2.6)
Eq. 2.7 indicates that a CT LTI system is completely characterized by its impulse response h(t).
We can obtain Eq. 2.4 using the sampling property of the impulse function. If we consider t fixed and τ as the time variable, then x(τ) δ(t − τ) = x(t) δ(t − τ), hence
∫_{−∞}^{∞} x(τ) δ(t − τ) dτ = ∫_{−∞}^{∞} x(t) δ(t − τ) dτ = x(t) ∫_{−∞}^{∞} δ(t − τ) dτ = x(t).
Any system that can be described by a convolution is an LTI system. Furthermore, the char-
acterizing signal h(t) is the unit-impulse response of the system, as is easily verified by a
shifting calculation: if x(t) = δ (t) as shown in Figure 2.4, then
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{∞} δ(τ) h(t − τ) dτ = h(t). (2.8)
Defining the response of the system to a shifted unit impulse as shown in Figure 2.5:
h(t, τ) = T [δ(t − τ)],
y(t) = T {x(t)} = T { ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ } = ∫_{−∞}^{∞} x(τ) T {δ(t − τ)} dτ (due to linearity) = ∫_{−∞}^{∞} x(τ) h(t, τ) dτ. (2.9)
[Figure: the input x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ applied to an LTI system produces the output ∫_{−∞}^{∞} x(τ) h(t, τ) dτ.]
But the system is also time-invariant, so T [δ(t − τ)] = h(t, τ) = h(t − τ), since we had
T [δ (t)] = h(t). Therefore, the output of any CT LTI system is the convolution of the input
x(t) with the impulse response h(t) of the system as illustrated in Figure 2.7.
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = x(t) ∗ h(t), (2.10)
Figure 2.7 Convolution integral representation of an LTI system: y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = x(t) ∗ h(t).
The notation h(t − τ ) in Eq. 2.10 means that the function h(τ ) is flipped and shifted across
the function x(τ ). Here are the steps that will greatly simplify doing convolutions:
1. First, the impulse response h(τ) is time-reversed (reflected about the origin) to obtain h(−τ) and then shifted by t to form h(t − τ) = h[−(τ − t)], which is a function of τ with parameter t. Note that h(t − τ) is viewed as a function of τ, while t acts as the shift parameter, i.e., the function h(−τ) is shifted by an amount t.
2. Next, fix t and multiply x(τ) by h(t − τ) for all values of τ.
3. The product x(τ)h(t − τ) is integrated over all τ to produce a single output value y(t) that depends on t. Remember that τ is the integration variable and t is treated as a constant when doing the integral.
4. Steps 1 to 3 are repeated for all values of t to produce the entire output y(t); it usually turns out that there are only a few regions of interest and the rest of y(t) is zero.
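The four steps translate directly into code. A sketch on a uniform grid (rectangular-rule integration; the test pair is a box against a decaying exponential), checked against NumPy's library convolution:

```python
import numpy as np

def conv_steps(x, h, dt):
    n = len(x) + len(h) - 1
    y = np.zeros(n)
    h_flipped = h[::-1]                       # step 1: time-reverse h
    xp = np.pad(x, (len(h) - 1, len(h) - 1))  # zero-pad so shifts stay in range
    for i in range(n):                        # step 4: repeat for every t
        window = xp[i:i + len(h)]             # step 1 cont.: shift by t = i*dt
        y[i] = np.sum(window * h_flipped) * dt  # steps 2-3: multiply, integrate
    return y

dt = 0.01
tau = np.arange(0, 2, dt)
x = ((tau >= 0) & (tau < 1)).astype(float)    # x(t) = u(t) - u(t-1)
h = np.exp(-tau)                              # h(t) = e^{-t} u(t), sampled

y = conv_steps(x, h, dt)
assert np.allclose(y, np.convolve(x, h) * dt)  # matches library convolution
```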
Example 2.2.
Convolve the following signals:
a. x(t) = u(t) − u(t − 1); h(t) = e^{−t} u(t)
Figure 2.8 x(t) = u(t) − u(t − 1); h(t) = e^{−t} u(t).
b. x(t) = t[u(t) − u(t − 1)]; h(t) = u(t + 1) − u(t − 1)
Figure 2.9 x(t) = t[u(t) − u(t − 1)]; h(t) = u(t + 1) − u(t − 1).
c. x(t) = e^{−at} u(t); h(t) = u(t)
Figure 2.10 x(t) = e^{−at} u(t); h(t) = u(t).
d. x(t) = e^{at} u(−t); h(t) = e^{−at} u(t)
Figure 2.11 x(t) = e^{at} u(−t); h(t) = e^{−at} u(t).
Solution 2.2.
a. The function x(τ) is time-reversed and then shifted by t across the impulse response h(τ) to obtain x(t − τ), as shown in Figure 2.12.
For t < 0, there is no overlap between the two functions; therefore y(t) = 0.
Shift x(t − τ) towards the right so that it partially overlaps h(τ). For t ≥ 0 and t − 1 < 0, i.e., 0 ≤ t < 1:
y(t) = ∫_{0}^{t} e^{−τ} dτ = 1 − e^{−t}.
Shift x(t − τ) further to the right so that it completely overlaps h(τ). For t − 1 ≥ 0, i.e., t ≥ 1:
y(t) = ∫_{t−1}^{t} e^{−τ} dτ = e^{−(t−1)} − e^{−t}.
b. The impulse response h(τ) is time-reversed (it is simpler and symmetric) and then shifted by t across the function x(τ) to obtain h(t − τ), as shown in Figure 2.13. The convolution in this example can be divided into 5 parts.
For t + 1 < 0, i.e., t < −1, there is no overlap between the two functions; therefore y(t) = 0.
Shift h(t − τ) towards the right so that it partially overlaps x(τ). For t + 1 ≥ 0 and t < 0, i.e., −1 ≤ t < 0:
y(t) = ∫_{0}^{t+1} τ dτ = (t + 1)²/2.
Shift h(t − τ) further so that it completely overlaps x(τ). For t − 1 ≤ 0 and t ≥ 0, i.e., 0 ≤ t ≤ 1:
y(t) = ∫_{0}^{1} τ dτ = 1/2.
Shift h(t − τ) further towards the right so that it partially overlaps x(τ). For t − 1 ≤ 1 and t − 1 > 0, i.e., 1 < t ≤ 2:
y(t) = ∫_{t−1}^{1} τ dτ = [1 − (t − 1)²]/2.
Finally, for t − 1 > 1, i.e., t > 2, there is again no overlap; therefore y(t) = 0.
Figure 2.13 Convolution for example 2.2b.
c. The impulse response h(τ) is time-reversed and then shifted by t across the function x(τ) to obtain h(t − τ), as shown in Figure 2.14. The convolution in this example can be divided into 2 parts.
For t < 0, there is no overlap between the two functions; therefore y(t) = 0.
Shift h(t − τ) towards the right so that it overlaps x(τ). For t ≥ 0:
y(t) = ∫_{0}^{t} e^{−aτ} dτ = (1 − e^{−at})/a.
Figure 2.14 Convolution for example 2.2c.
d. The impulse response h(τ) is time-reversed and then shifted by t across the function x(τ) to obtain h(t − τ), as shown in Figure 2.15. The convolution in this example can be divided into 2 parts; assume a > 0.
For t < 0, the two functions overlap partially over τ ≤ t:
y(t) = ∫_{−∞}^{t} e^{aτ} e^{−a(t−τ)} dτ = e^{−at} ∫_{−∞}^{t} e^{2aτ} dτ = e^{at}/(2a).
For t ≥ 0, the overlap is over τ ≤ 0:
y(t) = ∫_{−∞}^{0} e^{aτ} e^{−a(t−τ)} dτ = e^{−at}/(2a).
Thus y(t) = e^{−a|t|}/(2a).
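Part (a) can be verified numerically: for x(t) = u(t) − u(t − 1) and h(t) = e^{−t}u(t), the convolution integral evaluates to 1 − e^{−t} on 0 ≤ t < 1 and to e^{−(t−1)} − e^{−t} for t ≥ 1. The sketch below (illustrative grid) compares this closed form with a discrete approximation:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 5, dt)
x = ((t >= 0) & (t < 1)).astype(float)   # x(t) = u(t) - u(t-1)
h = np.exp(-t)                           # h(t) = e^{-t} u(t), sampled on t >= 0

# Discrete (rectangular-rule) approximation of the convolution integral.
y_num = np.convolve(x, h)[:t.size] * dt

# Piecewise closed form derived above.
y_ref = np.where(t < 1, 1 - np.exp(-t), np.exp(-(t - 1)) - np.exp(-t))
assert np.max(np.abs(y_num - y_ref)) < 5e-3   # agree up to discretization error
```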
2.3 Properties of LTI Systems
The input-output behavior of a CT LTI system is described by its impulse response h(t) via the convolution expression. Therefore, the input-output properties of an LTI system can be characterized in terms of properties of h(t).
The commutative property of convolution states that changing the order of the operands does not affect the response of the system, as shown in Figure 2.16:
y(t) = x(t) ∗ h(t) = h(t) ∗ x(t). (2.11)
In other words, the distinction between the impulse response and the signal is of no mathematical consequence in the context of convolution [1].
Figure 2.16 Commutative operation of an LTI system: x(t) ∗ h(t) = h(t) ∗ x(t).
The distributive property of convolution means that summing the outputs of two systems driven by the same input is equivalent to a single system with an impulse response equal to the sum of the impulse responses of the two individual systems, as shown in Figure 2.17:
y(t) = x(t) ∗ {h1 (t) + h2 (t)} = x(t) ∗ h1 (t) + x(t) ∗ h2 (t). (2.12)
h 1 (t)
x(t) y(t) x(t) y(t)
+ = h 1 (t) + h 2 (t)
h 2 (t)
Figure 2.17 Distributive property of an LTI system: x(t) ∗ {h1 (t) + h2 (t)} = x(t) ∗ h1 (t) + x(t) ∗ h2 (t).
The distributive property also allows a sum of inputs applied to one system to be split into a sum of individual responses:
y(t) = {x1(t) + x2(t)} ∗ h(t) = x1(t) ∗ h(t) + x2(t) ∗ h(t) = y1(t) + y2(t). (2.13)
The associative property of convolution can be exploited to change the order of a complicated convolution of a cascaded system into several simpler ones. For LTI systems, changing the order of subsequent convolution operations does not affect the response, as shown in Figure 2.18.
y(t) = x(t) ∗ {h1 (t) ∗ h2 (t)} = {x(t) ∗ h1 (t)} ∗ h2 (t). (2.14)
x(t) y(t)
h 1 (t) h 2 (t)
x(t) y(t)
h 1 (t) * h 2 (t)
Figure 2.18 Associative property of an LTI system: x(t) ∗ {h1 (t) ∗ h2 (t)} = {x(t) ∗ h1 (t)} ∗ h2 (t).
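All three properties are easy to confirm on finite-length discrete signals; the random test vectors below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
x  = rng.standard_normal(16)
h1 = rng.standard_normal(8)
h2 = rng.standard_normal(8)

conv = np.convolve
assert np.allclose(conv(x, h1), conv(h1, x))                      # commutative
assert np.allclose(conv(x, h1 + h2), conv(x, h1) + conv(x, h2))   # distributive
assert np.allclose(conv(conv(x, h1), h2), conv(x, conv(h1, h2)))  # associative
```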
For non-linear systems, the order of cascaded systems generally cannot be changed. Figure 2.19 illustrates two cascaded memoryless systems, one multiplying the input by 2 and the other squaring it. If we multiply first and square second, we obtain y(t) = [2x(t)]² = 4x²(t), whereas squaring first and multiplying second gives y(t) = 2x²(t).
As shown in Figure 2.19, the outputs of the two cascades are different if the order is changed. Thus, the ability to interchange the order of cascaded systems is a property of LTI systems, not of systems in general.
Example 2.3.
Verify the following properties of convolution:
a. Commutative: x(t) ∗ h(t) = h(t) ∗ x(t).
c. Associative: {x(t) ∗ h1(t)} ∗ h2(t) = x(t) ∗ {h1(t) ∗ h2(t)}.
Solution 2.3.
a. By definition of the commutative property,
x(t) ∗ h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ.
Let λ = t − τ; then
x(t) ∗ h(t) = ∫_{−∞}^{∞} x(t − λ) h(λ) dλ = ∫_{−∞}^{∞} h(λ) x(t − λ) dλ = h(t) ∗ x(t).
c. By definition of the associative property of convolution, let f1(t) = x(t) ∗ h1(t) and f2(t) = h1(t) ∗ h2(t), where
f1(t) = ∫_{−∞}^{∞} x(τ) h1(t − τ) dτ.
Then
{x(t) ∗ h1(t)} ∗ h2(t) = f1(t) ∗ h2(t) = ∫_{−∞}^{∞} f1(α) h2(t − α) dα = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} x(τ) h1(α − τ) dτ ] h2(t − α) dα.
Since
f2(t) = ∫_{−∞}^{∞} h1(λ) h2(t − λ) dλ,
we have
f2(t − τ) = ∫_{−∞}^{∞} h1(λ) h2(t − τ − λ) dλ.
Thus, interchanging the order of integration and substituting λ = α − τ,
{x(t) ∗ h1(t)} ∗ h2(t) = ∫_{−∞}^{∞} x(τ) f2(t − τ) dτ = x(t) ∗ f2(t) = x(t) ∗ {h1(t) ∗ h2(t)}.
Example 2.4.
Consider the interconnection of LTI systems shown in Figure 2.20: h1(t) and h2(t) are connected in parallel, the sum of their outputs feeds h3(t), and a parallel branch h4(t) is subtracted from the output of the series path.
Given the impulse responses of the subsystems, find the overall system impulse response h(t).
Solution 2.4.
Derive the overall impulse response in terms of the impulse response of each subsystem. First, apply the distributive property to the parallel impulse responses h1(t) and h2(t) to obtain the equivalent system h12(t) = h1(t) + h2(t), as illustrated in Figure 2.21(a).
Second, apply the associative property to the series impulse responses h12 (t) and h3 (t), then
convolve to obtain the equivalent system h123 (t) as illustrated in Figure 2.21(b).
Finally, apply the distributive property to the parallel impulse responses h123 (t) and h4 (t),
then subtract h4 (t) from h123 (t) to obtain the overall impulse response h(t) as illustrated in
Figure 2.21(c).
h(t) = h123 (t) − h4 (t) = u(t) − u(t − 2).
[Figure 2.21: (a) h12(t) = h1(t) + h2(t) in series with h3(t), with h4(t) subtracted; (b) h123(t) = [h1(t) + h2(t)] ∗ h3(t), with h4(t) subtracted; (c) the single equivalent system h(t) = [h1(t) + h2(t)] ∗ h3(t) − h4(t).]
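The reduction can be checked numerically. The component responses below are hypothetical stand-ins (the chapter's actual h1(t)...h4(t) are not reproduced here); the point is that tracing the block diagram and convolving with the single equivalent response h = (h1 + h2) ∗ h3 − h4 agree:

```python
import numpy as np

conv = np.convolve
rng = np.random.default_rng(2)
# Hypothetical component impulse responses and input (finite, discrete).
h1, h2, h3, h4 = (rng.standard_normal(6) for _ in range(4))
x = rng.standard_normal(20)

def pad_to(a, n):
    # zero-pad a finite-length sequence to length n for elementwise arithmetic
    return np.pad(a, (0, n - a.size))

# Single equivalent response: h = (h1 + h2) * h3 - h4
h_series = conv(h1 + h2, h3)
h = h_series - pad_to(h4, h_series.size)

# Trace the block diagram branch by branch.
top = conv(conv(x, h1) + conv(x, h2), h3)
bottom = conv(x, h4)
y_diagram = top - pad_to(bottom, top.size)

assert np.allclose(y_diagram, conv(x, h))
```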
The output y[n] of a DT LTI system can be expressed using the commutative property of
convolution
y[n] = h[n] ∗ x[n] = Σ_{k=−∞}^{∞} h[k] x[n − k]
= . . . + h[−2]x[n + 2] + h[−1]x[n + 1] + h[0]x[n] + h[1]x[n − 1] + h[2]x[n − 2] + . . .
The output y[n] of a DT LTI memoryless system at any time depends only on the input x[n] at the same time and does not depend on x[n − k] for k ≠ 0. Hence, all terms in the sum must be zero except h[0]x[n]. This implies that the output y[n] is given by
y[n] = Kx[n],
where K is an arbitrary constant scalar and the corresponding impulse response h[n] is given
by
h[n] = K δ [n].
Therefore, for a DT LTI memoryless system, the impulse response h[k] = 0 for k ̸= 0, whereas
for a DT LTI memory system, the impulse response h[k] ̸= 0 for k ̸= 0.
A CT LTI system is memoryless if its output y(t) at any time depends only on the value of its input x(t) at the same time. Therefore, the output does not depend on past and/or future values of the input x(t); this holds for a continuous-time system if h(t) = 0 for t ≠ 0, and the convolution integral reduces to the relation
y(t) = h(t) ∗ x(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ = Kx(t), (2.15)
where K = h(0) is a constant scalar and the impulse response has the form
h(t) = K δ(t). (2.16)
A CT LTI system is memoryless if the impulse response h(t) = 0 for t ≠ 0, and the LTI system has memory if h(t) ≠ 0 for some t ≠ 0.
Example 2.5.
Given the impulse responses of LTI systems, determine whether the following systems are memoryless:
a. h[n] = 2^n u[n]
b. h[n] = 2^n u[n + 1]
d. h(t) = e^{−t}
e. h(t) = e^{2|t|}
Solution 2.5.
A DT LTI system is memoryless if the impulse response h[n] = 0 for n ̸= 0 and for a CT LTI
system, the impulse response h(t) = 0 for t ̸= 0.
a. Let n = 1; then h[1] = 2¹ u[1] = 2 ≠ 0. Therefore the system has memory, since h[n] ≠ 0 for some n ≠ 0.
b. Let n = 1; then h[1] = 2¹ u[2] = 2 ≠ 0. Therefore the system has memory, since h[n] ≠ 0 for some n ≠ 0.
c. Let n = −1; then h[−1] = (−1)^{−1} u[1] ≠ 0. Therefore the system has memory, since h[n] ≠ 0 for some n ≠ 0.
d. Let t = 1; then h(1) = e^{−1} ≠ 0. Therefore the system has memory, since h(t) ≠ 0 for some t ≠ 0.
e. Let t = 1; then h(1) = e² ≠ 0. Therefore the system has memory, since h(t) ≠ 0 for some t ≠ 0.
A system S is invertible if and only if there exists an inverse system S⁻¹ such that the overall series impulse response is that of the identity system. Figure 2.22 illustrates a system with impulse response h(t) in series with an inverse system with impulse response h1(t), producing the output y(t) = x(t); the series interconnection is identical to the identity system:
h(t) ∗ h1(t) = δ(t). (2.17)
Since the overall series impulse response equals δ(t), h1(t) is the impulse response of the inverse system. The invertibility property is used, for example, to recover the original signal x(t) at the receiving end of a communication system by a processing filter whose impulse response is designed to be the inverse of the impulse response of the communication channel, thereby filtering out the channel's distortion.
Figure 2.22 An inverse system for CT LTI systems: h(t) ∗ h1 (t) = δ (t).
Example 2.6.
Given the impulse responses of LTI systems, determine whether the following systems are invertible:
a. h[n] = 2^n u[n]
b. h[n] = 2^n u[n + 1]
d. h(t) = e^{−t}
e. h(t) = e^{2|t|}
Solution 2.6.
We may verify each system's invertibility by finding an inverse g[n] such that the overall series impulse response is the identity system: h[n] ∗ g[n] = δ[n].
a. h[n] = 2^n u[n]:
y[n] = Σ_{k=−∞}^{n} x[k] h[n − k] = Σ_{k=−∞}^{n} x[k] 2^{n−k}. Eq.(1)
Splitting off the k = n term,
y[n] = x[n] + Σ_{k=−∞}^{n−1} x[k] 2^{n−k}, Eq.(2)
and since
y[n − 1] = Σ_{k=−∞}^{n−1} x[k] 2^{n−1−k}, Eq.(3)
substituting the value of y[n − 1] from Eq.(3) into Eq.(2) gives the system expression y[n] = x[n] + 2y[n − 1], i.e.,
x[n] = y[n] − 2y[n − 1].
Therefore the system is invertible, and the inverse system has impulse response
g[n] = δ[n] − 2δ[n − 1].
Since applying g[n] to y[n] recovers x[n], the overall series impulse response is h[n] ∗ g[n] = δ[n], an identity system. You may also check the answer by computing h[n] ∗ g[n] = δ[n] directly.
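The inverse found in part (a) can be confirmed by direct convolution on a finite window (a sketch; the trailing truncation term of the finite simulation is ignored):

```python
import numpy as np

N = 12
h = 2.0 ** np.arange(N)          # h[n] = 2^n u[n] on n = 0..N-1
g = np.array([1.0, -2.0])        # g[n] = delta[n] - 2*delta[n-1]

hg = np.convolve(h, g)           # should equal delta[n] on the window
assert hg[0] == 1.0
assert np.all(hg[1:N] == 0.0)    # identity system up to the truncation tail
```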
b. h[n] = 2^n u[n + 1]:
y[n] = Σ_{k=−∞}^{n+1} x[k] h[n − k] = Σ_{k=−∞}^{n+1} x[k] 2^{n−k}. Eq.(1)
Splitting off the k = n + 1 and k = n terms,
y[n] = (1/2) x[n + 1] + x[n] + Σ_{k=−∞}^{n−1} x[k] 2^{n−k}, Eq.(2)
and since
y[n − 2] = Σ_{k=−∞}^{n−1} x[k] 2^{n−2−k}, Eq.(3)
substituting the value of y[n − 2] from Eq.(3) into Eq.(2) gives the system expression
y[n] = x[n] + (1/2) x[n + 1] + 4y[n − 2]. Eq.(4)
Rewriting the system expression in terms of x[n], we get
x[n] + (1/2) x[n + 1] = y[n] − 4y[n − 2].
Therefore the system is invertible, and the inverse for x[n] satisfies the recursion
g[n] = δ[n] − 4δ[n − 2] − (1/2) g[n + 1].
Substituting the value of y[n + 1] from Eq.(3) into Eq.(2) and deriving the system expression, we find that applying g[n] to y[n] recovers x[n]; this implies that the overall series impulse response h[n] ∗ g[n] = δ[n], which is an identity system.
d. h(t) = e^{−t} (taken as e^{−t} u(t), so the integral runs up to t):
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{t} x(τ) e^{−(t−τ)} dτ = e^{−t} ∫_{−∞}^{t} x(τ) e^{τ} dτ.
Hence
e^{t} y(t) = ∫_{−∞}^{t} x(τ) e^{τ} dτ,
and differentiating both sides gives
(d/dt)[e^{t} y(t)] = e^{t} x(t),
x(t) = e^{−t} (d/dt)[e^{t} y(t)].
Therefore the system is invertible, and the inverse system is y^{−1}(t) = e^{−t} (d/dt)[e^{t} x(t)].
e. h(t) = e^{2|t|}
The output of a causal LTI system depends only on present and/or past values of the input signal. Writing the convolution sum of a DT LTI system as
y[n] = Σ_{k=−∞}^{∞} h[k] x[n − k],
the present and past input values x[n], x[n − 1], x[n − 2], ... are associated with k ≥ 0 in the impulse response h[k], whereas future input values x[n + 1], x[n + 2], ... are associated with k < 0. Since the output y[n] of a causal LTI system should depend only on present and/or past values of the input, we require h[k] = 0 for k < 0. Applying the causality condition, the output y[n] of a causal DT LTI system is expressed as
y[n] = Σ_{k=−∞}^{n} x[k] h[n − k] = Σ_{k=0}^{∞} h[k] x[n − k].
This illustrates that the only values of the input used to evaluate the output y[n] are x[k] for k ≤ n. An example of a causal system is h[n] = δ[n] − δ[n − 2], because the only non-zero values of the impulse response are at n = 0 and n = 2, whereas an example of a non-causal system is h[n] = δ[n + 1] + δ[n] − δ[n − 2], because at n = −1 the impulse response h[−1] ≠ 0 due to the presence of δ[n + 1].
As with the causality property of systems, an input signal x[n] is called causal if
x[n] = 0, for n < 0.
When the input signal x[n] is causal, the output y[n] of a causal DT LTI system is given by
y[n] = Σ_{k=0}^{n} x[k] h[n − k] = Σ_{k=0}^{n} h[k] x[n − k].
A system is causal if its current state is not a function of future events and its output depends
only on present and/or past values of the input signal. Specifically, for a CT LTI system, this
requirement is that y(t) should not depend on x(τ) for τ > t. Based on the convolution integral
equation, all the coefficients h(t − τ ) that multiply values of x(τ ) for τ > t must be zero,
which means that the impulse response of a causal CT LTI system should satisfy the following
condition
h(t) = 0, for t < 0. (2.18)
The convolution integral is then given by
y(t) = ∫_{−∞}^{t} x(τ) h(t−τ) dτ = ∫_{0}^{∞} h(τ) x(t−τ) dτ. (2.19)
Eq. 2.19 implies that the only values of the input x(t) used to evaluate the output y(t) are those for τ ≤ t. Causality is a property of a system, but the term can also be applied to a signal: based on the causality condition, any input signal x(t) is called causal if
x(t) = 0, for t < 0.
When the input signal x(t) is causal, the output y(t) of a causal CT LTI system is given by
y(t) = ∫_{0}^{t} x(τ) h(t−τ) dτ = ∫_{0}^{t} h(τ) x(t−τ) dτ.
A system is causal if its impulse response is zero for negative time; this makes sense, as the system should not respond before the impulse is applied. A pure time shift with impulse response h(t) = δ(t − t0) is causal for t0 ≥ 0 (a time delay), but non-causal for t0 < 0 (a time advance, where the output anticipates future values of the input).
Example 2.7.
Given the impulse response of LTI systems, verify whether the following systems are causal:
a. h[n] = 2^{n} u[n]
b. h[n] = 2^{n} u[n+1]
d. h(t) = e^{−t} u(t)
e. h(t) = e^{2|t|}
Solution 2.7.
A DT LTI system is causal if the impulse response h[n] = 0 for n < 0 and for a CT LTI system,
the impulse response h(t) = 0 for t < 0.
a. Let n = −1; then h[−1] = 2^{−1} u[−1] = 0. Therefore, the system is causal, since h[n] = 0 for n < 0.
b. Let n = −1; then h[−1] = 2^{−1} u[0] ≠ 0. Therefore, the system is non-causal, since h[n] ≠ 0 for n < 0.
c. Let n = −1; then h[−1] = (−1)^{−1} u[1] ≠ 0. Therefore, the system is non-causal, since h[n] ≠ 0 for n < 0.
d. Let t = −1; then h(−1) = e^{1} u(−1) = 0. Therefore, the system is causal, since h(t) = 0 for t < 0.
e. Let t = −1; then h(−1) = e^{2} ≠ 0. Therefore, the system is non-causal, since h(t) ≠ 0 for t < 0.
A system is stable if every bounded input produces a bounded output. For an LTI system, suppose the input x(t) is bounded in magnitude,
|x(t)| ≤ B for all t,
where B is a constant. If this input signal is applied to an LTI system with unit impulse response h(t), the magnitude of the output is given by:
|y(t)| = | ∫_{−∞}^{∞} h(τ) x(t−τ) dτ |
≤ ∫_{−∞}^{∞} |h(τ)| |x(t−τ)| dτ ≤ ∫_{−∞}^{∞} |h(τ)| B dτ
= B ∫_{−∞}^{∞} |h(τ)| dτ. (2.21)
Therefore, the LTI system is BIBO stable if its impulse response h(t) is absolutely integrable
∫_{−∞}^{∞} |h(τ)| dτ < ∞. (2.22)
The same analysis applies to a DT LTI system. If we apply a bounded input with |x[n]| ≤ B, where B is a constant, then the bound on the output |y[n]| is
|y[n]| ≤ B ∑_{k=−∞}^{∞} |h[k]|.
Therefore, |y[n]| is finite if the sum of the absolute values of the impulse response is finite; that is, the LTI system is BIBO stable if its impulse response h[n] is absolutely summable:
∑_{k=−∞}^{∞} |h[k]| < ∞.
Example 2.8.
Given the impulse response of LTI systems, verify whether the following systems are stable:
a. h[n] = 2^{n} u[n]
b. h[n] = (1/2)^{n} u[n+1]
d. h(t) = e^{−t} u(t)
e. h(t) = e^{−2|t|}
Solution 2.8.
A DT LTI system is stable if its impulse response h[n] is absolutely summable and for a CT
LTI system, the impulse response h(t) is absolutely integrable.
a. h[n] = 2^{n} u[n]
∑_{k=−∞}^{∞} |h[k]| = ∑_{k=0}^{∞} 2^{k} → ∞.
Note that the summation is finite only if the following geometric series condition is satisfied:
∑_{k=0}^{∞} a^{k} = 1/(1−a), |a| < 1.
Since a = 2 > 1, the sum of the impulse response diverges, and therefore the system is unstable.
b. h[n] = (1/2)^{n} u[n+1]
∑_{k=−∞}^{∞} |h[k]| = ∑_{k=−1}^{∞} (1/2)^{k}
= 2 + ∑_{k=0}^{∞} (1/2)^{k}
= 2 + 1/(1 − 1/2) = 4.
Since the sum of the impulse response equals 4, which is finite, the system is stable.
c. Since the sum of the impulse response diverges, the system is unstable.
d. h(t) = e^{−t} u(t)
∫_{−∞}^{∞} |h(t)| dt = ∫_{0}^{∞} e^{−t} dt
= [−e^{−t}]_{0}^{∞} = 1 < ∞.
Since the impulse response is absolutely integrable (the integral equals 1), the system is stable.
e. h(t) = e^{−2|t|}
∫_{−∞}^{∞} |h(t)| dt = ∫_{−∞}^{0} e^{2t} dt + ∫_{0}^{∞} e^{−2t} dt
= [(1/2) e^{2t}]_{−∞}^{0} + [−(1/2) e^{−2t}]_{0}^{∞} = 1/2 + 1/2 = 1 < ∞.
Since the impulse response is absolutely integrable (the integral equals 1), the system is stable.
The step response s(t) of an LTI system is simply the response of the system to a unit
step input signal u(t), which conveys a lot of information about the system. Based on the
commutative property of convolution, s(t) = u(t) ∗ h(t) = h(t) ∗ u(t). Therefore, s(t) can be
viewed as the response to input h(t) of a CT LTI system with unit impulse response u(t). That
is, the step response s(t) of a CT LTI system is the running integral of its impulse response.
s(t) = h(t) ∗ u(t) = ∫_{−∞}^{∞} h(τ) u(t−τ) dτ = ∫_{−∞}^{t} h(τ) dτ. (2.23)
The step response can also be used to characterize an LTI system, since the impulse response is the first derivative of the step response:
h(t) = ds(t)/dt = s′(t). (2.24)
For a DT LTI system with impulse response h[n], the step response s[n] is given by
s[n] = h[n] ∗ u[n] = ∑_{k=−∞}^{∞} h[k] u[n−k].
Since u[n−k] = 0 for k > n and u[n−k] = 1 for k ≤ n, the step response s[n] is the running sum of the impulse response:
s[n] = ∑_{k=−∞}^{n} h[k].
Example 2.9.
Given the impulse response of LTI systems, find the step response of the systems
a. h[n] = (−1/2)^{n} u[n]
b. h(t) = e^{−2t} u(t)
Solution 2.9.
a. h[n] = (−1/2)^{n} u[n]
For n < 0,
s[n] = 0.
For n ≥ 0,
s[n] = ∑_{k=−∞}^{n} h[k] = ∑_{k=0}^{n} (−1/2)^{k} = [1 − (−1/2)^{n+1}] / [1 − (−1/2)] = (2/3) [1 − (−1/2)^{n+1}].
b. h(t) = e^{−2t} u(t)
s(t) = ∫_{−∞}^{t} h(τ) dτ = ∫_{0}^{t} e^{−2τ} dτ
= [−(1/2) e^{−2τ}]_{0}^{t} = (1/2) [1 − e^{−2t}], t ≥ 0.
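The running-sum relation for the DT step response can be verified directly. This sketch computes the cumulative sum of h[n] = (−1/2)^n u[n] and compares it with the geometric-series closed form (the truncation length is an illustrative choice):

```python
# Sketch: the DT step response is the running sum of the impulse response.
# Compare the cumulative sum against the closed form for h[n] = (-1/2)^n u[n].

h = [(-0.5) ** n for n in range(20)]      # h[n] for n = 0..19

s = []
acc = 0.0
for v in h:                               # s[n] = sum_{k=0}^{n} h[k]
    acc += v
    s.append(acc)

closed = [(2/3) * (1 - (-0.5) ** (n + 1)) for n in range(20)]
print(all(abs(a - b) < 1e-12 for a, b in zip(s, closed)))  # True
```

The two sequences agree to machine precision, confirming the closed-form sum used in part a.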
This is a class of systems for which the input and output are related through a linear constant-coefficient differential equation in CT, or a linear constant-coefficient difference equation in DT.
An important idea of causal LTI systems is that the input-output response of CT systems are
often described by linear constant-coefficient differential equations. Consider the first order
differential equation
dy(t)/dt + a y(t) = b x(t), (2.25)
where y(t) denotes the output of the system and x(t) is the input. Furthermore, a general N th
order linear constant-coefficient differential equation is given by
∑_{k=0}^{N} a_k d^{k}y(t)/dt^{k} = ∑_{k=0}^{M} b_k d^{k}x(t)/dt^{k}, (2.26)
where the coefficients a_k and b_k are real constants. The order N refers to the highest derivative of y(t). Such differential equations play a central role in describing the input-output relationships of a wide variety of electrical, mechanical and chemical systems. For instance, in the RC circuit, the input x(t) = v_s(t) and the output y(t) = v_c(t) are related by the first-order constant-coefficient differential equation
dy(t)/dt + (1/RC) y(t) = (1/RC) x(t).
If N = 0, Eq. 2.26 reduces to
y(t) = (1/a_0) ∑_{k=0}^{M} b_k d^{k}x(t)/dt^{k}. (2.27)
If M ≠ 0, the system has memory, in the sense that its output depends not only on the present value of the input but also on its derivatives (and hence on the input in a neighborhood of t); if M = 0, Eq. 2.27 reduces to y(t) = (b_0/a_0) x(t), which is memoryless, as the output depends only on the present input.
Furthermore, the system stability depends on the coefficients a_k. Consider a first-order LTI differential equation with a_0 = 1:
dy(t)/dt − a_1 y(t) = 0, whose solution is y(t) = A e^{a_1 t}.
If a_1 > 0, the system is unstable, as its impulse response is a growing exponential function of time; whereas if a_1 < 0, the system is stable, as its impulse response is a decaying exponential function of time.
The solution to a given differential equation is the sum of the homogeneous solution (the solution with the input set to zero) and the particular solution (a function that satisfies the differential equation for the given input):
y(t) = y_h(t) + y_p(t).
The natural response of the system is the homogeneous solution y_h(t), obtained by setting x(t) = 0; it depends on the initial conditions. The forced response of the system is the particular solution y_p(t), which usually has the same form as the input signal and is obtained for the given input.
Example 2.10.
Solve the following system for the input x(t) = K e^{5t} u(t):
dy(t)/dt − 7y(t) = x(t)
Solution 2.10.
The general solution consists of the homogeneous solution and the particular solution.
Particular solution:
x(t) = K e^{5t} for t > 0.
Because of the nature of the input, y_p(t) is a signal that has the same form as x(t) for t > 0. Substituting x(t) = K e^{5t} and y_p(t) = Y e^{5t} into the given system, we get
5Y e^{5t} − 7Y e^{5t} = K e^{5t} ⟹ Y = −K/2, so y_p(t) = −(K/2) e^{5t} for t > 0.
Homogeneous solution:
To determine the natural response y_h(t) of the system, we hypothesize a solution in the form of an exponential. Let y_h(t) = A e^{st} and substitute it in dy(t)/dt − 7y(t) = 0:
sA e^{st} − 7A e^{st} = 0 ⟹ s = 7.
Therefore,
yh (t) = Ae7t for t > 0.
Combining the natural response and the forced response, we get the solution to the differential
equation
y(t) = −(K/2) e^{5t} + A e^{7t} for t > 0.
The response is not yet completely determined (the value of A is still unknown), because the initial condition y(0) has not been specified. For causal LTI systems defined by linear constant-coefficient differential equations, the initial condition is taken as y(0) = 0, which is called the condition of initial rest.
The initial rest implies that
y(0) = −(K/2) e^{0} + A e^{0} = 0 ⟹ A = K/2.
The complete solution is
y(t) = (K/2) [e^{7t} − e^{5t}] u(t).
For t < 0, since x(t) = 0 for t < 0, the condition of initial rest and causality of the system
implies that y(t) = 0 for t < 0.
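The worked solution can be checked numerically. This sketch follows the system as it is solved in the steps above, dy/dt − 7y = x with x(t) = K e^{5t} u(t) and initial rest (the step size and horizon are illustrative assumptions), and compares a forward-Euler simulation with the closed form y(t) = (K/2)(e^{7t} − e^{5t}):

```python
# Sketch: forward-Euler simulation of dy/dt = 7y + K e^{5t}, y(0) = 0 (initial rest),
# compared against the closed-form solution y(t) = (K/2)(e^{7t} - e^{5t}).

import math

K, dt, T = 2.0, 1e-5, 0.5
y, t = 0.0, 0.0                                # initial rest: y(0) = 0
for _ in range(int(T / dt)):
    y += dt * (7 * y + K * math.exp(5 * t))    # dy/dt = 7y + x(t)
    t += dt

closed = (K / 2) * (math.exp(7 * T) - math.exp(5 * T))
print(abs(y - closed) / closed < 1e-2)         # True: simulation tracks the solution
```

With this step size the simulation agrees with the analytic answer to well under one percent; the growing exponential confirms that this particular causal system is unstable.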
A block diagram interconnection is a simple and natural way to represent systems described by linear constant-coefficient difference and differential equations. For example, a causal CT system described by a first-order differential equation is
dy(t)/dt + a y(t) = b x(t).
The equation can be rewritten as
y(t) = −(1/a) dy(t)/dt + (b/a) x(t).
The block diagram representation for this CT system is shown in Figure 2.24, it involves three
basic operations: addition, multiplication by a coefficient, and differentiation.
Figure 2.24 Block diagram representation for the causal CT system described by the first-order differential equation.
Problems of Chapter 2
Figure P2.1a x(t) = u(t) − u(t − 4); h(t) = r(t)
Figure P2.1b x(t) = δ (t); h(t) = e−2t u(t)
Figure P2.1c x(t) = r(t)[u(t) − u(t − 1)]; h(t) = u(t) − u(t − 2)
Figure P2.1d x(t) = e|t| [u(t + 1) − u(t − 1)]; h(t) = 2u(t) − 2u(t − 3)
Figure P2.1e x(t) = u(t) − u(t − 4); h(t) = 2e−at u(t)
f. x(t) = { −1, 1 < t < 2; 1, 2 < t < 3; 0, otherwise }; h(t) = e^{−t} u(t)
Figure P2.1f x(t) = 4u(t − 2) − 2u(t − 3) − 2u(t − 1); h(t) = e−t u(t)
g. x(t) = { 1, 0 < t < 1; −1, 1 < t < 2; 0, otherwise }; h(t) = { 1, 0 < t < 1; −1, 1 < t < 2; 0, otherwise }
Figure P2.1g x(t) = u(t) + u(t − 2) − 2u(t − 1); h(t) = u(t) + u(t − 2) − 2u(t − 1)
Figure P2.1h x(t) = u(t − 3); h(t) = e2t u(t)
Figure P2.1i x(t) = e−t u(t); h(t) = u(t) − u(t − 3)
2.2. Find the response of a system to an input of x(t) = 2u(t − 2) if the impulse response is
given by
a. h(t) = sin(2t) u(t)
b. h(t) = u(t)
c. h(t) = u(t − 4)
d. h(t) = 2u(t + 2)
2.3. A CT LTI system has the impulse response h(t) illustrated in Figure P2.3.
Figure P2.3
Use linearity and time invariance to determine the system output y(t) if the input x(t) is
a. x(t) = 2δ (t + 2) + 2δ (t − 2)
b. x(t) = δ (t − 1) + δ (t − 2) + δ (t − 3)
c. x(t) = ∑_{n=−∞}^{∞} δ(t − 3n)
d. x(t) = ∑_{n=−∞}^{∞} δ(t − 2n)
e. x(t) = ∑_{n=−∞}^{∞} δ(t − 1.5n)
2.4. A CT LTI system responds to the following inputs with the corresponding outputs:
x(t) = u(t) → y(t) = 2(1 − e^{−t}) u(t),  x(t) = cos(t) → y(t) = 2 cos(t − π/4)
Using linearity/time invariance properties, find y(t) for the following inputs:
a. x(t) = 2u(t) − 2u(t − 1)
b. x(t) = 3 cos(2t − 2)
d. x(t) = t u(t)
c. Using linearity/time invariance properties, find the system output y(t) for the given inputs:
• x(t) = u(t − 3)
• x(t) = u(t + 3)
2.6. A CT LTI system with input x(t) and output y(t) is related by
y(t) = ∫_{−∞}^{t} e^{−(t−τ)} x(τ − 2) dτ.
2.7. Given the impulse response of LTI systems, verify whether the following systems are
memoryless, invertible, causal, and stable
a. h(t) = e^{−4t} u(t − 2)
b. h(t) = 4δ(t)
c. h[n] = 3^{n} u[n − 2]
d. h[n] = (−1/2)^{n} u[n] + (1/2)^{n} u[n−1]
2.8. Given the impulse response of LTI systems, find the step response of the systems
a. h(t) = t e^{−t}
c. h[n] = u[n]
d. h[n] = n u[n]
2.9. Determine the output y(t) of the system described by the following differential equation, with the input and initial rest conditions as specified:
5 dy(t)/dt + 10 y(t) = 2 x(t)
a. Input → x(t) = 2
2.10. Determine the output y(t) of the system described by the following differential equation, with the input and initial rest conditions as specified:
dy(t)/dt + 2 y(t) = x(t) + dx(t)/dt
Chapter 3
Continuous-Time Fourier Series Representation of
Periodic Signals
Abstract :
This chapter develops the representation of signals as linear combinations of a set of basic signals. For this alternative representation, we use complex exponentials. The resulting representations are known as continuous-time and discrete-time Fourier series. This chapter introduces the Fourier series in the context of continuous-time signals and systems.
3.1 Introduction
By 1807, Fourier had completed a work in which he showed that series of harmonically related sinusoids were useful in representing the temperature distribution of a body. He claimed that any periodic signal could be represented by a Fourier series, and he also obtained a representation for non-periodic signals as weighted integrals of sinusoids (the Fourier transform).
• The set of basic signals can be used to construct a broad and useful class of signals.
• The response of an LTI system to each signal should be simple enough in structure to
provide us with a convenient representation for the response of the system to any signal
constructed as a linear combination of the basic signal.
Both of these properties are provided by Fourier analysis.
The importance of complex exponentials in the study of LTI systems is that the response of an LTI system to a complex exponential input is the same complex exponential with only a change in amplitude; that is,
x(t) = e^{st} → y(t) = H(s) e^{st}.
A signal for which the system output is (possible complex) the product of a constant and the
input is referred to as an eigenfunction of the system, while the amplitude factor is referred
to as the system’s eigenvalue. Note that the complex exponentials are eigenfunctions.
For an input x(t) = e^{st} applied to an LTI system with impulse response h(t), the output is
y(t) = ∫_{−∞}^{∞} h(τ) x(t−τ) dτ = ∫_{−∞}^{∞} h(τ) e^{s(t−τ)} dτ
= e^{st} ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ. (3.3)
Assuming that the integral ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ converges, it is expressed as
H(s) = ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ, (3.4)
so that the response to e^{st} is
y(t) = H(s) e^{st}. (3.5)
Complex exponential sequences are eigenfunctions of discrete-time LTI systems. That is,
suppose that an LTI system with impulse response h[n] has as its input sequence
x[n] = zn , (3.6)
where z is a complex number, then the output of the system can be determined from the
convolution sum as
y[n] = ∑_{k=−∞}^{∞} h[k] x[n−k] = ∑_{k=−∞}^{∞} h[k] z^{n−k}
= z^{n} ∑_{k=−∞}^{∞} h[k] z^{−k}. (3.7)
k=−∞
Assuming that the summation on the right-hand side of Eq. 3.7 converges, then the output is
the product of the same complex exponential and the constant that depends on the value of z.
That is,
y[n] = H(z) zn , (3.8)
where
H(z) = ∑_{k=−∞}^{∞} h[k] z^{−k}. (3.9)
It is shown that the complex exponential is the eigenfunction zn of the LTI system and the
response H(z) for a specific value of z is the eigenvalue associated with the eigenfunction.
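The eigenfunction property is easy to confirm numerically for a finite-length impulse response. A sketch (the FIR coefficients and the point z are illustrative assumptions):

```python
# Sketch: verify that z^n is an eigenfunction of a DT LTI system.
# For an FIR h[n], the convolution output equals H(z) z^n with H(z) = sum_k h[k] z^{-k}.

import cmath

h = [1.0, 0.5, 0.25]                 # an illustrative FIR impulse response (assumption)
z = cmath.exp(1j * 0.3)              # a point on the unit circle

H = sum(hk * z ** (-k) for k, hk in enumerate(h))   # eigenvalue H(z)

# y[n] = sum_k h[k] z^{n-k} for a few values of n
for n in range(5):
    y_n = sum(hk * z ** (n - k) for k, hk in enumerate(h))
    assert abs(y_n - H * z ** n) < 1e-9   # output is H(z) times the input
print("eigenfunction property verified")
```

The output of the convolution is exactly the input sequence scaled by the single complex number H(z), which is the eigenvalue associated with the eigenfunction z^n.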
The usefulness of decomposing general signals in terms of eigenfunctions for LTI system analysis can now be seen. Let
x[n] = ∑_{k} a_k z_k^{n};
from the superposition property, the response to the sum is the sum of the responses, so the output is
y[n] = ∑_{k} a_k H(z_k) z_k^{n}. (3.15)
Example 3.1.
Consider a continuous-time LTI system, where the input and output are related by the follow-
ing relation:
y(t) = x(t − 3)
Find the impulse response, eigenvalue, and eigenfunction of the system for the following
inputs?
a. x(t) = e^{j2t}
b. x(t) = cos(2t)
c. x(t) = cos(4t) + cos(7t)
Solution 3.1.
The impulse response of the system is h(t) = δ(t − 3). Therefore, the eigenvalue of the system is given as follows:
H(s) = ∫_{−∞}^{∞} δ(τ − 3) e^{−sτ} dτ = e^{−3s}.
a. If the input is x(t) = e^{j2t}, then the system output is
y(t) = H(j2) e^{j2t} = e^{−j6} e^{j2t} = e^{j2(t−3)}.
Note that the input e^{j2t} is an eigenfunction with associated eigenvalue H(j2) = e^{−j6}.
b. Expanding the input x(t) = cos(2t) using Euler's relation:
cos(2t) = (1/2) (e^{j2t} + e^{−j2t}).
By the superposition property and the eigenvalue of the system H(s) = e^{−3s}, the system output is written as:
y(t) = (1/2) (e^{−j6} e^{j2t} + e^{j6} e^{−j2t})
= (1/2) (e^{j2(t−3)} + e^{−j2(t−3)})
= cos(2(t − 3)).
Note that the input component e^{j2t} is an eigenfunction with associated eigenvalue H(j2) = e^{−j6}. Because this eigenvalue has unit magnitude (it is a pure phase factor), it corresponds to a phase shift (time delay) in the system's response; an eigenvalue with magnitude different from unity would additionally produce an amplitude variation.
c. Expanding the input signal x(t) = cos(4t) + cos(7t) using Euler’s relation:
x(t) = (1/2) (e^{j4t} + e^{−j4t}) + (1/2) (e^{j7t} + e^{−j7t}).
By the superposition property and the eigenvalue of the system H(s) = e−3s , the system
output is written as:
y(t) = (1/2) (e^{−j12} e^{j4t} + e^{j12} e^{−j4t}) + (1/2) (e^{−j21} e^{j7t} + e^{j21} e^{−j7t})
= (1/2) (e^{j4(t−3)} + e^{−j4(t−3)}) + (1/2) (e^{j7(t−3)} + e^{−j7(t−3)})
= cos(4(t − 3)) + cos(7(t − 3)).
Note that the input component e^{j4t} is an eigenfunction with associated eigenvalue H(j4) = e^{−j12}; similarly, the component e^{j7t} is an eigenfunction with associated eigenvalue H(j7) = e^{−j21}.
The Fourier series representation involves writing a periodic signal as a linear combination
of harmonically-related sinusoids. In addition, it is more convenient to represent a periodic
signal as a linear combination of harmonically-related complex exponentials, rather than
trigonometric functions.
A signal x(t) is periodic if x(t) = x(t + T) for some T > 0. The fundamental period of x(t) is the smallest positive value of T for which this holds, and ω0 = 2π/T is referred to as the fundamental (angular) frequency.
The two basic periodic signals are the real sinusoidal signal
x(t) = cos(ω0 t),
and the periodic complex exponential
x(t) = e^{jω0 t}. (3.18)
Both signals are periodic with fundamental frequency ω0 and fundamental period T = 2π/ω0. Associated with the signal in Eq. 3.18 is the set of harmonically related complex exponentials
φ_k(t) = e^{jkω0 t}, k = 0, ±1, ±2, .... (3.19)
Each of these signals is periodic with period T (although for |k| ≥ 2, the fundamental period of φ_k(t) is a fraction of T). Thus, a linear combination of harmonically related complex exponentials of the form
x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0 t} = ∑_{k=−∞}^{∞} a_k e^{jk(2π/T)t}, (3.20)
is also periodic with period T, where a_k is known as the kth complex Fourier series coefficient.
• k = 0, dc component which is constant.
• k = ±1, both have fundamental frequency equal to ω0 and are collectively referred to as
the fundamental components or the first harmonic components.
• k = ±2, periodic with half the period (twice the frequency) of the fundamental components
and are referred to as the second harmonic components.
• k = ±N, the components are referred to as the N th harmonic components.
Since any periodic waveform x(t) can be expressed as a Fourier series, it follows that the sum of the dc component, the first harmonic component, the second harmonic component, and so on, must produce the waveform. Generally, the sum of two or more sinusoids of different frequencies produces a waveform that is not a sinusoid, as shown in Figure 3.2. However, since each φ_k is periodic with period T, any sum of φ_k's is also periodic with the same period.
Example 3.2.
A periodic signal x(t), with fundamental frequency 2π, is expressed as follows:
x(t) = ∑_{k=−3}^{3} a_k e^{jk2π t}.
Determine the time-domain signal represented by the following Fourier series coefficients:
a_0 = 1, a_1 = a_{−1} = 1/4, a_2 = a_{−2} = 1/2, a_3 = a_{−3} = 1/3.
Solution 3.2.
Rewriting x(t) and collecting the harmonic components that have the same fundamental frequency, we obtain:
x(t) = 1 + (1/4)(e^{j2π t} + e^{−j2π t}) + (1/2)(e^{j4π t} + e^{−j4π t}) + (1/3)(e^{j6π t} + e^{−j6π t}).
Using Euler's relation, we can obtain the time-domain signal x(t) in the form:
x(t) = 1 + (1/2) cos(2π t) + cos(4π t) + (2/3) cos(6π t).
Example 3.3.
Use the definition of the Fourier series to determine the time-domain signals x(t) represented
by the following Fourier series coefficients:
a. X[k] = jδ [k − 1] − jδ [k + 1] + δ [k − 3] + δ [k + 3], with fundamental frequency, ω0 = 2π .
Solution 3.3.
a. With fundamental frequency ω0 = 2π:
x(t) = ∑_{k=−∞}^{∞} X[k] e^{jk2π t}
= j e^{j2π t} − j e^{−j2π t} + e^{j6π t} + e^{−j6π t}
= −2 sin(2π t) + 2 cos(6π t).
b. With fundamental frequency ω0 = 4π:
x(t) = ∑_{k=−∞}^{∞} X[k] e^{jk4π t}.
c. With fundamental frequency ω0 = π/2:
x(t) = ∑_{k=−∞}^{∞} X[k] e^{jk(π/2)t}
= (3/2) e^{−j((π/2)t + π/4)} + (3/2) e^{j((π/2)t + π/4)}
= 3 cos((π/2)t + π/4).
The solutions of Examples 3.2 and 3.3 suggest an alternative form of the Fourier series for real periodic signals. Suppose that x(t) is real and can be represented in the form of Eq. 3.20. Taking the complex conjugate, we obtain
x(t) = x∗(t) = ∑_{k=−∞}^{∞} a∗_k e^{−jkω0 t}, (3.21)
where the asterisk indicates the complex conjugate and we have used the assumption that x(t) is real, that is, x(t) = x∗(t). Replacing k by −k in Eq. 3.21, we have
x(t) = ∑_{k=−∞}^{∞} a∗_{−k} e^{jkω0 t}. (3.22)
Comparing with Eq. 3.20, we require a_k = a∗_{−k}, or equivalently a∗_k = a_{−k}. (3.23)
To derive the alternative forms of the Fourier series, we rewrite the terms in Eq. 3.20 as
x(t) = a_0 + ∑_{k=1}^{∞} [a_k e^{jkω0 t} + a_{−k} e^{−jkω0 t}]. (3.24)
When the signal x(t) is real, the two terms inside the summation are complex conjugates of each other, so this can be expressed as
x(t) = a_0 + ∑_{k=1}^{∞} 2ℜ{a_k e^{jkω0 t}}. (3.26)
If a_k is expressed in polar form as a_k = A_k e^{jθ_k}, then Eq. 3.26 becomes
x(t) = a_0 + 2 ∑_{k=1}^{∞} A_k cos(kω0 t + θ_k). (3.27)
This is one common form of the Fourier series of real periodic signals in continuous time. The other form is obtained by writing a_k in rectangular form, a_k = B_k + jC_k; then Eq. 3.26 becomes
x(t) = a_0 + 2 ∑_{k=1}^{∞} [B_k cos(kω0 t) − C_k sin(kω0 t)]. (3.28)
The coefficients a_k are in general complex. To determine them, multiply both sides of Eq. 3.20 by the complex exponential e^{−jnω0 t}, where n is an integer; we obtain
x(t) e^{−jnω0 t} = ∑_{k=−∞}^{∞} a_k e^{jkω0 t} e^{−jnω0 t}. (3.29)
Integrating both sides from 0 to T,
∫_{0}^{T} x(t) e^{−jnω0 t} dt = ∑_{k=−∞}^{∞} a_k ∫_{0}^{T} e^{jkω0 t} e^{−jnω0 t} dt = ∑_{k=−∞}^{∞} a_k ∫_{0}^{T} e^{j(k−n)ω0 t} dt. (3.30)
The inner integral equals T for k = n and zero otherwise, so that
a_n = (1/T) ∫_{0}^{T} x(t) e^{−jnω0 t} dt. (3.31)
The Fourier series of a periodic continuous-time signal is given by the following synthesis equation:
x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0 t} = ∑_{k=−∞}^{∞} a_k e^{jk(2π/T)t}. (3.32)
The following equation is called the analysis equation, where the set of coefficients a_k are often called the Fourier series coefficients or the spectral coefficients of x(t):
a_k = (1/T) ∫_T x(t) e^{−jkω0 t} dt = (1/T) ∫_T x(t) e^{−jk(2π/T)t} dt. (3.33)
The coefficient a_0 is the dc or constant component, given by Eq. 3.33 with k = 0. The trigonometric Fourier series coefficients a_0, B_k, and C_k are given by:
a_0 = (1/T) ∫_T x(t) dt,
B_k = (2/T) ∫_T x(t) cos(kω0 t) dt,
C_k = (2/T) ∫_T x(t) sin(kω0 t) dt. (3.34)
Example 3.4.
Consider the following signals. Find the complex exponential Fourier series coefficients.
a. x1(t) = sin(ω0 t)
b. x2(t) = cos(ω0 t)
c. x3(t) = cos(2t + π/4)
d. x4(t) = cos(4t) + sin(6t)
e. x5(t) = sin²(t)
Solution 3.4.
One approach to determining the Fourier series coefficients is to apply Eq. 3.33. A simpler approach here is to expand each sinusoidal signal as a linear combination of complex exponentials and identify the Fourier series coefficients by inspection.
a. The Fourier series coefficients for x1(t) = sin(ω0 t):
x1(t) = (1/2j) [e^{jω0 t} − e^{−jω0 t}]
= −(1/2j) e^{−jω0 t} + (1/2j) e^{jω0 t} = ∑_{k=−∞}^{∞} a_k e^{jkω0 t}.
Comparing the right-hand sides of this equation and Eq. 3.32, we have
a_{−1} = −1/(2j), a_1 = 1/(2j),
a_k = 0, k ≠ ±1.
b. The Fourier series coefficients for x2(t) = cos(ω0 t):
x2(t) = (1/2) [e^{jω0 t} + e^{−jω0 t}]
= (1/2) e^{−jω0 t} + (1/2) e^{jω0 t} = ∑_{k=−∞}^{∞} a_k e^{jkω0 t}.
Comparing the right-hand sides of this equation and Eq. 3.32, we have
a_{−1} = 1/2, a_1 = 1/2,
a_k = 0, k ≠ ±1.
c. The Fourier series coefficients for x3(t) = cos(2t + π/4), where the fundamental (angular) frequency is ω0 = 2. Thus,
x3(t) = (1/2) [e^{j(2t+π/4)} + e^{−j(2t+π/4)}]
= (1/2) e^{−jπ/4} e^{−j2t} + (1/2) e^{jπ/4} e^{j2t} = ∑_{k=−∞}^{∞} a_k e^{j2kt},
so that a_1 = (1/2) e^{jπ/4}, a_{−1} = (1/2) e^{−jπ/4}, and a_k = 0 otherwise.
d. The Fourier series coefficients for x4(t) = cos(4t) + sin(6t), where the fundamental period is T0 = π and the fundamental (angular) frequency is ω0 = 2π/T0 = 2. Thus,
x4(t) = (1/2) [e^{j4t} + e^{−j4t}] + (1/2j) [e^{j6t} − e^{−j6t}]
= −(1/2j) e^{−j6t} + (1/2) e^{−j4t} + (1/2) e^{j4t} + (1/2j) e^{j6t} = ∑_{k=−∞}^{∞} a_k e^{j2kt},
so that a_2 = a_{−2} = 1/2, a_3 = 1/(2j), a_{−3} = −1/(2j), and
a_k = 0, otherwise.
e. The Fourier series coefficients for x5(t) = sin²(t), where the fundamental period is T0 = π and the fundamental (angular) frequency is ω0 = 2π/T0 = 2. Thus,
x5(t) = [(e^{jt} − e^{−jt})/(2j)]² = −(1/4) (e^{j2t} − 2 + e^{−j2t})
= −(1/4) e^{−j2t} + (1/2) − (1/4) e^{j2t} = ∑_{k=−∞}^{∞} a_k e^{j2kt},
so that a_0 = 1/2, a_1 = a_{−1} = −1/4, and a_k = 0 otherwise.
Example 3.5.
The periodic square wave signal shown in Figure 3.3 is defined over one period as
x(t) = { 1, |t| < T1; 0, T1 < |t| < T/2 },
where the signal has fundamental period T and fundamental frequency ω0 = 2π/T. Determine the Fourier series coefficients of x(t).
Solution 3.5.
We use Eq. 3.33 to determine the Fourier series coefficients for x(t). Because of the symmetry of x(t) about t = 0, we choose the interval −T/2 ≤ t ≤ T/2 for the integration, although any other interval of length T is equally valid and leads to the same result.
For k = 0,
a_0 = (1/T) ∫_{−T1}^{T1} dt = 2T1/T.
For k ≠ 0, we obtain
a_k = (1/T) ∫_{−T1}^{T1} e^{−jkω0 t} dt = −(1/(jkω0 T)) e^{−jkω0 t} |_{−T1}^{T1}
= (2/(kω0 T)) [(e^{jkω0 T1} − e^{−jkω0 T1})/(2j)]
= 2 sin(kω0 T1)/(kω0 T) = sin(kω0 T1)/(kπ).
Figure 3.4 is a bar graph of the Fourier series coefficients for a fixed T1 and several values of T. For this example, the Fourier coefficients are real, so they can be depicted with a single graph. For complex Fourier coefficients, two graphs, corresponding to the real and imaginary parts or to the amplitude and phase of each coefficient, would be required. For T = 4T1, the signal is a square wave that is unity for half the period and zero for half the period. In this case, ω0 T1 = π/2. The coefficients are regularly spaced samples of the envelope [2 sin(ωT1)]/ω, where the spacing between samples decreases as T increases.
For T = 4T1 and k ≠ 0,
a_k = sin(πk/2)/(kπ),
while a_0 = 1/2,
a_1 = a_{−1} = 1/π, a_3 = a_{−3} = −1/(3π), a_5 = a_{−5} = 1/(5π),
and a_k = 0 for k even and nonzero. Note that sin(πk/2) alternates between ±1 for successive odd values of k.
Figure 3.4 Fourier series coefficients T a_k for the periodic square wave signal with T1 fixed and several values of T: (a) T = 4T1; (b) T = 8T1; (c) T = 16T1.
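The closed-form coefficients of the square wave can be checked against a direct numerical evaluation of the analysis equation. A sketch for T = 4T1 with T1 = 1 (the sampling resolution and tolerance are illustrative assumptions):

```python
# Sketch: compare the square-wave coefficients a_k = sin(k*w0*T1)/(k*pi) with a
# Riemann-sum evaluation of the analysis equation, for T = 4*T1 (T1 = 1).

import cmath
import math

T1, T = 1.0, 4.0
w0 = 2 * math.pi / T
N = 20000
dt = T / N

def x(t):
    return 1.0 if abs(t) < T1 else 0.0        # one period on [-T/2, T/2)

def a(k):
    # (1/T) * sum over samples of x(t) e^{-j k w0 t} dt on [-T/2, T/2)
    return sum(x(-T/2 + n * dt) * cmath.exp(-1j * k * w0 * (-T/2 + n * dt))
               for n in range(N)) * dt / T

for k in range(1, 6):
    assert abs(a(k) - math.sin(k * w0 * T1) / (k * math.pi)) < 1e-3
print("coefficients match the closed form")
```

The small residual error comes only from sampling the discontinuities; refining the grid shrinks it further.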
The criterion used to measure quantitatively the approximation error is the energy in the error over one period:
E_N = ∫_T |e_N(t)|² dt. (3.37)
The particular choice of the coefficients in Eq. 3.35 that minimizes the energy in the error is
a_k = (1/T) ∫_T x(t) e^{−jkω0 t} dt. (3.38)
It can be seen that Eq. 3.38 is identical to the expression used to determine the Fourier series
coefficients. Thus, if x(t) has a Fourier series representation, the best approximation using
only a finite number of harmonically related complex exponentials is obtained by truncating
the Fourier series to the desired number of terms. As N increases, new terms are added and
EN decreases. If x(t) has a Fourier series representation, then the limit of EN as N → ∞ is zero.
One class of periodic signals that are representable through the Fourier series is those signals which have finite energy over a single period,
∫_T |x(t)|² dt < ∞. (3.39)
When this condition is satisfied, we can guarantee that the coefficients obtained from Eq. 3.33 are finite. We define
e(t) = x(t) − ∑_{k=−∞}^{∞} a_k e^{jkω0 t}; (3.40)
then
∫_T |e(t)|² dt = 0. (3.41)
The convergence guaranteed when x(t) has finite energy over a period is very useful. In this case, we may say that x(t) and its Fourier series representation are indistinguishable. An alternative set of conditions, developed by Dirichlet, also guarantees the equivalence of the signal and its Fourier series representation:
Condition 3: In any finite interval of time, there are only a finite number of discontinuities.
Furthermore, each of these discontinuities is finite.
An example that violates the third condition is a function defined as
x(t) = { 1, 0 ≤ t < 4; 1/2, 4 ≤ t < 6; 1/4, 6 ≤ t < 7; 1/8, 7 ≤ t < 7.5; ... },
continuing in this way with infinitely many discontinuities in a single period.
Such examples, shown in Figure 3.5, are generally pathological in nature and consequently do not typically arise in practical contexts.
Summary:
• For a periodic signal that has no discontinuities, the Fourier series representation converges and equals the original signal at every value of t.
• For a periodic signal with a finite number of discontinuities in each period, the Fourier series representation equals the original signal at every value of t except at the isolated points of discontinuity.
Gibbs phenomenon: Near a point where x(t) has a jump discontinuity, the partial sums x_N(t) of a Fourier series exhibit a substantial overshoot, and increasing N does not diminish the amplitude of the overshoot, although with increasing N the overshoot occurs over smaller and smaller intervals. A large enough value of N should therefore be chosen to guarantee that the total energy in these ripples is insignificant.
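The Gibbs phenomenon is easy to observe with the square-wave coefficients derived above. This sketch evaluates partial sums near the jump at t = T1 for T = 4, T1 = 1 (the search grid near the discontinuity is an illustrative assumption):

```python
# Sketch: partial sums x_N(t) of the square wave (T = 4, T1 = 1) overshoot near the
# discontinuity by roughly 9% regardless of N, illustrating the Gibbs phenomenon.

import math

def xN(t, N, T=4.0):
    w0 = 2 * math.pi / T
    s = 0.5                                        # a_0 = 2*T1/T = 1/2
    for k in range(1, N + 1):
        ak = math.sin(k * math.pi / 2) / (k * math.pi)
        s += 2 * ak * math.cos(k * w0 * t)         # real form of the series (Eq. 3.27)
    return s

for N in (20, 100, 500):
    peak = max(xN(1 - d / 10000, N) for d in range(1, 2000))
    print(N, round(peak, 3))   # peak stays near 1.09 as N grows
```

The overshoot amplitude does not shrink as N increases; only the interval over which it occurs narrows, exactly as described above.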
Notation: Suppose x(t) is a periodic signal with period T and fundamental frequency ω0, and let its Fourier series coefficients be denoted by a_k. To signify the pairing of a periodic signal with its Fourier series coefficients, we use the notation x(t) ←FS→ a_k.
3.5.1 Linearity
Let x(t) and y(t) denote two periodic signals with period T and Fourier series coefficients a_k and b_k, that is, x(t) ←FS→ a_k and y(t) ←FS→ b_k. Then
z(t) = A x(t) + B y(t) ←FS→ c_k = A a_k + B b_k. (3.45)
3.5.2 Time Shifting
When a time shift is applied to a periodic signal x(t), the period T of the signal is preserved. If x(t) ←FS→ a_k, then
x(t − t0) ←FS→ e^{−jkω0 t0} a_k, (3.46)
where a_k is the kth Fourier series coefficient of x(t). Note that when a periodic signal is shifted in time, the magnitudes of its Fourier series coefficients remain unchanged.
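The time-shift property can be verified numerically on a simple signal. A sketch for x(t) = cos(t), so ω0 = 1 and a_{±1} = 1/2 (the shift t0 and sampling resolution are illustrative assumptions):

```python
# Sketch: a time shift multiplies each coefficient by e^{-j k w0 t0} but leaves
# its magnitude unchanged; checked on x(t) = cos(t) with w0 = 1, t0 = 0.7.

import cmath
import math

T, N, t0 = 2 * math.pi, 20000, 0.7
dt = T / N

def coeff(f, k):
    # Riemann-sum approximation of (1/T) * integral of f(t) e^{-j k t} over a period
    return sum(f(n * dt) * cmath.exp(-1j * k * n * dt) for n in range(N)) * dt / T

for k in (-1, 1):
    a_shifted = coeff(lambda t: math.cos(t - t0), k)
    predicted = cmath.exp(-1j * k * t0) * coeff(math.cos, k)
    assert abs(a_shifted - predicted) < 1e-6
    assert abs(abs(a_shifted) - 0.5) < 1e-6    # magnitudes are preserved
print("time-shift property verified")
```

Each coefficient of the shifted signal equals the original coefficient rotated by the phase e^{−jkω0 t0}, with unchanged magnitude.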
3.5.3 Time Reversal
Time reversal applied to a continuous-time signal results in a time reversal of the corresponding sequence of Fourier series coefficients. If x(t) ←FS→ a_k, then
x(−t) ←FS→ a_{−k}. (3.47)
If x(t) is even, that is, x(t) = x(−t), the Fourier series coefficients are also even, a_{−k} = a_k. Similarly, if x(t) is odd, that is, x(t) = −x(−t), the Fourier series coefficients are also odd, a_{−k} = −a_k.
3.5.4 Time Scaling

If x(t) is periodic with period T and fundamental frequency ω0 = 2π/T, then x(at), a > 0, is periodic with period T/a and fundamental frequency aω0, and its Fourier series coefficients are unchanged:

x(at) ←FS→ ak. (3.48)

While the Fourier series coefficients have not changed, the Fourier series representation has changed because of the change in the fundamental frequency.
3.5.5 Multiplication
Suppose x(t) and y(t) are two periodic signals with period T and that x(t) ←FS→ ak and y(t) ←FS→ bk. Since the product x(t) y(t) is also periodic with period T, its Fourier series coefficients hk are given by

x(t) y(t) ←FS→ hk = Σ_{i=−∞}^{∞} ai b_{k−i}. (3.49)
The sum on the right-hand side of Eq. 3.49 may be interpreted as the discrete-time convolu-
tion of the sequence representing the Fourier coefficients of x(t) and the sequence represent-
ing the Fourier coefficients of y(t).
3.5.6 Conjugation and Conjugate Symmetry

Taking the complex conjugate of a periodic signal x(t) has the effect of complex conjugation and time reversal on the corresponding Fourier series coefficients. That is, if x(t) ←FS→ ak, then

x*(t) ←FS→ a*−k. (3.50)

If x(t) is real, that is, x(t) = x*(t), the Fourier series coefficients are conjugate symmetric, that is

a−k = a*k. (3.51)
From this expression, we may obtain various symmetry properties for the magnitude, phase, real, and imaginary parts of the Fourier series coefficients of real signals. From Eq. 3.51, we see that:
• If x(t) is real, then a0 is real and |a−k| = |ak|.
• If x(t) is real and even, we have ak = a−k. However, from Eq. 3.51, a−k = a*k, so that ak = a*k: the Fourier series coefficients are real and even.
• If x(t) is real and odd, then the Fourier series coefficients are purely imaginary and odd.
3.5.7 Parseval's Relation

Parseval's relation for continuous-time periodic signals is

(1/T) ∫_T |x(t)|² dt = Σ_{k=−∞}^{∞} |ak|², (3.52)

since

(1/T) ∫_T |ak e^{jkω0 t}|² dt = (1/T) ∫_T |ak|² dt = |ak|², (3.53)

so that |ak|² is the average power in the kth harmonic component. Thus, Parseval's relation states that the total average power in a periodic signal equals the sum of the average powers in all of its harmonic components.
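Parseval's relation can be checked numerically. The sketch below (assuming NumPy is available) uses the symmetric square wave with T = 4 and T1 = 1, whose coefficients a0 = 1/2 and ak = sin(kπ/2)/(kπ) appear in Example 3.6; the signal equals 1 for |t| < 1 within each period, so its average power is 2/4 = 1/2:

```python
import numpy as np

# Coefficients of the symmetric square wave with T = 4, T1 = 1:
# a0 = 1/2 and ak = sin(k*pi/2)/(k*pi) for k != 0.
k = np.arange(1, 200_001)
ak = np.sin(k * np.pi / 2) / (k * np.pi)

# Parseval: average power = sum of |ak|^2 over k = 0, +-1, +-2, ...
total = 0.25 + 2.0 * np.sum(ak**2)
print(round(float(total), 4))  # -> 0.5, the average power of the signal
```

The truncated sum agrees with the average power to within the tail of the series.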
3.6 Summary of Properties of the Continuous-Time Fourier Series

The following table summarizes the properties of the continuous-time Fourier series presented in this section.
Example 3.6.
Consider the signal g(t) with a fundamental period of 4, shown in figure 3.7. The Fourier series representation can be obtained directly using the analysis equation 3.33. We may also use the relation of g(t) to the symmetric periodic square wave x(t):

g(t) = x(t − 1) − 1/2.
Differentiation: dx(t)/dt ←FS→ jkω0 ak = jk(2π/T) ak
Integration: ∫_{−∞}^{t} x(τ) dτ ←FS→ (1/(jkω0)) ak = (1/(jk(2π/T))) ak
Conjugate Symmetry for Real Signals: x(t) real ←FS→ ak = a*−k, Re{ak} = Re{a−k}, Im{ak} = −Im{a−k}, |ak| = |a−k|, ∠ak = −∠a−k
Real and Even Signals: x(t) real and even ←FS→ ak real and even
Real and Odd Signals: x(t) real and odd ←FS→ ak purely imaginary and odd
Even-Odd Decomposition of Real Signals: xe(t) = Ev[x(t)] [x(t) real] ←FS→ Re{ak}; xo(t) = Od[x(t)] [x(t) real] ←FS→ jIm{ak}
Solution 3.6.
We use Eq. 3.33 to determine the Fourier series coefficients of x(t). Because of the symmetry of x(t) about t = 0, we choose the interval −T/2 ≤ t ≤ T/2 for the integration, although any other interval of length T is equally valid and leads to the same result.
The time-shift property indicates that if the Fourier series coefficients of x(t) are denoted by ak, the Fourier series coefficients of x(t − 1) may be expressed as

bk = ak e^{−jkπ/2}.
The Fourier coefficients of the dc offset in g(t), i.e., the term −1/2 on the right-hand side of the given equation, are

ck = { 0,    for k ≠ 0
     { −1/2, for k = 0.

Applying the linearity property, we conclude that the coefficients of g(t) can be expressed as

dk = { ak e^{−jkπ/2}, for k ≠ 0
     { a0 − 1/2,      for k = 0.

Replacing ak = sin(kπ/2)/(kπ) and a0 = 1/2, we have

dk = { (sin(kπ/2)/(kπ)) e^{−jkπ/2}, for k ≠ 0
     { 0,                           for k = 0.
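The result can be checked numerically (a sketch assuming NumPy is available; the grid size is arbitrary). Here g(t) is built directly as x(t − 1) − 1/2, which equals 1/2 on 0 < t < 2 and −1/2 on 2 < t < 4, and its coefficients are approximated by a Riemann sum of the analysis equation:

```python
import numpy as np

T = 4.0
t = np.linspace(0.0, T, 400_000, endpoint=False)
g = np.where(t < 2.0, 0.5, -0.5)   # g(t) = x(t - 1) - 1/2 over one period

def fs_coeff(sig, k):
    # Riemann-sum approximation of (1/T) * integral of sig(t) e^{-j k w0 t} dt
    return np.mean(sig * np.exp(-1j * k * 2.0 * np.pi / T * t))

for k in (1, 2, 3):
    dk_num = fs_coeff(g, k)
    dk_formula = np.sin(k * np.pi / 2) / (k * np.pi) * np.exp(-1j * k * np.pi / 2)
    print(k, np.round(dk_num, 4), np.round(dk_formula, 4))
```

The even-k coefficients come out (numerically) zero, consistent with the sin(kπ/2) factor.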
Example 3.7.
The triangular wave signal x(t) with period T = 4 and fundamental frequency ω0 = π/2 is shown in figure 3.8.
Solution 3.7.
The derivative of this signal is the signal g(t) of the preceding example. Denoting the Fourier series coefficients of g(t) by dk and those of x(t) by ek, the differentiation property gives

dk = jk(π/2) ek,

so that ek = dk/(jk(π/2)) = 2dk/(jkπ) for k ≠ 0. For k = 0, e0 can be calculated simply by computing the area under one period of the signal and dividing by the length of the period, that is

e0 = 1/2.
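As a numerical sketch (assuming NumPy is available, and assuming the triangular wave rises linearly from 0 at t = 0 to a peak of 1 at t = 2 and back to 0 at t = 4, so that its derivative is the ±1/2 square wave g(t) of Example 3.6), the differentiation property dk = jk(π/2) ek can be verified directly:

```python
import numpy as np

T = 4.0
w0 = np.pi / 2                                   # fundamental frequency 2*pi/T
t = np.linspace(0.0, T, 400_000, endpoint=False)
x = np.where(t < 2.0, t / 2.0, 2.0 - t / 2.0)    # triangular wave, peak 1
g = np.where(t < 2.0, 0.5, -0.5)                 # its derivative

for k in (1, 2, 3):
    ek = np.mean(x * np.exp(-1j * k * w0 * t))   # coefficients of x(t)
    dk = np.mean(g * np.exp(-1j * k * w0 * t))   # coefficients of g(t)
    print(k, abs(dk - 1j * k * w0 * ek))         # ~ 0: differentiation property
```

The average value of the sampled wave also reproduces e0 = 1/2.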
Example 3.8.
Consider the properties of the Fourier series representation of a periodic impulse train. The impulse train with period T may be expressed as

x(t) = Σ_{k=−∞}^{∞} δ(t − kT).

Solution 3.8.
We use Eq. 3.33 and select the integration interval to be −T/2 ≤ t ≤ T/2, avoiding the placement of impulses at the integration limits:

ak = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−j(2π/T)t} dt = 1/T.
All the Fourier series coefficients of this periodic impulse train are identical. These coefficients are also real and even. The periodic impulse train has a straightforward relation to square-wave signals such as g(t) in figure 3.10. The derivative of g(t) is the signal q(t) shown in figure 3.11, which can also be interpreted as the difference of two shifted versions of the impulse train x(t), that is

q(t) = x(t + T1) − x(t − T1).
Based on the time-shifting and linearity properties, we may express the Fourier coefficients bk of q(t) in terms of the Fourier series coefficients ak of x(t); that is,

bk = e^{jkω0T1} ak − e^{−jkω0T1} ak = (1/T)[e^{jkω0T1} − e^{−jkω0T1}] = 2j sin(kω0T1)/T.
Finally, since q(t) is the derivative of g(t), we can use the differentiation property to get

bk = jkω0 ck,

ck = bk/(jkω0) = 2j sin(kω0T1)/(jkω0T) = sin(kω0T1)/(kπ), k ≠ 0,

where we used ω0T = 2π. Since c0 is just the average value of g(t) over one period, it can be found by inspection from figure 3.10:

c0 = 2T1/T.
Example 3.9.
Suppose we are given the following facts about a signal x(t):
1. x(t) is a real signal.
2. x(t) is periodic with period T = 4, and it has Fourier series coefficients ak.
3. ak = 0 for |k| > 1.
4. The signal with Fourier series coefficients bk = e^{−jkπ/2} a−k is odd.
5. (1/4) ∫_4 |x(t)|² dt = 1/2.
Solution 3.9.
Let us show that this information is sufficient to determine the signal x(t) to within a sign factor.
a. According to Fact 3, x(t) has at most three nonzero Fourier series coefficients ak: a0, a1 and a−1. Since the fundamental frequency is ω0 = 2π/T = 2π/4 = π/2, it follows that

x(t) = a0 + a1 e^{jπt/2} + a−1 e^{−jπt/2}.

b. Since x(t) is real (Fact 1), the symmetry property gives a0 real and a−1 = a*1; consequently,

x(t) = a0 + a1 e^{jπt/2} + [a1 e^{jπt/2}]* = a0 + 2Re{a1 e^{jπt/2}}.
c. Based on Fact 4 and the time-reversal property, we note that a−k corresponds to x(−t). Also, by the time-shifting property, multiplication of the kth Fourier series coefficient by e^{−jkπ/2} = e^{−jkω0(1)} corresponds to the signal being shifted by 1 to the right. We conclude that the coefficients bk correspond to the signal x(−(t − 1)) = x(−t + 1), which, according to Fact 4, must be odd. Since x(t) is real, x(−t + 1) must also be real. So, based on the symmetry property, the Fourier series coefficients of x(−t + 1) must be purely imaginary and odd. Thus, b0 = 0 and b−1 = −b1.
d. Since time-reversal and time-shift operations cannot change the average power per period, Fact 5 holds even if x(t) is replaced by x(−t + 1). That is,

(1/4) ∫_4 |x(−t + 1)|² dt = 1/2.
Example 3.10.
Compute the trigonometric Fourier series of the square waveform signal x(t) of figure 3.12.
Solution 3.10.
The trigonometric series will consist of sine terms only, because this waveform is an odd function. Moreover, only odd harmonics will be present, since this waveform has half-wave symmetry. However, we will compute all coefficients to verify this. Also, for brevity, we will assume that ω = 1. The ak coefficients are found from
ak = (1/π) ∫_0^{2π} x(t) cos(kt) dt = (1/π) [∫_0^{π} A cos(kt) dt + ∫_π^{2π} −A cos(kt) dt]
   = (A/(kπ)) [sin(kt)|_0^{π} − sin(kt)|_π^{2π}] = (A/(kπ)) [2 sin(kπ) − sin(2kπ)],

and since k is an integer (positive or negative) or zero, the terms [2 sin(kπ) − sin(2kπ)] are zero and therefore all ak coefficients are zero, as expected, since the square waveform has odd symmetry. Also, by inspection, the average DC value is zero, but if we attempt to verify it,
we will get the indeterminate form 0/0. To work around this problem, we evaluate a0 directly. Then,

a0 = (1/π) [∫_0^{π} A dt + ∫_π^{2π} −A dt] = (A/π)(π − 0 − 2π + π) = 0.
The bk coefficients are found from

bk = (1/π) ∫_0^{2π} x(t) sin(kt) dt = (1/π) [∫_0^{π} A sin(kt) dt + ∫_π^{2π} −A sin(kt) dt]
   = (A/(kπ)) [−cos(kt)|_0^{π} + cos(kt)|_π^{2π}] = (A/(kπ)) [1 − 2cos(kπ) + 1].

For k even, cos(kπ) = 1, so

bk = (A/(kπ)) (1 − 2 + 1) = 0,

as expected, since the square waveform has half-wave symmetry. For k odd, cos(kπ) = −1, so

bk = (A/(kπ)) (1 + 2 + 1) = 4A/(kπ),

and thus

b1 = 4A/π, b3 = 4A/(3π), b5 = 4A/(5π), ...
Therefore, the trigonometric Fourier series for the square waveform with odd symmetry is

x(t) = (4A/π) [sin(ωt) + (1/3) sin(3ωt) + (1/5) sin(5ωt) + ...] = (4A/π) Σ_{k odd} (1/k) sin(kωt).
If the given waveform has half-wave symmetry, and it is also an odd or an even function, we can integrate from 0 to π/2 and multiply the integral by 4.
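The series can be checked by summing it numerically (a sketch assuming NumPy is available and, as in the solution, ω = 1): away from the jumps, the partial sum settles to +A on the interval (0, π).

```python
import numpy as np

A = 1.0
t = np.linspace(0.3, np.pi - 0.3, 1_000)   # keep clear of the jumps at 0 and pi
x = sum(4.0 * A / (k * np.pi) * np.sin(k * t) for k in range(1, 2000, 2))
err = float(np.max(np.abs(x - A)))
print(err)   # small: the series converges to +A on (0, pi)
```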
Problems of Chapter 3
3.1. Using the definition of Fourier series, express the following terms in polar notation:
a. e jπ /4 + e− jπ /8
b. e j2 + 1 + j
c. 1 + e j4
3.2. By inspection, compute the complex exponential Fourier series coefficients of the fol-
lowing signals for all values of k:
a. x(t) = cos(5t − π /4)
3.3. A continuous-time periodic signal x(t) has fundamental period T = 8, and its non-zero Fourier series coefficients are

a1 = a−1 = 2, a3 = a*−3 = 4j.
3.5. Find the trigonometric Fourier coefficients (a0 , Bk , and Ck ) of the following periodic
square waveform signals:
a. The periodic pulses shown in Figure P3.5a:
x(t) = { 2, 0 < t < 1
       { 0, 1 < t < 5
Figure P3.5a
b. The given periodic pulses shown in Figure P3.5b:
x(t) = { 1, 0 < t < T0/2
       { 0, T0/2 < t < T0
Figure P3.5b
c. The given periodic pulses shown in Figure P3.5c:
x(t) = { 1,  0 < t < T0/2
       { −1, T0/2 < t < T0
Figure P3.5c
3.6. Find the exponential form of the Fourier series of the periodic signals given in problem 3.5.
3.7. Find the trigonometric Fourier coefficients (a0 , Bk , and Ck ) of the following periodic
sawtooth waveform signals:
a. The periodic sawtooth signal shown in Figure P3.7a.
Figure P3.7a
b. The periodic signal shown in Figure P3.7b:
x(t) = { t, 0 < t < 1
       { 0, 1 < t < 2
Figure P3.7b
c. The periodic signal shown in Figure P3.7c:
x(t) = { 2(1 − t), 0 < t < 1
       { 0,        1 < t < 2
Figure P3.7c
d. The periodic signal shown in Figure P3.7d:
x(t) = t, −1 < t < 1
Figure P3.7d
3.8. Find the exponential form of the Fourier series of the periodic signals given in problem 3.7.
3.10. Consider the periodic square wave x(t) shown in Figure P3.10.
a. Determine the complex exponential Fourier series of x(t).
References
1. A.V. Oppenheim, A.S. Willsky, S.H. Nawab. Signals and Systems. 2nd Edition, New Jersey, USA: Prentice
Hall, 1997.
2. S. Haykin and B. Van Veen. Signals and Systems. 2nd Edition, New Jersey, USA: John Wiley & Sons,
2003.
Chapter 4
Continuous-Time Fourier Transform
Abstract:
The Fourier transform converts time-domain signals into frequency-domain (or spectral) representations. In addition to providing spectral representations of signals, the Fourier transform is also essential for describing certain types of systems and their properties in the frequency domain. In this chapter we introduce the Fourier transform in the context of continuous-time signals and systems.
4.1 Introduction
We begin by revisiting the Fourier series representation of the continuous-time periodic square wave, defined over one period as

x(t) = { 1, |t| < T1
       { 0, T1 < |t| < T/2,

and repeating periodically with period T, as shown in figure 4.10.
2sin(kω0 T1 ) 2sin(ω T1 )
ak = = , (4.1)
k ω0 T ωT ω =kω0
where 2sin(ω T1 )/ω represents the envelope of Tak and the Fourier series coefficients ak are
equally spaced sample of this envelope as interpreted in Eq. 4.2.
In figure 4.2, as T increases (i.e., as the fundamental frequency decreases), the envelope is sampled with a closer and closer spacing. As T becomes arbitrarily large, the original periodic square wave approaches a rectangular pulse. Also, the values T ak become more and more closely spaced samples of the envelope, so that as T → ∞ the Fourier series coefficients approach the envelope function.
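This sampling relationship is easy to verify numerically (a sketch assuming NumPy is available): for the square wave with T1 = 1, the values T·ak fall on the envelope 2 sin(ωT1)/ω regardless of T.

```python
import numpy as np

T1 = 1.0
envelope = lambda w: 2.0 * np.sin(w * T1) / w

for T in (4.0, 8.0, 16.0):
    w0 = 2.0 * np.pi / T
    k = np.arange(1, 6)
    ak = 2.0 * np.sin(k * w0 * T1) / (k * w0 * T)          # Eq. 4.1
    gap = float(np.max(np.abs(T * ak - envelope(k * w0))))
    print(T, gap)   # ~ 0: T*ak are samples of the envelope at w = k*w0
```

Doubling T halves the sample spacing w0 while leaving the envelope fixed.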
This example illustrates the basic idea behind Fourier's development of a representation for non-periodic signals. From a frequency-domain viewpoint, a non-periodic signal typically contains frequency components at all frequencies, not just at integer multiples of a fundamental frequency. To derive the Fourier transform for non-periodic signals, consider a signal x(t) with finite duration, that is, x(t) = 0 for |t| > T1, as illustrated in figure 4.3(a). From this non-periodic signal, we can construct a periodic signal x̃(t) for which x(t) is one period, as indicated in figure 4.3(b). If the period T is large, then x̃(t) is identical to x(t) over a long interval, and as T → ∞, x̃(t) = x(t) for any finite value of t.
The Fourier series representation of x̃(t), carried out over the interval −T/2 ≤ t ≤ T/2, is

x̃(t) = Σ_{k=−∞}^{∞} ak e^{jkω0t}, (4.3)

ak = (1/T) ∫_{−T/2}^{T/2} x̃(t) e^{−jkω0t} dt. (4.4)
Since x̃(t) = x(t) for |t| < T/2, and since x(t) = 0 outside this interval, we have

ak = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jkω0t} dt = (1/T) ∫_{−∞}^{∞} x(t) e^{−jkω0t} dt.
Figure 4.2 The Fourier series coefficient and their envelope for the periodic square wave for several values
of T : (a) T = 4T1 (b) T = 8T1 (c) T = 16T1 .
Figure 4.3 (a) Non-periodic signal x(t); (b) periodic signal x̃(t), constructed to be equal to x(t) over one
period.
Defining the envelope X(jω) of T ak as

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt, (4.5)

the Fourier series coefficients ak for x(t) can be written as evaluations of this envelope function:

ak = (1/T) X(jkω0). (4.6)

Combining Eqs. 4.3 and 4.6, and using ω0 = 2π/T, we can express x̃(t) as

x̃(t) = Σ_{k=−∞}^{∞} (1/T) X(jkω0) e^{jkω0t} = (1/2π) Σ_{k=−∞}^{∞} X(jkω0) e^{jkω0t} ω0. (4.7)
As T → ∞, x̃(t) = x(t) and consequently Eq. 4.7 becomes a representation of x(t). In addition, ω0 → 0 as T → ∞, and the right-hand side of Eq. 4.7 becomes an integral. Finally, Eqs. 4.5 and 4.7 become the following Fourier transform pair:

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω, (4.8)

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt. (4.9)
The function X(jω) in Eq. 4.9 describes the frequency content of a non-periodic signal and is defined as the Fourier transform of x(t). The inverse Fourier transform is given by Eq. 4.8, which is analogous to the expression of a periodic signal in terms of its Fourier series.
Eq. 4.8 is valid (it is a valid representation of the original signal x(t)) if the signal x(t) has finite energy, that is, if it is square integrable:

∫_{−∞}^{∞} |x(t)|² dt < ∞. (4.10)

If e(t) denotes the error between the representation in Eq. 4.8 and x(t), then the error contains no energy:

∫_{−∞}^{∞} |e(t)|² dt = 0. (4.11)
The derivation of the Fourier transform suggests a set of conditions paralleling those required for the convergence of Fourier series. An alternative set of conditions that are sufficient to ensure convergence is:
• Condition 1: x(t) must be absolutely integrable, that is,

∫_{−∞}^{∞} |x(t)| dt < ∞.

• Condition 2: x(t) must have a finite number of maxima and minima within any finite interval of time.
• Condition 3: x(t) must have a finite number of discontinuities within any finite interval of time. Furthermore, each of these discontinuities must be finite.
Example 4.1.
Find the Fourier transform of the signal x(t) = e^{−at} u(t), a > 0, shown in figure 4.4.
Figure 4.4 x(t) = e^{−at} u(t), a > 0.
Solution 4.1.

X(jω) = ∫_0^{∞} e^{−at} e^{−jωt} dt
      = ∫_0^{∞} e^{−(a+jω)t} dt
      = −(1/(a + jω)) e^{−(a+jω)t} |_0^{∞}
      = 1/(a + jω), a > 0.
The Fourier transform can be plotted in terms of its magnitude, as shown in figure 4.5.
Figure 4.5 The Fourier transform of the signal x(t) = e^{−at} u(t), a > 0.
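The closed form can be spot-checked by numerical integration (a sketch assuming NumPy and SciPy are available; the values of a and ω are arbitrary). Since X(jω) = ∫_0^∞ e^{−at}(cos ωt − j sin ωt) dt, the real and imaginary parts can be integrated separately with scipy.integrate.quad:

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
for w in (0.0, 1.0, 5.0):
    re = quad(lambda t: np.exp(-a * t) * np.cos(w * t), 0, np.inf)[0]
    im = -quad(lambda t: np.exp(-a * t) * np.sin(w * t), 0, np.inf)[0]
    X_ref = 1.0 / (a + 1j * w)              # closed form 1/(a + jw)
    print(w, round(re, 6), round(im, 6), np.round(X_ref, 6))
```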
Example 4.2.
Find the Fourier transform of the signal x(t) = e^{−a|t|}, a > 0.
Solution 4.2.
X(jω) = ∫_{−∞}^{∞} e^{−a|t|} e^{−jωt} dt
      = ∫_{−∞}^{0} e^{at} e^{−jωt} dt + ∫_0^{∞} e^{−at} e^{−jωt} dt
      = ∫_{−∞}^{0} e^{(a−jω)t} dt + ∫_0^{∞} e^{−(a+jω)t} dt
      = (1/(a − jω)) e^{(a−jω)t} |_{−∞}^{0} − (1/(a + jω)) e^{−(a+jω)t} |_0^{∞}
      = 1/(a − jω) + 1/(a + jω)
      = 2a/(a² + ω²).
The signal and the Fourier transform are sketched in figure 4.6
Example 4.3.
Find the Fourier transform of the following signal
x(t) = δ (t).
Solution 4.3.

X(jω) = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt = 1.
As shown in figure 4.7, the unit impulse has a Fourier transform consisting of equal contri-
butions at all frequencies.
Figure 4.7 (a) Unit impulse δ (t); (b) Fourier transform X( jω ) of unit impulse.
Example 4.4.
Find the Fourier transform of the rectangular pulse signal
{
1, |t| < T1
x(t) =
0, |t| > T1 ,
Solution 4.4.

X(jω) = ∫_{−T1}^{T1} e^{−jωt} dt = (−1/(jω)) e^{−jωt} |_{−T1}^{T1}
      = (−1/(jω)) [e^{−jωT1} − e^{jωT1}]
      = 2 sin(ωT1)/ω = 2T1 sinc(ωT1/π),

where sinc(x) = sin(πx)/(πx). The signal and the Fourier transform are sketched in figure 4.8.
Figure 4.8 The Fourier transform of signal x(t) = 1, −T1 < t < T1 .
If x(t) is reconstructed from only a finite-length interval of frequencies,

x̂(t) = (1/2π) ∫_{−W}^{W} [2 sin(ωT1)/ω] e^{jωt} dω,

then as W → ∞, x̂(t) converges to x(t) everywhere except at the discontinuities t = ±T1, where x̂(t) converges to 1/2, the average of the values of x(t) on both sides of the discontinuity. Moreover, x̂(t) exhibits ripples near the discontinuities (the Gibbs phenomenon): the peak values of these ripples do not decrease as W increases, although the ripples become compressed toward the discontinuity and the energy in them converges to zero.
Example 4.5.
Find the inverse Fourier transform of the following signal
{
1, |ω | < W
X( jω ) =
0, |ω | > W,
Solution 4.5.
The inverse Fourier transform is
x(t) = (1/2π) ∫_{−W}^{W} e^{jωt} dω = (1/2π)(1/(jt)) e^{jωt} |_{−W}^{W}
     = (1/(2πjt)) [e^{jWt} − e^{−jWt}]
     = sin(Wt)/(πt) = (W/π) sinc(Wt/π).
The signal and the Fourier transform are sketched in figure 4.9
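The inverse transform can be verified numerically (a sketch assuming NumPy and SciPy are available; W and the sample times are arbitrary choices). Because the imaginary part of e^{jωt} is odd in ω, only the cosine part contributes to the integral:

```python
import numpy as np
from scipy.integrate import quad

W = 2.0
for t in (0.5, 1.0, 3.0):
    # x(t) = (1/2pi) * integral_{-W}^{W} e^{j w t} dw; the odd imaginary part cancels
    x_num = quad(lambda w: np.cos(w * t), -W, W)[0] / (2.0 * np.pi)
    x_ref = np.sin(W * t) / (np.pi * t)
    print(t, round(x_num, 6), round(x_ref, 6))
```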
Comparing the results of the preceding example and this example, we have

square wave ←FT→ sinc function.

This means that if a signal is a square wave in the time domain, its Fourier transform is a sinc function; conversely, if the signal in the time domain is a sinc function, its Fourier transform is a square wave. This relationship is an instance of the duality property. We also note that as the width of X(jω) increases, its inverse Fourier transform x(t) is compressed; as W → ∞, X(jω) converges to an impulse. The transform pair for several different values of W is shown in the figure.

4.2 Fourier Transform for Periodic Signals
We now construct the Fourier transform of a periodic signal directly from its Fourier series representation. The resulting transform consists of impulses in the frequency domain, with the area of each impulse proportional to the corresponding Fourier series coefficient. Consider a signal x(t) whose Fourier transform X(jω) is a single impulse of area 2π at ω = ω0:

X(jω) = 2πδ(ω − ω0). (4.12)
To determine the signal x(t), we apply the inverse Fourier transform relation:

x(t) = (1/2π) ∫_{−∞}^{∞} 2πδ(ω − ω0) e^{jωt} dω = e^{jω0t}.

More generally, if a periodic signal x(t) has the Fourier series representation x(t) = Σ_{k=−∞}^{∞} ak e^{jkω0t}, then its Fourier transform is a train of impulses:

X(jω) = Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0). (4.13)
Example 4.6.
The Fourier series coefficients ak for the square wave shown in figure 4.10 are

ak = sin(kω0T1)/(kπ).
Solution 4.6.
The Fourier transform of this signal is

FT{x(t)} = X(jω) = Σ_{k=−∞}^{∞} (2 sin(kω0T1)/k) δ(ω − kω0).
Example 4.7.
Calculate the Fourier transforms of the signals x1(t) = sin(ω0t) and x2(t) = cos(ω0t).
Solution 4.7.
The signal x1(t) = sin(ω0t) can be written as

x1(t) = (1/(2j)) [e^{jω0t} − e^{−jω0t}],

so its Fourier series coefficients are

a1 = 1/(2j), a−1 = −1/(2j), ak = 0 for k ≠ ±1,

and, from Eq. 4.13,

X1(jω) = jπ [δ(ω + ω0) − δ(ω − ω0)].

Similarly,

x2(t) = cos(ω0t) = (1/2) [e^{jω0t} + e^{−jω0t}],

so

a1 = a−1 = 1/2, ak = 0 for k ≠ ±1,

and

X2(jω) = π [δ(ω + ω0) + δ(ω − ω0)].
Example 4.8.
Calculate the Fourier transform of the periodic impulse train

x(t) = Σ_{k=−∞}^{∞} δ(t − kT).
Solution 4.8.
The Fourier series coefficients of the signal are

ak = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jkω0t} dt = 1/T.

Every Fourier coefficient of the periodic impulse train has the same value, 1/T. Substituting this value of ak in Eq. 4.13, we find the Fourier transform

X(jω) = Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0) = (2π/T) Σ_{k=−∞}^{∞} δ(ω − 2πk/T).

Thus, the Fourier transform of a periodic impulse train in the time domain with period T is a periodic impulse train in the frequency domain with period 2π/T.
4.3 Properties of Continuous-Time Fourier Transform

The properties of the Fourier transform are crucial in the context of signals and signal processing. These properties provide valuable insight into how operations on signals in the time domain are described in the frequency domain.
4.3.1 Linearity
Suppose we have two functions x1(t) and x2(t), with Fourier transforms X1(jω) and X2(jω), respectively:

x1(t) ←FT→ X1(jω),
x2(t) ←FT→ X2(jω).
For any constants a and b, the Fourier transform of ax1(t) + bx2(t) follows directly from the linearity of integration:

ax1(t) + bx2(t) ←FT→ aX1(jω) + bX2(jω). (4.14)

4.3.2 Time and Frequency Shifting

If x(t) has a Fourier transform X(jω), then

x(t − t0) ←FT→ e^{−jωt0} X(jω),
x(t + t0) ←FT→ e^{jωt0} X(jω). (4.16)
An alternative way to establish the time-shifting property is to consider Eq. 4.8:

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω.

Replacing t by (t − t0), we obtain

x(t − t0) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jω(t−t0)} dω = (1/2π) ∫_{−∞}^{∞} [e^{−jωt0} X(jω)] e^{jωt} dω,

so that

FT{x(t − t0)} = e^{−jωt0} X(jω).
Frequency shifting is the basis of every audio and video transmitter. If x(t) has a Fourier transform X(jω), then

x(t) e^{jω0t} ←FT→ X(j(ω − ω0)). (4.17)

Note that the frequency-shifting property is the dual of the time-shifting property of Eq. 4.16: a shift in the frequency domain is equivalent to multiplication by a complex exponential in the time domain. The Fourier transform of x(t)e^{jω0t} is
FT{x(t) e^{jω0t}} = ∫_{−∞}^{∞} [x(t) e^{jω0t}] e^{−jωt} dt = ∫_{−∞}^{∞} x(t) e^{−j(ω−ω0)t} dt = X(j(ω − ω0)).
The product x(t)e^{jω0t} is called complex modulation: the multiplication of a signal x(t) by a complex exponential e^{jω0t}. Thus, Eq. 4.17 shows that complex modulation in the time domain corresponds to a shift of X(jω) in the frequency domain.
It is important to realize that shifting in time does not change the frequency content of the signal: the magnitude spectra of the original and shifted signals are identical. The effect of a time shift appears only in the phase spectrum, namely the term −ωt0, which is a linear function of ω.
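The discrete-time analogue of this property can be demonstrated with an FFT (a minimal sketch assuming NumPy is available; the pulse shape and the shift of 57 samples are arbitrary choices). A circular shift leaves the magnitude spectrum unchanged and introduces a phase that is linear in the frequency index:

```python
import numpy as np

n = 1024
t0 = 57                                  # shift, in samples
t = np.arange(n)
x = np.exp(-0.01 * (t - 200.0) ** 2)     # a smooth pulse
xs = np.roll(x, t0)                      # circularly shifted copy

X, Xs = np.fft.fft(x), np.fft.fft(xs)
k = np.arange(n)

mag_err = float(np.max(np.abs(np.abs(Xs) - np.abs(X))))
phase_err = float(np.max(np.abs(Xs - X * np.exp(-2j * np.pi * k * t0 / n))))
print(mag_err, phase_err)   # both at floating-point round-off level
```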
4.3.3 Conjugation and Conjugate Symmetry

x*(t) ←FT→ X*(−jω). (4.18)

To verify this property, consider X*(jω):

X*(jω) = [∫_{−∞}^{∞} x(t) e^{−jωt} dt]* = ∫_{−∞}^{∞} x*(t) e^{jωt} dt,

and replacing ω by −ω,

X*(−jω) = ∫_{−∞}^{∞} x*(t) e^{−jωt} dt = FT{x*(t)}.
If x(t) is real, then X(− jω ) = X ∗ ( jω ). Note that if x(t) is both real and even, then X( jω ) is
real and even. Similarly, if x(t) is both real and odd, then X( jω ) is purely imaginary and odd.
A real function x(t) can be expressed as the sum of an even and an odd function, x(t) = xe(t) + xo(t), and from the linearity property

X(jω) = FT{xe(t)} + FT{xo(t)}.

Since FT{xe(t)} is a real function and FT{xo(t)} is purely imaginary when x(t) is real, we conclude that

x(t) ←FT→ X(jω),
Ev{x(t)} ←FT→ Re{X(jω)},
Od{x(t)} ←FT→ jIm{X(jω)}.
4.3.4 Differentiation and Integration

If x(t) has a Fourier transform X(jω), then the Fourier transform of the derivative of x(t) is given by

dx(t)/dt ←FT→ jω X(jω). (4.20)
Differentiation in the time domain thus corresponds to multiplication of X(jω) by jω in the frequency domain. To verify the differentiation property, we compute the Fourier transform using integration by parts:

FT{dx(t)/dt} = ∫_{−∞}^{∞} (dx(t)/dt) e^{−jωt} dt
             = x(t) e^{−jωt} |_{−∞}^{∞} − ∫_{−∞}^{∞} x(t)(−jω) e^{−jωt} dt
             = jω ∫_{−∞}^{∞} x(t) e^{−jωt} dt = jω X(jω),

where the boundary term vanishes because x(t) → 0 as |t| → ∞ for an absolutely integrable signal.
Differentiation in the frequency domain corresponds to multiplication by (−jt) in the time domain:

(−jt) x(t) ←FT→ dX(jω)/dω. (4.21)

This can be verified by differentiating the analysis equation:

dX(jω)/dω = (d/dω) [∫_{−∞}^{∞} x(t) e^{−jωt} dt]
          = ∫_{−∞}^{∞} x(t) (d/dω) e^{−jωt} dt = ∫_{−∞}^{∞} [(−jt) x(t)] e^{−jωt} dt
          = FT{(−jt) x(t)}.
If the Fourier transform of x(t) is X(jω), then the Fourier transform of the running integral of x(t) is given by

∫_{−∞}^{t} x(τ) dτ ←FT→ X(jω)/(jω) + π X(0) δ(ω). (4.22)

Since

∫_{−∞}^{t} x(τ) dτ = x(t) ∗ u(t),

the time convolution theorem of Section 4.3.8 gives

FT{x(t) ∗ u(t)} = X(jω) [πδ(ω) + 1/(jω)] = π X(jω) δ(ω) + X(jω)/(jω) = π X(0) δ(ω) + X(jω)/(jω),

since X(jω)δ(ω) = X(0)δ(ω).
4.3.5 Time Scaling

x(at) ←FT→ (1/|a|) X(jω/a). (4.23)

Scaling the time variable t by a factor a causes an inverse scaling of the frequency variable ω by 1/a, together with an amplitude scaling by 1/|a|. This implies that time compression of a signal (|a| > 1) results in spectral expansion, and time expansion (|a| < 1) results in spectral compression.
Time scaling can be verified using the definition. If a > 0, substituting τ = at gives

FT{x(at)} = ∫_{−∞}^{∞} x(at) e^{−jωt} dt = ∫_{−∞}^{∞} x(τ) e^{−j(ω/a)τ} (1/a) dτ
          = (1/a) ∫_{−∞}^{∞} x(τ) e^{−j(ω/a)τ} dτ = (1/a) X(jω/a).
If a < 0, it is convenient to write a = −|a| and use the substitution τ = at = −|a|t, which reverses the limits of integration:

FT{x(at)} = ∫_{−∞}^{∞} x(at) e^{−jωt} dt = (1/|a|) ∫_{−∞}^{∞} x(τ) e^{−j(ω/a)τ} dτ = (1/|a|) X(jω/a).
4.3.6 Duality
An important property of the Fourier transform is the duality between the time and frequency domains. The duality property allows us to obtain both members of a dual pair of Fourier transforms from one evaluation; it stems from the fact that the analysis and synthesis equations look almost identical, differing only by the factor 1/(2π) and the sign of the exponent in the integral. If the Fourier transform of x(t) is X(jω), then the Fourier transform of X(t) is 2πx(−ω); that is,

X(t) ←FT→ 2π x(−ω). (4.24)
For any transform pair, there is a dual pair with time and frequency variables interchanged.
Hence, the duality property can be verified from an inspection of the Fourier and inverse
Fourier transform expression.
x(t) = FT^{−1}{X(jω)} = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω,

so that

2π x(t) = ∫_{−∞}^{∞} X(jω) e^{jωt} dω.

Replacing t with −t, we obtain

2π x(−t) = ∫_{−∞}^{∞} X(jω) e^{−jωt} dω.

Interchanging t and ω, and comparing the result with Eq. 4.24,

2π x(−ω) = ∫_{−∞}^{∞} X(jt) e^{−jωt} dt = FT{X(t)}.
4.3.7 Parseval's Relation

∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|² dω. (4.25)

The quantity on the left-hand side of Eq. 4.25 is the normalized energy content E of x(t). Hence, the total energy E may be determined either by computing the energy per unit time |x(t)|² and integrating over all time t, or by computing the energy per unit frequency |X(jω)|²/(2π) and integrating over all frequencies ω. For this reason, |X(jω)|² is referred to as the energy-density spectrum of x(t).
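A quick check (a sketch assuming NumPy and SciPy are available) for x(t) = e^{−at}u(t), whose transform 1/(a + jω) was found in Example 4.1: both sides of Parseval's relation equal 1/(2a).

```python
import numpy as np
from scipy.integrate import quad

a = 3.0
E_time = quad(lambda t: np.exp(-2.0 * a * t), 0, np.inf)[0]
# |X(jw)|^2 = 1/(a^2 + w^2) for X(jw) = 1/(a + jw)
E_freq = quad(lambda w: 1.0 / (a**2 + w**2), -np.inf, np.inf)[0] / (2.0 * np.pi)
print(round(E_time, 6), round(E_freq, 6))  # both 1/(2a) = 0.166667
```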
4.3.8 Convolution

The Fourier transform maps the convolution of two signals in the time domain into the product of their Fourier transforms in the frequency domain:

x(t) ∗ h(t) ←FT→ X(jω) H(jω), (4.26)

where the Fourier transform of the impulse response, H(jω), is the frequency response of the LTI system, which completely characterizes the system.

4.3.9 Multiplication

x(t) h(t) ←FT→ (1/2π) X(jω) ∗ H(jω). (4.27)
Multiplication of one signal by another can be thought of as using one signal to scale or modulate the amplitude of the other; consequently, the multiplication of two signals is often referred to as amplitude modulation. For the product of two signals x(t) and y(t),

x(t) y(t) ←FT→ (1/2π) ∫_{−∞}^{∞} X(jv) Y(j(ω − v)) dv = (1/2π) X(jω) ∗ Y(jω).

The multiplication property is often referred to as the frequency convolution theorem: multiplication in the time domain becomes convolution in the frequency domain.
Example 4.9.
Using the symmetry properties of the Fourier transform, evaluate the Fourier transform of the signal

x(t) = e^{−a|t|}, where a > 0.

Hint: e^{−at} u(t) ←FT→ 1/(a + jω).

Solution 4.9.
Writing x(t) = e^{−at} u(t) + e^{at} u(−t) and noting, from the time-reversal property, that e^{at} u(−t) ←FT→ 1/(a − jω), the Fourier transform of the signal x(t) is

X(jω) = 1/(a + jω) + 1/(a − jω) = 2a/(a² + ω²).
Example 4.10.
Using the frequency-domain convolution property of the Fourier transform, evaluate the Fourier transform of the signal

x(t) = sin(t) cos(t).
Solution 4.10.
The Fourier transform of the signal x(t) is

FT{x(t)} = (1/2π) [FT{sin(t)} ∗ FT{cos(t)}]
         = (1/2π) [jπ(δ(ω + 1) − δ(ω − 1)) ∗ π(δ(ω + 1) + δ(ω − 1))]
         = (jπ/2) [δ(ω + 1) ∗ δ(ω + 1) − δ(ω − 1) ∗ δ(ω − 1)]
         = (jπ/2) [δ(ω + 2) − δ(ω − 2)],

which is the Fourier transform of (1/2) sin(2t) = sin(t) cos(t).
4.4 Summary of Fourier Transform Properties and Basic Fourier Transform Pairs
Table 4.2 contains a summary of the properties of the continuous-time Fourier transform
presented in this section.
Conjugation: x*(t) ←FT→ X*(−jω)
Differentiation in Time: dx(t)/dt ←FT→ jω X(jω)
Integration: ∫_{−∞}^{t} x(t) dt ←FT→ (1/(jω)) X(jω) + π X(0) δ(ω)
Real and Even Signals: x(t) real and even ←FT→ X(jω) real and even
Real and Odd Signals: x(t) real and odd ←FT→ X(jω) purely imaginary and odd
Even-Odd Decomposition of Real Signals: xe(t) = Ev[x(t)] [x(t) real] ←FT→ Re{X(jω)}; xo(t) = Od[x(t)] [x(t) real] ←FT→ jIm{X(jω)}
Parseval's Relation: ∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|² dω
δ(t) ←FT→ 1
δ(t − t0) ←FT→ e^{−jωt0}
1 ←FT→ 2πδ(ω)
e^{jω0t} ←FT→ 2πδ(ω − ω0)
cos(ω0t) ←FT→ π[δ(ω + ω0) + δ(ω − ω0)]
sin(ω0t) ←FT→ jπ[δ(ω + ω0) − δ(ω − ω0)]
u(t) ←FT→ πδ(ω) + 1/(jω)
u(−t) ←FT→ πδ(ω) − 1/(jω)
e^{−a|t|}, a > 0 ←FT→ 2a/(a² + ω²)
1/(a² + t²) ←FT→ (π/a) e^{−a|ω|}
e^{−at²}, a > 0 ←FT→ √(π/a) e^{−ω²/4a}
pa(t) = { 1, |t| < a; 0, |t| > a } ←FT→ 2a sin(aω)/(aω)
sin(at)/(πt) ←FT→ pa(ω) = { 1, |ω| < a; 0, |ω| > a }
sgn(t) ←FT→ 2/(jω)
Σ_{k=−∞}^{∞} δ(t − kT) ←FT→ ω0 Σ_{k=−∞}^{∞} δ(ω − kω0), ω0 = 2π/T
Problems of Chapter 4
4.1. From the basic definition, compute the Fourier transforms of the following signals
d. A rectangular pulse signal x(t) = pa(t) defined by
x(t) = { 1, |t| < a
       { 0, |t| > a
e. x(t) = 1
f. x(t) = e jω0 t
g. x(t) = e− jω0 t
h. x(t) = u(−t)
4.2. From the basic definition, compute the inverse Fourier transforms of the following signals
a. X(jω) = 1/(a + jω)². Hint: use the time convolution theorem
b. X(jω) = jω/(1 + jω)². Hint: use tables of transforms and properties
c. X(jω) = e^{j4ω}/(2 + jω)². Hint: use tables of transforms and properties
d. X(jω) = 1/(2 + j(ω − 3)) + 1/(2 + j(ω + 3)). Hint: use the frequency shifting property
f. X(jω) = 2 sin(ω)/(ω(2 + jω)). Hint: use tables of transforms and properties
References
1. A.V. Oppenheim, A.S. Willsky, S.H. Nawab. Signals and Systems. 2nd Edition, New Jersey, USA: Prentice
Hall, 1997.
2. S. Haykin and B. Van Veen. Signals and Systems. 2nd Edition, New Jersey, USA: John Wiley & Sons,
2003.