
Nasser M. Sabah

SIGNALS & SYSTEMS


First Edition

Palestine Technical College


Deir El-balah, Gaza Strip
Palestine
Contents

1 Signals and Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 What is a signal? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Continuous and Discrete-Time Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Energy and Power Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Transformations of the Independent Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.1 Example of Transformations of the Independent Variable . . . . . . . . . . . 8
1.4.2 Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.3 Even and Odd Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5 Exponential and Sinusoidal Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5.1 Continuous-Time Complex Exponential and Sinusoidal Signals . . . . . 17
1.6 The Unit Impulse and Unit Step Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.1 The Continuous-Time Unit Step and Unit Impulse Functions . . . . . . . . 20
1.7 Continuous-Time and Discrete-Time Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8 Basic System Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.8.1 Systems with and without Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.8.2 Invertibility and Inverse System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.8.3 Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.8.4 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.8.5 Time Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.8.6 Linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

2 Linear Time Invariant (LTI) Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.2 CT LTI Systems: Convolution Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.2.1 Representation of CT Signals in Terms of Impulses . . . . . . . . . . . . . . . . 49
2.2.2 Representation of an LTI system: Convolution Integral . . . . . . . . . . . . . 50
2.3 Properties of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.3.1 Commutative Property of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.3.2 Distributive Property of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.3.3 Associative Property of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.3.4 LTI system with and without memory . . . . . . . . . . . . . . . . . . . . . . . . . . . 62


2.3.5 Invertibility of LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63


2.3.6 Causality for LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.3.7 Stability for LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.3.8 The Step Response of an LTI System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.4 Causal LTI Systems Described by Differential and Difference Equations . . . . 73
2.4.1 Linear Constant-Coefficient Differential Equations . . . . . . . . . . . . . . . . 73
2.4.2 Block Diagram Representations of 1st-order Systems . . . . . . . . . . . . . . 76
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

3 Continuous-Time Fourier Series Representation of Periodic Signals . . . . . . . . . . 85


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.1.1 Historical Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.2 The Response of LTI Systems to Complex Exponentials . . . . . . . . . . . . . . . . . . 85
3.3 Fourier Series representation of Continuous-Time Periodic Signals . . . . . . . . . 89
3.3.1 Linear Combinations of harmonically Related Complex Exponentials 89
3.3.2 Determination of the Fourier Series Representation of Periodic Signal 93
3.4 Convergence of the Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.5 Properties of the Continuous-Time Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . 101
3.5.1 Linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.5.2 Time Shifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.5.3 Time Reversal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.5.4 Time Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.5.5 Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.5.6 Conjugate and Conjugate Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.5.7 Parseval’s Relation for periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.6 Summary of Properties of the Continuous-Time Fourier Series . . . . . . . . . . . . . 103
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

4 Continuous-Time Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.1.1 Fourier Transform Representation of an Aperiodic Signal . . . . . . . . . . 115
4.1.2 Convergence of Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.1.3 Examples of Continuous-Time Fourier Transform . . . . . . . . . . . . . . . . . 119
4.2 Fourier Transform for Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.3 Properties of Continuous-Time Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . 125
4.3.1 Linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.3.2 Time Shifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
4.3.3 Conjugation and Conjugate Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.3.4 Differentiation and Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.3.5 Time Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.3.6 Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.3.7 Parseval’s Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.3.8 Convolution Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.3.9 Multiplication Property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.4 Summary of Fourier Transform Properties and Basic Fourier Transform Pairs 133
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Chapter 1
Signals and Systems

Abstract:
The concept and theory of signals and systems are needed in almost all electrical engineering
fields and in many other engineering and scientific disciplines as well. In this chapter, we
introduce the mathematical description and representation of signals and systems, classifi-
cations of signals (periodic signals, even and odd signals), signal transformations, complex
exponential signals, and system properties. We also define several important basic signals
essential to our studies such as unit impulse and unit step functions.

1.1 What is a signal?

The concept of a signal refers to time, space, or other types of variation in the physical state
of an object, phenomenon, or entity. The quantification of this state is used to represent, store,
or transmit a message.
A signal is a function representing a physical quantity or variable that conveys information
about the behavior or nature of a phenomenon. For instance, in an RC circuit the signal may
represent the voltage across the capacitor or the current flowing in the resistor. Mathemati-
cally, a signal is represented as a function of an independent variable t. Usually t represents
time. Thus, a signal is denoted by x(t). Examples of signals include:
• Acoustic signals: Acoustic pressure (sound) over time,
• Mechanical signals: Velocity, acceleration of a car over time, and
• Video signals: Intensity level of a pixel (camera, video) over time.

1.2 Continuous and Discrete-Time Signals

A signal x(t) is a continuous-time (CT) signal if t is a continuous variable. If t is a discrete
variable, that is, x(t) is defined only at discrete times, then x(t) is a discrete-time (DT) signal.
Since a DT signal is defined at discrete times, a DT signal is often identified as a sequence of
numbers, denoted by (xn) or x[n], where n is an integer. Illustrations of a CT signal x(t) and
a DT signal x[n] are shown in Figure 1.1.

Figure 1.1 Graphical representation of signals: (a) CT signal; (b) DT signal.

A DT signal x[n] may represent a phenomenon for which the independent variable is inherently
discrete. For instance, the daily closing stock market average is by its nature a signal that
evolves at discrete points in time. On the other hand, a DT signal x[n] may be obtained by
sampling a CT signal x(t), giving the samples

x(t0), x(t1), ..., x(tn), ...

or in a discrete form as
x[0], x[1], ..., x[n], ...
where
xn = x[n] = x(tn ),
and the xn’s are called samples; the time interval between them is called the sampling interval.
When the sampling intervals are equal (uniform sampling), then

xn = x[n] = x(nTs ),

where the constant Ts is the sampling interval.


A DT signal x[n] can also be defined by explicitly listing the values of the sequence. For
example, the sequence shown in Figure 1.1(b) can be written as

xn = (..., 0, 0, 1, 2, 2, 1, 0, 1, 0, 2, 0, 0, ...).
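The uniform-sampling relation x[n] = x(nTs) can be illustrated with a short Python sketch; the sinusoid and the sampling interval below are arbitrary choices for the example, not values from the text:

```python
import math

def sample(x, Ts, N):
    """Return the DT sequence x[n] = x(n*Ts) for n = 0, ..., N-1."""
    return [x(n * Ts) for n in range(N)]

# Example CT signal: x(t) = cos(2*pi*t), with period 1 second
x = lambda t: math.cos(2 * math.pi * t)

# Uniform sampling with Ts = 0.25 s gives four samples per period
xn = sample(x, Ts=0.25, N=5)
# xn is approximately [1.0, 0.0, -1.0, 0.0, 1.0]
```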

1.3 Energy and Power Signals

Consider v(t) to be the instantaneous voltage across a resistor R producing an instantaneous
current i(t). The instantaneous power p(t) is defined as

p(t) = v(t) i(t) = (1/R) v²(t). (1.1)
The total energy E dissipated in the resistor over the time interval t1 ≤ t ≤ t2 is

E = ∫_{t1}^{t2} p(t) dt = ∫_{t1}^{t2} (1/R) v²(t) dt. (1.2)

The average power P dissipated in the resistor over the time interval t1 ≤ t ≤ t2 is

P = (1/(t2 − t1)) ∫_{t1}^{t2} p(t) dt = (1/(t2 − t1)) ∫_{t1}^{t2} (1/R) v²(t) dt. (1.3)

The normalized total energy content E of an arbitrary CT signal x(t) is defined as

E = ∫_{t1}^{t2} |x(t)|² dt. (1.4)

The normalized average power content P of an arbitrary CT signal x(t) is defined as

P = (1/(t2 − t1)) ∫_{t1}^{t2} |x(t)|² dt. (1.5)

Similarly, for a DT signal x[n], the normalized total energy content E of x[n] over n1 ≤ n ≤ n2
is defined as

E = Σ_{n=n1}^{n2} |x[n]|². (1.6)

The normalized average power P of x[n] is defined as

P = (1/(n2 − n1)) Σ_{n=n1}^{n2} |x[n]|². (1.7)

For a given CT signal, when we consider the total energy over the infinite time interval
−∞ < t < ∞, E∞ is given by

E∞ = lim_{T→∞} ∫_{−T}^{T} |x(t)|² dt. (1.8)

The average power over infinite time P∞ of x(t) is defined as

P∞ = lim_{T→∞} (1/2T) ∫_{−T}^{T} |x(t)|² dt. (1.9)

For a given DT signal, when we consider the total energy over the infinite time interval
−∞ < n < ∞, E∞ is given by

E∞ = lim_{N→∞} Σ_{n=−N}^{N} |x[n]|². (1.10)

The average power over infinite time P∞ of x[n] is defined as

P∞ = lim_{N→∞} (1/(2N + 1)) Σ_{n=−N}^{N} |x[n]|². (1.11)

Based on the definitions in Eqs. (1.8) to (1.11), the following classes of signals are defined:
1. A signal is said to be an energy signal if and only if 0 < E∞ < ∞, in which case P∞ = 0,
   since
   P∞ = lim_{T→∞} E∞/2T = 0.
2. Similarly, a signal is said to be a power signal if and only if 0 < P∞ < ∞, which implies
   that E∞ = ∞.
3. Signals that satisfy neither property are referred to as neither energy signals nor power
   signals.
NOTE: There are some signals for which both P∞ and E∞ are infinite.
Periodic and random signals are usually power signals (infinite energy, finite average power):
if the energy content per period is finite, the average power of such a signal need only be
calculated over one period. Signals that are both deterministic and aperiodic (bounded, of
finite duration) are usually energy signals.
A signal is deterministic if its value at each time point can be defined by a mathematical func-
tion (e.g., a sine wave), while a signal is random if it cannot be described by a mathematical
function and only its statistics can be defined (e.g., the electrical noise generated in the am-
plifier of a radio/TV receiver).
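These definitions can be approximated numerically with Riemann sums. The sketch below is a minimal illustration, with the two test signals chosen for the example: a decaying exponential behaves as an energy signal (finite E, vanishing P), while a constant behaves as a power signal (E grows without bound, P stays finite):

```python
import math

def energy_and_power(x, T, dt=1e-3):
    """Approximate E_T = integral of |x(t)|^2 over [-T, T] by a left Riemann
    sum, and the corresponding average power P_T = E_T / (2T)."""
    n = round(2 * T / dt)
    E = sum(abs(x(-T + k * dt)) ** 2 * dt for k in range(n))
    return E, E / (2 * T)

decaying = lambda t: math.exp(-t) if t >= 0 else 0.0   # energy signal: E -> 1/2
constant = lambda t: 1.0                               # power signal: P -> 1

E1, P1 = energy_and_power(decaying, T=20.0)
E2, P2 = energy_and_power(constant, T=20.0)
# E1 stays near 1/2 and P1 shrinks toward 0 as T grows,
# while E2 grows with T and P2 stays near 1.
```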

Example 1.1.
x(t) = A sin(ω0 t + ϕ) is a periodic signal with period T = 2π/ω0. Determine whether the
signal is an energy signal, a power signal, or neither.

Solution 1.1.
Since x(t) is periodic, it cannot be an energy signal; we compute its average power over one
period:

P = (1/T) ∫_0^T |x(t)|² dt = (1/T) ∫_0^T A² sin²(ω0 t + ϕ) dt.

Using the identity sin² x = (1/2)(1 − cos 2x),

P = (1/T) ∫_0^T (A²/2) [1 − cos(2ω0 t + 2ϕ)] dt
  = (A² ω0 / 2π) ∫_0^{2π/ω0} (1/2) [1 − cos(2ω0 t + 2ϕ)] dt = A²/2.

Since 0 < P = A²/2 < ∞, the signal is a power signal.

Example 1.2.
The signal shown in Figure 1.2 is given by

x(t) = { A e^{−t}, t ≥ 0
       { 0,        t < 0.

Determine whether the signal is an energy signal, a power signal, or neither.

Figure 1.2 Infinite-duration signal for Example 1.2.

Solution 1.2.
Computing the total energy:

E = ∫_{−∞}^{∞} |x(t)|² dt = ∫_0^{∞} A² e^{−2t} dt = −(A²/2) e^{−2t} |_0^{∞} = −(A²/2)[0 − 1] = A²/2.

Computing the total energy using a limit:

E = lim_{L→∞} ∫_0^L A² e^{−2t} dt = lim_{L→∞} −(A²/2) e^{−2t} |_0^L = lim_{L→∞} −(A²/2)[e^{−2L} − 1] = A²/2.

Computing the average power using a limit:

P = lim_{L→∞} (1/2L) ∫_{−L}^{L} |x(t)|² dt = lim_{L→∞} (1/2L) ∫_0^L A² e^{−2t} dt
  = lim_{L→∞} −(A²/4L)[e^{−2L} − 1] = 0.

Since E = A²/2 < ∞ and P = 0, the signal is an energy signal.

Example 1.3.
The signal shown in Figure 1.3 is given by (with a > 0)

x(t) = A e^{−a|t|} = { A e^{−at}, t ≥ 0
                     { A e^{at},  t ≤ 0.

Determine whether the signal is an energy signal, a power signal, or neither.

Figure 1.3 Infinite-duration signal x(t) = A e^{−a|t|} for Example 1.3.

Solution 1.3.
Computing the total energy:

E = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{0} A² e^{2at} dt + ∫_0^{∞} A² e^{−2at} dt
  = (A²/2a) e^{2at} |_{−∞}^{0} − (A²/2a) e^{−2at} |_0^{∞}
  = (A²/2a)[(1 − 0) − (0 − 1)] = A²/a.

Computing the average power:

P = lim_{L→∞} (1/2L) ∫_{−L}^{L} |x(t)|² dt = lim_{L→∞} (1/2L) ∫_{−L}^{L} A² e^{−2a|t|} dt
  = lim_{L→∞} (A²/2aL)[1 − e^{−2aL}] = 0.

Since E = A²/a < ∞ and P = 0, the signal is an energy signal.

Example 1.4.
The signal shown in Figure 1.4 is given by

x(t) = { A e^{−t}, t ≥ 0
       { A,        t < 0.

Determine whether the signal is an energy signal, a power signal, or neither.

Figure 1.4 Infinite-duration signal for Example 1.4.

Solution 1.4.
Computing the total energy:

E = lim_{L→∞} ∫_{−L}^{L} |x(t)|² dt = lim_{L→∞} [∫_{−L}^{0} A² dt + ∫_0^{L} A² e^{−2t} dt]
  = lim_{L→∞} A² [L + (1/2)(1 − e^{−2L})] = ∞.

Computing the average power:

P = lim_{L→∞} (1/2L) ∫_{−L}^{L} |x(t)|² dt = lim_{L→∞} (1/2L) [∫_{−L}^{0} A² dt + ∫_0^{L} A² e^{−2t} dt]
  = lim_{L→∞} (A²/2L) [L + (1/2)(1 − e^{−2L})] = A²/2.

Since E = ∞ and P = A²/2 < ∞, the signal is a power signal.

Example 1.5.
The signal shown in Figure 1.5 is given by

x(t) = { A, −τ/2 ≤ t ≤ τ/2
       { 0, elsewhere.

Determine whether the signal is an energy signal, a power signal, or neither.

Figure 1.5 Square pulse for Example 1.5.

Solution 1.5.
Computing the total energy:

E = ∫_{−τ/2}^{τ/2} |x(t)|² dt = ∫_{−τ/2}^{τ/2} A² dt = A² t |_{−τ/2}^{τ/2} = A² τ.

Computing the average power:

P = lim_{L→∞} (1/2L) ∫_{−L}^{L} |x(t)|² dt = lim_{L→∞} (1/2L) ∫_{−τ/2}^{τ/2} A² dt
  = lim_{L→∞} A² τ / 2L = 0.

Since E = A² τ < ∞ and P = 0, the signal is an energy signal.

1.4 Transformations of the Independent Variable

A central concept in signal analysis is the transformation of one signal into another signal. Of
particular interest are simple transformations that involve a transformation of the time axis
only. A linear time shift signal transformation is given by

y(t) = x(α t + β ), (1.12)

where the parameter α represents time scaling (stretching or compression) and possible
reversal, and β represents a time offset from 0. Given x(α t + β), depending on the values of
α and β we have
• time shift → β
• time scaling → α
• time reversal → the sign of α
We will investigate x(α t + β) given x(t) for different values of α and β:
• if 0 < |α| < 1, then the signal is linearly stretched (see Figure 1.7);
• if |α| > 1, then the signal is linearly compressed (see Figure 1.7);
• if α < 0, then the signal is reversed in time (see Figure 1.6);
• if β > 0, then the signal is time-advanced (shifts left), (see Figure 1.9);
• if β < 0, then the signal is time-delayed (shifts right), (see Figure 1.8).
Note: To find y(t) = x(α t + β), it is important to shift first by β and then to scale (com-
press/stretch) by α.
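The shift-then-scale rule can be sketched in Python. The triangular pulse below is a hypothetical stand-in for x(t), chosen only so the example has a concrete signal to transform:

```python
def transform(x, alpha, beta):
    """Return y with y(t) = x(alpha*t + beta).

    Conceptually: first shift x by beta (forming x(t + beta)), then scale
    the time axis by alpha -- i.e., substitute t -> alpha*t."""
    return lambda t: x(alpha * t + beta)

# Hypothetical unit triangular pulse supported on [-1, 1], peak 1 at t = 0
tri = lambda t: max(0.0, 1.0 - abs(t))

y = transform(tri, alpha=2.0, beta=3.0)   # y(t) = tri(2t + 3)
# tri peaks at its argument 0, so y peaks where 2t + 3 = 0, i.e. at t = -1.5
```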

1.4.1 Example of Transformations of the Independent Variable

A simple and very important example of transforming the independent variable of a signal is
the time shift.

Example 1.6.
Consider the triangular pulse x(t) shown in Figure 1.6(a). Find the reflected version of x(t)
about the amplitude axis.

Solution 1.6.
Replacing the independent variable t in x(t) with −t, we get the result y(t) = x(−t) shown in
Figure 1.6(b). Note that for this example, we have

x(t) = 0 for t < −T1 and t > T2.

Correspondingly, we find that

y(t) = 0 for t > T1 and t < −T2.

Figure 1.6 Operation of signal reflection. (a) CT signal x(t). (b) Reflected version of x(t) about the origin.

Example 1.7.
Figure 1.7(a) shows a signal x(t) of unit amplitude and unit duration. Find y(t) = x(t/2) and
y(t) = x(2t).

Solution 1.7.
The signal y(t) = x(t/2) is obtained by stretching the signal x(t) by a factor of 2, as shown
in Figure 1.7(b). The signal y(t) = x(2t) is a version of x(t) compressed by a factor of 2, as
shown in Figure 1.7(c).

Example 1.8.
Figure 1.8(a) shows a rectangular pulse x(t) of unit amplitude and unit duration. Find y(t) =
x(t − 2).
Figure 1.7 Time-scaling operation. (a) CT signal x(t). (b) Stretched version of x(t). (c) Compressed version
of x(t).

Solution 1.8.
In this example, the time shift t0 equals 2 time units (a time delay). Hence, by shifting x(t) to
the right by 2 time units we get the rectangular pulse y(t) shown in Figure 1.8(b). The pulse
y(t) has exactly the same shape as the original pulse x(t); it is merely shifted along the time
axis.

Figure 1.8 Time-shifting operation. (a) CT signal x(t). (b) Time-shifted version of x(t).

Example 1.9.
Consider the triangular pulse x(t) shown in Figure 1.9(a). Find v(t) = x(t + 3) and y(t) =
v(2t).

Solution 1.9.
In this example, the time shift t0 equals 3 time units (a time advance). Hence, by shifting x(t)
to the left by 3 time units we get the triangular pulse v(t) shown in Figure 1.9(b). The signal
y(t) = v(2t) is a version of v(t) compressed by a factor of 2, as shown in Figure 1.9(c).

Example 1.10.
Consider the signal x(t) shown in Figure 1.10. Find the following transformed versions of
the signal:
1. x(t + 1)
Figure 1.9 Time-shifting and scaling operations. (a) Triangular pulse x(t), symmetric about the origin. (b)
v(t) is a time-shifted version of x(t). (c) y(t) is a compressed version of v(t).

2. x(−t + 1)

3. x(3t/2)

4. x(3t/2 + 1)

Figure 1.10 CT signal x(t) for Example 1.10.

Solution 1.10.
The signal x(t) in Figure 1.10 is time-advanced (shifted to the left), time-reversed, and time-
scaled, as illustrated in the following figures.
1. The signal x(t + 1) corresponds to a time advance (shift to the left) by one unit along the t
axis, as illustrated in Figure 1.11.

Figure 1.11 Time-shifted signal x(t + 1).



Figure 1.12 Time-reversed signal x(−t + 1).

2. The signal x(−t + 1) is a time-reversed version of x(t + 1) along the t axis. It may be
obtained by first advancing x(t) by 1 unit (shifting it to the left) and then reversing the
advanced signal along the t axis, as illustrated in Figure 1.12.

3. The signal x(3t/2) is a version of x(t) compressed by a factor of 3/2, as shown in Figure 1.13.

Figure 1.13 Time-scaled signal x(3t/2).

4. The signal x(3t/2 + 1) is x(3t/2) advanced in time (shifted to the left) by 2/3 unit, as
shown in Figure 1.14. It may be determined by first advancing x(t) by 1 unit (shift to the
left) as shown in Figure 1.11, then linearly compressing the shifted signal of Figure 1.11
by a factor of 3/2 to obtain the signal shown in Figure 1.14.

Figure 1.14 Time-scaled and shifted signal x(3t/2 + 1).

1.4.2 Periodic Signals

An important class of signals is the class of periodic signals. A signal is periodic if it repeats
itself after a fixed period, otherwise the signal is called non-periodic or aperiodic.
A CT signal x(t) is periodic if it has the property that there is a positive value of T for which

x(t) = x(t + T), T > 0, for all t. (1.13)

From Eq. 1.13, we can deduce that if x(t) is periodic with period T, then x(t) = x(t + mT) for
all t and for all integers m. Thus, x(t) is also periodic with period 2T, 3T, .... The fundamental
period T0 of x(t) is the smallest positive value of T for which Eq. 1.13 holds. A constant
signal x(t) = 5 is periodic with any real period; however, its fundamental period T0 is
undefined.
A DT signal x[n] is periodic with period N if it is unchanged by a time shift of N, where N is
a positive integer:

x[n] = x[n + N], N > 0, for all n. (1.14)

If x[n] is periodic with period N, then x[n] = x[n + mN] for all n and for all integers m. Thus,
x[n] is also periodic with period 2N, 3N, .... The fundamental period N0 of x[n] is the smallest
positive integer N for which Eq. 1.14 holds. A constant signal x[n] = 5 is periodic with any
positive integer period; its fundamental period is N0 = 1.
The DT signals cos(ω0 n), sin(ω0 n), and e^{jω0 n} are periodic only if ω0/2π is rational,
ω0/2π ∈ θ (θ denoting the set of rational numbers), where ω0 = 2π f is the radian frequency,
which has the units of radians/s, and f is the frequency in hertz.
1. If ω0/2π is rational, then cos(ω0 n), sin(ω0 n), and e^{jω0 n} are periodic.
   • The samples fall at the same points in each "super period" of cos(ω0 t); it may take
     several periods of cos(ω0 t) to make one period of cos(ω0 n).
   • For ω0/2π to be rational, ω0 must contain the factor π.
2. If ω0/2π ∈ θ, we can find the fundamental period by writing ω0/2π = m/N, where
   m, N ∈ Z (Z is the set of integers).
   • If m/N is in reduced form, then m and N have no common factors and N is the funda-
     mental period.
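The rationality test is easy to mechanize with exact fractions. The sketch below assumes ω0 is handed in as a rational multiple of π (the only case in which the DT sinusoid is periodic), so the reduced denominator of ω0/2π is directly the fundamental period:

```python
from fractions import Fraction

def dt_fundamental_period(omega_over_pi):
    """Given omega0 = omega_over_pi * pi with omega_over_pi rational,
    return the fundamental period N0 of cos(omega0 * n).

    omega0/(2*pi) = omega_over_pi / 2; writing it as m/N in lowest terms
    (Fraction stores values in lowest terms) gives N0 = N."""
    f0 = Fraction(omega_over_pi) / 2      # omega0 / (2*pi), exact
    return f0.denominator

# x[n] = cos(8*pi*n/31): omega0 = (8/31)*pi -> f0 = 4/31 -> N0 = 31
n0 = dt_fundamental_period(Fraction(8, 31))
```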

Example 1.11.
Determine whether or not each of the following DT signals is periodic. If it is, what is the
fundamental period?

1. x[n] = cos(8πn/31)

2. x[n] = cos(n/6)

3. x[n] = sin(2n)

4. x[n] = 1 + sin²(2πn)

5. x[n] = sin(6πn/7 − 1)

Solution 1.11.

1. Angular frequency ω0 = 8π/31 ⇒ f0 = ω0/2π = 4/31 = m/N0.
   4/31 ∈ θ ⇒ rational value. Hence, the signal is periodic with fundamental period N0 = 31.
   Each m = 4 periods of cos(ω0 t) make one period of cos(ω0 n).

2. Angular frequency ω0 = 1/6 ⇒ f0 = ω0/2π = 1/12π.
   1/12π ∉ θ ⇒ irrational value. Hence, the signal is aperiodic.

3. Angular frequency ω0 = 2 ⇒ f0 = ω0/2π = 1/π.
   1/π ∉ θ ⇒ irrational value. Hence, the signal is aperiodic.

4. x[n] = 1 + sin²(2πn) = 1 + (1/2)[1 − cos(4πn)] = 3/2 − (1/2) cos(4πn).
   Neglect the DC component and consider the period of cos(4πn) only.
   Angular frequency ω0 = 4π ⇒ f0 = ω0/2π = 2/1 = m/N0.
   2 ∈ θ ⇒ rational value. Hence, the signal is periodic with fundamental period N0 = 1.
   Each m = 2 periods of cos(ω0 t) make one period of cos(ω0 n).

5. Angular frequency ω0 = 6π/7 ⇒ f0 = ω0/2π = 3/7 = m/N0.
   3/7 ∈ θ ⇒ rational value. Hence, the signal is periodic with fundamental period N0 = 7.
   Each m = 3 periods of sin(ω0 t) make one period of sin(ω0 n).


The least common multiple (LCM), also called the lowest or smallest common multiple, of
two integers a and b, usually denoted by LCM(a, b), is the smallest positive integer that is
divisible by both a and b.

Example: Find the least common multiple for 4 and 6


Multiples of 4 are: 4, 8, 12, 16, 20, 24, 28, 32, 36, ...
Multiples of 6 are: 6, 12, 18, 24, 30, 36, ...
Therefore, 12 is the least common multiple.

Example: Find the least common multiple for 4, 6, and 8


Multiples of 4 are: 4, 8, 12, 16, 20, 24, 28, 32, 36, ...
Multiples of 6 are: 6, 12, 18, 24, 30, 36, ...
Multiples of 8 are: 8, 16, 24, 32, 40, ....
Therefore, 24 is the least common multiple.

Example: Find the least common multiple for 4 and 5


Multiples of 4 are: 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, ...
Multiples of 5 are: 5, 10, 15, 20, 25, 30, 35, 40, 45, 50,...
Therefore, 20 is the least common multiple.
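The three worked LCM examples can be reproduced with Python's standard library. The identity LCM(a, b) = a·b / gcd(a, b) is used here (math.lcm also exists in Python 3.9+, but a gcd-based version is shown for portability):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    """Least common multiple of two positive integers: a*b // gcd(a, b)."""
    return a * b // gcd(a, b)

def lcm_many(*nums):
    """LCM of any number of positive integers, folded pairwise."""
    return reduce(lcm, nums)

# The three examples above: LCM(4, 6) = 12, LCM(4, 6, 8) = 24, LCM(4, 5) = 20
```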

Example 1.12.
Determine whether or not each of the following CT signals is periodic. If it is, what is the
fundamental period?
1. x(t) = cos(4t) + 2 sin(8t)

2. x(t) = 3 cos(4t) + sin(πt)

3. x(t) = cos(3πt) + 2 cos(4πt)

4. x(t) = sin(2t)

5. x(t) = 3 sin(3√2 t + θ) + 7 cos(6√2 t)

Solution 1.12.

1. Angular frequency ω1 = 4 ⇒ f1 = ω1/2π = 2/π ⇒ T1 = π/2.
   Angular frequency ω2 = 8 ⇒ f2 = ω2/2π = 4/π ⇒ T2 = π/4.
   T1/T2 = (π/2)/(π/4) = 2 ∈ θ ⇒ rational value.
   The signal is periodic with fundamental period T = LCM(π/2, π/4) = π/2.

2. Angular frequency ω1 = 4 ⇒ f1 = ω1/2π = 2/π ⇒ T1 = π/2.
   Angular frequency ω2 = π ⇒ f2 = ω2/2π = 1/2 ⇒ T2 = 2.
   T1/T2 = (π/2)/2 = π/4 ∉ θ ⇒ irrational value. Therefore, the signal is aperiodic.

3. Angular frequency ω1 = 3π ⇒ f1 = ω1/2π = 3/2 ⇒ T1 = 2/3.
   Angular frequency ω2 = 4π ⇒ f2 = ω2/2π = 2 ⇒ T2 = 1/2.
   T1/T2 = (2/3)/(1/2) = 4/3 ∈ θ ⇒ rational value.
   The signal is periodic with fundamental period T = LCM(2/3, 1/2) = 2.

4. Angular frequency ω0 = 2 ⇒ f0 = ω0/2π = 1/π ⇒ T0 = π.
   A single CT sinusoid is always periodic; hence, the signal is periodic with fundamental
   period T = π.

5. Angular frequency ω1 = 3√2 ⇒ f1 = ω1/2π = 3√2/2π ⇒ T1 = 2π/(3√2).
   Angular frequency ω2 = 6√2 ⇒ f2 = ω2/2π = 3√2/π ⇒ T2 = π/(3√2).
   T1/T2 = m/k = [2π/(3√2)] / [π/(3√2)] = 2/1 ∈ θ ⇒ rational value.
   The signal is periodic with fundamental period T = LCM(2π/(3√2), π/(3√2)) = 2π/(3√2).
   OR
   T1 k = T2 m ⇒ (2π/(3√2))(1) = (π/(3√2))(2) = 2π/(3√2).
   Therefore, f = 3√2/2π and the fundamental frequency is ω = 3√2.

1.4.3 Even and Odd Signals

In addition to representing physical phenomena, such as the time shift in a radar signal or the
reversal of an audio tape, transformations of the independent variable are extremely useful in
examining some of the important properties that a signal may possess; signals with these
properties can be even or odd signals.
An even signal x(t) is identical to its time-reversed counterpart: it can be reflected about the
vertical axis and is equal to the original, as shown in Figure 1.15(a); e.g., x(t) = cos(t):

x(−t) = x(t) for all t. (1.15)

An odd signal x(t) is identical to the negative of its time-reversed counterpart: it is equal to
the negated, reflected signal, as shown in Figure 1.15(b); e.g., x(t) = sin(t):

x(−t) = −x(t) for all t. (1.16)

Odd signals must necessarily be zero at t = 0, since x(−t) = −x(t) implies x(0) = −x(0), and
only 0 satisfies this condition.

Figure 1.15 (a) An even CT signal, x(t) = cos(t); (b) an odd CT signal, x(t) = sin(t).

An important fact is that any signal can be decomposed into a sum of two signals, one of
which is even and one of which is odd.

The even part of a signal x(t) is defined as

Ev{x(t)} = (1/2)[x(t) + x(−t)]. (1.17)

The even part of a signal is itself an even signal, since

Ev{x(−t)} = (1/2)[x(−t) + x(t)] = Ev{x(t)}.

The odd part of a signal x(t) is defined as

Odd{x(t)} = (1/2)[x(t) − x(−t)]. (1.18)

The odd part of a signal is itself an odd signal, since

Odd{x(−t)} = (1/2)[x(−t) − x(t)] = −(1/2)[x(t) − x(−t)] = −Odd{x(t)}.

For any signal x(t), we can decompose the signal into the sum of its even part Ev{x(t)} and
its odd part Odd{x(t)}:

x(t) = Ev{x(t)} + Odd{x(t)}.

Properties:
• If x(t) is odd, then x(0) = 0.
• If x(t) is even, then ∫_{−a}^{a} x(t) dt = 2 ∫_{0}^{a} x(t) dt.
• If x(t) is odd, then ∫_{−a}^{a} x(t) dt = 0.
• even × even = even.
• even × odd = odd.
• odd × odd = even.
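The decomposition x(t) = Ev{x(t)} + Odd{x(t)} follows directly from Eqs. (1.17) and (1.18) and can be sketched in a few lines; the test signal below is an arbitrary mix of an even term and an odd term, chosen so the parts are recoverable by inspection:

```python
import math

def even_part(x):
    """Ev{x}(t) = (x(t) + x(-t)) / 2."""
    return lambda t: (x(t) + x(-t)) / 2

def odd_part(x):
    """Odd{x}(t) = (x(t) - x(-t)) / 2."""
    return lambda t: (x(t) - x(-t)) / 2

# Arbitrary test signal: an even term (cos) plus an odd term (3 sin)
x = lambda t: math.cos(t) + 3 * math.sin(t)
ev, od = even_part(x), odd_part(x)
# ev recovers cos(t), od recovers 3*sin(t), and ev + od rebuilds x
```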

1.5 Exponential and Sinusoidal Signals

Exponential and sinusoidal signals arise frequently in many applications and many other
signals can be constructed from them.

1.5.1 Continuous-Time Complex Exponential and Sinusoidal Signals

The CT complex exponential signal is given by

x(t) = C eat , (1.19)

where C and a are in general complex numbers.

Real exponential signals:

Figure 1.16 shows the CT real exponential signal x(t) = C e^{at}, where Figure 1.16(a) rep-
resents the exponential growth case (a > 0), as in unstable systems, and Figure 1.16(b)
represents the exponential decay case (a < 0), as in stable systems.
18 1 Signals and Systems
C > 0, a > 0 C > 0, a < 0
4.5 4.5

4 4

3.5 3.5

3 3

2.5 2.5

C eat

C eat
2 2

1.5 1.5

1 1

0.5 0.5

0 0
−2 −1.5 −1 −0.5 0 0.5 1 1.5 2 −2 −1.5 −1 −0.5 0 0.5 1 1.5 2
Time Time

(a) Exponential growing signal (b) Exponential decaying signal

Figure 1.16 The CT complex exponential signal x(t) = C eat .

Periodic complex exponential and sinusoidal signals:


If a is purely imaginary, we have
x(t) = e jω0 t . (1.20)
An important property of this signal is that it is periodic. We know that the signal x(t) is
periodic with period T if

x(t) = e jω0 t = e jω0 (t+T ) = e jω0 t e jω0 T , (1.21)

For periodicity, we must have


e jω0 T = 1, (1.22)
If ω0 = 0 in Eq. 1.21, then x(t) = 1, which is periodic for any value of T. However, if ω0 ≠ 0, then the fundamental period T0 is the smallest positive value of T:

T0 = 2π/|ω0|,    (1.23)

Thus, the signals e^{jω0 t} and e^{−jω0 t} have the same fundamental period T0.
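As a quick numerical check (a Python sketch, not from the text; ω0 = 3 rad/s is an arbitrary choice), e^{jω0 t} indeed repeats with fundamental period T0 = 2π/|ω0|:

```python
# Verify x(t + T0) = x(t) for x(t) = e^{j w0 t}, with T0 = 2*pi/|w0|.
import cmath
import math

w0 = 3.0
T0 = 2 * math.pi / abs(w0)

for t in [0.0, 0.7, 1.9, -2.3]:
    x_t = cmath.exp(1j * w0 * t)
    x_shifted = cmath.exp(1j * w0 * (t + T0))
    assert abs(x_t - x_shifted) < 1e-12      # periodicity (Eq. 1.21)

# e^{j w0 T0} itself must equal 1 (Eq. 1.22)
assert abs(cmath.exp(1j * w0 * T0) - 1) < 1e-12
```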

A signal closely related to the periodic complex exponential is the sinusoidal signal

x(t) = Acos(ω0t + ϕ ), (1.24)

where ω0 = 2π f0 is the radian frequency and has the units of radians/s, ϕ is the phase angle
in radians and f0 is the frequency in Hertz. As the complex exponential signal, the sinusoidal
signal is also a periodic signal with a fundamental period of T0 as illustrated in Figure 1.17.
Using Euler’s relation, a complex exponential can be expressed in terms of sinusoidal signals
with the same fundamental period:

e jω0 t = cos(ω0t) + jsin(ω0t), (1.25)

Similarly, a sinusoidal signal can also be expressed in terms of periodic complex exponentials
with the same fundamental period:
A cos(ω0 t + ϕ) = (A/2)[e^{j(ω0 t+ϕ)} + e^{−j(ω0 t+ϕ)}] = (A/2)e^{jϕ} e^{jω0 t} + (A/2)e^{−jϕ} e^{−jω0 t},    (1.26)

Figure 1.17 CT sinusoidal signal x(t) = A cos(ω0 t + ϕ), with amplitude A and fundamental period T0 = 2π/ω0.

A sinusoid can also be expressed in term of a complex exponential signal as

Acos(ω0t + ϕ ) = Aℜe{e j(ω0 t+ϕ ) }, (1.27)

where ℜe{·} denotes the real part of a signal and Im{·} denotes its imaginary part, so that

Asin(ω0t + ϕ ) = AIm{e j(ω0 t+ϕ ) }. (1.28)

Periodic signals, such as sinusoidal signals, provide important examples of signals with infinite total energy but finite average power. Let us calculate the total energy and average power of the complex exponential over one period:

E_period = ∫_{0}^{T0} |e^{jω0 t}|² dt = ∫_{0}^{T0} 1 dt = T0.    (1.29)

P_period = (1/T0) E_period = (1/T0) ∫_{0}^{T0} |e^{jω0 t}|² dt = (1/T0) ∫_{0}^{T0} 1 dt = 1.    (1.30)
Since there are an infinite number of periods as t ranges from −∞ to +∞, the total energy integrated over all time is infinite. However, each period of the signal looks exactly the same, so the average power is finite: the average power over one period is 1, and averaging over multiple periods always yields an average power of 1. The complex periodic exponential signal therefore has finite average power equal to
P∞ = lim_{T→∞} (1/2T) ∫_{−T}^{T} |e^{jω0 t}|² dt = 1.    (1.31)
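The period-energy and average-power results above can be approximated with a simple Riemann sum. In this Python sketch (illustrative only; ω0 and the number of sample points are arbitrary choices), the numerical values match Eqs. 1.29 and 1.30:

```python
# Riemann-sum check: E_period = T0 and P_period = 1 for x(t) = e^{j w0 t},
# since |e^{j w0 t}|^2 = 1 at every instant.
import cmath
import math

w0 = 2.0
T0 = 2 * math.pi / w0
N = 100_000                 # number of sample points over one period
dt = T0 / N

energy = sum(abs(cmath.exp(1j * w0 * (k * dt))) ** 2 * dt for k in range(N))
power = energy / T0

assert abs(energy - T0) < 1e-9   # E_period = T0   (Eq. 1.29)
assert abs(power - 1.0) < 1e-9   # P_period = 1    (Eq. 1.30)
```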

General complex Exponential signals:


Consider a complex exponential Ceat , where C = |C|e jθ is expressed in polar and a = r + jω0
is expressed in rectangular form. Then

Ceat = |C|e jθ e(r+ jω0 )t = |C|ert e j(ω0 t+θ ) . (1.32)

Using Euler’s formula, we can expand this further as



Ceat = |C|ert cos(ω0t + θ ) + j|C|ert sin(ω0t + θ ). (1.33)

Thus, for r = 0, the real and imaginary parts of a complex exponential are purely sinusoidal. For r > 0, the sinusoidal signals are multiplied by a growing exponential, as shown in Figure 1.18(a); such signals arise in unstable systems. For r < 0, the sinusoidal signals are multiplied by a decaying exponential and are referred to as damped sinusoids, as shown in Figure 1.18(b); such signals arise in stable systems, e.g. RLC circuits or mass-spring-friction systems, where the energy is dissipated by the resistors or friction, respectively.


Figure 1.18 (a) Growing sinusoidal signal; (b) Decaying sinusoidal signal.
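Eq. 1.33 can be checked numerically. The following Python sketch (illustrative constants only, not from the text) evaluates the real part of C e^{at} for a damped case r < 0 and confirms that it matches |C| e^{rt} cos(ω0 t + θ) and stays inside the envelope ±|C| e^{rt}:

```python
# Damped sinusoid sketch for Eq. 1.33 with arbitrary illustration values:
# |C| = 2, theta = 0.5 rad, r = -0.3 (decay), w0 = 4 rad/s.
import cmath
import math

C = 2.0 * cmath.exp(1j * 0.5)    # C = |C| e^{j theta}
r, w0 = -0.3, 4.0
a = complex(r, w0)               # a = r + j w0

for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    val = (C * cmath.exp(a * t)).real
    expected = 2.0 * math.exp(r * t) * math.cos(w0 * t + 0.5)
    assert abs(val - expected) < 1e-9                 # Eq. 1.33, real part
    assert abs(val) <= 2.0 * math.exp(r * t) + 1e-9   # inside the envelope
```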

1.6 The Unit Impulse and Unit Step Functions

The unit step and unit impulse functions in continuous and discrete time are considerably important in signal and system analysis. Furthermore, the unit impulse function is one of the most useful functions in all of applied mathematics and in the study of linear systems; it can be used to sift out the value of another function at a specific point.

1.6.1 The Continuous-Time Unit Step and Unit Impulse Functions

The CT unit step function u(t) is shown in Figure 1.19 and given by
u(t) = {1, t ≥ 0; 0, t < 0,    (1.34)

The CT unit step function is the running integral of the unit impulse (Dirac delta) function δ(τ):

u(t) = ∫_{−∞}^{t} δ(τ)dτ.    (1.35)


Figure 1.19 CT unit step function.

The CT unit impulse δ(t) can also be considered as the first derivative of the CT unit step function,

δ(t) = du(t)/dt.    (1.36)

Since u(t) is discontinuous at t = 0, it is formally not differentiable. The derivative can be interpreted, however, by considering an approximation to the unit step, u∆(t), as illustrated in Figure 1.20, which rises from 0 to 1 over a short time interval of length ∆.


Figure 1.20 (a) Continuous approximation to the unit step u△ (t); (b) Derivative of u△ (t).

u(t) has a discontinuity at t = 0, so the unit impulse is a generalized function. Let

δε(t) = {1/ε, 0 < t < ε; 0, otherwise,    (1.37)

An ideal unit impulse function is a function that is zero everywhere and infinitely high at the
origin. However, the area of the impulse is finite. Graphically, δ (t) is represented by an arrow
pointing to infinity at t = 0. Even though we use an arrow in the graph, this is not a vector.
The arrow represents the fact that the function height is infinity at t = 0. We draw "1" next to the arrow to represent that the area of the impulse is one, as shown in Figure 1.21.

Properties of unit impulse function:


• From the previous section, δ(t) has a spike at t = 0:

δ(t) = {∞, if t = 0; 0, if t ≠ 0,

• Because δ(t) is the limit of graphs of area 1, the area under its graph is 1 (Figure 1.21 shows the CT unit impulse). More precisely:

∫_{a}^{b} δ(t)dt = {1, a < 0 < b; 0, otherwise,

• For any continuous function x(t) we have

δ (t)x(t) = δ (t)x(0)

and
∫_{a}^{b} δ(t)x(t)dt = {x(0), a < 0 < b; 0, otherwise,
The first statement follows because δ(t) = 0 everywhere except at t = 0, and the second statement follows from the first statement together with the unit-area property above.
• We can place the delta function over any value of t: δ (t − τ ) = 0 everywhere but at t = τ .
Its total area remains 1 and the spike is shifted to be over t = τ ; and we have

δ (t − τ )x(t) = δ (t − τ )x(τ )

∫_{a}^{b} δ(t − τ)x(t)dt = {x(τ), a < τ < b; 0, otherwise,

• δ(t) = du(t)/dt, where u(t) is the unit step function. Because u(t) has a jump at 0, δ(t) is not a derivative in the usual sense, but is called a generalized derivative.
• In practical terms, you should think of δ (t) as any function of unit area, concentrated very
near t = 0.
• u(t) = ∫_{−∞}^{t} δ(τ)dτ, or equivalently u(t) = ∫_{0}^{∞} δ(t − τ)dτ.
• x(t)δ(t − t0) = x(t0)δ(t − t0): δ(t) is a sampling function.
• ∫_{−∞}^{∞} x(t)δ(t − t0)dt = x(t0).
• x(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ)dτ.
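The sampling (sifting) behavior listed above can be illustrated with the finite pulse δε(t) of Eq. 1.37. In this Python sketch (x(t) = cos(t) and τ = 0.8 are arbitrary choices), the approximation to x(τ) improves as ε shrinks:

```python
# Sifting property via the pulse approximation: integrate
# delta_eps(t - tau) * x(t), where delta_eps has height 1/eps on (0, eps).
# As eps -> 0 the integral approaches x(tau).
import math

def sift(x, tau, eps, n=10_000):
    # Riemann sum over the pulse support [tau, tau + eps]
    dt = eps / n
    return sum((1.0 / eps) * x(tau + k * dt) * dt for k in range(n))

x, tau = math.cos, 0.8
err_wide = abs(sift(x, tau, 0.1) - x(tau))
err_narrow = abs(sift(x, tau, 0.001) - x(tau))

assert err_narrow < err_wide   # narrower pulse -> better approximation
assert err_narrow < 1e-3       # already very close to x(tau) = cos(0.8)
```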
Consider first the ramp function r(t) shown in the upper left of Figure 1.22. It is zero for t < 0 and one for t > T, and goes linearly from 0 to 1 as time goes from 0 to T. If we let T → 0, we get the unit step function y(t) = u(t) (upper right). If we take the derivative of the ramp function r(t), we get a rectangular pulse with height 1/T (the slope of the line) and width T (lower left). This rectangular pulse has an area of one (height × width). If we take the limit as T → 0, we get a pulse of infinite height and zero width (lower right), but still with an area of one; this is the unit impulse, and we represent it by δ(t). Since we cannot show the height of the impulse on our graph, we use the vertical axis to show the area. The unit impulse has area 1, so that is the height shown.


Figure 1.22 Relation between ramp function r(t), unit step function u(t), and unit impulse function δ (t).

Example 1.13.
Simplify and evaluate the following integrals:

1. ∫_{−∞}^{∞} x(t)δ(t)dt

2. ∫_{a}^{b} x(t)δ(t)dt

3. ∫_{a}^{b} δ(t)x(t − T)dt

Solution 1.13.

1. We can simplify this integral by noting that the unit impulse δ(t) is zero everywhere except at t = 0, so we can replace δ(t)x(t) by δ(t)x(0):

∫_{−∞}^{∞} x(t)δ(t)dt = ∫_{−∞}^{∞} x(0)δ(t)dt = x(0) ∫_{−∞}^{∞} δ(t)dt = x(0).

2. More generally, and by the same reasoning as in part 1, for b > a we can write

∫_{a}^{b} x(t)δ(t)dt = ∫_{a}^{b} x(0)δ(t)dt = x(0) ∫_{a}^{b} δ(t)dt = {x(0), a < 0 < b; 0, otherwise,

3. Noting that the unit impulse δ(t) is zero everywhere except at t = 0, we have δ(t)x(t − T) = δ(t)x(0 − T) = δ(t)x(−T), so

∫_{a}^{b} δ(t)x(t − T)dt = ∫_{a}^{b} δ(t)x(−T)dt = x(−T) ∫_{a}^{b} δ(t)dt = {x(−T), a < 0 < b; 0, otherwise,

1.7 Continuous-Time and Discrete-Time Systems

There are many reasons for studying and understanding a system. For example, we may want to design a system to remove noise or echoes in an audio recording, or to sharpen an out-of-focus image. The system might have a distortion or interfering effect that we need to characterize or measure. The input signal to a transmission line (the system) is usually not identical to the output signal; however, if we know the characteristics of the transmission line, we may be able to compensate for its effect. In other cases, the system may represent some physical process that we want to study or analyze, e.g., a radar operates by comparing the transmitted and reflected signals to find the characteristics of a remote object. In terms of system theory, the problem is to find the system that changes the transmitted signal into the received signal.
A system is an entity that operates on input signals to produce output signals; it can be a device, a program, or a natural system. A system is a mathematical model of a physical process that relates the input signal to the output signal. Let x and y be the input and output signals of a system, respectively. Then the system is viewed as a transformation (mapping) of x into y. This transformation is represented by the mathematical notation

y = T x, (1.38)

where T is the operator representing some defined rule by which the input signal x at any
time is transformed into the output signal y at any time.

If the input and output signals x and y are CT signals, then the system is called a CT system as
shown in Figure 1.23(a). However, if the input and output signals are DT signals or sequences,
then the system is called a DT system as shown in Figure 1.23(b).


Figure 1.23 (a) CT system; (b) DT system.

The RC circuit is a simple example of a system as shown in Figure 1.24. The current i(t) is
proportional to the voltage drop across the resistor:


Figure 1.24 CT invariant system.

i(t) = [vs(t) − vc(t)]/R,    (1.39)
The current through the capacitor is

i(t) = C dvc(t)/dt,    (1.40)
Equating the right-hand sides of Eqs. 1.39 and 1.40, we obtain a differential equation describ-
ing the relationship between the input and output:

dvc(t)/dt + (1/RC) vc(t) = (1/RC) vs(t).    (1.41)
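Eq. 1.41 can be solved numerically. The following Python sketch uses a simple forward-Euler step (the component values and step size are arbitrary illustration choices, not from the text) and compares the result with the well-known step response vc(t) = 1 − e^{−t/RC} for vs(t) = u(t):

```python
# Forward-Euler integration of dvc/dt = (vs - vc)/(RC) with vs(t) = 1 V.
import math

R, C = 1.0e3, 1.0e-6     # 1 kOhm, 1 uF  ->  time constant RC = 1 ms
dt = 1.0e-7              # step much smaller than RC
vc, t = 0.0, 0.0

while t < 5 * R * C:     # simulate five time constants
    dvc_dt = (1.0 - vc) / (R * C)   # Eq. 1.41 rearranged, vs = 1
    vc += dvc_dt * dt
    t += dt

analytic = 1.0 - math.exp(-t / (R * C))   # classic RC step response
assert abs(vc - analytic) < 1e-3          # Euler tracks the exact answer
```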
Figure 1.25 considers some cases of subsystems interconnection with various connections to
form new systems that serve specific applications. The input-output behavior of the overall
system in terms of the subsystem descriptions can be evaluated mathematically.

1.8 Basic System Properties

In general, a system is an operator that takes a function as its input and produces a function as its output. The discussion of system properties is tentative at this point, because the notion of a system is so general that it is difficult to include all the details, and the mathematical description of a system might presume certain properties of allowable input signals. For example, the input signal to a running-integrator system must be sufficiently well behaved that the integral is defined. We can be considerably more precise when we consider specific classes of systems that admit particular types of mathematical descriptions. In the short term, the objective is mainly to establish some intuition for system properties. It is extremely helpful to classify systems in terms of formal properties, as this can greatly simplify the analysis. The following properties are considered:


Figure 1.25 Interconnection of systems. (a) A series or cascade interconnection of two systems; (b) A parallel
interconnection of two systems; (c) Combination of both series and parallel systems; (d) Feedback intercon-
nection.

1.8.1 Systems with and without Memory

A system is memoryless if its output value at any time t depends only on the input signal value at that same time. In addition, a memoryless system is always causal, but not vice versa. One particularly simple example of a memoryless system is the identity system, whose output is identical to its input, y(t) = x(t). Also, a resistor is a memoryless CT system, since the input current and output voltage in Figure 1.26 have the relationship:

v(t) = R i(t).

Figure 1.26 The relationship between the input current and output voltage drop across a resistor.

Examples of memoryless CT systems are:

y(t) = 2x(t),    y(t) = x²(t),    y(t) = t e^{x(t)}.

A capacitor is an example of a CT system with memory, as shown in Figure 1.27, since

v(t) = (1/C) ∫_{−∞}^{t} i(τ)dτ.

Figure 1.27 The relationship between the input current and output voltage drop at a capacitor.

An accumulator or summer is an example of a DT system with memory,

y[n] = Σ_{k=−∞}^{n} x[k] = Σ_{k=−∞}^{n−1} x[k] + x[n] = y[n − 1] + x[n],

or
y[n] − y[n − 1] = x[n].

Also, a delay or advance is an example of a system with memory,

y(t) = x(t − 1),

y(t) = x(t + 1).
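The accumulator above can be sketched in a few lines of Python (the input sequence is an arbitrary choice); note that the first difference w[n] = y[n] − y[n − 1] recovers the input exactly, which previews the inverse-system idea of the next subsection.

```python
# DT accumulator y[n] = y[n-1] + x[n]: a system with memory, since each
# output depends on the entire past of the input.
x = [3, -1, 4, 1, -5, 9]

y, acc = [], 0
for v in x:                  # running sum: y[n] = sum of x[k] for k <= n
    acc += v
    y.append(acc)

# First difference undoes the accumulation (with y[-1] taken as 0).
w = [y[0]] + [y[n] - y[n - 1] for n in range(1, len(y))]

assert y == [3, 2, 6, 7, 2, 11]
assert w == x                # y[n] - y[n-1] = x[n]
```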

Example 1.14.
Determine whether each of the following systems is memoryless:
1. y(t) = x(t) + 5

2. y(t) = x(t + 5)

3. y(t) = (t + 5)x(t)

4. y(t) = [x(t + 5)]2

5. y(t) = x(5)

6. y(t) = x(2t)
7. y(t) = {x(t) + x(t − 2), t ≥ 0; 0, t < 0,

Solution 1.14.

1. y(t) = x(t) + 5 ⇒ The system is memoryless, because adding the constant 5 does not involve the input at any other time; the output at time t depends only on x(t).

2. y(t) = x(t + 5) ⇒ The system has memory, because of the time shift: the output depends on the input at a different time. Let t = 0; then y(0) = x(5).

3. y(t) = (t + 5)x(t) ⇒ The system is memoryless, because (t + 5) is only a time-varying scale factor; the output at time t depends only on the input at the same time t.

4. y(t) = [x(t + 5)]² ⇒ The system has memory, because of the time shift, which makes the output depend on a future input value.

5. y(t) = x(5) ⇒ The system has memory, because the output depends on the input at the fixed time t = 5, regardless of the current time. Let t = 0; then y(0) = x(5).

6. y(t) = x(2t) ⇒ The system has memory, because of the time scaling. Let t = 1; then y(1) = x(2), a value of the input at a different time.

7. y(t) = {x(t) + x(t − 2), t ≥ 0; 0, t < 0,
The system has memory, because the output y(t) depends on x(t − 2). Let t = 3; then y(3) = x(3) + x(1). Since y(3) depends on x(1), the system has memory.

1.8.2 Invertibility and Inverse System

For an invertible system, distinct inputs lead to distinct outputs. If a system is invertible, then an inverse system exists which, when cascaded with the original system, reproduces the original input of the first system, as shown in Figure 1.28. The systems with input x and output y:
y(t) = 2x(t),    y[n] = Σ_{k=−∞}^{n} x[k],

are invertible, and the inverse systems are

w(t) = y(t)/2,    w[n] = y[n] − y[n − 1].

x(t) → y(t) = 2x(t) → w(t) = 0.5 y(t) = x(t);    x[n] → y[n] = Σ_{k=−∞}^{n} x[k] → w[n] = y[n] − y[n − 1] = x[n]

Figure 1.28 Concept of an inverse system.

An encoder in communication systems is an example of an invertible system, where the input to the encoder is exactly recoverable from the output. The running-integrator system is invertible by the fundamental theorem of calculus:

y(t) = ∫_{−∞}^{t} x(τ)dτ,    (d/dt) ∫_{−∞}^{t} x(τ)dτ = x(t).

Determining the invertibility of a given system can be quite difficult. Perhaps the easiest situation is to show that a system is not invertible by exhibiting two legitimately different input signals that yield the same output signal. For example, y(t) = x²(t) is not invertible because the sign of the input cannot be determined from knowledge of the output: the constant input signals x(t) = 1 and x(t) = −1, for all t, yield identical output signals. Also, y(t) = 0 is not invertible because the system produces the zero output for any input. As another example, the system y(t) = dx(t)/dt is not invertible, since the input x(t) + 1 yields the same output signal as x(t).

Example 1.15.
Determine whether the following systems are invertible:
1. y(t) = x(t + 2)

2. y(t) = x2 (t)

3. y(t) = x(2t)

Solution 1.15.

1. y(t) = x(t + 2) ⇒ The system is invertible: the inverse system is z(t) = y(t − 2), since z(t) = y(t − 2) = x((t − 2) + 2) = x(t).

2. y(t) = x²(t) ⇒ The system is non-invertible, because the output determines only the magnitude of the input, not its sign.

3. y(t) = x(2t) ⇒ The system is invertible: the inverse system is z(t) = y(t/2), since z(t) = y(t/2) = x(2 · (t/2)) = x(t).

1.8.3 Causality

A system is causal if the output does not anticipate future values of the input, i.e., if the output at any time depends only on the values of the input at the present time and in the past: the output at t = t0 depends only on the input for t ≤ t0. All memoryless systems are causal, because the output responds only to the current value of the input; however, being causal does not necessarily imply that a system is memoryless. All physical real-time systems are causal, because they cannot anticipate the future. Causality does not apply to spatially varying signals or to systems processing recorded signals, e.g., a taped sports game vs. a live broadcast. Non-causal systems are useful in various applications where the independent variable is not time.

The RC circuit in Figure 1.24 is causal, since the capacitor voltage responds only to the
present and past values of the source voltage. Also, the motion of a car is causal, since it does
not anticipate future actions of the driver.

Example 1.16.
Determine whether the following systems are causal:

1. y[n] = x[n] − x[n + 1]

2. y(t) = x(t + 1)

3. y[n] = x[−n]

4. y(t) = x(t)cos(t + 1)

5. y(t0) = ∫_{−∞}^{t0+a} x(t)dt

6. y(t) = {x(t) + x(t − 2), t ≥ 0; 0, t < 0,

Solution 1.16.

1. y[n] = x[n] − x[n + 1] ⇒ The system is non-causal: when n = 0, y[0] = x[0] − x[1], so the output at this time depends on a future value of the input.

2. y(t) = x(t + 1) ⇒ The system is non-causal: when t = 0, y(0) = x(1), so the output at this time depends on a future value of the input.

3. y[n] = x[−n] ⇒ The system is non-causal: when n < 0, e.g. n = −1, we see that y[−1] = x[1], so the output at this time depends on a future value of the input.

4. y(t) = x(t)cos(t + 1) ⇒ The system is causal, because the output at any time equals the input at the same time multiplied by a number that varies with time, cos(t + 1).

5. y(t0) = ∫_{−∞}^{t0+a} x(t)dt ⇒ The system may be non-causal, since we do not know the value of a. If a > 0, the output depends on future values of the input.

6. y(t) = {x(t) + x(t − 2), t ≥ 0; 0, t < 0,
The system is causal. When t < 0, the output does not depend on the input. When t ≥ 0, the output y(t) depends on the current input x(t) and the past input x(t − 2). The system is causal because the output never depends on a future input.
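The causality test used in these solutions (change only future input values and check whether past outputs change) can be sketched numerically. In this illustrative Python example, the backward difference is causal while the forward difference of part 1 is not:

```python
# Causality probe: two inputs that agree for n <= 2 and differ only at the
# future sample n = 3. A causal system's output at n = 2 must be identical
# for both inputs; a non-causal system's output may differ.
def backward_diff(x, n):       # causal:      y[n] = x[n] - x[n-1]
    return x[n] - (x[n - 1] if n >= 1 else 0)

def forward_diff(x, n):        # non-causal:  y[n] = x[n] - x[n+1]
    return x[n] - x[n + 1]

x1 = [1, 2, 3, 4]
x2 = [1, 2, 3, 9]              # differs from x1 only at n = 3

assert backward_diff(x1, 2) == backward_diff(x2, 2)  # causal: unaffected
assert forward_diff(x1, 2) != forward_diff(x2, 2)    # non-causal: affected
```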

1.8.4 Stability

For a system to be stable, small inputs must lead to responses that do not diverge (e.g., a pendulum vs. an inverted pendulum). If the input to a stable system is bounded, then the output must also be bounded. A CT system is bounded-input bounded-output (BIBO) stable if for every bounded input function x(t), the output function y(t) = T x(t) is also bounded.

Example 1.17.
Determine whether the following systems are stable:

1. y(t) = tx(t)

2. y(t) = e^{x(t)}

3. Capacitor: i(t) = C dVc(t)/dt

4. y(t) = {x(t) + x(t − 2), t ≥ 0; 0, t < 0,

Solution 1.17.

1. y(t) = tx(t) ⇒ The system is not stable: the constant input x(t) = 1 yields the output y(t) = t, which is not bounded; no matter what finite constant we pick, y(t) will exceed it for some t.

2. y(t) = e^{x(t)} ⇒ The system is stable. Assume the input is bounded, |x(t)| < B, i.e. −B < x(t) < B for all t; then y(t) is bounded: e^{−B} < y(t) < e^{B}.

3. i(t) = C dVc(t)/dt ⇒ Let i(t) = B1 u(t), where B1 ≠ 0. Then

Vc(t) = (1/C) ∫_{−∞}^{t} i(τ)dτ = (1/C) ∫_{−∞}^{t} B1 u(τ)dτ = (1/C) ∫_{0}^{t} B1 dτ = (B1/C) t.

Vc(t) = (B1/C)t grows linearly with t, and as t → ∞, Vc(t) → ∞. A bounded input gives an unbounded output, so a capacitor is not BIBO stable.

4. y(t) = {x(t) + x(t − 2), t ≥ 0; 0, t < 0,
The system is stable. Assume the input is bounded: B > 0 and |x(t)| ≤ B for all t.
When t < 0, y(t) = 0, so |y(t)| = 0 < 2B.
When t ≥ 0, |y(t)| = |x(t) + x(t − 2)| ≤ |x(t)| + |x(t − 2)| ≤ B + B = 2B.
So |y(t)| ≤ 2B for all t, and y(t) is bounded. Since y(t) is bounded, the system is stable.
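Parts 1 and 2 can be illustrated numerically. This Python sketch (the candidate bounds and sample points are arbitrary choices) shows that y(t) = tx(t) outgrows any proposed bound for the bounded input x(t) = 1, while y(t) = e^{x(t)} with the bounded input x(t) = sin(t) stays within [1/e, e]:

```python
# BIBO illustration for Solution 1.17, parts 1 and 2.
import math

def y_unstable(t):
    return t * 1.0                 # y(t) = t x(t) with x(t) = 1 (bounded)

def y_stable(t):
    return math.exp(math.sin(t))   # y(t) = e^{x(t)} with |x(t)| <= 1

for M in [10.0, 1e3, 1e6]:
    assert y_unstable(M + 1) > M   # output exceeds any proposed bound M

for t in [0.0, 1.5, 100.0, -7.0]:
    assert math.exp(-1) <= y_stable(t) <= math.exp(1)   # bounded output
```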

1.8.5 Time Invariance

A system is time-invariant if its behavior does not depend on what time it is; the behavior and characteristics of the system are fixed over time. A time shift in the input must result in an identical time shift in the output signal. Mathematically, if the system output is y(t) when the input is x(t), a continuous time-invariant system will have an output of y(t − t0) when the input is x(t − t0), for all t0 and for all functions x(t); i.e., for y(t) = T x(t) it is always the case that

y(t − t0) = T x(t − t0).

If the input to a time-invariant system is periodic, then the output is periodic with the same period.

Examples of time-invariant systems:


• RC circuit is time-invariant system, since R and C do not change in time.
• y(t) = sin(x(t)).
• y(t) = 3x(t − 2).
• y(t) = x(t) + x(t − 1).
• y(t) = x2 (t).

Examples of time-variant systems:


• y(t) = 0 → The system produces zero output sequence for any input sequence.
• y(t) = tx(t) → The system depends on what time it is.
• y(t) = t + x(t − 1) → The system depends on time.
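The shift-then-system vs. system-then-shift test can be automated at sample points. This Python sketch (the test signal e^{−t²}, the delay, and the sample points are arbitrary choices) confirms that y(t) = x²(t) is time-invariant while y(t) = tx(t) is time-varying:

```python
# Compare system(delayed input) with delayed(system output) numerically.
import math

def x(t):
    return math.exp(-t * t)        # arbitrary smooth test signal

def delay(f, t0):
    return lambda t: f(t - t0)

def sys_square(f):                 # y(t) = x^2(t)
    return lambda t: f(t) ** 2

def sys_tx(f):                     # y(t) = t x(t)
    return lambda t: t * f(t)

t0 = 2.0
for t in [-1.0, 0.0, 0.5, 3.0]:
    # x^2(t): both orders agree -> time-invariant
    assert abs(sys_square(delay(x, t0))(t)
               - delay(sys_square(x), t0)(t)) < 1e-12

# t x(t): the orders disagree wherever x(t - t0) != 0 -> time-varying
lhs = sys_tx(delay(x, t0))(0.5)    # 0.5 * x(-1.5)
rhs = delay(sys_tx(x), t0)(0.5)    # (0.5 - 2) * x(-1.5)
assert lhs != rhs
```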

Example 1.18.
Determine whether the following systems are time-invariant:
1. y(t) = x(2t)

2. y(t) = tx(t)

3. y(t) = x2 (t)
4. y(t) = {x(t) + x(t − 2), t ≥ 0; 0, t < 0,

Solution 1.18.

1. y(t) = x(2t) → Consider an arbitrary input x1(t) as shown in Figure 1.29(a); the resulting output y1(t) is depicted in Figure 1.29(b),

y1 (t) = x1 (2t)

Consider a time shift delay of y1 (t) by 2 as shown in Figure 1.29(c),

y1 (t − 2) = x1 (2t − 2)

Consider a second input obtained by delaying x1 (t) by 2 as shown in Figure 1.29(d),

x2 (t) = x1 (t − 2)

We obtain the second output y2 (t) as shown in Figure 1.29(e),

y2 (t) = x2 (2t) = x1 (2t − 4)


Figure 1.29 Inputs and outputs of the system y(t) = x(2t).


Figure 1.30 Illustration block diagram of inputs and outputs of the system y(t) = x(2t).

As shown in Figures 1.29 and 1.30, y1(t − 2) ≠ y2(t), so the system is time-varying.



2. y(t) = tx(t) → Consider the input x1(t) = u(t) as shown in Figure 1.31(a); the resulting output y1(t), depicted in Figure 1.31(b), is

y1(t) = tx1(t) = t u(t)

Consider a time-shift delay of y1(t) by 2, as shown in Figure 1.31(c),

y1(t − 2) = (t − 2)x1(t − 2) = (t − 2)u(t − 2)

Consider a second input obtained by delaying x1(t) by 2, as shown in Figure 1.31(d),

x2(t) = x1(t − 2) = u(t − 2)

We obtain the second output y2(t), shown in Figure 1.31(e),

y2(t) = tx2(t) = tx1(t − 2) = t u(t − 2)

As shown in Figure 1.31, y1(t − 2) ≠ y2(t); for example, at t = 3, y1(3 − 2) = 1 while y2(3) = 3. The system is time-varying.


Figure 1.31 Inputs and outputs of the system y(t) = tx(t).

3. y(t) = x²(t) → Consider an arbitrary input x1(t); the resulting output y1(t) is given by

y1 (t) = x1 2 (t)

Consider a time shift delay of y1 (t) by 2,

y1 (t − 2) = x1 2 (t − 2)

Consider a second input obtained by delaying x1 (t) by 2,

x2 (t) = x1 (t − 2)

We obtain the second output y2 (t),

y2 (t) = x2 2 (t) = x1 2 (t − 2)

As shown in Figure 1.32, y1 (t − 2) = y2 (t), so the system is time-invariant.

Figure 1.32 Illustration block diagram of inputs and outputs of the system y(t) = x2 (t).

4. y(t) = {x(t) + x(t − 2), t ≥ 0; 0, t < 0,
Let x(t) = u(t) and v(t) = u(t) + u(t − 2). Consider an arbitrary input x1 (t) = u(t), then
resulting output y1 (t) is given by,

y1 (t) = v1 (t) = u(t) + u(t − 2)

Consider a time shift delay of y1 (t) by 2,

y1 (t − 2) = v1 (t − 2) = u(t − 2) + u(t − 4)

Consider a second input obtained by delaying x1 (t) by 2,

x2 (t) = x1 (t − 2)

We obtain the second output y2 (t),



y2 (t) = x2 (t) + x2 (t − 2) = x1 (t − 2) + x1 (t − 4) = u(t − 2) + u(t − 4)

As shown in Figure 1.33, y1 (t − 2) = y2 (t), so the system is time-invariant.


Figure 1.33 Illustration block diagram of inputs and outputs of the system y(t) = x(t) + x(t − 2).

1.8.6 Linearity

A system is called linear if it has two mathematical properties: the homogeneity (scaling) property and the additivity property. Together these make up the superposition property; if a system has both, we can conclude that the system is linear. Likewise, if we can show that a system lacks one or both properties, then the system is not linear. As illustrated in Figure 1.34, homogeneity means that a change in the input signal's amplitude results in a corresponding change in the output signal's amplitude. Therefore, a system is said to be homogeneous if an amplitude change in the input results in an identical amplitude change in the output. In mathematical terms, if an input signal x(t) results in an output signal y(t), and an input signal kx(t) results in an output signal ky(t), for any input signal and any constant k, then the system is homogeneous.
A simple resistor provides a good example of both homogeneous and nonhomogeneous systems. If the input to the system is the voltage v(t) across the resistor and the output is the current i(t) through the resistor, then the system is homogeneous: Ohm's law guarantees the homogeneity property, since if the voltage is increased or decreased, there is a corresponding increase or decrease in the current. Consider another system where the input signal is the voltage v(t) across the resistor, but the output signal is the power p(t) being dissipated in the resistor. Since power is proportional to the square of the voltage, if the input signal is increased by a factor of two, then the output signal is increased by a factor of four. This system is not homogeneous and therefore cannot be linear.
The property of additivity is illustrated in Figure 1.35. Consider a system where an input
x1 (t) produces an output of y1 (t). Further suppose that a different input x2 (t) produces another
output y2 (t). The system is said to be additive if an input of x1 (t)+x2 (t) results in an output of

Figure 1.34 Homogeneity property.

y1 (t) + y2 (t), for all possible input signals. In other words, signals added at the input produce
signals that are added at the output.


Figure 1.35 Additive property.

A good example of a non-additive circuit is the mixer stage in a radio transmitter. Two signals are present: an audio signal that contains the voice or music, and a carrier wave that can propagate through space when applied to an antenna. The two signals are added and applied to a non-linearity, such as a pn-junction diode. A third signal results from merging the two: a modulated radio wave capable of carrying the information over great distances. If the operator T in Eq. 1.38 satisfies the following two conditions, then T is called a linear operator and the system represented by T is called a linear system:

1. Additivity: given that T x1 (t) = y1 (t) and T x2 (t) = y2 (t), then

T [x1 (t) + x2 (t)] = y1 (t) + y2 (t), for any input signals x1 (t) and x2 (t), (1.42)

2. Homogeneity or scaling:

T [α x(t)] = α y(t), for any input signal x(t) and any scalar α , (1.43)

Any system that does not satisfy Eq. 1.42 and/or Eq. 1.43 is classified as a non-linear sys-
tem. Eq. 1.42 and 1.43 can be combined into a single condition to form the superposition
property as
T [α1 x1 (t) + α2 x2 (t)] = α1 y1 (t) + α2 y2 (t), (1.44)
where α1 and α2 are arbitrary scalars.
All of these properties translate easily to DT systems: one only needs to replace the parentheses by square brackets and t by n. Regardless of the time domain, it is important to note that these are input-output properties of systems. In particular, nothing is being stated about the internal workings of the system; everything is stated in terms of input signals and corresponding output signals. For a linear system, zero input leads to zero output.
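The superposition condition of Eq. 1.44 can be tested numerically at sample points. In this Python sketch (the inputs sin and cos and the scalars are arbitrary choices), y(t) = tx(t) passes while y(t) = x²(t) fails because of the cross term:

```python
# Superposition test: compare T[a1 x1 + a2 x2] against a1 T[x1] + a2 T[x2].
import math

def T_linear(f):                   # y(t) = t x(t)
    return lambda t: t * f(t)

def T_square(f):                   # y(t) = x^2(t)
    return lambda t: f(t) ** 2

x1, x2 = math.sin, math.cos
a1, a2 = 2.0, -3.0
x3 = lambda t: a1 * x1(t) + a2 * x2(t)   # combined input

t = 1.3
lin_combined = T_linear(x3)(t)
lin_super = a1 * T_linear(x1)(t) + a2 * T_linear(x2)(t)
assert abs(lin_combined - lin_super) < 1e-12   # superposition holds

sq_combined = T_square(x3)(t)
sq_super = a1 * T_square(x1)(t) + a2 * T_square(x2)(t)
assert abs(sq_combined - sq_super) > 1e-6      # cross term breaks it
```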

Example 1.19.
Determine whether the following systems are linear:
1. y(t) = x(2t)

2. y(t) = tx(t)

3. y(t) = x2 (t)
4. y(t) = {x(t) + x(t − 2), t ≥ 0; 0, t < 0,

Solution 1.19.

1. y(t) = x(2t) → Consider two arbitrary inputs x1(t) and x2(t):

x1 (t) ↔ y1 (t) = x1 (2t)


x2 (t) ↔ y2 (t) = x2 (2t)
We form the weighted sum of the two outputs:

α1 y1 (t) + α2 y2 (t) = α1 x1 (2t) + α2 x2 (2t)

Consider a third input x3 (t) obtained by combining x1 (t) and x2 (t)

x3 (t) = α1 x1 (t) + α2 x2 (t)

We obtain the third output y3 (t)



y3 (t) = x3 (2t),
= α1 x1 (2t) + α2 x2 (2t),
= α1 y1 (t) + α2 y2 (t).

Therefore, the system is linear.

2. y(t) = tx(t) → Consider two arbitrary inputs x1(t) and x2(t):

x1 (t) ↔ y1 (t) = tx1 (t)


x2 (t) ↔ y2 (t) = tx2 (t)
We form the weighted sum of the two outputs:

α1 y1 (t) + α2 y2 (t) = α1 tx1 (t) + α2 tx2 (t)

Consider a third input x3 (t) obtained by combining x1 (t) and x2 (t)

x3 (t) = α1 x1 (t) + α2 x2 (t)

We obtain the third output y3 (t)

y3 (t) = tx3 (t),


= t[α1 x1 (t) + α2 x2 (t)],
= α1 tx1 (t) + α2 tx2 (t),
= α1 y1 (t) + α2 y2 (t).

Therefore, the system is linear.

3. y(t) = x²(t) → Consider two arbitrary inputs x1(t) and x2(t):

x1(t) ↔ y1(t) = x1²(t)


x2(t) ↔ y2(t) = x2²(t)
We form the weighted sum of the two outputs

α1 y1(t) + α2 y2(t) = α1 x1²(t) + α2 x2²(t)

Consider a third input x3 (t) obtained by combining x1 (t) and x2 (t)

x3 (t) = α1 x1 (t) + α2 x2 (t)

We obtain the third output y3 (t)

y3(t) = x3²(t),
      = [α1 x1(t) + α2 x2(t)]²,
      = α1² x1²(t) + 2 α1 α2 x1(t) x2(t) + α2² x2²(t).

Since y3(t) ≠ α1 y1(t) + α2 y2(t), the system is non-linear.


4. y(t) = { x(t) + x(t − 2), t ≥ 0; 0, t < 0 }
Consider two arbitrary inputs x1(t) and x2(t)

x1 (t) ↔ y1 (t) = x1 (t) + x1 (t − 2) for t ≥ 0


x2 (t) ↔ y2 (t) = x2 (t) + x2 (t − 2) for t ≥ 0
We form the weighted sum of the two outputs

α1 y1 (t) + α2 y2 (t) = α1 [x1 (t) + x1 (t − 2)] + α2 [x2 (t) + x2 (t − 2)]

Consider a third input x3 (t) obtained by combining x1 (t) and x2 (t)

x3 (t) = α1 x1 (t) + α2 x2 (t)

We obtain the third output y3 (t)

y3 (t) = x3 (t) + x3 (t − 2),


= α1 x1 (t) + α2 x2 (t) + α1 x1 (t − 2) + α2 x2 (t − 2),
= α1 [x1 (t) + x1 (t − 2)] + α2 [x2 (t) + x2 (t − 2)],
= α1 y1 (t) + α2 y2 (t).

Therefore, the system is linear.
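The superposition tests carried out by hand above can also be automated numerically. The sketch below (an illustration; the helper name and sampling grid are assumptions, not part of the text) drives a system with random inputs and scalars and compares T[α1 x1 + α2 x2] against α1 y1 + α2 y2. Such a test can refute linearity but never prove it:

```python
import numpy as np

def is_linear(system, trials=5, tol=1e-9):
    """Numerically test superposition: T[a1*x1 + a2*x2] == a1*T[x1] + a2*T[x2].

    `system` maps a sampled signal x (and the sample times t) to the output y."""
    rng = np.random.default_rng(0)
    t = np.linspace(-5, 5, 1001)
    for _ in range(trials):
        x1, x2 = rng.standard_normal((2, t.size))
        a1, a2 = rng.standard_normal(2)
        lhs = system(a1 * x1 + a2 * x2, t)
        rhs = a1 * system(x1, t) + a2 * system(x2, t)
        if not np.allclose(lhs, rhs, atol=tol):
            return False          # superposition violated
    return True

# y(t) = t x(t) is linear; y(t) = x^2(t) is not.
print(is_linear(lambda x, t: t * x))    # True
print(is_linear(lambda x, t: x ** 2))   # False
```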



Problems of Chapter 1

1.1. Determine whether the following signals are energy signals, power signals or neither
a. x(t) = { A, −1 ≤ t ≤ 1; 0, elsewhere }


b. x(t) = { t, 0 ≤ t ≤ 1; 2 − t, 1 ≤ t ≤ 2; 0, elsewhere }
c. x(t) = { √t, t > 1; 0, t ≤ 1 }

d. x(t) = A e^{at}

e. x(t) = 2 e^{−2t} u(t)

f. x(t) = 5 e^{−2|t|}

g. x(t) = (2/3) cos(ω t + θ)
h. x(t) = { 2 cos(ω t), −1 ≤ t ≤ 1; 0, elsewhere }

i. x(t) = 2 cos(ω t) + sin(2ω t)

j. x(t) = t u(t)

1.2. A CT signal x(t) is shown in Figure P1.2.

Figure P1.2 (plot of x(t): amplitude 1, with t-axis marks at −1/2, 0, 1/2)

Sketch and label each of the following functions:


a. y(t) = x(t + 2/3)

b. z(t) = x((1/2)t − 2)

c. w(t) = y(−t + 2)

d. f(t) = x(−(1/2)t − 1)

e. g(t) = u(t) − z(t + 3)

1.3. A CT signal x(t) is shown in Figure P1.3.

Figure P1.3 (plot of x(t): amplitude 1, with t-axis marks at −1, 0, 1)

Sketch and label each of the following functions:


a. y(t) = x((1/2)t)

b. z(t) = x(2t)

c. w(t) = x((1/2)t − 1/2)

d. f(t) = y(t + 1)

e. g(t) = f(−t + 1)

1.4. A signal x(t) is defined by the following function:

x(t) = { 4 − t,  3 ≤ t ≤ 4
         1,      −3 ≤ t ≤ 3
         t + 4,  −4 ≤ t ≤ −3
         0,      elsewhere
Determine the total energy of the given signal x(t).

1.5. Sketch and label the signal y(t) that is related to the signal x(t) in problem 1.4 as follows:
a. y(t) = x (8t + 4)

b. y(t) = x (8t − 4)

c. y(t) = x (4t)
d. y(t) = x((1/5)t)

1.6. Sketch and label the following signals:


a. x(t) = u(t) − u(t − 1)

b. x(t) = 2u(t) + 2u(−t)

c. x(t) = 2u(t) − 2u(−t)

d. x(t) = 3t u(t) − 3t u(−t)

e. x(t) = 3t u(t) + 3t u(−t)

f. x(t) = u(t + 1) + u(t − 1) − 2u(t)

g. x(t) = u(t) + 2u(t − 3) − 2u(t − 6) − u(t − 9)


h. x(t) = rect(t/6) + rect(t/4) + rect(t/2)

1.7. Determine whether the following CT signals are periodic. If the signals are periodic,
determine the fundamental period.
a. x(t) = cos(t + π/2)

b. x(t) = cos (t) u (t)

c. x(t) = sin(2π t)

d. x(t) = sin²(t)

e. x(t) = cos²(t)

f. x(t) = e^{j[(π/4)t − 2]}


g. x(t) = cos((2π/3)t) + sin((π/2)t)

h. x(t) = 3 cos (10t) + 7 sin (4π t)


i. x(t) = cos((3/2)t) sin((2π/5)t)

j. x(t) = cos(2t) + sin(√2 t)

k. x(t) = Ev {sin (4π t)}



1.8. Determine whether the following DT signals are periodic. If the signals are periodic,
determine the fundamental period.
a. x[n] = cos((1/4)n)

b. x[n] = 5 cos((2/3)n + 4/3)

c. x[n] = cos((π/4)n)

d. x[n] = cos²((π/4)n)

e. x[n] = sin((2π/3)n + 2)

f. x[n] = cos((π/4)n) + sin((π/3)n)

g. x[n] = cos((1/4)n) + sin((π/3)n)

h. x[n] = cos((1/4)n) + sin((1/3)n)

i. x[n] = e^{j(π/2)n}

1.9. Determine whether the following systems are memoryless, invertible, causal, stable,
time-invariant, and linear. The systems have the input-output relation given by
a. y(t) = x(t + 2) sin (ω t + 2)

b. y(t) = x(−t)

c. y(t) = t x(t + 2)

d. y(t) = x²(t)

e. y(t) = x(t) cos(ω t)

f. y[n] = x²[n + 2]

g. y[n] = x[n − 1]

h. y[n] = n x[n]

1.10. Consider the following statements about causality:
a. Show that a CT linear system is causal if the following condition is satisfied: for any input x(t) and any time t0, if x(t) = 0 for t ≤ t0, then the output y(t) = 0 for t ≤ t0.

b. Find a nonlinear system that is causal but does not satisfy this condition.

c. Find a nonlinear system that is not causal but satisfies this condition.

References

1. P. Gupta and P.R. Kumar. “The capacity of wireless networks”, IEEE Transactions on Information Theory,
vol. 46, no. 2, pp. 388-404, March 2000.
2. A. Papoulis and S.U. Pillai. Probability, Random Variables, and Stochastic Processes. 4th Edition, New
York, USA: McGraw-Hill, 2002.
3. A.V. Oppenheim, A.S. Willsky, S.H. Nawab. Signals and Systems. 2nd Edition, New Jersey, USA: Prentice
Hall, 1997.
4. S. Haykin and B. Van Veen. Signals and Systems. 2nd Edition, New Jersey, USA: John Wiley & Sons,
2003.
Chapter 2
Linear Time Invariant (LTI) Systems

Abstract:
A system is linear time-invariant (LTI) if it satisfies the properties of linearity and time-invariance. LTI systems are the easiest and most tractable systems to deal with from the analysis and design perspectives, and they play a fundamental role in signal and system analysis because many physical phenomena can be modeled by them. An LTI system is completely characterized by its unit-impulse response h(t), from which the output y(t) for any input can be computed.

2.1 Introduction

The defining properties of any LTI system are linearity and time-invariance. If both properties hold, the system is an LTI system. In this chapter, we apply the properties of linearity and time-invariance to develop the fundamental input-output relationship, which will be described in terms of the convolution operation. The importance of the convolution operation in LTI systems stems from the fact that we can find the system response y(t) to any input signal x(t) given knowledge of the unit-impulse response h(t) of the LTI system.

Linearity:
Linearity means that the relationship between the input and the output of the system is linear
and the system must satisfy the two properties as shown in Figure 2.1.

Figure 2.1 CT linear system.


Additive:
y1(t) = T[x1(t)], y2(t) = T[x2(t)] ⇒ T[x1(t) + x2(t)] = y1(t) + y2(t).
Scaling:
y(t) = T[x(t)] ⇒ T[a x(t)] = a y(t).
Superposition:
a1 y1(t) + a2 y2(t) = T[a1 x1(t) + a2 x2(t)]. (2.1)

Time-Invariant:
Time-invariant indicates that the behavior and characteristics of the system are fixed over
time. Any time shift in the input results in the same shift in the output as shown in Figure 2.2.

Figure 2.2 CT invariant system.

y(t) = T[x(t)] ⇒ T[x(t − t0)] = y(t − t0). (2.2)

Example 2.1.
The following signals are the input x(t) and the output y(t) of a CT LTI system.

Figure 2.3 CT LTI system for example 2.1 (input x(t): amplitude 1, t-axis marks at −1, 0, 1; output y(t): amplitude 2, t-axis marks at −1, 0, 1, 2, 3).

Sketch the outputs to the following inputs.


a. x(t + 1)

b. x(t − 1)

c. (1/2) x(t)

d. x(2t)

Solution 2.1.

a. Since the system is time-invariant, the output is y(t + 1).

b. Since the system is time-invariant, the output is y(t − 1).

c. Since the system is linear, the output is (1/2) y(t).

d. Neither linearity nor time-invariance relates the response to a time-scaled input to y(t): x(2t) is not, in general, a linear combination of shifted and scaled copies of x(t), so the output to x(2t) cannot be determined from y(t) alone (and is not y(2t) in general).
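The time-invariance reasoning in parts (a) and (b) can be illustrated numerically with any concrete LTI system; here a 3-point moving average is assumed purely for illustration. Delaying the input by two samples delays the output by the same two samples:

```python
import numpy as np

h = np.array([1/3, 1/3, 1/3])            # assumed impulse response (moving average)
x = np.array([0., 1., 2., 1., 0., 0., 0., 0.])

def T(x):
    """Causal FIR filtering: y[n] = sum_k h[k] x[n-k], truncated to len(x)."""
    return np.convolve(x, h)[:x.size]

y = T(x)
x_shift = np.roll(x, 2)                  # x[n - 2]; trailing zeros roll in harmlessly
y_shift = T(x_shift)
print(np.allclose(y_shift, np.roll(y, 2)))   # True: shifted input -> shifted output
```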

2.2 CT LTI Systems: Convolution Integral

The response y(t) of a CT LTI system can be computed by convolving the unit-impulse response h(t) of the system with the input signal x(t) using the convolution integral. Convolution is one of the most frequently applied operations describing the input-output relationship of an LTI system. A good example of LTI systems is the class of electrical circuits made up of resistors, capacitors, and inductors.

2.2.1 Representation of CT Signals in Terms of Impulses

The impulse response h(t) of a CT LTI system (represented by T) is defined to be the re-
sponse of the system when the input is δ (t)

h(t) = T {δ (t)} . (2.3)



An input signal x(t) can be expressed as a linear combination of continuous impulses using the sifting property; that is, we can write the signal x(t) as an integral of shifted impulses δ(t − τ) weighted by x(τ):
x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ. (2.4)

Since the system is linear, the response y(t) of the system to an arbitrary input x(t) can be
expressed as

y(t) = T {x(t)},
     = T {∫_{−∞}^{∞} x(τ) δ(t − τ) dτ},
     = ∫_{−∞}^{∞} x(τ) T {δ(t − τ)} dτ. (2.5)

Since the system is time-invariant, we have

h(t − τ ) = T {δ (t − τ )} . (2.6)

Substituting Eq. 2.6 into Eq. 2.5, we obtain

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ, (2.7)

Equation 2.7 indicates that a CT LTI system is completely characterized by its impulse response h(t).

We can obtain Eq. 2.4 using the sampling property of the impulse function. If we consider t fixed and τ as the time variable, then we have
x(τ ) δ (t − τ ) = x(τ ) δ [−(τ − t)] = x(t) δ (t − τ ),

hence
∫_{−∞}^{∞} x(τ) δ(t − τ) dτ = ∫_{−∞}^{∞} x(t) δ(t − τ) dτ = x(t) ∫_{−∞}^{∞} δ(t − τ) dτ = x(t).

2.2.2 Representation of an LTI system: Convolution Integral

Any system that can be described by a convolution is an LTI system. Furthermore, the char-
acterizing signal h(t) is the unit-impulse response of the system, as is easily verified by a
shifting calculation: if x(t) = δ (t) as shown in Figure 2.4, then
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{∞} δ(τ) h(t − τ) dτ = h(t). (2.8)

Defining the response of the system to a shifted unit impulse as shown in Figure 2.5

h(t, τ) = T [δ(t − τ)],

Figure 2.4 The system impulse response: h(t) = T [δ (t)].

Figure 2.5 Shifted unit impulse: h(t, τ ) = T [δ (t − τ )].

Since the system is linear as shown in Figure 2.6, then

T [a1 x1 (t) + a2 x2 (t)] = a1 y1 (t) + a2 y2 (t),

where y1(t) = T[x1(t)] and y2(t) = T[x2(t)]. Therefore,

y(t) = T {x(t)},
     = T {∫_{−∞}^{∞} x(τ) δ(t − τ) dτ},
     = ∫_{−∞}^{∞} x(τ) T {δ(t − τ)} dτ   (due to linearity),
     = ∫_{−∞}^{∞} x(τ) h(t, τ) dτ. (2.9)

Figure 2.6 Linear system: T [a1 x1 (t) + a2 x2 (t)] = a1 y1 (t) + a2 y2 (t).

Since the system is also time-invariant, T[δ(t − τ)] = h(t, τ) = h(t − τ), because we had T[δ(t)] = h(t). Therefore, the output of any CT LTI system is the convolution of the input x(t) with the impulse response h(t) of the system, as illustrated in Figure 2.7.
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = x(t) ∗ h(t), (2.10)


Figure 2.7 Convolution integral representation of an LTI system: y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = x(t) ∗ h(t).

The notation h(t − τ ) in Eq. 2.10 means that the function h(τ ) is flipped and shifted across
the function x(τ ). Here are the steps that will greatly simplify doing convolutions:
1. First, the impulse response h(τ) is time-reversed (reflected about the origin) to obtain h(−τ) and then shifted by t to form h(t − τ) = h[−(τ − t)], which is a function of τ with parameter t. Note that the integration variable of h(t − τ) is τ; the variable t acts as the shift parameter, i.e., the function h(−τ) is shifted by an amount t.

2. Next, fix t and multiply x(τ ) with h(t − τ ) for all values of τ with t fixed at some value.

3. The product x(τ)h(t − τ) is integrated over all τ to produce a single output value y(t), which depends on t. Remember that τ is the integration variable and that t is treated like a constant when doing the integral.

4. Steps 1 to 3 are repeated for all values of t to produce the entire output y(t), it usually
falls out that there are only several regions of interest and the rest of y(t) is zero.
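The four steps above amount to a Riemann-sum approximation of the convolution integral, y(t) ≈ dt · Σ_k x(k·dt) h(t − k·dt). A minimal Python sketch (the rectangular pulse is an arbitrary illustrative choice; np.convolve performs the flip, shift, multiply, and sum, and scaling by dt turns the sum into an approximate integral):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 4, dt)
rect = np.where(t < 1, 1.0, 0.0)            # u(t) - u(t - 1)

# Discrete convolution scaled by dt approximates the convolution integral.
y = np.convolve(rect, rect)[:t.size] * dt   # rect * rect = triangle on [0, 2]

y_peak = y[np.argmin(np.abs(t - 1.0))]      # triangle peaks at t = 1 with height 1
print(abs(y_peak - 1.0) < 0.01)             # True
```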

Example 2.2.
Convolve the following signals
a. x(t) = u(t) − u(t − 1); h(t) = e^{−t} u(t)

Figure 2.8 x(t) = u(t) − u(t − 1); h(t) = e^{−t} u(t).

b. x(t) = t[u(t) − u(t − 1)]; h(t) = u(t + 1) − u(t − 1)



Figure 2.9 x(t) = t[u(t) − u(t − 1)]; h(t) = u(t + 1) − u(t − 1).

c. x(t) = e^{−at} u(t); h(t) = u(t)

Figure 2.10 x(t) = e−at u(t); h(t) = u(t).

d. x(t) = e^{at} u(−t); h(t) = e^{−at} u(t)

Figure 2.11 x(t) = eat u(−t); h(t) = e−at u(t).

Solution 2.2.

a. The function x(τ) is time-reversed and then shifted by t across the impulse response h(τ) to obtain x(t − τ), as shown in Figure 2.12.
For t < 0, there is no overlap between the two functions; therefore, y(t) = 0.
Shift x(t − τ) towards the right so that it partially overlaps with h(τ). For t ≥ 0 and t − 1 < 0, i.e. 0 ≤ t < 1:

y(t) = x(t) ∗ h(t),
     = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ = ∫_{0}^{t} (1) e^{−τ} dτ,
     = −e^{−τ} |_{0}^{t} = 1 − e^{−t}.

Shift x(t − τ) towards the right so that it completely overlaps with h(τ). For t − 1 ≥ 0, i.e. t ≥ 1:

y(t) = x(t) ∗ h(t),
     = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ = ∫_{t−1}^{t} (1) e^{−τ} dτ,
     = −e^{−τ} |_{t−1}^{t} = −e^{−t} + e^{1−t} = e^{−t}(e − 1).

Figure 2.12 Convolution for example 2.2a.

The output y(t) for all t is

y(t) = { 0,             t < 0
         1 − e^{−t},    0 ≤ t < 1
         e^{−t}(e − 1), t ≥ 1
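As a cross-check (not part of the original text), the piecewise answer above can be verified numerically by sampling x(t) = u(t) − u(t − 1) and h(t) = e^{−t} u(t) on a fine grid:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 6, dt)
x = np.where(t < 1, 1.0, 0.0)        # u(t) - u(t - 1)
h = np.exp(-t)                       # e^{-t} u(t) on t >= 0

# Riemann-sum approximation of the convolution integral.
y_num = np.convolve(x, h)[:t.size] * dt
y_ref = np.where(t < 1, 1 - np.exp(-t), np.exp(-t) * (np.e - 1))

err = np.max(np.abs(y_num - y_ref))
print(err < 5e-3)   # True: agreement to within the grid step
```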

b. The impulse response h(τ) is time-reversed (it is the simpler and symmetric of the two) and then shifted by t across the function x(τ) to obtain h(t − τ), as shown in Figure 2.13. The convolution in this example can be divided into 5 parts.
For t + 1 < 0, i.e. t < −1, there is no overlap between the two functions; therefore, y(t) = 0.
Shift h(t − τ) towards the right so that it partially overlaps with x(τ). For t + 1 ≥ 0 and t < 0, i.e. −1 ≤ t < 0:

y(t) = x(t) ∗ h(t),
     = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{0}^{t+1} (1) τ dτ,
     = τ²/2 |_{0}^{t+1} = (t + 1)²/2.

Shift h(t − τ) towards the right so that it completely overlaps with x(τ). For t − 1 ≤ 0 and t ≥ 0, i.e. 0 ≤ t ≤ 1:

y(t) = x(t) ∗ h(t),
     = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{0}^{1} (1) τ dτ,
     = τ²/2 |_{0}^{1} = 1/2.

Shift h(t − τ) further towards the right so that it partially overlaps with x(τ). For t − 1 ≤ 1 and t − 1 > 0, i.e. 1 < t ≤ 2:

y(t) = x(t) ∗ h(t),
     = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{t−1}^{1} (1) τ dτ,
     = τ²/2 |_{t−1}^{1} = 1/2 − (t − 1)²/2.

For t − 1 ≥ 1, i.e. t ≥ 2, there is no overlap between the two functions; therefore, y(t) = 0.

Figure 2.13 Convolution for example 2.2b.

The output y(t) for all t is

y(t) = { 0,                 t < −1
         (t + 1)²/2,        −1 ≤ t < 0
         1/2,               0 ≤ t ≤ 1
         1/2 − (t − 1)²/2,  1 < t ≤ 2
         0,                 t ≥ 2

c. The impulse response h(τ) is time-reversed and then shifted by t across the function x(τ) to obtain h(t − τ), as shown in Figure 2.14. The convolution in this example can be divided into 2 parts.
For t < 0, there is no overlap between the two functions; therefore, y(t) = 0.
Shift h(t − τ) towards the right so that it overlaps with x(τ). For t ≥ 0:

y(t) = x(t) ∗ h(t),
     = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{0}^{t} (1) e^{−aτ} dτ,
     = −e^{−aτ}/a |_{0}^{t} = (1/a)[1 − e^{−at}].

Figure 2.14 Convolution for example 2.2c.

The output y(t) for all t is

y(t) = { 0,                  t < 0
         (1/a)[1 − e^{−at}], t ≥ 0

d. The impulse response h(τ) is time-reversed and then shifted by t across the function x(τ) to obtain h(t − τ), as shown in Figure 2.15. The convolution in this example can be divided into 2 parts.
For t < 0, the two functions overlap partially:

y(t) = x(t) ∗ h(t),
     = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{t} e^{aτ} e^{−a(t−τ)} dτ,
     = e^{−at} ∫_{−∞}^{t} e^{2aτ} dτ = (e^{−at}/2a) e^{2aτ} |_{−∞}^{t},
     = (e^{−at}/2a) [e^{2at} − 0] = e^{at}/2a.

For t > 0, the two functions overlap partially.

y(t) = x(t) ∗ h(t),
     = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{0} e^{aτ} e^{−a(t−τ)} dτ,
     = e^{−at} ∫_{−∞}^{0} e^{2aτ} dτ = (e^{−at}/2a) e^{2aτ} |_{−∞}^{0},
     = (e^{−at}/2a) [1 − 0] = e^{−at}/2a.

Figure 2.15 Convolution for example 2.2d.

The output y(t) for all t is

y(t) = { e^{at}/2a,   t ≤ 0
         e^{−at}/2a,  t ≥ 0

that is, y(t) = (1/2a) e^{−a|t|}.

2.3 Properties of LTI Systems

The input-output behavior of a CT LTI system is described by its impulse response h(t) via
the convolution expression. Therefore the input-output properties of an LTI system can be
characterized in terms of properties of h(t).

2.3.1 Commutative Property of LTI Systems

The commutative property of convolution states that changing the order of the operands does not affect the system response, as shown in Figure 2.16. In other words, the distinction between the impulse response and the signal is of no mathematical consequence in the context of convolution [1].

y(t) = x(t) ∗ h(t) = h(t) ∗ x(t),
     = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ. (2.11)

Figure 2.16 Commutative operation of an LTI system: x(t) ∗ h(t) = h(t) ∗ x(t).

2.3.2 Distributive Property of LTI Systems

The distributive property of convolution means that summing the outputs of two systems is equivalent to a single system whose impulse response equals the sum of the impulse responses of the two individual systems; this holds because the order of different linear operations is irrelevant, as shown in Figure 2.17.

y(t) = x(t) ∗ {h1 (t) + h2 (t)} = x(t) ∗ h1 (t) + x(t) ∗ h2 (t). (2.12)

Figure 2.17 Distributive property of an LTI system: x(t) ∗ {h1 (t) + h2 (t)} = x(t) ∗ h1 (t) + x(t) ∗ h2 (t).

The distributive property of convolution can be exploited to break a complicated convolution into several simpler ones; we may use this property to express y(t) as the sum of the results of two simpler convolution problems.

y(t) = {x1 (t) + x2 (t)} ∗ h(t) = x1 (t) ∗ h(t) + x2 (t) ∗ h(t) = y1 (t) + y2 (t). (2.13)

2.3.3 Associative Property of LTI Systems

The associative property of convolution can be exploited to reorder a complicated convolution of cascaded systems into several simpler ones. For LTI systems, changing the order of subsequent convolution operations does not affect the system response, as shown in Figure 2.18.
y(t) = x(t) ∗ {h1 (t) ∗ h2 (t)} = {x(t) ∗ h1 (t)} ∗ h2 (t). (2.14)

Figure 2.18 Associative property of an LTI system: x(t) ∗ {h1 (t) ∗ h2 (t)} = {x(t) ∗ h1 (t)} ∗ h2 (t).

For non-linear systems, the order of cascaded systems generally cannot be changed. Figure 2.19 illustrates two cascaded memoryless systems: one multiplies its input by 2, and the other squares its input. If we multiply first and square second, we obtain

y(t) = 4x2 (t).

However, if we square first and multiply second, we obtain

y(t) = 2x2 (t).

Figure 2.19 Associative property is invalid in non-linear systems.

As shown in Figure 2.19, the outputs of the two cascaded systems differ if the order is changed. Thus, the ability to interchange the order of cascaded systems is a property specific to LTI systems.
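Discrete convolution obeys the same commutative, distributive, and associative identities, so all three properties can be spot-checked numerically. A quick sketch with arbitrary random example sequences (illustrative only; a numerical check is evidence, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x, h1, h2 = rng.standard_normal((3, 16))   # arbitrary finite sequences

conv = np.convolve
print(np.allclose(conv(x, h1), conv(h1, x)))                      # commutative
print(np.allclose(conv(x, h1 + h2), conv(x, h1) + conv(x, h2)))   # distributive
print(np.allclose(conv(conv(x, h1), h2), conv(x, conv(h1, h2))))  # associative
```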

Example 2.3.
Verify the following:
a. x(t) ∗ h(t) = h(t) ∗ x(t).

b. x(t) ∗ {h1 (t) + h2 (t)} = x(t) ∗ h1 (t) + x(t) ∗ h2 (t).

c. {x(t) ∗ h1 (t)} ∗ h2 (t) = x(t) ∗ {h1 (t) ∗ h2 (t)}.

Solution 2.3.

a. By definition of the commutative property of convolution, we have

x(t) ∗ h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ,

Let λ = t − τ; then

x(t) ∗ h(t) = ∫_{−∞}^{∞} x(t − λ) h(λ) dλ,
           = ∫_{−∞}^{∞} h(λ) x(t − λ) dλ = h(t) ∗ x(t).

b. By definition of the distributive property of convolution, we have

x(t) ∗ {h1(t) + h2(t)} = ∫_{−∞}^{∞} x(τ) [h1(t − τ) + h2(t − τ)] dτ,
                       = ∫_{−∞}^{∞} [x(τ) h1(t − τ) + x(τ) h2(t − τ)] dτ,
                       = ∫_{−∞}^{∞} x(τ) h1(t − τ) dτ + ∫_{−∞}^{∞} x(τ) h2(t − τ) dτ,
                       = x(t) ∗ h1(t) + x(t) ∗ h2(t).

c. By definition of the associative property of convolution, let f1(t) = x(t) ∗ h1(t) and f2(t) = h1(t) ∗ h2(t):

f1(t) = ∫_{−∞}^{∞} x(τ) h1(t − τ) dτ,

{x(t) ∗ h1(t)} ∗ h2(t) = f1(t) ∗ h2(t) = ∫_{−∞}^{∞} f1(α) h2(t − α) dα,
                       = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(τ) h1(α − τ) dτ] h2(t − α) dα,

Let λ = α − τ and interchange the order of integration; we have

{x(t) ∗ h1(t)} ∗ h2(t) = ∫_{−∞}^{∞} x(τ) [∫_{−∞}^{∞} h1(λ) h2(t − τ − λ) dλ] dτ,

Since f2(t) = ∫_{−∞}^{∞} h1(λ) h2(t − λ) dλ, we have f2(t − τ) = ∫_{−∞}^{∞} h1(λ) h2(t − τ − λ) dλ. Thus,

{x(t) ∗ h1(t)} ∗ h2(t) = ∫_{−∞}^{∞} x(τ) f2(t − τ) dτ,
                       = x(t) ∗ f2(t) = x(t) ∗ {h1(t) ∗ h2(t)}.

Example 2.4.
Consider the following interconnection of LTI systems shown in Figure 2.20.

Figure 2.20 Systems' interconnection for example 2.4.



Given the impulse responses of the system, find the overall system impulse response h(t)

h1 (t) = u(t), h2 (t) = u(t + 2) − u(t),


h3 (t) = δ (t − 2), h4 (t) = u(t − 2).

Solution 2.4.
Derive the overall impulse response in terms of the impulse response of each subsystem. First, apply the distributive property to the parallel impulse responses h1(t) and h2(t) to obtain the equivalent system h12(t), as illustrated in Figure 2.21(a).

h12 (t) = h1 (t) + h2 (t) = u(t) + u(t + 2) − u(t) = u(t + 2).

Second, apply the associative property to the series impulse responses h12 (t) and h3 (t), then
convolve to obtain the equivalent system h123 (t) as illustrated in Figure 2.21(b).

h123 (t) = h12 (t) ∗ h3 (t) = u(t + 2) ∗ δ (t − 2) = u(t).

Finally, apply the distributive property to the parallel impulse responses h123 (t) and h4 (t),
then subtract h4 (t) from h123 (t) to obtain the overall impulse response h(t) as illustrated in
Figure 2.21(c).
h(t) = h123 (t) − h4 (t) = u(t) − u(t − 2).

Figure 2.21 Equivalent system for Figure 2.20.
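The reduction above can be sketched on a sampled grid: convolving h12(t) = u(t + 2) with δ(t − 2), approximated by a narrow pulse of unit area (an assumption of the discretization), shifts the step by 2 to give h123(t) = u(t), and subtracting h4(t) = u(t − 2) yields h(t) = u(t) − u(t − 2):

```python
import numpy as np

dt = 0.01
t = np.arange(-4.0, 6.0, dt)                 # common time axis
u = lambda tt: (tt >= 0).astype(float)       # unit step on the grid

h12 = u(t + 2)                               # h1 + h2 = u(t) + u(t+2) - u(t) = u(t+2)
delta2 = np.zeros_like(t)
delta2[np.argmin(np.abs(t - 2.0))] = 1 / dt  # delta(t - 2), unit area

# Full discrete convolution lives on an axis starting at t[0] + t[0] = -8;
# slice it back onto the original axis starting at t[0] = -4.
y = np.convolve(h12, delta2) * dt
start = int(round(-t[0] / dt))
h123 = y[start:start + t.size]               # should equal u(t)

h = h123 - u(t - 2)                          # overall impulse response
expected = u(t) - u(t - 2)
print(np.allclose(h123, u(t)) and np.allclose(h, expected))   # True
```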



2.3.4 LTI system with and without memory

The output y[n] of a DT LTI system can be expressed using the commutative property of
convolution


y[n] = h[n] ∗ x[n] = ∑_{k=−∞}^{∞} h[k] x[n − k],
     = … + h[−2]x[n + 2] + h[−1]x[n + 1] + h[0]x[n]
       + h[1]x[n − 1] + h[2]x[n − 2] + …

The output y[n] of a DT LTI memoryless system at any time depends only on the input x[n] at the same time and does not depend on x[n − k] for k ≠ 0. Hence, all terms above must equal zero except h[0]x[n]. This implies that the output y[n] is given by

y[n] = Kx[n],

where K is an arbitrary constant scalar and the corresponding impulse response h[n] is given
by
h[n] = K δ [n].
Therefore, for a DT LTI memoryless system, the impulse response h[k] = 0 for k ̸= 0, whereas
for a DT LTI memory system, the impulse response h[k] ̸= 0 for k ̸= 0.

A CT LTI system is memoryless if its output y(t) at any time depends only on the value of its input x(t) at the same time. Therefore, the output does not depend on past and/or future values of the input x(t). This is true for a continuous-time system if h(t) = 0 for t ≠ 0, in which case the convolution integral reduces to the relation
y(t) = h(t) ∗ x(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ = K x(t), (2.15)

where K = h(0) is a constant scalar and the impulse response has the form

h(t) = K δ (t), (2.16)

A memoryless system performs a scalar multiplication by K on the input. Note that if K = 1 in Eq. 2.16, these systems become identity systems, with output equal to the input, y(t) = x(t), and with impulse response equal to the unit impulse, h(t) = δ(t). The convolution integral then implies that
x(t) = x(t) ∗ δ (t),
which reduces to the sifting property of the CT unit impulse
x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ.

A CT LTI system is memoryless if the impulse response h(t) = 0 for t ̸= 0 and the LTI system
has memory if h(t) ̸= 0 for t ̸= 0.

Example 2.5.
Given the impulse response of LTI systems, verify whether the following systems are mem-
oryless:
a. h[n] = 2^n u[n]

b. h[n] = 2^n u[n + 1]

c. h[n] = (−1)^n u[−n]

d. h(t) = e^{−t}

e. h(t) = e^{2|t|}

Solution 2.5.
A DT LTI system is memoryless if the impulse response h[n] = 0 for n ≠ 0; a CT LTI system is memoryless if the impulse response h(t) = 0 for t ≠ 0.
a. Let n = 1: h[1] = 2^1 u[1] ≠ 0; therefore, the system has memory, since h[n] ≠ 0 for n ≠ 0.

b. Let n = 1: h[1] = 2^1 u[2] ≠ 0; therefore, the system has memory, since h[n] ≠ 0 for n ≠ 0.

c. Let n = −1: h[−1] = (−1)^{−1} u[1] ≠ 0; therefore, the system has memory, since h[n] ≠ 0 for n ≠ 0.

d. Let t = 1: h(1) = e^{−1} ≠ 0; therefore, the system has memory, since h(t) ≠ 0 for t ≠ 0.

e. Let t = 1: h(1) = e^2 ≠ 0; therefore, the system has memory, since h(t) ≠ 0 for t ≠ 0.
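The memorylessness test used above — every sample of h except the one at the origin must vanish — can be expressed as a small helper. The window and the `origin` index convention are illustrative assumptions:

```python
import numpy as np

def is_memoryless(h, origin):
    """True iff every sample of h[n] except the one at n = 0 is zero.

    `origin` is the array index that holds n = 0 (a convention assumed here)."""
    h = np.asarray(h, dtype=float)
    mask = np.ones(h.size, dtype=bool)
    mask[origin] = False                 # ignore the sample at n = 0
    return not np.any(h[mask])

n = np.arange(-5, 6)                     # window n = -5 .. 5, origin at index 5
print(is_memoryless(3.0 * (n == 0), origin=5))        # K*delta[n] -> True
print(is_memoryless(2.0 ** n * (n >= 0), origin=5))   # 2^n u[n]   -> False
```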

2.3.5 Invertibility of LTI systems

The system S is invertible if and only if there exists an inverse system S^{−1} such that the overall series impulse response is that of an identity system. Figure 2.22 illustrates a system with impulse response h(t) in series with an inverse system with impulse response h1(t), so that the output is y(t) = x(t); the series interconnection is then identical to the identity system:

h(t) ∗ h1(t) = δ(t), (2.17)

Since the overall series impulse response equals δ(t), h1(t) is the impulse response of the inverse system. The invertibility property is used, for example, to recover the original signal x(t) at the receiving end of a communication system through a processing filter (filtering out distortion introduced by the channel) whose impulse response is designed to be the inverse of the impulse response of the communication channel.

Figure 2.22 An inverse system for CT LTI systems: h(t) ∗ h1 (t) = δ (t).

Example 2.6.
Given the impulse response of LTI systems, verify whether the following systems are invert-
ible:
a. h[n] = 2^n u[n]

b. h[n] = 2^n u[n + 1]

c. h[n] = (−1)^n u[−n]

d. h(t) = e^{−t}

e. h(t) = e^{2|t|}

Solution 2.6.
We may verify a system's invertibility by finding an inverse g[n] such that the overall series impulse response is that of an identity system: h[n] ∗ g[n] = δ[n].

Figure 2.23 The inverse system for example 2.6.

a. h[n] = 2^n u[n]

y[n] = ∑_{k=−∞}^{n} x[k] h[n − k] = ∑_{k=−∞}^{n} x[k] 2^{n−k}   Eq.(1)

Rewrite Eq.(1), separating the k = n term:


y[n] = x[n] + ∑_{k=−∞}^{n−1} x[k] 2^{n−k}.   Eq.(2)

Replace n by n − 1 in Eq.(1), i.e., a time-shift delay of y[n] by 1:


y[n − 1] = ∑_{k=−∞}^{n−1} x[k] 2^{n−k−1} = 2^{−1} ∑_{k=−∞}^{n−1} x[k] 2^{n−k}.   Eq.(3)

Substitute the value of y[n − 1] of Eq.(3) in Eq.(2) and derive the system expression

y[n] = x[n] + 2y[n − 1]. Eq.(4)

Rewriting the system expression in terms of x[n], we get the following

x[n] = y[n] − 2y[n − 1].

Therefore, the system is invertible and the inverse for x[n] is

g[n] = δ [n] − 2δ [n − 1].

To check the answer

y[n] ∗ g[n] = y[n] ∗ [δ [n] − 2δ [n − 1]] ,


= y[n] ∗ δ [n] − y[n] ∗ 2δ [n − 1],
= y[n] − 2y[n − 1],

Substitute the value of y[n] from Eq.(4)

= x[n] + 2y[n − 1] − 2y[n − 1] = x[n].

Since y[n] ∗ g[n] = x[n], this implies that the overall series of the impulse responses h[n] ∗
g[n] = δ [n], which is an identity system. You may also check the answer by proving h[n] ∗
g[n] = δ [n].

h[n] ∗ g[n] = h[n] ∗ (δ[n] − 2δ[n − 1]),
            = h[n] ∗ δ[n] − h[n] ∗ 2δ[n − 1],
            = h[n] − 2h[n − 1],
            = 2^n u[n] − 2(2)^{n−1} u[n − 1],
            = 2^n (u[n] − u[n − 1]),
            = 2^n δ[n] = 2^0 δ[n] = δ[n].
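The identity h[n] ∗ g[n] = δ[n] verified algebraically above can also be confirmed numerically on a finite window (the truncation of h[n] only affects samples beyond the window):

```python
import numpy as np

n = np.arange(0, 20)
h = 2.0 ** n                  # h[n] = 2^n u[n], truncated at n = 19
g = np.array([1.0, -2.0])     # g[n] = delta[n] - 2 delta[n-1]

d = np.convolve(h, g)         # d[n] = h[n] - 2 h[n-1]
print(d[:20])                 # unit impulse: 1 at n = 0, then zeros
```

The last sample d[20] = −2·h[19] is a pure truncation artifact of the finite window, which is why the check looks only at the first 20 samples.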

b. h[n] = 2^n u[n + 1]

y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k] = ∑_{k=−∞}^{n+1} x[k] 2^{n−k}   Eq.(1)

Rewrite Eq.(1), separating the k = n + 1 and k = n terms:


y[n] = x[n] + (1/2) x[n + 1] + ∑_{k=−∞}^{n−1} x[k] 2^{n−k}.   Eq.(2)

Time-shift (delay) y[n] by 2 in Eq.(1):

y[n − 2] = ∑_{k=−∞}^{n−1} x[k] 2^{n−k−2} = 2^{−2} ∑_{k=−∞}^{n−1} x[k] 2^{n−k}.   Eq.(3)

Substitute the value of y[n − 2] of Eq.(3) in Eq.(2) and derive the system expression

y[n] = x[n] + (1/2) x[n + 1] + 4y[n − 2].   Eq.(4)
Rewriting the system expression in terms of x[n], we get

x[n] + (1/2) x[n + 1] = y[n] − 4y[n − 2].
Therefore, the system is invertible, and the inverse g[n] satisfies the recursion

g[n] = δ[n] − 4δ[n − 2] − (1/2) g[n + 1].

c. h[n] = (−1)^n u[−n]

y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k] = ∑_{k=n}^{∞} x[k] (−1)^{n−k}   Eq.(1)

Rewrite Eq.(1), separating the k = n term:

y[n] = x[n] + ∑_{k=n+1}^{∞} x[k] (−1)^{n−k}.   Eq.(2)

Replace n by n + 1 in Eq.(1), i.e., a time-shift advance of y[n] by 1:

y[n + 1] = ∑_{k=n+1}^{∞} x[k] (−1)^{n−k+1} = − ∑_{k=n+1}^{∞} x[k] (−1)^{n−k}.   Eq.(3)

Substitute the value of y[n + 1] of Eq.(3) in Eq.(2) and derive the system expression

y[n] = x[n] − y[n + 1]. Eq.(4)

Rewriting the system expression in terms of x[n], we get the following

x[n] = y[n] + y[n + 1].

Therefore, the system is invertible and the inverse for x[n] is

g[n] = δ [n] + δ [n + 1].

To check the answer



y[n] ∗ g[n] = y[n] ∗ [δ [n] + δ [n + 1]] ,


= y[n] ∗ δ [n] + y[n] ∗ δ [n + 1],
= y[n] + y[n + 1],

Substitute the value of y[n] from Eq.(4)

= x[n] − y[n + 1] + y[n + 1] = x[n].

Since y[n] ∗ g[n] = x[n], this implies that the overall series of the impulse responses h[n] ∗
g[n] = δ [n], which is an identity system.

d. h(t) = e^{−t}

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{t} x(τ) e^{−(t−τ)} dτ = e^{−t} ∫_{−∞}^{t} x(τ) e^{τ} dτ,

so that

e^{t} y(t) = ∫_{−∞}^{t} x(τ) e^{τ} dτ,
d/dt [e^{t} y(t)] = e^{t} x(t),
x(t) = e^{−t} d/dt [e^{t} y(t)].

Hence the inverse system is w(t) = e^{−t} d/dt [e^{t} x(t)].
Therefore, the system is invertible.

e. h(t) = e^{2|t|}

2.3.6 Causality for LTI systems

The output of a causal LTI system depends only on present and/or past values of the input signal. Writing the convolution sum of a DT LTI system as

y[n] = … + h[−2]x[n + 2] + h[−1]x[n + 1] + h[0]x[n] + h[1]x[n − 1] + h[2]x[n − 2] + …

The present and past input values, x[n], x[n − 1], x[n − 2], …, are associated with k ≥ 0 in the impulse response h[k], whereas future input values, x[n + 1], x[n + 2], …, are associated with k < 0 in h[k]. The output y[n] of a causal LTI system should depend only on present and/or past values of the input signal; that is, h[k] = 0 for k < 0. Applying the causality condition, the output y[n] of a causal DT LTI system is expressed as
y[n] = ∑_{k=−∞}^{n} x[k] h[n − k] = ∑_{k=0}^{∞} h[k] x[n − k]

This illustrates that the only values of the input x[n] used to evaluate the output y[n] are those for k ≤ n. An example of a causal system is h[n] = δ[n] − δ[n − 2], because the only non-zero values of the impulse response are at n = 0 and n = 2, whereas an example of a non-causal system is h[n] = δ[n + 1] + δ[n] − δ[n − 2], because at n = −1 the impulse response h[−1] ≠ 0 due to the presence of δ[n + 1].

As for the causality property applied to signals, an input signal x[n] is called causal if

x[n] = 0, for n < 0,

and is called anti-causal if

x[n] = 0, for n ≥ 0.

When the input signal x[n] is causal, the output y[n] of a causal DT LTI system is given by

y[n] = ∑_{k=0}^{n} x[k] h[n − k] = ∑_{k=0}^{n} h[k] x[n − k]

A system is causal if its current state is not a function of future events and its output depends only on present and/or past values of the input signal. Specifically, for a CT LTI system, the requirement is that y(t) should not depend on x(τ) for τ > t. Based on the convolution integral equation, all the coefficients h(t − τ) that multiply values of x(τ) for τ > t must be zero, which means that the impulse response of a causal CT LTI system must satisfy the condition

h(t) = 0, for t < 0. (2.18)

The convolution integral then becomes

y(t) = ∫_{−∞}^{t} x(τ) h(t − τ) dτ = ∫_{0}^{∞} h(τ) x(t − τ) dτ. (2.19)

Eq. 2.19 implies that the only values of the input x(t) used to evaluate the output y(t) are those for τ ≤ t. Although causality is a property of a system, the term can also be applied to a signal: based on the causality condition, an input signal x(t) is called causal if

Causal → x(t) = 0, for t < 0,

and is called anti-causal (non-causal) if

Anti-causal → x(t) = 0, for t > 0.

When the input signal x(t) is causal, the output y(t) of a causal CT LTI system is given by

y(t) = ∫_{0}^{t} x(τ) h(t − τ) dτ = ∫_{0}^{t} h(τ) x(t − τ) dτ.
Equivalently, a system is causal if its impulse response is zero for negative time; this makes sense, as the system should not respond before the impulse is applied. A pure time shift with impulse response h(t) = δ(t − t0) is causal for t0 ≥ 0 (a delay), but non-causal for t0 < 0 (an advance, where the output anticipates future values of the input).

Example 2.7.
Given the impulse response of LTI systems, verify whether the following systems are causal:
a. h[n] = 2n u[n]

b. h[n] = 2n u[n + 1]

c. h[n] = (−1)n u[−n]

d. h(t) = e^{−t} u(t)

e. h(t) = e2|t|

Solution 2.7.
A DT LTI system is causal if the impulse response h[n] = 0 for n < 0 and for a CT LTI system,
the impulse response h(t) = 0 for t < 0.
a. Let n = −1; then h[−1] = 2^{−1} u[−1] = 0. Therefore, the system is causal, since h[n] = 0 for n < 0.

b. Let n = −1; then h[−1] = 2^{−1} u[0] = 1/2 ≠ 0. Therefore, the system is non-causal, since h[n] ≠ 0 for n < 0.

c. Let n = −1; then h[−1] = (−1)^{−1} u[1] = −1 ≠ 0. Therefore, the system is non-causal, since h[n] ≠ 0 for n < 0.

d. Let t = −1; then h(−1) = e^{1} u(−1) = 0. Therefore, the system is causal, since h(t) = 0 for t < 0.

e. Let t = −1; then h(−1) = e^{2} ≠ 0. Therefore, the system is non-causal, since h(t) ≠ 0 for t < 0.

2.3.7 Stability for LTI Systems

A system is stable if every bounded input produces a bounded output. For an LTI system, suppose the input x(t) is bounded in magnitude,

| x(t) | < B for all t, (2.20)

where B is a constant. If this input signal is applied to an LTI system with unit impulse response h(t), the magnitude of the output is bounded by:
| y(t) | = | ∫_{−∞}^{∞} h(τ) x(t − τ) dτ | ≤ ∫_{−∞}^{∞} | h(τ) | | x(t − τ) | dτ ≤ B ∫_{−∞}^{∞} | h(τ) | dτ. (2.21)

Therefore, the LTI system is BIBO stable if its impulse response h(t) is absolutely integrable,

∫_{−∞}^{∞} | h(τ) | dτ < ∞. (2.22)

The same analysis applies to a DT LTI system. If we apply a bounded input with |x[n]| < B, where B is a constant, then the output is bounded by

| y[n] | ≤ B ∑_{k=−∞}^{∞} | h[k] |.

Therefore, |y[n]| is finite if the sum of the absolute values of the impulse response is finite; that is, the LTI system is BIBO stable if its impulse response h[n] is absolutely summable:

∑_{k=−∞}^{∞} | h[k] | < ∞.
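As a quick numerical sanity check (a sketch with an arbitrary truncation horizon, not part of the text), partial sums of |h[k]| either settle toward a finite value or keep growing:

```python
import numpy as np

def abs_sum(h_of_k, k_min, k_max):
    """Partial sum of |h[k]| over k_min..k_max (a truncation of the infinite sum)."""
    k = np.arange(k_min, k_max + 1)
    return np.sum(np.abs(h_of_k(k)))

# h[n] = (1/2)^n u[n+1]: absolutely summable, sum -> 2 + 1/(1 - 1/2) = 4
stable = abs_sum(lambda k: 0.5 ** k, -1, 100)

# h[n] = 2^n u[n]: partial sums grow without bound (truncated early to avoid overflow)
unstable = abs_sum(lambda k: 2.0 ** k, 0, 50)

print(round(stable, 6))   # 4.0
print(unstable > 1e10)    # True
```

A truncated sum can only suggest divergence, but for geometric terms the trend is unambiguous.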

Example 2.8.
Given the impulse response of LTI systems, verify whether the following systems are stable:
a. h[n] = 2n u[n]
( )n
b. h[n] = 12 u[n + 1]

c. h[n] = (−1)n u[−n]

d. h(t) = e^{−t} u(t)

e. h(t) = e2|t|

Solution 2.8.
A DT LTI system is stable if its impulse response h[n] is absolutely summable and for a CT
LTI system, the impulse response h(t) is absolutely integrable.
a. h[n] = 2^n u[n]

∑_{n=−∞}^{∞} |h[n]| = ∑_{k=0}^{∞} 2^k → ∞.

Note that the summation is finite only if the following geometric series condition is satisfied:

∑_{k=0}^{∞} a^k = 1/(1 − a), |a| < 1.

Since here |a| = 2 > 1, the sum of the absolute values of the impulse response diverges, and therefore the system is unstable.
b. h[n] = (1/2)^n u[n + 1]

∑_{n=−∞}^{∞} |h[n]| = ∑_{k=−1}^{∞} (1/2)^k = 2 + ∑_{k=0}^{∞} (1/2)^k = 2 + 1/(1 − 1/2) = 4.

Since the sum of the absolute values of the impulse response is finite (equal to 4), the system is stable.

c. h[n] = (−1)^n u[−n]

∑_{n=−∞}^{∞} |h[n]| = ∑_{k=−∞}^{∞} |(−1)^k u[−k]| = ∑_{k=−∞}^{0} |(−1)^k| = ∑_{k=−∞}^{0} 1 → ∞.

Since the sum of the absolute values of the impulse response diverges, the system is unstable.

d. h(t) = e^{−t} u(t)

∫_{−∞}^{∞} |h(t)| dt = ∫_{0}^{∞} e^{−t} dt = [−e^{−t}]_{0}^{∞} = 1 < ∞.

Since the impulse response is absolutely integrable, the system is stable.

e. h(t) = e^{2|t|}

∫_{−∞}^{∞} |h(t)| dt = ∫_{−∞}^{0} e^{−2t} dt + ∫_{0}^{∞} e^{2t} dt → ∞.

Since e^{2|t|} grows without bound as |t| → ∞, the impulse response is not absolutely integrable, and therefore the system is unstable.

2.3.8 The Step Response of an LTI System

The step response s(t) of an LTI system is simply the response of the system to a unit
step input signal u(t), which conveys a lot of information about the system. Based on the
commutative property of convolution, s(t) = u(t) ∗ h(t) = h(t) ∗ u(t). Therefore, s(t) can be
viewed as the response to input h(t) of a CT LTI system with unit impulse response u(t). That
is, the step response s(t) of a CT LTI system is the running integral of its impulse response.
s(t) = h(t) ∗ u(t) = ∫_{−∞}^{∞} h(τ) u(t − τ) dτ = ∫_{−∞}^{t} h(τ) dτ. (2.23)

The step response can also be used to characterize an LTI system, since the impulse response is the first derivative of the step response,

h(t) = ds(t)/dt = s′(t). (2.24)
For a DT LTI system with impulse response h[n], the step response s[n] is given by

s[n] = h[n] ∗ u[n] = ∑_{k=−∞}^{∞} h[k] u[n − k].

Since u[n − k] = 0 for k > n and u[n − k] = 1 for k ≤ n, the step response s[n] is the running sum of the impulse response:

s[n] = ∑_{k=−∞}^{n} h[k].

The impulse response can be expressed in terms of the step response

h[n] = s[n] − s[n − 1].
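In discrete time these two relations are just a cumulative sum and a first difference, which makes them easy to check numerically. A minimal sketch, using an arbitrary FIR impulse response:

```python
import numpy as np

n = np.arange(0, 8)
h = np.where(n < 4, 1.0, 0.0)   # h[n] = u[n] - u[n-4]; h[n] = 0 for n < 0 (not stored)

s = np.cumsum(h)                # s[n] = running sum of h[k] for k <= n
h_back = np.diff(np.concatenate(([0.0], s)))   # h[n] = s[n] - s[n-1], with s[-1] = 0

print(s.tolist())               # [1.0, 2.0, 3.0, 4.0, 4.0, 4.0, 4.0, 4.0]
print(np.allclose(h_back, h))   # True
```

The prepended 0.0 supplies s[−1] = 0 so the first difference recovers h[0] correctly.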

Example 2.9.
Given the impulse response of LTI systems, find the step response of the systems
a. h[n] = (−1/2)^n u[n]

b. h(t) = e^{−2t} u(t)

Solution 2.9.
a. h[n] = (−1/2)^n u[n]

For n < 0,

s[n] = 0.

For n ≥ 0,

s[n] = ∑_{k=−∞}^{n} h[k] = ∑_{k=0}^{n} (−1/2)^k.

Using the geometric series

∑_{k=m}^{n} r^k = (r^{n+1} − r^m)/(r − 1), r ≠ 1,

we obtain

s[n] = [(−1/2)^{n+1} − (−1/2)^0] / ((−1/2) − 1) = −(2/3) [(−1/2)^{n+1} − 1] = (1/3) [(−1/2)^n + 2].

b. h(t) = e^{−2t} u(t)

s(t) = ∫_{−∞}^{t} h(τ) dτ = ∫_{0}^{t} e^{−2τ} dτ = [−(1/2) e^{−2τ}]_{0}^{t} = −(1/2) [e^{−2t} − 1] = (1/2) (1 − e^{−2t}) for t ≥ 0.
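Both closed forms above can be verified numerically (a sketch; the trapezoidal rule stands in for the running integral, and the horizons are arbitrary):

```python
import numpy as np

# DT, part (a): running sum of (-1/2)^n u[n] vs (1/3)[(-1/2)^n + 2]
n = np.arange(0, 25)
s_dt = np.cumsum((-0.5) ** n)
print(np.allclose(s_dt, (1.0 / 3.0) * ((-0.5) ** n + 2)))   # True

# CT, part (b): running integral of e^{-2 tau} u(tau) vs (1/2)(1 - e^{-2t})
t = np.linspace(0.0, 5.0, 5001)
f = np.exp(-2.0 * t)
# cumulative trapezoidal rule, with s(0) = 0
s_ct = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))
print(np.max(np.abs(s_ct - 0.5 * (1.0 - np.exp(-2.0 * t)))) < 1e-5)   # True
```

The cumulative trapezoid is built by hand here to stay self-contained across NumPy versions.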

2.4 Causal LTI Systems Described by Differential and Difference Equations

This is a class of systems for which the input and output are related through a linear constant-coefficient differential equation in CT, or a linear constant-coefficient difference equation in DT.

2.4.1 Linear Constant-Coefficient Differential Equations

An important point about causal LTI systems is that the input-output behavior of CT systems is often described by linear constant-coefficient differential equations. Consider the first-order differential equation

dy(t)/dt + ay(t) = bx(t), (2.25)

where y(t) denotes the output of the system and x(t) is the input. More generally, an Nth-order linear constant-coefficient differential equation is given by

∑_{k=0}^{N} a_k d^k y(t)/dt^k = ∑_{k=0}^{M} b_k d^k x(t)/dt^k, (2.26)

where the coefficients a_k and b_k are real constants. The order N refers to the highest derivative of y(t). Such differential equations play a central role in describing the input-output relationships of a wide variety of electrical, mechanical and chemical systems. For instance, in the RC circuit, the input x(t) = v_s(t) and the output y(t) = v_c(t) are related by the first-order constant-coefficient differential equation

dy(t)/dt + (1/RC) y(t) = (1/RC) x(t).
If N = 0, Eq. 2.26 reduces to

y(t) = (1/a_0) ∑_{k=0}^{M} b_k d^k x(t)/dt^k. (2.27)

If M ≠ 0, the system has memory, in the sense that its output depends not only on the present value of the input but also on its derivatives (and hence on past input values); if M = 0 as well, the system is memoryless and its output depends only on the present input.

Furthermore, the stability of the system depends on the coefficients a_k. Consider a first-order LTI differential equation with a_0 = 1:

dy(t)/dt − a_1 y(t) = 0, whose solution is y(t) = A e^{a_1 t}.

If a_1 > 0, the system is unstable, as its impulse response is a growing exponential function of time; whereas if a_1 < 0, the system is stable, as its impulse response is a decaying exponential function of time.

To solve the differential equation for y(t):


• We need to know the initial conditions or auxiliary conditions on the output variable and
its derivatives.

• Examine such systems and relate them to the system properties.

The solution to a given differential equation is the sum of the homogeneous solution (the solution with the input set to zero) and the particular solution (a function that satisfies the differential equation for the given input):

y(t) = y_h(t) + y_p(t).

The natural response of the system is the homogeneous solution y_h(t), obtained by setting x(t) = 0; it depends on the initial conditions. The forced response of the system is the particular solution y_p(t), obtained for the given input; it usually has the same form as the input signal.

Example 2.10.
Solve the following system

dy(t)/dt − 7y(t) = x(t),

x(t) = Ke^{5t} u(t),

where K is a real number.

Solution 2.10.
The general solution consists of the homogeneous solution and the particular solution,

y(t) = y_p(t) + y_h(t),

where the particular solution y_p(t) satisfies dy(t)/dt − 7y(t) = x(t) and the homogeneous solution y_h(t) satisfies dy(t)/dt − 7y(t) = 0.

Particular solution:
x(t) = Ke5t for t > 0,
Because of the nature of the input, y p (t) is a signal that has the same form as x(t) for t > 0

y p (t) = Ye5t for t > 0,

Substituting x(t) = Ke^{5t} and y_p(t) = Y e^{5t} into the given system, we get

5Y e^{5t} − 7Y e^{5t} = Ke^{5t} for t > 0,

−2Y e^{5t} = Ke^{5t} =⇒ Y = −K/2.

Therefore,

y_p(t) = −(K/2) e^{5t} for t > 0.

Homogeneous solution:
To determine the natural response y_h(t) of the system, we hypothesize a solution in the form of an exponential, y_h(t) = Ae^{st}, and substitute it into dy(t)/dt − 7y(t) = 0:

sAe^{st} − 7Ae^{st} = 0 =⇒ s = 7.

Therefore,

y_h(t) = Ae^{7t} for t > 0.
Combining the natural response and the forced response, we get the solution to the differential equation

y(t) = −(K/2) e^{5t} + Ae^{7t} for t > 0.

The response is not yet completely determined (the value of A is still unknown), because the initial condition on y(0) has not been specified. For causal LTI systems defined by linear constant-coefficient differential equations, the initial condition is y(0) = 0, which is called the condition of initial rest. Initial rest implies that

y(0) = −(K/2) e^0 + Ae^0 = 0 =⇒ A = K/2.
The complete solution is

y(t) = (K/2) [e^{7t} − e^{5t}] u(t).
For t < 0, since x(t) = 0, the condition of initial rest and the causality of the system imply that y(t) = 0 for t < 0.
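The closed-form answer can be cross-checked by integrating dy/dt − 7y = Ke^{5t} numerically from the initial-rest condition. A sketch using a fixed-step RK4 loop (the step size and the value K = 2 are arbitrary choices):

```python
import numpy as np

K = 2.0
f = lambda t, y: 7.0 * y + K * np.exp(5.0 * t)   # dy/dt = 7y + K e^{5t}

t, y, dt = 0.0, 0.0, 1e-4                        # initial rest: y(0) = 0
while t < 1.0 - 1e-12:                           # integrate up to t ~ 1
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    y += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

# compare against y(t) = (K/2)(e^{7t} - e^{5t}) at the final time reached
y_exact = (K / 2.0) * (np.exp(7.0 * t) - np.exp(5.0 * t))
print(abs(y - y_exact) / y_exact < 1e-8)         # True
```

Evaluating the closed form at the accumulated final t (rather than exactly 1.0) avoids any mismatch from floating-point step accumulation.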

2.4.2 Block Diagram Representations of 1st-order Systems

A block diagram interconnection is a simple and natural way to represent systems described by linear constant-coefficient difference and differential equations. For example, a causal CT system described by the first-order differential equation

dy(t)/dt + a y(t) = b x(t)

can be rewritten as

y(t) = −(1/a) dy(t)/dt + (b/a) x(t).
The block diagram representation of this CT system is shown in Figure 2.24; it involves three basic operations: addition, multiplication by a coefficient, and differentiation.
Figure 2.24 Block diagram representation of a causal CT system described by a first-order differential equation: (a) the overall system, built from the three basic operations; (b) the adder, producing x1(t) + x2(t); (c) multiplication by a coefficient a; (d) the differentiator, producing dx(t)/dt.
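In software, the same first-order system is usually realized with the integrator/feedback rearrangement dy/dt = b x(t) − a y(t) rather than with a differentiator. A minimal sketch (forward Euler; the values a = b = 2 and the unit-step input are arbitrary choices for illustration):

```python
import numpy as np

a, b = 2.0, 2.0
dt = 1e-4
t = np.arange(0.0, 3.0, dt)
x = np.ones_like(t)                     # x(t) = u(t), sampled on t >= 0

y = np.zeros_like(t)                    # initial rest: y(0) = 0
for i in range(len(t) - 1):
    dydt = b * x[i] - a * y[i]          # dy/dt + a y = b x, rearranged
    y[i + 1] = y[i] + dt * dydt         # forward Euler step

y_exact = (b / a) * (1.0 - np.exp(-a * t))   # closed-form step response
print(np.max(np.abs(y - y_exact)) < 1e-3)    # True
```

The feedback form is preferred in practice because differentiators amplify noise, while integrators smooth it.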

Problems of Chapter 2

2.1. Convolve the following:


a. x(t) = u(t) − u(t − 4); h(t) = r(t)

Figure P2.1a x(t) = u(t) − u(t − 4); h(t) = r(t)

b. x(t) = δ (t); h(t) = e−2t u(t)

Figure P2.1b x(t) = δ (t); h(t) = e−2t u(t)

c. x(t) = r(t)[u(t) − u(t − 1)]; h(t) = u(t) − u(t − 2)

Figure P2.1c x(t) = r(t)[u(t) − u(t − 1)]; h(t) = u(t) − u(t − 2)

d. x(t) = e|t| [u(t + 1) − u(t − 1)]; h(t) = 2u(t) − 2u(t − 3)



Figure P2.1d x(t) = e|t| [u(t + 1) − u(t − 1)]; h(t) = 2u(t) − 2u(t − 3)

e. x(t) = u(t) − u(t − 4); h(t) = 2e−at u(t)

Figure P2.1e x(t) = u(t) − u(t − 4); h(t) = 2e−at u(t)


f. x(t) = { −2, 1 < t < 2 ; 2, 2 < t < 3 ; 0, otherwise }; h(t) = e^{−t} u(t)


Figure P2.1f x(t) = 4u(t − 2) − 2u(t − 3) − 2u(t − 1); h(t) = e−t u(t)

 

g. x(t) = { 1, 0 < t < 1 ; −1, 1 < t < 2 ; 0, otherwise }; h(t) = { 1, 0 < t < 1 ; −1, 1 < t < 2 ; 0, otherwise }

Figure P2.1g x(t) = u(t) + u(t − 2) − 2u(t − 1); h(t) = u(t) + u(t − 2) − 2u(t − 1)

h. x(t) = u(t − 3); h(t) = e2t u(t)

Figure P2.1h x(t) = u(t − 3); h(t) = e2t u(t)

i. x(t) = e−t u(t); h(t) = u(t) − u(t − 3)

Figure P2.1i x(t) = e−t u(t); h(t) = u(t) − u(t − 3)

2.2. Find the response of a system to an input of x(t) = 2u(t − 2) if the impulse response is
given by
a. h(t) = sin(2t) u(t)

b. h(t) = u(t)

c. h(t) = u(t − 4)

d. h(t) = 2u(t + 2)

2.3. CT LTI system has impulse response h(t) illustrated in Figure P2.3.

Figure P2.3

Use linearity and time invariance to determine the system output y(t) if the input x(t) is
a. x(t) = 2δ (t + 2) + 2δ (t − 2)

b. x(t) = δ (t − 1) + δ (t − 2) + δ (t − 3)

c. x(t) = ∑_{n=−∞}^{∞} δ(t − 3n)

d. x(t) = ∑_{n=−∞}^{∞} δ(t − 2n)

e. x(t) = ∑_{n=−∞}^{∞} δ(t − 1.5n)

2.4. CT LTI system responds to the following inputs with the corresponding outputs:
x(t) = u(t) → y(t) = 2(1 − e^{−t}) u(t), x(t) = cos(t) → y(t) = 2 cos(t − π/4)

Using linearity/time invariance properties, find y(t) for the following inputs:
a. x(t) = 2u(t) − 2u(t − 1)

b. x(t) = 3 cos(2t − 2)

c. x(t) = 4u(t) + 5 cos(2t)

d. x(t) = t u(t)

2.5. The impulse response h(t) of a CT LTI system is given by

h(t) = e2t u(−t)

a. Is the system causal? Justify your answer.

b. Is the system stable? Justify your answer.

c. Using linearity/time invariance properties, find the system output y(t) for the given inputs:
• x(t) = u(t − 3)
• x(t) = u(t + 3)

2.6. CT LTI system with input x(t) and output y(t) related by
y(t) = ∫_{−∞}^{t} e^{−(t−τ)} x(τ − 2) dτ

a. What is the impulse response h(t) of the LTI system?

b. Is the system causal? Justify your answer.

2.7. Given the impulse response of LTI systems, verify whether the following systems are
memoryless, invertible, causal, and stable
a. h(t) = e−4t u(t − 2)

b. h(t) = 4δ (t)

c. h[n] = 3n u[n − 2]
d. h[n] = (−1/2)^n u[n] + (1/2)^n u[n − 1]

2.8. Given the impulse response of LTI systems, find the step response of the systems
a. h(t) = te−t

b. h(t) = u(t) − u(t − 4)

c. h[n] = u[n]

d. h[n] = n u[n]

2.9. Determine the output y(t) of the system described by the following differential equation, with the input and the initial rest condition as specified:

5 dy(t)/dt + 10y(t) = 2x(t)
a. Input → x(t) = 2

b. Input → x(t) = e−t

2.10. Determine the output y(t) of the system described by the following differential equation, with the input and the initial rest condition as specified:

dy(t)/dt + 2y(t) = x(t) + dx(t)/dt

Input → x(t) = K e−α t u(t)


Chapter 3
Continuous-Time Fourier Series Representation of Periodic Signals

Abstract:
This chapter develops the representation of signals as linear combinations of a set of basic signals. For this alternative representation we use complex exponentials; the resulting representations are known as the continuous-time and discrete-time Fourier series. This chapter introduces the Fourier series in the context of continuous-time signals and systems.

3.1 Introduction

Fourier series representation involves writing a periodic signal as a linear combination of harmonically-related sinusoids. It is often more convenient to represent a periodic signal as a linear combination of harmonically-related complex exponentials, rather than trigonometric functions.
• Signals can be represented using complex exponentials; continuous-time and discrete-time
Fourier series and transform.
• If the input to an LTI system is expressed as a linear combination of periodic complex
exponentials or sinusoids, then the output can also be expressed in this form.

3.1.1 Historical Perspective

By 1807, Fourier had completed a work showing that series of harmonically related sinusoids were useful in representing the temperature distribution of a body. He claimed that any periodic signal could be represented by a Fourier series, and he also obtained a representation for non-periodic signals as weighted integrals of sinusoids, the Fourier transform.

3.2 The Response of LTI Systems to Complex Exponentials

It is advantageous in the study of LTI systems to represent signals as linear combinations of basic signals that possess the following two properties:


Figure 3.1 Jean Baptiste Joseph Fourier.

• The set of basic signals can be used to construct a broad and useful class of signals.
• The response of an LTI system to each signal should be simple enough in structure to
provide us with a convenient representation for the response of the system to any signal
constructed as a linear combination of the basic signal.
Both of these properties are provided by Fourier analysis.
The importance of complex exponentials in the study of LTI system is that the response of
an LTI system to a complex exponential input is the same complex exponential with only a
change in amplitude; that is

Continuous time : est → H(s)est , (3.1)

Discrete time : zn → H(z)zn , (3.2)


where the complex amplitude factor H(s) or H(z) will be in general a function of the complex
variable s or z.

A signal for which the system output is the product of the input and a (possibly complex) constant is referred to as an eigenfunction of the system, while the amplitude factor is referred to as the system's eigenvalue. Note that complex exponentials are eigenfunctions of LTI systems.

For an input x(t) = e^{st} applied to an LTI system with impulse response h(t), the output is

y(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ = ∫_{−∞}^{∞} h(τ) e^{s(t−τ)} dτ = e^{st} ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ. (3.3)

Assuming that the integral ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ converges, it is expressed as

H(s) = ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ, (3.4)

and the response to e^{st} is of the form

y(t) = H(s) e^{st}. (3.5)

This shows that the complex exponential e^{st} is an eigenfunction of the LTI system, and the response H(s), for a specific value of s, is the eigenvalue associated with that eigenfunction.

Complex exponential sequences are eigenfunctions of discrete-time LTI systems. That is,
suppose that an LTI system with impulse response h[n] has as its input sequence

x[n] = zn , (3.6)

where z is a complex number, then the output of the system can be determined from the
convolution sum as
y[n] = ∑_{k=−∞}^{∞} h[k] x[n − k] = ∑_{k=−∞}^{∞} h[k] z^{n−k} = z^n ∑_{k=−∞}^{∞} h[k] z^{−k}. (3.7)

Assuming that the summation on the right-hand side of Eq. 3.7 converges, then the output is
the product of the same complex exponential and the constant that depends on the value of z.
That is,
y[n] = H(z) zn , (3.8)
where

H(z) = ∑_{k=−∞}^{∞} h[k] z^{−k}. (3.9)

This shows that the complex exponential z^n is an eigenfunction of the DT LTI system, and the response H(z), for a specific value of z, is the eigenvalue associated with that eigenfunction.
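The eigenfunction property is easy to observe numerically: pushing x[n] = z^n through the convolution sum of an arbitrary FIR impulse response reproduces z^n scaled by H(z). A sketch (the h values and z are arbitrary; the first couple of samples differ only because the input is truncated at n = 0):

```python
import numpy as np

h = np.array([1.0, -0.5, 0.25])          # arbitrary FIR impulse response h[0..2]
z = 0.9 * np.exp(0.3j)                   # arbitrary complex number

n = np.arange(0, 30)
x = z ** n                               # x[n] = z^n (truncated at n = 0)

y = np.convolve(x, h)[: len(n)]          # y[n] = sum_k h[k] x[n-k]
H = sum(h[k] * z ** (-k) for k in range(len(h)))   # H(z) = sum_k h[k] z^{-k}

print(np.allclose(y[2:], H * x[2:]))     # True once past the truncation transient
```

For an input defined on all n (no truncation), the relation y[n] = H(z) z^n would hold at every sample.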

The usefulness of decomposing general signals in terms of eigenfunctions for LTI system analysis can be illustrated as follows. Let

x(t) = a_1 e^{s_1 t} + a_2 e^{s_2 t} + a_3 e^{s_3 t}; (3.10)

from the eigenfunction property, the response to each term is

a_1 e^{s_1 t} → y_1(t) = a_1 H(s_1) e^{s_1 t},
a_2 e^{s_2 t} → y_2(t) = a_2 H(s_2) e^{s_2 t},
a_3 e^{s_3 t} → y_3(t) = a_3 H(s_3) e^{s_3 t},

and from the superposition property, the response to the sum is the sum of the responses,

y(t) = a_1 H(s_1) e^{s_1 t} + a_2 H(s_2) e^{s_2 t} + a_3 H(s_3) e^{s_3 t}. (3.11)

Generally, if the input is a linear combination of complex exponentials,

x(t) = ∑_k a_k e^{s_k t}, (3.12)

the output will be

y(t) = ∑_k a_k H(s_k) e^{s_k t}. (3.13)

Similarly, for discrete-time LTI systems, if the input is

x[n] = ∑_k a_k z_k^n, (3.14)

the output is

y[n] = ∑_k a_k H(z_k) z_k^n. (3.15)

Example 3.1.
Consider a continuous-time LTI system, where the input and output are related by the follow-
ing relation:
y(t) = x(t − 3)
Find the impulse response, the eigenvalues, and the eigenfunctions of the system for the following inputs:
a. x(t) = e j2t

b. x(t) = cos(2t)

c. x(t) = cos(4t) + cos(7t)

Solution 3.1.
The impulse response of the system is h(t) = δ(t − 3). Therefore, the eigenvalue of the system is given by

H(s) = ∫_{−∞}^{∞} δ(τ − 3) e^{−sτ} dτ = e^{−3s}.

a. If the input x(t) = e j2t , then the system output is written as:

y(t) = x(t − 3) = e j2(t−3) = e− j6 e j2t

Note that the corresponding input e^{j2t} has eigenfunction e^{j2t} and associated eigenvalue H(j2) = e^{−j6}.

b. Expanding the input signal cos(2t) using Euler’s relation:

cos(2t) = (1/2) (e^{j2t} + e^{−j2t}).
By the superposition property and the eigenvalue of the system H(s) = e−3s , the system
output is written as:
y(t) = (1/2) (e^{−j6} e^{j2t} + e^{j6} e^{−j2t}) = (1/2) (e^{j2(t−3)} + e^{−j2(t−3)}) = cos(2(t − 3)).

Note that the corresponding input component e^{j2t} has eigenfunction e^{j2t} and associated eigenvalue H(j2) = e^{−j6}. Because this eigenvalue has unit magnitude (a pure phase factor), it corresponds to a phase shift (time delay) in the system's response; an eigenvalue whose magnitude differs from one would also produce an amplitude variation.

c. Expanding the input signal x(t) = cos(4t) + cos(7t) using Euler's relation:

x(t) = (1/2) (e^{j4t} + e^{−j4t}) + (1/2) (e^{j7t} + e^{−j7t}).

By the superposition property and the eigenvalue of the system H(s) = e^{−3s}, the system output is written as:

y(t) = (1/2) (e^{−j12} e^{j4t} + e^{j12} e^{−j4t}) + (1/2) (e^{−j21} e^{j7t} + e^{j21} e^{−j7t}) = (1/2) (e^{j4(t−3)} + e^{−j4(t−3)}) + (1/2) (e^{j7(t−3)} + e^{−j7(t−3)}) = cos(4(t − 3)) + cos(7(t − 3)).

Note that the corresponding input component e^{j4t} has eigenfunction e^{j4t} and associated eigenvalue H(j4) = e^{−j12}; similarly, the input component e^{j7t} has eigenfunction e^{j7t} and associated eigenvalue H(j7) = e^{−j21}.
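The eigenvalue bookkeeping in this example can be checked numerically: expand the input into complex exponentials, multiply each by H(jω) = e^{−3jω}, and compare against directly delaying the cosine. A sketch (the time grid is an arbitrary choice):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
H = lambda w: np.exp(-3j * w)            # eigenvalue of y(t) = x(t - 3) at s = j*w

# cos(2t) = (1/2)(e^{j2t} + e^{-j2t}); apply the eigenvalue term by term
y_eig = 0.5 * (H(2) * np.exp(2j * t) + H(-2) * np.exp(-2j * t))
y_direct = np.cos(2.0 * (t - 3.0))       # the delayed input, computed directly

print(np.allclose(y_eig.real, y_direct))   # True
print(np.max(np.abs(y_eig.imag)) < 1e-12)  # True (conjugate terms cancel)
```

The vanishing imaginary part reflects the conjugate symmetry H(−jω) = H(jω)* for this real system.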

3.3 Fourier Series representation of Continuous-Time Periodic Signals

The Fourier series representation involves writing a periodic signal as a linear combination
of harmonically-related sinusoids. In addition, it is more convenient to represent a periodic
signal as a linear combination of harmonically-related complex exponentials, rather than
trigonometric functions.

3.3.1 Linear Combinations of Harmonically Related Complex Exponentials

A periodic signal with period of T ,

x(t) = x(t + T ) for all t. (3.16)

The fundamental period of x(t) is the smallest positive value of T , and ω0 = 2π /T is referred
to as the fundamental (angular) frequency.

The two basic periodic signals are the real sinusoidal signal

x(t) = cos(ω0t), (3.17)

and the periodic complex exponential signal

x(t) = e jω0 t , (3.18)

Both signals are periodic with fundamental frequency ω0 and fundamental period T = 2π /ω0 .
Associated with the signal in Eq. 3.18 is the set of harmonically related complex exponentials

ϕk (t) = e jkω0 t = e jk(2π /T )t , k = 0, ±1, ±2, .... (3.19)

Each of these signals is periodic with period T (although for |k| ≥ 2, the fundamental period of ϕk(t) is a fraction of T). Thus, a linear combination of harmonically related complex exponentials of the form

x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0t} = ∑_{k=−∞}^{∞} a_k e^{jk(2π/T)t}, (3.20)

is also periodic with period T, where a_k is known as the complex Fourier coefficient.
• k = 0, dc component which is constant.
• k = ±1, both have fundamental frequency equal to ω0 and are collectively referred to as
the fundamental components or the first harmonic components.
• k = ±2, periodic with half the period (twice the frequency) of the fundamental components
and are referred to as the second harmonic components.
• k = ±N, the components are referred to as the N th harmonic components.

Figure 3.2 Summation of a fundamental, second and third harmonic.

Since any periodic waveform x(t) can be expressed as a Fourier series, it follows that the sum of the dc component, the first harmonic component, the second harmonic component, and so on, must reproduce the waveform. Generally, the sum of two or more sinusoids of different frequencies produces a waveform that is not a sinusoid, as shown in Figure 3.2. However, since each ϕk is periodic with period T, any sum of ϕk's will also be periodic with the same period.

Example 3.2.
A periodic signal x(t), with fundamental frequency 2π, is expressed as

x(t) = ∑_{k=−3}^{3} a_k e^{jk2πt}.

Determine the time-domain signal represented by the following Fourier series coefficients:

a_0 = 1, a_1 = a_{−1} = 1/4, a_2 = a_{−2} = 1/2, a_3 = a_{−3} = 1/3.

Solution 3.2.
Rewriting x(t) and collecting the harmonic components that share the same frequency, we obtain:

x(t) = 1 + (1/4) (e^{j2πt} + e^{−j2πt}) + (1/2) (e^{j4πt} + e^{−j4πt}) + (1/3) (e^{j6πt} + e^{−j6πt}).

Using Euler's relation, we can write the time-domain signal x(t) in the form:

x(t) = 1 + (1/2) cos(2πt) + cos(4πt) + (2/3) cos(6πt).

Example 3.3.
Use the definition of the Fourier series to determine the time-domain signals x(t) represented
by the following Fourier series coefficients:
a. X[k] = jδ [k − 1] − jδ [k + 1] + δ [k − 3] + δ [k + 3], with fundamental frequency, ω0 = 2π .

b. X[k] = jδ [k − 1] − jδ [k + 1] + δ [k − 3] + δ [k + 3], with fundamental frequency, ω0 = 4π .

c. X[k] = (3/2) δ[k − (1 + 1/(2t))] + (3/2) δ[k + (1 + 1/(2t))], with fundamental frequency, ω0 = π/2.

Solution 3.3.

a. Fundamental frequency, ω0 = 2π:

x(t) = ∑_{k=−∞}^{∞} X[k] e^{jk2πt} = je^{j2πt} − je^{−j2πt} + e^{j6πt} + e^{−j6πt} = −2 sin(2πt) + 2 cos(6πt).

b. Fundamental frequency, ω0 = 4π:

x(t) = ∑_{k=−∞}^{∞} X[k] e^{jk4πt} = je^{j4πt} − je^{−j4πt} + e^{j12πt} + e^{−j12πt} = −2 sin(4πt) + 2 cos(12πt).

c. Fundamental frequency, ω0 = π/2:

x(t) = ∑_{k=−∞}^{∞} X[k] e^{jk(π/2)t} = (3/2) e^{j(1+1/(2t))(π/2)t} + (3/2) e^{−j(1+1/(2t))(π/2)t} = (3/2) e^{j(πt/2 + π/4)} + (3/2) e^{−j(πt/2 + π/4)} = 3 cos(πt/2 + π/4).

The solutions of Examples 3.2 and 3.3 illustrate an alternative form of the Fourier series for real periodic signals. Suppose that x(t) is real and can be represented in the form of Eq. 3.20; then we obtain

x(t) = x*(t) = ∑_{k=−∞}^{∞} a_k* e^{−jkω0t}, (3.21)

where the asterisk indicates the complex conjugate and we have used the assumption that x(t) = x*(t). Replacing k by −k in Eq. 3.21, we have

x(t) = ∑_{k=−∞}^{∞} a*_{−k} e^{jkω0t}, (3.22)

which, by comparison with Eq. 3.20, requires that a_k = a*_{−k}, or equivalently

a_k* = a_{−k}. (3.23)

To derive the alternative forms of the Fourier series, we rewrite the sum in Eq. 3.20 as

x(t) = a_0 + ∑_{k=1}^{∞} [a_k e^{jkω0t} + a_{−k} e^{−jkω0t}]. (3.24)

Substituting a_k* for a_{−k}, we have

x(t) = a_0 + ∑_{k=1}^{∞} [a_k e^{jkω0t} + a_k* e^{−jkω0t}]. (3.25)

When the signal x(t) is real, the two terms inside the summation are complex conjugates of each other, and the sum can be expressed as

x(t) = a_0 + ∑_{k=1}^{∞} 2ℜ{a_k e^{jkω0t}}. (3.26)

If a_k is expressed in polar form, a_k = A_k e^{jθ_k}, then Eq. 3.26 becomes

x(t) = a_0 + ∑_{k=1}^{∞} 2ℜ{A_k e^{j(kω0t + θ_k)}},

that is,

x(t) = a_0 + 2 ∑_{k=1}^{∞} A_k cos(kω0t + θ_k). (3.27)

This is one common form of the Fourier series of real periodic signals in continuous time. The other form is obtained by writing a_k in rectangular form, a_k = B_k + jC_k, whereupon Eq. 3.26 becomes

x(t) = a_0 + 2 ∑_{k=1}^{∞} [B_k cos(kω0t) − C_k sin(kω0t)], (3.28)

where a_0, B_k, and C_k are the trigonometric Fourier series coefficients.

3.3.2 Determination of the Fourier Series Representation of a Periodic Signal

The coefficients a_k are in general complex. To determine them, multiply both sides of Eq. 3.20 by the complex exponential e^{−jnω0t}, where n is an integer; we obtain

x(t) e^{−jnω0t} = ∑_{k=−∞}^{∞} a_k e^{jkω0t} e^{−jnω0t}. (3.29)

Integrating both sides over one period, from 0 to T = 2π/ω0, we have

∫_0^T x(t) e^{−jnω0t} dt = ∑_{k=−∞}^{∞} a_k ∫_0^T e^{jkω0t} e^{−jnω0t} dt = ∑_{k=−∞}^{∞} a_k ∫_0^T e^{j(k−n)ω0t} dt; (3.30)

note that, since the integrand is periodic,

∫_0^T e^{j(k−n)ω0t} dt = { T, k = n ; 0, k ≠ n. }

Therefore, Eq. 3.30 becomes

a_k = (1/T) ∫_0^T x(t) e^{−jkω0t} dt. (3.31)

The Fourier series of a periodic continuous-time signal is given by the following synthesis equation:

x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0t} = ∑_{k=−∞}^{∞} a_k e^{jk(2π/T)t}. (3.32)

The following equation is called the analysis equation, where the set of coefficients a_k are often called the Fourier series coefficients or the spectral coefficients of x(t):

a_k = (1/T) ∫_T x(t) e^{−jkω0t} dt = (1/T) ∫_T x(t) e^{−jk(2π/T)t} dt. (3.33)

The coefficient a_0 is the dc or constant component, given by Eq. 3.33 with k = 0. Writing a_k = B_k + jC_k as in Eq. 3.28, the trigonometric Fourier series coefficients a_0, B_k, and C_k are given by:

a_0 = (1/T) ∫_T x(t) dt,

B_k = (1/T) ∫_T x(t) cos(kω0t) dt,

C_k = −(1/T) ∫_T x(t) sin(kω0t) dt. (3.34)
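The analysis equation can be evaluated numerically with any quadrature rule. The sketch below (trapezoidal rule, with x(t) = sin(ω0 t) and an arbitrary period T = 2) recovers the coefficients a_1 = 1/(2j) and a_{−1} = −1/(2j) that are found by inspection in Example 3.4:

```python
import numpy as np

T = 2.0
w0 = 2.0 * np.pi / T
t = np.linspace(0.0, T, 20001)
x = np.sin(w0 * t)

def coeff(k):
    """a_k = (1/T) * integral over one period of x(t) e^{-j k w0 t} (trapezoidal rule)."""
    f = x * np.exp(-1j * k * w0 * t)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)) / T

print(abs(coeff(1) - 1 / 2j) < 1e-9)     # True: a_1 = 1/(2j) = -j/2
print(abs(coeff(-1) + 1 / 2j) < 1e-9)    # True: a_-1 = -1/(2j) = +j/2
print(abs(coeff(2)) < 1e-9)              # True: all other a_k vanish
```

For smooth periodic integrands sampled uniformly over a full period, the trapezoidal rule is extremely accurate, which is why such tight tolerances hold.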

Example 3.4.
Consider the following signals. Find the complex exponential Fourier series coefficients.
a. x1(t) = sin(ω0t)

b. x2(t) = cos(ω0t)

c. x3(t) = cos(2t + π/4)

d. x4(t) = cos(4t) + sin(6t)

e. x5(t) = sin²(t)

Solution 3.4.
One approach to determining the Fourier series coefficients is to apply Eq. 3.33 directly. A simpler approach here is to expand each sinusoidal signal as a linear combination of complex exponentials and identify the Fourier series coefficients by inspection.
a. The Fourier series coefficients for x1(t) = sin(ω0t):

x1(t) = (1/(2j)) [e^{jω0t} − e^{−jω0t}] = −(1/(2j)) e^{−jω0t} + (1/(2j)) e^{jω0t} = ∑_{k=−∞}^{∞} a_k e^{jkω0t}.

Comparing the right-hand sides of this equation and Eq. 3.32, we have
1 1
a−1 = − , a1 = ,
2j 2j
ak = 0, k ̸= ±1.

b. The Fourier series coefficients for x2 (t) = cos(ω0t)

x_2(t) = (1/2)[e^{jω_0 t} + e^{-jω_0 t}]
       = (1/2) e^{-jω_0 t} + (1/2) e^{jω_0 t} = ∑_{k=−∞}^{∞} a_k e^{jkω_0 t}.

Comparing the right-hand sides of this equation and Eq. 3.32, we have

a_{−1} = 1/2,   a_1 = 1/2,
a_k = 0, k ≠ ±1.

c. The Fourier series coefficients for x_3(t) = cos(2t + π/4), where the fundamental (angular) frequency is ω_0 = 2. Thus,

x_3(t) = (1/2)[e^{j(2t+π/4)} + e^{-j(2t+π/4)}]
       = (1/2) e^{-jπ/4} e^{-j2t} + (1/2) e^{jπ/4} e^{j2t} = ∑_{k=−∞}^{∞} a_k e^{j2kt}.

The complex Fourier coefficients are

a_{−1} = (1/2) e^{-jπ/4} = (1/2)(1 − j)/√2 = (√2/4)(1 − j),   a_1 = (1/2) e^{jπ/4} = (1/2)(1 + j)/√2 = (√2/4)(1 + j),
a_k = 0, k ≠ ±1.

d. The Fourier series coefficients for x4 (t) = cos(4t) + sin(6t), where the fundamental period,
T0 = π and the fundamental (angular) frequency, ω0 = 2π /T0 = 2. Thus,

x_4(t) = (1/2)[e^{j4t} + e^{-j4t}] + (1/2j)[e^{j6t} − e^{-j6t}]
       = −(1/2j) e^{-j6t} + (1/2) e^{-j4t} + (1/2) e^{j4t} + (1/2j) e^{j6t} = ∑_{k=−∞}^{∞} a_k e^{j2kt}.

The complex Fourier coefficients are

a_{−3} = −1/(2j),   a_{−2} = 1/2,
a_2 = 1/2,   a_3 = 1/(2j),
a_k = 0, otherwise.

e. The Fourier series coefficients for x5 (t) = sin2 (t), where the fundamental period T0 = π
and the fundamental angular frequency ω0 = 2π /T0 = 2. Thus,
x_5(t) = [(e^{jt} − e^{-jt})/(2j)]² = −(1/4)(e^{j2t} − 2 + e^{-j2t})
       = −(1/4) e^{-j2t} + 1/2 − (1/4) e^{j2t} = ∑_{k=−∞}^{∞} a_k e^{j2kt}.

The complex Fourier coefficients are

a_{−1} = −1/4,   a_0 = 1/2,
a_1 = −1/4,   a_k = 0, otherwise.
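As a sanity check, the coefficients found in part (e) can be recovered by evaluating the analysis equation (Eq. 3.33) numerically; a sketch (the grid size and rectangle-rule integration are implementation choices):

```python
import numpy as np

T = np.pi                        # fundamental period of sin^2(t)
w0 = 2 * np.pi / T               # = 2
N = 100_000
t = np.arange(N) * (T / N)
x = np.sin(t) ** 2

def a(k):
    # Analysis equation, Eq. 3.33, via the rectangle rule over one period
    return np.sum(x * np.exp(-1j * k * w0 * t)) * (T / N) / T

print(np.round(a(0).real, 6))    # 0.5
print(np.round(a(1).real, 6))    # -0.25
print(np.round(a(2).real, 6))    # 0.0 (and a_k = 0 for all |k| > 1)
```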

Example 3.5.
The periodic square wave signal shown in figure 3.3 is defined over one period by the expression below; the signal has fundamental period T and fundamental frequency ω_0 = 2π/T. Find its Fourier series coefficients.

{
1, |t| < T1
x(t) =
0, T1 < |t| < T /2

Figure 3.3 Periodic square wave signal with fundamental period of T .

Solution 3.5.
We use Eq. 3.33 to determine the Fourier series coefficients of x(t). Because of the symmetry of x(t) about t = 0, we choose −T/2 ≤ t ≤ T/2 as the interval over which the integration is performed, although any other interval of length T is equally valid and leads to the same result.

For k = 0,

a_0 = (1/T) ∫_{−T_1}^{T_1} dt = t/T |_{−T_1}^{T_1} = 2T_1/T,

For k ≠ 0, we obtain

a_k = (1/T) ∫_{−T_1}^{T_1} e^{-jkω_0 t} dt = −(1/(jkω_0 T)) e^{-jkω_0 t} |_{−T_1}^{T_1}
    = (2/(kω_0 T)) [(e^{jkω_0 T_1} − e^{-jkω_0 T_1})/(2j)]
    = 2 sin(kω_0 T_1)/(kω_0 T) = sin(kω_0 T_1)/(kπ),

where the last step uses ω_0 T = 2π.

Figure 3.4 is a bar graph of the Fourier series coefficients for a fixed T_1 and several values of T. For this example the Fourier coefficients are real, so they can be depicted with a single graph. For complex Fourier coefficients, two graphs, corresponding to the real and imaginary parts or to the amplitude and phase of each coefficient, would be required. For T = 4T_1, the signal is a square wave that is unity for half the period and zero for half the period. In this case, ω_0 T_1 = π/2. The coefficients are regularly spaced samples of the envelope [2 sin(ωT_1)]/ω, where the spacing between samples decreases as T increases.

For k ≠ 0,

a_k = sin(πk/2)/(kπ),

while a_0 = 1/2, and

a_1 = a_{−1} = 1/π,   a_3 = a_{−3} = −1/(3π),   a_5 = a_{−5} = 1/(5π),

with a_k = 0 for k even and nonzero. Note that sin(πk/2) alternates between ±1 for successive odd values of k, which produces the alternating signs above.
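These closed-form coefficients can be cross-checked against a direct numerical evaluation of the analysis equation; a sketch for the T = 4T_1 case (the grid size is an arbitrary choice):

```python
import numpy as np

T, T1 = 4.0, 1.0                  # the T = 4*T1 square wave discussed above
w0 = 2 * np.pi / T
N = 400_000
t = -T / 2 + np.arange(N) * (T / N)
x = (np.abs(t) < T1).astype(float)

def a_num(k):
    # Analysis equation evaluated numerically over one period
    return np.sum(x * np.exp(-1j * k * w0 * t)) * (T / N) / T

for k in range(1, 6):
    # numerical value vs. sin(k*w0*T1)/(k*pi); signs alternate for odd k
    print(k, np.round(a_num(k).real, 4), np.round(np.sin(k * w0 * T1) / (k * np.pi), 4))
```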

3.4 Convergence of the Fourier Series

If a periodic signal x(t) is approximated by a linear combination of a finite number of harmonically related complex exponentials,

x_N(t) = ∑_{k=−N}^{N} a_k e^{jkω_0 t}. (3.35)

Let e_N(t) denote the approximation error,

e_N(t) = x(t) − x_N(t) = x(t) − ∑_{k=−N}^{N} a_k e^{jkω_0 t}. (3.36)

Figure 3.4 Fourier series coefficients Tak for the periodic square wave signal with T1 fixed and several values
of T : [a] T = 4T1 ; [b] T = 8T1 ; [c] T = 16T1 .

The criterion used to measure quantitatively the approximation error is the energy in the error over one period:

E_N = ∫_T |e_N(t)|² dt. (3.37)

The particular choice for the coefficients in Eq. 3.35 that minimizes the energy in the error is

a_k = (1/T) ∫_T x(t) e^{-jkω_0 t} dt. (3.38)

It can be seen that Eq. 3.38 is identical to the expression used to determine the Fourier series
coefficients. Thus, if x(t) has a Fourier series representation, the best approximation using
only a finite number of harmonically related complex exponentials is obtained by truncating
the Fourier series to the desired number of terms. As N increases, new terms are added and
EN decreases. If x(t) has a Fourier series representation, then the limit of EN as N → ∞ is zero.
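The claim that E_N decreases as N grows can be observed numerically for the square wave of Example 3.5; a sketch (the grid and the particular values of N are arbitrary):

```python
import numpy as np

T, T1 = 4.0, 1.0
w0 = 2 * np.pi / T
N_grid = 40_000
dt = T / N_grid
t = -T / 2 + np.arange(N_grid) * dt
x = (np.abs(t) < T1).astype(float)

def a(k):
    return np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T

def E(N):
    # Energy in the error for the N-term truncation (Eq. 3.37)
    xN = sum(a(k) * np.exp(1j * k * w0 * t) for k in range(-N, N + 1))
    return np.sum(np.abs(x - xN) ** 2) * dt

print(E(1) > E(5) > E(19))   # True: error energy shrinks as N increases
```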

One class of periodic signals that are representable through Fourier series is those signals which have finite energy over a single period,

∫_T |x(t)|² dt < ∞, (3.39)

When this condition is satisfied, we can guarantee that the coefficients obtained from Eq. 3.33 are finite. If we define

e(t) = x(t) − ∑_{k=−∞}^{∞} a_k e^{jkω_0 t}, (3.40)

then

∫_T |e(t)|² dt = 0. (3.41)

The convergence guaranteed when x(t) has finite energy over a period is very useful. In this case, we may say that x(t) and its Fourier series representation are indistinguishable. An alternative set of conditions, developed by Dirichlet, also guarantees the equivalence of the signal and its Fourier series representation:

Condition 1: Over any period, x(t) must be absolutely integrable; that is,

∫_T |x(t)| dt < ∞, (3.42)

This guarantees that each coefficient a_k will be finite, since

|a_k| ≤ (1/T) ∫_T |x(t) e^{-jkω_0 t}| dt = (1/T) ∫_T |x(t)| dt. (3.43)

A periodic function that violates the first Dirichlet condition is

x(t) = 1/t,   0 < t ≤ 1.
Condition 2: In any finite interval of time, x(t) is of bounded variation; that is, there are no
more than a finite number of maxima and minima during a single period of the signal.
An example of a function that meets Condition 1 but not Condition 2 is

x(t) = sin(2π/t),   0 < t ≤ 1. (3.44)

Condition 3: In any finite interval of time, there are only a finite number of discontinuities.
Furthermore, each of these discontinuities is finite.
An example that violates the third condition is a function defined as

x(t) = { 1,    0 ≤ t < 4
       { 1/2,  4 ≤ t < 6
       { 1/4,  6 ≤ t < 7
       { 1/8,  7 ≤ t < 7.5
       { ⋮

where each successive step has half the height and half the width of the previous one, producing an infinite number of discontinuities within a single period.
The above examples are shown in the figure 3.5. They are generally pathological in nature
and consequently do not typically arise in practical contexts.

Summary:
• For a periodic signal that has no discontinuities, the Fourier series representation converges and equals the original signal at every value of t.

Figure 3.5 Signals that violate the Dirichlet conditions: (a) a signal that violates the first condition; (b) a signal that violates the second condition; (c) a signal that violates the third condition.

• For a periodic signal with a finite number of discontinuities in each period, the Fourier series representation equals the original signal at every value of t except the isolated points of discontinuity.

Gibbs Phenomenon: Near a point where x(t) has a jump discontinuity, the partial sums x_N(t) of the Fourier series exhibit a substantial overshoot, and increasing N does not diminish the amplitude of this overshoot, although with increasing N the overshoot is confined to smaller and smaller intervals around the discontinuity. This behavior is called the Gibbs phenomenon. A sufficiently large value of N should be chosen to guarantee that the total energy in these ripples is insignificant.
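The constant overshoot is easy to observe for the square wave of Example 3.5 with T = 4T_1; a sketch (the values of N are arbitrary; the peak of the partial sum hovers near 1.09 regardless of N):

```python
import numpy as np

T, T1 = 4.0, 1.0
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 200_001)

def partial_sum(N):
    # a_0 plus the 2*a_k*cos(k*w0*t) terms (the a_k here are real and even)
    xN = np.full_like(t, 2 * T1 / T)
    for k in range(1, N + 1):
        xN += 2 * (np.sin(k * w0 * T1) / (k * np.pi)) * np.cos(k * w0 * t)
    return xN

for N in (9, 99):
    print(N, np.round(partial_sum(N).max(), 2))   # peak stays near 1.09
```

The peak does not approach 1 as N grows; only the width of the overshoot region shrinks.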

Figure 3.6 Gibbs phenomenon.

3.5 Properties of the Continuous-Time Fourier Series

Notation: suppose x(t) is a periodic signal with period T and fundamental frequency ω_0, and let its Fourier series coefficients be denoted by a_k. To signify the pairing of a periodic signal with its Fourier series coefficients, we use the notation x(t) ←FS→ a_k.

3.5.1 Linearity

Let x(t) and y(t) denote two periodic signals with period T whose Fourier series coefficients are denoted by a_k and b_k, that is, x(t) ←FS→ a_k and y(t) ←FS→ b_k. Then we have

z(t) = Ax(t) + By(t) ←FS→ c_k = A a_k + B b_k. (3.45)

3.5.2 Time Shifting

When a time shift is applied to a periodic signal x(t), the period T of the signal is preserved. If x(t) ←FS→ a_k, then we have

x(t − t_0) ←FS→ e^{-jkω_0 t_0} a_k, (3.46)

where a_k is the kth Fourier series coefficient of x(t). Note that when a periodic signal is shifted in time, the magnitudes of its Fourier series coefficients remain unchanged.

3.5.3 Time Reversal

Time reversal applied to a continuous-time periodic signal results in a time reversal of the corresponding sequence of Fourier series coefficients. If x(t) ←FS→ a_k, then

x(−t) ←FS→ a_{−k}. (3.47)

If x(t) is even, that is, x(−t) = x(t), the Fourier series coefficients are also even, a_{−k} = a_k. Similarly, if x(t) is odd, that is, x(−t) = −x(t), the Fourier series coefficients are also odd, a_{−k} = −a_k.

3.5.4 Time Scaling



If x(t) has the Fourier series representation x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω_0 t}, then the Fourier series representation of the time-scaled signal x(αt), α > 0, is

x(αt) = ∑_{k=−∞}^{∞} a_k e^{jk(αω_0)t}. (3.48)

While the Fourier series coefficients have not changed, the Fourier series representation has changed because of the change in the fundamental frequency.

3.5.5 Multiplication

Suppose x(t) and y(t) are two periodic signals with period T and that x(t) ←FS→ a_k and y(t) ←FS→ b_k. Since the product x(t) y(t) is also periodic with period T, its Fourier series coefficients h_k are given by

x(t) y(t) ←FS→ h_k = ∑_{i=−∞}^{∞} a_i b_{k−i}. (3.49)

The sum on the right-hand side of Eq. 3.49 may be interpreted as the discrete-time convolu-
tion of the sequence representing the Fourier coefficients of x(t) and the sequence represent-
ing the Fourier coefficients of y(t).
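This coefficient-domain convolution can be verified numerically for a simple product such as cos(t)·sin(t); a sketch (the test signals and the harmonic k = 2 are arbitrary choices):

```python
import numpy as np

T = 2 * np.pi
w0 = 1.0
N = 200_000
t = np.arange(N) * (T / N)
x = np.cos(t)                     # a_{+1} = a_{-1} = 1/2
y = np.sin(t)                     # b_1 = 1/(2j), b_{-1} = -1/(2j)

def coeff(sig, k):
    return np.sum(sig * np.exp(-1j * k * w0 * t)) * (T / N) / T

# h_2 from the product signal directly, and from h_k = sum_i a_i b_{k-i}
# (for k = 2 only the i = 1 term contributes for these two signals).
h2_direct = coeff(x * y, 2)
h2_conv = coeff(x, 1) * coeff(y, 1)
print(np.round(h2_direct, 4), np.round(h2_conv, 4))  # both equal 1/(4j) = -0.25j
```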

3.5.6 Conjugate and Conjugate Symmetry

Taking the complex conjugate of a periodic signal x(t) has the effect of complex conjugation and time reversal on the corresponding Fourier series coefficients. That is, if x(t) ←FS→ a_k, then

x*(t) ←FS→ a*_{−k}. (3.50)

If x(t) is real, that is, x(t) = x*(t), the Fourier series coefficients will be conjugate symmetric; that is,

a_{−k} = a*_k. (3.51)
From this expression, we may get various symmetry properties for magnitude, phase, real
and imaginary parts of the Fourier series coefficients of real signals. From Eq. 3.51, we see
that:
• If x(t) is real, then a0 is real and |a−k | = |ak |.
• If x(t) is real and even, we have ak = a−k . However, from Eq. 3.51, a−k = a∗k , so that
ak = a∗k , the Fourier series coefficients are real and even.
• If x(t) is real and odd, then the Fourier series coefficients are real and odd.

3.5.7 Parseval’s Relation for periodic Signals

Parseval's relation for continuous-time periodic signals is

(1/T) ∫_T |x(t)|² dt = ∑_{k=−∞}^{∞} |a_k|², (3.52)

since

(1/T) ∫_T |a_k e^{jkω_0 t}|² dt = (1/T) ∫_T |a_k|² dt = |a_k|², (3.53)

so that |ak |2 is the average power in the kth harmonic component. Thus, Parseval’s relation
states that the total average power in a periodic signal equals the sum of the average powers
in all of its harmonic components.
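Parseval's relation can be confirmed numerically for a signal with a handful of harmonics; a sketch (the test signal is an arbitrary choice):

```python
import numpy as np

T = 2 * np.pi
N = 200_000
t = np.arange(N) * (T / N)
x = 1 + 2 * np.cos(t) + np.sin(3 * t)

# Time-domain average power over one period
power_time = np.sum(np.abs(x) ** 2) * (T / N) / T

# Nonzero coefficients: a_0 = 1, a_{+1} = a_{-1} = 1,
# a_3 = 1/(2j), a_{-3} = -1/(2j); sum of |a_k|^2:
power_freq = 1 + 1 + 1 + 0.25 + 0.25
print(np.round(power_time, 6), power_freq)   # both 3.5
```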

3.6 Summary of Properties of the Continuous-Time Fourier Series

Table 3.1 contains a summary of the properties of the continuous-time Fourier series presented in this section.

Example 3.6.
Consider the signal g(t) with a fundamental period of 4, shown in figure 3.7. The Fourier
series representation can be obtained directly using the analysis equation 3.33. We may also
use the relation of g(t) to the symmetric periodic square wave x(t).

1
g(t) = x(t − 1) − .
2

Table 3.1 Properties of Continuous-Time Fourier Series.


Property                      Periodic Signal                                Fourier Series Coefficients
                              x(t) with period T                             a_k
                              y(t) with period T                             b_k

Linearity                     Ax(t) + By(t)                                  A a_k + B b_k
Time Shifting                 x(t − t_0)                                     a_k e^{-jkω_0 t_0}
Frequency Shifting            e^{jMω_0 t} x(t)                               a_{k−M}
Conjugation                   x*(t)                                          a*_{−k}
Time Reversal                 x(−t)                                          a_{−k}
Time Scaling                  x(αt), α > 0 (periodic with period T/α)        a_k
Periodic Convolution          ∫_T x(τ) y(t − τ) dτ                           T a_k b_k
Multiplication                x(t) y(t)                                      ∑_{i=−∞}^{∞} a_i b_{k−i}
Differentiation               dx(t)/dt                                       jkω_0 a_k = jk(2π/T) a_k
Integration                   ∫_{−∞}^{t} x(τ) dτ                             (1/(jkω_0)) a_k = (1/(jk(2π/T))) a_k
Conjugate Symmetry for        x(t) real                                      a_k = a*_{−k}, Re{a_k} = Re{a_{−k}},
Real Signals                                                                 Im{a_k} = −Im{a_{−k}}, |a_k| = |a_{−k}|,
                                                                             ∠a_k = −∠a_{−k}
Real and Even Signals         x(t) real and even                             a_k real and even
Real and Odd Signals          x(t) real and odd                              a_k purely imaginary and odd
Even-Odd Decomposition        x_e(t) = Ev[x(t)]  [x(t) real]                 Re{a_k}
of Real Signals               x_o(t) = Od[x(t)]  [x(t) real]                 jIm{a_k}

Parseval's Relation for Periodic Signals:  (1/T) ∫_T |x(t)|² dt = ∑_{k=−∞}^{∞} |a_k|²

Solution 3.6.
We use Eq. 3.33 to determine the Fourier series coefficients a_k of the symmetric square wave x(t), as in Solution 3.5: because of the symmetry of x(t) about t = 0, we choose −T/2 ≤ t ≤ T/2 as the integration interval, although any other interval of length T leads to the same result.

The time-shift property indicates that if the Fourier series coefficients of x(t) are denoted by
ak , the Fourier series coefficients of x(t − 1) may be expressed as

bk = ak e− jkπ /2 .

Figure 3.7 Periodic square wave signal for Example 3.6.

The Fourier coefficients of the dc offset in g(t), i.e., the term −1/2 on the right-hand side of the given equation, are given by

c_k = { 0,     for k ≠ 0
      { −1/2,  for k = 0.

Applying the linearity property, we conclude that the coefficients for g(t) can be expressed as

d_k = { a_k e^{-jkπ/2},  for k ≠ 0
      { a_0 − 1/2,       for k = 0,

and replacing a_k = sin(kπ/2)/(kπ) and a_0 = 1/2, we have

d_k = { (sin(kπ/2)/(kπ)) e^{-jkπ/2},  for k ≠ 0
      { 0,                            for k = 0.
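A quick numerical check of these d_k, integrating g(t) = x(t − 1) − 1/2 directly over one period (the grid is an arbitrary choice):

```python
import numpy as np

T = 4.0
w0 = np.pi / 2
N = 400_000
t = np.arange(N) * (T / N)
g = np.where(t < 2, 0.5, -0.5)     # one period of g(t) = x(t-1) - 1/2

def d(k):
    return np.sum(g * np.exp(-1j * k * w0 * t)) * (T / N) / T

print(np.round(d(0), 4))           # ≈ 0
print(np.round(d(1), 4))           # ≈ (1/pi) e^{-j pi/2} = -j/pi ≈ -0.3183j
```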

Example 3.7.
The triangular wave signal x(t) with period T = 4, and fundamental frequency ω0 = π /2 is
shown in figure 3.8.

Solution 3.7.
The derivative of this signal is the signal g(t) in the preceding example. Denoting the Fourier series coefficients of g(t) by d_k and those of x(t) by e_k, the differentiation property gives

d_k = jk(π/2) e_k.

This equation can be used to express e_k in terms of d_k except when k = 0:

e_k = d_k/(jk(π/2)) = (2 d_k)/(jkπ) = (2 sin(kπ/2)/(j(kπ)²)) e^{-jkπ/2}.

Figure 3.8 Triangular wave signal for Example 3.7.

For k = 0, e_0 can be found simply by computing the area under the signal over one period and dividing by the length of the period; that is,

e_0 = 1/2.

Example 3.8.
Use the properties of the Fourier series to find the representation of a periodic train of impulses. The impulse train with period T may be expressed as

x(t) = ∑_{k=−∞}^{∞} δ(t − kT).

It is illustrated in figure 3.9.

Figure 3.9 Periodic train of impulses.

Solution 3.8.
We use Eq. 3.33 and select the integration interval −T/2 ≤ t ≤ T/2, avoiding the placement of impulses at the integration limits:

a_k = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{-j(2π/T)t} dt = 1/T.

All the Fourier series coefficients of this periodic train of impulses are identical; they are also real and even. The periodic impulse train has a straightforward relation to square-wave signals such as g(t) in figure 3.10.

Figure 3.10 Periodic square wave.

The derivative of g(t) is the signal q(t) shown in figure 3.11, which can also be interpreted as the difference of two shifted versions of the impulse train x(t); that is,

q(t) = x(t + T_1) − x(t − T_1).

Figure 3.11 Derivative of the periodic square wave in figure 3.10.

Based on the time-shifting and linearity properties, we may express the Fourier coefficients b_k of q(t) in terms of the Fourier series coefficients a_k of x(t); that is,

b_k = e^{jkω_0 T_1} a_k − e^{-jkω_0 T_1} a_k
    = (1/T)[e^{jkω_0 T_1} − e^{-jkω_0 T_1}] = (2j sin(kω_0 T_1))/T.
Finally, since q(t) is the derivative of g(t), we can use the differentiation property to get

b_k = jkω_0 c_k,

where c_k are the Fourier series coefficients of g(t). Thus

c_k = b_k/(jkω_0) = (2j sin(kω_0 T_1))/(jkω_0 T) = sin(kω_0 T_1)/(kπ),   k ≠ 0,

Since c_0 is just the average value of g(t) over one period, it can be found by inspection from figure 3.10:

c_0 = 2T_1/T.

Example 3.9.
Suppose we are given the following facts about a signal x(t):
1. x(t) is a real signal.

2. x(t) is periodic with period T = 4, and it has Fourier series coefficients a_k.

3. a_k = 0 for |k| > 1.

4. The signal with Fourier coefficients b_k = e^{-jπk/2} a_{−k} is odd.

5. (1/4) ∫_4 |x(t)|² dt = 1/2.

Solution 3.9.
Let us show that the information is sufficient to determine the signal x(t) to within a sign
factor.
a. According to Fact 3, x(t) has at most three nonzero Fourier series coefficients ak : a0 , a1
and a−1 . Since the fundamental frequency ω0 = 2π /T = 2π /4 = π /2, it follows that

x(t) = a0 + a1 e jπ t/2 + a−1 e− jπ t/2 .

b. Since x(t) is real (Fact 1), the symmetry property implies that a_0 is real and a_1 = a*_{−1}; consequently,

x(t) = a_0 + a_1 e^{jπt/2} + [a_1 e^{jπt/2}]* = a_0 + 2Re{a_1 e^{jπt/2}}.

c. Based on Fact 4 and considering the time-reversal property, we note that a_{−k} corresponds to x(−t). Also, the time-shifting property indicates that multiplying the kth Fourier series coefficient by e^{-jkπ/2} = e^{-jkω_0} corresponds to shifting the signal by 1 to the right. We conclude that the coefficients b_k correspond to the signal x(−(t − 1)) = x(−t + 1), which, according to Fact 4, must be odd. Since x(t) is real, x(−t + 1) must also be real, and the Fourier series coefficients of a real, odd signal must be purely imaginary and odd. Thus, b_0 = 0 and b_{−1} = −b_1.

d. Since the time-reversal and time-shift operations cannot change the average power per period, Fact 5 holds even if x(t) is replaced by x(−t + 1). That is,

(1/4) ∫_4 |x(−t + 1)|² dt = 1/2.

Using Parseval's relation,

|b_1|² + |b_{−1}|² = 1/2.

Substituting b_{−1} = −b_1 in this equation, we obtain |b_1| = 1/2. Since b_1 is known to be purely imaginary, it must be either b_1 = j/2 or b_1 = −j/2.

e. Finally, we translate the conditions on b_0 and b_1 into equivalent statements about a_0 and a_1. First, since b_0 = 0, then a_0 = 0. With k = −1, the relation b_k = e^{-jπk/2} a_{−k} implies that a_1 = e^{-jπ/2} b_{−1} = −jb_{−1} = jb_1. Thus, if we take b_1 = j/2, then a_1 = −1/2, and therefore x(t) = −cos(πt/2). Alternatively, if we take b_1 = −j/2, then a_1 = 1/2, and therefore x(t) = cos(πt/2).
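The candidate solution can be checked against Facts 4 and 5 numerically; a sketch for x(t) = −cos(πt/2) (the grid is an arbitrary choice):

```python
import numpy as np

T = 4.0
N = 400_000
t = np.arange(N) * (T / N)
x = -np.cos(np.pi * t / 2)

# Fact 5: the average power over one period is 1/2.
print(np.round(np.sum(x**2) * (T / N) / T, 6))        # 0.5

# Fact 4: x(-t + 1) = -cos(pi(1 - t)/2) = -sin(pi t/2), an odd signal.
s = np.linspace(-2, 2, 1001)
y = lambda u: -np.cos(np.pi * (1 - u) / 2)
print(bool(np.allclose(y(s), -y(-s))))                # True
```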

Example 3.10.
Compute the trigonometric Fourier series of the square waveform signal x(t) of figure 3.12.

Figure 3.12 Square waveform for Example 3.10.

Solution 3.10.
The trigonometric series will consist of sine terms only, because this waveform is an odd function. Moreover, only odd harmonics will be present, since this waveform has half-wave symmetry. However, we will compute all coefficients to verify this. For brevity, we assume ω = 1. The a_k coefficients are found from

a_k = (1/π) ∫_0^{2π} x(t) cos(kt) dt = (1/π)[∫_0^π A cos(kt) dt + ∫_π^{2π} (−A) cos(kt) dt]
    = (A/(kπ))[sin(kt)|_0^π − sin(kt)|_π^{2π}] = (A/(kπ))[2 sin(kπ) − sin(2kπ)],

and since k is an integer (positive or negative) or zero, the terms [2 sin(kπ) − sin(2kπ)] are zero; therefore, all a_k coefficients are zero, as expected, since the square waveform has odd symmetry. Also, by inspection, the average DC value is zero, but if we attempt to verify it,

we will get the indeterminate form 0/0. To work around this problem, we evaluate a_0 directly. Then,

a_0 = (1/π)[∫_0^π A dt + ∫_π^{2π} (−A) dt] = (A/π)(π − 0 − 2π + π) = 0.

The b_k coefficients are found as follows:

b_k = (1/π) ∫_0^{2π} x(t) sin(kt) dt = (1/π)[∫_0^π A sin(kt) dt + ∫_π^{2π} (−A) sin(kt) dt]
    = (A/(kπ))[−cos(kt)|_0^π + cos(kt)|_π^{2π}] = (A/(kπ))[1 − 2cos(kπ) + cos(2kπ)],

For k even,

b_k = (A/(kπ))(1 − 2 + 1) = 0,

as expected, since the square waveform has half-wave symmetry.

For k odd,

b_k = (A/(kπ))(1 + 2 + 1) = 4A/(kπ),

and thus

b_1 = 4A/π,   b_3 = 4A/(3π),   b_5 = 4A/(5π).
Therefore, the trigonometric Fourier series for the square waveform with odd symmetry is

x(t) = (4A/π)[sin(ωt) + (1/3) sin(3ωt) + (1/5) sin(5ωt) + …] = (4A/π) ∑_{k odd} (1/k) sin(kωt).

If the given waveform has half-wave symmetry, and it is also an odd or an even function, we
can integrate from 0 to π2 , and multiply the integral by 4.
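At the midpoint of the positive half-cycle, t = π/2 (with ω = 1), the series should converge to A, since sin(kπ/2) alternates between ±1 for odd k and (4/π)(1 − 1/3 + 1/5 − …) = 1. A quick check of the truncated sum (the number of terms is an arbitrary choice):

```python
import numpy as np

A = 1.0
tm = np.pi / 2                         # midpoint of the positive half-cycle
s = sum(4 * A / (k * np.pi) * np.sin(k * tm) for k in range(1, 20000, 2))
print(round(s, 3))                     # 1.0
```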

Problems of Chapter 3

3.1. Using the definition of Fourier series, express the following terms in polar notation:
a. e jπ /4 + e− jπ /8

b. e j2 + 1 + j

c. 1 + e j4

d. e j(ω t+π /2) + e j(ω t−π /3)

3.2. By inspection, compute the complex exponential Fourier series coefficients of the fol-
lowing signals for all values of k:
a. x(t) = cos(5t − π /4)

b. x(t) = sin(t) + cos(t)


c. x(t) = cos(t − 1) + sin(t − 1/2)

d. x(t) = cos(2t) sin(3t)

e. x(t) = cos2 (5t)

f. x(t) = cos(3t) + cos(5t)

3.3. A continuous-time periodic signal x(t) has fundamental period T = 8; its only non-zero Fourier series coefficients are:

a1 = a−1 = 2, a3 = a∗−3 = 4 j.

Determine the time-domain signal by expressing x(t) in the form

x(t) = ∑_{k=0}^{∞} A_k cos(ω_k t + θ_k)

3.4. Given the following continuous-time periodic signal

x(t) = 2 + cos((2π/3) t) + 4 sin((5π/3) t)

Determine the fundamental frequency ω_0 and the Fourier series coefficients a_k such that

x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω_0 t}

3.5. Find the trigonometric Fourier coefficients (a0 , Bk , and Ck ) of the following periodic
square waveform signals:

a. The periodic pulse train shown in Figure P3.5a:
x(t) = { 2, 0 < t < 1
       { 0, 1 < t < 5

Figure P3.5a

b. The periodic pulse train shown in Figure P3.5b:
x(t) = { 1, 0 < t < T_0/2
       { 0, T_0/2 < t < T_0

Figure P3.5b

c. The periodic pulse train shown in Figure P3.5c:
x(t) = { 1,  0 < t < T_0/2
       { −1, T_0/2 < t < T_0

Figure P3.5c

3.6. Find the exponential form of the Fourier series of the periodic signals given in problem 3.5.
3.7. Find the trigonometric Fourier coefficients (a0 , Bk , and Ck ) of the following periodic
sawtooth waveform signals:

a. The periodic signal shown in Figure P3.7a:
x(t) = 2 − t,   0 < t < 2

Figure P3.7a

b. The periodic signal shown in Figure P3.7b:
x(t) = { t, 0 < t < 1
       { 0, 1 < t < 2

Figure P3.7b

c. The periodic signal shown in Figure P3.7c:
x(t) = { 2(1 − t), 0 < t < 1
       { 0,        1 < t < 2

Figure P3.7c

d. The periodic signal shown in Figure P3.7d:
x(t) = t,   −1 < t < 1

Figure P3.7d

3.8. Find the exponential form of the Fourier series of the periodic signals given in problem 3.7.

3.9. Find the Fourier series of the rectified sine wave

y(t) = A sin(πt),   0 < t < 1

3.10. Consider the periodic square wave x(t) shown in Figure P3.10.
a. Determine the complex exponential Fourier series of x(t).

b. Determine the trigonometric Fourier series of x(t).

Figure P3.10

3.11. Calculate the average power in the following signals:

a. x(t) = 4 + 2cos(2t − 30°) + cos(4t − 15°)

b. x(t) = 1 + 4cos(60πt + 30°) + 2cos(120πt − 60°)

c. x(t) = 3 + cos(3πt + 60°) + 4cos(2πt − 80°)

Chapter 4
Continuous-Time Fourier Transform

Abstract:
The Fourier transform converts time-domain signals into frequency-domain (or spectral) representations. In addition to providing spectral representations of signals, the Fourier transform is also essential for describing certain types of systems and their properties in the frequency domain. In this chapter we introduce the Fourier transform in the context of continuous-time signals and systems.

4.1 Introduction

• A periodic signal can be represented as a linear combination of complex exponentials that are harmonically related.
• An aperiodic signal can be represented as a linear combination of complex exponentials that are infinitesimally close in frequency, so the representation takes the form of an integral rather than a sum.
• In the Fourier series representation, as the period increases, the fundamental frequency decreases and the harmonically related components become closer in frequency. As the period becomes infinite, the frequency components form a continuum and the Fourier series becomes an integral.

4.1.1 Fourier Transform Representation of an Aperiodic Signal

We begin by revisiting the Fourier series representation of the continuous-time periodic square wave, defined over one period as

x(t) = { 1, |t| < T_1
       { 0, T_1 < |t| < T/2,

and repeated periodically with period T, as shown in figure 4.1.

The Fourier series coefficients ak for this square wave are


Figure 4.1 continuous-time periodic square wave.


a_k = (2 sin(kω_0 T_1))/(kω_0 T) = (2 sin(ωT_1))/(ωT) |_{ω=kω_0}, (4.1)

where ω_0 = 2π/T is the fundamental frequency. Furthermore, T a_k represents samples of an envelope function; from Eq. 4.1,

T a_k = (2 sin(ωT_1))/ω |_{ω=kω_0}, (4.2)

where 2 sin(ωT_1)/ω represents the envelope of T a_k, and the Fourier series coefficients a_k are equally spaced samples of this envelope, as interpreted in Eq. 4.2.

In figure 4.2, as T increases (equivalently, as the fundamental frequency decreases), the envelope is sampled with closer and closer spacing. As T becomes arbitrarily large, the original periodic square wave approaches a rectangular pulse; the quantity T a_k then consists of more and more closely spaced samples of the envelope, and as T → ∞ the Fourier series coefficients approach the envelope function itself.
This example illustrates the basic idea behind Fourier's development of a representation for aperiodic signals. From a frequency-domain viewpoint, an aperiodic signal typically contains components at all frequencies, not just at integer multiples of a fundamental frequency. To derive the Fourier transform for aperiodic signals, consider a signal x(t) with finite duration, that is, x(t) = 0 for |t| > T_1, as illustrated in figure 4.3(a). From this aperiodic signal, we can construct a periodic signal x̃(t) for which x(t) is one period, as indicated in figure 4.3(b). If the period T is large, then x̃(t) is identical to x(t) over a longer interval, and as T → ∞, x̃(t) = x(t) for any finite value of t.
The Fourier series representation of x̃(t), carried out over the interval −T/2 ≤ t ≤ T/2, is

x̃(t) = ∑_{k=−∞}^{∞} a_k e^{jkω_0 t}, (4.3)

a_k = (1/T) ∫_{−T/2}^{T/2} x̃(t) e^{-jkω_0 t} dt, (4.4)

Since x̃(t) = x(t) for |t| < T/2, and since x(t) = 0 outside this interval, we have

a_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{-jkω_0 t} dt = (1/T) ∫_{−∞}^{∞} x(t) e^{-jkω_0 t} dt.

Define the envelope X(jω) of T a_k as the complex-valued function

Figure 4.2 The Fourier series coefficient and their envelope for the periodic square wave for several values
of T : (a) T = 4T1 (b) T = 8T1 (c) T = 16T1 .

Figure 4.3 (a) Non-periodic signal x(t); (b) periodic signal x̃(t), constructed to be equal to x(t) over one
period.

X(jω) = ∫_{−∞}^{∞} x(t) e^{-jωt} dt. (4.5)

The Fourier series coefficients a_k of x(t) can be written as evaluations of this envelope function:

a_k = (1/T) X(jkω_0). (4.6)
118 4 Continuous-Time Fourier Transform

Then x̃(t) can be expressed in terms of X(jω) as

x̃(t) = ∑_{k=−∞}^{∞} (1/T) X(jkω_0) e^{jkω_0 t} = (1/2π) ∑_{k=−∞}^{∞} X(jkω_0) e^{jkω_0 t} ω_0. (4.7)

As T → ∞, x̃(t) = x(t) and consequently, Eq. 4.7 becomes a representation of x(t). In addition,
ω0 → 0 as T → ∞ and the right-hand side of Eq. 4.7 becomes an integral. Finally, Eq. 4.5 and
4.7 become the following Fourier transform:
x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω, (4.8)

X(jω) = ∫_{−∞}^{∞} x(t) e^{-jωt} dt. (4.9)

The function X(jω) in Eq. 4.9 describes the frequency content of an aperiodic signal and is defined as the Fourier transform of x(t). The inverse Fourier transform is given by Eq. 4.8, which expresses x(t) in a form analogous to the representation of a periodic signal in terms of its Fourier series.

4.1.2 Convergence of Fourier Transform

Equation 4.8 is valid (x̃(t) is a valid representation of the original signal x(t)) if the signal x(t) has finite energy, i.e., if it is square integrable:

∫_{−∞}^{∞} |x(t)|² dt < ∞, (4.10)

If e(t) = x̃(t) − x(t) denotes the error between x̃(t) and x(t), then

∫_{−∞}^{∞} |e(t)|² dt = 0. (4.11)

The derivation of the Fourier transform suggests a set of conditions required for its convergence, paralleling the Dirichlet conditions for the Fourier series. An alternative set of conditions that are sufficient to ensure convergence:
• Condition 1: x(t) must be absolutely integrable; that is,

∫_{−∞}^{∞} |x(t)| dt < ∞.

• Condition 2: x(t) must have a finite number of maxima and minima within any finite interval of time.
• Condition 3: x(t) must have a finite number of discontinuities within any finite interval of time. Furthermore, each of these discontinuities must be finite.

4.1.3 Examples of Continuous-Time Fourier Transform

Example 4.1.
Find the Fourier transform of the signal shown in figure 4.4:

x(t) = e−at u(t), a > 0.

Figure 4.4 x(t) = e^{-at} u(t), a > 0.

Solution 4.1.
X(jω) = ∫_0^∞ e^{-at} e^{-jωt} dt = ∫_0^∞ e^{-(a+jω)t} dt
      = −(1/(a + jω)) e^{-(a+jω)t} |_0^∞
      = 1/(a + jω),   a > 0.
The magnitude of the Fourier transform is plotted in figure 4.5.

Figure 4.5 The Fourier transform of signal x(t) = e−at u(t), a > 0.
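The closed form 1/(a + jω) can be checked by evaluating the transform integral (Eq. 4.9) numerically over a window long enough that the truncated tail is negligible; a sketch (a = 2, the window length, and the test frequencies are arbitrary choices):

```python
import numpy as np

a = 2.0
L, N = 40.0, 500_000          # window [0, L); e^{-a*L} is negligible
t = np.arange(N) * (L / N)
x = np.exp(-a * t)

def X(w):
    # Rectangle-rule evaluation of the Fourier transform integral
    return np.sum(x * np.exp(-1j * w * t)) * (L / N)

for w in (0.0, 1.0, 5.0):
    print(np.round(X(w), 4), np.round(1 / (a + 1j * w), 4))
```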

Example 4.2.
Find the Fourier transform of the following signal

x(t) = e−a|t| , a > 0.

Solution 4.2.
X(jω) = ∫_{−∞}^{∞} e^{-a|t|} e^{-jωt} dt
      = ∫_{−∞}^{0} e^{at} e^{-jωt} dt + ∫_0^{∞} e^{-at} e^{-jωt} dt
      = ∫_{−∞}^{0} e^{t(a−jω)} dt + ∫_0^{∞} e^{-t(a+jω)} dt
      = (1/(a − jω)) e^{t(a−jω)} |_{−∞}^{0} − (1/(a + jω)) e^{-t(a+jω)} |_0^{∞}
      = 1/(a − jω) + 1/(a + jω)
      = 2a/(a² + ω²).
The signal and the Fourier transform are sketched in figure 4.6

Figure 4.6 The Fourier transform of signal x(t) = e−a|t| , a > 0.

Example 4.3.
Find the Fourier transform of the following signal

x(t) = δ (t).

Solution 4.3.

X(jω) = ∫_{−∞}^{∞} δ(t) e^{-jωt} dt = 1.
As shown in figure 4.7, the unit impulse has a Fourier transform consisting of equal contributions at all frequencies.

Figure 4.7 (a) Unit impulse δ (t); (b) Fourier transform X( jω ) of unit impulse.

Example 4.4.
Find the Fourier transform of the rectangular pulse signal
{
1, |t| < T1
x(t) =
0, |t| > T1 ,

Solution 4.4.

X(jω) = ∫_{−T_1}^{T_1} e^{-jωt} dt = −(1/jω) e^{-jωt} |_{−T_1}^{T_1}
      = −(1/jω)[e^{-jωT_1} − e^{jωT_1}]
      = 2 sin(ωT_1)/ω = 2T_1 sinc(ωT_1/π),

where sinc(x) = sin(πx)/(πx). The signal and its Fourier transform are sketched in figure 4.8.

Figure 4.8 The Fourier transform of signal x(t) = 1, −T1 < t < T1 .
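The pair just derived can be verified numerically by integrating over the pulse's support; a sketch (T_1 = 1 and the test frequencies are arbitrary choices):

```python
import numpy as np

T1 = 1.0
N = 200_000
t = -T1 + np.arange(N) * (2 * T1 / N)     # grid over the support |t| < T1

def X(w):
    # x(t) = 1 on the support, so the transform integral reduces to this
    return np.sum(np.exp(-1j * w * t)) * (2 * T1 / N)

for w in (0.5, 2.0):
    print(np.round(X(w).real, 4), np.round(2 * np.sin(w * T1) / w, 4))
```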

The inverse Fourier transform is

x̃(t) = FT^{−1}{X(jω)} = (1/2π) ∫_{−∞}^{∞} (2 sin(ωT_1)/ω) e^{jωt} dω.

Since the signal x(t) is square integrable,

∫_{−∞}^{∞} |x(t) − x̃(t)|² dt = 0.

Although x̃(t) agrees with x(t) everywhere except at the discontinuities t = ±T₁, where x̃(t) converges to 1/2 (the average of the values of x(t) on the two sides of the discontinuity), truncating the frequency range exhibits the Gibbs phenomenon. Specifically, consider the integral over a finite-length interval of frequencies,
$$\tilde{x}_W(t) = \frac{1}{2\pi}\int_{-W}^{W} \frac{2\sin(\omega T_1)}{\omega}\, e^{j\omega t}\, d\omega.$$
As W → ∞, this converges to x(t) everywhere except at the discontinuities. Moreover, the reconstruction exhibits ripples near the discontinuities. The peak amplitude of these ripples does not decrease as W increases; the ripples merely become compressed toward the discontinuity, and the energy in the ripples converges to zero.
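This behavior can be illustrated numerically. For the unit pulse the truncated reconstruction can be written via the sine integral Si(u) = ∫₀ᵘ (sin v)/v dv, and the Python sketch below (T₁ = 1 and the values of W are arbitrary illustrative choices) checks that the peak overshoot stays near 9% for every W while the value at the discontinuity stays near 1/2:

```python
import numpy as np
from scipy.special import sici

T1 = 1.0  # pulse half-width (illustrative)

def x_w(t, W):
    # Truncated reconstruction: x_W(t) = [Si(W(t+T1)) - Si(W(t-T1))] / pi.
    return (sici(W * (t + T1))[0] - sici(W * (t - T1))[0]) / np.pi

t = np.linspace(0.0, 2.0, 4000)
for W in (20.0, 80.0, 320.0):
    peak = x_w(t, W).max()
    assert 1.07 < peak < 1.11            # ~9% overshoot, independent of W
    assert abs(x_w(T1, W) - 0.5) < 0.02  # value at the discontinuity -> 1/2
```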

Example 4.5.
Find the inverse Fourier transform of the following signal
$$X(j\omega) = \begin{cases} 1, & |\omega| < W \\ 0, & |\omega| > W. \end{cases}$$

Solution 4.5.
The inverse Fourier transform is
$$x(t) = \frac{1}{2\pi}\int_{-W}^{W} e^{j\omega t}\, d\omega = \left.\frac{1}{2\pi}\,\frac{1}{jt}\, e^{j\omega t}\right|_{-W}^{W} = \frac{1}{2\pi}\,\frac{1}{jt}\left[e^{jWt} - e^{-jWt}\right] = \frac{\sin(Wt)}{\pi t} = \frac{W}{\pi}\,\mathrm{sinc}\!\left(\frac{Wt}{\pi}\right).$$
The signal and its Fourier transform are sketched in Figure 4.9.

Figure 4.9 The Fourier transform of signal X( jω ) = 1, −W < ω < W .

Comparing the results of the preceding example and this example, we have
$$\text{Square wave} \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \text{Sinc function}$$

That is, if a signal is a square wave in the time domain, its Fourier transform is a sinc function; conversely, if the time-domain signal is a sinc function, its Fourier transform is a square wave. This relationship is referred to as the duality property. We also note that as the width of X(jω) increases, its inverse Fourier transform x(t) is compressed; as W → ∞, X(jω) becomes flat over all frequencies and x(t) converges to an impulse. The transform pair for several different values of W is shown in the figure.

4.2 Fourier Transform for Periodic Signals

We can construct the Fourier transform of a periodic signal directly from its Fourier series representation. The resulting transform consists of impulses in the frequency domain, with the area of each impulse proportional to the corresponding Fourier series coefficient. Consider a signal x(t) whose Fourier transform X(jω) is a single impulse of area 2π at ω = ω₀:
$$X(j\omega) = 2\pi\,\delta(\omega-\omega_0). \tag{4.12}$$
To determine x(t), we apply the inverse Fourier transform relation:
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} 2\pi\,\delta(\omega-\omega_0)\, e^{j\omega t}\, d\omega = e^{j\omega_0 t}.$$

If X(jω) has the form of a linear combination of impulses equally spaced in frequency,
$$X(j\omega) = \sum_{k=-\infty}^{\infty} 2\pi a_k\, \delta(\omega - k\omega_0), \tag{4.13}$$
then the application of Eq. 4.7 yields
$$x(t) = \sum_{k=-\infty}^{\infty} a_k\, e^{jk\omega_0 t}. \tag{4.14}$$

Example 4.6.
The Fourier series coefficients ak for the square wave shown in figure 4.10 are

$$a_k = \frac{\sin(k\omega_0 T_1)}{\pi k}.$$

Find the Fourier transform of this signal.

Solution 4.6.
The Fourier transform of this signal is
$$\mathrm{FT}\{x(t)\} = X(j\omega) = \sum_{k=-\infty}^{\infty} \frac{2\sin(k\omega_0 T_1)}{k}\, \delta(\omega - k\omega_0).$$
k=−∞

Figure 4.10 Continuous-time periodic square wave.

Example 4.7.
Calculate the Fourier transform for the following signals

$$x_1(t) = \sin(\omega_0 t), \qquad x_2(t) = \cos(\omega_0 t).$$

Solution 4.7.
The signal x₁(t) = sin(ω₀t) can be written as
$$x_1(t) = \frac{1}{2j}\left[e^{j\omega_0 t} - e^{-j\omega_0 t}\right],$$
so its Fourier series coefficients are
$$a_1 = \frac{1}{2j}, \qquad a_{-1} = -\frac{1}{2j}, \qquad a_k = 0 \;\text{ for } k \neq \pm 1.$$
Substituting these coefficients into Eq. 4.13 gives
$$X_1(j\omega) = j\pi\left[\delta(\omega+\omega_0) - \delta(\omega-\omega_0)\right].$$
Similarly, x₂(t) = cos(ω₀t) can be written as
$$x_2(t) = \frac{1}{2}\left[e^{j\omega_0 t} + e^{-j\omega_0 t}\right],$$
so
$$a_1 = a_{-1} = \frac{1}{2}, \qquad a_k = 0 \;\text{ for } k \neq \pm 1,$$
and
$$X_2(j\omega) = \pi\left[\delta(\omega+\omega_0) + \delta(\omega-\omega_0)\right].$$
2

Example 4.8.
Calculate the Fourier transform of the following signal:
$$x(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT).$$

Solution 4.8.
The Fourier series coefficients of the signal are
$$a_k = \frac{1}{T}\int_{-T/2}^{T/2} \delta(t)\, e^{-jk\omega_0 t}\, dt = \frac{1}{T}.$$
Every Fourier coefficient of the periodic impulse train has the same value, 1/T. Substituting this value of aₖ into Eq. 4.13 gives the Fourier transform
$$X(j\omega) = \sum_{k=-\infty}^{\infty} 2\pi a_k\, \delta(\omega - k\omega_0) = \frac{2\pi}{T}\sum_{k=-\infty}^{\infty} \delta\!\left(\omega - \frac{2\pi k}{T}\right).$$
Thus, the Fourier transform of a periodic impulse train in the time domain with period T is a periodic impulse train in the frequency domain with period 2π/T.

4.3 Properties of Continuous-Time Fourier Transform

The properties of the Fourier transform are crucial in the study of signals and signal processing. They provide valuable insight into how operations performed on a signal in the time domain are reflected in the frequency domain.

4.3.1 Linearity

Suppose we have two functions x₁(t) and x₂(t) with Fourier transforms X₁(jω) and X₂(jω), respectively:
$$x_1(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; X_1(j\omega), \qquad x_2(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; X_2(j\omega).$$
For constants a and b, the Fourier transform of ax₁(t) + bx₂(t) is
$$\mathrm{FT}\{a x_1(t) + b x_2(t)\} = a\,\mathrm{FT}\{x_1(t)\} + b\,\mathrm{FT}\{x_2(t)\} = a X_1(j\omega) + b X_2(j\omega). \tag{4.15}$$
Eq. 4.15 can be verified using the definition of the Fourier transform:
$$\mathrm{FT}\{a x_1(t) + b x_2(t)\} = \int_{-\infty}^{\infty} \left[a x_1(t) + b x_2(t)\right] e^{-j\omega t}\, dt = a\int_{-\infty}^{\infty} x_1(t)\, e^{-j\omega t}\, dt + b\int_{-\infty}^{\infty} x_2(t)\, e^{-j\omega t}\, dt = a X_1(j\omega) + b X_2(j\omega).$$

4.3.2 Time Shifting

If x(t) has a Fourier transform X(jω), then
$$x(t-t_0) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; e^{-j\omega t_0}\, X(j\omega), \qquad x(t+t_0) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; e^{j\omega t_0}\, X(j\omega). \tag{4.16}$$
The Fourier transform of x(t − t₀) is
$$\mathrm{FT}\{x(t-t_0)\} = \int_{-\infty}^{\infty} x(t-t_0)\, e^{-j\omega t}\, dt = \int_{-\infty}^{\infty} x(\tau)\, e^{-j\omega(\tau+t_0)}\, d\tau = e^{-j\omega t_0}\int_{-\infty}^{\infty} x(\tau)\, e^{-j\omega\tau}\, d\tau = e^{-j\omega t_0}\, X(j\omega),$$
where we changed the variable to τ = t − t₀.

As an alternative way to establish the time-shifting property, consider Eq. 4.8:
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega t}\, d\omega.$$
Replacing t by t − t₀, we obtain
$$x(t-t_0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega(t-t_0)}\, d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left(e^{-j\omega t_0}\, X(j\omega)\right) e^{j\omega t}\, d\omega.$$
This is the synthesis equation for x(t − t₀); therefore, we conclude that
$$\mathrm{FT}\{x(t-t_0)\} = e^{-j\omega t_0}\, X(j\omega).$$
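The time-shifting property can be checked numerically on the decaying exponential of Example 4.1; in this Python sketch the decay rate a, the shift t₀, and the test frequencies are arbitrary illustrative values:

```python
import numpy as np
from scipy.integrate import quad

a, t0 = 1.0, 0.7  # decay rate and time shift, arbitrary illustrative values

def ft_shifted(w):
    # Transform of x(t - t0) with x(t) = e^{-at} u(t): nonzero for t >= t0.
    re, _ = quad(lambda t: np.exp(-a * (t - t0)) * np.cos(w * t), t0, np.inf)
    im, _ = quad(lambda t: -np.exp(-a * (t - t0)) * np.sin(w * t), t0, np.inf)
    return re + 1j * im

for w in (0.0, 1.0, 3.0):
    X = 1.0 / (a + 1j * w)  # transform of the unshifted signal
    assert abs(ft_shifted(w) - np.exp(-1j * w * t0) * X) < 1e-8
    # The magnitude is unchanged by the shift; only the phase rotates.
    assert abs(abs(ft_shifted(w)) - abs(X)) < 1e-8
```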

Frequency shifting is the basis for every audio and video transmitter. If x(t) has a Fourier transform X(jω), then
$$x(t)\, e^{j\omega_0 t} \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; X(\omega-\omega_0), \qquad x(t)\, e^{-j\omega_0 t} \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; X(\omega+\omega_0). \tag{4.17}$$

Note that the frequency-shifting property is the dual of the time-shifting property of Eq. 4.16: a shift in the frequency domain corresponds to multiplication by a complex exponential in the time domain. The Fourier transform of x(t)e^{jω₀t} is
$$\mathrm{FT}\{x(t)\, e^{j\omega_0 t}\} = \int_{-\infty}^{\infty} \left[x(t)\, e^{j\omega_0 t}\right] e^{-j\omega t}\, dt = \int_{-\infty}^{\infty} x(t)\, e^{-j(\omega-\omega_0)t}\, dt = X(\omega-\omega_0).$$
Multiplying a signal x(t) by the complex exponential e^{jω₀t} is called complex modulation. Thus, Eq. 4.17 shows that complex modulation in the time domain corresponds to a shift of X(ω) in the frequency domain.

It is important to realize that shifting in time does not change the frequency content of the signal: the magnitude spectra of the shifted and unshifted signals are identical, whether the signal is delayed or advanced. The effect of the time shift appears only in the phase spectrum, namely the term −ωt₀, which is a linear function of ω.

4.3.3 Conjugation and Conjugate Symmetry

If x(t) has a Fourier transform X(jω), then
$$x^*(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; X^*(-j\omega). \tag{4.18}$$
Taking the complex conjugate of the analysis equation gives
$$X^*(j\omega) = \left[\int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt\right]^* = \int_{-\infty}^{\infty} x^*(t)\, e^{j\omega t}\, dt.$$
Replacing ω by −ω yields the Fourier transform of x*(t):
$$X^*(-j\omega) = \int_{-\infty}^{\infty} x^*(t)\, e^{-j\omega t}\, dt. \tag{4.19}$$

If x(t) is real, then X(−jω) = X*(jω). Note that if x(t) is both real and even, then X(jω) is real and even. Similarly, if x(t) is both real and odd, then X(jω) is purely imaginary and odd. A real function x(t) can be expressed as the sum of an even and an odd function, x(t) = xₑ(t) + xₒ(t), and from the linearity property
$$\mathrm{FT}\{x(t)\} = \mathrm{FT}\{x_e(t)\} + \mathrm{FT}\{x_o(t)\}.$$
For real x(t), FT{xₑ(t)} is a real function and FT{xₒ(t)} is purely imaginary, so we conclude that
$$x(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; X(j\omega), \qquad \mathrm{Ev}\{x(t)\} \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \mathcal{R}e\{X(j\omega)\}, \qquad \mathrm{Od}\{x(t)\} \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; j\,\mathcal{I}m\{X(j\omega)\}.$$

4.3.4 Differentiation and Integration

If x(t) has a Fourier transform X(jω), then the Fourier transform of the derivative of x(t) is given by
$$\frac{dx(t)}{dt} \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; j\omega\, X(j\omega). \tag{4.20}$$
Thus, differentiation in the time domain results in multiplication of X(jω) by jω in the frequency domain. To verify the differentiation property, we compute the Fourier transform using integration by parts:
$$\mathrm{FT}\!\left\{\frac{dx(t)}{dt}\right\} = \int_{-\infty}^{\infty} \frac{dx(t)}{dt}\, e^{-j\omega t}\, dt = \Big. x(t)\, e^{-j\omega t}\Big|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} x(t)(-j\omega)\, e^{-j\omega t}\, dt = j\omega\int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt = j\omega\, X(j\omega),$$
where the boundary term vanishes because x(t) → 0 as t → ±∞ for a transformable signal.
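As a numerical check of the differentiation property, the Gaussian x(t) = e^{−t²} and its known transform √π e^{−ω²/4} can be used; the test frequencies below are arbitrary illustrative values:

```python
import numpy as np
from scipy.integrate import quad

def ft(f, w):
    # Numerical Fourier transform: integral of f(t) e^{-jwt} over all t.
    re, _ = quad(lambda t: f(t) * np.cos(w * t), -np.inf, np.inf)
    im, _ = quad(lambda t: -f(t) * np.sin(w * t), -np.inf, np.inf)
    return re + 1j * im

dx = lambda t: -2 * t * np.exp(-t ** 2)  # derivative of x(t) = e^{-t^2}

for w in (0.5, 1.0, 2.0):
    X = np.sqrt(np.pi) * np.exp(-w ** 2 / 4)  # known transform of e^{-t^2}
    # FT of the derivative must equal jw times the transform of the signal.
    assert abs(ft(dx, w) - 1j * w * X) < 1e-7
```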

Differentiation in the frequency domain is the dual property of Eq. 4.20:
$$(-jt)\, x(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{dX(j\omega)}{d\omega}. \tag{4.21}$$
This can be verified by differentiating the analysis equation:
$$\frac{d}{d\omega}X(j\omega) = \frac{d}{d\omega}\left[\int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt\right] = \int_{-\infty}^{\infty} x(t)\,\frac{d}{d\omega}\, e^{-j\omega t}\, dt = \int_{-\infty}^{\infty} \left[(-jt)\, x(t)\right] e^{-j\omega t}\, dt = \mathrm{FT}\{(-jt)\, x(t)\}.$$

If the Fourier transform of x(t) is X(jω), then the Fourier transform of the running integral of x(t) is given by
$$\int_{-\infty}^{t} x(\tau)\, d\tau \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{X(j\omega)}{j\omega} + \pi X(0)\,\delta(\omega). \tag{4.22}$$
Integrating x(t) in the time domain leads to division of X(jω) by jω, plus an additive term πX(0)δ(ω) that accounts for the dc component which may appear in the integrated output; if X(0) = 0, the integral of x(t) over all time vanishes and the dc term disappears. To verify Eq. 4.22, note that
$$\int_{-\infty}^{t} x(\tau)\, d\tau = x(t) * u(t).$$
Thus, by the time convolution theorem of Section 4.3.8,
$$\mathrm{FT}\{x(t) * u(t)\} = X(j\omega)\left[\pi\delta(\omega) + \frac{1}{j\omega}\right] = \pi X(j\omega)\,\delta(\omega) + \frac{X(j\omega)}{j\omega} = \pi X(0)\,\delta(\omega) + \frac{X(j\omega)}{j\omega},$$
since X(jω)δ(ω) = X(0)δ(ω).

4.3.5 Time Scaling

If the Fourier transform of x(t) is X(jω), then for any real constant a ≠ 0,
$$x(at) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{1}{|a|}\, X\!\left(\frac{\omega}{a}\right). \tag{4.23}$$
Scaling the time variable t by the factor a causes an inverse scaling of the frequency variable ω by 1/a, as well as an amplitude scaling of X(ω/a) by 1/|a|. This implies that time compression of a signal (a > 1) results in spectral expansion, and time expansion of a signal (a < 1) results in spectral compression.
Time scaling can be verified using the definition. If a > 0, the variable changes to τ = at:
$$\mathrm{FT}\{x(at)\} = \int_{-\infty}^{\infty} x(at)\, e^{-j\omega t}\, dt = \int_{-\infty}^{\infty} x(\tau)\, e^{-j\omega\tau/a}\, \frac{1}{a}\, d\tau = \frac{1}{a}\, X\!\left(\frac{\omega}{a}\right).$$
If a < 0, it is convenient to write a = −|a| and use the variable τ = −|a|t, which gives
$$\mathrm{FT}\{x(at)\} = \frac{1}{|a|}\int_{-\infty}^{\infty} x(\tau)\, e^{-j\omega\tau/a}\, d\tau = \frac{1}{|a|}\, X\!\left(\frac{\omega}{a}\right).$$
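The scaling property can be checked by simple arithmetic on a known pair: x(t) = e^{−|t|} has X(jω) = 2/(1 + ω²), so x(at) = e^{−|a||t|} has transform 2|a|/(a² + ω²), which must equal (1/|a|) X(jω/a). The values of a and ω in this Python sketch are arbitrary (a may be negative):

```python
# Scaling check on x(t) = e^{-|t|}, whose transform is X(jw) = 2/(1 + w^2).
def X(w):
    return 2.0 / (1.0 + w ** 2)

for a in (0.5, 3.0, -2.0):
    for w in (0.0, 1.0, 4.0):
        direct = 2 * abs(a) / (a ** 2 + w ** 2)  # transform of e^{-|a||t|}
        via_property = X(w / a) / abs(a)          # (1/|a|) X(jw/a)
        assert abs(direct - via_property) < 1e-12
```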

4.3.6 Duality

An important property of Fourier transforms is the duality between the time and frequency domains. The duality property allows us to obtain both members of a dual pair of Fourier transform pairs from a single evaluation; it follows from the fact that the analysis and synthesis equations look almost identical, differing only by the factor 1/(2π) and a minus sign in the exponential of the integral. If the Fourier transform of x(t) is X(jω), then the Fourier transform of X(t) is 2πx(−ω); that is,
$$X(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; 2\pi\, x(-\omega). \tag{4.24}$$
For any transform pair, there is a dual pair with the time and frequency variables interchanged. The duality property can be verified by inspection of the Fourier and inverse Fourier transform expressions:
$$x(t) = \mathrm{FT}^{-1}\{X(j\omega)\} = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega t}\, d\omega,$$
so that
$$2\pi\, x(t) = \int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega t}\, d\omega.$$
Replacing t with −t, we obtain
$$2\pi\, x(-t) = \int_{-\infty}^{\infty} X(j\omega)\, e^{-j\omega t}\, d\omega.$$
Interchanging t and ω and comparing with Eq. 4.24,
$$2\pi\, x(-\omega) = \int_{-\infty}^{\infty} X(jt)\, e^{-j\omega t}\, dt = \mathrm{FT}\{X(jt)\}.$$
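Duality can be checked numerically on the Gaussian pair x(t) = e^{−t²} ↔ X(jω) = √π e^{−ω²/4}: transforming X(t), treated as a time signal, must give 2π x(−ω) = 2π e^{−ω²}. The test frequencies in this Python sketch are arbitrary illustrative values:

```python
import numpy as np
from scipy.integrate import quad

def ft_real_even(f, w):
    # For a real, even signal the transform reduces to the cosine integral.
    val, _ = quad(lambda t: f(t) * np.cos(w * t), -np.inf, np.inf)
    return val

# X(t): the frequency-domain Gaussian read as a function of time.
X_as_time_signal = lambda t: np.sqrt(np.pi) * np.exp(-t ** 2 / 4)

for w in (0.0, 1.0, 2.0):
    numeric = ft_real_even(X_as_time_signal, w)
    assert abs(numeric - 2 * np.pi * np.exp(-w ** 2)) < 1e-8
```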

4.3.7 Parseval’s Relation

If the Fourier transform of x(t) is X(jω), then Parseval's identity is given by
$$\int_{-\infty}^{\infty} |x(t)|^2\, dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |X(j\omega)|^2\, d\omega. \tag{4.25}$$
The quantity on the left-hand side of Eq. 4.25 is the normalized energy content E of x(t). Hence, the total energy E may be determined either by computing the energy per unit time |x(t)|² and integrating over all time t, or by computing the energy per unit frequency |X(jω)|²/(2π) and integrating over all frequencies ω. For this reason, |X(jω)|² is referred to as the energy-density spectrum of x(t).
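Parseval's relation can be verified numerically for x(t) = e^{−at}u(t), where both sides should equal 1/(2a); the decay rate a = 3 in this Python sketch is an arbitrary illustrative value:

```python
import numpy as np
from scipy.integrate import quad

a = 3.0  # decay rate, arbitrary illustrative value

# Time-domain energy of x(t) = e^{-at} u(t): integral of e^{-2at} over t >= 0.
time_energy, _ = quad(lambda t: np.exp(-2 * a * t), 0, np.inf)

# Frequency-domain energy: (1/2pi) * integral of |X(jw)|^2 = 1/(a^2 + w^2).
freq_energy, _ = quad(lambda w: 1.0 / (a ** 2 + w ** 2), -np.inf, np.inf)
freq_energy /= 2 * np.pi

assert abs(time_energy - 1 / (2 * a)) < 1e-10
assert abs(freq_energy - time_energy) < 1e-8
```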

4.3.8 Convolution Properties

The convolution of signals x(t) and h(t) in the time domain is given by
$$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t-\tau)\, d\tau.$$
Consider the convolution of h(t) with a complex exponential e^{jω₀t}:
$$h(t) * e^{j\omega_0 t} = \int_{-\infty}^{\infty} h(\tau)\, e^{j\omega_0(t-\tau)}\, d\tau = e^{j\omega_0 t}\int_{-\infty}^{\infty} h(\tau)\, e^{-j\omega_0\tau}\, d\tau = H(j\omega_0)\, e^{j\omega_0 t}.$$
Therefore, convolution in the time domain corresponds to multiplication in the frequency domain:
$$y(t) = x(t) * h(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; Y(j\omega) = H(j\omega)\, X(j\omega). \tag{4.26}$$
The Fourier transform maps the convolution of two signals in the time domain into the product of their Fourier transforms in the frequency domain. Here H(jω), the Fourier transform of the impulse response, is the frequency response of the LTI system, which completely characterizes the system.
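A discrete analogue of Eq. 4.26 is easy to check with the FFT: the DFT of a zero-padded linear convolution equals the product of the zero-padded DFTs. The random signals in this Python sketch are arbitrary:

```python
import numpy as np

# Discrete sketch of the convolution property, using arbitrary random signals.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(64)

n = len(x) + len(h) - 1   # length of the full linear convolution
y = np.convolve(x, h)     # time-domain convolution

Y = np.fft.fft(y, n)
XH = np.fft.fft(x, n) * np.fft.fft(h, n)
assert np.allclose(Y, XH)  # convolution in time <-> multiplication in frequency
```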

Similarly, the convolution of two frequency-domain functions corresponds to the multiplication of their inverse Fourier transforms in the time domain, up to a scale factor: multiplication in the time domain corresponds to convolution in the frequency domain divided by 2π,
$$x(t)\, h(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \frac{1}{2\pi}\, X(j\omega) * H(j\omega). \tag{4.27}$$

4.3.9 Multiplication Property

Multiplication of one signal by another can be thought of as using one signal to scale or modulate the amplitude of the other; consequently, the multiplication of two signals is often referred to as amplitude modulation. For two signals x(t) and y(t),
$$\mathrm{FT}\{x(t)\, y(t)\} = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(jv)\, Y[j(\omega-v)]\, dv = \frac{1}{2\pi}\, X(j\omega) * Y(j\omega).$$
The multiplication property is often referred to as the frequency convolution theorem: multiplication in the time domain becomes convolution in the frequency domain.

Example 4.9.
Using the symmetry properties of the Fourier transform, evaluate the Fourier transform of the signal
$$x(t) = e^{-a|t|}, \quad a > 0.$$
Hint: $e^{-at}u(t) \;\stackrel{\mathrm{FT}}{\longleftrightarrow}\; \dfrac{1}{a+j\omega}$.

Solution 4.9.
The signal can be decomposed as
$$x(t) = e^{-a|t|} = e^{-at}u(t) + e^{at}u(-t) = 2\left[\frac{e^{-at}u(t) + e^{at}u(-t)}{2}\right] = 2\,\mathrm{Ev}\{e^{-at}u(t)\}.$$
Since the even part of a real signal transforms to the real part of its Fourier transform,
$$X(j\omega) = 2\,\mathcal{R}e\!\left\{\frac{1}{a+j\omega}\right\} = \frac{2a}{a^2+\omega^2}.$$

Example 4.10.
Using the frequency-domain convolution property of the Fourier transform, evaluate the Fourier transform of the signal
$$x(t) = \sin(t)\cos(t).$$

Solution 4.10.
$$\begin{aligned}
\mathrm{FT}\{x(t)\} &= \frac{1}{2\pi}\left[\mathrm{FT}\{\sin(t)\} * \mathrm{FT}\{\cos(t)\}\right],\\
&= \frac{1}{2\pi}\Big[j\pi\big[\delta(\omega+1)-\delta(\omega-1)\big] * \pi\big[\delta(\omega+1)+\delta(\omega-1)\big]\Big],\\
&= \frac{j\pi}{2}\Big[\delta(\omega+1)*\delta(\omega+1) - \delta(\omega-1)*\delta(\omega-1)\Big],\\
&= \frac{j\pi}{2}\big[\delta(\omega+2)-\delta(\omega-2)\big],
\end{aligned}$$
which is the Fourier transform of $\tfrac{1}{2}\sin(2t) = \sin(t)\cos(t)$.
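The impulses at ω = ±2 say that sin(t)cos(t) is (1/2)sin(2t); that trigonometric identity can be confirmed directly in a short Python check over an arbitrary grid of times:

```python
import numpy as np

# Direct numerical check of sin(t)cos(t) = (1/2) sin(2t) on a sample grid.
t = np.linspace(-5.0, 5.0, 1001)
assert np.allclose(np.sin(t) * np.cos(t), 0.5 * np.sin(2 * t))
```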

4.4 Summary of Fourier Transform Properties and Basic Fourier Transform Pairs

Table 4.1 summarizes the properties of the continuous-time Fourier transform presented in this section, and Table 4.2 lists common Fourier transform pairs.

Table 4.1 Properties of the Continuous-Time Fourier Transform, where x(t) ↔ X(jω) and y(t) ↔ Y(jω).

Linearity:                    ax(t) + by(t)       ↔  aX(jω) + bY(jω)
Time shifting:                x(t − t₀)           ↔  e^{−jωt₀} X(jω)
Frequency shifting:           x(t) e^{jω₀t}       ↔  X[j(ω − ω₀)]
Conjugation:                  x*(t)               ↔  X*(−jω)
Time reversal:                x(−t)               ↔  X(−jω)
Time and frequency scaling:   x(at), a ≠ 0        ↔  (1/|a|) X(jω/a)
Convolution:                  x(t) ∗ y(t)         ↔  X(jω) Y(jω)
Multiplication:               x(t) y(t)           ↔  (1/2π) ∫ X(jv) Y[j(ω − v)] dv
Differentiation in time:      dx(t)/dt            ↔  jω X(jω)
Integration:                  ∫_{−∞}^{t} x(τ) dτ  ↔  (1/jω) X(jω) + π X(0) δ(ω)
Differentiation in frequency: t x(t)              ↔  j dX(jω)/dω

Conjugate symmetry for real signals: if x(t) is real, then X(jω) = X*(−jω), Re{X(jω)} = Re{X(−jω)}, Im{X(jω)} = −Im{X(−jω)}, |X(jω)| = |X(−jω)|, and ∠X(jω) = −∠X(−jω).

Real and even signals: x(t) real and even implies X(jω) real and even.
Real and odd signals: x(t) real and odd implies X(jω) purely imaginary and odd.
Even-odd decomposition of real signals: xₑ(t) = Ev{x(t)} ↔ Re{X(jω)} and xₒ(t) = Od{x(t)} ↔ j Im{X(jω)}.

Parseval's relation: ∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|² dω.

Table 4.2 Common Fourier Transform Pairs.

x(t)                                       X(jω)
δ(t)                                       1
δ(t − t₀)                                  e^{−jωt₀}
1                                          2π δ(ω)
e^{jω₀t}                                   2π δ(ω − ω₀)
cos(ω₀t)                                   π[δ(ω + ω₀) + δ(ω − ω₀)]
sin(ω₀t)                                   jπ[δ(ω + ω₀) − δ(ω − ω₀)]
u(t)                                       π δ(ω) + 1/(jω)
u(−t)                                      π δ(ω) − 1/(jω)
e^{−at} u(t), a > 0                        1/(a + jω)
t e^{−at} u(t), a > 0                      1/(a + jω)²
e^{−a|t|}, a > 0                           2a/(a² + ω²)
1/(a² + t²), a > 0                         (π/a) e^{−a|ω|}
e^{−at²}, a > 0                            √(π/a) e^{−ω²/(4a)}
p_a(t) = 1 for |t| < a, 0 for |t| > a      2 sin(aω)/ω
sin(at)/(πt)                               p_a(ω) = 1 for |ω| < a, 0 for |ω| > a
sgn(t)                                     2/(jω)
Σ_{k=−∞}^{∞} δ(t − kT)                     ω₀ Σ_{k=−∞}^{∞} δ(ω − kω₀), ω₀ = 2π/T

Problems of Chapter 4

4.1. From the basic definition, compute the Fourier transforms of the following signals:

a. x(t) = e^{−(t−2)} u(t − 3)
b. x(t) = 0 for t ≤ 0; x(t) = 2 for 0 < t < 1; x(t) = 2e^{−(t−1)} for t ≥ 1
c. x(t) = e^{−2t} u(t − 3)
d. A rectangular pulse signal x(t) = p_a(t) defined by x(t) = 1 for |t| < a and x(t) = 0 for |t| > a
e. x(t) = 1
f. x(t) = e^{jω₀t}
g. x(t) = e^{−jω₀t}
h. x(t) = u(−t)
i. x(t) = e^{at} u(−t), a > 0

4.2. From the basic definition, compute the inverse Fourier transforms of the following signals:

a. X(jω) = 1/(a + jω)². Hint: use the time convolution theorem
b. X(jω) = …/(1 + jω)². Hint: use tables of transforms and properties
c. X(jω) = e^{j4ω}/(2 + jω)². Hint: use tables of transforms and properties
d. X(jω) = 1/(2 + j(ω − 3)) + 1/(2 + j(ω + 3)). Hint: use the frequency shifting property
e. X(jω) = 4 sin(2ω − 4)/(2ω − 4) − 4 sin(2ω + 4)/(2ω + 4). Hint: use tables of transforms and properties
f. X(jω) = 2 sin(ω)/(ω(2 + jω)). Hint: use tables of transforms and properties

References

1. A.V. Oppenheim, A.S. Willsky, S.H. Nawab. Signals and Systems. 2nd Edition, New Jersey, USA: Prentice
Hall, 1997.
2. S. Haykin and B. Van Veen. Signals and Systems. 2nd Edition, New Jersey, USA: John Wiley & Sons,
2003.
