Class Notes for the Course ECSE412
Benoît Champagne and Fabrice Labeau
Department of Electrical & Computer Engineering
McGill University
with updates by Peter Kabal
Winter 2004
Contents

1 Introduction
1.1 The concepts of signal and system
1.2 Digital signal processing
1.2.1 Digital processing of analog signals
1.2.2 Basic components of a DSP system
1.2.3 Pros and cons of DSP
1.2.4 Discrete-time signal processing (DTSP)
1.3 Applications of DSP
1.4 Course organization

2 Discrete-time signals and systems
2.1 Discrete-time (DT) signals
2.2 DT systems
2.3 Linear time-invariant (LTI) systems
2.4 LTI systems described by linear constant-coefficient difference equations (LCCDE)
2.5 Problems

3 Discrete-time Fourier transform (DTFT)
3.1 The DTFT and its inverse
3.2 Convergence of the DTFT
3.3 Properties of the DTFT
3.4 Frequency analysis of LTI systems
3.5 LTI systems characterized by LCCDE
3.6 Ideal frequency-selective filters
3.7 Phase delay and group delay
3.8 Problems
4 The z-transform (ZT)
4.1 The ZT
4.2 Study of the ROC and ZT examples
4.3 Properties of the ZT
4.4 Rational ZTs
4.5 Inverse ZT
4.5.1 Inversion via PFE
4.5.2 Putting X(z) in a suitable form
4.6 The one-sided ZT
4.7 Problems

5 Z-domain analysis of LTI systems
5.1 The system function
5.2 LTI systems described by LCCDE
5.3 Frequency response of rational systems
5.4 Analysis of certain basic systems
5.4.1 First-order LTI systems
5.4.2 Second-order systems
5.4.3 FIR filters
5.5 More on magnitude response
5.6 All-pass systems
5.7 Inverse system
5.8 Minimum-phase system
5.8.1 Introduction
5.8.2 MP-AP decomposition
5.8.3 Frequency response compensation
5.8.4 Properties of MP systems
5.9 Problems

6 The discrete Fourier transform (DFT)
6.1 The DFT and its inverse
6.2 Relationship between the DFT and the DTFT
6.2.1 Finite-length signals
6.2.2 Periodic signals
© B. Champagne & F. Labeau, compiled January 23, 2004
6.3 Signal reconstruction via DTFT sampling
6.4 Properties of the DFT
6.4.1 Time reversal and complex conjugation
6.4.2 Duality
6.4.3 Linearity
6.4.4 Even and odd decomposition
6.4.5 Circular shift
6.4.6 Circular convolution
6.4.7 Other properties
6.5 Relation between linear and circular convolutions
6.5.1 Linear convolution via DFT
6.6 Problems

7 Digital processing of analog signals
7.1 Uniform sampling and the sampling theorem
7.1.1 Frequency-domain representation of uniform sampling
7.1.2 The sampling theorem
7.2 Discrete-time processing of continuous-time signals
7.2.1 Study of input-output relationship
7.2.2 Anti-aliasing filter (AAF)
7.3 A/D conversion
7.3.1 Basic steps in A/D conversion
7.3.2 Statistical model of quantization errors
7.4 D/A conversion
7.4.1 Basic steps in D/A conversion

8 Structures for the realization of DT systems
8.1 Signal flowgraph representation
8.2 Realizations of IIR systems
8.2.1 Direct form I
8.2.2 Direct form II
8.2.3 Cascade form
8.2.4 Parallel form
8.2.5 Transposed direct form II
8.3 Realizations of FIR systems
8.3.1 Direct forms
8.3.2 Cascade form
8.3.3 Linear-phase FIR systems
8.3.4 Lattice realization of FIR systems

9 Filter design
9.1 Introduction
9.1.1 Problem statement
9.1.2 Specification of H_d(ω)
9.1.3 FIR or IIR, That Is The Question
9.2 Design of IIR filters
9.2.1 Review of analog filtering concepts
9.2.2 Basic analog filter types
9.2.3 Impulse invariance method
9.2.4 Bilinear transformation
9.3 Design of FIR filters
9.3.1 Classification of GLP FIR filters
9.3.2 Design of FIR filters via windowing
9.3.3 Overview of some standard windows
9.3.4 Parks-McClellan method

10 Quantization effects
10.1 Binary number representation and arithmetic
10.1.1 Binary representations
10.1.2 Quantization errors in fixed-point arithmetic
10.2 Effects of coefficient quantization
10.2.1 Sensitivity analysis for direct-form IIR filters
10.2.2 Poles of quantized 2nd-order systems
10.3 Quantization noise in digital filters
10.4 Scaling to avoid overflow

11 Fast Fourier transform (FFT)
11.1 Direct computation of the DFT
11.2 Overview of FFT algorithms
11.3 Radix-2 FFT via decimation-in-time
11.4 Decimation-in-frequency
11.5 Final remarks

12 An introduction to Digital Signal Processors
12.1 Introduction
12.2 Why PDSPs?
12.3 Characterization of PDSPs
12.3.1 Fixed-point vs. floating-point
12.3.2 Some examples
12.3.3 Structural features of DSPs
12.4 Benchmarking PDSPs
12.5 Current trends

13 Multirate systems
13.1 Introduction
13.2 Downsampling by an integer factor
13.3 Upsampling by an integer factor
13.3.1 L-fold expansion
13.3.2 Upsampling (interpolation) system
13.4 Changing sampling rate by a rational factor
13.5 Polyphase decomposition

14 Applications of the DFT and FFT
14.1 Block FIR filtering
14.1.1 Overlap-add method
14.1.2 Overlap-save method (optional reading)
14.2 Frequency analysis via DFT/FFT
14.2.1 Introduction
14.2.2 Windowing effects

References
Chapter 1
Introduction
This chapter begins with a high-level definition of the concepts of signals and systems. This is followed by a general introduction to digital signal processing (DSP), including an overview of the basic components of a DSP system and their functionality. Some practical examples of real-life DSP applications are then discussed briefly. The chapter ends with a presentation of the course outline.
1.1 The concepts of signal and system
Signal:
• A signal can be broadly deﬁned as any quantity that varies as a function of time and/or space and has
the ability to convey information.
• Signals are ubiquitous in science and engineering. Examples include:
- Electrical signals: currents and voltages in AC circuits, radio communications signals, audio and video signals.
- Mechanical signals: sound or pressure waves, vibrations in a structure, earthquakes.
- Biomedical signals: electroencephalogram, lung and heart monitoring, X-ray and other types of images.
- Finance: time variations of a stock value or a market index.
• By extension, any series of measurements of a physical quantity can be considered a signal (temperature measurements, for instance).
Signal characterization:
• The most convenient mathematical representation of a signal is via the concept of a function, say x(t).
In this notation:
- x represents the dependent variable (e.g., voltage, pressure, etc.)
- t represents the independent variable (e.g., time, space, etc.).
2 Chapter 1. Introduction
• Depending on the nature of the independent and dependent variables, different types of signals can be identified:
- Analog signal: t ∈ R → x_a(t) ∈ R or C. When t denotes the time, we also refer to such a signal as a continuous-time signal.
- Discrete signal: n ∈ Z → x[n] ∈ R or C. When index n represents sequential values of time, we refer to such a signal as discrete-time.
- Digital signal: n ∈ Z → x[n] ∈ A, where A = {a_1, . . . , a_L} represents a finite set of L signal levels.
- Multi-channel signal: x(t) = (x_1(t), . . . , x_N(t))
- Multi-dimensional signal: x(t_1, . . . , t_N)
• Distinctions can also be made at the model level, for example: whether x[n] is considered to be
deterministic or random in nature.
Example 1.1: Speech signal
A speech signal consists of variations in air pressure as a function of time, so that it basically represents a continuous-time signal x(t). It can be recorded via a microphone that translates the local pressure variations into a voltage signal. An example of such a signal is given in Figure 1.1(a), which represents the utterance of the vowel “a”. If one wants to process this signal with a computer, it needs to be discretized in time, to accommodate the discrete-time processing capabilities of the computer (Figure 1.1(b)), and also quantized, to accommodate the finite-precision representation in a computer (Figure 1.1(c)). These represent a continuous-time, discrete-time and digital signal, respectively.
As we know from the sampling theorem, the continuous-time signal can be reconstructed from its samples, provided they are taken at a sampling rate at least twice the highest frequency component in the signal. Speech signals exhibit energy up to, say, 10 kHz. However, most of the intelligibility is conveyed in a bandwidth less than 4 kHz. In digital telephony, speech signals are filtered (with an anti-aliasing filter which removes energy above 4 kHz), sampled at 8 kHz and represented with 256 discrete (non-uniformly spaced) levels. Wideband speech (often termed commentary quality) would entail sampling at a higher rate, often 16 kHz.
Example 1.2: Digital Image

An example of a two-dimensional signal is a grayscale image, where t_1 and t_2 represent the horizontal and vertical coordinates, and x(t_1, t_2) represents some measure of the intensity of the image at location (t_1, t_2). This example can also be considered in discrete time (or rather in discrete space in this case): digital images are made up of a discrete number of points (or pixels), and the intensity of a pixel can be denoted by x[n_1, n_2]. Figure 1.2 shows an example of a digital image. The rightmost part of the figure shows a zoom on this image that clearly shows the pixels. This is an 8-bit grayscale image, i.e. each pixel (each signal sample) is represented by an 8-bit number ranging from 0 (black) to 255 (white).
System:
• A physical entity that operates on a set of primary signals (the inputs) to produce a corresponding set
of resultant signals (the outputs).
Fig. 1.1 An utterance of the vowel “a” in analog, discrete-time and digital format: (a) x(t) versus time in seconds, (b) x[n] versus sample index, (c) x_d[n] versus sample index. Sampling at 4 kHz and quantization on 4 bits (16 levels).
Fig. 1.2 Digital image, and zoom on a region of the image to show the pixels
• The operations, or processing, may take several forms: modiﬁcation, combination, decomposition,
ﬁltering, extraction of parameters, etc.
System characterization:
• A system can be represented mathematically as a transformation between two signal sets, as in x[n] ∈ S_1 → y[n] = T{x[n]} ∈ S_2. This is illustrated in Figure 1.3.

Fig. 1.3 A generic system: input x[n], transformation T{·}, output y[n]
• Depending on the nature of the signals on which the system operates, different basic types of systems may be identified:
- Analog or continuous-time system: the input and output signals are analog in nature.
- Discrete-time system: the input and output signals are discrete.
- Digital system: the input and outputs are digital.
- Mixed system: a system in which different types of signals (i.e. analog, discrete and/or digital) coexist.
1.2 Digital signal processing
1.2.1 Digital processing of analog signals
Discussion:

• Early education in engineering focuses on the use of calculus to analyze various systems and processes at the analog level:
- motivated by the prevalence of the analog signal model
- e.g.: circuit analysis using differential equations
• Yet, due to extraordinary advances made in microelectronics, the most common and powerful processing devices today are digital in nature.
• Thus, there is a strong, practical motivation to carry out the processing of analog real-world signals using such digital devices.
• This has led to the development of an engineering discipline known as digital signal processing (DSP).
Digital signal processing (DSP):
• In its most general form, DSP refers to the processing of analog signals by means of discrete-time operations implemented on digital hardware.
• From a system viewpoint, DSP is concerned with mixed systems:
- the input and output signals are analog
- the processing is done on the equivalent digital signals.
1.2.2 Basic components of a DSP system
Generic structure:
• In its most general form, a DSP system will consist of three main components, as illustrated in Figure 1.4.
• The analog-to-digital (A/D) converter transforms the analog signal x_a(t) at the system input into a digital signal x_d[n]. An A/D converter can be thought of as consisting of a sampler (creating a discrete-time signal), followed by a quantizer (creating discrete levels).
• The digital system performs the desired operations on the digital signal x_d[n] and produces a corresponding output y_d[n], also in digital form.
• The digital-to-analog (D/A) converter transforms the digital output y_d[n] into an analog signal y_a(t) suitable for interfacing with the outside world.
• In some applications, the A/D or D/A converters may not be required; we extend the meaning of DSP systems to include such cases.
Fig. 1.4 A digital signal processing system: x_a(t) → A/D → x_d[n] → digital system → y_d[n] → D/A → y_a(t)
A/D converter:
• A/D conversion can be viewed as a two-step process, as illustrated in Figure 1.5.

Fig. 1.5 A/D conversion: x_a(t) → sampler → x[n] → quantizer → x_d[n]
• Sampler: in which the analog input is transformed into a discrete-time signal, as in x_a(t) → x[n] = x_a(nT_s), where T_s is the sampling period.
• Quantizer: in which the discrete-time signal x[n] ∈ R is approximated by a digital signal x_d[n] ∈ A, with only a finite set A of possible levels.
• The number of representation levels in the set A is hardware defined, typically 2^b where b is the number of bits in a word.
• In many systems, the set of discrete levels is uniformly spaced. This is the case, for instance, for WAVE files used to store audio signals. For WAVE files, the sampling rate is often 44.1 kHz (the sampling rate used for audio CDs) or 48 kHz (used in studio recording). The level resolution for WAVE files is most often 16 bits per sample, but some systems use up to 24 bits per sample.
• In some systems, the set of discrete levels is non-uniformly spaced. This is the case for digital telephony. Nearly all telephone calls are digitized to 8-bit resolution. However, the levels are not equally spaced: the spacing between levels increases with increasing amplitude.
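The two-step sampler/quantizer model can be made concrete with a short numerical sketch. This is an illustration only, not a model of any particular converter: the test signal, the sampling rate and the midrise rounding rule are all choices made for the example.

```python
import numpy as np

def sample(xa, Ts, N):
    """Ideal sampler: x[n] = xa(n * Ts) for n = 0, ..., N-1."""
    n = np.arange(N)
    return xa(n * Ts)

def quantize_uniform(x, b, xmax=1.0):
    """Uniform (midrise) quantizer with 2**b levels covering [-xmax, xmax)."""
    L = 2 ** b                           # number of representation levels
    delta = 2 * xmax / L                 # spacing between levels
    k = np.floor(x / delta)              # index of the cell containing each sample
    k = np.clip(k, -L // 2, L // 2 - 1)  # saturate out-of-range samples
    return (k + 0.5) * delta             # represent each sample by its cell midpoint

# Example: a 1 kHz sinusoid sampled at 8 kHz and quantized to b = 4 bits
xa = lambda t: 0.9 * np.sin(2 * np.pi * 1000 * t)
x = sample(xa, Ts=1 / 8000, N=16)   # discrete-time signal x[n]
xd = quantize_uniform(x, b=4)       # digital signal x_d[n], 16 levels
```

For inputs that do not saturate the quantizer, the error |x_d[n] - x[n]| is bounded by half the level spacing (here 0.0625), the fact underlying the statistical model of quantization errors studied later.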
Digital system:
• The digital system is functionally similar to a microprocessor: it has the ability to perform mathematical operations on a discrete-time basis and can store intermediate results of computation in internal memory.
• The operations performed by the digital system can usually be described by means of an algorithm, on which its implementation is based.
• The implementation of the digital system can take different forms:
- Hardwired: in which dedicated digital hardware components are specially configured to accomplish the desired processing task.
- Softwired: in which the desired operations are executed via a programmable digital signal processor (PDSP) or a general computer programmed to this end.
• The following distinctions are also important:
- Real-time system: the computing associated with each sampling interval can be accomplished in a time ≤ the sampling interval.
- Off-line system: a non-real-time system which operates on stored digital signals. This requires the use of external data storage units.
D/A converter:
This operation can also be viewed as a two-step process, as illustrated in Figure 1.6.

Fig. 1.6 D/A conversion: y_d[n] → pulse train generator → ŷ_a(t) → interpolator → y_a(t)
• Pulse train generator: in which the digital signal y_d[n] is transformed into a sequence of scaled, analog pulses.
• Interpolator: in which the high-frequency components of ŷ_a(t) are removed via lowpass filtering to produce a smooth analog output y_a(t).

This two-step representation is a convenient mathematical model of the actual D/A conversion, though, in practice, one device takes care of both steps.
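If the interpolator is idealized as a perfect lowpass filter, the D/A output is the bandlimited (sinc) interpolation of the samples. The sketch below assumes that idealization (a real converter uses a pulse generator such as a zero-order hold followed by an analog smoothing filter) and checks that the reconstruction passes through the original samples.

```python
import numpy as np

def sinc_interpolate(yd, Ts, t):
    """Ideal bandlimited interpolation of samples yd taken with period Ts:
    y_a(t) = sum_n yd[n] * sinc((t - n*Ts) / Ts).
    np.sinc(u) is sin(pi*u)/(pi*u), i.e. an ideal lowpass interpolator
    with cutoff at the Nyquist frequency 1/(2*Ts)."""
    n = np.arange(len(yd))
    # (len(t), len(yd)) matrix of normalized time offsets, then a weighted sum
    return np.sinc((t[:, None] - n * Ts) / Ts) @ yd

# Reconstruct a 100 Hz sinusoid from samples taken every Ts = 1 ms
Ts = 1e-3
n = np.arange(64)
yd = np.sin(2 * np.pi * 100 * n * Ts)
ya = sinc_interpolate(yd, Ts, t=n * Ts)
# At the sample instants, sinc(m - n) = 1 for m == n and ~0 otherwise,
# so the reconstruction reproduces the samples exactly.
```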
1.2.3 Pros and cons of DSP
Advantages:
• Robustness:
- Signal levels can be regenerated. For binary signals, the zeros and ones can be easily distinguished even in the presence of noise, as long as the noise is small enough. The process of regeneration makes a hard decision between a zero and a one, effectively stripping off the noise.
- Precision is not affected by external factors. This means that results are reproducible.
• Storage capability:
- A DSP system can be interfaced to low-cost devices for storage. Retrieving stored digital signals (often in binary form) results in the regeneration of clean signals.
- Allows for off-line computations.
• Flexibility:
- Easy control of system accuracy via changes in sampling rate and number of representation bits.
- Software programmable ⇒ implementation and fast modification of complex processing functions (e.g. self-tunable digital filter).
• Structure:
- Easy interconnection of DSP blocks (no loading problem).
- Possibility of sharing a processor between several tasks.
Disadvantages:
• Cost/complexity added by A/D and D/A conversion.
• Input signal bandwidth is technology limited.
• Quantization effects. Discretization of the levels adds quantization noise to the signal.
• Simple conversion of a continuous-time signal to a binary stream of data involves an increase in the bandwidth required for transmission of the data. This, however, can be mitigated by using compression techniques. For instance, coding an audio signal using MP3 techniques results in a signal which uses much less bandwidth for transmission than a WAVE file.
1.2.4 Discretetime signal processing (DTSP)
Equivalence of analog and digital signal processing
• It is not at all clear that an arbitrary analog system can be realized as a DSP system.
• Fortunately, for the important class of linear timeinvariant systems, this equivalence can be proved
under the following conditions:
 the number of representation levels provided by the digital hardware is sufﬁciently large that
quantization errors may be neglected.
 the sampling rate is larger than twice the largest frequency contained in the analog input signal
(Sampling Theorem).
The DTSP paradigm
• Based on these considerations, it is convenient to break down the study of DSP into two distinct sets
of issues:
 Discretetime signal processing (DTSP)
 Study of quantization effects
• The main object of DTSP is the study of DSP systems under the assumption that finite-precision effects may be neglected ⇒ the DTSP paradigm.
• The study of quantization effects is concerned with practical issues resulting from the use of finite-precision digital hardware in the implementation of DTSP systems.
1.3 Applications of DSP
Typical applications:
• Signal enhancement via frequency selective ﬁltering
• Echo cancellation in telephony:
 Electric echoes resulting from impedance mismatch and imperfect hybrids.
 Acoustic echoes due to coupling between loudspeaker and microphone.
• Compression and coding of speech, audio, image, and video signals:
 Low bitrate codecs (coder/decoder) for digital speech transmission.
 Digital music: CD, DAT, DCC, MD,...and now MP3
 Image and video compression algorithms such as JPEG and MPEG
• Digital simulation of physical processes:
 Sound wave propagation in a room
 Baseband simulation of radio signal transmission in mobile communications
• Image processing:
 Edge and shape detection
 Image enhancement
Example 1.3: Electric echo cancellation
In classic telephony (see Figure 1.7), the signal transmitted through a line must pass through a hybrid before being sent to the receiving telephone. The hybrid is used to connect the local loops serving the customers (2-wire connections) to the main network (4-wire connections). The role of the hybrid is to filter incoming signals so that they are routed to the right line: signals coming from the network are passed on to the telephone set, and signals from the telephone set are passed on to the transmitting line of the network. This separation is never perfect, due to impedance mismatch and imperfect hybrids. For instance, the signal received from the network (on the left side of Figure 1.7) can partially leak into the transmitting path of the network and be sent back to the transmitter (on the right side), with an attenuation and a propagation delay, causing an echo to be heard by the transmitter.

To combat such electric echoes, a signal processing device is inserted at each end of the channel. The incoming signal is monitored, processed through a system which imitates the effect of coupling, and then subtracted from the outgoing signal.
Fig. 1.7 Illustration of electrical echo in classic telephone lines: a hybrid at each end connects receiver and transmitter, with the direct signal path and the echo signal path between them.
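Such an echo-cancelling device is commonly built as an adaptive FIR filter. The following sketch uses the LMS algorithm, a standard choice though not one prescribed by these notes, with a hypothetical echo path (a 5-sample delay with gain 0.3) standing in for the hybrid coupling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Far-end (incoming) signal, and an unknown echo path: 5-sample delay, gain 0.3
x = rng.standard_normal(5000)
h_echo = np.zeros(8)
h_echo[5] = 0.3
echo = np.convolve(x, h_echo)[:len(x)]   # signal leaking back toward the far end

M = 8                      # number of canceller taps (must cover the echo delay)
w = np.zeros(M)            # adaptive filter imitating the coupling
mu = 0.01                  # LMS step size
e = np.zeros(len(x))       # residual signal after echo subtraction
for n in range(M - 1, len(x)):
    u = x[n - M + 1:n + 1][::-1]   # most recent M far-end samples, newest first
    e[n] = echo[n] - w @ u         # subtract the estimated echo
    w += mu * e[n] * u             # LMS update: steer taps toward the echo path
```

With no near-end speech in this sketch, the residual e[n] decays toward zero as the taps converge to the true echo path (w[5] approaches 0.3).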
Example 1.4: Edge detection
An edge detection system can be easily devised for grayscale images. It is convenient to study the principle in one dimension before going to two dimensions. Figure 1.8(a) illustrates what we expect an ideal edge to look like: it is a transition between two flat regions, i.e. two regions with approximately the same grayscale levels. Of course, in practice, non-ideal edges will exist, with smoother transitions and non-flat regions. For the sake of explanation, let us concentrate on this ideal edge. Figure 1.8(b) shows the impulse response of a filter that will enable edge detection: the sum of all its samples is equal to zero, so that the convolution of a flat region with this impulse response will yield 0, as the central peak will be compensated by the equal side values. On the other hand, when the convolution takes place on the edge, the values on the left of the peak and on the right of the peak will no longer be the same, so that the output will be nonzero. So, to detect edges in a signal automatically, one only has to filter it with a filter like the one shown in Figure 1.8(b) and threshold the output. A result of this procedure is shown in Figure 1.9 for the image shown earlier.
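In one dimension, the whole procedure fits in a few lines. The taps below are illustrative rather than the exact filter of Figure 1.8(b), but they have the structure described: a central peak whose equal side values compensate it, so the output is zero on flat regions and nonzero at a transition.

```python
import numpy as np

def detect_edges(x, threshold=0.5):
    """Filter with a zero-sum edge detector and threshold the output magnitude."""
    h = np.array([-0.5, 1.0, -0.5])       # side values compensate the central peak
    y = np.convolve(x, h, mode="same")    # yields 0 on flat regions of any level
    return np.abs(y) > threshold          # True where an edge is detected

# An ideal edge: two flat regions with a step between samples 9 and 10
x = np.concatenate([np.zeros(10), 2.0 * np.ones(10)])
edges = detect_edges(x)   # flags the transition, quiet on the flat regions
# (mode="same" zero-pads the ends, so the very last sample may also be flagged)
```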
Fig. 1.8 (a) An ideal edge signal in one dimension and (b) an edge detecting ﬁlter
Fig. 1.9 Illustration of an edge detection system on image of Figure 1.2.
1.4 Course organization
Three main parts:
• Part I: Basic tools for the analysis of discrete-time signals and systems
- Time-domain and transform-domain analysis
- Discrete-time Fourier transform (DTFT) and z-transform
- Discrete Fourier transform (DFT)
• Part II: Issues related to the design and implementation of DSP systems:
- Sampling and reconstruction of analog signals, A/D and D/A
- Structures for realization of digital filters, signal flowgraphs
- Digital filter design (FIR and IIR)
- Study of finite-precision effects (quantization noise, ...)
- Fast computation of the DFT (FFT)
- Programmable digital signal processors (PDSP)
• Part III: Selected applications and advanced topics.
- Applications of the DFT to frequency analysis
- Multirate digital signal processing
- Introduction to adaptive filtering
Chapter 2
Discrete-time signals and systems
2.1 Discrete-time (DT) signals
Deﬁnition:
A DT signal is a sequence of real or complex numbers, that is, a mapping from the set of integers Z into
either R or C, as in:
n ∈ Z →x[n] ∈ R or C (2.1)
• n is called the discrete-time index.
• x[n], the nth number in the sequence, is called a sample.
• To refer to the complete sequence, one of the following notations may be used: x, {x[n]} or even x[n]
if there is no possible ambiguity.
1
• Unless otherwise speciﬁed, it will be assumed that the DT signals of interest may take on complex
values, i.e. x[n] ∈ C.
• We shall denote by S the set of all complex valued DT signals.
Description:
There are several alternative ways of describing the sample values of a DT signal. Some of the most common
are:
• Sequence notation:
x = {. . . , 0, 0, 1̄, 1/2, 1/4, 1/8, . . .}     (2.2)
where the bar on top of symbol 1 indicates origin of time (i.e. n = 0)
1
The latter is a misuse of notation, since x[n] formally refers to the nth signal sample. Following a common practice in the DSP
literature, we shall often use the notation x[n] to refer to the complete sequence, in which case the index n should be viewed as a
dummy place holder.
• Graphical:
(stem plot of the sample values x[n] versus n)
• Explicit mathematical expression:
x[n] = { 0,      n < 0,
       { 2^{-n}, n ≥ 0.     (2.3)
• Recursive approach:
x[n] = { 0,             n < 0,
       { 1,             n = 0,
       { (1/2) x[n-1],  n > 0.     (2.4)
Depending on the speciﬁc sequence x[n], some approaches may lead to more compact representation than
others.
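As a quick sanity check, the explicit form (2.3) and the recursive form (2.4) can be compared numerically; a minimal sketch:

```python
# The explicit form (2.3) and the recursive form (2.4) describe the same
# sequence: x[n] = 2^{-n} for n >= 0, and x[n] = 0 for n < 0.

def x_explicit(n):
    return 0.0 if n < 0 else 2.0 ** (-n)

def x_recursive(n):
    if n < 0:
        return 0.0
    if n == 0:
        return 1.0
    return 0.5 * x_recursive(n - 1)   # x[n] = (1/2) x[n-1]

samples = [(x_explicit(n), x_recursive(n)) for n in range(-2, 6)]
```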
Some basic DT signals:
• Unit pulse:
δ[n] = { 1, n = 0,
       { 0, otherwise.     (2.5)
• Unit step:
u[n] = { 1, n ≥ 0,
       { 0, n < 0.     (2.6)
• Exponential sequence:
x[n] = A α^n, n ∈ Z, for some A, α ∈ C     (2.7)
• Complex exponential sequence (CES):
If |α| = 1 in (2.7), i.e. α = e^{jω} for some ω ∈ R, we have
x[n] = A e^{jωn}, n ∈ Z     (2.8)
where ω is called the angular frequency (in radians).
 - One can show that a CES is periodic with period N, i.e. x[n+N] = x[n], if and only if ω = 2πk/N for some integer k.
 - If ω₂ = ω₁ + 2π, then the two CES signals x₂[n] = e^{jω₂n} and x₁[n] = e^{jω₁n} are indistinguishable (see also Figure 2.1 below).
 - Thus, for DT signals, the concept of frequency is really limited to an interval of size 2π, typically [−π, π].
Example 2.1: Uniform sampling
In DSP applications, a common way of generating DT signals is via uniform (or periodic) sampling of an analog signal x_a(t), t ∈ R, as in:
x[n] = x_a(nT_s), n ∈ Z     (2.9)
where T_s > 0 is called the sampling period.
For example, consider an analog complex exponential signal given by x_a(t) = e^{j2πFt}, where F denotes the analog frequency (in units of 1/time). Uniform sampling of x_a(t) results in the discrete-time CES
x[n] = e^{j2πFnT_s} = e^{jωn}     (2.10)
where ω = 2πFT_s is the angular frequency (dimensionless).
Figure 2.1 illustrates the limitation of the concept of frequency in the discrete-time domain. The continuous and dash-dotted lines respectively show the real part of the analog complex exponential signals e^{jωt} and e^{j(ω+2π)t}. Upon uniform sampling at integer values of t (i.e. using T_s = 1 in (2.9)), the same sample values are obtained for both analog exponentials, as shown by the solid bullets in the figure. That is, the DT CES e^{jωn} and e^{j(ω+2π)n} are indistinguishable, even though the original analog signals are different. This is a simplified illustration of an important phenomenon known as frequency aliasing.
As we saw in the example, for sampled data signals (discrete-time signals formed by sampling continuous-time signals), there are several frequency variables: F, the frequency variable (units Hz) of the continuous-time signal, and ω, the (normalized) radian frequency of the discrete-time signal. These are related by
ω = 2πF/F_s,     (2.11)
where F_s = 1/T_s is the sampling frequency. One could, of course, add a radian version of F (Ω = 2πF) or a natural frequency version of ω (f = ω/(2π)).
For a signal sampled at F_s, the frequency interval −F_s/2 ≤ F ≤ F_s/2 is mapped to the interval −π ≤ ω ≤ π.
Basic operations on signals:
• Let S denote the set of all DT signals.
Fig. 2.1 Illustration of the limitation of frequencies in discretetime.
• We deﬁne the following operations on S:
scaling: (αx)[n] =αx[n], where α ∈ C (2.12)
addition: (x +y)[n] = x[n] +y[n] (2.13)
multiplication: (xy)[n] = x[n]y[n] (2.14)
• The set S, equipped with addition and scaling, is a vector space.
Classes of signals:
The following subspaces of S play an important role:
• Energy signals: all x ∈ S with finite energy, i.e.
E_x ≜ ∑_{n=−∞}^{∞} |x[n]|² < ∞     (2.15)
• Power signals: all x ∈ S with finite power, i.e.
P_x ≜ lim_{N→∞} (1/(2N+1)) ∑_{n=−N}^{N} |x[n]|² < ∞     (2.16)
• Bounded signals: all x ∈ S that can be bounded, i.e. we can find B_x > 0 such that |x[n]| ≤ B_x for all n ∈ Z
• Absolutely summable signals: all x ∈ S such that ∑_{n=−∞}^{∞} |x[n]| < ∞
Discrete convolution:
• The discrete convolution of two signals x and y in S is defined as
(x ∗ y)[n] ≜ ∑_{k=−∞}^{∞} x[k] y[n−k]     (2.17)
The notation x[n] ∗ y[n] is often used instead of (x ∗ y)[n].
• The convolution of two arbitrary signals may not exist. That is, the sum in (2.17) may diverge.
However, if both x and y are absolutely summable, x ∗y is guaranteed to exist.
• The following properties of ∗ may be proved easily:
(a) x ∗y = y ∗x (2.18)
(b) (x ∗y) ∗z = x ∗(y ∗z) (2.19)
(c) x ∗δ = x (2.20)
• For example, (a) is equivalent to
∑_{k=−∞}^{∞} x[k] y[n−k] = ∑_{k=−∞}^{∞} y[k] x[n−k]     (2.21)
which can be proved by changing the index of summation from k to k′ = n − k in the LHS summation (try it!)
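For finite-length sequences, properties (a) and (c) are easy to check numerically; a sketch using NumPy's convolve, which implements (2.17) for finite supports:

```python
import numpy as np

# Numerical check of convolution properties (a) commutativity and
# (c) x * delta = x, for two arbitrary finite-length sequences.
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 0.0, 4.0])
delta = np.array([1.0])            # unit pulse

conv_xy = np.convolve(x, y)        # x * y
conv_yx = np.convolve(y, x)        # y * x
conv_xd = np.convolve(x, delta)    # x * delta
```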
2.2 DT systems
Deﬁnition:
A DT system is a mapping T from S into itself. That is
x ∈ S →y = Tx ∈ S (2.22)
An equivalent block diagram form is shown in Figure 2.2. We refer to x as the input signal or excitation, and
to y as the output signal, or response.
Fig. 2.2 A generic DiscreteTime System seen as a signal mapping
• Note that y[n], the system output at discrete-time n, generally depends on x[k] for all values of k ∈ Z.
• Even though the notation y[n] = T{x[n]} is often used, the alternative notation y[n] = (Tx)[n] is more precise.
• Some basic systems are described below.
Time reversal:
• Deﬁnition:
y[n] = (Rx)[n] ≜ x[−n]     (2.23)
• Graphical interpretation: mirror image about origin (see Figure 2.3)
Fig. 2.3 Illustration of time reversal: left, original signal; right: result of time reversal.
Delay or shift by integer k:
• Deﬁnition:
y[n] = (D_k x)[n] ≜ x[n−k]     (2.24)
• Interpretation:
 k ≥0 ⇒graph of x[n] shifted by k units to the right
 k < 0 ⇒graph of x[n] shifted by k units to the left
• Application: any signal x ∈ S can be expressed as a linear combination of shifted impulses:
x[n] = ∑_{k=−∞}^{∞} x[k] δ[n−k]     (2.25)
Other system examples:
• Moving average system:
y[n] = (1/(M+N+1)) ∑_{k=−M}^{N} x[n−k]     (2.26)
This system can be used to smooth out rapid variations in a signal, in order to easily view long-term trends.
• Accumulator:
y[n] = ∑_{k=−∞}^{n} x[k]     (2.27)
Example 2.2: Application of Moving Average
An example of moving average filtering is given in Figure 2.4: the top part of the figure shows daily NASDAQ index values over a period of almost two years. The bottom part of the figure shows the same signal after applying a moving average system to it, with M + N + 1 = 10. Clearly, the output of the moving average system is smoother and enables a better view of longer-term trends.
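The smoothing effect can be reproduced with synthetic data; the trend, noise level and window length below are illustrative assumptions, not the NASDAQ data of Figure 2.4:

```python
import numpy as np

# Moving-average smoothing sketch: a slow trend plus daily fluctuations,
# smoothed with a length-10 moving average (M + N + 1 = 10).
rng = np.random.default_rng(0)
n = np.arange(500)
trend = 3000 + 1000 * np.sin(2 * np.pi * n / 500)   # slow underlying trend
x = trend + rng.normal(0, 200, n.size)              # noisy observations

L = 10
y = np.convolve(x, np.ones(L) / L, mode='valid')    # length-10 average

# The smoothed signal deviates less from the underlying trend:
err_raw = np.std(x[:y.size] - trend[:y.size])
err_smooth = np.std(y - trend[:y.size])
```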
Fig. 2.4 Effect of a moving average ﬁlter. (Sample values are connected by straight lines to
enable easier viewing)
Basic system properties:
• Memoryless: y[n] = (Tx)[n] is a function of x[n] only.
Example 2.3:
y[n] = (x[n])² is memoryless, but y[n] = (1/2)(x[n−1] + x[n]) is not.
Memoryless systems are also called static systems. Systems with memory are termed dynamic sys
tems. Further, systems with memory can have ﬁnite (length) memory or inﬁnite (length) memory.
• Causal: y[n] only depends on values x[k] for k ≤n.
• Anticausal: y[n] only depends on values x[k] for k ≥ n.²
Example 2.4:
y[n] = (Tx)[n] = ∑_{k=−∞}^{n} x[k] is causal; y[n] = ∑_{k=n}^{∞} x[k] is anticausal.
• Linear: for any α₁, α₂ ∈ C and x₁, x₂ ∈ S,
T(α₁x₁ + α₂x₂) = α₁T(x₁) + α₂T(x₂)     (2.28)
² This definition is not consistent in the literature. Some authors prefer to define an anticausal system as one where y[n] only depends on values x[k] for k > n, i.e. the present output depends only on future inputs, not even on the present input sample.
Example 2.5:
The moving average system is a good example of a linear system, but e.g. y[n] = (x[n])² is obviously not linear.
• Timeinvariant: for any k ∈ Z,
T(D_k x) = D_k(Tx)     (2.29)
or equivalently, T{x[n]} = y[n] ⇒T{x[n−k]} = y[n−k].
Example 2.6:
The moving average system can easily be shown to be time invariant by just replacing n by n−k
in the system deﬁnition (2.26). On the other hand, a system deﬁned by y[n] = (Tx)[n] = x[2n], i.e.
a decimation system, is not time invariant. This is easily seen by considering a delay of k = 1:
without delay, y[n] = x[2n]; with a delay of 1 sample in the input, the ﬁrst sample of the output is
x[−1], which is different from y[−1] = x[−2].
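This counterexample is easy to verify numerically; a minimal sketch:

```python
# Example 2.6 in code: delaying the input of the decimator y[n] = x[2n]
# does not simply delay its output, so the system is not time-invariant.

def decimate(x):                       # y[n] = x[2n]
    return x[::2]

x = [0, 1, 2, 3, 4, 5, 6, 7]
y = decimate(x)                        # output for the original input

x_delayed = [0] + x[:-1]               # x[n-1] (a zero shifted in)
y_from_delayed = decimate(x_delayed)   # actual output for delayed input
y_delayed = [0] + y[:-1]               # what time-invariance would predict
```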
• Stable: x bounded ⇒ y = Tx bounded. That is, if |x[n]| ≤ B_x for all n, then we can find B_y such that |y[n]| ≤ B_y for all n.
Remarks:
In online processing, causality of the system is very important. A causal system only needs the past and
present values of the input to compute the current output sample y[n]. A noncausal system would require
at least some future values of the input to compute its output. If only a few future samples are needed, this
translates into a processing delay, which may not be acceptable for realtime processing; on the other hand,
some noncausal systems require the knowledge of all future samples of the input to compute the current
output sample: these are of course not suited for online processing.
Stability is also a very desirable property of a system. When a system is implemented, it is supposed to meet
certain design speciﬁcations, and play a certain role in processing its input. An unstable system could be
driven to an unbounded output by a bounded input, which is of course an undesirable behaviour.
2.3 Linear time-invariant (LTI) systems
Motivation:
Discrete-time systems that are both linear and time-invariant (LTI) play a central role in digital signal processing:
• Many physical systems are either LTI or approximately so.
• Many efficient tools are available for the analysis and design of LTI systems (e.g. Fourier analysis).
Fundamental property:
Suppose T is LTI and let y = Tx (x arbitrary). Then
y[n] = ∑_{k=−∞}^{∞} x[k] h[n−k]     (2.30)
where h ≜ Tδ is known as the unit pulse (or impulse) response of T.
Proof: Recall that for any DT signal x, we can write
x = ∑_{k=−∞}^{∞} x[k] D_k δ     (2.31)
Invoking linearity and then time-invariance of T, we have
y = Tx = ∑_{k=−∞}^{∞} x[k] T(D_k δ)     (2.32)
       = ∑_{k=−∞}^{∞} x[k] D_k(Tδ) = ∑_{k=−∞}^{∞} x[k] D_k h     (2.33)
which is equivalent to (2.30).
Discussion:
• The inputoutput relation for the LTI system may be expressed as a convolution sum:
y = x ∗h (2.34)
• For LTI system, knowledge of impulse response h = Tδ completely characterizes the system. For any
other input x, we have Tx = x ∗h.
• Graphical interpretation: to compute the sample values of y[n] according to (2.34), or equivalently (2.30),
one may proceed as follows:
 Time reverse sequence h[k] ⇒h[−k]
 Shift h[−k] by n samples ⇒h[−(k −n)] = h[n−k]
 Multiply sequences x[k] and h[n−k] and sum over k ⇒y[n]
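The recipe above translates directly into code; a minimal (inefficient, for illustration only) implementation for finite sequences whose supports start at n = 0:

```python
# Direct implementation of the convolution sum (2.30): for each n,
# multiply x[k] by the time-reversed, shifted h[n-k] and sum over k.

def convolve_sum(x, h):
    """Full convolution of two finite sequences (supports start at 0)."""
    N = len(x) + len(h) - 1
    y = []
    for n in range(N):
        acc = 0.0
        for k in range(len(x)):          # sum over k of x[k] h[n-k]
            if 0 <= n - k < len(h):
                acc += x[k] * h[n - k]
        y.append(acc)
    return y

y = convolve_sum([1, 2, 3], [1, 1])
```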
Example 2.7: Impulse response of the accumulator
Consider the accumulator given by (2.27). There are different ways of deriving the impulse response
here. By setting x[n] =δ[n] in (2.27), we obtain:
h[n] = ∑_{k=−∞}^{n} δ[k] = { 1, n ≥ 0,
                           { 0, n < 0.     (2.35)
     = u[n]
An alternative approach is via modiﬁcation of (2.27), so as to directly reveal the convolution format
in (2.30). For instance, by simply changing the index of summation from k to l = n−k, equation (2.27)
then becomes:
y[n] = ∑_{l=0}^{+∞} x[n−l] = ∑_{l=−∞}^{+∞} u[l] x[n−l],
so that by identiﬁcation, h[n] = u[n].
Causality:
An LTI system is causal iff h[n] = 0 for n < 0.
Proof: The inputoutput relation for an LTI system can be expressed as:
y[n] = ∑_{k=−∞}^{∞} h[k] x[n−k]     (2.36)
     = · · · + h[−1]x[n+1] + h[0]x[n] + h[1]x[n−1] + · · ·     (2.37)
Clearly, y[n] only depends on values x[m] for m ≤ n iff h[k] = 0 for k < 0.
Stability:
An LTI system is stable iff the sequence h[n] is absolutely summable, that is ∑_{n=−∞}^{∞} |h[n]| < ∞.
Proof: Suppose that h is absolutely summable, that is ∑_n |h[n]| = M_h < ∞. Then, for any bounded input sequence x, i.e. such that |x[n]| ≤ B_x < ∞, we have for the corresponding output sequence y:
|y[n]| = |∑_k x[n−k] h[k]| ≤ ∑_k |x[n−k]| |h[k]| ≤ ∑_k B_x |h[k]| = B_x M_h < ∞     (2.38)
which shows that y is bounded. Now, suppose that h[n] is not absolutely summable, i.e. ∑_{n=−∞}^{∞} |h[n]| = ∞. Consider the input sequence defined by
x[−n] = { h[n]∗/|h[n]|  if h[n] ≠ 0,
        { 0             if h[n] = 0.
Note that |x[n]| ≤ 1 so that x is bounded. We leave it as an exercise to the student to verify that in this case, y[0] = +∞, so that the output sequence y is unbounded and the corresponding LTI system is not stable.
Example 2.8:
Consider the LTI system with impulse response h[n] = α^n u[n].
• Causality: h[n] = 0 for n < 0 ⇒ causal
• Stability:
∑_{n=−∞}^{∞} |h[n]| = ∑_{n=0}^{∞} |α|^n     (2.39)
Clearly, the sum diverges if |α| ≥ 1, while if |α| < 1, it converges:
∑_{n=0}^{∞} |α|^n = 1/(1 − |α|) < ∞     (2.40)
Thus the system is stable provided |α| < 1.
FIR versus IIR
• An LTI system has finite impulse response (FIR) if we can find integers N₁ ≤ N₂ such that
h[n] = 0 when n < N₁ or n > N₂     (2.41)
• Otherwise the LTI system has an infinite impulse response (IIR).
• For example, the LTI system with
h[n] = u[n] − u[n−N] = { 1, 0 ≤ n ≤ N−1,
                       { 0, otherwise     (2.42)
is FIR with N₁ = 0 and N₂ = N−1. The LTI system with
h[n] = α^n u[n] = { α^n, n ≥ 0,
                  { 0,   otherwise     (2.43)
is IIR (one cannot find a finite N₂ ...)
• FIR systems are necessarily stable:
∑_{n=−∞}^{∞} |h[n]| = ∑_{n=N₁}^{N₂} |h[n]| < ∞     (2.44)
Interconnection of LTI systems:
• Cascade interconnection (Figure 2.5):
y = (x ∗ h₁) ∗ h₂ = x ∗ (h₁ ∗ h₂)     (2.45)
  = x ∗ (h₂ ∗ h₁) = (x ∗ h₂) ∗ h₁     (2.46)
LTI systems commute; this is not true for arbitrary systems.
• Parallel interconnection (Figure 2.6):
y = x ∗ h₁ + x ∗ h₂ = x ∗ (h₁ + h₂)     (2.47)
Fig. 2.5 Cascade interconnection of LTI systems
Fig. 2.6 Parallel interconnection of LTI systems
2.4 LTI systems described by linear constant coefﬁcient difference equations (LCCDE)
Deﬁnition:
A discrete-time system can be described by an LCCDE of order N if, for any arbitrary input x and corresponding output y,
∑_{k=0}^{N} a_k y[n−k] = ∑_{k=0}^{M} b_k x[n−k]     (2.48)
where a₀ ≠ 0 and a_N ≠ 0.
Example 2.9: Difference Equation for Accumulator
Consider the accumulator system:
x[n] −→ y[n] = ∑_{k=−∞}^{n} x[k]     (2.49)
which we know to be LTI. Observe that
y[n] = ∑_{k=−∞}^{n−1} x[k] + x[n] = y[n−1] + x[n]     (2.50)
This is an LCCDE of order N = 1 (M = 0, a₀ = 1, a₁ = −1, b₀ = 1)
Remarks:
LCCDEs lead to efficient recursive implementations:
• Recursive, because the computation of y[n] makes use of past output signal values (e.g. y[n−1] in (2.50)).
• These past output signal values contain all the necessary information about earlier states of the system.
• While (2.49) requires an infinite number of adders and memory units, (2.50) requires only one adder and one memory unit.
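A sketch of this recursive implementation of the accumulator, following (2.50):

```python
# Recursive accumulator via (2.50): one adder and one memory cell
# replace the infinite sum of (2.49).

def accumulate(x):
    y = []
    y_prev = 0.0               # the single memory unit (zero initial state)
    for xn in x:
        y_prev = y_prev + xn   # y[n] = y[n-1] + x[n]
        y.append(y_prev)
    return y

out = accumulate([1, 2, 3, 4])
```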
Solution of LCCDE:
The problem of solving the LCCDE (2.48) for an output sequence y[n], given a particular input sequence
x[n], is of particular interest in DSP. Here, we only look at general properties of the solutions y[n]. The
presentation of a systematic solution technique is deferred to Chapter 4.
Structure of the general solution:
The most general solution of the LCCDE (2.48) can be expressed in the form
y[n] = y_p[n] + y_h[n]     (2.51)
• y_p[n] is any particular solution of the LCCDE
• y_h[n] is the general solution of the homogeneous equation
∑_{k=0}^{N} a_k y_h[n−k] = 0     (2.52)
Example 2.10:
Consider the 1st order LCCDE
y[n] = ay[n−1] + x[n]     (2.53)
with input x[n] = Aδ[n].
It is easy to verify that the following is a solution to (2.53):
y_p[n] = A a^n u[n]     (2.54)
The homogeneous equation is
y_h[n] − a y_h[n−1] = 0     (2.55)
Its general solution is given by
y_h[n] = B a^n     (2.56)
where B is an arbitrary constant. So, the general solution of the LCCDE is given by:
y[n] = A a^n u[n] + B a^n     (2.57)
Remarks on example:
• The solution of (2.53) is not unique: B is arbitrary
• (2.53) does not necessarily define a linear system: in the case A = 0 and B = 1, we obtain y[n] ≠ 0 while x[n] = 0.
• The solution of (2.53) is not causal in general: the choice B = −A gives
y[n] = { 0,       n ≥ 0,
       { −A a^n,  n < 0.     (2.58)
which is an anticausal solution.
General remarks:
• The solution of an LCCDE is not unique; for an Nth order LCCDE, uniqueness requires the specification of N initial conditions.
• An LCCDE does not necessarily correspond to a causal LTI system.
• However, it can be shown that the LCCDE will correspond to a unique causal LTI system if we further assume that this system is initially at rest. That is:
x[n] = 0 for n < n₀ =⇒ y[n] = 0 for n < n₀     (2.59)
• This is equivalent to assuming zero initial conditions when solving the LCCDE, that is: y[n₀ − l] = 0 for l = 1, . . . , N.
• Back to the example: here x[n] = Aδ[n] = 0 for n < 0. Assuming zero initial conditions, we must have y[−1] = 0. Using this condition in (2.57), we have
y[−1] = B a^{−1} = 0 ⇒ B = 0     (2.60)
so that finally, we obtain a unique (causal) solution y[n] = A a^n u[n].
• A systematic approach for solving LCCDEs will be presented in Chap. 4.
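The example can also be checked by simulation: iterating the LCCDE under zero initial conditions reproduces the unique causal solution (a sketch with assumed values a = 0.5, A = 2):

```python
# Simulate y[n] = a*y[n-1] + x[n] with y[-1] = 0 (initial rest) and
# x[n] = A*delta[n]; the output should equal A * a^n * u[n] (i.e. B = 0).
a, A = 0.5, 2.0
N = 10
x = [A] + [0.0] * (N - 1)        # A*delta[n]

y = []
y_prev = 0.0                     # zero initial condition (initial rest)
for xn in x:
    y_prev = a * y_prev + xn
    y.append(y_prev)

expected = [A * a**n for n in range(N)]
```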
2.5 Problems
Problem 2.1: Basic Signal Transformations
Let the sequence x[n] be as illustrated in Figure 2.7. Determine and draw the following sequences:
1. x[−n]
2. x[n−2]
3. x[3−n]
4. x[n] ∗δ[n−3]
5. x[n] ∗(u[n] −u[n−2])
6. x[2n].
Fig. 2.7 Sequence to be used in problem 2.1.
Problem 2.2: Impulse response from LCCDE
Let the inputoutput relationship of an LTI system be described by the following difference equation:
y[n] + (1/3) y[n−1] = x[n] − 4x[n−1],
together with initial rest conditions.
1. Determine the impulse response h[n] of the corresponding system.
2. Is the system causal, stable, FIR, IIR ?
3. Determine the output of this system if the input x[n] is given by x[n] = {. . . , 0, 0, 1̄/2, 0, −1, 0, 0, 0, . . .}.
Problem 2.3:
Let T_i, i = 1, 2, 3, be stable LTI systems, with corresponding impulse responses h_i[n].
1. Is the system illustrated in Figure 2.8 LTI? If so, prove it; otherwise give a counterexample.
2. If the system is LTI, what is its impulse response?
Problem 2.4:
Let T_i, i = 1, 2, be LTI systems, with corresponding impulse responses
h₁[n] = δ[n] − 3δ[n−2] + δ[n−3]
h₂[n] = (1/2) δ[n−1] + (1/4) δ[n−2].
Fig. 2.8 System connections to be used in problem 2.3.
Fig. 2.9 Feedback connection for problem 2.4.
1. Write a linear constant coefficient difference equation (LCCDE) corresponding to the input-output relationships of T₁ and T₂.
2. Consider the overall system T in Figure 2.9, with impulse response h[n]. Write an LCCDE corresponding to its input-output relationship. Explain how you could compute h[n] from this LCCDE. Compute h[0], h[1] and h[2].
Chapter 3
Discrete-time Fourier transform (DTFT)
3.1 The DTFT and its inverse
Deﬁnition:
The DTFT is a transformation that maps a DT signal x[n] into a complex-valued function of the real variable ω, namely
X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}, ω ∈ R     (3.1)
Remarks:
• In general, X(ω) ∈ C
• X(ω+2π) = X(ω) ⇒ knowledge of X(ω) over ω ∈ [−π, π] is sufficient
• X(ω) is called the spectrum of x[n]:
X(ω) = |X(ω)| e^{j∠X(ω)} ⇒ |X(ω)| = magnitude spectrum, ∠X(ω) = phase spectrum.     (3.2)
The magnitude spectrum is often expressed in decibels (dB):
|X(ω)|_dB = 20 log₁₀ |X(ω)|     (3.3)
Inverse DTFT:
Let X(ω) be the DTFT of DT signal x[n]. Then
x[n] = (1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω, n ∈ Z     (3.4)
Proof: First note that ∫_{−π}^{π} e^{jωn} dω = 2πδ[n]. Then, we have
∫_{−π}^{π} X(ω) e^{jωn} dω = ∫_{−π}^{π} {∑_{k=−∞}^{∞} x[k] e^{−jωk}} e^{jωn} dω
= ∑_{k=−∞}^{∞} x[k] ∫_{−π}^{π} e^{jω(n−k)} dω
= 2π ∑_{k=−∞}^{∞} x[k] δ[n−k] = 2πx[n]     (3.5)
Remarks:
• In (3.4), x[n] is expressed as a weighted sum of complex exponential signals e^{jωn}, ω ∈ [−π, π], with weight X(ω)
• Accordingly, the DTFT X(ω) describes the frequency content of x[n]
• Since the DT signal x[n] can be recovered uniquely from its DTFT X(ω), we say that x[n] together with X(ω) form a DTFT pair, and write:
x[n] ←F→ X(ω)     (3.6)
• The DTFT equation (3.1) is called the analysis relation, while the inverse DTFT equation (3.4) is called the synthesis relation.
3.2 Convergence of the DTFT:
Introduction:
• For the DTFT to exist, the series ∑_{n=−∞}^{∞} x[n] e^{−jωn} must converge
• That is, the partial sum
X_M(ω) = ∑_{n=−M}^{M} x[n] e^{−jωn}     (3.7)
must converge to a limit X(ω) as M → ∞.
• Below, we discuss the convergence of X_M(ω) for three different signal classes of practical interest, namely:
 - absolutely summable signals
 - energy signals
 - power signals
• In each case, we state without proof the main theoretical results and illustrate the theory with corresponding examples of DTFTs.
Absolutely summable signals:
• Recall: x[n] is said to be absolutely summable (sometimes denoted as L¹) iff
∑_{n=−∞}^{∞} |x[n]| < ∞     (3.8)
• In this case, X(ω) always exists because:
|∑_{n=−∞}^{∞} x[n] e^{−jωn}| ≤ ∑_{n=−∞}^{∞} |x[n] e^{−jωn}| = ∑_{n=−∞}^{∞} |x[n]| < ∞     (3.9)
• It is possible to show that X_M(ω) converges uniformly to X(ω), that is:
for all ε > 0, one can find M_ε such that |X(ω) − X_M(ω)| < ε for all M > M_ε and for all ω ∈ R.
• X(ω) is continuous, and d^p X(ω)/dω^p exists and is continuous for all p ≥ 1.
Example 3.1:
Consider the unit pulse sequence x[n] = δ[n]. The corresponding DTFT is simply X(ω) = 1.
More generally, consider an arbitrary finite duration signal, say (N₁ ≤ N₂)
x[n] = ∑_{k=N₁}^{N₂} c_k δ[n−k]
Clearly, x[n] is absolutely summable; its DTFT is given by the finite sum
X(ω) = ∑_{n=N₁}^{N₂} c_n e^{−jωn}
Example 3.2:
The exponential sequence x[n] = a^n u[n] with |a| < 1 is absolutely summable. Its DTFT is easily computed to be
X(ω) = ∑_{n=0}^{∞} (a e^{−jω})^n = 1/(1 − a e^{−jω}).
Figure 3.1 illustrates the uniform convergence of X_M(ω), as defined in (3.7), to X(ω) in the special case a = 0.8. As M increases, the whole X_M(ω) curve tends to X(ω) at every frequency.
Fig. 3.1 Illustration of uniform convergence for an exponential sequence.
Energy signals:
• Recall: x[n] is an energy signal (square summable or L²) iff
E_x ≜ ∑_{n=−∞}^{∞} |x[n]|² < ∞     (3.10)
• In this case, it can be proved that X_M(ω) converges in the mean-square sense to X(ω), that is:
lim_{M→∞} ∫_{−π}^{π} |X(ω) − X_M(ω)|² dω = 0     (3.11)
• Mean-square convergence is weaker than uniform convergence:
 - L¹ convergence implies L² convergence.
 - L² convergence does not guarantee that lim_{M→∞} X_M(ω) exists for all ω.
 - e.g., X(ω) may be discontinuous (jump) at certain points.
Example 3.3: Ideal lowpass filter
Consider the DTFT defined by
X(ω) = { 1, |ω| < ω_c,
       { 0, ω_c < |ω| ≤ π
whose graph is illustrated in Figure 3.2(a). The corresponding DT signal is obtained via (3.4) as follows:
x[n] = (1/2π) ∫_{−ω_c}^{ω_c} e^{jωn} dω = (ω_c/π) sinc(ω_c n / π)     (3.12)
Fig. 3.2 (a) DTFT and (b) impulse response (DT signal) of the ideal lowpass filter.
where the sinc function is defined as
sinc(x) = sin(πx)/(πx).
The signal x[n] is illustrated in Figure 3.2(b) for the case ω_c = π/2.
It can be shown that:
• ∑_n |x[n]| = ∞ ⇒ x[n] is not absolutely summable
• ∑_n |x[n]|² < ∞ ⇒ x[n] is square summable
Here, X_M(ω) converges to X(ω) in the mean-square sense only; the convergence is not uniform. We have the so-called Gibbs phenomenon (Figure 3.3):
• There is an overshoot of size ∆X ≈ 0.09 near ±ω_c;
• The overshoot cannot be eliminated by increasing M;
• This is a manifestation of the non-uniform convergence;
• It has important implications in filter design.
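The Gibbs phenomenon is easy to reproduce numerically: the overshoot of the partial sums stays near 9% of the jump however large M becomes (a sketch; the grid sizes are arbitrary choices):

```python
import numpy as np

# Partial DTFT sums of the ideal lowpass impulse response (3.12) overshoot
# near w = wc by roughly 0.09, and the overshoot does not vanish with M.
wc = np.pi / 2
w = np.linspace(0, np.pi, 4001)

def X_M(M):
    n = np.arange(-M, M + 1)
    # x[n] = sin(wc*n)/(pi*n) for n != 0, and wc/pi at n = 0
    x = np.where(n == 0, wc / np.pi,
                 np.sin(wc * n) / (np.pi * np.where(n == 0, 1, n)))
    return np.real(np.exp(-1j * np.outer(w, n)) @ x)

overshoots = [X_M(M).max() - 1.0 for M in (15, 35, 75)]
```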
Power signals
• Recall: x[n] is a power signal iff
P_x ≜ lim_{N→∞} (1/(2N+1)) ∑_{n=−N}^{N} |x[n]|² < ∞     (3.13)
Fig. 3.3 Illustration of the Gibbs phenomenon: partial sums X_M(ω) for M = 5, 9, 15 and 35.
• If x[n] has infinite energy but finite power, X_M(ω) may still converge to a generalized function X(ω).
• The expression of X(ω) typically contains continuous delta functions in the variable ω.
• Most power signals do not have a DTFT even in this sense. The exceptions include the following useful DT signals:
 - Periodic signals
 - Unit step
• While the formal mathematical treatment of generalized functions is beyond the scope of this course, we shall frequently make use of certain basic results, as developed in the examples below.
Example 3.4:
Consider the following DTFT
X(ω) = 2π ∑_{k=−∞}^{∞} δ_a(ω − 2πk)
where δ_a(ω) denotes an analog delta function centred at ω = 0. By using the synthesis relation (3.4), one gets
x[n] = (1/2π) ∫_{−π}^{π} 2π ∑_{k=−∞}^{∞} e^{jωn} δ_a(ω − 2πk) dω
= ∑_{k=−∞}^{∞} ∫_{−π}^{π} e^{jωn} δ_a(ω − 2πk) dω     (3.14)
= ∫_{−π}^{π} e^{jωn} δ_a(ω) dω
= e^{j0n} = 1     (3.15)
since in (3.14) the only value of k for which the delta function is nonzero in the integration interval is k = 0.
Example 3.5:
Consider the unit step sequence u[n] and let U(ω) denote its DTFT. The following result is given without proof:
U(ω) = 1/(1 − e^{−jω}) + π ∑_{k=−∞}^{∞} δ_a(ω − 2πk)
3.3 Properties of the DTFT
Notations:
• DTFT pairs:
x[n] ←F→ X(ω)
y[n] ←F→ Y(ω)
• Real and imaginary parts:
x[n] = x_R[n] + j x_I[n]
X(ω) = X_R(ω) + j X_I(ω)
• Even and odd components:
x[n] = x_e[n] + x_o[n]     (3.16)
x_e[n] ≜ (1/2)(x[n] + x∗[−n]) = x_e∗[−n]  (even)     (3.17)
x_o[n] ≜ (1/2)(x[n] − x∗[−n]) = −x_o∗[−n]  (odd)     (3.18)
• Similarly:
X(ω) = X_e(ω) + X_o(ω)     (3.19)
X_e(ω) ≜ (1/2)(X(ω) + X∗(−ω)) = X_e∗(−ω)  (even)     (3.20)
X_o(ω) ≜ (1/2)(X(ω) − X∗(−ω)) = −X_o∗(−ω)  (odd)     (3.21)
Basic symmetries:
x[−n] ←F→ X(−ω)     (3.22)
x∗[n] ←F→ X∗(−ω)     (3.23)
x_R[n] ←F→ X_e(ω)     (3.24)
j x_I[n] ←F→ X_o(ω)     (3.25)
x_e[n] ←F→ X_R(ω)     (3.26)
x_o[n] ←F→ j X_I(ω)     (3.27)
Real/imaginary signals:
• If x[n] is real, then X(ω) = X∗(−ω)
• If x[n] is purely imaginary, then X(ω) = −X∗(−ω)
Proof: x[n] ∈ R ⇒ x_I[n] = 0 ⇒ X_o(ω) = 0. In turn, this implies X(ω) = X_e(ω) = X∗(−ω). A similar argument applies to the purely imaginary case.
Remarks:
• For x[n] real, the symmetry X(ω) = X∗(−ω) implies
|X(ω)| = |X(−ω)|, ∠X(ω) = −∠X(−ω)     (3.28)
X_R(ω) = X_R(−ω), X_I(ω) = −X_I(−ω)     (3.29)
This means that, for real signals, one only needs to specify X(ω) for ω ∈ [0, π], because of symmetry.
Linearity:
ax[n] + by[n] ←F→ aX(ω) + bY(ω)     (3.30)
Time shift (very important):
x[n−d] ←F→ e^{−jωd} X(ω)     (3.31)
Frequency modulation:
e^{jω₀n} x[n] ←F→ X(ω − ω₀)     (3.32)
Differentiation:
n x[n] ←F→ j dX(ω)/dω     (3.33)
Example 3.6:
y[n] = b₀x[n] + b₁x[n−1]. By using the linearity and time shift properties, one gets
Y(ω) = (b₀ + b₁ e^{−jω}) X(ω)
Example 3.7:
y[n] = cos(ω₀n) x[n]. Recall first that e^{jω₀n} = cos(ω₀n) + j sin(ω₀n), so that cos(ω₀n) = (1/2)(e^{jω₀n} + e^{−jω₀n}), and, by linearity and the frequency modulation property:
Y(ω) = (1/2)(X(ω − ω₀) + X(ω + ω₀))
Convolution:
x[n] ∗ y[n] ←F→ X(ω)Y(ω)     (3.34)
Multiplication:
x[n] y[n] ←F→ (1/2π) ∫_{−π}^{π} X(φ) Y(ω−φ) dφ     (3.35)
We refer to the RHS of (3.35) as the circular convolution of the periodic functions X(ω) and Y(ω). Thus:
 - DT convolution ←F→ multiplication in ω, while
 - DT multiplication ←F→ circular convolution in ω.
We say that these two properties are duals of each other.
Parseval's relation:
∑_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(ω)|² dω     (3.36)
Plancherel's relation:
∑_{n=−∞}^{∞} x[n] y[n]∗ = (1/2π) ∫_{−π}^{π} X(ω) Y(ω)∗ dω     (3.37)
We have met two kinds of convolution so far, and one more is familiar from continuous-time systems. For functions of a continuous variable we have ordinary or linear convolution, involving an integral from −∞ to ∞. Here we have seen periodic or circular convolution of periodic functions of a continuous variable (in this case the periodic frequency responses of discrete-time systems), involving an integral over one period. For discrete-time systems, we have seen the ordinary or linear convolution, which involves a sum from −∞ to ∞. Later, in the study of DFTs, we will meet the periodic or circular convolution of sequences, involving a sum over one period.
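Parseval's relation (3.36) can be verified numerically for the exponential sequence of Example 3.2 (a sketch; the truncation length and grid size are arbitrary choices):

```python
import numpy as np

# Parseval check for x[n] = a^n u[n], a = 0.8: time-domain energy vs.
# (1/2pi) * integral of |X(w)|^2 over one period.
a = 0.8
n = np.arange(200)                      # tail beyond n = 200 is negligible
energy_time = np.sum(a ** (2 * n))      # sum of |x[n]|^2, = 1/(1 - a^2)

w = np.linspace(-np.pi, np.pi, 20001)
X = 1.0 / (1.0 - a * np.exp(-1j * w))
# Rectangle rule over one period (drop the duplicated endpoint at +pi):
energy_freq = np.mean(np.abs(X[:-1]) ** 2)
```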
3.4 Frequency analysis of LTI systems
LTI system (recap):
y[n] = x[n] ∗ h[n] = ∑_{k=−∞}^{∞} x[k] h[n−k]     (3.38)
h[n] = H{δ[n]}     (3.39)
Response to complex exponential:
• Let x[n] = e
jωn
, n ∈ Z.
• Corresponding output signal:
y[n] =
∑
k
h[k]x[n−k]
=
∑
k
h[k]e
jω(n−k)
={
∑
k
h[k]e
−jωk
}e
jωn
= H(ω)x[n] (3.40)
where we recognize H(ω) as the DTFT of h[n].
• Eigenvector interpretation:
– x[n] = e^{jωn} behaves as an eigenvector of LTI system H: Hx = λx
– Corresponding eigenvalue λ = H(ω) provides the system gain
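The eigenvector property (3.40) can be illustrated numerically: filtering a complex exponential by an arbitrarily chosen FIR impulse response (an assumption of this sketch, not a system from the notes) returns the same exponential scaled by H(ω), once the startup transient has passed:

```python
import numpy as np

# Complex exponentials are eigensignals of LTI systems: filtering e^{jωn}
# returns H(ω) e^{jωn}. The FIR impulse response h below is an arbitrary example.
h = np.array([1.0, 0.5, 0.25, 0.125])
w = 0.9
n = np.arange(50)
x = np.exp(1j * w * n)

y = np.convolve(x, h)[:len(n)]     # y[n] = sum_k h[k] x[n-k], truncated
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))  # H(ω), DTFT of h

# Beyond the first len(h)-1 samples (startup transient), y[n] = H(ω) x[n].
print(np.max(np.abs(y[3:] - H * x[3:])))  # ≈ 0
```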
Deﬁnition:
Consider an LTI system H with impulse response h[n]. The frequency response of H, denoted H(ω), is
deﬁned as the DTFT of h[n]:
H(ω) = ∑_{n=−∞}^{∞} h[n] e^{−jωn},  ω ∈ R   (3.41)
Remarks:
• If DT system H is stable, then the sequence h[n] is absolutely summable and the DTFT converges
uniformly:
 H(ω) exists and is continuous.
• If H(ω) is known, we can recover h[n] from the inverse DTFT relation:

h[n] = (1/2π) ∫_{−π}^{π} H(ω) e^{jωn} dω   (3.42)
• We refer to |H(ω)| as the magnitude response of the system, and to ∠H(ω) as the phase spectrum.
Properties:
Let H be an LTI system with frequency response H(ω). Let y[n] denote the response of H to an arbitrary
input x[n]. We have
Y(ω) = H(ω)X(ω) (3.43)
Proof:
H LTI ⇒ y = h∗x ⇒ Y(ω) = H(ω)X(ω)
Interpretation:
• Recall that input signal x[n] may be expressed as a weighted sum of complex exponential sequences
via the inverse DTFT:
x[n] = (1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω   (3.44)
• According to (3.43),
y[n] = (1/2π) ∫_{−π}^{π} H(ω) X(ω) e^{jωn} dω   (3.45)
• Note the ﬁltering role of H(ω): each frequency component of x[n], i.e. X(ω)e^{jωn}, is affected by H(ω) in the ﬁnal system output y[n]:
– gain or attenuation by |H(ω)|;
– phase rotation by ∠H(ω).
Example 3.8:
Consider a causal moving average system:
y[n] = (1/M) ∑_{k=0}^{M−1} x[n−k]   (3.46)

• Impulse response:

h[n] = (1/M) ∑_{k=0}^{M−1} δ[n−k]
     = { 1/M if 0 ≤ n < M;  0 otherwise }
     = (1/M)(u[n] − u[n−M])
• Frequency response:

H(ω) = (1/M) ∑_{n=0}^{M−1} e^{−jωn}
     = (1/M) (e^{−jωM} − 1)/(e^{−jω} − 1)
     = (1/M) (e^{−jωM/2}/e^{−jω/2}) · (e^{−jωM/2} − e^{jωM/2})/(e^{−jω/2} − e^{jω/2})
     = (1/M) e^{−jω(M−1)/2} sin(ωM/2)/sin(ω/2)   (3.47)
• The magnitude response is illustrated in Figure 3.5:
– zeros at ω = 2πk/M (k ≠ 0), where sin(ωM/2)/sin(ω/2) = 0;
– level of the ﬁrst sidelobe ≈ −13 dB.
• The phase response is illustrated in Figure 3.4:
– negative slope of −(M−1)/2;
– jumps of π at ω = 2πk/M (k ≠ 0), where sin(ωM/2)/sin(ω/2) changes its sign.
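A minimal numerical check of the closed form (3.47), comparing it against a direct evaluation of the DTFT sum for M = 6 (numpy assumed):

```python
import numpy as np

# Compare the closed form (3.47) for the moving-average frequency response
# with a direct evaluation of the DTFT sum, for M = 6.
M = 6
w = np.linspace(0.01, 2 * np.pi - 0.01, 500)   # avoid the 0/0 at ω = 0, 2π
n = np.arange(M)

H_sum = np.array([np.mean(np.exp(-1j * wk * n)) for wk in w])
H_formula = (1 / M) * np.exp(-1j * w * (M - 1) / 2) * np.sin(w * M / 2) / np.sin(w / 2)

print(np.max(np.abs(H_sum - H_formula)))   # ≈ 0 (machine precision)
```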
Fig. 3.4 Phase response of the causal moving average system (M = 6). [Plot of ∠H(ω) versus ω over [0, 2π], with values in [−π, π].]
Further remarks:
• We have seen that for an LTI system, Y(ω) = H(ω)X(ω):
– X(ω) = 0 at a given frequency ω ⇒ Y(ω) = 0; if not, the system is nonlinear or time-varying;
– H(ω) = 0 ⇒ Y(ω) = 0, regardless of the input.
• If the system is LTI, its frequency response may be computed as

H(ω) = Y(ω)/X(ω)   (3.48)
Fig. 3.5 Magnitude response of the causal moving average system (M = 6). [Plot of |H(ω)| versus ω over [0, 2π].]
3.5 LTI systems characterized by LCCDE
LCCDE (recap):
• A DT system obeys an LCCDE of order N if

∑_{k=0}^{N} a_k y[n−k] = ∑_{k=0}^{M} b_k x[n−k]   (3.49)

where a_0 ≠ 0 and a_N ≠ 0.
• If we further assume initial rest conditions, i.e.:

x[n] = 0 for n < n_0  ⇒  y[n] = 0 for n < n_0   (3.50)

the LCCDE corresponds to a unique causal LTI system.
Frequency response:
Taking the DTFT on both sides of (3.49):

∑_{k=0}^{N} a_k y[n−k] = ∑_{k=0}^{M} b_k x[n−k]
⇒ (∑_{k=0}^{N} a_k e^{−jωk}) Y(ω) = (∑_{k=0}^{M} b_k e^{−jωk}) X(ω)
⇒ H(ω) = Y(ω)/X(ω) = (∑_{k=0}^{M} b_k e^{−jωk}) / (∑_{k=0}^{N} a_k e^{−jωk})   (3.51)
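Equation (3.51) translates directly into code: given the LCCDE coefficients a_k and b_k, H(ω) is a ratio of two trigonometric polynomials. The first-order coefficients below are illustrative, not taken from the notes:

```python
import numpy as np

# Evaluate (3.51) for an example LCCDE: y[n] - 0.5 y[n-1] = x[n] + x[n-1].
# (These coefficients are illustrative, not from the notes.)
a = np.array([1.0, -0.5])          # a_k, k = 0..N
b = np.array([1.0, 1.0])           # b_k, k = 0..M

def freq_resp(b, a, w):
    """H(ω) = (sum_k b_k e^{-jωk}) / (sum_k a_k e^{-jωk})."""
    poly = lambda c, wk: np.sum(c * np.exp(-1j * wk * np.arange(len(c))))
    return np.array([poly(b, wk) / poly(a, wk) for wk in w])

w = np.linspace(-np.pi, np.pi, 9)
H = freq_resp(b, a, w)
print(abs(H[4]))   # H(0) = (1 + 1)/(1 - 0.5) = 4
```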
3.6 Ideal frequency selective ﬁlters:
Ideal lowpass:

H_LP(ω) = { 1 if |ω| < ω_c;  0 if ω_c < |ω| ≤ π }

h_LP[n] = (ω_c/π) sinc(ω_c n/π)   (3.52)

Figure 3.6: Lowpass ﬁlter [H(ω) = 1 on |ω| < ω_c, 0 elsewhere]

Ideal highpass:

H_HP(ω) = { 1 if ω_c < |ω| ≤ π;  0 if |ω| < ω_c }

h_HP[n] = δ[n] − h_LP[n]   (3.53)

Figure 3.7: Highpass ﬁlter [H(ω) = 1 on ω_c < |ω| ≤ π, 0 elsewhere]

Ideal bandpass:

H_BP(ω) = { 1 if | |ω| − ω_0 | < B/2;  0 elsewhere in [−π, π] }

h_BP[n] = 2 cos(ω_0 n) h_LP[n],  with ω_c ≡ B/2   (3.54)

Figure 3.8: Bandpass ﬁlter [passbands of width B centred at ±ω_0]

Ideal bandstop:

H_BS(ω) = { 0 if | |ω| − ω_0 | < B/2;  1 elsewhere in [−π, π] }

h_BS[n] = δ[n] − h_BP[n]   (3.55)

Figure 3.9: Bandstop ﬁlter [stopbands of width B centred at ±ω_0]
Remarks:
• These ﬁlters are not realizable:
– they are noncausal (h[n] ≠ 0 for n < 0);
– they are unstable (∑_n |h[n]| = ∞).
• The main object of ﬁlter design is to derive practical approximations to such ﬁlters (more on this
later...)
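As a preview of that design problem, a common first step is to truncate and delay the ideal lowpass impulse response (3.52), which yields a causal, stable FIR approximation; the cutoff and filter length below are arbitrary illustration values:

```python
import numpy as np

# Truncate and delay the ideal lowpass impulse response (3.52) to get a
# causal FIR approximation. Cutoff wc and length N are arbitrary choices.
wc = np.pi / 4
N = 51
n = np.arange(N) - (N - 1) // 2        # indices -25..25 (delay of 25 samples)
h = (wc / np.pi) * np.sinc(wc * n / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)

def H(w):
    """DTFT of the truncated impulse response (referenced to n = 0)."""
    return np.sum(h * np.exp(-1j * w * n))

print(abs(H(0.0)), abs(H(3.0)))   # ≈ 1 in the passband, small in the stopband
```

Truncation introduces passband/stopband ripple (the Gibbs phenomenon), which is why windowing and other design methods are studied later.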
3.7 Phase delay and group delay
Introduction:
Consider an integer delay system, for which the input-output relationship is

y[n] = x[n−k],  k ∈ Z.   (3.56)

The corresponding frequency response is computed as

Y(ω) = e^{−jωk} X(ω)  ⇒  H(ω) = Y(ω)/X(ω) = e^{−jωk}.   (3.57)

This clearly shows that the phase of the system, ∠H(ω) = −ωk, provides information about the delay incurred by the input.
Phase delay:
For an arbitrary H(ω), we deﬁne the phase delay as

τ_ph(ω) ≡ −∠H(ω)/ω   (3.58)

For the integer delay system, we get:

τ_ph(ω) = −∠H(ω)/ω = ωk/ω = k   (3.59)

The concept of phase delay is useful mostly for wideband signals (i.e. signals occupying the whole band from −π to π).
Group delay:
Deﬁnition:

τ_gr(ω) ≡ −d∠H(ω)/dω   (3.60)

This concept is useful when the system input x[n] is a narrowband signal centred around ω_0, i.e.:

x[n] = s[n] e^{jω_0 n}   (3.61)

where s[n] is a slowly-varying envelope. An example of such a narrowband signal is given in Figure 3.10. The corresponding system output is then

y[n] ≈ |H(ω_0)| s[n − τ_gr(ω_0)] e^{jω_0 [n − τ_ph(ω_0)]}.   (3.62)
Fig. 3.10 An example of a narrowband signal as deﬁned by x[n] = s[n] cos(ω_0 n). Top: the signal x[n] is shown, together with the dashed envelope s[n] (samples are connected to ease viewing). Bottom: the magnitude of the DTFT of x[n], in dB.
The above equation shows that the phase delay τ_ph(ω_0) contributes a phase change to the carrier e^{jω_0 n}, whereas the group delay τ_gr(ω_0) contributes a delay to the envelope s[n]. Notice that this equation is strictly valid only for integer values of τ_gr(ω_0), though a non-integer value of τ_gr(ω_0) can still be interpreted as a non-integer delay.¹
Pure delay system:
• A generalization of the integer delay system.
• Deﬁned directly in terms of its frequency response:

H(ω) = e^{−jωτ}   (3.63)

where τ ∈ R is not limited to integer values.
• Clearly:

|H(ω)| = 1 ⇒ no magnitude distortion   (3.64)

τ_ph(ω) = τ_gr(ω) = τ   (3.65)
Linear phase:
Consider an LTI system with

H(ω) = |H(ω)| e^{j∠H(ω)}.   (3.66)

If ∠H(ω) = −ωτ, we say that the system has linear phase. A more general deﬁnition of linear phase will be given later, but it is readily seen that a linear-phase system does not introduce phase distortion, since its effect on the phase of the input amounts to a delay by τ samples.
One can deﬁne a distortionless system as one having a constant gain and a linear phase, i.e. H(ω) = A exp(−jωτ). The gain is easily compensated for, and the linear phase merely delays the signal by τ samples.
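The moving-average filter of Example 3.8 gives a concrete check of these ideas: its phase −ω(M−1)/2 is linear inside the first lobe, so the numerically estimated group delay there is the constant (M−1)/2 samples (numpy assumed):

```python
import numpy as np

# Numerical group delay of the causal moving-average filter (Example 3.8):
# inside the first lobe the phase is -ω(M-1)/2, so τ_gr(ω) = (M-1)/2 = 2.5.
M = 6
w = np.linspace(0.05, 1.0, 200)        # stay inside the first lobe (no π jumps)
n = np.arange(M)
H = np.array([np.mean(np.exp(-1j * wk * n)) for wk in w])

phase = np.unwrap(np.angle(H))
tau_gr = -np.gradient(phase, w)        # τ_gr(ω) = -dφ/dω

print(tau_gr.min(), tau_gr.max())      # both ≈ 2.5
```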
3.8 Problems
Problem 3.1: Ideal High-Pass Filter
Let h[n] be the impulse response of an ideal highpass ﬁlter, i.e. H(ω) is deﬁned as:

H(ω) = { 0 if |ω| < ω_c;  1 elsewhere },

for the main period −π ≤ ω ≤ π, and for some given ω_c, called the cutoff frequency.

1. Plot H(ω).
2. Compute h[n] by using the deﬁnition of the inverse DTFT.
¹ For instance, a delay of 1/2 sample amounts to replacing x[n] by a value interpolated between x[n] and x[n−1].
3. Given that H(ω) can be expressed as H(ω) = 1 − H_LP(ω), where H_LP(ω) is the frequency response of an ideal lowpass ﬁlter with cutoff frequency ω_c, compute h[n] as a function of h_LP[n].
4. What would the frequency response G(ω) be if g[n] were h[n] delayed by 3 samples?
5. Is the ﬁlter h[n] stable, causal, FIR, IIR?
Problem 3.2: Windowing
Let x[n] = cos(ω_0 n) for some discrete frequency ω_0 such that 0 ≤ ω_0 ≤ π. Give an expression for X(ω).
Let w[n] = u[n] − u[n−M]. Plot the sequence w[n]. Compute W(ω) by using the deﬁnition of the DTFT.
Let now y[n] = x[n] w[n]. Compute the DTFT Y(ω).
Problem 3.3:
Given a DTFT pair

x[n] ←F→ X(ω),

1. Compute the DTFT of (−1)^n x[n];
2. Compute the DTFT of (−1)^n x*[M−n] for some M ∈ Z;
3. What is the equivalent in the time domain of the following frequency-domain condition:

|X(ω)|^2 + |X(π−ω)|^2 = 1 ?

[Hint: Recall that |a|^2 = a*a.]
Chapter 4
The z-transform (ZT)
Motivation:
• While very useful, the DTFT has a limited range of applicability.
• For example, the DTFT of a simple signal like x[n] = 2^n u[n] does not exist.
• One may view the ZT as a generalization of the DTFT that is applicable to a larger class of signals.
• The ZT is the discrete-time equivalent of the Laplace transform for continuous-time signals.
4.1 The ZT
Deﬁnition:
The ZT is a transformation that maps DT signal x[n] into a function of the complex variable z, deﬁned as

X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n}   (4.1)

The domain of X(z) is the set of all z ∈ C such that the series converges absolutely, that is:

Dom(X) = {z ∈ C : ∑_n |x[n] z^{−n}| < ∞}   (4.2)
Remarks:
• The domain of X(z) is called the region of convergence (ROC).
• The ROC only depends on |z|: if z ∈ ROC, so is z e^{jφ} for any angle φ.
• Within the ROC, X(z) is an analytic function of complex variable z. (That is, X(z) is smooth, derivative
exists, etc.)
• Both X(z) and the ROC are needed when specifying a ZT.
Example 4.1:
Consider x[n] = 2^n u[n]. We have

X(z) = ∑_{n=0}^{∞} 2^n z^{−n} = 1/(1 − 2z^{−1})

where the series converges provided |2z^{−1}| < 1. Accordingly, ROC: |z| > 2.
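The convergence condition in Example 4.1 can be observed directly from partial sums of the series (a small numpy sketch with arbitrary test points z):

```python
import numpy as np

# Partial sums of the ZT series of x[n] = 2^n u[n]: since x[n] z^{-n} = (2/z)^n,
# the series is geometric, converging iff |2/z| < 1, i.e. |z| > 2.
def zt_partial(z, N):
    n = np.arange(N)
    return np.sum((2.0 / z) ** n)

print(zt_partial(3.0, 200), 1 / (1 - 2 / 3.0))   # both ≈ 3.0 (z inside the ROC)
print(zt_partial(1.5, 50), zt_partial(1.5, 100)) # growing: z outside the ROC
```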
Connection with DTFT:
• The ZT is more general than the DTFT. Let z = r e^{jω}, so that

ZT{x[n]} = ∑_{n=−∞}^{∞} x[n] r^{−n} e^{−jωn} = DTFT{x[n] r^{−n}}   (4.3)

With the ZT, there is the possibility of adjusting r so that the series converges.
• Consider the previous example:
– x[n] = 2^n u[n] does not have a DTFT (note: ∑_n |x[n]| = ∞)
– x[n] has a ZT for |z| = r > 2
• If z = e^{jω} ∈ ROC,

X(e^{jω}) = ∑_{n=−∞}^{∞} x[n] e^{−jωn} = DTFT{x[n]}   (4.4)

• In the sequel, the DTFT is either denoted by X(e^{jω}), or simply by X(ω) when there is no ambiguity (as we did in Chapter 3).
Inverse ZT:
Let X(z), with associated ROC denoted R_x, be the ZT of DT signal x[n]. Then

x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz,  n ∈ Z   (4.5)

where C is any closed, simple contour around z = 0 within R_x.
Remarks:
• In analogy with the inverse DTFT, (4.5) expresses x[n] as a weighted combination of the signals z^n, with relative weight X(z).
• In practice, we do not use (4.5) explicitly to compute the inverse ZT (more on this later); the use of (4.5) is limited mostly to theoretical considerations (e.g. next item).
• Since DT signal x[n] can be recovered uniquely from its ZT X(z) (and associated ROC R_x), we say that x[n] together with X(z) and R_x form a ZT pair, and write:

x[n] ←Z→ X(z),  z ∈ R_x   (4.6)
4.2 Study of the ROC and ZT examples
Signal with ﬁnite duration:
• Suppose there exist integers N_1 ≤ N_2 such that x[n] = 0 for n < N_1 and for n > N_2. Then we have

X(z) = ∑_{n=N_1}^{N_2} x[n] z^{−n} = x[N_1] z^{−N_1} + x[N_1+1] z^{−N_1−1} + ··· + x[N_2] z^{−N_2}   (4.7)

• The ZT exists for all z ∈ C, except possibly at z = 0 and z = ∞:
– N_2 > 0 ⇒ z = 0 ∉ ROC
– N_1 < 0 ⇒ z = ∞ ∉ ROC
Example 4.2:
Consider the unit pulse sequence x[n] = δ[n]. Then

X(z) = 1 × z^0 = 1,  ROC = C   (4.8)
Example 4.3:
Consider the ﬁnite length sequence x[n] = {1, 1, 2, 1}, starting at n = −1 (so that x[−1] = 1, x[0] = 1, x[1] = 2, x[2] = 1). Then

X(z) = z + 1 + 2z^{−1} + z^{−2},  ROC: 0 < |z| < ∞   (4.9)
Theorem:
To any power series ∑_{n=0}^{∞} c_n w^n, we can associate a radius of convergence

R = lim_{n→∞} |c_n / c_{n+1}|   (4.10)

such that
– if |w| < R, the series converges absolutely;
– if |w| > R, the series diverges.
Causal signal:
Suppose x[n] = 0 for n < 0. We have

X(z) = ∑_{n=0}^{∞} x[n] z^{−n} = ∑_{n=0}^{∞} c_n w^n,   c_n ≡ x[n],  w ≡ z^{−1}   (4.11)

Therefore, the ROC is given by

|w| < R_w = lim_{n→∞} |x[n]/x[n+1]|  ⇔  |z| > 1/R_w ≡ r   (4.12)

The ROC is the exterior of a circle of radius r, as shown in Figure 4.1.

Fig. 4.1 Illustration of the ROC for a causal signal. [z-plane sketch; ROC: |z| > r.]
Example 4.4:
Consider the causal sequence

x[n] = a^n u[n]   (4.13)

where a is an arbitrary complex number. We have

X(z) = ∑_{n=0}^{∞} a^n z^{−n} = ∑_{n=0}^{∞} (a z^{−1})^n = 1/(1 − a z^{−1})   (4.14)

provided |a z^{−1}| < 1, or equivalently, |z| > |a|. Thus,

ROC: |z| > |a|   (4.15)

Consistent with Figure 4.1, this ROC corresponds to the exterior of a circle of radius r = |a| in the z-plane.
Anticausal signal:
Suppose x[n] = 0 for n > 0. We have

X(z) = ∑_{n=−∞}^{0} x[n] z^{−n} = ∑_{n=0}^{∞} x[−n] z^n   (4.16)

Therefore, the ROC is given by

|z| < R = lim_{n→∞} |x[−n]/x[−n−1]|   (4.17)

The ROC is the interior of a circle of radius R in the z-plane, as shown in Figure 4.2.

Fig. 4.2 Illustration of the ROC for an anticausal signal. [z-plane sketch; ROC: |z| < R.]
Example 4.5:
Consider the anticausal sequence

x[n] = −a^n u[−n−1].   (4.18)

We have

X(z) = −∑_{n=−∞}^{−1} a^n z^{−n}
     = −∑_{n=1}^{∞} (a^{−1} z)^n
     = −(a^{−1} z)/(1 − a^{−1} z)   if |a^{−1} z| < 1
     = 1/(1 − a z^{−1})   (4.19)

provided |a^{−1} z| < 1, or equivalently, |z| < |a|. Thus,

ROC: |z| < |a|   (4.20)

Consistent with Figure 4.2, the ROC is the interior of a circle of radius R = |a|. Note that in this and the previous example, we obtain the same mathematical expression for X(z), but the ROCs are different.
Arbitrary signal:
We can always decompose the series X(z) as

X(z) = ∑_{n=−∞}^{−1} x[n] z^{−n}  (needs |z| < R)  +  ∑_{n=0}^{∞} x[n] z^{−n}  (needs |z| > r)   (4.21)

We distinguish two cases:
• If r < R, the ZT exists, with ROC: r < |z| < R (see Figure 4.3).

Fig. 4.3 General annular ROC: r < |z| < R.

• If r > R, the ZT does not exist.
Example 4.6:
Consider

x[n] = (1/2)^n u[n] − 2^n u[−n−1].   (4.22)

Here, we have

X(z) = ∑_{n=0}^{∞} (1/2)^n z^{−n}  (converges for |z| > 1/2)  −  ∑_{n=−∞}^{−1} 2^n z^{−n}  (converges for |z| < 2)
     = 1/(1 − (1/2)z^{−1}) + 1/(1 − 2z^{−1})
     = (2 − (5/2)z^{−1}) / (1 − (5/2)z^{−1} + z^{−2})   (4.23)

The two series converge simultaneously iff

ROC: 1/2 < |z| < 2.   (4.24)

This corresponds to the situation shown in Figure 4.3 with r = 1/2 and R = 2.
Example 4.7:
Consider

x[n] = 2^n u[n] − (1/2)^n u[−n−1].   (4.25)

Here, X(z) = X_+(z) + X_−(z), where

X_+(z) = ∑_{n=0}^{∞} 2^n z^{−n}   converges for |z| > 2
X_−(z) = −∑_{n=−∞}^{−1} (1/2)^n z^{−n}   converges for |z| < 1/2

Since these two regions do not intersect, the ROC is empty and the ZT does not exist.
4.3 Properties of the ZT
Introductory remarks:
• Notation for ZT pairs:

x[n] ←Z→ X(z),  z ∈ R_x
y[n] ←Z→ Y(z),  z ∈ R_y

where R_x and R_y respectively denote the ROCs of X(z) and Y(z).
• When stating a property, we must also specify the corresponding ROC.
• In some cases, the true ROC may be larger than the one indicated.
Basic symmetries:

x[−n] ←Z→ X(z^{−1}),  z^{−1} ∈ R_x   (4.26)
x*[n] ←Z→ X*(z*),  z ∈ R_x   (4.27)

Linearity:

a x[n] + b y[n] ←Z→ a X(z) + b Y(z),  z ∈ R_x ∩ R_y   (4.28)

Time shift (very important):

x[n−d] ←Z→ z^{−d} X(z),  z ∈ R_x   (4.29)

Exponential modulation:

a^n x[n] ←Z→ X(z/a),  z/a ∈ R_x   (4.30)

Differentiation:

n x[n] ←Z→ −z dX(z)/dz,  z ∈ R_x   (4.31)

Convolution:

x[n] ∗ y[n] ←Z→ X(z) Y(z),  z ∈ R_x ∩ R_y   (4.32)

Initial value:
For x[n] causal (i.e. x[n] = 0 for n < 0), we have

lim_{z→∞} X(z) = x[0]   (4.33)
Example 4.8:
Consider

x[n] = cos(ω_0 n) u[n] = (1/2) e^{jω_0 n} u[n] + (1/2) e^{−jω_0 n} u[n]

We have

X(z) = (1/2) ZT{e^{jω_0 n} u[n]} + (1/2) ZT{e^{−jω_0 n} u[n]}
     = (1/2) · 1/(1 − e^{jω_0} z^{−1})   (|z| > |e^{jω_0}| = 1)
     + (1/2) · 1/(1 − e^{−jω_0} z^{−1})   (|z| > |e^{−jω_0}| = 1)
     = (1 − z^{−1} cos ω_0) / (1 − 2z^{−1} cos ω_0 + z^{−2}),  ROC: |z| > 1
Example 4.9:
Consider

x[n] = n a^n u[n]

We have

X(z) = −z (d/dz)[1/(1 − a z^{−1})],  |z| > |a|
     = a z^{−1}/(1 − a z^{−1})^2,  ROC: |z| > |a|
Example 4.10:
Consider the signals x_1[n] = {1, −2a, a^2} and x_2[n] = {1, a, a^2, a^3, a^4}, where a ∈ C. Let us compute the convolution y = x_1 ∗ x_2 using the z-transform:

X_1(z) = 1 − 2a z^{−1} + a^2 z^{−2} = (1 − a z^{−1})^2
X_2(z) = 1 + a z^{−1} + a^2 z^{−2} + a^3 z^{−3} + a^4 z^{−4} = (1 − a^5 z^{−5})/(1 − a z^{−1})
Y(z) = X_1(z) X_2(z) = (1 − a z^{−1})(1 − a^5 z^{−5}) = 1 − a z^{−1} − a^5 z^{−5} + a^6 z^{−6}

Therefore, y[n] = {1, −a, 0, 0, 0, −a^5, a^6}.
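Since multiplying ZTs of finite sequences is exactly polynomial multiplication of their coefficients, Example 4.10 can be checked with np.convolve (the value of a below is an arbitrary complex choice):

```python
import numpy as np

# Example 4.10 via np.convolve: multiplying ZTs of finite sequences is the
# same computation as convolving their coefficient vectors. a is arbitrary.
a = 0.5 + 0.25j
x1 = np.array([1, -2 * a, a**2])
x2 = np.array([1, a, a**2, a**3, a**4])

y = np.convolve(x1, x2)
y_expected = np.array([1, -a, 0, 0, 0, -a**5, a**6])
print(np.max(np.abs(y - y_expected)))   # ≈ 0
```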
4.4 Rational ZTs
Rational function:
X(z) is a rational function in z (or z^{−1}) if

X(z) = N(z)/D(z)   (4.34)

where N(z) and D(z) are polynomials in z (resp. z^{−1}).
Remarks:
• The rational ZT plays a central role in DSP.
• It is essential for the realization of practical IIR ﬁlters.
• In this and the next section, we investigate two important issues related to the rational ZT:
– pole-zero (PZ) characterization
– inversion via partial fraction expansion
Poles and zeros:
• X(z) has a pole of order L at z = p_0 if

X(z) = ψ(z)/(z − p_0)^L,  0 < |ψ(p_0)| < ∞   (4.35)

• X(z) has a zero of order L at z = z_0 if

X(z) = (z − z_0)^L ψ(z),  0 < |ψ(z_0)| < ∞   (4.36)

• We sometimes refer to the order L as the multiplicity of the pole/zero.
Poles and zeros at ∞:
• X(z) has a pole of order L at z = ∞ if

X(z) = z^L ψ(z),  0 < |ψ(∞)| < ∞   (4.37)

• X(z) has a zero of order L at z = ∞ if

X(z) = ψ(z)/z^L,  0 < |ψ(∞)| < ∞   (4.38)
Poles & zeros of a rational X(z):
Consider the rational function X(z) = N(z)/D(z):
• Roots of N(z) ⇒ zeros of X(z); roots of D(z) ⇒ poles of X(z).
• Must take into account pole-zero cancellation: common roots of N(z) and D(z) do not count as zeros and poles.
• Repeated roots of N(z) (or D(z)) lead to multiple zeros (respectively poles).
• If we include poles and zeros at 0 and ∞:

number of poles = number of zeros   (4.39)
Example 4.11:
(1) Consider the rational function

X(z) = z^{−1}/(1 − 2z^{−1} + z^{−2}) = z/(z^2 − 2z + 1) = z/(z − 1)^2

The poles and zeros of X(z), along with their orders, are:

poles: p_1 = 1, L = 2
zeros: z_1 = 0, L = 1;  z_2 = ∞, L = 1

(2) Let

X(z) = (1 − z^{−4})/(1 + 3z^{−1}) = (z^4 − 1)/(z^3 (z + 3))

The corresponding poles and zeros are:

zeros: z_k = e^{jπk/2} for k = 0, 1, 2, 3, each with L = 1
poles: p_1 = 0, L = 3 (triple pole);  p_2 = −3, L = 1

(3) As a ﬁnal example, consider

X(z) = z − 1

The poles and zeros are:

zeros: z_1 = 1, L = 1
poles: p_1 = ∞, L = 1
Pole-zero (PZ) diagram:
For a rational function X(z) = N(z)/D(z), knowledge of the poles and zeros (along with their orders) completely specifies X(z), up to a scaling factor, say G ∈ C.

Example 4.12:
Let us consider the following pole and zero values:

z_1 = 1 (L = 1),  p_1 = 2 (L = 1)  ⇒  X(z) = G (z − 1)/(z − 2) = G (1 − z^{−1})/(1 − 2z^{−1})   (4.40)

Remarks:
• Thus, we may represent X(z) by a so-called PZ diagram, as illustrated in Figure 4.4.

Fig. 4.4 Example of a PZ diagram. [In the z-plane, × marks a pole and o marks a zero; a label (k) indicates a pole or zero of order k.]

• For completeness, the presence of poles or zeros at ∞ should be mentioned on the diagram.
• Note that it is also useful to indicate the ROC on the PZ diagram.
Example 4.13:
Consider x[n] = a^n u[n], where a > 0. The corresponding ZT is

X(z) = 1/(1 − a z^{−1}) = z/(z − a),  ROC: |z| > a   (4.41)

zeros: z_1 = 0, L = 1
poles: p_1 = a, L = 1

The PZ diagram is shown in Figure 4.5.

Fig. 4.5 PZ diagram for the signal x[n] = a^n u[n]. [Zero at the origin, pole at z = a; ROC: |z| > a.]
ROC for rational ZT:
Let us summarize a few facts about the ROC for rational ZTs:
• The ROC does not contain poles (because X(z) does not converge at a pole).
• The ROC can always be extended to the nearest pole.
• ROC delimited by poles ⇒ an annular region between poles.
• If we are given only X(z), several ROCs are possible:
– any annular region between two poles of increasing magnitude;
– accordingly, several possible DT signals x[n].
4.5 Inverse ZT
Introduction:
• Several methods exist for the evaluation of x[n], given its ZT X(z) and the corresponding ROC:
– contour integration via the residue theorem
– partial fraction expansion (PFE)
– long division
• PFE is by far the most useful technique in the context of rational ZTs.
• In this section:
– we present the PFE method in detail;
– we discuss division only as a useful tool to put X(z) in the proper form for the PFE.
4.5.1 Inversion via PFE
PFE:
• Suppose that X(z) = N(z)/D(z), where
– N(z) and D(z) are polynomials in z^{−1};
– degree of D(z) > degree of N(z).
• Under these conditions, X(z) may be expressed as

X(z) = ∑_{k=1}^{K} ∑_{l=1}^{L_k} A_{kl} / (1 − p_k z^{−1})^l   (4.42)

where
– p_1, ..., p_K are the distinct poles of X(z);
– L_1, ..., L_K are the corresponding orders.
• The constants A_{kl} can be computed as follows:
– simple poles (L_k = 1):

A_{k1} = [(1 − p_k z^{−1}) X(z)]|_{z=p_k}   (4.43)

– multiple poles (L_k > 1):

A_{kl} = (1/((L_k − l)! (−p_k)^{L_k−l})) · [d^{L_k−l}/(d z^{−1})^{L_k−l} {(1 − p_k z^{−1})^{L_k} X(z)}]|_{z=p_k}   (4.44)
Inversion method:
Given X(z) as above with ROC: r < |z| < R, the corresponding DT signal x[n] may be obtained as follows:
• Determine the PFE of X(z):

X(z) = ∑_{k=1}^{K} ∑_{l=1}^{L_k} A_{kl} / (1 − p_k z^{−1})^l   (4.45)

• Invoking linearity of the ZT, express x[n] as

x[n] = ∑_{k=1}^{K} ∑_{l=1}^{L_k} A_{kl} Z^{−1}{1/(1 − p_k z^{−1})^l}   (4.46)

where Z^{−1} denotes the inverse z-transform.
• Evaluate the elementary inverse ZTs in (4.46):
– simple poles (L_k = 1):

Z^{−1}{1/(1 − p_k z^{−1})} = p_k^n u[n] if |p_k| ≤ r,  or  −p_k^n u[−n−1] if |p_k| ≥ R   (4.47)

– higher order poles (L_k > 1):

Z^{−1}{1/(1 − p_k z^{−1})^l} = C(n+l−1, l−1) p_k^n u[n] if |p_k| ≤ r,  or  −C(n+l−1, l−1) p_k^n u[−n−1] if |p_k| ≥ R   (4.48)

where C(n, r) = n!/(r!(n − r)!) (read "n choose r").
Example 4.14:
Consider

X(z) = 1/((1 − a z^{−1})(1 − b z^{−1})),  |a| < |z| < |b|

The PFE of X(z) can be written as:

X(z) = A_1/(1 − a z^{−1}) + A_2/(1 − b z^{−1}),

with

A_1 = [(1 − a z^{−1}) X(z)]|_{z=a} = [1/(1 − b z^{−1})]|_{z=a} = a/(a − b)
A_2 = [(1 − b z^{−1}) X(z)]|_{z=b} = [1/(1 − a z^{−1})]|_{z=b} = b/(b − a)

so that

x[n] = A_1 Z^{−1}{1/(1 − a z^{−1})}   (|z| > |a| ⇒ causal)
     + A_2 Z^{−1}{1/(1 − b z^{−1})}   (|z| < |b| ⇒ anticausal)
     = A_1 a^n u[n] − A_2 b^n u[−n−1]
     = (a^{n+1}/(a − b)) u[n] − (b^{n+1}/(b − a)) u[−n−1]
4.5.2 Putting X(z) in a suitable form
Introduction:
• When applying the above PFE method to X(z) = N(z)/D(z), it is essential that
– N(z) and D(z) be polynomials in z^{−1};
– degree of D(z) > degree of N(z).
• If either of these conditions is not satisﬁed, further algebraic manipulations must be applied to X(z).
• There are two common types of manipulation:
– polynomial division
– use of the shift property
Polynomial division:
• Find Q(z) and R(z) such that

N(z)/D(z) = Q(z) + R(z)/D(z)   (4.49)

where Q(z) = ∑_{n=N_1}^{N_2} q_n z^{−n} is a polynomial (the quotient) and R(z) is a polynomial in z^{−1} with degree less than that of D(z).
• Determination of Q(z) and R(z) via a division table:

        Q(z)
D(z) | N(z)
      −Q(z)D(z)
      —————
        R(z)        (4.50)

• If we want the largest power of z in N(z) to decrease, we simply express D(z) and N(z) in decreasing powers of z (e.g. D(z) = 1 + 2z^{−1} + z^{−2}).
• If we want the smallest power of z in N(z) to increase, we write down D(z) and N(z) in reverse order (e.g. D(z) = z^{−2} + 2z^{−1} + 1).
Example 4.15:
Let us consider the z-transform

X(z) = (−5 + 3z^{−1} + z^{−2}) / (3 + 4z^{−1} + z^{−2}),

with the constraint that x[n] be causal. The ﬁrst step towards ﬁnding x[n] is to use long division to make the degree of the numerator smaller than the degree of the denominator.¹

¹ That is, we want the smallest power of z in N(z) to increase from z^{−2} to z^{−1}. Accordingly, we write down N(z) and D(z) in reverse order when performing the division.

                          1
z^{−2} + 4z^{−1} + 3 | z^{−2} + 3z^{−1} − 5
                       −(z^{−2} + 4z^{−1} + 3)
                       ———————————
                              −z^{−1} − 8        (4.51)

so that X(z) rewrites:

X(z) = 1 − (z^{−1} + 8)/(z^{−2} + 4z^{−1} + 3).

The denominator of the second term has two roots, giving poles at z = −1/3 and z = −1, hence the factorization:

X(z) = 1 − (1/3) (z^{−1} + 8) / ((1 + (1/3)z^{−1})(1 + z^{−1})).

The PFE of the rational term in the above equation is given by two terms:

X(z) = 1 − (1/3) [A_1/(1 + (1/3)z^{−1}) + A_2/(1 + z^{−1})],
with

A_1 = [(z^{−1} + 8)/(1 + z^{−1})]|_{z=−1/3} = −5/2   (4.52)
A_2 = [(z^{−1} + 8)/(1 + (1/3)z^{−1})]|_{z=−1} = 21/2,   (4.53)

so that

X(z) = 1 + (5/6)/(1 + (1/3)z^{−1}) − (7/2)/(1 + z^{−1}).

Now the constraint of causality of x[n] determines the region of convergence of X(z), which must be delimited by circles of radius 1/3 and/or 1. Since the sequence is causal, its ROC must extend outwards from the outermost pole, so the ROC is |z| > 1. The sequence x[n] is then given by:

x[n] = δ[n] + (5/6)(−1/3)^n u[n] − (7/2)(−1)^n u[n].
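One way to check Example 4.15 without re-deriving the PFE is to generate the causal power-series coefficients of X(z) = N(z)/D(z) by running the equivalent difference equation, and compare them with the closed-form x[n] (numpy assumed):

```python
import numpy as np

# Check Example 4.15: the causal power-series coefficients of X(z) = N(z)/D(z)
# satisfy the recursion a_0 x[n] = b_n - sum_{k>=1} a_k x[n-k] (b_n = 0, n > 2).
b = np.array([-5.0, 3.0, 1.0])   # N(z) = -5 + 3 z^-1 + z^-2
a = np.array([3.0, 4.0, 1.0])    # D(z) =  3 + 4 z^-1 + z^-2

N = 20
x = np.zeros(N)
for n in range(N):
    acc = b[n] if n < len(b) else 0.0
    for k in range(1, len(a)):
        if n - k >= 0:
            acc -= a[k] * x[n - k]
    x[n] = acc / a[0]

n = np.arange(N)
closed = (n == 0) + (5 / 6) * (-1 / 3) ** n - (7 / 2) * (-1.0) ** n
print(np.max(np.abs(x - closed)))   # ≈ 0
```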
Use of shift property:
• In some cases, a simple multiplication by z^k is sufﬁcient to put X(z) into a suitable format, that is:

Y(z) = z^k X(z) = N(z)/D(z)   (4.54)

where N(z) and D(z) satisfy the previous conditions.
• The PFE method is then applied to Y(z), yielding a DT signal y[n].
• Finally, the shift property is applied to recover x[n]:

x[n] = y[n−k]   (4.55)
Example 4.16:
Consider

X(z) = (1 − z^{−128})/(1 − z^{−2}),  |z| > 1

We could use division to work out this example, but this would not be very efﬁcient. A faster approach is to use a combination of linearity and the shift property. First note that

X(z) = Y(z) − z^{−128} Y(z)

where

Y(z) = 1/(1 − z^{−2}) = 1/((1 − z^{−1})(1 + z^{−1}))

The inverse z-transform of Y(z) is easily obtained as (please try it)

y[n] = (1/2)(1 + (−1)^n) u[n]

Therefore

x[n] = y[n] − y[n−128] = (1/2)(1 + (−1)^n)(u[n] − u[n−128])
4.6 The onesided ZT
Deﬁnition:

X^+(z) = Z^+{x[n]} ≡ ∑_{n=0}^{∞} x[n] z^{−n}   (4.56)

• ROC: |z| > r, for some r ≥ 0.
• Information about x[n] for n < 0 is lost.
• Used to solve LCCDEs with arbitrary initial conditions.

Useful properties:
• Time shift to the right (k > 0):

x[n−k] ←Z+→ z^{−k} X^+(z) + x[−k] + x[−k+1] z^{−1} + ··· + x[−1] z^{−(k−1)}   (4.57)

• Time shift to the left (k > 0):

x[n+k] ←Z+→ z^k X^+(z) − x[0] z^k − x[1] z^{k−1} − ··· − x[k−1] z   (4.58)
Example 4.17:
Consider the LCCDE

y[n] = α y[n−1] + x[n],  n ≥ 0   (4.59)

where x[n] = β u[n], α and β are real constants with α ≠ 1, and y[−1] is an arbitrary initial condition. Let us ﬁnd the solution of this equation, i.e. y[n] for n ≥ 0, using the unilateral z-transform Z^+.

• Compute Z^+ of x[n]:

x[n] causal ⇒ X^+(z) = X(z) = β/(1 − z^{−1}),  |z| > 1

• Apply Z^+ to both sides of (4.59) and solve for Y^+(z):

Y^+(z) = α(z^{−1} Y^+(z) + y[−1]) + X^+(z)
       = α z^{−1} Y^+(z) + α y[−1] + β/(1 − z^{−1})
⇒ (1 − α z^{−1}) Y^+(z) = α y[−1] + β/(1 − z^{−1})
⇒ Y^+(z) = α y[−1]/(1 − α z^{−1}) + β/((1 − α z^{−1})(1 − z^{−1}))
          = α y[−1]/(1 − α z^{−1}) + A/(1 − α z^{−1}) + B/(1 − z^{−1})

where

A = −αβ/(1 − α),  B = β/(1 − α)

• To obtain y[n] for n ≥ 0, compute the inverse unilateral ZT. This is equivalent to computing the standard inverse ZT under the assumption of a causal solution, i.e. ROC: |z| > max(1, |α|). Thus, for n ≥ 0,

y[n] = α y[−1] α^n u[n] + A α^n u[n] + B (1)^n u[n]
     = y[−1] α^{n+1} + β (1 − α^{n+1})/(1 − α),  n ≥ 0
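The closed-form solution of Example 4.17 can be validated by simulating the recursion directly; the values of α, β and y[−1] below are arbitrary test choices:

```python
import numpy as np

# Simulate the recursion of Example 4.17 and compare with the closed form
# y[n] = y[-1] α^{n+1} + β (1 - α^{n+1})/(1 - α). Parameter values arbitrary.
alpha, beta, y_init = 0.9, 2.0, -1.5    # y_init plays the role of y[-1]

N = 30
y = np.empty(N)
prev = y_init
for n in range(N):
    y[n] = alpha * prev + beta          # x[n] = β u[n]
    prev = y[n]

n = np.arange(N)
closed = y_init * alpha ** (n + 1) + beta * (1 - alpha ** (n + 1)) / (1 - alpha)
print(np.max(np.abs(y - closed)))       # ≈ 0
```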
4.7 Problems
Problem 4.1: Inverse ZT
Knowing that h[n] is causal, determine h[n] from its z-transform H(z):

H(z) = (2 + 2z^{−1}) / ((1 + (1/4)z^{−1})(1 − (1/2)z^{−1})).
Problem 4.2:
Let a stable system H(z) be described by the pole-zero plot in Figure 4.6, with the added speciﬁcation that H(z) is equal to 1 at z = 1.
1. Using Matlab if necessary, ﬁnd the impulse response h[n].
2. From the pole-zero plot, is this a lowpass, highpass, bandpass or bandstop ﬁlter? Check your answer by plotting the magnitude response |H(ω)|.
Problem 4.3:
Let a causal system have the following set of poles and zeros:
• zeros: 0, (3/2)e^{±jπ/4}, 0.8;
Fig. 4.6 Pole-zero plot for Problem 4.2. [z-plane plot; the marked angles are π/12, π/6 and 3π/4, and the marked radius is 0.8.]
• poles: ±1/2, (1/2)e^{±jπ/6}.
It is also known that H(z)|_{z=1} = 1.
1. Compute H(z) and determine the region of convergence;
2. Compute h[n].
Problem 4.4: Difference Equation
Let a system be speciﬁed by the following LCCDE:

y[n] = x[n] − x[n−1] + (1/3) y[n−1],

with initial rest conditions.
1. What is the corresponding transfer function H(z)? Also determine its ROC.
2. Compute the impulse response h[n]:
(a) by inverting the z-transform H(z);
(b) from the LCCDE.
3. What is the output of this system if
(a) x[n] = (1/2)^n u[n];
(b) x[n] = δ[n−3]?
Chapter 5
Z-domain analysis of LTI systems
5.1 The system function
LTI system (recap):
y[n] = x[n] ∗ h[n] = ∑_{k=−∞}^{∞} x[k] h[n−k]   (5.1)

h[n] = H{δ[n]}   (5.2)
Response to arbitrary exponential:
• Let x[n] = z^n, n ∈ Z. The corresponding output signal:

H{z^n} = ∑_k h[k] z^{n−k} = {∑_k h[k] z^{−k}} z^n = H(z) z^n   (5.3)
where we recognize H(z) as the ZT of h[n].
• Eigenvector interpretation:
– x[n] = z^n behaves as an eigenvector of LTI system H: Hx = λx
– Corresponding eigenvalue λ = H(z) provides the system gain
Deﬁnition:
Consider an LTI system H with impulse response h[n]. The system function of H, denoted H(z), is the ZT of h[n]:

H(z) = ∑_{n=−∞}^{∞} h[n] z^{−n},  z ∈ R_H   (5.4)

where R_H denotes the corresponding ROC.
Remarks:
• Specifying the ROC is essential: two different LTI systems may have the same H(z) but different R_H.
• If H(z) and R_H are known, h[n] can be recovered via the inverse ZT.
• If z = e^{jω} ∈ R_H, i.e. the ROC contains the unit circle, then

H(e^{jω}) = ∑_{n=−∞}^{∞} h[n] e^{−jωn} ≡ H(ω)   (5.5)

That is, the system function evaluated at z = e^{jω} corresponds to the frequency response at angular frequency ω.
Properties:
Let H be an LTI system with system function H(z) and ROC R_H.
• If y[n] denotes the response of H to an arbitrary input x[n], then

Y(z) = H(z) X(z)   (5.6)

• LTI system H is causal iff R_H is the exterior of a circle (including ∞).
• LTI system H is stable iff R_H contains the unit circle.
Proof:
• Input-output relation:
H LTI ⇒ y = h ∗ x ⇒ Y(z) = H(z)X(z)
• Causality:
H causal ⇔ h[n] = 0 for n < 0 ⇔ R_H : r < |z| ≤ ∞
• Stability:
H stable ⇔ ∑_n |h[n]| = ∑_n |h[n] e^{−jωn}| < ∞ ⇔ e^{jω} ∈ R_H
5.2 LTI systems described by LCCDE
LCCDE (recap):
• A DT system obeys an LCCDE of order N if

∑_{k=0}^{N} a_k y[n−k] = ∑_{k=0}^{M} b_k x[n−k]   (5.7)

where a_0 ≠ 0 and a_N ≠ 0.
• If we further assume initial rest conditions, i.e.:

x[n] = 0 for n < n_0  ⇒  y[n] = 0 for n < n_0   (5.8)

the LCCDE corresponds to a unique causal LTI system.
System function:
Taking the ZT on both sides of (5.7):

∑_{k=0}^{N} a_k y[n−k] = ∑_{k=0}^{M} b_k x[n−k]
⇒ (∑_{k=0}^{N} a_k z^{−k}) Y(z) = (∑_{k=0}^{M} b_k z^{−k}) X(z)
⇒ H(z) = Y(z)/X(z) = (∑_{k=0}^{M} b_k z^{−k}) / (∑_{k=0}^{N} a_k z^{−k})   (5.9)
Rational system:
More generally, we say that LTI system H is rational iff

H(z) = z^{−L} B(z)/A(z)   (5.10)

where L is an arbitrary integer and

A(z) = ∑_{k=0}^{N} a_k z^{−k},  B(z) = ∑_{k=0}^{M} b_k z^{−k}   (5.11)
Factored form:
If the roots of B(z) and A(z) are known, one can express H(z) as

H(z) = G z^{−L} · ∏_{k=1}^{M} (1 − z_k z^{−1}) / ∏_{k=1}^{N} (1 − p_k z^{−1})   (5.12)

where G is the system gain (∈ R or C), the z_k's are the non-trivial zeros (i.e. ≠ 0 or ∞), and the p_k's are the non-trivial poles. The factor z^{−L} takes care of the zeros/poles at 0 or ∞.
Properties:
• Rational system (5.12) is causal iff its ROC is the exterior of a circle:
⇒ no poles at z = ∞ ⇒ L ≥ 0
⇒ ROC: |z| > max_k |p_k|
• Rational system is causal and stable if, in addition to the above:
⇒ the unit circle is contained in the ROC,
⇒ that is: |p_k| < 1 for all k
Rational system with real coefﬁcients:
• Consider rational system:
H(z) = z
−L
B(z)
A(z)
= z
−L
∑
M
k=0
b
k
z
−k
∑
N
k=0
a
k
z
−k
(5.13)
• In many applications, coefﬁcients a
k
’s and b
k
’s ∈ R. This implies
H
∗
(z) = H(z
∗
) (5.14)
• Thus, if z_k is a zero of H(z), then
	H(z_k^∗) = (H(z_k))^∗ = 0^∗ = 0   (5.15)
which shows that z_k^∗ is also a zero of H(z).
• More generally, it can be shown that
	- if p_k is a pole of order l of H(z), so is p_k^∗;
	- if z_k is a zero of order l of H(z), so is z_k^∗.
We say that complex poles (or zeros) occur in complex conjugate pairs.
• In the PZ diagram of H(z), this complex conjugate symmetry translates into a mirror-image symmetry of the poles and zeros with respect to the real axis. An example is illustrated in Figure 5.1.
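This conjugate symmetry can be observed numerically: for a polynomial with real coefficients (an arbitrary example below), the computed roots come in conjugate pairs. A small NumPy sketch:

```python
import numpy as np

a = [1.0, -0.8, 0.64]          # real coefficients (arbitrary example)
poles = np.roots(a)

# the set of roots is closed under complex conjugation
assert np.allclose(np.sort_complex(poles), np.sort_complex(poles.conj()))
print(poles)
```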
[Figure 5.1: PZ diagram with complex conjugate symmetry: zeros z_k, z_k^∗ and poles p_k, p_k^∗ mirrored across the real axis.]
5.3 Frequency response of rational systems

The formulae:
• Alternative form of H(z) (note K = L + M − N):
	(5.12) ⇒ H(z) = G z^{−K} ∏_{k=1}^{M} (z − z_k) / ∏_{k=1}^{N} (z − p_k)   (5.16)
• Frequency response:
	H(ω) ≡ H(z)|_{z=e^{jω}} = G e^{−jωK} ∏_{k=1}^{M} (e^{jω} − z_k) / ∏_{k=1}^{N} (e^{jω} − p_k)   (5.17)
• Define:
	V_k(ω) = |e^{jω} − z_k|,  U_k(ω) = |e^{jω} − p_k|
	θ_k(ω) = ∠(e^{jω} − z_k),  φ_k(ω) = ∠(e^{jω} − p_k)   (5.18)
• Magnitude response:
	|H(ω)| = |G| V_1(ω)···V_M(ω) / (U_1(ω)···U_N(ω))   (5.19)
	|H(ω)|_dB = |G|_dB + ∑_{k=1}^{M} V_k(ω)|_dB − ∑_{k=1}^{N} U_k(ω)|_dB   (5.20)
• Phase response:
	∠H(ω) = ∠G − ωK + ∑_{k=1}^{M} θ_k(ω) − ∑_{k=1}^{N} φ_k(ω)   (5.21)
• Group delay (differentiating (5.21)):
	τ_gr(ω) = −(d/dω) ∠H(ω) = K − ∑_{k=1}^{M} θ'_k(ω) + ∑_{k=1}^{N} φ'_k(ω)   (5.22)
Geometrical interpretation:
• Consider pole p_k. In the z-plane, draw the vector from p_k to the point e^{jω} on the unit circle:
	- ∆ = e^{jω} − p_k: vector joining p_k to the point e^{jω} on the unit circle
	- U_k(ω) = |∆|: length of the vector ∆
	- φ_k(ω) = ∠∆: angle between ∆ and the real axis
• A similar interpretation holds for the terms V_k(ω) and θ_k(ω) associated with the zeros z_k.
Example 5.1:
Consider the system with transfer function:
	H(z) = 1/(1 − 0.8 z^{−1}) = z/(z − 0.8).   (5.23)
In this case, the only zero is at z = 0, while the only pole is at z = 0.8. So we have
	V_1(ω) = |e^{jω}| = 1,  θ_1(ω) = ω
	U_1(ω) = |e^{jω} − 0.8|,  φ_1(ω) = ∠(e^{jω} − 0.8)
The magnitude of the frequency response at frequency ω is thus given by the inverse of the distance between 0.8 and e^{jω} in the z-plane, and the phase response is given by ω − φ_1(ω). Figure 5.2 illustrates this construction.
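As a sanity check of this construction, the sketch below (NumPy, using the system of Example 5.1) computes |H(ω)| from the pole and zero distances V_1(ω), U_1(ω) and compares it with a direct evaluation of H(e^{jω}):

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 513)
ejw = np.exp(1j * w)

V1 = np.abs(ejw - 0.0)         # distance to the zero at z = 0 (always 1)
U1 = np.abs(ejw - 0.8)         # distance to the pole at z = 0.8
H_geom = V1 / U1               # |H(ω)| from the geometry (|G| = 1)

H_direct = np.abs(1.0 / (1.0 - 0.8 * np.exp(-1j * w)))
assert np.allclose(H_geom, H_direct)
print(H_geom.max())            # 1/(1 - 0.8) = 5, attained at ω = 0
```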
Some basic principles:
• For stable and causal systems, the poles are located inside the unit circle; the zeros can be anywhere.
• Poles near the unit circle at p = r e^{jω_o} (r < 1) give rise to:
	- a peak in |H(ω)| near ω_o
	- rapid phase variation near ω_o
• Zeros near the unit circle at z = r e^{jω_o} give rise to:
	- a deep notch in |H(ω)| near ω_o
	- rapid phase variation near ω_o
5.4 Analysis of certain basic systems

Introduction
In this section, we study the PZ configuration and the frequency response of basic rational systems. In all cases, we assume that the system coefficients a_k and b_k are real valued.
5.4.1 First order LTI systems

Description:
The system function is given by:
	H(z) = G (1 − b z^{−1})/(1 − a z^{−1})   (5.24)
The poles and zeros are:
	- pole: z = a (simple)
	- zero: z = b (simple)
[Figure 5.2: Illustration of the geometric construction of the frequency response of the system in (5.23), for ω = π/5 (top) and ω = π/2 (bottom).]
Practical requirements:
	- causality: ROC: |z| > |a|
	- stability: |a| < 1
Impulse response (ROC: |z| > |a|):
	h[n] = G (1 − b/a) a^n u[n] + G (b/a) δ[n]   (5.25)
Lowpass case:
To get a lowpass behavior, one needs a = 1 − ε, where 0 < ε ≪ 1 (typically). Additional attenuation of the high frequencies is possible by proper placement of the zero z = b.
Example 5.2: Low-pass first order system
Two examples of choices for b are shown below (in both cases the pole is at z = a, with ROC: |z| > a):
• zero at b = 0:
	H_1(z) = G_1 / (1 − a z^{−1}),  G_1 = 1 − a ⇒ H_1(ω = 0) = 1
• zero at b = −1:
	H_2(z) = G_2 (1 + z^{−1})/(1 − a z^{−1}),  G_2 = (1 − a)/2 ⇒ H_2(ω = 0) = 1
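The two choices can be compared numerically. A minimal NumPy sketch with a = 0.9 (the value used in Figure 5.3): both systems have unit DC gain, but the zero at z = −1 forces an exact null at ω = π.

```python
import numpy as np

a = 0.9
G1, G2 = 1 - a, (1 - a) / 2

def H1(w):   # zero at b = 0
    return G1 / (1 - a * np.exp(-1j * w))

def H2(w):   # zero at b = -1
    z = np.exp(-1j * w)
    return G2 * (1 + z) / (1 - a * z)

print(abs(H1(0.0)), abs(H2(0.0)))       # both 1.0: unit DC gain
print(abs(H1(np.pi)), abs(H2(np.pi)))   # ~0.053 vs. exactly 0
```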
The frequency responses of the corresponding ﬁrst order lowpass systems are shown in Figure 5.3.
[Figure 5.3: Frequency response (magnitude, phase, and group delay) of the first order lowpass systems with a = 0.9, for b = 0 and b = −1.]
Highpass case:
To get a highpass behavior, one has to locate the pole at a = −1 + ε, where 0 < ε ≪ 1. To get high attenuation of the DC component, one has to locate the zero at or near b = 1.

Example 5.3: High-pass first order system
The PZ diagram and transfer function of a highpass first order system with a = −0.9 and b = 1 are shown below.
Pole at z = a = −0.9 and zero at b = 1 (ROC: |z| > |a|):
	H(z) = G (1 − z^{−1})/(1 − a z^{−1}),  G = (1 + a)/2 ⇒ H(ω = π) = 1
The corresponding frequency response is shown in Figure 5.4.
[Figure 5.4: Frequency response (magnitude and phase) of a first-order highpass filter with a pole at z = −0.9 and a zero at z = 1.]
5.4.2 Second order systems

Description:
• System function:
	H(z) = G (1 + b_1 z^{−1} + b_2 z^{−2}) / (1 + a_1 z^{−1} + a_2 z^{−2})   (5.26)
• Poles:
	- if a_1^2 > 4a_2: two distinct real poles at p_{1,2} = −a_1/2 ± (1/2)√(a_1^2 − 4a_2)
	- if a_1^2 = 4a_2: a double real pole at p_1 = −a_1/2
	- if a_1^2 < 4a_2: two distinct complex poles at p_{1,2} = −a_1/2 ± (j/2)√(4a_2 − a_1^2)
• Practical requirements:
	- causality: ROC: |z| > max{|p_1|, |p_2|}
	- stability: one can show that |p_k| < 1 for all k iff
		|a_2| < 1,  a_2 > |a_1| − 1   (5.27)
Resonator:
Complex conjugate poles at p_1 = r e^{jω_o} and p_2 = r e^{−jω_o} = p_1^∗, with a double zero at z = 0 (ROC: |z| > r):
	H(z) = G / [(1 − r e^{jω_o} z^{−1})(1 − r e^{−jω_o} z^{−1})]
	     = G / [1 − 2r cos(ω_o) z^{−1} + r^2 z^{−2}]
• The frequency response (Figure 5.5) clearly shows peaks around ±ω_o.
• For r close to 1 (but < 1), |H(ω)| attains a maximum at ω = ±ω_o.
• 3 dB bandwidth: for r close to 1, one can show
	|H(ω_o ± ∆ω/2)| = 1/√2,  where ∆ω = 2(1 − r)   (5.28)
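The bandwidth approximation (5.28) can be checked numerically. A NumPy sketch for r = 0.975, ω_o = π/4 (grid density is an arbitrary choice): it measures the band where the magnitude, normalized at ω_o, stays above 1/√2, and compares it with 2(1 − r):

```python
import numpy as np

r, wo = 0.975, np.pi / 4

def H(w):
    z = np.exp(-1j * w)
    # resonator denominator 1 - 2r cos(ω_o) z^{-1} + r^2 z^{-2}, gain G = 1
    return 1.0 / (1 - 2 * r * np.cos(wo) * z + r**2 * z**2)

w = np.linspace(0, np.pi, 200001)
mag = np.abs(H(w)) / np.abs(H(wo))     # magnitude normalized at ω = ω_o
band = w[mag >= 1 / np.sqrt(2)]        # frequencies within -3 dB of the peak
bw = band.max() - band.min()
print(bw, 2 * (1 - r))                 # measured vs. predicted bandwidth, both ~0.05
```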
Notch filter:
A notch filter is an LTI system whose frequency response contains one or more notches, as a result of zeros located on (or close to) the unit circle.
[Figure 5.5: Frequency response (magnitude and phase) of a resonator system with r = 0.975, ω_o = π/4.]
Zeros on the unit circle with poles just inside it, at the same angles (ROC: |z| > r):
	z_1 = e^{jω_o},  z_2 = e^{−jω_o} = z_1^∗
	p_1 = r e^{jω_o},  p_2 = r e^{−jω_o} = p_1^∗
	H(z) = G (1 − e^{jω_o} z^{−1})(1 − e^{−jω_o} z^{−1}) / [(1 − r e^{jω_o} z^{−1})(1 − r e^{−jω_o} z^{−1})]
	     = G (1 − 2 cos(ω_o) z^{−1} + z^{−2}) / (1 − 2r cos(ω_o) z^{−1} + r^2 z^{−2})
The notches in the frequency response are clearly shown in Figure 5.6.
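A minimal NumPy sketch of the notch behavior (r = 0.9, ω_o = π/4, and unit gain G = 1 chosen for simplicity): the zeros on the unit circle force an exact null at ω_o, while away from the notch the magnitude stays near 1.

```python
import numpy as np

r, wo = 0.9, np.pi / 4

def H(w):
    z = np.exp(-1j * w)
    num = 1 - 2 * np.cos(wo) * z + z**2              # zeros at e^{±jω_o}
    den = 1 - 2 * r * np.cos(wo) * z + r**2 * z**2   # poles at r e^{±jω_o}
    return num / den

print(abs(H(wo)))                   # exact null at the notch frequency
print(abs(H(0.0)), abs(H(np.pi)))   # near unity away from the notch
```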
5.4.3 FIR filters

Description:
• System function:
	H(z) = B(z) = b_0 + b_1 z^{−1} + ··· + b_M z^{−M}
	     = b_0 (1 − z_1 z^{−1}) ··· (1 − z_M z^{−1})   (5.29)
[Figure 5.6: Frequency response (magnitude and phase) of a notch filter with r = 0.9, ω_o = π/4.]
• This is a zeroth order rational system (i.e. A(z) = 1):
	- the M zeros z_k can be anywhere in the complex plane;
	- there is a multiple pole of order M at z = 0.
• Practical requirement: none. The above system is always causal and stable.
• Impulse response:
	h[n] = b_n for 0 ≤ n ≤ M, and 0 otherwise   (5.30)
Moving average system:
• Difference equation:
	y[n] = (1/M) ∑_{k=0}^{M−1} x[n−k]   (5.31)
• System function:
	H(z) = (1/M) ∑_{k=0}^{M−1} z^{−k} = (1/M) (1 − z^{−M})/(1 − z^{−1})   (5.32)
• PZ analysis: the roots of the numerator satisfy
	z^M = 1 ⇒ z = e^{j2πk/M},  k = 0, 1, ..., M−1   (5.33)
Note: there is no pole at z = 1 because of a PZ cancellation with the zero at k = 0. Thus:
	H(z) = (1/M) ∏_{k=1}^{M−1} (1 − e^{j2πk/M} z^{−1})   (5.34)
• The PZ diagram and frequency response for M = 8 are shown in Figures 5.7 and 5.8 respectively.
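The PZ cancellation can be verified numerically: the roots of the numerator polynomial of H(z) (equivalently, the zeros of the FIR system) are the Mth roots of unity except z = 1. A NumPy sketch for M = 8:

```python
import numpy as np

M = 8
b = np.ones(M) / M            # h[n] = 1/M for n = 0..M-1
zeros = np.roots(b)           # zeros of H(z) = (1/M) * sum_k z^{-k}

assert np.allclose(np.abs(zeros), 1.0)        # all zeros on the unit circle
assert np.min(np.abs(zeros - 1.0)) > 0.5      # z = 1 is not among them (cancelled)
print(np.sort(np.abs(np.angle(zeros))) / (2 * np.pi / M))   # ~ [1, 1, 2, 2, 3, 3, 4]
```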
[Figure 5.7: Pole-zero diagram for a moving average system (M = 8): zeros equally spaced on the unit circle at angles 2πk/M, k = 1, ..., M−1, and a multiple pole at z = 0 (ROC: |z| > 0).]
5.5 More on magnitude response

Preliminaries:
• Consider a stable LTI system with impulse response h[n].
• Stable ⇒ e^{jω} ∈ ROC ⇒
	H(ω) = ∑_n h[n] e^{−jωn} = ∑_n h[n] z^{−n}|_{z=e^{jω}} = H(z)|_{z=e^{jω}}   (5.35)
• Recall that: h[n] ∈ R ⇐⇒ H^∗(ω) = H(−ω) ⇐⇒ H^∗(z) = H(z^∗)
[Figure 5.8: Frequency response (magnitude and phase) of a moving average system (M = 8).]
Properties:
The squared magnitude response of a stable LTI system can be expressed in terms of the system function as follows:
	|H(ω)|^2 = H(z) H^∗(1/z^∗)|_{z=e^{jω}}
	         = H(z) H(1/z)|_{z=e^{jω}}  if h[n] ∈ R   (5.36)
Proof: Using (5.35), the squared magnitude response can be expressed as
	|H(ω)|^2 = H(ω) H^∗(ω) = H(z) H^∗(z)|_{z=e^{jω}}   (5.37)
Note that when z = e^{jω}, we can write z = 1/z^∗. Hence:
	H^∗(z) = H^∗(1/z^∗)|_{z=e^{jω}}   (5.38)
When h[n] ∈ R, we have
	H^∗(z) = H(z^∗) = H(1/z)|_{z=e^{jω}}   (5.39)
To complete the proof, substitute (5.38) or (5.39) in (5.37). □
Magnitude square function:
• Define C(z) ≜ H(z) H^∗(1/z^∗)
• Important PZ property:
	- z_k a zero of H(z) ⇒ z_k and 1/z_k^∗ are zeros of C(z)
	- p_k a pole of H(z) ⇒ p_k and 1/p_k^∗ are poles of C(z)
• This is called conjugate reciprocal symmetry (see Figure 5.9).
[Figure 5.9: Typical pole-zero plot for a magnitude square function C(z): each zero z_k pairs with 1/z_k^∗ and each pole p_k with 1/p_k^∗.]
• Suppose the magnitude response |H(ω)|^2 is known as a function of ω, along with the number of poles and zeros of H(z):
	- we can always find the PZ diagram of C(z);
	- from there, there are only a finite number of possibilities for H(z).
5.6 Allpass systems

Definition:
We say that an LTI system H is allpass (AP) iff its frequency response H(ω) satisfies
	|H(ω)| = K,  for all ω ∈ [−π, π]   (5.40)
where K is a positive constant. In the sequel, this constant is set to 1.
Properties:
Let H(z) be an AP system.
• From the definition of an AP system,
	|H(ω)|^2 = H(z) H^∗(1/z^∗)|_{z=e^{jω}} = 1,  for all ω ∈ [−π, π]   (5.41)
This implies
	H(z) H^∗(1/z^∗) = 1,  for all z ∈ C   (5.42)
• From the above relation, it follows that
	H(1/z^∗) = 1/H^∗(z)   (5.43)
Therefore
	z_o a zero of order l of H(z) ⇐⇒ 1/z_o^∗ a pole of order l of H(z)
	p_o a pole of order l of H(z) ⇐⇒ 1/p_o^∗ a zero of order l of H(z)
Trivial AP system:
• Consider the integer delay system: H(ω) = e^{−jωk}
• This is an allpass system: |H(ω)| = 1
First order AP system:
From the above PZ considerations, we obtain a PZ diagram with a pole at a and a zero at 1/a^∗. System function:
	H_AP(z) = G (z − 1/a^∗)/(z − a) = (z^{−1} − a^∗)/(1 − a z^{−1})   (5.44)
where the system gain G has been chosen such that H(z) = 1 at z = 1.
The frequency response is shown in Figure 5.10.
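The allpass property is easy to verify numerically. A NumPy sketch with a = 0.9 e^{jπ/4} (the pole value used in Figure 5.10):

```python
import numpy as np

a = 0.9 * np.exp(1j * np.pi / 4)       # complex pole inside the unit circle
w = np.linspace(-np.pi, np.pi, 1001)
z1 = np.exp(-1j * w)                   # z^{-1} evaluated on the unit circle

H = (z1 - np.conj(a)) / (1 - a * z1)   # first order allpass, as in (5.44)
assert np.allclose(np.abs(H), 1.0)     # flat magnitude at every frequency
print(np.abs(H).min(), np.abs(H).max())
```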
[Figure 5.10: Frequency response (magnitude, phase, and group delay) of a first order allpass system with a = 0.9 e^{jπ/4}.]
General form of rational AP system:
• Consider the rational system:
	H(z) = z^{−L} B(z)/A(z) = z^{−L} (∑_{k=0}^{M} b_k z^{−k}) / (∑_{k=0}^{N} a_k z^{−k})   (5.45)
• In order for this system to be AP, we need:
	B(z) = z^{−N} A^∗(1/z^∗) = ∑_{k=0}^{N} a_k^∗ z^{k−N}   (5.46)
• The AP system will be causal and stable if, in addition to the above:
	- L ≥ 0;
	- all poles (i.e. zeros of A(z)) are inside the unit circle.
Remark:
• Consider two LTI systems with frequency responses H_1(ω) and H_2(ω).
• |H_1(ω)| = |H_2(ω)| for all ω ∈ [−π, π] iff H_2(z) = H_1(z) H_ap(z) for some allpass system H_ap(z).
5.7 Inverse system

Definition:
A system H is invertible iff for any arbitrary input signals x_1 and x_2, we have:
	x_1 ≠ x_2 ⟹ H x_1 ≠ H x_2   (5.47)

Definition:
Let H be an invertible system. Its inverse, denoted H^I, is such that for any input signal x in the domain of H, we have:
	H^I(H x) = x   (5.48)
Remarks:
• Invertible means that there is a one-to-one correspondence between the set of possible input signals (domain of H) and the set of corresponding output signals (range).
• When applied to the output signal y = H x, the inverse system produces the original input x:
	x[n] → H{.} → y[n] → H^I{.} → x[n]
Fundamental property:
Let H be an invertible LTI system with system function H(z). The inverse system H^I is also LTI, with system function
	H^I(z) = 1/H(z)   (5.49)
Remarks:
• Several ROCs are usually possible for H^I(z) (corresponding to causal, mixed, or anticausal impulse responses h^I[n]).
• Constraint: ROC(H) ∩ ROC(H^I) cannot be empty.
• From (5.49), it should be clear that the zeros of H(z) become the poles of H^I(z) and vice versa (with the order being preserved).
• Thus, a causal and stable inverse exists iff all the zeros of H(z) are inside the unit circle.
Proof (the first two parts are optional):
	- Linearity: Let y_1 = H x_1 and y_2 = H x_2. Since H is linear by assumption, we have H(a_1 x_1 + a_2 x_2) = a_1 y_1 + a_2 y_2. By definition of an inverse system,
		H^I(a_1 y_1 + a_2 y_2) = a_1 x_1 + a_2 x_2 = a_1 H^I(y_1) + a_2 H^I(y_2)
	which shows that H^I is linear.
	- Time invariance: Let y = H x and let D_k denote the shift-by-k operation. Since H is assumed to be time-invariant, H(D_k x) = D_k y. Therefore, H^I(D_k y) = D_k x = D_k(H^I y), which shows that H^I is also TI.
	- Eq. (5.49): For any x in the domain of H, we must have H^I(H x) = x. Since both H and H^I are LTI systems, this implies that
		H^I(z) H(z) X(z) = X(z)
	from which (5.49) follows immediately. □
Example 5.4:
Consider the FIR system with impulse response (0 < a < 1):
	h[n] = δ[n] − a δ[n−1]
The corresponding system function is
	H(z) = 1 − a z^{−1},  |z| > 0
Invoking (5.49), we obtain
	H^I(z) = 1/(1 − a z^{−1})
The zero of H(z) at z = a becomes a pole of H^I(z), so there are two possible ROCs for the inverse system H^I(z):
• ROC_1: |z| > a ⇒ h^I[n] = a^n u[n] (causal & stable)
• ROC_2: |z| < a ⇒ h^I[n] = −a^n u[−n−1] (anticausal & unstable)
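The causal inverse can be checked by convolution: h ∗ h^I should equal δ[n], up to the tail introduced by truncating h^I[n]. A NumPy sketch with a = 0.5 (an arbitrary choice satisfying 0 < a < 1):

```python
import numpy as np

a = 0.5
h = np.array([1.0, -a])        # h[n] = δ[n] - a δ[n-1]
n = np.arange(50)
hI = a**n                      # causal stable inverse, truncated at 50 samples

d = np.convolve(h, hI)         # ~ δ[n], apart from the truncation tail
assert np.isclose(d[0], 1.0)
assert np.all(np.abs(d[1:50]) < 1e-12)   # interior samples vanish
print(d[:4], d[-1])            # [1, 0, 0, 0] ..., tail term -a**50
```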
5.8 Minimum-phase system

5.8.1 Introduction
It is of practical importance to know when a causal & stable inverse H^I exists.

Example 5.5: Inverse system and ROC
Consider the situation described by the pole-zero plot in Figure 5.11 (note: H^I(z) = 1/H(z)). There are three possible ROCs for the inverse system H^I:
	- ROC_1: |z| < 1/2 ⇒ anticausal & unstable
	- ROC_2: 1/2 < |z| < 3/2 ⇒ mixed & stable
	- ROC_3: |z| > 3/2 ⇒ causal & unstable
A causal and stable inverse does not exist!
[Figure 5.11: Pole-zero plots of an LTI system H(z) (zeros at 1/2 and 3/2) and its inverse H^I(z) (poles at 1/2 and 3/2).]
From the above example, it is clear that:
	H^I causal and stable ⇐⇒ poles of H^I(z) inside the u.c. ⇐⇒ zeros of H(z) inside the u.c.
These considerations lead to the definition of minimum phase systems, which fulfill the above conditions.

Definition:
A causal LTI system H is said to be minimum phase (MP) iff all its poles and zeros are inside the unit circle.

Remarks
• Poles inside the u.c. ⇒ h[n] can be chosen to be causal and stable.
• Zeros inside the u.c. ⇒ h^I[n] can be chosen to be causal and stable.
• Minimum phase systems play a very important role in practical applications of digital ﬁlters.
5.8.2 MP-AP decomposition

Any rational system function can be decomposed as the product of a minimum-phase and an allpass component, as illustrated in Figure 5.12:
	H(z) = H_min(z) · H_ap(z)   (5.50)
	        (MP)      (AP)
[Figure 5.12: Minimum phase/allpass decomposition of a rational system: H(z) realized as the cascade H_min(z) followed by H_ap(z).]
Example 5.6:
Consider for instance the transfer function
	H(z) = (1 − (1/2) z^{−1})(1 − (3/2) z^{−1}) / (1 − z^{−1} + (1/2) z^{−2}),
whose pole-zero plot is shown in Figure 5.11. The zero at z = 3/2 is not inside the unit circle, so H(z) is not minimum-phase. The annoying factor is thus 1 − (3/2) z^{−1} in the numerator of H(z), and it will have to be included in the allpass component. This means that the allpass component H_ap(z) will have a zero at z = 3/2, and thus a pole at z = 2/3. The allpass component is then
	H_ap(z) = G (1 − (3/2) z^{−1}) / (1 − (2/3) z^{−1}),
while H_min(z) is chosen so as to have H(z) = H_min(z) H_ap(z):
	H_min(z) = H(z)/H_ap(z) = (1/G) (1 − (1/2) z^{−1})(1 − (2/3) z^{−1}) / (1 − z^{−1} + (1/2) z^{−2})
The corresponding pole-zero plots are shown in Figure 5.13. If we further require the allpass component to have unit gain, i.e. H_ap(z = 1) = 1, then we must set G = −2/3.
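The decomposition can be verified numerically on the unit circle. A NumPy sketch of this example; note that solving H_ap(z = 1) = 1 for the gain gives G = −2/3 here:

```python
import numpy as np

G = -2.0 / 3.0                      # gain making H_ap(z=1) = 1 (checked below)
w = np.linspace(-np.pi, np.pi, 801)
z1 = np.exp(-1j * w)                # z^{-1} on the unit circle

H    = (1 - 0.5 * z1) * (1 - 1.5 * z1) / (1 - z1 + 0.5 * z1**2)
H_ap = G * (1 - 1.5 * z1) / (1 - (2 / 3) * z1)
H_mp = (1 / G) * (1 - 0.5 * z1) * (1 - (2 / 3) * z1) / (1 - z1 + 0.5 * z1**2)

assert np.allclose(H, H_mp * H_ap)           # the factorization is exact
assert np.allclose(np.abs(H_ap), 1.0)        # allpass factor has unit magnitude
assert np.isclose(G * (1 - 1.5) / (1 - 2 / 3), 1.0)   # H_ap at z = 1
print("H = H_min * H_ap verified on the unit circle")
```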
5.8.3 Frequency response compensation

In many applications, we need to remove the distortion introduced by a channel on its input via a compensating filter (e.g. channel equalization), as illustrated in Figure 5.14.
Suppose H(ω) is not minimum phase; in this case we know that there exists no stable and causal inverse. Now, consider the MP-AP decomposition of the transfer function: H(ω) = H_min(ω) H_ap(ω). A possible
[Figure 5.13: Pole-zero plots for the decomposition into minimum-phase and allpass parts: H_min(z) with zeros at 1/2 and 2/3, and H_ap(z) with a zero at 3/2 and a pole at 2/3.]
[Figure 5.14: Frequency response compensation: x[n] → distorting system H(ω) → y[n] → compensating system H_c(ω) → x̂[n].]
choice for the compensating filter (which is then stable and causal) is the inverse of the minimum phase component of H(ω):
	H_c(ω) = 1/H_min(ω).   (5.51)
In this case, the overall frequency response of the cascaded systems in Figure 5.14 is given by
	X̂(ω)/X(ω) = H_c(ω) H(ω) = H_ap(ω).   (5.52)
Accordingly, the magnitude spectrum of the compensator output signal x̂[n] is identical to that of the input signal x[n], whereas their phases differ by the phase response of the allpass component H_ap(z):
	|X̂(ω)| = |X(ω)| ⇒ exact magnitude compensation   (5.53)
	∠X̂(ω) = ∠X(ω) + ∠H_ap(ω) ⇒ phase distortion
Thanks to this decomposition, at least the effect of H(ω) on the input signal's magnitude spectrum can be compensated for.
5.8.4 Properties of MP systems

Consider a causal MP system with frequency response H_min(ω). For any causal system H(ω) with the same magnitude response (i.e. |H(ω)| = |H_min(ω)|):
(1) group delay of H(ω) ≥ group delay of H_min(ω)
(2) ∑_{n=0}^{M} |h[n]|^2 ≤ ∑_{n=0}^{M} |h_min[n]|^2, for all integers M ≥ 0
According to (1), the minimum phase system has the minimum processing delay among all systems with the same magnitude response. Property (2) states that for the MP system, the energy concentration of the impulse response h_min[n] around n = 0 is maximal (i.e. minimum energy delay).
5.9 Problems

Problem 5.1: Pole-zero plots
For each of the pole-zero plots in Figure 5.15, state whether it can correspond to
1. an allpass system;
2. a minimum phase system;
3. a system with real impulse response;
4. an FIR system;
5. a generalized linear phase system;
6. a system with a stable inverse.
In each case, specify any additional constraint on the ROC in order for the property to hold.
Problem 5.2: Inversion of a Moving Average system
An M-point moving average system is defined by the impulse response
	h[n] = (1/M) (u[n] − u[n−M]).
1. Find the z-transform of h[n] (use the formula for the sum of a geometric series to simplify your expression);
2. Draw a pole-zero plot of H(z). What is the region of convergence?
3. Compute the z-transform of all possible inverses h^I[n] of h[n]. Are any of these inverses stable? What is the general shape of the corresponding impulse responses h^I[n]?
4. A modified moving average system with forgetting factor a is defined by
	g[n] = (1/M) a^n (u[n] − u[n−M]),
where a is real, positive and a < 1. Compute G(z) and draw a pole-zero plot. Find a stable and causal inverse G^I(z) and compute its impulse response g^I[n].
[Figure 5.15: Pole-zero plots (a)-(d) pertaining to Problem 5.1.]
Problem 5.3: Implementation of an approximate noncausal inverse
Let a system be defined by
	H(z) = (1 − b z^{−1})/(1 − a z^{−1}),  |z| > |a|,
where |b| > 1 > |a|.
1. Is this system stable? Is it causal?
2. Find the impulse response h[n].
3. Find the z-transforms H^I(z) of all possible inverses of this system. In each case, specify a region of convergence, and state whether the inverse is stable and/or causal.
4. For each of the above inverses, compute the impulse response and sketch it for a = 0.5, b = 1.5.
5. Consider the anticausal inverse h^I[n] above. You want to implement a delayed and truncated version of this system. Find the number of samples of h^I[n] to keep and the corresponding delay, so as to retain 99.99% of the energy of the impulse response h^I[n].
Problem 5.4: Minimum-phase/allpass decomposition
Let a system be defined in the z-domain by the following set of poles and zeros:
• zeros: 2, 0.7 e^{±jπ/8};
• poles: 3/4, 0.3 e^{±jπ/12}.
Furthermore, H(z) is equal to 1 at z = 1.
1. Draw a pole-zero plot for this system;
2. Assume that the system is causal. Is it stable?
3. Compute the factorization of H(z) into
	H(z) = H_min(z) H_ap(z),
where H_min(z) is minimum-phase and H_ap(z) is allpass. Draw pole-zero plots for H_min(z) and H_ap(z).
Problem 5.5: Two-sided sequence
Let a stable system have the transfer function
	H(z) = 1 / [(1 − (3/2) z^{−1})(1 − (1/2) e^{jπ/4} z^{−1})(1 − (1/2) e^{−jπ/4} z^{−1})].
1. Draw a pole-zero plot for H(z) and specify its ROC;
2. Compute the inverse z-transform h[n].
Chapter 6
The discrete Fourier Transform (DFT)
Introduction:
The DTFT has proven to be a valuable tool for the theoretical analysis of signals and systems. However, if one looks at its definition, i.e.
	X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn},  ω ∈ [−π, π],   (6.1)
it becomes clear that, from a computational viewpoint, it suffers from several drawbacks. Indeed, its numerical evaluation poses the following difficulties:
• the summation over n is infinite;
• the variable ω is continuous.
In many situations of interest, it is either not possible, or not necessary, to implement the infinite summation ∑_{n=−∞}^{∞} in (6.1):
• only the signal samples x[n] from n = 0 to n = N−1 are available;
• the signal is known to be zero outside this range; or
• the signal is periodic with period N.
In all these cases, we would like to analyze the frequency content of the signal x[n] based only on the finite set of samples x[0], x[1], ..., x[N−1]. We would also like a frequency domain representation of these samples in which the frequency variable takes on only a finite set of values, say ω_k for k = 0, 1, ..., N−1, to better match the way a processor computes a frequency representation.
The discrete Fourier transform (DFT) fulfills these needs. It can be seen as an approximation to the DTFT.
6.1 The DFT and its inverse

Definition:
The N-point DFT is a transformation that maps the DT signal samples {x[0], ..., x[N−1]} into a periodic sequence X[k], defined by
	X[k] = DFT_N{x[n]} ≜ ∑_{n=0}^{N−1} x[n] e^{−j2πkn/N},  k ∈ Z   (6.2)

Remarks:
• Only the samples x[0], ..., x[N−1] are used in the computation.
• The N-point DFT is periodic, with period N:
	X[k + N] = X[k].   (6.3)
Thus, it is sufficient to specify X[k] for k = 0, 1, ..., N−1 only.
• The DFT X[k] may be viewed as an approximation to the DTFT X(ω) at frequency ω_k = 2πk/N.
• The "D" in DFT stands for discrete frequency (i.e. ω_k).
• Other common notations:
	X[k] = ∑_{n=0}^{N−1} x[n] e^{−jω_k n}  where ω_k ≜ 2πk/N   (6.4)
	     = ∑_{n=0}^{N−1} x[n] W_N^{kn}  where W_N ≜ e^{−j2π/N}   (6.5)
Example 6.1:
(a) Consider
	x[n] = 1 for n = 0, and 0 for n = 1, ..., N−1.
We have
	X[k] = ∑_{n=0}^{N−1} x[n] e^{−j2πkn/N} = 1,  all k ∈ Z
(b) Let
	x[n] = a^n,  n = 0, 1, ..., N−1
We have
	X[k] = ∑_{n=0}^{N−1} a^n e^{−j2πkn/N} = ∑_{n=0}^{N−1} ρ_k^n  where ρ_k ≜ a e^{−j2πk/N}
	     = N if ρ_k = 1, and (1 − ρ_k^N)/(1 − ρ_k) otherwise.
Note the following special cases of interest:
	a = 1 ⟹ X[k] = N if k = 0, and 0 if k = 1, ..., N−1.
	a = e^{j2πl/N} ⟹ X[k] = N if k = l modulo N, and 0 otherwise.
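The definition (6.2) translates directly into code. A NumPy sketch for case (b) with a = e^{j2πl/N} (l = 3, N = 8, arbitrary choices), checked against np.fft.fft, whose convention matches (6.2):

```python
import numpy as np

N, l = 8, 3
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * l * n / N)   # x[n] = a^n with a = e^{j2πl/N}

k = n.reshape(-1, 1)                     # column of frequency indices
X = np.sum(x * np.exp(-1j * 2 * np.pi * k * n / N), axis=1)   # direct sum (6.2)

assert np.allclose(X, np.fft.fft(x))
print(np.round(np.abs(X), 6))            # N at k = l, 0 elsewhere
```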
Inverse DFT (IDFT):
The N-point IDFT of the samples X[0], ..., X[N−1] is defined as the periodic sequence
	x̃[n] = IDFT_N{X[k]} ≜ (1/N) ∑_{k=0}^{N−1} X[k] e^{j2πkn/N},  n ∈ Z   (6.6)
Remarks:
• In general, x̃[n] ≠ x[n] for all n ∈ Z (more on this later).
• Only the samples X[0], ..., X[N−1] are used in the computation.
• The N-point IDFT is periodic, with period N:
	x̃[n + N] = x̃[n]   (6.7)
• Other common notations:
	x̃[n] = (1/N) ∑_{k=0}^{N−1} X[k] e^{jω_k n}  where ω_k ≜ 2πk/N   (6.8)
	      = (1/N) ∑_{k=0}^{N−1} X[k] W_N^{−kn}  where W_N ≜ e^{−j2π/N}   (6.9)

IDFT Theorem:
If X[k] is the N-point DFT of {x[0], ..., x[N−1]}, then
	x[n] = x̃[n] = IDFT_N{X[k]},  n = 0, 1, ..., N−1   (6.10)
Remarks:
• The theorem states that x[n] = x̃[n] for n = 0, 1, ..., N−1 only. Nothing is said about the values of x[n] outside this range.
• In general, it is not true that x[n] = x̃[n] for all n ∈ Z: the IDFT x̃[n] is periodic with period N, whereas no such requirement is imposed on the original signal x[n].
• Therefore, the values of x[n] for n < 0 and for n ≥ N cannot in general be recovered from the DFT samples X[k]. This is understandable, since these sample values are not used when computing X[k].
• However, there are two important special cases in which the complete signal x[n] can be recovered from the DFT samples X[k] (k = 0, 1, ..., N−1):
	- x[n] is periodic with period N;
	- x[n] is known to be zero for n < 0 and for n ≥ N.
Proof of Theorem:
• First note that:
	(1/N) ∑_{k=0}^{N−1} e^{−j2πkn/N} = 1 if n = 0, ±N, ±2N, ..., and 0 otherwise   (6.11)
Indeed, the above sum is a sum of terms of a geometric series. If n = lN, l ∈ Z, then all N terms are equal to one, so that the sum is N. Otherwise, the sum is equal to
	(1 − e^{−j2πn}) / (1 − e^{−j2πn/N}),
whose numerator is equal to 0.
• Starting from the IDFT definition:
	x̃[n] = (1/N) ∑_{k=0}^{N−1} X[k] e^{j2πkn/N}
	      = (1/N) ∑_{k=0}^{N−1} ∑_{l=0}^{N−1} x[l] e^{−j2πkl/N} e^{j2πkn/N}
	      = ∑_{l=0}^{N−1} x[l] [ (1/N) ∑_{k=0}^{N−1} e^{−j2πk(l−n)/N} ]   (6.12)
For the special case n ∈ {0, 1, ..., N−1}, we have −N < l − n < N. Thus, according to (6.11), the bracketed term is zero unless l = n. Therefore:
	x̃[n] = x[n],  n = 0, 1, ..., N−1 □   (6.13)
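The theorem can also be checked numerically. A NumPy sketch: the IDFT sum (6.6), evaluated by brute force, recovers x[n] for n = 0, ..., N−1 and defines an N-periodic sequence (random test signal, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(0)     # arbitrary seed
N = 16
x = rng.standard_normal(N)
X = np.fft.fft(x)                  # N-point DFT, same convention as (6.2)

def idft(X, n):                    # brute-force evaluation of (6.6)
    k = np.arange(len(X))
    return np.sum(X * np.exp(1j * 2 * np.pi * k * n / len(X))) / len(X)

xt = np.array([idft(X, n) for n in range(N)])
assert np.allclose(xt, x)                        # x~[n] = x[n] for 0 <= n < N
assert np.isclose(idft(X, 5), idft(X, 5 + N))    # x~[n+N] = x~[n]
print("IDFT recovers x[0..N-1] and is N-periodic")
```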
6.2 Relationship between the DFT and the DTFT

Introduction
The DFT may be viewed as a finite approximation to the DTFT:
	X[k] = ∑_{n=0}^{N−1} x[n] e^{−jω_k n} ≈ X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}
at frequency ω = ω_k = 2πk/N. As pointed out earlier, an arbitrary signal x[n], n ∈ Z, cannot in general be recovered entirely from its N-point DFT X[k], k ∈ {0, ..., N−1}. Thus, it should not be possible to recover the DTFT exactly from the DFT.
However, in the following two special cases:
	- finite length signals,
	- N-periodic signals,
x[n] can be completely recovered from its N-point DFT. In these two cases, the DFT is not merely an approximation to the DTFT: the DTFT can be evaluated exactly, at any given frequency ω ∈ [−π, π], if the N-point DFT X[k] is known.
6.2.1 Finite length signals

Assumption: Suppose x[n] = 0 for n < 0 and for n ≥ N.

Inverse DFT: In this case, x[n] can be recovered entirely from its N-point DFT X[k], k = 0, 1, ..., N−1. Let x̃[n] denote the IDFT of X[k]:
	x̃[n] = IDFT{X[k]} = (1/N) ∑_{k=0}^{N−1} X[k] e^{j2πkn/N},  n ∈ Z.   (6.14)
For n = 0, 1, ..., N−1, the IDFT theorem yields x[n] = x̃[n]. For n < 0 and for n ≥ N, we have x[n] = 0 by assumption. Therefore:
	x[n] = x̃[n] if 0 ≤ n < N, and 0 otherwise.   (6.15)
Relationship between DFT and DTFT: Since the DTFT X(ω) is a function of the samples x[n], it should be clear that in this case it is possible to completely reconstruct the DTFT X(ω) from the N-point DFT X[k]. First consider the frequency ω_k = 2πk/N:
	X(ω_k) = ∑_{n=−∞}^{∞} x[n] e^{−jω_k n} = ∑_{n=0}^{N−1} x[n] e^{−jω_k n} = X[k]   (6.16)
The general case, i.e. arbitrary ω, is handled by the following theorem.
Theorem: Let X(ω) and X[k] respectively denote the DTFT and N-point DFT of signal x[n]. If x[n] = 0 for n < 0 and for n ≥ N, then:
	X(ω) = ∑_{k=0}^{N−1} X[k] P(ω − ω_k)   (6.17)
where
	P(ω) ≜ (1/N) ∑_{n=0}^{N−1} e^{−jωn}   (6.18)

Remark: This theorem provides a kind of interpolation formula for evaluating X(ω) in between adjacent values X(ω_k) = X[k].
Proof:
	X(ω) = ∑_{n=0}^{N−1} x[n] e^{−jωn}
	     = ∑_{n=0}^{N−1} [ (1/N) ∑_{k=0}^{N−1} X[k] e^{jω_k n} ] e^{−jωn}
	     = ∑_{k=0}^{N−1} X[k] (1/N) ∑_{n=0}^{N−1} e^{−j(ω−ω_k)n}
	     = ∑_{k=0}^{N−1} X[k] P(ω − ω_k) □   (6.19)
Properties of P(ω):
• Periodicity: P(ω + 2π) = P(ω).
• If ω = 2πl (l integer), then e^{−jωn} = e^{−j2πln} = 1, so that P(ω) = 1.
• If ω ≠ 2πl, then
	P(ω) = (1/N) (1 − e^{−jωN})/(1 − e^{−jω}) = (1/N) e^{−jω(N−1)/2} sin(ωN/2)/sin(ω/2)   (6.20)
• Note that at the frequencies ω_k = 2πk/N:
	P(ω_k) = 1 for k = 0, and 0 for k = 1, ..., N−1   (6.21)
so that (6.17) is consistent with (6.16).
• Figure 6.1 shows the magnitude of P(ω) (for N = 8).
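The interpolation formula (6.17) can be verified numerically. A NumPy sketch for a random length-N signal and an arbitrary off-grid frequency ω:

```python
import numpy as np

N = 8
rng = np.random.default_rng(1)     # arbitrary seed
x = rng.standard_normal(N)         # x[n] = 0 outside 0..N-1 by construction
X = np.fft.fft(x)                  # N-point DFT

def P(w):                          # interpolation kernel, as in (6.18)
    return np.sum(np.exp(-1j * w * np.arange(N))) / N

w = 0.917                          # arbitrary frequency, not a multiple of 2π/N
dtft = np.sum(x * np.exp(-1j * w * np.arange(N)))              # DTFT at ω
interp = sum(X[k] * P(w - 2 * np.pi * k / N) for k in range(N))  # (6.17)
assert np.isclose(dtft, interp)
print(dtft, interp)
```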
[Figure 6.1: Magnitude spectrum of the frequency interpolation function P(ω) for N = 8.]
Remarks: More generally, suppose that x[n] = 0 for n < 0 and for n ≥ L. As long as the DFT size N is larger than or equal to L, i.e. N ≥ L, the results of this section apply. In particular:
• One can reconstruct x[n] entirely from the DFT samples X[k].
• Also, from the theorem: X[k] = X(ω_k) at ω_k = 2πk/N.
Increasing the value of N beyond the minimum required value N = L is called zero-padding:
• {x[0], ..., x[L−1]} ⇒ {x[0], ..., x[L−1], 0, ..., 0}
• The DFT points obtained give a nicer graph of the underlying DTFT because the frequency spacing ∆ω = 2π/N is smaller.
• However, no new information about the original signal is introduced by increasing N in this way.
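Zero-padding is directly supported by FFT routines. A NumPy sketch (L = 8, N = 32, arbitrary choices): the padded DFT samples lie exactly on the DTFT of the original length-L signal, only on a denser frequency grid.

```python
import numpy as np

L, N = 8, 32
rng = np.random.default_rng(2)      # arbitrary seed
x = rng.standard_normal(L)

X_pad = np.fft.fft(x, n=N)          # pads x with N - L zeros internally

# Compare against the DTFT of the length-L signal at ω_k = 2πk/N.
wk = 2 * np.pi * np.arange(N) / N
dtft = np.array([np.sum(x * np.exp(-1j * w * np.arange(L))) for w in wk])
assert np.allclose(X_pad, dtft)
print("zero-padded DFT = DTFT sampled on a finer grid")
```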
6.2.2 Periodic signals

Assumption: Suppose x[n] is N-periodic, i.e. x[n + N] = x[n].

Inverse DFT: In this case, x[n] can be recovered entirely from its N-point DFT X[k], k = 0, 1, ..., N−1. Let x̃[n] denote the IDFT of X[k], as defined in (6.14). For n = 0, 1, ..., N−1, the IDFT theorem yields x[n] = x̃[n]. Since both x̃[n] and x[n] are known to be N-periodic, it follows that x[n] = x̃[n] must also hold for n < 0 and for n ≥ N. Therefore
	x[n] = x̃[n],  ∀n ∈ Z   (6.22)
Relationship between DFT and DTFT: Since the N-periodic signal x[n] can be recovered completely from its N-point DFT X[k], it should also be possible in theory to reconstruct the DTFT X(ω) from X[k]. The situation is more complicated here because the DTFT of a periodic signal contains infinite impulses. The desired relationship is provided by the following theorem.
Theorem: Let X(ω) and X[k] respectively denote the DTFT and N-point DFT of the N-periodic signal x[n]. Then:

X(ω) = (2π/N) ∑_{k=−∞}^{∞} X[k] δ_a(ω − ω_k)    (6.23)

where δ_a(ω) denotes an analog delta function centered at ω = 0.
Remarks: According to the theorem, one can recover the DTFT X(ω) from the N-point DFT X[k]. X(ω) is made up of a periodic train of infinite impulses in the ω domain (i.e. a line spectrum). The amplitude of the impulse at frequency ω_k = 2πk/N is given by 2πX[k]/N.
Proof of Theorem (optional):

X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}
     = ∑_{n=−∞}^{∞} (1/N) ∑_{k=0}^{N−1} X[k] e^{jω_k n} e^{−jωn}
     = (1/N) ∑_{k=0}^{N−1} X[k] ∑_{n=−∞}^{∞} e^{jω_k n} e^{−jωn}
     = (1/N) ∑_{k=0}^{N−1} X[k] DTFT{e^{jω_k n}}
     = (1/N) ∑_{k=0}^{N−1} X[k] 2π ∑_{r=−∞}^{∞} δ_a(ω − ω_k − 2πr)
     = (2π/N) ∑_{r=−∞}^{∞} ∑_{k=0}^{N−1} X[k] δ_a(ω − (2π/N)(k + rN))    (6.24)

Realizing that X[k] is periodic with period N, and that ∑_{r=−∞}^{∞} ∑_{k=0}^{N−1} f(k + rN) is equivalent to ∑_{k=−∞}^{∞} f(k), we finally have:

X(ω) = (2π/N) ∑_{r=−∞}^{∞} ∑_{k=0}^{N−1} X[k + rN] δ_a(ω − (2π/N)(k + rN))
     = (2π/N) ∑_{k=−∞}^{∞} X[k] δ_a(ω − 2πk/N)
Remarks on the Discrete Fourier series: In the special case when x[n] is N-periodic, i.e. x[n+N] = x[n], the DFT admits a Fourier series interpretation. Indeed, the IDFT provides an expansion of x[n] as a sum of harmonically related complex exponential signals e^{jω_k n}:

x[n] = (1/N) ∑_{k=0}^{N−1} X[k] e^{jω_k n},  n ∈ Z    (6.25)

In some textbooks (e.g. [10]), this expansion is called the discrete Fourier series (DFS). The DFS coefficients X[k] are identical to the DFT:

X[k] = ∑_{n=0}^{N−1} x[n] e^{−jω_k n},  k ∈ Z    (6.26)

In these notes, we treat the DFS as a special case of the DFT, corresponding to the situation when x[n] is N-periodic.
6.3 Signal reconstruction via DTFT sampling
Introduction:

Let X(ω) be the DTFT of signal x[n], n ∈ Z, that is:

X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn},  ω ∈ R.

Consider the sampled values of X(ω) at the uniformly spaced frequencies ω_k = 2πk/N for k = 0, ..., N−1, and suppose we compute the IDFT of the samples X(ω_k):

x̂[n] = IDFT{X(ω_k)} = (1/N) ∑_{k=0}^{N−1} X(ω_k) e^{jω_k n}    (6.27)
What is the relationship between the original signal x[n] and the reconstructed sequence x̂[n]?

Note that x̂[n] is N-periodic, while x[n] may not be. Even for n = 0, ..., N−1, there is no reason for x̂[n] to be equal to x[n]. Indeed, the IDFT theorem does not apply, since in general X(ω_k) ≠ DFT_N{x[n]}.

The answer to the above question turns out to be very important when using the DFT to compute linear convolution.
Theorem:

x̂[n] = IDFT{X(ω_k)} = ∑_{r=−∞}^{∞} x[n − rN]    (6.28)
Proof:

x̂[n] = (1/N) ∑_{k=0}^{N−1} X(ω_k) e^{jω_k n}
     = (1/N) ∑_{k=0}^{N−1} ∑_{l=−∞}^{∞} x[l] e^{−jω_k l} e^{jω_k n}
     = ∑_{l=−∞}^{∞} x[l] { (1/N) ∑_{k=0}^{N−1} e^{j2π(n−l)k/N} }    (6.29)

From (6.11), we note that the bracketed quantity in (6.29) is equal to 1 if n − l = rN, i.e. l = n − rN (r integer), and is equal to 0 otherwise. Therefore

x̂[n] = ∑_{r=−∞}^{∞} x[n − rN]    (6.30)
Interpretation:

x̂[n] is an infinite sum of the sequences x[n − rN], r ∈ Z. Each sequence x[n − rN] is a version of x[n] shifted by an integer multiple of N. Depending on whether or not these shifted sequences overlap, we distinguish two important cases.

• Time-limited signal: Suppose x[n] = 0 for n < 0 and for n ≥ N. Then there is no temporal overlap of the sequences x[n − rN], and we can recover x[n] exactly from one period of x̂[n]:

x[n] = { x̂[n],  n = 0, 1, ..., N−1
       { 0,     otherwise    (6.31)

This is consistent with the results of Section 6.2.1. Figure 6.2 shows an illustration of this reconstruction by IDFT.

• Non-time-limited signal: Suppose that x[n] ≠ 0 for some n < 0 or n ≥ N. Then the sequences x[n − rN] for different values of r will overlap in the time domain. In this case, it is not true that x̂[n] = x[n] for all 0 ≤ n ≤ N−1. We refer to this phenomenon as temporal aliasing, as illustrated in Figure 6.3.
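The temporal aliasing of (6.28) can be checked numerically. The sketch below (using numpy, assumed here for illustration) samples the DTFT of a length-6 signal at N = 4 frequencies and compares the IDFT with the explicit periodization:

```python
import numpy as np

# Sketch of (6.28): the N-point IDFT of DTFT samples X(omega_k) equals the
# periodized signal sum_r x[n - rN]; with len(x) > N this causes time aliasing.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # length 6, not time-limited to N = 4
N = 4

# Sample the DTFT at omega_k = 2*pi*k/N by evaluating its defining sum directly
n = np.arange(len(x))
k = np.arange(N)
Xk = (x[None, :] * np.exp(-2j * np.pi * np.outer(k, n) / N)).sum(axis=1)

xhat = np.fft.ifft(Xk).real                    # IDFT of the DTFT samples

# Explicit periodization: only r = 0 and r = 1 contribute here
expected = np.array([x[0] + x[4], x[1] + x[5], x[2], x[3]])
print(np.allclose(xhat, expected))             # True: temporal aliasing
```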
6.4 Properties of the DFT
Notations:

In this section, we assume that the signals x[n] and y[n] are defined over 0 ≤ n ≤ N−1. Unless explicitly stated, we make no special assumption about the signal values outside this range. We denote the N-point DFTs of x[n] and y[n] by X[k] and Y[k], writing ↔ for an N-point DFT pair:

x[n] ↔ X[k],  y[n] ↔ Y[k]

We view X[k] and Y[k] as N-periodic sequences, defined for all k ∈ Z.
Fig. 6.2 Illustration of reconstruction of a signal from samples of its DTFT. Top left: the original signal x[n] is time-limited. Top right: the original DTFT X(ω), from which 15 samples are taken. Bottom right: the equivalent impulse spectrum corresponds by IDFT to a 15-periodic sequence x̂[n], shown on the bottom left. Since the original sequence is zero outside 0 ≤ n ≤ 14, there is no overlap between replicas of the original signal in time.
Fig. 6.3 Illustration of reconstruction of a signal from samples of its DTFT. Top left: the original signal x[n] is time-limited. Top right: the original DTFT X(ω), from which 6 samples are taken. Bottom right: the equivalent impulse spectrum corresponds by IDFT to a 6-periodic sequence x̂[n], shown on the bottom left. Since the original sequence is not zero outside 0 ≤ n ≤ 5, there is overlap between replicas of the original signal in time, and thus aliasing.
Modulo N operation:

Any integer n ∈ Z can be expressed uniquely as n = k + rN, where k ∈ {0, 1, ..., N−1} and r ∈ Z. We define

(n)_N = n modulo N = k.    (6.32)

For example: (11)_8 = 3, (−1)_6 = 5.

Note that the sequence defined by x[(n)_N] is an N-periodic sequence made of repetitions of the values of x[n] between 0 and N−1, i.e.

x[(n)_N] = ∑_{r=−∞}^{∞} ξ[n − rN]    (6.33)

where ξ[n] = x[n](u[n] − u[n−N]).
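As a minimal sketch (Python's `%` operator happens to match the definition of (n)_N for negative arguments, unlike the `%` of some other languages):

```python
import numpy as np

# The modulo-N index map (n)_N of (6.32); Python's % already returns a value
# in {0, ..., N-1} even for negative n, matching the definition.
def mod_N(n, N):
    return n % N

print(mod_N(11, 8))    # 3, as in the text's example
print(mod_N(-1, 6))    # 5

# x[(n)_N] is the N-periodic extension of x[0..N-1], cf. (6.33)
x = np.array([10.0, 20.0, 30.0])
n = np.arange(-3, 6)
periodized = x[n % 3]  # [10 20 30] repeated
```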
6.4.1 Time reversal and complex conjugation
Circular time reversal: Given a sequence x[n], 0 ≤ n ≤ N−1, we define its circular reversal (CR) as follows:

CR{x[n]} = x[(−n)_N],  0 ≤ n ≤ N−1    (6.34)
Example 6.2:

Let x[n] = 6 − n for n = 0, 1, ..., 5:

  n      0  1  2  3  4  5
  x[n]   6  5  4  3  2  1

Then, for N = 6, we have CR{x[n]} = x[(−n)_6]:

  n            0  1  2  3  4  5
  (−n)_6       0  5  4  3  2  1
  x[(−n)_6]    6  1  2  3  4  5

This is illustrated in Figure 6.4.
Fig. 6.4 Illustration of circular time reversal.
Interpretation: Circular reversal can be seen as an operation on the set of signal samples x[0], ..., x[N−1]:

• x[0] is left unchanged, whereas
• for k = 1 to N−1, samples x[k] and x[N−k] are exchanged.

One can also see the circular reversal as an operation consisting in:

• periodizing the samples of x[n], 0 ≤ n ≤ N−1, with period N;
• time-reversing the periodized sequence;
• keeping only the samples between 0 and N−1.

Figure 6.5 gives a graphical illustration of this process.
Fig. 6.5 Graphical illustration of circular reversal, with N = 15.
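Definition (6.34) translates directly into one line of index arithmetic; the following numpy sketch reproduces Example 6.2:

```python
import numpy as np

# Circular reversal (6.34): CR{x[n]} = x[(-n) mod N].
def circular_reverse(x):
    N = len(x)
    n = np.arange(N)
    return x[(-n) % N]

x = np.array([6, 5, 4, 3, 2, 1])        # the sequence of Example 6.2
print(circular_reverse(x))              # [6 1 2 3 4 5], as in the table
```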
Property:

x[(−n)_N] ↔ X[−k]    (6.35)
x*[n] ↔ X*[−k]    (6.36)
x*[(−n)_N] ↔ X*[k]    (6.37)
Remarks:

• Since X[k] is N-periodic, we note that X[−k] = X[(−k)_N]. For example, if N = 6, then X[−1] = X[5] in (6.35).

• Property (6.36) leads to the following conclusion for real-valued signals:

x[n] = x*[n] ⟺ X[k] = X*[−k]    (6.38)

or equivalently, Re X[k] = Re X[−k] and Im X[k] = −Im X[−k].

• Thus, for real-valued signals:
  - X[0] is real;
  - if N is even, X[N/2] is real;
  - X[N−k] = X*[k] for 1 ≤ k ≤ N−1.

Figure 6.6 illustrates this with a graphical example.
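These symmetry properties are easy to verify numerically; a sketch (the random test signal is arbitrary):

```python
import numpy as np

# For a real-valued x[n], (6.38) gives X[N-k] = X*[k]; a quick numerical check.
rng = np.random.default_rng(0)
N = 10
x = rng.standard_normal(N)              # real-valued test signal
X = np.fft.fft(x)

print(np.isclose(X[0].imag, 0))                      # X[0] is real
print(np.isclose(X[N // 2].imag, 0))                 # N even: X[N/2] is real
k = np.arange(1, N)
print(np.allclose(X[N - k], np.conj(X[k])))          # X[N-k] = X*[k]
```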
6.4.2 Duality

Property:

X[n] ↔ N x[(−k)_N]    (6.39)

Proof: First note that since (−k)_N = −k + rN for some integer r, we have

e^{−j2πkn/N} = e^{j2πn(−k)_N/N}    (6.40)

Using this identity and recalling the definition of the IDFT:

∑_{n=0}^{N−1} X[n] e^{−j2πnk/N} = ∑_{n=0}^{N−1} X[n] e^{j2πn(−k)_N/N}
                               = N × IDFT{X[n]} evaluated at index (−k)_N
                               = N x[(−k)_N]
Fig. 6.6 Illustration of the symmetry properties of the DFT of a real-valued sequence (N = 10).
6.4.3 Linearity

Property:

a x[n] + b y[n] ↔ a X[k] + b Y[k]    (6.41)
6.4.4 Even and odd decomposition

Conjugate symmetric components: The even and odd conjugate symmetric components of a finite sequence x[n], 0 ≤ n ≤ N−1, are defined as:

x_{e,N}[n] = (1/2)(x[n] + x*[(−n)_N])    (6.42)
x_{o,N}[n] = (1/2)(x[n] − x*[(−n)_N])    (6.43)

In the case of an N-periodic sequence, defined for n ∈ Z, the modulo N operation can be omitted. Accordingly, we define

X_e[k] = (1/2)(X[k] + X*[−k])    (6.44)
X_o[k] = (1/2)(X[k] − X*[−k])    (6.45)

Note that, e.g., x_{e,N}[0] is real, whereas x_{e,N}[N−n] = x*_{e,N}[n] for n = 1, ..., N−1.
Property:

Re{x[n]} ↔ X_e[k]    (6.46)
j Im{x[n]} ↔ X_o[k]    (6.47)
x_{e,N}[n] ↔ Re{X[k]}    (6.48)
x_{o,N}[n] ↔ j Im{X[k]}    (6.49)
6.4.5 Circular shift

Definition: Given a sequence x[n] defined over the interval 0 ≤ n ≤ N−1, we define its circular shift by k samples as follows:

CS_k{x[n]} = x[(n−k)_N],  0 ≤ n ≤ N−1    (6.50)
Example 6.3:

Let x[n] = 6 − n for n = 0, 1, ..., 5:

  n      0  1  2  3  4  5
  x[n]   6  5  4  3  2  1

For N = 6, we have CS_k{x[n]} = x[(n−k)_6]. In particular, for k = 2:

  n             0  1  2  3  4  5
  (n−2)_6       4  5  0  1  2  3
  x[(n−2)_6]    2  1  6  5  4  3

This is illustrated in Figure 6.7.
Fig. 6.7 Illustration of circular shift.
Interpretation: Circular shift can be seen as an operation on the set of signal samples x[n], n ∈ {0, ..., N−1}, in which:

• signal samples x[n] are shifted as in a conventional shift;
• any signal sample leaving the interval 0 ≤ n ≤ N−1 from one end re-enters by the other end.

Alternatively, circular shift may be interpreted as follows:

• periodizing the samples of x[n], 0 ≤ n ≤ N−1, with period N;
• delaying the periodized sequence by k samples;
• keeping only the samples between 0 and N−1.

Figure 6.8 gives a graphical illustration of this process. It is clearly seen from this figure that a circular shift by k samples amounts to taking the last k samples in the window of time from 0 to N−1, and placing them at the beginning of the window, shifting the rest of the samples to the right.
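The wrap-around behaviour can be sketched in numpy; `np.roll` implements exactly this kind of circular shift:

```python
import numpy as np

# Circular shift (6.50): CS_k{x[n]} = x[(n-k) mod N].
x = np.array([6, 5, 4, 3, 2, 1])        # the sequence of Example 6.3
N = len(x)
n = np.arange(N)

by_index = x[(n - 2) % N]               # direct use of the definition, k = 2
by_roll = np.roll(x, 2)                 # equivalent built-in wrap-around shift

print(by_index)                         # [2 1 6 5 4 3], as in the table
print(np.array_equal(by_index, by_roll))
```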
Circular Shift Property:

x[(n−m)_N] ↔ e^{−j2πmk/N} X[k]

Proof: To simplify the notation, introduce ω_k = 2πk/N. Then

DFT_N{x[(n−m)_N]} = ∑_{n=0}^{N−1} x[(n−m)_N] e^{−jω_k n}
                  = e^{−jω_k m} ∑_{n=−m}^{N−1−m} x[(n)_N] e^{−jω_k n}
Fig. 6.8 Graphical illustration of circular shift, with N = 15 and k = 2.
Observe that the expression to the right of ∑_{n=−m}^{N−1−m} is N-periodic in the variable n. Thus, we may replace that summation by ∑_{n=0}^{N−1} without affecting the result (in both cases, we sum over one complete period):

DFT_N{x[(n−m)_N]} = e^{−jω_k m} ∑_{n=0}^{N−1} x[(n)_N] e^{−jω_k n}
                  = e^{−jω_k m} ∑_{n=0}^{N−1} x[n] e^{−jω_k n}
                  = e^{−jω_k m} X[k]
Frequency Shift Property:

e^{j2πmn/N} x[n] ↔ X[k − m]

Remark: Since the DFT X[k] is already periodic, the modulo N operation is not needed here, that is: X[(k−m)_N] = X[k−m].
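The circular shift property lends itself to a direct numerical check; a sketch (the random test signal is arbitrary):

```python
import numpy as np

# Check of the circular shift property:
# DFT{x[(n-m)_N]} = exp(-j*2*pi*m*k/N) * X[k].
rng = np.random.default_rng(1)
N, m = 8, 3
x = rng.standard_normal(N)
k = np.arange(N)

lhs = np.fft.fft(np.roll(x, m))                      # DFT of circularly shifted x
rhs = np.exp(-2j * np.pi * m * k / N) * np.fft.fft(x)
print(np.allclose(lhs, rhs))                         # True
```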
6.4.6 Circular convolution

Definition: Let x[n] and y[n] be two sequences defined over the interval 0 ≤ n ≤ N−1. The N-point circular convolution of x[n] and y[n] is given by

x[n] ⊛ y[n] = ∑_{m=0}^{N−1} x[m] y[(n−m)_N],  0 ≤ n ≤ N−1    (6.51)

Remarks: This is conceptually similar to the linear convolution of Chapter 2 (i.e. ∑_{m=−∞}^{∞} x[m] y[n−m]), except for two important differences:

• the sum is finite, running from 0 to N−1;
• a circular shift is used instead of a linear shift.

The use of (n−m)_N ensures that the argument of y[·] remains in the range 0, 1, ..., N−1.
Circular Convolution Property:

x[n] ⊛ y[n] ↔ X[k] Y[k]    (6.52)

Multiplication Property:

x[n] y[n] ↔ (1/N) X[k] ⊛ Y[k]    (6.53)
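Definition (6.51) and property (6.52) can be sketched together in numpy: compute the circular convolution directly from the sum, then via the product of DFTs.

```python
import numpy as np

# Circular convolution (6.51) computed directly from the definition, and the
# property (6.52): its N-point DFT equals the product of the individual DFTs.
def circular_convolve(x, y):
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * y[(i - n) % N]) for i in range(N)])

rng = np.random.default_rng(2)
N = 6
x, y = rng.standard_normal(N), rng.standard_normal(N)

direct = circular_convolve(x, y)
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

print(np.allclose(direct, via_dft))    # True
```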
6.4.7 Other properties

Plancherel's relation:

∑_{n=0}^{N−1} x[n] y*[n] = (1/N) ∑_{k=0}^{N−1} X[k] Y*[k]    (6.54)

Parseval's relation:

∑_{n=0}^{N−1} |x[n]|² = (1/N) ∑_{k=0}^{N−1} |X[k]|²    (6.55)
Remarks:

• Define the signal vectors

x = [x[0], ..., x[N−1]],  y = [y[0], ..., y[N−1]]

Then the LHS summation in (6.54) may be interpreted as the inner product between the signal vectors x and y in the vector space C^N. Similarly, up to the scaling factor 1/N, the RHS of (6.54) may be interpreted as the inner product between the DFT vectors

X = [X[0], ..., X[N−1]],  Y = [Y[0], ..., Y[N−1]]

Identity (6.54) is therefore equivalent to the statement that the DFT operation preserves the inner product between time-domain vectors.

• Eq. (6.55) is a special case of (6.54) with y[n] = x[n]. It allows the computation of the energy of the signal samples x[n] (n = 0, ..., N−1) directly from the DFT samples X[k].
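Parseval's relation (6.55) is easy to verify numerically; a sketch with an arbitrary complex test signal:

```python
import numpy as np

# Parseval's relation (6.55): signal energy equals (1/N) times the DFT energy.
rng = np.random.default_rng(3)
N = 16
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / N
print(np.allclose(energy_time, energy_freq))   # True
```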
6.5 Relation between linear and circular convolutions
Introduction:

The circular convolution admits a fast (i.e. very low complexity) realization via the so-called fast Fourier transform (FFT) algorithms. If the linear and circular convolutions were equivalent, it would be possible to evaluate the linear convolution efficiently as well via the FFT.

The problem, then, is the following: under what conditions are the linear and circular convolutions equivalent?

In this section, we investigate the general relationship between the linear and circular convolutions. In particular, we show that both types of convolution are equivalent if

• the signals of interest have finite length, and
• the DFT size is sufficiently large.
Linear convolution:

Time-domain expression:

y_l[n] = x_1[n] ∗ x_2[n] = ∑_{k=−∞}^{∞} x_1[k] x_2[n−k],  n ∈ Z    (6.56)

Frequency-domain representation via DTFT:

Y_l(ω) = X_1(ω) X_2(ω),  ω ∈ [0, 2π]    (6.57)
Circular convolution:

Time-domain expression:

y_c[n] = x_1[n] ⊛ x_2[n] = ∑_{k=0}^{N−1} x_1[k] x_2[(n−k)_N],  0 ≤ n ≤ N−1    (6.58)

Frequency-domain representation via N-point DFT:

Y_c[k] = X_1[k] X_2[k],  k ∈ {0, 1, ..., N−1}    (6.59)
Observation:

We say that the circular convolution (6.58) and the linear convolution (6.56) are equivalent if

y_l[n] = { y_c[n],  0 ≤ n < N
         { 0,       otherwise    (6.60)

Clearly, this can only be true in general if the signals x_1[n] and x_2[n] both have finite length.
Finite length assumption:

We say that x[n] is time-limited (TL) to 0 ≤ n < N iff x[n] = 0 for n < 0 and for n ≥ N.

Suppose that x_1[n] and x_2[n] are TL to 0 ≤ n < N_1 and 0 ≤ n < N_2, respectively. Then the linear convolution y_l[n] in (6.56) is TL to 0 ≤ n < N_3, where N_3 = N_1 + N_2 − 1.
Example 6.4:

Consider the TL signals

x_1[n] = {1, 1, 1, 1},  x_2[n] = {1, 1/2, 1/2}

where N_1 = 4 and N_2 = 3. A simple calculation yields

y_l[n] = {1, 1.5, 2, 2, 1, 0.5}

with N_3 = N_1 + N_2 − 1 = 6. This is illustrated in Figure 6.9.
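The linear convolution of this example is reproduced by the following numpy sketch:

```python
import numpy as np

# The linear convolution of Example 6.4; its length is N1 + N2 - 1 = 6.
x1 = np.array([1.0, 1.0, 1.0, 1.0])     # length 4
x2 = np.array([1.0, 0.5, 0.5])          # length 3

yl = np.convolve(x1, x2)
print(yl)                               # [1.  1.5 2.  2.  1.  0.5]
```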
Condition for equivalence:

Suppose that x_1[n] and x_2[n] are TL to 0 ≤ n < N_1 and 0 ≤ n < N_2, respectively.

Based on our previous discussion, a necessary condition for equivalence of y_l[n] and y_c[n] is that N ≥ N_1 + N_2 − 1. We show below that this is also a sufficient condition.
Fig. 6.9 Illustration of the sequences used in Example 6.4. Top: x_1[n] and x_2[n]; bottom: y_l[n], their linear convolution.
Linear convolution:

y_l[n] = ∑_{k=−∞}^{∞} x_1[k] x_2[n−k]
       = ∑_{k=0}^{n} x_1[k] x_2[n−k],  0 ≤ n ≤ N−1    (6.61)

Circular convolution:

y_c[n] = ∑_{k=0}^{N−1} x_1[k] x_2[(n−k)_N]
       = ∑_{k=0}^{n} x_1[k] x_2[n−k] + ∑_{k=n+1}^{N−1} x_1[k] x_2[N+n−k]    (6.62)

If N ≥ N_1 + N_2 − 1, it can be verified that the product x_1[k] x_2[N+n−k] in the second summation on the RHS of (6.62) is always zero. Under this condition, it thus follows that y_l[n] = y_c[n] for 0 ≤ n ≤ N−1.

Conclusion: the linear and circular convolutions are equivalent iff

N ≥ N_1 + N_2 − 1    (6.63)
Example 6.5:

Consider the TL signals

x_1[n] = {1, 1, 1, 1},  x_2[n] = {1, 1/2, 1/2}

where N_1 = 4 and N_2 = 3. First consider the circular convolution y_c = x_1 ⊛ x_2 with N = N_1 + N_2 − 1 = 6, as illustrated in Figure 6.10. The result of this computation is

y_c[n] = {1, 1.5, 2, 2, 1, 0.5}

which is identical to the result of the linear convolution in Example 6.4.

Now consider the circular convolution y_c = x_1 ⊛ x_2 with N = 4, as illustrated in Figure 6.11. The result of this computation is

y_c[n] = {2, 2, 2, 2}
Remarks:

When N ≥ N_1 + N_2 − 1, the circular time reversal (CTR) operation has no effect on the convolution. On the contrary, when N < N_1 + N_2 − 1, the CTR will affect the result of the convolution, due to overlap between x_1[k] and x_2[N+n−k].
Fig. 6.10 Illustration of the sequences used in Example 6.5. Top: x_1[n] and x_2[n]; bottom: y_c[n], their circular convolution with N = 6.
Fig. 6.11 Illustration of the sequences used in Example 6.5. Top: x_1[n] and x_2[n]; bottom: y_c[n], their circular convolution with N = 4.
Relationship between y_c[n] and y_l[n]:

Assume that N ≥ max{N_1, N_2}. Then the DFT of each of the two sequences x_1[n] and x_2[n] consists of samples of the corresponding DTFT:

N ≥ N_1 ⟹ X_1[k] = X_1(ω_k),  ω_k = 2πk/N
N ≥ N_2 ⟹ X_2[k] = X_2(ω_k)

The DFT of the circular convolution is just the product of the DFTs:

Y_c[k] = X_1[k] X_2[k] = X_1(ω_k) X_2(ω_k) = Y_l(ω_k)    (6.64)

This shows that Y_c[k] is also made of uniformly spaced samples of Y_l(ω), the DTFT of the linear convolution y_l[n]. Thus, the circular convolution y_c[n] can be computed as the N-point IDFT of these frequency samples:

y_c[n] = IDFT_N{Y_c[k]} = IDFT_N{Y_l(ω_k)}

Making use of (6.28), we have

y_c[n] = ∑_{r=−∞}^{∞} y_l[n − rN],  0 ≤ n ≤ N−1,    (6.65)

so that the circular convolution is obtained by

• an N-periodic repetition of the linear convolution y_l[n], and
• a windowing to limit the nonzero samples to the instants 0 to N−1.

To get y_c[n] = y_l[n] for 0 ≤ n ≤ N−1, we must avoid temporal aliasing. We need: length of DFT ≥ length of y_l[n], i.e.

N ≥ N_3 = N_1 + N_2 − 1    (6.66)

which is consistent with our earlier development. The following examples illustrate the above concepts:
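The periodization in (6.65) can be checked numerically; the numpy sketch below reuses the sequences of Examples 6.4 and 6.5 with a too-short DFT size N = 4:

```python
import numpy as np

# Sketch of (6.65): the N-point circular convolution equals the linear
# convolution periodized with period N (time aliasing when N < N1 + N2 - 1).
x1 = np.array([1.0, 1.0, 1.0, 1.0])
x2 = np.array([1.0, 0.5, 0.5])
yl = np.convolve(x1, x2)                       # length 6: [1 1.5 2 2 1 .5]

N = 4                                          # too short: aliasing expected
yc = np.fft.ifft(np.fft.fft(x1, N) * np.fft.fft(x2, N)).real

# Periodize yl with period N and keep one period (only r = 0, 1 contribute)
aliased = yl[:N] + np.concatenate([yl[N:], np.zeros(2 * N - len(yl))])
print(yc)                                      # [2. 2. 2. 2.]
print(np.allclose(yc, aliased))                # True
```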
Example 6.6:

The application of (6.65) for N < N_1 + N_2 − 1 is illustrated in Figure 6.12. The result of a linear convolution y_l[n] is shown (top) together with two versions shifted by +N and −N samples, respectively. These signals are added together to yield the circular convolution y_c[n] (bottom) according to (6.65). In this case, N = 11 and N_3 = N_1 + N_2 − 1 = 15, so that there is time aliasing. Note that out of the N = 11 samples in the circular convolution y_c[n], i.e. within the interval 0 ≤ n ≤ 10, only the first N_3 − N samples are affected by aliasing (as shown by the white dots), whereas the other samples are common to y_l[n] and y_c[n].
Example 6.7:

To convince ourselves of the validity of (6.65), consider again the sequences x_1[n] = {1, 1, 1, 1} and x_2[n] = {1, 0.5, 0.5}, where N_1 = 4 and N_2 = 3. From Example 6.4, we know that

y_l[n] = {1, 1.5, 2, 2, 1, 0.5}
Fig. 6.12 Circular convolution seen as linear convolution plus time aliasing.
while from Example 6.5, the circular convolution for N = 4 yields

y_c[n] = {2, 2, 2, 2}

The application of (6.65) is illustrated in Figure 6.13. It can be seen that the sample values of the circular convolution y_c[n], as computed from (6.65), are identical to those obtained in Example 6.5.

Fig. 6.13 Illustration of the circular convolution as a time-aliased version of the linear convolution (see Example 6.7).
6.5.1 Linear convolution via DFT

We can summarize the way one can compute a linear convolution via the DFT by the following steps:

• Suppose that x_1[n] is TL to 0 ≤ n < N_1 and x_2[n] is TL to 0 ≤ n < N_2.
• Select a DFT size N ≥ N_1 + N_2 − 1 (usually, N = 2^K).
• For i = 1, 2, compute

X_i[k] = DFT_N{x_i[n]},  0 ≤ k ≤ N−1    (6.67)

• Compute:

x_1[n] ∗ x_2[n] = { IDFT_N{X_1[k] X_2[k]},  0 ≤ n ≤ N−1
                  { 0,                      otherwise    (6.68)

Remarks: In practice, the DFT is computed via an FFT algorithm (more on this later). For large N, say N ≥ 30, the computation of a linear convolution via the FFT is more efficient than direct time-domain realization. The accompanying disadvantage is that it involves a processing delay, because the data must be buffered.
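The steps above can be sketched in a few lines of numpy (a minimal sketch; production code would typically use a power-of-two N and block processing):

```python
import numpy as np

# Linear convolution via the DFT, following Section 6.5.1: zero-pad to
# N >= N1 + N2 - 1, multiply the DFTs, and take the inverse DFT.
def fft_convolve(x1, x2):
    N = len(x1) + len(x2) - 1                 # minimum safe DFT size
    X1 = np.fft.fft(x1, N)                    # fft(..., N) zero-pads to length N
    X2 = np.fft.fft(x2, N)
    return np.fft.ifft(X1 * X2).real          # real inputs -> real output

x1 = np.array([1.0, 1.0, 1.0, 1.0])
x2 = np.array([1.0, 0.5, 0.5])
print(fft_convolve(x1, x2))                   # [1.  1.5 2.  2.  1.  0.5]
print(np.allclose(fft_convolve(x1, x2), np.convolve(x1, x2)))
```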
Example 6.8:

Consider the sequences

x_1[n] = {1, 1, 1, 1},  x_2[n] = {1, 0.5, 0.5}

where N_1 = 4 and N_2 = 3. The DFT size must satisfy N ≥ N_1 + N_2 − 1 = 6 for the circular and linear convolutions to be equivalent.

Let us first consider the case N = 6. The 6-point DFTs of the zero-padded sequences x_1[n] and x_2[n] are:

X_1[k] = DFT_6{1, 1, 1, 1, 0, 0} = {4, −1.7321j, 1, 0, 1, 1.7321j}
X_2[k] = DFT_6{1, 0.5, 0.5, 0, 0, 0} = {2, 1−0.866j, 0.5, 1, 0.5, 1+0.866j}

The product of the DFTs yields

Y_c[k] = {8, −1.5−1.7321j, 0.5, 0, 0.5, −1.5+1.7321j}

Taking the 6-point IDFT, we obtain:

y_c[n] = IDFT_6{Y_c[k]} = {1, 1.5, 2, 2, 1, 0.5}

which is identical to y_l[n] previously computed in Example 6.4. The student is invited to try this approach in the case N = 4.
6.6 Problems

Problem 6.1: DFT of a periodic sequence

Let x[n] be a periodic sequence with period N. Let X[k] denote its N-point DFT, and let Y[k] represent its 2N-point DFT. Find the relationship between X[k] and Y[k].

Problem 6.2: DFT of a complex exponential

Find the N-point DFT of x[n] = e^{j2πpn/N}.
Chapter 7
Digital processing of analog signals
As a motivation for discrete-time signal processing, we have stated in the introduction of this course that DTSP is a convenient and powerful approach for the processing of real-world analog signals. In this chapter, we will further study the theoretical and practical conditions that enable digital processing of analog signals.
Structural overview:
x_a(t) → [A/D] → x_d[n] → [Digital system] → y_d[n] → [D/A] → y_a(t)

Fig. 7.1 The three main stages of digital processing of analog signals.
Digital processing of analog signals consists of three main components, as illustrated in Figure 7.1:

• The A/D converter transforms the analog input signal x_a(t) into a digital signal x_d[n]. This is done in two steps:
  - uniform sampling;
  - quantization.

• The digital system performs the desired operations on the digital signal x_d[n] and produces a corresponding output y_d[n], also in digital form.

• The D/A converter transforms the digital output y_d[n] into an analog signal y_a(t) suitable for interfacing with the outside world.
In the rest of this chapter, we will study the details of each of these building blocks.
7.1 Uniform sampling and the sampling theorem

Uniform (or periodic) sampling refers to the process in which an analog, or continuous-time (CT), signal x_a(t), t ∈ R, is mapped into a discrete-time signal x[n], n ∈ Z, according to the following rule:

x_a(t) → x[n] = x_a(nT_s),  n ∈ Z    (7.1)

where T_s is called the sampling period. Figure 7.2 gives a graphical illustration of the process.
Fig. 7.2 Illustration of uniform sampling. Top: the analog signal, where the sampling points t = nT_s, n ∈ Z, are shown by arrows; bottom: the corresponding discrete-time sequence. Notice the difference in scale between the x-axes of the two plots.
Based on the sampling period T_s, the sampling frequency can be defined as

F_s = 1/T_s,  or    (7.2)
Ω_s = 2πF_s = 2π/T_s.    (7.3)

We shall also refer to uniform sampling as ideal analog-to-discrete-time (A/DT) conversion. We say ideal because the amplitude of x[n] is not quantized, i.e. it may take any possible value within the set R.
Ideal A/DT conversion is represented in block diagram form in Figure 7.3.
In practice, uniform sampling is implemented with analog-to-digital (A/D) converters. These are non-ideal devices: only a finite set of possible values is allowed for the signal amplitudes x[n] (see Section 7.3).

It should be clear from the above presentation that, for an arbitrary continuous-time signal x_a(t), information will be lost through the uniform sampling process x_a(t) → x[n] = x_a(nT_s). This is illustrated by the following example.
Fig. 7.3 Block diagram representation of ideal A/DT conversion.
Example 7.1: Information Loss in Sampling

Consider the analog signals

x_{a1}(t) = cos(2πF_1 t),  F_1 = 100 Hz
x_{a2}(t) = cos(2πF_2 t),  F_2 = 300 Hz

Uniform sampling at the rate F_s = 200 Hz yields

x_1[n] = x_{a1}(nT_s) = cos(2πF_1 n/F_s) = cos(πn)
x_2[n] = x_{a2}(nT_s) = cos(2πF_2 n/F_s) = cos(3πn)

Clearly,

x_1[n] = x_2[n]

This shows that uniform sampling is a non-invertible, many-to-one mapping.
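Example 7.1 can be reproduced numerically; the sketch below samples both cosines at F_s = 200 Hz and shows the sequences are identical:

```python
import numpy as np

# Example 7.1: 100 Hz and 300 Hz cosines sampled at Fs = 200 Hz yield
# identical sequences, so sampling is many-to-one here.
Fs = 200.0
n = np.arange(20)
x1 = np.cos(2 * np.pi * 100.0 * n / Fs)   # cos(pi n)
x2 = np.cos(2 * np.pi * 300.0 * n / Fs)   # cos(3 pi n) = cos(pi n)

print(np.allclose(x1, x2))                # True
```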
This example shows that it is not possible in general to recover the original analog signal x_a(t) for all t ∈ R from the set of samples x[n], n ∈ Z.

However, if we further assume that x_a(t) is band-limited and select F_s to be sufficiently large, it is possible to recover x_a(t) from its samples x[n]. This is the essence of the sampling theorem.
7.1.1 Frequency domain representation of uniform sampling

In this section, we first investigate the relationship between the Fourier transform (FT) of the analog signal x_a(t) and the DTFT of the discrete-time signal x[n] = x_a(nT_s). The sampling theorem will follow naturally from this relationship.
FT of analog signals: For an analog signal x_a(t), the Fourier transform, denoted here as X_a(Ω), Ω ∈ R, is defined by

X_a(Ω) = ∫_{−∞}^{∞} x_a(t) e^{−jΩt} dt    (7.4)

The corresponding inverse Fourier transform relationship is given by:

x_a(t) = (1/2π) ∫_{−∞}^{∞} X_a(Ω) e^{jΩt} dΩ    (7.5)

We assume that the student is familiar with the FT and its properties (if not, Signals and Systems should be reviewed).
Fig. 7.4 Uniform sampling model.
A sampling model: It is convenient to represent uniform sampling as shown in Figure 7.4. In this model:

• s(t) is an impulse train, i.e.:

s(t) = ∑_{n=−∞}^{∞} δ_a(t − nT_s)    (7.6)

• x_s(t) is a pulse-amplitude modulated (PAM) waveform:

x_s(t) = x_a(t) s(t)    (7.7)
       = ∑_{n=−∞}^{∞} x_a(nT_s) δ_a(t − nT_s)    (7.8)
       = ∑_{n=−∞}^{∞} x[n] δ_a(t − nT_s)    (7.9)

• The sample value x[n] is encoded in the amplitude of the pulse δ_a(t − nT_s). The information contents of x_s(t) and x[n] are identical, i.e. one can be reconstructed from the other and vice versa.
FT/DTFT relationship: Let X(ω) denote the DTFT of the samples x[n] = x_a(nT_s), that is, X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}. The following equalities hold:

X(ω)|_{ω=ΩT_s} = X_s(Ω) = (1/T_s) ∑_{k=−∞}^{∞} X_a(Ω − kΩ_s)    (7.10)
Proof: The first equality is proved as follows:

X_s(Ω) = ∫_{−∞}^{∞} x_s(t) e^{−jΩt} dt
       = ∫_{−∞}^{∞} { ∑_{n=−∞}^{∞} x[n] δ_a(t − nT_s) } e^{−jΩt} dt
       = ∑_{n=−∞}^{∞} x[n] ∫_{−∞}^{∞} δ_a(t − nT_s) e^{−jΩt} dt
       = ∑_{n=−∞}^{∞} x[n] e^{−jΩnT_s} = X(ω) at ω = ΩT_s    (7.11)
Now consider the second equality. From ECSE 304, we recall that

S(Ω) = FT{s(t)} = FT{ ∑_{n=−∞}^{∞} δ_a(t − nT_s) } = (2π/T_s) ∑_{k=−∞}^{∞} δ_a(Ω − kΩ_s)    (7.12)

Finally, since x_s(t) = x_a(t) s(t), it follows from the multiplicative property of the FT that

X_s(Ω) = (1/2π) X_a(Ω) ∗ S(Ω)
       = (1/2π) ∫_{−∞}^{∞} X_a(Ω − Ω′) (2π/T_s) ∑_{k=−∞}^{∞} δ_a(Ω′ − kΩ_s) dΩ′
       = (1/T_s) ∑_{k=−∞}^{∞} X_a(Ω − kΩ_s)    (7.13)
Remarks: The LHS of (7.10) is the DTFT of x[n] evaluated at

ω = ΩT_s = Ω/F_s = 2πF/F_s    (7.14)

We shall refer to ω = Ω/F_s as the normalized frequency. The middle term of (7.10) is the FT of the amplitude-modulated pulse train x_s(t) = ∑_n x[n] δ_a(t − nT_s). The RHS represents an infinite sum of copies of X_a(Ω), scaled by 1/T_s and shifted by integer multiples of Ω_s.
Interpretation: Suppose x_a(t) is a band-limited (BL) signal, i.e. X_a(Ω) = 0 for |Ω| ≥ Ω_N, where Ω_N represents a maximum frequency.

• Case 1: suppose Ω_s ≥ 2Ω_N (see Figure 7.5). For different values of k, the spectral images X_a(Ω − kΩ_s) do not overlap. Consequently, X_a(Ω) can be recovered from X(ω):

X_a(Ω) = { T_s X(ΩT_s),  |Ω| ≤ Ω_s/2
         { 0,            otherwise    (7.15)

• Case 2: suppose Ω_s < 2Ω_N (see Figure 7.6). In this case, the different images X_a(Ω − kΩ_s) do overlap. As a result, it is not possible to recover X_a(Ω) from X(ω).

The critical frequency 2Ω_N (i.e. twice the maximum frequency) is called the Nyquist rate for uniform sampling. In Case 2, i.e. Ω_s < 2Ω_N, the distortion resulting from spectral overlap is called aliasing. In Case 1, i.e. Ω_s ≥ 2Ω_N, the fact that X_a(Ω) can be recovered from X(ω) actually implies that x_a(t) can be recovered from x[n]: this will be stated in the sampling theorem.
Fig. 7.5 Illustration of sampling in the frequency domain, when Ω_s > 2Ω_N.
7.1.2 The sampling theorem
The following theorem is one of the cornerstones of discretetime signal processing.
Sampling Theorem:
Suppose that x_a(t) is band-limited to |Ω| < Ω_N. Provided the sampling frequency satisfies

    Ω_s = 2π/T_s ≥ 2Ω_N    (7.16)

then x_a(t) can be recovered from its samples x[n] = x_a(nT_s) via

    x_a(t) = ∑_{n=−∞}^{∞} x[n] h_r(t − nT_s),   any t ∈ R    (7.17)

where

    h_r(t) ≜ sin(πt/T_s)/(πt/T_s) = sinc(t/T_s)    (7.18)
Proof: From our interpretation of the FT/DTFT relationship (7.10), we have seen under Case 1 that when Ω_s ≥ 2Ω_N, it is possible to recover X_a(Ω) from the DTFT X(ω), or equivalently X_s(Ω), by means of (7.15). This operation simply amounts to keeping the lowpass portion of X_s(Ω).
[Figure] Fig. 7.6 Illustration of sampling in the frequency domain, when Ω_s < 2Ω_N. (Panels, top to bottom: X_c(Ω); S(Ω); X_s(Ω) versus Ω; X(ω) versus ω. Here T_s = 0.026429 and Ω_s = 237.7421.)
This can be done with an ideal continuous-time lowpass filter with gain T_s, defined as (see dashed line in Figure 7.5):

    H_r(Ω) = { T_s   |Ω| ≤ Ω_s/2
             { 0     |Ω| > Ω_s/2.

The corresponding impulse response is found by taking the inverse continuous-time Fourier transform of H_r(Ω):

    h_r(t) = (1/2π) ∫_{−∞}^{∞} H_r(Ω) e^{jΩt} dΩ = (1/2π) ∫_{−Ω_s/2}^{Ω_s/2} T_s e^{jΩt} dΩ
           = (T_s/2π) (e^{jΩ_s t/2} − e^{−jΩ_s t/2}) / (jt)
           = sin(πt/T_s) / (πt/T_s).
Passing x_s(t) through this impulse response h_r(t) yields the desired signal x_a(t), as the continuous-time convolution:
    x_a(t) = x_s(t) ∗ h_r(t)
           = ∫_{−∞}^{∞} [ ∑_{n=−∞}^{∞} δ_a(τ − nT_s) x_a(nT_s) ] · sin(π(t − τ)/T_s) / (π(t − τ)/T_s) dτ
           = ∑_{n=−∞}^{∞} x_a(nT_s) ∫_{−∞}^{∞} δ_a(τ − nT_s) · sin(π(t − τ)/T_s) / (π(t − τ)/T_s) dτ
           = ∑_{n=−∞}^{∞} x_a(nT_s) · sin(π(t − nT_s)/T_s) / (π(t − nT_s)/T_s),

the last equality coming from the sifting property of the impulse:

    ∫_{−∞}^{∞} δ_a(t − t_0) f(t) dt = f(t_0).
Interpretation:
The function h_r(t) in (7.18) is called the reconstruction filter. The general shape of h_r(t) is shown in Figure 7.7. In particular, note that

    h_r(mT_s) = δ[m] = { 1   m = 0
                       { 0   otherwise    (7.19)

Thus, in the special case t = mT_s, (7.17) yields

    x_a(mT_s) = ∑_{n=−∞}^{∞} x[n] h_r((m − n)T_s) = x[m]    (7.20)

which is consistent with the definition of the samples x[m]. In the case t ≠ mT_s, (7.17) provides an interpolation formula:

    x_a(t) = ∑ scaled and shifted copies of sinc(t/T_s)    (7.21)

Equation (7.17) is often called the ideal band-limited signal reconstruction formula. Equivalently, it may be viewed as an ideal form of discrete-to-continuous (D/C) conversion. The reconstruction process is illustrated on the right of Figure 7.7.
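The reconstruction formula (7.17) can be checked numerically by truncating the infinite sum. A minimal sketch under assumed values (a 50 Hz cosine sampled at 400 Hz, well above its Nyquist rate):

```python
import numpy as np

Fs, Ts = 400.0, 1.0 / 400.0
n = np.arange(-2000, 2000)                 # truncation of the infinite sum
x = np.cos(2 * np.pi * 50.0 * n * Ts)      # samples x[n] = x_a(n*Ts)

def reconstruct(t):
    # np.sinc(u) = sin(pi*u)/(pi*u), which matches h_r(t) = sinc(t/Ts) in (7.18)
    return np.sum(x * np.sinc((t - n * Ts) / Ts))

# Evaluate the interpolation at an off-grid instant t = 2.5*Ts and compare
# with the true band-limited signal; the residual comes from truncation.
t = 2.5 * Ts
err = abs(reconstruct(t) - np.cos(2 * np.pi * 50.0 * t))
assert err < 1e-2
```

At on-grid instants t = mT_s the sum collapses to x[m] exactly, as (7.20) predicts.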
7.2 Discrete-time processing of continuous-time signals
Now that we have seen under which conditions an analog signal can be converted to discrete-time without loss of information, we will develop in this section the conditions for the discrete-time implementation of a system to be equivalent to its continuous-time implementation.
Figure 7.8 shows the structure that we will consider for discrete-time processing of analog signals x_a(t). This is an ideal form of processing in which the amplitudes of the DT signals x[n] and y[n] are not constrained. In a practical digital signal processing system, these amplitudes would be quantized (see Section 7.3).
The purpose of this section is to answer the following question: under what conditions is the DT structure of Figure 7.8 equivalent to an analog LTI system?
[Figure] Fig. 7.7 Band-limited interpolation. Left: the general shape of the reconstruction filter h_r(t) in time (T_s = 1 unit). Right: the actual interpolation by superposition of scaled and shifted versions of h_r(t).
[Figure] Fig. 7.8 Setting for discrete-time processing of continuous-time signals: x_a(t) → A/DT (T_s) → x[n] → DT system H(ω) → y[n] → DT/A (T_s) → y_a(t).
7.2.1 Study of input-output relationship
Mathematical characterization: The three elements of Figure 7.8 can be characterized by the following relationships:
• Ideal C/D converter:

    X(ω)|_{ω=ΩT_s} = (1/T_s) ∑_{k=−∞}^{∞} X_a(Ω − kΩ_s)    (7.22)

where Ω_s = 2π/T_s.
• Discrete-time LTI filter:

    Y(ω) = H(ω) X(ω)    (7.23)

where H(ω) is an arbitrary DT system function.
• Ideal D/C converter (see proof of sampling theorem):

    Y_a(Ω) = H_r(Ω) Y(ΩT_s)    (7.24)

where H_r(Ω) is the frequency response of an ideal continuous-time lowpass filter with cutoff frequency Ω_s/2, i.e.

    H_r(Ω) = { T_s   |Ω| < Ω_s/2
             { 0     otherwise    (7.25)
Overall input-output relationship: Combining (7.22)–(7.24) into a single equation, we obtain

    Y_a(Ω) = H_r(Ω) H(ΩT_s) (1/T_s) ∑_{k=−∞}^{∞} X_a(Ω − kΩ_s).    (7.26)
Recall that for an analog LTI system, the Fourier transforms of the input x_a(t) and corresponding output y_a(t) are related through

    Y_a(Ω) = G(Ω) X_a(Ω).    (7.27)

In general, (7.26) does not reduce to that form because of the spectral images X_a(Ω − kΩ_s) for k ≠ 0. However, if we further assume that x_a(t) is band-limited to |Ω| < Ω_N and that Ω_s ≥ 2Ω_N, then the products

    H_r(Ω) X_a(Ω − kΩ_s) = 0,   for all k ≠ 0.    (7.28)
In this case, (7.26) reduces to

    Y_a(Ω) = H(ΩT_s) X_a(Ω)    (7.29)

which corresponds to an LTI analog system.
Theorem 1 (DT processing of analog signals) Provided that the analog signal x_a(t) is band-limited to |Ω| < Ω_N and that Ω_s ≥ 2Ω_N, discrete-time processing of x[n] = x_a(nT_s) with H(ω) is equivalent to analog processing of x_a(t) with

    H_a(Ω) = { H(ΩT_s)   if |Ω| < Ω_s/2
             { 0          otherwise    (7.30)
Remarks: Note that H_a(Ω) is set to zero for |Ω| ≥ Ω_s/2 since the FT of the input is zero over that range, i.e. X_a(Ω) = 0 for |Ω| ≥ Ω_s/2. The theorem states that the two systems shown in Figure 7.9 are equivalent under the given conditions.
In practice, many factors limit the applicability of this result:
- the input signal x_a(t) is not perfectly band-limited;
- non-ideal C/D conversion;
- non-ideal D/C conversion.
These issues are given further consideration in the following sections.
Example 7.2:
Consider a continuous-time speech signal limited to 8000 Hz. If it is desired to send this signal over a telephone line, it has to be band-limited between 300 Hz and 3400 Hz. The bandpass filter to be used is then specified as:

    H_a(Ω) = { 1   600π ≤ |Ω| ≤ 6800π
             { 0   otherwise
[Figure] Fig. 7.9 Equivalence between analog and discrete-time processing: the chain x_a(t) → A/DT (T_s) → x[n] → DT LTI system H(ω) → y[n] → DT/A (T_s) → y_a(t) is equivalent to the analog LTI system x_a(t) → H_a(Ω) → y_a(t).
If this filter is to be implemented in discrete-time without loss of information, sampling must take place above the Nyquist rate, i.e. with F_s ≥ 16 kHz, or Ω_s ≥ 32000π. If Ω_s = 32000π is chosen, the discrete-time filter which will implement the same bandpass filtering as H_a(Ω) is given by:

    H(ω) = { 1   600π/16000 ≤ |ω| ≤ 6800π/16000
           { 0   otherwise in [−π, π],

the specifications outside [−π, π] following by periodicity of H(ω).
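The frequency mapping used in this example is simply ω = ΩT_s, as in (7.14). A quick numerical check of the band edges, using the values from the example:

```python
import math

Fs = 16000.0                 # sampling rate chosen at the Nyquist rate (Hz)
Ts = 1.0 / Fs
Omega_low, Omega_high = 600 * math.pi, 6800 * math.pi   # rad/s (300 Hz and 3400 Hz)

omega_low = Omega_low * Ts   # = 600*pi/16000 = 0.0375*pi rad/sample
omega_high = Omega_high * Ts # = 6800*pi/16000 = 0.425*pi rad/sample

assert abs(omega_low - 0.0375 * math.pi) < 1e-12
assert abs(omega_high - 0.425 * math.pi) < 1e-12
```

Both DT band edges land comfortably inside [−π, π], confirming that no aliasing occurs at this sampling rate.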
7.2.2 Anti-aliasing filter (AAF)
Principle of AAF:
The AAF is an analog filter, used prior to C/D conversion, to remove the high-frequency content of x_a(t) when the latter is not band-limited to ±Ω_s/2. Its specifications are given by:

    H_aa(Ω) = { 1   |Ω| < Ω_s/2
              { 0   otherwise,    (7.31)

and are illustrated in Figure 7.10.
and are illustrated in Figure 7.10.
The AAF avoids spectral aliasing during the sampling process, but is also useful in general even if the signal
to be sampled is already bandlimited: it removes highfrequency noise which would be aliased in the band
of interest by the sampling process.
The AAF is necessary, but has some drawbacks: if the signal extends over the Ω
s
/2 limit, useful signal
information may be lost; moreover, the direct analog implementation of H
aa
(Ω) with sharp cutoffs is costly,
and such ﬁlters usually introduce phasedistortion near the cutoff frequency.
[Figure] Fig. 7.10 Anti-aliasing filter template.
Implementation issues:
AAFs are always present in DSP systems that interface with the analog world. Several approaches can be used to circumvent the difficulties associated with the sharp cutoff requirements of the AAF H_aa(Ω) in (7.31). Two such approaches are described below and illustrated in Figure 7.11:
• Approach #1:
- Fix Ω_N = maximum frequency of interest (e.g. F_N = 20 kHz for audio);
- Set Ω_s = (1 + r)2Ω_N where 0 < r < 1 (e.g. r = 0.1 ⇒ F_s = 44 kHz);
- Use a non-ideal H_aa(Ω) with a transition band between Ω_N and (1 + r)Ω_N. In this case, a smooth transition band of width rΩ_N is available.
• Approach #2: M-times oversampling:
- Use a cheap analog H_aa(Ω) with a very large transition band;
- Significantly oversample x_a(t) (e.g. Ω_s = 8 × 2Ω_N);
- Complete the anti-aliasing filtering in the DT domain with a sharp lowpass DT filter H_aa(ω), and then reduce the sampling rate.
The second approach is superior in terms of flexibility and cost. It is generally used in commercial audio CD players. We will investigate it in further detail when we study multirate systems in a later chapter.
7.3 A/D conversion
Introduction
Up to now, we have considered an idealized form of uniform sampling, also called ideal analog-to-discrete-time (A/DT) conversion (see Figure 7.12), in which the sampling takes place instantaneously and the signal amplitude x[n] can take any arbitrary real value, i.e. x[n] ∈ R, so that x[n] is a discrete-time signal.
A/DT conversion cannot be implemented exactly due to hardware limitations; it is used mainly as a mathematical model in the study of DSP. Practical DSP systems use non-ideal analog-to-digital (A/D) converters instead. These are characterized by:
- finite response time (the sampling is not quite instantaneous),
[Figure] Fig. 7.11 Illustration of practical AAF implementations. (a) Excess sampling by a factor 1 + r. (b) Oversampling by a factor M.
[Figure] Fig. 7.12 Ideal A/DT conversion: analog signal x_a(t) → A/DT (T_s) → discrete-time signal x[n] = x_a(nT_s).
 Finite number of permissible output levels for x[n]
(i.e. x[n] ∈ ﬁnite set ⇒digital signal),
 Other types of imperfections like timing jitter, A/D nonlinearities, . . .
7.3.1 Basic steps in A/D conversion
Analog-to-digital (A/D) conversion may be viewed as a two-step process, as illustrated in Figure 7.13:
• The sample-and-hold device (S&H) performs uniform sampling of its input at times nT_s and maintains a constant output voltage of x[n] until the next sampling instant.
• During that time, the quantizer approximates x[n] with a value x̂[n] selected from a finite set of possible representation levels.
The operation of each process is further developed below.
Sample-and-hold (S&H): The S&H performs uniform sampling of its input at times nT_s; its output voltage, denoted x̃(t), remains constant until the next sampling instant. Figure 7.14 illustrates the input and output signals of the sample-and-hold device.

[Figure] Fig. 7.13 Practical A/D conversion: x_a(t) → sample & hold (T_s) → x[n] → quantizer → x̂[n]; the S&H and the quantizer together form the A/D converter.
[Figure] Fig. 7.14 Illustration of a sample-and-hold operation on a continuous-time signal (input x_a(t) and staircase output x̃(t), with sampling period T_s).
Thus, the S&H output x̃(t) is a PAM waveform ideally given by

    x̃(t) = ∑_{n=−∞}^{∞} x[n] p(t − nT_s)    (7.32)
    x[n] = x_a(nT_s)    (7.33)
    p(t) = { 1   if 0 ≤ t ≤ T_s
           { 0   otherwise    (7.34)
Practical S&H devices suffer from several imperfections:
- the sampling is not instantaneous: x[n] ≈ x_a(nT_s);
- the output voltage does not remain constant but slowly decays within a sampling period of duration T_s.
Quantizer: The quantizer is a memoryless device, represented mathematically by a function Q(·), also called the quantizer characteristic:

    x̂[n] = Q(x[n]).    (7.35)

Assuming a constant input voltage x[n], the function Q selects, among a finite set of K permissible levels, the one that best represents x[n]:

    x[n] ∈ R  −Q→  x̂[n] ∈ A = {x̂_k}_{k=1}^{K}    (7.36)

where x̂_k (k = 1, . . . , K) represent the permissible levels and A denotes the set of all such levels. In practice, B + 1 bits are used for the representation of signal levels (B bits for magnitude + 1 bit for sign), so that the total number of levels is K = 2^{B+1}. Since the output levels of the quantizer are usually represented in binary form, the concept of binary coding is implicit within the function Q(·) (more on this in a later chapter).
For an input signal x[n] to the quantizer, the quantization error is defined as

    e[n] ≜ x̂[n] − x[n]    (7.37)
Uniform quantizer: The most commonly used form of quantizer is the uniform quantizer, in which the representation levels are uniformly spaced within an interval [−X_fs, X_fs], where X_fs (often specified as a positive voltage) is known as the full-scale level of the quantizer. Specifically, the representation levels are constrained to be

    x̂_k = −X_fs + (k − 1)∆,   k = 1, . . . , K.    (7.38)

• The minimum value representable by the quantizer is x̂_min = x̂_1 = −X_fs.
• ∆, called the quantization step size, is the constant distance between representation levels. In practice, we have

    ∆ = 2X_fs/K = 2^{−B} X_fs    (7.39)

• The maximum value representable by the quantizer is x̂_max = x̂_K = X_fs − ∆.
• The dynamic range of the uniform quantizer is defined as the interval

    DR = [x̂_1 − ∆/2, x̂_K + ∆/2]    (7.40)
While several choices are possible for the nonlinear quantizer characteristic Q(·) in (7.35), a common approach is to round the input to the closest level:

    Q(x[n]) = { x̂_1   if x[n] < x̂_1 − ∆/2
              { x̂_k   if x̂_k − ∆/2 ≤ x[n] < x̂_k + ∆/2, for k = 1, . . . , K
              { x̂_K   if x[n] ≥ x̂_K + ∆/2.    (7.41)
Figure 7.15 illustrates the input-output relationship of a 3-bit uniform quantizer. In this case, K = 8 different levels exist.
For the uniform quantizer, the quantization error e[n] satisfies:

    x[n] ∈ DR ⇒ |e[n]| ≤ ∆/2    (7.42)
    x[n] ∉ DR ⇒ |e[n]| > ∆/2 (clipping)
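Equations (7.38), (7.39) and (7.41) translate directly into code. A minimal sketch (the function name and the test values are our own, not from the text):

```python
import numpy as np

def uniform_quantize(x, B, Xfs):
    """Round x to the nearest of K = 2**(B+1) levels xhat_k = -Xfs + (k-1)*Delta,
    clipping outside the dynamic range, per (7.38) and (7.41)."""
    K = 2 ** (B + 1)
    Delta = 2.0 * Xfs / K                                    # step size, eq (7.39)
    k = np.round((np.asarray(x, dtype=float) + Xfs) / Delta) # 0-based level index
    k = np.clip(k, 0, K - 1)                                 # clip to [xhat_1, xhat_K]
    return -Xfs + k * Delta

xhat = uniform_quantize([0.03, -0.99, 2.0], B=2, Xfs=1.0)    # 3-bit, Delta = 0.25
# 0.03 -> 0.0, -0.99 -> -1.0 (= xhat_min), and 2.0 clips to xhat_max = Xfs - Delta = 0.75
```

For inputs inside the dynamic range the error never exceeds ∆/2, consistent with (7.42).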
[Figure] Fig. 7.15 3-bit uniform quantizer input-output characteristic (x̂[n] versus x[n], with the K levels spanning 2X_fs). The reproduction levels are labelled with their two's complement binary representation (100, 101, 110, 111, 000, 001, 010, 011).
7.3.2 Statistical model of quantization errors
In the study of DSP systems, it is important to know how quantization errors will affect the system output. Since it is difficult to handle such errors deterministically, a statistical model is usually assumed. A basic statistical model is derived below for the uniform quantizer.
Model structure: From the definition of the quantization error:

    e[n] ≜ x̂[n] − x[n] ⇒ x̂[n] = x[n] + e[n],    (7.43)

one can derive an equivalent additive model for the quantization noise, as illustrated in Figure 7.16.

[Figure] Fig. 7.16 Additive noise model of a quantizer: the mapping x[n] → Q → x̂[n] is replaced by x̂[n] = x[n] + e[n].

The nonlinear device Q is replaced by a linear operation, and the error e[n] is modelled as a white noise sequence (see below).
White noise assumption: The following assumptions are made about the input sequence x[n] and the
corresponding quantization error sequence e[n]:
 e[n] and x[n] are uncorrelated, i.e.
    E{x[n]e[m]} = 0   for any n, m ∈ Z    (7.44)

where E{·} denotes expectation.
- e[n] is a white noise sequence, i.e.:

    E{e[n]e[m]} = 0   if m ≠ n    (7.45)

- e[n] is uniformly distributed within ±∆/2, which implies

    mean = E{e[n]} = 0    (7.46)
    variance = E{e[n]²} = ∫_{−∆/2}^{∆/2} (e²/∆) de = ∆²/12 ≜ σ_e²    (7.47)

These assumptions are valid provided the signal is sufficiently "busy", i.e.:
- σ_x ≜ RMS value of x[n] ≫ ∆;
- x[n] changes rapidly.
SQNR computation: The signal-to-quantization-noise ratio (SQNR) is a good characterization of the effect of the quantizer on the input signal. It is defined as the ratio between the signal power and the quantization noise power at the output of the quantizer, in dB:

    SQNR = 10 log_10(σ_x² / σ_e²),

where σ_x and σ_e represent the RMS values of the signal and noise respectively. With the above assumption on the noise, we have that

    σ_e² = ∆²/12 = 2^{−2B} X_fs² / 12,

where X_fs is the full-scale level of the quantizer, as illustrated in Figure 7.15, so that

    SQNR = 20 log_10(σ_x/X_fs) + 10 log_10 12 + 20B log_10 2

or finally

    SQNR = 6.02 B + 10.8 − 20 log_10(X_fs/σ_x)   (dB)    (7.48)
The above SQNR expression states that each added bit increases the SQNR by about 6 dB. Moreover, it includes a penalty term for the cases where the quantizer range is not matched to the input dynamic range. For example, if the dynamic range is chosen too large (X_fs ≫ σ_x), this term adds a large negative contribution to the SQNR, because the available quantizer reproduction levels are poorly utilized (only a few levels around 0 are used). The formula does not account for the complementary problem (quantizer range too small, resulting in clipping of useful data), because the underlying noise model is based on a no-clipping assumption.
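Formula (7.48) can be verified empirically. The sketch below quantizes a uniform white input that fills the full-scale range, so that σ_x = X_fs/√3 (the test signal is our own assumption, chosen to satisfy the "busy signal" conditions above), and compares the measured SQNR with the formula:

```python
import numpy as np

rng = np.random.default_rng(0)
B, Xfs = 8, 1.0
K = 2 ** (B + 1)
Delta = 2.0 * Xfs / K                        # = 2**(-B) * Xfs, eq (7.39)

x = rng.uniform(-Xfs, Xfs, 100_000)          # busy input, sigma_x^2 = Xfs^2 / 3
k = np.clip(np.round((x + Xfs) / Delta), 0, K - 1)
xhat = -Xfs + k * Delta                      # uniform quantizer output
e = xhat - x                                 # quantization error

sqnr_measured = 10 * np.log10(np.var(x) / np.var(e))
sigma_x = Xfs / np.sqrt(3.0)
sqnr_formula = 6.02 * B + 10.8 - 20 * np.log10(Xfs / sigma_x)
assert abs(sqnr_measured - sqnr_formula) < 0.5   # white-noise model holds here
```

For a sinusoidal input (σ_x = X_fs/√2) the same code would measure the familiar 6.02B + 1.76 dB figure.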
7.4 D/A conversion
Figure 7.17 shows the ideal discrete-time-to-analog (DT/A) conversion as considered up to now. It uses an ideal lowpass reconstruction filter h_r(t), as defined in equation (7.18).

[Figure] Fig. 7.17 Ideal discrete-time-to-analog conversion: DT signal y[n] → DT/A (T_s) → analog signal y_r(t).

The corresponding time- and frequency-domain input/output expressions are:
    y_r(t) = ∑_{n=−∞}^{∞} y[n] h_r(t − nT_s)    (7.49)

and

    Y_r(Ω) = Y(ΩT_s) H_r(Ω)    (7.50)

where

    H_r(Ω) = { T_s   |Ω| < Ω_s/2
             { 0     otherwise    (7.51)

This is an idealized form of reconstruction, since all the signal samples y[n] are needed to reconstruct y_r(t). The interpolating function h_r(t) corresponds to an ideal lowpass filtering operation (i.e. non-causal and unstable).
Ideal DT/A conversion cannot be implemented exactly due to physical limitations; similarly to A/DT conversion, it is used as a mathematical model in DSP. Practical DSP systems use non-ideal D/A converters instead, characterized by:
- a realizable, low-cost approximation to h_r(t);
- an output signal y_r(t) that is not perfectly band-limited.
7.4.1 Basic steps in D/A conversion
D/A conversion may be viewed as a two-step process, illustrated in Figure 7.18:
• The hold circuitry transforms its digital input into a continuous-time PAM-like signal ỹ(t);
• The post-filter smoothes out the PAM signal (spectral reshaping).
Hold circuit: A time-domain description of its operation is given by:

    ỹ(t) = ∑_{n=−∞}^{∞} y[n] h_0(t − nT_s)    (7.52)
[Figure] Fig. 7.18 Two-step D/A conversion: y[n] → hold circuit → ỹ(t) → post-filter → y_r(t); the hold circuit and the post-filter together form the D/A converter.
where h_0(t) is the basic pulse shape of the circuit. In the frequency domain:

    Ỹ(Ω) = ∫_{−∞}^{∞} ỹ(t) e^{−jΩt} dt
         = ∑_{n=−∞}^{∞} y[n] ∫_{−∞}^{∞} h_0(t − nT_s) e^{−jΩt} dt
         = ∑_{n=−∞}^{∞} y[n] e^{−jΩT_s n} ∫_{−∞}^{∞} h_0(t) e^{−jΩt} dt
         = Y(ΩT_s) H_0(Ω)    (7.53)

In general, H_0(Ω) ≠ H_r(Ω), so that the output ỹ(t) of the hold circuit is different from that of the ideal DT/A conversion. The most common implementation is the zero-order hold, which uses a rectangular pulse shape:
    h_0(t) = { 1   if 0 ≤ t ≤ T_s
             { 0   otherwise    (7.54)

The corresponding frequency response is

    H_0(Ω) = T_s sinc(Ω/Ω_s) e^{−jπΩ/Ω_s}    (7.55)
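Equation (7.55) can be checked by evaluating the Fourier integral of the rectangular pulse numerically; the T_s value below is an arbitrary assumption:

```python
import numpy as np

# Check H0(Omega) = integral_0^Ts e^{-j*Omega*t} dt against the closed form
# Ts * sinc(Omega/Omega_s) * e^{-j*pi*Omega/Omega_s}, where np.sinc(u) is
# sin(pi*u)/(pi*u), matching the text's sinc convention.
Ts = 1e-3
Omega_s = 2 * np.pi / Ts

N = 200_000
t = (np.arange(N) + 0.5) * (Ts / N)          # midpoint rule over [0, Ts]

for Omega in (0.1 * Omega_s, 0.25 * Omega_s, 0.4 * Omega_s):
    H0_num = np.sum(np.exp(-1j * Omega * t)) * (Ts / N)
    H0_formula = Ts * np.sinc(Omega / Omega_s) * np.exp(-1j * np.pi * Omega / Omega_s)
    assert abs(H0_num - H0_formula) < 1e-9
```

Note the magnitude droop of |H_0(Ω)| toward Ω_s/2, which is what the post-filter below must compensate.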
Post-filter: The purpose of the post-filter H_pf(Ω) is to smooth out the output of the hold circuitry so that it more closely resembles that of an ideal DT/A conversion.
From the previous discussion, it is clear that the desired output of the D/A converter is

    Y_r(Ω) = Y(ΩT_s) H_r(Ω),

whereas the output of the hold circuitry is:

    Ỹ(Ω) = Y(ΩT_s) H_0(Ω).

Clearly, the output of the post-filter will be identical to that of the ideal DT/A converter if

    H_pf(Ω) = H_r(Ω)/H_0(Ω) = { T_s/H_0(Ω)   if |Ω| < Ω_s/2
                              { 0             otherwise    (7.56)

In practice, the post-filter H_pf(Ω) can only be approximated, but the results are still satisfactory.
Example 7.3: Zero-order hold circuit
The zero-order hold frequency response is shown in the top part of Figure 7.19, compared to the ideal brick-wall lowpass reconstruction filter H_r(Ω). The compensating filter in this case is shown in the bottom part of Figure 7.19.

[Figure] Fig. 7.19 Illustration of the zero-order hold frequency response H_0(Ω) (top, compared with the ideal lowpass interpolation filter H_r(Ω)), and the associated compensation post-filter H_r(Ω)/H_0(Ω) (bottom). T denotes the sampling period.
Chapter 8
Structures for the realization of DT systems
Introduction:
Consider a causal LTI system H with rational system function:

    H(z) = Z{h[n]} = B(z)/A(z).    (8.1)

In theory, H(z) provides a complete mathematical characterization of the system H. However, even if we specify H(z), the input-output transformation x[n] → y[n] = H{x[n]}, where x[n] denotes an arbitrary input and y[n] the corresponding output, can be computed in a multitude of different ways.
Each of the different ways of performing the computation y[n] = H{x[n]} is called a realization of the
system. Speciﬁcally:
• A realization of a system is a speciﬁc (and complete) description of its internal computational struc
ture.
• This description may involve difference equations, block diagrams, pseudocode (Matlab like), etc.
For a given system H, there is an inﬁnity of such realizations. The choice of a particular realization is
guided by practical considerations, such as:
• computational requirements;
• internal memory requirements;
• effects of ﬁnite precision representation;
• chip area and power consumption in VLSI implementation;
• etc.
In this chapter, we discuss some of the most well-known structures for the realization of FIR and IIR discrete-time systems. We will use signal flow-graph representations to illustrate the different computational structures. These are reviewed in the next section.
8.1 Signal flow-graph representation
A signal flow-graph (SFG) is a network of directed branches connecting at nodes. It is mainly used to graphically represent the relationship between signals. In fact, it gives detailed information on the actual algorithm used to compute the samples of one or several signals based on the samples of one or several other signals.
Basic terminology and operating principles of SFGs are detailed below:
Node: A node is represented by a small circle, as shown in Figure 8.1(a). To each node is associated a node value, say w[n], that may or may not appear next to the node. The node value represents either an external input, the result of an internal computation, or an output value (see below).
Branch: A branch is an oriented line segment. The signal flow along the branch is indicated by an arrow. The relationship between the branch input and output is provided by the branch transmittance, as shown in Figure 8.1(b).

[Figure] Fig. 8.1 Basic constituents of a signal flow-graph: (a) a single isolated node with value w[n]; (b) different branches with their input-output characteristics: an unlabelled branch gives v[n] = u[n], transmittance a gives v[n] = a u[n], transmittance z^{−1} gives v[n] = u[n−1], and transmittance H(z) gives V(z) = H(z)U(z).
Internal node: An internal node is a node with one or more input branches (i.e. entering the node) and one or more output branches (i.e. leaving the node), as shown in Figure 8.2(a).
The node value w[n] (not shown in the figure) is the sum of the outputs of all the branches entering the node, i.e.

    w[n] ≜ ∑_{k=1}^{K} u_k[n].    (8.2)

The inputs to the branches leaving the node are all identical and equal to the node value:

    v_1[n] = · · · = v_L[n] = w[n].    (8.3)
c B. Champagne & F. Labeau Compiled October 13, 2004
[Figure] Fig. 8.2 Different types of nodes: (a) internal node with inputs u_1[n], . . . , u_K[n] and outputs v_1[n], . . . , v_L[n]; (b) source node x[n]; (c) sink node y[n].
Source node: A source node is a node with no input branch, as shown in Figure 8.2(b). The node value
x[n] is provided by an external input to the network.
Sink node: A sink node is a node with no output branch, as shown in Figure 8.2(c). The node value y[n],
computed in the same way as an internal node, may be used as a network output.
Remarks: SFGs provide a convenient tool for describing various system realizations. In many cases, non-trivial realizations of a system can be obtained via simple modifications to its SFG.
In the examples below, we illustrate the use of SFG for the representation of rational LTI systems. In
particular, we illustrate how a SFG can be derived from an LCCDE description of the system and vice versa.
Example 8.1:
Consider the SFG in Figure 8.3, where a and b are arbitrary constants:

[Figure] Fig. 8.3 SFG for Example 8.1: inputs x_1[n] (through a z^{−1} branch) and x_2[n] (through a branch of gain a) enter node w[n], which feeds the output y[n] through a branch of gain b.

The node value can be expressed in terms of the input signals x_1[n] and x_2[n] as

    w[n] = x_1[n−1] + a x_2[n]
From there, an expression for the output signal y[n] in terms of x_1[n] and x_2[n] is obtained as

    y[n] = b w[n] = b x_1[n−1] + a b x_2[n]
Example 8.2:
Consider the system function

    H(z) = (b_0 + b_1 z^{−1}) / (1 − a z^{−1})

To obtain an SFG representation of H(z), we first obtain the corresponding LCCDE, namely

    y[n] = a y[n−1] + b_0 x[n] + b_1 x[n−1]

The derivation of the SFG can be made easier by expressing the LCCDE as a set of two equations as follows:

    w[n] = b_0 x[n] + b_1 x[n−1]
    y[n] = a y[n−1] + w[n]

From there, one immediately obtains the SFG in Figure 8.4.
[Figure] Fig. 8.4 SFG for Example 8.2: x[n] feeds node w[n] through gain b_0 directly and through gain b_1 via a z^{−1} delay; the output node forms y[n] = w[n] + a y[n−1] via a feedback branch a z^{−1}.

Note the presence of the feedback loop in this diagram, which is typical of recursive LCCDEs.
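The two-equation form of Example 8.2 maps directly to a sample-by-sample loop. A minimal sketch with illustrative coefficient values a = 0.5, b_0 = 1, b_1 = 2 (our own choice, not from the text):

```python
a, b0, b1 = 0.5, 1.0, 2.0    # assumed coefficients for illustration

def run_sfg(x):
    """Evaluate w[n] = b0*x[n] + b1*x[n-1], y[n] = a*y[n-1] + w[n] sample by sample."""
    y, x_prev, y_prev = [], 0.0, 0.0     # zero initial conditions
    for xn in x:
        wn = b0 * xn + b1 * x_prev       # feed-forward node value
        yn = a * y_prev + wn             # feedback (recursive) part
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

# Impulse response of H(z) = (1 + 2 z^-1)/(1 - 0.5 z^-1):
h = run_sfg([1.0, 0.0, 0.0, 0.0])
# h = [1.0, 2.5, 1.25, 0.625]
```

After the first two samples, each output is simply a times the previous one, reflecting the single pole at z = a.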
Example 8.3:
Let us find the system function H(z) corresponding to the SFG in Figure 8.5.
In this type of problem, one has to be very systematic to avoid possible mistakes. We proceed in 4 steps as follows:
(1) Identify and label non-trivial internal nodes. Here, we identify two non-trivial nodes, labelled w_1[n] and w_2[n] in Figure 8.5.
(2) Find the z-domain input/output (I/O) relation for each of the non-trivial nodes and the output node. Here, there are 2 non-trivial nodes and 1 output node, so that we need a total of 3 linearly independent relations, namely:

    Y(z) = b_0 W_1(z) + b_1 W_2(z)    (8.4)
    W_2(z) = z^{−1} W_1(z)    (8.5)
    W_1(z) = X(z) + a W_2(z)    (8.6)
[Figure] Fig. 8.5 SFG for Example 8.3: x[n] enters node w_1[n], which feeds node w_2[n] through a z^{−1} branch; w_2[n] feeds back into w_1[n] with gain a, and the output is y[n] = b_0 w_1[n] + b_1 w_2[n].
(3) By working out the algebra, solve the I/O relations for Y(z) in terms of X(z). Here:

    (8.5) + (8.6) ⇒ W_1(z) = X(z) + a z^{−1} W_1(z)
                  ⇒ W_1(z) = X(z) / (1 − a z^{−1})    (8.7)

    (8.4) + (8.5) + (8.7) ⇒ Y(z) = (b_0 + b_1 z^{−1}) W_1(z)
                          ⇒ Y(z) = [(b_0 + b_1 z^{−1}) / (1 − a z^{−1})] X(z)    (8.8)

(4) Finally, the system function is obtained as

    H(z) = Y(z)/X(z) = (b_0 + b_1 z^{−1}) / (1 − a z^{−1})
As a final remark, we note that the above system function is identical to that considered in Example 8.2, even though the SFG considered here is different from that in Example 8.2.
8.2 Realizations of IIR systems
We consider IIR systems with rational system functions:

    H(z) = [∑_{k=0}^{M} b_k z^{−k}] / [1 − ∑_{k=1}^{N} a_k z^{−k}] ≜ B(z)/A(z).    (8.9)

The minus signs in the denominator are introduced to simplify the signal flow-graphs (be careful, they are often a source of confusion).
For this general IIR system function, we shall derive several SFGs that correspond to different system realizations.
8.2.1 Direct form I
The LCCDE corresponding to the system function H(z) in (8.9) is given by:

    y[n] = ∑_{k=1}^{N} a_k y[n−k] + ∑_{k=0}^{M} b_k x[n−k]    (8.10)
[Figure] Fig. 8.6 Direct form I implementation of an IIR system: a feed-forward delay line with taps b_0, b_1, . . . , b_{M−1}, b_M (the section B(z)), followed by a feedback delay line with taps a_1, . . . , a_{N−1}, a_N (the section 1/A(z)).
Proceeding as in Example 8.2, this leads directly to the SFG shown in Figure 8.6, known as direct form I (DFI).
In a practical implementation of this SFG, each unit delay (i.e. z^{−1}) requires one storage location (past values need to be stored).
Finally, note that in Figure 8.6, the section of the SFG labelled B(z) corresponds to the zeros of the system, while the section labelled 1/A(z) corresponds to the poles.
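The DFI structure of (8.10) can be sketched in code with one buffer per delay line. Note the plus sign on the feedback taps a_k, following the sign convention of (8.9); the function name and the example system are our own:

```python
def direct_form_1(b, a, x):
    """b = [b0..bM] (feed-forward), a = [a1..aN] (feedback, plus-sign convention)."""
    M, N = len(b) - 1, len(a)
    xbuf = [0.0] * (M + 1)           # x[n], x[n-1], ..., x[n-M]
    ybuf = [0.0] * N                 # y[n-1], ..., y[n-N]
    y = []
    for xn in x:
        xbuf = [xn] + xbuf[:-1]      # shift the input delay line
        yn = sum(bk * xk for bk, xk in zip(b, xbuf)) \
           + sum(ak * yk for ak, yk in zip(a, ybuf))    # eq (8.10)
        if N > 0:
            ybuf = [yn] + ybuf[:-1]  # shift the output delay line
        y.append(yn)
    return y

# Same hypothetical system as Example 8.2: H(z) = (1 + 2 z^-1)/(1 - 0.5 z^-1)
h = direct_form_1([1.0, 2.0], [0.5], [1.0, 0.0, 0.0, 0.0])
# h = [1.0, 2.5, 1.25, 0.625]
```

The two buffers make the N + M storage cost of the DFI explicit, which motivates the direct form II below.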
8.2.2 Direct form II
Since LTI systems commute, the two sections identified as B(z) and 1/A(z) in Figure 8.6 may be interchanged. This leads to the intermediate SFG structure shown in Figure 8.7.
Observe that the two vertical delay lines in the middle of the structure compute the same quantities, namely w[n−1], w[n−2], . . ., so that only one delay line is actually needed. Eliminating one of these delay lines and merging the two sections leads to the structure shown in Figure 8.8, known as the direct form II (DFII). Note that N = M is assumed for convenience; if this is not the case, the branches corresponding to a_i = 0 or b_i = 0 simply disappear.
The main advantage of DFII over DFI is its reduced storage requirement. Indeed, while the DFI contains N + M unit delays, the DFII only contains max{N, M} ≤ N + M unit delays.
[Figure] Fig. 8.7 Structure obtained from a DFI by commuting the implementation of the poles with the implementation of the zeros; the two delay lines both hold w[n−1], w[n−2], . . .

Referring to Figure 8.8, one can easily see that the following difference equations describe the DFII realization:
    w[n] = a_1 w[n−1] + · · · + a_N w[n−N] + x[n]    (8.11)
    y[n] = b_0 w[n] + b_1 w[n−1] + · · · + b_M w[n−M]    (8.12)
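The DFII equations (8.11)–(8.12) use a single shared delay line. A minimal sketch (function name and example system are our own; the example is H(z) = (1 + 2z^{−1})/(1 − 0.5z^{−1}), with coefficient lists padded so N = M):

```python
def direct_form_2(b, a, x):
    """b = [b0..bN], a = [a1..aN] (feedback taps with the plus-sign convention of (8.11))."""
    w = [0.0] * len(a)                # single delay line: w[n-1], ..., w[n-N]
    y = []
    for xn in x:
        wn = xn + sum(ak * wk for ak, wk in zip(a, w))            # eq (8.11)
        yn = b[0] * wn + sum(bk * wk for bk, wk in zip(b[1:], w)) # eq (8.12)
        w = [wn] + w[:-1]             # shift the shared delay line
        y.append(yn)
    return y

h = direct_form_2([1.0, 2.0], [0.5], [1.0, 0.0, 0.0, 0.0])
# h = [1.0, 2.5, 1.25, 0.625], the same impulse response as a DFI of this H(z)
```

Only max{N, M} state variables are stored, illustrating the memory saving discussed above.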
8.2.3 Cascade form
A cascade form is obtained by first factoring H(z) as a product:

    H(z) = B(z)/A(z) = H_1(z) H_2(z) · · · H_K(z)    (8.13)

where

    H_k(z) = B_k(z)/A_k(z),   k = 1, 2, . . . , K    (8.14)

represent IIR sections of low order (typically 1 or 2). The corresponding SFG is shown in Figure 8.9. Note that each box needs to be expanded with the proper sub-SFG (often in DFII or transposed DFII form).
[Figure] Fig. 8.8 Direct form II realization of an IIR system: a single delay line holding w[n−1], w[n−2], . . ., with feedback taps a_1, . . . , a_N and feed-forward taps b_0, . . . , b_N.
[Figure] Fig. 8.9 Cascade form for an IIR system: x[n] → H_1(z) → H_2(z) → · · · → H_K(z) → y[n]. Each box needs to be expanded with the proper sub-SFG (often in DFII or transposed DFII form).
For a typical real-coefficient second-order section (SOS):

    H_k(z) = G_k [(1 − z_{k1} z^{−1})(1 − z_{k2} z^{−1})] / [(1 − p_{k1} z^{−1})(1 − p_{k2} z^{−1})]    (8.15)
           = (b_{k0} + b_{k1} z^{−1} + b_{k2} z^{−2}) / (1 − a_{k1} z^{−1} − a_{k2} z^{−2})    (8.16)

where p_{k2} = p*_{k1} and z_{k2} = z*_{k1}, so that a_{ki}, b_{ki} ∈ R. It should be clear that such a cascade decomposition of an arbitrary H(z) is not unique:
• There are many different ways of grouping the zeros and poles of H(z) into lower order sections;
• The gains of the individual sections may be changed (provided their product remains constant).
Example 8.4:
Consider the system function

    H(z) = G [(1 − e^{jπ/4} z^{−1})(1 − e^{−jπ/4} z^{−1})(1 − e^{j3π/4} z^{−1})(1 − e^{−j3π/4} z^{−1})] / [(1 − .9 z^{−1})(1 + .9 z^{−1})(1 − .9j z^{−1})(1 + .9j z^{−1})]
There are several ways of pairing the poles and the zeros of H(z) to obtain 2nd order sections. Whenever possible, complex conjugate poles and zeros should be paired so that the resulting second-order section has real coefficients. For example, one possible choice that fulfils this requirement is:
H_1(z) = (1 - e^{jπ/4} z^{-1})(1 - e^{-jπ/4} z^{-1}) / [(1 - .9z^{-1})(1 + .9z^{-1})] = (1 - √2 z^{-1} + z^{-2}) / (1 - .81 z^{-2})

H_2(z) = (1 - e^{j3π/4} z^{-1})(1 - e^{-j3π/4} z^{-1}) / [(1 - .9jz^{-1})(1 + .9jz^{-1})] = (1 + √2 z^{-1} + z^{-2}) / (1 + .81 z^{-2})
The corresponding SFG is illustrated in Figure 8.10, where the above 2nd order sections are realized in DFII. Note that we have arbitrarily incorporated the gain factor G in between the two sections H_1(z) and H_2(z). In practice, the gain G could be distributed over the SFG so as to optimize the use of the available dynamic range of the processor (more on this later).

Fig. 8.10 Example of a cascade realization
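As an illustration, the cascade of Example 8.4 can be simulated directly from the pole/zero recursions (8.11)-(8.12) applied to each second-order section. This sketch is not part of the original notes; the function name `biquad_df2` and the placement of the gain G = 1 between the sections are our own choices.

```python
# Sketch (assumption: DFII biquads as in Fig. 8.10, gain G between sections).
import math

def biquad_df2(x, b, a):
    """Direct Form II biquad for
    H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 - a1 z^-1 - a2 z^-2)."""
    w1 = w2 = 0.0                    # internal states w[n-1], w[n-2]
    y = []
    for xn in x:
        wn = xn + a[0] * w1 + a[1] * w2              # pole (recursive) part
        y.append(b[0] * wn + b[1] * w1 + b[2] * w2)  # zero (feed-forward) part
        w2, w1 = w1, wn
    return y

# H1(z) = (1 - sqrt(2) z^-1 + z^-2)/(1 - 0.81 z^-2)
# H2(z) = (1 + sqrt(2) z^-1 + z^-2)/(1 + 0.81 z^-2)
G = 1.0
x = [1.0] + [0.0] * 9                                # unit pulse input
v = biquad_df2(x, (1.0, -math.sqrt(2), 1.0), (0.0, 0.81))
v = [G * s for s in v]                               # inter-section gain
y = biquad_df2(v, (1.0, math.sqrt(2), 1.0), (0.0, -0.81))
print(y[:5])
```

Since the overall system works out to H(z) = (1 + z^{-4})/(1 - 0.6561 z^{-4}), the impulse response starts as 1, 0, 0, 0, 1.6561, ...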
8.2.4 Parallel form
Based on the partial fraction expansion of H(z), one can easily express the latter as a sum:

H(z) = C(z) + Σ_{k=1}^{K} H_k(z)    (8.17)

H_k(z): low-order IIR sections    (8.18)
C(z): FIR section (if needed)    (8.19)
The corresponding SFG is shown in Figure 8.11, where each box needs to be expanded with a proper sub-SFG. Typically, second-order IIR sections H_k(z) would be obtained by combining terms in the PFE that correspond to complex conjugate poles:

H_k(z) = A_k / (1 - p_k z^{-1}) + A*_k / (1 - p*_k z^{-1})    (8.20)
       = (b_{k0} + b_{k1} z^{-1}) / (1 - a_{k1} z^{-1} - a_{k2} z^{-2})    (8.21)
Fig. 8.11 Parallel Realization of an IIR system.
where a_{ki}, b_{ki} ∈ R. A DFII or transposed DFII structure would be used for the realization of (8.21).
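As a sketch (not from the notes), the parallel structure of Fig. 8.11 can be simulated by running every section on the same input and summing the outputs, together with the FIR part C(z) (here reduced to a constant A). The coefficients below are illustrative, not those of Example 8.5.

```python
# Sketch of a parallel realization (assumed section form of (8.21)).
def section(x, b, a):
    """DFII section for H_k(z) = (b0 + b1 z^-1)/(1 - a1 z^-1 - a2 z^-2)."""
    w1 = w2 = 0.0
    y = []
    for xn in x:
        wn = xn + a[0] * w1 + a[1] * w2
        y.append(b[0] * wn + b[1] * w1)
        w2, w1 = w1, wn
    return y

def parallel(x, A, sections):
    # y[n] = A*x[n] + sum of all section outputs at time n
    outs = [section(x, b, a) for (b, a) in sections]
    return [A * xn + sum(col) for xn, col in zip(x, zip(*outs))]

# Illustrative numbers only (hypothetical coefficients):
x = [1.0, 0.5, -0.25, 0.0]
y = parallel(x, A=0.1, sections=[((1.0, 0.2), (0.0, 0.81)),
                                 ((0.5, -0.3), (0.0, -0.81))])
print(y)
```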
Example 8.5:
Consider the system function in Example 8.4
H(z) = G (1 - e^{jπ/4} z^{-1})(1 - e^{-jπ/4} z^{-1})(1 - e^{j3π/4} z^{-1})(1 - e^{-j3π/4} z^{-1}) / [(1 - .9z^{-1})(1 + .9z^{-1})(1 - .9jz^{-1})(1 + .9jz^{-1})]
It is not difﬁcult to verify that it has the following PFE:
H(z) = A + B/(1 - .9z^{-1}) + C/(1 + .9z^{-1}) + D/(1 - .9jz^{-1}) + D*/(1 + .9jz^{-1})
where A, B and C are appropriate real valued coefﬁcients and D is complex valued. Again, there are
several ways of pairing the poles of H(z) to obtain 2nd order sections. In practice, terms corresponding
to complex conjugate poles should be combined so that the resulting second order section has real
coefﬁcients. Applying this approach, we obtain
H_1(z) = B/(1 - .9z^{-1}) + C/(1 + .9z^{-1}) = (b_{10} + b_{11} z^{-1}) / (1 - .81 z^{-2})

H_2(z) = D/(1 - .9jz^{-1}) + D*/(1 + .9jz^{-1}) = (b_{20} + b_{21} z^{-1}) / (1 + .81 z^{-2})
The corresponding SFG is illustrated in Figure 8.12, with H_1(z) and H_2(z) realized in DFII.

Fig. 8.12 Example of a parallel realization

8.2.5 Transposed direct form II

Transposition theorem:
Let G_1 denote a SFG with system function H_1(z). Let G_2 denote the SFG obtained from G_1 by:
(1) reversing the direction of all the branches, and
(2) interchanging the source x[n] and the sink y[n].
Then

H_2(z) = H_1(z)    (8.22)

Note that only the branch direction is changed, not the transmittance. Usually, we redraw G_2 so that the source node x[n] is on the left.
Example 8.6:
An example flowgraph is represented on the left of Figure 8.13. Its transfer function is easily found to be

H_1(z) = [b z^{-1} / (1 - a b z^{-1})] / [1 - c b z^{-1} / (1 - a b z^{-1})].

The application of the transposition theorem yields the SFG on the right of Figure 8.13, which can easily be checked to have the same transfer function.
Application to DFII:
Consider the general rational transfer function

H(z) = (Σ_{k=0}^{M} b_k z^{-k}) / (1 - Σ_{k=1}^{N} a_k z^{-k}).    (8.23)
Fig. 8.13 Example of application of the transposition theorem: left, original SFG and right,
transposed SFG.
with its DFII realization, as shown in Figure 8.8. Applying the transposition theorem as in the previous example, we obtain the SFG shown in Figure 8.14, known as the transposed direct form II. Note that N = M is assumed for convenience.
Fig. 8.14 Transposed DFII realization of a rational system.
The corresponding LCCDEs are:

w_N[n] = b_N x[n] + a_N y[n]    (8.24)
w_{N-1}[n] = b_{N-1} x[n] + a_{N-1} y[n] + w_N[n-1]
    ⋮
w_1[n] = b_1 x[n] + a_1 y[n] + w_2[n-1]
y[n] = w_1[n-1] + b_0 x[n]
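The recursions above translate almost line for line into code. The following is a minimal sketch (not from the notes), assuming N = M as in Fig. 8.14; the function name `transposed_df2` is ours.

```python
# Sketch of the transposed DFII recursions (8.24) and following, with N = M.
def transposed_df2(x, b, a):
    """b = [b0..bN], a = [a1..aN] for H(z) = sum(b_k z^-k)/(1 - sum(a_k z^-k))."""
    N = len(a)
    w = [0.0] * (N + 1)            # w[1..N] hold the delayed partial sums
    y = []
    for xn in x:
        yn = b[0] * xn + w[1]      # y[n] = b0 x[n] + w_1[n-1]
        # update top-down: w[k] uses the still-delayed w[k+1]
        for k in range(1, N):
            w[k] = b[k] * xn + a[k - 1] * yn + w[k + 1]
        w[N] = b[N] * xn + a[N - 1] * yn
        y.append(yn)
    return y

# First-order check: H(z) = 1/(1 - 0.5 z^-1)  ->  h[n] = 0.5^n
h = transposed_df2([1.0] + [0.0] * 4, b=[1.0, 0.0], a=[0.5])
print(h)  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```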
8.3 Realizations of FIR systems
8.3.1 Direct forms
In the case of an FIR system, the denominator in the rational system function (8.9) reduces to A(z) = 1. That is,

H(z) = B(z) = Σ_{k=0}^{M} b_k z^{-k},    (8.25)

corresponding to the following non-recursive LCCDE in the time domain:

y[n] = Σ_{k=0}^{M} b_k x[n-k]    (8.26)
Accordingly, the DFI and DFII realizations become equivalent, as the structural component labelled 1/A(z)
in Figure 8.6 disappears. The resulting SFG is shown in Figure 8.15; it is also called a tapped delay line or
a transversal ﬁlter structure.
Fig. 8.15 DF for an FIR system (also called tapped delay line or transversal ﬁlter).
Note that the coefficients b_k in Figure 8.15 directly define the impulse response of the system. Applying a unit pulse as the input, i.e. x[n] = δ[n], produces an output y[n] = h[n], with

h[n] = b_n if n ∈ {0, 1, ..., M}, and h[n] = 0 otherwise.    (8.27)
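The tapped-delay-line computation (8.26) can be sketched as follows; this is our own minimal illustration, not code from the notes.

```python
# Sketch of the FIR direct form (8.26): y[n] = sum_k b_k x[n-k].
def fir_direct(x, b):
    M = len(b) - 1
    delay = [0.0] * M                 # x[n-1], ..., x[n-M]
    y = []
    for xn in x:
        taps = [xn] + delay           # current tap values x[n-k], k = 0..M
        y.append(sum(bk * xk for bk, xk in zip(b, taps)))
        delay = taps[:M]              # shift the delay line
    return y

# A unit pulse reproduces the coefficients, as in (8.27): h[n] = b_n.
b = [0.5, 1.0, -2.0, 0.25]
y = fir_direct([1.0, 0.0, 0.0, 0.0, 0.0], b)
print(y)  # [0.5, 1.0, -2.0, 0.25, 0.0]
```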
8.3.2 Cascade form
In the case of an FIR system, a cascade form is also possible but the problem is in fact simpler than for IIR
since only the zeros need to be considered (because A(z) = 1).
To decompose H(z) in (8.25) as a product of low-order FIR sections, as in

H(z) = H_1(z) H_2(z) ··· H_K(z)

one needs to find the zeros of H(z), say z_l (l = 1, ..., M), and form the desired low-order sections by properly grouping the corresponding factors 1 - z_l z^{-1} and multiplying by the appropriate gain term. For example, a typical 2nd order section would have the form

H_k(z) = G_k (1 - z_{k1} z^{-1})(1 - z_{k2} z^{-1}) = b_{k0} + b_{k1} z^{-1} + b_{k2} z^{-2},

where the coefficients b_{ki} can be made real by a proper choice of complex conjugate zeros (i.e. z_{k1} = z*_{k2}).
In practice, cascade realizations of FIR systems use sections of order 1, 2 and/or 4. Sections of order 4 are useful in the efficient realization of linear-phase systems, as discussed in Section 8.3.3.
Example 8.7:
Consider the FIR system function
H(z) = (1 - e^{jπ/4} z^{-1})(1 - e^{-jπ/4} z^{-1})(1 - e^{j3π/4} z^{-1})(1 - e^{-j3π/4} z^{-1})

A cascade realization of H(z) as a product of 2nd order sections can be obtained as follows:

H_1(z) = (1 - e^{jπ/4} z^{-1})(1 - e^{-jπ/4} z^{-1}) = 1 - √2 z^{-1} + z^{-2}
H_2(z) = (1 - e^{j3π/4} z^{-1})(1 - e^{-j3π/4} z^{-1}) = 1 + √2 z^{-1} + z^{-2}
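A quick numerical check of Example 8.7 (our own sketch, not from the notes): multiplying the coefficient polynomials of H_1(z) and H_2(z), i.e. convolving their coefficient lists, recovers the overall H(z). Since the four zeros are the roots of z^4 = -1, the product should be 1 + z^{-4}.

```python
# Sketch: verify the cascade of Example 8.7 by polynomial multiplication.
import math

def polymul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

h1 = [1.0, -math.sqrt(2), 1.0]   # 1 - sqrt(2) z^-1 + z^-2
h2 = [1.0,  math.sqrt(2), 1.0]   # 1 + sqrt(2) z^-1 + z^-2
p = polymul(h1, h2)
print(p)  # ~[1, 0, 0, 0, 1]  ->  H(z) = 1 + z^-4 (up to rounding)
```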
8.3.3 Linear-phase FIR systems

Definition: An LTI system is said to have generalized linear phase (GLP) if its frequency response is expressible as

H(ω) = A(ω) e^{-j(αω-β)}    (8.28)

where the function A(ω) and the coefficients α and β are real valued.

Note that A(ω) can be negative, so that in general, A(ω) ≠ |H(ω)|. In terms of A(ω), α and β, we have

∠H(ω) = -αω + β if A(ω) > 0
∠H(ω) = -αω + β + π if A(ω) < 0    (8.29)

Despite the phase discontinuities, it is common practice to refer to such a system as a linear-phase system.
Theorem: Consider an LTI system H with FIR h[n], such that

h[n] = 0 for n < 0 and for n > M,
h[0] ≠ 0 and h[M] ≠ 0.    (8.30)

System H is GLP if and only if h[n] is either

(i) symmetric, i.e.:
h[n] = h[M-n], n ∈ Z    (8.31)

(ii) or antisymmetric, i.e.:
h[n] = -h[M-n], n ∈ Z    (8.32)

Note that the GLP conditions (8.31) and (8.32) can be combined into a single equation, namely

h[n] = ε h[M-n]    (8.33)

where ε is either equal to +1 (symmetric) or -1 (antisymmetric).
Example 8.8:
Consider the LTI system with impulse response

h[n] = {1, 0, 1, 0, 1}

Note that h[n] satisfies the definition (8.31) of a symmetric impulse response with M = 4:

h[n] = h[4-n],    all n ∈ Z

Let us verify that the system is linear phase, or equivalently, that its frequency response verifies the condition (8.28). We have

H(ω) = Σ_{n=-∞}^{∞} h[n] e^{-jωn}
     = 1 + e^{-j2ω} + e^{-j4ω}
     = e^{-j2ω} (e^{j2ω} + 1 + e^{-j2ω})
     = e^{-j2ω} (1 + 2cos(2ω))
     = A(ω) e^{-j(αω-β)}

where we identify

A(ω) = 1 + 2cos(2ω),    α = 2,    β = 0
Modified direct form: The symmetry in h[n] can be used advantageously to reduce the number of multiplications in a direct form realization:

h[n] = ε h[M-n]  ⟹  b_k = ε b_{M-k}    (8.34)

In the case M odd, we have

y[n] = Σ_{k=0}^{M} b_k x[n-k]    (8.35)
     = Σ_{k=0}^{(M-1)/2} (b_k x[n-k] + b_{M-k} x[n-(M-k)])
     = Σ_{k=0}^{(M-1)/2} b_k (x[n-k] + ε x[n-(M-k)])

which only requires (M+1)/2 multiplications instead of M+1.
Figure 8.16 illustrates a modiﬁed DFI structure which enforces linear phase.
Fig. 8.16 A modiﬁed DF structure to enforce linear phase (M odd).
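The folded summation above can be sketched as follows (our own illustration, not from the notes): each pair of symmetric taps shares a single multiplier, so only (M+1)/2 multiplications per output sample are needed.

```python
# Sketch of the folded linear-phase FIR structure (M odd), as in (8.35).
def folded_fir(x, b_half, eps, M):
    """b_half = [b_0, ..., b_{(M-1)/2}]; symmetry gives b_{M-k} = eps * b_k."""
    buf = [0.0] * (M + 1)                 # x[n], x[n-1], ..., x[n-M]
    y = []
    for xn in x:
        buf = [xn] + buf[:M]
        acc = 0.0
        for k, bk in enumerate(b_half):   # only (M+1)/2 multiplications
            acc += bk * (buf[k] + eps * buf[M - k])
        y.append(acc)
    return y

# Symmetric h = [1, 2, 2, 1] (M = 3): the impulse response reproduces h.
y = folded_fir([1.0, 0.0, 0.0, 0.0, 0.0], b_half=[1.0, 2.0], eps=+1, M=3)
print(y)  # [1.0, 2.0, 2.0, 1.0, 0.0]
```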
Properties of the zeros of a GLP system: In the z-domain, the GLP condition h[n] = ε h[M-n] is equivalent to

H(z) = ε z^{-M} H(z^{-1})    (8.36)

Thus, if z_0 is a zero of H(z), so is 1/z_0. If in addition h[n] is real, then z_0, z*_0, 1/z_0 and 1/z*_0 are all zeros of H(z). This is illustrated in Figure 8.17.
As a result, sections of order 4 are often used in the cascade realization of real, linear-phase FIR filters:

H_k(z) = G_k (1 - z_k z^{-1})(1 - z*_k z^{-1})(1 - (1/z_k) z^{-1})(1 - (1/z*_k) z^{-1})    (8.37)
8.3.4 Lattice realization of FIR systems
Lattice stage: The basic lattice stage is a 2-input, 2-output DT processing device. Its signal flow graph is illustrated in Figure 8.18.
Fig. 8.17 Illustration of the zero locations of FIR realcoefﬁcient GLP systems: a zero always
appears with its inverse and complex conjugate.
Fig. 8.18 Lattice stage.
c B. Champagne & F. Labeau Compiled October 13, 2004
164 Chapter 8. Structures for the realization of DT systems
The parameter κ_m is called the reflection coefficient, while m is an integer index whose meaning will be explained shortly.

Input-output characterization in the time domain:

u_m[n] = u_{m-1}[n] + κ_m v_{m-1}[n-1]    (8.38)
v_m[n] = κ*_m u_{m-1}[n] + v_{m-1}[n-1]    (8.39)
Equivalent z-domain matrix representation:

[ U_m(z) ]            [ U_{m-1}(z) ]
[ V_m(z) ]  = K_m(z)  [ V_{m-1}(z) ]    (8.40)

where K_m(z) is a 2×2 transfer matrix defined as

K_m(z) ≜ [ 1      κ_m z^{-1} ]
         [ κ*_m   z^{-1}     ]    (8.41)
Lattice filter: A lattice filter is made up of a cascade of M lattice stages, with index m running from 1 to M. This is illustrated in Figure 8.19.
Fig. 8.19 A lattice ﬁlter as a series of lattice stages.
Corresponding set of difference equations characterizing the above SFG in the time domain:

u_0[n] = v_0[n] = x[n]
for m = 1, 2, ..., M:
    u_m[n] = u_{m-1}[n] + κ_m v_{m-1}[n-1]
    v_m[n] = κ*_m u_{m-1}[n] + v_{m-1}[n-1]
end
y[n] = u_M[n]
Corresponding matrix representation in the z-domain:

U_0(z) = V_0(z) = X(z)    (8.42)

[ U_M(z) ]                              [ U_0(z) ]
[ V_M(z) ]  = K_M(z) ··· K_2(z) K_1(z)  [ V_0(z) ]    (8.43)

Y(z) = U_M(z)    (8.44)
c B. Champagne & F. Labeau Compiled October 13, 2004
8.3 Realizations of FIR systems 165
System function:

• From (8.42)-(8.44), it follows that Y(z) = H(z)X(z) with the equivalent system function H(z) given by

H(z) = [1 0] K_M(z) ··· K_2(z) K_1(z) [1 1]^T    (8.45)

• Expanding the matrix product in (8.45), it is easy to verify that the system function of the lattice filter is expressible as

H(z) = 1 + b_1 z^{-1} + ··· + b_M z^{-M}    (8.46)

where each of the coefficients b_k in (8.46) can be expressed algebraically in terms of the reflection coefficients {κ_m}, m = 1, ..., M.

• Conclusion: an M-stage lattice filter is indeed an FIR filter of length M+1.
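One way to obtain the b_k of (8.46) numerically is simply to feed a unit pulse through the lattice recursions. The following sketch is ours (not from the notes) and assumes real reflection coefficients, so that κ*_m = κ_m.

```python
# Sketch: impulse response of an M-stage lattice filter (real kappas assumed).
def lattice_fir(x, kappas):
    y = []
    v_prev = [0.0] * (len(kappas) + 1)    # v_m[n-1] for m = 0..M
    for xn in x:
        u = v = xn                        # u_0[n] = v_0[n] = x[n]
        v_new = [v]
        for m, km in enumerate(kappas):
            # u_m[n] = u_{m-1}[n] + k_m v_{m-1}[n-1]
            # v_m[n] = k_m u_{m-1}[n] + v_{m-1}[n-1]
            u, v = u + km * v_prev[m], km * u + v_prev[m]
            v_new.append(v)
        v_prev = v_new
        y.append(u)                       # y[n] = u_M[n]
    return y

# M = 2 stages -> FIR of length 3: H(z) = 1 + b1 z^-1 + b2 z^-2
h = lattice_fir([1.0, 0.0, 0.0], kappas=[0.5, 0.25])
print(h)  # [1.0, 0.625, 0.25]
```

Note that the last coefficient equals κ_M (here 0.25), and b_1 = κ_1(1 + κ_2) = 0.625, consistent with expanding the product in (8.45) by hand.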
Minimum-phase property: The lattice realization is guaranteed to be minimum-phase if the reflection coefficients are all less than one in magnitude, that is: if |κ_m| < 1 for all m = 1, ..., M.

Note: Thus, by constraining the reflection coefficients as above, we can be sure that a causal and stable inverse exists for the lattice system. The proof of this property is beyond the scope of this course.
Chapter 9
Filter design
9.1 Introduction
9.1.1 Problem statement
Consider the rational system function

H(z) = B(z)/A(z) = (Σ_{k=0}^{M} b_k z^{-k}) / (1 + Σ_{k=1}^{N} a_k z^{-k})    (9.1)
for which several computational realizations have been developed in the preceding Chapter.
In filter design, we seek to find the system coefficients, i.e. M, N, a_1, ..., a_N, b_0, ..., b_M, in (9.1) such that the corresponding frequency response

H(ω) = H(z)|_{z=e^{jω}}

provides a good approximation to a desired response H_d(ω), i.e.

H(ω) ≈ H_d(ω).    (9.2)
In order to be practically realizable, the resulting system H(z) must be stable and causal, which imposes strict constraints on the possible values of the a_k. Additionally, a linear phase requirement for H(z) may be desirable in certain applications, which further restricts the system parameters.

It is important to realize that the use of an approximation in (9.2) is a fundamental necessity in practical filter design. In the vast majority of problems, the desired response H_d(ω) does not correspond to a stable and causal system and cannot even be expressed exactly as a rational system function.

To illustrate this point, recall from Chapter 2 that stability of H(z) implies that H(ω) is continuous, while ideal frequency selective filters H_d(ω) all have discontinuities.
Example 9.1: Ideal Low-Pass Filter
The ideal lowpass filter has already been studied in Example 3.3. The corresponding frequency response, denoted H_d(ω) in the present context, is illustrated in Figure 9.1. It is not continuous, and the corresponding impulse response

h_d[n] = (ω_c/π) sinc(ω_c n/π)    (9.3)

is easily seen to be non-causal. Furthermore, it is possible to show that

Σ_n |h_d[n]| = ∞

so that the ideal lowpass filter is unstable.
Fig. 9.1 Ideal LowPass ﬁlter speciﬁcation in frequency.
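The divergence of Σ|h_d[n]| can be illustrated numerically (our own sketch, not from the notes): the partial sums keep growing, roughly like log N, with no sign of convergence.

```python
# Sketch: partial sums of |h_d[n]| for the ideal lowpass filter of (9.3).
import math

def hd(n, wc):
    # (wc/pi) * sinc(wc*n/pi), with sinc(x) = sin(pi x)/(pi x) and sinc(0) = 1
    return wc / math.pi if n == 0 else math.sin(wc * n) / (math.pi * n)

wc = math.pi / 4
partial = [sum(abs(hd(n, wc)) for n in range(-N, N + 1)) for N in (10, 100, 1000)]
print(partial)   # strictly increasing: consistent with instability
```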
9.1.2 Specification of H_d(ω)
For frequency selective ﬁlters, the magnitude of the desired response is usually speciﬁed in terms of the
tolerable
• passband distortion,
• stopband attenuation, and
• width of transition band.
An additional requirement of linear phase (GLP) may be speciﬁed.
For other types of ﬁlters, such as allpass equalizers and Hilbert transformers, phase speciﬁcations play a
more important role.
Typical lowpass specifications: A typical graphical summary of design specifications for a discrete-time lowpass filter is shown in Figure 9.2. The parameters are:

• [0, ω_p] = passband
• [ω_s, π] = stopband
• δω ≜ ω_s - ω_p = width of transition band
• δ_1 = passband ripple, often expressed in dB via 20 log10(1 + δ_1)
• δ_2 = stopband ripple, usually expressed in dB as 20 log10(δ_2)
c (B. Champagne & F. Labeau Compiled November 5, 2004
Fig. 9.2 Typical template summarizing the design speciﬁcations for a lowpass ﬁlter.
The phase response in the passband may also be specified, for example by an additional GLP requirement.
Typically, a reduction of δ_1, δ_2 and/or δω leads to an increase in the required IIR filter order N or FIR filter length M, and thus greater implementation costs. Ideally, one would use the minimum value of N and/or M for which the filter specifications can be met.
9.1.3 FIR or IIR, That Is The Question
The choice between IIR and FIR is usually based on a consideration of the phase requirements. Recall that an LTI system is GLP iff

H(ω) = A(ω) e^{-j(αω-β)}    (9.4)

where the amplitude function A(ω) and the phase parameters α and β are real valued.

Below, we explain how the GLP requirement influences the choice between FIR and IIR. In essence, only an FIR filter can be at the same time stable, causal and GLP. This is a consequence of the following theorem, stated without proof.

Theorem: A stable LTI system H(z) with real impulse response, i.e. h[n] ∈ R, is GLP if and only if

H(z) = ε z^{-2α} H(1/z)    (9.5)

where α ∈ R and ε ∈ {-1, +1}.
PZ symmetry:
• Suppose that p is a pole of H(z) with 0 < |p| < 1.
• According to the above theorem, if H(z) is GLP, then:

H(1/p) = ε p^{2α} H(p) = ∞    (9.6)

which shows that 1/p is also a pole of H(z).
• Note that p* and 1/p* are also poles of H(z) under the assumption of a real system.
• The above symmetry also applies to the zeros of H(z).
• The situation is illustrated in the PZ diagram below:
Fig. 9.3 PZ locations of IIR realcoefﬁcient GLP systems: a pole always appears with its
inverse (and complex conjugate).
Conclusion: If H(z) is stable and GLP, to any non-trivial pole p inside the unit circle corresponds a pole 1/p outside the unit circle, so that H(z) cannot have a causal impulse response. In other words, only an FIR filter can be at the same time stable, causal and GLP.

Basic design principle:
• If GLP is essential ⟹ FIR
• If not ⟹ IIR usually preferable (can meet specifications with lower complexity)
9.2 Design of IIR ﬁlters
Mathematical setting: We analyze in this section common techniques used to design IIR filters. Of course, we only concentrate on filters with rational system functions, as in

H(z) = B(z)/A(z) = (b_0 + b_1 z^{-1} + ... + b_M z^{-M}) / (1 + a_1 z^{-1} + ... + a_N z^{-N}),    (9.7)

for which practical computational realizations are available.

For H(z) in (9.7) to be the system function of a causal & stable LTI system, all its poles have to be inside the unit circle (U.C.).
Design problem: The goal is to find the filter parameters N, M, {a_k} and {b_k} such that H(ω) ≈ H_d(ω) to some desired degree of accuracy.
Transformation methods: There exists a huge literature on analog filter design. In particular, several effective approaches do exist for the design of analog IIR filters, say H_a(Ω). The transformation methods try to take advantage of this literature in the following way:

(1) Map the DT specifications H_d(ω) into analog specifications H_ad(Ω) via a proper transformation;
(2) Design an analog filter H_a(Ω) that meets these specifications;
(3) Map H_a(Ω) back into H(ω) via a proper inverse transformation.
9.2.1 Review of analog filtering concepts

LTI system: For an analog LTI system, the input/output characteristic in the time domain takes the form of a convolution integral:

y_a(t) = ∫_{-∞}^{∞} h_a(u) x_a(t-u) du    (9.8)

where h_a(t) is the impulse response of the system. Note here that t ∈ R.
System function: The system function of an analog LTI system is defined as the Laplace transform of its impulse response, i.e.

H_a(s) = L{h_a(t)} = ∫_{-∞}^{∞} h_a(t) e^{-st} dt    (9.9)

where L denotes the Laplace operator. The complex variable s is often expressed in terms of its real and imaginary parts as

s = σ + jΩ

The set of all s ∈ C where the integral (9.9) converges absolutely defines the region of convergence (ROC) of the Laplace transform. A complete specification of the system function H_a(s) must include a description of the ROC.

In the s-domain, the convolution integral (9.8) is equivalent to

Y_a(s) = H_a(s) X_a(s)    (9.10)

where Y_a(s) and X_a(s) are the Laplace transforms of y_a(t) and x_a(t).
Causality and stability: An analog LTI system is causal iff the ROC of its system function H_a(s) is a right-half plane, that is

ROC: ℜ(s) > σ_0

An analog LTI system is stable iff the ROC of H_a(s) contains the jΩ axis:

jΩ ∈ ROC for all Ω ∈ R

The ROC of a stable and causal analog LTI system is illustrated in Fig. 9.4.
Fig. 9.4 Illustration of a typical ROC for a causal and stable analog LTI system.
Frequency response: The frequency response of an LTI system is defined as the Fourier transform of its impulse response, or equivalently¹

H_a(Ω) = H_a(s)|_{s=jΩ}    (9.11)

In the case of an analog system with a real impulse response, we have

|H_a(Ω)|² = H_a(s) H_a(-s)|_{s=jΩ}    (9.12)
Rational systems: As the name indicates, a rational system is characterized by a system function that is a rational function of s, typically:

H_a(s) = (β_0 + β_1 s + ··· + β_M s^M) / (1 + α_1 s + ··· + α_N s^N)    (9.13)

The corresponding input-output relationship in the time domain is an ordinary differential equation of order N, that is:

y_a(t) = -Σ_{k=1}^{N} α_k (d^k/dt^k) y_a(t) + Σ_{k=0}^{M} β_k (d^k/dt^k) x_a(t)    (9.14)
9.2.2 Basic analog ﬁlter types
In this course, we will basically consider only three main types of analog filters in the design of discrete-time IIR filters via transformation methods. These three types are of increasing "complexity", in the sense that it becomes harder to design the filter with just paper and pencil, but they are also increasingly efficient at meeting a prescribed set of specifications with a limited number of filter parameters.
Butterworth filters: Butterworth filters all have a common shape of squared magnitude response:

|H_a(Ω)|² = 1 / (1 + (Ω/Ω_c)^{2N}),    (9.15)

¹ Note the abuse of notation here: we should formally write H_a(jΩ) but usually drop the j for simplicity.
where Ω_c is the cutoff frequency (3 dB) and N is the filter order. Figure 9.5 illustrates the frequency response of several Butterworth filters of different orders. Note that |H_a(Ω)| smoothly decays from a maximum of 1 at Ω = 0 to 0 at Ω = ∞.

Fig. 9.5 Magnitude response of several Butterworth filters of different orders (Ω_c = 10 rad/s).

These filters are simple to compute, and can be easily manipulated without the help of a computer. The corresponding transfer function is given by:
H_a(s) = Ω_c^N / [(s - s_0)(s - s_1) ··· (s - s_{N-1})]    (9.16)

The poles s_k are uniformly spaced on a circle of radius Ω_c in the s-plane, as given by:

s_k = Ω_c exp(j[π/2 + (k + 1/2)π/N]),    k = 0, 1, ..., N-1.    (9.17)
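A quick numerical check of (9.16)-(9.17) (our own sketch, not from the notes): the poles computed from (9.17) all lie in the left half-plane, and the resulting squared magnitude at Ω = Ω_c is 1/2, i.e. the 3 dB point.

```python
# Sketch: Butterworth poles per (9.17) and the 3 dB check at Omega = Omega_c.
import cmath, math

def butterworth_poles(N, wc):
    return [wc * cmath.exp(1j * (math.pi / 2 + (k + 0.5) * math.pi / N))
            for k in range(N)]

def mag2(omega, N, wc):
    # |H_a(j Omega)|^2 from the pole-product form (9.16)
    s = 1j * omega
    den = 1.0
    for sk in butterworth_poles(N, wc):
        den *= (s - sk)
    return abs(wc ** N / den) ** 2

N, wc = 4, 10.0
poles = butterworth_poles(N, wc)
print([round(p.real, 3) for p in poles])  # all negative: stable and causal
print(mag2(wc, N, wc))                    # ~0.5, the 3 dB point
```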
Example 9.2:
Chebyshev ﬁlters: Chebyshev ﬁlters come in two ﬂavours: type I and II. The main difference with the
Butterworth ﬁlters is that they are not monotonic in their magnitude response: Type I Chebyshev ﬁlters have
ripple in their passband, while Type II Chebyshev ﬁlters have ripple in their stopband. The table below
summarizes the features of the two types of Chebyshev ﬁlters, and also gives an expression of their squared
magnitude response:
            Type I                                          Type II
Def         |H_c(Ω)|² = 1/(1 + ε² T_N²(Ω/Ω_c))              |H_c(Ω)|² = 1/(1 + [ε² T_N²(Ω_c/Ω)]^{-1})
Passband    Ripple (in |H_c(Ω)|²) from 1 to 1/(1+ε²)        Monotonic
Stopband    Monotonic                                       Ripple, up to 1/(1 + 1/ε²)
In the above table, the function T_N(·) represents the Nth order Chebyshev polynomial. The details of the derivation of these polynomials are beyond the scope of this text, but can be obtained from any standard textbook. ε is a variable that determines the amount of ripple, and Ω_c is a cutoff frequency. Figure 9.6 illustrates the frequency response of these filters. Chebyshev filters become more complicated to deal with than Butterworth filters, mainly because of the presence of the Chebyshev polynomials in their definition, and often they will be designed through an ad hoc computer program.
Fig. 9.6 Illustration of the magnitude response of analog Chebyshev ﬁlters of different orders
N: (a) Type I, (b) Type II.
Elliptic filters: Elliptic filters are based on elliptic functions. Their exact expression is beyond the scope of this text, but they can be easily generated by an appropriate computer program. Figure 9.7 illustrates the magnitude response of these filters. One can see that they exhibit ripple both in the passband and the stopband. They usually require a lower order than Chebyshev and Butterworth filters to meet the same specifications.
9.2.3 Impulse invariance method

Principle: In this method of transformation, the impulse response of the discrete-time filter H(z) is obtained by sampling the impulse response of a properly designed analog filter H_a(s):

h[n] = h_a(nT_s)    (9.18)

where T_s is an appropriate sampling period. From sampling theory in Chapter 7, we know that

H(ω) = (1/T_s) Σ_k H_a(ω/T_s - 2πk/T_s).    (9.19)

If there is no significant aliasing, then

H(ω) = (1/T_s) H_a(ω/T_s),    |ω| < π.    (9.20)
Fig. 9.7 Magnitude Response of lowpass elliptic ﬁlters of different orders.
Fig. 9.8 Illustration of aliasing with impulse invariance method.
Thus, in the absence of aliasing, the frequency response of the DT filter is the same (up to a scale factor) as the frequency response of the CT filter.

Clearly, this method works only if |H_a(Ω)| becomes very small for |Ω| > π/T_s, so that aliasing can be neglected. Thus, it can be used to design lowpass and bandpass filters, but not highpass filters.

In the absence of additional specifications on H_a(Ω), the parameter T_s can be set to 1: it is a fictitious sampling period that does not correspond to any physical sampling.
Design steps:

(1) Given the desired specifications H_d(ω) for the DT filter, find the corresponding specifications for the analog filter, H_ad(Ω):

H_ad(Ω) = T_s H_d(ω)|_{ω=ΩT_s},    |Ω| < π/T_s    (9.21)

For example, Figure 9.9 illustrates the conversion of specifications from DT to CT for the case of a lowpass design.
Fig. 9.9 Conversion of speciﬁcations from discretetime to continuoustime for a design by
impulse invariance.
(2) Design an analog IIR filter H_a(Ω) that meets the desired analog specifications H_ad(Ω):
    - Choose the filter type (Butterworth, elliptic, etc.)
    - Select the filter parameters (filter order N, poles s_k, etc.)

(3) Transform the analog filter H_a(Ω) into a discrete-time filter H(z) via time-domain sampling of the impulse response:

h_a(t) = L^{-1}{H_a(Ω)}
h[n] = h_a(nT_s)
H(z) = Z{h[n]}    (9.22)

In practice, it is not necessary to find h_a(t), as explained below.
Simplification of step (3): Assume H_a(s) has only first-order poles (e.g. Butterworth filters). By performing a partial fraction expansion of H_a(s), one gets:

H_a(s) = Σ_{k=1}^{N} A_k / (s - s_k)    (9.23)

The inverse Laplace transform of H_a(s) is given by

h_a(t) = Σ_{k=1}^{N} A_k e^{s_k t} u_a(t)    (9.24)

where u_a(t) is the analog unit step function, i.e.

u_a(t) = 1 for t ≥ 0, and 0 for t < 0    (9.25)
Now, if we sample h_a(t) in (9.24) uniformly at times t = nT_s, we obtain

h[n] = Σ_{k=1}^{N} A_k e^{s_k n T_s} u[n] = Σ_{k=1}^{N} A_k (e^{s_k T_s})^n u[n].    (9.26)

The Z-transform of h[n] is immediately obtained as:

H(z) = Σ_{k=1}^{N} A_k / (1 - p_k z^{-1})    where p_k = e^{s_k T_s}    (9.27)

This clearly shows that there is no need to go through the actual steps of inverse Laplace transform, sampling and Z-transform: one only needs the partial fraction expansion coefficients A_k and the poles s_k from (9.23) and plug them into (9.27).
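The mapping (9.23) → (9.27) can be sketched in a few lines (our own illustration, not from the notes): given the PFE residues A_k and analog poles s_k, each first-order digital section follows with p_k = e^{s_k T_s}.

```python
# Sketch of impulse invariance: (A_k, s_k) analog PFE -> (A_k, p_k) digital PFE.
import cmath

def impulse_invariance(residues, poles_s, Ts=1.0):
    return [(A, cmath.exp(sk * Ts)) for A, sk in zip(residues, poles_s)]

def freq_resp(sections, omega):
    z1 = cmath.exp(-1j * omega)            # z^-1 evaluated on the unit circle
    return sum(A / (1 - pk * z1) for A, pk in sections)

# One real analog pole: H_a(s) = 1/(s + 1)  ->  H(z) = 1/(1 - e^{-Ts} z^-1)
sections = impulse_invariance([1.0], [-1.0], Ts=0.5)
print(sections[0][1])                 # pole at e^{-0.5} ~ 0.6065 (inside U.C.)
print(abs(freq_resp(sections, 0.0)))  # DC gain 1/(1 - e^{-0.5}) ~ 2.54
```

A stable analog pole (negative real part) always maps inside the unit circle, consistent with the remarks that follow.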
Remarks: The order of the filter is preserved by impulse invariance, that is: if H_a(s) is of order N, then so is H(z). The correspondence between poles in discrete time and continuous time is given by:

s_k is a pole of H_a(s)  ⟹  p_k = e^{s_k T_s} is a pole of H(z).

As a result, stability and causality of the analog filter are preserved by the transformation. Indeed, if H_a(s) is stable and causal, then the real part of all its poles s_k will be negative:

s_k = σ_k + jΩ_k,    σ_k < 0.

The corresponding poles p_k in discrete time can be written as

p_k = e^{σ_k T_s} e^{jΩ_k T_s}

Therefore,

|p_k| = e^{σ_k T_s} < 1,

which ensures causality and stability of the discrete-time filter H(z).
Example 9.3:
Apply the impulse invariance method to an appropriate analog Butterworth filter in order to design a lowpass digital filter H(z) that meets the specifications outlined in Figure 9.10.

• Step 1: We set the sampling period to T_s = 1 (i.e. Ω = ω), so that the desired specifications for the analog filter are the same as stated in Figure 9.10, or equivalently:

1 - δ_1 ≤ |H_ad(Ω)| ≤ 1    for 0 ≤ |Ω| ≤ Ω_1 = .25π    (9.28)
|H_ad(Ω)| ≤ δ_2    for Ω_2 = .4π ≤ |Ω| ≤ ∞    (9.29)

• Step 2: We need to design an analog Butterworth filter H_a(Ω) that meets the above specifications. For the Butterworth family, we have

|H_a(Ω)|² = 1 / (1 + (Ω/Ω_c)^{2N})    (9.30)

Thus, we need to determine the filter order N and the cutoff frequency Ω_c so that the above inequalities are satisfied. This is explained below:
Fig. 9.10 Specifications for the lowpass design of Example 9.3 (ω_1 = 0.25π, ω_2 = 0.4π, δ_1 = 0.05, δ_2 = 0.01).
- We first try to find an analog Butterworth filter that meets the band edge conditions exactly, that is:

|H_a(Ω_1)| = 1 - δ_1  ⟹  1/(1 + (Ω_1/Ω_c)^{2N}) = (1 - δ_1)²  ⟹  (Ω_1/Ω_c)^{2N} = 1/(1 - δ_1)² - 1 ≜ α    (9.31)

|H_a(Ω_2)| = δ_2  ⟹  1/(1 + (Ω_2/Ω_c)^{2N}) = δ_2²  ⟹  (Ω_2/Ω_c)^{2N} = 1/δ_2² - 1 ≜ β    (9.32)

Combining these two equations, we obtain:

(Ω_1/Ω_2)^{2N} = α/β  ⟹  2N ln(Ω_1/Ω_2) = ln(α/β)  ⟹  N = (1/2) ln(α/β) / ln(Ω_1/Ω_2) = 12.16

Since N must be an integer, we take N = 13.
Since N must be an integer, we take N = 13.
 The value of Ω
c
is then determined by requiring that:
[H
c
(Ω
1
)[ = 1−δ
1
⇒
1
1+(Ω
1
/Ω
c
)
26
= (1−δ
1
)
2
⇒ Ω
c
= 0.856 (9.33)
With this choice, the speciﬁcations are met exactly at Ω
1
and are exceeded at Ω
2
, which
contribute to minimize aliasing effects.
- An analog Butterworth filter meeting the desired specifications is obtained as follows:

H_a(s) = Ω_c^{13} / [(s - s_0)(s - s_1) ··· (s - s_{12})]    (9.34)

s_k = Ω_c e^{j[π/2 + (k + 1/2)π/13]},    k = 0, 1, ..., 12    (9.35)
• Step 3: The desired digital filter is finally obtained by applying the impulse invariance method:

- Partial fraction expansion:

H_a(s) = A_0/(s - s_0) + ··· + A_{12}/(s - s_{12})    (9.36)

- Direct mapping to the z-domain:

H(z) = A_0/(1 - p_0 z^{-1}) + ··· + A_{12}/(1 - p_{12} z^{-1})    (9.37)

with

p_k = e^{s_k},    k = 0, 1, ..., 12    (9.38)

Table 9.1 lists the values of the poles s_k, the partial fraction expansion coefficients A_k and the discrete-time poles p_k for this example.
k    s_k                   A_k                    p_k
0 −0.1031+0.8493j −0.0286+0.2356j +0.5958+0.6773j
1 −0.3034+0.8000j −1.8273−0.6930j +0.5144+0.5296j
2 −0.4860+0.7041j +4.5041−6.5254j +0.4688+0.3982j
3 −0.6404+0.5674j +13.8638+15.6490j +0.4445+0.2833j
4 −0.7576+0.3976j −35.2718+18.5121j +0.4322+0.1815j
5 −0.8307+0.2048j −13.8110−56.0335j +0.4266+0.0886j
6 −0.8556+0.0000j +65.1416+0.0000j +0.4250+0.0000j
7 −0.8307−0.2048j −13.8110+56.0335j +0.4266−0.0886j
8 −0.7576−0.3976j −35.2718−18.5121j +0.4322−0.1815j
9 −0.6404−0.5674j +13.8638−15.6490j +0.4445−0.2833j
10 −0.4860−0.7041j +4.5041+6.5254j +0.4688−0.3982j
11 −0.3034−0.8000j −1.8273+0.6930j +0.5144−0.5296j
12 −0.1031−0.8493j −0.0286−0.2356j +0.5958−0.6773j
Table 9.1 Values of the coefﬁcients used in example 9.3.
- At this point, it is easy to manipulate the function H(z) to derive various filter realizations (e.g. direct forms, cascade form, etc.). Figure 9.11 illustrates the magnitude response of the filter so designed.
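As a sanity check on Table 9.1, the analog pole locations (9.35) and the impulse-invariance mapping p_k = e^{s_k} of (9.38) can be reproduced directly. A sketch in Python, using Ωc = 0.856 as found above:

```python
import cmath
import math

Wc = 0.856   # cutoff from (9.33)
N = 13       # Butterworth order

# Butterworth poles on a circle of radius Wc, as in (9.35)
s = [Wc * cmath.exp(1j * (math.pi / 2 + (k + 0.5) * math.pi / N))
     for k in range(N)]

# Impulse-invariance mapping (9.38): z-domain poles
p = [cmath.exp(sk) for sk in s]

print(s[0])   # close to -0.1031 + 0.8493j (first row of Table 9.1)
print(p[0])   # close to +0.5958 + 0.6773j
```

All analog poles lie in the left-half s-plane and all digital poles inside the unit circle, confirming that stability is preserved by the mapping.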
9.2.4 Bilinear transformation

Principle: Suppose we are given an analog IIR filter H_a(s). A corresponding digital filter H(z) can be obtained by applying the bilinear transformation (BT), defined as follows:

H(z) = H_a(s) at s = φ(z) ≜ α (1 − z^{−1})/(1 + z^{−1}),   (9.39)

where α is a scaling parameter introduced for flexibility. In the absence of other requirements, it is often set to 1.
Fig. 9.11 Magnitude response of the design of example 9.3.
Example 9.4:

Consider the analog filter

H_a(s) = 1/(s + b),  b > 0   (9.40)

Up to a scaling factor, this is a Butterworth filter of order N = 1 with cutoff frequency Ωc = b. This filter has only one simple pole, at s = −b in the s-plane; it is stable and causal with a lowpass behavior. The corresponding PZ diagram in the s-plane and magnitude response versus Ω are illustrated in Figure 9.12 (left) and Figure 9.13 (top), respectively. Note the 3 dB point at Ω = b in this example.
Applying the BT with α = 1, we obtain

H(z) = 1 / ((1 − z^{−1})/(1 + z^{−1}) + b) = (1/(1 + b)) · (1 + z^{−1}) / (1 − ((1 − b)/(1 + b)) z^{−1})   (9.41)
The resulting digital filter has one simple pole at

z = (1 − b)/(1 + b) ≜ ρ

inside the unit circle; it is stable and causal with a lowpass behavior. The corresponding PZ diagram in the z-plane and magnitude response versus ω are illustrated in Figure 9.12 (right) and Figure 9.13 (bottom), respectively.
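The closed-form result (9.41) is easy to verify numerically. The sketch below (Python, with b = 0.5 as in the figures) evaluates H_a(s) through the bilinear map and compares it with the direct expression for H(z):

```python
import cmath

b = 0.5
alpha = 1.0

def Ha(s):
    return 1.0 / (s + b)          # analog prototype (9.40)

def H(z):
    # Digital filter obtained by the BT: Ha(phi(z)), with phi as in (9.39)
    s = alpha * (1 - 1 / z) / (1 + 1 / z)
    return Ha(s)

rho = (1 - b) / (1 + b)           # pole of the digital filter

# DC check: omega = 0 maps to Omega = 0, so H(1) must equal Ha(0) = 1/b
print(abs(H(1 + 0j)))             # 2.0 for b = 0.5
print(rho)                        # 1/3, inside the unit circle
```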
Properties of the bilinear transformation: The inverse mapping is given by:

s = φ(z) = α (1 − z^{−1})/(1 + z^{−1})  ⟺  z = φ^{−1}(s) = (1 + s/α)/(1 − s/α)   (9.42)
Fig. 9.12 PZ diagrams of the filters used in example 9.4. Left: analog filter (b = 0.5); Right: discrete-time filter.
Fig. 9.13 Magnitude responses of the filters used in example 9.4. Top: analog filter (b = 0.5); Bottom: discrete-time filter.
It can be verified from (9.42) that the BT maps the left-half part of the s-plane into the interior of the unit circle in the z-plane, that is:

s = σ + jΩ with σ < 0  ⟺  |z| < 1   (9.43)
If H_a(s) has a pole at s_k, then H(z) has a corresponding pole at

p_k = φ^{−1}(s_k)   (9.44)

Therefore, if H_a(s) is stable and causal, so is H(z). This is illustrated in Figure 9.14.
Fig. 9.14 Mapping from the s-plane (Re(s) < 0) to the z-plane operated by the inverse bilinear transformation z = φ^{−1}(s).
The BT uniquely maps the jΩ axis in the s-plane onto the unit circle in the z-plane (i.e. a 1-to-1 mapping):

s = jΩ  ⟺  z = (1 + jΩ/α)/(1 − jΩ/α) = A/A*   (9.45)

where A = 1 + jΩ/α. Clearly,

|z| = |A|/|A*| = 1   (9.46)
The relationship between the physical frequency Ω and the normalized frequency ω can be further developed as follows:

s = jΩ  ⟺  z = e^{jω}   (9.47)

where ω = ∠A − ∠A* = 2∠A, that is

ω = 2 tan^{−1}(Ω/α)   (9.48)

or equivalently

Ω = α tan(ω/2)   (9.49)
This is illustrated in Figure 9.15, from which it is clear that the mapping between Ω and ω is nonlinear. In particular, the semi-infinite Ω axis [0, ∞) is compressed into the finite ω interval [0, π]. This nonlinear compression, also called frequency warping, particularly affects the high frequencies. It is important to understand this effect and to take it into account when designing a filter.
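The warping relations (9.48)-(9.49) can be explored with a few lines of code; a small sketch (Python):

```python
import math

alpha = 1.0

def omega_of(Omega):
    return 2.0 * math.atan(Omega / alpha)    # (9.48)

def Omega_of(omega):
    return alpha * math.tan(omega / 2.0)     # (9.49)

# The mapping is one-to-one: round trips recover the original frequency
for Om in (0.1, 1.0, 10.0, 1000.0):
    assert abs(Omega_of(omega_of(Om)) - Om) < 1e-6 * Om

# The semi-infinite axis [0, inf) is compressed into [0, pi):
print(omega_of(10.0), omega_of(1000.0))   # both below pi, approaching it
```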
Fig. 9.15 Illustration of the frequency mapping operated by the bilinear transformation.
Design steps:

(1) Given desired specifications H_d(ω) for the DT filter, find the corresponding specifications for the analog filter, H_ad(Ω):

H_ad(Ω) = H_d(ω) at Ω = α tan(ω/2)   (9.50)

(2) Design an analog IIR filter H_a(Ω) that meets these specifications.

(3) Apply the BT to obtain the desired digital filter:

H(z) = H_a(s) at s = φ(z) ≜ α (1 − z^{−1})/(1 + z^{−1})   (9.51)
Example 9.5:

Apply the BT to an appropriate analog Butterworth filter in order to design a lowpass digital filter H(z) that meets the specifications described in Figure 9.10.

• Step 1: In this example, we set the parameter α to 1. The desired specifications for the analog filter are similar to the above, except for the band-edge frequencies. That is,

1 − δ1 ≤ |H_ad(Ω)| ≤ 1, for 0 ≤ |Ω| ≤ Ω1   (9.52)

|H_ad(Ω)| ≤ δ2, for Ω2 ≤ |Ω| ≤ ∞   (9.53)

where now

Ω1 = tan(ω1/2) ≈ 0.414214
Ω2 = tan(ω2/2) ≈ 0.72654   (9.54)
• Step 2: We need to design an analog Butterworth filter H_a(Ω) that meets the above specifications.

- Proceeding exactly as we did in Example 9.3, we find that the required order of the Butterworth filter is N = 11 (more precisely, N ≥ 10.1756).

- Since aliasing is not an issue with the BT method, either one of the band-edge specifications may be used to determine the cutoff frequency Ωc. For example, using the stopband edge:

|H_a(Ω2)| = δ2  ⇒  1/(1 + (Ω2/Ωc)^{22}) = δ2²  ⇒  Ωc = 0.478019   (9.55)
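Again the numbers can be checked with a few lines. The sketch below (Python) assumes δ1 = 0.05, δ2 = 0.01 and digital band edges ω1 = 0.25π, ω2 = 0.4π, as inferred from the examples in this chapter:

```python
import math

delta1, delta2 = 0.05, 0.01                  # assumed tolerances
w1, w2 = 0.25 * math.pi, 0.40 * math.pi      # assumed digital band edges
W1, W2 = math.tan(w1 / 2), math.tan(w2 / 2)  # prewarped edges (alpha = 1)

alpha_ = 1 / (1 - delta1) ** 2 - 1           # as in (9.31)
beta_ = 1 / delta2 ** 2 - 1                  # as in (9.32)

N_exact = math.log(alpha_ / beta_) / (2 * math.log(W1 / W2))
N = math.ceil(N_exact)

# Cutoff from the stopband condition (9.55): (W2/Wc)^(2N) = beta
Wc = W2 / beta_ ** (1 / (2 * N))

print(W1, W2)       # approximately 0.414214, 0.726543
print(N_exact, N)   # approximately 10.1756, 11
print(Wc)           # approximately 0.478019
```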
- The desired analog Butterworth filter is specified as

H_c(s) = Ωc^{11} / [(s − s_0)(s − s_1) ··· (s − s_10)]   (9.56)

s_k = Ωc e^{j[π/2 + (k + 1/2)π/11]},  k = 0, 1, ..., 10   (9.57)
• Step 3: The desired digital filter is finally obtained by applying the bilinear transformation (with α = 1, i.e. s = (1 − z^{−1})/(1 + z^{−1})):

H(z) = Ωc^{11} / [((1 − z^{−1})/(1 + z^{−1}) − s_0) ((1 − z^{−1})/(1 + z^{−1}) − s_1) ··· ((1 − z^{−1})/(1 + z^{−1}) − s_10)]
     = ...
     = (b_0 + b_1 z^{−1} + ··· + b_11 z^{−11}) / (1 + a_1 z^{−1} + ··· + a_11 z^{−11})   (9.58)

The corresponding values of the coefficients b_i and a_i are shown in Table 9.2.
 k   b_k       a_k
 0   +0.0003   +26.5231
 1   +0.0033   −125.8049
 2   +0.0164   +296.9188
 3   +0.0491   −446.8507
 4   +0.0983   +470.4378
 5   +0.1376   −360.6849
 6   +0.1376   +204.3014
 7   +0.0983   −85.1157
 8   +0.0491   +25.4732
 9   +0.0164   −5.2012
10   +0.0033   +0.6506
11   +0.0003   −0.0377

Table 9.2 Coefficients of the filter designed in example 9.5.
9.3 Design of FIR filters

Design problem: The general FIR system function is obtained by setting A(z) = 1 in (9.1), that is

H(z) = B(z) = ∑_{k=0}^{M} b_k z^{−k}   (9.59)

The relationship between the coefficients b_k and the filter impulse response is

h[n] = b_n for 0 ≤ n ≤ M, and h[n] = 0 otherwise   (9.60)
In the context of FIR filter design, it is convenient to work out the analysis directly in terms of h[n]. Accordingly, we shall express H(z) as:

H(z) = h[0] + h[1] z^{−1} + ··· + h[M] z^{−M}   (9.61)

It is further assumed throughout that h[0] ≠ 0 and h[M] ≠ 0.

The design problem then consists in finding the degree M and the filter coefficients {h[k]} in (9.61) such that the resulting frequency response approximates a desired response H_d(ω), that is,

H(ω) ≈ H_d(ω)
In the design of FIR filters:

• stability is not an issue, since FIR filters are always stable;
• considerable emphasis is usually given to the issue of linear phase (one of the main reasons for selecting an FIR filter instead of an IIR filter).

In this section, we first review a classification of GLP FIR systems, then explain the principles of design by windowing and design by minimax optimization.
9.3.1 Classification of GLP FIR filters

We recall the following important definition and result from Section 8.3.3:

Definition: A discrete-time LTI system is GLP iff

H(ω) = A(ω) e^{−j(αω−β)}   (9.62)

where A(ω), α and β are real-valued.

Property: A real FIR filter with system function H(z) as in (9.61) is GLP if and only if

h[k] = ε h[M − k],  k = 0, 1, ..., M   (9.63)

where ε is either equal to +1 or −1.
Example 9.6:

Consider the FIR filter with impulse response given by

h[n] = {1, 0, −1, 0, 1, 0, −1}   (9.64)

Note that here, h[n] = −h[M − n] with M = 6 and ε = −1. Let us verify that the frequency response H(ω) indeed satisfies the condition (9.62):

H(e^{jω}) = 1 − e^{−j2ω} + e^{−j4ω} − e^{−j6ω}
          = e^{−j3ω} (e^{j3ω} − e^{jω} + e^{−jω} − e^{−j3ω})
          = 2j e^{−j3ω} (sin(3ω) − sin(ω))
          = A(ω) e^{−j(αω−β)}

where we define the real-valued quantities

A(ω) = 2 sin(ω) − 2 sin(3ω),  α = 3,  β = 3π/2

The frequency response is illustrated in Figure 9.16.
Fig. 9.16 Example frequency response of a generalized linear-phase filter.
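The factorization used in Example 9.6 can be checked numerically. The sketch below (Python) evaluates the DTFT of h[n] directly and compares it with the closed form 2j e^{−j3ω}(sin 3ω − sin ω):

```python
import cmath
import math

h = [1, 0, -1, 0, 1, 0, -1]   # impulse response (9.64), M = 6

def dtft(h, w):
    # Direct evaluation of H(e^{jw}) = sum_n h[n] e^{-jwn}
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

for w in (0.3, 1.0, 2.5):
    direct = dtft(h, w)
    closed = 2j * cmath.exp(-3j * w) * (math.sin(3 * w) - math.sin(w))
    assert abs(direct - closed) < 1e-12

# Antisymmetry h[n] = -h[M-n] forces zeros at w = 0 and w = pi
print(abs(dtft(h, 0.0)), abs(dtft(h, math.pi)))   # both 0
```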
Classification of GLP FIR filters: Depending on the values of the parameters ε and M, we distinguish 4 types of FIR GLP filters:

          M even     M odd
ε = +1    Type I     Type II
ε = −1    Type III   Type IV

A good understanding of the basic frequency characteristics of these four filter types is important for practical FIR filter design. Indeed, not all filter types, as listed above, can be used, say, for the design of a lowpass or highpass filter. This issue is further explored below:
• Type I (ε = +1, M even):

H(ω) = e^{−jωM/2} { ∑_k α_k cos(ωk) }   (9.65)

• Type II (ε = +1, M odd):

H(ω) = e^{−jωM/2} { ∑_k β_k cos(ω(k − 1/2)) }   (9.66)

- H(ω) = 0 at ω = π
- Cannot be used to design a highpass filter

• Type III (ε = −1, M even):

H(ω) = j e^{−jωM/2} { ∑_k γ_k sin(ωk) }   (9.67)

- H(ω) = 0 at ω = 0 and at ω = π
- Cannot be used to design either a lowpass or a highpass filter

• Type IV (ε = −1, M odd):

H(ω) = j e^{−jωM/2} { ∑_k δ_k sin(ω(k − 1/2)) }   (9.68)

- H(ω) = 0 at ω = 0
- Cannot be used to design a lowpass filter
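These structural zeros are easy to confirm: for real h[n], H(0) = ∑ h[n] and H(π) = ∑ (−1)^n h[n], so the endpoint behavior of each type follows from the symmetry alone. A quick check (Python, with arbitrarily chosen illustrative coefficients):

```python
def H0(h):
    # H(omega) at omega = 0
    return sum(h)

def Hpi(h):
    # H(omega) at omega = pi
    return sum(((-1) ** n) * hn for n, hn in enumerate(h))

type1 = [1, 2, 3, 2, 1]   # M = 4 even,  h[n] =  h[M-n]
type2 = [1, 2, 2, 1]      # M = 3 odd,   h[n] =  h[M-n]
type3 = [1, 0, -1]        # M = 2 even,  h[n] = -h[M-n]
type4 = [1, -1]           # M = 1 odd,   h[n] = -h[M-n]

assert Hpi(type2) == 0                      # Type II: forced zero at pi
assert H0(type3) == 0 and Hpi(type3) == 0   # Type III: zeros at 0 and pi
assert H0(type4) == 0                       # Type IV: forced zero at 0
assert H0(type1) != 0 and Hpi(type1) != 0   # Type I: no forced zeros
```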
9.3.2 Design of FIR filters via windowing

Basic principle: Desired frequency responses H_d(ω) usually correspond to non-causal and unstable filters, with infinite impulse responses h_d[n] extending from −∞ to ∞. Since these filters have finite energy, it can be shown that

|h_d[n]| → 0 as n → ±∞.   (9.69)

Based on these considerations, it is tempting to approximate h_d[n] by an FIR response g[n] in the following way:

g[n] = h_d[n] for |n| ≤ K, and g[n] = 0 otherwise   (9.70)

The FIR g[n] can then be made causal by a simple shift, i.e.

h[n] = g[n − K]   (9.71)
Example 9.7:

Consider an ideal lowpass filter:

H_d(ω) = 1 for |ω| ≤ ω_c = π/2, and H_d(ω) = 0 for ω_c < |ω| ≤ π

The corresponding impulse response is given by

h_d[n] = (ω_c/π) sinc(nω_c/π) = (1/2) sinc(n/2)

These desired characteristics are outlined in Figure 9.17.

Assuming a value of K = 5, the truncated FIR filter g[n] is obtained as

g[n] = h_d[n] for −5 ≤ n ≤ 5, and g[n] = 0 otherwise

Finally, a causal FIR filter with M = 2K = 10 is obtained from g[n] via a simple shift:

h[n] = g[n − 5]

These two impulse responses are illustrated in Figure 9.18. The frequency response of the resulting filter, G(ω), is illustrated in Figure 9.19.
Fig. 9.17 Desired (a) frequency response and (b) impulse response for example 9.7.
Fig. 9.18 (a) Windowed impulse response g[n] and (b) shifted windowed impulse response h[n] for example 9.7.
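Example 9.7 can be carried out in a few lines. The sketch below (Python) builds the truncated, shifted impulse response for ω_c = π/2 and K = 5:

```python
import math

wc = math.pi / 2   # cutoff of the ideal lowpass
K = 5              # truncation half-length; M = 2K = 10

def hd(n):
    # Ideal lowpass impulse response: (wc/pi) * sinc(n*wc/pi)
    if n == 0:
        return wc / math.pi
    return math.sin(wc * n) / (math.pi * n)

# Truncation (9.70); list index i corresponds to n = i - K, so the
# list itself is the shifted, causal response h[n] = g[n - K] of (9.71)
g = [hd(n) for n in range(-K, K + 1)]
h = list(g)

print(h)   # symmetric about n = K = 5, centre tap 0.5
```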
Observation: The above design steps (9.70)–(9.71) can be combined into a single equation

h[n] = w_R[n] h_d[n − M/2]   (9.72)

where

w_R[n] ≜ 1 for 0 ≤ n ≤ M, and w_R[n] ≜ 0 otherwise   (9.73)

According to (9.72), the desired impulse response h_d[n] is first shifted by K = M/2 and then multiplied by w_R[n].
Remarks on the window: The function w_R[n] is called a rectangular window. More generally, in the context of FIR filter design, we define a window as a DT function w[n] such that

w[n] = w[M − n] ≥ 0 for 0 ≤ n ≤ M, and w[n] = 0 otherwise   (9.74)

As we explain later, several windows have been developed and analyzed for the purpose of FIR filter design. These windows usually lead to better properties of the designed filter.
Fig. 9.19 Example frequency response of an FIR filter designed by windowing (refer to example 9.7).
Remarks on the time shift: Note that the same magnitude response |H(ω)| would be obtained if the shift operation in (9.72) were omitted. In practice, the shift operation is used to center h_d[n] within the window interval 0 ≤ n ≤ M, so that symmetries originally present in h_d[n] with respect to n = 0 translate into corresponding symmetries of h[n] with respect to n = K = M/2 after windowing. In this way, a desired response H_d(ω) with the GLP property, that is,

h_d[n] = ε h_d[−n],  n ∈ Z   (9.75)

is mapped into an FIR filter H(ω) with GLP, i.e.

h[n] ≜ w[n] h_d[n − K] = ε h[M − n]   (9.76)

as required in most applications of FIR filtering.
The above approach leads to an FIR filter with M = 2K even. To accommodate the case of M odd, or equivalently K = M/2 non-integer, the following definition of a non-integer shift is used:

h_d[n − K] = (1/2π) ∫_{−π}^{π} {H_d(ω) e^{−jωK}} e^{jωn} dω   (9.77)
Design steps: To approximate a desired frequency response H_d(ω):

• Step (1): Choose the window w[n] and related parameters (including M).

• Step (2): Compute the shifted version of the desired impulse response:

h_d[n − K] = (1/2π) ∫_{−π}^{π} {H_d(ω) e^{−jωK}} e^{jωn} dω   (9.78)

where K = M/2.
• Step (3): Apply the selected window w[n]:

h[n] = w[n] h_d[n − K]   (9.79)
Remarks: Several standard window functions exist that can be used in this method (e.g. Bartlett, Hanning, etc.). For many of these windows, empirical formulas have been developed that can be used as guidelines in the choice of a suitable M. To select the proper window, it is important to understand the effects of, and trade-offs involved in, the windowing operation (9.79) in the frequency domain.
Spectral effects of windowing: Consider the window design formula (9.79), with the shift parameter K set to zero in order to simplify the discussion, i.e.

h[n] = w[n] h_d[n]   (9.80)

Taking the DTFT on both sides, we obtain:

H(ω) = (1/2π) ∫_{−π}^{π} H_d(θ) W(ω − θ) dθ   (9.81)

where

W(ω) = ∑_{n=0}^{M} w[n] e^{−jωn}   (9.82)

is the DTFT of the window function w[n].

Thus, application of the window function w[n] in the time domain is equivalent to periodic convolution with W(ω) in the frequency domain.

For an infinitely long rectangular window, i.e. w[n] = 1 for all n ∈ Z, we have W(ω) = 2π ∑_k δ_c(ω − 2πk), with the result that H(ω) = H_d(ω). In this limiting case, windowing does not introduce any distortion in the desired frequency response.

For a practical finite-length window, i.e. time-limited to 0 ≤ n ≤ M, the DTFT W(ω) is spread around ω = 0 and the convolution (9.81) leads to smearing of the desired response H_d(ω).
Spectral characteristics of the rectangular window: For the rectangular window w_R[n] = 1, 0 ≤ n ≤ M, we find:

W_R(ω) = ∑_{n=0}^{M} e^{−jωn} = e^{−jωM/2} sin(ω(M+1)/2) / sin(ω/2)   (9.83)

The corresponding magnitude spectrum is illustrated below for M = 8.
|W_R(ω)| is characterized by a main lobe, centered at ω = 0, and secondary lobes of lower amplitude, also called sidelobes. For w_R[n], we have:
(Figure: main lobe centered at ω = 0 with first zero at ω = 2π/(M+1); sidelobes at level α = −13 dB.)

- main-lobe width: Δω_m = 4π/(M+1)
- peak sidelobe level: α = −13 dB (independent of M)
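Both figures of merit can be reproduced numerically from (9.83). The sketch below (Python, M = 8) locates the first spectral zero and estimates the peak sidelobe level:

```python
import cmath
import math

M = 8

def WR(w):
    # DTFT of the rectangular window, computed as a direct sum
    return sum(cmath.exp(-1j * w * n) for n in range(M + 1))

# First zero of the main lobe at 2*pi/(M+1)
w0 = 2 * math.pi / (M + 1)
assert abs(WR(w0)) < 1e-12

# Peak sidelobe: scan the first sidelobe region [w0, 2*w0) numerically
peak = max(abs(WR(w0 + k * 1e-4)) for k in range(int(w0 / 1e-4)))
level_dB = 20 * math.log10(peak / abs(WR(0.0)))
print(level_dB)   # roughly -13 dB
```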
Typical LP design with the window method: Figure 9.20 illustrates the typical appearance of the magnitude response of a lowpass FIR filter designed by windowing. Characteristic features include:

• Ripples in |H(ω)|, due to the window sidelobes in the convolution (Gibbs phenomenon);
• δ, the peak approximation error in [0, ω_p] ∪ [ω_s, π], which depends strongly on the peak sidelobe level α of the window;
• Width of the transition band: Δω = ω_s − ω_p < Δω_m. Note that Δω ≠ 0, due to the main-lobe width of the window;
• |H(ω)| ≠ 0 in the stopband ⇒ leakage (also an effect of the sidelobes).
For a rectangular window:

• δ ≈ −21 dB (independent of M)
• Δω ≈ 2π/M

To reduce δ, one must use another type of window (Kaiser, Hanning, Hamming, ...). These other windows have wider main lobes (resulting in wider transition bands) but in general lower sidelobes (resulting in lower ripple).
Fig. 9.20 Typical magnitude response of an FIR filter designed by windowing.
9.3.3 Overview of some standard windows

Windowing trade-offs: Ideally, one would like to use a window function w[n] that has finite duration in the time domain and whose DTFT W(ω) is perfectly concentrated at ω = 0. This way, the application of w[n] to a signal x[n] in the time domain would not introduce spectral smearing, as predicted by (9.81). Unfortunately, such a window function does not exist: it is a fundamental property of the DTFT that a signal with finite duration has a spectrum that extends from −∞ to +∞ in the ω-domain.

For a fixed value of the window length M, it is however possible to vary the window coefficients w[n] so as to trade off main-lobe width for sidelobe level attenuation. In particular, it can be observed that by tapering off the values of the window coefficients near its extremities, it is possible to reduce the sidelobe level α at the expense of increasing the width of the main lobe, Δω_m. This is illustrated in Figure 9.21. We refer to this phenomenon as the fundamental windowing trade-off.
Bartlett or triangular window:

w[n] = (1 − |2n/M − 1|) w_R[n]   (9.84)

Hanning window:

w[n] = (1/2)(1 − cos(2πn/M)) w_R[n]   (9.85)

Hamming window:

w[n] = (0.54 − 0.46 cos(2πn/M)) w_R[n]   (9.86)

Blackman window:

w[n] = (0.42 − 0.5 cos(2πn/M) + 0.08 cos(4πn/M)) w_R[n]   (9.87)
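The Hanning, Hamming and Blackman formulas all follow the same three-term cosine pattern (discussed further below), so they can be generated by one helper; a sketch (Python, M = 16 chosen arbitrarily):

```python
import math

def cosine_window(M, A, B, C):
    # Generic form w[n] = A + B*cos(2*pi*n/M) + C*cos(4*pi*n/M), 0 <= n <= M
    return [A + B * math.cos(2 * math.pi * n / M)
              + C * math.cos(4 * math.pi * n / M) for n in range(M + 1)]

M = 16
hanning  = cosine_window(M, 0.50, -0.50, 0.00)   # (9.85)
hamming  = cosine_window(M, 0.54, -0.46, 0.00)   # (9.86)
blackman = cosine_window(M, 0.42, -0.50, 0.08)   # (9.87)
bartlett = [1 - abs(2 * n / M - 1) for n in range(M + 1)]   # (9.84)

print(hanning[0], hamming[0], blackman[0])   # tapered ends: 0, 0.08, ~0
```

Note how the Hamming window does not taper all the way to zero at the endpoints, while the Hanning and Blackman windows do.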
Fig. 9.21 Illustration of the fundamental windowing trade-off.
A generic form for the Hanning, Hamming and Blackman windows is

w[n] = (A + B cos(2πn/M) + C cos(4πn/M)) w_R[n]

where the coefficients A, B and C determine the window type.
Kaiser window family:

w[n] = (1/I_0(β)) I_0(β √(1 − (2n/M − 1)²)) w_R[n]   (9.88)

where I_0(·) denotes the zeroth-order modified Bessel function of the first kind and β ≥ 0 is an adjustable design parameter. By varying β, it is possible to trade off the sidelobe level α for the main-lobe width Δω_m:

• β = 0 ⇒ rectangular window;
• β > 0 ⇒ α ↓ and Δω_m ↑.
The so-called Kaiser design formulae enable one to find the necessary values of β and M to meet specific lowpass filter requirements. Let Δω denote the transition band of the filter, i.e.

Δω = ω_s − ω_p   (9.89)

and let A denote the stopband attenuation in positive dB:

A = −20 log₁₀ δ   (9.90)
Fig. 9.22 Time-domain shapes and magnitude spectra of the rectangular, Bartlett, Hanning, Hamming and Blackman windows.
The following formulae² may be used to determine β and M:

M = (A − 8) / (2.285 Δω)   (9.91)

β = 0.1102(A − 8.7),                           A > 50
β = 0.5842(A − 21)^{0.4} + 0.07886(A − 21),    21 ≤ A ≤ 50
β = 0.0,                                       A < 21
                                                    (9.92)

² J. F. Kaiser, "Nonrecursive digital filter design using the I₀-sinh window function," IEEE Int. Symp. on Circuits and Systems, April 1974, pp. 20–23.
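The design formulae (9.91)-(9.92) translate directly into code; a sketch (Python), with an illustrative specification of A = 60 dB and Δω = 0.2π:

```python
import math

def kaiser_params(A, delta_w):
    """Estimate the Kaiser parameter beta and filter order M from the
    stopband attenuation A (in dB) and transition width delta_w (rad)."""
    M = (A - 8) / (2.285 * delta_w)   # (9.91)
    if A > 50:
        beta = 0.1102 * (A - 8.7)
    elif A >= 21:
        beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
    else:
        beta = 0.0                    # (9.92)
    return beta, M

beta, M = kaiser_params(A=60.0, delta_w=0.2 * math.pi)
print(beta, math.ceil(M))   # approximately 5.653 and 37
```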
Fig. 9.23 Kaiser windows and their magnitude spectra for β = 0, 3 and 6.
9.3.4 Parks-McClellan method

Introduction: Consider a Type-I GLP FIR filter (i.e., M even and ε = +1):

H(ω) = e^{−jωM/2} A(ω)   (9.93)

A(ω) = ∑_{k=0}^{L} α_k cos(kω)   (9.94)

where L = M/2, α_0 = h[L], and α_k = 2h[L − k] for k = 1, ..., L. Let H_d(ω) be the desired frequency response (e.g., an ideal lowpass filter), specified in terms of tolerances in the frequency bands of interest.

The Parks-McClellan (PM) method of FIR filter design attempts to find the best set of coefficients α_k, in the sense of minimizing the maximum weighted approximation error

E(ω) ≜ W(ω)[H_d(ω) − A(ω)]   (9.95)
over the frequency bands of interest (e.g., passband and stopband). Here W(ω) ≥ 0 is a frequency weighting introduced to penalize the errors made in different bands differently.
Specifying the desired response: The desired frequency response H_d(ω) must be accompanied by a tolerance scheme, or template, which specifies:

- a set of disjoint frequency bands B_i = [ω_{i1}, ω_{i2}];
- for each band B_i, a corresponding tolerance δ_i, so that

|H_d(ω) − A(ω)| ≤ δ_i,  ω ∈ B_i   (9.96)

The frequency intervals between the bands B_i are called transition bands. The behavior of A(ω) in the transition bands cannot be specified.
Example 9.8:

For example, in the case of a lowpass design:

- Passband: B_1 = [0, ω_p] with tolerance δ_1, i.e.

|1 − A(ω)| ≤ δ_1,  0 ≤ ω ≤ ω_p   (9.97)

- Stopband: B_2 = [ω_s, π] with tolerance δ_2, i.e.

|A(ω)| ≤ δ_2,  ω_s ≤ ω ≤ π   (9.98)

- Transition band: [ω_p, ω_s]
The weighting mechanism: In the Parks-McClellan method, a weighting function W(ω) ≥ 0 is applied to the approximation error H_d(ω) − A(ω) prior to its optimization, resulting in a weighted error

E(ω) = W(ω)[H_d(ω) − A(ω)].   (9.99)

The purpose of W(ω) is to scale the error H_d(ω) − A(ω), so that an error in a band B_i with a small tolerance δ_i is penalized more than an error in a band B_j with a large tolerance δ_j > δ_i. This may be achieved by constructing W(ω) in the following way:

W(ω) = 1/δ_i,  ω ∈ B_i   (9.100)

so that the cost W(ω) is larger in those bands B_i where δ_i is smaller. In this way, the non-uniform tolerance specifications δ_i over the individual bands B_i translate into a single tolerance specification over all the bands of interest, i.e.

|E(ω)| ≤ 1,  ω ∈ B_1 ∪ B_2 ∪ ...   (9.101)
Properties of the minimax solution: In the Parks-McClellan method, one seeks an optimal filter H(ω) that minimizes the maximum approximation error over the frequency bands of interest. Specifically, we seek

min over all α_k of { max_{ω∈B} |E(ω)| }   (9.102)

where

E(ω) = W(ω)[H_d(ω) − A(ω)]   (9.103)

A(ω) = ∑_{k=0}^{L} α_k cos(kω)   (9.104)

B = B_1 ∪ B_2 ∪ ...   (9.105)
The optimal solution A_o(ω) satisfies the following properties:

- Alternation theorem: A_o(ω) is the unique solution to the above minimax problem iff it has at least L + 2 alternations over B.
- That is, there must exist at least L + 2 frequency points ω_i ∈ B such that ω_1 < ω_2 < ··· < ω_{L+2} and

E(ω_i) = −E(ω_{i+1}) = ±E_max   (9.106)

E_max ≜ max_{ω∈B} |E(ω)|   (9.107)

- All interior points of B where A(ω) has zero slope correspond to alternations.

This is illustrated in Figure 9.24 for a lowpass design.
Fig. 9.24 Illustration of the characteristics of a lowpass filter designed using the Parks-McClellan method (alternations of the weighted and unweighted errors).
Remarks: Because of the alternations, these filters are also called equiripple. Due to the use of the weighting function W(ω), an equiripple behavior of E(ω) over the band B with amplitude E_max translates into an equiripple behavior of H_d(ω) − A(ω) with amplitude δ_i E_max over band B_i.

Observe that the initial specifications in (9.96) will be satisfied if the final error level E_max ≤ 1. One practical problem with the PM method is that the required value of M to achieve this is not known a priori. In other words, the method only ensures that the relative sizes of the desired tolerances are attained (i.e. the ratios δ_i/δ_j).
In the case of a lowpass design, the following empirical formula can be used to obtain an initial guess for M:

M ≈ (−10 log₁₀(δ_1 δ_2) − 13) / (2.324(ω_s − ω_p))   (9.108)
Although we have presented the method for Type-I GLP FIR filters, it is also applicable, with appropriate modifications, to Type II, III and IV GLP FIR filters. Generalizations to nonlinear-phase complex FIR filters also exist.
Use of computer programs: Computer programs are available for carrying out the minimax optimization numerically. They are based on the so-called Remez exchange algorithm. In the Matlab signal processing toolbox, the function remez can be used to design minimax GLP FIR filters of Types I, II, III and IV. The function cremez can be used to design complex-valued FIR filters with arbitrary phase.
Example 9.9:

Suppose we want to design a lowpass filter with

- Passband: |1 − H(ω)| ≤ δ_1 = 0.05, 0 ≤ ω ≤ ω_p = 0.4π
- Stopband: |H(ω)| ≤ δ_2 = 0.01, 0.6π = ω_s ≤ ω ≤ π

• Initial guess for M:

M ≈ (−10 log₁₀(0.05 × 0.01) − 13) / (2.324(0.6π − 0.4π)) ≈ 10.24   (9.109)

• Set M = 11 and use the commands:

F = [0, 0.4, 0.6, 1.0]
Hd = [1, 1, 0, 0]
W = [1/0.05, 1/0.01]
B = remez(M, F, Hd, W)   (9.110)

The output vector B will contain the desired filter coefficients, i.e.

B = [h[0], ..., h[M]]   (9.111)

The corresponding impulse response and magnitude response (M = 11) are shown in Figure 9.25.
• The filter so designed has tolerances δ′_1 and δ′_2 that are different from δ_1 and δ_2. However:

δ′_1/δ′_2 = δ_1/δ_2   (9.112)
• By trial and error, we find that the value of M necessary to meet the specifications is M = 15. The corresponding impulse response and magnitude response are shown in Figure 9.26.
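The MATLAB call above has a direct equivalent in SciPy (assuming `scipy` is available; `scipy.signal.remez` takes numtaps = M + 1 and, with fs = 2, uses the same normalized band edges as MATLAB):

```python
from scipy.signal import remez

M = 15   # final order found by trial and error
b = remez(
    numtaps=M + 1,
    bands=[0, 0.4, 0.6, 1.0],      # band edges, normalized to fs/2 = 1
    desired=[1, 0],                # one desired value per band
    weight=[1 / 0.05, 1 / 0.01],   # 1/delta_i weighting, as in (9.100)
    fs=2,
)
print(b)   # h[0], ..., h[M]; symmetric (GLP, since h[k] = h[M-k])
```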
Fig. 9.25 Illustration of the trial design in example 9.9 (M = 11).
Fig. 9.26 Illustration of the final design in example 9.9 (M = 15).
Chapter 10

Quantization effects

Introduction

Practical DSP systems use finite-precision (FP) number representations and arithmetic. The implementation in FP of a given LTI system with rational system function

H(z) = ∑_{k=0}^{M} b_k z^{−k} / (1 − ∑_{k=1}^{M} a_k z^{−k})

leads to deviations from its theoretically predicted performance:
• Due to quantization of the system coefficients a_k and b_k:

H(ω) ⇒ Ĥ(ω) ≠ H(ω)   (10.1)

• Due to round-off errors in finite-precision arithmetic, there is quantization noise at the system output.
• Other round-off effects, such as limit cycles, etc.

Different structures or realizations (e.g. direct form, cascade, etc.) will be affected differently by these effects.
10.1 Binary number representation and arithmetic

In a binary number representation system, a real number x ∈ R is represented by a sequence of binary digits (i.e. bits) b_i ∈ {0, 1}:

x ⟺ ... b_2 b_1 b_0 . b_{−1} ...

In practice, the number of bits available for the representation of numbers is finite (e.g. 16, 32, etc.). This means that only certain numbers x ∈ R can be represented exactly.

Several approaches are available for the representation of real numbers with finite sequences of bits, some of which are discussed below. The choice of a specific type of representation in a given application involves the consideration of several factors, including the desired accuracy and system costs.
10.1.1 Binary representations

Signed fixed-point representations: In this type of representation, a real number x in the range [−2^K, 2^K] is represented as

x = b_K b_{K−1} ··· b_0 . b_{−1} ··· b_{−L},  b_i ∈ {0, 1}   (10.2)

where b_K is called the sign bit or sometimes the most significant bit (MSB), b_{−L} is called the least significant bit (LSB), the K bits b_{K−1} ··· b_0 represent the integer part of x, and the L bits b_{−1} ··· b_{−L} represent the fractional part of x. The binary dot "." is used to separate the fractional part from the integer part. It is convenient to define B ≜ K + L, so that the total number of bits is B + 1, where the 1 reflects the presence of a sign bit.

Below, we discuss two special types of signed fixed-point representations: the sign-and-magnitude (SM) and the 2's complement (2C) representations.
SM representation: The simplest type of fixed-point representation is the SM representation. Its general format is:

x = (b_K ··· b_0 . b_{−1} ··· b_{−L})_SM = (−1)^{b_K} ∑_{i=−L}^{K−1} b_i 2^i   (10.3)
2's complement representation: The most common type of signed fixed-point representation is the 2's complement (2C) representation. The general format is

x = (b_K ··· b_0 . b_{−1} ··· b_{−L})_2C = −b_K 2^K + ∑_{i=−L}^{K−1} b_i 2^i   (10.4)
Example 10.1:

Using (10.3) and (10.4), we find that

(0.11)_SM = (0.11)_2C = 1·2^{−1} + 1·2^{−2} = 3/4

while

(1.11)_SM = (−1) × 3/4 = −3/4
(1.11)_2C = (−1) + 3/4 = −1/4
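The two mappings are easy to mechanize. The sketch below (Python) decodes a bit list [b_K, ..., b_0, b_{−1}, ..., b_{−L}] under each convention, following (10.3) and (10.4):

```python
def decode_sm(bits, L):
    """Sign-and-magnitude value of [bK, ..., b0, b-1, ..., b-L], per (10.3)."""
    K = len(bits) - L - 1
    mag = sum(b * 2 ** (K - 1 - i) for i, b in enumerate(bits[1:]))
    return (-1) ** bits[0] * mag

def decode_2c(bits, L):
    """Two's-complement value of the same bit list, per (10.4)."""
    K = len(bits) - L - 1
    mag = sum(b * 2 ** (K - 1 - i) for i, b in enumerate(bits[1:]))
    return -bits[0] * 2 ** K + mag

print(decode_sm([0, 1, 1], L=2), decode_2c([0, 1, 1], L=2))   # 0.75 0.75
print(decode_sm([1, 1, 1], L=2), decode_2c([1, 1, 1], L=2))   # -0.75 -0.25
```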
Observation: For a fixed-point representation, the distance between two consecutive numbers, also called the resolution or step size, is a constant equal to

Δ = 2^{−L}

Thus, the available dynamic range [−2^K, 2^K] is quantized into 2^{B+1} uniformly spaced representation levels, where B = K + L. In scientific applications characterized by a large dynamic range, the use of such a uniform number representation grid is not the most efficient way of using the available B + 1 bits.
Floating-point representations: In this type of representation, a number x ∈ R is represented as

x = (b_0 b_1 ... b_l)(b_{l+1} ... b_{l+k}) = M · 2^E   (10.5)

where the first bit field encodes a signed mantissa M and the second an exponent E. Several formats exist for the binary encoding of M and E.
Example 10.2:

The IEEE 754 single-precision floating-point format uses 32 bits in total: 1 is reserved for the sign, 23 for the mantissa, and 8 for the exponent (see Figure 10.1). The corresponding binary-to-decimal mapping is

x = (−1)^S (0.1M) × 2^{E−126}

Fig. 10.1 Floating-point representation as per the IEEE 754 format (1 sign bit S, 8 exponent bits E, 23 mantissa bits M).
Observations: Floating-point representations offer several advantages over fixed-point:

• larger dynamic range;
• variable resolution (depending on the value of E).

However, floating-point hardware is usually more expensive, so fixed-point is often used when system cost is a primary factor.

In the sequel, we focus on fixed-point representations with a small number of bits (say 16 or less), where quantization effects are the most significant. For simplicity, the emphasis is given to fractional representations, for which

K = 0 and L = B.

However, generalizations to mixed (i.e. integer/fractional) representations are immediate.
10.1.2 Quantization errors in fixed-point arithmetic

Sources of errors: The two basic arithmetic operations in any DSP system are multiplication and addition, as exemplified by

y[n] = 0.5(x[n] + x[n−1]).

Because only a finite number of bits is available to represent the result of these operations, internal errors will occur. Multiplication and addition in fixed-point lead to very different types of errors.
Multiplication: Consider two numbers, say a and b, each expressed in fixed-point fractional representation with (B+1)-bit words:

a = α_0 . α_{−1} ··· α_{−B}
b = β_0 . β_{−1} ··· β_{−B}

Multiplication of these two (B+1)-bit words in general leads to a (B′+1)-bit word, where here B′ = 2B. This is illustrated below:

0.101 × 0.001 = 0.000101

In practice, only B + 1 bits are available to store the result, generally leading to a so-called round-off error. There are two basic ways in which a number c = a · b with B′ + 1 bits can be represented with B + 1 bits, where B < B′:
• Truncation: The truncation of x with B
/
+1 bits into ˆ x with B+1 bits, where B < B
/
, is expressed as
follows:
x = (b
0
.b
−1
. . . b
−B
/ ) → ˆ x = Q
tr
[x] = (b
0
.b
−1
. . . b
−B
). (10.6)
In other words, the unwanted bits are simply dropped.
• Rounding: The operation of rounding may be expressed as follows:
x = (b_0 . b_{−1} ... b_{−B'}) → x̂ = Q_rn[x] = (b'_0 . b'_{−1} ... b'_{−B}), (10.7)
so that |x − x̂| is minimized. That is, the available number x̂ that is closest to x is used for its representation. The bits b'_i (i = 0, ..., B) may differ from the original bits b_i.
Example 10.3:
Consider the representation of the 5-bit number x = (0.0011) = 3/16 using only a 3-bit word (i.e. B' = 4, B = 2). For truncation, we have x → x̂ = (0.00) = 0. For rounding, we have x → x̂ = (0.01) = 1/4.
Quantization error: Both rounding and truncation lead to a so-called quantization error, defined as
e ≜ Q[x] − x (10.8)
The value of e depends on the type of representation being used (e.g. sign-magnitude, 2C), as well as on whether rounding or truncation is being applied. For the 2C representation (most often used), it can generally be verified that for truncation:
−∆ < e ≤ 0, where ∆ ≜ 2^{−B}
while for rounding:
−∆/2 < e ≤ ∆/2
c (B. Champagne & F. Labeau Compiled November 5, 2004
Example 10.4:
Consider the truncation of (1.0111)_{2C} to 2+1 bits (i.e. B = 2):
x = (1.0111)_{2C} = −9/16 → x̂ = Q_tr[x] = (1.01)_{2C} = −3/4
e = x̂ − x = −3/4 + 9/16 = −3/16
Now consider the case of rounding:
x = (1.0111)_{2C} = −9/16 → x̂ = Q_rn[x] = (1.10)_{2C} = −1/2
e = x̂ − x = −1/2 + 9/16 = 1/16
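The truncation and rounding rules above can be checked numerically. The following Python sketch (not part of the original notes) emulates the two quantizers using floating point; the function names are illustrative assumptions.

```python
import math

def q_truncate(x, B):
    """Truncate a 2C fraction to B fractional bits (drop low-order bits).

    On the underlying value this rounds toward -infinity, so the error
    satisfies -2**-B < e <= 0."""
    return math.floor(x * 2**B) / 2**B

def q_round(x, B):
    """Round to the nearest multiple of 2**-B (error in (-2**-B/2, 2**-B/2])."""
    return math.floor(x * 2**B + 0.5) / 2**B

x = -9/16                       # (1.0111) in 4-bit fractional 2C
print(q_truncate(x, 2))         # -0.75  (= -3/4, as in Example 10.4)
print(q_round(x, 2))            # -0.5   (= -1/2)
```

Running the same quantizers on x = 3/16 reproduces Example 10.3 (truncation gives 0, rounding gives 1/4).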
Addition: Consider again two numbers, say a and b, each expressed in fixed-point fractional representation with B+1 bits. In theory, the sum of these two numbers may require up to B+2 bits for its correct representation. That is:
c = a + b = (c_1 c_0 . c_{−1} ... c_{−B})
where c_1 is a sign bit and the binary point has been shifted by one bit to the right. In practice, only B+1 bits are available to store the result c, leading to a type of error called overflow.
Example 10.5:
Assuming 2C representation, we have:
(1.01) + (1.10) = −3/4 − 1/2 = −5/4
which cannot be represented exactly in a (2+1)-bit fractional 2C format. In a practical DSP system, the above operation would be realized using modulo-2 addition, leading to an erroneous result of 3/4 (i.e. the left carry bit being lost).
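The wraparound behaviour in Example 10.5 can be reproduced with integer arithmetic. A Python sketch (illustrative, not from the notes):

```python
def add_2c(a, b, B):
    """Add two fractional 2C values with B fractional bits; the carry out of
    the sign position is lost (modulo-2 wraparound)."""
    n = B + 1                          # word length: sign bit + B fractional bits
    s = (round(a * 2**B) + round(b * 2**B)) % (1 << n)
    if s >= 1 << (n - 1):              # reinterpret the n-bit pattern as signed
        s -= 1 << n
    return s / 2**B

print(add_2c(-3/4, -1/2, 2))   # 0.75 -- the true sum -5/4 wraps around to +3/4
```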
In conventional applications of digital filters, overflow should by all means be avoided, as it introduces significant distortions in the system output. In practice, two basic means are available to avoid overflow:
• Scaling of signals at various points in a DSP system (corresponds to a left shift of the binary point; resolution is lost)
• Use of temporary guard bits to the left of the binary point.
10.2 Effects of coefﬁcient quantization
In our previous study of ﬁlter structures and design techniques, the ﬁlter coefﬁcients were implicitly assumed
to be represented in inﬁnite precision. In practice, the ﬁlter coefﬁcients must be quantized to a ﬁnite precision
representation (e.g. ﬁxedpoint, 16 bits) prior to the system’s implementation. For example, consider a DFII
realization of an IIR ﬁlter with system function
H(z) = (∑_{k=0}^{M} b_k z^{−k}) / (1 − ∑_{k=1}^{N} a_k z^{−k}) (10.9)
Due to quantization of the filter coefficients a_k → â_k and b_k → b̂_k, the corresponding system function is no longer H(z) but instead
Ĥ(z) = (∑_{k=0}^{M} b̂_k z^{−k}) / (1 − ∑_{k=1}^{N} â_k z^{−k}) = H(z) + ∆H(z) (10.10)
Equivalently, one may think of the poles p_k and zeros z_k of H(z) being displaced to new locations:
p̂_k = p_k + ∆p_k (10.11)
ẑ_k = z_k + ∆z_k (10.12)
Small changes in the ﬁlter coefﬁcients due to quantization may lead to very signiﬁcant displacements of the
poles and zeros that dramatically affect H(z), i.e. very large ∆H(z).
It should be clear that quantization effects may be different for two different filter realizations with the same system function H(z). Indeed, even though the two realizations have the same transfer characteristics, the actual system coefficients in the two realizations may be different, leading to different errors in the presence of quantization.
In this section, we investigate such quantization effects.
10.2.1 Sensitivity analysis for direct form IIR ﬁlters
Consider an IIR ﬁlter with corresponding system function
H(z) = B(z)/A(z) (10.13)
where B(z) and A(z) are polynomials in z^{−1}:
B(z) = b_0 + b_1 z^{−1} + ··· + b_M z^{−M} = b_0 ∏_{k=1}^{M} (1 − z_k z^{−1}) (10.14)
A(z) = 1 − a_1 z^{−1} − ··· − a_N z^{−N} = ∏_{k=1}^{N} (1 − p_k z^{−1}) (10.15)
Note that the filter coefficients b_k and a_k uniquely define the poles and zeros of the system, and vice versa. Thus, any change in the a_k's and b_k's will be accompanied by corresponding changes in the p_k's and z_k's.
Below, we investigate the sensitivity of the pole-zero locations to small quantization errors in the a_k's and b_k's via a first-order perturbation analysis. Note that in a direct form realization of H(z), the filter coefficients are precisely the a_k's and b_k's. Thus the results of this analysis will be immediately applicable to IIR filters realized in DFI, DFII and DFII-transposed.
PZ sensitivity: Suppose the filter coefficients in (10.13)–(10.15) are quantized to
â_k = Q[a_k] = a_k + ∆a_k (10.16)
b̂_k = Q[b_k] = b_k + ∆b_k (10.17)
The resulting system function is now Ĥ(z) = B̂(z)/Â(z), where
B̂(z) = b̂_0 + b̂_1 z^{−1} + ··· + b̂_M z^{−M} = b̂_0 ∏_{k=1}^{M} (1 − ẑ_k z^{−1}) (10.18)
Â(z) = 1 − â_1 z^{−1} − ··· − â_N z^{−N} = ∏_{k=1}^{N} (1 − p̂_k z^{−1}) (10.19)
Using the chain rule of calculus, it can be shown that, to first order in the ∆a_l, the poles of the quantized system become
p̂_k = p_k + ∆p_k (10.20)
∆p_k = (∑_{l=1}^{N} p_k^{N−l} ∆a_l) / (∏_{l≠k} (p_k − p_l)) (10.21)
A similar formula can be derived for the variation in the zeros of the quantized system as a function of ∆b_k.
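Formula (10.21) can be checked against the exact root displacement. The Python sketch below does this for a hypothetical 2nd order denominator (the coefficient values and perturbations are illustrative assumptions, not from the notes).

```python
import numpy as np

a1, a2 = 0.9, -0.2                 # hypothetical coefficients: A(z) = 1 - a1 z^-1 - a2 z^-2
da1, da2 = 1e-3, -1e-3             # small quantization-like perturbations of a1, a2

poles = sorted(np.roots([1.0, -a1, -a2]).real)                    # exact poles: 0.4 and 0.5
poles_hat = sorted(np.roots([1.0, -(a1 + da1), -(a2 + da2)]).real)

for pk, pk_hat, other in zip(poles, poles_hat, reversed(poles)):
    # (10.21) with N = 2: sum_l p_k^(N-l) da_l divided by prod_{l != k} (p_k - p_l)
    dp_pred = (pk * da1 + da2) / (pk - other)
    print(f"pole {pk:.2f}: predicted dp = {dp_pred:+.5f}, actual dp = {pk_hat - pk:+.5f}")
```

The predicted and actual displacements agree to first order; note how the small denominator p_k − p_l already amplifies the coefficient error, which is the pole-cluster effect discussed next.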
Remarks: For a system with a cluster of two or more closely spaced poles, we have
p_k ≈ p_l for some l ≠ k ⇒ ∆p_k very large ⇒ ∆H(z) very large (10.22)
The problem posed by a cluster of poles becomes worse as N increases (higher probability of having closely spaced poles). To avoid the pole cluster problem with IIR filters:
• do not use direct form realizations for large N;
• instead, use cascade or parallel realizations with low-order sections (usually in direct form II).
As indicated above, the quantization effects on the zeros of the system are described by similar equations.
• However, the zeros z_k are usually not as clustered as the poles p_k in practical filter designs.
• Also, ∆H(z) is typically less sensitive to ∆z_k than to ∆p_k.
10.2.2 Poles of quantized 2nd order system
Intro: Consider a 2nd order all-pole filter section with
H(z) = 1 / (1 − 2α_1 z^{−1} + α_2 z^{−2}) (10.23)
Observe that the corresponding poles, say p_1 and p_2, are functions of the coefficients α_1 and α_2. Thus, if α_1 and α_2 are quantized to B+1 bits, only a finite number of pole locations are possible. Furthermore, only a subset of these locations will correspond to a stable and causal DT system.
System poles: The poles of H(z) in (10.23) are obtained by solving for the roots of z² − 2α_1 z + α_2 = 0, or equivalently:
z = α_1 ± √∆, where ∆ = α_1² − α_2
We need to distinguish three cases:
• ∆ = 0: Real double pole at p_1 = p_2 = α_1
• ∆ > 0: Distinct real poles at p_{1,2} = α_1 ± √∆
• ∆ < 0: Complex conjugate poles at p_{1,2} = α_1 ± j√|∆| = r e^{±jθ}, where the parameters r and θ satisfy
r = √α_2, r cos θ = α_1 (10.24)
Coefficient quantization: Suppose that a (B+1)-bit, fractional sign-magnitude representation is used for the storage of the filter coefficients α_1 and α_2, i.e.:
α_i ∈ {0 . b_1 ··· b_B : b_i = 0 or 1}
Accordingly, only certain locations are possible for the corresponding poles p_1 and p_2. This is illustrated in Figure 10.2 in the case B = 3, where we make the following observations:
• The top part shows the quantized (α_1, α_2)-plane, where circles correspond to ∆ < 0 (complex conjugate pole locations), bullets to ∆ = 0, and x's to ∆ > 0.
• The bottom part illustrates the corresponding pole locations in the z-plane in the cases ∆ < 0 and ∆ = 0 only (distinct real poles not shown, to simplify the presentation).
• Each circle in the top figure corresponds to a pair of complex conjugate locations in the bottom figure (see e.g. the circles labelled 1, 2, etc.).
• According to (10.24), the complex conjugate poles are at the intersections of circles with radius √α_2 and vertical lines with abscissas r cos θ = α_1. That is, the distance from the origin and the projection on the real axis are quantized in this scheme.
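The attainable pole locations can be enumerated directly. The sketch below (a Python illustration, not from the notes) lists all complex-conjugate pole positions for non-negative quantized coefficients with B = 3 and confirms that each lies on a circle of radius √α_2.

```python
import numpy as np

B = 3
grid = np.arange(0, 2**B) / 2**B               # quantized non-negative coefficients 0.b1b2b3
locations = []
for a1 in grid:
    for a2 in grid:
        if a1**2 - a2 < 0:                      # Delta < 0: complex-conjugate pole pair
            locations.append(complex(a1, np.sqrt(a2 - a1**2)))

# every such pole lies on a circle of radius sqrt(alpha_2), per (10.24)
assert all(any(abs(abs(p) - np.sqrt(a2)) < 1e-12 for a2 in grid) for p in locations)
print(len(locations), "distinct upper-half-plane pole locations for B = 3")
```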
10.3 Quantization noise in digital ﬁlters
In a digital ﬁlter, each multiplier introduces a roundoff error signal, also known as quantization noise. The
error signal from each multiplier propagates through the system, sometimes in a recursive manner (i.e. via
feedback loops). At the system output, the roundoff errors contributed from each multiplier build up in a
cumulative way, as represented by a total output quantization noise signal.
In this Section, we present a systematic approach for the evaluation of the total quantization noise power at
the output of a digitally implemented LTI system. This approach is characterized by the use of an equivalent
random linear model for (nonlinear) digital ﬁlters. Basic properties of LTI systems excited by random
sequences are invoked in the derivation.
Fig. 10.2 Illustration of coefficient quantization in a second-order system (top: quantized (α_1, α_2)-plane; bottom: corresponding pole locations in the z-plane).
Nonlinear model of digital filter: Let H denote an LTI filter structure with L multiplicative branches. Let u_i[n], α_i and v_i[n] respectively denote the input, multiplicative gain and output of the i-th branch (i = 1, ..., L), as shown in Figure 10.3 (left).
Fig. 10.3 Nonlinear model of digital multiplier.
Let Ĥ denote the (B+1)-bit fixed-point implementation of H, where it is assumed that each multiplier output is rounded to B+1 bits:
v_i[n] = α_i u_i[n]  (2B+1 bits)  ⟹  v̂_i[n] = Q_r(α_i u_i[n])  (B+1 bits) (10.25)
In the deterministic nonlinear model of Ĥ, each multiplicative branch α_i in H is modified as shown in Figure 10.3 (right).
Equivalent linear-noise model: In terms of the quantization error e_i[n], we have
v̂_i[n] = Q_r(α_i u_i[n]) = α_i u_i[n] + e_i[n] (10.26)
In the equivalent linear-noise model of Ĥ, each multiplicative branch α_i with quantizer Q_r(·) is replaced as shown in Figure 10.4.
Fig. 10.4 Linear-noise model of digital multiplier.
The quantization error signal e_i[n] (n ∈ Z) is modelled as a white noise sequence, uncorrelated with the system input x[n] and with the other quantization error signals e_j[n] for j ≠ i. For a fixed n, we assume that e_i[n] is uniformly distributed within the interval ±(1/2)2^{−B}, so that
• its mean value (or DC value) is zero:
E{e_i[n]} = 0 (10.27)
• its variance (or noise power) is given by:
E{e_i[n]²} = 2^{−2B}/12 ≜ σ_e² (10.28)
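This uniform-error model is easy to check empirically. The Python sketch below (illustrative; B = 8 and the input distribution are assumptions) compares the measured variance of rounding errors with 2^{−2B}/12.

```python
import numpy as np

rng = np.random.default_rng(0)
B = 8                                     # word-length assumption for the experiment
u = rng.uniform(-1, 1, 200_000)           # hypothetical exact multiplier outputs
e = np.round(u * 2**B) / 2**B - u         # rounding error e = Q_r(u) - u

print(e.var(), 2.0**(-2 * B) / 12)        # empirical variance vs. 2^(-2B)/12
```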
Property 1: Consider an LTI system with system function K(z). Suppose that a zero-mean white noise sequence e[n] with variance σ_e² is applied to its input, and let f[n] denote the corresponding output (see Figure 10.5). Then the sequence f[n] has zero mean and its variance σ_f² is given by
σ_f² = σ_e² ∑_{n=−∞}^{∞} |k[n]|² = (σ_e²/2π) ∫_{−π}^{π} |K(ω)|² dω (10.29)
where k[n] denotes the impulse response of the LTI system K(z).
Fig. 10.5 Illustration of LTI system K(z) with stochastic input e[n].
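Property 1 can be verified by simulation. The sketch below (Python, illustrative) filters unit-variance white noise through the hypothetical one-pole system K(z) = 1/(1 − 0.5 z^{−1}), whose impulse-response energy is ∑ 0.25^n = 4/3.

```python
import numpy as np

rng = np.random.default_rng(1)
e = rng.normal(0.0, 1.0, 100_000)      # zero-mean white noise with sigma_e^2 = 1

# pass e[n] through K(z) = 1/(1 - 0.5 z^-1), i.e. f[n] = e[n] + 0.5 f[n-1]
f = np.empty_like(e)
acc = 0.0
for n, en in enumerate(e):
    acc = en + 0.5 * acc
    f[n] = acc

# Property 1 predicts var(f) = sigma_e^2 * sum_n (0.5^n)^2 = 1/(1 - 0.25) = 4/3
print(f.var())
```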
Property 2: Let e_1[n], ..., e_L[n] be zero-mean, uncorrelated white noise sequences with variance σ_e². Suppose that each sequence e_i[n] is applied to the input of an LTI system K_i(z) and let f_i[n] denote the corresponding output. Finally, let f[n] = f_1[n] + ··· + f_L[n] (see Figure 10.6). Then the sequence f[n] has zero mean and variance
σ_f² = σ_{f_1}² + ··· + σ_{f_L}² (10.30)
where the σ_{f_i}² are computed as in Property 1.
Fig. 10.6 LTI systems K_i(z) with stochastic inputs e_i[n] and combined output f[n].
Linear superposition of noise sources: Consider an LTI filter structure H with L multiplicative branches. Assume that H is implemented in fixed-point arithmetic and let Ĥ denote the corresponding linear-noise model (Figure 10.7), where the noise sources e_i[n] are modelled as uncorrelated zero-mean white noise sources, as previously described.
Ĥ may be interpreted as a linear system with a main input x[n], L secondary inputs e_i[n] (i = 1, ..., L) and a main output ŷ[n]. Invoking the principle of superposition for linear systems, the output may be expressed as
ŷ[n] = y[n] + f_1[n] + ··· + f_L[n] (10.31)
In this equation:
Fig. 10.7 Linear-noise model of LTI system with L multipliers.
• y[n] is the desired response in the absence of quantization noise:
y[n] = H(z)x[n]
where H(z) is the system function in infinite precision.
• f_i[n] is the individual contribution of noise source e_i[n] to the system output in the absence of the other noise sources and of the main input x[n]:
f_i[n] = K_i(z)e_i[n]
where K_i(z) is defined as the transfer function between the injection point of e_i[n] and the system output ŷ[n] when x[n] = 0 and e_j[n] = 0 for all j ≠ i.
Total output noise power: According to (10.31), the total quantization noise at the system output is given by
f[n] = f_1[n] + ··· + f_L[n]
Invoking Properties 1 and 2, we conclude that the quantization noise f[n] has zero mean and its variance is given by
σ_f² = ∑_{i=1}^{L} σ_e² ∑_{n=−∞}^{∞} |k_i[n]|² (10.32)
    = (2^{−2B}/12) ∑_{i=1}^{L} ∑_{n=−∞}^{∞} |k_i[n]|² (10.33)
where (10.28) has been used. The variance σ_f² obtained in this way provides a measure of the quantization noise power at the system output.
Note on the computation of (10.33): To compute the quantization noise power σ_f² as given by (10.33), it is first necessary to determine the individual transfer functions K_i(z) seen by each of the noise sources e_i[n]. This information can be obtained from the flowgraph of the linear-noise model Ĥ by setting x[n] = 0 and e_j[n] = 0 for all j ≠ i. For simple system functions K_i(z), it may be possible to determine the associated impulse response k_i[n] via inverse z-transform and then compute a numerical value for the sum of squared magnitudes in (10.33). This is the approach taken in the following example.
Example 10.6:
Consider the system function
H(z) = 1 / ((1 − a_1 z^{−1})(1 − a_2 z^{−1}))
where a_1 = 1/2 and a_2 = 1/4. Let H denote a cascade realization of H(z) using 1st order sections, as illustrated in Figure 10.8.
Fig. 10.8 Cascade realization of a 2nd order system.
Consider the fixed-point implementation of H using a (B+1)-bit fractional two's complement (2C) number representation in which multiplications are rounded. In this implementation of H, each product a_i u_i[n] in Fig. 10.8, which requires 2B+1 bits for its exact representation, is rounded to B+1 bits:
a_i u_i[n]  (2B+1 bits)  →  Q_r(a_i u_i[n])  (B+1 bits)  = a_i u_i[n] + e_i[n]
where e_i[n] denotes the round-off error. The equivalent linear-noise model for H, shown in Fig. 10.9, may be viewed as a multiple-input, single-output LTI system.
Fig. 10.9 Equivalent linear noise model.
The output signal ŷ[n] in Fig. 10.9 may be expressed as
ŷ[n] = y[n] + f[n]
where y[n] = H(z)x[n] is the desired output and f[n] represents the cumulative effect of the quantization noise at the system's output. In turn, f[n] may be expressed as
f[n] = f_1[n] + f_2[n]
where
f_i[n] = K_i(z)e_i[n]
represents the individual contribution of noise source e_i[n], and K_i(z) is the system function between the injection point of e_i[n] and the system output, obtained by setting the input as well as all other noise sources to zero. To compute the noise power contributed by source e_i[n], we need to find K_i(z) and compute the energy in its impulse response:
• To obtain K_1(z), set x[n] = e_2[n] = 0:
K_1(z) = H(z) = 1/((1 − a_1 z^{−1})(1 − a_2 z^{−1})) = 2/(1 − (1/2)z^{−1}) − 1/(1 − (1/4)z^{−1})
The impulse response is obtained via inverse z-transform (hence the partial fraction expansion), assuming a causal system:
k_1[n] = [2(1/2)^n − (1/4)^n] u[n]
from which we compute the energy as follows:
∑_{all n} |k_1[n]|² = ∑_{n=0}^{∞} [2(1/2)^n − (1/4)^n]² = ··· = 1.83
• To obtain K_2(z), set x[n] = e_1[n] = 0:
K_2(z) = 1/(1 − a_2 z^{−1})
The impulse response is
k_2[n] = (1/4)^n u[n]
with corresponding energy
∑_{all n} |k_2[n]|² = ∑_{n=0}^{∞} (1/4)^{2n} = ··· = 1.07
Finally, according to (10.33), the total quantization noise power at the system output is obtained as:
σ_f² = σ_e² (∑_{all n} |k_1[n]|² + ∑_{all n} |k_2[n]|²) = (2^{−2B}/12)(1.83 + 1.07) = 2.90 (2^{−2B}/12) (10.34)
We leave it as an exercise for the student to verify that if the computation is repeated with a_1 = 1/4 and a_2 = 1/2 (i.e. reversing the order of the 1st order sections), the result will be
σ_f² = 3.16 (2^{−2B}/12)
Why...?
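The two factors 2.90 and 3.16 can be reproduced numerically, which also hints at the answer: in the reversed ordering, the first noise source is followed by the section 1/(1 − (1/2)z^{−1}), whose impulse-response energy (4/3) exceeds that of 1/(1 − (1/4)z^{−1}) (16/15). A Python sketch (illustrative, not from the notes):

```python
def energy(seq):
    return sum(v * v for v in seq)

N = 200                                          # geometric tails are negligible beyond this
k1 = [2 * 0.5**n - 0.25**n for n in range(N)]    # k1[n] for K1(z) = H(z)
k2 = [0.25**n for n in range(N)]                 # K2(z) = 1/(1 - (1/4) z^-1)
k2_rev = [0.5**n for n in range(N)]              # reversed order: K2(z) = 1/(1 - (1/2) z^-1)

print(f"{energy(k1):.2f}")                       # 1.83
print(f"{energy(k1) + energy(k2):.2f}")          # 2.90
print(f"{energy(k1) + energy(k2_rev):.2f}")      # 3.16
```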
Signal-to-quantization-noise ratio (SQNR): We define the SQNR at the output of a digital system as follows:
SQNR = 10 log_10 (P_y / σ_f²)  in dB (10.35)
where
• P_y denotes the average power of the output signal y[n] in infinite precision (see below).
• σ_f² denotes the average power of the total quantization noise f[n] at the system output. This is computed as explained previously.
Output signal power: The computation of the output signal power P_y depends on the model used for the input signal x[n]. Two important cases are considered below:
• Random input: Suppose x[n] is a zero-mean white noise sequence with variance σ_x². Then, from Property 1, we have
P_y = σ_x² ∑_{n=−∞}^{∞} |h[n]|² = (σ_x²/2π) ∫_{−π}^{π} |H(ω)|² dω (10.36)
• Sinusoidal input: Suppose x[n] = A sin(ω_o n + φ). Then, the average power of y[n] is given by
P_y = (A²/2) |H(ω_o)|² (10.37)
Example 10.7: (Ex. 10.6 cont.)
Suppose that in Example 10.6, the input signal is a white noise sequence with sample values x[n] uniformly distributed within ±X_max, where the maximum permissible value of |x[n]|, represented by X_max, may be less than 1 to avoid potential internal overflow problems. From the above, it follows that x[n] has zero mean and
σ_x² = X_max²/12
The corresponding output power in infinite precision is computed from (10.36):
P_y = σ_x² ∑_{n=−∞}^{∞} |h[n]|² = 1.83 σ_x² = 1.83 X_max²/12
Finally, the SQNR is obtained as
SQNR = 10 log_10 (P_y/σ_f²) = 10 log_10 [ (1.83 X_max²/12) / (2.90 × 2^{−2B}/12) ] ≈ 6.02B + 20 log_10 X_max − 2.01 (dB)
Note the dependence on the peak signal value X_max and the number of bits B. Each additional bit contributes a 6 dB increase in SQNR. In practice, the choice of X_max is limited by overflow considerations.
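The closed-form SQNR above is easy to tabulate. A Python sketch (illustrative):

```python
import math

def sqnr_db(B, x_max):
    """SQNR of the Example 10.6 cascade with white input uniform on +/- x_max."""
    p_y = 1.83 * x_max**2 / 12                # output signal power
    sigma_f2 = 2.90 * 2.0**(-2 * B) / 12      # output noise power, from (10.34)
    return 10 * math.log10(p_y / sigma_f2)

print(f"{sqnr_db(12, 1.0):.1f} dB")                          # about 70 dB
print(f"{sqnr_db(13, 1.0) - sqnr_db(12, 1.0):.2f} dB/bit")   # about 6.02 dB per extra bit
```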
Further note on the computation of (10.33): The approach used in the above example for the computation of ∑_n |k_i[n]|² may become cumbersome if K_i(z) is too complex. For rational IIR transfer functions K_i(z), the computation of the infinite sum can be avoided, based on the fact that
∑_{n=−∞}^{∞} |k[n]|² = (1/2π) ∫_{−π}^{π} |K(ω)|² dω
It has been shown earlier that the transfer function C(z) with frequency response C(ω) = |K_i(ω)|² has poles given by d_k and 1/d_k*, where d_k is a (simple) pole of K_i(z). Thus a partial fraction expansion of C(z) would yield a result in the form:
C(z) = C_1(z) + ∑_{k=1}^{N} [ A_k/(1 − d_k z^{−1}) − A_k*/(1 − z^{−1}/d_k*) ],
where C_1(z) is a possible FIR component, and the A_k's are partial fraction expansion coefficients. It is easily shown from the definition of C(z) as K_i(z)K_i*(1/z*) that the partial fraction expansion coefficient corresponding to 1/d_k* is indeed −A_k*. Evaluating the above PFE on the unit circle and integrating over one period gives:
∑_n |k[n]|² = (1/2π) ∫_{−π}^{π} C_1(ω) dω + ∑_{k=1}^{N} (1/2π) ∫_{−π}^{π} [ A_k/(1 − d_k e^{−jω}) − A_k*/(1 − e^{−jω}/d_k*) ] dω.
All the above terms are easily seen to be inverse discrete-time Fourier transforms evaluated at n = 0, so that
∑_n |k[n]|² = c_1[0] + ∑_{k=1}^{N} A_k (10.38)
Note that the term corresponding to A_k* is zero at n = 0, since the corresponding sequence has to be anticausal, starting at n = −1.
Finally, it often occurs that two or more noise sources e_i[n] have the same system function K_i(z). In this case, according to (10.33), these noise sources may be replaced by a single noise source with a scaled variance of qσ_e², where q is the number of noise sources being merged.
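As a numerical alternative to the residue formula (10.38), the Parseval relation quoted above can be approximated by sampling |K(ω)|² on a dense frequency grid. A Python sketch for the one-pole K(z) = 1/(1 − 0.5 z^{−1}) (an illustrative choice):

```python
import numpy as np

M = 4096                                      # dense frequency grid
w = 2 * np.pi * np.arange(M) / M
K = 1.0 / (1.0 - 0.5 * np.exp(-1j * w))       # illustrative K(z) = 1/(1 - 0.5 z^-1)

energy = np.mean(np.abs(K)**2)                # ~ (1/2pi) * integral of |K(w)|^2 dw
print(energy)                                 # -> 4/3, since sum_n 0.25^n = 1/(1 - 1/4)
```

The grid average converges geometrically fast here because the aliased autocorrelation terms decay like 0.5^M.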
Example 10.8:
Let us study the actual effect of round-off noise on the DFII implementation of the filter designed by bilinear transformation in Example 9.5. The coefficients of the transfer function are shown in Table 9.2. Figure 10.10(a) shows the equivalent linear model used to take quantization noise into account. The 2N+1 different noise sources can be regrouped, as in Figure 10.10(b), into two composite noise sources e'_1[n] and e'_2[n] with variances Nσ_e² and (N+1)σ_e² respectively. The transfer function from e'_2[n] to the output of the structure is just 1, whereas the noise source e'_1[n] goes through the direct part of the transfer function, H(z). The corresponding output noise variances can therefore be computed as:
σ²_{f'_1} = σ²_{e'_1} ∑_{n=0}^{∞} h[n]² = N σ_e² ∑_{n=0}^{∞} h[n]² (10.39)
σ²_{f'_2} = σ²_{e'_2} = (N+1) σ_e², (10.40)
for a total noise power of
σ_f² = (2^{−2B}/12) (N + 1 + N ∑_{n=0}^{∞} h[n]²).
In the case of the coefficients shown in Table 9.2, we will use the formula (10.38) in order to compute the sum over h[n]². The first step is to create the numerator and denominator coefficients of the transfer function C(z) = H(z)H*(1/z*). We will use Matlab to do so. After grouping the coefficients of the numerator polynomial b_k in a column vector b in Matlab, and doing the same for the a_k's in a vector a, one can easily find the numerator by numc=conv(b,flipud(conj(b))) and the denominator by denc=conv(a,flipud(conj(a))). To compute the partial fraction expansion, the function residuez is used, yielding:
C_1(z) = 9.97×10^{−6},  ∑_{k=1}^{6} A_k = 0.23463,
so that the actual sum is
∑_{n=0}^{∞} h[n]² = 0.23464.
Finally, the output noise variance is given by:
σ_f² = 0.70065 × 2^{−2B}.
For instance, assuming a 12-bit representation and an input signal uniformly distributed between −1/2 and 1/2, we would get an SQNR of 50.7 dB. Though this value might sound high, it will not be sufficient for many applications (e.g. “CD quality” audio processing requires an SNR of the order of 95 dB).
Note that the example is not complete:
• the effect of coefficient quantization has not been taken into account. In particular, some low values of the number B+1 of bits used might push the poles outside of the unit circle and make the system unstable.
• the effect of overflows has not been taken into account; in practice, overflow in key nodes is avoided by a proper scaling of the input signal by a factor s < 1 such that the resulting signals in the nodes of interest do not overflow. The nodes to consider are the final results of accumulated additions, since overflow in intermediate nodes is harmless in two's complement arithmetic. These nodes are, in the case of Figure 10.10(b), the two nodes where the noise components are injected. By looking at the transfer function from the input to these nodes, it is easy to find a conservative value of s that will prevent overflow. Note that this will also reduce the SQNR, since the power P_y will be scaled by a factor s².
Finally, let us point out that if we have at our disposal a double-length accumulator (as is available in many DSP chips, see Chapter ??), then the rounding or truncation only has to take place before storage in memory. Consequently, the only locations in the DFII signal flow graph where quantization will occur are right before the delay line and before the output, as illustrated in Figure 10.11. Carrying out a study similar to the one outlined above, we end up with a total output noise variance of
σ_f² = (2^{−2B}/12) (1 + ∑_{n=0}^{∞} h[n]²),
where the factors N have disappeared. With 12 bits available, this leads to an SQNR value of 59.02 dB.
10.4 Scaling to avoid overﬂow
Problem statement: Consider a filter structure H with K adders and corresponding node values w_i[n], as illustrated in Fig. 10.12. Note that whenever |w_i[n]| > 1, for a given adder at a given time, overflow
Fig. 10.10 DFII modified to take quantization effects into account: (a) each multiplier contains a noise source, and (b) a simplified version with composite noise sources.
Fig. 10.11 Round-off noise model when a double-length accumulator is available.
Fig. 10.12 Filter structure with K adders and corresponding node values w_i[n].
will result in the corresponding fixed-point implementation. Overflow usually produces large, unpredictable distortion in the system output and must be avoided at all costs.
Principles of scaling: The basic idea is very simple: properly scale the input signal x[n] to a range |x[n]| < X_max so that the condition
|w_i[n]| < 1, i = 1, ..., K (10.41)
is enforced at all times n. Several approaches exist for choosing X_max, the maximum permissible value of |x[n]|. The best approach usually depends on the type of input signals being considered. Two such approaches are presented below:
Wideband signals: For wideband signals, a suitable value of X_max is provided by
X_max < 1 / max_i ∑_n |h_i[n]| (10.42)
where h_i[n] is the impulse response of H_i(z), the transfer function from the system input to the i-th adder output. The above choice of X_max ensures that overflow will be avoided regardless of the particular type of input: it represents a sufficient condition to avoid overflow.
Narrowband signals: For narrowband signals, the above bound on X_max may be too conservative. An often better choice of X_max is provided by the bound
X_max < 1 / max_{i,ω} |H_i(ω)| (10.43)
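Both bounds are straightforward to evaluate numerically. The Python sketch below does so for a hypothetical second-order section (the coefficients are illustrative assumptions); since ∑_n |h_i[n]| ≥ max_ω |H_i(ω)|, the wideband bound is always the more conservative of the two.

```python
import numpy as np

a1, a2 = 1.2, -0.81                  # hypothetical H_i(z) = 1/(1 - a1 z^-1 - a2 z^-2)
N = 4000
h = np.zeros(N)
h[0] = 1.0
for n in range(1, N):                # impulse response by the difference equation
    h[n] = a1 * h[n - 1] + (a2 * h[n - 2] if n >= 2 else 0.0)

x_max_wb = 1.0 / np.sum(np.abs(h))   # wideband bound (10.42): safe for any input

w = np.linspace(-np.pi, np.pi, 8192)
H = 1.0 / (1.0 - a1 * np.exp(-1j * w) - a2 * np.exp(-2j * w))
x_max_nb = 1.0 / np.max(np.abs(H))   # narrowband bound (10.43)

print(x_max_wb, x_max_nb)            # the wideband bound is the smaller (more conservative)
```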
Chapter 11
Fast Fourier transform (FFT)
Computation of the discrete Fourier transform (DFT) is an essential step in many signal processing algorithms. A direct computation of the N-point DFT requires on the order of N² arithmetic operations, which, even for data sequences of moderate size N, usually entails significant computational costs.
In this Chapter, we investigate so-called fast Fourier transform (FFT) algorithms for the efficient computation of the DFT. These algorithms achieve significant savings in the DFT computation by exploiting certain symmetries of the Fourier basis functions, and they only require on the order of N log_2 N arithmetic operations.
They are at the origin of major advances in the ﬁeld of signal processing, particularly in the 60’s and 70’s.
Nowadays, FFT algorithms ﬁnd applications in numerous commercial DSPbased products.
11.1 Direct computation of the DFT
Recall the definition of the N-point DFT of a discrete-time signal x[n] defined for 0 ≤ n ≤ N−1:
X[k] = ∑_{n=0}^{N−1} x[n] W_N^{kn}, k = 0, 1, ..., N−1 (11.1)
where, for convenience, we have introduced
W_N ≜ e^{−j2π/N}. (11.2)
Based on (11.1), one can easily come up with the following algorithmic steps for its direct evaluation, also known as the direct approach:
Step 1: Compute W_N^l and store it in a table:
W_N^l = e^{−j2πl/N} = cos(2πl/N) − j sin(2πl/N), l = 0, 1, ..., N−1 (11.3)
Note: this only needs to be evaluated for l = 0, 1, ..., N−1 because of periodicity.
Step 2: Compute the DFT X[k] using the stored values of W_N^l and the input data x[n]:
for k = 0 : N−1 (11.4)
    X[k] ← x[0]
    for n = 1 : N−1
        l = (kn)_N
        X[k] ← X[k] + x[n] W_N^l
    end
end
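The two steps above translate directly into code. A Python sketch (illustrative; the notes themselves use pseudocode):

```python
import cmath

def dft_direct(x):
    """Direct O(N^2) DFT with a precomputed twiddle table (Steps 1 and 2)."""
    N = len(x)
    W = [cmath.exp(-2j * cmath.pi * l / N) for l in range(N)]   # Step 1: table lookup
    X = []
    for k in range(N):                       # Step 2: N(N-1) complex multiply-adds
        acc = x[0] * (1 + 0j)
        for n in range(1, N):
            acc += x[n] * W[(k * n) % N]     # index l = (kn) mod N, by periodicity
        X.append(acc)
    return X

print(dft_direct([1, 1, 1, 1]))              # DC input -> [4, 0, 0, 0] up to round-off
```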
The complexity of this approach can be broken up as follows:
• In step (1): N evaluations of the trigonometric functions sin and cos (actually fewer than N because of the symmetries). These values are usually computed once and stored for later use. We refer to this approach as table lookup.
• In step (2):
  - There are N(N−1) ≈ N² complex multiplications (⊗_c)
  - There are N(N−1) ≈ N² complex additions (⊕_c)
  - One must also take into account the overhead: indexing, addressing, etc.
Because of the figures above, we say that direct computation of the DFT is O(N²), i.e. of order N². The same is true for the IDFT.
In itself, the computation of the N-point DFT of a signal is a costly operation. Indeed, even for moderate values of N, the O(N²) complexity will usually require a considerable amount of computational resources. The rest of this Chapter will be devoted to the description of efficient algorithms for the computation of the DFT. These so-called fast Fourier transform, or FFT, algorithms can achieve the same DFT computation in only O(N log_2 N) operations.
11.2 Overview of FFT algorithms
In its most general sense, the term FFT refers to a family of computationally fast algorithms used to compute the DFT. The FFT should not be thought of as a new transform: FFTs are merely algorithms.
Typically, FFT algorithms require O(N log_2 N) complex multiplications (⊗_c), while the direct evaluation of the DFT requires O(N²) complex multiplications. For N ≫ 1, we have N log_2 N ≪ N², so that there is a significant gain with the FFT.
Basic principle: The FFT relies on the concept of divide and conquer. It is obtained by breaking the DFT
of size N into a cascade of smaller size DFTs. To achieve this, two essential ingredients are needed:
• N must be a composite number (e.g. N = 6 = 2×3).
c B. Champagne & F. Labeau Compiled November 16, 2004
• The periodicity and symmetry properties of the factor W_N in (11.2) must be exploited, e.g.:
W_N^k = W_N^{k+N} (11.5)
W_N^{Lk} = W_{N/L}^k. (11.6)
Different types of FFTs: There are several FFT algorithms, e.g.:
• N = 2^ν ⇒ radix-2 FFTs. These are the most commonly used algorithms. Even then, there are many variations:
  - Decimation in time (DIT)
  - Decimation in frequency (DIF)
• N = r^ν ⇒ radix-r FFTs. The special cases r = 3 and r = 4 are not uncommon.
• More generally, N = p_1 p_2 ··· p_l, where the p_i's are prime numbers, leads to so-called mixed-radix FFTs.
Radix-2 FFTs: In these algorithms, applicable when N = 2^ν:
• the DFT_N is decomposed into a cascade of ν stages;
• each stage is made up of N/2 2-point DFTs (DFT_2).
Radix-2 FFTs are possibly the most important ones. Only in very specialized situations will it be more advantageous to use other radix-type FFTs. In these notes, we shall mainly focus on radix-2 FFTs. However, the students should be able to extend the basic principles used in the derivation of radix-2 FFT algorithms to other radix-type FFTs.
11.3 Radix-2 FFT via decimation-in-time
The basic idea behind decimation-in-time (DIT) is to partition the input sequence x[n], of length N = 2^ν, into two subsequences, i.e. x[2r] and x[2r+1], r = 0, 1, ..., (N/2) − 1, corresponding to even and odd values of time, respectively. It will be shown that the N-point DFT of x[n] can be computed by properly combining the (N/2)-point DFTs of each subsequence. In turn, the same principle can be applied to the computation of the (N/2)-point DFT of each subsequence, which can be reduced to DFTs of size N/4. This basic principle is repeated until only 2-point DFTs are involved. The final result is an FFT algorithm of complexity O(N log2 N).
We will first review the 2-point DFT, which is the basic building block of radix-2 FFT algorithms. We will then explain how and why the above idea of decimation-in-time can be implemented mathematically. Finally, we discuss particularities of the resulting radix-2 DIT FFT algorithms.
The 2-point DFT: In the case N = 2, (11.1) specializes to

  X[k] = x[0] + x[1] W_2^k,  k = 0, 1.   (11.7)
Since W_2 = e^{−jπ} = −1, this can be further simplified to

  X[0] = x[0] + x[1]   (11.8)
  X[1] = x[0] − x[1],   (11.9)
which leads to a very simple realization of the 2-point DFT, as illustrated by the signal flowgraph in Figure 11.1.
Fig. 11.1 Signal flowgraph of a 2-point DFT (inputs x[0], x[1]; outputs X[0], X[1]; the lower branch carries a gain of −1).
Main steps of DIT:
(1) Split the sum ∑_n in (11.1) into ∑_{n even} + ∑_{n odd}.
(2) Express the sums ∑_{n even} and ∑_{n odd} as (N/2)-point DFTs.
(3) If N/2 = 2, stop; else, repeat the above steps for each of the individual (N/2)-point DFTs.
Case N = 4 = 2^2:
• Step (1):

  X[k] = x[0] + x[1] W_4^k + x[2] W_4^{2k} + x[3] W_4^{3k}
       = (x[0] + x[2] W_4^{2k}) + W_4^k (x[1] + x[3] W_4^{2k})   (11.10)
• Step (2): Using the property W_4^{2k} = W_2^k, we can write

  X[k] = (x[0] + x[2] W_2^k) + W_4^k (x[1] + x[3] W_2^k)
       = G[k] + W_4^k H[k]   (11.11)

  G[k] ≜ DFT_2{even samples}   (11.12)
  H[k] ≜ DFT_2{odd samples}   (11.13)

  Note that G[k] and H[k] are 2-periodic, i.e.

  G[k+2] = G[k],   H[k+2] = H[k]   (11.14)
• Step (3): Since N/2 = 2, we simply stop; that is, the 2-point DFTs G[k] and H[k] cannot be further simplified via DIT.
The 4-point DFT can thus be computed by properly combining the 2-point DFTs of the even and odd samples, i.e. G[k] and H[k], respectively:

  X[k] = G[k] + W_4^k H[k],  k = 0, 1, 2, 3   (11.15)
Since G[k] and H[k] are 2-periodic, they only need to be computed for k = 0, 1; hence the equations:

  X[0] = G[0] + W_4^0 H[0]
  X[1] = G[1] + W_4^1 H[1]
  X[2] = G[2] + W_4^2 H[2] = G[0] + W_4^2 H[0]
  X[3] = G[3] + W_4^3 H[3] = G[1] + W_4^3 H[1]
The flow graph corresponding to this realization is shown in Figure 11.2. Note the special ordering of the input data. Further simplifications of the factors W_4^k are possible here: W_4^0 = 1, W_4^1 = −j, W_4^2 = −1 and W_4^3 = j.
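The combination (11.15) of two 2-point DFTs can be checked numerically. The sketch below is our own illustrative Python (not part of the original notes); it builds the 4-point DFT exactly as in (11.11)-(11.15):

```python
import cmath

def dft2(x0, x1):
    # 2-point DFT, eqs. (11.8)-(11.9): W_2 = -1
    return [x0 + x1, x0 - x1]

def dit_dft4(x):
    G = dft2(x[0], x[2])  # DFT_2 of the even samples
    H = dft2(x[1], x[3])  # DFT_2 of the odd samples
    W4 = [cmath.exp(-2j * cmath.pi * k / 4) for k in range(4)]  # 1, -j, -1, j
    # X[k] = G[k mod 2] + W_4^k H[k mod 2], using the 2-periodicity (11.14)
    return [G[k % 2] + W4[k] * H[k % 2] for k in range(4)]

print(dit_dft4([1, 2, 3, 4]))
```

The result matches the direct evaluation of the 4-point DFT, up to floating-point round-off.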
Fig. 11.2 Decimation-in-time implementation of a 4-point DFT: x[0] and x[2] feed one DFT_2 block, x[1] and x[3] the other, whose outputs G[k] and H[k] are combined with the factors W_4^0, ..., W_4^3. The DFT_2 blocks are as shown in Figure 11.1.
General case:
• Step (1): Note that the even and odd samples of x[n] can be represented respectively by the sequences x[2r] and x[2r+1], where the index r now runs from 0 to N/2 − 1. Therefore

  X[k] = ∑_{n=0}^{N−1} x[n] W_N^{kn}
       = ∑_{r=0}^{N/2−1} x[2r] W_N^{2kr} + W_N^k ∑_{r=0}^{N/2−1} x[2r+1] W_N^{2kr}   (11.16)
• Step (2): Using the property W_N^{2kr} = W_{N/2}^{kr}, we obtain

  X[k] = ∑_{r=0}^{N/2−1} x[2r] W_{N/2}^{kr} + W_N^k ∑_{r=0}^{N/2−1} x[2r+1] W_{N/2}^{kr}
       = G[k] + W_N^k H[k]   (11.17)
where we define

  G[k] ≜ DFT_{N/2}{x[2r], r = 0, 1, ..., N/2 − 1}   (11.18)
  H[k] ≜ DFT_{N/2}{x[2r+1], r = 0, 1, ..., N/2 − 1}   (11.19)

Note that G[k] and H[k] are (N/2)-periodic and need only be computed for k = 0 up to N/2 − 1. Thus, for k ≥ N/2, (11.17) is equivalent to

  X[k] = G[k − N/2] + W_N^k H[k − N/2]   (11.20)
• Step (3): If N/2 ≥4, apply these steps to each DFT of size N/2.
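The recursion in steps (1)-(3) can be written down directly. The following is an illustrative recursive sketch in Python (function names are ours; practical implementations use the iterative in-place form discussed later in this chapter):

```python
import cmath

def fft_dit(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    G = fft_dit(x[0::2])  # DFT_{N/2} of the even samples, eq. (11.18)
    H = fft_dit(x[1::2])  # DFT_{N/2} of the odd samples, eq. (11.19)
    X = [0] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * H[k]  # W_N^k H[k]
        X[k] = G[k] + t            # eq. (11.17)
        X[k + N // 2] = G[k] - t   # eq. (11.20), using W_N^{k+N/2} = -W_N^k
    return X
```

The butterfly identity W_N^{k+N/2} = −W_N^k is what allows the second half of the spectrum to reuse the product W_N^k H[k] computed for the first half.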
Example 11.1: Case N = 8 = 2^3
The application of the general decimation-in-time principle to the case N = 8 is described by Figures 11.3 through 11.5, where, for the sake of uniformity, all phase factors have been expressed as powers of W_8 (e.g. W_4^1 = W_8^2).
Figure 11.3 illustrates the result of a first pass through DIT steps (1) and (2). The outputs of the top (bottom) 4-point DFT box are the coefficients G[k] (resp. H[k]) for k = 0, 1, 2, 3. Note the ordering of the input data to accommodate the 4-point DFTs over even and odd samples.

Fig. 11.3 Decomposition of an 8-point DFT into two 4-point DFTs (even samples x[0], x[2], x[4], x[6] feed the top box and odd samples x[1], x[3], x[5], x[7] the bottom one; the bottom outputs are weighted by W_8^0 through W_8^7).

In DIT step (3), since
N/2 = 4, the DIT procedure is repeated for the computation of the 4-point DFTs G[k] and H[k]. This amounts to replacing the 4-point DFT boxes in Figure 11.3 by the flowgraph of the 4-point DIT FFT, as derived previously. In doing so, the order of the input sequence must be modified accordingly. The result is illustrated in Figure 11.4. The final step amounts to replacing each of the 2-point DFTs in Figure 11.4 by its corresponding signal flowgraph. The result is illustrated in Figure 11.5.
Fig. 11.4 Decomposition of an 8-point DFT into four 2-point DFTs.
Computational complexity: The DIT FFT flowchart obtained for N = 8 is easily generalized to N = 2^ν, i.e. an arbitrary power of 2. In the general case, the signal flow graph consists of a cascade of ν = log2 N stages. Each stage is made up of N/2 basic binary flow graphs called butterflies, as shown in Figure 11.6.
In practice, a simplified butterfly structure is used instead of the one in Figure 11.6. Realizing that

  W_N^{r+N/2} = W_N^r W_N^{N/2} = W_N^r e^{−jπ} = −W_N^r,
an equivalent butterﬂy with only one complex multiplication can be implemented, as shown in Figure 11.7.
By using the modiﬁed butterﬂy structure, the computational complexity of the DIT FFT algorithm can be
determined based on the following:
• there are ν = log2 N stages;
• there are N/2 butterflies per stage;
• there is 1 ⊗_c and 2 ⊕_c per butterfly.
Hence, the total complexity:

  (N/2) log2 N ⊗_c,   N log2 N ⊕_c   (11.21)
Ordering of input data: The input data x[n] on the LHS of the FFT flow graph is not listed in standard sequential order (refer to Figures 11.2 or 11.5). Rather, the data is arranged in a so-called bit-reversed order. Indeed, if one compares the indices of the samples x[n] on the left of the structure in Figure 11.5 with a natural ordering, one sees that the 3-bit binary representation of the actual index is the bit-reversed version of the 3-bit binary representation of the natural ordering:
Fig. 11.5 An 8-point DFT implemented with the decimation-in-time algorithm.
Fig. 11.6 A butterfly, the basic building block of an FFT algorithm (branch gains W_N^r and W_N^{r+N/2}).
  Sample index      Actual offset in memory
  decimal  binary   decimal  binary
     0      000        0      000
     4      100        1      001
     2      010        2      010
     6      110        3      011
     1      001        4      100
     5      101        5      101
     3      011        6      110
     7      111        7      111
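The bit-reversed permutation in the table above can be generated with a few lines of code; this is an illustrative sketch (the function name is ours):

```python
def bit_reverse(k, nu):
    """Reverse the nu-bit binary representation of the index k."""
    r = 0
    for _ in range(nu):
        r = (r << 1) | (k & 1)  # shift the lowest bit of k into r
        k >>= 1
    return r

# Reproduce the sample-index column of the table for N = 8 (nu = 3):
order = [bit_reverse(k, 3) for k in range(8)]
print(order)
```

Note that bit reversal is its own inverse, which is why the same operation can be used either before a DIT FFT or after a DIF FFT.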
Practical FFT routines contain bit-reversing instructions, either prior to or after the FFT computation, depending upon the specific nature of the algorithm. Some programmable DSP chips also contain an option for bit-reversed addressing of data in memory.
We note that the DIT FFT flowgraph can be modified so that the inputs are in sequential order (e.g. by properly reordering the vertical line segments in Fig. 11.3); but then the output X[k] will be listed in bit-reversed order.
Fig. 11.7 A modified butterfly, with only one multiplication (branch gains W_N^r and −1).
Storage requirement: A complex array of length N, say A[q] (q = 0, ..., N−1), is needed to store the input data x[n]. The computation can be done in-place, meaning that the same array A[q] is used to store the results of intermediate computations. This is made possible by the very structure of the FFT flow graph, which is entirely made up of independent binary butterfly computations. We note that once the outputs of a particular butterfly have been computed, the corresponding inputs are no longer needed and can be overwritten. Thus, provided a single additional complex register is made available, it is possible to overwrite the inputs of each butterfly computation with the corresponding outputs. Referring to Figure 11.7 and denoting by A1 and A2 the storage locations of the two inputs to the butterfly, we can write the pseudo-code of the butterfly:

  tmp ← A2 · W_N^r
  A2 ← A1 − tmp
  A1 ← A1 + tmp,

after which the inputs have been overwritten by the outputs, using only the temporary storage location tmp. This is possible because the inputs of each butterfly are used only once.
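The butterfly pseudo-code translates almost literally into software. The sketch below is illustrative only, with indices i and j standing for the storage locations A1 and A2; it overwrites the two inputs in the array with the two outputs:

```python
def butterfly_inplace(A, i, j, W):
    """In-place butterfly (Fig. 11.7 style): A[i], A[j] <- A[i]+W*A[j], A[i]-W*A[j]."""
    tmp = A[j] * W      # tmp <- A2 * W_N^r
    A[j] = A[i] - tmp   # A2  <- A1 - tmp
    A[i] = A[i] + tmp   # A1  <- A1 + tmp
```

Because tmp is computed first and A[i] is read before being written, no other temporary storage is needed, exactly as argued above.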
FFT program: Based on the above considerations, it should be clear that a properly written program for the DIT FFT algorithm contains the following ingredients:
• Pre-processing to address the data in bit-reversed order;
• An outer loop over the stage number, say from l = 1 to ν = log2 N;
• An inner loop over the butterfly number, say from k = 1 to N/2. Note that the specific indices of the butterfly inputs depend on the stage number l.
11.4 Decimation-in-frequency
Decimation in frequency (DIF) is another way of decomposing the DFT computation so that the resulting algorithm has complexity O(N log2 N). The principles are very similar to the decimation-in-time algorithm.
Basic steps: Assume N = 2^ν, where ν is an integer ≥ 1.
(1) Partition the DFT samples X[k] (k = 0, 1, ..., N−1) into two subsequences:
 - even-indexed samples: X[2r], r = 0, 1, ..., N/2 − 1
 - odd-indexed samples: X[2r+1], r = 0, 1, ..., N/2 − 1
(2) Express each subsequence as a DFT of size N/2.
(3) If N/2 = 2, stop; else, apply steps (1)-(3) to each DFT of size N/2.
Case N = 4:
• For even-indexed DFT samples, we have

  X[2r] = ∑_{n=0}^{3} x[n] W_4^{2rn},  r = 0, 1
        = x[0] W_4^0 + x[1] W_4^{2r} + x[2] W_4^{4r} + x[3] W_4^{6r}

  Now, observe that:

  W_4^0 = 1,   W_4^{2r} = W_2^r,   W_4^{4r} = 1,   W_4^{6r} = W_4^{4r} W_4^{2r} = W_2^r

  Therefore

  X[2r] = (x[0] + x[2]) + (x[1] + x[3]) W_2^r
        = u[0] + u[1] W_2^r
        = DFT_2{u[0], u[1]}

  u[r] ≜ x[r] + x[r+2],  r = 0, 1
• In the same way, it can be verified that for the odd-indexed DFT samples,

  X[2r+1] = DFT_2{v[0], v[1]}

  v[r] ≜ W_4^r (x[r] − x[r+2]),  r = 0, 1
• The resulting signal flow graph is shown in Figure 11.8, where each of the 2-point DFTs has been realized using the basic SFG in Figure 11.7.
General case: Following the same steps as in the case N = 4, the following mathematical expressions can be derived for the general case N = 2^ν:

  X[2r] = DFT_{N/2}{u[r]}   (11.22)
  X[2r+1] = DFT_{N/2}{v[r]}   (11.23)

where r = 0, 1, ..., N/2 − 1 and

  u[r] ≜ x[r] + x[r + N/2]   (11.24)
  v[r] ≜ W_N^r (x[r] − x[r + N/2])   (11.25)
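Equations (11.22)-(11.25) suggest the following illustrative recursive sketch of the DIF decomposition (names are ours; a real implementation would be iterative and in-place):

```python
import cmath

def fft_dif(x):
    """Recursive radix-2 decimation-in-frequency FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    half = N // 2
    u = [x[r] + x[r + half] for r in range(half)]  # eq. (11.24)
    v = [cmath.exp(-2j * cmath.pi * r / N) * (x[r] - x[r + half])
         for r in range(half)]                     # eq. (11.25)
    X = [0] * N
    X[0::2] = fft_dif(u)  # even-indexed outputs, eq. (11.22)
    X[1::2] = fft_dif(v)  # odd-indexed outputs, eq. (11.23)
    return X
```

Note the duality with DIT: here the input stays in natural order and the sequence splitting happens on the output side.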
Fig. 11.8 Decimation-in-frequency realization of a 4-point DFT.
Remarks on the DIF FFT: Much like the DIT, the DIF radix-2 FFT algorithms have the following characteristics:
• computational complexity of O(N log2 N);
• output X[k] in bit-reversed order;
• in-place computation is also possible.
A program for the DIF FFT contains components similar to those described previously for the DIT FFT.
11.5 Final remarks
Simplifications: The FFT algorithms discussed above can be further simplified in the following special cases:
 - x[n] is a real signal;
 - only selected values of X[k] need to be computed (⇒ FFT pruning);
 - x[n] contains a large number of zero samples (e.g. as a result of zero-padding).
Generalizations: The DIT and DIF approaches can be generalized to:
 - arbitrary radix (e.g. N = 3^ν and N = 4^ν);
 - mixed radix (e.g. N = p_1 p_2 · · · p_K, where the p_i are prime numbers).
Chapter 12
An introduction to Digital Signal Processors
12.1 Introduction
Digital signal processing chips are specialized microprocessors aimed at performing DSP tasks. They are often called Digital Signal Processors — “DSPs” for hardware engineers. These DSP chips are sometimes referred to as PDSPs (Programmable Digital Signal Processors), as opposed to hard-wired solutions (ASICs) and reconfigurable solutions (FPGAs).
The first DSP chips appeared in the early 80’s, mainly produced by Texas Instruments (TI). Nowadays, their power has dramatically increased, and the four big players in this game are TI, Agere (formerly Lucent), Analog Devices and Motorola. All these companies provide families of DSP processors with different target applications, from low-cost to high-performance.
As the processing capabilities of DSPs have increased, they are being used in more and more applications.
Table 12.1 lists a series of application areas, together with the DSP operations they require, as well as a
series of examples of devices or appliances where a DSP chip can be used.
The very need for specialized microprocessors to accomplish DSPrelated tasks comes from a series of
requirements of classic DSP algorithms, that could not be met by general purpose processors in the early
80’s: DSPs have to support highperformance, repetitive and numerically intensive tasks.
In this chapter, we will review the reasons why a DSP chip should be different from any other microprocessor, through motivating examples of DSP algorithms. Based on this, we will analyze how DSP manufacturers have designed their chips so as to meet the DSP constraints, and review a series of features common to most DSP chips. Finally, we will briefly discuss the performance measures that can be used to compare different DSP chips.
12.2 Why PDSPs?
To motivate the need for specialized DSP processors, we will study in this section the series of operations required to accomplish two basic DSP tasks, namely FIR filtering and FFT computation.
Application Area: Speech & Audio Signal Processing
  DSP tasks: Effects (reverb, tone control, echo), filtering, audio compression, speech synthesis, recognition & compression, frequency equalization, pitch, surround sound.
  Devices: Musical instruments & amplifiers, audio mixing consoles & recording equipment, audio equipment & boards for PCs, toys & games, automotive sound systems, DAT & CD players, HDTV equipment, digital tapeless recorders, cellular phones.

Application Area: Instrumentation and Measurement
  DSP tasks: Fast Fourier transform (FFT), filtering, waveform synthesis, adaptive filtering, high-speed numeric calculations.
  Devices: Test & measurement equipment, I/O cards for PCs, power meters, signal analyzers and generators.

Application Area: Communications
  DSP tasks: Modulation & transmission, demodulation & reception, speech compression, data encryption, echo cancellation.
  Devices: Modems, fax machines, broadcast equipment, mobile phones, digital pagers, global positioning systems, digital answering machines.

Application Area: Medical Electronics
  DSP tasks: Filtering, echo cancellation, fast Fourier transform (FFT), beamforming.
  Devices: Respiration, heart & fetal monitoring equipment, ultrasound equipment, medical imaging equipment, hearing aids.

Application Area: Optical and Image Processing
  DSP tasks: 2-dimensional filtering, fast Fourier transform (FFT), pattern recognition, image smoothing.
  Devices: Bar code scanners, automatic inspection systems, fingerprint recognition, digital televisions, sonar/radar systems, robotic vision.

Table 12.1 A non-exhaustive list of applications of DSP chips.
© B. Champagne & F. Labeau. Compiled November 24, 2004.
FIR Filtering: Filtering a signal x[n] through an FIR filter with impulse response h[n] amounts to computing the convolution

  y[n] = h[n] ∗ x[n].

At each sample time n, the DSP processor has to compute

  y[n] = ∑_{k=0}^{N−1} h[k] x[n−k],

where N is the length of the FIR filter. From an algorithmic point of view, this means that the processor has to fetch N data samples at each sample time, multiply them with the corresponding filter coefficients stored in memory, and accumulate the sum of products to yield the final answer. Figure 12.1 illustrates these operations at two consecutive times n and n+1.
It is clear from this drawing that the basic DSP operation in this case is a multiplication followed by an addition, also known as a Multiply and Accumulate operation, or MAC. Such a MAC has to be performed N times for each output sample.
This also shows that the FIR filtering operation is both data-intensive (N data samples needed per output sample) and repetitive (the same MAC operation repeated N times with different operands).
Another point that appears from this graphic is that, although the filter coefficients remain the same from one time instant to the next, the input samples change. They do not all change: the oldest one is discarded, the newest input sample appears, and all other samples are shifted one location in memory. This is known as a First-In-First-Out (FIFO) structure or queue.
A final feature that is common to a lot of DSP applications is the need for real-time operation: the output sample y[n] must be computed by the processor between the moment x[n] becomes available and the time x[n+1] enters the FIFO, which leaves T_s seconds to carry out the whole processing.
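The N MAC operations per output sample can be sketched as follows (illustrative Python, with names of our choosing; fifo[k] plays the role of the sample x[n−k]):

```python
def fir_output(h, fifo):
    """One output sample of an N-tap FIR filter: N multiply-and-accumulate steps.
    fifo holds the N most recent input samples, newest first: fifo[k] = x[n-k]."""
    acc = 0.0                   # accumulator register
    for k in range(len(h)):
        acc += h[k] * fifo[k]   # one MAC per filter tap
    return acc
```

For example, a two-tap moving average h = [0.5, 0.5] applied to the FIFO [3.0, 1.0] yields 2.0. On a DSP chip, each iteration of this loop maps onto a single-cycle MAC instruction.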
FFT Computation: As explained in Section 11.3, a classical FFT computation by a decimation-in-time algorithm uses a simple building block known as a butterfly (see Figure 11.7). This simple operation on 2 values again involves multiplication and addition, as suggested by the pseudo-code in Section 11.3. Another point mentioned in Section 11.3 is that the FFT algorithm yields the DFT coefficients in bit-reversed order, so that once they are computed, the processor has to restore the original order.
Summary of requirements: We have seen with the two above examples that DSP algorithms involve:
1. real-time requirements with intensive data flow through the processor;
2. efficient implementation of loops and MAC operations;
3. efficient addressing schemes to access FIFOs and/or bit-reversed ordered data.
In the next section we will describe the features common to many DSPs that address the above list of requirements.
Fig. 12.1 Operations required to compute the output of a 7-tap FIR filter at times n and n+1.
12.3 Characterization of PDSPs
12.3.1 Fixed-Point vs. Floating-Point
Programmable DSPs can be characterized by the type of arithmetic they implement. Fixed-point processors are in general much cheaper than their floating-point counterparts. However, the dynamic range offered by floating-point processors is much higher. Another advantage of floating-point processors is that they do not require the programmer to worry too much about overflow and scaling issues, which are of paramount importance in fixed-point arithmetic (see Section 10.1.2). Some applications require a very low-cost processor with low power dissipation (like cell phones, for instance), so that fixed-point processors have the advantage in these markets.
12.3.2 Some Examples
We present here a few examples of existing DSPs, to which we will refer later to illustrate the different concepts.
Texas Instruments has been successful with its TMS320 series for nearly 20 years. The latest generation of these processors is the TMS320C6xxx family, which represents very high performance DSPs. This family can be grossly divided into the TMS320C62xx, which are fixed-point processors, and the TMS320C67xx, which use floating-point arithmetic.
Another example of a slightly older DSP is the DSP56300 from Motorola. The newer, state-of-the-art DSPs from this company are based on the very high performance StarCore DSP core, which yields unprecedented processing power. This is the result of a common initiative with Agere Systems.
Analog Devices produces one of the most versatile low-cost DSPs, namely the ADSP21xx family. The next generation of Analog Devices DSPs is called SHARC, and contains the ADSP21xxx family of 32-bit floating-point processors. Finally, their latest generation is called TigerSHARC, and provides both fixed
and floating-point operation, on variable word lengths. The TigerSHARC is a very high performance DSP targeting mainly very demanding applications, such as telecommunication network equipment.
12.3.3 Structural features of DSPs
We will now review several features of digital signal processors that are designed to meet the demanding constraints of real-time DSP.

Memory Access: Harvard architecture
Most DSPs are designed according to the so-called Harvard architecture. In classic microprocessors, there is only one memory bank, one address bus and one data bus. Both program instructions and data are fetched from the same memory through the same buses, as illustrated in Figure 12.2(a). On the other hand, in order to increase data throughput, DSP chips in general have two separate memory blocks, one for program instructions and one for data. This is the essence of the Harvard architecture, where each memory block also has a separate set of buses, as illustrated in Figure 12.2(b). Combined with pipelining (see below), this architecture enables simultaneous fetching of the next program instruction while fetching the data for the current instruction, thus increasing data throughput. This architecture enables two memory accesses during one clock cycle.
Many DSP chips have an architecture similar to the one in Figure 12.2(b), or an enhanced version of it. A common enhancement is the duplication of the data memory: many common DSP operations use two operands, like e.g. the MAC operation. It is thus natural to imagine fetching two data words from memory while fetching one instruction. This is realized in practice either with two separate data memory banks with their own buses (as in the Motorola DSP563xx family; see Figure 12.3), or by using two-way accessible RAM (as in Analog Devices' SHARC family), in which case only the buses have to be duplicated. Finally, another enhancement found in some DSPs (such as SHARC processors) is a program instruction cache.
Other critical components that enable an efficient flow of data on and off the DSP chip are Direct Memory Access (DMA) units, which enable direct storage of data in memory without processor intervention (see e.g. the architecture of the Motorola DSP56300 in Figure 12.3).
Specialized hardware units and instruction set
In order to accommodate the constraints of DSP algorithms, DSP chips usually comprise specialized hardware units not found in many other processors.
Most if not all DSP chips contain one or several dedicated hardware multipliers, or even MAC units. On conventional processors without hardware multipliers, binary multiplication can take several clock cycles; in a DSP, it only takes one clock cycle. The multiplier is associated with an accumulator register (in a MAC unit), which is normally wider than a normal register. For instance, the Motorola 563xx family (see Figure 12.3) contains a 24-bit MAC unit and a 56-bit accumulator. The extra bits provided by the accumulator are guard bits that prevent overflow from occurring during repetitive accumulate operations, such as the computation of the output of an FIR filter. In order to store the accumulated value into a normal register or in memory, the accumulator value must be shifted right by the number of guard bits used, so that resolution is lost, but no overflow occurs. Figure 12.4 illustrates the operation of a MAC unit, as well as the
Fig. 12.2 Illustration of the difference between a typical microprocessor memory access (a) and a typical Harvard architecture (b).
need for guard bits. Note that G guard bits prevent overflow in the worst case of the accumulation of 2^G words.
The issue of shifting the result stored in the accumulator is very important: the programmer has to keep track of the implied binary point, depending on the type of binary representation used (integer, fractional or mixed). The instruction set of DSP chips often contains a MAC instruction that not only performs the MAC operation, but also increments pointers in memory.
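The effect of guard bits can be simulated in software. The sketch below is illustrative only (the word lengths and operand values are arbitrary choices of ours): four products are accumulated in a plain 16-bit accumulator, where the sum wraps around and appears negative, and in a 19-bit accumulator with G = 3 guard bits, where it does not:

```python
def wrap(value, bits):
    """Two's-complement wraparound of an integer to a given word length."""
    mask = (1 << bits) - 1
    value &= mask
    if value >= 1 << (bits - 1):  # sign bit set -> interpreted as negative
        value -= 1 << bits
    return value

# Accumulate four products that each nearly fill a 16-bit word.
products = [30000, 30000, 30000, 30000]
acc16 = 0   # plain 16-bit accumulator (no guard bits)
acc19 = 0   # 19-bit accumulator: G = 3 guard bits, so 2^3 = 8 sums are safe
for p in products:
    acc16 = wrap(acc16 + p, 16)  # overflows: the positive sum turns negative
    acc19 = wrap(acc19 + p, 19)  # stays correct
print(acc16, acc19)
```

The true sum is 120000; the 16-bit accumulator reports a negative value instead, which is exactly the failure mode that guard bits are designed to prevent.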
We have also seen that DSP algorithms often contain repetitive instructions. Remember that to compute one output sample of an N-tap FIR filter, you need to repeat N MAC operations. This is why DSPs contain a provision for efficient looping (called zero-overhead looping), which may be a special assembly language instruction — so that the programmer does not have to care about checking and decrementing loop counters — or even a dedicated hardware unit.
The third component necessary to the implementation of DSP algorithms is an efficient addressing mechanism. This is often taken care of by a specialized hardware Address Generation Unit. This unit works in the background, generating addresses in parallel with the main processor computations. Some common features of a DSP's address generation unit are:
 - register-indirect addressing with post-increment, where a register points to an address in memory, from which the operand of the current instruction is fetched, and the pointer is incremented automatically at the end of the instruction. All of this happens in one instruction cycle. This is a low-level equivalent of the C language *(var_name++) construct. Such a feature is obviously very useful when many contiguous locations in memory have to be accessed, which is common when processing signal samples.
 - circular addressing, where addresses are incremented modulo a given buffer size. This is an easy way to implement a FIFO or queue, which is the best way of creating a window sliding over a series of samples, as in the FIR filtering example given above. The only overhead for the programmer is to set up the circular addressing mode and the buffer size in some control register.
Fig. 12.3 Simplified architecture of the Motorola DSP563xx family (source: Motorola [9]). The DSP563xx is a family of 24-bit fixed-point processors. Notice the 3 separate memory banks with their associated sets of buses. Also shown is the DMA unit with its dedicated bus linked directly to the memory banks and off-chip memory.
Fig. 12.4 Operation of a MAC unit with guard bits. Two N-bit operands are multiplied, leading to a 2N-bit result, which is accumulated in a (2N+G)-bit accumulator register. The need for guard bits is illustrated with the given operands: the values of the accumulator at different times are shown on the right of the figure, showing that the fourth operation would generate an overflow without the guard bits: the positive result would appear as a negative value. It is the programmer's task in this case to keep track of the implied binary point when shifting back the accumulator value to an N-bit value.
 - bit-reversed addressing, where address offsets are bit-reversed. This has been shown to be useful for the implementation of FFT algorithms.
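Circular addressing, from the list above, can be emulated in software with a modulo-indexed buffer; the following is an illustrative sketch (class and method names are ours):

```python
class CircularBuffer:
    """Sliding window over a sample stream via modulo (circular) addressing."""

    def __init__(self, size):
        self.buf = [0.0] * size
        self.head = 0  # index of the slot holding the oldest sample

    def push(self, sample):
        # Overwrite the oldest sample; advance the pointer modulo buffer size,
        # mimicking the address generation unit's circular mode.
        self.buf[self.head] = sample
        self.head = (self.head + 1) % len(self.buf)

    def window(self):
        # Samples ordered newest first, as an FIR FIFO would present them.
        n = len(self.buf)
        return [self.buf[(self.head - 1 - k) % n] for k in range(n)]
```

No samples are ever copied or shifted: only the head pointer moves, which is exactly why circular addressing makes FIFO maintenance free on a DSP.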
Parallelism and Pipelining
At some point, it becomes difficult to increase the clock speed enough to satisfy ever more demanding DSP algorithms. So, in order to meet real-time constraints in more and more sophisticated DSP applications, other means of increasing the number of operations carried out by the processor have been implemented: parallelism and pipelining.

Pipelining: Pipelining is a principle applied in any modern processor, and is not limited to DSPs. Its goal is to make better use of the processor resources by avoiding leaving some components idle while others are working. Basically, the execution of an instruction requires three main steps, each carried out by a different part of the processor: a fetch step, during which the instruction is fetched from memory; a decode step, during which the instruction is decoded; and an execute step, during which it is actually executed. Often, each of these three steps can also be decomposed into sub-steps: for example, the execution may require fetching operands, computing a result and storing the result. The idea behind pipelining is simply to begin fetching the next instruction as soon as the current one enters the decode stage. When the current instruction reaches the execute stage, the next one is at the decode stage, and the instruction after that is in the fetch stage. In this way, the processor resources are used more efficiently and the actual instruction throughput is increased. Figure 12.5 illustrates the pipelining principle. In practice, care must be taken because instructions do not all require the same time (mostly during the execution phase), so that NOPs must sometimes be inserted. Also, branching instructions might ruin the gain of pipelining for a few cycles, since some instructions already in the pipeline might have to be dropped. These issues are far beyond the scope of this text and the interested reader is referred to the references for details.
Fig. 12.5 Schematic illustration of the difference in throughput between a pipelined (right) and non-pipelined (left) architecture. Columns represent processor functional units (fetch, decode, execute); lines represent different time instants. Idle units are shaded.
Data level parallelism: Data level parallelism is achieved in many DSPs by duplication of hardware units. It is not uncommon in this case to have two or more ALUs (Arithmetic and Logic Units), or two or more MACs and accumulators, each with their own data path, so that they can be used simultaneously. Single-Instruction-Multiple-Data (SIMD) processors can then apply the same operation to several data operands at the same time (two simultaneous MACs, for instance). Another way of implementing this is to have the processor accommodate multiple data lengths, so that for instance a 16-bit multiplier can accomplish 2 simultaneous 8-bit multiplications. A good example of this idea is the TigerSHARC processor from Analog Devices, which is illustrated in Figure 12.6. Figure 12.6(a) illustrates the general architecture of the processor core, with multiple buses and memory banks, multiple address generation units, and two parallel 32-bit computational units. Figure 12.6(b) shows how these two computational units can be used in parallel, together with sub-word parallelism, to enhance the throughput of sub-word operations: in the case illustrated, 8 16-bit MAC operations can be carried out in just one cycle.
SIMD is efficient only for algorithms that lend themselves to this kind of parallel execution, and not, for instance, where data are treated serially or where a low-delay feedback loop exists. But it is very efficient for vector-intensive operations like multimedia processing.
Instruction level parallelism: Instruction level parallelism is achieved by having different units with different roles running concurrently. Two main families of parallel DSPs exist, namely those with VLIW (Very
Long Instruction Word) architectures and those with superscalar architectures. Superscalar architectures
are used heavily in general purpose processors (e.g. Intel's Pentium), and contain a special scheduling unit
which determines at run time which instructions can run in parallel, based e.g. on data dependencies. This kind of behaviour is not desirable in DSPs, because the run-time determination of parallelism makes
the execution time difficult to predict, whereas predictability is of paramount importance in real-time
operation. This is why there are few superscalar DSPs available.
VLIW architectures [6] are characterized by long instruction words, which are in fact concatenations of
Fig. 12.6 Simplified view of an Analog Devices TigerSHARC architecture. (a) The figure shows the three 128-bit buses and three memory banks, as well as two computational units (each containing an ALU, a shifter, a multiplier and registers) and two address generation units. The programmer can assign program and/or data to each bus/memory bank. (b) Illustration of the data-level parallelism by SIMD operation: each computational unit can execute two 32-bit × 32-bit fixed-point MACs per cycle, but can also work in subword parallelism, executing up to four 16-bit × 16-bit MACs, for a total of eight simultaneous 16-bit × 16-bit MACs per cycle. (Source: ADI [1])
several shorter instructions. Each short instruction will ideally execute in parallel with the others, on a different hardware unit. This principle is illustrated in Figure 12.7. This heavy parallelism normally provides an
enormous increase in instruction throughput, but in practice some limitations will prevent the hardware units
from working at full rate in parallel all the time. For instance, data dependencies between the short instructions might prevent their parallel execution; also, if some of the short instructions in a VLIW instruction use
the same resources (e.g. registers), they will not be able to execute in parallel.
A good illustration of these limitations is given by a practical example of a VLIW processor:
the TMS320C6xxx family from Texas Instruments. The simplified architecture of this processor is shown
in Figure 12.8, where the address buses have been omitted for clarity. In the TMS320C6xxx, the instruction
words are 256 bits wide, each containing eight 32-bit instructions. Each of these 32-bit instructions is
executed by one of eight computational units. These eight units consist of four unit types, duplicated and
grouped into so-called data paths A and B (see Figure 12.8): ALU, ALU/shifter, address generation unit
and multiplier. Each data path contains a set of registers, and both data paths share the same data memory.
Clearly, for all eight instructions of a VLIW instruction to be executed in parallel, exactly one instruction
must be executed on each computational unit; for instance, the VLIW instruction must contain exactly two
multiplications, one for each multiplier. The actual 32-bit instructions from the same VLIW word that are executed in
parallel are in fact chosen at assembly time, by a special directive in the TMS320C6xxx assembly language.
Of course, if the program is written in a high-level language such as C, the compiler takes care of this
instruction organization and grouping.
Another example of a processor using VLIW is the TigerSHARC from Analog Devices, described earlier (see
Figure 12.6). This DSP combines SIMD and VLIW to attain a very high level of parallelism. VLIW means
Fig. 12.7 Principle of a VLIW architecture: long instructions arriving on a wide instruction bus are split into short instructions executed in parallel by different hardware units.
that each of the two computational units in Figure 12.6(b) can be issued a different instruction.
Other Implementations
We have reviewed the characteristics of DSPs and shown why they are suited to the implementation of DSP
algorithms. However, other choices are possible when a DSP algorithm has to be implemented in an
end product. These choices are listed below and compared with a PDSP-based solution.
General Purpose Processor: Nowadays, general purpose processors, such as Intel's Pentium, run at extremely high clock rates. They also have very efficient caching and pipelining, and make extensive use of
superscalar architectures. Most of them also have a SIMD extension to their instruction set (e.g. MMX for
Intel). However, they suffer from several drawbacks for DSP implementations: high power consumption and the need
for proper cooling; higher cost (at least compared with most fixed-point DSPs); and lack of execution-time predictability due to multiple levels of caching and dynamic superscalar scheduling, which makes them less suited to hard
real-time requirements.
ASIC: An Application-Specific Integrated Circuit will execute the required algorithm much faster than a
programmable DSP processor, but will of course lack any reconfigurability, so upgrade and maintenance of
products in the field are impossible. The design costs are also prohibitive except for very high volumes of
deployed units.
FPGA: Field-Programmable Gate Arrays are hardware-reconfigurable circuits that represent a middle
way between a programmable processor and an ASIC. They are faster than PDSPs, but in general have
Fig. 12.8 Simplified architecture of a VLIW processor: the Texas Instruments TMS320C6xxx. Each VLIW instruction is made of eight 32-bit instructions, for up to eight parallel instructions issued to the eight processing units divided into operational units A and B (each comprising an ALU, an ALU/shifter, an address generation unit, a multiplier and a register file, sharing the same data memory). (Source: TI [13])
higher power consumption. They are more expensive than PDSPs, and the design work also tends to take longer
than on a PDSP.
12.4 Benchmarking PDSPs
Once the decision has been made to implement an algorithm on a PDSP, the time comes to choose a model
amongst the many available brands and families. Many factors enter into this decision, and we review
some of them here.
Fixed or Floating Point: As stated earlier, floating-point DSPs have a higher dynamic range and do not
require the programmer to worry as much about overflow and round-off errors. This comes at the expense
of higher cost and higher power consumption. So, most of the time, the application will decide: applications where low cost and/or low power consumption are an issue (large volume, mobile, . . . ) will call for a
fixed-point DSP. Once the type of arithmetic is chosen, one still has to choose the word width (in bits). This
is crucial for fixed-point applications, since the number of bits available directly defines the attainable
dynamic range.
Speed: Manufacturers publish the speed of their devices as MIPS, MOPS or MFLOPS. MIPS (millions
of instructions per second) simply gives a hypothetical peak instruction rate through the DSP. It is
determined by

MIPS = N_I / T_I,

where T_I is the processor's instruction cycle time in µs, and N_I is the maximum number of instructions executed per cycle (N_I > 1 for processors with parallelism). Due to the big differences between the instruction
sets, architectures and operational units of different DSPs, it is difficult to draw conclusions from MIPS measurements: instructions on one DSP can be much more powerful than on another, so that a lower instruction
count will do the same job.
MOPS and MFLOPS (millions of operations per second and millions of floating-point operations per second) have no consensual definition: no two device manufacturers agree on what an "operation" might
be.
The only real way of comparing different DSPs in terms of speed is to run highly optimized benchmark
programs on the devices to be compared, covering typical DSP operations: FIR filtering, FFT, . . . Figure 12.9
shows an example of such a benchmark, the BDTIMark2000, produced by Berkeley Design Technologies.
Design issues: Time-to-market is an important factor that will perhaps be the main driving force behind
the choice of a DSP. To ease the work of design engineers, the DSP manufacturer must provide
development tools (compiler, assembler). Since most new DSPs have their own instruction set and architecture, designers with prior knowledge of one DSP, or with legacy code to reuse, may opt for a new DSP that is
code-compatible with the one they already use.
Fig. 12.9 Speed benchmark for several available DSPs from Analog Devices, Motorola, Texas Instruments, Agere and StarCore (ADSP-21xx, ADSP-2106x, ADSP-2116x, DSP16xxx, DSP563xx, DSP568xx, MSC8101, TMS320C54xx, TMS320C62xx, TMS320C67xx). The benchmark is the BDTIMark2000, by Berkeley Design Technologies (higher is faster). BDTIMark2000 is an aggregate speed benchmark computed from the speed of the processor in accomplishing common DSP tasks such as FFT, FIR filtering, IIR filtering, . . . (Source: BDTI)
New high-level development tools should ease the development of DSP algorithms and their implementation. Some manufacturers collaborate, e.g. with The MathWorks, to develop MATLAB/SIMULINK
toolboxes enabling direct design and/or simulation of software for the DSP chip in MATLAB and SIMULINK.
12.5 Current Trends
Multiple DSPs: Numerous new applications cannot be tackled by one DSP processor, even the most
heavily parallel ones. As a consequence, new DSP chips are developed that are easily grouped in parallel
or in clusters to yield heavily parallel computing engines [2]. Of course, new challenges appear in the
parallelisation of algorithms, the management of shared resources, and the resolution of data dependencies.
FPGAs coming on fast: FPGA manufacturers are aggressively entering the DSP market, due mainly to
the very high densities attained by their latest products and to the availability of high-level development
tools (going directly from a block-level view of the algorithm to the FPGA). The claimed advantages of FPGAs
over PDSPs are speed and a completely flexible structure that allows for complete customization and highly
parallel implementations [12]. For example, one manufacturer now promises the capability of implementing
192 eighteen-bit multiplications in parallel.
Chapter 13
Multirate systems
13.1 Introduction
Multirate system: A multirate DSP system is characterized by the use of multiple sampling rates in its
realization; that is, signal samples at various points in the system may not correspond to the same physical
sampling frequency.
Some applications of multirate processing include:
• Sampling rate conversion between different digital audio standards.
• Digital antialiasing ﬁltering (used in commercial CD players).
• Subband coding of speech and video signals.
• Subband adaptive ﬁltering.
In some applications, the need for multirate processing comes naturally from the problem deﬁnition (e.g.
sampling rate conversion). In other applications, multirate processing is used to achieve improved perfor
mance of a DSP function (e.g. lower binary rate in speech compression).
Sampling rate modification: The problem of sampling rate modification is central to multirate signal
processing. Suppose we are given the samples of a continuous-time signal x_a(t), that is

x[n] = x_a(nT_s), n ∈ Z   (13.1)

with sampling period T_s and corresponding sampling frequency F_s = 1/T_s. Then, how can we efficiently
generate a new set of samples

x′[n] = x_a(nT_s′), n ∈ Z   (13.2)

corresponding to a different sampling period, i.e. T_s′ ≠ T_s?

Assuming that the original sampling rate satisfies F_s > 2F_max, where F_max represents the maximum frequency contained in the analog signal x_a(t), one possible approach is to reconstruct x_a(t) from the set of samples x[n] (see
Sampling Theorem, Section 7.1.2) and then to resample x_a(t) at the desired rate F_s′ = 1/T_s′. This approach
is expensive, as it requires the use of D/A and A/D devices.
In this Chapter, we seek a cheaper, purely discrete-time solution to the above problem, i.e. performing the
resampling by direct manipulation of the discrete-time signal samples x[n], without going back to the analog
domain.

The following special terminology will be used here:

• T_s′ > T_s (i.e. F_s′ < F_s) ⇒ downsampling

• T_s′ < T_s (i.e. F_s′ > F_s) ⇒ upsampling or interpolation
13.2 Downsampling by an integer factor
Let T_s′ = MT_s where M is an integer ≥ 1. In this case, we have:

x′[n] = x_a(nT_s′) = x_a(nMT_s) = x[nM]   (13.3)

The above operation is so important in multirate signal processing that it is given the special name of
decimation. Accordingly, a device that implements (13.3) is referred to as a decimator. The basic properties
of the decimator are studied in greater detail below.
M-fold decimation:
Decimation by an integer factor M ≥ 1, or simply M-fold decimation, is defined by the following operation:

x_D[n] = D_M{x[n]} ≜ x[nM], n ∈ Z   (13.4)

That is, the output signal x_D[n] is obtained by retaining only one out of every M input samples x[n].
The block diagram form of the M-fold decimator is illustrated in Figure 13.1.

Fig. 13.1 M-fold decimator: the block ↓M converts x[n] at rate 1/T_s into x_D[n] at rate 1/(MT_s).

The decimator is sometimes called a sampling rate compressor or subsampler. It should be clear from (13.4) that the decimator is a linear
system, albeit a time-varying one, as shown by the following example.
Example 13.1:

Consider the signal

x[n] = cos(πn/2)u[n] = {. . . , 0, 0, 1, 0, −1, 0, 1, 0, −1, . . .}

If x[n] is applied at the input of a 2-fold decimator (i.e. M = 2), the resulting signal will be

x_D[n] = x[2n] = cos(πn)u[2n] = {. . . , 0, 0, 1, −1, 1, −1, . . .}

Only those samples of x[n] corresponding to even values of n are retained in x_D[n]. This is illustrated in
Figure 13.2.

Fig. 13.2 Example of 2-fold decimation: decimating the shifted input x̃[n] = x[n−1] yields an output x̃_D[n] ≠ x_D[n−1].

Now consider the following shifted version of x[n]:

x̃[n] = x[n−1] = {. . . , 0, 0, 1, 0, −1, 0, 1, 0, −1, . . .}

If x̃[n] is applied at the input of a 2-fold decimator, the output signal will be

x̃_D[n] = x̃[2n] = {. . . , 0, 0, 0, 0, 0, 0, . . .}

Note that x̃_D[n] ≠ x_D[n−1]. This shows that the 2-fold decimator is time-varying.
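The example above is easy to reproduce numerically. In the sketch below (using NumPy, with array slicing standing in for the decimator), shifting the input by one sample does not merely shift the decimated output:

```python
import numpy as np

def decimate(x, M):
    """M-fold decimation: x_D[n] = x[nM] (Eq. 13.4)."""
    return x[::M]

n = np.arange(12)
x = np.cos(np.pi * n / 2)                     # cos(pi n / 2) u[n], n >= 0
xD = decimate(x, 2)                           # 1, -1, 1, -1, ...

x_shifted = np.concatenate(([0.0], x[:-1]))   # x[n-1]
xD_shifted = decimate(x_shifted, 2)           # numerically ~ 0 everywhere

print(xD[:4])                                 # ~ 1, -1, 1, -1
print(np.abs(xD_shifted).max())               # ~ 0: x~_D[n] != x_D[n-1]
```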
Property: Let x_D[n] = x[nM] where M is a positive integer. The z-transforms of the sequences x_D[n] and
x[n], denoted X_D(z) and X(z), respectively, are related as follows:

X_D(z) = (1/M) ∑_{k=0}^{M−1} X(z^{1/M} W_M^k)   (13.5)

where W_M = e^{−j2π/M}.
Proof: Introduce the sequence

p[l] ≜ (1/M) ∑_{k=0}^{M−1} e^{j2πkl/M} = { 1, l = 0, ±M, ±2M, . . . ; 0, otherwise }   (13.6)

Making use of p[l], we have:

X_D(z) ≜ ∑_{n=−∞}^{∞} x_D[n] z^{−n} = ∑_{n=−∞}^{∞} x[nM] z^{−(nM)/M}
  = ∑_{l=−∞}^{∞} p[l] x[l] z^{−l/M}
  = ∑_{l=−∞}^{∞} { (1/M) ∑_{k=0}^{M−1} e^{j2πkl/M} } x[l] z^{−l/M}
  = (1/M) ∑_{k=0}^{M−1} ∑_{l=−∞}^{∞} x[l] (z^{1/M} e^{−j2πk/M})^{−l}
  = (1/M) ∑_{k=0}^{M−1} X(z^{1/M} W_M^k)   (13.7)
Remarks:

• In the special case M = 2, we have W_2 = e^{−jπ} = −1, so that

X_D(z) = (1/2) [ X(z^{1/2}) + X(−z^{1/2}) ]   (13.8)

• Setting z = e^{jω} in (13.5), an equivalent expression is obtained that relates the DTFTs of x_D[n] and x[n],
namely:

X_D(ω) = (1/M) ∑_{k=0}^{M−1} X((ω − 2πk)/M)   (13.9)

• According to (13.9), the spectrum X_D(ω) is made up of properly shifted, expanded images of the
original spectrum X(ω).
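Relation (13.5) is easy to check numerically for a finite-length sequence. The sketch below evaluates both sides of the M = 2 case (Eq. 13.8) at an arbitrary point on the unit circle; the test sequence and evaluation point are arbitrary choices:

```python
import numpy as np

def ztrans(x, z):
    """z-transform of a finite causal sequence: X(z) = sum_n x[n] z^-n."""
    n = np.arange(len(x))
    return np.sum(x * z ** (-n))

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
xD = x[::2]                        # 2-fold decimation

z = np.exp(1j * 0.7)               # arbitrary point on the unit circle
z_half = np.exp(1j * 0.35)         # one square root z^{1/2}

lhs = ztrans(xD, z)
rhs = 0.5 * (ztrans(x, z_half) + ztrans(x, -z_half))
print(abs(lhs - rhs))              # ~ 0: Eq. (13.8) holds
```

The terms at odd n cancel between X(z^{1/2}) and X(−z^{1/2}), leaving only the even-indexed samples, exactly as in the proof via p[l].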
Example 13.2:

In this example, we illustrate the effects of M-fold decimation in the frequency domain. Consider a
discrete-time signal x[n] with DTFT X(ω) as shown in Figure 13.3 (top), where ω_max = π/2 denotes the
maximum frequency contained in x[n]. That is, X(ω) = 0 if ω_max ≤ |ω| ≤ π. Let x_D[n] denote the result
of an M-fold decimation of x[n].

First consider the case M = 2, where (13.9) simplifies to

X_D(ω) = (1/2) [ X(ω/2) + X((ω − 2π)/2) ]   (13.10)
Fig. 13.3 DTFT X(ω) of the original signal x[n] with ω_max = π/2 (top), and DTFT X_D(ω) of the decimated signal for M = 2 (middle) and M = 3 (bottom).
In this equation, X(ω/2) is a dilated version of X(ω) by a factor of 2 along the ω-axis. Note that X(ω/2) is
no longer of period 2π; instead its period is 4π. Besides the scaling factor 1/2, X_D(ω) is obtained as the
sum of X(ω/2) and its version shifted by 2π, i.e. X((ω − 2π)/2). The introduction of this latter component
restores the periodicity to 2π. The resulting DTFT X_D(ω) is illustrated in Figure 13.3 (middle).

Note that in the above case M = 2, the two sets of spectral images X(ω/2) and X((ω − 2π)/2) do not overlap
because ω_max ≤ π/2. Thus, there is no spectral aliasing between the images and, besides the amplitude
scaling and the frequency dilation, the original spectral shape X(ω) is not distorted. In other words, no
information about the original x[n] is lost through the decimation process. We will see later how x[n] can
be recovered from x_D[n] in this case.
Next consider the case M = 3, where (13.9) simplifies to

X_D(ω) = (1/3) [ X(ω/3) + X((ω − 2π)/3) + X((ω − 4π)/3) ]   (13.11)

Here, X_D(ω) is obtained by superposing dilated and shifted versions of X(ω), as shown in Figure 13.3
(bottom). Here the dilation factor is 3 and, since ω_max > π/3, the various images do overlap. That is,
there is spectral aliasing and a loss of information in the decimation process.
Discussion: It can be inferred from the above example that if X(ω) is properly band-limited, i.e.

X(ω) = 0 for π/M ≤ |ω| ≤ π   (13.12)

then no loss of information occurs during the decimation process. In this special case, the relationship (13.9)
between X(ω) and X_D(ω) simplifies to the following if one only considers the fundamental period [−π, π]:

X_D(ω) = (1/M) X(ω/M), |ω| ≤ π   (13.13)
Downsampling system: In practice, a lowpass digital filter with cutoff at π/M would be included in
the processing chain prior to M-fold decimation. The resulting system, referred to as a downsampling system,
is illustrated in Figure 13.4.

Fig. 13.4 Downsampling system: x[n] at rate 1/T_s is filtered by H_D(ω) and then decimated by ↓M, yielding x̃_D[n] at rate 1/(MT_s).

Ideally, the lowpass anti-aliasing filter H_D(ω) should have the following specifications:

H_D(ω) = { 1 if |ω| < π/M ; 0 if π/M ≤ |ω| ≤ π }   (13.14)
13.3 Upsampling by an integer factor

Here we consider the sampling rate conversion problem (13.1)–(13.2) with T_s′ = T_s/L, where L is an integer
≥ 1.

Suppose that the underlying analog signal x_a(t) is band-limited to Ω_N, i.e. X_a(Ω) = 0 for |Ω| ≥ Ω_N, and
that the sequence x[n] was obtained by uniform sampling of x_a(t) at a rate F_s such that Ω_s = 2πF_s ≥ 2Ω_N. In
this case, according to the sampling theorem (Section 7.1.2), x_a(t) can be expressed in terms of its samples
x[n] as

x_a(t) = ∑_{k=−∞}^{∞} x[k] sinc(t/T_s − k)   (13.15)

Invoking (13.2), the new samples x′[n] can therefore be expressed in terms of the original samples x[n] as
follows:

x′[n] = x_a(nT_s′) = ∑_{k=−∞}^{∞} x[k] sinc(nT_s′/T_s − k) = ∑_{k=−∞}^{∞} x[k] sinc((n − kL)/L)   (13.16)

The above equation can be given a simple interpretation if we introduce the concept of sampling rate expansion.
13.3.1 L-fold expansion

Sampling rate expansion by an integer factor L ≥ 1, or simply L-fold expansion, is defined as follows:

x_E[n] = U_L{x[n]} ≜ { x[k] if n = kL, k ∈ Z ; 0 otherwise }   (13.17)
That is, L − 1 zeros are inserted between consecutive samples of x[n]. Real-time implementation of an
L-fold expansion requires that the output sample rate be L times larger than the input sample rate.
The block diagram form of an L-fold expander system is illustrated in Figure 13.5. The expander is also
sometimes called an upsampler in the DSP literature.
Fig. 13.5 L-fold expander: the block ↑L converts x[n] at rate 1/T_s into x_E[n] at rate L/T_s.
Sampling rate expansion is a linear time-varying operation. This is illustrated in the following example.
Example 13.3:

Consider the signal

x[n] = a^n u[n] = {. . . , 0, 0, 1, a, a^2, a^3, . . .}

L-fold expansion applied to x[n], with L = 3, yields

x_E[n] = {. . . , 0, 0, 0, 1, 0, 0, a, 0, 0, a^2, 0, 0, a^3, 0, 0, . . .}

That is, two zero samples are inserted between every two consecutive samples of x[n].

Next, consider the following shifted version of x[n]:

x̃[n] = x[n−1] = {. . . , 0, 0, 0, 1, a, a^2, a^3, . . .}

The result of a 3-fold expansion applied to x̃[n] is

x̃_E[n] = {. . . , 0, 0, 0, 0, 0, 0, 1, 0, 0, a, 0, 0, a^2, 0, 0, a^3, 0, 0, . . .}

Note that x̃_E[n] ≠ x_E[n−1], showing that the expander is a time-varying system. Actually, in the above
example, we have x̃_E[n] = x_E[n−3].
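The expansion operation itself is a one-liner with NumPy slicing. A sketch, with a = 0.5 chosen arbitrarily:

```python
import numpy as np

def expand(x, L):
    """L-fold expansion (Eq. 13.17): insert L-1 zeros between samples."""
    xE = np.zeros(len(x) * L, dtype=float)
    xE[::L] = x
    return xE

a = 0.5
x = a ** np.arange(4)       # 1, a, a^2, a^3
print(expand(x, 3))         # 1, 0, 0, a, 0, 0, a^2, 0, 0, a^3, 0, 0
```

Reading back every L-th sample recovers x[n] exactly, which is the time-domain counterpart of X_E(z) = X(z^L).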
Property: Let x_E[n] be defined as in (13.17), with L a positive integer. The z-transforms of the sequences
x_E[n] and x[n], respectively denoted X_E(z) and X(z), are related as follows:

X_E(z) = X(z^L)   (13.18)

Proof: Since the sequence x_E[n] = 0 unless n = kL, k ∈ Z, we immediately obtain

X_E(z) ≜ ∑_{n=−∞}^{∞} x_E[n] z^{−n} = ∑_{k=−∞}^{∞} x_E[kL] z^{−kL} = ∑_{k=−∞}^{∞} x[k] z^{−kL} = X(z^L)   (13.19)
Remarks: Setting z = e^{jω} in (13.18), we obtain an equivalent relation in terms of the DTFTs of x_E[n] and
x[n], namely:

X_E(ω) = X(ωL)   (13.20)

According to (13.20), the frequency axis of X(ω) is compressed by a factor L to yield X_E(ω). Because of
the periodicity of X(ω), this operation introduces L − 1 images of the (compressed) original spectrum within
the range [−π, π].
Example 13.4:

The effects of an L-fold expansion in the frequency domain are illustrated in Figure 13.6.

Fig. 13.6 Effects of L-fold expansion in the frequency domain (L = 4).

The top portion shows the DTFT X(ω) of the original signal x[n]. The bottom portion shows the DTFT X_E(ω) of x_E[n],
the L-fold expansion of x[n], in the special case L = 4. As a result of the time-domain expansion, the
original spectrum X(ω) has been compressed by a factor of L = 4 along the ω-axis.
13.3.2 Upsampling (interpolation) system

We are now in a position to provide a simple interpretation for the upsampling formula (13.16).

Suppose x_a(t) is band-limited with maximum frequency Ω_N. Consider uniform sampling at rate Ω_s ≥ 2Ω_N followed
by L-fold expansion, versus direct uniform sampling at rate Ω_s′ = LΩ_s. The effects of these operations in the
frequency domain are shown in Figure 13.7 in the special case L = 2 and Ω_s = 2Ω_N.

Clearly, X′(ω) can be recovered from X_E(ω) via lowpass filtering:

X′(ω) = H_I(ω) X_E(ω)   (13.21)

where H_I(ω) is an ideal lowpass filter with specifications

H_I(ω) = { L if |ω| < π/L ; 0 if π/L ≤ |ω| ≤ π }   (13.22)
Fig. 13.7 Sampling at rate Ω_s = 2Ω_N followed by L-fold expansion (left) versus direct sampling at rate Ω_s′ = LΩ_s (right).
The equivalent impulse response of this filter is given by

h_I[n] = sinc(n/L)   (13.23)

Let us verify that the above interpretation of upsampling, as the cascade of a sampling rate expander followed
by a lowpass filter, is consistent with (13.16):

x′[n] = ∑_{k=−∞}^{∞} x[k] sinc((n − kL)/L)
  = ∑_{k=−∞}^{∞} x_E[kL] sinc((n − kL)/L)
  = ∑_{l=−∞}^{∞} x_E[l] sinc((n − l)/L)
  = ∑_{l=−∞}^{∞} x_E[l] h_I[n − l]   (13.24)
A block diagram of a complete upsampling system operating in the discrete-time domain is shown in Figure 13.8. We shall also refer to this system as a digital interpolator.

Fig. 13.8 Discrete-time upsampling system: x[n] at rate 1/T_s is expanded by ↑L into x_E[n], then filtered by H_I(ω), yielding x′[n] at rate L/T_s.

In the context of upsampling, the lowpass filter H_I(ω) is called an anti-imaging filter, or also an interpolation filter.
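A discrete-time upsampler can be sketched directly from Figure 13.8: expansion followed by a truncated version of the ideal interpolation filter h_I[n] = sinc(n/L). The truncation length (32 taps on each side here) is an arbitrary design choice; the ideal filter of (13.23) is infinitely long:

```python
import numpy as np

def upsample(x, L, half_len=32):
    """L-fold expansion followed by a truncated h_I[n] = sinc(n/L)."""
    xE = np.zeros(len(x) * L)
    xE[::L] = x                              # expander (Eq. 13.17)
    n = np.arange(-half_len, half_len + 1)
    hI = np.sinc(n / L)                      # np.sinc(t) = sin(pi t)/(pi t)
    return np.convolve(xE, hI)               # Eq. (13.24)

x = np.sin(0.1 * np.arange(50))              # a slowly varying test signal
y = upsample(x, 4)

# Since sinc(k) = 0 for every nonzero integer k, the interpolated signal
# passes through the original samples (up to the filter delay of 32):
print(np.max(np.abs(y[32 + 4 * np.arange(50)] - x)))   # ~ 0
```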
13.4 Changing sampling rate by a rational factor

Suppose Ω_s and Ω_s′ are related as follows:

Ω_s′ = (L/M) Ω_s   (13.25)

Such a conversion of the sampling rate may be accomplished by means of upsampling and downsampling
by appropriate integer factors:

Ω_s → [upsampling by L] → LΩ_s → [downsampling by M] → (L/M) Ω_s   (13.26)

The corresponding block diagram is shown in Figure 13.9. To avoid unnecessary loss of information when
M > 1, upsampling is applied first.
Simplified structure: Recall the definitions of H_I(ω) and H_D(ω):

H_I(ω) = { L if |ω| < π/L ; 0 if π/L ≤ |ω| ≤ π }   (13.27)
Fig. 13.9 Sampling rate alteration by a rational factor: x[n] at rate 1/T_s is upsampled by L (expander ↑L followed by H_I(ω)) and then downsampled by M (H_D(ω) followed by decimator ↓M), yielding x′[n] at rate L/(MT_s).
H_D(ω) = { 1 if |ω| < π/M ; 0 if π/M ≤ |ω| ≤ π }   (13.28)

The cascade of these two filters is equivalent to a single lowpass filter with frequency response:

H_ID(ω) = H_I(ω) H_D(ω) = { L if |ω| < ω_c ; 0 if ω_c ≤ |ω| ≤ π }   (13.29)

where

ω_c ≜ min(π/M, π/L)   (13.30)
The resulting simplified block diagram is shown in Figure 13.10.

Fig. 13.10 Simplified rational-factor sampling rate alteration: expander ↑L, single lowpass filter H_ID(ω), decimator ↓M.
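The structure of Figure 13.10 can be sketched as follows. The Hamming-windowed approximation of H_ID and the filter length are arbitrary design choices; an ideal lowpass filter is not realizable:

```python
import numpy as np

def resample_rational(x, L, M, half_len=64):
    """Rate change by L/M (Fig. 13.10): expand by L, apply a windowed
    approximation of H_ID (gain L, cutoff w_c = min(pi/L, pi/M)),
    then decimate by M."""
    xE = np.zeros(len(x) * L)
    xE[::L] = x                                   # expander
    n = np.arange(-half_len, half_len + 1)
    wc = min(np.pi / L, np.pi / M)                # Eq. (13.30)
    # Ideal lowpass with gain L: h[n] = L sin(wc n)/(pi n), here windowed
    h = L * (wc / np.pi) * np.sinc(wc * n / np.pi) * np.hamming(len(n))
    return np.convolve(xE, h)[::M]                # filter + decimator

# Changing the rate of a constant signal by 3/2 should (approximately)
# return a constant signal of the same amplitude:
y = resample_rational(np.ones(100), L=3, M=2)
print(y[len(y) // 2])   # ~ 1.0
```

The gain L of H_ID exactly compensates the factor 1/L energy spread caused by zero insertion, which is why the DC level is preserved.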
13.5 Polyphase decomposition

Introduction: The polyphase decomposition is a mathematical tool used in the analysis and design of
multirate systems. It leads to theoretical simplifications as well as computational savings in many important
applications.

Consider a system function

H(z) = ∑_{n=−∞}^{∞} h[n] z^{−n}   (13.31)

where h[n] denotes the corresponding impulse response. Consider the following development of H(z), where
the summation over n is split into summations over even and odd indices, respectively:

H(z) = ∑_{r=−∞}^{∞} h[2r] z^{−2r} + ∑_{r=−∞}^{∞} h[2r+1] z^{−(2r+1)}
  = ∑_{r=−∞}^{∞} h[2r] z^{−2r} + z^{−1} ∑_{r=−∞}^{∞} h[2r+1] z^{−2r}
  = E_0(z^2) + z^{−1} E_1(z^2)   (13.32)
where the functions E_0(z) and E_1(z) are defined as

E_0(z) = ∑_{r=−∞}^{∞} h[2r] z^{−r}   (13.33)

E_1(z) = ∑_{r=−∞}^{∞} h[2r+1] z^{−r}   (13.34)

We refer to (13.32) as the 2-fold polyphase decomposition of the system function H(z). The functions E_0(z)
and E_1(z) are the corresponding polyphase components. The above development can be easily generalized,
leading to the following result.
Property: In its region of convergence, any system function H(z) admits a unique M-fold polyphase
decomposition

H(z) = ∑_{l=0}^{M−1} z^{−l} E_l(z^M)   (13.35)

where the functions E_l(z) are called the polyphase components of H(z).

Proof: The basic idea is to split the summation over n in (13.31) into M individual summations over an index
r ∈ Z, such that each summation covers the time indices n = Mr + l, where l = 0, 1, . . . , M−1:

H(z) = ∑_{l=0}^{M−1} ∑_{r=−∞}^{∞} h[Mr+l] z^{−(Mr+l)}
  = ∑_{l=0}^{M−1} z^{−l} ∑_{r=−∞}^{∞} h[Mr+l] z^{−Mr}
  = ∑_{l=0}^{M−1} z^{−l} E_l(z^M)   (13.36)

where we define

E_l(z) = ∑_{r=−∞}^{∞} h[Mr+l] z^{−r}   (13.37)

This shows the existence of the polyphase decomposition. Uniqueness follows by invoking the unique
nature of the series expansion (13.31).
Remarks:

• The proof is constructive, in that it suggests a way of deriving the polyphase components E_l(z)
via (13.37).

• The use of (13.37) is recommended for deriving the polyphase components in the case of an FIR filter
H(z). For IIR filters, there may be simpler approaches.

• In any case, once a polyphase decomposition as in (13.36) has been obtained, one can be sure that it is
the right one, since it is unique.
Example 13.5:

Consider the FIR filter

H(z) = (1 − z^{−1})^6

To find the polyphase components for M = 2, we may proceed as follows:

H(z) = 1 − 6z^{−1} + 15z^{−2} − 20z^{−3} + 15z^{−4} − 6z^{−5} + z^{−6}
  = (1 + 15z^{−2} + 15z^{−4} + z^{−6}) + z^{−1}(−6 − 20z^{−2} − 6z^{−4})
  = E_0(z^2) + z^{−1} E_1(z^2)

where

E_0(z) = 1 + 15z^{−1} + 15z^{−2} + z^{−3}

E_1(z) = −6 − 20z^{−1} − 6z^{−2}
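For an FIR filter, the polyphase components of (13.37) are just interleaved sub-sequences of the impulse response, so the example above can be checked with array slicing:

```python
import numpy as np

def polyphase(h, M):
    """M-fold polyphase components: E_l(z) has taps h[M r + l] (Eq. 13.37)."""
    return [h[l::M] for l in range(M)]

h = np.array([1, -6, 15, -20, 15, -6, 1])    # (1 - z^{-1})^6
E0, E1 = polyphase(h, 2)
print(E0)    # taps of E_0(z):  1, 15, 15, 1
print(E1)    # taps of E_1(z): -6, -20, -6

# Verify H(z) = E0(z^2) + z^{-1} E1(z^2) by re-interleaving the taps:
h_rec = np.zeros_like(h)
h_rec[0::2], h_rec[1::2] = E0, E1
assert np.array_equal(h_rec, h)
```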
Noble identities: The first identity can be used to exchange the order of a decimator and an arbitrary filter
G(z); the second identity can be used to exchange the order of an expander and a filter G(z). These are illustrated in Figures 13.11 and 13.12.

Fig. 13.11 First Noble identity: filtering by G(z^M) followed by ↓M is equivalent to ↓M followed by filtering by G(z).

Fig. 13.12 Second Noble identity: ↑L followed by filtering by G(z^L) is equivalent to filtering by G(z) followed by ↑L.

The proofs of these identities are beyond the scope of this course.
Efficient downsampling structure: The polyphase decomposition can be used along with Noble identity
1 to derive a computationally efficient structure for downsampling.

• Consider the basic block diagram of a downsampling system (Figure 13.13).

Fig. 13.13 Block diagram of a downsampling system: H(z) followed by ↓M, with input rate 1/T_s and output rate 1/(MT_s).

Suppose that the lowpass anti-aliasing filter H(z) is FIR with length N, i.e.

H(z) = ∑_{n=0}^{N−1} h[n] z^{−n}   (13.38)
Assuming a direct form (DF) realization for H(z), the above structure requires N multiplications per
unit time at the input sampling rate. Note that only 1 out of every M samples computed by the FIR
ﬁlter is actually output by the Mfold decimator.
• For simplicity, assume that N is a multiple of M. Making use of the polyphase decomposition (13.36), the equivalent structure for the downsampling system shown in Fig. 13.14 is obtained, where the polyphase components are given here by

E_l(z^M) = ∑_{r=0}^{N/M−1} h[Mr + l] z^{−Mr}    (13.39)

Fig. 13.14 Equivalent structure for the downsampling system: a delay chain of z^{−1} elements feeds the branch filters E_0(z^M), E_1(z^M), ..., E_{M−1}(z^M); their outputs are summed and then decimated by M.
If realized in direct form, each filter E_l(z^M) requires N/M multiplications per unit time. Since there are M such filters, the computational complexity is the same as above.
• Finally, consider interchanging the order of the filters E_l(z^M) and the decimator in Figure 13.14 by making use of Noble identity 1, which leads to the structure of Fig. 13.15. Observe that the resulting structure now only requires N/M multiplications per unit time.

Fig. 13.15 Final equivalent structure for the downsampling system: each branch of the delay chain is decimated by M first and then filtered by E_l(z), l = 0, 1, ..., M−1.
One could argue that in the case of an FIR filter H(z), the computational saving is artificial since, actually, only one out of every M filter outputs in Fig. 13.13 needs to be computed. However, since the input samples arrive at the higher rate F_s, the use of a traditional DF realization for H(z) still requires that the computation of any particular output be completed before the next input sample x[n] arrives and modifies the delay line. Thus the peak required computational rate is still N multiplications per input sample, even though the filtering operation only needs to be activated once every M samples. This may be viewed as an inefficient use of the available computational resources. The real advantage of the structure in Fig. 13.15 is that it makes it possible to spread the FIR filter computation over time, so that the required computational rate is uniform and equal to N/M multiplications per input sample.
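The structure of Fig. 13.15 can be sketched in code as follows (a NumPy illustration, not the text's implementation; the function name and the random test data are ours). Each branch filter E_l runs at the low rate on its own phase of the input, and the result is checked against direct filtering followed by decimation:

```python
import numpy as np

def polyphase_decimate(h, x, M):
    """Decimate x by M with anti-aliasing FIR h, using the polyphase
    structure of Fig. 13.15: M subfilters E_l running at the low rate."""
    N = len(h)
    assert N % M == 0, "for simplicity, assume N is a multiple of M"
    # Pad x so its length is a multiple of M, then process phase by phase.
    x = np.concatenate([x, np.zeros(-len(x) % M)])
    n_out = len(x) // M
    y = np.zeros(n_out)
    for l in range(M):
        El = h[l::M]                                  # E_l(z) has taps h[Mr + l]
        # Phase l of the input: x delayed by l samples, then decimated by M.
        xl = np.concatenate([np.zeros(l), x])[::M][:n_out]
        y += np.convolve(El, xl)[:n_out]              # branch runs at the low rate
    return y

# Reference: filter at the high rate, then keep every M-th output sample.
rng = np.random.default_rng(1)
x = rng.standard_normal(100)
h = rng.standard_normal(8)          # FIR anti-aliasing filter, N = 8
M = 4
ref = np.convolve(h, x)[:len(x)][::M]
out = polyphase_decimate(h, x, M)
assert np.allclose(out[:len(ref)], ref)
```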
Chapter 14
Applications of the DFT and FFT
In this Chapter, we discuss two basic applications of the DFT and associated FFT algorithms.
We recall from Chapter 6 that the linear convolution of two time-limited signals can be realized via circular convolution in the DFT domain. The first application describes an efficient way of realizing FIR filtering via block DFT processing. Essentially, this amounts to decomposing the input signal into consecutive blocks of smaller length and implementing the filtering in the frequency domain via FFT and IFFT processing. This results in significant computational savings, provided a processing delay can be tolerated in the underlying application.
In the second application, we look at the use of the DFT (efﬁciently computed via FFT) as a frequency
analysis or estimation tool. We review the fundamental relationship between the DFT of a ﬁnite signal
and the DTFT and we investigate the effects of the DFT length and windowing function on the spectral
resolution. The effects of measurement noise on the quality of the estimation are also discussed brieﬂy.
14.1 Block FIR ﬁltering
Consider a causal FIR filtering problem as illustrated in Figure 14.1.

Fig. 14.1 Causal FIR filtering: the input x[n] is applied to a causal FIR filter h[n], producing y[n] = h[n] ∗ x[n].

Assume that the filter's impulse response h[n] is time-limited to 0 ≤ n ≤ P−1 and that the input signal x[n] is zero for n < 0. Accordingly, the filter output

y[n] = h[n] ∗ x[n] = ∑_{k=0}^{P−1} h[k] x[n−k]    (14.1)

is also zero for n < 0. If x[n] were also time-limited, then the DFT approach in Section 6.5.1 could be used to compute y[n], provided the DFT size N is large enough.
Two fundamental problems exist with respect to the assumption that x[n] is time-limited:
• in certain applications, it is not possible to set a specific limit on the length of x[n] (e.g. voice communications);

• such a limit, if it exists, may be too large, leading to various technical problems (processing delay, memory requirements, etc.).
In practice, it is still possible to compute the linear convolution (14.1) as a circular convolution via the DFT
(i.e. FFT), but it is necessary to resort to block convolution.
Block convolution: The basic principle of block convolution may be summarized as a series of three steps
as follows:
(1) Decompose x[n] into consecutive blocks of finite length L:

x[n] ⇒ x_r[n],  r = 0, 1, 2, ...    (14.2)

where x_r[n] denotes the samples in the r-th block.

(2) Filter each block individually using the DFT approach:

x_r[n] ⇒ y_r[n] = h[n] ∗ x_r[n]    (14.3)

In practice, the required DFTs and inverse DFTs are computed using an FFT algorithm (more on this later).

(3) Reassemble the filtered blocks (as they become available) into an output signal:

y_r[n] ⇒ y[n]    (14.4)
The above generic steps may be realized in different ways. In particular, there exist two basic methods of block convolution: overlap-add and overlap-save. Both are described below.
14.1.1 Overlap-add method
In this method, the generic block convolution steps take the following form:
(1) Blocking of the input data into non-overlapping blocks of length L:

x_r[n] = { x[n + rL],  0 ≤ n < L
         { 0,          otherwise    (14.5)

where, for convenience of notation, the shift factor rL is introduced so that the non-zero samples of x_r[n] all lie in the range 0 ≤ n ≤ L−1.

(2) For each input block x_r[n], compute its linear convolution with the FIR filter impulse response h[n] (see DFT approach below):

y_r[n] = h[n] ∗ x_r[n]    (14.6)

Note that each block y_r[n] is now of length L + P − 1.
(3) Form the output signal by adding the shifted filtered blocks:

y[n] = ∑_{r=0}^{∞} y_r[n − rL]    (14.7)

The method takes its name from the fact that the output blocks y_r[n] need to be properly overlapped and added in the computation of the desired output signal y[n].
Mathematical justification: It is easy to verify that the overlap-add approach produces the desired convolution result in (14.1). Under the assumption that x[n] = 0 for n < 0, we have from (14.5) that

x[n] = ∑_{r=0}^{∞} x_r[n − rL]    (14.8)

Then, invoking basic properties of the linear convolution, we have

y[n] = h[n] ∗ x[n] = h[n] ∗ ∑_{r=0}^{∞} x_r[n − rL]
     = ∑_{r=0}^{∞} h[n] ∗ x_r[n − rL] = ∑_{r=0}^{∞} y_r[n − rL]    (14.9)
DFT realization of step (2): Recall that the DFT approach to linear convolution amounts to computing a circular convolution. To ensure that the circular and linear convolutions are equivalent, the DFT length, say N, should be sufficiently large and the data must be appropriately zero-padded. Specifically, based on Section 6.5, we may proceed as follows:

• Set the DFT length to N ≥ L + P − 1 (usually a power of 2).

• Compute (once, and store the result)

H[k] = FFT_N{h[0], ..., h[P−1], 0, ..., 0}    (14.10)

• For r = 0, 1, 2, ..., compute

X_r[k] = FFT_N{x_r[0], ..., x_r[L−1], 0, ..., 0}    (14.11)

y_r[n] = IFFT_N{H[k] X_r[k]}    (14.12)
Often, the value of N is selected as the smallest power of 2 that meets the above condition.
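These steps can be sketched in code as follows (a NumPy illustration; the default block length L = P and the helper name overlap_add are our choices, not from the text). The result is checked against direct linear convolution:

```python
import numpy as np

def overlap_add(h, x, L=None):
    """Overlap-add FIR filtering of x by h, following eqs. (14.10)-(14.12)."""
    P = len(h)
    L = L or P                                  # block length (here L ~ P)
    N = 1 << int(np.ceil(np.log2(L + P - 1)))   # smallest power of 2 >= L+P-1
    H = np.fft.fft(h, N)                        # computed once, eq. (14.10)
    y = np.zeros(len(x) + N)
    for r in range(0, len(x), L):
        Xr = np.fft.fft(x[r:r + L], N)          # eq. (14.11), zero-padded
        # Each output block is length L+P-1 <= N and lands shifted by rL,
        # overlapping its neighbours as in eq. (14.7).
        y[r:r + N] += np.fft.ifft(Xr * H).real  # eq. (14.12)
    return y[:len(x) + P - 1]

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
h = rng.standard_normal(32)
assert np.allclose(overlap_add(h, x), np.convolve(h, x))
```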
Computational complexity: The computational complexity of the overlap-add method can be evaluated as follows. For each block of L input samples, we need

• (N/2) log_2 N multiplications to compute X_r[k] in (14.11)

• N multiplications to compute H[k] X_r[k] in (14.12)

• (N/2) log_2 N multiplications to compute y_r[n] in (14.12)

for a total of N log_2 N + N multiplications per L input samples. Assuming P ≈ L for simplicity, we may set N = 2L. The resulting complexity of the overlap-add method is then

2 log_2 L + 4

multiplications per input sample. Linear filtering based on (14.1) requires P multiplications per input sample. Thus, a significant gain may be expected when P ≈ L is large (e.g. L = 1024 ⇒ 2 log_2 L + 4 = 24 ≪ L).
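The arithmetic for L = 1024 can be checked directly:

```python
import math

L = 1024
P = L                     # assume P ~ L, as in the text
N = 2 * L                 # N = 2L is a power of 2 with N >= L + P - 1
total = N * math.log2(N) + N          # multiplications per L input samples
per_sample = total / L
print(per_sample)                     # equals 2*log2(L) + 4
```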
Example 14.1: Overlap-add

The overlap-add process is depicted in Figures 14.2 and 14.3. The top part of Figure 14.2 shows the FIR filter h[n] as well as the input signal x[n], assumed to be semi-infinite (right-sided). The three bottom parts of Figure 14.2 show the first three blocks of the input, x_0[n], x_1[n] and x_2[n]. Figure 14.3 shows the result of the convolution of these blocks with h[n]. Notice that the convolution results are longer than the original blocks. Finally, the bottom plot of Figure 14.3 shows the reconstructed output signal obtained by adding up the overlapping blocks y_r[n].
Fig. 14.2 Illustration of blocking in the overlap-add method. Top: the FIR filter h[n]; below: the input signal x[n] and the different blocks x_r[n].
Fig. 14.3 Illustration of reconstruction in the overlap-add method. Top: the blocks y_0[n], y_1[n], y_2[n] after convolution; bottom: the output signal y[n].
14.1.2 Overlap-save method (optional reading)
This method contains the following steps:
(1) Blocking of the input data into overlapping blocks of length L, overlapping by P − 1 samples:

x_r[n] = { x[n + r(L−P+1) − P + 1],  0 ≤ n < L
         { 0,                        otherwise    (14.13)

(2) For each input block, compute the L-point circular convolution with the FIR filter impulse response h[n]:

y_r[n] = h[n] ⊛ x_r[n]    (14.14)

Note that each block y_r[n] is still of length L. Since this circular convolution does not respect the criterion in (6.63), there will be time-aliasing. By referring to the example in Figure 6.12, one sees that the first P − 1 samples of this circular convolution will be corrupted by aliasing, whereas the remaining L − P + 1 samples will remain equal to the true samples of the corresponding linear convolution x_r[n] ∗ h[n].

(3) Form the output signal by adding the shifted filtered blocks after discarding the leading P − 1 samples of each block:

y[n] = ∑_{r=0}^{∞} y_r[n − r(L−P+1)] (u[n − P − r(L−P+1)] − u[n − L − r(L−P+1)])    (14.15)
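These three steps can be sketched as follows (a NumPy illustration; the block length L = 64 and the zero-padding conventions are our choices, not from the text). The output is checked against direct linear convolution:

```python
import numpy as np

def overlap_save(h, x, L=64):
    """Overlap-save FIR filtering: length-L input blocks overlapping by P-1
    samples, an L-point circular convolution per block (via the FFT), and the
    first P-1 aliased output samples of each block discarded."""
    P = len(h)
    assert P <= L
    hop = L - P + 1                  # fresh input samples consumed per block
    H = np.fft.fft(h, L)
    # P-1 leading zeros let the first block be treated like all the others;
    # trailing zeros flush the final input samples through.
    xp = np.concatenate([np.zeros(P - 1), x, np.zeros(L)])
    out, start = [], 0
    while start < len(x) + P - 1:
        yr = np.fft.ifft(np.fft.fft(xp[start:start + L]) * H).real
        out.append(yr[P - 1:])       # keep only the unaliased samples
        start += hop
    return np.concatenate(out)[:len(x) + P - 1]

rng = np.random.default_rng(3)
x = rng.standard_normal(500)
h = rng.standard_normal(16)
assert np.allclose(overlap_save(h, x), np.convolve(h, x))
```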
Example 14.2: Overlap-save

The overlap-save process is depicted in Figures 14.4 and 14.5. The top part of Figure 14.4 shows the FIR filter h[n] as well as the input signal x[n]. The three bottom parts of Figure 14.4 show the first three blocks of the input, x_0[n], x_1[n] and x_2[n]. Figure 14.5 shows the result of the circular convolution of these blocks with h[n]. Notice that the white dots correspond to samples corrupted by aliasing. Finally, the bottom plot of Figure 14.5 shows the reconstructed output signal obtained by adding up the blocks y_r[n] after dropping the aliased samples.
14.2 Frequency analysis via DFT/FFT
14.2.1 Introduction
The frequency content of a discrete-time signal x[n] is defined by its DTFT:

X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn},  ω ∈ [−π, π]    (14.16)

As pointed out in Chapter 6, the practical computation of X(ω) poses several difficulties:

• an infinite number of mathematical operations is needed (the summation over n is infinite and the variable ω is continuous);
Fig. 14.4 Illustration of blocking in the overlap-save method. Top: the FIR filter h[n]; below: the input signal x[n] and the different blocks x_r[n].
Fig. 14.5 Illustration of reconstruction in the overlap-save method. Top: the blocks y_r[n] after circular convolution; bottom: the output signal y[n].
• all the signal samples x[n] from −∞ to ∞ must be available.
Recall the definition of the DFT:

X[k] = ∑_{n=0}^{N−1} x[n] e^{−jω_k n},  k ∈ {0, 1, ..., N−1}    (14.17)

where ω_k = 2πk/N. In practice, the DFT is often used as an approximation to the DTFT for the purpose of frequency analysis. The main advantages of the DFT over the DTFT are:

• a finite number of mathematical operations;

• only a finite set of signal samples x[n] is needed.

If x[n] = 0 for n < 0 and n ≥ N, then the DFT and the DTFT are equivalent concepts. Indeed, as we have shown in Chapter 6, there is in this case a one-to-one relationship between the DFT and the DTFT, namely:

X[k] = X(ω_k)    (14.18)

X(ω) = ∑_{k=0}^{N−1} X[k] P(ω − ω_k)    (14.19)

where P(ω) is an interpolation function defined in (6.18).
In general, however, we are interested in analyzing the frequency content of a signal x[n] which is not a priori time-limited. In this case, the DFT only provides an approximation to the DTFT. In this section:

• we investigate in further detail the nature of this approximation;

• based on this analysis, we present and discuss modifications to the basic DFT computation that lead to improved frequency analysis results.
14.2.2 Windowing effects
Rectangular window: Recall the definition of a rectangular window of length N:

w_R[n] = { 1,  0 ≤ n ≤ N−1
         { 0,  otherwise    (14.20)

The DTFT of w_R[n] will play a central role in our discussion:

W_R(ω) = ∑_{n=0}^{N−1} e^{−jωn} = e^{−jω(N−1)/2} sin(ωN/2) / sin(ω/2)    (14.21)

The corresponding magnitude spectrum is illustrated in Figure 14.6 for N = 16. Note the presence of:

• a main lobe, with

Δω = 2π/N    (14.22)

• sidelobes with a peak amplitude of about 0.22 (−13 dB)
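Both features can be measured numerically from a densely sampled version of (14.21) (a NumPy sketch; the 8192-point zero-padded FFT is simply a convenient way to sample the DTFT on a fine grid):

```python
import numpy as np

N = 16
# Sample |W_R(omega)| densely by zero-padding the window to 8192 points.
W = np.abs(np.fft.fft(np.ones(N), 8192))
W /= W[0]                          # normalize the main-lobe peak to 1

# The first zero of W_R(omega) is at omega = 2*pi/N, i.e. FFT index 8192/N.
first_zero = 8192 // N
assert W[first_zero] < 1e-8

# Peak sidelobe: largest value beyond the first zero, up to omega = pi.
peak_sidelobe = W[first_zero:4096].max()
print(f"peak sidelobe = {peak_sidelobe:.3f} = "
      f"{20 * np.log10(peak_sidelobe):.1f} dB")
```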
Fig. 14.6 Magnitude spectrum of the N = 16 point rectangular window (linear scale), plotted over ω ∈ [−π, π].
Relation between DFT and DTFT: Define the windowed signal

y[n] = w_R[n] x[n]    (14.23)

The N-point DFT of the signal x[n] can be expressed as the DTFT of y[n]:

X[k] = ∑_{n=0}^{N−1} x[n] e^{−jω_k n} = ∑_{n=−∞}^{∞} w_R[n] x[n] e^{−jω_k n} = ∑_{n=−∞}^{∞} y[n] e^{−jω_k n} = Y(ω_k)    (14.24)

where Y(ω) denotes the DTFT of y[n]. The above shows that the DFT can be obtained by uniformly sampling Y(ω) at the frequencies ω = ω_k.
Nature of Y(ω): Since y[n] is obtained as the product of x[n] and w_R[n], we have from the multiplication property of the DTFT (Ch. 3) that

Y(ω) = DTFT{w_R[n] x[n]} = (1/2π) ∫_{−π}^{π} X(φ) W_R(ω − φ) dφ    (14.25)

That is, Y(ω) is equal to the circular convolution of the periodic functions X(ω) and W_R(ω). When computing the N-point DFT of the signal x[n], the windowing of x[n] that implicitly takes place in the time domain is equivalent to the periodic convolution of the desired spectrum X(ω) with the window spectrum W_R(ω).
Example 14.3:

Consider the signal

x[n] = e^{jθn},  n ∈ Z

where 0 < θ < π for simplicity. The DTFT of x[n] is given by

X(ω) = 2π ∑_{k=−∞}^{∞} δ_a(ω − θ − 2πk)

In this special case, equations (14.24) and (14.25) yield

X[k] = (1/2π) ∫_{−π}^{π} X(φ) W_R(ω_k − φ) dφ = W_R(ω_k − θ)

Thus, the DFT values X[k] can be obtained by uniformly sampling the shifted window spectrum W_R(ω − θ) at the frequencies ω = ω_k.
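The sampling relationship X[k] = W_R(ω_k − θ) is easy to verify numerically (a sketch; the values N = 32 and θ = 0.7 are arbitrary illustrative choices, with θ deliberately not a bin frequency so the denominator of (14.21) never vanishes):

```python
import numpy as np

N, theta = 32, 0.7                       # illustrative choices, not from the text
n = np.arange(N)
X = np.fft.fft(np.exp(1j * theta * n))   # N-point DFT of x[n] = e^{j*theta*n}

def W_R(omega, N):
    """DTFT of the length-N rectangular window, eq. (14.21).
    (Valid when omega is not a multiple of 2*pi, which holds here.)"""
    return (np.sin(omega * N / 2) / np.sin(omega / 2)
            * np.exp(-1j * omega * (N - 1) / 2))

omega_k = 2 * np.pi * np.arange(N) / N
assert np.allclose(X, W_R(omega_k - theta, N))   # X[k] = W_R(omega_k - theta)
```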
Effects of windowing: Based on the above example and taking into consideration the special shape of the window magnitude spectrum in Figure 14.6, two main effects of the time-domain windowing on the original DTFT spectrum X(ω) may be identified:

• Due to the non-zero width of the main lobe in Figure 14.6, local smearing (or spreading) of the spectral details in X(ω) occurs as a result of the convolution with W_R(ω) in (14.25). This essentially corresponds to a loss of resolution in the spectral analysis. In particular, if X(ω) contains two narrow peaks at closely spaced frequencies θ_1 and θ_2, it may be that Y(ω) in (14.25) only contains a single wider peak at frequency (θ_1 + θ_2)/2. In other words, the two peaks merge as a result of spectral spreading. The resolution of the spectral analysis with the DFT is basically a function of the window width Δω = 2π/N. Thus the resolution can be increased (i.e. Δω made smaller) by increasing N.

• Due to the presence of sidelobes in Figure 14.6, spectral leakage in X(ω) will occur as a result of the convolution with W_R(ω) in (14.25). For instance, even if X(ω) = 0 in some band of frequencies, Y(ω) will usually not be zero in that band, due to the trailing effect of the sidelobes in the convolution process. The amount of leakage depends on the peak sidelobe level. For a rectangular window, this is fixed at −13 dB.
Improvements: Based on the above discussion, it should be clear that the results of the frequency analysis can be improved by using a different type of window, instead of the rectangular window implicit in (14.24). Specifically:

X̃[k] = ∑_{n=0}^{N−1} w[n] x[n] e^{−jω_k n},  k ∈ {0, 1, ..., N−1}    (14.26)

where w[n] denotes an appropriately chosen window. In this respect, all the window functions introduced in Chapter 9 can be used. In selecting a window, the following approach might be taken:

• Select the window type so that the peak sidelobe level is below an acceptable level.

• Adjust the window length N so that the desired resolution Δω is obtained. For most windows of interest, Δω = C/N, where the constant C depends on the specific window type.

The trade-offs involved in the selection of a window for spectral analysis are very similar to those used in the window method of FIR filter design (Ch. 9).
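This trade-off can be seen by measuring the peak sidelobe level of two windows of the same length (a NumPy sketch; N = 64 and the Hamming window are illustrative choices, and the densely sampled spectrum is again obtained by zero-padding):

```python
import numpy as np

N, pad = 64, 8192

def peak_sidelobe_db(w):
    """Peak sidelobe level (dB) of window w, from a densely sampled spectrum."""
    W = np.abs(np.fft.fft(w, pad))
    W /= W.max()
    i = 1
    while i + 1 < pad // 2 and W[i + 1] < W[i]:
        i += 1                      # walk down the main lobe to its first null
    return 20 * np.log10(W[i:pad // 2].max())

rect_db = peak_sidelobe_db(np.ones(N))       # about -13 dB
hamm_db = peak_sidelobe_db(np.hamming(N))    # roughly -41 to -43 dB
print(f"peak sidelobe: rectangular {rect_db:.1f} dB, Hamming {hamm_db:.1f} dB")
```

The Hamming window buys its much lower sidelobes (hence less leakage) with a main lobe roughly twice as wide, i.e. a larger C in Δω = C/N, so a longer window is needed for the same resolution.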
i
Contents
1 Introduction 1.1 1.2 The concepts of signal and system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Digital signal processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.1 1.2.2 1.2.3 1.2.4 1.3 1.4 2 Digital processing of analog signals . . . . . . . . . . . . . . . . . . . . . . . . . . Basic components of a DSP system . . . . . . . . . . . . . . . . . . . . . . . . . . Pros and cons of DSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Discretetime signal processing (DTSP) . . . . . . . . . . . . . . . . . . . . . . . . 1 1 5 5 5 8 8 9
Applications of DSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Course organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 13
Discretetime signals and systems 2.1 2.2 2.3 2.4 2.5
Discretetime (DT) signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 DT systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Linear timeinvariant (LTI) systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 LTI systems described by linear constant coefﬁcient difference equations (LCCDE) . . . . . 24 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 30
3
Discretetime Fourier transform (DTFT) 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8
The DTFT and its inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 Convergence of the DTFT: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 Properties of the DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Frequency analysis of LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 LTI systems characterized by LCCDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 Ideal frequency selective ﬁlters: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Phase delay and group delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
ii
Contents
4
The ztransform (ZT) 4.1 4.2 4.3 4.4 4.5
55
The ZT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Study of the ROC and ZT examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 Properties of the ZT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 Rational ZTs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Inverse ZT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 4.5.1 4.5.2 Inversion via PFE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 Putting X(z) in a suitable form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.6 4.7 5
The onesided ZT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 78
Zdomain analysis of LTI systems 5.1 5.2 5.3 5.4
The system function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 LTI systems described by LCCDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 Frequency response of rational systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 Analysis of certain basic systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 5.4.1 5.4.2 5.4.3 First order LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 Second order systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 FIR ﬁlters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.5 5.6 5.7 5.8
More on magnitude response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 Allpass systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Inverse system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Minimumphase system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 5.8.1 5.8.2 5.8.3 5.8.4 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 MPAP decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 Frequency response compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 Properties of MP systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
5.9 6
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 108
The discrete Fourier Transform (DFT) 6.1 6.2
The DFT and its inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 Relationship between the DFT and the DTFT . . . . . . . . . . . . . . . . . . . . . . . . . 112 6.2.1 6.2.2 Finite length signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 Periodic signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Compiled January 23, 2004
c B. Champagne & F. Labeau
Contents
iii
6.3 6.4
Signal reconstruction via DTFT sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 Properties of the DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 6.4.1 6.4.2 6.4.3 6.4.4 6.4.5 6.4.6 6.4.7 Time reversal and complex conjugation . . . . . . . . . . . . . . . . . . . . . . . . 122 Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Even and odd decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Circular shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Circular convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 Other properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
6.5 6.6 7
Relation between linear and circular convolutions . . . . . . . . . . . . . . . . . . . . . . . 131 6.5.1 Linear convolution via DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 142 . . . . . . . . . . . . . . . . . . . . . . . . . 143
Digital processing of analog signals 7.1 Uniform sampling and the sampling theorem 7.1.1 7.1.2 7.2 7.2.1 7.2.2 7.3 7.3.1 7.3.2 7.4
Frequency domain representation of uniform sampling . . . . . . . . . . . . . . . . 145 The sampling theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 Study of inputoutput relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 Antialiasing ﬁlter (AAF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 Basic steps in A/D conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157 Statistical model of quantization errors . . . . . . . . . . . . . . . . . . . . . . . . 159
Discretetime processing of continuoustime signals . . . . . . . . . . . . . . . . . . . . . . 150
A/D conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
D/A conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 7.4.1 Basic steps in D/A conversion: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 166
8
Structures for the realization of DT systems 8.1 8.2
Signal ﬂowgraph representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 Realizations of IIR systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 8.2.1 8.2.2 8.2.3 8.2.4 8.2.5 Direct form I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 Direct form II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 Cascade form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 Parallel form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 Transposed direct form II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Compiled January 23, 2004
c B. Champagne & F. Labeau
iv
Contents
8.3
Realizations of FIR systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 8.3.1 8.3.2 8.3.3 8.3.4 Direct forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 Cascade form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 Linearphase FIR systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 Lattice realization of FIR systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 189 Problem statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 Speciﬁcation of Hd (ω) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 FIR or IIR, That Is The Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192 Review of analog ﬁltering concepts . . . . . . . . . . . . . . . . . . . . . . . . . . 194 Basic analog ﬁlter types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 Impulse invariance method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 Bilinear transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 Classiﬁcation of GLP FIR ﬁlters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 Design of FIR ﬁlter via windowing . . . . . . . . . . . . . . . . . . . . . . . . . . 213 Overview of some standard windows . . . . . . . . . . . . . . . . . . . . . . . . . 219 ParksMcClellan method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222 228
9
Filter design 9.1 9.1.1 9.1.2 9.1.3 9.2 9.2.1 9.2.2 9.2.3 9.2.4 9.3 9.3.1 9.3.2 9.3.3 9.3.4
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Design of IIR ﬁlters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Design of FIR ﬁlters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
10 Quantization effects
10.1 Binary number representation and arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . 228 10.1.1 Binary representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 10.1.2 Quantization errors in ﬁxedpoint arithmetic . . . . . . . . . . . . . . . . . . . . . . 231 10.2 Effects of coefﬁcient quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233 10.2.1 Sensitivity analysis for direct form IIR ﬁlters . . . . . . . . . . . . . . . . . . . . . 234 10.2.2 Poles of quantized 2nd order system . . . . . . . . . . . . . . . . . . . . . . . . . . 236 10.3 Quantization noise in digital ﬁlters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 10.4 Scaling to avoid overﬂow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248 11 Fast Fourier transform (FFT) 251
11.1 Direct computation of the DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 11.2 Overview of FFT algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
c B. Champagne & F. Labeau Compiled January 23, 2004
11.3 Radix-2 FFT via decimation-in-time . . . . . 254
11.4 Decimation-in-frequency . . . . . 261
11.5 Final remarks . . . . . 263

12 An introduction to Digital Signal Processors 264
12.1 Introduction . . . . . 264
12.2 Why PDSPs? . . . . . 265
12.3 Characterization of PDSPs . . . . . 268
12.3.1 Fixed-Point vs. Floating-Point . . . . . 268
12.3.2 Some Examples . . . . . 268
12.3.3 Structural features of DSPs . . . . . 269
12.4 Benchmarking PDSPs . . . . . 278
12.5 Current Trends . . . . . 279

13 Multirate systems 281
13.1 Introduction . . . . . 281
13.2 Downsampling by an integer factor . . . . . 282
13.3 Upsampling by an integer factor . . . . . 287
13.3.1 L-fold expansion . . . . . 287
13.3.2 Upsampling (interpolation) system . . . . . 288
13.4 Changing sampling rate by a rational factor . . . . . 290
13.5 Polyphase decomposition . . . . . 291

14 Applications of the DFT and FFT 298
14.1 Block FIR filtering . . . . . 298
14.1.1 Overlap-add method . . . . . 300
14.1.2 Overlap-save method (optional reading) . . . . . 302
14.2 Frequency analysis via DFT/FFT . . . . . 304
14.2.1 Introduction . . . . . 304
14.2.2 Windowing effects . . . . . 307

References 311

© B. Champagne & F. Labeau    Compiled January 23, 2004
Chapter 1

Introduction

This chapter begins with a high-level definition of the concepts of signal and system. This is followed by a general introduction to digital signal processing (DSP), including an overview of the basic components of a DSP system and their functionality. Some practical examples of real-life DSP applications are then discussed briefly. The chapter ends with a presentation of the course outline.

1.1 The concepts of signal and system

Signal:
• A signal can be broadly defined as any quantity that varies as a function of time and/or space and has the ability to convey information.
• Signals are ubiquitous in science and engineering. Examples include:
- Electrical signals: currents and voltages in AC circuits, radio communications signals, audio and video signals.
- Mechanical signals: sound or pressure waves, vibrations in a structure, earthquakes.
- Biomedical signals: electroencephalogram, lung and heart monitoring, X-ray and other types of images.
- Finance: time variations of a stock value or a market index.
• By extension, any series of measurements of a physical quantity can be considered a signal (temperature measurements, for instance).

Signal characterization:
• The most convenient mathematical representation of a signal is via the concept of a function, say x(t). In this notation:
- x represents the dependent variable (e.g., voltage, pressure);
- t represents the independent variable (e.g., time, space).
• Depending on the nature of the independent and dependent variables, different types of signals can be identified:
- Analog signal: t ∈ R → xa(t) ∈ R or C. When t denotes time, we also refer to such a signal as a continuous-time signal.
- Discrete signal: n ∈ Z → x[n] ∈ R or C. When index n represents sequential values of time, we refer to such a signal as discrete-time.
- Digital signal: n ∈ Z → x[n] ∈ A, where A = {a1, ..., aL} represents a finite set of L signal levels.
- Multichannel signal: x(t) = (x1(t), ..., xN(t)).
- Multidimensional signal: x(t1, ..., tN).
• Distinctions can also be made at the model level, for example: whether x[n] is considered to be deterministic or random in nature.

Example 1.1: Speech signal
A speech signal consists of variations in air pressure as a function of time, so that it basically represents a continuous-time signal x(t). It can be recorded via a microphone that translates the local pressure variations into a voltage signal. An example of such a signal is given in Figure 1.1(a), which represents the utterance of the vowel "a". If one wants to process this signal with a computer, it needs to be discretized in time in order to accommodate the discrete-time processing capabilities of the computer (Figure 1.1(b)), and also quantized, in order to accommodate the finite-precision representation in a computer (Figure 1.1(c)). These represent a continuous-time, a discrete-time and a digital signal, respectively.
As we know from the sampling theorem, the continuous-time signal can be reconstructed from its samples, provided they are taken at a sampling rate at least twice the highest frequency component in the signal. Speech signals exhibit energy up to, say, 10 kHz. However, most of the intelligibility is conveyed in a bandwidth less than 4 kHz. In digital telephony, speech signals are filtered (with an anti-aliasing filter which removes energy above 4 kHz), sampled at 8 kHz and represented with 256 discrete (non-uniformly spaced) levels. Wideband speech (often termed commentary quality) would entail sampling at a higher rate, often 16 kHz.

Example 1.2: Digital image
An example of a two-dimensional signal is a grayscale image, where t1 and t2 represent the horizontal and vertical coordinates, and x(t1, t2) represents some measure of the intensity of the image at location (t1, t2). This example can also be considered in discrete time (or rather in discrete space in this case): digital images are made up of a discrete number of points (or pixels), and the intensity of a pixel can be denoted by x[n1, n2]. Figure 1.2 shows an example of a digital image. The rightmost part of the figure shows a zoom on this image that clearly shows the pixels. This is an 8-bit grayscale image, i.e. each pixel (each signal sample) is represented by an 8-bit number ranging from 0 (black) to 255 (white).

System:
• A physical entity that operates on a set of primary signals (the inputs) to produce a corresponding set of resultant signals (the outputs).
[Figure: three waveform plots — (a) analog x(t) versus time, (b) discrete-time x[n] versus samples, (c) digital xd[n] versus samples.]

Fig. 1.1 An utterance of the vowel "a" in analog, discrete-time and digital format. Sampling at 4 kHz and quantization on 4 bits (16 levels).
Fig. 1.2 Digital image, and zoom on a region of the image to show the pixels

• The operations, or processing, may take several forms: modification, combination, decomposition, filtering, extraction of parameters, etc.

System characterization:
• A system can be represented mathematically as a transformation between two signal sets, as in x[n] ∈ S1 → y[n] = T{x[n]} ∈ S2. This is illustrated in Figure 1.3.

Fig. 1.3 A generic system

• Depending on the nature of the signals on which the system operates, different basic types of systems may be identified:
- Analog or continuous-time system: the input and output signals are analog in nature.
- Discrete-time system: the input and output signals are discrete.
- Digital system: the inputs and outputs are digital.
- Mixed system: a system in which different types of signals (i.e. analog, discrete and/or digital) coexist.
1.2 Digital signal processing

1.2.1 Digital processing of analog signals

Discussion:
• Early education in engineering focuses on the use of calculus to analyze various systems and processes at the analog level:
- e.g.: circuit analysis using differential equations
- motivated by the prevalence of the analog signal model
• Yet, due to extraordinary advances made in microelectronics, the most common and powerful processing devices today are digital in nature.
• Thus, there is a strong, practical motivation to carry out the processing of analog real-world signals using such digital devices.
• This has led to the development of an engineering discipline known as digital signal processing (DSP).

Digital signal processing (DSP):
• In its most general form, DSP refers to the processing of analog signals by means of discrete-time operations implemented on digital hardware.
• From a system viewpoint, DSP is concerned with mixed systems:
- the input and output signals are analog;
- the processing is done on the equivalent digital signals.

1.2.2 Basic components of a DSP system

Generic structure:
• In its most general form, a DSP system consists of three main components, as illustrated in Figure 1.4.
• The analog-to-digital (A/D) converter transforms the analog signal xa(t) at the system input into a digital signal xd[n]. An A/D converter can be thought of as consisting of a sampler (creating a discrete-time signal), followed by a quantizer (creating discrete levels).
• The digital system performs the desired operations on the digital signal xd[n] and produces a corresponding output yd[n], also in digital form.
• The digital-to-analog (D/A) converter transforms the digital output yd[n] into an analog signal ya(t) suitable for interfacing with the outside world.
• In some applications, the A/D or D/A converters may not be required; we extend the meaning of DSP systems to include such cases.
Fig. 1.4 A digital signal processing system

A/D converter:
• A/D conversion can be viewed as a two-step process, as shown in Figure 1.5.

Fig. 1.5 A/D conversion

• Sampler: in which the analog input is transformed into a discrete-time signal, as in xa(t) → x[n] = xa(nTs), where Ts is the sampling period. For WAVE files, the sampling rate is often 44.1 kHz (the sampling rate used for audio CDs) or 48 kHz (used in studio recording).
• Quantizer: in which the discrete-time signal x[n] ∈ R is approximated by a digital signal xd[n] ∈ A, with only a finite set A of possible levels. The number of representation levels in the set A is hardware defined, typically 2^b where b is the number of bits in a word.
• In many systems, the set of discrete levels is uniformly spaced. This is the case, for instance, for WAVE files used to store audio signals. The level resolution for WAVE files is most often 16 bits per sample, but some systems use up to 24 bits per sample.
• In some systems, the set of discrete levels is non-uniformly spaced — the spacing between levels increases with increasing amplitude. This is the case for digital telephony: nearly all telephone calls are digitized to 8-bit resolution.
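The two-step A/D model above (a sampler followed by a quantizer) can be sketched in a few lines of code. The sketch below is a hypothetical illustration in Python, assuming a b-bit uniform (equally spaced) quantizer with rounding to the nearest level; it is not the exact converter model used later in the course.

```python
import numpy as np

def sample(xa, Ts, N):
    """Sampler: x[n] = xa(n*Ts) for n = 0, ..., N-1."""
    n = np.arange(N)
    return xa(n * Ts)

def quantize_uniform(x, b, xmax=1.0):
    """Quantizer: round x to the nearest of 2**b uniformly spaced levels."""
    delta = 2.0 * xmax / 2**b          # spacing between adjacent levels
    return delta * np.round(x / delta)

# a 1 kHz sinusoid sampled at 8 kHz, then quantized to 4 bits (16 levels)
Ts = 1.0 / 8000.0
x = sample(lambda t: np.sin(2 * np.pi * 1000.0 * t), Ts, 32)
xq = quantize_uniform(x, b=4)
max_err = np.max(np.abs(x - xq))
```

With rounding to the nearest level, the quantization error of each sample is at most half the level spacing — the basic fact used later when quantization noise is analyzed.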
Digital system:
• The digital system is functionally similar to a microprocessor: it has the ability to perform mathematical operations on a discrete-time basis and can store intermediate results of computation in internal memory.
• The operations performed by the digital system can usually be described by means of an algorithm, on which its implementation is based.
• The implementation of the digital system can take different forms:
- Hardwired: in which dedicated digital hardware components are specially configured to accomplish the desired processing task.
- Softwired: in which the desired operations are executed via a programmable digital signal processor (PDSP) or a general computer programmed to this end.
• The following distinctions are also important:
- Real-time system: the computing associated with each sampling interval can be accomplished in a time ≤ the sampling interval.
- Off-line system: a non-real-time system which operates on stored digital signals. This requires the use of external data storage units.

D/A converter:
This operation can also be viewed as a two-step process, as illustrated in Figure 1.6.

Fig. 1.6 D/A conversion

• Pulse train generator: in which the digital signal yd[n] is transformed into a sequence of scaled, analog pulses.
• Interpolator: in which the high-frequency components of ŷa(t) are removed via lowpass filtering to produce a smooth analog output ya(t).
This two-step representation is a convenient mathematical model of the actual D/A conversion; in practice, though, one device takes care of both steps.
1.2.3 Pros and cons of DSP

Advantages:
• Robustness:
- Signal levels can be regenerated. For binary signals, the zeros and ones can be easily distinguished even in the presence of noise, as long as the noise is small enough. The process of regeneration makes a hard decision between a zero and a one, effectively stripping off the noise.
- Precision is not affected by external factors. This means that the results are reproducible.
• Storage capability:
- DSP systems can be interfaced to low-cost devices for storage. Retrieving stored digital signals (often in binary form) results in the regeneration of clean signals.
- Allows for off-line computations.
• Flexibility:
- Easy control of system accuracy via changes in sampling rate and number of representation bits.
- Software programmable ⇒ implementation and fast modification of complex processing functions (e.g. self-tunable digital filter).
• Structure:
- Easy interconnection of DSP blocks (no loading problem).
- Possibility of sharing a processor between several tasks.

Disadvantages:
• Cost/complexity added by A/D and D/A conversion.
• Input signal bandwidth is technology limited.
• Quantization effects: discretization of the levels adds quantization noise to the signal.
• Simple conversion of a continuous-time signal to a binary stream of data involves an increase in the bandwidth required for transmission of the data. This, however, can be mitigated by using compression techniques. For instance, coding an audio signal using MP3 techniques results in a signal which uses much less bandwidth for transmission than a WAVE file.
1.2.4 Discrete-time signal processing (DTSP)

Equivalence of analog and digital signal processing:
• It is not at all clear that an arbitrary analog system can be realized as a DSP system.
• Fortunately, for the important class of linear time-invariant systems, this equivalence can be proved under the following conditions:
- the sampling rate is larger than twice the largest frequency contained in the analog input signal (sampling theorem);
- the number of representation levels provided by the digital hardware is sufficiently large that quantization errors may be neglected.

The DTSP paradigm:
• Based on these considerations, it is convenient to break down the study of DSP into two distinct sets of issues:
- Discrete-time signal processing (DTSP)
- Study of quantization effects
• The main object of DTSP is the study of DSP systems under the assumption that finite-precision effects may be neglected ⇒ DTSP paradigm.
• Quantization is concerned with practical issues resulting from the use of finite-precision digital hardware in the implementation of DTSP systems.

1.3 Applications of DSP

Typical applications:
• Signal enhancement via frequency-selective filtering.
• Echo cancellation in telephony:
- Electric echoes resulting from impedance mismatch and imperfect hybrids.
- Acoustic echoes due to coupling between loudspeaker and microphone.
• Compression and coding of speech, audio, image and video signals:
- Low bit-rate codecs (coder/decoder) for digital speech transmission.
- Digital music: CD, DAT, DCC, MD and now MP3.
- Image and video compression algorithms such as JPEG and MPEG.
• Digital simulation of physical processes:
- Sound wave propagation in a room.
- Baseband simulation of radio signal transmission in mobile communications.
• Image processing:
- Image enhancement.
- Edge and shape detection.
Example 1.3: Electric echo cancellation
In classic telephony (see Figure 1.7), the signal transmitted through a line must pass through a hybrid before being sent to the receiving telephone. The hybrid is used to connect the local loops serving the customers (2-wire connections) to the main network (4-wire connections). The role of the hybrid is to filter incoming signals so that they are routed onto the correct line: signals coming from the network are passed on to the telephone set, while signals from the telephone set are passed on to the transmitting line of the network. This separation is never perfect, due to impedance mismatch and imperfect hybrids. For instance, the signal received from the network (on the left side of Figure 1.7) can partially leak into the transmitting path of the network and be sent back to the transmitter (on the right side), with an attenuation and a propagation delay, causing an echo to be heard by the transmitter.
To combat such electric echoes, a signal processing device is inserted at each end of the channel. The incoming signal is monitored, processed through a system which imitates the effect of the coupling, and then subtracted from the outgoing signal.

Fig. 1.7 Illustration of electrical echo in classic telephone lines.

Example 1.4: Edge detection
An edge detection system can be easily devised for grayscale images. It is convenient to study the principle in one dimension before going to two dimensions. Figure 1.8(a) illustrates what we expect an ideal edge to look like: it is a transition between two flat regions, i.e. two regions with approximately the same grayscale levels. Of course, in practice, non-ideal edges will exist, with smoother transitions and non-flat regions. For the sake of explanation, let us concentrate on this ideal edge.
Figure 1.8(b) shows the impulse response of a filter that will enable edge detection: the sum of all its samples is equal to zero, so that the convolution of a flat region with this impulse response will yield 0, as the central peak will be compensated by the equal side values. On the other hand, when the convolution takes place on the edge, the values on the left of the peak and on the right of the peak will not be the same anymore, so that the output will be nonzero.
So to detect edges in a signal in an automatic way, one only has to filter it with a filter like the one shown in Figure 1.8(b) and threshold the output. A result of this procedure is shown in Figure 1.9 for the image shown earlier.
[Figure: panels (a) and (b).]

Fig. 1.8 (a) An ideal edge signal in one dimension and (b) an edge-detecting filter

Fig. 1.9 Illustration of an edge detection system on the image of Figure 1.2
1.4 Course organization

Three main parts:
• Part I: Basic tools for the analysis of discrete-time signals and systems
- Time-domain and transform-domain analysis
- Discrete-time Fourier transform (DTFT) and z-transform
- Discrete Fourier transform (DFT)
• Part II: Issues related to the design and implementation of DSP systems
- Sampling and reconstruction of analog signals, A/D and D/A
- Structures for the realization of digital filters, signal flowgraphs
- Digital filter design (FIR and IIR)
- Study of finite-precision effects (quantization noise, ...)
- Fast computation of the DFT (FFT)
- Programmable digital signal processors (PDSP)
• Part III: Selected applications and advanced topics
- Applications of the DFT to frequency analysis
- Multirate digital signal processing
- Introduction to adaptive filtering
Chapter 2

Discrete-time signals and systems

2.1 Discrete-time (DT) signals

Definition: A DT signal is a sequence of real or complex numbers, that is, a mapping from the set of integers Z into either R or C, as in:
n ∈ Z → x[n] ∈ R or C   (2.1)
• n is called the discrete-time index.
• x[n], the nth number in the sequence, is called a sample.
• Unless otherwise specified, it will be assumed that the DT signals of interest may take on complex values, i.e. x[n] ∈ C.
• We shall denote by S the set of all complex-valued DT signals.
• To refer to the complete sequence, one of the following notations may be used: x, {x[n]}, or even x[n] if there is no possible ambiguity. (The latter is a misuse of notation, since x[n] formally refers to the nth signal sample. Following a common practice in the DSP literature, we shall often use the notation x[n] to refer to the complete sequence, in which case the index n should be viewed as a dummy place holder.)

Description: There are several alternative ways of describing the sample values of a DT signal. Some of the most common are:
• Sequence notation:
x = {. . . , 0, 0, 1̄, 1/2, 1/4, 1/8, . . .}   (2.2)
where the bar on top of the symbol 1 indicates the origin of time (i.e. n = 0).

© B. Champagne & F. Labeau    Compiled September 13, 2004
• Graphical: a plot of the sample values x[n] versus n.
• Explicit mathematical expression:
x[n] = 0 for n < 0, and x[n] = 2^(-n) for n ≥ 0.   (2.3)
• Recursive approach:
x[n] = 0 for n < 0, x[0] = 1, and x[n] = (1/2) x[n-1] for n > 0.   (2.4)
Depending on the specific sequence x[n], some approaches may lead to a more compact representation than others.

Some basic DT signals:
• Unit pulse:
δ[n] = 1 if n = 0, and 0 otherwise.   (2.5)
• Unit step:
u[n] = 1 for n ≥ 0, and 0 for n < 0.   (2.6)
• Exponential sequence: for some α ∈ C,
x[n] = A α^n,  n ∈ Z   (2.7)
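The explicit description (2.3) and the recursive description (2.4) define the same sequence; this equivalence is easy to check numerically. A minimal sketch (the illustration language is Python, an assumption — the notes themselves prescribe no language):

```python
def x_explicit(n):
    """Explicit description: x[n] = 2**(-n) for n >= 0, and 0 for n < 0."""
    return 0.0 if n < 0 else 2.0 ** (-n)

def x_recursive(n):
    """Recursive description: x[0] = 1 and x[n] = x[n-1]/2 for n > 0."""
    if n < 0:
        return 0.0
    if n == 0:
        return 1.0
    return 0.5 * x_recursive(n - 1)

# both descriptions generate the same sequence ..., 0, 0, 1, 1/2, 1/4, 1/8, ...
same = all(x_explicit(n) == x_recursive(n) for n in range(-3, 10))
```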
• Complex exponential sequence (CES): If |α| = 1 in (2.7), i.e. α = e^{jω} for some ω ∈ R, we have
x[n] = A e^{jωn}   (2.8)
where ω is called the angular frequency (in radians).
- One can show that a CES is periodic with period N, i.e. x[n + N] = x[n], if and only if ω = 2πk/N for some integer k.
- If ω2 = ω1 + 2π, then the two CES signals x2[n] = e^{jω2 n} and x1[n] = e^{jω1 n} are indistinguishable (see also Example 2.1 below).
- Thus, for DT signals, the concept of frequency is really limited to an interval of size 2π, typically [−π, π].

Example 2.1: Uniform sampling
In DSP applications, a common way of generating DT signals is via uniform (or periodic) sampling of an analog signal xa(t), t ∈ R, as in:
x[n] = xa(nTs),  n ∈ Z   (2.9)
where Ts > 0 is called the sampling period.
For example, consider an analog complex exponential signal given by xa(t) = e^{j2πFt}, where F denotes the analog frequency (in units of 1/time). Uniform sampling of xa(t) results in the discrete-time CES
x[n] = e^{j2πFnTs}   (2.10)
= e^{jωn}   (2.11)
where ω = 2πFTs is the angular frequency (dimensionless).
Figure 2.1 illustrates the limitation of the concept of frequency in the discrete-time domain. The continuous and dash-dotted lines respectively show the real part of the analog complex exponential signals e^{jωt} and e^{j(ω+2π)t}. Upon uniform sampling at integer values of t (i.e. using Ts = 1 in (2.9)), the same sample values are obtained for both analog exponentials, as shown by the solid bullets in the figure: the DT CES e^{jωn} and e^{j(ω+2π)n} are indistinguishable, even though the original analog signals are different. This is a simplified illustration of an important phenomenon known as frequency aliasing.

As the example shows, for sampled-data signals (discrete-time signals formed by sampling continuous-time signals), there are several frequency variables: F, the frequency variable (units Hz) of the continuous-time signal, and ω, the (normalized) radian frequency of the discrete-time signal. These are related by ω = 2πF/Fs, where Fs = 1/Ts is the sampling frequency. One could, of course, add a radian version of F (Ω = 2πF) or a natural-frequency version of ω (f = ω/(2π)). For a signal sampled at Fs, the frequency interval −Fs/2 ≤ F ≤ Fs/2 is mapped to the interval −π ≤ ω ≤ π.
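The indistinguishability of e^{jωn} and e^{j(ω+2π)n} described in Example 2.1 can be verified directly; a small numerical sketch (ω is chosen arbitrarily here):

```python
import numpy as np

n = np.arange(10)                          # integer sampling instants (Ts = 1)
omega = 2 * np.pi * 0.1                    # arbitrary angular frequency

x1 = np.exp(1j * omega * n)                # CES at frequency omega
x2 = np.exp(1j * (omega + 2 * np.pi) * n)  # CES at frequency omega + 2*pi

# the two DT sequences agree sample by sample: frequency aliasing
aliased = np.allclose(x1, x2)
```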
the frequency interval −Fs /2 ≤ F ≤ Fs /2 is mapped to the interval −π ≤ ω ≤ π. and ω the (normalized) radian frequency of the frequency response of the discretetime signal. the same sample values are obtained for both analog exponentials. there are several frequency variables: F the frequency variable (units Hz) for the frequency response of the continuoustime signal.e.If ω2 = ω1 + 2π. for DT signals. even tough the original analog signals are different.2. c B.e.9) where Ts > 0 is called the sampling period. typically [−π. n ∈ Z (2.Thus. Example 2.
16) • Bounded signals: all x ∈ S that can be bounded. i. we can ﬁnd Bx > 0 such that x[n] ≤ Bx for all n∈Z • Absolutely summable: all x ∈ S such that ∑∞ n=−∞ x[n] < ∞ Discrete convolution: • The discrete convolution of two signals x and y in S is deﬁned as (x ∗ y)[n] k=−∞ ∑ ∞ x[k]y[n − k] (2. • We deﬁne the following operations on S : scaling: addition: (αx)[n] = αx[n]. n=−∞ ∑ ∞ x[n]2 < ∞ (2. Champagne & F. Discretetime signals and systems 1 0.5 −1 1 2 Fig.14) (x + y)[n] = x[n] + y[n] multiplication: (xy)[n] = x[n]y[n] • Set S equipped with addition and scaling is a vector space.13) (2.e.15) Px N 1 ∑ x[n]2 < ∞ N→∞ 2N + 1 n=−N lim (2. c B.16 Chapter 2.5 0 −0.e.17) The notation x[n] ∗ y[n] is often used instead of (x ∗ y)[n].1 3 4 5 6 7 8 9 10 Illustration of the limitation of frequencies in discretetime.12) (2. Classes of signals: The following subspaces of S play an important role: • Energy signals: all x ∈ S with ﬁnite energy. where α ∈ C (2. 2. 2004 . Labeau Compiled September 13.e. i. i. Ex • Power signals: all x ∈ S with ﬁnite power.
However. That is.23) • Graphical interpretation: mirror image about origin (see Figure 2. • The following properties of ∗ may be proved easily: (a) x ∗ y = y ∗ x (b) (x ∗ y) ∗ z = x ∗ (y ∗ z) (c) x ∗ δ = x • For example. (a) is equivalent to (2. generally depends on x[k] for all values of k ∈ Z. and to y as the output signal. the system output at discretetime n. • Even tough the notation y[n] = T {x[n]} is often used. Time reversal: • Deﬁnition: y[n] = (Rx)[n] x[−n] (2.3) c B.2 DT systems Deﬁnition: A DT system is a mapping T from S into itself. 2.21) which can be proved by changing the index of summation from k to k = n − k in the LHS summation (try it!) 2.2 \>Q@ Q = 7 ^` 7 ^` RXWSXW A generic DiscreteTime System seen as a signal mapping • Note that y[n]. • Some basic systems are described below.17) may diverge.20) k=−∞ ∑ ∞ x[k]y[n − k] = k=−∞ ∑ ∞ y[k]x[n − k] (2.2. That is x ∈ S → y = Tx ∈ S (2.2 DT systems 17 • The convolution of two arbitrary signals may not exist. or response.2 We refer to x as the input signal or excitation. if both x and y are absolutely summable. Labeau Compiled September 13. Champagne & F.22) An equivalent block diagram form is shown in Figure 2.18) (2. the alternative notation y[n] = (T x)[n] is more precise. x ∗ y is guaranteed to exist. [>Q@ Q = LQSXW Fig.19) (2. the sum in (2. 2004 .
25) Other system examples: • Moving average system: y[n] = N 1 ∑ x[n − k] M + N + 1 k=−M (2. Discretetime signals and systems x[n] x[ n] R 1 1 0 1 2 n 0 1 2 n Fig.26) This system can be used to smooth out rapid variations in a signal. • Accumulator: y[n] = k=−∞ ∑ n x[k] (2.4: the top part of the ﬁgure shows daily NASDAQ index values over a period of almost two years. in order to easily view longterm trends.2: Application of Moving Average An example of Moving Average ﬁltering is given in Figure 2.27) Example 2.24) k=−∞ ∑ ∞ x[k]δ[n − k] (2. original signal. 2.k ≥ 0 ⇒ graph of x[n] shifted by k units to the right . the output of the moving average system is smoother and enables a better view of longerterm trends.3 Illustration of time reversal: left. 2004 .18 Chapter 2. c B. Clearly. Champagne & F. Labeau Compiled September 13. Delay or shift by integer k: • Deﬁnition: y[n] = (Dk x)[n] • Interpretation: . The bottom part of the ﬁgure shows the same signal after applying a moving average system to it. right: result of time reversal.k < 0 ⇒ graph of x[n] shifted by k units to the left • Application: any signal x ∈ S can be expressed as a linear combination of shifted impulses: x[n] = x[n − k] (2. with M + N + 1 = 10.
2.e. x2 ∈ S . but y[n] = 1 (x[n − 1] + x[n]) is not. 2 Memoryless systems are also called static systems. Example 2. i. α2 ∈ C and x1 .4 Effect of a moving average ﬁlter. Further. Systems with memory are termed dynamic systems. Some authors prefer to deﬁne an anticausal system as one where y[n] only depends on values x[k] for k > n. • Causal: y[n] only depends on values x[k] for k ≤ n. Example 2. not even the present input sample. the present output does only depend on future inputs.28) 2 This deﬁnition is not consistent in the literature. (Sample values are connected by straight lines to enable easier viewing) Basic system properties: • Memoryless: y[n] = (T x)[n] is a function of x[n] only.2. T (α1 x1 + α2 x2 ) = α1 T (x1 ) + α2 T (x2 ) (2. Champagne & F. • Anticausal: y[n] only depends on values x[k] for k ≥ n2 . • Linear: for any α1 . y[n] = ∑k=n x[k] is anticausal.4: ∞ y[n] = (T x)[n] = ∑n k=−∞ x[k] is causal. systems with memory can have ﬁnite (length) memory or inﬁnite (length) memory. 2004 . c B.2 DT systems 19 5000 4000 3000 2000 1000 0 0 5000 4000 3000 2000 1000 0 0 50 100 150 200 250 300 350 400 450 500 50 100 150 200 250 300 350 400 450 500 Fig.3: y[n] = (x[n])2 is memoryless. Labeau Compiled September 13.
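The contrast drawn in Example 2.6 — the moving average is time-invariant, the decimator y[n] = x[2n] is not — can be tested numerically. The sketch below works on finite sequences indexed from 0, with zero-padding standing in for samples outside the observation window; it is an illustration, not a general proof of time invariance.

```python
import numpy as np

def delay(x, k):
    """(Dk x)[n] = x[n-k] for k > 0, shifting zeros in from the left."""
    return np.concatenate([np.zeros(k), x[:-k]]) if k > 0 else x.copy()

def moving_avg3(x):
    """Causal 3-point moving average: y[n] = (x[n] + x[n-1] + x[n-2]) / 3."""
    return np.convolve(x, np.ones(3) / 3.0)[:len(x)]

def decimate(x):
    """y[n] = x[2n]."""
    return x[::2]

x = np.arange(1.0, 9.0)      # x = 1, 2, ..., 8

# moving average: delaying the input just delays the output (time-invariant)
ti_ok = np.allclose(moving_avg3(delay(x, 1)), delay(moving_avg3(x), 1))

# decimator: T{x[n-1]} differs from y[n-1] (not time-invariant)
ti_fails = not np.array_equal(decimate(delay(x, 1)), delay(decimate(x), 1))
```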
Remarks: In on-line processing, causality of the system is very important. A causal system only needs the past and present values of the input to compute the current output sample y[n]. A noncausal system, on the other hand, would require at least some future values of the input to compute its output. If only a few future samples are needed, this translates into a processing delay, which may not be acceptable for real-time processing; moreover, some noncausal systems require the knowledge of all future samples of the input to compute the current output sample, and these are of course not suited for on-line processing.
Stability is also a very desirable property of a system. When a system is implemented, it is supposed to meet certain design specifications and play a certain role in processing its input. An unstable system could be driven to an unbounded output by a bounded input, which is of course an undesirable behaviour.

2.3 Linear time-invariant (LTI) systems

Motivation: Discrete-time systems that are both linear and time-invariant (LTI) play a central role in digital signal processing:
• Many physical systems are either LTI or approximately so.
• Many efficient tools are available for the analysis and design of LTI systems (e.g. Fourier analysis).
Fundamental property: Suppose T is LTI and let y = T x (x arbitrary). Then
y[n] = ∑_{k=-∞}^{∞} x[k] h[n-k]   (2.30)
where h ≜ T δ is known as the unit pulse (or impulse) response of T.

Proof: Recall that any DT signal x can be written as
x = ∑_{k=-∞}^{∞} x[k] Dk δ   (2.31)
Invoking linearity and then time-invariance of T, we obtain:
y = T x = ∑_{k=-∞}^{∞} x[k] T(Dk δ)   (2.32)
= ∑_{k=-∞}^{∞} x[k] Dk (T δ) = ∑_{k=-∞}^{∞} x[k] Dk h   (2.33)
which is equivalent to (2.30). □

Discussion:
• The input-output relation of an LTI system may be expressed as a convolution sum:
y = x ∗ h   (2.34)
• For an LTI system, knowledge of the impulse response h = T δ thus completely characterizes the system: for any input x, we have T x = x ∗ h.
• Graphical interpretation: to compute the sample values of y[n] according to (2.30), one may proceed as follows:
- Time-reverse the sequence h[k] ⇒ h[-k]
- Shift h[-k] by n samples ⇒ h[-(k-n)] = h[n-k]
- Multiply the sequences x[k] and h[n-k] and sum over k ⇒ y[n]

Example 2.7: Impulse response of the accumulator
Consider the accumulator given by (2.27). By setting x[n] = δ[n] in (2.27), we obtain:
h[n] = ∑_{k=-∞}^{n} δ[k] = 1 for n ≥ 0, and 0 for n < 0; that is, h[n] = u[n].   (2.35)
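That the impulse response completely characterizes an LTI system can be illustrated with the accumulator of Example 2.7: feeding a unit pulse through the system recovers h = u, and convolving any input with that h reproduces the system's output. A sketch, assuming all signals start at n = 0:

```python
import numpy as np

def accumulator(x):
    """y[n] = sum_{k<=n} x[k], for x starting at n = 0."""
    return np.cumsum(x)

x = np.array([1.0, -2.0, 3.0, 0.5])

# impulse response: the response to a unit pulse is u[n] on 0..len(x)-1
delta = np.zeros(len(x))
delta[0] = 1.0
h = accumulator(delta)

# the response to any input equals the convolution with h
y_direct = accumulator(x)
y_conv = np.convolve(x, h)[:len(x)]   # keep the samples n = 0..len(x)-1
```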
An alternative approach is via modification of (2.27), so as to directly reveal the convolution format in (2.30). By simply changing the index of summation from k to l = n - k, equation (2.27) becomes:
y[n] = ∑_{l=0}^{+∞} x[n-l] = ∑_{l=-∞}^{+∞} u[l] x[n-l]
so that, by identification, h[n] = u[n].

Causality: An LTI system is causal iff h[n] = 0 for n < 0.

Proof: The input-output relation of an LTI system can be expressed as:
y[n] = ∑_{k=-∞}^{∞} h[k] x[n-k]   (2.36)
= · · · + h[-1] x[n+1] + h[0] x[n] + h[1] x[n-1] + · · ·   (2.37)
Clearly, y[n] only depends on values x[m] for m ≤ n iff h[k] = 0 for k < 0. □

Stability: An LTI system is stable iff the sequence h[n] is absolutely summable, that is, ∑_{n=-∞}^{∞} |h[n]| < ∞.

Proof: Suppose that h is absolutely summable, that is, ∑_n |h[n]| = Mh < ∞. Then, for any bounded input sequence x, such that |x[n]| ≤ Bx < ∞, we have for the corresponding output sequence y:
|y[n]| = |∑_k x[n-k] h[k]| ≤ ∑_k |x[n-k]| |h[k]| ≤ ∑_k Bx |h[k]| = Bx Mh < ∞   (2.38)
which shows that y is bounded.
Now, suppose that h[n] is not absolutely summable, i.e. ∑_{n=-∞}^{∞} |h[n]| = ∞. Consider the input sequence defined by
x[-n] = h[n]*/|h[n]| if h[n] ≠ 0, and x[-n] = 0 if h[n] = 0.
Note that |x[n]| ≤ 1, so that x is bounded. We leave it as an exercise to the student to verify that in this case y[0] = ∑_n |h[n]| = ∞, so that the output sequence y is unbounded and the corresponding LTI system is not stable. □
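The bound |y[n]| ≤ Bx·Mh derived in (2.38) can be observed numerically. The sketch below truncates h[n] = α^n u[n] with |α| < 1 to a finite length (an approximation, since the true response is infinite) and drives it with a bounded random input:

```python
import numpy as np

alpha = 0.8
h = alpha ** np.arange(50)        # h[n] = alpha^n u[n], truncated to 50 samples
Mh = np.sum(np.abs(h))            # close to 1/(1 - alpha) = 5 for long truncations

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 500)   # bounded input with Bx = 1
y = np.convolve(x, h)

peak = np.max(np.abs(y))          # never exceeds Bx * Mh
```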
Example 2.8: Consider the LTI system with impulse response h[n] = αⁿ u[n].
• Causality: h[n] = 0 for n < 0 ⇒ causal.
• Stability:

    ∑_{n=−∞}^{∞} |h[n]| = ∑_{n=0}^{∞} |α|ⁿ    (2.39)

Clearly, the sum diverges if |α| ≥ 1, while if |α| < 1, it converges:

    ∑_{n=0}^{∞} |α|ⁿ = 1/(1 − |α|) < ∞    (2.40)

Thus the system is stable provided |α| < 1.

FIR versus IIR:
• An LTI system has a finite impulse response (FIR) if we can find integers N1 ≤ N2 such that h[n] = 0 when n < N1 or n > N2.
• Otherwise, the LTI system has an infinite impulse response (IIR).
• For example, the LTI system with

    h[n] = u[n] − u[n−N] = { 1 if 0 ≤ n ≤ N−1;  0 otherwise }    (2.41)

is FIR with N1 = 0 and N2 = N − 1, while the LTI system with

    h[n] = αⁿ u[n] = { αⁿ if n ≥ 0;  0 otherwise }    (2.42)

is IIR (one cannot find such an N2).
• FIR systems are necessarily stable:

    ∑_{n=−∞}^{∞} |h[n]| = ∑_{n=N1}^{N2} |h[n]| < ∞    (2.43)

Interconnection of LTI systems:
• Cascade interconnection (Figure 2.5):

    y = (x ∗ h1) ∗ h2 = x ∗ (h1 ∗ h2)    (2.44)
      = x ∗ (h2 ∗ h1)    (2.45)
      = (x ∗ h2) ∗ h1    (2.46)

LTI systems commute; this is not true for arbitrary systems.
• Parallel interconnection (Figure 2.6):

    y = x ∗ h1 + x ∗ h2 = x ∗ (h1 + h2)    (2.47)
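The stability claim of Example 2.8 can be checked numerically (our own sanity check, not from the notes): for |α| < 1 the partial sums of |α|ⁿ approach the limit 1/(1 − |α|) in (2.40). The value α = 0.9 is an arbitrary choice:

```python
# Numerical check of eq. (2.40): partial sums of |alpha|^n converge to
# 1/(1 - |alpha|) when |alpha| < 1. alpha = 0.9 is arbitrary.
alpha = 0.9
partial = sum(abs(alpha) ** n for n in range(1000))
limit = 1.0 / (1.0 - abs(alpha))
```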
[Fig. 2.5: Cascade interconnection of LTI systems.]
[Fig. 2.6: Parallel interconnection of LTI systems.]

2.4 LTI systems described by linear constant coefficient difference equations (LCCDE)

Definition: A discrete-time system can be described by an LCCDE of order N if, for any arbitrary input x and corresponding output y,

    ∑_{k=0}^{N} a_k y[n−k] = ∑_{k=0}^{M} b_k x[n−k]    (2.48)

where a_0 ≠ 0 and a_N ≠ 0.

Example 2.9: Difference equation for the accumulator
Consider the accumulator system

    x[n] → y[n] = ∑_{k=−∞}^{n} x[k]    (2.49)

which we know to be LTI. Observe that

    y[n] = ∑_{k=−∞}^{n−1} x[k] + x[n] = y[n−1] + x[n]    (2.50)

This is an LCCDE of order N = 1 (a_0 = 1, a_1 = −1, M = 0, b_0 = 1).
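The recursion (2.50) translates directly into code; the sketch below (ours, with a hypothetical function name) uses a single memory cell holding the previous output, exactly the "one adder and one memory unit" implementation discussed next:

```python
# Recursive implementation of the accumulator LCCDE (2.50):
# y[n] = y[n-1] + x[n], with one memory unit storing the previous output.
def accumulator(x):
    y, prev = [], 0.0   # prev plays the role of y[n-1] (zero initial condition)
    for xn in x:
        prev = prev + xn
        y.append(prev)
    return y
```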
Remarks: LCCDEs lead to efficient recursive implementations:
• Recursive, because the computation of y[n] makes use of past output signal values (e.g. y[n−1] in (2.50)).
• These past output signal values contain all the necessary information about earlier states of the system.
• While (2.49) requires an infinite number of adders and memory units, (2.50) requires only one adder and one memory unit.

Solution of LCCDE: The problem of solving the LCCDE (2.48) for an output sequence y[n], given a particular input sequence x[n], is of particular interest in DSP. Here, we only look at general properties of the solutions y[n]. The presentation of a systematic solution technique is deferred to Chapter 4.

Structure of the general solution: The most general solution of the LCCDE (2.48) can be expressed in the form

    y[n] = y_p[n] + y_h[n]    (2.51)

where
• y_p[n] is any particular solution of the LCCDE;
• y_h[n] is the general solution of the homogeneous equation

    ∑_{k=0}^{N} a_k y_h[n−k] = 0    (2.52)

Example 2.10: Consider the 1st order LCCDE

    y[n] = a y[n−1] + x[n]    (2.53)

with input x[n] = A δ[n]. It is easy to verify that the following is a particular solution to (2.53):

    y_p[n] = A aⁿ u[n]    (2.54)

The homogeneous equation is

    y_h[n] − a y_h[n−1] = 0    (2.55)

Its general solution is given by

    y_h[n] = B aⁿ    (2.56)

where B is an arbitrary constant. So, the general solution of the LCCDE is given by:

    y[n] = A aⁿ u[n] + B aⁿ    (2.57)
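It is easy to verify Example 2.10 numerically (a check of our own): y[n] = A aⁿ u[n] + B aⁿ satisfies y[n] = a y[n−1] + A δ[n] for every choice of the free constant B. The values a = 0.5 and A = 2 are arbitrary:

```python
# Verify that y[n] = A a^n u[n] + B a^n solves the LCCDE (2.53)
# with x[n] = A delta[n], for any constant B. a, A are arbitrary choices.
a, A = 0.5, 2.0

def y(n, B):
    return A * a**n * (1 if n >= 0 else 0) + B * a**n

def x(n):
    return A if n == 0 else 0.0
```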
Remarks on the example:
• The solution (2.57) of (2.53) is not unique: B is arbitrary.
• (2.53) does not necessarily define a linear system: in the case A = 0 and B = 1, we obtain y[n] ≠ 0 while x[n] = 0.
• The solution of (2.53) is not causal in general: the choice B = −A gives

    y[n] = { 0 for n ≥ 0;  −A aⁿ for n < 0 }    (2.58)

which is an anticausal solution.

General remarks:
• The solution of an LCCDE is not unique. In fact, for an Nth order LCCDE, uniqueness requires the specification of N initial conditions.
• An LCCDE does not necessarily correspond to a causal LTI system. However, it can be shown that the LCCDE will correspond to a unique causal LTI system if we further assume that this system is initially at rest. That is:

    x[n] = 0 for n < n₀  ⟹  y[n] = 0 for n < n₀    (2.59)

• This is equivalent to assuming zero initial conditions when solving the LCCDE, that is: y[n₀ − l] = 0 for l = 1, ..., N.
• Back to the example: here x[n] = A δ[n] = 0 for n < 0. Assuming zero initial conditions, we must have y[−1] = 0. Using this condition in (2.57), we have y[−1] = B a⁻¹ = 0 ⇒ B = 0, so that finally, we obtain a unique (causal) solution

    y[n] = A aⁿ u[n]    (2.60)

• A systematic approach for solving LCCDEs will be presented in Chapter 4.

2.5 Problems

Problem 2.1: Basic Signal Transformations
Let the sequence x[n] be as illustrated in Figure 2.7. Determine and draw the following sequences:
1. x[−n]
2. x[n−2]
3. x[3−n]
4. x[2n]
5. x[n] ∗ δ[n−3]
6. x[n] ∗ (u[n] − u[n−2])

[Fig. 2.7: Sequence x[n] to be used in Problem 2.1.]

Problem 2.2: Impulse response from LCCDE
Let the input-output relationship of an LTI system be described by the following difference equation:

    y[n] + (1/3) y[n−1] = x[n] − 4 x[n−1]

together with initial rest conditions.
1. Determine the impulse response h[n] of the corresponding system.
2. Determine the output of this system if the input x[n] is given by

    x[n] = {. . . , 0, 0, 1, 2, 0, −1, 1, 0, 0, . . .}

3. Is the system causal? stable? FIR? IIR?

Problem 2.3:
Let T_i, i = 1, 2, 3, be LTI systems, with corresponding impulse responses h_i[n], i = 1, 2, 3. Is the system illustrated in Figure 2.8 LTI? If so, prove it; otherwise, give a counterexample. If the system is LTI, what is its impulse response?

Problem 2.4:
Let T_i, i = 1, 2, be stable LTI systems, with corresponding impulse responses

    h1[n] = δ[n] − 3 δ[n−2] + δ[n−3]
    h2[n] = (1/2) δ[n−1] + (1/4) δ[n−2]

[Fig. 2.8: System connections (T1, T2, T3) to be used in Problem 2.3.]
[Fig. 2.9: Feedback connection of T1 and T2 for Problem 2.4.]

Consider the overall system T in Figure 2.9, with impulse response h[n].
1. Write a linear constant coefficient difference equation (LCCDE) corresponding to the input-output relationships of T1 and T2.
2. Write an LCCDE corresponding to the input-output relationship of T.
3. Explain how you could compute h[n] from this LCCDE. Compute h[0], h[1] and h[2].
Chapter 3 — Discrete-time Fourier transform (DTFT)

3.1 The DTFT and its inverse

Definition: The DTFT is a transformation that maps a DT signal x[n] into a complex-valued function of the real variable ω, namely

    X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn},  ω ∈ R    (3.1)

Remarks:
• In general, X(ω) ∈ C.
• X(ω + 2π) = X(ω) ⇒ knowledge of X(ω) for ω ∈ [−π, π] is sufficient.
• X(ω) is called the spectrum of x[n]:

    X(ω) = |X(ω)| e^{j∠X(ω)}    (3.2)

where |X(ω)| is the magnitude spectrum and ∠X(ω) the phase spectrum. The magnitude spectrum is often expressed in decibels (dB):

    |X(ω)|_dB = 20 log₁₀ |X(ω)|    (3.3)

Inverse DTFT: Let X(ω) be the DTFT of DT signal x[n]. Then

    x[n] = (1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω,  n ∈ Z    (3.4)
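For a finite-length signal, the analysis sum (3.1) is finite and can be evaluated directly. The sketch below (our own helper, named `dtft` for illustration) checks two facts from the definition: X(ω) = 1 for the unit pulse, and X(π) = 0 for x[n] = {1, 1}:

```python
import cmath, math

# Direct evaluation of the analysis equation (3.1) for a finite-length signal.
def dtft(x, omega, n0=0):
    """X(omega) = sum_n x[n] e^{-j omega n}; the list x starts at index n0."""
    return sum(xn * cmath.exp(-1j * omega * (n0 + i)) for i, xn in enumerate(x))

X0 = dtft([1.0], 0.7)            # unit pulse: X(omega) = 1 at any omega
Xpi = dtft([1.0, 1.0], math.pi)  # 1 + e^{-j pi} = 0
```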
Proof: First note that ∫_{−π}^{π} e^{jωn} dω = 2π δ[n]. Then, we have

    ∫_{−π}^{π} X(ω) e^{jωn} dω = ∫_{−π}^{π} { ∑_{k=−∞}^{∞} x[k] e^{−jωk} } e^{jωn} dω
        = ∑_{k=−∞}^{∞} x[k] ∫_{−π}^{π} e^{jω(n−k)} dω    (3.5)
        = 2π ∑_{k=−∞}^{∞} x[k] δ[n−k] = 2π x[n]  □

Remarks:
• In (3.4), x[n] is expressed as a weighted sum of complex exponential signals e^{jωn}, ω ∈ [−π, π], with weight X(ω).
• Accordingly, the DTFT X(ω) describes the frequency content of x[n].
• Since the DT signal x[n] can be recovered uniquely from its DTFT X(ω), we say that x[n] together with X(ω) form a DTFT pair, and write:

    x[n] ⟷ X(ω)    (3.6)

• DTFT equation (3.1) is called the analysis relation, while inverse DTFT equation (3.4) is called the synthesis relation.

3.2 Convergence of the DTFT

Introduction:
• For the DTFT to exist, the series ∑_{n=−∞}^{∞} x[n] e^{−jωn} must converge. That is, the partial sum

    X_M(ω) = ∑_{n=−M}^{M} x[n] e^{−jωn}    (3.7)

must converge to a limit X(ω) as M → ∞.
• Below, we discuss the convergence of X_M(ω) for three different signal classes of practical interest, namely:
  - absolutely summable signals;
  - energy signals;
  - power signals.
• In each case, we state without proof the main theoretical results and illustrate the theory with corresponding examples of DTFTs.
Absolutely summable signals:
• Recall: x[n] is said to be absolutely summable (sometimes denoted as L1) iff

    ∑_{n=−∞}^{∞} |x[n]| < ∞    (3.8)

• In this case, X(ω) always exists because:

    | ∑_{n=−∞}^{∞} x[n] e^{−jωn} | ≤ ∑_{n=−∞}^{∞} |x[n] e^{−jωn}| = ∑_{n=−∞}^{∞} |x[n]| < ∞    (3.9)

• It is possible to show that X_M(ω) converges uniformly to X(ω), that is: for all ε > 0, one can find M_ε such that |X(ω) − X_M(ω)| < ε for all M > M_ε and for all ω ∈ R.
• X(ω) is continuous, and d^p X(ω)/dω^p exists and is continuous for all p ≥ 1.

Example 3.1: Consider the unit pulse sequence x[n] = δ[n]. The corresponding DTFT is simply X(ω) = 1. More generally, consider an arbitrary finite duration signal, say (N1 ≤ N2)

    x[n] = ∑_{k=N1}^{N2} c_k δ[n−k]

Clearly, x[n] is absolutely summable; its DTFT is given by the finite sum

    X(ω) = ∑_{n=N1}^{N2} c_n e^{−jωn}

Example 3.2: The exponential sequence x[n] = aⁿ u[n] with |a| < 1 is absolutely summable. Its DTFT is easily computed to be

    X(ω) = ∑_{n=0}^{∞} (a e^{−jω})ⁿ = 1/(1 − a e^{−jω})

Figure 3.1 illustrates the uniform convergence of X_M(ω) as defined in (3.7) to X(ω) in the special case a = 0.8. As M increases, the whole X_M(ω) curve tends to X(ω) at every frequency.
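The uniform convergence in Example 3.2 is easy to observe numerically (our own check, mirroring Figure 3.1): the partial sum X_M(ω) with a = 0.8 is already indistinguishable from the closed form 1/(1 − a e^{−jω}) for M = 200, at every frequency tested:

```python
import cmath

# Partial DTFT sum X_M(omega) of x[n] = a^n u[n] versus the closed form
# 1/(1 - a e^{-j omega}) of Example 3.2; a = 0.8 as in Figure 3.1.
a = 0.8

def X_M(omega, M):
    return sum(a**n * cmath.exp(-1j * omega * n) for n in range(M + 1))

def X(omega):
    return 1.0 / (1.0 - a * cmath.exp(-1j * omega))

# Worst-case deviation over a grid of frequencies in (-pi, pi).
err = max(abs(X(w / 10.0) - X_M(w / 10.0, 200)) for w in range(-31, 32))
```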
[Fig. 3.1: Illustration of uniform convergence for an exponential sequence — X_M(ω) for M = 2, 5, 10, 20 together with the DTFT.]

Energy signals:
• Recall: x[n] is an energy signal (square summable, or L2) iff

    E_x ≜ ∑_{n=−∞}^{∞} |x[n]|² < ∞    (3.10)

• In this case, it can be proved that X_M(ω) converges to X(ω) in the mean-square sense, that is:

    lim_{M→∞} ∫_{−π}^{π} |X(ω) − X_M(ω)|² dω = 0    (3.11)

• Mean-square convergence is weaker than uniform convergence:
  - L1 convergence implies L2 convergence;
  - L2 convergence does not guarantee that lim_{M→∞} X_M(ω) exists for all ω.
• X(ω) may be discontinuous (jump) at certain points.

Example 3.3: Ideal lowpass filter
Consider the DTFT defined by

    X(ω) = { 1 if |ω| < ω_c;  0 if ω_c < |ω| ≤ π }

whose graph is illustrated in Figure 3.2(a). The corresponding DT signal is obtained via (3.4) as follows:

    x[n] = (1/2π) ∫_{−ω_c}^{ω_c} e^{jωn} dω = (ω_c/π) sinc(ω_c n / π)    (3.12)
[Fig. 3.2: (a) DTFT of the ideal lowpass filter; (b) the corresponding impulse response x[n].]

The signal x[n] is illustrated in Figure 3.2(b) for the case ω_c = π/2, where the sinc function is defined as

    sinc(x) ≜ sin(πx) / (πx)

It can be shown that:
• ∑_n |x[n]| = ∞ ⇒ x[n] is not absolutely summable;
• ∑_n |x[n]|² < ∞ ⇒ x[n] is square summable.

Here, X_M(ω) converges to X(ω) in the mean-square sense only; the convergence is not uniform. We have the so-called Gibbs phenomenon (Figure 3.3):
• There is an overshoot of size ∆X ≈ 0.09 near ±ω_c.
• The overshoot cannot be eliminated by increasing M; this is a manifestation of the non-uniform convergence.
• It has important implications in filter design.

Power signals:
• Recall: x[n] is a power signal iff

    P_x ≜ lim_{N→∞} (1/(2N+1)) ∑_{n=−N}^{N} |x[n]|² < ∞    (3.13)
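The square-summability claim for the ideal lowpass response (3.12) can be checked numerically (a sketch of ours, not from the notes): with ω_c = π/2, a truncated sum of |x[n]|² approaches the energy E_x = ω_c/π = 0.5, even though ∑|x[n]| diverges:

```python
import math

# Energy of the ideal lowpass impulse response, eq. (3.12):
# h[n] = sin(wc n)/(pi n) for n != 0, h[0] = wc/pi. With wc = pi/2,
# the energy sum converges to wc/pi = 0.5 (Parseval on the ideal response).
wc = math.pi / 2

def h(n):
    return wc / math.pi if n == 0 else math.sin(wc * n) / (math.pi * n)

N = 20000  # truncation point; the tail contributes O(1/N)
energy = sum(h(n) ** 2 for n in range(-N, N + 1))
```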
[Fig. 3.3: Illustration of the Gibbs phenomenon — partial sums X_M(ω) of the ideal lowpass response for M = 5, 9, 15 and 35.]
• If x[n] has infinite energy but finite power, X_M(ω) may still converge to a generalized function X(ω); the expression of X(ω) then typically contains continuous delta functions in the variable ω.
• Most power signals do not have a DTFT even in this generalized sense. The exceptions include the following useful DT signals:
  - periodic signals;
  - the unit step.
• While the formal mathematical treatment of generalized functions is beyond the scope of this course, we shall frequently make use of certain basic results, as developed in the examples below.

Example 3.4: Consider the following DTFT:

    X(ω) = 2π ∑_{k=−∞}^{∞} δ_a(ω − 2πk)

where δ_a(ω) denotes an analog delta function centred at ω = 0. By using the synthesis relation (3.4), one gets

    x[n] = (1/2π) ∫_{−π}^{π} 2π ∑_{k=−∞}^{∞} δ_a(ω − 2πk) e^{jωn} dω    (3.14)
         = ∫_{−π}^{π} δ_a(ω) e^{jωn} dω = e^{j0n} = 1    (3.15)

since in (3.14) the only value of k for which the delta function is non-zero within the integration interval is k = 0.

Example 3.5: Consider the unit step sequence u[n] and let U(ω) denote its DTFT. The following result is given without proof:

    U(ω) = 1/(1 − e^{−jω}) + π ∑_{k=−∞}^{∞} δ_a(ω − 2πk)
3.3 Properties of the DTFT

Notations:
• DTFT pairs:

    x[n] ⟷ X(ω),   y[n] ⟷ Y(ω)

• Real and imaginary parts:

    x[n] = x_R[n] + j x_I[n],   X(ω) = X_R(ω) + j X_I(ω)

• Even and odd components:

    x[n] = x_e[n] + x_o[n]
    x_e[n] ≜ (1/2)(x[n] + x*[−n]) = x_e*[−n]   (even)
    x_o[n] ≜ (1/2)(x[n] − x*[−n]) = −x_o*[−n]  (odd)

• Similarly:

    X(ω) = X_e(ω) + X_o(ω)
    X_e(ω) ≜ (1/2)(X(ω) + X*(−ω)) = X_e*(−ω)   (even)
    X_o(ω) ≜ (1/2)(X(ω) − X*(−ω)) = −X_o*(−ω)  (odd)

Basic symmetries:

    x[−n]    ⟷  X(−ω)
    x*[n]    ⟷  X*(−ω)
    x_R[n]   ⟷  X_e(ω)
    j x_I[n] ⟷  X_o(ω)
    x_e[n]   ⟷  X_R(ω)
    x_o[n]   ⟷  j X_I(ω)
Real/imaginary signals:
• If x[n] is real, then X(ω) = X*(−ω).
• If x[n] is purely imaginary, then X(ω) = −X*(−ω).

Proof: x[n] ∈ R ⇒ x_I[n] = 0 ⇒ X_o(ω) = 0. In turn, this implies X(ω) = X_e(ω) = X*(−ω). A similar argument applies to the purely imaginary case. □

Remarks:
• For x[n] real, the symmetry X(ω) = X*(−ω) implies

    |X(ω)| = |X(−ω)|,  ∠X(ω) = −∠X(−ω)    (3.28)
    X_R(ω) = X_R(−ω),  X_I(ω) = −X_I(−ω)    (3.29)

This means that, for real signals, one only needs to specify X(ω) for ω ∈ [0, π].

Linearity:

    a x[n] + b y[n] ⟷ a X(ω) + b Y(ω)    (3.30)

Time shift (very important):

    x[n−d] ⟷ e^{−jωd} X(ω)    (3.31)

Frequency modulation:

    e^{jω₀n} x[n] ⟷ X(ω − ω₀)    (3.32)

Differentiation:

    n x[n] ⟷ j dX(ω)/dω    (3.33)

Example 3.6: Let y[n] = b₀ x[n] + b₁ x[n−1]. By using the linearity and time shift properties, one gets

    Y(ω) = (b₀ + b₁ e^{−jω}) X(ω)
Example 3.7: Let y[n] = cos(ω₀n) x[n]. Recall first that e^{jω₀n} = cos(ω₀n) + j sin(ω₀n), so that cos(ω₀n) = (1/2)(e^{jω₀n} + e^{−jω₀n}). Then, by linearity and frequency modulation:

    Y(ω) = (1/2) (X(ω − ω₀) + X(ω + ω₀))

Convolution:

    x[n] ∗ y[n] ⟷ X(ω) Y(ω)    (3.34)

Multiplication:

    x[n] y[n] ⟷ (1/2π) ∫_{−π}^{π} X(φ) Y(ω − φ) dφ    (3.35)

We refer to the RHS of (3.35) as the circular convolution of the periodic functions X(ω) and Y(ω). Thus:
- DT convolution ⟷ multiplication in ω, while
- DT multiplication ⟷ circular convolution in ω.
We say that these two properties are duals of each other.

We have met two kinds of convolution so far, and one more is familiar from continuous-time systems. For functions of a continuous variable we have ordinary or linear convolution, involving an integral from −∞ to ∞. For discrete-time systems, we have seen the ordinary or linear convolution, which involves a sum from −∞ to ∞. Here we have seen the periodic or circular convolution of periodic functions of a continuous variable (in this case the periodic frequency responses of discrete-time systems), involving an integral over one period. Later, in the study of DFTs, we will meet the periodic or circular convolution of sequences, involving a sum over one period.

Parseval's relation:

    ∑_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(ω)|² dω    (3.36)

Plancherel's relation:

    ∑_{n=−∞}^{∞} x[n] y[n]* = (1/2π) ∫_{−π}^{π} X(ω) Y(ω)* dω    (3.37)
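Parseval's relation (3.36) can be verified numerically for a finite-length signal (our own check): sampling X(ω) at K uniform frequencies and averaging |X|² reproduces the time-domain energy exactly once K is at least the signal length, since the cross terms then cancel:

```python
import cmath, math

# Numerical check of Parseval's relation (3.36) for a finite signal:
# (1/K) sum_k |X(2 pi k / K)|^2 equals sum_n |x[n]|^2 when K >= len(x).
x = [1.0, -2.0, 0.5, 3.0]   # arbitrary test signal

def X(omega):
    return sum(xn * cmath.exp(-1j * omega * n) for n, xn in enumerate(x))

K = 8
freq_energy = sum(abs(X(2 * math.pi * k / K)) ** 2 for k in range(K)) / K
time_energy = sum(xn ** 2 for xn in x)
```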
3.4 Frequency analysis of LTI systems

LTI system (recap): x[n] → H{·} → y[n], with

    y[n] = x[n] ∗ h[n] = ∑_{k=−∞}^{∞} x[k] h[n−k]    (3.38)
    h[n] = H{δ[n]}    (3.39)

Response to a complex exponential:
• Let x[n] = e^{jωn}, n ∈ Z.
• The corresponding output signal is

    y[n] = ∑_k h[k] x[n−k] = ∑_k h[k] e^{jω(n−k)} = { ∑_k h[k] e^{−jωk} } e^{jωn} = H(ω) x[n]    (3.40)

where we recognize H(ω) as the DTFT of h[n].
• Eigenvector interpretation:
  - x[n] = e^{jωn} behaves as an eigenvector of the LTI system H: Hx = λx;
  - the corresponding eigenvalue λ = H(ω) provides the system gain.

Definition: Consider an LTI system H with impulse response h[n]. The frequency response of H, denoted H(ω), is defined as the DTFT of h[n]:

    H(ω) = ∑_{n=−∞}^{∞} h[n] e^{−jωn},  ω ∈ R    (3.41)

Remarks:
• If the DT system H is stable, then the sequence h[n] is absolutely summable and the DTFT converges uniformly: H(ω) exists and is continuous.
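The eigenfunction property (3.40) can be demonstrated numerically (a sketch of ours, with an arbitrary 3-tap FIR impulse response): passing e^{jωn} through the convolution sum yields exactly H(ω) e^{jωn} at every time index:

```python
import cmath

# Eigenfunction check of eq. (3.40): for x[n] = e^{j omega n} and an FIR h,
# the output equals H(omega) e^{j omega n}. h and omega are arbitrary choices.
h = [0.5, 0.3, -0.2]
omega = 0.9

def H(w):
    return sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

def output(n):
    # y[n] = sum_k h[k] x[n-k] with x[n] = e^{j omega n}
    return sum(hk * cmath.exp(1j * omega * (n - k)) for k, hk in enumerate(h))

errors = [abs(output(n) - H(omega) * cmath.exp(1j * omega * n)) for n in range(5)]
```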
• If H(ω) is known, we can recover h[n] from the inverse DTFT relation:

    h[n] = (1/2π) ∫_{−π}^{π} H(ω) e^{jωn} dω    (3.42)

• We refer to |H(ω)| as the magnitude response of the system, and to ∠H(ω) as the phase response (or phase spectrum).

Properties: Let H be an LTI system with frequency response H(ω), and let y[n] denote the response of H to an arbitrary input x[n]. We have

    Y(ω) = H(ω) X(ω)    (3.43)

Proof: H LTI ⇒ y = h ∗ x ⇒ Y(ω) = H(ω) X(ω). □

Interpretation:
• Recall that the input signal x[n] may be expressed as a weighted sum of complex exponential sequences via the inverse DTFT:

    x[n] = (1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω    (3.44)

• According to (3.43),

    y[n] = (1/2π) ∫_{−π}^{π} H(ω) X(ω) e^{jωn} dω    (3.45)

• Note the filtering role of H(ω): each frequency component of x[n], i.e. X(ω) e^{jωn}, is affected by H(ω) in the final system output y[n]:
  - gain or attenuation by |H(ω)|;
  - phase rotation by ∠H(ω).

Example 3.8: Consider a causal moving average system:

    y[n] = (1/M) ∑_{k=0}^{M−1} x[n−k]    (3.46)

• Impulse response:

    h[n] = (1/M) ∑_{k=0}^{M−1} δ[n−k] = { 1/M if 0 ≤ n < M;  0 otherwise } = (1/M)(u[n] − u[n−M])
• Frequency response:

    H(ω) = (1/M) ∑_{n=0}^{M−1} e^{−jωn}
         = (1/M) (e^{−jωM} − 1) / (e^{−jω} − 1)
         = (1/M) (e^{−jωM/2}/e^{−jω/2}) · (e^{−jωM/2} − e^{jωM/2}) / (e^{−jω/2} − e^{jω/2})
         = (1/M) e^{−jω(M−1)/2} sin(ωM/2) / sin(ω/2)    (3.47)

• The magnitude response is illustrated in Figure 3.5, where:
  - |H(ω)| = 0 at ω = 2πk/M (k ≠ 0);
  - the level of the first sidelobe is ≈ −13 dB.
• The phase response is illustrated in Figure 3.4, with:
  - a negative slope of −(M−1)/2;
  - jumps of π at ω = 2πk/M (k ≠ 0), where sin(ωM/2) changes its sign.

[Fig. 3.4: Phase response of the causal moving average system (M = 6).]

Further remarks:
• We have seen that for an LTI system, Y(ω) = H(ω)X(ω):
  - H(ω) = 0 at a given frequency ω ⇒ Y(ω) = 0, regardless of the input;
  - X(ω) = 0 at a given frequency ω ⇒ Y(ω) = 0; if not, the system is non-linear or time-varying.
• If the system is LTI, its frequency response may be computed as

    H(ω) = Y(ω) / X(ω)    (3.48)
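The closed form (3.47) can be checked against the direct DTFT sum of h[n] (our own verification, using M = 6 as in Figures 3.4-3.5), including the predicted zero at ω = 2π/M:

```python
import cmath, math

# Moving-average frequency response, eq. (3.47): compare the closed form
# (1/M) e^{-j w (M-1)/2} sin(wM/2)/sin(w/2) with the direct DTFT sum.
M = 6

def H_sum(omega):
    return sum(cmath.exp(-1j * omega * n) for n in range(M)) / M

def H_closed(omega):
    if abs(math.sin(omega / 2)) < 1e-12:   # removable singularity at omega = 0
        return 1.0 + 0j
    return (cmath.exp(-1j * omega * (M - 1) / 2)
            * math.sin(omega * M / 2) / (M * math.sin(omega / 2)))

err = max(abs(H_sum(0.01 + 0.1 * k) - H_closed(0.01 + 0.1 * k)) for k in range(60))
```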
[Fig. 3.5: Magnitude response of the causal moving average system (M = 6).]

3.5 LTI systems characterized by LCCDE

LCCDE (recap):
• A DT system obeys an LCCDE of order N if

    ∑_{k=0}^{N} a_k y[n−k] = ∑_{k=0}^{M} b_k x[n−k]    (3.49)

where a_0 ≠ 0 and a_N ≠ 0.
• If we further assume initial rest conditions, i.e.

    x[n] = 0 for n < n₀  ⟹  y[n] = 0 for n < n₀    (3.50)

then the LCCDE corresponds to a unique causal LTI system.

Frequency response: Taking the DTFT on both sides of (3.49):

    ∑_{k=0}^{N} a_k e^{−jωk} Y(ω) = ∑_{k=0}^{M} b_k e^{−jωk} X(ω)

    ⇒ H(ω) = Y(ω)/X(ω) = ( ∑_{k=0}^{M} b_k e^{−jωk} ) / ( ∑_{k=0}^{N} a_k e^{−jωk} )    (3.51)
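Formula (3.51) maps LCCDE coefficients straight to the frequency response. The sketch below (our own helper, `freq_resp`) evaluates it for the hypothetical stable first-order system y[n] − (1/2)y[n−1] = x[n], whose response at ω = π is 1/(1 + 1/2) = 2/3:

```python
import cmath, math

# Frequency response from LCCDE coefficients, eq. (3.51):
# H(omega) = sum_k b_k e^{-j w k} / sum_k a_k e^{-j w k}.
def freq_resp(b, a, omega):
    num = sum(bk * cmath.exp(-1j * omega * k) for k, bk in enumerate(b))
    den = sum(ak * cmath.exp(-1j * omega * k) for k, ak in enumerate(a))
    return num / den

# Example system (ours): y[n] - 0.5 y[n-1] = x[n]  =>  H(pi) = 2/3.
Hval = freq_resp([1.0], [1.0, -0.5], math.pi)
```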
3.6 Ideal frequency selective filters

Ideal lowpass:

    H_LP(ω) = { 1 if |ω| < ω_c;  0 if ω_c < |ω| ≤ π }    (3.52)
    h_LP[n] = (ω_c/π) sinc(ω_c n / π)

[Fig. 3.6: Lowpass filter.]

Ideal highpass:

    H_HP(ω) = { 1 if ω_c < |ω| ≤ π;  0 if |ω| < ω_c }    (3.53)
    h_HP[n] = δ[n] − h_LP[n]

[Fig. 3.7: Highpass filter.]

Ideal bandpass:

    H_BP(ω) = { 1 if |ω − ω₀| < B/2;  0 elsewhere in [−π, π] }    (3.54)
    h_BP[n] = 2 cos(ω₀n) h_LP[n], with ω_c ≡ B/2

[Fig. 3.8: Bandpass filter.]

Ideal bandstop:

    H_BS(ω) = { 0 if |ω − ω₀| < B/2;  1 elsewhere in [−π, π] }    (3.55)
    h_BS[n] = δ[n] − h_BP[n]

[Fig. 3.9: Bandstop filter.]

Remarks:
• These filters are not realizable:
  - they are non-causal (h[n] ≠ 0 for some n < 0);
  - they are unstable (∑_n |h[n]| = ∞).
• The main object of filter design is to derive practical approximations to such filters (more on this later).

3.7 Phase delay and group delay

Introduction: Consider an integer delay system,

    x[n] → y[n] = x[n−k],  k ∈ Z    (3.56)

The corresponding frequency response is computed as

    Y(ω) = e^{−jωk} X(ω)  ⇒  H(ω) = Y(ω)/X(ω) = e^{−jωk}    (3.57)

This clearly shows that the phase of the system, ∠H(ω) = −ωk, provides information about the delay incurred by the input.

Phase delay: For an arbitrary H(ω), we define the phase delay as

    τ_ph ≜ −∠H(ω)/ω    (3.58)

For the integer delay system, we get

    τ_ph = −∠H(ω)/ω = ωk/ω = k    (3.59)

The concept of phase delay is useful mostly for wideband signals (i.e., occupying the whole band from −π to π).

Group delay:

Definition:

    τ_gr ≜ −d∠H(ω)/dω    (3.60)

This concept is useful when the system input x[n] is a narrowband signal centred around ω₀, i.e.

    x[n] = s[n] e^{jω₀n}    (3.61)

where s[n] is a slowly-varying envelope. An example of such a narrowband signal is given in Figure 3.10. The corresponding system output is then

    y[n] ≈ |H(ω₀)| s[n − τ_gr(ω₀)] e^{jω₀[n − τ_ph(ω₀)]}    (3.62)
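Definitions (3.58) and (3.60) can be sanity-checked numerically (our own sketch): for a delay system H(ω) = e^{−jωτ} with a non-integer τ = 2.5 (an arbitrary choice), both the phase delay and a finite-difference estimate of the group delay recover τ:

```python
import cmath

# Phase delay -angle(H)/omega, eq. (3.58), and a central-difference estimate
# of the group delay -d angle(H)/d omega, eq. (3.60), for H(w) = e^{-j w tau}.
tau = 2.5

def H(omega):
    return cmath.exp(-1j * omega * tau)

w, dw = 0.4, 1e-6   # evaluation frequency kept small so no phase wrapping occurs
phase_delay = -cmath.phase(H(w)) / w
group_delay = -(cmath.phase(H(w + dw)) - cmath.phase(H(w - dw))) / (2 * dw)
```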
[Fig. 3.10: An example of a narrowband signal, x[n] = s[n] cos(ω₀n). Top: the signal x[n] together with the dashed envelope s[n] (samples are connected to ease viewing). Bottom: the magnitude of the DTFT of x[n].]
The above equation shows that the phase delay τ_ph(ω₀) contributes a phase change to the carrier e^{jω₀n}, whereas the group delay τ_gr(ω₀) contributes a delay to the envelope s[n]. Notice that this equation is strictly valid only for integer values of τ_gr(ω₀), though a non-integer value of τ_gr(ω₀) can still be interpreted as a non-integer delay.¹

¹ For instance, a delay of 1/2 sample amounts to replacing x[n] by a value interpolated between x[n] and x[n−1].

Pure delay system:
• A generalization of the integer delay system.
• Defined directly in terms of its frequency response:

    H(ω) = e^{−jωτ}    (3.63)

where τ ∈ R is not limited to take integer values; its effect on the phase of the input amounts to a delay by τ samples.
• Clearly:

    |H(ω)| = 1 ⇒ no magnitude distortion    (3.64)
    τ_ph(ω) = τ_gr(ω) = τ    (3.65)

Linear phase: Consider an LTI system with H(ω) = |H(ω)| e^{j∠H(ω)}. If

    ∠H(ω) = −ωτ    (3.66)

we say that the system has linear phase. One can define a distortionless system as one having a constant gain and a linear phase, i.e. H(ω) = A e^{−jωτ}. It is readily seen that such a system introduces no phase distortion: the gain is easily compensated for, and the linear phase merely delays the signal by τ samples. A more general definition of linear phase will be given later.

3.8 Problems

Problem 3.1: Ideal Highpass Filter
Let h[n] be the impulse response of an ideal highpass filter whose frequency response is

    H(ω) = { 0 if |ω| < ω_c;  1 elsewhere in the main period −π ≤ ω ≤ π }

where ω_c is called the cutoff frequency.
1. Plot H(ω).
2. For some given ω_c, compute h[n] by using the definition of the inverse DTFT.
3. Given that H(ω) can be expressed as H(ω) = 1 − H_LP(ω), where H_LP(ω) is the frequency response of an ideal lowpass filter with cutoff frequency ω_c, compute h[n] as a function of h_LP[n].
4. What would be the frequency response G(ω) if g[n] were h[n] delayed by 3 samples?
5. Is the filter h[n] stable? causal? FIR? IIR?

Problem 3.2: Windowing
Let x[n] = cos(ω₀n) for some discrete frequency ω₀ such that 0 ≤ ω₀ ≤ π.
1. Give an expression for X(ω).
2. Let w[n] = u[n] − u[n−M]. Plot the sequence w[n]. Compute W(ω) by using the definition of the DTFT.
3. Let now y[n] = x[n] w[n]. Compute Y(ω).

Problem 3.3: Given a DTFT pair x[n] ⟷ X(ω):
1. Compute the DTFT of (−1)ⁿ x[n].
2. Compute the DTFT of (−1)ⁿ x*[M−n] for some M ∈ Z.
3. What is the time-domain equivalent of the following frequency-domain condition: |X(ω)|² + |X(π − ω)|² = 1? [Hint: Recall that |a|² = a*a.]
Chapter 4 — The z-transform (ZT)

Motivation:
• While very useful, the DTFT has a limited range of applicability; for example, the DTFT of a simple signal like x[n] = 2ⁿ u[n] does not exist.
• One may view the ZT as a generalization of the DTFT that is applicable to a larger class of signals.
• The ZT is the discrete-time equivalent of the Laplace transform for continuous-time signals.

4.1 The ZT

Definition: The ZT is a transformation that maps a DT signal x[n] into a function of the complex variable z, defined as

    X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n}    (4.1)

The domain of X(z) is the set of all z ∈ C such that the series converges absolutely, that is:

    Dom(X) = { z ∈ C : ∑_n |x[n] z^{−n}| < ∞ }    (4.2)

Remarks:
• The domain of X(z) is called the region of convergence (ROC).
• The ROC only depends on |z|: if z ∈ ROC, so is z e^{jφ} for any angle φ.
• Within the ROC, X(z) is an analytic function of the complex variable z. (That is, X(z) is smooth, its derivative exists, etc.)
• Both X(z) and the ROC are needed when specifying a ZT.
• Since a DT signal x[n] can be recovered uniquely from its ZT X(z) (and associated ROC R_x), we say that x[n] together with X(z) and R_x form a ZT pair, and write:

    x[n] ⟷ X(z),  z ∈ R_x    (4.3)

• If z = e^{jω} ∈ ROC, then

    X(e^{jω}) = ∑_{n=−∞}^{∞} x[n] e^{−jωn} = DTFT{x[n]}    (4.4)

• In the sequel, the DTFT is either denoted by X(e^{jω}), or simply by X(ω) when there is no ambiguity (as we did in Chapter 3).

Example 4.1: Consider x[n] = 2ⁿ u[n]. We have

    X(z) = ∑_{n=0}^{∞} 2ⁿ z^{−n} = 1/(1 − 2z^{−1}),  ROC: |z| > 2

where the series converges provided |2z^{−1}| < 1, i.e. |z| > 2.

Inverse ZT: Let X(z), with associated ROC R_x, be the ZT of DT signal x[n]. Then

    x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz,  n ∈ Z    (4.5)

where C is any closed, simple contour around z = 0 within R_x. In practice, we do not use (4.5) explicitly to compute the inverse ZT (more on this later); its use is limited mostly to theoretical considerations.

Connection with DTFT:
• The ZT is more general than the DTFT. Let z = r e^{jω}, so that

    ZT{x[n]} = ∑_{n=−∞}^{∞} x[n] r^{−n} e^{−jωn} = DTFT{x[n] r^{−n}}    (4.6)

This gives the possibility of adjusting r so that the series converges.
• Consider the previous example: x[n] = 2ⁿ u[n] does not have a DTFT (note: ∑_n |x[n]| = ∞), yet x[n] has a ZT for |z| = r > 2.
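Example 4.1's ROC condition is easy to probe numerically (our own check): for a point inside the ROC, say z = 3, the partial sums of ∑ 2ⁿ z^{−n} converge to the closed form 1/(1 − 2z^{−1}) = 3:

```python
# Partial sums of the ZT of x[n] = 2^n u[n] (Example 4.1). For |z| > 2 the
# series converges to 1/(1 - 2 z^{-1}); z = 3 lies inside the ROC.
z = 3.0
partial = sum(2.0**n * z**(-n) for n in range(200))
closed = 1.0 / (1.0 - 2.0 / z)
```

Trying a point with |z| < 2 instead would make the partial sums grow without bound, which is exactly why the ROC must accompany X(z).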
4.2 Study of the ROC and ZT examples

Signals of finite duration:
• Suppose there exist integers N1 ≤ N2 such that x[n] = 0 for n < N1 and for n > N2. Then we have

    X(z) = ∑_{n=N1}^{N2} x[n] z^{−n} = x[N1] z^{−N1} + x[N1+1] z^{−N1−1} + ··· + x[N2] z^{−N2}    (4.7)

• The ZT exists for all z ∈ C, except possibly at z = 0 and z = ∞:
  - N2 > 0 ⇒ z = 0 ∉ ROC;
  - N1 < 0 ⇒ z = ∞ ∉ ROC.

Example 4.2: Consider the unit pulse sequence x[n] = δ[n]. Then

    X(z) = 1 × z⁰ = 1,  ROC = C    (4.8)

Example 4.3: Consider the finite length sequence x[n] = {1, 1, 2, 1}, beginning at n = −1. Then

    X(z) = z + 1 + 2z^{−1} + z^{−2},  ROC: 0 < |z| < ∞    (4.9)

Theorem: To any power series ∑_{n=0}^{∞} c_n wⁿ, we can associate a radius of convergence

    R = lim_{n→∞} |c_n / c_{n+1}|    (4.10)

such that if |w| < R the series converges absolutely, and if |w| > R the series diverges.

Causal signals: Suppose x[n] = 0 for n < 0. We have

    X(z) = ∑_{n=0}^{∞} x[n] z^{−n} = ∑_{n=0}^{∞} c_n wⁿ,  where w ≜ z^{−1} and c_n ≜ x[n]    (4.11)

Therefore, the ROC is given by

    |w| < R_w = lim_{n→∞} |x[n] / x[n+1]|,  i.e.  |z| > 1/R_w ≜ r    (4.12)

The ROC is the exterior of a circle of radius r, as shown in Figure 4.1.

[Fig. 4.1: Illustration of the ROC for a causal signal (|z| > r).]

Example 4.4: Consider the causal sequence x[n] = aⁿ u[n], where a is an arbitrary complex number. We have

    X(z) = ∑_{n=0}^{∞} aⁿ z^{−n} = ∑_{n=0}^{∞} (a z^{−1})ⁿ    (4.13)
         = 1 / (1 − a z^{−1})    (4.14)

provided |a z^{−1}| < 1, or equivalently, |z| > |a|. Thus, consistent with Figure 4.1, the ROC is the exterior of a circle of radius r = |a| in the z-plane:

    aⁿ u[n] ⟷ 1/(1 − a z^{−1}),  ROC: |z| > |a|    (4.15)

Anticausal signals: Suppose x[n] = 0 for n > 0. We have

    X(z) = ∑_{n=−∞}^{0} x[n] z^{−n} = ∑_{n=0}^{∞} x[−n] zⁿ    (4.16)

Thus, the ROC is given by

    |z| < R = lim_{n→∞} |x[−n] / x[−n−1]|    (4.17)

The ROC is the interior of a circle of radius R in the z-plane, as shown in Figure 4.2.

[Fig. 4.2: Illustration of the ROC for an anticausal signal (|z| < R).]

Example 4.5: Consider the anticausal sequence x[n] = −aⁿ u[−n−1]. We have

    X(z) = − ∑_{n=−∞}^{−1} aⁿ z^{−n} = − ∑_{n=1}^{∞} (a^{−1} z)ⁿ    (4.18)
         = − (a^{−1} z) / (1 − a^{−1} z) = 1 / (1 − a z^{−1}),  if |a^{−1} z| < 1    (4.19)

provided |a^{−1} z| < 1, or equivalently, |z| < |a|. Thus, consistent with Figure 4.2, the ROC is the interior of a circle of radius R = |a|:

    −aⁿ u[−n−1] ⟷ 1/(1 − a z^{−1}),  ROC: |z| < |a|    (4.20)

Note that in this and the previous example, we obtain the same mathematical expression for X(z), but the ROCs are different.

Arbitrary signals: We can always decompose the series X(z) as

    X(z) = ∑_{n=−∞}^{−1} x[n] z^{−n} + ∑_{n=0}^{∞} x[n] z^{−n}    (4.21)

where the first sum (anticausal part) needs |z| < R and the second (causal part) needs |z| > r. We distinguish two cases:
• If r < R, the ZT exists, with ROC: r < |z| < R (see Figure 4.3).
• If r > R, the ROC is empty and the ZT does not exist.

[Fig. 4.3: General annular ROC (r < |z| < R).]

Example 4.6: Consider x[n] = (1/2)ⁿ u[n] − 2ⁿ u[−n−1]. Here, we have

    X(z) = ∑_{n=0}^{∞} (1/2)ⁿ z^{−n} − ∑_{n=−∞}^{−1} 2ⁿ z^{−n}    (4.22)

where the first series requires |z| > 1/2 and the second |z| < 2. The two series converge simultaneously iff

    ROC: 1/2 < |z| < 2    (4.23)

which corresponds to the situation shown in Figure 4.3 with r = 1/2 and R = 2. We obtain

    X(z) = 1/(1 − (1/2) z^{−1}) + 1/(1 − 2 z^{−1}) = (2 − (5/2) z^{−1}) / (1 − (5/2) z^{−1} + z^{−2})    (4.24)

Example 4.7: Consider x[n] = 2ⁿ u[n] − (1/2)ⁿ u[−n−1]. Here, we have X(z) = X₊(z) + X₋(z), where

    X₊(z) = ∑_{n=0}^{∞} 2ⁿ z^{−n}  converges for |z| > 2
    X₋(z) = − ∑_{n=−∞}^{−1} (1/2)ⁿ z^{−n}  converges for |z| < 1/2    (4.25)

Since these two regions do not intersect, the ROC is empty and the ZT does not exist.

4.3 Properties of the ZT

Introductory remarks:
• Notations for ZT pairs:

    x[n] ⟷ X(z), z ∈ R_x;    y[n] ⟷ Y(z), z ∈ R_y

where R_x and R_y respectively denote the ROC of X(z) and Y(z).
• When stating a property, we must also specify the corresponding ROC.
• In some cases, the true ROC may be larger than the one indicated.
Basic symmetries:

    x[−n] ←Z→ X(z^-1),   z^-1 ∈ Rx                                            (4.26)
    x*[n] ←Z→ X*(z*),    z ∈ Rx                                               (4.27)

Linearity:

    a x[n] + b y[n] ←Z→ a X(z) + b Y(z),   z ∈ Rx ∩ Ry                        (4.28)

Time shift (very important):

    x[n − d] ←Z→ z^-d X(z),   z ∈ Rx                                          (4.29)

Exponential modulation:

    a^n x[n] ←Z→ X(z/a),   z/a ∈ Rx                                           (4.30)

Differentiation:

    n x[n] ←Z→ −z dX(z)/dz,   z ∈ Rx                                          (4.31)

Convolution:

    x[n] ∗ y[n] ←Z→ X(z) Y(z),   z ∈ Rx ∩ Ry                                  (4.32)

Initial value: For x[n] causal (i.e. x[n] = 0 for n < 0), we have

    lim_{z→∞} X(z) = x[0]                                                     (4.33)
Example 4.8: Consider x[n] = cos(ωo n) u[n] = (1/2) e^{jωo n} u[n] + (1/2) e^{−jωo n} u[n]. We have

    X(z) = (1/2) ZT{e^{jωo n} u[n]} + (1/2) ZT{e^{−jωo n} u[n]}
         = (1/2) · 1/(1 − e^{jωo} z^-1) + (1/2) · 1/(1 − e^{−jωo} z^-1),   |z| > |e^{±jωo}| = 1
         = (1 − z^-1 cos ωo) / (1 − 2 z^-1 cos ωo + z^-2),   ROC: |z| > 1

Example 4.9: Consider x[n] = n a^n u[n]. We have

    X(z) = −z (d/dz) [1/(1 − a z^-1)] = a z^-1 / (1 − a z^-1)²,   ROC: |z| > |a|

Example 4.10: Consider the signals x1[n] = {1, −2a, a²} and x2[n] = {1, a, a², a³, a⁴}, where a ∈ C. Let us compute the convolution y = x1 ∗ x2 using the z-transform:

    X1(z) = 1 − 2a z^-1 + a² z^-2 = (1 − a z^-1)²
    X2(z) = 1 + a z^-1 + a² z^-2 + a³ z^-3 + a⁴ z^-4 = (1 − a⁵ z^-5)/(1 − a z^-1)
    Y(z)  = X1(z) X2(z) = (1 − a z^-1)(1 − a⁵ z^-5) = 1 − a z^-1 − a⁵ z^-5 + a⁶ z^-6

Therefore, y[n] = {1, −a, 0, 0, 0, −a⁵, a⁶}.
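The convolution property in Example 4.10 is easy to check numerically, since multiplying polynomials in z^-1 is the same as convolving their coefficient sequences. A minimal sketch in Python/NumPy (the value a = 0.5 is an arbitrary test choice, not from the notes):

```python
import numpy as np

a = 0.5  # arbitrary test value for the parameter a
x1 = np.array([1.0, -2*a, a**2])           # X1(z) = (1 - a z^-1)^2
x2 = np.array([a**k for k in range(5)])    # X2(z) = 1 + a z^-1 + ... + a^4 z^-4

# Convolution of the sequences = product of the polynomials in z^-1
y = np.convolve(x1, x2)

# Closed-form result from Example 4.10: {1, -a, 0, 0, 0, -a^5, a^6}
y_expected = np.array([1.0, -a, 0, 0, 0, -a**5, a**6])
assert np.allclose(y, y_expected)
```

The same check works for any a, since the pole-zero cancellation in X1(z)X2(z) is exact at the coefficient level.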
4.4 Rational ZTs

Rational function: X(z) is a rational function in z (or z^-1) if

    X(z) = N(z)/D(z)                                                          (4.34)

where N(z) and D(z) are polynomials in z (resp. z^-1).

Remarks:
• The rational ZT plays a central role in DSP; it is essential for the realization of practical IIR filters.
• In this and the next sections, we investigate two important issues related to the rational ZT:
  - pole-zero (PZ) characterization;
  - inversion via partial fraction expansion.

Poles and zeros:
• X(z) has a pole of order L at z = po if X(z) = ψ(z)/(z − po)^L, with 0 < |ψ(po)| < ∞.    (4.35)
• X(z) has a zero of order L at z = zo if X(z) = (z − zo)^L ψ(z), with 0 < |ψ(zo)| < ∞.    (4.36)
• We sometimes refer to the order L as the multiplicity of the pole/zero.

Poles and zeros at ∞:
• X(z) has a pole of order L at z = ∞ if X(z) = z^L ψ(z), with 0 < |ψ(∞)| < ∞.             (4.37)
• X(z) has a zero of order L at z = ∞ if X(z) = ψ(z)/z^L, with 0 < |ψ(∞)| < ∞.             (4.38)
Poles & zeros of a rational X(z): Consider the rational function X(z) = N(z)/D(z):
• Roots of N(z) ⇒ zeros of X(z); roots of D(z) ⇒ poles of X(z).
• One must take pole-zero cancellation into account: common roots of N(z) and D(z) do not count as zeros and poles.
• Repeated roots in N(z) (resp. D(z)) lead to multiple zeros (resp. poles).
• If we include poles and zeros at 0 and ∞: number of poles = number of zeros.

Example 4.11:
(1) Consider the rational function

    X(z) = z^-1 / (1 − 2z^-1 + z^-2) = z / (z² − 2z + 1) = z / (z − 1)²        (4.39)

The poles and zeros of X(z), along with their order, are:
    poles: p1 = 1, L = 2;   zeros: z1 = 0, L = 1; z2 = ∞, L = 1.

(2) Let

    X(z) = (1 − z^-4) / (1 + 3z^-1) = (z⁴ − 1) / (z³ (z + 3))

The corresponding poles and zeros are:
    zeros: zk = e^{jπk/2} for k = 0, 1, 2, 3, each with L = 1;
    poles: p1 = 0, L = 3 (triple pole); p2 = −3, L = 1.

(3) As a final example, consider X(z) = z − 1. The poles and zeros are:
    zeros: z1 = 1, L = 1;   poles: p1 = ∞, L = 1.
Pole-zero (PZ) diagram: For rational functions X(z) = N(z)/D(z), knowledge of the poles and zeros (along with their order) completely specifies X(z), up to a scaling factor, say G ∈ C. Thus, we may represent X(z) by a so-called PZ diagram, as illustrated in Figure 4.4.

Fig. 4.4 Example of a PZ diagram; a label (k) next to a marker denotes a pole or zero of order k.

Remarks:
• For completeness, the presence of poles or zeros at ∞ should be mentioned on the diagram.
• Note that it is also useful to indicate the ROC on the PZ diagram.

Example 4.12: Let us consider the following pole and zero values:

    z1 = 1 (L = 1),  p1 = 2 (L = 1)
    ⟹ X(z) = G (1 − z^-1)/(1 − 2z^-1) = G (z − 1)/(z − 2)                     (4.40)

Example 4.13: Consider x[n] = a^n u[n], where a > 0. The corresponding ZT is

    X(z) = 1/(1 − a z^-1) = z/(z − a),   ROC: |z| > a                          (4.41)

with z1 = 0 (L = 1) and p1 = a (L = 1). The PZ diagram is shown in Figure 4.5.
Fig. 4.5 PZ diagram for the signal x[n] = a^n u[n] (ROC: |z| > a).

ROC for rational ZT: Let us summarize a few facts about the ROC of a rational ZT:
• The ROC does not contain poles (because X(z) does not converge at a pole).
• The ROC is delimited by poles ⇒ it is an annular region between poles, and it can always be extended up to the nearest pole.
• If we are given only X(z), several ROCs are possible: any annular region between two poles of increasing magnitude. Accordingly, several different DT signals x[n] may correspond to the same X(z).

4.5 Inverse ZT

Introduction:
• Several methods exist for the evaluation of x[n] given its ZT X(z) and corresponding ROC:
  - contour integration via the residue theorem;
  - partial fraction expansion (PFE);
  - long division.
• Partial fraction expansion is by far the most useful technique in the context of rational ZTs.
• In this section, we present the PFE method in detail; we discuss division only as a useful tool to put X(z) in proper form for the PFE.
4.5.1 Inversion via PFE

PFE: Suppose that X(z) = N(z)/D(z), where
- N(z) and D(z) are polynomials in z^-1;
- degree of D(z) > degree of N(z).

Under these conditions, X(z) may be expressed as

    X(z) = Σ_{k=1}^{K} Σ_{l=1}^{Lk} A_{kl} / (1 − pk z^-1)^l                  (4.42)

where p1, ..., pK are the distinct poles of X(z) and L1, ..., LK are the corresponding orders. The constants A_{kl} can be computed as follows:
- simple poles (Lk = 1):

    A_{kl} ≡ A_{k1} = (1 − pk z^-1) X(z) |_{z=pk}                             (4.43)

- multiple poles (Lk > 1):

    A_{kl} = [1 / ((Lk − l)! (−pk)^{Lk−l})] d^{Lk−l}/(dz^-1)^{Lk−l} [(1 − pk z^-1)^{Lk} X(z)] |_{z=pk}   (4.44)

Inversion method: Given X(z) as above with ROC: r < |z| < R, the corresponding DT signal x[n] may be obtained as follows:
• Determine the PFE of X(z) as in (4.42).
• Invoking linearity of the ZT, express x[n] as

    x[n] = Σ_{k=1}^{K} Σ_{l=1}^{Lk} A_{kl} Z^{-1}{1/(1 − pk z^-1)^l}          (4.46)

where Z^{-1} denotes the inverse z-transform.
• Evaluate the elementary inverse ZTs in (4.46):
- simple poles (Lk = 1):

    Z^{-1}{1/(1 − pk z^-1)} = pk^n u[n] if |pk| ≤ r;  −pk^n u[−n−1] if |pk| ≥ R   (4.47)
- higher order poles (Lk > 1):

    Z^{-1}{1/(1 − pk z^-1)^l} = C(n+l−1, l−1) pk^n u[n] if |pk| ≤ r;
                                −C(n+l−1, l−1) pk^n u[−n−1] if |pk| ≥ R        (4.48)

where C(n, r) = n! / (r! (n−r)!)  (read "n choose r").

Example 4.14: Consider

    X(z) = 1 / ((1 − a z^-1)(1 − b z^-1)),   a < |z| < b

The PFE of X(z) can be rewritten as

    X(z) = A1/(1 − a z^-1) + A2/(1 − b z^-1)

with

    A1 = (1 − a z^-1) X(z) |_{z=a} = 1/(1 − b z^-1) |_{z=a} = a/(a − b)
    A2 = (1 − b z^-1) X(z) |_{z=b} = 1/(1 − a z^-1) |_{z=b} = b/(b − a)

so that

    x[n] = A1 Z^{-1}{1/(1 − a z^-1)} + A2 Z^{-1}{1/(1 − b z^-1)}
         = A1 a^n u[n] − A2 b^n u[−n−1]     (|z| > a ⇒ causal term; |z| < b ⇒ anticausal term)
         = (a^{n+1}/(a − b)) u[n] − (b^{n+1}/(b − a)) u[−n−1]

4.5.2 Putting X(z) in a suitable form

Introduction:
• When applying the above PFE method to X(z) = N(z)/D(z), it is essential that
  - N(z) and D(z) be polynomials in z^-1;
  - degree of D(z) > degree of N(z).
• If either one of the above conditions is not satisfied, further algebraic manipulations must be applied to X(z).
• There are two common types of manipulations:
  - polynomial division;
  - use of the shift property.
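The PFE of Example 4.14 can be cross-checked with `scipy.signal.residuez`, which computes the residues A_k and poles p_k of a rational function expressed in powers of z^-1. A sketch, assuming arbitrary test values a = 0.5 and b = 2 (residuez does not guarantee pole ordering, so we sort by magnitude):

```python
import numpy as np
from scipy.signal import residuez

a, b = 0.5, 2.0  # arbitrary pole locations for illustration
num = [1.0]
den = np.convolve([1.0, -a], [1.0, -b])   # (1 - a z^-1)(1 - b z^-1)

r, p, k = residuez(num, den)              # X(z) = sum_i r[i] / (1 - p[i] z^-1)
order = np.argsort(np.abs(p))
r, p = r[order], p[order]

# Compare with the closed-form residues of Example 4.14:
# A1 = a/(a-b) at pole a, A2 = b/(b-a) at pole b
assert np.allclose(p, [a, b])
assert np.allclose(r, [a/(a - b), b/(b - a)])
```

Note that residuez only gives the algebraic expansion; the choice of causal vs. anticausal term for each pole still comes from the ROC, as in the inversion method above.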
Polynomial division:
• Find Q(z) and R(z) such that

    N(z)/D(z) = Q(z) + R(z)/D(z)                                              (4.49)

where Q(z) is a polynomial in z^-1 (whose coefficients directly give samples of x[n]) and R(z) is a polynomial in z^-1 with degree less than that of D(z).
• Q(z) and R(z) are determined via a division table: divide N(z) by D(z) and stop when the remainder has degree less than that of D(z).   (4.50)
• If we want the largest power of z in N(z) to decrease, we simply express D(z) and N(z) in decreasing powers of z (e.g. D(z) = 1 + 2z^-1 + z^-2).
• If we want the smallest power of z in N(z) to increase, we write down D(z) and N(z) in reverse order (e.g. D(z) = z^-2 + 2z^-1 + 1).

Example 4.15: Let us consider the z-transform

    X(z) = (−5 + 3z^-1 + z^-2) / (3 + 4z^-1 + z^-2)

with the constraint that x[n] is causal. The first step towards finding x[n] is to use long division to make the degree of the numerator smaller than the degree of the denominator. Accordingly, we write down N(z) and D(z) in reverse order when performing the division:

    (z^-2 + 3z^-1 − 5) ÷ (z^-2 + 4z^-1 + 3):  quotient 1,
    remainder (z^-2 + 3z^-1 − 5) − (z^-2 + 4z^-1 + 3) = −z^-1 − 8

so that X(z) rewrites:

    X(z) = 1 − (z^-1 + 8)/(z^-2 + 4z^-1 + 3)                                   (4.51)

The denominator of the second term has two roots, the poles at z = −1/3 and z = −1, hence the factorization:

    X(z) = 1 − (1/3) (z^-1 + 8) / ((1 + (1/3)z^-1)(1 + z^-1))

The PFE of the rational term in the above equation is given by two terms:

    X(z) = 1 − (1/3) [ A1/(1 + (1/3)z^-1) + A2/(1 + z^-1) ]
with

    A1 = (z^-1 + 8)/(1 + z^-1) |_{z=−1/3} = −5/2                               (4.52)
    A2 = (z^-1 + 8)/(1 + (1/3)z^-1) |_{z=−1} = 21/2

so that

    X(z) = 1 + 5/(6(1 + (1/3)z^-1)) − 7/(2(1 + z^-1))                          (4.53)

Now the constraint of causality of x[n] determines the region of convergence of X(z), which must be delimited by circles of radius 1/3 and/or 1. Since the sequence is causal, its ROC must extend outward from the outermost pole, so that the ROC is |z| > 1. The sequence x[n] is then given by:

    x[n] = δ[n] + (5/6)(−1/3)^n u[n] − (7/2)(−1)^n u[n]                        (4.54)

Use of shift property:
• In some cases, a simple multiplication by z^k is sufficient to put X(z) into a suitable format, that is:

    Y(z) = z^k X(z) = N(z)/D(z)                                               (4.55)

where N(z) and D(z) satisfy the previous conditions.
• The PFE method is then applied to Y(z), yielding a DT signal y[n].
• Finally, the shift property is applied to recover x[n]: x[n] = y[n − k].

Example 4.16: Consider

    X(z) = (1 − z^-128)/(1 − z^-2),   |z| > 1

We could use division to work out this example, but this would not be very efficient. A faster approach is to use a combination of linearity and the shift property. First note that X(z) = Y(z) − z^-128 Y(z), where

    Y(z) = 1/(1 − z^-2) = 1/((1 − z^-1)(1 + z^-1))

The inverse z-transform of Y(z) is easily obtained as (please try it)

    y[n] = (1/2)(1 + (−1)^n) u[n]
Therefore,

    x[n] = y[n] − y[n − 128] = (1/2)(1 + (−1)^n)(u[n] − u[n − 128])
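The result of Example 4.16 can be verified by multiplying the closed-form x[n] back by the denominator 1 − z^-2 (i.e. convolving with the sequence {1, 0, −1}), which must reproduce the numerator 1 − z^-128. A quick sketch:

```python
import numpy as np

# x[n] = 0.5*(1 + (-1)^n) * (u[n] - u[n-128]) : ones at even n, 0 <= n < 128
n = np.arange(128)
x = 0.5 * (1 + (-1.0)**n)

# Multiplying X(z) by the denominator 1 - z^-2 must give back 1 - z^-128
prod = np.convolve(x, [1.0, 0.0, -1.0])
expected = np.zeros(130)
expected[0], expected[128] = 1.0, -1.0
assert np.allclose(prod, expected)
```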
4.6 The one-sided ZT

Definition:

    X⁺(z) = Z⁺{x[n]} = Σ_{n=0}^{∞} x[n] z^-n                                   (4.56)

• ROC: |z| > r, for some r ≥ 0.
• Information about x[n] for n < 0 is lost.
• Used to solve LCCDEs with arbitrary initial conditions.
Useful properties:
• Time shift to the right (k > 0):

    x[n − k] ←Z⁺→ z^-k X⁺(z) + x[−k] + x[−k+1] z^-1 + · · · + x[−1] z^-(k−1)   (4.57)

• Time shift to the left (k > 0):

    x[n + k] ←Z⁺→ z^k X⁺(z) − x[0] z^k − x[1] z^{k−1} − · · · − x[k−1] z       (4.58)
Example 4.17:
Consider the LCCDE y[n] = αy[n − 1] + x[n], n≥0 (4.59)
where x[n] = β u[n], α and β are real constants with α ≠ 1, and y[−1] is an arbitrary initial condition. Let us find the solution of this equation, i.e. y[n] for n ≥ 0, using the unilateral z-transform Z⁺.

• Compute Z⁺ of x[n]: x[n] causal ⇒ X⁺(z) = X(z) = β/(1 − z^-1), |z| > 1
c B. Champagne & F. Labeau
Compiled September 13, 2004
• Apply Z⁺ to both sides of (4.59) and solve for Y⁺(z):

    Y⁺(z) = α (z^-1 Y⁺(z) + y[−1]) + X⁺(z)
          = α z^-1 Y⁺(z) + α y[−1] + β/(1 − z^-1)
    ⇒ (1 − α z^-1) Y⁺(z) = α y[−1] + β/(1 − z^-1)
    ⇒ Y⁺(z) = α y[−1]/(1 − α z^-1) + β/((1 − α z^-1)(1 − z^-1))
            = α y[−1]/(1 − α z^-1) + A/(1 − α z^-1) + B/(1 − z^-1)

where

    A = −α β/(1 − α),   B = β/(1 − α)
• To obtain y[n] for n ≥ 0, compute the inverse unilateral ZT. This is equivalent to computing the standard inverse ZT under the assumption of a causal solution, i.e. ROC: |z| > max(1, |α|). Thus, for n ≥ 0,

    y[n] = α y[−1] α^n u[n] + A α^n u[n] + B (1)^n u[n]
         = y[−1] α^{n+1} + β (1 − α^{n+1})/(1 − α),   n ≥ 0
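The closed-form solution of Example 4.17 can be checked against a direct simulation of the recursion. A sketch with arbitrary test values for α, β and y[−1] (any α ≠ 1 works):

```python
import numpy as np

alpha, beta, y_init = 0.8, 2.0, 1.5   # arbitrary test values (alpha != 1)
N = 50

# Direct recursion y[n] = alpha*y[n-1] + beta, n >= 0, with y[-1] = y_init
y = np.empty(N)
prev = y_init
for n in range(N):
    prev = alpha * prev + beta
    y[n] = prev

# Closed form from Example 4.17
n = np.arange(N)
y_closed = y_init * alpha**(n + 1) + beta * (1 - alpha**(n + 1)) / (1 - alpha)
assert np.allclose(y, y_closed)
```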
4.7 Problems
Problem 4.1: Inverse ZT
Knowing that h[n] is causal, determine h[n] from its z-transform H(z):

    H(z) = (2 + 2z^-1) / ((1 + (1/4)z^-1)(1 − (1/2)z^-1))
Problem 4.2: Let a stable system H(z) be described by the polezero plot in ﬁgure 4.6, with the added speciﬁcation that H(z) is equal to 1 at z = 1. 1. Using Matlab if necessary, ﬁnd the impulse response h[n]. 2. From the polezero plot, is this a lowpass, highpass, bandpass or bandstop ﬁlter ? Check your answer by plotting the magnitude response H(ω).
Problem 4.3: Let a causal system have the following set of poles and zeros:
• zeros: 0, (3/2) e^{±jπ/4}, 0.8;
Fig. 4.6 Pole-zero plot for Problem 4.2 (real/imaginary axes; markers at radius 0.8 and at angles π/12, π/6 and 3π/4).
• poles: ±1/2, (1/2) e^{±jπ/6}.
It is also known that H(z)|_{z=1} = 1.
1. Compute H(z) and determine the region of convergence;
2. Compute h[n].
Problem 4.4: Difference Equation
Let a system be specified by the following LCCDE:

    y[n] = x[n] − x[n − 1] + (1/3) y[n − 1],

with initial rest conditions.
1. What is the corresponding transfer function H(z)? Also determine its ROC.
2. Compute the impulse response h[n]
   (a) by inverting the z-transform H(z);
   (b) from the LCCDE.
3. What is the output of this system if
   (a) x[n] = (1/2)^n u[n];
   (b) x[n] = δ[n − 3].
Chapter 5
Z-domain analysis of LTI systems
5.1 The system function
LTI system (recap):

    y[n] = x[n] ∗ h[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]                            (5.1)
    h[n] = H{δ[n]}                                                             (5.2)

Response to an arbitrary exponential:
• Let x[n] = z^n, n ∈ Z. The corresponding output signal is

    H{z^n} = Σ_k h[k] z^{n−k} = (Σ_k h[k] z^-k) z^n = H(z) z^n                 (5.3)

where we recognize H(z) as the ZT of h[n].
• Eigenvector interpretation:
  - x[n] = z^n behaves as an eigenvector of the LTI system H: Hx = λx;
  - the corresponding eigenvalue λ = H(z) provides the system gain.

Definition: Consider an LTI system H with impulse response h[n]. The system function of H, denoted H(z), is the ZT of h[n]:

    H(z) = Σ_{n=−∞}^{∞} h[n] z^-n,   z ∈ R_H                                   (5.4)

where R_H denotes the corresponding ROC.
Remarks:
• Specifying the ROC is essential: two different LTI systems may have the same H(z) but different R_H.
• If H(z) and R_H are known, h[n] can be recovered via the inverse ZT.
• If z = e^{jω} ∈ R_H, then

    H(e^{jω}) = Σ_{n=−∞}^{∞} h[n] e^{−jωn} ≡ H(ω)                              (5.5)

That is, the system function evaluated at z = e^{jω} corresponds to the frequency response at angular frequency ω.

Properties: Let H be an LTI system with system function H(z) and ROC R_H.
• If y[n] denotes the response of H to an arbitrary input x[n], then Y(z) = H(z)X(z).   (5.6)
• LTI system H is causal iff R_H is the exterior of a circle (including ∞).
• LTI system H is stable iff R_H contains the unit circle.

Proof:
• Input-output relation: H LTI ⇒ y = h ∗ x ⇒ Y(z) = H(z)X(z).
• Causality: H causal ⇔ h[n] = 0 for n < 0 ⇔ R_H: r < |z| ≤ ∞.
• Stability: H stable ⇔ Σ_n |h[n]| = Σ_n |h[n] e^{−jωn}| < ∞ ⇔ e^{jω} ∈ R_H, i.e. the ROC contains the unit circle.

5.2 LTI systems described by LCCDE

LCCDE (recap):
• A DT system obeys an LCCDE of order N if

    Σ_{k=0}^{N} ak y[n − k] = Σ_{k=0}^{M} bk x[n − k]                          (5.7)

where a0 ≠ 0 and aN ≠ 0.
• If we further assume initial rest conditions, i.e.

    x[n] = 0 for n < n0  ⟹  y[n] = 0 for n < n0                                (5.8)

the LCCDE corresponds to a unique causal LTI system.

System function: Taking the ZT on both sides of (5.7):

    Σ_{k=0}^{N} ak y[n − k] = Σ_{k=0}^{M} bk x[n − k]
    ⇒ Σ_{k=0}^{N} ak z^-k Y(z) = Σ_{k=0}^{M} bk z^-k X(z)
    ⇒ H(z) = Y(z)/X(z) = (Σ_{k=0}^{M} bk z^-k) / (Σ_{k=0}^{N} ak z^-k)         (5.9)

Rational system: More generally, we say that an LTI system H is rational iff

    H(z) = z^-L B(z)/A(z)                                                      (5.10)

where L is an arbitrary integer and

    A(z) = Σ_{k=0}^{N} ak z^-k,   B(z) = Σ_{k=0}^{M} bk z^-k                   (5.11)

Factored form: If the roots of B(z) and A(z) are known, one can express H(z) as

    H(z) = G z^-L Π_{k=1}^{M} (1 − zk z^-1) / Π_{k=1}^{N} (1 − pk z^-1)        (5.12)

where G is the system gain (∈ R or C), the zk's are the nontrivial zeros and the pk's are the nontrivial poles (i.e. ≠ 0 or ∞). The factor z^-L takes care of zeros/poles at 0 or ∞.

Properties:
• The rational system (5.12) is causal iff its ROC is the exterior of a circle:
  ⇒ no poles at z = ∞ ⇒ L ≥ 0;
  ⇒ ROC: |z| > max_k |pk|.
• The rational system is causal and stable if, in addition to the above, the unit circle is contained in the ROC, that is: |pk| < 1 for all k.
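The rational system function (5.9) can be explored numerically: `scipy.signal.freqz` evaluates B(z)/A(z) on the unit circle. The sketch below uses illustrative coefficients b = [1, −1], a = [1, −1/3] (a stable first order system, not prescribed by this section) and checks freqz against a direct evaluation of the two polynomials at z = e^{jω}:

```python
import numpy as np
from scipy.signal import freqz

# Illustrative coefficients: B(z) = 1 - z^-1, A(z) = 1 - (1/3) z^-1
b = [1.0, -1.0]
a = [1.0, -1.0/3.0]

w, H = freqz(b, a, worN=256)

# Direct evaluation of H(z) = B(z)/A(z) at z = e^{jw}
z = np.exp(1j * w)
H_direct = np.polyval(b[::-1], z**-1) / np.polyval(a[::-1], z**-1)
assert np.allclose(H, H_direct)
```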
Rational system with real coefficients:
• Consider the rational system

    H(z) = z^-L B(z)/A(z) = z^-L (Σ_{k=0}^{M} bk z^-k) / (Σ_{k=0}^{N} ak z^-k)   (5.13)

• In many applications, the coefficients ak and bk are real. This implies

    H*(z) = H(z*)                                                              (5.14)

• Thus, if zk is a zero of H(z), then H(zk*) = (H(zk))* = 0* = 0, which shows that zk* is also a zero of H(z).
• More generally, it can be shown that:
  - if zk is a zero of order l of H(z), so is zk*;
  - if pk is a pole of order l of H(z), so is pk*.
We say that complex poles (or zeros) occur in complex conjugate pairs.
• In the PZ diagram of H(z), this complex conjugate symmetry translates into a mirror image symmetry of the poles and zeros with respect to the real axis. An example of this is illustrated in Figure 5.1.

Fig. 5.1 PZ diagram with complex conjugate symmetry (each pk, zk paired with pk*, zk*).

5.3 Frequency response of rational systems

The formulae:
• Alternative form of H(z) (note K = L + M − N):

    H(z) = G z^-K Π_{k=1}^{M} (z − zk) / Π_{k=1}^{N} (z − pk)                  (5.16)
• Frequency response:

    H(ω) ≡ H(z)|_{z=e^{jω}} = G e^{−jωK} Π_{k=1}^{M} (e^{jω} − zk) / Π_{k=1}^{N} (e^{jω} − pk)   (5.17)

• Define:

    Vk(ω) = |e^{jω} − zk|,   θk(ω) = ∠(e^{jω} − zk)                            (5.18)
    Uk(ω) = |e^{jω} − pk|,   φk(ω) = ∠(e^{jω} − pk)                            (5.19)

• Magnitude response:

    |H(ω)| = |G| V1(ω) · · · VM(ω) / (U1(ω) · · · UN(ω))                       (5.20)
    |H(ω)|_dB = |G|_dB + Σ_{k=1}^{M} Vk(ω)_dB − Σ_{k=1}^{N} Uk(ω)_dB

• Phase response:

    ∠H(ω) = ∠G − ωK + Σ_{k=1}^{M} θk(ω) − Σ_{k=1}^{N} φk(ω)                    (5.21)

• Group delay (differentiating (5.21)):

    τgr(ω) = −d∠H(ω)/dω = K − Σ_{k=1}^{M} dθk(ω)/dω + Σ_{k=1}^{N} dφk(ω)/dω    (5.22)

Geometrical interpretation:
• Consider pole pk and the vector ∆ = e^{jω} − pk joining pk to the point e^{jω} on the unit circle:
  - Uk(ω) = |∆|: length of the vector ∆;
  - φk(ω) = ∠∆: angle between ∆ and the real axis.
• A similar interpretation holds for the terms Vk(ω) and θk(ω) associated with the zeros zk.
Example 5.1: Consider the system with transfer function

    H(z) = 1/(1 − .8 z^-1) = z/(z − .8)                                        (5.23)

In this case, the only zero is at z = 0, while the only pole is at z = .8. So we have

    V1(ω) = |e^{jω}| = 1,   θ1(ω) = ω,
    U1(ω) = |e^{jω} − .8|,  φ1(ω) = ∠(e^{jω} − .8)                             (5.24)

The magnitude of the frequency response at frequency ω is thus given by the inverse of the distance between .8 and e^{jω} in the z-plane, and the phase response is given by ω − φ1(ω). Figure 5.2 illustrates this construction process.

5.4 Analysis of certain basic systems

Introduction: In this section, we study the PZ configuration and the frequency response of basic rational systems. In all cases, we assume that the system coefficients ak and bk are real valued.

Some basic principles:
• For stable and causal systems, the poles are located inside the unit circle; the zeros can be anywhere.
• Poles near the unit circle at p = r e^{jωo} (r < 1) give rise to:
  - a peak in |H(ω)| near ωo;
  - rapid phase variation near ωo.
• Zeros near the unit circle at z = r e^{jωo} give rise to:
  - a deep notch in |H(ω)| near ωo;
  - rapid phase variation near ωo.

5.4.1 First order LTI systems

Description: The system function is given by

    H(z) = G (1 − b z^-1)/(1 − a z^-1)

The poles and zeros are:
- pole: z = a (simple);
- zero: z = b (simple).
Fig. 5.2 Illustration of the geometric construction of the frequency response of the system in (5.23), for ω = π/5 (top) and ω = π/2 (bottom).
Practical requirements:
- causality: ROC: |z| > |a|;
- stability: |a| < 1.

Impulse response (ROC: |z| > |a|):

    h[n] = G (1 − b/a) a^n u[n] + G (b/a) δ[n]                                 (5.25)

Lowpass case: To get a lowpass behavior, one needs a = 1 − ε, where 0 < ε ≪ 1 (typically). Additional attenuation of the high frequencies is possible by proper placement of the zero z = b.

Example 5.2: Lowpass first order system
Two examples of choices for b are shown below; in both cases the ROC is |z| > a.

• Zero at b = 0:   H1(z) = G1/(1 − a z^-1), with G1 = 1 − a so that H1(ω = 0) = 1.
• Zero at b = −1:  H2(z) = G2 (1 + z^-1)/(1 − a z^-1), with G2 = (1 − a)/2 so that H2(ω = 0) = 1.
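The normalizations in Example 5.2 can be verified numerically; the sketch below checks H2 (zero at b = −1), confirming unit DC gain and a null at ω = π:

```python
import numpy as np
from scipy.signal import freqz

a = 0.9  # pole near z = 1 for lowpass behavior

# H2(z) = G2 (1 + z^-1) / (1 - a z^-1), normalized so that H2(0) = 1
G2 = (1 - a) / 2
w, H = freqz([G2, G2], [1.0, -a], worN=[0.0, np.pi/2, np.pi])

assert np.isclose(abs(H[0]), 1.0)    # unit gain at DC
assert np.isclose(abs(H[2]), 0.0)    # zero at b = -1 kills omega = pi
assert abs(H[1]) < abs(H[0])         # attenuation in between
```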
c B. where 0 < ε attenuation of the DC component.78 Chapter 5. one has to locate the pole at a = −1 + ε.3 Frequency response of ﬁrst order system Highpass case: To get a highpass behavior. 10 magnitude (dB) 0 −10 −20 −30 −40 −π π phase (rad) −π/2 0 π/2 π a=.9.3.3: HighPass ﬁrst order system The PZ diagram and transfer function of a highpass ﬁrst order system with a = −. b=−1 0 −π −π 10 group delay −π/2 0 π/2 π 5 0 −π −π/2 0 frequency ω π/2 π Fig.9. 1 To get a high Example 5. Zdomain analysis of LTI systems The frequency responses of the corresponding ﬁrst order lowpass systems are shown in Figure 5. 5. 2004 . Labeau Compiled September 13. one has to locate the zero at or near b = 1. Champagne & F. b−0 a=.9 and b = 1 are shown below.
• Zero at b = 1 (ROC: |z| > |a|):

    H(z) = G (1 − z^-1)/(1 − a z^-1),  with G = (1 + a)/2 so that |H(π)| = 1.

The corresponding frequency response is shown in Figure 5.4.

Fig. 5.4 Frequency response (magnitude and phase) of a first-order highpass filter with a pole at z = −.9 and a zero at z = 1.

5.4.2 Second order systems

Description:
• System function:

    H(z) = G (1 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2)                 (5.26)
• Poles:
  - if a1² > 4a2: two distinct real poles at p1,2 = −a1/2 ± (1/2)√(a1² − 4a2);
  - if a1² = 4a2: a double real pole at p1 = −a1/2;
  - if a1² < 4a2: two distinct complex conjugate poles at p1,2 = −a1/2 ± (j/2)√(4a2 − a1²).
• Practical requirements:
  - causality: ROC: |z| > max{|p1|, |p2|};
  - stability: one can show that |pk| < 1 for all k iff

        |a2| < 1  and  a2 > |a1| − 1                                           (5.27)

Resonator: Place a complex conjugate pair of poles p1 = r e^{jωo}, p2 = r e^{−jωo} = p1* (0 < r < 1), with a double zero at z = 0 and ROC: |z| > r:

    H(z) = G / ((1 − r e^{jωo} z^-1)(1 − r e^{−jωo} z^-1))
         = G / (1 − 2r cos(ωo) z^-1 + r² z^-2)                                 (5.28)

• The frequency response (Figure 5.5) clearly shows peaks around ω = ±ωo.
• For r close to 1 (but < 1), |H(ω)| attains its maximum at ω = ±ωo.
• 3 dB bandwidth: for r close to 1, one can show

    |H(ωo ± ∆ω/2)| = |H(ωo)|/√2,   where ∆ω = 2(1 − r)                         (5.29)

Fig. 5.5 Frequency response of a resonator system (r = .975, ωo = π/4).

Notch filter: A notch filter is an LTI system containing one or more notches in its frequency response, as the result of zeros located on (or close to) the unit circle. For a notch at ω = ωo, place zeros z1 = e^{jωo}, z2 = e^{−jωo} = z1* on the unit circle, together with nearby poles p1 = r e^{jωo}, p2 = r e^{−jωo} = p1* (r close to 1), ROC: |z| > r:

    H(z) = G (1 − e^{jωo} z^-1)(1 − e^{−jωo} z^-1) / ((1 − r e^{jωo} z^-1)(1 − r e^{−jωo} z^-1))
         = G (1 − 2 cos(ωo) z^-1 + z^-2) / (1 − 2r cos(ωo) z^-1 + r² z^-2)

The notches in the frequency response are clearly shown in Figure 5.6.

5.4.3 FIR filters

Description:
• System function:

    H(z) = B(z) = b0 + b1 z^-1 + · · · + bM z^-M = b0 (1 − z1 z^-1) · · · (1 − zM z^-1)   (5.30)
Fig. 5.6 Frequency response of a notch filter (r = .9, ωo = π/4).

• Practical requirement: none — the above system is always causal and stable.
• This is a zeroth order rational system (i.e. A(z) = 1). The M zeros zk can be anywhere in the complex plane; there is a multiple pole of order M at z = 0.
• Impulse response:

    h[n] = bn for 0 ≤ n ≤ M,  and 0 otherwise                                  (5.31)

Moving average system:
• Difference equation:

    y[n] = (1/M) Σ_{k=0}^{M−1} x[n − k]                                        (5.32)

• System function:

    H(z) = (1/M) Σ_{k=0}^{M−1} z^-k = (1/M) (1 − z^-M)/(1 − z^-1)              (5.33)
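The magnitude response of the moving-average system can be confirmed numerically: |H(ω)| must vanish at ω = 2πk/M for k = 1, ..., M−1 (the uncancelled zeros of 1 − z^-M) while the DC gain is 1. A sketch for M = 8:

```python
import numpy as np
from scipy.signal import freqz

M = 8
h = np.ones(M) / M                      # moving-average impulse response

# H(z) vanishes at z = e^{j 2 pi k / M}, k = 1..M-1 (the zero at z = 1
# is cancelled by the pole of 1/(1 - z^-1))
k = np.arange(1, M)
w_null = 2 * np.pi * k / M
_, H = freqz(h, 1, worN=w_null)
assert np.allclose(np.abs(H), 0.0, atol=1e-12)

# ... while the DC gain is exactly 1
_, H0 = freqz(h, 1, worN=[0.0])
assert np.isclose(abs(H0[0]), 1.0)
```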
• PZ analysis: the roots of the numerator satisfy z^M = 1, i.e.

    z = e^{j2πk/M},   k = 0, 1, ..., M − 1                                     (5.34)

Note that there is no pole at z = 1 because of PZ cancellation. Thus:

    H(z) = (1/M) Π_{k=1}^{M−1} (1 − e^{j2πk/M} z^-1)                           (5.35)

• The PZ diagram and frequency response for M = 8 are shown in Figures 5.7 and 5.8, respectively.

Fig. 5.7 Zero/pole diagram for a moving average system (M = 8); ROC: |z| > 0, with a pole of order M − 1 at z = 0.

5.5 More on magnitude response

Preliminaries:
• Consider a stable LTI system with impulse response h[n].
• Stable ⇒ e^{jω} ∈ ROC ⇒

    H(ω) = Σ_n h[n] e^{−jωn} = Σ_n h[n] z^-n |_{z=e^{jω}} = H(z)|_{z=e^{jω}}   (5.36)

• Recall that: h[n] ∈ R ⟺ H*(ω) = H(−ω) ⟺ H*(z) = H(z*)                        (5.37)
Fig. 5.8 Frequency response of a moving average system (M = 8).

Properties: The squared magnitude response of a stable LTI system can be expressed in terms of the system function as follows:

    |H(ω)|² = H(z) H*(1/z*) |_{z=e^{jω}}                                       (5.38)
            = H(z) H(1/z) |_{z=e^{jω}}   if h[n] ∈ R                           (5.39)

Proof: Using (5.36), the squared magnitude response can be expressed as |H(ω)|² = H(ω) H*(ω) = H(z) H*(z)|_{z=e^{jω}}. Note that when z = e^{jω}, we can write z = 1/z*. Hence H*(z) = H*(1/z*)|_{z=e^{jω}}, which proves (5.38). When h[n] ∈ R, we have H*(z) = H(z*) = H(1/z)|_{z=e^{jω}}, which proves (5.39).

Magnitude square function:

    C(z) ≜ H(z) H*(1/z*)
Fig. 5.9 Typical zero/pole plot for a magnitude square function C(z): each zero zk of H(z) is paired with 1/zk*, and each pole pk with 1/pk*.

• Important PZ property:
  - zk = zero of H(z) ⇒ zk and 1/zk* are zeros of C(z);
  - pk = pole of H(z) ⇒ pk and 1/pk* are poles of C(z).
• This is called conjugate reciprocal symmetry (see Figure 5.9).
• Suppose the magnitude response |H(ω)|² is known as a function of ω. We can always find the PZ diagram of C(z); from there, given the number of poles and zeros of H(z), there remain only a finite number of possibilities for H(z).

5.6 Allpass systems

Definition: We say that an LTI system H is allpass (AP) iff its frequency response magnitude satisfies

    |H(ω)| = K,   for all ω ∈ [−π, π]                                          (5.40)

where K is a positive constant. In the sequel, this constant is set to 1.

Properties: Let H(z) be an AP system.
• From the definition of an AP system, |H(ω)|² = H(z) H*(1/z*)|_{z=e^{jω}} = 1. This implies

    H(z) H*(1/z*) = 1   for all z ∈ C                                          (5.41)

or, equivalently,

    H(1/z*) = 1/H*(z)                                                          (5.42)

• From the above relation, it follows that

    zo = zero of order l of H(z) ⟺ 1/zo* = pole of order l of H(z)
    po = pole of order l of H(z) ⟺ 1/po* = zero of order l of H(z)             (5.43)

Trivial AP system: Consider the integer delay system H(ω) = e^{−jωk}. This is an allpass system: |H(ω)| = 1 for all ω ∈ [−π, π].

First order AP system: From the above PZ considerations, a pole at z = a must be paired with a zero at z = 1/a*, leading to the system function

    H_AP(z) = G (z^-1 − a*)/(1 − a z^-1)                                       (5.44)

where |G| = 1 and G may be chosen such that H_AP(z) = 1 at z = 1. The frequency response is shown in Figure 5.10.
Fig. 5.10 Frequency response (magnitude, phase and group delay) of a first order allpass system (a = .9 e^{jπ/4}).

General form of rational AP system:
• Consider the rational system

    H(z) = z^-L B(z)/A(z) = z^-L (Σ_{k=0}^{M} bk z^-k) / (Σ_{k=0}^{N} ak z^-k)   (5.45)

• In order for this system to be AP, we need

    B(z) = z^-N A*(1/z*) = Σ_{k=0}^{N} ak* z^{k−N}                             (5.46)

• The AP system will be causal and stable if, in addition to the above:
  - L ≥ 0;
  - all poles (i.e. zeros of A(z)) are inside the unit circle.

Remark: Consider two LTI systems with frequency responses H1(ω) and H2(ω), respectively.
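The defining property (5.40) of a first order allpass section is easy to verify numerically. A sketch with an arbitrary complex pole a = 0.9 e^{jπ/4} (scipy's freqz accepts complex coefficients):

```python
import numpy as np
from scipy.signal import freqz

a = 0.9 * np.exp(1j * np.pi / 4)   # arbitrary pole inside the unit circle

# First order allpass: H(z) = (z^-1 - conj(a)) / (1 - a z^-1)
b = [-np.conj(a), 1.0]
w, H = freqz(b, [1.0, -a], worN=512)

# |H(w)| = 1 at every frequency
assert np.allclose(np.abs(H), 1.0)
```

This works because, on the unit circle, |e^{-jω} − a*| = |e^{jω} − a| = |1 − a e^{-jω}|.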
Then |H1(ω)| = |H2(ω)| for all ω ∈ [−π, π] iff H2(z) = H1(z) Hap(z) for some allpass system Hap(z).

5.7 Inverse system

Definition: A system H is invertible iff for any two input signals x1 and x2, we have x1 ≠ x2 ⟹ Hx1 ≠ Hx2.

Definition: Let H be an invertible system. Its inverse, denoted HI, is such that for any input signal x in the domain of H, we have

    HI(Hx) = x                                                                 (5.47)

Remarks:
• Invertible means that there is a one-to-one correspondence between the set of possible input signals (domain of H) and the set of corresponding output signals (range).
• When applied to the output signal y = Hx, the inverse system produces the original input:

    x[n] → H{.} → y[n] → HI{.} → x[n]                                          (5.48)

Fundamental property: Let H be an invertible LTI system with system function H(z). The inverse system HI is also LTI, with system function

    HI(z) = 1/H(z)                                                             (5.49)

Remarks:
• Several ROCs are usually possible for HI(z) (corresponding to a causal, mixed, or anticausal impulse response hI[n]).
• Constraint: ROC(H) ∩ ROC(HI) cannot be empty.
• From (5.49), it should be clear that the zeros of H(z) become the poles of HI(z), and vice versa (with the order being preserved).
Proof (the first two parts are optional):
- Linearity: Let y1 = Hx1 and y2 = Hx2. Since H is linear by assumption, H(a1 x1 + a2 x2) = a1 y1 + a2 y2. Therefore, HI(a1 y1 + a2 y2) = a1 x1 + a2 x2 = a1 HI(y1) + a2 HI(y2), which shows that HI is linear.
- Time invariance: Let y = Hx and let Dk denote the shift-by-k operation. Since H is assumed to be time-invariant, H(Dk x) = Dk y. By definition of an inverse system, HI(Dk y) = Dk x = Dk(HI y), which shows that HI is also TI.
- Eq. (5.49): For any x in the domain of H, we have HI(Hx) = x. Since both H and HI are LTI systems, this implies HI(z) H(z) X(z) = X(z), from which (5.49) follows immediately.

Example 5.4: Consider the FIR system with impulse response (0 < a < 1):

    h[n] = δ[n] − a δ[n − 1]

The corresponding system function is H(z) = 1 − a z^-1, |z| > 0. Invoking (5.49), we obtain

    HI(z) = 1/(1 − a z^-1)

Note that there are two possible ROCs for the inverse system HI(z):
• ROC1: |z| > a ⇒ hI[n] = a^n u[n] — causal & stable;
• ROC2: |z| < a ⇒ hI[n] = −a^n u[−n−1] — anticausal & unstable.
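The causal and stable inverse of Example 5.4 can be checked by convolving h[n] with a truncated version of hI[n] = a^n u[n]; apart from the truncation tail, the result is a unit impulse. A sketch with a = 0.5:

```python
import numpy as np

a = 0.5                       # |a| < 1, as in Example 5.4
h = np.array([1.0, -a])       # h[n] = delta[n] - a*delta[n-1]

# Causal & stable inverse: hI[n] = a^n u[n] (truncated for the demo)
N = 60
hI = a ** np.arange(N)

# h * hI should be (approximately) a unit impulse
d = np.convolve(h, hI)
assert np.isclose(d[0], 1.0)
assert np.allclose(d[1:N], 0.0)      # telescoping: a^n - a*a^(n-1) = 0
assert np.isclose(abs(d[N]), a**N)   # only the tiny truncation term remains
```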
5.8 Minimum-phase system

5.8.1 Introduction

It is of practical importance to know when a causal & stable inverse HI exists.

Example 5.5: Inverse system and ROC
Consider the situation described by the pole/zero plot in Figure 5.11 (note: HI(z) = 1/H(z)). There are 3 possible ROCs for the inverse system HI:
- ROC1: |z| < 1/2 ⇒ anticausal & unstable;
- ROC2: 1/2 < |z| < 3/2 ⇒ mixed & stable;
- ROC3: |z| > 3/2 ⇒ causal & unstable.
A causal and stable inverse does not exist!

Fig. 5.11 Example of the pole/zero plots of an LTI system H(z) (pole at 1/2, zero at 3/2) and its inverse HI(z).

From the above example, it is clear that:

    HI causal and stable ⟺ poles of HI(z) inside the unit circle ⟺ zeros of H(z) inside the unit circle.

These considerations lead to the definition of minimum phase systems.

Definition: A causal LTI system H is said to be minimum phase (MP) iff all its poles and zeros are inside the unit circle.

Remarks:
• Poles inside the unit circle ⇒ h[n] can be chosen to be causal and stable.
• Zeros inside the unit circle ⇒ hI[n] can be chosen to be causal and stable, i.e. a causal and stable inverse exists.
• Minimum-phase systems play a very important role in practical applications of digital filters.

5.8.2 MP-AP decomposition

Any rational system function can be decomposed as the product of a minimum-phase and an allpass component, as illustrated in Figure 5.12:
    H(z) = H_min(z) H_ap(z)    (5.50)
[Fig. 5.12: Minimum-phase/allpass decomposition of a rational system: H(z) realized as the cascade H_min(z) followed by H_ap(z).]

Example 5.6:
Consider for instance the transfer function
    H(z) = (1 − ½ z^{−1})(1 − (3/2) z^{−1}) / (1 − z^{−1} + ½ z^{−2}),
whose pole-zero plot was shown in Figure 5.11. The zero at z = 3/2 is not inside the unit circle, so that H(z) is not minimum-phase; in this case we know that there exists no stable and causal inverse. The annoying factor is thus (1 − (3/2) z^{−1}) in the numerator of H(z), and it will have to be included in the allpass component. This means that the allpass component H_ap(z) will have a zero at z = 3/2, and thus a pole at z = 2/3. The allpass component is then
    H_ap(z) = G (1 − (3/2) z^{−1}) / (1 − (2/3) z^{−1}),
while H_min(z) is chosen so as to have H(z) = H_min(z) H_ap(z):
    H_min(z) = H(z) / H_ap(z) = (1/G) (1 − ½ z^{−1})(1 − (2/3) z^{−1}) / (1 − z^{−1} + ½ z^{−2}).
If we further require the allpass component to have unit gain at DC, i.e. H_ap(z = 1) = 1, then we must set G = −2/3 (so that 1/G = −3/2). The corresponding pole-zero plots are shown in Figure 5.13.

5.8.3 Frequency response compensation

In many applications, we need to remove the distortion introduced by a channel on its input via a compensating filter (e.g. channel equalization), as illustrated in Figure 5.14. Suppose H(ω) is not minimum phase, so that no stable and causal inverse exists. Now, consider the MP-AP decomposition of the transfer function:
    H(ω) = H_min(ω) H_ap(ω).
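Example 5.6 and the compensation idea can be checked numerically. The sketch below (Python/NumPy; not part of the original notes, and the gain G = −2/3 is the value that follows from requiring H_ap(1) = 1) evaluates the factors on the unit circle and also verifies that the compensator 1/H_min equalizes the magnitude of H exactly, leaving only a phase residue:

```python
import numpy as np

w = np.linspace(0, 2 * np.pi, 512, endpoint=False)
z = np.exp(1j * w)                                 # points on the unit circle

# Transfer function of Example 5.6 and its MP/AP factors
H    = (1 - 0.5/z) * (1 - 1.5/z) / (1 - 1/z + 0.5/z**2)
G    = -2/3                                        # makes H_ap(1) = 1
Hap  = G * (1 - 1.5/z) / (1 - (2/3)/z)             # allpass: zero at 3/2, pole at 2/3
Hmin = (1/G) * (1 - 0.5/z) * (1 - (2/3)/z) / (1 - 1/z + 0.5/z**2)

print(np.allclose(np.abs(Hap), 1.0))               # -> True: |H_ap(e^jw)| = 1 for all w
print(np.allclose(Hmin * Hap, H))                  # -> True: the factorization reproduces H

# Frequency response compensation: H_c = 1/H_min gives |H_c H| = 1 (exact magnitude
# compensation), but the cascade equals H_ap, so some phase distortion remains.
comp = H / Hmin
print(np.allclose(np.abs(comp), 1.0))              # -> True
print(np.max(np.abs(np.angle(comp))) > 0.1)        # -> True: residual phase distortion
```

Note that the check `H_ap(1) = 1` corresponds to the first grid point w = 0 in this sampling.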
A possible choice for the compensating filter (which is then stable and causal) is the inverse of the minimum-phase component of H(ω):
    H_c(ω) = 1 / H_min(ω).    (5.51)
In this case, the overall frequency response of the cascaded systems in Figure 5.14 is given by
    X̂(ω) = H_c(ω) H(ω) X(ω) = H_ap(ω) X(ω).    (5.52)
Accordingly,
    |X̂(ω)| = |X(ω)|  ⇒ exact magnitude compensation,
    ∠X̂(ω) = ∠X(ω) + ∠H_ap(ω)  ⇒ phase distortion.    (5.53)
Thanks to this decomposition, at least the effect of H(ω) on the input signal's magnitude spectrum can be compensated for: the magnitude spectrum of the compensator output signal x̂[n] is identical to that of the input signal x[n], whereas their phases differ by the phase response of the allpass component H_ap(ω).

[Fig. 5.13: Pole-zero plots for the decomposition into minimum-phase and allpass parts: H_min(z) has zeros at 1/2 and 2/3 inside the unit circle; H_ap(z) has a pole at 2/3 and a zero at 3/2.]
[Fig. 5.14: Frequency response compensation: the distorting system H(ω) maps x[n] to y[n], followed by the compensating system H_c(ω) producing x̂[n].]

5.8.4 Properties of MP systems

Consider a causal MP system with frequency response H_min(ω). For any causal system H(ω) with the same magnitude response (i.e. |H(ω)| = |H_min(ω)|):
(1) group delay of H(ω) ≥ group delay of H_min(ω);
(2) ∑_{n=0}^{M} |h[n]|² ≤ ∑_{n=0}^{M} |h_min[n]|², for all integers M ≥ 0.

According to (1), the minimum-phase system has minimum processing delay among all systems with the same magnitude response. Property (2) states that for the MP system, the energy concentration of the impulse response h_min[n] around n = 0 is maximum (i.e. minimum energy delay).

5.9 Problems

Problem 5.1: Pole-zero plots
For each of the pole-zero plots in Figure 5.15, state whether it can correspond to:
1. a system with a stable inverse;
2. a system with a real impulse response;
3. an FIR system;
4. a Generalized Linear Phase system;
5. an allpass system;
6. a minimum-phase system.
In each case, specify any additional constraint on the ROC in order for the property to hold.

Problem 5.2: Inversion of a Moving Average system
An M-point moving average system is defined by the impulse response
    h[n] = (1/M)(u[n] − u[n − M]).
1. Find the z-transform of h[n] (use the formula for the sum of a geometric series to simplify your expression). What is the region of convergence?
2. Draw a pole-zero plot of H(z).
3. Compute the z-transforms of all possible inverses h_I[n] of h[n]. Are any of these inverses stable? What is the general shape of the corresponding impulse responses h_I[n]?
4. A modified moving average system with forgetting factor is
    g[n] = (1/M) aⁿ (u[n] − u[n − M]),
where a is real and positive, with a < 1. Compute G(z) and draw a pole-zero plot.
5. Find a stable and causal inverse G_I(z) and compute its impulse response g_I[n].
[Fig. 5.15: Pole-zero plots (a)-(d) pertaining to Problem 5.1, each drawn with the unit circle for reference.]
Problem 5.3: Implementation of an approximate non-causal inverse
Let a system be defined by
    H(z) = (1 − b z^{−1}) / (1 − a z^{−1}),   |z| > a,
where b > 1 > a.
1. Is this system stable? Is it causal?
2. Find the z-transforms H_I(z) of all possible inverses for this system. In each case, specify a region of convergence, and state whether the inverse is stable and/or causal.
3. Consider the anticausal inverse h_I[n] above: compute its impulse response and sketch it for a = 0.5 and b = 1.5.
4. You want to implement a delayed and truncated version of this anticausal inverse. Find the number of samples of h_I[n] to keep and the corresponding delay, so as to keep 99.99% of the energy of the impulse response h_I[n].

Problem 5.4: Minimum-phase/allpass decomposition
Let a causal system be defined in the z-domain by the following set of poles and zeros:
• zeros: 2, 0.7 e^{±jπ/8};
• poles: 3/4, 0.3 e^{±jπ/12}.
Furthermore, H(z) is equal to 1 at z = 1.
1. Draw a pole-zero plot for this system. Is it stable?
2. Compute the factorization of H(z) into H(z) = H_min(z) H_ap(z), where H_min(z) is minimum-phase and H_ap(z) is allpass.
3. Draw a pole-zero plot for H_min(z) and H_ap(z).
4. Find the impulse response h[n].

Problem 5.5: Two-sided sequence
Let a stable system have the transfer function
    H(z) = 1 / [(1 − (3/2) z^{−1})(1 − ½ e^{jπ/4} z^{−1})(1 − ½ e^{−jπ/4} z^{−1})].
1. Draw a pole-zero plot for H(z) and specify its ROC.
2. Compute the inverse z-transform h[n].
Chapter 6

The discrete Fourier transform (DFT)

Introduction:
The DTFT has proven to be a valuable tool for the theoretical analysis of signals and systems. However, if one looks at its definition from a computational viewpoint, i.e.
    X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn},   ω ∈ [−π, π],    (6.1)
it becomes clear that it suffers from several drawbacks. Indeed, its numerical evaluation poses the following difficulties:
• the summation over n is infinite;
• the variable ω is continuous.
In many situations of interest, it is either not possible, or not necessary, to implement the infinite summation ∑_{n=−∞}^{∞} in (6.1):
• only the signal samples x[n] from n = 0 to n = N − 1 are available;
• the signal is known to be zero outside this range; or
• the signal is periodic with period N.
In all these cases, we would like to analyze the frequency content of the signal x[n] based only on the finite set of samples x[0], x[1], ..., x[N − 1]. We would also like a frequency-domain representation of these samples in which the frequency variable only takes on a finite set of values, say ω_k for k = 0, 1, ..., N − 1, in order to better match the way a processor will be able to compute the frequency representation.
The discrete Fourier transform (DFT) fulfills these needs. It can be seen as an approximation to the DTFT.
6.1 The DFT and its inverse

Definition: The N-point DFT is a transformation that maps the DT signal samples {x[0], x[1], ..., x[N − 1]} into a periodic sequence X[k], defined by
    X[k] = DFT_N{x[n]} ≜ ∑_{n=0}^{N−1} x[n] e^{−j2πkn/N},   k ∈ Z.    (6.2)

Remarks:
• Only the samples x[0], ..., x[N − 1] are used in the computation.
• The N-point DFT is periodic, with period N: X[k + N] = X[k]. Thus, it is sufficient to specify X[k] for k = 0, 1, ..., N − 1 only.
• The DFT X[k] may be viewed as an approximation to the DTFT X(ω) at the frequency ω_k = 2πk/N.
• The "D" in DFT stands for discrete frequency (i.e. ω_k).
• Other common notations:
    X[k] = ∑_{n=0}^{N−1} x[n] e^{−jω_k n},  where ω_k ≜ 2πk/N,    (6.4)
    X[k] = ∑_{n=0}^{N−1} x[n] W_N^{kn},  where W_N ≜ e^{−j2π/N}.    (6.5)

Example 6.1:
(a) Consider
    x[n] = 1 for n = 0, and x[n] = 0 for n = 1, ..., N − 1.
We have
    X[k] = ∑_{n=0}^{N−1} x[n] e^{−j2πkn/N} = 1, for all k ∈ Z.
(b) Let
    x[n] = aⁿ, for n = 0, 1, ..., N − 1.
We have
    X[k] = ∑_{n=0}^{N−1} aⁿ e^{−j2πkn/N} = ∑_{n=0}^{N−1} ρ_kⁿ,  where ρ_k ≜ a e^{−j2πk/N},
so that
    X[k] = N if ρ_k = 1, and X[k] = (1 − ρ_k^N)/(1 − ρ_k) otherwise.
Note the following special cases of interest:
    a = 1  ⟹  X[k] = N if k = 0, and 0 for k = 1, ..., N − 1;
    a = e^{j2πl/N}  ⟹  X[k] = N if k = l modulo N, and 0 otherwise.

Inverse DFT (IDFT): The N-point IDFT of the samples X[0], ..., X[N − 1] is defined as the periodic sequence
    x̃[n] = IDFT_N{X[k]} ≜ (1/N) ∑_{k=0}^{N−1} X[k] e^{j2πkn/N},   n ∈ Z.    (6.6)

Remarks:
• Only the samples X[0], ..., X[N − 1] are used in the computation.
• The N-point IDFT is periodic, with period N: x̃[n + N] = x̃[n].
• Other common notations:
    x̃[n] = (1/N) ∑_{k=0}^{N−1} X[k] e^{jω_k n},  where ω_k ≜ 2πk/N,    (6.8)
    x̃[n] = (1/N) ∑_{k=0}^{N−1} X[k] W_N^{−kn},  where W_N ≜ e^{−j2π/N}.    (6.9)

IDFT Theorem: If X[k] is the N-point DFT of {x[0], ..., x[N − 1]}, then
    x̃[n] = x[n],   n = 0, 1, ..., N − 1.    (6.10)
• In general, it is not true that x̃[n] = x[n] for all n ∈ Z (more on this later).

© B. Champagne & F. Labeau, Compiled September 28, 2004
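Definition (6.2), Example 6.1(a), and the IDFT theorem can all be checked with a few lines of code. The sketch below (Python/NumPy; an illustration, not part of the original notes) implements the DFT sum directly, compares it to a library FFT, and evaluates the IDFT synthesis sum (6.6) at arbitrary n to exhibit its N-periodicity:

```python
import numpy as np

def dft(x):
    """N-point DFT computed directly from definition (6.2):
       X[k] = sum_{n=0}^{N-1} x[n] e^{-j 2 pi k n / N}."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # W[k, n] = e^{-j 2 pi k n / N}
    return W @ x

# Example 6.1(a): x[n] = 1 at n = 0, zero elsewhere  =>  X[k] = 1 for all k
print(dft([1.0, 0, 0, 0]))                         # -> [1.+0.j 1.+0.j 1.+0.j 1.+0.j]

def idft(X, n):
    """IDFT synthesis sum (6.6) evaluated at an arbitrary integer n."""
    N = len(X)
    k = np.arange(N)
    return np.sum(X * np.exp(2j * np.pi * k * n / N)) / N

N = 8
x = np.random.default_rng(0).standard_normal(N)
X = np.fft.fft(x)                                  # same convention as (6.2)
x_tilde = np.array([idft(X, n) for n in range(N)])
print(np.allclose(x_tilde, x))                     # -> True: IDFT theorem, n = 0..N-1
print(np.isclose(idft(X, -1), x[N - 1]))           # -> True: x~ is N-periodic
```

The last line shows exactly the point of the theorem's caveat: x̃[−1] equals x[N − 1], not a "new" sample of x.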
Remarks:
• The theorem states that x̃[n] = x[n] for n = 0, 1, ..., N − 1 only. Nothing is said about the sample values of x̃[n] outside this range.
• In general, it is not true that x̃[n] = x[n] for all n ∈ Z: the IDFT x̃[n] is periodic with period N, whereas no such requirement is imposed on the original signal x[n].
• Therefore, the values of x[n] for n < 0 and for n ≥ N cannot in general be recovered from the DFT samples X[k]. This is understandable, since these sample values are not used when computing X[k].
• However, there are two important special cases when the complete signal x[n] can be recovered from the DFT samples X[k] (k = 0, 1, ..., N − 1):
    - x[n] is periodic with period N;
    - x[n] is known to be zero for n < 0 and for n ≥ N.

Proof of Theorem:
• First note that:
    (1/N) ∑_{k=0}^{N−1} e^{−j2πkn/N} = 1 if n = 0, ±N, ±2N, ..., and 0 otherwise.    (6.11)
Indeed, if n = lN for some integer l, then all the N terms are equal to one, so that the sum is N. Otherwise, the above sum is a sum of terms of a geometric series, equal to
    (1/N)(1 − e^{−j2πn})/(1 − e^{−j2πn/N}),
whose numerator is equal to 0.
• Starting from the IDFT definition:
    x̃[n] = (1/N) ∑_{k=0}^{N−1} X[k] e^{j2πkn/N}
         = (1/N) ∑_{k=0}^{N−1} ( ∑_{l=0}^{N−1} x[l] e^{−j2πkl/N} ) e^{j2πkn/N}    (6.12)
         = ∑_{l=0}^{N−1} x[l] { (1/N) ∑_{k=0}^{N−1} e^{−j2πk(l−n)/N} }.
For the special case when n ∈ {0, 1, ..., N − 1}, and since l ∈ {0, 1, ..., N − 1}, we have −N < l − n < N. Therefore, according to (6.11), the bracketed term is zero unless l = n. Therefore: x̃[n] = x[n]. □
6.2 Relationship between the DFT and the DTFT

Introduction
The DFT may be viewed as a finite approximation to the DTFT:
    X[k] = ∑_{n=0}^{N−1} x[n] e^{−jω_k n}  ≈  X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}  at frequency ω = ω_k = 2πk/N.
As pointed out earlier, for an arbitrary signal x[n], it should not be possible to recover the DTFT exactly from the DFT. However, it is possible to completely reconstruct the DTFT X(ω) from the N-point DFT X[k] in the following two special cases:
• finite-length signals;
• N-periodic signals.
In these two cases, the DFT is not merely an approximation to the DTFT: the DTFT can be evaluated exactly, at any given frequency ω ∈ [−π, π], if the N-point DFT X[k] is known.

6.2.1 Finite-length signals

Assumption: Suppose x[n] = 0 for n < 0 and for n ≥ N.

Inverse DFT: In this case, the IDFT theorem yields:
    x̃[n] = x[n],   n = 0, 1, ..., N − 1.    (6.14)
For n < 0 and for n ≥ N, we have x[n] = 0 by assumption. Therefore:
    x[n] = x̃[n] if 0 ≤ n < N, and x[n] = 0 otherwise.    (6.15)
Thus, x[n] can be recovered entirely from its N-point DFT X[k].

Relationship between DFT and DTFT: Since x[n] can be recovered entirely from its N-point DFT, it should be clear that, in this case, it is possible to completely reconstruct the DTFT X(ω) from X[k]. First consider the frequency ω_k = 2πk/N:
    X(ω_k) = ∑_{n=−∞}^{∞} x[n] e^{−jω_k n} = ∑_{n=0}^{N−1} x[n] e^{−jω_k n} = X[k].    (6.16)
The general case, i.e. ω arbitrary, is handled by the following theorem.
Theorem: Let X(ω) and X[k] respectively denote the DTFT and the N-point DFT of the signal x[n]. If x[n] = 0 for n < 0 and for n ≥ N, then:
    X(ω) = ∑_{k=0}^{N−1} X[k] P(ω − ω_k),    (6.17)
where
    P(ω) ≜ (1/N) ∑_{n=0}^{N−1} e^{−jωn}.    (6.18)

Proof:
    X(ω) = ∑_{n=0}^{N−1} x[n] e^{−jωn}
         = ∑_{n=0}^{N−1} ( (1/N) ∑_{k=0}^{N−1} X[k] e^{jω_k n} ) e^{−jωn}    (6.19)
         = ∑_{k=0}^{N−1} X[k] { (1/N) ∑_{n=0}^{N−1} e^{−j(ω−ω_k)n} }
         = ∑_{k=0}^{N−1} X[k] P(ω − ω_k). □

Remark: This theorem provides a kind of interpolation formula for evaluating X(ω) in between adjacent values of X(ω_k) = X[k].

Properties of P(ω):
• Periodicity: P(ω + 2π) = P(ω).
• If ω = 2πl (l integer), then e^{−jωn} = e^{−j2πln} = 1, so that P(ω) = 1.
• If ω ≠ 2πl, then
    P(ω) = (1/N)(1 − e^{−jωN})/(1 − e^{−jω}) = (1/N) e^{−jω(N−1)/2} sin(ωN/2)/sin(ω/2).    (6.20)
• Note that at the frequency ω_k = 2πk/N:
    P(ω_k) = 1 for k = 0, and 0 for k = 1, ..., N − 1,    (6.21)
so that (6.17) is consistent with (6.16).
• Figure 6.1 shows the magnitude of P(ω) (for N = 8).
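The interpolation formula (6.17) can be tested directly. The sketch below (Python/NumPy; an illustration, not part of the original notes, with an arbitrarily chosen off-grid frequency) reconstructs the DTFT of a time-limited signal at a frequency that is not of the form 2πk/N, using only the N DFT samples:

```python
import numpy as np

N = 8
x = np.arange(1.0, N + 1)            # a signal that is zero outside 0 <= n < N
X = np.fft.fft(x)                    # X[k] = X(omega_k), omega_k = 2 pi k / N

def P(w):
    """Interpolation kernel (6.18): P(w) = (1/N) sum_{n=0}^{N-1} e^{-j w n}."""
    n = np.arange(N)
    return np.sum(np.exp(-1j * w * n)) / N

def dtft(x, w):
    """Direct DTFT sum for a finite-length signal."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * w * n))

w = 1.234                            # arbitrary frequency, not a multiple of 2 pi / N
wk = 2 * np.pi * np.arange(N) / N
X_interp = np.sum(X * np.array([P(w - wki) for wki in wk]))   # formula (6.17)
print(np.isclose(X_interp, dtft(x, w)))                       # -> True
```

For a time-limited signal the agreement is exact (up to rounding), as the theorem promises; for a signal that is not time-limited to 0 ≤ n < N it would not be.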
[Fig. 6.1: Magnitude spectrum of the frequency interpolation function P(ω) for N = 8.]

Remarks: More generally, suppose that x[n] = 0 for n < 0 and for n ≥ L. As long as the DFT size N is larger than or equal to L, i.e. N ≥ L, the results of this section apply. In particular:
• One can reconstruct x[n] entirely from the DFT samples X[k].
• Also, from the theorem: X[k] = X(ω_k) at ω_k = 2πk/N, k = 0, 1, ..., N − 1.
Increasing the value of N beyond the minimum required value, i.e. N = L, is called zero-padding:
    {x[0], ..., x[L − 1]}  ⇒  {x[0], ..., x[L − 1], 0, ..., 0}.
• The DFT points so obtained give a nicer graph of the underlying DTFT, because the frequency spacing Δω_k = 2π/N is smaller.
• However, no new information about the original signal is introduced by increasing N in this way.

6.2.2 Periodic signals

Assumption: Suppose x[n] is N-periodic, i.e. x[n + N] = x[n].

Inverse DFT: Let x̃[n] denote the IDFT of X[k]. For n = 0, ..., N − 1, the IDFT theorem yields x̃[n] = x[n]. Since both x[n] and x̃[n] are known to be N-periodic, it follows that x̃[n] = x[n] must also be true for n < 0 and for n ≥ N. Therefore:
    x[n] = x̃[n],  for all n ∈ Z.    (6.22)
In particular, one can reconstruct x[n] entirely from the DFT samples X[k].
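The zero-padding remark above is easy to verify: padding only inserts new frequency samples of the same DTFT between the old ones. The sketch below (Python/NumPy; not part of the original notes) doubles the DFT size and checks that every other sample of the padded DFT coincides with the unpadded one:

```python
import numpy as np

L = 4
x = np.array([1.0, 2.0, 3.0, 4.0])        # length-L signal, zero outside 0 <= n < L

X4 = np.fft.fft(x)                         # minimum DFT size N = L
X8 = np.fft.fft(x, n=8)                    # zero-padded to N = 2L: {x[0..3], 0, 0, 0, 0}

# Both are exact samples of the same DTFT X(w); the padded DFT samples it twice
# as densely, so X8[2k] = X4[k].
print(np.allclose(X8[::2], X4))            # -> True
```

The denser grid makes plots of |X(ω)| look smoother, but, as the notes emphasize, it adds no new information about x[n].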
i. one can recover the DTFT X(ω) from the Npoint DFT X[k]. The discrete Fourier Transform (DFT) Relationship between DFT and DTFT: Since the Nperiodic signal x[n] can be recovered completely from its Npoint DFT X[k]. Labeau Compiled September 28. The situation is more complicated here because the DTFT of a periodic signal contains inﬁnite impulses.104 Chapter 6. Remarks: According to the Theorem. and that ∑∞ r=−∞ ∑k=0 f (k + rN) is equivalent to ∑k=−∞ f (k) we ﬁnally have: X(ω) = = 2π ∞ N−1 2π ∑ ∑ X[k + rN] δa (ω − N (k + rN)) N r=−∞ k=0 2π ∞ 2πk ∑ X[k] δa (ω − N ) N k=−∞ Remarks on the Discrete Fourier series: In the special case when x[n] is Nperiodic.24) N−1 ∞ Realizing that X[k] is periodic with period N. x[n + N] = x[n]. Champagne & F. The amplitude of the impulse at frequency ωk = 2πk/N is given by 2πX[k]/N Proof of Theorem (optional): X(ω) = = = = = = n=−∞ ∑ ∑ ∞ ∞ x[n]e− jωn 1 N−1 ∑ X[k]e jωk n e− jωn N k=0 n=−∞ ∞ 1 N−1 ∑ X[k] ∑ e jωk n e− jωn N k=0 n=−∞ 1 N−1 ∑ X[k] DTFT{e jωk n } N k=0 ∞ 1 N−1 X[k] 2π ∑ δa (ω − ωk − 2πr) ∑ N k=0 r=−∞ 2π ∞ N−1 2π ∑ ∑ X[k] δa (ω − N (k + rN)) N r=−∞ k=0 (6.e. Indeed. X(ω) is made up of a periodic train of inﬁnite impulses in the ω domain (i. Then: 2π ∞ (6. the DFT admits a Fourier series interpretation.e. The desired relationship is provided by the following theorem.23) X(ω) = ∑ X[k]δa (ω − ωk ) N k=−∞ where δa (ω) denotes an analog delta function centered at ω = 0. Theorem: Let X(ω) and X[k] respectively denote the DTFT and Npoint DFT of Nperiodic signal x[n]. 2004 . line spectrum). it should also be possible in theory to reconstruct the DTFT X(ω) from X[k]. the IDFT provides an expansion of x[n] as a sum of c B.
Indeed, the IDFT provides an expansion of x[n] as a sum of harmonically related complex exponential signals e^{jω_k n}:
    x[n] = (1/N) ∑_{k=0}^{N−1} X[k] e^{jω_k n},   n ∈ Z.    (6.25)
In some textbooks (e.g. [10]), this expansion is called the discrete Fourier series (DFS). The DFS coefficients X[k] are identical to the DFT:
    X[k] = ∑_{n=0}^{N−1} x[n] e^{−jω_k n}.    (6.26)
In these notes, we treat the DFS as a special case of the DFT, corresponding to the situation when x[n] is N-periodic.

6.3 Signal reconstruction via DTFT sampling

Introduction:
Let X(ω) be the DTFT of the signal x[n], that is:
    X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn},   ω ∈ R.
Consider the sampled values of X(ω) at the uniformly spaced frequencies ω_k = 2πk/N, for k = 0, 1, ..., N − 1. Suppose we compute the IDFT of the samples X(ω_k):
    x̂[n] = IDFT{X(ω_k)} = (1/N) ∑_{k=0}^{N−1} X(ω_k) e^{jω_k n}.    (6.27)
What is the relationship between the original signal x[n] and the reconstructed sequence x̂[n]? Note that x̂[n] is N-periodic, while x[n] may not be. Even for n = 0, ..., N − 1, there is no reason for x̂[n] to be equal to x[n]: the IDFT theorem does not apply, since in general X(ω_k) ≠ DFT_N{x[n]}. The answer to the above question turns out to be very important when using the DFT to compute linear convolutions.

Theorem:
    x̂[n] = IDFT{X(ω_k)} = ∑_{r=−∞}^{∞} x[n − rN].    (6.28)
Proof:
    x̂[n] = (1/N) ∑_{k=0}^{N−1} X(ω_k) e^{jω_k n}
         = (1/N) ∑_{k=0}^{N−1} ( ∑_{l=−∞}^{∞} x[l] e^{−jω_k l} ) e^{jω_k n}    (6.29)
         = ∑_{l=−∞}^{∞} x[l] { (1/N) ∑_{k=0}^{N−1} e^{j2π(n−l)k/N} }.
From (6.11), we note that the bracketed quantity in (6.29) is equal to 1 if n − l = rN, i.e. l = n − rN (r integer), and is equal to 0 otherwise. Therefore,
    x̂[n] = ∑_{r=−∞}^{∞} x[n − rN].    (6.30) □

Interpretation: x̂[n] is an infinite sum of the sequences x[n − rN], r ∈ Z. Each of these sequences x[n − rN] is a shifted version of x[n] by an integer multiple of N. Depending on whether or not these shifted sequences overlap, we distinguish two important cases.
• Time-limited signal: Suppose x[n] = 0 for n < 0 and for n ≥ N. Then, there is no temporal overlap of the sequences x[n − rN]. We can recover x[n] exactly from one period of x̂[n]:
    x[n] = x̂[n] for n = 0, 1, ..., N − 1, and x[n] = 0 otherwise.    (6.31)
This is consistent with the results of Section 6.2.1. Figure 6.2 shows an illustration of this reconstruction by IDFT.
• Non-time-limited signal: Suppose that x[n] ≠ 0 for some n < 0 or n ≥ N. In this case, the sequences x[n − rN] for different values of r will overlap in the time domain, as illustrated in Figure 6.3, and it is not true that x̂[n] = x[n] for all 0 ≤ n ≤ N − 1. We refer to this phenomenon as temporal aliasing.

6.4 Properties of the DFT

Notations: In this section, we assume that the signals x[n] and y[n] are defined over 0 ≤ n ≤ N − 1. Unless explicitly stated, we make no special assumption about the signal values outside this range. We denote the N-point DFTs of x[n] and y[n] by X[k] and Y[k]:
    x[n] ⟷ X[k],   y[n] ⟷ Y[k]   (N-point DFT pairs).
We view X[k] and Y[k] as N-periodic sequences, defined for all k ∈ Z.
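The temporal-aliasing result (6.30) can be reproduced numerically. The sketch below (Python/NumPy; an illustration, not part of the original notes, with an arbitrary length-6 signal) samples the DTFT of a length-6 signal at only N = 4 frequencies, takes the IDFT, and confirms that the result is the folded sum ∑_r x[n − rN]:

```python
import numpy as np

x = np.array([1.0, 2, 3, 4, 5, 6])        # length-6 signal
N = 4                                     # sample the DTFT at only N = 4 frequencies

n = np.arange(len(x))
wk = 2 * np.pi * np.arange(N) / N
# X(omega_k) from the full DTFT sum -- note this is NOT the 4-point DFT of x[0..3]
Xk = np.array([np.sum(x * np.exp(-1j * w * n)) for w in wk])

x_hat = np.fft.ifft(Xk).real              # x^[n] = IDFT{X(omega_k)}, n = 0..N-1

# Theorem (6.30): x^[n] = sum_r x[n - rN]; here x[4], x[5] fold onto x[0], x[1]:
# x^[0] = x[0] + x[4] = 6,  x^[1] = x[1] + x[5] = 8,  x^[2] = 3,  x^[3] = 4
print(x_hat)                              # approximately [6, 8, 3, 4]
```

Since N = 4 is smaller than the signal length, the replicas x[n − 4r] overlap and the first period of x̂[n] no longer matches x[n], which is exactly the temporal aliasing described above.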
[Fig. 6.2: Illustration of the reconstruction of a signal from samples of its DTFT. Top left: the original signal x[n] is time-limited. Top right: the original DTFT X(ω), from which 15 samples are taken. Bottom right: the equivalent impulse spectrum corresponds by IDFT to a 15-periodic sequence x̂[n], shown on the bottom left. Since the original sequence is zero outside 0 ≤ n ≤ 14, there is no overlap between replicas of the original signal in time.]
[Fig. 6.3: Illustration of the reconstruction of a signal from samples of its DTFT. Top left: the original time-limited signal x[n]. Top right: the original DTFT X(ω), from which only 6 samples are taken. Bottom right: the equivalent impulse spectrum corresponds by IDFT to a 6-periodic sequence x̂[n], shown on the bottom left. Since the original sequence is not zero outside 0 ≤ n ≤ 5, there is overlap between replicas of the original signal in time, and thus aliasing.]
Modulo-N operation: Any integer n ∈ Z can be expressed uniquely as n = k + rN, where k ∈ {0, 1, ..., N − 1} and r ∈ Z. We define
    (n)_N ≜ n modulo N ≜ k.
For example: (11)₈ = 3, (−1)₆ = 5.

6.4.1 Time reversal and complex conjugation

Circular time reversal: Given a sequence x[n], 0 ≤ n ≤ N − 1, we define its circular reversal (CR) as follows:
    CR{x[n]} ≜ x[(−n)_N],   0 ≤ n ≤ N − 1.    (6.32)
Note that the sequence defined by x[(n)_N] is an N-periodic sequence made of repetitions of the values of x[n] between 0 and N − 1:
    x[(n)_N] = ∑_{r=−∞}^{∞} ξ[n − rN],  where ξ[n] ≜ x[n](u[n] − u[n − N]).    (6.33)

Example 6.2:
Let x[n] = 6 − n for n = 0, 1, ..., 5:
    n:     0  1  2  3  4  5
    x[n]:  6  5  4  3  2  1
Then, for N = 6, we have CR{x[n]} = x[(−n)₆]:
    n:          0  1  2  3  4  5
    (−n)₆:      0  5  4  3  2  1
    x[(−n)₆]:   6  1  2  3  4  5
This is illustrated in Figure 6.4.
[Fig. 6.4: Illustration of circular time reversal for Example 6.2 (N = 6).]
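Circular reversal is a one-liner with index arithmetic. The sketch below (Python/NumPy; not part of the original notes) implements (6.32) and reproduces the table of Example 6.2:

```python
import numpy as np

def circ_reverse(x):
    """Circular time reversal (6.32): y[n] = x[(-n) mod N], 0 <= n <= N-1."""
    N = len(x)
    return x[(-np.arange(N)) % N]

x = np.array([6, 5, 4, 3, 2, 1])          # x[n] = 6 - n, n = 0..5 (Example 6.2)
print(circ_reverse(x))                    # -> [6 1 2 3 4 5]
```

As the interpretation below notes, x[0] stays put while x[k] and x[N − k] swap places for k = 1, ..., N − 1.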
Interpretation: Circular reversal can be seen as an operation on the set of signal samples x[0], ..., x[N − 1]:
• x[0] is left unchanged;
• for k = 1 to N − 1, the samples x[k] and x[N − k] are exchanged.
One can also see the circular reversal as an operation consisting of:
• periodizing the samples of x[n], 0 ≤ n ≤ N − 1, with period N;
• time-reversing the periodized sequence;
• keeping only the samples between 0 and N − 1.
Figure 6.5 gives a graphical illustration of this process.
[Fig. 6.5: Graphical illustration of circular reversal, with N = 15. From top to bottom: x[n]; the periodized sequence x[(n)₁₅]; its time reversal x[(−n)₁₅]; and x[(−n)₁₅] limited to 0 ≤ n ≤ 14.]
Property:
    x[(−n)_N] ⟷ X[−k],    (6.35)
    x*[n] ⟷ X*[−k],    (6.36)
    x*[(−n)_N] ⟷ X*[k]    (6.37)
(N-point DFT pairs).

Remarks:
• Since X[k] is N-periodic, we note that X[−k] = X[(−k)_N]. For example, if N = 6, then X[−1] = X[5] in (6.35).
• Property (6.36) leads to the following conclusion for real-valued signals:
    x[n] = x*[n]  ⟺  X[k] = X*[−k],
or equivalently, |X[k]| = |X[−k]| and ∠X[k] = −∠X[−k].
• Thus, for real-valued signals:
    - X[N − k] = X*[k] for 1 ≤ k ≤ N − 1;
    - X[0] is real;
    - if N is even, X[N/2] is real.
Figure 6.6 illustrates this with a graphical example.

6.4.2 Duality

Property:
    X[n] ⟷ N x[(−k)_N]   (N-point DFT pair).    (6.39)

Proof: First note that, since (−k)_N = −k + rN for some integer r, we have
    e^{−j2πkn/N} = e^{j2πn(−k)_N/N}.    (6.40)
Using this identity and recalling the definition of the IDFT:
    ∑_{n=0}^{N−1} X[n] e^{−j2πnk/N} = ∑_{n=0}^{N−1} X[n] e^{j2πn(−k)_N/N}
        = N × IDFT_N{X[n]} evaluated at the index (−k)_N
        = N x[(−k)_N]. □
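The symmetry properties for real-valued signals stated above are easy to confirm. The sketch below (Python/NumPy; not part of the original notes, using an arbitrary random real sequence) checks the conjugate symmetry X[N − k] = X*[k] and the realness of X[0] and X[N/2]:

```python
import numpy as np

x = np.random.default_rng(3).standard_normal(10)   # a real-valued sequence, N = 10
X = np.fft.fft(x)
N = len(x)

k = np.arange(1, N)
print(np.allclose(X[N - k], np.conj(X[k])))        # -> True: X[N-k] = X*[k]
print(abs(X[0].imag) < 1e-9)                       # -> True: X[0] is real
print(abs(X[N // 2].imag) < 1e-9)                  # -> True: N even, X[N/2] is real
```

These symmetries are what allow real-input FFT routines to store only about half of the DFT samples.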
[Fig. 6.6: Illustration of the symmetry properties of the DFT of a real-valued sequence (N = 10). From top to bottom: x[n], Re(X[k]), Im(X[k]).]
6.4.3 Linearity

Property:
    a x[n] + b y[n] ⟷ a X[k] + b Y[k]   (N-point DFT pair).    (6.41)

6.4.4 Even and odd decomposition

Conjugate-symmetric components: The even and odd conjugate-symmetric components of a finite sequence x[n], 0 ≤ n ≤ N − 1, are defined as:
    x_{e,N}[n] ≜ ½ (x[n] + x*[(−n)_N]),    (6.42)
    x_{o,N}[n] ≜ ½ (x[n] − x*[(−n)_N]).    (6.43)
In the case of an N-periodic sequence, defined for all n ∈ Z, the modulo-N operation can be omitted. Note that, e.g., x_{e,N}[N − n] = x_{e,N}[n] for n = 1, ..., N − 1, whereas x_{e,N}[0] is real. Accordingly, in the frequency domain we define
    X_e[k] ≜ ½ (X[k] + X*[−k]),    (6.46)
    X_o[k] ≜ ½ (X[k] − X*[−k]).    (6.47)

Property (N-point DFT pairs):
    Re{x[n]} ⟷ X_e[k],    (6.48)
    j Im{x[n]} ⟷ X_o[k],    (6.49)
    x_{e,N}[n] ⟷ Re{X[k]},   x_{o,N}[n] ⟷ j Im{X[k]}.    (6.50)

6.4.5 Circular shift

Definition: Given a sequence x[n] defined over the interval 0 ≤ n ≤ N − 1, we define its circular shift by k samples as follows:
    CS_k{x[n]} ≜ x[(n − k)_N],   0 ≤ n ≤ N − 1.

Example 6.3:
Let x[n] = 6 − n for n = 0, 1, ..., 5:
    n:     0  1  2  3  4  5
    x[n]:  6  5  4  3  2  1
For N = 6, we have CS_k{x[n]} = x[(n − k)₆]. In particular, for k = 2:
    n:            0  1  2  3  4  5
    (n − 2)₆:     4  5  0  1  2  3
    x[(n − 2)₆]:  2  1  6  5  4  3
This is illustrated in Figure 6.7.
[Fig. 6.7: Illustration of circular shift for Example 6.3 (N = 6, k = 2).]

Interpretation: Circular shift can be seen as an operation on the set of signal samples x[n] (n ∈ {0, ..., N − 1}) in which:
• the signal samples x[n] are shifted as in a conventional shift;
• any signal sample leaving the interval 0 ≤ n ≤ N − 1 from one end re-enters by the other end.
Figure 6.8 gives a graphical illustration of this process. It is clearly seen from this figure that a circular shift by k samples amounts to taking the last k samples in the window of time from 0 to N − 1 and placing them at the beginning of the window, shifting the rest of the samples to the right.
Alternatively, circular shift may be interpreted as follows:
• periodizing the samples of x[n], 0 ≤ n ≤ N − 1, with period N;
• delaying the periodized sequence by k samples;
• keeping only the samples between 0 and N − 1.

Circular shift property:
    x[(n − m)_N] ⟷ e^{−j2πmk/N} X[k]   (N-point DFT pair).

Proof: To simplify the notations, introduce ω_k = 2πk/N. Then
    DFT_N{x[(n − m)_N]} = ∑_{n=0}^{N−1} x[(n − m)_N] e^{−jω_k n} = e^{−jω_k m} ∑_{n=−m}^{N−1−m} x[(n)_N] e^{−jω_k n}.
[Fig. 6.8: Graphical illustration of circular shift, with N = 15 and k = 2. From top to bottom: x[n]; the periodized sequence x[(n)₁₅]; the delayed sequence x[(n − 2)₁₅]; and x[(n − 2)₁₅] limited to 0 ≤ n ≤ 14.]
Observe that the expression to the right of ∑_{n=−m}^{N−1−m} is N-periodic in the variable n. Thus, we may replace that summation by ∑_{n=0}^{N−1} without affecting the result (in both cases, we sum over one complete period):
    DFT_N{x[(n − m)_N]} = e^{−jω_k m} ∑_{n=0}^{N−1} x[(n)_N] e^{−jω_k n}
        = e^{−jω_k m} ∑_{n=0}^{N−1} x[n] e^{−jω_k n} = e^{−jω_k m} X[k]. □

Frequency shift property:
    e^{j2πmn/N} x[n] ⟷ X[k − m]   (N-point DFT pair).
Remark: Since the DFT X[k] is already periodic, the modulo-N operation is not needed here, that is: X[(k − m)_N] = X[k − m].

6.4.6 Circular convolution

Definition: Let x[n] and y[n] be two sequences defined over the interval 0 ≤ n ≤ N − 1. The N-point circular convolution of x[n] and y[n] is given by
    x[n] ⊛ y[n] ≜ ∑_{m=0}^{N−1} x[m] y[(n − m)_N],   0 ≤ n ≤ N − 1.    (6.51)

Remarks: This is conceptually similar to the linear convolution of Chapter 2 (i.e. ∑_{m=−∞}^{∞} x[m] y[n − m]), except for two important differences:
• the sum is finite, running from 0 to N − 1;
• a circular shift is used instead of a linear shift. The use of (n − m)_N ensures that the argument of y[·] remains in the range 0, 1, ..., N − 1.

Circular convolution property:
    x[n] ⊛ y[n] ⟷ X[k] Y[k]   (N-point DFT pair).    (6.52)

Multiplication property:
    x[n] y[n] ⟷ (1/N) X[k] ⊛ Y[k]   (N-point DFT pair).    (6.53)

6.4.7 Other properties

Plancherel's relation:
    ∑_{n=0}^{N−1} x[n] y*[n] = (1/N) ∑_{k=0}^{N−1} X[k] Y*[k].    (6.54)
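Three of the DFT properties above, circular shift, circular convolution, and Plancherel's relation, can be checked in a few lines. The sketch below (Python/NumPy; not part of the original notes, using arbitrary random sequences) compares each time-domain definition against its frequency-domain counterpart:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x, y = rng.standard_normal(N), rng.standard_normal(N)
X, Y = np.fft.fft(x), np.fft.fft(y)
k = np.arange(N)

# Circular shift property: DFT{x[(n - m)_N]} = e^{-j 2 pi m k / N} X[k]
m = 2
shifted = np.roll(x, m)                                  # implements x[(n - m) mod N]
print(np.allclose(np.fft.fft(shifted),
                  np.exp(-2j * np.pi * m * k / N) * X))  # -> True

# Circular convolution property (6.52): x (*) y  <->  X[k] Y[k]
yc = np.array([np.sum(x * y[(n - k) % N]) for n in range(N)])   # definition (6.51)
print(np.allclose(yc, np.fft.ifft(X * Y).real))                 # -> True

# Plancherel's relation (6.54)
print(np.isclose(np.sum(x * np.conj(y)), np.sum(X * np.conj(Y)) / N))   # -> True
```

Note that `np.roll` is precisely the circular shift CS_m, samples pushed out of one end of the window re-enter at the other.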
Parseval's relation:
    ∑_{n=0}^{N−1} |x[n]|² = (1/N) ∑_{k=0}^{N−1} |X[k]|².    (6.55)

Remarks:
• Define the signal vectors x = [x[0], ..., x[N − 1]] and y = [y[0], ..., y[N − 1]]. Then, the LHS summation in (6.54) may be interpreted as the inner product between the signal vectors x and y in the vector space C^N. Similarly, the RHS of (6.54) may be interpreted, up to the scaling factor 1/N, as the inner product between the DFT vectors X = [X[0], ..., X[N − 1]] and Y = [Y[0], ..., Y[N − 1]].
• Identity (6.54) is therefore equivalent to the statement that the DFT operation preserves the inner product between time-domain vectors.
• Eq. (6.55) is a special case of (6.54), with y[n] = x[n]. It allows the computation of the energy of the signal samples x[n] (n = 0, ..., N − 1) directly from the DFT samples X[k].

6.5 Relation between linear and circular convolutions

Introduction: The circular convolution admits a fast (i.e. very low complexity) realization via the so-called fast Fourier transform (FFT) algorithms. If the linear and circular convolutions were equivalent, then it would be possible to evaluate the linear convolution efficiently as well via the FFT. The problem, then, is the following: under what conditions are the linear and circular convolutions equivalent? In this section, we investigate the general relationship between the linear and circular convolutions. In particular, we show that both types of convolution are equivalent if:
• the signals of interest have finite length; and
• the DFT size is sufficiently large.

Linear convolution:
Time-domain expression:
    y_l[n] = x₁[n] * x₂[n] = ∑_{k=−∞}^{∞} x₁[k] x₂[n − k],   n ∈ Z.    (6.56)
Frequency-domain representation via the DTFT:
    Y_l(ω) = X₁(ω) X₂(ω),   ω ∈ [0, 2π].    (6.57)
Circular convolution:
Time-domain expression:
    y_c[n] = x₁[n] ⊛ x₂[n] = ∑_{k=0}^{N−1} x₁[k] x₂[(n − k)_N],   0 ≤ n ≤ N − 1.    (6.58)
Frequency-domain representation via the N-point DFT:
    Y_c[k] = X₁[k] X₂[k],   k ∈ {0, 1, ..., N − 1}.    (6.59)

Finite-length assumption: We say that x[n] is time-limited (TL) to 0 ≤ n < N iff x[n] = 0 for n < 0 and for n ≥ N.

Observation: We say that the circular convolution (6.58) and the linear convolution (6.56) are equivalent if
    y_l[n] = y_c[n] if 0 ≤ n < N, and y_l[n] = 0 otherwise.    (6.60)
Clearly, this can only be true in general if the signals x₁[n] and x₂[n] both have finite length.

Condition for equivalence: Suppose that x₁[n] and x₂[n] are TL to 0 ≤ n < N₁ and 0 ≤ n < N₂, respectively. Then, the linear convolution y_l[n] in (6.56) is TL to 0 ≤ n < N₃, where N₃ = N₁ + N₂ − 1. Based on our previous discussion, a necessary condition for the equivalence of y_l[n] and y_c[n] is that N ≥ N₁ + N₂ − 1. We show below that this is also a sufficient condition.

Example 6.4:
Consider the TL signals
    x₁[n] = {1, 1, 1},   x₂[n] = {1, 1/2, 1/2, 1/2},
where N₁ = 3 and N₂ = 4. A simple calculation yields
    y_l[n] = {1, 3/2, 2, 3/2, 1, 1/2},
with N₃ = N₁ + N₂ − 1 = 6. This is illustrated in Figure 6.9.
Fig. 6.9 Illustration of the sequences used in Example 6.4. Top: x1[n] and x2[n]; middle: the shifted sequences x2[n−k] used to form yl[0], yl[1], ..., yl[5]; bottom: yl[n], their linear convolution.
Linear convolution: For the TL signals under consideration,

yl[n] = ∑_{k=−∞}^{∞} x1[k] x2[n−k] = ∑_{k=0}^{n} x1[k] x2[n−k],   0 ≤ n ≤ N−1   (6.61)

Circular convolution:

yc[n] = ∑_{k=0}^{N−1} x1[k] x2[(n−k)_N]
      = ∑_{k=0}^{n} x1[k] x2[n−k] + ∑_{k=n+1}^{N−1} x1[k] x2[N+n−k]   (6.62)

If N ≥ N2 + N1 − 1, it can be verified that the product x1[k] x2[N+n−k] in the second summation on the RHS of (6.62) is always zero. Under this condition, the circular time reversal (CTR) operation has no effect on the convolution, and it thus follows that yl[n] = yc[n] for 0 ≤ n ≤ N−1. On the contrary, when N < N1 + N2 − 1, the CTR will affect the result of the convolution due to overlap between x1[k] and x2[N+n−k].

Conclusion: the linear and circular convolutions are equivalent iff

N ≥ N1 + N2 − 1   (6.63)

Example 6.5: Consider the TL signals

x1[n] = {1, 1, 1},   x2[n] = {1, 1/2, 1/2, 1/2}

where N1 = 3 and N2 = 4. First consider the circular convolution yc = x1 ⊛ x2 with N = N1 + N2 − 1 = 6, as illustrated in Figure 6.10. The result of this computation is

yc[n] = {1, 3/2, 2, 3/2, 1, 1/2}

which is identical to the result of the linear convolution in Example 6.4. Now consider the circular convolution yc = x1 ⊛ x2 with N = 4, as illustrated in Figure 6.11. The result of this computation is

yc[n] = {2, 2, 2, 3/2}

which differs from the linear convolution of Example 6.4.
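Both cases of Example 6.5 can be checked by evaluating the definition (6.58) directly, with the circularly folded index (n−k)_N implemented via the modulo operator. This is a minimal sketch, not an efficient implementation:

```python
import numpy as np

# Circular convolution (6.58) computed from its definition, using the
# modulo-N index (n - k)_N. With N = 6 >= N1 + N2 - 1 it reproduces the
# linear convolution; with N = 4 it does not.
def circular_convolution(x1, x2, N):
    x1 = np.concatenate([x1, np.zeros(N - len(x1))])   # zero-pad to N
    x2 = np.concatenate([x2, np.zeros(N - len(x2))])
    return np.array([sum(x1[k] * x2[(n - k) % N] for k in range(N))
                     for n in range(N)])

x1 = [1.0, 1.0, 1.0]
x2 = [1.0, 0.5, 0.5, 0.5]

yc6 = circular_convolution(x1, x2, 6)   # N = 6: equivalent to yl[n]
yc4 = circular_convolution(x1, x2, 4)   # N = 4: time aliasing occurs

assert np.allclose(yc6, [1.0, 1.5, 2.0, 1.5, 1.0, 0.5])
assert np.allclose(yc4, [2.0, 2.0, 2.0, 1.5])
```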
Fig. 6.10 Illustration of the sequences used in Example 6.5. Top: x1[k] and x2[k]; middle: the circularly time-reversed and shifted sequences x2[(n−k)_6]; bottom: yc[n], their circular convolution with N = 6.
Fig. 6.11 Illustration of the sequences used in Example 6.5. Top: x1[k] and x2[k]; middle: the circularly time-reversed and shifted sequences x2[(n−k)_4]; bottom: yc[n], their circular convolution with N = 4.
Relationship between yc[n] and yl[n]: Assume that N ≥ max{N1, N2}. Then the DFT of each of the two sequences x1[n] and x2[n] consists of samples of the corresponding DTFT:

N ≥ N1 ⟹ X1[k] = X1(ωk),   N ≥ N2 ⟹ X2[k] = X2(ωk),   where ωk = 2πk/N

The DFT of the circular convolution is just the product of the DFTs:

Yc[k] = X1[k] X2[k] = X1(ωk) X2(ωk) = Yl(ωk)   (6.64)

This shows that Yc[k] is also made of uniformly spaced samples of Yl(ω), the DTFT of the linear convolution yl[n]. Thus, the circular convolution yc[n] can be computed as the N-point IDFT of these frequency samples:

yc[n] = IDFT_N{Yc[k]} = IDFT_N{Yl(ωk)}

Making use of (6.28), we have

yc[n] = ∑_{r=−∞}^{∞} yl[n − rN],   0 ≤ n ≤ N−1   (6.65)

so that the circular convolution is obtained by an N-periodic repetition of the linear convolution yl[n], followed by a windowing that limits the nonzero samples to the instants 0 to N−1. To get yc[n] = yl[n] for 0 ≤ n ≤ N−1, we must avoid temporal aliasing. We need:

length of DFT ≥ length of yl[n],   i.e.   N ≥ N3 = N1 + N2 − 1   (6.66)

which is consistent with our earlier development. The following examples illustrate the above concepts.

Example 6.6: The application of (6.65) for N < N1 + N2 − 1 is illustrated in Figure 6.12. In this case, N = 11 and N3 = N1 + N2 − 1 = 15, so that there is time aliasing. The linear convolution yl[n] is illustrated (top) together with two shifted versions, by +N and −N samples. These signals are added together to yield the circular convolution yc[n] (bottom) according to (6.65). Note that, out of the N = 11 samples of the circular convolution yc[n] within the interval 0 ≤ n ≤ 10, only the first N3 − N samples are affected by aliasing (as shown by the white dots), whereas the other samples are common to yl[n] and yc[n].

Example 6.7: To convince ourselves of the validity of (6.65), consider again the sequences x1[n] = {1, 1, 1} and x2[n] = {1, 1/2, 1/2, 1/2} where N1 = 3 and N2 = 4. From Example 6.4, we know that

yl[n] = {1, 3/2, 2, 3/2, 1, 1/2}
Fig. 6.12 Circular convolution seen as linear convolution plus time aliasing: yl[n] (top), its shifted versions yl[n+N] and yl[n−N], and the resulting yc[n] (bottom).
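The time-aliasing relation (6.65) can be verified numerically for the signals of Example 6.7. The sketch below (using NumPy's FFT as the DFT, a library choice of this example) folds the linear convolution modulo N and compares it against the DFT route:

```python
import numpy as np

# Check of (6.65): for N >= max(N1, N2), the circular convolution equals
# the N-periodic repetition yc[n] = sum_r yl[n - rN], 0 <= n <= N-1.
x1 = np.array([1.0, 1.0, 1.0])
x2 = np.array([1.0, 0.5, 0.5, 0.5])
yl = np.convolve(x1, x2)                 # length N3 = 6

N = 4                                    # N < N3: aliasing expected
yc = np.zeros(N)
for n in range(len(yl)):                 # fold yl modulo N
    yc[n % N] += yl[n]

# Same result via the DFT route: IDFT_N{X1[k] X2[k]}
yc_dft = np.real(np.fft.ifft(np.fft.fft(x1, N) * np.fft.fft(x2, N)))

assert np.allclose(yc, [2.0, 2.0, 2.0, 1.5])
assert np.allclose(yc, yc_dft)
```

Only the first N3 − N = 2 samples differ from yl[n]; the remaining samples are untouched by the aliasing, in agreement with Example 6.6.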
while from Example 6.5, the circular convolution for N = 4 yields

yc[n] = {2, 2, 2, 3/2}

The application of (6.65) is illustrated in Figure 6.13. It can be seen that the sample values of the circular convolution yc[n], as computed from (6.65), are identical to those obtained in Example 6.5.

Fig. 6.13 Illustration of the circular convolution as a time-aliased version of the linear convolution (see Example 6.7).

6.5.1 Linear convolution via DFT

We can summarize the way one can compute a linear convolution via the DFT by the following steps:
• Suppose that x1[n] is TL to 0 ≤ n < N1 and x2[n] is TL to 0 ≤ n < N2.
• Select a DFT size N ≥ N1 + N2 − 1 (usually, N = 2^K).
• For i = 1, 2, compute

Xi[k] = DFT_N{xi[n]},   0 ≤ k ≤ N−1   (6.67)

• Compute:

x1[n] ∗ x2[n] = { IDFT_N{X1[k] X2[k]}, 0 ≤ n ≤ N−1; 0, otherwise }   (6.68)

Remarks: In practice, the DFT is computed via an FFT algorithm (more on this later). For large N, say N ≥ 30, the computation of a linear convolution via the FFT is more efficient than a direct time-domain realization. The accompanying disadvantage is that it involves a processing delay, because the data must be buffered.
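The steps above can be sketched as a small function. This is an illustrative implementation using NumPy's FFT (the power-of-two rounding follows the "usually N = 2^K" remark; production code would normally use a library routine such as SciPy's `fftconvolve`):

```python
import numpy as np

# Fast linear convolution via the (I)DFT, per Section 6.5.1.
def fft_convolve(x1, x2):
    N3 = len(x1) + len(x2) - 1           # required length N1 + N2 - 1
    N = 1 << (N3 - 1).bit_length()       # next power of two, N >= N3
    X1 = np.fft.fft(x1, N)               # zero-padded N-point DFTs (6.67)
    X2 = np.fft.fft(x2, N)
    y = np.fft.ifft(X1 * X2)[:N3]        # IDFT and keep first N3 samples (6.68)
    return np.real(y)                    # real inputs -> real output

x1 = [1.0, 1.0, 1.0]
x2 = [1.0, 0.5, 0.5, 0.5]
assert np.allclose(fft_convolve(x1, x2), np.convolve(x1, x2))
```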
Example 6.8: Consider the sequences

x1[n] = {1, 1, 1},   x2[n] = {1, 1/2, 1/2, 1/2}

where N1 = 3 and N2 = 4. The DFT size must satisfy N ≥ N1 + N2 − 1 = 6 for the circular and linear convolutions to be equivalent. Let us first consider the case N = 6. The 6-point DFTs of the zero-padded sequences x1[n] and x2[n] are:

X1[k] = DFT_6{1, 1, 1, 0, 0, 0} = {3, 1 − 1.7321j, 0, 1, 0, 1 + 1.7321j}
X2[k] = DFT_6{1, 0.5, 0.5, 0.5, 0, 0} = {2.5, 0.5 − 0.866j, 1, 0.5, 1, 0.5 + 0.866j}

The product of the DFTs yields

Yc[k] = X1[k] X2[k] = {7.5, −1 − 1.7321j, 0, 0.5, 0, −1 + 1.7321j}

Taking the 6-point IDFT, we obtain

yc[n] = IDFT_6{Yc[k]} = {1, 1.5, 2, 1.5, 1, 0.5}

which is identical to yl[n] as previously computed in Example 6.4. The student is invited to try this approach in the case N = 4.

6.6 Problems

Problem 6.1: DFT of a Periodic Sequence
Let x[n] be a periodic sequence with period N. Let X[k] denote its N-point DFT and let Y[k] represent its 2N-point DFT. Find the relationship between X[k] and Y[k].

Problem 6.2: DFT of a Complex Exponential
Find the N-point DFT of x[n] = e^{j2πpn/N}, where p is an integer.
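As a numerical hint for Problem 6.2 (this sketch states the answer for integer p in 0 ≤ p < N, which the reader should still derive analytically), the N-point DFT of a complex exponential at bin frequency 2πp/N concentrates all its energy in bin k = p:

```python
import numpy as np

# DFT of x[n] = e^{j 2 pi p n / N}: N at bin k = p, zero elsewhere.
N, p = 16, 3
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * p * n / N)

X = np.fft.fft(x)
expected = np.zeros(N, dtype=complex)
expected[p] = N

assert np.allclose(X, expected, atol=1e-9)
```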
Chapter 7

Digital processing of analog signals

As a motivation for discrete-time signal processing, we stated in the introduction of this course that DTSP is a convenient and powerful approach for the processing of real-world analog signals. In this chapter, we will further study the theoretical and practical conditions that enable digital processing of analog signals.

Structural overview: Digital processing of analog signals consists of three main components, as illustrated in Figure 7.1:

• The A/D converter transforms the analog input signal xa(t) into a digital signal xd[n]. This is done in two steps:
  - uniform sampling
  - quantization
• The digital system performs the desired operations on the digital signal xd[n] and produces a corresponding output yd[n], also in digital form.
• The D/A converter transforms the digital output yd[n] into an analog signal ya(t) suitable for interfacing with the outside world.

Fig. 7.1 The three main stages of digital processing of analog signals: xa(t) → A/D → xd[n] → digital system → yd[n] → D/A → ya(t).
In the rest of this chapter, we will study the details of each of these building blocks.

7.1 Uniform sampling and the sampling theorem

Uniform (or periodic) sampling refers to the process in which an analog, or continuous-time (CT), signal xa(t), t ∈ R, is mapped into a discrete-time signal x[n], n ∈ Z, according to the following rule:

xa(t) → x[n] ≜ xa(nTs),   n ∈ Z   (7.1)

where Ts is called the sampling period. Based on the sampling period Ts, the sampling frequency can be defined as

Fs ≜ 1/Ts   (7.2)
Ωs ≜ 2πFs = 2π/Ts   (7.3)

Figure 7.2 gives a graphical illustration of the process.

Fig. 7.2 Illustration of uniform sampling. Top: the analog signal xa(t), where the sampling points t = nTs, n ∈ Z, are shown by arrows; bottom: the corresponding discrete-time sequence. Notice the difference in scale between the x-axes of the two plots.

We shall also refer to uniform sampling as ideal analog-to-discrete-time (A/DT) conversion. We say ideal because the amplitude of x[n] is not quantized, i.e. it may take any possible value within the set R. Ideal A/DT conversion is represented in block diagram form in Figure 7.3. In practice, uniform sampling is implemented with analog-to-digital (A/D) converters. These are non-ideal devices: only a finite set of possible values is allowed for the signal amplitudes x[n] (see Section 7.3).

It should be clear from the above presentation that, for an arbitrary continuous-time signal xa(t), information will be lost through the uniform sampling process xa(t) → x[n] = xa(nTs). This is illustrated by the following example.
Example 7.1: Information Loss in Sampling
Consider the analog signals

xa1(t) = cos(2πF1 t),   F1 = 100 Hz
xa2(t) = cos(2πF2 t),   F2 = 300 Hz

Uniform sampling at the rate Fs = 200 Hz yields

x1[n] = xa1(nTs) = cos(2πF1 n/Fs) = cos(πn)
x2[n] = xa2(nTs) = cos(2πF2 n/Fs) = cos(3πn)

Clearly, x1[n] = x2[n]. This shows that uniform sampling is a non-invertible, many-to-one mapping.

This example shows that it is not possible in general to recover the original analog signal xa(t) for all t ∈ R from the set of samples x[n], n ∈ Z. However, if we further assume that xa(t) is band-limited and select Fs to be sufficiently large, it is possible to recover xa(t) from its samples x[n]. This is the essence of the sampling theorem.

Fig. 7.3 Block diagram representation of ideal A/DT conversion: xa(t) → A/DT (Ts) → x[n].

7.1.1 Frequency domain representation of uniform sampling

In this section, we first investigate the relationship between the Fourier transform (FT) of the analog signal xa(t) and the DTFT of the discrete-time signal x[n] = xa(nTs). The sampling theorem will follow naturally from this relationship.

FT of analog signals: For an analog signal xa(t), the Fourier transform, denoted here as Xa(Ω), Ω ∈ R, is defined by

Xa(Ω) ≜ ∫_{−∞}^{∞} xa(t) e^{−jΩt} dt   (7.4)

The corresponding inverse Fourier transform relationship is given by

xa(t) = (1/2π) ∫_{−∞}^{∞} Xa(Ω) e^{jΩt} dΩ   (7.5)

We assume that the student is familiar with the FT and its properties (if not, Signals and Systems should be reviewed).
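Example 7.1 is easy to reproduce numerically; the snippet below (an illustrative sketch) confirms that the 100 Hz and 300 Hz cosines produce exactly the same sample sequence at Fs = 200 Hz:

```python
import numpy as np

# Example 7.1 in numbers: sampling 100 Hz and 300 Hz cosines at
# Fs = 200 Hz yields identical sequences (aliasing).
Fs = 200.0                                # sampling frequency (Hz)
n = np.arange(20)                         # a few sample indices
x1 = np.cos(2 * np.pi * 100.0 * n / Fs)   # cos(pi n)
x2 = np.cos(2 * np.pi * 300.0 * n / Fs)   # cos(3 pi n) = cos(pi n)

assert np.allclose(x1, x2)
assert np.allclose(x1, (-1.0) ** n)       # both reduce to (-1)^n
```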
A sampling model: It is convenient to represent uniform sampling as shown in Figure 7.4, i.e. multiplication of xa(t) by an impulse train s(t), followed by an impulse-to-sequence conversion. In this model:

• s(t) is an impulse train, i.e.

s(t) = ∑_{n=−∞}^{∞} δa(t − nTs)   (7.6)

• xs(t) is a pulse-amplitude modulated (PAM) waveform:

xs(t) = xa(t) s(t)   (7.7)
      = ∑_{n=−∞}^{∞} xa(nTs) δa(t − nTs)   (7.8)
      = ∑_{n=−∞}^{∞} x[n] δa(t − nTs)   (7.9)

• The sample value x[n] is encoded in the amplitude of the pulse δa(t − nTs). The information contents of xs(t) and x[n] are identical, i.e. one can be reconstructed from the other and vice versa.

Fig. 7.4 Uniform sampling model: xa(t) multiplied by s(t) gives xs(t), followed by impulse-to-sequence conversion to x[n].

FT/DTFT relationship: Let X(ω) = ∑_{n=−∞}^{∞} x[n] e^{−jωn} denote the DTFT of the samples x[n] = xa(nTs). The following equalities hold:

X(ω)|_{ω=ΩTs} = Xs(Ω) = (1/Ts) ∑_{k=−∞}^{∞} Xa(Ω − kΩs)   (7.10)

Proof: The first equality is proved as follows:

Xs(Ω) = ∫_{−∞}^{∞} xs(t) e^{−jΩt} dt
      = ∫_{−∞}^{∞} { ∑_{n=−∞}^{∞} x[n] δa(t − nTs) } e^{−jΩt} dt
      = ∑_{n=−∞}^{∞} x[n] ∫_{−∞}^{∞} δa(t − nTs) e^{−jΩt} dt
      = ∑_{n=−∞}^{∞} x[n] e^{−jΩnTs}
      = X(ω) at ω = ΩTs   (7.11)
Now consider the second equality. From ECSE 304, we recall that

S(Ω) = FT{s(t)} = FT{ ∑_{n=−∞}^{∞} δa(t − nTs) } = (2π/Ts) ∑_{k=−∞}^{∞} δa(Ω − kΩs)   (7.12)

Finally, since xs(t) = xa(t) s(t), it follows from the multiplicative property of the FT that

Xs(Ω) = (1/2π) Xa(Ω) ∗ S(Ω)
      = (1/2π) ∫_{−∞}^{∞} Xa(Ω − Ω′) (2π/Ts) ∑_{k=−∞}^{∞} δa(Ω′ − kΩs) dΩ′
      = (1/Ts) ∑_{k=−∞}^{∞} Xa(Ω − kΩs)   (7.13)

Remarks: The LHS of (7.10) is the DTFT of x[n] evaluated at

ω = ΩTs = 2π F/Fs   (7.14)

We shall refer to ω as the normalized frequency. The middle term of (7.10) is the FT of the amplitude-modulated pulse train xs(t) = ∑_n x[n] δa(t − nTs). The RHS represents an infinite sum of scaled copies of Xa(Ω), shifted by integer multiples of Ωs.

Interpretation: Suppose xa(t) is a band-limited (BL) signal, i.e. Xa(Ω) = 0 for |Ω| ≥ ΩN, where ΩN represents a maximum frequency.

• Case 1: suppose Ωs ≥ 2ΩN (see Figure 7.5). Note that for different values of k, the spectral images Xa(Ω − kΩs) do not overlap. Consequently, Xa(Ω) can be recovered from X(ω):

Xa(Ω) = { Ts X(ΩTs), |Ω| ≤ Ωs/2; 0, otherwise }   (7.15)

In this case, the fact that Xa(Ω) can be recovered from X(ω) actually implies that xa(t) can be recovered from x[n]: this will be stated in the sampling theorem.

• Case 2: suppose Ωs < 2ΩN (see Figure 7.6). In this case, the different images Xa(Ω − kΩs) do overlap. As a result, it is not possible to recover Xa(Ω) from X(ω); the distortion resulting from spectral overlap is called aliasing.

The critical frequency 2ΩN (i.e. twice the maximum frequency) is called the Nyquist rate for uniform sampling.
Fig. 7.5 Illustration of sampling in the frequency domain when Ωs > 2ΩN: Xc(Ω), S(Ω), Xs(Ω) and X(ω), for Ts = 0.016706, Ωs = 376.1106.

7.1.2 The sampling theorem

The following theorem is one of the cornerstones of discrete-time signal processing.

Sampling Theorem: Suppose that xa(t) is band-limited to |Ω| < ΩN. Provided the sampling frequency satisfies

Ωs = 2π/Ts ≥ 2ΩN   (7.16)

then xa(t) can be recovered from its samples x[n] = xa(nTs) via

xa(t) = ∑_{n=−∞}^{∞} x[n] hr(t − nTs),   any t ∈ R   (7.17)

where

hr(t) ≜ sin(πt/Ts)/(πt/Ts) = sinc(t/Ts)   (7.18)

Proof: From our interpretation of the FT/DTFT relationship (7.10), we have seen under Case 1 that when Ωs ≥ 2ΩN, it is possible to recover Xa(Ω) from the DTFT X(ω), or equivalently Xs(Ω), by means of (7.15). This operation simply amounts to keeping the low-pass portion of Xs(Ω).
Fig. 7.6 Illustration of sampling in the frequency domain when Ωs < 2ΩN: Xc(Ω), S(Ω), Xs(Ω) and X(ω), for Ts = 0.026429, Ωs = 237.7421.

This can be done with an ideal continuous-time low-pass filter with gain Ts, defined as (see dashed line in Figure 7.5):

Hr(Ω) = { Ts, |Ω| ≤ Ωs/2; 0, |Ω| > Ωs/2 }

The corresponding impulse response is found by taking the inverse continuous-time Fourier transform of Hr(Ω):

hr(t) = (1/2π) ∫_{−∞}^{∞} Hr(Ω) e^{jΩt} dΩ
      = (Ts/2π) ∫_{−Ωs/2}^{Ωs/2} e^{jΩt} dΩ
      = Ts (e^{jΩs t/2} − e^{−jΩs t/2}) / (2πjt)
      = sin(πt/Ts) / (πt/Ts)

Passing xs(t) through this impulse response hr(t) yields the desired signal xa(t), as the continuous-time
convolution:

xa(t) = xs(t) ∗ hr(t)
      = ∫_{−∞}^{∞} { ∑_{n=−∞}^{∞} xa(nTs) δa(τ − nTs) } · sin(π(t − τ)/Ts)/(π(t − τ)/Ts) dτ
      = ∑_{n=−∞}^{∞} xa(nTs) ∫_{−∞}^{∞} δa(τ − nTs) · sin(π(t − τ)/Ts)/(π(t − τ)/Ts) dτ
      = ∑_{n=−∞}^{∞} xa(nTs) · sin(π(t − nTs)/Ts)/(π(t − nTs)/Ts)

the last equality coming from the fact that ∫_{−∞}^{∞} δa(t − t0) f(t) dt = f(t0).

Interpretation: The function hr(t) in (7.18) is called the reconstruction filter, and (7.17) is often called the ideal band-limited signal reconstruction formula. Equivalently, it may be viewed as an ideal form of discrete-to-continuous (D/C) conversion. Equation (7.17) provides an interpolation formula:

xa(t) = sum of scaled and shifted copies of sinc(t/Ts)   (7.19)

The general shape of hr(t) is shown in Figure 7.7. In particular, note that

hr(mTs) = δ[m] = { 1, m = 0; 0, otherwise }   (7.20)

Thus, in the special case t = mTs, (7.17) yields

xa(mTs) = ∑_{n=−∞}^{∞} x[n] hr((m − n)Ts) = x[m]   (7.21)

which is consistent with the definition of the samples x[m]. The reconstruction process is illustrated on the right of Figure 7.7.

7.2 Discrete-time processing of continuous-time signals

Now that we have seen under which conditions an analog signal can be converted to discrete time without loss of information, we will develop in this section the conditions for the discrete-time implementation of a system to be equivalent to its continuous-time implementation. Figure 7.8 shows the structure that we will consider for discrete-time processing of analog signals. This is an ideal form of processing in which the amplitudes of the DT signals x[n] and y[n] are not constrained; in a practical digital signal processing system, these amplitudes would be quantized (see Section 7.3). The purpose of this section is to answer the following question: under what conditions is the DT structure of Figure 7.8 equivalent to an analog LTI system?

Fig. 7.7 Band-limited interpolation. Left: the general shape of the reconstruction filter hr(t) in time (Ts = 1 unit). Right: the actual interpolation by superposition of scaled and shifted versions of hr(t).

Fig. 7.8 Setting for discrete-time processing of continuous-time signals: xa(t) → A/DT (Ts) → x[n] → H(ω) → y[n] → DT/A (Ts) → ya(t).
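The interpolation formula (7.17) can be tried numerically. The sketch below truncates the infinite sum to a finite window of samples (an approximation introduced here for illustration; the theorem itself requires all samples), and rebuilds a well-oversampled cosine between its sampling instants:

```python
import numpy as np

# Band-limited reconstruction (7.17), truncated to a finite sample window.
# A 10 Hz cosine sampled at Fs = 100 Hz (well above the 20 Hz Nyquist rate).
Fs, Ts = 100.0, 0.01
n = np.arange(-1000, 1001)             # finite approximation of n in Z
x = np.cos(2 * np.pi * 10.0 * n * Ts)  # samples x[n] = xa(n Ts)

def reconstruct(t):
    # xa(t) ~= sum_n x[n] sinc((t - n Ts)/Ts); np.sinc(u) = sin(pi u)/(pi u)
    return np.sum(x * np.sinc((t - n * Ts) / Ts))

t0 = 0.0042                            # a point between sampling instants
assert abs(reconstruct(t0) - np.cos(2 * np.pi * 10.0 * t0)) < 1e-2
```

At the sampling instants themselves the formula returns the samples exactly, consistent with (7.21).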
However. Champagne & F.nonideal D/C conversion These issues are given further considerations in the following sections. Ts k=−∞ (7. If it is desired to send this signal over a telephone line.30) 0 otherwise Remarks: Note that Ha (Ω) is set to zero for Ω ≥ Ωs /2 since the FT of the input is zero over that range.26) reduces to Ya (Ω) = H(ΩTs )Xa (Ω) which corresponds to an LTI analog system.input signal xa (t) not perfectly bandlimited . Labeau Compiled September 28. Theorem 1 (DT processing of analog signals) Provided that analog signal xa (t) is bandlimited to Ω < ΩN and that Ωs ≥ 2ΩN .e. The bandpass ﬁlter to be used is then speciﬁed as: 1 600π ≤ Ω ≤ 6800π Ha (Ω) = 0 otherwise c B. with cutoff frequency Ωs /2. it has to be bandlimited between 300 Hz and 3400 Hz.25) 0 otherwise Overall Inputoutput relationship: Combining (7. many factors limit the applicability of this result: . discretetime processing of x[n] = xa (nTs ) with H(ω) is equivalent to analog processing of xa (t) with H(ΩTs ) if Ω < Ωs /2 Ha (Ω) = (7. (7. (7. Xa (Ω) = 0 for Ω ≥ Ωs /2 .28) (7. Digital processing of analog signals where Hr (Ω) is the frequency response of an ideal continuoustime lowpass ﬁlter.2: Consider a continuoustime speech signal limited to 8000 Hz.9 are equivalent under the given conditions. (7. Ts Ω < Ωs /2 Hr (Ω) = (7.24) into a single equation.26) Recall that for an analog LTI system. Example 7.26) does not reduce to that form because of the spectral images Xa (Ω − kΩs ) for k = 0. In this case.nonideal C/D conversion . 2004 for all k = 0.136 Chapter 7.22)–(7. In practice.27) In general. then the products Hr (Ω)Xa (Ω − kΩs ) = 0. we obtain Ya (Ω) = Hr (Ω)H(ΩTs ) 1 ∞ ∑ Xa (Ω − kΩs ).29) . the Fourier Transforms of the input xa (t) and corresponding output ya (t) are related through Ya (Ω) = G(Ω)Xa (Ω). The Theorem states that the two systems shown in Figure 7. if we further assume that xa (t) is bandlimited to Ω < ΩN and that Ωs ≥ 2ΩN . i. i.e. (7.
7. sampling must take place above the Nyquist rate. Labeau Compiled September 28. moreover. If Ωs = 32000π is chosen. The AAF is necessary. the speciﬁcations outside [−π.e. 0 otherwise (7. and such ﬁlters usually introduce phasedistortion near the cutoff frequency. the direct analog implementation of Haa (Ω) with sharp cutoffs is costly. but is also useful in general even if the signal to be sampled is already bandlimited: it removes highfrequency noise which would be aliased in the band of interest by the sampling process. π] follow by periodicity of H(ω).7.31) . The AAF avoids spectral aliasing during the sampling process. Its speciﬁcations are given by: Haa (Ω) = and are illustrated in Figure 7. or Ωs ≥ 32000π.2 Antialiasing ﬁlter (AAF) Principle of AAF: The AAF is an analog ﬁlter. Champagne & F.2. with Fs ≥ 16 kHz. i. used prior to C/D conversion. c B. 2004 1 Ω < Ωs /2 . π].10.2 Discretetime processing of continuoustime signals 137 DT LTI system xa (t ) x[n] y[n] ya (t ) A/DT Ts H( ) DT/A Ts = analog LTI system xa (t ) Ha ( ) ya (t ) Fig. to remove highfrequency content of xa (t) when the latter is not BL to ±Ωs /2. but has some drawbacks: if the signal extends over the Ωs /2 limit. the discretetime ﬁlter which will implement the same bandpass ﬁltering as Ha (Ω) is given by: H(ω) = 1 600π/16000 ≤ ω ≤ 6800π/16000 0 otherwise in [−π.9 Equivalence between Analog and DiscreteTime processing. 7. If this ﬁlter is to be implemented in discretetime without loss of information. useful signal information may be lost.
138 Chapter 7. Digital processing of analog signals + DD :.
Fig. 7.10 Anti-aliasing filter template.

Implementation issues: AAFs are always present in DSP systems that interface with the analog world. Several approaches can be used to circumvent the difficulties associated with the sharp cutoff requirements of the AAF filter Haa(Ω) in (7.31). Two such approaches are described below and illustrated in Figure 7.11:

• Approach #1:
  - Fix ΩN = maximum frequency of interest (e.g. ΩN corresponding to 20 kHz).
  - Set Ωs = (1 + r) 2ΩN, where 0 < r < 1 (e.g. r = 0.1 ⇒ Fs = 44 kHz).
  - Use a non-ideal Haa(Ω) with a transition band between ΩN and (1 + r)ΩN, i.e. a smooth transition band of width rΩN is available.

• Approach #2: M-times oversampling:
  - Significantly oversample xa(t) (e.g. Ωs = 8 × 2ΩN).
  - Use a cheap analog Haa(Ω) with a very large transition band.
  - Complete the anti-aliasing filtering in the DT domain with a sharp low-pass DT filter Haa(ω), and then reduce the sampling rate.

The second approach is superior in terms of flexibility and cost. It is generally used in commercial audio CD players. We will investigate it in further detail when we study multirate systems in a later chapter.

7.3 A/D conversion

Introduction: Up to now, we have considered an idealized form of uniform sampling, also called ideal analog-to-discrete-time (A/DT) conversion (see Figure 7.12), in which the sampling takes place instantaneously and the signal amplitude x[n] can take any arbitrary real value, i.e. x[n] ∈ R, so that x[n] is a discrete-time signal. In practice, A/DT conversion cannot be implemented exactly due to hardware limitations; it is used mainly as a mathematical model in the study of DSP. Practical DSP systems use non-ideal analog-to-digital (A/D) converters instead. These are characterized by:
- a finite number of permissible output levels for x[n] (i.e. x[n] belongs to a finite set, so that x[n] is a digital signal);
- a finite response time (the sampling is not quite instantaneous);
- other types of imperfections, like timing jitter and A/D non-linearities.
Fig. 7.11 Illustration of practical AAF implementations: (a) excess sampling by factor 1 + r; (b) oversampling by factor M.

Fig. 7.12 Ideal A/DT conversion: analog signal xa(t) → A/DT (Ts) → discrete-time signal x[n] = xa(nTs).

7.3.1 Basic steps in A/D conversion

Analog-to-digital (A/D) conversion may be viewed as a two-step process, as illustrated in Figure 7.13:
• The sample-and-hold device (S&H) performs uniform sampling of its input at times nTs. During that time, its output voltage, denoted x̃(t), remains constant until the next sampling instant.
• The quantizer approximates x[n] with a value x̂[n] selected from a finite set of possible representation levels.

The operation of each process is further developed below.

Sample-and-hold (S&H): The S&H performs uniform sampling of its input at times nTs and maintains a constant output voltage until the next sampling instant. Figure 7.14 illustrates the input and output signals of the sample-and-hold device. Thus, the S&H output x̃(t) is a PAM waveform, ideally given by

x̃(t) = ∑_{n=−∞}^{∞} x[n] p(t − nTs)   (7.32)

where

x[n] = xa(nTs)   (7.33)

p(t) = { 1, 0 ≤ t ≤ Ts; 0, otherwise }   (7.34)

Practical S&H devices suffer from several imperfections:
- The sampling is not instantaneous: x[n] ≈ xa(nTs).
- The output voltage does not remain constant but slowly decays within a sampling period of duration Ts.

Fig. 7.13 Practical A/D conversion: xa(t) → sample & hold (Ts) → x[n] → quantizer → x̂[n].

Fig. 7.14 Illustration of a sample-and-hold operation on a continuous-time signal: input xa(t) and output x̃(t).
Quantizer: The quantizer is a memoryless device, represented mathematically by a function Q(·), also called the quantizer characteristic:

x̂[n] = Q(x[n])   (7.35)

In practice, the function Q selects, among a finite set of K permissible levels, the one that best represents x[n]:

Q: x[n] ∈ R → x̂[n] ∈ A = {x̂k}, k = 1, ..., K   (7.36)

where the x̂k (k = 1, ..., K) represent the permissible levels and A denotes the set of all such levels. Since the output levels of the quantizer are usually represented in binary form, the concept of binary coding is implicit within the function Q(·) (more on this in a later chapter).

Uniform quantizer: The most commonly used form of quantizer is the uniform quantizer, in which the representation levels are uniformly spaced within an interval [−Xfs, Xfs], where Xfs (often specified as a positive voltage) is known as the full-scale level of the quantizer. Specifically, the representation levels are constrained to be

x̂k = −Xfs + (k − 1)∆,   k = 1, ..., K   (7.37)

In practice, B + 1 bits are used for the representation of signal levels (B bits for magnitude + 1 bit for sign), so that the total number of levels is K = 2^{B+1}. Remarks:
• ∆, called the quantization step size, is the constant distance between representation levels. For the uniform quantizer, we have

∆ = 2Xfs/K = 2^{−B} Xfs   (7.38)

• The minimum value representable by the quantizer is x̂min = x̂1 = −Xfs.
• The maximum value representable by the quantizer is x̂max = x̂K = Xfs − ∆.
• The dynamic range of the uniform quantizer is defined as the interval

DR = [x̂1 − ∆/2, x̂K + ∆/2]   (7.39)

Figure 7.15 illustrates the input-output relationship of a 3-bit uniform quantizer; in this case, K = 8 different levels exist.

For an input signal x[n] to the quantizer, the quantization error is defined as

e[n] ≜ x̂[n] − x[n]   (7.40)

While several choices are possible for the non-linear quantizer characteristic Q(·) in (7.35), a common approach is to round the input to the closest level:

Q(x[n]) = { x̂1, if x[n] < x̂1 − ∆/2;  x̂k, if x̂k − ∆/2 ≤ x[n] < x̂k + ∆/2, k = 1, ..., K;  x̂K, if x[n] ≥ x̂K + ∆/2 }   (7.41)

In this case, the quantization error e[n] satisfies:

x[n] ∈ DR ⇒ |e[n]| ≤ ∆/2
x[n] ∉ DR ⇒ |e[n]| > ∆/2 (clipping)   (7.42)
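A rounding uniform quantizer as described above can be sketched in a few lines. This is an illustrative implementation following the level placement x̂k = −Xfs + (k−1)∆ and step ∆ = 2^{−B} Xfs; real converters differ in details such as the treatment of the extreme codes:

```python
import numpy as np

# (B+1)-bit uniform rounding quantizer with full-scale level Xfs:
# levels x_k = -Xfs + (k-1)*Delta, Delta = 2^{-B} * Xfs, K = 2^{B+1} levels;
# inputs outside the dynamic range are clipped to the extreme levels.
def uniform_quantize(x, B, Xfs):
    K = 2 ** (B + 1)                  # number of levels
    delta = 2 * Xfs / K               # = 2^{-B} * Xfs
    k = np.round((x + Xfs) / delta)   # nearest level index (rounding)
    k = np.clip(k, 0, K - 1)          # clip outside the dynamic range
    return -Xfs + k * delta

x = np.array([0.0, 0.26, -1.3, 0.9999])
xq = uniform_quantize(x, B=2, Xfs=1.0)   # 3-bit quantizer, Delta = 0.25

assert np.allclose(xq, [0.0, 0.25, -1.0, 0.75])   # note x_max = Xfs - Delta
```

For in-range inputs the error magnitude stays below ∆/2, as stated in (7.42); the samples −1.3 and 0.9999 above are clipped to the extreme levels.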
Fig. 7.15 3-bit uniform quantizer input-output characteristic, with representation levels from 100 = x̂1 to 011 = x̂K over the range ±Xfs. The representation levels are labelled with their two's complement binary representation.

7.3.2 Statistical model of quantization errors

In the study of DSP systems, it is important to know how quantization errors will affect the system output. Since it is difficult to handle such errors deterministically, a statistical model is usually assumed. A basic statistical model is derived below for the uniform quantizer.

Model structure: From the definition of the quantization error,

e[n] ≜ x̂[n] − x[n]   ⇒   x̂[n] = x[n] + e[n]   (7.43)

one can derive an equivalent additive model for the quantization noise, as illustrated in Figure 7.16.

Fig. 7.16 Additive noise model of a quantizer: x[n] → Q(·) → x̂[n] is replaced by x̂[n] = x[n] + e[n].

The non-linear device Q is replaced by a linear operation, and the error e[n] is modelled as a white noise sequence (see below).

White noise assumption: The following assumptions are made about the input sequence x[n] and the corresponding quantization error sequence e[n]:
- e[n] and x[n] are uncorrelated, i.e.

E{x[n] e[m]} = 0 for any n, m ∈ Z   (7.44)

where E{·} denotes expectation.
- e[n] is a white noise sequence, i.e.

E{e[n] e[m]} = 0 if m ≠ n   (7.45)

- e[n] is uniformly distributed within ±∆/2, which implies

mean: E{e[n]} = 0   (7.46)

variance: σe² ≜ E{e[n]²} = ∫_{−∆/2}^{∆/2} e² (1/∆) de = ∆²/12   (7.47)
These assumptions are valid provided the signal is sufficiently "busy", i.e.:
- σx ≜ RMS value of x[n] ≫ ∆;
- x[n] changes rapidly.

SQNR computation: The signal-to-quantization-noise ratio (SQNR) is a good characterization of the effect of the quantizer on the input signal. It is defined as the ratio between the signal power and the quantization noise power at the output of the quantizer, in dB:

SQNR = 10 log10(σx²/σe²)

where σx and σe represent the RMS values of the signal and noise, respectively. With the above assumption on the noise, we have

σe² = ∆²/12 = 2^{−2B} Xfs²/12

where Xfs is the full-scale level of the quantizer, as illustrated in Figure 7.15, so that

SQNR = 20 log10(σx/Xfs) + 10 log10 12 + 20 B log10 2

or finally

SQNR = 6.02 B + 10.8 − 20 log10(Xfs/σx)   (dB)   (7.48)

The above SQNR expression states that for each added bit, the SQNR increases by about 6 dB. Moreover, it includes a penalty term for the cases where the quantizer range is not matched to the input dynamic range. As an example, if the dynamic range is chosen too big (Xfs ≫ σx), this term adds a large negative contribution to the SQNR, because the available quantizer representation levels are used poorly (only a few levels around 0 are used). This formula does not take into account the complementary problem (quantizer range too small, resulting in clipping of useful data), because the underlying noise model is based on a no-clipping assumption.
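Formula (7.48) can be verified empirically for a "busy" input. The sketch below quantizes Gaussian noise with RMS value well inside the quantizer range (the specific B, Xfs and σx values are arbitrary choices for this illustration) and compares the measured SQNR with the formula:

```python
import numpy as np

# Empirical check of SQNR = 6.02*B + 10.8 - 20*log10(Xfs/sigma_x) dB.
rng = np.random.default_rng(1)
B, Xfs = 10, 1.0
sigma_x = 0.1                                    # Xfs/sigma_x = 10
x = rng.standard_normal(200_000) * sigma_x       # busy, no-clipping input

delta = 2.0 ** (-B) * Xfs                        # step size (7.38)
xq = np.clip(np.round(x / delta) * delta, -Xfs, Xfs - delta)
e = xq - x                                       # quantization error

sqnr_measured = 10 * np.log10(np.mean(x ** 2) / np.mean(e ** 2))
sqnr_formula = 6.02 * B + 10.8 - 20 * np.log10(Xfs / sigma_x)

assert abs(sqnr_measured - sqnr_formula) < 0.5   # agreement within 0.5 dB
```

Repeating the experiment with B + 1 bits raises the measured SQNR by about 6 dB, as the formula predicts.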
Chapter 7. Digital processing of analog signals
7.4 D/A conversion
Figure 7.17 shows the ideal discrete-time-to-analog (DT/A) conversion as considered up to now. It uses an ideal lowpass reconstruction filter hr(t), as defined in equation (7.18).

Fig. 7.17 Ideal discrete-time-to-analog conversion: DT signal y[n] → DT/A (Ts) → analog signal yr(t).

The corresponding time- and frequency-domain input/output expressions are

yr(t) = ∑_{n=−∞}^{∞} y[n] hr(t − nTs)   (7.49)

and

Yr(Ω) = Y(ΩTs) Hr(Ω)   (7.50)

where

Hr(Ω) = { Ts if |Ω| < Ωs/2 ; 0 otherwise }   (7.51)
This is an idealized form of reconstruction, since all the signal samples y[n] are needed to reconstruct yr(t). The interpolating function hr(t) corresponds to an ideal lowpass filtering operation (i.e. non-causal and unstable). Ideal DT/A conversion cannot be implemented exactly due to physical limitations; similarly to A/DT, it is used as a mathematical model in DSP. Practical DSP systems use non-ideal D/A converters instead, characterized by:
• a realizable, low-cost approximation to hr(t);
• an output signal yr(t) that is not perfectly band-limited.

7.4.1 Basic steps in D/A conversion

D/A conversion may be viewed as a two-step process, illustrated in Figure 7.18:
• The hold circuitry transforms its digital input into a continuous-time PAM-like signal ỹ(t);
• The post-filter smoothes out the PAM signal (spectral reshaping).

Hold circuit: A time-domain description of its operation is given by

ỹ(t) = ∑_{n=−∞}^{∞} y[n] h0(t − nTs)   (7.52)
Fig. 7.18 Two-step D/A conversion: the D/A converter consists of a hold circuit, y[n] → ỹ(t), followed by a post-filter, ỹ(t) → yr(t).

where h0(t) is the basic pulse shape of the circuit. In the frequency domain:

Ỹ(Ω) = ∫_{−∞}^{∞} ỹ(t) e^{−jΩt} dt
     = ∑_{n=−∞}^{∞} y[n] ∫_{−∞}^{∞} h0(t − nTs) e^{−jΩt} dt
     = ( ∑_{n=−∞}^{∞} y[n] e^{−jΩTs n} ) ∫_{−∞}^{∞} h0(t) e^{−jΩt} dt
     = Y(ΩTs) H0(Ω)   (7.53)

The most common implementation is the zero-order hold, which uses a rectangular pulse shape:

h0(t) = { 1 if 0 ≤ t ≤ Ts ; 0 otherwise }   (7.54)

The corresponding frequency response is

H0(Ω) = Ts sinc(Ω/Ωs) e^{−jπΩ/Ωs}   (7.55)

In general, H0(Ω) ≠ Hr(Ω), so that the output ỹ(t) of the hold circuit is different from that of the ideal DT/A conversion.

Post-filter: The purpose of the post-filter Hpf(Ω) is to smooth out the output of the hold circuitry so that it more closely resembles that of an ideal DT/A function. From the previous discussion, it is clear that the desired output of the D/A converter is Yr(Ω) = Y(ΩTs)Hr(Ω), whereas the output of the hold circuitry is Ỹ(Ω) = Y(ΩTs)H0(Ω). Clearly, the output of the post-filter will be identical to that of the ideal DT/A converter if

Hpf(Ω) = Hr(Ω)/H0(Ω) = { Ts/H0(Ω) if |Ω| < Ωs/2 ; 0 otherwise }   (7.56)

In practice, the post-filter Hpf(Ω) can only be approximated, but the results are still satisfactory.
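The zero-order hold response (7.55) and the compensation gain (7.56) can be evaluated numerically. The snippet below is an illustrative sketch (not from the notes); it uses numpy's normalized `np.sinc`, which matches the sinc convention of (7.55).

```python
import numpy as np

Ts = 1.0                        # sampling period (arbitrary units)
ws = 2 * np.pi / Ts             # sampling frequency Omega_s in rad/s
w = np.linspace(1e-9, 0.999 * ws / 2, 1000)   # 0 < Omega < Omega_s/2

# Zero-order hold magnitude response: |H0(Omega)| = Ts * |sinc(Omega/Omega_s)|
H0 = Ts * np.sinc(w / ws)       # np.sinc(x) = sin(pi x)/(pi x)

# Ideal compensating post-filter gain inside the passband, eq. (7.56):
Hpf = Ts / H0

# Sinc "droop" at the band edge Omega_s/2: |H0| = Ts*sinc(1/2) = 2Ts/pi
droop_db = 20 * np.log10(H0[-1] / Ts)
print(round(droop_db, 2))       # about -3.92 dB, boosted back to 0 dB by Hpf
```

The roughly 4 dB of attenuation at the band edge is what the bottom curve of Figure 7.19 compensates for.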
Example 7.3: Zero-order hold circuit

The zero-order hold frequency response H0(Ω) is shown in the top part of Figure 7.19, as compared to the ideal brickwall lowpass reconstruction filter Hr(Ω). The compensating filter in this case is shown in the bottom part of Figure 7.19.

Fig. 7.19 Illustration of the zero-order hold frequency response (top, compared with the ideal lowpass interpolation filter Hr(Ω)), and the associated compensation post-filter Hr(Ω)/H0(Ω) (bottom). (T denotes the sampling period.)
Chapter 8

Structures for the realization of DT systems

Introduction: Consider a causal LTI system H with rational system function

H(z) = Z{h[n]} = B(z)/A(z)   (8.1)

In theory, H(z) provides a complete mathematical characterization of the system H. However, even if we specify H(z), the input-output transformation x[n] → y[n] = H{x[n]}, where x[n] denotes an arbitrary input and y[n] the corresponding output, can be computed in a multitude of different ways. Each of the different ways of performing the computation y[n] = H{x[n]} is called a realization of the system. Specifically:
• A realization of a system is a specific (and complete) description of its internal computational structure.
• This description may involve difference equations, block diagrams, pseudo-code (Matlab-like), etc.
For a given system H, there is an infinity of such realizations. The choice of a particular realization is guided by practical considerations, such as:
• computational requirements;
• internal memory requirements;
• effects of finite-precision representation;
• chip area and power consumption in VLSI implementation;
• etc.
In this chapter, we discuss some of the most well-known structures for the realization of FIR and IIR discrete-time systems. We will use the signal flow-graph representation to illustrate the different computational structures. These are reviewed in the next section.
8.1 Signal flow-graph representation

A signal flow-graph (SFG) is a network of directed branches connecting at nodes. It is mainly used to graphically represent the relationship between signals. In fact, it gives detailed information on the actual algorithm that is used to compute the samples of one or several signals based on the samples of one or several other signals. The basic terminology and operating principles of SFGs are detailed below.

Node: A node is represented by a small circle, as shown in Figure 8.1(a). To each node is associated a node value, say w[n], that may or may not appear next to the node. The node value represents either an external input, the result of an internal computation, or an output value (see below).

Branch: A branch is an oriented line segment, as shown in Figure 8.1(b). The signal flow along the branch is indicated by an arrow. The relationship between the branch input u[n] and output v[n] is provided by the branch transmittance: v[n] = u[n] (plain branch), v[n] = a u[n] (gain a), v[n] = u[n − 1] (delay z^{-1}), or V(z) = H(z)U(z) (general transmittance H(z)).

Fig. 8.1 Basic constituents of a signal flow-graph: (a) a single isolated node; (b) different branches along with their input-output characteristics.

Internal node: An internal node is a node with one or more input branches (i.e. entering the node) and one or more output branches (i.e. leaving the node), as shown in Figure 8.2(a). The node value w[n] (not shown in the figure) is the sum of the outputs of all the branches entering the node:

w[n] = ∑_{k=1}^{K} u_k[n]   (8.2)

The inputs to the branches leaving the node are all identical and equal to the node value:

v_1[n] = · · · = v_L[n] = w[n]   (8.3)
Source node: A source node is a node with no input branch, as shown in Figure 8.2(b). The node value x[n] is provided by an external input to the network.

Sink node: A sink node is a node with no output branch, as shown in Figure 8.2(c). The node value y[n], computed in the same way as for an internal node, may be used as a network output.

Fig. 8.2 Different types of nodes: (a) internal node; (b) source node; (c) sink node.

Remarks: SFGs provide a convenient tool for describing various system realizations. In many cases, non-trivial realizations of a system can be obtained via simple modifications to its SFG. In the examples below, we illustrate the use of SFGs for the representation of rational LTI systems. In particular, we illustrate how a SFG can be derived from an LCCDE description of the system and vice versa.

Example 8.1: Consider the SFG in Figure 8.3, where a and b are arbitrary constants: the input x1[n] passes through a delay z^{-1} into node w[n], the input x2[n] enters w[n] through a branch with gain a, and w[n] feeds the output y[n] through a branch with gain b.

Fig. 8.3 SFG for Example 8.1.

The node value w[n] can be expressed in terms of the input signals x1[n] and x2[n] as

w[n] = x1[n − 1] + a x2[n]
From there, an expression for the output signal y[n] in terms of x1[n] and x2[n] is obtained as

y[n] = b w[n] = b x1[n − 1] + a b x2[n]

Example 8.2: Consider the system function

H(z) = (b0 + b1 z^{-1}) / (1 − a z^{-1})

To obtain a SFG representation of H(z), we first obtain the corresponding LCCDE, namely

y[n] = a y[n − 1] + b0 x[n] + b1 x[n − 1]

The derivation of the SFG can be made easier by expressing the LCCDE as a set of two equations as follows:

w[n] = b0 x[n] + b1 x[n − 1]
y[n] = a y[n − 1] + w[n]

From there, one immediately obtains the SFG in Figure 8.4. Note the presence of the feedback loop in this diagram, which is typical of recursive LCCDEs.

Fig. 8.4 SFG for Example 8.2.

Example 8.3: Let us find the system function H(z) corresponding to the SFG in Figure 8.5. In this type of problem, one has to be very systematic to avoid possible mistakes. We proceed in four steps as follows:

(1) Identify and label the non-trivial internal nodes. Here, we identify two non-trivial nodes, labelled w1[n] and w2[n] in Figure 8.5.

(2) Find the z-domain input/output (I/O) relation for each of the non-trivial nodes and for the output node. Here, there are 2 non-trivial nodes and 1 output node, so that we need a total of 3 linearly independent relations, namely:

Y(z) = b0 W1(z) + b1 W2(z)   (8.4)
W2(z) = z^{-1} W1(z)   (8.5)
W1(z) = X(z) + a W2(z)   (8.6)
Fig. 8.5 SFG for Example 8.3.

(3) By working out the algebra, solve the I/O relations for Y(z) in terms of X(z). Here:

(8.5) + (8.6) ⇒ W1(z) = X(z) + a z^{-1} W1(z) ⇒ W1(z) = X(z) / (1 − a z^{-1})
(8.4) + (8.5) ⇒ Y(z) = (b0 + b1 z^{-1}) W1(z) ⇒ Y(z) = [(b0 + b1 z^{-1}) / (1 − a z^{-1})] X(z)

(4) Finally, the system function is obtained as

H(z) = Y(z)/X(z) = (b0 + b1 z^{-1}) / (1 − a z^{-1})   (8.7)

As a final remark, we note that the above system function is identical to that considered in Example 8.2, even though the SFG considered here is different from that in Example 8.2.

8.2 Realizations of IIR systems

We consider IIR systems with rational system functions:

H(z) = ∑_{k=0}^{M} b_k z^{-k} / (1 − ∑_{k=1}^{N} a_k z^{-k})   (8.8)
     = B(z)/A(z)   (8.9)

The minus signs in the denominator are introduced to simplify the signal flow-graphs (be careful: they are often a source of confusion). For this general IIR system function, we shall derive several SFGs that correspond to different system realizations.

8.2.1 Direct form I

The LCCDE corresponding to the system function H(z) in (8.9) is given by

y[n] = ∑_{k=1}^{N} a_k y[n − k] + ∑_{k=0}^{M} b_k x[n − k]   (8.10)
Proceeding as in Example 8.2, this leads directly to the SFG shown in Figure 8.6, known as direct form I (DFI). Referring to Figure 8.6, the section of the SFG labelled B(z) corresponds to the zeros of the system, while the section labelled 1/A(z) corresponds to the poles. In a practical implementation of this SFG, each unit delay (i.e. z^{-1}) requires one storage location (past values need to be stored), so the DFI requires N + M unit delays. Note that N = M is assumed for convenience; if it is not the case, branches corresponding to a_i = 0 or b_i = 0 simply disappear.

Fig. 8.6 Direct form I implementation of an IIR system.

8.2.2 Direct form II

Since LTI systems commute, the two sections identified as B(z) and 1/A(z) in Figure 8.6 may be interchanged. This leads to the intermediate SFG structure shown in Figure 8.7.

Fig. 8.7 Structure obtained from a DFI by commuting the implementation of the poles with the implementation of the zeros.

Observe that the two vertical delay lines in the middle of this structure compute the same quantities, namely w[n − 1], w[n − 2], . . ., so that only one delay line is actually needed. Eliminating one of these delay lines and merging the two sections leads to the structure shown in Figure 8.8, known as the direct form II (DFII). The main advantage of DFII over DFI is its reduced storage requirement: the DFII only contains max{N, M} ≤ N + M unit delays, while the DFI contains N + M unit delays. One can easily see that the following difference equations describe the DFII realization:

w[n] = a1 w[n − 1] + · · · + aN w[n − N] + x[n]   (8.11)
y[n] = b0 w[n] + b1 w[n − 1] + · · · + bM w[n − M]   (8.12)

8.2.3 Cascade form

A cascade form is obtained by first factoring H(z) as a product:

H(z) = B(z)/A(z) = H1(z) H2(z) · · · HK(z)   (8.13)

where

Hk(z) = Bk(z)/Ak(z),   k = 1, 2, . . . , K   (8.14)

represent IIR sections of low order (typically 1 or 2). The corresponding SFG is shown in Figure 8.9. Note that each box needs to be expanded with the proper sub-SFG (often in DFII or transposed DFII form).
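Before moving on, the DFII recursion (8.11)–(8.12) can be sketched in code. The example below is illustrative only (not from the notes); it uses the sign convention of (8.9), i.e. the a_k appear with plus signs in the recursion, and a single delay line holding w[n−1], . . . , w[n−N].

```python
import numpy as np

def dfii_filter(b, a, x):
    """Direct form II, eqs. (8.11)-(8.12).
    b = [b0..bM], a = [a1..aN], with H(z) = B(z)/(1 - a1 z^-1 - ... - aN z^-N)."""
    N = max(len(b) - 1, len(a))
    b = np.concatenate([b, np.zeros(N + 1 - len(b))])   # pad so N = M
    a = np.concatenate([a, np.zeros(N - len(a))])
    w = np.zeros(N + 1)          # w[k] holds w[n-k]; only one delay line
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        w[0] = xn + np.dot(a, w[1:])   # (8.11)
        y[n] = np.dot(b, w)            # (8.12)
        w[1:] = w[:-1]                 # shift the delay line
    return y

# First-order check: H(z) = (1 + 0.5 z^-1)/(1 - 0.9 z^-1)
y = dfii_filter([1.0, 0.5], [0.9], np.r_[1.0, np.zeros(4)])
print(y)   # impulse response: 1, 1.4, 1.26, 1.134, 1.0206
```

Only max{N, M} + 1 storage locations are used, which is precisely the storage advantage of DFII over DFI discussed above.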
Fig. 8.8 Direct form II realization of an IIR system.

Fig. 8.9 Cascade form for an IIR system; each box needs to be expanded with the proper sub-SFG (often in DFII or transposed DFII form).

For a typical real-coefficient second-order section (SOS):

Hk(z) = Gk (1 − zk1 z^{-1})(1 − zk2 z^{-1}) / [(1 − pk1 z^{-1})(1 − pk2 z^{-1})]   (8.15)
      = (bk0 + bk1 z^{-1} + bk2 z^{-2}) / (1 − ak1 z^{-1} − ak2 z^{-2})   (8.16)

where pk2 = pk1* and zk2 = zk1*, so that aki, bki ∈ R. It should be clear that such a cascade decomposition of an arbitrary H(z) is not unique:
• There are many different ways of grouping the zeros and poles of H(z) into lower-order sections.
• The gains of the individual sections may be changed (provided their product remains constant).

Example 8.4: Consider the system function

H(z) = G (1 − e^{jπ/4} z^{-1})(1 − e^{−jπ/4} z^{-1})(1 − e^{j3π/4} z^{-1})(1 − e^{−j3π/4} z^{-1}) / [(1 − .9z^{-1})(1 + .9z^{-1})(1 − .9jz^{-1})(1 + .9jz^{-1})]
There are several ways of pairing the poles and the zeros of H(z) to obtain 2nd-order sections. In practice, complex conjugate poles and zeros should be paired so that the resulting second-order sections have real coefficients. One possible choice that fulfils this requirement is:

H1(z) = (1 − e^{jπ/4} z^{-1})(1 − e^{−jπ/4} z^{-1}) / [(1 − .9z^{-1})(1 + .9z^{-1})] = (1 − √2 z^{-1} + z^{-2}) / (1 − .81z^{-2})

H2(z) = (1 − e^{j3π/4} z^{-1})(1 − e^{−j3π/4} z^{-1}) / [(1 − .9jz^{-1})(1 + .9jz^{-1})] = (1 + √2 z^{-1} + z^{-2}) / (1 + .81z^{-2})

The corresponding SFG is illustrated in Figure 8.10, where the above 2nd-order sections are realized in DFII. Note that we have arbitrarily incorporated the gain factor G between the two sections H1(z) and H2(z). In practice, the gain G could be distributed over the SFG so as to optimize the use of the available dynamic range of the processor (more on this later).

Fig. 8.10 Example of a cascade realization; each box needs to be expanded with a proper sub-SFG.

8.2.4 Parallel form

Based on the partial fraction expansion (PFE) of H(z), one can express the latter as a sum:

H(z) = C(z) + ∑_{k=1}^{K} Hk(z)   (8.17)

where

Hk(z) : low-order IIR sections   (8.18)
C(z) : FIR section (if needed)   (8.19)

The corresponding SFG is shown in Figure 8.11. Typically, second-order IIR sections Hk(z) would be obtained by combining terms in the PFE that correspond to complex conjugate poles:

Hk(z) = Ak/(1 − pk z^{-1}) + Ak*/(1 − pk* z^{-1})   (8.20)
      = (bk0 + bk1 z^{-1}) / (1 − ak1 z^{-1} − ak2 z^{-2})   (8.21)
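The pairing in Example 8.4 can be checked numerically: cascading sections corresponds to multiplying their numerator and denominator polynomials. The sketch below (not from the notes; G = 1 for simplicity) verifies that the product of the two second-order sections reproduces the fourth-order zero/pole description.

```python
import numpy as np

# Second-order sections from Example 8.4 (coefficients of 1, z^-1, z^-2)
b1, a1 = [1.0, -np.sqrt(2), 1.0], [1.0, 0.0, -0.81]   # H1(z)
b2, a2 = [1.0,  np.sqrt(2), 1.0], [1.0, 0.0,  0.81]   # H2(z)

# Cascade <=> polynomial multiplication of numerators and denominators
b = np.polymul(b1, b2)   # B(z) of the overall 4th-order system
a = np.polymul(a1, a2)   # A(z) of the overall 4th-order system

# Compare with the pole/zero description: zeros e^{+-j pi/4}, e^{+-j 3pi/4},
# poles +-0.9 and +-0.9j (np.poly builds the polynomial from its roots)
zeros = np.exp(1j * np.pi * np.array([0.25, -0.25, 0.75, -0.75]))
poles = np.array([0.9, -0.9, 0.9j, -0.9j])
assert np.allclose(b, np.real(np.poly(zeros)))
assert np.allclose(a, np.real(np.poly(poles)))
print(b, a)   # B(z) = 1 + z^-4,  A(z) = 1 - 0.6561 z^-4
```

Any other grouping of conjugate pairs would give different section coefficients but the same products b and a, which is the non-uniqueness noted above.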
Fig. 8.11 Parallel realization of an IIR system; each box needs to be expanded with a proper sub-SFG.

Again, there are several ways of pairing the poles of H(z) to obtain 2nd-order sections. In practice, terms corresponding to complex conjugate poles should be combined so that the resulting second-order sections have real coefficients, where aki, bki ∈ R. A DFII or transposed DFII structure would be used for the realization of (8.21).

Example 8.5: Consider the system function of Example 8.4:

H(z) = G (1 − e^{jπ/4}z^{-1})(1 − e^{−jπ/4}z^{-1})(1 − e^{j3π/4}z^{-1})(1 − e^{−j3π/4}z^{-1}) / [(1 − .9z^{-1})(1 + .9z^{-1})(1 − .9jz^{-1})(1 + .9jz^{-1})]

It is not difficult to verify that it has the following PFE:

H(z) = A + B/(1 − .9z^{-1}) + C/(1 + .9z^{-1}) + D/(1 − .9jz^{-1}) + D*/(1 + .9jz^{-1})

where A, B and C are appropriate real-valued coefficients and D is complex-valued. Applying this approach, we obtain

H1(z) = B/(1 − .9z^{-1}) + C/(1 + .9z^{-1}) = (b10 + b11 z^{-1}) / (1 − .81z^{-2})
H2(z) = D/(1 − .9jz^{-1}) + D*/(1 + .9jz^{-1}) = (b20 + b21 z^{-1}) / (1 + .81z^{-2})

The corresponding SFG is illustrated in Figure 8.12, with H1(z) and H2(z) realized in DFII.

Fig. 8.12 Example of a parallel realization.

8.2.5 Transposed direct form II

Transposition theorem: Let G1 denote a SFG with system function H1(z). Let G2 denote the SFG obtained from G1 by:
(1) reversing the direction of all the branches, and
(2) interchanging the source x[n] and the sink y[n].

Then

H2(z) = H1(z)   (8.22)

Note that only the branch directions are changed, not the transmittances. Usually, we redraw G2 so that the source node x[n] is on the left.

Example 8.6: An example flow-graph is represented on the left of Figure 8.13. Its transfer function is easily found to be

H1(z) = [b z^{-1} / (1 − a b z^{-1})] / [1 − c b z^{-1} / (1 − a b z^{-1})]

The application of the transposition theorem yields the SFG on the right of Figure 8.13, which can easily be checked to have the same transfer function.

Application to DFII: Consider the general rational transfer function

H(z) = ∑_{k=0}^{M} b_k z^{-k} / (1 − ∑_{k=1}^{N} a_k z^{-k})   (8.23)
Fig. 8.13 Example of application of the transposition theorem: left, original SFG; right, transposed SFG.

with its DFII realization. Applying the transposition theorem as in the previous example, we obtain the SFG shown in Figure 8.14, known as the transposed direct form II. Note that N = M is assumed for convenience. The corresponding LCCDEs are:

wN[n] = bN x[n] + aN y[n]
wN−1[n] = bN−1 x[n] + aN−1 y[n] + wN[n − 1]
. . .
w1[n] = b1 x[n] + a1 y[n] + w2[n − 1]
y[n] = w1[n − 1] + b0 x[n]   (8.24)

Fig. 8.14 Transposed DFII realization of a rational system.
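The transposed-DFII recursion (8.24) can also be sketched directly. The code below is illustrative (not from the notes) and uses the same sign convention as (8.23); updating the states w_k in increasing k ensures each update reads w_{k+1}[n−1] before it is overwritten.

```python
import numpy as np

def tdfii_filter(b, a, x):
    """Transposed direct form II, eq. (8.24).
    b = [b0..bN], a = [a1..aN], H(z) = B(z)/(1 - a1 z^-1 - ... - aN z^-N)."""
    N = len(b) - 1
    assert len(a) == N              # N = M assumed, as in the notes
    w = np.zeros(N + 1)             # w[k] holds w_k[n-1]; index 0 unused
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        y[n] = b[0] * xn + w[1]                        # y[n] = b0 x[n] + w1[n-1]
        for k in range(1, N):                          # w_k = bk x + ak y + w_{k+1}[n-1]
            w[k] = b[k] * xn + a[k - 1] * y[n] + w[k + 1]
        w[N] = b[N] * xn + a[N - 1] * y[n]             # w_N = bN x + aN y
    return y

# Same first-order system as before: H(z) = (1 + 0.5 z^-1)/(1 - 0.9 z^-1)
print(tdfii_filter([1.0, 0.5], [0.9], np.r_[1.0, np.zeros(4)]))
```

As the transposition theorem guarantees, the output matches the DFII result sample for sample.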
8.3 Realizations of FIR systems

8.3.1 Direct forms

In the case of an FIR system, the denominator in the rational system function (8.9) reduces to A(z) = 1. That is,

H(z) = B(z) = ∑_{k=0}^{M} b_k z^{-k}   (8.25)

corresponding to the following non-recursive LCCDE in the time domain:

y[n] = ∑_{k=0}^{M} b_k x[n − k]   (8.26)

Accordingly, the structural component labelled 1/A(z) in Figure 8.6 disappears, and the DFI and DFII realizations become equivalent. The resulting SFG is shown in Figure 8.15; it is also called a tapped delay line or a transversal filter structure.

Fig. 8.15 Direct form for an FIR system (also called tapped delay line or transversal filter).

Note that the coefficients bk in Figure 8.15 directly define the impulse response of the system. Applying a unit pulse as the input, i.e. x[n] = δ[n], produces the output y[n] = h[n], with

h[n] = { bn if n ∈ {0, 1, . . . , M} ; 0 otherwise }   (8.27)
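The tapped delay line of Figure 8.15 is just the convolution sum (8.26). A minimal sketch (not from the notes; the function name is ours):

```python
import numpy as np

def fir_direct(b, x):
    """Tapped delay line: y[n] = sum_k b[k] x[n-k], eq. (8.26)."""
    M = len(b) - 1
    taps = np.zeros(M + 1)          # taps[k] holds x[n-k]
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        taps[1:] = taps[:-1]        # shift the delay line
        taps[0] = xn
        y[n] = np.dot(b, taps)
    return y

b = [1.0, -2.0, 1.0]
x = np.arange(6, dtype=float)
assert np.allclose(fir_direct(b, x), np.convolve(b, x)[:len(x)])
# The impulse response equals the coefficients, as stated in (8.27):
assert np.allclose(fir_direct(b, np.r_[1.0, np.zeros(4)])[:3], b)
```

Each output sample costs M + 1 multiplications, a count that the linear-phase structure of Section 8.3.3 roughly halves.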
8.3.2 Cascade form

In the case of an FIR system, a cascade form is also possible, but the problem is in fact simpler than for IIR, since only the zeros need to be considered (because A(z) = 1). To decompose H(z) in (8.25) as a product of low-order FIR sections, as in

H(z) = H1(z) H2(z) · · · HK(z)

one needs to find the zeros of H(z), say zl (l = 1, . . . , M), and form the desired low-order sections by properly grouping the corresponding factors 1 − zl z^{-1} and multiplying by the appropriate gain term. For example, a typical 2nd-order section would have the form

Hk(z) = Gk (1 − zk1 z^{-1})(1 − zk2 z^{-1}) = bk0 + bk1 z^{-1} + bk2 z^{-2}

where the coefficients bki can be made real by a proper choice of complex conjugate zeros (i.e. zk2 = zk1*). In practice, cascade realizations of FIR systems use sections of order 1, 2 and/or 4. Sections of order 4 are useful in the efficient realization of linear-phase systems.

Example 8.7: Consider the FIR system function

H(z) = (1 − e^{jπ/4} z^{-1})(1 − e^{−jπ/4} z^{-1})(1 − e^{j3π/4} z^{-1})(1 − e^{−j3π/4} z^{-1})

A cascade realization of H(z) as a product of 2nd-order sections can be obtained as follows:

H1(z) = (1 − e^{jπ/4} z^{-1})(1 − e^{−jπ/4} z^{-1}) = 1 − √2 z^{-1} + z^{-2}
H2(z) = (1 − e^{j3π/4} z^{-1})(1 − e^{−j3π/4} z^{-1}) = 1 + √2 z^{-1} + z^{-2}

8.3.3 Linear-phase FIR systems

Definition: An LTI system is said to have generalized linear phase (GLP) if its frequency response is expressible as

H(ω) = A(ω) e^{−j(αω−β)}   (8.28)

where the function A(ω) and the coefficients α and β are real-valued. Note that A(ω) can be negative, so that in general A(ω) ≠ |H(ω)|; rather, |A(ω)| = |H(ω)|. In terms of A(ω), α and β, we have

∠H(ω) = { −αω + β if A(ω) > 0 ; −αω + β + π if A(ω) < 0 }   (8.29)

Despite these phase discontinuities, it is common practice to refer to H(ω) as a linear-phase system.
we have y[n] = = = (8.16 illustrates a modiﬁed DFI structure which enforces linear phase. Figure 8.34) k=0 ∑ bk x[n − k] k=0 M (8.35) (M−1)/2 ∑ (bk x[n − k] + bM−k x[n − (M − k)]) bk (x[n − k] + ε x[n − (M − k)]) (M−1)/2 k=0 ∑ which only requires (M + 1)/2 multiplications instead of M + 1. [>Q@ ] ] H] \>Q@ E E ] E 0 . Structures for the realization of DT systems Modiﬁed direct form: The symmetry in h[n] can be used advantageously to reduce the number of multiplications in direct form realization: h[n] = εh[M − n] =⇒ bk = εbM−k In the case M odd.162 Chapter 8.
] E 0 .
sections of order 4 are often used in the cascade realization of real.37) 8.36) Thus.3.4 Lattice realization of FIR systems Lattice stage: The basic lattice stage is a 2input 2output DT processing device. Properties of the zeros of a GLP system: In the zdomain the GLP condition h[n] = εh[M − n] is equivalent to H(z) = εz−M H(z−1 ) (8. As a result. z∗ . Labeau Compiled October 13. 1/z0 and 1/z∗ are all zeros of 0 0 H(z). if z0 is a zero of H(z). linear phase FIR ﬁlters: Hk (z) = Gk (1 − zk z−1 )(1 − z∗ z−1 )(1 − k 1 −1 1 z )(1 − ∗ z−1 ) zk zk (8. so is 1/z0 . Fig.16 A modiﬁed DF structure to enforce linear phase (M odd). Champagne & F. then z0 .17. 8. Its signal ﬂow graph is illustrated in Figure 8. If in addition h[n] is real. c B. This is illustrated in Figure 8.18. 2004 .
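Before moving on to the lattice structure, here is a sketch of the folded linear-phase computation (8.35) (illustrative only, not from the notes): for a symmetric filter with M odd, the taps are added pairwise before multiplying, halving the multiplication count.

```python
import numpy as np

def linear_phase_fir(b_half, eps, x):
    """Folded direct form, eq. (8.35), for M odd.
    b_half = [b0 .. b_{(M-1)/2}], eps = +1 (symmetric) or -1 (antisymmetric);
    only (M+1)/2 multiplications are needed per output sample."""
    half = len(b_half)
    M = 2 * half - 1                       # filter order (odd)
    xp = np.concatenate([np.zeros(M), x])  # zero initial conditions
    y = np.zeros(len(x))
    for n in range(len(x)):
        s = 0.0
        for k in range(half):              # pairwise add, then multiply
            s += b_half[k] * (xp[M + n - k] + eps * xp[M + n - (M - k)])
        y[n] = s
    return y

# Symmetric length-4 filter h = [1, 3, 3, 1] (M = 3, eps = +1)
x = np.arange(8, dtype=float)
y = linear_phase_fir([1.0, 3.0], +1, x)
assert np.allclose(y, np.convolve([1, 3, 3, 1], x)[:len(x)])
```

The number of additions is unchanged; only the multiplier count drops, which is what the structure of Figure 8.16 exploits.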
Fig. 8.17 Illustration of the zero locations of FIR real-coefficient GLP systems: a zero always appears with its inverse and complex conjugate (shown in the z-plane relative to the unit circle).

Fig. 8.18 Lattice stage.
. . .19 A lattice ﬁlter as a series of lattice stages. with index m running from 1 to M. Inputoutput characterization in the time domain: um [n] = um−1 [n] + κm vm−1 [n − 1] vm [n] = Equivalent zdomain matrix representation: Um (z) Vm (v) = Km (z) Um−1 (z) Vm−1 (v) (8.41) Lattice ﬁlter: A lattice ﬁlter is made up of a cascade of M lattice stages.19.40) κ∗ um−1 [n] + vm−1 [n − 1] m (8. u0 [ n] u1[n] κ 1* * κ2 u2 [n] * κM u M [n] y[n] x[n] z −1 v0 [ n ] κ1 v1[n] z −1 κ2 v2 [ n ] z −1 κM vM [n] Fig.44) Compiled October 13.42) U0 (z) V0 (v) (8. 8. Structures for the realization of DT systems The parameter κm is called reﬂection coefﬁcient. while m is an integer index whose meaning will be explained shortly. .39) where Km (z) is a 2 × 2 transfer matrix deﬁned as Km (z) 1 κm z−1 κ∗ z−1 m (8. . Champagne & F.164 Chapter 8. K2 (z)K1 (z) VM (v) Y (z) = UM (z) c B.38) (8. M um [n] = um−1 [n] + κm vm−1 [n − 1] vm [n] = κ∗ um−1 [n] + vm−1 [n − 1] m end y[n] = uM [n] Corresponding matrix representation in the zdomain: U0 (z) = V0 (z) = X(z) UM (z) = KM (z) . 2004 . Corresponding set of difference equations characterizing the above SFG in the time domain: u0 [n] = v0 [n] = x[n] for m = 1.43) (8. . Labeau (8. This is illustrated in ﬁgure 8. 2.
System function:
• From (8.42)–(8.44), it follows that Y(z) = H(z)X(z) with the equivalent system function H(z) given by

H(z) = [1 0] KM(z) · · · K2(z) K1(z) [1 ; 1]   (8.45)

• Expanding the matrix product in (8.45), it is easy to verify that the system function of the lattice filter is expressible as

H(z) = 1 + b1 z^{-1} + · · · + bM z^{-M}   (8.46)

where each of the coefficients bk in (8.46) can be expressed algebraically in terms of the reflection coefficients {κm}, m = 1, . . . , M.
• Conclusion: an M-stage lattice filter is indeed an FIR filter of length M + 1.

Minimum-phase property: The lattice realization is guaranteed to be minimum-phase if the reflection coefficients are all less than one in magnitude, that is, if |κm| < 1 for all m = 1, . . . , M. The proof of this property is beyond the scope of this course.

Note: Thus, by constraining the reflection coefficients as above, we can be sure that a causal and stable inverse exists for the lattice system.
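A minimal sketch of the lattice recursion (8.38)–(8.39) for real reflection coefficients (not from the notes; the function name is ours). Applying a unit pulse recovers the equivalent FIR coefficients of (8.46), and with |κm| < 1 the resulting zeros are indeed inside the unit circle.

```python
import numpy as np

def lattice_fir(kappa, x):
    """FIR lattice filter, eqs. (8.38)-(8.39): u0[n] = v0[n] = x[n], y[n] = uM[n].
    kappa = [k1, ..., kM] are the (real) reflection coefficients."""
    M = len(kappa)
    d = np.zeros(M)               # d[m] stores v_m[n-1], m = 0..M-1
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        u, v_cur = xn, xn         # u0[n] = v0[n] = x[n]
        for m in range(M):        # stage m+1
            u_next = u + kappa[m] * d[m]     # (8.38)
            v_next = kappa[m] * u + d[m]     # (8.39), real kappa
            d[m] = v_cur                     # v_m[n] becomes v_m[n-1]
            u, v_cur = u_next, v_next
        y[n] = u                  # y[n] = uM[n]
    return y

# Impulse response = equivalent FIR coefficients, eq. (8.46)
h = lattice_fir([0.5, 0.25], np.r_[1.0, np.zeros(2)])
print(h)                                  # 1, 0.625, 0.25
print(np.abs(np.roots(h)))                # zero magnitudes < 1: minimum phase
```

For the two-stage case this matches the known closed form H(z) = 1 + κ1(1 + κ2) z^{-1} + κ2 z^{-2}.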
Chapter 9

Filter design

9.1 Introduction

9.1.1 Problem statement

Consider the rational system function

H(z) = B(z)/A(z) = ∑_{k=0}^{M} b_k z^{-k} / (1 + ∑_{k=1}^{N} a_k z^{-k})   (9.1)

for which several computational realizations have been developed in the preceding chapter. In filter design, we seek to find the system coefficients, i.e. N, M, a1, . . . , aN and b0, . . . , bM in (9.1), such that the corresponding frequency response H(ω) = H(z)|_{z=e^{jω}} provides a good approximation to a desired response Hd(ω):

H(ω) ∼ Hd(ω)   (9.2)

In order to be practically realizable, the resulting system H(z) must be stable and causal, which imposes strict constraints on the possible values of the ak. Additionally, a linear-phase requirement for H(z) may be desirable in certain applications, which further restricts the system parameters.

It is important to realize that the use of an approximation in (9.2) is a fundamental necessity in practical filter design. In the vast majority of problems, the desired response Hd(ω) does not correspond to a stable and causal system and cannot even be expressed exactly as a rational system function. To illustrate this point, recall from Chapter 2 that stability of H(z) implies that H(ω) is continuous, while ideal frequency-selective filters Hd(ω) all have discontinuities.

Example 9.1: Ideal Low-Pass Filter

The ideal lowpass filter has already been studied in Example 3.1. The corresponding frequency response, denoted Hd(ω) in the present context, is illustrated in Figure 9.1. It is not continuous, and the corresponding impulse response

hd[n] = (ωc/π) sinc(ωc n / π)   (9.3)

is easily seen to be non-causal. Furthermore, it is possible to show that

∑_n |hd[n]| = ∞

so that the ideal lowpass filter is unstable.
ω p ] = passband • [ωs . the magnitude of the desired response is usually speciﬁed in terms of the tolerable • passband distortion. • δ2 = stopband ripple. • stopband attenuation. 9. • δ1 = passband ripple. Z ZF Fig. and • width of transition band. Usually expressed in dB as 20 log10 (δ2 ). π] = stopband • δω ωs − ω p = width of transition band.1 S Ideal LowPass ﬁlter speciﬁcation in frequency. Typical lowpass speciﬁcations: A typical graphical summary of design speciﬁcations for a discretetime lowpass ﬁlter is shown in Figure 9.1. phase speciﬁcations play a more important role. The parameters are: • [0. c B. An additional requirement of linear phase (GLP) may be speciﬁed. Labeau Compiled November 5. such as allpass equalizers and Hilbert transformers.2 Speciﬁcation of Hd (ω) For frequency selective ﬁlters. Often expressed in dB via 20 log10 (1 + δ1 ). For other types of ﬁlters.2. Champagne & F. 9. 2004 .
168 Chapter 9. Filter design + G Z .
for example by an additional GLP requirement.6) . +1}. δ2 and/or δω leads to an increase in the required IIR ﬁlter order N or FIR ﬁlter length M. Recall that a LTI system is GLP iff H(ω) = A(ω)e− j(αω−β) (9. stated without proof.3 FIR or IIR. In essence.5) (9. then: H(1/p) = ε p2α H(p) = ∞ which shows that 1/p is also a pole of H(z). This is a consequence of the following theorem. h[n] ∈ R. causal and GLP. is GLP if and only if H(z) = εz−2α H(1/z) where α ∈ R and ε ∈ {−1. 9. Below. and thus greater implementation costs. c B.4) where the amplitude function A(ω) and the phase parameters α and β are real valued. G G G ZS Fig. if H(z) is GLP. PZ symmetry: • Suppose that p is a pole of H(z) with 0 < p < 1. Labeau Compiled November 5. Champagne & F. one would to use the minimum value of N and/or M for which the ﬁlter speciﬁcations can be met. a reduction of δ1 .2 ZV S Z Typical template summarizing the design speciﬁcations for a lowpass ﬁlter.e. The phase response in the passband may also be speciﬁed. Typically. i.1. 9. That Is The Question The choice between IIR and FIR is usually based on a consideration of the phase requirements. Theorem: A stable LTI system H(z) with real impulse response. 2004 (9. Ideally. we explain how the GLP requirement inﬂuences the choice between FIR and IIR. only FIR ﬁlter can be at the same time stable. • According to above theorem.
In other words. we only concentrate on ﬁlters with rational system functions. Basic design principle: • If GLP is essential =⇒ FIR • If not =⇒ IIR usually preferable (can meet speciﬁcations with lower complexity) 9.9. to any nontrivial pole p inside the unit circle corresponds a pole 1/p outside the unit circle. For H(z) in (9. + bM z−M = .7) for which practical computational realizations are available. . 2004 . only FIR ﬁlter can be at the same time stable. all its poles have to be inside the unit circle (U.C. A(z) 1 + a1 z−1 + . Champagne & F. . + aN z−N (9.2 Design of IIR ﬁlters Mathematical setting: We analyze in this section common techniques used to design IIR ﬁlters. Labeau Compiled November 5. as H(z) = B(z) b0 + b1 z−1 + . 9.2 Design of IIR ﬁlters 169 • Note that p∗ and 1/p∗ are also poles of H(z) under the assumption of a real system. causal and GLP. .7) to be the system function of a causal & stable LTI system. so that H(z) cannot have a causal impulse response.). c B. Conclusion: If H(z) is stable and GLP. . Of course.3 PZ locations of IIR realcoefﬁcient GLP systems: a pole always appears with its inverse (and complex conjugate). • The above symmetry also apply to the zeros of H(z) • The situation is illustrated in the PZ diagram below: Im(z) Re(z) unit circle Fig.
Design problem: The goal is to find filter parameters N, M, {ak} and {bk} such that H(ω) ≈ Hd(ω) to some desired degree of accuracy.

Transformation methods: There exists a huge literature on analog filter design. In particular, several effective approaches do exist for the design of analog IIR filters. The transformation methods try to take advantage of this literature in the following way:
(1) Map the DT specifications Hd(ω) into analog specifications Had(Ω) via a proper transformation.
(2) Design an analog filter, say Ha(Ω), that meets the specifications.
(3) Map Ha(Ω) back into H(ω) via a proper inverse transformation.

9.2.1 Review of analog filtering concepts

LTI system: For an analog LTI system, the input/output characteristic in the time domain takes the form of a convolution integral:

ya(t) = ∫_{-∞}^{∞} ha(u) xa(t-u) du   (9.8)

where ha(t) is the impulse response of the system. Note here that t ∈ R.

System function: The system function of an analog LTI system is defined as the Laplace transform of its impulse response:

Ha(s) = L{ha(t)} = ∫_{-∞}^{∞} ha(t) e^{-st} dt   (9.9)

where L denotes the Laplace operator. The complex variable s is often expressed in terms of its real and imaginary parts as s = σ + jΩ. The set of all s ∈ C where the integral (9.9) converges absolutely defines the region of convergence (ROC) of the Laplace transform. A complete specification of the system function Ha(s) must include a description of the ROC.

In the s-domain, the convolution integral (9.8) is equivalent to

Ya(s) = Ha(s) Xa(s)   (9.10)

where Ya(s) and Xa(s) are the Laplace transforms of ya(t) and xa(t), respectively.

Causality and stability: An analog LTI system is causal iff the ROC of its system function Ha(s) is a right-half plane, that is

ROC: ℜ(s) > σ0

An analog LTI system is stable iff the ROC of Ha(s) contains the jΩ axis:

jΩ ∈ ROC for all Ω ∈ R

The ROC of a stable and causal analog LTI system is illustrated in Fig. 9.4.
[Fig. 9.4: Illustration of a typical ROC for a causal and stable analog LTI system.]

Frequency response: The frequency response of an analog LTI system is defined as the Fourier transform of its impulse response, or equivalently

Ha(Ω) = Ha(s)|_{s=jΩ}   (9.11)

Note the abuse of notation here: we should formally write Ha(jΩ), but usually drop the j for simplicity. In the case of an analog system with a real impulse response, we have

|Ha(Ω)|^2 = Ha(s) Ha(-s)|_{s=jΩ}   (9.12)

Rational systems: As the name indicates, a rational system is characterized by a system function that is a rational function of s, typically:

Ha(s) = (β0 + β1 s + ··· + βM s^M) / (1 + α1 s + ··· + αN s^N)   (9.13)

The corresponding input-output relationship in the time domain is an ordinary differential equation of order N, that is:

ya(t) = -∑_{k=1}^{N} αk (d^k/dt^k) ya(t) + ∑_{k=0}^{M} βk (d^k/dt^k) xa(t)   (9.14)

9.2.2 Basic analog filter types

In this course, we will basically consider only three main types of analog filters in the design of discrete-time IIR filters via transformation methods. These three types have increasing "complexity", in the sense of being able to design a filter with just paper and pencil, but also increasing efficiency in meeting a prescribed set of specifications with a limited number of filter parameters.

Butterworth filters: Butterworth filters all have a common shape of squared magnitude response:

|Ha(Ω)|^2 = 1 / (1 + (Ω/Ωc)^{2N})   (9.15)
where Ωc is the cutoff frequency (3 dB) and N is the filter order. Note that |Ha(Ω)| smoothly decays from a maximum of 1 at Ω = 0 to 0 at Ω = ∞. These filters are simple to compute and can be easily manipulated without the help of a computer. The corresponding transfer function is given by:

Ha(s) = (Ωc)^N / [(s - s0)(s - s1) ··· (s - s_{N-1})]   (9.16)

The poles sk are uniformly spaced on a circle of radius Ωc in the s-plane, as given by:

sk = Ωc exp( j[π/2 + (k + 1/2) π/N] ),  k = 0, 1, ..., N-1   (9.17)

Example: Figure 9.5 illustrates the frequency response of several Butterworth filters of different orders (Ωc = 10 rad/s).

[Fig. 9.5: Magnitude response of several Butterworth filters of different orders (N = 1, 2, 3, 4, 10).]

Chebyshev filters: Chebyshev filters come in two flavours: Type I and Type II. The main difference with the Butterworth filters is that they are not monotonic in their magnitude response: Type I Chebyshev filters have ripple in their passband, while Type II Chebyshev filters have ripple in their stopband. The table below summarizes the features of the two types of Chebyshev filters, and also gives an expression for their squared magnitude response:

            Type I                                      Type II
Def:        |Hc(Ω)|^2 = 1/(1 + ε^2 T_N^2(Ω/Ωc))         |Hc(Ω)|^2 = 1/(1 + [ε^2 T_N^2(Ωc/Ω)]^{-1})
Passband:   Ripple from 1 to 1/(1 + ε^2)                Monotonic
Stopband:   Monotonic                                   Ripple, bounded by 1/(1 + 1/ε^2)
[Fig. 9.6: Illustration of the magnitude response of analog Chebyshev filters of different orders N (N = 2, 3, 4, 5, 10): (a) Type I; (b) Type II.]

In the above table, ε is a variable that determines the amount of ripple and Ωc is a cutoff frequency. The function TN(·) represents the Nth-order Chebyshev polynomial. The details of the derivation of these polynomials are beyond the scope of this text, but they can be obtained from any standard textbook. Chebyshev filters are more complicated to deal with than Butterworth filters, mainly because of the presence of the Chebyshev polynomials in their definition, and often they will be designed through an ad-hoc computer program. Figure 9.6 illustrates the magnitude response of these filters.

Elliptic filters: Elliptic filters are based on elliptic functions. Their exact expression is beyond the scope of this text, and they are usually generated by an appropriate computer program. They generally require a lower order than Chebyshev and Butterworth filters to meet the same specifications. Figure 9.7 illustrates the magnitude response of these filters; one can see that they ripple in both the passband and the stopband.

[Fig. 9.7: Magnitude response of lowpass elliptic filters of different orders (N = 2, 3, 4, 5, 10).]

9.2.3 Impulse invariance method

Principle: In this method of transformation, the impulse response of the discrete-time filter H(z) is obtained by sampling the impulse response of a properly designed analog filter Ha(s):

h[n] = ha(nTs)   (9.18)

where Ts is an appropriate sampling period. From sampling theory in Chapter 7, we know that

H(ω) = (1/Ts) ∑_k Ha(ω/Ts - 2πk/Ts)   (9.19)

If there is no significant aliasing, then

H(ω) = (1/Ts) Ha(ω/Ts),  |ω| < π   (9.20)

Thus, in the absence of aliasing, the frequency response of the DT filter is the same (up to a scale factor) as the frequency response of the CT filter. Clearly, this method works only if |Ha(Ω)| becomes very small for |Ω| > π/Ts, so that aliasing can be neglected. Thus, it can be used to design lowpass and bandpass filters, but not highpass filters.

[Fig. 9.8: Illustration of aliasing with the impulse invariance method.]

In the absence of additional specifications on Ha(Ω), the parameter Ts can be set to 1: it is a fictitious sampling period that does not correspond to any physical sampling.

Design steps:
(1) Given the desired specifications Hd(ω) for the DT filter, find the corresponding specifications for the analog filter, Had(Ω):

Had(Ω) = Ts Hd(ω)|_{ω=ΩTs},  |Ω| < π/Ts   (9.21)

Figure 9.9 illustrates the conversion of specifications from DT to CT for the case of a lowpass design.

[Fig. 9.9: Conversion of specifications from discrete-time to continuous-time for a design by impulse invariance.]

(2) Design an analog IIR filter Ha(Ω) that meets the desired analog specifications Had(Ω):
- Choose the filter type (Butterworth, elliptic, etc.)
- Select the filter parameters (filter order N, poles sk, etc.)

(3) Transform the analog filter Ha(Ω) into a discrete-time filter H(z) via time-domain sampling of the impulse response:

ha(t) = L^{-1}{Ha(Ω)}  →  h[n] = ha(nTs)  →  H(z) = Z{h[n]}   (9.22)

In practice, it is not necessary to find ha(t), as explained below.

Simplification of step (3): Assume Ha(s) has only first-order poles (e.g. Butterworth filters). By performing a partial fraction expansion of Ha(s), one gets:

Ha(s) = ∑_{k=1}^{N} Ak / (s - sk)   (9.23)

The inverse Laplace transform of Ha(s) is given by

ha(t) = ∑_{k=1}^{N} Ak e^{sk t} uc(t)   (9.24)

where uc(t) is the analog unit step function:

uc(t) = 1 for t ≥ 0, and 0 for t < 0   (9.25)
Now, if we sample ha(t) in (9.24) uniformly at times t = nTs, we obtain

h[n] = ha(nTs) = ∑_{k=1}^{N} Ak e^{sk Ts n} u[n] = ∑_{k=1}^{N} Ak (e^{sk Ts})^n u[n]   (9.26)

The Z-transform of h[n] is immediately obtained as:

H(z) = ∑_{k=1}^{N} Ak / (1 - pk z^{-1}),  where pk = e^{sk Ts}   (9.27)

This clearly shows that there is no need to go through the actual steps of inverse Laplace transform, sampling and Z-transform: one only needs the partial fraction expansion coefficients Ak and the poles sk from (9.23), and to plug them into (9.27).

Remarks: The order of the filter is preserved by impulse invariance, that is, if Ha(s) is of order N, then so is H(z). The correspondence between poles in continuous-time and discrete-time is given by:

sk is a pole of Ha(s)  ⇒  pk = e^{sk Ts} is a pole of H(z)

Stability and causality of the analog filter are also preserved by the transformation. Indeed, if Ha(s) is stable and causal, then the real part of all its poles sk will be negative:

sk = σk + jΩk,  σk < 0

The corresponding poles pk in discrete-time can be written as pk = e^{σk Ts} e^{jΩk Ts}. Therefore,

|pk| = e^{σk Ts} < 1

which ensures causality and stability of the discrete-time filter H(z).

Example 9.3: Apply the impulse invariance method to an appropriate analog Butterworth filter in order to design a lowpass digital filter H(z) that meets the specifications outlined in Figure 9.10.

[Fig. 9.10: Specifications for the lowpass design of Example 9.3: δ1 = 0.05, δ2 = 0.01, ω1 = 0.25π, ω2 = 0.4π.]

• Step 1: We set the sampling period to Ts = 1 (i.e. Ω = ω), so that the desired specifications for the analog filter are the same as stated in Figure 9.10, or equivalently:

1 - δ1 ≤ |Had(Ω)| ≤ 1  for 0 ≤ Ω ≤ Ω1 = 0.25π   (9.28)
|Had(Ω)| ≤ δ2  for Ω2 = 0.4π ≤ Ω ≤ ∞   (9.29)

• Step 2: We need to design an analog Butterworth filter Ha(Ω) that meets the above specifications. For the Butterworth family, we have

|Ha(Ω)|^2 = 1 / (1 + (Ω/Ωc)^{2N})   (9.30)

Thus, we need to determine the filter order N and the cutoff frequency Ωc so that the above inequalities are satisfied. This is explained below:

- We first try to find an analog Butterworth filter that meets the band-edge conditions exactly, that is |Ha(Ω1)| = 1 - δ1 and |Ha(Ω2)| = δ2:

1/(1 + (Ω1/Ωc)^{2N}) = (1 - δ1)^2  ⇒  (Ω1/Ωc)^{2N} = 1/(1 - δ1)^2 - 1 ≡ α   (9.31)

1/(1 + (Ω2/Ωc)^{2N}) = δ2^2  ⇒  (Ω2/Ωc)^{2N} = 1/δ2^2 - 1 ≡ β   (9.32)

Combining these two equations, we obtain:

(Ω1/Ω2)^{2N} = α/β  ⇒  2N ln(Ω1/Ω2) = ln(α/β)  ⇒  N = ln(α/β) / (2 ln(Ω1/Ω2)) = 12.16   (9.33)

Since N must be an integer, we take N = 13.

- The value of Ωc is then determined by requiring that the passband edge condition be met exactly:

|Ha(Ω1)| = 1 - δ1  ⇒  1/(1 + (Ω1/Ωc)^{26}) = (1 - δ1)^2  ⇒  Ωc = 0.856   (9.34)

With this choice, the specifications are met exactly at Ω1 and are exceeded at Ω2, which contributes to minimize aliasing effects.

- An analog Butterworth filter meeting the desired specifications is obtained as follows:

Ha(s) = (Ωc)^{13} / [(s - s0)(s - s1) ··· (s - s12)]   (9.35)

sk = Ωc e^{j[π/2 + (k + 1/2)π/13]},  k = 0, 1, ..., 12
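The numbers quoted in Step 2 can be reproduced with numpy/scipy. The sketch below recomputes N and Ωc from the specifications, builds the Butterworth poles of (9.17), and obtains the partial fraction coefficients Ak of (9.23) via scipy.signal.residue (the variable names are ours):

```python
import numpy as np
from scipy.signal import residue

# Specifications of Example 9.3 (Ts = 1, so Omega = omega)
d1, d2 = 0.05, 0.01
W1, W2 = 0.25 * np.pi, 0.4 * np.pi

# Band-edge conditions (9.31)-(9.33)
alpha = 1.0 / (1.0 - d1) ** 2 - 1.0
beta = 1.0 / d2 ** 2 - 1.0
N_exact = np.log(alpha / beta) / (2.0 * np.log(W1 / W2))
N = int(np.ceil(N_exact))            # N_exact ~ 12.16 -> N = 13

# Cutoff frequency from the passband edge condition (9.34)
Wc = W1 / alpha ** (1.0 / (2 * N))   # ~0.856 rad/s

# Butterworth poles (9.17) and partial fraction expansion (9.23)
k = np.arange(N)
s_k = Wc * np.exp(1j * (np.pi / 2 + (k + 0.5) * np.pi / N))
den = np.real(np.poly(s_k))          # poles occur in conjugate pairs
A_k, p_s, _ = residue([Wc ** N], den)

# Discrete-time poles p_k = e^{s_k Ts} all lie inside the unit circle
p_z = np.exp(p_s)                    # Ts = 1
print(N_exact, Wc, np.max(np.abs(p_z)))
```

Since all σk < 0, every discrete-time pole satisfies |pk| < 1, confirming the stability remark above.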
• Step 3: The desired digital filter is finally obtained by applying the impulse invariance method:

- Partial fraction expansion:

Ha(s) = A0/(s - s0) + ··· + A12/(s - s12)   (9.36)

- Direct mapping to the z-domain:

H(z) = A0/(1 - p0 z^{-1}) + ··· + A12/(1 - p12 z^{-1})   (9.37)

with pk = e^{sk},  k = 0, 1, ..., 12   (9.38)

Table 9.1 lists the values of the poles sk, the partial fraction expansion coefficients Ak and the discrete-time poles pk for this example. Figure 9.11 illustrates the magnitude response of the filter so designed. At this point, it is easy to manipulate the function H(z) to derive various filter realizations (e.g. cascade form, direct forms, etc.).

[Table 9.1: Values of the coefficients sk, Ak and pk used in Example 9.3.]

9.2.4 Bilinear transformation

Principle: Suppose we are given an analog IIR filter Ha(s). A corresponding digital filter H(z) can be obtained by applying the bilinear transformation (BT), defined as follows:

H(z) = Ha(s) at s = φ(z) = α (1 - z^{-1}) / (1 + z^{-1})   (9.39)

where α is a scaling parameter introduced for flexibility. In the absence of other requirements, it is often set to 1.
[Fig. 9.11: Magnitude response of the design of Example 9.3.]

Example 9.4: Consider the analog filter

Ha(s) = 1/(s + b),  b > 0   (9.40)

Up to a scaling factor, this is a Butterworth filter of order N = 1 with cutoff frequency Ωc = b. This filter has only one simple pole, at s = -b in the s-plane; it is stable and causal with a lowpass behavior. The corresponding PZ diagram in the s-plane and magnitude response versus Ω are illustrated in Figure 9.12 (left) and Figure 9.13 (top), respectively. Note the 3 dB point at Ω = b in this example.

Applying the BT with α = 1, we obtain

H(z) = 1 / [ (1 - z^{-1})/(1 + z^{-1}) + b ] = (1/(1+b)) · (1 + z^{-1}) / (1 - ((1-b)/(1+b)) z^{-1})   (9.41)

The resulting digital filter has one simple pole at ρ = (1-b)/(1+b), inside the U.C.; it is stable and causal with a lowpass behavior. The corresponding PZ diagram in the z-plane and magnitude response versus ω are illustrated in Figure 9.12 (right) and Figure 9.13 (bottom), respectively.

Properties of the bilinear transformation: The inverse mapping is given by:

s = φ(z) = α (1 - z^{-1})/(1 + z^{-1})  ⇐⇒  z = φ^{-1}(s) = (1 + s/α)/(1 - s/α)   (9.42)
[Fig. 9.12: PZ diagrams of the filters used in Example 9.4 (b = 0.5). Left: analog filter; right: discrete-time filter.]

[Fig. 9.13: Magnitude responses of the filters used in Example 9.4 (b = 0.5). Top: analog filter; bottom: discrete-time filter.]
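Example 9.4 can be checked with scipy.signal.bilinear, which implements the substitution s = 2·fs·(1 - z^{-1})/(1 + z^{-1}); choosing fs = 0.5 therefore corresponds to α = 1. A sketch (b = 0.5 as in the figures):

```python
import numpy as np
from scipy.signal import bilinear

b_val = 0.5                       # analog pole parameter, Ha(s) = 1/(s + b)
num, den = [1.0], [1.0, b_val]

# scipy uses s = 2*fs*(1 - z^{-1})/(1 + z^{-1}); fs = 0.5 gives alpha = 1
bz, az = bilinear(num, den, fs=0.5)

# Expected from (9.41): pole at rho = (1 - b)/(1 + b), zero at z = -1,
# gain 1/(1 + b)
rho = (1.0 - b_val) / (1.0 + b_val)
assert np.allclose(az, [1.0, -rho])
assert np.allclose(bz, [1.0 / (1 + b_val)] * 2)
print(bz, az)
```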
It can be verified from (9.42) that the BT maps the left-half part of the s-plane into the interior of the unit circle in the z-plane, that is:

s = σ + jΩ with σ < 0  ⇐⇒  |z| < 1   (9.43)

If Ha(s) has a pole at sk, then H(z) has a corresponding pole at

pk = φ^{-1}(sk)   (9.44)

Therefore, if Ha(s) is stable and causal, so is H(z). This is illustrated in Figure 9.14.

[Fig. 9.14: Inverse BT mapping from the s-plane to the z-plane.]

The BT uniquely maps the jΩ axis in the s-plane onto the unit circle in the z-plane (i.e. a 1-to-1 mapping):

s = jΩ  ⇐⇒  z = (1 + jΩ/α)/(1 - jΩ/α) = A/A*,  where A = 1 + jΩ/α   (9.45)

Clearly, |z| = |A/A*| = 1   (9.46)

The relationship between the physical frequency Ω and the normalized frequency ω can be further developed as follows:

s = jΩ  ⇐⇒  z = e^{jω},  where ω = ∠A - ∠A* = 2∠A   (9.47)

that is

ω = 2 tan^{-1}(Ω/α)   (9.48)

or equivalently

Ω = α tan(ω/2)   (9.49)

This is illustrated in Figure 9.15, from which it is clear that the mapping between Ω and ω is nonlinear. In particular, the semi-infinite Ω axis [0, ∞) is compressed into the finite ω interval [0, π]. This nonlinear compression, also called frequency warping, particularly affects the high frequencies. It is important to understand this effect and take it into account when designing a filter.
[Fig. 9.15: Illustration of the frequency mapping operated by the bilinear transformation.]

Design steps:
(1) Given the desired specifications Hd(ω) for the DT filter, find the corresponding specifications for the analog filter, Had(Ω):

Had(Ω) = Hd(ω)|_{ω = 2 tan^{-1}(Ω/α)},  i.e.  Ω = α tan(ω/2)   (9.50)

(2) Design an analog IIR filter Ha(Ω) that meets these specifications.

(3) Apply the BT to obtain the desired digital filter:

H(z) = Ha(s) at s = φ(z) = α (1 - z^{-1})/(1 + z^{-1})   (9.51)

Example 9.5: Apply the BT to an appropriate analog Butterworth filter in order to design a lowpass digital filter H(z) that meets the specifications described in Figure 9.10.

• Step 1: In this example, we set the parameter α to 1. The desired specifications for the analog filter are the same as in Example 9.3, except for the band-edge frequencies:

1 - δ1 ≤ |Had(Ω)| ≤ 1  for 0 ≤ Ω ≤ Ω1   (9.52)
|Had(Ω)| ≤ δ2  for Ω2 ≤ Ω ≤ ∞   (9.53)

where now

Ω1 = tan(ω1/2) ≈ 0.414214,  Ω2 = tan(ω2/2) ≈ 0.72654   (9.54)

• Step 2: We need to design an analog Butterworth filter Ha(Ω) that meets the above specifications.

- Proceeding exactly as we did in Example 9.3, we find that the required order of the Butterworth filter is N = 11 (or precisely, N ≥ 10.1756).

- Since aliasing is not an issue with the BT method, either one of the band-edge specifications may be used to determine the cutoff frequency Ωc. For example, using the stopband edge:

|Ha(Ω2)| = 1/√(1 + (Ω2/Ωc)^{22}) = δ2  ⇒  Ωc = 0.478019   (9.55)
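The Step 1 and Step 2 computations can be reproduced as follows (numpy; a check of the quoted numbers, with α = 1):

```python
import numpy as np

d1, d2 = 0.05, 0.01
w1, w2 = 0.25 * np.pi, 0.4 * np.pi

# Step 1: pre-warp the band edges, Omega = alpha*tan(omega/2) with alpha = 1
W1 = np.tan(w1 / 2)          # ~0.414214
W2 = np.tan(w2 / 2)          # ~0.72654

# Step 2: Butterworth order from the band-edge conditions, as in Example 9.3
a = 1.0 / (1.0 - d1) ** 2 - 1.0
b = 1.0 / d2 ** 2 - 1.0
N_exact = np.log(a / b) / (2.0 * np.log(W1 / W2))   # ~10.18
N = int(np.ceil(N_exact))                           # 11

# Cutoff from the stopband edge condition (9.55); no aliasing with the BT
Wc = W2 / b ** (1.0 / (2 * N))                      # ~0.478019
print(W1, W2, N, Wc)
```

Note how the pre-warped edges (9.54) compensate in advance for the frequency warping (9.48), so that the final digital filter has its band edges at exactly ω1 and ω2.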
- The desired analog Butterworth filter is specified as

Ha(s) = (Ωc)^{11} / [(s - s0)(s - s1) ··· (s - s10)]   (9.56)

sk = Ωc e^{j[π/2 + (k + 1/2)π/11]},  k = 0, 1, ..., 10   (9.57)

• Step 3: The desired digital filter is finally obtained by applying the bilinear transformation:

H(z) = Ha(s)|_{s = (1 - z^{-1})/(1 + z^{-1})} = (b0 + b1 z^{-1} + ··· + b11 z^{-11}) / (1 + a1 z^{-1} + ··· + a11 z^{-11})   (9.58)

The corresponding values of the coefficients bk and ak are shown in Table 9.2.

[Table 9.2: Coefficients bk and ak of the filter designed in Example 9.5.]

9.3 Design of FIR filters

Design problem: The general FIR system function is obtained by setting A(z) = 1 in (9.7), that is

H(z) = B(z) = ∑_{k=0}^{M} bk z^{-k}   (9.59)

The relationship between the coefficients bk and the filter impulse response is

h[n] = bn for 0 ≤ n ≤ M, and 0 otherwise   (9.60)
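Equation (9.60) simply says that the impulse response of an FIR filter equals its coefficient sequence, which is easy to confirm numerically (a sketch using scipy.signal.lfilter; the coefficient values are arbitrary):

```python
import numpy as np
from scipy.signal import lfilter

b = np.array([0.5, 1.0, -0.25, 0.125])     # arbitrary FIR coefficients, M = 3
delta = np.zeros(8)
delta[0] = 1.0                             # unit impulse

h = lfilter(b, [1.0], delta)               # impulse response of H(z) = B(z)
assert np.allclose(h[:4], b)               # h[n] = b_n for 0 <= n <= M
assert np.allclose(h[4:], 0.0)             # and 0 otherwise
print(h)
```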
In the context of FIR filter design, it is convenient to work out the analysis directly in terms of h[n]. Accordingly, we shall express H(z) as:

H(z) = h[0] + h[1] z^{-1} + ··· + h[M] z^{-M}   (9.61)

It is further assumed throughout that h[0] ≠ 0 and h[M] ≠ 0. The design problem then consists in finding the degree M and the filter coefficients {h[k]} in (9.61) such that the resulting frequency response approximates a desired response Hd(ω), that is

H(ω) ≈ Hd(ω)

In the design of FIR filters:
• stability is not an issue, since FIR filters are always stable;
• considerable emphasis is usually given to the issue of linear phase (one of the main reasons for selecting an FIR filter instead of an IIR filter).

In this section, we will first review a classification of FIR GLP systems, then explain the principles of design by windowing and design by min-max optimization.

9.3.1 Classification of GLP FIR filters

We recall the following important definition and result from Section 8.3:

Definition: A discrete-time LTI system is GLP iff

H(ω) = A(ω) e^{-j(αω-β)}   (9.62)

where A(ω), α and β are real valued. Note that here, A(ω) is real valued but not necessarily positive (it is an amplitude function rather than a magnitude).

Property: A real FIR filter with system function H(z) as in (9.61) is GLP if and only if

h[k] = ε h[M-k],  k = 0, 1, ..., M   (9.64)

where ε is either equal to +1 or -1.

Example 9.6: Consider the FIR filter with impulse response given by

h[n] = {1, 0, -1, 0, 1, 0, -1}

i.e. h[n] = -h[M-n], with M = 6 and ε = -1. Let us verify that the frequency response H(ω) effectively satisfies the condition (9.62):

H(e^{jω}) = 1 - e^{-j2ω} + e^{-j4ω} - e^{-j6ω}
          = e^{-j3ω} (e^{j3ω} - e^{jω} + e^{-jω} - e^{-j3ω})
          = 2j e^{-j3ω} (sin(3ω) - sin(ω))
          = A(ω) e^{-j(αω-β)}
where we define the real-valued quantities

A(ω) = 2 sin(ω) - 2 sin(3ω),  α = 3,  β = 3π/2

The frequency response is illustrated in Figure 9.16.

[Fig. 9.16: Example frequency response of a generalized linear-phase filter (magnitude, phase and group delay; α = 3, β = 3π/2).]

Classification of GLP FIR filters: Depending on the values of the parameters ε and M, we distinguish 4 types of FIR GLP filters:

           M even     M odd
ε = +1     Type I     Type II
ε = -1     Type III   Type IV

A good understanding of the basic frequency characteristics of these four filter Types is important for practical FIR filter design. Indeed, not all filter Types, as listed above, can be used say for the design of a lowpass or highpass filter. This issue is further explored below:

• Type I (ε = +1, M even):

H(ω) = e^{-jωM/2} { ∑_k αk cos(ωk) }   (9.65)

• Type II (ε = +1, M odd):

H(ω) = e^{-jωM/2} { ∑_k βk cos(ω(k - 1/2)) }   (9.66)
- H(ω) = 0 at ω = π
- Cannot be used to design a highpass filter

• Type III (ε = -1, M even):

H(ω) = j e^{-jωM/2} { ∑_k γk sin(ωk) }   (9.67)

- H(ω) = 0 at ω = 0 and at ω = π
- Cannot be used to design either a lowpass or a highpass filter

• Type IV (ε = -1, M odd):

H(ω) = j e^{-jωM/2} { ∑_k δk sin(ω(k - 1/2)) }   (9.68)

- H(ω) = 0 at ω = 0
- Cannot be used to design a lowpass filter

9.3.2 Design of FIR filters via windowing

Basic principle: Desired frequency responses Hd(ω) usually correspond to non-causal and non-stable filters, with infinite impulse responses hd[n] extending from -∞ to ∞. Since these filters have finite energy, it can be shown that

hd[n] → 0 as n → ±∞   (9.69)

Based on these considerations, it is tempting to approximate hd[n] by an FIR response g[n] in the following way:

g[n] = hd[n] for |n| ≤ K, and 0 otherwise   (9.70)

The FIR filter g[n] can then be made causal by a simple shift:

h[n] = g[n - K]

Example 9.7: Consider an ideal lowpass filter:

Hd(ω) = 1 for |ω| ≤ ωc = π/2, and 0 for ωc < |ω| ≤ π   (9.71)

The corresponding impulse response is given by

hd[n] = (ωc/π) sinc(nωc/π) = (1/2) sinc(n/2)

These desired characteristics are outlined in Figure 9.17. Assuming a value of K = 5, the truncated FIR filter g[n] is obtained as

g[n] = hd[n] for |n| ≤ 5, and 0 otherwise
Finally, a causal FIR filter with M = 2K = 10 is obtained from g[n] via a simple shift:

h[n] = g[n - 5]

These two impulse responses are illustrated in Figure 9.18. The frequency response G(ω) of the resulting filter is illustrated in Figure 9.19.

[Fig. 9.17: Desired (a) frequency response and (b) impulse response for Example 9.7.]

[Fig. 9.18: (a) Windowed impulse response g[n] and (b) shifted windowed impulse response h[n] for Example 9.7.]

Observation: The above design steps (9.70)-(9.71) can be combined into a single equation:

h[n] = wR[n] hd[n - M/2]   (9.72)

where

wR[n] = 1 for 0 ≤ n ≤ M, and 0 otherwise   (9.73)

According to (9.72), the desired impulse response hd[n] is first shifted by K = M/2 and then multiplied by wR[n].

Remarks on the window: The function wR[n] is called a rectangular window. More generally, in the context of FIR filter design, we define a window as a DT function w[n] such that

w[n] = w[M - n] ≥ 0 for 0 ≤ n ≤ M, and w[n] = 0 otherwise   (9.74)

As we explain later, several windows have been developed and analysed for the purpose of FIR filter design. These windows usually lead to better properties of the designed filter than simple rectangular truncation.
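The shift-and-window procedure (9.72) of Example 9.7 can be sketched numerically as follows (numpy; ωc = π/2 and K = 5 as above):

```python
import numpy as np

wc, K = np.pi / 2, 5
M = 2 * K

# Shifted ideal lowpass impulse response h_d[n - M/2], n = 0..M (eq. 9.72);
# multiplication by the rectangular window w_R[n] is implicit in the range
n = np.arange(M + 1)
h = (wc / np.pi) * np.sinc((n - M / 2) * wc / np.pi)

# h is symmetric about n = M/2 (Type I GLP, eps = +1)
assert np.allclose(h, h[::-1])

# Its DTFT roughly approximates Hd: near 1 at omega = 0, near 0 at omega = pi
w_grid = np.linspace(0, np.pi, 512)
H = np.abs(np.exp(-1j * np.outer(w_grid, n)) @ h)
print(H[0], H[-1])
assert abs(H[0] - 1) < 0.1 and H[-1] < 0.1
```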
Remarks on the time shift: Note that the same magnitude response |H(ω)| would be obtained if the shift operation in (9.72) were omitted. In practice, the shift operation is used to center hd[n] within the window interval 0 ≤ n ≤ M, so that symmetries originally present in hd[n] with respect to n = 0 translate into corresponding symmetries of h[n] with respect to n = K = M/2 after windowing. In this way, a desired response Hd(ω) with a GLP property, that is

hd[n] = ε hd[-n],  n ∈ Z   (9.75)

is mapped into an FIR filter H(ω) with GLP, i.e.

h[n] = w[n] hd[n - K] = ε h[M - n]   (9.76)

as required in most applications of FIR filtering.

The above approach leads to an FIR filter with M = 2K even. To accommodate the case M odd, or equivalently K = M/2 non-integer, the following definition of a non-integer shift is used:

hd[n - K] = (1/2π) ∫_{-π}^{π} {Hd(ω) e^{-jωK}} e^{jωn} dω   (9.77)

Design steps: To approximate a desired frequency response Hd(ω):

• Step (1): Choose a window w[n] and related parameters (including M).

• Step (2): Compute the shifted version of the desired impulse response:

hd[n - K] = (1/2π) ∫_{-π}^{π} {Hd(ω) e^{-jωK}} e^{jωn} dω   (9.78)

where K = M/2.
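For the ideal lowpass filter, the non-integer shift (9.78) has the closed form hd[n - K] = (ωc/π) sinc(ωc(n - K)/π), which can be checked against a numerical evaluation of the integral. A sketch (numpy; ωc = π/2 and M = 7, i.e. K = 3.5, are illustration choices):

```python
import numpy as np

wc, M = np.pi / 2, 7
K = M / 2.0                      # non-integer shift, K = 3.5
n = np.arange(M + 1)

# Closed form: h_d[n - K] = (wc/pi) * sinc(wc*(n - K)/pi)
h_closed = (wc / np.pi) * np.sinc(wc * (n - K) / np.pi)

# Numerical evaluation of (9.78); Hd(omega) = 1 only on [-wc, wc]
w = np.linspace(-wc, wc, 20001)
vals = np.exp(1j * np.outer(n, w) - 1j * w * K)     # integrand per n
dw = w[1] - w[0]
trap = vals.sum(axis=1) - 0.5 * (vals[:, 0] + vals[:, -1])
h_num = np.real(trap * dw) / (2 * np.pi)            # trapezoid rule

assert np.allclose(h_num, h_closed, atol=1e-6)
print(h_closed)
```

The resulting h[n] = w[n] hd[n - K] is symmetric about the non-integer point n = 3.5, as required for a Type II GLP filter.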
• Step (3): Apply the selected window w[n]:

h[n] = w[n] hd[n - K]   (9.79)

Remarks: Several standard window functions exist that can be used in this method (e.g. Bartlett, Hanning, Hamming, etc.). For many of these windows, empirical formulas have been developed that can be used as guidelines in the choice of a suitable M. To select the proper window, it is important to understand the effects of, and trade-offs involved in, the windowing operation (9.79).

Spectral effects of windowing: Consider the window design formula (9.79), with the shift parameter K set to zero in order to simplify the discussion:

h[n] = w[n] hd[n]   (9.80)

Taking the DTFT on both sides, we obtain:

H(ω) = (1/2π) ∫_{-π}^{π} Hd(θ) W(ω - θ) dθ   (9.81)

where

W(ω) = ∑_{n=0}^{M} w[n] e^{-jωn}   (9.82)

is the DTFT of the window function w[n]. Thus, application of the window function w[n] in the time domain is equivalent to periodic convolution with W(ω) in the frequency domain.

For an infinitely long rectangular window, i.e. w[n] = 1 for all n ∈ Z, we have W(ω) = 2π ∑_k δc(ω - 2πk), with the result that H(ω) = Hd(ω). In this limiting case, windowing does not introduce any distortion in the desired frequency response. For a practical finite-length window, the DTFT W(ω) is spread around ω = 0 and the convolution (9.81) leads to smearing of the desired response Hd(ω).

Spectral characteristics of the rectangular window: For the rectangular window wR[n] = 1, 0 ≤ n ≤ M, we find:

WR(ω) = ∑_{n=0}^{M} e^{-jωn} = e^{-jωM/2} · sin(ω(M+1)/2) / sin(ω/2)   (9.83)

The corresponding magnitude spectrum is illustrated below for M = 8. WR(ω) is characterized by a main lobe centered at ω = 0, and secondary lobes of lower amplitudes, also called sidelobes. For wR[n], we have:
• main-lobe width: ∆ωm = 4π/(M+1) (first zero at ω = 2π/(M+1))
• peak sidelobe level: α = -13 dB (independent of M)

Typical LP design with window method: Figure 9.20 illustrates the typical look of the magnitude response of a lowpass FIR filter designed by windowing. Characteristic features include:

• Ripples in |H(ω)| due to the window sidelobes in the convolution (Gibbs' phenomenon).
• Width of transition band: ∆ω = ωs - ωp < ∆ωm. Note that ∆ω ≠ 0, due to the main-lobe width of the window.
• |H(ω)| ≠ 0 in the stopband ⇒ leakage (also an effect of the sidelobes).

The quantity δ, which denotes the peak approximation error in [0, ωp] ∪ [ωs, π], strongly depends on the peak sidelobe level α of the window. For a rectangular window:

• δ ≈ -21 dB (independent of M)
• ∆ω ≈ 2π/M

To reduce δ, one must use another type of window (Kaiser, Hanning, Hamming, etc.). These other windows have wider main lobes (resulting in wider transition bands) but in general lower sidelobes (resulting in lower ripple).
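Both the -13 dB sidelobe level of WR(ω) in (9.83) and the resulting ≈ -21 dB stopband ripple of a rectangular-window design can be observed numerically. A sketch (numpy; the values of M and the band edges are arbitrary illustration choices):

```python
import numpy as np

# --- Peak sidelobe level of the rectangular window (M = 8) ---
M = 8
w = np.linspace(1e-6, np.pi, 100000)
WR = np.abs(np.sin(w * (M + 1) / 2) / np.sin(w / 2))     # |WR(omega)|, (9.83)

# look beyond the first zero at 2*pi/(M+1); WR(0) = M + 1
side = WR[w > 2 * np.pi / (M + 1)]
peak_sidelobe_dB = 20 * np.log10(side.max() / (M + 1))
print(peak_sidelobe_dB)                                  # about -13 dB

# --- Stopband ripple of a rectangular-window lowpass design ---
M2, wc = 50, np.pi / 2
n = np.arange(M2 + 1)
h = (wc / np.pi) * np.sinc((n - M2 / 2) * wc / np.pi)

wg = np.linspace(wc + 2 * np.pi / M2, np.pi, 4000)       # stopband grid
H = np.abs(np.exp(-1j * np.outer(wg, n)) @ h)
delta = H.max()
print(20 * np.log10(delta))                              # about -21 dB
```

Increasing M2 narrows the transition band but leaves the ≈ -21 dB ripple essentially unchanged, which is the Gibbs phenomenon referred to above.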
[Fig. 9.20: Typical magnitude response of an FIR filter designed by windowing.]

9.3.3 Overview of some standard windows

Windowing trade-offs: Ideally, one would like to use a window function w[n] that has finite duration in the time domain and such that its DTFT W(ω) is perfectly concentrated at ω = 0. This way, the application of w[n] to a signal x[n] in the time domain would not introduce spectral smearing, as predicted by (9.81). Unfortunately, such a window function does not exist: it is a fundamental property of the DTFT that a signal with finite duration has a spectrum that extends from -∞ to +∞ in the ω-domain.

For a fixed value of the window length M, it is however possible to vary the window coefficients w[n] so as to trade off main-lobe width for sidelobe level attenuation. In particular, by tapering off the values of the window coefficients near its extremities, it is possible to reduce the sidelobe level α at the expense of increasing the width of the main lobe. This is illustrated in Figure 9.21. We refer to this phenomenon as the fundamental windowing trade-off.

[Fig. 9.21: Illustration of the fundamental windowing trade-off (rectangular vs. triangular window).]

Bartlett or triangular window:

w[n] = (1 - |2n/M - 1|) wR[n]   (9.84)

Hanning window:

w[n] = (1/2)(1 - cos(2πn/M)) wR[n]   (9.85)

Hamming window:

w[n] = (0.54 - 0.46 cos(2πn/M)) wR[n]   (9.86)

Blackman window:

w[n] = (0.42 - 0.5 cos(2πn/M) + 0.08 cos(4πn/M)) wR[n]   (9.87)

Kaiser window family:

w[n] = (1/I0(β)) I0( β √(1 - (2n/M - 1)^2) ) wR[n]   (9.88)

where I0(·) denotes the modified Bessel function of the first kind and β ≥ 0 is an adjustable design parameter. By varying β, it is possible to trade off sidelobe level α for main-lobe width ∆ωm:

• β = 0 ⇒ rectangular window;
• β > 0 ⇒ α ↓ and ∆ωm ↑.

Note that the Hanning, Hamming and Blackman windows share the generic form

w[n] = (A + B cos(2πn/M) + C cos(4πn/M)) wR[n]   (9.89)

where the coefficients A, B and C determine the window type.

The so-called Kaiser design formulae enable one to find the necessary values of β and M to meet specific lowpass filter requirements. Let ∆ω denote the transition band of the filter, i.e. ∆ω = ωs - ωp, and let A denote the stopband attenuation in positive dB:

A = -20 log10(δ)   (9.90)
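The definitions (9.84)-(9.87) coincide with the standard numpy window functions of length M + 1, which provides a quick sanity check (a sketch; M = 16 is an arbitrary choice):

```python
import numpy as np

M = 16
n = np.arange(M + 1)

bartlett = 1 - np.abs(2 * n / M - 1)                          # (9.84)
hanning = 0.5 * (1 - np.cos(2 * np.pi * n / M))               # (9.85)
hamming = 0.54 - 0.46 * np.cos(2 * np.pi * n / M)             # (9.86)
blackman = (0.42 - 0.5 * np.cos(2 * np.pi * n / M)
            + 0.08 * np.cos(4 * np.pi * n / M))               # (9.87)

assert np.allclose(bartlett, np.bartlett(M + 1))
assert np.allclose(hanning, np.hanning(M + 1))
assert np.allclose(hamming, np.hamming(M + 1))
assert np.allclose(blackman, np.blackman(M + 1))
print("all four window formulas match numpy")
```

All four are symmetric, w[n] = w[M - n], so they satisfy the window definition (9.74) and preserve the GLP property of the designed filter.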
The following formulae² may be used to determine β and M:

M = (A − 8) / (2.285 ∆ω)    (9.91)

β = 0.1102 (A − 8.7)                            if A > 50
β = 0.5842 (A − 21)^0.4 + 0.07886 (A − 21)      if 21 ≤ A ≤ 50
β = 0.0                                         if A < 21    (9.92)

² J. F. Kaiser, "Nonrecursive digital filter design using the I0-sinh window function," IEEE Int. Symp. on Circuits and Systems, April 1974, pp. 20-23.

Fig. 9.22 The rectangular, Bartlett, Hanning, Hamming and Blackman windows (time index n) and their magnitude spectra (dB).
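The Kaiser design formulae (9.91)–(9.92) are easy to script. A sketch follows; the function name is ours, and rounding the order M up to an integer is our choice:

```python
import math

def kaiser_params(A, delta_omega):
    """Kaiser design formulae: filter order M and shape parameter beta
    from stopband attenuation A (positive dB) and transition width
    delta_omega (radians)."""
    M = math.ceil((A - 8) / (2.285 * delta_omega))
    if A > 50:
        beta = 0.1102 * (A - 8.7)
    elif A >= 21:
        beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
    else:
        beta = 0.0                 # rectangular window suffices
    return M, beta
```

For instance, a 60 dB attenuation with transition width 0.2π calls for a fairly long window and β close to 5.7, consistent with the heavy tapering visible in Figure 9.23 for large β.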
Fig. 9.23 Kaiser windows for β = 0, 3 and 6 (time index n) and their magnitude spectra (dB).

9.3.4 Parks-McClellan method

Introduction: Consider a Type-I GLP FIR filter (i.e., M even and ε = +1):

H(ω) = e^{−jωM/2} A(ω)    (9.93)

A(ω) = ∑_{k=0}^{L} αk cos(kω)    (9.94)

where L = M/2, α0 = h[L], and αk = 2h[L − k] for k = 1, ..., L. Let Hd(ω) be the desired frequency response (e.g., an ideal lowpass filter), specified in terms of tolerances in frequency bands of interest. The Parks-McClellan (PM) method of FIR filter design attempts to find the best set of coefficients αk, in the sense of minimizing the maximum of the weighted approximation error

E(ω) = W(ω) [Hd(ω) − A(ω)]    (9.95)

over the frequency bands of interest (e.g., passband and stopband). Here, W(ω) ≥ 0 is a frequency weighting introduced to penalize differently the errors made in different bands.

Specifying the desired response: The desired frequency response Hd(ω) must be accompanied by a tolerance scheme, or template, which specifies:

• a set of disjoint frequency bands Bi = [ωi1, ωi2];    (9.96)
• for each band Bi, a corresponding tolerance δi, so that
  |Hd(ω) − A(ω)| ≤ δi,  ω ∈ Bi    (9.97)

The frequency intervals between the bands Bi are called transition bands. The behavior of A(ω) in the transition bands cannot be specified.

Example 9.8: For example, in the case of a lowpass design:
• Passband: B1 = [0, ωp] with tolerance δ1, i.e.
  |1 − A(ω)| ≤ δ1,  0 ≤ ω ≤ ωp    (9.98)
• Stopband: B2 = [ωs, π] with tolerance δ2, i.e.
  |A(ω)| ≤ δ2,  ωs ≤ ω ≤ π    (9.99)
• Transition band: [ωp, ωs].

The weighting mechanism: In the Parks-McClellan method, a weighting function W(ω) ≥ 0 is applied to the approximation error Hd(ω) − A(ω) prior to its optimization, resulting in the weighted error

E(ω) = W(ω) [Hd(ω) − A(ω)],  ω ∈ B1 ∪ B2 ∪ ...

The purpose of W(ω) is to scale the error Hd(ω) − A(ω) so that an error in a band Bi with a small tolerance δi is penalized more than an error in a band Bj with a larger tolerance δj > δi. This may be achieved by constructing W(ω) in the following way:

W(ω) = 1/δi,  ω ∈ Bi    (9.100)

so that the cost W(ω) is larger in those bands Bi where δi is smaller. In this way, the non-uniform tolerance specifications δi over the individual bands Bi translate into the single tolerance specification

|E(ω)| ≤ 1,  ω ∈ Bi    (9.101)

over all the bands of interest.
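The Type-I structure in (9.93)–(9.94) can be verified numerically: A(ω), built from the αk, reproduces the DTFT of h up to the linear-phase factor e^{−jωM/2}. In this sketch the symmetric impulse response h is an arbitrary example of ours (M = 4), not a filter from the notes:

```python
import cmath, math

# Hypothetical Type-I GLP impulse response: M = 4 (even),
# h[n] symmetric about n = L = M/2 = 2.
h = [1.0, 2.0, 3.0, 2.0, 1.0]
M = len(h) - 1
L = M // 2
alpha = [h[L]] + [2 * h[L - k] for k in range(1, L + 1)]   # coefficients of (9.94)

def A(w):
    """Real amplitude function A(w) = sum_k alpha_k cos(k*w)."""
    return sum(alpha[k] * math.cos(k * w) for k in range(L + 1))

def H(w):
    """Direct DTFT of h; equals exp(-j*w*M/2) * A(w) by the symmetry of h."""
    return sum(h[n] * cmath.exp(-1j * w * n) for n in range(M + 1))
```

Because A(ω) is real, optimizing the αk (as the PM method does) controls the magnitude response directly while the phase stays exactly linear.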
Properties of the minmax solution: In the Parks-McClellan method, one seeks an optimal filter H(ω) that minimizes the maximum approximation error over the frequency bands of interest. Specifically, we seek

min over all αk of  max_{ω∈B} |E(ω)|    (9.102)

where

E(ω) = W(ω)[Hd(ω) − A(ω)]    (9.103)
A(ω) = ∑_{k=0}^{L} αk cos(kω)    (9.104)
B = B1 ∪ B2 ∪ ...    (9.105)

and we define

Emax = max_{ω∈B} |E(ω)|    (9.106)

The optimal solution Ao(ω) satisfies the following properties:

• Alternation theorem: Ao(ω) is the unique solution to the above minmax problem if and only if it has at least L + 2 alternations over B. That is, there must exist at least L + 2 frequency points ωi ∈ B such that ω1 < ω2 < ··· < ωL+2 and

  E(ωi) = −E(ωi+1) = ±Emax    (9.107)

• All interior points of B where A(ω) has zero slope correspond to alternations.

This is illustrated in Figure 9.24 for a lowpass design.

Fig. 9.24 Illustration of the characteristics of a lowpass filter designed using the Parks-McClellan method (weighted and unweighted errors; alternations).
Remarks: Because of the alternations, these filters are also called equiripple. An equiripple behavior of E(ω) over the band B with amplitude Emax translates into an equiripple behavior of Hd(ω) − A(ω) with amplitude δi Emax over each band Bi. Observe that the initial specifications in (9.96) will be satisfied if the final error level Emax ≤ 1. Due to the use of the weighting function W(ω), the method only ensures that the relative sizes of the desired tolerances are attained (i.e., the ratio δi/δj).

One practical problem with the PM method is that the required value of M to achieve this is not known a priori. In the case of a lowpass design, the following empirical formula can be used to obtain an initial guess for M:

M ≈ (−10 log10(δ1 δ2) − 13) / (2.324 (ωs − ωp))    (9.108)

Although we have presented the method for Type-I GLP FIR filters, it is also applicable, with appropriate modifications, to Types II, III and IV. Generalizations to non-linear-phase, complex FIR filters also exist.

Use of computer programs: Computer programs are available for carrying out the minmax optimization numerically. They are based on the so-called Remez exchange algorithm. In the Matlab signal processing toolbox, the function remez can be used to design minimax GLP FIR filters of Types I, II, III and IV. The function cremez can be used to design complex-valued FIR filters with arbitrary phase.

Example 9.9: Suppose we want to design a lowpass filter with

• Passband: |1 − H(ω)| ≤ δ1 = 0.05, 0 ≤ ω ≤ ωp = 0.4π
• Stopband: |H(ω)| ≤ δ2 = 0.01, 0.6π = ωs ≤ ω ≤ π

• Initial guess for M from (9.108):
  M ≈ (−10 log10(0.05 × 0.01) − 13) / (2.324 (0.6π − 0.4π)) ≈ 10    (9.109)
• Set M = 11 and use the commands:
  F  = [0, 0.4, 0.6, 1.0]
  Hd = [1, 1, 0, 0]
  W  = [1/0.05, 1/0.01]
  B  = remez(M, F, Hd, W)    (9.110)
  The output vector B will contain the desired filter coefficients:
  B = [h[0], ..., h[M]]    (9.111)
  The corresponding impulse response and magnitude response (M = 11) are shown in Figure 9.25.
• The filter so designed has tolerances δ̂1 and δ̂2 that are different from δ1 and δ2. However:
  δ̂1 / δ̂2 = δ1 / δ2    (9.112)
Fig. 9.25 Illustration of the trial design in Example 9.9 (M = 11): impulse response and magnitude response (dB).

• By trial and error, we find that the required value of M necessary to meet the spec is M = 15. The corresponding impulse response and magnitude response are shown in Figure 9.26.
Fig. 9.26 Illustration of the final design in Example 9.9 (M = 15): impulse response and magnitude response (dB).
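The empirical order estimate (9.108) used at the start of Example 9.9 is easy to script. The sketch below is ours (the function name is not from the notes); it only supplies the initial guess, which trial and error must then refine, exactly as the example illustrates:

```python
import math

def pm_order_estimate(delta1, delta2, omega_p, omega_s):
    """Empirical initial guess for the order M of an equiripple lowpass
    FIR filter with passband/stopband ripples delta1, delta2 and band
    edges omega_p < omega_s (in radians)."""
    return (-10 * math.log10(delta1 * delta2) - 13) / (2.324 * (omega_s - omega_p))
```

Two qualitative properties follow directly from the formula: tightening either ripple increases the estimated order only logarithmically, while halving the transition width ωs − ωp doubles it.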
Chapter 10

Quantization effects

Introduction

Practical DSP systems use finite-precision (FP) number representations and arithmetic. In practice, the number of bits available for the representation of numbers is finite (e.g., 16, 32, etc.). This means that only certain numbers x ∈ R can be represented exactly. The implementation in FP of a given LTI system with rational system function

H(z) = ∑_{k=0}^{M} bk z^{−k} / (1 − ∑_{k=1}^{N} ak z^{−k})

leads to deviations in its theoretically predicted performance:

• Due to quantization of the system coefficients ak and bk:
  H(ω) ⇒ Ĥ(ω) ≠ H(ω)    (10.1)
• Due to roundoff errors in finite-precision arithmetic, there is quantization noise at the system output.
• Other roundoff effects, such as limit cycles.

Different structures or realizations (e.g., direct form, cascade, etc.) will be affected differently by these effects.

10.1 Binary number representation and arithmetic

In a binary number representation system, a real number x ∈ R is represented by a sequence of binary digits (i.e., bits) bi ∈ {0, 1}:

x ⇐⇒ ... b2 b1 b0 b−1 ...

Several approaches are available for the representation of real numbers with finite sequences of bits. The choice of a specific type of representation in a given application involves the consideration of several factors, including desired accuracy, system costs, etc., some of which are discussed below.
10.1.1 Binary representations

Signed fixed-point representations: In this type of representation, a real number x in the range [−2^K, 2^K] is represented as

x = bK bK−1 ··· b0 . b−1 ··· b−L,  bi ∈ {0, 1}    (10.2)

where bK is called the sign bit, or sometimes the most significant bit (MSB), and b−L is called the least significant bit (LSB). The binary dot "." is used to separate the fractional part from the integer part: the K bits bK−1 ··· b0 represent the integer part of x and the L bits b−1 ··· b−L represent the fractional part of x. It is convenient to define B = K + L, so that the total number of bits is B + 1, where the 1 reflects the presence of a sign bit. Below, we discuss two special types of signed fixed-point representations: the sign and magnitude (SM) and the 2's complement (2C) representations.

SM representation: The simplest type of fixed-point representation is the SM representation. Its general format is:

x = (bK ··· b0 . b−1 ··· b−L)SM = (−1)^{bK} × ∑_{i=−L}^{K−1} bi 2^i    (10.3)

2's complement representation: The most common type of signed fixed-point representation is the 2's complement (2C) representation. The general format is

x = (bK ··· b0 . b−1 ··· b−L)2C = −bK 2^K + ∑_{i=−L}^{K−1} bi 2^i    (10.4)

Example 10.1: Using (10.3) and (10.4), we find that

(0.11)SM = (0.11)2C = 1 × 2^{−1} + 1 × 2^{−2} = 3/4

while

(1.11)SM = (−1) × 3/4 = −3/4
(1.11)2C = (−1) + 3/4 = −1/4

Observation: For fixed-point representations, the distance between two consecutive numbers, also called the resolution or step size, is a constant equal to

∆ = 2^{−L}

Thus, the available dynamic range [−2^K, 2^K] is quantized into 2^{B+1} uniformly spaced representation levels. In scientific applications characterized by a large dynamic range, the use of such a uniform number representation grid is not the most efficient way of using the available B + 1 bits.
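The two mappings (10.3) and (10.4) can be exercised with a small decoder. This is an illustrative sketch of ours (the notes only define the mappings); it handles the fractional case K = 0 used in Example 10.1:

```python
def decode_fixed(bits, scheme):
    """Decode a fractional fixed-point string such as '1.11' (sign bit,
    binary point, fraction bits) under sign-magnitude ('SM') or two's
    complement ('2C'), per mappings (10.3) and (10.4) with K = 0."""
    sign, frac = bits.split(".")
    mag = sum(int(b) * 2.0 ** -(i + 1) for i, b in enumerate(frac))
    if scheme == "SM":
        return (-1) ** int(sign) * mag
    if scheme == "2C":
        return -int(sign) + mag
    raise ValueError("unknown scheme: " + scheme)
```

Running it on the bit patterns of Example 10.1 reproduces the values in the text: the two schemes agree for non-negative numbers and differ for negative ones.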
Floating-point representations: In this type of representation, a number x ∈ R is represented as

x = M × 2^E    (10.5)

where M is a signed mantissa and E is an exponent. Several formats exist for the binary encoding of M and E.

Example 10.2: The IEEE 754 single-precision floating-point format uses 32 bits in total: 1 is reserved for the sign, 23 for the mantissa, and 8 for the exponent (see Figure 10.1). The corresponding binary-to-decimal mapping is

x = (−1)^S × (0.1M) × 2^{E−126}

Fig. 10.1 Floating-point representation as per the IEEE 754 format: S (1 bit), E (8 bits), M (23 bits).

Observations: Floating-point representations offer several advantages over fixed-point:

• larger dynamic range;
• variable resolution (depends on the value of E).

However, floating-point hardware is usually more expensive, so that fixed-point is often used when system cost is a primary factor. In the sequel, we focus on fixed-point representations with a small number of bits (say 16 or less), where quantization effects are the most significant. For simplicity, the emphasis is given to fractional representations, for which K = 0 and L = B; generalizations to mixed (i.e., integer/fractional) representations are immediate.

10.1.2 Quantization errors in fixed-point arithmetic

Sources of errors: The two basic arithmetic operations in any DSP system are multiplication and addition, as exemplified by y[n] = 0.5(x[n] + x[n − 1]). Because only a finite number of bits is available to represent the result of these operations, internal errors will occur. Multiplication and addition in fixed-point lead to very different types of errors.
Multiplication: Consider two numbers, say a and b, each expressed in fixed-point fractional representation with (B + 1)-bit words:

a = α0 . α−1 ··· α−B
b = β0 . β−1 ··· β−B

Multiplication of these two (B + 1)-bit words in general leads to a (B′ + 1)-bit word, where B′ = 2B. This is illustrated below:

0.101 × 0.001 = 0.000101

In practice, only B + 1 bits are available to store the result, generally leading to a so-called roundoff error. There are two basic ways in which a number x with B′ + 1 bits can be represented with B + 1 bits, where B < B′:

• Truncation: the unwanted bits are simply dropped:
  x = (b0 . b−1 ··· b−B′) → x̂ = Qtr[x] = (b0 . b−1 ··· b−B)    (10.6)
• Rounding: the available number x̂ that is closest to x is used for its representation:
  x = (b0 . b−1 ··· b−B′) → x̂ = Qrn[x] = (b̂0 . b̂−1 ··· b̂−B)    (10.7)
  so that |x̂ − x| is minimized. The bits b̂i (i = 0, ..., B) may be different from the original bits bi.

Example 10.3: Consider the representation of the 5-bit number x = (0.0011) = 3/16 (i.e., B′ = 4) using only a 3-bit word (i.e., B = 2). For truncation, we have

x → x̂ = (0.00) = 0

For rounding, we have

x → x̂ = (0.01) = 1/4

Quantization error: Both rounding and truncation lead to a so-called quantization error, defined as

e = Q[x] − x    (10.8)

The value of e depends on the type of representation being used (e.g., SM, 2C), as well as on whether rounding or truncation is applied. For the 2C representation (most often used), it can be generally verified that for truncation

−∆ < e ≤ 0, where ∆ = 2^{−B}

while for rounding

−∆/2 < e ≤ ∆/2
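The two quantizers Qtr and Qrn can be sketched in a few lines. This is an illustration of ours, not code from the notes; in particular, breaking ties by rounding upward is our choice, since the notes do not specify a tie rule:

```python
import math

def q_truncate(x, B):
    """Truncate x (a fraction in [-1, 1)) to B fractional bits: in 2C this
    simply drops the extra bits, so the error lies in (-2^-B, 0]."""
    return math.floor(x * 2 ** B) / 2 ** B

def q_round(x, B):
    """Round x to the nearest B-fractional-bit level (ties rounded up),
    so the error lies in (-2^-B / 2, 2^-B / 2]."""
    return math.floor(x * 2 ** B + 0.5) / 2 ** B
```

Applied to Example 10.3 (x = 3/16, B = 2), truncation returns 0 while rounding returns 1/4, matching the text.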
Example 10.4: Consider the truncation of (1.0111)2C to 2 + 1 bits (i.e., B = 2):

x = (1.0111)2C = −9/16 → x̂ = Qtr[x] = (1.01)2C = −3/4
e = x̂ − x = −3/4 + 9/16 = −3/16

Now consider the case of rounding:

x = (1.0111)2C = −9/16 → x̂ = Qrn[x] = (1.10)2C = −1/2
e = x̂ − x = −1/2 + 9/16 = 1/16

Addition: Consider again two numbers, say a and b, each expressed in fixed-point fractional representation with B + 1 bits. In theory, the sum of these two numbers may require up to B + 2 bits for its correct representation. That is:

c = a + b = (c1 c0 . c−1 ··· c−B)

where c1 is a sign bit and the binary point has been shifted by one bit to the right. In practice, only B + 1 bits are available to store the result c, leading to a type of error called overflow.

Example 10.5: Assuming 2C representation, we have

(1.01) + (1.10) = −3/4 − 1/2 = −5/4

which cannot be represented exactly in a (2 + 1)-bit fractional 2C format. In fixed-point hardware, the above operation would be realized using modulo-2 addition (the left carry bit being lost), leading to an erroneous result of 3/4.

In a practical DSP system, overflow should by all means be avoided, as it introduces significant distortions in the system output. Two basic means are available to avoid overflow:

• scaling of the signals at various points in a DSP system (corresponds to a left shift of the binary point; resolution is lost);
• use of temporary guard bits to the left of the binary point.

10.2 Effects of coefficient quantization

In our previous study of filter structures and design techniques, the filter coefficients were implicitly assumed to be represented in infinite precision. In practice, the filter coefficients must be quantized to a finite-precision representation (e.g., 16 bits) prior to the system's implementation. For example, consider a DFII realization of an IIR filter with system function

H(z) = ∑_{k=0}^{M} bk z^{−k} / (1 − ∑_{k=1}^{N} ak z^{−k})    (10.9)
Due to quantization of the filter coefficients ak → âk and bk → b̂k, the corresponding system function is no longer H(z) but instead

Ĥ(z) = ∑_{k=0}^{M} b̂k z^{−k} / (1 − ∑_{k=1}^{N} âk z^{−k}) = H(z) + ∆H(z)    (10.10)

Equivalently, one may think of the poles pk and zeros zk of H(z) as being displaced to new locations:

p̂k = pk + ∆pk    (10.11)
ẑk = zk + ∆zk    (10.12)

Small changes in the filter coefficients due to quantization may lead to very significant displacements of the poles and zeros that dramatically affect H(z), i.e., a very large ∆H(z). It should be clear that quantization effects may be different for two different filter realizations with the same system function H(z). Indeed, even though the two realizations have the same transfer characteristics, the actual system coefficients in the two realizations may be different, leading to different errors in the presence of quantization. In this section, we investigate such quantization effects.

10.2.1 Sensitivity analysis for direct form IIR filters

Consider an IIR filter with corresponding system function

H(z) = B(z)/A(z)    (10.13)

where B(z) and A(z) are polynomials in z^{−1}:

B(z) = b0 + b1 z^{−1} + ··· + bM z^{−M} = b0 ∏_{k=1}^{M} (1 − zk z^{−1})    (10.14)

A(z) = 1 − a1 z^{−1} − ··· − aN z^{−N} = ∏_{k=1}^{N} (1 − pk z^{−1})    (10.15)
Note that the filter coefficients bk and ak uniquely define the poles and zeros of the system, and vice versa. Thus, any change in the ak's and bk's will be accompanied by corresponding changes in the pk's and zk's. Below, we investigate the sensitivity of the pole-zero locations to small quantization errors in the ak's and bk's via a first-order perturbation analysis. Note that in a direct form realization of H(z), the filter coefficients are precisely the ak's and bk's; the results of this analysis are therefore immediately applicable to IIR filters realized in DFI, DFII and DFII-transposed.
PZ sensitivity: Suppose the filter coefficients in (10.13)–(10.15) are quantized to

âk = Q[ak] = ak + ∆ak    (10.16)
b̂k = Q[bk] = bk + ∆bk    (10.17)

The resulting system function is now Ĥ(z) = B̂(z)/Â(z), where

B̂(z) = b̂0 + b̂1 z^{−1} + ··· + b̂M z^{−M} = b̂0 ∏_{k=1}^{M} (1 − ẑk z^{−1})    (10.18)

Â(z) = 1 − â1 z^{−1} − ··· − âN z^{−N} = ∏_{k=1}^{N} (1 − p̂k z^{−1})    (10.19)
Using the chain rule of calculus, it can be shown that, to first order in the ∆ak, the poles of the quantized system become

p̂k = pk + ∆pk    (10.20)

∆pk = ∑_{l=1}^{N} pk^{N−l} ∆al / ∏_{l≠k} (pk − pl)    (10.21)

A similar formula can be derived for the variation in the zeros of the quantized system as a function of the ∆bk.

Remarks: For a system with a cluster of two or more closely spaced poles, we have

pk ≈ pl for some l ≠ k ⇒ ∆pk very large ⇒ ∆H(z) very large    (10.22)

The above problem posed by a cluster of poles becomes worse as N increases (higher probability of having closely spaced poles). To avoid the pole cluster problem with IIR filters:

• do not use a direct form realization for large N;
• instead, use cascade or parallel realizations with low-order sections (usually in direct form II).

As indicated above, the quantization effects on the zeros of the system are described by similar equations. However, the zeros zk are usually not as clustered as the poles pk in practical filter designs. Also, ∆H(z) is typically less sensitive to ∆zk than to ∆pk.

10.2.2 Poles of quantized 2nd order system

Intro: Consider a 2nd-order all-pole filter section with

H(z) = 1 / (1 − 2α1 z^{−1} + α2 z^{−2})    (10.23)
Observe that the corresponding poles, say p1 and p2 , are functions of the coefﬁcients α1 and α2 . Thus, if α1 and α2 are quantized to B + 1 bits, only a ﬁnite number of pole locations are possible. Furthermore, only a subset of these locations will correspond to a stable and causal DT system.
System poles: The poles of H(z) in (10.23) are obtained by solving for the roots of

z² − 2α1 z + α2 = 0

or equivalently

z = α1 ± √∆,  where ∆ = α1² − α2

We need to distinguish three cases:

• ∆ = 0: a real double pole at p1 = p2 = α1;
• ∆ > 0: distinct real poles at p1,2 = α1 ± √∆;
• ∆ < 0: complex conjugate poles at

  p1,2 = α1 ± j√(−∆) = r e^{±jθ}

  where the parameters r and θ satisfy

  r = √α2,  r cos θ = α1    (10.24)
Coefficient quantization: Suppose that a (B + 1)-bit, fractional sign-magnitude representation is used for the storage of the filter coefficients α1 and α2, i.e.:

αi ∈ {±0.b1 ··· bB : bj = 0 or 1}

Accordingly, only certain locations are possible for the corresponding poles p1 and p2. This is illustrated in Figure 10.2 for the case B = 3, where we make the following observations:

• The top part shows the quantized (α1, α2)-plane, where circles correspond to ∆ < 0 (complex conjugate pole locations), bullets to ∆ = 0 and ×'s to ∆ > 0.
• The bottom part illustrates the corresponding pole locations in the z-plane, for the cases ∆ < 0 and ∆ = 0 only (distinct real poles not shown, to simplify the presentation).
• Each circle in the top figure corresponds to a pair of complex conjugate locations in the bottom figure (see, e.g., the circles labelled 1, 2, etc.).
• According to (10.24), the complex conjugate poles are at the intersections of circles with radius √α2 and vertical lines with abscissae r cos θ = α1. That is, the distance from the origin and the projection on the real axis are quantized in this scheme.
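The pole grid described above can be enumerated directly from (10.23)–(10.24). The sketch below is our illustration (variable names are ours): it lists all quantized coefficient pairs giving complex conjugate poles and exposes the property that such poles have quantized radius √α2 and quantized real part α1:

```python
import cmath, math

B = 3
# sign-magnitude quantized coefficient values in (-1, 1), step 2^-B
levels = [k * 2.0 ** -B for k in range(-2 ** B + 1, 2 ** B)]

def poles(a1, a2):
    """Poles of H(z) = 1 / (1 - 2*a1*z^-1 + a2*z^-2), i.e. roots of
    z^2 - 2*a1*z + a2 = 0."""
    d = cmath.sqrt(complex(a1 * a1 - a2))
    return a1 + d, a1 - d

# coefficient pairs with Delta = a1^2 - a2 < 0 (complex conjugate poles)
complex_pairs = [(a1, a2) for a1 in levels for a2 in levels if a1 * a1 - a2 < 0]
```

For every pair in `complex_pairs`, |p1| equals √α2 and Re(p1) equals α1, which is exactly why the attainable pole locations in Figure 10.2 lie on intersections of circles and vertical lines.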
10.3 Quantization noise in digital ﬁlters
In a digital ﬁlter, each multiplier introduces a roundoff error signal, also known as quantization noise. The error signal from each multiplier propagates through the system, sometimes in a recursive manner (i.e. via feedback loops). At the system output, the roundoff errors contributed from each multiplier build up in a cumulative way, as represented by a total output quantization noise signal. In this Section, we present a systematic approach for the evaluation of the total quantization noise power at the output of a digitally implemented LTI system. This approach is characterized by the use of an equivalent random linear model for (nonlinear) digital ﬁlters. Basic properties of LTI systems excited by random sequences are invoked in the derivation.
Fig. 10.2 Illustration of coefficient quantization in a second-order system: the quantized (α1, α2)-plane (top) and the corresponding pole locations in the z-plane (bottom).
Nonlinear model of digital ﬁlter: Let H denote an LTI ﬁlter structure with L multiplicative branches. Let ui [n], αi and vi [n] respectively denote the input, multiplicative gain and output of the ith branch (i = 1, . . . , L), as shown in Figure 10.3 (left).
Fig. 10.3 Nonlinear model of digital multiplier: infinite-precision branch (left) and fixed-point branch with quantizer Qr (right).
Let Ĥ denote the (B + 1)-bit fixed-point implementation of H, where it is assumed that each multiplier output is rounded to B + 1 bits:

vi[n] = αi ui[n] (2B + 1 bits) =⇒ v̂i[n] = Qr(αi ui[n]) (B + 1 bits)    (10.25)

In the deterministic nonlinear model of Ĥ, each multiplicative branch αi in H is modified as shown in Figure 10.3 (right).

Equivalent linear-noise model: In terms of the quantization error ei[n], we have

v̂i[n] = Qr(αi ui[n]) = αi ui[n] + ei[n]    (10.26)

In the equivalent linear-noise model of Ĥ, each multiplicative branch αi with quantizer Qr(·) is replaced as shown in Figure 10.4.
Fig. 10.4 Linear-noise model of digital multiplier: the quantizer is replaced by an additive noise source ei[n].
The quantization error signal ei[n] (n ∈ Z) is modelled as a white noise sequence, uncorrelated with the system input x[n] and with the other quantization error signals ej[n] for j ≠ i. For a fixed n, we assume that ei[n] is uniformly distributed within the interval ±(1/2)2^{−B}, so that:

• its mean value (or DC value) is zero:
  E{ei[n]} = 0    (10.27)
• its variance (or noise power) is given by:
  σe² = E{ei[n]²} = 2^{−2B}/12    (10.28)
Property 1: Consider an LTI system with system function K(z). Suppose that a zero-mean white noise sequence e[n] with variance σe² is applied to its input, and let f[n] denote the corresponding output (see Figure 10.5). Then the sequence f[n] has zero mean and its variance σf² is given by

σf² = σe² ∑_{n=−∞}^{∞} k[n]² = (σe²/2π) ∫_{−π}^{π} |K(ω)|² dω    (10.29)

where k[n] denotes the impulse response of the LTI system K(z).

Fig. 10.5 Illustration of LTI system K(z) with stochastic input e[n] and output f[n].

Property 2: Let e1[n], ..., eL[n] be zero-mean, uncorrelated white noise sequences with variance σe². Suppose that each sequence ei[n] is applied to the input of an LTI system Ki(z), and let fi[n] denote the corresponding output. Finally, let f[n] = f1[n] + ··· + fL[n] (see Figure 10.6). Then the sequence f[n] has zero mean and variance

σf² = σf1² + ··· + σfL²    (10.30)

where the σfi² are computed as in Property 1.

Fig. 10.6 LTI systems Ki(z) with stochastic inputs ei[n] and combined output f[n].

Linear superposition of noise sources: Consider an LTI filter structure H with L multiplicative branches. Assume that H is implemented in fixed-point arithmetic and let Ĥ denote the corresponding linear-noise model, where the noise sources ei[n] are modelled as uncorrelated zero-mean white noise sources, as previously described. Ĥ may be interpreted as a linear system with a main input x[n], L secondary inputs ei[n] (i = 1, ..., L) and a main output ŷ[n]. Invoking the principle of superposition for linear systems, the output may be expressed as

ŷ[n] = y[n] + f1[n] + ··· + fL[n]    (10.31)

In this equation:
• y[n] is the desired response in the absence of quantization noise: y[n] = H(z)x[n], where H(z) is the system function in infinite precision;
• fi[n] is the individual contribution of noise source ei[n] to the system output in the absence of the other noise sources and of the main input x[n]: fi[n] = Ki(z)ei[n], where Ki(z) is defined as the transfer function between the injection point of ei[n] and the system output ŷ[n] when x[n] = 0 and ej[n] = 0 for all j ≠ i.

Fig. 10.7 Linear-noise model of an LTI system with L multipliers.

Total output noise power: According to (10.31), the total quantization noise at the system output is given by

f[n] = f1[n] + ··· + fL[n]    (10.32)

Invoking Properties 1 and 2, we conclude that the quantization noise f[n] has zero mean and its variance is given by

σf² = ∑_{i=1}^{L} σe² ∑_{n=−∞}^{∞} ki[n]² = (2^{−2B}/12) ∑_{i=1}^{L} ∑_{n=−∞}^{∞} ki[n]²    (10.33)

where (10.28) has been used. The variance σf² obtained in this way provides a measure of the quantization noise power at the system output.

Note on the computation of (10.33): To compute the quantization noise power σf² as given by (10.33), it is first necessary to determine the individual transfer functions Ki(z) seen by each of the noise sources ei[n]. This information can be obtained from the flowgraph of the linear-noise model Ĥ by setting x[n] = 0 and ej[n] = 0 for all j ≠ i. For simple system functions Ki(z), it may be possible to determine the associated impulse response ki[n] via an inverse z-transform and then compute a numerical value for the sum of squared magnitudes in (10.33). This is the approach taken in the following example.
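The equality of the time-domain sum and the frequency-domain integral in (10.29), which underlies both routes for computing ∑ ki[n]², can be cross-checked numerically for a simple one-pole system. The constants and names in this sketch are ours; the integral is approximated by a Riemann sum over one period:

```python
import math

a = 0.5                                   # K(z) = 1/(1 - a z^-1), so k[n] = a^n u[n]

# time-domain side of (10.29): sum_n k[n]^2 = 1 / (1 - a^2)
time_sum = sum(a ** (2 * n) for n in range(200))

# frequency-domain side: (1/2pi) * integral of |K(w)|^2 = 1/(1 - 2a cos w + a^2)
Npts = 4096
freq_avg = sum(1.0 / (1 - 2 * a * math.cos(2 * math.pi * m / Npts) + a * a)
               for m in range(Npts)) / Npts
```

For a periodic smooth integrand like |K(ω)|², the uniform Riemann sum converges extremely fast, so the two quantities agree to near machine precision.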
Example 10.6: Consider the system function

H(z) = 1 / ((1 − a1 z^{−1})(1 − a2 z^{−1}))

where a1 = 1/2 and a2 = 1/4. Let H denote a cascade realization of H(z) using 1st-order sections, shown in Fig. 10.8.

Fig. 10.8 Cascade realization of a 2nd-order system.

Consider the fixed-point implementation of H using a (B + 1)-bit fractional two's complement (2C) number representation in which multiplications are rounded. In this implementation of H, each product ai ui[n] in Fig. 10.8, which requires 2B + 1 bits for its exact representation, is rounded to B + 1 bits:

ai ui[n] → Qr(ai ui[n]) = ai ui[n] + ei[n]

where ei[n] denotes the roundoff error. The equivalent linear-noise model for H, shown in Fig. 10.9, may be viewed as a multiple-input single-output LTI system.

Fig. 10.9 Equivalent linear-noise model.

The output signal ŷ[n] in Fig. 10.9 may be expressed as

ŷ[n] = y[n] + f[n]

where y[n] = H(z)x[n] is the desired output and f[n] represents the cumulative effect of the quantization noise at the system's output. In turn, f[n] may be expressed as

f[n] = f1[n] + f2[n]
where fi[n] = Ki(z)ei[n] represents the individual contribution of noise source ei[n], and Ki(z) is the system function between the injection point of ei[n] and the system output. To compute the noise power contributed by source ei[n], we need to find Ki(z) and compute the energy in its impulse response, obtained by setting the input as well as all other noise sources to zero.

• To obtain K1(z), set x[n] = e2[n] = 0:

K1(z) = H(z) = 1 / ((1 − (1/2) z^{−1})(1 − (1/4) z^{−1})) = 2/(1 − (1/2) z^{−1}) − 1/(1 − (1/4) z^{−1})

The impulse response is obtained via an inverse z-transform (hence the partial fraction expansion), assuming a causal system:

k1[n] = [2 (1/2)^n − (1/4)^n] u[n]

from which we compute the energy as follows:

∑_n k1[n]² = ∑_{n=0}^{∞} [2 (1/2)^n − (1/4)^n]² = ··· = 1.83

• To obtain K2(z), set x[n] = e1[n] = 0:

K2(z) = 1 / (1 − a2 z^{−1})

The impulse response is

k2[n] = (1/4)^n u[n]

with corresponding energy

∑_n k2[n]² = ∑_{n=0}^{∞} (1/4)^{2n} = ··· = 1.07

Finally, according to (10.33), the total quantization noise power at the system output is obtained as:

σf² = σe² ( ∑_n k1[n]² + ∑_n k2[n]² ) = (1.83 + 1.07) 2^{−2B}/12 = 2.90 · 2^{−2B}/12    (10.34)

We leave it as an exercise for the student to verify that if the computation is repeated with a1 = 1/4 and a2 = 1/2 (i.e., reversing the order of the 1st-order sections), the result will be σf² = 3.16 · 2^{−2B}/12. Why...?
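The energy sums quoted in Example 10.6, and the reversed-order total left as an exercise, can be confirmed numerically. This is a brief sketch with helper names of our own choosing:

```python
def impulse_energy(k, n_terms=200):
    """Sum k(n)**2 over n >= 0 for a quickly decaying causal impulse response."""
    return sum(k(n) ** 2 for n in range(n_terms))

E1 = impulse_energy(lambda n: 2 * 0.5 ** n - 0.25 ** n)   # K1(z) = H(z)
E2 = impulse_energy(lambda n: 0.25 ** n)                  # K2(z) = 1/(1 - z^-1/4)
E2_swapped = impulse_energy(lambda n: 0.5 ** n)           # sections in reverse order
```

E1 ≈ 1.83 and E2 ≈ 1.07, so E1 + E2 ≈ 2.90 as in (10.34). With the sections swapped, the inter-stage noise source instead sees the section whose pole (1/2) is closer to the unit circle, E2 becomes ≈ 1.33, and the total rises to ≈ 3.16.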
Signal-to-quantization-noise ratio (SQNR): We define the SQNR at the output of a digital system as follows:

SQNR = 10 log10 (Py / σf²)  in dB    (10.35)

where

• Py denotes the average power of the output signal y[n] in infinite precision (see below);
• σf² denotes the average power of the total quantization noise f[n] at the system output. This is computed as explained previously.

Output signal power: The computation of the output signal power Py depends on the model used for the input signal x[n]. Two important cases are considered below.

• Random input: Suppose x[n] is a zero-mean white noise sequence with variance σx². Then, from Property 1, the average power of y[n] is given by

  Py = σx² ∑_{n=−∞}^{∞} h[n]² = (σx²/2π) ∫_{−π}^{π} |H(ω)|² dω    (10.36)

• Sinusoidal input: Suppose x[n] = A sin(ω0 n + φ). Then

  Py = A² |H(ω0)|² / 2    (10.37)

  and the SQNR is obtained as

  SQNR = 10 log10 (Py / σf²) = 10 log10 ( A² |H(ω0)|² / (2 σf²) )    (10.38)

Example 10.7: (Ex. 10.6 cont.) Suppose that in Example 10.6, the input signal is a white noise sequence with sample values x[n] uniformly distributed within ±Xmax. From the above, it follows that x[n] has zero mean and

σx² = Xmax² / 12

The corresponding output power in infinite precision is computed from (10.36):

Py = σx² ∑_{n=−∞}^{∞} h[n]² = 1.83 σx² = 1.83 Xmax² / 12

Finally, the SQNR is obtained as

SQNR = 10 log10 (Py / σf²) = 10 log10 [ (1.83 Xmax²/12) / (2.90 · 2^{−2B}/12) ] ≈ 6.02 B + 20 log10 Xmax − 2.0 (dB)

Note the dependence on the peak signal value Xmax and the number of bits B: each additional bit contributes a 6 dB increase in SQNR. In practice, the choice of Xmax is limited by overflow considerations; the maximum permissible value of x[n], represented by Xmax, may be less than 1 to avoid potential internal overflow problems.
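The SQNR computation in Example 10.7 packages neatly into a routine. A sketch (the function name and default arguments are ours; the defaults 1.83 and 2.90 are the energy sums of Example 10.6):

```python
import math

def sqnr_db(B, Xmax, gain_sq=1.83, noise_gain=2.90):
    """SQNR (dB) for Example 10.7: white input uniform on [-Xmax, Xmax],
    so Py = gain_sq * Xmax^2 / 12 and sigma_f^2 = noise_gain * 2^(-2B) / 12."""
    Py = gain_sq * Xmax ** 2 / 12
    sf2 = noise_gain * 2.0 ** (-2 * B) / 12
    return 10 * math.log10(Py / sf2)
```

Evaluating this for a few values of B confirms the closed form: the result tracks 6.02 B + 20 log10 Xmax − 2.0 dB, and each extra bit adds 20 log10 2 ≈ 6.02 dB.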
Further note on the computation of (10.33): The approach used in the above example for the computation of sum_n k_i[n]^2 may become cumbersome if K_i(z) is too complex. For rational IIR transfer functions K_i(z), the computation of the infinite sum can be avoided, based on the fact that

    sum_{n=-inf}^{inf} k[n]^2 = (1/2 pi) int_{-pi}^{pi} |K(omega)|^2 domega

It has been shown earlier that the transfer function C(z), defined as C(z) = K_i(z) K_i^*(1/z^*) so that its frequency response satisfies C(omega) = |K_i(omega)|^2, has poles given by d_k and 1/d_k^*, where d_k is a (simple) pole of K_i(z). Thus a partial fraction expansion of C(z) would yield a result in the form

    C(z) = C_1(z) + sum_{k=1}^{N} [ A_k / (1 - d_k z^{-1}) - A_k^* / (1 - z^{-1}/d_k^*) ]

where C_1(z) is a possible FIR component and the A_k's are partial fraction expansion coefficients. (It is easily shown from the definition of C(z) as K_i(z) K_i^*(1/z^*) that the expansion coefficient corresponding to the pole 1/d_k^* is indeed -A_k^*.) Evaluating the above PFE on the unit circle and integrating over one period gives

    sum_n k[n]^2 = (1/2 pi) int_{-pi}^{pi} C_1(omega) domega
                 + sum_{k=1}^{N} (1/2 pi) int_{-pi}^{pi} [ A_k / (1 - d_k e^{-j omega}) - A_k^* / (1 - e^{-j omega}/d_k^*) ] domega

All the above terms are easily seen to be inverse discrete-time Fourier transforms evaluated at n = 0. Note that the term corresponding to A_k^* is zero at n = 0, since the corresponding sequence has to be anti-causal, starting at n = -1. Consequently,

    sum_n k[n]^2 = c_1[0] + sum_{k=1}^{N} A_k                              (10.38)

so that the infinite sum is replaced by a finite sum of partial fraction expansion coefficients.

Merging of noise sources: Figure 10.10(a) shows the equivalent linear model used to take quantization noise into account. It often occurs that two or more noise sources e_i[n] have the same system function K_i(z). In this case, according to (10.33), these noise sources may be replaced by a single noise source with a scaled variance q sigma_e^2, where q is the number of noise sources being merged. The 2N+1 different noise sources in Figure 10.10(a) can thus be regrouped, as in Figure 10.10(b), into two composite noise sources e_1[n] and e_2[n] with variances N sigma_e^2 and (N+1) sigma_e^2, respectively. The transfer function from e_2[n] to the output of the structure is just 1, whereas the noise source e_1[n] goes through the transfer function H(z). The corresponding output noise variances can therefore be computed as

    sigma_{f1}^2 = sigma_{e1}^2 sum_{n=0}^{inf} h[n]^2 = N sigma_e^2 sum_{n=0}^{inf} h[n]^2     (10.39)
    sigma_{f2}^2 = sigma_{e2}^2 = (N+1) sigma_e^2                                               (10.40)

for a total noise power of

    sigma_f^2 = (2^{-2B}/12) ( N + 1 + N sum_{n=0}^{inf} h[n]^2 )

Example 10.8:

Let us study the actual effect of roundoff noise on the DFII implementation of the filter designed by bilinear transformation in Example 9.5. The coefficients of the transfer function are those tabulated in Chapter 9.
We will use formula (10.38) in order to compute the sum over h[n]^2, with the help of Matlab. The first step is to create the numerator and denominator coefficients of the transfer function C(z) = H(z) H^*(1/z^*). After grouping the coefficients of the numerator polynomial b_k in a column vector b in Matlab, and doing the same for the a_k's in a vector a, one can easily find the numerator of C(z) by numc=conv(b,flipud(conj(b))), and the denominator by denc=conv(a,flipud(conj(a))). To compute the partial fraction expansion, the function residuez is used, yielding

    C_1(z) = 9.97 x 10^{-6},   sum_{k=1}^{6} A_k = 0.23463,

so that the actual sum is

    sum_{n=0}^{inf} h[n]^2 = 0.23464.

With N = 6, the output noise variance is therefore given by

    sigma_f^2 = 0.70065 x 2^{-2B}.

Assuming a 12-bit representation (B = 11) and an input signal uniformly distributed between -1/2 and 1/2, this leads to an SQNR value of 50.7 dB. Though this value might sound high, it will not be sufficient for many applications (e.g. "CD quality" audio processing requires an SNR of the order of 95 dB).

Finally, let us point out that if we have at our disposal a double-length accumulator (as is available in many DSP chips, see Chapter 12), then the rounding or truncation only has to take place before storage in memory. Consequently, the only locations in the DFII signal flow graph where quantization will occur are right before the delay line and before the output, i.e. the two nodes where the noise components are injected, as illustrated in Figure 10.11. We then end up with a total output noise variance of

    sigma_f^2 = (2^{-2B}/12) ( 1 + sum_{n=0}^{inf} h[n]^2 ),

where the factors N have disappeared. Carrying out a study similar to the one outlined above, we would get an SQNR of 59.02 dB.

Note that the example is not complete:
• the effect of overflows has not been taken into account;
• the effect of coefficient quantization has not been taken into account. In particular, some low values of the number B+1 of bits used might push the poles outside of the unit circle and make the system unstable.

10.4 Scaling to avoid overflow

Problem statement: Consider a filter structure H with K adders and corresponding node values w_i[n], as illustrated in Figure 10.12. Note that whenever |w_i[n]| > 1 for a given adder at a given time, overflow occurs. In practice, overflow in key nodes is avoided by a proper scaling of the input signal by a factor s < 1, such that the resulting signals in the nodes of interest do not overflow. The nodes to consider are the final results of accumulated additions, since overflow in intermediate nodes is harmless in two's complement arithmetic. By looking at the transfer function from the input to these nodes, it is easy to find a conservative value of s that will prevent overflow. Note that this scaling will also reduce the SQNR, since the power Py will be scaled by a factor s^2.
10. c B.4 Scaling to avoid overﬂow 217 H 1 >Q@ H >Q@ 1 ¦ HL >Q@ L 1 H > Q @ L 1 ¦ HL >Q@ [>Q@ Z>Q@ E \>Q@ [>Q@ Z>Q@ E ] \>Q@ ] H>Q@ D E H 1 > Q@ D E H 1 >Q@ D 1 E1 ] H 1 >Q@ D 1 E1 ] H 1 > Q@ D1 E1 H 1 >Q@ D1 E1 (a) (b) Fig. 10.11 Roundoff noise model when a doublelength accumulator is available. H>Q@ E Z>Q@ H > Q @ \>Q@ [>Q@ ] D E D 1 E1 ] D1 E1 Fig. Champagne & F.10 DFII modiﬁed to take quantization effects into account (a) each multiplier contains a noise source and (b) a simpliﬁed version with composite noise sources.10. 2004 . Labeau Compiled November 5.
[Figure] Fig. 10.12 Linear-noise model of an LTI system with L multipliers.

Overflow usually produces large, unpredictable distortion in the system output and must be avoided at all cost.

Principles of scaling: The basic idea is very simple: properly scale the input signal x[n] to a range |x[n]| < Xmax so that the condition

    |w_i[n]| < 1,   i = 1, ..., K                                          (10.41)

is enforced at all time n; enforcing this condition results in a fixed-point implementation that is free of overflow. Several approaches exist for choosing Xmax, the maximum permissible value of x[n]. The best approach usually depends on the type of input signals being considered. Two such approaches are presented below.

Wideband signals: For wideband signals, a suitable value of Xmax is provided by

    Xmax < 1 / ( max_i sum_n |h_i[n]| )                                    (10.42)

where h_i[n] denotes the impulse response (with transfer function H_i(z)) from the system input to the i-th adder output. The above choice of Xmax ensures that overflow will be avoided regardless of the particular type of input: it represents a sufficient condition to avoid overflow.

Narrowband signals: For narrowband signals, the above bound on Xmax may be too conservative. An often better choice of Xmax is provided by the bound

    Xmax < 1 / ( max_{i,omega} |H_i(omega)| )                              (10.43)
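The wideband bound (10.42) is easy to evaluate numerically. The sketch below is ours; the two node impulse responses are borrowed from Example 10.6 purely for illustration, and the truncation length is an assumption:

```python
# Conservative wideband scaling bound (10.42): Xmax < 1 / max_i sum_n |h_i[n]|,
# estimated from truncated impulse responses (they decay geometrically here).

def l1_norm(h, n_terms=500):
    return sum(abs(h(n)) for n in range(n_terms))

# Hypothetical internal nodes with impulse responses from Example 10.6:
# k1[n] = 2*(1/2)**n - (1/4)**n and k2[n] = (1/4)**n.
h_nodes = [lambda n: 2 * 0.5**n - 0.25**n, lambda n: 0.25**n]

Xmax = 1.0 / max(l1_norm(h) for h in h_nodes)
print(round(Xmax, 3))  # 0.375
```

Here the binding node is the first one, with l1-norm 8/3, giving Xmax = 3/8.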
Chapter 11

Fast Fourier transform (FFT)

Computation of the discrete Fourier transform (DFT) is an essential step in many signal processing algorithms. Even for data sequences of moderate size N, this usually entails significant computational costs: a direct computation of the N-point DFT requires on the order of N^2 arithmetic operations. In this chapter, we investigate so-called fast Fourier transform (FFT) algorithms for the efficient computation of the DFT. These algorithms achieve significant savings in the DFT computation by exploiting certain symmetries of the Fourier basis functions, and only require on the order of N log2 N arithmetic operations. FFT algorithms are at the origin of major advances in the field of signal processing, particularly in the 60's and 70's. Nowadays, they find applications in numerous commercial DSP-based products.

11.1 Direct computation of the DFT

Recall the definition of the N-point DFT of a discrete-time signal x[n] defined for 0 <= n <= N-1:

    X[k] = sum_{n=0}^{N-1} x[n] W_N^{kn},   k = 0, 1, ..., N-1             (11.1)

where, for convenience, we have introduced

    W_N = e^{-j 2 pi / N}                                                  (11.2)

Based on (11.1), one can easily come up with the following algorithmic steps for its direct evaluation, also known as the direct approach:

Step 1: Compute W_N^l and store it in a table:

    W_N^l = e^{-j 2 pi l / N} = cos(2 pi l / N) - j sin(2 pi l / N)        (11.3)

Note: because of periodicity, this only needs to be evaluated for l = 0, 1, ..., N-1.

Step 2: Compute the DFT X[k] using the stored values of W_N^l and the input data x[n]:

    for k = 0 : N-1
        X[k] <- x[0]
        for n = 1 : N-1
            l = (kn) mod N
            X[k] <- X[k] + x[n] W_N^l                                      (11.4)
        end
    end

The complexity of this approach can be broken up as follows:

• In step (1): N evaluations of the trigonometric functions sin and cos (actually less than N because of the symmetries). These values are usually computed once and stored for later use. We refer to this approach as table lookup.
• In step (2):
  - there are N(N-1), i.e. approximately N^2, complex multiplications;
  - there are N(N-1), i.e. approximately N^2, complex additions;
  - one must also take into account the overhead: indexing, addressing, etc.

Because of the figures above, we say that direct computation of the DFT is O(N^2), i.e. of order N^2. The same is true for the IDFT. Even for moderate values of N, the O(N^2) complexity will usually require a considerable amount of computational resources. The rest of this chapter is devoted to the description of efficient algorithms for the computation of the DFT. These so-called fast Fourier transform, or FFT, algorithms can achieve the same DFT computation in only O(N log2 N) operations.
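To make the O(N^2) baseline concrete, here is a minimal Python sketch (ours, not the notes' pseudocode) of the direct approach, table lookup included:

```python
import cmath

def dft_direct(x):
    N = len(x)
    # Step 1: table lookup, W_N^l for l = 0, ..., N-1 (periodicity)
    W = [cmath.exp(-2j * cmath.pi * l / N) for l in range(N)]
    # Step 2: roughly N^2 complex multiplications and additions
    X = []
    for k in range(N):
        acc = x[0] + 0j
        for n in range(1, N):
            acc += x[n] * W[(k * n) % N]   # l = (kn) mod N
        X.append(acc)
    return X

print(abs(dft_direct([1, 1, 1, 1])[0]))   # X[0] = N for a constant input: 4.0
```

The double loop makes the quadratic cost explicit; the FFT algorithms below remove it.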
11.2 Overview of FFT algorithms

In its most general sense, the term FFT refers to a family of computationally fast algorithms used to compute the DFT. In itself, the FFT should not be thought of as a new transform: FFTs are merely algorithms for evaluating the DFT. While the direct evaluation of the DFT requires O(N^2) complex multiplications, FFT algorithms require only O(N log2 N) complex multiplications. For N >> 1,

    N log2 N << N^2

so that there is a significant gain with the FFT, even for moderate values of N.

Basic principle: The FFT relies on the concept of divide and conquer. It is obtained by breaking the DFT of size N into a cascade of smaller-size DFTs. To achieve this, two essential ingredients are needed:

• N must be a composite number (e.g. N = 6 = 2 x 3);
• the periodicity and symmetry properties of the factor W_N in (11.2) must be exploited, e.g.:

    W_N^{k+N} = W_N^k                                                      (11.5)
    W_N^{Lk} = W_{N/L}^k                                                   (11.6)

Different types of FFTs: There are several families of FFT algorithms, e.g.:

• N = 2^nu  =>  radix-2 FFTs. Even then, there are many variations:
  - decimation in time (DIT);
  - decimation in frequency (DIF).
• More generally, N = r^nu  =>  radix-r FFTs. The special cases r = 3 and r = 4 are not uncommon.
• N = p1 p2 ... pl, where the pi's are prime numbers, leads to so-called mixed-radix FFTs.

Radix-2 FFTs are possibly the most important ones; they are the most commonly used algorithms, and only in very specialized situations will it be more advantageous to use other radix-type FFTs. In these notes, we shall mainly focus on radix-2 FFTs. The students should nevertheless be able to extend the basic principles used in the derivation of radix-2 FFT algorithms to other radix-type FFTs.

Radix-2 FFTs are applicable when N = 2^nu. In this case:

• the DFT_N is decomposed into a cascade of nu stages;
• each stage is made up of N/2 2-point DFTs (DFT_2).

11.3 Radix-2 FFT via decimation-in-time

The basic idea behind decimation-in-time (DIT) is to partition the input sequence x[n], of length N = 2^nu, into two subsequences, x[2r] and x[2r+1], r = 0, 1, ..., (N/2)-1, corresponding to the even and odd values of time, respectively. It will be shown that the N-point DFT of x[n] can be computed by properly combining the (N/2)-point DFTs of these subsequences. In turn, the same principle can be applied to the computation of the (N/2)-point DFT of each subsequence, which can be reduced to DFTs of size N/4. This basic principle is repeated until only 2-point DFTs are involved. The final result is an FFT algorithm of complexity O(N log2 N).

We will first review the 2-point DFT, which is the basic building block of radix-2 FFT algorithms. We will then explain how and why the above idea of decimation-in-time can be implemented mathematically. Finally, we discuss particularities of the resulting radix-2 DIT FFT algorithms.

The 2-point DFT: In the case N = 2, (11.1) specializes to

    X[k] = x[0] + x[1] W_2^k,   k = 0, 1                                   (11.7)
e.13) (11. c B. Champagne & F. i. repeat the above steps for each of the individual (N/2)point DFTs.1.222 Chapter 11.14) • Step (3): Since N/2 = 2. x[0] X [0] X [1] x[1] 1 Fig. Labeau Compiled November 16.1) into ∑n even + ∑n odd (2) Express the sums ∑n even and ∑n odd as (N/2)point DFTs. as illustrated by the signal ﬂowgraph in Figure 11.11) (11. Main steps of DIT: (1) Split the ∑n in (11.9) which leads to a very simple realization of the 2point DFT. (11.1 Signal ﬂowgraph of a 2point DFT. that is. H[k + 2] = H[k] (11.8) (11.12) (11.10) Note that G[k] and H[k] are 2periodic. we simply stop. 2004 . 11. G[k + 2] = G[k]. the 2point DFTs G[k] and H[k] cannot be further simpliﬁed via DIT. Fast Fourier transform (FFT) Since W2 = e− jπ = −1. else. this can be further simpliﬁed to X[0] = x[0] + x[1] X[1] = x[0] − x[1]. Case N = 4 = 22 : • Step (1): X[k] = x[0] + x[1]W4k + x[2]W42k + x[3]W43k = (x[0] + x[2]W42k ) +W4k (x[1] + x[3]W42k ) • Step (2): Using the property W42k = W2k . we can write X[k] = (x[0] + x[2]W2k ) +W4k (x[1] + x[3]W2k ) = G[k] +W4k H[k] G[k] H[k] DFT2 {even samples} DFT2 {odd samples} (11. (3) If N/2 = 2 stop.
The 4-point DFT can thus be computed by properly combining the 2-point DFTs of the even and odd samples, G[k] and H[k], respectively:

    X[k] = G[k] + W_4^k H[k],   k = 0, 1, 2, 3                             (11.15)

Since G[k] and H[k] are 2-periodic, they only need to be computed for k = 0, 1. Further simplifications of the factors W_4^k are possible here: W_4^0 = 1, W_4^1 = -j, W_4^2 = -1 and W_4^3 = j; hence the equations:

    X[0] = G[0] + W_4^0 H[0]
    X[1] = G[1] + W_4^1 H[1]
    X[2] = G[2] + W_4^2 H[2] = G[0] + W_4^2 H[0]
    X[3] = G[3] + W_4^3 H[3] = G[1] + W_4^3 H[1]

The flow graph corresponding to this realization is shown in Figure 11.2. The DFT_2 blocks are as shown in Figure 11.1. Note the special ordering of the input data.

[Figure] Fig. 11.2 Decimation-in-time implementation of a 4-point DFT.

General case:

• Step (1): Note that the even and odd samples of x[n] can be represented by the sequences x[2r] and x[2r+1], respectively, where the index r now runs from 0 to N/2 - 1. Therefore

    X[k] = sum_{n=0}^{N-1} x[n] W_N^{kn}
         = sum_{r=0}^{N/2-1} x[2r] W_N^{2kr} + W_N^k sum_{r=0}^{N/2-1} x[2r+1] W_N^{2kr}     (11.16)

• Step (2): Using the property W_N^{2kr} = W_{N/2}^{kr}, we obtain

    X[k] = sum_{r=0}^{N/2-1} x[2r] W_{N/2}^{kr} + W_N^k sum_{r=0}^{N/2-1} x[2r+1] W_{N/2}^{kr}
         = G[k] + W_N^k H[k]                                               (11.17)
where we define

    G[k] = DFT_{N/2}{ x[2r], r = 0, 1, ..., N/2 - 1 }                      (11.18)
    H[k] = DFT_{N/2}{ x[2r+1], r = 0, 1, ..., N/2 - 1 }                    (11.19)

Note that G[k] and H[k] are (N/2)-periodic and need only be computed for k = 0 up to N/2 - 1. Thus, for k >= N/2, (11.17) is equivalent to

    X[k] = G[k - N/2] + W_N^k H[k - N/2]                                   (11.20)

• Step (3): If N/2 >= 4, apply these steps to each DFT of size N/2.

Example 11.1: Case N = 8 = 2^3

The application of the general decimation-in-time principle to the case N = 8 is described by Figures 11.3 through 11.5. Figure 11.3 illustrates the result of a first pass through DIT steps (1) and (2). Note the ordering of the input data to accommodate the 4-point DFTs over even and odd samples. The outputs of the top (bottom) 4-point DFT box are the coefficients G[k] (resp. H[k]) for k = 0, 1, 2, 3.

In DIT step (3), since N/2 = 4, the DIT procedure is repeated for the computation of the 4-point DFTs G[k] and H[k]. This amounts to replacing the 4-point DFT boxes in Figure 11.3 by the flowgraph of the 4-point DIT FFT derived previously. The result is illustrated in Figure 11.4, where, for the sake of uniformity, all phase factors have been expressed as powers of W_8 (e.g. W_4^1 = W_8^2). In doing so, the order of the input sequence must be modified accordingly.

The final step amounts to replacing each of the 2-point DFTs in Figure 11.4 by their corresponding signal flowgraph. The result is illustrated in Figure 11.5.

[Figure] Fig. 11.3 Decomposition of an 8-point DFT into two 4-point DFTs.
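The recursion behind steps (1)-(3) can be sketched compactly; the following Python code (ours, not the notes' pseudocode) implements (11.17)-(11.20) by recursive splitting into even and odd subsequences:

```python
import cmath

def fft_dit(x):
    """Radix-2 DIT FFT: X[k] = G[k] + W_N^k H[k]; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return x[:]                       # DFT of a single sample is itself
    G = fft_dit(x[0::2])                  # DFT_{N/2} of even samples
    H = fft_dit(x[1::2])                  # DFT_{N/2} of odd samples
    W = [cmath.exp(-2j * cmath.pi * k / N) for k in range(N // 2)]
    # (11.17) for k < N/2 and (11.20) for k >= N/2, with W_N^{k+N/2} = -W_N^k
    return ([G[k] + W[k] * H[k] for k in range(N // 2)] +
            [G[k] - W[k] * H[k] for k in range(N // 2)])

print([round(abs(v), 6) for v in fft_dit([1, 2, 3, 4])])
# [10.0, 2.828427, 2.0, 2.828427]
```

Each level of the recursion performs O(N) work over log2 N levels, which is where the O(N log2 N) complexity comes from.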
[Figure] Fig. 11.4 Decomposition of an 8-point DFT into four 2-point DFTs.

[Figure] Fig. 11.5 An 8-point DFT implemented with the decimation-in-time algorithm.

Computational complexity: The DIT FFT flowchart obtained for N = 8 is easily generalized to N = 2^nu, an arbitrary power of 2. In the general case, the signal flow graph consists of a cascade of nu = log2 N stages. Each stage is made up of N/2 basic binary flow graphs called butterflies, as shown in Figure 11.6.

[Figure] Fig. 11.6 A butterfly, the basic building block of an FFT algorithm.

Realizing that

    W_N^{r+N/2} = W_N^r W_N^{N/2} = W_N^r e^{-j pi} = -W_N^r,

an equivalent butterfly with only one complex multiplication can be implemented, as shown in Figure 11.7. In practice, this simplified butterfly structure is used instead of the one in Figure 11.6. By using the modified butterfly structure, the computational complexity of the DIT FFT algorithm can be determined based on the following:

• there are nu = log2 N stages;
• there are N/2 butterflies per stage;
• there is 1 complex multiplication and 2 complex additions per butterfly.

Hence, the total complexity is

    (N/2) log2 N complex multiplications,   N log2 N complex additions     (11.21)

Ordering of input data: The input data x[n] on the left-hand side of the FFT flow graph is not listed in standard sequential order (refer to Figures 11.2 or 11.5). Rather, the data is arranged in a so-called bit-reversed order. Indeed, if one looks at the example in Figure 11.5 and compares the indices of the samples x[n] on the left of the structure with a natural ordering, one sees that the 3-bit binary representation of the actual index is the bit-reversed version of the 3-bit binary representation of the natural ordering:

    Natural ordering (decimal / binary): 0/000, 1/001, 2/010, 3/011, 4/100, 5/101, 6/110, 7/111
    Actual index     (decimal / binary): 0/000, 4/100, 2/010, 6/110, 1/001, 5/101, 3/011, 7/111

Practical FFT routines contain bit-reversing instructions, applied either prior to or after the FFT computation, depending upon the specific nature of the algorithm. Some programmable DSP chips also contain an option for bit-reversed addressing of data in memory. We note that the DIT FFT flowgraph can be modified so that the inputs are in sequential order (e.g. by properly reordering the vertical line segments in Fig. 11.5), but then the output X[k] will be listed in bit-reversed order.
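The bit-reversed ordering above can be generated programmatically; here is a small sketch of ours for nu-bit indices:

```python
def bit_reverse(k, nu):
    """Reverse the nu-bit binary representation of index k."""
    r = 0
    for _ in range(nu):
        r = (r << 1) | (k & 1)   # shift the LSB of k into r
        k >>= 1
    return r

# Reproduces the N = 8 table: natural order 0..7 maps to the actual offsets.
print([bit_reverse(k, 3) for k in range(8)])  # [0, 4, 2, 6, 1, 5, 3, 7]
```

Note that bit reversal is its own inverse, which is why a single pass suffices whether it is applied before or after the transform.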
[Figure] Fig. 11.7 A modified butterfly, with only one multiplication.

FFT program: Based on the above considerations, it should be clear that a properly written program for the DIT FFT algorithm contains the following ingredients:

• preprocessing to address the data in bit-reversed order;
• an outer loop over the stage number, say from l = 1 to nu = log2 N;
• an inner loop over the butterfly number, say from k = 1 to N/2.

Note that the specific indices of the butterfly inputs depend on the stage number l.

Storage requirement: A complex array of length N, say A[q] (q = 0, 1, ..., N-1), is needed to store the input data x[n]. The computation can be done in-place, meaning that the same array A[q] is used to store the results of the intermediate computations, after which the input has been overwritten by the output. This is made possible by the very structure of the FFT flow graph, which is entirely made up of independent binary butterfly computations: once the outputs of a particular butterfly have been computed, the corresponding inputs are no longer needed and can be overwritten, because the inputs of each butterfly are only used once. Thus, it is possible to overwrite the inputs of each butterfly computation with the corresponding outputs, provided a single additional complex register is made available. Referring to Figure 11.7, denoting by A1 and A2 the storage locations of the two inputs to the butterfly, and making use of the temporary storage location tmp, we can write the pseudocode of the butterfly:

    tmp <- A2 * W_N^r
    A2  <- A1 - tmp
    A1  <- A1 + tmp

11.4 Decimation-in-frequency

Decimation in frequency (DIF) is another way of decomposing the DFT computation so that the resulting algorithm has complexity O(N log2 N). The principles are very similar to those of the decimation-in-time algorithm.

Basic steps: Assume N = 2^nu, where nu is an integer >= 1.

(1) Partition the DFT samples X[k] (k = 0, 1, ..., N-1) into two subsequences:
  - even-indexed samples: X[2r], r = 0, 1, ..., N/2 - 1
  - odd-indexed samples: X[2r+1], r = 0, 1, ..., N/2 - 1

(2) Express each subsequence as a DFT of size N/2.
(3) If N/2 = 2, stop; else, apply steps (1)-(3) to each DFT of size N/2.

Case N = 4:

• For the even-indexed DFT samples, we have

    X[2r] = sum_{n=0}^{3} x[n] W_4^{2rn} = x[0] + x[1] W_4^{2r} + x[2] W_4^{4r} + x[3] W_4^{6r},   r = 0, 1

  Now, observe that

    W_4^0 = 1,   W_4^{4r} = 1,   W_4^{2r} = W_2^r,   W_4^{6r} = W_4^{4r} W_4^{2r} = W_2^r

  Therefore

    X[2r] = (x[0] + x[2]) + (x[1] + x[3]) W_2^r = u[0] + u[1] W_2^r = DFT_2{u[0], u[1]}

  where u[r] = x[r] + x[r+2], r = 0, 1.

• In the same way, it can be verified that for the odd-indexed DFT samples,

    X[2r+1] = DFT_2{v[0], v[1]}

  where v[r] = W_4^r (x[r] - x[r+2]), r = 0, 1.

• The resulting signal flow graph is shown in Figure 11.8, where each of the 2-point DFTs has been realized using the basic SFG in Figure 11.1.

General case: Following the same steps as in the case N = 4, the following mathematical expressions can be derived for the general case N = 2^nu:

    X[2r]   = DFT_{N/2}{u[r]}                                              (11.22)
    X[2r+1] = DFT_{N/2}{v[r]}                                              (11.23)

where r = 0, 1, ..., N/2 - 1 and

    u[r] = x[r] + x[r + N/2]                                               (11.24)
    v[r] = W_N^r (x[r] - x[r + N/2])                                       (11.25)
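The DIF relations (11.22)-(11.25) are easy to verify numerically; the following sketch of ours checks them for N = 8 against a brute-force DFT:

```python
import cmath

def dft(x):
    """Brute-force DFT, used here only as a reference."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.5, -1.0, 3.0, 0.0, -2.0, 1.5]
N = len(x)
u = [x[r] + x[r + N // 2] for r in range(N // 2)]                     # (11.24)
v = [cmath.exp(-2j * cmath.pi * r / N) * (x[r] - x[r + N // 2])       # (11.25)
     for r in range(N // 2)]

X, U, V = dft(x), dft(u), dft(v)
print(all(abs(X[2 * r] - U[r]) < 1e-9 and abs(X[2 * r + 1] - V[r]) < 1e-9
          for r in range(N // 2)))   # True
```

The even and odd DFT samples are indeed the (N/2)-point DFTs of u and v, which is exactly the first decomposition stage of the DIF algorithm.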
[Figure] Fig. 11.8 Decimation-in-frequency realization of a 4-point DFT.
Remarks on the DIF FFT: Much like the DIT versions, the DIF radix-2 FFT algorithms have the following characteristics:

• computational complexity of O(N log2 N);
• output X[k] in bit-reversed order;
• in-place computation is also possible.

A program for the DIF FFT contains components similar to those described previously for the DIT FFT.
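Tying the pieces together, here is a hedged Python sketch (ours, not the notes' program) of a complete in-place radix-2 DIT FFT: bit-reversed input addressing, an outer loop over the log2 N stages, an inner loop over the butterflies, and the one-multiplication butterfly of Figure 11.7:

```python
import cmath

def fft_inplace(A):
    """In-place radix-2 DIT FFT; len(A) must be a power of 2."""
    N = len(A)
    nu = N.bit_length() - 1
    # bit-reversed addressing of the input
    for q in range(N):
        r = int(format(q, '0%db' % nu)[::-1], 2)
        if r > q:
            A[q], A[r] = A[r], A[q]
    # nu stages, each with N/2 butterflies, computed in place
    for l in range(1, nu + 1):
        span = 1 << l                     # size of a butterfly group at stage l
        for k in range(0, N, span):
            for j in range(span // 2):
                W = cmath.exp(-2j * cmath.pi * j / span)
                tmp = A[k + j + span // 2] * W      # tmp <- A2 * W
                A[k + j + span // 2] = A[k + j] - tmp   # A2 <- A1 - tmp
                A[k + j] = A[k + j] + tmp               # A1 <- A1 + tmp
    return A

print(abs(fft_inplace([1, 1, 1, 1, 0, 0, 0, 0])[0]))   # X[0] = sum of inputs: 4.0
```

The array A is overwritten stage by stage, so the storage requirement is a single complex array of length N plus the temporary tmp, as discussed in Section 11.3.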
11.5 Final remarks
Simplifications: The FFT algorithms discussed above can be further simplified in the following special cases:

• x[n] is a real signal;
• only selected values of X[k] need to be computed (=> FFT pruning);
• x[n] contains a large number of zero samples (e.g. as a result of zero-padding).

Generalizations: The DIT and DIF approaches can be generalized to:

• arbitrary radix (e.g. N = 3^nu and N = 4^nu);
• mixed radix (e.g. N = p1 p2 ... pK, where the pi's are prime numbers).
Chapter 12
An introduction to Digital Signal Processors
12.1 Introduction
Digital signal processing chips are specialized microprocessors aimed at performing DSP tasks; they are often simply called digital signal processors, or "DSPs", by hardware engineers. These DSP chips are sometimes referred to as PDSPs (programmable digital signal processors), as opposed to hardwired solutions (ASICs) and reconfigurable solutions (FPGAs). The first DSP chips appeared in the early 80's, mainly produced by Texas Instruments (TI). Nowadays, their power has dramatically increased, and the four big players in this field are TI, Agere (formerly Lucent), Analog Devices and Motorola. All these companies provide families of DSP processors with different target applications, from low cost to high performance.

As the processing capabilities of DSPs have increased, they are being used in more and more applications. Table 12.1 lists a series of application areas, together with the DSP operations they require, as well as examples of devices or appliances where a DSP chip can be used.

The very need for specialized microprocessors to accomplish DSP-related tasks comes from a series of requirements of classic DSP algorithms that could not be met by general-purpose processors in the early 80's: DSPs have to support high-performance, repetitive and numerically intensive tasks. In this chapter, we will review the reasons why a DSP chip should be different from other microprocessors, through motivating examples of DSP algorithms. Based on this, we will analyze how DSP manufacturers have designed their chips so as to meet the DSP constraints, and review a series of features common to most DSP chips. Finally, we will briefly discuss the performance measures that can be used to compare different DSP chips.
12.2 Why PDSPs ?
To motivate the need for specialized DSP processors, we will study in this section the series of operations required to accomplish two basic DSP operations, namely FIR ﬁltering and FFT computation.
Table 12.1  A non-exhaustive list of applications of DSP chips.

Speech & Audio Signal Processing
  DSP tasks: Effects (Reverb, Tone Control, Echo), Filtering, Audio Compression, Speech Synthesis, Recognition & Compression, Frequency Equalization, Pitch, Surround Sound
  Devices: Musical Instruments & Amplifiers, Audio Mixing Consoles & Recording Equipment, Audio Equipment & Boards for PCs, Toys & Games, Automotive Sound Systems, DAT & CD Players, HDTV Equipment, Digital Tapeless Recorders, Cellular Phones

Instrumentation and Measurement
  DSP tasks: Fast Fourier Transform (FFT), Filtering, Waveform Synthesis, Adaptive Filtering, High-Speed Numeric Calculations
  Devices: Test & Measurement Equipment, I/O Cards for PCs, Power Meters, Signal Analyzers and Generators

Communications
  DSP tasks: Modulation & Transmission, Demodulation & Reception, Speech Compression, Data Encryption, Echo Cancellation
  Devices: Modems, Fax Machines, Broadcast Equipment, Mobile Phones, Digital Pagers, Global Positioning Systems, Digital Answering Machines

Medical Electronics
  DSP tasks: Filtering, Echo Cancellation, Fast Fourier Transform (FFT), Beam Forming
  Devices: Respiration, Heart & Fetal Monitoring Equipment, Ultrasound Equipment, Medical Imaging Equipment, Hearing Aids

Optical and Image Processing
  DSP tasks: 2-Dimensional Filtering, Fast Fourier Transform (FFT), Pattern Recognition, Image Smoothing
  Devices: Bar Code Scanners, Automatic Inspection Systems, Fingerprint Recognition, Digital Televisions, Sonar/Radar Systems, Robotic Vision
FIR Filtering: Filtering a signal x[n] through an FIR filter with impulse response h[n] amounts to computing the convolution y[n] = h[n] * x[n]. At each sample time n, the DSP processor has to compute

    y[n] = sum_{k=0}^{N-1} h[k] x[n-k],

where N is the length of the FIR filter. From an algorithmic point of view, this means that the processor has to fetch N data samples at each sample time, multiply them with the corresponding filter coefficients stored in memory, and accumulate the sum of products to yield the final answer. Figure 12.1 illustrates these operations at two consecutive times n and n+1. It is clear from this drawing that the basic DSP operation in this case is a multiplication followed by an addition, also known as a multiply-and-accumulate operation, or MAC. Such a MAC has to be carried out N times for each output sample. This also shows that the FIR filtering operation is both data intensive (N data samples needed per output sample) and repetitive (the same MAC operation repeated N times with different operands).

Another point that appears from this figure is that, although the filter coefficients remain the same from one time instant to the next, the input samples change. They do not all change: the oldest one is discarded, the newest input sample appears, and all other samples are shifted by one location in memory. This is known as a first-in/first-out (FIFO) structure, or queue.

A final feature that is common to a lot of DSP applications is the need for real-time operation: the output sample y[n] must be computed by the processor between the moment x[n] becomes available and the time x[n+1] enters the FIFO, which leaves Ts seconds to carry out the whole processing.
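The MAC loop and FIFO update described above can be sketched as follows (a simple Python illustration of ours, not actual DSP code; the 3-tap filter is hypothetical):

```python
def fir_output(h, fifo):
    """One output sample: N multiply-and-accumulate (MAC) operations.
    fifo[0] is the newest sample x[n], fifo[k] is x[n-k]."""
    acc = 0.0
    for k in range(len(h)):
        acc += h[k] * fifo[k]     # multiply and accumulate
    return acc

def shift_in(fifo, x_new):
    """FIFO update: discard the oldest sample, insert the newest."""
    return [x_new] + fifo[:-1]

h = [0.25, 0.5, 0.25]             # hypothetical 3-tap filter
fifo = [0.0, 0.0, 0.0]
for x in [1.0, 0.0, 0.0, 0.0]:    # an impulse input recovers h itself
    fifo = shift_in(fifo, x)
    print(fir_output(h, fifo))    # 0.25, then 0.5, 0.25, 0.0
```

On a DSP chip, each iteration of the inner loop maps to a single-cycle MAC instruction, and the FIFO shift is typically realized with circular (modulo) addressing rather than data movement.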
FFT Computation: As explained in Section 11.3, a classical FFT computation by a decimation-in-time algorithm uses a simple building block known as a butterfly (see Figure 11.7). This simple operation on two values again involves multiplication and addition, as suggested by the pseudocode in Section 11.3. Another point mentioned in Section 11.3 is that the FFT algorithm yields the DFT coefficients in bit-reversed order, so that once they are computed, the processor has to restore their original order.
Summary of requirements: We have seen with the two above examples that DSP algorithms involve:

1. real-time requirements, with an intensive data flow through the processor;
2. efficient implementation of loops and MAC operations;
3. efficient addressing schemes to access FIFOs and/or bit-reversed ordered data.

In the next section, we will describe the features common to many DSPs that address the above list of requirements.
[Figure] Fig. 12.1 Operations required to compute the output of a 7-tap FIR filter at times n and n + 1.
12.3 Characterization of PDSPs
12.3.1 Fixed-Point vs. Floating-Point

Programmable DSPs can be characterized by the type of arithmetic they implement. Fixed-point processors are in general much cheaper than their floating-point counterparts. However, the dynamic range offered by floating-point processors is much higher. Another advantage of floating-point processors is that they do not require the programmer to worry too much about overflow and scaling issues, which are of paramount importance in fixed-point arithmetic (see Section 10.1.2). Some applications require a very low-cost processor with low power dissipation (like cell phones, for instance), so that fixed-point processors have the advantage in these markets.

12.3.2 Some Examples

We present here a few examples of existing DSPs, to which we will refer later to illustrate the different concepts. Texas Instruments has been successful with its TMS320 series for nearly 20 years. The latest generation of these processors is the TMS320C6xxx family, which represents very high performance DSPs. This family can be grossly divided into the TMS320C62xx, which are fixed-point processors, and the TMS320C67xx, which use floating-point arithmetic. Another example of a slightly older DSP is the DSP56300 from Motorola. The newer, state-of-the-art DSPs from this company are based on the very high performance StarCore DSP core, which yields unprecedented processing power; this is the result of a common initiative with Agere Systems. Analog Devices produces one of the most versatile low-cost DSPs, namely the ADSP21xx family. The next generation of Analog Devices DSPs are called SHARC DSPs, and contain the ADSP21xxx family of 32-bit floating-point processors. Finally, their latest generation is called TigerSHARC, and provides both fixed-point and floating-point arithmetic.
© B. Champagne & F. Labeau. Compiled November 24, 2004.
The TigerSHARC is a very high performance DSP targeting mainly very demanding applications, such as telecommunication network equipment.

12.3.3 Structural features of DSPs

We will now review several features of digital signal processors that are designed to meet the demanding constraints of real-time DSP.

Specialized hardware units and instruction set

In order to accommodate the constraints of DSP algorithms, DSP chips usually comprise specialized hardware units not found in many other processors. Most if not all DSP chips contain one or several dedicated hardware multipliers, or even complete MAC units. On conventional processors without hardware multipliers, binary multiplication can take several clock cycles; in a DSP, it takes only one clock cycle. The multiplier is associated with an accumulator register (in a MAC unit), which normally is wider than a normal register. For instance, the Motorola 563xx family (see Figure 12.3) contains a 24-bit MAC unit and a 56-bit accumulator. The extra bits provided by the accumulator are guard bits that prevent overflow from occurring during repetitive accumulate operations, such as the computation of the output of an FIR filter. In order to store the accumulated value into a normal register or in memory, the accumulator value must be shifted right by the number of guard bits used: resolution is lost, but no overflow occurs.

Memory access: Harvard architecture

Most DSPs are designed according to the so-called Harvard architecture, or an enhanced version of it. In classic microprocessors, there is only one memory bank, one address bus and one data bus: both program instructions and data are fetched from the same memory through the same buses, as illustrated in Figure 12.2(a). DSP chips, on the other hand, have in general two separate memory blocks, one for program instructions and one for data, where each memory block also has a separate set of buses, as illustrated in Figure 12.2(b). This is the essence of the Harvard architecture. This architecture enables two memory accesses during one clock cycle; combined with pipelining (see below), it enables simultaneous fetching of the next program instruction while fetching the data for the current instruction. A common enhancement is the duplication of the data memory, in order to increase data throughput: many common DSP operations, like the MAC operation, use two operands. Other enhancements found in some DSPs (such as SHARC processors) include a program instruction cache.
It is thus natural to fetch two data words from memory while fetching one instruction. In practice, this is realized either with two separate data memory banks with their own buses (as in the Motorola DSP563xx family, see Figure 12.3), or by using two-way accessible RAM (as in Analog Devices' SHARC family), in which case only the buses have to be duplicated. Many DSP chips have an architecture similar to the one in Figure 12.2(b); see, e.g., the architecture of the Motorola DSP56300 in Figure 12.3. Other critical components that enable an efficient flow of data on and off the DSP chip are Direct Memory Access (DMA) units, which enable direct storage of data in memory without processor intervention, thus increasing data throughput.

Figure 12.4 illustrates the operation of a MAC unit.
Remember that to compute one output sample of an N-tap FIR filter, you need to repeat N MAC operations. The issue of shifting the result stored in the accumulator is very important: the programmer has to keep track of the implied binary point, depending on the type of binary representation used (integer, fractional or mixed). Note that G guard bits prevent overflow in the worst case of the addition of 2^G words.

The third component necessary to the implementation of DSP algorithms is an efficient addressing mechanism. This is often taken care of by a specialized hardware Address Generation Unit. This unit works in the background, generating addresses in parallel with the main processor computations. Some common features of a DSP's address generation unit are:

• register-indirect addressing with post-increment, where a register points to an address in memory, from which the operand of the current instruction is fetched, and the pointer is incremented automatically at the end of the instruction. This is a low-level equivalent of the C language *(var_name++) construct. Such a feature is very useful when many contiguous memory locations have to be accessed, which is common when processing signal samples.

• circular addressing, where addresses are incremented modulo a given buffer size. This is an easy way to implement a FIFO or queue, which is the best way of creating a window sliding over a series of samples. The only overhead for the programmer is to set up the circular addressing mode and the buffer size in some control register.

We have also seen that DSP algorithms often contain repetitive instructions, as in the FIR filtering example given above. This is why DSPs contain a provision for efficient looping (called zero-overhead looping), which may be a special assembly language instruction (so that the programmer does not have to care about checking and decrementing loop counters) or even a dedicated hardware unit.
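Circular addressing can be emulated in software with modulo-indexed buffer accesses. The sketch below (illustrative, not tied to any particular DSP's addressing hardware) shows how a fixed-size circular buffer realizes the sliding window of an N-tap filter without ever copying data:

```python
# Circular buffer emulating a DSP's modulo addressing mode.
# A new sample overwrites the oldest one; the write "pointer" wraps
# modulo the buffer size, so the N-sample window slides without
# moving the other N-1 samples at each step.

class CircularBuffer:
    def __init__(self, size):
        self.buf = [0.0] * size
        self.head = 0                                  # next write position

    def push(self, sample):
        self.buf[self.head] = sample
        self.head = (self.head + 1) % len(self.buf)    # modulo increment

    def window(self):
        """Samples from newest to oldest, i.e. x[n], x[n-1], ..."""
        n = len(self.buf)
        return [self.buf[(self.head - 1 - k) % n] for k in range(n)]

cb = CircularBuffer(4)
for s in [1.0, 2.0, 3.0, 4.0, 5.0]:   # the 5th push overwrites the oldest
    cb.push(s)
```

On a real DSP the modulo operation is free: the address generation unit performs the wrap-around in hardware once the buffer size has been written to the control register.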
The instruction set of DSP chips often contains a MAC instruction that not only performs the MAC operation, but also increments pointers in memory; all of this happens in one instruction cycle.

Fig. 12.2 Illustration of the difference between a typical microprocessor memory access (a) and a typical Harvard architecture (b).
Fig. 12.3 Simplified architecture of the Motorola DSP563xx family. The DSP563xx is a family of 24-bit fixed-point processors. Notice the 3 separate memory banks with their associated sets of buses. Also shown is the DMA unit, with its dedicated bus linked directly to the memory banks and to off-chip memory. (Source: Motorola [9])
Another common feature of address generation units is bit-reversed addressing, where address offsets are bit-reversed; this has been shown to be useful for the implementation of FFT algorithms.

Parallelism and Pipelining

At some point, it becomes difficult to increase the clock speed enough to satisfy ever more demanding DSP algorithms. So, in order to meet real-time constraints in more and more sophisticated DSP applications, other means of increasing the number of operations carried out by the processor have been implemented: parallelism and pipelining.

Pipelining: Pipelining is a principle that is applied in any modern processor, and is not limited to DSPs. Its goal is to make better use of the processor resources by avoiding leaving some components idle while others are working. Basically, the execution of an instruction requires three main steps, each carried out by a different part of the processor: a fetch step, during which the instruction is fetched from memory; a decode step, during which the instruction is decoded; and an execute step, during which it is actually executed. Often, each of these three steps can itself be decomposed into sub-steps: for example, the execution may require fetching operands, computing a result and storing the result. The idea behind pipelining is simply to begin fetching the next instruction as soon as the current one enters the decode stage. When the current instruction reaches the execution stage, the next one is in the decode stage, and the instruction after that is in the fetch stage. In this way, the processor resources are used more efficiently and the actual instruction throughput is increased.
Fig. 12.4 Operation of a MAC unit with guard bits. Two N-bit operands are multiplied, leading to a 2N-bit result, which is accumulated in a (2N + G)-bit accumulator register. The need for guard bits is illustrated with the given operands: the values of the accumulator at different times are shown on the right of the figure, showing that the fourth operation would generate an overflow without the guard bits (the positive result would appear as a negative value). It is the programmer's task in this case to keep track of the implied binary point when shifting back the accumulator value to an N-bit value.
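The sizing rule behind the guard bits can be checked numerically: each product of two N-bit two's-complement operands fits in 2N bits, and G guard bits allow up to 2^G such products to be accumulated without overflow. A sketch with illustrative values (N = 16, G = 8; a real MAC unit does each step in one cycle):

```python
# Worst-case accumulation of 2**G products of two N-bit
# two's-complement integers. Each product needs at most 2N bits;
# accumulating 2**G of them needs at most 2N + G bits in total,
# which is exactly what the guard bits provide.

N = 16
G = 8
max_op = 2**(N - 1) - 1             # largest N-bit two's-complement value
acc = 0
for _ in range(2**G):               # 256 worst-case MAC operations
    acc += max_op * max_op
bits_needed = acc.bit_length() + 1  # +1 for the sign bit
assert bits_needed <= 2 * N + G     # fits in the (2N+G)-bit accumulator

# Storing the result back into a 2N-bit register discards resolution:
stored = acc >> G                   # shift right by the guard bits
```

Without the G extra bits, the same accumulation would wrap around in a 2N-bit register after a handful of worst-case products, turning a large positive sum into a negative value.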
In practice, care must be taken, because instructions do not all require the same time (mostly during the execution phase). Also, branching instructions might ruin the gain of pipelining for a few cycles, since some instructions already in the pipeline might have to be dropped, so that NOPs should sometimes be inserted. Figure 12.5 illustrates the pipelining principle.

Fig. 12.5 Schematic illustration of the difference in throughput between a pipelined (right) and non-pipelined (left) architecture. Columns represent processor functional units, lines represent different time instants. Idle units are shaded.

Data level parallelism: Data level parallelism is achieved in many DSPs by duplication of hardware units, each with their own data path, so that they can be used simultaneously. It is not uncommon in this case to have two or more ALUs (Arithmetic and Logic Units), two or more MACs and accumulators, and multiple address generation units, with multiple buses and memory banks. Single Instruction-Multiple Data (SIMD) processors can then apply the same operation to several data operands at the same time (two simultaneous MACs, for instance). A good example of this idea is the TigerSHARC processor from Analog Devices, illustrated in Figure 12.6. Figure 12.6(a) shows the general architecture of the processor core, with multiple buses and memory banks, and two parallel 32-bit computational units. Figure 12.6(b) shows how these two computational units can be used in parallel, together with subword parallelism to enhance the throughput of subword operations: in the case illustrated, eight 16-bit MAC operations can be carried out in just one cycle.
Another way of implementing data level parallelism is to have the processor accommodate multiple data lengths, so that, for instance, a 16-bit multiplier can accomplish two simultaneous 8-bit multiplications (subword parallelism). SIMD is efficient only for algorithms that lend themselves to this kind of parallel execution, and not, for instance, where data are treated in series or where a low-delay feedback loop exists. But it is very efficient for vector-intensive operations like multimedia processing.

Instruction level parallelism: Instruction level parallelism is achieved by having different units with different roles running concurrently. Two main families of parallel DSPs exist, namely those with a VLIW (Very Long Instruction Word) architecture and those with superscalar architectures. Superscalar architectures are used heavily in general purpose processors (e.g., Intel's Pentium): they contain a special scheduling unit which determines at run time which instructions could run in parallel, based, e.g., on data dependencies. As is, this kind of behaviour is not desirable in DSPs, because the run-time determination of parallelism makes the execution time difficult to predict, and predictability is a factor of paramount importance in real-time operation. This is why there are few superscalar DSPs available. These issues are far beyond the scope of this text, and the interested reader is referred to the references for details.

VLIW architectures [6] are characterized by long instruction words, which are in fact concatenations of several shorter instructions.
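The SIMD idea described above can be modeled in a few lines: a single (simulated) instruction applies the same multiply-accumulate to every lane of packed operands. This is a toy software model, not a description of any particular DSP's instruction set:

```python
# Software model of a SIMD MAC instruction: one call applies the same
# multiply-accumulate to every lane, the way a SIMD unit processes
# several packed 16-bit values at once.

def simd_mac(acc, a, b):
    """Lane-wise acc[i] += a[i] * b[i] for all lanes in one 'instruction'."""
    return [ac + x * y for ac, x, y in zip(acc, a, b)]

# Eight 16-bit lanes processed by a single simulated instruction:
acc = [0] * 8
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10] * 8
acc = simd_mac(acc, a, b)      # eight MACs in one "cycle"
```

The model also makes the limitation visible: the lanes must be independent. A recursion such as y[n] = a*y[n-1] + x[n], where each lane would need the previous lane's result, cannot be expressed as one lane-wise call.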
Each short instruction will ideally execute in parallel with the others on a different hardware unit. This principle is illustrated in Figure 12.7. This heavy parallelism normally provides an enormous increase in instruction throughput, but in practice some limitations will prevent the hardware units from working at full rate in parallel all the time: if some of the short instructions in a VLIW instruction use the same resources (e.g., registers), they will not be able to execute in parallel; also, data dependencies between the short instructions might prevent their parallel execution.

A good illustration of these limitations can be given by looking at a practical example of a VLIW processor: the TMS320C6xxx family from Texas Instruments, whose simplified architecture is shown in Figure 12.8. In the TMS320C6xxx, the instruction words are 128 bits wide, each containing eight 32-bit instructions. Each of these 32-bit instructions is to be executed by one of 8 different computational units, grouped in so-called data paths A and B (see Figure 12.8). These 8 units are constituted by 4 duplicated units: ALU, ALU and shifter, address generation unit, and multiplier. Each data path contains a set of registers, and both data paths share the same data memory. It is clear in this case that, in order for all 8 instructions of a VLIW instruction to be executed in parallel, one instruction must be executed on each computational unit; for instance, a VLIW instruction cannot contain more than two multiplications, one for the multiplier of each data path. The actual 32-bit instructions from the same 128-bit VLIW that are executed in parallel are in fact chosen at assembly time, by a special directive in the TMS320C6xxx assembly language; of course, if the program is written in some high-level language such as C, the compiler takes care of this instruction organization and grouping.

Another example of a processor using VLIW is the TigerSHARC from Analog Devices, described earlier (see Figure 12.6). This DSP combines SIMD and VLIW to attain a very high level of parallelism: here, VLIW means that each of the two computational units in Figure 12.6(b) can be issued a different instruction, for a total of eight simultaneous 16-bit × 16-bit MACs per cycle.
Fig. 12.6 Simplified view of an Analog Devices TigerSHARC architecture. (a) The figure shows the 3 128-bit buses and 3 memory banks, as well as the 2 computational units and 2 address generation units; the address buses have been omitted for clarity. The programmer can assign program and/or data to each bus/memory bank. (b) Illustration of the data-level parallelism by SIMD operation: each computational unit can execute two 32-bit × 32-bit fixed-point MACs per cycle, but can also work in subword parallelism, executing up to four 16-bit × 16-bit MACs. (Source: ADI [1])
Fig. 12.7 Principle of a VLIW architecture: long instructions are split into short instructions executed in parallel by different hardware units.

Other Implementations

We have reviewed the characteristics of DSPs, and shown why they are suited to the implementation of DSP algorithms. However, other choices are possible when a DSP algorithm has to be implemented in an end product. These choices are listed below, and compared with a PDSP-based solution.

ASIC: An Application Specific Integrated Circuit will execute the required algorithm much faster than a programmable DSP processor, but will of course lack any reconfigurability, so upgrade and maintenance of products in the field are impossible. The design costs are also prohibitive, except for very high volumes of deployed units.

FPGA: Field-Programmable Gate Arrays are hardware-reconfigurable circuits that represent a middle way between a programmable processor and an ASIC. They are faster than PDSPs, but are in general more expensive, have a higher power consumption, and the design work tends to be longer than on a PDSP.

General Purpose Processor: Nowadays, general purpose processors, such as Intel's Pentium, show extremely high clock rates. They also have very efficient caching and pipelining, and make extensive use of superscalar architectures. Most of them also have a SIMD extension to their instruction set (e.g., MMX for Intel). However, they suffer some drawbacks for DSP implementations: high power consumption and the need for proper cooling, higher cost (at least compared with most fixed-point DSPs), and a lack of execution-time predictability caused by multiple caching and dynamic superscalar architectures, which makes them less suited to hard real-time requirements.
Fig. 12.8 Simplified architecture of a VLIW processor: the Texas Instruments TMS320C6xxx. Each VLIW instruction is made of eight 32-bit instructions, for up to eight parallel instructions issued to the eight processing units divided into operational units A and B. (Source: TI [13])
12.4 Benchmarking PDSPs

Once the decision has been made to implement an algorithm on a PDSP, the time comes to choose a model amongst the many available brands and families. Many factors will enter into account, and we try to review some of these here.

Fixed or Floating Point: As stated earlier, floating-point DSPs have a higher dynamic range and do not require the programmer to worry as much about overflow and round-off errors. These benefits come at the expense of a higher cost and a higher power consumption. So, most of the time, the application will decide: applications where low cost and/or low power consumption are an issue (large volume, mobile, ...) will call for a fixed-point DSP. Once the type of arithmetic is chosen, one still has to choose the word width (in bits). This is crucial for fixed-point applications, since the number of bits available directly defines the attainable dynamic range.

Design issues: Time-to-market is an important factor that will perhaps be the main driving force behind the choice of a DSP. Since most new DSPs have their own instruction set and architecture, the DSP manufacturer must provide development tools (compiler, assembler). Also, if prior knowledge of one DSP is available, it may orient the choice of a new DSP to one that is code-compatible, in order to reuse legacy code.

Speed: Manufacturers publish the speed of their devices as MIPS, MOPS or MFLOPS. MIPS (Millions of Instructions Per Second) simply gives a hypothetical peak instruction rate through the DSP. It is determined by NI/TI, where TI is the processor's instruction cycle time in µs, and NI is the maximum number of instructions executed per cycle (NI > 1 for processors with parallelism). MOPS and MFLOPS (Millions of Operations per Second and Millions of Floating-Point Operations per Second) have no consensual definition: no two device manufacturers agree on what an "operation" might be. The only real way of comparing different DSPs in terms of speed is to run highly optimized benchmark programs, including typical DSP operations such as FIR filtering and FFTs, on the devices to be compared.
Due to the big differences between the instruction sets, architectures and operational units of different DSPs, it is difficult to conclude much from MIPS measurements: instructions on one DSP can be much more powerful than on another one, so that a lower instruction count will do the same job. Figure 12.9 shows an example of such a benchmark, the BDTIMark2000, produced by Berkeley Design Technologies. Also, if prior knowledge of one DSP is available, designers may opt for a DSP that is code-compatible with one they already use, in order to reuse legacy code.
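The peak MIPS figure NI/TI can be computed directly from the cycle time and issue width; the clock rate and issue width below are made up purely for illustration:

```python
# Peak MIPS rating: MIPS = N_I / T_I, where T_I is the instruction
# cycle time in microseconds and N_I is the maximum number of
# instructions issued per cycle (N_I > 1 for parallel processors).

def peak_mips(cycle_time_us, issue_width):
    return issue_width / cycle_time_us

# A hypothetical 200 MHz VLIW DSP issuing up to 8 instructions per cycle:
T_I = 1.0 / 200.0                  # 200 MHz -> 0.005 us per cycle
rating = peak_mips(T_I, 8)         # 1600 peak MIPS
```

As the text stresses, this is a hypothetical peak: it says nothing about how much useful work each instruction performs, which is why benchmark suites are preferred for comparisons.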
12.5 Current Trends

Multiple DSPs: Numerous new applications cannot be tackled by one DSP processor, even the most heavily parallel ones. As a consequence, new DSP chips are developed that are easily grouped in parallel or in clusters to yield heavily parallel computing engines [2]. Of course, new challenges appear in the parallelisation of algorithms, the management of shared resources, and the resolution of data dependencies.

New high-level development tools should ease the development of DSP algorithms and their implementation. For example, some manufacturers have a collaboration with The MathWorks to develop a MATLAB/SIMULINK toolbox enabling direct design and/or simulation of software for the DSP chip in MATLAB and SIMULINK.

FPGAs coming on fast: FPGA manufacturers are aggressively entering the DSP market, due mainly to the very high densities attained by their latest products and to the availability of high-level development tools (going directly from a block-level vision of the algorithm to the FPGA). The claimed advantages of FPGAs over PDSPs are speed and a completely flexible structure that allows for complete customization and highly parallel implementations [12]. For example, one manufacturer now promises the capability of implementing 192 eighteen-bit multiplications in parallel.

Fig. 12.9 Speed benchmark for several available DSPs: the BDTIMark2000, by Berkeley Design Technologies (higher is faster). BDTIMark2000 is an aggregate speed benchmark computed from the speed of the processor in accomplishing common DSP tasks such as FFT, FIR filtering and IIR filtering. (Source: BDTI)
Chapter 13: Multirate systems

13.1 Introduction

Multirate system: A multirate DSP system is characterized by the use of multiple sampling rates in its realization, that is: signal samples at various points in the system may not correspond to the same physical sampling frequency. In some applications, the need for multirate processing comes naturally from the problem definition (e.g., sampling rate conversion). In other applications, multirate processing is used to achieve improved performance of a DSP function (e.g., lower binary rate in speech compression). Some applications of multirate processing include:

• Sampling rate conversion between different digital audio standards.
• Digital anti-aliasing filtering (used in commercial CD players).
• Subband coding of speech and video signals.
• Subband adaptive filtering.

Sampling rate modification: The problem of sampling rate modification is central to multirate signal processing. Suppose we are given the samples of a continuous-time signal xa(t), that is

x[n] = xa(nTs), n ∈ Z    (13.1)

with sampling period Ts and corresponding sampling frequency Fs = 1/Ts. Then, how can we efficiently generate a new set of samples

x'[n] = xa(nTs'), n ∈ Z    (13.2)

corresponding to a different sampling period Ts' ≠ Ts? Assuming that the original sampling rate satisfies Fs > 2Fmax, where Fmax represents the maximum frequency contained in the analog signal xa(t), one possible approach is to reconstruct xa(t) from the set of samples x[n] (see Sampling Theorem, Section 7.2) and then to resample xa(t) at the desired rate Fs' = 1/Ts'. This approach is expensive, as it requires the use of D/A and A/D devices.
In this chapter, we seek a cheaper, purely discrete-time solution to the above problem, i.e. performing the resampling by direct manipulation of the discrete-time signal samples x[n], without going back into the analog domain. The following special terminology will be used here:

• Ts' > Ts (i.e. Fs' < Fs) ⇒ downsampling
• Ts' < Ts (i.e. Fs' > Fs) ⇒ upsampling or interpolation

13.2 Downsampling by an integer factor

Let Ts' = MTs, where M is an integer ≥ 1. In this case, we have:

x'[n] = xa(nTs') = xa(nMTs) = x[nM]    (13.3)

The above operation is so important in multirate signal processing that it is given the special name of decimation.

M-fold decimation: Decimation by an integer factor M ≥ 1, or simply M-fold decimation, is defined by the following operation:

xD[n] = DM{x[n]} ≜ x[nM], n ∈ Z    (13.4)

That is, the output signal xD[n] is obtained by retaining only one out of every M input samples x[n]. Accordingly, a device that implements (13.4) is referred to as a decimator; it is sometimes also called a sampling rate compressor or subsampler. The block diagram form of the M-fold decimator is illustrated in Figure 13.1.

Fig. 13.1 M-fold decimator (input rate 1/Ts, output rate 1/(MTs)).

It should be clear from (13.4) that the decimator is a linear system, albeit a time-varying one, as shown by the following example. The basic properties of the decimator are studied in greater detail below.

Example 13.1: Consider the signal

x[n] = cos(πn/2) u[n] = {1, 0, −1, 0, 1, 0, −1, 0, . . .}

where the first entry of the sequence corresponds to n = 0.
If x[n] is applied at the input of a 2-fold decimator (i.e. M = 2), the resulting signal will be

xD[n] = x[2n] = cos(πn) u[2n] = {1, −1, 1, −1, . . .}

Only those samples of x[n] corresponding to even values of n are retained in xD[n]. Now consider the following shifted version of x[n]:

x̃[n] = x[n − 1] = {0, 1, 0, −1, 0, 1, . . .}

If x̃[n] is applied at the input of a 2-fold decimator, the output signal will be

x̃D[n] = x̃[2n] = {0, 0, 0, . . .}

Note that x̃D[n] ≠ xD[n − 1]. This shows that the 2-fold decimator is time-varying. This is illustrated in Figure 13.2.

Fig. 13.2 Example of 2-fold decimation

Property: Let xD[n] = x[nM], where M is a positive integer. The z-transforms of the sequences xD[n] and x[n], denoted XD(z) and X(z) respectively, are related as follows:

XD(z) = (1/M) ∑_{k=0}^{M−1} X(z^{1/M} W_M^k)    (13.5)

where W_M = e^{−j2π/M}.
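The decimation operation of (13.4) and the time-variance demonstrated in Example 13.1 can be reproduced in a few lines of Python (a quick numerical sketch):

```python
import math

# M-fold decimation (13.4): x_D[n] = x[nM], i.e. keep one sample out of M.
def decimate(x, M):
    return x[::M]

# Example 13.1: x[n] = cos(pi*n/2) u[n] for n = 0..7
x = [round(math.cos(math.pi * n / 2)) for n in range(8)]   # [1,0,-1,0,1,0,-1,0]
xD = decimate(x, 2)                                        # [1,-1,1,-1]

# The shifted input x~[n] = x[n-1] decimates to the all-zero sequence,
# not to a shifted copy of xD: the decimator is time-varying.
x_shift = [0] + x[:-1]
xD_shift = decimate(x_shift, 2)                            # [0,0,0,0]
```

Comparing xD_shift with a one-sample delay of xD shows directly that shifting the input and decimating do not commute.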
Proof: Introduce the sequence

p[l] ≜ (1/M) ∑_{k=0}^{M−1} e^{j2πkl/M} = 1 if l = 0, ±M, ±2M, . . . ; 0 otherwise    (13.6)

Making use of p[l], we have:

XD(z) = ∑_{n=−∞}^{∞} xD[n] z^{−n}
      = ∑_{n=−∞}^{∞} x[nM] z^{−(nM)/M}
      = ∑_{l=−∞}^{∞} p[l] x[l] z^{−l/M}
      = ∑_{l=−∞}^{∞} { (1/M) ∑_{k=0}^{M−1} e^{j2πkl/M} } x[l] z^{−l/M}
      = (1/M) ∑_{k=0}^{M−1} ∑_{l=−∞}^{∞} x[l] (z^{1/M} e^{−j2πk/M})^{−l}
      = (1/M) ∑_{k=0}^{M−1} X(z^{1/M} W_M^k)    (13.7)

Remarks:

• In the special case M = 2, we have W_2 = e^{−jπ} = −1, so that

XD(z) = (1/2) [X(z^{1/2}) + X(−z^{1/2})]    (13.8)

• Setting z = e^{jω} in (13.5), an equivalent expression is obtained that relates the DTFTs of xD[n] and x[n], namely:

XD(ω) = (1/M) ∑_{k=0}^{M−1} X((ω − 2πk)/M)    (13.9)

• According to (13.9), the spectrum XD(ω) is made up of properly shifted, expanded images of the original spectrum X(ω).

Example 13.2: In this example, we illustrate the effects of an M-fold decimation in the frequency domain. Consider a discrete-time signal x[n] with DTFT X(ω) as shown in Figure 13.3 (top), where ωmax = π/2 denotes the maximum frequency contained in x[n]. That is, X(ω) = 0 if ωmax ≤ |ω| ≤ π. Let xD[n] denote the result of an M-fold decimation of x[n]. First consider the case M = 2, where (13.9) simplifies to

XD(ω) = (1/2) [X(ω/2) + X((ω − 2π)/2)]    (13.10)
e. the two sets of spectral images X(ω/2) and X( ω−2π ) do not overlap 2 because ωmax < pi/2. Note that X(ω) is no longer of period 2π but instead its period is 4π. X(ω/2) is a dilated version of X(ω) by a factor of 2 along the ωaxis.248 Chapter 13. In this special case. XD (ω) is obtained as the sum of X(ω/2) and its shifted version by 2π. That is. there is no spectral aliasing between the images and. besides the amplitude scaling and the frequency dilation. π]: XD (ω) = c B. Multirate systems X (ω ) 1 −π ω max π 2π 4π ω X D (ω ) 1/ 2 M =2 −π π 2π 4π ω X D (ω ) 1/ 3 M =3 −π π 2π 4π ω Fig. the relationship (13. X(ω) = 0 for π ≤M≤π M (13.e.3 (middle). 13. Champagne & F. Labeau 1 ω X( ).9) simpliﬁes to 1 ω ω − 2π ω − 4π XD (ω) = [X( ) + X( ) + X( )] 3 3 3 3 (13. the various images do overlap. the original spectral shape X(ω) is not distorted. Discussion: It can be inferred from the above example that if X(ω) is properly bandlimited. In other words. no information about the original x[n] is lost through the decimation process. M M 0 ≤ ω ≤ π (13. 2004 . We will see later how x[n] can be recovered from xD [n] in this case.3 (bottom). XD (ω) is obtained by superposing dilated and shifted versions of X(ω). Note that in the above case M = 2. i. as shown in Figure 13. The resulting DTFT XD (ω) is illustrated in Figure 13. X( ω−2π ). The introduction of this latter component 2 restores the periodicity to 2π.9) between X(ω) and XD (ω) simpliﬁes to the following if one only considers the fundamental period [−π. Next consider the case M = 3.11) Here.3 In this equation. Thus. i. there is spectral aliasing and a loss of information in the decimation process.13) Compiled November 24. Besides scaling factor 1/2. Here the dilation factor is 3 and since ωmax > π/3.12) then no loss of information occurs during the decimation process. where (13.
Downsampling system: In practice, a lowpass digital filter with cutoff at π/M would be included along the processing chain prior to M-fold decimation. The resulting system, referred to as a downsampling system, is illustrated in Figure 13.4. Ideally, the lowpass anti-aliasing filter HD(ω) should have the following specifications:

HD(ω) = 1 if |ω| < π/M ; 0 if π/M ≤ |ω| ≤ π    (13.14)

Fig. 13.4 Downsampling system: anti-aliasing filter HD(ω) followed by an M-fold decimator.

13.3 Upsampling by an integer factor

Here we consider the sampling rate conversion problem (13.1)–(13.2) with Ts' = Ts/L, where L is an integer ≥ 1. Suppose that the underlying analog signal xa(t) is band-limited to ΩN, i.e. X(Ω) = 0 for |Ω| ≥ ΩN, and that the sequence x[n] was obtained by uniform sampling of xa(t) at a rate Fs such that Ωs = 2πFs ≥ 2ΩN. In this case, according to the sampling theorem (Section 7.1.2), xa(t) can be expressed in terms of its samples x[n] as

xa(t) = ∑_{k=−∞}^{∞} x[k] sinc(t/Ts − k)    (13.15)

Invoking (13.2), the new samples x'[n] can therefore be expressed in terms of the original samples x[n] as follows:

x'[n] = xa(nTs') = ∑_{k=−∞}^{∞} x[k] sinc(nTs'/Ts − k) = ∑_{k=−∞}^{∞} x[k] sinc((n − kL)/L)    (13.16)

The above equation can be given a simple interpretation if we introduce the concept of sampling rate expansion.

13.3.1 L-fold expansion

Sampling rate expansion by an integer factor L ≥ 1, or simply L-fold expansion, is defined as follows:

xE[n] = UL{x[n]} ≜ x[k] if n = kL, k ∈ Z ; 0 otherwise    (13.17)
That is, L − 1 zeros are inserted between each pair of consecutive samples of x[n]. The block diagram form of an L-fold expander system is illustrated in Figure 13.5. Real-time implementation of an L-fold expansion requires that the output sample rate be L times larger than the input sample rate. The expander is also sometimes called an upsampler in the DSP literature.

[Fig. 13.5: L-fold expander: x[n] → L↑ → xE[n] (input rate 1/Ts, output rate L/Ts).]

Example 13.3: Consider the signal

x[n] = a^n u[n] = {..., 0, 1, a, a², a³, ...}

The result of a 3-fold expansion applied to x[n] is

xE[n] = {..., 0, 1, 0, 0, a, 0, 0, a², 0, 0, a³, 0, 0, ...}

That is, two zero samples are inserted between every pair of consecutive samples of x[n].

Sampling rate expansion is a linear time-varying operation. This is illustrated by the following example. Consider the shifted version of x[n]:

x̃[n] = x[n − 1] = {..., 0, 0, 1, a, a², a³, ...}

L-fold expansion applied to x̃[n], with L = 3, yields

x̃E[n] = {..., 0, 0, 0, 1, 0, 0, a, 0, 0, a², ...}

Note that x̃E[n] ≠ xE[n − 1]. Actually, in the above example, we have x̃E[n] = xE[n − 3], showing that the expander is a time-varying system.

Property: Let xE[n] be defined as in (13.17), with L a positive integer. The z-transforms of the sequences xE[n] and x[n], respectively denoted XE(z) and X(z), are related as follows:

XE(z) = X(z^L)

Proof: Since the sequence xE[n] = 0 unless n = kL, k ∈ Z, we immediately obtain

XE(z) = ∑_{n=−∞}^{∞} xE[n] z^{−n} = ∑_{k=−∞}^{∞} xE[kL] z^{−kL}   (13.18)
      = ∑_{k=−∞}^{∞} x[k] z^{−kL} = X(z^L)   (13.19)
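Both the zero-insertion operation (13.17) and the property XE(z) = X(z^L) are easy to confirm numerically. On the unit circle the property reads XE(ω) = X(ωL), which the sketch below (NumPy assumed; the short test sequence is an arbitrary truncation of a^n u[n] with a = 1/2) verifies exactly:

```python
import numpy as np

# L-fold expansion (13.17) and a unit-circle check of X_E(z) = X(z^L).
L = 3
x = np.array([1.0, 0.5, 0.25, 0.125])   # a^n u[n] truncated, a = 1/2
xE = np.zeros(L * len(x))
xE[::L] = x                              # insert L-1 zeros between samples

def dtft(sig, w):
    """DTFT of a finite sequence, evaluated at the frequencies in w."""
    n = np.arange(len(sig))
    return sig @ np.exp(-1j * np.outer(n, w))

w = np.linspace(-np.pi, np.pi, 64)
assert np.allclose(dtft(xE, w), dtft(x, L * w))   # X_E(w) = X(wL)
```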
Remarks: Setting z = e^{jω} in (13.18), we obtain an equivalent relation in terms of the DTFTs of xE[n] and x[n], namely:

XE(ω) = X(ωL)   (13.20)

According to (13.20), the frequency axis in X(ω) is compressed by a factor L to yield XE(ω). Because of the periodicity of X(ω), this operation introduces L − 1 images of the (compressed) original spectrum within the range [−π, π].

Example 13.4: The effects of an L-fold expansion in the frequency domain are illustrated in Figure 13.6, in the special case L = 4. The top portion shows the DTFT X(ω) of the original signal x[n]. The bottom portion shows the DTFT XE(ω) of xE[n], the L-fold expansion of x[n]. As a result of the time-domain expansion, the original spectrum X(ω) has been compressed by a factor of L = 4 along the ω-axis.

[Fig. 13.6: Effects of L-fold expansion in the frequency domain (L = 4): X(ω) (top) and XE(ω) (bottom).]

13.3.2 Upsampling (interpolation) system

We are now in a position to provide a simple interpretation for the upsampling formula (13.16). Suppose xa(t) is bandlimited with maximum frequency ΩN. Consider uniform sampling at rate Ωs ≥ 2ΩN followed by L-fold expansion, versus direct uniform sampling at rate Ωs' = LΩs. The effects of these operations in the frequency domain are shown in Figure 13.7, in the special case L = 2 and Ωs = 2ΩN. Clearly, X'(ω) can be recovered from XE(ω) via lowpass filtering:

X'(ω) = HI(ω) XE(ω)   (13.21)

where HI(ω) is an ideal lowpass filter with specifications

HI(ω) = L if |ω| < π/L; 0 if π/L ≤ |ω| ≤ π   (13.22)
[Fig. 13.7: Sampling at rate Ωs = 2ΩN followed by L-fold expansion (left) versus direct sampling at rate Ωs' = LΩs (right); shown are Xa(Ω), then X(ω) and XE(ω) on the left and X'(ω) on the right.]
The equivalent impulse response of this ideal filter is given by

hI[n] = sinc(n/L)   (13.23)

Let us verify that the above interpretation of upsampling, as the cascade of a sampling rate expander followed by a LP filter, is consistent with (13.16):

x'[n] = ∑_{k=−∞}^{∞} x[k] sinc((n − kL)/L)
      = ∑_{k=−∞}^{∞} xE[kL] sinc((n − kL)/L)
      = ∑_{l=−∞}^{∞} xE[l] sinc((n − l)/L)
      = ∑_{l=−∞}^{∞} xE[l] hI[n − l]   (13.24)

A block diagram of a complete upsampling system operating in the discrete-time domain is shown in Figure 13.8. In the context of upsampling, the LP filter HI(ω) is called an anti-imaging filter, or also an interpolation filter. We shall also refer to this system as a digital interpolator.

[Fig. 13.8: Discrete-time upsampling system: x[n] → L↑ → xE[n] → HI(ω) → x'[n] (input rate 1/Ts, output rate L/Ts).]

13.4 Changing sampling rate by a rational factor

Suppose Ωs' and Ωs are related as follows:

Ωs' = (L/M) Ωs   (13.25)

Such a conversion of the sampling rate may be accomplished by means of upsampling and downsampling by appropriate integer factors:

Ωs → (upsampling by L) → LΩs → (downsampling by M) → (L/M) Ωs   (13.26)

To avoid unnecessary loss of information when M > 1, upsampling is applied first. The corresponding block diagram is shown in Figure 13.9.

Simplified structure: Recall the definitions of HI(ω) and HD(ω):

HI(ω) = L if |ω| < π/L; 0 if π/L ≤ |ω| ≤ π   (13.27)
[Fig. 13.9: Sampling rate alteration by a rational factor: x[n] → L↑ → HI(ω) → HD(ω) → M↓ → x'[n] (upsampling by L at input rate 1/Ts, downsampling by M).]

HD(ω) = 1 if |ω| < π/M; 0 if π/M ≤ |ω| ≤ π   (13.28)

The cascade of these two filters is equivalent to a single LP filter with frequency response

HID(ω) = HI(ω) HD(ω) = L if |ω| < ωc; 0 if ωc ≤ |ω| ≤ π   (13.29)

where

ωc = min(π/M, π/L)   (13.30)

The resulting simplified block diagram is shown in Figure 13.10.
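The simplified structure above can be sketched as follows. The Hamming-windowed-sinc FIR below is a hypothetical approximation of HID(ω) in (13.29)-(13.30), with gain L and cutoff min(π/M, π/L); the filter length and test tone are our own choices (NumPy assumed):

```python
import numpy as np

L, M = 3, 2                                # rate change by L/M = 3/2
P = 151                                    # FIR length (our choice)
k = np.arange(P) - (P - 1) // 2
wc = min(np.pi / M, np.pi / L)             # cutoff of H_ID (13.30)
hID = L * (wc / np.pi) * np.sinc(wc * k / np.pi) * np.hamming(P)  # gain L (13.29)

def resample(x, L, M, h):
    """L-fold expander, lowpass H_ID, then M-fold decimator (Fig. 13.10)."""
    xE = np.zeros(L * len(x))
    xE[::L] = x                            # expander (13.17)
    return np.convolve(xE, h)[::M]         # filter, then M-fold decimation

n = np.arange(600)
x = np.cos(0.2 * np.pi * n)                # tone at w0 = 0.2*pi
y = resample(x, L, M, hID)                 # tone should move to w0*M/L
```

A tone at ω0 = 0.2π at the input appears at ω0·M/L ≈ 0.133π at the output, with amplitude close to 1 thanks to the gain-L interpolation filter.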
[Fig. 13.10: Simplified rational-factor sampling rate alteration: x[n] → L↑ → HID(ω) → M↓ → x'[n] (input rate 1/Ts).]

13.5 Polyphase decomposition

Introduction: The polyphase decomposition is a mathematical tool used in the analysis and design of multirate systems. It leads to theoretical simplifications as well as computational savings in many important applications. Consider a system function

H(z) = ∑_{n=−∞}^{∞} h[n] z^{−n}   (13.31)

where h[n] denotes the corresponding impulse response. Consider the following development of H(z), in which the summation over n is split into summations over even and odd indices, respectively:

H(z) = ∑_{r=−∞}^{∞} h[2r] z^{−2r} + ∑_{r=−∞}^{∞} h[2r + 1] z^{−(2r+1)}
     = ∑_{r=−∞}^{∞} h[2r] z^{−2r} + z^{−1} ∑_{r=−∞}^{∞} h[2r + 1] z^{−2r}
     = E0(z²) + z^{−1} E1(z²)   (13.32)
where the functions E0(z) and E1(z) are defined as

E0(z) = ∑_{r=−∞}^{∞} h[2r] z^{−r}   (13.33)
E1(z) = ∑_{r=−∞}^{∞} h[2r + 1] z^{−r}   (13.34)

We refer to (13.32) as the 2-fold polyphase decomposition of the system function H(z). The functions E0(z) and E1(z) are the corresponding polyphase components. The above development can easily be generalized, leading to the following result.

Property: In its region of convergence, any system function H(z) admits a unique M-fold polyphase decomposition

H(z) = ∑_{l=0}^{M−1} z^{−l} El(z^M)   (13.35)

where the functions El(z) are called the polyphase components of H(z).

Proof: The basic idea is to split the summation over n in (13.31) into M individual summations over index r ∈ Z, such that each summation covers the time indices n = Mr + l, where l = 0, 1, ..., M − 1:

H(z) = ∑_{l=0}^{M−1} ∑_{r=−∞}^{∞} h[Mr + l] z^{−(Mr+l)}
     = ∑_{l=0}^{M−1} z^{−l} ∑_{r=−∞}^{∞} h[Mr + l] z^{−Mr}
     = ∑_{l=0}^{M−1} z^{−l} El(z^M)   (13.36)

where we define

El(z) = ∑_{r=−∞}^{∞} h[Mr + l] z^{−r}   (13.37)

This shows the existence of the polyphase decomposition. The unicity is proved by invoking the unique nature of the series expansion (13.31).

Remarks:
• The proof is constructive, in that it suggests a way of deriving the polyphase components El(z) via (13.37).
• The use of (13.37) is recommended for deriving the polyphase components in the case of an FIR filter H(z). For IIR filters, there may be simpler approaches.
• In any case, once a polyphase decomposition as in (13.35) has been obtained, one can be sure that it is the right one, since it is unique.
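Equation (13.37) translates directly into code: the l-th polyphase component of an FIR filter is just every M-th sample of h[n], starting at offset l. A small check (NumPy assumed; the length-12 test filter is arbitrary):

```python
import numpy as np

# Type-1 polyphase decomposition (13.35)/(13.37): e_l[r] = h[Mr + l].
M = 3
h = np.arange(1.0, 13.0)          # arbitrary FIR impulse response, length 12
E = [h[l::M] for l in range(M)]   # polyphase components e_l[r] = h[Mr + l]

# Rebuild h[n] from its components: the term z^{-l} E_l(z^M) contributes
# the coefficient h[Mr + l] at time index n = Mr + l.
h_rec = np.zeros_like(h)
for l in range(M):
    h_rec[l::M] = E[l]
assert np.allclose(h_rec, h)      # H(z) = sum_l z^{-l} E_l(z^M) holds
```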
Example 13.5: Consider the FIR filter

H(z) = (1 − z^{−1})^6

To find the polyphase components for M = 2, we may proceed as follows:

H(z) = 1 − 6z^{−1} + 15z^{−2} − 20z^{−3} + 15z^{−4} − 6z^{−5} + z^{−6}
     = (1 + 15z^{−2} + 15z^{−4} + z^{−6}) + z^{−1}(−6 − 20z^{−2} − 6z^{−4})
     = E0(z²) + z^{−1} E1(z²)

where

E0(z) = 1 + 15z^{−1} + 15z^{−2} + z^{−3}
E1(z) = −6 − 20z^{−1} − 6z^{−2}

Noble identities: The first identity can be used to exchange the order of a decimator and an arbitrary filter G(z): a filter G(z^M) followed by an M-fold decimator is equivalent to an M-fold decimator followed by G(z) (Figure 13.11). The second identity can be used to exchange the order of an expander and a filter G(z): an L-fold expander followed by G(z^L) is equivalent to G(z) followed by an L-fold expander (Figure 13.12). The proofs of these identities are beyond the scope of this course.

[Fig. 13.11: 1st Noble identity: x[n] → G(z^M) → M↓ → y[n] ≡ x[n] → M↓ → G(z) → y[n].]

[Fig. 13.12: 2nd Noble identity: x[n] → L↑ → G(z^L) → y[n] ≡ x[n] → G(z) → L↑ → y[n].]

Efficient downsampling structure: The polyphase decomposition can be used along with Noble identity 1 to derive a computationally efficient structure for downsampling.

• Consider the basic block diagram of a downsampling system (Figure 13.13). Suppose that the lowpass anti-aliasing filter H(z) is FIR with length N, i.e.

H(z) = ∑_{n=0}^{N−1} h[n] z^{−n}   (13.38)

[Fig. 13.13: Block diagram of a downsampling system: x[n] → H(z) → M↓ → y[n] (input rate 1/Ts, output rate 1/MTs).]
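The polyphase components found in Example 13.5 above can be verified numerically (NumPy assumed; the coefficient arrays are the polynomial coefficients in increasing powers of z^{−1}):

```python
import numpy as np

# Check of Example 13.5: M = 2 polyphase components of H(z) = (1 - z^-1)^6.
h = np.array([1.0])
for _ in range(6):
    h = np.convolve(h, [1.0, -1.0])        # multiply by (1 - z^-1)

E0, E1 = h[0::2], h[1::2]                  # e_l[r] = h[2r + l], per (13.37)
assert np.array_equal(h, [1, -6, 15, -20, 15, -6, 1])
assert np.array_equal(E0, [1, 15, 15, 1])  # 1 + 15z^-1 + 15z^-2 + z^-3
assert np.array_equal(E1, [-6, -20, -6])   # -6 - 20z^-1 - 6z^-2
```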
Assuming a direct-form (DF) realization for H(z), the above structure requires N multiplications per unit time at the input sampling rate. Note that only 1 out of every M samples computed by the FIR filter is actually output by the M-fold decimator.

• For simplicity, assume that N is a multiple of M. Making use of the polyphase decomposition (13.36), the equivalent structure of Figure 13.14 is obtained for the downsampling system, where the polyphase components are given here by

El(z^M) = ∑_{r=0}^{N/M−1} h[Mr + l] z^{−Mr}   (13.39)

If realized in direct form, each filter El(z^M) requires N/M multiplications per unit time. Since there are M such filters, the computational complexity is the same as above.

[Fig. 13.14: Equivalent structure for the downsampling system: input delay chain z^{−1}, polyphase filters El(z^M), branch outputs summed and then decimated by M.]

• Finally, consider interchanging the order of the filters El(z^M) and the decimator in Figure 13.14 by making use of Noble identity 1. Observe that the resulting structure (Figure 13.15) now only requires N/M multiplications per unit time.

[Fig. 13.15: Final equivalent structure for the downsampling system: input delay chain, M-fold decimators, polyphase filters El(z), branch outputs summed.]
One could argue that in the case of an FIR filter H(z), only one out of every M filter outputs in Fig. 13.13 needs to be computed. However, since the input samples are coming in at the higher rate Fs, the use of a traditional DF realization for H(z) still requires that the computation of any particular output be completed before the next input sample x[n] arrives and modifies the delay line. Thus the peak required computational rate is still N multiplications per input sample, even though the filtering operation need only be activated once every M samples. This may be viewed as an inefficient use of the available computational resources. The real advantage of the structure in Fig. 13.15 is that it makes it possible to spread the FIR filter computations over time, so that the required computational rate is uniform and equal to N/M multiplications per input sample.
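The equivalence between the direct structure of Fig. 13.13 and the polyphase structure of Fig. 13.15 can be checked numerically. In the sketch below (NumPy assumed; filter and signal are arbitrary test data), branch l delays the input by l samples, decimates by M, and filters with El(z) at the low rate:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 12
h = rng.standard_normal(N)                 # FIR of length N (multiple of M)
x = rng.standard_normal(200)

# Direct structure (Fig. 13.13): full convolution, keep every M-th sample.
y_direct = np.convolve(x, h)[::M]

# Polyphase structure (Fig. 13.15): each branch works at the low rate.
Lout = len(y_direct)
y_poly = np.zeros(Lout)
for l in range(M):
    el = h[l::M]                           # e_l[r] = h[Mr + l], per (13.37)
    xl = np.concatenate([np.zeros(l), x])  # z^{-l} delay
    v = np.convolve(xl[::M], el)           # decimate, then filter with E_l(z)
    m = min(len(v), Lout)                  # branch lengths can differ by one
    y_poly[:m] += v[:m]

assert np.allclose(y_poly, y_direct)
```

Each El(z) runs at 1/M of the input rate, which is exactly the N/M multiplications per input sample claimed above.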
Chapter 14

Applications of the DFT and FFT

In this chapter, we discuss two basic applications of the DFT and associated FFT algorithms. The first application describes an efficient way of realizing an FIR filtering operation via block DFT processing. Essentially, this amounts to decomposing the input signal into consecutive blocks of smaller length, and implementing the filtering in the frequency domain via FFT and IFFT processing. This results in significant computational savings, provided a processing delay is tolerated in the underlying application. In the second application, we look at the use of the DFT (efficiently computed via FFT) as a frequency analysis or estimation tool. We review the fundamental relationship between the DFT of a finite signal and the DTFT, and we investigate the effects of the DFT length and windowing function on the spectral resolution. The effects of measurement noise on the quality of the estimation are also discussed briefly.

14.1 Block FIR filtering

Consider a causal FIR filtering problem as illustrated in Figure 14.1. Assume that the filter's impulse response h[n] is time-limited to 0 ≤ n ≤ P − 1 and that the input signal x[n] is zero for n < 0. Accordingly, the filter output

y[n] = h[n] ∗ x[n] = ∑_{k=0}^{P−1} h[k] x[n − k]   (14.1)

is also zero for n < 0.

[Fig. 14.1: Causal FIR filtering: x[n] → h[n] → y[n] = h[n] ∗ x[n].]

We recall from Chapter 6 that the linear convolution of two time-limited signals can be realized via circular convolution in the DFT domain, provided the DFT size N is large enough. If x[n] were also time-limited, then the DFT approach of Section 6.5 could be used to compute y[n].
Two fundamental problems exist with respect to the assumption that x[n] is time-limited:
• in certain applications, it is not possible to set a specific limit on the length of x[n] (e.g. voice communications);
• such a limit, if it exists, may be too large, leading to various technical problems (processing delay, memory requirements, etc.).

In practice, it is still possible to compute the linear convolution (14.1) as a circular convolution via the DFT (i.e. FFT), but it is necessary to resort to block convolution.

Block convolution: The basic principle of block convolution may be summarized as a series of three steps, as follows:

(1) Decompose x[n] into consecutive blocks of finite length L:

x[n] ⇒ xr[n],  r = 0, 1, 2, ...   (14.2)

where xr[n] denotes the samples in the r-th block.

(2) Filter each block individually using the DFT approach:

xr[n] ⇒ yr[n] = h[n] ∗ xr[n]   (14.3)

(3) Reassemble the filtered blocks (as they become available) into an output signal:

yr[n] ⇒ y[n]   (14.4)

In practice, the required DFTs and inverse DFTs are computed using an FFT algorithm (more on this later). The above generic steps may be realized in different ways. In particular, there exist two basic methods of block convolution: overlap-add and overlap-save. Both will be described below.

14.1.1 Overlap-add method

In this method, the generic block convolution steps take the following form:

(1) Blocking of the input data into non-overlapping blocks of length L:

xr[n] = x[n + rL] if 0 ≤ n < L; 0 otherwise   (14.5)

where, for convenience of notation, the shift factor rL is introduced so that the nonzero samples of xr[n] all lie between 0 ≤ n ≤ L − 1.

(2) For each input block xr[n], compute its linear convolution with the FIR filter impulse response h[n] (see the DFT approach below):

yr[n] = h[n] ∗ xr[n]   (14.6)

Note that each block yr[n] is now of length L + P − 1.

(3) Form the output signal by adding the shifted filtered blocks:

y[n] = ∑_{r=0}^{∞} yr[n − rL]   (14.7)

The method takes its name from the fact that the output blocks yr[n] need to be properly overlapped and added in the computation of the desired output signal y[n].

Mathematical justification: It is easy to verify that the overlap-add approach produces the desired convolution result in (14.1). Under the assumption that x[n] = 0 for n < 0, we have from (14.5) that

x[n] = ∑_{r=0}^{∞} xr[n − rL]   (14.8)

Then, invoking basic properties of the linear convolution,

y[n] = h[n] ∗ x[n] = h[n] ∗ ∑_{r=0}^{∞} xr[n − rL] = ∑_{r=0}^{∞} h[n] ∗ xr[n − rL] = ∑_{r=0}^{∞} yr[n − rL]   (14.9)

DFT realization of step (2): Recall that the DFT approach to linear convolution amounts to computing a circular convolution. To ensure that the circular and linear convolutions are equivalent, the DFT length, say N, should be sufficiently large and the data must be appropriately zero-padded. Specifically, based on Section 6.5, we may proceed as follows:

• Set the DFT length to N ≥ L + P − 1 (usually a power of 2). Often, the value of N is selected as the smallest power of 2 that meets this condition.

• Compute (once, and store the result)

H[k] = FFT_N{h[0], ..., h[P − 1], 0, ..., 0}   (14.10)

• For r = 0, 1, 2, ..., compute

Xr[k] = FFT_N{xr[0], ..., xr[L − 1], 0, ..., 0}   (14.11)
yr[n] = IFFT_N{H[k] Xr[k]}   (14.12)

Computational complexity: The computational complexity of the overlap-add method can be evaluated as follows. For each block of L input samples, we need:
• (N/2) log2 N multiplications to compute Xr[k] in (14.11);
• N multiplications to compute H[k] Xr[k] in (14.12);
• (N/2) log2 N multiplications to compute yr[n] in (14.12),

for a total of N log2 N + N multiplications per L input samples. Assuming P ≈ L for simplicity, we may set N = 2L. The resulting complexity of the overlap-add method is then 2 log2 L + 4 multiplications per input sample. Linear filtering based on (14.1) requires P multiplications per input sample. Thus, a significant gain may be expected when P ≈ L is large, e.g.:

L = 1024 ⇒ 2 log2 L + 4 = 24 ≪ L

Example 14.1: Overlap-add

The overlap-add process is depicted in Figures 14.2 and 14.3. The top part of Figure 14.2 shows the FIR filter h[n] as well as the input signal x[n], supposed to be semi-infinite (right-sided). The three bottom parts of Figure 14.2 show the first three blocks of the input, x0[n], x1[n] and x2[n]. The top plots of Figure 14.3 show the result of the convolution of these blocks with h[n]; notice that the convolution results are longer than the original blocks. Finally, the bottom plot of Figure 14.3 shows the reconstructed output signal obtained by adding up the overlapping blocks yr[n].

[Fig. 14.2: Illustration of blocking in the overlap-add method. Top: the FIR filter h[n]. Below: the input signal x[n] and the different blocks xr[n].]

[Fig. 14.3: Illustration of reconstruction in the overlap-add method. Top: several blocks yr[n] after convolution. Bottom: the output signal y[n].]
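The overlap-add steps (14.5)-(14.12) can be sketched as follows (NumPy assumed; the block length L and the random test data are our own choices, and real FFTs are used since the signals are real):

```python
import numpy as np

def overlap_add(x, h, L=64):
    """FIR filtering by block DFT processing (overlap-add, Section 14.1.1)."""
    P = len(h)
    N = 1 << int(np.ceil(np.log2(L + P - 1)))   # DFT length N >= L + P - 1
    H = np.fft.rfft(h, N)                       # H[k], computed once (14.10)
    y = np.zeros(len(x) + P - 1)
    for r in range(0, len(x), L):
        xr = x[r:r + L]                         # block x_r[n] (14.5)
        # Zero-padded FFT, multiply, inverse FFT: linear conv. (14.11)-(14.12)
        yr = np.fft.irfft(np.fft.rfft(xr, N) * H, N)[:L + P - 1]
        end = min(r + L + P - 1, len(y))
        y[r:end] += yr[:end - r]                # overlap and add (14.7)
    return y

rng = np.random.default_rng(0)
x, h = rng.standard_normal(500), rng.standard_normal(17)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))
```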
The three bottom parts of Figure 14.2 Overlapsave method (optional reading) This method contains the following steps: (1) Blocking of input data in overlapping blocks of length L. Finally. (14.4 shows the FIR ﬁlter h[n] as well as the input signal x[n]. The top part of Figure 14.16) As pointed out in Chapter 6.5 shows the reconstructed output signal obtained by adding up the blocks yr [n] after dropping the aliased samples.15) Example 14.264 Chapter 14. one sees that the ﬁrst P − 1 samples of this circular convolution will be corrupted by aliasing. the practical computation of X(ω) poses the several difﬁculties: • an inﬁnite number of mathematical operations is needed (the summation over n is inﬁnite and the variable ω is continuous). Since this circular convolution does not respect the criterion in (6. Applications of the DFT and FFT 14. ω ∈ [−π. c B. overlapping by P − 1 samples: xr [n] = x[n + r(L − P + 1) − P + 1] 0 ≤ n < L. there will be timealiasing. (3) Form the output signal by adding the shifted ﬁltered blocks after discarding the leading P − 1 samples of each block: y[n] = r=0 ∑ yr [n − r(L − P + 1)](u[n − P − r(L − P + 1)] − u[n − L − r(L − P + 1)]) ∞ (14. Figure 14.5 shows the result of the circular convolution of these blocks with h[n]. 14. 2004 . By referring to the example in Figure 6.12.2. Labeau Compiled November 24. Notice that the convolution white dots correspond to samples corrupted by aliasing. whereas the remaining L − P + 1 will remain equal to the true samples of the corresponding linear convolution xr [n] ∗ h[n]. compute the Lpoint circular convolution with the FIR ﬁlter impulse response h[n]: yr [n] = h[n] xr [n] (14. 0 otherwise. x1 [n] and x2 [n].2: Overlapsave The overlapsave process is depicted in Figures 14.2 Frequency analysis via DFT/FFT 14. the bottom plot of Figure 14.4 and 14. π]. Champagne & F. 
(14.5.63).1 Introduction The frequency content of a discretetime signal x[n] is deﬁned by its DTFT: X(ω) = n=−∞ ∑ ∞ x[n]e− jωn .13) (2) For each input block.14) Note that each block yr [n] is still of length L.1.4 show the three ﬁrst blocks of the input x0 [n].
[Fig. 14.4: Illustration of blocking in the overlap-save method. Top: the FIR filter h[n]. Below: the input signal x[n] and the different blocks xr[n].]
[Fig. 14.5: Illustration of reconstruction in the overlap-save method. Top: several blocks yr[n] after circular convolution. Bottom: the output signal y[n].]
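A corresponding sketch of the overlap-save method (again NumPy, with parameters of our own choosing): blocks of length L overlap by P − 1 samples, and the first P − 1 samples of each L-point circular convolution are discarded as aliased:

```python
import numpy as np

def overlap_save(x, h, L=64):
    """FIR filtering via L-point circular convolutions (Section 14.1.2)."""
    P = len(h)
    step = L - P + 1                      # valid output samples per block
    H = np.fft.rfft(h, L)
    xp = np.concatenate([np.zeros(P - 1), x])   # blocks overlap by P-1 samples
    out = []
    for r in range(0, len(x), step):
        xr = xp[r:r + L]                  # block x_r[n] (14.13)
        yr = np.fft.irfft(np.fft.rfft(xr, L) * H, L)   # circular conv. (14.14)
        keep = min(step, len(x) - r)      # first P-1 samples are aliased:
        out.append(yr[P - 1:P - 1 + keep])  # discard them, keep the rest
    return np.concatenate(out)

rng = np.random.default_rng(1)
x, h = rng.standard_normal(300), rng.standard_normal(12)
assert np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)])
```

Note that this sketch returns exactly len(x) output samples, i.e. the first len(x) samples of the linear convolution.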
• all the signal samples x[n], from −∞ to ∞, must be available.

Recall the definition of the DFT:

X[k] = ∑_{n=0}^{N−1} x[n] e^{−jωk n},  k ∈ {0, 1, ..., N − 1}   (14.17)

where ωk = 2πk/N. The main advantages of the DFT over the DTFT are:
• only a finite number of mathematical operations is needed;
• only a finite set of signal samples x[n] is needed.

If x[n] = 0 for n < 0 and n ≥ N, then the DFT and the DTFT are equivalent concepts. Indeed, as we have shown in Chapter 6, there is in this case a one-to-one relationship between the DFT and the DTFT, namely:

X[k] = X(ωk)   (14.18)

X(ω) = ∑_{k=0}^{N−1} X[k] P(ω − ωk)   (14.19)

where P(ω) is an interpolation function defined in (6.18). In general, however, the DFT only provides an approximation to the DTFT. In practice, we are interested in analyzing the frequency content of a signal x[n] which is not a priori time-limited; nevertheless, the DFT is often used as an approximation to the DTFT for the purpose of frequency analysis. In this section:
• we investigate in further detail the nature of this approximation;
• based on this analysis, we present and discuss modifications to the basic DFT computation that lead to improved frequency analysis results.

14.2.2 Windowing effects

Rectangular window: Recall the definition of a rectangular window of length N:

wR[n] = 1 if 0 ≤ n ≤ N − 1; 0 otherwise   (14.20)

The DTFT of wR[n] will play a central role in our discussion:

WR(ω) = ∑_{n=0}^{N−1} e^{−jωn} = e^{−jω(N−1)/2} sin(ωN/2) / sin(ω/2)   (14.21)

The corresponding magnitude spectrum is illustrated in Figure 14.6 for N = 16. Note the presence of:
• a main lobe with width ∆ω = 2π/N   (14.22)
• sidelobes with a peak amplitude of about 0.22 (−13 dB).
[Fig. 14.6: Magnitude spectrum of the N = 16 point rectangular window (linear scale).]

Relation between DFT and DTFT: Define the windowed signal

y[n] = wR[n] x[n]   (14.23)

The N-point DFT of the signal x[n] can be expressed as the DTFT of y[n]:

X[k] = ∑_{n=0}^{N−1} x[n] e^{−jωk n} = ∑_{n=−∞}^{∞} wR[n] x[n] e^{−jωk n} = ∑_{n=−∞}^{∞} y[n] e^{−jωk n} = Y(ωk)   (14.24)

where Y(ω) denotes the DTFT of y[n]. The above shows that when computing the N-point DFT of the signal x[n], a windowing of x[n] implicitly takes place in the time domain, and that the DFT values can be obtained by uniformly sampling Y(ω) at the frequencies ω = ωk.

Nature of Y(ω): Since y[n] is obtained as the product of x[n] and wR[n], we have from the multiplication property of the DTFT (Ch. 3) that

Y(ω) = DTFT{wR[n] x[n]} = (1/2π) ∫_{−π}^{π} X(φ) WR(ω − φ) dφ   (14.25)

That is, the windowing of x[n] that implicitly takes place in the time domain is equivalent to the periodic convolution of the desired spectrum X(ω) with the window spectrum WR(ω); in other words, Y(ω) is equal to the circular convolution of the periodic functions X(ω) and WR(ω).
Example 14.3: Consider the signal

x[n] = e^{jθn},  n ∈ Z

where 0 < θ < π for simplicity. The DTFT of x[n] is given by

X(ω) = 2π ∑_{k=−∞}^{∞} δa(ω − θ − 2πk)

In this special case, equations (14.24)-(14.25) yield

X[k] = (1/2π) ∫_{−π}^{π} X(φ) WR(ωk − φ) dφ = WR(ωk − θ)

Thus, the DFT values X[k] can be obtained by uniformly sampling the shifted window spectrum WR(ω − θ) at the frequencies ω = ωk.

Effects of windowing: Based on the above example, and taking into consideration the special shape of the window magnitude spectrum in Figure 14.6, two main effects of the time-domain windowing on the original DTFT spectrum X(ω) may be identified:

• Due to the nonzero width of the main lobe in Figure 14.6, a local smearing (or spreading) of the spectral details in X(ω) occurs as a result of the convolution with WR(ω) in (14.25). This essentially corresponds to a loss of resolution in the spectral analysis. In particular, if X(ω) contains two narrow peaks at closely spaced frequencies θ1 and θ2, it may be that Y(ω) in (14.25) contains only a single, wider peak at frequency (θ1 + θ2)/2; in other words, the two peaks merge as a result of the spectral spreading. The resolution of the spectral analysis with the DFT is basically a function of the main-lobe width ∆ω = 2π/N; thus, the resolution can be increased (i.e. ∆ω made smaller) by increasing N.

• Due to the presence of sidelobes in Figure 14.6, spectral leakage in X(ω) will occur as a result of the convolution with WR(ω) in (14.25). That is, even if X(ω) = 0 in some band of frequencies, Y(ω) will usually not be zero in that band, due to the trailing effect of the sidelobes in the convolution process. The amount of leakage depends on the peak sidelobe level; for a rectangular window, this is fixed at about 13 dB below the main lobe.

Improvements: Based on the above discussion, it should be clear that the results of the frequency analysis can be improved by using a different type of window, instead of the rectangular window implicit in (14.24). Specifically:

X̃[k] = ∑_{n=0}^{N−1} w[n] x[n] e^{−jωk n},  k ∈ {0, 1, ..., N − 1}   (14.26)

where w[n] denotes an appropriately chosen window. In this respect, all the window functions introduced in Chapter 9 can be used. The trade-offs involved in the selection of a window for spectral analysis are very similar to those encountered in the window method of FIR filter design (Ch. 9). In selecting a window, the following approach might be taken:
• select the window type so that the peak sidelobe level is below an acceptable level;
• adjust the window length N so that the desired resolution ∆ω is obtained. For most windows of interest, ∆ω = cte/N, where the constant cte depends on the specific window type.
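The sidelobe figures quoted above can be confirmed numerically. The sketch below (NumPy assumed; the zero-padded FFT grid and the choice of a periodic Hann window as one Chapter 9 alternative are ours) estimates the peak sidelobe level of each window by zeroing out the main lobe and taking the maximum of what remains:

```python
import numpy as np

N, NFFT = 16, 4096

def peak_sidelobe(win, halfwidth_bins):
    """Max of |W(w)| / |W(0)| outside the main lobe (width in FFT bins)."""
    W = np.fft.fftshift(np.abs(np.fft.fft(win, NFFT))) / np.sum(win)
    c = NFFT // 2
    W[c - halfwidth_bins : c + halfwidth_bins + 1] = 0.0  # remove main lobe
    return W.max()

rect = np.ones(N)                                         # w_R[n] (14.20)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)   # periodic Hann

rect_sl = peak_sidelobe(rect, NFFT // N)        # main lobe: |w| < 2*pi/N
hann_sl = peak_sidelobe(hann, 2 * NFFT // N)    # main lobe: |w| < 4*pi/N
assert 0.18 < rect_sl < 0.25                    # about 0.22, i.e. -13 dB
assert hann_sl < 0.04                           # about -31 dB
```

This makes the trade-off concrete: the Hann window buys roughly 18 dB of extra sidelobe suppression (less leakage) at the price of a main lobe twice as wide (less resolution for the same N).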