

Analog & Digital
Signal Processing

Prof. Dr. Muhammad H. El-SABA

Faculty of Engineering, Ain-Shams University
Department of Electronics and Communications

4th Edition, 2019

-iii-
Prof. Dr. Muhammad EL-SABA
Copyright © 2007-2019, by the author. All rights reserved.

Reproduction or translation of any part of this work, without
permission of the copyright owner, is unlawful. Requests for permission
or further information should be addressed to the author, at the Dept.
of Electronic Engineering, Faculty of Engineering, 11517 Abbasia,
Cairo, Egypt. E-mail address: mhs1308@gmail.com

Deposit No. 2007/16036 (Dar El-Kottob) ISBN .

PREFACE

This book is intended for use by electrical, electronic and computer


engineering students. The book covers the fundamentals of analog and
digital signal processing techniques and applications. The book is divided
into the following seven parts:

1- Introduction to Signals and Systems


2- Linear Time-Invariant (LTI) Signals and their Transforms
3- Analog Signal Processing & Active Filters
4- Discrete Time Signals and their Transforms
5- Digital Signal Processing & Digital Filters
6- Digital Signal Processors and their Applications
7- Fast Fourier Transform (FFT) and its Applications

This course is intended for students and postgraduates of electronic
and communication engineering. The first chapter introduces analog
continuous-time signals, their transforms into different domains,
and how they interact with analog systems. The following chapters deal with
active filters, the most famous analog processing systems; the synthesis of
active filters is dealt with there as well.

The second part of this book provides an applications-oriented introduction


to digital signal processing. This part may be used at the junior or senior
level. It is based on a junior-level DSP course that I have already taught.

This book will help you to:


1. Explain how signals are represented in time and frequency.
2. Discuss the various types of continuous and discrete-form signals.
3. Explain the popularity of active filters and how to design them.
4. Discuss the conversion of continuous signals to discrete form.
5. Explain and design various types of digital filters.
6. Sketch the block diagram for a typical DSP system.
7. List some architectures of DSPs.
8. Explain the operation and design of digital filters.
9. Discuss DSP applications.

A solutions manual, which also contains the results of the computer
experiments, is available from the publisher (mhs1308@gmail.com). The C
and MATLAB functions may be obtained via anonymous FTP from the
Internet site ece.asu.edu in the directory /mhs/dsp.

Prof. Dr. Muhammad EL-SABA


Cairo, June 2019

Course Contents

PREFACE
CHAPTER 1: Analog Signal Processing
1-1. Introduction
1-2. Continuous-Time Signals & Systems
1-3. Time Domain & Frequency Domain of Analog Signals
1-4. Transfer Function of a Linear Network
1-5. Pole-Zero Locations
1-6. Frequency Response
1-7. Transient Response
1-8. Convolution
1-9. Impulse Response
1-10. Stability
1-11. Hurwitz Criterion
1-12. Summary
1-13. Problems
1-14. Bibliography
CHAPTER 2: Active Filters & their Applications
2-1. Introduction
2-2. Filter Design (Approximation and Synthesis)
2-3. Filter Approximation
2-3.1. Butterworth (Maximally Flat) Filter Approximation
2-3.2. Chebyshev (Equi-ripple) Filter Approximation
2-3.3. Inverse Chebyshev Filter Approximation
2-3.4. Cauer (Elliptic) Filter Approximation
2-3.5. Bessel or Thomson (Linear Phase) Filter Approximation
2-3.6. Gaussian Filter Approximation
2-3.7. Legendre (Optimum L) Filter Approximation
2-3.8. Comparison between Different Filter Approximations
2-4. Active Filter Synthesis
2-4.1. Filter Sensitivity
2-4.2. Filter Devices
2-4.3. Filter Synthesis Methods
2-5. Filter Synthesis Using Op-Amps
2-5.1. Op-Amp Filter Synthesis by Cascade Method
i. First-order Op-Amp Filter
ii. General Second-Order Filter Function
iii. Single-Op-Amp Biquads
A. Sallen-Key 2nd-order (Biquad) LPF
B. Sallen-Key 2nd-order (Biquad) HPF
C. Sallen-Key 2nd-order (Biquad) BPF
iv. Two-Op-Amp Biquads
v. Three-Op-Amp Biquads
A. KHN Biquad Filter
B. Tow-Thomas Biquad Filter
C. Akerberg-Mossberg Biquad Filter
2-5.2. Op-Amp Filter Synthesis by Direct Simulation Methods
i. LC Ladder Filters
ii. Ladder Simulation Methods
iii. Inductance Simulation (Gyrators)
A. General Impedance Converter (GIC)
B. General Impedance Network (GIN)
C. Antoniou & Riordan Simulated Inductances
iv. Frequency Dependent Negative Resistance (FDNR)
v. Bruton Transform
vi. Leapfrog Simulation of a Ladder Network
2-6. Filter Synthesis Using OTA's
2-6.1. OTA Filter Synthesis by Cascade Method
i. Second-Order Filters Derived from 3-Admittance Model
ii. Second-Order Filters Derived from 4-Admittance Model
A. Single OTA 2nd-order OTA LPF
B. Single OTA 2nd-order OTA HPF
C. Single OTA 2nd-order OTA BPF
iii. Second-Order Filters Derived from 5-Admittance Model
A. Single OTA 2nd-order OTA LPF
B. Single OTA 2nd-order OTA HPF
C. Single OTA 2nd-order OTA BPF
iv. Multiple OTA-C Filters
2-6.2. OTA Filter Synthesis by Direct Simulation Methods
i. Component Substitution Method
ii. Signal Flow Simulation
iii. Leapfrog Structures
iv. OTA-C Bandpass Leapfrog Filter Design
2-7. Filter Synthesis Using Current Conveyors (CC's)
2-8. Filter Synthesis Using Switched Capacitors (SC's)
2-8.1. Cascaded SC Filters
2-8.2. SC Filter IC's
2-8.3. SC Leapfrog Ladder Filters
2-8.4. Limitations of SC Filters
2-9. Summary
2-10. Problems
2-11. Bibliography

CHAPTER 3: Digital Signals and Systems
3-1. Discrete-time Signals
3-2. Digital Signal Processors
3-3. Summary
3-4. Problems
3-5. Bibliography

CHAPTER 4: Discrete Time Signals and their Transforms

CHAPTER 5: Digital Signal Processing & Digital Filters

CHAPTER 6: Digital Signal Processors

CHAPTER 7: Applications

APPENDICES
Analog & Digital Signal Processing Chapter 1

Fundamentals of Analog
Signal Processing

Contents

1-1. Introduction
1-2. Continuous-Time Signals & Systems
1-3. Time Domain & Frequency Domain of Analog Signals
1-4. Transfer Function of a Linear Network
1-5. Pole-Zero Locations
1-6. Frequency Response
1-7. Transient Response
1-8. Convolution
1-9. Impulse Response
1-10. Stability
1-11. Hurwitz Criterion
1-12. Summary
1-13. Problems
1-14. Bibliography


1-1. Introduction
Analog signal processing (ASP) is any signal processing conducted on
analog signals by analog means. "Analog" indicates something that is
mathematically represented as a set of continuous values. This differs
from "digital", which uses a series of discrete quantities to represent the signal.

Analog values are typically represented as a voltage, electric current, or
electric charge around components in electronic devices. An error or
noise affecting such physical quantities will result in a corresponding
error in the signals they represent. Examples of
analog signal processing include all sorts of analog active filters, such as
crossover filters in loudspeakers, bass, treble and volume controls on
stereos, and tint controls on TVs. Common analog processing elements
include Op-Amps and OTAs, in addition to passive components.

1-2. Continuous-Time Signals & Systems

The systems we deal with in this chapter are linear analog systems. A
system is considered continuous if its signal exists for all time.
Frequently, the terms "analog" and "continuous" are employed
interchangeably, although they are not strictly the same. Assume a
continuous-time system with impulse response h(t), as shown in the
following figure. The system processes the continuous-time input signal
x(t) in some way, and produces an output y(t). At any value of time t, we
input a value of x(t) into the system and the system produces an output
value for y(t). For instance, if we have a multiplier system whose
input/output relation is given by y(t) = 7x(t), and the input is x(t) =
sin(ωt), then the output is y(t) = 7sin(ωt).

Illustration of a continuous-time system.
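The linearity of this multiplier system can be checked numerically on sampled versions of the signals. The following is a minimal sketch in plain Python (the sample grid, test signals, and tolerances are assumptions chosen for illustration, not part of the text):

```python
# Sketch: numerically checking additivity and homogeneity for the
# multiplier system y(t) = 7 x(t), applied to sampled signals.
import math

def system(x):
    # the memoryless multiplier system, applied sample by sample
    return [7 * v for v in x]

t = [k * 0.01 for k in range(1000)]               # time grid, 0..10 s
x1 = [math.sin(2 * math.pi * ti) for ti in t]
x2 = [math.cos(2 * math.pi * ti) for ti in t]

# Additivity: T{x1 + x2} equals T{x1} + T{x2}
lhs_add = system([a + b for a, b in zip(x1, x2)])
rhs_add = [a + b for a, b in zip(system(x1), system(x2))]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs_add, rhs_add))

# Homogeneity: T{a x1} equals a T{x1}
lhs_hom = system([3.5 * v for v in x1])
rhs_hom = [3.5 * v for v in system(x1)]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs_hom, rhs_hom))
```

Both checks pass (to within floating-point rounding), which is what the additivity and homogeneity properties defined below require.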


There exists a specific category of continuous-time systems, called linear
time-invariant (LTI) systems, which have interesting properties.

Properties of Linear Time-Invariant (LTI) Systems

A system is considered to be an LTI system if it satisfies the requirements
of time-invariance and linearity. This book will only consider linear
systems. LTI systems have the following properties:

1-2.1- Additivity
A system satisfies the property of additivity if a sum of inputs results in
the corresponding sum of outputs. By definition: an input of x3(t) = x1(t) + x2(t)
results in an output of y3(t) = y1(t) + y2(t).

1-2.2- Homogeneity
A system satisfies the condition of homogeneity if an input scaled by a
certain factor produces an output scaled by that same factor. By
definition: an input of a·x1 results in an output of a·y1.

1-2.3- Linearity
A system is considered linear if it satisfies the conditions of additivity
and homogeneity.

1-2.4- Causality
A simple definition of a causal system is a system in which the output
does not change before the input. A more formal definition of a causal
system is one whose output depends only on past or current
inputs. A system is called non-causal if its output depends
on future inputs. This book will only consider causal systems,
because they are easier to work with and understand, and because most
practical systems are causal in nature.

1-2.5- Memory
A system is said to have memory if its output depends
on past inputs (or future inputs!) to the system. A system is
called memoryless if its output depends only on the current input.
Memoryless systems are easier to work with, but systems with memory
are more common in digital signal processing applications.

1-2.6- Time-Invariance
A system is called time-invariant if the relationship between the
input and output signals does not depend on the passage of time. If the
input signal x(t) produces an output y(t), then any time-shifted input
x(t + δ) results in a time-shifted output y(t + δ). This property is satisfied
if the transfer function of the system is not itself a function of time, except
as expressed through the input and output.

1-3. Time Domain and Frequency Domain of Analog Signals


The Fourier transform of an analog signal v(t) transforms the signal
from the time domain to the frequency domain and results in the
frequency spectrum V(ω) of the signal:

V(ω) = F{v(t)} = ∫ v(t) e^(-jωt) dt (1-1a)

v(t) = F^(-1){V(ω)} = (1/2π) ∫ V(ω) e^(jωt) dω (1-1b)

Also, the Laplace transform of an analog signal v(t) translates it to the
frequency domain (or the s-domain, where s = α + jω) such that:

V(s) = L{v(t)} = ∫ v(t) e^(-st) dt (1-1c)

If v(t) is a sinusoidal signal, then s can be replaced by jω and the
Laplace transform reduces to the Fourier transform. Both the Fourier
transform and the Laplace transform are useful tools for analog signal
analysis. More details about these tools may be found in
Appendices A and B.
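The transform integral above can be approximated numerically for a concrete signal. The sketch below (plain Python; the test signal, step size, and truncation limit are assumptions for illustration) approximates the Fourier integral of v(t) = e^(-t)u(t), whose exact transform is V(ω) = 1/(1 + jω):

```python
# Sketch: Riemann-sum approximation of the Fourier transform integral
#     V(w) = integral of v(t) e^(-jwt) dt
# for the causal signal v(t) = e^(-t) u(t), truncated at t_max.
import cmath
import math

def fourier(v, w, t_max=50.0, dt=1e-3):
    s = 0j
    t = 0.0
    while t < t_max:
        s += v(t) * cmath.exp(-1j * w * t) * dt
        t += dt
    return s

v = lambda t: math.exp(-t)
w = 2.0
approx = fourier(v, w)
exact = 1 / (1 + 1j * w)       # known closed-form transform of e^(-t) u(t)
assert abs(approx - exact) < 1e-2
```

The same Riemann-sum idea applies to the Laplace integral, with e^(-st) in place of e^(-jωt).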

1-4. Transfer Function of a Linear Network

The network function of any linear network, such as a filter, can be
written as the following voltage transfer function:

H(s) = N(s)/D(s) = (an s^n + an-1 s^(n-1) + .... + a1 s + a0) / (s^m + bm-1 s^(m-1) + .... + b1 s + b0) (1-2a)

where N(s) and D(s) are the numerator and denominator polynomials,
respectively, with m ≥ n, ai, bi real and bi positive.

In most analog filter design cases, H(s) represents the ratio of the Laplace
transform of the output voltage Vo(s) to the Laplace transform of the input
voltage Vi(s), being thus a dimensionless voltage gain.

Av(s) = Vo(s)/Vi(s) (1-2b)



Fig. 1-1. Block diagram of a filter, with its signals vi(t), vo(t) and Vi(s), Vo(s) in the time and frequency domains.

However, it may also represent a ratio of currents, a ratio of output voltage
to input current (trans-impedance), or of output current to input voltage
(trans-admittance), having the dimensions of impedance or admittance,
respectively. Finally, it may represent a driving-point function, i.e., the
ratio of the voltage to the current (e.g., input or output impedance) at one
port of the filter network, or vice versa (e.g., input or output admittance).

The filter design problem consists in constructing a filter and
determining its coefficients (ai, bi) so as to meet certain frequency-response
specifications. If zi, i = 1, 2, ..., n are the roots of N(s), i.e., the zeros of H(s),
and pi, i = 1, 2, ..., m are the roots of D(s), i.e., the poles of H(s), then the
above equation can be written as follows:

H(s) = an (s - z1)(s - z2) .... (s - zn) / [(s - p1)(s - p2) .... (s - pm)] (1-2c)

The table below depicts one family of famous transfer-function
polynomials, namely the Butterworth polynomials, and their roots.

Table 1-1. Butterworth polynomials and their roots.
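The entries of such a table can be regenerated programmatically. The sketch below (plain Python, no external libraries) uses the standard pole locations of the Butterworth filter with cutoff normalized to 1 rad/s and multiplies out the pole factors to obtain the denominator polynomials:

```python
# Sketch: the n poles of an n-th order Butterworth filter lie equally
# spaced on the left half of the unit circle:
#     p_k = exp(j*pi*(2k + n - 1) / (2n)),  k = 1..n
# Multiplying out (s - p_k) gives the Butterworth denominator polynomial.
import cmath
import math

def butterworth_poly(n):
    """Return real coefficients [1, b_{n-1}, ..., b_0] of B_n(s)."""
    poles = [cmath.exp(1j * math.pi * (2 * k + n - 1) / (2 * n))
             for k in range(1, n + 1)]
    coeffs = [1.0 + 0j]
    for p in poles:
        # multiply the current polynomial by the factor (s - p)
        nxt = [0j] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] += c          # term multiplied by s
            nxt[i + 1] -= c * p  # term multiplied by -p
        coeffs = nxt
    # imaginary parts cancel because poles come in conjugate pairs
    return [round(c.real, 6) for c in coeffs]

print(butterworth_poly(2))  # [1.0, 1.414214, 1.0]: s^2 + sqrt(2) s + 1
print(butterworth_poly(3))  # [1.0, 2.0, 2.0, 1.0]: (s + 1)(s^2 + s + 1)
```

This reproduces the familiar low-order rows of the table: B1(s) = s + 1, B2(s) = s² + √2 s + 1, B3(s) = s³ + 2s² + 2s + 1.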


If the input signal is sinusoidal of frequency ω, and s is substituted by jω,
then H(s) can be written in the following form:

H(jω) = |H(jω)| exp[jφ(ω)] (1-3a)

i.e., in terms of the magnitude and phase of H(jω). Therefore,

φ(ω) = arg[H(jω)] (1-3b)

It is usual practice to present the magnitude of H(jω) in the form

A(ω) = 20 log |H(jω)| (1-3c)

This gives the filter gain in dB. However, in most cases we talk about the
filter attenuation or loss, -A(ω), also in decibels. In some cases, the
attenuation is given in nepers, obtained as follows:

α(ω) = -ln |H(jω)| (1-3d)

1-5. Pole-Zero Locations


The roots of N(s), which are the zeros zi of H(s) (because H(s) becomes
zero for s = zi), can be real or occur in complex-conjugate pairs, since all
of the coefficients of N(s) are real. Each of these zeros can be located at a
unique point in the complex frequency plane.

In the case of a multiple zero, all of its instances are located at the same
point in the s-plane. On the other hand, the roots of D(s), which are the
poles pi of H(s), can be real or occur in complex-conjugate pairs, since
D(s) also has real coefficients. However, their real parts can only be
negative, for reasons of stability. Also, for a network to be stable and
useful as a filter, its transfer function H(s) should not have poles with
real part equal to zero. Thus, the poles of H(s) should all lie in the left
half of the s-plane (LHP), excluding the jω-axis, while its zeros can lie
anywhere in the s-plane, i.e., in the left-half or in the right-half s-plane (RHP).

1-6. Frequency Response


For sinusoidal signals under steady-state conditions (where s = jω), the
magnitude of H(jω) and its phase φ(ω) = arg[H(jω)], denoted A(ω) and φ(ω),
constitute the frequency response of the filter. To get a good picture of
the gain and phase as functions of frequency ω, we draw the
corresponding plots with the frequency as the independent variable.

It is usual in most cases for the scale of the frequency axis to be
logarithmic, in order to include as many frequencies as possible in the
plots. The A(ω) axis has a linear scale but, in effect, it is also logarithmic,
since A(ω) is expressed in decibels (dB). Finally, the φ(ω) axis is linear,
usually expressed in degrees. However, in some cases, instead of working
in terms of φ, we work with the group delay τg(ω), which is
defined as follows:

τg(ω) = -dφ(ω)/dω (1-4)

1-7. Transient Response


In filter design, the specifications are usually given in terms of the
frequency response. However, in cases of pulse transmission, it is useful
to know the response of the filter as a function of time, i.e., its transient
response. In such cases, we usually study the response of the filter to two
test functions: the unit impulse function δ(t) and the unit step function
U(t).

1-8. Convolution
Convolution is a basic concept in signal processing which states that an
input signal can be combined with the system's impulse response to find
the output signal in the time domain:

vo(t) = vi(t) * h(t) = ∫ vi(τ) h(t - τ) dτ (1-5)

where the integral runs over all τ, from -∞ to +∞.

where h(t) is the inverse Laplace transform of the voltage transfer function
H(s), and the asterisk '*' is the symbol of the convolution operation.
Thus, the convolution integral can be used to find the time-domain
solution (transient response) of a system with a specific transfer function,
in response to a specific input.
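The convolution integral can be evaluated numerically. The sketch below (plain Python; the RC-type impulse response h(t) = e^(-t)u(t), the unit-step input, and the step size are assumed for illustration) compares the Riemann-sum result against the known closed-form output 1 - e^(-t):

```python
# Sketch: numerical evaluation of the convolution integral for
# h(t) = e^(-t) u(t) driven by a unit step. Exact output: 1 - e^(-t).
import math

dt = 1e-3
N = 5000                                    # covers t in [0, 5]
h = [math.exp(-k * dt) for k in range(N)]   # sampled impulse response
x = [1.0] * N                               # sampled unit-step input

def convolve_at(n):
    """Riemann sum of  integral of x(tau) h(t - tau) dtau  at t = n*dt."""
    return sum(x[k] * h[n - k] for k in range(n + 1)) * dt

t = 2.0
n = int(t / dt)
exact = 1 - math.exp(-t)
assert abs(convolve_at(n) - exact) < 1e-2
```

Shrinking dt tightens the agreement, since the Riemann sum converges to the convolution integral.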

1-9. Impulse Response

The impulse response of a filter is its transient response when the
excitation is the unit impulse function δ(t). The Laplace transform of
the filter response to the unit impulse function δ(t) is the transfer function
H(s). Taking the inverse Laplace transform of H(s) gives

vo(t) = L^(-1){Vo(s)} = L^(-1){H(s)} = h(t) (1-6)


Therefore, the impulse response of a filter is the inverse Laplace


transform of its transfer function.

1-10. Stability
Active filters may be unstable under certain conditions. Filter stability has
already been mentioned with reference to the pole positions of the filter
function. In practical terms, the output voltage or current of a filter must
always follow the input at steady state, i.e., it should not become
uncontrollable. Such uncontrollable behavior usually leads either to
DC saturation of the output voltage or to the generation of oscillating
waveforms independent of the input signal. In mathematical terms,
stability of a linear network in the time domain requires that its impulse
response h(t) be absolutely integrable, i.e.,

∫0^∞ |h(t)| dt = I < ∞ (1-7)

Consequently, strict stability implies that only terms of the following
form are allowed in the expression for the impulse response:

A t^n e^(-σt) cos(ωt)  or  A t^n e^(-σt) sin(ωt),  with σ > 0

For a filter to be useful, it should be strictly stable. Since the impulse
response is the inverse Laplace transform of the pertinent network
function, strict stability implies that the poles of this function should only
be of the form

-σ ± jω,  with σ > 0

i.e., the poles should lie in the left half of the s-plane, excluding the jω-axis.
If the network function also has poles on the jω-axis, then its impulse
response will include terms of the form A cos(ωt), A sin(ωt). The network
is considered marginally stable in this case, but then it cannot be useful as
a filter. A network is unstable if it is neither strictly nor marginally stable.
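The absolute-integrability test of Eq. (1-7) can be probed numerically by watching the running integral as the upper limit grows. A sketch (plain Python; the two impulse responses and the truncation limits are assumptions for illustration):

```python
# Sketch: testing absolute integrability, integral of |h(t)| dt < infinity,
# by comparing the running integral at two different upper limits.
import math

def abs_integral(h, t_max, dt=1e-3):
    """Riemann-sum approximation of the integral of |h(t)| over [0, t_max]."""
    s, t = 0.0, 0.0
    while t < t_max:
        s += abs(h(t)) * dt
        t += dt
    return s

decaying = lambda t: math.exp(-t) * math.cos(3 * t)  # poles -1 +/- 3j: strictly stable
marginal = lambda t: math.cos(3 * t)                 # poles  +/- 3j: marginally stable

# The first integral has already converged; the second keeps growing.
print(abs_integral(decaying, 40) - abs_integral(decaying, 20))  # essentially 0
print(abs_integral(marginal, 40) - abs_integral(marginal, 20))  # large and growing
```

The decaying response satisfies Eq. (1-7); the purely oscillatory one does not, matching the marginal-stability discussion above.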

1-11. Hurwitz Criterion

If the denominator of the transfer function, D(s), is a Hurwitz
polynomial, all of its roots have negative real parts and the system is
stable. A necessary condition for D(s) to be Hurwitz is that all of its
coefficients be present and positive; for first- and second-order
polynomials this condition is also sufficient, while for higher orders the
Routh-Hurwitz test supplies the additional conditions.
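For a third-order denominator s³ + b s² + c s + d, the Routh-Hurwitz conditions reduce to b > 0, d > 0 and bc > d, which makes the necessary-versus-sufficient distinction easy to demonstrate. A sketch (the example polynomials are assumptions chosen for illustration):

```python
# Sketch: Routh-Hurwitz test for the cubic s^3 + b s^2 + c s + d.
# Stability requires b > 0, d > 0 and b*c > d; positive coefficients
# alone are necessary but not sufficient, as the second case shows.
def cubic_is_hurwitz(b, c, d):
    return b > 0 and d > 0 and b * c > d

print(cubic_is_hurwitz(2, 2, 1))   # True:  s^3 + 2s^2 + 2s + 1 = (s+1)(s^2+s+1)
print(cubic_is_hurwitz(1, 4, 30))  # False: all coefficients positive, yet unstable
```

The second polynomial has all coefficients positive but fails bc > d (4 is not greater than 30), so it has a pair of right-half-plane roots despite passing the coefficient check.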


1-12. Summary
In this chapter we refer to a number of signal properties and transforms,
and the reader is assumed to have at least some prior knowledge of
them. We start with the properties of continuous-time signals and systems
and the transforms that apply to them, and then we continue with
discrete-time signals and their transforms.

Analog signals are continuous-time signals which directly represent
physical variables, like electricity, temperature and sound, all of which
may take any value. A system is considered continuous if its signal
exists for all time. Frequently, the terms "analog" and "continuous" are
employed interchangeably, although they are not strictly the
same. Assume a continuous-time system with impulse response h(t).
The system processes the continuous-time input signal x(t) in some way,
and produces an output y(t). At any value of time t, we input a value of
x(t) into the system and the system produces an output value for y(t).

There exists a specific category of continuous-time systems, called linear
time-invariant (LTI) systems, which have interesting properties. A linear
system is said to be linear time-invariant when a time shift or delay at the
input produces an identical time shift or delay at the output. That is, a
continuous-time system is time-invariant if x(t) → y(t) implies that
x(t + z) → y(t + z). Continuous-time systems are usually characterized by
linear ODEs with fixed coefficients.

The Fourier transform of an analog signal x(t) translates the signal
from the time domain to the frequency domain and results in
the frequency spectrum X(ω) of the signal:

X(ω) = F{x(t)} = ∫ x(t) e^(-jωt) dt

x(t) = F^(-1){X(ω)} = (1/2π) ∫ X(ω) e^(jωt) dω

Also, the Laplace transform of an analog signal x(t) translates it to the
frequency domain (or the s-domain, where s = α + jω) such that:

X(s) = L{x(t)} = ∫ x(t) e^(-st) dt


If x(t) is a sinusoidal signal, s can be replaced by jω and the Laplace
transform reduces to the Fourier transform.
The frequency response of an LTI system is given by:

Y(ω) = H(ω) · X(ω)

where
H(ω) = F{h(t)}

is called the frequency response of the system.

So the frequency response H(ω) of a continuous LTI system is defined as
the Fourier transform of the impulse response h(t).
Similarly, the complex frequency response H(s) of a continuous LTI
system is defined as the Laplace transform of the impulse response h(t):

H(s) = ∫ h(t) e^(-st) dt

Thus, the output of a continuous LTI system having an impulse
response h(t), to an input x(t) with Laplace transforms H(s) and X(s),
respectively, is given by:

Y(s) = H(s) · X(s)

Y(s) = L{y(t)} = ∫ y(t) e^(-st) dt

The relation between the Laplace transforms X(s) and Y(s) of an LTI
network is given by a network function NF(s) = Y(s) / X(s). For instance,
the network function may be the voltage gain Av of an amplifier or of any
two-port network:
Av(s) = Vo(s) / Vi(s)


Frequently, we are interested in the magnitude of the network function
|Av|, expressed in decibels (20 log|Av|), and the phase difference between
input and output, θ = arg(Av).

The function Av(s) is sometimes called the voltage transfer function of the
two-port network. It may be put as the ratio

Av(s) = N(s) / D(s) = (s - sz1)(s - sz2) ..... / (s - sp1)(s - sp2) .....

The roots of the numerator (szi) and of the denominator (spi) are called
the zeros and poles of the network, respectively.

If D(s) is a Hurwitz polynomial (whose roots all have negative real
parts), the system is stable; a necessary condition for this is that all
coefficients of D(s) be present and positive.


1-13. Problems

1-1) Explain why the impulse response function can completely describe
the characteristics of a linear time-invariant filter.
1-2) Choose the most suitable answer for the following statements
1. What is the rule h*x = x*h called?
a) Commutativity rule
b) Associativity rule
c) Distributive rule
d) Transitive rule

2. For an LTI discrete system to be stable, the sum of the absolute values
of the impulse response should be
a) An integral multiple of 2π
b) Infinity
c) Finite
d) Zero

3. What is the rule (h*x)*c = h*(x*c) called?


a) Commutativity rule
b) Associativity rule
c) Distributive rule
d) Transitive rule

4. Does the system h(t) = exp(-7t) correspond to a stable system?


a) Yes
b) No
c) Marginally Stable

5. What is the following expression equal to: h*(δ + bδ), where δ(t) is the
delta function?
a) h + d
b) b + d
c) d
d) h + b

6. Does the system h(t) = exp([14j]t) correspond to a stable system?


a) Yes
b) No
c) Marginally Stable

7. What is the rule c*(x*h) = (x*h)*c called?


a) Commutativity rule
b) Associativity rule
c) Distributive rule
d) Associativity and Commutativity rule


8. Is y[n] = n*sin(n*pi/4)u[-n] a causal system?


a) Yes
b) No
c) Marginally causal

9. If the system impulse response and the input are exchanged, the
response remains the same.
a) True
b) False

10. Is y[n] = nu[n] a linear system?


a) Yes
b) No

1-3) The figure below shows the discrete-time input signal and the
impulse response of a linear time-invariant filter. Using the principles of
linearity and superposition, obtain the output of the filter in response to
the discrete-time input signal.

1-4) A first-order digital pre-emphasis filter is given by

y(m) = x(m) - a x(m - 1)

Obtain the z-transfer function of this filter. Assuming a value of a = 0.98,
draw its pole-zero and frequency response diagrams.
State the range of values of the parameter a for which this filter acts as a
pre-emphasis filter. Give an example.

1-5) Determine and sketch the impulse response h(n) of the low pass filter
described by the following equation


1-14. Bibliography

[1] V. Belevitch, "Summary of the history of circuit theory," Proc. IRE,
vol. 50, no. 5, pp. 848-855, May 1962.

[2] S. Darlington, “Some thoughts on the history of circuit theory,” IEEE


Trans. Circuits Syst., vol. CAS-24, no. 12, pp. 665-666, Dec. 1977.

[3] R. M. Foster, Telephone conversations, Letter, June 3, 1983.

[4] R. E. Crochiere and L. R. Rabiner, Multirate Digital Signal


Processing, Prentice-Hall, 1983

[5] J. G. Proakis and D. G. Manolakis, Digital Signal Processing:


Principles, Algorithms, and Applications, MacMillan Publishing, New
York, NY, 1992

[6] A. V. Oppenheim, A. S. Willsky, and S. H. Nawab, Signals &


Systems, Prentice-Hall, Inc., 1996

[7] R. Chassaing, DSP Applications Using C and the TMS320C6x DSK,


Wiley, NY, ISBN 0471207543, 2002

[8] Simon Haykin and Barry Van Veen, Signals and Systems, 2nd ed.,
Hoboken, NJ: John Wiley and Sons, Inc., 2003.

[9] James H. McClellan, Ronald W. Schafer, and Mark A. Yoder, Signal
Processing First, Upper Saddle River, NJ: Pearson Education, Inc., 2003.

Analog & Digital Signal Processing Chapter 2

Active Filters
Contents
2-1. Introduction
2-2. Filter Design (Approximation and Synthesis)
2-3. Filter Approximation
2-3.1. Butterworth (Maximally Flat) Filter Approximation
2-3.2. Chebyshev (Equi-ripple) Filter Approximation
2-3.3. Inverse Chebyshev Filter Approximation
2-3.4. Cauer (Elliptic) Filter Approximation
2-3.5. Bessel or Thomson (Linear Phase) Filter Approximation
2-3.6. Gaussian Filter Approximation
2-3.7. Legendre (Optimum L) Filter Approximation
2-3.8. Comparison between Different Filter Approximations
2-4. Active Filter Synthesis
2-4.1. Filter Sensitivity
2-4.2. Filter Devices
2-4.3. Filter Synthesis Methods
2-5. Filter Synthesis Using Op-Amps
2-5.1. Op-Amp Filter Synthesis by Cascade Method
i. First-order Op-Amp Filter
ii. General Second-Order Filter Function
iii. Single-Op-Amp Biquads
A. Sallen-Key 2nd order (Biquad) LPF
B. Sallen-Key 2nd order (Biquad) HPF
C. Sallen-Key 2nd order (Biquad) BPF
iv. Two-Op-Amp Biquads
v. Three-OpAmp Biquads
A. KHN Biquad Filter
B. Tow-Thomas Biquad Filter
C. Akerberg-Mossberg Biquad Filter
2-5.2. Op-Amp Filter Synthesis by Direct Simulation Methods
i. LC Ladder Filters
ii. Ladder Simulation Methods


iii. Inductance Simulation (Gyrators)


A. General Impedance Converter (GIC)
B. General Impedance Network (GIN)
C. Antoniou & Riordan Simulated Inductances
iv. Frequency Dependent Negative Resistance (FDNR)
v. Bruton Transform
vi. Leapfrog Simulation of a Ladder Network
2-6. Filter Synthesis Using OTA’s
2-6.1. OTA Filter Synthesis by Cascade Method
i. Second-Order Filters Derived from 3-Admittance Model
ii. Second-Order Filters Derived from 4-Admittance Model
iii. Second-Order Filters Derived from 5-Admittance Model
iv. Multiple OTA-C Filters
2-6.2. OTA Filter Synthesis by Direct Simulation Methods
i. Component Substitution Method
ii. Signal Flow Simulation
iii. Leapfrog Structures
iv. OTA-C Bandpass Leapfrog Filter Design
2-7. Filter Synthesis Using Current Conveyor (CC’s)
2-8. Filter Synthesis Using Switched Capacitors (SC’s)
2-8.1. Cascaded SC Filters
2-8.2. SC Filter IC’s
2-8.3. SC Leapfrog Ladder Filters
2-8.4. Limitations of SC Filters
2-9. Summary
2-10. Problems
2-11. Bibliography


Active Filters

2-1. Introduction
The filter is an electronic device which passes useful signals while
suppressing or attenuating useless ones. With the development of
electronics and integrated-circuit technology, filter techniques are widely
used in communications, measurement, signal processing, data acquisition
and real-time control.

Note 2-1. History of Passive & Active Filters


According to John Linvill, the history of network synthesis and filter
theory using resistors, inductors, and capacitors started in the early
1920's. As he points out, the 1930's were the early formative years of
network synthesis. Much of the later work of the 1940's was built
on the results of the 1930's, in conjunction with feedback systems.
However, the real stride in active networks began in the 1950's, with the
availability of the transistor as an active element. The combination of
active elements, resistors, and capacitors, forming what became known as
an active-RC network, eliminated the need for inductors, the most
troublesome of the passive network troika. Designs for active filter
circuits using high-performance active devices, such as the operational
amplifier (Op-Amp), the operational transconductance amplifier (OTA),
the second-generation current conveyor (CCII) and so on, have
followed. In recent years, there has been considerable interest in the
development and application of active integrated filters, for mobile
communications and many other applications.

2-2. Filter Design


Filters can be classified according to their function as follows:

1- Low-pass Filters (LPF).


2- High-pass Filters (HPF).
3- Band-pass Filters (BPF).
4- Band-stop Filters (BSF).
5- All-pass Filters (APF) or delay equalizers.

A low-pass filter passes only low frequencies, up to a certain cutoff
frequency ωH.

A high-pass filter passes only high frequencies, starting from a certain
cutoff frequency ωL.

A band-pass filter passes a band of frequencies starting at ωp1 and
ending at ωp2, such that its bandwidth is Bw = ωp2 - ωp1 and its center
frequency is ωp = √(ωp1·ωp2).

Finally, the band-stop (notch) filter passes all frequencies except for a
certain frequency band.
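As a small worked example of the band-pass definitions above (the band-edge values are assumptions chosen for illustration):

```python
# Sketch: bandwidth and geometric center frequency of a band-pass filter
# from its band-edge frequencies, per the definitions in the text.
import math

w_p1, w_p2 = 2.0e3, 8.0e3        # assumed band edges, in rad/s
Bw = w_p2 - w_p1                 # bandwidth Bw = wp2 - wp1
w_p = math.sqrt(w_p1 * w_p2)     # center frequency wp = sqrt(wp1 * wp2)
print(Bw, w_p)                   # 6000.0 4000.0
```

Note that the center frequency is the geometric mean of the band edges, not the arithmetic mean; on a logarithmic frequency axis it sits midway between them.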

Fig. 2-1. Filter types, according to the pass bands and stop bands.


The filter design problem can be divided into the following two steps:

1- Filter Approximation
2- Filter Synthesis

Figure 2-2 depicts the sequence of these steps in the filter design cycle.

Fig. 2-2. Filter design cycle


2-3. Filter Approximation


In practical filter design, the amplitude response is more often specified
than the phase response. The amplitude response of the ideal lowpass
filter with normalized cutoff frequency at ωc = 1 is shown in figure 2-3.
This ideal amplitude response cannot be expressed as a rational function
of s. It is thus unrealizable.

Fig. 2-3. Ideal low-pass filter characteristics

If we accept a small error in the passband and a non-zero transition band,


we may seek a rational function H(s), the magnitude of which will
approximate the ideal response as closely as possible. A suitable
magnitude function can be of the form:

|H(jω)| = M(ω) = 1 / √[1 + ε²·W(ω²)]          (2-1)

where  is a constant between zero and one (0 <  ≤ 1), according to the
accepted passband error, and W(ω2) is a function of ω2 such that 0 ≤
W(ω2) ≤ 1 for 0 ≤ ω ≤ 1 and which increases very fast with increasing ω ,
for ω > 1, remaining much greater than one outside the passband.

In general, the numerator of M(ω) may be a constant other than unity,


which will influence the gain (or attenuation) at ω = 0 (at DC).

In the following, we review the most popular functions W(ω2) and the
corresponding H(s), the magnitude of which approximate the amplitude
response of the ideal lowpass filter.


The approximate response of a filter may be obtained using one of the


following famous approximation functions:

1- Butterworth (Maximally Flat) Filter Approximation
2- Chebyshev (Equi-ripple) Filter Approximation
3- Inverse Chebyshev Filter Approximation
4- Cauer (Elliptic) Filter Approximation
5- Bessel or Thomson (Linear Phase) Filter Approximation
6- Gaussian Filter Approximation
7- Legendre (Optimum L) Filter Approximation
8- Linkwitz-Riley Filter Approximation

2-3.1. Butterworth (Maximally Flat) Approximation


The Butterworth approximation (due to Stephen Butterworth) is a filter
design approach whose frequency response is as mathematically flat as
possible in the passband.

If we let = 1 and let

W(2) = 2n (2-2a)

where the filter order n is a positive real integer. We will get the
following amplitude function:

1
M ( )  (2-2b)
1   2n

Hence, at = o = 1, the amplitude is 3 dB below its DC value. This is


the cutoff frequency of the filter. The Butterworth approximation is also
called the maximally flat approximation, because the first 2n–1
derivatives of M(ω) are zero at ω = 0.

The filters attenuation A() is given by:

A() = -10 log |M2(ω) | = -10 log[1+ (ω/ωo)2n] (2-3a)

At ω = ωp we have:


AP = A(P) = 10 log[1+ (ωp/ωo)2n ] = 10 log(1+p2) (2-3b)

At ω = ωs we have:

As = A(ωs) = 10 log[1 + (ωs/ωo)^(2n)] = 10 log(1 + εs²)          (2-3c)

with

εp = (ωp/ωo)^n   and   εs = (ωs/ωo)^n          (2-4)

Fig. 2-4(a). Butterworth LPF characteristics

The filter order n is given by:

n ≥ ln[√(10^(0.1·As) − 1) / √(10^(0.1·Ap) − 1)] / ln(ωs/ωp) = ln(e)/ln(w)          (2-5)

where

e = εs/εp = √(10^(0.1·As) − 1) / √(10^(0.1·Ap) − 1)   and   w = ωs/ωp          (2-6)


Also, the filter cutoff frequency is given by:

ωo = ωp / εp^(1/n) = ωp / (10^(0.1·Ap) − 1)^(1/2n)          (2-7)

So, starting from the filter specifications (Ap, As, ωs and ωp), we can
calculate n and ωo, from which we can construct the Butterworth transfer
function using the so-called spectral factorization method. Note that the
roll-off of Butterworth filters is 20n dB/decade, where n is the filter order.
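The order and cutoff computations of Eqs. (2-5)–(2-7) are easy to script. The sketch below uses illustrative specifications (Ap = 1 dB, As = 40 dB, ωs/ωp = 4 — values not taken from the text) and cross-checks the hand formulas against scipy's `buttord`:

```python
# A sketch of the Butterworth design equations (2-5)-(2-7), cross-checked
# against scipy.signal.buttord. Specification values are illustrative.
import numpy as np
from scipy import signal

Ap, As = 1.0, 40.0        # max passband / min stopband attenuation (dB)
wp, ws = 1000.0, 4000.0   # passband / stopband edge frequencies (rad/s)

eps_p = np.sqrt(10**(0.1*Ap) - 1.0)   # passband ripple factor
eps_s = np.sqrt(10**(0.1*As) - 1.0)   # stopband ripple factor
e, w = eps_s/eps_p, ws/wp

n = int(np.ceil(np.log(e)/np.log(w)))   # filter order, Eq. (2-5)
wo = wp / eps_p**(1.0/n)                # 3-dB cutoff, Eq. (2-7)

n_sp, _ = signal.buttord(wp, ws, Ap, As, analog=True)
print(n, n_sp, wo)   # n = n_sp = 4, wo about 1184 rad/s
```

For this specification both the hand formula and `buttord` give a 4th-order filter, with the 3-dB frequency pushed slightly above the passband edge so that the passband spec is met exactly.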

Fig. 2-4(b). Butterworth filter at different orders

A. Spectral Factorization Method:


Using this method we can construct the Butterworth filter's network
function H(s), starting from its order (n) and its 3-dB normalization
frequency (ωo). The method can be summarized as follows. First we set

D(s) = 1/H(s) = DO(s).D1(s).D2(s)…DK(s) (2-8a)

where
DO(s) = (1 + s/ωo) (2-8b)

Di(s) = (1 − s/si)(1 − s/si*),   i = 1, 2, …, k          (2-8c)

Note that if n is odd such that n=2k+1 then:

D(s) = DO(s) .D1(s)…DK(s) (2-8d)



And if n is even such that n = 2k then:

D(s) = D1(s)…DK(s) (2-9)

The roots (si) may be given by:

si = ωo·exp(jθi)          (2-10)

such that

Di(s) = 1 − 2(s/ωo)·cos θi + (s²/ωo²)          (2-11)

Finally we get:

Av(s) = Ao(s)·A1(s) ··· Ak(s)          (2-12a)

with

Ao = 1                   if n = 2k (even)
   = 1/(1 + s/ωo)        if n = 2k + 1 (odd)          (2-12b)

and

Ai(s) = 1 / [1 − 2(s/ωo)·cos θi + (s²/ωo²)]   for i = 1, 2, …, k          (2-13)

where

θi = (2i + n − 1)π / 2n   for i = 1, 2, …, n          (2-14)

These are the phase angles of the poles si, for (π/2) < θi < (3π/2).
Table 2-1. Butterworth polynomials for a normalized (prototype) LPF. Here s is
normalized with respect to ωo (i.e., putting ωo = 1).

Order, n | k | θi                    | Butterworth polynomial, Bn(s)
1        | 0 | π/2                   | (s + 1)
2        | 1 | 3π/4                  | (s² + 1.4142 s + 1)
3        | 1 | 2π/3                  | (s + 1)(s² + s + 1)
4        | 2 | 5π/8, 7π/8            | (s² + 0.7654 s + 1)(s² + 1.8478 s + 1)
5        | 2 | 6π/10, 8π/10          | (s + 1)(s² + 0.618 s + 1)(s² + 1.618 s + 1)
6        | 3 | 7π/12, 9π/12, 11π/12  | (s² + 0.5176 s + 1)(s² + 1.4142 s + 1)(s² + 1.9319 s + 1)
7        | 3 | 8π/14, 10π/14, 12π/14 | (s + 1)(s² + 0.445 s + 1)(s² + 1.247 s + 1)(s² + 1.8019 s + 1)
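The entries of Table 2-1 can be regenerated from the pole angles of Eq. (2-14). The following sketch multiplies out the quadratic factors of Eq. (2-11) with ωo = 1 and reproduces the tabulated coefficients:

```python
# Regenerating the prototype Butterworth polynomials of Table 2-1 from the
# pole angles theta_i = (2i + n - 1)*pi/(2n), a sketch with wo = 1.
import numpy as np

def butterworth_poly(n):
    """Coefficients of Bn(s), highest power of s first (wo = 1)."""
    poly = np.array([1.0])
    if n % 2:                                  # odd order: real factor (s + 1)
        poly = np.polymul(poly, [1.0, 1.0])
    for i in range(1, n//2 + 1):
        theta = (2*i + n - 1)*np.pi/(2*n)      # pole angle, Eq. (2-14)
        # quadratic factor s^2 - 2*cos(theta)*s + 1, from Eq. (2-11)
        poly = np.polymul(poly, [1.0, -2.0*np.cos(theta), 1.0])
    return poly

print(butterworth_poly(4))   # coefficients 1, 2.6131, 3.4142, 2.6131, 1
```

For n = 4 this expands the product (s² + 0.7654 s + 1)(s² + 1.8478 s + 1) of Table 2-1.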


The roots (poles) of H(s) may be plotted on the left-hand side of a circle
of radius ωo, called the Butterworth circle, in the s-plane, as shown in
figure 2-5.

Fig. 2-5. Butterworth circle, with roots (poles) of the transfer function.

Note that an LPF with ωo = 1 is called a prototype LPF. After
constructing the network function H(s), you can implement
(synthesize) the filter, as we demonstrate in a subsequent section.

B. Frequency Translation:
The high-pass Butterworth filter is analogous to the LPF and has the
general characteristic:

|H(ω)|² = (ω/ωo)^(2n) / [1 + (ω/ωo)^(2n)] = 1 / [1 + (ωo/ω)^(2n)]          (2-15)

where n is the order of the HPF .

Note that this is the same as the LPF characteristic, with (ω/ωo) replaced
by (ωo/ω). So you can use the former design rules (to obtain n and ωo) and
construct the network function H(s) of high-pass filters using the same
Butterworth polynomial of the LPF, replacing (s/ωo) with (ωo/s):

LPF → HPF:  (ω/ωo) → (ωo/ω)   or   (s/ωo) → (ωo/s)          (2-16)


Similarly, you can translate a prototype LPF to a BPF using the relations:

LPF → BPF:  (ω/ωo) → [ω² − ωp1·ωp2] / [ω·(ωp1 − ωp2)]

or  (s/ωo) → [s/B + ωo²/(sB)] = (s² + ωo²)/(sB)          (2-17)

where ωp1 and ωp2 are the 3-dB frequencies of the bandpass filter
(that is, the filter bandwidth is B = ωp1 − ωp2). On the other hand, you
can translate a prototype LPF to a BSF using the relation:

LPF → BSF:  (ω/ωo) → [ω·(ωs1 − ωs2)] / [ω² − ωs1·ωs2]

or  (s/ωo) → 1 / [s/B + ωo²/(sB)] = (sB)/(s² + ωo²)          (2-18)

where ωs1 and ωs2 are the 3-dB frequencies of the bandstop filter, such
that the filter bandwidth is B = ωs1 − ωs2 and ωo = √(ωs1·ωs2) is its
center frequency.
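The frequency translations of Eqs. (2-16)–(2-18) are available as scipy helpers (`lp2hp`, `lp2bp`). A minimal sketch, with illustrative values of ωo and B not taken from the text:

```python
# The frequency translations of Eqs. (2-16)-(2-18) via scipy's helpers.
# The prototype is a 2nd-order Butterworth LPF; wo = 100 rad/s and
# B = 20 rad/s are illustrative values.
import numpy as np
from scipy import signal

b, a = signal.butter(2, 1.0, analog=True)        # normalized LPF prototype

bh, ah = signal.lp2hp(b, a, wo=100.0)            # (s/wo) -> (wo/s)
bb, ab = signal.lp2bp(b, a, wo=100.0, bw=20.0)   # (s/wo) -> (s^2 + wo^2)/(B*s)

# Sanity checks: the HPF is 3 dB down at wo, and the BPF has unity gain
# at its center frequency (the prototype's DC gain).
_, Hh = signal.freqs(bh, ah, worN=[100.0])
_, Hb = signal.freqs(bb, ab, worN=[100.0])
print(abs(Hh[0]), abs(Hb[0]))   # about 0.707 and 1.0
```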

Example 2-1:
Consider a Butterworth filter that is to give an attenuation of at most
0.1 dB at 10 kHz and at least 60 dB at 1 kHz.
i) Determine the filter order.
ii) What is the cutoff (3-dB) frequency of this filter?
iii) What are the actual attenuations at 1 kHz and 10 kHz?
iv) Determine the network function Av(s) that best meets the above
specifications.
v) Draw the Bode plots of this filter.

Solution:
i) Let's use the Butterworth filter, whose order is given by (2-5):

n ≥ ln(e)/ln(w) = ln(εs/εp)/ln(ωs/ωp) = ½·ln(εs²/εp²) / ln(ωs/ωp)

where εs² = 10^(0.1·As) − 1 and εp² = 10^(0.1·Ap) − 1, so that

n ≥ ln[√(10^(0.1·As) − 1) / √(10^(0.1·Ap) − 1)] / ln(ωs/ωp)

Here: As = 60 dB at ωs = 1 kHz, and Ap = 0.1 dB at ωp = 10 kHz.
Substituting, we get n ≥ −3.82. The negative sign indicates a high-pass
filter, with |n| ≥ 3.82.


So, we take n = 4 (the next higher integer).

ii) Since this is a high-pass filter, the cutoff frequency is given by:

ωo = ωp·εp^(1/n) = ωp·(10^(0.1·Ap) − 1)^(1/2n) = 6.25 kHz

iii) The actual attenuation at the stopband edge (1 kHz) is:

As(ω = 1 kHz) = 10 log[1 + (ωo/ωs)^(2n)] = 63.7 dB (better than 60 dB)

Also, at the passband edge:

Ap(ω = 10 kHz) = 10 log[1 + (ωo/ωp)^(2n)] = 0.1 dB (as specified)

iv) The prototype filter transfer function is given by equation (2-13):

A(s) = 1 / {[1 − 2(s/ωo)·cos θ1 + s²/ωo²] · [1 − 2(s/ωo)·cos θ2 + s²/ωo²]}

where θi = (2i + n − 1)π/2n, such that θ1 = 5π/8 and θ2 = 7π/8.

Substituting θ1 and θ2 into A(s) yields:

A(s) = 1 / [(s²/ωo² + 0.7654·(s/ωo) + 1) · (s²/ωo² + 1.8478·(s/ωo) + 1)]

Note that this result is identical to the normalized Butterworth
polynomial, for n = 4, in Table 2-1, with s replaced by (s/ωo).
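Example 2-1 can be cross-checked numerically: applying scipy's `buttord` to the same high-pass specification returns the same order, n = 4, and the resulting design meets both attenuation targets:

```python
# Numerical cross-check of Example 2-1 with scipy: order and attenuations
# of the Butterworth HPF (at most 0.1 dB at 10 kHz, at least 60 dB at 1 kHz).
import numpy as np
from scipy import signal

wp, ws = 10e3, 1e3   # passband / stopband edges (treated here as rad/s)
n, wn = signal.buttord(wp, ws, gpass=0.1, gstop=60.0, analog=True)

b, a = signal.butter(n, wn, btype='highpass', analog=True)
_, H = signal.freqs(b, a, worN=[ws, wp])
A = -20.0*np.log10(np.abs(H))   # attenuation in dB at [ws, wp]

print(n, wn, A)   # n = 4; both specifications are met
```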


2-3.2. Chebyshev (Equi-ripple) Approximation


In the Chebyshev approximation, the LPF function takes the following
form:

|H(jω)|² = M²(ω) = 1 / [1 + ε²·Cn²(ω)]          (2-19a)

Here again, 0 < ε ≤ 1, and Cn(ω) is a Chebyshev polynomial of degree n,
having the following expression:

Cn(ω) = cos(n·cos⁻¹ω)     for 0 ≤ |ω| ≤ 1
      = cosh(n·cosh⁻¹ω)   for |ω| ≥ 1          (2-19b)

The Chebyshev function Cn(ω) varies between +1 and −1 in the passband
(0 ≤ ω ≤ 1), while its absolute value increases rapidly with ω above ω = 1.
Consequently, M(ω) varies between 1 and (1 + ε²)^(−1/2) in the passband,
having an oscillatory (ripple) error of

20 log(1+)½ = 10 log (1+2) dB (2-20)

Thus, the accepted error in the passband determines the value of ε.

Fig. 2-6. Chebyshev LPF characteristics


The Chebyshev polynomials can be obtained by the recursion formula:

Cn+1(ω) = 2ω·Cn(ω) − Cn−1(ω),   C0(ω) = 1          (2-21)

For instance: C1(ω) = ω, C2(ω) = 2ω² − 1, C3(ω) = 4ω³ − 3ω, etc.

Note that the peak-to-peak ripple in the passband of the Chebyshev
filter is:

Ap = 10 log(1 + εp²)          (2-22a)

Similarly, in the stopband:

As = 10 log(1 + εs²)          (2-22b)

2-3.3. Inverse Chebyshev (type-II) Approximation


The Chebyshev polynomials are also used to obtain the so-called Inverse
Chebyshev (or type-II) filter functions, the magnitude of which is given
as follows:

1
 2Cn2 
  
H ( j )  M ( ) 
2 2
(2-23)
1
1   2Cn2  
 

Fig. 2-7. Inverse Chebyshev LPF characteristics


The order of the Chebyshev filter (n) that meets a given εp (for type-I)
or a given εs (for type-II) is given by:

n ≥ cosh⁻¹(e) / cosh⁻¹(w) = ln(e + √(e² − 1)) / ln(w + √(w² − 1))          (2-24a)

with

e = εs/εp   and   w = ωs/ωp          (2-24b)

where p and s can be determined from Ap and AS. The above relation
may be expressed in terms of As and Ap as follows:

(2-25)

The 3-dB frequency (ωo) of the type-I Chebyshev filter is given by:

ωo = ωp·cosh[(1/n)·cosh⁻¹(1/εp)],   εp ≤ 1          (2-26)
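The Chebyshev order formula (2-24a) is easy to check against scipy's `cheb1ord`. The specification values below are illustrative, not from the text:

```python
# Chebyshev order from Eq. (2-24a), cross-checked with scipy.signal.cheb1ord.
import numpy as np
from scipy import signal

Ap, As = 1.0, 40.0        # passband / stopband attenuation (dB)
wp, ws = 1000.0, 2000.0   # passband / stopband edges (rad/s)

e = np.sqrt((10**(0.1*As) - 1.0)/(10**(0.1*Ap) - 1.0))
w = ws/wp
n = int(np.ceil(np.arccosh(e)/np.arccosh(w)))   # Eq. (2-24a)

n_sp, _ = signal.cheb1ord(wp, ws, Ap, As, analog=True)
print(n, n_sp)   # 5 5
```

For the same specification, the Butterworth formula (2-5) gives n = 8, which illustrates the sharper transition of the Chebyshev approximation.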

2-3.4. Cauer (Elliptic) Function Approximation


The filters examined so far, except for the Inverse Chebyshev, have
all of their zeros located at infinity. However, in some cases a higher
falloff rate is required in the transition band; in other words, a very high
attenuation is required near the cutoff frequency. This requirement
mandates the use of elliptic functions in the filter approximation. The so-
called Cauer filters use such elliptic functions. These filters display
equi-ripple behavior both in the passband and the stopband. The typical
magnitude response of a Cauer elliptic filter is shown in figure 2-8,
corresponding to the following general filter function H(jω):

|H(jω)|² = M²(ω) = 1 / [1 + ε²·Rn²(ω)]          (2-27)

where Rn(ω) is the nth-order elliptic rational function, whose zeros lie
inside the passband and whose poles lie inside the stopband. The Rn
function is sometimes called the Chebyshev rational function.

Thus, the elliptic filter is a filter with equalized ripple behavior in both
Thus, the elliptic filter is a filter with equalized ripple behavior in both
the passband and the stopband. The amount of ripple in each band is
independently adjustable, and no other filter of equal order can have a
faster transition (roll-off) in gain between the passband and the stopband,
for the given values of ripple. Alternatively, one may give up the ability
to independently adjust the passband and stopband ripple, and instead
design a filter which is maximally insensitive to component variations.

Fig. 2-8. Elliptic (Cauer) LPF approximation

As the ripple in the stopband of the elliptic filter approaches zero, the
filter becomes a type-I Chebyshev filter. As the ripple in the passband
approaches zero, the filter becomes a type-II Chebyshev filter and, as
both ripple values approach zero, the filter becomes a Butterworth filter.

2-3.5. Bessel (Linear Phase) Filters

These filters make use of the so-called Thomson functions (which are
related to Bessel polynomials) as the denominator of the network function:

H(s) = 1 / (ao + a1·s + a2·s² + … + s^n)          (2-28a)


where n is the filter order and the coefficients ak are given by:

ak = (2n − k)! / [2^(n−k) · (n − k)! · k!]          (2-28b)

Note that the highest-order coefficient (an) is unity. The Bessel filter
denominator polynomials may also be expressed recursively as follows:

b1(s) = s + 1
b2(s) = s² + 3s + 3
bn(s) = (2n − 1)·bn−1(s) + s²·bn−2(s)          (2-29)

These polynomials give a linear phase of slope −to. In fact, the phase of
the network function of Bessel filters may be approximated as follows:

φH(ω) = −(a1/ao)·ω + … ≈ −to·ω          (2-30a)

where to = a1/ao. Therefore, the group delay of the Bessel filter is given by:

τg(ω) = −dφH(ω)/dω ≈ to = constant          (2-30b)

The Bessel-based (Thomson) functions can achieve a time-domain step
response with minimum overshoot (ringing). However, a step response
which is completely free of ringing can be obtained from the normalized
Gaussian characteristics.
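The near-constant group delay of Eq. (2-30b) can be verified numerically. A sketch using scipy's Bessel design, where `norm='delay'` normalizes the DC group delay to 1/Wn = 1 second:

```python
# Checking the near-constant group delay of Eq. (2-30b) for a 4th-order
# analog Bessel filter. norm='delay' sets the DC group delay to 1/Wn = 1 s.
import numpy as np
from scipy import signal

b, a = signal.bessel(4, 1.0, analog=True, norm='delay')

w = np.linspace(0.01, 1.0, 200)          # passband frequencies (rad/s)
_, H = signal.freqs(b, a, worN=w)

phase = np.unwrap(np.angle(H))
tg = -np.gradient(phase, w)              # numerical group delay (seconds)
print(tg.min(), tg.max())                # both stay close to 1
```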

2-3.6. Gaussian Filters


The Gaussian filter has a step response with no overshoot (ringing) while
maintaining linear-phase characteristics. This behavior is closely
connected to the fact that the Gaussian filter has the minimum possible
group delay. The Gaussian filter window is the Gaussian function:

|H(ω)|² = e^(−ω²) = 1 / (1 + ω² + ω⁴/2! + ω⁶/3! + …)          (2-31)

Thus, the Gaussian filter characteristics may be approximated as follows:

|H(ω)|² ≈ 1 / (1 + ω² + ω⁴/2)          (2-32a)


Such that

H(s) = 2 / (2 + 2.1473·s + s²)          (2-32b)

2-3.7. Optimum L (Legendre) Filters


The Optimum "L" filter (also known as a Legendre filter) was proposed
by Athanasios Papoulis in 1958. It has the maximum roll-off rate for a
given filter order while maintaining a monotonic frequency response. It
provides a compromise between the Butterworth filter, which is
monotonic but has a slower roll-off, and the Chebyshev filter, which has a
faster roll-off but has ripple in either the passband or stopband. The filter
design is based on Legendre polynomials, which is the reason for its
alternate name and the "L" in Optimum "L".

2-3.8. Comparison Between Different Approximations


It is possible to design a new type of filter from scratch, but the filters
listed in this chapter have been studied extensively, and designs for these
filters (including circuit implementations) are all readily available.

A Butterworth filter is a type of filter that is easy to implement.
Butterworth filters offer consistent, smooth results, with a maximally flat
response in the passband. Butterworth filters, however, don't have a
particularly steep roll-off, and if roll-off is the biggest concern, another
type of filter should be used.

Chebyshev. Chebyshev filters exhibit an equi-ripple shape in their
frequency response. Chebyshev filters can be divided into two types:
Chebyshev Type I filters, which have ripples in the passband, and
Chebyshev Type II filters, which have ripples in the stopband.

In comparison to Butterworth filters, Chebyshev filters have a much
steeper roll-off, but at the same time suffer from a rippling effect in the
passband that can cause undesired results. Also, Chebyshev equations
utilize a complex string of trigonometric and inverse-trigonometric
functions, which can be difficult to deal with mathematically.

Elliptic filters, like Chebyshev filters, suffer from a ripple effect.
Unlike the Chebyshev filters, however, elliptic filters have ripples in both
the passband and the stopband. This limitation of elliptic filters is
compensated by their very aggressive roll-off.
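The roll-off comparison above can be quantified with scipy by evaluating equal-order designs at twice the passband edge. The ripple values chosen here (1 dB passband, 40 dB stopband) are illustrative:

```python
# Quantifying the roll-off comparison: attenuation at twice the passband
# edge for equal-order (n = 4) analog lowpass designs.
import numpy as np
from scipy import signal

designs = {
    'butter': signal.butter(4, 1.0, analog=True),
    'cheby1': signal.cheby1(4, 1.0, 1.0, analog=True),
    'ellip':  signal.ellip(4, 1.0, 40.0, 1.0, analog=True),
}

atten = {}
for name, (b, a) in designs.items():
    _, H = signal.freqs(b, a, worN=[2.0])    # evaluate at w = 2 rad/s
    atten[name] = -20.0*np.log10(abs(H[0]))

print(atten)   # attenuation grows: butter < cheby1 < ellip
```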


2-4. Active Filter Synthesis


So far, we have discussed the approximation functions of active filters,
which is the first step in active filter design. In this section we deal with
the implementation (synthesis) of active filters. All the active elements
introduced in the previous chapters (Op-Amps, OTAs and CCs) are
useful in the realization of active filter functions. The active filter
implementation methods are various, but some are more advantageous
than others, especially when the filters are implemented as integrated
circuits (ICs). One of the key issues in the filter design problem is the
sensitivity to component variations, with temperature and due to aging
and other environmental effects.

2-4.1. Filter Sensitivity


Filter sensitivity is a measure of the change in the filter performance
characteristics due to changes in the values of its elements. A good filter
design should have small sensitivity to its component variations. The
sensitivity of a certain filter parameter (P) against the variation of a
certain circuit element value (z) is defined as follows:

P z  ln( P)
S zP .  (2-33a)
z P  ln( z )

For example, consider the a LPF which has a transfer function H(s)
=1/[1+(s/o)], with o=1/RC. The sensitivity of o against R is given by:

o R
S Ro  .  1
R o (2-33b)

2-4.2. Active Filter Devices


The active elements, which are employed in active filters, usually have
one to three ports with properties that make them very useful in network
synthesis. Some active elements are more useful than others, in the sense
that their realizations are more practical than others. The most important
ideal active elements in network synthesis fall into the following groups:

• Ideal controlled sources, like Op-Amps and OTA’s


• Current conveyors (CC’s)
• Generalized impedance converters (GIC) & impedance inverters (GII)
• Negative resistance devices.


The Op-Amp and OTA are special cases of two ideal active elements, and
their implementations in IC form make them indispensable today, both in
discrete and fully integrated analog network design.
It should be noted that automatic electronic tuning is crucial for fully
integrated filters, to compensate for the drifts of element values and filter
performance due to component tolerance, device non-ideality, parasitic
effects, temperature, and aging. Integration of analog circuits in MOS
technology is also driven towards single-chip implementation of mixed
circuits, because digital circuits are integrated in MOS technology.
In conventional active-RC filters, the resistor is the problem; it has a
physically limited range (normally R ≤ 40 kΩ) and is not electronically
tunable. A MOSFET can be used as a voltage-controlled resistor in the
ohmic region, with its resistance being adjustable by the gate voltage.
MOSFETs can also be used to simulate resistors in the so-called
switched-capacitor (SC) filters. Therefore, using the MOSFET to replace
the resistor in active-RC filters can meet the two requirements, and the
resulting filters are sometimes called MOSFET-C filters.

2-4.3. Filter Synthesis Methods


The most popular method for high-order filter design is the cascade
method due to its modularity of structure and simplicity of design and
tuning. For a given transfer function we first factorize it into low-order
functions (first and second order) and then realize these functions using
the well-known filter structures. Finally we cascade the designed low-
order sections, the whole circuit giving the desired transfer function.
The cascade method, however, has a rather high sensitivity to component
tolerances. On the other hand, it has been established that resistively
terminated lossless LC filters have very low passband sensitivity. In order
to achieve low sensitivity, active filters can thus be designed by
simulating passive LC ladder filters.

In this section we investigate both the cascade and the ladder-simulation
methods for active filter design. In the latter case, we assume
the availability of design tables or appropriate computer software for the
generation of the equivalent LC ladder network component values, and
therefore concentrate on how to simulate such passive filters using Op-
Amps, OTAs, or even CCs, and capacitors. Various methods of
simulation of doubly-terminated passive LC ladders will be introduced in
a systematic way in this chapter. The simulation methods can be broadly
classified into three categories:

 Component substitution,
 Signal flow simulation, and
 Coupled biquad realization.

The first category (Component substitution), belonging to the topological


approach, includes the inductor substitution (Gyrators), frequency-
dependent negative resistance (FDNR) with the Bruton transformation
and the impedance/admittance block substitution.

The second category (Signal flow simulation), contains the Leapfrog


(LF) simulation structures as well as matrix methods, including the
wave filter method.
The third category (Coupled biquad realization) embraces the biquad-
based LF structure, one of the multiple loop feedback configurations,
and the follow-the-leader-feedback (FLF) structure.

Active filter synthesis methods (summarized in Fig. 2-9) fall into two
broad families:

• Cascade methods:
  - Single-amplifier biquads (SAB): Sallen-Key filters
  - Multiple-amplifier biquads (MAB): Tow-Thomas biquad,
    Åkerberg-Mossberg biquad, KHN biquad (state variables)
  - Op-Amp, OTA, current-conveyor, and switched-capacitor realizations

• Direct simulation methods:
  - Component substitution: gyrators, FDNR
  - Signal flow simulation: Leapfrog (LF), matrix methods
  - Coupled biquads: biquad Leapfrog, FLF

Fig. 2-9. Synthesis methods of high-order active filters.


2-5. Active Filter Synthesis, using Op-Amps


The Op-Amp is a very suitable device for the realization of the transfer
functions of active filters. In this section we deal with the
implementation (synthesis) of active filters using Op-Amps. The Op-Amp
is the most versatile active element, in use up to frequencies of the order
of 100 kHz.

2-5.1. Op-Amp Filter Synthesis, by Cascade Method


We start by introducing the synthesis of Op-Amp active filters with the
aid of cascade methods. For this purpose, we introduce the 1st-order and
2nd-order Op-Amp filter blocks, which are employed in the cascade
synthesis method. Then we turn our attention to the simulation methods,
with emphasis on component substitution (using gyrators and FDNRs)
and Leapfrog methods.

2-5.1. i. First-order Op-Amp Filters


Although the lowpass and highpass functions can be realized using RC
circuits only, the presence of the Op-Amp in the circuit provides gain
and isolation from the circuit that follows it. Figure 2-10 depicts two
possible configurations for a first-order LPF using Op-Amps.

(A) (B)

Fig. 2-10. First-order LPF structures, using Op-Amps. (a) Inverting configuration ,
(b) Non-inverting configuration.

The transfer function of the LPF is given by:

H(s) = Vo/Vi = k·ωo / (s + ωo)          (2-34a)

where o = 1/CR. In the non-inverting configuration shown in figure 2-


10(b), the gain K is given by:

-37-
Dr. Eng. Muhammad EL-SABA
Analog & Digital Signal Processing Chapter 2

K = 1 + R2/R1 (2-34B)
In the inverting configuration shown in figure 2-10(a), the gain is given
by K = -R2/R1.
Figure 2-11 depicts two possible configurations for a first-order HPF
using Op-Amps. The transfer function of the HPF is given by:

H(s) = Vo/Vi = k·s / (s + ωo)          (2-35)

The gain k is defined as before, in the cases of the inverting and
non-inverting LPF.

Fig. 2-11. First-order HPF structures, using Op-Amps. (a) Inverting configuration ,
(b) Non-inverting configuration.

2-5.1.ii. General Second-Order Filter (Biquad) Function


The second-order filter function, in its general form, is the following:

H(s) = (a2·s² + a1·s + ao) / (s² + α·s + β)          (2-36a)

H(s) can be written alternatively in the general "biquadratic" form as
follows:

H(s) = k · [s² + (ωos/Qs)·s + ωos²] / [s² + (ωop/Qp)·s + ωop²]          (2-36b)

where ωos and ωop are the undamped natural frequencies of the zeros and
poles, respectively, while Qs and Qp are the corresponding quality factors,
or Q factors. The zero and pole frequencies correspond to the magnitudes
of the filter zeros or poles, while their quality is a measure of how near
the jω-axis the corresponding zero or pole lies in the s-plane. Comparing
equation (2-36a) to equation (2-36b), we get:

ωop = √β,   Qp = √β/α          (2-36c)

Note that the inverse of the quality factor is sometimes called the
damping factor (DF = 1/Qp = α/√β). The following table summarizes the
transfer functions of the 2nd-order filters.

Table 2-2. Transfer functions of the 2nd-order filters (ωo: pole frequency,
Q: pole quality factor).

Filter Type           Transfer Function
Low Pass              H(s) = K·ωo² / [s² + (ωo/Q)·s + ωo²]
High Pass             H(s) = K·s² / [s² + (ωo/Q)·s + ωo²]
Band Pass             H(s) = K·(ωo/Q)·s / [s² + (ωo/Q)·s + ωo²]
Notch (Band Stop)     H(s) = K·(s² + ωo²) / [s² + (ωo/Q)·s + ωo²]

2-5.1.iii. Single-Op-Amp Biquads


An active RC circuit realizing a biquadratic transfer function is called a
biquad. A single-amplifier biquad (SAB) is a biquad using one amplifier.
Most SABs can be classified into one of the two general structures shown
in figure 2-12. The first has a non-inverting Op-Amp configuration and
the second an inverting Op-Amp. These voltage-controlled voltage-source
(VCVS) structures are sometimes called Sallen-Key filters.


Fig. 2-12. General single amplifier biquad (SAB) filter structures.

A. Sallen-Key 2nd order (Biquad) LPF


The Sallen-Key filters are widely used to implement 2nd-order filters of
different types. The most popular second-order lowpass Sallen-Key filter
is shown in Fig. 2-13. As shown, the circuit employs an Op-Amp in an
arrangement of a VCVS with gain k. The transfer function of this LPF is
given by:

H(s) = Vo/Vi = [k/(C1C2R1R2)] / {s² + [1/(R1C1) + 1/(R2C1) + (1 − k)/(R2C2)]·s + 1/(C1C2R1R2)}          (2-37)

where k = 1+ RB / RA

Fig. 2-13. Sallen-Key 2nd order LPF

Therefore, the realization of the second-order lowpass function may be
put in the following general form:

H(s) = K / (s² + α·s + β)          (2-38)

As a special case, if R1 = R2 = R and C1 = C2 = C, then:

ωo = 1/RC,   DF = 1/Q = 3 − k = 2 − RB/RA          (2-39)

A Butterworth response of a 2nd-order LPF requires that Q = 1/√2 = 0.707.
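The equal-component special case of Eq. (2-39) is easy to verify numerically: with k = 3 − √2 the circuit realizes Q = 1/√2, and the magnitude at ωo is 3 dB below the DC gain k. A sketch, with illustrative R and C values:

```python
# Equal-component Sallen-Key LPF of Eq. (2-39): with R1 = R2 = R and
# C1 = C2 = C, wo = 1/RC and 1/Q = 3 - k. Choosing k = 3 - sqrt(2) gives
# Q = 1/sqrt(2) (Butterworth), so |H(j*wo)| = k*Q = k/sqrt(2).
import numpy as np
from scipy import signal

R, C = 10e3, 10e-9           # illustrative component values
wo = 1.0/(R*C)
k = 3.0 - np.sqrt(2.0)       # VCVS gain for a Butterworth response

# From Eq. (2-37) with equal components:
b = [k*wo**2]                          # numerator k*wo^2
a = [1.0, (3.0 - k)*wo, wo**2]         # s^2 + (3 - k)*wo*s + wo^2

_, H = signal.freqs(b, a, worN=[wo])
print(abs(H[0]), k/np.sqrt(2.0))   # equal: gain at wo is k/sqrt(2)
```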

B. Sallen-Key 2nd order (Biquad) HPF


The transfer function of the HPF of Fig. 2-14 is given by:

H(s) = Vo/Vi = k·s² / {s² + [1/(R1C1) + 1/(R2C1) + (1 − k)/(R2C2)]·s + 1/(C1C2R1R2)}          (2-40)

Fig. 2-14. Sallen-Key 2nd order HPF

C. Sallen-Key 2nd order (Biquad) BPF


The transfer function of the following BPF is given by:

Vo ks
H ( s)  
Vi  C  C2 k   1  (2-41)
s2   1   
s  
 R2C1C2 R2C2   C1C2 R1R2 


Fig. 2-15. Sallen-Key 2nd order BPF

Example 2-2:
Design a filter that has an attenuation of at most 1 dB at 1 kHz and an
attenuation of at least 50 dB at 5 kHz, with a maximally flat response in
the passband.

Solution
This is a LPF which has: Ap = 1 dB at ωp = 2π × 1k rad/s, and
As = 50 dB at ωs = 2π × 5k rad/s.
1- We start with the filter approximation (Butterworth, for a maximally
flat response) and find the filter order n and corner frequency ωo from
the above specifications, as follows:

Ap = 10 log[1 + εp²] = 1 dB  →  εp = √(10^0.1 − 1) = 0.508

As = 10 log[1 + εs²] = 50 dB  →  εs = √(10^5 − 1) = 316

e = εs/εp = 316/0.508 = 621,   w = ωs/ωp = 5

n ≥ ln(e)/ln(w) = 2.8  →  n = 3

Also ωo = ωp/εp^(1/n) = 2π × 1.9k rad/s, or fo = 1.9 kHz.

2- We now proceed to the filter synthesis (implementation) step, as a
3rd-order Sallen-Key LPF, using the cascade method:


Fig. 2-16. A cascaded Sallen-Key 3rd order LPF

3- After choosing the synthesis method and circuit topology, we
determine the circuit element values, as follows:

For a Butterworth response of this 3rd-order LPF, the normalized network
function (transfer function) should have the following form (refer to
Table 2-1):

Av(s) = 1/D(s), with D(s) = (1 + s/ωo)·(1 + s/ωo + s²/ωo²)

So, we have Q = 1 for the second stage.

If we choose R1 = R2 = R and C1 = C2 = C for the second stage, then
1/Q = DF = 1 = 2 − Rb/Ra, or Rb = Ra. We may choose Rb = Ra = 1 kΩ.

Also, we have ωo = 1/RC = 2π × 1.9k rad/s. We may choose C = 1 nF;
then R = 8.7 kΩ.
As for the first stage, it should have the same corner frequency, ωo =
1/(R'1C'1), so we may also choose R'1 = R = 8.7 kΩ and C'1 = C = 1 nF.

Finally, R'a and R'b may be arbitrarily chosen to make a gain k1 ≈ 10 =
1 + R'b/R'a. For instance, if we choose R'b = 8.7 kΩ and R'a = 1 kΩ, then
k1 = 9.7.

Note that the roll-off of the 3rd-order Butterworth LPF is 60 dB/decade.


2-5.1.iv. Two-Op-Amp Biquads


A large number of two-Op-Amp biquads can be obtained by the
techniques introduced so far for enhancing the Q of certain SABs. In
addition, very low-sensitivity biquads may be obtained if the inductance
of an LCR biquad passive filter is simulated by a two-Op-Amp general
impedance converter (GIC).

2-5.1.v. Three-OpAmp Biquads


The use of three Op-Amps to realize second-order filter functions leads
to multiple-output biquads with the additional advantage of versatility, in
that ωo, Q, and the filter gain can be independently adjusted. Although
they are not without problems, as we shall see later, technology has
produced three-Op-Amp chips ready for use in realizing biquadratics.

We may develop such a three-Op-Amp biquad following the old analog-
computing technique for solving a differential equation. To show this, let
us consider the second-order lowpass function realized as the ratio of two
voltages Vo and Vi, i.e.:

H(s) = Vo/Vi = K / [s² + (ωo/Q)·s + ωo²]          (2-42a)

(s²/ωo²)·Vo = (K/ωo²)·Vi − Vo − (1/Q)·(s/ωo)·Vo          (2-42b)

A. Kerwin-Huelsman-Newcomb (KHN) Filter


The so-called two-integrator biquad can realize the three basic 2nd-order
filter functions, namely LPF, BPF and HPF. It is obtained by
implementing the above quadratic equation. The so-called Kerwin-
Huelsman-Newcomb (KHN) biquad is obtained by appropriate selection
of components of the two-integrator-loop biquad, as shown in Fig. 2-17.

Fig. 2-17. KHN three-Op-Amp filter



B. Tow-Thomas Biquad Filter


This circuit was initially proposed by Tow and studied by Thomas. Its
passive sensitivities are similar to those of the KHN three-Op-Amp
network. However, it is more versatile, realizing all kinds of second-order
filter functions without the use of an extra Op-Amp, which is required in
the case of the KHN circuit. It suffers, though, from the effects of excess
phase shift on Q enhancement.

(2-43)

Therefore, it is possible to obtain any kind of second-order filter function
by the proper choice of the component values. Thus we may have the
following:

IF C1 = 0, R2 = R3 = ∞  →  LPF
IF C1 = 0, R1 = R2 = ∞  →  BPF (NON-INVERTING)
IF C1 = 0, R2 = R3 = ∞  →  BPF (INVERTING)
IF C1 = C, R1 = R2 = R3 = ∞  →  HPF
IF C1 = C, R1 = R3 = ∞  →  BSF (NOTCH)
IF C1 = C, R1 = R3 = ∞, R = R3/Q  →  APF (ALLPASS)      (2-44)

Fig. 2-18. Tow-Thomas biquad filter

C. Akerberg-Mossberg Filters
The Åkerberg-Mossberg biquad is a modified version of the Tow-Thomas
Op-Amp biquad. The modification is made by replacing the non-inverting
integrator (A2 and A3 in Fig. 2-18) with two back-to-back Op-Amps.
For simplicity, we set the following values in the circuit of Fig. 2-18:

C1 = 0, R2 = R3 = ∞      (2-45)

Then the Tow-Thomas biquad becomes as shown in Fig. 2-19(a). By
actively compensating the non-inverting integrator, the Åkerberg-Mossberg
biquad is obtained, as shown in Fig. 2-19(b). Straightforward
analysis of the biquad in Fig. 2-19(b) yields:

(2-46a)

(2-46b)

The Åkerberg-Mossberg biquad, in its simplified form, can thus
simultaneously realize lowpass and bandpass functions with the same
poles. In its more general version, it can realize any type of biquadratic
function if the input signal is fed, properly weighted, to the inputs of all
amplifiers. In all cases, the poles are not affected.

Fig. 2-19. Tow-Thomas biquad actively compensated for excess phase, giving the
Åkerberg-Mossberg biquad in (b).


2-5.2. Op-Amp Filters by Direct Simulation Methods


The cascade methods discussed so far are rather sensitive to
component variations. The so-called direct methods, which are
based on the simulation of ladder filters, have much lower sensitivity to
component variations. The ladder-simulation methods are based on
resistively terminated lossless LC ladder filters. As we pointed out earlier,
these passive structures have very low passband sensitivity to component
variations. Op-Amp filters can be designed by simulating such passive
LC ladder filters. In this section we concentrate on how to simulate the
passive RLC filters, with emphasis on LC ladders, using only Op-Amps
and RC components.

We start by introducing the basics of LC ladder networks and then we


present one of the ladder simulation methods, namely: the Leapfrog
simulation. Other simulation methods, which are based on the component
substitution, will be detailed in a subsequent section, when we discuss the
OTA-based active filters.

2-5.2.i. LC Ladder Filters


The general ladder network is shown below in Fig. 2-20. Figure 2-21
shows two common lowpass LC filter configurations, and figure 2-22
shows two common highpass LC filter configurations. Each section
consists of an L-C pair, and the number of sections corresponds to the
order of the filter. Note that the values of the capacitors and inductors
change with varying input and output resistances.

Tabulated "normalized" values for the inductors and capacitors for


varying termination ratios are available to the design engineer. The
component values in the tables are normalized with respect to the
termination ratio and cutoff frequency.

Fig. 2-20. Resistively-terminated ladder network.


Normalized coefficients of Butterworth (maximally flat), Chebyshev
(equi-ripple) and Bessel (linear-phase) L-C lowpass filters are presented
in tabular form in tables 2-3 through 2-5. The coefficients (gk) refer to
either Lk or Ck, according to the filter type (LPF, HPF, etc.) and
configuration (even or odd order). The normalized inductors (L̄) and
capacitors (C̄) should be de-normalized using the following relations:

C = C̄ /(ωo RL) ,   L = L̄ RL /ωo      (2-47)

Fig. 2-21. Two common LC ladder LPF configurations.

Fig. 2-22. Two common LC ladder HPF configurations.



The transformations between the LPF and other types of filters are given in
the following table (Table 2-6).
The normalized component values¹ gk (Ck and Lk), shown in the above
tables (for Rs = RL = 1 and ωo = 1), can also be calculated analytically. For
instance, the normalized elements of a Butterworth odd-order LC ladder
LPF (with Rs = RL = 1 and ωo = 1) are given by:

L2i−1 = 2 sin θ4i−3 ,   C2i = 2 sin θ4i−1 ,   i = 1, 2, ... (n+1)/2      (2-48a)

where

θk = kπ/(2n),   k = 1, 2, 3, ... 2n−1      (2-48b)

Table 2-6. Transformation between different types of filters

The synthesis procedure of passive LC ladder filters may be summarized
as follows:

1- Start with a normalized prototype LPF; determine the component
values (Li, Ci) from tables, computer programs, or ready
formulas if available, like equations (2-48) for Butterworth filters.
2- Transform the prototype to the required filter type (e.g., LPF → BPF)
using the frequency translation rules, e.g. equations (2-17) to (2-19).
3- De-normalize the component values, using equation (2-47).

¹ Note that for an odd-order ladder L1 = g1, C2 = g2, L3 = g3, C4 = g4, and so on. For an even-order
ladder, C1 = g1, L2 = g2, C3 = g3, L4 = g4, and so on. The last term gn+1 is the load resistor value.

Example 2-3:
Design a HPF with cutoff frequency ωo = 10k rad/s, which is double
terminated with source and load resistors Rs = RL = 1 kΩ. The required HPF
should have a maximally flat response in the passband.
Solution:
1- We start with a 5th-order prototype LPF, which is double terminated, as
shown in Fig. 2-23(a).

Fig. 2-23. Example for a 5th order LC ladder HPF.

Normalized values of Li, Ci can be obtained from Table 2-2 (Butterworth
response, n = 5). Alternatively, the component values may be determined
from the analytic relations (2-48), for an odd-order LPF, as follows:

θk = kπ/10:  θ1 = π/10, θ2 = 2π/10, θ3 = 3π/10, θ4 = 4π/10, θ5 = 5π/10,
θ6 = 6π/10, θ7 = 7π/10, θ8 = 8π/10, θ9 = 9π/10

L2i−1 = 2 sin θ4i−3 ,  C2i = 2 sin θ4i−1 ,  i = 1, 2, 3

i = 1 → L1 = 2 sin θ1 = 2 sin(π/10) = 0.618 ,  C2 = 2 sin θ3 = 2 sin(3π/10) = 1.618
i = 2 → L3 = 2 sin θ5 = 2 sin(5π/10) = 2.0 ,  C4 = 2 sin θ7 = 2 sin(7π/10) = 1.618
i = 3 → L5 = 2 sin θ9 = 2 sin(9π/10) = 0.618

2- Transform the LPF → HPF, using the frequency translation
rule (2-17). The resultant HPF is shown in Fig. 2-23(b).

L1 → C1 = 1/L1 = 1/0.618 = 1.618 ,  C2 → L2 = 1/C2 = 1/1.618 = 0.618
L3 → C3 = 1/L3 = 1/2.0 = 0.5 ,  C4 → L4 = 1/C4 = 1/1.618 = 0.618
L5 → C5 = 1/L5 = 1/0.618 = 1.618

3- Denormalize the HPF component values, using the relations (2-47):

C = C̄ /(ωo RL) ,  L = L̄ RL /ωo


If RL = Rs = 1 kΩ and ωo = 10k rad/s (so that ωoRL = 10⁷), then:

C1 = C̄1/(ωoRL) = 1.618/10⁷ ≈ 0.162 µF ,  L2 = L̄2·RL/ωo = 0.618×10³/10⁴ = 61.8 mH
C3 = C̄3/(ωoRL) = 0.5/10⁷ = 0.05 µF ,  L4 = L̄4·RL/ωo = 61.8 mH
C5 = C̄5/(ωoRL) = 1.618/10⁷ ≈ 0.162 µF

N.B. Usually, the filter order, n, and the cutoff frequency, ωo, are not
explicitly given. Instead, the filter attenuations in the passband, Ap(ωp), and
in the stopband, As(ωs), are given. From this information we can determine
n and ωo using the appropriate relations. For instance, the Butterworth
lowpass filter order is given by equation (2-11), which states:

n ≥ ln[ √(10^0.1As − 1) / √(10^0.1Ap − 1) ] / ln(ωs/ωp)

And its cutoff frequency is given by:

ωo = ωp / εp^(1/n) = ωp / (10^0.1Ap − 1)^(1/2n)

2-5.2.ii. Ladder Simulation Methods


The simulation of a resistively terminated LC ladder can be achieved by
the following four methods:
1. Inductance substitution by a gyrator.
2. Impedance transformation of part or the whole of the LC ladder, using
generalized-immittance converters.
3. Simulation of the currents and voltages in the ladder. The leapfrog (LF)
and coupled-biquad (CB) methods have been successfully used to
implement this technique.
4. The linear transformation (LT), which includes the wave active filter
(WAF) method, approaches the simulation of the LC ladder in the same way
as 3, but uses transformed variables instead of the ladder voltages
and currents.

2-5.2.iii. Inductance Simulation (Gyrators)


We have seen so far how to simulate an inductance with the negative
impedance converter (NIC). We present here the generalized impedance
converter (GIC) and show how to simulate an inductance or a negative-resistance
element with it.

A. Generalized Impedance Converter (GIC)


The following circuit depicts a general impedance converter circuit using
two Op-Amps. Here we have:


I1 = Y1(V1 − V3) ,  Y2(V3 − V4) = Y3(V4 − V5) ,  I2 = Y4(V2 − V5)      (2-49)

Such that

I2/I1 = Z1Z3 / (Z2Z4)      (2-50)

Fig. 2-24. Generalized impedance converter (GIC) circuit, using Op-Amps

B. Generalized Impedance Network (GIN)


If the GIC is terminated by an impedance to ground (Z5), as shown in Fig. 2-25,
then the resulting 2-terminal network is called a generalized
impedance network (GIN). The GIN has input impedance given by:

Zin(s) = V1/I1 = (Z1Z3 / Z2Z4)·Z5      (2-51)

Fig. 2-25. Generalized impedance network (GIN) circuit, using Op-Amps


The GIC and GIN have many useful applications. One of the most useful
has been the inductance simulator (gyrator). For instance,
assume that Z1 = Z2 = Z3 = Z5 = R and Z4 = 1/sC; then the input
impedance at port 1 will be

Zin = V1/I1 = s C R²      (2-52a)

This is equivalent to an inductance of the following value:

Leq = C R²      (2-52b)
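Relation (2-51) can be checked numerically for this choice of impedances; a small sketch at one test frequency (element values illustrative):

```python
# GIN input impedance, eq. (2-51): Zin = (Z1*Z3/(Z2*Z4)) * Z5
R, C = 1e3, 30e-9
s = 1j * 1e4                 # evaluate at w = 10^4 rad/s
Z1 = Z2 = Z3 = Z5 = R
Z4 = 1 / (s * C)
Zin = (Z1 * Z3 / (Z2 * Z4)) * Z5
Leq = (Zin / s).real         # Zin = s*C*R^2, i.e. an inductor Leq = C*R^2
print(Leq)                   # ~0.03 H (30 mH)
```

The result is purely inductive: C·R² = 30×10⁻⁹ × 10⁶ = 30 mH, independent of the test frequency.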

C. Antoniou & Riordan Simulated Inductances


The circuit described above is sometimes called the Antoniou simulated
inductance. The circuit is shown in figure 2-26. However, this is not the
only way to simulate an inductance using Op-Amps. In fact, we have
already seen in this chapter how to simulate an inductance using the so-called
negative impedance converter (NIC), with a single Op-Amp. The
following figure depicts another arrangement for a simulated inductance,
called the Riordan gyrator, using two Op-Amps.

Fig. 2-26(a). Antoniou gyrator circuit.

Fig. 2-26(b). Riordan circuit for inductance simulation (Gyrator)


Straightforward analysis, assuming ideal Op-Amps, gives the input
impedance Zin as follows:

Zin = s Leq      (2-53a)

which is equivalent to an inductance Leq having the value:

Leq = C R²      (2-53b)

Such simulated inductance can be substituted in any passive RLC filter to


synthesize equivalent active biquad filters.

Example 2-4
The following figure depicts how to substitute an inductance in a ladder
filter with a gyrator circuit. The gyrator may be implemented with
Antoniou’s simulated inductance, with R = 1 kΩ and C = 30 nF, such that the
simulated inductance is Leq = CR² = 30×10⁻⁹ × 10³ × 10³ = 30×10⁻³ H (30 mH).

Fig. 2-27. Example showing the substitution of inductance in a filter with a gyrator

Example 2-5:
Design an active HPF with cutoff frequency ωo = 10k rad/s, which
simulates a double-terminated 5th-order ladder filter, with source and load
resistors Rs = RL = 1 kΩ. The required HPF should have a maximally flat
response in the passband.


Solution: The required filter simulates the HPF which we designed in
Example 2-3 (see figure 2-23). The normalized values of Li, Ci are plotted
in the following figure.

Fig. 2-28. Simulation of inductances in a 5th order LC ladder HPF.

In order to substitute the inductors with gyrators, we proceed as follows:

1- Synthesize the simulated inductors that have Leq = 0.618. We may use
the Antoniou simulated inductance, with equal resistors, for which Leq = R²C.
2- Choosing R = 1 results in C = 0.618.
3- Denormalize all the filter components, such that C = C̄/(ωoRL). With
RL = Rs = 1 kΩ and ωo = 10⁴ rad/s, the gyrator capacitors become
C = 0.618/10⁷ ≈ 61.8 nF, and C1 = C5 = 1.618/10⁷ ≈ 0.162 µF, C3 = 0.05 µF.

Fig. 2-29. Simulation of inductances in a 5th order LC ladder HP, final circuit.


Note that the simulated inductance in the above examples is grounded.


However, in lowpass, bandpass, and bandstop LC ladders, floating
inductors are present. In order to simulate such floating inductors, two
gyrators are connected as shown in figure 2-26(c).

Fig. 2-26(c). Floating inductance simulation

2-5.2.iv. Frequency-Dependent Negative Resistance (FDNR)


The FDNR is an application of the GIN, in which Z1 = 1/sC1, Z5 = 1/sC5,
Z2 = R2, Z3 = R3 and Z4 = R4, such that

Yin = 1/Zin = s² (C1C5R2R4 / R3)      (2-54)

If C1 = C5 = C and R2 = R3 = R4 = R, then

Yin = s² C²R      (2-55a)

For sinusoidal inputs (s = jω) we get:

Yin = −ω² C²R = −ω² D      (2-55b)

Since D = C²R is a positive real quantity, the FDNR is considered as a
negative resistance element whose value is frequency dependent.
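The same GIN relation (2-51) exposes the FDNR behaviour numerically; element values here are illustrative:

```python
# FDNR via the GIN, eq. (2-51): Z1 = Z5 = 1/(sC), Z2 = Z3 = Z4 = R
R, C = 1e3, 10e-9
s = 1j * 1e5                 # evaluate at w = 10^5 rad/s
Z1 = Z5 = 1 / (s * C)
Z2 = Z3 = Z4 = R
Yin = 1 / ((Z1 * Z3 / (Z2 * Z4)) * Z5)
# Yin = s^2*C^2*R, so at s = jw it is a purely real, negative,
# frequency-dependent admittance: -w^2 * C^2 * R
print(Yin.real)
```

At ω = 10⁵ rad/s this evaluates to −ω²C²R = −10⁻³ S, a negative conductance that grows with the square of frequency.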

Fig. 2-30. Frequency-dependent negative resistance (FDNR) circuit and symbol.


2-5.2.v. Bruton Transform

Synthesis of active filters with FDNR’s is usually performed with the
aid of the so-called "Bruton transform". In this method, the conventional
RLC filter elements (R, L, C) are replaced with their corresponding
elements in the Bruton transform (C, R, D), as shown in Table 2-7.

Table 2-7. Bruton transform
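The element mapping of the Bruton transform (every impedance divided by s) can be sketched as a simple lookup; the tuple representation below is illustrative, assuming the standard scaling Z → Z/s:

```python
# Bruton transform, Z -> Z/s:
#   resistor R      -> capacitor  C' = 1/R
#   inductor L      -> resistor   R' = L
#   capacitor C     -> FDNR D-element, D = C
def bruton(elements):
    table = {'R': lambda v: ('C', 1.0 / v),
             'L': lambda v: ('R', v),
             'C': lambda v: ('D', v)}
    return [table[kind](value) for kind, value in elements]

print(bruton([('R', 1.0), ('L', 0.618), ('C', 1.618)]))
```

Dividing every branch impedance by s leaves all voltage transfer ratios unchanged, which is why the transformed (inductorless) ladder realizes the same response.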

Example 2-6
The following figure depicts how to substitute the RLC components in a
ladder filter with the CRD components, using Bruton transform.

Fig. 2-31. Example showing the substitution of RLC components in a ladder filter
with CRD components, using Bruton transform.


2-5.2.vi. Leapfrog Simulation of a Ladder Network


In this approach, we simulate the operation of the LC ladder, i.e., the
equations that describe the topology of the LC ladder, rather than
simulate the impedance of the inductances. In other words, instead of
using an active circuit that simulates the impedance of inductances, we
try to simulate the voltage and the current that exist in the ladder reactive
elements. As shown in figure 2-32(a), an LC-ladder filter network can be
characterized by the admittance blocks in the series arms and impedance
blocks in the parallel arms. This representation leads to the corresponding
signal-flow graph of figure 2-32(b) in terms of node voltages and mesh
currents, as follows:

I1 = (VIN − V2)·Y1 ,  V2 = (I1 − I3)·Z2 ,  I3 = (−V2 + V4)·Y3 ,
V4 = (−I3 + I5)·Z4 ,  I5 = (V4 − V6)·Y5 , ...      (2-56)

The transfer function Vout/Vin can be obtained from these equations by


eliminating the intermediate variables.

Fig. 2-32. (a) General LC-ladder network; (b) its signal-flow graph.

Note that the output of each block is fed back to the input of the
preceding block, and therefore the structure is called the Leapfrog (LF)
structure. In contrast with the cascaded topology, these blocks are not
isolated from each other, and any change in one block affects the voltages
and currents in all the other blocks. This coupling between the
blocks makes the tuning of the whole network more difficult, but gives
rise to much lower sensitivity.

Fig. 2-33. Leapfrog simulation of a general ladder network.

In active filter design the mixed current-and-voltage signal equations are
normally converted, by scaling, into counterparts with voltage signals
only. Scaled by a certain conductance G (so that VIk = Ik/G), equations
(2-56) can be written as follows (shown in the second column):

I1 = (VIN − V2)·Y1        VI1 = (VIN − V2)·(Y1/G)
V2 = (I1 − I3)·Z2         V2 = (VI1 − VI3)·(G·Z2)
I3 = (−V2 + V4)·Y3        VI3 = (−V2 + V4)·(Y3/G)
V4 = (−I3 + I5)·Z4        V4 = (−VI3 + VI5)·(G·Z4)
I5 = (V4 − V6)·Y5         VI5 = (V4 − V6)·(Y5/G)
  :                          :                     (2-57)

The scaled signal-flow graph is shown below in Fig. 2-34.

Fig. 2-34. Scaled leapfrog diagram of a general ladder network

The synthesis procedure with the Leapfrog method can be summarized as
follows:

1- Design a normalized RLC prototype that satisfies the required filter
specifications. Start with a normalized prototype LPF, determine the
component values (Li, Ci) from tables, computer programs or
ready formulas if available, and then transform it to the required filter
type (e.g., LPF → BPF) using the frequency translation rules.

2- Draw the signal-flow diagram which simulates the ladder network.


3- Design the inner elements of the Leapfrog filter using inverting and non-inverting
integrators (or any suitable active building blocks), with
normalized values.
4- Design the input/output elements of the Leapfrog filter using lossy
integrators (if they have source or load resistances) or any suitable active
building blocks.
5- Denormalize all impedances to meet the frequency specifications of
the desired filter.

Example 2-7
Design a 3rd-order active LPF with cutoff frequency ωo = 1k rad/s, which
is double terminated with source and load resistors Rs = RL = 1 kΩ. The
required LPF should have a maximally flat response in the passband. Use
Leapfrog simulation to implement the filter.

Solution:
The required Leapfrog filter simulates a LPF. The normalized values of
Li, Ci of such a LPF and its signal-flow diagram are plotted in the following
figure.

Fig. 2-35. Example showing the leapfrog simulation of a 3rd order ladder LPF.

In order to simulate this ladder LPF with the Leapfrog method, we proceed
as follows:

T1(s) = Y1(s) = 1/(sL1)  →  non-inverting integrator
T2(s) = −Z2(s) = −1/(sC2)  →  inverting integrator
T3(s) = Y3(s) = 1/(sL3 + RL) = (1/L3)/(s + RL/L3)
  →  non-inverting lossy integrator

The final Leapfrog LPF is shown below in figure 2-36.


Fig. 2-36. Example showing the leapfrog simulation of a ladder filter.

The component values shown should be de-normalized as follows:

C = C̄ /(ωo RL)

Thus, C1 = 0.05 µF, C2 = 0.05 µF, C3 = 0.133 µF, R = 10 kΩ.



Example 2-8
Design a 6th-order active BPF with center frequency ωo = 1k rad/s and a
bandwidth of one octave (ωp1 = 2ωp2). The filter is double terminated
with source and load resistors Rs = RL = 1 kΩ, and has a maximally flat response
in the passband. Use Leapfrog simulation to implement the filter.
Solution:
The required Leapfrog filter can be translated from a prototype 3rd-order
LPF, as shown in the figure. The normalized values of Li, Ci of such a LPF and
the corresponding BPF are plotted in the following figure.

Fig. 2-37. Translation of 3rd order LPF to a 6th order BPF.

Note that the octave bandwidth B = (ωp1 − ωp2) = 1/√2, for a unity center
frequency (ωo = 1). In order to simulate this ladder BPF with the Leapfrog
method, we proceed as follows:
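The quoted bandwidth follows from the geometric symmetry of the bandpass edges about ωo; a one-line numeric check:

```python
import math

# Geometric symmetry: wp1*wp2 = wo^2 = 1, with wp1 = 2*wp2 (one octave)
wp2 = 1.0 / math.sqrt(2.0)
wp1 = 2.0 * wp2
B = wp1 - wp2          # = sqrt(2) - 1/sqrt(2) = 1/sqrt(2)
print(B, wp1 * wp2)
```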


We have three biquadratic branches. We can represent them with different
biquad building blocks, like the following modified Tow-Thomas filters:

Fig. 2-38. Modified Tow-Thomas biquad.

T1(s) = Y1(s)  →  non-inverting biquad (Tow-Thomas)
T2(s) = −Z2(s)  →  inverting biquad (Tow-Thomas)
T3(s) = Y3(s)  →  non-inverting biquad (Tow-Thomas)

The final Leapfrog BPF is shown below in Fig. 2-39.

Fig. 2-39. Leapfrog simulation of a ladder filter, with Tow-Thomas biquads.

The component values shown should be denormalized by multiplying all
resistors by 10⁴ and dividing all capacitors by 2×10³, so that:

R = 10 kΩ

C = C̄ /(ωo RL) = 16.9 nF


2-6. Filter Synthesis with OTA’s


In the previous section, we discussed active RC filters using
operational amplifiers (Op-Amps). These filters have long been widely used in
various low-frequency applications in telecommunication networks,
signal processing circuits, communication systems, control, and
instrumentation systems. However, active RC filters
cannot work at higher frequencies (over 200 kHz) due to Op-Amp
frequency limitations, and are not suitable for full integration. They are
also not electronically tunable and usually have complex structures. Many
attempts have been made to overcome these drawbacks. The most
successful approach is to replace the conventional Op-Amp in active RC
filters with the operational transconductance amplifier (OTA). As
we have seen so far, an ideal OTA is a voltage-controlled current source,
with infinite input and output impedances and constant transconductance.
In recent years OTA-based high-frequency integrated circuits, filters and
systems have been widely investigated.

2-6.1. OTA Filter Synthesis by Cascade Methods


As we pointed out earlier, the cascade method is the most popular method
for high-order filter design due to its modularity of structure and
simplicity of design and tuning. We start by introducing first-order and
second-order OTA-based filters, which can be cascaded to make high-order
filters. For a high-order transfer function we first factorize it into
low-order functions (first and second order) and then realize these
functions using the well-known OTA structures.

2-6.1.i. First-Order OTA Filters by the Three-Admittance Model


Consider the general circuit model in Fig. 2-40. It contains one OTA and
three admittances. With the indicated input and output voltages it can be
simply shown that

(2-58a)

(2-58b)

Using these expressions we can readily derive different first-order and


second-order filter structures from the general three-admittance model in
Fig. 2-40 by assigning different components to Yi and checking the
corresponding transfer functions in equation (2-58).


Fig. 2-40. The OTA 3-admittance model

For example, Yi can be a resistor (Yi = gi ), a capacitor (Yi = sCi ), an open


circuit (Yi = 0), or a short circuit (Yi = ∞). It can also be a parallel
combination of two components (Yi = gi + sCi ).

Fig 2-41. First-order filter configurations with three passive components.

It is first verified that when choosing Y1 = sC1, Y2 = g2 and Y3 = g3, the


general model produces a LPF from Vo1, that is

(2-59)

The circuit is shown in Fig. 2-41(a). Then consider the circuit in Fig. 2-
41(b), which is obtained by setting Y1 = g1, Y2 = sC2 and Y3 = g3. It is
found that a highpass filter is derived whose transfer function is given by

(2-60)

with the gain at the infinite frequency being gm/(g1 + g3 + gm) and the
cutoff frequency equal to g1g3/[(g1 + g3 + gm)C2]. This circuit also offers a
general first-order characteristic, as can be seen from its transfer function

(2-61)


Finally, if Y1 and Y2 are resistors and Y3 a capacitor, then both H1(s) and
H2(s) are of lowpass characteristic. The circuit is presented in figure 2-
41(c) and the transfer functions are given below.

(2-62a)

(2-62b)

It is interesting to note from equations (2-62) that the filters in figures
2-41(a) and (c) have similar characteristics at the output Vo1, i.e., H1(s). The
circuits in figures 2-41(a–c) may also be used as lossy integrators to
construct integrator-based OTA-C filters in the next section. It should be
pointed out that the model in figure 2-40 can also support many second-order
filters. However, we will present the second-order filters using the
so-called four-admittance OTA model.

2-6.1.ii. Second-Order Filters Derived from the Four-Admittance Model


In this section we consider another two general single-OTA models and
the filter structures derived from them. We first consider the model in figure
2-42, which consists of an OTA and four admittances. This model may be
looked upon as the result of grounding the non-inverting terminal of the
OTA and applying a voltage input through an admittance to the inverting
terminal. It can be shown that the transfer function of this model is given by

(2-63)

Similarly, filter structures can be generated by selecting proper


components in the model and the corresponding transfer functions can be
obtained from equation (2-63).

Fig. 2-42. The OTA 4-admittance model



A. Single OTA 2nd Order LPF


The following circuit depicts a 2nd order LPF, using a single OTA, with 4
admittances:

Fig. 2-43. OTA LPF.

HLP(s) = K ωo² / [s² + (ωo/Q)·s + ωo²]      (2-64)

Comparing its transfer function in equation (2-63) with the desired


function in equation (2-64) yields the following equations:

(2-65a)

(2-65b)

Based on these expressions we can design and analyze the LPF. The same
design method can be used and the same design formulas and sensitivity
performance of ωo and Q can be achieved, with different configurations.
To show this, we select C2 = C4 = C and g1 = g3 = g. Using equation (2-
65) we can obtain the design formulas as

(2-66)

B. Single OTA 2nd Order HPF


The following circuit depicts a 2nd order HPF, using a single OTA, with 4
admittances:


Fig. 2-44 OTA HPF.

(2-67)

Design can be carried out by comparing equation (2-67) with the standard
highpass characteristic

HHP(s) = K s² / [s² + (ωo/Q)·s + ωo²]      (2-68)

where K is the gain at infinite frequency, ωo is the undamped natural
frequency, and the quality factor Q relates to the transition sharpness.
The design equations are as follows (K = 1):

(2-69)

Choosing C1 = C3 = C and g2 = g4 = g we can determine that

(2-70)

It can be shown that for this design the highpass circuit has very low ωo
sensitivities, but the Q sensitivities increase with Q. The filter thus may
not suit very high-Q applications. The design also requires interchanging
the OTA input terminals.

C. Single OTA 2nd Order BPF


The following circuit depicts a 2nd order BPF, using a single OTA, with 4
admittances:


Fig. 2-45. OTA BPF.

(2-71)

Comparing equation (2-71) with equation (2-60) leads to


(2-72a)

(2-72b)

2-6.1.iii. Second-Order Filters Derived from the Five-Admittance Model


In this section we consider another single OTA with five-admittances.
The general model with complete feedback is shown in Fig. 2-46. As
more admittances are used, more filter structures and design flexibility
can be achieved. The circuit transfer function can be shown as

(2-73)

Fig. 2-46. Five-admittance model with complete output feedback.

Different filter characteristics can be realized using the general model.


This can be done by trying different combinations of passive components


in equation (2-73). Suppose that each admittance is realized with one
element. Exhaustive search shows that a total of 13 different structures
can be derived, among them one highpass, four bandpass and three lowpass filters.

A. Single-OTA 2nd Order LPF

Three lowpass filter configurations can be generated. The first lowpass
filter is obtained by selecting Y1 = sC1, Y2 = g2, Y3 = sC3, Y4 = g4, Y5 = g5.
Substitution into equation (2-73) leads to

Fig. 2-47. One possible configuration of a LPF, using a single OTA with five admittances.

which compares to the standard lowpass filter characteristic; the
corresponding circuit is shown in figure 2-47. A second lowpass structure
comes from the setting Y1 = g1, Y2 = g2, Y3 = sC3, Y4 = g4, Y5 = sC5. The
transfer function is given by

(2-76)

B. Single-OTA 2nd Order HPF


A highpass filter can be obtained by selecting Y1 = g1, Y2 = sC2, Y3 = g3,
Y4 = sC4, Y5 = g5 as shown in Fig. 2-48.

Fig. 2-48. Highpass filter derived from the five-admittance model.

The transfer function is then easily derived as


(2-74)

Comparison of equations (2-74) and (2-73) gives rise to the design
equations for ωo, Q, and K in terms of the gs and Cs. Using these equations we
can determine component values and analyze sensitivity performance.
Setting C2 = C4 = C and g1 = g3 = g5 = g, we can obtain the
component values as

(2-75)

C. Single-OTA 2nd Order BPF

Four bandpass filter structures can be derived; the first is presented here.
It is obtained from Fig. 2-46 by setting Y1 = g1, Y2 =
sC2, Y3 = sC3, Y4 = g4, Y5 = g5, as shown in Fig. 2-49. The transfer function
can be found, by sorting out equation (2-73) according to Y2 and Y3, as

(2-77)

Fig. 2-49. One possible configuration of a BPF, using a single OTA with five admittances.

When C2 = C3 = C and g1 = g4 = g5 = g, the design formulas are found to
be as follows:
(2-78a)

(2-78b)


2-6.1.iv. Multiple-OTA (OTA-C) Filters

The single-OTA filter structures may not be fully integrable and fully
programmable, because they contain resistors and use only one
OTA. But they are still useful for monolithic implementation: by
replacing each discrete resistor with a simulated OTA resistor, they can
easily be converted into counterparts using only OTAs and capacitors.
The derived OTA-C filters are then suitable for full integration.

In the following we first discuss how to simulate resistors using OTAs
only, and then selectively illustrate some OTA-C filters derived from
the single-OTA counterparts. Any admittance in the above
configurations can be replaced by a capacitor or an OTA resistor, as
shown in figure 2-50. Figure
2-50(a) shows a simple single-OTA connection. This circuit is equivalent
to a grounded resistor with resistance equal to the inverse of the OTA
transconductance, that is, R = 1/gm. Floating-resistor simulation may
require more OTAs. Figure 2-50(b) shows a circuit with two identical
OTA’s. It can be shown that it is equivalent to a floating resistor of
resistance R = 1/gm.

Finally, for an ideal voltage input, the first OTA in the input-terminated
floating-resistor simulation is redundant and can thus be eliminated, as
shown in figure 2-50(c). This simulation not only saves one OTA but also
has high input impedance, a feature useful for cascade design.
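Since a grounded OTA resistor gives R = 1/gm, the cutoff of a first-order gm-C stage is tuned electronically through the bias-dependent transconductance. A generic numeric sketch (the component values are illustrative, not from a circuit in the text):

```python
import math

# Generic first-order gm-C lowpass: H(s) = gm1 / (s*C + gm2), where the
# OTA 'resistor' 1/gm2 sets the pole; fc = gm2 / (2*pi*C) is bias-tunable.
def fc(gm2, C):
    return gm2 / (2 * math.pi * C)

C = 10e-12                      # 10 pF integration capacitor (illustrative)
for gm2 in (10e-6, 100e-6):     # sweep the bias-controlled transconductance
    print(f"gm2 = {gm2:.0e} S  ->  fc = {fc(gm2, C)/1e6:.2f} MHz")
```

A tenfold change in gm2 moves the cutoff a decade, which is the electronic tunability that Op-Amp RC filters lack.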

Fig. 2-50. OTA simulation of grounded and floating resistors.

The transfer functions of the OTA-C (or gm-C) filters are the same as
those of the single OTA counterparts. The difference is only that in gm-C
filters, the gm’s are all OTA transconductances.


Fig. 2-51. Miscellaneous gm-C filters.


2-6.2. OTA Filter Synthesis by Direct Simulation Methods


As we have seen, the most popular method for high-order filter
design is the cascade method, due to its modularity of structure and
simplicity of design and tuning. As we pointed out earlier, however, the
cascade method has a relatively high sensitivity to component tolerances.

Here we present the ladder-simulation method, which is based on
resistively terminated lossless LC ladder filters. We duly note that
these structures have very low sensitivity to component variations. OTA-C
filters can thus be designed by simulating passive LC ladder filters.
We assume the availability of design tables or appropriate computer
software for the generation of LC ladder network component values, and
therefore concentrate on how to simulate these passive LC ladders using
only OTA’s and capacitors.

As we pointed out in the introduction of this section §2-5, there exist


various methods for simulation of doubly terminated passive LC ladders.
These methods can be broadly classified into three categories:

 Component substitution,
 Signal flow simulation (Leapfrog simulation and Matrix methods),
 Coupled biquad realization.

2-6.2.i. Component Substitution Method


OTA-C filter design based on a passive LC ladder can be conducted by
substituting the resistors and inductors by their OTA-C counterparts.
Such an OTA-C circuit has sensitivity as low as that of its passive
counterpart, except for imperfections in the realization of the active
resistor and inductor and the increase in the total number of components.

The Bruton method transforms the passive LC ladder into a new equivalent
ladder which contains no inductors but some new components, which are
then replaced by OTA-C counterparts. The admittance block substitution
method deals with each ladder arm as a whole and replaces it by the
OTA-C circuit which has the same impedance or admittance function. For
illustration, we consider the fifth-order lowpass finite-zero LC ladder
in figure 2-53(a). We replace the input and output termination resistors
by their OTA counterparts, and the two floating inductors by their OTA-C
equivalents. The resulting OTA-C filter is displayed in figure 2-53(b).
The component values can be determined using the formulas in equations
(2-32).

Fig. 2-52. OTA Resonator.

Fig. 2-53. Illustration of the component substitution method, for a 5th order LPF

2-6.2.ii. Signal Flow Simulation


Among the various ladder simulation approaches, the leapfrog ladder and
coupled biquad methods are the most popular. The leapfrog approach to
OTA-C filter design consists of simulating the signal-flow relations of
the LC ladder circuit. The circuit equations that describe the topology
of the passive ladder structure are first written. Then a signal-flow
block diagram is drawn based on these equations. Finally, the block
diagram is realized using OTA’s and capacitors. In the simulation of LC
ladders, the original equations are of mixed current and voltage type.
We can convert these equations to their voltage-only counterparts by
scaling, which is the technique that will be discussed in this section.
The mixed equations can also be scaled to current signals only.


Two techniques for active signal simulation of a passive ladder exist.
One is to simulate the relations of series-arm currents and parallel-arm
voltages, treating the respective arm as a single-port network.

The other is to do component-level signal simulation, that is, to
simulate the relations of the signals in all individual elements, for
example, the individual capacitor voltages and inductor currents. The
first type of signal-flow simulation structure is block based, since the
series and parallel arms are treated as a whole, no matter how many
components they contain.

The second method has a case-by-case feature, since for different passive
LC structures the signal-flow equations may be very different. Also, note
that if each arm in the passive ladder structure is simply a single
component, as in all-pole LP and zero-at-origin HP LC ladders, the block
method reduces to the component method. In the following we will
therefore concentrate on the block signal simulation method for OTA-C
filter design based on passive LC ladders. A systematic treatment will be
given.

2-6.2.iii. Leapfrog Simulation Structures of General Ladder


The general ladder network with series admittances and parallel
impedances is shown in figure 2-54. The equations relating the currents
flowing in the series arms, Ij , and the voltages across parallel arms, Vj ,
can be written as follows:
(2-79a)

(2-79b)

Fig. 2-54. General admittance and impedance ladder with signals indicated.

The transfer function Vout/Vin can be obtained from these equations by
eliminating the intermediate variables. These equations can be
represented by the signal-flow diagram depicted in figure 2-55. Observe
that the output of each block is fed back to the input of the preceding
block; the structure is therefore called the leapfrog (LF) structure.

Fig. 2-55. Leapfrog block diagram of general ladder.
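The elimination of the intermediate variables can be illustrated numerically. The sketch below solves the nodal equations of a doubly terminated 3rd-order Butterworth lowpass ladder; the element values C1 = C3 = 1 F, L2 = 2 H with 1 Ω terminations are the standard normalized Butterworth values, assumed here for illustration rather than taken from the book's figures:

```python
def ladder_H(w, Rs=1.0, RL=1.0, C1=1.0, L2=2.0, C3=1.0):
    """Transfer function Vout/Vin of a doubly terminated 3rd-order
    Butterworth LC ladder: Vin - Rs - [C1 shunt] - L2 - [C3 // RL]."""
    s = 1j * w
    yL = 1.0 / (s * L2)                  # series-inductor admittance
    # Nodal equations at V1 and V2 (= Vout), with Vin = 1:
    a11, a12 = 1/Rs + s*C1 + yL, -yL
    a21, a22 = -yL, yL + s*C3 + 1/RL
    det = a11 * a22 - a12 * a21
    return (-a21 / Rs) / det             # Cramer's rule, source vector (1/Rs, 0)

print(abs(ladder_H(1e-6)))   # ~0.5   (DC gain of the matched ladder)
print(abs(ladder_H(1.0)))    # ~0.354 (3 dB below DC at the cutoff w = 1)
```

The matched terminations give the familiar 1/2 DC gain, and the response falls exactly 3 dB at the normalized cutoff, as expected for a Butterworth ladder.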

A. Leapfrog Simulation of General Ladder

The mixed current and voltage signal equations can be converted by
scaling into their counterparts with voltage signals only. Scaled by a
conductance gm, equation (2-79) can be written as follows:

(2-80a)

(2-80b)

(2-80c)

where V′j = Ij /gm. The quantities Yj /gm and gmZi are voltage transfer
functions. It is clear that these equations lead to the same transfer
function Vout/Vin as equation (2-79). The corresponding block diagram is
shown in Fig. 2-56. As traditionally done in Op-Amp filter design, to
realize this new block diagram we can synthesize the voltage summers and
the voltage transfer functions Yj /gm and gmZi using OTA’s and capacitors.
Of course, different ladders will have different Yj and Zi values, and
the associated OTA-C structures will thus be different.

Fig. 2-56. Scaled leapfrog block diagram of the general ladder.

In the following, however, we will not follow the conventional method. We
will present a new, systematic, and more efficient method unique to OTA-C
filters, exploiting the features of the OTA. From equation (2-80) we can
see that the
voltage relations have a typical form of

(2-81)

where Uj can be Vj or Vj’ and

Hj = Yj/gm for odd j; Hj = gm Zj for even j (2-82)

Equation (2-81) can be realized using an OTA with a transconductance gj
and a grounded impedance Zj, as shown in figure 2-57. This is an OTA-
grounded impedance section. The summation operation is simply realized
by the OTA differential input. It can be verified that the voltage
transfer function from the OTA input to the output is equal to gjZj = Hj.
Note that we relate the voltage transfer function Hj to the grounded
impedance Zj.

Fig. 2-57. Grounded impedance

B. Simulating the Grounded Impedance Section

Using figure 2-57 as a building block, we can readily obtain the OTA-
grounded impedance LF structure from equation (2-80). The grounded
impedances have the values calculated by:

(2-83a)

(2-83b)

From equation (2-83) we can see that, besides the general scaling by gm,
each new grounded impedance has a separate transconductance which can be
used to adjust the impedance level. We also note that the Z′j are not the
original impedances Zj of the ladder.

For an even subscript, Z′j is the original impedance Zj of the parallel
arm multiplied by the ratio gm/gj. For an odd subscript, Z′j is the
inverse of the original impedance Zj (i.e., the admittance Yj of the
series arm) divided by the product gj gm.


Fig. 2-58. General LF OTA-grounded impedance realization.

When gj = gm = g, we have Z′j = Yj /g² for odd j and Z′j = Zj for even j.
Further, if gj = gm = 1, then Z′j = Yj for odd j. Note that in many OTA-C
publications, unity values of the transconductances and the scaling
conductance are used for simplicity.

The salient feature of the structure is that the OTA-C realization
problem reduces to the OTA-C realization of the grounded impedances only,
and the simple inductor substitution method can be conveniently used to
simulate the impedance constituents:

Z′j = Hj /gj (2-84)

Example 2-9: Leapfrog OTA-C LPF


We design a fifth-order, 1 dB-ripple, 4 MHz Chebyshev filter based on the
LF OTA-C simulation of the passive ladder. The fifth-order 1 dB-ripple
Chebyshev LPF has the normalized characteristic

(2-85)

The corresponding normalized component values of the ladder in Fig.
2-60(a) are given by

(2-86a)

(2-86b)


Fig. 2-59. Ladder simulation using Leapfrog method

We first de-normalize the component values with the frequency fo = 4 MHz
and the resistance R = 10 kΩ. The corresponding de-normalized component
values are obtained as follows:

(2-87a)

(2-87b)
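This de-normalization step can be sketched numerically. The normalized element values below are the standard 1 dB-ripple fifth-order Chebyshev g-values from published filter tables; they are an assumption here and should be checked against the book's own tables:

```python
import math

# Standard 1 dB-ripple 5th-order Chebyshev g-values (assumed from
# published tables, not copied from the book's equations):
g = [2.1349, 1.0911, 3.0009, 1.0911, 2.1349]   # C1, L2, C3, L4, C5 normalized

fo, R = 4e6, 10e3            # de-normalization frequency and impedance level
w0 = 2 * math.pi * fo

C = [g[i] / (w0 * R) for i in (0, 2, 4)]   # shunt capacitors, in farads
L = [g[i] * R / w0 for i in (1, 3)]        # series inductors, in henries

print([f"{c*1e12:.2f} pF" for c in C])     # C1 = C5 ~ 8.49 pF, C3 ~ 11.94 pF
print([f"{l*1e6:.1f} uH" for l in L])      # L2 = L4 ~ 434.1 uH
```

The same scaling (divide capacitances by ω0R, multiply inductances by R/ω0) applies to any normalized prototype.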

Then, using the formulas in equation (2-83) with the choice of equal
transconductances

(2-88)

we obtain:

(2-89a)

(2-89b)

2-6.2.iv. OTA-C Bandpass Leapfrog Filter Design

The complexity of an OTA-C filter based on the LF structure depends on
the number of elements in the series and shunt branches of the passive
ladder circuit. The bandpass filter design may be conducted from the
all-pole lowpass LC filter by applying the lowpass-to-bandpass frequency
transformation s → s/B + ωo²/(sB), where ωo is the center frequency and
B is the bandwidth of the bandpass filter to be designed.
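A small numeric check of this transformation, using a hypothetical first-order prototype HLP(p) = 1/(p + 1) and the standard form s → s/B + ωo²/(sB) (both assumptions for illustration):

```python
import math

def H_lp(p):
    """Assumed 1st-order normalized lowpass prototype."""
    return 1.0 / (p + 1.0)

def H_bp(s, wo, B):
    """Lowpass-to-bandpass mapping s -> s/B + wo^2/(s*B)."""
    p = s / B + wo**2 / (s * B)
    return H_lp(p)

wo = 2 * math.pi * 1e6     # center frequency (assumed)
B = 2 * math.pi * 100e3    # bandwidth (assumed)

# At s = j*wo the mapped variable p is 0, so the bandpass response
# equals the prototype's DC gain:
print(round(abs(H_bp(1j * wo, wo, B)), 6))   # 1.0
```

The mapping sends the prototype's DC point to ωo and its cutoff to the two band edges, which is exactly why series and parallel resonators appear in the transformed ladder arms.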


Fig. 2-60. Ladder simulation using Leapfrog method

The bandpass LC structure derived from the all-pole lowpass prototype
will typically have series resonators in the series arms and parallel
resonators in the parallel arms. We start the design from the bandpass LC
ladder and take the LF simulation of the circuit in figure 2-60(a) as an
example. Recognizing that Y1 is a series RLC resonator, Z4 is a parallel
RLC resonator, and Z2 and Y3 are ideal parallel and series LC resonators,
and following the same design procedure, we obtain the LF OTA-C filter
structure shown in figure 2-60(b).

The component values can be formulated as

(2-90a)

(2-90b)

(2-90c)

where gm is the scaling conductance. Further design can be carried out
based on these equations.


2-7. Filter Synthesis using Current Conveyors

Over the last decade, current-mode filters using second-generation
current conveyors (CCII) have received considerable attention, because
their bandwidth, linearity, and dynamic-range performances are better
than those of their Op-Amp based counterparts.

The technique of simulating LC ladder filters by their signal-flow graphs
is very well established. The following figures depict a leapfrog
LC-ladder filter simulation by circuits employing multiple-output
second-generation current-controlled conveyors (MO-CCCII) and grounded
capacitors.

Fig. 2-61. Ladder simulation using Leapfrog method, with switched capacitors

Fig. 2-62. Ladder simulation using Leapfrog method, with current conveyors


2-8. Switched Capacitor Filters


So what's the big headache about fabricating an RC filter on a MOS
integrated circuit? For starters, the biggest capacitor you'll get will
be around 100 pF. That means that to create a low-pass filter with
fc = 100 Hz, you'll need a resistor of R = 16 MΩ! However, you may
simulate a resistor using switched capacitors (SC). In order to simulate
a resistor with a switched capacitor, you need a couple of switches and a
capacitor, both readily available in MOS. If the switches operate at a
clock frequency fCLK, as shown in figure 2-63, then the simulated
resistance is given by:

R = T/C = 1/(fCLK C) (2-91)

Fig. 2-63. Realization of simulated resistor with a switched capacitor
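The numbers in the motivating example can be checked directly; the 1 pF switched capacitor below is an assumed illustrative value, not one given in the text:

```python
import math

def sc_resistance(f_clk, C):
    """Equivalent resistance of a switched capacitor C clocked at f_clk (Eq. 2-91)."""
    return 1.0 / (f_clk * C)

# The text's example: a 100 Hz low-pass needs R = 16 MOhm with C = 100 pF.
R, C_filter = 16e6, 100e-12
fc = 1.0 / (2 * math.pi * R * C_filter)   # ~99.5 Hz cutoff

# Emulating the 16 MOhm resistor with a 1 pF switched capacitor (assumed):
C_sw = 1e-12
f_clk = 1.0 / (R * C_sw)                  # required clock: 62.5 kHz

print(f"fc = {fc:.1f} Hz, f_clk = {f_clk/1e3:.1f} kHz")
```

A modest 62.5 kHz clock and two on-chip capacitors thus replace an impractically large 16 MΩ resistor, and the effective resistance can be tuned simply by changing the clock rate.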

Today, switched-capacitor circuits are thriving in the field as
multi-pole filter circuits and programmable analog ICs. However, many
switched-capacitor circuits available today are Op-Amp based.

2-8.1. Cascaded SC Filters

Filter Solutions supports both cascade and parallel biquads. The cascade
architecture produces higher-quality stopband zeros. The following is an
example of a 1 MHz third-order Elliptic filter.

Fig. 2-64. A 1MHz third order Elliptic filter, with switched capacitors.


As shown, it is composed of two stages, a 1st-order LPF and a 2nd-order
(biquad) LPF, both implemented with switched capacitors and Op-Amps.

2-8.2. SC Filter IC’s


SC filters have long been available as integrated circuits. The LMF100 is
a versatile IC with four switched-capacitor integrators that can be
connected to make cascaded filters. You can choose the filter's
characteristic frequency to be either 1/50 or 1/100 of the clock
frequency. By changing internal and external connections to the circuit
you can obtain different filter types: lowpass, highpass, bandpass, notch
(band-reject) or allpass.

Fig. 2-65. The LMF100 switched-capacitor filter IC.

2-8.3. SC Leapfrog Ladder Filters


The leapfrog ladder filters comprise inverting and non-inverting SC
integrators connected in feedback loops. The source and load terminations
are realized with damped SC integrators. Figure 2-66 depicts the circuit
of a fifth-order lowpass SC leapfrog ladder filter. For a sampling
frequency fs = 40 kHz, the ideal filter passband characteristics are a
4.1 kHz bandwidth and a passband ripple lower than 0.21 dB. The component
values for the circuit are listed in Table 2-8.


Fig. 2-66. Ladder simulation using Leapfrog method, with switched capacitors

Table 2-8. Component values for the SC filter of Fig. 2-66.

2-8.4. Limitation of SC Filters


One of the key parameters in the design of SC networks is the speed
requirement of the Op-Amps. The frequency of operation is usually limited
by the settling time, which restricts the sampling frequency fs to about
20% of the gain-bandwidth product (GBW) of the Op-Amps.

In addition, the distortion introduced by the finite gain Avo in SC
filters is more pronounced than that of the finite bandwidth BW.
Generally, the price to be paid for high speed is a low DC gain Avo,
because these are contradictory requirements. An amplifier gain of 60 dB
is normally considered the minimum for an SC circuit; applying Op-Amps
with much lower gain (e.g., Avo = 100) causes significant deterioration
of the performance of classical SC circuits.

2-9. Summary

There is a large amount of terminology that we need to present when we
talk about filters.

Filter Order: The order of a filter is an integer that defines how
complex the filter is. In common filters, the order of the filter is the
number of "stages" of the filter. Higher-order filters perform better,
but they have a higher delay and they cost more.

Pass Band: In a general sense, the passband is the frequency range of
the filter that allows information to pass. The passband is usually
defined in the specifications of the filter. For instance, we could
specify that we want our passband to extend from 0 to 1000 Hz and that
the entire passband response be higher than -1 dB.

Transition Band: The transition band is the region of the filter response
between the passband and the stopband. Higher-order filters have a
narrower transition band.

Stop Band: The stopband of a filter is the frequency range where the
signal is attenuated. Stopband performance is often defined in the
specification for the filter. For instance, we might say that we want to
attenuate all frequencies above 5000 Hz, and we want to attenuate them
all by -40 dB or more.

Cut-off Frequency: The cut-off frequency of a filter is the frequency at
which the filter response "breaks" and changes (between passband and
transition band, or transition band and stopband, for instance). The
cut-off of a filter always has an attenuation of -3 dB; the -3 dB point
is the frequency at which the power is cut by exactly 1/2.

Lowpass filters allow low-frequency components to pass through, while
attenuating high-frequency components.

Highpass filters allow high-frequency components to pass through, while
attenuating low-frequency components.

Bandpass filters allow a single band of frequency information to pass the
filter, but will attenuate all frequencies above and below the band.


A bandstop filter will allow high frequencies and low frequencies to pass
through the filter, but will attenuate all frequencies that lie within a
certain band.

Some filters allow all frequency components to pass through; these are
called allpass filters. The question must be raised: "why would we need a
filter that doesn't remove anything from our signal?" The answer is that
even though an allpass filter might not remove any frequency components
on the magnitude graph, it might have a very beneficial effect on the
phase graph.

Filter Design
It is possible to design a new type of filter, from scratch, but the filters
listed in this chapter have been studied extensively, and designs for these
filters (including circuit implementation) are all readily available.

Filter Approximation
The approximate response of a filter may be obtained using one of the
following famous approximation functions:

1- Butterworth (Maximally Flat) Filter Approximation
2- Chebyshev (Equi-ripple) Filter Approximation
3- Inverse Chebyshev Filter Approximation
4- Cauer (Elliptic) Filter Approximation
5- Bessel or Thomson (Linear Phase) Filter Approximation
6- Gaussian Filter Approximation
7- Legendre (Optimum L) Filter Approximation
8- Linkwitz-Riley Filter Approximation

A Butterworth filter is a type of filter that is easy to implement.
Butterworth filters offer consistent, smooth results, with a maximally
flat response in the passband. Butterworth filters, however, don't have a
particularly steep roll-off, and if roll-off is the biggest concern,
another type of filter should be used.
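The maximally flat behavior can be checked numerically from the standard Butterworth magnitude formula |H(jω)|² = 1/(1 + (ω/ωc)^2n); the order n = 4 below is an arbitrary assumption for illustration:

```python
import math

def butterworth_mag_db(w, wc=1.0, n=4):
    """Gain in dB of an n-th order Butterworth lowpass:
    |H(jw)|^2 = 1 / (1 + (w/wc)^(2n))."""
    return -10 * math.log10(1 + (w / wc) ** (2 * n))

print(round(butterworth_mag_db(1.0), 2))   # -3.01 dB at the cutoff, any order
print(round(butterworth_mag_db(10.0) - butterworth_mag_db(100.0)))  # 80, i.e. 20n dB/decade
```

This also matches the roll-off rates quoted later in this summary: a third-order filter (n = 3) rolls off at 60 dB/decade, a fourth-order at 80 dB/decade.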

Chebyshev filters exhibit an equi-ripple shape in the frequency response
of the filter. Chebyshev filters can be divided into two types:


Chebyshev Type I filters, which have ripples in the passband.
Chebyshev Type II filters, which have ripples in the stopband.

In comparison to Butterworth filters, Chebyshev filters have a much
steeper roll-off, but at the same time suffer from a rippling effect in
the passband that can cause undesired results. Also, the Chebyshev
equations involve a complex string of trigonometric and
inverse-trigonometric functions, which can be difficult to deal with
mathematically.

Elliptic filters, like Chebyshev filters, suffer from a ripple effect.
However, unlike the Chebyshev filters, Elliptic filters have ripples in
both the passband and the stopband. This limitation of Elliptic filters
is compensated by their very aggressive roll-off.

The filter approximation results in the transfer function of the filter.
The following table summarizes the standard transfer functions of the
2nd-order filters (K is the gain, ωo the pole frequency and Q the
quality factor):

Filter Type           Transfer Function
Low Pass              H(s) = K ωo² / (s² + (ωo/Q)s + ωo²)
High Pass             H(s) = K s² / (s² + (ωo/Q)s + ωo²)
Band Pass             H(s) = K (ωo/Q)s / (s² + (ωo/Q)s + ωo²)
Notch (Band Reject)   H(s) = K (s² + ωo²) / (s² + (ωo/Q)s + ωo²)

Filter Synthesis (Implementation)

First- and second-order filter circuits are very useful in the
realization of high-order filter functions. Some methods for such
realizations are based on the use of lower-order sections, particularly
biquads. Therefore, knowledge of the most suitable biquad in a particular
case is of fundamental importance for achieving the "best" design. This
will become apparent in the next chapter, where the design of high-order
filters is considered.

In this chapter we also dealt with high-order filter design. The most
popular method for high-order filter design is the cascade method, due to
its modularity of structure and simplicity of design and tuning. For a
given transfer function we first factorize it into low-order functions
and then realize these functions using the well-known low-order filter
structures.

The cascade method, however, has a very high sensitivity to component
tolerances. It has already been established that resistively terminated
lossless LC filters have very low passband sensitivity. To achieve low
sensitivity, active filters can thus be designed by simulating passive LC
filters.

We also investigated the simulation methods for active filter design.
Again, we assume the availability of design tables or appropriate
computer software for the generation of LC ladder network component
values, and therefore concentrate on how to simulate these passive LC
ladders using Op-Amps, OTA’s or even CC’s, and capacitors. Various
methods of simulation of doubly-terminated passive LC ladders were
introduced in a systematic way. These can be broadly classified into
three categories:

 component substitution,
 signal flow simulation, and
 coupled biquad realization.

The first category, belonging to the topological approach, includes the
inductor substitution, the Bruton transformation, and the
impedance/admittance block substitution. The second, being the functional
or operational approach, contains the leapfrog (LF) structure and its
derivatives, as well as matrix methods, including the wave filter method.
The third embraces the biquad-based LF structure, one of the
multiple-loop feedback configurations, and the follow-the-leader-feedback
(FLF) structure. The component substitution methods keep the active
filter structure and equations identical to those of the original passive
ladder. The signal simulation method has the same equations as, but
different structures from, the original ladder. The coupled biquad
approach may have different equations and structures.

Programmable high-frequency active filters can be achieved by
incorporating OTA’s and current conveyors (CC) instead of Op-Amps. OTA,
CC and switched-capacitor filters have also been presented in this
chapter.

2-10. Problems

2-1) What is the order of the Butterworth filter that is to give an
attenuation of at most 0.2 dB at 1 kHz and at least 70 dB at 10 kHz?
i) Determine the filter order.
ii) What is the cutoff (3-dB) frequency of this filter?
iii) What are the actual attenuations at 1 kHz and 10 kHz?
iv) Determine the network function Av(s) that best meets the above specifications.

2-2) Design an LC ladder BSF with cutoff frequency ωo = 10 krad/s, which
is doubly terminated with source and load resistors Rs = RL = 1 kΩ. The
required BSF should have a maximally flat response in the passbands.

2-3) Show how to substitute the inductance in the above filter with a
gyrator circuit. Find the gyrator circuit component values.

2-4) Design an active HPF with cutoff frequency ωo = 10 krad/s, which
simulates a doubly terminated 3rd-order ladder filter, with source and
load resistors Rs = RL = 1 kΩ. The required HPF should have a maximally
flat response in the passband.

2-5) Design a 3rd-order active LPF with cutoff frequency ωo = 10 krad/s,
which is doubly terminated with source and load resistors Rs = RL = 1 kΩ.
The required LPF should have a maximally flat response in the passband.
Use leapfrog simulation to implement the filter.
2-6) Choose the most suitable answer for the following statements:
1. The problem of passive filters is overcome by using
a) Analog filter
b) Active filter
c) LC filter
d) A combination of analog and digital filters

2. The ideal response of a filter takes place in
a) Pass band and stop band frequency
b) Stop band frequency
c) Pass band frequency
d) None of the mentioned

3. The critical frequency is defined as the point at which the response
drops ___ from the passband
a) –20 dB
b) –3 dB
c) –6 dB
d) –40 dB

4. Which filter exhibits a linear phase characteristic?
a) Bessel
b) Butterworth
c) Chebyshev
d) all of the above

5. A third-order filter will have a roll-off rate of
a) –20 dB/decade.
b) –30 dB/decade.
c) –40 dB/decade.
d) –60 dB/decade.

6. Refer to Figure (a). This is a __ filter, and it has a cutoff
frequency of __
a) 721 Hz, low-pass filter.
b) 721 Hz, high-pass filter
c) 72 Hz, low-pass filter.
d) 721 Hz, band-pass filter

7. Refer to Figure (b). This is a ___ filter, and the roll-off of this
filter is about ___
a) low-pass filter, 20 dB/decade.
b) high-pass filter, 40 dB/decade
c) low-pass filter, 60 dB/decade
d) band-pass filter 80 dB/decade

8. Refer to Figure (c). This circuit is known as a _____ filter, and the
fc is _____
a) high-pass, 1.59 kHz
b) band-pass, 15.9 kHz
c) low-pass, 15.9 kHz
d) high-pass, 15.9 kHz

9. Refer to Figure (a). RA = 2.2 kΩ and RB = 1.2 kΩ. This filter is
probably a _____
a) Butterworth type.
b) Bessel type.
c) Chebyshev type.
d) None of the above


Analog & Digital Signal Processing Chapter 3

Fundamentals of Digital
Signal Processing
Contents

3-1. Introduction to Digital Signal Processing


3-1.1. DSP topics
3-1.2. Discrete Fourier Transform (DFT) of Discrete Signals
3-1.3. Fast Fourier Transform (FFT) of Discrete Signals
3-1.4. Digital Signal Interpolation and Signal Decimation
3-1.5. DSP Applications
3-1.6. Implementation of the FFT Algorithm
3-2. Digital Signal Processors
3-2.1. History of Digital Signal Processors
3-2.2. Modern Digital Signal Processors
3-3. Summary
3-4. Problems
3-5. Bibliography


Digital Signal
Processing &
Digital Filters
3-1. Introduction to Digital Signal Processing
Digital signal processing (DSP) is the study of signals in a digital
representation and of the methods used to process these signals. Digital
and analog signal processing are subfields of signal processing. Since
the goal of DSP is usually to measure or filter continuous real-world
analog signals, the first step is usually to convert the signal from
analog to digital form by using an analog-to-digital converter (ADC).
Often, the required output is another analog signal, which requires a
digital-to-analog converter (DAC).

Fig. 3-1. Block diagram of a digital signal processing (DSP) system

3-1.1. Discrete-Time Signals

Signals can be classified into two categories:
1- Continuous-time (analog) signals, and
2- Discrete-time signals

Discrete-time signals have discrete values at discrete instants of time.
A discrete-time signal x[n] is obtained from a continuous-time signal
x(t) by measuring (sampling) it periodically, every sampling period Ts.
So, the signal is sampled at the discrete time instants t = nTs, where
n = 0, 1, 2,…


The sampling frequency fs = 1/Ts should satisfy the Nyquist criterion:
fs > 2 fmax, where fmax is the maximum frequency component in the
spectrum of the corresponding analog signal.
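A quick numeric illustration of why this criterion matters (the tone and sampling rates below are assumed example values): sampling a 3 kHz tone at only fs = 4 kHz makes its samples indistinguishable from those of a 1 kHz tone.

```python
import math

# Sampling a 3 kHz tone at fs = 4 kHz violates fs > 2*fmax: the samples
# become indistinguishable from those of the alias at |fs - f| = 1 kHz.
fs, f = 4000.0, 3000.0
Ts = 1.0 / fs

aliased = [math.cos(2 * math.pi * f * n * Ts) for n in range(8)]
alias_f = abs(fs - f)   # 1000 Hz
same = [math.cos(2 * math.pi * alias_f * n * Ts) for n in range(8)]

print(all(abs(a - s) < 1e-9 for a, s in zip(aliased, same)))   # True
```

Once the samples are taken, no processing can tell the two tones apart, which is why an anti-aliasing filter must precede the ADC.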

Fig. 3-2. Block diagram of a digital signal processing (DSP) system

A discrete-time system (like a digital filter) processes the
discrete-time input x[n] in some way and produces an output y[n]. At each
integer value of discrete time, n, we input a value of x[n] into the
system and the system produces an output value y[n]. For instance, if we
have a multiplier system whose input/output relation is given by
y[n] = 7 x[n], and the input sequence is x[n] = [1 0 3], then the output
sequence is y[n] = [7 0 21].

There exists a specific category of discrete-time systems, called linear
shift-invariant (LSI) systems, which have interesting properties. More
details about the mathematical properties of LSI systems can be found
in Appendix D. We may write the general LSI system in the following
difference-equation form:
y[n] = Σ_{i=0}^{M} a_i x[n−i] + Σ_{j=1}^{N} b_j y[n−j]        (3-1)

where the constant coefficients are a_i, i = 0, 1, …, M, and b_j, j = 1, 2, …, N.
The a_i coefficients are called the feedforward coefficients and the b_j
coefficients are called the feedback coefficients.
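As an illustration, the difference equation (3-1) can be implemented directly in a few lines of Python. The function name and the convention that b[0] holds b_1 are choices made here for the sketch:

```python
def lsi_filter(x, a, b):
    """Direct implementation of Eq. (3-1):
    y[n] = sum_i a[i]*x[n-i] + sum_j b[j]*y[n-j].
    a holds the feedforward coefficients a_0..a_M; b holds the feedback
    coefficients b_1..b_N (so b[0] is b_1). Samples before n = 0 are zero."""
    y = []
    for n in range(len(x)):
        acc = sum(a[i] * x[n - i] for i in range(len(a)) if n - i >= 0)
        acc += sum(b[j] * y[n - 1 - j] for j in range(len(b)) if n - 1 - j >= 0)
        y.append(acc)
    return y

# The multiplier system y[n] = 7*x[n] is the special case a = [7], b = []:
assert lsi_filter([1, 0, 3], [7], []) == [7, 0, 21]

# With a_0 = 1 and feedback b_1 = 1 we get the accumulator y[n] = x[n] + y[n-1]:
assert lsi_filter([1, 1, 1], [1], [1]) == [1, 2, 3]
```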

3-1.2. DSP Topics


In DSP, we usually study digital signals in one of the following domains:

1- Time domain (1-D signals and spatial multidimensional signals),


2- Frequency domain,
3- Autocorrelation domain, and
4- Wavelet domain.

Engineers choose the domain in which to process a signal by making an


informed guess as to which domain best represents the essential
characteristics of the signal. A sequence of samples from a measuring
device produces a time or spatial domain representation, whereas a
discrete Fourier transform (DFT) produces the frequency domain
information, i.e., the frequency spectrum. Autocorrelation is defined as
the cross-correlation of the signal with itself over varying intervals of
time or space. So, digital signal processing includes the following main
subfields:

1- Spectral analysis (Discrete-time analysis, using DFT, and FFT),


2- Filtering (Discrete-time filters, FIR, IIR)

Fig. 3-3. Digital signal processing (DSP) topics

3-1.3. Discrete Fourier Transform (DFT) of Discrete Signals


The Discrete Fourier Transform (DFT) is the principle spectral analysis
tool of discrete-time signals. The DFT of a discrete-time signal x[n] is
defined as follows:
N 1
 j 2 nk  N 1
X [k ]   x[n].exp      x[n].WN
nk
(3-2)
n 0  N  n 0

where the harmonic index k = 0, 1,2, …, N-1, WN = e- are weighting or


basis functions (primitive Nth root of unity, such that WNN =1) and X(k)
denotes the kth spectral sample.

The inverse DFT (IDFT), which is sometimes called the synthesis


equation, is defined as follows:


N 1
 j 2 nk  N 1
x[n]   X [k ].exp     X [k ].WN
 nk
(3-3)
k 0  N  k 0

More details about DFT can be found in Appendix E.
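A direct, O(N²) implementation of the analysis and synthesis equations can be sketched in Python as follows (the function names are illustrative, and the 1/N scaling is placed in the inverse transform, matching the convention above):

```python
import cmath

def dft(x):
    """Direct O(N^2) evaluation of Eq. (3-2), with W_N = exp(-j*2*pi/N)."""
    N = len(x)
    W = cmath.exp(-2j * cmath.pi / N)
    return [sum(x[n] * W ** (n * k) for n in range(N)) for k in range(N)]

def idft(X):
    """Synthesis equation, Eq. (3-3), including the 1/N factor."""
    N = len(X)
    W = cmath.exp(-2j * cmath.pi / N)
    return [sum(X[k] * W ** (-n * k) for k in range(N)) / N for n in range(N)]

# Round trip: the IDFT of the DFT recovers the signal up to rounding error.
x = [1.0, 2.0, 0.0, -1.0]
assert all(abs(v - w) < 1e-9 for v, w in zip(idft(dft(x)), x))
```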

3-1.3. Fast Fourier Transform (FFT)


The fast Fourier transform (FFT) is an efficient implementation of the
discrete Fourier transform (DFT) for highly composite transform lengths
N. When N is a power of 2, the computational complexity drops from
O(N²) for the DFT down to O(N log₂N) for the FFT. The FFT was
evidently first described by
Gauss in 1805 and rediscovered in the 1960s by Cooley and Tukey. The
first stage of a radix-2 FFT algorithm is derived below; the remaining
stages are obtained by applying the same decomposition recursively.

There are two basic varieties of FFT, one based on decimation in time
(DIT), and the other on decimation in frequency (DIF). Here we will
derive decimation in time. When N is even, the DFT summation can be
split into sums over the odd and even indexes of the input signal:
1 N 1 1 N 1

 x[2n].W  x[2n  1].W


2 2

X [k ]  1N
nk
 WNk nk
1N (3-4)
2 2
n 0 n 0

where x[2n] and x[2n+1] denote the even- and odd-indexed samples
from x[n]. Thus, the length-N DFT is computable from two corresponding
DFTs of length N/2. The complex factors W_N^k are called twiddle factors.
The splitting into sums over even and odd time indexes is called
decimation in time.

For decimation in frequency, the inverse DFT of the spectrum X[k] is


split into sums over even and odd numbers k. More details about FFT can
be found in section 3-1-6 and in Appendix F.
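The recursive structure described above can be sketched in Python. This is a minimal radix-2 decimation-in-time FFT, assuming the input length is a power of 2:

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT (len(x) must be a power of 2).
    Per Eq. (3-4): combine the two half-length DFTs of the even- and
    odd-indexed samples using the twiddle factors W_N^k."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]   # W_N^k * (odd DFT)[k]
        out[k] = even[k] + t                             # X[k]
        out[k + N // 2] = even[k] - t                    # X[k + N/2]
    return out

# An impulse has a flat spectrum:
assert all(abs(v - 1) < 1e-9 for v in fft([1, 0, 0, 0]))
```

The symmetry X[k] and X[k + N/2] sharing the same sub-DFT outputs is what reduces the operation count from O(N²) to O(N log₂N).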

3-1.4. Digital Signal Interpolation and Decimation


Any real signal will be transmitted along some form of channel which will
have a finite bandwidth. As a result the received signal's spectrum cannot
contain any frequencies above some maximum value. There exist two
important processes in DSP systems, namely signal interpolation and
signal decimation. As shown in figure 3-4, decimation reduces the
sampling rate: the signal is first band-limited by a low-pass filter (LPF),
which removes the spectral components above the new Nyquist limit, and
is then downsampled.

Fig. 3-4. Digital signal interpolation and decimation processes.

Alternatively, the interpolation process consists in inserting additional
samples between the existing ones, and hence increases the sampling rate.
In this sense, the decimation process is the inverse of the interpolation
process.
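The two processes can be sketched in Python at the sample level. The anti-alias and anti-image low-pass filters that accompany them in practice are deliberately omitted to keep the sketch short:

```python
def decimate(x, M):
    """Reduce the sampling rate by keeping every M-th sample. In practice
    the signal is first low-pass filtered to avoid aliasing; that anti-alias
    filter is omitted here."""
    return x[::M]

def interpolate(x, L):
    """Raise the sampling rate by inserting L-1 zeros between samples; a
    low-pass (anti-imaging) filter would normally follow, to replace the
    zeros with interpolated values."""
    y = []
    for s in x:
        y.append(s)
        y.extend([0] * (L - 1))
    return y

x = [1, 2, 3, 4]
print(interpolate(x, 2))                 # [1, 0, 2, 0, 3, 0, 4, 0]
print(decimate(interpolate(x, 2), 2))    # recovers [1, 2, 3, 4]
```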

3-1.5. DSP Applications


Digital signal processing has so many applications like:

 Image signal processing,


 Audio and speech signal processing,
 Radar and Sonar signal processing,
 Signal processing for communications,
 Biomedical signal processing,

Specific examples in telecommunications are modulation/demodulation


(MODEM) in digital mobile phones, diversity reception to get rid of
fading effects and equalization of cable TV (CATV) signals.

In addition, DSP is applied in speech compression, equalization of


sound, high fidelity loudspeaker crossovers, and audio effects in electric
guitar amplifiers, weather forecasting, seismic data processing, medical
imaging such as computer-aided tomography (CAT) scans and magnetic
resonance imaging (MRI). Figure 3-5 depicts some DSP applications.


Fig. 3-5. DSP applications

The remainder of this chapter illustrates areas where DSP has produced
revolutionary changes. As you go through each application, notice that
DSP is very interdisciplinary, relying on the technical work in many
adjacent fields. As Figure 3-5 suggests, the borders between DSP and
other technical disciplines are not sharp and well defined, but rather fuzzy
and overlapping. If you want to specialize in DSP, these are the allied
areas you will also need to study.


3-2. Digital Signal Processors (DSPs)


A digital signal processor (DSP) is a specialized microprocessor designed
specifically for digital signal processing, usually in real-time computing.
Almost all recent communication devices have built-in DSPs. The DSP in
communication systems is responsible of implementing so many
functions, like coding/decoding (codec), digital modulation/
demodulation (modem) and digital filtering. The DSP is optimized to
execute multiply-and-accumulate (MAC) instructions, which are ubiquitous
in digital filters, in a single cycle. Digital signal processing can be done
on general-purpose microprocessors. However, dedicated digital signal
processors contain architectural optimizations to speed up processing.
These optimizations are also important to lower costs, heat-emission and
power-consumption.
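In software, the MAC operation is simply the inner loop of a convolution. A Python sketch of an FIR filter makes the role of the repeated multiply-and-accumulate explicit; a DSP would execute each product-and-add of the inner loop as a single MAC instruction:

```python
def fir(x, h):
    """FIR filter as repeated multiply-and-accumulate (MAC) operations:
    each output sample is y[n] = sum_k h[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0
        for k in range(len(h)):
            if n - k >= 0:
                acc += h[k] * x[n - k]   # one MAC per filter tap
        y.append(acc)
    return y

# Feeding in a unit impulse returns the filter's impulse response:
print(fir([1, 0, 0, 0], [0.5, 0.25]))   # [0.5, 0.25, 0.0, 0.0]
```

For an N-tap filter, every output sample costs N MACs, which is why single-cycle MAC hardware dominates DSP datapath design.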

3-2.1. DSP Architecture


One of the biggest bottlenecks in executing DSP algorithms is
transferring information to and from memory. This includes data, such as
samples from the input signal and the filter coefficients, as well as
program instructions, the binary codes that go into the program
sequencer. For example, suppose we need to multiply two numbers that
reside somewhere in memory. To do this, we must fetch three binary
values from memory, the numbers to be multiplied, plus the program
instruction describing what to do. Figure 3-6(a) shows how this
seemingly simple task is done in a traditional microprocessor. This is
often called the Von Neumann architecture.

Fig. 3-6(a). Von-Neumann architecture

As shown in figure 3-6(a), a Von Neumann architecture contains a single


memory and a single bus for transferring data into and out of the central
processing unit (CPU). Multiplying two numbers requires at least three
clock cycles, one to transfer each of the three numbers over the bus from
the memory to the CPU. We don't count the time to transfer the result


back to memory, because we assume that it remains in the CPU for


additional manipulation. The Von Neumann design is quite satisfactory
when you are content to execute all of the required tasks in serial. This
leads us to the Harvard architecture, shown in figure 3-6(b).

Fig. 3-6(b). Harvard architecture

As shown in this illustration, Aiken insisted on separate memories for


data and program instructions, with separate buses for each. Since the
buses operate independently, program instructions and data can be
fetched at the same time, improving the speed over the single bus design.
Most present day DSPs use this dual bus architecture. Figure 3-6(c)
illustrates the next level of sophistication, the Super Harvard
Architecture. This term was coined by Analog Devices to describe the
internal operation of their ADSP-2106x and new ADSP-211xx families
of Digital Signal Processors. These are called SHARC® DSPs, a
contraction of the longer term, Super Harvard ARChitecture.

Fig. 3-6(c). Super Harvard architecture

While the SHARC DSPs are optimized in dozens of ways, two areas are
important enough to be included in figure 3-6(d): an instruction cache,
and an I/O controller.


Fig. 3-6(d). SHARC DSP architecture

3-2.2. History of Digital Signal Processors


In 1978, Intel released the 2920 as an analog signal processor. It had an
on-chip ADC/DAC with an internal signal processor, but it didn't have a
hardware multiplier and was not successful in the market. In 1979, Bell
Labs introduced the first single chip DSP, the Mac4 Microprocessor.

The first DSP produced by Texas Instruments (TI), the TMS32010


presented in 1983, proved to be an even bigger success. It was based on
the Harvard architecture, and so had separate instruction and data
memory. It already had a special instruction set, with instructions like
load-and-accumulate or multiply-and-accumulate. It could work on 16-bit
numbers and needed 390ns for a multiply-add operation. TI is now the
market leader in general purpose DSPs. Another successful design was
the Motorola 56000. About five years later, the second generation of
DSPs began to spread. They had three memories for storing two operands
simultaneously and included hardware to accelerate tight loops; they also
had an addressing unit capable of loop-addressing. Some of them
operated on 24-bit variables and a typical model only required about 21ns
for a MAC (multiply-accumulate). Members of this generation were for
example the AT&T DSP16A or the Motorola DSP56001.


The main improvement in the third generation was the appearance of


application-specific units and instructions in the data path, or sometimes
as coprocessors. These units allowed direct hardware acceleration of very
specific but complex mathematical problems, like the Fourier-transform
or matrix operations. Some chips, like the Motorola MC68356, even
included more than one processor core to work in parallel. Other DSPs
from 1995 are the TI TMS320C541 or the TMS 320C80.

The fourth generation is best characterized by the changes in the


instruction set and the instruction encoding/decoding. SIMD and MMX
extensions were added; VLIW and the superscalar architecture appeared.
As always, the clock-speeds have increased, a 3ns MAC now became
possible.

3-2.3. Modern DSPs


Recent signal processors yield much greater performance. This is due in
part to both technological and architectural advancements like smaller
feature sizes (channel length), fast-access two-level cache, EDMA circuit
and a wider bus system. Many kinds of signal processors exist; each is
suited for a specific task, ranging in price from US$1.50 to US$300.

A Texas Instruments C6000 series DSP clocks at 1 GHz and implements


separate instruction and data caches as well as 8 MB L2-cache, and its
I/O speed is rapid thanks to its 64 EDMA channels. The top models are
capable of even 8000 MIPS (million instructions per second), use VLIW
encoding, perform eight operations per clock-cycle and are compatible
with a broad range of external peripherals and various buses
(PCI/serial/USB, etc).

Another big signal processor manufacturer today is Analog Devices. The


company provides a broad range of DSPs, but its main portfolio is
multimedia processors, such as codecs, filters and digital-analog
converters. The Analog Devices ADSP21060 shows how similar the basic
architectures are. The ADSP21060 has a Harvard architecture - shown by
the two memory buses. This is extended by a cache, making it a Super
Harvard ARChitecture (SHARC).

The SHARC-based processors range in performance from 66 MHz/198


MFLOPS (million floating-point operations per second) to 400
MHz/2400MFLOPS. Some models even support multiple multipliers and
ALUs, SIMD instructions and audio processing-specific components and
peripherals.

Fig. 3-7. Detailed view of the SHARC architecture

Figure 3-7 presents a more detailed view of the SHARC architecture,


showing the I/O controller connected to data memory. This is how the
signals enter and exit the system. For instance, the SHARC DSPs
provides both serial and parallel communications ports. These are
extremely high speed connections. For example, at a 40 MHz clock
speed, there are two serial ports that operate at 40 Mbits/second each,
while six parallel ports each provide a 40 Mbytes/second data transfer.
When all six parallel ports are used together, the data transfer rate is an
incredible 240 Mbytes/second.

3-3. Fixed Point & Floating Point Arithmetic on DSPs


Digital Signal Processing can be divided into two categories, fixed point
and floating point. These refer to the format used to store and manipulate
numbers within the devices. Most DSPs use fixed-point arithmetic,
because in real world signal processing, the additional range provided by
floating point is not needed, and there is a large speed benefit and cost
benefit due to reduced hardware complexity. Floating point DSPs may be
invaluable in applications where a wide dynamic range is required.
Product developers might also use floating point DSPs to reduce the cost

and complexity of software development in exchange for more expensive


hardware, since it is generally easier to implement algorithms in floating
point. It is worth noting that fixed point format is not exactly the same as
integer representation, as shown in the following figure.

Fig. 3-8. Fixed point representation

The integer format is straightforward: it represents whole numbers from 0
up to the largest whole number that can be represented with the available
number of bits. The fixed point format, by contrast, is used to represent
numbers that lie between 0 and 1, with a 'binary point' assumed to lie just
after the most significant bit. The most significant bit in both cases
carries the sign of the number.
 The size of the fraction represented by the smallest bit is the
precision of the fixed point format.
 The size of the largest number that can be represented in the
available word length is the dynamic range of the fixed point format
To make the best use of the full available word length in the fixed point
format, the programmer has to make some decisions:
 If a fixed point number becomes too large for the available word
length, the programmer has to scale the number down, by shifting it to the
right: in the process lower bits may drop off the end and be lost
 If a fixed point number is small, the number of bits actually used to
represent it is small. The programmer may decide to scale the number up,
in order to use more of the available word length

In both cases the programmer has to keep a track of by how much the
binary point has been shifted, in order to restore all numbers to the same
scale at some later stage. Floating point format has the remarkable
property of automatically scaling all numbers by moving, and keeping
track of, the binary point so that all numbers use the full word length
available but never overflow:

Fig. 3-9. Floating point representation

Floating point numbers have two parts: the mantissa, which is similar to
the fixed point part of the number, and an exponent which is used to keep
track of how the binary point is shifted. Every number is scaled by the
floating point hardware:
 If a number becomes too large for the available word length, the
hardware automatically scales it down, by shifting it to the right
 If a number is small, the hardware automatically scales it up, in
order to use the full available word length of the mantissa
In both cases the exponent is used to count how many times the number
has been shifted.

In floating point numbers the binary point comes after the second most
significant bit in the mantissa.

The block floating point format provides some of the benefits of floating
point, but by scaling blocks of numbers rather than each individual
number:

Fig. 3-10. Block floating point representation



Block floating point numbers are actually represented by the full word
length of a fixed point format.
 If any one of a block of numbers becomes too large, the
programmer scales down all numbers, by shifting them to the right
 If the largest of a block of numbers is small, the programmer scales
up all numbers, to use the full available word length of the mantissa
In both cases the exponent is used to count how many times the numbers
in the block have been shifted.
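These fixed point ideas can be made concrete with a small sketch. Assuming the common 16-bit Q15 fractional format (one sign bit followed by 15 fractional bits; the format choice here is an assumption for illustration), quantization and its precision look like this:

```python
def to_q15(value):
    """Quantize a number in [-1, 1) to 16-bit Q15 fixed point:
    one sign bit followed by 15 fractional bits, so the precision
    (the size of the smallest step) is 2**-15."""
    return int(round(value * 2 ** 15))

def from_q15(word):
    """Convert a Q15 word back to a real value."""
    return word / 2 ** 15

x = 0.303
q = to_q15(x)
# Rounding keeps the quantization error within half a step:
assert abs(from_q15(q) - x) <= 2 ** -16
```

The dynamic range of the format is fixed: any value of magnitude 1.0 or more would overflow the word and must first be scaled down by the programmer, exactly as described above.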

Note 3-1: IEEE 754 Standard for Floating Point Representations


Based on the number of bits, there are two representations in IEEE 754
standard: 32 bit single precision and 64 bit double precision. This
standard is almost exclusively used across computing platforms and
hardware designs that support floating point arithmetic. In this standard a
normalized floating point number x is stored in three parts: the sign s, the
excess exponent e, and the significand or mantissa m, and the value of the
number in terms of these parts is:

x = (−1)^s · (1.m) · 2^(e−b)
This indeed is a sign magnitude representation, s represents the sign of


the number and m gives the normalized magnitude with a 1 at the MSB
position, and this implied 1 is not stored with the number. For normalized
values, 1.m represents a value greater than or equal to 1.0 and less than 2.0.
This IEEE format stores the exponent e as a biased number that is a
positive number from which a constant bias b is subtracted to get the
actual positive or negative exponent (b = 127 for single precision). The
following figure depicts the IEEE format for a single precision 32-bit
floating point number.

For example -12.25 is represented as:


Sign bit (s) = 1
Mantissa field (m) = 10001000 00000000 0000000
Exponent field (e) = 3 + 127 = 130 = 0x82 = 1000 0010
So the complete 32-bit floating point number in binary representation is:
1 10000010 10001000 00000000 0000000
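The hand-worked encoding above can be verified with Python's struct module, which packs a float in IEEE 754 single precision:

```python
import struct

# Pack -12.25 as a big-endian IEEE 754 single precision value.
bits = struct.pack('>f', -12.25)
assert bits.hex() == 'c1440000'

# The 32 bits split into the sign | exponent | mantissa fields:
word = format(int(bits.hex(), 16), '032b')
assert word == '1' + '10000010' + '10001000000000000000000'
```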

The floating point format has one further advantage over fixed point: it is
faster. Because of quantization error, a basic direct form 1 IIR filter
second order section requires an extra multiplier, to scale numbers and
avoid overflow. But the floating point hardware automatically scales
every number to avoid overflow, so this extra multiplier is not required.

3-4. How to Choose a DSP?


Given the above tradeoffs between fixed and floating point, how do you
choose which to use? Here are some things to consider. First, look
at how many bits are used in the ADC and DAC. In many applications,
12-14 bits per sample is the crossover for using fixed versus floating
point. For instance, television and other video signals typically use 8 bit
ADC and DAC, and the precision of fixed point is acceptable. In
comparison, professional audio applications can sample with as high as
20 or 24 bits, and almost certainly need floating point to capture the large
dynamic range. The next thing to look at is the complexity of the
algorithm that will be run. If it is relatively simple, think fixed point; if it
is more complicated, think floating point. Lastly, think about the money:
how important is the cost of the product, and how important is the cost of
the development? When fixed point is chosen, the cost of the product will
be reduced, but the development cost will probably be higher due to the
more difficult algorithms. In the reverse manner, floating point will
generally result in a quicker and cheaper development cycle, but a more
expensive final product.

Figure 3-11 shows some of the major trends in DSPs. Figure 3-11(a)
illustrates the impact that Digital Signal Processors have had on the
embedded market. These are applications that use a microprocessor to
directly operate and control some larger system, such as a cellular
telephone, microwave oven, or automotive instrument display panel. The
name "microcontroller" is often used in referring to these devices, to
distinguish them from the microprocessors used in personal computers.
As shown in (a), about 38% of embedded designers have already started
using DSPs, and another 49% are considering the switch. The high
throughput and computational power of DSPs often makes them an ideal
choice for embedded designs.

The following figure shows a comparison between the utilization of


Assembly and C-language in the development of DSP applications.
Programs written in assembly can execute faster, while programs written
in C are easier to develop and maintain. However, DSP programs are
different from traditional software tasks in two important respects. First,

the programs are usually much shorter. Second, the execution speed is
often a critical part of the application.

Fig. 3-11(a). Usage of DSPs in embedded designs

Fig. 3-11(b). Assembly versus C usage in DSP software development



3-5. How Fast are DSPs?


The primary reason for using a DSP instead of a traditional microprocessor is speed,
the ability to move samples into the device, carry out the needed mathematical
operations, and output the processed data. This brings up the question: How fast are
DSPs? The usual way of answering this question is benchmarks, methods for
expressing the speed of a microprocessor as a number. For instance, fixed point
systems are often quoted in MIPS (million integer operations per second). Likewise,
floating point devices can be specified in MFLOPS (million floating point operations
per second). One hundred and fifty years ago, British Prime Minister Benjamin
Disraeli declared that there are three types of lies: lies, damn lies, and statistics. If
Disraeli were alive today and working with microprocessors, he would add
benchmarks as a fourth category. The idea behind benchmarks is to provide a head-to-
head comparison to show which is the best device. Unfortunately, this often fails in
practicality, because different microprocessors excel in different areas. Imagine
asking the question: Which is the better car, a Cadillac or a Ferrari? It depends on
what you want it for!

Confusion about benchmarks is aggravated by the competitive nature of the


electronics industry. Manufacturers want to show their products in the best light, and
they will use any ambiguity in the testing procedure to their advantage. There is an
old saying in electronics: "A specification writer can get twice as much performance
from a device as an engineer." These people aren't being untruthful, they are just paid
to have good imaginations. Benchmarks should be viewed as a tool for a complicated
task. If you are inexperienced in using this tool, you may come to the wrong
conclusion. A better approach is to look for specific information on the execution
speed of the algorithms you plan to carry out. For instance, if your application calls
for an FIR filter, look for the exact number of clock cycles it takes for the device to
execute this particular task. Using this strategy, let's look at the time required to
execute various algorithms on the Analog Devices SHARC family.

Fig. 3-12. Execution times of common DSP algorithms on the SHARC family


3-6. Summary

Digital signal processing (DSP) is the study of signals in a digital


representation and the processing methods of these signals. DSP includes
subfields like: audio and speech signal processing, sonar and radar signal
processing, sensor array processing, spectral estimation, statistical signal
processing, image processing, signal processing for communications,
biomedical signal processing, etc.

Since the goal of DSP is usually to measure or filter continuous real-


world analog signals, the first step is usually to convert the signal from an
analog to a digital form, by using an analog to digital converter. Often,
the required output signal is another analog output signal, which requires
a digital to analog converter.

Digital filters fall into two main categories: finite impulse response (FIR)
and infinite impulse response filter (IIR). An FIR filter uses only the
input signal, while an IIR uses both the input signal and previous samples
of the output signal. FIR filters are always stable, while IIR filters may be
unstable. Most filters can be described in Z-domain (a superset of the
frequency domain) by their transfer functions. A filter may also be
described as a difference equation, a collection of zeroes and poles or, if
it is an FIR filter, an impulse response or step response.

The algorithms required for DSP are sometimes performed using


specialized computers, which make use of specialized microprocessors
called digital signal processors (DSP). These process signals in real time
and are generally purpose-designed application-specific integrated
circuits (ASICs). When flexibility and rapid development are more
important than unit costs at high volume, DSP algorithms may also be
implemented using field-programmable gate arrays (FPGAs).

Digital signal processing is often implemented using specialized
microprocessors such as the MC56000, from Motorola and the TMS320, from
Texas Instruments. These often process data using fixed-point arithmetic,
although some versions are available which use floating point arithmetic
and are more powerful. For faster applications FPGAs might be used.
Beginning in 2007, multi-core implementations of DSPs have started to
emerge from companies including Freescale and startup Stream
Processors, Inc. For faster applications with vast usage, ASICs might be


designed specifically. For slow applications such as flame scanning, a


traditional slower processor such as a microcontroller can be used.



3-7. Problems

3-1) Is the function y[n] = sin(x[n]) periodic or not?

3-2) Is the function y[n] = x[n-1] – x[n-4] memoryless?


a) The system doesn’t need to have memory
b) The system needs to have memory

3-3) What is the nature of the following function: y[n] = y[n-1] + x[n]?
a) Integrator
b) Differentiator
c) Subtractor
d) Accumulator

3-4) Is the above function causal in nature?

a) Causal
b) Non Causal

3-5) A second order digital filter is given by

y(m) = 1.386 y(m−1) − 0.9604 y(m−2) + g x(m)
Obtain the z-transfer function of this filter and write the z transfer
function of this equation in polar form in terms of the radius and angular
frequencies of its zeros. Sketch the pole-zero diagram and the frequency
response of the filter.

3-6) The difference equation relating the input and output of the infinite
duration impulse response (IIR) filter, shown in the figure below, is
y(n) = a y(n−4) + g x(n) − g x(n−4)
(i) Taking the z-transform of the difference equation, find the transfer
function of the filter.
(ii) Describe the transfer function in polar form and, for a value of
a = 0.6561, obtain the poles and zeros of the filter and sketch its pole-zero
diagram.
(iii) Sketch the frequency response of the filter, and suggest an
application for this filter. Discuss the effect of varying the value of a on
the frequency response of the filter.


3-8. References

[1] M. Frerking, “Digital Signal Processing in Communication Systems”,
Kluwer Academic Publishers, NY, 1994.

[2] Andreas Antoniou, “Digital Filters: Analysis and Design”, Prentice
Hall, 1996.

[3] L. Rabiner and B. Gold, “Theory and Application of Digital Signal
Processing”, Prentice Hall, 1975.

[4] Andreas Antoniou, “Digital Signal Processing”, Prentice Hall, 1999.

[5] John G. Proakis and Dimitris G. Manolakis, “Digital Signal
Processing”, Prentice Hall, 1996.

[6] Sanjit K. Mitra, “Digital Signal Processing”, Tata McGraw-Hill, 1996.

[7] Douglas K. Lindner, “Introduction to Signals & Systems”, McGraw-
Hill, 1999.

Integrated Circuits Technology & Applications Chapter 4

Digital Filters
Contents

4-1. Introduction
4-3. Digital Filters
4-3.1. Types of Digital Filters
4-3.2. Finite Impulse Response (FIR) Filter
4-3.3. Infinite Impulse Response (IIR) Filter
4-4. Examples of Digital Filters
4-5. Stability of Digital Filters
4-6. Synthesis (Implementation) of Digital Filters
4-6.1. Direct Forms (DF-I & DF-II)
4-6.2. Canonic and Non-Canonic Forms
4-6.3. Software Method
4-7. Summary
4-8. Problems
4-9. Bibliography



Digital Filters

4-1. Introduction
In this chapter we develop several mathematically equivalent ways to
describe and characterize digital filters, namely, in terms of their:
• Transfer function H(z)
• Frequency response H(ω)
• Block diagram realization and sample processing algorithm
• I/O difference equation
• Pole/zero patterns
• Impulse response h(n)
• I/O convolutional equation

The most important description is the transfer function because from it


we can easily obtain all the others. Figure 4-1 shows the relationships
among these descriptions. The need for such multiple descriptions is that
each provides a different insight into the nature of the filter and each
serves a different purpose.

Fig. 4-1. Description methods of a digital filter system.


4-3. Digital Filters


Almost all recent communication devices have built-in digital signal
processors (DSP). The DSP in communication systems is responsible of
implementing so many functions, like coding/decoding (codec), digital
modulation/demodulation (modem) and digital filtering.

Fig. 4-2. Block diagram of a digital filter system.

A digital filter is just a filter that operates on discrete digital signals, such
as sound represented inside a computer. It is a computation which takes
one sequence of numbers (the input signal, x[n]) and produces a new
sequence of numbers (the filtered output signal, y[n]). The active filters
mentioned in chapter 5 are not digital because they operate on analog
signals. It is important to realize that a digital filter can do anything that a
real-world filter can do. Thus, a digital filter is only a formula for going
from one digital signal to another. It may exist as an equation on paper, as
a small loop in a computer subroutine, or as an integrated circuit chip.
Many digital filters are based on the Fast Fourier transform (FFT), a
mathematical algorithm that quickly extracts the frequency spectrum of a
signal, allowing the spectrum to be manipulated before converting the
modified spectrum back into a time-series signal. Here, we present the
two famous types of digital filters, and their basic architectures.

4-3.1. Types of Digital Filters

The transfer function of a typical linear digital filter can be expressed as
a transform in the Z-domain, as follows:

H(z) = Y(z)/X(z) = (a0 + a1 z^-1 + a2 z^-2 + ... + aM z^-M) / (1 + b1 z^-1 + b2 z^-2 + ... + bN z^-N)    (4-1)

where M is the order of the filter and X(z) = Z{x[n]} and Y(z) = Z{y[n]}
are the z-transforms of input and output digital sequences, respectively.
This form is for a recursive filter, which typically leads to infinite
impulse response (IIR) behavior, but if the denominator is unity, then this
is the form for a finite impulse response (FIR) filter. In fact, M is
sometimes called the feed-forward filter order and N is called the
feedback filter order.

Note 4-2. The Z-Transform of Discrete-Time Signals

The Z-transform maps a discrete data sequence into a continuous
function of the complex variable z. The Z-transform is similar to other
discrete transforms, except that it does not take explicit account of the
sampling period. The Z-transform has a number of uses in the field of
digital signal processing and the study of discrete signals in general, and
is useful because Z-transform results are extensively tabulated.
The Z-transform is defined as follows:

X(z) = Z{x[n]} = Σ (n = -∞ to +∞) x[n] z^-n

More details about the Z-Transform and its characteristics can be found
in Appendix D of this book.

Note 4-3. State-Space Representation

There exists another representation of digital filters, called the state-space
model. A well-known state-space filter is the Kalman filter.

4-3.2. Finite Impulse Response (FIR) Filter

The finite impulse response (FIR) filters are conceptually the easiest to
understand and the easiest to design. FIR filters have no feedback
elements. However, FIR filters suffer from low efficiency, and creating an
FIR filter to meet given specifications requires more hardware than an
equivalent infinite impulse response (IIR) filter. The filter is called 'finite'
because its response to an impulse ultimately settles to zero. This is in
contrast to IIR filters, which have internal feedback and may continue to
respond indefinitely. We start the discussion by stating the difference
equation which defines how the input signal x[n] is related to the output
signal y[n] of an FIR filter:

y[n] = a0 x[n] + a1 x[n-1] + ... + aM x[n-M]    (4-2)

where ai are the filter coefficients and M is known as the filter order; an
Mth-order filter has (M+1) terms on the right-hand side, which are
commonly referred to as taps. The previous equation can be expressed as follows:

y[n] = Σ (i = 0 to M) ai x[n-i]    (4-3)

To find the impulse response we set the input data samples as follows:

x[n] = δ[n]    (4-4)

where δ[n] is the Kronecker delta impulse. The impulse response for an
FIR filter is given by the following set of coefficients:
h[n] = Σ (i = 0 to M) ai δ[n-i] = an    (4-5)

The Z-transform of the impulse response yields the transfer function H(z)
= Y(z)/X(z) of the FIR filter, as follows:

H(z) = Z{h[n]} = Σ (n = -∞ to +∞) h[n] z^-n = Σ (n = 0 to M) an z^-n    (4-6)

FIR filters are BIBO¹ stable, since the output is a sum of a finite number
of finite multiples of the input values, and so can be no greater than
Σ|ai| times the largest value appearing in the input. The following figure
depicts the typical block diagram of an FIR filter. The D (or z^-1) blocks
are unit delay elements (flip-flops).

Fig. 4-3. Block diagram of an FIR filter

The following figure shows the block diagram of a moving-average filter,
which is a very simple FIR filter. The filter coefficients are given by:

ai = 1/(N+1),  for i = 0, 1, ..., N    (4-7)

¹ BIBO stands for "Bounded-Input Bounded-Output": for a bounded input, the output is bounded.

Here, we select the filter order: N=2 (2nd order moving average filter)

Fig. 4-4(a). Example of an FIR filter (the moving average filter)

The impulse response of the resulting filter is:

h[n] = (1/3) (δ[n] + δ[n-1] + δ[n-2])

The z-transform of the impulse response is:

H(z) = (1/3) (1 + z^-1 + z^-2) = (z^2 + z + 1) / (3 z^2)

The following figure shows the pole-zero diagram of the filter. Note that
there are two poles at the origin and two zeroes at z = e^(±j2π/3).

The frequency response is hence given by:

H(ω) = (1/3) (1 + e^-jω + e^-j2ω)

The following figure shows the absolute value of the frequency response.
Clearly, the moving-average filter passes low frequencies with a gain
near 1, and attenuates high frequencies. This is a typical low-pass filter
characteristic.
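This behavior is easy to verify numerically. The sketch below (hypothetical, not from the text) evaluates the magnitude of H(ω) = (1/3)(1 + e^-jω + e^-j2ω) at any frequency; the gain is 1 at DC, 1/3 at ω = π, and exactly 0 at the zeros ω = ±2π/3:

```cpp
#include <cassert>
#include <cmath>
#include <complex>

// Magnitude response of the 2nd-order moving-average filter:
// H(w) = (1/3) * (1 + exp(-j*w) + exp(-j*2w)).
double movingAverageGain(double w)
{
    const std::complex<double> j(0.0, 1.0);
    const std::complex<double> H =
        (1.0 + std::exp(-j * w) + std::exp(-j * 2.0 * w)) / 3.0;
    return std::abs(H);
}
```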

Fig. 4-4(b). Poles and zeroes of the moving average filter example (of order 2).

Fig. 4-4(c). Frequency response of the moving average filter example (of order 2).


The advantages and disadvantages of FIR filters can be summarized as
follows:
i- Advantages of FIR Filters:
 Always stable
 Linear phase response
 Easy to design
 Easy to implement
 Finite start-up transient

ii- Disadvantages of FIR Filters:

FIR filters may be more computationally demanding (in terms of
memory and calculations) than IIR filters to achieve the same response.
Therefore, they are usually of higher order than IIR filters for most applications.

iii. FIR Filter Design

To design a filter means to select the coefficients such that the system
has specific characteristics. The required characteristics are stated in the
filter specifications. Most of the time, filter specifications refer to the
frequency response of the filter.
There are different methods to find the coefficients of a digital filter from
the specifications:
1. Parks-McClellan (Remez) method
2. Windowing method
3. Frequency-sampling method
4. Weighted least-squares design
5. Direct calculation method

The Parks-McClellan method (inaccurately called Remez by Matlab) is
probably the most widely used FIR filter design method. This iteration
algorithm is commonly used to find an optimal equiripple set of
coefficients. Here the user specifies a desired frequency response, a
weighting function for errors from this response, and a filter order M. The
algorithm then finds the set of (M + 1) coefficients that minimize the
maximum deviation from the ideal characteristics. Intuitively, this finds
the filter that is as close as you can get to the desired response given that
you can use only (M + 1) coefficients.
In the windowing method, an initial impulse response is derived by
taking the Inverse Discrete Fourier Transform (IDFT) of the desired
frequency response. Then, the impulse response is refined by applying a
data window to it.
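As an illustration of the windowing method, the sketch below designs a lowpass FIR filter by truncating the ideal sinc impulse response, shifting it by M/2 samples for causality, and applying a Hamming window. It is a minimal sketch under stated assumptions (even order M, cutoff fc expressed as a fraction of the sampling rate); the function name is hypothetical, not from the text:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Windowing design of a lowpass FIR filter with M+1 taps.
// fc is the cutoff as a fraction of the sampling rate (0 < fc < 0.5).
// The ideal (infinite, non-causal) sinc response is truncated,
// shifted by M/2 samples for causality, and shaped by a Hamming window.
std::vector<double> windowedSincLowpass(double fc, int M)
{
    const double pi = std::acos(-1.0);
    std::vector<double> h(M + 1);
    for (int n = 0; n <= M; ++n) {
        const double m = n - M / 2.0;                    // center at M/2
        const double ideal = (m == 0.0)
            ? 2.0 * fc                                   // sinc limit at m = 0
            : std::sin(2.0 * pi * fc * m) / (pi * m);
        const double window = 0.54 - 0.46 * std::cos(2.0 * pi * n / M);
        h[n] = ideal * window;
    }
    return h;
}
```

Because the result is symmetric about M/2, the designed filter has linear phase, and its DC gain is close to 1 without any extra normalization.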

Fig. 4-5. Illustration of the Windowing method for digital filter design

In the direct calculation method, the impulse responses of certain types of
FIR filters (e.g., raised-cosine and windowed-sinc) can be calculated
directly from formulas.

iv. Special Types of FIR Filters

Aside from the regular filter types (LPF, HPF, etc.), there are:
Boxcar - Boxcar FIR filters are simply filters in which each coefficient is
1.0. Therefore, for an N-tap boxcar, the output is just the sum of the past
N samples. Because boxcar FIRs can be implemented using only adders,
they are of interest primarily in hardware implementations, where
multipliers are expensive to implement.
Hilbert Transformer - Hilbert Transformers shift the phase of a signal by
90 degrees. They are used primarily for creating the imaginary part of a
complex signal, given its real part.
Differentiator - Differentiators have an amplitude response which is a
linear function of frequency. They are not very popular nowadays, but are
sometimes used for FM demodulators.
Nyquist - Also called "Lth-band" filters. These filters are a special class
of filters used in multirate applications. Their key property is that one of
every L coefficients is zero, which can be exploited to reduce the
number of multiply operations needed to implement the filter. The famous
"half-band" filter is actually an Lth-band filter with L = 2.

Raised-Cosine - This is a special kind of filter that is sometimes used for
digital data applications. The frequency response in the passband is a
cosine shape which has been "raised" by a constant.
v. FIR Filter Implementation
Structurally, FIR filters consist of just two things: a sample delay line and a
set of coefficients. To implement the filter, the algorithm is as follows:
1. Put the input sample into the delay line.
2. Multiply each sample in the delay line by the corresponding coefficient
and accumulate the result.
3. Shift the delay line by one sample to make room for the next input sample.
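The three steps above can be sketched directly in C++. This is a hypothetical illustration (the class name `FirTap` is ours, not from the text), processing one sample per call:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One-sample-at-a-time FIR processing with an explicit delay line,
// following the three steps above (insert, multiply-accumulate, shift).
class FirTap
{
public:
    explicit FirTap(const std::vector<double>& coeffs)
        : a(coeffs), delay(coeffs.size(), 0.0) {}

    double tick(double input)
    {
        // 1. Put the input sample into the delay line.
        delay[0] = input;
        // 2. Multiply each sample by its coefficient and accumulate.
        double out = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i)
            out += a[i] * delay[i];
        // 3. Shift the delay line to make room for the next sample.
        for (std::size_t i = a.size() - 1; i > 0; --i)
            delay[i] = delay[i - 1];
        return out;
    }

private:
    std::vector<double> a;      // filter coefficients (taps)
    std::vector<double> delay;  // delay line: delay[i] holds x[n-i]
};
```

With coefficients {1, 1}, this reduces to a two-point moving sum, the same structure as the simple lowpass of the examples that follow.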

vi. Development of FIR Filters

The following figure depicts the complete design cycle of an FIR filter in
the form of an application-specific IC (ASIC).

Fig. 4-6. FIR filter design cycle as an application-specific IC (ASIC)



4-3.3. Infinite Impulse Response (IIR) Filter

Infinite impulse response (IIR) filters have an impulse response which is
non-zero over an infinite length of time. This is in contrast to finite
impulse response (FIR) filters, which have fixed-duration impulse responses.
IIR filters differ from FIR filters because they contain feedback elements
in the circuit, which can make the transfer function more complicated.

The difference equation of an IIR filter is similar to the general LSI
system equation and is given by:

y[n] = Σ (i = 0 to M) ai x[n-i] - Σ (j = 1 to N) bj y[n-j]    (4-8a)

In order to find the transfer function H(z) = Y(z)/X(z) of the filter, we first
take the Z-transform of each side of the above equation, and we use the
time-shift property to obtain:

Σ (i = 0 to M) ai z^-i X(z) = Σ (j = 0 to N) bj z^-j Y(z),  with b0 = 1    (4-8b)
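As a minimal concrete case of equation (4-8a), consider the first-order recursion y[n] = a0·x[n] - b1·y[n-1], with one feed-forward tap and one feedback tap. The hypothetical sketch below (function name is ours) shows how the single feedback term keeps the impulse response going indefinitely:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// First-order IIR filter y[n] = a0*x[n] - b1*y[n-1],
// the simplest special case of the recursive difference equation.
// Note the minus sign on the feedback term, matching Eq. (4-8a).
std::vector<double> onePoleFilter(double a0, double b1,
                                  const std::vector<double>& x)
{
    std::vector<double> y(x.size(), 0.0);
    double prev = 0.0;                 // y[n-1], zero initial condition
    for (std::size_t n = 0; n < x.size(); ++n) {
        y[n] = a0 * x[n] - b1 * prev;
        prev = y[n];
    }
    return y;
}
```

With a0 = 1 and b1 = -0.5, a unit impulse produces the geometrically decaying response 1, 0.5, 0.25, 0.125, ... which never settles exactly to zero: an infinite impulse response from only two coefficients.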

The following figure depicts the typical block diagram of an IIR filter.
The D (or z^-1) blocks are unit delay elements (flip-flops). The
coefficients ai, bj and the number of feed-forward/feedback paths
(M and N) are implementation-dependent.

Fig. 4-7. Block diagram of an IIR filter

In practice, design engineers find IIR filters to be fast and cheap, but with
poorer bandpass filtering and stability characteristics than FIR filters. The
advantages and disadvantages of IIR filters can be summarized as
follows:

i- Advantages of IIR Filters:
IIR filters are harder to design than FIR filters, but the benefits are
considerable: an IIR filter can be an order of magnitude more efficient
than an equivalent FIR filter. Although the FIR filter is easier to design,
the IIR filter will do the same work with fewer components, and fewer
components translate directly to less money.
ii- Disadvantage of IIR Filters:
 Non-linear phase
 Sensitive to round off error
 More difficult to design some shapes
 Infinite start-up transient

iii- IIR Filter Design

The design of digital IIR filters is heavily dependent on that of their
analog counterparts, because there are plenty of resources, prior works
and straightforward design methods for analog feedback filters, while
there are hardly any for digital IIR filters. As a result, if a digital IIR
filter is to be implemented, an analog filter (e.g. a Chebyshev,
Butterworth or Elliptic filter) is usually designed first and then converted
to digital form by applying a discretization technique such as the bilinear
transform or impulse invariance.

4-4. Examples of Digital Filters

Like analog filters, digital filters have different forms that operate in
different manners on different frequency regions.

i. Lowpass and Highpass Filters

High-pass filters (HPF) and low-pass filters (LPF) are the simplest forms
of digital filters, and they are relatively easy to design to specifications.
Both high-pass and low-pass transfer functions may be implemented in
FIR or IIR designs. The simplest low-pass filter is given by the following
difference equation:

y[n] = x[n] + x[n-1]    (4-9)

Fig. 4-8. An example of a digital low-pass filter



The following figure shows the impulse response (time domain) h[n] of
this FIR digital LPF. This is obtained by setting x[n] = δ[n] in the
difference equation.

Fig. 4-9(a). Impulse response of a digital low-pass filter

The next figure shows the frequency response, H(ω).

Fig. 4-9(b). Frequency response (magnitude) of a digital low-pass filter
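The shape of this response can be derived in closed form. Writing the frequency response of equation (4-9) and factoring out half the phase:

```latex
H(\omega) = 1 + e^{-j\omega}
          = e^{-j\omega/2}\left(e^{j\omega/2} + e^{-j\omega/2}\right)
          = 2\cos(\omega/2)\, e^{-j\omega/2}
```

so |H(ω)| = 2|cos(ω/2)|: the gain is 2 at DC and falls to 0 at ω = π (half the sampling rate), confirming the low-pass characteristic, with a linear phase term of -ω/2.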

Example 4.1
Consider the design of a high-pass linear-phase digital FIR filter
operating at a sampling rate of Fs Hz and with a cutoff frequency of Fc
Hz. The frequency response of the ideal filter is given by:

Hd(F) = 1 for Fc ≤ |F| ≤ Fs/2, and Hd(F) = 0 for |F| < Fc

The impulse response of this filter is obtained by the inverse Fourier
transform as follows:

hd(m) = δ(m) - (2Fc/Fs) [ sin(2πmFc/Fs) / (2πmFc/Fs) ]

The impulse response hd(m) is non-causal (nonzero for m < 0) and infinite
in duration. To obtain an FIR filter of order M, we multiply hd(m) by a
rectangular window sequence of length M+1 samples. To introduce
causality (h(m) = 0 for m < 0), we shift the truncated h(m) by M/2 samples.

ii- Bandpass Filter

Bandpass filters (BPF) are like a combination of a high-pass and a low-pass
filter. Only specific bands are allowed to pass through the filter, and
frequency data that is too high or too low will not pass the filter.

Example 4.2
Consider the design of a band-pass linear-phase digital FIR filter
operating at a sampling rate of Fs and with lower and higher cutoff
frequencies of FL and FH Hz. The frequency response of the ideal BPF is
given by:

Hd(F) = 1 for FL ≤ |F| ≤ FH, and Hd(F) = 0 otherwise

The impulse response of this filter is obtained by the inverse Fourier
transform as follows:

hd(m) = (2FH/Fs) [ sin(2πmFH/Fs) / (2πmFH/Fs) ] - (2FL/Fs) [ sin(2πmFL/Fs) / (2πmFL/Fs) ]

The impulse response hd(m) is non-causal (nonzero for m < 0) and infinite
in duration. To obtain an FIR filter of order M, we multiply hd(m) by a
rectangular window sequence of length M+1 samples. To introduce
causality (h(m) = 0 for m < 0), we shift the truncated h(m) by M/2 samples.

iii- Bandstop Filter

Bandstop filters are the opposite of BPF filters in that they stop only a
certain narrow band of frequencies, and allow all other data to pass
without problem.

iv- Notch Filter

The direct inverse of a bandpass filter is a bandstop filter. A notch
filter is essentially a very narrow bandstop filter.

Example 4.3
Consider the design of a band-stop linear-phase digital FIR filter
operating at a sampling rate of Fs Hz and with lower and higher cutoff
frequencies of FL and FH Hz. The frequency response of the ideal filter is
given by:

Hd(F) = 0 for FL ≤ |F| ≤ FH, and Hd(F) = 1 otherwise

The impulse response of this filter is obtained by the inverse Fourier
transform as follows:

hd(m) = δ(m) - (2FH/Fs) [ sin(2πmFH/Fs) / (2πmFH/Fs) ] + (2FL/Fs) [ sin(2πmFL/Fs) / (2πmFL/Fs) ]

The impulse response hd(m) is non-causal (nonzero for m < 0) and infinite
in duration. To obtain an FIR filter of order M, we multiply hd(m) by a
rectangular window sequence of length M+1 samples. To introduce
causality (h(m) = 0 for m < 0), we shift the truncated h(m) by M/2 samples.

v- Allpass Filter
The allpass filter (APF) is a filter that has a unity magnitude response at
all frequencies, so it does not affect the magnitude response of a system.
The APF does, however, affect the phase response. For example, if an IIR
filter is designed to meet a prescribed magnitude response, the phase
response of the system will often become non-linear. To correct this non-linearity,
an allpass filter is cascaded with the IIR filter so that the overall
response (IIR + APF) has a constant group delay.


vi- Comb Filters

Comb filters are FIR filters which allow selected frequencies to pass
while blocking their harmonics and all other frequencies. Comb filters
look like a hair comb: they have many "teeth", which are evenly-spaced
notches in the transfer function. They are useful for removing noise that
appears at regular frequency intervals.
Another common application of comb filters is to sharpen composite
video signals by eliminating residual color (chroma) from the brightness
(luminance) signal.

The comb filters have the following transfer function:

(4-10)

where r is the frequency interval ratio.


The figure below depicts an example of a combo filter having difference
equation of the form:

y[n] = x[n] + g1.x[n −M1] + g2. y[n −M2] (4-11)

Fig. 4-10. Signal flow graph of a digital comb filter

* Advantages of Comb Filters:

The simple operations of comb filters make them suitable for high-frequency
operation (e.g., the first decimation stage of a ΣΔ ADC, or an
interpolation stage). Comb filters also require minimal storage of samples.

* Drawbacks of Comb Filters:

Good attenuation can be achieved only in a narrow band. The "sinc"-shaped
attenuation in the baseband needs to be equalized in other filtering stages.


vii- Cascaded-Integrator Comb (CIC) Filters

CIC filters are a class of FIR filters used in multi-rate digital signal
processing; they were invented by Eugene Hogenauer. A first-order CIC
filter is made of an integrator and a comb stage. A switch between these
two stages performs the down-sampling operation. The architecture of a
single-stage CIC filter is shown in Figure 4-11(a), and Figure 4-11(b) shows
the diagram of a multistage CIC filter. Note that a CIC filter architecture
contains no multipliers.

Fig. 4-11(a). One-stage CIC filter

Fig. 4-11(b). Multi-stage CIC filter

The system function of a multi-stage CIC filter, referenced to the high
sampling rate fs, is:

H(z) = [ (1 - z^-RM) / (1 - z^-1) ]^N    (4-12)

where R is the interpolation (or decimation) ratio, M is the number of
samples per stage (1 or 2), and N is the number of stages in the filter.
CIC filters are simple in construction and have a linear phase response.
They have many applications in interpolation and decimation.
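A one-stage CIC decimator can be sketched in a few lines: an integrator running at the high rate, a down-sampler keeping every R-th sample, and a comb running at the low rate. This is a hypothetical illustration (function name is ours), using only additions and subtractions:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One-stage CIC decimator (N = 1, M = 1): an integrator at the high
// rate fs, a down-sampler (keep every R-th sample), and a comb
// y[n] = v[n] - v[n-1] at the low rate fs/R. Note: no multipliers.
std::vector<double> cicDecimate(const std::vector<double>& x, int R)
{
    // Integrator stage: running sum at the high sampling rate.
    std::vector<double> integ(x.size());
    double acc = 0.0;
    for (std::size_t n = 0; n < x.size(); ++n) {
        acc += x[n];
        integ[n] = acc;
    }
    // Down-sample by R, then apply the comb at the low rate.
    std::vector<double> y;
    double prev = 0.0;
    for (std::size_t n = R - 1; n < integ.size(); n += R) {
        y.push_back(integ[n] - prev);
        prev = integ[n];
    }
    return y;
}
```

Each output equals the sum of R consecutive inputs, i.e., the moving-sum (boxcar) response expected from a single-stage CIC.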

viii- Decimation Filters

Decimation is the process of reducing the sampling frequency of a signal
to a lower sampling frequency that differs from the original frequency by
an integer factor. In practice, this usually implies lowpass-filtering (LPF)
the signal, then throwing away some of its samples.

Decimation is sometimes called down-sampling. However, down-sampling
is a more specific term which refers to just the process of
throwing away samples, without the LPF operation. The LPF associated
with decimation removes high-frequency content from the signal to
accommodate the new sampling frequency.

The decimation factor is the ratio of the input rate to the output rate. It is
usually symbolized by M, so M = input rate / output rate. You can only
decimate by integer factors; you cannot decimate by a fraction.

Fig. 4-12. Operation of the decimation filter

Decimation filters perform several important functions. The first is to
down-sample and reduce the clock rate. Decimation filters also help to
remove out-of-band signals (remove the excess bandwidth) and noise.
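A decimator therefore has two parts: an anti-alias LPF and a down-sampler that keeps one of every M outputs. The hypothetical sketch below (function name is ours; a crude M-point moving average stands in for a proper anti-alias filter) shows the structure:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Decimation by an integer factor M: lowpass-filter first (here a
// crude M-point moving average as the anti-alias filter), then keep
// only one output for every M input samples.
std::vector<double> decimate(const std::vector<double>& x, int M)
{
    std::vector<double> y;
    for (std::size_t n = 0; n + M <= x.size(); n += M) {
        double acc = 0.0;
        for (int i = 0; i < M; ++i)
            acc += x[n + i];
        y.push_back(acc / M);    // one output per M inputs
    }
    return y;
}
```

The output rate is 1/M of the input rate, matching the decimation-factor definition above.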

Implementation
Decimation consists of the processes of lowpass filtering, followed by
down-sampling. To implement the filtering part, you can use either FIR
or IIR filters. The CIC implementation of a decimation filter is the
cascade of an integrator stage, a decimation procedure, and a comb stage
as shown in figure 4-13.

In order to reduce the complexity of the analysis of CIC filters, it is
important to combine the integrator and comb stages into a single transfer
function. However, to reduce the computational expense of the operation,
the implementation of the filter is performed in two separate sections,

before and after the decimation. The integrator section is the cascade of N
ideal digital integrator stages operating at the high sampling rate Fs. The
comb section consists of N comb stages operating at the low sampling
rate of Fs/R, where R is the rate-change factor. The comb stages have a
delay of M samples per stage.

Fig. 4-13. Block diagram of a CIC decimation (and interpolation) filter(s).

The cascade of the integrator and comb sections of the CIC filter requires
a decimation process to be implemented between these two filters. The
order of the comb filter and the decimation procedure can be switched
without causing any change in the end results of the filtering operation.
The integrator and comb stages of the filter can be combined into a
single transfer function, thus simplifying analysis. The transfer function
of the cascaded integrator-comb filter before decimation is then:

H(z) = [ (1 - z^-RM) / (1 - z^-1) ]^N    (4-13)

Fig. 4-14. Signal flow graph of CIC decimation) filter.

The frequency response (with respect to the high input sample rate) is:

|H(f)| = | sin(πRMf) / sin(πf) |^N    (4-14)

where f is the frequency normalized to the high sampling rate.


This filter is a cascade of N copies of an (RM)-tap FIR filter whose
coefficients specify a rectangular time-domain window, so it is indeed an
LPF. Furthermore, because of this symmetry, it is a linear-phase filter.
Therefore, the CIC decimation filter is actually an alternative
implementation of a general decimation filter. In this CIC structure, the
comb stage operates at the low sampling rate, resulting in a more efficient
implementation of the decimation filter.

4-5. Stability of Digital Filters

A filter is said to be stable if its impulse response h[n] approaches zero
as n goes to infinity.

In this context, we say that an impulse response approaches zero by
definition if there exists a finite integer nf, and real numbers A > 0 and
σ > 0, such that:

|h[n]| < A exp(-σn)   for all n ≥ nf    (4-15)

In other terms, the impulse response is asymptotically bounded by a
decaying exponential. Thus, every finite-order non-recursive (FIR) filter
is stable. In fact, FIR filters are clearly bounded-input bounded-output
(BIBO) stable, since the output is a sum of a finite number of finite
multiples of the input values, and so can be no greater than Σ|ai| times the
largest value appearing in the input.

Only the feedback coefficients bj (in the transfer function of IIR filters)
can cause instability.

Filter stability may be discussed further in terms of the poles and zeros of
the system. We can say that stability is guaranteed when all filter poles
have magnitude less than 1. So, a recursive (IIR) filter is stable if and
only if all its poles have magnitude less than 1.
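For a second-order denominator 1 + b1 z^-1 + b2 z^-2, the poles are the roots of z^2 + b1 z + b2 = 0, so the stability condition can be checked directly with the quadratic formula. A hypothetical sketch (function name is ours, not from the text):

```cpp
#include <cassert>
#include <complex>

// Stability test for a second-order IIR denominator
// 1 + b1*z^-1 + b2*z^-2. The poles are the roots of
// z^2 + b1*z + b2 = 0; both must lie strictly inside the unit circle.
bool isStableBiquad(double b1, double b2)
{
    const std::complex<double> disc =
        std::sqrt(std::complex<double>(b1 * b1 - 4.0 * b2, 0.0));
    const std::complex<double> p1 = (-b1 + disc) / 2.0;  // first pole
    const std::complex<double> p2 = (-b1 - disc) / 2.0;  // second pole
    return std::abs(p1) < 1.0 && std::abs(p2) < 1.0;
}
```

For example, b1 = -3, b2 = 2 gives poles at z = 1 and z = 2, which the test correctly reports as unstable.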


4-6. Synthesis (Implementation) of Digital Filters

After finishing the design of a digital filter, we should search for a
suitable synthesis method to implement the filter (in hardware or as a
software code). There exist many variations to implement and
synthesize digital filters. We discuss here the most famous
implementation methods.

4-6.1. Direct Forms (DF-I & DF-II)
There exist four direct-form structures to choose from to implement a
digital filter. The general difference equation of a causal LSI system (4-8a)
is used to specify the direct-form I (DF-I) implementation of a digital
filter. For instance, the following difference equation depicts the DF-I of a
second-order digital filter:

y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]    (4-16)

The DF-I signal flow graph of this case is shown in figure 4-15(a). The
direct-form II structure, another common choice, is depicted in figure
4-15(b). The other two direct forms are obtained by transposing direct
forms I and II.

Fig. 4-15. System diagram for the digital filter described by equation (4-16).
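The direct-form II structure of equation (4-16) can be sketched as a small class; the two state variables w1 and w2 correspond to the two delay elements shared by the feedback and feed-forward paths (the canonic minimum). The class name is ours, not from the text:

```cpp
#include <cassert>

// Direct-Form II realization of the 2nd-order filter of Eq. (4-16):
// y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2].
// Only two delay elements (w1, w2) are needed -- the canonic minimum.
class BiquadDF2
{
public:
    BiquadDF2(double b0, double b1, double b2, double a1, double a2)
        : b0(b0), b1(b1), b2(b2), a1(a1), a2(a2), w1(0.0), w2(0.0) {}

    double tick(double x)
    {
        const double w = x - a1 * w1 - a2 * w2;       // feedback side
        const double y = b0 * w + b1 * w1 + b2 * w2;  // feed-forward side
        w2 = w1;                                      // shift the delay line
        w1 = w;
        return y;
    }

private:
    double b0, b1, b2, a1, a2;
    double w1, w2;  // the two shared unit delays
};
```

Setting a1 = a2 = 0 removes the feedback, and the structure degenerates to a 3-tap FIR filter, which is a quick way to sanity-check an implementation.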

4-6.2. Canonic and Non-Canonic Forms

Canonic filters are filters where the order of the transfer function
matches the number of delay units in the filter. Conversely, non-canonic
filters are filters with more delay units than the order of the transfer
function. In general, IIR filters are non-canonic and FIR filters are
canonic; this is not always the case, however. Figure 4-16 illustrates
the canonic and non-canonic forms of a 2nd-order digital filter. Notice
that in the canonic filter, the system order (2) is equal to the number of
delay units in the filter (2). In the non-canonic version, the system order
is not equal to the number of delays (4 delay units). Although both filters
perform the same task, the canonic version is easier to implement in
digital hardware, because it needs fewer delay units.

Fig. 4-16. Canonic and non-canonic implementation forms of a digital filter

4-6.3. Software Implementation

Digital filters may be implemented as software code, which can be
edited, compiled and run on any general-purpose computer or dedicated
digital signal processor (DSP). In Matlab, a filter can be implemented
using the filter function. For example, the following Matlab code
computes the output signal y given the input signal x for a comb filter:
g1 = (0.5)^3; % Some specific coefficients
g2 = (0.9)^5;
B = [1 0 0 g1]; % Feedforward coefficients, M1=3
A = [1 0 0 0 0 g2]; % Feedback coefficients, M2=5
N = 1000; % Number of signal samples
x = rand(N,1); % Random test input signal
y = filter(B,A,x); % Matlab and Octave compatible

We can also implement digital filters using C or C++ code. The following
code listings, FIR.h and FIR.cpp, depict the header file and class file for
implementing arbitrary causal FIR filters.

/********************************************/
/* General Finite-Impulse-Response (FIR)    */
/* digital filter class                     */
/********************************************/
#if !defined(__FIR_h)
#define __FIR_h

#include "Object.h"

class FIR : public Object
{
  protected:
    int length;
    FLOAT *coeffs;
    FLOAT *pastInputs;
    int piOffset;
    FLOAT delay;
    char name[64];
  public:
    FIR(int length);
    FIR(const char *filterFile);
    ~FIR();
    void clear(void);
    void setCoeffs(FLOAT *theCoeffs);
    FLOAT tick(FLOAT input);
    FLOAT lastOutput;
    FLOAT getDelay(FLOAT freq);
    int getLength(void);
    char *getName(void);
    void setName(char *theName);
};

#endif

#include "FIR.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

FIR :: FIR(int theLength) : Object()
{
    length = theLength;
    coeffs = (FLOAT *) malloc(length * sizeof(FLOAT));
    pastInputs = (FLOAT *) calloc(2 * length, sizeof(FLOAT));
    piOffset = length;
    strcpy(name, "FIR Filter");
}

FIR :: FIR(const char *filterFile) : Object()
{
    FILE *fp;
    float d;
    int rtn, i;
    strncpy(name, filterFile, sizeof(name) - 1);
    name[sizeof(name) - 1] = '\0';
    fp = fopen(filterFile, "r");
    if (!fp) {
        char nfn[1000];
        strcpy(nfn, FILTER_DIR);
        strcat(nfn, filterFile);
        printf("Couldn't find filter file %s. Trying %s\n", filterFile, nfn);
        fp = fopen(nfn, "r");
        if (!fp) {
            printf("Couldn't find filter file %s or %s!\n", filterFile, nfn);
            exit(-1);
        }
    }
    rtn = fscanf(fp, "%f", &d);
    if (rtn == EOF) goto error;
    delay = (FLOAT) d;
    rtn = fscanf(fp, "%f", &d);
    if (rtn == EOF) goto error;
    length = (int) d;
    coeffs = (FLOAT *) malloc(length * sizeof(FLOAT));
    pastInputs = (FLOAT *) calloc(2 * length, sizeof(FLOAT));
    for (i = 0; i < length; i++) {
        rtn = fscanf(fp, "%f", &d);
        if (rtn == EOF) goto error;
        coeffs[i] = (FLOAT) d;
    }
    piOffset = length;
    return;
error:
    fprintf(stderr, "Premature EOF or bad numbers in filter file %s\n", filterFile);
    exit(-1);
}

FIR :: ~FIR()
{
    free(pastInputs);
    free(coeffs);
}

void FIR :: clear()
{
    int i;
    for (i = 0; i < 2 * length; i++) { pastInputs[i] = 0; }
    piOffset = length;
}

void FIR :: setCoeffs(FLOAT *theCoeffs)
{
    int i;
    for (i = 0; i < length; i++) {
        coeffs[i] = theCoeffs[i];
    }
}

FLOAT FIR :: tick(FLOAT input)
{
    int i;
    lastOutput = input * coeffs[0];
    for (i = 1; i < length; i++)
        lastOutput += coeffs[i] * pastInputs[piOffset - i];
    pastInputs[piOffset++] = input;
    if (piOffset >= 2 * length) {   // wrap the double-length buffer
        piOffset = length;
        for (i = 0; i < length; i++)
            pastInputs[i] = pastInputs[length + i];
    }
    return lastOutput;
}

FLOAT FIR :: getDelay(FLOAT freq)
{
    return delay;
}

int FIR :: getLength(void)
{
    return length;
}

char * FIR :: getName(void)
{
    return name;
}

void FIR :: setName(char *theName)
{
    strncpy(name, theName, sizeof(name) - 1);
    name[sizeof(name) - 1] = '\0';
}


4-7. Comparison between Analog and Digital Filters
Most digital signals originate in analog electronics. If the signal needs to
be filtered, is it better to use an analog filter before digitization, or a
digital filter after? We will answer this question by letting two of the best
contenders deliver their blows. The goal will be to provide a low-pass
filter at 1 kHz. Fighting for the analog side is a six pole Chebyshev filter
with 0.5 dB (6%) ripple. This can be constructed with 3 OpAmps, 12
resistors, and 6 capacitors. In the digital corner, the windowed-sinc is
warming up and ready to fight. The analog signal is digitized at a 10kHz
sampling rate, making the cutoff frequency 0.1 on the digital frequency
scale. The length of the windowed-sinc will be chosen to be 129 points,
providing the same 90% to 10% roll-off as the analog filter. Fair is fair.
Figure 4-17 shows the frequency and step responses for these two filters.
Let's compare the two filters blow-by-blow. As shown in figure 4-17(a)
and (b), the analog filter has a 6% ripple in the passband, while the digital
filter is perfectly flat (within 0.02%). The analog designer might argue
that the ripple can be selected in the design; however, this misses the
point. The flatness achievable with analog filters is limited by the
accuracy of their resistors and capacitors. Even if a Butterworth response
is designed (i.e., 0% ripple), filters of this complexity will have a residue
ripple of, perhaps, 1%. On the other hand, the flatness of digital filters is
primarily limited by round-off error, making them hundreds of times
flatter than their analog counterparts. Score one point for the digital filter.

Next, look at the frequency response on a log scale, as shown in figure
4-17(c) and (d). Again, the digital filter is clearly the victor in both roll-off
and stopband attenuation. Even if the analog performance is improved by
adding additional stages, it still can't compare to the digital filter. For
instance, imagine that you need to improve these two parameters by a
factor of 100. This can be done with simple modifications to the
windowed-sinc, but is virtually impossible for the analog circuit. Score
two more for the digital filter.

The step response of the two filters is shown in figure 4-17(e) and (f).
The digital filter's step response is symmetrical between the lower and
upper portions of the step, i.e., it has a linear phase. The analog filter's
step response is not symmetrical, i.e., it has a nonlinear phase. One more
point for the digital filter. Lastly, the analog filter overshoots about 20%
on one side of the step. The digital filter overshoots about 10%, but on
both sides of the step. Since both are bad, no points are awarded. In spite
of this beating, there are still many applications where analog filters


should, or must, be used. The first advantage is speed: digital is slow;
analog is fast.

Fig. 4-17. Comparison between analog and digital filters.

The second inherent advantage of analog over digital is dynamic range.
This comes in two flavors. Amplitude dynamic range is the ratio
between the largest signal that can be passed through a system, and the
inherent noise of the system. For instance, a 12 bit ADC has a saturation
level of 4095, and an rms quantization noise of 0.29 digital numbers, for a

dynamic range of about 14,000. In comparison, a standard op amp has a
saturation voltage of about 20 V and an internal noise of about 2 μV, for a
dynamic range of about ten million. Just as before, a simple op amp
devastates the digital system.
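The figures quoted above follow from simple arithmetic: an ideal converter has an rms quantization noise of 1/√12 ≈ 0.29 LSB. The sketch below is our own illustration (the function name is an assumption, not from the text):

```cpp
#include <cassert>
#include <cmath>

// Amplitude dynamic range of an ideal N-bit ADC: full-scale count divided
// by the rms quantization noise of 1/sqrt(12) (about 0.29) LSB.
double adcDynamicRange(int bits) {
    double fullScale = std::pow(2.0, bits) - 1.0;   // 4095 for 12 bits
    double rmsNoise = 1.0 / std::sqrt(12.0);        // ~0.29 LSB
    return fullScale / rmsNoise;
}
```

For 12 bits this gives 4095·√12 ≈ 14,200, i.e. "about 14,000" as stated above; even a 16-bit converter only reaches about 227,000, far short of the op amp's ten million.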

The other flavor is frequency dynamic range. For example, it is easy to
design an op amp circuit to simultaneously handle frequencies between
0.01 Hz and 100 kHz (seven decades). When this is tried with a digital
system, the computer becomes swamped with data. For instance,
sampling at 200 kHz, it takes 20 million points to capture one complete
cycle at 0.01 Hz. You may have noticed that the frequency response of
digital filters is almost always plotted on a linear frequency scale, while
analog filters are usually displayed with a logarithmic frequency. This is
because digital filters need a linear scale to show their exceptional filter
performance, while analog filters need the logarithmic scale to show their
huge dynamic range.


4-8. Summary
Digital signal processing (DSP) is the study of signals in a digital
representation and the processing methods of these signals. DSP includes
subfields like: audio and speech signal processing, sonar and radar signal
processing, sensor array processing, spectral estimation, statistical signal
processing, image processing, signal processing for communications,
biomedical signal processing, etc. Since the goal of DSP is usually to
measure or filter continuous real-world analog signals, the first step is
usually to convert the signal from an analog to a digital form, by using an
analog to digital converter. Often, the required output signal is another
analog output signal, which requires a digital to analog converter.
Digital filters fall into two main categories: finite impulse response (FIR)
and infinite impulse response filter (IIR). If you put in an impulse, that is,
a single "1" sample followed by many "0" samples, zeroes will come out
after the "1" sample has made its way through the delay line of the filter.
An FIR filter uses only the input signal, while an IIR uses both the input
signal and previous samples of the output signal. FIR filters are always
stable, while IIR filters may be unstable. Most filters can be described in
Z-domain (a superset of the frequency domain) by their transfer
functions. A filter may also be described as a difference equation, a
collection of zeroes and poles or, if it is an FIR filter, an impulse response
or step response.

Infinite impulse response (IIR) filters have recursive transfer functions
(with feedback), while finite impulse response (FIR) filters have
non-recursive transfer functions (without feedback).
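The structural difference shows up in how a single output sample is computed. The fragment below is a minimal illustration (the coefficient values are arbitrary, chosen only for the example): the FIR output depends only on input samples, while the IIR output feeds back the previous output.

```cpp
#include <cassert>
#include <cmath>

// FIR: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]   (inputs only, no feedback)
double firSample(double x0, double x1, double x2) {
    return 0.25 * x0 + 0.5 * x1 + 0.25 * x2;
}

// IIR: y[n] = b0*x[n] + a1*y[n-1]               (previous output fed back)
double iirSample(double x0, double yPrev) {
    return 0.1 * x0 + 0.9 * yPrev;
}
```

Feed a unit impulse through firSample and the response dies out after three samples (finite); through iirSample it decays as 0.1·0.9^n and never becomes exactly zero (infinite). Because |0.9| < 1, this particular recursion happens to be stable.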

The algorithms required for DSP are sometimes performed using
specialized computers, which make use of specialized microprocessors
called digital signal processors (DSP). These process signals in real time
and are generally purpose-designed application-specific integrated
circuits (ASICs). When flexibility and rapid development are more
important than unit costs at high volume, DSP algorithms may also be
implemented using field-programmable gate arrays (FPGAs).

Digital signal processing is often implemented using specialized
microprocessors such as the MC56000, from Motorola, and the TMS320, from
Texas Instruments. These often process data using fixed-point arithmetic,
although some versions are available which use floating point arithmetic
and are more powerful. For faster applications FPGAs might be used.
Beginning in 2007, multi-core implementations of DSPs have started to
emerge from companies including Freescale and startup Stream
Processors, Inc. For faster applications with vast usage, ASICs might be
designed specifically. For slow applications such as flame scanning, a
traditional slower processor such as a microcontroller can be used.


4-9. Problems
4-1) What does "FIR" mean? Why is the impulse response "finite"?

4-2) How do FIR filters compare to IIR filters? What are the advantages
and disadvantages of FIR Filters (compared to IIR filters)?

4-3) What is the main difference between the Windowed-Sinc filter and
the Chebyshev filter? Which is the best digital filter in the frequency
domain?

4-4) Design an IIR LPF, with a corner cutoff frequency 10 kHz and
maximal flat response in the passband

4-5) Choose the most suitable answer for the following sentences
1. IIR filters
a. use feedback
b. are sometimes called recursive filters
c. can oscillate if not properly designed
d. all of the above

2. The output of two digital filters can be added. The same effect can be
achieved by
a. adding their coefficients
b. subtracting their coefficients
c. convolving their coefficients
d. averaging their coefficients and then using a Blackman window

3. A DSP convolves each discrete sample with four coefficients and they are
all equal to 0.25. This must be a
a. low-pass filter
b. high-pass filter
c. band-pass filter
d. band-stop filter

4. The inverse Fourier transform


a. converts from the frequency domain to the time domain
b. converts from the time domain to the frequency domain
c. converts from the phasor domain to the magnitude domain
d. is used to make real-time spectrum analyzers

5. The following figure is the impulse response for


a. an IIR highpass filter
b. an FIR bandpass filter
c. an IIR lowpass filter
d. an FIR lowpass filter

6. Coefficient symmetry is important in FIR filters because it provides


a. a smaller transition bandwidth
b. less passband ripple
c. less stopband ripple
d. a linear phase response

7. The following curve depicts


a. phase response of a lowpass filter
b. amplitude response of a lowpass filter
c. both of the above
d. none of the above

8. Two digital filters can be operated in cascade. The same effect can be
achieved by
a. adding their coefficients
b. subtracting their coefficients
c. convolving their coefficients
d. averaging their coefficients and then using a Blackman window

9. A DSP convolves each discrete sample with four coefficients and they
are all equal to 0.25. This must be an
a. IIR filter
b. FIR filter
c. RRR filter
d. All of the above

4-6) Briefly list the signal processing steps required in the window design
technique, to obtain the coefficients of a causal FIR discrete-time filter
given the desired frequency response of the filter H(f).

4-7) Using the inverse Fourier transform method design two digital
filters, a low-pass filter and a high-pass filter, to divide the input signal
into two equal bandwidth signals.


4-10. References

[1] M. Frerking, “Digital Signal Processing in Communication Systems”,
Kluwer Academic Publishers, NY, 1994.

[2] A. Antoniou, “Digital Filters: Analysis & Design”, Prentice Hall, 1996.

[3] L. Rabiner and B. Gold, “Theory & Application of Digital Signal
Processing”, Prentice Hall, 1996.

[4] J. G. Proakis and D. G. Manolakis, “Digital Signal Processing”,
Prentice Hall, 1996.

[5] S. K. Mitra, “Digital Signal Processing”, Tata McGraw-Hill, ND, 1996.

[6] A. Antoniou, “Digital Signal Processing”, Prentice Hall, 1999.

[7] D. K. Lindner, “Introduction to Signals & Systems”, McGraw-Hill, NY,
1999.

Analog & Digital Signal Processing Chapter 5

Applications of
Digital Signal Processing
Contents

5-1. Introduction to DSP Applications


5-2. Conversion from Floating Point to Fixed Point
5-3. Application Example #1: Software Implementation of FFT
5-4. Application Example #2: GSM Receiver
5-5. Application Example #3: 100 Gbps DP-QPSK System
5-6. DSP Selection Guide
5-7. Summary
5-8. Problems
5-9. Bibliography


5-1. Introduction
Digital signal processing is one of the core technologies in rapidly
growing application areas such as wireless communications, audio and
video processing, and industrial control. Designing an efficient digital
signal processing system covers a complete spectrum of design issues, from
algorithm to implementation. This chapter discusses several DSP
applications. Digital signal processing and digital filters have many
applications in many domains, among which one can cite:

 Image signal processing,


 Audio and speech signal processing,
 Sonar and radar signal processing,
 Signal processing for communications,
 Biomedical signal processing,

Another area of application is high data rate communication systems.
These systems have enormous real-time computational requirements. A
modern mobile phone, for example, executes several complex algorithms,
including speech compression and decompression, forward error
correction encoding and decoding, highly complex modulation and
demodulation schemes, up conversion and down conversion of modulated
and received signals, and so on. If these are implemented in software, the
amount of real time computation may require the power of a
supercomputer. Advancement in VLSI technology has made it possible to
conveniently accomplish the required computations in a hand held device.

Specific examples in telecommunications are modulation/demodulation
(MODEM) in digital mobile phones, diversity reception to get rid of
fading effects, and equalization of cable TV (CATV) signals.


In addition, DSP is applied in speech compression, equalization of
sound, high-fidelity loudspeaker crossovers, audio effects in electric
guitar amplifiers, weather forecasting, seismic data processing, and
medical imaging such as computer-aided tomography (CAT) scans and
magnetic resonance imaging (MRI). Figure 5-1 depicts some DSP applications.

Fig. 5-1. DSP applications



5-2. Conversion from Floating Point to Fixed Point


Most signal processing and communication systems are first implemented
in double precision floating point arithmetic using tools like MATLAB.
While implementing these algorithms the main focus of the developer is
to correctly assimilate the functionality of the algorithm. This MATLAB
code is then converted into floating point C/C++ code. C++ code usually
runs much faster than MATLAB routines.

Conversion requires serious deliberation as it results in a tradeoff between


performance and cost. To reduce cost, shorter word lengths are used for
all variables in the code, but this adversely affects the numerical
performance of the algorithm and adds quantization noise.

A floating point number is simply converted to Qn.m fixed point format
by bringing m fractional bits of the number to the integer part and then
dropping the rest of the bits with or without rounding. This conversion
translates a floating point number to an integer number with an implied
decimal. This implied decimal needs to be remembered by the designer
for referral in further processing of the number in different calculations:

    num_fixed = round(num_float × 2^m)
    or num_fixed = fix(num_float × 2^m)

    if (num_fixed > 2^(N-1) − 1)
        num_fixed = 2^(N-1) − 1
    elseif (num_fixed < −2^(N-1))
        num_fixed = −2^(N-1)
    end

Overflow is a serious consequence of fixed point arithmetic. Overflow
occurs if two positive or negative numbers are added and the sum
requires more than the available number of bits.
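The conversion and saturation rules above can be sketched in C++ as follows. This is a minimal illustration under our own naming (toFixed, toFloat) and a 32-bit container; it is not part of any standard API:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Convert a double to Qn.m fixed-point format held in an N-bit signed
// integer (N = n + m), saturating instead of overflowing.
int32_t toFixed(double x, int N, int m) {
    double scaled = std::round(x * std::pow(2.0, m)); // bring up m fraction bits
    const double maxVal = std::pow(2.0, N - 1) - 1.0; //  2^(N-1) - 1
    const double minVal = -std::pow(2.0, N - 1);      // -2^(N-1)
    if (scaled > maxVal) scaled = maxVal;             // saturate high
    if (scaled < minVal) scaled = minVal;             // saturate low
    return static_cast<int32_t>(scaled);
}

// Recover the value implied by the hidden decimal point.
double toFloat(int32_t q, int m) {
    return static_cast<double>(q) / std::pow(2.0, m);
}
```

For example, toFixed(pi, 8, 5) yields 101 (binary 01100101), which reads back as 3.15625, consistent with the MATLAB fi() example in this section.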
MATLAB provides the fi tool, a fixed point numeric object with a
complete list of attributes that greatly help in seamless conversion of a
floating point algorithm to fixed point format. For example, using fi(), π
is converted to a signed fixed point Q3.5 format number:

>> PI = fi(pi, 1, 3+5, 5); % specifying N bits and the fraction part
>> bin(PI)
01100101
>> double(PI)
3.1563

As with MATLAB, once a simulation is working in SystemC using
double precision floating point numbers, the same code can be used for
fixed point implementation by merely substituting or redefining the same
variables with fixed point attributes.

Declaring all variables as single or double precision floating point
numbers is the most convenient way of implementing any DSP algorithm,
but from a digital design perspective the implementation takes much
more area and dissipates a lot more power than its equivalent fixed point
implementation. Although floating point to fixed point conversion is a
difficult aspect of algorithm mapping on architecture, to preserve area
and power this option is widely chosen by designers.

For fixed point implementation on a programmable digital signal
processor, the code is converted to a mix of standard data types consisting
of 8 bit char, 16 bit short and 32 bit long. Hence, defining the word
lengths of different variables and intermediate values of all computations
in the results, which are not assigned to defined variables, is very simple.
Conversion of a floating point algorithm to fixed point format requires
the following steps:

 S0 Serialize the floating point code to separate each floating point
computation into an independent statement assigned to a distinct
variable var_i.
 S1 Insert range directives after each serial floating point computation
in the serialized code of S0, to measure the range of values each
distinct variable takes in a set of simulations.

 S2 Design a top layer that runs the design in a loop for all possible sets
of inputs. Each iteration executes the serialized code with range
directives for one possible set of inputs. These directives keep
track of the maximum max_var_i and minimum min_var_i values of each
variable var_i. After running the code for all iterations, the range
directives return the range that each variable var_i takes in the
implementation.
 S3 To convert each floating point variable var_i to a fixed point variable
fx_var_i in its equivalent Qn_i.m_i fixed point format, extract the integer
length n_i from the recorded range; a typical relation is
n_i = ceil(log2(max(|max_var_i|, |min_var_i|))) + 1, with one bit for the sign.


 S4 Setting the fractional part m_i of each fixed point variable fx_var_i
requires detailed technical deliberation and optimization. The integer
part is critical, as it must be correctly set to avoid any overflow in fixed
point computation. The fractional part, on the other hand, determines
the computational accuracy of the algorithm, as any truncation of data
appears as quantization noise. This noise, generated by throwing away
a valid part of the result after each computation, propagates into
subsequent computations in the implementation.
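Step S3 can be sketched in C++ as shown below. This is one common rule of thumb for sizing the integer part from a recorded range, with one bit reserved for the sign; the exact relation used in a given design flow may differ at power-of-two boundaries:

```cpp
#include <cassert>
#include <cmath>

// Integer bits n_i (including the sign bit) so that a signed Qn.m number
// covers every recorded value in [minVal, maxVal].
int integerBits(double minVal, double maxVal) {
    double mag = std::fmax(std::fabs(minVal), std::fabs(maxVal));
    if (mag < 1.0) return 1;   // a pure fraction needs only the sign bit
    return static_cast<int>(std::floor(std::log2(mag))) + 2;
}
```

For a variable that ranged over [−3.2, 3.15] this returns 3 integer bits (covering [−4, 4)), leaving m = N − 3 bits for the fraction in an N-bit word.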

The algorithm designer should always write MATLAB code for easy
translation into C/C++ and subsequent HW/SW partitioning, where the
HW implementation is in RTL Verilog and the SW components are
mapped on a DSP.

5-3. Application Example #1: Software Implementation of FFT


The fast Fourier transform (FFT) is an efficient implementation of the
discrete Fourier transform (DFT) for highly composite transform lengths
N. When N is a power of 2, the computational complexity drops from
O(N^2) for the DFT down to O(N log2 N) for the FFT. The FFT was
evidently first described by Gauss in 1805 and rediscovered in the 1960s
by Cooley and Tukey. The first stage of a radix-2 FFT algorithm is
derived below; the remaining stages are obtained by applying the same
decomposition recursively.

5-3.1. FFT Algorithm


The naive implementation of the N-point discrete Fourier transform (DFT)
involves calculating the scalar product of the sample buffer (treated as an
N-dimensional vector) with N separate basis vectors. Since each scalar
product involves N multiplications and N additions, the total time is
proportional to N^2 (in other words, it's an O(N^2) algorithm). However, it
turns out that by cleverly re-arranging these operations, one can optimize
the algorithm down to O(N log(N)), which for large N makes a huge
difference. As we have pointed out earlier, the optimized version of the
algorithm is called the fast Fourier transform (FFT).

Let's discuss an example. Suppose that we want to do a real-time Fourier
transform of one channel of CD-quality sound. That's 44k samples per
second. Suppose also that we have a 1k buffer that is being re-filled with
data 44 times per second. To generate a 1000-point Fourier transform we
would have to perform 2 million floating-point operations (1M
multiplications and 1M additions). To keep up with incoming buffers, we

would need at least 88 Mflops (floating-point operations per second).
Now, if you are lucky enough to have a 100 Mflop computer, that might
be fine, but consider that you'd be left with very little processing power to
spare. Using the FFT, on the other hand, we would perform on the order
of 2*1000*log2(1000) operations per buffer, which is more like 20,000.
This requires only about 880 kflops, far less than 1 Mflops: almost a
hundred-fold improvement in speed.
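The arithmetic behind these estimates is easy to reproduce. The helpers below are our own illustration, using the rough counts of 2N^2 operations for the direct DFT and 2N·log2(N) for the radix-2 FFT:

```cpp
#include <cassert>
#include <cmath>

// Rough floating-point operation counts per transform of length N.
long dftOps(long N) {   // N^2 multiplications + N^2 additions
    return 2 * N * N;
}
long fftOps(long N) {   // about 2*N*log2(N) operations
    return 2 * N * static_cast<long>(std::log2(static_cast<double>(N)));
}
```

For N = 1024 (the power of two closest to the 1000-point buffer above) this gives 2,097,152 operations per DFT against 20,480 per FFT: the roughly hundred-fold gap quoted in the text.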

The standard strategy to speed up an algorithm is to divide and conquer.


We have to find some way to group the terms in the equation

    V[k] = Σ_{n=0..N-1} W_N^{kn} v[n]                                   (3-4)

Let's see what happens when we separate the odd ns from the even ns (from
now on, let's assume that N is even):

    V[k] = Σ_{r=0..N/2-1} W_N^{k(2r)} v[2r] + Σ_{r=0..N/2-1} W_N^{k(2r+1)} v[2r+1]

where the twiddle factors of the even-indexed terms simplify to

    W_N^{k(2r)} = e^{-2πi·2kr/N} = e^{-2πi·kr/(N/2)} = W_{N/2}^{kr}     (3-5)

Notice an interesting thing: the two sums are nothing else but the N/2-point
Fourier transforms of, respectively, the even subset and the odd subset of
the samples. Terms with k greater than or equal to N/2 can be reduced using
another identity:

    W_{N/2}^{m+N/2} = W_{N/2}^m W_{N/2}^{N/2} = W_{N/2}^m               (3-6)

which is true because W_{N/2}^{N/2} = e^{-2πi} = cos(−2π) + i·sin(−2π) = 1.
If we start with N that is a power of 2, we can apply this subdivision
recursively until we get down to 2-point transforms.
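These twiddle-factor identities are easy to verify numerically. The helper below is our own sketch, evaluating W_N^k = e^{-2πik/N} with std::polar:

```cpp
#include <cassert>
#include <cmath>
#include <complex>

// Twiddle factor W_N^k = exp(-2*pi*i*k/N), a point on the unit circle.
std::complex<double> W(int N, int k) {
    const double pi = 3.14159265358979323846;
    return std::polar(1.0, -2.0 * pi * k / N);
}
```

For instance, W(8, 6) equals W(4, 3), i.e. W_N^{k(2r)} = W_{N/2}^{kr} with N=8, k=3, r=1; and W(4, 4) equals 1, the fact used in identity (3-6).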

We can also go backwards, starting with the 2-point transform:

    V[k] = W_2^{0·k} v[0] + W_2^{1·k} v[1],    k = 0, 1                 (3-7)

The two components are:

    V[0] = W_2^0 v[0] + W_2^0 v[1] = v[0] + W_2^0 v[1]                  (3-8)
    V[1] = W_2^0 v[0] + W_2^1 v[1] = v[0] + W_2^1 v[1]

We can represent the two equations for the components of the 2-point
transform graphically using the, so called, butterfly


Fig. 5-2. Butterfly calculation

Furthermore, using the divide and conquer strategy, a 4-point transform
can be reduced to two 2-point transforms: one for even elements, one for
odd elements. The odd one will be multiplied by W_4^k. In diagram style,
this can be represented as two levels of butterflies. Notice that using the
identity W_{N/2}^n = W_N^{2n}, we can always express all the multipliers
as powers of the same W_N (in this case we choose N=4).

Fig. 5-3. Diagrammatical representation of the 4-point Fourier transform calculation

I encourage the reader to derive the analogous diagrammatical
representation for N=8. What will become obvious is that all the
butterflies have a similar form:

Fig. 5-4. Generic butterfly graph



This graph can be further simplified using this identity:

    W_N^{s+N/2} = W_N^s W_N^{N/2} = −W_N^s                              (3-9)

which is true because

    W_N^{N/2} = e^{-2πi(N/2)/N} = e^{-πi} = cos(−π) + i·sin(−π) = −1    (3-10)

Here's the simplified butterfly:

Fig. 5-5. Simplified generic butterfly

Using this result, we can now simplify our 4-point diagram.

Fig. 5-6. 4-point FFT calculation

This diagram is the essence of the FFT algorithm. The main trick is that
you don't calculate each component of the Fourier transform separately.
That would involve unnecessary repetition of a substantial number of
calculations. Instead, you do your calculations in stages. At each stage
you start with N (in general complex) numbers and "butterfly" them to
obtain a new set of N complex numbers. Those numbers, in turn, become
the input for the next stage.

The calculation of a 4-point FFT involves two stages. The input of the
first stage are the 4 original samples. The output of the second stage are
the 4 components of the Fourier transform. Notice that each stage
involves N/2 complex multiplications (or N real multiplications), N/2
sign inversions (multiplication by -1), and N complex additions. So each
stage can be done in O(N) time. The number of stages is log2N (which,
since N is a power of 2, is the exponent m in N = 2^m). Altogether, the
FFT requires on the order of O(N logN) calculations. Moreover, the
calculations can be done in-place, using a single buffer of N complex
numbers. The trick is to initialize this buffer with appropriately scrambled
samples. For N=4, the order of samples is v[0], v[2], v[1], v[3]. In
general, according to our basic identity, we first divide the samples into
two groups, even ones and odd ones. Applying this division recursively,
we split these groups of samples into two groups each by selecting every
other sample. For instance, the group (0, 2, 4, 6, 8, 10, ... 2N-2) will be
split into (0, 4, 8, ...) and (2, 6, 10, ...), and so on. If you write these
numbers in binary notation, you'll see that the first split (odd/even) is
done according to the lowest bit; the second split is done according to the
second lowest bit, and so on. So if we start with the sequence of, say, 8
consecutive binary numbers:

000, 001, 010, 011, 100, 101, 110, 111

we will first scramble them like this:

[even] (000, 010, 100, 110), [odd] (001, 011, 101, 111)

then we'll scramble the groups:

((000, 100), (010, 110)), (001, 101), (011, 111))

which gives the result:

000, 100, 010, 110, 001, 101, 011, 111

This is equivalent to sorting the numbers in bit-reversed order: if you
reverse the bits in each number (for instance, 110 becomes 011, and so on),
you'll get a set of consecutive numbers. So this is how the FFT algorithm
works (more precisely, this is the decimation-in-time in-place FFT
algorithm).
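The scrambling described above is just bit reversal of the sample indices. A minimal stand-alone sketch of it follows (our own illustration; the full listing below builds the same mapping incrementally with an "add 1 backwards" trick):

```cpp
#include <cassert>

// Reverse the lowest 'bits' bits of x, e.g. 110 -> 011.
unsigned bitReverse(unsigned x, int bits) {
    unsigned r = 0;
    for (int b = 0; b < bits; ++b) {
        r = (r << 1) | (x & 1u);  // shift result left, pull in next bit of x
        x >>= 1;
    }
    return r;
}
```

Applied to 0..7 with bits = 3, it yields 0, 4, 2, 6, 1, 5, 3, 7: exactly the scrambled sequence 000, 100, 010, 110, 001, 101, 011, 111 derived above.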


1. Select N that is a power of two. You'll be calculating an N-point FFT.
2. Gather your samples into a buffer of size N
3. Sort the samples in bit-reversed order and put them in a complex N-
point buffer (set the imaginary parts to zero)
4. Apply the first stage butterfly using adjacent pairs of numbers in the
buffer
5. Apply the second stage butterfly using pairs that are separated by 2
6. Apply the third stage butterfly using pairs that are separated by 4
7. Continue butterflying the numbers in your buffer until you get to
separation of N/2
8. The buffer will contain the Fourier transform

5-3.2. FFT Software Code


Now, we will start the software implementation (in C++ Language) by
initializing some data structures and pre-computing some constants in the
constructor of the FFT object.

#include <complex>
#include <cmath>
// Use the standard complex library
typedef std::complex<double> Complex;
// PI is used below but was not defined in the original listing
static const double PI = 3.14159265358979323846;
// FFT Constructor
FFT::FFT (int Points, long sampleRate)
: _Points (Points), _sampleRate (sampleRate)
{
_sqrtPoints = sqrt((double)_Points);
// calculate binary log
_logPoints = 0;
Points--;
while (Points != 0)
{
Points >>= 1;
_logPoints++;
}
// This is where the original samples will be stored
_aTape = new double [_Points];

for (int i = 0; i < _Points; i++)
    _aTape[i] = 0;
// This is the in-place FFT buffer
_X = new Complex [_Points];
// Precompute complex exponentials for each stage
_W = new Complex * [_logPoints+1];

int _2_l = 2;
for (int l = 1; l <= _logPoints; l++)
{
_W[l] = new Complex [_Points];

for ( int i = 0; i < _Points; i++ )


{
double re = cos (2. * PI * i / _2_l);
double im = -sin (2. * PI * i / _2_l);
_W[l][i] = Complex (re, im);
}
_2_l *= 2;
}
// prepare bit-reversed mapping
_aBitRev = new int [_Points];
int rev = 0;
int halfPoints = _Points/2;
for (int i = 0; i < _Points - 1; i++)
{
_aBitRev[i] = rev;
int mask = halfPoints;
// add 1 backwards
while (rev >= mask)
{
rev -= mask; // turn off this bit
mask >>= 1;
}
rev += mask;
}
_aBitRev [_Points-1] = _Points-1;
}
The FFT buffer is filled with samples from the "tape" in bit-reversed
order:

for (int i = 0; i < _Points; i++) PutAt (i, _aTape[i]);

The bit reversal is done inside PutAt, which also converts real samples
into complex numbers (with the imaginary part set to zero):

void FFT::PutAt (int i, double val)
{
    _X [_aBitRev[i]] = Complex (val);
}


The calculation of the FFT is relatively simple


// Butterfly spreads for an 8-point FFT (samples 0..7):
//   level 1: step 1, increm 2, multiplier  W2^0
//            j = 0: pairs (0,1) (2,3) (4,5) (6,7)
//   level 2: step 2, increm 4, multipliers W4^0, W4^1
//            j = 0: pairs (0,2) (4,6);  j = 1: pairs (1,3) (5,7)
//   level 3: step 4, increm 8, multipliers W8^0 .. W8^3
//            j = 0..3: pairs (0,4) (1,5) (2,6) (3,7)
//////////////////////////////////////////////////////////////////////////////
void FFT::Transform ()
{
// step = 2 ^ (level-1)
// increm = 2 ^ level;
int step = 1;
for (int level = 1; level <= _logPoints; level++)
{
int increm = step * 2;
for (int j = 0; j < step; j++)
{
// U = exp ( -2 PI i j / 2^level )
Complex U = _W [level][j];
for (int i = j; i < _Points; i += increm)
{
// in-place butterfly
// Xnew[i] = X[i] + U * X[i+step]
// Xnew[i+step] = X[i] - U * X[i+step]
Complex T = U;
T *= _X [i+step];
_X [i+step] = _X[i];
_X [i+step] -= T;

_X [i] += T;
}
}
step *= 2;
}
}////////////////////////////////////////////////////////////////////////////////////////////////

The variable step is the "spread" of the butterfly: the distance between
the two inputs of the butterfly. It starts with 1 and doubles at every
level. At each level we have to calculate step bunches of butterflies, each
bunch consisting of butterflies separated by the distance of increm
(increm is twice the step). The calculation is organized into bunches
because each bunch shares the same multiplier W.

5-4. Application Example #2: GSM Receiver


A typical digital communication system for voice, such as a GSM (Global
System for Mobile communications) phone, executes a combination of
algorithms of various types. A system-level representation of these
algorithms, for a real-time GSM receiver, is shown in figure 5-7.

Fig. 5-7. Schematic diagram of the GSM system.


5-4.1. Algorithm
These algorithms fall into the following categories.
 Code intensive algorithms. These do not have repeated code. This
category consists of code for phone book management, keyboard
interface, GSM protocol stack, and the code for configuring different
devices in the system.
 Structured and computationally intensive algorithms. These mostly
take loops in software and are excellent candidates for hardware
mapping. These algorithms consist of digital up and down conversion,
demodulation and synchronization loops, and forward error correction
(FEC).
 Code intensive and computationally intensive algorithms. These lack
any regular structure. Although their implementations do have loops,
they are code intensive. Speech compression is an example.

The GSM phone is an interesting example of a modern electronic device, as
most such devices implement applications that comprise these three types
of algorithm. Some examples are DVD players, digital cameras and medical
diagnostic systems.
There exist several implementation options, namely: ASIC, FPGA and
DSP. The performance versus flexibility tradeoff is shown in figure 5-8.

Fig. 5-8. Efficiency versus flexibility tradeoff when selecting a design option


It is interesting to note that, in many high-end systems, usually all the
design options are exercised. The code intensive part of the application
can be mapped on general-purpose processors (GPPs), non-structured signal
processing algorithms are mapped on DSPs, and structured algorithms are
mapped on FPGAs,
decisions are very critical once the system proceeds along the design
cycle. The decisions are especially significant for the blocks that are
partitioned for hardware. The designer takes the high level design and
starts implementing the hardware. The digital design is implemented at
RTL level and then it is synthesized and tools translate the code to gate
level. The synthesized design is physically placed and routed.

As the design goes along the design cycle, the details become very complex
to comprehend and change. The designer at every stage has to make
decisions, but as the design moves further along the cycle these decisions
The relationship between the impact of the design decision and its
associated complexity is shown in figure 5-9.

Fig. 5-9. Digital diagram hierarchy.


5-4.2. System Level Design
We start with the system-level design, using a system design tool, such as
MATLAB or SystemC. The design cycle for implementing a signal
processing application starts with the required specification, followed by
the design of an algorithm. Let us consider an example that elaborates on
the rationale of mapping a communication system on a hybrid platform.
The system implements a 512 Kbps BPSK/QPSK (phase shift keying)
modem.

After the development of an algorithm in MATLAB, the code is profiled.


The algorithm consists of various components. The computation and
storage requirements of each component along with inter component
communication are analyzed. The following is a list of operations that the
digital receiver performs.

 Analog-to-digital conversion (ADC) of an IF signal at 70 MHz at the receiver (Rx), using band-pass sampling.
 Digital-to-analog conversion (DAC) of an IF signal at 24.5 MHz at the transmitter (Tx).
 Digital down conversion of the band-pass digitized IF signal to baseband at the Rx. The baseband signal consists of four samples per symbol on each I and Q channel. For 512 kbps this makes 2048 ksps (kilo samples per second) on both channels.
 Digital up conversion of the baseband signal from 2048 ksps at both I and Q to 80 Msps at the Tx.
 Digital demodulator processing 1024K complex samples per second. The algorithm at the Rx consists of: start-of-burst detection, header removal, frequency and timing loops, and slicer.
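The rate budget in the list above follows directly from the QPSK bit-to-symbol mapping. A quick check (a Python sketch rather than the chapter's MATLAB; the constants are taken from the text):

```python
# Sample-rate budget for the 512 kbps QPSK burst modem described above.
bit_rate = 512_000          # bps, from the specification
bits_per_symbol = 2         # QPSK carries 2 bits per symbol
samples_per_symbol = 4      # as stated for the baseband I/Q signal

symbol_rate = bit_rate // bits_per_symbol               # symbols per second
samples_per_channel = symbol_rate * samples_per_symbol  # sps on I (and on Q)

print(symbol_rate)          # 256000 symbols/s
print(samples_per_channel)  # 1024000 samples/s per channel
```

This is why the demodulator processes 1024K complex samples per second: one complex sample pairs one I sample with one Q sample.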

In a burst modem, the receiver starts in the burst-detection state, in which the system executes the start-of-burst detection algorithm. The output of the corrected signal is passed to the slicer, which makes the soft and hard decisions. For forward error correction (FEC), the system implements a Viterbi algorithm to correct the bit errors in the slicer soft-decision output and generates the final bits. The algorithm is partitioned to be mapped on different components on the basis of the computations required for implementation. The following figure depicts how the system is effectively partitioned and implemented.


Fig. 5-10. System-level co-design (software and Hardware co-design)

As shown in figure 5-10, a DSP is used for mapping computationally intensive and non-regular algorithms in the receiver. These algorithms primarily consist of the demodulator and the carrier and timing recovery loops. ASICs are used for the ADC, DAC, digital down conversion (DDC) and digital up conversion (DUC). A direct digital frequency synthesis (DDFS) chip is used for cosine generation that mixes with the baseband signal.
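The hard-decision step performed by the slicer can be sketched as follows. This is a minimal Python illustration (not the book's implementation), assuming Gray-coded QPSK with one bit decided independently from the sign of each axis:

```python
def qpsk_hard_decision(z):
    """One plausible hard-decision slicer for Gray-coded QPSK: each bit is
    decided independently from the sign of the I or Q component of the
    received complex sample z."""
    i_bit = 1 if z.real >= 0 else 0
    q_bit = 1 if z.imag >= 0 else 0
    # Nearest constellation point (unit-energy QPSK, +/-0.707 per axis)
    point = complex(0.707 if i_bit else -0.707,
                    0.707 if q_bit else -0.707)
    return (i_bit, q_bit), point

# A noisy sample near (+0.707, -0.707) slices to that constellation point:
bits, point = qpsk_hard_decision(0.6 - 0.8j)
print(bits, point)   # (1, 0) (0.707-0.707j)
```

A soft-decision slicer would instead pass scaled I/Q amplitudes to the Viterbi decoder rather than single bits.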

Fig. 5-11. System-level design and system partitioning

5-4.3. Coding Level


DSP applications are usually programmed in the same languages as other science and engineering tasks, such as C/C++, BASIC and assembly. The power and versatility of C/C++ make it the language of choice for designers and other programmers. However, the programming of concurrent and real-time applications is usually started in hardware description languages (HDLs), such as VHDL/VHDL-AMS (or Verilog and SystemVerilog).

An example of MATLAB code that works on chunks of data for simple baseband modulation is shown below. The top-level module sets the user parameters and calls the initialization functions and the main modulation function. The processing is done on a chunk-by-chunk basis.

% BPSK = 1, QPSK = 2, 8PSK = 3, 16QAM = 4
% All user-defined parameters are set in structure USER_PARAMS
USER_PARAMS.MOD_SCH = 2;      % select QPSK for current simulation
USER_PARAMS.CHUNK_SZ = 256;   % set buffer size for chunk-by-chunk processing
USER_PARAMS.NO_CHUNKS = 100;  % set number of chunks for simulation
% generate raw data for simulation
raw_data = randint(1, USER_PARAMS.NO_CHUNKS*USER_PARAMS.CHUNK_SZ);
% Initialize user-defined and system-defined parameters and states in
% respective structures
PARAMS = MOD_Params_Init(USER_PARAMS);
STATES = MOD_States_Init(PARAMS);
mod_out = [];
% Code should be structured to process data on a chunk-by-chunk basis
for iter = 0:USER_PARAMS.NO_CHUNKS-1
    in_data = raw_data(iter*USER_PARAMS.CHUNK_SZ+1 : USER_PARAMS.CHUNK_SZ*(iter+1));
    [out_sig, STATES] = Modulator(in_data, PARAMS, STATES);
    mod_out = [mod_out out_sig];
end

The parameter initialization function sets all the parameters for the
modulator.

% Initializing the user-defined parameters and system design parameters
% in PARAMS
function PARAMS = MOD_Params_Init(USER_PARAMS)
% Structure for transmitter parameters
PARAMS.MOD_SCH = USER_PARAMS.MOD_SCH;
PARAMS.SPS = 4;  % Samples per symbol
% Create a root raised-cosine pulse-shaping filter
PARAMS.Nyquist_filter = rcosfir(.5, 5, PARAMS.SPS, 1);
% Bits per symbol; in this case bits per symbol is same as mod scheme
PARAMS.BPS = USER_PARAMS.MOD_SCH;
% Lookup tables for BPSK, QPSK, 8-PSK and 16-QAM using Gray coding
BPSK_Table = [(-1 + 0*j) (1 + 0*j)];
QPSK_Table = [(-.707 - .707*j) (-.707 + .707*j) ...
              (.707 - .707*j) (.707 + .707*j)];
PSK8_Table = [(1 + 0i) (.7071 + .7071i) (-.7071 + .7071i) (0 + 1i) ...
              (-1 + 0i) (-.7071 - .7071i) (.7071 - .7071i) (0 - 1i)];
QAM_Table = [(-3 - 3*j) (-3 - 1*j) (-3 + 3*j) (-3 + 1*j) (-1 - 3*j) ...
             (-1 - 1*j) (-1 + 3*j) (-1 + 1*j) (3 - 3*j) (3 - 1*j) ...
             (3 + 3*j) (3 + 1*j) (1 - 3*j) (1 - 1*j) (1 + 3*j) (1 + 1*j)];
% Constellation selection according to bits per symbol
if (PARAMS.BPS == 1)
    PARAMS.const_Table = BPSK_Table;
elseif (PARAMS.BPS == 2)
    PARAMS.const_Table = QPSK_Table;
elseif (PARAMS.BPS == 3)
    PARAMS.const_Table = PSK8_Table;
elseif (PARAMS.BPS == 4)
    PARAMS.const_Table = QAM_Table;
end

Similarly, the state initialization function sets all the states for the modulator for chunk-by-chunk processing of data. For this simple example, the symbols are first zero padded and then passed through a Nyquist filter. The delay line for the filter is initialized in this function. For more complex applications this function may have many more arrays.

function STATES = MOD_States_Init(PARAMS)
% Pulse-shaping filter delay line
STATES.filter_delayline = zeros(1, length(PARAMS.Nyquist_filter)-1);

Finally, the actual modulation function performs the modulation on a block of data.

function [out_data, STATES] = Modulator(in_data, PARAMS, STATES)
% Bits-to-symbols conversion
sym = reshape(in_data, PARAMS.BPS, length(in_data)/PARAMS.BPS)';
% Binary-to-decimal conversion
sym_decimal = bi2de(sym);
% Bit-to-symbol mapping
const_sym = PARAMS.const_Table(sym_decimal+1);
% Zero padding for up-sampling
up_sym = upsample(const_sym, PARAMS.SPS);
% Zero-padded signal passed through the Nyquist filter
[out_data, STATES.filter_delayline] = ...
    filter(PARAMS.Nyquist_filter, 1, up_sym, STATES.filter_delayline);

This MATLAB example defines a good layout for implementing a real-time signal processing application for subsequent mapping in software and hardware. In complex applications some of the MATLAB functions may be mapped in SW while others are mapped in HW. It is important to keep this aspect of system design in mind from inception: divide the implementation into several components and, for each component, group its parameters and states in different structures.
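The key idea behind the STATES structure, carrying the filter delay line across chunk boundaries, can be demonstrated with a small Python sketch (mirroring MATLAB's filter(b, 1, x, zi) behavior for the FIR case; the taps and data below are toy values):

```python
def fir_filter(b, x, zi):
    """Direct-form FIR filter: returns (y, zf), analogous to MATLAB's
    filter(b, 1, x, zi). zi/zf is the delay line of length len(b)-1,
    holding the most recent past input samples."""
    state = list(zi)
    y = []
    for sample in x:
        acc = b[0] * sample + sum(bk * s for bk, s in zip(b[1:], state))
        state = [sample] + state[:-1]   # shift the delay line
        y.append(acc)
    return y, state

b = [0.25, 0.5, 0.25]                  # toy pulse-shaping filter
x = [1.0, 0.0, -1.0, 2.0, 0.5, -0.5]   # toy sample stream

# One-shot filtering of the whole stream:
y_all, _ = fir_filter(b, x, [0.0, 0.0])

# Chunk-by-chunk filtering with the delay line carried across calls:
state = [0.0, 0.0]
y_chunks = []
for chunk in (x[:3], x[3:]):
    y, state = fir_filter(b, chunk, state)
    y_chunks += y

print(y_all == y_chunks)   # True: carrying the state removes chunk edges
```

Without the carried state, each chunk boundary would restart the filter from zeros and corrupt len(b)-1 output samples per chunk.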


5-4.4. Hardware Implementation


Although there are many applications that are commonly mapped on field programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs), the primary focus of this section is to explore DSP architectures and design techniques that implement real-time signal processing systems. An important application of DSP is in communication systems. For example, a digital receiver for a G.703-compliant E3 microwave link needs to demodulate 34.368 Mbps of information modulated using binary phase shift keying (BPSK) or quadrature phase shift keying (QPSK). The system further requires forward error correction (FEC) decoding and decompression, along with several other auxiliary operations, in real time.

There are primarily two common methods to represent and specify DSP algorithms: language-driven executable description, and graphics or flow-graph driven specification. The language-driven methods are used for software development, with high-level languages used to code the algorithms. The languages are either interpreted or compiled. MATLAB is an example of an interpretive language, and signal processing algorithms are usually coded in MATLAB. As the code is interpreted line by line, execution of the code has considerable overheads; to reduce these it is always best to write compact MATLAB code (.m files). For computationally intensive algorithms, the designer usually prefers to write the code in C/C++ (or generate it from MATLAB). As in these languages the code is compiled for execution, the executable runs much faster than its equivalent MATLAB simulation.

Fig. 5-12. Example of a digital receiver block diagram



Alternatively, we can use the MATLAB HDL generators, or the so-called DSP Builder (from Altera), to generate HDL code from within MATLAB itself (via Simulink). In fact, Simulink contains a code generator, called Simulink Coder. Simulink Coder (formerly Real-Time Workshop) generates and executes C and C++ code from Simulink diagrams, Stateflow charts, and MATLAB functions. The generated source code can be used for real-time and non-real-time applications, rapid prototyping, and hardware testing.

Fig. 5-13. Generation of HDL from MATLAB with DSP Builder, from Altera.


5-4.5. Development Cycle


Programming for DSP targets without a proper design flow can lead to significant inefficiency. The basic DSP design steps include developing a behavioral model of the DSP algorithm, simulating the structural design on a desktop PC, compiling the code for a target DSP, and finally downloading and testing the design on DSP hardware. The last stage of hardware-based testing can be further divided into hardware co-simulation and testing with real-world I/O.

Fig. 5-13. DSP Development cycle.


5-5. Application Example #3: 100 Gbps DP-QPSK System


The combination of polarization multiplexing and quadrature phase-shift keying (PM-QPSK or DP-QPSK) is emerging as one of the most promising solutions to reach bit rates of 100 Gbps and higher. At the receiver end, the use of digital signal processing (DSP) yields a significant implementation improvement over the traditional approach. This application note shows a practical design of a 100 Gbps DP-QPSK transmission system using coherent detection with digital signal processing for distortion compensation.

Applications of this system include:

 Optical network backbone (to replace N*10G LAG).
 Data center networking and enterprise computing.
 100G Ethernet transport networks.

5-5.1. System layout

Fig. 5-14. 100 Gbps DP-QPSK Layout.

5-5.2. Simulation Description


The 100 Gbps DP-QPSK system can be divided into five main parts: DP-
QPSK Transmitter, Transmission Link, Coherent Receiver, Digital Signal
Processing, and Detection & Decoding (which is followed by direct-
error-counting). The signal is generated by an optical DP-QPSK
Transmitter, and is then propagated through the fiber loop where
dispersion and polarization effects occur. It then passes through the
Coherent Receiver and into the DSP for distortion compensation. The
fiber dispersion is compensated using a simple transversal digital filter,
and the adaptive polarization demultiplexing is realized by applying the
constant-modulus algorithm (CMA). A modified Viterbi-and-Viterbi phase estimation algorithm (working jointly on both polarizations) is then used to compensate for phase and frequency mismatch between the
transmitter and local oscillator (LO). After the digital signal processing is
complete, the signal is sent to the detector and decoder, and then to the
BER Test Set for direct-error-counting. The following shows an image of
the optical spectrum of the 100 Gbps DP-QPSK signal after the
transmitter, as well as the RF spectrum obtained after the Coherent DP-
QPSK Receiver.

Fig. 5-15. Optical spectrum of the 100 Gbps DP-QPSK signal after the transmitter, and RF spectrum obtained after the coherent DP-QPSK receiver.

The algorithms used for digital signal processing are implemented through a MATLAB component. By setting the MATLAB component to debug mode, the generated electrical constellation diagrams after each step (CD compensation, polarization demultiplexing, and carrier phase estimation) are shown here:

Fig. 5-15. Constellation diagrams of the 100 Gbps DP-QPSK signal
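The carrier phase estimation step can be illustrated with a minimal sketch of the (unmodified) Viterbi-and-Viterbi estimator for QPSK, in Python with toy data; a real receiver would additionally resolve the pi/2 phase ambiguity and track the estimate over sliding blocks:

```python
import cmath

def vv_phase_estimate(samples):
    """Viterbi-and-Viterbi carrier-phase estimate for QPSK: raising each
    sample to the 4th power strips the data modulation, so the angle of
    the summed 4th-power samples is four times the carrier phase offset
    (up to a pi/2 ambiguity, resolved elsewhere in a real receiver)."""
    acc = sum(z ** 4 for z in samples)
    return cmath.phase(acc) / 4.0

# QPSK symbols on the axes, rotated by a 0.2 rad carrier phase offset:
offset = 0.2
symbols = [1, 1j, -1, -1j] * 8
received = [s * cmath.exp(1j * offset) for s in symbols]

est = vv_phase_estimate(received)
print(round(est, 3))   # 0.2
```

The estimate is then removed by multiplying the samples by exp(-j*est) before slicing.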


5-6. DSP Selection Guide


DSP processors find use in an extremely diverse array of applications, from radar systems to consumer electronics. Naturally, no one processor can meet the needs of all or even most applications. Therefore, the first task for the designer selecting a DSP processor is to weigh the relative importance of performance, cost, integration, ease of development, power consumption, and other factors for the application at hand. Here we will briefly touch on the needs of just a few classes of DSP applications.

In terms of dollar volume, the biggest applications for digital signal processors are inexpensive, high-volume embedded systems, such as cellular telephones, disk drives (where DSPs are used for servo control), and portable digital audio players. In these applications, cost and integration are paramount. For portable, battery-powered products, power consumption is critical. Ease of development is usually less important; even though applications typically involve the development of custom software to run on the DSP and custom hardware surrounding the DSP, the huge production volumes justify expending extra development effort.
Another important class of applications involves processing large volumes of data with complex algorithms for specialized needs. In addition, the type of native arithmetic used in the processor is one of the most fundamental characteristics of a DSP. Most DSPs use fixed-point arithmetic, where numbers are represented as integers or as fractions in a fixed range (usually -1.0 to +1.0). Other processors use floating-point arithmetic, where values are represented by a mantissa and an exponent. Digital filter selection is then a trade-off between floating-point DSPs running IIR filters and fixed-point DSPs running FIR filters, as illustrated in the digital filter decision tree shown in the following figure.
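The difference between the two arithmetic types can be made concrete with a Q15 fixed-point multiply, the fractional format commonly used on 16-bit fixed-point DSPs (a Python sketch; the Q15 scaling is standard, the sample values are arbitrary):

```python
def q15(x):
    """Quantize a float in [-1, 1) to Q15 fixed point (16-bit signed):
    the value is stored as round(x * 2^15)."""
    return max(-32768, min(32767, round(x * 32768)))

def q15_mul(a, b):
    """Q15 multiply: the 16x16 -> 32-bit product sits in Q30 format,
    so it is shifted right by 15 to return to Q15."""
    return (a * b) >> 15

a, b = 0.5, -0.25
prod_fixed = q15_mul(q15(a), q15(b))
prod_float = a * b

print(prod_fixed)                       # -4096, i.e. -0.125 in Q15
print(prod_fixed / 32768, prod_float)   # both -0.125 here
```

The shift in q15_mul is where truncation noise enters on a fixed-point DSP; a floating-point DSP keeps a mantissa/exponent pair instead and renormalizes automatically.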
Whether you decide on a fixed-point FIR or a floating-point IIR solution, the world is still analog. In many applications the conversion from analog to digital and back to analog is a requirement, often with limitations in bandwidth and design flexibility. One example is the range limitation, i.e. the maximum bandwidth imposed by the sampling rate when altering the digital filter frequency. One solution is to adjust the clock, which forces adjustments in the anti-alias and reconstruction filters and therefore requires multiple fixed-frequency or programmable filters (typically not cost effective). Another approach is to adjust the effective rate within the DSP by decimation or interpolation, so that the filter shape can be modified within the filter algorithm. This is called multi-rate filtering, and several decimations can be implemented in series to reach very low frequencies.
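A single decimation stage of such a multi-rate filter can be sketched as follows (Python, with a crude 3-tap moving average standing in for a proper anti-alias filter; real designs use much longer low-pass FIRs):

```python
def decimate(x, m, taps):
    """Low-pass FIR filter the signal, then keep every m-th output sample.
    This is one decimation stage; several such stages can be cascaded in
    series to reach very low effective sample rates."""
    pad = [0.0] * (len(taps) - 1)       # zero initial conditions
    xp = pad + list(x)
    filtered = [sum(t * xp[n - k] for k, t in enumerate(taps))
                for n in range(len(pad), len(xp))]
    return filtered[::m]                # downsample by m

# Toy input at the high rate, decimated by 4 with a 3-tap moving average:
x = [float(n % 8) for n in range(32)]
y = decimate(x, 4, [1/3, 1/3, 1/3])
print(len(x), len(y))   # 32 8
```

Each stage divides the effective sample rate by m, so the same filter algorithm operates on a progressively narrower band.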


Fig. 5-16. DSP selection criteria

A final caution on processor speed: be careful when comparing processor speeds quoted in MOPS (millions of operations per second) or MFLOPS (millions of floating-point operations per second), because different processor vendors have different ideas of what constitutes an operation.

Fig. 5-16. Execution times for a 256-point complex FFT, in microseconds


(lower is better).


5-7. Summary
The introduction of the microprocessor in the late 1970s and early 1980s made it possible for DSP techniques to be used in a much wider range of applications. However, the initial generations of general-purpose microprocessors did not fulfill the requirements of DSP. A Digital Signal Processor (DSP) is a special-purpose CPU (Central Processing Unit) that provides ultra-fast instruction sequences. DSPs are microprocessors specifically designed to handle DSP tasks, which are carried out by intensive mathematical operations.

This chapter discusses several DSP applications. DSP technology is


nowadays commonplace in such devices as mobile phones, multimedia
computers, video recorders, CD players, hard disc drive controllers and
modems, and will soon replace analog circuitry in TV sets and
telephones.


The DSPs are capable of sequencing and reproducing hundreds to


thousands of discrete elements, such that large hardware structures can be
simulated at relatively low cost. DSP techniques can perform functions
such as Fast-Fourier Transforms (FFT), delay equalization,
programmable gain, modulation, encoding/decoding, and filtering.

There are two types of DSPs and two types of math:

(1) Fixed-Point DSPs and FIR (Finite Impulse Response) Implementations
Fixed-point DSPs account for many DSP applications because of their smaller size and lower cost. FIR filters on these devices are typically realized in the Direct Form-I structure.

(2) Floating-Point DSPs and IIR (Infinite Impulse Response) Implementations
As the name implies, floating-point DSPs can perform floating-point math, which greatly decreases truncation noise problems and allows more complicated filter structures, such as the inclusion of both poles and zeros. This permits the approximation of many waveforms or transfer functions that can be expressed as an infinite recursive series. These implementations are referred to as Infinite Impulse Response (IIR) filters. The functions are infinitely recursive because they use previously calculated values in future calculations, i.e. feedback in hardware systems.
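A one-pole low-pass filter is the smallest example of this recursion: the previously calculated output is fed back, so a single input impulse produces an (in principle) infinite response. A Python sketch with arbitrary coefficients:

```python
def one_pole_iir(x, a=0.9, b=0.1):
    """y[n] = b*x[n] + a*y[n-1]: the feedback of the previously computed
    output y[n-1] is what makes the impulse response infinite."""
    y, prev = [], 0.0
    for sample in x:
        prev = b * sample + a * prev
        y.append(prev)
    return y

# Impulse response: a geometric (infinitely recursive) series b * a^n
h = one_pole_iir([1.0] + [0.0] * 5)
print([round(v, 4) for v in h])   # [0.1, 0.09, 0.081, 0.0729, 0.0656, 0.059]
```

On a fixed-point DSP the repeated feedback would accumulate truncation noise at every step, which is why floating-point devices are preferred for IIR structures.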


5-8. Problems

5-1) Explain, briefly, the main functions of a DSP

5-2) Special-purpose processors can be subdivided into many different groups, which are ________
a. Microcontrollers
b. digital signal processors (DSPs)
c. Graphics processing units (GPUs).
d. All of the above

5-3) Assuming a real-time system that processes samples at a sampling rate f = 10 MHz, with an overhead time to = 20 ns per sample, select the most appropriate implementation among the following:
a. A processor running at 500 MHz, requiring 100 cycles at a cost of 50$
b. An FPGA running at 200 MHz, requiring 10 cycles at a cost of 60$
c. A DSP running at 500 MHz, requiring 20 cycles at a cost of 100$
d. An ASIC running at 2 GHz, requiring 20 cycles at a cost of 500$


5-9. References

[1] G. E. Moore, "Cramming more components onto integrated circuits," Electronics, vol. 38, no. 8, April 1965.

[2] ANSI/IEEE, Standard 754-1985: "Binary floating point arithmetic," 1985.

[3] V. C. Hamachar, Z. G. Vranesic and S. G. Zaky, Computer Organization, McGraw-Hill, 1990.

[4] R. J. Higgins, Digital Signal Processing in VLSI, Englewood Cliffs, NJ: Prentice Hall, 1990.

[5] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Macmillan Publishing, New York, NY, 1992.

[6] A. V. Oppenheim, R. W. Schafer and J. R. Buck, Discrete-Time Signal Processing, 2nd edn, Prentice Hall, 1999.

[7] A. V. Oppenheim, A. S. Willsky, and S. H. Nawab, Signals & Systems, Prentice-Hall, Inc., 1996.

[8] Buyer's Guide to DSP Processors, Berkeley, California: Berkeley Design Technology, Inc., 1994, 1995, 1997, 2000.

[9] R. Chassaing, DSP Applications Using C and the TMS320C6x DSK, Wiley, NY, ISBN 0471207543, 2002.

[10] R. C. Dorf, Computers, Software Engineering, and Digital Devices, CRC Press, 2005.

[11] C. J. Hsu, "Dataflow integration and simulation techniques for DSP system design tools," PhD thesis, University of Maryland, College Park, USA, 2007.

[12] M. Hammes, C. Kranz and D. Seippel, "Deep submicron CMOS technology enables system on chip for wireless communications ICs," IEEE Communications Magazine, vol. 46, pp. 151-161, 2008.

Analog and Digital Signal Processing Appendices

In this book we refer to a number of signal properties and transforms, and the reader is assumed to have at least some prior knowledge of them. We start with the properties of continuous-time signals and systems and the transforms that apply to them, and then continue with discrete-time signals and their transforms. For instance, there exist four forms of the Fourier transform, namely:

 Fourier Series (FS), which transforms an infinite periodic time signal into an infinite discrete frequency spectrum.
 Fourier Integral (FIT, or simply FT), which transforms an infinite continuous time signal into an infinite continuous frequency spectrum.
 Discrete Fourier Transform (DFT), which transforms a discrete periodic time signal into a discrete periodic frequency spectrum.
 Fast Fourier Transform (FFT), which is a computer algorithm for calculating the DFT.

It is not the intention of this book to teach the topic of signals and systems or the topic of transforms. However, we present here a brief description to remind readers of these topics. If you do not know what the Laplace transform or the Fourier transform is, it is highly recommended that you use these appendices as a quick guide.

Appendix A: Continuous-time Signals and their Properties


Appendix B: Laplace Transform (LT) of Periodic Signals
Appendix C: Fourier Transform (FT) & Inverse FT
Appendix D: Discrete-time Signals and their Properties
Appendix E: Discrete Fourier Transform (DFT) & Inverse DFT
Appendix F: Fast Fourier Transform (FFT)
Appendix G: Z-Transform (ZT) & Inverse ZT
Appendix H: Modified Z-Transform (MZT)
Appendix I: Star Transform (ST)
Appendix J: Bilinear Transform (BT)
Appendix K: FFT Code in BASIC
Appendix L: MATLAB & Simulink


Continuous-Time
Signals & Systems
A system is considered continuous if the signal exists for all time. The terms "analog" and "continuous" are frequently employed interchangeably, although they are not strictly the same. Assume a continuous-time system with impulse response h(t), as shown in the following figure. The system processes the continuous-time input signal x(t) in some way, and produces an output y(t). At any value of time, t, we input a value of x(t) into the system and the system produces an output value for y(t). For instance, if we have a multiplier system whose input/output relation is given by y(t) = 7 x(t), and the input is x(t) = sin(wt), then the output is y(t) = 7 sin(wt).

Illustration of a continuous-time system.

There exists a specific category of continuous-time systems called linear time-invariant (LTI) systems, which have interesting properties.

Properties of Linear Time-Invariant (LTI) Systems


A system is considered to be an LTI system if it satisfies the requirements
of time-invariance and linearity. This book will only consider linear
systems. The LTI systems have the following properties:

1- Additivity
A system satisfies the property of additivity, if a sum of inputs results in
a sum of outputs. By definition: an input of x3(t) = x1(t) + x2(t) results in
an output of y3(t) = y1(t) + y2(t).


2- Homogeneity
A system satisfies the condition of homogeneity if an input scaled by a certain factor produces an output scaled by that same factor. By definition: an input of a.x1 results in an output of a.y1.

3- Linearity
A system is considered linear if it satisfies the conditions of additivity and homogeneity.

4- Causality
A simple definition of a causal system is a system in which the output does not change before the input. A more formal definition of a causal system is a system whose output depends only on past or current inputs. A system is called non-causal if its output depends on future inputs. This book will only consider causal systems, because they are easier to work with and understand, and because most practical systems are causal in nature.

5- Memory
A system is said to have memory if the output from the system depends on past inputs (or future inputs!) to the system. A system is called memoryless if the output depends only on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications.

6- Time-Invariance
A system is called time-invariant if the relationship between the input and output signals does not depend on the passage of time. If the input signal x(t) produces an output y(t), then any time-shifted input x(t + δ) results in a time-shifted output y(t + δ). This property is satisfied if the transfer function of the system is not a function of time, except as expressed by the input and output.


Laplace Transform

The Laplace transform converts an equation from the time domain into the so-called s-domain, or the Laplace domain, or even the complex domain. These are all different names for the same mathematical space, and they all may be used in this book. The transform is defined as follows:

F(s) = L{f(t)} = ∫_0^∞ f(t) e^(-st) dt     (B-1)

Laplace transform results have been tabulated extensively. The following


table includes the Laplace transform of famous functions.

Table B1. Laplace Transforms:

Time Domain Function      Laplace Transform
δ(t)                      1
δ(t − a)                  e^(−as)
u(t)                      1/s
u(t − a)                  e^(−as)/s
t u(t)                    1/s^2
t^n u(t)                  n!/s^(n+1)
e^(at) u(t)               1/(s − a)
t^n e^(at) u(t)           n!/(s − a)^(n+1)
cos(ωt) u(t)              s/(s^2 + ω^2)
sin(ωt) u(t)              ω/(s^2 + ω^2)
cosh(ωt) u(t)             s/(s^2 − ω^2)
sinh(ωt) u(t)             ω/(s^2 − ω^2)
e^(at) cos(ωt) u(t)       (s − a)/((s − a)^2 + ω^2)
e^(at) sin(ωt) u(t)       ω/((s − a)^2 + ω^2)

Properties of the Laplace Transform

Property                    Definition
Linearity                   L{a f(t) + b g(t)} = a F(s) + b G(s)
Time differentiation        L{df(t)/dt} = s F(s) − f(0)
Frequency differentiation   L{t f(t)} = −dF(s)/ds
Frequency integration       L{f(t)/t} = ∫_s^∞ F(u) du
Time integration            L{∫_0^t f(τ) dτ} = F(s)/s
Scaling                     L{f(at)} = (1/a) F(s/a), for a > 0
Initial value theorem       f(0+) = lim_(s→∞) s F(s)
Final value theorem         f(∞) = lim_(s→0) s F(s)
Frequency shift             L{e^(at) f(t)} = F(s − a)
Time shift                  L{f(t − a) u(t − a)} = e^(−as) F(s)
Convolution theorem         L{f(t) * g(t)} = F(s) G(s)

where F(s) = L{f(t)}, G(s) = L{g(t)}, and s = σ + jω

Inverse Laplace Transform

The inverse Laplace transform is defined as:

f(t) = L^(-1){F(s)} = (1/2πj) ∫_(σ−j∞)^(σ+j∞) F(s) e^(st) ds     (B-2)

The inverse transform converts a function from the Laplace domain back into the time domain.

Note that F(s) may not fall into any standard form for which the inverse Laplace transform is well known. In this case we use the so-called Partial Fraction Expansion (also called Heaviside expansion) method to expand F(s) into a series of simple terms, whose inverse Laplace transforms are known. Assume

F(s) = N(s)/D(s) = K (s − z1)(s − z2)...(s − zm) / [(s − p1)(s − p2)...(s − pn)]     (B-3)

Then, we can expand F(s) in the following form:

F(s) = N(s)/D(s) = Σ_(i=1)^n a_i/(s − p_i) = a1/(s − p1) + a2/(s − p2) + ... + an/(s − pn)     (B-4)

where the coefficients a_i (i = 1, 2, ..., n) are called the residues of F(s) and the p_i are called the poles of F(s). The residues a_i can be obtained from F(s) as follows:

a_i = lim_(s→p_i) [(s − p_i) N(s)/D(s)]     (B-5)

Therefore, the inverse Laplace transform of F(s) is simply equal to:

f(t) = L^(-1){F(s)} = L^(-1){Σ_(i=1)^n a_i/(s − p_i)} = Σ_(i=1)^n a_i exp(p_i t)     (B-6)

Example:
Assume:

F(s) = (s + 5)/(s^2 + 3s + 2) = (s + 5)/[(s + 1)(s + 2)]

So, we have two poles: p1 = −1 and p2 = −2. Using the partial expansion method, equation (B-5), we get a1 = 4 and a2 = −3, so that

F(s) = 4/(s + 1) − 3/(s + 2)

Therefore: f(t) = 4 exp(−t) − 3 exp(−2t)
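The residues in this example can be checked numerically: after cancelling the factor (s − p_i) by hand, a_i is just the remaining function evaluated at the pole. A Python sketch for this specific F(s):

```python
# Residues of F(s) = (s + 5)/((s + 1)(s + 2)), per equation (B-5):
# a_i = lim_{s -> p_i} (s - p_i) F(s), with the common factor cancelled.

def F_times_s_minus_p1(s):
    """(s + 1) F(s) = (s + 5)/(s + 2), i.e. F with the pole at -1 removed."""
    return (s + 5) / (s + 2)

def F_times_s_minus_p2(s):
    """(s + 2) F(s) = (s + 5)/(s + 1), i.e. F with the pole at -2 removed."""
    return (s + 5) / (s + 1)

a1 = F_times_s_minus_p1(-1.0)   # residue at p1 = -1
a2 = F_times_s_minus_p2(-2.0)   # residue at p2 = -2
print(a1, a2)   # 4.0 -3.0
```

These match the coefficients in the expansion above, giving f(t) = 4 exp(−t) − 3 exp(−2t).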

N.B. When a pole p_i is repeated k times, it is expanded in k terms and the coefficients (a_i1, a_i2, ..., a_ik) are obtained as follows:

a_ij = [1/(k − j)!] lim_(s→p_i) d^(k−j)/ds^(k−j) [(s − p_i)^k N(s)/D(s)],  for j = 1, 2, ..., k     (B-7)


Example:
Assume:

F(s) = (s^2 + 2s + 3)/(s^3 + 3s^2 + 3s + 1) = (s^2 + 2s + 3)/(s + 1)^3

Here, we have 3 identical poles (k = 3) at p1 = −1. So, we can write F(s) as follows:

F(s) = a11/(s + 1) + a12/(s + 1)^2 + a13/(s + 1)^3

Using the partial expansion method, equation (B-7), we get a11 = 1, a12 = 0, and a13 = 2, so that:

f(t) = 1·exp(−t) + 0·t exp(−t) + (2/2!) t^2 exp(−t) = (1 + t^2) exp(−t)


Fourier Integral
Transform (FIT)
The Fourier transform (or integral transform) is very similar to the Laplace transform. The Fourier transform uses the assumption that any finite time-domain signal can be broken into an infinite sum of sinusoidal (sine and cosine) signals. Under this assumption, the Fourier transform converts a time-domain signal into its frequency-domain representation, as a function of the radial frequency ω. The Fourier transform is defined as such:

F(ω) = F{f(t)} = ∫_(−∞)^∞ f(t) e^(−jωt) dt     (C-1)

We can now show that the Fourier transform is equivalent to the Laplace transform when the following condition is true: s = jω. Because the Laplace and Fourier transforms are so closely related, it does not make much sense to use both transforms for all problems. This book, therefore, will concentrate on the Laplace transform for nearly all subjects, except those problems that deal directly with frequency values. For frequency problems, it makes life much easier to use the Fourier transform representation. Like the Laplace transform, the Fourier transform has been extensively tabulated. The following tables illustrate the Fourier transforms of famous functions as well as the properties of the Fourier transform.

Table C-1. Fourier Transforms

Time Domain            Fourier Domain
1                      2πδ(ω)
δ(t)                   1
δ(t − c)               e^(−jωc)
u(t)                   πδ(ω) + 1/(jω)
e^(−bt) u(t)           1/(jω + b)
cos(ω0 t)              π[δ(ω + ω0) + δ(ω − ω0)]
cos(ω0 t + θ)          π[e^(−jθ) δ(ω + ω0) + e^(jθ) δ(ω − ω0)]
sin(ω0 t)              jπ[δ(ω + ω0) − δ(ω − ω0)]
sin(ω0 t + θ)          jπ[e^(−jθ) δ(ω + ω0) − e^(jθ) δ(ω − ω0)]
pτ(t)                  τ sinc(ωτ/2)
τ sinc(τt/2)           2π pτ(ω)

Note: sinc(x) = sin(x)/x; pτ(t) is the rectangular pulse function of width τ

Inverse Fourier Transform (IFT)

The inverse Fourier transform is defined as follows:

f(t) = F^(-1){F(ω)} = (1/2π) ∫_(−∞)^∞ F(ω) e^(jωt) dω     (C-2)

This transform is nearly identical in form to the forward Fourier transform. Using the above equivalence, we can show that the Laplace transform is equal to the Fourier transform if the variable s is purely imaginary. However, the Laplace transform is different if s is real or complex. So, we generally define s to have both real and imaginary parts, as follows:

s = σ + jω

And we can show that s = jω, if σ = 0.

As the variable s can be broken down into two independent components, it is common practice to plot it on its own s-plane. The s-plane graphs the variable σ on the horizontal axis and the value of jω on the vertical axis.


Discrete-Time
Signals & Systems
Digital data is represented by discrete number values. Assume a discrete-time system with impulse response h[n], as shown in the following figure. The system processes the input x[n] in some way and produces an output y[n]. At each integer value of discrete time, n, we input a value of x[n] into the system and the system produces an output value y[n]. For instance, if we have a multiplier system whose input/output relation is given by y[n] = 7 x[n], and the input sequence is x[n] = [1 0 1 3], then the output sequence is y[n] = [7 0 7 21].

Illustration of a discrete-time system.

There exists a specific category of discrete-time systems, called linear shift-invariant (LSI) systems, which have interesting properties.

Properties of Linear Shift-Invariant (LSI) Systems

A system is considered to be an LSI system if it satisfies the requirements of shift-invariance and linearity. LSI systems have the following properties:

1- Additivity
A system satisfies the property of additivity if a sum of inputs results in a sum of outputs. By definition, an input of x3[n] = x1[n] + x2[n] results in an output of y3[n] = y1[n] + y2[n].


2- Homogeneity
A system satisfies the condition of homogeneity if an input scaled by a certain factor produces an output scaled by that same factor. By definition, an input of ax1 results in an output of ay1.

3- Linearity
A system is considered linear if it satisfies the conditions of additivity and homogeneity.

4- Causality
A simple definition of a causal system is a system in which the output does not change before the input. A more formal definition of a causal system is a system whose output depends only on past or current inputs. A system is called non-causal if its output depends on future inputs. This book will only consider causal systems, because they are easier to work with and understand, and because most practical systems are causal in nature.

5- Memory
A system is said to have memory if the output from the system depends on past inputs (or future inputs!) to the system. A system is called memoryless if the output depends only on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications.

6- Shift-Invariance
A system is called shift-invariant if the relationship between the input and output signals does not depend on the passage of time. If the input signal x[n] produces an output y[n], then any time-shifted input x[n + N] results in a time-shifted output y[n + N]. This property is satisfied if the transfer function of the system is not a function of time.
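The properties above can be verified numerically for the multiplier system y[n] = 7 x[n] from the earlier example. The following Python/NumPy sketch (not part of the text) checks additivity, homogeneity, and shift-invariance directly; the second test input x2 is an arbitrary choice.

```python
import numpy as np

def system(x):
    """The multiplier system from the text: y[n] = 7 x[n]."""
    return 7 * np.asarray(x)

x1 = np.array([1, 0, 1, 3])
x2 = np.array([0, 2, 1, 0])   # a second, arbitrary test input

# Additivity: the sum of inputs produces the sum of outputs.
assert np.array_equal(system(x1 + x2), system(x1) + system(x2))

# Homogeneity: scaling the input by a scales the output by a.
assert np.array_equal(system(5 * x1), 5 * system(x1))

# Shift-invariance: a shifted input produces an equally shifted output.
assert np.array_equal(system(np.roll(x1, 1)), np.roll(system(x1), 1))

print(system(x1))   # matches the example: [7 0 7 21]
```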


Discrete Fourier
Transform (DFT)
The Discrete Fourier Transform (DFT) of a signal x may be defined by:

X(ω_k) = Σ_{n=0}^{N−1} x(t_n) e^{−jω_k t_n},   k = 0, 1, …, N−1

where t_n ≜ nT is the nth sampling instant and ω_k ≜ 2πk/(NT) is the kth frequency sample. The sampling interval T is also called the sampling period.

When all N signal samples x(t_n) are real, we say x ∈ R^N. If they may be complex, we write x ∈ C^N. Finally, n ∈ Z means n is any integer.

Inverse DFT
The inverse DFT (the IDFT) is given by:

x(t_n) = (1/N) Σ_{k=0}^{N−1} X(ω_k) e^{jω_k t_n},   n = 0, 1, …, N−1

In the signal processing literature, it is common to write the DFT and its inverse in the more pure form below, obtained by setting T = 1 in the previous definitions:

X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πnk/N}

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) e^{j2πnk/N}

where x(n) denotes the input signal at time (sample) n, and X(k) denotes the kth spectral sample. This form is the simplest mathematically, while the previous form is easier to interpret physically.
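The pure-form DFT can be evaluated directly from its definition. The following Python/NumPy sketch (an illustration, not from the book) builds the DFT as a matrix-vector product and checks both the inverse relation and agreement with a library FFT.

```python
import numpy as np

def dft(x):
    """Direct O(N^2) evaluation of the pure-form DFT (T = 1):
    X(k) = sum_n x(n) * exp(-j*2*pi*n*k/N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.exp(-2j * np.pi * k * n / N) @ x

x = np.array([1.0, 2.0, 0.0, -1.0])
X = dft(x)

# The IDFT can be computed via conjugation: x = conj(DFT(conj(X))) / N.
x_back = np.conj(dft(np.conj(X))) / len(x)
print(np.allclose(x_back, x))          # True: the inverse recovers x(n)
print(np.allclose(X, np.fft.fft(x)))   # True: matches NumPy's FFT
```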

Having completely understood the DFT and its inverse mathematically, we go on to proving various Fourier theorems, such as the ``shift theorem,'' the ``convolution theorem,'' and ``Parseval's theorem.'' The Fourier theorems provide a basic thinking vocabulary for working with signals in the time and frequency domains.

Rayleigh Energy Theorem (Parseval's Theorem)

For any x ∈ C^N,

Σ_{n=0}^{N−1} |x(n)|² = (1/N) Σ_{k=0}^{N−1} |X(k)|²

i.e., the energy of a signal in the time domain equals its energy in the frequency domain, up to the factor 1/N. This is a special case of the power theorem. It, too, is often referred to as Parseval's theorem (being a special case).
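The energy theorem is easy to verify numerically. This Python/NumPy sketch (not from the book) checks it for a random complex signal of length N = 64.

```python
import numpy as np

# Numerical check of the Rayleigh energy theorem:
# sum_n |x(n)|^2  ==  (1/N) * sum_k |X(k)|^2
rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / N
print(np.isclose(energy_time, energy_freq))   # True
```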


Fast Fourier
Transform (FFT)
The fast Fourier transform (FFT) is an efficient implementation of the discrete Fourier transform (DFT) for highly composite transform lengths N. When N is a power of 2, the computational complexity drops from O(N^2) for the direct DFT down to O(N log2 N) for the FFT, where log2 N denotes the logarithm-base-2 of N. The FFT was first described by Gauss in 1805 and rediscovered in the 1960s by Cooley and Tukey. The first stage of a radix-2 FFT algorithm is derived below; the remaining stages are obtained by applying the same decomposition recursively. As we have seen so far, the DFT is defined as follows:

X(k) = Σ_{n=0}^{N−1} x(n) W_N^{nk},   k = 0, 1, …, N−1

where x(n) is the input signal amplitude at time n, and

W_N ≜ e^{−j2π/N}

Note also that W_N^N = 1. There are two basic varieties of FFT, one based on decimation in time (DIT), and the other on decimation in frequency (DIF). Here we will derive decimation in time. When N is even, the DFT summation can be split into sums over the odd and even indexes of the input signal:

X(k) = Σ_{m=0}^{N/2−1} x(2m) W_N^{2mk} + Σ_{m=0}^{N/2−1} x(2m+1) W_N^{(2m+1)k}   (F.1)


X(k) = Σ_{m=0}^{N/2−1} xe(m) W_{N/2}^{mk} + W_N^k Σ_{m=0}^{N/2−1} xo(m) W_{N/2}^{mk} = Xe(k) + W_N^k Xo(k)   (F.2)

where xe(n) ≡ x(2n) and xo(n) ≡ x(2n+1) denote the even- and odd-indexed samples from x. Thus, the length-N DFT is computable using two length-N/2 DFTs. The complex factors

W_N^k

are called twiddle factors. The splitting into sums over even and odd time indexes is called decimation in time. (For decimation in frequency, the inverse DFT of the spectrum X(k) is split into sums over even and odd bin numbers k.)
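Applying this decomposition recursively gives the full radix-2 DIT FFT. The following Python/NumPy routine (an illustration, not part of the original text) implements it and checks the result against a library FFT.

```python
import numpy as np

def fft_dit(x):
    """Recursive radix-2 decimation-in-time FFT (len(x) a power of 2).
    Combines the two half-length DFTs Xe and Xo with the twiddle
    factors W_N^k = exp(-2j*pi*k/N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    Xe = fft_dit(x[0::2])                 # DFT of even-indexed samples
    Xo = fft_dit(x[1::2])                 # DFT of odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
    return np.concatenate([Xe + W * Xo, Xe - W * Xo])

x = np.random.default_rng(1).standard_normal(8)
print(np.allclose(fft_dit(x), np.fft.fft(x)))   # True
```

The second half of the spectrum uses Xe − W·Xo because W_N^{k+N/2} = −W_N^k.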


Z- Transform

The Z-transform is used primarily to convert discrete data sets into a continuous representation. The Z-transform is very similar to the star transform, except that the Z-transform does not take explicit account of the sampling period. The Z-transform has a number of uses in the field of digital signal processing and the study of discrete signals in general, and is useful because Z-transform results are extensively tabulated, whereas star-transform results are not. The Z-transform is defined as:

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}   (G-1)

The following table lists the z-transforms of the main signals and their regions of convergence (ROC).

Z-Transform Table:

Signal, x[n]          Z-transform, X(z)                                ROC

δ[n]                  1                                                all z
u[n]                  1/(1 − z^−1)                                     |z| > 1
a^n u[n]              1/(1 − a z^−1)                                   |z| > |a|
n a^n u[n]            a z^−1 / (1 − a z^−1)²                           |z| > |a|
−a^n u[−n−1]          1/(1 − a z^−1)                                   |z| < |a|
cos(ω0 n) u[n]        (1 − z^−1 cos ω0) / (1 − 2 z^−1 cos ω0 + z^−2)   |z| > 1
sin(ω0 n) u[n]        (z^−1 sin ω0) / (1 − 2 z^−1 cos ω0 + z^−2)       |z| > 1

The Inverse Z-Transform is sufficiently complex that we will not consider it here.
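A tabulated pair can be spot-checked by evaluating the defining sum directly. This Python/NumPy sketch (not from the book) checks the pair a^n u[n] ↔ 1/(1 − a z^−1) at an arbitrary point inside the ROC.

```python
import numpy as np

# Spot-check the pair a^n u[n] <-> 1/(1 - a z^{-1}), ROC |z| > |a|:
# evaluate the defining sum X(z) = sum_{n>=0} x[n] z^{-n} (truncated)
# at a point inside the ROC and compare with the closed form.
a = 0.5
z = 1.2 * np.exp(1j * 0.7)          # |z| = 1.2 > |a| = 0.5, inside the ROC
n = np.arange(200)                  # 200 terms; the geometric tail is negligible
X_sum = np.sum(a**n * z**(-n))
X_closed = 1.0 / (1.0 - a / z)
print(np.isclose(X_sum, X_closed))  # True
```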


Modified Z- Transform

The Modified Z-Transform is similar to the Z-transform, except that the modified version allows the system to be subjected to an arbitrary delay, by design. The Modified Z-Transform is very useful when talking about digital systems for which the processing time of the system is not negligible. For instance, a slow computer system can be modeled as an instantaneous system with an output delay. The modified Z-transform is based on the delayed Z-transform,

X(z, Δ) = Z{x(t − ΔT)} = Σ_{n=0}^{∞} x(nT − ΔT) z^{−n},

with the substitution Δ = 1 − m:

X(z, m) = Σ_{n=0}^{∞} x[(n + m − 1)T] z^{−n}   (H-1)


Star Transform

The Star Transform is a discrete transform that has similarities to both the Z-transform and the Laplace transform. In fact, the Star Transform can be said to be nearly analogous to the Z-transform, except that the star transform explicitly accounts for the sampling time of the sampler. The Star Transform is defined as:

X*(s) = Σ_{n=0}^{∞} x(nT) e^{−snT}   (I-1)

Star transform pairs can be obtained by substituting z = e^{sT} into the Z-transform pairs above.
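The substitution z = e^{sT} can be checked numerically. The following Python/NumPy sketch (not from the book) evaluates the star-transform sum for the sampled exponential x(t) = e^{−at}u(t) and compares it with the corresponding Z-transform pair evaluated at z = e^{sT}.

```python
import numpy as np

# The star transform equals the Z-transform of the samples evaluated at
# z = e^{sT}. Check this for x(t) = e^{-a t} u(t): its samples are
# (e^{-aT})^n, whose Z-transform is 1/(1 - e^{-aT} z^{-1}).
a, T = 0.8, 0.25
s = 0.5 + 2.0j                       # any s for which the sum converges
n = np.arange(400)                   # truncated sum; the tail is negligible
X_star = np.sum(np.exp(-a * n * T) * np.exp(-s * n * T))
X_z = 1.0 / (1.0 - np.exp(-a * T) * np.exp(-s * T))  # Z-transform at z = e^{sT}
print(np.isclose(X_star, X_z))       # True
```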


Bilinear Transform

The bilinear transform is used to convert an equation in the Z-domain into the arbitrary W-domain, with the following properties:

• roots inside the unit circle in the Z-domain are mapped to roots in the left half of the W-plane;
• roots outside the unit circle in the Z-domain are mapped to roots in the right half of the W-plane;
• roots on the unit circle in the Z-domain are mapped onto the vertical axis in the W-domain.

The bilinear transform can therefore be used to convert a Z-domain equation into a form that can be analyzed using the Routh-Hurwitz criterion. However, it is important to note that the W-domain is not the same as the complex Laplace S-domain. To make the output of the bilinear transform match the S-domain, the signal must be prewarped to account for the non-linear nature of the bilinear transform. The bilinear transform can also be used to convert an S-domain system into the Z-domain. Again, the input system must be prewarped prior to applying the bilinear transform, or else the results will not be correct. The bilinear transform is governed by the following variable transformations:

w = (2/T) (z − 1)/(z + 1)   (J-1)

z = (2/T + w) / (2/T − w)   (J-2)

where T is the sampling time of the discrete signal. Frequencies in the w-domain are related to frequencies in the s-domain through the following relationship:

ω_w = (2/T) tan(ω_s T / 2)   (J-3)


This relationship is called the frequency warping characteristic of the bilinear transform. The inverse relationship is also used:

ω_s = (2/T) arctan(ω_w T / 2)   (J-4)

Applying these transformations before applying the bilinear transform enables direct conversions between the s-domain and the z-domain. The act of applying one of these frequency warping characteristics to a function before transforming is called prewarping.
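The warping relation can be verified numerically. This Python/NumPy sketch (not from the book) shows that a prewarped analog frequency on the jω-axis lands exactly on the unit circle, at the intended digital angle.

```python
import numpy as np

# Check the frequency-warping relation (J-3) for the bilinear mapping
# z = (1 + sT/2)/(1 - sT/2): the analog frequency w_s = (2/T)tan(w_d*T/2)
# lands on the unit circle at the digital angle w_d*T.
T = 0.1                              # sampling period (arbitrary choice)
w_d = 2 * np.pi * 1.5                # target digital frequency, rad/s
w_s = (2 / T) * np.tan(w_d * T / 2)  # prewarped analog frequency

s = 1j * w_s
z = (1 + s * T / 2) / (1 - s * T / 2)

print(np.isclose(abs(z), 1.0))           # True: the jw-axis maps to the unit circle
print(np.isclose(np.angle(z), w_d * T))  # True: the angle is exactly w_d*T
```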


FFT Code in
BASIC
3000 'FFT FOR REAL SIGNALS
3010 'Upon entry, N% contains the number of points in the DFT, REX[ ] contains
3020 'the real input signal, while values in IMX[ ] are ignored. Upon return,
3030 'REX[ ] and IMX[ ] contain the DFT output. All signals run from 0 to N%-1.
3040 '
3050 NH% = N%/2-1 'Separate even and odd points
3060 FOR I% = 0 TO NH%
3070 REX(I%) = REX(2*I%)
3080 IMX(I%) = REX(2*I%+1)
3090 NEXT I%
3100 '
3110 N% = N%/2 'Calculate N%/2 point FFT
3120 GOSUB 1000 '(GOSUB 1000 is the FFT in Table 12-3)
3130 N% = N%*2
3140 '
3150 NM1% = N%-1 'Even/odd frequency domain decomposition
3160 ND2% = N%/2
3170 N4% = N%/4-1
3180 FOR I% = 1 TO N4%
3190 IM% = ND2%-I%
3200 IP2% = I%+ND2%
3210 IPM% = IM%+ND2%
3220 REX(IP2%) = (IMX(I%) + IMX(IM%))/2
3230 REX(IPM%) = REX(IP2%)
3240 IMX(IP2%) = -(REX(I%) - REX(IM%))/2
3250 IMX(IPM%) = -IMX(IP2%)
3260 REX(I%) = (REX(I%) + REX(IM%))/2
3270 REX(IM%) = REX(I%)
3280 IMX(I%) = (IMX(I%) - IMX(IM%))/2
3290 IMX(IM%) = -IMX(I%)


3300 NEXT I%
3310 REX(N%*3/4) = IMX(N%/4)
3320 REX(ND2%) = IMX(0)
3330
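The same even/odd packing trick used by the BASIC routine can be sketched compactly in Python/NumPy (this version is an independent illustration, not a translation of the listing): the even samples go into the real part and the odd samples into the imaginary part of one half-length complex FFT, which is then unscrambled and combined with twiddle factors.

```python
import numpy as np

def real_fft(x):
    """DFT of a real length-N signal via one length-N/2 complex FFT."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    z = x[0::2] + 1j * x[1::2]           # pack even/odd samples
    Z = np.fft.fft(z)                     # one N/2-point complex FFT
    Zc = np.conj(np.roll(Z[::-1], 1))     # Z*(-k) = conj(Z[(N/2 - k) mod N/2])
    Xe = (Z + Zc) / 2                     # DFT of the even samples
    Xo = (Z - Zc) / (2j)                  # DFT of the odd samples
    k = np.arange(N // 2)
    W = np.exp(-2j * np.pi * k / N)       # twiddle factors
    return np.concatenate([Xe + W * Xo, Xe - W * Xo])

x = np.random.default_rng(2).standard_normal(16)
print(np.allclose(real_fft(x), np.fft.fft(x)))  # True
```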


Intro to
MATLAB & Simulink
The MATLAB Student Version, available from MathWorks, is a full version of MATLAB and includes Simulink and the DSP toolbox. It is available at: http://www.mathworks.com/products/studentversion/
1. Start up MATLAB. You'll get the MATLAB Desktop.
2. Start up Simulink by pressing the SIMULINK LIBRARY button in the desktop. You get the Simulink Library Browser.


3. If you have an old version of MATLAB, you may type at the MATLAB prompt
>> simulink
You'll also get a similar Simulink Library.
4. From the New menu, select New Simulink Model. If you have an old version of MATLAB, select New Model from the File menu. A Model window opens as follows.

5. Now we can start building and testing our model. For instance, we can build a QPSK modulator.



