Abstract
Over 80% of all DSP applications require some form of frequency domain processing, and there are many techniques for performing this kind of operation. This two-part workshop serves as an introduction to frequency domain signal processing. This first session covers the theory behind Fourier transforms and frequency domain processing, while the second looks at specific frequency domain techniques from an application perspective and shows how they can often provide information that is difficult or sometimes impossible to realise in the time domain. Topics to be discussed in this session include: continuous and discrete Fourier transforms and FFTs; how the FFT works; the complex exponential; and windowing functions and their effects.
1. Introduction
All signals have a frequency domain representation. In 1822, Baron Jean Baptiste Fourier detailed the theory that any real-world waveform can be generated by the addition of sinusoidal waves, although this was arguably developed first by Gauss in 1805. The following diagram shows an example of this process:
[Figure: a waveform shown as the sum of its sinusoidal components.]
Signals can be transformed between the time and the frequency domain through various transforms. The signals can be processed within these domains, and each process in one domain has a counterpart in the other, as shown:
Time domain:         input x(t),  impulse response h(t),  output y(t) = h(t) * x(t)
Laplace (s-domain):  input X(s),  transfer function H(s),  output Y(s) = H(s) X(s)
Frequency (Fourier): input X(jω), transfer function H(jω), output Y(jω) = H(jω) X(jω)
The most important process translation between the time and frequency domains is that convolution in the time domain is equivalent to multiplication in the frequency domain, and vice versa.
Within the “real”, continuous time world, systems are defined in the s-domain. In the digital world, these systems are translated to the z-domain, as shown in the following diagrams:
[Figure: the s-plane (σ and jω axes) alongside the z-plane, with f0, fs/2 and fs marked around the unit circle.]
Stable systems have their poles (feedback elements) located in the left hand half of the s-plane
and these are mapped to a region that lies within the “unit-circle” of the z-domain. Zeros
(feedforward elements) can lie anywhere on either plane.
∴ z|σ=0 = e^(jωT)

Thus the jω axis of the s-plane maps to a circle of unit radius in the z-plane. As ω increases from -∞ through 0 to +∞, the unit circle is retraced every 2π.

In the continuous time domain, e^(-sT) is a unit delay, where T is the sample period. This is represented by z^-1 in the discrete time domain, where z = e^(sT), and in general:

e^(-nsT) ↔ z^-n
i.e. the Laplace transform is equivalent to the Fourier transform for s = jω. The Laplace transform is given by the following equation:

X(s) = ∫[-∞..∞] x(t) e^(-st) dt

and this translates to the following in the z-domain:

X(z) = Σ[n=-∞..-1] x(n) z^-n + Σ[n=0..∞] x(n) z^-n

This is referred to as the two-sided z-transform; however, for most applications it can be simplified to the one-sided transform by assuming that x(n) = 0 for n < 0.
Any real-world signal or function has a related z-transform, and there are many important z-transform pairs that are worth remembering (with t = nTs written simply as n). The Fourier transform is defined as:

F{x(t)} = ∫[-∞..∞] x(t) e^(-jωt) dt

The FT is equivalent to the LT when x(t) = 0 for t < 0 and x(t) converges at t = ∞.
As an example of how we might use the different domains, let us consider the following signal and its representation:
[Figure: a sampled signal shown in the time domain, its pole-zero plot in the z-domain, and its periodic spectrum in the frequency domain, repeating at f0, fs/2, fs, 3fs/2.]
If we now filter the signal with the filter defined in the z-domain, we see the following results: the poles represent “gain” and the zeros “attenuation”. The effects of applying the filter to the signal are frequency dependent, so we see that the filter has a low-pass effect and the signal is smoothed. This operation of the system on the signal is performed by either convolution in the time domain or multiplication in the frequency domain.
2. Representing Signals
Signals can be represented in many different ways. From Fourier’s theory, we know that we can represent any real-world signal by the combination of sinusoids. We therefore need to understand how sinusoids work in order to understand how more complex signals behave.

In the complex domain, we can think of a fundamental signal as a rotating phasor: a rotating vector in the complex plane with magnitude A and rotational speed ω radians per second (ω = 2πf).
[Figure: a phasor of magnitude A at angle θ in the complex plane, with real part a and imaginary part b.]
At time t:

x(t) = a + jb

where:

A = √(a² + b²)
θ = ωt = tan⁻¹(b/a)

In polar format:

X(t) = A e^(jωt) = A (cos(ωt) + j sin(ωt))

Then:

cos(ωt) = (e^(jωt) + e^(-jωt)) / 2
sin(ωt) = (e^(jωt) - e^(-jωt)) / 2j
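These conjugate-phasor identities are easy to verify numerically. The following NumPy sketch (the 1 Hz frequency and 1 kHz sample grid are chosen arbitrarily for illustration) recovers the cosine and sine from their phasor pairs:

```python
import numpy as np

# Sample one second of a 1 Hz tone on a 1 kHz grid.
t = np.arange(0.0, 1.0, 0.001)
w = 2 * np.pi * 1.0  # omega = 2*pi*f, with f = 1 Hz

# cos(wt): the sum of a positively and a negatively rotating phasor.
cos_from_phasors = (np.exp(1j * w * t) + np.exp(-1j * w * t)) / 2
# sin(wt): the difference of the same conjugate pair, divided by 2j.
sin_from_phasors = (np.exp(1j * w * t) - np.exp(-1j * w * t)) / (2j)

assert np.allclose(cos_from_phasors.imag, 0.0)            # purely real result
assert np.allclose(cos_from_phasors.real, np.cos(w * t))
assert np.allclose(sin_from_phasors.real, np.sin(w * t))
```

The imaginary parts cancel exactly, which is the point of the conjugate pair: a real signal always carries matching positive- and negative-frequency components.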
A cosinusoid can be represented by a conjugate pair of phasors with a purely real result; similarly, a sinusoid is represented by a conjugate pair of phasors with a purely imaginary result, as shown:
[Figure: two phasors of magnitude A/2 rotating in opposite directions, whose sum is A cos(ωt) along the real axis.]
Phasors rotate in two directions and have the following characteristics: positively rotating phasors rotate anti-clockwise and represent positive frequencies, whilst negatively rotating phasors rotate clockwise and represent negative frequencies.
Referring to the Fourier transform, this simply splits the signal up into its fundamental phasors, or complex exponential components.

Signals can be processed, and systems analysed, in both the time and frequency domains; standard transform theorems describe how operations in one domain relate to the other.
So far, we have primarily considered the continuous domains, but for DSP we need to consider the discrete equivalents. The discrete Fourier transform (DFT) is given in the following equation; it shows that for every frequency, the Fourier transform X(k) determines the contribution of a complex sinusoid of that frequency in the composition of the signal x(n):

X(k) = Σ[n=0..N-1] x(n) e^(-j2πnk/N)   for 0 ≤ k ≤ N-1

The counterpart of the Fourier transform is the inverse Fourier transform:

x(n) = (1/N) Σ[k=0..N-1] X(k) e^(j2πnk/N)   for 0 ≤ n ≤ N-1
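The forward equation can be evaluated directly, which is a useful sanity check against a library FFT. A minimal sketch using NumPy (the 64-point random test vector is arbitrary):

```python
import numpy as np

def dft(x):
    """Direct evaluation of X(k) = sum_n x(n) e^{-j 2 pi n k / N} (O(N^2))."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)                     # one row per output bin
    return np.exp(-2j * np.pi * n * k / N) @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

X_direct = dft(x)
X_fast = np.fft.fft(x)       # the FFT computes the same transform, faster
assert np.allclose(X_direct, X_fast)

x_back = np.fft.ifft(X_fast)  # the inverse transform recovers x(n)
assert np.allclose(x_back.real, x)
```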
• Uncertainty principle: increased frequency domain resolution implies reduced time domain resolution, and vice versa.
• Continuity
There are many ways to approach an understanding of the FFT; however, there are some heuristic approaches that explain the operation and can be used to extend the techniques. The Fourier transform can be considered as a bank of band-pass filters that takes in a signal, where the magnitude of the output of each filter is proportional to the total input energy falling within that filter. Each of these filters convolves the input with a set of filter coefficients that are sinusoidal in nature, with the frequency of oscillation equal to the centre frequency of the filter. When performing the convolution over all the banks, many of the multiplications of data and coefficient values are repeated and therefore redundant. A computation saving can be made by implementing a single Fourier transform as two half-sized Fourier transforms (i.e. two N/2-point DFTs are faster than one N-point DFT). Extrapolating this to its limit, a 2-point DFT is often the optimum process, and larger FT operations can be constructed from this small building block. This is why FFT lengths are usually powers of 2. This is a generalisation: there may be architectural reasons why other lengths are preferable, and it is not unknown to mix the sizes of the blocks, termed mixed-radix transforms.

For the purposes of this paper we will only discuss radix-2 transforms. When looking at the computational loading of FTs and FFTs, the former requires order N² operations and the latter order (N/2) log₂N operations.
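The "one N-point DFT from two N/2-point DFTs" idea can be written down directly as a recursion. A short sketch (decimation-in-time rather than the decimation-in-frequency drawn below, but the same divide-and-conquer principle):

```python
import numpy as np

def fft_radix2(x):
    """Radix-2 decimation-in-time FFT: build one N-point DFT from two
    N/2-point DFTs of the even and odd samples.  len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])     # N/2-point DFT of even-index samples
    odd = fft_radix2(x[1::2])      # N/2-point DFT of odd-index samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    # Butterfly: combine the two half transforms with the twiddle factors.
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.arange(8.0)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

Unrolling the recursion down to 2-point transforms gives exactly the log₂N stages of butterflies drawn in the flow graph below.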
So how do we combine the small FT building blocks into a complete Fourier transform? The process starts by sub-dividing (decimating) the complete operation, and this can be performed at either the time or the frequency end of the operation. Taking an 8-point FT, the decimation-in-frequency operation is as follows:
[Figure: an 8-point decimation-in-frequency FFT flow graph in three stages of butterflies, with inputs x(0)..x(7) in natural order, twiddle factors WN applied between stages, and outputs X(0), X(4), X(2), X(6), X(1), X(5), X(3), X(7) in bit-reversed order.]
The WN values are the coefficients of the FFT and are often referred to as twiddle factors. The twiddle factors are essentially the complex exponential values, according to the following equations:

WN = e^(j2π/N)   where   X(k) = Σ[n=0..N-1] x(n) WN^(-nk)

[Figure: the twiddle factor W = C + j(-S) shown as a point on the unit circle.]

For a single butterfly with inputs A and B, outputs C and D, and twiddle angle θ, the corresponding equations are:

Cr = Ar + Br
Ci = Ai + Bi
Dr = (Ar - Br) · cos(θ) + (Ai - Bi) · sin(θ)
Di = (Ai - Bi) · cos(θ) - (Ar - Br) · sin(θ)
With the forward and inverse transforms, either the input or the output data set must be in a non-linear order that follows a bit-reversed addressing scheme. Most modern DSPs are able to handle this directly in hardware, so converting the data back into linear order often adds no overhead to the operation.
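On a general-purpose processor without bit-reversed addressing hardware, the permutation can be computed explicitly. A small sketch of the index mapping:

```python
def bit_reverse_indices(n):
    """Index permutation produced by bit-reversed addressing for an
    n-point radix-2 FFT (n must be a power of 2)."""
    bits = n.bit_length() - 1  # number of address bits, e.g. 3 for n = 8
    # Reverse the bit pattern of each index: 001 -> 100, 011 -> 110, ...
    return [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]

# For an 8-point FFT this reproduces the output order of the flow graph above.
assert bit_reverse_indices(8) == [0, 4, 2, 6, 1, 5, 3, 7]
```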
The output of an FFT is limited in both resolution and dynamic range. Resolution is defined as the gap between two adjacent frequency components (bins), and the dynamic range is the ratio of the smallest to the largest signal detectable.
resolution = sample rate / number of samples in buffer

dynamic range = 20 log₁₀ (smallest signal / largest signal)
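These two definitions are one-liners in code. A sketch (the 48 kHz / 1024-point and 16-bit figures are illustrative, not from the text):

```python
import math

def fft_resolution_hz(sample_rate, n_samples):
    """Bin spacing of an FFT: sample rate divided by buffer length."""
    return sample_rate / n_samples

def dynamic_range_db(smallest, largest):
    """Dynamic range as defined above; negative dB, often quoted as a magnitude."""
    return 20 * math.log10(smallest / largest)

# A 1024-point FFT at 48 kHz resolves bins 46.875 Hz apart.
assert fft_resolution_hz(48_000, 1024) == 46.875
# A 16-bit converter spans roughly 90 dB between 1 LSB and full scale.
assert round(dynamic_range_db(1, 32768), 1) == -90.3
```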
When performing the FFT, the operation processes a finite block of data. The frequency domain effect of this rectangular truncation is a sin(x)/x response; the magnitude of the sidelobes is therefore independent of the window length. Increasing the window length decreases the sidelobe width only, not the height. The effect is shown in the following diagrams:
[Figure: a rectangular window of length 2T0 in the time domain and its sin(x)/x spectrum, with zero crossings at multiples of 1/(2T0).]
Looking at the effects from a different perspective, we see the edge effect more clearly. Using this approach we can also see how to remove the edge effect, by using a window that tapers to zero at the extremities.
[Figure: a continuous wave cut by a rectangular sample window, producing discontinuities at the block edges; multiplying by a tapered window removes them from the windowed data.]
[Figure: FFT magnitude (dB, 0 to -100) of the same tone without and with the window, showing the much lower leakage floor when windowed.]
The windowing functions are typically developed from frequency domain requirements. A typical example is the Hanning window, which is defined by the following equation:

w(t) = ½ (1 - cos(2πt / Tc))
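The benefit of the taper is easy to demonstrate: for a tone that does not fall exactly on a bin centre, the Hanning window suppresses the far-out leakage by many tens of dB. A sketch (sample rate, length and tone frequency chosen for illustration):

```python
import numpy as np

fs, N = 1000.0, 256
n = np.arange(N)
# A tone that does NOT fall on a bin centre, so the rectangular
# window's sin(x)/x sidelobes leak into distant bins.
x = np.cos(2 * np.pi * 102.3 * n / fs)

# The Hanning window w(t) above, sampled over the block.
hann = 0.5 * (1 - np.cos(2 * np.pi * n / N))

spec_rect = 20 * np.log10(np.abs(np.fft.rfft(x)) + 1e-12)
spec_hann = 20 * np.log10(np.abs(np.fft.rfft(x * hann)) + 1e-12)

# Well away from the tone (bins 60-99, the tone sits near bin 26),
# the windowed spectrum is far below the rectangular one.
far = slice(60, 100)
assert spec_hann[far].max() < spec_rect[far].max() - 20
```

The price, as the table of window parameters shows, is a wider main lobe: the windowed peak is spread over more bins than the rectangular one.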
There are many “standard” windowing functions, and their time and frequency domain performances are shown:
[Figure: time domain shapes (0 to 1) and frequency domain responses (0 to -200 dB) of the Blackman-Harris, Blackman, Hamming, Hanning, Kaiser, rectangle and triangle windows.]
The performance of the windows is measurable via various parameters, such as highest sidelobe level, sidelobe fall-off rate, main-lobe width and equivalent noise bandwidth.
In order to understand the output of the FFT, it is important to analyse the outputs generated by basic signals, which can easily be related to the phasor representation. Note where the DC component lies; in these examples the tones lie on an integer number of bins. The components represent the phasors (magnitude, frequency and phase).
[Figure: FFT of a pure cosine waveform: real components of ½ at f and Fs - f, zero imaginary part. FFT of a pure sine waveform: imaginary components of ±½ at f and Fs - f, zero real part.]
FFT processing is not useful in itself; it is the post-FFT processing that is usually the important issue, as it defines what information can be extracted. The following equations show how to calculate the magnitude and phase of the signals:

log magnitude = 20 log₁₀ √(real² + imag²)
             = 10 log₁₀ (real² + imag²)

angle = tan⁻¹ (imaginary / real)
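In code, these are a couple of lines per bin. A sketch using NumPy (`arctan2` is used rather than a plain arctangent so the real-part-zero quadrants are handled; the small offset guards the logarithm of empty bins):

```python
import numpy as np

# FFT of one cycle of a cosine across four samples.
X = np.fft.fft([1.0, 0.0, -1.0, 0.0])

# 20*log10(sqrt(re^2 + im^2)); the 1e-20 floor avoids log10(0) in empty bins.
log_magnitude = 20 * np.log10(np.hypot(X.real, X.imag) + 1e-20)

# arctan2 resolves the quadrant correctly even when the real part is zero.
angle = np.arctan2(X.imag, X.real)

# The cosine's energy sits in the conjugate pair of bins 1 and 3.
assert np.allclose(np.abs(X), [0, 2, 0, 2])
```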
Power spectrum estimation is one of the common post-FFT calculations and is calculated by averaging the outputs from successive FFTs. This is usually implemented by applying a one-pole averaging filter per bin, across frames, as shown:
[Figure: a one-pole averaging filter applied independently to each bin, Bin 0 through Bin N.]
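A per-bin one-pole average of this kind might be sketched as follows (the filter constant, tone and noise levels are illustrative):

```python
import numpy as np

def average_spectra(frames, alpha=0.1):
    """One-pole (exponential) average of successive FFT power frames:
    avg <- (1 - alpha) * avg + alpha * |FFT(frame)|^2, applied per bin."""
    avg = None
    for frame in frames:
        power = np.abs(np.fft.rfft(frame)) ** 2
        avg = power if avg is None else (1 - alpha) * avg + alpha * power
    return avg

rng = np.random.default_rng(1)
n = np.arange(128)
tone = np.sin(2 * np.pi * 16 * n / 128)          # bin-centred tone at bin 16
frames = [tone + 0.5 * rng.standard_normal(128) for _ in range(50)]

avg = average_spectra(frames)
# Averaging smooths the noise bins while the coherent tone stays on top.
assert int(np.argmax(avg)) == 16
```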
The final result, an FFT-based spectrum analyser, would look like the following:
[Figure: magnitude (dB, 0 to -100) versus frequency from 0 to 10 kHz.]
When implementing an FFT based system there are some standard suggestions that should be considered; these include graphics and scaling.
The output of an FFT can be viewed in two basic formats; the difference is the location of the DC component, as shown in the following diagrams:
[Figure: format 1 with bins running from 0 Hz to Fs (DC at the left edge); format 2 with bins running from -Fs/2 to Fs/2 (DC in the centre).]
The first format is often used in baseband applications, e.g. speech and system analysis, whereas the second is more common in H.F. communication applications, where fc is heterodyned down to D.C. The results are no different, but one format can make post-processing easier.
The “noise floor” of the FFT output depends on the size of the FFT. The coherent signal (e.g. sinusoid) power in its bin increases by 6 dB for each doubling of the number of samples. The average noise power per bin, however, increases by only 3 dB for the same doubling. The SNR gain is therefore 3 dB per doubling of the FFT size.
Parseval’s theorem states that the average power of a periodic waveform is equal to the sum of the average powers carried by its separate frequency components individually. This is often used to calculate relative powers in-band and out-of-band, e.g. Signal-to-Noise Ratio (SNR).
The FFT is a short-time Fourier transform; edge effects cause discontinuities, and to overcome these effects it is necessary to window the data. Unfortunately, windowing can mask some important artefacts within the signal, i.e. there can be a loss of vital information from a continuous input at the block edges. The solution is to overlap the FFTs. For input-only applications we need only overlap the inputs; for applications with continuous inputs and outputs it is necessary to overlap both and add the intermediate results. The following diagram shows the process:
[Figure: consecutive transforms FT1..FT4 shown with no overlap (1:1), and with 25% overlap where FT2 and FT4 are offset between FT1 and FT3.]
Some applications only require the detection (or, in the case of the IFFT, generation) of a few frequencies. A useful technique is to “prune” the FFT structure so that only the essential calculations are executed.

Very often it is necessary to process data vectors whose length is not an integer power of 2; to process this information it is common to pad the data with zeros up to the correct length. Zero padding does not affect the spectral content or the true frequency resolution of the result, but it does interpolate the original sample set across more points. This often leads to a more computationally efficient solution for non-power-of-2 buffer lengths.
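The effect of zero padding is easy to see in a short sketch (60 samples padded to 64; the tone frequency is arbitrary):

```python
import numpy as np

n = np.arange(60)                      # 60 samples: not a power of 2
x = np.sin(2 * np.pi * 10 * n / 60)    # exactly 10 cycles in the record

padded = np.pad(x, (0, 64 - len(x)))   # zero-pad up to the next power of 2
X = np.fft.rfft(padded)

# The padded transform interpolates the 60-point spectrum across 64 bins;
# the tone still peaks at the bin nearest 10/60 of the new bin grid, and
# the underlying resolution is still set by the 60-sample record.
peak_bin = int(np.argmax(np.abs(X)))
assert abs(peak_bin / 64 - 10 / 60) < 1 / 64
```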
7. The Zoom-FFT
The zoom-FFT utilises complex frequency shifting and decimation to zoom in on a particular
frequency band of interest. This increases the frequency domain resolution, increases spectral
range and reduces the system hardware cost and complexity.
The following block diagram shows the structure of the zoom-FFT algorithm:
[Figure: the input x(k) is multiplied by exp(j·2π·fmix·k) in a complex mixer to give in-phase i(k) and quadrature q(k) channels; each channel passes through a high-decimation comb filter (HDF, 16:1 decimation) and FIR filters (4:1 decimation) down to 52.031 kHz, then a Blackman-Harris window w(k) and an 8k-point complex FFT produce I(n) and Q(n).]
HDF - high decimation filter; i(k) - in-phase signal; q(k) - quadrature phase signal; fmix = 1 MHz.
Typical applications of the zoom-FFT include ultrasonic blood flow analysis, R.F. communications, mechanical stress analysis and Doppler radar.
This technique was used in an application to detect air bubbles suspended in a liquid. The bubbles resonate at frequencies that are proportional to their size, and this resonance makes them act as diode mixers: when the liquid is stimulated with two signals of different frequencies, the signals are modulated together. The energy in the mixed signal is related to the wavelength of the incident signals and the size of the bubble, so the bubble size can be detected by looking at the returned signal.

The ultrasound carrier operates at 1 MHz, which necessitated a high-speed (3.33 MHz) sampling front end, a high-performance (120 MFLOPS) DSP sub-system and an easy-to-use PC-DOS based man-machine interface.
The system has to produce a fast Fourier transform (FFT) of a region of the frequency spectrum centred around a frequency of approximately 1 MHz. The frequency band of interest is approximately 25 kHz either side of the 1 MHz centre frequency, giving a total bandwidth of approximately 50 kHz. The required frequency resolution is 5 Hz. A zoom-FFT method is used to generate the required spectral data, since obtaining the required spectral resolution using a standard FFT would require an unfeasibly large transform.
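The zoom principle (complex mix, low-pass, decimate, then FFT) can be condensed into a few lines. In this sketch a simple moving average merely stands in for the real HDF/FIR chain, and the test tone 7 kHz above the mixer frequency is invented for illustration; only the 3.33 MHz rate, 1 MHz mixer and 64:1 total decimation come from the description above:

```python
import numpy as np

fs = 3_330_000.0          # front-end sample rate (3.33 MHz)
f_mix = 1_000_000.0       # centre of the band of interest
decimation = 64           # 16:1 comb stage x 4:1 FIR stage

# Test input: a tone 7 kHz above the 1 MHz centre frequency.
n = np.arange(2 ** 18)
x = np.cos(2 * np.pi * (f_mix + 7000.0) * n / fs)

# 1. Complex mix: shift the band of interest down to DC.
baseband = x * np.exp(-2j * np.pi * f_mix * n / fs)

# 2. Crude low-pass and decimate (moving average as a stand-in filter).
kernel = np.ones(decimation) / decimation
filtered = np.convolve(baseband, kernel, mode="same")
zoomed = filtered[::decimation]
fs_zoom = fs / decimation          # ~52.03 kHz, matching the diagram

# 3. FFT of the decimated signal: resolution is now fs_zoom / N.
Z = np.fft.fft(zoomed * np.hanning(len(zoomed)))
freqs = np.fft.fftfreq(len(zoomed), d=1 / fs_zoom)
peak = freqs[int(np.argmax(np.abs(Z)))]
assert abs(peak - 7000.0) < 2 * fs_zoom / len(zoomed)
```

The 4096-point FFT here resolves about 12.7 Hz per bin over the zoomed band; reaching the same spacing at the full 3.33 MHz rate would need a transform of several hundred thousand points.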
Another technique for reducing the processing requirement for some applications is to prune the
FFT. This is useful when only a few frequencies need to be analysed.
Convolution in the time domain is equivalent to multiplication in the frequency domain, and vice versa. Correlation is equivalent to convolution with one array time-reversed. The output length of a linear convolution is N + M - 1, where N and M are the lengths of the input vectors, so it is important that the FFT length is at least N + M - 1. The inputs therefore require zero-padding first; otherwise the result is a circular convolution / correlation. The benefit of this approach is the potentially large computational saving, and for many applications it is possible to pre-compute the FFT of the convolution or correlation kernel for more efficiency.
The following block diagram shows the operation:
[Figure: Input A → FFT and Input B → FFT; the two spectra are multiplied and passed through an IFFT to give the output.]
The FFT of a system’s impulse response (e.g. a filter) is its transfer function. The finite-length transfer function is applied to an “infinite-length” signal, so an overlap method will be required (e.g. overlap-and-add). It is important to use correct coefficients: for a low-pass filter, for example, one cannot simply set the high-frequency coefficients to zero, as this generates a time domain sin(x)/x function.
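The fast-convolution recipe above, including the zero padding that avoids circular wrap-around, fits in a few lines. A sketch (input values are arbitrary):

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution via the frequency domain.  The transforms are
    taken over at least len(a) + len(b) - 1 points; without this padding
    the product of FFTs would give a *circular* convolution."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()   # next power of 2 for the FFT
    A = np.fft.rfft(a, nfft)
    B = np.fft.rfft(b, nfft)           # for a fixed kernel, B can be pre-computed
    return np.fft.irfft(A * B, nfft)[:n]

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])
assert np.allclose(fft_convolve(a, b), np.convolve(a, b))
```

Frequency domain correlation is the same structure with one spectrum conjugated before the multiply, as the diagram below shows.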
The following diagram shows the configuration of the frequency domain correlation operation:
[Figure: Input A → FFT, and Input B → FFT → complex conjugate; the two spectra are multiplied and passed through an IFFT to give the output.]
9. An Ultrasonic Time-Of-Flight System
There is a need within the steel industry for accurate measurements of steel sheet thickness. This
can be achieved by making ultrasonic pulse-echo measurements through the steel and calculating
the time-of-flight of the ultrasound. Since the velocity of the ultrasound in the steel is known, the
thickness of the sheet can thus be calculated. A further requirement of the steel industry is the
assessment of the quality of the sheet steel. One of the flaws that is often found within steel is
due to a phenomenon known as residual stress. This is caused by local deformations in the
crystalline structure of the steel which occur during the cooling phase after hot rolling. Residual
stresses within a structure cause local variations in the velocity of the ultrasound, therefore a
time-of-flight measurement which falls outside a given range may also be due to such flaws
within the steel.
The following diagram shows a typical system configuration for a sheet steel thickness test system:
[Figure: an ultrasonic transducer, driven from a sample clock, measures the two-way time of flight T = 2s/c through steel of thickness s, with c = 6100 m/s; the spatial resolution is s = ±0.25 × c × d, where d is the sample period.]
The frequency domain interpolation process is shown graphically below. Zero padding of the frequency domain samples is equivalent to sin(x)/x interpolation in the time domain, and this helps with more accurate location of the zero-crossing points. The operation is shown below:
[Figure: the N-point time domain record (real and imaginary) is transformed by an FFT, the spectrum is zero-padded by the interpolation factor, and an IFFT returns an interpolated time domain record.]
The far-face reverberations can then be detected by autocorrelation: correlating x[n] with x[n+k], where k = p2 - p1 is the sample spacing between successive echoes at positions p1 and p2.
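The lag-finding step can be sketched as follows. The Barker-coded pulse and the echo positions p1 = 20, p2 = 57 are invented for illustration (the real system works on received ultrasound); the point is that the strongest off-zero autocorrelation peak sits at the echo spacing k:

```python
import numpy as np

# Barker-13 code: a pulse with a sharp autocorrelation peak and unit sidelobes.
pulse = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

x = np.zeros(120)
x[20:33] += pulse           # first reflection at sample p1 = 20
x[57:70] += 0.6 * pulse     # far-face reverberation at p2 = 57

# Autocorrelation for lags 0 .. N-1.
r = np.correlate(x, x, mode="full")[len(x) - 1:]
r[0] = 0.0                  # ignore the trivial zero-lag peak
k = int(np.argmax(r))
assert k == 57 - 20         # k = p2 - p1 = 37 samples
```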
This final plot shows the output from a single generated pulse:
[Figure: the received waveform over 800 samples, with amplitudes spanning roughly -1,000,000 to +1,200,000.]
10. Radar Signal Processing
The system specification was to meet a 40 μs pulse repetition interval (PRI) with 1 to 32 range gates. The processing was to be FFT-based and required 300 MFLOPS, which mapped onto six 50 MHz TMS320C40s. The system required a 10 MHz sample rate and 12-bit quantisation. The following diagram shows a typical radar example, with stationary and moving targets:
[Figure: a radar antenna feeding a DSP and display, illuminating a stationary and a moving target; the Doppler-shifted frequency returned from the moving object is indicated.]
Range gating is the technique for analysing the motion of objects at different ranges from the radar system. The radar outputs pulses at given time spacings and starts sampling the input after a delay; the larger the delay, the greater the distance from the radar. For each block of samples taken, the nth sample is at a particular range, so over a sequence of sampled blocks it is possible to look at all the samples at a particular range. This is referred to as a range gate.

Each range gate corresponds to a number of cycles at the transmitted frequency, as shown below. At fs = 10 MS/s, the range-gate spacing is:

d = c / fs = 3×10⁸ / 10×10⁶ = 30 m
The whole process is shown pictorially below, along with the FFT magnitude results from the house and aeroplane example:
[Figure: range gates 1..32 sampled on successive pulses N = 0, 1, 2, 3, ... 63 at t = 0, 40 μs, 80 μs, ...; for one gate, the time domain result shows a D.C. level plus a sinusoidal component, and the FFT result shows a D.C. bin plus a moving-signal bin.]
The sinusoidal result seen above is due to the phase of the returned pulses being modified by the moving target. The D.C. signal is generated by the stationary target (the house); the moving signal is generated by the moving target (the aeroplane). The magnitude of the FFT gives the size of the target; the FFT bin gives the speed of the target.
The processing chain is: A/D acquisition → complex FFT → log magnitude and phase → one-pole filter.

The arrangement of the FFTs is:
[Figure: a 64-point FFT taken down each of the 32 range-gate columns, i.e. across successive pulses at a fixed range.]

The range-gate spacing and total coverage are:

d = 3×10⁸ × 100×10⁻⁹ = 30 m
range = 32 × 30 = 960 m
11. Frequency Domain Beamforming
Beamforming is the acquisition of a signal using an array of transducers and then applying a delay (at its simplest) to “steer” the direction of the “beam” in a particular direction. Essentially, the direction of the beam is relative to the sampling phase of the transducers, as shown in the diagram below:
[Figure: an array of transducers whose relative sampling phases set the beam direction.]
In the time domain, the beams are steered in a single direction, using the following system:
[Figure: a time domain beamformer, with one delay per transducer feeding a summer.]
One of the limitations of time domain beamsteering is that one bank of delays can only steer the beam in a single direction. The solution is to use the fact that time delays in the time domain are equivalent to complex exponential multiplication in the frequency domain, using the following equations:

If:   x(n) ↔ X(e^jω)
Then: x(n - m) ↔ e^(-jωm) X(e^jω)
Essentially this says that we can translate the signals to the frequency domain and cross multiply
by the exponential function and the signals will be delayed by the appropriate amount. The
benefit of performing this operation in the frequency domain is that we can effectively steer the
beam in all directions at the same time.
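The delay theorem quoted above can be demonstrated directly: multiplying a spectrum by e^(-jωm) and inverse transforming shifts the signal by m samples. A sketch (the signal is treated as one period of a circular signal, and the 64-point tone is arbitrary):

```python
import numpy as np

def delay_via_fft(x, m):
    """Delay x by m samples by multiplying its spectrum by e^{-j w m}.
    The shift is circular, i.e. x is treated as one period of a
    periodic signal."""
    N = len(x)
    w = 2 * np.pi * np.fft.fftfreq(N)     # omega for each bin
    return np.fft.ifft(np.fft.fft(x) * np.exp(-1j * w * m)).real

n = np.arange(64)
x = np.sin(2 * np.pi * 3 * n / 64)
assert np.allclose(delay_via_fft(x, 5), np.roll(x, 5))
```

Note that m need not be an integer: the same phase ramp gives fractional-sample delays, which is part of the appeal for beamforming, where each look direction is just a different set of per-channel phase ramps applied to the same spectra.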
The Goertzel algorithm is a very efficient technique for extracting a particular frequency component from a signal. The Goertzel algorithm is defined by the equation:

H_fi(z) = (1 - e^(-j2π·fi/fs) z⁻¹) / (1 - 2cos(2π·fi/fs) z⁻¹ + z⁻²)
The algorithm is most commonly implemented as a second-order recursive IIR filter, as shown in the following flow diagram:
[Figure: second-order recursive Goertzel structure, with feedback coefficients 2cos(2π·fi/fs) and -1 through two unit delays, and an output coefficient of -e^(-j2π·fi/fs).]
This filter does not maintain the complex (phase) information, but the Goertzel filter is often used simply to detect particular individual frequencies; a common application is the detection of DTMF tones. The algorithm is very efficient compared with the regular FFT, especially when the requirement is only to detect a few individual frequencies.
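A magnitude-only Goertzel detector is only a few lines; the recursion runs once per sample and the bin power is recovered from the final two filter states. The 205-sample block length and 697 Hz tone below follow common DTMF practice at 8 kHz, used here purely as an illustration:

```python
import numpy as np

def goertzel_power(x, k):
    """Squared magnitude of DFT bin k of x via the Goertzel recursion
    s[n] = x[n] + 2 cos(2 pi k / N) s[n-1] - s[n-2]."""
    N = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / N)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    # |X(k)|^2 from the final two states (no per-sample complex maths).
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

N = 205                                          # DTMF block length at 8 kHz
n = np.arange(N)
x = np.sin(2 * np.pi * 697.0 * n / 8000.0)       # DTMF row tone "1"
k = round(697.0 / 8000.0 * N)                    # nearest bin: 18

# The target bin carries far more power than a bin a few steps away.
assert goertzel_power(x, k) > 100 * goertzel_power(x, k + 4)
```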
Conclusion
Translating signals from the time domain to the frequency domain opens up a whole new and powerful area of signal processing. All real signals can be generated from the sum of component sinusoids, and the Fourier transform extracts their phase and magnitude information. It is important to be aware of the effects of scaling and of edge effects; to remove the edge effects of the rectangular window, it is important to use the windowing function appropriate to the application.
Appendix A - Decibels
In many applications, the use of a logarithmic representation of values gives many advantages; this section discusses the use of decibels (dB).

dB = 10 log₁₀ (P1 / P2)

In many applications the voltage (V) or current (I) is measured. Power is proportional to V² or I², giving the equivalent dB equations:

dB = 20 log₁₀ (V1 / V2)

and

dB = 20 log₁₀ (I1 / I2)

For many applications, P2 and V2 assume standard values; for example, in telecommunications P2 = 1 milliwatt into 600 ohms, in which case the logarithmic scale is dBm. Note that dBA is not relative to amps but is used as a measure of sound intensity. One of the advantages of using logarithms is that a multiplication of linear values can be achieved by the addition of logarithms.

To revert back to a linear scale, the following equations are used for power and voltage:

P1 / P2 = 10^(dB/10)
V1 / V2 = 10^(dB/20)
The following table shows some of the basic equivalents between absolute power and dB.
P1 P1 V1 I1
P2 (dB) P2 (absolute) V 2 and I 2
20 102 = 100 10
10 101 = 10 π
9 8 2√2
8 2π 2.5
7 5 2.24
6 4 2
5 π 1.77
4 2.5 π/2
3 2 √2
2 π/2 1.5
1 1.25 1.12
0 100 = 1 1
Notes: the use of π as an approximation to √10 is useful for quick calculations and is accurate to 2 significant figures.

The table shows that the absolute power doubles for a 3 dB increase and halves for a 3 dB drop; the voltage and current, however, double for a 6 dB increase.

This table can be used to derive all other dB / absolute equivalents:

16 dB = 10 dB + 6 dB = 10 × 4, so P1/P2 (absolute) = 40
-74 dB = -(70 + 4) dB = 1 / (10⁷ × 2.5), so P1/P2 (absolute) = 4×10⁻⁸
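The conversions and the worked examples above reduce to a few helper functions:

```python
import math

def db_from_power_ratio(p1, p2):
    """dB = 10 log10(P1/P2)."""
    return 10.0 * math.log10(p1 / p2)

def power_ratio_from_db(db):
    """P1/P2 = 10^(dB/10)."""
    return 10.0 ** (db / 10.0)

def voltage_ratio_from_db(db):
    """V1/V2 = 10^(dB/20), since power goes as voltage squared."""
    return 10.0 ** (db / 20.0)

# The worked examples above:
assert round(power_ratio_from_db(16)) == 40          # 16 dB = 10 dB + 6 dB
assert abs(power_ratio_from_db(-74) - 4e-8) < 2e-9   # -74 dB = -(70 + 4) dB
assert round(db_from_power_ratio(2, 1), 2) == 3.01   # doubling power ~ 3 dB
```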
Abbreviations
ALU - Arithmetic Logic Unit: the part of the processor that performs the mathematics
AM - Amplitude Modulation
COMPANDER - COMpressor-exPANDER
CPU - Central Processing Unit: the main part of the DSP that executes the instructions
CW - Continuous Wave
LMS - Least Mean Squares: a technique for adapting FIR filter coefficients
MUX - Multiplexer
RF - Radio Frequency
Adaptive Differential Pulse Coded Modulation (ADPCM) - A speech compression algorithm that adaptively filters the difference between two successive PCM samples. This technique typically gives a data rate of about 32 kbps.
Adaptive equalisation - A filtering system that can allow the effects of a changing communications medium to be cancelled.
Adaptive filter - A filter that can adapt its coefficients to model a system.
Barrel shifter - Part of the ALU that allows single-cycle shifting and rotating of data words.
Bit rate - The rate at which bits are transmitted; equals the baud rate × the number of bits per baud.
Fast Fourier Transform (FFT) - An optimised version of the DFT.
Finite Impulse Response (FIR) filter - A filter that includes no feedback and is unconditionally stable.
Frequency Division Multiplexing (FDM) - A communications system that divides a single channel into smaller ones with discrete frequency bands.
High pass filter - A filter that allows high frequencies to pass through.
Infinite Impulse Response (IIR) filter - A filter that incorporates data feedback. Also called a recursive filter.
Low pass filter - A filter that allows low frequencies to pass through.
Multiple Instruction Multiple Data (MIMD) - See data flow architecture.
Pulse Code Modulation (PCM) - The effect of sampling an analog signal.
Quadrature Amplitude Modulation (QAM) - A variation of PSK that incorporates AM to increase the number of bits per baud.
Twiddle factor - The coefficients of the FFT algorithm, typically a ¾ sine table.