## 4 FREQUENCY ANALYSIS
During frequency analysis, three different types of signals must be handled in different ways, both theoretically and practically. These signals are

• periodic signals, e.g. vibrations caused by rotating machinery

• random signals, e.g. vibrations in a car caused by the road-tire interaction

• transient signals, e.g. shocks arising when a train passes rail joints

## 4.1 Periodic signals - Fourier series

Jean Baptiste Joseph Fourier, a French mathematician active around the start of the 19th century, discovered that every periodic signal can be split up into a (potentially infinite) sum of sinusoids, where each sinusoid has an individual amplitude and phase, see Figure 4.1. A periodic signal is thus distinguished by the fact that it contains (sinusoids with) discrete frequencies. These frequencies are 1/Tp, 2/Tp, 3/Tp, etc., where Tp is the period of the signal.

Figure 4.1. Using Fourier's theory, every periodic signal can be split into a (potentially infinite) number of sinusoidal signals. Shown in the figure is a periodic signal which consists of the sum of the three frequencies 1/Tp, 2/Tp, 3/Tp, where Tp is the period of the signal.

The Fourier series is mathematically formulated so that every periodic signal xp(t) can be written as

$$x_p(t) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[a_k \cos\!\left(\frac{2\pi k t}{T_p}\right) + b_k \sin\!\left(\frac{2\pi k t}{T_p}\right)\right] \qquad (4.1)$$

where the coefficients are given by

$$a_k = \frac{2}{T_p}\int_{T_p} x_p(t)\cos\!\left(\frac{2\pi k t}{T_p}\right)dt \quad \text{for } k = 0, 1, 2, \ldots$$

$$b_k = \frac{2}{T_p}\int_{T_p} x_p(t)\sin\!\left(\frac{2\pi k t}{T_p}\right)dt \quad \text{for } k = 1, 2, 3, \ldots \qquad (4.2)$$

where the integration occurs over an arbitrary period of xp(t). To make the equation easier to interpret physically, one can also describe Equation (4.1) as a sum of sinusoids, one at each frequency, where the phase angle of each sinusoid is described by the variable φk. We then obtain

Anders Brandt. Introductory Noise & Vibration Analysis

$$x_p(t) = \frac{a_0}{2} + \sum_{k=1}^{\infty} \alpha_k \cos\!\left(\frac{2\pi k t}{T_p} - \varphi_k\right) \qquad (4.3)$$

where a0 is the same as in Equation (4.1). Comparing Equations (4.1), (4.2), and (4.3) we see that the coefficients in Equation (4.3) can be obtained from ak and bk in Equation (4.2) through

$$\alpha_k = \sqrt{a_k^2 + b_k^2}\,, \qquad \varphi_k = \arctan\!\left(\frac{b_k}{a_k}\right) \qquad (4.4)$$

By making use of complex coefficients, ck, instead of ak and bk, the Fourier series can alternatively be written as a complex sum as in Equation (4.5).

$$x_p(t) = \sum_{k=-\infty}^{\infty} c_k\, e^{j 2\pi k t / T_p} \qquad (4.5)$$

In this equation

$$c_0 = \frac{a_0}{2}\,, \qquad c_k = \frac{1}{2}\left(a_k - j b_k\right) = \frac{1}{T_p}\int_{t_1}^{t_1+T_p} x_p(t)\, e^{-j 2\pi k t / T_p}\, dt \qquad (4.6)$$

and the integration occurs over an arbitrary period of the signal xp(t) as before. Note in Equation (4.5) that the summation occurs over both positive and negative frequencies, that is, k = 0, ±1, ±2, ... Since the left side of the equation is real (we assume that the signal xp is an ordinary, real measured signal), the right side must also be real. Because the cosine function is an even function and the sine function is odd, the coefficients ck must consequently comply with

$$c_{-k} = c_k^{*}\,, \quad \text{i.e.} \quad \operatorname{Re}[c_{-k}] = \operatorname{Re}[c_k]\,, \quad \operatorname{Im}[c_{-k}] = -\operatorname{Im}[c_k] \qquad (4.7)$$

for all k > 0, where * represents complex conjugation. Hence, the real parts of the coefficients ck form an even sequence and the imaginary parts an odd sequence. For real signals, which are usually band
limited, the Fourier series summation can be done over a smaller frequency interval,
k = 0, ± 1, ±2, ... , N, where the coefficients for k > N are negligible when N is sufficiently
high.
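As a concrete check of Equations (4.5)–(4.7), the complex coefficients can be approximated numerically by averaging xp(t)·e^(−j2πkt/Tp) over one period. The square wave below is an assumption made purely for illustration; its exact coefficients (ck = −2j/(πk) for odd k, 0 for even k) are well known, so the sketch can be verified against them.

```python
import numpy as np

# Numerical sketch of Eq. (4.6): c_k approximated by averaging
# x(t) * exp(-j 2 pi k t / Tp) over one period (uniform sampling, so the
# (1/Tp) integral becomes a plain mean). The square wave is illustrative only.
Tp = 1.0                                      # period of the signal
t = np.linspace(0.0, Tp, 4096, endpoint=False)
x = np.sign(np.sin(2 * np.pi * t / Tp))       # square wave, exact c_k known

def fourier_coeff(x, t, Tp, k):
    """Approximate c_k of Eq. (4.6) for a signal sampled over one period."""
    return np.mean(x * np.exp(-1j * 2 * np.pi * k * t / Tp))

c1 = fourier_coeff(x, t, Tp, 1)    # exact value: -2j/pi, so |c1| = 2/pi
c2 = fourier_coeff(x, t, Tp, 2)    # exact value: 0 (even harmonics vanish)
cm1 = fourier_coeff(x, t, Tp, -1)  # should equal conj(c1), per Eq. (4.7)
print(abs(c1), abs(c2))
```

Note that the last line also exercises the conjugate symmetry of Equation (4.7): for the real signal x, the coefficient at −k equals the complex conjugate of the coefficient at +k.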

Note also that each coefficient ck is half as large as the signal's amplitude at the frequency k/Tp, which is directly apparent from Equation (4.6). Thus, the fact that we introduce negative frequencies implies that the physical frequency content is split symmetrically, with half appearing as (true) positive frequencies, and half as (virtual) negative frequencies. The same thing occurs for the continuous Fourier transform, which will shortly be described.

## 4.2 Spectra of periodic signals

To describe a periodic signal, either a linear spectrum or a power spectrum is used. The most intuitive spectrum for periodic signals is the linear spectrum of Figure 4.2, which basically consists of a specification of the coefficients for amplitude and phase angle according to Equation (4.3). We will later see that when estimating spectra for periodic signals, one often cannot simply compute this spectrum, since superimposed noise makes averaging necessary, see Section 6.1. Therefore, the so-called power spectrum is more common in FFT analyzers. This spectrum generally consists of the squared RMS value of each sinusoid in the periodic signal, and is obtained by squaring the coefficients αk in Equation (4.4) and dividing by 2. The phase angle is thus missing in the power spectrum.

[Figure 4.2 shows two panels: an amplitude spectrum (in V) and a phase spectrum (in degrees), with discrete lines at 1/Tp, 2/Tp, ...]

Figure 4.2. Amplitude spectrum of a periodic signal. The spectrum contains only the discrete frequencies 1/Tp, 2/Tp, 3/Tp, etc., where Tp is the signal period.

## 4.3 Frequency and time

To understand the difference between the information that can be retrieved from the time domain and the frequency domain, we can study the illustration in Figure 4.3. In the time domain we see the sum of all included sine waves, while in the frequency domain we see each isolated sine wave as a spectral component. Therefore, if we are interested in, for example, the time signal's minimum or maximum value, we must consider the time domain! However, if we want to see which spectral components exist in the signal, we should rather look to the frequency domain. Remember that it is the same signal we see in both cases, that is, all signal information is contained in both domains. Various elements of this information, however, can be more easily identified in one or the other domain.


[Figure 4.3 shows a three-dimensional illustration with time, frequency, and amplitude axes.]

Figure 4.3. The time and frequency domains can be regarded as views of the signal from different angles. Both planes contain all the signal information (actually, for this to be completely accurate, the phase spectrum must also be included along with the above amplitude spectrum).

## 4.4 Random processes

Random processes are signals that vary randomly with time. On average, however, they have constant characteristics such as the mean value, RMS value, and spectrum. The signals
we shall study are assumed to be both stationary and ergodic. A stationary signal is a signal
whose statistical characteristics, the mean value, RMS value, and higher order moments, are
independent of time. An ergodic signal is a subclass of the stationary signals, for which the
time average of one signal in an ensemble is equal to the average of all signals in the
ensemble at a single time. These concepts are central to an understanding of the upcoming
analysis, and therefore they will be described in a bit more detail.

In statistics a random process is called a stochastic process. A stochastic process x(t) implies a conceptual (imagined, abstract) function, which, for example, could be "the voltage which arises due to thermal noise in a carbon film resistor of type XXX at a temperature of 20 °C" or the like. This time function is random in that all real resistors like those in the example, in an experiment where one measures voltage, will exhibit different functions x(t). For one such process one can, for example, calculate the expected value, E[x(t)], which is defined as

$$\mu_x(t) = E\left[x(t)\right] = \int_{-\infty}^{\infty} x\, p_x(x)\, dx \qquad (4.8)$$

where px(x) is the statistical density function of the process. Another common measure is the variance, σx²(t), which is defined as

$$\sigma_x^2(t) = E\left[\left(x(t) - \mu_x(t)\right)^2\right] = \int_{-\infty}^{\infty} \left(x - \mu_x(t)\right)^2 p_x(x)\, dx \qquad (4.9)$$

55
FREQUENCY ANALYSIS

The variance can, for our voltage example, be interpreted as proportional to the power developed in the resistor when the voltage is applied across it. Variance is rarely used in practice; the square root of the variance, the standard deviation, is more common, mostly because this quantity has the same units as the measured signal.

Besides the expected value and variance, we also define the more general central moments, Mn, as

$$M_n\left[x(t)\right] = E\left[\left(x(t) - \mu_x\right)^n\right] \qquad (4.10)$$

The variance is consequently equal to M2. It is important to understand that all of the above values are time-dependent, which, perhaps contrary to intuition, means they are not what we normally mean by averages. If we experimentally want to estimate the expected value according to the definition, we would (in our example) have to measure a large number of resistors and then average the voltage over all measurements at each time instant. This type of averaging is called ensemble averaging, since we carry out the computation over an ensemble of resistors, see Figure 4.4.

[Figure 4.4 shows an ensemble of time records: averaging across the ensemble at one time instant versus averaging a single record over time.]

Figure 4.4. The difference between ensemble values and time average values. The basic definitions from statistics are based on ensemble values. For ergodic signals these values can be replaced by time averages. Physical signals are always ergodic, if they are stationary.

A stationary stochastic process is a process for which the above statistical measures are independent of time. If the signal were stationary, then to measure the expected value of our voltage we would only need to measure each resistor at a single common time, t0. But even for a signal that is merely stationary, we must still measure a (large) number of different resistors.

For those signals we normally measure in physical situations, there is another restriction that
applies in relation to the above, namely that the signals are ergodic. For ergodic signals the
above ensemble values are equal to the corresponding time values, implying that the
definitions can be replaced by time averages. For example, the expected value can be replaced
by

56
Anders Brandt, Introductory Noise & Vibration Analysis

$$E\left[x(t)\right] = \mu_x = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t)\, dt \qquad (4.11)$$

Similarly, the moment of Equation (4.10) can, for an ergodic signal, be calculated by using

$$M_n\left[x(t)\right] = E\left[\left(x - \mu_x\right)^n\right] = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} \left(x - \mu_x\right)^n dt \qquad (4.12)$$

If in our example we assume that the voltage is ergodic, it implies that it would suffice to
measure one resistor during a certain time, and then use these time values according to
Equation (4.11) to calculate the expected value. This is what is done in reality, but at the same
time it is important to understand the fundamental difference between the above definitions
[Bendat & Piersol, 1986; Bendat & Piersol, 1993]. It should also be mentioned that
stationarity is very important to check before analysis of a random signal. Methods for this
check are given in [Bendat & Piersol, 1986; Brandt, 2000].

When the above statistical measures are calculated from experimental data, the following estimators are usually used. If we assume that the signal is stationary and ergodic, the expected value is estimated with the mean value

$$\bar{x} = \frac{1}{N}\sum_{n=1}^{N} x_n \qquad (4.13)$$

where we have simplified the notation to xn = x(n). This notation will often be used below as it simplifies the reading of the equations. The mean value is often denoted by a bar above the variable name. For other variables we use the "hat" symbol ^ to show that we are dealing with an estimate and not a theoretical expected value. For an arbitrary variable, the expected value is estimated by the above formula. The standard deviation, σx, is estimated using σ̂x, defined by

$$\hat{\sigma}_x = \sqrt{\frac{1}{N-1}\sum_{n=1}^{N}\left(x_n - \bar{x}\right)^2} \qquad (4.14)$$

where N − 1 in the denominator is used so that the estimator becomes unbiased, i.e. with an increased number of averages the estimate approaches the true value without bias error. This distinction is not so important in practice, where we normally use more than 20 values in the averaging.

A common value related to the standard deviation is the Root Mean Square (RMS) value of a signal. The name directly implies how the value is calculated, namely as

$$RMS_x = \sqrt{\frac{1}{N}\sum_{n=1}^{N} x_n^2} \qquad (4.15)$$

The RMS value, as is evident from the equation, is equal to the standard deviation when the mean value of the signal is zero. For any dynamic signal, the value also corresponds to the DC voltage by which the signal could be replaced in order to cause the same power dissipation in a resistor. More generally, the RMS value of a dynamic signal is used as a first measure of the "size" of the signal. For a sinusoidal signal it is easy to show that the RMS value corresponds to the peak value divided by the square root of 2.
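The estimators of Equations (4.13)–(4.15) are easy to verify on a sampled sinusoid, for which the RMS value should come out as the peak value divided by √2. The amplitude and sample count below are arbitrary choices for illustration.

```python
import numpy as np

# Mean, standard deviation (N-1 in the denominator) and RMS value per
# Eqs. (4.13)-(4.15), checked on a zero-mean sinusoid of peak amplitude A,
# sampled over a whole number of periods.
A = 3.0
n = np.arange(100000)
x = A * np.sin(2 * np.pi * 0.01 * n)              # 1000 whole periods

N = len(x)
mean = np.sum(x) / N                              # Eq. (4.13)
std = np.sqrt(np.sum((x - mean) ** 2) / (N - 1))  # Eq. (4.14)
rms = np.sqrt(np.sum(x ** 2) / N)                 # Eq. (4.15)

print(mean, std, rms)  # mean ~ 0; std and rms both close to A/sqrt(2) ~ 2.121
```

Since the mean is (numerically) zero here, the standard deviation and the RMS value agree, as stated in the text.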

## 4.5 Spectra of random processes

As opposed to periodic signals, random signals have continuous spectra, that is, they contain all frequencies and not only discrete frequencies. Hence we cannot display the amplitude or RMS value for each individual frequency; instead we must describe the signal with a density-type spectrum (compare, for example, with discrete and continuous probability functions). The unit for noise spectra is therefore, for example, (m/s²)²/Hz if the signal is an acceleration measured in m/s². This spectrum is called Power Spectral Density, PSD. An example is shown in Figure 4.5.

[Figure 4.5 shows a PSD plot titled "200 Averages, 75% Overlap, Δf = 2.5 Hz": acceleration PSD in dB versus frequency, 0–500 Hz.]

Figure 4.5. Power spectral density, PSD, of a random acceleration signal. The spectrum is characterized by being continuous in frequency and is a density function, that is, the units are (m/s²)²/Hz if acceleration is measured in m/s².

The theoretical derivation of the PSD usually involves the autocorrelation function, even if it, strictly speaking, is not necessary. The autocorrelation function, Rxx(τ), for a stochastic, ergodic time signal x(t) is defined as [Bendat & Piersol, 1986]

$$R_{xx}(\tau) = E\left[x(t)\, x(t+\tau)\right] \qquad (4.16)$$

and can be interpreted as a measure of the mutual dependence a signal has with a time-shifted version of itself (shifted by time τ). Similarly, the cross-correlation between two different stochastic, ergodic functions, x(t) and y(t), where x(t) is seen as the input and y(t) the output, is defined as

$$R_{yx}(\tau) = E\left[x(t)\, y(t+\tau)\right] \qquad (4.17)$$
For correlation functions, the following relationships hold for real signals x(t) and y(t):

$$R_{xx}(-\tau) = R_{xx}(\tau) \quad \text{(even function)} \qquad (4.18)$$

$$R_{yx}(-\tau) = R_{xy}(\tau) \qquad (4.19)$$

Now we define the two-sided power spectral density, denoted Sxx(f), as the Fourier transform of the autocorrelation function

$$S_{xx}(f) = \mathcal{F}\left\{R_{xx}(\tau)\right\} = \int_{-\infty}^{\infty} R_{xx}(\tau)\, e^{-j2\pi f \tau}\, d\tau \qquad (4.20)$$

and analogous to this function we define the two-sided cross-spectral density, Syx(f), as

$$S_{yx}(f) = \mathcal{F}\left\{R_{yx}(\tau)\right\} = \int_{-\infty}^{\infty} R_{yx}(\tau)\, e^{-j2\pi f \tau}\, d\tau \qquad (4.21)$$

The negative frequencies in the Fourier transform act such that half of the physical frequency content appears as positive frequencies, and the other half as negative frequencies. Experimentally, therefore, we never measure the two-sided functions, but instead define the single-sided spectral densities, the Auto (Power) Spectral Density, PSD or Gxx(f), and the Cross-Spectral Density, CSD or Gyx(f), as

$$G_{xx}(f) = 2 S_{xx}(f) \quad \text{for } f > 0\,, \qquad G_{xx}(0) = S_{xx}(0) \qquad (4.22)$$

$$G_{yx}(f) = 2 S_{yx}(f) \quad \text{for } f > 0\,, \qquad G_{yx}(0) = S_{yx}(0) \qquad (4.23)$$

For the single-sided spectral densities, Gxx(f) and Gyx(f), it follows directly from the characteristics of the correlation functions according to Equations (4.18) and (4.19) that Gxx(f) is real-valued, and that

$$G_{xy}(f) = G_{yx}^{*}(f) \qquad (4.25)$$

## 4.6 Transient signals

Finally, in addition to the previously mentioned periodic and random signals, we have transient signals. Like the random processes they have continuous spectra. However, as opposed to random signals, transient signals do not continue indefinitely. It is therefore not possible to scale their spectra by the power in the signal (power is of course energy/time). Instead, transient signals are generally scaled by their energy, and thus such spectra can have units of, for example, (m/s²)²s/Hz. The spectrum most commonly used is called Energy Spectral Density, ESD. Because energy is power times time, we obtain the definition of the ESD

$$\text{ESD} = T \cdot \text{PSD} \qquad (4.26)$$

where T is the time it takes to collect one time block, see Section 5.6. The ESD shall be
interpreted through the area under the curve, which corresponds to the energy in the signal. In
Section 6.7 below there is more on spectrum estimation for transient signals.

An alternative linear, and therefore more physical, spectrum for a transient signal is obtained by using the continuous Fourier transform without further scaling. The transient spectrum of a signal x(t) is consequently defined as

$$T_x(f) = \mathcal{F}\left\{x(t)\right\} \qquad (4.27)$$

This is a two-sided spectrum and we will return to a discrete approximation of this spectrum
in Section 6.7.

## 4.7 Interpretation of spectra

What is the usefulness of defining these different spectra? The motivation is naturally that by studying the spectrum we hope to gain some insight into how the signal behaves. In order to understand the signal, we first need an understanding of what can be read from the spectrum. For periodic signals, this is relatively simple, as they basically consist only of a sum of individual sinusoids. By knowing what these signals are, that is, their amplitude, phase, and frequency, we can also recreate the measured signal at any specific time, if we so choose.

We may also want to know, for example, the RMS value of the signal, in order to know how much power the signal generates. This can be done using a formula called Parseval's Theorem, see Appendix D. For a periodic signal, which of course has a discrete spectrum, we obtain the total RMS value by summing the included signals using Equation (4.28),

$$RMS_x = \sqrt{\sum_{k} \left|R_{xk}\right|^2} \qquad (4.28)$$

where |Rxk| is the RMS value of each sinusoid for k = 1, 2, 3, ... The RMS value of a signal consisting of a number of sinusoids is consequently equal to the square root of the sum of the squared RMS values. |Rxk| corresponds to the value of each peak in the linear spectrum.

For a noisy signal we cannot interpret the spectrum in the same way. Such a signal contains all frequencies, which makes it somewhat tedious to count them! Instead, we interpret the area under the PSD in a specific frequency range, see Figure 4.6, which also follows from Parseval's Theorem. To calculate the noise signal's RMS value from the PSD we use Equation (4.29),

$$RMS_x = \sqrt{\int G_{xx}(f)\, df} = \sqrt{\text{area under the curve}} \qquad (4.29)$$
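Equation (4.29) can be sketched numerically. The PSD estimate below is a bare-bones averaged periodogram (no window or overlap) built only from the definitions in this chapter; proper spectrum estimators are the topic of Chapter 6, so this is an illustrative simplification, not the book's recommended method.

```python
import numpy as np

# Sketch of Eq. (4.29): the RMS value of a random signal is the square root
# of the area under its single-sided PSD G_xx. The PSD here is an averaged
# raw periodogram, scaled so that sum(Gxx) * df equals the mean square value.
rng = np.random.default_rng(1)
fs = 1000.0                       # sampling frequency, Hz
N = 1024                          # samples per segment
x = rng.standard_normal(200 * N)  # broadband noise, unit variance

segs = x.reshape(-1, N)
X = np.fft.rfft(segs, axis=1)
Gxx = 2.0 * np.mean(np.abs(X) ** 2, axis=0) / (N * fs)  # single-sided, Eq. (4.22)
Gxx[0] /= 2.0                     # the DC bin is not doubled
Gxx[-1] /= 2.0                    # neither is the Nyquist bin
df = fs / N                       # frequency increment, Hz

rms_from_psd = np.sqrt(np.sum(Gxx) * df)  # sqrt(area under the curve)
rms_direct = np.sqrt(np.mean(x ** 2))
print(rms_from_psd, rms_direct)           # both close to 1.0
```

Because of Parseval's Theorem the two values agree to numerical precision; for unit-variance noise both are close to 1.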

[Figure 4.6 shows a PSD plot titled "200 Averages, 75% Overlap, Δf = 2.5 Hz", with a frequency band under the curve marked, 0–500 Hz.]

Figure 4.6. From a PSD the RMS value can be calculated as the square root of the area under the curve.

In a similar fashion, we can determine the energy in a transient signal from the area under the ESD curve.

Spectral density functions are difficult to interpret, since it is the area under the curve that is interpreted as power or energy. A suitable function for easier interpretation of spectral density functions is therefore the cumulated function. For a PSD, one can for example build the cumulated mean square value, which is calculated as

$$P_{ms}(f) = \int_{0}^{f} G_{xx}(u)\, du \qquad (4.30)$$

Note the similarity between this function and the statistical distribution function, which is equal to the integral of the statistical density function (or the sum if we have a discrete probability distribution). The square root of Pms(f) is consequently equal to the RMS value in the frequency range from the lowest frequency in the spectrum up to the frequency f. Shown in Figure 4.7 is a cumulated PSD scaled by the RMS value. From the function Pms(f), one can easily calculate the RMS value in any frequency interval, for example f ∈ (f1, f2), by forming

$$RMS_x(f_1, f_2) = \sqrt{P_{ms}(f_2) - P_{ms}(f_1)} \qquad (4.31)$$
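A minimal numeric sketch of Equations (4.30)–(4.31), using a synthetic flat PSD (an assumption made purely for illustration): the cumulated mean square is a running sum of Gxx·Δf, and the band RMS follows from differencing it.

```python
import numpy as np

# Sketch of Eqs. (4.30)-(4.31): cumulated mean square of a discretized PSD
# via a running sum, and the RMS value in a band from its difference.
df = 2.5                              # frequency increment, Hz
f = np.arange(0.0, 500.0, df)
Gxx = np.full_like(f, 0.01)           # flat PSD, 0.01 units**2/Hz (synthetic)

P_ms = np.cumsum(Gxx) * df            # cumulated mean square, Eq. (4.30)

def band_rms(P_ms, f, f1, f2):
    """RMS value in the band (f1, f2), per Eq. (4.31)."""
    i1, i2 = np.searchsorted(f, [f1, f2])
    return np.sqrt(P_ms[i2 - 1] - (P_ms[i1 - 1] if i1 > 0 else 0.0))

# For a flat PSD the band RMS is sqrt(Gxx * bandwidth):
print(band_rms(P_ms, f, 100.0, 200.0))  # close to sqrt(0.01 * 100) = 1.0
```

The helper `band_rms` is a hypothetical name introduced here; it simply evaluates the difference in Equation (4.31) on the discretized cumulated curve.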

[Figure 4.7 shows the cumulated mean square value rising with frequency from 0 to 500 Hz.]

Figure 4.7. Cumulated mean square value calculated for the spectral density in Figure 4.5.


## 5 EXPERIMENTAL FREQUENCY ANALYSIS

In practice, frequency analysis in the field of noise and vibration analysis generally makes use of the discrete Fourier transform to estimate spectra. In this chapter we shall show how this transform is used in practice. Before we begin, however, we need to learn a bit about estimation of random (stochastic) variables, and therefore we begin with a brief description of the error measures we will use from now on.

## 5.1 Errors in stochastic variable estimations

When calculating statistical errors, one differentiates between two types: the bias (systematic) error, and the random error [Bendat & Piersol, 1986]. We assume that we shall estimate (that is, measure and calculate) a parameter, φ, which can be for example the power spectral density of a random signal, Gxx. In the theory of statistical variables, the "hat" symbol, ^, is usually used for a variable estimate, so we denote our estimate φ̂ (Ĝxx). We now define the bias error, bφ, as

$$b_{\varphi} = E[\hat{\varphi}] - \varphi \qquad (5.1)$$

that is, the difference between the expected value of our estimate and the "true" value φ. We generally divide this error by the true value to obtain the normalized bias error, εb, as

$$\varepsilon_b = \frac{b_{\varphi}}{\varphi} \qquad (5.2)$$

The random error is defined as the standard deviation, σφ, of our estimate,

$$\sigma_{\varphi} = \sqrt{E\left[\left(\hat{\varphi} - E[\hat{\varphi}]\right)^2\right]} \qquad (5.3)$$

and, as with the bias error, we define the normalized random error, εr, as

$$\varepsilon_r = \frac{\sigma_{\varphi}}{\varphi} \qquad (5.4)$$

In most cases we do not know the true parameter φ, and therefore we generally use our estimated parameter, φ̂, in its place. When we have estimated the normalized random error, εr, we can use it, for small errors (εr < 0.1), to calculate an approximate 95% confidence interval

$$\hat{\varphi}\left(1 - 2\varepsilon_r\right) \leq \varphi \leq \hat{\varphi}\left(1 + 2\varepsilon_r\right) \qquad (5.5)$$


## 5.2 Principles of frequency analysis

When investigating the spectral contents of a signal, there are a few methods available. We can use FFT analysis, described below, or we can use so-called real-time analysis, in which a series of parallel filters measures the frequency content. We can also use several more sophisticated methods which have lately become popular, such as wavelet analysis. All of these methods, however, have a common base: they calculate the RMS value of the signal in a specific frequency band. The principle is shown in Figure 5.1.

[Figure 5.1 shows an octave-band spectrum in dB with bands at 63, 125, 250, 500, 1k, 2k, 4k Hz and a "Tot" (total level) bar.]

Figure 5.1. "Original" method for measuring a spectrum. An adjustable band-pass filter is stepped through a number of center frequencies, and for each frequency band the RMS value is measured with a voltmeter. Today, this method is used particularly in acoustics, see the section on octave bands below.

To see what qualities this type of spectrum estimation has, we can study how an RMS value can be calculated for a band-pass-filtered signal. The RMS value is calculated for a sampled signal using the formula

$$RMS_x = \sqrt{\frac{1}{N}\sum_{n=1}^{N} x_n^2} \qquad (5.6)$$

For a noisy signal, it can be shown [Bendat & Piersol, 1986, 1993] that an RMS value so calculated has a normalized random error given by

$$\varepsilon_r \approx \frac{1}{2\sqrt{BT}} \qquad (5.7)$$

where B is the signal's bandwidth and T is the measurement time, that is, T = NΔt if N is the number of samples in the RMS computation.

The product BT, often called the bandwidth-time product, is central in signal analysis. Clearly
it specifies that, in order to obtain a certain maximum error when calculating the RMS value
(or spectrum value) with a specific frequency resolution, we must measure during a specific
minimum amount of time. The finer the frequency resolution we require, the longer time we
must measure. This compromise follows naturally from the fact that frequency and time are
inversely proportional to each other.
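A small sketch of how Equations (5.5) and (5.7) are used together: pick a bandwidth and measurement time, get the normalized random error, and turn a measured RMS value into an approximate 95% confidence interval. The numbers are illustrative choices, not values from the text.

```python
import math

def rms_random_error(B, T):
    """Normalized random error of an RMS estimate, Eq. (5.7): 1/(2*sqrt(B*T))."""
    return 1.0 / (2.0 * math.sqrt(B * T))

def confidence_interval_95(estimate, eps_r):
    """Approximate 95% interval per Eq. (5.5); only valid for eps_r < 0.1."""
    return estimate * (1 - 2 * eps_r), estimate * (1 + 2 * eps_r)

eps = rms_random_error(B=100.0, T=1.0)  # BT = 100 -> eps = 0.05, i.e. 5 %
lo, hi = confidence_interval_95(10.0, eps)
print(eps, lo, hi)
```

Doubling the measurement time (or the bandwidth) shrinks the error only by a factor of √2, which is the bandwidth-time compromise described above.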


## 5.3 Octave and third-octave band spectra

The original way of measuring a spectrum was to use an adjustable band-pass filter and an AC voltmeter, as was shown in Figure 5.1. To be able to compare spectra from different measurements, the frequencies and bandwidths used were standardized at an early stage. It was at that time natural to choose a constant relative bandwidth, so that the bandwidth increases proportionally with the center frequency. Thus, if we denote the center frequency by fm and the width of the filter by B, we then have

$$\frac{B}{f_m} = \text{constant} \qquad (5.8)$$
The chosen frequencies were distributed into octaves, meaning that each center frequency was chosen as 2 times the previous one, and the width of each band-pass filter was twice as large as that of the previous filter. The center frequency was chosen as the geometric mean of the lower and upper band limits, that is,

$$f_m = \sqrt{f_l \cdot f_u} \qquad (5.9)$$

where fl is the lower frequency limit and fu the upper limit. The resulting relationship between the lower and upper frequency limits for octave bands is

$$f_l = f_m\, 2^{-1/2}\,, \qquad f_u = f_m\, 2^{1/2} \qquad (5.10)$$
In some instances the octave bands give too coarse a spectrum for a signal, in which case a finer frequency division can be used. The most common division is the third-octave band, where every octave is split into three frequency bands, so that

$$f_l = f_m\, 2^{-1/6}\,, \qquad f_u = f_m\, 2^{1/6} \qquad (5.11)$$

More generally, one can split each octave into n parts. These frequency bands are generally called 1/n-octave bands, and their frequency limits are given by

$$f_l = f_m\, 2^{-1/(2n)}\,, \qquad f_u = f_m\, 2^{1/(2n)} \qquad (5.12)$$

The center frequencies for octave and third-octave bands are standardized in [ISO 266], among other places. The standard center frequencies for third-octave bands are ... 10, 12.5, **16**, 20, 25, **31.5**, 40, 50, **63**, 80, 100, **125**, 160, 200, **250** ..., where boldface indicates center frequencies common to the octave bands. It is clear that these frequencies are not exact doublings, but rather rounded values from Equation (5.13) below, in which p is a negative or positive integer.

$$f_m = 1000 \cdot 10^{p/10} \qquad (5.13)$$
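The exact (unrounded) center frequencies of Equation (5.13) and the band edges of Equation (5.11) can be sketched as follows; the standardized ISO 266 values are rounded versions of these.

```python
def third_octave_band(p):
    """Exact third-octave band: center per Eq. (5.13), edges per Eq. (5.11)."""
    fm = 1000.0 * 10.0 ** (p / 10.0)
    return fm * 2.0 ** (-1.0 / 6.0), fm, fm * 2.0 ** (1.0 / 6.0)

for p in (-20, -15, 0):
    fl, fm, fu = third_octave_band(p)
    print(f"p = {p:+d}: fl = {fl:8.2f}, fm = {fm:8.2f}, fu = {fu:8.2f}")
# p = -20 gives fm = 10 Hz; p = -15 gives fm = 31.62 Hz (standardized as 31.5);
# p = 0 gives fm = 1000 Hz. Each band's fu/fl ratio is 2**(1/3).
```

Note how the exact value 31.62 Hz becomes the rounded standard value 31.5 Hz, illustrating the remark above that the standardized frequencies are not exact doublings.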


## 5.4 Time constants

If one measures a band-pass filtered signal that is not stationary, a signal which varies as a function of time is obtained. There is, however, a limit to how fast this signal can change, even if the input varies, because of the band-pass filter's time constant. This value describes how quickly the output rises to (1 − e⁻¹), or about 63%, of the final value when the level of the input signal is suddenly altered. For a band-pass filter with bandwidth B, the time constant, τ, is approximately

$$\tau = \frac{1}{B} \qquad (5.14)$$

For octave and third-octave band measurements, the different frequency bands consequently have different time constants, with a longer time constant for lower frequency bands. To the right in Figure 5.1 a typical octave band spectrum for a vibration signal can be seen. Note that at the far right the so-called total signal level, that is, the signal's RMS value (within a given frequency range), is shown. The position of this bar differs depending on the analyzer manufacturer, but it is usually shown on either side of the octave bands.

## 5.5 Real-time versus serial measurements

To measure an acoustic signal's spectral contents using octave bands, one can in the simplest
case use a regular sound level meter with an attached filter bank, that is, a set of adjustable
filters which, often automatically, steps through the desired frequency range and stores the
result for each frequency band. This type of measurement is called serial since the frequency
bands are measured one after the other. Naturally, this method only works when the signal
(sound) is stationary.

In order for the measurement to go faster, or if the signal is not stationary, one can instead use a real-time analyzer, which is designed with all of the third-octave band filters in parallel, so that the same time data can be used for all frequency bands.

## 5.6 The Discrete Fourier Transform (DFT)

In an FFT analyzer, as evident from the name, the FFT is used to calculate the spectrum. FFT is an abbreviation for Fast Fourier Transform, which is a computation method (algorithm) that calculates the Discrete Fourier Transform (DFT), only in a faster way than directly applying the DFT definition. We shall therefore begin by studying the DFT, and follow with how it is used in an FFT analyzer.

Let us assume that we have a sampled signal x(n) = x(nΔt), where x(n) is written in simplified notation. We further assume that we have collected N samples of the signal, where N is usually an integer power of 2, that is, N = 2^p where p is an integer. There are algorithms for the FFT that do not require this assumption, but they are slower than those which assume an integer power of 2.

The (finite) discrete Fourier transform, X(k) = X(kΔf), of the sampled signal x(n) is usually defined as


$$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j2\pi n k / N} \quad \text{for } k = 0, 1, 2, \ldots, N-1 \qquad (5.15)$$

which we call the forward transform of x(n). To calculate the time signal from the spectrum X(k), we use the inverse Fourier transform,

$$x(n) = \frac{1}{N}\sum_{k=0}^{N-1} X(k)\, e^{j2\pi n k / N} \quad \text{for } n = 0, 1, 2, \ldots, N-1 \qquad (5.16)$$
It should be pointed out that the definition of the DFT presented in Equation (5.15) is not unique. One may find definitions with different scaling factors in front of the sum in the literature. When confronted with new software, one should therefore test a known signal first to find out which definition of the DFT is used. A simple way to test is to create a signal with an integer number of periods and with an N of, say, 1024 samples. See Section 5.7 on how to create such periodicity. By checking the result of an FFT and comparing with the formulae above, the definition used can be identified. The definition according to Equations (5.15) and (5.16) is common, and is the one used, for example, by MATLAB.

The spectrum obtained from the above definition of the DFT is not scaled physically. This is clearly seen by studying the value for k = 0. The frequency 0 corresponds to the DC component in the signal, that is, the average value. But, according to Equation (5.15) above, we have

$$X(0) = \sum_{n=0}^{N-1} x(n) = N \cdot \bar{x} \qquad (5.17)$$

where we let x̄ denote the mean value of x(n). It can thus be concluded that Equation (5.15) must be divided by N in order to be physically meaningful, which is done when a single block X(k) is used directly. As a rule, however, we cannot use only one time block of data, because we have noise in our measurements. Therefore we need to average the spectrum, and to that end other scaling factors are needed, which will be described below.
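The scaling test suggested above can be sketched with numpy.fft, which follows the definitions in Equations (5.15)–(5.16): for a signal with a DC offset plus a sinusoid with an integer number of periods, X(0)/N gives the mean value and the sine peaks give half the amplitude. The amplitude, offset, and period count below are arbitrary test values.

```python
import numpy as np

# Verifying which DFT definition a library uses, as suggested in the text:
# numpy.fft follows Eqs. (5.15)-(5.16), so X(0) = N * mean(x), and a sinusoid
# with an integer number of periods and amplitude A gives peaks of height N*A/2.
N = 1024
n = np.arange(N)
A, periods, offset = 2.0, 8, 0.5
x = offset + A * np.sin(2 * np.pi * periods * n / N)  # integer periods + DC

X = np.fft.fft(x)
print(abs(X[0]) / N)            # DC bin: the mean value, 0.5
print(abs(X[periods]) / N)      # sine peak: A/2 = 1.0
print(abs(X[N - periods]) / N)  # mirrored negative-frequency peak, also 1.0
```

The peak at bin N − periods is the negative-frequency mirror discussed in Section 5.7, and it carries the other half of the sinusoid's amplitude.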
It should also be noted here that the discrete Fourier transform in Equation (5.15) differs substantially from the analog Fourier transform, see also Appendix D. First of all, the DFT is computed from a finite number of samples. Secondly, the DFT is not scaled in the same units as the analog Fourier transform, since the differential dt is missing. The analog Fourier transform of a signal with unit m/s² would have unit m/s, whereas the DFT will have units of m/s². In Chapter 6 this will become clear as we present how to compute scaled spectra from the DFT results.

## 5.7 Periodicity of the Discrete Fourier Transform

As evident from Equation (5.15) above, the discrete Fourier transform X(k) is periodic with period N, that is,

$$X(k) = X(k + N) \qquad (5.18)$$

This result arises because we have sampled the signal, which, according to the sampling theorem, implies that the spectrum becomes periodic on the frequency axis, repeating at every m · fs, where m is an integer and fs the sampling frequency. Similarly, the time signal x(n) according to Equation (5.16) becomes periodic with period N, so that

$$x(n) = x(n + N) \qquad (5.19)$$

[Figure 5.2 shows two panels: (a) a sampled sinusoid with N = 16; (b) the magnitude of its DFT.]

Figure 5.2. Sinusoid with N = 16 and with an integer number of periods in the measurement window (a), and the absolute value of the DFT of the same signal (b). As evident from the figure, the result of the transform is unscaled. Further, the first N/2 values in the DFT correspond to the positive frequencies and the next N/2 values to the negative frequencies, which is clear from the periodicity of the DFT. Note especially that the time signal which has an integer number of periods in the measurement window does not end with the same value as the first sample. Instead, if it were calculated, x(16) would be equal to x(0), see Equation (5.19).

The time signal is periodic due to our calculating only at discrete frequencies, which implies, so to say, a "sampling" in the frequency domain. The first N/2 discrete frequencies in the DFT, k = 0, 1, 2, ..., N/2, correspond to sinusoids with k periods within the measurement time of x(n), that is, T = N·Δt. Hence, the distance on the frequency axis, which is usually called the frequency increment, Δf, is

Δf = 1/T = 1/(N·Δt) = fs/N     (5.20)

These relationships are important to keep straight when we measure with an FFT analyzer, in order to keep track of how long the measurement time is for a certain frequency increment. The values of k which lie between N/2+1 and N−1 actually correspond to negative frequencies, since X(k) repeats every N. In an FFT analyzer, however, we seldom notice this fact, since the built-in transform function only calculates the positive frequencies and thus shows a single-sided spectrum, see Figure 5.2.
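The relationships in Equation (5.20) can be checked with NumPy's fftfreq, which also makes the negative frequencies for k > N/2 explicit (the sampling frequency and block size below are assumed example values):

```python
import numpy as np

fs = 2048.0            # sampling frequency in Hz (assumed)
N = 1024               # block size (assumed)
dt = 1.0 / fs

T = N * dt             # measurement time
df = 1.0 / T           # frequency increment, Eq. (5.20): df = 1/T = fs/N
print(T, df)           # 0.5 2.0

# fftfreq lists the bin frequencies; bins above N/2 come out negative,
# reflecting the periodicity of the DFT.
freqs = np.fft.fftfreq(N, d=dt)
print(freqs[1])             # 2.0, equal to df
print(freqs[N // 2 + 1])    # a negative frequency
```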

Another detail which should be observed is that the periodicity in the DFT is such that if we
shall sample an integer number of periods of a periodic signal, then we must sample it so that
the first sample, n = 0, corresponds to the sample which would arrive at n = N if we sampled
that far and did not stop at n = N − 1. In this way, the periodicity of our signal is continuous,
as in Figure 5.2. It is a common error to neglect this fact and make the signal symmetric in the
time block.


## 5.8 Oversampling in FFT analysis
If we use N time samples, which we usually call the block size or frame size, the DFT results
in half as many, that is, N 12 positive (useable) frequencies, as seen in Figure 5.2. These
frequency values are usually called frequency lines or spectral lines.
Block size (# of time samples)   Corresponding # of spectral lines
256                              101
512                              201
1024                             401
2048                             801
4096                             1601
8192                             3201

Table 5.1. Typical block sizes and corresponding numbers of useable spectral lines when applying FFT.

Because the analog anti-aliasing filter is not ideal, but has some slope after the cutoff frequency, as seen in Figure 5.3, we cannot sample with a sampling frequency which is only 2·B, where B is the bandwidth of the signal. In the FFT analyzer, a "standard" oversampling factor of 2.56 has been established. Thus, we can only use the discrete frequency values up to k = N/2.56. Typical values for the block size and corresponding number of spectral lines are given in Table 5.1. The frequency which corresponds to k = N/2.56 is called the bandwidth, B.
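Table 5.1 follows directly from the factor 2.56 if the line at k = 0 is counted as well (this reading of the table is an assumption; the sketch below simply reproduces the listed numbers):

```python
def useable_lines(N, oversampling=2.56):
    """Spectral lines from k = 0 up to the bandwidth at k = N/oversampling."""
    return int(round(N / oversampling)) + 1   # +1 counts the line at k = 0

for N in (256, 512, 1024, 2048, 4096, 8192):
    print(N, useable_lines(N))
# 256 101, 512 201, 1024 401, 2048 801, 4096 1601, 8192 3201
```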


Figure 5.3. Typical anti-aliasing filter. Because of the filter's non-ideal characteristics, the cutoff frequency, fc,
needs to be set lower than half of the sampling frequency. It is typically set to fs/2.56 in FFT analyzers, for
historical reasons, which approximately corresponds to 0.8·fs/2. In the figure the cutoff frequency is 800 Hz,
which gives a sampling frequency of 2.56·800 = 2048 Hz. Note the non-linear phase characteristic, which will be
discussed in Section 7.7.


## 5.9 Symmetry Properties

An important thing to understand when we manipulate signals is that, to be able to use the inverse transformation of Equation (5.16) above, it is necessary to keep all the values of X(k), for k = 0, 1, 2, ..., N − 1, including both real and imaginary parts. If we discard the frequency values for k > N/2 we can no longer (easily) calculate for example an impulse response. In that case, however, the following qualities of the Fourier transform may be used.

For a real measured signal, the real part of the Fourier transform is an even function and the imaginary part an odd function, that is,

Re{X(−k)} = Re{X(k)}     (5.21)

Im{X(−k)} = −Im{X(k)}     (5.22)

These qualities, called the Fourier transform symmetry properties, are valid exactly, even for the DFT, provided that there does not exist any aliasing distortion. Naturally, these attributes are valid when we "shift down" the upper N/2 spectral lines so they lie to the left of k = 0, so that we have a two-sided spectrum X(k), k = 0, ±1, ±2, ... Thus, according to Equations (5.21) and (5.22), the negative frequencies can be "filled in" before inverse transformation. Close study of the DFT result will show that the value X(N/2), for example value number 513 if the block size is 1024 samples, will be a real-valued number, which is not equal to X(0). It is not equal to X(0) because of the "skew" in the periodic repetition discussed in Section 5.7. Furthermore, this value (for k = N/2) cannot be discarded if an exact reproduction of the time signal x(n) is to be computed. This is often overlooked and only the first N/2 values stored. The correct number of values to store in order to be able to compute back the original time signal is instead N/2+1; in our example with 1024 block size, thus 513 frequency values should be stored. Then all negative frequencies can be filled in accurately.
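These storage rules are easy to verify numerically. The sketch below (NumPy assumed, arbitrary signal) checks the symmetry properties (5.21)-(5.22) and rebuilds the full two-sided spectrum from the N/2 + 1 stored values before the inverse transform; NumPy's rfft/irfft pair performs the same bookkeeping internally:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = rng.standard_normal(N)      # a real measured signal
X = np.fft.fft(x)

# Symmetry, Eqs. (5.21)-(5.22): X(-k) = conj(X(k)); X(-k) is stored at
# index N - k because of the periodicity of the DFT.
for k in range(1, N // 2):
    assert np.isclose(X[N - k], np.conj(X[k]))

# Store only the N/2 + 1 values k = 0..N/2, then "fill in" the negative
# frequencies before the inverse transform.
half = X[: N // 2 + 1]
filled = np.concatenate([half, np.conj(half[-2:0:-1])])
x_back = np.fft.ifft(filled).real
assert np.allclose(x_back, x)
```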

## 5.10 Leakage

What happens if we, for example, compute the DFT with a frequency increment of Δf = 2 Hz, but the measured signal is a sinusoid of 51 Hz, so that the signal frequency lies right between two spectral lines in the DFT (50 and 52 Hz)? The result is that we get one peak at 50 Hz and one at 52 Hz. However, both peaks are lower than the true value, see Figure 5.4. To easily observe this error in the figure, we have scaled the DFT by dividing by N and taking the absolute value of the result, see also Section 6.3. Thus the correct value should be 1/√2 ≈ 0.7, the RMS value of the sine wave.
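The example can be reproduced numerically (NumPy assumed; here fs = 512 Hz and N = 256 give Δf = 2 Hz, and the spectrum is scaled to show RMS values as in Section 6.3):

```python
import numpy as np

fs, N = 512.0, 256                  # df = fs / N = 2 Hz
t = np.arange(N) / fs
x50 = np.sin(2 * np.pi * 50.0 * t)  # 50 Hz: exactly on a spectral line
x51 = np.sin(2 * np.pi * 51.0 * t)  # 51 Hz: right between two lines

def linear_spectrum(x):
    """Single-sided, RMS-scaled magnitude spectrum."""
    return np.sqrt(2) * np.abs(np.fft.fft(x))[: len(x) // 2] / len(x)

peak50 = linear_spectrum(x50).max()
peak51 = linear_spectrum(x51).max()
print(peak50)   # ~0.707, the correct RMS value
print(peak51)   # ~0.45, roughly 40 % too low because of leakage
```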



Figure 5.4. Time block (left) and linear spectrum (right) of a 51 Hz sinusoid. 256 time samples have been used,
giving 128 spectral lines. The frequency increment is 2 Hz. Instead of the expected value 0.7, that is, the RMS
value of a sinusoid with amplitude of 1, we get one peak much too low (in this case 40% too low). There are also
more non-zero frequency values to the left and right of the 50 Hz and 52 Hz values. This phenomenon is called
leakage since the frequency content in the signal has "leaked" out to the sides.

As seen in the figure the resulting peak is far too low, by as much as 40%! Furthermore, it
looks like the frequency content has "leaked" away on either side of the true frequency of 51
Hz. This phenomenon is therefore called leakage.

One way to explain the leakage effect is by studying what happens in the frequency domain when we limit the measurement time to a finite time, which corresponds to multiplying our original, continuous signal by a time window which is 0 outside the interval t ∈ (−T/2, T/2), and 1 within this same interval. A multiplication with this function, w(t), in the time domain is analogous to a convolution with the corresponding Fourier transform, W(f). We thus obtain the weighted Fourier transform of x(t)·w(t), denoted Xw(f), as

## X",(f) = X(f) * lV(f) = JX(u)lV(f - u)du

'-

(5.23)

where * denotes convolution. W(f) is the transform of a rectangular time window, in our case

W(f) = T·sin(πfT)/(πfT) = T·sinc(fT)     (5.24)

To make the convolution easier we exchange the two functions, and make use of

Xw(f) = X(f) * W(f) = W(f) * X(f) = ∫ W(u)·X(f − u) du     (5.25)


Figure 5.5. DFT of a sinusoid which coincides with a spectral line. The convolution between the transform of the
(rectangular) time window, W(f), and the sinusoid's true spectrum, δ(f − f₀), results in a single spectral line.


Figure 5.6. Leakage. The frequency of the sine wave is located at f₀, exactly midway between k = 0 and k = 1,
corresponding to an integer number of periods plus one half period in the time window. When a periodic signal
does not have an integer number of periods in the measurement window, then due to the finite measurement time
the convolution results in too low a frequency peak. At the same time the power seems to "leak" into nearby
frequencies; the total power in the spectrum is still the same.

The convolution between the Fourier transform of our sine wave and that of the time window thus implies that we allow the latter, W(f), to sit at the frequency f₀, that is, we construct W(f − f₀). Then we shift the transform of the (continuous) sinusoid, which is a single spectral line, all the way to the left (k = 0), and multiply the two. At each k for which we wish to compute the convolution, we center the sinusoid spectral line at that same k, multiply the two


and sum all the values (all frequencies). In some cases (see Figure 5.5), for example where f₀ = k₀·Δf, the sinusoid may line up with the time window transform such that for all integers k we obtain only one single value from the convolution, since for all k except k = k₀ the spectral line of the sinusoid corresponds to a zero in W(f − f₀).

Illustrated in Figure 5.6 is the result of the convolution as described above, for the case where the sinusoid lies between two spectral lines (we have an integer number plus one half period in the time window). The Fourier transform of the time window is instead centered at a frequency f₀ which is not a multiple of the frequency increment Δf. We see in the picture that we obtain several frequency lines which slowly decrease to the left and right, and we get two peaks which are the same height, although much lower than the sinusoidal RMS value. It can be shown that if the RMS values of all spectral lines are summed according to Equation (4.28) the result is equal to the RMS value of the sinusoid. Hence, the power in the signal seems to "leak" out to nearby frequencies, giving the name leakage.

## 5.11 The picket-fence effect

An alternative way to look at the discrete spectrum X(k) we obtain from the DFT is to see each spectral line as the result of a band-pass filtering, followed by an RMS computation of the signal after the filter. This process is often illustrated as in Figure 5.7 with a number of parallel band-pass filters, where each filter is centered at the frequency k (or k·Δf if we think in Hz). This method of looking at the DFT is reminiscent of viewing the true spectrum through a picket fence and therefore it is called the picket-fence effect. Note that the picket-fence effect is also analogous to the method for measuring a spectrum with octave band analysis that we discussed in Section 5.3. As was mentioned in Section 5.2 this principle is the only way to measure or compute spectral content.
Figure 5.7. The picket-fence effect. Each value in the discrete spectrum corresponds to the signal's RMS value
after band-pass filtering. If we study a tone lying between two frequencies we will obtain too low a value.
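The "filter shape" of one such band-pass filter can be probed directly by feeding the DFT bin a tone at various offsets from the bin centre (a sketch, NumPy assumed; the block size is arbitrary):

```python
import numpy as np

N = 16
n = np.arange(N)

def bin_response(delta, k=0):
    """Magnitude response of DFT bin k to a complex tone offset by
    delta bins from the bin centre: the picket-fence 'filter shape'."""
    tone = np.exp(2j * np.pi * (k + delta) * n / N)
    return np.abs(np.sum(tone * np.exp(-2j * np.pi * k * n / N))) / N

print(bin_response(0.0))   # 1.0: tone at the bin centre
print(bin_response(0.5))   # ~0.64: tone between two bins, "behind a picket"
print(bin_response(1.0))   # ~0.0: tone at the neighbouring bin centre
```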

## 5.12 Windows for periodic signals

As we saw above, we obtain an amplitude error when estimating a sinusoid with a non-integer
number of periods in the observed time window. This error is caused by the fact that we
truncate the true, continuous signal. By using a weighting function other than the rectangular
one used above in the leakage discussion, we can reduce this amplitude error. This process is
called time-windowing and is illustrated in Figure 5.8.



The time window used in Figure 5.8 is called a Hanning window and is one of the most
common windows used in FFT analyzers. The effect of the window is that it eliminates the
"jump" in the periodic repetition of the time signal, but it is not very intuitive that it would
improve the result. It can be shown, however, that we can estimate the amplitude much better
than with the rectangular window.

Figure 5.8. Illustration of time-windowing with a Hanning window. The window lessens the jump at the ends of
the repeated signal. In a) is shown the periodic repetition (dotted line) of the actual measured signal (solid line).
In b) is shown the Hanning window and in c) is shown the result of the multiplication of the two. In d) is shown
the result of calculating the spectrum with the Hanning window (solid) and without (dotted). Note that when the
window is used, both the amplitude is closer to the true value (0.7), and the leakage has decreased. There still
exists, however, an amplitude error of up to 16 %.

To obtain a better estimate of the amplitude of a pure sinusoid, we need to create a window
with a Fourier transform that is flatter and wider than that of the rectangular window. Through
the years, a large collection of windows has been developed. Many FFT analyzers therefore
have a large number of different windows from which to choose. We shall here examine two
windows, the Hanning and the Flattop window.

The Hanning window is probably the most common window used in FFT analysis. It is defined by a squared sine over half a period, or alternatively by one period of a cosine, such that

w(n) = sin²(πn/N) = (1/2)·[1 − cos(2πn/N)]     for n = 0, 1, 2, ..., N − 1     (5.26)
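The definition and the quoted 16 % worst-case error can be checked numerically (NumPy assumed; the parameter values are arbitrary). The factor 2 applied below compensates for the Hanning window's coherent gain of 0.5, a scaling discussed further in Chapter 6:

```python
import numpy as np

N = 256
n = np.arange(N)
w = np.sin(np.pi * n / N) ** 2        # Hanning window, Eq. (5.26)
assert np.allclose(w, 0.5 * (1 - np.cos(2 * np.pi * n / N)))

# Worst case for the amplitude estimate: a unit sinusoid exactly half-way
# between two spectral lines.
x = np.sin(2 * np.pi * 64.5 * n / N)

X = np.abs(np.fft.fft(x * w))[: N // 2]
amp = 2 * (2 / N) * X                 # single-sided scaling (2/N), times 2
                                      # for the window's coherent gain of 0.5
err = 1.0 - amp.max()
print(err)                            # about 0.15, the "16 %" in the text
```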

The Hanning window's Fourier transform has a main lobe that is wider than that of the rectangular window, so that the maximum error decreases to 16%. This error is of course still too large in
many cases, for example when one desires to measure the amplitude of a sinusoidal signal. In
that case, the Flattop window may be utilized, which yields a maximum error in amplitude of
0.1 %, a bit more acceptable.

The flattop window is not actually a uniquely defined window, but a name given to a group of
windows with similar characteristics. When we use flattop windows in this book, we use a


window called Potter 301 [Potter, 1972]. In Figure 5.9 and Figure 5.10 the three windows,
rectangular, Hanning, and flattop, with their Fourier transforms are shown for comparison.


Figure 5.9. Time windows. Time-domain plots of the rectangular (top), Hanning (middle) and flattop (bottom)
windows.

There is a price to pay for the decreased amplitude uncertainty when we use time windows. The price is in the form of increased frequency uncertainty, which occurs because the better the amplitude uncertainty, the wider the main lobe of the spectrum of the window. Therefore, if we measure a sinusoid with a frequency that matches one of our spectral lines, then the peak will become wider than if we had used the rectangular window. The flattop window, which has the best amplitude uncertainty, also has the widest main lobe. This trade-off is related to the bandwidth-time product, which is explained further in relation to errors in PSD computation with windowing in Section 6.11. Figure 5.11 shows what the DFT of a sinusoid which exactly matches a spectral line looks like, both after windowing with the Hanning window and with the flattop window. As shown, the Hanning window results in 3 spectral lines which are not zero, while the flattop window gives a whole 9 non-zero spectral lines.

Even with windows other than the rectangular, we get leakage when the sinusoid's frequency does not match up with a spectral line, as seen in Figure 5.12. What determines the leakage is how the window's side lobes fall off to each side of the center. The faster the fall-off of the side lobes, the wider the main lobe, which gives yet another compromise. The decrease of the side lobes is usually measured as an approximate slope per octave. The flattop window, because of its large main-lobe width, is only used when it is known that the spectrum does not contain many neighboring frequencies. The Hanning window is therefore often used as a standard window, since it gives a reasonable compromise between amplitude accuracy and frequency resolution.
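The 3 non-zero lines for the Hanning window are easy to reproduce (NumPy assumed; the flattop case would require the Potter 301 coefficients, which are not reproduced here):

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.sin(2 * np.pi * 8 * n / N)              # exactly 8 periods in the block
w_hann = 0.5 * (1 - np.cos(2 * np.pi * n / N))

def nonzero_lines(sig, rel_tol=1e-8):
    """Count spectral lines that are not (numerically) zero."""
    X = np.abs(np.fft.fft(sig))[: N // 2]
    return int(np.sum(X > rel_tol * X.max()))

print(nonzero_lines(x))            # 1: rectangular window, a single line
print(nonzero_lines(x * w_hann))   # 3: the Hanning window spreads the
                                   #    peak over three lines
```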



Figure 5.10. Time windows. The Fourier transforms of the rectangular (top), Hanning (middle) and flattop
(bottom) windows. Compare these plots with the plots in Figure 5.6. The Hanning window's first zero lies at k =
2, which means that for a sinusoid lying between two frequency values in the DFT, the convolution with the
window spectrum will make the value for k = 0 in Figure 5.6 be attenuated much less than for the rectangular
window. For the flattop window almost no attenuation occurs. The uncertainty in amplitude therefore decreases
when using windows with a periodic signal. On the other hand, with wider main lobes, the spectral peaks are
broadened, which results in increased frequency uncertainty, see below.



Figure 5.11. The widening of the frequency peak is the price we pay to get a more accurate amplitude. In the
figure is shown the linear spectrum of a sinusoid with amplitude 1 and frequency that matches the spectral line
marked "0", for both Hanning (solid) and flattop (dashed) windowing. With the flattop window the peak is much
wider than with the Hanning window. For clarity, the two values at k = ±1 for the spectrum after Hanning
windowing are shown with black dots.


Figure 5.12. The spectrum of a sinusoid with frequency right between the frequencies k = −1 and k = 0. Three
different windows have been used: rectangular (solid), Hanning (dotted) and flattop (dashed). From the figure
one can see the compromise between amplitude and frequency uncertainties.

## 5.13 Windows for noise signals

The window's influence on a random signal is a bit different from that described in Section 5.12, since noise signals have continuous spectra, as opposed to periodic signals which have discrete spectra. The result of the convolution between the continuous Fourier transform of the window and that of the noise signal is therefore more complicated to understand. If we recall that convolution implies that the qualities of both signals are "mixed," we can understand that the window will introduce a "ripple" in the noise signal's spectral density. At the same time, we get a smoothing of the spectral density, due to the influence of the main lobe. For narrow frequency peaks, for example if we measure resonant systems with low damping as discussed in Chapter 2, we get an undesired widening of the resonance peaks. More on these bias errors is found in Section 6.7.

The qualities most important to the influence of the window when determining spectral densities are the width of the main lobe and the height of the side lobes. The lower the side lobes are, the less influence we get from nearby frequency content during convolution. The flattop window is never used for random signals, since its main lobe is too wide. The most common is the Hanning window, and many FFT analyzers have no other window implemented for noise analysis, although the so-called Kaiser-Bessel window can also be suitable to use.

## 5.14 Frequency resolution

From the above discussion about widening of frequency peaks, it is clear that with a certain frequency increment, Δf, one may not, after the DFT computation, be able to discern two sinusoids separated in frequency by only one spectral line. For this reason we should differentiate between frequency increment and frequency resolution. Frequency resolution usually implies the smallest frequency difference that is possible to discern between two signals, while the frequency increment is the distance between two frequency values in the DFT computation, that is, Δf. Frequency resolution depends upon the window, while the frequency increment depends only on the measurement time, T.

There is no exact frequency resolution for a particular window. How close two sinusoids can be in frequency, in order for the spectrum still to show two peaks, depends on the width of the window's main lobe, but also on where between the spectral lines the two sine waves are located.
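This can be illustrated with two sinusoids placed a given number of frequency increments apart (a sketch, NumPy assumed; the Hanning window and bin positions are arbitrary choices). With the sinusoids only 2Δf apart the Hanning main lobes merge and no dip appears between the peaks, whereas a rectangular window would still separate them:

```python
import numpy as np

N = 256
n = np.arange(N)
w = 0.5 * (1 - np.cos(2 * np.pi * n / N))    # Hanning window

def valley_depth(k1, k2):
    """Spectrum magnitude midway between two unit cosines at bins k1 and
    k2, relative to the peak; 1.0 means no dip at all (not resolved)."""
    x = np.cos(2 * np.pi * k1 * n / N) + np.cos(2 * np.pi * k2 * n / N)
    X = np.abs(np.fft.fft(x * w))[: N // 2]
    return X[(k1 + k2) // 2] / X.max()

print(valley_depth(40, 42))   # ~1.0: only 2 df apart, not resolved
print(valley_depth(40, 48))   # ~0.0: well separated, a clear dip
```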

## 5.15 Summary of the DFT

It is not so easy to keep clear all these concepts and their influence on the discrete Fourier transform. Therefore, to make things easier we will finish this chapter with a summary [after N. Thrane, 1979]. In Figure 5.13 the different steps in the DFT are shown and the following text explains the different steps.

We start with a continuous time signal as in Figure 5.13 (A.1). In Figure 5.13 (B.1) is shown the Fourier transform of this continuous (infinite) signal, which is of course also continuous, but band-limited so that we fulfil the sampling theorem. For the sake of simplicity we (and Thrane) have used a time function which is a Gaussian function, which has the same shape in time and frequency.

Figure 5.13. Summary of the DFT. See text for explanation. [After Thrane, 1979]

The discrete sampling we then carry out is equivalent to multiplying the signal by an ideal train of pulses with unity value at each sampling instant and zeros between, see Figure 5.13 (A.2) and (A.3). In the frequency domain, this operation corresponds to a convolution with the equivalent Fourier transform, which is a train of pulses at multiples of the sampling frequency, fs. We consequently obtain a repetition of the spectrum at each k·fs. This is actually a proof of the sampling theorem, since if the bandwidth of the original spectrum were wider than ±fs/2, the periodic repetitions of the spectrum would overlap, see Figure 5.13 (B.2) and (B.3).

The next step is measuring only during a finite time, which in the time domain is equivalent to multiplying by a rectangular window as in Figure 5.13 (A.4) and (A.5). In the frequency domain this operation is equivalent to the convolution with a sinc function as in (B.4) and (B.5). This is where the uncertainty in amplitude arises in the frequency domain, which can be seen in the ripple on the spectrum in (B.5).

The final step is carried out in the frequency domain, (B.6) and (B.7). With the DFT we calculate only discrete frequencies. This operation is equivalent, as in (A.2) above, to a multiplication with a train of pulses, only now with frequency increment Δf = 1/T. In the time domain this step implies a convolution with a train of pulses with separation T, as in (A.6), which finally gives us the periodicity in the time domain in Figure 5.13 (A.7).
