4 FREQUENCY ANALYSIS
During frequency analysis, three different types of signals must theoretically and practically
be handled in different ways. These signals are
• transient signals, e.g. shocks arising when a train passes rail joints
• periodic signals, which can be described by Fourier series
• random (stochastic) signals, which are described by statistical measures
Figure 4.1. Using Fourier's theory, every periodic signal can be split into a (potentially infinite) number of sinusoidal signals. Shown in the figure is a periodic signal which consists of the sum of the three frequencies 1/T_p, 2/T_p, 3/T_p, where T_p is the period of the signal.
The Fourier series is mathematically formulated so that every periodic signal x_p(t) can be written as

x_p(t) = a_0/2 + Σ_{k=1}^∞ [ a_k cos(2πkt/T_p) + b_k sin(2πkt/T_p) ]    (4.1)

with coefficients

a_k = (2/T_p) ∫ x_p(t) cos(2πkt/T_p) dt
b_k = (2/T_p) ∫ x_p(t) sin(2πkt/T_p) dt    (4.2)

where the integration occurs over an arbitrary period of x_p(t). To make the equation easier to interpret physically, one can also describe Equation (4.1) as a sinusoid at each frequency, where the phase angle for each sinusoid is described by the variable φ_k. We then obtain
Anders Brandt. Introductory Noise & Vibration Analysis
x_p(t) = a_0/2 + Σ_{k=1}^∞ X_k sin(2πkt/T_p + φ_k)    (4.3)

where a_0 is the same as in Equation (4.1). Comparing Equations (4.1), (4.2), and (4.3) we see that the coefficients in Equation (4.3) can be obtained from a_k and b_k in Equation (4.2) through
X_k = √(a_k² + b_k²)
φ_k = arctan(a_k / b_k)    (4.4)
By making use of complex coefficients, c_k, instead of a_k and b_k, the Fourier series can alternatively be written as a complex sum as in Equation (4.5).

x_p(t) = Σ_{k=−∞}^{∞} c_k e^{j2πkt/T_p}    (4.5)
In this equation

c_0 = a_0 / 2

c_k = (1/2)(a_k − j·b_k) = (1/T_p) ∫_{t_0}^{t_0+T_p} x_p(t) e^{−j2πkt/T_p} dt    (4.6)
and the integration occurs over an arbitrary period of the signal x_p(t) as before. Note in Equation (4.5) that the summation occurs over both positive and negative frequencies, that is, k = 0, ±1, ±2, … Since the left side of the equation is real (we assume that the signal x_p is an ordinary, real measured signal), the right side must also be real. Because the cosine function is an even function and the sine function is odd, the coefficients c_k must consequently comply with
Re[c_−k] = Re[c_k]
Im[c_−k] = −Im[c_k]    (4.7)
Note also that each coefficient c_k is half as large as the signal's amplitude at frequency k, which is directly apparent from Equation (4.6). Thus, the fact that we introduce negative frequencies implies that the physical frequency contents are split symmetrically, with half as
(true) positive frequencies, and half as (virtual) negative frequencies. The same thing occurs
for the continuous Fourier transform, which will shortly be described.
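A small numerical sketch of Equations (4.5) through (4.7): the coefficient integral in Equation (4.6) can be approximated by a Riemann sum over one period. The signal below (a 2 V amplitude cosine at frequency 2/T_p) and the point count are hypothetical choices for illustration only.

```python
import math, cmath

Tp = 1.0            # period in seconds (assumed)
M = 1000            # number of integration points over one period
dt = Tp / M

def x_p(t):
    # hypothetical periodic signal: 2 V amplitude at frequency 2/Tp
    return 2.0 * math.cos(2.0 * math.pi * 2.0 * t / Tp)

def c(k):
    # Equation (4.6): c_k = (1/Tp) * integral over one period of
    # x_p(t) * exp(-j 2 pi k t / Tp) dt, here as a Riemann sum
    s = 0j
    for n in range(M):
        t = n * dt
        s += x_p(t) * cmath.exp(-1j * 2.0 * math.pi * k * t / Tp) * dt
    return s / Tp

c2, cm2 = c(2), c(-2)
print(abs(c2))   # ~1.0: half the 2 V amplitude, as stated in the text
```

The coefficient magnitude comes out as half the amplitude, and c_{−k} mirrors c_k as required by Equation (4.7).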
Figure 4.2. Amplitude spectrum (in V) and phase spectrum (in degrees) of a periodic signal. The spectrum contains only the discrete frequencies 1/T_p, 2/T_p, 3/T_p, etc., where T_p is the signal period.
Figure 4.3. The time and frequency domains can be regarded as views of the signal from different angles. Both planes contain all the signal information (actually, for this to be completely accurate, the phase spectrum must also be included along with the above amplitude spectrum).
process one can, for example, calculate the expected value, E[x(t)], which is defined as

E[x(t)] = ∫_{−∞}^{∞} x · p_x(x) dx    (4.8)

where p_x(x) is the statistical density function of the process. Another common measure is the variance, σ_x²(t), which is defined as

σ_x²(t) = E[(x(t) − E[x(t)])²] = ∫_{−∞}^{∞} (x − E[x(t)])² p_x(x) dx    (4.9)
The variance can, for our voltage example, be interpreted as proportional to the power developed in the resistor when the voltage is applied across it. The variance is rarely used in practice, but the square root of the variance, the standard deviation, is more common, mostly because this quantity has the same units as the measured signal.
Besides the expected value and variance, we also define the more general central moments, M_i, as

M_i = E[(x(t) − E[x(t)])^i]    (4.10)

The variance is consequently equal to M_2. It is important to understand that all of the above values are time-dependent ensemble measures, implying, possibly against the intuitive understanding of many, not what we normally mean by an average. If we experimentally want to estimate the expected
value according to the definition, we would (in our example) have to measure a large number
of resistors and then average the voltage over all measurements at each time instant. This
type of averaging is called ensemble averaging, since we carry out a computation for an
ensemble of resistors, see Figure 4.4.
Figure 4.4. The difference between ensemble values and time average values. The basic definitions from
statistics are based on ensemble values. For ergodic signals these values can be replaced by time averages.
Physical signals are always ergodic, if they are stationary.
A stationary stochastic process is a process for which the above statistical measures are independent of time. For example, if the signal was stationary, then to measure the expected value of our voltage we would only need to measure each resistor at a single common time, t_0. But for a signal that is only stationary (and not ergodic), we must still measure a (large) number of different resistors.
For those signals we normally measure in physical situations, there is another restriction that applies in addition to the above, namely that the signals are ergodic. For ergodic signals the above ensemble values are equal to the corresponding time averages, implying that the definitions can be replaced by time averages. For example, the expected value can be replaced by

E[x(t)] = lim_{T→∞} (1/T) ∫_0^T x(t) dt    (4.11)
Similarly, the moment of Equation (4.10) can, for an ergodic signal, be calculated by using

M_i = lim_{T→∞} (1/T) ∫_0^T (x(t) − E[x(t)])^i dt    (4.12)
If in our example we assume that the voltage is ergodic, it implies that it would suffice to
measure one resistor during a certain time, and then use these time values according to
Equation (4.11) to calculate the expected value. This is what is done in reality, but at the same
time it is important to understand the fundamental difference between the above definitions
when reading literature in the field. For further reading about statistical concepts, consider
[Bendat & Piersol, 1986; Bendat & Piersol, 1993]. It should also be mentioned that
stationarity is very important to check before analysis of a random signal. Methods for this
check are given in [Bendat & Piersol, 1986; Brandt, 2000].
When the above statistical measures are calculated from experimental data, the following
estimations are usually used. If we assume that the signal is stationary and ergodic, the
expected value is estimated with the mean value

x̄ = (1/N) Σ_{n=1}^N x_n    (4.13)

where we have simplified the notation to x_n = x(n). This notation will often be used below as it simplifies the reading of the equations. The mean value is often denoted by a line above the variable name. For other variables we use the "hat" symbol ^ to show that we are dealing with an estimate and not a theoretical expected value. For an arbitrary variable, the expected value is estimated by the above formula. The standard deviation, σ_x, is estimated using σ̂_x, defined by

σ̂_x = √( (1/(N−1)) Σ_{n=1}^N (x_n − x̄)² )    (4.14)
where N − 1 in the denominator is used so that the estimator becomes consistent, i.e. with an increased number of averages the estimation should approach the true value, without bias error. This distinction is not so important in practice, where we normally use more than 20 values in the averaging.
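A brief sketch of the estimators in Equations (4.13) and (4.14), applied to hypothetical pseudo-random data whose true mean is 5.0 and true standard deviation 2.0:

```python
import math, random

# hypothetical stationary, ergodic record: N Gaussian samples
random.seed(1)
x = [random.gauss(5.0, 2.0) for _ in range(5000)]
N = len(x)

x_bar = sum(x) / N                                               # Eq. (4.13)
s_hat = math.sqrt(sum((xn - x_bar) ** 2 for xn in x) / (N - 1))  # Eq. (4.14)
print(x_bar, s_hat)   # close to the true values 5.0 and 2.0
```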
A common value related to the standard deviation is the Root Mean Square (RMS) value of a signal. The name directly implies how the value is calculated, namely as

RMS_x = √( (1/N) Σ_{n=1}^N x_n² )    (4.15)
The RMS value, as is evident from the equation, is equal to the standard deviation when the mean value of the signal is zero. For any dynamic signal, the value also corresponds to the DC voltage which the signal could be replaced by in order to cause the same power dissipation in a resistor. More generally, the RMS value of a dynamic signal is used as a first measure of the "size" of the signal. For a sinusoidal signal it is easy to show that the RMS value corresponds to the peak value divided by the square root of 2.
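That last claim is easy to check numerically: sampling one full period of a unit-amplitude sinusoid and applying Equation (4.15) gives 1/√2.

```python
import math

# one full period of a unit-amplitude sine, N samples
N = 1000
x = [math.sin(2.0 * math.pi * n / N) for n in range(N)]
rms = math.sqrt(sum(v * v for v in x) / N)   # Eq. (4.15)
print(rms)   # ~0.7071, i.e. the peak value 1 divided by sqrt(2)
```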
Figure 4.5. Power spectral density, PSD, of a random acceleration signal. The spectrum is characterized by being continuous in frequency and is a density function, that is, the units are (m/s²)²/Hz if acceleration is measured in m/s².
The theoretical derivation of the PSD usually involves the autocorrelation function, even if it, strictly speaking, is not necessary. The autocorrelation function, R_xx(τ), for a stochastic, ergodic time signal x(t) is defined as [Bendat & Piersol, 1986]

R_xx(τ) = E[x(t) x(t + τ)]    (4.16)

and can be interpreted as a measure of the mutual dependence a signal has with a time-shifted version of itself (shifted by time τ). Similarly, the cross-correlation between two different
stochastic, ergodic functions, x(t) and y(t), where x(t) is seen as the input and y(t) the output, is defined as

R_yx(τ) = E[x(t) y(t + τ)]    (4.17)
For correlation functions, the following relationships hold for real signals x(t) and y(t):

R_xx(−τ) = R_xx(τ)    (4.18)

R_yx(−τ) = R_xy(τ)    (4.19)
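For an ergodic signal, R_xx(τ) can be estimated by a time average over a sampled record. The sketch below uses a hypothetical 4 Hz sinusoid of amplitude 3 sampled at 64 Hz, for which theory gives R_xx(τ) = (A²/2) cos(2πf_0τ); the circular indexing is valid here only because an integer number of periods fits in the record.

```python
import math

fs, f0, A = 64.0, 4.0, 3.0   # hypothetical sampling rate, frequency, amplitude
N = 4096                     # an integer number of periods fits in the record

x = [A * math.sin(2.0 * math.pi * f0 * n / fs) for n in range(N)]

def R_xx(m):
    # time-average estimate of R_xx at lag tau = m/fs (circular)
    return sum(x[n] * x[(n + m) % N] for n in range(N)) / N

print(R_xx(0))   # ~A^2/2 = 4.5, the mean square of the signal
print(R_xx(8))   # lag of half a period: ~-4.5
```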
Now we define the two-sided power spectral density, denoted S_xx(f), as the Fourier transform of the autocorrelation function

S_xx(f) = F{R_xx(τ)} = ∫_{−∞}^{∞} R_xx(τ) e^{−j2πfτ} dτ    (4.20)

and analogous to this function we define the two-sided cross-spectral density, S_yx(f), as

S_yx(f) = F{R_yx(τ)} = ∫_{−∞}^{∞} R_yx(τ) e^{−j2πfτ} dτ    (4.21)
The negative frequencies in the Fourier transform act such that half of the physical frequency content appears as positive frequencies, and the other half as negative frequencies. Therefore, experimentally we never measure the two-sided functions, but instead define the single-sided spectral densities, Auto (Power) Spectral Density, PSD or G_xx(f), and Cross-Spectral Density, CSD or G_yx(f), as

G_xx(f) = 2·S_xx(f) for f > 0;  G_xx(0) = S_xx(0);  G_xx(f) = 0 for f < 0    (4.25)

with G_yx(f) defined analogously from S_yx(f).
possible to scale their spectra by the power in the signal (power is of course energy/time).
In~tead, transient signals ar~ ~enerally scaled by their energy, and thus such.spectra can have
umts of for example (m/sts/Hz. The spectrum most commonly used IS called Energy
Spectral Density, ESD. Because energy is power times time, we obtain the definition of the
ESD
where T is the time it takes to collect one time block, see Section 5.6. The ESD shall be
interpreted through the area under the curve, which corresponds to the energy in the signal. In
Section 6.7 below there is more on spectrum estimation for transient signals.
An alternative linear, and therefore more physical, spectrum for a transient signal is obtained by using the continuous Fourier transform without further scaling. The transient spectrum of a signal x(t) is consequently defined as

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt    (4.27)

This is a two-sided spectrum and we will return to a discrete approximation of this spectrum in Section 6.7.
We may also want to know for example the RMS value for the signal, in order to know how
much power the signal generates. This can be done using a formula called Parseval's
Theorem, see Appendix D. For a periodic signal, which has of course a discrete spectrum, we
obtain its total RMS value by summing the included signals using Equation (4.28),

RMS_x = √( Σ_k |R_xk|² )    (4.28)

where |R_xk| is the RMS value of each sinusoid for k = 1, 2, 3, … The RMS value of a signal consisting of a number of sinusoids is consequently equal to the square root of the sum of the squared RMS values. |R_xk| corresponds consequently to the value of each peak in the linear spectrum.
For a noisy signal we cannot interpret the spectrum in the same way. This signal contains all
frequencies, which makes it a bit tedious to count them! Instead, we interpret the area under
the PSD in a specific frequency range, see Figure 4.6, which also follows from Parseval's
Theorem. To calculate the noise signal's RMS value from the PSD we use Equation (4.29),
RMS_x = √( ∫ G_xx(f) df ) = √(area under the curve)    (4.29)
Figure 4.6. From a PSD the RMS value can be calculated as the square root of the area under the curve.
In a similar fashion, we can determine the energy in a transient signal by calculating the square root of the area under the ESD curve.
Spectral density functions are difficult to interpret since it is the area under the curve that is interpreted as power or energy. A suitable function for easier interpretation of spectral density functions is therefore the cumulated function. For a PSD, one can for example build the cumulated mean square value, which is calculated as

P_ms(f) = ∫_0^f G_xx(u) du    (4.30)
Note the similarity between this function and the statistical distribution function, which is equal to the integral of the statistical density function (or the sum if we have a discrete probability distribution). The function P_ms(f) consequently gives the mean square value, and its square root the RMS value, in the frequency range from the lowest frequency in the spectrum up to the frequency f. Shown in Figure 4.7 is a cumulated PSD scaled by the RMS value. In a diagram of the function P_ms(f), one can easily calculate the RMS value in any frequency interval, for example f ∈ (f₁, f₂), by forming

RMS_x(f₁, f₂) = √( P_ms(f₂) − P_ms(f₁) )    (4.31)
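On a discrete PSD, the cumulated mean square value of Equation (4.30) is formed by accumulating G_xx·Δf bin by bin. The sketch below uses a hypothetical flat PSD of 0.01 (units²/Hz) from 0 to 500 Hz with 2 Hz bins.

```python
import math

df = 2.0                   # frequency increment, Hz (hypothetical)
G = [0.01] * 250           # flat PSD bins covering 0-500 Hz (hypothetical)

# cumulate G_xx * df bin by bin, as in Eq. (4.30)
P_ms = []
acc = 0.0
for g in G:
    acc += g * df
    P_ms.append(acc)

rms_total = math.sqrt(P_ms[-1])   # sqrt of the area under the whole curve

# RMS in the band 100-200 Hz: sqrt(P_ms(f2) - P_ms(f1))
i1, i2 = int(100 / df), int(200 / df)
rms_band = math.sqrt(P_ms[i2 - 1] - P_ms[i1 - 1])
print(rms_total, rms_band)
```

With these numbers the total RMS is √(0.01·500) = √5 and the band RMS is √(0.01·100) = 1.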
Figure 4.7. Cumulated mean square value calculated for the spectral density in Figure 4.5.
b_φ = E[φ̂] − φ    (5.1)

that is, the difference between the expected value of our estimate and the "true" value φ. We generally divide this error by the true value to obtain the normalized bias error, ε_b, as

ε_b = b_φ / φ    (5.2)
The random error is defined as the standard deviation, σ_r, of the difference between our estimated variable and the true variable

σ_r = √( E[(φ̂ − φ)²] )    (5.3)

and, as with the bias error, we define the normalized random error, ε_r, as

ε_r = σ_r / φ    (5.4)
In most cases we do not know the true parameter φ, and therefore we generally use our estimated parameter, φ̂, in its place. When we have estimated the normalized random error, ε_r, we can use it, for small errors (ε_r < 0.1), to calculate an approximate 95% confidence interval

φ̂(1 − 2ε_r) ≤ φ ≤ φ̂(1 + 2ε_r)    (5.5)
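A minimal sketch of this bookkeeping, with hypothetical numbers: an estimate with a normalized random error of 5 %, for which the approximate 95 % interval is φ̂(1 ± 2ε_r).

```python
phi_hat = 4.8   # hypothetical estimate (e.g. a PSD value)
eps_r = 0.05    # hypothetical normalized random error, must be < 0.1

# approximate 95 % confidence interval for small random errors
lower = phi_hat * (1.0 - 2.0 * eps_r)
upper = phi_hat * (1.0 + 2.0 * eps_r)
print(lower, upper)
```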
EXPERIMENTAL FREQUENCY ANALYSIS
Figure 5.1. "Original" method for measuring spectrum. An adjustable bandpass filter is stepped through a
number of center frequencies, and for each frequency band the RMS value is measured with a voltmeter. Today,
this method is used particularly in acoustics, see the section on octave bands below.
To see what qualities this type of spectrum estimation has, we can study how an RMS value
can be calculated for a bandpass-filtered signal. The RMS value is calculated for a sampled signal using the formula

R_x = √( (1/N) Σ_{n=1}^N x_n² )    (5.6)
For a noisy signal, it can be shown [Bendat & Piersol, 1986, 1993] that an RMS value so calculated has a normalized random error given by

ε_r ≈ 1/(2√(BT))    (5.7)

where B is the signal's bandwidth and T is the measurement time, that is, T = NΔt if N is the number of samples in the RMS computation.
The product BT, often called the bandwidthtime product, is central in signal analysis. Clearly
it specifies that, in order to obtain a certain maximum error when calculating the RMS value
(or spectrum value) with a specific frequency resolution, we must measure during a specific
minimum amount of time. The finer the frequency resolution we require, the longer time we
must measure. This compromise follows naturally from the fact that frequency and time are
inversely proportional to each other.
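Equation (5.7) can be rearranged to show the cost of finer resolution directly: the measurement time needed to reach a given error grows as the bandwidth shrinks. The numbers below are hypothetical.

```python
def required_time(B, eps_r):
    # from eps_r = 1 / (2 sqrt(B T)), solved for T:  T = 1 / (4 B eps_r^2)
    return 1.0 / (4.0 * B * eps_r ** 2)

t_coarse = required_time(10.0, 0.05)   # B = 10 Hz, 5 % error
t_fine = required_time(1.0, 0.05)      # B = 1 Hz: ten times finer resolution
print(t_coarse, t_fine)                # 10 s versus 100 s of data
```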
B / f_m = constant    (5.8)
The chosen frequencies were distributed into octaves, meaning that each center frequency was chosen as 2 times the previous one, and the width of each bandpass filter was twice as large as that of the previous filter. The center frequency was related to the lower and upper limits through the geometrical average, that is

f_m = √(f_l · f_u)    (5.9)

where f_l is the lower frequency limit and f_u the upper limit. The resulting relationship between the lower and upper frequency limits for octave bands is

f_l = f_m · 2^(−1/2)
f_u = f_m · 2^(1/2)    (5.10)
In some instances the octave bands give too coarse a spectrum for a signal, in which case a finer frequency division can be used. The most common division used is the third-octave band, where every octave is split into three frequency bands, so that

f_l = f_m · 2^(−1/6)
f_u = f_m · 2^(1/6)    (5.11)

More generally, one can split each octave into n parts. These frequency bands are generally called 1/n-octave bands and their frequency limits are given by

f_l = f_m · 2^(−1/2n)
f_u = f_m · 2^(1/2n)    (5.12)
The center frequencies for octave and third-octave bands are standardized in [ISO 266], among other places. The standard center frequencies for third-octave bands are … 10, 12.5, 16, 20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250 … where boldface stands for center frequencies common to the octave bands. It is clear that these frequencies are not exact doublings, but rather rounded values from Equation (5.13) below. In Equation (5.13), p is a negative or positive integer number.

f_m = 1000 · 10^(p/10)    (5.13)
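A short sketch of Equations (5.11) and (5.13): the exact center frequency behind a nominal third-octave band, and its band limits. The band chosen (p = −15, the nominal 31.5 Hz band) is just an example.

```python
def center(p):
    # Eq. (5.13): exact third-octave center frequency
    return 1000.0 * 10.0 ** (p / 10.0)

def limits(f_m):
    # Eq. (5.11): lower and upper band limits
    return f_m * 2.0 ** (-1.0 / 6.0), f_m * 2.0 ** (1.0 / 6.0)

f_m = center(-15)    # exact value behind the nominal 31.5 Hz band
f_l, f_u = limits(f_m)
print(f_m)           # ~31.62, rounded to the nominal 31.5 Hz
print(f_u / f_l)     # ~1.26 = 2^(1/3): three bands per octave
```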
T = 1/B    (5.14)
For octave and third-octave band measurements, the different frequency bands consequently have different time constants, that is, a longer time constant for a lower frequency band. To the right in Figure 5.1 a typical octave band spectrum for a vibration signal can be seen. Note that to the far right the so-called total signal level, that is, the signal's RMS value (within a given frequency range), is shown. The position of this bar differs depending on analyzer manufacturer, but it is usually shown on either side of the octave bands.
In order for the measurement to go faster, or if the signal is not stationary, one can instead use a real-time analyzer, which is designed with all of the third-octave bands in parallel, so that the same time data can be used for all frequency bands.
X(k) = Σ_{n=0}^{N−1} x(n) e^(−j2πnk/N)   for k = 0, 1, 2, …, N−1    (5.15)

which we call the forward transform of x(n). To calculate the time signal from the spectrum X(k), we use the inverse Fourier transform,

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) e^(j2πnk/N)   for n = 0, 1, 2, …, N−1    (5.16)
It should be pointed out that the definition of the DFT presented in Equation (5.15) is not unique. One may find definitions with different scaling factors in front of the sum in the literature. When confronted with new software, one should therefore test a known signal first to find out which definition of the DFT is used. A simple way to test is to create a signal with an integer number of periods and with an N of, say, 1024 samples. See Section 5.7 on how to create such periodicity. By checking the result of an FFT and comparing with the formulae above, the definition used can be identified. The definition according to Equations (5.15) and (5.16) is common, and is the one used, for example, by MATLAB.
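The DFT pair of Equations (5.15) and (5.16) can be computed directly from the definitions (an FFT returns the same numbers, just faster); the data block below is arbitrary, and the round trip must recover it.

```python
import cmath

def dft(x):
    # forward transform, Eq. (5.15)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    # inverse transform, Eq. (5.16), with the 1/N factor
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * n * k / N)
                for k in range(N)) / N for n in range(N)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, 2.0, -2.0, 1.0]   # arbitrary data block
X = dft(x)
x_back = idft(X)

round_trip_err = max(abs(a - b) for a, b in zip(x, x_back))
print(X[0])            # = N * mean(x), here 6 (as a complex number)
print(round_trip_err)  # ~0: the inverse recovers the time signal
```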
The spectrum obtained from the above definition of the DFT is not scaled physically. This is clearly seen by studying the value for k = 0. The frequency 0 corresponds to the DC component in the signal, that is, the average value. But, according to Equation (5.15) above we have

X(0) = Σ_{n=0}^{N−1} x(n) = N · x̄    (5.17)

where we let x̄ denote the mean value of x(n). It can thus be concluded that Equation (5.15) must be divided by N in order to be physically meaningful, which is done when using only a single DFT X(k). As a rule, however, we cannot measure only one time block of data because we have noise in our measurements. Therefore we need to average the signal, and to that end other scaling factors are needed, which will be described below.
It should also be noted here that the discrete Fourier transform in Equation (5.15) differs substantially from the analog Fourier transform, see also Appendix D. First of all, the DFT is computed from a finite number of samples. Secondly, the DFT is not scaled in the same units as the analog Fourier transform, since the differential dt is missing. The analog Fourier transform of a signal with units of m/s² would have units of m/s, whereas the DFT will have units of m/s². In Chapter 6 this will become clear as we present how to compute scaled spectra from the DFT results.
This result arises because we have sampled the signal, which, according to the sampling theorem, implies that we make it periodic on the frequency axis, so that it repeats at every m·f_s, where m represents all integer numbers, i.e.

X(k) = X(k + N)    (5.18)

Similarly, the time signal x(n) according to Equation (5.16) becomes periodic with period N, so that

x(n) = x(n + N)    (5.19)
Figure 5.2. Sinusoid with N = 16 and with an integer number of periods in the measurement window (a), and the absolute value of the DFT of the same signal (b). As evident from the figure, the result of the transform is unscaled. Further, the first N/2 values in the DFT correspond to the positive frequencies and the next N/2 values to the negative frequencies, which is clear from the periodicity of the DFT. Note especially that the time signal which has an integer number of periods in the measurement window does not end with the same value as the first sample. Instead, if it was calculated, x(16) would have been equal to x(0), see Equation (5.19).
The time signal is periodic due to our calculating only at discrete frequencies, which implies, so to say, a "sampling" in the frequency domain. The first N/2 discrete frequencies in the DFT, k = 0, 1, 2, …, N/2, correspond to sinusoids with k periods within the measurement time of x(n), that is, T = N·Δt. Hence, the distance on the frequency axis, which is usually called the frequency increment, Δf, is

Δf = 1/T = 1/(N·Δt)    (5.20)
These relationships are important to keep straight when we measure with an FFT analyzer, to keep count of how long the measurement time is for a certain frequency increment. The k which lie between N/2+1 and N−1 correspond actually to negative frequencies, since X(k) repeats every N. In an FFT analyzer, however, we seldom notice this fact, since the built-in transform function only calculates the positive frequencies and thus shows a single-sided spectrum, see Figure 5.2.
Another detail which should be observed is that the periodicity in the DFT is such that if we shall sample an integer number of periods of a periodic signal, then we must sample it so that the first sample, n = 0, corresponds to the sample which would arrive at n = N if we sampled that far and did not stop at n = N − 1. In this way, the periodicity of our signal is continuous, as in Figure 5.2. It is a common error to neglect this fact and make the signal symmetric in the time block.
If we use N time samples, which we usually call the block size or frame size, the DFT results in half as many, that is, N/2 positive (useable) frequencies, as seen in Figure 5.2. These frequency values are usually called frequency lines or spectral lines.
Block size            Corresponding
(# of time samples)   # of spectral lines
 256                   101
 512                   201
1024                   401
2048                   801
4096                  1601
8192                  3201

Table 5.1. Typical block sizes and corresponding numbers of useable spectral lines when applying FFT.
Because the analog anti-aliasing filter is not ideal, but has some slope after the cutoff frequency, as seen in Figure 5.3, we cannot sample with a sampling frequency which is only 2·B, where B is the bandwidth of the signal. In the FFT analyzer, a "standard" oversampling factor of 2.56 has been established. Thus, we can only use the discrete frequency values up to k = N/2.56. Typical values for the block size and corresponding number of spectral lines are given in Table 5.1. The frequency which corresponds to k = N/2.56 is called the bandwidth.
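The bookkeeping behind Table 5.1 can be sketched directly: the useable lines run from k = 0 up to k = N/2.56, giving N/2.56 + 1 lines per block size.

```python
# useable spectral lines per block size with 2.56x oversampling,
# matching Table 5.1 (k = 0 .. N/2.56 inclusive)
expected = {256: 101, 512: 201, 1024: 401, 2048: 801, 4096: 1601, 8192: 3201}

lines = {N: round(N / 2.56) + 1 for N in expected}
print(lines)
```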
Figure 5.3. Typical anti-aliasing filter. Because of the filter's non-ideal characteristics, the cutoff frequency, f_c, needs to be set lower than half of the sampling frequency. It is typically set to f_s/2.56 in FFT analyzers, for historical reasons, which approximately corresponds to 0.8·f_s/2. In the figure the cutoff frequency is 800 Hz, which gives a sampling frequency of 2.56·800 = 2048 Hz. Note the nonlinear phase characteristic, which will be discussed in Section 7.7.
For a real measured signal, the real part of the Fourier transform is an even function and the imaginary part an odd function, that is,

Re[X(−k)] = Re[X(k)]    (5.21)
Im[X(−k)] = −Im[X(k)]    (5.22)

These qualities, called the Fourier transform symmetry properties, are valid exactly, even for the DFT, provided that there does not exist any aliasing distortion. Naturally, these attributes are valid when we "shift down" the upper N/2 spectral lines so they lie to the left of k = 0, so that we have a two-sided spectrum X(k), k = 0, ±1, ±2, … Thus, according to Equations (5.21) and (5.22), the negative frequencies can be "filled in" before inverse transformation.
Close study of the DFT result will show that the value at k = N/2 (value number N/2 + 1 counting from k = 0, for example value number 513 if the block size is 1024 samples) will be a real-valued number, which is not equal to X(0). Furthermore, this value cannot be discarded if an exact reproduction of the time signal x(n) is to be computed. This is often overlooked, and only the first N/2 values stored. The correct number of values to store in order to be able to compute back the original time signal is instead N/2 + 1; in our example with a block size of 1024, thus 513 frequency values should be stored. Then all negative frequencies can be filled in accurately.
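The symmetry properties of Equations (5.21) and (5.22) can be verified numerically on a small real signal: X(N − k) is the complex conjugate of X(k), so the N/2 + 1 values k = 0 … N/2 carry all the information, and the value at k = N/2 is real.

```python
import cmath, math

N = 16
# arbitrary real signal: a sinusoid with 3 periods plus a DC offset
x = [math.sin(2.0 * math.pi * 3.0 * n / N) + 0.5 for n in range(N)]

# direct DFT, Eq. (5.15)
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
         for n in range(N)) for k in range(N)]

# negative frequencies (k = N/2+1 .. N-1) follow from the stored half
sym_err = max(abs(X[N - k] - X[k].conjugate()) for k in range(1, N // 2))
print(sym_err)             # ~0: upper half is the conjugate of the lower
print(abs(X[N // 2].imag)) # ~0: the value at k = N/2 is real
```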
5.10 Leakage

What happens if we, for example, compute the DFT with a frequency increment of Δf = 2 Hz, but the measured signal is a sinusoid of 51 Hz, so that the signal frequency lies right between two spectral lines in the DFT (50 and 52 Hz)? The result is that we get one peak at 50 Hz and one at 52 Hz. However, both peaks are lower than the true value, see Figure 5.4. To easily observe this error in the figure, we have scaled the DFT by dividing by N and taking the absolute value of the result, see also Section 6.3. Thus the correct value should be 1/√2 ≈ 0.7, the RMS value of the sine wave.
Figure 5.4. Time block (left) and linear spectrum (right) of a 51 Hz sinusoid. 256 time samples have been used, giving 128 spectral lines. The frequency increment is 2 Hz. Instead of the expected value 0.7, that is, the RMS value of a sinusoid with amplitude of 1, we get a peak that is much too low (in this case 40% too low). There are also more nonzero frequency values to the left and right of the 50 Hz and 52 Hz values. This phenomenon is called leakage since the frequency content in the signal has "leaked" out to the sides.
As seen in the figure the resulting peak is far too low, by as much as 40%! Furthermore, it
looks like the frequency content has "leaked" away on either side of the true frequency of 51
Hz. This phenomenon is therefore called leakage.
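The 51 Hz example can be reproduced numerically. The sketch below assumes N = 256 samples at f_s = 512 Hz (so Δf = 2 Hz) and scales the spectrum as √2·|X(k)|/N, one common RMS-scaling convention, so that a sinusoid falling exactly on a line reads 1/√2 ≈ 0.707.

```python
import cmath, math

N, fs = 256, 512.0   # block size and sampling frequency (df = 2 Hz)

def rms_scaled_spectrum(f0):
    # single-sided, RMS-scaled magnitude spectrum of a unit sine at f0
    x = [math.sin(2.0 * math.pi * f0 * n / fs) for n in range(N)]
    return [math.sqrt(2.0) / N *
            abs(sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                    for n in range(N)))
            for k in range(N // 2)]

peak_on = max(rms_scaled_spectrum(50.0))   # on a spectral line: ~0.707
peak_off = max(rms_scaled_spectrum(51.0))  # between lines: well below 0.707
print(peak_on, peak_off)
```

The off-line peak lands roughly 35-40% below the on-line value, in line with the figure.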
One way to explain the leakage effect is by studying what happens in the frequency domain when we limit the measurement time to a finite time, which corresponds to multiplying our original, continuous signal by a time window which is 0 outside the interval t ∈ (−T/2, T/2), and 1 within this same interval. A multiplication with this function, w(t), in the time domain is analogous to a convolution with the corresponding Fourier transform, W(f). We thus obtain the weighted Fourier transform of x(t)·w(t), denoted X_w(f), as

X_w(f) = X(f) * W(f)    (5.23)

where * denotes convolution. W(f) is the transform of a rectangular time window, in our case

W(f) = T · sin(πfT)/(πfT)    (5.24)

To make the convolution easier we exchange the two functions, and make use of

X_w(f) = X(f) * W(f) = W(f) * X(f) = ∫_{−∞}^{∞} W(u) X(f − u) du    (5.25)
Figure 5.5. OFT of a sinusoid which coincides with a spectral line. The convolution between the transform of the
(rectangular) time window, WU), and the sinusoid's true spectrum, DUo), results in a single spectral line.
Figure 5.6. Leakage. The frequency of the sine wave is located at f₀, exactly midway between k = 0 and k = 1, corresponding to an integer number of periods plus one half period in the time window. When a periodic signal does not have an integer number of periods in the measurement window, then due to the finite measurement time the convolution results in too low a frequency peak. At the same time the power seems to "leak" into nearby frequencies; the total power in the spectrum is still the same.
The convolution between the Fourier transform of our sine wave and that of the time window thus implies that we let the latter, W(f), sit at the frequency f₀, that is, we construct W(f − f₀). Then we shift the transform of the (continuous) sinusoid, which is a single spectral line, all the way to the left (k = 0), and multiply the two. At each k for which we wish to compute the convolution, we center the sinusoid spectral line at that same k, multiply the two and sum all the values (all frequencies). In some cases (see Figure 5.5), for example where f₀ = k₀·Δf, the sinusoid lines up with the window transform such that for all integer numbers k we obtain only one single value from the convolution, since for all k except for k = k₀ the spectral line of the sinusoid corresponds to a zero in W(f − f₀).
Illustrated in Figure 5.6 is the result of the convolution as described above, for the case where the sinusoid lies between two spectral lines (we have an integer number plus one half period in the time window). The Fourier transform of the time window is instead centered at a frequency f0 which is not a multiple of the frequency increment Δf. We see in the picture that we obtain several frequency lines which slowly decrease to the left and right, and we get two peaks of the same height, although much lower than the sinusoidal RMS value. It can be shown that if the RMS values of all spectral lines are summed according to Equation (4.28), the result is equal to the RMS value of the sinusoid. Hence, the power in the signal seems to "leak" out to nearby frequencies, giving the name leakage.
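Both situations are easy to reproduce numerically. The sketch below (Python with NumPy; the block length and frequencies are arbitrary choices, not taken from the text) shows that a sinusoid with an integer number of periods in the window gives a single spectral line (plus its negative-frequency mirror), while a half-period offset spreads the power over many lines, yet by Parseval's theorem the total power over all lines still matches the RMS value of the signal:

```python
import numpy as np

N = 64                       # number of samples in the time window (arbitrary)
n = np.arange(N)

# Case 1: exactly 8 periods in the window -> no leakage
x_exact = np.sin(2 * np.pi * 8 * n / N)
X_exact = np.fft.fft(x_exact)
# Only bin 8 and its negative-frequency mirror (bin 56) are nonzero
nonzero = np.sum(np.abs(X_exact) > 1e-8 * np.abs(X_exact).max())

# Case 2: 8.5 periods in the window -> power leaks into many bins
x_leak = np.sin(2 * np.pi * 8.5 * n / N)
X_leak = np.fft.fft(x_leak)
spread = np.sum(np.abs(X_leak) > 1e-8 * np.abs(X_leak).max())

# Parseval: the total power is preserved even though it "leaks"
rms_time = np.sqrt(np.mean(x_leak ** 2))
rms_freq = np.sqrt(np.sum(np.abs(X_leak) ** 2)) / N
```

The last two lines are the discrete counterpart of the RMS summation referred to above: the RMS value computed from the spectrum equals the RMS value computed in the time domain.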
5.11 The picket-fence effect
An alternative way to look at the discrete spectrum X(k) we obtain from the DFT is to see each spectral line as the result of a bandpass filtering, followed by an RMS computation of the signal after the filter. This process is often illustrated as in Figure 5.7 with a number of parallel bandpass filters, where each filter is centered at the frequency k (or k·Δf if we think in Hz). This way of looking at the DFT is reminiscent of viewing the true spectrum through a picket fence, and it is therefore called the picket-fence effect. Note that the picket-fence effect is also analogous to the method for measuring a spectrum with octave band analysis that we discussed in Section 5.3. As was mentioned in Section 5.2, this principle is the only way to measure or compute spectral content.
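The "too low a value" of the picket-fence effect can be checked numerically. The sketch below (my own illustration, not from the text) estimates the amplitude of a unit tone from its tallest DFT bin; for a tone exactly halfway between two bins, with a rectangular window, the estimate comes out near 2/π ≈ 0.64 of the true amplitude, the familiar worst-case scalloping loss of about 3.9 dB:

```python
import numpy as np

N = 64
n = np.arange(N)

# Unit-amplitude tone exactly halfway between bins 16 and 17 (worst case)
x = np.cos(2 * np.pi * 16.5 * n / N)
X = np.fft.fft(x)

# Amplitude estimate from the tallest bin; an on-bin unit tone
# would give |X| = N/2, so the correction factor is 2/N
amp_est = 2 * np.abs(X).max() / N
```

Running this gives an estimate of roughly 0.64 instead of 1.0, i.e. the tone "falls between the pickets".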
Figure 5.7. The picket-fence effect. Each value in the discrete spectrum corresponds to the signal's RMS value after bandpass filtering. If we study a tone lying between two frequencies we will obtain too low a value.
EXPERIMENTAL FREQUENCY ANALYSIS
Figure 5.5. DFT of a sinusoid which coincides with a spectral line. The convolution between the transform of the (rectangular) time window, W(f), and the sinusoid's true spectrum, a single spectral line at f0, results in a single spectral line.
The time window used in Figure 5.8 is called a Hanning window and is one of the most common windows used in FFT analyzers. The effect of the window is that it eliminates the "jump" in the periodic repetition of the time signal, although it is not immediately intuitive why this should improve the result. It can be shown, however, that we can estimate the amplitude much better than with the rectangular window.
Figure 5.8. Illustration of time windowing with a Hanning window. The window lessens the jump at the ends of the repeated signal. In a) is shown the periodic repetition (dotted line) of the actual measured signal (solid line). In b) is shown the Hanning window and in c) the result of the multiplication of the two. In d) is shown the result of calculating the spectrum with the Hanning window (solid) and without (dotted). Note that when the window is used, the amplitude is closer to the true value (0.7), and the leakage has decreased. There still exists an amplitude error of up to 16%.
To obtain a better estimate of the amplitude of a pure sinusoid, we need to create a window with a Fourier transform that is flatter and wider than that of the rectangular window. Through the years, a large collection of windows has been developed, and many FFT analyzers therefore have a large number of different windows from which to choose. We shall here examine two windows, the Hanning and the flattop window.
The Hanning window is probably the most common window used in FFT analysis. It is defined by one period of a squared sine, or alternatively a raised cosine, such that

w(n) = sin²(πn/N) = ½[1 − cos(2πn/N)]  for n = 0, 1, 2, ..., N − 1    (5.26)
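The two forms in Equation (5.26) are the same trigonometric identity written two ways, which a few lines of code confirm (a sketch; the length N is an arbitrary choice):

```python
import numpy as np

N = 32
n = np.arange(N)

# Squared-sine form and raised-cosine form of the Hanning window
w_sin = np.sin(np.pi * n / N) ** 2
w_cos = 0.5 * (1.0 - np.cos(2 * np.pi * n / N))

same = np.allclose(w_sin, w_cos)
```

Note that the window is zero at n = 0, which is what removes the "jump" in the periodic repetition.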
The Hanning window's Fourier transform has a main lobe that is wider than that of the rectangular window, so that the maximum error decreases to 16%. This error is of course still too large in many cases, for example when one desires to measure the amplitude of a sinusoidal signal. In that case the flattop window may be utilized, which yields a maximum error in amplitude of 0.1%, a good deal more acceptable.
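The flat-top amplitude accuracy is easy to demonstrate numerically. The coefficients below are one common five-term flat-top definition (the one used by SciPy and Matlab), not the Potter 301 window this book uses, so treat them as an assumption; the point of the sketch is that the worst-case (half-bin) amplitude error is far below 1%:

```python
import numpy as np

N = 256
n = np.arange(N)

# A common 5-term flat-top window (periodic form); coefficients as in
# SciPy's `flattop` window -- an assumption, not the book's Potter 301
a = [0.21557895, 0.41663158, 0.277263158, 0.083578947, 0.006947368]
w = sum(((-1) ** m) * a[m] * np.cos(2 * np.pi * m * n / N) for m in range(5))

# Worst case for the picket-fence effect: tone halfway between two bins
x = np.cos(2 * np.pi * 32.5 * n / N)
X = np.fft.fft(w * x)

# Window-corrected amplitude estimate from the tallest bin
amp_est = 2 * np.abs(X).max() / w.sum()
err = abs(amp_est - 1.0)
```

With this window the half-bin error is on the order of 0.1% or less, compared with almost 36% for the rectangular window.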
The flattop window is not actually a uniquely defined window, but a name given to a group of windows with similar characteristics. When we use flattop windows in this book, we use a window called Potter 301 [Potter, 1972]. In Figure 5.9 and Figure 5.10 the three windows, rectangular, Hanning, and flattop, are shown with their Fourier transforms for comparison.
Figure 5.9. Time windows. Time-domain plots of the rectangular (top), Hanning (middle) and flattop (bottom) windows.
There is a price to pay for the decreased amplitude uncertainty when we use time windows. The price comes in the form of increased frequency uncertainty, which occurs because the better the amplitude uncertainty, the wider the main lobe of the spectrum of the window. Therefore, if we measure a sinusoid with a frequency that matches one of our spectral lines, the peak will become wider than if we had used the rectangular window. The flattop window, which has the best amplitude uncertainty, also has the widest main lobe. This tradeoff is related to the bandwidth-time product, which is explained further in relation to errors in PSD computation with windowing in Section 6.11. Figure 5.11 shows what the DFT of a sinusoid which exactly matches a spectral line looks like, both after windowing with the Hanning window and with the flattop window. As shown, the Hanning window results in 3 spectral lines which are not zero, while the flattop window gives no fewer than 9 nonzero spectral lines.
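That the Hanning window turns an on-bin sinusoid into exactly three nonzero lines can be verified directly (a sketch with arbitrary sizes; six bins appear in the full DFT because the three lines are mirrored at negative frequencies):

```python
import numpy as np

N = 64
n = np.arange(N)

# Periodic Hanning window and a sinusoid sitting exactly on bin 16
w = 0.5 * (1.0 - np.cos(2 * np.pi * n / N))
x = np.sin(2 * np.pi * 16 * n / N)
X = np.fft.fft(w * x)

# Nonzero bins: 15, 16, 17 plus their negative-frequency mirrors 47, 48, 49
hot = np.flatnonzero(np.abs(X) > 1e-8 * np.abs(X).max())
```

This follows from the window's spectrum having components only at 0 and ±1 bins, which the convolution smears onto the sinusoid's single line.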
Even with windows other than the rectangular, we get leakage when the sinusoid's frequency does not match up with a spectral line, as seen in Figure 5.12. What determines the leakage is how the window's side lobes fall off to each side of the center. The faster the falloff of the side lobes, the wider the main lobe, which gives yet another compromise. The decrease of the side lobes is usually measured as an approximate slope per octave. The flattop window, because of its large main-lobe width, is only used when it is known that the spectrum does not contain many neighboring frequencies. The Hanning window is therefore often used as a standard window, since it gives a reasonable compromise between amplitude accuracy and frequency resolution.
[Figure 5.10 plots: magnitude in dB (0 down to −100) versus normalized frequency in multiples of Δf, from −9 to 9, for the rectangular (top), Hanning (middle), and flattop (bottom) windows.]
Figure 5.10. Time windows. The Fourier transforms of the rectangular (top), Hanning (middle) and flattop (bottom) windows. Compare these plots with the plots in Figure 5.6. The Hanning window's first zero lies at k = 2, which means that for a sinusoid lying between two frequency values in the DFT, the convolution with the window spectrum will make the value for k = 0 in Figure 5.6 be attenuated much less than for the rectangular window. For the flattop window almost no attenuation occurs. The uncertainty in amplitude therefore decreases when using windows with a periodic signal. On the other hand, with wider main lobes, the spectral peaks are broadened, which results in increased frequency uncertainty, see below.
Figure 5.11. The widening of the frequency peak is the price we pay to get a more accurate amplitude. In the figure is shown the linear spectrum of a sinusoid with amplitude 1 and frequency that matches the spectral line marked "0", for both Hanning (solid) and flattop (dashed) windowing. With the flattop window the peak is much wider than with the Hanning window. For clarity, the two values at k = ±1 for the spectrum after Hanning windowing are shown with black dots.
Figure 5.12. The spectrum of a sinusoid with frequency right between the frequencies k = 0 and k = 1. Three different windows have been used: rectangular (solid), Hanning (dotted) and flattop (dashed). From the figure one can see the compromise between amplitude and frequency uncertainties.
The qualities most important to the influence of the window when determining spectral densities are the width of the main lobe and the height of the side lobes. The lower these side lobes are, the less influence we get from nearby frequency content during the convolution. The flattop window is never used for random signals, since its main lobe is too wide. The most common is the Hanning window, and many FFT analyzers have no other window implemented for noise analysis, although the so-called Kaiser-Bessel window can also be suitable to use.
There is no exact frequency resolution for a particular window. How close two sinusoids can be in frequency, in order for the spectrum still to show two peaks, depends on the width of the window's main lobe, but also on where between the spectral lines the two sine waves are located.
We start with a continuous time signal as in Figure 5.13 (A.1). In Figure 5.13 (B.1) is shown the Fourier transform of this continuous (infinite) signal, which is of course also continuous, but band-limited so that we fulfil the sampling theorem. For the sake of simplicity we (and Thrane) have used a time function which is a Gaussian function, which has the same shape in time and frequency.
[Figure 5.13: paired panels, time domain (A.1–A.7) on the left and frequency domain (B.1–B.7) on the right, starting from x(t) and X(f) and proceeding through sampling, windowing, and frequency sampling; panel B.3 shows the convolution X(f)∗S(f) and panel B.7 the discrete spectrum X(k).]
Figure 5.13. Summary of the DFT. See text for explanation. [After Thrane, 1979]
The discrete sampling we then carry out is equivalent to multiplying the signal by an ideal train of pulses with unity value at each sampling instant and zeros between, see Figure 5.13 (A.2) and (A.3). In the frequency domain, this operation corresponds to a convolution with the equivalent Fourier transform, which is a train of pulses at multiples of the sampling frequency, f_s. We consequently obtain a repetition of the spectrum at each k·f_s. This is actually a proof of the sampling theorem, since if the bandwidth of the original spectrum were wider than ±f_s/2, the periodic repetitions of the spectrum would overlap, see Figure 5.13 (B.2) and (B.3).
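The repetition of the spectrum at multiples of f_s is another way of saying that frequencies separated by f_s are indistinguishable after sampling. A minimal sketch (sampling rate and frequencies are arbitrary choices):

```python
import numpy as np

fs = 100.0                 # sampling frequency in Hz (arbitrary)
t = np.arange(32) / fs     # sampling instants

# A 7 Hz tone and a 107 Hz tone produce identical sample sequences,
# since sin(2*pi*(f+fs)*n/fs) = sin(2*pi*f*n/fs + 2*pi*n)
x_low = np.sin(2 * np.pi * 7.0 * t)
x_high = np.sin(2 * np.pi * 107.0 * t)

aliased = np.allclose(x_low, x_high)
```

If the original spectrum extended beyond ±f_s/2, these overlapping copies would corrupt the baseband, which is exactly the aliasing the sampling theorem forbids.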
The next step is measuring only during a finite time, which in the time domain is equivalent to multiplying by a rectangular window as in Figure 5.13 (A.4) and (A.5). In the frequency domain this operation is equivalent to a convolution with a sinc function as in (B.4) and (B.5). This is where the uncertainty in amplitude arises in the frequency domain, which can be seen in the ripple on the spectrum in (B.5).
The final step is carried out in the frequency domain, (B.6) and (B.7). With the DFT we calculate only discrete frequencies. This operation is equivalent, as in (A.2) above, to a multiplication with a train of pulses, only now with frequency increment Δf = 1/T. In the time domain this step implies a convolution with a train of pulses with separation T, as in (A.6), which finally gives us the periodicity in the time domain in Figure 5.13 (A.7).
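That sampling the spectrum forces periodicity in time can be demonstrated numerically: keeping only every M-th spectral line and inverse-transforming gives the original signal folded (summed) into a window of length N/M, which is exactly the periodic repetition the figure illustrates. A sketch with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 4
x = rng.standard_normal(N)

# Sample the spectrum: keep only every M-th spectral line
X = np.fft.fft(x)
Y = X[::M]                     # N/M = 16 lines remain

# The inverse transform of the sampled spectrum...
y = np.fft.ifft(Y).real

# ...equals the time signal periodically folded into length N/M
folded = x.reshape(M, N // M).sum(axis=0)
```

This is the time-domain dual of aliasing: coarser frequency sampling means a shorter period in time, just as coarser time sampling means a shorter period (repetition interval) in frequency.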