
Theory and Background

© Copyright LMS International 2000


Table of Contents

Part I Signal processing

Chapter 1 Spectral processing


1.1 Digital signal processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Leakage and windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.1 Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Window characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Window types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Choosing window functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Window correction mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Window correction factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4 Averaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5 Reading list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Chapter 2 Structural dynamics testing


2.1 Signal analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 System analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3 Signature analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

Chapter 3 Functions
3.1 Time domain functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Time Record . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Autocorrelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Crosscorrelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Probability Density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Probability Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.2 Frequency domain functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Autopower Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Crosspower spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Coherence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Principal Component Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Frequency Response Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3 Composite functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Overall level (OA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Frequency section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Order sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Octave sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4 Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5 Rms calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Part II Acoustics and Sound Quality

Chapter 4 Terminology and definitions


4.1 Acoustic quantities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Sound power (P) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Sound pressure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Sound (Acoustic) intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Free field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Particle velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Acoustic impedance (Z) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.2 Reference conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
dB scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Sound power level Lw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Particle velocity level Lv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Sound (Acoustic) intensity level LI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Sound pressure level LP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.3 Octave bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.4 Acoustic weighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

Chapter 5 Acoustic measurements


5.1 Acoustic measurement functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Sound pressure level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Sound Intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Residual intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Pressure residual intensity index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.2 Calculation of acoustic quantities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.3 Acoustic measurement surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Acoustic ISO standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.4 Frequency bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.5 Field indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
F1 Sound field temporal variability indicator . . . . . . . . . . . . . . . . . . . . . . 77
F2 Surface pressure-intensity indicator . . . . . . . . . . . . . . . . . . . . . . . . . . 77
F3 Negative partial power indicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
F4 Non-uniformity indicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.5.1 The criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Measurement mesh adequacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

Chapter 6 Sound quality


6.1 The basic concepts of Sound Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Sound signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
The perception of sounds by the human ear . . . . . . . . . . . . . . . . . . . . . . . 85
Binaural hearing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Sound perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Loudness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Pitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Critical bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Temporal effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.2 Sound quality analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Analysis of sound signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Binaural recording and playback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.3 Reading list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Chapter 7 Sound metrics
7.1 Sound pressure level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Time domain sound pressure level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
7.2 Equivalent sound pressure level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
7.3 Loudness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
7.3.1 Stevens Mark VI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.3.2 Stevens Mark VII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.3.3 Loudness Zwicker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.4 Sharpness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
7.5 Roughness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.6 Fluctuation strength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.7 Pitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.8 Articulation index (AI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.9 Speech interference level (SIL, PSIL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.10 Impulsiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

Chapter 8 Acoustic holography


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
8.2 Acoustic holography concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Temporal and spatial frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Summation of plane waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Propagating and evanescent waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
(Back) propagating to other planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
The Wiener filter and the AdHoc window . . . . . . . . . . . . . . . . . . . . . . . . . 125
Derivation of other acoustic quantities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

Part III Time data processing

Chapter 9 Statistical functions


Minimum, maximum, range and extremum . . . . . . . . . . . . . . . . . . . . . . . . 130
Sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Root mean square . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Crest factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Percentiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Variance and standard deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Mean absolute deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Extreme deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Skewness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Kurtosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Markov regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

Chapter 10 Time frequency analysis


10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
10.2 Linear time-frequency representations . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
The Short Time Fourier Transform (STFT) . . . . . . . . . . . . . . . . . . . . . . . . 142
Wavelet analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
10.3 Quadratic time-frequency representations . . . . . . . . . . . . . . . . . . . . . . . . 146
The Wigner-Ville distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Generalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
10.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Chapter 11 Resampling
11.1 Fixed resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
11.1.1 Integer downsampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
11.1.2 Integer upsampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
11.1.3 Fractional ratios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
11.1.4 Arbitrary ratios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
11.2 Adaptive resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Implementation example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
11.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

Chapter 12 Digital filtering


12.1 Basic definitions relating to digital filtering . . . . . . . . . . . . . . . . . . . . . . . . 164
12.2 FIR and IIR filter design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
12.2.1 Filter design terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Filter characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Linear phase filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Filter types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
12.2.2 Design of FIR filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Design of an FIR window filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
FIR multi window Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
FIR Remez filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
12.2.3 Design of IIR filters using analog prototypes . . . . . . . . . . . . . . . . . 178
Step 1) Specify the filter characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Step 2) Compute the analog frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Step 3) Select the suitable analog filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Bessel filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Butterworth filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Chebyshev (type I) filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Inverse Chebyshev (type II) filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Cauer (elliptical) filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Step 4) Transform the prototype low pass filter . . . . . . . . . . . . . . . . . . . . 185
Step 5) Apply a bilinear transformation . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Determining the filter order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
12.2.4 IIR Inverse design filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
12.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
12.4 Applying filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
12.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

Chapter 13 Harmonic tracking


13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Conditions for use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
13.2 Theoretical background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
13.2.1 Determination of the Rpm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
13.2.2 Waveform tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
The Structural equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
The Data equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
13.3 Practical considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Chapter 14 Counting and histogramming
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
14.2 One dimensional counting methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
14.2.1 Peak count methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
14.2.2 Level cross counting methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
14.2.3 Range counting methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Counting of single ranges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Counting of range-pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.3 Two-dimensional counting methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
14.3.1 From-to-counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
14.3.2 Range-mean counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
14.3.3 "Range pair-range" or "Rainflow" method . . . . . . . . . . . . . . . . . 213
14.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217

Part IV Analysis and design

Chapter 15 Estimation of modal parameters


15.1 Estimation of modal parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
A note about units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
15.2 Types of analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
15.2.1 Single or multiple degree of freedom method . . . . . . . . . . . . . . . . 223
15.2.2 Local or global estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
15.2.3 Multiple input analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
15.2.4 Time vs frequency domain implementation . . . . . . . . . . . . . . . . . . 228
15.2.5 Vibro-acoustic modal analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
15.3 Parameter estimation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Selection of a method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
15.3.1 Peak picking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
15.3.2 Mode picking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
15.3.3 Circle fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
15.3.4 Complex mode indicator function . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Cross checking and tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
15.3.5 Least squares complex exponential . . . . . . . . . . . . . . . . . . . . . . . . . 243
Model for continuous data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Model for sampled data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Practical implementation of the method . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Determining the optimum number of modes . . . . . . . . . . . . . . . . . . . . . . . 246
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Model for sampled data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Practical implementation of the method . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
15.3.6 Least squares frequency domain . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
15.3.7 Frequency domain direct parameter identification . . . . . . . . . . . . 256
15.4 Maximum likelihood method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
15.4.1 Theoretical aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
15.5 Calculation of static compensation modes . . . . . . . . . . . . . . . . . . . . . . . . . 264

Chapter 16 Operational modal analysis


16.1 Why operational modal analysis? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
16.2 Theoretical aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
16.2.1 Stochastic subspace identification methods . . . . . . . . . . . . . . . . . . 270
16.2.2 Natural Excitation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
16.2.3 Selection of the modal parameter identification method . . . . . . . 277
16.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Chapter 17 Running modes analysis
17.1 Running mode analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
17.2 Measuring running modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
17.2.1 Transmissibility functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
17.2.2 Crosspower spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
17.3 Identification and scaling of running modes . . . . . . . . . . . . . . . . . . . . . . . 288
17.3.1 Scaling of running modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
17.4 Interpretation of results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Modal Scale Factors and Modal Assurance Criterion . . . . . . . . . . . . . . . . 290
Modal decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291

Chapter 18 Modal validation
18.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
18.2 MSF and MAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
18.3 Mode participation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
18.4 Reciprocity between inputs and outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
18.5 Generalized modal parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
18.6 Mode complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
18.7 Modal phase collinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
18.8 Comparison of models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
18.9 Mode indicator functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
18.10 Summation of FRFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
18.11 Synthesis of FRFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308

Chapter 19 Rigid body modes


19.1 Calculation of rigid body properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Derivation of rigid body properties from measured FRFs . . . . . . . . . . . . 310
Calculation of the rigid body properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
19.2 Rigid body mode analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
19.2.1 Decomposition of measured modes into rigid body modes . . . . . . 317
19.2.2 Synthesis of rigid body modes based on geometrical data . . . . . . . 318
19.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320

Chapter 20 Design
20.1 Using the modal model for modal design . . . . . . . . . . . . . . . . . . . . . . . . . . 322
20.2 Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
20.2.1 Mathematical background to sensitivity analysis . . . . . . . . . . . . . . 325
20.3 Modification prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
20.3.1 Mathematical background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
20.3.2 Implementation of Modification prediction . . . . . . . . . . . . . . . . . . 338
20.3.3 Definition of modifications to the model . . . . . . . . . . . . . . . . . . . . 339
20.3.4 Modification prediction calculation . . . . . . . . . . . . . . . . . . . . . . . . . 347
20.3.5 Units of scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Example of the application of a beam element . . . . . . . . . . . . . . . . . . . . . 349
Static condensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
20.4 Forced response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
20.4.1 Mathematical background for forced response . . . . . . . . . . . . . . . . 354
Chapter 21 Geometry concepts
21.1 The geometry of a test structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
21.2 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Theory and Background

Part I
Signal processing

Chapter 1
Spectral processing . . . . . . . . . . . . . . . . . . . . . 1

Chapter 2
Structural dynamics testing . . . . . . . . . . . . . . 23

Chapter 3
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Chapter 1

Spectral processing

This chapter provides an overview of the terminology and techniques used in general signal processing of vibrational and acoustic data:

Digital signal processing

Aliasing

Leakage and windows

Averaging

This is by no means a comprehensive treatment of the subject and a reading list is given at the end.


1.1 Digital signal processing

Time and frequency domains


It is a property of all real waveforms that they can be built up from a number of
sine waves of particular amplitudes and frequencies. Viewing these waves in the
frequency domain rather than the time domain can be useful in that the
individual components are more readily revealed.

[Figure: a waveform shown with amplitude as the vertical axis against both time and frequency axes, each sine component in time mapping to one line in frequency]

Each sine wave in the time domain is represented by one spectral line in the
frequency domain. The series of lines describing a waveform is known as its
frequency spectrum.
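This mapping from sine waves to spectral lines can be sketched numerically using the defining DFT sum (introduced formally below). The blocksize of 64 and the two test frequencies are arbitrary illustrative choices, not values from the text.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (the defining sum, O(N^2))."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A waveform built from two sine waves: amplitude 1.0 at 4 cycles per
# record and amplitude 0.5 at 10 cycles per record (illustrative values).
N = 64
x = [math.sin(2 * math.pi * 4 * n / N) + 0.5 * math.sin(2 * math.pi * 10 * n / N)
     for n in range(N)]

# Single-sided amplitude spectrum: each sine wave appears as exactly one
# spectral line, at its own frequency, with its own amplitude.
amps = [2 * abs(X) / N for X in dft(x)[:N // 2]]
```

The spectrum `amps` is zero everywhere except at lines 4 and 10, where it recovers the amplitudes 1.0 and 0.5 of the two components.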

Fourier transform
The conversion of a time signal to the frequency domain (and its inverse) is
achieved using the Fourier Transform as defined below.


S_x(f) = ∫_{-∞}^{+∞} x(t) e^(-j2πft) dt        Eqn. 1-1

x(t) = ∫_{-∞}^{+∞} S_x(f) e^(j2πft) df        Eqn. 1-2

The transform is continuous, and in order to use the Fourier Transform digitally
a numerical integration must be performed between fixed limits.

The Discrete Fourier Transform (DFT)


The digital computation of the Fourier Transform is called the Discrete Fourier
Transform. It calculates the values at discrete frequency points (mΔf) and performs a
numerical integration, as illustrated below, between fixed limits (N samples).

2 The Lms Theory and Background Book



[Figure: the sampled product x(t) e^(-j2π·mΔf·t) plotted against time t]

Since the waveform is being sampled at discrete intervals and during a finite
observation time, we do not have an exact representation of it in either domain.
This gives rise to shortcomings which are discussed later.
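One such shortcoming can be previewed with a minimal sketch: a sine wave that completes a whole number of cycles in the finite observation time maps to a single spectral line, while one that does not (here 8.5 cycles, an arbitrary choice) has its energy smeared over many lines.

```python
import cmath
import math

def amplitude_spectrum(x):
    """Single-sided amplitude spectrum via the defining DFT sum."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    return [2 * abs(Xk) / N for Xk in X[:N // 2]]

N = 64
# 8.0 cycles fit the observation time exactly; 8.5 cycles do not.
on_bin = amplitude_spectrum([math.sin(2 * math.pi * 8.0 * n / N) for n in range(N)])
off_bin = amplitude_spectrum([math.sin(2 * math.pi * 8.5 * n / N) for n in range(N)])

lines_on = sum(1 for a in on_bin if a > 0.01)    # a single spectral line
lines_off = sum(1 for a in off_bin if a > 0.01)  # energy smeared over many lines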

Hermitian symmetry
The Fourier transform of a sinusoidal function results in a complex function
whose real and imaginary parts are symmetrical. This is illustrated below.
In the majority of cases only the real part is taken into account,
and of this only the positive frequencies are shown. So the representation of the
frequency spectrum of the sine wave shown below would become the area
shaded in grey.

[Figure: a sine wave x(t) of amplitude A together with the imaginary and real parts of S(f), each symmetrical about 0 with lines of height A/2 at -f and +f; the positive-frequency half is shaded in grey]
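This symmetry is easy to verify numerically. The sketch below, using an arbitrary real-valued test signal, checks the Hermitian property S(N-k) = S*(k) of the discrete transform, which is what makes the negative-frequency half of the spectrum redundant for real data.

```python
import cmath
import math

# An arbitrary real-valued test signal (any real waveform will do).
N = 16
x = [math.sin(2 * math.pi * 3 * n / N) + 0.25 * math.cos(2 * math.pi * 5 * n / N)
     for n in range(N)]

# Defining DFT sum.
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]

# Hermitian symmetry of a real signal: X[N - k] equals the complex
# conjugate of X[k], so the real part is even and the imaginary part
# is odd in frequency.
symmetric = all(abs(X[N - k] - X[k].conjugate()) < 1e-9 for k in range(1, N))
```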

The Fast Fourier Transform (FFT)


The Fast Fourier Transform is a dedicated algorithm to compute the DFT. It
thus determines the spectral (frequency) contents of a sampled and discretized
time signal. The resulting spectrum is also discrete. The reverse procedure is
referred to as an inverse or backward FFT.

Part I Signal processing 3



[Figure: a block of N time samples is transformed by the FFT into N/2 spectral lines; the inverse FFT maps the spectrum back to the time record]

To achieve high calculation performance the FFT algorithm requires that the
number of time samples (N) be a power of 2 (such as 2, 4, 8, ..., 512, 1024, 2048).

Blocksize
Such a time record of N samples is referred to as a block of data with N being
the blocksize. N samples in the time domain converts to N/2 spectral (frequency)
lines. Each line contains information about both amplitude and phase.

Frequency range
The time taken to collect the sample block is T. The lowest frequency that can
be detected is then the reciprocal of this time, 1/T.

The frequency spacing between the spectral lines is therefore 1/T, and the
highest frequency that can be determined is (N/2)·(1/T).
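These relations can be verified numerically. The sketch below (illustrative values: a 1024-sample block sampled at 1024 Hz) confirms that the line spacing is 1/T and the highest line is (N/2)·(1/T):

```python
import numpy as np

# Line spacing and maximum frequency follow directly from the blocksize
# N and the observation time T (illustrative values).
N = 1024
fs = 1024.0
dt = 1.0 / fs              # sampling interval
T = N * dt                 # time to collect one block: 1 s
df = 1.0 / T               # spectral line spacing: 1 Hz
fmax = (N / 2) * df        # highest line: 512 Hz

freqs = np.fft.rfftfreq(N, dt)
print(df, fmax)                              # 1.0 512.0
print(freqs[1] == df, freqs[N // 2] == fmax) # True True
```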


[Figure: N/2 spectral lines spaced at intervals of 1/T along the frequency axis, from 1/T up to (N/2)·(1/T)]

    Δf = 1/T          fmax = (N/2)·(1/T) = (N/2)·Δf

The frequency range that can be covered depends on both the blocksize (N) and
the sampling period (T). To cover high frequencies you need to sample at a
fast rate, which implies a short sampling interval.

Real time Bandwidth


Remember that an FFT requires a complete block of data to be gathered before it
can transform it. The time taken to gather a complete block of data depends on
the blocksize and the frequency range, but it is possible to gather a second
time record while the first one is being transformed. If the computation takes
less time than the measurement, it can effectively be ignored and the process
is said to be operating in real time.

[Figure: real time operation - time records 1 to 4 are acquired back to back, with FFT n computed while record n+1 is being gathered; when the computation takes longer than the acquisition, gaps appear between successive records]

This is not the case if the computation takes longer than the measurement
time, or if the acquisition requires a trigger condition.

Overlap
Overlap processing involves using time records that are not completely
independent of each other, as illustrated below.


[Figure: overlap processing - time records 1 to 4 overlap one another, each FFT starting before the previous record has ended]

If the time data is not being weighted at all by the application of a window,
then overlap processing does not include any new data and therefore makes no
statistical improvement to the estimation procedure. When windows are being
applied however, the overlap process can utilize data that would otherwise be
ignored.

The figure below shows data that is weighted with a Hanning window. In this
case the first and last 20% of each sample period is practically lost and
contributes hardly anything towards the averaging process.

[Figure: sampled data processed with no overlap - the Hanning-weighted ends of each record contribute almost nothing]


Applying an overlap of at least 30% means that this data is once again
included, as shown below. This not only speeds up the acquisition (for the
same number of averages) but also makes it statistically more reliable, since
a much higher proportion of the acquired data is included in the averaging
process.
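The gain in record count from overlapping can be illustrated with a short calculation (a sketch; the 30% figure matches the example above):

```python
# Number of (possibly overlapping) records of length N that fit in a
# signal of n_total samples; hop is the advance between record starts.
def record_count(n_total, N, overlap):
    hop = int(N * (1 - overlap))
    return 1 + (n_total - N) // hop

n_total, N = 10240, 1024
print(record_count(n_total, N, 0.0))   # 10 independent records
print(record_count(n_total, N, 0.3))   # 13 records with 30% overlap
```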

[Figure: sampled data processed with 30% overlap - the data lost in the window tapers is recovered by the overlapping records]


1.2 Aliasing

Sampling at too low a frequency can give rise to the problem of aliasing which
can lead to erroneous results as illustrated below.

This problem can be overcome by observing what is known as the Nyquist
criterion, which stipulates that the sampling frequency (fs) should be greater
than twice the highest frequency of interest (fm).

    fs > 2·fm

The highest frequency that can be measured is fmax, which is half the sampling
frequency (fs) and is also known as the Nyquist frequency (fn).

    fmax = fs/2 = fn
The problem of aliasing can also be illustrated in the frequency domain.

[Figure: measured frequency versus input frequency - input frequencies f2, f3 and f4 above the Nyquist frequency fn fold back to the measured frequency f1; the folding lines lie at fn, 2fn = fs, 3fn and 4fn]

All multiples of the Nyquist frequency (fn) act as `folding lines'. So f4 is
folded back on f3 around the line 3fn, f3 is folded back on f2 around the line
2fn, and f2 is folded back on f1 around the line fn. Signals at f2, f3 and f4
are therefore all seen as signals at frequency f1.
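The folding rule can be expressed as a small helper function (an illustrative sketch, not part of any measurement package):

```python
# Fold an input frequency back into the measurable band [0, fn], where
# fn = fs/2 is the Nyquist frequency.
def alias(f_in, fs):
    fn = fs / 2.0
    f = f_in % fs          # aliases repeat every fs
    return fs - f if f > fn else f

fs = 1000.0                 # fn = 500 Hz
print(alias(300.0, fs))     # 300.0  (below fn: unchanged)
print(alias(700.0, fs))     # 300.0  (folded around fn)
print(alias(1300.0, fs))    # 300.0  (folded around 2fn and fn)
```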

The only sure way to avoid such problems is to apply an analog or digital
anti-aliasing filter to limit the high frequency content of the signal.
Filters are less than ideal however, so the position of the cut-off frequency
of the filter must be chosen with respect to fmax and the roll-off
characteristics of the filter.


[Figure: an ideal filter cuts off sharply between fmax and fs, while the roll-off characteristics of a real filter let some energy above fmax through]


1.3 Leakage and windows

A further problem associated with the discrete time sampling of the data is
that of leakage. A continuous sine wave such as the one shown below should
result in a single spectral line.

[Figure: a continuous waveform and its single-line frequency spectrum]

Because the signals are measured over a sample period T, the DFT assumes that
this period is representative for all time. When the sine wave is not periodic
in the sample time window, energy leaks from the original line spectrum into
adjacent lines as a consequence of the discontinuities at the edges.
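The effect is easy to demonstrate numerically (a sketch using NumPy; the 1% threshold is an arbitrary choice for counting significant lines):

```python
import numpy as np

# A sine completing an integer number of cycles in the record puts all
# its energy on one spectral line; a non-integer cycle count leaks
# energy into the neighbouring lines.
N = 256
n = np.arange(N)
periodic = np.sin(2 * np.pi * 8 * n / N)      # exactly 8 cycles
leaky = np.sin(2 * np.pi * 8.5 * n / N)       # 8.5 cycles

def lines_above(x, frac=0.01):
    X = np.abs(np.fft.rfft(x))
    return int(np.sum(X > frac * X.max()))

print(lines_above(periodic))   # 1 - a single spectral line
print(lines_above(leaky))      # many lines: leaked energy
```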

[Figure: a discretely sampled waveform, the periodically repeated waveform assumed by the DFT with discontinuities at the record edges, and the resulting smeared frequency spectrum]

The user should be aware that leakage is one of the most serious problems
associated with digital signal processing. Whilst aliasing errors can be
reduced by various techniques, leakage errors can never be eliminated. Leakage
can be reduced by using different excitation techniques and increasing the
frequency resolution, or through the use of windows as described below.


1.3.1 Windows
The problem of discontinuities at the edge can be alleviated either by ensuring
that the signal and the sampling period are synchronous, or by ensuring that
the function is zero at the start and end of the sampling period. The latter
situation can be achieved by applying what is called a `window function',
which normally takes the form of an amplitude modulated sine wave.

[Figure: a signal multiplied by a window function over the sample period T, with three frequency spectra: a sine wave periodic in the sample period T; a sine wave not periodic in the sample period, without a window; and a sine wave not periodic in the sample period, with a window]

The use of windows itself gives rise to errors of which the user should be
aware, so windows should be avoided if possible. The various types of
windowing function distribute the energy in different ways. The choice of
window depends on the input function and on your area of interest.

Self windowing functions


Self windowing functions are those that are periodic in the sample period T,
or transient signals. Transient signals are those where the function is
naturally zero at the start and end of the sampling period, such as impulse
and burst signals. Self windowing functions should be adopted whenever
possible, since the application of a window function presents problems of its
own. A rectangular or uniform window can then be used, since it does not
affect the energy distribution.

Note! It should be noted that synchronizing the signal and the sampling time, or using
a self windowing function is preferable to using a window.


Window characteristics

The time windows provided take a number of forms, many of which are amplitude
modulated sine waves. They are all in effect filters, and the properties of
the various windows can be compared by examining their filter characteristics
in the frequency domain, where they can be characterized by the factors shown
below.

[Figure: window filter characteristic on a log frequency axis, showing the noise bandwidth of the central lobe at 0dB, the highest side lobe, and the side lobe falloff]

The windows vary in the amount of energy squeezed into the central lobe as
compared to that in the side lobes. The choice of window depends on both the
aim of the analysis and the type of signal you are using. In general, the
broader the noise bandwidth, the worse the frequency resolution, since it
becomes more difficult to pick out adjacent frequencies with similar
amplitudes. On the other hand, selectivity (i.e. the ability to pick out a
small component next to a large one) improves with side lobe falloff. A window
that scores well on bandwidth is typically weak on side lobe falloff, so the
choice is a trade off between the two. A summary of these characteristics of
the windows provided is given in Table 1.1.

Window type      Highest side   Sidelobe falloff   Noise bandwidth   Max. amp.
                 lobe (dB)      (dB/decade)        (bins)            error (dB)
Uniform          -13            -20                1.00              3.9
Hanning          -32            -60                1.5               1.4
Hamming          -43            -20                1.36              1.8
Kaiser-Bessel    -69            -20                1.8               1.0
Blackman         -92            -20                2.0               1.1
Flattop          -93            0                  3.43              <0.01

Table 1.1 Properties of time windows
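The noise bandwidth column of Table 1.1 can be reproduced from the window samples themselves. The sketch below computes the equivalent noise bandwidth N·Σw²/(Σw)² for three of the windows (the values approach the table entries for large N):

```python
import numpy as np

# Equivalent noise bandwidth in bins: N * sum(w^2) / sum(w)^2.
def noise_bandwidth(w):
    return len(w) * np.sum(w**2) / np.sum(w)**2

N = 1024
print(round(noise_bandwidth(np.ones(N)), 2))      # 1.0  (uniform)
print(round(noise_bandwidth(np.hanning(N)), 2))   # 1.5  (Hanning)
print(round(noise_bandwidth(np.hamming(N)), 2))   # 1.36 (Hamming)
```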


Window types

Uniform window
This window is used when leakage is not a problem, since it does not affect
the energy distribution. It is applied in the case of periodic sine waves,
impulses, transients... where the function is naturally zero at the start and
end of the sampling period.

The following windows - Hanning, Hamming, Blackman, Kaiser-Bessel and
Flattop - all take the form of an amplitude modulated sine wave in the time
domain. For a comparison of their frequency domain filter characteristics,
see Table 1.1.

Hanning
This window is most commonly applied for general purpose analysis of random
signals with discrete frequency components. It has the effect of applying a
round topped filter. Its ability to distinguish between adjacent frequencies
of similar amplitude is low, so it is not suitable for accurate measurements
of small signals.

Hamming
This window has a higher side lobe than the Hanning but a lower falloff rate,
and is best used when the dynamic range is about 50dB.

Blackman
This window is useful for detecting a weak component in the presence of a
strong one.

Kaiser–Bessel
The filter characteristics of this window provide good selectivity, making it
suitable for distinguishing multiple tone signals with widely different
levels. It can cause more leakage than a Hanning window when used with random
excitation.


Flattop

This window's name derives from its low ripple characteristics in the filter
pass band. It should be used for accurate amplitude measurements of single
tone frequencies and is best suited for calibration purposes.

Force window

This type of window is used with a transient signal in the case of impact
testing. It is designed to eliminate stray noise in the excitation channel,
as illustrated here. It has a value of 1 during the impact period and 0
otherwise.

Exponential window

This window is also used with a transient signal. It is designed to ensure
that the signal dies away sufficiently at the end of the sampling period, as
shown below. The form of the exponential window is a decaying exponential,
e^(-βt). The `Exponential decay' setting determines the % level at the end of
the time window.

An exponential window is normally applied to the response (output) channels
during impact testing. It is also the most appropriate window to use with a
burst excitation signal, in which case it should be applied to all channels,
i.e. force(s) and response(s). It does however introduce artificial damping
into the measurement data, which should be carefully taken into account in
further processing such as modal analysis.
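An exponential window with a prescribed end level can be constructed as follows (a sketch; the parameter names are illustrative, not those of any particular analyzer):

```python
import numpy as np

# Exponential window e^(-beta*t) whose final value is a chosen
# percentage of its initial value (the 'exponential decay' setting).
def exp_window(N, final_pct):
    beta = -np.log(final_pct / 100.0)       # total decay over the record
    return np.exp(-beta * np.arange(N) / (N - 1))

w = exp_window(1024, 5.0)                   # decays to 5% at the end
print(w[0], round(w[-1], 3))                # 1.0 0.05
```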


Choosing window functions

For the analysis of transient signals use :

Uniform        for general purposes
Force          for short impulses and transients, to improve the signal to
               noise ratio
Exponential    for transients which are longer than the sample period or
               which do not decay sufficiently within this period.

For the analysis of continuous signals use :

Hanning        for general purposes
Blackman or    if selectivity is important and you need to distinguish
Kaiser-Bessel  between harmonic signals with very different levels
Flattop        for calibration procedures and for those situations where
               correct amplitude measurements are important.
Uniform        only when analyzing special sinusoids whose frequencies
               coincide with center frequencies of the analysis.

For system analysis, i.e. measurement of FRFs, use :

Force          for the excitation (reference) signal when this is a hammer
Exponential    for the response signal of lightly damped systems with
               hammer excitation
Hanning        for reference and response channels when using random
               excitation signals
Uniform        for reference and response channels when using pseudo
               random excitation signals

Window correction mode


Applying a window distorts the nature of the signal and correction factors have
to be applied to compensate for this. This correction can be applied in one of
two ways.
Amplitude where the amplitude is corrected to the original value.
Energy where the correction factor gives the correct signal energy for a
particular frequency band. This is the only method that should be
used for broad band analysis.


If a number of windows are applied to a function, the effect of the window may
be squared or cubed, and this affects the correction factor required.

Amplitude correction
Consider the example of a sine wave signal and a Hanning window.

[Figure: an unwindowed sine wave and its single-line spectrum, alongside the windowed signal whose spectral amplitude is half that of the unwindowed signal]

When the windowed signal (sine wave x Hanning window) is transformed to the
frequency domain, the amplitude of the resulting spectrum will be only half
that of the equivalent unwindowed signal. Thus in order to correct for the
effect of the Hanning window on the amplitude of the frequency spectrum, the
resulting spectrum has to be multiplied by an amplitude correction factor
of 2.

Amplitude correction must be used for amplitude measurements of single tone
frequencies if the analysis is to yield correct results.

Energy correction
Windowing also affects broadband signals.

[Figure: original signal x window function = windowed signal]


In this case however it is usually the energy in the signal which is important
to maintain, and an energy correction factor is applied to restore the energy
level of the windowed signal to that of the original signal.

In the case of a Hanning window, the energy in the windowed signal is 61% of
that of the original signal. The windowed data therefore needs to be
multiplied by 1.63 to correct the energy level.
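Both correction factors can be derived from the window samples: the amplitude factor is the reciprocal of the window's mean value, and the energy factor is the reciprocal of its rms value. A short check (illustrative):

```python
import numpy as np

# Amplitude and energy correction factors computed from the window's
# mean and mean-square values; for Hanning they come out at the 2 and
# 1.63 quoted in the text.
w = np.hanning(4096)
amp_corr = 1.0 / np.mean(w)                  # restores peak amplitudes
energy_corr = 1.0 / np.sqrt(np.mean(w**2))   # restores signal energy

print(round(amp_corr, 2), round(energy_corr, 2))  # 2.0 1.63
```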

Window correction factors

The actual correction factor that is needed to compensate for the application of
the time window depends on the window correction mode and the number of
windows applied. Table 1.2 lists the values used.

Window type     Amplitude mode   Energy mode
Uniform         1                1
Hanning x1      2                1.63
Hanning x2      2.67             1.91
Hanning x3      3.20             2.11
Blackman        2.80             1.97
Hamming         1.85             1.59
Kaiser-Bessel   2.49             1.86
Flattop         4.18             2.26

Table 1.2 Window correction factors


1.4 Averaging

Signals in the real world are contaminated by noise - both random and bias.
This contamination can be reduced by averaging a number of measurements, in
which case the random noise will average to zero. Bias errors however, such as
nonlinearities, leakage and mass loading, are not reduced by the averaging
process. A number of different techniques for averaging measurements are
provided.

Linear
This produces a linearly weighted average in which all the individual
measurements have the same influence on the final averaged value. If the
average value of M consecutive measurement ensembles is x̄, then -

    x̄ = (1/M) Σ{m=0...M-1} xm                    Eqn 1-3

The intermediate average is the running sum x̄a(n) = x̄a(n-1) + xn. The final
scaling by 1/M can be done at the end of the acquisition.

Stable
In the case of stable averaging again all the individual measurements have the
same influence on the final averaged value. In this case though, the intermediate
averaging result is based on -


    x̄n = ((n-1)/n)·x̄n-1 + xn/n                   Eqn 1-4

The advantage of stable averaging is that the intermediate averaging results are
always properly scaled. This scaling however makes the procedure slightly
more time consuming.

Exponential
Exponential averaging on the other hand yields an averaging result to which
the newest measurement has the largest influence while the effect of the older
ones is gradually diminished. In this case -


    x̄n = (1-α)·x̄n-1 + α·xn                       Eqn 1-5


where α is a constant which acts as a weighting factor.
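The stable and exponential update rules of Eqns 1-4 and 1-5 can be sketched as follows (illustrative; stable averaging reproduces the linear mean at every step):

```python
import numpy as np

# Stable and exponential averaging updates; alpha is the exponential
# weighting constant.
def stable_update(avg, x, n):          # n = 1, 2, 3, ...
    return ((n - 1) / n) * avg + x / n

def exp_update(avg, x, alpha):
    return (1 - alpha) * avg + alpha * x

data = np.array([1.0, 2.0, 3.0, 4.0])
s = e = 0.0
for n, x in enumerate(data, start=1):
    s = stable_update(s, x, n)
    e = exp_update(e, x, alpha=0.5)

print(np.isclose(s, data.mean()))      # True: properly scaled at every step
print(e)                               # 3.0625: newest data weighted most
```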

Peak level hold


In this case a comparison has to be made between individual measurement
ensembles. When they contain complex data, the comparison is based on the
amplitude information. For peak level hold averaging, the last measurement
ensemble, consisting of N individual samples xn(k) (where k = 0...N-1 and N is
the blocksize), is compared to the average of the n-1 previous steps,
x̄n-1(k). The new average x̄n(k) is then defined as -

    x̄n(k) = xn(k)      if |xn(k)| > |x̄n-1(k)|, or
    x̄n(k) = x̄n-1(k)   otherwise                  Eqn. 1-6

In this way, the averaging result contains, for each k, the maximum value in
an absolute sense over all the ensembles considered during the averaging
process.

Peak reference hold


In peak reference hold averaging, one channel determines the averaging
process. If xi is the ensemble for channel i and xr represents the reference
channel, then the last measurement ensemble xr,n(k) (where k = 0...N-1) is
compared to the average of the n-1 previous steps, x̄r,n-1(k).
The new average x̄i,n(k) is then defined as -

    x̄i,n(k) = xi,n(k)      if |xr,n(k)| > |x̄r,n-1(k)|, or
    x̄i,n(k) = x̄i,n-1(k)   otherwise              Eqn 1-7

In this way, the averaging result contains the values that coincide with the
maximum values of the reference channel.


1.5 Reading list

Signal and system theory


J. S. Bendat and A.G. Piersol.
Random Data : Analysis and Measurement Procedures
Wiley - Interscience, 1971.
J. S. Bendat and A.G. Piersol.
Engineering Applications of Correlation and Spectral Analysis
Wiley - Interscience, 1980.
R.K. Otnes and L. Enochson.
Applied Time Series Analysis
John Wiley & Sons, 1978.
J. Max
Méthodes et Techniques de Traitement du Signal (2 Tomes)
Masson, 1972, 1986.

General literature in digital signal processing


A.V. Oppenheim and R.W. Schafer
Digital Signal Processing
Prentice Hall, Englewood Cliffs N.J., 1975.
L.R. Rabiner and B. Gold
Theory and Application of Digital Signal Processing
Prentice Hall, Englewood Cliffs N.J., 1975.
K.G. Beauchamp and C.K. Yuen
Digital Methods for Signal Analysis
George Allen & Unwin, London 1979.
M. Bellanger
Traitement Numérique du Signal
Masson, Paris 1981.
A. Peled and B. Liu
Digital Signal Processing
Theory, Design And Implementation
John Wiley & Sons.

Discrete Fourier Transform


E.O. Brigham
The Fast Fourier Transform
Prentice Hall, Englewood Cliffs N.J., 1974.


R.W. Ramirez
The FFT : Fundamentals and Concepts
Prentice Hall, Englewood Cliffs N.J., 1985.

C.S. Burrus and T.W. Parks


DFT/FFT and Convolution Algorithms : Theory and Implementation
John Wiley & Sons, 1985.

H.J. Nussbaumer
Fast Fourier Transform and Convolution Algorithms
Springer Verlag, 1982.

R.E. Blahut
Fast Algorithms for Digital Signal Processing
Addison Wesley, 1985.

IEEE-ASSP Society
Programs for Digital Signal Processing
IEEE Press, New York, 1979.


Chapter 2

Structural dynamics
testing

Understanding the structural dynamics of a structure is essential both for
improving the performance of existing structures and for the design and
development of new ones.
This chapter provides an introduction to the types of analysis used in
examining the dynamic behavior of structures:
Signal analysis

Signature analysis

System analysis


2.1 Signal analysis

The dynamic analysis of a linear physical system can be achieved by measuring
the response of the system (output) to a form of excitation. This excitation
can consist of operational forces which, while typical, are not necessarily
known. Measuring the response to known excitation forces is discussed in
section 2.2.

In examining the vibrational behavior of a structure, there is a range of
functions that can be acquired which provide information on the frequencies at
which particular phenomena occur. These measurement functions are described in
chapter 3.

Noise levels are a common problem, and specific information about acoustic
measurement functions is given in a separate set of documentation on Acoustics
and sound quality.

The examination of the behavior of a structure due to a changing environment,
such as during an engine run up, is termed signature analysis; this subject is
discussed in section 2.3.


2.2 System analysis

System analysis refers to a method of examining the properties of a system,
i.e. how a structure responds to a specific input. In the case of a linear
system, the relationship between the input and the output is a fundamental
characteristic of the system and can be used to predict the behavior of the
system under different stimuli.

[Figure: a linear system relating input signals to output signals]

Modal analysis is a form of system analysis which results in a modal model of
the system, composed of a set of frequencies, damping values and mode shapes.

The Frequency Response Function (FRF) is a frequency domain function
expressing the ratio between a response (output) signal and a reference
(input) signal. The position and direction of the measurements are termed
Degrees Of Freedom (DOFs). An FRF thus always depends on 2 DOFs: the response
DOF (numerator) and the reference DOF (denominator).

[Figure: the input Xj at the reference DOF passes through the system FRF H(f) to give the output Xi at the response DOF]

    H(f) = Xi / Xj

For modal purposes the response signal is most commonly the acceleration at
the response DOF due to a force input at another. In this case peaks in the FRF
indicate that low input levels generate high response levels (resonances), while
minima indicate low response levels, even for high inputs (anti-resonances).


[Figure: FRF magnitude (log amplitude versus frequency) showing a resonance peak and an anti-resonance minimum]

Measurement points
The number of acquisition channels determines the number of response and
excitation points that can be measured at any one time. Their position on the
test system can be defined as part of the geometry of the structure. In order
to visualize the response of each DOF, their geometrical position must be
defined.

Exciting the structure


The input to the structure can be applied either with a hammer or a shaker.
Using a shaker requires a `Source' signal, which can take a number of forms;
the choice of signal depends on the nature of the analysis.

If the response is measured at several response DOFs and the system is excited
at a number of inputs, then the resulting FRFs are termed Multiple Input
Multiple Output.

When a hammer is used to excite a mechanical structure, the procedure is
termed impact testing. This type of testing can be done in one of two ways.
The first method measures the response at a fixed point and applies the hammer
at a number of excitation points; this is termed `Roving hammer'. The
alternative is to apply the hammer at one point and to measure the response at
all the other points; this is termed `Fixed hammer'.


2.3 Signature analysis


This involves analyzing a series of non-stationary signals that are varying over
the analysis period. An example would be the vibrational/acoustical behavior
of a structure as a function of rotational speed. Thus during `run-up' and/or
`run-down' a series of signals are measured to determine the behavior of the
structure and to determine the rotational speed. (the tacho signal).
Spectral data are analyzed and plotted against the external parameter as illusĆ
trated below. Such an arrangement is known as a waterfall or map of meaĆ
sured functions. The functions that can be acquired during a run and placed in
a waterfall are listed in sections 3.1 and 3.2.

[Figure: a waterfall of basic functions stacked along the tracking parameter axis, from which composite functions are derived]

As well as the waterfall of measured functions, signature analysis enables you
to obtain so-called composite functions. These are two-dimensional functions
that are directly related to the tracking parameter value; examples are
overall levels and frequency sections, described in section 3.3.
Measurements are taken during the acquisition, but further analyses of the
measured functions in relation to the tracking parameters can be performed
during post processing.

Tracking
The dominant parameter describing the change of a signal is termed the
tracking parameter. This could be time, rpm, temperature or some other
quantity. The rotational speed is commonly used as a tracking parameter, and
for this a tacho signal is used to determine the rpm.

A number of pulses per revolution is generated by the rotating shaft. The
tacho channel uses a positive slope crossing of a trigger level to determine
the times between pulses (t1, t2, t3, ...) and thus the rpm.


While a number of channels can be used to measure tracking values, one must
be used to control the acquisition, i.e. to determine when the measurements
will be made.

Parameters relating to signature analysis

[Figure: a block of N = M·P samples covering P revolutions, with M samples per revolution]

    Sampling frequency    fs = 1/Δt          Sampling period    T = N·Δt

P = number of revs/block = (number of revs/sec) · (number of secs)

    P = rpm(Hz) · T

M = number of samples/rev = (number of samples/sec) · (number of secs/rev)

    M = fs / rpm(Hz)

N = number of samples (blocksize, i.e. the data acquisition size)
  = (number of samples/rev) · (number of revs)

    N = M · P
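The parameter relations above can be illustrated with example numbers (arbitrary values):

```python
# Relations between rpm, sampling rate, and block parameters for
# signature analysis (illustrative numbers only).
fs = 8192.0             # sampling frequency, Hz
rpm_hz = 32.0           # rotational speed, revs/sec
T = 0.25                # observation time, s

P = rpm_hz * T          # revolutions per block
M = fs / rpm_hz         # samples per revolution
N = M * P               # blocksize

print(P, M, N)          # 8.0 256.0 2048.0
```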

Orders
For rotating machinery most signal phenomena are related to the rotational
speed and its harmonics.
A rotational speed harmonic is called an order; the order number (O) is the
proportionality constant between the rotational speed (rpm) and the
frequency (f).


    f = O · rpm (Hz)

For stationary signals the relevant analysis parameters are the maximum
frequency (fmax) and the frequency resolution (Δf) -

    fmax = fs / 2          Δf = 1 / T

For rotational equipment the relevant analysis parameters are the maximum
order (Omax) and the order resolution (ΔO) -

    fmax = Omax · rpm      Omax = M / 2
    Δf = ΔO · rpm          ΔO = 1 / P

Fixed sampling
This is another term for basic signature analysis, where signals are measured
using the standard data acquisition techniques described above, i.e. with a
fixed sampling frequency and sampling period. The rpm is measured but is used
only for control of the acquisition and annotation of the acquired blocks. In
this case, the maximum order and the order resolution vary with the rotational
speed (rpm).

Order tracking
This involves measuring signals at different rotational speeds but in this case,
the sampling frequency (fs ) and observation time (T) are dependent on the rpm.

The data is sampled synchronously with the rotational speed (rpm). In this
way the number of samples per revolution is kept constant. The signals are in fact
sampled at constant shaft angle increments rather than time increments. This
implies that the maximum order measured remains constant (Omax= M / 2).

When order tracking, the number of revolutions per measurement (P) is
independent of the rotational speed. Thus with a constant P, the order
resolution is constant (ΔO = 1/P). The orders lie on spectral lines, and
leakage problems are avoided when an integer number of revolutions is
measured.
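With M and P fixed, the order axis is the same at every speed, as the following sketch illustrates (values arbitrary):

```python
# With order tracking, M (samples/rev) and P (revs/block) are fixed, so
# the maximum order and the order resolution do not depend on the rpm.
M, P = 64, 8
O_max = M / 2                       # maximum order: 32.0
dO = 1.0 / P                        # order resolution: 0.125

for rpm_hz in (10.0, 50.0):
    f_max = O_max * rpm_hz          # only the frequency span scales
    print(O_max, dO, f_max)
```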


Chapter 3

Functions

This chapter gives a brief description of the various functions that can be
measured and their uses.

Time domain measurement functions

Frequency domain measurement functions

This chapter does not deal with acoustic measurements, which are covered in a
separate set of documents, "Acoustics and sound quality".
It does describe the specific functions that are associated with signature
analysis and which are based on a tracking parameter.

Composite functions

In addition this chapter mentions the use of consistent units and how rms
values are calculated for the various measurement functions.

Units

Calculation of rms values


3.1 Time domain functions

Time Record
N instantaneous time samples x(n) are taken, where N is the blocksize. The
result of a time record measurement, x̄(n), is the ensemble average of a
series of M instantaneous time records, where M is the number of averages and
A designates the averaging operator.

    x̄(n) = A{m=0...M-1}( xm(n) ),   n = 0 ... N-1          Eqn 3-1

Averaging is useful in perceiving signals disguised by the presence of noise.
The specification of the number of averages taken in the determination of a
block of data, as well as the various averaging methods used, are described in
section 1.4.
In the case of Signature Analysis, a map or waterfall is obtained of all the
time measurements taken during the acquisition. Because this analysis deals
with changing signals, averaging is only useful with signals that change
slowly or in a stepwise fashion.

Autocorrelation
Correlation is a measure of the similarity between two quantities. The
autocorrelation function is found by taking a signal and comparing it with a
time shifted version of itself.
The time domain autocorrelation function Rxx(τ) is thus acquired by
multiplying a signal by the same signal displaced by a time (τ) and
integrating the product over all time.

    Rxx(τ) = lim(T→∞) (1/T) ∫_T x(t)·x(t+τ) dt          Eqn 3-2

However this function is more commonly computed by using the corresponding
frequency domain function. In this case the discrete autocorrelation function
Rxx(n) of a sampled signal x(n) is calculated as,

    Rxx(n) = F⁻¹( Sxx(k) ),   k = 0 ... N-1,  n = 0 ... N-1          Eqn 3-3


where F⁻¹ is the inverse Fourier Transform and Sxx(k) is the discrete
autopower spectrum.

It can be seen that the greatest correlation occurs when τ = 0, and the
autocorrelation function is thus a maximum at this point, equal to the mean
square value of x(t). Purely random signals will therefore exhibit just one
peak, at τ = 0. Periodic signals however will exhibit another peak whenever
the time shift equals a multiple of the period.

The autocorrelation function of a periodic signal is also periodic and has the
same period as the waveform itself. This property is useful in detecting
signals hidden by noise. The advantage of using the autocorrelation function
rather than linear averaging is that no synchronizing trigger is required.
Certain impulse type signals also show up better using the autocorrelation
function than using a frequency domain function.
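The frequency domain route of Eqn 3-3 can be sketched as follows (circular correlation, scaled so that Rxx(0) equals the mean square value):

```python
import numpy as np

# Discrete autocorrelation via the autopower spectrum: Rxx = F^-1(Sxx)
# with Sxx = X·conj(X), scaled by 1/N.
def autocorr(x):
    X = np.fft.fft(x)
    return np.fft.ifft(X * np.conj(X)).real / len(x)

N = 128
x = np.sin(2 * np.pi * 4 * np.arange(N) / N)   # period of 32 samples
R = autocorr(x)
print(np.isclose(R[0], np.mean(x**2)))          # True: max at zero lag
print(np.isclose(R[N // 4], R[0]))              # True: peak after one period
```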

Crosscorrelation

Cross correlation is a measure of the similarity between two different
signals. It therefore requires multiple channels. In terms of the time domain
it is defined as:

    Rxy(τ) = lim(T→∞) (1/T) ∫_T x(t)·y(t+τ) dt          Eqn 3-4

As in the case of the autocorrelation function, the discrete cross correlation
function Rxy(n) between two sampled signals x(n) and y(n) is calculated as,

    Rxy(n) = F⁻¹( Sxy(k) ),   k = 0 ... N-1,  n = 0 ... N-1          Eqn 3-5

with Sxy (k) being the discrete crosspower spectrum between the two signals.

Cross correlation indicates the similarity between two signals as a function of the time shift. It is therefore useful in determining the time difference between such signals.
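As an illustration of using the cross correlation for time-difference estimation, a minimal numpy sketch (the delay value and the noise signal are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
delay = 40                        # samples; y is x shifted by this amount

x = rng.standard_normal(N)
y = np.roll(x, delay)             # circularly delayed copy of x

# R_xy(n) = F^-1{ S_xy }, with S_xy = X* . Y (Eqn 3-5), circular form
X = np.fft.fft(x)
Y = np.fft.fft(y)
Rxy = np.real(np.fft.ifft(np.conj(X) * Y))

# The lag with maximum correlation recovers the time shift
assert np.argmax(Rxy) == delay
```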

Part I Signal processing 33


Chapter 3 Functions

Histogram
The probability histogram q(j) describes the relative occurrence of specific sigĆ
nal levels. Let the signal input range of a sampled signal x(n) be divided in J
classes. Each class j,j = 0...J-1, can be characterized by an average value xj and
a class increment x.
3
2

ÇÇ

nr of classes
signal range

ÇÇÇÇ
ÇÇÇ
ÇÇÇÇÇÇ
ÇÇÇ
ÇÇ
0

ÇÇ
ÇÇÇÇÇÇÇ
ÇÇ
-1

-2 -3 -2 -1 0 1 2 3
-3
nr of classes

Figure 3-1 Histogram


The probability histogram of a sampled signal x(n) can then be defined as,

N1
q(j)Ă Ă 1 ĂĂ
N
 Ă kĂ xĂ(n)
Ă,ĂĂĂ jĂ Ă 0...ĂJĂ Ă 1 Eqn 3-6
nĂ Ă 0Ă

where kĂ x(n)
Ă Ă 1,Ă ifĂx jĂ Ă xĂ Ă xĂ(n)Ă Ă x jĂ Ă xĂĂ
2 2
kĂ x(n)
Ă Ă 0,Ă otherwiseĂĂ

The maximum value of J is either the number of time samples (Time data) or
spectral lines in the block.

Probability Density
The probability density p(j) is a normalized representation of the probability
histogram q(j),

p(j)Ă Ă 100ĂĂ q(j),ĂĂ jĂ Ă 0...JĂ Ă 1 Eqn 3-7


x
This function is expressed in percents per engineering unit.

Probability Distribution
The probability distribution d(j) gives the probability (in percent) that the signal
level is below a given value. This function is calculated from the probability
histogram, q(t) given in equation 3-6.


$$d(j) = \sum_{i=0}^{j} q(i), \qquad j = 0 \ldots J-1 \qquad \text{Eqn 3-8}$$
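The histogram, density and distribution functions above can be sketched together. A minimal numpy sketch; the class count J and the test signal are arbitrary assumptions, and the percent scaling follows the text rather than any particular LMS implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)    # sampled signal x(n)
N = len(x)
J = 32                             # number of classes j = 0 ... J-1

# Probability histogram q(j), Eqn 3-6: relative occurrence per class
counts, edges = np.histogram(x, bins=J)
q = counts / N
dx = edges[1] - edges[0]           # class increment

# Probability density p(j), Eqn 3-7: percents per engineering unit
p = 100.0 * q / dx

# Probability distribution d(j), Eqn 3-8, scaled to percent: the
# probability that the signal is below the upper edge of class j
d = 100.0 * np.cumsum(q)

assert np.isclose(q.sum(), 1.0)    # every sample falls in some class
assert np.isclose(d[-1], 100.0)
```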


3.2 Frequency domain functions

Spectrum
The instantaneous discrete frequency spectrum X(k) is defined as the discrete Fourier transform of the instantaneous sampled time record.

$$X(k) = F\{\, x(n) \,\}, \qquad n = 0 \ldots N-1,\; k = 0 \ldots N-1 \qquad \text{Eqn 3-9}$$

The result of a frequency spectrum measurement is the ensemble average of a series of M instantaneous discrete frequency spectra Xm(k), m = 0...M-1,

$$\bar{X}(k) = \mathop{A}_{m=0}^{M-1} \left( X_m(k) \right), \qquad k = 0 \ldots N-1 \qquad \text{Eqn 3-10}$$

where A denotes the ensemble averaging operator.

Since only real valued time records are considered, the frequency spectrum has a Hermitian symmetry.

$$X(k) = X^{*}(-k) = X^{*}(N-k), \qquad k = 0 \ldots \tfrac{N}{2} \qquad \text{Eqn 3-11}$$

where X* is the complex conjugate.


The number of spectral lines is equal to half the number of time samples. The FFT algorithms produce a double sided Fourier transform which is corrected to single-sided spectral quantities. Only the positive frequency values are considered. These are then adapted according to the format required. A Peak amplitude multiplies the result by a factor 2, so producing the amplitude of the time signal in the case of a sine wave. Rms amplitude multiplies the result by √2.

As with time record averaging, the non-synchronous signals will average out. This function is therefore useful in distinguishing a signal that is contaminated by noise. When a trigger signal is available, the frequency spectrum has the advantage over autopower spectrum averaging in that the noise averages to zero, rather than to its mean square value.
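The difference between averaging the (complex) spectrum and averaging the autopower can be sketched as follows: with a phase-locked (triggered) signal, the noise in the complex average sinks toward zero, while the autopower average retains the noise floor at its mean square value. A minimal numpy sketch; signal and noise levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 256, 400                    # block size, number of averages
n = np.arange(N)
tone = np.sin(2 * np.pi * 10 * n / N)   # triggered (phase-locked) signal

Xavg = np.zeros(N, dtype=complex)
Savg = np.zeros(N)
for _ in range(M):
    x = tone + rng.standard_normal(N)   # additive random noise
    X = np.fft.fft(x) / N
    Xavg += X / M                       # ensemble-averaged spectrum
    Savg += np.abs(X) ** 2 / M          # averaged autopower

# Away from the tone (bin 10), the averaged spectrum sinks toward zero,
# while the autopower keeps a noise floor of roughly sigma^2 / N
assert np.abs(Xavg[60]) < 0.05
assert Savg[60] > 1e-4
```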


Autopower Spectrum

The autopower spectrum is the squared magnitude of the frequency spectrum. The discrete autopower spectrum Sxx(k) of a sampled time signal is defined as the ensemble average of the squared magnitude of M instantaneous discrete frequency spectra Xm(k),

$$S_{xx}(k) = \mathop{A}_{m=0}^{M-1} \left( X_m(k)\, X_m^{*}(k) \right), \qquad k = 0 \ldots N-1 \qquad \text{Eqn 3-12}$$

where X * is the complex conjugate.

Thus if the frequency spectrum is complex you have phase information, while
the autopower spectrum will be real and contain no phase information.

Since only real valued time records are considered, the autopower spectrum is
symmetric with respect to zero-frequency,

$$S_{xx}(k) = S_{xx}(-k) = S_{xx}(N-k), \qquad k = 0 \ldots \tfrac{N}{2} \qquad \text{Eqn 3-13}$$

Figure 3-2 Autopower spectra. (A sine wave of amplitude A and period T shown as a double sided frequency spectrum X with lines of height A/2 at ±f, a double sided autopower spectrum Sxx with lines of height (A/2)² at ±f, and a single sided (rms power) autopower spectrum Gxx with a line of height A²/2 at f.)

Of this double sided frequency spectrum, only the positive frequency values are considered. In order to obtain a time signal power estimate, a summation of the power spectra values at the positive and negative frequencies must be made, resulting in the so-called RMS Autopower spectrum Gxx(k),

$$G_{xx}(k) = S_{xx}(k), \quad k = 0; \qquad G_{xx}(k) = 2\, S_{xx}(k), \quad k = 1 \ldots \tfrac{N}{2}-1 \qquad \text{Eqn 3-14}$$

The power spectrum values correspond to the Fourier coefficients resulting from a double sided Fourier transform, but these values are corrected to single-sided spectral quantities, expressed as RMS or as PEAK amplitude values.


There are a number of formats in which autopower spectra are presented.

The Power Spectral Density normalizes the level with respect to the frequency resolution. This overcomes differences that may arise from using a specific Bandwidth. This is the standard way of measuring stationary broadband signals.

For transient signals the Energy Spectral Density may be more interesting, since it looks at the level of the energy rather than the average power over the total acquisition time; it is obtained by multiplying the Power Spectral Density by the measurement period.

The interrelationship of these autopower formats is shown in Table 3.1. The parameters A and T are as illustrated in Figure 3-2, and ΔF is the frequency resolution. Examples of the different modes and units are shown below.

Amplitude mode   Amplitude format   Value other than DC line

RMS              Power              A²/2
RMS              Linear             A/√2
RMS              PSD                A²/2ΔF
RMS              ESD                A²T/2ΔF
Peak             Power              A²
Peak             Linear             A
Peak             PSD                A²/ΔF
Peak             ESD                A²T/ΔF

Table 3.1 Autopower spectrum formats
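The single sided autopower formats can be checked numerically for a sine wave of amplitude A. A minimal numpy sketch; the block length and line number are arbitrary assumptions, and an exact integer number of cycles is used so that no window or leakage correction is needed.

```python
import numpy as np

A = 3.0                        # sine amplitude
N = 1024                       # samples in the block
n = np.arange(N)
x = A * np.sin(2 * np.pi * 16 * n / N)   # exactly 16 cycles

X = np.fft.fft(x) / N          # double sided spectrum, lines of A/2
Sxx = np.abs(X) ** 2           # double sided autopower, (A/2)^2
Gxx = 2 * Sxx[: N // 2]        # single sided rms autopower (Eqn 3-14)
Gxx[0] = Sxx[0]                # DC line is not doubled

k = 16
assert np.isclose(Gxx[k], A**2 / 2)                  # RMS power: A^2/2
assert np.isclose(np.sqrt(Gxx[k]), A / np.sqrt(2))   # RMS linear: A/sqrt(2)
assert np.isclose(2 * Gxx[k], A**2)                  # Peak power: A^2
```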

Crosspower spectrum
The cross power spectrum Sxy is a measure of the mutual power between two signals at each frequency in the analysis band. It is the dual of the cross correlation function.

It is defined as the following product -

$$S_{xy}(k) = \mathop{A}_{m=0}^{M-1} \left( X_m^{*}(k)\, Y_m(k) \right), \qquad k = 0 \ldots N-1 \qquad \text{Eqn 3-15}$$

where

X*m(k) is the complex conjugate of the instantaneous frequency spectrum of the one time signal x(n), and


Ym(k) is the instantaneous frequency spectrum of a related time signal y(n).
The crosspower spectrum contains information about both the magnitude and phase of the signals. Its phase at any frequency is the relative phase between the two signals, and as such it is useful in analyzing phase relationships.

Since it is a product, it will have a high value when both signal levels are high, and a low value when both signal levels are low. It is therefore an indicator of major signal levels on both the input and output. Its use in this respect should be treated with caution however, since a high value can also arise from just the output level without indicating that the input is the cause. The interdependence of input and output is revealed in the coherence function, which is described in the following subsection.

The cross power spectrum is used in the calculation of frequency response functions.

The Amplitude mode in which the crosspower spectrum is presented is as described in the previous section on the Autopower spectrum. Rms and PEAK values are considered.

Coherence
There are three types of coherence functions: the ordinary coherence, partial coherence and virtual coherence.

Ordinary Coherence
The (squared) ordinary coherence between signals Xi(k) and Xj(k) is defined by,

$$\gamma_{ij}^{2}(k) = \frac{|S_{ij}(k)|^{2}}{S_{ii}(k)\, S_{jj}(k)} \qquad \text{Eqn 3-16}$$

where Sij(k) is the averaged crosspower, and Sii(k) and Sjj(k) are the averaged autopowers.

It is a ratio of the maximum energy in a combined output signal due to its various components, and the total amount of energy in the output signal. Coherence can be used as a measure of the power in one channel that is caused by the power in another channel. As such it is useful in assessing the accuracy of transfer function measurements. It does not however need to apply to input and output, and can also be measured between shakers.


The coherence function can take values that range between 0 and 1. A high value (near 1) indicates that the output is due almost entirely to the input and you can feel confident in the frequency response function measurements. A low value (near 0) indicates problems such as extraneous input signals not being measured, noise, nonlinearities or time delays in the system.
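The ordinary coherence of Eqn 3-16 can be sketched with averaged auto- and crosspowers. A minimal numpy sketch; the system here is just a gain with a small amount of additive output noise, and all values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 256, 200                       # block size, number of averages

Sxx = np.zeros(N)
Syy = np.zeros(N)
Sxy = np.zeros(N, dtype=complex)
for _ in range(M):
    x = rng.standard_normal(N)                    # input signal
    y = 2.0 * x + 0.1 * rng.standard_normal(N)    # output with small noise
    X, Y = np.fft.fft(x), np.fft.fft(y)
    Sxx += np.abs(X) ** 2 / M
    Syy += np.abs(Y) ** 2 / M
    Sxy += np.conj(X) * Y / M

coh = np.abs(Sxy) ** 2 / (Sxx * Syy)  # Eqn 3-16

assert np.all(coh <= 1.0 + 1e-12)     # bounded by 1 (Cauchy-Schwarz)
assert np.mean(coh) > 0.95            # output almost entirely due to input
```

With no averaging (M = 1) the estimated coherence is identically 1, which is why coherence is only meaningful for averaged spectra.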

Multiple coherence (used in the calculation of the measurement function FRF)

The multiple coherence function is the coefficient that describes, in the frequency domain, the causal relationship between a single signal (an output spectrum) and a set of other signals (the considered input spectra), as a function of frequency and all considered references. It is the ratio of the energy in an output signal caused by several input signals to the total amount of energy in the output signal. It is used to verify the amount of noise on the measurements, as all responses should be related to the applied references (inputs).

The multiple coherence function between a single response spectrum Y(k) and a set of reference spectra Xi(k) is calculated from

$$\gamma_{y:x}^{2}(k) = 1 - \frac{S_{yy.n!}(k)}{S_{yy}(k)} \qquad \text{Eqn 3-17}$$

where Syy(k) is the autopower of the response signal y(n), and Syy.n!(k) is the part of the autopower Syy(k) from which the contributions of all reference spectra Xi(k) have been eliminated.

The value of the multiple coherence is always between 0 and 1.

Partial Coherence
The partial coherence is the ordinary coherence between conditioned signals. Conditioned signals are those where the causal effects of other signals are removed in a linear least squares sense.

To define the partial coherence, consider the signals X1, ..., Xi, Xj, ... The partial coherence between Xi and Xj, after eliminating the signals X1 ... Xg, is given by,

$$\gamma_{p\,ij \cdot g}^{2}(k) = \frac{|S_{ij \cdot g}(k)|^{2}}{S_{ii \cdot g}(k)\, S_{jj \cdot g}(k)} \qquad \text{Eqn 3-18}$$

with:

Sii·g(k) = autopower of signal Xi without the influence of the signals X1 ... Xg


Sjj·g(k) = autopower of signal Xj without the influence of the signals X1 ... Xg
Sij·g(k) = crosspower between signals Xi and Xj without the influence of the signals X1 ... Xg.
The partial coherence can take values between 0 and 1.

Virtual Coherence
The virtual coherence is an ordinary coherence between a signal and a principal component, which is discussed below. The virtual coherence is calculated from,

$$\gamma_{v\,ij}^{2}(k) = \frac{|S'_{ij}(k)|^{2}}{S'_{ii}(k)\, S_{jj}(k)} \qquad \text{Eqn 3-19}$$

with:

S'ii(k) = autopower of principal component X'i
S'ij(k) = crosspower between signal Xj and principal component X'i

The value of the virtual coherence is always between 0 and 1. The sum of the virtual coherences between any signal and all principal components is also in the range [0,1].

Principal Component Spectra


Consider a set of signals X1 ... Xn. Now assume that a set of perfectly uncorrelated signals can be determined such that, by linear combinations, they describe the original set of signals. These signals (indicated by X'1 ... X'n) are called the principal components of the signals in the original set. Note that the coherence between the principal components is exactly 0, as they are, by definition, perfectly uncorrelated. The principal components are in a sense the main independent mechanisms (sources) observable in the signal set.

The principal components can be calculated either on the sampled time data or on the corresponding spectra. The fundamental relations are,

$$\{X'(k)\} = [U]^{H} \{X(k)\}$$
$$\{X(k)\} = [U] \{X'(k)\}$$
$$[U]^{H} [U] = I$$
$$[S'_{xx}] = [U]^{H} [S_{xx}] [U] \qquad \text{Eqn 3-20}$$


where
[S'xx] = diagonal matrix with the autopowers of the principal component spectra on the diagonal,
{X'(k)} = an uncorrelated set of principal component signals,
[U] = unitary transformation matrix.
The major application of the principal component spectra is in determining the number of uncorrelated mechanisms (sources) in a signal set. A well known example is the diagnosis of multiple input excitation for multiple input/multiple output FRF estimation.

Frequency Response Function


The frequency response function (FRF) matrix [H(k)] expresses the frequency domain relationship between the inputs and outputs of a linear time-invariant system.

(Diagram: a system H(k) relating the reference inputs X(k) to the response outputs Y(k).)

If Ni is the number of system inputs and No the number of system outputs, let {X(k)} be an Ni-vector with the system input signals and {Y(k)} an No-vector with the system output signals. A frequency response function matrix [H(k)] of size (No, Ni) can then be defined such that,

$$\{Y(k)\} = [H(k)]\, \{X(k)\} \qquad \text{Eqn 3-21}$$

The system described above is an ideal one, where the output is related directly to the input and there is no contamination by noise. This is not the case in reality, and various estimators are used to estimate [H(k)] from the measured input and output signals.

The H1 Estimator
The most commonly used one is the H1 estimator, which assumes that there is no noise on the input and consequently that all the X measurements are accurate.


(Diagram: system model Y = HX + N, with the noise N assumed on the output only.)

It minimizes the noise on the output in a least squares sense. In this case the transfer function is given by -

$$[H_1(k)] = \frac{[S_{yx}(k)]}{[S_{xx}(k)]} \qquad \text{Eqn 3-22}$$

This estimator tends to give an underestimate of the FRF if there is noise on the
input. H1 estimates the anti-resonances better than the resonances. Best results
are obtained with this estimator when the inputs are uncorrelated.

The H2 Estimator
Alternatively, the H2 estimator can be used. This assumes that there is no noise
on the output and consequently that all the Y measurements are accurate.
(Diagram: system model Y = H(X - M), with the noise M assumed on the input only.)

It minimizes the noise on the input in a least squares sense and in this case the
transfer function is given by -

$$[H_2(k)] = \frac{[S_{yy}(k)]}{[S_{yx}(k)]} \qquad \text{Eqn 3-23}$$

This estimator tends to give an overestimate of the FRF if there is noise on the output. It estimates the resonances better than the anti-resonances.
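The behavior of the two estimators can be sketched for a trivial single input/single output system with noise on the output only: H1 converges to the true FRF while H2 overestimates it. A minimal numpy sketch; the flat gain, noise level and averaging count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 256, 400                       # block size, number of averages
H_true = 5.0                          # flat FRF of the trivial system y = 5x

Sxx = np.zeros(N)
Syy = np.zeros(N)
Syx = np.zeros(N, dtype=complex)
for _ in range(M):
    x = rng.standard_normal(N)
    y = H_true * x + 2.0 * rng.standard_normal(N)   # noise on the output
    X, Y = np.fft.fft(x), np.fft.fft(y)
    Sxx += np.abs(X) ** 2 / M
    Syy += np.abs(Y) ** 2 / M
    Syx += Y * np.conj(X) / M         # crosspower between y and x

H1 = Syx / Sxx                        # Eqn 3-22
H2 = Syy / Syx                        # Eqn 3-23

# H1 stays close to the true value; H2 is biased upward by the
# output noise (by roughly 1 + noise power / signal power here).
assert np.abs(np.mean(np.abs(H1)) - H_true) < 0.2
assert np.mean(np.abs(H2)) > np.mean(np.abs(H1))
```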


Note! This estimator can only be implemented in the case of a single output.

The Hv Estimator
Finally, with the Hv estimator, [H(k)] is calculated from the eigenvector corresponding to the smallest eigenvalue of the matrix [Sxxy]:

$$[S_{xxy}] = \begin{bmatrix} S_{xx} & S_{xy} \\ S_{yx} & S_{yy} \end{bmatrix} \qquad \text{Eqn 3-24}$$

This estimator minimizes the global noise contribution in a total least squares sense. When using this estimator, the partitioning of the noise over the input and output signals can be scaled.

(Diagram: system model Y - N = H(X - M), with noise M on the input and noise N on the output.)

This estimator provides the best overall estimate of the frequency response function. It approximates to the H2 estimator at the resonances and the H1 estimator at the anti-resonances. It does however require more computational time than the other two.

Frequency response functions depend on there being at least one reference channel and one response channel.

Impulse Response
The impulse response (IR) function matrix [h(t)] expresses the time domain relationship between the inputs and outputs of a linear system. This relationship takes the form of a convolution integral.

$$y(t) = \int h(\tau)\, x(t - \tau)\, d\tau \qquad \text{Eqn 3-25}$$


[h(t)] is calculated using the inverse Fourier transform of the frequency response function as shown below -

$$[h(t)] = F^{-1}\{\, [H(k)] \,\} \qquad \text{Eqn 3-26}$$

Impulse response functions depend on there being at least one reference channel and one response channel.

The FRF estimators (H1, H2 and Hv) are as described above.
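The inverse transform of Eqn 3-26 can be sketched for a simple known system. A minimal numpy sketch; the one-pole filter is an arbitrary example whose impulse response is known in closed form.

```python
import numpy as np

# A one-pole filter y[n] = a*y[n-1] + x[n] has the impulse response
# h[n] = a**n. Its FRF sampled on N frequency lines is
# H(k) = 1 / (1 - a * exp(-j*2*pi*k/N)).
N = 256
a = 0.7
k = np.arange(N)
H = 1.0 / (1.0 - a * np.exp(-2j * np.pi * k / N))

# Impulse response from the inverse Fourier transform of the FRF
h = np.real(np.fft.ifft(H))          # Eqn 3-26

# The first samples match a**n (the circular wrap-around error is
# of order a**N, which is negligible here)
assert np.allclose(h[:20], a ** np.arange(20), atol=1e-10)
```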


3.3 Composite functions

The functions described in this section represent functions that can be acquired or processed during a Signature analysis. Since this type of analysis is intended to examine the evolution of signals as a function of a changing environment (e.g. rpm, time, ...), there need to be functions that express this evolution. These are called composite functions, as they are derived from the `basic' measurement functions described in the previous section, for different environmental conditions.

Overall level (OA)

This function describes the evolution of the total energy in the measured signal. As such it is always expressed as a frequency spectrum rms value. It is available with all basic measurement functions. Energy correction is applied to this function.

ANSI 1.4 time based OA level calculation

The time signal is exponentially averaged to calculate the Overall level over a particular bandwidth. An exponential weighting factor e^(-Δt/τ) is used, where Δt is the sample period of the signal and τ is a time constant. The value of τ depends on the type of signal, and three standardized values are supplied.

τ = 35 ms for impulse (peaky) signals
τ = 125 ms for fast changing signals
τ = 1000 ms for slow changing signals.

When the signal contains spikes and is therefore defined as "impulse", an additional peak detector mechanism is implemented. In this case the signal is first averaged using the 35 ms averaging time constant and then peaks are detected using a decay rate of 1500 ms.
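The exponential time weighting can be sketched as a first-order recursion on the squared signal. A minimal Python sketch, not the LMS implementation; the sample rate and input signal are arbitrary assumptions, the time constant is the standardized "fast" value, and no peak detector is included.

```python
import numpy as np

fs = 8000.0                     # sample rate (Hz), arbitrary assumption
dt = 1.0 / fs
tau = 0.125                     # 'fast' time constant, 125 ms
alpha = np.exp(-dt / tau)       # exponential weighting factor e^(-dt/tau)

rng = np.random.default_rng(5)
x = rng.standard_normal(80_000)  # 10 s of unit-variance noise

# Exponentially averaged mean square, then square root -> overall level
ms = 0.0
for sample in x:
    ms = alpha * ms + (1.0 - alpha) * sample**2
level = np.sqrt(ms)

# For stationary unit-variance noise the level settles near 1
assert 0.8 < level < 1.2
```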

Frequency section

This function describes the evolution of the energy of the measured signal over the rpm range in a specified frequency band. It is always expressed as an Rms frequency spectrum and is available only when the basic measurement function is a frequency domain function.

The frequency section is calculated by integrating over a Bandwidth around the center frequency value.


(Diagram: a frequency band of width Bandwidth, running from the Lower bandvalue through the Center frequency to the Upper bandvalue.)

The center frequency is the frequency at which the section will be calculated and is specified by the Center parameter. The Lower bandvalue and the Upper bandvalue are given by

Center frequency ± Bandwidth/2

The Bandwidth is determined by the Band mode parameter. Possible ways in which to express the Bandwidth are -

V a fixed frequency range

V a fixed number of spectral lines
(the lines closest to the exact frequency value are used)

V a percentage of the selected center frequency

These options are illustrated below.

(Diagram: frequency sections plotted against rpm for Band mode = frequency (Δf constant), Band mode = lines, and Band mode = % (Δf/fc constant).)

Order sections
This function describes the evolution of the energy of the measured signal in a specified `order' band. Orders are introduced in chapter 2.3, in the chapter on types of testing. An `order' band is a frequency band whose center frequency changes as a function of the measurement environment or tracking parameter. It is necessary therefore that the tracking parameter be a `frequency' type of parameter (e.g. rotation speed in rpm). An order is nothing other than a multiple of this basic tracking parameter. The evolution of the energy in a specified order band is expressed as a function of the measured rpm. Through post processing it is also possible to examine it in terms of measured time or frequency.

Possible means of defining the span for integration are:

V a fixed frequency range

V a fixed number of spectral lines
(the lines closest to the exact value are used)

V a fixed order Bandwidth

V a percentage of the selected order value

These options are illustrated below.
(Diagram: order sections plotted against rpm for Band mode = frequency (Δf constant), Band mode = lines, Band mode = order (ΔO constant, so Δf = constant · rpm) and Band mode = % (ΔO for order i = Bandwidth(%) · i).)

Octave sections
An octave section represents the summation of values over octave bands. The center frequencies of the bands are defined in the ISO 266 norm. Possible octave bands are 1/1, 1/2, 1/3, 1/12 and 1/24 octaves.


3.4 Units

To ensure consistency in the manipulation of data, LMS software always operates with an internal set of reference units. The physical quantities with a canonical dimension of length, angle, mass, time, temperature, current, and light each have a corresponding reference unit as listed below:

Canonical dimension   Abbreviation   Reference unit   Abbreviation

length                le             meter            m
angle                 an             radian           rad
mass                  ma             kilogram         kg
time                  ti             second           s
current               cu             Ampère           A
temperature           te             degree Kelvin    K
light                 li             candela          cd

Table 3.2 Reference units

This means that all data in either the internal data structures of the LMS software or the database is stored in these units. A physical quantity with a dimension that is a combination of the above canonical dimensions will be allocated a unit in the internal unit system that is a combination of the corresponding reference units.

For example, a quantity with the dimension of acceleration (length/time²) will have a unit that is the reference unit of length divided by the reference unit of time squared (m/s²).


3.5 Rms calculations

This section describes the ways in which rms calculations are performed for different measurement functions. RMS stands for Root Mean Square and is a measure of the energy in a signal.

If the data is amplitude corrected, then it is automatically converted to energy correction for the calculations.

Time and Impulse records

When dealing with time samples, a certain number of samples must be analyzed in order to obtain a measure of the nature of, and the energy in, the signal. This is done by squaring the values, summing them, and then taking an average (mean) to remove the influence of the number of samples. The square root of the mean is then taken to arrive at the rms value. So, for a range of samples starting at sample 0 and ending at sample k,

$$\text{Rms} = \sqrt{ \frac{1}{k+1} \sum_{i=0}^{k} y_i^2 } \qquad \text{Eqn 3-27}$$
Taking the example of a sine wave of amplitude A, the rms value is A/√2.
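Eqn 3-27 and the sine wave example can be sketched directly (a minimal numpy sketch; the amplitude and block length are arbitrary assumptions):

```python
import numpy as np

def rms(y):
    """Rms of a block of time samples, Eqn 3-27."""
    return np.sqrt(np.mean(y ** 2))

A = 2.0
N = 1000
t = np.arange(N) / N
y = A * np.sin(2 * np.pi * 5 * t)      # 5 whole cycles of amplitude A

# For a sine wave of amplitude A the rms value is A / sqrt(2)
assert np.isclose(rms(y), A / np.sqrt(2))
```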

Frequency spectra

The frequency spectrum is first converted to a double sided amplitude spectrum: a line of amplitude 2A in the single sided spectrum becomes lines of amplitude A at the corresponding positive and negative frequencies.


The frequency range over which you want the rms value computed is defined by the upper and lower values f1 and f2. All lines completely within the range will be included in the calculations (Ai, where i takes values of 1 to k-1). For the lines at the beginning and the end (A0 and Ak), half of each value is taken.

(Diagram: spectral lines Ai between f1 (i = 0) and f2 (i = k).)

The rms value is then computed using the following formula

$$\text{Rms} = \sqrt{ 2 \left( \frac{A_0^2}{2} + \sum_{i=1}^{k-1} A_i^2 + \frac{A_k^2}{2} \right) } \qquad \text{Eqn 3-28}$$

Autopower and crosspower spectra


These spectra are first converted to a double sided power spectrum. The number of lines (k) included in the calculations depends on the defined frequency span. As was the case for the frequency spectrum shown above, the values for the first and last sample (A0 and Ak) are halved. The rms value is then computed using the following formula

$$\text{Rms} = \sqrt{ 2 \left( \frac{A_0}{2} + \sum_{i=1}^{k-1} A_i + \frac{A_k}{2} \right) } \qquad \text{Eqn 3-29}$$

FRF, Impedance, Transmissibility and Transmittance


Rms values for these types of functions are not well defined. The Lms interpretation for an FRF is to find the rms response when a force of amplitude 1 is applied. A force of amplitude 1 has an rms value Frms equal to

$$F_{rms} = \sqrt{1 \cdot k} \qquad \text{Eqn 3-30}$$

where k is the number of samples in the range.

The rms of the response, Xrms, is derived from equation 3-28.

The rms of the FRF, Hrms, is therefore

$$H_{rms} = \frac{X_{rms}}{F_{rms}} = \sqrt{ \frac{2}{k} \left( \frac{A_0^2}{2} + \sum_{i=1}^{k-1} A_i^2 + \frac{A_k^2}{2} \right) } \qquad \text{Eqn 3-31}$$

Sound power, sound intensity (active and reactive), SFTVI and SFUI
SFTVI (sound field temporal uniformity indicator) and SFUI (sound field uniformity indicator) are ISO defined functions for acoustic measurements and analysis. The rms computes the total energy in a band, so since these are already a measure of energy, the values of the spectral lines can simply be added.

$$\text{Rms} = \frac{A_0}{2} + \sum_{i=1}^{k-1} A_i + \frac{A_k}{2} \qquad \text{Eqn 3-32}$$

Particle velocity (active and reactive)


Although the particle velocity is basically a frequency spectrum, since it is calculated as a single sided spectrum it differs by a factor of 2 from equation 3-28.

$$\text{Rms} = \sqrt{ \frac{A_0^2}{2} + \sum_{i=1}^{k-1} A_i^2 + \frac{A_k^2}{2} } \qquad \text{Eqn 3-33}$$



Theory and Background

Part II
Acoustics and Sound Quality

Chapter 4
Terminology and definitions . . . . . . . . . . . . . . 55

Chapter 5
Acoustic measurements . . . . . . . . . . . . . . . . . 67

Chapter 6
Sound quality . . . . . . . . . . . . . . . . . . . . . . . . . . 83

Chapter 7
Sound metrics . . . . . . . . . . . . . . . . . . . . . . . . . 99

Chapter 8
Acoustic holography . . . . . . . . . . . . . . . . . . . . 117

Chapter 4

Terminology and
definitions

This chapter contains definitions of basic terms associated with acoustics.
Acoustic quantities

Reference conditions

Octave bands

Acoustic weighting

Chapter 4 Terminology and definitions

4.1 Acoustic quantities

Sound power (P)

The amount of noise emitted from a source depends on the sound power of that source. The sound power is a basic characteristic of a noise source, providing an absolute parameter that can be used for comparison. This differs from the sound pressure levels it gives rise to, which depend on a number of external factors.

The total sound power PI of a source surrounded by N measurement surfaces is given by

$$P_I = \sum_{i=1}^{N} P_i \qquad \text{Eqn 4-1}$$

The power of a sound source is expressed in Joules per second, or Watts.

The sound power can also be represented by the letter W.

Sound pressure

The effect of the sound power emanating from a source is the level of sound pressure. Sound pressure is what the ear detects as noise, the level of which depends to a great extent on the acoustic environment and the distance from the source. The sound pressure is defined as the difference between the actual and ambient pressure.

This is a scalar quantity that can be derived from measured sound pressure spectra or autopower spectra, either at one specific frequency (spectral line) or integrated over a certain frequency band.

Sound pressure measurements can be obtained at each measurement point, and are independent of the measurement direction (X, Y, or Z). The units are Pascal (Pa) or N/m².

Sound (Acoustic) intensity

An important quantity to be derived from the sound power is sound intensity. The sound intensity of a sound wave describes the direction and net flow of acoustic energy through an area.


$$\text{Total power } P_I = \oint_S \vec{I} \cdot d\vec{S} \qquad \text{Eqn 4-2}$$

(Diagram: intensity vectors I passing through a closed measurement surface S.)

Sound intensity is a vector, orientated in 3D space, with the fundamental units of W/m² (power transmitted per unit area).

The area is represented as a vector in 3D space with a length equal to the amount of geometrical area, and a direction perpendicular to the measurement surface. As such, the vector product Ii · Si represents the flow of acoustic energy in a direction perpendicular to a surface. This is the usual direction in which intensity is measured. If the acoustic intensity vector lies within the surface itself, the transmitted sound power equals zero.

Intensity is also the time-averaged rate of energy flow per unit area.

$$I = \frac{1}{T} \int_{0}^{T} I(t)\, dt \qquad \text{Eqn 4-3}$$

As such, if the energy is flowing back and forth resulting in zero net energy
flow then there will be zero intensity.

Normal sound intensity

This is the component of the sound intensity vector normal to the measurement surface.

Free field

This term refers to an idealized situation where the sound flows directly out from the source, and both pressure and intensity levels drop with increasing distance from the source according to the inverse square law.

Part II Acoustics and Sound Quality 57



Diffuse field

In a diffuse field the sound is reflected many times, such that the net intensity can be zero.

Particle velocity
Pressure variations give rise to movements of the air particles. It is the product of pressure and particle velocity that results in the intensity. In a medium with mean flow therefore,

$$\vec{I} = p\, \vec{v} \qquad \text{Eqn 4-4}$$

where p = sound pressure (Pa) and v = particle velocity (m/s).

The particle velocity of a medium is defined as the average velocity of a volume element of that medium. This volume element must be large enough to contain millions of molecules so that it may be thought of as a continuous fluid, yet small enough so that acoustic variables such as pressure, density and velocity may be considered to be constant throughout the volume element.

Equation 4-4 can be used to compute the particle velocity, once the acoustic intensity and the sound pressure have been measured. Particle velocity is a vector in 3D space expressed in units of m/s.

In a diffuse field the pressure and velocity phase vary at random, giving rise to a net intensity of zero.

Under certain circumstances (i.e. plane progressive waves in a free field), the particle velocity can also be calculated from the pressure and the impedance of the medium (ρc).

$$\vec{v} = \frac{p_e}{\rho c} \qquad \text{Eqn 4-5}$$


where pe = effective sound pressure (Pa), ρ = mass density of the medium (kg/m³), and c = velocity of sound in the medium (m/s).

By combining equations 4-4 and 4-5 it can be seen that in a free field a relationship exists enabling the acoustic intensity to be determined from the effective pressure of a plane wave.

$$|I| = \frac{p_e^2}{\rho c} \qquad \text{Eqn 4-6}$$

Acoustic impedance (Z)

This is defined as the product of the mass density of a medium and the velocity of sound in that medium.

$$Z = \rho\, c \qquad \text{Eqn 4-7}$$

where ρ = mass density (kg/m³) and c = velocity of sound in the medium (m/s).


4.2 Reference conditions

It is common practice to define standards for acoustic intensity, pressure, etc. at an air temperature of 20 °C and a standard atmospheric pressure of 1013 hPa (≈ 1 bar). Under these conditions

the density of air ρ₀ = 1.21 kg/m³
the velocity of sound in air c = 343 m/s
the acoustic impedance ρ₀c = 415 rayls (kg/m²s)

dB scale

Since the range of pressure levels that can be detected is large, and the ear responds logarithmically to a stimulus, it is practical to express acoustic parameters as a logarithmic ratio of a measured value to a reference value. Hence the use of the decibel scales, for which the reference values for intensity, pressure and power are defined below.

Sound power level Lw

This is defined as the logarithmic measure of the absolute (unsigned) value of the sound power generated by a source.

$$L_W = 10 \log_{10} \frac{|P_I|}{P_0} \qquad \text{Eqn 4-8}$$

The reference sound power is P₀ = 10⁻¹² W.

Particle velocity level Lv

This is defined as the logarithmic measure of the particle velocity.

$$L_v = 20 \log_{10} \frac{v}{v_0} \qquad \text{Eqn 4-9}$$

The reference particle velocity is v₀ = 50 · 10⁻⁹ m/s.


Sound (Acoustic) intensity level LI

This is the logarithmic measure of the absolute value of the intensity vector.

$$L_I = 10 \log_{10} \frac{|I|}{I_0} \qquad \text{Eqn 4-10}$$

The commonly used reference standard intensity for airborne sounds is I₀ = 10⁻¹² W/m².

Normal acoustic intensity level (LIn)

This is the logarithmic measure of the absolute value of the normal intensity vector.

$$L_{In} = 10 \log_{10} \frac{|I_n|}{I_0} \; \text{dB} \qquad \text{Eqn 4-11}$$

Sound pressure level LP

This is defined as

$$L_p = 10 \log_{10} \left( \frac{p}{p_0} \right)^{2} = 20 \log_{10} \frac{p}{p_0} \qquad \text{Eqn 4-12}$$

where p is the rms value of the acoustic pressure (in Pa).

The above reference values for intensity and power correspond to an effective rms reference pressure of
p₀ = 0.00002 Pa
= 20 µPa

This sound pressure of 20 µPa is known as the standardized normal hearing threshold and represents the quietest sound at 1000 Hz that can be heard by the average person.
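Eqn 4-12 with the 20 µPa reference can be sketched as follows (a minimal Python sketch; the 1 Pa example pressure is an arbitrary assumption):

```python
import math

P0 = 20e-6                      # reference pressure, 20 micropascal

def sound_pressure_level(p_rms):
    """Sound pressure level Lp in dB re 20 uPa (Eqn 4-12)."""
    return 20.0 * math.log10(p_rms / P0)

# The hearing threshold itself maps to 0 dB ...
assert sound_pressure_level(20e-6) == 0.0
# ... and 1 Pa rms maps to about 94 dB
assert abs(sound_pressure_level(1.0) - 94.0) < 0.1
```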


4.3 Octave bands

Complete (1/1) octave bands represent frequency bands where the center
frequency of one band is approximately twice (according to standardized
values) that of the previous one.

f_c,i+1 = 2 · f_c,i

Partial octave bands (1/3, 1/12, 1/24, ...) represent frequency bands where

f_c,i+1 = 2^(1/x) · f_c,i

and where x = 3, 12, 24, ...

The lower band limit of a 1/x octave band is f_c · 2^(-1/(2x))
The upper band limit of a 1/x octave band is f_c · 2^(1/(2x))

The bands defined by these formulas are termed the `natural' bands. The
international standard ISO 266 defines normalized center frequencies for
octave bands and the values for 1/1, 1/2 and 1/3 octave bands are listed in
table 4.1.

Natural frequencies are used for calculations but the normalized frequencies
are used for annotation. Octave bands above or below the normalized values
are annotated with the natural frequencies.
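The band-limit formulas above can be sketched as follows (Python; the function names are ours):

```python
def octave_band_edges(fc, x=1):
    """Lower and upper limits of a 1/x octave band centred at fc:
    fc * 2**(-1/(2x)) and fc * 2**(+1/(2x))."""
    half = 2.0 ** (1.0 / (2.0 * x))
    return fc / half, fc * half

def natural_centres(f0, x, n):
    """n successive natural 1/x octave centre frequencies starting at f0,
    each 2**(1/x) times the previous one."""
    return [f0 * 2.0 ** (i / x) for i in range(n)]
```

For a full octave (x = 1) the upper limit is twice the lower one; a 1/3 octave band centred at 1000 Hz runs from about 891 Hz to 1122 Hz.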


Normalized  1/1 1/2 1/3 | Normalized  1/1 1/2 1/3 | Normalized  1/1 1/2 1/3
frequency   oct oct oct | frequency   oct oct oct | frequency   oct oct oct
16           x   x   x  | 160                  x  | 1600                 x
18                      | 180              x      | 1800
20                   x  | 200                  x  | 2000         x   x   x
22.4             x      | 224                     | 2240
25                   x  | 250          x   x   x  | 2500                 x
28                      | 280                     | 2800             x
31.5         x   x   x  | 315                  x  | 3150                 x
35.5                    | 355              x      | 3550
40                   x  | 400                  x  | 4000         x   x   x
45               x      | 450                     | 4500
50                   x  | 500          x   x   x  | 5000                 x
56                      | 560                     | 5600             x
63           x   x   x  | 630                  x  | 6300                 x
71                      | 710              x      | 7100
80                   x  | 800                  x  | 8000         x   x   x
90               x      | 900                     | 9000
100                  x  | 1000         x   x   x  | 10000                x
112                     | 1120                    | 11200            x
125          x   x   x  | 1250                 x  | 12500                x
140                     | 1400             x      | 14000
160                  x  | 1600                 x  | 16000        x   x   x
Table 4.1 Normalized frequencies (Hz)


4.4 Acoustic weighting

Frequency weighting

The human ear has nonlinear, frequency dependent characteristics, which means
that the sensation of loudness cannot be perfectly described by the sound
pressure level or its spectrum. To derive an experienced loudness level from
the sound pressure signal, the frequency spectrum of the sound pressure signal
is multiplied by a frequency weighting function. These weighting functions are
based on experimentally determined equal loudness contours which express the
loudness sensation as a function of sound pressure level and frequency. A
number of equal loudness contours are shown in Figure 4-1. The loudness level
is expressed in `Phons'. 1 kHz tones are used as the reference, which means
that for a 1000 Hz tone, the Phon value corresponds to the dB sound pressure
level.

Figure 4-1 Equal loudness perception contours


A, B and C weightings are applied to acoustic signals. A-weighting modifies
the frequency response such that it follows approximately the equal loudness
curve of 40 phons and is applied to signals with a sound pressure level around
40 dB. The A-weighted sound level has been shown to correlate extremely well
with subjective responses. The B and C weightings follow more or less the 70
and 100 phon contours respectively. These contours can be seen in Figure 4-2.
The resulting value is then denoted by LA, LB, ... with unit dBA, dBB, ...

Table 4.2 (overleaf) shows the relative response attenuations or
amplifications of the 3 types of filters. In between the listed normal
frequencies, these filter spectra are linearly interpolated on a log-log
scale. Figure 4-2 shows the same information in a graphical form.

[Figure: relative response (dB) versus frequency (Hz) for the A, B, C and D
weighting curves]
Figure 4-2 Standardized weighting curves


1/3 Octave band        A weighting    B weighting    C weighting
center frequency (Hz)  (dB)           (dB)           (dB)
16 -56.7 -28.5 -8.5
20 -50.5 -24.2 -6.2
25 -44.7 -20.4 -4.4
31.5 -39.4 -17.1 -3.0
40 -34.6 -14.2 -2.0
50 -30.2 -11.6 -1.3
63 -26.2 -9.3 -0.8
80 -22.5 -7.4 -0.5
100 -19.1 -5.6 -0.3
125 -16.1 -4.2 -0.2
160 -13.4 -3.0 -0.1
200 -10.9 -2.0 0
250 -8.6 -1.3 0
315 -6.6 -0.8 0
400 -4.8 -0.5 0
500 -3.2 -0.3 0
630 -1.9 -0.1 0
800 -0.8 0 0
1000 0 0 0
1250 +0.6 0 0
1600 +1.0 0 -0.1
2000 +1.2 -0.1 -0.2
2500 +1.3 -0.2 -0.3
3150 +1.2 -0.4 -0.5
4000 +1.0 -0.7 -0.8
5000 +0.5 -1.2 -1.3
6300 -0.1 -1.9 -2.0
8000 -1.1 -2.9 -3.0
10,000 -2.5 -4.3 -4.4
12,500 -4.3 -6.1 -6.2
16,000 -6.6 -8.4 -8.5
20,000 -9.3 -11.1 -11.2
Table 4.2 Weighting of acoustic signals
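A sketch of how the tabulated weighting could be applied, interpolating the dB values (already a logarithmic quantity) linearly against log10(frequency) between the listed centres. Only a subset of Table 4.2 is reproduced here, and the function name is ours:

```python
import math

# (centre frequency Hz, A-weighting dB) - subset of Table 4.2
A_TABLE = [(500, -3.2), (630, -1.9), (800, -0.8), (1000, 0.0), (1250, 0.6)]

def a_weighting_db(f):
    """A-weighting at frequency f, by linear interpolation of the
    tabulated dB values against log10(frequency)."""
    for (f1, a1), (f2, a2) in zip(A_TABLE, A_TABLE[1:]):
        if f1 <= f <= f2:
            t = math.log10(f / f1) / math.log10(f2 / f1)
            return a1 + t * (a2 - a1)
    raise ValueError("frequency outside the tabulated subset")
```

The A-weighted level of a spectral line at frequency f is then Lp + a_weighting_db(f).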



Chapter 5

Acoustic measurements

This chapter discusses the measurement of acoustic quantities. In addition it
describes the calculation of acoustic quantities based on measured ones and
other parameters used in these calculations:

Measured acoustic functions

Calculation of acoustic quantities

Acoustic measurement surfaces

Frequency bands

Field indicators

5.1 Acoustic measurement functions

This section describes the acoustic quantities that can be measured. From
measured quantities it is possible to derive further quantities as described
in section 5.2.

Sound pressure level


This is defined by equation 4-12 and can be measured using a single channel.
It will result in an averaged pressure or autopower spectrum.
For measurements in the free field, and in the direction of propagation, the
normal sound intensity level will be equal to the sound pressure level. In
practice, when not working under free field conditions, the sound intensity
level will be lower than the sound pressure level.

Sound Intensity
The sound intensity in a specified direction at a point is the average rate of
sound energy transmitted in the specified direction through a unit area normal
to this direction at the point considered.
In most situations it is the component of the sound intensity vector normal
to the measurement surface, I_n, which is measured.
In order to determine sound intensity you need to measure both the
instantaneous pressure and the corresponding particle velocity simultaneously.
In practice, the sound pressure can be obtained directly using a microphone.
The instantaneous particle velocity can be calculated from the pressure
gradient between two closely spaced microphones. A sound intensity probe can
therefore consist of two closely spaced pressure microphones which measure
both the sound pressure and the pressure gradient between the microphones.
For frequency domain calculations, it can be shown that the sound intensity
can be calculated from the imaginary part of the crosspower between the two
microphone signals. The following formula is used

I = Im( S_1,2 ) / ( 2πf·ρ·d )    Eqn 5-1

where S_1,2 is the double sided crosspower between the two microphone signals,
f is the signal frequency, d is the microphone distance and ρ is the air
density.
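Equation 5-1 for a single spectral line can be sketched as follows (the function name and default values are ours; d and rho must of course match the actual probe spacing and air conditions):

```python
import math

def active_intensity(s12, f, rho=1.21, d=0.012):
    """Active intensity from the two-microphone crosspower (equation 5-1):
    I = Im{S12} / (2*pi*f*rho*d).
    s12: complex double-sided crosspower between the two microphones,
    f: frequency in Hz, rho: air density in kg/m^3, d: mic spacing in m."""
    return s12.imag / (2.0 * math.pi * f * rho * d)
```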


For this function, all channels are processed as channel pairs, each pair
consisting of two consecutive channels. It therefore requires that an even
number of channels is defined.

The reactive sound intensity (non propagating energy) is calculated as

I_reactive = ( S_1,1 − S_2,2 ) / ( 2πf·ρ·d )    Eqn 5-2

For the idealized case of measurements in the free field (free space without
reflections) and in the direction of propagation, the reactive intensity is
zero.

Residual intensity

This is defined as

RI = L_p − δ_pI0    Eqn 5-3

where L_p is the measured sound pressure level and δ_pI0 is the pressure
residual intensity index. To calculate the residual intensity it is therefore
necessary to have the pressure residual intensity index available. This is
described below.
Intensity measurements can be made in a sound field where the sound intensity
level is in the range

L_p − δ_pI0 ≤ L_I ≤ L_p    Eqn 5-4

L_p is defined in equation 4-12, and L_I in equation 4-10. In a free field the
pressure and intensity levels are the same, whereas in all other cases, the
measured intensity will be less than the pressure. The residual intensity
(L_p − δ_pI0) represents the lowest intensity level which can be detected by
the system for the given sound pressure level.


Pressure residual intensity index

For the calculation of the pressure residual intensity index of a sound
intensity probe, it is required to place the intensity probe in a sound field
such that the sound pressure is uniform over the volume. In these conditions
there will be no difference between the two signals at both microphones, and
hence the measured intensity should be zero. However, the phase mismatch
between the two measuring channels causes a small difference between the two
signals, making it appear as if there is some intensity. The intensity
detected can be likened to a noise floor below which measurements cannot be
made. This intensity lower limit is not fixed but varies with the pressure
level. What is fixed is the difference between the pressure and the intensity
level when the same signal is fed to both channels. It is this which is
defined as the pressure residual intensity index. Mathematically, therefore,
the pressure residual intensity index is

δ_pI0 = ( L_p − L_In )  dB    Eqn 5-5

where L_p is the sound pressure level and L_In is the normal sound intensity
level.

Dynamic capability index


In order to ensure a particular level of accuracy for the measurements it is necĆ
essary to increase the measurement floor defined by the residual intensity level
by an amount termed the `bias error factor' ( )

L p   pIo Ă ĂdB  LI  Lp Eqn 5-6

[Figure: intensity level LI (dB) versus frequency, showing Lp, the residual
intensity level (Lp − δ_pI0), and the dynamic capability range Ld]
Figure 5-1 Dynamic capability index Ld


The `bias error factor' ( ) is selected according to the grade of accuracy reĆ
quired from the table below.

Grade of accuracy        Bias error factor K (dB)
Precision (class 1)      10
Engineering (class 2)    10
Survey (class 3)         7
Table 5.1 Bias error factor (K)

The difference between the pressure residual intensity index and the bias
error factor represents the range in which the probe should be operating and
is termed the `dynamic capability index' (Ld) for the probe.

L_d = ( δ_pI0 − K ) dB    Eqn 5-7
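In code, the dynamic capability index combines the measured pressure residual intensity index with the bias error factor from Table 5.1 (the names below are ours):

```python
# Bias error factor per grade of accuracy (Table 5.1), in dB
BIAS_ERROR_K = {"precision": 10.0, "engineering": 10.0, "survey": 7.0}

def dynamic_capability_index(delta_pI0, grade="engineering"):
    """Ld = delta_pI0 - K (equation 5-7), with delta_pI0 the measured
    pressure residual intensity index in dB."""
    return delta_pI0 - BIAS_ERROR_K[grade]
```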


5.2 Calculation of acoustic quantities

Acoustic functions can be derived from ones that have been measured. This
section describes these analysis functions and Table 5.2 gives an overview of
them and the measured quantities required for their derivation.

Calculations will be made over specific frequency bands. This subject is
discussed in section 5.4. Some functions are computed over a known area. The
subject of defining surfaces (meshes) for acoustic functions is discussed in
section 5.3.

Effective sound pressure


The effective sound pressure pe or prms may be computed from a measured
sound pressure spectrum or from its autopower spectrum.

p_e² = 2 ∫[f1,f2] |p(f)|² df = 2 ∫[f1,f2] A_p(f) df    Eqn 5-8
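For a discrete spectrum with line spacing df, the integral of equation 5-8 reduces to a sum; a minimal sketch using a simple rectangle rule (the function name is ours):

```python
def effective_pressure(autopower, df):
    """Effective (rms) pressure over a band (equation 5-8):
    pe^2 = 2 * sum(A_p) * df, with the integral over the autopower
    spectrum approximated by a sum over the spectral lines."""
    pe_squared = 2.0 * sum(autopower) * df
    return pe_squared ** 0.5
```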

Acoustic intensity
This is a vector quantity calculated directly from measured acoustic intensity
functions.

I = ∫[f1,f2] I(f) df    Eqn 5-9

When intensity measurements are not available but sound pressure measurements
are available, then the magnitude of the acoustic intensity can be computed
from the effective sound pressure pe and the acoustic impedance ρ0·c

I = p_e² / ( ρ0·c )    Eqn 5-10


but only under the assumption of plane progressive waves in a free field.

Sound power
This is calculated from the geometrical area S and the acoustic intensity
component perpendicular to a surface

P = I_n · S    Eqn 5-11

Under certain circumstances, intensity can be assumed to be proportional to
effective sound pressure, and then

P = ( p_e² / ( ρ0·c ) ) · S    Eqn 5-12

Particle velocities
These can be calculated when both acoustic intensity and sound pressure data
are available

v = I / p    Eqn 5-13

All the possible analysis functions are summarized in Table 5.2. (These are
based on the assumption of plane progressive waves in a free field.)

Acoustic quantity   Symbol   Required data                Formula            MKS units

Effective (RMS)     pe       sound pressure spectrum p    2 Σ |p_i|²         Pa or N/m²
sound pressure               pressure autopower A         2 Σ A_i

Intensity           I        intensity                    Σ I_i              W/m²

Sound power         P        intensity and area           I·S                W
                             pressure spectrum and area   (p_e²/ρ0·c)·S (1)
                             pressure autopower and area  (p_e²/ρ0·c)·S (1)

Particle velocity   v        intensity and sound          I/p                m/s
                             pressure p

(1) plane progressive waves in a free field assumed
Table 5.2 Overview of analysis functions for acoustic signals
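The two sound power entries of Table 5.2 can be sketched as follows (function names are ours; the free-field assumption of note (1) applies to the pressure-based variant):

```python
RHO_C = 415.0  # acoustic impedance rho0*c at 20 degrees C, in rayls

def power_from_intensity(i_n, area):
    """P = In * S (equation 5-11): normal intensity times surface area."""
    return i_n * area

def power_from_pressure(p_e, area):
    """P = (pe^2 / (rho0*c)) * S (equation 5-12); valid only for plane
    progressive waves in a free field."""
    return p_e ** 2 / RHO_C * area
```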


5.3 Acoustic measurement surfaces

Acoustic measurements differ from other types of signals in that they are
measured some distance away from the object rather than on the test structure
itself. The measurement points are termed associated nodes, which are
surrounded by a hypothetical measurement surface. An organized collection of
measurement surfaces and nodes is termed a measurement mesh and there are ISO
standards that define such meshes for particular measurement types.
[Figure: a sound source above a reflecting plane, surrounded by a measurement
mesh with acoustic measurement nodes]
Figure 5-2 Sound source, acoustic measurement mesh and nodes

Acoustic measurement meshes can be parallelepiped, cylindrical or spherical
in shape.

Associated nodes on measurement meshes have a nodal orientation. This is
always Cartesian, and the orientation of the +Z nodal coordinate system for a
measurement defines the measurement direction.

Acoustic ISO standards

The ISO-3744 and ISO-3745 standards describe sound pressure measurements.
The microphone positions are defined on a (hemi-)spherical or a parallelepiped
measurement mesh. The possible dimensions of the measurement mesh depend on
the characteristic distance of the reference surface. This reference surface
is defined as the smallest rectangular box that encloses the noise source.


ISO-3744 Acoustics - Determination of sound power levels of noise sources -
Engineering methods for free-field conditions over a reflecting plane.

ISO-3745 Acoustics - Determination of sound power levels of noise sources -
Precision methods for anechoic and semi-anechoic rooms

The ISO-9614-1 standard describes sound intensity measurements. In this case
the microphone positions of the measurement meshes are not defined; the
quality of the mesh has to be judged during the measurements. The standard
describes a number of field indicators that allow a judgment of the accuracy
of the measurements and the mesh.


5.4 Frequency bands

Whenever an acoustic quantity is integrated over a certain frequency band,
the following formula applies

a = ∫[f1,f2] a(f) df    Eqn 5-14

The integration of a continuous function a(f) is replaced by a finite sum over
the corresponding discrete samples:

a = (1/2)·a_1 + Σ a_i + (1/2)·a_2    Eqn 5-15

where a_1 = a(f_1)
      a_2 = a(f_2)
      f_1 < f_i < f_2

This integration takes into account the full value of all data samples between
the two limits, and 50 % of the first and last sample. It can be obtained between
any two measured frequency limits.
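The half-weighted end samples of equation 5-15 in code (the function name is ours):

```python
def band_integral(samples):
    """Finite-sum band integration (equation 5-15): full weight for the
    interior samples, half weight for the first and the last one."""
    if len(samples) < 2:
        return float(sum(samples))
    return 0.5 * samples[0] + sum(samples[1:-1]) + 0.5 * samples[-1]
```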

It is good practice to maintain the type of frequency band that was used in
the acquisition of the data for the calculation. In fact data acquired in
octave bands must remain in those bands for the analysis. The calculation of
the field indicators also makes little sense unless the analysis bands
correspond with the measurement bands.


5.5 Field indicators

When attempting to analyze the sound power being radiated from a noise source
in situ, the international standard ISO 9614-1 lays out a number of
measurement conditions which must be adhered to if the results are to be
considered acceptable for this purpose. A number of criteria must be
satisfied, based on the values of particular indicator functions, to ensure
the requisite adequacy of the measurements and meshes. This section describes
both the field indicators themselves and the criteria used to assess the
results.

F1 Sound field temporal variability indicator

This gives the measure of temporal (or time) variability of the field. It is
defined as follows

F1 = (1/Īn) · sqrt( (1/(M−1)) Σ[k=1..M] ( I_nk − Īn )² )    Eqn 5-16

where Īn is the mean value of the M short time averages I_nk, defined in the
following equation.

Īn = (1/M) Σ[k=1..M] I_nk    Eqn 5-17

F2 Surface pressure–intensity indicator

In a free field where sound is only radiating out from a source, the pressure
and intensity levels are equal in magnitude. In a diffuse or reactive field
however, intensity can be low when the pressure is high. A lower measured
intensity can also arise if the sound wave is incident at an angle to the
probe, since this also affects the phase change detected across the probe. The
pressure-intensity indicator examines the difference between the pressure and
the absolute values of intensity. This function can be determined on a point
to point basis during the acquisition, but the function F2 described here
represents the value averaged over all the measured surfaces.


F2 = L̄p − L̄|In|    Eqn 5-18

L̄p is the surface sound pressure level defined as

L̄p = 10 log10( (1/N) Σ[i=1..N] ( p_i / p_0 )² )    Eqn 5-19

where i indicates the measurement surface and N is the total number of
surfaces (of the local component).

L̄|In| is the surface normal unsigned acoustic intensity level defined as

L̄|In| = 10 log10( (1/N) Σ[i=1..N] |I_ni| / I_0 )    Eqn 5-20

where |I_ni| is the absolute (unsigned) value of the normal intensity vector.

Note! A large difference between intensity and pressure suggests that the
probe is not well aligned or that you are operating in a diffuse field.

In order to calculate F2 it is necessary to have both intensity and autopower
(or pressure) measurements for all points on the mesh.

F3 Negative partial power indicator


This indicator also examines the difference between measured intensity and
pressure, but in this case the direction of the intensities is taken into account.
Thus this function expresses the variation between intensities arising from the
source under investigation (positive) and those being generated by extraneous
sources (negative).

F3  L p  LIn Eqn 5-21

Lp is the surface sound pressure level defined above.


L̄In is the surface normal signed acoustic intensity level defined as

L̄In = 10 log10( (1/N) Σ[i=1..N] I_ni / I_0 )    Eqn 5-22

Note! If the quantity Σ I_ni / I_0 is negative, then the effect of extraneous
sources is too great and the set of measurements does not satisfy the ISO
requirements.

In order to calculate F3 it is necessary to have both intensity and autopower
(or pressure) measurements for all points on the mesh.

F4 Non–uniformity indicator
This indicates the measure of spatial (or positional) variability that exists
in the field. It can be compared with the statistical parameter standard
deviation.

F4 = (1/Īn) · sqrt( (1/(N−1)) Σ[i=1..N] ( I_ni − Īn )² )    Eqn 5-23

where i indicates the measurement surface and N is the total number of
surfaces. Īn is the mean of the normal acoustic intensity vectors taken over
the N surfaces.

Īn = (1/N) Σ[i=1..N] I_ni    Eqn 5-24

In order to calculate F4, only intensity measurements are required.
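As an illustration, the surface-averaged indicators F2 and F4 might be computed as follows (equations 5-18 to 5-20 and 5-23/5-24; the helper names are ours, and P0 and I0 are the reference values from chapter 4):

```python
import math

P0 = 20e-6   # reference pressure, Pa
I0 = 1e-12   # reference intensity, W/m^2

def f2_indicator(p_rms, i_n):
    """F2 = surface pressure level minus surface unsigned intensity level
    (equations 5-18 to 5-20). p_rms and i_n are per-surface lists."""
    n = len(p_rms)
    lp = 10.0 * math.log10(sum((p / P0) ** 2 for p in p_rms) / n)
    li = 10.0 * math.log10(sum(abs(i) / I0 for i in i_n) / n)
    return lp - li

def f4_indicator(i_n):
    """F4 (equation 5-23): normalized standard deviation of the normal
    intensities measured over the N surfaces."""
    n = len(i_n)
    mean = sum(i_n) / n
    var = sum((x - mean) ** 2 for x in i_n) / (n - 1)
    return math.sqrt(var) / mean
```

In an ideal free field, where each surface pressure corresponds exactly to the normal intensity, F2 evaluates to 0 dB.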

5.5.1 The criteria

Three criteria can be evaluated in verifying the results of an acoustic
intensity analysis.


Ld – F2 Measurement chain accuracy

If a measurement array is to be considered suitable for determining the sound
power level of a noise source according to ISO 9614-1, then the dynamic
capability index (Ld) must be greater than the indicator F2 for each frequency
band.

Ld − F2 ≥ 0    Criterion 1

Ld is dependent on the measurement equipment and is defined in equation 5-7.
F2 is defined in equation 5-18. Ld is derived from the pressure residual
intensity index which must be computed during the measurement phase.

If this criterion is not satisfied then it is an indication that the levels
being measured are too low for the source and that it is necessary to reduce
the average distance between the measurement surface and the source.

F3 – F2 Extraneous noise sources

If the difference between field indicators F2 and F3 is significant (greater
than 3 dB), it is a strong indication of the presence of a directional
extraneous noise source in the vicinity of the noise source under test.

If the difference between these two indicators is greater than 3 dB, then the
situation can be improved by reducing the average distance between the
measurement surface and the source, shielding the measurement surfaces from
the extraneous noises or reducing some reflections towards the source under
investigation.

Measurement mesh adequacy

A check on the adequacy of the measurement positions (mesh) can be made using
the following criterion.

N ≥ C·F4²    Criterion 2

where N is the number of measurement (probe) positions,
F4 is the indicator defined in equation 5-23, and
C is a factor selected from table 5.3 depending on the accuracy required.

Where the same mesh is used for a number of bands, the maximum value of C·F4²
will be considered when evaluating the criterion.
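Criterion 2 reduces to a one-line check (the names are ours; the value of C comes from Table 5.3):

```python
def mesh_is_adequate(n_positions, c_factor, f4):
    """Criterion 2 of ISO 9614-1: the number of probe positions N must
    satisfy N >= C * F4**2."""
    return n_positions >= c_factor * f4 ** 2
```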


Center frequencies (Hz)                      Factor C
Octave band    1/3 Octave band    Precision    Engineering    Survey
                                  class 1      class 2        class 3
63-125         50-160             19           11
250-500        200-630            29           19
1000-4000      800-5000           57           29
6300                              19           14
A weighted (63 - 4k or 50 - 6.3k) Hz                          8
Table 5.3 Values of factor C for measurement mesh accuracy



Chapter 6

Sound quality

The purpose of this chapter is to introduce you to the fundamentals of sound
quality.

Basic theory relating to sound quality

Sound quality analysis

An extensive reading list is included at the end of the chapter for more
detailed information.


6.1 The basic concepts of Sound Quality

Sound signals
The characteristics of a sound as it is perceived are not exactly the same as
the characteristics of the sound being emitted. The discussion starts with
definitions which describe the actual sound signals, and then discusses the
physical and psychological effects that influence the perception of a
particular signal.

Sound power and sound pressure
The amount of noise emitted from a source depends on the sound power of that
source.
The effect of the sound power emanating from a source is the level of sound
(or acoustic) pressure. Sound pressure is what the eardrum detects - the level
of which depends to a great extent on the acoustic environment and the
distance from the source.
Sound pressure is what is measured by microphones, and the majority of data
used in a sound quality analysis will have the dimension pressure and thus be
referred to as a sound signal. This is not an absolute condition however and
vibrational data too can be analyzed.

Sound pressure level
The basic descriptor of a sound signal is the sound pressure level (SPL)
denoted by L and described in equation 4-12. The sound pressure of 20 µPa is
known as the standardized normal hearing threshold and represents the quietest
sound at 1000 Hz that can be heard by the average person.
Since the range of pressure levels that can be detected is large and the ear
responds logarithmically to a stimulus, it is practical to express acoustic
parameters as a logarithmic ratio of a measured value to a reference value.
Hence the use of the decibel scales.

Hearing frequency range
The threshold frequency for human hearing is around 20 kHz. Signals with a
frequency content below this value are referred to as audio signals. Sampling
of audio signals therefore requires a sampling rate at least twice the maximum
that can be detected by the ear in order to avoid aliasing problems. You will
find therefore that CD recorders use a sampling rate of 44.1 kHz and DAT
recorders 48 kHz.

Loudness and pitch
A sound can be characterized by its loudness (related to the SPL) and its
frequency content. The common term for describing the frequency content of a
sound (or tone) is its `pitch'. However pitch is very much a perceived
frequency sensation and depends on both the frequency and the sound pressure
level. Both loudness and pitch are discussed further below.


The perception of sounds by the human ear

An important element in explaining why two sounds with an equal dB level may
have a totally different subjective quality is related to the physics of the
human hearing process. The human ear is a complex, nonlinear device, with
specific frequency dependent transmission characteristics. In addition, the
fact that hearing usually involves two ears (is `binaural') has a considerable
influence on sound perception. The correct understanding of the hearing
processes will lead to a better appreciation of why a sound has its specific
quality, which in turn will result in improved models and quantitative
analysis procedures.

Physics alone, however, is not sufficient to explain all aspects of sound
perception. It is also influenced by psychological factors such as attitude,
background, expectations, environment, context, etc. As a consequence, there
is no better `judge' of sound quality than the human listener, despite all
efforts at quantification and modelling.

The purpose of this section is merely to highlight the salient points of this
subject. For a more thorough understanding of this topic you should refer to
the reading list at the end of the chapter. Specific references to items in
this list are indicated within brackets, {1}, thus.

The hearing process


Before reaching the eardrum, an incident acoustic signal is considerably
modified by the spectral and spatial filtering characteristics of the human
body and the ear. The human torso itself acts as a directional filter through
diffraction, resulting in the fact that very significant interaural
differences in sound pressure level occur depending on the direction of the
source {2}.

Figure 6-1 shows the various parts of the ear (from {5}). The outer ear
consists of the pinna and the ear canal. Diffraction effects at the pinna and
direction independent effects within the ear canal result in the human ear
being most sensitive in the frequency range 1 to 10 kHz. The middle ear links
the eardrum to the cochlea, which is the actual sound receptor. The final link
between an acoustic signal and a neural response takes place in the cochlea,
which is in the inner ear.


[Figure: cross-section of the ear showing the outer ear (pinna, ear canal,
ear drum), the middle ear (hammer, anvil, stirrup, Eustachian tube) and the
inner ear (oval and round windows, semicircular canal, cochlea with scala
vestibuli and scala tympani, nerve fibers)]
Figure 6-1 The main parts of the ear

Binaural hearing
Another essential characteristic of human hearing is that it is binaural in
nature. The sound signals received by the left and right ear show a relative
time delay as well as a spectral difference dependent on the direction of the
sound. Below about 1500 Hz, the phase difference between the two signals will
be the main contribution to localization, while above this frequency the
interaural level difference and difference in spectrum will be the principal
factors.
Processing in the human brain not only allows the sound to be spatially
localized, but also to suppress unwanted sounds and to concentrate on a sound
coming from a specific direction {2, 6}. This is the well known `party'
effect, where it is possible to focus one's hearing on an individual a certain
distance away in the presence of significant background noise.

Sound perception
The body, head and outer ear effects consist mainly of a spatial and spectral
filtering that is applied to the acoustic stimulus. Consequently, just looking
at the frequency spectrum of a freely positioned microphone does not
necessarily lead to a correct assessment of the human response. In other
words, there is no simple relationship between the measured physical sound
pressure level and the human perception of the same sound.


The effects of the inner ear are many, but the most important are its
nonlinear characteristics. This means that the auditory impression of sound
strength, which is referred to by the term `loudness', is not linearly related
to the sound pressure level. In addition, the perceived loudness of a pure
tone of constant sound pressure level varies with its frequency. Also the
auditory impression of frequency, which is referred to by the term `pitch', is
not linearly related to the frequency itself. These and other effects are
described below.

Loudness

The sound pressure level is not linearly related to the auditory impression
of sound strength (or loudness). Together with the frequency dependencies
discussed above, this means that the sensation of loudness cannot be correctly
described by the acoustic pressure level or its spectrum. Figure 6-2 {5}
shows a number of curves representing levels of perceived equal loudness (for
sinusoidal tones) across a frequency range as a function of acoustic pressure
level.

Figure 6-2 Equal loudness perception contours {5}

Pitch

The perceived `frequency sensation', referred to as `pitch', is not directly
related to the frequency itself {6}.


The pitch of a pure tone varies with both the frequency and the sound
pressure level, and this relationship is itself dependent on the frequency of
the tone. Pure tones can be used though to determine how pitch is perceived.
One possibility is to measure the sensation of `half pitch'. In this case the
subject is asked to listen to one pure tone, and then adjust the frequency of
a second one such that it produces half the pitch of the first one. At low
frequencies, the halving of the pitch sensation corresponds to a ratio of 2:1
in frequency. At high frequencies however this does not occur and the
corresponding frequency ratio is larger than 2:1. For example a pure tone of
8 kHz produces a `half pitch' of only 1300 Hz.

So although the ratio between pitches can be determined from experiments, to
obtain absolute values it is necessary to determine a reference for the
sensation `ratio pitch'. A reference frequency of 125 Hz was chosen so that at
low frequencies, the numerical value of the frequency is identical to the
numerical value of the ratio pitch. Because ratio pitch determined in this way
is related to our sensation of melodies, it was assigned the dimension `mel'.
Therefore a pure tone of 125 Hz has a ratio pitch of 125 mel, and the tuning
standard, 440 Hz, shows a ratio pitch with almost the same numerical value.

At high frequencies, the numerical values of frequency and ratio pitch deviate
substantially from one another. The experimental finding that a pure tone of
8 kHz has a `half pitch' of 1300 Hz is reflected in the numerical values of
the corresponding ratio pitch. The frequency of 8 kHz corresponds to a ratio
pitch of 2100 mel and the frequency of 1300 Hz corresponds to a ratio pitch of
1050 mel, which is half of 2100 mel.

Critical bands

The inner ear can be considered to act as a set of overlapping constant
percentage bandwidth filters. The noise bandwidths concerned are approximately
constant, at around 110 Hz, for frequencies below 500 Hz, evolving to a
constant percentage value (about 23 %) at higher frequencies. This corresponds
perfectly with the nonlinear frequency-distance characteristics of the
cochlea. These bandwidths are often referred to as `critical bandwidths' and a
`Bark' scale is associated with them as shown in Table 6.1.

88 The Lms Theory and Background Book


Sound quality

Critical Band (Bark)     1    2    3    4    5    6    7    8
Center Frequency (Hz)   50  150  250  350  450  570  700  840
Bandwidth (Hz)         100  100  100  100  110  120  140  150

Critical Band (Bark)     9   10   11   12   13   14   15   16
Center Frequency (Hz) 1000 1170 1370 1600 1850 2150 2500 2900
Bandwidth (Hz)         160  190  210  240  280  320  380  450

Critical Band (Bark)    17   18   19   20   21   22    23    24
Center Frequency (Hz) 3400 4000 4800 5800 7000 8500 10500 13500
Bandwidth (Hz)         550  700  900 1100 1300 1800  2500  3500

Table 6.1 Table of critical bands
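For computations that need the Bark value of an arbitrary frequency, the table can be replaced by an analytic approximation. The sketch below uses Traunmüller's widely quoted formula for the critical band rate; this formula is an external assumption, not part of the original text:

```python
def hz_to_bark(f_hz: float) -> float:
    """Approximate critical band rate (Bark) for a frequency in Hz.

    Traunmueller's analytic approximation to the Bark scale; it agrees
    with the tabulated band centers to within a fraction of a Bark.
    """
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# 1000 Hz is the center of critical band 9 in Table 6.1,
# i.e. a critical band rate of about 8.5 Bark:
print(round(hz_to_bark(1000.0), 2))  # → 8.53
```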

Masking
The critical bands described above have important implications for sounds composed of multiple components. For example, narrow band random sounds falling within one such filter bandwidth will add up to the global sensation of loudness at the center frequency of the filter. On the other hand, a high level sound component may `mask' another lower level sound which is too close in frequency.

An example of masking is shown below {5}. A 50 dB, 4 kHz tone (marked +) can be heard in the presence of narrow-band noise, centered around 1200 Hz, up to a noise level of 90 dB. If the noise level rises to 100 dB, the tone is not heard.

Figure 6-3 Masking effects of narrow band noise {5}

Part II Acoustics and Sound Quality 89


Chapter 6 Sound quality

The higher the level of the masking sound, the wider the frequency band over which masking occurs. Again, it turns out that multiple sound components falling within one of the ear filter bandwidths add up to the masking level, while when they are further apart each can be considered as a separate sound with its own masking properties.

Temporal effects

Finally, a number of temporal effects are associated with the hearing process. Sounds must `build up' before causing a neural reaction; the reaction time, however, is dependent on the sound level. This has an effect on the perceived loudness, since the loudness of a tone burst decreases for durations smaller than about 200 ms. For longer durations, the loudness is almost independent of duration.

This also has its consequences for masking:

- Short sounds preceding a second loud sound can be reduced in loudness or even masked. The time intervals for this temporal `pre-masking' phenomenon are in the order of tens of milliseconds.

- A similar effect may occur after switching off a loud sound. During a time interval of up to 200 ms (dependent on masking and tone level), short tone bursts may be masked (post-masking).

- In the presence of a given continuous sound, tone bursts with levels not exceeding that of the first signal might be obscured, depending on their length. This is called `simultaneous masking'.

A detailed discussion of these temporal effects can be found in {6}.


6.2 Sound quality analysis

One of the fundamental problems with sound quality is that `what-you-hear-is-not-what-you-get'. Nonlinear physical characteristics of the human ear mean that the sound perceived is not directly related to the sound level being generated. Furthermore `what-you-like-is-not-what-you-hear', since the appreciation or non-appreciation of a sound depends to a great extent on the situation and the attitude of the listener. An appreciation of the physical and psycho-acoustic aspects of human hearing is essential to the understanding of sound quality, and to this end a short summary of the significant points and terms used is given in section 6.1.

In the majority of problems or studies related to acoustics, the issue at hand is acoustic comfort, and not hearing damage or structural integrity. In order to properly describe this acoustic comfort, it has long since become clear that the acoustic pressure level is by no means sufficient or even adequate to correctly represent the actual hearing sensations. This is due to the very complex nature of the auditory impressions of acoustic signals (or `sounds'), leading to the use of concepts such as the `quality' of the sound.

Auditory impressions can be annoying, in which case the sound is unwanted and is often referred to as `noise'. Typical examples are irritating engine, road or wind noise in a car, aircraft noise, and machine or fan noise in the working environment. Examples of vehicle noises which, while being annoying, do not contribute significantly to the sound pressure level are wiper noise, fuel pump noise, alternator whine and dashboard squeaks. To express this negative quality or annoyance, a multitude of qualitative concepts like whine, rattle, boom, rumble, hiss, beat, squeak, speech interference, harshness, sharpness, roughness and fluctuation strength are used.

But not everything you hear is either bad or unwanted. A sound can be an important messenger of information, in which case it conveys a positive feeling. Examples are the solidity of a door-slam, the feeling of sportiveness of a car engine (or exhaust) during acceleration, the smoothness of a limousine engine, or the `catching' of a door lock or a seat belt. In these cases, the noise does not need to be removed, but it has to sound `right'.

Analysis of sound signals

Having identified a problem, the aim is to measure, evaluate and modify sounds, and a prerequisite for this is a high quality recording of the sound.


Figure 6-4 Sound quality analysis
Measurements
Sound quality measurements are acoustic measurements made with microphones. These can be digitally recorded and imported into the computer system, but in order to successfully evaluate a sound it is absolutely essential that it is both recorded and replayed in the most accurate and representative way possible. Binaural recording is a technique whereby microphones are mounted inside the ears of an artificial head to represent the sensation of human hearing as closely as possible.
Evaluation
The next step in dealing with a sound quality issue is to gain a proper understanding of the quality of the sound. In order to evaluate sound quality characteristics, different (non-exclusive) approaches may be followed.
(a) The acoustic signal can be evaluated subjectively by a specialist or jury of listeners. This can be achieved by replaying the signal either digitally via a recorder or directly via an analog output to headphones or speakers. When using direct replay, cyclic repetition of a particular segment can be performed, and techniques are provided to suppress the `click' at the start and end of a segment, as well as on-line notch filtering. This latter facility can give a very fast assessment of the critical spectral characteristics of a sound.
(b) The acoustic pressure signal is processed in such a way that perception-relevant quantitative values can be obtained through the use of adequate sound quality metrics. Such metrics form part of the comfort analysis.
Modification
Important information on the nature of a sound can be obtained by modifying
the sound signal and comparing its perceived quality with the original. This
modification can be imposed in the time, frequency or order domains.


An important consequence of sound modification is that it can also serve to generate the `target' sounds which become the specifications for subsequent product modifications.

Binaural recording and playback


The ultimate goal of a sound quality analysis must be to record, analyze, possibly modify and then play back a sound in such a way as to reproduce exactly what the listener would have experienced if he had listened to the original sound. The purpose of this section is to give an overview of this whole process and to introduce the factors that are involved in it. It also serves as a means of clarifying the terminology used in such a process.
Figure 6-5 Binaural recording and playback


Recording
The first stage in this process is to make an exact recording of a sound. A single microphone situated in free space is insufficient for this, since at least four microphones would be necessary to correctly capture the 3D nature of the sound. It has been demonstrated in the previous section that the pressure experienced by the eardrum will be greatly influenced by the presence of the head and torso of the listener, and is further affected by the non-linear operating characteristics of the ear itself. As a consequence of this, one of the most accurate ways to record a sound is to mimic the function of the ears themselves and place two microphones inside the ear canals. Such a technique is known as binaural recording, and involves two inputs representing what the left and the right ears would hear.
Although it is possible to place the microphones inside the ears of a real head, it is more common to use an artificial head which provides similar spatial filtering to that of an actual head, shoulders and torso.
Equalization
You may wish to reconstruct this recording as if it were the original sound and not as it is heard inside the head. In this case, you will need to `undo' the modifications that were caused by the presence of the head. The sound can be reconstructed as if it were in a free field or a diffuse field.


A free field refers to an idealized situation where the sound flows directly out
from the source and the pressure levels drop with increasing distance from the
source. A diffuse field occupies a smaller space and the sound is reflected
many times.

Thus, when you are recording a sound you can determine the type of field you wish to reconstruct it in, and the appropriate compensation or equalization will be applied. If you only wish to replay the sound through headphones, then you do not need this equalization, and so you can either select to have a non-equalized recording or you will have to de-equalize it before it is replayed through headphones.

Transfer to computer
The recording on the DAT recorder is held in a 16 bit audio format. When this is transferred to a computer system, it will then be converted to a 32 bit floating point format. To achieve this conversion a calibration factor is required.

Replay
When you replay the signal through headphones, de-equalization may be necessary if free field or diffuse field equalization has been applied to the original recording. In addition, compensation is required to take account of the transfer function associated with the particular set of headphones to be used.


6.3 Reading list

1 D.LUBMAN, Noise Quality, Toward a Larger Vision of Noise Control Engineering, Journal of Noise Control Engineering, ....
2 J.BLAUERT, Spatial Hearing, MIT Press, Cambridge (MA), 1983.
3 W.BRAY ET AL, Development and Use of Binaural Measurement Technique, Proc. Noise Con. `91, Tarrytown (NY), July 14-16, 1991, pp 443-450.
4 D.HAMMERSHOI, H.MOLLER, Binaural Auralisation: Head-Related Transfer Functions Measured on Human Subjects, Proceedings 93rd AES Convention, Vienna (A), March 24-27, 1992, 7pp.
5 J.HASSAL, K.ZAVERI, Acoustic Noise Measurements, Bruel & Kjaer, DK2850 Naerum, Denmark, 1988.
6 E.ZWICKER, H.FASTL, Psychoacoustics, Facts and Models, Springer Verlag, Berlin (Germany), 1990.
7 J.HOLMES, Speech Synthesis and Recognition, Van Nostrand Reinhold, Wokingham, Berkshire (UK), 1988.
8 M.HUSSAIN, J.GOELLES, Statistical Evaluation of an Annoyance Index for Engine Noise Recordings, SAE Paper 911080, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18 1991, pp 359-368.
9 H.SHIFFBAENKER ET AL, Development and Application of an Evaluation Technique to Assess the Subjective Character of Engine Noise, SAE paper 911081, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18 1991, pp 369-379.
10 K.TAKANAMI ET AL, Improving Interior Noise Produced During Acceleration, SAE paper 911078, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18 1991, pp 339-348.
11 G.IRATO, G.RUSPA, Influence of the Experimental Setting on the Evaluation of Subjective Noise Quality, Proceedings of the Second International Conference on Vehicle Comfort, Oct 14-16, 1992, Bologna (Italy), pp. 1033-1044.
12 INTERNATIONAL ORGANIZATION FOR STANDARDIZATION, Method for Calculating Loudness Level, ISO-532-1975 (E).
13 E.ZWICKER ET AL, Program for Calculating Loudness According to DIN45631 (ISO532B), Journal Acoustic Society Jpn (E), Vol. 12, Nr. 1, 1991.
14 S.J.STEVENS, Procedure for Calculating Loudness: Mark VI, J. Acoust. Soc. Am., Vol. 33, Nr. 11, pp. 1577-1585, 1961.
15 S.J.STEVENS, Perceived Level of Noise by Mark VII and Decibels, J. Acoust. Soc. Am., Vol. 51, Nr. 2, pp. 575-601, 1971.
16 E.ZWICKER, Procedure for Calculating Loudness of Temporally Variable Sounds, J. Acoust. Soc. Am., Vol. 62, Nr. 3, pp 675-681, 1977.
17 L.L.BERANEK, Criteria for Noise and Vibration in Communities, Buildings and Vehicles, in Noise and Vibration Control, revised edition, McGraw-Hill Inc., 1988.
18 W.AURES, Berechnungsverfahren für den sensorischen Wohlklang beliebiger Schallsignale, Acustica, Vol. 59, pp. 130-141, 1985.


19 M.ZOLLNER, Psychoacoustic Roughness, A New Quality Criterion, Cortex Electronic, 1992.
20 W.AURES, Ein Berechnungsverfahren der Rauhigkeit, Acustica, Vol. 58, pp. 268-280, 1985.
21 M.F.RUSSEL, What Price Noise Quality Indices, Proc. Engineering Integrity Society Symposium on NVH Challenges - Problem Solutions, Oct. 21, 1992.
22 M.F.RUSSEL ET AL, Subjective Assessment of Diesel Vehicle Noise, IMechE paper 925187, Ref. C389/044, FISITA Conference Engineering for the Customer, pp. 37-42, 1992.
23 D.G.FISH, Vehicle Noise Quality - Towards Improving the Correlation of Objective Measurements with Subjective Rating, IMechE paper 925186, Ref. C389/468, FISITA Conference Engineering for the Customer, pp. 29-36, 1992.
24 G.TOWNSEND, A New Approach to the Analysis of Impulsiveness in the Noise of Motor Vehicles, Proc. Autotech `89, paper 7/26.
25 MOTOR INDUSTRY RESEARCH ASSOCIATION, Improving Correlation of Objective Measurements with Subjective Rating of Vehicle Noise, MIRA research report K3866326.
26 F.K.BRANDL ET AL, A Concept for Definition of Subjective Noise Character - A Basis for More Efficient Vehicle Noise Reduction Strategies, Proceedings Internoise-89, Newport Beach (CA), Dec. 4-6, 1989, pp. 1279-1282.
27 R.S.THOMAS, A Development Process to Improve Vehicle Sound Quality, SAE paper 911079, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 13-16 1991, pp. 349-358.
28 G.R.BIENVENUE, M.A.NOBILE, The Prominence Ratio Technique in Characterizing Perception of Noise Signals Containing Discrete Tones, Proc. Internoise `92, Toronto, Canada, July 20-22, 1992, pp. 1115-1118.
29 K.TSUGE ET AL, A Study of Noise in Vehicle Passenger Compartment during Acceleration, SAE paper 8509665, Proceedings SAE Noise and Vibration Conference, Traverse City (MI), May 15-17, 1985, pp. 27-34.
30 T.WAKITA ET AL, Objective Rating of Rumble in Vehicle Passenger Compartment during Acceleration, SAE paper 891155, Proceedings SAE Noise and Vibration Conference, Traverse City (MI), May 16-18, 1989, pp. 305-312.
31 W.YAGISHASHI, Analysis of Car Interior Noise during Acceleration Taking into Account Auditory Impressions, JSAE Review (E), Vol. 12, Nr. 4, Oct. 1991, pp. 58-61.
32 K.FUJITA ET AL, Research on Sound Quality Evaluation Methods for Exhaust Noise, JSAE Review (E), Vol. 9, Nr. 2, April 1988, pp. 28-33.
33 AMERICAN NATIONAL STANDARD, S3.14-1977 (R1986), Rating Noise with Respect to Speech Interference, Acoustical Society of America.
34 H.STEENEKEN, T.HOUTGAST, RASTI, A Tool for Evaluating Auditoria, Bruel & Kjaer Technical Review, Nr. 3-1985, pp. 13-30.
35 M.NAKAMURA, T.YAMASHITA, Sound Evaluation in Cars by RASTI Method, JSAE Review, Vol. 11, Nr. 4, Oct 1990, pp. 38-41.
36 H.MOLLER, Fundamentals of Binaural Technology, Applied Acoustics, Vol. 36, 1992, pp. 171-218.
37 K.GENUIT, M.BURKHARD, Artificial Head Measurement System for Subjective Evaluation of Sound Quality, Sound and Vibration, March 1992, pp. 18-23.
38 G.MICHEL, G.EBBIT, Binaural Measurements of Loudness as a Parameter in the Evaluation of Sound Quality in Automobiles, Proc. Noise Con. `91, Tarrytown (NY), July 14-16, 1991, pp. 483-490.
39 G.THEILE, The Importance of Diffuse Field Equalisation for Stereophonic Recording and Reproduction, Proc. 13th Tonmeistertagung, 1984.
40 D.S.MANDIC, P.R.DONOVAN, An Evaluation of Binaural Measurement Systems as Acoustic Transducers, Proc. Noise Con `91, Tarrytown (NY), July 14-16, 1991, pp. 459-466.
41 H.HAMMERSHOI, H.MOLLER, Artificial Heads for Free Field Recording: How Well Do They Simulate Real Heads?, Proc. 14th ICA, Beijing, 1992, Paper H6-7 (2pp).
42 K.GENUIT, H.GIERLICH, Investigation between Objective Noise Measurement and Subjective Classification, SAE Paper 891154, Proceedings SAE Noise and Vibration Conference, Traverse City (MI), May 16-18 1989, pp 295-303.
43 H.MOLLER ET AL, Transfer Characteristics of Headphones, Proc. 92nd AES Convention, Vienna (A), March 24-27, 1992, 28 pp.
44 Y.OKAMOTO ET AL, Evaluation of Vehicle Sounds Through Synthesized Sounds that Respond to Driving Operation, JSAE Review (E), Vol. 12, Nr. 4, Oct. 1991, pp. 52-57.
45 S.M.HUTCHINS ET AL, Noise, Vibration and Harshness from the Customer's Point of View, IMechE paper 925181, Ref. C389/049, Proc. FISITA-92 Conference, Engineering for the Customer.
46 H.AOKI ET AL, Effects of Power Plant Vibration on Sound Quality in the Passenger Compartment During Acceleration, SAE paper 870955, Proc. SAE Noise and Vibration Conference, Traverse City (MI), Apr. 28-30, 1987, pp. 53-62.
47 K.C.PARSONS, M.J.GRIFFIN, Methods for Predicting Passenger Vibration Discomfort, Society of Automotive Engineers Technical Paper Series 831921.
48 M.J.GRIFFIN, Handbook of Human Vibration, Academic Press Ltd., ISBN 0-12-303040-4.
49 J.D.LEATHERWOOD, L.M.BARKER, A User-Oriented and Computerized Model for Estimating Vehicle Ride Quality, NASA Technical Paper 2299 (1984).
50 INTERNATIONAL STANDARD, Ref. No. ISO 2631/1 - 1985 (E).
51 INTERNATIONAL STANDARD, Ref. No. ISO 5349 - 1986 (E).
52 BRITISH STANDARDS INSTITUTION, Measurement and Evaluation of Human Exposure to Whole-Body Mechanical Vibration and Repeated Shock, Ref. No. BS 6841 - 1987.
53 AMERICAN NATIONAL STANDARD, S3.14 - 1977 (R-1986), Rating Noise with Respect to Speech Interference, order from the Acoustical Society of America.
54 ANSI S3.5, Calculation of the Articulation Index, American National Standards Institute, Inc., 1430 Broadway, New York, New York 10018 USA, 1969.
55 INTERNATIONAL STANDARD, Ref. No. ISO 532 - 1975 (E).



Chapter 7

Sound metrics

It may be said that the best way to evaluate the quality of a sound is to listen to it and express an opinion about it, but in many cases there is also a strong interest in correlating the results of these subjective evaluations with measurable parameters. Therefore a number of sound quality metrics exist, where perception-relevant quantitative values are calculated from the acoustic pressure signal.

Sound pressure levels
Loudness metrics
Sharpness
Roughness
Fluctuation strength
Pitch
Articulation index
Speech interference levels
Impulsiveness

The references are listed in chapter 6.


7.1 Sound pressure level

The basic descriptor of a sound signal is the sound pressure level (SPL), denoted by L and described in equation 4-12.

The stimulus of the sound pressure level needs to be interpreted as a hearing sensation, and one approach consists of multiplying the frequency spectrum of the acoustic pressure signal with a weighting function before calculating the RMS level. Several weighting functions have been defined, of which the A, B, C and D weightings are the most widely used. They are based on experimentally determined equal loudness contours which express the loudness sensation of single tones as a function of sound pressure level and frequency.

Time domain sound pressure level

This function calculates the frequency and time weighted sound pressure level according to the IEC 651 and ANSI S1.4-1983 standards.

Frequency weighting can be applied to the time signal using the A, B or C weightings described above. The time signal is then exponentially averaged to arrive at the sound pressure level. An exponential weighting factor e^(-Δt/τ) is used, where Δt is the sample period of the signal and τ is the time constant. The value of τ depends on the type of signal (mode) and three default (standardized) values are supplied:

τ = 35 ms for impulse (peaky) signals
τ = 125 ms for fast changing signals
τ = 1000 ms for slow changing signals.

By selecting the type of signal (mode), the appropriate time constant is applied.

When the signal contains spikes and is therefore defined by the mode "impulse", an additional peak detector mechanism is implemented. In this case, when an increase in the averaged signal is detected, the signal is followed exactly. When the signal is decreasing, exponential averaging is used with a long time constant, set by default to 1500 ms. The time constant used in this situation is termed the decay time constant.
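The exponential time weighting described above amounts to a one-pole recursive filter applied to the squared pressure samples. The sketch below is an illustrative implementation under that reading, not the product's own code; the sample rate and time constant are free parameters:

```python
import math

def time_weighted_spl(p, fs, tau=0.125, p0=20e-6):
    """Exponentially time-weighted sound pressure level trace, in dB re 20 uPa.

    p   : iterable of (frequency weighted) pressure samples in Pa
    fs  : sample rate in Hz
    tau : time constant in s (0.035 impulse, 0.125 fast, 1.0 slow)
    """
    alpha = math.exp(-1.0 / (fs * tau))  # per-sample decay factor e^(-dt/tau)
    mean_sq = 0.0
    levels = []
    for x in p:
        mean_sq = alpha * mean_sq + (1.0 - alpha) * x * x
        levels.append(10.0 * math.log10(max(mean_sq, 1e-30) / p0**2))
    return levels

# A steady 1 Pa signal settles towards 10*log10(1 / (20e-6)**2) ≈ 94 dB:
trace = time_weighted_spl([1.0] * 4000, fs=1000, tau=0.125)
print(round(trace[-1], 1))  # → 94.0
```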


7.2 Equivalent sound pressure level

The ISO standards ISO 1996/1-1982 and ISO 1999:1990 provide a definition for the `equivalent A-weighted sound pressure level in decibels', identified as LAeq,T.

This function gives the value of the A-weighted sound pressure level of a continuous, steady sound that, within a specified time interval T, has the same mean square sound pressure as the sound under consideration whose level varies with time. This leads to the expression:

LAeq,T = 10 log10 [ (1/(t2-t1)) ∫ from t1 to t2 of pA²(t)/p0² dt ]    Eqn 7-1
where

LAeq,T is the equivalent continuous A-weighted sound pressure level, in decibels, determined over a time interval T starting at t1 and ending at t2;

p0 is the reference sound pressure (20 µPa);

pA(t) is the instantaneous A-weighted sound pressure of the sound signal.

In practice, with sampled data, the equivalent sound pressure level is computed by a summation of the sampled values of the squared pressure, expressed in dB, over the number of samples required.

As a generalization, you can apply the same formula to a non-A-weighted sound pressure signal p(t) to obtain Leq,T.
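With sampled data, the integral of equation 7-1 reduces to the mean of the squared samples. A minimal sketch (assuming a calibrated pressure signal in Pa; the A-weighting would be applied first to obtain LAeq):

```python
import math

def leq_db(p_samples, p0=20e-6):
    """Equivalent continuous sound pressure level of a sampled record, in dB."""
    mean_sq = sum(x * x for x in p_samples) / len(p_samples)
    return 10.0 * math.log10(mean_sq / p0**2)

# A constant 1 Pa pressure gives 20*log10(1 / 20e-6) ≈ 93.98 dB;
# the sign of the samples does not matter, only their squares:
print(round(leq_db([1.0] * 100), 2))  # → 93.98
```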


7.3 Loudness

The equal loudness contours shown in Figure 6-2 in the chapter "Sound quality" are the result of large numbers of psycho-acoustical experiments and are in principle only valid for the specific sound types involved in the test. These curves are valid for pure tones and depict the actual experienced loudness for a tone of given frequency and sound pressure level when compared to a reference tone. The resulting value is called the `loudness level'.

The loudness level itself is expressed in Phons. 1 kHz tones are used as the reference, which means that for a 1 kHz tone, the Phon value corresponds to the dB sound pressure level. The equal loudness contours for free field pure tones and diffuse field narrow-band random noise are standardized as ISO 226-1987 (E).

A linear unit derived from the (logarithmic) Phon values is the Sone (S), which is related to the Phon value (P) in the following way:

S = 2^((P-40)/10)    Eqn 7-2

The Sone scale's linear relationship to the experienced loudness makes it easier to interpret. A loudness of 1 Sone corresponds to a loudness level of 40 Phons. A tone which is perceived as twice as loud will have double the loudness (Sone) value, and a loudness level which is 10 Phons higher.
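Equation 7-2 is easy to verify numerically. A small sketch of the conversion in both directions (the power-law relation is conventionally used for loudness levels of about 40 Phons and above):

```python
import math

def phon_to_sone(P: float) -> float:
    """Loudness S in Sones from loudness level P in Phons (equation 7-2)."""
    return 2.0 ** ((P - 40.0) / 10.0)

def sone_to_phon(S: float) -> float:
    """Inverse of equation 7-2: loudness level in Phons from loudness in Sones."""
    return 40.0 + 10.0 * math.log2(S)

# 40 Phons corresponds to 1 Sone; every +10 Phons doubles the loudness:
print(phon_to_sone(40.0), phon_to_sone(50.0), sone_to_phon(4.0))  # → 1.0 2.0 60.0
```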

When broadband or multi-tone sounds are being considered, the frequency spectrum of the loudness is expressed in terms of critical bands instead of as a total value. Critical bands and Barks are described in Table 6.1 in the chapter on "Sound quality". In this case the terminology `specific loudness' is used, expressed in Sones/Bark.

For steady state sounds, standardized calculation procedures have been defined by Zwicker and Stevens and are accepted as ISO standards {12, 13, 14}. A more recent procedure by Stevens {15} has not yet been accepted as an ISO standard.

They are both based on:

- a convention for the relation between octave band sound pressure levels and octave band partial (specific) loudness descriptions

- a convention to combine the specific loudness values into a global loudness, taking into account masking effects.

For temporally varying sounds, Zwicker has also proposed an approach taking into account temporal effects {16}, which is not yet accepted as an ISO standard.


7.3.1 Stevens Mark VI

The Stevens (Mark VI) method, standardized as ISO 532-A-1975 and ANSI S3.4-1980, starts from octave band sound pressure levels. Their loudness is compared to that of a critical band noise at 1 kHz. It is only defined for diffuse sound fields with relatively smooth, broadband spectra. Through a set of standardized curves, each octave band level is converted into a partial loudness index (s), see Figure 7-1. The partial loudness values are then combined into a total loudness (in Sones), using equation 7-3.

st = sm + F (Σs - sm)    Eqn 7-3

where st = the total loudness, in Sones
      sm = the greatest of the loudness indices, in Sones
      Σs = the sum of the loudness indices of all bands, in Sones
      F  = fractional loudness contribution factor, reflecting masking effects. It depends on the type of octave measurement (0.3 for 1/1 octaves, 0.15 for 1/3 octaves).
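Equation 7-3 is a one-line combination rule. In the sketch below the partial loudness indices are made-up placeholder values; in practice they would be read from the standardized curves of Figure 7-1:

```python
def stevens_total_loudness(indices, F=0.3):
    """Total loudness st in Sones from partial loudness indices (equation 7-3).

    F = 0.3 for 1/1 octave bands, 0.15 for 1/3 octave bands.
    """
    s_m = max(indices)  # the loudest band contributes in full
    return s_m + F * (sum(indices) - s_m)

# Hypothetical 1/1 octave band loudness indices, in Sones:
print(round(stevens_total_loudness([1.0, 2.0, 4.0]), 2))  # → 4.9
```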

Figure 7-1 Loudness (Mark VI)


7.3.2 Stevens Mark VII


A more recent calculation scheme is Stevens Mark VII {15, 17}, which uses a more refined partial loudness calculation (see Figure 7-2), as well as a level dependent calculation for F in equation 7-3. The reference frequency is 3150 Hz. Apart from the loudness (in Sones), the logarithmic unit `perceived loudness level' (PLdB) is used here, which is 32 dB for a loudness of 1 Sone at 3150 Hz. PLdB values will be about 8 dB lower than the loudness level in Phons. Examples are discussed in {5} and {17}.

Figure 7-2 Loudness (Mark VII)

7.3.3 Loudness Zwicker


Loudness assessment using the Zwicker method (standardized as ISO 532B) starts from 1/3 octave band sound pressure level data, which can originate from either a free or a diffuse sound field. It is capable of dealing with complex broadband noises, which may include pure tones.


The method takes masking effects into account. Masking effects are important for sounds composed of multiple components. A high level sound component may `mask' another lower level sound which is too close in frequency. An example of masking is shown below {5}. A 50 dB, 4 kHz tone (marked +) can be heard in the presence of narrow-band noise, centered around 1200 Hz, up to a noise level of 90 dB. If the noise level rises to 100 dB, the tone is not heard.

Figure 7-3 Masking effects of narrow band noise {5}

The method uses different sets of graphs for diffuse and free fields that relate loudness level to sound pressure level, and that take the masking into account by a sloping-edge filter characteristic for each octave band. In this way, dominant and hence masking frequency bands will show their influence over a large frequency range and prevent masked sounds from contributing to the total level.

Figure 7-4 shows an example of the Zwicker method. The 1/3 octave band data are transferred to the appropriate Zwicker diagram.


Figure 7-4 Example loudness calculation according to Zwicker's method {5}

The partial loudness contours are computed for each defined segment (global evaluation) or frame (tracked evaluation) using a classical Zwicker loudness calculation. The frame or segment size should be selected to ensure that the spectral resolution needed for the FFT-based octave band analysis can be achieved. The frame size can be used to restrict the analysis to time periods over which time-varying signals can be regarded as stationary.

The Zwicker loudness analysis allows you to distinguish between unmasked and masked contours, thus allowing you to see that certain levels are either partially or completely masked by previous ones.

The total loudness is calculated as the surface under the enveloping partial loudness contours and can be expressed in Sones, or as a loudness level in Phons, as a function of time. This is presented as a single value in the global evaluation and a trace of values for the tracked evaluation.


7.4 Sharpness

A sensation which is relevant to the pleasantness of a sound is its `sharpness', allowing you to classify sounds as shrill (sharp) or `dull'. The sharpness sensation is strongly related to the spectral content and center frequency of narrow-band sounds, and is largely independent of the loudness level and the detailed spectral structure of the sound.

Roughly, it corresponds to the first spectral moment of the specific loudness, with a pre-emphasis for higher frequencies. A quantitative procedure has been proposed, expressing the sharpness in the unit `acum'. The reference sound of 1 acum is a narrow-band noise, one critical band wide, at a center frequency of 1 kHz and having a level of 60 dB.

The dependency of sharpness on the center frequency and bandwidth of the noise is shown in Figure 7-5 {6}. The middle curve represents a noise of one critical bandwidth as a function of center frequency, the upper and lower curves representing the sharpness of noises with a fixed upper (10 kHz) or lower (0.2 kHz) cut-off frequency, as a function of the other cut-off value. Higher frequency noises produce higher sharpness.

Figure 7-5 Sharpness of bandlimited noise


The specific sharpness S'(z) is calculated according to:

S'(z) = 0.11 N'(z) g(z) z / ∫ from 0 to 24 Bark of N'(z) dz    Eqn 7-4

where N'(z) is the specific Zwicker loudness and g(z) is a weighting function that pre-stresses higher frequency components (Figure 7-6). g(z) has unit value below 16 Bark and rises exponentially above that as

g(z) = 0.066 e^(0.171 z)    Eqn 7-5

Figure 7-6 Sharpness calculation weighting function

The total sharpness S, expressed in `acums', is obtained by integrating the specific sharpness:

S = ∫ from 0 to 24 Bark of S'(z) dz    Eqn 7-6
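Equations 7-4 to 7-6 combine into a single weighted sum over the Bark axis. The sketch below assumes the specific loudness N'(z) is available on a uniform Bark grid; it illustrates the published weighting rather than reproducing the package's own routine:

```python
import math

def g_weight(z: float) -> float:
    """Sharpness weighting g(z): unity below 16 Bark, exponential above (Eqn 7-5)."""
    return 1.0 if z < 16.0 else 0.066 * math.exp(0.171 * z)

def total_sharpness(n_prime, dz=0.1):
    """Total sharpness in acums from specific loudness samples (Eqns 7-4 and 7-6).

    n_prime : specific loudness values N'(z) on the grid z = 0, dz, 2*dz, ... Bark
    """
    total_loudness = sum(n_prime) * dz
    if total_loudness == 0.0:
        return 0.0
    weighted = sum(n * g_weight(i * dz) * (i * dz) for i, n in enumerate(n_prime))
    return 0.11 * weighted * dz / total_loudness

# g(z) is continuous at 16 Bark, since 0.066 * e**(0.171 * 16) is close to 1:
print(round(g_weight(16.0), 2))  # → 1.02
```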


7.5 Roughness

The roughness or harshness of a sound is a quality associated with amplitude modulations of tones. When the modulation frequency is very low (below about 15 Hz), the actual time varying loudness fluctuations can be perceived. This fluctuation sensation is discussed in section 7.6.

At high modulation frequencies (above 150-300 Hz), three separate tones can be heard. In the intermediate frequency range (15-300 Hz), the sensation is of a stationary but rough tone, which renders it rather unpleasant. This sensation is often associated with engine noise, where fractional orders can cause the modulation effects.

Roughness increases with degree of modulation and with modulation frequency,
and is less sensitive to the base frequency. The unit used to describe roughness
is the `asper'; 1 asper being produced by a 100%, 70 Hz modulated 1 kHz
tone of 60 dB.

The dependency relationship between modulation depth and frequency is however
not straightforward. An important element is that the temporal variations
of the loudness can cause masking effects, and a temporal masking depth (ΔL)
is introduced, representing the difference between maximum and minimum in
the actually perceived time dependent loudness pattern. Due to post masking,
this masking depth is smaller than the modulation depth, with the difference
becoming greater at higher frequencies. The roughness (R) of an amplitude
modulated sound can then be approximated as

R \sim f_{mod} \cdot \Delta L    Eqn 7-7

Quantitative procedures to calculate roughness have been proposed. They
involve the calculation of a partial or `specific roughness' in each critical band,
based on modulation frequency and depth, including masking effects, and
integrating them to obtain total roughness.


7.6 Fluctuation strength

When the sound functions have modulation frequencies below 20 Hz, they are
perceived as changes in the sound volume over time. Typically, fluctuating
signals sound louder (and more annoying) than steady state signals of the same
rms amplitude. In this case, the intensity of the sensation is referred to as
`Fluctuation strength' with the unit `vacil'. A reference sound of 1 vacil
corresponds to a 1 kHz tone of 60 dB with a 100% amplitude modulation at 4 Hz.
The ear is most sensitive to fluctuations at 4 Hz. Quantitative models have
been proposed for the fluctuation strength {6} which take into account the
temporal masking effects due to the sound fluctuation.

The dependency of the fluctuation strength (F) on the modulation frequency
(f_mod) and masking depth (ΔL) is then the following

F \propto \frac{\Delta L}{(f_{mod}/4\,\mathrm{Hz}) + (4\,\mathrm{Hz}/f_{mod})}    Eqn 7-8
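A sketch of Eqn 7-8, implementing only the proportionality (so no absolute vacil values), makes the 4 Hz maximum visible:

```python
import numpy as np

def fluctuation_strength_rel(delta_l, f_mod):
    """Relative fluctuation strength following Eqn 7-8. Only the
    proportionality is implemented; the constant needed to land on
    absolute vacil values is omitted."""
    return delta_l / (f_mod / 4.0 + 4.0 / f_mod)

f_mod = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
F = fluctuation_strength_rel(1.0, f_mod)   # largest at the 4 Hz entry
```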


7.7 Pitch

Pitch is a sound attribute that classifies sounds on a scale from low to high. For
pure tones, pitch depends largely on the frequency of the tone, but it is also
influenced by its level.

In a complex tone, consisting of many spectral components, one or more pitches
can be perceived. These pitches also depend to a large extent on the frequencies
of the constituent components, but masking effects can also occur, making
some pitches more prominent than others.

Pitches, both for pure and complex tones, which can be derived from the
spectral content of the signals, are called spectral pitches.

It has been observed that in a complex tone, consisting of a fundamental
frequency and a number of its harmonics, a pitch corresponding to the
fundamental frequency is perceived, even when that fundamental frequency is
filtered out of the signal. In this case, the perceived pitch no longer relates to a
component actually present in the signal but relates to the difference between
the higher harmonics. This type of pitch is called residue pitch or virtual pitch.

The pitch calculation is implemented according to the method developed by
Terhardt (J. Acoust. Soc. Am., Vol 71, pp 679-688, 1982). Both spectral and
virtual pitches can be derived, as well as the weight of each calculated pitch.
These weights indicate how prominently the pitches are perceived.

If, in the calculation, the effect of the tone level on the pitch is taken into
account, the calculated pitch is called true pitch. If the influence of level on the
tone is neglected, it is called nominal pitch.


7.8 Articulation index (AI)


The Articulation Index is a parameter developed with a view to assuring
speech privacy. Speech privacy can be defined as the lack of intrusion of
recognizable speech into an area, where background sound or noise provides a
positive quality of privacy.
The measure of interference caused by noise to the masking of speech can be
calculated by weighting the noise spectrum (in 1/3 octave bands) according to
its importance to the understanding of speech. From this weighted spectrum,
the Articulation Index is derived.

A graphical equivalent of the calculation is given in Figure 7-7 (from {17}).
The 1/3 octave bands relevant to speech are weighted by a number of dots.
When the sound pressure level is plotted on this graph, the AI can be derived
as the number of dots above the spectrum divided by the total number.
Practical calculations are of course based on tables.

Figure 7-7 Graphical representation of the Articulation Index

This index can then be related to a percentage of syllables understood (see
Figure 7-8 from {17}). For complete privacy, an AI of 0.05 is the limit; for
semi-privacy to discuss non-confidential matters, an AI of 0.1 is
acceptable {17}.

Figure 7-8 Intelligibility of sentences as a function of articulation index


There are two methods available.
V Standard
The calculation is based on the work of Beranek as set out in `The design
of speech communication systems', Proceedings of the IRE, Vol 45,
880-884, 1947. The results of this method will lie in the range 0-100%.


V Modified
These calculations are based upon the AIM method which has been
described in the work mentioned above, but which opens up the internal
floating range of 30 dB to a fixed range of 80 dB between the limits of 20
and 100 dB. The results of this method will lie in the range −107% to
almost 160%.


7.9 Speech interference level (SIL, PSIL)

When the comprehension of speech is the goal, background sound or noise has
the negative quality of interference. It can cause annoyance, and even be
hazardous in a working environment where instructions need to be correctly
understood. Therefore, a noise rating called the `Speech Interference Level'
(SIL) was developed.

Beranek originally defined it as the arithmetic average of the sound pressure
levels in the bands 600-1200, 1200-2400 and 2400-4800 Hz. Since the definition
of the new preferred octave band limits, this definition was changed to the
`Preferred Speech Interference Level' or PSIL, defined as the average sound
pressure level in the 500, 1000 and 2000 Hz octave bands {5,17}.

In 1977, the Speech Interference Level was standardized as ANSI
S3.14-1977 (R-1986) {33}, which also included the 4 kHz octave band. This is in
accordance with an ISO suggestion, described in ISO Technical Report TR
3352-1974. On average, the ANSI-SIL is about 1 dB higher than the original
(Beranek) value and about 2.5 dB lower than the PSIL {17}.
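The PSIL and ANSI-SIL band averages described above are simple enough to sketch directly; the function names and the example band levels are illustrative assumptions:

```python
def psil(l500, l1000, l2000):
    """Preferred Speech Interference Level: arithmetic average of the
    sound pressure levels (dB) in the 500, 1000 and 2000 Hz octave bands."""
    return (l500 + l1000 + l2000) / 3.0

def sil_ansi(l500, l1000, l2000, l4000):
    """ANSI S3.14 variant: the 4 kHz octave band is included as well."""
    return (l500 + l1000 + l2000 + l4000) / 4.0

# Speech-band levels of a hypothetical office noise spectrum (dB)
example_psil = psil(62.0, 58.0, 54.0)
example_sil = sil_ansi(62.0, 58.0, 54.0, 50.0)
```

For a spectrum that falls off towards high frequencies, the ANSI value comes out below the PSIL, consistent with the ~2.5 dB relation quoted above.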

The application of the SIL to the actual understanding of speech is presented in
several graphs and tables {see 5,17}. These show the relationship between SIL
and the conditions under which speech can be understood. As an example,
Figure 7-9 shows the relationship between the ease of face-to-face
conversation, the ambient noise level in PSIL, and the separation distance in
meters {5}.

Figure 7-9 Communication limits in the presence of background noise (after Webster)


7.10 Impulsiveness

This metric is used to quantify the impulsive nature of a signal. It is used, for
instance, in the quantification of diesel engine noise.

The algorithm for calculating impulsiveness is based on the signal envelope,
and results in a number of output values: the mean impulse peak level, mean
impulse rise rate and mean impulse duration. Each of these parameters is
described in the figure below. In addition, the mean impulse rate (occurrence)
is determined.

[Figure: impulse parameters on the signal envelope — peak level, rise rate,
rise time, fall time, center position, and the threshold formed by the rms level
plus the threshold offset]

A certain threshold is used to determine the occurrence of an impulsive event.
That threshold is the sum of the RMS value (the overall segment RMS in the
global computation, or the frame RMS in the case of a tracked calculation) and
a user-defined threshold offset.

The start of the impulse is defined by the minimum which occurs before the
crossing of the threshold. The rise time is the time between the impulse start
and the moment at which the impulse peak level is reached. The peak level is
expressed in dB, and is the difference between the impulse peak and the
threshold level. The end of the peak is defined by the first minimum which
occurs after the threshold level has been recrossed. The duration of the peak is
the sum of rise time and fall time.

The rise rate is the maximum rise rate occurring between the impulse start and
the impulse peak.
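The detection logic described above can be sketched as follows. This is a minimal illustration on a synthetic envelope, not the LMS algorithm itself; the dB bookkeeping and the names are assumptions:

```python
import numpy as np

def detect_impulses(envelope_db, dt, offset_db=10.0):
    """Minimal impulse detector working on a signal envelope in dB.
    The detection threshold is the segment RMS level plus a user-defined
    offset, as described in the text above."""
    rms_db = 10.0 * np.log10(np.mean(10.0 ** (envelope_db / 10.0)))
    threshold = rms_db + offset_db
    above = envelope_db > threshold
    # rising edges = upward crossings of the threshold
    edges = np.flatnonzero(~above[:-1] & above[1:]) + 1
    impulses = []
    for e in edges:
        start = e
        while start > 0 and envelope_db[start - 1] < envelope_db[start]:
            start -= 1      # walk back to the minimum before the crossing
        peak = e
        while peak + 1 < len(envelope_db) and envelope_db[peak + 1] >= envelope_db[peak]:
            peak += 1       # walk forward to the impulse peak
        impulses.append({
            "rise_time": (peak - start) * dt,             # seconds
            "peak_level": envelope_db[peak] - threshold,  # dB above threshold
        })
    return impulses

env = np.zeros(100)
env[50:55] = [10.0, 20.0, 30.0, 20.0, 10.0]   # one synthetic impulse
found = detect_impulses(env, dt=0.001, offset_db=5.0)
```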



Chapter 8

Acoustic holography

This chapter describes the background to acoustic holography.

117

8.1 Introduction

Acoustic holography allows you to accurately localize noise sources. It
therefore helps in both the reduction of unwanted vibro-acoustic noise and the
optimization of noise levels. It:

V estimates the acoustic power and the spectral content emitted by the
object under examination.

V maps sound pressure, velocity and intensity on the measurement plane
and on all parallel planes. The mapping of these acoustical quantities
outside the measurement plane is done through acoustical holography
(near field - far field).

V estimates the acoustic level of the principal sources, including
contribution analysis.

This document describes the principles of taking acoustic measurements and
the subsequent analysis of acoustic holography data, for both stationary and
transient measurements.

Basic principles
In performing acoustic holography, you need to measure cross spectra between
a set of reference transducers and the hologram microphones. From these
measurements you can derive sound intensity, particle velocity and sound
power values.

A basic assumption is that you are operating in free field conditions and that
the energy flow is coming directly from the source. Measurements need to be
taken close to the source.

It provides you with an accurate 3D characterization of the sound field and the
source, with a higher spatial resolution than is possible with conventional
intensity measurements.


8.2 Acoustic holography concepts

The principle of acoustic holography is to decompose the measured pressure
field into plane waves, by using a spatial Fourier transform. With the frequency
being fixed, we can calculate how each of these plane waves propagates, and
by adding them we can find the pressure field on any plane which is parallel to
the measurement plane.

Consider an acoustic wave. Measuring the pressure on a plane means cutting
the wavefronts by the measurement plane:

[Figure: wavefronts of an acoustic wave intersected by the measurement plane]
The goal is to determine the whole acoustic wavefront from the known pressure
on the measurement plane. Each microphone in the array measures the
complex pressure (amplitude and phase).

Temporal and spatial frequency

In considering how to do this, we will compare the time and the spatial domain.

Time domain
When considering measurements in the time domain, the position from the
sound source (m) is fixed and we obtain a measure of the pressure variation
as a function of time.

[Figure: pressure at a fixed position m as a function of time; period T,
frequency f = 1/T, wavelength λ = c/f]

The transformation from the time to the frequency domain is achieved using
the Fourier Transform given below




F(āā)   Ă f (ātā)e jāātĂdt Eqn 8-1




Spatial domain
If we now consider measurements where time is fixed and pressure varies as a
function of distance, we can obtain a measure of the energy flow.

[Figure: pressure at a fixed time as a function of distance from the source m;
the spatial period is the wavelength]

The spatial frequency of this function, or wavenumber (k_0), is defined as:

k_0 = \frac{2\pi f}{c} = \frac{\omega}{c} = \frac{2\pi}{\lambda}

where c is the speed of sound and f is the temporal frequency.

If we fix the temporal frequency, this means that the acoustic wavelength is
fixed too.

The complex pressure as a function of space is called the pressure image at
the specified frequency.

Conversion from the spatial domain is also done using a Fourier transform. In
acoustic holography, pressure is measured in two dimensions (x and y for
example), so a 2-dimensional transformation is performed.

S(k_x, k_y) = \iint P_{measured}(x, y) \, e^{-j(k_x x + k_y y)} \, dx \, dy    Eqn 8-2

where S(k_x, k_y) is the spatial transform of the measured pressure field to the
wavenumber (k_x and k_y) domain, resulting in the 2-D hologram pressure field.
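A minimal numerical sketch of Eqn 8-2 uses a 2-D FFT. The grid size, microphone spacing and wave direction below are arbitrary illustration values: a single plane wave crossing the grid shows up as one peak in the wavenumber domain.

```python
import numpy as np

nx, dx = 64, 0.05                        # 64 x 64 grid with 5 cm pitch (assumed)
x = np.arange(nx) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

kx_true = 2 * np.pi * 10 / (nx * dx)     # exactly 10 periods across the aperture
p_measured = np.exp(1j * kx_true * X)    # complex pressure image, wave along x

S = np.fft.fft2(p_measured)              # discrete version of Eqn 8-2
kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
peak = np.unravel_index(np.argmax(np.abs(S)), S.shape)
```

The peak lands in the bin whose wavenumber equals that of the incoming wave.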


[Figure: P_measured transformed from the spatial domain (x, y) to the
wavenumber domain (k_x, k_y)]

A measured pressure (sound) wave with a particular temporal frequency can
propagate in a number of directions, so the wavenumber vector (k) will have a
number of components. The appearance of these vectors depends on the plane
on which you are looking at them. The aim is to find the components of these
vectors in the 2 dimensions that define the plane, and to do this, projections of
the vectors onto the plane are made.

[Figure: pressure levels in the wavenumber domain (k_x, k_y)]

Summation of plane waves

The spatial Fourier transform implies that a measured pressure field can be
considered as a sum of sinusoidal functions.

Each of these sinusoidal functions can be understood as the result of cutting the
wavefronts of a plane wave by the measurement plane.


[Figure: a plane wave of wavelength λ = c/f cutting the measurement plane,
and the spatial periodicity of the resulting sinusoid]

There is a coincidence between the nodes of the sinusoidal function and the
wavefronts. In effect, decomposing the pressure field into a sum of sinusoidal
functions means decomposing the real acoustic wave into a sum of plane
waves.

Whatever the angle of incidence, the spatial periodicity must be greater than
the wavelength λ.

Propagating and evanescent waves

There are two kinds of plane waves:

V propagating waves, whose level remains the same as they propagate but
which undergo a phase shift.

V evanescent waves, whose level decreases as they propagate.

Propagating waves represent the sound field that is propagated away from the
near towards the far field. Evanescent waves describe the complex sound field
in the near field of the source.

To understand why we must take evanescent plane waves into account, let us
consider our decomposition of the pressure field into sinusoidal functions. If
the spatial periodicity of a sinusoidal function is shorter than the wavelength, it
cannot be the result of cutting a propagating plane wave by the measurement
plane:


[Figure: a sinusoid whose spatial periodicity is shorter than the wavelength
cannot arise from cutting a propagating plane wave by the measurement plane]

Whatever the direction of the propagating plane wave may be, there is no
possible coincidence between the nodes of the sinusoidal function and the
wavefronts. Therefore, this sinusoidal function must be understood as the
intersection between an evanescent wave (which can have a smaller spatial
periodicity than propagating waves) and the measurement plane.

A mathematical interpretation of the evanescent waves is based on the value of
k_z, which is the component perpendicular to the measurement directions in the
wavenumber domain.

[Figure: wavenumber components k_x, k_y in the measurement plane and k_z
perpendicular to it]

k_z can be determined from the wavenumber k_0 and the known values of k_x
and k_y from the transformation.

k_0 = \sqrt{k_x^2 + k_y^2 + k_z^2} = \frac{\omega}{c}    Eqn 8-3

k_z = \sqrt{\left(\frac{\omega}{c}\right)^2 - (k_x^2 + k_y^2)}

k_z is real when k_x^2 + k_y^2 \le (\omega/c)^2 (the spatial periodicity is greater than the
wavelength). This means that the waves lie in the circle defined by the radius
\omega/c in the wavenumber domain. k_z is imaginary outside of this region.


[Figure: wavenumber domain — inside the circle k_x^2 + k_y^2 \le k_0^2 of radius
k_0 = \omega/c, k_z is real and the waves are propagating; outside it, k_z is
imaginary and the waves are evanescent]

When k_z is imaginary, the propagation factor e^{-j k_z z} becomes a damped
exponential function, meaning that a propagated wave undergoes an amplitude
modification while the phase is not changed.

(Back) propagating to other planes

Pressure levels at other planes can be found using Rayleigh's integral equation
with Dirichlet's Green function:

P(r) = \iint P(r') \, G_d(r - r') \, dx \, dy    Eqn 8-4

where the Green function G_d can be thought of as the transformation function
to transform the sound pressure field from one plane to another.

We can use wave domain properties (k) to predict the pressure at a different
spatial position (z).

The practical computation of Rayleigh's equation is

for z > z':   S(k_x, k_y, z) = S(k_x, k_y, z') \, g_d(k_x, k_y, z - z')    Eqn 8-5

for 0 < z < z':   S(k_x, k_y, z) = S(k_x, k_y, z') \, \frac{1}{g_d(k_x, k_y, z' - z)}    Eqn 8-6

where z' is the measurement plane and z is the position of the required plane.
The Green function is given by

g_d = e^{-j k_z \Delta z}

where \Delta z is the distance between the two planes, and k_z can be found from
Equation 8-3.


The final step is to perform an inverse transformation back to the temporal
domain.
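The whole forward/backward chain (FFT to the wavenumber domain, Green function, inverse FFT) can be sketched as follows. The k_z sign convention is an assumption (conventions vary in the literature); the round-trip check below is insensitive to it:

```python
import numpy as np

def propagate_plane(p_meas, dx, freq, dz, c=343.0):
    """Sketch of Eqns 8-5/8-6: transform the pressure image to the
    wavenumber domain, apply the Green function g_d = exp(-j kz dz),
    and transform back. dz > 0 moves away from the measurement plane,
    dz < 0 back-propagates towards it."""
    n = p_meas.shape[0]
    s = np.fft.fft2(p_meas)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    arg = (2 * np.pi * freq / c) ** 2 - kx ** 2 - ky ** 2
    kz = np.where(arg >= 0.0,
                  np.sqrt(np.maximum(arg, 0.0)) + 0j,
                  1j * np.sqrt(np.maximum(-arg, 0.0)))
    return np.fft.ifft2(s * np.exp(-1j * kz * dz))

# Round trip: 0.1 m away from the measurement plane and back again
n, dx, freq = 32, 0.05, 5000.0
x = np.arange(n) * dx
X, _ = np.meshgrid(x, x, indexing="ij")
p0 = np.exp(1j * (2 * np.pi * 10 / (n * dx)) * X)   # propagating plane wave
p_round = propagate_plane(propagate_plane(p0, dx, freq, 0.1), dx, freq, -0.1)
```

For a purely propagating wave the phase shift applied on the way out is exactly undone on the way back, so `p_round` recovers the original image.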

The Wiener filter and the AdHoc window

As mentioned above, evanescent waves undergo a change in amplitude when
propagating. Propagating towards the source implies an amplification of the
signal that is a function of k_z. Evanescent waves that lie far away from the unit
circle have a large k_z, therefore their amplitude is amplified significantly when
propagating to the source. The contribution of these evanescent waves results
in an increase of spatial resolution. Note that the inclusion of evanescent waves
is only appropriate when propagating towards the source.

Propagating away from the source, the evanescent waves decrease so rapidly in
amplitude that their contribution to the spatial resolution becomes negligible.
However, the further away a wave is located from the circle, the less accurate
the amplitude estimate becomes, so that at a certain point noise is propagated
and the propagated image starts to blur.
[Figure: wavenumber domain — propagating waves inside the circle of radius
k_0, evanescent waves outside it]

When propagating towards the source, a Wiener filter can be used to include a
certain number of evanescent waves to improve the resolution. Taking a higher
number of waves into account may result in the amplification becoming
unstable. This depends on a parameter of the Wiener filter known as the Signal
to Noise Ratio (SNR). When the SNR value is greater than 15 dB, the
amplification will become unstable as the number of evanescent waves included
increases. Using a low SNR value (5 dB for example) means that the evanescent
waves are taken into account but are so attenuated that the improvement in
resolution is negligible. The default value of 15 dB provides the best
compromise in terms of resolution and amplification.
When the Wiener filter is used, the pressure image needs to be multiplied by a
two-dimensional window. As is the case with a single FFT, the observed
pressure must be `periodic' within the observed hologram. If this is not the
case, then truncation errors occur as with a single FFT. These truncation errors
manifest themselves as ghost sources at the borders of the observed area.

Two windows are used:


The rectangular window
This does not modify the pressure image. In the case of a rectangular window,
only propagating waves are included in the calculations, resulting in a
resolution equivalent to an intensity measurement.
The so-called Ad Hoc window
For a time signal, the FFT algorithm takes the time signal and duplicates it from
minus to plus infinity. If the amplitude of the measured time signal differs
between the start and the end of the window, a discontinuity occurs during this
duplication, introducing an error in the FFT algorithm. This can be corrected
using a Hanning window. Holography uses a double FFT, so the AdHoc
window is used, which is basically a two-dimensional Hanning window, thus
removing discontinuities in both the x and y directions.

The one-dimensional Ad Hoc window (W) would be:

W[I] = 1   when   \frac{N-1}{2}(1-\alpha) \le I \le \frac{N-1}{2}(1+\alpha)

W[I] = 0.5 + 0.5 \cos\left(\pi \, \frac{I - \frac{N-1}{2}(1-\alpha)}{\frac{N-1}{2}(1-\alpha)}\right)   when   I < \frac{N-1}{2}(1-\alpha)

W[I] = 0.5 + 0.5 \cos\left(\pi \, \frac{I - \frac{N-1}{2}(1+\alpha)}{\frac{N-1}{2}(1-\alpha)}\right)   when   I > \frac{N-1}{2}(1+\alpha)
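A sketch of this window (a Tukey-style tapered cosine, assuming the flat-region bounds given above; the 2-D window is then the outer product of two 1-D windows):

```python
import numpy as np

def adhoc_window_1d(n, alpha=0.5):
    """One-dimensional tapered-cosine window: flat over the central region
    bounded by (N-1)/2*(1-alpha) and (N-1)/2*(1+alpha), with cosine
    tapers falling to zero at both ends."""
    i = np.arange(n, dtype=float)
    lo = (n - 1) / 2.0 * (1.0 - alpha)
    hi = (n - 1) / 2.0 * (1.0 + alpha)
    w = np.ones(n)
    left, right = i < lo, i > hi
    w[left] = 0.5 + 0.5 * np.cos(np.pi * (i[left] - lo) / lo)
    w[right] = 0.5 + 0.5 * np.cos(np.pi * (i[right] - hi) / ((n - 1) - hi))
    return w

# the 2-D Ad Hoc window is the outer product of two 1-D windows
w1d = adhoc_window_1d(32)
w2d = np.outer(w1d, w1d)
```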


Derivation of other acoustic quantities

If we know how the plane waves propagate, we can calculate the pressure field
in any parallel plane, by adding the contributions of all plane waves. This will
be correct only if all acoustic sources are on the same side of both planes:
[Figure: calculation planes parallel to the measurement plane — the
calculation is correct only for planes on the side away from the source; a
plane beyond the source gives an incorrect result]

Knowing the pressure field on the parallel plane, it is possible to calculate the
particle velocity and eventually the intensity on this plane.

The particle velocity (V) will be known if the pressure gradient can be
determined - which is the case with acoustic holography, since the pressure
can be measured at r and (r + dr):

\nabla P(r) = f(P(r), P(r + dr))    Eqn 8-7

V = \frac{j}{\rho c k} \, \nabla P(r)

Once the pressure and the velocity are known, the intensity is just the product
of the two.

I = P \cdot V    Eqn 8-8



Theory and Background

Part III
Time data processing

Chapter 9
Statistical functions . . . . . . . . . . . . . . . . . . . . . 129

Chapter 10
Time frequency analysis . . . . . . . . . . . . . . . . . 139

Chapter 11
Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

Chapter 12
Digital filtering . . . . . . . . . . . . . . . . . . . . . . . . . 163

Chapter 13
Harmonic tracking . . . . . . . . . . . . . . . . . . . . . . 193

Chapter 14
Counting and histogramming . . . . . . . . . . . . . 203

128
Chapter 9

Statistical functions

Descriptive statistics provide information that characterizes sets of
data. This chapter gives a very brief summary of a variety of
statistical functions.

129

Minimum, maximum, range and extremum

These functions are shown in Figure 9-1 and described below.
[Figure: minimum, maximum, range and extremum of a time function, for
both real and absolute values]
Figure 9-1 Minimum, maximum, range and extreme of a function


Minimum
This is defined as the lowest value contained within the specified range of
values.

Maximum
This is defined as the highest value contained within the specified range of
values.

Range
The range is the difference between the minimum and maximum values.

Extremum
The extremum is the highest absolute value contained within the specified
range. It is equal to the maximum when the absolute value of the maximum is
greater than the absolute value of the minimum, and is equal to the minimum
value otherwise.

Sum
This is the summation of all the (N) values within the frame

Sum = \sum_{j=0}^{N-1} x_j    Eqn 9-1

Integration
This is the area under the curve of values, found by summing the averages of
successive pairs of values multiplied by the time increment (the trapezoid
rule):

area = \sum_{j=0}^{N-2} \frac{x_j + x_{j+1}}{2} \, \Delta t    Eqn 9-2

Root mean square

The root mean square, also called the effective value, is given by

RMS = \sqrt{\frac{1}{N} \sum_{j=0}^{N-1} x_j^2}    Eqn 9-3

where N is the number of samples. Its energy content is equivalent to that of
the original time series.

Crest factor
The crest factor is given by

\frac{|max - min|}{2 \, RMS}    Eqn 9-4

The crest factor provides a measure of the `spikiness' in the data. A sine
signal has a crest factor of 1.4. A random signal has a crest factor of about 3
or 4. A short spike will yield a high crest factor.
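Eqns 9-3 and 9-4 in a few lines, checking the quoted sine value of about 1.4:

```python
import numpy as np

def crest_factor(x):
    """Crest factor per Eqn 9-4: half the peak-to-peak range over the RMS."""
    rms = np.sqrt(np.mean(x ** 2))   # Eqn 9-3
    return abs(x.max() - x.min()) / (2.0 * rms)

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
cf_sine = crest_factor(np.sin(2 * np.pi * 5 * t))   # close to sqrt(2) = 1.414

spike = np.zeros(1000)
spike[100] = 1.0
cf_spike = crest_factor(spike)                      # much larger
```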

Mean
The mean of a set of data values (x) estimates the central value contained
within the set. It is defined as

\bar{x} = \frac{1}{N} \sum_{j=0}^{N-1} x_j    Eqn 9-5


where N is the number of samples.

The mean is not the only parameter which characterizes the central value of a
distribution. An alternative is the median.

The mean and the median both provide information on the average or central
value of the data. The choice of the most suitable one to use depends on the
skewness, described on page 134.

Median

The median of a probability function p(x) is the value for which larger and
smaller values of x are equally probable:

\int_{-\infty}^{x_{med}} p(x) \, dx = \int_{x_{med}}^{\infty} p(x) \, dx = \frac{1}{2}    Eqn 9-6

For discrete data, the median is defined as the middle value of the data samples
when they are arranged in increasing (or decreasing) order.

When N is odd, the median is

x_{med} = x_{(N+1)/2}    Eqn 9-7

Thus half the values are numerically greater than the median and half are
smaller.

When N is even, the median is estimated as the mean of the two central
values.

x_{med} = \frac{x_{N/2} + x_{N/2+1}}{2}    Eqn 9-8

The mean and median both provide information on the average or central
value of a set of data. Which is the most suitable one to use in a particular
circumstance depends on the skewness of the data. Skewness is illustrated in
Figure 9-2.


[Figure panels: (a) symmetrical data, no skew, mean = median; (b) positive
skewness, mean > median; (c) negative skewness, mean < median]

Figure 9-2 Symmetrical and skew data distributions.

Skewness refers to the shape of the distribution about the central value.
Perfectly symmetrical data has no skew. Data distributions where there is a
small number of extremely high values are said to exhibit positive skew. Those
with a few extremely low values show negative skew. The mean is more
influenced by such extreme values than the median, but can be used with
confidence if the skewness lies within the range -1 to 1. For the calculation of
skewness see Equation 9-13 below.

Percentiles

The median can also be expressed as the 50th percentile, since it represents the
value where 50% of all the values in the data set lie below it and 50% lie above
it. It is also possible to compute the 10th, 25th, 75th and 90th percentiles.

The nth percentile of a probability function p(x) is the value at which n% of the
values in the set are smaller than the percentile value. So 10% of the values are
smaller than the 10th percentile and 90% are larger.
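Percentiles are available directly in numerical libraries. Note that `np.percentile` interpolates between samples, so on small discrete sets the 10th or 90th percentile may differ slightly from the middle-value style of definition above:

```python
import numpy as np

data = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 7.0, 9.0])   # N = 7, odd

median = np.median(data)                 # middle value, the 50th percentile
p10, p25, p75, p90 = np.percentile(data, [10, 25, 75, 90])
```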

Variance and standard deviation

Further information on the range of values in a distribution can be obtained by
determining how much the data values vary from the mean value. The
variance is given by

var(x_0, ..., x_{N-1}) = \frac{1}{N-1} \sum_{j=0}^{N-1} (x_j - \bar{x})^2    Eqn 9-9

and as such can also be regarded as the second order moment of a distribution.

The standard deviation is defined as the square root of the variance:


\sigma(x_0, ..., x_{N-1}) = \sqrt{var(x_0, ..., x_{N-1})}    Eqn 9-10

The standard deviation is in the same units as the original measurement.

Mean absolute deviation

It is not uncommon, in real life, to be dealing with a distribution whose second
order moment does not exist (i.e. is infinite). In this case, the variance or
standard deviation is useless as a measure of the data width around its central
value. This can occur even when the width of the peak looks perfectly finite to
the eye.

A more robust estimator of the width is the average deviation or mean absolute
deviation, defined by:

ADev(x_0, ..., x_{N-1}) = \frac{1}{N} \sum_{j=0}^{N-1} |x_j - \bar{x}|    Eqn 9-11

Extreme deviation
The extreme deviation is given by

max(max - mean, \; mean - min)    Eqn 9-12

The extreme deviation is similar to the crest factor, except that it is referenced
to the mean and will therefore follow data which drifts away from zero.

Skewness
Skewness was illustrated in Figure 9-2. It characterizes the degree of
asymmetry of the distribution around its central value. It is defined as

skew(x_0, ..., x_{N-1}) = \frac{1}{N} \sum_{j=0}^{N-1} \left(\frac{x_j - \bar{x}}{\sigma}\right)^3    Eqn 9-13

The skewness is a unitless parameter known as the third order moment of a
distribution.


Even if the estimated skewness is other than zero, it does not necessarily mean
that the data is in fact skewed. You can have confidence in the skewness only
when the estimated skewness is larger than the standard deviation of this
estimated parameter (Eqn 9-13). For the idealized case of a normal (Gaussian)
distribution, the standard deviation of the estimated skewness is approximately
\sqrt{6/N}. In real life it is good practice to place confidence in skewness only
when the estimated value is several times as large as this.

Kurtosis
One further characteristic of a distribution can be obtained from the kurtosis of
a function. This is also a unitless parameter that measures the relative
sharpness or flatness of a distribution relative to a normal or Gaussian one.
This is illustrated in Figure 9-3.


Figure 9-3 Distributions with positive and negative kurtosis compared to a normal
distribution.
The kurtosis is defined as

kurt(x_0, ..., x_{N-1}) = \left[\frac{1}{N} \sum_{j=0}^{N-1} \left(\frac{x_j - \bar{x}}{\sigma}\right)^4\right] - 3    Eqn 9-14

The term -3 is necessary so that a Gaussian distribution has a kurtosis of zero.


The kurtosis is the fourth order moment of a distribution and is a unitless paĆ
rameter. A positive value indicates that the distribution has longer tails than
the Gaussian distribution, while a negative value indicates that the distribution
has shorter tails.

The standard deviation of (Eqn 9-14) is 24N, for the idealized case of a norĆ
mal (Gaussian) distribution. However, the kurtosis depends on such a high
moment, that there are many real-life distributions for which the standard
deviation of equation 9-14 is effectively infinite.
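Eqns 9-13 and 9-14 can be sketched as follows (using the population 1/N standard deviation; for large N the difference from the 1/(N-1) form of Eqn 9-10 is negligible), together with the \sqrt{6/N} and \sqrt{24/N} confidence scales:

```python
import numpy as np

def skewness(x):
    """Eqn 9-13 with the population (1/N) standard deviation."""
    s = np.std(x)
    return np.mean(((x - np.mean(x)) / s) ** 3)

def kurtosis(x):
    """Eqn 9-14: fourth-order moment minus 3, zero for a Gaussian."""
    s = np.std(x)
    return np.mean(((x - np.mean(x)) / s) ** 4) - 3.0

rng = np.random.default_rng(0)
g = rng.standard_normal(100_000)            # Gaussian: both near zero
skew_g, kurt_g = skewness(g), kurtosis(g)
expo = rng.exponential(size=100_000)        # strongly positively skewed
```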


Note! Higher order moments (skewness and kurtosis) are often less robust than
lower order moments, which are based on linear sums. (It is possible that the
calculation of the skewness or kurtosis generates an overflow.) They must be
used with caution.

Markov regression
This function provides you with a measure of the likelihood of one data value
within a set being similar to another.

It is based on the circular autocorrelation R(.) of a set of data. This calculates
the correlation between one particular value and a value displaced by a certain
lag. The circular correlation takes the last shifted value and wraps it around to
the start.

The circular correlation for a lag of 1 data sample is given by

R(1) = \sum_{j=0}^{N-2} x_j x_{j+1} + x_0 x_{N-1}    Eqn 9-15
The circular correlation for a lag of 0 is given by

R(0) = \sum_{j=0}^{N-1} x_j^2    Eqn 9-16

The Markov regression coefficient is the ratio of these two quantities

\text{Markov regression coefficient} = \frac{R(1)}{R(0)}    Eqn 9-17


This function can therefore take values between 0 (very low correlation) and 1
(high similarity). It approaches 1 for a narrow-band or filtered signal and 0 for
broadband signals. It therefore provides an indication of how much a
broadband signal has been filtered.
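Eqns 9-15 to 9-17 in a few lines, illustrating the broadband-versus-filtered behavior described above:

```python
import numpy as np

def markov_regression(x):
    """Ratio R(1)/R(0) of the circular autocorrelation, Eqns 9-15 to 9-17."""
    r1 = np.sum(x[:-1] * x[1:]) + x[0] * x[-1]   # Eqn 9-15, wrapped term
    r0 = np.sum(x ** 2)                          # Eqn 9-16
    return r1 / r0                               # Eqn 9-17

rng = np.random.default_rng(1)
noise = rng.standard_normal(4096)
filtered = np.convolve(noise, np.ones(32) / 32.0, mode="same")

m_broad = markov_regression(noise)      # near 0 for broadband noise
m_narrow = markov_regression(filtered)  # approaches 1 after narrow-band filtering
```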



Chapter 10

Time frequency analysis

The objective of a time-frequency analysis is to examine the spectral
(frequency) contents of a signal when this is varying in time. This
chapter provides a very brief account of the background theory
related to this type of analysis.
Introduction to the theory

Linear representations

Quadratic representations

139

10.1 Introduction
A great many physical signals are non-stationary. Fourier analysis establishes
a one-to-one relationship between the time and the frequency domain, but
provides no time localization of a signal's frequency components. Whilst an
overall representation of all frequencies that appeared during the observation
period is presented, there is no indication as to exactly at what time which
frequencies were present.

Time-frequency analysis methods describe a signal jointly in terms of both time
and frequency. The aim is to find a distribution that determines the portion of
the signal's energy which lies in a particular time and/or frequency range. In
addition these distributions might or might not satisfy some other interesting
mathematical properties, such as the `marginal equations'.
The instantaneous power of a signal at time t is given by

|s(t)|² = Energy or intensity per unit time at time t

The intensity per unit frequency is given by the squared modulus of the Fourier transform S(ω)

|S(ω)|² = Energy or intensity per unit frequency at frequency ω

The joint function P(ω, t) should represent the energy per unit time and per unit frequency

P(ω, t) = Energy or intensity per unit frequency (at frequency ω) per unit time (at time t)

Ideally, summing this energy distribution over all frequencies should give the instantaneous power

∫ P(ω, t) dω = |s(t)|²        Eqn 10-1

and summing over all time should give the energy density spectrum.

∫ P(ω, t) dt = |S(ω)|²        Eqn 10-2


Equations 10-1 and 10-2 are known as the 'marginal' equations. In addition the total energy, E,

E = ∫∫ P(ω, t) dt dω        Eqn 10-3

should be equal to the total energy in the signal. There are a number of distributions which satisfy equations 10-1 and 10-2 but which demonstrate very dissimilar behavior.

In general there are two main classes of time-frequency analysis methods:

V linear techniques, discussed in section 10.2

V quadratic techniques, discussed in section 10.3.


10.2 Linear time–frequency representations

These are representations that satisfy the linearity principle. If x₁ and x₂ are signals, then T(t, f) is a linear time-frequency representation if:

x₁(t) → T_x₁(t, f)

x₂(t) → T_x₂(t, f)

x(t) = c₁x₁(t) + c₂x₂(t) → T_x(t, f) = c₁T_x₁(t, f) + c₂T_x₂(t, f)

Two linear techniques are discussed:

V The Short Time Fourier Transform

V Wavelet analysis

The Short Time Fourier Transform (STFT)

A standard method used to investigate time-varying signals is the so-called Short Time Fourier Transform (STFT). This involves selecting a relatively narrow observation period, applying a time window and then computing the frequencies in that range. The observation window then slides along the entire time signal to obtain a series of spectra, shown as vertical bands in Figure 10-1.

(figure: a sliding time window g(t) moving along the time signal, with the resulting spectra shown as vertical bands in the time-frequency plane)
Figure 10-1 The Short Time Fourier Transform

For a time signal s(t) multiplied by a window function g(t), the Short Time Fourier Transform located at time τ is given by

STFT(ω, τ) = (1/√(2π)) ∫ e^(−jωt) s(t) g*(t − τ) dt        Eqn 10-4

This is a useful technique if it is possible to select the observation period so that the signal can be regarded as being stationary within that period. There is, however, a whole range of signals where the frequency contents change so rapidly that the required time period would be unacceptably small.
This technique suffers from a further disadvantage in that the same time window is used throughout the analysis, and it is this that determines the frequency resolution (Δf = 1/T). This fixed relationship means that there has to be a trade-off between frequency resolution and time resolution. So, if you have a signal composed of short bursts interspersed with long quasi-stationary periods, then each type of signal component can be analyzed with either good time resolution or good frequency resolution, but not both.
An alternative view of the STFT is gained if it is expressed in terms of the Fourier transforms of the signal, S(ω), and the window function, G(ω). Equation 10-4 then becomes

STFT(ω, τ) = (1/√(2π)) ∫ e^(jω′τ) S(ω′) G*(ω′ − ω) dω′        Eqn 10-5

By analogy with the previous discussion this reflects the behavior around the frequency ω "for all times", as illustrated by the horizontal bands in Figure 10-1. These bands can be regarded as a bank of bandpass filters whose impulse responses correspond to the window function.
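The sliding-window procedure described above can be sketched directly in NumPy. This is a minimal illustration rather than the analyzer's actual implementation; the window type, length and hop size are arbitrary choices here.

```python
import numpy as np

def stft(s, win_len, hop):
    """Short Time Fourier Transform sketch: slide a Hanning window
    along s and Fourier transform each frame (cf. Eqn 10-4)."""
    g = np.hanning(win_len)
    frames = []
    for start in range(0, len(s) - win_len + 1, hop):
        frames.append(np.fft.rfft(s[start:start + win_len] * g))
    return np.array(frames)   # rows: time frames, columns: frequency bins

# A 125 Hz tone sampled at 1 kHz: every frame peaks at bin 16,
# i.e. at 16 * fs / win_len = 125 Hz.
fs, win_len = 1000, 128
t = np.arange(4096) / fs
X = stft(np.sin(2 * np.pi * 125 * t), win_len, hop=64)
peak_bins = np.abs(X).argmax(axis=1)
print(peak_bins[0] * fs / win_len)   # -> 125.0
```

Note the trade-off discussed above: a longer win_len sharpens the frequency resolution of each frame but smears events in time.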

Wavelet analysis
Wavelet analysis offers an alternative for the analysis of non-stationary signals where it is difficult to find the right compromise between the time and frequency resolution of the STFT analysis window.
In effect, the Fourier transform decomposes the signal using a set of basis functions, which in this case are sine waves. The Wavelet transform also decomposes the signal, but it uses another set of basis functions, called wavelets. These basis functions are concentrated in time, which results in a higher time localization of the signal's energy. One prototype basis function is defined, and a scaling factor is then used to dilate or contract this prototype function to arrive at the series of basis functions needed for the analysis.


This brings us to the definition of the Continuous Wavelet transform. If h(t) is the prototype function (basic wavelet) localized at time t₀ and frequency ω₀, then the scaled versions (wavelets) are given by

h_a(t) = (1/√|a|) h(t/a)        Eqn 10-6

where a is the scale factor given by ω₀/ω.

The Continuous Wavelet Transform (CWT) is given by

CWT(a, τ) = (1/√|a|) ∫ s(t) h*((t − τ)/a) dt        Eqn 10-7

where τ is the time localization.

A disadvantage of the STFT is that it uses a single analysis window of constant width. The result is that there is a fixed relationship between the frequency and time resolutions: improving one can only be achieved at the cost of the other. Mapping this onto the time/frequency plane results in a fixed grid, as shown in Figure 10-2(a).
The use of the scaling factor to dilate or contract the basic wavelet results in an analysis window that is narrow at high frequencies and wide at low frequencies. Figure 10-1 likened the STFT to a series of constant width bandpass filters. Using this concept again, the wavelet transform can be considered as a bank of constant relative bandwidth filters

Δf / f = c

where c is a constant. This is illustrated in Figure 10-2(b): by allowing both the frequency and time resolutions to vary, a multi-resolution analysis is possible.


(figure: (a) the STFT tiles the time/frequency plane with a fixed grid; (b) wavelet analysis uses tiles that are narrow in time at high frequencies and narrow in frequency at low frequencies)
Figure 10-2 Mapping of the time/frequency plane

This is in fact a very natural way to analyze a signal. Low frequencies are phenomena that change slowly with time, so only a low resolution is required in that domain. In this situation, good time resolution can be sacrificed for high frequency resolution. High frequency phenomena vary rapidly with time, which then becomes the important dimension, so under these conditions wavelet analysis increases the time resolution at the cost of frequency resolution. This type of analysis is also very closely related to the human hearing process, since the human ear seems to analyze sounds in terms of octave bands.
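The scaling of Eqn 10-6 can be illustrated numerically: the 1/√|a| factor ensures that every dilated wavelet carries the same energy while being stretched in time (and so narrowed in frequency). The Morlet-style prototype below is just one illustrative choice, not the only possible wavelet.

```python
import numpy as np

def scaled_wavelet(h, t, a):
    """Dilate or contract a prototype wavelet: h_a(t) = (1/sqrt|a|) h(t/a), Eqn 10-6."""
    return (1.0 / np.sqrt(abs(a))) * h(t / a)

# Morlet-style prototype: a Gaussian-enveloped oscillation.
h = lambda t: np.exp(-t**2 / 2) * np.cos(5 * t)

t = np.linspace(-50, 50, 100001)
dt = t[1] - t[0]
e1 = np.sum(scaled_wavelet(h, t, 1.0) ** 2) * dt   # energy at scale a = 1
e4 = np.sum(scaled_wavelet(h, t, 4.0) ** 2) * dt   # energy at scale a = 4
print(e1, e4)   # equal: the normalization preserves the wavelet energy
```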


10.3 Quadratic time–frequency representations


Whilst linearity is a desirable property, in many cases it is more interesting to interpret a time-frequency representation as a time-frequency energy distribution, which is a quadratic signal representation. This type of time-frequency representation can exhibit many desirable mathematical properties, but it is important to investigate the consequences of the bilinearity principle.

x(t) → T_x(t, f)

y(t) → T_y(t, f)

z(t) = c₁x(t) + c₂y(t) → T_z(t, f) = |c₁|² T_x(t, f) + |c₂|² T_y(t, f) + c₁c₂* T_xy(t, f) + c₂c₁* T_yx(t, f)        Eqn 10-8

The first two terms in this result can be seen as "signal terms", and the last two as "interference terms". The interference terms are necessary to satisfy mathematically desirable properties like the marginal equations, but they often make interpretation of the results difficult.
The interference terms can be recognized by their oscillatory nature, and different so-called "smoothing" techniques can be used to reduce their effect. This, however, leads to a new trade-off: that of a reduction of interference terms against time-frequency localization. The spectral smearing effect of the smoothing windows disperses the signal's energy in the time-frequency plane, thereby reducing the time-frequency localization of all signal components.
Two examples of quadratic time-frequency representations are the spectrogram and the scalogram,

spectrogram = |STFT|²
scalogram = |WT|²

which are the energy counterparts of the Short Time Fourier Transform (STFT) and the Wavelet Transform (WT) respectively. The interference terms for these representations only exist where different signal components overlap. Hence if the signal components are sufficiently far apart in the time-frequency plane, the interference terms will be essentially zero. While neither of these representations satisfies the marginal equations, this is not of great concern for a qualitative energy localization assessment.
For an adequate interpretation of time-frequency analysis results, it is often good practice to use several techniques (STFT or WT together with a quadratic method), which makes it possible to distinguish the "signal terms" from the "interference terms".


The Wigner–Ville distribution


The Wigner-Ville distribution is

W(ω, t) = (1/2π) ∫ s*(t − τ/2) e^(−jωτ) s(t + τ/2) dτ        Eqn 10-9

where τ is the local time. In terms of the spectrum it is

W(ω, t) = (1/2π) ∫ S*(ω + θ/2) e^(−jθt) S(ω − θ/2) dθ        Eqn 10-10

where θ is the local frequency.


This distribution satisfies the marginals and is real. In addition, time and frequency shifts in the signal cause corresponding shifts in the distribution.
Many of its characteristics can be understood by considering the fact that in equation 10-9, at any point t, a section of data prior to this point is multiplied with a section following this point and the results summed. This can be visualized by imagining that the segment to the left is folded over on top of the segment to the right. Wherever there is an overlap there will be a product and therefore a value for the distribution.
The diagram below demonstrates that for a signal only starting at time t_start, all points to its left have value zero, resulting in a distribution with the same value. The same applies at the end point t_end.

(figure: a finite-duration signal between t_start and t_end)

Thus one characteristic of the Wigner-Ville distribution is that for a signal of finite duration the distribution is zero before the start and beyond the end. The same can be said of the frequency version, which means that for a band limited signal the Wigner-Ville distribution will be zero outside of that band.
The same manoeuvre can be used to see why the reverse is true if at some point the signal level drops to zero. Consider the situation illustrated below.


(figure: a signal that momentarily drops to zero at time t₀)

At a point t₀ where the signal itself is zero, multiplying the section to the left by the section to the right results in a non-zero value. In general it can be said that the Wigner distribution is not zero when the signal is. This unwelcome characteristic makes it difficult to interpret, especially when analyzing signals with many components.
The same mechanism accounts for noisiness that can be seen in the distribution in places where it is not present in the signal, as shown below.

(figure: a signal with a noise burst between times t₁ and t₂)

When evaluating the distribution at point t₁ the overlapping sections will not include the noise, but even at point t₂, where there is no noise in the signal, it will already influence the distribution. Noise will therefore be spread over a wider period than occurred in the actual signal.
The same reasoning can be used to explain the appearance of the interference terms along the frequency axis. This is especially so when a signal contains multiple frequency components at the same moment in time, which will result in interference terms at a frequency midway between the frequencies of the different components. As mentioned above, these terms can easily be recognized by their oscillatory nature, and smoothing techniques can reduce their effect. Some possible smoothing techniques are discussed below.
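The midway interference term can be demonstrated with a crude discrete sketch of Eqn 10-9 (no 1/2π scaling, rectangular lag window; a hypothetical illustration rather than a production implementation). For an analytic signal with two tones at bins 4 and 12 of a 64-point record, a time slice of the distribution shows a strong term at the midway bin 8:

```python
import numpy as np

def wigner_slice(s, n, L):
    """One time slice of a discrete Wigner-Ville sketch: the Fourier
    transform of s(n + tau) * conj(s(n - tau)) over the lag tau."""
    tau = np.arange(-L, L)
    kernel = s[n + tau] * np.conj(s[n - tau])
    return np.fft.fft(np.fft.ifftshift(kernel))

N, L = 64, 16
idx = np.arange(N)
s = np.exp(2j * np.pi * 4 * idx / N) + np.exp(2j * np.pi * 12 * idx / N)

W = wigner_slice(s, n=32, L=L)
# Auto terms appear at bins 4 and 12; the oscillatory interference term
# appears midway at bin 8 and, at this particular instant, dominates.
print(int(np.argmax(np.abs(W))))   # -> 8
```

Because the interference term oscillates along the time axis, it averages away under smoothing while the auto terms at bins 4 and 12 survive.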

Generalization
A generalization of the Wigner-Ville distribution leads to a whole class of time-frequency representations, whose main desirable mathematical property is their invariance under operations like time shift, frequency shift or time/frequency scaling. This means that a shift in time or frequency of the signal leads to an equivalent shift of the time-frequency representation of that signal, and that scaling the signal leads to a corresponding scaling of the time-frequency representation.


This more general class of time/frequency representations is defined as follows

T_x(t, f) = ∫∫ Ψ_T(t − t′, f − f′) W_x(t′, f′) dt′ df′        Eqn 10-11

where W_x(t′, f′) is the Wigner-Ville distribution of the signal x(t), and where Ψ_T is the "kernel function". It is the choice of this kernel function that determines the basic properties of each specific time-frequency representation derived from this general definition. The kernel function can also be seen as a smoothing function applied to the Wigner-Ville distribution.

Typical examples of techniques that can be defined in this framework are

Spectrogram
where the kernel = Wigner distribution of the analysis window.

Smoothed Pseudo-Wigner Distribution (SPWD)
where the kernel = separable smoothing function with independent smoothing spread in the time and frequency domains.

Pseudo-Wigner Distribution (PWD)
the same as the SPWD, but with no smoothing along the frequency axis. This can also be considered as a "short-time Wigner distribution".

Choi-Williams Distribution (CWD)
where the kernel = exponential smoothing function.

The class of shift-invariant representations (time and frequency shifting) is also called Cohen's class; examples of representations belonging to that class are the spectrogram, the Wigner-Ville distribution, the PWD and the SPWD.
The class of time shift/time scale invariant representations is also known as the Affine class; examples of representations belonging to this class are the scalogram, the Wigner-Ville distribution and the CWD.


10.4 References

Books

Time-frequency analysis
Leon Cohen, Prentice Hall, 1995, 299 pp., ISBN 0-13-594532-1

Papers

Linear and Quadratic Time-frequency Signal Representations
F. Hlawatsch, G.F. Boudreaux-Bartels (IEEE SP Magazine, April 1992)

Time-frequency distributions - A review
Leon Cohen (Proc. of IEEE, July 1989)

Wavelets and signal processing
O. Rioul, M. Vetterli (IEEE SP Magazine, October 1991)

Time-frequency analysis applied to door slam sound quality problems
H. Van der Auweraer, K. Wyckaert, W. Hendrickx (Journal de physique IV, May 1994)



Chapter 11

Resampling

This chapter is concerned with both fixed resampling and adaptive (or synchronous) resampling. It discusses the general principles involved in both of these processes and contains a reading list for further information.
Fixed resampling

Adaptive resampling


11.1 Fixed resampling

The process of converting a signal that has been sampled at a particular rate to
one that is sampled at a different rate is known as resampling.

Resampling may be necessary for a number of reasons. A DAT recorder, for example, samples a signal at a rate of 48000 samples per second. If the signal has a bandwidth of only 200 Hz, then 500 samples a second would be adequate, so far more data exists than is needed to describe the signal. In this situation the sample rate can be decreased, a process which is referred to as decimation or downsampling.

On the other hand, while a critically sampled signal may contain all the information needed to adequately describe the frequency contents of the signal, it may not look good, or be easy to interpret, in the time domain.

Increasing the sampling rate will generate a signal which has identical spectral contents but a much better defined time waveform. When the resampling involves an increase in the sampling rate it is referred to as interpolation or upsampling.

A further instance where a specific sampling rate is required is when a signal must be replayed through a D/A convertor. The signal may have to possess the very specific sampling rate supported by the D/A convertor.

This section considers the theoretical background to the process of digital resampling and the factors that must be taken into account to achieve the required accuracy of results. It should be noted, however, that the contents of this document are by no means a comprehensive treatment of this subject. For a more thorough understanding you should refer to the reading list given at the end of the chapter, and in particular to references [3] and [4].


11.1.1 Integer downsampling


Integer downsampling by a factor n effectively means retaining every nth point of the source data. However, it is necessary to take measures to avoid aliasing problems when doing this. The example below shows the effects of downsampling by a factor of 13, when the original number of samples per period was 16. Sampling a signal at a rate lower than 2 points per period of the highest frequency in the signal will give rise to erroneous results.

To avoid aliasing due to the resampling process, it is necessary to ensure that the signal does not contain frequencies higher than can be described by the reduced sample rate. The use of a low pass filter will achieve this. To illustrate this, consider the example of downsampling by a factor of 5 described below.

The original signal was sampled at 1 kHz, implying a bandwidth of 500 Hz. It contains 2 spectral components, one at 8 Hz and another at 325 Hz.

(figure: spectrum with components at 8 Hz and 325 Hz within the 500 Hz bandwidth)

Downsampling by a factor of 5 will reduce the sample rate to 200 Hz and the bandwidth to 100 Hz. It is first necessary to apply a low pass filter to limit the spectral content of the data to the 100 Hz bandwidth. This will remove the higher frequency component, leaving a time domain signal containing 125 points per period for the remaining 8 Hz component.

(figure: the low-pass filtered spectrum, with only the 8 Hz component left inside the 100 Hz bandwidth)


The downsampling by a factor of 5 is performed by taking every 5th point.

(figure: the downsampled spectrum, with the 8 Hz component inside the new 100 Hz bandwidth)

Not applying the filter would result in the following: the 325 Hz component folds to 75 Hz in the 100 Hz bandwidth, and as a consequence the result is heavily distorted.

(figure: the aliased spectrum, with the 325 Hz component folded to 75 Hz)
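The filter-then-decimate sequence of this example can be sketched with an idealized FFT low-pass filter (a real implementation would use a properly designed FIR filter, as discussed in Chapter 12):

```python
import numpy as np

def fft_lowpass(x, fs, cutoff):
    """Idealized low-pass filter: zero every FFT bin above the cutoff.
    (A sketch; in practice a properly designed FIR filter is used.)"""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[f > cutoff] = 0.0
    return np.fft.irfft(X, n=len(x))

# 1 kHz signal with components at 8 Hz and 325 Hz, downsampled by 5
# to a 200 Hz sample rate (100 Hz bandwidth).
fs, n = 1000, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 325 * t)

y = fft_lowpass(x, fs, cutoff=100.0)[::5]   # filter first, then keep every 5th point
spectrum = np.abs(np.fft.rfft(y)) / (len(y) / 2)
print(spectrum.argmax())   # -> 8 : only the 8 Hz component survives
```

Taking x[::5] without the filter would instead show the 325 Hz component folded to bin 75, as in the distorted spectrum above.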

11.1.2 Integer upsampling


Integer upsampling by a factor n involves inserting (n−1) data points between the original measured ones. Normally the inserted points will have a value of zero, and it is then necessary to apply an appropriate filter to remove the harmonics introduced by the process.

The trace shown here is upsampled by a factor of 4, which means that 3 zeros are added between each of the existing data points. The result is that in the time domain the signal looks highly distorted.

It can be proven that the spectrum of the upsampled signal consists of the origiĆ
nal one plus a mirrored version of it at all higher frequencies.


(figure: the replicated spectrum of the zero-stuffed signal, with a low pass filter retaining only the original band)

The 'distortion' introduced by inserting zeroes can therefore be filtered out by a properly designed low pass filter, which will retain just the spectral contents of the original signal bandwidth.

The improvement in the time domain representation of a signal by upsampling is illustrated below for the case of a critically sampled sine wave. The sample rate is just greater than 2 points per period, so the Nyquist criterion is satisfied. Although the time domain representation is poor, there is enough information for an accurate representation in the frequency domain. Upsampling by a factor of 10 will make the time domain description of the signal more accurate.

The resulting signal has identical spectral contents to the original. The increased number of points per cycle provides a much improved time domain description of the waveform.
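The zero-insertion scheme can be sketched for an exactly periodic sine, where an ideal FFT low-pass filter recovers the dense waveform perfectly (assumptions: a periodic record and an ideal filter; real data needs a designed FIR filter):

```python
import numpy as np

# Upsample a critically sampled sine by 4: insert 3 zeros between
# samples, then low-pass away the mirrored spectral images.
n_in, factor = 16, 4
x = np.sin(2 * np.pi * 3 * np.arange(n_in) / n_in)   # 3 cycles in 16 samples

up = np.zeros(n_in * factor)
up[::factor] = x                        # zero insertion

X = np.fft.fft(up)
X[n_in // 2 + 1 : -(n_in // 2)] = 0.0   # keep only the original +/- Nyquist band
y = np.real(np.fft.ifft(X)) * factor    # gain correction for the inserted zeros

# y is the same sine, now described by 4 times as many points per cycle.
print(np.allclose(y, np.sin(2 * np.pi * 3 * np.arange(64) / 64)))   # -> True
```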


11.1.3 Fractional ratios


Resampling by a non-integer ratio can be realized by a combination of upsampling and downsampling. So downsampling by a factor of 2.5 can be achieved by first upsampling by a factor of 2 and then downsampling by a factor of 5. The order in which these two processes are done is very important if the original signal content of interest is to be preserved.
Consider a signal sampled at 2 kHz which contains components up to 300 Hz. A new sampling rate of 800 Hz is required, representing a downsampling by a factor of 2.5.

If the signal is first downsampled by a factor of 5, a filter at 200 Hz is required. As a result, all the signal content between 200 and 300 Hz will be eliminated, and the subsequent upsampling by a factor of 2 will of course not be capable of restoring this.

The correct procedure is to first upsample to 4 kHz (a bandwidth of 2 kHz). A lowpass filter set at 1 kHz will retain the original spectral content. The next stage is to downsample by a factor of 5 with a low pass filter at 400 Hz, thus maintaining the original frequencies of up to 300 Hz.

When a non-integer resampling factor is required, the software determines the optimum ratios and the sequence of resample operations required to achieve the desired sample rate conversion.
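The importance of the processing order can be demonstrated with idealized FFT-based helpers (up, down and band are our own sketch functions, not the software's routines; they assume periodic records and ideal filters):

```python
import numpy as np

def band(x, fs, f):
    """Amplitude of the component at frequency f (assumes an integer
    number of cycles in the record)."""
    return (np.abs(np.fft.rfft(x)) / (len(x) / 2))[int(round(f * len(x) / fs))]

def up(x, k):
    """Zero-insert by k, then ideal low-pass keeping the original band."""
    y = np.zeros(len(x) * k)
    y[::k] = x
    Y = np.fft.fft(y)
    Y[len(x) // 2 + 1 : -(len(x) // 2)] = 0.0
    return np.real(np.fft.ifft(Y)) * k

def down(x, fs, k):
    """Ideal low-pass at the new Nyquist, then keep every kth point."""
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(len(x), 1.0 / fs) >= fs / (2 * k)] = 0.0
    return np.fft.irfft(X, n=len(x))[::k]

fs = 2000
x = np.sin(2 * np.pi * 300 * np.arange(2000) / fs)   # 300 Hz content, 1 s record

good = down(up(x, 2), 2 * fs, 5)   # upsample to 4 kHz first, then down to 800 Hz
bad = up(down(x, fs, 5), 2)        # downsample first: the 200 Hz filter kills 300 Hz
print(band(good, 800, 300), band(bad, 800, 300))   # ~1.0 versus ~0.0
```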


11.1.4 Arbitrary ratios


Some resampling requirements cannot be easily realized by a simple combination of an upsampling and a downsampling. For some ratios, even though they can be expressed as a fraction, an extremely high intermediate upsampling ratio is needed. The process imposes a heavy computational load and the result is numerically not well conditioned.

Consider for instance a measurement at 8192 samples per second that is to be resampled to 8000 Hz for replay on digital audio hardware. This can theoretically be realized by upsampling by a factor of 125 followed by downsampling by a factor of 128, but this is computationally extremely costly.

In this situation another strategy is used. Consider the signal shown below, which was originally sampled at a rate indicated by the white circles. The required sample rate is indicated by the filled circles. The new sample rate is not an integer ratio of the original.

The first stage is to upsample by a relatively high factor (a). This factor is known as the 'Upsampling factor before interpolation' parameter and the default value used is 15. The resulting sample rate is indicated by the squares. The second stage then involves performing a linear interpolation on the upsampled signal to arrive at a new sample rate that is an integer multiple (b) of the target frequency. This introduces an error ε which will be small as long as the source trace is upsampled at a high enough ratio. The maximum distortion that can occur with the upsampling factor is indicated by the software.

This error is indicated in the form of the SDR (Signal to Distortion Ratio). It depends on the 'Upsampling factor before interpolation' parameter and the filter's cut-off frequency as shown below:

SDR = 10 log10( 80 × (100 R / <cut-off in percent>) )

where R = Upsampling factor before interpolation
and cut-off in percent = the cut-off frequency as a percentage of the Nyquist frequency.
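The SDR formula can be evaluated directly; the 80% cut-off used below is an assumed value chosen purely for illustration:

```python
import math

def sdr_db(r, cutoff_percent):
    """Signal to Distortion Ratio from the formula above:
    SDR = 10 * log10(80 * (100 * R / cutoff_percent))."""
    return 10.0 * math.log10(80.0 * (100.0 * r / cutoff_percent))

# Default 'Upsampling factor before interpolation' R = 15 with an
# assumed cut-off at 80% of the Nyquist frequency:
print(round(sdr_db(15, 80.0), 2))   # -> 31.76 dB
```

Raising R improves the SDR, at the cost of the heavier intermediate upsampling discussed above.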


The final stage in this process is to downsample by this integer factor (b) to the required rate. It is also possible that the downsampling is achieved directly by the interpolation process itself, as long as the downsampling rate being performed is lower than the preceding upsampling rate (a).


11.2 Adaptive resampling

Adaptive or synchronous resampling enables you to resample a signal such that its characteristics can be examined in a different domain. A well known mechanical application is the extraction of 'order-related' phenomena of engine vibrations based on the measurement of the rotation speed of one of its components. Phenomena which are very difficult to analyze or interpret in one domain become clear and obvious in another.

For synchronous averaging, for example, it is essential that repetitive phenomena occur at the very same instant in the different signal sections that are averaged. Using the synchronous resampling technique, the data can be transformed into that particular domain in which the phenomena are indeed repetitive.

In the same way as the Fourier transform presents the contents of time domain data in the frequency domain, it converts angle domain data to the order domain. Just as something that happens twice every second has a frequency of 2 Hz, something that occurs twice every cycle is related to order 2. Consider the example of measurements taken on an engine at a supposedly constant rpm. Even very slight variations in rpm will result in a frequency domain representation where the related spectral components are sharp for the low orders, but become smeared out at higher frequencies. The small rpm variations lead to leakage errors in the frequency domain.

For applications where there is a need to investigate higher order phenomena (such as gear box analysis, for example), such smearing makes it very difficult to discriminate order components from resonance components. Transforming such data to the order domain will result in all orders being clearly shown, but any resonance phenomenon present will be smeared out. The frequency and order domain representations are therefore complementary to one another, and useful information can be obtained in the domain most suited for analysis. The adaptive resampling facility enables you to convert from one domain to another.

Implementation example

The example below illustrates the procedure involved in converting from the time domain to the angle domain. The principle can be used to convert between any two domains.

Your original time signal must be measured in conjunction with a tracking signal. This is most likely to be a tacho signal: a pulse train that can be converted to an rpm/time function and then integrated to obtain an angle/time function.


(figure: the angle/time function, with angle on the ordinate and time on the abscissa)

In the case of a transformation from the time domain into the angle domain, the required (constant) resolution in the angle domain (Δθ) defines the time intervals at which data samples of the vibration measurement should be available.

(figure: the angle/time curve with the measured points equally spaced in time, and the required points equally spaced in angle at intervals of Δθ, corresponding to times t1, t2, t3, ...)

The most appropriate resolution (Δθ) is based on the minimum slew rate which must be coped with.

When sampling in the time domain, the time increment is the reciprocal of the sampling frequency

Δt = 1/Fs

So according to the Nyquist criterion, information is available up to Fs/2.


Adaptive resampling conforms to the same rules: if you do not have enough samples then information is lost, while if you use too many samples then the processing effort is unnecessarily increased.

It is necessary to determine the angle increment which corresponds to the required Fs in the time/frequency domain. Adaptive resampling uses a varying time increment if the angle/time relationship is non-linear. Data loss will occur first at the lowest rpm values (slew rate), and the aim is to determine the threshold angle Δθ between over- and undersampling

Δθ = (dθ/dt)_min / Fs = rpm_min / Fs

So for example if the minimum slew rate (dθ/dt) is 500 rpm and the sample frequency is 2000 Hz, then the threshold angle will be


Δθ = (500/2000) × (360/60) = 1.5 degrees

Using an angle increment less than this value will yield more data points in the angle domain without any gain in information, thus representing excessive processing. Using a higher increment value will result in a loss of information in the lower rpm ranges, which will not be recovered if the data is transformed back to the original domain.
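The worked example translates directly into code:

```python
# Threshold angle between over- and undersampling, as in the worked
# example: a minimum slew rate of 500 rpm sampled at 2000 Hz.
rpm_min = 500.0
fs = 2000.0

deg_per_sec = rpm_min * 360.0 / 60.0   # 500 rpm = 3000 degrees per second
delta_theta = deg_per_sec / fs         # degrees the shaft advances per sample
print(delta_theta)                     # -> 1.5 degrees
```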

From the required point in the angle domain, α₁, the α(t) function is consulted to find the corresponding time instant t₁. The value of the measured time signal at that instant (y′₁) must then be determined as the value for the angle position. This is repeated for every value in the angle domain.

Depending on the resolution of the original signal and the relation between both domains, interpolation may be required. In order to maintain the dynamic nature of the signal, it is essential to preserve its spectral contents, so the signal is first upsampled before interpolation.

The last interpolation ratio (and thus the corresponding upsampling factor) is governed by the actual local distance between the available and required data samples. As a final stage the constructed angle-domain signal needs to be resampled (usually downsampled) to match the desired angle resolution.

Preservation of the spectral characteristics during the upsampling, interpolation, and downsampling steps indicated above requires the correct application of these procedures, as well as low-pass finite impulse response (FIR) filters with enough suppression in the stop-band, low ripple in the pass-band, and yet optimal speed performance for acceptable computing times. The principles of resampling are discussed in section 11.1.4.


11.3 References

[1] A.V. Oppenheim and R.W. Schafer, Digital Signal Processing, Prentice Hall, 1975

[2] L.R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing, Prentice Hall, 1975

[3] R.E. Crochiere and L.R. Rabiner, Multirate Digital Signal Processing, Prentice Hall, 1983

[4] J.G. Proakis and D.G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Macmillan Publishing, 1992



Chapter 12

Digital filtering

Filtering is most often used to enhance signals by removing unwanted components. This chapter describes the theoretical basis used in the design of digital filters.
Basic definitions related to digital filtering

Types of filters and their design

Analysis of filters

Application of filters
This is by no means a comprehensive text and aims just to give some
insight into the subject. A reading list is appended at the end of the
chapter.


12.1 Basic definitions relating to digital filtering

A linear time–invariant system


Discrete time signals are defined for discrete values of time i.e. when t= n T. A
general way of describing a sequence of discrete pulses of amplitude a(n) as ilĆ
lustrated below is given in equation 12-1.

[Figure: a sequence of discrete pulses a(n) plotted against n]

{a(n)} = \sum_{m=-\infty}^{\infty} a(m) \, u_0(n - m)        Eqn 12-1

where u_0 is the unit impulse. A discrete-time system is an algorithm for converting one sequence into another, as represented below. In this case the input x(n) is related to the output y(n) by the system transformation T:

y(n) = T[x(n)]        Eqn 12-2

A linear system implies that applying the input a·x1(n) + b·x2(n) will result in the output a·y1(n) + b·y2(n), where a and b are arbitrary constants.
A time-invariant system implies that the input sequence x(n − n0) will result in the output y(n − n0) for all n0.
From equation 12-1 the input x(n) to a system can be expressed as


x(n)   x(m)Ău 0Ă(n  m) Eqn 12-3
m


If h(n) is defined as the impulse response of a system which is the response to the
sequence u0 (n), then by time invariance h(n-m) is the response to u0 (n-m). By
linearity, the response to sequence x(m)u0 (n-m) must be x(m)h(n-m).
Thus the response to x(n) is given by

 
y(n) = \sum_{m=-\infty}^{\infty} x(m) \, h(n - m) = \sum_{m=-\infty}^{\infty} h(m) \, x(n - m)        Eqn 12-4

Equation 12-4 is known as the convolution sum, and y(n) is known as the convolution of x(n) and h(n), designated x(n) * h(n). Thus for a linear time invariant (LTI) system, a relation exists between the input and output that is completely characterized by the impulse response h(n) of the system.

x(n) → [ LTI system, h(n) ] → y(n)
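The convolution sum of equation 12-4 can be evaluated directly in a few lines; the sketch below (with arbitrary example sequences, not taken from the text) checks the direct double sum against numpy's library convolution:

```python
import numpy as np

# Arbitrary example sequences (assumptions, not from the text)
x = np.array([1.0, 2.0, 3.0])        # input x(n)
h = np.array([0.5, 0.25])            # impulse response h(n)

# Direct evaluation of y(n) = sum_m x(m) h(n - m)
y = np.zeros(len(x) + len(h) - 1)
for n in range(len(y)):
    for m in range(len(x)):
        if 0 <= n - m < len(h):
            y[n] += x[m] * h[n - m]

# The same result from the library convolution x(n) * h(n)
assert np.allclose(y, np.convolve(x, h))
```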

Stability and Causality


The constraints of stability and causality define a more restricted class of linear
time-invariant systems which have important practical applications.
A stable system is one for which every bounded input results in a bounded output. The necessary and sufficient condition for stability is

\sum_{n=-\infty}^{\infty} |h(n)| < \infty        Eqn 12-5

A causal system is one for which the output at any n = n0 depends only on the input for n ≤ n0. A linear time-invariant system is causal if and only if the unit sample response is zero for n < 0, in which case h(n) may be referred to as a causal sequence.

Difference equations
Some linear time-invariant systems have input and output sequences that are related by a constant coefficient linear difference equation. Representing such systems in this way can make them realizable, and the appropriate difference equation reveals useful information on the characteristics of the system under investigation, such as the natural frequencies and their multiplicity, the order of the system, and frequencies for which there is zero transmission.


The general form of an Mth order linear constant coefficient difference equation
is given in equation 12-6.

y(n) = \sum_{i=0}^{M} b_i \, x(n-i) - \sum_{i=1}^{M} a_i \, y(n-i)        Eqn 12-6

An example of a first order difference equation is given by

y(n) = -a_1 y(n-1) + b_0 x(n) + b_1 x(n-1)        Eqn 12-7

which can be realized as follows.

[Diagram: Direct form 1 realization of equation 12-7 — x(n) passes through gain b0 and, via a one-sample delay, gain b1; y(n) is fed back through a one-sample delay and gain −a1.]

A realization such as this, where separate delays are used for both input and output, is known as Direct form 1. More detailed information on filter realizations can be obtained from the references listed at the end of this chapter.

The z transform
The z transform of a sequence x(n) is given by


X(z) = \sum_{n=-\infty}^{\infty} x(n) \, z^{-n}        Eqn 12-8

where z is a complex variable. The z transform is a useful technique for representing and manipulating sequences.
The information contained in the z transform can be displayed in terms of poles and zeros. If the poles of the function X(z) fall within a radius R1, where R1 < 1, then the system is stable.

166 The Lms Theory and Background Book


Digital filtering

In the z plane, the overall representation of a linear time invariant system is given by

H(z) = \frac{Y(z)}{X(z)}        Eqn 12-9

and H(z) can again be expressed in the general form of the difference equation coefficients:

H(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_M z^{-M}}{1 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_N z^{-N}}        Eqn 12-10

The frequency response of filters

Consider the case when the input to a filter is x(n) = e^{j\omega_0 n} (equivalent to a sampled sinusoid of frequency ω0). From equation 12-4,

y(n) = \sum_{m} h(m) \, e^{j\omega_0 (n-m)} = e^{j\omega_0 n} \sum_{m} h(m) \, e^{-j\omega_0 m} = x(n) \, H(e^{j\omega_0})        Eqns 12-11

The quantity H(e^{jω}) is the frequency response function of the filter, which gives the transmission of the system for every value of ω. It is in fact the z transform of the impulse response evaluated at z = e^{jω}:

H(z)|_{z=e^{j\omega}} = H(e^{j\omega}) = \sum_{n=-\infty}^{\infty} h(n) \, e^{-j\omega n}        Eqn 12-12

which means that the frequency response of a filter is an important indicator of a system's response to any input sequence that can be represented as a continuous superposition of complex exponential sequences.
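Equation 12-12 is exactly what scipy.signal.freqz computes: the z transform of the coefficient sequence evaluated on the unit circle. A sketch with an arbitrary three-tap impulse response:

```python
import numpy as np
from scipy.signal import freqz

h = np.array([0.25, 0.5, 0.25])      # arbitrary FIR impulse response

# Evaluate H(e^{jw}) = sum_n h(n) e^{-jwn} directly at a few frequencies
w = np.linspace(0, np.pi, 8)
n = np.arange(len(h))
H_direct = np.array([np.sum(h * np.exp(-1j * wi * n)) for wi in w])

# Same evaluation with freqz (numerator b = h, denominator a = 1 for FIR)
w2, H = freqz(h, worN=w)
assert np.allclose(H, H_direct)
```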

Relationship between the frequency response and the Fourier transform of a filter

The frequency response of a linear time invariant system can be viewed as the Fourier series representation of H(e^{jω}).



H(e^{j\omega}) = \sum_{n=-\infty}^{\infty} h(n) \, e^{-j\omega n}        Eqn 12-13

h(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} H(e^{j\omega}) \, e^{j\omega n} \, d\omega

where the impulse response coefficients are also the Fourier series coefficients.
Since the above relationships are valid for any sequence that can be summed,
the same can apply to x(n) and y(n) and it can be shown that

Y(e^{j\omega}) = X(e^{j\omega}) \, H(e^{j\omega})        Eqn 12-14

and so the convolution in the time domain has been converted to multiplication
in the frequency domain.

Discrete Fourier Transform


For a periodic sequence of N samples, the Discrete Fourier Transform is given
as

H_p(k) = \sum_{n=0}^{N-1} h_p(n) \, e^{-j(2\pi/N)nk}        Eqn 12-15

and the DFT coefficients are identical to the z transform of that same sequence evaluated at N equally spaced points around the unit circle. The DFT coefficients are therefore a unique representation of a sequence of finite duration.
The continuous frequency response can be obtained from the DFT coefficients by artificially increasing the number of points equally spaced around the unit circle. So by augmenting a finite duration sequence with additional equally spaced zero valued samples, the Fourier transform can be calculated with arbitrary resolution.
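The zero-padding argument can be illustrated with numpy's FFT: padding a finite sequence before transforming evaluates the same z transform at more points around the unit circle (the sequence below is an arbitrary example).

```python
import numpy as np

h = np.array([1.0, 2.0, 1.0])        # arbitrary finite-duration sequence

H4 = np.fft.fft(h, n=4)              # DFT at 4 points around the unit circle
H16 = np.fft.fft(h, n=16)            # same spectrum sampled at 16 points

# The denser grid passes through the coarse samples: bin k of the
# 4-point DFT equals bin 4k of the 16-point DFT.
assert np.allclose(H4, H16[::4])
```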

Finite and Infinite Impulse Response Filters

When an impulse response h(n) is made up of a sequence of finite pulses between the limits N1 < n < N2 (as shown below) and is zero outside these limits, then the system is called a finite impulse response (FIR) filter or system.

[Figure: a finite impulse response h(n), nonzero only between n = N1 and n = N2]

Such filters are always stable and can be realized by delaying the impulse response by an appropriate amount. The design of FIR filters is described in section 12.2.2.

A filter (system) whose impulse response extends to either −∞ or +∞ (or both) is termed an infinite impulse response (IIR) filter or system. Design of these filters is discussed in sections 12.2.3 and 12.2.4.

Use of digital filters

Digital filters can be used in a range of applications such as:
- anti-aliasing,
- smoothing,
- elimination of noise,
- compensation (equalization),
- modification of fatigue/damage characteristics.

They have some important advantages compared to analog filters:
- high accuracy,
- consistent behavior and characteristics,
- few physical constraints,
- independence of hardware,
- the signals can be easily used by different processing algorithms.


12.2 FIR and IIR filter design

Filters fall into two distinct categories: the Finite Impulse Response (FIR) filters and the Infinite Impulse Response (IIR) filters. A comparison of the two categories of filters is given below.

Characteristic          FIR                                    IIR
Stability               always stable (all poles at z = 0)     stable if |poles| < 1
Phase                   linear (important in applications      nonlinear
                        such as speech processing)
Efficiency              low: the length (number of taps)       better: a lower order
                        must be relatively large to produce    is required
                        an adequately sharp cut-off
Round-off error         low                                    high
sensitivity
Start-up transients     finite duration                        infinite duration
Adaptive filtering      easy                                   difficult
Realization             straightforward (direct form)          more critical (direct
                                                               or cascaded)

There are nine basic designs of filters that are described in this chapter as listed
below.

FIR Window see page 174


FIR Multi window see page 176
FIR Remez see page 177
IIR Bessel see page 180
IIR Butterworth see page 181
IIR Chebyshev see page 182
IIR Inverse Chebyshev see page 183
IIR Cauer see page 183
IIR Inverse design see page 187

This section begins with an introduction to the terminology used in filter design. The following subsections deal with the processes and parameters involved in each type of filter mentioned above.


12.2.1 Filter design terminology

Filter characteristics

The nomenclature used in describing a (low pass) filter is illustrated in Figure 12-1.

[Figure 12-1 Filter characteristics: the magnitude response |H(ω)| showing the pass band ripple, the attenuation, and the stop band ripple, across the pass band, transition band and stop band]

The filter design functions operate with normalized frequencies, with a unit frequency equal to the sampling frequency:

Normalized frequency = frequency (Hz) / sampling frequency (Hz)

and thus lies in the range 0 to 0.5.

Angular frequency on the unit circle = normalized frequency × 2π

Linear phase filters

The frequency response of a filter has an amplitude and a phase:

H(e^{j\omega}) = |H(e^{j\omega})| \, e^{j\theta(\omega)}

For a linear phase, \theta(\omega) = -\alpha\omega where -\pi \le \omega \le \pi. It can be shown that a necessary condition for this is that the impulse response function is symmetric,

h(n) = h(N - 1 - n)

and in this case α = (N−1)/2.

This means that for each value of N there is only one value of α for which exactly linear phase will be obtained. Figure 12-2 shows the type of symmetry required when N is odd and even.

[Figure 12-2 Symmetrical impulses for odd and even N: for N = 11 the center of symmetry falls on a sample (α = 5); for N = 12 it falls between two samples (α = 5.5)]

Filter types

Several types of filter are provided (some of which are illustrated below), as well as multipoint filters where the required response can be of an arbitrary shape.

[Figure 12-3 Filter types: idealized magnitude responses |H(ω)| of low pass, high pass, band pass and band stop filters]

In addition it is also possible to design a differentiator filter and a Hilbert transformer. These can both be designed using the Remez exchange algorithm and they are briefly described here.

Differentiator filter
Such a filter takes the derivative of a signal, and an ideal differentiator has a desired frequency response of

H_d(\omega) = j\omega, \qquad -\pi \le \omega \le \pi        Eqn 12-16

The unit sample response is

h(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} H_d(\omega) \, e^{j\omega n} \, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} j\omega \, e^{j\omega n} \, d\omega = \frac{\cos \pi n}{n}        Eqn 12-17

which is an anti-symmetric unit sample response. In practice, however, the ideal case is not required and a pass band will be specified, as shown in Figure 12-4.

[Figure 12-4 Characteristics of a differentiator filter: |H(ω)| rising linearly over the pass band, followed by a transition band and a stop band with stop band ripple]

Hilbert transformer
This filter imparts a 90° phase shift to the input. The ideal Hilbert transformer has a desired frequency response of

H_d(\omega) = -j, \qquad 0 < \omega \le \pi
H_d(\omega) = +j, \qquad -\pi \le \omega < 0        Eqn 12-18

The unit sample response is

h(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} H_d(\omega) \, e^{j\omega n} \, d\omega = \frac{1}{2\pi} \left[ \int_{-\pi}^{0} j e^{j\omega n} \, d\omega - \int_{0}^{\pi} j e^{j\omega n} \, d\omega \right] = \frac{2 \sin^2(\pi n / 2)}{\pi n}        Eqn 12-19

In practice, however, the ideal case is not required and the desired frequency response of a Hilbert transformer can be specified as H_d(ω) = 1 between the limits ω_l < ω < ω_u, as shown in Figure 12-5.

[Figure 12-5 Characteristics of a Hilbert transformer: |H(ω)| equal to 1 between ω_l and ω_u]

12.2.2 Design of FIR filters

Design of an FIR window filter


The frequency response of a filter can be expanded into a Fourier series:

H(e^{j\omega}) = \sum_{n=-\infty}^{\infty} h(n) \, e^{-j\omega n}        Eqn 12-20

h(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} H(e^{j\omega}) \, e^{j\omega n} \, d\omega

The coefficients of the Fourier series are identical to the impulse response of the filter. Such a filter is not realizable, however, since it begins at n = −∞ and is infinitely long. It needs to be both truncated to make it finite and shifted to make it realizable. Direct truncation is possible but leads to the Gibbs phenomenon of overshoot and ripple illustrated below.

[Figure 12-6 Gibbs phenomenon due to truncation of the Fourier series]


A solution to this is to truncate the Fourier series with a window function. This is a finite weighting sequence which modifies the Fourier coefficients so as to control the convergence of the series. Then

\hat{h}(n) = h(n) \, w(n)        Eqn 12-21

where w(n) is the window function sequence and \hat{h}(n) gives the required impulse response.
The desirable characteristics of a window function are
- a narrow main lobe containing as much energy as possible,
- side lobes whose energy decreases rapidly as ω tends to π.
The windows supported are listed below.

Rectangular
This is equivalent to direct truncation:

W(n) = 1   when  -(N-1)/2 \le n \le (N-1)/2
     = 0   elsewhere

Hanning
This type of window trades off transition width for ripple cancellation. In this case

W(n) = \alpha + (1 - \alpha) \cos(2\pi n / N)   when  -(N-1)/2 \le n \le (N-1)/2
     = 0   elsewhere

with α = 0.5.

Hamming
This has similar properties to the Hanning window described above. The formula is the same but in this case α = 0.54.

Kaiser
The Kaiser window function is a simplified approximation of a prolate spheroidal wave function, which exhibits the desirable quality of being a time-limited function whose Fourier transform approximates a band-limited function. It displays minimum energy outside a selected frequency band and is described by the following formula:

W(n) = \frac{I_0\!\left(\beta \sqrt{1 - [2n/(N-1)]^2}\right)}{I_0(\beta)}   when  -(N-1)/2 \le n \le (N-1)/2

where I_0 is the zeroth order modified Bessel function of the first kind and β is a constant representing a trade-off between the height of the side lobe ripple and the width of the main lobe.

Chebyshev
This is another example of an essentially optimum window, like the Kaiser window, in the sense that it is a finite duration sequence that has the minimum spectral energy beyond the specified limits. The window function is derived from the Chebyshev polynomial, which is described below.

The Chebyshev polynomial of degree r in x, where −1 ≤ x ≤ 1, is denoted by T_r(x):

T_r(x) = \cos(r \cos^{-1}(x))

with the recurrence

T_{r+1}(x) = 2 x \, T_r(x) - T_{r-1}(x)

and so

T_0(x) = 1
T_r(1) = 1
T_r(-1) = (-1)^r
T_{2r}(0) = (-1)^r
T_{2r+1}(0) = 0

The window function W(n) is obtained from the inverse DFT of the Chebyshev polynomial evaluated at N equally spaced points around the unit circle.
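The truncate-and-window procedure described above is implemented by scipy.signal.firwin; a sketch of a Hamming-windowed low-pass design (tap count and cutoff are arbitrary choices), checking the linear-phase symmetry h(n) = h(N−1−n) and the resulting magnitude response:

```python
import numpy as np
from scipy.signal import firwin, freqz

N = 31                                # number of taps (odd -> center on a sample)
# Hamming-windowed low-pass design, cutoff at 0.25 of the Nyquist frequency
h = firwin(N, 0.25, window="hamming")

# Linear phase requires the symmetry h(n) = h(N-1-n)
assert np.allclose(h, h[::-1])

# Check the magnitude response: near 1 in the pass band, small in the stop band
w, H = freqz(h, worN=1024)
mag = np.abs(H)
assert mag[0] > 0.99                          # DC gain ~ 1
assert np.max(mag[w > 0.6 * np.pi]) < 0.01    # deep stop-band attenuation
```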

FIR multi window Filter

This allows you to design a filter of arbitrary shape and is suited for narrow band selective filters. It uses the design technique known as frequency sampling.

It will be recalled from equation 12-15 that a filter can be defined by its DFT coefficients, and that the DFT coefficients can be regarded as samples of the z transform of the function evaluated at N points around the unit circle.


H(k) = \sum_{n=0}^{N-1} h(n) \, e^{-j(2\pi/N)nk}

h(n) = \frac{1}{N} \sum_{k=0}^{N-1} H(k) \, e^{j(2\pi/N)nk}

H(k) = H(z)|_{z = e^{j(2\pi/N)k}}

From these relationships, and since e^{j2\pi k} = 1, it can be shown that

H(z) = \frac{1 - z^{-N}}{N} \sum_{k=0}^{N-1} \frac{H(k)}{1 - z^{-1} \, e^{j(2\pi/N)k}}        Eqn 12-22
k0

The desired filter specification can be sampled in frequency at N equidistant points around the unit circle, to give the desired frequency response H(k). The continuous frequency response can be obtained by interpolation of these sampled values around the unit circle.

The filter coefficients are obtained after applying an inverse FFT on the interpolated response. The coefficients are tapered smoothly to zero at the ends by multiplying the impulse response by the specified window function.
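A frequency-sampling design of this kind can be sketched with scipy.signal.firwin2, which interpolates a set of (frequency, gain) break points onto a dense grid, inverse-transforms, and applies the specified window, close in spirit to the multi-window design described here. The band edges below are arbitrary:

```python
import numpy as np
from scipy.signal import firwin2, freqz

# Desired magnitude at normalized frequencies (0 = DC, 1 = Nyquist):
# a narrow band-pass shape, chosen only as an illustration
freq = [0.0, 0.2, 0.3, 0.4, 0.5, 1.0]
gain = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]

h = firwin2(65, freq, gain, window="hamming")

w, H = freqz(h, worN=1024)
mag = np.abs(H)
center = (w > 0.34 * np.pi) & (w < 0.36 * np.pi)
assert np.min(mag[center]) > 0.9      # near unity at the band center
assert mag[0] < 0.05                  # strongly attenuated at DC
```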

FIR Remez filter

This uses the Remez exchange algorithm and Chebyshev approximation theory to arrive at filters that optimally fit the desired and the actual frequency responses, in the sense that the error between them is minimized. The Parks-McClellan algorithm employed enables you to design an equi-ripple optimal FIR filter.

The desired frequency response is expressed as a gabarit which contains a number of frequency bands. These bands are interpolated onto a dense grid in a similar way to that described for the multi-window FIR filter design above.

The weighted approximation error between the desired frequency response and the actual response is spread evenly across the passbands and the stopbands, and the maximum error is minimized by linear optimization techniques. The approximation errors in both the pass and stop bands for a low pass filter are illustrated in Figure 12-7.


[Figure 12-7 Approximation errors: pass band ripple ±δ1 about unity, and stop band ripple δ2 about zero, for a low pass filter]

The filter coefficients are obtained after applying an inverse DFT on the optimum frequency response.

Weighting
For each frequency band the approximation errors can be weighted. This is done by specifying a weighting function W(ω). Applying a weighting function of 1 (unity) in all bands implies an even distribution of the errors over the whole frequency range. To reduce the ripple in one particular band it is necessary to change the relative weighting across the bands, ensuring that the band of interest has a relatively high weighting. It is convenient to normalize W(ω) in the stopband to unity and to set it in the passband to the ratio of the approximation errors (δ2/δ1).
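The Parks-McClellan design is available as scipy.signal.remez; the sketch below designs a low-pass filter with a ten-times heavier weight on the stop band, so the stop-band ripple comes out roughly ten times smaller than the pass-band ripple (all band edges and weights are arbitrary):

```python
import numpy as np
from scipy.signal import remez, freqz

# With fs=2, band edges are in normalized frequency (1.0 = Nyquist).
# Pass band 0-0.2, stop band 0.3-1.0; the gap 0.2-0.3 is the transition band.
h = remez(45, [0.0, 0.2, 0.3, 1.0], [1.0, 0.0], weight=[1.0, 10.0], fs=2.0)

w, H = freqz(h, worN=2048)
mag = np.abs(H)
passband = mag[w < 0.19 * np.pi]
stopband = mag[w > 0.31 * np.pi]

# Equi-ripple behavior: small bounded deviation in each band,
# with the heavily weighted stop band far quieter
assert np.max(np.abs(passband - 1.0)) < 0.1
assert np.max(stopband) < 0.02
```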

12.2.3 Design of IIR filters using analog prototypes


The steps involved in this design process are described in the following subsecĆ
tions. References for further reading on filters can be found on page 191.

Step 1) Specify the filter characteristics

The required filter characteristics are described in Figure 12-8. These will of
course depend on the type of filter required.


 
[Figure 12-8 Filter specification for IIR filters: maximum ripple in the pass band (dB), attenuation (dB), and the lower and upper cutoff frequencies ω_l and ω_u]

Step 2) Compute the analog frequencies

A prototype low pass filter will be designed based on the required digital cutoff frequency ω_c. First, however, the digital frequency ω_d must be converted to an analog one, ω_a. This is achieved through a bilinear transformation from the digital (z) plane to the analog (s) plane, where s and z are related by

s = \frac{2}{T} \cdot \frac{1 - z^{-1}}{1 + z^{-1}}        Eqn 12-23

When z = e^{j\omega T} (the unit circle) and s = j\omega_a,

s = \frac{2}{T} \cdot \frac{1 - e^{-j\omega T}}{1 + e^{-j\omega T}} = \frac{2}{T} \, j \tan(\omega_d T / 2)        Eqn 12-24

\omega_a = \frac{2}{T} \tan(\omega_d T / 2)        Eqn 12-25

The analog ω axis is mapped onto one revolution of the unit circle, but in a non-linear fashion. It is necessary to compensate for this nonlinearity (warping) as shown below.

Part III Time data processing 179


Chapter 12 Digital filtering

[Figure 12-9 Conversion from digital to analog frequencies: the defined digital frequencies ω_c and ω_d are mapped through the tangent warping curve of equation 12-25 to the computed analog frequencies]
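The prewarping of equation 12-25 can be checked numerically: substituting z = e^{jωT} into the bilinear transformation lands exactly on the prewarped analog frequency. All numbers below are arbitrary.

```python
import numpy as np

fs = 1000.0                           # sampling frequency in Hz (arbitrary)
T = 1.0 / fs
fd = 100.0                            # desired digital cutoff in Hz
wd = 2 * np.pi * fd                   # digital angular frequency

# Eqn 12-25: analog design frequency after prewarping
wa = (2.0 / T) * np.tan(wd * T / 2.0)

# The bilinear transform maps wa back onto wd: substituting
# z = exp(j*wd*T) into s = (2/T)(1 - z^-1)/(1 + z^-1) gives s = j*wa
z = np.exp(1j * wd * T)
s = (2.0 / T) * (1 - 1 / z) / (1 + 1 / z)
assert np.isclose(s.imag, wa)
assert np.isclose(s.real, 0.0)
```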

Step 3) Select a suitable analog filter

It is now necessary to select a suitable low pass analog prototype filter that will produce the required characteristics. The selection can be made from the following types of filter:
- Bessel filters
- Butterworth filters
- Chebyshev type I filters
- Inverse Chebyshev (type II) filters
- Cauer (elliptical) filters

Bessel filters

The goal of the Bessel approximation for filter design is to obtain a flat delay characteristic in the passband. The delay characteristics of the Bessel approximation are far superior to those of the Butterworth and the Chebyshev approximations; however, the flat delay is achieved at the expense of the stopband attenuation, which is even lower than that of the Butterworth. The poor stopband characteristics of the Bessel approximation make it impractical for most filtering applications.

Bessel filters have sloping pass and stop bands and a wide transition width, resulting in a cutoff frequency that is not well defined.

The transfer function is given by

H(s) = \frac{d_0}{B_n(s)}        Eqn 12-26

where B_n(s) is the nth order Bessel polynomial, defined by the recurrence

B_n(s) = (2n - 1) B_{n-1}(s) + s^2 B_{n-2}(s)        Eqn 12-27

and d_0 is a normalizing constant:

d_0 = \frac{(2n)!}{2^n \, n!}        Eqn 12-28

Butterworth filters

These are characterized by the response being maximally flat in the pass band and monotonic in both the pass band and the stop band. Maximally flat means that as many derivatives as possible are zero at the origin. The squared magnitude response of a Butterworth filter is

|H(\omega)|^2 = \frac{1}{1 + (\omega/\omega_c)^{2n}}        Eqn 12-29

where n is the order of the filter. The transfer function of this filter can be determined by substituting \omega^2 = -s^2 in equation 12-29:

H(s) \, H(-s) = \frac{1}{1 + (-s^2/\omega_c^2)^{n}}        Eqn 12-30

Butterworth filters are all-pole filters, i.e. the zeros of H(s) are all at s = ∞. They have magnitude 1/√2 when ω/ω_c = 1, i.e. the magnitude response is down 3 dB at the cutoff frequency.


[Figure 12-10 Characteristics of a Butterworth filter: |H(ω)|² down 3 dB at ω_c, with the roll-off steepening as the order increases from n = 4 to n = 10]

A means of determining the optimum order is described on page 185.
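The −3 dB property at the cutoff can be verified with scipy.signal.butter, which performs the prototype design, prewarping and bilinear transformation in one call (order and cutoff below are arbitrary):

```python
import numpy as np
from scipy.signal import butter, freqz

# 4th-order digital Butterworth low-pass, cutoff at 0.25 of Nyquist
b, a = butter(4, 0.25)

# Evaluate the response at DC and at the cutoff frequency
w, H = freqz(b, a, worN=[0.0, 0.25 * np.pi])
mag = np.abs(H)
assert np.isclose(mag[0], 1.0)                           # maximally flat at DC
assert np.isclose(mag[1], 1.0 / np.sqrt(2), atol=1e-6)   # -3 dB at cutoff
```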

Chebyshev (type I) filters


These are all pole filters that have equi-ripple pass bands and monotone stop
bands. The formula is

|H()| 2  1 Eqn 12-31


1   2C 2n()

where Cn () are the Chebyshev polynomials and  is the parameter related to
the ripple in the pass band as shown below for n odd and even.

 
1 1
1  2 1  2

n odd n even

For the same loss requirements, the Chebyshev approximation usually requires
a lower order than the Butterworth approximation, but at the expense of an
equi-ripple passband. Therefore, the transition width of a Chebyshev filter is
narrower than for a Butterworth filter of the same order.

The increased stopband attenuation is achieved by changing the approximation conditions in that band, thus minimizing the maximum deviation from the ideal flat characteristics. The stopband loss keeps increasing at the maximum possible rate of 6×(order) dB/octave.


Chebyshev filters show a non-uniform group delay and substantially non-linear phase. A means of determining the optimum order is described on page 185.

Inverse Chebyshev (type II) filters

These contain both poles and zeros and have equi-ripple stop bands with maximally flat pass bands. In this case

|H(\omega)|^2 = \frac{1}{1 + \varepsilon^2 \left[ \dfrac{C_n(\omega_r)}{C_n(\omega_r/\omega)} \right]^2}        Eqn 12-32

where C_n(ω) are the Chebyshev polynomials, ε is the pass band ripple parameter, and ω_r is the lowest frequency at which the stop band loss attains a specified value. These parameters are illustrated below for n odd and even.

[Figure: inverse Chebyshev responses for n odd and n even — a maximally flat pass band and an equi-ripple stop band beginning at ω_r]

For the same loss requirements, the Inverse Chebyshev approximation usually requires a lower order than the Butterworth approximation, but at the expense of an equi-ripple stopband.

The increased passband flatness is achieved by changing the approximation conditions in that band, thus minimizing the maximum deviation from the ideal flat characteristics.

Cauer (elliptical) filter

These filters are optimum in the sense that, for a given filter order and ripple specification, they achieve the fastest transition between the pass and the stop band (i.e. the narrowest transition band). They have equi-ripple stop bands and pass bands, for both n odd and n even.

The squared magnitude response is given by

|H(\omega)|^2 = \frac{1}{1 + \varepsilon^2 R_n^2(\omega, L)}        Eqn 12-33

where R_n(ω, L) is called a Chebyshev rational function and L is a parameter describing the ripple properties of R_n(ω, L). The determination of R_n(ω, L) involves the use of the Jacobi elliptic functions. ε is a parameter related to the passband ripple.


For a given requirement, this approximation will in general require a lower order than the Butterworth or the Chebyshev ones. The Cauer approximation will thus lead to the least costly filter realization, but at the expense of the worst delay characteristics.

In the Chebyshev and Butterworth approximations, the stopband loss keeps increasing at the maximum possible rate of 6×(order) dB/octave. These approximations therefore provide increasingly more loss than the flat attenuation that is really needed above the edge of the stopband. This source of inefficiency in both approximations is remedied by the Cauer or elliptic approximation.
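An elliptic design of this kind can be sketched with scipy.signal.ellip; the assertions check the two equi-ripple properties: the pass band never dips below the specified 1 dB ripple before the cutoff, and the stop band stays at or below −40 dB (all parameter values are arbitrary):

```python
import numpy as np
from scipy.signal import ellip, freqz

# 4th-order elliptic low-pass: 1 dB pass-band ripple, 40 dB stop-band
# attenuation, cutoff at 0.25 of Nyquist (all values arbitrary)
b, a = ellip(4, 1.0, 40.0, 0.25)

w, H = freqz(b, a, worN=4096)
mag_db = 20 * np.log10(np.maximum(np.abs(H), 1e-12))

# Equi-ripple pass band: never more than 1 dB down before the cutoff
assert np.min(mag_db[w <= 0.25 * np.pi]) >= -1.01
# Equi-ripple stop band: at least ~40 dB down well past the transition
assert np.max(mag_db[w >= 0.45 * np.pi]) <= -39.9
```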


Step 4) Transform the prototype low pass filter

At this point we have selected a suitable low pass filter prototype with a normalized cutoff frequency ω_c = 1. The next stage is to transform this low pass filter into the type of analog filter required, with the desired cutoff frequencies. To achieve this, the following transformations are applied (ω_l and ω_u are the lower and upper cutoff frequencies):

Transform                  Replace s by
Low pass to low pass       s / \omega_c
Low pass to high pass      \omega_c / s
Low pass to band pass      (s^2 + \omega_u \omega_l) / (s (\omega_u - \omega_l))
Low pass to band stop      (s (\omega_u - \omega_l)) / (s^2 + \omega_u \omega_l)

Step 5) Apply a bilinear transformation

The final stage in this design process is to apply a bilinear transformation to map the (s) plane to the (z) plane and so obtain the desired digital filter:

H(z) = H(s)\big|_{s = \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}}        Eqn 12-34

The final result is a set of filter coefficients a and b, stored in vectors of length n+1, where n is the order of the filter. A facility, described below, enables you to determine the optimum order of a filter required for a particular design.

Determining the filter order

You can determine the filter order and the cutoff frequency for a given set of design parameters, shown in Figure 12-11.

[Figure 12-11 Specifications required to determine filter order: passband ripple δ1 (response between 1 and 1−δ1 up to ω_p) and attenuation δ2 (from ω_s onwards)]

Ripple passband: determines the ripple parameter δ1; expressed in dB.
Attenuation: determines the ripple parameter δ2; expressed in dB.
Lower frequency / Upper frequency: the two edge frequencies ω_p (end of the pass band) and ω_s (start of the stop band) of a low pass or high pass filter. Band pass and band stop filters require a second pair of frequencies to be defined.
Sampling frequency: the sampling frequency at which the filter must operate.

The filter can be any one of the types mentioned above, and the prototype can be either a Butterworth, Chebyshev type I or type II, or a Cauer filter. This process does not apply to the Bessel filter, because of the particular condition pertaining to these filters that the filter order affects the cutoff frequency.
The minimum filter order required is determined from the set of functions described below.

One function relates the pass band and stop band ripple specifications to a filter design parameter δ, where

\delta = \left[ \frac{(1 - \delta_1)^{-2} - 1}{\delta_2^{-2} - 1} \right]^{1/2}

Another function relates the pass band cutoff frequency ω_p, the stop band edge ω_s and the low pass filter transition ratio k, where

k = \frac{\Omega_p}{\Omega_s} \quad \text{(analog)} \qquad\qquad k = \frac{\tan(\omega_p/2)}{\tan(\omega_s/2)} \quad \text{(digital)}


A final function relates the filter order n, the transition ratio k and the design parameter δ. This relationship depends on the type of prototype analog filter:

n \ge \frac{\log(1/\delta)}{\log(1/k)}   (Butterworth)

n \ge \frac{\cosh^{-1}(1/\delta)}{\ln\!\left[ \left(1 + \sqrt{1 - k^2}\right)/k \right]}   (Chebyshev)

n \ge \frac{K(k) \, K\!\left(\sqrt{1 - \delta^2}\right)}{K(\delta) \, K\!\left(\sqrt{1 - k^2}\right)}   (Elliptic)

where K(·) is the complete elliptic integral of the first kind.
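These order estimates are implemented in scipy.signal as buttord, cheb1ord and ellipord; a sketch comparing the three prototypes on the same (arbitrary) specification confirms the ordering claimed in the text:

```python
from scipy.signal import buttord, cheb1ord, ellipord

# Pass band to 0.2 of Nyquist, stop band from 0.3;
# 1 dB pass-band ripple, 40 dB stop-band attenuation (arbitrary values)
wp, ws, rp, rs = 0.2, 0.3, 1.0, 40.0

n_butter, _ = buttord(wp, ws, rp, rs)
n_cheby, _ = cheb1ord(wp, ws, rp, rs)
n_ellip, _ = ellipord(wp, ws, rp, rs)

# Elliptic <= Chebyshev <= Butterworth for the same specification
assert n_ellip <= n_cheby <= n_butter
```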

12.2.4 IIR Inverse design filter


The `filter inverse design' command uses a direct digital design technique rathĆ
er than the digitization of existing analog filters as described in section 12.2.3.
An iterative procedure is used to perform a least squares error fit between the
actual frequency response and the specified desired response.

The required response is obtained from a specified gabarit that contains the
necessary frequency and magnitude break points which are mapped onto a
grid.
The outcome is a set of filter coefficients.


12.3 Analysis

This section describes the functions that provide information on the characteristics of filters.

Frequency response of filters

The magnitude and phase of the frequency response H(e^{jω}) of the filter are computed from the coefficients a and b in equation 12-10.

Group delay

The group delay of a filter provides a measure of the average delay of the filter as a function of frequency. The frequency response of a filter is given by

H(z)|_{z=e^{j\omega}} = H(e^{j\omega}) = |H(e^{j\omega})| \, e^{j\theta(\omega)}

The phase delay is defined as

\tau_p(\omega) = -\frac{\theta(\omega)}{\omega}        Eqn 12-35

and the group delay is defined as the (negative) first derivative of the phase:

\tau_g(\omega) = -\frac{d\theta(\omega)}{d\omega}        Eqn 12-36

If the waveform is not to be distorted, then the group delay should be constant over the frequency bands being passed by the filter.

For a linear phase, θ(ω) = −αω where −π ≤ ω ≤ π; then α is both the phase delay and the group delay.
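For a linear-phase FIR filter the group delay of equation 12-36 is the constant (N−1)/2 samples; this can be checked with scipy.signal.group_delay (the example filter is an arbitrary windowed design):

```python
import numpy as np
from scipy.signal import firwin, group_delay

N = 21
h = firwin(N, 0.3)                    # symmetric (linear-phase) FIR filter

# Evaluate the group delay over the pass band (cutoff is 0.3 of Nyquist)
w = np.linspace(0, 0.25 * np.pi, 64)
w, gd = group_delay((h, [1.0]), w=w)

# Constant group delay of (N-1)/2 = 10 samples
assert np.allclose(gd, (N - 1) / 2, atol=1e-5)
```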


12.4 Applying filters

This section describes how filters can be applied to data.

Direct trace filtering

Implementing this method filters the data x according to the filter defined by coefficients a and b to produce the filtered data y.

Zero phase filtering

This option also filters the data using the filter defined by the coefficients a and b, but in such a way as to produce no phase distortion. In the case of FIR filters an exactly linear phase is possible, since the output is simply delayed by a fixed number of samples, but with IIR filters the phase distortion is very non-linear. If the data has been recorded, however, and the whole sequence can be replayed, then this problem can be overcome by using the concept of `time reversal'. In effect the data is filtered twice, once in the forward direction and then in the reverse direction, which removes all the phase distortion but results in the magnitude effect of the filter being squared.
If x(n) = 0 when n < 0, then the z transform of the time reversed sequence is

Z\{x(-n)\} = \sum_{n=-\infty}^{0} x(-n) \, z^{-n} = \sum_{u=0}^{\infty} x(u) \, (z^{-1})^{-u} \qquad \text{(with } u = -n\text{)}        Eqn 12-37

So if X(z) = Z\{x(n)\}, then Z\{x(-n)\} = X(z^{-1}).

Time reversal filtering can be realized using the method shown in Figure 12-12:

x(n) → [time reversal] → a(n) = x(−n) → [filter H] → f(n) → [time reversal] → b(n) = f(−n) → [filter H] → y(n)

Figure 12-12 Realization of zero phase filters


In this case it can be seen that

A(z) = X(z^{-1})
F(z) = A(z) \, H(z) = H(z) \, X(z^{-1})
B(z) = F(z^{-1}) = H(z^{-1}) \, X(z)
Y(z) = H(z) \, B(z) = H(z) \, H(z^{-1}) \, X(z)

So the `equivalent' filter for the input data is

H_{eq}(z) = H(z) \, H(z^{-1})

and with z = e^{j\omega},

H_{eq}(e^{j\omega}) = H(e^{j\omega}) \, H(e^{-j\omega}) = |H(e^{j\omega})|^2

i.e. zero phase and squared magnitude. Using this filtering method results in starting and end transients, which in this implementation are minimized by carefully matching the initial conditions.
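The forward-backward scheme of Figure 12-12 is what scipy.signal.filtfilt implements (including careful handling of the edge transients mentioned above); a sketch on an arbitrary two-tone signal shows the zero-phase output staying aligned with the retained component, while a single forward pass is visibly phase-shifted:

```python
import numpy as np
from scipy.signal import butter, filtfilt, lfilter

fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
# Slow component to keep plus fast component to remove (arbitrary values)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)

b, a = butter(4, 20.0 / (fs / 2))     # low-pass IIR filter, cutoff 20 Hz

y_single = lfilter(b, a, x)           # one pass: magnitude |H|, phase delay
y_zero = filtfilt(b, a, x)            # two passes: magnitude |H|^2, zero phase

ref = np.sin(2 * np.pi * 5 * t)       # the component that should survive
mid = slice(len(t) // 4, 3 * len(t) // 4)   # ignore edge transients

# The zero-phase output stays aligned with the 5 Hz component...
assert np.max(np.abs(y_zero[mid] - ref[mid])) < 0.05
# ...while the single-pass output is noticeably phase-shifted
assert np.max(np.abs(y_single[mid] - ref[mid])) > 0.1
```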


12.5 References

[1] A.V. Oppenheim and R.W. Schafer
Digital Signal Processing
Prentice Hall, 1975

[2] L.R. Rabiner and B. Gold
Theory and Application of Digital Signal Processing
Prentice Hall, 1975

[3] R.E. Crochiere and L.R. Rabiner
Multirate Digital Signal Processing
Prentice Hall, 1983

[4] J.G. Proakis and D.G. Manolakis
Digital Signal Processing: Principles, Algorithms and Applications
Macmillan Publishing, 1992



Chapter 13

Harmonic tracking

This chapter describes the concepts involved in Harmonic tracking using a Kalman filter.

Theoretical background

Practical considerations


13.1 Introduction

There are a number of circumstances where it is necessary to track periodic components (orders) when the signal of interest is buried in noise, or the rotational speed is changing rapidly. Indeed, some effects only manifest themselves when the rate of change of frequency is high. In these situations, real time analog and digital filters have limited resolution due to transients and excessive processing requirements. The Kalman filter, however, is able to accurately track signals of a known structure concealed in a confusion of noise and other periodic components of unknown structure.

An important characteristic of the Kalman filter is that it is non-stationary. It functions well at high slew rates because the system model used does not presume either fixed time or frequency content, but adapts itself automatically as the system itself changes. This ability to derive the system model for each time sample in the recording (within certain user-defined constraints) frees it from the usual time/frequency resolution constraint encountered with the traditional frequency transformations.

Conditions for use


Some important capabilities of the Kalman filter are:

V the ability to track an order with arbitrary fractional order resolution from signals sampled at a constant rate,

V fine spectral resolution of the orders (e.g. 0.01 Hz) obtained after just a few measurement samples (not even one cycle of the fundamental component),

V virtually no slew rate limitations,

V the ability to produce an order value for every measurement sample point,

V no phase distortion.

In order to use the Kalman filter, the following conditions must apply:

d The structure of the signal (sine wave) to be tracked must be accurately known.

d The signals must be acquired at a constant sampling rate.

d An accurate estimate of the instantaneous RPM value is required when you are dealing with signals that vary with rotational speed.

194 The Lms Theory and Background Book


Harmonic tracking

13.2 Theoretical background


The application of the Kalman filter to track harmonic components involves two stages.

1 Accurate determination of the RPM

If you want to track an order, then you must provide the corresponding RPM/time trace. Your RPM may have been determined using a tacho signal (which results in a pulse train) or a swept sine function; in either case you will need to convert it to an RPM/time function.

2 The tracking of the specified waveform

Section 13.2.2 describes the mathematical background to the operation of the tracking function. Some practical considerations are discussed in section 13.3.

13.2.1 Determination of the RPM


Since the Kalman filter is highly selective and accurate in tracking a target signal buried in noise, it is crucial that the instantaneous RPM of the system is precisely modelled, otherwise the wrong component will be tracked. The RPM information can be derived from the tachometer channel, which is sampled at the same rate as the measurement channels to obtain a small statistical variability in the period estimation. Clearly the tachometer events will occur at a lower rate, and so to reduce the error on the period estimate, resampling is performed on the original tachometer signal.

The first part of the process therefore is to convert the original tacho signal from a pulse train to an RPM/time function.

The second step involves obtaining an equidistant function. Since all mechanical systems have some inertia, it is reasonable to expect the speed to be a continuous function, so a cubic spline with the appropriate boundary conditions can be used to obtain the required `sample-by-sample RPM' estimate of the speed function.
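The two steps above can be sketched as follows. The pulse train and sample times are hypothetical, and plain linear interpolation stands in for the cubic spline mentioned in the text:

```python
import numpy as np

def tacho_to_rpm(pulse_times, sample_times, pulses_per_rev=1):
    """Convert tacho pulse arrival times to a sample-by-sample RPM trace.
    Each pulse interval gives one RPM estimate, assigned to the interval
    midpoint; the estimates are then resampled onto the measurement time
    base (linear interpolation here, where the text uses a cubic spline)."""
    periods = np.diff(pulse_times)
    midpoints = (pulse_times[:-1] + pulse_times[1:]) / 2.0
    rpm = 60.0 / (periods * pulses_per_rev)
    return np.interp(sample_times, midpoints, rpm)

# hypothetical shaft at a steady 1200 RPM -> one pulse every 50 ms
pulses = np.arange(0.0, 2.0, 0.05)
t = np.linspace(0.1, 1.9, 500)          # measurement time base
rpm_trace = tacho_to_rpm(pulses, t)
```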

13.2.2 Waveform tracking


The Kalman filtering method involves setting up and solving a pair of equations known as the Structural and the Data equations.

Part III Time data processing 195


Chapter 13 Harmonic tracking

The Structural equation

This equation defines the shape or structure of the waveform you wish to track. A sine wave x(t) of frequency f, sampled at intervals $\Delta t$, satisfies the following second order difference equation

$$x(n\Delta t) - 2\cos(2\pi f \Delta t)\,x((n-1)\Delta t) + x((n-2)\Delta t) = 0 \qquad \text{Eqn 13-1}$$

By dropping the time increment $\Delta t$ this can be written more simply as

$$x(n) - c(n)\,x(n-1) + x(n-2) = 0 \qquad \text{Eqn 13-2}$$

where $c(n) = 2\cos(2\pi f \Delta t)$
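The structural equation can be checked numerically: a sampled sine of any amplitude and phase satisfies the recurrence to machine precision. The 7 Hz frequency, 1 kHz sampling rate and 0.3 rad phase below are arbitrary illustrative choices:

```python
import numpy as np

# check: x(n) - c*x(n-1) + x(n-2) = 0 for a pure sine, c = 2*cos(2*pi*f*dt)
f, dt = 7.0, 0.001
n = np.arange(1000)
x = np.sin(2 * np.pi * f * n * dt + 0.3)       # arbitrary phase
c = 2.0 * np.cos(2 * np.pi * f * dt)
residual = x[2:] - c * x[1:-1] + x[:-2]        # ~0 up to round-off
worst = float(np.max(np.abs(residual)))
```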

When the instantaneous frequency f is known, equation 13-2 is a linear frequency dependent constraint equation on the sine wave, which is known as the structural equation.

When tracking a sine wave which is changing in frequency, and which is contaminated by noise and other sinusoids, a non-homogeneity term $\varepsilon(n)$ is introduced. This allows the sine wave to vary in frequency, amplitude and phase, and Equation 13-2 then becomes

$$x(n) - c(n)\,x(n-1) + x(n-2) = \varepsilon(n) \qquad \text{Eqn 13-3}$$

$\varepsilon(n)$ is a deterministic but unknown term which allows for deviations from the true stationary wave.

It is also useful to define $S_\varepsilon(n)$ as the standard deviation of the non-homogeneity of the structural equation.

The Data equation

x(n) is the time history defined by the structural equation, but the measured signal y(n) contains both the signal that matches the structural equation and noise and other periodic components.


$$y(n) = x(n) + \eta(n) \qquad \text{Eqn 13-4}$$

where $\eta(n)$ contains noise and periodic components at frequencies other than the target signal.

Once again, $S_\eta(n)$ is defined as the standard deviation of the nuisance element of the data equation.

The Least squares formulation


For any point in time n, equations 13-3 and 13-4 provide linear equations in {x(n), x(n-1), x(n-2)}. Rearranging these equations gives an unweighted form in which the structural equation is on the top row and the data equation on the bottom.

$$\begin{bmatrix} 1 & -c(n) & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x(n-2) \\ x(n-1) \\ x(n) \end{bmatrix} = \begin{bmatrix} \varepsilon(n) \\ y(n) - \eta(n) \end{bmatrix} \qquad \text{Eqn 13-5}$$

The error in equation 13-5 is made isotropic by applying a weighting factor r(n), which is defined as the ratio of the standard deviations of the errors in the structural and data equations.

$$r(n) = \frac{S_\varepsilon(n)}{S_\eta(n)} \qquad \text{Eqn 13-6}$$

Equation 13-5 then becomes

$$\begin{bmatrix} 1 & -c(n) & 1 \\ 0 & 0 & r(n) \end{bmatrix}\begin{bmatrix} x(n-2) \\ x(n-1) \\ x(n) \end{bmatrix} = \begin{bmatrix} \varepsilon(n) \\ r(n)\,(y(n) - \eta(n)) \end{bmatrix} \qquad \text{Eqn 13-7}$$

The weighting function r(n) expresses the degree of confidence between the
structural equation and data equation, or, the certainty of the presence of orders
in the data. This function shapes the nature of the Kalman filter and influences
its tracking characteristics. A small value for r(n) leads to a filter that is highly
discriminating in frequency, but which takes time to converge. Conversely, fast
convergence with low frequency resolution is achieved by choosing a large r(n).


When applied to all observed time points, Equation 13-7 provides an overdetermined system of equations, which may be solved using standard least squares techniques.
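To make the formulation concrete, the sketch below stacks the structural equations and the weighted data equations for a sine of known constant frequency and solves the whole batch at once with a dense least squares solve. The real filter works recursively, and all the numbers here (5 Hz target, noise level, r = 0.1) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, f, dt = 400, 5.0, 0.01
n = np.arange(N)
target = np.sin(2 * np.pi * f * n * dt)
y = target + 0.5 * rng.standard_normal(N)     # target sine buried in noise

r = 0.1                                       # small r: narrow, frequency-selective
c = 2.0 * np.cos(2 * np.pi * f * dt)          # c(n) from the known frequency
A = np.zeros((N - 2 + N, N))
b = np.zeros(N - 2 + N)
for i in range(N - 2):                        # structural rows (Eqn 13-3)
    A[i, i], A[i, i + 1], A[i, i + 2] = 1.0, -c, 1.0
for i in range(N):                            # weighted data rows (Eqn 13-7)
    A[N - 2 + i, i] = r
    b[N - 2 + i] = r * y[i]
x = np.linalg.lstsq(A, b, rcond=None)[0]      # tracked waveform

err_raw = np.sqrt(np.mean((y - target) ** 2))
err_trk = np.sqrt(np.mean((x - target) ** 2))
```

The tracked waveform `x` follows the buried sine far more closely than the raw measurement does, illustrating how the structural constraint pulls the estimate toward the known signal structure.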


13.3 Practical considerations

This section considers some practical characteristics of the Kalman filter and the
parameters that influence them.

Frequency resolution
In principle the Kalman filter is capable of tracking sinusoidal components of any frequency up to half the sample frequency. In practice, however, it has been found that the minimum frequency spacing that can be resolved between two closely spaced sine waves is inversely proportional to the total observation time. As a consequence, the observation time should be equal to the inverse of the minimum frequency spacing required between components.

Filter characteristics
It was mentioned above that the weighting r(n) used in Equation 13-7 can be used to influence the nature of the tracking filter. This weighting can be adjusted through the specification of a harmonic confidence factor, which is defined as the inverse of the weighting factor.

$$HC = \frac{1}{r(n)} = \frac{S_\eta(n)}{S_\varepsilon(n)} \qquad \text{Eqn 13-8}$$

Applying a high value implies confidence in the harmonic (structural equation) and assumes that the error in your measured data is high. In this case the filter will be narrow, so that it is highly discriminating in frequency. This is obtained at the cost of the time taken to converge in amplitude. Applying a low value implies that the error in the measured data is low, and consequently a wider filter can be used which, while less discriminating in frequency, has the advantage that the amplitude converges more quickly.

The three Kalman filters shown below are characterized by different harmonic
confidence factors which influence the width of the filter.


Figure 13-1 Effect of the Harmonic Confidence Factor (HC = 50, 100 and 200)

Bandwidth characteristics
Equation 13-7 shows that the weighting function r(n), which is the inverse of the harmonic confidence factor, can be different for every time point. This means that the bandwidth of the filter can vary as a function of the frequency or order being tracked.

Using a frequency-defined bandwidth means that at low RPM values, a number of orders will be encompassed by the filter range.

Figure 13-2 Defining the filter bandwidth in terms of frequency and amplitude

Allowable slew rates


The formulation of the Kalman filter assumes that the frequency of the signal to be tracked remains constant over three consecutive measurement points. When the frequency is varying, but the variation over these three points is less than the bandwidth of the filter, no problem arises.

The minimum value of the bandwidth is equal to the inverse of the observation time T. If the sample rate is Fs, then the slew rate must be less than Fs/2T.


Tracking closely spaced order signals with a high slew rate requires sampling at a high frequency over a long period, which imposes a heavy computational effort. However, consider the significant slew rate of 75 Hz/sec over 5 seconds encountered during the deceleration of gas turbines: from the above, this implies a sample rate of only 750 Hz. It can be seen therefore that even such an extreme slew rate does not impose any realistic limitation on the sample rate.



Chapter 14

Counting and
histogramming

This chapter provides an introduction to various counting methods, with a reading list for further information at the end:

Counting of single events and occurrences

Two-dimensional counting methods


14.1 Introduction

In fatigue analysis, real life measurements of mechanical or thermal loads are used to assess and predict the damage inflicted by such loads over the lifetime of a product. Figure 14-1 shows such measurements made on a vehicle part over a period of around 5 minutes (330 seconds).

Figure 14-1 Typical load/time data: acceleration (g) versus time (s)

In terms of fatigue analysis, it is the occurrence of specific events that is of more significance than the frequency content of the loads. The approach used is to scan such time histories looking for typical fatigue-generating events and then to register how often they occur. These typical events can be demonstrated with a zoomed-in section of a load time history, shown in Figure 14-2.

Figure 14-2 Typical events in a data trace

The interesting events are:

V The occurrence of peaks at specific levels. These are represented by the circles and are determined using ``Peak counting'' methods described in section 14.2.1.


V The exceedence or crossing of specific levels. These are represented by the squares and are determined using ``Level cross'' counting methods described in section 14.2.2.

V The occurrence of signal changes of a certain size. These are represented by the arrows and are determined using ``Range count'' methods described in section 14.2.3.

The determination of the signal characteristics based on the events mentioned above is a two stage process.

d Stage 1, counting. The data is scanned for the occurrence of one of the events listed above. This in effect reduces the full time history to a set of mechanical or thermal load events.

d Stage 2, histogramming. This involves dividing the counted occurrences into classes, where for each event its number of occurrences is specified.


14.2 One dimensional counting methods

The procedures described above deal with the counting of `single events' or occurrences, which are further explored in this section.

Section 14.3 describes a number of methods used to examine the occurrence of additional event circumstances. These methods are termed `two-dimensional counting methods'.

14.2.1 Peak count methods


The turning points in a data trace are termed ``peaks'' (maxima) and ``valleys'' (minima). The number of times that peaks and valleys occur at specific levels is counted as shown below. You can choose to count both the peaks and the valleys (extrema), or just the peaks (maxima), or just the valleys (minima).

Figure 14-3 Counting of peaks and valleys

A histogram is then created by calculating the distribution of the number of occurrences as a function of the level at which the occurrence appeared. Figure 14-4 shows the results of processing the above peak-valley reduction according to the three types of counting methods.


Figure 14-4 Histograms of peaks (maxima), valleys (minima) and both (extrema): number of occurrences versus level for each counting mode
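A minimal sketch of the peak count method. The example signal and class limits are hypothetical, not the signal of Figure 14-3:

```python
import numpy as np

def turning_points(x):
    """Indices of peaks (local maxima) and valleys (local minima)."""
    d = np.diff(x)
    peaks = [i for i in range(1, len(x) - 1) if d[i - 1] > 0 and d[i] < 0]
    valleys = [i for i in range(1, len(x) - 1) if d[i - 1] < 0 and d[i] > 0]
    return peaks, valleys

def peak_count(x, edges, mode="extrema"):
    """Histogram of turning-point levels: 'maxima', 'minima' or 'extrema'."""
    x = np.asarray(x, float)
    peaks, valleys = turning_points(x)
    idx = {"maxima": peaks, "minima": valleys, "extrema": peaks + valleys}[mode]
    return np.histogram(x[idx], bins=edges)[0]

signal = [0.0, 2.0, -1.0, 1.0, -2.0, 1.5, 0.0, 0.5, -0.5]
edges = np.arange(-2.5, 3.0, 1.0)        # class limits -2.5, -1.5, ..., 2.5
h_max = peak_count(signal, edges, "maxima")
h_min = peak_count(signal, edges, "minima")
```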

14.2.2 Level cross counting methods


This procedure counts the number of times that the signal crosses various levels. Distinctions can be made between an upward (positive) and a downward (negative) crossing, as illustrated below. You can choose to count the positive (up) crossings, the negative (down) crossings, or both types.

Figure 14-5 Counting of level crossings

Peak counts and level cross counts are closely related. The number of positive crossings of a certain level is equal to the number of peaks above that level minus the number of valleys above it. This implies that a level cross count can be derived from a peak-valley count.

A level crossing count is typically initiated by specifying a grid on top of the signal to determine the levels. The grid can be specified in ordinate units or as a percentage of the ordinate range. The resulting histograms for the above signal, when up, down and both types of crossings are counted, are shown below.


Figure 14-6 Histograms of level crossing counts: up (+) crossings, down (-) crossings, and up & down crossings, as number of occurrences per level
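Counting the crossings of one level can be sketched as below; the example signal is hypothetical. Note that the up-crossing count of a level matches the number of peaks above it minus the number of valleys above it, as stated in the text:

```python
import numpy as np

def level_crossings(x, level):
    """Count upward (+) and downward (-) crossings of one level."""
    above = np.asarray(x) > level
    up = int(np.sum(~above[:-1] & above[1:]))     # below -> above transitions
    down = int(np.sum(above[:-1] & ~above[1:]))   # above -> below transitions
    return up, down

signal = [0.0, 2.0, -1.0, 1.0, -2.0, 1.5, 0.0, 0.5, -0.5]
up0, down0 = level_crossings(signal, 0.0)
up1, down1 = level_crossings(signal, 1.2)
```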

14.2.3 Range counting methods


A range count method will determine the number of times that a specific range
change is observed between successive peak-valley sequences.

Counting of single ranges

The range between successive peak-valley pairs is counted. Ranges are considĆ
ered positive when the slope is rising and negative when the slope is falling.

Figure 14-7 Counting of single peak-valley ranges

A histogram of the number of occurrences, as a function of the range, is generated.


Figure 14-8 Histogram of single peak-valley ranges (number of occurrences per range)
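Single range counting reduces to differencing the peak-valley sequence. The example sequence below is hypothetical:

```python
def single_ranges(extrema):
    """Signed ranges between successive extremes of a peak-valley
    sequence: positive when rising, negative when falling."""
    return [b - a for a, b in zip(extrema[:-1], extrema[1:])]

# hypothetical peak-valley sequence: one big cycle with small ripples
ranges = single_ranges([0, 4, 3, 4, 3, 4, 0])
```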

Counting of range-pairs
The counting of single ranges (usually indicated as a range count) is both simple and straightforward, but sensitive to small variations of the signal. Thus in the analysis of the left hand signal illustrated in Figure 14-9, single range counting would result in a large number of relatively small ranges.

Figure 14-9 Sensitivity of single range counting to signal variation (before and after low pass filtering)


If this signal were passed through a filter, suppressing the small load variations, the resulting signal would reveal a count of only one very large range. As a consequence the two analysis results are completely different, and the method is very sensitive to small signal variations.

The range-pair counting method overcomes this sensitivity. Rather than splitting up the signal into consecutive ranges, the signal is interpreted in terms of a ``main'' signal variation (or range) with a smaller cycle (range pair) superimposed on it.

Figure 14-10 Range pair counting


If a pair of extremes is separated by a range that is less than the defined range of interest (R), then they are `filtered out' of the range count.


14.3 Two–dimensional counting methods

The counting methods described so far consider the occurrence of single events in isolation from any other circumstances which may affect them. However, it is also meaningful to count events differently depending on other circumstances, using `two-dimensional' methods. Such methods are discussed in this section.

14.3.1 From–to–counting
Such a ``combined'' event can be the occurrence of a peak at level j followed by a valley at level i. As an example, consider the combination of a valley at level A followed by a peak at level C, as illustrated in Figure 14-11.

Figure 14-11 From-to counting (peaks and valleys numbered 1 to 12 at levels A to D)

In this example, the from-to sequence (1 to 2) is counted separately from the sequences (3 to 4) and (11 to 12), although the ranges involved are identical (C-A = D-B).

The result of such ``from-to'' counting can be presented in a so-called Markov matrix A[i,j]. The element a_ij gives the number of peaks at level j followed by a valley at level i. The matrix of results of counting the events in Figure 14-11 is shown below.


            From j
         A    B    C    D    valleys
To i  A  X    0    1    0      1
      B  0    X    1    2      3
      C  1    1    X    2      2
      D  1    2    1    X      0
peaks    X    0    2    4

The lower left triangle of the Markov matrix contains the positive from-to events; the upper right triangle summarizes the negative transitions. The additional separate row and column contain the counting results for peaks and valleys at a particular level. These results are easily obtained from the triangles of the Markov matrix.
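Building such a from-to matrix can be sketched as follows; the level labels and the short example sequence are hypothetical:

```python
import numpy as np

def from_to_matrix(extrema, levels):
    """Markov (from-to) matrix: a[i, j] counts transitions from level j
    to level i between successive extremes of a peak-valley sequence."""
    a = np.zeros((len(levels), len(levels)), dtype=int)
    idx = {lev: i for i, lev in enumerate(levels)}
    for frm, to in zip(extrema[:-1], extrema[1:]):
        a[idx[to], idx[frm]] += 1
    return a

seq = ['A', 'C', 'A', 'B', 'A', 'C']      # hypothetical valley/peak levels
M = from_to_matrix(seq, ['A', 'B', 'C'])  # rows: To, columns: From
```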

14.3.2 Range–mean counting

Another example of a two-dimensional counting method results in the so-called range-mean matrix. The variation or range (i-j) is associated with its corresponding mean value (i+j)/2.

Figure 14-12 Range mean counting

Instead of considering the actual values of A and C, the range-mean method will consider the values C-A (the range) and B (= (A+C)/2, the mean).

Ranges, means and the number of occurrences can be displayed in a 3D format.


Figure 14-13 Display of range-mean counting (number of events versus range and mean)

14.3.3 ``Range pair-range'' or ``Rainflow'' method

A two-dimensional counting method of special interest, especially for fatigue damage calculations, is the ``range pair-range'' method. An equivalent method was developed simultaneously and independently in Japan, known as the ``Rainflow method''. Both methods yield exactly the same results, i.e. they extract the same range-pairs and ranges from the signal, by combining the range-pair counting principle and the single range counting principle into one method. For further details see the references in section 14.4.

Essentially the signal is split into separate cycles, each having a specific amplitude (or range) and a mean. The result can be put directly into cumulative fatigue damage calculations according to Miner's rule and into simple crack growth calculations. Three steps are involved in the complete procedure.

1 Conversion of the load history into a peak-valley sequence.

As the counting procedure considers only the values of successive peaks and valleys, the complete signal may first be reduced to a peak-valley sequence. In doing this it is usual to apply a specific ``range filter'' or gate. For a range filter of size R, a peak (or valley) at a certain level is only recognized as such if the signal has dropped (or risen) to a level which is R lower (or higher) than the previous peak (or valley) level.


Figure 14-14 Conversion of a load history to a peak valley sequence (extremes e0 to e6, range filter of size R)

In the above example e1 is counted as a peak because the signal drops by more than the range filter size R after it.

After counting the first peak, the next valid valley is looked for, which in this case is e2. This point is validated as a valley since the signal rises by more than R to reach e3. The algorithm then searches for the next valid peak. The first peak encountered is e3, but this is not counted as a valid peak because the signal does not drop sufficiently before reaching the next extremum in the signal (e4). So the algorithm checks whether the following peak is a valid one. Peak e5 is regarded as valid since the drop in signal level following it is greater than R.

In this example the range filter eliminated the small signal variation (e3, e4) from the peak-valley sequence.

Note that increasing the range filter eliminates only those transitions from the histogram for which the range is smaller than the new value of R. This is important for fatigue purposes since it proves that the filtering is not that sensitive to the range filter size.
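The range-filtered peak-valley reduction can be sketched with a simple hysteresis scan; the load history below is hypothetical, with a small wiggle that the range filter removes:

```python
def peak_valley_reduce(x, R):
    """Reduce a load history to a peak-valley sequence, committing an
    extremum only once the signal has reversed by at least R (range filter)."""
    out = [x[0]]
    cand = x[0]
    rising = x[1] > x[0]
    for v in x[1:]:
        if (rising and v >= cand) or (not rising and v <= cand):
            cand = v                    # trend continues: better candidate
        elif abs(v - cand) >= R:        # reversal of at least R: commit extremum
            out.append(cand)
            rising = not rising
            cand = v
        # smaller reversals are wiggles removed by the range filter
    out.append(cand)
    return out

history = [0, 2, 1.7, 3, 0.5, 1, 0.8, 2.5, -1]
reduced = peak_valley_reduce(history, 1.0)  # the (1, 0.8) wiggle disappears
```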

2 Scanning of the entire signal for range-pairs.

This phase of the counting procedure consists of taking a set of four consecutive points and checking whether a range-pair is contained in it. If not, the search through the peak-valley sequence continues by shifting one data point ahead. Once a range-pair is detected, the pair is counted and removed from the sequence. After this, the next new set of four points is formed by adding the closest two previously scanned points to the two remaining after removal of the range pair. The fact that earlier scanned points are re-considered clearly distinguishes range-pair range counting from single range counting.

3 Counting the ``residue''.


At the end of the second phase, a ``residue'' of peaks and valleys is left, which is analyzed according to the single range principle. It can be shown that this residue has a specific shape, namely a diverging part followed by a converging part.
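The two scanning phases can be sketched with a four-point window; the numeric sequence below is hypothetical but mirrors the S1 to S8 example that follows (two inner pairs are counted, leaving a diverging/converging residue of four extremes):

```python
def range_pair_count(extrema):
    """Four-point range-pair scan: when the two inner extremes of a
    window of four lie within the range of the two outer ones, that
    inner cycle is counted and removed, and the closest earlier points
    are brought back into the window.  What remains is the residue."""
    seq = list(extrema)
    pairs = []
    i = 0
    while i + 3 < len(seq):
        s1, s2, s3, s4 = seq[i:i + 4]
        lo, hi = min(s1, s4), max(s1, s4)
        if lo <= s2 <= hi and lo <= s3 <= hi:
            pairs.append((s2, s3))       # count the range pair...
            del seq[i + 1:i + 3]         # ...and remove it from the sequence
            i = max(i - 2, 0)            # step back to re-examine earlier points
        else:
            i += 1                       # no pair here: shift one point ahead
    return pairs, seq                    # seq is the residue

pairs, residue = range_pair_count([-2, 3, -1, 2, 0, 4, -3, 2.5])
```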

Example
The following example shows how the range-pair range method operates.

Consider the time signal shown below. A peak-valley reduction with a range filter of size R results in the peak-valley sequence S1 to S8 shown alongside it.

The second phase (scanning for range-pair occurrences) starts by looking at the first 4 extremes. In this group (S1, S2, S3, S4), a pair is counted if the two inner extremes (S2, S3) fall within the range covered by the two outer extremes (S1 and S4). If this is not the case (as in this example), then the algorithm moves one step forward and considers the extremes S2, S3, S4 and S5. These do not satisfy the condition either, so the extremes S3, S4, S5 and S6 are considered, and this time a range pair is counted.

Counting a range-pair implies deleting the counted extremes from the signal.

``Stepping backwards'', the extremes S1, S2, S3 and S6 are now considered, and another pair (S2, S3) is found.



From the remaining four extremes, no further ``pairs'' can be extracted. This forms the residue, which is further counted as single ``from-to'' ranges.

Further considerations
The result of the range pair-range counting depends on the length of the data record being analyzed at one time, because the largest range counted will be between the lowest valley and the highest peak. This largest variation is often referred to as the `half load cycle'. If the lowest valley occurs near the beginning of a very long record, and the highest peak near the end, you should consider whether it makes physical sense to combine such occurrences, so remote in time, into one cycle.

The counting method is insensitive to the size of the range filter applied. The only effect of increasing the range filter size from R to 3R, for example, is that all elements in a from-to counting for which |from-to| < 3R become zero. In other words, the choice of the range filter size is not critical.


14.4 References

[1] Fatigue load monitoring of tactical aircraft, de Jonge J.B., 29th Meeting of the AGARD SMP, Istanbul, September 1969.

[2] The monitoring of fatigue loads, de Jonge J.B., ICAS Congress, Rome, September 1970.

[3] Statistical load data processing, van Dijk C.M., 6th ICAF Symposium, Miami, Florida USA, May 1971.

[4] Fatigue of metals subjected to varying stress, Matsuishi M. & Endo T., Kyushu district meeting, Japan Society of Mechanical Engineers, March 1968.

[5] Cycle counting and fatigue damage, Watson P., SEE Symposium of 12th February 1975, Journal of the Society of Environmental Engineers, September 1976.



Theory and Background

Part IV
Analysis and design

Chapter 15
Estimation of modal parameters . . . . . . . . . . 219

Chapter 16
Operational modal analysis . . . . . . . . . . . . . . 267

Chapter 17
Running modes analysis . . . . . . . . . . . . . . . . 281

Chapter 18
Modal validation . . . . . . . . . . . . . . . . . . . . . . . . 293

Chapter 19
Rigid body modes . . . . . . . . . . . . . . . . . . . . . . 309

Chapter 20
Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321

Chapter 21
Geometry concepts . . . . . . . . . . . . . . . . . . . . . 357

Chapter 15

Estimation of modal
parameters

This chapter describes the basic principles involved in estimating modal parameters. The topics covered are:

The definition and derivation of modal parameters

Factors to consider in the estimation

Descriptions of different parameter estimation techniques

Calculation of static compensation modes


15.1 Estimation of modal parameters

A modal analysis provides a set of modal parameters that characterize the dynamic behavior of a structure. These modal parameters form the modal model, and Figure 15-1 illustrates the process of arriving at them.

Figure 15-1 Derivation of modal parameters: the frequency response function between input and response points is measured, then a curve fit to the modal model estimates the modal parameters (frequency, damping and mode shapes)

If a structure exists on which measurements can be made, then it can be assumed that a parametric model can be defined that describes that data. The starting point is usually a set of measured data - most commonly frequency response functions (FRFs), or their time domain equivalent, impulse responses (IRs). For IRs, the relation between the modal parameters and the measurements is expressed in Equation 15-1.

$$h_{ij}(t) = \sum_{k=1}^{N} \left( r_{ijk}\,e^{\lambda_k t} + r^*_{ijk}\,e^{\lambda^*_k t} \right) \qquad \text{Eqn 15-1}$$

The corresponding relation for FRFs is given in Equation 15-2.

$$h_{ij}(j\omega) = \sum_{k=1}^{N} \left( \frac{r_{ijk}}{j\omega - \lambda_k} + \frac{r^*_{ijk}}{j\omega - \lambda^*_k} \right) \qquad \text{Eqn 15-2}$$

where

h_ij(t) = IR between the response (or output) degree of freedom (DOF) i and the reference (or input) DOF j

h_ij(jω) = FRF between the response DOF i and the reference DOF j

N = number of modes of vibration that contribute to the structure's dynamic response within the frequency range under consideration

r_ijk = residue value for mode k

λ_k = pole value for mode k

* designates the complex conjugate.

The pole value can be expressed as shown in Equations 15-3 and 15-4.

$$\lambda_k = -\sigma_k + j\omega_{dk} \qquad \text{Eqn 15-3}$$

where

ω_dk = the damped natural frequency of mode k

σ_k = the damping factor of mode k

or

$$\lambda_k = -\zeta_k\,\omega_{nk} + j\,\omega_{nk}\sqrt{1 - \zeta_k^2} \qquad \text{Eqn 15-4}$$

where

ω_nk = the undamped natural frequency of mode k

ζ_k = the damping ratio of mode k

Equation 15-5 shows that the residue can be proven to be the product of three terms

$$r_{ijk} = a_k\,v_{ik}\,v_{jk} \qquad \text{Eqn 15-5}$$


where

v_ik = the mode shape coefficient at response DOF i of mode k

v_jk = the mode shape coefficient at reference DOF j of mode k

a_k = a complex scaling constant, whose value is determined by the scaling of the mode shapes

Note that the mode shape coefficients can be either real (normal mode shapes) or complex. If the mode shapes are real, the scaling constant can be expressed as

$$a_k = \frac{1}{2j\,m_k\,\omega_{dk}} \qquad \text{Eqn 15-6}$$

where

m_k = the modal mass of mode k

The poles, natural frequencies (damped and undamped), damping factors or ratios, mode shapes and residues are commonly referred to as modal parameters (parameters of the modes of the structure).
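Equations 15-2 to 15-6 can be exercised by synthesizing an FRF from an assumed single mode. The 10 Hz natural frequency, 2% damping ratio, unit modal mass and unit real mode shape coefficients below are all illustrative assumptions:

```python
import numpy as np

fn, zeta, mk = 10.0, 0.02, 1.0                 # assumed modal parameters
wn = 2 * np.pi * fn                            # undamped natural frequency (rad/s)
wd = wn * np.sqrt(1 - zeta**2)                 # damped natural frequency
lam = -zeta * wn + 1j * wd                     # pole (Eqn 15-4)
ak = 1.0 / (2j * mk * wd)                      # scaling constant (Eqn 15-6)
vik, vjk = 1.0, 1.0                            # real mode shape coefficients
rijk = ak * vik * vjk                          # residue (Eqn 15-5)

# FRF model of Eqn 15-2 with a single mode (N = 1)
w = 2 * np.pi * np.linspace(0.1, 20.0, 2000)
h = rijk / (1j * w - lam) + np.conj(rijk) / (1j * w - np.conj(lam))
f_peak = w[np.argmax(np.abs(h))] / (2 * np.pi)
```

The magnitude of `h` peaks near the 10 Hz natural frequency, and its low-frequency limit matches the static flexibility 1/(m·ωn²), as expected for a displacement/force FRF.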

The fundamental problem of parameter estimation consists of adjusting (estimating) the parameters in the model so that the data predicted by the model approximate (or curve-fit) the measured data as closely as possible. Modal parameters can be estimated using a number of techniques, which are discussed in the following sections.

A note about units

The frequency and damping values have a dimension of 1/time, and are therefore stored in Hz.

The residues, as appearing in Equations 15-1 and 15-2, have the same dimension as the measurement data. It is important to note that residues have a dimension. Residues are composed of a product of mode shape coefficients and a scaling constant (Equation 15-5). The mode shape coefficients by themselves do not have any dimension, nor absolute (or scaled) magnitude. Dimension, and therefore units, will be viewed as attributes of the scaling constant.

Finally, for multiple input analysis, the residues are written in factored form as the product of mode shapes with modal participation factors. Again, the product of the factors has a dimension and absolute magnitude. Formally, the mode shape coefficients will again be considered as dimensionless, and therefore units will be viewed as attributes of the residues.


15.2 Types of analysis

This section discusses some general principles to be considered when performing a modal analysis. The topics include:

V Using single or multiple degree of freedom methods (section 15.2.1)

V Making local or global estimates (section 15.2.2)

V Using multiple input analysis (section 15.2.3)

V Using time or frequency domain analysis (section 15.2.4)

V Special conditions which apply when performing vibro-acoustic analysis (section 15.2.5)

The specific parameter estimation techniques are described in section 15.3.

15.2.1 Single or multiple degree of freedom method


If, in a given frequency band, only one mode is assumed to be important, then the parameters of this mode can be determined separately. This assumption is sometimes called the single degree of freedom (sDOF) assumption.

Figure 15-2 The single degree of freedom assumption

Under this assumption, the FRF equation 15-2 can be simplified to equation 15-7. This assumes the data to have the dimension of displacement over force.

    h_{ij}(\omega) = \frac{r_{ijk}}{j\omega - \lambda_k} + \frac{r_{ijk}^*}{j\omega - \lambda_k^*}          Eqn 15-7

Part IV Modal Analysis and Design 223


Chapter 15 Estimation of modal parameters

It is possible to compensate for the modes in the neighborhood of this band by introducing so-called upper and lower residual terms into the equation.

    h_{ij}(\omega) = \frac{r_{ijk}}{j\omega - \lambda_k} + \frac{r_{ijk}^*}{j\omega - \lambda_k^*} + ur_{ij} - \frac{lr_{ij}}{\omega^2}          Eqn 15-8

where

ur_{ij} = upper residual term (residual stiffness) used to approximate modes at frequencies above \omega_{max}

lr_{ij} = lower residual term (residual mass) used to approximate modes at frequencies below \omega_{min}

Upper and lower residuals are illustrated in Figure 15-3.

Figure 15-3 Upper and lower residuals
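The model of equation 15-8 can be illustrated numerically. The Python sketch below (all numerical values are invented for illustration) synthesizes a single mode FRF with upper and lower residual terms and locates its amplitude peak, which falls near the damped natural frequency:

```python
import numpy as np

# Illustrative single mode FRF (Eqn 15-8): one mode plus an upper residual
# (residual stiffness) and a lower residual (residual mass).
# All numerical values below are made up for illustration.
sigma, omega_d = 0.5, 10.0          # damping factor and damped natural freq (rad/s)
lam = -sigma + 1j * omega_d         # pole lambda_k
r = 2.0 + 0.3j                      # residue r_ijk
ur, lr = 1e-4, 5.0                  # upper / lower residual terms

omega = np.linspace(2.0, 20.0, 500)
h = (r / (1j * omega - lam)
     + np.conj(r) / (1j * omega - np.conj(lam))
     + ur                           # approximates modes above the band
     - lr / omega**2)               # approximates modes below the band

# The amplitude still peaks near the damped natural frequency.
peak_freq = omega[np.argmax(np.abs(h))]
print(round(peak_freq, 1))
```

Away from resonance the residual terms dominate: the lr/ω² term produces the mass line at low frequency and the constant ur term the stiffness line at high frequency.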

Equation 15-7 can be further simplified by neglecting the complex conjugate term, and so becomes

    h_{ij}(\omega) = \frac{r_{ijk}}{j\omega - \lambda_k}          Eqn 15-9

Single degree of freedom methods

The single DOF assumption forms the basis for parameter estimation techniques such as Peak picking, Mode picking and Circle fitting.


Multiple degree of freedom methods

The SDOF assumption is valid only if the modes of the system are well decoupled. In general this may not be the case. It then becomes necessary to approximate the data with a model that includes terms for several modes. The parameters of several modes are then estimated simultaneously with so-called multiple degree of freedom methods.

15.2.2 Local or global estimates


If you recall the time domain relationship between modal parameters and measurement functions,

    h_{ij}(t) = \sum_{k=1}^{N} \left( r_{ijk} e^{\lambda_k t} + r_{ijk}^* e^{\lambda_k^* t} \right)          Eqn 15-10

you will see that the pole values \lambda_k are independent of both the response and the reference DOFs. In other words, the pole value \lambda_k is a characteristic of the system and should be found in any function that is measured on the structure. When applying parameter estimation techniques, one of two strategies can be employed: making local or global estimates.

Local estimates

Each data record h_{ij} is analyzed individually, and a potentially different estimate of the pole value \lambda_k is found each time. Analyzing data in this manner produces as many estimates of each pole as there are data records. It is then left to the user to decide which estimate is the best, or to somehow calculate the best average of all the estimates. Peak picking and Circle fitting are techniques that calculate local estimates of pole values.

Global estimates

All the data records are analyzed simultaneously in order to estimate the structure's characteristics. With this approach, a unique estimate of the pole values \lambda_k is obtained. Such estimates are therefore called global estimates. The Least Squares Complex Exponential, Complex Mode Indicator Function and Direct Parameter Identification methods allow you to obtain global estimates of structure characteristics.


15.2.3 Multiple input analysis


Assume that data is available between N_i input DOFs and N_o output DOFs. The expression for each of the individual data records (equation 15-10) can then be rewritten in matrix form for all the data records.

    [H(t)] = \sum_{k=1}^{N} \left( [R_k] e^{\lambda_k t} + [R_k^*] e^{\lambda_k^* t} \right)          Eqn 15-11

where

[H] = (N_o, N_i) matrix with h_{ij} as elements

[R_k] = (N_o, N_i) matrix with r_{ijk} as elements

Equation 15-5 can be used to express the residue matrix in factored form,

    [R_k] = a_k \{V\}_k \langle V \rangle_k          Eqn 15-12

where

\{V\}_k = N_o vector (column) with mode shape coefficients at the output DOFs

\langle V \rangle_k = N_i vector (row) with mode shape coefficients at the input DOFs

If DOFs i and j are both output and input DOFs, then the above equation implies Maxwell-Betti reciprocity,

    r_{ijk} = r_{jik}          Eqn 15-13

This assumption is not essential however, since the residue matrix can be expressed in a more general form,

    [R_k] = \{V\}_k \langle L \rangle_k          Eqn 15-14

where \langle L \rangle_k is a vector (row) with N_i coefficients that express the participation of the mode k in response data relative to the different input DOFs. These coefficients are therefore called modal participation factors. Note that if reciprocity is assumed, then the modal participation factors are proportional to the mode shape coefficients at the input DOFs.


Using the factored form of the residue matrix, equation 15-11 can be written as,

    [H(t)] = \sum_{k=1}^{N} \left( \{V\}_k \langle L \rangle_k e^{\lambda_k t} + \{V^*\}_k \langle L^* \rangle_k e^{\lambda_k^* t} \right)          Eqn 15-15

If just the data between any output DOF and all input DOFs are considered, then

    \langle H \rangle_i(t) = \sum_{k=1}^{N} \left( v_{ik} \langle L \rangle_k e^{\lambda_k t} + v_{ik}^* \langle L^* \rangle_k e^{\lambda_k^* t} \right)          Eqn 15-16

where

\langle H \rangle_i = N_i vector of data between output DOF i and all input DOFs.

It is essential in the model of equation 15-16 that both the poles and the modal participation factors are independent of the output DOF. In other words, in this formulation the characteristics become

    \langle L \rangle_k e^{\lambda_k t}          Eqn 15-17

A multiple input modal parameter estimation technique is one that analyses data relative to several inputs simultaneously to estimate the characteristics expressed by equation 15-17 (i.e. both the pole values and the modal participation factors). The basis for these techniques is the model expressed by equation 15-16.

The identification of modal participation factors is essential for decoupling highly coupled or even repeated roots. To illustrate this, consider a structure that has two modes with pole values \lambda_1 and \lambda_2 very close to each other. Neglecting the other modes and the complex conjugate terms, the response data relative to the input DOF j can be expressed as

    \{H\}_j(t) = \{V\}_1 l_{1j} e^{\lambda_1 t} + \{V\}_2 l_{2j} e^{\lambda_2 t} + \ldots          Eqn 15-18

or, since \lambda_1 \approx \lambda_2 = \lambda,

    \{H\}_j(t) = \left( \{V\}_1 l_{1j} + \{V\}_2 l_{2j} \right) e^{\lambda t} + \ldots          Eqn 15-19


The latter equation shows that in the response data relative to an input DOF j, a combination of the coupled modes is observed and not the individual modes. The combination coefficients for the modes are the modal participation factors l_{1j} and l_{2j}.

The response data relative to another input DOF l is expressed by an equation similar to equation 15-19.

    \{H\}_l(t) = \left( \{V\}_1 l_{1l} + \{V\}_2 l_{2l} \right) e^{\lambda t} + \ldots          Eqn 15-20

The only difference between these last two equations is the modal participation factors l_{1l} and l_{2l}. If they are linearly independent of the modal participation factors for input j, then the modes will appear in a different combination in the response data relative to input l. As a multiple input parameter estimation technique analyses data relative to several inputs simultaneously, and the modal participation factors are identified, it is possible to detect highly coupled or repeated modes.

15.2.4 Time vs frequency domain implementation


Using digital signal processing methods, only samples of a continuous function are available. For modal parameter estimation the sampled data consist most frequently of FRF measurements. Normally these are taken at equally spaced frequency lines, but testing techniques such as stepped sine excitation allow you to measure data at unequally spaced frequency lines.

For modal parameter estimation applications with the data measured in the frequency domain, introducing the sampled nature of the data transforms the equation for the model to

    h_{ij,n}(j\omega) = \sum_{k=1}^{N} \left[ \frac{r_{ijk}}{j\omega_n - \lambda_k} + \frac{r_{ijk}^*}{j\omega_n - \lambda_k^*} \right]          Eqn 15-21

where

h_{ij,n} = samples of data in the measured range.


\omega_n = sampled value of frequency in the measured range.

A frequency domain parameter estimation method uses data directly in the frequency domain to estimate modal parameters. It is therefore irrelevant whether the frequency lines are equally spaced or not. These methods are based directly on the model expressed by equation 15-21.

If the data are sampled at equally spaced frequency lines, then the FRF can be transformed back to the time domain to obtain a corresponding Impulse Response (IR). A Fast Fourier Transform (FFT) algorithm is used for this transformation, but the restriction that the number of frequency lines be a power of 2 (e.g. 32, 64, 128, ...) no longer applies. After transformation, a series of equally spaced samples of the corresponding impulse response functions is obtained. A time domain parameter estimation technique allows you to analyze such equally spaced time samples to estimate modal parameters.

In practice, a variety of conditions mean that the frequency band over which data is analyzed is smaller than the full measurement band. This is illustrated in Figure 15-4.

Figure 15-4 Analysis frequency band vs. measurement band

The analysis frequency band includes only three modes whereas the measurement band includes five. If the data is transformed from the frequency to the time domain, then the time increment between samples will be determined by the analysis frequency band and not the measurement band. If the frequency band of analysis is bounded by \omega_{max} and \omega_{min}, then \Delta t is determined from

    \Delta t = \frac{2\pi}{2(\omega_{max} - \omega_{min})}          Eqn 15-22

By substituting sampled time for continuous time


    h_{ij,n} = \sum_{k=1}^{N} \left( r_{ijk} e^{\lambda_k n \Delta t} + r_{ijk}^* e^{\lambda_k^* n \Delta t} \right)          Eqn 15-23

or

    h_{ij,n} = \sum_{k=1}^{N} \left( r_{ijk} z_k^n + r_{ijk}^* z_k^{*n} \right)          Eqn 15-24

where

    z_k = e^{\lambda_k \Delta t}          Eqn 15-25

Time domain parameter estimation methods are based on the model defined by equation 15-24. They analyze h_{ij,n} to estimate z_k. \lambda_k is then calculated from equation 15-25. Note however that this calculation is not unique, since for any integer m

    z_k = e^{(\lambda_k + jm \cdot 2\pi/\Delta t)\Delta t} = e^{\lambda_k \Delta t}          Eqn 15-26

This implies that no poles outside a frequency band of width 2\pi/\Delta t can be identified. In other words, with a time domain parameter estimation method, all estimated poles are to be found in the frequency band of analysis (\omega_{min}, \omega_{max}). This may cause problems in estimating modal parameters if the data in the frequency band of analysis is strongly influenced by modes outside this band (residual effects). Since with frequency domain methods \lambda_k is estimated directly, no such limitation arises. A frequency domain technique may therefore sometimes be preferred over a time domain technique for analyzing data over a narrow frequency band, where residual effects are important.
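The non-uniqueness expressed by equation 15-26 can be demonstrated directly. In this Python sketch (band limits and pole value are invented for illustration), a pole and the same pole shifted by 2π/Δt map onto the same discrete characteristic z_k:

```python
import numpy as np

# Mapping between a continuous-time pole and its discrete characteristic
# z_k = exp(lambda_k * dt) (Eqn 15-25), using made-up values.
f_min, f_max = 10.0, 60.0                                # analysis band in Hz
dt = 2 * np.pi / (2 * (2 * np.pi * f_max - 2 * np.pi * f_min))   # Eqn 15-22

lam = -1.5 + 1j * 2 * np.pi * 25.0                       # pole inside the band
z = np.exp(lam * dt)

# A pole shifted by 2*pi/dt maps onto the same z (Eqn 15-26), so a time
# domain method cannot distinguish it from lam.
lam_shifted = lam + 1j * 2 * np.pi / dt
z_shifted = np.exp(lam_shifted * dt)
print(np.isclose(z, z_shifted))          # same discrete characteristic

# For a pole inside the analysis band, lambda is recovered unambiguously.
lam_recovered = np.log(z) / dt
print(np.isclose(lam_recovered, lam))
```

Only when the pole lies inside the band does the principal branch of the logarithm return the original pole; any pole outside is aliased back into the band.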

15.2.5 Vibro-acoustic modal analysis

Coupling between the structural dynamic behavior of a system and its interior acoustical characteristics can have an important impact in many applications. Based on combined vibrational and acoustical measurements with respect to acoustical or structural excitation, a mixed vibro-acoustical analysis can be performed.

The finite element equation of motion is used to derive the equations describing the vibro-acoustical behavior:


    \left( -\omega^2 [M_S] + j\omega [C_S] + [K_S] \right) \{x\} = \{f\} + \{l_p\}          Eqn 15-27

with

[M_S], [C_S], [K_S] = the structural mass, damping and stiffness matrices

\{f\} = the externally applied forces

\{l_p\} = the acoustical pressure loading vector

In the fluid, the indirect acoustical formulation states:

    \left( -\omega^2 [M_f] + j\omega [C_f] + [K_f] \right) \{p\} = \{\dot{q}\} + \omega^2 \{l_f\}          Eqn 15-28

with

[M_f], [C_f], [K_f] = matrices describing the pressure-volume acceleration relation

\omega^2 \{l_f\} = the acoustical loading due to structural motion
Combining these equations with

{l p} Ă  pĂdS Eqn 15-29


Sb

{l f} Ă  Ăx ĂdS
N
Eqn 15-30
Sb

and rewriting the formulations results in the description of the vibro-acoustical


coupled system:

    \left( \begin{bmatrix} K_S & K_c \\ 0 & K_f \end{bmatrix} + j\omega \begin{bmatrix} C_S & 0 \\ 0 & C_f \end{bmatrix} - \omega^2 \begin{bmatrix} M_S & 0 \\ M_c & M_f \end{bmatrix} \right) \begin{Bmatrix} x \\ p \end{Bmatrix} = \begin{Bmatrix} f \\ \dot{q} \end{Bmatrix}          Eqn 15-31

where K_c couples the acoustical pressure into the structural equation (K_c \{p\} = -\{l_p\}) and M_c couples the structural motion into the acoustical equation (M_c \{x\} = \{l_f\}).

This represents a second order model formulation of the vibro-acoustical behavior, which is clearly non-symmetrical.

The above equation also reflects the vibro-acoustical reciprocity principle, which can be expressed as:

    \left. \frac{p_i}{f_j} \right|_{\dot{q} = 0} = \left. \frac{\ddot{x}_j}{\dot{q}_i} \right|_{f = 0}          Eqn 15-32

Most of the multiple input - multiple output modal parameter algorithms do not require symmetry. So the non-symmetry of the basic set of equations, and hence of the modal description, does not pose a problem in obtaining valid modal frequencies, damping factors and mode shapes.


Structural excitation can be substituted for acoustical excitation. The modal models derived from both are compatible, but differ in a scaling factor per mode due to the special non-symmetry of the set of equations. To go from the structural formulation to the acoustical formulation, a scaling factor which is the squared eigenvalue of the corresponding mode is required. This is fully explained in the paper 'Vibro-acoustical Modal Analysis: Reciprocity, Model Symmetry and Model Validity' by K. Wyckaert and F. Augusztinovicz.


15.3 Parameter estimation methods

A summary of different methods and their applications is given in Table 15.1.

Method                      Application           DOF     Domain   Estimates   Inputs

Peak picking                frequency,            single  freq     local       single
                            damping
Mode picking                mode shapes           single  freq     local       single
Circle fitting              frequency,            single  freq     local       single
                            damping,
                            mode shapes
Complex Mode                frequency,            multi   freq     global      single or
Indicator Function          damping,                                           multiple
                            mode shapes
Least Squares               frequency,            multi   time     global      single or
Complex Exponential         damping,                                           multiple
                            modal participation
                            factors
Least Squares               mode shapes           multi   freq     global      single or
Frequency Domain                                                               multiple
Frequency Domain            frequency,            multi   freq     global      single or
Direct Parameter            damping,                                           multiple
Identification              modal participation
                            factors

Table 15.1 Parameter estimation methods and applications

Selection of a method

A guide on which parameter estimation method to adopt is outlined below. Details of all the methods are given in the following sections.

SDOF

Single degree of freedom curve fitters are rough and ready, and will give you a quick impression of the most dominant modes (frequency, damping and mode shapes) influencing a structure under test. As such they are useful in checking the measurement setup and can help assess:

-  whether all the transducers are working and correctly calibrated;
-  whether the accelerometers are correctly labelled with their node and direction;
-  whether all the nodes are instrumented.

For this purpose it is recommended to identify real modes, since these are the easiest to interpret when displayed. The circle fitter gives the most accurate estimates of the SDOF techniques, but may create large errors at nodal points of the mode shapes.
Complex MIF

This method can be used in the same way as the SDOF techniques to give you an idea of the most dominant modes and to check the test setup. It has the advantage that multiple input FRFs can be used and the mode shape estimates are of a higher quality. Furthermore, it can extract a modal model that includes the most dominant modes in a particular frequency band.
Time domain MDOF

This is the most general purpose parameter estimation technique and is probably the standard tool used in modal analysis. It provides a complete and accurate modal model from MIMO FRFs. Its major weakness is in analyzing heavily damped systems where the damping is greater than 5%, such as a fully equipped car.
Frequency domain MDOF

The Frequency Domain Direct Parameter Identification technique provides results similar to the time domain technique described above in terms of accuracy, but is generally slower. It is weak when dealing with lightly damped systems (damping less than 0.3%) but performs better on heavily damped ones, thus complementing the other MDOF technique. Since it operates in the frequency domain, it is able to analyze FRFs with an unequally spaced frequency axis.

15.3.1 Peak picking

Peak picking is a single DOF method to make local estimates of frequency and damping. The method is based on the observation that the system response goes through an extremum in the neighborhood of the natural frequencies.

For example, on a frequency response function (FRF) the real part will be zero around the natural frequency (minimum coincident part), the imaginary part will be maximal (peak quadrature) and the amplitude will also be maximal (peak amplitude). The frequency value where this extremum is observed is called the resonant frequency \omega_r, and is a good estimate of the natural frequency of the mode \omega_{nk} for lightly damped systems.


A corresponding estimate of the damping can be found with the 3 dB rule. The frequency values \omega_1 and \omega_2 on both sides of the peak of the FRF, at which the amplitude is 1/\sqrt{2} of the peak amplitude (3 dB down), are introduced in the formula in equation 15-33 to yield the critical damping ratio. The method is illustrated in Figure 15-5 below. \omega_1 and \omega_2 are also called half power points.

    \zeta = \frac{\omega_2 - \omega_1}{2\omega_r}          Eqn 15-33

Figure 15-5 Half power (3 dB) method for damping estimates

Since the curve fitter locates the resonance frequency on a spectral line, significant errors can be introduced if the FRF has a low frequency resolution and the peaks of modes fall between two spectral lines. This can be compensated for by extrapolating the slopes on either side of the picked line to determine the amplitude of the FRF more precisely.

It may be necessary to deal with the situation where one of the half power points is not found. This may arise when the frequency of one mode is close to that of another mode, or when it is near one of the ends of the measured frequency range.

Note! Peak picking is a single DOF method: it is therefore only suitable for data with
well separated modes.

As this method yields local estimates, it requires only one data record to obtain
frequency and damping values for all modes. However, if several data records
are available, it may be that different records identify different modes.
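As a rough numerical illustration of the procedure, the following Python sketch (with an invented single mode FRF) picks the peak and applies the half power rule of equation 15-33. For the chosen values the critical damping ratio should come out near σ/ω_r = 0.02:

```python
import numpy as np

# Minimal sketch of peak picking with the half power (3 dB) rule
# (Eqn 15-33) on a synthetic single mode FRF with made-up parameters.
sigma, omega_d, r = 2.0, 100.0, 50.0
omega = np.linspace(50.0, 150.0, 20001)
h = np.abs(r / (sigma + 1j * (omega - omega_d)))   # SDOF amplitude

i_peak = np.argmax(h)
omega_r = omega[i_peak]                  # resonant frequency estimate
half_power = h[i_peak] / np.sqrt(2)      # amplitude 3 dB below the peak

# Half power points omega_1 and omega_2 on either side of the peak.
omega_1 = omega[:i_peak][h[:i_peak] >= half_power][0]
omega_2 = omega[i_peak:][h[i_peak:] >= half_power][-1]

zeta = (omega_2 - omega_1) / (2 * omega_r)   # critical damping ratio (Eqn 15-33)
print(round(omega_r, 1), round(zeta, 3))
```

Note how the half power points land roughly one damping factor σ on either side of the peak, so the estimate degrades quickly when the frequency resolution is coarse compared with σ.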


15.3.2 Mode picking

If you assume that the modes are uncoupled and lightly damped, the modal amplitude can be computed from the peak quadrature or peak amplitude of the FRF. With this assumption, the data in the neighborhood of the resonant frequency can be approximated by

    h_{ij,n} \approx \frac{r_{ijk}}{j\omega_n - \lambda_k}          Eqn 15-34

(see also equation 15-7)

The amplitude is maximum at the resonant frequency. However, for lightly damped modes, the resonant frequency, natural frequency and damped natural frequency are all approximately the same. Therefore, the amplitude at resonance, or the modal amplitude, is found at \omega_n equal to \omega_{dk}.

By substituting \omega_{dk} for \omega_n in equation 15-34, the modal amplitude is given by

    h_{ij}(\omega_{dk}) \approx \frac{r_{ijk}}{\sigma_k}          Eqn 15-35

Note that from the modal amplitude, a residue or mode shape estimate is obtained by multiplying by the modal damping.

To use the Mode picking method you must have an estimate of \omega_{dk}. This estimate can be obtained with the Peak picking method (see section 15.3.1) or other techniques.

The Mode picking method is obviously quite sensitive to frequency shifts in the data. If, for example, the resonant frequency of a mode in a data record is shifted a few spectral lines with respect to the frequency that is used as the resonant frequency for that mode, then the modal amplitude would be erroneously picked. To accommodate situations where frequency shifts occur, you need to specify an allowed frequency shift around the resonant frequencies \omega_{dk} that are used to calculate the modal amplitudes. Rather than picking the modal amplitude at the resonant frequencies, the method then scans a band around each modal frequency for each data record. The maximum amplitude in this band is used to determine the modal amplitude and thus the mode shape coefficient.

Mode picking allows you to make a very quick determination of a modal model. The accuracy of this model, however, depends on how well the assumptions of the method apply to the data.
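The band scan can be sketched as follows in Python; the mode parameters, the deliberately shifted frequency estimate and the allowed shift are all invented for illustration:

```python
import numpy as np

# Sketch of the Mode picking band scan: instead of reading the amplitude
# exactly at the (possibly inaccurate) resonant frequency estimate, scan
# an allowed frequency shift around it and take the maximum amplitude.
sigma, omega_d, r = 1.0, 40.0, 10.0
omega = np.linspace(20.0, 60.0, 4001)
h = r / (1j * omega - (-sigma + 1j * omega_d))     # Eqn 15-34 model

omega_est = 40.3          # resonant frequency estimate, deliberately shifted
allowed_shift = 1.0       # user-specified allowed frequency shift

band = (omega >= omega_est - allowed_shift) & (omega <= omega_est + allowed_shift)
modal_amplitude = np.max(np.abs(h[band]))          # ~ |r| / sigma (Eqn 15-35)
residue_estimate = modal_amplitude * sigma         # multiply back by modal damping

print(round(residue_estimate, 2))
```

Without the scan, reading the amplitude at 40.3 instead of 40.0 would underestimate the modal amplitude; the band scan recovers the true peak.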


15.3.3 Circle fitting

The Circle fitting method is based on estimating a circle in the complex plane through data points in a band around a selected mode. The method was originally developed by Kennedy and Pancu for lightly damped systems under the single DOF assumption. In the band around a mode, the data can be approximately described by

    h_{ij,n} = \frac{r_{ijk}}{j\omega_n - \lambda_k} + \frac{r_{ijk}^*}{j\omega_n - \lambda_k^*}          Eqn 15-36

Making an abstraction of the indices i, j and k, introducing complex notation for the residue, and approximating the complex conjugate term by a complex constant, equation 15-36 transforms to

    h_n = \frac{U + jV}{\sigma + j(\omega_n - \omega_d)} + (R + jI)          Eqn 15-37

It can be demonstrated that the modal parameters in this expression can be derived from the coefficients of a circle that is fitted to the data in the complex plane, as shown in Figure 15-6.

Figure 15-6 Relation between circle fitting parameters and modal parameters

The natural frequency \omega_d is determined by the maximum angular spacing method, where the natural frequency is assumed to occur at the point of maximum rate of change of angle between data points in the complex plane.


Having determined the natural frequency and assuming a lightly damped system, the damping is given by equation 15-38, where \omega_1 and \omega_2 are the frequencies of two data points on either side of \omega_d, and \theta_1 and \theta_2 the angles they subtend on the circle with respect to the resonance point.

    \sigma = \frac{\omega_2 - \omega_1}{\tan(\theta_1/2) + \tan(\theta_2/2)}          Eqn 15-38

The complex residue U + jV is determined from the diameter of the circle d and the phase angle \phi, as illustrated in Figure 15-6.

    d = \frac{\sqrt{U^2 + V^2}}{\sigma}          Eqn 15-39

    \phi = \arctan \frac{V}{U}          Eqn 15-40

Circle fitting is a basic SDOF parameter estimation method. It can be used to obtain frequency, damping and mode shape estimates. The method is fast, but should really be used interactively to obtain the best possible results.
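The first step, fitting a circle through complex FRF points, can be sketched with an algebraic least squares (Kasa) circle fit. This is only one possible fitting scheme, and the data below are synthetic points generated from the model of equation 15-37 without the offset term:

```python
import numpy as np

# Sketch of the circle fitting step: a least squares (Kasa) fit of a
# circle through FRF data points in the complex plane. The modal
# parameters would then follow from the circle coefficients and the
# angular spacing of the points (Figure 15-6). Values are made up.
sigma, omega_d = 1.0, 30.0
U, V = 4.0, 1.0
omega = np.linspace(27.0, 33.0, 61)
h = (U + 1j * V) / (sigma + 1j * (omega - omega_d))   # Eqn 15-37, no offset

x, y = h.real, h.imag
# Solve x^2 + y^2 + D*x + E*y + F = 0 in a least squares sense.
A = np.column_stack([x, y, np.ones_like(x)])
b = -(x**2 + y**2)
(D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
xc, yc = -D / 2, -E / 2                   # circle center
radius = np.sqrt(xc**2 + yc**2 - F)

# For this model the circle diameter equals |U + jV| / sigma (Eqn 15-39).
print(round(2 * radius, 2), round(np.hypot(U, V) / sigma, 2))
```

Since the synthetic points lie exactly on a circle, the fit recovers the diameter |U + jV|/σ to machine precision; with noisy measured data an interactive choice of the band around the mode becomes important.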

15.3.4 Complex mode indicator function

The Complex Mode Indicator Function (CMIF) method allows you to identify a modal model for a mechanical system where multiple reference FRFs were measured. The method provides a quick and easy way of determining the number of modes in a system and of detecting the presence of repeated roots. This information can then be used as a basis for more sophisticated multiple input techniques such as LSCE or FDPI. However, in cases where modes are well excited and obvious, it can yield sufficiently accurate estimates of modal parameters.

The FRF matrix of a system with N_o output and N_i input degrees of freedom can be expressed as follows

    [H(\omega)] = \sum_{r=1}^{2N} \{\psi\}_r \frac{Q_r}{j\omega - \lambda_r} \{L\}_r^T          Eqn 15-41

or in matrix form as

    [H(\omega)] = [\Psi] \left[ \frac{Q_r}{j\omega - \lambda_r} \right] [L]^T          Eqn 15-42


where

[H(\omega)] = the FRF matrix of size N_o by N_i

[\Psi] = the mode shape matrix of size N_o by 2N

Q_r = the scaling factor for the rth mode

\lambda_r = the system pole value for the rth mode

[L]^T = the transposed modal participation factor matrix of size 2N by N_i

Taking the singular value decomposition of the FRF matrix at each spectral line results in

    [H] = [U][S][V]^H          Eqn 15-43

where

[U] = the left singular matrix, corresponding to the matrix of mode shape vectors

[S] = the diagonal singular value matrix

[V] = the right singular matrix, corresponding to the matrix of modal participation vectors

In comparing equations 15-42 and 15-43, the mode shape and modal participation vectors in equation 15-42 are, through the singular value decomposition, scaled to be unitary vectors, and the mass matrix is assumed to be an identity matrix, so that the orthogonality of the modal vectors is still satisfied.

For any one mode, the natural frequency is the one where the maximum singular value occurs.

The Complex Mode Indicator Function is defined as the eigenvalues solved from the normal matrix, which is formed from the FRF matrix ([H]^H [H]) at each spectral line.

    [H]^H [H] = [V][S]^2 [V]^H          Eqn 15-44

    CMIF_k(\omega) = \lambda_k(\omega) = s_k(\omega)^2, \quad k = 1, 2, \ldots, N_i          Eqn 15-45

where

\lambda_k(\omega) = the kth eigenvalue of the normal FRF matrix at frequency \omega


s_k(\omega) = the kth singular value of the FRF matrix at frequency \omega

N_i = the number of inputs
In practice, the [H]^H [H] matrix is calculated at each spectral line and the eigenvalues are obtained. The CMIF is a plot of these values on a log scale as a function of frequency. The same number of CMIFs can be obtained as there are references. Distinct peaks indicate modes, and their corresponding frequency is the damped natural frequency of the mode. This is illustrated in Figure 15-7. Peaks in the CMIF function can be searched for automatically, whilst taking into account criteria that are used to eliminate spurious peaks due to noise or measurement errors.
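The CMIF computation itself reduces to a singular value decomposition per spectral line. The following Python sketch builds a synthetic two mode, three output, two input FRF matrix (all values invented) and evaluates equation 15-45:

```python
import numpy as np

# Sketch of the CMIF: singular values of the FRF matrix at each spectral
# line (Eqns 15-43 to 15-45), for a made-up 3-output, 2-input system.
rng = np.random.default_rng(0)
poles = np.array([-1.0 + 1j * 50.0, -1.5 + 1j * 80.0])       # two modes
shapes = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
parts = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

omega = np.linspace(30.0, 100.0, 701)
cmif = np.empty((omega.size, 2))
for n, w in enumerate(omega):
    H = np.zeros((3, 2), dtype=complex)
    for k, lam in enumerate(poles):
        Rk = np.outer(shapes[:, k], parts[k])                # factored residue
        H += Rk / (1j * w - lam) + np.conj(Rk) / (1j * w - np.conj(lam))
    s = np.linalg.svd(H, compute_uv=False)                   # singular values
    cmif[n] = s**2                                           # CMIF_k = s_k^2

# Peaks of the first CMIF indicate damped natural frequencies.
peak = omega[np.argmax(cmif[:, 0])]
print(round(peak, 1))
```

Because the singular values are returned in descending order, the first column is the first order CMIF and the second column the second order CMIF.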

Figure 15-7 Example of a CMIF showing selected frequencies

When the frequencies have been selected, equations 15-43 and 15-44 can be used to yield the complex conjugate of the modal participation factors [V], and the as yet unscaled mode shape vectors [U]. The unscaled mode shape vectors and the modal participation factors are used to generate an enhanced FRF for each mode r, defined by

    H_r^E(\omega) = \{U\}_r^H [H(\omega)] \{V\}_r          Eqn 15-46

Since the mode shape vectors and modal participation factors are normalized to unitary vectors by the singular value decomposition, the enhanced FRF is actually the decoupled single mode response function

    H_r^E(\omega) = \frac{Q_r}{j\omega - \lambda_r}          Eqn 15-47

A single degree of freedom method (such as the circle fitting technique) can now be applied to improve the accuracy of the natural frequency estimate, and then to extract damping values and the scaling factor for the mode shape.


Figure 15-8 Example of a CMIF and the corresponding enhanced FRF

One CMIF can be calculated for each reference DOF. They can be sorted in terms of the magnitude of the eigenvalues, and they can all be plotted as a function of frequency, as shown in the example in Figure 15-9.

Figure 15-9 Example of first and second order CMIFs


Cross checking and tracking

At any one frequency, these functions will indicate how many significant independent phenomena are taking place, as well as their relative importance. At a resonance, at least one CMIF will peak, implying that at least one mode is active. At a different frequency, however, it may be that a different mode has increased its influence and is the major contributor to the response. Between resonances, a cross over point can occur where the contributions of two modes are equal. This can result in a higher order CMIF exhibiting peaks if they are sorted as shown in Figure 15-9, and in one CMIF exhibiting a dip at the same time as a lower order function exhibits a peak.

A check on peaks in the second order CMIF functions can be made to determine whether or not they are due to the cross over effect or to a genuine pole of second order. This is done by calculating the MAC matrix using data on either side of the frequency of interest.

The MAC matrix is

    [ MAC(\psi_{1a}, \psi_{1b})   MAC(\psi_{2a}, \psi_{1b}) ]
    [ MAC(\psi_{1a}, \psi_{2b})   MAC(\psi_{2a}, \psi_{2b}) ]

where a and b represent the frequencies on either side of the peak, and \psi_1 and \psi_2 the vectors associated with the first and second CMIF functions (CMIF 1 contains the larger values and CMIF 2 the smaller ones).

When this MAC matrix approximates a unity matrix,

    [ ~1  ~0 ]
    [ ~0  ~1 ]

the peak in CMIF_2 represents a resonance peak: the mode is not changing between frequencies a and b. When this MAC matrix is anti-diagonal,

    [ ~0  ~1 ]
    [ ~1  ~0 ]

the peak in CMIF_2 represents a cross over point: the mode is switching between frequencies a and b.

Peak picking can be facilitated by using tracked CMIFs. This alters the display of the CMIFs: when the mode shapes represented by two CMIFs switch, the CMIFs are also switched. This is determined by the cross over check described above.
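The structure of this check can be sketched as follows. The MAC definition used here is the standard one, and the two mode shape vectors are random stand-ins for the singular vectors at frequencies a and b:

```python
import numpy as np

def mac(u, v):
    # Modal Assurance Criterion between two complex vectors.
    return abs(np.vdot(u, v))**2 / (np.vdot(u, u).real * np.vdot(v, v).real)

rng = np.random.default_rng(1)
phi1 = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # first mode shape
phi2 = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # second mode shape

# Case 1: the same modes dominate at a and b -> MAC matrix ~ unity matrix,
# so a peak in CMIF_2 between a and b is a genuine second order resonance.
same = np.array([[mac(phi1, phi1), mac(phi2, phi1)],
                 [mac(phi1, phi2), mac(phi2, phi2)]])

# Case 2: the modes switch order between a and b -> anti-diagonal MAC
# matrix, so the peak in CMIF_2 is a cross over point.
switched = np.array([[mac(phi1, phi2), mac(phi2, phi2)],
                     [mac(phi1, phi1), mac(phi2, phi1)]])

print(same.round(2))
print(switched.round(2))
```

In practice the vectors would be the left singular vectors of [H] at the spectral lines a and b, and the off-diagonal entries would be compared against a threshold rather than inspected by eye.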


An example of the tracked versions of the CMIFs illustrated in Figure 15-9 is shown below.

Figure 15-10 Example of first and second order tracked CMIFs

15.3.5 Least squares complex exponential

The Least Squares Complex Exponential (LSCE) method allows you to estimate values of modal frequency and damping for several modes simultaneously. Since all the data is analyzed simultaneously, global estimates are obtained.

To understand how the method works, recall the expression for an impulse response (IR) given below

    h_{ij}(t) = \sum_{k=1}^{N} \left( r_{ijk} e^{\lambda_k t} + r_{ijk}^* e^{\lambda_k^* t} \right)          Eqn 15-48

It can be seen from this expression that the pole values \lambda_k are not a function of a particular response (output) or reference (input) DOF. In other words, the pole values are global (rather than local) characteristics of the structure: they are the same for any measured FRF on the structure. It should therefore be possible to use all the available data measured on the system to identify global estimates simultaneously.

This method can be used with single and multiple inputs.


Model for continuous data

A particular problem when trying to work with equation 15-48 to achieve the above objective is that it contains the residues r_{ijk}, which do depend on the response and reference DOFs. It is therefore essential to define another parametric model for the data h_{ij}, in which the coefficients are independent of response and reference DOFs and can be used to identify estimates for \lambda_k. It can be proved that such a model takes the form of a linear differential equation of order 2N with constant real coefficients

    (d/dt)^{2N} h_{ij} + a_1 (d/dt)^{2N-1} h_{ij} + \ldots + a_{2N} h_{ij} = 0          Eqn 15-49

Indeed, equation 15-48 expresses the data as a linear superposition of a set of 2N damped complex exponentials occurring in complex conjugate pairs. Such complex exponentials can be viewed as the characteristic solutions of a linear differential equation with constant real coefficients

    (d/dt)^{2N} f(t) + a_1 (d/dt)^{2N-1} f(t) + \ldots + a_{2N} f(t) = 0          Eqn 15-50

The impulse response, being a linear superposition of characteristic solutions, is by itself also a characteristic solution. Therefore equation 15-49 is valid if the coefficients are such that

    \lambda^{2N} + a_1 \lambda^{2N-1} + \ldots + a_{2N} = 0 \quad \text{for } \lambda = \lambda_k, \; \lambda = \lambda_k^*, \quad k = 1 \ldots N          Eqn 15-51

Turning the reasoning around, therefore, one could first try to estimate the coefficients in equation 15-49 using all available data. Estimates of the complex exponential coefficients \lambda_k can then be found by solving equation 15-51.

Model for sampled data

Measured data is however sampled, not continuous. So rather than working from equation 15-48, it is necessary to work with

    h_{ij,n} = \sum_{k=1}^{N} \left( r_{ijk} z_k^n + r_{ijk}^* z_k^{*n} \right), \quad z_k = e^{\lambda_k \Delta t}          Eqn 15-52

Instead of damped complex exponentials, the characteristics are now power series with base numbers z_k.


Following a similar reasoning to that explained above for continuous data, it can be proved that the sampled data is the solution of a linear finite difference equation with constant real coefficients of order 2N (instead of a differential equation as for continuous data).

    h_{ij,n} + a_1 h_{ij,n-1} + \ldots + a_{2N} h_{ij,n-2N} = 0          Eqn 15-53

The characteristics $z_k$, and therefore the poles $\lambda_k$, can be found by solving

$$z_k^{2N} + a_1 z_k^{2N-1} + \dots + a_{2N} = 0 \qquad \text{Eqn 15-54}$$

Practical implementation of the method

The Least Squares Complex Exponential is a method that estimates the coefficients in equation 15-53 using data measured on the system.

In principle any data record $h_{ij,n}$ can be used. Applying the method to just a single data record at a time will result in local estimates of the poles.

To estimate the coefficients in equation 15-53 in a least squares sense, the equations for all possible time points and all possible response and reference DOFs are to be solved simultaneously, as indicated in equation 15-55. This equation system will be greatly overdetermined. To find the least squares solution the normal equations technique can be applied so that the final solution is calculated from a compact equation with a square coefficient matrix, equation 15-56. The coefficient matrix in this equation is called a covariance matrix.

$$\begin{bmatrix}
h_{11,2N-1} & \cdots & h_{11,0} \\
\vdots & & \vdots \\
h_{11,N_t-1} & \cdots & h_{11,N_t-2N} \\
\vdots & & \vdots \\
h_{ij,n-1} & \cdots & h_{ij,n-2N} \\
\vdots & & \vdots \\
h_{N_0N_i,N_t-1} & \cdots & h_{N_0N_i,N_t-2N}
\end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \\ \vdots \\ a_{2N} \end{Bmatrix}
= -\begin{Bmatrix} h_{11,2N} \\ \vdots \\ h_{11,N_t} \\ \vdots \\ h_{ij,n} \\ \vdots \\ h_{N_0N_i,N_t} \end{Bmatrix} \qquad \text{Eqn 15-55}$$

where

$N_t$ = last available time sample

$N_0$ = number of response DOFs

$N_i$ = number of input DOFs

We can write this in a simpler manner

Part IV Modal Analysis and Design 245
Chapter 15 Estimation of modal parameters

$$\begin{bmatrix}
r_{1,1} & r_{1,2} & \cdots & r_{1,2N} \\
 & r_{2,2} & \cdots & r_{2,2N} \\
 & & \ddots & \vdots \\
\text{sym} & & & r_{2N,2N}
\end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \\ \vdots \\ a_{2N} \end{Bmatrix}
= -\begin{Bmatrix} r_{1,0} \\ r_{2,0} \\ \vdots \\ r_{2N,0} \end{Bmatrix} \qquad \text{Eqn 15-56}$$

The coefficients in the covariance matrix are defined as

$$r_{k,l} = \sum_{i=1}^{N_0} \sum_{j=1}^{N_i} \sum_{n=1}^{N_t} h_{ij,n-k}\; h_{ij,n-l} \qquad \text{Eqn 15-57}$$

Building this covariance matrix is the first stage in applying the Least Squares
Complex Exponential method. This phase is usually the most time consuming
since all the available data is used to build the inner products expressed by
equation 15-57.

Note that after solving equation 15-56 all that is required to calculate the estimates of modal frequency and damping is to substitute the estimated coefficients in equation 15-54 and to solve for $z_k$.
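The procedure just described (build the overdetermined system, solve it in a least squares sense, root the polynomial of equation 15-54) can be sketched as follows. This is an illustration of the principle, not the LMS implementation, and the synthetic record and its pole are assumed values:

```python
import numpy as np

# Hedged sketch of the LSCE steps for a single impulse-response record.
def lsce_poles(h, n_modes, dt):
    m = 2 * n_modes
    # rows [h_{n-1} ... h_{n-m}] multiplying [a_1 ... a_m]^T, targets -h_n
    A = np.column_stack([h[m - k: len(h) - k] for k in range(1, m + 1)])
    b = -h[m:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)    # least squares coefficients
    zk = np.roots(np.concatenate(([1.0], a)))    # z^m + a_1 z^(m-1) + ... + a_m
    return np.log(zk) / dt                       # lambda_k = ln(z_k) / dt

# synthetic single-mode record (pole and sample period are assumptions)
dt = 1e-3
lam = -2.0 + 2j * np.pi * 5.0
n = np.arange(2000)
h = (np.exp(lam * dt * n) + np.exp(np.conj(lam) * dt * n)).real
poles = lsce_poles(h, n_modes=1, dt=dt)          # ~ lam and its conjugate
```

Applying the same solve to data from several records and references at once is what turns these local estimates into the global estimates described above.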

Determining the optimum number of modes

The solution of equation 15-56 results in least squares estimates of the coefficients in the model expressed by equation 15-53. It is also possible therefore to calculate the corresponding least squares error. This error is of importance in determining the minimum number of modes in the data.

In the preceding discussion it has been assumed that $N$ modes are present in the data. However, the number of modes contained in the data is in fact unknown. It is preferable that this should be determined by the method itself. Using the Least Squares Complex Exponential method, this can be achieved by observing the evolution of the least squares error on the solutions of equation 15-56 as a function of the number of assumed modes.

To do this, an equation like equation 15-56 is initially created, assuming a number of modes $N$ that is sufficiently large. A subset of such an equation is then taken to solve for the coefficients of a model that describes just one mode

$$\begin{bmatrix} r_{1,1} & r_{1,2} \\ r_{1,2} & r_{2,2} \end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \end{Bmatrix}
= -\begin{Bmatrix} r_{1,0} \\ r_{2,0} \end{Bmatrix}$$

The corresponding least squares error is represented by $\varepsilon_1$.

When 2 modes are assumed in the data then the subset to be solved is

$$\begin{bmatrix}
r_{1,1} & r_{1,2} & r_{1,3} & r_{1,4} \\
r_{1,2} & r_{2,2} & r_{2,3} & r_{2,4} \\
r_{1,3} & r_{2,3} & r_{3,3} & r_{3,4} \\
r_{1,4} & r_{2,4} & r_{3,4} & r_{4,4}
\end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{Bmatrix}
= -\begin{Bmatrix} r_{1,0} \\ r_{2,0} \\ r_{3,0} \\ r_{4,0} \end{Bmatrix}$$

with corresponding least squares error $\varepsilon_2$, and so on. Now if a model is assumed with a number of modes equal to the number of modes that is present in the data, then the corresponding least squares error should be significantly smaller than the error for models with fewer modes.
A diagram that plots the least squares error for increasing numbers of modes is called the least squares error chart. Figure 15-11 shows a typical diagram if data is analyzed for a system with 4 modes (and 4 modes are observable from the data!).

Noise on the data may cause the error diagram to show a significant drop at a certain number of modes, followed by a continued decrease of the error as the number of modes is increased. The problem now is to determine how many extra modes, or so-called computational modes, are to be considered to compensate for the noise on the data so that the best estimates of modal frequency and damping can be obtained. This problem is also illustrated in Figure 15-11.

[Figure: least squares error plotted against the number of modes (1 to 7), with one curve for noise-free data, which drops sharply at 4 modes, and one for noisy data, which continues to decrease beyond 4 modes.]

Figure 15-11 Least squares error diagram, system with 4 modes

To determine the optimal number of modes you could try to compare frequency and damping estimates that are calculated from models with various numbers of modes. Physical intuition would lead you to expect that estimates of frequency and damping corresponding to true structural modes should recur (in approximately the same place) as the number of modes is increased. Computational modes will not reappear with identical frequency and damping. A diagram that shows the evolution of frequency and damping as the number of modes is increased is called a Stabilization diagram. The optimal number of modes to use can then be seen, as those modes for which the frequency and damping values of the physical modes do not change significantly. In other words, those which have stabilized.
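A stabilization check of this kind can be sketched as below; the 1% frequency and 5% damping tolerances are illustrative assumptions, not LMS defaults:

```python
# Sketch of the stabilization criterion described above: a pole from the
# current model order counts as stabilized if some pole from the previous
# order matches it closely in frequency and damping.
def stabilized(pole, prev_poles, f_tol=0.01, d_tol=0.05):
    for p in prev_poles:
        f_prev, f_cur = abs(p.imag), abs(pole.imag)   # damped natural frequency
        d_prev, d_cur = -p.real, -pole.real           # decay rate
        if f_cur and abs(f_cur - f_prev) / f_cur < f_tol \
                and d_cur and abs(d_cur - d_prev) / d_cur < d_tol:
            return True
    return False

# a physical pole near 5 Hz recurs; a computational pole does not match
print(stabilized(-2.01 + 31.5j, [-2.0 + 31.4j]))   # True
print(stabilized(-5.00 + 60.0j, [-2.0 + 31.4j]))   # False
```

Running the check for every pole at every model order, and plotting the symbols against frequency, yields a diagram like Figure 15-12.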

[Figure: a stabilization diagram plots pole symbols against frequency for models of increasing order (number of modes on the vertical axis), overlaid on an amplitude spectrum; the symbols (o, f, d, v, s) indicate the degree to which the frequency, damping and vector estimates of each pole have stabilized between consecutive model orders.]

Figure 15-12 A stabilization diagram

Example

Let two data records be measured on a system, both shown in Figure 15-13.

[Figure: the two impulse-response records $h_{11}$ and $h_{21}$ plotted against time over samples 1 to 4; $h_{11}$ is a cosine-like record starting at 1, $h_{21}$ a sine-like record starting at 0, both with unit amplitude.]

Figure 15-13 Example least squares complex exponential

Let four data samples be measured of which the values are listed in the Table
below.

n    h11    h21
0     1      0
1     0      1
2    -1      0
3     0     -1

Consider a model for 1 mode (N = 1). Equations 15-55 and 15-56 become respectively

$$\begin{bmatrix} 0 & 1 \\ -1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \end{Bmatrix}
= \begin{Bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{Bmatrix}$$

$$\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \end{Bmatrix}
= \begin{Bmatrix} 0 \\ 2 \end{Bmatrix}$$

The solution is therefore $a_1 = 0$, $a_2 = 1$. Now equation 15-54 is used to calculate $z_k$ and so $\lambda_k$,

$$z^2 + 1 = 0$$

$$z = \pm j$$

The frequency and damping values follow from

$$z = e^{\lambda \Delta t}$$

$$z = +j: \qquad \lambda = 0 + j\,\frac{\pi}{2\Delta t}$$

$$z = -j: \qquad \lambda = 0 - j\,\frac{\pi}{2\Delta t}$$

The solution indicates a mode with a period of $4\Delta t$ and zero damping. This is compatible with the trend of the curves shown in Figure 15-13.
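The arithmetic of this example can be verified numerically; the sketch below rebuilds equations 15-55 and 15-56 from the four samples:

```python
import numpy as np

# Numerical check of the worked LSCE example (a sketch using numpy).
h11 = np.array([1.0, 0.0, -1.0, 0.0])
h21 = np.array([0.0, 1.0, 0.0, -1.0])

# Stack the rows of eqn 15-55 for both records (N = 1, so 2 coefficients).
rows, rhs = [], []
for h in (h11, h21):
    for n in (2, 3):
        rows.append([h[n - 1], h[n - 2]])
        rhs.append(-h[n])
A, b = np.array(rows), np.array(rhs)

# Normal equations (eqn 15-56): covariance matrix times the coefficients.
a = np.linalg.solve(A.T @ A, A.T @ b)     # -> a1 = 0, a2 = 1
z = np.roots([1.0, a[0], a[1]])           # z^2 + 1 = 0 -> z = +/- j
```

The recovered coefficients and roots match the values derived by hand above.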

15.3.5.1 Multiple input least squares complex exponential


The Least Squares Complex Exponential method, described above, uses all data measured on a structure to obtain global estimates of modal frequency and damping. In principle, data relative to several reference DOFs can be used. However the model used by the previous method does not take specific advantage of this.

The multiple input Least Squares Complex Exponential (or polyreference) is an extension of the Least Squares Complex Exponential that does allow consistent simultaneous analysis of data relative to several reference DOFs. The method computes global estimates of frequency and damping and also of modal participation factors. Modal participation factors are terms which express the participation of modes in the system response as a function of the reference (or input) DOF (see section 15.2.3). The simultaneous estimation of frequency, damping and modal participation factors means that highly coupled, even repeated modes can be identified.

The basis for the Multiple Input Least Squares Complex Exponential method is the model of the data introduced in section 15.2.3, equation 15-16.

$$\{H\}_i = \sum_{k=1}^{N} \left( v_{ik}\{L\}_k\, e^{\lambda_k t} + v_{ik}^*\{L\}_k^*\, e^{\lambda_k^* t} \right) \qquad \text{Eqn 15-58}$$

where

$\{H\}_i$ = $N_i$ vector (row) of IRs between output DOF $i$ and all input DOFs

$\{L\}_k$ = vector of modal participation factors for mode $k$. If $N_i$ reference DOFs are assumed then $\{L\}_k$ is of dimension $N_i$

$v_{ik}$ = the mode shape coefficient at response DOF $i$ for mode $k$

Note that in this model, frequency, damping and modal participation factors
are independent of the particular response DOF. It should therefore be possible
to estimate these coefficients using all the available data simultaneously.

Model for sampled data

The model expressed by equation 15-58 is not directly suitable for global estimation of frequency, damping and modal participation factors as it still contains the mode shape coefficients, which are dependent on the response DOF. Therefore a more suitable model must be derived.

Introducing firstly the sampled nature of the data, equation 15-58 is rewritten as,

$$\{H_n\}_i = \sum_{k=1}^{N} \left( v_{ik}\{L\}_k\, z_k^n + v_{ik}^*\{L\}_k^*\, z_k^{*n} \right) \qquad \text{Eqn 15-59}$$

$$z_k = e^{\lambda_k \Delta t}$$

It can be proved that if the data can be described by equation 15-59, it can also
be described by the following model

$$\{H_n\}_i + \{H_{n-1}\}_i [A_1] + \dots + \{H_{n-p}\}_i [A_p] = 0 \qquad \text{Eqn 15-60}$$

if the following conditions are fulfilled

$$\{L\}_k \left( z_k^p [I] + z_k^{p-1} [A_1] + \dots + [A_p] \right) = \{0\} \qquad \text{Eqn 15-61}$$

$$p\, N_i \geq 2N \qquad \text{Eqn 15-62}$$

(The proof of this follows from basic calculus along the same lines as for Least
Squares Complex Exponential in section 15.3.5).

Equation 15-60 represents, in matrix notation, a coupled set of $N_i$ finite difference equations with constant coefficients. The coefficients $[A_1] \dots [A_p]$ are therefore matrices of dimension ($N_i \times N_i$).

The condition expressed by equation 15-61 states that the terms $\{L\}_k z_k^n$ are characteristic solutions of this system of finite difference equations. As equation 15-59 is a superposition of $2N$ of such terms, it is essential that the number of characteristic solutions of this system of equations, $p N_i$, at least equals $2N$, as expressed by equation 15-62.

Note finally, that if data for each reference DOF is treated individually, i.e. $N_i = 1$, then equations 15-60 and 15-61 simplify to equations 15-53 and 15-54. Thus the least squares complex exponential method is a special case of the multiple input least squares complex exponential method.

Practical implementation of the method


To estimate the coefficients in equation 15-60 in a least squares sense, the equations for all possible time points and all possible response DOFs are to be solved simultaneously, as indicated by equation 15-63. A least squares solution is found, for example using the normal equations method, from equations 15-64 and 15-65. The coefficient matrix in this equation is again in the form of a covariance matrix,

$$\begin{bmatrix}
\{H_{p-1}\}_1 & \cdots & \{H_0\}_1 \\
\vdots & & \vdots \\
\{H_{N_t-1}\}_1 & \cdots & \{H_{N_t-p}\}_1 \\
\vdots & & \vdots \\
\{H_{n-1}\}_i & \cdots & \{H_{n-p}\}_i \\
\vdots & & \vdots \\
\{H_{N_t-1}\}_{N_0} & \cdots & \{H_{N_t-p}\}_{N_0}
\end{bmatrix}
\begin{bmatrix} [A_1] \\ [A_2] \\ \vdots \\ [A_p] \end{bmatrix}
= -\begin{bmatrix} \{H_p\}_1 \\ \vdots \\ \{H_{N_t}\}_1 \\ \vdots \\ \{H_n\}_i \\ \vdots \\ \{H_{N_t}\}_{N_0} \end{bmatrix} \qquad \text{Eqn 15-63}$$

where

$N_t$ = the last available time sample

$N_0$ = the number of response DOFs

$$[R_{k,l}] = \sum_{i=1}^{N_0} \sum_{n=p}^{N_t} \left( \{H_{n-k}\}_i^T \{H_{n-l}\}_i \right) \qquad \text{Eqn 15-64}$$

$$\begin{bmatrix}
[R_{1,1}] & [R_{1,2}] & \cdots & [R_{1,p}] \\
 & [R_{2,2}] & \cdots & [R_{2,p}] \\
 & & \ddots & \vdots \\
\text{sym} & & & [R_{p,p}]
\end{bmatrix}
\begin{bmatrix} [A_1] \\ [A_2] \\ \vdots \\ [A_p] \end{bmatrix}
= -\begin{bmatrix} [R_{1,0}] \\ [R_{2,0}] \\ \vdots \\ [R_{p,0}] \end{bmatrix} \qquad \text{Eqn 15-65}$$

The order (p) of the finite difference equation is related to the number of modes
in the data by equation 15-62. It is preferable that this be determined by the
method itself. As the coefficients of the finite difference equation are solved for
in a least squares sense, this can be done by observing the least squares error as
a function of the assumed order. As an order is reached such that the model
can describe as many modes as are present in the data, the error should drop
considerably.

Due to the condition expressed by equation 15-62 there is no linear relation between the number of modes that can be described by the model and the order of the model. The relation between the number of modes, the order of the model and the number of reference DOFs is listed in Table 15.2. It can be seen that a model of order 8 can describe 11 or 12 modes if data for 3 inputs are analyzed simultaneously. In the error diagrams therefore the same least squares error is shown for 11 and 12 modes.

As for the Least Squares Complex Exponential method, a stabilization diagram can again be created to determine the optimal number of modes. As well as comparing frequency and damping values calculated from models of consecutive order, it is now also possible to compare the stabilization of modal participation factors. In section 15.2.3, the modal participation factors were shown to be proportional to the mode shape coefficients at the reference DOFs. They also represent a physical characteristic of the structure, like the frequency and damping. Therefore, the values corresponding to structural modes should also stabilize as the order of the model is increased. This additional criterion adds much to the readability of the stabilization diagram and to the ability to distinguish computational modes from physical modes.

Additionally, the modal participation factors can be used by themselves to identify physical modes. If they are normalized with respect to the largest, the values should all be approximately real, in phase or in anti-phase, for structural modes.

N Ni=1 Ni=2 Ni=3 Ni=4 Ni=5 Ni=6


1 2 1 1 1 1 1
2 4 2 2 1 1 1
3 6 3 2 2 2 1
4 8 4 3 2 2 2
5 10 5 4 3 2 2
6 12 6 4 3 3 2
7 14 7 5 4 3 3
8 16 8 6 4 4 3
9 18 9 6 5 4 3
10 20 10 7 5 4 4
11 22 11 8 6 5 4
12 24 12 8 6 5 4
13 26 13 9 7 6 5
14 28 14 10 7 6 5
15 30 15 10 8 6 5
16 32 16 11 8 7 6
17 34 17 12 9 7 6
18 36 18 12 9 8 6
19 38 19 13 10 8 7
20 40 20 14 10 8 7
21 42 21 14 11 9 7
22 44 22 15 11 9 8
23 46 23 16 12 10 8
24 48 24 16 12 10 8
25 50 25 17 13 10 9
26 52 26 18 13 11 9
27 54 27 18 14 11 9
28 56 28 19 14 12 10
29 58 29 20 15 12 10
30 60 30 20 15 12 10
31 62 31 21 16 13 11
32 64 32 22 16 13 11
Table 15.2 Relation between modal order (tabulated), number of modes (N) and
number of reference DOFs (Ni )

Example

To clarify the method, consider again the example discussed on page 248. Let the example system satisfy reciprocity so that $h_{12}$ is equal to $h_{21}$. The vector $[h_{11}\; h_{12}]$ then represents the data between response DOF 1 and reference DOFs 1 and 2.

Considering a model for 1 mode (so $p = 1$, as $N_i = 2$), equations 15-63 and 15-65 become respectively

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
= \begin{bmatrix} 0 & -1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}$$

$$\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
= \begin{bmatrix} 0 & -2 \\ 1 & 0 \end{bmatrix}$$

so that

$$[A_1] = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$$

The resulting matrix polynomial is therefore

$$\begin{bmatrix} L_1 & L_2 \end{bmatrix} \begin{bmatrix} z & -1 \\ 1 & z \end{bmatrix} = \begin{bmatrix} 0 & 0 \end{bmatrix}$$

and the solutions of this eigenvalue problem are

$$z = \pm j, \qquad \lambda = 0 \pm j\,\frac{\pi}{2\Delta t}$$

$$\{L\} = [\pm j, \; 1]$$

Notice that the solution for the frequency and damping is the same as found with the Least Squares Complex Exponential (see page 249). In addition you also find an estimate of the modal participation factors. For this example they indicate that there should be a phase difference of 90° in the system response between excitation from reference DOFs 1 and 2, as $h_{11}$ is a cosine and $h_{12}$ a sine. This estimate seems to be correct.
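The algebra of this two-reference example can be checked numerically with the following sketch:

```python
import numpy as np

# Numerical check of the two-reference (polyreference) example above.
H = np.array([[1.0, 0.0],     # {H_0}_1 = [h11_0, h12_0]
              [0.0, 1.0],     # {H_1}_1
              [-1.0, 0.0],    # {H_2}_1
              [0.0, -1.0]])   # {H_3}_1

# eqn 15-60 with p = 1:  {H_n} + {H_{n-1}} [A_1] = 0, solved in least squares
X, Y = H[:-1], -H[1:]
A1, *_ = np.linalg.lstsq(X, Y, rcond=None)   # -> [[0, -1], [1, 0]]

# z I + A_1 singular means z is an eigenvalue of -A_1, giving z = +/- j
z = np.linalg.eigvals(-A1)
```

The eigenvalues reproduce the poles found with the single-reference method, and the left null vectors of $z[I] + [A_1]$ give the participation factors.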

15.3.6 Least squares frequency domain


The Least Squares Frequency Domain method is a multiple DOF technique to estimate residues, or mode shape coefficients. The method requires that frequency and damping values have already been estimated. It can be used with single or multiple inputs.

Consider the model expressed by equation 15-66

$$h_{ij}(t) = \sum_{k=1}^{N} \left( r_{ijk}\, e^{\lambda_k t} + r_{ijk}^*\, e^{\lambda_k^* t} \right) \qquad \text{Eqn 15-66}$$

If estimates of the modal frequency and damping are available, then the residues appear linearly as unknowns in this model.

To estimate the residues, equation 15-66 is transformed back to the frequency domain. Assuming sampled data therefore

$$h_{ij,p} = \sum_{k=1}^{N} \left( \frac{r_{ijk}}{j\omega_p - \lambda_k} + \frac{r_{ijk}^*}{j\omega_p - \lambda_k^*} \right) - \frac{lr_{ij}}{\omega_p^2} + ur_{ij} \qquad \text{Eqn 15-67}$$

where

$ur_{ij}$ = an upper residual term used to approximate modes at frequencies above $\omega_{max}$

$lr_{ij}$ = a lower residual term used to approximate modes at frequencies below $\omega_{min}$

These are illustrated in Figure 15-3. Note that the residues as well as lower and
upper residuals are local characteristics; in other words, they depend on the
particular response and reference DOF.

The Least Squares Frequency Domain method is based on the model expressed by equation 15-67. Least squares estimates of residues, lower and upper residuals are calculated by analyzing all data values in a selected frequency range.
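The idea can be sketched as one linear least squares solve per FRF. This is an illustration of the principle, not the LMS implementation; for simplicity it treats the conjugate residue as an independent unknown, and all numerical values are assumptions:

```python
import numpy as np

# Hedged sketch of LSFD: with the poles lambda_k known, eqn 15-67 is linear
# in the residues and in the lower/upper residual terms.
def lsfd_residues(w, H, poles):
    """w: angular frequencies, H: measured FRF samples, poles: known lambda_k."""
    cols = []
    for lam in poles:
        cols.append(1.0 / (1j * w - lam))            # residue column
        cols.append(1.0 / (1j * w - np.conj(lam)))   # conjugate residue column
    cols.append(-1.0 / w**2)                         # lower residual lr
    cols.append(np.ones_like(w) + 0j)                # upper residual ur
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, H, rcond=None)
    return x                                         # [r, r*, ..., lr, ur]

# synthetic single-mode FRF with an upper residual of 5 (assumed values)
w = np.linspace(10.0, 200.0, 200)
lam0, r0 = -2.0 + 100.0j, 3.0 + 1.0j
H = r0 / (1j * w - lam0) + np.conj(r0) / (1j * w - np.conj(lam0)) + 5.0
x = lsfd_residues(w, H, [lam0])   # recovers r0, conj(r0), lr ~ 0, ur ~ 5
```

Solving the same system for every response/reference pair yields the local residues described in the text.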

15.3.6.1 Multiple input least squares frequency domain


The multiple input Least Squares Frequency Domain method is a multiple DOF technique to estimate mode shapes. The method analyses data relative to several reference DOFs simultaneously to estimate mode shape coefficients that are independent of reference DOFs.

Consider the model expressed by equation 15-58,

$$\{H\}_i = \sum_{k=1}^{N} \left( v_{ik}\{L\}_k\, e^{\lambda_k t} + v_{ik}^*\{L\}_k^*\, e^{\lambda_k^* t} \right) \qquad \text{Eqn 15-68}$$

If estimates of frequency, damping and modal participation factors are available, then the mode shape coefficients appear linearly as the only unknowns in this model. Furthermore, they are only dependent on the response DOF (and not on the reference DOF) so that data relative to several reference DOFs can be analyzed simultaneously.

To estimate the residues, equation 15-68 is transformed to the frequency domain. Adding residual terms and assuming sampled data results in

$$\{H_p\}_i = \sum_{k=1}^{N} \left( \frac{v_{ik}\{L\}_k}{j\omega_p - \lambda_k} + \frac{v_{ik}^*\{L\}_k^*}{j\omega_p - \lambda_k^*} \right) - \frac{\{LR\}_i}{\omega_p^2} + \{UR\}_i \qquad \text{Eqn 15-69}$$

where

$\{UR\}_i$ = upper residuals between response DOF $i$ and all reference DOFs, a vector of dimension $N_i$

$\{LR\}_i$ = lower residuals between response DOF $i$ and all reference DOFs, a vector of dimension $N_i$

The multiple input LSFD method is based on equation 15-69.

15.3.7 Frequency domain direct parameter identification


The Frequency domain Direct Parameter Identification (FDPI) technique allows you to estimate the natural frequencies, damping values and mode shapes of several modes simultaneously. If data relative to several references are available, a multiple input analysis will also extract values for the modal participation factors. In this case, the FDPI technique offers the same capabilities as the LSCE time domain method.

Theoretical background

The basis of the FDPI method is the second order differential equation for mechanical structures

$$M\ddot{y}(t) + C\dot{y}(t) + K y(t) = f(t) \qquad \text{Eqn 15-70}$$

When transformed into the frequency domain, this equation can be reformulated in terms of measured FRFs

$$\left[ -\omega^2 I + j\omega A_1 + A_0 \right] [H(\omega)] = j\omega B_1 + B_0 \qquad \text{Eqn 15-71}$$

where

$\omega$ = frequency variable

$A_1$ = $M^{-1}C$, mass modified damping matrix ($N_o$ by $N_o$)

$A_0$ = $M^{-1}K$, mass modified stiffness matrix ($N_o$ by $N_o$)

$H(\omega)$ = matrix of FRFs ($N_o$ by $N_i$)

$B_0$, $B_1$ = the force distribution matrices ($N_o$ by $N_i$)

Note that for the single input case, the $H(\omega)$ matrix becomes a column vector of frequency dependent FRFs.

Equation 15-71 is valid for every discrete frequency value $\omega$. When these equations are assembled for all available FRFs, including multiple input - multiple output test cases, the unknown matrix coefficients $A_0$, $A_1$, $B_0$ and $B_1$ can be estimated from the measurement data $H(\omega)$. Equation 15-71 thus means that the measurement data $H(\omega)$ can be described by a second order linear model with constant matrix coefficients. From the identified matrices, the system's poles and mode shapes can be estimated via an eigenvalue and eigenvector decomposition of the system matrix.

$$\left[ \lambda^2 I + \lambda A_1 + A_0 \right] \{\psi\} = 0 \qquad \text{Eqn 15-72}$$

This will yield the diagonal matrix $[\Lambda]$ of poles and a matrix $[\Psi]$ of eigenvectors. It will become clear from the following section that the matrix $[\Psi]$ thus obtained is not equal to the matrix of mode shapes, although it is related to it.

In a final step, the modal participation factors are estimated from another least squares problem, using the obtained $[\Lambda]$ and $[\Psi]$ matrices.
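The eigenvalue decomposition of equation 15-72 can be sketched via a companion matrix; the 1-DOF mass, damping and stiffness values below are illustrative assumptions, not identified data:

```python
import numpy as np

# Sketch of the eigenvalue step of eqn 15-72: poles and (reduced) mode shapes
# from identified A_0, A_1 via a companion-matrix eigenvalue problem.
def second_order_eig(A0, A1):
    n = A0.shape[0]
    # state-space companion form of  lambda^2 I + lambda A_1 + A_0 = 0
    C = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-A0, -A1]])
    lam, V = np.linalg.eig(C)
    return lam, V[:n]            # upper block rows hold the eigenvectors psi

# 1-DOF check: M = 1, C = 0.4, K = 100, so A1 = M^-1 C and A0 = M^-1 K;
# expected poles: -0.2 +/- j sqrt(100 - 0.04)
lam, psi = second_order_eig(np.array([[100.0]]), np.array([[0.4]]))
```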

Data reduction

Prior to estimating the system matrix, all available data are condensed via a projection on their principal components. For all response stations, a maximum of $N_m$ principal components are first calculated and then analyzed. The obtained matrix $[\Psi]$ represents the modal matrix for this set of fictitious response stations.

The data reduction procedure offers the following advantages

- the calculation time is drastically decreased for the estimation of modal parameters. This is especially important for the calculation of least squares error charts and stabilization diagrams.

- the number of contributing modes is more easily determined from the singular value analysis.

Part IV Modal Analysis and Design 257


Chapter 15 Estimation of modal parameters

Residual correction terms

The FDPI technique operates directly on frequency domain data. It is therefore capable of taking into account the effects of modes outside the frequency band of analysis. This feature significantly improves the analysis results when modes below or above the selected band influence the data set. In the case where both upper and lower residual terms are included in the model, equation 15-71 becomes

$$\left[ -\omega^2 I + j\omega A_1 + A_0 \right] [H(\omega)] = \omega^2 C_2 + \omega C_1 + C_0 + \omega^{-1} C_{-1} + \omega^{-2} C_{-2} \qquad \text{Eqn 15-73}$$

The presence of these residual terms will influence the estimates for frequency, damping and mode shapes (as well as the modal participation factors for multiple input analysis).

Determining the optimum number of modes

As with the Least Squares Complex Exponential (LSCE) method, a least squares error chart can be built to determine the optimal number of modes in the selected frequency band. Because of the principal component projection, this chart may look somewhat different. For small models, only the first (most important) principal data are used, and the global error will decrease drastically. As more and more principal components are included by estimating more modes, their information becomes less important, which may distort the least squares error chart.

A more reliable tool for estimating the optimal number of modes for the FDPI technique is the singular values diagram. As an alternative to the error diagram, and to some extent to the stabilization diagram too, the rank of the calculated covariance matrix can be determined. The rank of the matrix is also a good indication of the optimal number of modes to be used in the analysis. The rank of the matrix can be determined using a singular value decomposition. A diagram showing the normalized singular values in ascending order is called a singular values diagram: the rank of the matrix is determined at the point where the singular values become significantly smaller compared to the previous values.

When building a stabilization diagram (see LSCE method, page 247), the same data are described by models of increasing order. An updating procedure is implemented to save calculation time.
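The rank determination via singular values can be sketched as follows; the matrix sizes, noise level and threshold are illustrative assumptions:

```python
import numpy as np

# Sketch of the singular-value rank check: a noisy matrix whose underlying
# rank is 4, standing in for the covariance matrix discussed above.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 4)) @ rng.standard_normal((4, 40))   # rank-4 core
A = A + 1e-8 * rng.standard_normal((40, 40))                      # "noise"

s = np.linalg.svd(A, compute_uv=False)    # numpy returns descending order
s_norm = s / s[0]                         # normalized singular values
rank = int(np.sum(s_norm > 1e-6))         # count values above the sharp drop
```

The location of the sharp drop in the normalized singular values plays the role of the optimal number of modes in the singular values diagram.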

Pseudo-DOFs for small measurement sets

Due to the type of identification algorithm, the FDPI technique can only estimate as many modes in the model as there are measurement Degrees of Freedom. This means that normally

$$N_m \leq N_0$$

However, using a similar approach as for the time domain LSCE method, it is possible to create so-called "pseudo" Degrees of Freedom from the measurements that are available, thus generating enough "new" measurements to allow a full identification on as few as one measurement.

Mode shape estimation

Using the reduced mode shapes $[\Psi]$ for the principal responses, and the transformation matrix between the principal and physical responses, the FDPI algorithm allows you to identify the complete mode shapes of the system by expanding the reduced $[\Psi]$ matrix.

This mode shape expansion offers several advantages:

- it is very fast (no least squares solution required as for the LSFD method)

- it identifies a mode shape vector as a global direction in the modal space, rather than estimating its elements one by one via mutually independent least squares problems.

If the mode shape expansion method is not employed then the LSFD technique is used to estimate mode shapes.

Normal modes

From the meaning of the matrices $[A_0]$ and $[A_1]$ and the eigenvalue problem (15-72), it is possible to estimate damped (generally complex) mode shapes $[\Psi]$, or undamped real normal modes.

Normal modes can be identified via the FDPI technique by solving an eigenvalue problem for the reduced mass and stiffness matrices only

$$\left[ M^{-1} K \right] \{\psi\}_n = \omega_n^2\, \{\psi\}_n \qquad \text{Eqn 15-74}$$

This eigenvalue problem is very much related to the one that is solved by FEM software packages that ignore the damping contribution in a system. This is an entirely different approach to the one that is used to estimate real modes via the LSFD technique. The latter technique estimates the real-valued mode shape coefficients that curve-fit the data set in a best least squares sense (proportional damping assumed), while the FDPI method uses an FEM-like approach.

Damping values are computed by applying a circle-fitter to enhanced FRFs for each mode. The enhanced FRFs are calculated by projecting the principal FRFs on the reduced mode shapes.

15.4 Maximum likelihood method

A multi-variable frequency-domain maximum likelihood (ML) estimator is proposed to identify the modal parameters together with their confidence intervals. The solver is robust to errors in the non-parametric noise model and can handle measurements with a large dynamic range.

Although the LSCE-LSFD approach has proven to be useful in solving many vibration problems, the method has some drawbacks:

- the polyreference LSCE estimator does not always work well when the number of references (inputs) is larger than 3, for example

- the frequencies should be uniformly distributed

- the method is not able to handle noisy measurements properly, which can result in unclear stabilization plots, and

- the method does not deliver confidence intervals on the estimated modal parameters.

15.4.1 Theoretical aspects


A scalar matrix-fraction description, better known as a common-denominator model, will be used. The Frequency Response Function (FRF) between output $o$ and input $i$ is modeled as

$$\hat{H}_{oi}(\omega_f) = \frac{N_{oi}(\omega_f)}{D(\omega_f)} \qquad \text{Eqn 15-75}$$

for $i = 1, \dots, N_i$ and $o = 1, \dots, N_o$

with

$$N_{oi}(\omega_f) = \sum_{j=0}^{n} \Omega_j(\omega_f)\, B_{oij}$$

the numerator polynomial between output $o$ and input $i$, and
$$D(\omega_f) = \sum_{j=0}^{n} \Omega_j(\omega_f)\, A_j$$

the common-denominator polynomial.

The polynomial basis functions $\Omega_j(\omega_f)$ are given by $\Omega_j(\omega_f) = e^{-i \omega_f T_s j}$ for a discrete-time model (with $T_s$ the sampling period). The complex-valued coefficients $B_{oij}$ and $A_j$ are the parameters to be estimated. The approach used to optimize the computation speed and memory requirements will first be explained for the Least Squares Solver and then these results will be extrapolated to the ML estimator.

The Least–Squares Solver


Replacing the model $\hat{H}_{oi}(\omega_f)$ in equation 15-75 by the measured FRF $H_{oi}(\omega_f)$ gives, after multiplication with the denominator polynomial,

$$\sum_{j=0}^{n} \Omega_j(\omega_f)\, B_{oij} - \sum_{j=0}^{n} \Omega_j(\omega_f)\, H_{oi}(\omega_f)\, A_j \approx 0 \qquad \text{Eqn 15-76}$$

for $i = 1, \dots, N_i$, $o = 1, \dots, N_o$ and $f = 1, \dots, N_f$.

Note that equation 15-76 can be multiplied with a weighting function $W_{oi}(\omega_f)$. The quality of the estimate can often be improved by using an adequate weighting function.
As the elements in equation 15-76 are linear in the parameters, they can be reformulated as

$$\begin{bmatrix}
X_1 & 0 & \cdots & 0 & Y_1 \\
0 & X_2 & \cdots & 0 & Y_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & X_{N_oN_i} & Y_{N_oN_i}
\end{bmatrix}
\begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_{N_oN_i} \\ A \end{bmatrix} \approx 0$$

with

$$B_k = \begin{bmatrix} B_{oi0} \\ B_{oi1} \\ \vdots \\ B_{oin} \end{bmatrix}, \qquad A = \begin{bmatrix} A_0 \\ A_1 \\ \vdots \\ A_n \end{bmatrix}$$

Part IV Modal Analysis and Design 261


Chapter 15 Estimation of modal parameters

$$X_k(\omega_f) = W_{oi}(\omega_f) \left[ \Omega_0(\omega_f),\; \Omega_1(\omega_f),\; \dots,\; \Omega_n(\omega_f) \right]$$

$$Y_k(\omega_f) = -X_k(\omega_f) \cdot H_{oi}(\omega_f)$$

and $k = (o-1)N_i + i = 1, \dots, N_oN_i$

The (complex) Jacobian matrix $J$ of this least-squares problem

$$J = \begin{bmatrix}
X_1 & 0 & \cdots & 0 & Y_1 \\
0 & X_2 & \cdots & 0 & Y_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & X_{N_oN_i} & Y_{N_oN_i}
\end{bmatrix} \qquad \text{Eqn 15-77}$$

has $N_f N_o N_i$ rows and $(n+1)(N_oN_i+1)$ columns (with $N_f \gg n$, where $n$ is the order of the polynomials). Because every element in equation 15-76 has been weighted with $W_{oi}(\omega_f)$, the $X_k$'s in equation 15-77 can all be different.

The Maximum–Likelihood Solver


Using referenced measurements (e.g. FRF data) makes it easier to get global estimates from measurements that were obtained by roving the sensors over the structure under test (which is a common practice in experimental modal analysis). Because of this, the FRFs will be used here as primary data instead of the input/output spectra (i.e. non-referenced data). However, one should take care that the FRFs are not contaminated by systematic errors.

The ML equations

Assuming the different FRFs to be uncorrelated, the (negative) log-likelihood function reduces to

$$l_{ML}(\theta) = \sum_{o=1}^{N_o} \sum_{i=1}^{N_i} \sum_{f=1}^{N_f} \frac{\left| \hat{H}_{oi}(\theta, \omega_f) - H_{oi}(\omega_f) \right|^2}{\mathrm{var}\{H_{oi}(\omega_f)\}} \qquad \text{Eqn 15-78}$$

The ML estimate of $\theta = [B_1^T, \dots, B_{N_oN_i}^T, A^T]^T$ is given by minimizing equation 15-78. This can be done by means of a Gauss-Newton optimization algorithm, which takes advantage of the quadratic form of the cost function (15-78). The Gauss-Newton iterations are given by

(a) solve $J_m^H J_m\, \delta\theta_m = -J_m^H r_m$ for $\delta\theta_m$

(b) set $\theta_{m+1} = \theta_m + \delta\theta_m$

with $r_m = r(\theta_m)$, $J_m = \partial r(\theta) / \partial \theta\,|_{\theta_m}$ and

$$r(\theta) = \begin{bmatrix}
\dfrac{\hat{H}_{11}(\theta, \omega_1) - H_{11}(\omega_1)}{\sqrt{\mathrm{var}\{H_{11}(\omega_1)\}}} \\
\vdots \\
\dfrac{\hat{H}_{11}(\theta, \omega_{N_f}) - H_{11}(\omega_{N_f})}{\sqrt{\mathrm{var}\{H_{11}(\omega_{N_f})\}}} \\
\dfrac{\hat{H}_{12}(\theta, \omega_1) - H_{12}(\omega_1)}{\sqrt{\mathrm{var}\{H_{12}(\omega_1)\}}} \\
\vdots \\
\dfrac{\hat{H}_{N_oN_i}(\theta, \omega_{N_f}) - H_{N_oN_i}(\omega_{N_f})}{\sqrt{\mathrm{var}\{H_{N_oN_i}(\omega_{N_f})\}}}
\end{bmatrix}$$

Deriving confidence intervals

The covariance matrix of the ML estimate $\hat{\theta}_{ML}$ is usually close to the corresponding Cramér-Rao Lower Bound (CRLB): $\mathrm{cov}\{\hat{\theta}_{ML}\} \approx \mathrm{CRLB}$.

A good approximation of this Cramér-Rao Lower Bound is given by

$$\mathrm{CRLB} \approx \left[ J_m^H J_m \right]^{-1} \qquad \text{Eqn 15-79}$$

with $J_m$ the Jacobian matrix evaluated in the last iteration step of the Gauss-Newton algorithm. As one is mainly interested in the uncertainty on the resonance frequencies and damping ratios, only the covariance matrix of the denominator coefficients is in fact required.

Hence, it is not necessary to invert the full matrix to obtain the uncertainty on the poles (or on the resonance frequencies and the damping ratios).
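The mechanics of equation 15-79 can be sketched as follows; the Jacobian here is random stand-in data and the block sizes are assumptions, purely to show how the denominator block is extracted:

```python
import numpy as np

# Sketch of eqn 15-79: approximate parameter covariance from the Jacobian
# of the last Gauss-Newton iteration, keeping the denominator (A) block.
rng = np.random.default_rng(1)
n_params, n_denom = 6, 2          # illustrative sizes (assumptions)
J_m = rng.standard_normal((50, n_params)) + 1j * rng.standard_normal((50, n_params))

crlb = np.linalg.inv(J_m.conj().T @ J_m)        # CRLB ~ [J^H J]^-1
cov_denom = crlb[-n_denom:, -n_denom:]          # block for the A coefficients
sigma = np.sqrt(np.real(np.diag(cov_denom)))    # standard deviations
```

In practice only this trailing block needs to be computed, which avoids inverting the full matrix.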

15.5 Calculation of static compensation modes


Modal synthesis can be used to couple substructures together in low frequency
ranges. For this, modal models for each of the substructures are required as
separate disconnected items. However the results of this coupling may be less
than optimal due to truncation errors. Truncation errors arise because only a
limited number of modes are taken into account.
To improve the results, both static and dynamic compensation terms can be
used.

$$[H_C(\omega)] \approx [H_R(\omega)] + [H_0] + \omega^2 [H_1] \qquad \text{Eqn 15-80}$$

where $[H_C(\omega)]$ is the exact FRF, $[H_R(\omega)]$ the modal FRF superposition, $[H_0]$ the static compensation term and $\omega^2 [H_1]$ the dynamic compensation term.

Truncation errors can be approximated by a quadratic function using a Taylor expansion. It has been shown that there is a good correspondence between the real truncation error and the quadratic estimation.

Static compensation terms can be derived from direct (driving point) FRFs. These static compensation terms are calculated using the upper residual terms which were obtained while fitting the FRF matrix of the coupling points (driving points and cross terms). The upper residual terms are converted by means of a singular value decomposition into regular mode shapes and participation factors. These mode shapes and participation factors can be used afterwards for modal substructuring, in addition to the regular modes of the two substructures.

Frequency of the static compensation modes

The frequency of the static compensation modes (ω_0) must be significantly higher than the frequency band of the modes which are taken into account during the substructuring calculations. The upper limit of the frequency band used in modal substructuring is defined by the frequency of the upper residual (ω_upper residual):

ω_0 ≥ 10.0 ω_upper residual

The Singular Value Decomposition (SVD)

In order to calculate the static compensation terms, a singular value decomposition has to be applied on the upper residual term matrix. This is obtained by putting all upper residual terms together in one big matrix.

264 The Lms Theory and Background Book



[R_upper residual] = [U][Σ][V]^T

The mode shape values of the static compensation mode (ψ) are related to the left singular vector, the singular value and the frequency value (ω_0):

{ψ}_j = {U}_j √σ_j ω_0

The participation factor values (L) can be derived from the mode shape values (ψ):

{L}_j = {ψ}_j / (2 m_r ω_0 j)

where the modal mass m_r follows from the singular vectors. Expressed in terms of the right singular vectors {V}:

{L}_j = (√σ_j ω_0 / (2 ω_0 j)) {V}_j
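A minimal numerical sketch of this procedure, assuming a made-up residual matrix and the scaling conventions reconstructed above (which should be checked against the actual implementation): the rank-one residue ψL^T, combined with its conjugate as a pole pair at ±jω_0, reproduces the corresponding singular component of the residual matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical upper-residual matrix assembled from the coupling-DOF fits
# (3 responses x 2 references; values are invented for illustration).
R_upper = rng.standard_normal((3, 2))

w0 = 2 * np.pi * 500.0      # compensation-mode frequency, well above the band
U, s, Vt = np.linalg.svd(R_upper, full_matrices=False)

# One compensation mode per retained singular value: mode shape from the
# left singular vector, participation factor from the right one (the exact
# scaling convention here is an assumption).
k = 0
psi = U[:, k] * np.sqrt(s[k]) * w0
L = Vt[k, :] * np.sqrt(s[k]) * w0 / (2.0 * w0 * 1j)

# Evaluating the pole pair at +/- j*w0 at zero frequency reconstructs the
# first singular component of the residual matrix.
residue = np.outer(psi, L)
recon = residue / (-1j * w0) + residue.conj() / (1j * w0)
assert np.allclose(recon.real, s[0] * np.outer(U[:, 0], Vt[0, :]))
```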



Chapter 16

Operational modal
analysis

This chapter describes the theoretical and technical background relating to operational modal analysis.

Reasons for performing operational modal analysis

Theoretical aspects


16.1 Why operational modal analysis?

Traditional modal model identification methods and procedures are based on forced excitation laboratory tests during which Frequency Response Functions (FRFs) are measured. However, the real loading conditions to which a structure is subjected often differ considerably from those used in laboratory testing. Since all real-world systems are to a certain extent non-linear, the models obtained under real loading will be linearized for much more representative working points. Additionally, environmental influences on system behavior (such as pre-stress of suspensions, load-induced stiffening and aero-elastic interaction) will be taken into account.

In many cases, such as small excitation of off-shore platforms or traffic/wind excitation of civil constructions, forced excitation tests are very difficult, if not impossible, to conduct, at least with standard testing equipment. In such situations operational data are often the only data available.

It is also the case that large in-operation data sets are measured anyway, for level verification, operating field shape analysis and other purposes. Hence, extending classical operating data analysis procedures with modal parameter identification capabilities will allow a better exploitation of these data.

Finally, the availability of models established in operation opens the way for in situ model-based diagnosis and damage detection. Hence, a considerable interest exists in extracting valid models directly from operating data.

Traditional processing of operational data

An accepted way of dealing with operational analysis in industry is based on a peak-picking technique applied to the auto- and crosspowers of the operational responses. Such processing results in the so-called "Running Mode Analysis". By selecting the peaks in the spectra, approximate estimates for the resonance frequencies and operational deflection shapes can be obtained. These shapes can then be compared to, or even decomposed into, the laboratory modal results.

Correlation of the operating data set with the modal database measured in the lab allows an assessment of the modes which are dominant for a particular operating condition. In the case of partially correlated inputs (e.g. road analysis), principal component techniques are employed to decompose the multi-reference problem into subsets of single reference problems, which can be analyzed in parallel. These decomposed sets of data can be fed to an animation program, to interpret the operational deflection shapes for each principal component as a function of frequency.


The auto- and crosspower peak-picking method requires considerable engineering skill to select the peaks which correspond to system resonances. In addition, no information about the damping of the modes is obtained, and the operational deflection shapes may differ significantly from the real mode shapes in the case of closely spaced modes. Pre-knowledge of a modal model derived from FRF measurements in the lab is often indispensable to successfully perform a conventional operational (running modes) analysis.

Curve-fitting techniques which allow modal parameters to be extracted directly from the operational data would therefore be of great use to the engineer. Such techniques would identify the dominant modes excited under driving conditions, and this information might even be used to improve some traditional FRF tests in the laboratory.

Using Operational modal analysis

The purpose of this procedure is to extract modal frequencies, damping and mode shapes from data taken under operating conditions, that is, under the influence of the structure's natural excitation, such as airflow around the structure (e.g. wind turbines, aeroplanes, helicopters), road input, liquid flow (in pipes), road traffic (e.g. bridges) or internal excitation (rotating machinery).

Theoretically, one could consider the case where the input forces are measured in such conditions, which means that conventional FRF processing and modal analysis techniques could be used. However, the Operational modal analysis software is aimed specifically at applications where the inputs cannot be measured, and works when only responses such as acceleration signals are available. The ideal situation is when the input has a flat spectrum.

Three methods are discussed, all of which use time domain correlation functions. These auto- and cross-correlation functions can be calculated directly from raw time data or be derived from measured auto- and crosspowers by inverse FFT processing.


16.2 Theoretical aspects

This section describes the mathematical background to the methods used to identify modal parameters from operational data.

Over recent years, several modal parameter estimation techniques have been proposed and studied for modal parameter extraction from output-only data. These include:
- Auto-Regressive Moving Average (ARMA) models,
- the Natural Excitation Technique (NExT),
- stochastic subspace methods.

The Natural Excitation Technique (NExT)

The underlying principle of the NExT technique is that correlation functions between the responses can be expressed as a sum of decaying sinusoids. Each decaying sinusoid has a damped natural frequency and damping ratio that is identical to that of the corresponding structural mode. Consequently, conventional modal parameter techniques such as the polyreference Least-Squares Complex Exponential (LSCE) can be used for output-only system identification.

Stochastic subspace methods

With the subspace approach, first a reduced set of system states is derived, and then a state space model is identified. From the state space model, the modal parameters are derived. The terminology "subspace" comes mainly from control theory; it is a "family name" which groups methods that use the Singular Value Decomposition in the identification process.

Two subspace techniques, referred to as the Balanced Realization (BR) and the Canonical Variate Analysis (CVA), are provided.

16.2.1 Stochastic subspace identification methods


The following stochastic discrete time state space model is considered:

{x_{k+1}} = [A] {x_k} + {w_k}
{y_k} = [C] {x_k} + {v_k}    Eqn 16-1

where {x_k} represents the state vector of dimension n,
{y_k} is the output vector of dimension N_resp,
{w_k}, {v_k} are zero-mean, white vector sequences, which represent the process noise and measurement noise respectively.
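As an illustration, the model of Eqn 16-1 can be simulated for a single lightly damped mode; the matrices and noise levels below are assumptions chosen only to produce plausible output data.

```python
import numpy as np

rng = np.random.default_rng(2)

# A minimal sketch of the stochastic state-space model of Eqn 16-1 for a
# single mode (n = 2 states, Nresp = 2 outputs); values are illustrative.
dt = 0.01
wn, zeta = 2 * np.pi * 5.0, 0.02             # 5 Hz mode, 2 % damping
mu = -zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)
lam = np.exp(mu * dt)                         # discrete eigenvalue
A = np.array([[lam.real, -lam.imag],
              [lam.imag,  lam.real]])         # real 2x2 with eigenvalues lam, lam*
C = np.array([[1.0, 0.0],
              [0.5, 0.5]])                    # output matrix

x = np.zeros(2)
Y = np.empty((2000, 2))
for k in range(2000):
    x = A @ x + 0.1 * rng.standard_normal(2)      # process noise {w_k}
    Y[k] = C @ x + 0.01 * rng.standard_normal(2)  # measurement noise {v_k}

# The eigenvalues of [A] are the discrete pole pair of the simulated mode.
assert np.allclose(sorted(np.linalg.eigvals(A).imag),
                   sorted([lam.imag, -lam.imag]))
```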


For p and q large enough, the matrices [A] and [C] are respectively the state space matrix and the output matrix. Along with this model, the observability matrix [O_p] of order p and the controllability matrix [C_q] of order q are defined:

        [ [C]          ]
        [ [C][A]       ]
[O_p] = [ ...          ] ;   [C_q] = [ [G]  [A][G]  ...  [A]^{q-1}[G] ]    Eqn 16-2
        [ [C][A]^{p-1} ]

where [G] = E[ {x_{k+1}} {y_k}^T ] and E[.] denotes the expectation operator. The matrices [O_p] and [C_q] are assumed to be of rank 2N_m, where N_m is the number of system modes.

The dynamics of the system are completely characterized by the eigenvalues and the observed parts of the eigenvectors of the [A] matrix. The eigenvalue decomposition of [A] is given by

[A] = [Ψ][Λ][Ψ]^{-1}    Eqn 16-3

Complex eigenvectors and eigenvalues in equation 16-3 always appear in complex conjugate pairs. The discrete eigenvalues λ_r on the diagonal of [Λ] can be transformed into continuous eigenvalues or system poles µ_r by using the following equation:

λ_r = e^{µ_r Δt}  ⇔  µ_r = σ_r + iω_r = (1/Δt) ln(λ_r)    Eqn 16-4

where σ_r is the damping factor and ω_r the damped natural frequency of the r-th mode.

The damping ratio ξ_r of the r-th mode is given by

ξ_r = -σ_r / √(σ_r² + ω_r²)    Eqn 16-5

The mode shape {ψ}_r of the r-th mode at the sensor locations is the observed part of the system eigenvector {φ}_r of [Ψ], given by the following equation:

{ψ}_r = [C] {φ}_r    Eqn 16-6

The extracted mode shapes cannot be mass-normalized, as this requires the measurement of the input force.
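The chain from an identified [A] and [C] to modal parameters (Eqns 16-3 to 16-6) can be sketched as follows, using a hypothetical 2-state system whose true pole is known:

```python
import numpy as np

dt = 0.01
# Hypothetical identified matrices for one 5 Hz, 2 % damped mode
# (constructed so the true pole is known; values are assumptions).
wn, zeta = 2 * np.pi * 5.0, 0.02
mu_true = -zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)
lam = np.exp(mu_true * dt)
A = np.array([[lam.real, -lam.imag], [lam.imag, lam.real]])
C = np.array([[1.0, 0.0], [0.5, 0.5]])

# Eqn 16-3: eigenvalue decomposition of [A]
lam_d, Phi = np.linalg.eig(A)

# Eqn 16-4: discrete -> continuous poles, mu_r = ln(lambda_r) / dt
mu = np.log(lam_d) / dt
sigma, omega = mu.real, mu.imag

# Eqn 16-5: damping ratio xi_r = -sigma_r / sqrt(sigma_r^2 + omega_r^2)
xi = -sigma / np.sqrt(sigma**2 + omega**2)

# Eqn 16-6: observed mode shapes psi_r = [C] phi_r
Psi = C @ Phi

r = int(np.argmax(omega))          # pole with positive damped frequency
assert abs(xi[r] - zeta) < 1e-10
assert abs(omega[r] - wn * np.sqrt(1 - zeta**2)) < 1e-6
```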


The stochastic realization problem

The problem considered here is the estimation of the matrices [A] and [C] in equation 16-1, up to a similarity transformation, using only the output measurements {y_k}. This problem is known as the stochastic realization problem and has been addressed by many researchers from the control community as well as the statistics community [4, 5 and 6].

Two correlation-driven subspace algorithms are briefly discussed below, known as the Balanced Realization (BR) and the Canonical Variate Analysis (CVA).

Given a sequence of correlations

[R_k] = E[ {y_{k+m}} {y_m}_ref^T ]    Eqn 16-7

where {y_k}_ref is a vector containing N_ref outputs serving as references.

For p ≥ q, let [H_{p,q}] be the following block-Hankel matrix:

            [ [R_1]  [R_2]     ...  [R_q]       ]
            [ [R_2]  [R_3]     ...  [R_{q+1}]   ]
[H_{p,q}] = [  ...    ...      ...   ...        ]    Eqn 16-8
            [ [R_p]  [R_{p+1}] ...  [R_{p+q-1}] ]

Direct computation of the [R_k] from the model equations leads to the following factorization property:

[H_{p,q}] = [O_p] [C_q]    Eqn 16-9
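A small helper illustrating how the block-Hankel matrix of Eqn 16-8 can be assembled from a sequence of correlation matrices (toy values only):

```python
import numpy as np

def block_hankel(R, p, q):
    """Assemble [H_{p,q}] of Eqn 16-8 from a list of correlation matrices
    R[k]; block (i, j) holds R_{i+j+1}, so the matrix starts at [R_1]."""
    return np.block([[R[i + j + 1] for j in range(q)] for i in range(p)])

# Toy correlation sequence of 2x2 matrices (illustrative values only).
R = [np.full((2, 2), float(k)) for k in range(8)]
H = block_hankel(R, p=3, q=4)

assert H.shape == (6, 8)
# Hankel structure: the last block equals R_{p+q-1}
assert np.allclose(H[4:6, 6:8], R[6])
```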

Let [W_1] and [W_2] be two user-defined invertible weighting matrices of size pN_resp and qN_resp, respectively. Pre- and post-multiplying the Hankel matrix with [W_1] and [W_2]^T and performing an SVD on the weighted Hankel matrix gives the following:

[W_1][H_{p,q}][W_2]^T = [ [U_1] [U_2] ] [ [S_1] [0] ; [0] [0] ] [ [V_1]^T ; [V_2]^T ] = [U_1][S_1][V_1]^T    Eqn 16-10

where [S_1] contains n non-zero singular values in decreasing order, the n columns of [U_1] are the corresponding left singular vectors and the n columns of [V_1] are the corresponding right singular vectors.

On the other hand, the factorization property of the weighted Hankel matrix results in

[W_1][H_{p,q}][W_2]^T = [W_1][O_p][C_q][W_2]^T    Eqn 16-11

From equations 16-10 and 16-11, it can be easily seen that the observability matrix can be recovered, up to a similarity transformation, as

[O_p] = [W_1]^{-1} [U_1] [S_1]^{1/2}    Eqn 16-12

The system matrices are then estimated, up to a similarity transformation, using the shift structure of [O_p]. So,

[C] = {first block row of [O_p]}    Eqn 16-13

and [A] is computed as the solution of

[O_{p-1}↑] = [O_{p-1}] [A]    Eqn 16-14

where [O_{p-1}] is the matrix obtained by deleting the last block row of [O_p] and [O_{p-1}↑] is the matrix shifted upwards by one block row.
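The complete BR path (Eqns 16-10 and 16-12 to 16-14, with identity weighting) can be sketched on noise-free correlations built from the factorization [R_k] = [C][A]^{k-1}[G]; all matrices here are hypothetical:

```python
import numpy as np

dt, wn, zeta = 0.01, 2 * np.pi * 5.0, 0.02    # assumed 5 Hz test mode
mu = -zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)
lam = np.exp(mu * dt)
A = np.array([[lam.real, -lam.imag], [lam.imag, lam.real]])
C = np.array([[1.0, 0.2], [0.3, 1.0]])
G = np.array([[0.7, 0.1], [0.2, 0.9]])        # hypothetical [G]

p = q = 4
# Noise-free correlations via the factorization R_k = [C][A]^{k-1}[G]
R = {k: C @ np.linalg.matrix_power(A, k - 1) @ G for k in range(1, p + q)}
H = np.block([[R[i + j + 1] for j in range(q)] for i in range(p)])

# BR: [W1] = [W2] = [I]; SVD and truncation to n = 2 (Eqns 16-10, 16-12)
U, s, Vt = np.linalg.svd(H)
n = 2
Op = U[:, :n] * np.sqrt(s[:n])                # [O_p] up to similarity

# Shift structure (Eqns 16-13, 16-14)
C_id = Op[:2, :]                              # [C] up to the same similarity
A_id, *_ = np.linalg.lstsq(Op[:-2, :], Op[2:, :], rcond=None)

# [A] is recovered up to a similarity transformation: eigenvalues match.
assert np.allclose(sorted(np.linalg.eigvals(A_id).real),
                   sorted(np.linalg.eigvals(A).real), atol=1e-8)
```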
Different choices of weighting will lead to different stochastic subspace identification methods. Two particular choices for the weighting matrices give rise to the Balanced Realization and the Canonical Variate Analysis methods.

Balanced Realization (BR)

[W_1] = [I]  and  [W_2] = [I]    Eqn 16-15

So no weighting is involved.

Canonical Variate Analysis (CVA)

CVA requires that all responses serve as references, so {y_k} = {y_k}_ref. Consequently, the correlation matrix [R_k] given by equation 16-7 is square. Define then the following Toeplitz matrices:

       [ [R_0]      [R_1]^T    ...  [R_{p-1}]^T ]          [ [R_0]        [R_1]        ...  [R_{p-1}] ]
[ℜ+] = [ [R_1]      [R_0]      ...  [R_{p-2}]^T ] ;  [ℜ-] = [ [R_1]^T      [R_0]        ...  [R_{p-2}] ]    Eqn 16-16
       [  ...        ...       ...   ...        ]          [  ...          ...         ...   ...      ]
       [ [R_{p-1}]  [R_{p-2}]  ...  [R_0]       ]          [ [R_{p-1}]^T  [R_{p-2}]^T  ...  [R_0]     ]


Let the full-rank factorizations of [ℜ+] and [ℜ-] be

[ℜ+] = [L+][L+]^T ;  [ℜ-] = [L-][L-]^T    Eqn 16-17

In the case of CVA, the weighting is as follows:

[W_1] = [L+]^{-1}  and  [W_2] = [L-]^{-1}    Eqn 16-18

With this weighting, the singular values in equation 16-10 correspond to the
so-called canonical angles. A physical interpretation of the CVA weighting is
that the system modes are balanced in terms of energy. Modes which are less
well excited in operational conditions might be better identified.
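The CVA weighting (Eqns 16-16 to 16-18) can be illustrated for the scalar-output case, where a Cholesky factor provides the full-rank factorization of Eqn 16-17; the data are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# Scalar-output toy case (Nresp = Nref = 1), so the blocks [R_k] are scalars
# and R+ equals R- (each block equals its own transpose).
y = rng.standard_normal(5000)
p = 4
R = [float(np.mean(y[k:] * y[:len(y) - k])) for k in range(p)]

# Eqn 16-16: Toeplitz matrix built from the correlation sequence
Rp = np.array([[R[abs(i - j)] for j in range(p)] for i in range(p)])

# Eqn 16-17: full-rank factor via Cholesky; Eqn 16-18: CVA weights
Lp = np.linalg.cholesky(Rp)
W1 = np.linalg.inv(Lp)
W2 = np.linalg.inv(Lp)

# Sanity checks: the factorization reproduces R+, and the weights whiten it.
assert np.allclose(Lp @ Lp.T, Rp)
assert np.allclose(W1 @ Rp @ W2.T, np.eye(p))
```

The whitening property is what "balances" the modes in terms of energy before the SVD of Eqn 16-10 is taken.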

Practical implementation of correlation-driven stochastic subspace methods

Equation 16-10 only holds for `true' block-Hankel matrices and for a finite order system. In practice, the system has a larger, possibly infinite, order and the Hankel and Toeplitz matrices in equations 16-8 and 16-16 will be filled up with `empirical' correlations, which are computed as follows:

[R̂_k] = (1/M) Σ_{m=0}^{M} {y_{m+k}} {y_m}_ref^T    Eqn 16-19

where M is the number of data samples.

Although equation 16-19 is a preferred estimator for the correlation functions, as no leakage errors are made and as it can also be used for non-stationary data, the evaluation of equation 16-19 in the time domain is not computationally efficient. A faster estimator for the correlation functions can be implemented by taking the inverse FFT of auto- and crosspower spectra which are calculated on the basis of the FFT and segment averaging. This however assumes stationary signals, and time windowing (e.g. Hanning) is needed to avoid leakage.
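The two estimation routes can be compared on toy data; the FFT route is shown here in its simplest single-record, zero-padded form rather than with segment averaging and windowing:

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.standard_normal(4096)          # one response channel (toy data)
maxlag = 64

# Eqn 16-19, direct time-domain estimate (1/M normalization, no leakage)
M = len(y)
R_direct = np.array([np.sum(y[k:] * y[:M - k]) / M for k in range(maxlag)])

# Faster route: correlation via FFT, zero-padded to 2M so the circular
# correlation equals the linear one (inverse FFT of the raw periodogram).
nfft = 2 * M
Yf = np.fft.rfft(y, nfft)
R_fft = np.fft.irfft(Yf * np.conj(Yf))[:maxlag] / M

assert np.allclose(R_direct, R_fft, atol=1e-8)
```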

The SVD of the weighted empirical Hankel matrix will then result in the following:

[Ŵ_1][Ĥ_{p,q}][Ŵ_2]^T = [ [Û_1] [Û_2] ] [ [Ŝ_1] [0] ; [0] [Ŝ_2] ] [ [V̂_1]^T ; [V̂_2]^T ] = [Û_1][Ŝ_1][V̂_1]^T + [Û_2][Ŝ_2][V̂_2]^T    Eqn 16-20

with

[Ŝ_1] = diag(σ_1 ... σ_n),  σ_1 ≥ σ_2 ≥ ... ≥ σ_n > 0
[Ŝ_2] = diag(σ_{n+1} ... σ_{pN_resp}),  σ_{n+1} ≥ σ_{n+2} ≥ ... ≥ σ_{pN_resp} ≥ 0    Eqn 16-21

Identification of a model with order n is done by truncating the singular values, so by keeping [Ŝ_1]. The observability matrix is then approximated by

[Ô_p] = [Ŵ_1]^{-1} [Û_1] [Ŝ_1]^{1/2}    Eqn 16-22

As the model order is typically unknown, inspection of the singular values might help the engineer to select n such that σ_n >> σ_{n+1}. In practice, this criterion is often of little use, as no significant drop in the singular values can be observed. Other techniques, such as stabilization diagrams, are then needed in order to find the correct model order.

The remaining steps of the algorithm are similar to those described in equations 16-11 to 16-18, where theoretical quantities are replaced with empirical ones.

16.2.2 Natural Excitation Techniques

Subtitled: The Polyreference Least Squares Complex Exponential (LSCE) method applied to auto- and crosscorrelation functions

Polyreference LSCE applied to impulse response functions is a well-known technique in conventional modal analysis, yielding global estimates of poles and the modal participation factors [7]. It has been shown that, under the assumption that the system is excited by stationary white noise, correlation functions between the response signals can also be expressed as a sum of decaying sinusoids [8].

Each decaying sinusoid has a damped natural frequency and damping ratio that is identical to that of a corresponding structural mode. Consequently, the classical modal parameter techniques using impulse response functions as input, like Polyreference LSCE, the Eigensystem Realization Algorithm (ERA) and the Ibrahim Time Domain method, are also appropriate to extract the modal parameters from response-only data measured under operational conditions.

This technique is also referred to as NExT, standing for Natural Excitation Technique. An interesting remark is that the ERA method applied to correlation functions instead of impulse response functions is basically the same as the Balanced Realization method.


Mathematically, the Polyreference LSCE will decompose the correlation functions as a sum of decaying sinusoids. So,

[R_k] = Σ_{r=1}^{N_m} ( {ψ}_r e^{µ_r kΔt} {L}_r^T + {ψ}_r* e^{µ_r* kΔt} {L}_r^{T*} )   or

[R_k] = Σ_{r=1}^{N_m} ( {ψ}_r λ_r^k {L}_r^T + {ψ}_r* λ_r^{*k} {L}_r^{T*} )    Eqn 16-23

where λ_r = e^{µ_r Δt} and {L}_r is a column vector of N_ref multipliers which are constant for all response stations for the r-th mode.
(Note that in conventional modal analysis, these constant multipliers are the modal participation factors.)

The combinations of complex exponentials and constant multipliers, λ_r^k {L}_r^T or λ_r^{*k} {L}_r^{T*}, are a solution of the following matrix finite difference equation of order t:

λ_r^k {L}_r^T [I] + λ_r^{k-1} {L}_r^T [F_1] + ... + λ_r^{k-t} {L}_r^T [F_t] = {0}^T    Eqn 16-24

where [F_1]...[F_t] are coefficient matrices with dimension N_ref x N_ref.

In case the system has N_m physical modes, the order t in equation 16-24 should theoretically be equal to 2N_m/N_ref in order to find the 2N_m characteristic poles. In practice, overspecification of the model order will be needed.

Since the correlation functions are a linear combination of the characteristic solutions of equation 16-24, λ_r^k {L}_r^T or λ_r^{*k} {L}_r^{T*}, they are also a solution of that equation. Hence,

[R_k][I] + [R_{k-1}][F_1] + ... + [R_{k-t}][F_t] = [0]    Eqn 16-25

Equation 16-25, which uses all response stations simultaneously, enables a global least squares estimate of the coefficient matrices [F_1]...[F_t]. The overdetermination is also achieved by considering all available or selected time intervals. Once the coefficient matrices are known, equation 16-24 can be reformulated into a generalized eigenvalue problem resulting in N_ref·t eigenvalues λ_r, yielding estimates for the system poles µ_r and the corresponding left eigenvectors {L}_r^T.
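For a single reference (N_ref = 1) the LSCE equations reduce to a scalar AR model, which makes the procedure easy to sketch; the mode and sampling values below are assumptions:

```python
import numpy as np

dt, wn, zeta = 0.001, 2 * np.pi * 20.0, 0.01   # assumed 20 Hz mode
mu = -zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)
k = np.arange(200)
R = (np.exp(mu * k * dt) + np.exp(np.conj(mu) * k * dt)).real  # decaying sinusoid

# Single-reference LSCE: Eqn 16-25 becomes the scalar AR model
# R_k + f1 R_{k-1} + ... + ft R_{k-t} = 0, solved in least squares over all k.
t = 2                                            # 2 poles for one mode
rows = np.array([R[i - np.arange(1, t + 1)] for i in range(t, len(R))])
f = np.linalg.lstsq(rows, -R[t:], rcond=None)[0]

# The roots of the characteristic polynomial are the discrete poles lambda_r
# (Eqn 16-24); Eqn 16-4 then converts them to continuous poles mu_r.
lam = np.roots(np.concatenate(([1.0], f)))
mu_id = np.log(lam) / dt
freq = np.abs(mu_id.imag) / (2 * np.pi)
xi = -mu_id.real / np.abs(mu_id)

assert np.allclose(xi, zeta, atol=1e-8)
```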

The outputs which function as references have to be chosen in such a way that they contain all of the relevant modal information. In fact, the selection of output-reference channels is similar to choosing the input-reference locations in a traditional modal test.


Extraction of mode shapes in a second step and model validation

Contrary to the stochastic subspace methods, the Polyreference LSCE does not yield the mode shapes. So, a second step is needed to extract the mode shapes using the identified modal frequencies and modal damping ratios. For output-only data, it has been shown [9] that this can be done by fitting the auto- and crosspower spectra between the responses and the responses serving as references:

X_mn(jω) = Σ_{r=1}^{N_m} [ A_r^mn/(jω - λ_r) + A_r^mn*/(jω - λ_r*) + B_r^mn/(-jω - λ_r) + B_r^mn*/(-jω - λ_r*) ]    Eqn 16-26

where X_mn(jω) is the crosspower between the m-th response station and the n-th response station serving as a reference.

In the case of autopowers (m = n), A_r^mn equals B_r^mn. The residue A_r^mn is proportional to the m-th component of the mode shape {ψ}_r and the residue B_r^mn is proportional to the n-th component of the mode shape {ψ}_r. Consequently, by fitting the crosspowers between all response stations and one reference station, the complete mode shape can be derived.
The power spectra fitting step offers the advantage that not all responses need to be included in the time-domain parameter extraction scheme, and that consequently, mode shapes of a large number of response stations can be easily processed by consecutively fitting the spectra. Additionally, it provides a graphical quality check by overlaying the actual test data with the synthesized data. In comparison with modal FRF synthesis, it can be observed in equation 16-26 that two additional terms as a function of -jω need to be included for a correct synthesis of the auto- and crosspowers, which are assumed to be estimated on the basis of the FFT and segment averaging. If X_mn(jω) were not calculated with the FFT segment averaging approach, but as the FFT of the correlation function between response m and response n estimated using equation 16-19, the last two terms in equation 16-26 could be neglected.
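Eqn 16-26 can be evaluated directly for such synthesis/overlay checks. The sketch below uses one hypothetical mode with made-up residues:

```python
import numpy as np

def synth_crosspower(w, lam, A, B):
    """Evaluate Eqn 16-26 for one (m, n) pair: poles lam[r] with residues
    A[r] (the jw terms) and B[r] (the additional -jw terms)."""
    jw = 1j * w[:, None]
    return (A / (jw - lam) + np.conj(A) / (jw - np.conj(lam))
            + B / (-jw - lam) + np.conj(B) / (-jw - np.conj(lam))).sum(axis=1)

# One hypothetical 20 Hz mode with 1 % damping; residues are invented.
wn, zeta = 2 * np.pi * 20.0, 0.01
lam = np.array([-zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)])
A = np.array([0.5 + 0.1j])
B = np.array([0.2 - 0.05j])

w = 2 * np.pi * np.linspace(1, 40, 400)
X = synth_crosspower(w, lam, A, B)

# The synthesized spectrum peaks near the damped natural frequency.
peak_hz = w[np.argmax(np.abs(X))] / (2 * np.pi)
assert abs(peak_hz - 20.0) < 0.5
```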

16.2.3 Selection of the modal parameter identification method

This section discusses the criteria for selecting a particular method.

LSCE - LSFD

This classical Least Squares Complex Exponential method is adapted to work on auto-correlation and cross-correlation functions instead of FRFs or Impulse Response functions.


A subset of the response functions can be selected as references in the computation of the cross power functions. The responses chosen as references should contain all of the relevant modal information, as is required for the input-reference locations in a traditional modal test.

Mode shapes are identified in a secondary process using the Least Squares Frequency Domain procedure. For the theoretical background on this method see section 16.2.2.

BR (Balanced Realization)

This is one of the "subspace" techniques which identifies frequency, damping and mode shapes.

A subset of the response functions can be selected as references. These are used in the computation of the cross power functions from the original time domain data.

This method is useful in identifying the most dominant modes occurring under operational conditions.

CVA (Canonical Variate Analysis)

This is the second of the "subspace" techniques which identifies frequency, damping and mode shapes.

In this case all the response functions must be selected as references, which are used in the computation of the cross power functions from the original time domain data. This method thus requires more computational effort, but the algorithm will give "equal importance" to all modes and can identify modes which are not well excited under operational conditions. For the theoretical background on subspace methods see section 16.2.1.


16.3 References

[1] LMS CADA-X Running Modes Manual, 1997.
[2] Otte D., Development and Evaluation of Singular Value Analysis Methodologies for Studying Multivariate Noise and Vibration Problems, PhD thesis, K.U.Leuven, 1994.
[3] Otte D., Van de Ponseele P., Leuridan J., "Operational Deflection Shapes in Multisource Environments", Proc. 8th International Modal Analysis Conference, pp. 413-421, Florida, 1990.
[4] Abdelghani M., Basseville M., Benveniste A., "In-operation Damage Monitoring and Diagnostics of Vibrating Structures, with Applications to Offshore Structures and Rotating Machinery", Proc. of IMAC XV, Orlando, 1997.
[5] Desai U.B., Debajyoti P., Kirkpatrick R.D., "A realization approach to stochastic model reduction", Int. J. Control, Vol. 42, No. 4, pp. 821-838, 1985.
[6] Kung S., "A new identification and model reduction algorithm via singular value decomposition", Proc. 12th Asilomar Conf. Circuits, Systems and Computers, pp. 705-714, Pacific Grove, 1978.
[7] Brown D., Allemang R., Zimmerman R., Mergeay M., "Parameter Estimation Techniques for Modal Analysis", SAE Paper 790221, pp. 19, 1979.
[8] James G.H. III, Carne T.G., Laufer J.P., "The Natural Excitation Technique (NExT) for Modal Parameter Extraction from Operating Structures", The International Journal of Analytical and Experimental Modal Analysis, Vol. 10, No. 4, pp. 260-277, 1995.
[9] Hermans L., Van der Auweraer H., "On the Use of Auto- and Cross-correlation Functions to Extract Modal Parameters from Output-only Data", Proc. of the 6th International Conference on Recent Advances in Structural Dynamics, Work in Progress Paper, 1997.
[10] Van der Auweraer H., Wyckaert K., Hendricx W., "From Sound Quality to the Engineering of Solutions for NVH Problems: Case Studies", Acustica/Acta Acustica, Vol. 83, No. 5, pp. 796-804, 1997.
[11] Wyckaert K., Van der Auweraer H., Hendricx W., "Correlation of Acoustical Modal Analysis with Operating Data for Road Noise Problems", Proc. 3rd International Congress on Air- and Structure-Borne Sound and Vibration, Montreal (CND), June 13-15, 1994, pp. 931-940, 1994.
[12] Wyckaert K., Hendricx W., "Transmission Path Analysis in View of Active Cancellation of Road Induced Noise in Automotive Vehicles", 3rd International Congress on Air- and Structure-Borne Sound and Vibration, Montreal (CND), June 13-15, 1994, pp. 1437-1445, 1994.


[13] Van der Auweraer H., Ishaque K., Leuridan J., "Signal Processing and System Identification Techniques for Flutter Test Data Analysis", Proc. 15th Int. Seminar on Modal Analysis, K.U.Leuven, pp. 517-538, Leuven, 1990.
[14] Van der Auweraer H., Guillaume P., "A Maximum Likelihood Parameter Estimation Technique to Analyse Multiple Input/Multiple Output Flutter Test Data", AGARD Structures and Materials Panel Specialists' Meeting on Advanced Aeroservoelastic Testing and Data Analysis, Paper No. 12, May 1995.
[15] Van der Auweraer H., Leuridan J., Pintelon R., Schoukens J., "A Frequency Domain Maximum Likelihood Identification Scheme with Application to Flight Flutter Data Analysis", Proc. 8th IMAC, pp. 1252-1261, Kissimmee, 1990.



Chapter 17

Running modes analysis

This chapter describes the basic principles involved in running mode analysis. It includes the following topics:

The definition of running modes analysis

The type of measurement data required for running mode analysis

The identification and scaling of running modes

The interpretation and validation of running modes

17.1 Running mode analysis

The aim of modal analysis is to identify a modal model that describes the dynamic behavior of a (mechanical) system. This behavior is identified by means of the transfer function measured between any two degrees of freedom of the system.

The outcome of a modal analysis therefore is the estimated modal parameters of the system, which are the natural frequencies (ω_n), damping ratios (ξ) and scaled mode shapes (V_ik).

One of the most common ways of estimating the modal parameters is based upon the measurement of FRFs between one or more inputs (reference DOFs) and all response DOFs of interest. These measurements are made under well defined and controlled conditions, where all input and output signals are measured and no unknown forces (external or internal) are acting on the system.

The modal model is (ideally) valid under any circumstances; that is to say, whatever the frequency contents, level or nature of the acting forces. This makes modal analysis a very powerful tool, and the modal model (once identified) can be used in a number of ways, such as troubleshooting, forced response prediction, sensitivity analysis or modification prediction.

For many reasons, a complete modal analysis can be impracticable. It may be that the cost of the test setup is too high, the measurement object (e.g. a prototype) cannot be made available for the period of time required to perform a modal analysis, or it is found to be simply impossible to isolate the object from all the forces acting on the system and excite it artificially.

In this case, it is possible to take measurements of the system while it is operating. A number of output signals can be measured (one at each response DOF), while the system is operating under stationary conditions. This provides a set of measurements (X_i(ω)) as a function of frequency.

The measured quantity X_i(ω) at DOF i can be any number of things: displacement, acceleration, voltage, angular position or acceleration, for example. It is however measured for one particular operating condition, with an unknown level or nature of the acting forces or inputs.

If you are interested in a particular phenomenon at a well defined frequency, it is very often most helpful to see what the output levels are at that frequency for each measurement DOF. So you might, for example, want to know what the harmonic motion of measurement point 13 is at 85.6 Hz, or perhaps its level of acceleration. These values can then be assembled in a vector {X}, having one element for each of the measurement DOFs.


Animating the system's wire frame model can lead to a better understanding of these phenomena. This makes it possible to show each motion (or acceleration) level at the corresponding DOF, in a cyclic manner. Because of the external resemblance of the animated representation of the vector quantity {X} to the mode shape vector {V}, the vector {X} is called a running mode, or an operational deflection shape.

These running modes must be interpreted entirely differently from modal modes. Running modes only reflect the cyclic motion of each DOF under specific operational conditions, and at a specific frequency. Using a modal model based on displacement/force frequency response functions {H}, the displacement running mode {X} can be described as follows.

{X_i(ω_p)} = {H_i1(ω_p)} F_1(ω_p) + {H_i2(ω_p)} F_2(ω_p) + ... + {H_im(ω_p)} F_m(ω_p)    Eqn 17-1

= Σ_{k=1}^{2N} [ V_ik V_1k / (jω_p - λ_k) ] F_1(ω_p) + ... + Σ_{k=1}^{2N} [ V_ik V_mk / (jω_p - λ_k) ] F_m(ω_p)    Eqn 17-2

where,
i = the DOF counter
ω_p = the particular angular frequency
F_j(ω) = the force input spectrum at DOF j
m = the number of acting forces

The above equation clearly shows that running modes:
- can be identified at any of the measured frequencies ω_p, whereas a modal mode has a fixed natural frequency determined by the structural characteristics of the system (mass, size, Young's modulus, etc.).
- depend on the level and nature of the acting force(s).
- depend on the structural characteristics of the system, through its FRF behavior.
- depend on the frequency contents of each of the acting forces: if F_3(ω_p) happens to be zero at ω_p, it will not contribute to the running mode {X(ω_p)}.
- will be dominant at structural resonances (ω_p ≈ ω_k), but also at peaks in the acting force spectra.
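The superposition of Eqns 17-1 and 17-2 can be illustrated numerically; the mode shapes, poles and force spectra below are all invented for the example:

```python
import numpy as np

# Sketch of Eqns 17-1 / 17-2: a running mode {X(w_p)} assembled from a known
# modal model and assumed force spectra (all numbers illustrative).
V = np.array([[1.0, 0.3], [0.8, -0.5], [0.6, 0.7],
              [0.4, -0.2], [0.2, 0.9], [0.1, 0.4]])   # real mode shapes, 6 DOFs
lam = np.array([-0.5 + 1j * 2 * np.pi * 8.0,
                -0.8 + 1j * 2 * np.pi * 21.0])        # poles of modes 1 and 2
F = np.array([1.0 + 0.2j, 0.5 - 0.1j])                # force spectra at DOFs 1, 2

wp = 2 * np.pi * 8.0                                  # evaluate near mode 1
X = np.zeros(6, dtype=complex)
for jf, Fj in enumerate(F):                           # sum over acting forces
    for k in range(2):                                # sum over pole pairs
        X += V[:, k] * V[jf, k] * Fj / (1j * wp - lam[k])
        X += V[:, k] * V[jf, k] * Fj / (1j * wp - lam[k].conj())

# Near the 8 Hz resonance the running mode is essentially a scaled copy of
# mode shape 1 (high collinearity), as Eqn 17-2 predicts.
c = abs(np.vdot(V[:, 0], X)) / (np.linalg.norm(V[:, 0]) * np.linalg.norm(X))
assert c > 0.95
```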


17.2 Measuring running modes

Ideally, all response spectra for a running mode analysis would be acquired:

- simultaneously

- in a short period of time in which the operating conditions of the test object remain constant

- with signals having a high signal to noise ratio, so that no averaging is required.

In practice, the number of acquisition channels on the measurement system limits the number of response signals which can be measured simultaneously, and so different sets of responses have to be measured at different periods of time. Additionally, if a relatively high level of noise is present on the signals, an averaging procedure may be necessary during the acquisition of the response signals.

Because of varying operating conditions, it is usual to choose a specific response DOF as a reference station and then measure the responses relative to this reference. If the operating conditions then change slightly from one measurement to the next, this will hopefully affect all response signals in the same way and the change will be cancelled out because of the relative nature of the measurements. This procedure also guarantees a fixed phase relationship between the different response signals, using the phase of the reference signal as a reference.

The two measured functions available for running mode analysis are: transmissibility functions and crosspower spectra.

17.2.1 Transmissibility functions

When the response signals are related to the reference by simply dividing each response signal frequency spectrum by the reference frequency spectrum, the result is the transmissibility function (T)

T_ij(ω) = X_i(ω) / X_j(ω)    Eqn 17-3

where j is the reference station.

284 The Lms Theory and Background Book



When averaging is involved, transmissibilities can be calculated from measured crosspower and autopower spectra.

T_ij(ω) = G_ij(ω) / G_jj(ω)    Eqn 17-4

The transmissibility function represents the complex ratio (amplitude and phase) between two spectra. A peak in this function may thus be caused either by a peak in the numerator crosspower (i.e. a structural resonance or a peak in the excitation spectrum), or by a zero (anti-resonance) in the denominator autopower spectrum. As resonance peaks occur at the same frequencies in both the cross and autopower spectra, they tend to cancel each other out in the ratio, while the antiresonances do not coincide, so the denominator zeros will cause additional peaks in Tij.

In the case of Frequency Response Functions (acceleration over force), different estimators (H1, H2, Hv) can be used to estimate the transmissibility functions. In practice, the difference between these different methods of estimating Tij(ω) is small when the coherence function is high (near 100 %). When estimating the transmissibility functions from Equation 17-4 above, the coherence function γ²_ij(ω) can also be calculated using the following equation.

γ²_ij(ω) = |G_ij(ω)|² / (G_ii(ω) · G_jj(ω))    Eqn 17-5

The coherence function expresses the linear relationship between both response signals of the measured system. This coherence function is expected to be high, since both responses are caused by the same acting forces. In practice, however, it can be low for the same reasons as those affecting the measurement of FRFs, that is to say due to low signal to noise ratio for one or both of the signals, bad signal conditioning, etc.
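As a concrete illustration, the averaged-spectra route of Equations 17-4 and 17-5 can be sketched in a few lines of Python/NumPy. The code below is illustrative only (function and variable names are not part of the software described in this book) and omits windowing and overlap:

```python
import numpy as np

def transmissibility_and_coherence(resp_blocks, ref_blocks):
    """Averaged transmissibility T_ij = G_ij/G_jj (Eqn 17-4) and coherence
    |G_ij|^2/(G_ii.G_jj) (Eqn 17-5) from repeated time blocks."""
    X = np.fft.rfft(np.asarray(resp_blocks), axis=1)   # response spectra
    R = np.fft.rfft(np.asarray(ref_blocks), axis=1)    # reference spectra
    G_ij = np.mean(X * np.conj(R), axis=0)             # averaged crosspower
    G_ii = np.mean(np.abs(X) ** 2, axis=0)             # response autopower
    G_jj = np.mean(np.abs(R) ** 2, axis=0)             # reference autopower
    return G_ij / G_jj, np.abs(G_ij) ** 2 / (G_ii * G_jj)

# Noise-free, perfectly linear pair: T = 2 everywhere, coherence = 1.
rng = np.random.default_rng(0)
ref = rng.standard_normal((8, 256))   # 8 averages of 256-sample blocks
T, coh = transmissibility_and_coherence(2.0 * ref, ref)
```

With measurement noise or changing operating conditions, the coherence drops below one exactly as described above.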

Another interesting reason why the coherence between two measured signals may be low can be derived from equation 17-1 when it is substituted in equation 17-3. The linear relationship (and hence the coherence) will vary as a function of the weighting factors F_j(ω); this can be because of changing operating conditions during the averaging process, for example. High coherence function values in the frequency regions of interest therefore indicate both a high quality of the measurement signals and stationary operating conditions.


Absolutely scaled running mode coefficients for each DOF i can be obtained by multiplying the transmissibility spectra by the RMS value of the reference autopower spectrum.

|X_i(ω)| = T_ij(ω) · √(G_jj(ω))    Eqn 17-6

When the measured autopower spectrum has units of displacement squared, the scaled running mode will be expressed in units of displacement (for example, meters or inches), if the transmissibility functions themselves are dimensionless. Displacement running modes can be converted to velocity or accelerations by simply multiplying by jω or (jω)². For a certain value of ω (say ω_0), the following relationships apply.

|X_i(ω_0)| = T_ij(ω_0) · √(G_jj(ω_0))    [m]    Eqn 17-7

Ẋ_i(ω_0) = |X_i(ω_0)| · jω_0    [m/s]    Eqn 17-8

Ẍ_i(ω_0) = |X_i(ω_0)| · (jω_0)²    [m/s²]    Eqn 17-9
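The scaling chain of Equations 17-7 to 17-9 amounts to one square root and two multiplications per spectral line. A minimal sketch (illustrative names; here the complex transmissibility value is kept so the phase relative to the reference survives the scaling):

```python
import numpy as np

def scale_running_mode(T_ij_w0, G_jj_w0, w0):
    """Scaled running mode coefficient at w0 (Eqn 17-7) and its velocity
    and acceleration forms (Eqns 17-8 and 17-9)."""
    x = T_ij_w0 * np.sqrt(G_jj_w0)   # displacement coefficient [m]
    v = x * 1j * w0                  # velocity [m/s]
    a = x * (1j * w0) ** 2           # acceleration [m/s^2]
    return x, v, a

# Hypothetical values: T = 0.5+0.5j, reference autopower 4 m^2, w0 = 10 rad/s.
x, v, a = scale_running_mode(0.5 + 0.5j, 4.0, 10.0)
```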

17.2.2 Crosspower spectra

When it can be assumed that the operating conditions are not going to change while measuring all response signals, then it is possible to measure just crosspower spectra between each response DOF i and a certain reference DOF j

G_ij(ω) = X_i(ω) · X_j*(ω)    Eqn 17-10

where * denotes the complex conjugate.


Compared to transmissibility functions, crosspower functions have the advanĆ
tage that peaks clearly indicate high response levels (which may still be caused
by a structural resonance or a peak in the acting force spectrum). This techĆ
nique is especially useful when all the response signals are measured simultaĆ
neously by a multi-channel measurement system. In this case, the operating
conditions are indeed the same for all response DOFs.


Absolutely scaled running modes can, in this case, be obtained again by means of the autopower spectrum of the reference station j

{X_i(ω)} = G_ij(ω) / √(G_jj(ω))    Eqn 17-11

When displacements were measured, the running mode coefficient will have units of displacement. Equations 17-8 and 17-9 can be used to derive velocity or acceleration values.


17.3 Identification and scaling of running modes

Unlike modal modes, a running mode can be identified at any arbitrary frequency of the measured spectra.

Simple peak picking and mode picking methods can be used to extract the sampled values, corresponding to a certain spectral line, from the measured spectra. They can then be scaled, and assembled into a vector which can be listed, or animated using a 3D wire frame model of the measured object. For a measurement blocksize of 1024 (512 spectral lines), it is thus possible to identify 512 running modes - or even more when interpolating between the spectral lines.

Note! There is no such quantity as damping defined for a running mode. Similarly other modal parameter concepts such as residues or modal participation factors have no meaning for running mode analysis.
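The peak-picking step itself is no more than sampling each measured spectrum at one spectral line and stacking the complex values into a vector. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def pick_running_mode(spectra, freqs, f0):
    """Assemble a running mode vector: the complex value of every measured
    response spectrum at the spectral line nearest to f0."""
    line = int(np.argmin(np.abs(np.asarray(freqs) - f0)))
    return np.array([s[line] for s in spectra]), freqs[line]

freqs = np.linspace(0.0, 512.0, 513)                        # 513 spectral lines
spectra = [np.full(513, c) for c in (1 + 0j, 1j, -1 + 0j)]  # 3 response DOFs
mode, f_used = pick_running_mode(spectra, freqs, f0=56.0)
```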

17.3.1 Scaling of running modes

It is possible to scale the identified running modes to values with absolute meaning.

The scaling of running mode coefficients that have been determined using peak picking methods depends upon the nature of the measurement data (e.g. transmissibilities, or autopowers). Several ways of scaling running modes can be considered:

- If transmissibility spectra were measured, then scaling can be performed using the reference autopower spectrum, as described in equation 17-6.
- If crosspowers were measured, then equation 17-11 can be applied to scale the running modes, again using the reference autopower spectrum.
- It is possible to convert between displacement, velocity and acceleration coefficients using equations 17-7, 17-8 and 17-9, where it is possible to integrate or differentiate once or twice.


- A number of running modes can be scaled manually, by entering a complex scale factor. Each individual mode shape coefficient will be multiplied with this scaling factor.
- Finally, a very general scaling mechanism can be used to scale a number of running modes using a spectrum. Individual running mode coefficients will be multiplied by the (possibly complex) value of the spectrum block, belonging to the spectral line that corresponds to the frequency of that particular mode.

Each one of the above scaling methods may change and influence the units of the scaled running mode. The scaling factor's units will be incorporated into the mode shape coefficient units, which were initially obtained from the measurement data.


17.4 Interpretation of results

A set of functions exists that are designed to assess the validity of modes. These include the Modal Scale Factor, the Modal Assurance Criterion and Modal decomposition.

Modal Scale Factors and Modal Assurance Criterion

Both the Modal Scale Factor and Modal Assurance Criterion are mathematical tools used to compare two vectors of equal length. They can be used to compare running and modal mode shape information.

The Modal Scale Factor between columns l and j of mode shape k, or MSF_jlk, is the ratio between two vectors. Although this ratio should be independent of the row index i (the response station), a least squares estimate has to be computed for it when more than one output station coefficient is available.

MSF_jlk = {V_jk}^t* {V_lk} / ({V_jk}^t* {V_jk})    Eqn 17-12

where {V_jk} is the jth column of [V_k].

The corresponding Modal Assurance Criterion expresses the degree of confidence in this calculation, which is obtained using equation 17-13.

MAC_jlk = |{V_jk}^t* {V_lk}|² / (({V_jk}^t* {V_jk}) · ({V_lk}^t* {V_lk}))    Eqn 17-13

If a linear relationship exists between the two complex vectors {V_jk} and {V_lk}, then the MSF is the corresponding proportionality constant between them, and the MAC value will be near to one. If they are linearly independent, the MAC value will be small (near zero), and the MSF not very meaningful.

Modal Scale Factors and Modal Assurance Criterion values can be used to compare an obtained modal model with the accepted running modes. The MAC values for corresponding modeshapes should be near 100 % and the MSF between corresponding vectors should be close to unity. When multiple inputs are used, the MSF can be calculated for each input, while the corresponding MAC will be the same for all of them.
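Equations 17-12 and 17-13 translate directly into a few lines of linear algebra. In this sketch (Python/NumPy, names illustrative) the conjugate transpose {V}^t* is NumPy's `vdot`, which conjugates its first argument:

```python
import numpy as np

def msf(v_j, v_l):
    """Modal Scale Factor (Eqn 17-12) between two equal-length vectors."""
    return np.vdot(v_j, v_l) / np.vdot(v_j, v_j)

def mac(v_j, v_l):
    """Modal Assurance Criterion (Eqn 17-13), between 0 and 1."""
    num = abs(np.vdot(v_j, v_l)) ** 2
    return num / (np.vdot(v_j, v_j).real * np.vdot(v_l, v_l).real)

a = np.array([1.0, 2.0, -1.0])
b = 3j * a                      # linearly dependent: MAC -> 1, MSF -> 3j
c = np.array([2.0, -1.0, 0.0])  # orthogonal to a: MAC -> 0
```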


Modal decomposition

When a modal model for the same DOFs is available for a measured object, it is possible to compare modal and running modes and to track down resonance phenomena causing a particular running mode to become predominant. This is termed Modal decomposition. By decomposing each running mode into a linear combination of the modal modes, it becomes clear whether or not a running mode originates primarily from a resonance phenomenon.

The modal modes form what is termed the `basis' group of modes. The running modes are in a separate group that is to be decomposed. The following formula applies.

{X_i(ω_0)} = a_1{V_1} + a_2{V_2} + … + a_n{V_n} + Rest    Eqn 17-14

where
X_i is the i th mode of the group to be decomposed (running modes)
V_i is the i th mode of the basis group (modal modes)
a_i are the scaling coefficients needed to satisfy the above equation.

The scaling coefficients are rescaled relative to the maximum value.

{X_i(ω_0)} / a_max · 100% = (a_1/a_max) · 100% · {V_1} + … + (a_n/a_max) · 100% · {V_n} + Rest    Eqn 17-15

The "Rest" is expressed as a relative error

Rest = ‖{X_i(ω_0)} − [a_1{V_1} + … + a_n{V_n}]‖ / ‖{X_i(ω_0)}‖ · 100%    Eqn 17-16

Note! Take care when interpreting these values since resemblance of the modal and
the running mode may purely be coincidental. A running mode at 56 Hz will
have no connection with a modal mode at 200 Hz even if they look alike.
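The decomposition of Equation 17-14, with the relative Rest of Equation 17-16, is an ordinary least-squares problem. A minimal sketch under that reading (names illustrative):

```python
import numpy as np

def modal_decomposition(x, basis_modes):
    """Least-squares fit of Eqn 17-14: coefficients a_i of the basis (modal)
    modes plus the relative 'Rest' error of Eqn 17-16 in percent."""
    B = np.column_stack(basis_modes)
    coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)
    rest = np.linalg.norm(x - B @ coeffs) / np.linalg.norm(x) * 100.0
    return coeffs, rest

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, -1.0])
x = 2.0 * v1 - 0.5 * v2     # a running mode lying fully in the modal space
coeffs, rest = modal_decomposition(x, [v1, v2])
```

A running mode dominated by a resonance gives one large coefficient and a small Rest; a force-driven running mode leaves a large Rest.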



Chapter 18

Modal validation

This document describes tools used to verify the validity of a modal model.
Modal Scale Factors and Modal Assurance Criterion

Mode participation

Reciprocity

Scaling

Modal Phase Collinearity and Mean Phase Deviation

Comparison of modal models

Mode Indicator Functions

Summation of FRFs

Synthesis of FRFs


18.1 Introduction

A number of means are available to validate the accuracy of modal models - frequencies, damping values, mode shapes and modal participation factors. These tools are:

- Modal Scale Factors between modes and corresponding correlation factors (Modal Assurance Criterion, MAC), described in section 18.2.
- Mode participation, described in section 18.3.
- Reciprocity between inputs and outputs, described in section 18.4.
- Generalized modal parameters (Scaling), described in section 18.5.
- Mode complexity, described in section 18.6.
- Modal Phase Collinearity and Mean Phase Deviation indices, described in section 18.7.
- Comparison of modal models, described in section 18.8.
- Mode Indicator Functions, described in section 18.9.
- Summation of FRF data in the Index table, described in section 18.10.
- Synthesis of FRFs, described in section 18.11.

Some validation procedures allow you to convert the complex mode shape vectors to normalized ones. Normalized mode shapes are obtained from the amplitudes of the complex mode shape coefficients after a rotation over their weighted mean phase angle in the complex plane.


18.2 MSF and MAC

Modal Scale Factors and Modal Assurance Criterion

The FRF between input j and output i on a structure can be written in partial fraction expansion form as

h_ij(ω) = Σ_{k=1}^{N} [ r_ijk/(jω − λ_k) + r*_ijk/(jω − λ*_k) ]    Eqn 18-1

The matrix of FRFs is then expressed as

[H(ω)] = Σ_{k=1}^{N} [ [R_k]/(jω − λ_k) + [R_k]*/(jω − λ*_k) ]    Eqn 18-2

where [R_k] represents the matrix of residues. When Maxwell's reciprocity principle holds for the tested structure, this residue matrix is symmetric and can be rewritten as

[R_k] = a_k {V_k}{V_k}^t    Eqn 18-3

The ratio between two residue elements on the same row i but in two different columns j and l can be computed as

r_ij,k / r_il,k = v_jk / v_lk = MSF_jlk    Eqn 18-4

This ratio MSF_jlk is called the Modal Scale Factor between columns l and j of mode k. Although this ratio should be independent of the row index i (the response station), a least squares estimate has to be computed for it when more than one output station residue coefficient is available

MSF_jlk = {R_jk}^t* {R_lk} / ({R_jk}^t* {R_jk})    Eqn 18-5

where {R_jk} is the jth column of [R_k].


The corresponding Modal Assurance Criterion expresses a degree of confidence for this calculation:

MAC_jlk = |{R_jk}^t* {R_lk}|² / (({R_jk}^t* {R_jk}) · ({R_lk}^t* {R_lk}))    Eqn 18-6

If a linear relationship exists between the two complex vectors {R_jk} and {R_lk}, the MSF is the corresponding proportionality constant between them and the MAC value will be near to one. If they are linearly independent, the MAC value will be small (near zero), and the MSF not very meaningful.

In a more general way, the MAC concept can be applied to two arbitrary complex vectors. This is useful in comparing two arbitrarily scaled mode shape vectors, since similar mode shapes have a high MAC value.

Modal Scale Factors and Modal Assurance Criterion values can be used to compare two modal models obtained from two different modal parameter estimation processes on the same test data, for example. When comparing mode shapes, the MAC values for corresponding modes should be near 100 % and the MSF between corresponding residue vectors (mode shapes, scaled by the modal participation factors) should be close to unity. When multiple inputs were used, this MSF can be calculated for each input while the corresponding MAC will be the same for all of them.

A second application for the MAC value is derived from the orthogonality of mode shape vectors when weighted by the mass matrix:

{V_k}^t [M] {V_l} = m_k when k = l
                  = 0 otherwise    Eqn 18-7

where m_k represents the modal mass for mode k.

Even when no exact mass matrix is available, it can usually be assumed to be almost diagonal with more or less equal elements. In this case, the calculation of the MAC value between two different modes is approximately equivalent to checking their orthogonality.

For more specific information on using MSF and MAC for interpreting results in a running mode analysis see section 17.4.


18.3 Mode participation

The relative importance of different modes in a certain frequency band can be investigated using the concept of modal participation. For each mode, the sum of all residue values for a specific reference expresses that mode's contribution to the response. At the same time these sums can be added over all references, to evaluate the importance of each mode.

Note! These evaluations are only meaningful when the same response and reference stations are included for all modes.

When a comparison is made of the residue sums for one mode at all the references, it evaluates the reference point selection for that mode. The reference with the highest residue sum is the best one to excite that mode.

When these sums are added together for all references, the importance of the modes themselves is evaluated. The mode with the highest result is the most important one.

Finally the sums of residues can be added for all modes. Comparison of these results between different inputs allows you to evaluate the selection of reference stations in a global sense for all modes.


18.4 Reciprocity between inputs and outputs

Reciprocity is one of the fundamental assumptions of modal analysis theory. This section discusses the reciprocity of FRFs and the reciprocity of the modal model.

Reciprocity of FRFs

Reciprocity of FRFs means that -

measuring the response at DOF i while exciting at DOF j is the same as measuring the response at DOF j while exciting at DOF i

This is expressed mathematically as -

h_ij(ω) = h_ji(ω)    Eqn 18-8

This means that the FRF matrix is symmetric. Note that this property is inherently assumed when performing hammer impact testing to measure FRFs or impulse responses.

Reciprocity in the modal model

Using the modal model for the FRF matrix

[H(ω)] = Σ_{k=1}^{N} [ {V}_k{L_k}^t/(jω − λ_k) + {V}*_k{L*_k}^t/(jω − λ*_k) ]    Eqn 18-9

it becomes clear that, when this matrix is symmetric, the role of mode shape vectors and modal participation vectors can be switched. Making an abstraction of the absolute scaling of residues, this property can be expressed as follows.

For a reciprocal test structure, the modal participation factors should be proportional to the mode shape coefficients at the input stations.

Using this proportionality between mode shapes and modal participation factors, reciprocity can be checked for each mode when data for more than one input station has been used for the modal parameter estimation.


If reciprocity exists then it is possible to correctly synthesize the transfer function between any pair of response and reference DOFs. This is done by computing a scaling factor between the driving point mode shape and the modal participation factor. This same scaling factor is then used as a reference to derive the necessary participation factor from the available mode shape coefficient.

If reciprocity is not satisfied then only the transfer functions between the measured response and reference DOFs can be correctly synthesized. If reciprocity is required then it can be imposed on the model, and a number of options are available to calculate the proportionality factor needed to do this.

1 Select one driving point for each mode. The best choice in this case is the one with the largest driving point residue, since that is where the mode is best excited and observed.

2 Select one specific driving point for all modes. Other participation factors are disregarded for scaling.

3 Compute a reciprocal scale factor (RSF) using a least squares average of all the driving point data, as defined by the following formula for n driving points.

RSF = Σ_{i=1}^{n} v_i* l_i / Σ_{i=1}^{n} v_i* v_i

where
v_i = the mode shape coefficient
l_i = the modal participation factor
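Option 3 above is a one-line least-squares fit. A sketch (assuming the mode shape coefficients v and participation factors l at the n driving points are available as arrays; names illustrative):

```python
import numpy as np

def reciprocal_scale_factor(v, l):
    """Least-squares RSF over n driving points:
    RSF = sum(v_i* l_i) / sum(v_i* v_i)."""
    v = np.asarray(v)
    l = np.asarray(l)
    return np.vdot(v, l) / np.vdot(v, v).real  # vdot conjugates its first arg

# If every participation factor is exactly 2j times the corresponding mode
# shape coefficient, the fit recovers that proportionality constant.
v = np.array([1.0 + 1.0j, 2.0, -1.0j])
l = 2j * v
rsf = reciprocal_scale_factor(v, l)
```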


18.5 Generalized modal parameters

This section deals with mode shape scaling and generalized parameters (modal mass).

The residue r_ij,k between locations i and j for mode k can be written as the product of a scaling factor a_k (which is independent of the location) and the modal vector components in both locations. If the structure is proportionally damped, the modal vectors of the structure are real whereas the residues are purely imaginary. As a consequence, the scaling factor a_k is also purely imaginary.

r_ij,k = a_k v_ik v_jk
a_k = 1 / (2j ω_dk m_k)    Eqn 18-10

Equation 18-1 can then be rewritten as

H_ij(jω) = Σ_{k=1}^{N} 1/(2j m_k ω_dk) · [ v_ik v_jk/(jω − λ_k) − v*_ik v*_jk/(jω − λ*_k) ]    Eqn 18-11

where

m_k = modal mass of mode k
ω_dk = damped natural frequency of mode k = ω_nk √(1 − ζ_k²)
ζ_k = the critical damping ratio of mode k
ω_nk = the undamped natural frequency of mode k

At this point, it should be pointed out that equation 18-11 contains N more parameters than equation 18-1, i.e. one more parameter per mode. This is due to the fact that residues are scaled quantities whereas the modal vectors are determined within a scaling factor only. In equation 18-11 the modal mass values play the role of the scaling constants. It is clear that the value of the modal mass depends on the scaling scheme that was used to obtain the numerical values of the modal vector amplitudes.

When the residues of a proportionally damped structure are known, equations 18-10 and 18-11 can therefore be used to compute the modal mass and the modal vector amplitudes once a scaling method is proposed. Indeed residues, modal vectors and modal mass are related by the following equation


r_ijk = v_ik v_jk / (2j ω_dk m_k)    Eqn 18-12

To compute the amplitudes of one modal vector and the corresponding modal mass from a set of residues with respect to a given input location j, you need one additional equation, since the set of equations that can be written for all output locations i in the form of equation 18-12 is undetermined: N equations in N+1 unknowns are obtained. This last equation will actually determine the scaling of the modal vector.

Note that an eigenvector determines only a direction in the state space and has no absolutely scaled amplitude, while a residue has a magnitude with physical meaning. The scaling of the eigenvectors will determine the modal mass. Modal stiffness is determined as the modal mass multiplied by the natural frequency squared. Modal damping is twice the modal mass multiplied by the natural frequency and the damping ratio.

- Unity mass
In this case the mode shapes and participation factors are scaled such that the modal mass (m_k) in equation 18-12 is equal to 1.

- Unity stiffness
In this case the mode shapes and participation factors are scaled such that the modal stiffness (k_k = m_k ω_k²) is scaled to 1.

- Unity modal A
In this case the mode shapes and participation factors are scaled such that the scaling factor (a_k) is scaled to 1. This scaling factor is independent of the DOFs.

- Unity length
In this case the mode shapes and participation factors are scaled such that the squared norm of the vector v_ik is scaled to unity.

Σ_{i=1}^{N0} |v_ik|² = 1

- Unity maximum
In this case the mode shapes and participation factors are scaled such that the vector v_ik is scaled to 1, where i is the DOF with the largest mode shape amplitude.

- Unity component
In this case the mode shapes and participation factors are scaled such that the vector v_ik is scaled to 1, where i is any DOF selected by the user.
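Two of the simpler schemes (unity length and unity maximum) can be sketched as follows. Note that whenever the shape is rescaled by a factor s, the participation factors must be rescaled by 1/s to leave the residues of equation 18-12 unchanged (function names are illustrative):

```python
import numpy as np

def scale_unity_length(v):
    """Scale a mode shape so that sum_i |v_ik|^2 = 1 (unity length)."""
    v = np.asarray(v, dtype=complex)
    return v / np.linalg.norm(v)

def scale_unity_maximum(v):
    """Scale a mode shape so the largest-amplitude coefficient becomes 1."""
    v = np.asarray(v, dtype=complex)
    return v / v[np.argmax(np.abs(v))]

v = np.array([3.0, 4.0j, 0.0])
u_len = scale_unity_length(v)   # squared norm becomes exactly 1
u_max = scale_unity_maximum(v)  # the 4j coefficient becomes 1
```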


18.6 Mode complexity

When a mass is added to a mechanical structure at a certain measurement point, then the damped natural frequencies for all modes will shift downwards. This theoretical characteristic forms the basis of a criterion for the evaluation of estimated mode shape vectors.

For each response station, the sensitivity of each natural frequency to a mass increase at that station can be calculated and should be negative. A quantity called the "Mode Overcomplexity Value" (MOV) is defined as the (weighted) percentage of the response stations for which a mass addition indeed decreases the natural frequency for a specific mode,

MOV_k = ( Σ_{i=1}^{N0} w_i a_ik / Σ_{i=1}^{N0} w_i ) × 100%    Eqn 18-13

where

w_i is the weighting factor
= 1 for unweighted calculations
= |v_ik|² for weighted calculations

a_ik = 1 if the k th frequency sensitivity to a mass addition in point i is negative
= 0 otherwise

This MOV index should be high (near 100 %) for high quality modes. If this index is low, the considered mode shape vector is either computational or wrongly estimated. It is called "overcomplex", which means that the phase angle of some modal coefficients exceeds a reasonable limit.

However, if this MOV is low for all modes for a specific input station (say, below 10%), this might indicate that the excitation force direction was wrongly entered while measuring the FRFs for that input station. This error may be corrected by changing the signs of the modal participation factors for all modes for that particular input.
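Equation 18-13 reduces to a weighted count of negative frequency sensitivities. A minimal sketch (the per-station sensitivities are assumed to have been computed already; names illustrative):

```python
import numpy as np

def mode_overcomplexity_value(freq_sensitivities, weights=None):
    """MOV of Eqn 18-13: (weighted) percentage of response stations whose
    natural-frequency sensitivity to an added mass is negative."""
    s = np.asarray(freq_sensitivities, dtype=float)
    w = np.ones_like(s) if weights is None else np.asarray(weights, dtype=float)
    a = (s < 0).astype(float)            # a_ik of Eqn 18-13
    return float(np.sum(w * a) / np.sum(w) * 100.0)

# Three of four stations behave as theory predicts -> MOV = 75 % unweighted.
mov = mode_overcomplexity_value([-0.1, -0.3, 0.2, -0.05])
```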


18.7 Modal phase collinearity

For lightly or proportionally damped structures, the estimated mode shapes should be purely normal. This means that the phase angle between two different complex mode shape coefficients of the same mode (i.e. for two different response stations) should be either 0°, 180° or -180°. An indicator called the "Modal Phase Collinearity" (MPC) index expresses the linear functional relationship between the real and the imaginary parts of the unscaled mode shape vector.

This index should be high (near 100%) for real normal modes. A low MPC index indicates a rather complex mode, either because of local damping elements in the tested structure or because of an erroneous measurement or analysis procedure.

Mean phase deviation

Another indicator for the complexity of unscaled mode shape vectors is the Mean Phase Deviation (MPD). This index is the statistical variance of the phase angles for each mode shape coefficient from their mean value, and indicates the phase scatter of a mode shape. This MPD value should be low (near 0°) for real normal modes.
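The text gives no closed-form expressions for MPC and MPD. The sketch below therefore uses one common formulation (an assumption, not necessarily the variant implemented in the software): MPC from the eigenvalue spread of the 2×2 covariance of the shape's real and imaginary parts, and MPD as the amplitude-weighted phase scatter about the mean phase axis:

```python
import numpy as np

def mpc(v):
    """Modal Phase Collinearity, 0..1 (common covariance-based variant)."""
    v = np.asarray(v, dtype=complex)
    S = np.array([[v.real @ v.real, v.real @ v.imag],
                  [v.real @ v.imag, v.imag @ v.imag]])
    lam = np.linalg.eigvalsh(S)                  # ascending eigenvalues
    return float(((lam[1] - lam[0]) / (lam[1] + lam[0])) ** 2)

def mpd(v):
    """Mean Phase Deviation in degrees: amplitude-weighted phase scatter
    about the mean phase axis, with 180-degree flips folded onto 0."""
    v = np.asarray(v, dtype=complex)
    mean_phase = 0.5 * np.angle(np.sum(v ** 2))
    d = np.angle(v * np.exp(-1j * mean_phase))
    d = (d + np.pi / 2) % np.pi - np.pi / 2      # treat +/-180 deg as aligned
    w = np.abs(v)
    return float(np.degrees(np.sqrt(np.sum(w * d ** 2) / np.sum(w))))
```

A real shape with mixed signs is perfectly collinear (MPC = 1, MPD = 0); a shape like [1, j] with a 90° internal phase is maximally complex (MPC = 0).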


18.8 Comparison of models

When you have two groups of modes representing the same modal space, then you can compare the two groups. The comparison concerns the damped frequencies, the damping values, the modal phase collinearities and the MAC values of the two groups. This is a useful way of comparing sets of modes generated from the same data but using different estimation techniques, for example.


18.9 Mode indicator functions

Mode Indicator Functions (MIFs) are frequency domain functions that exhibit local minima at the natural frequencies of real normal modes. The number of MIFs that can be computed for a given data set equals the number of input locations that are available. The so-called primary MIF will exhibit a local minimum at each of the structure's natural frequencies. The secondary MIF will have local minima only in the case of repeated roots. Depending on the number of input locations for which data is available, higher order MIFs can be computed to determine the multiplicity of the repeated root. So a root with a multiplicity of four will cause a minimum in the first, second, third and fourth MIF, for example. An example of a MIF is shown below.

Given a structure's FRF matrix [H], describing its input-output characteristics, and a force vector {F}, the output or response {X} can be computed from the following equation

{X} = [H]{F}    Eqn 18-14

Removing the brackets from the notation, equation 18-14 can be split into real and imaginary parts

X_r + jX_i = (H_r + jH_i)(F_r + jF_i)    Eqn 18-15

For real normal modes, the structural response must lag the excitation forces by 90°. Therefore, when the structure is excited at the correct frequency according to one of these modes (modal tuning), the contribution of the real part of the response vector X to its total length must become minimal. Mathematically this can be formulated in the following minimisation problem


min_{‖F‖=1} (X_r^t X_r) / (X_r^t X_r + X_i^t X_i)    Eqn 18-16

Substituting the expression 18-15 for the real and imaginary parts of the response in this expression yields

min_{‖F‖=1} (F^t H_r^t H_r F) / (F^t (H_r^t H_r + H_i^t H_i) F)    Eqn 18-17

The solution of equation 18-17 reduces to finding minima of the frequency functions that are built from eigenvalues. The following eigenvalue problem is formulated at each spectral line under investigation

H_r^t H_r F = λ (H_r^t H_r + H_i^t H_i) F    Eqn 18-18

The square matrices H_r^t H_r and H_i^t H_i have as many rows and columns as the number of input or reference locations that were used to create them (i.e. the number of columns of the FRF matrix that were measured). The primary Mode Indicator Function is now constructed from the smallest eigenvalue of expression 18-18 at each spectral line. It exhibits noticeable local minima at the frequencies where real normal modes exist. A second MIF can be constructed using the second smallest eigenvalue of 18-18 for each spectral line. It will contain noticeable local minima if the structure has repeated modes. This can be repeated for all other eigenvalues of equation 18-18. The number of functions that can be constructed is equal to the number of eigenvalues, which is the same as the number of input stations. From these functions, you can then deduce the multiplicity of each of the normal modes.
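Per spectral line, Equation 18-18 is a small generalized eigenvalue problem. A sketch (Python/NumPy, names illustrative) that stacks the sorted eigenvalues into one function per input:

```python
import numpy as np

def mode_indicator_functions(H):
    """MIFs from Eqn 18-18. H has shape (n_lines, n_outputs, n_inputs);
    returns an (n_lines, n_inputs) array, smallest eigenvalue first, so
    column 0 is the primary MIF (it dips at real normal modes)."""
    H = np.asarray(H, dtype=complex)
    n_lines, _, n_in = H.shape
    mifs = np.empty((n_lines, n_in))
    for s in range(n_lines):
        Hr, Hi = H[s].real, H[s].imag
        A = Hr.T @ Hr
        B = A + Hi.T @ Hi                     # Hr^t Hr + Hi^t Hi
        lam = np.linalg.eigvals(np.linalg.solve(B, A)).real
        mifs[s] = np.sort(lam)
    return mifs

# A purely imaginary FRF line (response lagging the force by 90 degrees, as
# at a normal mode) drives all MIFs to 0; a purely real line gives 1.
H = np.zeros((2, 3, 2), dtype=complex)
H[0] = 1j * np.arange(1, 7).reshape(3, 2)
H[1] = np.arange(1, 7).reshape(3, 2)
mifs = mode_indicator_functions(H)
```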


18.10 Summation of FRFs

An important indication of the accuracy of the natural frequency estimates is their coincidence with resonance peaks in the FRF measurements. These resonance peaks can be enhanced by a summation of all available data, either by real or imaginary parts.

Graphically comparing this summation of FRFs with values of the natural frequencies of modes in a display module can be useful. Problems like missing modes, erroneous frequency estimates or shifting resonances because of mass loading by the transducers can easily be detected this way.


18.11 Synthesis of FRFs

The FRFs that you have obtained from a modal model can be synthesized in a number of ways. Scaled mode shapes (i.e. mode shapes and modal participation factors) have to be available for at least one input station for which a mode shape coefficient is also available. Using the Maxwell-Betti reciprocity principle between inputs and outputs (section 18.4) it is however possible to calculate the FRF between any two measurement stations.

Correlation and errors

It is also possible to assess correlation and error values relating to the measured and synthesized FRFs.

The correlation is the normalized complex product of the synthesized and measured values.

correlation = |Σ_i S_i · M_i*|² / ((Σ_i S_i · S_i*) · (Σ_i M_i · M_i*))    Eqn 18-19

with

S_i = the complex value of the synthesized FRF at spectral line i
M_i = the complex value of the measured FRF at spectral line i

The LS error is the least squares difference normalized to the synthesized values.

LS error = Σ_i (S_i − M_i) · (S_i − M_i)* / Σ_i S_i · S_i*    Eqn 18-20

A listing of FRFs where the correlation is lower than a specified percentage and which exhibit an error higher than a specified percentage provides useful information on the quality of the synthesized FRF.
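Equations 18-19 and 18-20 can be sketched directly (names illustrative). A perfect match gives correlation 1 and error 0, while a pure amplitude scaling error leaves the correlation at 1 but shows up in the LS error:

```python
import numpy as np

def frf_correlation(S, M):
    """Normalized complex product of synthesized and measured FRFs (Eqn 18-19)."""
    S, M = np.asarray(S), np.asarray(M)
    num = np.abs(np.sum(S * np.conj(M))) ** 2
    return float(num / (np.sum(np.abs(S) ** 2) * np.sum(np.abs(M) ** 2)))

def frf_ls_error(S, M):
    """Least-squares difference normalized to the synthesized FRF (Eqn 18-20)."""
    S, M = np.asarray(S), np.asarray(M)
    return float(np.sum(np.abs(S - M) ** 2) / np.sum(np.abs(S) ** 2))

S = np.array([1.0 + 1.0j, 2.0, -1.0j])   # a hypothetical synthesized FRF
```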


Chapter 19

Rigid body modes

In this chapter the behavior of a structure as a rigid body is discussed. The following topics are covered:

The calculation of rigid body properties of a structure from FRF measurements

Rigid body analysis to determine rigid body modes


19.1 Calculation of rigid body properties

This section discusses the theory used in the calculation of rigid body properties. Experimental frequency response functions (FRFs) can be used to derive structural modes of a structure and the inertia properties of a system. These properties are: the moments of inertia, the products of inertia and the principal moments of inertia.

In general two types of method are applied.

1 A first type determines the inertia characteristics using the rigid body mode shapes obtained from test data. This is the Modal Model Method described in reference [1].

2 The second type starts from the mass line, i.e. the FRF inertia restraint of the softly suspended structure. This mass line is used in a set of kinematic and dynamic equations, from which the rigid body characteristics (mass, center of gravity, principal directions and moments of inertia) can be determined (reference [2]). Some of these methods also look for the suspension stiffnesses while others consider the mass of the system as known (reference [3]).

This type of method is described in more detail below.

[Figure 19-1 Rigid body modes: amplitude of an acc/force FRF versus frequency, showing the rigid body modes and the mass line in the frequency band below the first deformation mode.]

Derivation of rigid body properties from measured FRFs

Input data
FRFs are required in order to determine the rigid body properties. The input
format is required to be acceleration/force, and if this is not the case then a
transformation can be applied. Rotational or scalar (acoustic) measurements
are not used in the rigid body calculations.


In theory 2 excitations and 6 responses are needed for the calculations. Practical
tests show that the best results are obtained when at least 6 excitations (e.g. 2
nodes in 3 directions) and 12 responses are measured.

Reference axis system


All the rigid body properties are calculated relative to a reference axis system. The
reference axis system is defined by the three coordinate values of its origin and
three Euler angles representing its rotation.

Specification of the frequency band


Rigid body properties are calculated in a global (least-squares) sense over a
specified frequency band between the last rigid body mode and the first
deformation mode (see Figure 19-1).

Mass line value


The "mass line" value which is needed for the calculations can be derived from the measured FRFs in three ways:

1) When the rigid body modes and deformation modes are sufficiently spaced, the amplitude values (with the sign of the real part) of the original, unchanged measured FRFs can be used. In this case there is no need to have the deformation modes available for the rigid body mode analysis.

2) When the spacing between rigid body modes and deformation modes is not sufficient, the FRFs have to be corrected. In this case the influence of the first deformation modes, if significant, can be subtracted from the original FRFs. The amplitude values (with the sign of the real part) of synthesized FRFs are used.

3) If accurate measured FRFs are not available in the frequency range directly above the rigid body modes, then lower residual terms which lie in a frequency band which contains the first deformation modes can be used. Residual terms can be determined from a modal analysis. Lower residuals represent the influence of the modes below the deformation modes, and are therefore representative of the rigid body modes.

Calculation of the rigid body properties


1 Calculation of the reference acceleration matrix

1.1 Coordinate transformation

If the nodes corresponding to the response DOFs used do not have global directions, or when a reference (not coincident with the global origin) is specified, then a rotation of the measured accelerations towards the global/reference axis system is needed.


All three directions (+X, +Y, +Z) are required. For the three measured (local) accelerations of output node "o":

$$ \{\ddot{X}\}_g = [T]_o^{-1}\, \{\ddot{X}\}_l \qquad \text{Eqn 19-1} $$

where

$\{\ddot{X}\}_g$ is the global acceleration vector

$\{\ddot{X}\}_l$ is the local acceleration vector

$[T]_o^{-1}$ is the rotation matrix (global to local) of node "o"

When a reference is specified which does not coincide with the global origin, the three measured accelerations of output node "o" are also rotated towards the axes of the reference system:

$$ \{\ddot{X}\}_r = \left( [T]_o^{-1}\, [T]_r \right) \{\ddot{X}\}_l \qquad \text{Eqn 19-2} $$

where

$[T]_r$ is the rotation matrix (global to local) of node "r".
1.2 System of equations

For all spectral lines of the selected band, for all response nodes P, Q, ... and for all inputs 1, 2, ... under consideration:

$$
\begin{bmatrix}
\ddot{X}_{1Px} & \ddot{X}_{2Px} & \cdots \\
\ddot{X}_{1Py} & \ddot{X}_{2Py} & \cdots \\
\ddot{X}_{1Pz} & \ddot{X}_{2Pz} & \cdots \\
\ddot{X}_{1Qx} & \ddot{X}_{2Qx} & \cdots \\
\ddot{X}_{1Qy} & \ddot{X}_{2Qy} & \cdots \\
\ddot{X}_{1Qz} & \ddot{X}_{2Qz} & \cdots \\
\vdots & \vdots &
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & Z_P & -Y_P \\
0 & 1 & 0 & -Z_P & 0 & X_P \\
0 & 0 & 1 & Y_P & -X_P & 0 \\
1 & 0 & 0 & 0 & Z_Q & -Y_Q \\
0 & 1 & 0 & -Z_Q & 0 & X_Q \\
0 & 0 & 1 & Y_Q & -X_Q & 0 \\
\vdots & & & & & \vdots
\end{bmatrix}
\begin{bmatrix}
\ddot{X}_{1gx} & \ddot{X}_{2gx} & \cdots \\
\ddot{X}_{1gy} & \ddot{X}_{2gy} & \cdots \\
\ddot{X}_{1gz} & \ddot{X}_{2gz} & \cdots \\
\ddot{\theta}_{1gx} & \ddot{\theta}_{2gx} & \cdots \\
\ddot{\theta}_{1gy} & \ddot{\theta}_{2gy} & \cdots \\
\ddot{\theta}_{1gz} & \ddot{\theta}_{2gz} & \cdots
\end{bmatrix}
\qquad \text{Eqn 19-3}
$$

Each column of the right-hand factor, the reference acceleration matrix, contains the translational and rotational accelerations of one input towards the global axis system.
where X_P, Y_P and Z_P are the global coordinates of node P (or towards the
reference axis system).
This over-determined system of equations (the number of output DOFs is
higher than or equal to 6) is solved for each spectral line in a least-squares
sense. In this way the reference acceleration matrix is found at each spectral
line. Further, a general solution of the reference acceleration matrix
over the total frequency band is calculated by solving, in a least-squares
sense, the global set of equations containing all outputs and all spectral
lines.
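The least-squares solution of Eqn 19-3 can be sketched as follows. The node coordinates and reference acceleration values are hypothetical; the point is that stacking three rows per response node gives an over-determined system that `numpy.linalg.lstsq` solves per spectral line:

```python
import numpy as np

def kinematic_rows(x, y, z):
    # Rows of the Eqn 19-3 coefficient matrix for one response node at
    # (x, y, z): identity for the translations, lever arms for the rotations.
    return np.array([[1.0, 0.0, 0.0, 0.0,   z,  -y],
                     [0.0, 1.0, 0.0,  -z, 0.0,   x],
                     [0.0, 0.0, 1.0,   y,  -x, 0.0]])

# Hypothetical example: 4 response nodes give 12 output DOFs (>= 6).
nodes = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]
A = np.vstack([kinematic_rows(*n) for n in nodes])

# Simulate "measured" accelerations from a known reference acceleration
# column (3 translational + 3 rotational components) for one input.
ref_true = np.array([0.1, -0.2, 0.3, 0.05, -0.04, 0.02])
acc = A @ ref_true

# Solve the over-determined system in a least-squares sense.
ref_est, *_ = np.linalg.lstsq(A, acc, rcond=None)
```

In practice one such solve is done per input and per spectral line, plus one global solve over all outputs and all spectral lines together.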


2 Calculation of the reference force matrix

2.1 Coordinate transformation

For input force 1 in the local X-direction of node "i":

$$ \{F_1\} = [T]_i^{-1} \begin{Bmatrix} 1.0 \\ 0.0 \\ 0.0 \end{Bmatrix} \qquad \text{Eqn 19-4} $$

where $[T]_i^{-1}$ is the rotation matrix (global to local) of node "i".

When the reference "r" is not coincident with the global origin:

$$ \{F_1\} = \left( [T]_r\, [T]_i^{-1} \right) \begin{Bmatrix} 1.0 \\ 0.0 \\ 0.0 \end{Bmatrix} \qquad \text{Eqn 19-5} $$

where $[T]_r$ is the rotation matrix (global to local) of reference node "r".

Similar equations are used when the input acts in the Y-direction or Z-direction.

2.2 System of equations

For all inputs 1, 2 . . .

$$
\begin{Bmatrix} F_{1gx} \\ F_{1gy} \\ F_{1gz} \\ M_{1gx} \\ M_{1gy} \\ M_{1gz} \end{Bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & -Z_1 & Y_1 \\
Z_1 & 0 & -X_1 \\
-Y_1 & X_1 & 0
\end{bmatrix}
\{F_1\}
\qquad \text{Eqn 19-6}
$$

The left-hand side is the reference force matrix towards the global axis system for input 1, where

$\{F_1\}$ is the applied force at input 1

$X_1$, $Y_1$ and $Z_1$ are the global coordinates of the node corresponding with input
1.
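The lever-arm matrix of Eqn 19-6 is just the identity stacked on a cross-product matrix. A minimal sketch with a hypothetical node location and unit force:

```python
import numpy as np

def force_moment_matrix(x1, y1, z1):
    # 6x3 coefficient matrix of Eqn 19-6: the force is transmitted unchanged,
    # the moment about the global origin is the cross product r x F.
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [0.0, -z1,  y1],
                     [ z1, 0.0, -x1],
                     [-y1,  x1, 0.0]])

# Hypothetical unit force along global X applied at node (0, 2, 0):
fm = force_moment_matrix(0.0, 2.0, 0.0) @ np.array([1.0, 0.0, 0.0])
```

A force along X at a point 2 units up the Y axis produces only a moment about Z, which the matrix reproduces.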

3 Calculation of the coordinates of the center of gravity and the moments and products of inertia

For
(i) each input and for each spectral line
(ii) each input over the total band:


$$
\begin{Bmatrix}
F_{gx} - m\,a_{gx} \\
F_{gy} - m\,a_{gy} \\
F_{gz} - m\,a_{gz} \\
M_{gx} \\
M_{gy} \\
M_{gz}
\end{Bmatrix}
=
\begin{bmatrix}
0 & -m\ddot{\theta}_z & m\ddot{\theta}_y & 0 & 0 & 0 & 0 & 0 & 0 \\
m\ddot{\theta}_z & 0 & -m\ddot{\theta}_x & 0 & 0 & 0 & 0 & 0 & 0 \\
-m\ddot{\theta}_y & m\ddot{\theta}_x & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & F_{gz} & -F_{gy} & \ddot{\theta}_x & 0 & 0 & -\ddot{\theta}_y & 0 & -\ddot{\theta}_z \\
-F_{gz} & 0 & F_{gx} & 0 & \ddot{\theta}_y & 0 & -\ddot{\theta}_x & -\ddot{\theta}_z & 0 \\
F_{gy} & -F_{gx} & 0 & 0 & 0 & \ddot{\theta}_z & 0 & -\ddot{\theta}_y & -\ddot{\theta}_x
\end{bmatrix}
\begin{Bmatrix}
X_{cog} \\ Y_{cog} \\ Z_{cog} \\ I_{xx} \\ I_{yy} \\ I_{zz} \\ I_{xy} \\ I_{yz} \\ I_{xz}
\end{Bmatrix}
\qquad \text{Eqn 19-7}
$$

X_cog, Y_cog and Z_cog are the global coordinates of the center of gravity.

I_xx, I_yy and I_zz are the moments of inertia towards the global axis system.

I_xy, I_yz and I_xz are the products of inertia towards the global axis system.

This set of equations can be solved in two steps. First, the coordinates of the center of gravity are solved from the first three equations (per reference). These values are then substituted into the last three equations to solve for the inertia moments and products.
Step 1
for each input and for each spectral line
and for each input over the total band:

$$
\begin{Bmatrix}
F_{gx} - m\,a_{gx} \\
F_{gy} - m\,a_{gy} \\
F_{gz} - m\,a_{gz}
\end{Bmatrix}
=
m \begin{bmatrix}
0 & -\ddot{\theta}_z & \ddot{\theta}_y \\
\ddot{\theta}_z & 0 & -\ddot{\theta}_x \\
-\ddot{\theta}_y & \ddot{\theta}_x & 0
\end{bmatrix}
\begin{Bmatrix} x_{cog} \\ y_{cog} \\ z_{cog} \end{Bmatrix}
\qquad \text{Eqn 19-8}
$$

Step 2
for each input and for each spectral line
and for each input over the total band:

$$
\begin{Bmatrix}
M_{gx} - y_{cog} F_{gz} + z_{cog} F_{gy} \\
M_{gy} - z_{cog} F_{gx} + x_{cog} F_{gz} \\
M_{gz} - x_{cog} F_{gy} + y_{cog} F_{gx}
\end{Bmatrix}
=
\begin{bmatrix}
\ddot{\theta}_x & 0 & 0 & -\ddot{\theta}_y & 0 & -\ddot{\theta}_z \\
0 & \ddot{\theta}_y & 0 & -\ddot{\theta}_x & -\ddot{\theta}_z & 0 \\
0 & 0 & \ddot{\theta}_z & 0 & -\ddot{\theta}_y & -\ddot{\theta}_x
\end{bmatrix}
\begin{Bmatrix} I_{xx} \\ I_{yy} \\ I_{zz} \\ I_{xy} \\ I_{yz} \\ I_{xz} \end{Bmatrix}
\qquad \text{Eqn 19-9}
$$

At each spectral line, these over-determined sets of equations (number of
inputs larger than or equal to 2) are solved in a least-squares sense. A
global solution for these rigid body properties over the total band can also be
found from the global acceleration matrix over the total frequency band
(see equation 19-3).


If desired, only the second set of equations is solved. In this case the
coordinates of the center of gravity are presumed to be known and are specified
by the user.
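Step 1 above can be sketched as follows, with hypothetical mass, center-of-gravity and angular acceleration values. The coefficient matrix of Eqn 19-8 is a skew-symmetric (rank-2) matrix per input, which is why at least two independent inputs are needed before the center of gravity can be recovered:

```python
import numpy as np

def m_alpha_cross(m, ax, ay, az):
    # m times the skew-symmetric matrix of the angular acceleration, so that
    # {F - m*a_ref} = m_alpha_cross(...) @ {x_cog, y_cog, z_cog}  (Eqn 19-8)
    return m * np.array([[0.0, -az,  ay],
                         [ az, 0.0, -ax],
                         [-ay,  ax, 0.0]])

m = 2.0                                   # hypothetical total mass
cog_true = np.array([0.1, 0.2, -0.05])    # hypothetical center of gravity

# Two inputs with different angular accelerations (a single skew matrix is
# rank-deficient; stacking two independent ones gives full rank).
alphas = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
A = np.vstack([m_alpha_cross(m, *a) for a in alphas])
rhs = A @ cog_true                        # simulated force residuals F - m*a

cog_est, *_ = np.linalg.lstsq(A, rhs, rcond=None)
```

With the center of gravity known, Step 2 is another least-squares solve of the same shape for the six inertia terms.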

4 Calculation of the principal moments of inertia and the principal axes

In general: $\{L_g\} = [A]\{\omega_g\}$

$$
\begin{Bmatrix} L_x \\ L_y \\ L_z \end{Bmatrix}
=
\begin{bmatrix}
I_{xx} & -I_{xy} & -I_{xz} \\
-I_{yx} & I_{yy} & -I_{yz} \\
-I_{zx} & -I_{zy} & I_{zz}
\end{bmatrix}
\begin{Bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{Bmatrix}
\qquad \text{Eqn 19-10}
$$

{L_g} is the vector of angular momentum towards the global (reference) axis system

[A] is the inertia matrix (symmetric)

{ω_g} is the vector of angular velocity

This is an eigenvalue problem, where

the eigenvalues I_1, I_2, I_3 are the 3 principal moments of inertia

the eigenvectors {e_1}, {e_2}, {e_3} are the directions of the 3 principal axes of inertia.
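Because the inertia matrix is symmetric, this eigenvalue problem can be solved with a symmetric eigensolver. The inertia values below are hypothetical:

```python
import numpy as np

# Hypothetical inertia matrix [A] of Eqn 19-10; the products of inertia
# enter with a negative sign off the diagonal and the matrix is symmetric.
Ixx, Iyy, Izz = 4.0, 3.0, 2.0
Ixy, Iyz, Ixz = 0.5, 0.1, 0.2
A = np.array([[ Ixx, -Ixy, -Ixz],
              [-Ixy,  Iyy, -Iyz],
              [-Ixz, -Iyz,  Izz]])

# Eigenvalues: the 3 principal moments of inertia.
# Eigenvectors: the directions of the 3 principal axes of inertia.
I_principal, axes = np.linalg.eigh(A)
```

The principal moments sum to the trace of [A], and the principal axes come out mutually orthogonal, as expected for a symmetric matrix.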


19.2 Rigid body mode analysis

A rigid body is a (part of a) structure that does not itself deform, but that
moves periodically as a whole at a certain frequency.

The modal parameters for such a rigid body mode are determined not by the
dynamics of the structure itself, but by the dynamic properties of the boundary
conditions of that structure. This includes the way it is attached to its
surroundings (or the rest of the structure), the stiffness and damping
characteristics of suspending elements, its global mass, etc. A rigid body can be
compared to a simple system with a mass attached to a fixed point by a spring
and a damper element.

It has 6 modes of vibration, i.e. translation along the X, Y, and Z axes, and
rotation about these axes. Every mode which is measured for such a system will be
a linear combination of these 6 modes.

Section 19.1 describes how it is possible to calculate the inertia properties of a
structure based on measured FRFs. This enables you to calculate the center of
gravity, the moments of inertia and the principal axes, as well as synthesized rigid
body modes.

This section discusses how rigid body modes are used and describes two methods by which the modes can be determined; namely

V decomposition of measured modes into rigid body modes

V synthesis of rigid body modes based on geometrical data

Use of rigid body analysis

In modal analysis applications, the fact that (part of) a structure acts as a rigid
body up to a certain frequency can be used in different ways.

1 Debugging the measurement set-up

Rigid body modes can be used to verify the measurement set-up when the frequency range of measured FRFs covers a rigid body mode of the entire structure on its suspension (elastic cords or air bags for example). In this case, a simple peak-picking procedure and an animation of the resulting mode will indicate which measurement points are not moving "in line" with the rest of the structure. Deviations from this rigid body motion can be caused by

V non-measured nodes (not moving at all)

V wrong response point identification (moving out of line)


V wrong response direction (moving in opposite direction)

V bad transducers or wrong calibration values (wrong amplitude)

V other measurement errors

Obvious errors, as in the first 4 cases, can be easily detected by curve-fitting a rigid body mode of the structure.

2 Completion of non-measured DOFs

Mode shape coefficients for non-measured points and/or directions can be calculated based on the assumption that the resulting deformed mode shape should still be a rigid body. This can be achieved by first calculating the weighting coefficients for each of the 6 rigid body motions of the structure from the measured data and then applying the same weighting to obtain the motion of the non-measured DOFs. This takes into account the geometry constraints and thus preserves the rigid body motion of the structure. This feature is useful for completing sparsely measured rigid parts of a wire frame model for animation.

3 Correction of measurement errors

Using the same approach as described under 2, it is also possible to re-calculate mode shape coefficients for measured DOFs and compare them to the actually measured ones to evaluate measurement errors (as under 1) or measurement noise. It is even possible to replace the measured data by the calculated data and smooth the mode shapes to obtain good rigid body motion for (parts of) the structure.

4 Synthesis of modes based on the geometry of the structure

Rigid body modes can be calculated for a structure based on the structure's mass, moments of inertia, the boundary conditions, and values for frequency and damping specified by the user. This is useful when coupling two substructures, for example, for which the modal parameters have been obtained separately.

19.2.1 Decomposition of measured modes into rigid body modes

The decomposition into rigid body modes is quite simple and involves the following steps.

1 Use the geometry data to construct the 6 rigid body motions of the structure.


2 Decompose a given mode shape into these 6 modes. This involves solving a system of linear equations and can only be accomplished if enough equations can be built. This means that at least 6 measured DOFs must be available and that the equations must be linearly independent. It is, for example, not possible to calculate the contribution of a rotation about the Z axis from data for 2 points on that axis, even if they both have all 3 DOFs measured.

3 Calculate the mode shape coefficients for the requested DOFs based upon the geometry and the 6 weighting coefficients.
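The three steps above can be sketched as a least-squares fit against a 6-column rigid body basis. The node coordinates and weighting coefficients are hypothetical:

```python
import numpy as np

def rigid_basis(nodes):
    # Step 1: the 6 rigid body motions (3 translations, 3 small rotations)
    # evaluated at each node; one column per motion, rows (x, y, z) per node.
    rows = []
    for x, y, z in nodes:
        rows.append(np.array([[1.0, 0.0, 0.0, 0.0,   z,  -y],
                              [0.0, 1.0, 0.0,  -z, 0.0,   x],
                              [0.0, 0.0, 1.0,   y,  -x, 0.0]]))
    return np.vstack(rows)

# Hypothetical wire frame: 3 non-collinear nodes, all 3 DOFs measured (9 >= 6).
nodes = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
B = rigid_basis(nodes)

w_true = np.array([0.2, 0.0, -0.1, 0.01, 0.02, -0.03])  # weighting coefficients
shape = B @ w_true                                      # "measured" mode shape

# Step 2: least-squares decomposition into the 6 rigid body modes.
w, *_ = np.linalg.lstsq(B, shape, rcond=None)

# Step 3: coefficients for a non-measured node follow from the same weights.
extra = rigid_basis([(1.0, 1.0, 0.5)]) @ w
```

If the chosen nodes were collinear, `B` would be rank-deficient and the decomposition would not be unique, which is the linear-independence condition stated in step 2.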

Limitations
Calculating the rigid body motion for a part of the structure (for example one
single component) can sometimes prove a little awkward. The component will
indeed move as a rigid body but is not constrained to still be connected to the
rest of the structure. When applied to the tail wing of an airplane, for example,
the wing may rotate about a horizontal axis through its middle but may no
longer be connected to the fuselage at its base. The same may happen to the
engine block of a car, which may become disconnected from its supports when a
rigid body motion is applied to it.

19.2.2 Synthesis of rigid body modes based on geometrical data

The synthesis of rigid body modes for a `free-free' structure is based on the translation along and the rotation about the three principal axes of inertia. The position of these three axes, the principal moments of inertia about them, and the mass are required for the calculation of the rigid modes. The damping and frequency are specified by the user. The residues are calculated as follows:

$$ R_{trans} = \frac{1}{2\, m\, \nu} \qquad\qquad R_{rot} = \frac{r_f\, r_x}{2\, I\, \nu} $$

where

m = the total mass

ν = the user-defined damped natural frequency

r_f = the perpendicular distance from the reference DOF to the respective axis of inertia

r_x = the perpendicular distance from the response DOF to the respective axis of inertia

I = the moment of inertia about the respective axis of inertia.

Rigid body modes are useful in completing the modal model of a structure that
is being used for structural modification purposes.
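A minimal numerical sketch of the residue formulas above, with hypothetical mass, inertia, frequency and lever-arm values (consistent units assumed):

```python
# Hypothetical values for one synthesized rigid body mode.
m = 150.0            # total mass
nu = 2.5             # user-defined damped natural frequency
I = 40.0             # moment of inertia about the respective principal axis
r_f, r_x = 0.8, 1.1  # lever arms of the reference and response DOFs

# R_trans = 1 / (2 m nu)   and   R_rot = (r_f * r_x) / (2 I nu)
R_trans = 1.0 / (2.0 * m * nu)
R_rot = (r_f * r_x) / (2.0 * I * nu)
```

The translational residue depends only on the mass, while the rotational residue scales with the product of the two perpendicular distances to the principal axis.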


19.3 References

[1] Toivola, J. and Nuutila, O.
Comparison of Three Methods for Determining Rigid Body Inertia Properties from
Frequency Response Functions
Tampere University of Technology, P.O. Box 589, SF-33101 Tampere,
Finland.

[2] Okuzumi, H.
Identification of the Rigid Body Characteristics of a Powerplant by Using
Experimentally Obtained Transfer Functions
Central Engineering Laboratories, Nissan Motor Co., Ltd., Jun 1991

[3] Lemaire, G. and Gielen, L.


Het bepalen van de inertie-parameters van een star lichaam door middel van
transfertfuncties
Eindwerk katholieke hogeschool Brugge-Oostende dep. industriele
wetenschappen en technologie, 1995-1996

[4] LMS International


LMS CADA-X Modal Analysis Manual Revision 3.4
LMS International, Leuven, Belgium, pp 2.6-2.7, pp 3.24-3.32, 1996

[5] LMS International


How to Add Rigid Body Modes to an Existing Modal Model in CADA-X
LMS International Consulting reports, Ref. DVDB/sh/911295, Leuven,
Belgium,
22 pp, 1991



Chapter 20

Design

This chapter discusses the three types of analysis that can be performed to determine the effect of design changes on the modal behavior of a structure. These are

V Sensitivity

V Modification prediction

V Forced response


20.1 Using the modal model for modal design

Correctly scaled mode shapes are an absolute prerequisite for the correct application of the design procedures described here.

The dynamic behavior of a structure can therefore be fully described and modelled if the poles λ_k and the residues r_ijk for each mode k and each pair of response and reference DOFs i and j are known.

In practice however the modal model is often defined by the poles (frequency and damping values) and the residues for only one (or a few) reference station(s) j. The question now arises as to how this limited modal model can be used for the prediction of responses when forces are acting on a degree of freedom for which residues are not readily available. The residues required between any two degrees of freedom can be derived as follows.

For a linear structure which obeys the Maxwell-Betti reciprocity principle between inputs and outputs, the FRF between two DOFs i and j can be obtained by exciting the structure at DOF j and measuring the response at DOF i, or by exciting at DOF i and measuring the response at j:

$$ H_{ij}(\omega) = H_{ji}(\omega) \qquad \text{Eqn 20-1} $$

In other words, the FRF matrix for a reciprocal structure is symmetric.

Under these circumstances, the residue for each mode k between two response DOFs m and n can be obtained from the ones between each of them and the available set of residues for reference j:

$$ r_{mjk} = a_k\, \varphi_{mk}\, \varphi_{jk} \qquad \text{Eqn 20-2} $$

$$ r_{njk} = a_k\, \varphi_{nk}\, \varphi_{jk} \qquad \text{Eqn 20-3} $$

where

r_mjk is the known residue between DOFs m and j

r_njk is the known residue between DOFs n and j

φ_mk is the unknown mode shape coefficient at response DOF m

φ_nk is the unknown mode shape coefficient at response DOF n

φ_jk is the unknown mode shape coefficient at reference DOF j

The required residue is then

$$ r_{mnk} = a_k\, \varphi_{mk}\, \varphi_{nk} = a_k\, \frac{\varphi_{mk}\varphi_{jk}\; \varphi_{nk}\varphi_{jk}}{\varphi_{jk}\, \varphi_{jk}} = \frac{r_{mjk}\, r_{njk}}{r_{jjk}} \qquad \text{Eqn 20-4} $$

where rjjk is the known driving point residue.
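Eqn 20-4 is a one-line computation once the driving point residue is available. The mode shape coefficients below are hypothetical, used only to check consistency:

```python
def residue_between(r_mjk, r_njk, r_jjk):
    # Eqn 20-4: residue between DOFs m and n from residues measured with
    # reference j, normalized by the driving point residue r_jjk.
    return r_mjk * r_njk / r_jjk

# Consistency check against hypothetical scaled mode shape data.
a_k = 2.0 + 1.0j                       # scaling constant of mode k
phi_m, phi_n, phi_j = 0.3, -0.4, 0.5   # mode shape coefficients

r_mjk = a_k * phi_m * phi_j
r_njk = a_k * phi_n * phi_j
r_jjk = a_k * phi_j * phi_j            # driving point residue

r_mnk = residue_between(r_mjk, r_njk, r_jjk)
```

The result equals a_k φ_mk φ_nk, i.e. the residue that would have been measured directly between m and n on a reciprocal structure.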


The starting point for modal synthesis applications is the available modal model for the structure to be modified, or for each of the substructures to be assembled. It is important however that some conditions are met.

V In order to be able to scale the included mode shapes correctly, they must include driving point coefficients.

V Mode shape coefficients need only be available for the degrees of freedom which are affected by the structural changes.

The information used to obtain this scaling is: poles, (unscaled) mode shapes and modal participation factors for a number of reference stations. The required scaled mode shape coefficients can be obtained from this information as follows.

For N_i points for which output data are also available (i.e. driving points), a row vector of complex modal participation factors L_ik for each mode k can be built:

$$ \lfloor L \rfloor_k = \lfloor L_1\; L_2\; \ldots\; L_{N_i} \rfloor_k \qquad \text{Eqn 20-5} $$
The corresponding unscaled mode shape coefficients W_ik are assembled in a column vector {W}_k:

$$ \{W\}_k = \begin{Bmatrix} W_1 \\ W_2 \\ \vdots \\ W_{N_i} \end{Bmatrix}_k \qquad \text{Eqn 20-6} $$

The residues [R]_k are defined as the product of mode shapes and modal participation factors:

$$ [R]_k = \{W\}_k\, \lfloor L \rfloor_k \qquad \text{Eqn 20-7} $$


The scaled mode shapes {V}_k used in the theoretical derivation of the previous chapter are related to the unscaled mode shapes {W}_k via a complex scaling factor γ_k for each mode:

$$ \{V\}_k = \gamma_k\, \{W\}_k \qquad \text{Eqn 20-8} $$

From the definition of residues, these mode shapes are scaled such that

$$ [R]_k = \{W\}_k\, \lfloor L \rfloor_k = \{V\}_k\, \{V\}_k^t \qquad \text{Eqn 20-9} $$

or, from equation 20-8,

$$ \gamma_k^2\, \{W\}_k \{W\}_k^t = \{W\}_k\, \lfloor L \rfloor_k
\quad \Rightarrow \quad
\gamma_k^2 = \frac{W_{1k}^* L_{1k} + W_{2k}^* L_{2k} + \ldots + W_{N_i k}^* L_{N_i k}}{W_{1k}^* W_{1k} + W_{2k}^* W_{2k} + \ldots + W_{N_i k}^* W_{N_i k}} \qquad \text{Eqn 20-10} $$

In the special case where only one input is considered, i.e. only one set of residues is available, the scaling factor becomes

$$ \gamma_k = \sqrt{\frac{L_{1k}}{W_{1k}}} \qquad \text{Eqn 20-11} $$

The scaling of equation 20-8 actually converts the generally valid modal model of mode shape vectors W and modal participation factors L to a model of scaled mode shape vectors V, in which the modal participation factors are absorbed via equation 20-10. Obviously some information is lost by removing the scaling factors L from the model, and as a consequence the resulting model is only valid for reciprocal structures with a symmetric FRF matrix. The calculation of the scaling factor γ_k according to equation 20-10 is in fact the best compromise in a least-squares sense to approximate a non-reciprocal modal model by a reduced reciprocal one.
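The least-squares scaling of Eqn 20-10 can be sketched as below. The unscaled coefficients and scaling factor are hypothetical, chosen so the participation factors are exactly reciprocal and the estimate can be checked:

```python
import numpy as np

# Hypothetical unscaled mode shape coefficients W and participation factors
# L for one mode, generated from a known complex scaling factor.
W = np.array([0.5 + 0.1j, -0.3 + 0.2j, 0.8 - 0.4j])
gamma_true = 1.2 - 0.7j
L = gamma_true**2 * W                 # exactly reciprocal model: L = gamma^2 W

# Eqn 20-10: least-squares estimate of the squared complex scaling factor.
gamma_sq = np.sum(np.conj(W) * L) / np.sum(np.conj(W) * W)

# Eqn 20-8: scaled mode shape vector.
V = np.sqrt(gamma_sq) * W
```

The outer product V V^t then reproduces the residue matrix W ⌊L⌋ of Eqn 20-9; for non-reciprocal data the same formula gives the best reciprocal approximation.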


20.2 Sensitivity

An experimental modal analysis of a structure results in a dynamic model in terms of modal parameters. The qualitative information contained in this model can be used to identify dynamic problems, for example by animation of the mode shapes. Through physical insight and expertise, structural modifications can be proposed to overcome specific dynamic problems.

For structures with complex dynamic behavior, predictions about the effect of physical changes on modal parameters are usually very difficult - if not impossible - to make. When unsatisfactory dynamic behavior is detected or suspected, the designer can use trial and error procedures to try out a number of modifications, but there is no guarantee that any of these attempts will yield satisfying results. On the other hand, numerical techniques can be employed which use the quantitative results of a modal test to evaluate the effects of structural changes.

These structural changes can be imposed by modifying the physical characteristics of the structure in terms of its inertia, stiffness and damping. A Sensitivity analysis allows you to see how changes in these physical characteristics affect particular modes at various points on the structure. It computes only the sensitivity of the modal model to structural alterations, and does not involve actually applying any changes. A Sensitivity analysis provides you with the means of determining the points where such modifications will have most effect.

20.2.1 Mathematical background to sensitivity analysis

Determining the sensitivity of a DOF to various parameters involves (in a mathematical sense) evaluating the partial derivatives of the eigenproperties of a matrix with respect to its individual elements.

Modal parameters are related to the Frequency Response Function as follows.

$$ H_{ij}(\omega) = \sum_{k=1}^{2N} \frac{r_{ijk}}{j\omega - \lambda_k} \qquad \text{Eqn 20-12} $$

The partial derivative of this equation with respect to a physical parameter P can be computed as follows:


$$ \frac{\partial H_{ij}}{\partial P} = \sum_{k=1}^{2N} \frac{1}{j\omega - \lambda_k}\, \frac{\partial r_{ijk}}{\partial P} + \sum_{k=1}^{2N} \frac{r_{ijk}}{(j\omega - \lambda_k)^2}\, \frac{\partial \lambda_k}{\partial P} \qquad \text{Eqn 20-13} $$

P can be a mass at one DOF, or damping or stiffness between a pair of DOFs.

The dynamic stiffness matrix [Q] is given by

$$ [Q] = -\omega^2 [M_{cc}] + j\omega\, [C_{cc}] + [K_{cc}] \qquad \text{Eqn 20-14} $$

where

[M] is the mass matrix

[C] is the damping matrix

[K] is the stiffness matrix

c is a subscript denoting that only the elements in the matrices that are affected by P will be considered.

Using this equation and the theory of adjoined matrices, equation 20-13 can be rewritten in the form

$$ \frac{\partial H_{ij}}{\partial P} = -\,\{H_{ic}\}^t\, \frac{\partial [Q]}{\partial P}\, \{H_{cj}\} \qquad \text{Eqn 20-15} $$

Using equation 20-12, equation 20-15 becomes

$$ \frac{\partial H_{ij}}{\partial P} = -\left\{ \sum_{k=1}^{2N} \frac{r_{ick}}{j\omega - \lambda_k} \right\}^t \frac{\partial [Q]}{\partial P} \left\{ \sum_{k=1}^{2N} \frac{r_{cjk}}{j\omega - \lambda_k} \right\} \qquad \text{Eqn 20-16} $$

Splitting up equation 20-16 into partial fractions, and identifying the corresponding terms of equation 20-13, gives the sensitivities for the frequency (20-17) and the mode shape (20-18).

$$ \frac{\partial \lambda_k}{\partial P} = -\,\frac{1}{r_{ijk}}\, \{r_{ick}\}^t \left. \frac{\partial [Q]}{\partial P} \right|_{j\omega = \lambda_k} \{r_{cjk}\} \qquad \text{Eqn 20-17} $$


$$ \frac{\partial r_{ijk}}{\partial P} = -\,\{r_{ick}\}^t \left. \frac{\partial}{\partial (j\omega)}\!\left( \frac{\partial [Q]}{\partial P} \right) \right|_{j\omega = \lambda_k} \{r_{cjk}\}
\;-\; \sum_{\substack{m=1 \\ m \neq k}}^{2N} \frac{\{r_{ick}\}^t \left. \dfrac{\partial [Q]}{\partial P} \right|_{j\omega = \lambda_k} \{r_{cjm}\}}{\lambda_m - \lambda_k}
\;-\; \sum_{\substack{m=1 \\ m \neq k}}^{2N} \frac{\{r_{icm}\}^t \left. \dfrac{\partial [Q]}{\partial P} \right|_{j\omega = \lambda_k} \{r_{cjk}\}}{\lambda_m - \lambda_k} \qquad \text{Eqn 20-18} $$

So from equations 20-17 and 20-18, the residues r_ick and r_cjk for each DOF c that is influenced by the structural change are required in order to calculate the sensitivity to that change. Even if not all the residues are available, the Maxwell-Betti reciprocity principle can be used to calculate the required values. The residue r_ick can be derived for any reference DOF c when the residues for DOFs i and c are available for an arbitrary reference j, on condition that the driving point residue r_jjk is also available. The driving point residue is also required if the mode shapes are to be correctly scaled.

From the general formula of equation 20-18, it is now possible to calculate the sensitivity value of a mode shape coefficient for DOF i when a structural change is considered for the parameter P which will affect DOFs a and b. The corresponding scaled mode shape coefficients for each mode in the modal model are required. From the definition of the dynamic stiffness matrix [Q], the three specific cases of P being a mass, a linear spring (stiffness) or a viscous damper can be considered.

Mass

This is the case where P is a mass at a specific DOF a. Equations 20-17 and 20-18 then simplify to

$$ \frac{\partial \lambda_k}{\partial m_a} = -\,\lambda_k^2\, \varphi_{ak}^2 \qquad \text{Eqn 20-19} $$

$$ \frac{\partial \varphi_{ik}}{\partial m_a} = -\,\lambda_k\, \varphi_{ak}^2\, \varphi_{ik} \;-\; \varphi_{ak} \sum_{\substack{m=1 \\ m \neq k}}^{2N} \frac{\lambda_m^2}{\lambda_k - \lambda_m}\, \varphi_{am}\, \varphi_{im} \qquad \text{Eqn 20-20} $$

Stiffness

This is the case where P is a linear spring between DOFs a and b. Equations 20-17 and 20-18 then simplify to

$$ \frac{\partial \lambda_k}{\partial k_{ab}} = -\,(\varphi_{ak} - \varphi_{bk})^2 \qquad \text{Eqn 20-21} $$

$$ \frac{\partial \varphi_{ik}}{\partial k_{ab}} = -\,(\varphi_{ak} - \varphi_{bk}) \sum_{\substack{m=1 \\ m \neq k}}^{2N} \frac{(\varphi_{am} - \varphi_{bm})\, \varphi_{im}}{\lambda_k - \lambda_m} \qquad \text{Eqn 20-22} $$

Note that if DOF b is a fixed point ("ground") then φ_bm = φ_bk = 0.

Damping

This is the case where P is a viscous damper between DOFs a and b. Equations 20-17 and 20-18 then become

$$ \frac{\partial \lambda_k}{\partial c_{ab}} = -\,\lambda_k\, (\varphi_{ak} - \varphi_{bk})^2 \qquad \text{Eqn 20-23} $$

$$ \frac{\partial \varphi_{ik}}{\partial c_{ab}} = -\,\frac{(\varphi_{ak} - \varphi_{bk})^2}{2}\, \varphi_{ik} \;-\; (\varphi_{ak} - \varphi_{bk}) \sum_{\substack{m=1 \\ m \neq k}}^{2N} \frac{\lambda_m}{\lambda_k - \lambda_m}\, (\varphi_{am} - \varphi_{bm})\, \varphi_{im} \qquad \text{Eqn 20-24} $$

The imaginary parts of equations 20-19, 20-21 and 20-23 are used to compute
the sensitivities of the damped natural frequencies. The corresponding real
parts express the sensitivities of the damping factors or exponential decay rates.
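The pole sensitivity formulas are direct to evaluate once a pole and scaled mode shape coefficients are known. The values below are hypothetical; note the consistency between the stiffness and damper cases, which differ only by a factor λ_k:

```python
import numpy as np

# Hypothetical pole and scaled mode shape coefficients for one mode k.
lam = -0.5 + 12.0j                   # pole lambda_k
phi_a = 0.20 + 0.05j                 # coefficient at DOF a
phi_b = -0.10 + 0.02j                # coefficient at DOF b

dlam_dm = -lam**2 * phi_a**2         # Eqn 20-19: mass added at DOF a
dlam_dk = -(phi_a - phi_b)**2        # Eqn 20-21: spring between a and b
dlam_dc = -lam * (phi_a - phi_b)**2  # Eqn 20-23: damper between a and b

# Imaginary parts: damped natural frequency sensitivities.
# Real parts: damping factor (decay rate) sensitivities.
freq_sens = np.imag([dlam_dm, dlam_dk, dlam_dc])
decay_sens = np.real([dlam_dm, dlam_dk, dlam_dc])
```

A designer would evaluate these expressions for every candidate DOF pair and rank the locations by the magnitude of the sensitivity of interest.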


20.3 Modification prediction

This section describes the use of a dynamics modification theory to predict the effect of structural modifications on a mechanical structure's modal parameters. These modifications can take the form of local mass, stiffness and/or damping elements, or FEM-like rod, truss, beam or plate reinforcements. In addition to local modifications, a substructure assembly theory allows you to predict the modal model for a structure that consists of an assembly of substructures.

Modification prediction allows you to evaluate:

V the effect of structural modifications

V the effect of any number and type of connections between any number of substructures (only if installed)

V the dynamics of small scale models, built up from lumped mass-spring-dashpot elements

Such an analysis avoids the time consuming experimental trial and error procedures of modifying prototypes or scale models of mechanical structures, measuring and analyzing the dynamic behavior and evaluating the effects of these modifications.

20.3.1 Mathematical background

The starting point for the structural modification and substructure theory is the modal model described in section 15.1.

The first section of this theoretical background deals with the coupling and modification of substructures using flexible coupling and general viscous damping. It continues with the cases of rigid coupling and flexible coupling with proportional damping.

Modal models for the assembly of substructures with flexible coupling and viscous damping

Modal models of substructures

Consider two structures, 1 and 2. They obey the following equations of motion in the Laplace domain:


$$ s^2 [M_1]\{x_1\} + s [C_1]\{x_1\} + [K_1]\{x_1\} = \{f_1\} \qquad \text{Eqn 20-25} $$

$$ s^2 [M_2]\{x_2\} + s [C_2]\{x_2\} + [K_2]\{x_2\} = \{f_2\} \qquad \text{Eqn 20-26} $$

The matrices M_i, C_i and K_i are the mass, damping and stiffness matrices of structure 1 or 2 corresponding to the subscript i. General viscous damping is allowed. The system matrices are symmetric. The displacement vectors are {x_1} and {x_2}, and the force vectors {f_1} and {f_2} respectively.

The modal parameters for substructure 1 will first be derived in a general way. For substructure 2 the same method can be used but will not be entirely repeated.

The transformation to decouple the equations of motion can be found by adding a set of dummy equations (Duncan's method):

$$ s [M_1]\{x_1\} - s [M_1]\{x_1\} = \{0\} \qquad \text{Eqn 20-27} $$

The system equations for substructure 1 become:

$$ s [A_1]\{y_1\} + [B_1]\{y_1\} = \{p_1\} \qquad \text{Eqn 20-28} $$

where

$$ [A_1] = \begin{bmatrix} 0 & M_1 \\ M_1 & C_1 \end{bmatrix} \qquad
[B_1] = \begin{bmatrix} -M_1 & 0 \\ 0 & K_1 \end{bmatrix} \qquad
\{y_1\} = \begin{Bmatrix} s\,x_1 \\ x_1 \end{Bmatrix} \qquad
\{p_1\} = \begin{Bmatrix} 0 \\ f_1 \end{Bmatrix} $$

The matrices A_1 and B_1 are diagonalized by the transformation matrix V_1, the matrix of eigenvectors of substructure 1. The corresponding eigenvalues are stored in the diagonal matrix Λ_1. Due to the addition of equation 20-27 there are twice as many eigenvalues as there are degrees of freedom. They appear in complex conjugate pairs.

The matrices A_1 and B_1 are diagonalized by post- and pre-multiplication by the eigenvector matrix V_1 and its transpose:

$$ [V_1]^t [A_1] [V_1] = [a_1] \qquad \text{Eqn 20-29} $$


$$ [V_1]^t [B_1] [V_1] = [b_1] \qquad \text{Eqn 20-30} $$

The matrix of eigenvectors V_1 defines a coordinate transformation from physical coordinates {y_1} to modal coordinates {q_1}:

$$ \{y_1\} = [V_1]\{q_1\} \qquad \text{Eqn 20-31} $$

Using expressions 20-29 and 20-30 in the equation of motion 20-28, after pre-multiplication with the transpose of V_1 and substitution of expression 20-31, one obtains the equations of motion in modal coordinates for substructure 1:

$$ s [a_1]\{q_1\} + [b_1]\{q_1\} = [V_1]^t \{p_1\} \qquad \text{Eqn 20-32} $$

It can be seen that the equations of motion in modal space are uncoupled.
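Duncan's construction can be sketched numerically. The 2-DOF mass, stiffness and damping matrices below are hypothetical; the check at the end shows that the eigenvectors of the state-space pencil do diagonalize A_1:

```python
import numpy as np

# Hypothetical 2-DOF substructure in Duncan's form (Eqn 20-28):
# A1 = [[0, M], [M, C]], B1 = [[-M, 0], [0, K]], y = {s*x; x}.
M = np.diag([1.0, 2.0])
K = np.array([[20.0, -10.0], [-10.0, 10.0]])
C = 0.01 * K + 0.001 * M              # light viscous damping

Z = np.zeros((2, 2))
A1 = np.block([[Z, M], [M, C]])
B1 = np.block([[-M, Z], [Z, K]])

# Homogeneous problem s*A1*y + B1*y = 0: eigenvalues of -inv(A1)*B1.
# Twice as many eigenvalues as DOFs, in complex conjugate pairs.
lam, V1 = np.linalg.eig(np.linalg.solve(A1, -B1))

# For the symmetric pencil (A1, B1) the eigenvectors are A1-orthogonal,
# so V1' A1 V1 is (numerically) diagonal, as in Eqn 20-29.
a1 = V1.T @ A1 @ V1
off = a1 - np.diag(np.diag(a1))
```

With positive-definite damping all poles have negative real parts, and the off-diagonal terms of V1' A1 V1 vanish to machine precision, confirming that the modal equations are uncoupled.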

The same procedure can be repeated for substructure 2, yielding a diagonal eigenvalue matrix Λ_2 and an eigenvector matrix V_2. The eigenvector matrix V_2 defines a transformation to modal coordinates {q_2}. The equations of motion for substructure 2 in modal space are:

$$ s [a_2]\{q_2\} + [b_2]\{q_2\} = [V_2]^t \{p_2\} \qquad \text{Eqn 20-33} $$

Substructure assembly
The system matrices of both substructures can be merged to give a structure composed of two dynamically independent substructures. For this assembled structure one can easily derive the modal parameters, since they are the same as those of the two substructures but gathered in one eigenvalue matrix and one eigenvector matrix.

More explicitly, this substructuring yields the following system matrices:

$$ [A] = \begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix} \qquad
[B] = \begin{bmatrix} B_1 & 0 \\ 0 & B_2 \end{bmatrix} \qquad
\{y\} = \begin{Bmatrix} y_1 \\ y_2 \end{Bmatrix} \qquad
\{p\} = \begin{Bmatrix} p_1 \\ p_2 \end{Bmatrix} \qquad \text{Eqn 20-34} $$

which yields as equation:


$$ s [A]\{y\} + [B]\{y\} = \{p\} \qquad \text{Eqn 20-35} $$

It can be verified that the matrices of equation 20-35 are diagonalized by the eigenvector matrix V composed as follows:

$$ [V] = \begin{bmatrix} V_1 & 0 \\ 0 & V_2 \end{bmatrix} \qquad \text{Eqn 20-36} $$

and that the eigenvalue diagonal matrix is:

$$ [\Lambda] = \begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix} \qquad \text{Eqn 20-37} $$

This yields a transformation to modal coordinates:

$$ \{y\} = [V]\{q\} \qquad \text{Eqn 20-38} $$

where

$$ \{q\} = \begin{Bmatrix} q_1 \\ q_2 \end{Bmatrix} $$

An expression of the type of equation 20-33 using the eigenvector and eigenvaĆ
lue matrices, yields :

sa Ă q Ă Ă bĂq Ă Ă V tĂ p  Eqn 20-39

A close look at the matrix of eigenvectors V shows that the two substructures 1
and 2 are still dynamically independent. Indeed, any force at any point of one
substructure will not induce any motion at any point of the other substructure.
The two substructures can now be connected with flexible connections modelled as springs and dampers. With the connection matrices Kc and Cc equation 20-35 becomes:

s ([A] + [Ac]) {y} + ([B] + [Bc]) {y} = {p}    Eqn 20-40

332 The Lms Theory and Background Book


Design

where

        | 0    0   0    0  |                | 0    0   0    0  |
[Ac] =  | 0    Cc  0   -Cc |        [Bc] =  | 0    Kc  0   -Kc |
        | 0    0   0    0  |                | 0    0   0    0  |
        | 0   -Cc  0    Cc |                | 0   -Kc  0    Kc |

The system matrices of the connected substructures will no longer be diagonalized by the transformation matrix V as the unconnected substructures were. This is due to the introduction of the connection stiffness and/or damping values.
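The coupling pattern above can be generated programmatically. A minimal numpy sketch, assuming single-DOF substructures so that entries 1 and 3 of the 4x4 state vector carry the coupling:

```python
import numpy as np

def connection_matrix(value, n, i, j):
    # Symmetric coupling pattern: +value on (i,i) and (j,j), -value on (i,j), (j,i)
    E = np.zeros((n, n))
    E[i, i] = E[j, j] = value
    E[i, j] = E[j, i] = -value
    return E

# Spring Kc and damper Cc between the two substructures (illustrative values)
Kc, Cc = 500.0, 2.0
Ac = connection_matrix(Cc, 4, 1, 3)
Bc = connection_matrix(Kc, 4, 1, 3)

# The coupling force vanishes when both substructures move together:
# the vector of ones lies in the null space of Ac and Bc
assert np.allclose(Bc @ np.ones(4), 0.0) and np.allclose(Ac @ np.ones(4), 0.0)
```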

Modification of structures
Before decoupling the equations of motion of the connected substructures a number of modifications to each substructure can be added. Let the structural modifications be gathered in the modification matrices -

ΔM1, ΔC1, ΔK1  and  ΔM2, ΔC2, ΔK2    Eqn 20-41

These changes can be brought together in system matrices for the modifications:

        | 0    ΔM1  0    0   |                | -ΔM1  0    0     0   |
[ΔA] =  | ΔM1  ΔC1  0    0   |        [ΔB] =  | 0     ΔK1  0     0   |
        | 0    0    0    ΔM2 |                | 0     0    -ΔM2  0   |
        | 0    0    ΔM2  ΔC2 |                | 0     0    0     ΔK2 |
                                                             Eqn 20-42

It is clear from the matrices of the previous expression that the modifications do not couple the substructures; they only modify each substructure separately.

When the modifications of expression 20-42 are added to the system equation of the connected structure (Eqn. 20-40), one obtains the final equation in physical coordinates:

s ([A] + [Ac] + [ΔA]) {y} + ([B] + [Bc] + [ΔB]) {y} = {p}    Eqn 20-43

Uncoupling the equations of motion


Using the coordinate transformation of the original unconnected substructures (expression 20-36) and premultiplying with [V]^t, one derives a new set of equations of motion in modal coordinates :


s [Am] {q} + [Bm] {q} = [V]^t {p}    Eqn 20-44

where

[Am] = [a] + [V]^t [Ac] [V] + [V]^t [ΔA] [V]

[Bm] = [b] + [V]^t [Bc] [V] + [V]^t [ΔB] [V]

The matrices Am and Bm for the modified structure can again be diagonalized by a general eigenvalue decomposition. When the new eigenvalues and eigenvectors are represented by Λ' and W, one has :

[W]^t [Am] [W] = [a']

[W]^t [Bm] [W] = [b']

Consider then the transformation :

{q} = [W] {q'}    Eqn 20-45

Substituting into equation 20-44 and premultiplying with [W]^t yields :

s [a'] {q'} + [b'] {q'} = [W]^t [V]^t {p}    Eqn 20-46

The transformation matrices V and W can be combined in one matrix V' as -

[V'] = [V] [W]    Eqn 20-47

which then gives the following transformation equation :

{y} = [V'] {q'}    Eqn 20-48

Equation 20-48 is the transformation between physical coordinates and modal coordinates of the connected and modified substructures. With this coordinate transformation the uncoupled equations of motion are :

s [a'] {q'} + [b'] {q'} = [V']^t {p}    Eqn 20-49

The natural frequencies and the damping factors can be found as the imaginary and real parts respectively of the eigenvalues in [Λ']. The mode shapes are the columns of the matrix V'.


Flexible coupling with proportional damping


The theory discussed above relates to flexible coupling with general viscous damping. In this section we consider the case of zero and proportional damping.

Recall the general equation of motion for viscous damping:

(s^2 [M] + s [C] + [K]) {X} = {F}    Eqn 20-50

Zero damping
In case of no damping, [C] = [0], the following eigenvalue problem is to be solved, with eigenvalues -ω_r^2 and eigenvectors {ψ}_r :

(s^2 [M] + [K]) {X} = {0}    Eqn 20-51

This system has purely imaginary poles, occurring in complex conjugate pairs.

λ_1 = j ω_1, ..., λ_N = j ω_N    Eqn 20-52

λ_1* = -j ω_1, ..., λ_N* = -j ω_N    Eqn 20-53

The modal vectors are real and are called normal modes (phase: +/- 180°). The equation of motion can be diagonalized, based on the orthogonality of the modal vectors. Transformation to modal coordinates leads to an equation of motion with diagonal system matrices, being the modal mass and modal stiffness matrices:

[Ψ] = [{ψ_1} ... {ψ_N}]    Eqn 20-54

[Ψ]^t [M] [Ψ] = [m]        [Ψ]^t [K] [Ψ] = [k]    Eqn 20-55

{X} = [Ψ] {q}    Eqn 20-56

(-ω^2 [m] + [k]) {q} = {0}    Eqn 20-57

where : k_r = m_r ω_r^2    Eqn 20-58

Proportional damping
In case of proportional damping, the damping system matrix is a linear combination of the mass system matrix and the stiffness system matrix:


[C] = α [M] + β [K]    Eqn 20-59

This leads to the next equation of motion:

( (s^2 + α s) [M] + (β s + 1) [K] ) {X} = {0}    Eqn 20-60

The eigenvalues are related to the complex poles by

ω_r^2 = - (λ_r^2 + α λ_r) / (β λ_r + 1)    Eqn 20-61
The complex poles are solved from the real eigenvalues (-ω_r^2) and the damping factors (α, β). When more than two original modes are taken into account (in practical cases this is always so), the damping factors can be solved in a least squares sense from the modal masses, modal stiffnesses and modal damping factors.
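The least-squares step can be sketched as follows: each mode supplies one equation c_r = α·m_r + β·k_r, and the overdetermined system is solved for α and β (the modal data below is illustrative):

```python
import numpy as np

# Modal masses, stiffnesses and damping values for four original modes
# (illustrative values; in practice these come from the modal model)
m = np.array([1.0, 1.2, 0.8, 1.5])
k = np.array([4.0e4, 9.0e4, 2.5e5, 6.4e5])
c = 0.8 * m + 2.0e-5 * k    # proportional damping: c_r = alpha*m_r + beta*k_r

# Each mode gives one row [m_r  k_r] [alpha; beta] = c_r; with more than two
# modes the system is overdetermined and solved in a least squares sense
(alpha, beta), *_ = np.linalg.lstsq(np.column_stack([m, k]), c, rcond=None)
```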

Modal synthesis
Only mass and stiffness coupling modifications (ΔM, ΔK), and not damping coupling modifications, can be applied. The equations of motion of the coupled system are

(s^2 ([M] + [ΔM]) + [K] + [ΔK]) {X} = {0}    Eqn 20-62

In modal space:

(-ω^2 ([m] + [Δm]) + [k] + [Δk]) {q} = {0}    Eqn 20-63

Where :

[Ψ]_m = [Ψ] [q_r]_m    Eqn 20-64

[Δm] = [Ψ]^t [ΔM] [Ψ]   and   [Δk] = [Ψ]^t [ΔK] [Ψ]    Eqn 20-65

The eigenvalues and eigenvectors of this equation, back-transformed from modal to physical space, are the modal parameters of the coupled system.
In case of proportional damping, the complex poles can be solved from the eigenvalues and the proportional damping factors α and β.
The option to use proportional damping is provided when modes are predicted. It reduces the computation time when dealing with large structures with numerous modifications and mode shapes containing a lot of DOFs. At least two original modes must be used in order to determine α and β.


Rigid coupling

The above theory relates to flexible coupling, but it is also possible to place
constraints on DOFs connecting substructures to create rigid coupling between
them, or to constrain a single DOF, thus fixing it rigidly to `ground'. In this
case the restrained DOFs will have zero displacement.

Constraints on the physical degrees of freedom are

[R] {Y} = {0}    Eqn 20-66

Performing a modal transformation:

{Y} = [Ψ] {q}    Eqn 20-67

yields constraints in modal space:

[R] [Ψ] {q} = [T] {q} = {0}    Eqn 20-68

The modal coordinates are split up into dependent modal coordinates q_d and independent modal coordinates q_i. The constraint matrix [T] is also split up :

[ [T_d] [T_i] ] { q_d } = {0}                Eqn 20-69
                { q_i }

{ q_d }   | -[T_d]^-1 [T_i] |
{ q_i } = |       [I]       | {q_i} = [T'] {q_i}    Eqn 20-70

The choice of the dependent modal coordinates has to be made such that it leads to a non-singular [T_d].

This leads to the new eigenvalue problem:

(s [T']^t [a] [T'] + [T']^t [b] [T']) {q_i} = {0}    Eqn 20-71

When the eigenvalues and the eigenvectors with the independent modal coordinates q_i are solved, the dependent modal coordinates q_d of the eigenvectors can be calculated. In a last step, the mode shapes in physical coordinates are found by the inverse modal transformation.
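The split and back-substitution can be sketched numerically. A minimal example (numpy; the constraint matrix is random here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_con = 6, 2

# Modal-space constraint matrix [T] = [R][Psi] (random for illustration)
T = rng.standard_normal((n_con, n_modes))
Td, Ti = T[:, :n_con], T[:, n_con:]          # dependent | independent split

# {q} = [T'] {q_i} with [T'] = [[-Td^-1 Ti], [I]]  (Eqn 20-70)
Tp = np.vstack([-np.linalg.solve(Td, Ti), np.eye(n_modes - n_con)])

# Any reduced coordinate vector now satisfies the original constraints exactly
q_i = rng.standard_normal(n_modes - n_con)
assert np.allclose(T @ (Tp @ q_i), 0.0)
```

The reduced eigenvalue problem of Eqn 20-71 is then posed in terms of `Tp.T @ a @ Tp` and `Tp.T @ b @ Tp`.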

Constraints can be defined in the same way as other structural modifications.


20.3.2 Implementation of Modification prediction


This section discusses some of the more practical aspects of performing modification prediction. This process allows you to compute the natural frequencies, damping values and scaled mode shapes for a modified mechanical structure which is possibly built up from a number of substructures.

20.3.2.1 Retrieval of the modal model

The starting point for modal synthesis applications is the available modal model for the structure to be modified, or for each of the substructures to be assembled.

All modal parameters (natural frequencies, damping values, and scaled mode shapes) have to be available for the calculation procedure. It is important however that some conditions are met -

1 Driving point coefficients


In order to be able to scale the included mode shapes correctly, they must include driving point coefficients. This means that for at least one record of the modal participation factor table, the force input (reference) identifier should match with a record of the mode shape table for the same mode, and neither one of them should be equal to zero. Note that this driving point Degree Of Freedom can be different for each of the included modes.

2 Matching DOFs for modes and modifications


Mode shape coefficients need only be available for the Degrees Of Freedom which are affected by the structural changes, i.e. those for which mass, stiffness or damping modifications are to be considered or to which structural elements are to be attached. Moreover, it is perfectly possible to use incomplete mode shape vectors missing some coefficients for irrelevant Degrees Of Freedom.

To obtain correct results, the modal model should include all structural modes to accurately describe the dynamic response for the frequency band of interest. This aspect is especially important when an experimental modal model was obtained from a set of FRFs relative to only one reference station, which happened not to excite some structural modes. This may arise if the reference station was located on or near a nodal point for these modes. In this case the modal model may be well suited to describe the measured FRFs but not the dynamic behavior of the structure as such.


A similar problem occurs for out-of-band effects caused by the presence of modes above or below the frequency band of experimental modal parameter estimation. Some of the frequency domain techniques for estimating mode shape coefficients allow correction terms (residual masses and flexibilities) to compensate for these residual effects. Using these corrections it is often possible to curve-fit the measurement data fairly accurately. Unfortunately, these residual terms cannot be scaled correctly for other reference stations as is done for the mode shape coefficients in the previous sections. They cannot therefore be included in the calculations. For this reason, it is advisable to use a sufficiently large modal model, i.e. one with at least one mode below and one mode above the frequency band of interest.

When using a modal model for a limited frequency band it is possible that important structural modifications would generate modes with a natural frequency outside the range of this frequency band. Since the original modal models are not valid at these frequencies, the predicted results will not be very reliable. It is therefore advisable to either include all modes for the frequency band of the resulting modal model or to keep the structural modifications small enough to avoid these problems. In any case you should not attach too much confidence to modes with natural frequencies outside the frequency band of the original modal model.
3 Correctly scaled mode shapes

The included mode shapes must be correctly scaled. To obtain correctly scaled mode shapes, the original mode shapes should be scaled in a consistent unit set which respects the consistency of physical quantities: poles, response engineering units per Volt, etc. A correct calibration of measurement signal transducers and acquisition equipment is required to attach any absolute scaling values to the obtained results.

20.3.3 Definition of modifications to the model


At each of the available Degrees Of Freedom of the modal model you can define one or more local modifications to influence the dynamic behavior of the mechanical structure. The structure can also be modified by the addition of complete substructures for which modal models exist, and by the use of constraints providing rigid coupling.

20.3.3.1 Mass modifications


A point mass can be added to a node on the structure. To add a mass modification you simply have to specify the node and the mass.


20.3.3.2 Stiffness modifications


A stiffness connection (spring) can be added between any two Degrees Of Freedom of the structure.
To add a stiffness modification you have to specify the DOFs between which the stiffness is to be applied and the stiffness value.
Note that stiffness (with mass) can also be added to a structure through the addition of a truss or a rod.

20.3.3.3 Damping modifications


A damping element (dashpot) can be added between any two Degrees Of Freedom of the structure.
To add a damping modification you have to specify the DOFs between which the damping is to be applied, and the damping value.
Note that damping can also be added to the structure through the addition of a tuned absorber.

20.3.3.4 Truss elements


A truss element can be defined as a doubly hinged rod between two points. Forces located at the ends of the truss element (nodal forces) are directed along the axis of the rod. Since trusses are modelled with hinges at the end, they cannot withstand transversal forces. Bending and torsion moments cannot be transmitted from one element to the next.
It provides a means of adding stiffness and mass between two points by the addition of a connection for which you know the physical characteristics.
To add a truss element you have to specify the nodes between which the truss is to be fixed and the physical characteristics of the truss.
A truss element is characterized by its
- cross sectional area A
- material's Young's modulus of elasticity E
- mass density d
These must all be expressed in the active unit system.


A truss element between two nodes is translated into elementary mass and
stiffness modifications. The longitudinal stiffness is related to a 6 by 6 stiffness
matrix for 6 Degrees Of Freedom (3 for each node). This matrix is obtained by
projecting the longitudinal stiffness along each of the 3 coordinate axes.
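The projection can be sketched as follows: with n the unit vector of direction cosines along the rod axis, the 6 by 6 matrix is EA/L times the block pattern [[nn^t, -nn^t], [-nn^t, nn^t]]. A minimal sketch (SI units assumed in the example):

```python
import numpy as np

def truss_stiffness(x1, x2, E, A):
    """6x6 stiffness matrix of a doubly hinged truss between nodes x1 and x2:
    the longitudinal stiffness E*A/L projected onto the 3 coordinate axes."""
    d = np.asarray(x2, float) - np.asarray(x1, float)
    L = np.linalg.norm(d)
    n = d / L                     # direction cosines of the rod axis
    nn = np.outer(n, n)           # 3x3 projection onto the axis
    return (E * A / L) * np.block([[nn, -nn], [-nn, nn]])

# Steel rod, 1 m long, 1 cm^2 cross section (illustrative values, SI units)
K = truss_stiffness([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], E=210e9, A=1e-4)

# A rigid translation of both end nodes produces no elastic force
assert np.allclose(K @ np.array([1.0, 0, 0, 1.0, 0, 0]), 0.0)
```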

20.3.3.5 Rod elements

A rod element can be added between any two separate nodes on the structure. Rods are modelled with hinges at their ends so (nodal) forces acting on the ends are directed along the axis of the rod. Bending and torsion moments cannot be transmitted from one element to the next.
In effect it provides a means of adding stiffness and mass between two points by the addition of a connection for which you know the mass and the stiffness.

To add a rod element you have to specify the nodes between which the rod is to be fixed and the physical characteristics of the rod. A rod element is characterized by its
- longitudinal stiffness Kij
- mass M.

The longitudinal stiffness is related to a 6 by 6 stiffness matrix for 6 Degrees Of Freedom (3 for each node). This matrix is obtained by projecting the longitudinal stiffness along each of the 3 co-ordinate axes.
The mass M is divided into two equal parts at both ends of the rod.

20.3.3.6 Beam elements

A beam element is an element that can transfer translational forces and moments of bending and torsion.

To add a beam element you have to specify the following parameters which are
illustrated below :

V the two end nodes (n1, n2)

V the area of its cross section (A)

V the material's Young's modulus (E)


V the material's mass density (m)

V the material's shear modulus (G)

V the moment of inertia for bending in two planes (Ip, Ib)

V the moment of inertia for torsion (It)

V a reference node to define the orientation of the moments of inertia for bending (r)

[Figure: beam element with cross sectional area A, material properties E, G, m, moments of inertia It, Ip, Ib, and reference node (r).]

The reference node together with the two end nodes defines the so-called reference plane. The moments of inertia for bending are defined in two directions :
Ib for bending in the reference plane
Ip for bending in a plane perpendicular to the reference plane
The 2 end nodes have six Degrees Of Freedom each: 3 translations and 3 rotations. A beam element can therefore transmit six forces to another beam element: 3 translational forces and 3 moments. For end nodes that are not connected to another beam only the translational forces can be transmitted, as for example in the case of a stand-alone beam. In the same way, beams that are positioned on a straight line (colinear beams) will not be subjected to torsion.

20.3.3.7 Plate membrane elements


A plate membrane element is a two dimensional quadrilateral element capable
of transferring both bending forces (perpendicular to the plane of the plate) and
membrane forces (in the same plane as the plate).

To add a plate element you have to specify the following parameters which are
illustrated below :


V The name of the plate

V The four corner nodes c1, c2, c3, and c4

V The plate thickness (t) expressed in the appropriate user unit

V The number of divisions along the first side, between c1 and c2 (a)

V The number of divisions along the second side, between c2 and c3 (b)

V The connection nodes n1, n2 and n3

V Material properties of the plate i.e. Young's Modulus (E), Poisson's ratio (ν), mass density (m).

These must all be expressed in the appropriate unit.

[Figure: plate element with corner nodes c1, c2, c3, c4, divisions a and b along the two sides, connection nodes n1, n2, n3, and thickness t.]

When a plate is defined with a and b divisions along its two sides, a mesh of (a
x b) rectangles is created as shown in the diagram. As the corner nodes already
exist this means that ((a+1).(b+1) - 4) new nodes are generated.

If there are connection nodes defined then the mesh point situated closest to a
connection node is replaced by that node.

The plate so defined should comply with the following conditions (1)

d the mesh elements should not deviate too much from a rectangular form, i.e. each corner angle should be approximately 90°
(1) The calculation of the mass and stiffness matrices of a plate membrane described here is based on the plate
theory of Mindlin.


d the mesh elements should be approximately square, i.e. the ratio of length/width should be approximately 1

d the plate should not be too thick, i.e. the ratio of length/thickness should be at least 5

Each of the corner nodes of the mesh elements has 6 Degrees Of Freedom - 3 translations and 3 rotations - and so can transmit six forces to another mesh element. This is also the case between elements of different plate membranes, as long as they are connected either at a corner or at a common connection node.

20.3.3.8 Tuned absorbers


A tuned absorber is a single Degree Of Freedom system consisting of a rigid mass which is connected by a spring and a dashpot to a more complex structure.

[Figure: tuned absorber mass m attached to a structure through a spring k and dashpot c; coupling point motion xa·e^jωt, relative motion xr·e^jωt.]
The parameters m, k and c of this SDOF system are designed such that the moĆ
tion of the coupling point in the direction of this absorber is decreased
(damped) as much as possible for a certain frequency, typically at resonance.


If the motion of the coupling point in the direction of the absorber is designated by xa and the frequency to be damped by f (= ω/2π), then the following formulae apply for the equations of motion of m (xr is the relative displacement between the absorber's mass and the attachment point).


(k x_r + jωc x_r) e^jωt = m (x_a + x_r) ω^2 e^jωt    Eqn 20-72

When this equation is solved for x_r :

x_r = m ω^2 x_a / (-m ω^2 + jωc + k)    Eqn 20-73

The force acting on the attachment point is -

F e^jωt = (k + jωc) x_r e^jωt    Eqn 20-74

From equations 20-73 and 20-74 :

F = [ (k + jωc) m / (-m ω^2 + jωc + k) ] ω^2 x_a    Eqn 20-75

This force can be imagined as being generated by the inertia of an equivalent mass m_eq, which is rigidly attached to the attachment point :

F = m_eq ω^2 x_a    Eqn 20-76

m_eq = (k + jωc) m / (-m ω^2 + jωc + k)    Eqn 20-77

It can be shown that if no damping is used (c = 0), the mass and stiffness of the absorber can be designed such that the vibration of the attachment point is eliminated entirely (x_a = 0). This happens if the natural frequency of the absorber equals the forcing frequency ω.

The most practical application of a tuned absorber is the reduction of vibration levels at a resonance frequency ω_n. In this case, the absorber's own natural frequency for optimal tuning is -

ω_na = √(k/m) = ω_n / (1 + μ)    Eqn 20-78


where μ is the ratio between the absorber's mass and the "equivalent" mass of the system at resonance :

μ = m / m_eq    Eqn 20-79

An optimal damping ratio for the absorber is then obtained from :

ζ_opt = c / (2 √(km)) = √( 3μ / (8 (1 + μ)^3) )    Eqn 20-80

From equations 20-78, 20-79 and 20-80 the physical parameters m, c and k of the attached absorber can be computed if the following values are known:
m_eq  the equivalent mass (see further)
ω_n   the target frequency of tuning, natural frequency of the mode to be tuned
m     the absorber's mass, to be specified by the user
The equivalent mass of the system for a certain mode can be obtained as follows:

m_eq = 1 / ( |V_i^2| · |2jω_d| )    Eqn 20-81

where
V_i  is the scaled mode shape coefficient of the mode to be tuned at the attachment point
ω_d  is the damped natural frequency of the mode to be tuned.
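Equations 20-78 to 20-81 can be chained into a small design routine. A sketch with illustrative input values (the function name and sample numbers are hypothetical):

```python
import numpy as np

def tuned_absorber(m, V_i, w_n, w_d):
    """Absorber spring k and damper c following Eqns 20-78 to 20-81.
    m: absorber mass (user choice), V_i: scaled mode shape coefficient at the
    attachment DOF, w_n / w_d: undamped / damped natural frequency [rad/s]."""
    m_eq = 1.0 / (abs(V_i) ** 2 * 2.0 * w_d)            # Eqn 20-81
    mu = m / m_eq                                        # Eqn 20-79
    w_na = w_n / (1.0 + mu)                              # Eqn 20-78
    k = m * w_na ** 2                                    # spring from tuning frequency
    zeta = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # Eqn 20-80
    c = zeta * 2.0 * np.sqrt(k * m)
    return k, c

# Tune a 0.5 kg absorber to a 50 Hz mode (illustrative values)
k, c = tuned_absorber(m=0.5, V_i=0.02, w_n=2 * np.pi * 50.0, w_d=2 * np.pi * 49.9)
```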

20.3.3.9 Constraints
Physical constraints can be defined between separate DOFs or between one
DOF and itself.

Defining a constraint between two separate DOFs applies a rigid coupling between them. Defining a constraint between a DOF and itself effectively fixes it to `ground'.


20.3.4 Modification prediction calculation


Once the required modifications have been defined, the modification prediction calculation process can be started.

For the simplified case of two substructures which are possibly modified (symbol Δ) and connected to each other (subscript c), the following procedure is followed to predict the modal model of the resulting structure :

1. Retrieve the modal models for each substructure. Build the diagonal matrices Λ1 and Λ2 of poles and the (possibly complex) modal matrices V1 and V2 of scaled mode shapes.

2. Join both modal models into the global matrices Λ (equation 20-37) and V (20-36).

3. Define the connecting elements (springs and dashpots) between both substructures. This yields matrices Ac and Bc (equation 20-40).

4. Define the necessary modifications and join them into matrices ΔA and ΔB (equation 20-42).

5. Use the modal matrix V to transform the connection and modification matrices to the modal space.

6. Add the diagonalized matrices in modal space (equation 20-44) to yield the system matrix of the resulting structure.

7. Calculate the modal model via an eigenvalue and eigenvector decomposition of the resulting system matrix. This yields the complex poles (natural frequencies and damping factors) and the mode shapes.
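For two single-DOF substructures coupled by a spring, the steps above reduce to a few lines. A sketch with illustrative values; the state ordering and the a-scaling of the eigenvectors are assumptions:

```python
import numpy as np
from scipy.linalg import eig, block_diag

def state_matrices(M, C, K):
    # Assumed first-order form: y = {x_dot; x},
    # A = [[0, M], [M, C]], B = [[-M, 0], [0, K]]
    Z = np.zeros_like(M)
    return np.block([[Z, M], [M, C]]), np.block([[-M, Z], [Z, K]])

def modal_model(A, B):
    # Step 1: poles and mode shapes of one substructure,
    # scaled so that V^t A V = I (assumed scaling convention)
    lam, V = eig(-B, A)
    a = np.einsum('ji,jk,ki->i', V, A, V)     # diagonal of V^t A V
    return lam, V / np.sqrt(a)

# Two single-DOF substructures (illustrative mass, damping, stiffness)
A1, B1 = state_matrices(np.array([[1.0]]), np.array([[0.2]]), np.array([[100.0]]))
A2, B2 = state_matrices(np.array([[2.0]]), np.array([[0.1]]), np.array([[400.0]]))
lam1, V1 = modal_model(A1, B1)
lam2, V2 = modal_model(A2, B2)

# Step 2: join the modal models into global block-diagonal matrices
A, B, V = block_diag(A1, A2), block_diag(B1, B2), block_diag(V1, V2)

# Step 3: a spring Kc between the two displacement entries (indices 1 and 3)
Kc = 50.0
Bc = np.zeros((4, 4))
Bc[np.ix_([1, 3], [1, 3])] = Kc * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Steps 5-7: transform to modal space, add, and decompose again
Am = V.T @ A @ V
Bm = V.T @ (B + Bc) @ V
poles, W = eig(-Bm, Am)
```

The resulting `poles` are the complex poles of the coupled structure; the mode shapes in physical coordinates follow from the product of `V` and `W` (Eqn 20-47).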

Numerical problems
The eigenvalue problem mentioned above, that is to be solved for the modified system, can be subject to numerical problems. These can arise from two sources.

V The presence of unbalanced structural modifications, such as those introducing large amounts of stiffness to simulate a fixation, or local heavy dampers.

V A wide range of original natural frequencies. This can occur especially when rigid body modes of free-free systems (virtually at 0 Hz) are imported from an FE code and mixed with flexible modes at high frequencies. More specifically, in this case it is the ratio of the highest to lowest natural frequency that is the relevant factor.


In practice these numerical problems are manifested in the modified modal model by unrealistic modal parameters or missing modes. While it is impossible to eliminate such problems, they can be reported during the modification prediction calculation.

The criterion used in this respect is the condition number of the system matrix, i.e. the matrix whose eigenvalues and eigenvectors yield the modal parameters. If this condition number exceeds a certain (critical) value, this is reported to the user. The critical value used has been established by empirical tests and is by default set to 1e+8.
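The check can be sketched with numpy's condition number; the 1e+8 threshold is the default quoted above and the matrices are illustrative:

```python
import numpy as np

# A fixation simulated by a huge spring makes the system matrix ill-conditioned;
# the condition number flags this (illustrative diagonal stiffness matrices)
K_ok  = np.diag([1.0e4, 3.0e4, 8.0e4])
K_bad = np.diag([1.0e4, 3.0e4, 8.0e12])      # unbalanced stiffness modification

critical = 1.0e8                              # default critical value from the text
ok_flag  = np.linalg.cond(K_ok)  < critical   # below threshold: no warning
bad_flag = np.linalg.cond(K_bad) < critical   # above threshold: report to the user
```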

20.3.5 Units of scaling


In order to obtain correct modification prediction results, it is absolutely necessary to maintain a correct scaling of the original modal model using a consistent unit set.

The scaled mode shapes of the original structure have a physical dimension related to the measurement data from which they were extracted by modal parameter estimation techniques. Since this modal model is a valid description for the relation between input forces and response displacements, the applied modifications should be defined in a unit set which is consistent for these quantities. The same rule applies to the interpretation of the resulting modal model.

Erroneous results are bound to occur when the original mode shape vectors are not scaled correctly. This might arise because of an incorrect definition of the reference point for the data (wrong driving point residue), not using the correct transducer sensitivity or calibration factors for the experimental FRFs (force as well as response transducers), or the use of an inconsistent unit set during the modal test or analysis phase. These errors may cause an entirely wrong transformation of the applied physical modifications to the modal space, and a small mass modification, for example, may grow out of proportion because of this bad scaling.


Example of the application of a beam element


The following example will illustrate the procedure. Suppose the dynamic behavior of an isotropic plate is to be influenced by a rib fixed to the plate as shown below.

[Figure: isotropic plate (main plate) stiffened by a rib with an I-shaped cross section; the rib is discretized into 4 beam elements (elem 1 to elem 4) connecting nodes 1 to 5.]

The procedure becomes :

1 Discretization of the rib into 4 beam elements, interconnected at nodes corresponding to measurement points of the experimental analysis.

2 Definition or calculation of the following physical parameters.
A = cross section of the beam
It = moment of inertia for torsion
Ib = moment of inertia for bending in the reference plane, defined by the nodes n1, n2 and r
E = Young's modulus of elasticity
G = shear modulus
L = length of the beam
axes 1, 2, 3 = orientation of the local beam reference system in the global system. This information is derived from the position of the three nodes n1, n2 and r as shown in Figure 20-1.


m = material's mass density

From the geometrical properties of the beam the user can calculate the cross sectional area and the different moments of inertia. Tables listing the characteristics of various cross section types can be found.

[Figure 20-1 Stiffening rib orientation and local co-ordinate system (Axes 1, 2 and 3), defined by the nodes n1, n2 and the reference node r.]

3 Construction of the element matrices for each beam element.

An element stiffness (full) and mass (diagonal) matrix can be built from the relations between the 6 forces and 6 Degrees Of Freedom at each end node (u1, v1, w1, α1, β1, γ1 for node 1; u2, v2, w2, α2, β2, γ2 for node 2).

[Figure 20-2 Element matrices for nodes 1 and 2: the translations {u, v, w} of each node are related through the translation blocks T1, T2 and the rotations {α, β, γ} through the rotation blocks R1, R2.]

4 Assembly of the element matrices as shown below


[Figure 20-3 Assembly of element matrices: the element blocks T1..T5 and R1..R5 are overlaid along the diagonal of the global matrix, overlapping at the shared nodes.]

5 Perform a static condensation (see below) of the rotational DOFs.

6 Add the condensed matrices to the system matrices and continue the calculation procedure as for other (lumped) modifications.

Remarks :

V The element matrices of a beam model must be assembled before condensation and addition to the system matrices to allow moments to be transmitted between different elements.

V It is important to keep in mind that the basic assumption in beam-bending analysis is that a plane section originally normal to the neutral axis remains plane during deformation. This assumption holds provided that the ratio of beam length to beam height is greater than 2. Furthermore, shear effects do not contribute to the elements of the stiffness matrix.

V Care should be taken with the input of moments of inertia. In the example stated above the distance between the axis of the plate and the axis of the beam must be taken into account.

Static condensation
Static condensation in a dynamic analysis is based upon the assumption that the mass at some Degrees Of Freedom can be neglected without a significant loss of accuracy of the dynamic model in the frequency range of interest. More explicitly, for the beam elements in the application of interest, consider the rotational Degrees Of Freedom to be without mass. The assembled mass and stiffness matrices of the entire beam can then be partitioned as follows,


[K] = | K_TT  K_TR |        [M] = | M_TT  [0] |        Eqn 20-82
      | K_RT  K_RR |              | [0]   [0] |

where
T refers to the translational DOFs
R refers to the rotational DOFs.
The modal parameters describing the dynamic behavior of this structure are then obtained by solving the following eigenvalue problem,

| K_TT  K_TR | { V_T }  = ω^2 | M_TT  [0] | { V_T }        Eqn 20-83
| K_RT  K_RR | { V_R }        | [0]   [0] | { V_R }

From the bottom half of equation 20-83 a relation between the translational and the rotational DOFs is derived:

[K_RT] [V_T] + [K_RR] [V_R] = {0}    Eqn 20-84

which can be solved to express the rotational DOFs in terms of the translational ones,

[V_R] = - [K_RR]^-1 [K_RT] [V_T]    Eqn 20-85

Introduction of equation 20-85 into equation 20-83 yields :

[K_T] [V_T] = ω^2 [M_TT] [V_T]    Eqn 20-86

with

[K_T] = [K_TT] - [K_TR] [K_RR]^-1 [K_RT]    Eqn 20-87
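Equation 20-87 translates directly into code. A minimal static-condensation sketch (the 4-DOF system is illustrative, with massless rotational DOFs):

```python
import numpy as np

def condense(K, M, t_idx, r_idx):
    """Static condensation of the massless rotational DOFs r_idx:
    K_T = K_TT - K_TR K_RR^-1 K_RT (Eqn 20-87)."""
    KTT = K[np.ix_(t_idx, t_idx)]
    KTR = K[np.ix_(t_idx, r_idx)]
    KRT = K[np.ix_(r_idx, t_idx)]
    KRR = K[np.ix_(r_idx, r_idx)]
    return KTT - KTR @ np.linalg.solve(KRR, KRT), M[np.ix_(t_idx, t_idx)]

# Toy 4-DOF system: DOFs 0, 1 translational; DOFs 2, 3 rotational and massless
K = np.array([[ 4.0, -1.0, 1.0, 0.0],
              [-1.0,  3.0, 0.0, 1.0],
              [ 1.0,  0.0, 2.0, 0.0],
              [ 0.0,  1.0, 0.0, 2.0]])
M = np.diag([2.0, 1.0, 0.0, 0.0])
KT, MTT = condense(K, M, [0, 1], [2, 3])
```

`KT` and `MTT` then define the reduced eigenvalue problem of Eqn 20-86.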


The matrices [KT] and [MTT] of equation 20-86 are used to dynamically model the beam structure. The model will only be valid in the frequency range where the mass effects of the rotational DOFs are negligible. Mass effects only contribute significantly to the dynamic behavior around and above those resonances where they are capable of storing a considerable amount of kinetic energy.

Note that [KT] as expressed in equation 20-87 can only be computed if [KRR] is non-singular. The stiffness matrix is singular if rigid body motion is possible. The rigid body mode of a beam along its longitudinal axis is not naturally eliminated by constraining its three translational DOFs, so causing in general a first order singularity. With such configurations it will not be possible to store torsional deformation energy in the beam; therefore the corresponding off-diagonal elements of the assembled stiffness matrix can be neglected and the diagonal elements made relatively small. In this way the matrix becomes invertible and the predicted dynamic behavior will reflect the inability to store torsional deformation energy in the beam. This operation will, however, not be necessary when the beam is two or three dimensional, as in such cases rigid body motion through rotation around one of the axes is no longer possible.


20.4 Forced response

Experimental modal analysis results in a dynamic model described by the modal parameters: damped natural frequency, exponential decay rate and scaled mode shapes (residues). These modal parameters provide valuable insight into the dynamic behavior of a structure. Problem areas can be identified by animating the mode shapes, and the relative importance of the mode shapes can be assessed by comparing their amplitudes.

In most cases however the designer is less interested in the dynamic characteristics themselves than in knowing how the structure is going to behave under normal operating conditions. The important points to determine are -

V what will happen under dynamic loading conditions ?

V which of the natural frequencies will dominate the response ?

V which points will exhibit large deformations ?

V how will the structure deform at particular frequencies ?

The natural frequency of the modes of vibration which seem to be the most important parameters in the modal model may well not dominate the response if conditions are such that they are not excited.

The Forced response functions enable you to answer these questions by
determining the response of the modal model to known force spectra.

20.4.1 Mathematical background for forced response


The structure's modal model forms the input for the computation of its dynamic
response and is the starting point for the forced response analysis.

The equations of motion of a linear, time invariant mechanical structure are
expressed in the frequency domain as follows:

{X(ω)} = [H(ω)] {F(ω)}                                Eqn 20-88

where {X(ω)} is the response spectra vector (N0 by 1)

[H(ω)] is the Frequency Response Function matrix (N0 by N0)
{F(ω)} is the applied force spectra vector (N0 by 1).


These quantities are complex-valued functions of the frequency variable ω and
are valid for every value of ω for which these functions are known.

When the response at one specific degree of freedom (DOF), say i, is needed,
the above equations become:

Xi(ω) = Σ(j=1..N0) Hij(ω) Fj(ω)                                Eqn 20-89

This means that the response at DOF i can be written as a linear combination of
the applied forces, each weighted by the corresponding FRF between input
DOF j and output DOF i. These frequency dependent weighting factors
describe the dynamic flexibility between two degrees of freedom i and j of a
mechanical structure.
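At a single frequency, equation 20-89 is just a weighted sum. A minimal sketch with invented FRF and force values:

```python
import numpy as np

# One row of the FRF matrix and the force vector at a single frequency
# (all values invented for illustration)
H_i = np.array([0.2 - 0.1j, 0.05 + 0.3j, -0.1 + 0.02j])  # Hij, j = 1..N0
F   = np.array([1.0 + 0.0j, 0.0 + 0.5j, 2.0 + 0.0j])     # Fj,  j = 1..N0

# Eqn 20-89: response at DOF i as an FRF-weighted sum of the applied forces
X_i = np.sum(H_i * F)
print(X_i)
```

Repeating this for every frequency line of interest builds up the full response spectrum Xi(ω).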

When the modal model for that structure is available, e.g. from modal test data
or finite element calculations, the FRF can be modelled as given by

Hij(ω) = Σ(k=1..2N) rijk / (jω - λk)                                Eqn 20-90

where rijk is the residue and λk the pole of mode k.

Using equation 20-89, it is now possible to predict the dynamic response at
DOF i when the structure is subjected to a number of simultaneous loads at
DOFs j, for which scaled mode shape coefficients (residues) are also available
in the modal model.

Xi(ω) = Σ(j=1..N0) Σ(k=1..2N) [ vik vjk / (jω - λk) ] Fj(ω)        Eqn 20-91

Even if not all the residues are available, the Maxwell-Betti reciprocity
principle can be used to calculate the required values. Equation 20-4 allows
the residue rick to be derived for any reference DOF c when the residues for
DOFs i and c are available for an arbitrary reference j, on condition that the
driving point residue rjjk is also available. The driving point residue is
also required if the mode shapes are to be correctly scaled.
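One common form of this relation follows from writing each residue as a product of two mode shape coefficients and a scaling constant, rijk = vik vjk Qk, so that rick = rijk rcjk / rjjk. A small sketch with invented values (the coefficients below are illustrative, not measured data):

```python
# Invented mode shape coefficients and scaling constant for one mode k
v_i, v_j, v_c, Q = 0.8, 0.5, 1.2, 2.0

# residues measured with reference DOF j, plus the driving point residue
r_ijk = v_i * v_j * Q
r_cjk = v_c * v_j * Q
r_jjk = v_j * v_j * Q

# residue for the new reference DOF c, without measuring it directly:
# r_ick = (v_i v_j Q)(v_c v_j Q) / (v_j v_j Q) = v_i v_c Q
r_ick = r_ijk * r_cjk / r_jjk
print(r_ick == v_i * v_c * Q)  # → True
```

The division by the driving point residue r_jjk is why the text insists that it must be available.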

Equation 20-91 represents the response at all DOFs to all forces, with a
contribution from all modes. The contribution of each mode is given by:


mode k = 1 to N:

    fk(ω) = [1 / (jω - λk)] Σ(j=1..N0) vjk Fj(ω) = pk(ω) Σ(j=1..N0) vjk Fj(ω)

mode k = N+1 to 2N (complex conjugate modes):                      Eqn 20-92

    fk(ω) = [1 / (jω - λk*)] Σ(j=1..N0) vjk* Fj(ω) = pk*(ω) Σ(j=1..N0) vjk* Fj(ω)

The response at each DOF, taking into account the contribution of each mode,
is then given by

Xi(ω) = Σ(k=1..N) vik fk(ω) + Σ(k=N+1..2N) vik* fk(ω)              Eqn 20-93
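A minimal numerical sketch of equations 20-90 to 20-93, using one invented mode plus its complex conjugate, shows that summing the modal contributions reproduces the response computed directly from the synthesized FRF:

```python
import numpy as np

w   = 2 * np.pi * 10.0               # evaluation frequency (rad/s)
lam = -1.0 + 2 * np.pi * 9.0 * 1j    # pole: decay rate + j * damped natural freq.
v   = np.array([0.6, 0.3])           # scaled mode shape coefficients at DOFs 1, 2
F   = np.array([1.0 + 0j, 0.5 + 0j]) # applied force spectra at DOFs 1, 2

# modal participation factors (Eqn 20-92): the mode and its complex conjugate
f_k  = (v @ F) / (1j * w - lam)
f_kc = (np.conj(v) @ F) / (1j * w - np.conj(lam))

# response at the first DOF by modal superposition (Eqn 20-93)
X_i = v[0] * f_k + np.conj(v[0]) * f_kc

# the same response computed from the synthesized FRF (Eqns 20-90, 20-91)
H_i = v[0] * v / (1j * w - lam) + np.conj(v[0] * v) / (1j * w - np.conj(lam))
print(np.allclose(X_i, H_i @ F))  # → True
```

The equality holds because equation 20-93 is just equation 20-91 with the sums over forces and modes exchanged.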



Chapter 21

Geometry concepts

This chapter describes the basic concepts involved in the definition
of the geometry of a structure:

-  the geometry of a test structure

-  the definition of nodes


21.1 The geometry of a test structure

A geometrical representation of a test structure is necessary for the display
and animation of mode shapes, and for the implementation of design
modifications. This chapter discusses the basics regarding the geometry
definition of a model for a test structure.

The most important part of the model is the nodes. These define the points
where measurements will be taken on the structure, and the points where the
mode shape deformations are calculated. It is common practice to define
connections or edges between specific nodes to form a wire frame model of the
structure. In addition, surfaces can be defined that aid in the visual
representation of the structure.

Figure 21-1 A wire frame model of a structure (labels: node, connection, surface)
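A minimal data structure for such a wire frame model might look like this; the field names and values are illustrative, not the product's file format:

```python
# Minimal sketch of a wire frame model: nodes with coordinates,
# connections between node pairs, and surfaces over node triples.
model = {
    "nodes": {1: (0.0, 0.0, 0.0), 2: (1.0, 0.0, 0.0),
              3: (1.0, 1.0, 0.0), 4: (0.0, 1.0, 0.0)},
    "connections": [(1, 2), (2, 3), (3, 4), (4, 1)],
    "surfaces": [(1, 2, 3), (1, 3, 4)],
}

# every connection and surface must reference defined nodes
assert all(n in model["nodes"] for c in model["connections"] for n in c)
print(len(model["nodes"]), len(model["connections"]), len(model["surfaces"]))  # → 4 4 2
```

Connections and surfaces carry no measurement information of their own; only the nodes define measurement points and directions.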

Note that the definition of nodes and meshes for acoustic measurements is
described in the "Acoustic" documentation.


21.2 Nodes
A node is defined by its location and its orientation.

Location
The location of a node in 3D space is defined by a set of 3 real numbers
known as the coordinates. Coordinates are always defined relative to a
reference coordinate system.

The reference coordinate system is normally shown along with the model in the
display window. The origin of the global coordinate system is the origin of
the 3D space that contains the test structure, and the global symmetry of the
structure should be considered when defining it.

The reference coordinate system can be either Cartesian, cylindrical or
spherical.

Figure 21-2 Coordinate systems: right handed Cartesian (x, y, z), cylindrical (r, θ, z) and spherical (r, θ, φ)


So as an example, the same node defined in each of the coordinate systems
would appear as follows.

Cartesian      X  Y  Z      1     1     1
Cylindrical    r  θ  Z      √2    45°   1
Spherical      r  θ  φ      √3    45°   55°
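The conversions behind this example can be sketched as follows, assuming (per Figure 21-2) that the spherical angle φ is measured from the Z axis:

```python
import math

# The same point in the three reference coordinate systems of Figure 21-2
# (angles in degrees)
x, y, z = 1.0, 1.0, 1.0                       # Cartesian

r_cyl = math.hypot(x, y)                      # cylindrical radius = sqrt(2)
theta = math.degrees(math.atan2(y, x))        # azimuth = 45 degrees

r_sph = math.sqrt(x * x + y * y + z * z)      # spherical radius = sqrt(3)
phi   = math.degrees(math.acos(z / r_sph))    # angle from Z axis, about 54.7 deg
print(round(r_cyl, 4), theta, round(r_sph, 4), round(phi, 1))
```

Note that the tabulated 55° is the rounded value of the exact spherical angle acos(1/√3) ≈ 54.7°.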

Orientation
Nodal orientation is defined using a Cartesian coordinate system. In many
applications the orientation of the node defines the measurement directions.


Figure 21-3 Nodal coordinate system

The origin of the nodal coordinate system coincides with the node's location.
If the principal axes of the nodal coordinate system are not coincident with
the measurement directions, in either a positive or a negative sense, then the
difference must be defined with Euler angles.

Euler angles

Three Euler angles are used to define the orientation of one coordinate
system relative to a reference coordinate system with the same origin.

"xy Zr z'
The first angle, ă"xy (Euler XY) is a rotation
about the Zr axis of the reference system. (PosiĆ
tive from Xr axis to Yr axis). This generates a
first intermediate system indicated by a single y'
quote ' on the axis labels. Xr
+
"xy x' Yr

"xz z"
z'
The second angle "xz (Euler XZ) is a rotation
about the y' axis of the first intermediate system.
(Positive from the x' axis to the z' axis). This genĆ
erates a second intermediate system, indicated x" y'y''
by two quotes " on the axis labels.
+
"xz
x'


"yz
Z z''
Finally the third angle, "yz (Euler YZ) is a rotaĆ
tion about the x" axis of the second intermediate
system, positive from the y" axis to the z" axis.
Y
This last orientation generates the desired new
coordinate system orientation. + "yz
y''
x''X
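The three rotations above can be composed into a single orientation matrix. The sketch below assumes they are applied as intrinsic rotations (Z, then the intermediate y', then the intermediate x'') with the positive senses described; the exact sign conventions are an assumption based on the figures, not a product specification:

```python
import numpy as np

def euler_to_matrix(t_xy, t_xz, t_yz):
    """Sketch: orientation matrix from the three Euler angles (degrees)."""
    a, b, c = np.radians([t_xy, t_xz, t_yz])
    # rotation about Zr, positive from Xr to Yr
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    # rotation about y', positive from x' to z'
    Ry = np.array([[np.cos(b), 0.0, -np.sin(b)],
                   [0.0, 1.0, 0.0],
                   [np.sin(b), 0.0,  np.cos(b)]])
    # rotation about x'', positive from y'' to z''
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(c), -np.sin(c)],
                   [0.0, np.sin(c),  np.cos(c)]])
    return Rz @ Ry @ Rx  # intrinsic rotations applied in sequence

R = euler_to_matrix(90.0, 0.0, 0.0)  # 90 deg about Z maps x onto y
print(np.round(R, 6))
```

A quick sanity check is that the result is always orthonormal, and that zero angles give the identity.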

Degrees Of Freedom (DOFs)


The Degrees Of Freedom of a node represent the directions in which a node is
free to move. Each node therefore has a maximum of 7 Degrees Of Freedom:
3 translational, 3 rotational and a scalar DOF (Sc).

Figure 21-4 Degrees of freedom: translations X, Y, Z; rotations RX, RY, RZ; and a scalar DOF

