LabVIEW™ Analysis Concepts
March 2004 Edition
Part Number 370192C-01
Support
Worldwide Technical Support and Product Information
ni.com
National Instruments Corporate Headquarters
11500 North Mopac Expressway Austin, Texas 78759-3504 USA Tel: 512 683 0100
Worldwide Offices
Australia 1800 300 800, Austria 43 (0) 662 45 79 90 0, Belgium 32 (0) 2 757 00 20, Brazil 55 11 3262 3599,
Canada (Calgary) 403 274 9391, Canada (Ottawa) 613 233 5949, Canada (Québec) 450 510 3055,
Canada (Toronto) 905 785 0085, Canada (Vancouver) 514 685 7530, China 86 21 6555 7838,
Czech Republic 420 224 235 774, Denmark 45 45 76 26 00, Finland 358 (0) 9 725 725 11,
France 33 (0) 1 48 14 24 24, Germany 49 (0) 89 741 31 30, Greece 30 2 10 42 96 427, India 91 80 51190000,
Israel 972 (0) 3 6393737, Italy 39 02 413091, Japan 81 3 5472 2970, Korea 82 02 3451 3400,
Malaysia 603 9131 0918, Mexico 001 800 010 0793, Netherlands 31 (0) 348 433 466,
New Zealand 0800 553 322, Norway 47 (0) 66 90 76 60, Poland 48 22 3390150, Portugal 351 210 311 210,
Russia 7 095 783 68 51, Singapore 65 6226 5886, Slovenia 386 3 425 4200, South Africa 27 (0) 11 805 8197,
Spain 34 91 640 0085, Sweden 46 (0) 8 587 895 00, Switzerland 41 56 200 51 51, Taiwan 886 2 2528 7227,
Thailand 662 992 7519, United Kingdom 44 (0) 1635 523545
For further support information, refer to the Technical Support and Professional Services appendix. To comment
on the documentation, send email to techpubs@ni.com.
© 2000–2004 National Instruments Corporation. All rights reserved.
Important Information
Warranty
The media on which you receive National Instruments software are warranted not to fail to execute programming instructions, due to defects
in materials and workmanship, for a period of 90 days from date of shipment, as evidenced by receipts or other documentation. National
Instruments will, at its option, repair or replace software media that do not execute programming instructions if National Instruments receives
notice of such defects during the warranty period. National Instruments does not warrant that the operation of the software shall be
uninterrupted or error free.
A Return Material Authorization (RMA) number must be obtained from the factory and clearly marked on the outside of the package before
any equipment will be accepted for warranty work. National Instruments will pay the shipping costs of returning to the owner parts which are
covered by warranty.
National Instruments believes that the information in this document is accurate. The document has been carefully reviewed for technical
accuracy. In the event that technical or typographical errors exist, National Instruments reserves the right to make changes to subsequent
editions of this document without prior notice to holders of this edition. The reader should consult National Instruments if errors are suspected.
In no event shall National Instruments be liable for any damages arising out of or related to this document or the information contained in it.
EXCEPT AS SPECIFIED HEREIN, NATIONAL INSTRUMENTS MAKES NO WARRANTIES, EXPRESS OR IMPLIED, AND SPECIFICALLY DISCLAIMS ANY WARRANTY OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. CUSTOMER’S RIGHT TO RECOVER DAMAGES CAUSED BY FAULT OR NEGLIGENCE ON THE PART OF
NATIONAL INSTRUMENTS SHALL BE LIMITED TO THE AMOUNT THERETOFORE PAID BY THE CUSTOMER. NATIONAL INSTRUMENTS WILL NOT BE LIABLE FOR
DAMAGES RESULTING FROM LOSS OF DATA, PROFITS, USE OF PRODUCTS, OR INCIDENTAL OR CONSEQUENTIAL DAMAGES, EVEN IF ADVISED OF THE POSSIBILITY
THEREOF. This limitation of the liability of National Instruments will apply regardless of the form of action, whether in contract or tort, including
negligence. Any action against National Instruments must be brought within one year after the cause of action accrues. National Instruments
shall not be liable for any delay in performance due to causes beyond its reasonable control. The warranty provided herein does not cover
damages, defects, malfunctions, or service failures caused by owner’s failure to follow the National Instruments installation, operation, or
maintenance instructions; owner’s modification of the product; owner’s abuse, misuse, or negligent acts; and power failure or surges, fire,
flood, accident, actions of third parties, or other events outside reasonable control.
Copyright
Under the copyright laws, this publication may not be reproduced or transmitted in any form, electronic or mechanical, including photocopying,
recording, storing in an information retrieval system, or translating, in whole or in part, without the prior written consent of National
Instruments Corporation.
For a listing of the copyrights, conditions, and disclaimers regarding components used in USI (Xerces C++, ICU, and HDF5), refer to the
USICopyrights.chm.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/).
Copyright © 1999 The Apache Software Foundation. All rights reserved.
Copyright © 1995–2003 International Business Machines Corporation and others. All rights reserved.
NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities
Copyright 1998, 1999, 2000, 2001, 2003 by the Board of Trustees of the University of Illinois. All rights reserved.
Trademarks
CVI™, LabVIEW™, National Instruments™, NI™, and ni.com™ are trademarks of National Instruments Corporation.
MATLAB® is a registered trademark of The MathWorks, Inc. Other product and company names mentioned herein are trademarks or trade
names of their respective companies.
Patents
For patents covering National Instruments products, refer to the appropriate location: Help»Patents in your software, the patents.txt file
on your CD, or ni.com/patents.
WARNING REGARDING USE OF NATIONAL INSTRUMENTS PRODUCTS
(1) NATIONAL INSTRUMENTS PRODUCTS ARE NOT DESIGNED WITH COMPONENTS AND TESTING FOR A LEVEL OF
RELIABILITY SUITABLE FOR USE IN OR IN CONNECTION WITH SURGICAL IMPLANTS OR AS CRITICAL COMPONENTS IN
ANY LIFE SUPPORT SYSTEMS WHOSE FAILURE TO PERFORM CAN REASONABLY BE EXPECTED TO CAUSE SIGNIFICANT
INJURY TO A HUMAN.
(2) IN ANY APPLICATION, INCLUDING THE ABOVE, RELIABILITY OF OPERATION OF THE SOFTWARE PRODUCTS CAN BE
IMPAIRED BY ADVERSE FACTORS, INCLUDING BUT NOT LIMITED TO FLUCTUATIONS IN ELECTRICAL POWER SUPPLY,
COMPUTER HARDWARE MALFUNCTIONS, COMPUTER OPERATING SYSTEM SOFTWARE FITNESS, FITNESS OF COMPILERS
AND DEVELOPMENT SOFTWARE USED TO DEVELOP AN APPLICATION, INSTALLATION ERRORS, SOFTWARE AND
HARDWARE COMPATIBILITY PROBLEMS, MALFUNCTIONS OR FAILURES OF ELECTRONIC MONITORING OR CONTROL
DEVICES, TRANSIENT FAILURES OF ELECTRONIC SYSTEMS (HARDWARE AND/OR SOFTWARE), UNANTICIPATED USES OR
MISUSES, OR ERRORS ON THE PART OF THE USER OR APPLICATIONS DESIGNER (ADVERSE FACTORS SUCH AS THESE ARE
HEREAFTER COLLECTIVELY TERMED “SYSTEM FAILURES”). ANY APPLICATION WHERE A SYSTEM FAILURE WOULD
CREATE A RISK OF HARM TO PROPERTY OR PERSONS (INCLUDING THE RISK OF BODILY INJURY AND DEATH) SHOULD
NOT BE RELIANT SOLELY UPON ONE FORM OF ELECTRONIC SYSTEM DUE TO THE RISK OF SYSTEM FAILURE. TO AVOID
DAMAGE, INJURY, OR DEATH, THE USER OR APPLICATION DESIGNER MUST TAKE REASONABLY PRUDENT STEPS TO
PROTECT AGAINST SYSTEM FAILURES, INCLUDING BUT NOT LIMITED TO BACKUP OR SHUT DOWN MECHANISMS.
BECAUSE EACH END-USER SYSTEM IS CUSTOMIZED AND DIFFERS FROM NATIONAL INSTRUMENTS' TESTING
PLATFORMS AND BECAUSE A USER OR APPLICATION DESIGNER MAY USE NATIONAL INSTRUMENTS PRODUCTS IN
COMBINATION WITH OTHER PRODUCTS IN A MANNER NOT EVALUATED OR CONTEMPLATED BY NATIONAL
INSTRUMENTS, THE USER OR APPLICATION DESIGNER IS ULTIMATELY RESPONSIBLE FOR VERIFYING AND VALIDATING
THE SUITABILITY OF NATIONAL INSTRUMENTS PRODUCTS WHENEVER NATIONAL INSTRUMENTS PRODUCTS ARE
INCORPORATED IN A SYSTEM OR APPLICATION, INCLUDING, WITHOUT LIMITATION, THE APPROPRIATE DESIGN,
PROCESS AND SAFETY LEVEL OF SUCH SYSTEM OR APPLICATION.
© National Instruments Corporation v LabVIEW Analysis Concepts
Contents
About This Manual
Conventions ................................................................................................................... xv
Related Documentation.................................................................................................. xv
PART I
Signal Processing and Signal Analysis
Chapter 1
Introduction to Digital Signal Processing and Analysis in LabVIEW
The Importance of Data Analysis .................................................................................. 1-1
Sampling Signals ........................................................................................................... 1-2
Aliasing.......................................................................................................................... 1-4
Increasing Sampling Frequency to Avoid Aliasing......................................... 1-6
Anti-Aliasing Filters........................................................................................ 1-7
Converting to Logarithmic Units................................................................................... 1-8
Displaying Results on a Decibel Scale............................................................ 1-9
Chapter 2
Signal Generation
Common Test Signals.................................................................................................... 2-1
Frequency Response Measurements.............................................................................. 2-5
Multitone Generation..................................................................................................... 2-5
Crest Factor ..................................................................................................... 2-6
Phase Generation............................................................................................. 2-6
Swept Sine versus Multitone........................................................................... 2-8
Noise Generation ........................................................................................................... 2-10
Normalized Frequency................................................................................................... 2-12
Wave and Pattern VIs .................................................................................................... 2-14
Phase Control................................................................................................... 2-14
Chapter 3
Digital Filtering
Introduction to Filtering................................................................................................. 3-1
Advantages of Digital Filtering Compared to Analog Filtering...................... 3-1
Common Digital Filters ................................................................................................. 3-2
Impulse Response............................................................................................ 3-2
Classifying Filters by Impulse Response ........................................................ 3-3
Filter Coefficients ............................................................................. 3-4
Characteristics of an Ideal Filter.................................................................................... 3-5
Practical (Nonideal) Filters............................................................................................ 3-6
Transition Band............................................................................................... 3-6
Passband Ripple and Stopband Attenuation ................................................... 3-7
Sampling Rate ............................................................................................................... 3-8
FIR Filters...................................................................................................................... 3-9
Taps................................................................................................................. 3-11
Designing FIR Filters...................................................................................... 3-11
Designing FIR Filters by Windowing .............................................. 3-14
Designing Optimum FIR Filters Using the Parks-McClellan
Algorithm....................................................................................... 3-15
Designing Equiripple FIR Filters Using the Parks-McClellan
Algorithm....................................................................................... 3-16
Designing Narrowband FIR Filters .................................................. 3-16
Designing Wideband FIR Filters ...................................................... 3-19
IIR Filters....................................................................................................................... 3-19
Cascade Form IIR Filtering............................................................................. 3-20
Second-Order Filtering..................................................................... 3-22
Fourth-Order Filtering ...................................................................... 3-23
IIR Filter Types............................................................................................... 3-23
Minimizing Peak Error ..................................................................... 3-24
Butterworth Filters............................................................................ 3-24
Chebyshev Filters ............................................................................. 3-25
Chebyshev II Filters.......................................................................... 3-26
Elliptic Filters ................................................................................... 3-27
Bessel Filters..................................................................................... 3-28
Designing IIR Filters....................................................................................... 3-30
IIR Filter Characteristics in LabVIEW............................................. 3-31
Transient Response........................................................................... 3-32
Comparing FIR and IIR Filters...................................................................................... 3-33
Nonlinear Filters............................................................................................................ 3-33
Example: Analyzing Noisy Pulse with a Median Filter.................................. 3-34
Selecting a Digital Filter Design ................................................................................... 3-35
Chapter 4
Frequency Analysis
Differences between Frequency Domain and Time Domain ........................................ 4-1
Parseval’s Relationship................................................................................... 4-3
Fourier Transform ......................................................................................................... 4-4
Discrete Fourier Transform (DFT) ................................................................................ 4-5
Relationship between N Samples in the Frequency and Time Domains......... 4-5
Example of Calculating DFT........................................................................... 4-6
Magnitude and Phase Information................................................................... 4-8
Frequency Spacing between DFT Samples ................................................................... 4-9
FFT Fundamentals ......................................................................................................... 4-12
Computing Frequency Components ................................................................ 4-13
Fast FFT Sizes ................................................................................................. 4-14
Zero Padding ................................................................................................... 4-14
FFT VI ............................................................................................................. 4-15
Displaying Frequency Information from Transforms.................................................... 4-16
Two-Sided, DC-Centered FFT ...................................................................................... 4-17
Mathematical Representation of a Two-Sided, DC-Centered FFT................. 4-18
Creating a Two-Sided, DC-Centered FFT....................................................... 4-19
Power Spectrum............................................................................................................. 4-22
Converting a Two-Sided Power Spectrum to a Single-Sided
Power Spectrum............................................................................................ 4-23
Loss of Phase Information............................................................................... 4-25
Computations on the Spectrum...................................................................................... 4-25
Estimating Power and Frequency.................................................................... 4-25
Computing Noise Level and Power Spectral Density ..................................... 4-27
Computing the Amplitude and Phase Spectrums .......................................................... 4-28
Calculating Amplitude in Vrms and Phase in Degrees ..................................... 4-29
Frequency Response Function....................................................................................... 4-30
Cross Power Spectrum................................................................................................... 4-31
Frequency Response and Network Analysis ................................................................. 4-31
Frequency Response Function......................................................................... 4-32
Impulse Response Function............................................................................. 4-33
Coherence Function......................................................................................... 4-33
Windowing..................................................................................................................... 4-34
Averaging to Improve the Measurement ....................................................................... 4-35
RMS Averaging............................................................................................... 4-35
Vector Averaging ............................................................................................ 4-36
Peak Hold ........................................................................................................ 4-36
Weighting ........................................................................................................ 4-37
Echo Detection............................................................................................................... 4-37
Chapter 5
Smoothing Windows
Spectral Leakage............................................................................................................ 5-1
Sampling an Integer Number of Cycles .......................................................... 5-2
Sampling a Noninteger Number of Cycles...................................................... 5-3
Windowing Signals........................................................................................................ 5-5
Characteristics of Different Smoothing Windows ........................................................ 5-11
Main Lobe....................................................................................................... 5-12
Side Lobes....................................................................................................... 5-12
Rectangular (None) ......................................................................................... 5-13
Hanning........................................................................................................... 5-14
Hamming......................................................................................................... 5-15
Kaiser-Bessel .................................................................................................. 5-15
Triangle ........................................................................................................... 5-16
Flat Top........................................................................................................... 5-17
Exponential ..................................................................................................... 5-18
Windows for Spectral Analysis versus Windows for Coefficient Design .................... 5-19
Spectral Analysis............................................................................................. 5-19
Windows for FIR Filter Coefficient Design ................................................... 5-21
Choosing the Correct Smoothing Window.................................................................... 5-21
Scaling Smoothing Windows ........................................................................................ 5-23
Chapter 6
Distortion Measurements
Defining Distortion........................................................................................................ 6-1
Application Areas ........................................................................................... 6-2
Harmonic Distortion...................................................................................................... 6-2
THD ................................................................................................................ 6-3
THD + N......................................................................................................... 6-4
SINAD ............................................................................................................ 6-4
Chapter 7
DC/RMS Measurements
What Is the DC Level of a Signal?................................................................................ 7-1
What Is the RMS Level of a Signal?............................................................................. 7-2
Averaging to Improve the Measurement ....................................................................... 7-3
Common Error Sources Affecting DC and RMS Measurements.................................. 7-4
DC Overlapped with Single Tone................................................................... 7-4
Defining the Equivalent Number of Digits ..................................................... 7-5
DC Plus Sine Tone.......................................................................................... 7-5
Windowing to Improve DC Measurements .................................................... 7-6
RMS Measurements Using Windows ............................................................. 7-8
Using Windows with Care .............................................................................. 7-8
Rules for Improving DC and RMS Measurements ....................................................... 7-9
RMS Levels of Specific Tones ....................................................................... 7-9
Chapter 8
Limit Testing
Setting up an Automated Test System........................................................................... 8-1
Specifying a Limit ........................................................................................... 8-1
Specifying a Limit Using a Formula ............................................................... 8-3
Limit Testing ................................................................................................... 8-4
Applications ................................................................................................................... 8-6
Modem Manufacturing Example..................................................................... 8-6
Digital Filter Design Example......................................................................... 8-7
Pulse Mask Testing Example .......................................................................... 8-8
PART II
Mathematics
Chapter 9
Curve Fitting
Introduction to Curve Fitting ......................................................................................... 9-1
Applications of Curve Fitting.......................................................................... 9-2
General LS Linear Fit Theory........................................................................................ 9-3
Polynomial Fit with a Single Predictor Variable ........................................................... 9-6
Curve Fitting in LabVIEW............................................................................................ 9-7
Linear Fit ......................................................................................................... 9-8
Exponential Fit ................................................................................................ 9-8
General Polynomial Fit.................................................................................... 9-8
General LS Linear Fit ...................................................................................... 9-9
Computing Covariance ..................................................................... 9-10
Building the Observation Matrix ...................................................... 9-10
Nonlinear Levenberg-Marquardt Fit ............................................................... 9-11
Chapter 10
Probability and Statistics
Statistics ......................................................................................................................... 10-1
Mean................................................................................................................ 10-3
Median............................................................................................................. 10-3
Sample Variance and Population Variance ..................................................... 10-4
Sample Variance ............................................................................... 10-4
Population Variance.......................................................................... 10-5
Standard Deviation.......................................................................................... 10-5
Mode................................................................................................................ 10-5
Moment about the Mean ................................................................................. 10-5
Skewness .......................................................................................... 10-6
Kurtosis............................................................................................. 10-6
Histogram........................................................................................................ 10-6
Mean Square Error (mse) ................................................................................ 10-7
Root Mean Square (rms) ................................................................................. 10-8
Probability ..................................................................................................................... 10-8
Random Variables........................................................................................... 10-8
Discrete Random Variables .............................................................. 10-9
Continuous Random Variables......................................................... 10-9
Normal Distribution ........................................................................................ 10-10
Computing the One-Sided Probability of a Normally
Distributed Random Variable ........................................................ 10-11
Finding x with a Known p ................................................................ 10-12
Probability Distribution and Density Functions.............................................. 10-12
Chapter 11
Linear Algebra
Linear Systems and Matrix Analysis............................................................................. 11-1
Types of Matrices............................................................................................ 11-1
Determinant of a Matrix.................................................................................. 11-2
Transpose of a Matrix ..................................................................................... 11-3
Linear Independence......................................................................... 11-3
Matrix Rank...................................................................................... 11-4
Magnitude (Norms) of Matrices ..................................................................... 11-5
Determining Singularity (Condition Number) ................................................ 11-7
Basic Matrix Operations and Eigenvalues-Eigenvector Problems................................ 11-8
Dot Product and Outer Product ....................................................................... 11-10
Eigenvalues and Eigenvectors ........................................................................ 11-12
Matrix Inverse and Solving Systems of Linear Equations ............................................ 11-14
Solutions of Systems of Linear Equations ...................................................... 11-14
Matrix Factorization ...................................................................................................... 11-16
Pseudoinverse.................................................................................................. 11-17
Chapter 12
Optimization
Introduction to Optimization ......................................................................................... 12-1
Constraints on the Objective Function............................................................ 12-2
Linear and Nonlinear Programming Problems ............................................... 12-2
Discrete Optimization Problems....................................................... 12-2
Continuous Optimization Problems.................................................. 12-2
Solving Problems Iteratively........................................................................... 12-3
Linear Programming ...................................................................................................... 12-3
Linear Programming Simplex Method............................................................ 12-4
Nonlinear Programming ................................................................................................ 12-4
Impact of Derivative Use on Search Method Selection .................................. 12-5
Line Minimization........................................................................................... 12-5
Local and Global Minima................................................................................ 12-5
Global Minimum............................................................................... 12-6
Local Minimum................................................................................. 12-6
Downhill Simplex Method .............................................................................. 12-6
Golden Section Search Method....................................................................... 12-7
Choosing a New Point x in the Golden Section................................ 12-8
Gradient Search Methods ................................................................................ 12-9
Caveats about Converging to an Optimal Solution........................... 12-10
Terminating Gradient Search Methods ............................................. 12-10
Conjugate Direction Search Methods.............................................................. 12-11
Conjugate Gradient Search Methods............................................................... 12-12
Theorem A ........................................................................................ 12-12
Theorem B......................................................................................... 12-13
Difference between Fletcher-Reeves and Polak-Ribiere .................. 12-14
Chapter 13
Polynomials
General Form of a Polynomial....................................................................................... 13-1
Basic Polynomial Operations......................................................................................... 13-2
Order of Polynomial ........................................................................................ 13-2
Polynomial Evaluation .................................................................................... 13-2
Polynomial Addition ....................................................................................... 13-3
Polynomial Subtraction ................................................................................... 13-3
Polynomial Multiplication............................................................................... 13-3
Polynomial Division........................................................................................ 13-3
Polynomial Composition................................................................................. 13-5
Greatest Common Divisor of Polynomials...................................................... 13-5
Least Common Multiple of Two Polynomials ................................................ 13-6
Derivatives of a Polynomial ............................................................................ 13-7
Integrals of a Polynomial................................................................................. 13-8
Indefinite Integral of a Polynomial ................................................... 13-8
Definite Integral of a Polynomial...................................................... 13-8
Number of Real Roots of a Real Polynomial .................................................. 13-8
Rational Polynomial Function Operations..................................................................... 13-11
Rational Polynomial Function Addition.......................................................... 13-11
Rational Polynomial Function Subtraction ..................................................... 13-11
Rational Polynomial Function Multiplication................................................. 13-12
Rational Polynomial Function Division .......................................................... 13-12
Negative Feedback with a Rational Polynomial Function.............................. 13-12
Positive Feedback with a Rational Polynomial Function ............................... 13-12
Derivative of a Rational Polynomial Function ............................................... 13-13
Partial Fraction Expansion.............................................................................. 13-13
Heaviside Cover-Up Method............................................................ 13-14
Orthogonal Polynomials................................................................................................ 13-15
Chebyshev Orthogonal Polynomials of the First Kind ................................... 13-15
Chebyshev Orthogonal Polynomials of the Second Kind............................... 13-16
Gegenbauer Orthogonal Polynomials ............................................................. 13-16
Hermite Orthogonal Polynomials ................................................................... 13-17
Laguerre Orthogonal Polynomials .................................................................. 13-17
Associated Laguerre Orthogonal Polynomials ............................................... 13-18
Legendre Orthogonal Polynomials ................................................................. 13-18
Evaluating a Polynomial with a Matrix......................................................................... 13-19
Polynomial Eigenvalues and Vectors ............................................................. 13-20
Entering Polynomials in LabVIEW............................................................................... 13-22
PART III
Point-By-Point Analysis
Chapter 14
Point-By-Point Analysis
Introduction to Point-By-Point Analysis ....................................................................... 14-1
Using the Point By Point VIs ........................................................................................ 14-2
Initializing Point By Point VIs........................................................................ 14-2
Purpose of Initialization in Point By Point VIs ................................ 14-2
Using the First Call? Function.......................................................... 14-3
Error Checking and Initialization ..................................................... 14-3
Frequently Asked Questions.......................................................................................... 14-5
What Are the Differences between Point-By-Point Analysis
and Array-Based Analysis in LabVIEW? .................................................... 14-5
Why Use Point-By-Point Analysis?................................................................ 14-6
What Is New about Point-By-Point Analysis?................................................ 14-7
What Is Familiar about Point-By-Point Analysis?.......................................... 14-7
How Is It Possible to Perform Analysis without Buffers of Data? ................. 14-7
Why Is Point-By-Point Analysis Effective in Real-Time Applications?........ 14-8
Do I Need Point-By-Point Analysis? .............................................................. 14-8
What Is the Long-Term Importance of Point-By-Point Analysis? ................. 14-9
Case Study of Point-By-Point Analysis ........................................................................ 14-9
Point-By-Point Analysis of Train Wheels ...................................................... 14-9
Overview of the LabVIEW Point-By-Point Solution ..................................... 14-11
Characteristics of a Train Wheel Waveform................................................... 14-12
Analysis Stages of the Train Wheel PtByPt VI............................................... 14-13
DAQ Stage ........................................................................................ 14-13
Filter Stage ........................................................................................ 14-13
Analysis Stage................................................................................... 14-14
Events Stage...................................................................................... 14-15
Report Stage...................................................................................... 14-15
Conclusion....................................................................................................... 14-16
Appendix A
References
Appendix B
Technical Support and Professional Services
About This Manual
This manual describes analysis and mathematical concepts in LabVIEW.
The information in this manual directly relates to how you can use
LabVIEW to perform analysis and measurement operations.
Conventions
This manual uses the following conventions:
» The » symbol leads you through nested menu items and dialog box options
to a final action. The sequence File»Page Setup»Options directs you to
pull down the File menu, select the Page Setup item, and select Options
from the last dialog box.
This icon denotes a note, which alerts you to important information.
bold Bold text denotes items that you must select or click in the software, such
as menu items and dialog box options. Bold text also denotes parameter
names.
italic Italic text denotes variables, emphasis, a cross reference, or an introduction
to a key concept. This font also denotes text that is a placeholder for a word
or value that you must supply.
monospace Text in this font denotes text or characters that you should enter from the
keyboard, sections of code, programming examples, and syntax examples.
This font is also used for the proper names of disk drives, paths, directories,
programs, subprograms, subroutines, device names, functions, operations,
variables, filenames, and extensions.
Related Documentation
The following documents contain information that you might find helpful
as you read this manual:
• LabVIEW Measurements Manual
• The Fundamentals of FFT-Based Signal Analysis and Measurement in
LabVIEW and LabWindows/CVI Application Note, available on
the National Instruments Web site at ni.com/info, where you enter
the info code rdlv04
• LabVIEW Help, available by selecting Help»VI, Function,
& How-To Help
• LabVIEW User Manual
• Getting Started with LabVIEW
• On the Use of Windows for Harmonic Analysis with the Discrete
Fourier Transform (Proceedings of the IEEE, Volume 66, No. 1,
January 1978)
Part I
Signal Processing and Signal Analysis
This part describes signal processing and signal analysis concepts.
• Chapter 1, Introduction to Digital Signal Processing and Analysis in
LabVIEW, provides a background in basic digital signal processing
and an introduction to signal processing and measurement VIs in
LabVIEW.
• Chapter 2, Signal Generation, describes the fundamentals of signal
generation.
• Chapter 3, Digital Filtering, introduces the concept of filtering,
compares analog and digital filters, describes finite impulse response
(FIR) and infinite impulse response (IIR) filters, and describes how to
choose the appropriate digital filter for a particular application.
• Chapter 4, Frequency Analysis, describes the fundamentals of the
discrete Fourier transform (DFT), the fast Fourier transform (FFT),
basic signal analysis computations, computations performed on the
power spectrum, and how to use FFT-based functions for network
measurement.
• Chapter 5, Smoothing Windows, describes spectral leakage, how to use
smoothing windows to decrease spectral leakage, the different types of
smoothing windows, how to choose the correct type of smoothing
window, the differences between smoothing windows used for spectral
analysis and smoothing windows used for filter coefficient design, and
the importance of scaling smoothing windows.
• Chapter 6, Distortion Measurements, describes harmonic distortion,
total harmonic distortion (THD), signal in noise and distortion (SINAD),
and when to use distortion measurements.
• Chapter 7, DC/RMS Measurements, introduces measurement analysis
techniques for making DC and RMS measurements of a signal.
• Chapter 8, Limit Testing, provides information about setting up an
automated system for performing limit testing, specifying limits,
and applications for limit testing.
Chapter 1
Introduction to Digital Signal Processing and Analysis in LabVIEW
Digital signals are everywhere in the world around us. Telephone
companies use digital signals to represent the human voice. Radio,
television, and hi-fi sound systems are all gradually converting to the
digital domain because of its superior fidelity, noise reduction, and signal
processing flexibility. Data is transmitted from satellites to earth ground
stations in digital form. NASA pictures of distant planets and outer space
are often processed digitally to remove noise and extract useful
information. Economic data, census results, and stock market prices are all
available in digital form. Because of the many advantages of digital signal
processing, analog signals also are converted to digital form before they are
processed with a computer.
This chapter provides a background in basic digital signal processing and
an introduction to signal processing and measurement VIs in LabVIEW.
The Importance of Data Analysis
The importance of integrating analysis libraries into engineering stations is
that the raw data, as shown in Figure 1-1, does not always immediately
convey useful information. Often, you must transform the signal, remove
noise disturbances, correct for data corrupted by faulty equipment, or
compensate for environmental effects, such as temperature and humidity.

Figure 1-1. Raw Data
By analyzing and processing the digital data, you can extract the useful
information from the noise and present it in a form more comprehensible
than the raw data, as shown in Figure 1-2.

Figure 1-2. Processed Data
The LabVIEW block diagram programming approach and the extensive
set of LabVIEW signal processing and measurement VIs simplify the
development of analysis applications.
Sampling Signals
Measuring the frequency content of a signal requires digitizing a
continuous signal. To use digital signal processing techniques, you must
first convert an analog signal into its digital representation. In practice, the
conversion is implemented by using an analog-to-digital (A/D) converter.
Consider an analog signal x(t) that is sampled every ∆t seconds. The time
interval ∆t is the sampling interval or sampling period. Its reciprocal, 1/∆t,
is the sampling frequency, with units of samples/second. Each of the
discrete values of x(t) at t = 0, ∆t, 2∆t, 3∆t, and so on, is a sample.
Thus, x(0), x(∆t), x(2∆t), …, are all samples. The signal x(t) thus can be
represented by the following discrete set of samples.
{x(0), x(∆t), x(2∆t), x(3∆t), …, x(k∆t), …}
Figure 1-3 shows an analog signal and its corresponding sampled version.
The sampling interval is ∆t. The samples are defined at discrete points in
time.
Figure 1-3. Analog Signal and Corresponding Sampled Version
The following notation represents the individual samples.
x[i] = x(i∆t)
for
i = 0, 1, 2, …
If N samples are obtained from the signal x(t), then you can represent x(t)
by the following sequence.
X = {x[0], x[1], x[2], x[3], …, x[N–1]}
The preceding sequence representing x(t) is the digital representation, or
the sampled version, of x(t). The sequence X = {x[i]} is indexed on the
integer variable i and does not contain any information about the sampling
rate. So knowing only the values of the samples contained in X gives you
no information about the sampling frequency.
One of the most important parameters of an analog input system is
the frequency at which the DAQ device samples an incoming signal.
The sampling frequency determines how often an A/D conversion takes
place. Sampling a signal too slowly can result in an aliased signal.
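The relationship x[i] = x(i∆t) can be sketched in a few lines of ordinary code. The following Python fragment is an illustrative sketch, not a LabVIEW VI; the function name sample_signal is invented for this example:

```python
import math

def sample_signal(x, fs, n_samples):
    """Sample a continuous-time signal x(t) at sampling frequency fs.

    Returns the sequence X = {x[0], x[1], ..., x[N-1]}, where
    x[i] = x(i * dt) and dt = 1 / fs is the sampling interval.
    """
    dt = 1.0 / fs  # sampling interval, in seconds
    return [x(i * dt) for i in range(n_samples)]

# Sample a 10 Hz sine wave at 1000 samples/second.
samples = sample_signal(lambda t: math.sin(2 * math.pi * 10.0 * t), 1000.0, 100)
print(samples[0])  # x(0) = 0.0
```

Note that the returned sequence, like X above, carries no information about the sampling frequency; you must track ∆t separately to recover the time values.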
Aliasing
An aliased signal provides a poor representation of the analog signal.
Aliasing causes a false lower frequency component to appear in the
sampled data of a signal. Figure 1-4 shows an adequately sampled signal
and an undersampled signal.

Figure 1-4. Aliasing Effects of an Improper Sampling Rate

In Figure 1-4, the undersampled signal appears to have a lower frequency
than the actual signal: three cycles instead of ten.
Increasing the sampling frequency increases the number of data points
acquired in a given time period. Often, a fast sampling frequency provides
a better representation of the original signal than a slower sampling rate.
For a given sampling frequency, the maximum frequency you can
accurately represent without aliasing is the Nyquist frequency. The Nyquist
frequency equals one-half the sampling frequency, as shown by the
following equation.

fN = fs / 2

where fN is the Nyquist frequency and fs is the sampling frequency.

Signals with frequency components above the Nyquist frequency appear
aliased between DC and the Nyquist frequency. In an aliased signal,
frequency components actually above the Nyquist frequency appear as
frequency components below the Nyquist frequency. For example, a
component at frequency fN < f0 < fs appears as the frequency fs – f0.
Figures 1-5 and 1-6 illustrate the aliasing phenomenon. Figure 1-5 shows
the frequencies contained in an input signal acquired at a sampling
frequency, fs, of 100 Hz.

Figure 1-5. Actual Signal Frequency Components

Figure 1-6 shows the frequency components and the aliases for the input
signal from Figure 1-5.

Figure 1-6. Signal Frequency Components and Aliases

In Figure 1-6, frequencies below the Nyquist frequency of fs/2 = 50 Hz are
sampled correctly. For example, F1 appears at the correct frequency.
Frequencies above the Nyquist frequency appear as aliases. For example,
aliases for F2, F3, and F4 appear at 30 Hz, 40 Hz, and 10 Hz, respectively.
(In Figures 1-5 and 1-6, the signal components are F1 = 25 Hz,
F2 = 70 Hz, F3 = 160 Hz, and F4 = 510 Hz, with fs = 100 Hz and a
Nyquist frequency of 50 Hz. Solid arrows mark actual frequencies;
dashed arrows mark the aliases at 10 Hz, 30 Hz, and 40 Hz.)
The alias frequency equals the absolute value of the difference between the
closest integer multiple of the sampling frequency and the input frequency,
as shown in the following equation.

AF = |CIMSF – IF|

where AF is the alias frequency, CIMSF is the closest integer multiple of
the sampling frequency, and IF is the input frequency. For example, you
can compute the alias frequencies for F2, F3, and F4 from Figure 1-6 with
the following equations.

Alias F2 = |100 – 70| = 30 Hz
Alias F3 = |2(100) – 160| = 40 Hz
Alias F4 = |5(100) – 510| = 10 Hz

Increasing Sampling Frequency to Avoid Aliasing
According to the Shannon Sampling Theorem, use a sampling frequency
at least twice the maximum frequency component in the sampled signal
to avoid aliasing. Figure 1-7 shows the effects of various sampling
frequencies.

Figure 1-7. Effects of Sampling at Different Rates
A. 1 sample/1 cycle  B. 7 samples/4 cycles  C. 2 samples/cycle  D. 10 samples/cycle
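The alias-frequency rule can be sketched in a few lines of ordinary code. The following Python fragment is illustrative only (alias_frequency is a name invented for this example, not a LabVIEW VI) and reproduces the 30 Hz, 40 Hz, and 10 Hz results above:

```python
def alias_frequency(input_freq, fs):
    """Compute the apparent (alias) frequency of a sampled tone.

    AF = |CIMSF - IF|, where CIMSF is the closest integer multiple
    of the sampling frequency fs to the input frequency IF.
    """
    cimsf = round(input_freq / fs) * fs
    return abs(cimsf - input_freq)

fs = 100.0  # sampling frequency from Figure 1-6
for f in (25.0, 70.0, 160.0, 510.0):  # F1 through F4
    print(f, "Hz ->", alias_frequency(f, fs), "Hz")
```

F1 (25 Hz) lies below the Nyquist frequency, so its closest multiple of fs is zero and the formula returns the frequency unchanged; F2, F3, and F4 alias to 30 Hz, 40 Hz, and 10 Hz.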
In case A of Figure 1-7, the sampling frequency fs equals the frequency f
of the sine wave. fs is measured in samples/second. f is measured in
cycles/second. Therefore, in case A, one sample per cycle is acquired. The
reconstructed waveform appears as an alias at DC.

In case B of Figure 1-7, fs = 7f/4, or 7 samples/4 cycles. In case B,
increasing the sampling rate increases the frequency of the waveform.
However, the signal aliases to a frequency less than the original
signal: three cycles instead of four.

In case C of Figure 1-7, increasing the sampling rate to fs = 2f results in the
digitized waveform having the correct frequency, or the same number of
cycles as the original signal. In case C, the reconstructed waveform more
accurately represents the original sinusoidal wave than case A or case B.
By increasing the sampling rate to well above f, for example,
fs = 10f = 10 samples/cycle, you can accurately reproduce the waveform.
Case D of Figure 1-7 shows the result of increasing the sampling rate to
fs = 10f.
Anti-Aliasing Filters
In the digital domain, you cannot distinguish alias frequencies from the
frequencies that actually lie between 0 and the Nyquist frequency. Even
with a sampling frequency greater than twice the highest frequency of
interest, pickup from stray signals, such as signals from power lines or
local radio stations, can contain frequencies higher than the Nyquist
frequency. Frequency components of stray signals above the Nyquist
frequency might alias into the desired frequency range of a test signal and
cause erroneous results. Therefore, you need to remove alias frequencies
from an analog signal before the signal reaches the A/D converter.

Use an anti-aliasing analog lowpass filter before the A/D converter to
remove alias frequencies higher than the Nyquist frequency. A lowpass
filter allows low frequencies to pass but attenuates high frequencies.
By attenuating the frequencies higher than the Nyquist frequency, the
anti-aliasing analog lowpass filter prevents the sampling of aliasing
components. An anti-aliasing analog lowpass filter should exhibit a flat
passband frequency response with good high-frequency alias rejection
and a fast roll-off in the transition band. Because you apply the
anti-aliasing filter to the analog signal before it is converted to a digital
signal, the anti-aliasing filter is an analog filter.
Figure 1-8 shows both an ideal anti-alias filter and a practical anti-alias
filter. The following information applies to Figure 1-8:
• f1 is the maximum input frequency.
• Frequencies less than f1 are desired frequencies.
• Frequencies greater than f1 are undesired frequencies.
Figure 1-8. Ideal versus Practical Anti-Alias Filter

An ideal anti-alias filter, shown in Figure 1-8a, passes all the desired input
frequencies and cuts off all the undesired frequencies. However, an ideal
anti-alias filter is not physically realizable.

Figure 1-8b illustrates actual anti-alias filter behavior. Practical anti-alias
filters pass all frequencies less than f1 and cut off all frequencies greater
than f2. The region between f1 and f2 is the transition band, which contains
a gradual attenuation of the input frequencies. Although you want to pass
only signals with frequencies less than f1, the signals in the transition band
might still cause aliasing. Therefore, in practice, use a sampling frequency
greater than two times the highest frequency in the transition band, which
means fs might be more than 2f1.
Converting to Logarithmic Units
On some instruments, you can display amplitude on either a linear scale or
a decibel (dB) scale. The linear scale shows the amplitudes as they are. The
decibel is a unit of ratio. The decibel scale is a transformation of the linear
scale into a logarithmic scale.
The following equations define the decibel. Equation 1-1 defines the
decibel in terms of power. Equation 1-2 defines the decibel in terms of
amplitude.

dB = 10 log10(P/Pr)                                              (1-1)

where P is the measured power, Pr is the reference power, and P/Pr is the
power ratio.

dB = 20 log10(A/Ar)                                              (1-2)

where A is the measured amplitude, Ar is the reference amplitude, and
A/Ar is the voltage ratio.

Equations 1-1 and 1-2 require a reference value to measure power and
amplitude in decibels. The reference value serves as the 0 dB level. Several
conventions exist for specifying a reference value. You can use the
following common conventions to specify a reference value for calculating
decibels:
• Use the reference one volt-rms squared ((1 Vrms)²) for power, which
yields the unit of measure dBVrms.
• Use the reference one volt-rms (1 Vrms) for amplitude, which yields the
unit of measure dBV.
• Use the reference 1 mW into a load of 50 Ω for radio frequencies,
where 0 dB is 0.22 Vrms, which yields the unit of measure dBm.
• Use the reference 1 mW into a load of 600 Ω for audio frequencies,
where 0 dB is 0.78 Vrms, which yields the unit of measure dBm.

Whether you use the amplitude of a signal or its square, the power, the
resulting decibel level is exactly the same, because multiplying the decibel
ratio by two is equivalent to squaring the underlying ratio. Therefore,
you obtain the same decibel level and display regardless of whether you
use the amplitude or power spectrum.

Displaying Results on a Decibel Scale
Amplitude or power spectra usually are displayed on a decibel scale.
Displaying amplitude or power spectra on a decibel scale allows you to
view wide dynamic ranges and to see small signal components in the
presence of large ones. For example, suppose you want to display a signal
containing amplitudes from a minimum of 0.1 V to a maximum of 100 V
on a device with a display height of 10 cm. Using a linear scale, if the
device requires the entire display height to display the 100 V amplitude, the
device displays 10 V of amplitude per centimeter of height. If the device
displays 10 V/cm, displaying the 0.1 V amplitude of the signal requires a
height of only 0.1 mm. Because a height of 0.1 mm is barely visible on the
display screen, you might overlook the 0.1 V amplitude component of the
signal. Using a logarithmic scale in decibels allows you to see the 0.1 V
amplitude component of the signal.
Table 1-1 shows the relationship between the decibel and the power and
voltage ratios. It also shows how you can compress a wide range of
amplitudes into a small set of numbers by using the logarithmic decibel
scale.

Table 1-1. Decibels and Power and Voltage Ratio Relationship

 dB      Power Ratio    Amplitude Ratio
 +40     10,000         100
 +20     100            10
 +6      4              2
 +3      2              1.4
 0       1              1
 –3      1/2            1/1.4
 –6      1/4            1/2
 –20     1/100          1/10
 –40     1/10,000       1/100
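You can reproduce Equations 1-1 and 1-2, and the equivalence between the amplitude and power forms, with a few lines of ordinary code. The following Python fragment is an illustrative sketch; the function names power_db and amplitude_db are invented for this example:

```python
import math

def power_db(p, p_ref=1.0):
    """Decibels from a power ratio: dB = 10 * log10(P / Pr)."""
    return 10.0 * math.log10(p / p_ref)

def amplitude_db(a, a_ref=1.0):
    """Decibels from an amplitude ratio: dB = 20 * log10(A / Ar)."""
    return 20.0 * math.log10(a / a_ref)

# A squared amplitude ratio gives the same level as the power ratio,
# because doubling the decibel factor squares the underlying ratio.
print(power_db(100.0))      # 20.0 (power ratio 100)
print(amplitude_db(10.0))   # 20.0 (amplitude ratio 10)
print(amplitude_db(0.1))    # -20.0: the 0.1 V component stays visible
```

On a decibel scale the 0.1 V to 100 V range of the display example becomes a compact –20 dBV to +40 dBV span.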
Chapter 2
Signal Generation
The generation of signals is an important part of any test or measurement
system. The following applications are examples of uses for signal
generation:
• Simulate signals to test your algorithm when realworld signals are
not available, for example, when you do not have a DAQ device for
obtaining realworld signals or when access to realworld signals is not
possible.
• Generate signals to apply to a digitaltoanalog (D/A) converter.
This chapter describes the fundamentals of signal generation.
Common Test Signals
Common test signals include the sine wave, the square wave, the triangle
wave, the sawtooth wave, several types of noise waveforms, and multitone
signals consisting of a superposition of sine waves.
The most common signal for audio testing is the sine wave. A single sine
wave is often used to determine the amount of harmonic distortion
introduced by a system. Multiple sine waves are widely used to measure
the intermodulation distortion or to determine the frequency response.
Table 2-1 lists the signals used for some typical measurements.

Table 2-1. Typical Measurements and Signals

 Measurement                                    Signal
 Total harmonic distortion                      Sine wave
 Intermodulation distortion                     Multitone (two sine waves)
 Frequency response                             Multitone (many sine waves,
                                                impulse, chirp), broadband noise
 Interpolation                                  Sinc
 Rise time, fall time, overshoot, undershoot    Pulse
 Jitter                                         Square wave

These signals form the basis for many tests and are used to measure the
response of a system to a particular stimulus. Some of the common test
signals available in most signal generators are shown in Figure 2-1 and
Figure 2-2.
Figure 2-1. Common Test Signals
1. Sine Wave  2. Square Wave  3. Triangle Wave  4. Sawtooth Wave  5. Ramp  6. Impulse
Figure 2-2. More Common Test Signals
The most useful way to view the common test signals is in terms of their
frequency content. The common test signals have the following frequency
content characteristics:
• Sine waves have a single frequency component.
• Square waves consist of the superposition of many sine waves at odd
harmonics of the fundamental frequency. The amplitude of each
harmonic is inversely proportional to its frequency.
• Triangle and sawtooth waves have harmonic components that are
multiples of the fundamental frequency.
• An impulse contains all frequencies that can be represented for a given
sampling rate and number of samples.
• Chirp signals are sinusoids swept from a start frequency to a stop
frequency, thus generating energy across a given frequency range.
Chirp patterns have discrete frequencies that lie within a certain range.
The discrete frequencies of chirp patterns depend on the sampling rate,
the start and end frequencies, and the number of samples.
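The square-wave characterization above can be checked numerically by summing odd harmonics whose amplitudes fall off as 1/k. The following Python fragment is an illustrative sketch, not a LabVIEW VI; the function name is invented for this example:

```python
import math

def square_wave_partial_sum(t, f0, n_harmonics):
    """Approximate a unit square wave by summing its odd harmonics.

    Each odd harmonic k*f0 (k = 1, 3, 5, ...) has amplitude 1/k,
    inversely proportional to its frequency, scaled overall by 4/pi.
    """
    total = 0.0
    for k in range(1, 2 * n_harmonics, 2):  # k = 1, 3, 5, ...
        total += math.sin(2 * math.pi * k * f0 * t) / k
    return 4.0 / math.pi * total

# With many harmonics the sum approaches +1 over the first half cycle
# and -1 over the second half cycle.
print(square_wave_partial_sum(0.25, 1.0, 200))  # close to 1.0
```

Truncating the sum after a few harmonics leaves visible ripple near the transitions, which is why the higher harmonics matter even though their amplitudes are small.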
(Figure 2-2 panels: 7. Sinc  8. Pulse  9. Chirp)
Frequency Response Measurements
To achieve a good frequency response measurement, the frequency range
of interest must contain a significant amount of stimulus energy. Two
common signals used for frequency response measurements are the chirp
signal and a broadband noise signal, such as white noise. Refer to the
Common Test Signals section of this chapter for information about chirp
signals. Refer to the Noise Generation section of this chapter for
information about white noise.
It is best not to use windows when analyzing frequency response signals.
If you generate a chirp stimulus signal at the same rate you acquire the
response, you can match the acquisition frame size to the length of the
chirp. No window is generally the best choice for a broadband signal
source. Because some stimulus signals are not constant in frequency across
the time record, applying a window might obscure important portions of the
transient response.
Multitone Generation
Except for the sine wave, the common test signals do not allow full control
over their spectral content. For example, the harmonic components of a
square wave are fixed in frequency, phase, and amplitude relative to the
fundamental. However, you can generate multitone signals with a specific
amplitude and phase for each individual frequency component.
A multitone signal is the superposition of several sine waves, or tones, each
with a distinct amplitude, phase, and frequency. A multitone signal is
typically created so that an integer number of cycles of each individual tone
is contained in the signal. If an FFT of the entire multitone signal is
computed, each of the tones falls exactly onto a single frequency bin, which
means no spectral spread or leakage occurs.
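As a sketch of this idea (in pure Python rather than LabVIEW; the function name `multitone` and its parameters are illustrative, not an API from this manual), the following fragment builds a signal in which each tone completes an integer number of cycles per block, so every tone lands exactly on one DFT bin:

```python
import math

def multitone(n_samples, cycles, amplitudes, phases):
    """Superpose sine tones; each entry in `cycles` is an integer number
    of cycles per block, so each tone falls on a single DFT bin."""
    signal = []
    for i in range(n_samples):
        t = i / n_samples  # normalized time within the block
        signal.append(sum(a * math.sin(2 * math.pi * c * t + p)
                          for c, a, p in zip(cycles, amplitudes, phases)))
    return signal

# Three tones at 3, 7, and 11 cycles per 512-sample block
sig = multitone(512, [3, 7, 11], [1.0, 0.5, 0.25], [0.0, 0.0, 0.0])
```

Because every tone is integral-cycle, a DFT of `sig` shows energy only in bins 3, 7, and 11, with no leakage into neighboring bins.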
Multitone signals are a part of many test specifications and allow the fast
and efficient stimulus of a system across an arbitrary band of frequencies.
Multitone test signals are used to determine the frequency response of a
device and, with appropriate selection of frequencies, also can be used to
measure such quantities as intermodulation distortion.
Crest Factor
The relative phases of the constituent tones with respect to each other
determine the crest factor of a multitone signal with specified amplitude.
The crest factor is defined as the ratio of the peak magnitude to the RMS
value of the signal. For example, a sine wave has a crest factor of 1.414:1.
For the same maximum amplitude, a multitone signal with a large crest
factor contains less energy than one with a smaller crest factor. In other
words, a large crest factor means that the amplitude of a given component
sine tone is lower than the same sine tone in a multitone signal with a
smaller crest factor. A higher crest factor results in individual sine tones
with lower signal-to-noise ratios. Therefore, proper selection of phases is
critical to generating a useful multitone signal.
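The definition above translates directly into code. This sketch (pure Python; `crest_factor` is an illustrative name, not a LabVIEW VI) computes the ratio of peak magnitude to RMS value and confirms the sine-wave figure of 1.414:

```python
import math

def crest_factor(signal):
    """Ratio of peak magnitude to RMS value."""
    peak = max(abs(s) for s in signal)
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    return peak / rms

# One full cycle of a sine wave sampled at 1,000 points
sine = [math.sin(2 * math.pi * i / 1000) for i in range(1000)]
print(round(crest_factor(sine), 3))  # 1.414, that is, sqrt(2)
```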
To avoid clipping, the maximum value of the multitone signal should not
exceed the maximum capability of the hardware that generates the signal,
which means a limit is placed on the maximum amplitude of the signal.
You can generate a multitone signal with a specific amplitude by using
different combinations of the phase relationships and amplitudes of the
constituent sine tones. A good approach to generating a signal is to choose
amplitudes and phases that result in a lower crest factor.
Phase Generation
The following schemes are used to generate tone phases of multitone
signals:
• Varying the phase difference between adjacent frequency tones
linearly from 0 to 360 degrees
• Varying the tone phases randomly
Varying the phase difference between adjacent frequency tones linearly
from 0 to 360 degrees allows the creation of multitone signals with very low
crest factors. However, the resulting multitone signals have the following
potentially undesirable characteristics:
• The multitone signal is very sensitive to phase distortion. If in the
course of generating the multitone signal the hardware or signal path
induces nonlinear phase distortion, the crest factor might vary
considerably.
• The multitone signal might display some repetitive time-domain
characteristics, as shown in the multitone signal in Figure 2-3.
Figure 2-3. Multitone Signal with Linearly Varying Phase Difference
between Adjacent Tones
The signal in Figure 2-3 resembles a chirp signal in that its frequency
appears to decrease from left to right. This apparent decrease in frequency
is characteristic of multitone signals generated by linearly varying the
phase difference between adjacent frequency tones. A signal that is more
noise-like than the signal in Figure 2-3 often is more desirable.
Varying the tone phases randomly results in a multitone signal whose
amplitudes are nearly Gaussian in distribution as the number of tones
increases. Figure 2-4 illustrates a signal created by varying the tone phases
randomly.
Figure 2-4. Multitone Signal with Random Phase Difference between Adjacent Tones
In addition to being more noise-like, the signal in Figure 2-4 also is much
less sensitive to phase distortion. Multitone signals with the sort of phase
relationship shown in Figure 2-4 generally achieve a crest factor between
10 dB and 11 dB.
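A small numerical experiment illustrates how strongly the phase scheme affects the crest factor. The sketch below is pure Python with illustrative names; the quadratic (Schroeder) phase rule used here is one common realization of the linearly varying phase-difference scheme:

```python
import math

def multitone_cf(phases):
    """Crest factor of an equal-amplitude multitone with tones at
    1..len(phases) cycles per 1,024-sample block."""
    n = 1024
    sig = [sum(math.cos(2 * math.pi * (k + 1) * i / n + phases[k])
               for k in range(len(phases))) for i in range(n)]
    peak = max(abs(s) for s in sig)
    rms = math.sqrt(sum(s * s for s in sig) / n)
    return peak / rms

K = 32
aligned = [0.0] * K                                         # every tone peaks together
schroeder = [-math.pi * k * (k + 1) / K for k in range(K)]  # quadratic phases
# Aligned phases give a crest factor of 8 for these 32 tones; the quadratic
# phases produce a chirp-like signal with a much lower crest factor.
```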
Swept Sine versus Multitone
To characterize a system, you often must measure the response of the
system at many different frequencies. You can use the following methods
to measure the response of a system at many different frequencies:
• Swept sine continuously and smoothly changes the frequency of a sine
wave across a range of frequencies.
• Stepped sine provides a single sine tone of fixed frequency as the
stimulus for a certain time and then increments the frequency by a
discrete amount. The process continues until all the frequencies
of interest have been reached.
• Multitone provides a signal composed of multiple sine tones.
A multitone signal has significant advantages over the swept sine and
stepped sine approaches. For a given range of frequencies, the multitone
approach can be much faster than the equivalent swept sine measurement,
due mainly to settling time issues. For each sine tone in a stepped sine
measurement, you must wait for the settling time of the system to end
before starting the measurement.
The settling time issue for a swept sine can be even more complex. If the
system has low-frequency poles and/or zeroes or high-Q resonances, the
system might take a relatively long time to settle. For a multitone signal,
you must wait only once for the settling time. A multitone signal lasting
one period of the lowest frequency (more precisely, one period of the finest
required frequency resolution) is enough to cover the settling time. After
the response to the multitone signal is acquired, the processing can be very
fast. You can use a single fast Fourier transform (FFT) to measure many
frequency points, amplitude and phase, simultaneously.
The swept sine approach is more appropriate than the multitone approach
in certain situations. Each measured tone within a multitone signal is more
sensitive to noise because the energy of each tone is lower than that in a
single pure tone. For example, consider a single sine tone of amplitude
10 V peak and frequency 100 Hz. A multitone signal containing 10 tones,
including the 100 Hz tone, might have a maximum amplitude of 10 V.
However, the 100 Hz tone component has an amplitude somewhat less than
10 V. The lower amplitude of the 100 Hz tone component is due to the way
that all the sine tones sum. Assuming the same level of noise, the
signal-to-noise ratio (SNR) of the 100 Hz component is better for the case
of the swept sine approach.
reduced SNR by adjusting the amplitudes and phases of the tones, applying
higher energy where needed, and applying lower energy at less critical
frequencies.
When viewing the response of a system to a multitone stimulus, any energy
between FFT bins is due to noise or unit-under-test (UUT) induced
distortion. The frequency resolution of the FFT is limited by your
measurement time. If you want to measure your system at 1.000 kHz and
1.001 kHz, using two independent sine tones is the best approach. Using
two independent sine tones, you can perform the measurement in a few
milliseconds, while a multitone measurement requires at least one second.
The difference in measurement speed occurs because you must wait long
enough to obtain the required number of samples to achieve a frequency
resolution of 1 Hz. Some applications, such as finding the resonant
frequency of a crystal, combine a multitone measurement for coarse
measurement and a narrow-range sweep for fine measurement.
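The one-second figure follows from the reciprocal relationship between frequency resolution and acquisition time. A minimal sketch (pure Python; the function names are illustrative):

```python
def min_measurement_time(delta_f_hz):
    """FFT bin spacing is fs / n_samples = 1 / T, so resolving tones
    delta_f apart requires acquiring for at least 1 / delta_f seconds."""
    return 1.0 / delta_f_hz

def samples_needed(delta_f_hz, fs_hz):
    """Number of samples for that resolution at sampling rate fs."""
    return int(fs_hz * min_measurement_time(delta_f_hz))

# Separating 1.000 kHz from 1.001 kHz (1 Hz apart) at fs = 10 kHz:
print(min_measurement_time(1.0))   # 1.0 second of data
print(samples_needed(1.0, 10000))  # 10000 samples
```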
Noise Generation
You can use noise signals to perform frequency response measurements
or to simulate certain processes. Several types of noise are typically used,
namely uniform white noise, Gaussian white noise, and periodic random
noise.
The term white in the definition of noise refers to the frequency domain
characteristic of noise. Ideal white noise has equal power per unit
bandwidth, resulting in a flat power spectral density across the frequency
range of interest. Thus, the power in the frequency range from 100 Hz to
110 Hz is the same as the power in the frequency range from 1,000 Hz to
1,010 Hz. In practical measurements, achieving a flat power spectral
density requires an infinite number of samples. Thus, when making
measurements of white noise, the power spectra are usually averaged, with
more averages resulting in a flatter power spectrum.
The terms uniform and Gaussian refer to the probability density function
(PDF) of the amplitudes of the time-domain samples of the noise. For
uniform white noise, the PDF of the amplitudes of the time-domain samples
is uniform within the specified maximum and minimum levels. In other
words, all amplitude values between some limits are equally likely or
probable. (Thermal noise produced in active components is actually closer
to Gaussian than to uniform in distribution.) Figure 2-5 shows the
distribution of the samples of uniform white noise.
Figure 2-5. Uniform White Noise
For Gaussian white noise, the PDF of the amplitudes of the time-domain
samples is Gaussian. If uniform white noise is passed through a linear
system whose output combines many input samples, the resulting output
approaches Gaussian white noise, by the central limit theorem. Figure 2-6
shows the distribution of the samples of Gaussian white noise.
Figure 2-6. Gaussian White Noise
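For simulation purposes, both kinds of noise can be generated directly from a pseudorandom source. A sketch in pure Python (the stdlib `random` module stands in for the LabVIEW noise VIs; the function names are illustrative):

```python
import random

random.seed(0)  # reproducible noise for simulation

def uniform_white(n, amplitude=1.0):
    """Samples equally likely anywhere in [-amplitude, +amplitude]."""
    return [random.uniform(-amplitude, amplitude) for _ in range(n)]

def gaussian_white(n, sigma=1.0):
    """Samples drawn from a zero-mean Gaussian with standard deviation sigma."""
    return [random.gauss(0.0, sigma) for _ in range(n)]

u = uniform_white(10000)
g = gaussian_white(10000)
# u is strictly bounded; g is unbounded but concentrated near zero.
```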
Periodic random noise (PRN) is a summation of sinusoidal signals with
the same amplitudes but with random phases. PRN consists of all sine
waves with frequencies that can be represented with an integral number
of cycles in the requested number of samples. Because PRN contains only
integral-cycle sinusoids, you do not need to window PRN before
performing spectral analysis. PRN is self-windowing and therefore has no
spectral leakage.
PRN does not have energy at all frequencies as white noise does but has
energy only at discrete frequencies that correspond to harmonics of a
fundamental frequency. The fundamental frequency is equal to the
sampling frequency divided by the number of samples. However, the level
of noise at each of the discrete frequencies is the same.
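A PRN block can be sketched as a sum of integral-cycle sinusoids with random phases (pure Python; `periodic_random_noise` is an illustrative name, not a LabVIEW VI):

```python
import math
import random

random.seed(1)

def periodic_random_noise(n_samples, n_tones):
    """Sum equal-amplitude sinusoids at 1..n_tones integer cycles per
    block, each with a random phase, so the block is self-windowing."""
    phases = [random.uniform(0.0, 2 * math.pi) for _ in range(n_tones)]
    return [sum(math.sin(2 * math.pi * (k + 1) * i / n_samples + phases[k])
                for k in range(n_tones))
            for i in range(n_samples)]

prn = periodic_random_noise(256, 20)
```

Every constituent sinusoid completes an integer number of cycles in the 256-sample block, so an FFT of `prn` shows equal-level energy at bins 1 through 20 and no leakage elsewhere.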
You can use PRN to compute the frequency response of a linear system
with one time record instead of averaging the frequency response over
several time records, as you must for nonperiodic random noise sources.
Figure 2-7 shows the spectrum of PRN and the averaged spectra of white
noise.
Figure 2-7. Spectral Representation of Periodic Random Noise and
Averaged White Noise
Normalized Frequency
In the analog world, a signal frequency is measured in hertz (Hz), or cycles
per second. But a digital system often uses a digital frequency, which is
the ratio between the analog frequency and the sampling frequency, as
shown by the following equation.

digital frequency = analog frequency / sampling frequency

The digital frequency is known as the normalized frequency and is
measured in cycles per sample.
Some of the Signal Generation VIs use a frequency input f that is assumed
to be in normalized frequency units of cycles per sample. The normalized
frequency ranges from 0.0 to 1.0, which corresponds to a real frequency
range of 0 to the sampling frequency fs. The normalized frequency also
wraps around 1.0, so a normalized frequency of 1.1 is equivalent to 0.1. For
example, a signal at the Nyquist frequency of fs/2 is sampled twice per
cycle, that is, two samples/cycle. This sampling rate corresponds to a
normalized frequency of 1/2 cycles/sample = 0.5 cycles/sample. The
reciprocal of the normalized frequency, 1/f, gives you the number of
times the signal is sampled in one cycle, that is, the number of samples per
cycle.
When you use a VI that requires the normalized frequency as an input, you
must convert your frequency units to the normalized units of cycles per
sample. You must use normalized units of cycles per sample with the
following Signal Generation VIs:
• Sine Wave
• Square Wave
• Sawtooth Wave
• Triangle Wave
• Arbitrary Wave
• Chirp Pattern
If you are used to working in frequency units of cycles, you can convert
cycles to cycles per sample by dividing the frequency in cycles by the
number of samples generated. For example, a frequency of two cycles
divided by 50 samples results in a normalized frequency of
f = 1/25 cycles/sample. This means that it takes 25, the reciprocal of f,
samples to generate one cycle of the sine wave.
However, you might need to use frequency units of Hz, or cycles per second.
If you need to convert from Hz to cycles per sample, divide your frequency
in Hz by the sampling rate given in samples per second, as shown in the
following equation.

cycles/sample = (cycles per second) / (samples per second)

For example, you divide a frequency of 60 Hz by a sampling rate of
1,000 Hz to get the normalized frequency of f = 0.06 cycles/sample.
Therefore, it takes almost 17, or 1/0.06, samples to generate one cycle of
the sine wave.
The Signal Generation VIs create many common signals required for
network analysis and simulation. You also can use the Signal Generation
VIs in conjunction with National Instruments hardware to generate analog
output signals.
Wave and Pattern VIs
The names of most of the Signal Generation VIs contain the word wave or
pattern. A basic difference exists between the operation of the two different
types of VIs. The difference has to do with whether the VI can keep track
of the phase of the signal it generates each time the VI is called.
Phase Control
The wave VIs have a phase in input that specifies the initial phase in
degrees of the first sample of the generated waveform. The wave VIs also
have a phase out output that indicates the phase of the next sample of the
generated waveform. In addition, a reset phase input specifies whether the
phase of the first sample generated when the wave VI is called is the phase
specified in the phase in input or the phase available in the phase out
output when the VI last executed. A TRUE value for reset phase sets the
initial phase to phase in. A FALSE value for reset phase sets the initial
phase to the value of phase out when the VI last executed.
All the wave VIs are reentrant, which means they can keep track of phase
internally. The wave VIs accept frequency in normalized units of cycles per
sample. The only pattern VI that uses normalized units is the Chirp Pattern
VI. Wire FALSE to the reset phase input to allow for continuous sampling
simulation.
Chapter 3
Digital Filtering
This chapter introduces the concept of filtering, compares analog and
digital filters, describes finite impulse response (FIR) and infinite impulse
response (IIR) filters, and describes how to choose the appropriate digital
filter for a particular application.
Introduction to Filtering
The filtering process alters the frequency content of a signal. For example,
the bass control on a stereo system alters the low-frequency content of a
signal, while the treble control alters the high-frequency content. Changing
the bass and treble controls filters the audio signal. Two common filtering
applications are noise removal and decimation. Decimation consists of
lowpass filtering and reducing the sample rate.
The filtering process assumes that you can separate the signal content of
interest from the raw signal. Classical linear filtering assumes that the
signal content of interest is distinct from the remainder of the signal in the
frequency domain.
Advantages of Digital Filtering Compared to Analog Filtering
An analog filter has an analog signal at both its input x(t) and its output y(t).
Both x(t) and y(t) are functions of a continuous variable t and can have an
infinite number of values. Analog filter design requires advanced
mathematical knowledge and an understanding of the processes involved
in the system affecting the filter.
Because of modern sampling and digital signal processing tools, you
can replace analog filters with digital filters in applications that require
flexibility and programmability, such as audio, telecommunications,
geophysics, and medical monitoring applications.
Digital filters have the following advantages compared to analog filters:
• Digital filters are software programmable, which makes them easy to
build and test.
• Digital filters require only the arithmetic operations of multiplication
and addition/subtraction.
• Digital filters do not drift with temperature or humidity or require
precision components.
• Digital filters have a superior performance-to-cost ratio.
• Digital filters do not suffer from manufacturing variations or aging.
Common Digital Filters
You can classify a digital filter as one of the following types:
• Finite impulse response (FIR) filter, also known as moving average
(MA) filter
• Infinite impulse response (IIR) filter, also known as autoregressive
moving-average (ARMA) filter
• Nonlinear filter
Traditional filter classification begins with classifying a filter according to
its impulse response.
Impulse Response
An impulse is a short-duration signal that goes from zero to a maximum
value and back to zero again in a short time. Equation 3-1 provides the
mathematical definition of an impulse.

x0 = 1, xi = 0 for all i ≠ 0     (3-1)

The impulse response of a filter is the response of the filter to an impulse
and depends on the values upon which the filter operates. Figure 3-1
illustrates impulse response.
Figure 3-1. Impulse Response
The Fourier transform of the impulse response is the frequency response of
the filter. The frequency response of a filter provides information about the
output of the filter at different frequencies. In other words, the frequency
response of a filter reflects the gain of the filter at different frequencies. For
an ideal filter, the gain is one in the passband and zero in the stopband. An
ideal filter passes all frequencies in the passband to the output unchanged
but passes none of the frequencies in the stopband to the output.
Classifying Filters by Impulse Response
The impulse response of a filter determines whether the filter is an FIR or
IIR filter. The output of an FIR filter depends only on the current and past
input values. The output of an IIR filter depends on the current and past
input values and the current and past output values.
The operation of a cash register can serve as an example to illustrate the
difference between FIR and IIR filter operations. For this example, the
following conditions are true:
• x[k] is the cost of the current item entered into the cash register.
• x[k – 1] is the cost of the previous item entered into the cash register.
• 1 ≤ k ≤ N
• N is the total number of items entered into the cash register.
The following statements describe the operation of the cash register:
• The cash register adds the cost of each item to produce the running
total y[k].
• The following equation computes y[k] up to the kth item.

y[k] = x[k] + x[k – 1] + x[k – 2] + x[k – 3] + … + x[1]     (3-2)

Therefore, the total for N items is y[N].
• y[k] equals the total up to the kth item. y[k – 1] equals the total up to
the (k – 1)th item. Therefore, Equation 3-2 can be rewritten as the
following equation.

y[k] = y[k – 1] + x[k]     (3-3)
• Add a tax of 8.25% and rewrite Equations 3-2 and 3-3 as the following
equations.

y[k] = 1.0825x[k] + 1.0825x[k – 1] + 1.0825x[k – 2] +
1.0825x[k – 3] + … + 1.0825x[1]     (3-4)

y[k] = y[k – 1] + 1.0825x[k]     (3-5)
Equations 3-4 and 3-5 identically describe the behavior of the cash register.
However, Equation 3-4 describes the behavior of the cash register only in
terms of the input, while Equation 3-5 describes the behavior in terms of
both the input and the output. Equation 3-4 represents a nonrecursive, or
FIR, operation. Equation 3-5 represents a recursive, or IIR, operation.
Equations that describe the operation of a filter and have the same form as
Equations 3-2, 3-3, 3-4, and 3-5 are difference equations.
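The two forms are easy to check numerically. This sketch (pure Python; the function names are illustrative) implements Equation 3-4 as a nonrecursive sum over the inputs and Equation 3-5 as a recursive update of the previous output, and the two always agree:

```python
TAX = 1.0825  # 8.25% tax, as in Equations 3-4 and 3-5

def total_nonrecursive(prices):
    """Equation 3-4: the output depends on the inputs only (FIR-style)."""
    return sum(TAX * x for x in prices)

def total_recursive(prices):
    """Equation 3-5: each output is built from the previous output (IIR-style)."""
    y = 0.0
    for x in prices:
        y = y + TAX * x
    return y

items = [1.00, 2.50, 0.75]
```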
FIR filters are the simplest filters to design. If a single impulse is present at
the input of an FIR filter and all subsequent inputs are zero, the output of
an FIR filter becomes zero after a finite time, which is why the impulse
response is called finite. The time, in samples, required for the filter output
to reach zero equals the number of filter coefficients. Refer to the FIR
Filters section of this chapter for more information about FIR filters.
Because IIR filters operate on current and past input values and current and
past output values, the impulse response of an IIR filter never reaches
exactly zero and is therefore an infinite response. Refer to the IIR Filters
section of this chapter for more information about IIR filters.
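A one-pole recursive filter makes the point concrete. In this sketch (pure Python; the names are illustrative), the impulse response of y[k] = a·y[k – 1] + x[k] decays geometrically but never reaches exactly zero:

```python
def one_pole_impulse_response(a, n):
    """First n samples of the impulse response of y[k] = a*y[k-1] + x[k]."""
    y, response = 0.0, []
    for k in range(n):
        x = 1.0 if k == 0 else 0.0  # unit impulse input
        y = a * y + x
        response.append(y)
    return response

h = one_pole_impulse_response(0.5, 30)
# h[k] = 0.5**k: still nonzero at every k, unlike an FIR response.
```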
Filter Coefficients
In Equation 3-4, the multiplying constant for each term is 1.0825. In
Equation 3-5, the multiplying constants are 1 for y[k – 1] and 1.0825 for
x[k]. The multiplying constants are the coefficients of the filter. For an IIR
filter, the coefficients multiplying the inputs are the forward coefficients.
The coefficients multiplying the outputs are the reverse coefficients.
Characteristics of an Ideal Filter
In practical applications, ideal filters are not realizable.
Ideal filters allow a specified frequency range of interest to pass through
while attenuating a specified unwanted frequency range. The following
filter classifications are based on the frequency range a filter passes or
blocks:
• Lowpass filters pass low frequencies and attenuate high frequencies.
• Highpass filters pass high frequencies and attenuate low frequencies.
• Bandpass filters pass a certain band of frequencies.
• Bandstop filters attenuate a certain band of frequencies.
Figure 3-2 shows the ideal frequency response of each of the preceding
filter types.

Figure 3-2. Ideal Frequency Response
In Figure 3-2, the filters exhibit the following behavior:
• The lowpass filter passes all frequencies below fc.
• The highpass filter passes all frequencies above fc.
• The bandpass filter passes all frequencies between fc1 and fc2.
• The bandstop filter attenuates all frequencies between fc1 and fc2.
The frequency points fc, fc1, and fc2 specify the cutoff frequencies for the
different filters. When designing filters, you must specify the cutoff
frequencies.
The passband of the filter is the frequency range that passes through the
filter. An ideal filter has a gain of one (0 dB) in the passband so the
amplitude of the signal neither increases nor decreases. The stopband of the
filter is the range of frequencies that the filter attenuates. Figure 3-3 shows
the passband (PB) and the stopband (SB) for each filter type.
Figure 3-3. Passband and Stopband
The filters in Figure 3-3 have the following passband and stopband
characteristics:
• The lowpass and highpass filters have one passband and one stopband.
• The bandpass filter has one passband and two stopbands.
• The bandstop filter has two passbands and one stopband.
Practical (Nonideal) Filters
Ideally, a filter has a unit gain (0 dB) in the passband and a gain of
zero (–∞ dB) in the stopband. However, real filters cannot fulfill all the
criteria of an ideal filter. In practice, a finite transition band always exists
between the passband and the stopband. In the transition band, the gain
of the filter changes gradually from one (0 dB) in the passband to
zero (–∞ dB) in the stopband.
Transition Band
Figure 3-4 shows the passband, the stopband, and the transition band for
each type of practical filter.
Figure 3-4. Nonideal Filters

In each plot in Figure 3-4, the x-axis represents frequency, and the y-axis
represents the magnitude of the filter in dB. The passband is the region
within which the gain of the filter varies from 0 dB to –3 dB.
Passband Ripple and Stopband Attenuation
In many applications, you can allow the gain in the passband to vary
slightly from unity. This variation in the passband is the passband ripple,
or the difference between the actual gain and the desired gain of unity.
In practice, the stopband attenuation cannot be infinite, and you must
specify a value with which you are satisfied. Measure both the passband
ripple and the stopband attenuation in decibels (dB). Equation 3-6 defines
a decibel.

dB = 20 log( Ao( f ) / Ai( f ) )     (3-6)

where log denotes the base-10 logarithm, Ai( f ) is the amplitude at a
particular frequency f before filtering, and Ao( f ) is the amplitude at a
particular frequency f after filtering.

When you know the passband ripple or stopband attenuation, you can
use Equation 3-6 to determine the ratio of input and output amplitudes.
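Equation 3-6 and its inverse are one line each in code. This sketch (pure Python; the function names are illustrative) reproduces the –0.02 dB passband-ripple example worked in Equations 3-7 and 3-8:

```python
import math

def gain_db(amplitude_ratio_value):
    """Equation 3-6: dB = 20 * log10(Ao(f) / Ai(f))."""
    return 20 * math.log10(amplitude_ratio_value)

def amplitude_ratio(db_value):
    """Invert Equation 3-6 to recover Ao(f) / Ai(f) from a dB value."""
    return 10 ** (db_value / 20)

# A passband ripple of -0.02 dB corresponds to a ratio very near unity
print(round(amplitude_ratio(-0.02), 4))  # 0.9977
```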
The ratio of the amplitudes shows how close the passband or stopband is to
the ideal. For example, for a passband ripple of –0.02 dB, Equation 3-6
yields the following set of equations.

–0.02 = 20 log( Ao( f ) / Ai( f ) )     (3-7)

Ao( f ) / Ai( f ) = 10^–0.001 = 0.9977     (3-8)

Equations 3-7 and 3-8 show that the ratio of input and output amplitudes is
close to unity, which is the ideal for the passband.

Practical filter design attempts to approximate the ideal desired magnitude
response, subject to certain constraints. Table 3-1 compares the
characteristics of ideal filters and practical filters.

Table 3-1. Characteristics of Ideal and Practical Filters

Characteristic     Ideal Filters        Practical Filters
Passband           Flat and constant    Might contain ripples
Stopband           Flat and constant    Might contain ripples
Transition band    None                 Has transition regions

Practical filter design involves compromise, allowing you to emphasize
a desirable filter characteristic at the expense of a less desirable
characteristic. The compromises you can make depend on whether the
filter is an FIR or IIR filter and the design algorithm.

Sampling Rate

The sampling rate is important to the success of a filtering operation. The
maximum frequency component of the signal of interest usually determines
the sampling rate. In general, choose a sampling rate 10 times higher than
the highest frequency component of the signal of interest.

Make exceptions to the previous sampling rate guideline when filter cutoff
frequencies must be very close to either DC or the Nyquist frequency.
Filters with cutoff frequencies close to DC or the Nyquist frequency might
have a slow rate of convergence. You can take the following actions to
overcome the slow convergence:
• If the cutoff is too close to the Nyquist frequency, increase the
sampling rate.
• If the cutoff is too close to DC, reduce the sampling rate.
In general, adjust the sampling rate only if you encounter problems.
FIR Filters
Finite impulse response (FIR) filters are digital filters that have a finite
impulse response. FIR filters operate only on current and past input values
and are the simplest filters to design. FIR filters also are known as
nonrecursive filters, convolution filters, and moving average (MA) filters.
FIR filters perform a convolution of the filter coefficients with a sequence
of input values and produce an equally numbered sequence of output
values. Equation 3-9 defines the finite convolution an FIR filter performs.

yi = Σ hk · xi–k , summing over k = 0 to n – 1     (3-9)

where x is the input sequence to filter, y is the filtered sequence, and h is
the set of FIR filter coefficients.

FIR filters have the following characteristics:
• FIR filters can achieve linear phase because of filter coefficient
symmetry in the realization.
• FIR filters are always stable.
• FIR filters allow you to filter signals using convolution. Therefore,
you generally can associate a delay with the output sequence, as shown
in the following equation.

delay = (n – 1) / 2

where n is the number of FIR filter coefficients.
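Equation 3-9 can be sketched directly as a direct-form convolution (pure Python; `fir_filter` is an illustrative name, and samples before the start of the input are treated as zero):

```python
def fir_filter(h, x):
    """Direct-form FIR: y[i] = sum over k of h[k] * x[i - k] (Equation 3-9)."""
    n = len(h)
    return [sum(h[k] * x[i - k] for k in range(n) if i - k >= 0)
            for i in range(len(x))]

# A 5-point moving average: the response to an impulse dies out after
# n = 5 samples, and the delay is (n - 1) / 2 = 2 samples.
h = [0.2] * 5
impulse = [1.0] + [0.0] * 9
y = fir_filter(h, impulse)
```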
Figure 3-5 shows a typical magnitude and phase response of an FIR filter
plotted against normalized frequency.

Figure 3-5. FIR Filter Magnitude and Phase Response
Compared to Normalized Frequency

In Figure 3-5, the discontinuities in the phase response result from the
discontinuities introduced when you use the absolute value to compute the
magnitude response. The discontinuities in phase are on the order of π.
However, the phase is clearly linear.
Taps
The terms tap and taps frequently appear in descriptions of FIR filters, FIR
filter design, and FIR filtering operations. Figure 3-6 illustrates the process
of tapping.

Figure 3-6. Tapping

Figure 3-6 represents an n-sample shift register containing the input
samples [xi, xi–1, …]. The term tap comes from the process of tapping off
of the shift register to form each hk·xi–k term in Equation 3-9. Taps usually
refers to the number of filter coefficients for an FIR filter.
Designing FIR Filters
You design FIR filters by approximating the desired frequency response of
a discretetime system. The most common techniques approximate the
desired magnitude response while maintaining a linearphase response.
Linearphase response implies that all frequencies in the system have the
same propagation delay.
Figure 37 shows the block diagram of a VI that returns the frequency
response of a bandpass equiripple FIR filter.
Figure 3-7. Frequency Response of a Bandpass Equiripple FIR Filter
The VI in Figure 3-7 completes the following steps to compute the frequency response of the filter.
1. Pass an impulse signal through the filter.
2. Pass the filtered signal out of the Case structure to the FFT VI. The Case structure specifies the filter type, such as lowpass, highpass, bandpass, or bandstop. The signal passed out of the Case structure is the impulse response of the filter.
3. Use the FFT VI to perform a Fourier transform on the impulse response and to compute the frequency response of the filter. The impulse response h(t) and the frequency response H(f) comprise the Fourier transform pair shown in the following relation.
4. Use the Array Subset function to reduce the data returned by the FFT VI. Half of the real FFT result is redundant, so the VI needs to process only half of the data returned by the FFT VI.
5. Use the Complex To Polar function to obtain the magnitude-and-phase form of the data returned by the FFT VI. The magnitude-and-phase form of the complex output from the FFT VI is easier to interpret than the rectangular form.
6. Unwrap the phase and convert it to degrees.
7. Convert the magnitude to decibels.
    h(t) ⇔ H(f)
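The steps above can be sketched in NumPy. The coefficients below are a hypothetical moving-average filter standing in for whatever coefficients the Case structure would select; the FFT length and epsilon guard are illustrative choices, not part of the original VI.

```python
import numpy as np

# Hypothetical FIR coefficients (a short moving-average filter) stand in
# for the coefficients the Case structure in Figure 3-7 would select.
h = np.ones(8) / 8.0
n_fft = 512

# Steps 1-2: the impulse response of an FIR filter is its coefficient
# sequence, zero-padded here to set the FFT resolution.
impulse_response = np.concatenate([h, np.zeros(n_fft - len(h))])

# Steps 3-4: transform to the frequency domain; keep the non-redundant half.
H = np.fft.fft(impulse_response)[: n_fft // 2]

# Steps 5-7: magnitude in decibels and unwrapped phase in degrees.
magnitude_db = 20 * np.log10(np.abs(H) + 1e-12)
phase_deg = np.degrees(np.unwrap(np.angle(H)))
```

At DC the averaging filter has unit gain, so `magnitude_db[0]` is 0 dB, mirroring the passband reference level in Figure 3-8.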
Chapter 3 Digital Filtering
© National Instruments Corporation 3-13 LabVIEW Analysis Concepts
Figure 3-8 shows the magnitude and phase responses returned by the VI in Figure 3-7.
Figure 3-8. Magnitude and Phase Response of a Bandpass Equiripple FIR Filter
In Figure 3-8, the discontinuities in the phase response result from the discontinuities introduced when you use the absolute value to compute the magnitude response. However, the phase response is linear because all frequencies in the system have the same propagation delay.
Because FIR filters have ripple in the magnitude response, designing FIR filters poses the following challenges:
• Designing a filter with a magnitude response as close to the ideal as possible
• Designing a filter that distributes the ripple in a desired fashion
For example, a lowpass filter has an ideal characteristic magnitude response. A particular application might allow some ripple in the passband and more ripple in the stopband. The filter design algorithm must balance the relative ripple requirements while producing the sharpest possible transition band.
The most common techniques for designing FIR filters are windowing and the Parks-McClellan algorithm, also known as the Remez exchange algorithm.
Designing FIR Filters by Windowing
Windowing is the simplest technique for designing FIR filters because of its conceptual simplicity and ease of implementation. Designing FIR filters by windowing takes the inverse FFT of the desired magnitude response and applies a smoothing window, in the time domain, to the result.
Complete the following steps to design an FIR filter by windowing.
1. Decide on an ideal frequency response.
2. Calculate the impulse response of the ideal frequency response.
3. Truncate the impulse response to produce a finite number of coefficients. To meet the linear-phase constraint, maintain symmetry about the center point of the coefficients.
4. Apply a symmetric smoothing window.
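The four steps above can be sketched for a lowpass design. The cutoff convention (a fraction of the Nyquist frequency) and the Hamming window are illustrative assumptions, not the interface of any LabVIEW VI.

```python
import numpy as np

def fir_lowpass_by_window(num_taps, cutoff):
    """Window-design sketch. cutoff is the normalized cutoff frequency as a
    fraction of the Nyquist frequency (an assumed convention)."""
    # Steps 1-2: the impulse response of the ideal lowpass response is a sinc.
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    ideal = cutoff * np.sinc(cutoff * n)
    # Step 3: truncation to num_taps coefficients is implicit in the finite
    # range of n; symmetry about the center point preserves linear phase.
    # Step 4: apply a symmetric smoothing window (Hamming here).
    return ideal * np.hamming(num_taps)
```

The tapered coefficients remain symmetric, so the filter stays linear-phase, while the window suppresses the Gibbs side lobes at the cost of a wider transition band, as discussed below.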
Truncating the ideal impulse response results in the Gibbs phenomenon.
The Gibbs phenomenon appears as oscillatory behavior near cutoff
frequencies in the FIR filter frequency response. You can reduce the effects
of the Gibbs phenomenon by using a smoothing window to smooth the
truncation of the ideal impulse response. By tapering the FIR coefficients
at each end, you can decrease the height of the side lobes in the frequency
response. However, decreasing the side lobe heights causes the main lobe
to widen, resulting in a wider transition band at the cutoff frequencies.
Selecting a smoothing window requires a tradeoff between the height of
the side lobes near the cutoff frequencies and the width of the transition
band. Decreasing the height of the side lobes near the cutoff frequencies
increases the width of the transition band. Decreasing the width of the
transition band increases the height of the side lobes near the cutoff
frequencies.
Designing FIR filters by windowing has the following disadvantages:
• Inefficiency
– Windowing results in unequal distribution of ripple.
– Windowing results in a wider transition band than other design
techniques.
• Difficulty in specification
– Windowing increases the difficulty of specifying a cutoff
frequency that has a specific attenuation.
– Filter designers must specify the ideal cutoff frequency.
– Filter designers must specify the sampling frequency.
– Filter designers must specify the number of taps.
– Filter designers must specify the window type.
Designing FIR filters by windowing does not require a large amount of
computational resources. Therefore, windowing is the fastest technique for
designing FIR filters. However, windowing is not necessarily the best
technique for designing FIR filters.
Designing Optimum FIR Filters Using the Parks-McClellan Algorithm
The Parks-McClellan algorithm, or Remez exchange algorithm, uses an iterative technique based on an error criterion to design FIR filter coefficients. You can use the Parks-McClellan algorithm to design optimum, linear-phase FIR filter coefficients. Filters you design with the Parks-McClellan algorithm are optimal because they minimize the maximum error between the actual magnitude response of the filter and the ideal magnitude response.
Designing optimum FIR filters reduces adverse effects at the cutoff
frequencies. Designing optimum FIR filters also offers more control over
the approximation errors in different frequency bands than other FIR filter
design techniques, such as designing FIR filters by windowing, which
provides no control over the approximation errors in different frequency
bands.
Optimum FIR filters you design using the Parks-McClellan algorithm have the following characteristics:
• A magnitude response with the weighted ripple evenly distributed over the passband and stopband
• A sharp transition band
FIR filters you design using the Parks-McClellan algorithm have an optimal response. However, the design process is complex, requires a large amount of computational resources, and takes much longer than designing FIR filters by windowing.
Designing Equiripple FIR Filters Using the Parks-McClellan Algorithm
You can use the Parks-McClellan algorithm to design equiripple FIR filters. Equiripple design equally weights the passband and stopband ripple and produces filters with a linear-phase characteristic.
You must specify the following filter characteristics to design an equiripple
FIR filter:
• Cutoff frequency
• Number of taps
• Filter type, such as lowpass, highpass, bandpass, or bandstop
• Pass frequency
• Stop frequency
The cutoff frequency for equiripple filters specifies the edge of the
passband, the stopband, or both. The ripple in the passband and stopband
of equiripple filters causes the following magnitude responses:
• Passband—a magnitude response greater than or equal to 1
• Stopband—a magnitude response less than or equal to the stopband
attenuation
For example, if you specify a lowpass filter, the passband cutoff frequency
is the highest frequency for which the passband conditions are true.
Similarly, the stopband cutoff frequency is the lowest frequency for which
the stopband conditions are true.
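An equiripple bandpass design of this kind can be sketched with SciPy's `remez`, which implements the Parks-McClellan algorithm (this stands in for the LabVIEW equiripple VIs; the band edges, tap count, and sampling frequency below are illustrative values).

```python
import numpy as np
from scipy.signal import remez

# Equiripple bandpass design via Parks-McClellan. Band edges are given as
# (stop, pass, stop) regions in Hz; fs is an assumed sampling frequency.
fs = 1000.0
taps = remez(
    numtaps=73,
    bands=[0, 80, 120, 180, 220, fs / 2],  # band edges in Hz
    desired=[0, 1, 0],                     # desired gain per band
    fs=fs,
)
```

The returned coefficient sequence is symmetric about its center, which is exactly the linear-phase property the equiripple design guarantees.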
Designing Narrowband FIR Filters
Using conventional techniques to design FIR filters with especially narrow bandwidths can result in long filter lengths. FIR filters with long filter lengths often require long design and implementation times and are susceptible to numerical inaccuracy. In some cases, conventional filter design techniques, such as the Parks-McClellan algorithm, might not produce an acceptable narrowband FIR filter.
The interpolated finite impulse response (IFIR) filter design technique offers an efficient algorithm for designing narrowband FIR filters. The IFIR technique produces narrowband filters that require fewer coefficients and computations than filters you design by directly applying the Parks-McClellan algorithm. The FIR Narrowband Coefficients VI uses the IFIR technique to generate narrowband FIR filter coefficients.
You must specify the following parameters when developing narrowband
filter specifications:
• Filter type, such as lowpass, highpass, bandpass, or bandstop
• Passband ripple on a linear scale
• Sampling frequency
• Passband frequency, which refers to passband width for bandpass and
bandstop filters
• Stopband frequency, which refers to stopband width for bandpass and
bandstop filters
• Center frequency for bandpass and bandstop filters
• Stopband attenuation in decibels
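The structural idea behind IFIR can be sketched as follows. A prototype filter is designed for a band M times wider than needed, its taps are stretched by inserting zeros (giving G(z^M), whose response is compressed by M but has spectral images), and a second, cheap interpolator filter suppresses the images. This is only a conceptual sketch of the structure, not the algorithm inside the FIR Narrowband Coefficients VI; `g`, `m`, and `interp` are hypothetical inputs.

```python
import numpy as np

def ifir_cascade(g, m, interp):
    """Form the IFIR cascade G(z^m) * I(z).
    g: prototype FIR taps designed for an m-times-wider band (assumed given).
    m: stretch factor. interp: image-suppressor FIR taps (assumed given)."""
    # Stretch: insert m-1 zeros between prototype taps -> G(z^m).
    stretched = np.zeros((len(g) - 1) * m + 1)
    stretched[::m] = g
    # Cascade the two filters by convolving their impulse responses.
    return np.convolve(stretched, interp)
```

Because only the short prototype and interpolator need designing, the total coefficient count and computation stay well below a direct Parks-McClellan design of the same narrow band.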
Figure 3-9 shows the block diagram of a VI that estimates the frequency response of a narrowband FIR bandpass filter by transforming the impulse response into the frequency domain.
Figure 3-9. Estimating the Frequency Response of a Narrowband FIR Bandpass Filter
Figure 3-10 shows the filter response from zero to the Nyquist frequency that the VI in Figure 3-9 returns.
Figure 3-10. Estimated Frequency Response of a Narrowband FIR Bandpass Filter from Zero to Nyquist
In Figure 3-10, the narrow passband centers around 1 kHz. The narrow passband centered at 1 kHz is the response of the filter specified by the front panel controls in Figure 3-10.
Figure 3-11 shows the filter response in detail.
Figure 3-11. Detail of the Estimated Frequency Response of a Narrowband FIR Bandpass Filter
In Figure 3-11, the narrow passband clearly centers around 1 kHz, and the signal outside the passband is attenuated by 60 dB.
Refer to the works of Vaidyanathan, P. P. and Neuvo, Y. et al. in
Appendix A, References, for more information about designing IFIR
filters.
Designing Wideband FIR Filters
You also can use the IFIR technique to produce wideband FIR lowpass filters and wideband FIR highpass filters. A wideband FIR lowpass filter has a cutoff frequency near the Nyquist frequency. A wideband FIR highpass filter has a cutoff frequency near zero. You can use the FIR Narrowband Coefficients VI to design both. Figure 3-12 shows the frequency response that the VI in Figure 3-9 returns when you use it to estimate the frequency response of a wideband FIR lowpass filter.
Figure 3-12. Frequency Response of a Wideband FIR Lowpass Filter from Zero to Nyquist
In Figure 3-12, the front panel controls define a narrow band between the stopband edge at 23.9 kHz and the Nyquist frequency at 24 kHz. However, the frequency response of the filter runs from zero to 23.9 kHz, which makes the filter a wideband filter.
IIR Filters
Infinite impulse response (IIR) filters, also known as recursive filters and autoregressive moving-average (ARMA) filters, operate on current and past input values and current and past output values. The impulse response of an IIR filter is the response of the general IIR filter to an impulse, as Equation 3-1 defines impulse. Theoretically, the impulse response of an IIR filter never reaches zero and is therefore an infinite response.
The following general difference equation characterizes IIR filters:

    y_i = (1/a_0) · ( Σ_{j=0}^{N_b−1} b_j·x_{i−j}  −  Σ_{k=1}^{N_a−1} a_k·y_{i−k} )        (3-10)

where b_j is the set of forward coefficients, N_b is the number of forward coefficients, a_k is the set of reverse coefficients, and N_a is the number of reverse coefficients.
Equation 310 describes a filter with an impulse response of theoretically
infinite length for nonzero coefficients. However, in practical filter
applications, the impulse response of a stable IIR filter decays to near zero
in a finite number of samples.
In most IIR filter designs, and in all of the LabVIEW IIR filters, coefficient a_0 is 1. The output sample at the current sample index i is then the sum of scaled current and past inputs and scaled past outputs, as shown in Equation 3-11:

    y_i = Σ_{j=0}^{N_b−1} b_j·x_{i−j}  −  Σ_{k=1}^{N_a−1} a_k·y_{i−k}        (3-11)

where x_i is the current input, x_{i−j} are the past inputs, and y_{i−k} are the past outputs.
IIR filters might have ripple in the passband, the stopband, or both. IIR filters have a nonlinear-phase response.
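The general difference equation can be evaluated directly with a short loop. This is a literal, unoptimized sketch of Equation 3-10, with values at negative indexes taken as zero.

```python
import numpy as np

def iir_direct_form(b, a, x):
    """Evaluate the IIR difference equation of Equation 3-10 directly.
    Values at negative indexes are treated as zero, matching the initial
    state of the LabVIEW IIR filter VIs on the first call."""
    y = np.zeros(len(x))
    for i in range(len(x)):
        acc = sum(b[j] * x[i - j] for j in range(len(b)) if i - j >= 0)
        acc -= sum(a[k] * y[i - k] for k in range(1, len(a)) if i - k >= 0)
        y[i] = acc / a[0]
    return y
```

With b = [1] and a = [1, −0.5], an impulse input produces the decaying output 1, 0.5, 0.25, ..., illustrating the theoretically infinite impulse response that in practice decays to near zero.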
Cascade-Form IIR Filtering
Equation 3-12 defines the direct-form transfer function of an IIR filter:

    H(z) = (b_0 + b_1·z^−1 + … + b_{N_b−1}·z^−(N_b−1)) / (1 + a_1·z^−1 + … + a_{N_a−1}·z^−(N_a−1))        (3-12)
A filter implemented by directly using the structure that Equation 3-12 defines, after converting it to the difference equation in Equation 3-11, is a direct-form IIR filter. A direct-form IIR filter often is sensitive to errors introduced by coefficient quantization and by computational precision limits. Also, a filter with an initially stable design can become unstable as the coefficient length increases. The filter order is proportional to the coefficient length, so as the coefficient length increases, the filter order increases, and the filter becomes more prone to instability.
You can lessen the sensitivity of a filter to error by writing Equation 3-12 as a ratio of z-transforms, which divides the direct-form transfer function into lower order sections, or filter stages.
By factoring Equation 3-12 into second-order sections, the transfer function of the filter becomes a product of second-order filter functions, as shown in Equation 3-13:

    H(z) = Π_{k=1}^{N_s} (b_{0k} + b_{1k}·z^−1 + b_{2k}·z^−2) / (1 + a_{1k}·z^−1 + a_{2k}·z^−2)        (3-13)

where N_s = ⌊N_a/2⌋, the largest integer less than or equal to N_a/2, is the number of stages, and N_a ≥ N_b.
You can describe the filter structure defined by Equation 3-13 as a cascade of second-order filters. Figure 3-13 illustrates cascade filtering.
Figure 3-13. Stages of Cascade Filtering
You implement each individual filter stage in Figure 3-13 with the direct-form II filter structure, for the following reasons:
• The direct-form II filter structure requires a minimum number of arithmetic operations.
• The direct-form II filter structure requires a minimum number of delay elements, or internal filter states.
• Each kth stage has one input, one output, and two past internal states, s_k[i−1] and s_k[i−2].
If n is the number of samples in the input sequence, the filtering operation proceeds as shown in the following equations for each sample i = 0, 1, 2, …, n − 1:

    y_0[i] = x[i]
    s_k[i] = y_{k−1}[i] − a_{1k}·s_k[i−1] − a_{2k}·s_k[i−2]        k = 1, 2, …, N_s
    y_k[i] = b_{0k}·s_k[i] + b_{1k}·s_k[i−1] + b_{2k}·s_k[i−2]        k = 1, 2, …, N_s
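The cascade of direct-form II stages can be sketched directly from those state equations. Each stage keeps only its two past internal states, and the output of one stage feeds the next.

```python
import numpy as np

def cascade_df2(stages, x):
    """Run x through a cascade of direct-form II second-order stages.
    stages: list of ((b0, b1, b2), (a1, a2)) per stage; states start at zero."""
    y = np.asarray(x, dtype=float)
    for (b0, b1, b2), (a1, a2) in stages:
        s1 = s2 = 0.0                       # past internal states s_k[i-1], s_k[i-2]
        out = np.empty_like(y)
        for i, xi in enumerate(y):
            s = xi - a1 * s1 - a2 * s2      # state update
            out[i] = b0 * s + b1 * s1 + b2 * s2
            s1, s2 = s, s1                  # shift the states
        y = out                             # this stage's output feeds the next
    return y
```

A stage with coefficients b = (1, 0, 0), a = (0, 0) is the identity, which is a convenient sanity check on the state bookkeeping.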
Second-Order Filtering
For lowpass and highpass filters, which have a single cutoff frequency, you can design second-order filter stages directly. The resulting IIR lowpass or highpass filter contains cascaded second-order filters.
Each second-order filter stage has the following characteristics:
• k = 1, 2, …, N_s, where k is the second-order filter stage number and N_s is the total number of second-order filter stages.
• Each second-order filter stage has two reverse coefficients, (a_{1k}, a_{2k}).
• The total number of reverse coefficients equals 2N_s.
• Each second-order filter stage has three forward coefficients, (b_{0k}, b_{1k}, b_{2k}).
• The total number of forward coefficients equals 3N_s.
In Signal Processing VIs with Reverse Coefficients and Forward Coefficients parameters, the Reverse Coefficients and Forward Coefficients arrays contain the coefficients for one second-order filter stage, followed by the coefficients for the next second-order filter stage, and so on. For example, an IIR filter with two second-order filter stages must have a total of four reverse coefficients and six forward coefficients, as shown in the following equations:

    Total number of reverse coefficients = 2N_s = 2 × 2 = 4
    Reverse Coefficients = {a_11, a_21, a_12, a_22}
    Total number of forward coefficients = 3N_s = 3 × 2 = 6
    Forward Coefficients = {b_01, b_11, b_21, b_02, b_12, b_22}
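That stage-by-stage packing can be undone with a simple reshape. The coefficient values below are arbitrary placeholders; the point is the layout, with each row of the reshaped arrays holding one stage's coefficients.

```python
import numpy as np

# Flattened coefficient arrays for two second-order stages, laid out
# stage-by-stage as described above (values are placeholders).
reverse = np.array([0.1, 0.2, 0.3, 0.4])             # {a11, a21, a12, a22}
forward = np.array([1.0, 2.0, 1.0, 0.5, 1.0, 0.5])   # {b01, b11, b21, b02, b12, b22}

n_stages = len(reverse) // 2
a_stages = reverse.reshape(n_stages, 2)   # row k holds (a1k, a2k)
b_stages = forward.reshape(n_stages, 3)   # row k holds (b0k, b1k, b2k)
```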
Fourth-Order Filtering
For bandpass and bandstop filters, which have two cutoff frequencies, fourth-order filter stages are a more direct form of filter design than second-order filter stages. IIR bandpass or bandstop filters resulting from fourth-order filter design contain cascaded fourth-order filters.
Each fourth-order filter stage has the following characteristics:
• k = 1, 2, …, N_s, where k is the fourth-order filter stage number and N_s is the total number of fourth-order filter stages.
• N_s = ⌊(N_a + 1)/4⌋.
• Each fourth-order filter stage has four reverse coefficients, (a_{1k}, a_{2k}, a_{3k}, a_{4k}).
• The total number of reverse coefficients equals 4N_s.
• Each fourth-order filter stage has five forward coefficients, (b_{0k}, b_{1k}, b_{2k}, b_{3k}, b_{4k}).
• The total number of forward coefficients equals 5N_s.
You implement cascade stages in fourth-order filtering in the same manner as in second-order filtering. The following equations show how the filtering operation for fourth-order stages proceeds:

    y_0[i] = x[i]
    s_k[i] = y_{k−1}[i] − a_{1k}·s_k[i−1] − a_{2k}·s_k[i−2] − a_{3k}·s_k[i−3] − a_{4k}·s_k[i−4]
    y_k[i] = b_{0k}·s_k[i] + b_{1k}·s_k[i−1] + b_{2k}·s_k[i−2] + b_{3k}·s_k[i−3] + b_{4k}·s_k[i−4]

where k = 1, 2, …, N_s.
IIR Filter Types
Digital IIR filter designs come from the classical analog designs and
include the following filter types:
• Butterworth filters
• Chebyshev filters
• Chebyshev II filters, also known as inverse Chebyshev and Type II
Chebyshev filters
• Elliptic filters, also known as Cauer filters
• Bessel filters
The IIR filter designs differ in the sharpness of the transition between the passband and the stopband, and in whether they exhibit their various characteristics in the passband or the stopband.
Minimizing Peak Error
The Chebyshev, Chebyshev II, and elliptic filters minimize peak error by
accounting for the maximum tolerable error in their frequency response.
The maximum tolerable error is the maximum absolute value of the
difference between the ideal filter frequency response and the actual filter
frequency response. The amount of ripple, in dB, allowed in the frequency
response of the filter determines the maximum tolerable error. Depending
on the type, the filter minimizes peak error in the passband, stopband, or
both.
Butterworth Filters
Butterworth filters have the following characteristics:
• Smooth response at all frequencies
• Monotonic decrease from the specified cutoff frequencies
• Maximal flatness, with the ideal response of unity in the passband and
zero in the stopband
• Half-power frequency, or 3 dB down frequency, that corresponds to the specified cutoff frequencies
The advantage of Butterworth filters is their smooth, monotonically decreasing frequency response. Figure 3-14 shows the frequency response of a lowpass Butterworth filter.
Figure 3-14. Frequency Response of a Lowpass Butterworth Filter
As shown in Figure 3-14, after you specify the cutoff frequency of a Butterworth filter, LabVIEW sets the steepness of the transition proportional to the filter order. Higher order Butterworth filters approach the ideal lowpass filter response.
Butterworth filters do not always provide a good approximation of the ideal filter response because of the slow rolloff between the passband and the stopband.
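The properties listed above follow from the classical analog Butterworth magnitude formula, which can be evaluated directly. This is the textbook prototype response, not the output of any LabVIEW VI.

```python
import numpy as np

def butterworth_magnitude(f, fc, order):
    """Analog-prototype Butterworth magnitude response:
    maximally flat, monotonically decreasing, and exactly -3 dB
    (half power) at f = fc regardless of the filter order."""
    f = np.asarray(f, dtype=float)
    return 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))
```

Evaluating the formula at f = fc always gives 1/√2 (−3 dB), and raising the order steepens the rolloff while leaving that half-power point fixed, which matches the family of curves in Figure 3-14.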
Chebyshev Filters
Chebyshev filters have the following characteristics:
• Minimization of peak error in the passband
• Equiripple magnitude response in the passband
• Monotonically decreasing magnitude response in the stopband
• Sharper rolloff than Butterworth filters
Compared to a Butterworth filter, a Chebyshev filter can achieve a sharper
transition between the passband and the stopband with a lower order filter.
The sharp transition between the passband and the stopband of a
Chebyshev filter produces smaller absolute errors and faster execution
speeds than a Butterworth filter.
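The Chebyshev characteristics above come from the analog prototype magnitude built on the Chebyshev polynomial T_n. The formula below is the standard textbook Type I expression, shown here as a sketch rather than the LabVIEW implementation.

```python
import numpy as np

def chebyshev_magnitude(f, fc, ripple_db, order):
    """Type I Chebyshev analog-prototype magnitude: equiripple in the
    passband (f <= fc), monotonically decreasing in the stopband.
    ripple_db sets the maximum tolerable passband error."""
    eps = np.sqrt(10.0 ** (ripple_db / 10.0) - 1.0)
    x = np.asarray(f, dtype=float) / fc
    # Chebyshev polynomial T_n: cosine form inside the passband,
    # hyperbolic-cosine form outside it.
    t_n = np.where(x <= 1.0,
                   np.cos(order * np.arccos(np.clip(x, -1.0, 1.0))),
                   np.cosh(order * np.arccosh(np.maximum(x, 1.0))))
    return 1.0 / np.sqrt(1.0 + (eps * t_n) ** 2)
```

Inside the passband |T_n| ≤ 1, so the response oscillates between 1 and the ripple floor 10^(−ripple_db/20); beyond fc, T_n grows rapidly, giving the sharp rolloff visible in Figure 3-15.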
Figure 3-15 shows the frequency response of a lowpass Chebyshev filter.
Figure 3-15. Frequency Response of a Lowpass Chebyshev Filter
In Figure 3-15, the maximum tolerable error constrains the equiripple response in the passband, and the sharp rolloff appears in the stopband.
Chebyshev II Filters
Chebyshev II filters have the following characteristics:
• Minimization of peak error in the stopband
• Equiripple magnitude response in the stopband
• Monotonically decreasing magnitude response in the passband
• Sharper rolloff than Butterworth filters
Chebyshev II filters are similar to Chebyshev filters. However,
Chebyshev II filters differ from Chebyshev filters in the following ways:
• Chebyshev II filters minimize peak error in the stopband instead of the
passband. Minimizing peak error in the stopband instead of the
passband is an advantage of Chebyshev II filters over Chebyshev
filters.
• Chebyshev II filters have an equiripple magnitude response in the
stopband instead of the passband.
• Chebyshev II filters have a monotonically decreasing magnitude
response in the passband instead of the stopband.
Figure 3-16 shows the frequency response of a lowpass Chebyshev II filter.
Figure 3-16. Frequency Response of a Lowpass Chebyshev II Filter
In Figure 3-16, the maximum tolerable error constrains the equiripple response in the stopband, and the smooth monotonic rolloff appears in the passband.
Chebyshev II filters have the same advantage over Butterworth filters that
Chebyshev filters have—a sharper transition between the passband and the
stopband with a lower order filter, resulting in a smaller absolute error and
faster execution speed.
Elliptic Filters
Elliptic filters have the following characteristics:
• Minimization of peak error in the passband and the stopband
• Equiripple magnitude response in both the passband and the stopband
Compared with Butterworth or Chebyshev filters of the same order, elliptic filters provide the sharpest transition between the passband and the stopband, which accounts for their widespread use.
Figure 3-17 shows the frequency response of a lowpass elliptic filter.
Figure 3-17. Frequency Response of a Lowpass Elliptic Filter
In Figure 3-17, the same maximum tolerable error constrains the ripple in both the passband and the stopband. Also, even low-order elliptic filters have a sharp transition edge.
Bessel Filters
Bessel filters have the following characteristics:
• Maximally flat response in both magnitude and phase
• Nearly linear-phase response in the passband
You can use Bessel filters to reduce the nonlinear-phase distortion inherent in all IIR filters. High-order IIR filters and IIR filters with a steep rolloff have pronounced nonlinear-phase distortion, especially in the transition regions of the filters. You also can obtain a linear-phase response with FIR filters.
Figure 3-18 shows the magnitude response of a lowpass Bessel filter.
Figure 3-18. Magnitude Response of a Lowpass Bessel Filter
In Figure 3-18, the magnitude is smooth and monotonically decreasing at all frequencies.
Figure 3-19 shows the phase response of a lowpass Bessel filter.
Figure 3-19. Phase Response of a Lowpass Bessel Filter
Figure 3-19 shows the nearly linear phase in the passband. Also, the phase monotonically decreases at all frequencies.
Like Butterworth filters, Bessel filters require high-order designs to minimize peak error, which accounts for their limited use.
Designing IIR Filters
When choosing an IIR filter for an application, you must know the response of the filter. Figure 3-20 shows the block diagram of a VI that returns the frequency response of an IIR filter.
Figure 3-20. Frequency Response of an IIR Filter
Because the same mathematical theory applies to designing IIR and FIR filters, the block diagram in Figure 3-20 of a VI that returns the frequency response of an IIR filter and the block diagram in Figure 3-7 of a VI that returns the frequency response of an FIR filter share common design elements. The main difference between the two VIs is that the Case structure on the left side of Figure 3-20 specifies the IIR filter design and filter type instead of specifying only the filter type. The VI in Figure 3-20 computes the frequency response of an IIR filter by following the same steps outlined in the Designing FIR Filters section of this chapter.
Figure 3-21 shows the magnitude and phase responses of a bandpass elliptic IIR filter.
Figure 3-21. Magnitude and Phase Responses of a Bandpass Elliptic IIR Filter
In Figure 3-21, the phase response is clearly nonlinear. When deciding whether to use an IIR or FIR filter to process data, remember that IIR filters provide nonlinear phase information. Refer to the Comparing FIR and IIR Filters section and the Selecting a Digital Filter Design section of this chapter for information about the differences between FIR and IIR filters and about selecting an appropriate filter design.
IIR Filter Characteristics in LabVIEW
IIR filters in LabVIEW have the following characteristics:
• IIR filter VIs interpret values at negative indexes in Equation 3-10 as zero the first time you call the VI.
• A transient response, or delay, proportional to the filter order occurs before the filter reaches a steady state. Refer to the Transient Response section of this chapter for information about the transient response.
• The number of elements in the filtered sequence equals the number of elements in the input sequence.
• The filter retains the internal filter state values when the filtering process finishes.
Transient Response
The transient response occurs because the initial filter state is zero, that is, the filter treats values at negative indexes as zero. The duration of the transient response depends on the filter type.
The duration of the transient response for lowpass and highpass filters equals the filter order:

    delay = order

The duration of the transient response for bandpass and bandstop filters equals twice the filter order:

    delay = 2 × order

You can eliminate the transient response on successive calls to an IIR filter VI by enabling state memory. To enable state memory for continuous filtering, wire a value of TRUE to the init/cont input of the IIR filter VI.
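The effect of carrying state between calls can be demonstrated with a biquad that returns its final internal states: filtering one long block equals filtering two sub-blocks when the state is passed along, which is the role init/cont = TRUE plays. The coefficients below are arbitrary stable values chosen for illustration.

```python
import numpy as np

def biquad_with_state(x, b, a, state=(0.0, 0.0)):
    """Direct-form II biquad that also returns its final internal states,
    so a second call can continue filtering without a new transient."""
    b0, b1, b2 = b
    a1, a2 = a
    s1, s2 = state
    y = []
    for xi in x:
        s = xi - a1 * s1 - a2 * s2
        y.append(b0 * s + b1 * s1 + b2 * s2)
        s1, s2 = s, s1
    return np.array(y), (s1, s2)

x = np.arange(8.0)
b, a = (0.2, 0.3, 0.2), (-0.4, 0.1)       # illustrative stable coefficients
whole, _ = biquad_with_state(x, b, a)      # filter everything at once
y1, st = biquad_with_state(x[:4], b, a)    # first block, default zero state
y2, _ = biquad_with_state(x[4:], b, a, st) # second block continues from st
```

Concatenating `y1` and `y2` reproduces `whole` exactly; starting the second block from zero state instead would reintroduce the transient.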
Figure 3-22 shows the transient response and the steady state for an IIR filter.
Figure 3-22. Transient Response and Steady State for an IIR Filter
Comparing FIR and IIR Filters
Because designing digital filters involves making compromises that emphasize a desirable filter characteristic over a less desirable one, comparing FIR and IIR filters can help guide you in selecting the appropriate filter design for a particular application.
IIR filters can achieve the same level of attenuation as FIR filters but with far fewer coefficients. Therefore, an IIR filter can provide a significantly faster and more efficient filtering operation than an FIR filter.
You can design FIR filters to provide a linear-phase response, whereas IIR filters have a nonlinear-phase response. Use FIR filters for applications that require linear-phase responses. Use IIR filters for applications that do not require phase information, such as signal monitoring applications.
Refer to the Selecting a Digital Filter Design section of this chapter for more information about selecting a digital filter type.
Nonlinear Filters
Smoothing windows, IIR filters, and FIR filters are linear because they satisfy the superposition and proportionality principles, as shown in Equation 3-14:

    L{a·x(t) + b·y(t)} = a·L{x(t)} + b·L{y(t)}        (3-14)

where a and b are constants, x(t) and y(t) are signals, L{•} is a linear filtering operation, and inputs and outputs are related through the convolution operation, as shown in Equations 3-9 and 3-11.
A nonlinear filter does not satisfy Equation 3-14. Also, you cannot obtain the output signals of a nonlinear filter through the convolution operation, because a set of coefficients cannot characterize the impulse response of the filter. Nonlinear filters provide specific filtering characteristics that are difficult to obtain using linear techniques.
The median filter, a nonlinear filter, combines lowpass filter characteristics with high-frequency characteristics. The lowpass filter characteristics allow the median filter to remove high-frequency noise. The high-frequency characteristics allow the median filter to detect edges, which preserves edge information.
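A moving-median filter is short to sketch. The edge-padding convention below (repeating the end samples) is a common choice assumed for illustration; the LabVIEW Median Filter VI may handle edges differently.

```python
import numpy as np

def median_filter(x, rank):
    """Moving-median filter: output[i] is the median of the window
    x[i-rank : i+rank+1], with edges padded by repeating the end samples
    (an assumed convention)."""
    xp = np.pad(np.asarray(x, dtype=float), rank, mode='edge')
    width = 2 * rank + 1
    return np.array([np.median(xp[i:i + width]) for i in range(len(x))])
```

This shows both halves of the behavior described above: an isolated noise spike is removed entirely (lowpass-like behavior), while a step edge passes through unshifted and unsmeared (edge preservation), which no linear lowpass filter can do.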
Example: Analyzing a Noisy Pulse with a Median Filter
The Pulse Parameters VI analyzes an input sequence for a pulse pattern and determines the best set of pulse parameters that describes the pulse. After the VI completes modal analysis to determine the baseline and the top of the input sequence, discriminating between noise and signal becomes difficult without more information. Therefore, to accurately determine the pulse parameters, the peak amplitude of the noise portion of the input sequence must be less than or equal to 50% of the expected pulse amplitude. In some practical applications, a 50% pulse-to-noise ratio is difficult to achieve, so achieving the necessary ratio requires a preprocessing operation to extract the pulse information.
If the pulse is buried in noise whose expected peak amplitude exceeds 50% of the expected pulse amplitude, you can use a lowpass filter to remove some of the unwanted noise. However, the filter also shifts the signal in time and smears the edges of the pulse because the transition edges contain high-frequency information. A median filter can extract the pulse more effectively than a lowpass filter because the median filter removes high-frequency noise while preserving edge information.
Figure 3-23 shows the block diagram of a VI that generates and analyzes a noisy pulse.
Figure 3-23. Using a Median Filter to Extract Pulse Information
The VI in Figure 3-23 generates a noisy pulse with an expected peak noise amplitude greater than 100% of the expected pulse amplitude. The signal the VI in Figure 3-23 generates has the following ideal pulse values:
• Amplitude of 5.0 V
• Delay of 64 samples
• Width of 32 samples
Figure 3-24 shows the noisy pulse, the filtered pulse, and the estimated pulse parameters returned by the VI in Figure 3-23.
Figure 3-24. Noisy Pulse and Pulse Filtered with Median Filter
In Figure 3-24, you can track the pulse signal produced by the median filter, even though noise obscures the pulse.
You can remove the high-frequency noise with the Median Filter VI to achieve the 50% pulse-to-noise ratio the Pulse Parameters VI needs to complete the analysis accurately.
Selecting a Digital Filter Design
Answer the following questions to select a filter for an application:
• Does the analysis require a linear-phase response?
• Can the analysis tolerate ripple?
• Does the analysis require a narrow transition band?
Use Figure 3-25 as a guideline for selecting the appropriate filter for an analysis application.
Figure 3-25. Filter Flowchart
Figure 3-25 can provide guidance for selecting an appropriate filter type.
However, you might need to experiment with several filter types to find the
best type.
(The flowchart steps through a series of yes/no questions, including: Linear phase? Ripple acceptable? Narrow transition band? Ripple in passband? Ripple in stopband? Multiband filter specifications? Narrowest possible transition region? Depending on the answers, it recommends an FIR filter, a low-order or high-order Butterworth filter, a Chebyshev filter, an inverse Chebyshev filter, or an elliptic filter.)
4
Frequency Analysis
This chapter describes the fundamentals of the discrete Fourier transform
(DFT), the fast Fourier transform (FFT), basic signal analysis
computations, computations performed on the power spectrum, and how to
use FFT-based functions for network measurement. Use the NI Example
Finder to find examples of using the digital signal processing VIs and the
measurement analysis VIs to perform FFT and frequency analysis.
Differences between Frequency Domain and
Time Domain
The time-domain representation gives the amplitudes of the signal at the
instants of time during which it was sampled. However, in many cases you
need to know the frequency content of a signal rather than the amplitudes
of the individual samples.
Fourier’s theorem states that any waveform in the time domain can be
represented by the weighted sum of sines and cosines. The same waveform
then can be represented in the frequency domain as a pair of amplitude and
phase values at each component frequency.
You can generate any waveform by adding sine waves, each with a
particular amplitude and phase. Figure 4-1 shows the original waveform,
labeled sum, and its component frequencies. The fundamental frequency is
shown at the frequency f0, the second harmonic at frequency 2f0, and the
third harmonic at frequency 3f0.
Figure 4-1. Signal Formed by Adding Three Frequency Components
In the frequency domain, you can separate conceptually the sine waves that
add to form the complex time-domain signal. Figure 4-1 shows single
frequency components, which spread out in the time domain, as distinct
impulses in the frequency domain. The amplitude of each frequency line
is the amplitude of the time waveform for that frequency component.
The representation of a signal in terms of its individual frequency
components is the frequency-domain representation of the signal. The
frequency-domain representation might provide more insight about the
signal and the system from which it was generated.
The samples of a signal obtained from a DAQ device constitute the
time-domain representation of the signal. Some measurements, such
as harmonic distortion, are difficult to quantify by inspecting the time
waveform on an oscilloscope. When the same signal is displayed in
the frequency domain by an FFT Analyzer, also known as a Dynamic
Signal Analyzer, you easily can measure the harmonic frequencies and
amplitudes.
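For example, a small harmonic that is hard to see on a time-domain plot stands out immediately in the frequency domain. A minimal NumPy sketch (the frequencies and amplitudes are chosen only for illustration):

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
# Fundamental at 50 Hz plus a much smaller third harmonic at 150 Hz.
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 150 * t)

mag = np.abs(np.fft.fft(x)) / n          # magnitude spectrum, scaled by n
peaks = np.flatnonzero(mag[:n // 2] > 0.01)
```

The two tones land exactly on frequency bins here (the resolution is 1 Hz), so the spectrum shows two clean lines at 50 Hz and 150 Hz.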
Parseval’s Relationship
Parseval’s theorem states that the total energy computed in the time
domain must equal the total energy computed in the frequency domain.
It is a statement of conservation of energy. The following equation defines
the continuous form of Parseval’s relationship.

∫_{–∞}^{∞} |x(t)|² dt = ∫_{–∞}^{∞} |X(f)|² df

The following equation defines the discrete form of Parseval’s relationship.

Σ_{i=0}^{n–1} |x_i|² = (1/n) Σ_{k=0}^{n–1} |X_k|²   (4-1)

where x_i ⇔ X_k is a discrete FFT pair and n is the number of elements in the
sequence.
Figure 4-2 shows the block diagram of a VI that demonstrates Parseval’s
relationship.
Figure 4-2. VI Demonstrating Parseval’s Theorem
The VI in Figure 4-2 produces a real input sequence. The upper branch on
the block diagram computes the energy of the time-domain signal using the
left side of Equation 4-1. The lower branch on the block diagram converts
the time-domain signal to the frequency domain and computes the energy
of the frequency-domain signal using the right side of Equation 4-1.
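Equation 4-1 is easy to check numerically. The following NumPy sketch mirrors the two branches of the VI in Figure 4-2, substituting a random sequence for the VI's generated signal:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
x = rng.standard_normal(n)              # real input sequence
X = np.fft.fft(x)

time_energy = np.sum(x ** 2)                # left side of Equation 4-1
freq_energy = np.sum(np.abs(X) ** 2) / n    # right side of Equation 4-1

assert np.isclose(time_energy, freq_energy)
```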
Figure 4-3 shows the results returned by the VI in Figure 4-2.
Figure 4-3. Results from Parseval VI
In Figure 4-3, the total computed energy in the time domain equals the total
computed energy in the frequency domain.
Fourier Transform
The Fourier transform provides a method for examining a relationship in
terms of the frequency domain. The most common applications of the
Fourier transform are the analysis of linear time-invariant systems and
spectral analysis.
The following equation defines the two-sided Fourier transform.

X(f) = F{x(t)} = ∫_{–∞}^{∞} x(t) e^{–j2πft} dt

The following equation defines the two-sided inverse Fourier transform.

x(t) = F^{–1}{X(f)} = ∫_{–∞}^{∞} X(f) e^{j2πft} df

Two-sided means that the mathematical implementation of the forward and
inverse Fourier transform considers all negative and positive frequencies
and time of the signal. Single-sided means that the mathematical
implementation of the transforms considers only the positive frequencies
and time history of the signal.
A Fourier transform pair consists of the signal representation in both the
time and frequency domains. The following relationship commonly denotes
a Fourier transform pair.

x(t) ⇔ X(f)
Discrete Fourier Transform (DFT)
The algorithm used to transform samples of the data from the time domain
into the frequency domain is the discrete Fourier transform (DFT). The
DFT establishes the relationship between the samples of a signal in the
time domain and their representation in the frequency domain. The DFT
is widely used in the fields of spectral analysis, applied mechanics,
acoustics, medical imaging, numerical analysis, instrumentation, and
telecommunications. Figure 4-4 illustrates using the DFT to transform
data from the time domain into the frequency domain.
Figure 4-4. Discrete Fourier Transform
Suppose you obtained N samples of a signal from a DAQ device. If you
apply the DFT to the N samples of this time-domain representation of the
signal, the result also has a length of N samples, but the information it
contains is the frequency-domain representation.
Relationship between N Samples in the Frequency and Time Domains
If a signal is sampled at a given sampling rate, Equation 4-2 defines the
time interval between the samples, or the sampling interval.

Δt = 1/fs   (4-2)

where Δt is the sampling interval and fs is the sampling rate in samples per
second (S/s).
The reciprocal of the total acquisition time, NΔt, is the smallest frequency
that the system can resolve through the DFT or related routines.
Equation 4-3 defines the DFT. The equation results in X[k], the
frequency-domain representation of the sampled signal.

X[k] = Σ_{i=0}^{N–1} x[i] e^{–j2πik/N},   for k = 0, 1, 2, …, N – 1   (4-3)

where x[i] is the time-domain representation of the sampled signal and N is
the total number of samples. Both the time domain x and the frequency
domain X have a total of N samples.
Similar to the time spacing of Δt between the samples of x in the time
domain, you have a frequency spacing, or frequency resolution, between
the components of X in the frequency domain, which Equation 4-4 defines.

Δf = fs/N = 1/(NΔt)   (4-4)

where Δf is the frequency resolution, fs is the sampling rate, N is the number
of samples, Δt is the sampling interval, and NΔt is the total acquisition time.
To improve the frequency resolution, that is, to decrease Δf, you must
increase N and keep fs constant or decrease fs and keep N constant. Both
approaches are equivalent to increasing NΔt, which is the time duration of
the acquired samples.
Example of Calculating the DFT
This section provides an example of using Equation 4-3 to calculate the
DFT for a DC signal. This example uses the following assumptions:
• X[0] corresponds to the DC component, or the average value, of the
signal.
• The DC signal has a constant amplitude of +1 V.
• The number of samples is four.
• Each of the samples has a value of +1, as shown in Figure 4-5.
• The resulting time sequence for the four samples is given by the
following equation.

x[0] = x[1] = x[2] = x[3] = 1
Figure 4-5. Time Sequence for DFT Samples
(The figure shows the four samples, x[0] through x[3], each with an amplitude of +1 V, at times 0 through 3.)
The DFT calculation makes use of Euler’s identity, which is given by the
following equation.

e^{–jθ} = cos(θ) – j sin(θ)

If you use Equation 4-3 to calculate the DFT of the sequence shown in
Figure 4-5 and use Euler’s identity, you get the following equations.

X[0] = Σ_{i=0}^{N–1} x_i e^{–j2πi(0)/N} = x[0] + x[1] + x[2] + x[3] = 4

X[1] = x[0] + x[1](cos(π/2) – j sin(π/2)) + x[2](cos(π) – j sin(π))
       + x[3](cos(3π/2) – j sin(3π/2)) = (1 – j – 1 + j) = 0

X[2] = x[0] + x[1](cos(π) – j sin(π)) + x[2](cos(2π) – j sin(2π))
       + x[3](cos(3π) – j sin(3π)) = (1 – 1 + 1 – 1) = 0

X[3] = x[0] + x[1](cos(3π/2) – j sin(3π/2)) + x[2](cos(3π) – j sin(3π))
       + x[3](cos(9π/2) – j sin(9π/2)) = (1 + j – 1 – j) = 0

where X[0] is the DC component and N is the number of samples.
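You can reproduce this hand calculation numerically; NumPy's FFT uses the same sign convention as Equation 4-3, so the sketch below confirms X[0] = 4 and X[1] = X[2] = X[3] = 0 for the four-sample DC sequence:

```python
import numpy as np

x = np.ones(4)            # x[0] = x[1] = x[2] = x[3] = 1
X = np.fft.fft(x)

assert np.isclose(X[0], 4.0)           # DC component equals N
assert np.allclose(X[1:], 0.0)         # all other components are zero
assert np.isclose(X[0] / len(x), 1.0)  # dividing by N recovers the 1 V amplitude
```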
Therefore, except for the DC component, all other values for the sequence
shown in Figure 4-5 are zero, which is as expected. However, the calculated
value of X[0] depends on the value of N. Because in this example N = 4,
X[0] = 4. If N = 10, the calculation results in X[0] = 10. This dependency of
X[ ] on N also occurs for the other frequency components. Therefore, you
usually divide the DFT output by N to obtain the correct magnitude of the
frequency component.
Magnitude and Phase Information
N samples of the input signal result in N samples of the DFT. That is, the
number of samples in both the time and frequency representations is the
same. Equation 4-3 shows that regardless of whether the input signal x[i] is
real or complex, X[k] is always complex, although the imaginary part may
be zero. In other words, every frequency component has a magnitude and
phase.
Normally the magnitude of the spectrum is displayed. The magnitude is the
square root of the sum of the squares of the real and imaginary parts.
The phase is relative to the start of the time record or relative to a
single-cycle cosine wave starting at the beginning of the time record.
Single-channel phase measurements are stable only if the input signal is
triggered. Dual-channel phase measurements compute phase differences
between channels, so if the channels are sampled simultaneously, triggering
usually is not necessary.
The phase is the arctangent of the ratio of the imaginary and real parts and
is usually between π and –π radians, or 180 and –180 degrees.
For real signals (x[i] real), such as those you obtain from the output of one
channel of a DAQ device, the DFT is symmetric, with properties given by
the following equations.

|X[k]| = |X[N – k]|
phase(X[k]) = –phase(X[N – k])

The magnitude of X[k] is even symmetric, and phase(X[k]) is odd
symmetric. An even symmetric signal is symmetric about the y-axis, and an
odd symmetric signal is symmetric about the origin. Figure 4-6 illustrates
even and odd symmetry.
Figure 4-6. Signal Symmetry
Because of this symmetry, the N samples of the DFT contain repetition of
information. Because of this repetition of information, only half of the
samples of the DFT actually need to be computed or displayed because you
can obtain the other half from this repetition. If the input signal is complex,
the DFT is asymmetrical, and you cannot use only half of the samples to
obtain the other half.
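The symmetry properties are straightforward to verify numerically, as in this NumPy sketch with an arbitrary real sequence:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
x = rng.standard_normal(N)      # real-valued input
X = np.fft.fft(x)

# For real x, X[k] and X[N - k] are complex conjugates:
# equal magnitude, opposite phase.
for k in range(1, N):
    assert np.isclose(np.abs(X[k]), np.abs(X[N - k]))
    assert np.allclose(X[k], np.conj(X[N - k]))
```

This conjugate symmetry is exactly why only half of the DFT samples of a real signal need to be computed or displayed.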
Frequency Spacing between DFT Samples
If the sampling interval is Δt seconds and the first data sample (k = 0)
is at 0 seconds, the kth data sample, where k > 0 and is an integer, is at
kΔt seconds. Similarly, if the frequency resolution is Δf Hz, the kth sample
of the DFT occurs at a frequency of kΔf Hz. However, this is valid for only
up to the first half of the frequency components. The other half represent
negative frequency components.
Depending on whether the number of samples N is even or odd, you can
have a different interpretation of the frequency corresponding to the
kth sample of the DFT. For example, let N = 8 and p represent the index of
the Nyquist frequency, p = N/2 = 4. Table 4-1 shows the frequency to which
each element of the complex output sequence X corresponds.
Table 4-1. X[p] for N = 8

X[p]    Frequency
X[0]    DC
X[1]    Δf
X[2]    2Δf
X[3]    3Δf
X[4]    4Δf (Nyquist frequency)
X[5]    –3Δf
X[6]    –2Δf
X[7]    –Δf

The negative entries in the second column beyond the Nyquist frequency
represent negative frequencies, that is, those elements with an index
value > p.
For N = 8, X[1] and X[7] have the same magnitude; X[2] and X[6] have
the same magnitude; and X[3] and X[5] have the same magnitude. The
difference is that X[1], X[2], and X[3] correspond to positive frequency
components, while X[5], X[6], and X[7] correspond to negative frequency
components. X[4] is at the Nyquist frequency.
Figure 4-7 illustrates the complex output sequence X for N = 8.
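NumPy's fftfreq helper returns the same bin-to-frequency mapping. One caveat in the sketch below: for even N, fftfreq labels the Nyquist bin as –(N/2)Δf rather than +(N/2)Δf, although it refers to the same component.

```python
import numpy as np

N, fs = 8, 8000.0
df = fs / N
freqs = np.fft.fftfreq(N, d=1.0 / fs)

# Matches Table 4-1, except the Nyquist bin X[4] is reported as -4*df.
expected = [0, df, 2 * df, 3 * df, -4 * df, -3 * df, -2 * df, -df]
assert np.allclose(freqs, expected)
```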
Figure 4-7. Complex Output Sequence X for N = 8
A representation where you see the positive and negative frequencies is the
two-sided transform.
When N is odd, there is no component at the Nyquist frequency. Table 4-2
lists the frequency for each X[p] when N = 7 and p = (N – 1)/2 = (7 – 1)/2 = 3.
For N = 7, X[1] and X[6] have the same magnitude; X[2] and X[5] have the
same magnitude; and X[3] and X[4] have the same magnitude. However,
X[1], X[2], and X[3] correspond to positive frequencies, while X[4], X[5],
and X[6] correspond to negative frequencies. Because N is odd, there is no
component at the Nyquist frequency.
Table 4-2. X[p] for N = 7

X[p]    Frequency
X[0]    DC
X[1]    Δf
X[2]    2Δf
X[3]    3Δf
X[4]    –3Δf
X[5]    –2Δf
X[6]    –Δf
Figure 4-8 illustrates the complex output sequence X[p] for N = 7.
Figure 4-8. Complex Output Sequence X[p] for N = 7
Figure 4-8 also shows a two-sided transform because it represents the
positive and negative frequencies.
FFT Fundamentals
Directly implementing the DFT on N data samples requires approximately
N² complex operations and is a time-consuming process. The FFT is a
fast algorithm for calculating the DFT. The following equation defines
the DFT.

X(k) = Σ_{n=0}^{N–1} x(n) e^{–j2πnk/N}

The following measurements comprise the basic functions for FFT-based
signal analysis:
• FFT
• Power spectrum
• Cross power spectrum
You can use the basic functions as the building blocks for creating
additional measurement functions, such as the frequency response,
impulse response, coherence, amplitude spectrum, and phase spectrum.
The FFT and the power spectrum are useful for measuring the frequency
content of stationary or transient signals. The FFT produces the average
frequency content of a signal over the total acquisition. Therefore, use the
FFT for stationary signal analysis or in cases where you need only the
average energy at each frequency line.
An FFT is equivalent to a set of parallel filters of bandwidth Δf centered at
each frequency increment from DC to (fs/2) – (fs/N). Therefore, frequency
lines also are known as frequency bins or FFT bins.
Refer to the Power Spectrum section of this chapter for more information
about the power spectrum.
Computing Frequency Components
Each frequency component is the result of a dot product of the time-domain
signal with the complex exponential at that frequency and is given by the
following equation.

X(k) = Σ_{n=0}^{N–1} x(n) e^{–j2πnk/N}
     = Σ_{n=0}^{N–1} x(n) [cos(2πnk/N) – j sin(2πnk/N)]

The DC component is the dot product of x(n) with [cos(0) – j sin(0)], or
with 1.0.
The first bin, or frequency component, is the dot product of x(n) with
cos(2πn/N) – j sin(2πn/N). Here, cos(2πn/N) is a single cycle of the cosine
wave, and sin(2πn/N) is a single cycle of the sine wave.
In general, bin k is the dot product of x(n) with k cycles of the cosine wave
for the real part of X(k) and k cycles of the sine wave for the imaginary part
of X(k).
The use of the FFT for frequency analysis implies two important
relationships.
The first relationship links the highest frequency that can be analyzed to the
sampling frequency and is given by the following equation.

Fmax = fs/2

where Fmax is the highest frequency that can be analyzed and fs is the
sampling frequency. Refer to the Windowing section of this chapter for
more information about Fmax.
The second relationship links the frequency resolution to the total
acquisition time, which is related to the sampling frequency and the
block size of the FFT and is given by the following equation.

Δf = 1/T = fs/N

where Δf is the frequency resolution, T is the acquisition time, fs is the
sampling frequency, and N is the block size of the FFT.
Fast FFT Sizes
When the size of the input sequence is a power of two, N = 2^m, you can
implement the computation of the DFT with approximately N log2(N)
operations, which makes the calculation of the DFT much faster.
DSP literature refers to the algorithms for faster DFT calculation as
fast Fourier transforms (FFTs). Common input sequence sizes that are
a power of two include 512, 1,024, and 2,048.
When the size of the input sequence is not a power of two but is factorable
as the product of small prime numbers, the FFT-based VIs use a mixed-radix
Cooley-Tukey algorithm to efficiently compute the DFT of the input
sequence. For example, Equation 4-5 defines an input sequence size N as
the product of small prime numbers.

N = 2^m 3^k 5^j   for m, k, j = 0, 1, 2, 3, …   (4-5)

For the input sequence size defined by Equation 4-5, the FFT-based VIs can
compute the DFT with speeds comparable to an FFT whose input sequence
size is a power of two. Common input sequence sizes that are factorable as
the product of small prime numbers include 480, 640, 1,000, and 2,000.
Zero Padding
Zero padding is a technique typically employed to make the size of the
input sequence equal to a power of two. In zero padding, you add zeros to
the end of the input sequence so that the total number of samples is equal
to the next higher power of two. For example, if you have 10 samples of
a signal, you can add six zeros to make the total number of samples equal
to 16, or 2^4, which is a power of two. Figure 4-9 illustrates padding
10 samples of a signal with zeros to make the total number of samples
equal 16.
Figure 4-9. Zero Padding
The addition of zeros to the end of the time-domain waveform does
not improve the underlying frequency resolution associated with the
time-domain signal. The only way to improve the frequency resolution of
the time-domain signal is to increase the acquisition time and acquire
longer time records.
In addition to making the total number of samples a power of two so that
faster computation is made possible by using the FFT, zero padding can
lead to an interpolated FFT result, which can produce a higher display
resolution.
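In NumPy, zero padding can be done by passing a larger transform size, which is equivalent to appending the zeros by hand, as this sketch shows:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(10)                  # 10 samples of a signal

X_padded = np.fft.fft(x, n=16)               # pads to 16 = 2**4 internally
X_manual = np.fft.fft(np.concatenate([x, np.zeros(6)]))

assert len(X_padded) == 16
assert np.allclose(X_padded, X_manual)
```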
FFT VI
The polymorphic FFT VI computes the FFT of a signal and has two
instances: Real FFT and Complex FFT.
The difference between the two instances is that the Real FFT instance
computes the FFT of a real-valued signal, whereas the Complex FFT
instance computes the FFT of a complex-valued signal. However, the
outputs of both instances are complex.
Most real-world signals are real valued. Therefore, you can use the
Real FFT instance for most applications. You also can use the Complex
FFT instance by setting the imaginary part of the signal to zero.
An example of an application where you use the Complex FFT instance
is when the signal consists of both a real and an imaginary component.
A signal consisting of a real and an imaginary component occurs frequently
in the field of telecommunications, where you modulate a waveform by a
complex exponential. The process of modulation by a complex exponential
results in a complex signal, as shown in Figure 4-10.
Figure 4-10. Modulation by a Complex Exponential
(The figure shows x(t) modulated by exp(–jωt) to produce y(t) = x(t)cos(ωt) – jx(t)sin(ωt).)
Displaying Frequency Information from Transforms
The discrete implementation of the Fourier transform maps a digital signal
into its Fourier series coefficients, or harmonics. Unfortunately, neither a
time nor a frequency stamp is directly associated with the FFT operation.
Therefore, you must specify the sampling interval ∆t.
Because an acquired array of samples represents a progression of equally
spaced samples in time, you can determine the corresponding frequency in
hertz. The following equation gives the sampling frequency fs for Δt.

fs = 1/Δt

Figure 4-11 shows the block diagram of a VI that properly displays
frequency information given the sampling interval 1.000E–3 and returns
the value for the frequency interval Δf.
Figure 4-11. Correctly Displaying Frequency Information
Figure 4-12 shows the display and Δf that the VI in Figure 4-11 returns.
Figure 4-12. Properly Displayed Frequency Information
Two other common ways of presenting frequency information are
displaying the DC component in the center and displaying one-sided
spectrums. Refer to the Two-Sided, DC-Centered FFT section of this
chapter for information about displaying the DC component in the center.
Refer to the Power Spectrum section of this chapter for information about
displaying one-sided spectrums.
Two-Sided, DC-Centered FFT
The two-sided, DC-centered FFT provides a method for displaying a
spectrum with both positive and negative frequencies. Most introductory
textbooks that discuss the Fourier transform and its properties present a
table of two-sided Fourier transform pairs. You can use the frequency-
shifting property of the Fourier transform to obtain a two-sided,
DC-centered representation. In a two-sided, DC-centered FFT, the
DC component is in the middle of the buffer.
Mathematical Representation of a Two-Sided, DC-Centered FFT
If x(t) ⇔ X(f) is a Fourier transform pair, then

x(t) e^{j2πf0t} ⇔ X(f – f0)

Let

Δt = 1/fs

where fs is the sampling frequency in the discrete representation of the time
signal.
Set f0 to the index corresponding to the Nyquist component fN, as shown in
the following equation.

f0 = fN = fs/2 = 1/(2Δt)

f0 is set to the index corresponding to fN because causing the DC component
to appear in the location of the Nyquist component requires a frequency
shift equal to fN.
Setting f0 to the index corresponding to fN results in the discrete Fourier
transform pair shown in the following relationship.

x_i e^{jiπ} ⇔ X_{k – n/2}

where n is the number of elements in the discrete sequence, x_i is the
time-domain sequence, and X_k is the frequency-domain representation of x_i.
Expanding the exponential term in the time-domain sequence produces the
following equation.

e^{jiπ} = cos(iπ) + j sin(iπ) = +1 if i is even, –1 if i is odd   (4-6)

Equation 4-6 represents a sequence of alternating +1 and –1. Equation 4-6
means that negating the odd elements of the original time-domain sequence
and performing an FFT on the new sequence produces a spectrum whose
DC component appears in the center of the sequence.
Therefore, if the original input sequence is

X = {x_0, x_1, x_2, x_3, …, x_{n–1}}

then the sequence

Y = {x_0, –x_1, x_2, –x_3, …, x_{n–1}}   (4-7)

generates a DC-centered spectrum.
Creating a Two-Sided, DC-Centered FFT
You can modulate a signal by the Nyquist frequency in place without extra
buffers. Figure 4-13 shows the block diagram of the Nyquist Shift VI
located in labview\examples\analysis\dspxmpl.llb, which
generates the sequence shown in Equation 4-7.
Figure 4-13. Block Diagram of the Nyquist Shift VI
In Figure 4-13, the For Loop iterates through the input sequence,
alternately multiplying array elements by 1.0 and –1.0, until it processes
the entire input array.
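For an even-length sequence, this in-place Nyquist shift is equivalent to computing the FFT and then swapping the two halves of the spectrum, which is what NumPy's fftshift does; the sketch below checks the equivalence:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64                                  # even-sized input sequence
x = rng.standard_normal(n)

y = x * (-1.0) ** np.arange(n)          # negate the odd elements (Equation 4-7)

# FFT of the shifted sequence equals the DC-centered (half-swapped) spectrum.
assert np.allclose(np.fft.fft(y), np.fft.fftshift(np.fft.fft(x)))
```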
Figure 4-14 shows the block diagram of a VI that generates a time-domain
sequence and uses the Nyquist Shift and Power Spectrum VIs to produce a
DC-centered spectrum.
Figure 4-14. Generating Time-Domain Sequence and DC-Centered Spectrum
In the VI in Figure 4-14, the Nyquist Shift VI preprocesses the
time-domain sequence by negating every other element in the sequence.
The Power Spectrum VI transforms the data into the frequency domain. To
display the frequency axis of the processed data correctly, you must supply
x0, which is the x-axis value of the initial frequency bin. For a DC-centered
spectrum, the following equation computes x0.

x0 = –n/2

Figure 4-15 shows the time-domain sequence and DC-centered spectrum
the VI in Figure 4-14 returns.
Figure 4-15. Raw Time-Domain Sequence and DC-Centered Spectrum
In the DC-centered spectrum display in Figure 4-15, the DC component
appears in the center of the display at f = 0. The overall format resembles
that commonly found in tables of Fourier transform pairs.
You can create DC-centered spectra for even-sized input sequences by
negating the odd elements of the input sequence.
You cannot create DC-centered spectra by directly negating the odd
elements of an input time-domain sequence containing an odd number of
elements because the Nyquist frequency appears between two frequency
bins. To create DC-centered spectra for odd-sized input sequences, you
must rotate the FFT arrays by the amount given in the following
relationship.

(n – 1)/2
For a DC-centered spectrum created from an odd-sized input sequence, the
following equation computes x0.

x0 = –(n – 1)/2

Power Spectrum
As described in the Magnitude and Phase Information section of this
chapter, the DFT or FFT of a real signal is a complex number, having a real
and an imaginary part. You can obtain the power in each frequency
component represented by the DFT or FFT by squaring the magnitude
of that frequency component. Thus, the power in the kth frequency
component, that is, the kth element of the DFT or FFT, is given by the
following equation.

power = |X[k]|²

where |X[k]| is the magnitude of the frequency component. Refer to the
Magnitude and Phase Information section of this chapter for information
about computing the magnitude of the frequency components.
The power spectrum returns an array that contains the two-sided power
spectrum of a time-domain signal and that shows the power in each of the
frequency components. You can use Equation 4-8 to compute the two-sided
power spectrum from the FFT.

Power Spectrum S_AA(f) = FFT(A) × FFT*(A) / N²   (4-8)

where FFT*(A) denotes the complex conjugate of FFT(A). The complex
conjugate of FFT(A) results from negating the imaginary part of FFT(A).
The values of the elements in the power spectrum array are proportional
to the magnitude squared of each frequency component making up the
time-domain signal. Because the DFT or FFT of a real signal is symmetric,
the power at a positive frequency of kΔf is the same as the power at the
corresponding negative frequency of –kΔf, excluding DC and Nyquist
components. The total power in the DC component is |X[0]|². The total
power in the Nyquist component is |X[N/2]|².
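Equation 4-8 is easy to exercise numerically. This NumPy sketch builds a signal in the spirit of Figure 4-16 (one 3 Vrms sine at 128 Hz plus a 2 VDC offset; fs and N are chosen here so the sine falls exactly on a frequency bin):

```python
import numpy as np

N, fs = 1024, 1024.0
t = np.arange(N) / fs
A = 3.0 * np.sqrt(2.0)                       # 3 Vrms sine -> ~4.2426 V peak
x = 2.0 + A * np.sin(2 * np.pi * 128.0 * t)  # 2 VDC + 128 Hz sine

X = np.fft.fft(x)
S = (X * np.conj(X)).real / N ** 2           # Equation 4-8

assert np.isclose(S[0], 4.0)                 # DC height: A0**2 = 2**2
assert np.isclose(S[128], A ** 2 / 4)        # positive-frequency peak: Ak**2/4
assert np.isclose(S[N - 128], A ** 2 / 4)    # matching negative-frequency peak
```

The peak heights computed here anticipate the A_k²/4 relationship discussed next.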
A plot of the two-sided power spectrum shows negative and positive
frequency components at a height given by the following relationship.

A_k²/4

where A_k is the peak amplitude of the sinusoidal component at frequency k.
The DC component has a height of A_0², where A_0 is the amplitude of the DC
component in the signal.
Figure 4-16 shows the power spectrum result from a time-domain signal
that consists of a 3 Vrms sine wave at 128 Hz, a 3 Vrms sine wave at 256 Hz,
and a DC component of 2 VDC. A 3 Vrms sine wave has a peak voltage
of 3.0 · √2, or about 4.2426 V. The power spectrum is computed from the
basic FFT function, as shown in Equation 4-8.
Figure 4-16. Two-Sided Power Spectrum of Signal
Converting a Two-Sided Power Spectrum to a Single-Sided Power
Spectrum
Most frequency analysis instruments display only the positive half of
the frequency spectrum because the spectrum of a real-world signal is
symmetrical around DC. Thus, the negative frequency information is
redundant. The two-sided results from the analysis functions include the
positive half of the spectrum followed by the negative half of the spectrum,
as shown in Figure 4-16.
A two-sided power spectrum displays half the energy at the positive
frequency and half the energy at the negative frequency. Therefore, to
convert a two-sided spectrum to a single-sided spectrum, you discard the
second half of the array and multiply every point except for DC by two, as
shown in the following equations.

G_AA(i) = S_AA(i),   i = 0 (DC)
G_AA(i) = 2 S_AA(i),   i = 1 to (N/2) – 1

where S_AA(i) is the two-sided power spectrum, G_AA(i) is the single-sided
power spectrum, and N is the length of the two-sided power spectrum. You
discard the remainder of the two-sided power spectrum S_AA, N/2 through
N – 1.
The non-DC values in the single-sided spectrum have a height given by the
following relationship.

A_k²/2   (4-9)

Equation 4-9 is equivalent to the following relationship.

(A_k/√2)²

where A_k/√2 is the root mean square (rms) amplitude of the sinusoidal
component at frequency k.
The units of a power spectrum are often quantity squared rms, where
quantity is the unit of the time-domain signal. For example, the single-sided
power spectrum of a voltage waveform is in volts rms squared, V²rms.
Figure 4-17 shows the single-sided spectrum of the signal whose two-sided
spectrum Figure 4-16 shows.
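Continuing the numeric example from the Power Spectrum discussion, the sketch below converts a two-sided spectrum to single-sided form and recovers the 9 V²rms peak expected for a 3 Vrms sine (the fs, N, and tone placement are illustrative choices):

```python
import numpy as np

N, fs = 1024, 1024.0
t = np.arange(N) / fs
x = 2.0 + 3.0 * np.sqrt(2.0) * np.sin(2 * np.pi * 128.0 * t)

S = np.abs(np.fft.fft(x)) ** 2 / N ** 2      # two-sided power spectrum (Eq. 4-8)
G = S[: N // 2].copy()                       # discard the negative-frequency half
G[1:] *= 2.0                                 # double every non-DC point

assert np.isclose(G[0], 4.0)                 # DC: A0**2, unchanged
assert np.isclose(G[128], 9.0)               # 3 Vrms sine -> (3 Vrms)**2 = 9
```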
Figure 4-17. Single-Sided Power Spectrum
In Figure 4-17, the height of the non-DC frequency components is twice the
height of the non-DC frequency components in Figure 4-16. Also, the
spectrum in Figure 4-17 stops at half the frequency of that in Figure 4-16.
Loss of Phase Information
Because the power is obtained by squaring the magnitude of the DFT or
FFT, the power spectrum is always real. The disadvantage of obtaining the
power by squaring the magnitude of the DFT or FFT is that the phase
information is lost. If you want phase information, you must use the DFT
or FFT, which gives you a complex output.
You can use the power spectrum in applications where phase information is
not necessary, such as calculating the harmonic power in a signal. You can
apply a sinusoidal input to a nonlinear system and see the power in the
harmonics at the system output.
Computations on the Spectrum
When you have the amplitude or power spectrum, you can compute several
useful characteristics of the input signal, such as power and frequency,
noise level, and power spectral density.
Estimating Power and Frequency
If a frequency component is between two frequency lines, the frequency
component appears as energy spread among adjacent frequency lines with
reduced amplitude. The actual peak is between the two frequency lines.
You can estimate the actual frequency of a discrete frequency component
to a greater resolution than the ∆f given by the FFT by performing a
weighted average of the frequencies around a detected peak in the power spectrum, as shown in the following equation.

Estimated Frequency = [Σ from i = j – 3 to j + 3 of Power(i) · (i · ∆f)] / [Σ from i = j – 3 to j + 3 of Power(i)]

where j is the array index of the apparent peak of the frequency of interest. The span j ± 3 is reasonable because it represents a spread wider than the main lobes of the smoothing windows listed in Table 5-3, Correction Factors and Worst-Case Amplitude Errors for Smoothing Windows, of Chapter 5, Smoothing Windows.

You can estimate the power in V_rms² of a discrete peak frequency component by summing the power in the bins around the peak. In other words, you compute the area under the peak. You can use the following equation to estimate the power of a discrete peak frequency component.

Estimated Power = [Σ from i = j – 3 to j + 3 of Power(i)] / (noise power bandwidth of window)    (4-10)

Equation 4-10 is valid only for a spectrum made up of discrete frequency components. It is not valid for a continuous spectrum. Also, if two or more frequency peaks are within six lines of each other, they contribute to inflating the estimated powers and skewing the actual frequencies. You can reduce this effect by decreasing the number of lines spanned by Equation 4-10. If two peaks are within six lines of each other, it is likely that they are already interfering with one another because of spectral leakage.

If you want the total power in a given frequency range, sum the power in each bin included in the frequency range and divide by the noise power bandwidth of the smoothing window. Refer to Chapter 5, Smoothing Windows, for information about the noise power bandwidth of smoothing windows.
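These estimates can be sketched with NumPy. The 1 Hz line spacing and the 102.5 Hz test tone below are illustrative assumptions; no smoothing window is applied, so the noise power bandwidth is 1.0 bin.

```python
import numpy as np

# 1 V peak sine at 102.5 Hz falls halfway between frequency lines (delta_f = 1 Hz)
fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = np.sin(2.0 * np.pi * 102.5 * t)

df = fs / n
two_sided = np.abs(np.fft.fft(x)) ** 2 / n ** 2
power = two_sided[: n // 2].copy()
power[1:] *= 2.0                         # single-sided power spectrum, V_rms^2

j = int(np.argmax(power))                # apparent peak bin
span = np.arange(j - 3, j + 4)           # bins j - 3 .. j + 3

est_freq = np.sum(power[span] * span * df) / np.sum(power[span])
est_power = np.sum(power[span]) / 1.0    # divide by window noise power bandwidth
```

The weighted average recovers a frequency close to 102.5 Hz even though ∆f is 1 Hz, and the summed power approaches the true 0.5 V_rms² of the tone (the tails outside j ± 3 account for the small shortfall).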
Computing Noise Level and Power Spectral Density
The measurement of noise levels depends on the bandwidth of the
measurement. When looking at the noise floor of a power spectrum, you are
looking at the narrowband noise level in each FFT bin. Therefore, the noise
floor of a given power spectrum depends on the ∆f of the spectrum, which
is in turn controlled by the sampling rate and the number of points in the
data set. In other words, the noise level at each frequency line is equivalent
to the noise level obtained using a ∆f Hz filter centered at that frequency
line. Therefore, for a given sampling rate, doubling the number of data
points acquired reduces the noise power that appears in each bin by 3 dB.
Theoretically, discrete frequency components have zero bandwidth and
therefore do not scale with the number of points or frequency range of
the FFT.
To compute the signal-to-noise ratio (SNR), compare the peak power in the frequencies of interest to the broadband noise level. Compute the broadband noise level in V_rms² by summing all the power spectrum bins, excluding any peaks and the DC component, and dividing the sum by the equivalent noise bandwidth of the window.
Because of noise-level scaling with ∆f, spectra for noise measurement often are displayed in a normalized format called power or amplitude spectral density. The power or amplitude spectral density normalizes the power or amplitude spectrum to the spectrum measured by a 1 Hz-wide square filter, a convention for noise-level measurements. The level at each frequency line is equivalent to the level obtained using a 1 Hz filter centered at that frequency line.
You can use the following equation to compute the power spectral density.

Power Spectral Density = (Power Spectrum in V_rms²) / (∆f × Noise Power Bandwidth of Window)

You can use the following equation to compute the amplitude spectral density.

Amplitude Spectral Density = (Amplitude Spectrum in V_rms) / √(∆f × Noise Power Bandwidth of Window)

The spectral density format is appropriate for random or noise signals. The spectral density format is not appropriate for discrete frequency components because discrete frequency components theoretically have zero bandwidth.
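The power spectral density computation can be sketched in NumPy. The sampling rate, record length, Hanning window, and white-noise test signal below are illustrative assumptions; 1 V rms white noise over a 5 kHz bandwidth should show a flat density near 2 × 10⁻⁴ V_rms²/Hz.

```python
import numpy as np

fs, n = 10000.0, 4096
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, n)            # white noise, 1 V rms

w = np.hanning(n)
cg = w.sum() / n                       # coherent gain of the Hanning window (~0.5)
npbw = (w ** 2).sum() / n / cg ** 2    # noise power bandwidth in bins (~1.5)
df = fs / n

spec = np.fft.rfft(x * w)[: n // 2]
power = 2.0 * (np.abs(spec) / (n * cg)) ** 2   # single-sided power spectrum, V_rms^2
power[0] /= 2.0                                # DC bin is not doubled
psd = power / (df * npbw)                      # power spectral density, V_rms^2 / Hz
```

Dividing by ∆f × noise power bandwidth makes the result independent of record length and window choice, which is the point of the density format.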
Computing the Amplitude and Phase Spectrums
The power spectrum shows power as the mean squared amplitude at each
frequency line but includes no phase information. Because the power
spectrum loses phase information, you might want to use the FFT to view
both the frequency and the phase information of a signal.
The phase information the FFT provides is the phase relative to the start of the time-domain signal. Therefore, you must trigger from the same point in the signal to obtain consistent phase readings. A sine wave shows a phase of –90° at the sine wave frequency. A cosine wave shows a 0° phase.
Usually, the primary area of interest for analysis applications is either the
relative phases between components or the phase difference between two
signals acquired simultaneously. You can view the phase difference
between two signals by using some of the advanced FFT functions. Refer
to the Frequency Response and Network Analysis section of this chapter for
information about the advanced FFT functions.
The FFT produces a two-sided spectrum in complex form with real and imaginary parts. You must scale and convert the two-sided spectrum to polar form to obtain magnitude and phase. The frequency axis of the polar form is identical to the frequency axis of the two-sided power spectrum. The amplitude of the FFT is related to the number of points in the time-domain signal. Use the following equations to compute the amplitude and phase versus frequency from the FFT.
Amplitude spectrum in quantity peak = Magnitude[FFT(A)] / N
    = √( real[FFT(A)]² + imag[FFT(A)]² ) / N    (4-11)

Phase spectrum in radians = Phase[FFT(A)]
    = arctangent( imag[FFT(A)] / real[FFT(A)] )    (4-12)

where the arctangent function returns values of phase between –π and +π, a full range of 2π radians.

The following relationship defines the rectangular-to-polar conversion function.

r·e^(jφ) = FFT(A) / N    (4-13)
Using the rectangular-to-polar conversion function to convert the complex spectrum to its magnitude (r) and phase (φ) is equivalent to using Equations 4-11 and 4-12.

The two-sided amplitude spectrum actually shows half the peak amplitude at the positive and negative frequencies. To convert to the single-sided form, multiply each frequency, other than DC, by two and discard the second half of the array. The units of the single-sided amplitude spectrum are then in quantity peak and give the peak amplitude of each sinusoidal component making up the time-domain signal.

To obtain the single-sided phase spectrum, discard the second half of the array.
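The amplitude and phase computation can be sketched in NumPy. The 3 V cosine and the bin values below are illustrative assumptions.

```python
import numpy as np

fs, n = 1024.0, 1024
t = np.arange(n) / fs
a = 3.0 * np.cos(2.0 * np.pi * 100.0 * t)   # 3 V peak cosine at 100 Hz

spec = np.fft.fft(a)
amp = np.abs(spec) / n                       # two-sided amplitude, quantity peak
phase = np.arctan2(spec.imag, spec.real)     # radians, between -pi and +pi

# Single-sided: keep bins 0 .. N/2 - 1 and double the non-DC amplitudes
amp_ss = amp[: n // 2].copy()
amp_ss[1:] *= 2.0
phase_ss = phase[: n // 2]
```

At bin 100 the single-sided amplitude recovers the 3 V peak value, and the phase is 0° because the signal is a cosine.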
Calculating Amplitude in V_rms and Phase in Degrees
To view the amplitude spectrum in volts rms (V_rms), divide the non-DC components by the square root of two after converting the spectrum to the single-sided form. Because you multiply the non-DC components by two to convert from the two-sided amplitude spectrum to the single-sided amplitude spectrum, you can calculate the rms amplitude spectrum directly from the two-sided amplitude spectrum by multiplying the non-DC components by the square root of two and discarding the second half of the array. The following equations show the entire computation from a two-sided FFT to a single-sided amplitude spectrum.
Amplitude Spectrum (V_rms) = √2 × Magnitude[FFT(A)] / N    for i = 1 to (N/2) – 1

Amplitude Spectrum (V_rms) = Magnitude[FFT(A)] / N    for i = 0 (DC)

where i is the frequency line number, or array index, of the FFT of A.

The magnitude in V_rms gives the rms voltage of each sinusoidal component of the time-domain signal.

The amplitude spectrum is closely related to the power spectrum. You can compute the single-sided power spectrum by squaring the single-sided rms amplitude spectrum. Conversely, you can compute the amplitude spectrum by taking the square root of the power spectrum. Refer to the Power Spectrum section of this chapter for information about computing the power spectrum.
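The √2 scaling can be sketched directly. The 2 V test sine below is an illustrative assumption; its rms value is 2/√2 = √2 V.

```python
import numpy as np

fs, n = 1024.0, 1024
t = np.arange(n) / fs
a = 2.0 * np.sin(2.0 * np.pi * 50.0 * t)   # 2 V peak -> sqrt(2) V rms

# Two-sided magnitude, then sqrt(2) scaling of non-DC bins in one step
spec_mag = np.abs(np.fft.fft(a)) / n
amp_rms = spec_mag[: n // 2].copy()
amp_rms[1:] *= np.sqrt(2.0)
```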
Use the following equation to view the phase spectrum in degrees.

Phase Spectrum in Degrees = (180/π) × Phase[FFT(A)]

Frequency Response Function
When analyzing two simultaneously sampled channels, you usually want to know the differences between the two channels rather than the properties of each.

In a typical dual-channel analyzer, as shown in Figure 4-18, the instantaneous spectrum is computed using a window function and the FFT for each channel. The averaged FFT spectrum, auto power spectrum, and cross power spectrum are computed and used in estimating the frequency response function. You also can use the coherence function to check the validity of the frequency response function.

Figure 4-18. Dual-Channel Frequency Analysis (each channel, Ch A and Ch B, is windowed and transformed with the FFT; the averaged auto spectra and the averaged cross spectrum feed the frequency response function and coherence computations)

The frequency response of a system is described by the magnitude, H, and phase, ∠H, at each frequency. The gain of the system is the same as its magnitude and is the ratio of the output magnitude to the input magnitude at each frequency. The phase of the system is the difference of the output phase and input phase at each frequency.
Cross Power Spectrum
The cross power spectrum is not typically used as a direct measurement but is an important building block for other measurements.

Use the following equation to compute the two-sided cross power spectrum of two time-domain signals A and B.

Cross Power Spectrum: S_AB(f) = (FFT(B) × FFT*(A)) / N²

The cross power spectrum is a two-sided complex form, having real and imaginary parts. To convert the cross power spectrum to magnitude and phase, use the rectangular-to-polar conversion function from Equation 4-13.

To convert the cross power spectrum to a single-sided form, use the methods and equations from the Converting a Two-Sided Power Spectrum to a Single-Sided Power Spectrum section of this chapter. The single-sided cross power spectrum yields the product of the rms amplitudes and the phase difference between the two signals A and B. The units of the single-sided cross power spectrum are in quantity rms squared, for example, V_rms².

The power spectrum is equivalent to the cross power spectrum when signals A and B are the same signal. Therefore, the power spectrum is often referred to as the auto power spectrum or the auto spectrum.

Frequency Response and Network Analysis
You can use the following functions to characterize the frequency response of a network:
• Frequency response function
• Impulse response function
• Coherence function
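The cross power spectrum can be sketched in NumPy. The two test sines below are illustrative assumptions: the response has half the amplitude of the stimulus and lags it by 45°.

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
a = np.sin(2.0 * np.pi * 50.0 * t)                        # stimulus
b = 0.5 * np.sin(2.0 * np.pi * 50.0 * t - np.pi / 4)      # response

# Two-sided cross power spectrum: FFT(B) x conj(FFT(A)) / N^2
s_ab = np.fft.fft(b) * np.conj(np.fft.fft(a)) / n ** 2

# Single-sided form: double non-DC bins. Magnitude = product of rms
# amplitudes; angle = phase of B minus phase of A.
g_ab = s_ab[: n // 2].copy()
g_ab[1:] *= 2.0
mag, ph = np.abs(g_ab[50]), np.angle(g_ab[50])
```

At 50 Hz the magnitude is (1/√2)·(0.5/√2) = 0.25 V_rms² and the angle is –45°, the phase difference between the two signals.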
Frequency Response Function
Figure 4-19 illustrates the method for measuring the frequency response of a network.

Figure 4-19. Configuration for Network Analysis (an applied stimulus drives the network under test; the measured stimulus, A, and the measured response, B, are recorded)

In Figure 4-19, you apply a stimulus to the network under test and measure the stimulus and response signals. From the measured stimulus and response signals, you compute the frequency response function. The frequency response function gives the gain and phase versus frequency of a network. You use Equation 4-14 to compute the response function.

H(f) = S_AB(f) / S_AA(f)    (4-14)

where H(f) is the response function, A is the stimulus signal, B is the response signal, S_AB(f) is the cross power spectrum of A and B, and S_AA(f) is the power spectrum of A.

The frequency response function is a two-sided complex form, having real and imaginary parts. To convert to the frequency response gain and the frequency response phase, use the rectangular-to-polar conversion function from Equation 4-13. To convert to single-sided form, discard the second half of the response function array.

You might want to take several frequency response function readings and compute the average. Complete the following steps to compute the average frequency response function.
1. Compute the average S_AB(f) by finding the sum in the complex form and dividing the sum by the number of measurements.
2. Compute the average S_AA(f) by finding the sum and dividing the sum by the number of measurements.
3. Substitute the average S_AB(f) and the average S_AA(f) in Equation 4-14.
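The averaging steps above can be sketched in NumPy. The simulated network below is an illustrative assumption: gain 2 and +30° phase at 50 Hz, with additive noise on the response channel.

```python
import numpy as np

fs, n, n_avg = 1000.0, 1000, 10
t = np.arange(n) / fs
rng = np.random.default_rng(1)

sum_sab = np.zeros(n, dtype=complex)
sum_saa = np.zeros(n)
for _ in range(n_avg):
    a = np.sin(2.0 * np.pi * 50.0 * t)                      # stimulus
    b = 2.0 * np.sin(2.0 * np.pi * 50.0 * t + np.pi / 6)    # response
    b += 0.1 * rng.normal(size=n)                           # measurement noise
    fa, fb = np.fft.fft(a), np.fft.fft(b)
    sum_sab += fb * np.conj(fa) / n ** 2                    # accumulate S_AB(f)
    sum_saa += np.abs(fa) ** 2 / n ** 2                     # accumulate S_AA(f)

# Equation 4-14 with the averaged spectra substituted
h = (sum_sab / n_avg) / (sum_saa / n_avg)
gain, phase = np.abs(h[50]), np.angle(h[50])
```

Averaging the cross and auto spectra before dividing (rather than averaging H itself) is what lets uncorrelated response noise average toward zero.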
Impulse Response Function
The impulse response function of a network is the time-domain representation of the frequency response function of the network. The impulse response function is the output time-domain signal generated by applying an impulse to the network at time t = 0.

To compute the impulse response of the network, perform an inverse FFT on the two-sided complex frequency response function from Equation 4-14. To compute the average impulse response, perform an inverse FFT on the average frequency response function.
Coherence Function
The coherence function provides an indication of the quality of the
frequency response function measurement and of how much of the
response energy is correlated to the stimulus energy. If there is another
signal present in the response, either from excessive noise or from another
signal, the quality of the network response measurement is poor. You can
use the coherence function to identify both excessive noise and which of
the multiple signal sources are contributing to the response signal. Use Equation 4-15 to compute the coherence function.

γ²(f) = |Average(S_AB(f))|² / (Average(S_AA(f)) × Average(S_BB(f)))    (4-15)

where S_AB is the cross power spectrum, S_AA is the power spectrum of A, and S_BB is the power spectrum of B.

Equation 4-15 yields a coherence factor with a value between zero and one versus frequency. A value of zero for a given frequency line indicates no correlation between the response and the stimulus signal. A value of one for a given frequency line indicates that 100% of the response energy is due to the stimulus signal and that no interference is occurring at that frequency.

For a valid result, the coherence function requires an average of two or more readings of the stimulus and response signals. For only one reading, the coherence function registers unity at all frequencies.
Windowing
In practical applications, you obtain only a finite number of samples of the
signal. The FFT assumes that this time record repeats. If you have an
integral number of cycles in your time record, the repetition is smooth at
the boundaries. However, in practical applications, you usually have a
nonintegral number of cycles. In the case of a nonintegral number of cycles,
the repetition results in discontinuities at the boundaries. These artificial
discontinuities were not originally present in your signal and result in a
smearing or leakage of energy from your actual frequency to all other
frequencies. This phenomenon is spectral leakage. The amount of leakage
depends on the amplitude of the discontinuity, with a larger amplitude
causing more leakage.
A signal that is exactly periodic in the time record is composed of sine
waves with exact integral cycles within the time record. Such a perfectly
periodic signal has a spectrum with energy contained in exact frequency
bins.
A signal that is not periodic in the time record has a spectrum with energy
split or spread across multiple frequency bins. The FFT spectrum models
the time domain as if the time record repeated itself forever. It assumes that
the analyzed record is just one period of an infinitely repeating periodic
signal.
Because the amount of leakage is dependent on the amplitude of the
discontinuity at the boundaries, you can use windowing to reduce the size
of the discontinuity and reduce spectral leakage. Windowing consists of
multiplying the time-domain signal by another time-domain waveform, known as a window, whose amplitude tapers gradually and smoothly toward zero at the edges. The result is a windowed signal with very small or
no discontinuities and therefore reduced spectral leakage. You can choose
from among many different types of windows. The one you choose depends
on your application and some prior knowledge of the signal you are
analyzing.
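The leakage reduction can be sketched in NumPy. The 52.5 Hz tone below is an illustrative assumption chosen so the 1 s record contains a noninteger number of cycles.

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = np.sin(2.0 * np.pi * 52.5 * t)   # noninteger number of cycles -> leakage

def power_db(sig):
    # Single-sided power spectrum in dB relative to the peak bin
    p = np.abs(np.fft.rfft(sig)) ** 2
    return 10.0 * np.log10(p / p.max() + 1e-300)

leak_none = power_db(x)                  # uniform (no) window
leak_hann = power_db(x * np.hanning(n))  # Hanning window tapers the edges
```

Far from the peak (for example at 200 Hz), the Hanning-windowed spectrum sits tens of dB below the unwindowed one because the tapered record has no boundary discontinuity to smear energy across the spectrum.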
Refer to Chapter 5, Smoothing Windows, for more information about
windowing.
Averaging to Improve the Measurement
Averaging successive measurements usually improves measurement
accuracy. Averaging usually is performed on measurement results or
on individual spectra but not directly on the time record.
You can choose from among the following common averaging modes:
• RMS averaging
• Vector averaging
• Peak hold
RMS Averaging
RMS averaging reduces signal fluctuations but not the noise floor. The
noise floor is not reduced because RMS averaging averages the energy, or
power, of the signal. RMS averaging also causes averaged RMS quantities
of singlechannel measurements to have zero phase. RMS averaging for
dualchannel measurements preserves important phase information.
RMS-averaged measurements are computed according to the following equations.

FFT spectrum:         √(〈X* • X〉)
power spectrum:       〈X* • X〉
cross spectrum:       〈X* • Y〉
frequency response:   H1 = 〈X* • Y〉 / 〈X* • X〉
                      H2 = 〈Y* • Y〉 / 〈Y* • X〉
                      H3 = (H1 + H2) / 2

where X is the complex FFT of signal x (stimulus), Y is the complex FFT of signal y (response), X* is the complex conjugate of X, Y* is the complex conjugate of Y, and 〈X〉 is the average of X, real and imaginary parts being averaged separately.
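The power spectrum average 〈X* • X〉 can be sketched in NumPy. The 1 V peak tone buried in noise below is an illustrative assumption; RMS averaging smooths the noise floor without lowering it.

```python
import numpy as np

fs, n, n_avg = 1000.0, 1000, 50
t = np.arange(n) / fs
rng = np.random.default_rng(3)

avg_power = np.zeros(n)
for _ in range(n_avg):
    x = np.sin(2.0 * np.pi * 100.0 * t) + 0.5 * rng.normal(size=n)
    fx = np.fft.fft(x)
    avg_power += (np.conj(fx) * fx).real / n ** 2   # accumulate <X* . X>
avg_power /= n_avg
```

After 50 averages the signal bin at 100 Hz stands far above the noise bins, but the noise-floor level itself is unchanged from a single record; only its record-to-record fluctuation shrinks.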
Vector Averaging
Vector averaging eliminates noise from synchronous signals. Vector averaging computes the average of complex quantities directly. The real part is averaged separately from the imaginary part. Averaging the real part separately from the imaginary part can reduce the noise floor for random signals because random signals are not phase coherent from one time record to the next. The real and imaginary parts are averaged separately, reducing noise but usually requiring a trigger.

FFT spectrum:         〈X〉
power spectrum:       〈X*〉 • 〈X〉
cross spectrum:       〈X*〉 • 〈Y〉
frequency response:   〈Y〉 / 〈X〉    (H1 = H2 = H3)

where X is the complex FFT of signal x (stimulus), Y is the complex FFT of signal y (response), X* is the complex conjugate of X, and 〈X〉 is the average of X, real and imaginary parts being averaged separately.

Peak Hold
Peak hold averaging retains the peak levels of the averaged quantities. Peak hold averaging is performed at each frequency line separately, retaining peak levels from one FFT record to the next.

FFT spectrum:         √(MAX(X* • X))
power spectrum:       MAX(X* • X)

where X is the complex FFT of signal x (stimulus) and X* is the complex conjugate of X.
Weighting
When performing RMS or vector averaging, you can weight each new
spectral record using either linear or exponential weighting.
Linear weighting combines N spectral records with equal weighting. When
the number of averages is completed, the analyzer stops averaging and
presents the averaged results.
Exponential weighting emphasizes new spectral data more than old and is
a continuous process.
Weighting is applied according to the following equation.

Y_i = ((N – 1)/N) · Y_(i–1) + (1/N) · X_i

where X_i is the result of the analysis performed on the ith block, Y_i is the result of the averaging process from X_1 to X_i, N = i for linear weighting, and N is a constant for exponential weighting (N = 1 for i = 1).
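The recursion can be sketched in plain Python. The scalar block results and the constant N = 4 below are illustrative assumptions; in practice X_i would be a whole spectrum.

```python
def linear_weighted(blocks):
    # N = i: after i blocks the result is the ordinary mean of X_1 .. X_i
    y = 0.0
    for i, x in enumerate(blocks, start=1):
        y = (i - 1) / i * y + x / i
    return y

def exponential_weighted(blocks, n_const=4.0):
    # N held constant (n_const = 4 is an arbitrary illustrative choice);
    # N = 1 for i = 1, so the first block initializes the average
    y = blocks[0]
    for x in blocks[1:]:
        y = (n_const - 1.0) / n_const * y + x / n_const
    return y
```

With N = i the recursion reproduces the running mean exactly, which is why linear weighting stops after a fixed number of averages, whereas the constant-N form keeps emphasizing new data indefinitely.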
Echo Detection
Echo detection using Hilbert transforms is a common measurement for the
analysis of modulation systems.
Equation 4-16 describes a time-domain signal. Equation 4-17 yields the Hilbert transform of the time-domain signal.

x(t) = A·e^(–t/τ)·cos(2πf₀t)    (4-16)

x_H(t) = –A·e^(–t/τ)·sin(2πf₀t)    (4-17)

where A is the amplitude, f₀ is the natural resonant frequency, and τ is the time decay constant.
Equation 4-18 yields the natural logarithm of the magnitude of the analytic signal x_A(t).

ln|x_A(t)| = ln|x(t) + j·x_H(t)| = –t/τ + ln A    (4-18)
The result from Equation 4-18 has the form of a line with slope m = –1/τ. Therefore, you can extract the time constant of the system by graphing ln|x_A(t)|.
Figure 4-20 shows a time-domain signal containing an echo signal.

Figure 4-20. Echo Signal

The following conditions make the echo signal difficult to locate in Figure 4-20:
• The time delay between the source and the echo signal is short relative to the time decay constant of the system.
• The echo amplitude is small compared to the source.
You can make the echo signal visible by plotting the magnitude of x_A(t) on a logarithmic scale, as shown in Figure 4-21.

Figure 4-21. Echogram of the Magnitude of x_A(t)
In Figure 4-21, the discontinuity is plainly visible and indicates the location of the time delay of the echo.

Figure 4-22 shows a section of the block diagram of the VI used to produce Figures 4-20 and 4-21.

Figure 4-22. Echo Detector Block Diagram

The VI in Figure 4-22 completes the following steps to detect an echo.
1. Processes the input signal with the Fast Hilbert Transform VI to produce the analytic signal x_A(t).
2. Computes the magnitude of x_A(t) with the 1D Rectangular To Polar VI.
3. Computes the natural log of x_A(t) to detect the presence of an echo.
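The three steps above can be sketched in NumPy, using the FFT-based construction of the analytic signal in place of the Fast Hilbert Transform VI. The 500 Hz tone, τ = 50 ms, and the 100 ms, –14 dB echo below are illustrative assumptions.

```python
import numpy as np

fs, n = 10000.0, 4096
t = np.arange(n) / fs
f0, tau, delay = 500.0, 0.05, 0.1

x = np.exp(-t / tau) * np.cos(2.0 * np.pi * f0 * t)
x += 0.2 * np.exp(-(t - delay) / tau) \
         * np.cos(2.0 * np.pi * f0 * (t - delay)) * (t >= delay)

# Step 1: analytic signal x_A(t) via the FFT (a common Hilbert-transform
# method): double the positive frequencies and zero the negative ones
fx = np.fft.fft(x)
fx[1 : n // 2] *= 2.0
fx[n // 2 + 1 :] = 0.0
xa = np.fft.ifft(fx)

# Steps 2 and 3: magnitude, then natural log. The log envelope is a line of
# slope -1/tau, and the echo appears as a discontinuity at t = 0.1 s.
log_env = np.log(np.abs(xa) + 1e-300)
```

On a linear plot the echo is buried in the decaying carrier; on the log envelope the jump at 0.1 s stands out clearly, which is exactly the effect shown in Figure 4-21.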
5
Smoothing Windows
This chapter describes spectral leakage, how to use smoothing windows to
decrease spectral leakage, the different types of smoothing windows, how
to choose the correct type of smoothing window, the differences between
smoothing windows used for spectral analysis and smoothing windows
used for filter coefficient design, and the importance of scaling smoothing
windows.
Applying a smoothing window to a signal is windowing. You can use
windowing to complete the following analysis operations:
• Define the duration of the observation.
• Reduce spectral leakage.
• Separate a small amplitude signal from a larger amplitude signal with
frequencies very close to each other.
• Design FIR filter coefficients.
The Windows VIs provide a simple method of improving the spectral
characteristics of a sampled signal. Use the NI Example Finder to find
examples of using the Windows VIs.
Spectral Leakage
According to the Shannon Sampling Theorem, you can completely reconstruct a continuous-time signal from discrete, equally spaced samples if the highest frequency in the time signal is less than half the sampling frequency. Half the sampling frequency equals the Nyquist frequency.

The Shannon Sampling Theorem bridges the gap between continuous-time signals and digital-time signals. Refer to Chapter 1, Introduction to Digital Signal Processing and Analysis in LabVIEW, for more information about the Shannon Sampling Theorem.
In practical signal-sampling applications, digitizing a time signal results in a finite record of the signal, even when you carefully observe the Shannon Sampling Theorem and sampling conditions. Even when the data meets the Nyquist criterion, the finite sampling record might cause energy leakage, called spectral leakage. Therefore, even though you use proper signal
acquisition techniques, the measurement might not result in a scaled, single-sided spectrum because of spectral leakage. In spectral leakage, the energy at one frequency appears to leak out into all other frequencies.
Spectral leakage results from an assumption in the FFT and DFT
algorithms that the time record exactly repeats throughout all time. Thus,
signals in a time record are periodic at intervals that correspond to the
length of the time record. When you use the FFT or DFT to measure the
frequency content of data, the transforms assume that the finite data set is
one period of a periodic signal. Therefore, the finiteness of the sampling
record results in a truncated waveform with different spectral
characteristics from the original continuoustime signal, and the finiteness
can introduce sharp transition changes into the measured data. The sharp
transitions are discontinuities. Figure 5-1 illustrates discontinuities.

Figure 5-1. Periodic Waveform Created from Sampled Period
The discontinuities shown in Figure 5-1 produce leakage of spectral information. Spectral leakage produces a discrete-time spectrum that appears as a smeared version of the original continuous-time spectrum.
Sampling an Integer Number of Cycles
Spectral leakage occurs only when the sample data set consists of a
noninteger number of cycles. Figure 5-2 shows a sine wave sampled at an integer number of cycles and the Fourier transform of the sine wave.
Figure 5-2. Sine Wave and Corresponding Fourier Transform

In Figure 5-2, Graph 1 shows the sampled time-domain waveform. Graph 2
shows the periodic time waveform of the sine wave from Graph 1. In
Graph 2, the waveform repeats to fulfill the assumption of periodicity for
the Fourier transform. Graph 3 shows the spectral representation of the
waveform.
Because the time record in Graph 2 is periodic with no discontinuities,
its spectrum appears in Graph 3 as a single line showing the frequency of
the sine wave. The waveform in Graph 2 does not have any discontinuities
because the data set is from an integer number of cycles—in this case, one.
The following methods are the only methods that guarantee you always
acquire an integer number of cycles:
• Sample synchronously with respect to the signal you measure.
Therefore, you can acquire an integral number of cycles deliberately.
• Capture a transient signal that fits entirely into the time record.
Sampling a Noninteger Number of Cycles
Usually, an unknown signal you are measuring is a stationary signal.
A stationary signal is present before, during, and after data acquisition.
When measuring a stationary signal, you cannot guarantee that you are
sampling an integer number of cycles. If the time record contains a
noninteger number of cycles, spectral leakage occurs because the
noninteger cycle frequency component of the signal does not correspond
exactly to one of the spectrum frequency lines. Spectral leakage distorts the
measurement in such a way that energy from a given frequency component
appears to spread over adjacent frequency lines or bins, resulting in a
smeared spectrum. You can use smoothing windows to minimize the
effects of performing an FFT over a noninteger number of cycles.
Because of the assumption of periodicity of the waveform, artificial
discontinuities between successive periods occur when you sample a
noninteger number of cycles. The artificial discontinuities appear as very
high frequencies in the spectrum of the signal—frequencies that are not
present in the original signal. The high frequencies of the discontinuities
can be much higher than the Nyquist frequency and alias somewhere between 0 and f_s/2. Therefore, spectral leakage occurs. The spectrum you obtain by using the DFT or FFT is a smeared version of the spectrum and is not the actual spectrum of the original signal.
Figure 5-3 shows a sine wave sampled at a noninteger number of cycles and the Fourier transform of the sine wave.

Figure 5-3. Spectral Representation When Sampling a Noninteger Number of Samples
In Figure 5-3, Graph 1 consists of 1.25 cycles of the sine wave. In Graph 2,
the waveform repeats periodically to fulfill the assumption of periodicity
for the Fourier transform. Graph 3 shows the spectral representation of the
waveform. The energy is spread, or smeared, over a wide range of
frequencies. The energy has leaked out of one of the FFT lines and smeared
itself into all the other lines, causing spectral leakage.
Spectral leakage occurs because of the finite time record of the input signal.
To overcome spectral leakage, you can take an infinite time record,
from –infinity to +infinity. With an infinite time record, the FFT calculates
one single line at the correct frequency. However, waiting for infinite time
is not possible in practice. To overcome the limitations of a finite time
record, windowing is used to reduce the spectral leakage.
In addition to causing amplitude accuracy errors, spectral leakage can
obscure adjacent frequency peaks. Figure 5-4 shows the spectrum for two
close frequency components when no smoothing window is used and when
a Hanning window is used.
Figure 5-4. Spectral Leakage Obscuring Adjacent Frequency Components

In Figure 5-4, the second peak stands out more prominently in the windowed signal than it does in the signal with no smoothing window applied.
Windowing Signals
Use smoothing windows to improve the spectral characteristics of a
sampled signal. When performing Fourier or spectral analysis on
finitelength data, you can use smoothing windows to minimize the
discontinuities of truncated waveforms, thus reducing spectral leakage.
The amount of spectral leakage depends on the amplitude of the
discontinuity. As the discontinuity becomes larger, spectral leakage
increases, and vice versa. Smoothing windows reduce the amplitude of the
discontinuities at the boundaries of each period and act like predefined,
narrowband, lowpass filters.
The process of windowing a signal involves multiplying the time record
by a smoothing window of finite length whose amplitude varies smoothly
and gradually towards zero at the edges. The length, or time interval,
of a smoothing window is defined in terms of number of samples.
Multiplication in the time domain is equivalent to convolution in the
frequency domain. Therefore, the spectrum of the windowed signal is a
convolution of the spectrum of the original signal with the spectrum of the
smoothing window. Windowing changes the shape of the signal in the time
domain, as well as affecting the spectrum that you see.
Figure 55 illustrates convolving the original spectrum of a signal with the
spectrum of a smoothing window.
Figure 55. Frequency Characteristics of a Windowed Spectrum
Even if you do not apply a smoothing window to a signal, a windowing
effect still occurs. The acquisition of a finite time record of an input signal
produces the effect of multiplying the signal in the time domain by a
uniform window. The uniform window has a rectangular shape and uniform
height. The multiplication of the input signal in the time domain by the
uniform window is equivalent to convolving the spectrum of the signal with
the spectrum of the uniform window in the frequency domain, which has a
sinc function characteristic.
Figure 5-6 shows the result of applying a Hamming window to a
time-domain signal.
Figure 5-6. Time Signal Windowed Using a Hamming Window
In Figure 5-6, the time waveform of the windowed signal gradually tapers
to zero at the ends because the Hamming window minimizes the
discontinuities along the transition edges of the waveform. Applying a
smoothing window to time-domain data before the transform of the data
into the frequency domain minimizes spectral leakage.
Figure 5-7 shows the effects of the following smoothing windows on a
signal:
• None (uniform)
• Hanning
• Flat top
Figure 5-7. Power Spectrum of a 1 Vrms Signal at 256 Hz with Uniform, Hanning, and Flat Top Windows
The data set for the signal in Figure 5-7 consists of an integer number of
cycles, 256, in a 1,024-point record. If the frequency components of the
original signal match a frequency line exactly, as is the case when you
acquire an integer number of cycles, you see only the main lobe of the
spectrum. The smoothing windows have a main lobe around the frequency
of interest. The main lobe is a frequency-domain characteristic of windows.
The uniform window has the narrowest lobe. The Hanning and flat top
windows introduce some spreading. The flat top window has a broader
main lobe than the uniform or Hanning windows. For an integer number of
cycles, all smoothing windows yield the same peak amplitude reading and
have excellent amplitude accuracy. Side lobes do not appear because the
spectrum of the smoothing window approaches zero at ∆f intervals on
either side of the main lobe.
Figure 5-7 also shows the values at frequency lines of 254 Hz through
258 Hz for each smoothing window. The amplitude error at 256 Hz equals
0 dB for each smoothing window. The graph shows the spectrum values
between 240 Hz and 272 Hz. The actual values in the resulting spectrum
array for each smoothing window at 254 Hz through 258 Hz are shown
below the graph. ∆f equals 1 Hz.
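The integer-cycle case can be reproduced numerically. The following NumPy sketch (illustrative only; it mimics the 256-cycle, 1,024-point record of Figure 5-7 with a uniform window) shows that a tone on an exact frequency line produces a single spectral line with no side lobes.

```python
import numpy as np

n = np.arange(1024)
# 1 Vrms sine (sqrt(2) V peak) with exactly 256 cycles in a 1,024-point record
signal = np.sqrt(2) * np.sin(2 * np.pi * 256 * n / 1024)

spectrum = np.abs(np.fft.rfft(signal)) / len(n)
peak_bin = int(np.argmax(spectrum))
print(peak_bin)                      # 256: the tone sits exactly on a line

# Every other bin is essentially zero: no side lobes appear
others = np.delete(spectrum, peak_bin)
print(bool(others.max() < 1e-9))     # True
```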
If a time record does not contain an integer number of cycles, the
continuous spectrum of the smoothing window shifts from the main lobe
center at a fraction of ∆f that corresponds to the difference between the
frequency component and the FFT line frequencies. This shift causes the
side lobes to appear in the spectrum. In addition, amplitude error occurs at
the frequency peak because sampling of the main lobe is off center and
smears the spectrum. Figure 5-8 shows the effect of spectral leakage on a
signal whose data set consists of 256.5 cycles.
Figure 5-8. Power Spectrum of a 1 Vrms Signal at 256.5 Hz with Uniform, Hanning, and Flat Top Windows
In Figure 5-8, for a noninteger number of cycles, the Hanning and flat top
windows introduce much less spectral leakage than the uniform window.
Also, the amplitude error is better with the Hanning and flat top windows.
The flat top window demonstrates very good amplitude accuracy and has a
wider spread and higher side lobes than the Hanning window.
Figure 5-9 shows the block diagram of a VI that measures the windowed
and nonwindowed spectrums of a signal composed of the sum of two
sinusoids.
Figure 5-9. Measuring the Spectrum of a Signal Composed of the Sum of Two Sinusoids
Figure 5-10 shows the amplitudes and frequencies of the two sinusoids and
the measurement results. The frequencies shown are in units of cycles.
Figure 5-10. Windowed and Nonwindowed Spectrums of the Sum of Two Sinusoids
In Figure 5-10, the nonwindowed spectrum shows leakage that is more than
20 dB at the frequency of the smaller sinusoid.
You can apply more sophisticated techniques to get a more accurate
description of the original time-continuous signal in the frequency domain.
However, in most applications, applying a smoothing window is sufficient
to obtain a better frequency representation of the signal.
Characteristics of Different Smoothing Windows
To simplify choosing a smoothing window, you need to define various
characteristics so that you can make comparisons between smoothing
windows. An actual plot of a smoothing window shows that the frequency
characteristic of the smoothing window is a continuous spectrum with a
main lobe and several side lobes. Figure 5-11 shows the spectrum of a
typical smoothing window.
Figure 5-11. Frequency Response of a Smoothing Window
Main Lobe
The center of the main lobe of a smoothing window occurs at each
frequency component of the time-domain signal. By convention, the widths
of the main lobe at –3 dB and –6 dB below the main lobe peak characterize
the shape of the main lobe. The unit of measure for the main lobe width is
FFT bins or frequency lines.
The width of the main lobe of the smoothing window spectrum limits the
frequency resolution of the windowed signal. Therefore, the ability to
distinguish two closely spaced frequency components increases as the main
lobe of the smoothing window narrows. As the main lobe narrows and
spectral resolution improves, the window energy spreads into its side lobes,
increasing spectral leakage and decreasing amplitude accuracy. A tradeoff
occurs between amplitude accuracy and spectral resolution.
Side Lobes
Side lobes occur on each side of the main lobe and approach zero at
multiples of fs/N from the main lobe. The side lobe characteristics of the
smoothing window directly affect the extent to which adjacent frequency
components leak into adjacent frequency bins. The side lobe response of a
strong sinusoidal signal can overpower the main lobe response of a nearby
weak sinusoidal signal.
Maximum side lobe level and side lobe roll-off rate characterize the side
lobes of a smoothing window. The maximum side lobe level is the largest
side lobe level in decibels relative to the main lobe peak gain. The side lobe
roll-off rate is the asymptotic decay rate in decibels per decade of frequency
of the peaks of the side lobes. Table 5-1 lists the characteristics of several
smoothing windows.
Rectangular (None)
The rectangular window has a value of one over its length. The following
equation defines the rectangular window.
w(n) = 1.0 for n = 0, 1, 2, …, N – 1
where N is the length of the window and w is the window value.
Applying a rectangular window is equivalent to not using any window
because the rectangular function just truncates the signal to within a finite
time interval. The rectangular window has the highest amount of spectral
leakage.
Figure 5-12 shows the rectangular window for N = 32.
Table 5-1. Characteristics of Smoothing Windows

Smoothing Window   –3 dB Main Lobe   –6 dB Main Lobe   Maximum Side      Side Lobe Roll-Off
                   Width (bins)      Width (bins)      Lobe Level (dB)   Rate (dB/decade)
Uniform (none)     0.88              1.21              –13               20
Hanning            1.44              2.00              –32               60
Hamming            1.30              1.81              –43               20
Blackman-Harris    1.62              2.27              –71               20
Exact Blackman     1.61              2.25              –67               20
Blackman           1.64              2.30              –58               60
Flat Top           2.94              3.56              –44               20
Figure 5-12. Rectangular Window
The rectangular window is useful for analyzing transients that have a
duration shorter than that of the window. Transients are signals that exist
only for a short time duration. The rectangular window also is used in order
tracking, where the effective sampling rate is proportional to the speed of
the shaft in rotating machines. In order tracking, the rectangular window
detects the main mode of vibration of the machine and its harmonics.
Hanning
The Hanning window has a shape similar to that of half a cycle of a cosine
wave. The following equation defines the Hanning window.

w(n) = 0.5 – 0.5 cos(2πn/N)  for n = 0, 1, 2, …, N – 1

where N is the length of the window and w is the window value.

Figure 5-13 shows a Hanning window with N = 32.

Figure 5-13. Hanning Window

The Hanning window is useful for analyzing transients longer than the time
duration of the window and for general-purpose applications.
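The defining equation translates directly into code. The following Python sketch (illustrative only; LabVIEW provides this window as a built-in VI) generates the Hanning window from the equation above.

```python
import numpy as np

def hanning_window(N):
    """Hanning window from the defining equation: w(n) = 0.5 - 0.5*cos(2*pi*n/N)."""
    n = np.arange(N)
    return 0.5 - 0.5 * np.cos(2 * np.pi * n / N)

w = hanning_window(32)
print(float(w[0]))    # 0.0: the window starts at zero
print(float(w[16]))   # 1.0: unity at the center of the record
```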
Hamming
The Hamming window is a modified version of the Hanning window.
The shape of the Hamming window is similar to that of a cosine wave.
The following equation defines the Hamming window.

w(n) = 0.54 – 0.46 cos(2πn/N)  for n = 0, 1, 2, …, N – 1

where N is the length of the window and w is the window value.

Figure 5-14 shows a Hamming window with N = 32.

Figure 5-14. Hamming Window

The Hanning and Hamming windows are similar, as shown in Figures 5-13
and 5-14. However, in the time domain, the Hamming window does not get
as close to zero near the edges as does the Hanning window.
Kaiser-Bessel
The Kaiser-Bessel window is a flexible smoothing window whose shape
you can modify by adjusting the beta input. Thus, depending on your
application, you can change the shape of the window to control the amount
of spectral leakage.
Figure 5-15 shows the Kaiser-Bessel window for different values of beta.
Figure 5-15. Kaiser-Bessel Window
For small values of beta, the shape is close to that of a rectangular window.
Actually, for beta = 0.0, you do get a rectangular window. As you increase
beta, the window tapers off more to the sides.
The KaiserBessel window is useful for detecting two signals of almost the
same frequency but with significantly different amplitudes.
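The effect of beta can be seen with NumPy's np.kaiser function, which exposes the same beta input. This is an illustrative sketch only; note that np.kaiser generates the symmetric, filter-design variant of the Kaiser-Bessel window rather than a DFT-even one.

```python
import numpy as np

# Edge value of the Kaiser-Bessel window for increasing beta
for beta in (0.0, 5.0, 10.0):
    w = np.kaiser(32, beta)
    print(beta, round(float(w[0]), 4))

# beta = 0.0 reproduces the rectangular window (all ones); larger beta
# tapers the edges further toward zero
```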
Triangle
The shape of the triangle window is that of a triangle. The following
equation defines the triangle window.

w(n) = 1 – |(2n – N)/N|  for n = 0, 1, 2, …, N – 1

where N is the length of the window and w is the window value.
Figure 5-16 shows a triangle window for N = 32.
Figure 5-16. Triangle Window
Flat Top
The flat top window has the best amplitude accuracy of all the smoothing
windows at ±0.02 dB for signals exactly between integral cycles. Because
the flat top window has a wide main lobe, it has poor frequency resolution.
The following equation defines the flat top window.

w(n) = Σ[k = 0..4] (–1)ᵏ aₖ cos(kω)

where

ω = 2πn/N
a₀ = 0.215578948
a₁ = 0.416631580
a₂ = 0.277263158
a₃ = 0.083578947
a₄ = 0.006947368

Figure 5-17 shows a flat top window.
Figure 5-17. Flat Top Window
The flat top window is most useful in accurately measuring the amplitude
of single frequency components with little nearby spectral energy in the
signal.
Exponential
The shape of the exponential window is that of a decaying exponential.
The following equation defines the exponential window.

w[n] = e^(n · ln(f)/(N – 1)) = f^(n/(N – 1))  for n = 0, 1, 2, …, N – 1

where N is the length of the window, w is the window value, and f is the final
value.

The initial value of the window is one, and the window gradually decays
toward zero. You can adjust the final value of the exponential window to
between 0 and 1.

Figure 5-18 shows the exponential window for N = 32, with the final value
specified as 0.1.
Figure 5-18. Exponential Window
The exponential window is useful for analyzing transient response signals
whose duration is longer than the length of the window. The exponential
window damps the end of the signal, ensuring that the signal fully decays
by the end of the sample block. You can apply the exponential window to
signals that decay exponentially, such as the response of structures with
light damping that are excited by an impact, such as the impact of a
hammer.
Windows for Spectral Analysis versus Windows
for Coefficient Design
Spectral analysis and filter coefficient design place different requirements
on a window. Spectral analysis requires a DFT-even window, while filter
coefficient design requires a window symmetric about its midpoint.
Spectral Analysis
The smoothing windows designed for spectral analysis must be DFT-even.
A smoothing window is DFT-even if its dot product, or inner product, with
integral cycles of sine sequences is identically zero. In other words, the
DFT of a DFT-even sequence has no imaginary component.
Figures 5-19 and 5-20 show the Hanning window for a sample size of 8 and
one cycle of a sine pattern for a sample size of 8.
Figure 5-19. Hanning Window for Sample Size 8
Figure 5-20. Sine Pattern for Sample Size 8
In Figure 5-19, the DFT-even Hanning window is not symmetric about its
midpoint. The last point of the window is not equal to its first point, similar
to one complete cycle of the sine pattern shown in Figure 5-20.
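The DFT-even property is easy to verify numerically. The following NumPy sketch (illustrative only) builds the sample-size-8 Hanning window of Figure 5-19 and checks that its DFT has no imaginary component.

```python
import numpy as np

N = 8
n = np.arange(N)
# DFT-even Hanning window for a sample size of 8 (Figure 5-19)
w = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)

# Not symmetric about its midpoint: w[0] = 0 but the last point is nonzero
print(float(w[0]), round(float(w[N - 1]), 3))

# The DFT of a DFT-even sequence has (numerically) no imaginary part
W = np.fft.fft(w)
print(bool(np.max(np.abs(W.imag)) < 1e-9))  # True
```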
Smoothing windows for spectral analysis are spectral windows and include
the following window types:
• Scaled time-domain window
• Hanning window
• Hamming window
• Triangle window
• Blackman window
• Exact Blackman window
• BlackmanHarris window
• Flat top window
• KaiserBessel window
• General cosine window
• Cosine tapered window
Windows for FIR Filter Coefficient Design
Designing FIR filter coefficients requires a window that is symmetric about
its midpoint.
Equations 5-1 and 5-2 illustrate the difference between a spectral window
and a symmetrical window for filter coefficient design.

Equation 5-1 defines the Hanning window for spectral analysis.

w[i] = 0.5(1 – cos(2πi/N))  for i = 0, 1, 2, …, N – 1   (5-1)

where N is the length of the window and w is the window value.

Equation 5-2 defines a symmetrical Hanning window for filter coefficient
design.

w[i] = 0.5(1 – cos(2πi/(N – 1)))  for i = 0, 1, 2, …, N – 1   (5-2)

where N is the length of the window and w is the window value.

By modifying a spectral window, as shown in Equation 5-2, you can define
a symmetrical window for designing filter coefficients. Refer to Chapter 3,
Digital Filtering, for more information about designing digital filters.
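The difference between the two Hanning variants can be checked directly. The following Python sketch (illustrative only) generates both forms for a sample size of 8 and tests each for symmetry about its midpoint.

```python
import numpy as np

def hanning_spectral(N):
    """DFT-even Hanning window for spectral analysis (Equation 5-1)."""
    i = np.arange(N)
    return 0.5 * (1 - np.cos(2 * np.pi * i / N))

def hanning_symmetric(N):
    """Symmetric Hanning window for FIR coefficient design (Equation 5-2)."""
    i = np.arange(N)
    return 0.5 * (1 - np.cos(2 * np.pi * i / (N - 1)))

w_spec = hanning_spectral(8)
w_sym = hanning_symmetric(8)

# The design window mirrors about its midpoint; the spectral one does not
print(bool(np.allclose(w_sym, w_sym[::-1])))    # True
print(bool(np.allclose(w_spec, w_spec[::-1])))  # False
```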
Choosing the Correct Smoothing Window
Selecting a smoothing window is not a simple task. Each smoothing
window has its own characteristics and suitability for different
applications. To choose a smoothing window, you must estimate the
frequency content of the signal. If the signal contains strong interfering
frequency components distant from the frequency of interest, choose a
smoothing window with a high side lobe roll-off rate. If the signal contains
strong interfering signals near the frequency of interest, choose a
smoothing window with a low maximum side lobe level. Refer to Table 5-1
for information about side lobe roll-off rates and maximum side lobe levels
for various smoothing windows.
If the frequency of interest contains two or more signals very near to each
other, spectral resolution is important. In this case, it is best to choose a
smoothing window with a very narrow main lobe. If the amplitude accuracy
of a single frequency component is more important than the exact location
of the component in a given frequency bin, choose a smoothing window
with a wide main lobe. If the signal spectrum is rather flat or broadband in
frequency content, use the uniform window, or no window. In general, the
Hanning window is satisfactory in 95% of cases. It has good frequency
resolution and reduced spectral leakage. If you do not know the nature of
the signal but you want to apply a smoothing window, start with the
Hanning window.
Table 5-2 lists different types of signals and the appropriate windows that
you can use with them.
Table 5-2. Signals and Windows

Type of Signal                                                       Window
Transients whose duration is shorter than the length of the window   Rectangular
Transients whose duration is longer than the length of the window    Exponential, Hanning
General-purpose applications                                         Hanning
Spectral analysis (frequency-response measurements)                  Hanning (for random excitation),
                                                                     Rectangular (for pseudorandom excitation)
Separation of two tones with frequencies very close to each other
but with widely differing amplitudes                                 Kaiser-Bessel
Separation of two tones with frequencies very close to each other
but with almost equal amplitudes                                     Rectangular
Accurate single-tone amplitude measurements                          Flat top
Sine wave or combination of sine waves                               Hanning
Sine wave and amplitude accuracy is important                        Flat top
Narrowband random signal (vibration data)                            Hanning
Broadband random (white noise)                                       Uniform
Closely spaced sine waves                                            Uniform, Hamming
Excitation signals (hammer blow)                                     Force
Response signals                                                     Exponential
Unknown content                                                      Hanning
Initially, you might not have enough information about the signal to select
the most appropriate smoothing window for the signal. You might need to
experiment with different smoothing windows, comparing their
performance, to find the best one for the application.
Scaling Smoothing Windows
Applying a smoothing window to a time-domain signal multiplies the
time-domain signal by a smoothing window of finite length and introduces
distortion effects due to the smoothing window. The smoothing window
changes the overall amplitude of the signal. When applying multiple
smoothing windows to the same signal, scaling each smoothing window by
dividing the windowed array by the coherent gain of the window results in
each window yielding the same spectrum amplitude result within the
accuracy constraints of the window. The plots in Figures 5-7 and 5-8 are
the result of applying scaled smoothing windows to the time-domain
signal.
An FFT is equivalent to a set of parallel filters with each filter having a
bandwidth equal to ∆f. Because of the spreading effect of a smoothing
window, the smoothing window increases the effective bandwidth of an
FFT bin by an amount known as the equivalent noise-power bandwidth
(ENBW) of the smoothing window. The power of a given frequency peak
equals the sum of the adjacent frequency bins around the peak increased by
a scaling factor equal to the ENBW of the smoothing window. You must
take the scaling factor into account when you perform computations based
on the power spectrum. Refer to Chapter 4, Frequency Analysis, for
information about performing computations on the power spectrum.
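Both correction factors can be computed from the window samples themselves. The following NumPy sketch (illustrative only) computes the coherent gain as the mean of the window values and the ENBW as N·Σw²/(Σw)² for a Hanning window; these formulas are the standard definitions, stated here as an assumption since the chapter does not spell them out.

```python
import numpy as np

N = 1024
n = np.arange(N)
w = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # Hanning window

# Coherent gain (scaling factor): the mean of the window values
cg = w.sum() / N
# Equivalent noise-power bandwidth, in FFT bins
enbw = N * (w ** 2).sum() / w.sum() ** 2

print(round(float(cg), 2))    # 0.5, matching the Hanning row of Table 5-3
print(round(float(enbw), 2))  # 1.5, matching the Hanning row of Table 5-3
```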
Table 5-3 lists the scaling factor, also known as coherent gain, the ENBW,
and the worst-case peak amplitude accuracy caused by off-center
components for several popular smoothing windows.
Table 5-3. Correction Factors and Worst-Case Amplitude Errors for Smoothing Windows

Window            Scaling Factor      ENBW   Worst-Case
                  (Coherent Gain)            Amplitude Error (dB)
Uniform (none)    1.00                1.00   3.92
Hanning           0.50                1.50   1.42
Hamming           0.54                1.36   1.75
Blackman-Harris   0.42                1.71   1.13
Exact Blackman    0.43                1.69   1.15
Blackman          0.42                1.73   1.10
Flat Top          0.22                3.77   <0.01
6
Distortion Measurements
This chapter describes harmonic distortion, total harmonic distortion
(THD), signal noise and distortion (SINAD), and when to use distortion
measurements.
Defining Distortion
Applying a pure single-frequency sine wave to a perfectly linear system
produces an output signal having the same frequency as that of the input
sine wave. However, the output signal might have a different amplitude
and/or phase than the input sine wave. Also, when you apply a composite
signal consisting of several sine waves at the input, the output signal
consists of the same frequencies but different amplitudes and/or phases.
Many realworld systems act as nonlinear systems when their input limits
are exceeded, resulting in distorted output signals. If the input limits of a
system are exceeded, the output consists of one or more frequencies that did
not originally exist at the input. For example, if the input to a nonlinear
system consists of two frequencies f₁ and f₂, the frequencies at the output
might have the following components:
• f₁ and harmonics, or integer multiples, of f₁
• f₂ and harmonics of f₂
• Sums and differences of f₁, f₂, and their harmonics
The number of new frequencies at the output, their corresponding
amplitudes, and their relationships with respect to the original frequencies
vary depending on the transfer function. Distortion measurements quantify
the degree of nonlinearity of a system. Common distortion measurements
include the following measurements:
• Total harmonic distortion (THD)
• Total harmonic distortion + noise (THD + N)
• Signal noise and distortion (SINAD)
• Intermodulation distortion
Application Areas
You can make distortion measurements for many devices, such as A/D
and D/A converters, audio processing devices, analog tape recorders,
cellular phones, radios, televisions, stereos, and loudspeakers.
Measurements of harmonics often provide a good indication of the cause
of the nonlinearity of a system. For example, nonlinearities that are
asymmetrical around zero produce mainly even harmonics. Nonlinearities
symmetrical around zero produce mainly odd harmonics. You can use
distortion measurements to diagnose faults such as bad solder joints,
torn speaker cones, and incorrectly installed components.
However, nonlinearities are not always undesirable. For example, many
musical sounds are produced specifically by driving a device into its
nonlinear region.
Harmonic Distortion
When a signal x(t) of a particular frequency f₁ passes through a nonlinear
system, the output of the system consists of f₁ and its harmonics. The
following expression describes the relationship between f₁ and its
harmonics.

f₁, f₂ = 2f₁, f₃ = 3f₁, f₄ = 4f₁, …, fₙ = nf₁
The degree of nonlinearity of the system determines the number of
harmonics and their corresponding amplitudes the system generates. In
general, as the nonlinearity of a system increases, the harmonics become
higher. As the nonlinearity of a system decreases, the harmonics become
lower.
Figure 6-1 illustrates an example of a nonlinear system where the output
y(t) is the cube of the input signal x(t), that is, y(t) = f(x) = x³(t).

Figure 6-1. Example of a Nonlinear System

The following equation defines the input for the system shown in
Figure 6-1.

x(t) = cos(ωt)
Equation 6-1 defines the output of the system shown in Figure 6-1.

x³(t) = cos³(ωt) = 0.5 cos(ωt) + 0.25[cos(ωt) + cos(3ωt)]   (6-1)

In Equation 6-1, the output contains not only the input fundamental
frequency ω but also the third harmonic 3ω.
A common cause of harmonic distortion is clipping. Clipping occurs when
a system is driven beyond its capabilities. Symmetrical clipping results in
odd harmonics. Asymmetrical clipping creates both even and odd
harmonics.
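Equation 6-1 can be confirmed with an FFT. The following NumPy sketch (illustrative only; placing the tone on bin 8 is an arbitrary choice) cubes an exact-bin cosine and reads the fundamental and third-harmonic amplitudes directly from the spectrum.

```python
import numpy as np

N = 1024
n = np.arange(N)
x = np.cos(2 * np.pi * 8 * n / N)   # input tone placed exactly on bin 8
y = x ** 3                          # the nonlinear system of Figure 6-1

amps = np.abs(np.fft.rfft(y)) / (N / 2)   # single-sided amplitude spectrum
print(round(float(amps[8]), 4))    # 0.75: fundamental (0.5 + 0.25)
print(round(float(amps[24]), 4))   # 0.25: third harmonic at bin 3 * 8 = 24
```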
THD
To determine the total amount of nonlinear distortion, or total harmonic
distortion (THD), that a system introduces, measure the amplitudes of the
harmonics the system introduces relative to the amplitude of the
fundamental frequency. The following equation yields THD.

THD = √(A₂² + A₃² + A₄² + …) / A₁

where A₁ is the amplitude of the fundamental frequency, A₂ is the amplitude
of the second harmonic, A₃ is the amplitude of the third harmonic, A₄ is the
amplitude of the fourth harmonic, and so on.

You usually report the results of a THD measurement in terms of the
highest order harmonic present in the measurement, such as THD through
the seventh harmonic.

The following equation yields the percentage total harmonic distortion
(%THD).

%THD = 100 · √(A₂² + A₃² + A₄² + …) / A₁
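The THD and %THD equations reduce to a few lines of code. The following Python sketch is illustrative only; the thd helper and the example amplitudes are hypothetical values chosen for demonstration.

```python
import numpy as np

def thd(amplitudes):
    """THD from harmonic amplitudes [A1, A2, A3, ...], A1 = fundamental."""
    a = np.asarray(amplitudes, dtype=float)
    return np.sqrt(np.sum(a[1:] ** 2)) / a[0]

# Hypothetical example: 1 V fundamental, 0.03 V second and 0.04 V third harmonic
ratio = thd([1.0, 0.03, 0.04])
print(round(100 * float(ratio), 2))   # %THD = 5.0
```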
THD + N
Real-world signals usually contain noise. A system can introduce
additional noise into the signal. THD + N measures signal distortion while
taking into account the amount of noise power present in the signal.
Measuring THD + N requires measuring the amplitude of the fundamental
frequency and the power present in the remaining signal after removing the
fundamental frequency. The following equation yields THD + N.

THD + N = √(A₂² + A₃² + … + N²) / √(A₁² + A₂² + A₃² + … + N²)

where N is the noise power.

A low THD + N measurement means that the system has a low amount of
harmonic distortion and a low amount of noise from interfering signals,
such as AC mains hum and wideband white noise.

As with THD, you usually report the results of a THD + N measurement in
terms of the highest order harmonic present in the measurement, such as
THD + N through the third harmonic.

The following equation yields percentage total harmonic distortion + noise
(%THD + N).

%THD + N = 100 · √(A₂² + A₃² + … + N²) / √(A₁² + A₂² + A₃² + … + N²)
SINAD
Similar to THD + N, SINAD takes into account both harmonics and noise.
However, SINAD is the reciprocal of THD + N. The following equation
yields SINAD.

SINAD = (Fundamental + Noise + Distortion) / (Noise + Distortion)

You can use SINAD to characterize the performance of FM receivers in
terms of sensitivity, adjacent channel selectivity, and alternate channel
selectivity.
7
DC/RMS Measurements
Two of the most common measurements of a signal are its direct current
(DC) and root mean square (RMS) levels. This chapter introduces
measurement analysis techniques for making DC and RMS measurements
of a signal.
What Is the DC Level of a Signal?
You can use DC measurements to define the value of a static or slowly
varying signal. DC measurements can be both positive and negative. The
DC value usually is constant within a specific time window. You can track
and plot slowly moving values, such as temperature, as a function of time
using a DC meter. In that case, the observation time that results in the
measured value has to be short compared to the speed of change for the
signal. Figure 7-1 illustrates an example DC level of a signal.
Figure 7-1. DC Level of a Signal
The DC level of a continuous signal V(t) from time t1 to time t2 is given by
the following equation.

Vdc = (1/(t2 – t1)) ∫[t1..t2] V(t) dt

where t2 – t1 is the integration time or measurement time.
For digitized signals, the discrete-time version of the previous equation is
given by the following equation.

Vdc = (1/N) Σ[i = 1..N] Vᵢ

For a sampled system, the DC value is defined as the mean value of the
samples acquired in the specified measurement time window.
Between pure DC signals and fastmoving dynamic signals is a gray zone
where signals become more complex, and measuring the DC level of these
signals becomes challenging.
Realworld signals often contain a significant amount of dynamic
influence. Often, you do not want the dynamic part of the signal. The DC
measurement identifies the static DC signal hidden in the dynamic signal,
for example, the voltage generated by a thermocouple in an industrial
environment, where external noise or hum from the main power can disturb
the DC signal significantly.
What Is the RMS Level of a Signal?
The RMS level of a signal is the square root of the mean value of the
squared signal. RMS measurements are always positive. Use RMS
measurements when a representation of energy is needed. You usually
acquire RMS measurements on dynamic signals—signals with relatively
fast changes—such as noise or periodic signals. Refer to Chapter 7,
Measuring AC Voltage, of the LabVIEW Measurements Manual for more
information about when to use RMS measurements.
The RMS level of a continuous signal V(t) from time t1 to time t2 is given
by the following equation.

Vrms = √( (1/(t2 – t1)) ∫[t1..t2] V²(t) dt )

where t2 – t1 is the integration time or measurement time.
The RMS level of a discrete signal Vᵢ is given by the following equation.

Vrms = √( (1/N) Σ[i = 1..N] Vᵢ² )
One difficulty arises when you measure the dynamic part of a signal using
an instrument that does not offer an AC-coupling option. A true RMS
measurement includes the DC part in the measurement, which you might
not want.
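The continuous-time definitions above become a mean and a root mean square over the samples. The following NumPy sketch (illustrative only; the 2 V DC level and 50 Hz tone are arbitrary choices) computes the DC level, the true RMS including the DC part, and the RMS of the AC part alone, which is what an AC-coupled instrument would report.

```python
import numpy as np

fs = 1000                                  # sample rate in Hz (arbitrary)
t = np.arange(1000) / fs                   # 1 s: an integer number of tone periods
v = 2.0 + np.sin(2 * np.pi * 50 * t)       # 2 V DC plus a 50 Hz, 1 V peak tone

v_dc = v.mean()                            # DC level: mean of the samples
v_rms = np.sqrt(np.mean(v ** 2))           # true RMS, including the DC part
v_ac_rms = np.sqrt(np.mean((v - v_dc) ** 2))   # RMS of the dynamic part only

print(round(float(v_dc), 3))      # 2.0
print(round(float(v_rms), 3))     # 2.121, i.e. sqrt(2.0**2 + 0.5)
print(round(float(v_ac_rms), 3))  # 0.707, i.e. 1/sqrt(2)
```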
Averaging to Improve the Measurement
Instantaneous DC measurements of a noisy signal can vary randomly and
significantly, as shown in Figure 7-2. You can measure a more accurate
value by averaging out the noise that is superimposed on the desired DC
level. In a continuous signal, the averaged value between two times, t1 and
t2, is defined as the signal integration between t1 and t2, divided by the
measurement time, t2 – t1, as shown in Figure 7-1. The area between the
averaged value Vdc and the signal that is above Vdc is equal to the area
between Vdc and the signal that is under Vdc. For a sampled signal, the
average value is the sum of the voltage samples divided by the
measurement time in samples, or the mean value of the measurement
samples. Refer to Chapter 6, Measuring DC Voltage, of the LabVIEW
Measurements Manual for more information about averaging in LabVIEW.
Figure 7-2. Instantaneous DC Measurements
An RMS measurement is an averaged quantity because it is the average
energy in the signal over a measurement period. You can improve the RMS
measurement accuracy by using a longer averaging time, equivalent to the
integration time or measurement time.
There are several different strategies to use for making DC and RMS
measurements, each dependent on the type of error or noise sources.
When choosing a strategy, you must decide if accuracy or speed of the
measurement is more important.
Common Error Sources Affecting DC
and RMS Measurements
Some common error sources for DC measurements are single-frequency
components (or tones), multiple tones, or random noise. These same error
signals can interfere with RMS measurements, so in many cases the
approach taken to improve RMS measurements is the same as for
DC measurements.
DC Overlapped with Single Tone
Consider the case where the signal you measure is composed of a DC signal
and a single sine tone. The average of a single period of the sine tone is
ideally zero because the positive half-period of the tone cancels the
negative half-period.
Figure 7-3. DC Signal Overlapped with Single Tone
Any remaining partial period, shown in Figure 73 with vertical hatching,
introduces an error in the average value and therefore in the DC
measurement. Increasing the averaging time reduces this error because the
integration is always divided by the measurement time t2 – t1. If you know
the period of the sine tone, you can take a more accurate measurement of
the DC value by using a measurement period equal to an integer number
of periods of the sine tone. The most severe error occurs when the measurement time is a half-period different from an integer number of periods of the sine tone, because this is the maximum area under or over the signal curve.
Defining the Equivalent Number of Digits
Defining the Equivalent Number of Digits (ENOD) makes it easier to relate
a measurement error to a number of digits, similar to digits of precision.
ENOD translates measurement accuracy into a number of digits.
ENOD = –log10(Relative Error)

A 1% error corresponds to two digits of accuracy, and a one part-per-million error corresponds to six digits of accuracy (–log10(0.000001) = 6).
ENOD is only an approximation that tells you what order of magnitude of
accuracy you can achieve under specific measurement conditions.
This accuracy does not take into account any error introduced by the
measurement instrument or data acquisition hardware itself. ENOD is only
a tool for computing the reliability of a specific measurement technique.
Thus, the ENOD should at least match the accuracy of the measurement
instrument or measurement requirements. For example, it is not necessary
to use a measurement technique with an ENOD of six digits if your
instrument has an accuracy of only 0.1% (three digits). Similarly, you do not get six digits of accuracy from your six-digit accurate measurement instrument if your measurement technique is limited to an ENOD of only three digits.
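The definition above is easy to compute directly. The following sketch (Python, using the two error values from the text) assumes nothing beyond the formula itself:

```python
import math

def enod(relative_error: float) -> float:
    """Equivalent Number Of Digits: translate a relative measurement
    error into an approximate number of digits of accuracy."""
    return -math.log10(relative_error)

# A 1% error is two digits; a one part-per-million error is six digits.
```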
DC Plus Sine Tone
Figure 7-4 shows that for a 1.0 VDC signal overlapped with a 0.5 V single sine tone, the worst-case ENOD increases with measurement time (x-axis shown in periods of the additive sine tone) at a rate of approximately one additional digit for 10 times more measurement time. To achieve 10 times more accuracy, you need to increase your measurement time by a factor of 10. In other words, accuracy and measurement time are related through a first-order function.
Figure 7-4. Digits versus Measurement Time for 1.0 VDC Signal with 0.5 V Single Tone
Windowing to Improve DC Measurements
The worst-case ENOD for a DC signal plus a sine tone occurs when the measurement time is at half-periods of the sine tone. You can greatly reduce these errors due to a non-integer number of cycles by using a weighting function before integrating to measure the desired DC value. The most common weighting or window function is the Hann window, commonly known as the Hanning window.
Figure 7-5 shows a dramatic increase in accuracy from the use of the Hann window. The accuracy as a function of the number of sine tone periods improves from a first-order function to a third-order function. In other words, you can achieve one additional digit of accuracy for every 10^(1/3) ≈ 2.15 times more measurement time using the Hann window, instead of one digit for every 10 times more measurement time without using a window. As in the non-windowing case, the DC level is 1.0 V and the single tone peak amplitude is 0.5 V.
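The effect described above can be reproduced numerically. The sketch below (NumPy; the 10.5-period measurement time is an arbitrary worst-case-style choice, not a value from the text) averages a 1.0 VDC level plus a 0.5 V tone with and without a Hann window scaled by its coherent gain:

```python
import numpy as np

n = 1000
t = np.linspace(0.0, 1.0, n, endpoint=False)
# 1.0 VDC plus a 0.5 V peak sine tone; 10.5 periods leaves a
# half-period remainder, the worst case for a plain average
x = 1.0 + 0.5 * np.sin(2 * np.pi * 10.5 * t)

# Plain average over the measurement time
dc_plain = np.mean(x)

# Hann-weighted average; dividing by the window sum applies the
# coherent-gain scaling, so a pure DC input still averages to itself
w = np.hanning(n)
dc_hann = np.sum(w * x) / np.sum(w)
```

The windowed estimate lands within a fraction of a millivolt of 1.0 V, while the plain average is off by roughly 15 mV.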
Figure 7-5. Digits versus Measurement Time for DC + Tone Using Hann Window
You can use other types of window functions to further reduce the necessary measurement time or greatly increase the resulting accuracy. Figure 7-6 shows that the Low Sidelobe (LSL) window can achieve a worst-case ENOD of more than six digits when averaging your DC signal over only five periods of the sine tone (same test signal).
Figure 7-6. Digits versus Measurement Time for DC + Tone Using LSL Window
RMS Measurements Using Windows
As with DC measurements, the worst-case ENOD for measuring the RMS level of a signal sometimes can be improved significantly by applying a window to the signal before RMS integration. For example, if you measure the RMS level of a DC signal plus a single sine tone, the most accurate measurements are made when the measurement time is an integer number of periods of the sine tone. Figure 7-7 shows how the worst-case ENOD varies with measurement time (in periods of the sine tone) for various window functions. Here, the test signal contains 0.707 VDC with a 1.0 V peak sine tone.
Figure 7-7. Digits versus Measurement Time for RMS Measurements
Applying the window to the signal increases RMS measurement accuracy
significantly, but the improvement is not as large as in DC measurements.
For this example, the LSL window achieves six digits of accuracy when the
measurement time reaches eight periods of the sine tone.
Using Windows with Care
Window functions can be very useful to improve the speed of your
measurement, but you must be careful. The Hann window is a general
window recommended in most cases. Use more advanced windows such as
the LSL window only if you know the window will improve the
measurement. For example, you can significantly reduce RMS measurement accuracy if the signal you want to measure is composed of many frequency components close to each other in the frequency domain.
You also must make sure that the window is scaled correctly or that you
update scaling after applying the window. The most useful window
functions are prescaled by their coherent gain—the mean value of the
window function—so that the resulting mean value of the scaled window
function is always 1.00. DC measurements do not need to be scaled when
using a properly scaled window function. For RMS measurements, each
window has a specific equivalent noise bandwidth that you must use to
scale integrated RMS measurements. You must scale RMS measurements
using windows by the reciprocal of the square root of the equivalent noise
bandwidth.
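Both scaling quantities can be computed directly from the window samples. This sketch uses NumPy's `hanning` window; for the Hann window the expected values are a coherent gain of about 0.5 and an equivalent noise bandwidth of about 1.5 bins:

```python
import numpy as np

n = 1024
w = np.hanning(n)

# Coherent gain: the mean value of the window. Prescaling the window
# by this value keeps DC measurements correctly scaled.
coherent_gain = np.mean(w)

# Equivalent noise bandwidth (in bins). Integrated RMS measurements
# made through the window are scaled by 1/sqrt(ENBW).
enbw = n * np.sum(w**2) / np.sum(w)**2
```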
Rules for Improving DC and RMS Measurements
Use the following guidelines when determining a strategy for improving
your DC and RMS measurements:
• If your signal is overlapped with a single tone, longer integration times
increase the accuracy of your measurement. If you know the exact
frequency of the sine tone, use a measurement time that corresponds to
an exact number of sine periods. If you do not know the frequency of
the sine tone, apply a window, such as a Hann window, to significantly reduce the measurement time needed to achieve a specific accuracy.
• If your signal is overlapped with many independent tones, increasing
measurement time increases the accuracy of the measurement. As in
the single tone case, using a window significantly reduces the
measurement time needed to achieve a specific accuracy.
• If your signal is overlapped with noise, do not use a window. In this
case, you can increase the accuracy of your measurement by increasing
the integration time or by preprocessing or conditioning your noisy
signal with a lowpass (or bandstop) filter.
RMS Levels of Specific Tones
You can always improve the accuracy of an RMS measurement by choosing a specific measurement time that contains an integer number of cycles of your sine tones or by using a window function. The measurement of the RMS value is based only on time-domain knowledge of your signal. You can use advanced techniques when you are interested in a specific frequency or narrow frequency range.
You can use bandpass or bandstop filtering before RMS computations to
measure the RMS power in a specific band of frequencies. You also can use
the Fast Fourier Transform (FFT) to pick out specific frequencies for RMS
processing. Refer to Chapter 4, Frequency Analysis, for more information
about the FFT.
You can extract the RMS level of a specific sine tone that is part of a complex or noisy signal very accurately by using frequency-domain processing, leveraging the power of the FFT and the benefits of windowing.
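As a sketch of this frequency-domain approach (NumPy; the sample rate, tone frequencies, and band edges are illustrative assumptions), the RMS level of one tone in a two-tone signal can be recovered by windowing, taking the FFT, and summing power in a narrow band around the tone. The normalization follows from Parseval's theorem:

```python
import numpy as np

fs, n = 1000.0, 4096
t = np.arange(n) / fs
# Two-tone test signal: 1.0 V peak at 50 Hz plus 0.5 V peak at 200 Hz
x = 1.0 * np.sin(2*np.pi*50*t) + 0.5 * np.sin(2*np.pi*200*t)

w = np.hanning(n)
X = np.fft.rfft(x * w)
freqs = np.fft.rfftfreq(n, d=1/fs)

# Sum windowed power in a band around 200 Hz. The factor 2 accounts
# for the one-sided spectrum; n*sum(w^2) is the Parseval normalization.
band = (freqs > 190) & (freqs < 210)
rms_200 = np.sqrt(2 * np.sum(np.abs(X[band])**2) / (n * np.sum(w**2)))
# Expected: 0.5 / sqrt(2), the RMS of the 0.5 V tone alone
```

Because all power within the band is summed, the result is insensitive to whether the tone falls exactly on an FFT bin.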
8
Limit Testing
This chapter provides information about setting up an automated system for
performing limit testing, specifying limits, and applications for limit
testing.
You can use limit testing to monitor a waveform and determine if it always
satisfies a set of conditions, usually upper and lower limits. The region
bounded by the specified limits is a mask. The result of a limit or mask test
is generally a pass or fail.
Setting up an Automated Test System
You can use the same method to create and control many different
automated test systems. Complete the following basic steps to set up an
automated test system for limit mask testing.
1. Configure the measurement by specifying arbitrary upper and lower
limits. This defines your mask or region of interest.
2. Acquire data using a DAQ device.
3. Monitor the data to make sure it always falls within the specified mask.
4. Log the pass/fail results from step 3 to a file or visually inspect the
input data and the points that fall outside the mask.
5. Repeat steps 2 through 4 to continue limit mask testing.
The following sections describe steps 1 and 3 in further detail. Assume that the signal to be monitored starts at x = x₀ and all the data points are evenly spaced. The spacing between each point is denoted by dx.
Specifying a Limit
Limits are classified into two types, continuous limits and segmented limits, as shown in Figure 8-1. The top graph in Figure 8-1 shows a continuous limit. A continuous limit is specified using a set of x and y points {{x₁, x₂, x₃, …}, {y₁, y₂, y₃, …}}. Completing step 1 creates a limit with the first point at x₀ and all other points at a uniform spacing of dx (x₀ + dx, x₀ + 2dx, …). This is done through a linear interpolation of the x and y values that define the limit. In Figure 8-1, black dots represent the points at which the limit is defined and the solid line represents the limit you create. Creating the limit in step 1 reduces test times in step 3. If the spacing between the samples changes, you can repeat step 1. The limit is undefined in the region x₀ < x < x₁ and for x > x₄.
Figure 8-1. Continuous versus Segmented Limit Specification
The bottom graph of Figure 8-1 shows a segmented limit. The first segment is defined using a set of x and y points {{x₁, x₂}, {y₁, y₂}}. The second segment is defined using a set of points {x₃, x₄, x₅} and {y₃, y₄, y₅}. You can define any number of such segments. As with continuous limits, step 1 uses linear interpolation to create a limit with the first point at x₀ and all other points with a uniform spacing of dx. The limit is undefined in the region x₀ < x < x₁ and in the region x > x₅. Also, the limit is undefined in the region x₂ < x < x₃.
Specifying a Limit Using a Formula
You can specify limits using formulas. Such limits are best classified as segmented limits. Each segment is defined by start and end frequencies and a formula. For example, the ANSI T1.413 recommendation specifies the limits for the transmit and receive spectrum of an ADSL signal in terms of formulas. Table 8-1, which includes only a part of the specification, shows the start and end frequencies and the upper limits of the spectrum for each segment.
The limit is specified as an array of sets of x and y points, [{0.3, 4.0}{–97.5, –97.5}, {4.0, 25.9}{–92.5 + 21.5 log₂(f/4,000), –92.5 + 21.5 log₂(f/4,000)}, …, {307.0, 1,221.0}{–90, –90}]. Each element of the array corresponds to a segment.

Figure 8-2 shows the segmented limit plot specified using the formulas shown in Table 8-1. The x-axis is on a logarithmic scale.
Table 8-1. ADSL Signal Recommendations

Start (kHz)   End (kHz)   Maximum (Upper Limit) Value (dBm/Hz)
0.3           4.0         –97.5
4.0           25.9        –92.5 + 21.5 log₂(f/4,000)
25.9          138.0       –34.5
138.0         307.0       –34.5 – 48.0 log₂(f/138,000)
307.0         1,221.0     –90
Figure 8-2. Segmented Limit Specified Using Formulas
Limit Testing
After you define your mask, you acquire a signal using a DAQ device. The
sample rate is set at 1/dx S/s. Compare the signal with the limit. In step 1,
you create a limit value at each point where the signal is defined. In step 3,
you compare the signal with the limit. For the upper limit, if the data point
is less than or equal to the limit point, the test passes. If the data point is
greater than the limit point, the test fails. For the lower limit, if the data
point is greater than or equal to the limit point, the test passes. If the data
point is less than the limit point, the test fails.
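The pass/fail rules above can be sketched as follows. This is an illustrative NumPy implementation, not the interface of the LabVIEW limit-testing VIs; NaN is used here as a convention for regions where a limit is undefined:

```python
import numpy as np

def limit_test(y, upper=None, lower=None):
    """Mask test: each sample must satisfy lower <= y <= upper.
    NaN in a limit array marks a region where that limit is undefined,
    so the corresponding sample is not tested against it."""
    y = np.asarray(y, dtype=float)
    ok = np.ones(y.shape, dtype=bool)
    if upper is not None:
        u = np.asarray(upper, dtype=float)
        ok &= np.isnan(u) | (y <= u)
    if lower is not None:
        lo = np.asarray(lower, dtype=float)
        ok &= np.isnan(lo) | (y >= lo)
    # Overall pass/fail plus the indices of the failing samples
    return bool(np.all(ok)), np.flatnonzero(~ok)

# Step 1: interpolate a continuous limit onto the sample grid;
# points outside the specified x range are undefined (NaN)
xs = np.arange(0.0, 5.0, 1.0)                      # x0 = 0, dx = 1
upper = np.interp(xs, [1.0, 4.0], [2.0, 2.0], left=np.nan, right=np.nan)

# Step 3: compare the acquired samples against the interpolated limit
passed, failures = limit_test([0.5, 1.0, 2.5, 1.5, 9.0], upper=upper)
```

The first sample falls where the limit is undefined and is not tested, matching the behavior described for points outside the mask.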
Figure 8-3 shows the result of limit testing in a continuous mask case. The test signal falls within the mask at all sampled points other than points b and c. Thus, the limit test fails. Point d is not tested because it falls outside the mask.
Figure 8-3. Result of Limit Testing with a Continuous Mask
Figure 8-4 shows the result of limit testing in a segmented mask case. All the points fall within the mask. Points b and c are not tested because the mask is undefined at those points. Thus, the limit test passes. Point d is not tested because it falls outside the mask.

Figure 8-4. Result of Limit Testing with a Segmented Mask
Applications
You can use limit mask testing in a wide range of test and measurement
applications. For example, you can use limit mask testing to determine that
the power spectral density of ADSL signals meets the recommendations in
the ANSI T1.413 specification. Refer to the Specifying a Limit Using a
Formula section of this chapter for more information about ADSL signal
limits.
The following sections provide examples of when you can use limit mask testing. In all these examples, the specifications are recommended by standards-generating bodies, such as the CCITT, ITU-T, ANSI, and IEC, to ensure that all the test and measurement systems conform to a universally accepted standard. In some other cases, the limit testing specifications are proprietary and are strictly enforced by companies for quality control.
Modem Manufacturing Example
Limit testing is used in modem manufacturing to make sure the transmit spectrum of the line signal meets the V.34 modem specification, as shown in Figure 8-5.

Figure 8-5. Upper and Lower Limit for V.34 Modem Transmitted Spectrum
The ITU-T V.34 recommendation contains specifications for a modem operating at data signaling rates up to 33,600 bits/s. It specifies that the spectrum for the line signal that transmits data conforms to the template shown in Figure 8-5. For example, for a normalized frequency of 1.0, the spectrum must always lie between 3 dB and 1 dB. All modems must meet this specification. A modem manufacturer can set up an automated test system to monitor the transmit spectrum of the signals that the modem outputs. If the spectrum conforms to the specification, the modem passes the test and is ready for customer use. Recommendations such as ITU-T V.34 are essential to ensure interoperability between modems from different manufacturers and to provide high-quality service to customers.
Digital Filter Design Example
You also can use limit mask testing in the area of digital filter design. You might want to design lowpass filters with a passband ripple of 10 dB and stopband attenuation of 60 dB. You can use limit testing to make sure the frequency response of the filter always meets these specifications. The first step in this process is to specify the limits. You can specify a lower limit of –10 dB in the passband region and an upper limit of –60 dB in the stopband region, as shown in Figure 8-6. After you specify these limits, you can run the actual test repeatedly to make sure the frequency responses of all the filters meet these specifications.
Figure 8-6. Limit Test of a Lowpass Filter Frequency Response
Pulse Mask Testing Example
The ITU-T G.703 recommendation specifies the pulse mask for signals with bit rates of n × 64 kbits/s, where n is between 2 and 31. Figure 8-7 shows the pulse mask for an interface at 1,544 kbits/s. Signals with this bit rate also are referred to as T1 signals. T1 signals must lie in the mask specified by the upper and lower limits. These limits are set to properly enable the interconnection of digital network components to form a digital path or connection.

Figure 8-7. Pulse Mask Testing on T1/E1 Signals
Part II
Mathematics
This part provides information about mathematical concepts commonly
used in analysis applications.
• Chapter 9, Curve Fitting, describes how to extract information from
a data set to obtain a functional description.
• Chapter 10, Probability and Statistics, describes fundamental
concepts of probability and statistics and how to use these concepts
to solve realworld problems.
• Chapter 11, Linear Algebra, describes how to use the Linear Algebra
VIs to perform matrix computation and analysis.
• Chapter 12, Optimization, describes basic concepts and methods used
to solve optimization problems.
• Chapter 13, Polynomials, describes polynomials and operations
involving polynomials.
9
Curve Fitting
This chapter describes how to extract information from a data set to obtain
a functional description. Use the NI Example Finder to find examples of
using the Curve Fitting VIs.
Introduction to Curve Fitting
The technique of curve fitting analysis extracts a set of curve parameters or
coefficients from a data set to obtain a functional description of the data set.
The least-squares method of curve fitting fits a curve to a particular data set. Equation 9-1 defines the least-squares error.

e(a) = [f(x, a) – y(x)]²   (9-1)

where e(a) is the least-squares error, y(x) is the observed data set, f(x, a) is the functional description of the data set, and a is the set of curve coefficients that best describes the curve.
For example, if a = {a₀, a₁}, the following equation yields the functional description.

f(x, a) = a₀ + a₁x
The least-squares algorithm finds a by solving the system defined by Equation 9-2.

$$\frac{\partial}{\partial a} e(a) = 0 \qquad (9\text{-}2)$$

To solve the system defined by Equation 9-2, you set up and solve the Jacobian system generated by expanding Equation 9-2. After you solve the system for a, you can use the functional description f(x, a) to obtain an estimate of the observed data set for any value of x.

The Curve Fitting VIs automatically set up and solve the Jacobian system and return the set of coefficients that best describes the data set. You can concentrate on the functional description of the data without having to solve the system in Equation 9-2.
Applications of Curve Fitting
In some applications, parameters such as humidity, temperature, and
pressure can affect data you collect. You can model the statistical data by
performing regression analysis and gain insight into the parameters that
affect the data.
Figure 9-1 shows the block diagram of a VI that uses the Linear Fit VI to fit a line to a set of data points.

Figure 9-1. Fitting a Line to Data

You can modify the block diagram to fit exponential and polynomial curves by replacing the Linear Fit VI with the Exponential Fit VI or the General Polynomial Fit VI.

Figure 9-2 shows a multiplot graph of the result of fitting a line to a noisy data set.

Figure 9-2. Fitting a Line to a Noisy Data Set
The practical applications of curve fitting include the following
applications:
• Removing measurement noise
• Filling in missing data points, such as when one or more measurements
are missing or improperly recorded
• Interpolating, which is estimating data between data points, such as
if the time between measurements is not small enough
• Extrapolating, which is estimating data beyond data points, such as
looking for data values before or after a measurement
• Differentiating digital data, such as finding the derivative of the
data points by modeling the discrete data with a polynomial and
differentiating the resulting polynomial equation
• Integrating digital data, such as finding the area under a curve when
you have only the discrete points of the curve
• Obtaining the trajectory of an object based on discrete measurements
of its velocity, which is the first derivative, or acceleration, which is the
second derivative
General LS Linear Fit Theory
For a given set of observation data, the general least-squares (LS) linear fit problem is to find a set of coefficients that fits the linear model, as shown in Equation 9-3.

$$y_i = b_0 x_{i0} + \dots + b_{k-1} x_{i,k-1} = \sum_{j=0}^{k-1} b_j x_{ij}, \quad i = 0, 1, \dots, n-1 \qquad (9\text{-}3)$$

where $x_{ij}$ is the observed data contained in the observation matrix H, n is the number of elements in the set of observed data and the number of rows of H, b is the set of coefficients that fit the linear model, and k is the number of coefficients.
The following equation defines the observation matrix H.

$$H = \begin{bmatrix} x_{00} & x_{01} & \cdots & x_{0,k-1} \\ x_{10} & x_{11} & \cdots & x_{1,k-1} \\ \vdots & \vdots & & \vdots \\ x_{n-1,0} & x_{n-1,1} & \cdots & x_{n-1,k-1} \end{bmatrix}$$

You can rewrite Equation 9-3 as the following equation.

Y = HB

The general LS linear fit model is a multiple linear regression model. A multiple linear regression model uses several variables, $x_{i0}, x_{i1}, \dots, x_{i,k-1}$, to predict one variable, $y_i$.

In most analysis situations, you acquire more observation data than coefficients. Equation 9-3 might not yield all the coefficients in set B. The fit problem becomes to find the coefficient set B that minimizes the difference between the observed data $y_i$ and the predicted value $z_i$. Equation 9-4 defines $z_i$.

$$z_i = \sum_{j=0}^{k-1} b_j x_{ij} \qquad (9\text{-}4)$$

You can use the least chi-square plane method to find the solution set B that minimizes the quantity given by Equation 9-5.

$$\chi^2 = \sum_{i=0}^{n-1} \left(\frac{y_i - z_i}{\sigma_i}\right)^2 = \sum_{i=0}^{n-1} \left(\frac{y_i - \sum_{j=0}^{k-1} b_j x_{ij}}{\sigma_i}\right)^2 = \lVert H_0 B - Y_0 \rVert^2 \qquad (9\text{-}5)$$

where $h_{0,ij} = x_{ij} / \sigma_i$, $y_{0,i} = y_i / \sigma_i$, i = 0, 1, …, n – 1, and j = 0, 1, …, k – 1.
In Equation 9-5, $\sigma_i$ is the standard deviation. If the measurement errors are independent and normally distributed with constant standard deviation, $\sigma_i = \sigma$, Equation 9-5 also is the least-squares estimation.

You can use the following methods to minimize χ² from Equation 9-5:
• Solve normal equations of the least-squares problems using LU or Cholesky factorization.
• Minimize χ² to find the least-squares solution of equations.
Solving normal equations involves completing the following steps.
1. Set the partial derivatives of χ² to zero with respect to b₀, b₁, …, b_{k–1}, as shown by the following equations.

$$\frac{\partial \chi^2}{\partial b_0} = 0, \quad \frac{\partial \chi^2}{\partial b_1} = 0, \quad \dots, \quad \frac{\partial \chi^2}{\partial b_{k-1}} = 0 \qquad (9\text{-}6)$$

2. Derive the equations in Equation 9-6 to the following equation form.

$$H_0^T H_0 B = H_0^T Y_0 \qquad (9\text{-}7)$$

where $H_0^T$ is the transpose of $H_0$.

Equations of the form given by Equation 9-7 are called normal equations of the least-squares problems. You can solve them using LU or Cholesky factorization algorithms. However, the solution from the normal equations is susceptible to round-off error.

The preferred method of minimizing χ² is to find the least-squares solution of equations. Equation 9-8 defines the form of the least-squares solution of equations.

$$H_0 B = Y_0 \qquad (9\text{-}8)$$
You can use QR or SVD factorization to find the solution set B for
Equation 98. For QR factorization, you can use the Householder
algorithm, the Givens algorithm, or the Givens 2 algorithm, which also
is known as the fast Givens algorithm. Different algorithms can give you
different precision. In some cases, if one algorithm cannot solve the
equation, another algorithm might solve it. You can try different algorithms
to find the one best suited for the observation data.
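The two solution routes can be sketched in NumPy. The polynomial basis below is a hypothetical example; `numpy.linalg.lstsq` is SVD-based and stands in for the QR/SVD route described above:

```python
import numpy as np

# Observation matrix H for a degree-5 polynomial model on [0, 1]
x = np.linspace(0.0, 1.0, 50)
H = np.column_stack([x**j for j in range(6)])
b_true = np.ones(6)
y = H @ b_true

# Route 1: normal equations H^T H B = H^T Y, solved by Cholesky factorization
A = H.T @ H
L = np.linalg.cholesky(A)
b_normal = np.linalg.solve(L.T, np.linalg.solve(L, H.T @ y))

# Route 2: least-squares solution of H B = Y directly (SVD-based),
# which is less susceptible to round-off than the normal equations
b_lstsq, *_ = np.linalg.lstsq(H, y, rcond=None)
```

Forming H^T H squares the condition number of H, which is why the direct least-squares route is preferred for ill-conditioned observation data.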
Polynomial Fit with a Single Predictor Variable
Polynomial fit with a single predictor variable uses one variable to predict another variable. Polynomial fit with a single predictor variable is a special case of multiple regression. If the observation data sets are $\{x_i, y_i\}$, where i = 0, 1, …, n – 1, Equation 9-9 defines the model for polynomial fit.

$$y_i = \sum_{j=0}^{k-1} b_j x_i^j = b_0 + b_1 x_i + b_2 x_i^2 + \dots + b_{k-1} x_i^{k-1}, \quad i = 0, 1, 2, \dots, n-1 \qquad (9\text{-}9)$$

Comparing Equations 9-3 and 9-9 shows that $x_{ij} = x_i^j$, as shown by the following equations.

$$x_{i0} = x_i^0 = 1, \quad x_{i1} = x_i, \quad x_{i2} = x_i^2, \quad \dots, \quad x_{i,k-1} = x_i^{k-1}$$

Because $x_{ij} = x_i^j$, you can build the observation matrix H as shown by the following equation.

$$H = \begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^{k-1} \\ 1 & x_1 & x_1^2 & \cdots & x_1^{k-1} \\ \vdots & & & & \vdots \\ 1 & x_{n-1} & x_{n-1}^2 & \cdots & x_{n-1}^{k-1} \end{bmatrix}$$

Instead of using $x_{ij} = x_i^j$, you also can choose another function formula to fit the data sets $\{x_i, y_i\}$. In general, you can select $x_{ij} = f_j(x_i)$. Here, $f_j(x_i)$ is the function model that you choose to fit your observation data. In polynomial fit, $f_j(x_i) = x_i^j$.

In general, you can build H as shown in the following equation.

$$H = \begin{bmatrix} f_0(x_0) & f_1(x_0) & f_2(x_0) & \cdots & f_{k-1}(x_0) \\ f_0(x_1) & f_1(x_1) & f_2(x_1) & \cdots & f_{k-1}(x_1) \\ \vdots & & & & \vdots \\ f_0(x_{n-1}) & f_1(x_{n-1}) & f_2(x_{n-1}) & \cdots & f_{k-1}(x_{n-1}) \end{bmatrix}$$

The following equation defines the fit model.

$$y_i = b_0 f_0(x) + b_1 f_1(x) + \dots + b_{k-1} f_{k-1}(x)$$

Curve Fitting in LabVIEW

For the Curve Fitting VIs, the input sequences Y and X represent the data set y(x). A sample or point in the data set is $(x_i, y_i)$, where $x_i$ is the i-th element of the sequence X and $y_i$ is the i-th element of the sequence Y.

Some Curve Fitting VIs return only the coefficients for the curve that best describes the input data, while other Curve Fitting VIs return the fitted curve. Using the VIs that return only coefficients allows you to further manipulate the data. The VIs that return the fitted curve also return the coefficients and the mean squared error (MSE). MSE is a relative measure of the residuals between the expected curve values and the actual observed values. Because the input data represents a discrete system, the VIs use the following equation to calculate MSE.

$$MSE = \frac{1}{n} \sum_{i=0}^{n-1} (f_i - y_i)^2$$

where f is the sequence representing the fitted values, y is the sequence representing the observed values, and n is the number of observed sample points.
Linear Fit

The Linear Fit VI fits experimental data to a straight line of the general form described by the following equation.

y = mx + b

The Linear Fit VI calculates the coefficients a₀ and a₁ that best fit the experimental data (x[i] and y[i]) to a straight-line model described by the following equation.

y[i] = a₀ + a₁x[i]

where y[i] is a linear combination of the coefficients a₀ and a₁.
Exponential Fit

The Exponential Fit VI fits data to an exponential curve of the general form described by the following equation.

y = ae^(bx)

The following equation specifically describes the exponential curve resulting from the exponential fit algorithm.

y[i] = a₀e^(a₁x[i])

General Polynomial Fit

The General Polynomial Fit VI fits data to a polynomial function of the general form described by the following equation.

y = a + bx + cx² + …

The following equation specifically describes the polynomial function resulting from the general polynomial fit algorithm.

y[i] = a₀ + a₁x[i] + a₂x[i]² + …
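As an illustration of the same technique outside LabVIEW, NumPy's `polyfit` performs a least-squares polynomial fit; the quadratic coefficients and noise level below are arbitrary example values:

```python
import numpy as np

# Sample data from a known quadratic y = 2 + 3x + 0.5x^2, plus mild noise
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 101)
y = 2.0 + 3.0*x + 0.5*x**2 + rng.normal(0, 0.01, x.size)

# Least-squares polynomial fit of order 2; polyfit returns the
# highest-power coefficient first
coeffs = np.polyfit(x, y, deg=2)          # [a2, a1, a0]
a0, a1, a2 = coeffs[2], coeffs[1], coeffs[0]

# MSE between the fitted curve and the observed values, as defined above
fitted = np.polyval(coeffs, x)
mse = np.mean((fitted - y)**2)
```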
General LS Linear Fit

The General LS Linear Fit VI fits data to a line described by the following equation.

y[i] = a₀ + a₁f₁(x[i]) + a₂f₂(x[i]) + …

where y[i] is a linear combination of the parameters a₀, a₁, a₂, ….

You can extend the concept of a linear combination of coefficients further so that the multiplier for a₁ is some function of x, as shown in the following equations.

y[i] = a₀ + a₁sin(ωx[i])

y[i] = a₀ + a₁(x[i])²

y[i] = a₀ + a₁cos(ωx[i]²)

where ω is the angular frequency.

In each of the preceding equations, y[i] is a linear combination of the coefficients a₀ and a₁. In the case of the General LS Linear Fit VI, you can have y[i] that is a linear combination of several coefficients. Each coefficient can have a multiplier of some function of x[i]. Therefore, you can use the General LS Linear Fit VI to calculate coefficients of the functional models and represent the coefficients of the functional models as linear combinations of the coefficients, as shown in the following equations.

y = a₀ + a₁sin(ωx)

y = a₀ + a₁x² + a₂cos(ωx²)

y = a₀ + a₁(3sin(ωx)) + a₂x³ + a₃/x + …

In each of the preceding equations, y is a linear function of the coefficients, although it might be a nonlinear function of x.
Computing Covariance

The General LS Linear Fit VI returns a k × k matrix of covariances between the coefficients $a_k$. The General LS Linear Fit VI uses the following equation to compute the covariance matrix C.

$$C = (H_0^T H_0)^{-1}$$

Building the Observation Matrix

When you use the General LS Linear Fit VI, you must build the observation matrix H. For example, Equation 9-10 defines a model for data from a transducer.

y = a₀ + a₁sin(ωx) + a₂cos(ωx) + a₃x²   (9-10)

In Equation 9-10, each $a_j$ has the following different functions as a multiplier:
• One multiplies a₀.
• sin(ωx) multiplies a₁.
• cos(ωx) multiplies a₂.
• x² multiplies a₃.

To build H, set each column of H to the independent functions evaluated at each x value x[i]. If the data set contains 100 x values, the following equation defines H.

$$H = \begin{bmatrix} 1 & \sin(\omega x_0) & \cos(\omega x_0) & x_0^2 \\ 1 & \sin(\omega x_1) & \cos(\omega x_1) & x_1^2 \\ 1 & \sin(\omega x_2) & \cos(\omega x_2) & x_2^2 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & \sin(\omega x_{99}) & \cos(\omega x_{99}) & x_{99}^2 \end{bmatrix}$$

If the data set contains N data points and if k coefficients (a₀, a₁, …, a_{k–1}) exist for which to solve, H is an N × k matrix with N rows and k columns. Therefore, the number of rows in H equals the number of data points N, and the number of columns in H equals the number of coefficients k.
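Building H for the transducer model of Equation 9-10 and solving for the coefficients can be sketched as follows (NumPy; the coefficient values, ω, and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
omega = 2.0
x = np.linspace(0.0, 5.0, 100)

# Synthetic transducer data with known (hypothetical) coefficients
a_true = np.array([1.0, 0.5, -0.3, 0.05])
y = (a_true[0] + a_true[1]*np.sin(omega*x)
     + a_true[2]*np.cos(omega*x) + a_true[3]*x**2)
y += rng.normal(0, 0.001, x.size)

# Observation matrix H: one column per basis function, one row per sample
H = np.column_stack([np.ones_like(x), np.sin(omega*x),
                     np.cos(omega*x), x**2])

# SVD-based least-squares solution of H a = y
# (avoids the round-off issues of the normal equations)
a_fit, *_ = np.linalg.lstsq(H, y, rcond=None)

# Covariance structure (H^T H)^-1, as in the covariance equation above
cov = np.linalg.inv(H.T @ H)
```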
Nonlinear Levenberg-Marquardt Fit

The nonlinear Levenberg-Marquardt fit method fits data to the curve described by the following equation.

y[i] = f(x[i], a₀, a₁, a₂, …)

where a₀, a₁, a₂, … are the parameters.

The nonlinear Levenberg-Marquardt method is the most general curve fitting method and does not require y to have a linear relationship with a₀, a₁, a₂, …. You can use the nonlinear Levenberg-Marquardt method to fit linear or nonlinear curves. However, the most common application of the method is to fit a nonlinear curve, because the general linear fit method is better for linear curve fitting. You must verify the results you obtain with the Levenberg-Marquardt method because the method does not always guarantee a correct result.
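A minimal Levenberg-Marquardt loop makes the damping idea concrete. This is an illustrative sketch, not the algorithm used by the LabVIEW VI; the exponential model and starting guess are arbitrary examples:

```python
import numpy as np

def levenberg_marquardt(f, jac, x, y, p0, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt loop: damped Gauss-Newton steps on
    the residuals r(p) = f(x, p) - y. A sketch, not production code."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = f(x, p) - y
        J = jac(x, p)
        A = J.T @ J
        g = J.T @ r
        # Damped normal-equation step; lam blends Gauss-Newton and
        # gradient descent
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        p_new = p + step
        if np.sum((f(x, p_new) - y) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5        # accept step, reduce damping
        else:
            lam *= 2.0                        # reject step, increase damping
    return p

# Example: fit y = a * exp(b x), which is nonlinear in the parameter b
f = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])

x = np.linspace(0.0, 2.0, 50)
y = 1.5 * np.exp(-0.8 * x)
p_fit = levenberg_marquardt(f, jac, x, y, p0=[1.0, -1.0])
```

As the text cautions, the method only converges to a nearby local minimum, so the fitted parameters should always be checked against the data.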
10
Probability and Statistics
This chapter describes fundamental concepts of probability and statistics
and how to use these concepts to solve real-world problems. Use the
NI Example Finder to find examples of using the Probability and
Statistics VIs.
Statistics
Statistics allow you to summarize data and draw conclusions by
condensing large amounts of data into a form that brings out the
essential information and yet is easy to remember. To condense data, you
use single numbers that make the data more intelligible and help you draw
useful inferences. For example, suppose a sports player participates in
51 games in a season and scores a total of 1,568 points. The total of
1,568 points includes 45 points in Game A, 36 points in Game B, 51 points
in Game C, 45 points in Game D, and 40 points in Game E. As the number of
games increases, remembering how many points the player scored in each
individual game becomes increasingly difficult. If you divide the total
number of points that the player scored by the number of games played,
you obtain a single number that tells you the average number of points the
player scored per game. Equation 10-1 yields the points per game average
for the player.
1,568 points / 51 games ≈ 30.7 points per game average    (10-1)

Computing percentages provides a method for making comparisons. For
example, the officials of an American city are considering installing a
traffic signal at a major intersection. The purpose of the traffic signal is to
protect motorists turning left from oncoming traffic. The city has only
enough money to fund one traffic signal but has three intersections that
potentially need the signal. Traffic engineers study each of the three
intersections for a week. The engineers record the total number of cars
using the intersection, the number of cars travelling straight through the
intersection, the number of cars making left-hand turns, and the number of
cars making right-hand turns. Table 10-1 shows the data for one of the
intersections.
Looking only at the raw data from each intersection might make
determining which intersection needs the traffic signal difficult because the
raw numbers can vary widely. However, computing the percentage of cars
turning at each intersection provides a common basis for comparison. To
obtain the percentage of cars turning left, divide the number of cars turning
left by the total number of cars using the intersection and multiply the
result by 100. For the intersection whose data appears in Table 10-1, the
following equation gives the percentage of cars turning left.

(3,186 / 7,590) × 100 ≈ 42%

Table 10-1. Data for One Major Intersection

Day      Total Cars Using    Cars Turning    Cars Turning    Cars Continuing
         the Intersection    Left            Right           Straight
1        1,258               528             330             400
2        1,306               549             340             417
3        1,355               569             352             434
4        1,227               515             319             393
5        1,334               560             346             428
6        694                 291             180             223
7        416                 174             108             134
Totals   7,590               3,186           1,975           2,429

Given the data for the other two intersections, the city officials can obtain
the percentage of cars turning left at those two intersections. Converting the
raw data to a percentage condenses the information for the three
intersections into single numbers representing the percentage of cars that
turn left at each intersection. The city officials can compare the percentage
of cars turning left at each intersection and rank the intersections in order
of highest percentage of cars turning left to the lowest percentage of cars
turning left. Ranking the intersections can help determine where the traffic
signal is needed most. Thus, in a broad sense, the term statistics implies
different ways to summarize data to derive useful and important
information from it.
Mean
The mean value is the average value for a set of data samples. The
following equation defines an input sequence X consisting of n samples.

X = {x_0, x_1, x_2, x_3, …, x_(n-1)}

The following equation yields the mean value x̄ for input sequence X.

x̄ = (1/n) (x_0 + x_1 + x_2 + x_3 + … + x_(n-1))

The mean equals the sum of all the sample values divided by the number of
samples, as shown in Equation 10-1.

Median
The median of a data sequence is the midpoint value in the sorted version
of the sequence. The median is useful for making qualitative statements,
such as whether a particular data point lies in the upper or lower portion of
an input sequence.

The following equation represents the sorted version of an input
sequence X.

S = {s_0, s_1, s_2, …, s_(n-1)}

You can sort the sequence either in ascending order or in descending order.
The following equation yields the median value of S.

x_median = s_i                      if n is odd
x_median = 0.5 (s_(k-1) + s_k)      if n is even    (10-2)

where i = (n - 1)/2 and k = n/2.

Equation 10-3 defines a sorted sequence consisting of an odd number of
samples sorted in descending order.

S = {5, 4, 3, 2, 1}    (10-3)
In Equation 10-3, the median is the midpoint value 3.

Equation 10-4 defines a sorted sequence consisting of an even number of
samples sorted in ascending order.

S = {1, 2, 3, 4}    (10-4)

The sorted sequence in Equation 10-4 has two midpoint values, 2 and 3.
Using Equation 10-2 with n even, the following equation yields the median
value for the sorted sequence in Equation 10-4.

x_median = 0.5 (s_(k-1) + s_k) = 0.5 (2 + 3) = 2.5
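The mean and median definitions above are easy to check with Python's standard library, using the sequences from Equations 10-3 and 10-4 and the points-per-game example of Equation 10-1:

```python
import statistics

points_per_game = 1568 / 51                      # Equation 10-1, about 30.7

odd_median = statistics.median([5, 4, 3, 2, 1])  # Equation 10-3: midpoint value 3
even_median = statistics.median([1, 2, 3, 4])    # Equation 10-4: 0.5 * (2 + 3)
```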
Sample Variance and Population Variance
The Standard Deviation and Variance VI can calculate either the sample
variance or the population variance. Statisticians and mathematicians
prefer to use the sample variance. Engineers prefer to use the population
variance. For values of n ≥ 30, both methods produce similar results.

Sample Variance
Sample variance measures the spread, or dispersion, of the sample values.
You can use the sample variance as a measure of the consistency of the
data. The sample variance is always positive, except when all the sample
values are equal to each other and, in turn, equal to the mean.

The sample variance s^2 for an input sequence X equals the sum of the
squares of the deviations of the sample values from the mean divided by
n - 1, as shown in the following equation.

s^2 = (1/(n - 1)) [(x_1 - x̄)^2 + (x_2 - x̄)^2 + … + (x_n - x̄)^2]

where n > 1 is the number of samples in X and x̄ is the mean of X.
Population Variance
The population variance σ^2 for an input sequence X equals the sum of the
squares of the deviations of the sample values from the mean divided by n,
as shown in the following equation.

σ^2 = (1/n) [(x_1 - x̄)^2 + (x_2 - x̄)^2 + … + (x_n - x̄)^2]

where n > 1 is the number of samples in X and x̄ is the mean of X.

Standard Deviation
The standard deviation s of an input sequence equals the positive square
root of the sample variance s^2, as shown in the following equation.

s = sqrt(s^2)

Mode
The mode of an input sequence is the value that occurs most often in the
input sequence. The following equation defines an input sequence X.

X = {0, 1, 3, 3, 4, 4, 4, 5, 5, 7}

The mode of X is 4 because 4 is the value that occurs most often in X.

Moment about the Mean
The moment about the mean is a measure of the deviation of the elements
in an input sequence from the mean. The following equation yields the
mth-order moment for an input sequence X.

σ_x^m = (1/n) Σ_{i=0}^{n-1} (x_i - x̄)^m

where n is the number of elements in X and x̄ is the mean of X.

For m = 2, the moment about the mean equals the population variance σ^2.
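The variance, standard deviation, mode, and moment definitions above can be sketched in a few lines of Python, using the example sequence from the Mode discussion (the `moment` helper is a name invented here):

```python
import statistics

xs = [0, 1, 3, 3, 4, 4, 4, 5, 5, 7]     # example sequence from the Mode section
n = len(xs)
mean = sum(xs) / n                       # 3.6

def moment(seq, m):
    # m-th order moment about the mean; m = 2 gives the population variance
    mu = sum(seq) / len(seq)
    return sum((v - mu) ** m for v in seq) / len(seq)

sample_var = sum((v - mean) ** 2 for v in xs) / (n - 1)   # divide by n-1
population_var = moment(xs, 2)                            # divide by n
std_dev = sample_var ** 0.5
mode = statistics.mode(xs)                                # 4
```

Note that Python's `statistics.variance` uses the n-1 (sample) convention and `statistics.pvariance` the n (population) convention, matching the two VI options.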
Chapter 10 Probability and Statistics
LabVIEW Analysis Concepts 106 ni.com
Skewness
Skewness is a measure of the asymmetry of a distribution and corresponds
to the third-order moment about the mean.

Kurtosis
Kurtosis is a measure of the peakedness of a distribution and corresponds
to the fourth-order moment about the mean.
Histogram
A histogram is a bar graph that displays frequency data and is an indication
of the data distribution. A histogram provides a method for graphically
displaying data and summarizing key information.
Equation 10-5 defines a data sequence.

X = {0, 1, 3, 3, 4, 4, 4, 5, 5, 8}    (10-5)
To compute a histogram for X, divide the total range of values into the
following eight intervals, or bins:
• 0–1
• 1–2
• 2–3
• 3–4
• 4–5
• 5–6
• 6–7
• 7–8
The histogram display for X indicates the number of data samples that lie
in each interval, excluding the upper boundary. Figure 10-1 shows the
histogram for the sequence in Equation 10-5.
Figure 10-1. Histogram

Figure 10-1 shows that no data samples are in the 2–3 and 6–7 intervals.
One data sample lies in each of the intervals 0–1, 1–2, and 7–8. Two data
samples lie in each of the intervals 3–4 and 5–6. Three data samples lie in
the 4–5 interval.

The number of intervals in the histogram affects the resolution of the
histogram. A common method of determining the number of intervals to
use in a histogram is Sturges' Rule, which is given by the following
equation.

Number of Intervals = 1 + 3.3 log10(size of X)
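The binning described above can be sketched in Python. This illustration reproduces the counts of Figure 10-1 for Equation 10-5 (clamping the value 8 into the last bin is an assumption made here so the final interval keeps its upper boundary):

```python
import math

# Histogram of Equation 10-5 over eight unit-width bins 0-1, 1-2, ..., 7-8.
# Each bin excludes its upper boundary; the topmost value 8 is clamped
# into the final bin so the largest sample is still counted.
X = [0, 1, 3, 3, 4, 4, 4, 5, 5, 8]
counts = [0] * 8
for v in X:
    counts[min(v, 7)] += 1   # unit-width bins starting at 0

# Sturges' Rule for choosing the number of intervals.
intervals = 1 + 3.3 * math.log10(len(X))   # 4.3 for 10 samples
```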
Mean Square Error (mse)
The mean square error (mse) is the average of the squares of the
differences between the corresponding elements of two input sequences.
The following equation yields the mse for two input sequences X and Y.

mse = (1/n) Σ_{i=0}^{n-1} (x_i - y_i)^2

where n is the number of data points.

You can use the mse to compare two sequences. For example, system S1
receives a digital signal x and produces an output signal y1. System S2
produces y2 when it receives x. Theoretically, y1 = y2. To verify that
y1 = y2, you want to compare y1 and y2. Both y1 and y2 contain a large
number of data points, so an element-by-element comparison is difficult.
Instead, you can calculate the mse of y1 and y2. If the mse is smaller than
an acceptable tolerance, y1 and y2 are equivalent.
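A minimal Python sketch of the mse comparison (the sequences y1 and y2 are hypothetical stand-ins for the outputs of systems S1 and S2):

```python
# mse of two sequences; a small mse means the sequences agree
# to within an acceptable tolerance.
def mse(x, y):
    assert len(x) == len(y)
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

y1 = [0.0, 1.0, 2.0, 3.0]
y2 = [0.0, 1.001, 1.999, 3.0]
equivalent = mse(y1, y2) < 1e-5   # tolerance chosen for illustration
```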
Root Mean Square (rms)
The root mean square (rms) of an input sequence equals the positive square
root of the mean of the squares of the input sequence. In other words, you
square the input sequence, take the mean of the squared sequence, and take
the square root of that mean. The following equation yields the rms,
denoted Ψ_x, for an input sequence X.

Ψ_x = sqrt( (1/n) Σ_{i=0}^{n-1} x_i^2 )

where n is the number of elements in X.

Root mean square is a widely used quantity for analog signals. The
following equation yields the root mean square voltage V_rms for a sine
voltage waveform.

V_rms = V_p / sqrt(2)

where V_p is the peak amplitude of the signal.
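Both rms formulas can be checked numerically in Python; for a sine wave sampled over a whole number of periods, the computed rms matches V_p / sqrt(2):

```python
import math

def rms(xs):
    # positive square root of the mean of the squares
    return math.sqrt(sum(v * v for v in xs) / len(xs))

# One full period of a sine wave with peak amplitude V_p = 5.0.
n, vp = 1000, 5.0
sine = [vp * math.sin(2 * math.pi * i / n) for i in range(n)]
sine_rms = rms(sine)              # should equal V_p / sqrt(2)
```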
Probability
In any random experiment, a chance, or probability, always exists that a
particular event will or will not occur. The probability that event A will
occur is the ratio of the number of outcomes favorable to A to the total
number of equally likely outcomes.
You can assign a number between zero and one to an event as an indication
of the probability that the event will occur. If you are absolutely sure that
the event will occur, its probability is 100% or one. If you are sure that the
event will not occur, its probability is zero.
Random Variables
Many experiments generate outcomes that you can interpret in terms of real
numbers. Some examples are the number of cars passing a stop sign during
a day, the number of voters favoring candidate A, and the number of
accidents at a particular intersection. Random variables are the numerical
outcomes of an experiment whose values can change from experiment to
experiment.
Discrete Random Variables
Discrete random variables can take on only a finite number of possible
values. For example, if you roll a single unbiased die, six possible events
can occur. The roll can result in a 1, 2, 3, 4, 5, or 6. The probability that a
2 will result is one in six, or approximately 0.1667.
Continuous Random Variables
Continuous random variables can take on any value in an interval of real
numbers. For example, an experiment measures the life expectancy x of
50 batteries of a certain type. The batteries selected for the experiment
come from a larger population of the same type of battery. Figure 10-2
shows the histogram for the observed data.

Figure 10-2. Life Lengths Histogram (x-axis: Life Length in Hundreds of Hours)

Figure 10-2 shows that most of the values for x are between zero and
100 hours. The histogram values drop off smoothly for larger values of x.
The value of x can equal any value between zero and the largest observed
value, making x a continuous random variable.

You can approximate the histogram in Figure 10-2 by an exponentially
decaying curve. The exponentially decaying curve is a mathematical model
for the behavior of the data sample. If you want to know the probability that
a randomly selected battery will last longer than 400 hours, you can
approximate the probability value by the area under the curve to the right
of the value 4. The function that models the histogram of the random
variable is the probability density function. Refer to the Probability
Distribution and Density Functions section of this chapter for more
information about the probability density function.
A random variable X is continuous if it can take on an infinite number of
possible values associated with intervals of real numbers, and a probability
density function f(x) exists such that the following relationships and
equations are true.

f(x) ≥ 0 for all x

∫_{-∞}^{∞} f(x) dx = 1

P(a ≤ X ≤ b) = ∫_a^b f(x) dx    (10-6)

The chance that X will assume a specific value X = a is extremely small.
The following equation shows the result of solving Equation 10-6 for a
specific value of X.

P(X = a) = ∫_a^a f(x) dx = 0

Because X can assume an infinite number of possible values, the probability
of it assuming any specific value is zero.

Normal Distribution
The normal distribution is a continuous probability distribution. The
functional form of the normal distribution is the normal density function.
The following equation defines the normal density function f(x).

f(x) = (1 / (s sqrt(2π))) e^(-(x - x̄)^2 / (2s^2))
The normal density function has a symmetric bell shape. The following
parameters completely determine the shape and location of the normal
density function:
• The center of the curve is the mean value x̄ = 0.
• The spread of the curve is the variance s^2 = 1.
If a random variable has a normal distribution with a mean equal to zero
and a variance equal to one, the random variable has a standard normal
distribution.
Computing the One-Sided Probability of a Normally
Distributed Random Variable
The following equation defines the one-sided probability of a normally
distributed random variable.

p = Prob(X ≤ x)

where p is the one-sided probability, X is a standard normal random
variable with mean equal to zero and variance equal to one, and x is the
value of interest.

You can use the Normal Distribution VI to compute p for x. Suppose you
measure the heights of 1,000 randomly selected adult males and obtain a
data set S. The histogram distribution of S shows many measurements
grouped closely about a mean height, with relatively few very short and
very tall males in the population. Therefore, you can closely approximate
the histogram with the normal distribution.

Next, you want to find the probability that the height of a male in a different
set of 1,000 randomly chosen males is greater than or equal to 170 cm.
After normalizing 170 cm, you can use the Normal Distribution VI to
compute the one-sided probability p. Complete the following steps to
normalize 170 cm and calculate p using the Normal Distribution VI.
1. Subtract the mean from 170 cm.
2. Scale the difference from step 1 by the standard deviation to obtain the
normalized x value.
3. Wire the normalized x value to the x input of the Normal Distribution
VI and run the VI.
The choice of the probability density function is fundamental to obtaining
a correct probability value.
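Outside LabVIEW, the standard normal distribution function can be evaluated with the error function. The following Python sketch normalizes 170 cm and computes the tail probability; the population mean and standard deviation are assumed values chosen for illustration:

```python
import math

def phi(z):
    # standard normal distribution function Prob(X <= z), via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Assumed population statistics for the height example (illustrative only).
mu, sigma = 175.0, 7.5
z = (170.0 - mu) / sigma          # steps 1 and 2: normalize 170 cm
p_at_least_170 = 1.0 - phi(z)     # probability of height >= 170 cm
```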
In addition to the normal distribution method, you can use the following
methods to compute p:
• Chi-Square distribution
• F distribution
• T distribution

Finding x with a Known p
The Inv Normal Distribution VI computes the values x that have the
chance p of lying in a normally distributed sample. For example, you
might want to find the heights of males that have a 60% chance of lying in
a randomly chosen data set.

In addition to the inverse normal distribution method, you can use the
following methods to compute x with a known p:
• Inverse Chi-Square distribution
• Inverse F distribution
• Inverse T distribution
Probability Distribution and Density Functions
Equation 10-7 defines the probability distribution function F(x).

F(x) = ∫_{-∞}^{x} f(u) du    (10-7)

where f(x) is the probability density function, f(x) ≥ 0 for all x in the
domain of f, and

∫_{-∞}^{∞} f(x) dx = 1

By differentiating Equation 10-7, you can derive the following equation.

f(x) = dF(x)/dx
You can use a histogram to obtain a denormalized discrete representation
of f(x). The following equation defines the discrete representation of f(x).

Σ_{i=0}^{n-1} x_i ∆x = 1

The following equation yields the sum of the elements of the histogram.

Σ_{l=0}^{m-1} h_l = n

where m is the number of samples in the histogram and n is the number of
samples in the input sequence representing the function.

Therefore, to obtain an estimate of F(x) and f(x), normalize the histogram
by a factor of ∆x = 1/n and let h_j = x_j.

Figure 10-3 shows the block diagram of a VI that generates F(x) and f(x)
for Gaussian white noise.

Figure 10-3. Generating the Probability Distribution Function and Probability
Density Function
The VI in Figure 10-3 uses 25,000 samples, 2,500 in each of the 10 loop
iterations, to compute the probability distribution function for Gaussian
white noise. The Integral x(t) VI computes the probability distribution
function. The Derivative x(t) VI differentiates the probability
distribution function to compute the probability density function.
Figure 10-4 shows the results the VI in Figure 10-3 returns.

Figure 10-4. Input Signal, Probability Distribution Function,
and Probability Density Function

Figure 10-4 shows the last block of Gaussian-distributed noise samples,
the plot of the probability distribution function F(x), and the plot of the
probability density function f(x). The plot of F(x) monotonically increases
and is limited to a maximum value of 1.00 as the value on the x-axis
increases. The plot of f(x) shows a Gaussian distribution consistent with
the noise signal.
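The same F(x) and f(x) estimation can be sketched in plain Python: generate Gaussian samples, histogram them, normalize the histogram to get a density estimate, and form a running sum for the distribution estimate (mirroring the Integral x(t) step; the bin count and random seed are arbitrary choices):

```python
import random

# Estimate f(x) and F(x) for Gaussian white noise from a normalized histogram.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(25000)]

bins = 100
lo, hi = min(samples), max(samples)
width = (hi - lo) / bins
counts = [0] * bins
for v in samples:
    counts[min(int((v - lo) / width), bins - 1)] += 1

f_est = [c / (len(samples) * width) for c in counts]   # density estimate
F_est = []                                             # running-sum distribution
acc = 0.0
for c in counts:
    acc += c / len(samples)
    F_est.append(acc)
```

As in Figure 10-4, the distribution estimate rises monotonically to 1.00 while the density estimate traces out the bell shape.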
11
Linear Algebra
This chapter describes how to use the Linear Algebra VIs to perform matrix
computation and analysis. Use the NI Example Finder to find examples of
using the Linear Algebra VIs.
Linear Systems and Matrix Analysis
Systems of linear algebraic equations arise in many applications that
involve scientific computations, such as signal processing, computational
fluid dynamics, and others. Such systems occur naturally or are the result
of approximating differential equations by algebraic equations.
Whatever the application, it is always necessary to find an accurate solution
for the system of equations in a very efficient way. In matrix-vector
notation, such a system of linear algebraic equations has the following
form.

Ax = b

where A is an n × n matrix, b is a given vector consisting of n elements, and
x is the unknown solution vector to be determined.

Types of Matrices
A matrix is a 2D array of elements with m rows and n columns. The
elements in the 2D array might be real numbers, complex numbers,
functions, or operators. The matrix A shown below is an array of m rows
and n columns with m × n elements.

A = [ a_{0,0}      a_{0,1}      …  a_{0,n-1}
      a_{1,0}      a_{1,1}      …  a_{1,n-1}
      …            …            …  …
      a_{m-1,0}    a_{m-1,1}    …  a_{m-1,n-1} ]

Here, a_{i,j} denotes the (i, j)th element located in the ith row and the jth
column. In general, such a matrix is a rectangular matrix. When m = n, so
that the
number of rows is equal to the number of columns, the matrix is a square
matrix. An m × 1 matrix—m rows and one column—is a column vector. A
row vector is a 1 × n matrix—one row and n columns. If all the elements
other than the diagonal elements are zero—that is, a_{i,j} = 0 for
i ≠ j—the matrix is a diagonal matrix. For example,

A = [ 4 0 0
      0 5 0
      0 0 9 ]

is a diagonal matrix. A diagonal matrix with all the diagonal elements equal
to one is an identity matrix, also known as a unit matrix. If all the elements
below the main diagonal are zero, the matrix is an upper triangular matrix.
On the other hand, if all the elements above the main diagonal are zero, the
matrix is a lower triangular matrix. When all the elements are real numbers,
the matrix is a real matrix. When at least one of the elements of the matrix
is a complex number, the matrix is a complex matrix.

Determinant of a Matrix
One of the most important attributes of a matrix is its determinant. In the
simplest case, the determinant of the 2 × 2 matrix

A = [ a b
      c d ]

is given by ad - bc. The determinant of a larger square matrix is formed by
cofactor expansion in terms of the determinants of its submatrices. For
example, if

A = [ 2 5 3
      6 1 7
      1 6 9 ]

then the determinant of A, denoted by |A|, is

|A| = 2 |1 7; 6 9| - 5 |6 7; 1 9| + 3 |6 1; 1 6|
    = 2(-33) - 5(47) + 3(35)
    = -196
The determinant of a diagonal matrix, an upper triangular matrix, or a lower
triangular matrix is the product of its diagonal elements.

The determinant reveals many important properties of the matrix. For
example, if the determinant of a matrix is zero, the matrix is singular.
Conversely, a matrix with a nonzero determinant, such as the matrix A
above, is nonsingular. Refer to the Matrix Inverse and Solving Systems of
Linear Equations section of this chapter for more information about
singularity and the solution of linear equations and matrix inverses.
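The -196 result above can be reproduced by cofactor expansion along the first row; this small Python sketch also checks it against NumPy's determinant routine:

```python
import numpy as np

# 2x2 determinant ad - bc, then cofactor expansion along the first row
# of the 3x3 example from the text.
def det2(a, b, c, d):
    return a * d - b * c

A = [[2, 5, 3],
     [6, 1, 7],
     [1, 6, 9]]

detA = (A[0][0] * det2(A[1][1], A[1][2], A[2][1], A[2][2])    # 2 * (-33)
        - A[0][1] * det2(A[1][0], A[1][2], A[2][0], A[2][2])  # -5 * 47
        + A[0][2] * det2(A[1][0], A[1][1], A[2][0], A[2][1])) # +3 * 35
```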
Transpose of a Matrix
The transpose of a real matrix is formed by interchanging its rows and
columns. If the matrix B represents the transpose of A, denoted by A^T,
then b_{j,i} = a_{i,j}. For the matrix A defined above,

B = A^T = [ 2 6 1
            5 1 6
            3 7 9 ]

In the case of complex matrices, we define complex conjugate
transposition: if a = x + iy, then the complex conjugate is a* = x - iy.
If the matrix D represents the complex conjugate transpose of a complex
matrix C, then

D = C^H  ⇒  d_{i,j} = c*_{j,i}

That is, the matrix D is obtained by replacing every element in C by its
complex conjugate and then interchanging the rows and columns of the
resulting matrix.

A real matrix is a symmetric matrix if the transpose of the matrix is equal
to the matrix itself. The example matrix A is not a symmetric matrix. If a
complex matrix C satisfies the relation C = C^H, C is a Hermitian matrix.

Linear Independence
A set of vectors x_1, x_2, …, x_n is linearly dependent if and only if there
exist scalars α_1, α_2, …, α_n, not all zero, such that

α_1 x_1 + α_2 x_2 + … + α_n x_n = 0    (11-1)
In simpler terms, if one of the vectors can be written as a linear
combination of the others, the vectors are linearly dependent.

If the only set of α_i for which Equation 11-1 holds is α_1 = 0, α_2 = 0, …,
α_n = 0, the set of vectors x_1, x_2, …, x_n is linearly independent. In this
case, none of the vectors can be written as a linear combination of the
others. Given any set of vectors, Equation 11-1 always holds for α_1 = 0,
α_2 = 0, …, α_n = 0. Therefore, to show the linear independence of the set,
you must show that α_1 = 0, α_2 = 0, …, α_n = 0 is the only set of α_i for
which Equation 11-1 holds.

For example, first consider the vectors

x = [ 1      y = [ 3
      2 ]          4 ]

α_1 = 0 and α_2 = 0 are the only values for which the relation
α_1 x + α_2 y = 0 holds true. Therefore, these two vectors are linearly
independent of each other. Now consider the vectors

x = [ 1      y = [ 2
      2 ]          4 ]

If α_1 = -2 and α_2 = 1, then α_1 x + α_2 y = 0. Therefore, these two
vectors are linearly dependent on each other. You must understand this
definition of linear independence of vectors to fully appreciate the concept
of the rank of a matrix.

Matrix Rank
The rank of a matrix A, denoted by ρ(A), is the maximum number of
linearly independent columns in A. If you look at the example matrix A,
you find that all the columns of A are linearly independent of each other.
That is, none of the columns can be obtained by forming a linear
combination of the other columns. Hence, the rank of the matrix is 3.

Consider one more example matrix, B, where

B = [ 0 1 1
      1 2 3
      2 0 2 ]
This matrix has only two linearly independent columns because the third
column of B is linearly dependent on the first two columns (it is their sum).
Hence, the rank of this matrix is 2. It can be shown that the number of
linearly independent columns of a matrix is equal to the number of linearly
independent rows, so the rank can never be greater than the smaller
dimension of the matrix. Consequently, if A is an n × m matrix, then

ρ(A) ≤ min(n, m)

where min denotes the minimum of the two numbers. In matrix theory,
the rank of a square matrix pertains to the highest order nonsingular
submatrix that can be formed from it. A matrix is singular if its determinant
is zero. So the rank pertains to the highest order submatrix whose
determinant is not zero. For example, consider the 4 × 4 matrix

B = [ 1  2   3  4
      0  1  -1  0
      1  0   1  2
      1  1   0  2 ]

For this matrix, det(B) = 0, but the leading 3 × 3 submatrix is nonsingular:

| 1  2   3
  0  1  -1
  1  0   1 | = -4

Hence, the rank of B is 3. A square matrix has full rank only if its
determinant is different from zero. Matrix B is not a full-rank matrix.
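The rank claims above can be verified numerically, for example with NumPy's matrix_rank (a numerical check, not the symbolic argument used in the text):

```python
import numpy as np

# The third column of B3 equals the sum of its first two columns, so its
# rank is 2; the 4x4 B4 is singular but contains a nonsingular 3x3
# submatrix, so its rank is 3.
B3 = np.array([[0, 1, 1],
               [1, 2, 3],
               [2, 0, 2]])
B4 = np.array([[1, 2,  3, 4],
               [0, 1, -1, 0],
               [1, 0,  1, 2],
               [1, 1,  0, 2]], dtype=float)

rank3 = np.linalg.matrix_rank(B3)
rank4 = np.linalg.matrix_rank(B4)
det4 = np.linalg.det(B4)          # zero, since row 4 = row 2 + row 3
```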
Magnitude (Norms) of Matrices
You must develop a notion of the magnitude of vectors and matrices to
measure errors and sensitivity in solving a linear system of equations.
As an example, these linear systems can be obtained from applications in
control systems and computational fluid dynamics. In two dimensions,
for example, you cannot compare two vectors x = [x1 x2] and y = [y1 y2]
because you might have x1 > y1 but x2 < y2. A vector norm is a way to
assign a scalar quantity to these vectors so that they can be compared with
each other. It is similar to the concept of magnitude, modulus, or absolute
value for scalar numbers.
There are several ways to compute the norm of a matrix. These include the
2-norm (Euclidean norm), the 1-norm, the Frobenius norm (F-norm), and
the infinity norm (inf-norm). Each norm has its own physical
interpretation. Consider a unit ball containing the origin. The Euclidean
norm of a vector is simply the factor by which the ball must be expanded
or shrunk in order to encompass the given vector exactly, as shown in
Figure 11-1.

Figure 11-1. Euclidean Norm of a Vector (a: the unit ball; b: the vector;
c: the expanded ball)

Figure 11-1a shows a unit ball of radius = 1 unit. Figure 11-1b shows the
vector [2 2], which has length sqrt(2^2 + 2^2) = sqrt(8) = 2 sqrt(2). As
shown in Figure 11-1c, the unit ball must be expanded by a factor of
2 sqrt(2) before it can exactly encompass the given vector. Hence, the
Euclidean norm of the vector is 2 sqrt(2).

The norm of a matrix is defined in terms of an underlying vector norm. It is
the maximum relative stretching that the matrix applies to any vector. With
the vector 2-norm, the unit ball expands by a factor equal to the norm. With
the matrix 2-norm, the unit ball might become an ellipse (an ellipsoid
in 3D), with some axes longer than others. The longest axis determines the
norm of the matrix.

Some matrix norms are much easier to compute than others. The 1-norm
is obtained by finding the sum of the absolute values of the elements in
each column of the matrix. The largest of these sums is the 1-norm.
In mathematical terms, the 1-norm is simply the maximum absolute
column sum of the matrix.

||A||_1 = max_j Σ_{i=0}^{n-1} |a_{i,j}|
For example, if

A = [ 1 3
      2 4 ]

then

||A||_1 = max(3, 7) = 7

The inf-norm of a matrix is the maximum absolute row sum of the matrix.

||A||_inf = max_i Σ_{j=0}^{n-1} |a_{i,j}|    (11-2)

In this case, you add the magnitudes of all elements in each row of the
matrix. The maximum of these sums is the inf-norm. For the same example
matrix,

||A||_inf = max(4, 6) = 6

The 2-norm is the most difficult to compute because it is given by the
largest singular value of the matrix. Refer to the Matrix Factorization
section of this chapter for more information about singular values.

Determining Singularity (Condition Number)
Whereas the norm of a matrix provides a way to measure the magnitude
of the matrix, the condition number of a matrix is a measure of how close
the matrix is to being singular. The condition number of a square
nonsingular matrix is defined as

cond(A) = ||A||_p · ||A^(-1)||_p

where p can be one of the four norm types described in the Magnitude
(Norms) of Matrices section of this chapter. For example, to find the
condition number of a matrix A, you can find the 2-norm of A, the 2-norm
of the inverse of the matrix A, denoted by A^(-1), and then multiply them
together. The inverse of a square matrix A is a square matrix B such that
AB = I, where I is the identity matrix. As described earlier in this chapter,
the 2-norm is difficult to calculate on paper. You can use the Matrix Norm
VI to compute the 2-norm. For example, for

A = [ 1 2        A^(-1) = [ -2    1
      3 4 ]                  1.5  -0.5 ]

the Matrix Norm VI returns

||A||_2 = 5.4650,  ||A^(-1)||_2 = 2.7325,  cond(A) = 14.9331

The condition number can vary between 1 and infinity. A matrix with a
large condition number is nearly singular, while a matrix with a condition
number close to 1 is far from being singular. The matrix A above is
nonsingular. However, consider the matrix

B = [ 1     0.99
      1.99  2    ]

The condition number of this matrix is 47,168, and hence the matrix is close
to being singular. A matrix is singular if its determinant is equal to zero.
However, the determinant is not a good indicator for assessing how close a
matrix is to being singular. For the matrix B above, the determinant
(0.0299) is nonzero, yet the large condition number indicates that the
matrix is close to being singular. Remember that the condition number of a
matrix is always greater than or equal to one, with equality holding for
identity and permutation matrices. A permutation matrix is an identity
matrix with some rows and columns exchanged. The condition number is a
very useful quantity in assessing the accuracy of solutions to linear
systems.
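The norm and condition-number values quoted above can be reproduced with NumPy, using the 1-norm/inf-norm example matrix [1 3; 2 4] and the condition-number example matrix [1 2; 3 4]:

```python
import numpy as np

# 1-norm and inf-norm of the example matrix [1 3; 2 4].
A = np.array([[1.0, 3.0],
              [2.0, 4.0]])
one_norm = np.linalg.norm(A, 1)       # max absolute column sum -> 7
inf_norm = np.linalg.norm(A, np.inf)  # max absolute row sum -> 6

# 2-norm and condition number of the example matrix [1 2; 3 4].
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
two_norm = np.linalg.norm(M, 2)       # largest singular value, ~5.4650
cond_M = np.linalg.cond(M, 2)         # ||M||_2 * ||M^-1||_2, ~14.9331
```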
Basic Matrix Operations and
Eigenvalue-Eigenvector Problems
In this section, consider some very basic matrix operations. Two matrices,
A and B, are equal if they have the same number of rows and columns and
their corresponding elements are all equal. Multiplication of a matrix A by
a scalar α is equal to multiplication of all its elements by the scalar. That is,

C = αA  ⇒  c_{i,j} = α a_{i,j}
A
1 2
3 4
A
1 –
,
2 – 1
1.5 0.5 –
A
2
, 5.4650 A
1 –
2
,
2.7325 cond A ( ) , 14.9331
= = =
= =
B
1 0.99
1.99 2
=
C αA c
i j ,
⇒ αa
i j ,
= =
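The norm and condition-number values quoted above can be cross-checked numerically. The following sketch uses Python with NumPy, which is an assumption of this example; the document itself performs these computations with the Matrix Norm VI.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A_inv = np.linalg.inv(A)

# 2-norms (largest singular values) of A and its inverse
norm_A = np.linalg.norm(A, 2)          # ≈ 5.4650
norm_A_inv = np.linalg.norm(A_inv, 2)  # ≈ 2.7325

# cond(A) = ||A||_2 * ||A^-1||_2
cond_A = norm_A * norm_A_inv           # ≈ 14.9331
print(norm_A, norm_A_inv, cond_A)
```

The product of the two norms agrees with NumPy's built-in `np.linalg.cond(A, 2)`, which computes the same quantity directly from the singular values.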
Chapter 11 Linear Algebra
© National Instruments Corporation 119 LabVIEW Analysis Concepts
For example,

2 [1  2] = [2  4]
  [3  4]   [6  8]

Two (or more) matrices can be added or subtracted only if they have the
same number of rows and columns. If both matrices A and B have m rows
and n columns, their sum C is an m × n matrix defined as C = A ± B,
where c_{i,j} = a_{i,j} ± b_{i,j}. For example,

[1  2] + [2  4] = [3  6]
[3  4]   [5  1]   [8  5]

For multiplication of two matrices, the number of columns of the first
matrix must be equal to the number of rows of the second matrix. If matrix
A has m rows and n columns and matrix B has n rows and p columns, their
product C is an m × p matrix defined as C = AB, where

c_{i,j} = Σ_{k=0}^{n−1} a_{i,k} b_{k,j}

For example,

[1  2] × [2  4] = [12   6]
[3  4]   [5  1]   [26  16]

So you multiply the elements of the first row of A by the corresponding
elements of the first column of B and add all the results to get the element
in the first row and first column of C. Similarly, to calculate the element in
the i-th row and the j-th column of C, multiply the elements in the i-th row of A
by the corresponding elements in the j-th column of B, and then add them all.
This is shown pictorially in Figure 11-2.
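The element-wise definitions above can be written out as explicit loops. The helper names `mat_add` and `mat_mul` below are hypothetical, introduced only for this sketch; in practice a library routine (or the corresponding LabVIEW VI) would be used.

```python
import numpy as np

def mat_add(A, B):
    """Sum: c[i][j] = a[i][j] + b[i][j]; shapes must match."""
    m, n = A.shape
    C = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            C[i, j] = A[i, j] + B[i, j]
    return C

def mat_mul(A, B):
    """Product: c[i][j] = sum over k of a[i][k] * b[k][j]."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "columns of A must equal rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 4.0], [5.0, 1.0]])
print(mat_add(A, B))  # [[3. 6.] [8. 5.]]
print(mat_mul(A, B))  # [[12.  6.] [26. 16.]]
```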
Chapter 11 Linear Algebra
LabVIEW Analysis Concepts 1110 ni.com
Figure 11-2. Matrix Multiplication
Matrix multiplication, in general, is not commutative, that is, AB ≠ BA.
Also, multiplication of a matrix by an identity matrix results in the original
matrix.
Dot Product and Outer Product
If X represents a vector and Y represents another vector, the dot product of
these two vectors is obtained by multiplying the corresponding elements of
each vector and adding the results. This is denoted by

X • Y = Σ_{i=0}^{n−1} x_i y_i

where n is the number of elements in X and Y. Both vectors must have the
same number of elements. The dot product is a scalar quantity and has
many practical applications.

For example, consider the vectors a = 2i + 4j and b = 2i + j in a
two-dimensional rectangular coordinate system, as shown in Figure 11-3.
Chapter 11 Linear Algebra
© National Instruments Corporation 1111 LabVIEW Analysis Concepts
Figure 11-3. Vectors a and b
Then the dot product of these two vectors is given by

a • b = (2 × 2) + (4 × 1) = 8

The angle α between these two vectors is given by

α = cos⁻¹( (a • b) / (‖a‖ ‖b‖) ) = cos⁻¹(8/10) = 36.86°

where ‖a‖ denotes the magnitude of a.

As a second application, consider a body on which a constant force a acts,
as shown in Figure 11-4. The work W done by a in displacing the body is
defined as the product of ‖d‖ and the component of a in the direction of the
displacement d. That is,

W = ‖a‖ ‖d‖ cos α = a • d

Figure 11-4. Force Vector
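The dot-product and angle computations above can be sketched in a few lines of Python, assumed here only as a neutral notation for the arithmetic:

```python
import math

a = (2.0, 4.0)  # a = 2i + 4j
b = (2.0, 1.0)  # b = 2i + j

# dot product: multiply corresponding elements and sum
dot = sum(ai * bi for ai, bi in zip(a, b))      # 8.0

mag_a = math.sqrt(sum(ai * ai for ai in a))     # |a| = sqrt(20)
mag_b = math.sqrt(sum(bi * bi for bi in b))     # |b| = sqrt(5)

# angle between the vectors, in degrees
alpha = math.degrees(math.acos(dot / (mag_a * mag_b)))  # ≈ 36.87
print(dot, alpha)
```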
Chapter 11 Linear Algebra
LabVIEW Analysis Concepts 1112 ni.com
On the other hand, the outer product of these two vectors is a matrix.
The (i, j)-th element of this matrix is obtained using the formula

a_{i,j} = x_i × y_j

For example,

[1] × [3  4] = [3  4]
[2]            [6  8]

Eigenvalues and Eigenvectors

To understand eigenvalues and eigenvectors, start with the classical
definition. Given an n × n matrix A, the problem is to find a scalar λ
and a nonzero vector x such that

Ax = λx    (11-3)

In Equation 11-3, λ is an eigenvalue. Similar matrices have the same
eigenvalues. In Equation 11-3, x is the eigenvector that corresponds to the
eigenvalue. An eigenvector of a matrix is a nonzero vector that does not
rotate when the matrix is applied to it.

Calculating the eigenvalues and eigenvectors is a fundamental task of
linear algebra and allows you to solve many problems, such as systems of
differential equations, when you understand what they represent. Consider
an eigenvector x of a matrix A as a nonzero vector that does not rotate when
x is multiplied by A, except perhaps to point in precisely the opposite
direction. x may change length or reverse its direction, but it will not turn
sideways. In other words, there is some scalar constant λ such that
Equation 11-3 holds true. The value λ is an eigenvalue of A.

Consider the following example. One of the eigenvectors of the matrix A,
where

A = [2  3]
    [3  5]

is

x = [0.62]
    [1.00]
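The worked example above can be verified numerically. This NumPy sketch (an assumption of this example; the document uses the EigenValues and Vectors VI) multiplies A by x and confirms the scaling factor:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, 5.0]])
x = np.array([0.62, 1.00])

# A x should be (approximately) a scalar multiple of x
y = A @ x
ratios = y / x          # both components come out near 6.85
print(ratios)

# exact eigenvalues of the symmetric matrix, ascending order
w = np.linalg.eigvalsh(A)
print(w[-1])            # largest eigenvalue ≈ 6.8541
```

The small discrepancy between the two ratio components reflects the rounding of the eigenvector entries to two decimal places in the text.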
Chapter 11 Linear Algebra
© National Instruments Corporation 1113 LabVIEW Analysis Concepts
Multiplying the matrix A and the vector x simply causes the vector x
to be expanded by a factor of 6.85. Hence, the value 6.85 is one of the
eigenvalues of the matrix A. For any constant α, the vector αx also is
an eigenvector with eigenvalue λ because

A(αx) = αAx = λ(αx)

In other words, an eigenvector of a matrix determines a direction in
which the matrix expands or shrinks any vector lying in that direction by
a scalar multiple, and the expansion or contraction factor is given by the
corresponding eigenvalue. A generalized eigenvalue problem is to find a
scalar λ and a nonzero vector x such that

Ax = λBx

where B is another n × n matrix.

The following are some important properties of eigenvalues and
eigenvectors:
• The eigenvalues of a matrix are not necessarily all distinct. In other
words, a matrix can have multiple eigenvalues.
• All the eigenvalues of a real matrix need not be real. However, complex
eigenvalues of a real matrix must occur in complex conjugate pairs.
• The eigenvalues of a diagonal matrix are its diagonal entries, and the
eigenvectors are the corresponding columns of an identity matrix of
the same dimension.
• A real symmetric matrix always has real eigenvalues and eigenvectors.
• Eigenvectors can be scaled arbitrarily.

There are many practical applications in the field of science and
engineering for an eigenvalue problem. For example, the stability of a
structure and its natural modes and frequencies of vibration are determined
by the eigenvalues and eigenvectors of an appropriate matrix. Eigenvalues
also are very useful in analyzing numerical methods, such as convergence
analysis of iterative methods for solving systems of algebraic equations and
the stability analysis of methods for solving systems of differential
equations.

The EigenValues and Vectors VI has an Input Matrix input, which is an
N × N real square matrix. The matrix type input specifies the type of the
input matrix. The matrix type input could be 0, indicating a general matrix,
or 1, indicating a symmetric matrix. A symmetric matrix always has real
Chapter 11 Linear Algebra
LabVIEW Analysis Concepts 1114 ni.com
eigenvalues and eigenvectors. A general matrix has no special property
such as symmetry or triangular structure.
The output option input specifies what needs to be computed. A value
of 0 indicates that only the eigenvalues need to be computed. A value of 1
indicates that both the eigenvalues and the eigenvectors should be
computed. It is computationally expensive to compute both the eigenvalues
and the eigenvectors. So it is important that you use the output option input
of the EigenValues and Vectors VI carefully. Depending on your particular
application, you might just want to compute the eigenvalues or both the
eigenvalues and the eigenvectors. Also, a symmetric matrix needs less
computation than a nonsymmetric matrix. Choose the matrix type
carefully.
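The trade-offs just described, values-only versus values-and-vectors, and symmetric versus general routines, have direct analogues in generic numerical libraries. A NumPy sketch (NumPy is an assumption here, standing in for the VI's matrix type and output option inputs):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, 5.0]])   # symmetric input matrix

# values only, symmetric routine (cheapest of the three)
w = np.linalg.eigvalsh(A)

# values and vectors, symmetric routine (more work)
w2, V = np.linalg.eigh(A)

# general routine also works on a symmetric matrix, but does
# extra work and returns complex-capable output
w3 = np.linalg.eigvals(A)

print(sorted(w), sorted(w3.real))
```

All three calls agree on the eigenvalues; choosing the cheapest routine that matches the matrix structure mirrors the advice above about setting matrix type and output option carefully.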
Matrix Inverse and Solving Systems of Linear Equations

The inverse, denoted by A⁻¹, of a square matrix A is a square matrix
such that

A⁻¹A = AA⁻¹ = I

where I is the identity matrix. The inverse of a matrix exists only if the
determinant of the matrix is not zero—that is, it is nonsingular. In general,
you can find the inverse of only a square matrix. However, you can compute
the pseudoinverse of a rectangular matrix. Refer to the Matrix
Factorization section of this chapter for more information about
the pseudoinverse of a rectangular matrix.

Solutions of Systems of Linear Equations

In matrix-vector notation, a system of linear equations has the form Ax = b,
where A is an n × n matrix and b is a given n-vector. The aim is to determine
x, the unknown solution n-vector. Whether such a solution exists and
whether it is unique lies in determining the singularity or nonsingularity of
the matrix A.

A matrix is singular if it has any one of the following equivalent properties:
• The inverse of the matrix does not exist.
• The determinant of the matrix is zero.
• The rows or columns of A are linearly dependent.
• Az = 0 for some vector z ≠ 0.
Chapter 11 Linear Algebra
© National Instruments Corporation 1115 LabVIEW Analysis Concepts
Otherwise, the matrix is nonsingular. If the matrix is nonsingular, its inverse
A⁻¹ exists, and the system Ax = b has a unique solution, x = A⁻¹b, regardless
of the value for b.
On the other hand, if the matrix is singular, the number of solutions is
determined by the right-hand-side vector b. If A is singular and Ax = b,
then A(x + γz) = b for any scalar γ, where the vector z is as in the previous
definition. Thus, if a singular system has a solution, the solution cannot be
unique.
Explicitly computing the inverse of a matrix is prone to numerical
inaccuracies. Therefore, you should not solve a linear system of equations
by multiplying the inverse of the matrix A by the known right-hand-side
vector. The general strategy to solve such a system of equations is to
transform the original system into one whose solution is the same as that of
the original system but is easier to compute. One way to do so is to use the
Gaussian Elimination technique. The Gaussian Elimination technique has
three basic steps. First, express the matrix A as a product

A = LU

where L is a unit lower triangular matrix and U is an upper triangular
matrix. Such a factorization is LU factorization. Given this, the linear
system Ax = b can be expressed as LUx = b. Such a system then can be
solved by first solving the lower triangular system Ly = b for y by
forward-substitution. This is the second step in the Gaussian Elimination
technique. For example, if

L = [a  0]    y = [p]    b = [r]
    [b  c]        [q]        [s]

then

p = r/a,  q = (s − bp)/c
Chapter 11 Linear Algebra
LabVIEW Analysis Concepts 1116 ni.com
The first element of y can be determined easily due to the lower triangular
nature of the matrix L. Then you can use this value to compute the
remaining elements of the unknown vector sequentially—hence the name
forward-substitution. The final step involves solving the upper triangular
system Ux = y by back-substitution. For example, if
U = [a  b]    x = [m]    y = [p]
    [0  c]        [n]        [q]

then

n = q/c,  m = (p − bn)/a

In this case, this last element of x can be determined easily and then
used to determine the other elements sequentially—hence the name
back-substitution. So far, this chapter has described the case of square
matrices. Because a nonsquare matrix is necessarily singular, the system
of equations must have either no solution or a nonunique solution. In such
a situation, you usually find a unique solution x that satisfies the linear
system in an approximate sense.

You can use the Linear Algebra VIs to compute the inverse of a matrix,
compute the LU decomposition of a matrix, and solve a system of linear
equations. It is important to identify the input matrix properly, as it
helps avoid unnecessary computations, which in turn helps to minimize
numerical inaccuracies. The four possible matrix types are general
matrices, positive definite matrices, and lower and upper triangular
matrices. A real matrix is positive definite only if it is symmetric and its
quadratic form xᵀAx is positive for all nonzero vectors x. If the input
matrix is square but does not have a full rank (a rank-deficient matrix),
the VI finds the least-squares solution x. The least-squares solution is the
one that minimizes the norm of Ax − b. The same also holds true for
nonsquare matrices.

Matrix Factorization

The Matrix Inverse and Solving Systems of Linear Equations section of this
chapter describes how a linear system of equations can be transformed into
a system whose solution is simpler to compute. The basic idea was to
factorize the input matrix into the multiplication of several, simpler
matrices. The LU decomposition technique factors the input matrix as a
product of upper and lower triangular matrices. Other commonly used
factorization methods are Cholesky, QR, and the Singular Value
Decomposition (SVD). You can use these factorization methods to solve
many matrix problems, such as solving a linear system of equations,
inverting a matrix, and finding the determinant of a matrix.
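The forward- and back-substitution steps described above can be written out for general triangular systems. This hand-rolled Python sketch is an illustration only; a production solver would use the Linear Algebra VIs or a LAPACK-backed routine, and the helper names below are hypothetical:

```python
import numpy as np

def forward_substitute(L, b):
    """Solve L y = b for lower triangular L, top row first."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitute(U, y):
    """Solve U x = y for upper triangular U, bottom row first."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# L y = b with the 2x2 pattern from the text: p = r/a, q = (s - b*p)/c
L = np.array([[2.0, 0.0], [3.0, 4.0]])
b = np.array([2.0, 10.0])
y = forward_substitute(L, b)   # p = 2/2 = 1, q = (10 - 3*1)/4 = 1.75

# then U x = y, solved bottom-up
U = np.array([[2.0, 1.0], [0.0, 4.0]])
x = back_substitute(U, y)
print(y, x)
```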
If the input matrix A is symmetric and positive definite, an LU factorization
can be computed such that A = UᵀU, where U is an upper triangular matrix.
This is Cholesky factorization. This method requires only about half the
work and half the storage compared to LU factorization of a general matrix
by Gaussian Elimination. You can determine if a matrix is positive definite
by using the Test Positive Definite VI.
A matrix Q is orthogonal if its columns are orthonormal—that is,
if QᵀQ = I, the identity matrix. The QR factorization technique factors a
matrix as the product of an orthogonal matrix Q and an upper triangular
matrix R—that is, A = QR. QR factorization is useful for both square
and rectangular matrices. A number of algorithms are possible for
QR factorization, such as the Householder transformation, the
Givens transformation, and the Fast Givens transformation.
The Singular Value Decomposition (SVD) method decomposes a matrix
into the product of three matrices—A = USVᵀ. U and V are orthogonal
matrices. S is a diagonal matrix whose diagonal values are called the
singular values of A. The singular values of A are the nonnegative square
roots of the eigenvalues of AᵀA, and the columns of U and V, which are
called left and right singular vectors, are orthonormal eigenvectors of AAᵀ
and AᵀA, respectively. SVD is useful for solving analysis problems such as
computing the rank, norm, condition number, and pseudoinverse of
matrices.
Pseudoinverse

The pseudoinverse of a scalar σ is defined as 1/σ if σ ≠ 0, and zero
otherwise. In the case of scalars, the pseudoinverse is the same as the
inverse. You now can define the pseudoinverse of a diagonal matrix by
transposing the matrix and then taking the scalar pseudoinverse of each
entry. Then the pseudoinverse of a general real m × n matrix A, denoted
by A†, is given by

A† = V S† Uᵀ

The pseudoinverse exists regardless of whether the matrix is square or
rectangular. If A is square and nonsingular, the pseudoinverse is the same
as the usual matrix inverse. You can use the PseudoInverse Matrix VI to
compute the pseudoinverse of real and complex matrices.
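The formula A† = V S† Uᵀ can be checked directly against a library pseudoinverse. This NumPy sketch (NumPy is an assumption; the document's tool is the PseudoInverse Matrix VI) builds A† from the SVD factors:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # rectangular 3 x 2 matrix

# A = U S V^T (thin SVD)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# pseudoinverse of the diagonal part: invert nonzero singular values
s_pinv = np.array([1.0 / v if v > 1e-12 else 0.0 for v in s])

# A-dagger = V S-dagger U^T
A_pinv = Vt.T @ np.diag(s_pinv) @ U.T

print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
```

Because this A has full column rank, A†A reproduces the 2 × 2 identity matrix, matching the statement that the pseudoinverse reduces to the usual inverse in the square nonsingular case.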
12
Optimization
This chapter describes basic concepts and methods used to solve
optimization problems. Refer to Appendix A, References, for a list of
references to more information about optimization.
Introduction to Optimization
Optimization is the search for a set of parameters that minimize a function.
For example, you can use optimization to define an optimal set of
parameters for the design of a specific application, such as the optimal
parameters for designing a control mechanism for a system or the
conditions that minimize the cost of a manufacturing process. Generally,
optimization problems involve a set of possible solutions X and the
objective function f(x), also known as the cost function. f(x) is the function
of the variable or variables you want to minimize or maximize.
The optimization process either minimizes or maximizes f(x) until reaching
the optimal value for f(x). When minimizing f(x), the optimal solution
x* ∈ X satisfies the following condition.

f(x*) ≤ f(x)  ∀x ∈ X    (12-1)

The optimization process searches for the value of x* that minimizes f(x),
subject to the constraint x* ∈ X, where X is the constraint set. A value that
satisfies the conditions defined in Equation 12-1 is a global minimum.
Refer to the Local and Global Minima section of this chapter for more
information about global minima.

In the case of maximization, x* satisfies the following condition.

f(x*) ≥ f(x)  ∀x ∈ X

A value satisfying the preceding condition is a global maximum. This
chapter describes optimization in terms of minimizing f(x).
Constraints on the Objective Function
The presence and structure of any constraints on the value of f(x) influence
the selection of the algorithm you use to solve an optimization problem.
Certain algorithms solve only unconstrained optimization problems. If the
value of f(x) has any of the following constraints, the optimal value of f(x)
must satisfy the condition the constraint defines:
• Equality constraints, such as G_i(x) = 4 (i = 1, …, m_e)
• Inequality constraints, such as G_i(x) ≤ 4 (i = m_e + 1, …, m)
• Lower and upper level boundaries, such as x_l, x_u

Note Currently, LabVIEW does not include VIs you can use to solve optimization
problems in which the value of the objective function has constraints.

Linear and Nonlinear Programming Problems

The most common method of categorizing optimization problems is as
either a linear programming problem or a nonlinear programming problem.
In addition to constraints on the value of f(x), whether an optimization
problem is linear or nonlinear influences the selection of the algorithm you
use to solve the problem.

Note In the context of optimization, the term programming does not refer to computer
programming. Programming also refers to scheduling or planning. Linear and nonlinear
programming are subsets of mathematical programming. The objective of mathematical
programming is the same as optimization—maximizing or minimizing f(x).

Discrete Optimization Problems

Linear programming problems are discrete optimization problems. A finite
solution set X and a combinatorial nature characterize discrete optimization
problems. A combinatorial nature refers to the fact that several solutions to
the problem exist. Each solution to the problem consists of a different
combination of parameters. However, at least one optimal solution exists.
Planning a route to several destinations so you travel the minimum distance
typifies a combinatorial optimization problem.

Continuous Optimization Problems

Nonlinear programming problems are continuous optimization problems.
An infinite and continuous set X characterizes continuous optimization
problems.
Solving Problems Iteratively
Algorithms for solving optimization problems use an iterative process.
Beginning at a userspecified starting point, the algorithms establish a
search direction. Each iteration of the algorithm proceeds along the search
direction to the optimal solution by solving subproblems. Finding the
optimal solution terminates the iterative process of the algorithm.
As the number of design variables increases, the complexity of the
optimization problem increases. As problem complexity increases,
computational overhead increases due to the size and number of
subproblems the optimization algorithm must solve to find the optimal
solution. Because of the computational overhead associated with highly
complex problems, consider limiting the number of iterations allocated to
find the optimal solution. Use the accuracy input of the Optimization VIs
to specify the accuracy of the optimal solution.
Linear Programming
Linear programming problems have the following characteristics:
• Linear objective function
• Solution set X with a polyhedron shape defined by linear inequality
constraints
• Continuous f(x)
• Partially combinatorial structure
Solving linear programming problems involves finding the optimal value
of f(x) where f(x) is a linear combination of variables, as shown in
Equation 12-2.
f(x) = a_1 x_1 + … + a_n x_n    (12-2)

The value of f(x) in Equation 12-2 can have the following constraints:
• Primary constraints of x_1 ≥ 0, …, x_n ≥ 0
• Additional constraints of M = m_1 + m_2 + m_3
• m_1 of the following form

a_{i1} x_1 + … + a_{in} x_n ≤ b_i  (b_i ≥ 0),  i = 1, …, m_1

• m_2 of the following form

a_{j1} x_1 + … + a_{jn} x_n ≥ b_j  (b_j ≥ 0),  j = m_1 + 1, …, m_1 + m_2
• m_3 of the following form

a_{k1} x_1 + … + a_{kn} x_n = b_k  (b_k ≥ 0),  k = m_1 + m_2 + 1, …, M

Any vector x that satisfies all the constraints on the value of f(x) constitutes
a feasible answer to the linear programming problem. The vector yielding
the best result for f(x) is the optimal solution.

The following relationship represents the standard form of the linear
programming problem.

min { cᵀx : Ax = b, x ≥ 0 }

where x ∈ ℝⁿ is the vector of unknowns, c ∈ ℝⁿ is the cost vector, and
A ∈ ℝ^{m×n} is the constraint matrix. At least one member of solution set X
is at a vertex of the polyhedron that describes X.

Linear Programming Simplex Method

A simplex describes the solution set X for a linear programming problem.
The constraints on the value of f(x) define the polygonal surface
hyperplanes of the simplex. The hyperplanes intersect at vertices along the
surface of the simplex. The linear nature of f(x) means the optimal solution
is at one of the vertices of the simplex. The linear programming simplex
method iteratively moves from one vertex to the adjoining vertex until
moving to an adjoining vertex no longer yields a more optimal solution.

Note Although both the linear programming simplex method and the nonlinear downhill
simplex method use the concept of a simplex, the methods have nothing else in common.
Refer to the Downhill Simplex Method section of this chapter for information about the
downhill simplex method.

Nonlinear Programming

Nonlinear programming problems have either a nonlinear f(x) or a solution
set X defined by nonlinear equations and inequalities. Nonlinear
programming is a broad category of optimization problems and includes
the following subcategories:
• Quadratic programming problems
• Least-squares problems
• Convex problems
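As a concrete illustration of the standard-form problem min { cᵀx : Ax = b, x ≥ 0 } described above, the following sketch uses SciPy's `linprog` solver. SciPy is an assumption of this example, not something the document covers; its LabVIEW counterpart would be an Optimization VI.

```python
import numpy as np
from scipy.optimize import linprog

# minimize f(x) = x1 + 2*x2 subject to x1 + x2 = 1 and x >= 0
c = np.array([1.0, 2.0])          # cost vector
A_eq = np.array([[1.0, 1.0]])     # constraint matrix
b_eq = np.array([1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimum at the vertex x = [1, 0], f = 1
```

Consistent with the simplex discussion, the optimum lands on a vertex of the feasible polyhedron (here, an endpoint of the segment x_1 + x_2 = 1, x ≥ 0).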
Impact of Derivative Use on Search Method Selection
When you select a search method, consider whether the method uses
derivatives, which can help you determine the suitability of the method for
a particular optimization problem. For example, the downhill simplex
method, also known as the Nelder-Mead method, uses only evaluations of
f(x) to find the optimal solution. Because it uses only evaluations of f(x), the
downhill simplex method is a good choice for problems with pronounced
nonlinearity or for problems containing a significant number of
discontinuities.
The search methods that use derivatives, such as the gradient search
methods, work best with problems in which the objective function is
continuous in its first derivative.
Line Minimization
The process of iteratively searching along a vector for the minimum value
on the vector is line minimization or line searching. Line minimization can
help establish a search direction or verify that the chosen search direction
is likely to produce an optimal solution.
Nonlinear programming search algorithms use line minimization to solve
the subproblems leading to an optimal value for f(x). The search algorithm
searches along a vector until it reaches the minimum value on the vector.
After the search algorithm reaches the minimum on one vector, the search
continues along another vector, usually orthogonal to the first vector. The
line search continues along the new vector until reaching its minimum
value. The line minimization process continues until the search algorithm
finds the optimal solution.
Local and Global Minima
The goal of any optimization problem is to find a global optimal solution.
However, nonlinear programming problems are continuous optimization
problems so the solution set X for a nonlinear programming problem might
be infinitely large. Therefore, you might not be able to find a global
optimum for f(x). In practice, you solve most nonlinear programming
problems by finding a local optimum for f(x).
Global Minimum

In terms of solution set X, x* is a global minimum of f over X if it satisfies
the following relationship.

f(x*) ≤ f(x)  ∀x ∈ X

Local Minimum

A local minimum is a minimum of the function over a subset of the domain.
In terms of solution set X, x* is a local minimum of f over X if x* ∈ X, and
an ε > 0 exists so that the following relationship is true.

f(x*) ≤ f(x)  ∀x ∈ X with ‖x − x*‖ < ε

where ‖x‖ = √(x′x).

Figure 12-1 illustrates a function of x where the domain is any value
between 32 and 65; x ∈ [32, 65].

Figure 12-1. Domain of X (32, 65)

In Figure 12-1, A is a local minimum because you can find an ε > 0 such
that f(x*) ≤ f(x) for all x within ε of A; ε = 1 would suffice. Similarly,
C is a local minimum. B is the global minimum because
f(x*) ≤ f(x) for all x ∈ [32, 65].

Downhill Simplex Method

The downhill simplex method developed by Nelder and Mead uses a
simplex and performs function evaluations without derivatives.

Note Although the downhill simplex method and the linear programming simplex method
use the concept of a simplex, the methods have nothing else in common. Refer to the Linear
Programming Simplex Method section of this chapter for information about the linear
programming simplex method and the geometry of the simplex.
Most practical applications involve solution sets that are nondegenerate
simplexes. A nondegenerate simplex encloses a finite volume of
N dimensions. If you take any point of the nondegenerate simplex as the
origin of the simplex, the remaining N points of the simplex define vector
directions spanning the N-dimensional space.
The downhill simplex method requires that you define an initial simplex by
specifying N + 1 starting points. No effective means of determining the
initial starting point exists. You must use your judgement about the best
location from which to start. After deciding upon an initial starting point P_0,
you can use Equation 12-3 to determine the other points needed to define
the initial simplex.

P_i = P_0 + λe_i    (12-3)

where e_i is a unit vector and λ is an estimate of the characteristic length
scale of the problem.
Starting with the initial simplex defined by the points from Equation 12-3,
the downhill simplex method performs a series of reflections. A reflection
moves from a point on the simplex through the opposite face of the simplex
to a point where the function f is smaller. The configuration of the
reflections conserves the volume of the simplex, which maintains the
nondegeneracy of the simplex. The method continues to perform
reflections until the function value reaches a predetermined tolerance.

Because of the multidimensional nature of the downhill simplex method,
the value it finds for f(x) might not be the optimal solution. You can verify
that the value for f(x) is the optimal solution by repeating the process. When
you repeat the process, use the optimal solution from when you first ran the
method as P_0. Reinitialize the method to N + 1 starting points using
Equation 12-3.
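A derivative-free minimization of the kind described above can be sketched with SciPy's Nelder-Mead implementation (SciPy is an assumption of this example; LabVIEW provides its own Optimization VIs). Only function evaluations are used; the solver internally builds the N + 1 simplex points from the single starting point:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: strongly nonlinear, a classic derivative-free test
def f(x):
    return (x[0] - 1.0) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

p0 = np.array([-1.2, 1.0])  # initial point P0
res = minimize(f, p0, method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x)   # converges to the minimum at [1, 1]
```

Restarting the method from `res.x`, as the text recommends, is a cheap way to confirm that the reported point is not an artifact of the initial simplex.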
Golden Section Search Method
The golden section search method finds a local minimum of a 1D function
by bracketing the minimum. Bracketing a minimum requires a triplet of
points, as shown in the following relationship.

a < b < c such that f(b) < f(a) and f(b) < f(c)    (12-4)
Because the relationship in Equation 12-4 is true, the minimum of the
function is within the interval (a, c). The search method starts by choosing
a new point x between either a and b or between b and c. For example,
choose a point x between b and c and evaluate f(x). If f(b) < f(x), the new
bracketing triplet is a < b < x. If f(b) > f(x), the new bracketing triplet is
b < x < c. In each instance, the middle point, b or x, is the optimal
minimum found during the current iteration of the search.
Choosing a New Point x in the Golden Section

Given that a < b < c, point b is a fractional distance W between a and c,
as shown in the following equations.

(b − a)/(c − a) = W
(c − b)/(c − a) = 1 − W

A new point x is an additional fractional distance Z beyond b, as shown in
Equation 12-5.

(x − b)/(c − a) = Z    (12-5)

Given Equation 12-5, the next bracketing triplet can have either a length
of W + Z relative to the current bracketing triplet or a length of 1 − W.
To minimize the possible worst case, choose Z such that the following
equations are true.

W + Z = 1 − W
Z = 1 − 2W    (12-6)

Given Equation 12-6, the new x is the point in the interval symmetric to b.
Therefore, Equation 12-7 is true.

b − a = c − x    (12-7)

You can infer from Equation 12-7 that x is within the larger segment
because Z is positive only if W < 1/2.

If Z is optimal for the current iteration in the same way that W was optimal
for the previous iteration, the fractional distance of x from b to c equals the
fractional distance of b from a to c, as shown in Equation 12-8.

Z/(1 − W) = W    (12-8)
Equations 12-6 and 12-8 yield the following quadratic equation.

W² − 3W + 1 = 0    (12-9)

whose relevant root is

W = (3 − √5)/2 ≈ 0.38197

Therefore, the middle point b of the optimal bracketing interval a < b < c
is the fractional distance of 0.38197 from one of the end points and the
fractional distance of 0.61803 from the other end point. 0.38197 and
0.61803 comprise the golden mean, or golden section, of the Pythagoreans.
The golden section search method uses a bracketing triplet and measures
from point b to find a new point x a fractional distance of 0.38197 into the
larger interval, either (a, b) or (b, c), on each iteration of the search method.
Even when starting with an initial bracketing triplet whose segments are not
within the golden section, the process of successively choosing a new
point x at the golden mean quickly causes the method to converge linearly
to the correct, self-replicating golden section. After the search method
converges to the self-replicating golden section, each new function
evaluation brackets the minimum to an interval only 0.61803 times the size
of the preceding interval.
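The bracketing procedure above can be sketched in a few lines of Python. This is an illustrative implementation under the stated assumptions (a unimodal f with its minimum inside the initial interval); the helper name `golden_section_minimize` is hypothetical:

```python
import math

GOLDEN = (3.0 - math.sqrt(5.0)) / 2.0   # W ≈ 0.38197

def golden_section_minimize(f, a, c, tol=1e-8):
    """Locate a local minimum of f on (a, c) by golden-section bracketing."""
    b = a + GOLDEN * (c - a)
    while (c - a) > tol:
        # place x a fraction W into the larger of (a, b) and (b, c)
        if (c - b) > (b - a):
            x = b + GOLDEN * (c - b)
        else:
            x = b - GOLDEN * (b - a)
        if f(x) < f(b):
            # x becomes the new middle point; drop the far end
            if x > b:
                a = b
            else:
                c = b
            b = x
        else:
            # b stays in the middle; shrink the interval past x
            if x > b:
                c = x
            else:
                a = x
    return b

xmin = golden_section_minimize(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
print(xmin)   # ≈ 2.0
```

Each pass shrinks the bracket by roughly the 0.61803 factor derived above, which is why the loop terminates in a predictable number of iterations.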
Gradient Search Methods
Gradient search methods determine a search direction by using information
about the slope of f(x). The search direction points toward the most
probable location of the minimum. After the gradient search method
establishes the search direction, it uses iterative descent to move toward the
minimum.
The iterative descent process starts at a point x_0, which is an estimate of the
best starting point, and successively produces vectors x_1, x_2, …, so f
decreases with each iteration, as shown in the following relationship.

f(x_{k+1}) < f(x_k),  k = 0, 1, …

where k is the iteration number, f(x_{k+1}) is the objective function value at
iteration k + 1, and f(x_k) is the objective function value at iteration k.
Successively decreasing f improves the current estimate of the solution.
The iterative descent process attempts to decrease f to its minimum.
The following equations and relationships provide a general definition of
the gradient search method of solving nonlinear programming problems.

x_{k+1} = x_k + α_k d_k,  k = 0, 1, …    (12-10)

where d_k is the search direction and α_k is the step size.

In Equation 12-10, if the gradient of the objective function ∇f(x_k) ≠ 0, the
gradient search method needs a positive value for α_k and a value for d_k that
fulfills the following relationship.

∇f(x_k)′ d_k < 0

Iterations of gradient search methods continue until x_{k+1} = x_k.

Caveats about Converging to an Optimal Solution

A global minimum is a value for f(x) that satisfies the relationship described
in Equation 12-1.

Ideally, iteratively decreasing f converges to a global minimum for f(x). In
practice, convergence rarely proceeds to a global minimum for f(x) because
of the presence of local minima that are not global. Local minima attract
gradient search methods because the form of f near the current iterate, and
not the global structure of f, determines the downhill course the method
takes.

When a gradient search method begins on or encounters a stationary point,
the method stops at the stationary point. Therefore, a gradient search
method converges to a stationary point. If f is convex, the stationary point
is a global minimum. However, if f is not convex, the stationary point might
not be a global minimum. Therefore, if you have little information about the
locations of local minima, you might have to start the gradient search
method from several starting points.

Terminating Gradient Search Methods

Because a gradient search method does not produce convergence at a
global minimum, you must decide upon an error tolerance ε that assures
that the point at which the gradient search method stops is at least close to
a local minimum. Unfortunately, no explicit rules exist for determining an
absolutely accurate ε. The selection of a value for ε is somewhat arbitrary
and based on an estimate about the value of the optimal solution.
Chapter 12 Optimization
© National Instruments Corporation 1211 LabVIEW Analysis Concepts
Use the accuracy input of the Optimization VIs to specify a value for ε.
The nonlinear programming Optimization VIs iteratively compare the
difference between the highest and lowest input values to the value of
accuracy. When two consecutive approximations do not differ by more
than the value of accuracy, the VI stops.
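The iterative descent of Equation 12-10 with an accuracy-style stopping rule can be sketched in Python for a one-dimensional function. This is a hedged illustration with a fixed step size, not the method any particular Optimization VI implements.

```python
def gradient_descent(grad, x0, step=0.1, accuracy=1e-8, max_iter=10000):
    """Fixed-step iterative descent x_{k+1} = x_k - step * grad(x_k).

    Stops when two consecutive approximations differ by no more than
    `accuracy`, mirroring the termination rule described above.
    """
    x = x0
    for _ in range(max_iter):
        x_new = x - step * grad(x)          # d_k = -grad, alpha_k = step
        if abs(x_new - x) <= accuracy:      # consecutive iterates agree
            return x_new
        x = x_new
    return x

# f(x) = (x - 3)^2, f'(x) = 2(x - 3); the minimum is at x = 3
print(round(gradient_descent(lambda x: 2 * (x - 3), x0=0.0), 4))  # 3.0
```

Note that this stopping rule only certifies that the iterates have stopped moving, not that the point is a global minimum, which is exactly the caveat discussed above.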
Conjugate Direction Search Methods
Conjugate direction methods attempt to find the minimum of f(x) by
defining a direction set of vectors such that minimizing along one vector
does not interfere with minimizing along another vector, which prevents
indefinite cycling through the direction set.
When you minimize a function f along direction u, the gradient of f is
perpendicular to u at the line minimum. If P is the origin of a coordinate
system with coordinates x, you can approximate f by the Taylor series of f,
as shown in Equation 12-11.

f(x) = f(P) + Σ_i (∂f/∂x_i)x_i + (1/2)Σ_{i,j} (∂²f/∂x_i∂x_j)x_ix_j + …
     ≈ c - b·x + (1/2)x·A·x   (12-11)

where

c ≡ f(P),   b ≡ -∇f|_P,   [A]_{ij} ≡ ∂²f/∂x_i∂x_j |_P

The components of matrix A are the second partial derivatives of f.
Matrix A is the Hessian matrix of f at P.

The following equation gives the gradient of f.

∇f = A·x - b
Chapter 12 Optimization
LabVIEW Analysis Concepts 1212 ni.com
The following equation shows how the gradient changes with a
movement δx.

δ(∇f) = A·(δx)

After the search method reaches the minimum by moving in direction u, it
moves in a new direction v. To fulfill the condition that minimization along
one vector does not interfere with minimization along another vector, the
gradient of f must remain perpendicular to u, as shown in Equation 12-12.

0 = u·δ(∇f) = u·A·v   (12-12)

When Equation 12-12 is true for two vectors u and v, u and v are conjugate
vectors. When Equation 12-12 is true pairwise for all members of a set of
vectors, the set of vectors is a conjugate set. Performing successive line
minimizations of a function along a conjugate set of vectors prevents the
search method from having to repeat the minimization along any member
of the conjugate set.

If a conjugate set of vectors contains N linearly independent vectors,
performing N line minimizations arrives at the minimum for functions
having the quadratic form shown in Equation 12-11. If a function does not
have exactly the form of Equation 12-11, repeated cycles of N line
minimizations eventually converge quadratically to the minimum.

Conjugate Gradient Search Methods
At an N-dimensional point P, the conjugate gradient search methods
calculate the function f(P) and the gradient ∇f(P). ∇f(P) is the vector
of first partial derivatives. The conjugate gradient search method attempts
to find the minimum of f(x) by searching along a direction conjugate to the
previous gradient and, as much as possible, conjugate to all previous
gradients.

The Fletcher-Reeves method and the Polak-Ribiere method are the two
most common conjugate gradient search methods. The following theorems
serve as the basis for each method.

Theorem A
Theorem A has the following conditions:
• A is a symmetric, positive-definite, n × n matrix.
• g_0 is an arbitrary vector.
• h_0 = g_0.
Chapter 12 Optimization
© National Instruments Corporation 1213 LabVIEW Analysis Concepts
• The following equations define the two sequences of vectors for
i = 0, 1, 2, ….

g_{i+1} = g_i - λ_i·A·h_i   (12-13)

h_{i+1} = g_{i+1} + γ_i·h_i   (12-14)

where the chosen values for λ_i and γ_i make g_{i+1}·g_i = 0 and
h_{i+1}·A·h_i = 0, as shown in the following equations.

λ_i = (g_i·g_i)/(g_i·A·h_i)   (12-15)

γ_i = -(g_{i+1}·A·h_i)/(h_i·A·h_i)   (12-16)

If the denominators equal zero, take λ_i = 0, γ_i = 0.

• The following equations are true for all i ≠ j.

g_i·g_j = 0,   h_i·A·h_j = 0   (12-17)

The elements in the sequence that Equation 12-13 produces are mutually
orthogonal. The elements in the sequence that Equation 12-14 produces are
mutually conjugate.

Because Equation 12-17 is true, you can rewrite Equations 12-15
and 12-16 as the following equations.

γ_i = (g_{i+1}·g_{i+1})/(g_i·g_i) = ((g_{i+1} - g_i)·g_{i+1})/(g_i·g_i),
λ_i = (g_i·h_i)/(h_i·A·h_i)   (12-18)

Theorem B
The following theorem defines a method for constructing the vector
sequences from Equations 12-13 and 12-14 when the Hessian matrix A
is unknown:
• g_i is the vector sequence defined by Equation 12-13.
• h_i is the vector sequence defined by Equation 12-14.
Chapter 12 Optimization
LabVIEW Analysis Concepts 1214 ni.com
• Approximate f as the quadratic form given by the following
relationship.

f(x) ≈ c - b·x + (1/2)x·A·x

• g_i = -∇f(P_i) for some point P_i.
• Proceed from P_i in the direction h_i to the local minimum of f at
point P_{i+1}.
• Set the value for g_{i+1} according to Equation 12-19.

g_{i+1} = -∇f(P_{i+1})   (12-19)

The vector g_{i+1} that Equation 12-19 yields is the same as the vector that
Equation 12-13 yields when the Hessian matrix A is known. Therefore, you
can optimize f without having knowledge of the Hessian matrix A and without
the computational resources to calculate and store the Hessian matrix A.
You construct the direction sequence h_i with line minimization of the
gradient vector and the latest vector in the g sequence.

Difference between Fletcher-Reeves and
Polak-Ribiere
Both the Fletcher-Reeves method and the Polak-Ribiere method use
Theorem A and Theorem B. However, the Fletcher-Reeves method uses the
first term from Equation 12-18 for γ_i, as shown in Equation 12-20.

γ_i = (g_{i+1}·g_{i+1})/(g_i·g_i)   (12-20)

The Polak-Ribiere method uses the second term from Equation 12-18 for
γ_i, as shown in Equation 12-21.

γ_i = ((g_{i+1} - g_i)·g_{i+1})/(g_i·g_i)   (12-21)

Equation 12-20 equals Equation 12-21 for functions with exact quadratic
forms. However, most functions in practical applications do not have exact
quadratic forms. Therefore, after you find the minimum for the quadratic
form, you might need another set of iterations to find the actual minimum.
Chapter 12 Optimization
© National Instruments Corporation 1215 LabVIEW Analysis Concepts
When the Polak-Ribiere method reaches the minimum for the quadratic
form, it resets the direction h along the local gradient, essentially starting
the conjugate gradient process again. Therefore, the Polak-Ribiere method
can make the transition to additional iterations more efficiently than the
Fletcher-Reeves method.
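For a function that is exactly quadratic, the sequences of Theorem A reach the minimum in N line minimizations. The following Python sketch (a minimal illustration with hypothetical names, not the LabVIEW VI) applies the recurrences with the Fletcher-Reeves choice of γ_i to a 2 × 2 quadratic form.

```python
def cg_minimize_quadratic(A, b, x, n_iter):
    """Conjugate gradient for f(x) = (1/2) x.A.x - b.x, with A symmetric
    positive definite.  Uses exact line minimization and the
    Fletcher-Reeves gamma_i = (g_{i+1}.g_{i+1}) / (g_i.g_i)."""
    def mv(M, v):   # matrix-vector product
        return [sum(M[i][j] * v[j] for j in range(len(v)))
                for i in range(len(M))]
    def dot(u, v):
        return sum(a * c for a, c in zip(u, v))

    g = [bi - gi for bi, gi in zip(b, mv(A, x))]    # g = -(Ax - b)
    h = g[:]                                        # h_0 = g_0
    for _ in range(n_iter):
        Ah = mv(A, h)
        lam = dot(g, g) / dot(h, Ah)                # exact line minimization
        x = [xi + lam * hi for xi, hi in zip(x, h)]
        g_new = [gi - lam * ahi for gi, ahi in zip(g, Ah)]  # Equation 12-13
        gamma = dot(g_new, g_new) / dot(g, g)       # Fletcher-Reeves
        h = [gn + gamma * hi for gn, hi in zip(g_new, h)]   # Equation 12-14
        g = g_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg_minimize_quadratic(A, b, [0.0, 0.0], n_iter=2)   # N = 2 minimizations
print([round(v, 6) for v in x])   # [0.090909, 0.636364], i.e. x = [1/11, 7/11]
```

Swapping in the Polak-Ribiere formula replaces the `gamma` line with `dot([gn - gi for gn, gi in zip(g_new, g)], g_new) / dot(g, g)`; for this exactly quadratic f the two choices coincide.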
13
Polynomials
Polynomials have many applications in various areas of engineering and
science, such as curve fitting, system identification, and control design.
This chapter describes polynomials and operations involving polynomials.
General Form of a Polynomial
A univariate polynomial is a mathematical expression involving a sum of
powers in one variable multiplied by coefficients. Equation 13-1 shows the
general form of an nth-order polynomial.

P(x) = a_0 + a_1x + a_2x^2 + … + a_nx^n   (13-1)

where P(x) is the nth-order polynomial, the highest power n is the order of
the polynomial if a_n ≠ 0, and a_0, a_1, …, a_n are the constant coefficients
of the polynomial, which can be either real or complex.
You can rewrite Equation 13-1 in its factored form, as shown in
Equation 13-2.

P(x) = a_n(x - r_1)(x - r_2) … (x - r_n)   (13-2)

where r_1, r_2, …, r_n are the roots of the polynomial.

The root r_i of P(x) satisfies the following equation.

P(x)|_{x = r_i} = 0,   i = 1, 2, …, n

In general, P(x) might have repeated roots, such that Equation 13-3 is true.

P(x) = a_n(x - r_1)^{k_1}(x - r_2)^{k_2} … (x - r_l)^{k_l}(x - r_{l+1})(x - r_{l+2}) … (x - r_{l+j})   (13-3)

The following conditions are true for Equation 13-3:
• r_1, r_2, …, r_l are the repeated roots of the polynomial
• k_i is the multiplicity of the root r_i, i = 1, 2, …, l
• r_{l+1}, r_{l+2}, …, r_{l+j} are the nonrepeated roots of the polynomial
• k_1 + k_2 + … + k_l + j = n

A polynomial of order n must have n roots, counting each repeated root
according to its multiplicity. If the polynomial coefficients are all real, the
roots of the polynomial are either real numbers or complex conjugate pairs.
Basic Polynomial Operations
The basic polynomial operations include the following operations:
• Finding the order of a polynomial
• Evaluating a polynomial
• Adding, subtracting, multiplying, or dividing polynomials
• Determining the composition of a polynomial
• Determining the greatest common divisor of two polynomials
• Determining the least common multiple of two polynomials
• Calculating the derivative of a polynomial
• Integrating a polynomial
• Finding the number of real roots of a real polynomial
The following equations define two polynomials used in the following
sections.

P(x) = a_0 + a_1x + a_2x^2 + a_3x^3 = a_3(x - p_1)(x - p_2)(x - p_3)   (13-4)

Q(x) = b_0 + b_1x + b_2x^2 = b_2(x - q_1)(x - q_2)   (13-5)
Order of Polynomial
The largest exponent of the variable determines the order of a polynomial.
The order of P(x) in Equation 13-4 is three because of the term x^3. The
order of Q(x) in Equation 13-5 is two because of the term x^2.
Polynomial Evaluation
Polynomial evaluation determines the value of a polynomial for a particular
value of x, as shown by the following equation.

P(x)|_{x = x_0} = a_0 + a_1x_0 + a_2x_0^2 + a_3x_0^3
               = a_0 + x_0(a_1 + x_0(a_2 + x_0·a_3))
Evaluating an nth-order polynomial in this nested form requires
n multiplications and n additions.
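The nested form above is Horner's scheme. A minimal Python sketch (hypothetical function name, not the LabVIEW VI; coefficients in ascending order of power, matching the convention recommended later in this chapter):

```python
def eval_poly(coeffs, x0):
    """Evaluate a_0 + a_1 x + ... + a_n x^n at x0 using the nested form,
    which costs n multiplications and n additions."""
    result = 0.0
    for a in reversed(coeffs):   # work inward from the highest power
        result = result * x0 + a
    return result

# P(x) = 1 + 2x + 3x^2 at x = 2  ->  1 + 4 + 12
print(eval_poly([1, 2, 3], 2.0))   # 17.0
```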
Polynomial Addition
The addition of two polynomials involves adding together coefficients
whose variables have the same exponent. The following equation shows
the result of adding together the polynomials defined by Equations 13-4
and 13-5.

P(x) + Q(x) = (a_0 + b_0) + (a_1 + b_1)x + (a_2 + b_2)x^2 + a_3x^3
Polynomial Subtraction
Subtracting one polynomial from another involves subtracting coefficients
whose variables have the same exponent. The following equation shows
the result of subtracting the polynomials defined by Equations 13-4
and 13-5.

P(x) - Q(x) = (a_0 - b_0) + (a_1 - b_1)x + (a_2 - b_2)x^2 + a_3x^3
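With coefficients stored in ascending-power arrays, addition and subtraction reduce to element-wise operations after padding the shorter array. A brief sketch (hypothetical helper names, not the LabVIEW VIs):

```python
from itertools import zip_longest

def poly_add(p, q):
    """Add coefficient arrays (ascending powers), padding with zeros."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_sub(p, q):
    """Subtract coefficient arrays (ascending powers)."""
    return [a - b for a, b in zip_longest(p, q, fillvalue=0)]

P = [5, -3, -1, 2]     # 5 - 3x - x^2 + 2x^3
Q = [1, -2, 1]         # 1 - 2x + x^2
print(poly_add(P, Q))  # [6, -5, 0, 2]
print(poly_sub(P, Q))  # [4, -1, -2, 2]
```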
Polynomial Multiplication
Multiplying one polynomial by another polynomial involves multiplying
each term of one polynomial by each term of the other polynomial. The
following equations show the result of multiplying the polynomials defined
by Equations 13-4 and 13-5.

P(x)Q(x) = (a_0 + a_1x + a_2x^2 + a_3x^3)(b_0 + b_1x + b_2x^2)
         = a_0(b_0 + b_1x + b_2x^2) + a_1x(b_0 + b_1x + b_2x^2)
           + a_2x^2(b_0 + b_1x + b_2x^2) + a_3x^3(b_0 + b_1x + b_2x^2)
         = a_3b_2x^5 + (a_3b_1 + a_2b_2)x^4 + (a_3b_0 + a_2b_1 + a_1b_2)x^3
           + (a_2b_0 + a_1b_1 + a_0b_2)x^2 + (a_1b_0 + a_0b_1)x + a_0b_0
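Collecting the coefficient of each power x^k above gives the convolution of the two coefficient arrays. A minimal sketch (hypothetical helper name, not the LabVIEW VI):

```python
def poly_mul(p, q):
    """Multiply two coefficient arrays (ascending powers); the product
    coefficient of x^k is the convolution sum over i of p[i]*q[k-i]."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (1 + x)(1 - x) = 1 - x^2
print(poly_mul([1, 1], [1, -1]))   # [1, 0, -1]
```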
Polynomial Division
Dividing the two polynomials P(x) and Q(x) results in the quotient U(x) and
remainder V(x), such that the following equation is true.

P(x) = Q(x)U(x) + V(x)

For example, the following equations define polynomials P(x) and Q(x).

P(x) = 5 - 3x - x^2 + 2x^3   (13-6)
Q(x) = 1 - 2x + x^2   (13-7)

Complete the following steps to divide P(x) by Q(x).
1. Divide the highest order term in Equation 13-6 by the highest order
term in Equation 13-7.

2x^3/x^2 = 2x   (13-8)

2. Multiply the result of Equation 13-8 by Q(x) from Equation 13-7.

2xQ(x) = 2x - 4x^2 + 2x^3   (13-9)

3. Subtract the product of Equation 13-9 from P(x).

(5 - 3x - x^2 + 2x^3) - (2x - 4x^2 + 2x^3) = 5 - 5x + 3x^2

The highest order term becomes 3x^2.
4. Repeat step 1 through step 3 using 3x^2 as the highest term of P(x).
a. Divide 3x^2 by the highest order term in Equation 13-7.

3x^2/x^2 = 3   (13-10)

b. Multiply the result of Equation 13-10 by Q(x) from
Equation 13-7.

3Q(x) = 3 - 6x + 3x^2   (13-11)

c. Subtract the result of Equation 13-11 from 3x^2 - 5x + 5.

(3x^2 - 5x + 5) - (3x^2 - 6x + 3) = x + 2
Because the order of the remainder x + 2 is lower than the order of
Q(x), the polynomial division procedure stops. The following
equations give the quotient polynomial U(x) and the remainder
polynomial V(x) for the division of Equation 13-6 by Equation 13-7.

U(x) = 3 + 2x
V(x) = 2 + x
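The steps above can be sketched as a short long-division routine over ascending-power coefficient arrays. This is a hedged illustration (hypothetical function name, assumes the dividend has at least the order of the divisor), not the LabVIEW VI:

```python
def poly_divmod(p, q):
    """Long division of coefficient arrays (ascending powers).
    Returns (quotient, remainder) with p = q*quotient + remainder."""
    rem = list(p)
    quot = [0.0] * (len(p) - len(q) + 1)
    for k in range(len(quot) - 1, -1, -1):
        coeff = rem[k + len(q) - 1] / q[-1]   # divide leading terms
        quot[k] = coeff
        for j, b in enumerate(q):             # subtract coeff * x^k * q(x)
            rem[k + j] -= coeff * b
    return quot, rem[:len(q) - 1]

# P = 5 - 3x - x^2 + 2x^3 and Q = 1 - 2x + x^2 (Equations 13-6 and 13-7)
quot, rem = poly_divmod([5, -3, -1, 2], [1, -2, 1])
print(quot)   # [3.0, 2.0]  ->  U(x) = 3 + 2x
print(rem)    # [2.0, 1.0]  ->  V(x) = 2 + x
```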
Polynomial Composition
Polynomial composition involves replacing the variable x in a polynomial
with another polynomial. For example, replacing x in Equation 13-4 with
the polynomial from Equation 13-5 results in the following equation.

P(Q(x)) = a_0 + a_1Q(x) + a_2(Q(x))^2 + a_3(Q(x))^3
        = a_0 + Q(x){a_1 + Q(x)[a_2 + a_3Q(x)]}

where P(Q(x)) denotes the composite polynomial.
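The nested form above is Horner's scheme applied with a polynomial argument, which the following sketch implements over ascending-power coefficient arrays (hypothetical helper names, not the LabVIEW VI):

```python
def poly_compose(p, q):
    """Coefficients of P(Q(x)) (ascending powers) using the nested form
    a_0 + Q(x){a_1 + Q(x)[a_2 + a_3 Q(x)]} shown above."""
    def mul(u, v):
        out = [0] * (len(u) + len(v) - 1)
        for i, a in enumerate(u):
            for j, b in enumerate(v):
                out[i + j] += a * b
        return out
    result = [0]
    for a in reversed(p):          # Horner's scheme, polynomial argument
        result = mul(result, q)
        result[0] += a
    while len(result) > 1 and result[-1] == 0:
        result.pop()               # drop trailing zero coefficients
    return result

# P(x) = 1 + x^2, Q(x) = 1 + x  ->  P(Q(x)) = 1 + (1 + x)^2 = 2 + 2x + x^2
print(poly_compose([1, 0, 1], [1, 1]))   # [2, 2, 1]
```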
Greatest Common Divisor of Polynomials
The greatest common divisor of two polynomials P(x) and Q(x) is a
polynomial R(x) = gcd(P(x), Q(x)) that has the following properties:
• R(x) divides P(x) and Q(x).
• If C(x) divides P(x) and Q(x), C(x) also divides R(x).
The following equations define two polynomials P(x) and Q(x).

P(x) = U(x)R(x)   (13-12)

Q(x) = V(x)R(x)   (13-13)

where U(x), V(x), and R(x) are polynomials.

The following conditions are true for Equations 13-12 and 13-13:
• U(x) and R(x) are factors of P(x).
• V(x) and R(x) are factors of Q(x).
• P(x) is a multiple of U(x) and R(x).
• Q(x) is a multiple of V(x) and R(x).
• R(x) is a common factor of polynomials P(x) and Q(x).

If P(x) and Q(x) have the common factor R(x), and if R(x) is divisible by
every other common factor of P(x) and Q(x) without remainder, R(x) is the
greatest common divisor of P(x) and Q(x). If the greatest common divisor
R(x) of polynomials P(x) and Q(x) is equal to a constant, P(x) and Q(x) are
coprime.
You can find the greatest common divisor of two polynomials by using
Euclid's division algorithm and an iterative procedure of polynomial
division. If the order of P(x) is larger than the order of Q(x), you can
complete the following steps to find the greatest common divisor R(x).
1. Divide P(x) by Q(x) to obtain the quotient polynomial Q_1(x) and
remainder polynomial R_1(x).

P(x) = Q(x)Q_1(x) + R_1(x)

2. Divide Q(x) by R_1(x) to obtain the new quotient polynomial Q_2(x) and
new remainder polynomial R_2(x).

Q(x) = R_1(x)Q_2(x) + R_2(x)

3. Divide R_1(x) by R_2(x) to obtain Q_3(x) and R_3(x), and continue in the
same way.

R_1(x) = R_2(x)Q_3(x) + R_3(x)
R_2(x) = R_3(x)Q_4(x) + R_4(x)
⋮

If the remainder polynomial becomes zero, as shown by the following
equation,

R_{n-1}(x) = R_n(x)Q_{n+1}(x),

the greatest common divisor R(x) of polynomials P(x) and Q(x) equals
R_n(x).
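The iterated-division steps above can be sketched directly over coefficient arrays. This is a hedged illustration (hypothetical helper names, naive floating-point arithmetic with a tolerance, result scaled to be monic), not the LabVIEW VI:

```python
def poly_gcd(p, q, tol=1e-9):
    """Euclid's algorithm on ascending-power coefficient arrays."""
    def trim(a):
        a = list(a)
        while len(a) > 1 and abs(a[-1]) < tol:
            a.pop()                     # drop near-zero leading terms
        return a
    def poly_mod(a, b):                 # remainder of a / b
        a, b = trim(a), trim(b)
        while len(a) >= len(b) and any(abs(c) > tol for c in a):
            coeff = a[-1] / b[-1]
            shift = len(a) - len(b)
            for j, bj in enumerate(b):
                a[shift + j] -= coeff * bj
            a = trim(a)
        return a
    p, q = trim(p), trim(q)
    while any(abs(c) > tol for c in q):
        p, q = q, poly_mod(p, q)        # R_{n-1} = R_n Q_{n+1} + R_{n+1}
    return [c / p[-1] for c in p]       # make the divisor monic

# P = (x - 1)(x - 2) = 2 - 3x + x^2,  Q = (x - 1)(x - 3) = 3 - 4x + x^2
print(poly_gcd([2, -3, 1], [3, -4, 1]))   # [-1.0, 1.0], i.e. x - 1
```

The greatest common divisor is only unique up to a constant factor, which is why the sketch normalizes the result to a monic polynomial.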
Least Common Multiple of Two Polynomials
Finding the least common multiple of two polynomials involves finding the
smallest polynomial that is a multiple of each polynomial.

P(x) and Q(x) are polynomials defined by Equations 13-12 and 13-13,
respectively. If L(x) is a multiple of both P(x) and Q(x), L(x) is a common
multiple of P(x) and Q(x). In addition, if L(x) has the lowest order among
all the common multiples of P(x) and Q(x), L(x) is the least common
multiple of P(x) and Q(x).

If L(x) is the least common multiple of P(x) and Q(x) and if R(x) is the
greatest common divisor of P(x) and Q(x), dividing the product of P(x)
and Q(x) by R(x) obtains L(x), as shown by the following equation.

L(x) = P(x)Q(x)/R(x) = U(x)R(x)V(x)R(x)/R(x) = U(x)V(x)R(x)

Derivatives of a Polynomial
Finding the derivative of a polynomial involves finding the sum of the
derivatives of the terms of the polynomial.

Equation 13-14 defines an nth-order polynomial, T(x).

T(x) = c_0 + c_1x + c_2x^2 + … + c_nx^n   (13-14)

The first derivative of T(x) is a polynomial of order n - 1, as shown by the
following equation.

(d/dx)T(x) = c_1 + 2c_2x + 3c_3x^2 + … + nc_nx^{n-1}

The second derivative of T(x) is a polynomial of order n - 2, as shown by
the following equation.

(d^2/dx^2)T(x) = 2c_2 + 6c_3x + … + n(n - 1)c_nx^{n-2}

The following equation defines the kth derivative of T(x).

(d^k/dx^k)T(x) = k!c_k + ((k + 1)!/1!)c_{k+1}x + ((k + 2)!/2!)c_{k+2}x^2 + … + (n!/(n - k)!)c_nx^{n-k}

where k ≤ n.

The Newton-Raphson method of finding the zeros of an arbitrary equation
is an application where you need to determine the derivative of a
polynomial.
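The term-by-term derivative rule above maps directly onto an ascending-power coefficient array: the coefficient of x^{i-1} in the derivative is i times the coefficient of x^i. A minimal sketch (hypothetical function name, not the LabVIEW VI):

```python
def poly_derivative(coeffs):
    """d/dx of c_0 + c_1 x + ... + c_n x^n, an order n-1 polynomial
    returned as an ascending-power coefficient array."""
    return [i * c for i, c in enumerate(coeffs)][1:]

# T(x) = 1 + 2x + 3x^2 + 4x^3  ->  T'(x) = 2 + 6x + 12x^2
print(poly_derivative([1, 2, 3, 4]))   # [2, 6, 12]
# Applying it twice gives the second derivative: 6 + 24x
print(poly_derivative(poly_derivative([1, 2, 3, 4])))   # [6, 24]
```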
Integrals of a Polynomial
Finding the integral of a polynomial involves the summation of integrals of
the terms of the polynomial.

Indefinite Integral of a Polynomial
The following equation yields the indefinite integral of the polynomial T(x)
from Equation 13-14.

∫T(x)dx = c + c_0x + (1/2)c_1x^2 + … + (1/(n + 1))c_nx^{n+1}

Because the derivative of a constant is zero, c can be an arbitrary constant.
For convenience, you can set c to zero.

Definite Integral of a Polynomial
Subtracting the evaluations at the two limits of the indefinite integral
obtains the definite integral of the polynomial, as shown by the following
equation.

∫_a^b T(x)dx = (c_0x + (1/2)c_1x^2 + … + (1/(n + 1))c_nx^{n+1})|_{x = b}
             - (c_0x + (1/2)c_1x^2 + … + (1/(n + 1))c_nx^{n+1})|_{x = a}

Number of Real Roots of a Real Polynomial
For a real polynomial, you can find the number of real roots of the
polynomial over a certain interval by applying the Sturm function.

If

P_0(x) = P(x)

and

P_1(x) = (d/dx)P(x),
the following equation defines the Sturm function.

P_i(x) = -[P_{i-2}(x) - P_{i-1}(x){P_{i-2}(x)/P_{i-1}(x)}],   i = 2, 3, …

where P_i(x) is the Sturm function and {P_{i-2}(x)/P_{i-1}(x)} represents
the quotient polynomial resulting from the division of P_{i-2}(x) by
P_{i-1}(x), so P_i(x) is the negative of the remainder of that division.

You can calculate P_i(x) until it becomes a constant. For example, the
following equations show the calculation of the Sturm functions for
P(x) = 1 - 4x + 2x^3 over the interval (-2, 1).

P_0(x) = P(x) = 1 - 4x + 2x^3

P_1(x) = (d/dx)P(x) = -4 + 6x^2

P_2(x) = -[P_0(x) - P_1(x){P_0(x)/P_1(x)}]
       = -[P_0(x) - P_1(x)(x/3)]
       = -1 + (8/3)x

P_3(x) = -[P_1(x) - P_2(x){P_1(x)/P_2(x)}]
       = -[P_1(x) - P_2(x)(27/32 + (9/4)x)]
       = 101/32
To evaluate the Sturm functions at the boundary of the interval (-2, 1), you
do not have to calculate the exact values in the evaluation. You only need
to know the signs of the values of the Sturm functions. Table 13-1 lists the
signs of the Sturm functions for the interval (-2, 1).

Table 13-1. Signs of the Sturm Functions for the Interval (-2, 1)

x    P_0(x)   P_1(x)   P_2(x)   P_3(x)   Number of Sign Changes
-2     -        +        -        +              3
 1     -        +        +        +              1

In Table 13-1, notice the number of sign changes for each boundary. For
x = -2, the evaluation of P_i(x) results in three sign changes. For x = 1, the
evaluation of P_i(x) results in one sign change.

The difference in the number of sign changes between the two boundaries
corresponds to the number of real roots that lie in the interval. For the
calculation of the Sturm function over the interval (-2, 1), the difference in
the number of sign changes is two, which means two real roots of
polynomial P(x) lie in the interval (-2, 1). Figure 13-1 shows the result of
evaluating P(x) over (-2, 1).

Figure 13-1. Result of Evaluating P(x) over (-2, 1)

In Figure 13-1, the two real roots lie at approximately -1.5 and 0.26.
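The Sturm-sequence root count above can be sketched end to end in Python. This is a hedged, dependency-free illustration (hypothetical function names, a simple sign convention, distinct roots assumed), not the LabVIEW VI:

```python
def count_real_roots(p, a, b, tol=1e-9):
    """Count real roots of p (ascending-power coefficients) in (a, b) as
    the difference in Sturm-sequence sign changes at the boundaries."""
    def trim(u):
        while len(u) > 1 and abs(u[-1]) < tol:
            u = u[:-1]
        return u
    def neg_rem(u, v):              # -(remainder of u / v)
        u, v = list(u), trim(v)
        while len(u) >= len(v) and any(abs(c) > tol for c in u):
            coeff = u[-1] / v[-1]
            shift = len(u) - len(v)
            for j, vj in enumerate(v):
                u[shift + j] -= coeff * vj
            u = trim(u)
        return [-c for c in u]
    seq = [trim(list(p)), trim([i * c for i, c in enumerate(p)][1:])]
    while len(seq[-1]) > 1:         # extend until P_i(x) is a constant
        seq.append(neg_rem(seq[-2], seq[-1]))
    def changes(x):
        signs = [1 if sum(c * x**k for k, c in enumerate(q)) > 0 else -1
                 for q in seq]
        return sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))
    return changes(a) - changes(b)

# P(x) = 1 - 4x + 2x^3 has two real roots in (-2, 1), as in Table 13-1
print(count_real_roots([1, -4, 0, 2], -2.0, 1.0))   # 2
```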
Rational Polynomial Function Operations
Rational polynomial functions have many applications, such as filter
design, system theory, and digital image processing. In particular, rational
polynomial functions provide the most common way of representing the
z-transform. A rational polynomial function takes the form of the division
of two polynomials, as shown by the following equation.

F(x) = B(x)/A(x) = (b_0 + b_1x + b_2x^2 + … + b_mx^m)/(a_0 + a_1x + a_2x^2 + … + a_nx^n)

where F(x) is the rational polynomial, B(x) is the numerator polynomial,
A(x) is the denominator polynomial, and A(x) cannot equal zero.

The roots of B(x) are the zeros of F(x). The roots of A(x) are the poles
of F(x).

The following equations define two rational polynomials used in the
following sections.

F_1(x) = B_1(x)/A_1(x),   F_2(x) = B_2(x)/A_2(x)   (13-15)

Rational Polynomial Function Addition
The following equation shows the addition of two rational polynomials.

F_1(x) + F_2(x) = (B_1(x)A_2(x) + B_2(x)A_1(x))/(A_1(x)A_2(x))

Rational Polynomial Function Subtraction
The following equation shows the subtraction of two rational polynomials.

F_1(x) - F_2(x) = (B_1(x)A_2(x) - B_2(x)A_1(x))/(A_1(x)A_2(x))
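The cross-multiplication rule above combines the polynomial multiplication and addition operations defined earlier in this chapter. A brief sketch (hypothetical helper names, ascending-power coefficient arrays, no common-factor reduction of the result), not the LabVIEW VI:

```python
def rational_add(B1, A1, B2, A2):
    """(B1/A1) + (B2/A2) = (B1*A2 + B2*A1) / (A1*A2)."""
    def mul(u, v):
        out = [0] * (len(u) + len(v) - 1)
        for i, a in enumerate(u):
            for j, b in enumerate(v):
                out[i + j] += a * b
        return out
    def add(u, v):
        n = max(len(u), len(v))
        u = u + [0] * (n - len(u))
        v = v + [0] * (n - len(v))
        return [a + b for a, b in zip(u, v)]
    return add(mul(B1, A2), mul(B2, A1)), mul(A1, A2)

# 1/(x + 1) + 1/(x + 2) = (2x + 3)/(x^2 + 3x + 2)
num, den = rational_add([1], [1, 1], [1], [2, 1])
print(num)   # [3, 2]
print(den)   # [2, 3, 1]
```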
Rational Polynomial Function Multiplication
The following equation shows the multiplication of two rational
polynomials.

F_1(x)F_2(x) = (B_1(x)B_2(x))/(A_1(x)A_2(x))

Rational Polynomial Function Division
The following equation shows the division of two rational polynomials.

F_1(x)/F_2(x) = (B_1(x)A_2(x))/(A_1(x)B_2(x))

Negative Feedback with a Rational Polynomial Function
Figure 13-2 shows a diagram of a generic system with negative feedback,
with F_1 in the forward path and F_2 in the feedback path.

Figure 13-2. Generic System with Negative Feedback

For the system shown in Figure 13-2, the following equation yields the
transfer function of the system.

H(x) = F_1(x)/(1 + F_1(x)F_2(x)) = (B_1(x)A_2(x))/(A_1(x)A_2(x) + B_1(x)B_2(x))

Positive Feedback with a Rational Polynomial Function
Figure 13-3 shows a diagram of a generic system with positive feedback.
Figure 13-3. Generic System with Positive Feedback

For the system shown in Figure 13-3, the following equation yields the
transfer function of the system.

H(x) = F_1(x)/(1 - F_1(x)F_2(x)) = (B_1(x)A_2(x))/(A_1(x)A_2(x) - B_1(x)B_2(x))

Derivative of a Rational Polynomial Function
The derivative of a rational polynomial function also is a rational
polynomial function. Using the quotient rule, you obtain the derivative of
a rational polynomial function from the derivatives of the numerator and
denominator polynomials. According to the quotient rule, the following
equation yields the first derivative of the rational polynomial function F_1(x)
defined in Equation 13-15.

(d/dx)F_1(x) = (A_1(x)(d/dx)B_1(x) - B_1(x)(d/dx)A_1(x))/(A_1(x))^2

You can derive the second derivative of a rational polynomial function
from the first derivative, as shown by the following equation.

(d^2/dx^2)F_1(x) = (d/dx)((d/dx)F_1(x))

You continue to derive rational polynomial function derivatives such that
you derive the jth derivative of a rational polynomial function from the
(j - 1)th derivative.

Partial Fraction Expansion
Partial fraction expansion involves splitting a rational polynomial into a
summation of low-order rational polynomials. Partial fraction expansion is
a useful tool for z-transform and digital filter structure conversion.
Heaviside Cover-Up Method
The Heaviside cover-up method is the easiest of the partial fraction
expansion methods.

The following actions and conditions illustrate the Heaviside cover-up
method:
• Define a rational polynomial function F(x) with the following
equation.

F(x) = B(x)/A(x) = (b_0 + b_1x + b_2x^2 + … + b_mx^m)/(a_0 + a_1x + a_2x^2 + … + a_nx^n)

where m < n, meaning, without loss of generality, the order of B(x) is
lower than the order of A(x).
• Assume that A(x) has one repeated root r_0 of multiplicity k and use the
following equation to express A(x) in terms of its roots.

A(x) = a_n(x - r_0)^k(x - r_1)(x - r_2) … (x - r_{n-k})

• Rewrite F(x) as a sum of partial fractions.

F(x) = B(x)/(a_n(x - r_0)^k(x - r_1) … (x - r_{n-k}))
     = β_0/(x - r_0) + β_1/(x - r_0)^2 + … + β_{k-1}/(x - r_0)^k
       + α_1/(x - r_1) + α_2/(x - r_2) + … + α_{n-k}/(x - r_{n-k})

where

α_i = (x - r_i)F(x)|_{x = r_i},   i = 1, 2, …, n - k

β_j = (1/(k - j - 1)!)·(d^{k-j-1}/dx^{k-j-1})[(x - r_0)^k F(x)]|_{x = r_0},   j = 0, 1, …, k - 1
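For distinct roots, the α_i formula above is the whole method: cover the factor (x - r_i), then evaluate what is left at x = r_i. A minimal sketch (hypothetical function name, distinct real roots only, not the LabVIEW VI):

```python
def heaviside_coverup(B, roots, an=1.0):
    """Residues alpha_i for F(x) = B(x) / (an * prod(x - r_i)) when all
    roots are distinct: alpha_i = (x - r_i) F(x) evaluated at x = r_i."""
    def ev(coeffs, x):   # evaluate an ascending-power coefficient array
        return sum(c * x**k for k, c in enumerate(coeffs))
    alphas = []
    for i, ri in enumerate(roots):
        denom = an
        for j, rj in enumerate(roots):
            if j != i:
                denom *= ri - rj      # the factors left after covering up
        alphas.append(ev(B, ri) / denom)
    return alphas

# F(x) = 1/((x - 1)(x - 2)) = -1/(x - 1) + 1/(x - 2)
print(heaviside_coverup([1.0], [1.0, 2.0]))   # [-1.0, 1.0]
```

Handling a repeated root r_0 requires the β_j derivative formula above in addition to this cover-up step.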
Orthogonal Polynomials
A set of polynomials P_i(x) are orthogonal polynomials over the interval
a < x < b if each polynomial in the set satisfies the following equations.

∫_a^b w(x)P_n(x)P_m(x)dx = 0,   n ≠ m

∫_a^b w(x)P_n(x)P_n(x)dx ≠ 0,   n = m

The interval (a, b) and the weighting function w(x) vary depending on the
set of orthogonal polynomials. One of the most important applications of
orthogonal polynomials is to solve differential equations.

Chebyshev Orthogonal Polynomials of the First Kind
The following recurrence relationship defines Chebyshev orthogonal
polynomials of the first kind, T_n(x).

T_0(x) = 1
T_1(x) = x
T_n(x) = 2xT_{n-1}(x) - T_{n-2}(x),   n = 2, 3, …

Chebyshev orthogonal polynomials of the first kind satisfy the following
equations.

∫_{-1}^{1} (1/sqrt(1 - x^2))T_n(x)T_m(x)dx = 0,   n ≠ m

∫_{-1}^{1} (1/sqrt(1 - x^2))T_n(x)T_n(x)dx = π/2 for n ≠ 0, and π for n = 0
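The three-term recurrence above generates the coefficient arrays of T_n directly, as in this sketch (hypothetical function name, ascending-power coefficients, not the LabVIEW VI):

```python
def chebyshev_T(n):
    """Ascending-power coefficients of T_n(x) from the recurrence
    T_n = 2x T_{n-1} - T_{n-2} given above."""
    t_prev, t_cur = [1], [0, 1]            # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        nxt = [0] + [2 * c for c in t_cur] # multiply T_{k} by 2x
        for i, c in enumerate(t_prev):     # subtract T_{k-1}
            nxt[i] -= c
        t_prev, t_cur = t_cur, nxt
    return t_cur

# T_2 = 2x^2 - 1 and T_3 = 4x^3 - 3x
print(chebyshev_T(2))   # [-1, 0, 2]
print(chebyshev_T(3))   # [0, -3, 0, 4]
```

The other families below follow the same pattern with their own starting polynomials and recurrence coefficients.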
Chebyshev Orthogonal Polynomials of the Second Kind
The following recurrence relationship defines Chebyshev orthogonal
polynomials of the second kind, U_n(x).

U_0(x) = 1
U_1(x) = 2x
U_n(x) = 2xU_{n-1}(x) - U_{n-2}(x),   n = 2, 3, …

Chebyshev orthogonal polynomials of the second kind satisfy the
following equations.

∫_{-1}^{1} sqrt(1 - x^2)U_n(x)U_m(x)dx = 0,   n ≠ m

∫_{-1}^{1} sqrt(1 - x^2)U_n(x)U_n(x)dx = π/2,   n = m

Gegenbauer Orthogonal Polynomials
The following recurrence relationship defines Gegenbauer orthogonal
polynomials C_n^a(x).

C_0^a(x) = 1
C_1^a(x) = 2ax
C_n^a(x) = (2(n + a - 1)/n)xC_{n-1}^a(x) - ((n + 2a - 2)/n)C_{n-2}^a(x),
   n = 2, 3, …;  a ≠ 0

Gegenbauer orthogonal polynomials satisfy the following equations.

∫_{-1}^{1} (1 - x^2)^{a - 1/2}C_n^a(x)C_m^a(x)dx = 0,   n ≠ m

∫_{-1}^{1} (1 - x^2)^{a - 1/2}C_n^a(x)C_n^a(x)dx
   = π2^{1 - 2a}Γ(n + 2a)/(n!(n + a)Γ^2(a)) for a ≠ 0, and 2π/n^2 for a = 0
where Γ(z) is the gamma function defined by the following equation.

Γ(z) = ∫_0^∞ t^{z - 1}e^{-t}dt

Hermite Orthogonal Polynomials
The following recurrence relationship defines Hermite orthogonal
polynomials H_n(x).

H_0(x) = 1
H_1(x) = 2x
H_n(x) = 2xH_{n-1}(x) - 2(n - 1)H_{n-2}(x),   n = 2, 3, …

Hermite orthogonal polynomials satisfy the following equations.

∫_{-∞}^{∞} e^{-x^2}H_n(x)H_m(x)dx = 0,   n ≠ m

∫_{-∞}^{∞} e^{-x^2}H_n(x)H_n(x)dx = sqrt(π)2^n·n!,   n = m

Laguerre Orthogonal Polynomials
The following recurrence relationship defines Laguerre orthogonal
polynomials L_n(x).

L_0(x) = 1
L_1(x) = -x + 1
L_n(x) = ((2n - 1 - x)/n)L_{n-1}(x) - ((n - 1)/n)L_{n-2}(x),   n = 2, 3, …

Laguerre orthogonal polynomials satisfy the following equations.

∫_0^∞ e^{-x}L_n(x)L_m(x)dx = 0,   n ≠ m

∫_0^∞ e^{-x}L_n(x)L_n(x)dx = 1,   n = m
Associated Laguerre Orthogonal Polynomials
The following recurrence relationship defines associated Laguerre
orthogonal polynomials L_n^a(x).

L_0^a(x) = 1
L_1^a(x) = -x + a + 1
L_n^a(x) = ((2n + a - 1 - x)/n)L_{n-1}^a(x) - ((n + a - 1)/n)L_{n-2}^a(x),
   n = 2, 3, …

Associated Laguerre orthogonal polynomials satisfy the following
equations.

∫_0^∞ e^{-x}x^aL_n^a(x)L_m^a(x)dx = 0,   n ≠ m

∫_0^∞ e^{-x}x^aL_n^a(x)L_n^a(x)dx = Γ(a + n + 1)/n!,   n = m

Legendre Orthogonal Polynomials
The following recurrence relationship defines Legendre orthogonal
polynomials P_n(x).

P_0(x) = 1
P_1(x) = x
P_n(x) = ((2n - 1)/n)xP_{n-1}(x) - ((n - 1)/n)P_{n-2}(x),   n = 2, 3, …

Legendre orthogonal polynomials satisfy the following equations.

∫_{-1}^{1} P_n(x)P_m(x)dx = 0,   n ≠ m

∫_{-1}^{1} P_n(x)P_n(x)dx = 2/(2n + 1),   n = m
Evaluating a Polynomial with a Matrix
The matrix evaluation of a polynomial differs from 2D polynomial
evaluation.

When performing matrix evaluation of a polynomial, you must use a square
matrix. The following equations define a second-order polynomial P(x) and
a square 2 × 2 matrix G.

P(x) = a_0 + a_1x + a_2x^2   (13-16)

G = | g_1  g_2 |   (13-17)
    | g_3  g_4 |

In 2D polynomial evaluation, you evaluate P(x) at each element of
matrix G, as shown by the following equation.

P(G) = | P(g_1)  P(g_2) |
       | P(g_3)  P(g_4) |

When performing matrix polynomial evaluation, you replace the variable x
with matrix G, as shown by the following equation.

P([G]) = a_0I + a_1G + a_2GG

where I is the identity matrix of the same size as G.

In the following equations, actual values replace the variables a and g in
Equations 13-16 and 13-17.

P(x) = 5 + 3x + 2x^2   (13-18)

G = | 1  2 |   (13-19)
    | 3  4 |
The following equation shows the matrix evaluation of the polynomial P(x)
from Equation 13-18 with matrix G from Equation 13-19.

P([G]) = 5| 1  0 | + 3| 1  2 | + 2| 1  2 || 1  2 |
          | 0  1 |    | 3  4 |    | 3  4 || 3  4 |

       = | 5  0 | + | 3   6 | + | 14  20 |
         | 0  5 |   | 9  12 |   | 30  44 |

       = | 22  26 |
         | 39  61 |

Polynomial Eigenvalues and Vectors
For every operator, a collection of functions exists that when operated
on by the operator produces the same function, modified only by a
multiplicative constant factor. The members of the collection of functions
are eigenfunctions. The multiplicative constants modifying the
eigenfunctions are eigenvalues. The following equation illustrates the
eigenfunction/eigenvalue relationship.

Âf(x) = af(x)

where f(x) is an eigenfunction of Â and a is the eigenvalue of f(x).

Some applications lead to a polynomial eigenvalue problem. Given a set of
square matrices, the problem becomes determining a scalar λ and a nonzero
vector x such that Equation 13-20 is true.

Ψ(λ)x = (C_0 + λC_1 + … + λ^{n-1}C_{n-1} + λ^nC_n)x = 0   (13-20)

The following conditions apply to Equation 13-20:
• Ψ(λ) is the matrix polynomial whose coefficients are square matrices.
• C_i is a square matrix of size m × m, i = 0, 1, …, n.
• λ is the eigenvalue of Ψ(λ).
• x is the corresponding eigenvector of Ψ(λ) and has length m.
• 0 is the zero vector and has length m.
You can write the polynomial eigenvalue problem as a generalized
eigenvalue problem, as shown by the following equation.

Az = λBz

where

    |  0    I    0   …    0      |
    |  0    0    I   …    0      |
A = |  .    .    .        .      |   and is an nm × nm matrix;
    |  0    0    0   …    I      |
    | –C0  –C1  –C2  …  –C(n–1)  |

    | I             |
B = |   I           |   and is an nm × nm matrix;
    |     ...       |
    |          Cn   |

    | x         |
    | λx        |
z = | λ²x       |   and is a vector of length nm;
    | .         |
    | λ^(n–1)x  |

0 is the zero matrix of size m × m;
I is the identity matrix of size m × m.
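The linearization above can be checked numerically. The sketch below (illustrative Python/NumPy with hypothetical example matrices, not the LabVIEW VI) builds A and B for n = 2, m = 2, solves the generalized problem, and confirms that each λ makes Ψ(λ) singular:

```python
import numpy as np

m = 2
C0 = np.diag([2.0, 6.0])            # example coefficient matrices (hypothetical)
C1 = np.diag([-3.0, -5.0])
C2 = np.eye(m)                      # leading coefficient, chosen invertible

Z, I = np.zeros((m, m)), np.eye(m)
A = np.block([[Z, I], [-C0, -C1]])  # block companion form from the equation above
B = np.block([[I, Z], [Z, C2]])

# Solve A z = lambda B z as an ordinary eigenproblem on B^-1 A
lams = np.linalg.eigvals(np.linalg.solve(B, A))

# Each eigenvalue should make Psi(lambda) = C0 + lambda C1 + lambda^2 C2 singular
residuals = [abs(np.linalg.det(C0 + lam * C1 + lam**2 * C2)) for lam in lams]
print(sorted(lam.real for lam in lams))   # expected roots 1, 2, 2, 3 (up to rounding)
```

Because C0, C1, and C2 are diagonal here, the problem decouples into the scalar quadratics λ² − 3λ + 2 and λ² − 5λ + 6, whose roots the eigenvalues reproduce.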
Entering Polynomials in LabVIEW
Use the Polynomial and Rational Polynomial VIs to perform polynomial
operations.
LabVIEW uses 1D arrays for polynomial inputs and outputs. The 1D array
stores the polynomial coefficients. When entering polynomial coefficient
values into an array, maintain a consistent method for entering the values.
The order in which LabVIEW displays the results of polynomial operations
reflects the order in which you enter the input polynomial coefficient
values. National Instruments recommends entering polynomial coefficient
values in ascending order of power. For example, the following equations
define polynomials P(x) and Q(x).
P(x) = 1 – 3x + 4x² + 2x³

Q(x) = 1 – 2x + x²

You can describe P(x) and Q(x) by vectors P and Q, as shown in the
following equations.

    | 1  |        | 1  |
P = | –3 |    Q = | –2 |
    | 4  |        | 1  |
    | 2  |

Figure 13-4 shows the front panel of a VI that uses the Add Polynomials VI
to add P(x) and Q(x).

Figure 13-4. Adding P(x) and Q(x)
In Figure 13-4, you enter the polynomial coefficients into the array
controls, P(x) and Q(x), in ascending order of power. Also, the VI displays
the results of the addition in P(x) + Q(x) in ascending order of power, based
on the order of the two input arrays.
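The same ascending-power convention is easy to mirror outside LabVIEW. As an illustrative sketch (plain Python lists standing in for the array controls and the Add Polynomials VI):

```python
def add_polynomials(p, q):
    """Add two polynomials stored as ascending-power coefficient lists."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))      # pad the shorter list with zero coefficients
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

P = [1, -3, 4, 2]                   # P(x) = 1 - 3x + 4x^2 + 2x^3
Q = [1, -2, 1]                      # Q(x) = 1 - 2x + x^2
print(add_polynomials(P, Q))        # [2, -5, 5, 2] = 2 - 5x + 5x^2 + 2x^3
```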
Part III
Point-By-Point Analysis
This part describes the concepts of point-by-point analysis, answers
frequently asked questions about point-by-point analysis, and describes
a case study that illustrates the use of the Point By Point VIs.
14
Point-By-Point Analysis
This chapter describes the concepts of point-by-point analysis, answers
frequently asked questions about point-by-point analysis, and describes
a case study that illustrates the use of the Point By Point VIs. Use the
NI Example Finder to find examples of using the Point By Point VIs.
Introduction to Point-By-Point Analysis
Point-by-point analysis is a method of continuous data analysis in which
analysis occurs for each data point, point by point. Point-by-point analysis
is ideally suited to real-time data acquisition. When your data acquisition
system requires real-time, deterministic performance, you can build a
program that uses point-by-point versions of array-based LabVIEW
analysis VIs.
Real-time performance is a reality for data acquisition. With point-by-point
analysis, data analysis also can utilize real-time performance. The discrete
stages of array-based analysis, such as buffer preparation, analysis, and
output, can make array-based analysis too slow for higher-speed,
deterministic, real-time systems.
Point-by-point analysis enables you to accomplish the following tasks:
• Track and respond to real-time events.
• Connect the analysis process directly to the signal for speed and
minimal data loss.
• Perform programming tasks more easily, because you do not allocate
arrays and you make fewer adjustments to sampling rates.
• Synchronize analysis with data acquisition automatically, because you
work with a single signal instantaneously.
Using the Point By Point VIs
The Point By Point VIs correspond to each array-based analysis VI that is
relevant to continuous data acquisition. However, you must account for
programming differences. You usually have fewer programming tasks
when you use the Point By Point VIs. Table 14-1 describes characteristic
inputs and outputs of the Point By Point VIs.
Refer to the Case Study of Point-By-Point Analysis section of this chapter
for an example of a point-by-point analysis system.
Initializing Point By Point VIs
This section describes when and how to use the point-by-point initialize
parameter of many Point By Point VIs. This section also describes the
First Call? function.
Purpose of Initialization in Point By Point VIs
Using the initialize parameter, you can reset the internal state of Point By
Point VIs without interrupting the continuous flow of data or computation.
You can reset a VI in response to events such as the following:
• A user changing the value of a parameter
• The application generating a specific event or reaching a threshold
For example, the Value Has Changed PtByPt VI can respond to change
events such as the following:
• Receiving the input data
• Detecting the change
Table 14-1. Characteristic Inputs and Outputs for Point By Point VIs

Parameter | Description
input data | Incoming data
output data | Outgoing, analyzed data
initialize | Routine that resets the internal state of a VI
sample length | Setting for your data acquisition system or computation system that best represents the area of interest in the data
• Generating a Boolean TRUE value that triggers initialization in
another VI
• Transferring the input data to another VI for processing
Figure 14-1 shows the Value Has Changed PtByPt VI triggering
initialization in another VI and transferring data to that VI. In this case,
the input data is a parameter value for the target VI.
Figure 14-1. Typical Role of the Value Has Changed PtByPt VI
Many point-by-point applications do not require use of the initialize
parameter because initialization occurs automatically whenever an operator
quits an application and then starts again.
Using the First Call? Function
Where necessary, use the First Call? function to build point-by-point VIs.
In a VI that includes the First Call? function, the internal state of the VI
is reset once, the first time you call the VI. The value of the initialize
parameter in the First Call? function is always TRUE for the first call to the
VI. The value remains FALSE for the remainder of the time you run the VI.
Figure 14-2 shows a typical use of the First Call? function with a While
Loop.
Figure 14-2. Using the First Call? Function with a While Loop
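The reset-on-first-call behavior can be mimicked in text-based code. The class below is a rough Python analogue (hypothetical name; LabVIEW expresses this graphically): a running mean whose state resets on the first call or whenever initialize is TRUE.

```python
class RunningMeanPtByPt:
    """Point-by-point running mean with First Call?-style initialization."""

    def __init__(self):
        self.first_call = True      # plays the role of the First Call? function

    def __call__(self, sample, initialize=False):
        if self.first_call or initialize:
            self.count, self.total = 0, 0.0     # reset internal state once
            self.first_call = False
        self.count += 1
        self.total += sample
        return self.total / self.count

mean = RunningMeanPtByPt()
print([mean(x) for x in (2.0, 4.0, 6.0)])   # [2.0, 3.0, 4.0]
print(mean(10.0, initialize=True))          # 10.0 -- state was reset
```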
Error Checking and Initialization
The Point By Point VIs generate errors to help you identify flaws in the
configuration of the applications you build. Several point-by-point error
codes exist in addition to the standard LabVIEW error codes.
Error codes usually identify invalid parameters and settings. For
higher-level error checking, configure your program to monitor and
respond to irregularities in data acquisition or in computation. For example,
you create a form of error checking when you range check your data.
A Point By Point VI generates an error code once at the initial call to the
VI or at the first call to the VI after you initialize your application. Because
Point By Point VIs generate error codes only once, they can perform
optimally in a real-time, deterministic application.
The Point By Point VIs generate an error code to inform you of any invalid
parameters or settings when they detect an error during the first call. In
subsequent calls, the Point By Point VIs set the error code to zero and
continue running, generating no error codes. You can program your
application to take one of the following actions in response to the first error:
• Report the error and continue running.
• Report the error and stop.
• Ignore the error and continue running. This is the default behavior.
The following programming sequence describes how to use the Value Has
Changed PtByPt VI to build a point-by-point error checking mechanism for
Point By Point VIs that have an error parameter.
1. Choose a parameter that you want to monitor closely for errors.
2. Wire the parameter value as input data to the Value Has Changed
PtByPt VI.
3. Transfer the output data, which is always the unchanged input data
in Value Has Changed PtByPt VI, to the target VI.
4. Pass the TRUE event generated by the Value Has Changed PtByPt VI
to the target VI to trigger initialization, as shown in Figure 14-1. The
Value Has Changed PtByPt VI outputs a TRUE value whenever the
input parameter value changes.
For the first call that follows initialization of the target VI, LabVIEW
checks for errors. Initialization of the target VI and error checking occur
every time the input parameter changes.
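The sequence above can be sketched as follows (an illustrative Python analogue of the Value Has Changed behavior, not the VI itself). The detector passes its input through unchanged and reports TRUE when the value changes, which the caller uses to trigger re-initialization of the target computation:

```python
class ValueHasChangedPtByPt:
    """Pass input through unchanged; report TRUE when the value changes."""

    def __init__(self):
        self.first_call = True
        self.last = None

    def __call__(self, value):
        changed = self.first_call or value != self.last
        self.first_call, self.last = False, value
        return value, changed       # output data is the unchanged input data

detector = ValueHasChangedPtByPt()
flags = [detector(v)[1] for v in (10, 10, 12, 12, 7)]
print(flags)    # [True, False, True, False, True]
```

In the pattern of the figure above, the Boolean output would be wired to the target VI's initialize input, so the target re-initializes, and re-checks errors, each time the parameter changes.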
Frequently Asked Questions
This section answers frequently asked questions about point-by-point
analysis.
What Are the Differences between Point-By-Point Analysis
and Array-Based Analysis in LabVIEW?
Tables 14-2 and 14-3 compare array-based LabVIEW analysis to
point-by-point analysis from multiple perspectives. In Table 14-2, the
differences between two automotive fuel delivery systems, carburation and
fuel injection, demonstrate the differences between array-based data
analysis and point-by-point analysis.
Table 14-3 presents other comparisons between array-based and
point-by-point analysis.
Table 14-2. Comparison of Traditional and Newer Paradigms

Automotive Technology

Traditional Paradigm: Carburation
• Fuel accumulates in a float bowl.
• Engine vacuum draws fuel through a single
set of metering valves that serve all
combustion chambers.
• Somewhat efficient combustion occurs.

Newer Paradigm: Fuel Injection
• Fuel flows continuously from gas tank.
• Fuel sprays directly into each combustion
chamber at the moment of combustion.
• Responsive, precise combustion occurs.

Data Analysis Technology

Traditional Paradigm: Array-Based Analysis
• Prepare a buffer unit of data.
• Analyze data.
• Produce a buffer of analyzed data.
• Generate report.

Newer Paradigm: Point-By-Point Analysis
• Receive continuous stream of data.
• Filter and analyze data continuously.
• Generate real-time events and reports
continuously.
Why Use Point-By-Point Analysis?
Point-by-point analysis works well with computer-based real-time data
acquisition. In array-based analysis, the input-analysis-output process takes
place for subsets of a larger data set. In point-by-point analysis, the
input-analysis-output process takes place continuously, in real time.
Table 14-3. Comparison of Array-Based and Point-By-Point Data Analysis

Characteristic | Array-Based Analysis | Data Acquisition and Analysis with Point By Point VIs
Compatibility | Limited compatibility with real-time systems | Compatible with real-time systems; backward compatible with array-based systems
Data typing | Array-oriented | Scalar-oriented
Interruptions | Interruptions critical | Interruptions tolerated
Operation | You observe, offline | You control, online
Performance and programming | Compensate for startup data loss (4–5 seconds) with complex “state machines” | Startup data loss does not occur; initialize the data acquisition system once and run continuously
Point of view | Reflection of a process, like a mirror | Direct, natural flow of a process
Programming | Specify a buffer | No explicit buffers
Results | Output a report | Output a report and an event in real time
Run-time behavior | Delayed processing | Real time
Run-time behavior | Stop | Continue
Run-time behavior | Wait | Now
Work style | Asynchronous | Synchronous
What Is New about Point-By-Point Analysis?
When you perform point-by-point analysis, keep in mind the following
concepts:
• Initialization—You must initialize the point-by-point analysis
application to prevent interference from settings you made in previous
sessions of data analysis.
• Reentrant Execution—You must enable LabVIEW reentrant
execution for point-by-point analysis. Reentrant execution allocates
fixed memory to a single analysis process, guaranteeing that two
processes that use the same analysis function never interfere with each
other.
Note If you create custom VIs to use in your own point-by-point application, be sure to
enable reentrant execution. Reentrant execution is enabled by default in almost all Point
By Point VIs.
• Deterministic Performance—Point-by-point analysis is the natural
companion to many deterministic systems, because it efficiently
integrates with the flow of a real-time data signal.
What Is Familiar about Point-By-Point Analysis?
The approach used for most point-by-point analysis operations in
LabVIEW remains the same as array-based analysis. You use filters,
integration, mean value algorithms, and so on, in the same situations and
for the same reasons that you use these operations in array-based data
analysis. In contrast, the computation of zeroes in polynomial functions is
not relevant to point-by-point analysis, and point-by-point versions of these
array-based VIs are not necessary.
How Is It Possible to Perform Analysis without Buffers of Data?
Analysis functions yield solutions that characterize the behavior of a data
set. In array-based data acquisition and analysis, you might analyze a large
set of data by dividing the data into 10 smaller buffers. Analyzing those
10 sets of data yields 10 solutions. You can further resolve those
10 solutions into one solution that characterizes the behavior of the entire
data set.
In point-by-point analysis, you analyze an entire data set in real time.
A sample unit of a specific length replaces a buffer. The point-by-point
sample unit can have a length that matches the length of a significant
event in the data set that you are analyzing. For example, the application in
the Case Study of Point-By-Point Analysis section of this chapter acquires
a few thousand samples per second to detect defective train wheels. The
input data for the train wheel application comes from the signal generated
by a train that is moving at 60 km to 70 km per hour. The sample length
corresponds to the minimum distance between wheels.
A typical point-by-point analysis application analyzes a long series of
sample units, but you are likely to have interest in only a few of those
sample units. To identify those crucial samples of interest, the
point-by-point application focuses on transitions, such as the end of the
relevant signal.
The train wheel detection application in the Case Study of Point-By-Point
Analysis section of this chapter uses the end of a signal to identify crucial
samples of interest. The instant the application identifies the transition
point, it captures the maximum amplitude reading of the current sample
unit. This particular amplitude reading corresponds to the complete signal
for the wheel on the train whose signal has just ended. You can use this
real-time amplitude reading to generate an event or a report about that
wheel and that train.
Why Is Point-By-Point Analysis Effective in Real-Time Applications?
In general, when you must process continuous, rapid data flow,
point-by-point analysis can respond. For example, in industrial automation
settings, control data flows continuously, and computers use a variety
of analysis and transfer functions to control a real-world process.
Point-by-point analysis can take place in real time for these engineering
tasks.
Some real-time applications do not require high-speed data acquisition
and analysis. Instead, they require simple, dependable programs.
Point-by-point analysis offers simplicity and dependability, because
you do not allocate arrays explicitly, and data analysis flows naturally
and continuously.
Do I Need Point-By-Point Analysis?
As you increase the samples-per-second rate by factors of ten, the need for
point-by-point analysis increases. The point-by-point approach simplifies
the design, implementation, and testing process, because the flow of a
point-by-point application closely matches the natural flow of the
real-world processes you want to monitor and control.
You can continue to work without point-by-point analysis as long as
you can control your processes without high-speed, deterministic,
point-by-point data acquisition. However, if you dedicate resources in
a real-time data acquisition application, use point-by-point analysis
to achieve the full potential of your application.
What Is the Long-Term Importance of Point-By-Point Analysis?
Real-time data acquisition and analysis continue to demand more
streamlined and stable applications. Point-by-point analysis is streamlined
and stable because it directly ties into the acquisition and analysis process.
Streamlined and stable point-by-point analysis allows the acquisition and
analysis process to move closer to the point of control in field
programmable gate array (FPGA) chips, DSP chips, embedded controllers,
dedicated CPUs, and ASICs.
Case Study of Point-By-Point Analysis
The case study in this section uses the Train Wheel PtByPt VI and shows a
complete point-by-point analysis application built in LabVIEW with Point
By Point VIs. The Train Wheel PtByPt VI is a real-time data acquisition
application that detects defective train wheels and demonstrates the
simplicity and flexibility of point-by-point data analysis. The Train Wheel
PtByPt VI is located in labview\examples\ptbypt\PtByPt_No_HW.llb.
Point-By-Point Analysis of Train Wheels
In this example, the maintenance staff of a train yard must detect defective
wheels on a train. The current method of detection consists of a railroad
worker striking a wheel with a hammer and listening for a different
resonance that identifies a flaw. Automated surveillance must replace
manual testing, because manual surveillance is too slow, too prone to error,
and too crude to detect subtle defects. An automated solution also adds the
power of dynamic testing, because the train wheels can be in service during
the test, instead of standing still.
The automated solution to detect potentially defective train wheels needs to
have the following characteristics:
• Detect even subtle signs of defects quickly and accurately.
• Gather data when a train travels during a normal trip.
• Collect and analyze data in real time to simplify programming and to
increase speed and accuracy of results.
The Train Wheel PtByPt VI offers a solution for detecting defective train
wheels. Figures 14-3 and 14-4 show the front panel and the block diagram,
respectively, for the Train Wheel PtByPt VI.
Figure 14-3. Front Panel of the Train Wheel PtByPt VI
Figure 14-4. Block Diagram of the Train Wheel PtByPt VI
Note This example focuses on implementing a point-by-point analysis program in
LabVIEW. The issues of ideal sampling periods and approaches to signal conditioning
are beyond the scope of this example.
Overview of the LabVIEW Point-By-Point Solution
As well as Point By Point VIs, the Train Wheel PtByPt VI requires standard
LabVIEW programming objects, such as Case structures, While Loops,
numeric controls, and numeric operators.
The data the Train Wheel PtByPt VI acquires flows continuously through
a While Loop. The process carried out by the Train Wheel PtByPt VI inside
the While Loop consists of five analysis stages that occur sequentially. The
following list reflects the order in which the five analysis stages occur,
briefly describes what occurs in each stage, and corresponds to the labeled
portions of the block diagram in Figure 14-4.
1. In the data acquisition stage (DAQ), waveform data flows into the
While Loop.
2. In the Filter stage, separation of low- and high-frequency components
of the waveform occurs.
3. In the Analysis stage, detection of the train, wheel, and energy level of
the waveform for each wheel occurs.
4. In the Events stage, responses to signal transitions of trains and wheels
occur.
5. In the Report stage, the logging of trains, wheels, and trains that might
have defective wheels occurs.
Characteristics of a Train Wheel Waveform
The characteristic waveform that train wheels emit determines how you
analyze and filter the waveform signal point by point. A train wheel in
motion emits a signal that contains low- and high-frequency components.
If you mount a strain gauge in a railroad track, you detect a noisy signal
similar to a bell curve. Figure 14-5 shows the low- and high-frequency
components of this curve.
Figure 14-5. Low- and High-Frequency Components of a Train Wheel Signal
The low-frequency component of train wheel movement represents the
normal noise of operation. Defective and normal wheels generate the same
low-frequency component in the signal. The peak of the curve represents
the moment when the wheel moves directly above the strain gauge. The
lowest points of the bell curve represent the beginning and end of the wheel,
respectively, as the wheel passes over the strain gauge.
The signal for a train wheel also contains a high-frequency component that
reflects the quality of the wheel. In operation, a defective train wheel
generates more energy than a normal train wheel. In other words, the
high-frequency component for a defective wheel has greater amplitude.
Analysis Stages of the Train Wheel PtByPt VI
The waveform of all train wheels, including defective ones, falls within
predictable ranges. This predictable behavior allows you to choose the
appropriate analysis parameters. These parameters apply to the five stages
described in the Overview of the LabVIEW Point-By-Point Solution section
of this chapter. This section discusses each of the five analysis stages and
the parameters used in each analysis stage.
Note You must adjust parameters for any implementation of the Train Wheel PtByPt VI
because the characteristics of each data acquisition system differ.
DAQ Stage
Data moves into the Point By Point VIs through the input data parameter.
The point-by-point detection application operates on the continuous stream
of waveform data that comes from the wheels of a moving train. For a train
moving at 60 km to 70 km per hour, a few hundred to a few thousand
samples per second are likely to give you sufficient information to detect
a defective wheel.
Filter Stage
The Train Wheel PtByPt VI must filter low- and high-frequency
components of the train wheel waveform. Two Butterworth Filter
PtByPt VIs perform the following tasks:
• Extract the low-frequency components of the waveform.
• Extract the high-frequency components of the waveform.
In the Train Wheel PtByPt VI, the Butterworth Filter PtByPt VIs use the
following parameters:
• order specifies the amount of the waveform data that the VI filters at
a given time and is the filter resolution. 2 is acceptable for the Train
Wheel PtByPt VI.
• fl specifies the low cutoff frequency, which is the minimum signal
strength that identifies the departure of a train wheel from the strain
gauge. 0.01 is acceptable for the Train Wheel PtByPt VI.
• fh specifies the high cutoff frequency, which is the minimum signal
strength that identifies the end of high-frequency waveform
information. 0.25 is acceptable for the Train Wheel PtByPt VI.
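Point-by-point filtering keeps only a small internal state per sample. As a stand-in illustration (a first-order recursive low-pass in Python, not the second-order Butterworth VI the application uses):

```python
class OnePoleLowPassPtByPt:
    """First-order recursive low-pass updated one sample at a time."""

    def __init__(self, alpha):
        self.alpha = alpha          # smoothing factor in (0, 1]
        self.primed = False

    def __call__(self, x):
        if not self.primed:
            self.y, self.primed = float(x), True   # seed state with first sample
        else:
            self.y += self.alpha * (x - self.y)    # no input buffer is allocated
        return self.y

lp = OnePoleLowPassPtByPt(alpha=0.2)
out = [lp(x) for x in (0.0, 1.0, 1.0, 1.0)]
print(out)    # rises smoothly toward 1.0
```

Each call consumes one sample and returns one filtered sample, which is exactly the calling pattern of the Butterworth Filter PtByPt VIs.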
Analysis Stage
The point-by-point detection application must analyze the low- and
high-frequency components separately. The Array Max & Min PtByPt VI
extracts waveform data that reveals the level of energy in the waveform for
each wheel, the end of each train, and the end of each wheel.
Three separate Array Max & Min PtByPt VIs perform the following
discrete tasks:
• Identify the maximum high-frequency value for each wheel.
• Identify the end of each train.
• Identify the end of each wheel.
Note The name Array Max & Min PtByPt VI contains the word array only to match the
name of the arraybased form of this VI. You do not need to allocate arrays for the Array
Max & Min PtByPt VI.
In the Train Wheel PtByPt VI, the Array Max & Min PtByPt VIs use the
following parameters and functions:
• sample length specifies the size of the portion of the waveform that the
Train Wheel PtByPt VI analyzes. To calculate the ideal sample length,
consider the speed of the train, the minimum distance between wheels,
and the number of samples you receive per second. 100 is acceptable
for the Train Wheel PtByPt VI. The Train Wheel PtByPt VI uses
sample length to calculate values for all three Array Max & Min
PtByPt VIs.
• The Multiply function sets a longer portion of the waveform to
analyze. When this longer portion fails to display signal activity for
train wheels, the Array Max & Min PtByPt VIs identify the end of the
train. 4 is acceptable for the Train Wheel PtByPt VI.
• threshold provides a comparison point to identify when no train wheel
signals exist in the signal that you are acquiring. threshold is wired to
the Greater? function. 3 is an acceptable setting for threshold in the
Train Wheel PtByPt VI.
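Despite its array-based name, the point-by-point form of this operation needs only a fixed window of state. A minimal Python sketch (illustrative, with a hypothetical class name):

```python
from collections import deque

class MaxMinPtByPt:
    """Track the max and min over the most recent sample_length points."""

    def __init__(self, sample_length):
        self.window = deque(maxlen=sample_length)   # fixed-size state, no arrays

    def __call__(self, sample):
        self.window.append(sample)                  # oldest point falls out
        return max(self.window), min(self.window)

mm = MaxMinPtByPt(sample_length=3)
print([mm(x) for x in (1, 5, 2, 0, 0)])
# [(1, 1), (5, 1), (5, 1), (5, 0), (2, 0)]
```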
Events Stage
After the Analysis stage identifies maximum and minimum values, the
Events stage detects when these values cross a threshold setting.
The Train Wheel PtByPt VI logs every wheel and every train that it detects.
Two Boolean Crossing PtByPt VIs perform the following tasks:
• Generate an event each time the Array Max & Min PtByPt VIs detect
the transition point in the signal that indicates the end of a wheel.
• Generate an event every time the Array Max & Min PtByPt VIs detect
the transition point in the signal that indicates the end of a train.
The Boolean Crossing PtByPt VIs respond to transitions. When the
amplitude of a single wheel waveform falls below the threshold setting,
the end of the wheel has arrived at the strain gauge. For the Train Wheel
PtByPt VI, 3 is a good threshold setting to identify the end of a wheel.
When the signal strength falls below the threshold setting, the Boolean
Crossing PtByPt VIs recognize a transition event and pass that event to a
report.
Analysis of the high-frequency signal identifies which wheels, if any, might
be defective. When the Train Wheel PtByPt VI encounters a potentially
defective wheel, the VI passes the information directly to the report at the
moment the end-of-wheel event is detected.
In the Train Wheel PtByPt VI, the Boolean Crossing PtByPt VIs use the
following parameters:
• initialize resets the VI for a new session of continuous data
acquisition.
• direction specifies the kind of Boolean crossing.
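The transition logic can be sketched in a few lines (illustrative Python, detecting the TRUE-to-FALSE crossing that marks the end of a wheel):

```python
def falling_edges(samples, threshold):
    """Yield True at each sample where the signal drops below threshold."""
    above = False
    for x in samples:
        now_above = x > threshold
        yield above and not now_above   # TRUE-to-FALSE transition event
        above = now_above

signal = [0, 4, 6, 2, 1, 5, 2]          # two wheel-like pulses
events = list(falling_edges(signal, threshold=3))
print(events)   # [False, False, False, True, False, False, True]
```

Each TRUE marks the instant a pulse ends, which is when the application captures the maximum amplitude for the wheel whose signal has just finished.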
Report Stage
The Train Wheel PtByPt VI reports on all wheels for all trains that pass
through the data acquisition system. The Train Wheel PtByPt VI also
reports any potentially defective wheels.
Every time a wheel passes the strain gauge, the Train Wheel PtByPt VI
captures its waveform, analyzes it, and reports the event. Table 14-4
describes the components of a report on a single train wheel.
The Train Wheel PtByPt VI uses point-by-point analysis to generate a
report, not to control an industrial process. However, the Train Wheel
PtByPt VI acquires data in real time, and you can modify the application to
generate real-time control responses, such as stopping the train when the
Train Wheel PtByPt VI encounters a potentially defective wheel.
Conclusion
When acquiring data with real-time performance, point-by-point analysis
helps you analyze data in real time. Point-by-point analysis occurs
continuously and instantaneously. While you acquire data, you filter and
analyze it, point by point, to extract the information you need and to make
an appropriate response. This case study demonstrates the effectiveness of
the point-by-point approach for generation of both events and reports in
real time.
Table 14-4. Example Report on a Single Train Wheel

Information Source | Meaning of Results
Counter mechanism for waveform events | Stage One: Wheel number four has passed the strain gauge.
Analysis of high-pass filter data | Stage Two: Wheel number four has passed the strain gauge and the wheel might be defective.
Counter mechanism for end-of-train events | Stage Three: Wheel number four in train number eight has passed the strain gauge, and the wheel might be defective.
A
References
This appendix lists the reference material used to produce the analysis VIs,
including the signal processing and mathematics VIs. Refer to the
following documents for more information about the theories and
algorithms implemented in the analysis library.
Baher, H. Analog & Digital Signal Processing. New York: John Wiley &
Sons, 1990.
Bailey, David H. and Paul N. Swartztrauber. “The Fractional Fourier
Transform and Applications.” Society of Industrial and Applied
Mathematics Review 33, no. 3 (September 1991): 389–404.
Bates, D. M. and D. G. Watts. Nonlinear Regression Analysis and its
Applications. New York: John Wiley & Sons, 1988.
Bertsekas, Dimitri P. Nonlinear Programming. 2nd ed. Belmont,
Massachusetts: Athena Scientific, 1999.
Bracewell, R.N. “Numerical Transforms.” Science 248, (11 May 1990).
Burden, R. L. and J. D. Faires. Numerical Analysis. 3rd ed. Boston: Prindle,
Weber & Schmidt, 1985.
Chen, C. H. et al. Signal Processing Handbook. New York: Marcel
Dekker, Inc., 1988.
Chugani, Mahesh L., Abhay R. Samant, and Michael Cerna. LabVIEW
Signal Processing. Upper Saddle River, NJ: Prentice Hall PTR, 1998.
Crandall, R. E. Projects in Scientific Computation. Berlin: Springer, 1994.
DeGroot, M. Probability and Statistics. 2nd ed. Reading, Massachusetts:
Addison-Wesley Publishing Co., 1986.
Dowdy, S. and S. Wearden. Statistics for Research. 2nd ed. New York:
John Wiley & Sons, 1991.
Dudewicz, E. J. and S. N. Mishra. Modern Mathematical Statistics.
New York: John Wiley & Sons, 1988.
Duhamel, P. et al. “On Computing the Inverse DFT.” IEEE Transactions.
Dunn, O. and V. Clark. Applied Statistics: Analysis of Variance and
Regression. 2nd ed. New York: John Wiley & Sons, 1987.
Ecker, Joseph G. and Michael Kupferschmid. Introduction to Operations
Research. New York: Krieger Publishing, 1991.
Elliot, D. F. Handbook of Digital Signal Processing Engineering
Applications. San Diego: Academic Press, 1987.
Fahmy, M. F. “Generalized Bessel Polynomials with Application to the
Design of Bandpass Filters.” Circuit Theory and Applications 5,
(1977): 337–342.
Gander, W. and J. Hrebicek. Solving Problems in Scientific Computing
using Maple and MATLAB. Berlin: Springer, 1993.
Golub, G.H. and C. F. Van Loan. Matrix Computations. Baltimore:
The Johns Hopkins University Press, 1989.
Harris, Fredric J. “On the Use of Windows for Harmonic Analysis with the
Discrete Fourier Transform.” Proceedings of the IEEE 66, no. 1
(1978).
Lanczos, C. A. “Precision Approximation of the Gamma Function.”
Journal of SIAM Numerical Analysis, Series B 1 (1964): 87–96.
Maisel, J. E. “Hilbert Transform Works With Fourier Transforms to
Dramatically Lower Sampling Rates.” Personal Engineering and
Instrumentation News 7, no. 2 (February 1990).
Miller, I. and J. E. Freund. Probability and Statistics for Engineers.
Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1987.
Mitra, Sanjit K. and James F. Kaiser. Handbook for Digital Signal
Processing. New York: John Wiley & Sons, 1993.
Neter, J. et al. Applied Linear Regression Models. Richard D. Irwin, Inc.,
1983.
Neuvo, Y., C. Y. Dong, and S. K. Mitra. “Interpolated Finite Impulse
Response Filters.” IEEE Transactions on ASSP ASSP-32, no. 6
(June 1984).
O’Neill, M. A. “Faster Than Fast Fourier.” BYTE (April 1988).
Oppenheim, A. V. and R. W. Schafer. Discrete-Time Signal Processing.
Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1989.
Oppenheim, Alan V. and Alan S. Willsky. Signals and Systems. New York:
Prentice-Hall, Inc., 1983.
Parks, T. W. and C. S. Burrus. Digital Filter Design. New York: John Wiley
& Sons, Inc., 1987.
Pearson, C. E. Numerical Methods in Engineering and Science. New York:
Van Nostrand Reinhold Co., 1986.
Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P.
Flannery. Numerical Recipes in C: The Art of Scientific Computing.
Cambridge: Cambridge University Press, 1994.
Qian, Shie and Dapang Chen. Joint Time-Frequency Analysis. New York:
Prentice-Hall, Inc., 1996.
Rabiner, L. R. and B. Gold. Theory and Application of Digital Signal
Processing. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1975.
Rockey, K. C., H. R. Evans, D. W. Griffiths, and D. A. Nethercot.
The Finite Element Method—A Basic Introduction for Engineers.
New York: John Wiley & Sons, 1983.
Sorensen, H. V. et al. “On Computing the Split-Radix FFT.” IEEE
Transactions on ASSP ASSP-34, no. 1 (February 1986).
Sorensen, H. V. et al. “Real-Valued Fast Fourier Transform Algorithms.”
IEEE Transactions on ASSP ASSP-35, no. 6 (June 1987).
Spiegel, M. Schaum’s Outline Series on Theory and Problems of
Probability and Statistics. New York: McGraw-Hill, 1975.
Stoer, J. and R. Bulirsch. Introduction to Numerical Analysis. New York:
Springer-Verlag, 1987.
Vaidyanathan, P. P. Multirate Systems and Filter Banks. Englewood Cliffs,
New Jersey: Prentice-Hall, Inc., 1993.
Wichman, B. and D. Hill. “Building a Random-Number Generator: A
Pascal Routine for Very-Long-Cycle Random-Number Sequences.”
BYTE (March 1987): 127–128.
Wilkinson, J. H. and C. Reinsch. Linear Algebra, Vol. 2 of Handbook for
Automatic Computation. New York: Springer, 1971.
Williams, John R. and Kevin Amaratunga. “Introduction to Wavelets in
Engineering.” International Journal for Numerical Methods in
Engineering 37 (1994): 2365–2388.
Zwillinger, Daniel. Handbook of Differential Equations. San Diego:
Academic Press, 1992.
Appendix B
Technical Support and Professional Services
Visit the following sections of the National Instruments Web site at
ni.com for technical support and professional services:
• Support—Online technical support resources include the following:
– Self-Help Resources—For immediate answers and solutions,
visit our extensive library of technical support resources available
in English, Japanese, and Spanish at ni.com/support. These
resources are available for most products at no cost to registered
users and include software drivers and updates, a KnowledgeBase,
product manuals, step-by-step troubleshooting wizards,
conformity documentation, example code, tutorials and
application notes, instrument drivers, discussion forums,
a measurement glossary, and so on.
– Assisted Support Options—Contact NI engineers and other
measurement and automation professionals by visiting
ni.com/support. Our online system helps you define your
question and connects you to the experts by phone, discussion
forum, or email.
• Training and Certification—Visit ni.com/training for
self-paced training, eLearning virtual classrooms, interactive CDs,
and Certification program information. You also can register for
instructor-led, hands-on courses at locations around the world.
• System Integration—If you have time constraints, limited in-house
technical resources, or other project challenges, NI Alliance Program
members can help. To learn more, call your local NI office or visit
ni.com/alliance.
If you searched ni.com and could not find the answers you need, contact
your local office or NI corporate headquarters. Phone numbers for our
worldwide offices are listed at the front of this manual. You also can visit
the Worldwide Offices section of ni.com/niglobal to access the branch
office Web sites, which provide up-to-date contact information, support
phone numbers, email addresses, and current events.
Contents

About This Manual
    Conventions
    Related Documentation

PART I
Signal Processing and Signal Analysis

Chapter 1
Introduction to Digital Signal Processing and Analysis in LabVIEW
    The Importance of Data Analysis
    Sampling Signals
    Aliasing
        Increasing Sampling Frequency to Avoid Aliasing
        Anti-Aliasing Filters
    Converting to Logarithmic Units
        Displaying Results on a Decibel Scale

Chapter 2
Signal Generation
    Common Test Signals
    Frequency Response Measurements
    Multitone Generation
        Crest Factor
        Phase Generation
        Swept Sine versus Multitone
    Noise Generation
    Normalized Frequency
    Wave and Pattern VIs
        Phase Control

Chapter 3
Digital Filtering
    Introduction to Filtering
        Advantages of Digital Filtering Compared to Analog Filtering
    Common Digital Filters
        Impulse Response
        Classifying Filters by Impulse Response
        Filter Coefficients
        Characteristics of an Ideal Filter
        Practical (Nonideal) Filters
            Transition Band
            Passband Ripple and Stopband Attenuation
            Sampling Rate
    FIR Filters
        Taps
        Designing FIR Filters
            Designing FIR Filters by Windowing
            Designing Optimum FIR Filters Using the Parks-McClellan Algorithm
            Designing Equiripple FIR Filters Using the Parks-McClellan Algorithm
            Designing Narrowband FIR Filters
            Designing Wideband FIR Filters
    IIR Filters
        Cascade Form IIR Filtering
            Second-Order Filtering
            Fourth-Order Filtering
        Designing IIR Filters
            Minimizing Peak Error
        IIR Filter Types
            Butterworth Filters
            Chebyshev Filters
            Chebyshev II Filters
            Elliptic Filters
            Bessel Filters
        IIR Filter Characteristics in LabVIEW
        Transient Response
    Comparing FIR and IIR Filters
    Nonlinear Filters
        Example: Analyzing Noisy Pulse with a Median Filter
    Selecting a Digital Filter Design

Chapter 4
Frequency Analysis
    Differences between Frequency Domain and Time Domain
    Parseval’s Relationship
    Fourier Transform
    Discrete Fourier Transform (DFT)
        Relationship between N Samples in the Frequency and Time Domains
        Example of Calculating DFT
        Magnitude and Phase Information
        Frequency Spacing between DFT Samples
    FFT Fundamentals
        Computing Frequency Components
        Fast FFT Sizes
        Zero Padding
        FFT VI
    Displaying Frequency Information from Transforms
        Two-Sided, DC-Centered FFT
        Mathematical Representation of a Two-Sided, DC-Centered FFT
        Creating a Two-Sided, DC-Centered FFT
    Power Spectrum
        Converting a Two-Sided Power Spectrum to a Single-Sided Power Spectrum
        Loss of Phase Information
    Computations on the Spectrum
        Estimating Power and Frequency
        Computing Noise Level and Power Spectral Density
        Computing the Amplitude and Phase Spectrums
        Calculating Amplitude in Vrms and Phase in Degrees
        Frequency Response Function
    Cross Power Spectrum
    Frequency Response and Network Analysis
        Frequency Response Function
        Impulse Response Function
        Coherence Function
        Windowing
        Averaging to Improve the Measurement
            RMS Averaging
            Vector Averaging
            Peak Hold
            Weighting
        Echo Detection

Chapter 5
Smoothing Windows
    Spectral Leakage
        Sampling an Integer Number of Cycles
        Sampling a Noninteger Number of Cycles
    Windowing Signals
    Characteristics of Different Smoothing Windows
        Main Lobe
        Side Lobes
        Rectangular (None)
        Hanning
        Hamming
        Kaiser-Bessel
        Triangle
        Flat Top
        Exponential
    Windows for Spectral Analysis versus Windows for Coefficient Design
        Spectral Analysis
        Windows for FIR Filter Coefficient Design
    Choosing the Correct Smoothing Window
    Scaling Smoothing Windows

Chapter 6
Distortion Measurements
    Defining Distortion
    Application Areas
    Harmonic Distortion
        THD
        THD + N
        SINAD

Chapter 7
DC/RMS Measurements
    What Is the DC Level of a Signal?
    What Is the RMS Level of a Signal?
    Averaging to Improve the Measurement
    Common Error Sources Affecting DC and RMS Measurements
        DC Overlapped with Single Tone
        Defining the Equivalent Number of Digits
        DC Plus Sine Tone
        Windowing to Improve DC Measurements
        RMS Measurements Using Windows
    Rules for Improving DC and RMS Measurements
        RMS Levels of Specific Tones
    Using Windows with Care

Chapter 8
Limit Testing
    Setting up an Automated Test System
    Specifying a Limit
    Specifying a Limit Using a Formula
    Limit Testing
    Applications
        Modem Manufacturing Example
        Digital Filter Design Example
        Pulse Mask Testing Example

PART II
Mathematics

Chapter 9
Curve Fitting
    Introduction to Curve Fitting
    Applications of Curve Fitting
    General LS Linear Fit Theory
    Polynomial Fit with a Single Predictor Variable
    Curve Fitting in LabVIEW
        Linear Fit
        Exponential Fit
        General Polynomial Fit
        General LS Linear Fit
            Computing Covariance
            Building the Observation Matrix
        Nonlinear Levenberg-Marquardt Fit

Chapter 10
Probability and Statistics
    Statistics
        Mean
        Median
        Sample Variance and Population Variance
            Sample Variance
            Population Variance
        Standard Deviation
        Mode
        Moment about the Mean
            Skewness
            Kurtosis
        Histogram
        Mean Square Error (mse)
        Root Mean Square (rms)
    Probability
        Random Variables
            Discrete Random Variables
            Continuous Random Variables
        Normal Distribution
            Computing the One-Sided Probability of a Normally Distributed Random Variable
            Finding x with a Known p
        Probability Distribution and Density Functions

Chapter 11
Linear Algebra
    Linear Systems and Matrix Analysis
        Types of Matrices
        Determinant of a Matrix
        Transpose of a Matrix
        Linear Independence
        Matrix Rank
        Magnitude (Norms) of Matrices
        Determining Singularity (Condition Number)
    Basic Matrix Operations and Eigenvalues-Eigenvector Problems
        Dot Product and Outer Product
        Eigenvalues and Eigenvectors
    Matrix Inverse and Solving Systems of Linear Equations
        Solutions of Systems of Linear Equations
        Matrix Factorization
        Pseudoinverse

Chapter 12
Optimization
    Introduction to Optimization
        Constraints on the Objective Function
        Continuous Optimization Problems
        Discrete Optimization Problems
        Linear and Nonlinear Programming Problems
        Solving Problems Iteratively
        Impact of Derivative Use on Search Method Selection
    Nonlinear Programming
        Line Minimization
        Golden Section Search Method
            Choosing a New Point x in the Golden Section
        Gradient Search Methods
        Conjugate Gradient Search Methods

Chapter 13
Polynomials
    General Form of a Polynomial
    Polynomial Addition
    Polynomial Evaluation
    Polynomial Multiplication
    Polynomial Division
    Least Common Multiple of Two Polynomials
    Derivatives of a Polynomial
    Integrals of a Polynomial
    Definite Integral of a Polynomial
Rational Polynomial Function Multiplication ...............................................126 Local Minimum.............123 Linear Programming Simplex Method....1210 Terminating Gradient Search Methods .....................129 Caveats about Converging to an Optimal Solution.........138 Number of Real Roots of a Real Polynomial ................................................................................................................................1311 Rational Polynomial Function Addition................................................................................................................................138 Rational Polynomial Function Operations.......................................................................126 Downhill Simplex Method ..........................1213 Difference between FletcherReeves and PolakRibiere .............................133 Polynomial Subtraction ..............................................................................................................................................................................125 Global Minimum.........................................131 Basic Polynomial Operations.......1210 Conjugate Direction Search Methods......................................................................................................................................................................................................................125 Local and Global Minima.....135 Greatest Common Divisor of Polynomials...............................138 Indefinite Integral of a Polynomial ......................................1212 Theorem A ....................................................................................132 Order of Polynomial .............................................................................................133 Polynomial Composition ...............................1312 Rational Polynomial Function Division .1311 Rational Polynomial Function Subtraction ...Contents Linear 
Programming .........................................................1212 Theorem B.......................................................................................................................
.................................................................................. 143 Frequently Asked Questions...................................................................................................................................... 1322 PART III PointByPoint Analysis Chapter 14 PointByPoint Analysis Introduction to PointByPoint Analysis ......... 1315 Chebyshev Orthogonal Polynomials of the Second Kind................................... 147 What Is Familiar about PointByPoint Analysis?....... 146 What Is New about PointByPoint Analysis?......... 1318 Evaluating a Polynomial with a Matrix............ 1315 Chebyshev Orthogonal Polynomials of the First Kind ........................ 1313 Heaviside CoverUp Method............................................................ 1313 Partial Fraction Expansion ............................................................................. 142 Initializing Point By Point VIs................................................................................... 1411 Characteristics of a Train Wheel Waveform........................... 149 Overview of the LabVIEW PointByPoint Solution ................................................................... 1317 Associated Laguerre Orthogonal Polynomials ................................................ 147 How Is It Possible to Perform Analysis without Buffers of Data? . 145 Why Use PointByPoint Analysis?.................com ............................................................................................... 1412 LabVIEW Analysis Concepts xii ni.................. 148 Do I Need PointByPoint Analysis? ................. 1317 Laguerre Orthogonal Polynomials .. 148 What Is the LongTerm Importance of PointByPoint Analysis? .............. 1320 Entering Polynomials in LabVIEW............................ 1319 Polynomial Eigenvalues and Vectors .................................................. 142 Using the First Call? Function.................. 
1316 Gegenbauer Orthogonal Polynomials ............ 142 Purpose of Initialization in Point By Point VIs .........................................................................................................................................Contents Negative Feedback with a Rational Polynomial Function............... 149 PointByPoint Analysis of Train Wheels ................................. 1312 Derivative of a Rational Polynomial Function ....... 143 Error Checking and Initialization ................................................. 141 Using the Point By Point VIs ........................................................ 1318 Legendre Orthogonal Polynomials ........................................ 145 What Are the Differences between PointByPoint Analysis and ArrayBased Analysis in LabVIEW? ...... 1314 Orthogonal Polynomials......................................................................... 1316 Hermite Orthogonal Polynomials ........................................................................................................................................................................................................................................................................................................................................................................... 147 Why Is PointByPoint Analysis Effective in RealTime Applications?....................................................................................................................................................... 149 Case Study of PointByPoint Analysis . 1312 Positive Feedback with a Rational Polynomial Function ........................................
....Contents Analysis Stages of the Train Wheel PtByPt VI............................................................................................................................................1413 DAQ Stage ..............................1413 Analysis Stage.....................................................................................................................................................................1415 Report Stage ..........................................1414 Events Stage ................................................................1413 Filter Stage .................................................1416 Appendix A References Appendix B Technical Support and Professional Services © National Instruments Corporation xiii LabVIEW Analysis Concepts .................................................................................1415 Conclusion.......
About This Manual

This manual describes analysis and mathematical concepts in LabVIEW. The information in this manual directly relates to how you can use LabVIEW to perform analysis and measurement operations.

Conventions
This manual uses the following conventions:

»                The » symbol leads you through nested menu items and dialog box options to a final action. The sequence File»Page Setup»Options directs you to pull down the File menu, select the Page Setup item, and select Options from the last dialog box.

Note             This icon denotes a note, which alerts you to important information.

bold             Bold text denotes items that you must select or click in the software, such as menu items and dialog box options. Bold text also denotes parameter names.

italic           Italic text denotes variables, emphasis, a cross reference, or an introduction to a key concept.

monospace        Text in this font denotes text or characters that you should enter from the keyboard, sections of code, programming examples, and syntax examples. This font is also used for the proper names of disk drives, paths, directories, programs, subprograms, subroutines, device names, functions, operations, variables, filenames, and extensions.

italic monospace This font denotes text that is a placeholder for a word or value that you must supply.

Related Documentation
The following documents contain information that you might find helpful as you read this manual:
•   LabVIEW Measurements Manual
•   The Fundamentals of FFT-Based Signal Analysis and Measurement in LabVIEW and LabWindows/CVI Application Note, available on the National Instruments Web site at ni.com/info, where you enter the info code rdlv04
•   LabVIEW Help, available by selecting Help»VI, Function, & How-To Help
•   LabVIEW User Manual
•   Getting Started with LabVIEW
•   On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform (Proceedings of the IEEE, Volume 66, No. 1, January 1978)
Part I
Signal Processing and Signal Analysis

This part describes signal processing and signal analysis concepts.

•   Chapter 1, Introduction to Digital Signal Processing and Analysis in LabVIEW, provides a background in basic digital signal processing and an introduction to signal processing and measurement VIs in LabVIEW.
•   Chapter 2, Signal Generation, describes the fundamentals of signal generation.
•   Chapter 3, Digital Filtering, introduces the concept of filtering, compares analog and digital filters, describes finite impulse response (FIR) and infinite impulse response (IIR) filters, and describes how to choose the appropriate digital filter for a particular application.
•   Chapter 4, Frequency Analysis, describes the fundamentals of the discrete Fourier transform (DFT), the fast Fourier transform (FFT), basic signal analysis computations, computations performed on the power spectrum, and how to use FFT-based functions for network measurement.
•   Chapter 5, Smoothing Windows, describes spectral leakage, how to use smoothing windows to decrease spectral leakage, the different types of smoothing windows, how to choose the correct type of smoothing window, the differences between smoothing windows used for spectral analysis and smoothing windows used for filter coefficient design, and the importance of scaling smoothing windows.
•   Chapter 6, Distortion Measurements, describes harmonic distortion, total harmonic distortion (THD), signal in noise and distortion (SINAD), and when to use distortion measurements.
•   Chapter 7, DC/RMS Measurements, introduces measurement analysis techniques for making DC and RMS measurements of a signal.
•   Chapter 8, Limit Testing, provides information about setting up an automated system for performing limit testing, specifying limits, and applications for limit testing.
Chapter 1
Introduction to Digital Signal Processing and Analysis in LabVIEW

Digital signals are everywhere in the world around us. Telephone companies use digital signals to represent the human voice. Radio, television, and hi-fi sound systems are all gradually converting to the digital domain because of its superior fidelity, noise reduction, and signal processing flexibility. Data is transmitted from satellites to earth ground stations in digital form. NASA pictures of distant planets and outer space are often processed digitally to remove noise and extract useful information. Economic data, census results, and stock market prices are all available in digital form. Because of the many advantages of digital signal processing, analog signals also are converted to digital form before they are processed with a computer. This chapter provides a background in basic digital signal processing and an introduction to signal processing and measurement VIs in LabVIEW.

The Importance of Data Analysis
The importance of integrating analysis libraries into engineering stations is that the raw data, as shown in Figure 1-1, does not always immediately convey useful information. Often, you must transform the signal, remove noise disturbances, correct for data corrupted by faulty equipment, or compensate for environmental effects, such as temperature and humidity.

Figure 1-1. Raw Data
By analyzing and processing the digital data, you can extract the useful information from the noise and present it in a form more comprehensible than the raw data, as shown in Figure 1-2.

Figure 1-2. Processed Data

The LabVIEW block diagram programming approach and the extensive set of LabVIEW signal processing and measurement VIs simplify the development of analysis applications.

Sampling Signals
Measuring the frequency content of a signal requires digitization of a continuous signal. To use digital signal processing techniques, you must first convert an analog signal into its digital representation. In practice, the conversion is implemented by using an analog-to-digital (A/D) converter. Consider an analog signal x(t) that is sampled every ∆t seconds. The time interval ∆t is the sampling interval or sampling period. Its reciprocal, 1/∆t, is the sampling frequency, with units of samples/second. Each of the discrete values of x(t) at t = 0, ∆t, 2∆t, 3∆t, … is a sample. Thus, x(0), x(∆t), x(2∆t), … are all samples. The signal x(t) thus can be represented by the following discrete set of samples:

{x(0), x(∆t), x(2∆t), x(3∆t), …, x(k∆t), …}

Figure 1-3 shows an analog signal and its corresponding sampled version. The sampling interval is ∆t. The samples are defined at discrete points in time.
Figure 1-3. Analog Signal and Corresponding Sampled Version (∆t is the distance between samples along the time axis)

The following notation represents the individual samples:

x[i] = x(i∆t)    for i = 0, 1, 2, …

If N samples are obtained from the signal x(t), then you can represent x(t) by the following sequence:

X = {x[0], x[1], x[2], x[3], …, x[N–1]}

The preceding sequence representing x(t) is the digital representation, or the sampled version, of x(t). The sequence X = {x[i]} is indexed on the integer variable i and does not contain any information about the sampling rate. So knowing only the values of the samples contained in X gives you no information about the sampling frequency.

One of the most important parameters of an analog input system is the frequency at which the DAQ device samples an incoming signal. The sampling frequency determines how often an A/D conversion takes place. Sampling a signal too slowly can result in an aliased signal.
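The relationship x[i] = x(i∆t) is easy to sketch in text form. The following Python/NumPy fragment is an illustration only (the manual's own examples are LabVIEW VIs, and the helper name `sample_signal` is invented for this sketch); it builds the sequence X for a 10 Hz sine wave and highlights that X alone carries no rate information:

```python
import numpy as np

def sample_signal(x_of_t, dt, n_samples):
    """Sample a continuous-time signal x(t) every dt seconds,
    returning the sequence X = {x[i]} where x[i] = x(i*dt)."""
    t = np.arange(n_samples) * dt   # t = 0, dt, 2*dt, ...
    return x_of_t(t)

# Sample a 10 Hz sine wave at fs = 1000 samples/second (dt = 1 ms).
fs = 1000.0
dt = 1.0 / fs
X = sample_signal(lambda t: np.sin(2 * np.pi * 10 * t), dt, 100)

# X is only a sequence of numbers indexed by i; it contains no
# information about the sampling frequency, which must be kept
# separately, exactly as the text notes.
print(len(X))   # 100
```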
For a given sampling frequency, the maximum frequency you can accurately represent without aliasing is the Nyquist frequency. The Nyquist frequency equals one-half the sampling frequency, as shown by the following equation:

fN = fs / 2

where fN is the Nyquist frequency and fs is the sampling frequency. Signals with frequency components above the Nyquist frequency appear aliased between DC and the Nyquist frequency. Aliasing causes a false lower frequency component to appear in the sampled data of a signal. Increasing the sampling frequency increases the number of data points acquired in a given time period.
For example. Figures 15 and 16 illustrate the aliasing phenomenon. For example.Chapter 1 Introduction to Digital Signal Processing and Analysis in LabVIEW frequency components below the Nyquist frequency. Signal Frequency Components and Aliases In Figure 16. 40 Hz. and F4 appear at 30 Hz. a component at frequency fN < f0 < fs appears as the frequency fs – f0. of 100 Hz. Solid Arrows – Actual Frequency Dashed Arrows – Alias Magnitude F2 alias 30 Hz F4 alias 10 Hz F1 25 Hz F3 alias 40 Hz F2 70 Hz F3 160 Hz F4 510 Hz 0 Frequency ƒs /2=50 Nyquist Frequency ƒs =100 Sampling Frequency 500 Figure 16. Frequencies above the Nyquist frequency appear as aliases. frequencies below the Nyquist frequency of fs/2 = 50 Hz are sampled correctly. respectively. F1 appears at the correct frequency. aliases for F2. © National Instruments Corporation 15 LabVIEW Analysis Concepts . F3. Actual Signal Frequency Components Figure 16 shows the frequency components and the aliases for the input signal from Figure 15. fs. For example. Figure 15 shows the frequencies contained in an input signal acquired at a sampling frequency. and 10 Hz. Magnitude F1 25 Hz F2 70 Hz F3 160 Hz F4 510 Hz 0 Frequency ƒs /2=50 Nyquist Frequency ƒs =100 Sampling Frequency 500 Figure 15.
you can compute the alias frequencies for F2. Effects of Sampling at Different Rates LabVIEW Analysis Concepts 16 ni. For example. use a sampling frequency at least twice the maximum frequency component in the sampled signal to avoid aliasing. as shown in the following equation. Figure 17 shows the effects of various sampling frequencies. A. 1 sample/1 cycle C. 2 samples/cycle B. F3. AF = CIMSF – IF where AF is the alias frequency.com .Chapter 1 Introduction to Digital Signal Processing and Analysis in LabVIEW The alias frequency equals the absolute value of the difference between the closest integer multiple of the sampling frequency and the input frequency. and F4 from Figure 16 with the following equations. CIMSF is the closest integer multiple of the sampling frequency. 10 samples/cycle Figure 17. 7 samples/4 cycles D. Alias F2 = 100 – 70 = 30 Hz Alias F3 = ( 2 )100 – 160 = 40 Hz Alias F4 = ( 5 )100 – 510 = 10 Hz Increasing Sampling Frequency to Avoid Aliasing According to the Shannon Sampling Theorem. and IF is the input frequency.
pickup from stray signals. A lowpass filter allows low frequencies to pass but attenuates high frequencies. the sampling frequency fs equals the frequency f of the sine wave. An antialiasing analog lowpass filter should exhibit a flat passband frequency response with a good highfrequency alias rejection and a fast rolloff in the transition band. you cannot distinguish alias frequencies from the frequencies that actually lie between 0 and the Nyquist frequency.Chapter 1 Introduction to Digital Signal Processing and Analysis in LabVIEW In case A of Figure 17. fs = 7/4f. © National Instruments Corporation 17 LabVIEW Analysis Concepts . Therefore. in case A. Frequency components of stray signals above the Nyquist frequency might alias into the desired frequency range of a test signal and cause erroneous results. one sample per cycle is acquired. the antialiasing analog lowpass filter prevents the sampling of aliasing components. such as signals from power lines or local radio stations. the signal aliases to a frequency less than the original signal—three cycles instead of four. Because you apply the antialiasing filter to the analog signal before it is converted to a digital signal. Therefore. f is measured in cycles/second. AntiAliasing Filters In the digital domain. In case C of Figure 17. can contain frequencies higher than the Nyquist frequency. fs is measured in samples/second. increasing the sampling rate to fs = 2f results in the digitized waveform having the correct frequency or the same number of cycles as the original signal. By attenuating the frequencies higher than the Nyquist frequency. Case D of Figure 17 shows the result of increasing the sampling rate to fs = 10f. Even with a sampling frequency equal to twice the Nyquist frequency. fs = 10f = 10 samples/cycle. you can accurately reproduce the waveform. you need to remove alias frequencies from an analog signal before the signal reaches the A/D converter. By increasing the sampling rate to well above f. 
In case B. increasing the sampling rate increases the frequency of the waveform. Use an antialiasing analog lowpass filter before the A/D converter to remove alias frequencies higher than the Nyquist frequency. In case B of Figure 17. The reconstructed waveform appears as an alias at DC. the antialiasing filter is an analog filter. the reconstructed waveform more accurately represents the original sinusoidal wave than case A or case B. for example. In case C. However. or 7 samples/4 cycles.
Figure 1-8 shows both an ideal anti-alias filter and a practical anti-alias filter.

Figure 1-8. Ideal versus Practical Anti-Alias Filter (a: ideal anti-alias filter; b: practical anti-alias filter, with transition band between f1 and f2)

The following information applies to Figure 1-8:
•   f1 is the maximum input frequency.
•   Frequencies less than f1 are desired frequencies.
•   Frequencies greater than f1 are undesired frequencies.

An ideal anti-alias filter, shown in Figure 1-8a, passes all the desired input frequencies and cuts off all the undesired frequencies. However, an ideal anti-alias filter is not physically realizable. Figure 1-8b illustrates actual anti-alias filter behavior. Practical anti-alias filters pass all frequencies <f1 and cut off all frequencies >f2. The region between f1 and f2 is the transition band, which contains a gradual attenuation of the input frequencies. Although you want to pass only signals with frequencies <f1, the signals in the transition band might cause aliasing. Therefore, in practice, use a sampling frequency greater than two times the highest frequency in the transition band. Using a sampling frequency greater than two times the highest frequency in the transition band means fs might be more than 2f1.

Converting to Logarithmic Units
On some instruments, you can display amplitude on either a linear scale or a decibel (dB) scale. The linear scale shows the amplitudes as they are. The decibel scale is a transformation of the linear scale into a logarithmic scale. The decibel is a unit of ratio.
which yields the unit of measure dBV. Use the reference 1 mW into a load of 50 Ω for radio frequencies where 0 dB is 0. Use the reference one voltrms (1 Vrms) for amplitude. Therefore. P dB = 10 log 10 . A dB = 20 log 10 . Equation 12 defines the decibel in terms of amplitude. Equation 11 defines the decibel in terms of power.Chapter 1 Introduction to Digital Signal Processing and Analysis in LabVIEW The following equations define the decibel.is the Pr power ratio. Pr is the reference power. Several conventions exist for specifying a reference value. The reference value serves as the 0 dB level. Displaying Results on a Decibel Scale Amplitude or power spectra usually are displayed on a decibel scale. Equations 11 and 12 require a reference value to measure power and amplitude in decibels.1 V to a maximum of 100 V © National Instruments Corporation 19 LabVIEW Analysis Concepts .. you obtain the same decibel level and display regardless of whether you use the amplitude or power spectrum. Pr (11) P where P is the measured power.78 Vrms. 2 When using amplitude or power as the amplitudesquared of the same signal. Ar is the reference amplitude. suppose you want to display a signal containing amplitudes from a minimum of 0. and Ar is the voltage ratio.22 Vrms. Ar (12) A where A is the measured amplitude. which yields the unit of measure dBm. the resulting decibel level is exactly the same. which yields the unit of measure dBm. which yields the unit of measure dBVrms. For example. Displaying amplitude or power spectra on a decibel scale allows you to view wide dynamic ranges and to see small signal components in the presence of large ones. Use the reference 1 mW into a load of 600 Ω for audio frequencies where 0 dB is 0. You can use the following common conventions to specify a reference value for calculating decibels: • • • • Use the reference one voltrms squared ( 1Vrms ) for power. and . Multiplying the decibel ratio by two is equivalent to having a squared ratio..
If the device displays 10 V/cm.1 mm. the device displays 10 V of amplitude per centimeter of height.1 mm is barely visible on the display screen. Because a height of 0.000 Amplitude Ratio 100 10 2 1. you might overlook the 0. Table 11. displaying the 0. Table 11 shows the relationship between the decibel and the power and voltage ratios.com .Chapter 1 Introduction to Digital Signal Processing and Analysis in LabVIEW on a device with a display height of 10 cm.1 V amplitude of the signal requires a height of only 0. if the device requires the entire display height to display the 100 V amplitude.1 V amplitude component of the signal. Using a linear scale. Using a logarithmic scale in decibels allows you to see the 0.000 100 4 2 1 1/2 1/4 1/100 1/10.4 1/2 1/10 1/100 Table 11 shows how you can compress a wide range of amplitudes into a small set of numbers by using the logarithmic decibel scale. Decibels and Power and Voltage Ratio Relationship dB +40 +20 +6 +3 0 –3 –6 –20 –40 Power Ratio 10.4 1 1/1.1 V amplitude component of the signal. LabVIEW Analysis Concepts 110 ni.
Chapter 2
Signal Generation

The generation of signals is an important part of any test or measurement system. The following applications are examples of uses for signal generation:
•   Simulate signals to test your algorithm when real-world signals are not available, for example, when you do not have a DAQ device for obtaining real-world signals or when access to real-world signals is not possible.
•   Generate signals to apply to a digital-to-analog (D/A) converter.

This chapter describes the fundamentals of signal generation.

Common Test Signals
Common test signals include the sine wave, the square wave, the triangle wave, the sawtooth wave, several types of noise waveforms, and multitone signals consisting of a superposition of sine waves. The most common signal for audio testing is the sine wave. A single sine wave is often used to determine the amount of harmonic distortion introduced by a system. Multiple sine waves are widely used to measure the intermodulation distortion or to determine the frequency response. Table 2-1 lists the signals used for some typical measurements.

Table 2-1. Typical Measurements and Signals

Measurement                     Signal
Total harmonic distortion       Sine wave
Intermodulation distortion      Multitone (two sine waves)
Frequency response              Multitone (many sine waves, chirp), broadband noise
Interpolation                   Sinc
Table 2-1. Typical Measurements and Signals (Continued)

Measurement                                     Signal
Rise time, overshoot, fall time, undershoot     Pulse
Jitter                                          Square wave

These signals form the basis for many tests and are used to measure the response of a system to a particular stimulus. Some of the common test signals available in most signal generators are shown in Figures 2-1 and 2-2.
Figure 2-1. Common Test Signals (1: sine wave; 2: square wave; 3: triangle wave; 4: sawtooth wave; 5: ramp; 6: impulse)
Figure 2-2. More Common Test Signals (amplitude versus time plots: 7 Sinc, 8 Pulse, 9 Chirp)

The most useful way to view the common test signals is in terms of their frequency content. The common test signals have the following frequency content characteristics:
• Sine waves have a single frequency component.
• Square waves consist of the superposition of many sine waves at odd harmonics of the fundamental frequency. The amplitude of each harmonic is inversely proportional to its frequency.
• Triangle and sawtooth waves have harmonic components that are multiples of the fundamental frequency.
• An impulse contains all frequencies that can be represented for a given sampling rate and number of samples.
• Chirp signals are sinusoids swept from a start frequency to a stop frequency, thus generating energy across a given frequency range. Chirp patterns have discrete frequencies that lie within a certain range. The discrete frequencies of chirp patterns depend on the sampling rate, the start and end frequencies, and the number of samples.
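The square-wave bullet above can be checked numerically: summing odd harmonics whose amplitudes fall off as 1/frequency reproduces the square wave's flat levels. This is an illustrative stdlib-Python sketch, not part of the manual; the function name is hypothetical.

```python
import math

def square_partial_sum(t, n_terms):
    """Sum the first n_terms odd harmonics of a unit square wave.

    Harmonic k enters with amplitude 4/(pi*k): the amplitude of each
    harmonic is inversely proportional to its frequency.
    """
    return sum(4.0 / (math.pi * k) * math.sin(2.0 * math.pi * k * t)
               for k in range(1, 2 * n_terms, 2))

# Away from the jumps, the partial sum settles toward the +/-1 levels.
print(square_partial_sum(0.25, 500))
```

With enough harmonics the value at t = 0.25 approaches +1 and the value at t = 0.75 approaches -1, apart from the Gibbs overshoot near the discontinuities.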
Frequency Response Measurements
To achieve a good frequency response measurement, the frequency range of interest must contain a significant amount of stimulus energy. Two common signals used for frequency response measurements are the chirp signal and a broadband noise signal, such as white noise. Refer to the Common Test Signals section of this chapter for information about chirp signals. Refer to the Noise Generation section of this chapter for information about white noise.

It is best not to use windows when analyzing frequency response signals. If you generate a chirp stimulus signal at the same rate you acquire the response, you can match the acquisition frame size to the length of the chirp. Because some stimulus signals are not constant in frequency across the time record, applying a window might obscure important portions of the transient response. No window is generally the best choice for a broadband signal source.

Multitone Generation
Except for the sine wave, the common test signals do not allow full control over their spectral content. For example, the harmonic components of a square wave are fixed in frequency, phase, and amplitude relative to the fundamental. However, you can generate multitone signals with a specific amplitude and phase for each individual frequency component.

A multitone signal is the superposition of several sine waves, or tones, each with a distinct amplitude, phase, and frequency. A multitone signal is typically created so that an integer number of cycles of each individual tone are contained in the signal. If an FFT of the entire multitone signal is computed, each of the tones falls exactly onto a single frequency bin, which means no spectral spread or leakage occurs.

Multitone test signals are used to determine the frequency response of a device and, with appropriate selection of frequencies, also can be used to measure such quantities as intermodulation distortion. Multitone signals are a part of many test specifications and allow the fast and efficient stimulus of a system across an arbitrary band of frequencies.
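The leakage-free property described above is easy to demonstrate: with an integer number of cycles per tone, every tone lands exactly on one DFT bin. A stdlib-Python sketch (function names are illustrative, not LabVIEW VIs):

```python
import cmath, math

def multitone(n, tones):
    """Superpose sine tones; each tone is (cycles, amplitude, phase_rad).

    An integer number of cycles per tone puts each tone exactly on one
    frequency bin, so no spectral spread or leakage occurs.
    """
    return [sum(a * math.sin(2.0 * math.pi * c * i / n + p)
                for c, a, p in tones)
            for i in range(n)]

def dft_mag(x):
    """Plain DFT magnitude spectrum, scaled by 1/n."""
    n = len(x)
    return [abs(sum(x[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n))) / n
            for k in range(n)]

sig = multitone(64, [(3, 1.0, 0.0), (7, 0.5, 1.0)])
mag = dft_mag(sig)
```

Bins 3 and 7 carry half of each tone's amplitude (the other half sits in the mirrored bins), and every other bin is numerically zero.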
Crest Factor
The crest factor is defined as the ratio of the peak magnitude to the RMS value of the signal. For example, a sine wave has a crest factor of 1.414:1. You can generate a multitone signal with a specific amplitude by using different combinations of the phase relationships and amplitudes of the constituent sine tones. However, the crest factor might vary considerably. The relative phases of the constituent tones with respect to each other determine the crest factor of a multitone signal with specified amplitude.

To avoid clipping, the maximum value of the multitone signal should not exceed the maximum capability of the hardware that generates the signal, which means a limit is placed on the maximum amplitude of the signal. For the same maximum amplitude, a multitone signal with a large crest factor contains less energy than one with a smaller crest factor. In other words, a large crest factor means that the amplitude of a given component sine tone is lower than the same sine tone in a multitone signal with a smaller crest factor. A higher crest factor results in individual sine tones with lower signal-to-noise ratios. Therefore, proper selection of phases is critical to generating a useful multitone signal. A good approach to generating a signal is to choose amplitudes and phases that result in a lower crest factor.

Phase Generation
The following schemes are used to generate the tone phases of multitone signals:
• Varying the phase difference between adjacent frequency tones linearly from 0 to 360 degrees
• Varying the tone phases randomly

Varying the phase difference between adjacent frequency tones linearly from 0 to 360 degrees allows the creation of multitone signals with very low crest factors. However, the resulting multitone signals have the following potentially undesirable characteristics:
• The multitone signal is very sensitive to phase distortion. If in the course of generating the multitone signal the hardware or signal path induces nonlinear phase distortion, the crest factor might vary considerably.
• The multitone signal might display some repetitive time-domain characteristics, as shown in the multitone signal in Figure 2-3.
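The crest-factor definition above, and the effect of phase choice on it, can be sketched in a few lines of stdlib Python (illustrative only; the zero-phase multitone is a deliberately bad phase choice that makes all peaks line up):

```python
import math

def crest_factor(x):
    """Ratio of the peak magnitude to the RMS value of the signal."""
    peak = max(abs(v) for v in x)
    rms = math.sqrt(sum(v * v for v in x) / len(x))
    return peak / rms

n = 10000
# A pure sine tone: crest factor of about 1.414.
sine = [math.sin(2.0 * math.pi * 8.0 * i / n) for i in range(n)]

# 16 equal-amplitude tones, all with zero phase: every tone peaks at
# the same instant, so the crest factor is much higher.
zero_phase = [sum(math.cos(2.0 * math.pi * k * i / n) for k in range(1, 17))
              for i in range(n)]
```

With zero phases the 16-tone signal reaches a crest factor of sqrt(32), roughly 5.66, versus 1.414 for the single sine, which is why phase selection matters for a useful multitone stimulus.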
Figure 2-3. Multitone Signal with Linearly Varying Phase Difference between Adjacent Tones

The signal in Figure 2-3 resembles a chirp signal in that its frequency appears to decrease from left to right. The apparent decrease in frequency from left to right is characteristic of multitone signals generated by linearly varying the phase difference between adjacent frequency tones.

Having a signal that is more noiselike than the signal in Figure 2-3 often is more desirable. Varying the tone phases randomly results in a multitone signal whose amplitudes are nearly Gaussian in distribution as the number of tones increases. Figure 2-4 illustrates a signal created by varying the tone phases randomly.
Figure 2-4. Multitone Signal with Random Phase Difference between Adjacent Tones

In addition to being more noiselike, the signal in Figure 2-4 also is much less sensitive to phase distortion. Multitone signals with the sort of phase relationship shown in Figure 2-4 generally achieve a crest factor between 10 dB and 11 dB.

Swept Sine versus Multitone
To characterize a system, you often must measure the response of the system at many different frequencies. You can use the following methods to measure the response of a system at many different frequencies:
• Swept sine continuously and smoothly changes the frequency of a sine wave across a range of frequencies.
• Stepped sine provides a single sine tone of fixed frequency as the stimulus for a certain time and then increments the frequency by a discrete amount. The process continues until all the frequencies of interest have been reached.
• Multitone provides a signal composed of multiple sine tones.

A multitone signal has significant advantages over the swept sine and stepped sine approaches. For a given range of frequencies, the multitone approach can be much faster than the equivalent swept sine measurement, due mainly to settling time issues. For each sine tone in a stepped sine measurement, you must wait for the settling time of the system to end before starting the measurement. If the system has low-frequency poles and/or zeroes or high Q-resonances, the system might take a relatively long time to settle. The settling time issue for a swept sine can be even more complex.

In the multitone approach, you must wait only once for the settling time. A multitone signal containing one period of the lowest frequency (actually one period of the highest frequency resolution) is enough for the settling time. After the response to the multitone signal is acquired, the processing can be very fast. You can use a single fast Fourier transform (FFT) to measure many frequency points, amplitude and phase, simultaneously.

The swept sine approach is more appropriate than the multitone approach in certain situations. The frequency resolution of the FFT is limited by your measurement time. If you want to measure your system at 1.000 kHz and 1.001 kHz, using two independent sine tones is the best approach. Using two independent sine tones, you can perform the measurement in a few milliseconds, while a multitone measurement requires at least one second. The difference in measurement speed is because you must wait long enough to obtain the required number of samples to achieve a frequency resolution of 1 Hz. Some applications, such as finding the resonant frequency of a crystal, combine a multitone measurement for coarse measurement and a narrow-range sweep for fine measurement.

Each measured tone within a multitone signal is more sensitive to noise because the energy of each tone is lower than that in a single pure tone. For example, consider a single sine tone of amplitude 10 V peak and frequency 100 Hz. A multitone signal containing 10 tones, including the 100 Hz tone, might have a maximum amplitude of 10 V. However, the 100 Hz tone component has an amplitude somewhat less than 10 V. The lower amplitude of the 100 Hz tone component is due to the way that all the sine tones sum. Assuming the same level of noise, the signal-to-noise ratio (SNR) of the 100 Hz component is better for the case of the swept sine approach. For a multitone signal, you can mitigate the reduced SNR by adjusting the amplitudes and phases of the tones, applying higher energy where needed and applying lower energy at less critical frequencies.

When viewing the response of a system to a multitone stimulus, any energy between FFT bins is due to noise or unit-under-test (UUT) induced distortion.
Noise Generation
You can use noise signals to perform frequency response measurements or to simulate certain processes. Several types of noise are typically used, namely uniform white noise, Gaussian white noise, and periodic random noise.

The term white in the definition of noise refers to the frequency domain characteristic of noise. Ideal white noise has equal power per unit bandwidth, resulting in a flat power spectral density across the frequency range of interest. Thus, the power in the frequency range from 100 Hz to 110 Hz is the same as the power in the frequency range from 1,000 Hz to 1,010 Hz. In practical measurements, achieving the flat power spectral density requires an infinite number of samples. Thus, when making measurements of white noise, the power spectra are usually averaged, with more averages resulting in a flatter power spectrum.

The terms uniform and Gaussian refer to the probability density function (PDF) of the amplitudes of the time-domain samples of the noise.

Uniform White Noise
For uniform white noise, the PDF of the amplitudes of the time-domain samples is uniform within the specified maximum and minimum levels. In other words, all amplitude values between some limits are equally likely or probable. Thermal noise produced in active components tends to be uniform white in distribution. Figure 2-5 shows the distribution of the samples of uniform white noise.

Figure 2-5. Uniform White Noise
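The uniform PDF described above means every sample is equally likely anywhere between the specified limits. A minimal stdlib-Python sketch (illustrative stand-in for a noise-generation VI; the seed is fixed only to make the example repeatable):

```python
import random

def uniform_white_noise(n, amplitude, seed=0):
    """Samples drawn uniformly from [-amplitude, +amplitude]."""
    rng = random.Random(seed)
    return [rng.uniform(-amplitude, amplitude) for _ in range(n)]

noise = uniform_white_noise(10000, 2.0)
```

All samples stay within the specified levels, and the sample mean is close to zero for a long record.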
Gaussian White Noise
For Gaussian white noise, the PDF of the amplitudes of the time-domain samples is Gaussian. If uniform white noise is passed through a linear system, the resulting output is Gaussian white noise. Figure 2-6 shows the distribution of the samples of Gaussian white noise.

Figure 2-6. Gaussian White Noise

Periodic Random Noise
Periodic random noise (PRN) is a summation of sinusoidal signals with the same amplitudes but with random phases. PRN consists of all sine waves with frequencies that can be represented with an integral number of cycles in the requested number of samples. PRN does not have energy at all frequencies as white noise does but has energy only at discrete frequencies that correspond to harmonics of a fundamental frequency. The fundamental frequency is equal to the sampling frequency divided by the number of samples. However, the level of noise at each of the discrete frequencies is the same.

Because PRN contains only integral-cycle sinusoids, you do not need to window PRN before performing spectral analysis, as you must for nonperiodic random noise sources. PRN is self-windowing and therefore has no spectral leakage. You can use PRN to compute the frequency response of a linear system with one time record instead of averaging the frequency response over several time records. Figure 2-7 shows the spectrum of PRN and the averaged spectra of white noise.
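The flat discrete spectrum of PRN follows directly from its construction, and holds no matter which random phases are drawn. A stdlib-Python sketch (hypothetical function names, fixed seed for repeatability):

```python
import cmath, math, random

def periodic_random_noise(n, amplitude, seed=0):
    """Every integral-cycle sine the record can hold, with equal
    amplitudes and random phases."""
    rng = random.Random(seed)
    phases = {k: rng.uniform(0.0, 2.0 * math.pi) for k in range(1, n // 2)}
    return [sum(amplitude * math.sin(2.0 * math.pi * k * i / n + phases[k])
                for k in range(1, n // 2))
            for i in range(n)]

def dft_mag(x):
    """Plain DFT magnitude spectrum, scaled by 1/n."""
    n = len(x)
    return [abs(sum(x[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n))) / n
            for k in range(n)]

prn = periodic_random_noise(32, 1.0)
mag = dft_mag(prn)
```

Every populated bin carries the same level regardless of the phases, which is the self-windowing, leakage-free property described above.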
Figure 2-7. Spectral Representation of Periodic Random Noise and Averaged White Noise

Normalized Frequency
In the analog world, a signal frequency is measured in hertz (Hz), or cycles per second. But the digital system often uses a digital frequency, which is the ratio between the analog frequency and the sampling frequency, as shown by the following equation:

    digital frequency = analog frequency / sampling frequency

The digital frequency is known as the normalized frequency and is measured in cycles per sample.
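The ratio above is a one-line computation. As an illustration only (the numbers here are made up for the sketch, not taken from the manual):

```python
def normalized_frequency(f_hz, fs_hz):
    """cycles/sample = (cycles/second) / (samples/second)."""
    return f_hz / fs_hz

# e.g. a 250 Hz tone sampled at 10,000 samples per second
f = normalized_frequency(250.0, 10000.0)   # 0.025 cycles/sample
samples_per_cycle = 1.0 / f                # reciprocal: samples per cycle
```

The reciprocal of the normalized frequency gives the number of samples per cycle, 40 in this illustration.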
The normalized frequency ranges from 0.0 to 1.0, which corresponds to a real frequency range of 0 to the sampling frequency fs. The normalized frequency also wraps around 1.0, so a normalized frequency of 1.1 is equivalent to 0.1. For example, a signal sampled at the Nyquist rate of fs/2 is sampled twice per cycle, that is, at two samples/cycle. This sampling rate corresponds to a normalized frequency of 1/2 cycles/sample = 0.5 cycles/sample. The reciprocal of the normalized frequency, 1/f, gives you the number of times the signal is sampled in one cycle, that is, the number of samples per cycle.

Some of the Signal Generation VIs use a frequency input f that is assumed to use normalized frequency units of cycles per sample. You must use normalized units of cycles per sample with the following Signal Generation VIs:
• Sine Wave
• Square Wave
• Sawtooth Wave
• Triangle Wave
• Arbitrary Wave
• Chirp Pattern

The only pattern VI that uses normalized units is the Chirp Pattern VI.

When you use a VI that requires the normalized frequency as an input, you must convert your frequency units to the normalized units of cycles per sample. If you need to convert from Hz, or cycles per second, to cycles per sample, divide your frequency in Hz by the sampling rate given in samples per second, as shown in the following equation:

    cycles per sample = (cycles per second) / (samples per second)

For example, dividing a frequency of 60 Hz by a sampling rate of 1,000 Hz gives a normalized frequency of f = 0.06 cycles/sample. Therefore, it takes almost 17 samples, or 1/0.06, to generate one cycle of the sine wave.

If you are used to working in frequency units of cycles, you need only divide the frequency in cycles by the number of samples generated. For example, a frequency of two cycles divided by 50 samples results in a normalized frequency of f = 1/25 cycles/sample. This means that it takes 25 samples to generate one cycle of the sine wave.

Wave and Pattern VIs
The Signal Generation VIs create many common signals required for network analysis and simulation. You also can use the Signal Generation VIs in conjunction with National Instruments hardware to generate analog output signals.

The names of most of the Signal Generation VIs contain the word wave or pattern. A basic difference exists between the operation of the two types of VIs. The difference has to do with whether the VI can keep track of the phase of the signal it generates each time the VI is called. All the wave VIs are reentrant, which means they can keep track of phase internally, and they accept frequency in normalized units of cycles per sample.

Phase Control
The wave VIs have a phase in input that specifies the initial phase, in degrees, of the first sample of the generated waveform. The wave VIs also have a phase out output that indicates the phase of the next sample of the generated waveform. In addition, a reset phase input specifies whether the phase of the first sample generated when the wave VI is called is the phase specified in the phase in input or the phase available in the phase out output from when the VI last executed. A TRUE value for reset phase sets the initial phase to phase in. A FALSE value for reset phase sets the initial phase to the value of phase out from when the VI last executed. Wire FALSE to the reset phase input to allow for continuous sampling simulation.
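The phase in / phase out / reset phase behavior described above can be modeled textually. The class below is a hypothetical Python stand-in for a reentrant wave VI, not LabVIEW code; it only illustrates how carrying phase out between calls makes consecutive blocks phase-continuous.

```python
import math

class SineWaveVI:
    """Sketch of a reentrant sine wave VI that tracks phase internally."""

    def __init__(self):
        self.phase_out = 0.0   # degrees; phase of the next sample

    def generate(self, n, f, reset_phase=False, phase_in=0.0):
        """Generate n samples at normalized frequency f (cycles/sample)."""
        if reset_phase:
            self.phase_out = phase_in     # TRUE: start from phase in
        start = self.phase_out            # FALSE: continue from last call
        block = [math.sin(math.radians(start + 360.0 * f * i))
                 for i in range(n)]
        self.phase_out = (start + 360.0 * f * n) % 360.0
        return block

vi = SineWaveVI()
a = vi.generate(100, 0.013)           # reset phase wired FALSE by default
b = vi.generate(100, 0.013)           # picks up where block a ended
c = SineWaveVI().generate(200, 0.013) # one continuous 200-sample block
```

Because phase out is carried between calls, the two 100-sample blocks joined together match the single 200-sample block, which is the continuous-sampling behavior obtained by wiring FALSE to reset phase.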
Digital Filtering

This chapter introduces the concept of filtering, compares analog and digital filters, describes finite impulse response (FIR) and infinite impulse response (IIR) filters, and describes how to choose the appropriate digital filter for a particular application.

Introduction to Filtering
The filtering process alters the frequency content of a signal. For example, the bass control on a stereo system alters the low-frequency content of a signal, while the treble control alters the high-frequency content. Changing the bass and treble controls filters the audio signal. Two common filtering applications are removing noise and decimation. Decimation consists of lowpass filtering and reducing the sample rate.

Classical linear filtering assumes that the signal content of interest is distinct from the remainder of the signal in the frequency domain. The filtering process assumes that you can separate the signal content of interest from the raw signal.

Advantages of Digital Filtering Compared to Analog Filtering
An analog filter has an analog signal at both its input x(t) and its output y(t). Both x(t) and y(t) are functions of a continuous variable t and can have an infinite number of values. Analog filter design requires advanced mathematical knowledge and an understanding of the processes involved in the system affecting the filter. Because of modern sampling and digital signal processing tools, you can replace analog filters with digital filters in applications that require flexibility and programmability, such as audio, telecommunications, geophysics, and medical monitoring applications.
Digital filters have the following advantages compared to analog filters:
• Digital filters are software programmable, which makes them easy to build and test.
• Digital filters require only the arithmetic operations of multiplication and addition/subtraction.
• Digital filters do not drift with temperature or humidity or require precision components.
• Digital filters have a superior performance-to-cost ratio.
• Digital filters do not suffer from manufacturing variations or aging.

Common Digital Filters
You can classify a digital filter as one of the following types:
• Finite impulse response (FIR) filter, also known as a moving average (MA) filter
• Infinite impulse response (IIR) filter, also known as an autoregressive moving-average (ARMA) filter
• Nonlinear filter

Traditional filter classification begins with classifying a filter according to its impulse response.

Impulse Response
An impulse is a short-duration signal that goes from zero to a maximum value and back to zero again in a short time. Equation 3-1 provides the mathematical definition of an impulse.

    x0 = 1
    xi = 0 for all i ≠ 0    (3-1)

The impulse response of a filter is the response of the filter to an impulse and depends on the values upon which the filter operates. Figure 3-1 illustrates impulse response.
Figure 3-1. Impulse Response

The Fourier transform of the impulse response is the frequency response of the filter. The frequency response of a filter provides information about the output of the filter at different frequencies. In other words, the frequency response of a filter reflects the gain of the filter at different frequencies. For an ideal filter, the gain is one in the passband and zero in the stopband. An ideal filter passes all frequencies in the passband to the output unchanged but passes none of the frequencies in the stopband to the output.

Classifying Filters by Impulse Response
The impulse response of a filter determines whether the filter is an FIR or IIR filter. The output of an FIR filter depends only on the current and past input values. The output of an IIR filter depends on the current and past input values and the current and past output values.

The operation of a cash register can serve as an example to illustrate the difference between FIR and IIR filter operations. The cash register adds the cost of each item to produce the running total y[k]. For this example, the following conditions are true:
• x[k] is the cost of the current item entered into the cash register.
• x[k – 1] is the price of the past item entered into the cash register.
• y[k] equals the total up to the kth item.
• y[k – 1] equals the total up to the (k – 1) item.
• N is the total number of items entered into the cash register.

The following equation computes y[k] up to the kth item:

    y[k] = x[k] + x[k – 1] + x[k – 2] + x[k – 3] + … + x[1],    1 ≤ k ≤ N    (3-2)

Therefore, the total for N items is y[N]. Equation 3-2 can be rewritten as the following equation:

    y[k] = y[k – 1] + x[k]    (3-3)

Add a tax of 8.25% and rewrite Equations 3-2 and 3-3 as the following equations:

    y[k] = 1.0825x[k] + 1.0825x[k – 1] + 1.0825x[k – 2] + 1.0825x[k – 3] + … + 1.0825x[1]    (3-4)

    y[k] = y[k – 1] + 1.0825x[k]    (3-5)

Equations 3-4 and 3-5 identically describe the behavior of the cash register. Equation 3-4 describes the behavior of the cash register only in terms of the input, while Equation 3-5 describes the behavior in terms of both the input and the output. Equation 3-4 represents a nonrecursive, or FIR, operation. Equation 3-5 represents a recursive, or IIR, operation. Equations that describe the operation of a filter and have the same form as Equations 3-2, 3-3, 3-4, and 3-5 are difference equations.

Filter Coefficients
In Equation 3-4, the multiplying constant for each term is 1.0825. In Equation 3-5, the multiplying constants are 1 for y[k – 1] and 1.0825 for x[k]. The multiplying constants are the coefficients of the filter. For an IIR filter, the coefficients multiplying the inputs are the forward coefficients, and the coefficients multiplying the outputs are the reverse coefficients.

If a single impulse is present at the input of an FIR filter and all subsequent inputs are zero, the output of an FIR filter becomes zero after a finite time. Therefore, FIR filters are finite. The time required for the filter output to reach zero equals the number of filter coefficients. Because IIR filters operate on current and past input values and current and past output values, the impulse response of an IIR filter never reaches zero and is an infinite response. FIR filters are the simplest filters to design. Refer to the FIR Filters section of this chapter for more information about FIR filters. Refer to the IIR Filters section of this chapter for more information about IIR filters.
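The equivalence of the nonrecursive and recursive forms of the cash register is easy to verify numerically. A stdlib-Python sketch (the price list is invented for illustration):

```python
TAX = 0.0825   # the 8.25% tax from Equations 3-4 and 3-5

def totals_nonrecursive(prices):
    """FIR-style form: each total is computed from inputs only (Eq. 3-4)."""
    return [sum((1.0 + TAX) * p for p in prices[:k + 1])
            for k in range(len(prices))]

def totals_recursive(prices):
    """IIR-style form: each total reuses the previous output (Eq. 3-5)."""
    out, prev = [], 0.0
    for p in prices:
        prev = prev + (1.0 + TAX) * p
        out.append(prev)
    return out

prices = [5.00, 12.50, 3.25]
```

Both forms produce the same running totals, just as Equations 3-4 and 3-5 identically describe the cash register.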
LabVIEW Analysis Concepts 34 ni.0825x[k – 2] + 1. or FIR.
Characteristics of an Ideal Filter
Ideal filters allow a specified frequency range of interest to pass through while attenuating a specified unwanted frequency range. The following filter classifications are based on the frequency range a filter passes or blocks:
• Lowpass filters pass low frequencies and attenuate high frequencies.
• Highpass filters pass high frequencies and attenuate low frequencies.
• Bandpass filters pass a certain band of frequencies.
• Bandstop filters attenuate a certain band of frequencies.

Figure 3-2 shows the ideal frequency response of each of the preceding filter types.

Figure 3-2. Ideal Frequency Response

In Figure 3-2, the frequency points fc, fc1, and fc2 specify the cutoff frequencies for the different filters. When designing filters, you must specify the cutoff frequencies. The filters exhibit the following behavior:
• The lowpass filter passes all frequencies below fc.
• The highpass filter passes all frequencies above fc.
• The bandpass filter passes all frequencies between fc1 and fc2.
• The bandstop filter attenuates all frequencies between fc1 and fc2.

An ideal filter has a gain of one (0 dB) in the passband, so the amplitude of the signal neither increases nor decreases. The passband of the filter is the frequency range that passes through the filter. The stopband of the filter is the range of frequencies that the filter attenuates. Figure 3-3 shows the passband (PB) and the stopband (SB) for each filter type.
Figure 3-3. Passband and Stopband

The filters in Figure 3-3 have the following passband and stopband characteristics:
• The lowpass and highpass filters have one passband and one stopband.
• The bandpass filter has one passband and two stopbands.
• The bandstop filter has two passbands and one stopband.

Practical (Nonideal) Filters
Ideally, a filter has a unit gain (0 dB) in the passband and a gain of zero (–∞ dB) in the stopband. In practice, real filters cannot fulfill all the criteria of an ideal filter. A finite transition band always exists between the passband and the stopband. In the transition band, the gain of the filter changes gradually from one (0 dB) in the passband to zero (–∞ dB) in the stopband.
Transition Band
Figure 3-4 shows the passband, the stopband, and the transition band for each type of practical filter.

Figure 3-4. Nonideal Filters

In each plot in Figure 3-4, the x-axis represents frequency, and the y-axis represents the magnitude of the filter in dB. The passband is the region within which the gain of the filter varies from 0 dB to –3 dB.

Passband Ripple and Stopband Attenuation
In many applications, you can allow the gain in the passband to vary slightly from unity. This variation in the passband is the passband ripple, or the difference between the actual gain and the desired gain of unity. In practice, the stopband attenuation cannot be infinite, and you must specify a value with which you are satisfied. Measure both the passband ripple and the stopband attenuation in decibels (dB). Equation 3-6 defines a decibel:

    dB = 20 log( Ao(f) / Ai(f) )    (3-6)

where log denotes the base 10 logarithm, Ai(f) is the amplitude at a particular frequency f before filtering, and Ao(f) is the amplitude at a particular frequency f after filtering. When you know the passband ripple or stopband attenuation, you can use Equation 3-6 to determine the ratio of input and output amplitudes.
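Inverting Equation 3-6 turns a dB specification back into an amplitude ratio. A one-function stdlib-Python sketch (illustrative only):

```python
def amplitude_ratio(db):
    """Invert Equation 3-6: dB = 20*log10(Ao/Ai), so Ao/Ai = 10**(dB/20)."""
    return 10.0 ** (db / 20.0)

# A passband ripple of -0.02 dB barely changes the amplitude:
print(amplitude_ratio(-0.02))
```

For -0.02 dB the ratio is about 0.9977, close to the unity gain that is ideal for the passband.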
The ratio of the amplitudes shows how close the passband or stopband is to the ideal. For example, for a passband ripple of –0.02 dB, Equation 3-6 yields the following set of equations:

    –0.02 = 20 log( Ao(f) / Ai(f) )    (3-7)

    Ao(f) / Ai(f) = 10^(–0.001) = 0.9977    (3-8)

Equations 3-7 and 3-8 show that the ratio of input and output amplitudes is close to unity, which is the ideal for the passband.

Practical filter design attempts to approximate the ideal desired magnitude response, subject to certain constraints. Table 3-1 compares the characteristics of ideal filters and practical filters.

Table 3-1. Characteristics of Ideal and Practical Filters

    Characteristic     Ideal Filters         Practical Filters
    Passband           Flat and constant     Might contain ripples
    Stopband           Flat and constant     Might contain ripples
    Transition band    None                  Have transition regions

Practical filter design involves compromise, allowing you to emphasize a desirable filter characteristic at the expense of a less desirable characteristic. The compromises you can make depend on whether the filter is an FIR or IIR filter and the design algorithm.

Sampling Rate
The sampling rate is important to the success of a filtering operation. The maximum frequency component of the signal of interest usually determines the sampling rate. In general, choose a sampling rate 10 times higher than the highest frequency component of the signal of interest. Make exceptions to the previous sampling rate guideline when filter cutoff frequencies must be very close to either DC or the Nyquist frequency. Filters with cutoff frequencies close to DC or the Nyquist frequency might have a slow rate of convergence. You can take the following actions to overcome the slow convergence:
• If the cutoff is too close to the Nyquist frequency, increase the sampling rate.
• If the cutoff is too close to DC, reduce the sampling rate.

In general, adjust the sampling rate only if you encounter problems.

FIR Filters
Finite impulse response (FIR) filters are digital filters that have a finite impulse response. FIR filters also are known as nonrecursive filters, convolution filters, and moving average (MA) filters. FIR filters operate only on current and past input values and are the simplest filters to design. FIR filters perform a convolution of the filter coefficients with a sequence of input values and produce an equally numbered sequence of output values. Equation 3-9 defines the finite convolution an FIR filter performs:

    y[i] = Σ (k = 0 to n – 1) h[k] x[i – k]    (3-9)

where x is the input sequence to filter, y is the filtered sequence, and h is the FIR filter coefficients.

FIR filters have the following characteristics:
• FIR filters can achieve linear phase because of filter coefficient symmetry in the realization.
• FIR filters are always stable.
• You generally can associate a delay with the output sequence, as shown in the following equation:

    delay = (n – 1) / 2

where n is the number of FIR filter coefficients.
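Equation 3-9 translates directly into a few lines of stdlib Python. This sketch treats samples before the start of the record as zero (one common convention, not the only one):

```python
def fir_filter(h, x):
    """y[i] = sum over k of h[k] * x[i - k]  (Equation 3-9).

    Samples before the start of the record are treated as zero, and the
    output sequence has the same length as the input sequence.
    """
    return [sum(h[k] * x[i - k] for k in range(len(h)) if i - k >= 0)
            for i in range(len(x))]

# A 3-coefficient FIR filter driven by a single impulse: the output is
# the impulse response itself and dies out after len(h) samples.
h = [0.25, 0.5, 0.25]
y = fir_filter(h, [1.0, 0.0, 0.0, 0.0, 0.0])
```

The impulse response equals the coefficient sequence and reaches zero after exactly three samples, which is the finite behavior that gives FIR filters their name.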
Figure 3-5 shows a typical magnitude and phase response of an FIR filter compared to normalized frequency.

Figure 3-5. FIR Filter Magnitude and Phase Response Compared to Normalized Frequency

In Figure 3-5, the phase is clearly linear. However, the discontinuities in the phase response result from the discontinuities introduced when you use the absolute value to compute the magnitude response. The discontinuities in phase are on the order of π.
Taps

The terms tap and taps frequently appear in descriptions of FIR filters, FIR filter design, and FIR filtering operations. Taps usually refers to the number of filter coefficients for an FIR filter. The term tap comes from the process of tapping off of the shift register to form each h_k · x_(i–k) term in Equation 3-9. Figure 3-6 represents an n-sample shift register containing the input samples [x_i, x_(i–1), …] and illustrates the process of tapping.

Figure 3-6. Tapping

Designing FIR Filters

You design FIR filters by approximating the desired frequency response of a discrete-time system. The most common techniques approximate the desired magnitude response while maintaining a linear-phase response. Linear-phase response implies that all frequencies in the system have the same propagation delay. Figure 3-7 shows the block diagram of a VI that returns the frequency response of a bandpass equiripple FIR filter.
Figure 3-7. Frequency Response of a Bandpass Equiripple FIR Filter

The VI in Figure 3-7 completes the following steps to compute the frequency response of the filter.
1.  Pass an impulse signal through the filter. The Case structure specifies the filter type: lowpass, highpass, bandpass, or bandstop. The signal passed out of the Case structure is the impulse response of the filter.
2.  Pass the filtered signal out of the Case structure to the FFT VI.
3.  Use the FFT VI to perform a Fourier transform on the impulse response and to compute the frequency response of the filter. h(t) is the impulse response and H(f) is the frequency response, such that the impulse response and the frequency response comprise the Fourier transform pair h(t) ⇔ H(f).
4.  Use the Array Subset function to reduce the data returned by the FFT VI. Half of the real FFT result is redundant, so the VI needs to process only half of the data returned by the FFT VI.
5.  Use the Complex To Polar function to obtain the magnitude-and-phase form of the data returned by the FFT VI. The magnitude-and-phase form of the complex output from the FFT VI is easier to interpret than the rectangular form.
6.  Convert the magnitude to decibels.
7.  Unwrap the phase and convert it to degrees.
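The seven steps above translate directly into text-based tools. Here is a sketch in Python/SciPy, using `scipy.signal.firwin` as a stand-in for the bandpass design; the tap count, band edges, and FFT length are illustrative assumptions, not the VI's values.

```python
import numpy as np
from scipy import signal

# Step 1: design a bandpass FIR filter; its coefficients ARE its impulse response.
impulse_response = signal.firwin(101, [0.2, 0.4], pass_zero=False)

# Steps 2-3: Fourier transform of the impulse response gives H(f).
n_fft = 1024
H = np.fft.fft(impulse_response, n_fft)

# Step 4: half of the FFT of a real signal is redundant, so keep only half.
H = H[:n_fft // 2]

# Steps 5-7: magnitude in dB and unwrapped phase in degrees.
magnitude_db = 20 * np.log10(np.abs(H) + 1e-12)  # small offset avoids log(0)
phase_deg = np.degrees(np.unwrap(np.angle(H)))

print(magnitude_db.shape, phase_deg.shape)  # (512,) (512,)
```

The passband (bins near 0.3 × Nyquist here) sits near 0 dB while the stopband falls tens of dB below it, mirroring what the VI's front panel graphs display.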
Because FIR filters have ripple in the magnitude response, designing FIR filters presents the following design challenges:
•   Designing a filter with a magnitude response as close to the ideal as possible
•   Designing a filter that distributes the ripple in a desired fashion

For example, a lowpass filter has an ideal characteristic magnitude response. However, a particular application might allow some ripple in the passband and more ripple in the stopband. The filter design algorithm must balance the relative ripple requirements while producing the sharpest transition band. The most common techniques for designing FIR filters are windowing and the Parks-McClellan algorithm, also known as Remez Exchange.

Figure 3-8 shows the magnitude and phase responses returned by the VI in Figure 3-7.

Figure 3-8. Magnitude and Phase Response of a Bandpass Equiripple FIR Filter

In Figure 3-8, the phase response is linear because all frequencies in the system have the same propagation delay. The discontinuities in the phase response result from the discontinuities introduced when you use the absolute value to compute the magnitude response.
Designing FIR Filters by Windowing

Windowing is the simplest technique for designing FIR filters because of its conceptual simplicity and ease of implementation. Designing FIR filters by windowing takes the inverse FFT of the desired magnitude response and applies a smoothing window to the result. The smoothing window is a time-domain window.

Complete the following steps to design an FIR filter by windowing.
1.  Decide on an ideal frequency response.
2.  Calculate the impulse response of the ideal frequency response.
3.  Truncate the impulse response to produce a finite number of coefficients. To meet the linear-phase constraint, maintain symmetry about the center point of the coefficients. Truncating the ideal impulse response results in the Gibbs phenomenon, which appears as oscillatory behavior near cutoff frequencies in the FIR filter frequency response.
4.  Apply a symmetric smoothing window. You can reduce the effects of the Gibbs phenomenon by using a smoothing window to smooth the truncation of the ideal impulse response.

By tapering the FIR coefficients at each end, you can decrease the height of the side lobes in the frequency response. However, decreasing the side lobe heights causes the main lobe to widen, resulting in a wider transition band at the cutoff frequencies. Selecting a smoothing window therefore requires a tradeoff: decreasing the height of the side lobes near the cutoff frequencies increases the width of the transition band, and decreasing the width of the transition band increases the height of the side lobes.

Designing FIR filters by windowing has the following disadvantages:
•   Inefficiency
    –   Windowing results in unequal distribution of ripple.
    –   Windowing results in a wider transition band than other design techniques.
•   Difficulty in specification
    –   Windowing increases the difficulty of specifying a cutoff frequency that has a specific attenuation.
    –   Filter designers must specify the ideal cutoff frequency.
    –   Filter designers must specify the sampling frequency.
    –   Filter designers must specify the number of taps.
    –   Filter designers must specify the window type.
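The four design steps above can be sketched as a classic windowed-sinc lowpass design. This Python/NumPy fragment is a generic illustration under assumed parameters (cutoff of 0.125 cycles/sample, 51 taps, Hamming window), not LabVIEW's implementation.

```python
import numpy as np

def design_lowpass_by_windowing(cutoff, num_taps):
    """Windowed-sinc FIR design: ideal response -> impulse response ->
    symmetric truncation -> smoothing window. cutoff is in cycles/sample."""
    # Steps 1-2: the ideal lowpass impulse response is a sinc, centered so the
    # truncated coefficients are symmetric (the linear-phase constraint).
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = 2 * cutoff * np.sinc(2 * cutoff * n)
    # Step 3 is implicit: h already holds a finite, symmetric set of coefficients.
    # Step 4: a symmetric smoothing window tames the Gibbs oscillations.
    h *= np.hamming(num_taps)
    return h / h.sum()  # normalize for unity gain at DC

h = design_lowpass_by_windowing(cutoff=0.125, num_taps=51)
# Coefficient symmetry is what gives the linear-phase property.
assert np.allclose(h, h[::-1])
```

Swapping `np.hamming` for another symmetric window trades side-lobe height against transition-band width, which is exactly the compromise described above.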
Designing FIR filters by windowing does not require a large amount of computational resources. Therefore, windowing is the fastest technique for designing FIR filters. However, windowing is not necessarily the best technique for designing FIR filters.
Designing Optimum FIR Filters Using the Parks-McClellan Algorithm
The Parks-McClellan algorithm, or Remez Exchange, uses an iterative technique based on an error criterion to design optimum, linear-phase FIR filter coefficients. Filters you design with the Parks-McClellan algorithm are optimal in the sense that they minimize the maximum error between the actual magnitude response of the filter and the ideal magnitude response. Designing optimum FIR filters reduces adverse effects at the cutoff frequencies and offers more control over the approximation errors in different frequency bands than other FIR filter design techniques, such as windowing, which provides no control over the approximation errors in different frequency bands. Optimum FIR filters you design using the Parks-McClellan algorithm have the following characteristics:
•   A magnitude response with the weighted ripple evenly distributed over the passband and stopband
•   A sharp transition band
FIR filters you design using the Parks-McClellan algorithm have an optimal response. However, the design process is complex, requires a large amount of computational resources, and takes much longer than designing FIR filters by windowing.
Designing Equiripple FIR Filters Using the Parks-McClellan Algorithm
You can use the Parks-McClellan algorithm to design equiripple FIR filters. Equiripple design equally weights the passband and stopband ripple and produces filters with a linear-phase characteristic. You must specify the following filter characteristics to design an equiripple FIR filter:
•   Cutoff frequency
•   Number of taps
•   Filter type, such as lowpass, highpass, bandpass, or bandstop
•   Pass frequency
•   Stop frequency
The cutoff frequency for equiripple filters specifies the edge of the passband, the stopband, or both. The ripple in the passband and stopband of equiripple filters causes the following magnitude responses:
•   Passband: a magnitude response greater than or equal to 1
•   Stopband: a magnitude response less than or equal to the stopband attenuation
For example, if you specify a lowpass filter, the passband cutoff frequency is the highest frequency for which the passband conditions are true. Similarly, the stopband cutoff frequency is the lowest frequency for which the stopband conditions are true.
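SciPy exposes the Parks-McClellan algorithm as `scipy.signal.remez`, so the specification list above maps directly onto its arguments. The sampling frequency, tap count, and band edges below are illustrative assumptions, not values from this manual.

```python
import numpy as np
from scipy import signal

fs = 1000.0  # assumed sampling frequency, Hz

# Equiripple lowpass: passband 0-100 Hz, stopband 150-500 Hz, 73 taps.
taps = signal.remez(numtaps=73, bands=[0, 100, 150, fs / 2],
                    desired=[1, 0], fs=fs)

# Inspect the result: the passband ripples tightly around unity gain and the
# stopband ripples tightly around zero, the equiripple behavior described above.
w, H = signal.freqz(taps, worN=4096, fs=fs)
passband = np.abs(H)[w <= 100]
stopband = np.abs(H)[w >= 150]
assert passband.min() > 0.98 and stopband.max() < 0.02
```

Passing a `weight` array to `remez` shifts ripple between bands, which is the "more control over approximation errors" advantage mentioned in the previous section.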
Designing Narrowband FIR Filters
Using conventional techniques to design FIR filters with especially narrow bandwidths can result in long filter lengths. FIR filters with long filter lengths often require long design and implementation times and are susceptible to numerical inaccuracy. In some cases, conventional filter design techniques, such as the Parks-McClellan algorithm, might not produce an acceptable narrow bandwidth FIR filter. The interpolated finite impulse response (IFIR) filter design technique offers an efficient algorithm for designing narrowband FIR filters. Using the IFIR technique produces narrowband filters that require fewer coefficients and computations than filters you design by directly applying the Parks-McClellan algorithm. The FIR Narrowband Coefficients VI uses the IFIR technique to generate narrowband FIR filter coefficients.
You must specify the following parameters when developing narrowband filter specifications:
•   Filter type, such as lowpass, highpass, bandpass, or bandstop
•   Passband ripple on a linear scale
•   Sampling frequency
•   Passband frequency, which refers to passband width for bandpass and bandstop filters
•   Stopband frequency, which refers to stopband width for bandpass and bandstop filters
•   Center frequency for bandpass and bandstop filters
•   Stopband attenuation in decibels
Figure 3-9 shows the block diagram of a VI that estimates the frequency response of a narrowband FIR bandpass filter by transforming the impulse response into the frequency domain.
Figure 3-9. Estimating the Frequency Response of a Narrowband FIR Bandpass Filter
Figure 3-10 shows the filter response from zero to the Nyquist frequency that the VI in Figure 3-9 returns.
Figure 3-10. Estimated Frequency Response of a Narrowband FIR Bandpass Filter from Zero to Nyquist
In Figure 3-10, the narrow passband centers around 1 kHz, which is the response of the filter specified by the front panel controls. Figure 3-11 shows the filter response in detail.
Figure 3-11. Detail of the Estimated Frequency Response of a Narrowband FIR Bandpass Filter
In Figure 3-11, the narrow passband clearly centers around 1 kHz, and the signal outside the passband is attenuated to 60 dB below the passband.
Refer to the works of Vaidyanathan, P. P. and Neuvo, Y. et al. in Appendix A, References, for more information about designing IFIR filters.
Designing Wideband FIR Filters
You also can use the IFIR technique to produce wideband FIR lowpass filters and wideband FIR highpass filters. A wideband FIR lowpass filter has a cutoff frequency near the Nyquist frequency. A wideband FIR highpass filter has a cutoff frequency near zero. You can use the FIR Narrowband Coefficients VI to design wideband FIR lowpass filters and wideband FIR highpass filters. Figure 3-12 shows the frequency response that the VI in Figure 3-9 returns when you use it to estimate the frequency response of a wideband FIR lowpass filter.
Figure 3-12. Frequency Response of a Wideband FIR Lowpass Filter from Zero to Nyquist
In Figure 3-12, the front panel controls define a narrow bandwidth between the stopband at 23.9 kHz and the Nyquist frequency at 24 kHz. However, the frequency response of the filter runs from zero to 23.9 kHz, which makes the filter a wideband filter.
IIR Filters
Infinite impulse response (IIR) filters, also known as recursive filters and autoregressive moving-average (ARMA) filters, operate on current and past input values and current and past output values. The impulse response
of an IIR filter is the response of the general IIR filter to an impulse, as Equation 3-1 defines impulse. Theoretically, the impulse response of an IIR filter never reaches zero and is an infinite response. However, in practical filter applications, the impulse response of a stable IIR filter decays to near zero in a finite number of samples.

IIR filters have a nonlinear-phase response. IIR filters might have ripple in the passband, the stopband, or both. The following general difference equation characterizes IIR filters.

    y_i = (1 / a_0) · [ Σ_(j=0..Nb–1) b_j · x_(i–j)  –  Σ_(k=1..Na–1) a_k · y_(i–k) ]        (3-10)

where b_j is the set of forward coefficients, Nb is the number of forward coefficients, a_k is the set of reverse coefficients, and Na is the number of reverse coefficients. Equation 3-10 describes a filter with an impulse response of theoretically infinite length for nonzero coefficients. The output sample at the current sample index i is the sum of scaled current and past inputs and scaled past outputs.

In most IIR filter designs, and in all of the LabVIEW IIR filters, coefficient a_0 is 1, as shown by Equation 3-11.

    y_i = Σ_(j=0..Nb–1) b_j · x_(i–j)  –  Σ_(k=1..Na–1) a_k · y_(i–k)        (3-11)

where x_i is the current input, x_(i–j) is the past inputs, and y_(i–k) is the past outputs.

Cascade Form IIR Filtering

Equation 3-12 defines the direct-form transfer function of an IIR filter.

    H(z) = ( b_0 + b_1·z^(–1) + … + b_(Nb–1)·z^(–(Nb–1)) ) / ( 1 + a_1·z^(–1) + … + a_(Na–1)·z^(–(Na–1)) )        (3-12)

A filter implemented by directly using the structure defined by Equation 3-12 after converting it to the difference equation in
Equation 3-11 is a direct-form IIR filter. The filter order is proportional to the coefficient length: as the coefficient length increases, the filter order increases. A direct-form IIR filter often is sensitive to errors introduced by coefficient quantization and by computational precision limits. Also, as filter order increases, the filter becomes more unstable, so a filter with an initially stable design can become unstable with increasing coefficient length.

You can lessen the sensitivity of a filter to error by writing Equation 3-12 as a ratio of z-transforms, which divides the direct-form transfer function into lower order sections, or filter stages. By factoring Equation 3-12 into second-order sections, the transfer function of the filter becomes a product of second-order filter functions, as shown in Equation 3-13.

    H(z) = Π_(k=1..Ns) ( b_0k + b_1k·z^(–1) + b_2k·z^(–2) ) / ( 1 + a_1k·z^(–1) + a_2k·z^(–2) )        (3-13)

where Ns = ⌊Na/2⌋ is the number of stages, ⌊Na/2⌋ is the largest integer ≤ Na/2, and Na ≥ Nb.

You can describe the filter structure defined by Equation 3-13 as a cascade of second-order filters. Figure 3-13 illustrates cascade filtering.

Figure 3-13. Stages of Cascade Filtering: x(i) → Stage 1 → Stage 2 → … → Stage Ns → y(i)

You implement each individual filter stage in Figure 3-13 with the direct-form II filter structure. Each kth stage has one input, one output, and two past internal states, or internal filter states, s_k[i – 1] and s_k[i – 2]. You use the direct-form II filter structure to implement each filter stage for the following reasons:
•   The direct-form II filter structure requires a minimum number of arithmetic operations.
•   The direct-form II filter structure requires a minimum number of delay elements, or internal filter states.
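Equation 3-11, with a_0 = 1, is the difference equation that `scipy.signal.lfilter` evaluates from the forward coefficients b and reverse coefficients a. The first-order coefficients below are hypothetical, chosen only to make the recursion easy to follow.

```python
import numpy as np
from scipy import signal

# Hypothetical first-order lowpass: y[i] = 0.1*x[i] + 0.9*y[i-1]
b = [0.1]          # forward coefficients b_j
a = [1.0, -0.9]    # reverse coefficients a_k, with a_0 = 1

x = np.ones(100)   # step input
y = signal.lfilter(b, a, x)

# Direct evaluation of Equation 3-11 gives the same output.
y_ref = np.zeros_like(x)
for i in range(len(x)):
    acc = b[0] * x[i]
    if i > 0:
        acc -= a[1] * y_ref[i - 1]
    y_ref[i] = acc
assert np.allclose(y, y_ref)

# The impulse response decays toward zero but never exactly reaches it,
# matching the "infinite response" description above.
impulse = np.zeros(50); impulse[0] = 1.0
h = signal.lfilter(b, a, impulse)
assert np.all(h > 0) and np.all(h[1:] < h[:-1])
```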
If n is the number of samples in the input sequence, the filtering operation proceeds as shown in the following equations.

    y_0[i] = x[i]
    s_k[i] = y_(k–1)[i] – a_1k·s_k[i – 1] – a_2k·s_k[i – 2]
    y_k[i] = b_0k·s_k[i] + b_1k·s_k[i – 1] + b_2k·s_k[i – 2]

for each sample i = 0, 1, 2, …, n – 1 and each stage k = 1, 2, …, Ns.

Second-Order Filtering

For lowpass and highpass filters, which have a single cutoff frequency, you can design second-order filter stages directly. The resulting IIR lowpass or highpass filter contains cascaded second-order filters. Each second-order filter stage has the following characteristics, where k is the second-order filter stage number, k = 1, 2, …, Ns, and Ns is the total number of second-order filter stages:
•   Each second-order filter stage has two reverse coefficients, (a_1k, a_2k).
•   The total number of reverse coefficients equals 2Ns.
•   Each second-order filter stage has three forward coefficients, (b_0k, b_1k, b_2k).
•   The total number of forward coefficients equals 3Ns.

In Signal Processing VIs with Reverse Coefficients and Forward Coefficients parameters, the Reverse Coefficients and Forward Coefficients arrays contain the coefficients for one second-order filter stage, followed by the coefficients for the next second-order filter stage, and so on. For example, an IIR filter with two second-order filter stages must have a total of four reverse coefficients and six forward coefficients, as shown in the following equations.

    Total number of reverse coefficients = 2Ns = 2 × 2 = 4
    Reverse Coefficients = {a_11, a_21, a_12, a_22}
    Total number of forward coefficients = 3Ns = 3 × 2 = 6
    Forward Coefficients = {b_01, b_11, b_21, b_02, b_12, b_22}
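The cascade of second-order stages corresponds to the second-order-sections (`sos`) representation in SciPy, where each row holds one stage's three forward and three reverse coefficients (with a_0 = 1). A sketch under an assumed 4th-order Butterworth lowpass design:

```python
import numpy as np
from scipy import signal

# Design a 4th-order lowpass directly as cascaded second-order stages.
sos = signal.butter(4, 0.2, btype="low", output="sos")
assert sos.shape == (2, 6)  # Ns = 2 stages; each row is [b0, b1, b2, a0, a1, a2]

x = np.random.default_rng(1).standard_normal(256)

# Filtering through the cascade matches running the stages one after another,
# each stage's output feeding the next stage's input.
y_cascade = signal.sosfilt(sos, x)
y_manual = x
for stage in sos:
    y_manual = signal.lfilter(stage[:3], stage[3:], y_manual)
assert np.allclose(y_cascade, y_manual)
```

The cascade form is preferred for exactly the reason the text gives: factoring into low-order sections keeps coefficient-quantization and round-off errors from destabilizing a high-order design.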
Fourth-Order Filtering

For bandpass and bandstop filters, which have two cutoff frequencies, fourth-order filter stages are a more direct form of filter design than second-order filter stages. IIR bandpass or bandstop filters resulting from fourth-order filter design contain cascaded fourth-order filters. Each fourth-order filter stage has the following characteristics, where k is the fourth-order filter stage number, k = 1, 2, …, Ns, and Ns = ⌊(Na + 1)/4⌋ is the total number of fourth-order filter stages:
•   Each fourth-order filter stage has four reverse coefficients, (a_1k, a_2k, a_3k, a_4k).
•   The total number of reverse coefficients equals 4Ns.
•   Each fourth-order filter stage has five forward coefficients, (b_0k, b_1k, b_2k, b_3k, b_4k).
•   The total number of forward coefficients equals 5Ns.

You implement cascade stages in fourth-order filtering in the same manner as in second-order filtering. The following equations show how the filtering operation for fourth-order stages proceeds.

    y_0[i] = x[i]
    s_k[i] = y_(k–1)[i] – a_1k·s_k[i – 1] – a_2k·s_k[i – 2] – a_3k·s_k[i – 3] – a_4k·s_k[i – 4]
    y_k[i] = b_0k·s_k[i] + b_1k·s_k[i – 1] + b_2k·s_k[i – 2] + b_3k·s_k[i – 3] + b_4k·s_k[i – 4]

where k = 1, 2, …, Ns.

IIR Filter Types

Digital IIR filter designs come from the classical analog designs and include the following filter types:
•   Butterworth filters
•   Chebyshev filters
•   Chebyshev II filters, also known as inverse Chebyshev and Type II Chebyshev filters
•   Elliptic filters, also known as Cauer filters
•   Bessel filters
The IIR filter designs differ in the sharpness of the transition between the passband and the stopband and in where they exhibit their various characteristics, whether in the passband or the stopband.

Minimizing Peak Error

The Chebyshev, Chebyshev II, and elliptic filters minimize peak error by accounting for the maximum tolerable error in their frequency response. The maximum tolerable error is the maximum absolute value of the difference between the ideal filter frequency response and the actual filter frequency response. The amount of ripple, in dB, allowed in the frequency response of the filter determines the maximum tolerable error. Depending on the type, the filter minimizes peak error in the passband, the stopband, or both.

Butterworth Filters

Butterworth filters have the following characteristics:
•   Smooth response at all frequencies
•   Monotonic decrease from the specified cutoff frequencies
•   Maximal flatness, with the ideal response of unity in the passband and zero in the stopband
•   Half-power frequency, or 3 dB down frequency, that corresponds to the specified cutoff frequencies

The advantage of Butterworth filters is their smooth, monotonically decreasing frequency response. Figure 3-14 shows the frequency response of a lowpass Butterworth filter.
Figure 3-14. Frequency Response of a Lowpass Butterworth Filter

As shown in Figure 3-14, after you specify the cutoff frequency of a Butterworth filter, LabVIEW sets the steepness of the transition proportional to the filter order. Higher order Butterworth filters approach the ideal lowpass filter response. However, Butterworth filters do not always provide a good approximation of the ideal filter response because of the slow rolloff between the passband and the stopband.

Chebyshev Filters

Chebyshev filters have the following characteristics:
•   Minimization of peak error in the passband
•   Equiripple magnitude response in the passband
•   Monotonically decreasing magnitude response in the stopband
•   Sharper rolloff than Butterworth filters

Compared to a Butterworth filter, a Chebyshev filter can achieve a sharper transition between the passband and the stopband with a lower order filter. The sharp transition between the passband and the stopband of a Chebyshev filter produces smaller absolute errors and faster execution speeds than a Butterworth filter.
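The claim that a Chebyshev filter meets a given transition specification with a lower order than a Butterworth filter can be checked with SciPy's order-estimation helpers. The specification values below (band edges normalized to Nyquist, 1 dB passband ripple, 40 dB stopband attenuation) are illustrative assumptions.

```python
from scipy import signal

# Same specification for both designs: passband edge 0.2 and stopband edge 0.3
# (normalized to Nyquist), at most 1 dB passband ripple, at least 40 dB
# stopband attenuation.
n_butter, _ = signal.buttord(0.2, 0.3, gpass=1, gstop=40)
n_cheby, _ = signal.cheb1ord(0.2, 0.3, gpass=1, gstop=40)

print(n_butter, n_cheby)
assert n_cheby < n_butter  # Chebyshev meets the spec with a lower order
```

The price of the lower order is the equiripple passband; the Butterworth design stays monotonic everywhere, which is why the text calls its response "smooth."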
Figure 3-15 shows the frequency response of a lowpass Chebyshev filter.

Figure 3-15. Frequency Response of a Lowpass Chebyshev Filter

In Figure 3-15, the maximum tolerable error constrains the equiripple response in the passband. Also, the sharp rolloff appears in the stopband.

Chebyshev II Filters

Chebyshev II filters have the following characteristics:
•   Minimization of peak error in the stopband
•   Equiripple magnitude response in the stopband
•   Monotonically decreasing magnitude response in the passband
•   Sharper rolloff than Butterworth filters

Chebyshev II filters are similar to Chebyshev filters. However, Chebyshev II filters differ from Chebyshev filters in the following ways:
•   Chebyshev II filters minimize peak error in the stopband instead of the passband.
•   Chebyshev II filters have an equiripple magnitude response in the stopband instead of the passband.
•   Chebyshev II filters have a monotonically decreasing magnitude response in the passband instead of the stopband.

Minimizing peak error in the stopband instead of the passband is an advantage of Chebyshev II filters over Chebyshev filters.
Figure 3-16 shows the frequency response of a lowpass Chebyshev II filter (orders 2, 3, and 5 shown).

Figure 3-16. Frequency Response of a Lowpass Chebyshev II Filter

In Figure 3-16, the maximum tolerable error constrains the equiripple response in the stopband. Also, the smooth monotonic response appears in the passband. Chebyshev II filters have the same advantage over Butterworth filters that Chebyshev filters have: a sharper transition between the passband and the stopband with a lower order filter, resulting in a smaller absolute error and faster execution speed.

Elliptic Filters

Elliptic filters have the following characteristics:
•   Minimization of peak error in the passband and the stopband
•   Equiripples in the passband and the stopband

Compared with the same order Butterworth or Chebyshev filters, elliptic filters provide the sharpest transition between the passband and the stopband, which accounts for their widespread use.
Figure 3-17 shows the frequency response of a lowpass elliptic filter (orders 2, 3, and 4 shown).

Figure 3-17. Frequency Response of a Lowpass Elliptic Filter

In Figure 3-17, the same maximum tolerable error constrains the ripple in both the passband and the stopband. Also, even low-order elliptic filters have a sharp transition edge.

Bessel Filters

Bessel filters have the following characteristics:
•   Maximally flat response in both magnitude and phase
•   Nearly linear-phase response in the passband

You can use Bessel filters to reduce the nonlinear-phase distortion inherent in all IIR filters. High-order IIR filters and IIR filters with a steep rolloff have a pronounced nonlinear-phase distortion, especially in the transition regions of the filters. You also can obtain linear-phase response with FIR filters. Figure 3-18 shows the magnitude response, and Figure 3-19 the phase response, of a lowpass Bessel filter.
Figure 3-18. Magnitude Response of a Lowpass Bessel Filter (orders 2, 5, and 10 shown)

In Figure 3-18, the magnitude is smooth and monotonically decreasing at all frequencies. Like Butterworth filters, Bessel filters require high-order filters to minimize peak error, which accounts for their limited use. Figure 3-19 shows the phase response of a lowpass Bessel filter.

Figure 3-19. Phase Response of a Lowpass Bessel Filter (orders 2, 5, and 10 shown)

In Figure 3-19, the phase monotonically decreases at all frequencies, and Figure 3-19 shows the nearly linear phase in the passband.
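The nearly linear passband phase of a Bessel filter is equivalent to a nearly constant group delay, which you can probe with SciPy. The order and cutoff below are assumptions for illustration only.

```python
import numpy as np
from scipy import signal

# 5th-order lowpass Bessel filter, cutoff at 0.2 x Nyquist (assumed values).
b, a = signal.bessel(5, 0.2, norm="phase")

# Group delay in samples across the band; nearly linear phase means the
# group delay varies little over the passband.
w, gd = signal.group_delay((b, a), w=512)
passband_gd = gd[w < 0.2 * np.pi]
assert np.all(np.isfinite(passband_gd))
assert np.all(passband_gd > 0)  # a well-defined, positive delay
```

Running the same check on a same-order elliptic design would show the group delay peaking sharply near the band edge, the "pronounced nonlinear-phase distortion" the text describes.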
Designing IIR Filters

When choosing an IIR filter for an application, you must know the response of the filter. Figure 3-20 shows the block diagram of a VI that returns the frequency response of an IIR filter.

Figure 3-20. Frequency Response of an IIR Filter

The VI in Figure 3-20 computes the frequency response of an IIR filter by following the same steps outlined in the Designing FIR Filters section of this chapter. Because the same mathematical theory applies to designing IIR and FIR filters, the block diagram in Figure 3-20 and the block diagram in Figure 3-7, which returns the frequency response of an FIR filter, share common design elements. The main difference between the two VIs is that the Case structure on the left side of Figure 3-20 specifies the IIR filter design and filter type instead of specifying only the filter type. Figure 3-21 shows the magnitude and phase responses of a bandpass elliptic IIR filter.
Figure 3-21. Magnitude and Phase Responses of a Bandpass Elliptic IIR Filter

In Figure 3-21, the phase information is clearly nonlinear. When deciding whether to use an IIR or FIR filter to process data, remember that IIR filters provide nonlinear phase information. Refer to the Comparing FIR and IIR Filters section and the Selecting a Digital Filter Design section of this chapter for information about the differences between FIR and IIR filters and about selecting an appropriate filter design.

IIR Filter Characteristics in LabVIEW

IIR filters in LabVIEW have the following characteristics:
•   IIR filter VIs interpret values at negative indexes in Equation 3-10 as zero the first time you call the VI.
•   The filter retains the internal filter state values when the filtering process finishes.
•   The number of elements in the filtered sequence equals the number of elements in the input sequence.
•   A transient response, or delay, proportional to the filter order occurs before the filter reaches a steady state. Refer to the Transient Response section of this chapter for information about the transient response.
Transient Response

The transient response occurs because the initial filter state is zero or has values at negative indexes. The duration of the transient response depends on the filter type. The duration of the transient response for lowpass and highpass filters equals the filter order.

    delay = order

The duration of the transient response for bandpass and bandstop filters equals twice the filter order.

    delay = 2 × order

You can eliminate the transient response on successive calls to an IIR filter VI by enabling state memory. To enable state memory for continuous filtering, wire a value of TRUE to the init/cont input of the IIR filter VI. Figure 3-22 shows the transient response and the steady state for an IIR filter, with the original and filtered signals overlaid.

Figure 3-22. Transient Response and Steady State for an IIR Filter
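Outside LabVIEW, the same state-memory idea appears as the `zi`/`zf` state arguments of `scipy.signal.lfilter`: carrying the final state of one block into the next call removes the block-boundary transient. A sketch with an assumed 4th-order lowpass:

```python
import numpy as np
from scipy import signal

b, a = signal.butter(4, 0.1)  # example 4th-order lowpass (assumed design)
x = np.random.default_rng(2).standard_normal(1000)

# Filter the whole signal in one call.
y_whole = signal.lfilter(b, a, x)

# Filter in two blocks, carrying the internal filter state between calls,
# the equivalent of wiring TRUE to init/cont after the first call.
zi = np.zeros(max(len(a), len(b)) - 1)
y1, zf = signal.lfilter(b, a, x[:500], zi=zi)
y2, _ = signal.lfilter(b, a, x[500:], zi=zf)

assert np.allclose(np.concatenate([y1, y2]), y_whole)
```

Without passing `zf` into the second call, the second block would restart from a zero state and exhibit a fresh transient at the block boundary.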
Comparing FIR and IIR Filters

Because designing digital filters involves making compromises to emphasize a desirable filter characteristic over a less desirable characteristic, comparing FIR and IIR filters can help guide you in selecting the appropriate filter design for a particular application. IIR filters can achieve the same level of attenuation as FIR filters but with far fewer coefficients. Therefore, an IIR filter can provide a significantly faster and more efficient filtering operation than an FIR filter. However, IIR filters provide a nonlinear-phase response. Use IIR filters for applications that do not require phase information, such as signal monitoring applications. You can design FIR filters to provide a linear-phase response. Use FIR filters for applications that require linear-phase responses. Refer to the Selecting a Digital Filter Design section of this chapter for more information about selecting a digital filter type.

Nonlinear Filters

Smoothing windows, IIR filters, and FIR filters are linear because they satisfy the superposition and proportionality principles, as shown in Equation 3-14.

    L{a·x(t) + b·y(t)} = a·L{x(t)} + b·L{y(t)}        (3-14)

where a and b are constants, x(t) and y(t) are signals, L{•} is a linear filtering operation, and inputs and outputs are related through the convolution operation, as shown in Equations 3-9 and 3-11.

A nonlinear filter does not satisfy Equation 3-14. Also, you cannot obtain the output signals of a nonlinear filter through the convolution operation because a set of coefficients cannot characterize the impulse response of the filter. Nonlinear filters provide specific filtering characteristics that are difficult to obtain using linear techniques.

The median filter, a nonlinear filter, combines lowpass filter characteristics and high-frequency characteristics, which preserves edge information. The lowpass filter characteristics allow the median filter to remove high-frequency noise. The high-frequency characteristics allow the median filter to detect edges.
Example: Analyzing a Noisy Pulse with a Median Filter

The Pulse Parameters VI analyzes an input sequence for a pulse pattern and determines the best set of pulse parameters that describes the pulse. After the VI completes modal analysis to determine the baseline and the top of the input sequence, the peak amplitude of the noise portion of the input sequence must be less than or equal to 50% of the expected pulse amplitude for the VI to accurately determine the pulse parameters. If the pulse is buried in noise whose expected peak amplitude exceeds 50% of the expected pulse amplitude, discriminating between noise and signal becomes difficult without more information. In some practical applications, a 50% pulse-to-noise ratio is difficult to achieve, and achieving the necessary pulse-to-noise ratio requires a preprocessing operation to extract pulse information.

You can use a lowpass filter to remove some of the unwanted noise. However, the filter also shifts the signal in time and smears the edges of the pulse because the transition edges contain high-frequency information. A median filter can extract the pulse more effectively than a lowpass filter because the median filter removes high-frequency noise while preserving edge information.

Figure 3-23 shows the block diagram of a VI that generates and analyzes a noisy pulse.

Figure 3-23. Using a Median Filter to Extract Pulse Information

The VI in Figure 3-23 generates a noisy pulse with an expected peak noise amplitude greater than 100% of the expected pulse amplitude. The signal the VI in Figure 3-23 generates has the following ideal pulse values:
•   Amplitude of 5.0 V
•   Delay of 64 samples
•   Width of 32 samples
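A comparable extraction can be sketched with `scipy.signal.medfilt` and a synthetic pulse matching the ideal values above. The uniform noise model and the 9-point kernel size are assumptions, not values from the manual.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
x = np.zeros(256)
x[64:96] = 5.0  # amplitude 5.0, delay 64 samples, width 32 samples

# Peak noise amplitude ~100% of the pulse amplitude, as in the VI's example.
noisy = x + rng.uniform(-5.0, 5.0, size=x.size)

filtered = signal.medfilt(noisy, kernel_size=9)  # 9-point median (assumed)

# The median filter suppresses the noise while keeping the pulse edges close
# to their original sample positions, so the pulse still stands out clearly.
top = filtered[70:90].mean()
baseline = np.concatenate([filtered[:50], filtered[110:]]).mean()
assert top - baseline > 2.5
```

A lowpass filter with comparable noise suppression would round off the edges at samples 64 and 96, which is exactly what the modal analysis in the Pulse Parameters VI needs to avoid.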
Even though noise obscures the pulse, you can remove the high-frequency noise with the Median Filter VI to achieve the 50% pulse-to-noise ratio the Pulse Parameters VI needs to complete the analysis accurately. Figure 3-24 shows the noisy pulse, the filtered pulse, and the estimated pulse parameters returned by the VI in Figure 3-23.

Figure 3-24. Noisy Pulse and Pulse Filtered with Median Filter

In Figure 3-24, you can track the pulse signal produced by the median filter.

Selecting a Digital Filter Design

Answer the following questions to select a filter for an application:
• Does the analysis require a linear-phase response?
• Can the analysis tolerate ripples?
• Does the analysis require a narrow transition band?

Use Figure 3-25 as a guideline for selecting the appropriate filter for an analysis application.
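The preprocessing step described above can also be sketched outside LabVIEW. The following Python sketch is a minimal illustration, not the Median Filter VI itself: the noise level, random seed, and rank-3 window length are made-up assumptions, while the pulse values (5.0 V amplitude, delay of 64 samples, width of 32 samples) follow the text.

```python
import random
import statistics

random.seed(0)  # deterministic illustration
# Ideal pulse from the text: 5.0 V amplitude, delay 64 samples, width 32 samples
pulse = [5.0 if 64 <= i < 96 else 0.0 for i in range(256)]
# Assumed uniform noise, milder than the 100% level the VI generates
noisy = [v + random.uniform(-2.5, 2.5) for v in pulse]

def median_filter(x, rank=3):
    """Replace each sample with the median of a (2*rank + 1)-sample
    window; the ends are padded by repeating the edge samples."""
    padded = [x[0]] * rank + x + [x[-1]] * rank
    return [statistics.median(padded[i:i + 2 * rank + 1])
            for i in range(len(x))]

filtered = median_filter(noisy)
# The filtered trace stays near 0 V outside the pulse and near 5 V
# inside it, and the transitions remain sharp.
print(min(filtered[70:90]), max(filtered[:32]))
```

Because the median of a window straddling an edge is dominated by whichever level the majority of its samples sit on, the transitions stay sharp instead of being smeared and time-shifted as they would be by a lowpass filter.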
However. you might need to experiment with several filter types to find the best type.Chapter 3 Digital Filtering Linear phase? Yes FIR Filter No Highorder Butterworth Filter Ripple acceptable? No Narrow transition band? No Loworder Butterworth Filter Inverse Chebyshev Filter Yes Yes Elliptic Filter Yes Narrowest possible transition region? No Ripple in Passband? Yes No Ripple in Stopband? Yes Multiband filter specifications? No Chebyshev Filter No Elliptic Filter Yes FIR filter Figure 325. LabVIEW Analysis Concepts 336 ni. Filter Flowchart Figure 325 can provide guidance for selecting an appropriate filter type.com .
4
Frequency Analysis

This chapter describes the fundamentals of the discrete Fourier transform (DFT), the fast Fourier transform (FFT), basic signal analysis computations, computations performed on the power spectrum, and how to use FFT-based functions for network measurement. Use the NI Example Finder to find examples of using the digital signal processing VIs and the measurement analysis VIs to perform FFT and frequency analysis.

Differences between Frequency Domain and Time Domain

The time-domain representation gives the amplitudes of the signal at the instants of time during which it was sampled. However, in many cases you need to know the frequency content of a signal rather than the amplitudes of the individual samples. Fourier's theorem states that any waveform in the time domain can be represented by the weighted sum of sines and cosines. You can generate any waveform by adding sine waves, each with a particular amplitude and phase. The same waveform then can be represented in the frequency domain as a pair of amplitude and phase values at each component frequency. Figure 4-1 shows the original waveform, labeled sum, and its component frequencies. The fundamental frequency is shown at the frequency f0, the second harmonic at frequency 2f0, and the third harmonic at frequency 3f0.
Figure 4-1. Signal Formed by Adding Three Frequency Components

The samples of a signal obtained from a DAQ device constitute the time-domain representation of the signal. The representation of a signal in terms of its individual frequency components is the frequency-domain representation of the signal. The frequency-domain representation might provide more insight about the signal and the system from which it was generated. In the frequency domain, you conceptually can separate the sine waves that add to form the complex time-domain signal. Figure 4-1 shows single frequency components, which spread out in the time domain, as distinct impulses in the frequency domain. The amplitude of each frequency line is the amplitude of the time waveform for that frequency component.

Some measurements, such as harmonic distortion, are difficult to quantify by inspecting the time waveform on an oscilloscope. When the same signal is displayed in the frequency domain by an FFT Analyzer, also known as a Dynamic Signal Analyzer, you easily can measure the harmonic frequencies and amplitudes.
Parseval's Relationship

Parseval's theorem states that the total energy computed in the time domain must equal the total energy computed in the frequency domain. It is a statement of conservation of energy. The following equation defines the continuous form of Parseval's relationship.

∫ from –∞ to ∞ of x(t)x(t) dt = ∫ from –∞ to ∞ of |X(f)|² df

The following equation defines the discrete form of Parseval's relationship.

Σ (i = 0 to n–1) |xi|² = (1/n) Σ (k = 0 to n–1) |Xk|²    (4-1)

where xi ⇔ Xk is a discrete FFT pair and n is the number of elements in the sequence.

Figure 4-2 shows the block diagram of a VI that demonstrates Parseval's relationship.

Figure 4-2. VI Demonstrating Parseval's Theorem

The VI in Figure 4-2 produces a real input sequence. The upper branch on the block diagram computes the energy of the time-domain signal using the left side of Equation 4-1. The lower branch on the block diagram converts the time-domain signal to the frequency domain and computes the energy of the frequency-domain signal using the right side of Equation 4-1.
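The discrete form of Parseval's relationship in Equation 4-1 is easy to check numerically. The following Python sketch uses a naive DFT written from Equation 4-3 and an arbitrary made-up test sequence; it is an illustration, not the VI from Figure 4-2.

```python
import cmath

def dft(x):
    """Naive DFT: X[k] = sum over i of x[i] * exp(-j*2*pi*i*k/n)."""
    n = len(x)
    return [sum(x[i] * cmath.exp(-2j * cmath.pi * i * k / n)
                for i in range(n)) for k in range(n)]

x = [1.0, 2.0, -0.5, 3.0, 0.25, -1.0]   # arbitrary real test sequence
X = dft(x)

time_energy = sum(v * v for v in x)                   # left side of Equation 4-1
freq_energy = sum(abs(c) ** 2 for c in X) / len(x)    # right side of Equation 4-1

print(time_energy, freq_energy)   # the two energies agree
```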
Figure 4-3 shows the results returned by the VI in Figure 4-2.

Figure 4-3. Results from Parseval VI

In Figure 4-3, the total computed energy in the time domain equals the total computed energy in the frequency domain.

Fourier Transform

The Fourier transform provides a method for examining a relationship in terms of the frequency domain. The most common applications of the Fourier transform are the analysis of linear time-invariant systems and spectral analysis. A Fourier transform pair consists of the signal representation in both the time and frequency domain. The following relationship commonly denotes a Fourier transform pair.

x(t) ⇔ X(f)

The following equation defines the two-sided Fourier transform.

X(f) = F{x(t)} = ∫ from –∞ to ∞ of x(t)e^(–j2πft) dt

The following equation defines the two-sided inverse Fourier transform.

x(t) = F⁻¹{X(f)} = ∫ from –∞ to ∞ of X(f)e^(j2πft) df

Two-sided means that the mathematical implementation of the forward and inverse Fourier transform considers all negative and positive frequencies and time of the signal. Single-sided means that the mathematical implementation of the transforms considers only the positive frequencies and time history of the signal.
Discrete Fourier Transform (DFT)

The algorithm used to transform samples of the data from the time domain into the frequency domain is the discrete Fourier transform (DFT). The DFT establishes the relationship between the samples of a signal in the time domain and their representation in the frequency domain. The DFT is widely used in the fields of spectral analysis, applied mechanics, acoustics, medical imaging, numerical analysis, instrumentation, and telecommunications. Figure 4-4 illustrates using the DFT to transform data from the time domain into the frequency domain.

Figure 4-4. Relationship between N Samples in the Frequency and Time Domains

Suppose you obtained N samples of a signal from a DAQ device. If a signal is sampled at a given sampling rate, Equation 4-2 defines the time interval between the samples, or the sampling interval.

∆t = 1/fs    (4-2)

where ∆t is the sampling interval and fs is the sampling rate in samples per second (S/s). The sampling interval, together with the number of samples, determines the smallest frequency that the system can resolve through the DFT or related routines. If you apply the DFT to N samples of this time-domain representation of the signal, the result also is of length N samples, but the information it contains is of the frequency-domain representation.
Equation 4-3 defines the DFT.

X[k] = Σ (i = 0 to N–1) x[i]e^(–j2πik/N)  for k = 0, 1, 2, …, N – 1    (4-3)

where x[i] is the time-domain representation of the sample signal, N is the total number of samples, and X[k] is the frequency-domain representation of the sample signal. Both the time domain x and the frequency domain X have a total of N samples. Similar to the time spacing of ∆t between the samples of x in the time domain, you have a frequency spacing, or frequency resolution, between the components of X in the frequency domain, which Equation 4-4 defines.

∆f = fs/N = 1/(N∆t)    (4-4)

where ∆f is the frequency resolution, fs is the sampling rate, N is the number of samples, ∆t is the sampling interval, and N∆t is the total acquisition time, which is the time duration of the acquired samples. To improve the frequency resolution, that is, to decrease ∆f, you must increase N and keep fs constant or decrease fs and keep N constant. Both approaches are equivalent to increasing N∆t.

Example of Calculating the DFT

This section provides an example of using Equation 4-3 to calculate the DFT for a DC signal. This example uses the following assumptions:
• The DC signal has a constant amplitude of +1 V.
• The number of samples is four.
• Each of the samples has a value of +1, as shown in Figure 4-5.
• X[0] corresponds to the DC component, or the average value, of the signal.

The resulting time sequence for the four samples is given by the following equation.

x[0] = x[1] = x[2] = x[3] = 1
Figure 4-5. Time Sequence for DFT Samples

The DFT calculation makes use of Euler's identity, which is given by the following equation.

e^(–jθ) = cos(θ) – jsin(θ)

If you use Equation 4-3 to calculate the DFT of the sequence shown in Figure 4-5 and use Euler's identity, you get the following equations.

X[0] = Σ (i = 0 to N–1) xi e^(–j2πi·0/N) = x[0] + x[1] + x[2] + x[3] = 4

X[1] = x[0] + x[1](cos(π/2) – jsin(π/2)) + x[2](cos(π) – jsin(π)) + x[3](cos(3π/2) – jsin(3π/2)) = (1 – j – 1 + j) = 0

X[2] = x[0] + x[1](cos(π) – jsin(π)) + x[2](cos(2π) – jsin(2π)) + x[3](cos(3π) – jsin(3π)) = (1 – 1 + 1 – 1) = 0

X[3] = x[0] + x[1](cos(3π/2) – jsin(3π/2)) + x[2](cos(3π) – jsin(3π)) + x[3](cos(9π/2) – jsin(9π/2)) = (1 + j – 1 – j) = 0

where X[0] is the DC component and N is the number of samples.
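The hand calculation above can be reproduced numerically. The following Python sketch evaluates Equation 4-3 with a naive DFT for the four-sample DC sequence; only X[0] is nonzero.

```python
import cmath

def dft(x):
    """Equation 4-3: X[k] = sum over i of x[i] * exp(-j*2*pi*i*k/N)."""
    n = len(x)
    return [sum(x[i] * cmath.exp(-2j * cmath.pi * i * k / n)
                for i in range(n)) for k in range(n)]

# Four samples of a +1 V DC signal, as in Figure 4-5
X = dft([1.0, 1.0, 1.0, 1.0])
print([round(abs(c), 9) for c in X])   # [4.0, 0.0, 0.0, 0.0]
```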
Therefore, except for the DC component X[0], all other values of the DFT for the sequence shown in Figure 4-5 are zero. However, the calculated value of X[0] depends on the value of N. Because in this example N = 4, X[0] = 4. If N = 10, the calculation results in X[0] = 10. This dependency of X[ ] on N also occurs for the other frequency components. Therefore, you usually divide the DFT output by N to obtain the correct magnitude of the frequency component.

Magnitude and Phase Information

N samples of the input signal result in N samples of the DFT. That is, the number of samples in both the time and frequency representations is the same. Equation 4-3 shows that regardless of whether the input signal x[i] is real or complex, X[k] is always complex, although the imaginary part may be zero. Therefore, every frequency component has a magnitude and phase. The magnitude is the square root of the sum of the squares of the real and imaginary parts. The phase is the arctangent of the ratio of the imaginary and real parts and is usually between π and –π radians, or 180 and –180 degrees. Normally the magnitude of the spectrum is displayed. The phase is relative to the start of the time record or relative to a single-cycle cosine wave starting at the beginning of the time record. Single-channel phase measurements are stable only if the input signal is triggered. Dual-channel phase measurements compute phase differences between channels, so if the channels are sampled simultaneously, triggering usually is not necessary.

For real signals (x[i] real), such as those you obtain from the output of one channel of a DAQ device, the DFT is symmetric with properties given by the following equations.

|X[k]| = |X[N – k]|

phase(X[k]) = –phase(X[N – k])

The magnitude of X[k] is even symmetric, and phase(X[k]) is odd symmetric. An even symmetric signal is symmetric about the y-axis, and an odd symmetric signal is symmetric about the origin. Figure 4-6 illustrates even and odd symmetry.
Figure 4-6. Signal Symmetry

Because of this symmetry, the N samples of the DFT contain repetition of information. Because of this repetition of information, only half of the samples of the DFT actually need to be computed or displayed, because you can obtain the other half from this repetition. If the input signal is complex, the DFT is asymmetrical, and you cannot use only half of the samples to obtain the other half.

Frequency Spacing between DFT Samples

If the sampling interval is ∆t seconds and the first data sample (k = 0) is at 0 seconds, the kth data sample, where k > 0 and is an integer, is at k∆t seconds. Similarly, if the frequency resolution is ∆f Hz, the kth sample of the DFT occurs at a frequency of k∆f Hz. However, this is valid for only up to the first half of the frequency components. The other half represent negative frequency components. Depending on whether the number of samples N is even or odd, you can have a different interpretation of the frequency corresponding to the kth sample of the DFT. For example, let N = 8 and let p represent the index of the Nyquist frequency, p = N/2 = 4. Table 4-1 shows the ∆f to which each element of the complex output sequence X corresponds.
Table 4-1. X[p] for N = 8

X[p]    ∆f
X[0]    DC
X[1]    ∆f
X[2]    2∆f
X[3]    3∆f
X[4]    4∆f (Nyquist frequency)
X[5]    –3∆f
X[6]    –2∆f
X[7]    –∆f

The negative entries in the second column beyond the Nyquist frequency, that is, those elements with an index value > p, represent negative frequencies. For N = 8, X[4] is at the Nyquist frequency. X[1] and X[7] have the same magnitude, X[2] and X[6] have the same magnitude, and X[3] and X[5] have the same magnitude; the difference is that X[1], X[2], and X[3] correspond to positive frequency components while X[5], X[6], and X[7] correspond to negative frequency components. Figure 4-7 illustrates the complex output sequence X for N = 8.
Figure 4-7. Complex Output Sequence X for N = 8

A representation where you see the positive and negative frequencies is the two-sided transform. Table 4-2 lists the values of ∆f for X[p] when N = 7 and p = (N – 1)/2 = (7 – 1)/2 = 3.

Table 4-2. X[p] for N = 7

X[p]    ∆f
X[0]    DC
X[1]    ∆f
X[2]    2∆f
X[3]    3∆f
X[4]    –3∆f
X[5]    –2∆f
X[6]    –∆f

For N = 7, X[1], X[2], and X[3] correspond to positive frequencies, while X[4], X[5], and X[6] correspond to negative frequencies. Because N is odd, there is no component at the Nyquist frequency. X[1] and X[6] have the same magnitude, X[2] and X[5] have the same magnitude, and X[3] and X[4] have the same magnitude.
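The bin-to-frequency mapping in Tables 4-1 and 4-2 can be written as a small helper. The following Python sketch reproduces both tables; the 8 kHz and 7 kHz sampling rates are made-up values chosen so that ∆f works out to 1 kHz.

```python
def bin_frequencies(n, fs):
    """Map each DFT bin index k to its frequency in Hz.

    Bins 0 through n//2 run from DC up through the positive frequencies
    (including the Nyquist bin when n is even); the remaining bins wrap
    around to negative frequencies, as in Tables 4-1 and 4-2.
    """
    df = fs / n   # frequency resolution (Equation 4-4)
    return [k * df if k <= n // 2 else (k - n) * df for k in range(n)]

print(bin_frequencies(8, 8000))   # n even: Nyquist bin at +4000 Hz
print(bin_frequencies(7, 7000))   # n odd: no Nyquist bin
```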
Figure 4-8 illustrates the complex output sequence X[p] for N = 7.

Figure 4-8. Complex Output Sequence X[p] for N = 7

Figure 4-8 also shows a two-sided transform because it represents the positive and negative frequencies.

FFT Fundamentals

Directly implementing the DFT on N data samples requires approximately N² complex operations and is a time-consuming process. The FFT is a fast algorithm for calculating the DFT. The following equation defines the DFT.

X(k) = Σ (n = 0 to N–1) x(n)e^(–j2πnk/N)

The following measurements comprise the basic functions for FFT-based signal analysis:
• FFT
• Power spectrum
• Cross power spectrum

You can use the basic functions as the building blocks for creating additional measurement functions, such as the frequency response, impulse response, coherence, amplitude spectrum, and phase spectrum.
The FFT and the power spectrum are useful for measuring the frequency content of stationary or transient signals. The FFT produces the average frequency content of a signal over the total acquisition. Therefore, use the FFT for stationary signal analysis or in cases where you need only the average energy at each frequency line. Refer to the Power Spectrum section of this chapter for more information about the power spectrum.

The use of the FFT for frequency analysis implies two important relationships. The first relationship links the highest frequency that can be analyzed to the sampling frequency and is given by the following equation.

Fmax = fs/2

where Fmax is the highest frequency that can be analyzed and fs is the sampling frequency. Refer to the Windowing section of this chapter for more information about Fmax.

Computing Frequency Components

Each frequency component is the result of a dot product of the time-domain signal with the complex exponential at that frequency and is given by the following equation.

X(k) = Σ (n = 0 to N–1) x(n)e^(–j2πnk/N) = Σ (n = 0 to N–1) x(n)[cos(2πnk/N) – jsin(2πnk/N)]

The DC component is the dot product of x(n) with [cos(0) – jsin(0)], or with 1.0. The first bin, or frequency component, is the dot product of x(n) with cos(2πn/N) – jsin(2πn/N). Here, cos(2πn/N) is a single cycle of the cosine wave, and sin(2πn/N) is a single cycle of the sine wave. In general, bin k is the dot product of x(n) with k cycles of the cosine wave for the real part of X(k) and with k cycles of the sine wave for the imaginary part of X(k). An FFT is equivalent to a set of parallel filters of bandwidth ∆f centered at each frequency increment from DC to (fs/2) – (fs/N). Therefore, frequency lines also are known as frequency bins or FFT bins.
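The dot-product view of bin k can be demonstrated directly. In the following Python sketch, a 16-sample cosine at bin 3 is an illustrative choice; all of the signal's energy appears in bin 3, whose raw magnitude is N/2 before the division by N discussed earlier.

```python
import math

def bin_component(x, k):
    """Dot product of x with k cycles of cosine (real part) and
    k cycles of sine (imaginary part), per the equation above."""
    n = len(x)
    real = sum(x[i] * math.cos(2 * math.pi * i * k / n) for i in range(n))
    imag = -sum(x[i] * math.sin(2 * math.pi * i * k / n) for i in range(n))
    return complex(real, imag)

n = 16
x = [math.cos(2 * math.pi * 3 * i / n) for i in range(n)]  # 3 cycles of cosine

print(abs(bin_component(x, 3)))   # N/2 = 8: the cosine lines up with bin 3
print(abs(bin_component(x, 0)))   # ~0: the signal has no DC content
```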
The second relationship links the frequency resolution to the total acquisition time, which is related to the sampling frequency and the block size of the FFT, and is given by the following equation.

∆f = 1/T = fs/N

where ∆f is the frequency resolution, T is the acquisition time, fs is the sampling frequency, and N is the block size of the FFT.

Fast FFT Sizes

When the size of the input sequence is a power of two, N = 2^m for m = 1, 2, 3, …, you can implement the computation of the DFT with approximately N log2(N) operations, which makes the calculation of the DFT much faster. DSP literature refers to the algorithms for faster DFT calculation as fast Fourier transforms (FFTs). When the size of the input sequence is not a power of two but is factorable as the product of small prime numbers, the FFT-based VIs use a mixed radix Cooley-Tukey algorithm to efficiently compute the DFT of the input sequence. Equation 4-5 defines an input sequence size N as the product of small prime numbers.

N = (2^m)(3^k)(5^j)  for m, k, j = 0, 1, 2, 3, …    (4-5)

For the input sequence size defined by Equation 4-5, the FFT-based VIs can compute the DFT with speeds comparable to an FFT whose input sequence size is a power of two. Common input sequence sizes that are factorable as the product of small prime numbers include 480, 640, 1,000, and 2,000.

Zero Padding

Zero padding is a technique typically employed to make the size of the input sequence equal to a power of two. In zero padding, you add zeros to the end of the input sequence so that the total number of samples is equal to the next higher power of two. For example, if you have 10 samples of a signal, you can add six zeros to make the total number of samples equal to 16, or 2⁴, which is a power of two. Figure 4-9 illustrates padding 10 samples of a signal with zeros to make the total number of samples equal 16.
Common input sequence sizes that are a power of two include 512, 1,024, and 2,048.
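Padding to the next power of two, as described above, takes only a few lines. The following Python sketch uses the 10-sample case from the text; the helper name and the sample values are illustrative assumptions.

```python
def zero_pad(x):
    """Append zeros until the length reaches the next power of two."""
    n = 1
    while n < len(x):
        n *= 2
    return x + [0.0] * (n - len(x))

# 10 made-up samples, padded with six zeros to reach 16 = 2**4
samples = [0.5, 1.2, -0.3, 0.8, 1.0, -1.1, 0.2, 0.9, -0.4, 0.6]
padded = zero_pad(samples)
print(len(padded))   # 16
```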
Figure 4-9. Zero Padding

The addition of zeros to the end of the time-domain waveform does not improve the underlying frequency resolution associated with the time-domain signal. The only way to improve the frequency resolution of the time-domain signal is to increase the acquisition time and acquire longer time records. However, in addition to making the total number of samples a power of two so that faster computation is made possible by using the FFT, zero padding can lead to an interpolated FFT result, which can produce a higher display resolution.

FFT VI

The polymorphic FFT VI computes the FFT of a signal and has two instances, Real FFT and Complex FFT. The difference between the two instances is that the Real FFT instance computes the FFT of a real-valued signal, whereas the Complex FFT instance computes the FFT of a complex-valued signal. However, the outputs of both instances are complex. Most real-world signals are real valued. Therefore, you can use the Real FFT instance for most applications. You also can use the Complex FFT instance by setting the imaginary part of the signal to zero. An example of an application where you use the Complex FFT instance is when the signal consists of both a real and an imaginary component. A signal consisting of a real and an imaginary component occurs frequently
in the field of telecommunications, where you modulate a waveform by a complex exponential. The process of modulation by a complex exponential results in a complex signal, as shown in Figure 4-10.
x(t) → [Modulation by e^(–jωt)] → y(t) = x(t)cos(ωt) – jx(t)sin(ωt)

Figure 4-10. Modulation by a Complex Exponential
Displaying Frequency Information from Transforms
The discrete implementation of the Fourier transform maps a digital signal into its Fourier series coefficients, or harmonics. Unfortunately, neither a time nor a frequency stamp is directly associated with the FFT operation. Therefore, you must specify the sampling interval ∆t. Because an acquired array of samples represents a progression of equally spaced samples in time, you can determine the corresponding frequency in hertz. The following equation gives the sampling frequency fs for ∆t.

fs = 1/∆t

Figure 4-11 shows the block diagram of a VI that properly displays frequency information given the sampling interval 1.000E–3 and returns the value for the frequency interval ∆f.
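Given the sampling interval, the frequency axis for a spectrum display follows directly. The following Python sketch uses the 1.000E–3 s interval from the text; the 1,024-sample block size is an assumed value for illustration.

```python
dt = 1.000e-3   # sampling interval from the text, in seconds
n = 1024        # assumed block size for this sketch

fs = 1.0 / dt   # sampling frequency: fs = 1/dt, here 1000 Hz
df = fs / n     # frequency interval between bins: fs/N
freq_axis = [k * df for k in range(n // 2 + 1)]   # DC up to the Nyquist frequency

print(fs, df, freq_axis[-1])
```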
Figure 4-11. Correctly Displaying Frequency Information
Figure 4-12 shows the display and ∆f that the VI in Figure 4-11 returns.
Figure 4-12. Properly Displayed Frequency Information
Two other common ways of presenting frequency information are displaying the DC component in the center and displaying one-sided spectrums. Refer to the Two-Sided, DC-Centered FFT section of this chapter for information about displaying the DC component in the center. Refer to the Power Spectrum section of this chapter for information about displaying one-sided spectrums.
Two-Sided, DC-Centered FFT
The two-sided, DC-centered FFT provides a method for displaying a spectrum with both positive and negative frequencies. Most introductory textbooks that discuss the Fourier transform and its properties present a table of two-sided Fourier transform pairs. You can use the frequency-shifting property of the Fourier transform to obtain a two-sided, DC-centered representation. In a two-sided, DC-centered FFT, the DC component is in the middle of the buffer.
Mathematical Representation of a Two-Sided, DC-Centered FFT
If x(t) ⇔ X(f) is a Fourier transform pair, then

x(t)e^(j2πf0t) ⇔ X(f – f0)

Let

∆t = 1/fs

where fs is the sampling frequency in the discrete representation of the time signal. Set f0 to the index corresponding to the Nyquist component fN, as shown in the following equation.

f0 = fN = fs/2 = 1/(2∆t)

f0 is set to the index corresponding to fN because causing the DC component to appear in the location of the Nyquist component requires a frequency shift equal to fN. Setting f0 to the index corresponding to fN results in the discrete Fourier transform pair shown in the following relationship.

xi e^(jiπ) ⇔ X(k – n/2)
where n is the number of elements in the discrete sequence, xi is the time-domain sequence, and Xk is the frequency-domain representation of xi. Expanding the exponential term in the time-domain sequence produces Equation 4-6.

e^(jiπ) = cos(iπ) + jsin(iπ) = +1 if i is even, –1 if i is odd    (4-6)
Equation 4-6 represents a sequence of alternating +1 and –1 values. Equation 4-6 means that negating the odd-indexed elements of the original time-domain sequence and performing an FFT on the new sequence produces a spectrum whose DC component appears in the center of the sequence.
Therefore, if the original input sequence is

X = {x0, x1, x2, x3, …, xn – 1}

then the sequence

Y = {x0, –x1, x2, –x3, …, xn – 1}    (4-7)

generates a DC-centered spectrum.
Creating a Two-Sided, DC-Centered FFT
You can modulate a signal by the Nyquist frequency in place, without extra buffers. Figure 4-13 shows the block diagram of the Nyquist Shift VI, located in labview\examples\analysis\dspxmpl.llb, which generates the sequence shown in Equation 4-7.
Figure 4-13. Block Diagram of the Nyquist Shift VI
In Figure 4-13, the For Loop iterates through the input sequence, alternately multiplying array elements by 1.0 and –1.0, until it processes the entire input array. Figure 4-14 shows the block diagram of a VI that generates a time-domain sequence and uses the Nyquist Shift and Power Spectrum VIs to produce a DC-centered spectrum.
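The alternating multiplication the For Loop performs can be mimicked in a few lines of Python. In this sketch, a naive DFT stands in for the FFT, and an all-ones DC signal is the illustrative input: after negating the odd-indexed elements per Equation 4-7, the DC energy appears at the center bin n/2.

```python
import cmath

def dft(x):
    """Naive DFT used to stand in for the FFT in this sketch."""
    n = len(x)
    return [sum(x[i] * cmath.exp(-2j * cmath.pi * i * k / n)
                for i in range(n)) for k in range(n)]

def nyquist_shift(x):
    """Negate the odd-indexed elements (Equation 4-7): equivalent to
    multiplying by exp(j*i*pi), which shifts the spectrum by fs/2."""
    return [v if i % 2 == 0 else -v for i, v in enumerate(x)]

n = 8
x = [1.0] * n   # DC-only signal: all spectral energy starts in bin 0
centered = [abs(c) for c in dft(nyquist_shift(x))]
print(centered.index(max(centered)))   # the DC energy now sits at bin n/2 = 4
```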
Figure 4-14. Generating a Time-Domain Sequence and a DC-Centered Spectrum
In the VI in Figure 4-14, the Nyquist Shift VI preprocesses the time-domain sequence by negating every other element in the sequence. The Power Spectrum VI transforms the data into the frequency domain. To display the frequency axis of the processed data correctly, you must supply x0, which is the x-axis value of the initial frequency bin. For a DC-centered spectrum, the following equation computes x0.

x0 = –n/2

Figure 4-15 shows the time-domain sequence and DC-centered spectrum the VI in Figure 4-14 returns.
Figure 4-15. Raw Time-Domain Sequence and DC-Centered Spectrum

In the DC-centered spectrum display in Figure 4-15, the DC component appears in the center of the display at f = 0. The overall format resembles that commonly found in tables of Fourier transform pairs. You can create DC-centered spectra for even-sized input sequences by negating the odd elements of the input sequence. You cannot create DC-centered spectra by directly negating the odd elements of an input time-domain sequence containing an odd number of elements, because the Nyquist frequency appears between two frequency bins. To create DC-centered spectra for odd-sized input sequences, you must rotate the FFT arrays by the amount given in the following relationship.

(n – 1)/2

For a DC-centered spectrum created from an odd-sized input sequence, the following equation computes x0.

x0 = –(n – 1)/2

Power Spectrum

As described in the Magnitude and Phase Information section of this chapter, the DFT or FFT of a real signal is a complex number, having a real and an imaginary part. You can obtain the power in each frequency component represented by the DFT or FFT by squaring the magnitude of that frequency component. Thus, the power in the kth frequency component, that is, the kth element of the DFT or FFT, is given by the following equation.

power = |X[k]|²

where |X[k]| is the magnitude of the frequency component. Refer to the Magnitude and Phase Information section of this chapter for information about computing the magnitude of the frequency components.

The power spectrum returns an array that contains the two-sided power spectrum of a time-domain signal and that shows the power in each of the frequency components. The values of the elements in the power spectrum array are proportional to the magnitude squared of each frequency component making up the time-domain signal. You can use Equation 4-8 to compute the two-sided power spectrum from the FFT.

Power Spectrum SAA(f) = FFT(A) × FFT*(A)/N²    (4-8)

where FFT*(A) denotes the complex conjugate of FFT(A). The complex conjugate of FFT(A) results from negating the imaginary part of FFT(A).

Because the DFT or FFT of a real signal is symmetric, the power at a positive frequency of k∆f is the same as the power at the corresponding negative frequency of –k∆f, excluding the DC and Nyquist components. The total power in the DC component is |X[0]|². The total power in the Nyquist component is |X[N/2]|².
A plot of the two-sided power spectrum shows negative and positive frequency components at a height given by the following relationship.

Ak²/4

where Ak is the peak amplitude of the sinusoidal component at frequency k. The DC component has a height of A0², where A0 is the amplitude of the DC component in the signal. A two-sided power spectrum displays half the energy at the positive frequency and half the energy at the negative frequency.

Figure 4-16 shows the power spectrum result from a time-domain signal that consists of a 3 Vrms sine wave at 128 Hz, a 3 Vrms sine wave at 256 Hz, and a DC component of 2 VDC. A 3 Vrms sine wave has a peak voltage of 3.0 × √2, or about 4.2426 V. The power spectrum is computed from the basic FFT function, as shown in Equation 4-8.

Figure 4-16. Two-Sided Power Spectrum of Signal

Converting a Two-Sided Power Spectrum to a Single-Sided Power Spectrum

Most frequency analysis instruments display only the positive half of the frequency spectrum because the spectrum of a real-world signal is symmetrical around DC. Thus, the negative frequency information is redundant. The two-sided results from the analysis functions include the positive half of the spectrum followed by the negative half of the spectrum. Therefore, to convert a two-sided spectrum to a single-sided spectrum, you discard the
second half of the array and multiply every point except for DC by two, as shown in the following equations.

GAA(i) = SAA(i),  i = 0 (DC)

GAA(i) = 2SAA(i),  i = 1 to (N/2) – 1

where SAA(i) is the two-sided power spectrum, GAA(i) is the single-sided power spectrum, and N is the length of the two-sided power spectrum. You discard the remainder of the two-sided power spectrum SAA, N/2 through N – 1.

The non-DC values in the single-sided spectrum have a height given by the following relationship.

Ak²/2    (4-9)

Equation 4-9 is equivalent to the following relationship.

(Ak/√2)²

where Ak/√2 is the root mean square (rms) amplitude of the sinusoidal component at frequency k. The units of a power spectrum are often quantity squared rms, where quantity is the unit of the time-domain signal. For example, the single-sided power spectrum of a voltage waveform is in volts rms squared, V²rms.

Figure 4-17 shows the single-sided spectrum of the signal whose two-sided spectrum Figure 4-16 shows.
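The two-sided spectrum and its single-sided conversion can be exercised together. The following Python sketch is an illustration, not the Power Spectrum VI: a naive DFT stands in for the FFT, the N² normalization makes a full-scale sine read Ak²/4 per side, and the 64-sample test signal (a 2 V DC offset plus a 1 V-peak sine landing exactly on bin 8) is a made-up example. The conversion then yields A0² = 4 for DC and Ak²/2 = 0.5 for the sine.

```python
import cmath
import math

def two_sided_power_spectrum(x):
    """SAA = FFT(A) * conj(FFT(A)) / N**2, computed with a naive DFT."""
    n = len(x)
    X = [sum(x[i] * cmath.exp(-2j * cmath.pi * i * k / n)
             for i in range(n)) for k in range(n)]
    return [(c * c.conjugate()).real / n ** 2 for c in X]

def to_single_sided(saa):
    """Keep DC as-is, double bins 1 to N/2 - 1, discard the rest."""
    n = len(saa)
    return [saa[0]] + [2.0 * s for s in saa[1:n // 2]]

n = 64
# 2 V DC offset plus a 1 V-peak sine with exactly 8 cycles in the record
x = [2.0 + math.sin(2 * math.pi * 8 * i / n) for i in range(n)]
gaa = to_single_sided(two_sided_power_spectrum(x))
print(gaa[0], gaa[8])   # DC height A0**2 = 4; sine height Ak**2/2 = 0.5
```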
Figure 4-17. Single-Sided Power Spectrum

In Figure 4-17, the height of the non-DC frequency components is twice the height of the non-DC frequency components in Figure 4-16. Also, the spectrum in Figure 4-17 stops at half the frequency of that in Figure 4-16.

Loss of Phase Information

Because the power is obtained by squaring the magnitude of the DFT or FFT, the power spectrum is always real. The disadvantage of obtaining the power by squaring the magnitude of the DFT or FFT is that the phase information is lost. If you want phase information, you must use the DFT or FFT, which gives you a complex output. You can use the power spectrum in applications where phase information is not necessary, such as calculating the harmonic power in a signal. You can apply a sinusoidal input to a nonlinear system and see the power in the harmonics at the system output.

Computations on the Spectrum

When you have the amplitude or power spectrum, you can compute several useful characteristics of the input signal, such as power and frequency, noise level, and power spectral density.

Estimating Power and Frequency

If a frequency component is between two frequency lines, the frequency component appears as energy spread among adjacent frequency lines with reduced amplitude. The actual peak is between the two frequency lines. You can estimate the actual frequency of a discrete frequency component to a greater resolution than the ∆f given by the FFT by performing a
weighted average of the frequencies around a detected peak in the power spectrum, as shown in the following equation.

Estimated Frequency = [ Σ(i = j–3 to j+3) Power(i) · (i·∆f) ] / [ Σ(i = j–3 to j+3) Power(i) ]

where j is the array index of the apparent peak of the frequency of interest.

You can estimate the power in V_rms² of a discrete peak frequency component by summing the power in the bins around the peak. In other words, you compute the area under the peak. You can use the following equation to estimate the power of a discrete peak frequency component.

Estimated Power = [ Σ(i = j–3 to j+3) Power(i) ] / (noise power bandwidth of window)   (4-10)

Equation 4-10 is valid only for a spectrum made up of discrete frequency components. It is not valid for a continuous spectrum. The span j ± 3 is reasonable because it represents a spread wider than the main lobes of the smoothing windows listed in Table 5-3, Correction Factors and Worst-Case Amplitude Errors for Smoothing Windows, of Chapter 5, Smoothing Windows.

If two peaks are within six lines of each other, they contribute to inflating the estimated powers and skewing the actual frequencies. You can reduce this effect by decreasing the number of lines spanned by Equation 4-10. However, if two or more frequency peaks are within six lines of each other, it is likely that they are already interfering with one another because of spectral leakage.

If you want the total power in a given frequency range, sum the power in each bin included in the frequency range and divide by the noise power bandwidth of the smoothing window. Refer to Chapter 5, Smoothing Windows, for information about the noise power bandwidth of smoothing windows.
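The two estimates above can be sketched directly from Equation 4-10 and the weighted-average formula. The function name, parameter names, and the bin values in the demo are ours, chosen only for illustration.

```python
def estimate_peak(power, j, df, enbw_bins):
    """Estimate the power and frequency of a discrete component from
    bins j-3 .. j+3 around the apparent peak, per Equation 4-10.
    enbw_bins is the noise power bandwidth of the smoothing window
    in bins (1.0 for the uniform window, 1.5 for Hanning)."""
    span = range(j - 3, j + 4)
    total = sum(power[i] for i in span)
    est_power = total / enbw_bins                            # area under the peak
    est_freq = sum(power[i] * i * df for i in span) / total  # weighted average
    return est_power, est_freq

# Illustrative made-up bin powers: a tone leaked around bin 5
power = [0.0] * 12
power[4], power[5], power[6] = 0.25, 0.5, 0.25
p, f = estimate_peak(power, 5, df=1.0, enbw_bins=1.0)
print(p, f)  # -> 1.0 5.0
```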
Computing Noise Level and Power Spectral Density

The measurement of noise levels depends on the bandwidth of the measurement. When looking at the noise floor of a power spectrum, you are looking at the narrowband noise level in each FFT bin. Therefore, the noise floor of a given power spectrum depends on the ∆f of the spectrum, which is in turn controlled by the sampling rate and the number of points in the data set. In other words, the noise level at each frequency line is equivalent to the noise level obtained using a ∆f Hz filter centered at that frequency line. Therefore, for a given sampling rate, doubling the number of data points acquired reduces the noise power that appears in each bin by 3 dB. Discrete frequency components theoretically have zero bandwidth and therefore do not scale with the number of points or frequency range of the FFT.

To compute the signal-to-noise ratio (SNR), compare the peak power in the frequencies of interest to the broadband noise level. Compute the broadband noise level in V_rms² by summing all the power spectrum bins, excluding any peaks and the DC component, and dividing the sum by the equivalent noise bandwidth of the window.

Because of noise-level scaling with ∆f, spectra for noise measurement often are displayed in a normalized format called power or amplitude spectral density. The power or amplitude spectral density normalizes the power or amplitude spectrum to the spectrum measured by a 1 Hz-wide square filter, a convention for noise-level measurements. The level at each frequency line is equivalent to the level obtained using a 1 Hz filter centered at that frequency line.

You can use the following equation to compute the power spectral density.

Power Spectral Density = Power Spectrum in V_rms² / (∆f × Noise Power Bandwidth of Window)

You can use the following equation to compute the amplitude spectral density.

Amplitude Spectral Density = Amplitude Spectrum in V_rms / √(∆f × Noise Power Bandwidth of Window)

The spectral density format is appropriate for random or noise signals. It is not appropriate for discrete frequency components because discrete frequency components theoretically have zero bandwidth.
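The two normalizations above amount to a per-bin division; a minimal sketch follows, with function and parameter names of our choosing.

```python
import math

def power_spectral_density(power_vrms2, df, enbw_bins):
    """PSD = power spectrum / (df * window noise power bandwidth),
    in Vrms^2/Hz, i.e. normalized to a 1 Hz-wide square filter."""
    return [p / (df * enbw_bins) for p in power_vrms2]

def amplitude_spectral_density(amp_vrms, df, enbw_bins):
    """ASD = amplitude spectrum / sqrt(df * noise power bandwidth),
    in Vrms/sqrt(Hz); numerically the square root of the PSD."""
    return [a / math.sqrt(df * enbw_bins) for a in amp_vrms]

# With df = 4 Hz and a uniform window (noise bandwidth of 1.0 bin),
# 0.8 Vrms^2 of noise power in one bin corresponds to 0.2 Vrms^2/Hz
print(power_spectral_density([0.8], 4.0, 1.0)[0])  # -> 0.2
```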
Computing the Amplitude and Phase Spectrums

The power spectrum shows power as the mean squared amplitude at each frequency line but includes no phase information. Because the power spectrum loses phase information, you might want to use the FFT to view both the frequency and the phase information of a signal.

The phase information the FFT provides is the phase relative to the start of the time-domain signal. Therefore, you must trigger from the same point in the signal to obtain consistent phase readings. A sine wave shows a phase of –90° at the sine wave frequency. A cosine wave shows a 0° phase. Usually, the primary area of interest for analysis applications is either the relative phases between components or the phase difference between two signals acquired simultaneously. You can view the phase difference between two signals by using some of the advanced FFT functions. Refer to the Frequency Response and Network Analysis section of this chapter for information about the advanced FFT functions.

The FFT produces a two-sided spectrum in complex form with real and imaginary parts. You must scale and convert the two-sided spectrum to polar form to obtain magnitude and phase. The frequency axis of the polar form is identical to the frequency axis of the two-sided power spectrum. The amplitude of the FFT is related to the number of points in the time-domain signal. Use the following equations to compute the amplitude and phase versus frequency from the FFT.

Amplitude spectrum in quantity peak = Magnitude[FFT(A)] / N = √( real[FFT(A)]² + imag[FFT(A)]² ) / N   (4-11)

Phase spectrum in radians = Phase[FFT(A)] = arctangent( imag[FFT(A)] / real[FFT(A)] )   (4-12)

where the arctangent function returns values of phase between –π and +π, a full range of 2π radians.

The rectangular-to-polar conversion function operates on the scaled complex spectrum defined by the following relationship.

FFT(A) / N   (4-13)
The two-sided amplitude spectrum actually shows half the peak amplitude at the positive and negative frequencies. To convert to the single-sided form, multiply each frequency, other than DC, by two and discard the second half of the array. The units of the single-sided amplitude spectrum are then in quantity peak and give the peak amplitude of each sinusoidal component making up the time-domain signal. To obtain the single-sided phase spectrum, discard the second half of the array.

The amplitude spectrum is closely related to the power spectrum. You can compute the single-sided power spectrum by squaring the single-sided rms amplitude spectrum. Conversely, you can compute the amplitude spectrum by taking the square root of the power spectrum. Refer to the Power Spectrum section of this chapter for information about computing the power spectrum.

Calculating Amplitude in Vrms and Phase in Degrees

To view the amplitude spectrum in volts rms (Vrms), you can calculate the rms amplitude spectrum directly from the two-sided amplitude spectrum by multiplying the non-DC components by the square root of two and discarding the second half of the array. The following equations show the entire computation from a two-sided FFT to a single-sided amplitude spectrum.

Amplitude Spectrum in V_rms = √2 × Magnitude[FFT(A)] / N, for i = 1 to (N/2) – 1

Amplitude Spectrum in V_rms = Magnitude[FFT(A)] / N, for i = 0 (DC)

where i is the frequency line number, or array index, of the FFT of A. The magnitude in Vrms gives the rms voltage of each sinusoidal component of the time-domain signal. Because you multiply the non-DC components by two to convert from the two-sided amplitude spectrum to the single-sided amplitude spectrum, you can
divide the non-DC components by the square root of two after converting the spectrum to the single-sided form.

Using the rectangular-to-polar conversion function to convert the complex spectrum to its magnitude (r) and phase (φ) is equivalent to using Equations 4-11 and 4-12.
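The scaling rules above can be sketched as follows. This is an illustrative sketch, not NI's implementation: a naive DFT stands in for the FFT VIs, and the function names are ours.

```python
import cmath, math

def dft(x):
    """Naive DFT for illustration; a real FFT would be used in practice."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def amplitude_spectrum_vrms(x):
    """Single-sided amplitude spectrum in Vrms:
    Magnitude[FFT(A)]/N at DC, sqrt(2)*Magnitude[FFT(A)]/N otherwise."""
    N = len(x)
    X = dft(x)
    m = [abs(X[i]) / N for i in range(N // 2)]
    return [m[0]] + [math.sqrt(2) * v for v in m[1:]]

# A 2 V peak sine at bin 8 of a 64-point record: 2/sqrt(2) Vrms
N = 64
sig = [2.0 * math.sin(2 * math.pi * 8 * n / N) for n in range(N)]
A = amplitude_spectrum_vrms(sig)
print(round(A[8], 3))  # -> 1.414
# Phase at the tone, in degrees: a sine shows -90, a cosine shows 0
phase_deg = 180 / math.pi * cmath.phase(dft(sig)[8])
print(round(phase_deg))  # -> -90
```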
Use the following equation to view the phase spectrum in degrees.

Phase Spectrum in Degrees = (180 / π) × Phase[FFT(A)]

Dual-Channel Frequency Analysis

When analyzing two simultaneously sampled channels, you usually want to know the differences between the two channels rather than the properties of each. The frequency response of a system is described by the magnitude, H, and phase, ∠H, at each frequency. The gain of the system is the same as its magnitude and is the ratio of the output magnitude to the input magnitude at each frequency. The phase of the system is the difference of the output phase and input phase at each frequency.

In a typical dual-channel analyzer, the instantaneous spectrum is computed using a window function and the FFT for each channel, as shown in Figure 4-18. The averaged FFT spectrum, auto power spectrum, and cross power spectrum are computed and used in estimating the frequency response function. You also can use the coherence function to check the validity of the frequency response function.

Figure 4-18. Dual-Channel Frequency Analysis (Ch A and Ch B: Time, Window, FFT, Averaged Auto Spectrum; jointly: Averaged Cross Spectrum, Frequency Response Function, and Coherence)
Cross Power Spectrum

The cross power spectrum is not typically used as a direct measurement but is an important building block for other measurements. Use the following equation to compute the two-sided cross power spectrum of two time-domain signals A and B.

Cross Power Spectrum S_AB(f) = ( FFT(B) × FFT*(A) ) / N²

The cross power spectrum is in two-sided complex form, having real and imaginary parts. To convert the cross power spectrum to magnitude and phase, use the rectangular-to-polar conversion function from Equation 4-13. To convert the cross power spectrum to a single-sided form, use the methods and equations from the Converting a Two-Sided Power Spectrum to a Single-Sided Power Spectrum section of this chapter. The single-sided cross power spectrum yields the product of the rms amplitudes of the two signals A and B and the phase difference between them. The units of the single-sided cross power spectrum are in quantity rms squared, for example V_rms².

The power spectrum is equivalent to the cross power spectrum when signals A and B are the same signal. Therefore, the power spectrum is often referred to as the auto power spectrum or the auto spectrum.

Frequency Response and Network Analysis

You can use the following functions to characterize the frequency response of a network:
•	Frequency response function
•	Impulse response function
•	Coherence function
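The two-sided cross power spectrum defined above can be sketched in a few lines. As before, this is an illustrative sketch with names of our choosing, and a naive DFT stands in for an optimized FFT.

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def cross_power_spectrum(a, b):
    """Two-sided cross power spectrum S_AB(f) = FFT(B) x FFT*(A) / N^2."""
    N = len(a)
    A, B = dft(a), dft(b)
    return [B[k] * A[k].conjugate() / N ** 2 for k in range(N)]

# Two 1 Vrms tones at bin 4, with B lagging A by 90 degrees: the phase
# of S_AB at bin 4 is the phase of B relative to A
N = 32
a = [math.sqrt(2) * math.cos(2 * math.pi * 4 * n / N) for n in range(N)]
b = [math.sqrt(2) * math.cos(2 * math.pi * 4 * n / N - math.pi / 2) for n in range(N)]
S = cross_power_spectrum(a, b)
print(round(math.degrees(cmath.phase(S[4]))))  # -> -90
```

The magnitude of S_AB at the tone's bin, after folding to single-sided form, gives the product of the two rms amplitudes.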
Frequency Response Function

The frequency response function gives the gain and phase versus frequency of a network. Figure 4-19 illustrates the method for measuring the frequency response of a network.

Figure 4-19. Configuration for Network Analysis (Applied Stimulus into the Network Under Test; Measured Stimulus (A); Measured Response (B))

In Figure 4-19, you apply a stimulus to the network under test and measure the stimulus and response signals. From the measured stimulus and response signals, you compute the frequency response function. You use Equation 4-14 to compute the response function.

H(f) = S_AB(f) / S_AA(f)   (4-14)

where H(f) is the response function, A is the stimulus signal, B is the response signal, S_AB(f) is the cross power spectrum of A and B, and S_AA(f) is the power spectrum of A.

The frequency response function is in two-sided complex form, having real and imaginary parts. To convert to the frequency response gain and the frequency response phase, use the rectangular-to-polar conversion function from Equation 4-13. To convert to single-sided form, discard the second half of the response function array.

You might want to take several frequency response function readings and compute the average. Complete the following steps to compute the average frequency response function.

1.	Compute the average S_AB(f) by finding the sum in the complex form and dividing the sum by the number of measurements.
2.	Compute the average S_AA(f) by finding the sum and dividing the sum by the number of measurements.
3.	Substitute the average S_AB(f) and the average S_AA(f) in Equation 4-14.
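The averaging steps above can be sketched as follows. This is a hedged illustration, not NI's implementation: the naive DFT, the function names, and the toy "network" in the demo are ours.

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def frequency_response(stim_records, resp_records):
    """H(f) = <S_AB(f)> / <S_AA(f)> (Equation 4-14): the cross spectrum
    is summed in complex form over all measurements before dividing,
    matching steps 1-3 above (constant 1/M factors cancel)."""
    N = len(stim_records[0])
    sab = [0j] * N
    saa = [0.0] * N
    for a, b in zip(stim_records, resp_records):
        A, B = dft(a), dft(b)
        for k in range(N):
            sab[k] += B[k] * A[k].conjugate() / N ** 2
            saa[k] += abs(A[k]) ** 2 / N ** 2
    return [sab[k] / saa[k] if saa[k] else 0j for k in range(N)]

# A hypothetical network that simply doubles its input: gain 2, phase 0
N = 16
stim = [[math.cos(2 * math.pi * 2 * n / N) for n in range(N)] for _ in range(3)]
resp = [[2 * s for s in rec] for rec in stim]
H = frequency_response(stim, resp)
print(round(abs(H[2]), 6))  # -> 2.0
```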
Impulse Response Function

The impulse response function of a network is the time-domain representation of the frequency response function of the network. The impulse response function is the output time-domain signal generated by applying an impulse to the network at time t = 0. To compute the impulse response of the network, perform an inverse FFT on the two-sided complex frequency response function from Equation 4-14. To compute the average impulse response, perform an inverse FFT on the average frequency response function.

Coherence Function

The coherence function provides an indication of the quality of the frequency response function measurement and of how much of the response energy is correlated to the stimulus energy. If there is another signal present in the response, either from excessive noise or from another signal, the quality of the network response measurement is poor. You can use the coherence function to identify both excessive noise and which of the multiple signal sources are contributing to the response signal. Use Equation 4-15 to compute the coherence function.

γ²(f) = ( Magnitude of the Average S_AB(f) )² / ( (Average S_AA(f)) × (Average S_BB(f)) )   (4-15)

where S_AB is the cross power spectrum, S_AA is the power spectrum of A, and S_BB is the power spectrum of B.

Equation 4-15 yields a coherence factor with a value between zero and one versus frequency. A value of zero for a given frequency line indicates no correlation between the response and the stimulus signal. A value of one for a given frequency line indicates that 100% of the response energy is due to the stimulus signal and that no interference is occurring at that frequency.

For a valid result, the coherence function requires an average of two or more readings of the stimulus and response signals. For only one reading, the coherence function registers unity at all frequencies.
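Equation 4-15 can be sketched directly; the names and the synthetic records below are ours, and the naive DFT again stands in for an FFT.

```python
import cmath, random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def coherence(stim_records, resp_records):
    """gamma^2(f) = |<S_AB>|^2 / (<S_AA> <S_BB>) per Equation 4-15.
    The 1/N^2 scale factors cancel, so raw FFT products are summed."""
    N = len(stim_records[0])
    sab, saa, sbb = [0j] * N, [0.0] * N, [0.0] * N
    for a, b in zip(stim_records, resp_records):
        A, B = dft(a), dft(b)
        for k in range(N):
            sab[k] += B[k] * A[k].conjugate()
            saa[k] += abs(A[k]) ** 2
            sbb[k] += abs(B[k]) ** 2
    return [abs(sab[k]) ** 2 / (saa[k] * sbb[k]) for k in range(N)]

# Response fully caused by the stimulus: coherence of 1 at every line
random.seed(0)
stim = [[random.gauss(0, 1) for _ in range(16)] for _ in range(4)]
resp = [[3 * s for s in rec] for rec in stim]
coh = coherence(stim, resp)
print(round(coh[3], 6))  # -> 1.0
```

Adding uncorrelated noise to the response records drops the coherence below one at the affected lines.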
Windowing

In practical applications, you obtain only a finite number of samples of the signal. The FFT assumes that this time record repeats. The FFT spectrum models the time domain as if the time record repeated itself forever; it assumes that the analyzed record is just one period of an infinitely repeating periodic signal.

A signal that is exactly periodic in the time record is composed of sine waves with exact integral cycles within the time record. Such a perfectly periodic signal has a spectrum with energy contained in exact frequency bins, and the repetition is smooth at the boundaries. However, in practical applications, you usually have a nonintegral number of cycles. In the case of a nonintegral number of cycles, the repetition results in discontinuities at the boundaries. These artificial discontinuities were not originally present in your signal and result in a smearing or leakage of energy from your actual frequency to all other frequencies. This phenomenon is spectral leakage. A signal that is not periodic in the time record has a spectrum with energy split or spread across multiple frequency bins.

The amount of leakage depends on the amplitude of the discontinuity, with a larger amplitude causing more leakage. Because the amount of leakage is dependent on the amplitude of the discontinuity at the boundaries, you can use windowing to reduce the size of the discontinuity and reduce spectral leakage. Windowing consists of multiplying the time-domain signal by another time-domain waveform, known as a window, whose amplitude tapers gradually and smoothly toward zero at the edges. The result is a windowed signal with very small or no discontinuities and therefore reduced spectral leakage.

You can choose from among many different types of windows. The one you choose depends on your application and some prior knowledge of the signal you are analyzing. Refer to Chapter 5, Smoothing Windows, for more information about windowing.
Averaging to Improve the Measurement

Averaging successive measurements usually improves measurement accuracy. Averaging usually is performed on measurement results or on individual spectra but not directly on the time record. You can choose from among the following common averaging modes:
•	RMS averaging
•	Vector averaging
•	Peak hold

RMS Averaging

RMS averaging reduces signal fluctuations but not the noise floor. The noise floor is not reduced because RMS averaging averages the energy, or power, of the signal. RMS averaging also causes averaged RMS quantities of single-channel measurements to have zero phase. RMS averaging for dual-channel measurements preserves important phase information. RMS-averaged measurements are computed according to the following equations.

FFT spectrum:	√⟨X* • X⟩
power spectrum:	⟨X* • X⟩
cross spectrum:	⟨X* • Y⟩
frequency response:	H1 = ⟨X* • Y⟩ / ⟨X* • X⟩
	H2 = ⟨Y* • Y⟩ / ⟨Y* • X⟩
	H3 = (H1 + H2) / 2

where X is the complex FFT of signal x (stimulus), Y is the complex FFT of signal y (response), X* is the complex conjugate of X, Y* is the complex conjugate of Y, and ⟨X⟩ is the average of X, real and imaginary parts being averaged separately.
Vector Averaging

Vector averaging eliminates noise from synchronous signals. Vector averaging computes the average of complex quantities directly, reducing noise but usually requiring a trigger. The real part is averaged separately from the imaginary part. Averaging the real part separately from the imaginary part can reduce the noise floor for random signals because random signals are not phase coherent from one time record to the next.

FFT spectrum:	⟨X⟩
power spectrum:	⟨X*⟩ • ⟨X⟩
cross spectrum:	⟨X*⟩ • ⟨Y⟩
frequency response:	⟨Y⟩ / ⟨X⟩   (H1 = H2 = H3)

where X is the complex FFT of signal x (stimulus), Y is the complex FFT of signal y (response), X* is the complex conjugate of X, and ⟨X⟩ is the average of X, real and imaginary parts being averaged separately.

Peak Hold

Peak hold averaging retains the peak levels of the averaged quantities from one FFT record to the next. Peak hold averaging is performed at each frequency line separately.

FFT spectrum:	√MAX(X* • X)
power spectrum:	MAX(X* • X)

where X is the complex FFT of signal x (stimulus) and X* is the complex conjugate of X.
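The contrast between RMS and vector averaging can be sketched on a synthetic triggered measurement. All signal parameters here are made up for illustration, and the naive DFT stands in for an FFT.

```python
import cmath, math, random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A synchronous 1 Vrms tone at bin 4 buried in random noise,
# averaged over 100 triggered records
random.seed(1)
N, M = 32, 100
rms_avg = [0.0] * N   # <X* . X>: averages energy, keeps the noise floor
vec_avg = [0j] * N    # <X>: complex average, needs a trigger
for _ in range(M):
    x = [math.sqrt(2) * math.cos(2 * math.pi * 4 * n / N) + random.gauss(0, 1)
         for n in range(N)]
    X = dft(x)
    for k in range(N):
        rms_avg[k] += abs(X[k]) ** 2 / (N * N * M)
        vec_avg[k] += X[k] / (N * M)

# The non-phase-coherent noise cancels in the complex average
print(abs(vec_avg[10]) ** 2 < rms_avg[10])  # -> True
```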
Weighting

When performing RMS or vector averaging, you can weight each new spectral record using either linear or exponential weighting. Linear weighting combines N spectral records with equal weighting. When the number of averages is completed, the analyzer stops averaging and presents the averaged results. Exponential weighting emphasizes new spectral data more than old and is a continuous process. Weighting is applied according to the following equation.

Y_i = ((N – 1) / N) · Y_(i–1) + X_i / N

where X_i is the result of the analysis performed on the ith block, Y_i is the result of the averaging process from X_1 to X_i, N = i for linear weighting, and N is a constant for exponential weighting (N = 1 for i = 1).

Echo Detection

Echo detection using Hilbert transforms is a common measurement for the analysis of modulation systems. Equation 4-16 describes a time-domain signal. Equation 4-17 yields the Hilbert transform of the time-domain signal.

x(t) = A e^(–t/τ) cos(2π f₀ t)   (4-16)

x_H(t) = –A e^(–t/τ) sin(2π f₀ t)   (4-17)

where A is the amplitude, f₀ is the natural resonant frequency, and τ is the time decay constant.

Equation 4-18 yields the natural logarithm of the magnitude of the analytic signal x_A(t).

ln |x_A(t)| = ln |x(t) + j x_H(t)| = –t/τ + ln A   (4-18)
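The weighting recursion above can be sketched in a few lines; the function and parameter names are ours.

```python
def weighted_average(records, mode="linear", n_exp=4):
    """Apply Y_i = ((N-1)/N) * Y_(i-1) + X_i / N, with N = i for
    linear weighting, or N fixed at n_exp (and N = 1 when i = 1)
    for exponential weighting."""
    y = 0.0
    for i, x in enumerate(records, start=1):
        N = i if mode == "linear" else (1 if i == 1 else n_exp)
        y = (N - 1) / N * y + x / N
    return y

print(weighted_average([2.0, 4.0, 6.0]))                    # -> 4.0
print(weighted_average([2.0, 4.0, 6.0], "exponential", 2))  # -> 4.5
```

With linear weighting the recursion reproduces the plain mean; with exponential weighting the most recent record carries more weight, so the result is pulled toward 6.0.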
The result from Equation 4-18 has the form of a line with slope m = –1/τ. Therefore, you can extract the time constant of the system by graphing ln|x_A(t)|.

Figure 4-20 shows a time-domain signal containing an echo signal.

Figure 4-20. Echo Signal

The following conditions make the echo signal difficult to locate in Figure 4-20:
•	The time delay between the source and the echo signal is short relative to the time decay constant of the system.
•	The echo amplitude is small compared to the source.

You can make the echo signal visible by plotting the magnitude of x_A(t) on a logarithmic scale, as shown in Figure 4-21.

Figure 4-21. Echogram of the Magnitude of x_A(t)
In Figure 4-21, the discontinuity is plainly visible and indicates the location of the time delay of the echo.

Figure 4-22 shows a section of the block diagram of the VI used to produce Figures 4-20 and 4-21.

Figure 4-22. Echo Detector Block Diagram

The VI in Figure 4-22 completes the following steps to detect an echo.

1.	Processes the input signal with the Fast Hilbert Transform VI to produce the analytic signal x_A(t).
2.	Computes the magnitude of x_A(t) with the 1D Rectangular To Polar VI.
3.	Computes the natural log of the magnitude of x_A(t) to detect the presence of an echo.
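The three steps above can be approximated in plain code. This is one common FFT-based construction of the analytic signal, not NI's Fast Hilbert Transform implementation, and the function names are ours.

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def analytic_signal(x):
    """x_A(t) = x(t) + j*x_H(t): zero the negative-frequency half of
    the FFT, double the positive half, and inverse-transform
    (N must be even in this sketch)."""
    N = len(x)
    X = dft(x)
    H = [X[0]] + [2 * X[k] for k in range(1, N // 2)] + [X[N // 2]] \
        + [0j] * (N // 2 - 1)
    return [sum(H[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# For a pure cosine the analytic signal is a unit-magnitude phasor, so
# ln|x_A(t)| is flat; an echo would instead appear as a jump in ln|x_A(t)|
N = 32
xa = analytic_signal([math.cos(2 * math.pi * 4 * n / N) for n in range(N)])
log_env = [math.log(abs(v)) for v in xa]   # step 3: the echogram trace
print(round(abs(xa[5]), 6))  # -> 1.0
```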
Smoothing Windows

5

This chapter describes spectral leakage, how to use smoothing windows to decrease spectral leakage, the different types of smoothing windows, how to choose the correct type of smoothing window, the differences between smoothing windows used for spectral analysis and smoothing windows used for filter coefficient design, and the importance of scaling smoothing windows.

Applying a smoothing window to a signal is windowing. The Windows VIs provide a simple method of improving the spectral characteristics of a sampled signal. You can use windowing to complete the following analysis operations:
•	Define the duration of the observation.
•	Reduce spectral leakage.
•	Separate a small amplitude signal from a larger amplitude signal with frequencies very close to each other.
•	Design FIR filter coefficients.

Use the NI Example Finder to find examples of using the Windows VIs.

Spectral Leakage

According to the Shannon Sampling Theorem, you can completely reconstruct a continuous-time signal from discrete, equally spaced samples if the highest frequency in the time signal is less than half the sampling frequency. Half the sampling frequency equals the Nyquist frequency. The Shannon Sampling Theorem bridges the gap between continuous-time signals and digital-time signals. Refer to Chapter 1, Introduction to Digital Signal Processing and Analysis in LabVIEW, for more information about the Shannon Sampling Theorem.

In practical, signal-sampling applications, digitizing a time signal results in a finite record of the signal, even when you carefully observe the Shannon Sampling Theorem and sampling conditions. Even when the data meets the Nyquist criterion, the finite sampling record might cause energy leakage, called spectral leakage. Therefore, even though you use proper signal
acquisition techniques, the measurement might not result in a scaled, single-sided spectrum because of spectral leakage. In spectral leakage, the energy at one frequency appears to leak out into all other frequencies.

Spectral leakage results from an assumption in the FFT and DFT algorithms that the time record exactly repeats throughout all time. Thus, signals in a time record are periodic at intervals that correspond to the length of the time record. When you use the FFT or DFT to measure the frequency content of data, the transforms assume that the finite data set is one period of a periodic signal. Therefore, the finiteness of the sampling record results in a truncated waveform with different spectral characteristics from the original continuous-time signal, and the finiteness can introduce sharp transition changes into the measured data. The sharp transitions are discontinuities. Figure 5-1 illustrates discontinuities.

Figure 5-1. Periodic Waveform Created from Sampled Period (one period shown, with a discontinuity at the boundary)

The discontinuities shown in Figure 5-1 produce leakage of spectral information. Spectral leakage produces a discrete-time spectrum that appears as a smeared version of the original continuous-time spectrum.

Sampling an Integer Number of Cycles

Spectral leakage occurs only when the sample data set consists of a noninteger number of cycles. Figure 5-2 shows a sine wave sampled at an integer number of cycles and the Fourier transform of the sine wave.
Figure 5-2. Sine Wave and Corresponding Fourier Transform

In Figure 5-2, Graph 1 shows the sampled time-domain waveform. Graph 2 shows the periodic time waveform of the sine wave from Graph 1. In Graph 2, the waveform repeats to fulfill the assumption of periodicity for the Fourier transform. The waveform in Graph 2 does not have any discontinuities because the data set is from an integer number of cycles, in this case one. Graph 3 shows the spectral representation of the waveform. Because the time record in Graph 2 is periodic with no discontinuities, its spectrum appears in Graph 3 as a single line showing the frequency of the sine wave.

Sampling a Noninteger Number of Cycles

You can acquire an integral number of cycles deliberately. The following methods are the only methods that guarantee you always acquire an integer number of cycles:
•	Sample synchronously with respect to the signal you measure.
•	Capture a transient signal that fits entirely into the time record.

Usually, however, an unknown signal you are measuring is a stationary signal. A stationary signal is present before, during, and after data acquisition. When measuring a stationary signal, you cannot guarantee that you are
sampling an integer number of cycles. Because of the assumption of periodicity of the waveform, artificial discontinuities between successive periods occur when you sample a noninteger number of cycles. The artificial discontinuities appear as very high frequencies in the spectrum of the signal, frequencies that are not present in the original signal. The high frequencies of the discontinuities can be much higher than the Nyquist frequency and alias somewhere between 0 and fs/2. Therefore, the spectrum you obtain by using the DFT or FFT is a smeared version of the spectrum and is not the actual spectrum of the original signal.

If the time record contains a noninteger number of cycles, spectral leakage occurs because the noninteger cycle frequency component of the signal does not correspond exactly to one of the spectrum frequency lines. Spectral leakage distorts the measurement in such a way that energy from a given frequency component appears to spread over adjacent frequency lines or bins. You can use smoothing windows to minimize the effects of performing an FFT over a noninteger number of cycles.

Figure 5-3 shows a sine wave sampled at a noninteger number of cycles and the Fourier transform of the sine wave.

Figure 5-3. Spectral Representation When Sampling a Noninteger Number of Samples
In Figure 5-3, Graph 1 consists of 1.25 cycles of the sine wave. In Graph 2, the waveform repeats periodically to fulfill the assumption of periodicity for the Fourier transform. Graph 3 shows the spectral representation of the waveform. The energy has leaked out of one of the FFT lines and smeared itself into all the other lines. The energy is spread, or smeared, over a wide range of frequencies, causing spectral leakage.

Spectral leakage occurs because of the finite time record of the input signal. To overcome spectral leakage, you can take an infinite time record, from –infinity to +infinity. With an infinite time record, the FFT calculates one single line at the correct frequency. However, waiting for infinite time is not possible in practice. To overcome the limitations of a finite time record, you can use smoothing windows to minimize the discontinuities of truncated waveforms, thus reducing spectral leakage.

In addition to causing amplitude accuracy errors, spectral leakage can obscure adjacent frequency peaks. Figure 5-4 shows the spectrum for two close frequency components when no smoothing window is used and when a Hanning window is used.

Figure 5-4. Spectral Leakage Obscuring Adjacent Frequency Components (dBV versus Hz, no window and Hann window traces)

In Figure 5-4, the second peak stands out more prominently in the windowed signal than it does in the signal with no smoothing window applied.

Windowing Signals

Use smoothing windows to improve the spectral characteristics of a sampled signal. When performing Fourier or spectral analysis on finite-length data, windowing is used to reduce the spectral leakage.
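The leakage-reduction effect can be demonstrated numerically. This sketch uses the periodic form of the Hanning window and a naive DFT; the bin choices are ours, picked only to compare a line far from the tone against the peak region.

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def hann(N):
    """Periodic Hanning window: 0.5 - 0.5 cos(2 pi n / N)."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]

# 4.5 cycles in a 64-point record: a noninteger number, so the tone
# leaks into every line of the unwindowed spectrum
N = 64
sig = [math.sin(2 * math.pi * 4.5 * n / N) for n in range(N)]
plain = dft(sig)
windowed = dft([s * w for s, w in zip(sig, hann(N))])

# Leakage far from the tone (bin 20), relative to the peak region (bin 4)
leak_plain = abs(plain[20]) / abs(plain[4])
leak_hann = abs(windowed[20]) / abs(windowed[4])
print(leak_hann < leak_plain)  # -> True
```

The Hanning window's rapidly falling side lobes suppress the smeared energy far from the tone, at the cost of a slightly wider main lobe.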
The amount of spectral leakage depends on the amplitude of the discontinuity. As the discontinuity becomes larger, spectral leakage increases. The process of windowing a signal involves multiplying the time record by a smoothing window of finite length whose amplitude varies smoothly and gradually toward zero at the edges. The length, or time interval, of a smoothing window is defined in terms of number of samples. Smoothing windows reduce the amplitude of the discontinuities at the boundaries of each period and act like predefined, narrowband, lowpass filters.

Windowing changes the shape of the signal in the time domain, as well as affecting the spectrum that you see. Multiplication in the time domain is equivalent to convolution in the frequency domain, and vice versa. Therefore, the spectrum of the windowed signal is a convolution of the spectrum of the original signal with the spectrum of the smoothing window. Figure 5-5 illustrates convolving the original spectrum of a signal with the spectrum of a smoothing window.

Figure 5-5. Frequency Characteristics of a Windowed Spectrum (Signal Spectrum * Window Spectrum = Windowed Signal Spectrum)

Even if you do not apply a smoothing window to a signal, a windowing effect still occurs. The acquisition of a finite time record of an input signal produces the effect of multiplying the signal in the time domain by a uniform window. The uniform window has a rectangular shape and uniform height. The multiplication of the input signal in the time domain by the uniform window is equivalent to convolving the spectrum of the signal with
the spectrum of the uniform window in the frequency domain, which has a sinc function characteristic.

Applying a smoothing window to time-domain data before the transform of the data into the frequency domain minimizes spectral leakage. Figure 5-6 shows the result of applying a Hamming window to a time-domain signal.

Figure 5-6. Time Signal Windowed Using a Hamming Window

In Figure 5-6, the time waveform of the windowed signal gradually tapers to zero at the ends because the Hamming window minimizes the discontinuities along the transition edges of the waveform.

Figure 5-7 shows the effects of the following smoothing windows on a signal:
•	None (uniform)
•	Hanning
•	Flat top
Figure 5-7. Power Spectrum of 1 Vrms Signal at 256 Hz with Uniform, Hanning, and Flat Top Windows

The data set for the signal in Figure 5-7 consists of an integer number of cycles, 256, in a 1,024-point record. The graph shows the spectrum values between 240 Hz and 272 Hz. Figure 5-7 also shows the values at frequency lines of 254 Hz through 258 Hz for each smoothing window.

The smoothing windows have a main lobe around the frequency of interest. The main lobe is a frequency-domain characteristic of windows. The uniform window has the narrowest lobe. The Hanning and flat top windows introduce some spreading. The flat top window has a broader main lobe than the uniform or Hanning windows.

For an integer number of cycles, all smoothing windows yield the same peak amplitude reading and have excellent amplitude accuracy. The amplitude error at 256 Hz equals 0 dB for each smoothing window. If the frequency components of the original signal match a frequency line exactly, as is the case when you acquire an integer number of cycles, you see only the main lobe of the spectrum. Side lobes do not appear because the spectrum of the smoothing window approaches zero at ∆f intervals on either side of the main lobe. The actual values in the resulting spectrum
array for each smoothing window at 254 Hz through 258 Hz are shown below the graph. In Figures 5-7 and 5-8, ∆f equals 1 Hz.

If a time record does not contain an integer number of cycles, the continuous spectrum of the smoothing window shifts from the main lobe center at a fraction of ∆f that corresponds to the difference between the frequency component and the FFT line frequencies. This shift causes the side lobes to appear in the spectrum. In addition, amplitude error occurs at the frequency peak because sampling of the main lobe is off center, which smears the spectrum. Figure 5-8 shows the effect of spectral leakage on a signal whose data set consists of 256.5 cycles.

Figure 5-8. Power Spectrum of 1 Vrms Signal at 256.5 Hz with Uniform, Hanning, and Flat Top Windows

In Figure 5-8, for a noninteger number of cycles, the Hanning and flat top windows introduce much less spectral leakage than the uniform window. Also, the amplitude error is better with the Hanning and flat top windows. The flat top window demonstrates very good amplitude accuracy but has a wider spread and higher side lobes than the Hanning window.
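The integer-versus-noninteger cycle behavior described above can be reproduced numerically. The following sketch (plain Python with a direct DFT for clarity rather than speed; the 64-point record length, the bin choices, and the tolerances are illustrative assumptions of mine, not values from the text) compares a tone that falls exactly on a frequency line with one that falls halfway between lines:

```python
import math

def dft_mag(x):
    """Magnitude of the DFT of a real sequence, computed directly (O(N^2))."""
    N = len(x)
    mags = []
    for k in range(N):
        re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = -sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        mags.append(math.hypot(re, im))
    return mags

N = 64
# Tone at exactly 8 cycles per record: falls exactly on FFT line 8.
integer_tone = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]
# Tone at 8.5 cycles per record: falls between FFT lines 8 and 9.
half_tone = [math.sin(2 * math.pi * 8.5 * n / N) for n in range(N)]

mag_int = dft_mag(integer_tone)
mag_half = dft_mag(half_tone)

# Integer cycles: energy confined to bin 8 (and its mirror); other bins ~0.
# Noninteger cycles: energy leaks into bins well away from the tone.
leak_int = mag_int[12]    # a bin 4 lines away from the integer-cycle tone
leak_half = mag_half[12]  # same bin, 3.5 lines away from the 8.5-cycle tone
```

With the integer-cycle record, only the main lobe is sampled and the off-peak bin is numerically zero; with the half-cycle offset, the same bin picks up substantial leaked energy, as in Figures 5-7 and 5-8.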
Figure 5-9 shows the block diagram of a VI that measures the windowed and nonwindowed spectrums of a signal composed of the sum of two sinusoids.

Figure 5-9. Measuring the Spectrum of a Signal Composed of the Sum of Two Sinusoids

Figure 5-10 shows the amplitudes and frequencies of the two sinusoids and the measurement results. The frequencies shown are in units of cycles.
Figure 5-10. Windowed and Nonwindowed Spectrums of the Sum of Two Sinusoids

In Figure 5-10, the nonwindowed spectrum shows leakage that is more than 20 dB at the frequency of the smaller sinusoid. In most applications, applying a smoothing window is sufficient to obtain a better frequency representation of the signal. However, you can apply more sophisticated techniques to get a more accurate description of the original time-continuous signal in the frequency domain.

Characteristics of Different Smoothing Windows
To simplify choosing a smoothing window, you need to define various characteristics so that you can make comparisons between smoothing windows. An actual plot of a smoothing window shows that the frequency characteristic of the smoothing window is a continuous spectrum with a main lobe and several side lobes. Figure 5-11 shows the spectrum of a typical smoothing window.
Figure 5-11. Frequency Response of a Smoothing Window

Main Lobe
The center of the main lobe of a smoothing window occurs at each frequency component of the time-domain signal. By convention, the widths of the main lobe at –3 dB and –6 dB below the main lobe peak describe the width of the main lobe. The unit of measure for the main lobe width is FFT bins or frequency lines. The width of the main lobe of the smoothing window spectrum limits the frequency resolution of the windowed signal. Therefore, the ability to distinguish two closely spaced frequency components increases as the main lobe of the smoothing window narrows. As the main lobe narrows and spectral resolution improves, the window energy spreads into its side lobes, increasing spectral leakage and decreasing amplitude accuracy. A trade-off occurs between amplitude accuracy and spectral resolution.

Side Lobes
Side lobes occur on each side of the main lobe and approach zero at multiples of fs/N from the main lobe. The side lobe characteristics of the smoothing window directly affect the extent to which adjacent frequency components leak into adjacent frequency bins. The side lobe response of a strong sinusoidal signal can overpower the main lobe response of a nearby weak sinusoidal signal.
Maximum side lobe level and side lobe roll-off rate characterize the side lobes of a smoothing window. The maximum side lobe level is the largest side lobe level in decibels relative to the main lobe peak gain. The side lobe roll-off rate is the asymptotic decay rate, in decibels per decade of frequency, of the peaks of the side lobes. Table 5-1 lists the characteristics of several smoothing windows.

Table 5-1. Characteristics of Smoothing Windows

Smoothing Window    –3 dB Main Lobe   –6 dB Main Lobe   Maximum Side      Side Lobe Roll-Off
                    Width (bins)      Width (bins)      Lobe Level (dB)   Rate (dB/decade)
Uniform (none)      0.88              1.21              –13               20
Hanning             1.44              2.00              –32               60
Hamming             1.30              1.81              –43               20
Blackman-Harris     1.62              2.27              –71               20
Exact Blackman      1.61              2.25              –67               20
Blackman            1.64              2.30              –58               60
Flat Top            2.94              3.56              –44               20

Rectangular (None)
The rectangular window has a value of one over its length. Applying a rectangular window is equivalent to not using any window because the rectangular function just truncates the signal to within a finite time interval. The rectangular window has the highest amount of spectral leakage. The following equation defines the rectangular window.

w(n) = 1.0  for n = 0, 1, 2, …, N – 1

where N is the length of the window and w is the window value. Figure 5-12 shows the rectangular window for N = 32.
Figure 5-12. Rectangular Window

The rectangular window is useful for analyzing transients that have a duration shorter than that of the window. Transients are signals that exist only for a short time duration. The rectangular window also is used in order tracking, where the effective sampling rate is proportional to the speed of the shaft in rotating machines. In order tracking, the rectangular window detects the main mode of vibration of the machine and its harmonics.

Hanning
The Hanning window has a shape similar to that of half a cycle of a cosine wave. The following equation defines the Hanning window.

w(n) = 0.5 – 0.5 cos(2πn/N)  for n = 0, 1, 2, …, N – 1

where N is the length of the window and w is the window value. Figure 5-13 shows a Hanning window with N = 32.

Figure 5-13. Hanning Window

The Hanning window is useful for analyzing transients longer than the time duration of the window and for general-purpose applications.
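The Hanning definition above translates directly into code. This sketch (plain Python; N = 32 matches the figure, and the property checks at the end are mine) builds the window and confirms that it starts at zero, peaks at 1 in the middle of the record, and does not return to its starting value at the last sample:

```python
import math

def hanning(N):
    """DFT-even Hanning window: w(n) = 0.5 - 0.5*cos(2*pi*n/N), n = 0..N-1."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]

w = hanning(32)
first, peak, last = w[0], w[16], w[31]
# first == 0.0, peak == 1.0, and last is small but nonzero, so the window
# tapers the edges of the time record without being exactly symmetric.
```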
Hamming
The Hamming window is a modified version of the Hanning window. The shape of the Hamming window is similar to that of a cosine wave. The following equation defines the Hamming window.

w(n) = 0.54 – 0.46 cos(2πn/N)  for n = 0, 1, 2, …, N – 1

where N is the length of the window and w is the window value. Figure 5-14 shows a Hamming window with N = 32.

Figure 5-14. Hamming Window

The Hanning and Hamming windows are similar, as shown in Figures 5-13 and 5-14. However, in the time domain, the Hamming window does not get as close to zero near the edges as does the Hanning window.

Kaiser-Bessel
The Kaiser-Bessel window is a flexible smoothing window whose shape you can modify by adjusting the beta input. Thus, depending on your application, you can change the shape of the window to control the amount of spectral leakage. Figure 5-15 shows the Kaiser-Bessel window for different values of beta.
For small values of beta, the shape is close to that of a rectangular window. Actually, for beta = 0.0, you do get a rectangular window. As you increase beta, the window tapers off more to the sides.

Figure 5-15. Kaiser-Bessel Window

The Kaiser-Bessel window is useful for detecting two signals of almost the same frequency but with significantly different amplitudes.

Triangle
The shape of the triangle window is that of a triangle. The following equation defines the triangle window.

w(n) = 1 – |(2n – N)/N|  for n = 0, 1, 2, …, N – 1

where N is the length of the window and w is the window value.
Figure 5-16 shows a triangle window for N = 32.

Figure 5-16. Triangle Window

Flat Top
The flat top window has the best amplitude accuracy of all the smoothing windows at ±0.02 dB for signals exactly between integral cycles. Because the flat top window has a wide main lobe, it has poor frequency resolution. The following equation defines the flat top window.

w(n) = Σ[k = 0 to 4] (–1)^k · a_k · cos(kω)

where ω = 2πn/N and

a0 = 0.215578948
a1 = 0.416631580
a2 = 0.277263158
a3 = 0.083578947
a4 = 0.006947368

Figure 5-17 shows a flat top window.
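The cosine-series definition and the coefficients above can be checked numerically. In this sketch (plain Python; the midpoint and edge checks are my own), the window evaluates to approximately 1 at the middle of the record and dips slightly below zero at the edges, which is characteristic of this flat top design:

```python
import math

# Flat top coefficients as listed in the text.
A = [0.215578948, 0.416631580, 0.277263158, 0.083578947, 0.006947368]

def flattop(N):
    """w(n) = sum over k of (-1)^k * a_k * cos(k * 2*pi*n/N)."""
    out = []
    for n in range(N):
        omega = 2 * math.pi * n / N
        out.append(sum(((-1) ** k) * A[k] * math.cos(k * omega)
                       for k in range(5)))
    return out

w = flattop(32)
peak = w[16]   # midpoint of the record: the coefficients sum to ~1 here
edge = w[0]    # edge value: slightly negative for this window
```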
Figure 5-17. Flat Top Window

The flat top window is most useful in accurately measuring the amplitude of single frequency components with little nearby spectral energy in the signal.

Exponential
The shape of the exponential window is that of a decaying exponential. The following equation defines the exponential window.

w[n] = e^(n·ln(f)/(N – 1)) = f^(n/(N – 1))  for n = 0, 1, 2, …, N – 1

where N is the length of the window, w is the window value, and f is the final value. The initial value of the window is one, and it gradually decays toward zero. You can adjust the final value of the exponential window to between 0 and 1. Figure 5-18 shows the exponential window for N = 32, with the final value specified as 0.1.

Figure 5-18. Exponential Window
The exponential window is useful for analyzing transient response signals whose duration is longer than the length of the window, such as the response of structures with light damping that are excited by an impact, for example, the impact of a hammer. The exponential window damps the end of the signal, ensuring that the signal fully decays by the end of the sample block. You can apply the exponential window to signals that decay exponentially.

Windows for Spectral Analysis versus Windows for Coefficient Design
Spectral analysis and filter coefficient design place different requirements on a window. Spectral analysis requires a DFT-even window, while filter coefficient design requires a window symmetric about its midpoint.

Spectral Analysis
The smoothing windows designed for spectral analysis must be DFT-even. A smoothing window is DFT-even if its dot product, or inner product, with integral cycles of sine sequences is identically zero. In other words, the DFT of a DFT-even sequence has no imaginary component. Figures 5-19 and 5-20 show the Hanning window for a sample size of 8 and one cycle of a sine pattern for a sample size of 8.
Figure 5-19. Hanning Window for Sample Size 8

Figure 5-20. Sine Pattern for Sample Size 8

In Figure 5-19, the DFT-even Hanning window is not symmetric about its midpoint. The last point of the window is not equal to its first point, similar to one complete cycle of the sine pattern shown in Figure 5-20.

Smoothing windows for spectral analysis are spectral windows and include the following window types:
• Scaled time-domain window
• Hanning window
• Hamming window
• Triangle window
• Blackman window
• Exact Blackman window
• Blackman-Harris window
• Flat top window
• Kaiser-Bessel window
• General cosine window
• Cosine tapered window
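The DFT-even property described above can be verified directly: the dot product of the DFT-even Hanning window with an integral cycle of a sine sequence is identically zero, while the symmetric variant used for filter coefficient design repeats its first value at its last point. A small sketch (plain Python, N = 8 as in the figures; the tolerances are mine):

```python
import math

N = 8
# DFT-even Hanning (spectral analysis): the cosine argument divides by N.
w_dft_even = [0.5 * (1 - math.cos(2 * math.pi * i / N)) for i in range(N)]
# Symmetric Hanning (filter coefficient design): divides by N - 1 instead.
w_symmetric = [0.5 * (1 - math.cos(2 * math.pi * i / (N - 1))) for i in range(N)]
# One full cycle of a sine pattern, sample size 8.
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]

# Dot product of the DFT-even window with one cycle of the sine pattern.
dot = sum(wi * si for wi, si in zip(w_dft_even, sine))
# The symmetric window ends where it starts; the DFT-even window does not.
ends_match_sym = abs(w_symmetric[0] - w_symmetric[N - 1])
ends_match_dft = abs(w_dft_even[0] - w_dft_even[N - 1])
```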
Windows for FIR Filter Coefficient Design
Designing FIR filter coefficients requires a window that is symmetric about its midpoint. By modifying a spectral window, you can define a symmetrical window for designing filter coefficients. Equations 5-1 and 5-2 illustrate the difference between a spectral window and a symmetrical window for filter coefficient design. Equation 5-1 defines the Hanning window for spectral analysis.

w[i] = 0.5 (1 – cos(2πi/N))  for i = 0, 1, 2, …, N – 1   (5-1)

where N is the length of the window and w is the window value.

Equation 5-2 defines a symmetrical Hanning window for filter coefficient design.

w[i] = 0.5 (1 – cos(2πi/(N – 1)))  for i = 0, 1, 2, …, N – 1   (5-2)

where N is the length of the window and w is the window value. Refer to Chapter 3, Digital Filtering, for more information about designing digital filters.

Choosing the Correct Smoothing Window
Selecting a smoothing window is not a simple task. Each smoothing window has its own characteristics and suitability for different applications. To choose a smoothing window, you must estimate the frequency content of the signal.

If the signal contains strong interfering frequency components distant from the frequency of interest, choose a smoothing window with a high side lobe roll-off rate. If the signal contains strong interfering signals near the frequency of interest, choose a smoothing window with a low maximum side lobe level. Refer to Table 5-1 for information about side lobe roll-off rates and maximum side lobe levels for various smoothing windows. If the frequency of interest contains two or more signals very near to each other, spectral resolution is important. In this case, it is best to choose a smoothing window with a very narrow main lobe. If the amplitude accuracy of a single frequency component is more important than the exact location
of the component in a given frequency bin, choose a smoothing window with a wide main lobe. If the signal spectrum is rather flat or broadband in frequency content, use the uniform window, or no window. In general, the Hanning window is satisfactory in 95 percent of cases. It has good frequency resolution and reduced spectral leakage. If you do not know the nature of the signal but you want to apply a smoothing window, start with the Hanning window. Table 5-2 lists different types of signals and the appropriate windows that you can use with them.

Table 5-2. Signals and Windows

Type of Signal                                                        Window
Transients whose duration is shorter than the length of the window    Rectangular
Transients whose duration is longer than the length of the window     Exponential
General-purpose applications                                          Hanning
Spectral analysis (frequency-response measurements)                   Hanning (for random excitation),
                                                                      Rectangular (for pseudorandom excitation)
Separation of two tones with frequencies very close to each other     Kaiser-Bessel
  but with widely differing amplitudes
Separation of two tones with frequencies very close to each other     Rectangular
  but with almost equal amplitudes
Accurate single-tone amplitude measurements                           Flat top
Sine wave or combination of sine waves                                Hanning
Sine wave and amplitude accuracy is important                         Flat top
Narrowband random signal (vibration data)                             Hanning
Broadband random (white noise)                                        Uniform
Closely spaced sine waves                                             Uniform, Hamming
Excitation signals (hammer blow)                                      Force
Response signals                                                      Exponential
Unknown content                                                       Hanning
Initially, you might not have enough information about the signal to select the most appropriate smoothing window for the signal. You might need to experiment with different smoothing windows to find the best one. Always compare the performance of different smoothing windows to find the best one for the application.

An FFT is equivalent to a set of parallel filters, with each filter having a bandwidth equal to ∆f. Because of the spreading effect of a smoothing window, the smoothing window increases the effective bandwidth of an FFT bin by an amount known as the equivalent noise-power bandwidth (ENBW) of the smoothing window. The power of a given frequency peak equals the sum of the adjacent frequency bins around the peak increased by a scaling factor equal to the ENBW of the smoothing window. Refer to Chapter 4, Frequency Analysis, for information about performing computations on the power spectrum.

The smoothing window changes the overall amplitude of the signal. When applying multiple smoothing windows to the same signal, scaling each smoothing window by dividing the windowed array by the coherent gain of the window results in each window yielding the same spectrum amplitude result within the accuracy constraints of the window. The plots in Figures 5-7 and 5-8 are the result of applying scaled smoothing windows to the time-domain signal. Table 5-3 lists the scaling factor, the ENBW, and the worst-case peak amplitude accuracy caused by off-center components for several popular smoothing windows.
Scaling Smoothing Windows
Applying a smoothing window to a time-domain signal multiplies the time-domain signal, point by point, by a smoothing window of the same length and introduces distortion effects due to the smoothing window. The scaling factor also is known as coherent gain. You must take the scaling factor into account when you perform computations based on the power spectrum.

Table 5-3. Correction Factors and Worst-Case Amplitude Errors for Smoothing Windows

Window            Scaling Factor      ENBW    Worst-Case Amplitude
                  (Coherent Gain)             Error (dB)
Uniform (none)    1.00                1.00    3.92
Hanning           0.50                1.50    1.42
Hamming           0.54                1.36    1.75
Table 5-3. Correction Factors and Worst-Case Amplitude Errors for Smoothing Windows (Continued)

Window             Scaling Factor      ENBW    Worst-Case Amplitude
                   (Coherent Gain)             Error (dB)
Blackman-Harris    0.42                1.71    1.13
Exact Blackman     0.43                1.69    1.15
Blackman           0.42                1.73    1.10
Flat Top           0.22                3.77    <0.01
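The coherent gain and ENBW entries in Table 5-3 can be recomputed directly from window samples: coherent gain is the mean of the window values, and ENBW is N·Σw²/(Σw)². The following sketch (plain Python; the record length and comparison tolerances are my own choices) reproduces the uniform, Hanning, and Hamming rows:

```python
import math

def coherent_gain(w):
    """Scaling factor: the mean of the window samples."""
    return sum(w) / len(w)

def enbw(w):
    """Equivalent noise-power bandwidth in bins: N * sum(w^2) / (sum(w))^2."""
    N = len(w)
    return N * sum(x * x for x in w) / (sum(w) ** 2)

N = 1024
uniform = [1.0] * N
hanning = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]
hamming = [0.54 - 0.46 * math.cos(2 * math.pi * n / N) for n in range(N)]
# coherent_gain and enbw of these lists match the Table 5-3 entries:
# uniform 1.00/1.00, Hanning 0.50/1.50, Hamming 0.54/1.36.
```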
6
Distortion Measurements

This chapter describes harmonic distortion, total harmonic distortion (THD), signal, noise, and distortion (SINAD), and when to use distortion measurements.

Defining Distortion
Applying a pure single-frequency sine wave to a perfectly linear system produces an output signal having the same frequency as that of the input sine wave. However, the output signal might have a different amplitude and/or phase than the input sine wave. Also, when you apply a composite signal consisting of several sine waves at the input, the output signal consists of the same frequencies but different amplitudes and/or phases.

Many real-world systems act as nonlinear systems when their input limits are exceeded, resulting in distorted output signals. If the input limits of a system are exceeded, the output consists of one or more frequencies that did not originally exist at the input. For example, if the input to a nonlinear system consists of two frequencies f1 and f2, the frequencies at the output might have the following components:
• f1 and harmonics, or integer multiples, of f1
• f2 and harmonics of f2
• Sums and differences of f1, f2, and the harmonics of f1 and f2

The number of new frequencies at the output, their corresponding amplitudes, and their relationships with respect to the original frequencies vary depending on the transfer function. Distortion measurements quantify the degree of nonlinearity of a system. Common distortion measurements include the following measurements:
• Total harmonic distortion (THD)
• Total harmonic distortion + noise (THD + N)
• Signal, noise, and distortion (SINAD)
• Intermodulation distortion
Application Areas
You can make distortion measurements for many devices, such as A/D and D/A converters, audio processing devices, analog tape recorders, cellular phones, radios, televisions, stereos, and loudspeakers. You can use distortion measurements to diagnose faults such as bad solder joints, torn speaker cones, and incorrectly installed components. However, nonlinearities are not always undesirable. For example, many musical sounds are produced specifically by driving a device into its nonlinear region.

Harmonic Distortion
When a signal x(t) of a particular frequency f1 passes through a nonlinear system, the output of the system consists of f1 and its harmonics. The following expression describes the relationship between f1 and its harmonics.

f2 = 2f1, f3 = 3f1, f4 = 4f1, …, fn = nf1

The degree of nonlinearity of the system determines the number of harmonics and their corresponding amplitudes the system generates. In general, as the nonlinearity of a system increases, the harmonics become higher. As the nonlinearity of a system decreases, the harmonics become lower. Nonlinearities symmetrical around zero produce mainly odd harmonics. Nonlinearities that are asymmetrical around zero produce mainly even harmonics. Measurements of harmonics often provide a good indication of the cause of the nonlinearity of a system.

Example of a Nonlinear System
Figure 6-1 illustrates an example of a nonlinear system where the output y(t) is the cube of the input signal x(t).

cos(ωt)  →  y(t) = f(x) = x³(t)  →  cos³(ωt)

Figure 6-1. Example of a Nonlinear System

The following equation defines the input for the system shown in Figure 6-1.

x(t) = cos(ωt)
Equation 6-1 defines the output of the system shown in Figure 6-1.

y(t) = x³(t) = 0.5 cos(ωt) + 0.25 [cos(ωt) + cos(3ωt)]   (6-1)

In Equation 6-1, the output contains not only the input fundamental frequency ω but also the third harmonic 3ω.

A common cause of harmonic distortion is clipping. Clipping occurs when a system is driven beyond its capabilities. Symmetrical clipping results in odd harmonics. Asymmetrical clipping creates both even and odd harmonics.

THD
To determine the total amount of nonlinear distortion, also known as total harmonic distortion (THD), that a system introduces, measure the amplitudes of the harmonics the system introduces relative to the amplitude of the fundamental frequency. The following equation yields THD.

THD = √(A2² + A3² + A4² + …) / A1

where A1 is the amplitude of the fundamental frequency, A2 is the amplitude of the second harmonic, A3 is the amplitude of the third harmonic, A4 is the amplitude of the fourth harmonic, and so on.

The following equation yields the percentage total harmonic distortion (%THD).

%THD = 100 · √(A2² + A3² + A4² + …) / A1

You usually report the results of a THD measurement in terms of the highest order harmonic present in the measurement, such as THD through the seventh harmonic.
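Equation 6-1 gives a concrete THD example: collecting the cos(ωt) terms, the fundamental amplitude is A1 = 0.75 and the third-harmonic amplitude is A3 = 0.25, so the cubing system has a THD of 1/3, or about 33.3%. The sketch below (plain Python; the function name and the grid check are mine) verifies the trigonometric identity numerically and applies the THD formula:

```python
import math

def thd(amplitudes):
    """THD = sqrt(A2^2 + A3^2 + ...) / A1, with amplitudes = [A1, A2, ...]."""
    a1, rest = amplitudes[0], amplitudes[1:]
    return math.sqrt(sum(a * a for a in rest)) / a1

# Verify the identity cos^3(x) = 0.75*cos(x) + 0.25*cos(3x) on a grid.
worst = max(
    abs(math.cos(x) ** 3 - (0.75 * math.cos(x) + 0.25 * math.cos(3 * x)))
    for x in [i * 0.01 for i in range(700)]
)

# The cubing system of Figure 6-1: A1 = 0.75, A2 = 0, A3 = 0.25.
thd_cube = thd([0.75, 0.0, 0.25])
percent_thd = 100 * thd_cube   # about 33.3 %THD
```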
THD + N
Real-world signals usually contain noise, such as AC mains hum and wideband white noise. A system can introduce additional noise into the signal. THD + N measures signal distortion while taking into account the amount of noise power present in the signal. Measuring THD + N requires measuring the amplitude of the fundamental frequency and the power present in the remaining signal after removing the fundamental frequency. The following equation yields THD + N.

THD + N = √(A2² + A3² + … + N) / √(A1² + A2² + A3² + … + N)

where N is the noise power.

The following equation yields percentage total harmonic distortion + noise (%THD + N).

%THD + N = 100 · √(A2² + A3² + … + N) / √(A1² + A2² + A3² + … + N)

As with THD, you usually report the results of a THD + N measurement in terms of the highest order harmonic present in the measurement, such as THD + N through the third harmonic. A low THD + N measurement means that the system has a low amount of harmonic distortion and a low amount of noise from interfering signals.

SINAD
Similar to THD + N, SINAD takes into account both harmonics and noise. SINAD is the reciprocal of THD + N. The following equation yields SINAD.

SINAD = (Fundamental + Noise + Distortion) / (Noise + Distortion)

You can use SINAD to characterize the performance of FM receivers in terms of sensitivity, adjacent channel selectivity, and alternate channel selectivity.
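Written in the same rms terms, the two definitions above make SINAD exactly the reciprocal of THD + N, which the following sketch checks (plain Python; the amplitude and noise-power values are arbitrary illustrative numbers, not values from the text):

```python
import math

def thd_plus_n(amplitudes, noise_power):
    """sqrt(A2^2 + ... + N) / sqrt(A1^2 + A2^2 + ... + N)."""
    a1, rest = amplitudes[0], amplitudes[1:]
    residual = sum(a * a for a in rest) + noise_power
    return math.sqrt(residual) / math.sqrt(a1 * a1 + residual)

def sinad(amplitudes, noise_power):
    """(Fundamental + Noise + Distortion) / (Noise + Distortion), rms terms."""
    a1, rest = amplitudes[0], amplitudes[1:]
    residual = sum(a * a for a in rest) + noise_power
    return math.sqrt(a1 * a1 + residual) / math.sqrt(residual)

# Illustrative example: fundamental 1.0 V, second harmonic 0.05 V,
# third harmonic 0.02 V, noise power 0.001.
amps, noise = [1.0, 0.05, 0.02], 0.001
product = sinad(amps, noise) * thd_plus_n(amps, noise)  # ~1.0
```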
7
DC/RMS Measurements

Two of the most common measurements of a signal are its direct current (DC) and root mean square (RMS) levels. This chapter introduces measurement analysis techniques for making DC and RMS measurements of a signal.

What Is the DC Level of a Signal?
You can use DC measurements to define the value of a static or slowly varying signal. DC measurements can be both positive and negative. The DC value usually is constant within a specific time window. You can track and plot slowly moving values, such as temperature, as a function of time using a DC meter. In that case, the observation time that results in the measured value has to be short compared to the speed of change for the signal. Figure 7-1 illustrates an example DC level of a signal.

Figure 7-1. DC Level of a Signal

The DC level of a continuous signal V(t) from time t1 to time t2 is given by the following equation.

Vdc = (1 / (t2 – t1)) · ∫[t1 to t2] V(t) dt

where t2 – t1 is the integration time or measurement time.
For digitized signals, the discrete-time version of the previous equation is given by the following equation.

Vdc = (1 / N) · Σ[i = 1 to N] Vi

For a sampled system, the DC value is defined as the mean value of the samples acquired in the specified measurement time window.

Real-world signals often contain a significant amount of dynamic influence, and measuring the DC level of these signals becomes challenging. Often, you do not want the dynamic part of the signal, for example, the voltage generated by a thermocouple in an industrial environment, where external noise or hum from the main power can disturb the DC signal significantly. The DC measurement identifies the static DC signal hidden in the dynamic signal. Between pure DC signals and fast-moving dynamic signals is a gray zone where signals become more complex.

What Is the RMS Level of a Signal?
The RMS level of a signal is the square root of the mean value of the squared signal. You usually acquire RMS measurements on dynamic signals, that is, signals with relatively fast changes, such as noise or periodic signals. RMS measurements are always positive. Use RMS measurements when a representation of energy is needed. Refer to Chapter 7, Measuring AC Voltage, of the LabVIEW Measurements Manual for more information about when to use RMS measurements.

The RMS level of a continuous signal V(t) from time t1 to time t2 is given by the following equation.

Vrms = √( (1 / (t2 – t1)) · ∫[t1 to t2] V²(t) dt )

where t2 – t1 is the integration time or measurement time.
The RMS level of a discrete signal Vi is given by the following equation.

Vrms = √( (1 / N) · Σ[i = 1 to N] Vi² )

One difficulty is encountered when measuring the dynamic part of a signal using an instrument that does not offer an AC-coupling option. A true RMS measurement includes the DC part in the measurement, which is a measurement you might not want.

Averaging to Improve the Measurement
Instantaneous DC measurements of a noisy signal can vary randomly and significantly, as shown in Figure 7-2. You can measure a more accurate value by averaging out the noise that is superimposed on the desired DC level. In a continuous signal, the averaged value between two times, t1 and t2, is defined as the signal integration between t1 and t2, divided by the measurement time, t2 – t1, as shown in Figure 7-1. For a sampled signal, the average value is the sum of the voltage samples divided by the measurement time in samples, or the mean value of the measurement samples. The area between the averaged value Vdc and the signal that is above Vdc is equal to the area between Vdc and the signal that is under Vdc. Refer to Chapter 6, Measuring DC Voltage, of the LabVIEW Measurements Manual for more information about averaging in LabVIEW.

Figure 7-2. Instantaneous DC Measurements
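The discrete DC and RMS definitions above take only a few lines of code. In this sketch (plain Python; the 1.0 VDC level, 0.5 V tone, and sample counts are illustrative values of my choosing), averaging over an exact integer number of periods recovers the DC level, and the RMS value comes out as √(1² + 0.5²/2):

```python
import math

def dc_level(samples):
    """Vdc: the mean of the samples."""
    return sum(samples) / len(samples)

def rms_level(samples):
    """Vrms: the square root of the mean of the squared samples."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

# 1.0 VDC plus a 0.5 V peak sine tone, sampled over exactly 4 full periods.
samples_per_period, periods = 100, 4
v = [1.0 + 0.5 * math.sin(2 * math.pi * k / samples_per_period)
     for k in range(samples_per_period * periods)]

vdc = dc_level(v)     # ~1.0 V: the tone averages out over full periods
vrms = rms_level(v)   # ~sqrt(1.125) V: the true RMS includes the DC part
```

Note that `vrms` is the true RMS value: it includes the DC part, as a measurement without AC coupling would.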
An RMS measurement is an averaged quantity because it is the average energy in the signal over a measurement period. You can improve the RMS measurement accuracy by using a longer averaging time, equivalent to the integration time or measurement time.

Common Error Sources Affecting DC and RMS Measurements
Some common error sources for DC measurements are single-frequency components (or tones), multiple tones, or random noise. These same error signals can interfere with RMS measurements, so in many cases the approach taken to improve RMS measurements is the same as for DC measurements. There are several different strategies to use for making DC and RMS measurements, each dependent on the type of error or noise sources. When choosing a strategy, you must decide if accuracy or speed of the measurement is more important.

DC Overlapped with Single Tone
Consider the case where the signal you measure is composed of a DC signal and a single sine tone. The average of a single period of the sine tone is ideally zero because the positive half-period of the tone cancels the negative half-period.

Figure 7-3. DC Signal Overlapped with Single Tone

Any remaining partial period, shown in Figure 7-3 with vertical hatching, introduces an error in the average value and therefore in the DC measurement. Increasing the averaging time reduces this error because the integration is always divided by the measurement time t2 – t1. If you know
the period of the sine tone, you can take a more accurate measurement of the DC value by using a measurement period equal to an integer number of periods of the sine tone. The most severe error occurs when the measurement time is a half-period different from an integer number of periods of the sine tone, because this is the maximum area under or over the signal curve.

DC Plus Sine Tone
Figure 7-4 shows that for a 1.0 VDC signal overlapped with a 0.5 V single sine tone, the worst ENOD increases with measurement time (x-axis shown in periods of the additive sine tone) at a rate of approximately one additional digit for 10 times more measurement time. In other words, accuracy and measurement time are related through a first-order function. To achieve 10 times more accuracy, you need to increase your measurement time by a factor of 10.

Defining the Equivalent Number of Digits
Defining the Equivalent Number of Digits (ENOD) makes it easier to relate a measurement error to a number of digits. ENOD translates measurement accuracy into a number of digits, similar to digits of precision.

ENOD = –log10(Relative Error)

A 1% error corresponds to two digits of accuracy, and a one part per million error corresponds to six digits of accuracy (–log10(0.000001) = 6). ENOD is only an approximation that tells you what order of magnitude of accuracy you can achieve under specific measurement conditions. ENOD is only a tool for computing the reliability of a specific measurement technique. This accuracy does not take into account any error introduced by the measurement instrument or data acquisition hardware itself.

The ENOD should at least match the accuracy of the measurement instrument or measurement requirements. In other words, it is not necessary to use a measurement technique with an ENOD of six digits if your instrument has an accuracy of only 0.1% (three digits). Similarly,
you do not get the six digits of accuracy from your sixdigit accurate measurement instrument if your measurement technique is limited to an ENOD of only three digits.
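The ENOD relation above is simple to evaluate numerically. The following is a minimal Python sketch (the function name is illustrative, not part of any LabVIEW API):

```python
import math

def enod(relative_error):
    """Equivalent Number of Digits: ENOD = -log10(relative error)."""
    return -math.log10(relative_error)

print(enod(0.01))       # 1% error -> 2 digits of accuracy
print(enod(0.000001))   # one part per million -> 6 digits of accuracy
```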
Figure 7-4. Digits versus Measurement Time for 1.0 VDC Signal with 0.5 V Single Tone

Windowing to Improve DC Measurements

The worst ENOD for a DC signal plus a sine tone occurs when the measurement time is at half-periods of the sine tone. You can greatly reduce these errors due to a noninteger number of cycles by using a weighting function before integrating to measure the desired DC value. The most common weighting or window function is the Hann window, commonly known as the Hanning window. Figure 7-5 shows a dramatic increase in accuracy from the use of the Hann window. As in the nonwindowing case, the DC level is 1.0 V and the single tone peak amplitude is 0.5 V. The accuracy as a function of the number of sine tone periods is improved from a first-order function to a third-order function. In other words, you can achieve one additional digit of accuracy for every 10^(1/3) ≈ 2.15 times more measurement time using the Hann window, instead of one digit for every 10 times more measurement time without a window.
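The improvement described above can be reproduced with a short simulation. The sketch below is a pure-Python illustration, not the LabVIEW implementation: it averages a 1.0 VDC signal plus a 0.5 V tone over 10.5 tone periods, with and without a Hann window scaled by its coherent gain. The sample rate and record length are arbitrary choices made for the example.

```python
import math

def hann_unit_mean(n):
    # Hann window, divided by its coherent gain (its mean) so the mean of
    # the scaled window is 1.0 and the DC estimate needs no further scaling
    w = [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]
    mean_w = sum(w) / n
    return [wi / mean_w for wi in w]

# 1.0 VDC plus a 0.5 V peak sine tone, measured over 10.5 tone periods
n = 1050                      # samples; 100 samples per tone period
signal = [1.0 + 0.5 * math.sin(2 * math.pi * i / 100) for i in range(n)]

plain_dc = sum(signal) / n    # plain average over a noninteger cycle count
w = hann_unit_mean(n)
windowed_dc = sum(wi * si for wi, si in zip(w, signal)) / n

print(abs(plain_dc - 1.0))     # error of the plain average (roughly 0.015 V)
print(abs(windowed_dc - 1.0))  # much smaller error with the Hann window
```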
Figure 7-5. Digits versus Measurement Time for DC + Tone Using Hann Window

You can use other types of window functions to further reduce the necessary measurement time or greatly increase the resulting accuracy. For example, Figure 7-6 shows that the Low Sidelobe (LSL) window can achieve more than six ENOD of worst accuracy when averaging your DC signal over only five periods of the sine tone (same test signal).

Figure 7-6. Digits versus Measurement Time for DC + Tone Using LSL Window
RMS Measurements Using Windows

Like DC measurements, the worst ENOD for measuring the RMS level of signals sometimes can be improved significantly by applying a window to the signal before RMS integration. For example, if you measure the RMS level of a DC signal plus a single sine tone, the most accurate measurements are made when the measurement time is an integer number of periods of the sine tone. Figure 7-7 shows that the worst ENOD varies with measurement time (in periods of the sine tone) for various window functions. Here, the test signal contains 0.707 VDC with a 1.0 V peak sine tone.

Figure 7-7. Digits versus Measurement Time for RMS Measurements

Applying the window to the signal increases RMS measurement accuracy significantly, but the improvement is not as large as in DC measurements. For this example, the LSL window achieves six digits of accuracy when the measurement time reaches eight periods of the sine tone.

Using Windows with Care

Window functions can be very useful to improve the speed of your measurement, but you must be careful. For example, you can significantly reduce RMS measurement accuracy if the signal you want to measure is composed of many frequency components close to each other in the frequency domain. The Hann window is a general window recommended in most cases. Use more advanced windows such as the LSL window only if you know the window will improve the measurement.
You also must make sure that the window is scaled correctly or that you update scaling after applying the window. The most useful window functions are prescaled by their coherent gain (the mean value of the window function) so that the resulting mean value of the scaled window function is always 1.00. DC measurements do not need to be scaled when using a properly scaled window function. For RMS measurements, each window has a specific equivalent noise bandwidth that you must use to scale integrated RMS measurements. You must scale RMS measurements using windows by the reciprocal of the square root of the equivalent noise bandwidth.

Rules for Improving DC and RMS Measurements

Use the following guidelines when determining a strategy for improving your DC and RMS measurements:

• If your signal is overlapped with a single tone and you know the exact frequency of the sine tone, use a measurement time that corresponds to an exact number of sine periods. If you do not know the frequency of the sine tone, apply a window, such as a Hann window, to significantly reduce the measurement time needed to achieve a specific accuracy.
• If your signal is overlapped with many independent tones, increasing measurement time increases the accuracy of the measurement. As in the single tone case, using a window significantly reduces the measurement time needed to achieve a specific accuracy.
• If your signal is overlapped with noise, you can increase the accuracy of your measurement by increasing the integration time or by preprocessing or conditioning your noisy signal with a lowpass (or bandstop) filter; longer integration times increase the accuracy of your measurement. In this case, do not use a window.

RMS Levels of Specific Tones

You can use advanced techniques when you are interested in a specific frequency or narrow frequency range. You always can improve the accuracy of an RMS measurement by choosing a specific measurement time to contain an integer number of cycles of your sine tones or by using a window function. The measurement of the RMS value is based only on the time domain knowledge of your signal.
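The two scaling quantities named above, the coherent gain and the equivalent noise bandwidth (ENBW), can be computed directly from the window samples. A minimal Python sketch (function names are illustrative):

```python
import math

def hann(n):
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]

def coherent_gain(w):
    # mean value of the window; dividing the window by this value makes
    # the mean of the scaled window exactly 1.00
    return sum(w) / len(w)

def equivalent_noise_bandwidth(w):
    # ENBW (in bins) of the window after scaling to unit coherent gain;
    # an RMS value integrated through the scaled window is corrected by
    # multiplying by 1/sqrt(ENBW)
    n = len(w)
    cg = coherent_gain(w)
    scaled = [wi / cg for wi in w]
    return sum(wi * wi for wi in scaled) / n

w = hann(1024)
print(coherent_gain(w))               # 0.5 for the Hann window
print(equivalent_noise_bandwidth(w))  # 1.5 for the Hann window
```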
You can use bandpass or bandstop filtering before RMS computations to measure the RMS power in a specific band of frequencies. You also can use the Fast Fourier Transform (FFT) to pick out specific frequencies for RMS processing. The RMS level of a specific sine tone that is part of a complex or noisy signal can be extracted very accurately using frequency domain processing, leveraging the power of the FFT and using the benefits of windowing. Refer to Chapter 4, Frequency Analysis, for more information about the FFT.
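One frequency-domain approach is to project the signal onto a single DFT bin and read the tone level from that bin. The following Python sketch assumes the tone of interest completes an integer number of cycles in the record, so no window is needed; it is an illustration, not the LabVIEW VI:

```python
import math

def tone_rms(signal, cycles):
    """Estimate the RMS of the tone completing `cycles` periods in the
    record by projecting the signal onto that single DFT bin."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * cycles * i / n)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * cycles * i / n)
             for i, s in enumerate(signal))
    amplitude = 2.0 * math.hypot(re, im) / n  # peak amplitude of the tone
    return amplitude / math.sqrt(2)           # RMS of a sine tone

# 1.0 V peak tone at 5 cycles per record, plus DC and a second tone
n = 1000
sig = [0.707 + 1.0 * math.sin(2 * math.pi * 5 * i / n)
       + 0.2 * math.sin(2 * math.pi * 17 * i / n) for i in range(n)]
print(tone_rms(sig, 5))   # close to 0.707, the RMS of the 1.0 V peak tone
```

Because the DC component and the 17-cycle tone are orthogonal to the 5-cycle bin over an integer number of cycles, they do not disturb the estimate.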
Chapter 8 Limit Testing

This chapter provides information about setting up an automated system for performing limit testing, specifying limits, and applications for limit testing. You can use limit testing to monitor a waveform and determine if it always satisfies a set of conditions, usually upper and lower limits. The result of a limit or mask test is generally a pass or fail.

Setting up an Automated Test System

You can use the same method to create and control many different automated test systems. Complete the following basic steps to set up an automated test system for limit mask testing.

1. Configure the measurement by specifying arbitrary upper and lower limits. The region bounded by the specified limits is a mask. This defines your mask or region of interest.
2. Acquire data using a DAQ device.
3. Monitor the data to make sure it always falls within the specified mask.
4. Log the pass/fail results from step 3 to a file or visually inspect the input data and the points that fall outside the mask.
5. Repeat steps 2 through 4 to continue limit mask testing.

The following sections describe steps 1 and 3 in further detail.

Specifying a Limit

Limits are classified into two types, continuous limits and segmented limits, as shown in Figure 8-1. The top graph in Figure 8-1 shows a continuous limit. A continuous limit is specified using a set of x and y points {{x1, x2, x3, …}, {y1, y2, y3, …}}. Assume that the signal to be monitored starts at x = x0 and all the data points are evenly spaced. The spacing between each point is denoted by dx. Completing step 1 creates a limit with the first point at x0 and all other points at a uniform spacing of dx (x0 + dx, x0 + 2dx, …). This is done through a linear interpolation of the x and y values that define the limit. In Figure 8-1, black dots represent the
points at which the limit is defined and the solid line represents the limit you create. The limit is undefined in the region x0 < x < x1 and for x > x4. Creating the limit in step 1 reduces test times in step 3. If the spacing between the samples changes, you can repeat step 1.

Figure 8-1. Continuous versus Segmented Limit Specification (top graph: a continuous limit defined at x1 through x4; bottom graph: a segmented limit defined at x1 through x5)

The bottom graph of Figure 8-1 shows a segmented limit. The first segment is defined using a set of x and y points {{x1, x2}, {y1, y2}}. The second segment is defined using a set of points {{x3, x4, x5}, {y3, y4, y5}}. You can define any number of such segments. The limit is undefined in the region x0 < x < x1, in the region x2 < x < x3, and in the region x > x5. As with continuous limits, step 1 uses linear interpolation to create a limit with the first point at x0 and all other points with a uniform spacing of dx.
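Step 1, resampling the specified limit onto the uniform grid x0 + i·dx by linear interpolation, can be sketched in Python as follows. The function name and the use of None to mark undefined regions are illustrative choices, not part of any LabVIEW API:

```python
def make_limit(xs, ys, x0, dx, npts):
    """Linearly interpolate the limit defined by (xs, ys) onto the uniform
    grid x0, x0 + dx, ...; returns None where the limit is undefined."""
    limit = []
    for i in range(npts):
        x = x0 + i * dx
        if x < xs[0] or x > xs[-1]:
            limit.append(None)            # outside the specified limit
            continue
        j = 0                             # segment [xs[j], xs[j+1]] with x
        while xs[j + 1] < x:
            j += 1
        t = (x - xs[j]) / (xs[j + 1] - xs[j])
        limit.append(ys[j] + t * (ys[j + 1] - ys[j]))
    return limit

upper = make_limit([1.0, 2.0, 4.0], [5.0, 3.0, 3.0], 0.0, 0.5, 10)
print(upper)  # [None, None, 5.0, 4.0, 3.0, 3.0, 3.0, 3.0, 3.0, None]
```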
Specifying a Limit Using a Formula
You can specify limits using formulas. Such limits are best classified as segmented limits. Each segment is defined by start and end frequencies and a formula. For example, the ANSI T1.413 recommendation specifies the limits for the transmit and receive spectrum of an ADSL signal in terms of formulas. Table 8-1, which includes only a part of the specification, shows the start and end frequencies and the upper limits of the spectrum for each segment.
Table 8-1. ADSL Signal Recommendations

Start (kHz)   End (kHz)   Maximum (Upper Limit) Value (dBm/Hz)
0.3           4.0         –97.5
4.0           25.9        –92.5 + 21.5 log2(f/4,000)
25.9          138.0       –34.5
138.0         307.0       –34.5 – 48.0 log2(f/138,000)
307.0         1,221.0     –90
The limit is specified as an array of sets of x and y points, [{0.3, 4.0}{–97.5, –97.5}, {4.0, 25.9}{–92.5 + 21.5 log2(f/4,000), –92.5 + 21.5 log2(f/4,000)}, …, {307.0, 1,221.0}{–90, –90}]. Each element of the array corresponds to a segment. Figure 8-2 shows the segmented limit plot specified using the formulas shown in Table 8-1. The x-axis is on a logarithmic scale.
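A formula-based segmented limit like Table 8-1 can be represented as a list of (start, end, formula) entries. The Python sketch below assumes the frequency f inside the formulas is in Hz while the segment boundaries are in kHz, which keeps the mask continuous at the segment edges; the helper names are illustrative, not part of the specification or any API:

```python
import math

# Each segment: (start_kHz, end_kHz, upper-limit formula in dBm/Hz),
# following the structure of Table 8-1
segments = [
    (0.3, 4.0, lambda f: -97.5),
    (4.0, 25.9, lambda f: -92.5 + 21.5 * math.log2(f * 1000.0 / 4000.0)),
    (25.9, 138.0, lambda f: -34.5),
    (138.0, 307.0, lambda f: -34.5 - 48.0 * math.log2(f * 1000.0 / 138000.0)),
    (307.0, 1221.0, lambda f: -90.0),
]

def upper_limit(f_khz):
    """Return the upper limit at f_khz, or None where the mask is undefined."""
    for start, end, formula in segments:
        if start <= f_khz <= end:
            return formula(f_khz)
    return None

print(upper_limit(1.0))    # -97.5, inside the first segment
print(upper_limit(50.0))   # -34.5, inside the flat third segment
```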
Figure 8-2. Segmented Limit Specified Using Formulas
Limit Testing
After you define your mask, you acquire a signal using a DAQ device, with the sample rate set at 1/dx S/s, and compare the signal with the limit. In step 1, you create a limit value at each point where the signal is defined. In step 3, you compare the signal with the limit. For the upper limit, if the data point is less than or equal to the limit point, the test passes; if the data point is greater than the limit point, the test fails. For the lower limit, if the data point is greater than or equal to the limit point, the test passes; if the data point is less than the limit point, the test fails. Figure 8-3 shows the result of limit testing in a continuous mask case. The test signal falls within the mask at all the points at which it is sampled, other than points b and c. Thus, the limit test fails. Point d is not tested because it falls outside the mask.
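The comparison rule of step 3 can be sketched as follows, with None marking points where a limit is undefined and therefore untested (the function name and return convention are illustrative):

```python
def limit_test(signal, upper=None, lower=None):
    """Compare each sample with the limits; samples where a limit is
    undefined (None) are not tested. Returns (passed, failed_indices)."""
    failures = []
    for i, s in enumerate(signal):
        hi = upper[i] if upper is not None else None
        lo = lower[i] if lower is not None else None
        if hi is not None and s > hi:       # above the upper limit: fail
            failures.append(i)
        elif lo is not None and s < lo:     # below the lower limit: fail
            failures.append(i)
    return len(failures) == 0, failures

sig = [0.5, 1.2, 0.8, 2.5]
up = [2.0, 2.0, 2.0, None]   # limits undefined at the last point
lo = [0.0, 0.0, 1.0, None]
print(limit_test(sig, up, lo))  # (False, [2]): 0.8 is below the lower limit
```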
Figure 8-3. Result of Limit Testing with a Continuous Mask
Figure 8-4 shows the result of limit testing in a segmented mask case. All the points fall within the mask. Points b and c are not tested because the mask is undefined at those points. Thus, the limit test passes. Point d is not tested because it falls outside the mask.
Figure 8-4. Result of Limit Testing with a Segmented Mask
Applications
You can use limit mask testing in a wide range of test and measurement applications. For example, you can use limit mask testing to determine that the power spectral density of ADSL signals meets the recommendations in the ANSI T1.413 specification. Refer to the Specifying a Limit Using a Formula section of this chapter for more information about ADSL signal limits. The following sections provide examples of when you can use limit mask testing. In all these examples, the specifications are recommended by standards-generating bodies, such as the CCITT, ITU-T, ANSI, and IEC, to ensure that all test and measurement systems conform to a universally accepted standard. In some other cases, the limit testing specifications are proprietary and are strictly enforced by companies for quality control.
Modem Manufacturing Example
Limit testing is used in modem manufacturing to make sure the transmit spectrum of the line signal meets the V.34 modem specification, as shown in Figure 8-5.
Figure 8-5. Upper and Lower Limit for V.34 Modem Transmitted Spectrum
The ITU-T V.34 recommendation contains specifications for a modem operating at data signaling rates up to 33,600 bits/s. It specifies that the spectrum of the line signal that transmits data conforms to the template shown in Figure 8-5. For example, for a normalized frequency of 1.0, the spectrum must always lie between 3 dB and 1 dB. All modems must meet this specification. A modem manufacturer can set up an automated test system to monitor the transmit spectrum of the signals that the modem outputs. If the spectrum conforms to the specification, the modem passes the test and is ready for customer use. Recommendations such as ITU-T V.34 are essential to ensure interoperability between modems from different manufacturers and to provide high-quality service to customers.
Digital Filter Design Example
You also can use limit mask testing in the area of digital filter design. For example, you might want to design lowpass filters with a passband ripple of 10 dB and a stopband attenuation of 60 dB. You can use limit testing to make sure the frequency response of the filter always meets these specifications. The first step in this process is to specify the limits. You can specify a lower limit of –10 dB in the passband region and an upper limit of –60 dB in the stopband region, as shown in Figure 8-6. After you specify these limits, you can run the actual test repeatedly to make sure that the frequency responses of all the filters meet these specifications.
Figure 8-6. Limit Test of a Lowpass Filter Frequency Response
Pulse Mask Testing Example

The ITU-T G.703 recommendation specifies the pulse mask for signals with bit rates of n × 64 kbits/s, where n is between 2 and 31. These limits are set to properly enable the interconnection of digital network components to form a digital path or connection. Figure 8-7 shows the pulse mask for an interface at 1,544 kbits/s. Signals with this bit rate also are referred to as T1 signals. T1 signals must lie in the mask specified by the upper and lower limits.

Figure 8-7. Pulse Mask Testing on T1/E1 Signals
Part II Mathematics

This part provides information about mathematical concepts commonly used in analysis applications.

• Chapter 9, Curve Fitting, describes how to extract information from a data set to obtain a functional description.
• Chapter 10, Probability and Statistics, describes fundamental concepts of probability and statistics and how to use these concepts to solve real-world problems.
• Chapter 11, Linear Algebra, describes how to use the Linear Algebra VIs to perform matrix computation and analysis.
• Chapter 12, Optimization, describes basic concepts and methods used to solve optimization problems.
• Chapter 13, Polynomials, describes polynomials and operations involving polynomials.
Chapter 9 Curve Fitting

This chapter describes how to extract information from a data set to obtain a functional description. Use the NI Example Finder to find examples of using the Curve Fitting VIs.

Introduction to Curve Fitting

The technique of curve fitting analysis extracts a set of curve parameters or coefficients from a data set to obtain a functional description of the data set. The least squares method of curve fitting fits a curve to a particular data set. Equation 9-1 defines the least square error.

e(a) = [f(x, a) – y(x)]²   (9-1)

where e(a) is the least square error, y(x) is the observed data set, f(x, a) is the functional description of the data set, and a is the set of curve coefficients that best describes the curve. For example, if a = {a0, a1}, the following equation yields the functional description.

f(x, a) = a0 + a1x

The least squares algorithm finds a by solving the system defined by Equation 9-2.

∂e(a)/∂a = 0   (9-2)

To solve the system defined by Equation 9-2, you set up and solve the Jacobian system generated by expanding Equation 9-2. The Curve Fitting VIs automatically set up and solve the Jacobian system and return the set of coefficients that best describes the data set. You can concentrate on the functional description of the data without having to solve the system in Equation 9-2. After you solve the system for a, you can use the functional description f(x, a) to obtain an estimate of the observed data set for any value of x.
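For the straight-line model f(x, a) = a0 + a1x, setting the partial derivatives of the squared error to zero (Equation 9-2) yields a 2 × 2 system with a closed-form solution. The following is a minimal Python sketch, not the Linear Fit VI itself:

```python
def linear_fit(x, y):
    """Least-squares line f(x) = a0 + a1*x: solve the 2x2 normal equations
    obtained by zeroing the partial derivatives of the squared error."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = (sy - a1 * sx) / n
    return a0, a1

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]   # points lying exactly on y = 1 + 2x
print(linear_fit(x, y))    # (1.0, 2.0)
```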
Applications of Curve Fitting

In some applications, parameters such as humidity, temperature, and pressure can affect the data you collect. You can model the statistical data by performing regression analysis and gain insight into the parameters that affect the data. Figure 9-1 shows the block diagram of a VI that uses the Linear Fit VI to fit a line to a set of data points.

Figure 9-1. Fitting a Line to Data

You can modify the block diagram to fit exponential and polynomial curves by replacing the Linear Fit VI with the Exponential Fit VI or the General Polynomial Fit VI. Figure 9-2 shows a multiplot graph of the result of fitting a line to a noisy data set.

Figure 9-2. Fitting a Line to a Noisy Data Set
The practical applications of curve fitting include the following:

• Removing measurement noise
• Filling in missing data points, such as when one or more measurements are missing or improperly recorded
• Interpolating, which is estimating data between data points, such as if the time between measurements is not small enough
• Extrapolating, which is estimating data beyond data points, such as looking for data values before or after a measurement
• Differentiating digital data, such as finding the derivative of the data points by modeling the discrete data with a polynomial and differentiating the resulting polynomial equation
• Integrating digital data, such as finding the area under a curve when you have only the discrete points of the curve
• Obtaining the trajectory of an object based on discrete measurements of its velocity, which is the first derivative, or acceleration, which is the second derivative

General LS Linear Fit Theory

For a given set of observation data, the general least-squares (LS) linear fit problem is to find a set of coefficients that fits the linear model, as shown in Equation 9-3.

yi = b0xi0 + … + bk–1xi,k–1 = Σ (j = 0 to k – 1) bjxij,  i = 0, 1, …, n – 1   (9-3)

where xij is the observed data contained in the observation matrix H, b is the set of coefficients that fit the linear model, n is the number of elements in the set of observed data and the number of rows of H, and k is the number of coefficients.
The following equation defines the observation matrix H.

    | x00      x01      …  x0,k–1    |
H = | x10      x11      …  x1,k–1    |
    | …        …           …         |
    | xn–1,0   xn–1,1   …  xn–1,k–1  |

You can rewrite Equation 9-3 as the following equation.

Y = HB

The general LS linear fit model is a multiple linear regression model. A multiple linear regression model uses several variables, xi0, xi1, …, xi,k–1, to predict one variable, yi. In most analysis situations, you acquire more observation data than coefficients, and Equation 9-3 might not yield all the coefficients in set B. The fit problem becomes finding the coefficient set B that minimizes the difference between the observed data yi and the predicted value zi. Equation 9-4 defines zi.

zi = Σ (j = 0 to k – 1) bjxij   (9-4)

You can use the least chi-square plane method to find the solution set B that minimizes the quantity given by Equation 9-5.

χ² = Σ (i = 0 to n – 1) [(yi – zi)/σi]² = Σ (i = 0 to n – 1) [(yi – Σ (j = 0 to k – 1) bjxij)/σi]² = |H0B – Y0|²   (9-5)

where h0ij = xij/σi, y0i = yi/σi, i = 0, 1, …, n – 1, and j = 0, 1, …, k – 1.
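With σi = 1, minimizing χ² in Equation 9-5 amounts to solving the normal equations HᵀHB = HᵀY. The Python sketch below forms the normal equations and solves them with Gaussian elimination and partial pivoting, a simple stand-in for LU or Cholesky factorization; it is an illustration, not the General LS Linear Fit VI:

```python
def solve_normal_equations(H, y):
    """Solve (H^T H) B = H^T y by Gaussian elimination with partial pivoting."""
    n, k = len(H), len(H[0])
    A = [[sum(H[i][p] * H[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]                                    # H^T H
    b = [sum(H[i][p] * y[i] for i in range(n)) for p in range(k)]  # H^T y
    for col in range(k):                                       # elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    B = [0.0] * k                                              # back-substitute
    for r in range(k - 1, -1, -1):
        s = sum(A[r][c] * B[c] for c in range(r + 1, k))
        B[r] = (b[r] - s) / A[r][r]
    return B

# fit y = b0 + b1*x to four points lying exactly on y = 2 + 3x
H = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [2.0, 5.0, 8.0, 11.0]
B = solve_normal_equations(H, y)
print(B)   # approximately [2.0, 3.0]
```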
In Equation 9-5, σi is the standard deviation. If the measurement errors are independent and normally distributed with constant standard deviation, σi = σ, Equation 9-5 also is the least-square estimation.

You can use the following methods to minimize χ² from Equation 9-5:

• Solve normal equations of the least-square problems using LU or Cholesky factorization.
• Minimize χ² to find the least-square solution of equations.

Solving normal equations involves completing the following steps.

1. Set the partial derivatives of χ² to zero with respect to b0, b1, …, bk–1, as shown by the following equations.

   ∂χ²/∂b0 = 0
   ∂χ²/∂b1 = 0
   …
   ∂χ²/∂bk–1 = 0   (9-6)

2. Derive the equations in Equation 9-6 to the following equation form.

   H0ᵀH0B = H0ᵀY0   (9-7)

   where H0ᵀ is the transpose of H0.

Equations of the form given by Equation 9-7 are called normal equations of the least-square problems. You can solve them using LU or Cholesky factorization algorithms. However, the solution from the normal equations is susceptible to round-off error.

The preferred method of minimizing χ² is to find the least-square solution of equations. Equation 9-8 defines the form of the least-square solution of equations.

H0B = Y0   (9-8)

You can use QR or SVD factorization to find the solution set B for Equation 9-8. For QR factorization, you can use the Householder algorithm, the Givens algorithm, or the Givens 2 algorithm, which also is known as the fast Givens algorithm. Different algorithms can give you different precision, and in some cases, if one algorithm cannot solve the equation, another algorithm might solve it. You can try different algorithms to find the one best suited for the observation data.

Polynomial Fit with a Single Predictor Variable

Polynomial fit with a single predictor variable uses one variable to predict another variable and is a special case of multiple regression. If the observation data sets are {xi, yi}, where i = 0, 1, 2, …, n – 1, Equation 9-9 defines the model for polynomial fit.

yi = Σ (j = 0 to k – 1) bjxi^j = b0 + b1xi + b2xi² + … + bk–1xi^(k–1),  i = 0, 1, …, n – 1   (9-9)

Comparing Equations 9-3 and 9-9 shows that xij = xi^j. In other words, xi0 = xi⁰ = 1, xi1 = xi, xi2 = xi², …, xi,k–1 = xi^(k–1). Because xij = xi^j, you can build the observation matrix H as shown by the following equation.

    | 1  x0    x0²    …  x0^(k–1)    |
H = | 1  x1    x1²    …  x1^(k–1)    |
    | …                              |
    | 1  xn–1  xn–1²  …  xn–1^(k–1)  |

Instead of using xij = xi^j, you also can choose another function formula to fit the data sets {xi, yi}. In general, you can select xij = fj(xi). Here,
fj(xi)
is the function model that you choose to fit your observation data. In polynomial fit, fj(xi) = xi^j. In general, you can build H as shown in the following equation.

    | f0(x0)    f1(x0)    f2(x0)    …  fk–1(x0)    |
H = | f0(x1)    f1(x1)    f2(x1)    …  fk–1(x1)    |
    | …                                            |
    | f0(xn–1)  f1(xn–1)  f2(xn–1)  …  fk–1(xn–1)  |

The following equation defines the fit model.

yi = b0f0(x) + b1f1(x) + … + bk–1fk–1(x)

Curve Fitting in LabVIEW

For the Curve Fitting VIs, the input sequences Y and X represent the data set y(x). Because the input data represents a discrete system, xi is the ith element of the sequence X, and yi is the ith element of the sequence Y. A sample or point in the data set is (xi, yi). Some Curve Fitting VIs return only the coefficients for the curve that best describes the input data, while other Curve Fitting VIs return the fitted curve. Using the VIs that return only coefficients allows you to further manipulate the data. The VIs that return the fitted curve also return the coefficients and the mean squared error (MSE). MSE is a relative measure of the residuals between the expected curve values and the actual observed values. The VIs use the following equation to calculate MSE.

MSE = (1/n) Σ (i = 0 to n – 1) (fi – yi)²

where f is the sequence representing the fitted values, y is the sequence representing the observed values, and n is the number of observed sample points.
Linear Fit

The Linear Fit VI fits experimental data to a straight line of the general form described by the following equation.

y = mx + b

The Linear Fit VI calculates the coefficients a0 and a1 that best fit the experimental data (x[i] and y[i]) to a straight-line model described by the following equation.

y[i] = a0 + a1x[i]

where y[i] is a linear combination of the coefficients a0 and a1.

Exponential Fit

The Exponential Fit VI fits data to an exponential curve of the general form described by the following equation.

y = ae^(bx)

The following equation specifically describes the exponential curve resulting from the exponential fit algorithm.

y[i] = a0e^(a1x[i])

General Polynomial Fit

The General Polynomial Fit VI fits data to a polynomial function of the general form described by the following equation.

y = a + bx + cx² + …

The following equation specifically describes the polynomial function resulting from the general polynomial fit algorithm.

y[i] = a0 + a1x[i] + a2x[i]² + …
General LS Linear Fit

The General LS Linear Fit VI fits data to a line described by the following equation.

y[i] = a0 + a1f1(x[i]) + a2f2(x[i]) + …

where y[i] is a linear combination of the parameters a0, a1, a2, …. In the case of the General LS Linear Fit VI, you can have y[i] that is a linear combination of several coefficients, and each coefficient can have a multiplier of some function of x[i], as shown in the following equations.

y[i] = a0 + a1sin(ωx[i])
y[i] = a0 + a1(x[i])²
y[i] = a0 + a1cos(ωx[i]²)

where ω is the angular frequency. In each of the preceding equations, y[i] is a linear combination of the coefficients a0 and a1, although it might be a nonlinear function of x. You can extend the concept of a linear combination of coefficients further so that the multiplier for a1 is some function of x, as shown in the following equations.

y = a0 + a1sin(ωx)
y = a0 + a1x² + a2cos(ωx²)
y = a0 + a1(3sin(ωx)) + a2x³ + a3/x + …

In each of the preceding equations, y is a linear function of the coefficients. Therefore, you can use the General LS Linear Fit VI to calculate the coefficients of these functional models, representing each model as a linear combination of its coefficients.
Computing Covariance

The General LS Linear Fit VI returns a k × k matrix of covariances between the coefficients ak. The General LS Linear Fit VI uses the following equation to compute the covariance matrix C.

C = (H0ᵀH0)⁻¹

Building the Observation Matrix

When you use the General LS Linear Fit VI, you must build the observation matrix H. For example, Equation 9-10 defines a model for data from a transducer.

y = a0 + a1sin(ωx) + a2cos(ωx) + a3x²   (9-10)

In Equation 9-10, each aj has the following different functions as a multiplier:

• One multiplies a0.
• sin(ωx) multiplies a1.
• cos(ωx) multiplies a2.
• x² multiplies a3.

To build H, set each column of H to the independent functions evaluated at each x value x[i]. If the data set contains 100 x values, the following equation defines H.

    | 1  sin(ωx0)   cos(ωx0)   x0²   |
H = | 1  sin(ωx1)   cos(ωx1)   x1²   |
    | 1  sin(ωx2)   cos(ωx2)   x2²   |
    | …  …          …          …     |
    | 1  sin(ωx99)  cos(ωx99)  x99²  |

If the data set contains N data points and k coefficients (a0, a1, …, ak–1) exist for which to solve, H is an N × k matrix with N rows and k columns. Therefore, the number of rows in H equals the number of data points N, and the number of columns in H equals the number of coefficients k.
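Building the observation matrix for the transducer model of Equation 9-10 is a direct translation of the list of multiplier functions above. A Python sketch (the helper name is illustrative):

```python
import math

def build_observation_matrix(xs, omega):
    """One row per sample; columns are the multiplier functions of
    y = a0 + a1*sin(wx) + a2*cos(wx) + a3*x^2 (Equation 9-10)."""
    return [[1.0,                  # multiplies a0
             math.sin(omega * x),  # multiplies a1
             math.cos(omega * x),  # multiplies a2
             x * x]                # multiplies a3
            for x in xs]

xs = [0.1 * i for i in range(100)]        # a data set of 100 x values
H = build_observation_matrix(xs, omega=2.0)
print(len(H), len(H[0]))                  # 100 rows (N), 4 columns (k)
```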
Nonlinear Levenberg-Marquardt Fit

The nonlinear Levenberg-Marquardt fit method fits data to the curve described by the following equation.

y[i] = f(x[i], a0, a1, a2, …)

where a0, a1, a2, … are the parameters. The nonlinear Levenberg-Marquardt method is the most general curve fitting method and does not require y to have a linear relationship with a0, a1, a2, …. You can use the nonlinear Levenberg-Marquardt method to fit linear or nonlinear curves. However, the most common application of the method is to fit a nonlinear curve, because the general linear fit method is better for linear curve fitting. You must verify the results you obtain with the Levenberg-Marquardt method because the method does not always guarantee a correct result.
Chapter 10 Probability and Statistics

This chapter describes fundamental concepts of probability and statistics and how to use these concepts to solve real-world problems. Use the NI Example Finder to find examples of using the Probability and Statistics VIs.

Statistics

Statistics allows you to summarize data and draw conclusions for the present by condensing large amounts of data into a form that brings out all the essential information and is yet easy to remember. To condense data, single numbers must make the data more intelligible and help draw useful inferences. For example, a sports player participates in 51 games in a season and scores a total of 1,568 points. The total of 1,568 points includes 45 points in Game A, 36 points in Game B, 51 points in Game C, 45 points in Game D, and 40 points in Game E. As the number of games increases, remembering how many points the player scored in each individual game becomes increasingly difficult. If you divide the total number of points that the player scored by the number of games played, you obtain a single number that tells you the average number of points the player scored per game. Equation 10-1 yields the points per game average for the player.

1,568 points / 51 games = 30.7 points per game average   (10-1)

Computing a percentage provides a method for making comparisons. For example, the officials of an American city are considering installing a traffic signal at a major intersection. The purpose of the traffic signal is to protect motorists turning left from oncoming traffic. However, the city has only enough money to fund one traffic signal but has three intersections that potentially need the signal. Traffic engineers study each of the three intersections for a week. The engineers record the total number of cars using the intersection, the number of cars travelling straight through the
intersection, the number of cars making left-hand turns, and the number of cars making right-hand turns. Table 10-1 shows the data for one of the intersections.

Table 10-1. Data for One Major Intersection

Day      Total Number of Cars     Number of Cars   Number of Cars        Number of Cars
         Using the Intersection   Turning Left     Continuing Straight   Turning Right
1        1,258                    528              400                   330
2        1,306                    549              417                   340
3        1,355                    569              434                   352
4        1,227                    515              393                   319
5        1,334                    560              428                   346
6        694                      291              223                   180
7        416                      174              134                   108
Totals   7,590                    3,186            2,429                 1,975

Looking only at the raw data from each intersection might make determining which intersection needs the traffic signal difficult, because the raw numbers can vary widely. However, computing the percentage of cars turning at each intersection provides a common basis for comparison. To obtain the percentage of cars turning left, divide the number of cars turning left by the total number of cars using the intersection and multiply that result by 100. For the intersection whose data is shown in Table 10-1, the following equation gives the percentage of cars turning left.

(3,186 / 7,590) × 100 = 42%

Given the data for the other two intersections, the city officials can obtain the percentage of cars turning left at those two intersections. Converting the raw data to percentages condenses the information for the three intersections into single numbers representing the percentage of cars that turn left at each intersection. The city officials can compare the percentage of cars turning left at each intersection and rank the intersections in order of highest percentage of cars turning left to the lowest percentage of cars turning left.
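The 42% figure follows directly from the totals in Table 10-1 and can be checked in a few lines of Python:

```python
# daily counts from Table 10-1
left_turns  = [528, 549, 569, 515, 560, 291, 174]
straight    = [400, 417, 434, 393, 428, 223, 134]
right_turns = [330, 340, 352, 319, 346, 180, 108]

total_left = sum(left_turns)                                 # 3,186
total_cars = total_left + sum(straight) + sum(right_turns)   # 7,590
percent_left = total_left / total_cars * 100
print(round(percent_left))   # 42
```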
Ranking the intersections can help determine where the traffic signal is needed most. Thus, in a broad sense, the term statistics implies different ways to summarize data to derive useful and important information from it.

Mean

The mean value is the average value for a set of data samples. The following equation defines an input sequence X consisting of n samples.

    X = {x_0, x_1, x_2, x_3, …, x_n−1}

The following equation yields the mean value x̄ for input sequence X.

    x̄ = (1/n)(x_0 + x_1 + x_2 + x_3 + … + x_n−1)

The mean equals the sum of all the sample values divided by the number of samples.

Median

The median of a data sequence is the midpoint value in the sorted version of the sequence. The median is useful for making qualitative statements, such as whether a particular data point lies in the upper or lower portion of an input sequence. The following equation represents the sorted sequence of an input sequence X.

    S = {s_0, s_1, s_2, …, s_n−1}

You can sort the sequence either in ascending order or in descending order. The following equation yields the median value of S.

    x_median = s_i                    if n is odd
    x_median = 0.5(s_(k−1) + s_k)     if n is even        (10-2)

    where i = (n − 1)/2 and k = n/2

Equation 10-3 defines a sorted sequence consisting of an odd number of samples sorted in descending order.

    S = {5, 4, 3, 2, 1}        (10-3)
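The mean and median definitions above can be sketched in Python (an illustration only; the manual computes these with the analysis VIs). The median function follows Equation 10-2, picking the midpoint for odd n and averaging the two midpoints for even n.

```python
def mean(x):
    # Sum of all sample values divided by the number of samples.
    return sum(x) / len(x)

def median(x):
    s = sorted(x)        # Equation 10-2 assumes a sorted sequence
    n = len(s)
    if n % 2 == 1:
        return s[(n - 1) // 2]          # i = (n - 1)/2 for odd n
    k = n // 2
    return 0.5 * (s[k - 1] + s[k])      # average of the two midpoints

print(median([5, 4, 3, 2, 1]))  # 3
print(median([1, 2, 3, 4]))     # 2.5
```

The two printed values match the worked examples in this section.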
In Equation 10-3, the median is the midpoint value 3. Equation 10-4 defines a sorted sequence consisting of an even number of samples sorted in ascending order.

    S = {1, 2, 3, 4}        (10-4)

The sorted sequence in Equation 10-4 has two midpoint values, 2 and 3. Using Equation 10-2 for n even, the following equation yields the median value for the sorted sequence in Equation 10-4.

    x_median = 0.5(s_(k−1) + s_k) = 0.5(2 + 3) = 2.5

Sample Variance and Population Variance

The Standard Deviation and Variance VI can calculate either the sample variance or the population variance. Statisticians and mathematicians prefer to use the sample variance. Engineers prefer to use the population variance. For values of n ≥ 30, both methods produce similar results.

Sample Variance

Sample variance measures the spread, or dispersion, of the sample values. The sample variance s² for an input sequence X equals the sum of the squares of the deviations of the sample values from the mean divided by n − 1, as shown in the following equation.

    s² = (1/(n − 1))[(x_1 − x̄)² + (x_2 − x̄)² + … + (x_n − x̄)²]

    where n > 1 is the number of samples in X and x̄ is the mean of X

The sample variance is always positive, except when all the sample values are equal to each other and, in turn, equal to the mean. You can use the sample variance as a measure of the consistency of the data.
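The sample variance formula above can be sketched directly (an illustrative Python version, not the VI itself):

```python
def sample_variance(x):
    # s^2 = [(x1 - mean)^2 + ... + (xn - mean)^2] / (n - 1), n > 1
    n = len(x)
    if n < 2:
        raise ValueError("need at least two samples")
    m = sum(x) / n
    return sum((v - m) ** 2 for v in x) / (n - 1)

print(sample_variance([1, 2, 3, 4]))  # 1.666... (= 5/3)
print(sample_variance([7, 7, 7]))     # 0.0: all samples equal the mean
```

The second call shows the only case in which the sample variance is not positive: all samples equal to each other and to the mean.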
Population Variance

The population variance σ² for an input sequence X equals the sum of the squares of the deviations of the sample values from the mean divided by n, as shown in the following equation.

    σ² = (1/n)[(x_1 − x̄)² + (x_2 − x̄)² + … + (x_n − x̄)²]

    where n > 1 is the number of samples in X and x̄ is the mean of X

Standard Deviation

The standard deviation s of an input sequence equals the positive square root of the sample variance s², as shown in the following equation.

    s = √(s²)

Mode

The mode of an input sequence is the value that occurs most often in the input sequence. The following equation defines an input sequence X.

    X = {0, 1, 3, 3, 4, 4, 4, 5, 5, 7}

The mode of X is 4 because 4 is the value that occurs most often in X.

Moment about the Mean

The moment about the mean is a measure of the deviation of the elements in an input sequence from the mean. The following equation yields the mth-order moment σ_x^m for an input sequence X.

    σ_x^m = (1/n) Σ_(i=0)^(n−1) (x_i − x̄)^m

    where n is the number of elements in X and x̄ is the mean of X

For m = 2, the moment about the mean equals the population variance σ².
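The quantities defined in the last three sections can be sketched together in Python (illustrative code, not the Standard Deviation and Variance VI). Note how the second-order moment reproduces the population variance.

```python
def moment_about_mean(x, m):
    # m-th order central moment: (1/n) * sum((xi - mean)^m)
    n = len(x)
    xbar = sum(x) / n
    return sum((xi - xbar) ** m for xi in x) / n

def population_variance(x):
    return moment_about_mean(x, 2)   # sigma^2 is the 2nd central moment

def mode(x):
    return max(set(x), key=x.count)  # value occurring most often

X = [0, 1, 3, 3, 4, 4, 4, 5, 5, 7]
print(mode(X))                  # 4
print(population_variance(X))   # 3.64 (mean of X is 3.6)
```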
Skewness

Skewness is a measure of symmetry and corresponds to the third-order moment.

Kurtosis

Kurtosis is a measure of peakedness and corresponds to the fourth-order moment.

Histogram

A histogram is a bar graph that displays frequency data and is an indication of the data distribution. A histogram provides a method for graphically displaying data and summarizing key information. Equation 10-5 defines a data sequence.

    X = {0, 1, 3, 3, 4, 4, 4, 5, 5, 8}        (10-5)

To compute a histogram for X, divide the total range of values into the following eight intervals, or bins, each of which excludes its upper boundary:

• 0–1
• 1–2
• 2–3
• 3–4
• 4–5
• 5–6
• 6–7
• 7–8

The histogram display for X indicates the number of data samples that lie in each interval. Figure 10-1 shows the histogram for the sequence in Equation 10-5.
Figure 10-1. Histogram (bars show the number of samples in each of the eight intervals)

Figure 10-1 shows that no data samples are in the 2–3 and 6–7 intervals. One data sample lies in each of the intervals 0–1, 1–2, and 7–8. Two data samples lie in each of the intervals 3–4 and 5–6. Three data samples lie in the 4–5 interval.

The number of intervals in the histogram affects the resolution of the histogram. A common method of determining the number of intervals to use in a histogram is Sturges' Rule, which is given by the following equation.

    Number of intervals = 1 + 3.3 log(size of X)

Mean Square Error (mse)

The mean square error (mse) is the average of the sum of the squares of the differences between the corresponding elements of two input sequences. The following equation yields the mse for two input sequences X and Y.

    mse = (1/n) Σ_(i=0)^(n−1) (x_i − y_i)²

    where n is the number of data points

You can use the mse to compare two sequences. For example, system S1 receives a digital signal x and produces an output signal y1. System S2 produces y2 when it receives x. Theoretically, y1 = y2. To verify that y1 = y2, you want to compare y1 and y2. Because y1 and y2 both contain a large number of data points, an element-by-element comparison is difficult. Instead, you can calculate the mse of y1 and y2. If the mse is smaller than an acceptable tolerance, y1 and y2 are equivalent.
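Both the histogram binning and the mse comparison described above can be sketched in Python (illustrative only; the manual uses the Histogram and MSE VIs). The binning follows the convention stated earlier: each interval excludes its upper boundary, except that the last bin also accepts the maximum value.

```python
def histogram(x, n_bins, lo, hi):
    # Count samples per interval [lo + i*w, lo + (i+1)*w); the last bin
    # also includes the upper boundary so the maximum value is counted.
    w = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in x:
        i = min(int((v - lo) / w), n_bins - 1)
        counts[i] += 1
    return counts

def mse(x, y):
    # Average of the squared element-by-element differences.
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

X = [0, 1, 3, 3, 4, 4, 4, 5, 5, 8]
print(histogram(X, 8, 0, 8))  # [1, 1, 0, 2, 3, 2, 0, 1]
```

The printed counts match Figure 10-1: empty 2–3 and 6–7 bins, three samples in 4–5, and so on.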
Root Mean Square (rms)

The root mean square (rms) of an input sequence equals the positive square root of the mean of the square of the input sequence. In other words, you can square the input sequence, take the mean of the new squared sequence, and take the square root of that mean. The following equation yields the rms Ψ_x for an input sequence X.

    Ψ_x = √((1/n) Σ_(i=0)^(n−1) x_i²)

    where n is the number of elements in X

Root mean square is a widely used quantity for analog signals. The following equation yields the root mean square voltage V_rms for a sine voltage waveform.

    V_rms = V_p / √2

    where V_p is the peak amplitude of the signal

Probability

In any random experiment, a chance, or probability, always exists that a particular event will or will not occur. You can assign a number between zero and one to an event as an indication of the probability that the event will occur. If you are absolutely sure that the event will occur, its probability is 100% or one. If you are sure that the event will not occur, its probability is zero. In other words, the probability that event A will occur is the ratio of the number of outcomes favorable to A to the total number of equally likely outcomes.

Random Variables

Many experiments generate outcomes that you can interpret in terms of real numbers. Some examples are the number of cars passing a stop sign during a day, the number of voters favoring candidate A, and the number of accidents at a particular intersection. Random variables are the numerical outcomes of an experiment whose values can change from experiment to experiment.
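The rms definition at the start of this section, and the V_p/√2 result for a sine wave, can be checked with a short Python sketch. The peak amplitude and sample count below are illustrative choices, not values from the manual.

```python
import math

def rms(x):
    # Square the sequence, take the mean, then the square root.
    return math.sqrt(sum(v * v for v in x) / len(x))

Vp = 5.0    # peak amplitude (illustrative)
n = 1000    # samples covering exactly one full period
wave = [Vp * math.sin(2 * math.pi * i / n) for i in range(n)]
print(abs(rms(wave) - Vp / math.sqrt(2)) < 1e-9)  # True
```

Because the samples cover exactly one full period, the discrete rms agrees with the analytic V_p/√2 to machine precision.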
Discrete Random Variables

Discrete random variables can take on only a finite number of possible values. For example, if you roll a single unbiased die, six possible events can occur. The roll can result in a 1, 2, 3, 4, 5, or 6. The probability that a 2 will result is one in six, or 0.16666.

Continuous Random Variables

Continuous random variables can take on any value in an interval of real numbers. For example, an experiment measures the life expectancy x of 50 batteries of a certain type. The batteries selected for the experiment come from a larger population of the same type of battery. The value of x can equal any value between zero and the largest observed value, making x a continuous random variable. Figure 10-2 shows the histogram for the observed data.

Figure 10-2. Life Lengths Histogram (x-axis: Life Length in Hundreds of Hours)

Figure 10-2 shows that most of the values for x are between zero and 100 hours. The histogram values drop off smoothly for larger values of x. You can approximate the histogram in Figure 10-2 by an exponentially decaying curve. The exponentially decaying curve is a mathematical model for the behavior of the data sample. The function that models the histogram of the random variable is the probability density function. For example, if you want to know the probability that a randomly selected battery will last longer than 400 hours, you can approximate the probability value by the area under the curve to the right of the value 4. Refer to the Probability
Distribution and Density Functions section of this chapter for more information about the probability density function.

A random variable X is continuous if it can take on an infinite number of possible values associated with intervals of real numbers, and a probability density function f(x) exists such that the following relationships are true.

    f(x) ≥ 0 for all x

    ∫ f(x) dx = 1, integrated from −∞ to ∞

    P(a ≤ X ≤ b) = ∫ f(x) dx, integrated from a to b        (10-6)

The chance that X will assume a specific value X = a is extremely small. The following equation shows the result of solving Equation 10-6 for a specific value of X.

    P(X = a) = ∫ f(x) dx, integrated from a to a, = 0

Because X can assume an infinite number of possible values, the probability of it assuming any one specific value is zero.

Normal Distribution

The normal distribution is a continuous probability distribution. The functional form of the normal distribution is the normal density function. The following equation defines the normal density function f(x).

    f(x) = (1 / (√(2π) s)) e^(−(x − x̄)² / (2s²))
The normal density function has a symmetric bell shape. The following parameters completely determine the shape and location of the normal density function:

• The center of the curve is the mean value, x̄ = 0.
• The spread of the curve is the variance, s² = 1.

If a random variable has a normal distribution with a mean equal to zero and a variance equal to one, the random variable has a standard normal distribution.

Computing the One-Sided Probability of a Normally Distributed Random Variable

The following equation defines the one-sided probability of a normally distributed random variable.

    p = Prob(X ≤ x)

    where p is the one-sided probability, X is a standard normal distribution with the mean value equal to zero and the variance equal to one, and x is the value

The choice of the probability density function is fundamental to obtaining a correct probability value. Suppose you measure the heights of 1,000 randomly selected adult males and obtain a data set S. The histogram distribution of S shows many measurements grouped closely about a mean height, with relatively few very short and very tall males in the population. Therefore, you can closely approximate the histogram with the normal distribution. Next, you want to find the probability that the height of a male in a different set of 1,000 randomly chosen males is greater than or equal to 170 cm. You can use the Normal Distribution VI to compute the one-sided probability p. Complete the following steps to normalize 170 cm and calculate p using the Normal Distribution VI.

1. Subtract the mean from 170 cm.
2. Scale the difference from step 1 by the standard deviation to obtain the normalized x value.
3. Wire the normalized x value to the x input of the Normal Distribution VI and run the VI.
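The three steps above can be sketched outside LabVIEW with the standard normal cumulative distribution, which Python exposes through the error function. The mean and standard deviation of S are not given in the text, so the 175 cm and 10 cm below are assumed values for illustration only.

```python
import math

def normal_cdf(x):
    # One-sided probability p = Prob(X <= x) for a standard normal X.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Assumed (hypothetical) statistics of the height data set S.
mean_height, std_height = 175.0, 10.0

x = (170.0 - mean_height) / std_height   # steps 1 and 2: normalize 170 cm
p = normal_cdf(x)                        # step 3: one-sided probability
print(1.0 - p > 0.5)  # True: P(height >= 170 cm) exceeds one half here
```

With these assumed statistics, 170 cm lies half a standard deviation below the mean, so most of the population is at least that tall.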
In addition to the normal distribution method, you can use the following methods to compute p:

• Chi-Square distribution
• F distribution
• T distribution

Finding x with a Known p

The Inv Normal Distribution VI computes the values x that have a given chance p of lying in a normally distributed sample. For example, you might want to find the heights of males that have a 60% chance of lying in a randomly chosen data set. In addition to the inverse normal distribution method, you can use the following methods to compute x with a known p:

• Inverse Chi-Square distribution
• Inverse F distribution
• Inverse T distribution

Probability Distribution and Density Functions

Equation 10-7 defines the probability distribution function F(x).

    F(x) = ∫ f(µ) dµ, integrated from −∞ to x        (10-7)

    where f(x) is the probability density function, f(x) ≥ 0 for all x in the domain of f, and ∫ f(x) dx = 1 over (−∞, ∞)

By performing differentiation, you can derive the following equation from Equation 10-7.

    f(x) = dF(x)/dx
You can use a histogram to obtain a denormalized discrete representation of f(x). Therefore, to obtain an estimate of F(x) and f(x), normalize the histogram by a factor of ∆x = 1/n and let h_j = x_j. The following equation defines the discrete representation of f(x).

    Σ_(i=0)^(n−1) x_i ∆x = 1

The following equation yields the sum of the elements of the histogram.

    Σ_(l=0)^(m−1) h_l = n

    where m is the number of samples in the histogram and n is the number of samples in the input sequence representing the function

Figure 10-3 shows the block diagram of a VI that generates F(x) and f(x) for Gaussian white noise.

Figure 10-3. Generating Probability Distribution Function and Probability Density Function (block diagram)
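The block diagram in Figure 10-3 cannot be reproduced in text, but the same idea can be sketched in Python: estimate the density f(x) of Gaussian white noise with a normalized histogram, then accumulate it to estimate the distribution F(x). The bin range [−4, 4) and bin count are assumed values, not taken from the manual.

```python
import random

random.seed(0)                       # deterministic noise for the example
samples = [random.gauss(0.0, 1.0) for _ in range(25000)]

n_bins, lo, hi = 50, -4.0, 4.0       # assumed range; not from the manual
dx = (hi - lo) / n_bins
counts = [0] * n_bins
for v in samples:
    if lo <= v < hi:
        counts[int((v - lo) / dx)] += 1

f = [c / (len(samples) * dx) for c in counts]  # density estimate f(x)
F, acc = [], 0.0                               # distribution estimate F(x)
for fi in f:
    acc += fi * dx                             # running integral of f
    F.append(acc)

print(abs(F[-1] - 1.0) < 0.01)  # True: F rises monotonically toward 1.00
```

The accumulated F plays the role of the Integral x(t) VI in the block diagram; differencing F would play the role of the Derivative x(t) VI.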
The VI in Figure 10-3 uses 25,000 samples, 2,500 in each of the 10 loop iterations, to compute the probability distribution function for Gaussian white noise. The Integral x(t) VI computes the probability distribution function. The Derivative x(t) VI performs differentiation on the probability distribution function to compute the probability density function. Figure 10-4 shows the results the VI in Figure 10-3 returns.

Figure 10-4. Input Signal, Probability Distribution Function, and Probability Density Function
Figure 10-4 shows the last block of Gaussian-distributed noise samples, the plot of the probability distribution function F(x), and the plot of the probability density function f(x). The plot of F(x) monotonically increases and is limited to the maximum value of 1.00 as the value of the x-axis increases. The plot of f(x) shows a Gaussian distribution that conforms to the specific pattern of the noise signal.
Chapter 11
Linear Algebra

This chapter describes how to use the Linear Algebra VIs to perform matrix computation and analysis. Use the NI Example Finder to find examples of using the Linear Algebra VIs.

Linear Systems and Matrix Analysis

Systems of linear algebraic equations arise in many applications that involve scientific computations, such as signal processing, computational fluid dynamics, and others. Such systems occur naturally or are the result of approximating differential equations by algebraic equations. It is always necessary to find an accurate solution for the system of equations in a very efficient way. In matrix-vector notation, such a system of linear algebraic equations has the following form.

    Ax = b

    where A is an n × n matrix, b is a given vector consisting of n elements, and x is the unknown solution vector to be determined

Types of Matrices

Whatever the application, a matrix is a 2D array of elements with m rows and n columns. The matrix A shown below is an array of m rows and n columns with m × n elements.

    A = | a(0,0)     a(0,1)     …   a(0,n−1)   |
        | a(1,0)     a(1,1)     …   a(1,n−1)   |
        | …          …          …   …          |
        | a(m−1,0)   a(m−1,1)   …   a(m−1,n−1) |

Here, a(i,j) denotes the (i,j)th element, located in the ith row and the jth column. The elements in the 2D array might be real numbers, complex numbers, functions, or operators. In general, such a matrix is a rectangular matrix. When m = n, so that the
number of rows is equal to the number of columns, the matrix is a square matrix. A row vector is a 1 × n matrix (one row and n columns). An m × 1 matrix (m rows and one column) is a column vector. If all the elements other than the diagonal elements are zero, that is, a(i,j) = 0 for i ≠ j, such a matrix is a diagonal matrix. For example,

    A = | 4 0 0 |
        | 0 5 0 |
        | 0 0 9 |

is a diagonal matrix. A diagonal matrix with all the diagonal elements equal to one is an identity matrix, also known as a unit matrix. If all the elements below the main diagonal are zero, the matrix is an upper triangular matrix. On the other hand, if all the elements above the main diagonal are zero, the matrix is a lower triangular matrix. When all the elements are real numbers, the matrix is a real matrix. On the other hand, when at least one of the elements of the matrix is a complex number, the matrix is a complex matrix.

Determinant of a Matrix

One of the most important attributes of a matrix is its determinant. In the simplest case, the determinant of a 2 × 2 matrix

    A = | a b |
        | c d |

is given by ad − bc. The determinant of a larger square matrix is formed by expanding along a row and taking determinants of the resulting submatrices. For example, if

    A = | 2 5 3 |
        | 6 1 7 |
        | 1 6 9 |

then the determinant of A, denoted by |A|, is

    |A| = 2 |1 7| − 5 |6 7| + 3 |6 1|
            |6 9|     |1 9|     |1 6|

        = 2(−33) − 5(47) + 3(35) = −196
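The cofactor expansion shown above can be sketched in Python for the 3 × 3 case (illustrative code; the Linear Algebra VIs compute determinants of general matrices).

```python
def det2(m):
    # Determinant of a 2x2 matrix: ad - bc.
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    # Cofactor expansion along the first row, as in the example above.
    a, b, c = m[0]
    minor = lambda col: [[row[j] for j in range(3) if j != col]
                         for row in m[1:]]
    return a * det2(minor(0)) - b * det2(minor(1)) + c * det2(minor(2))

A = [[2, 5, 3],
     [6, 1, 7],
     [1, 6, 9]]
print(det3(A))  # -196
```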
The determinant of a diagonal matrix, an upper triangular matrix, or a lower triangular matrix is the product of its diagonal elements. The determinant reveals many important properties of the matrix. For example, if the determinant of the matrix is zero, the matrix is singular. In other words, the matrix above, whose determinant is nonzero, is nonsingular. Refer to the Matrix Inverse and Solving Systems of Linear Equations section of this chapter for more information about singularity and the solution of linear equations and matrix inverses.

Transpose of a Matrix

The transpose of a real matrix is formed by interchanging its rows and columns. If the matrix B represents the transpose of A, denoted by A^T, then b(j,i) = a(i,j). For the matrix A defined above,

    B = A^T = | 2 6 1 |
              | 5 1 6 |
              | 3 7 9 |

A real matrix is a symmetric matrix if the transpose of the matrix is equal to the matrix itself. The example matrix A is not a symmetric matrix. In the case of complex matrices, we define complex conjugate transposition. If a = x + iy, then the complex conjugate is a* = x − iy. If the matrix D represents the complex conjugate transpose of a complex matrix C, then

    D = C^H  ⇒  d(i,j) = c*(j,i)

That is, the matrix D is obtained by replacing every element in C by its complex conjugate and then interchanging the rows and columns of the resulting matrix. If a complex matrix C satisfies the relation C = C^H, C is a Hermitian matrix.

Linear Independence

A set of vectors x_1, x_2, …, x_n is linearly dependent only if there exist scalars α_1, α_2, …, α_n, not all zero, such that

    α_1·x_1 + α_2·x_2 + … + α_n·x_n = 0        (11-1)
In simpler terms, if one of the vectors can be written in terms of a linear combination of the others, the vectors are linearly dependent. If the only set of α_i for which Equation 11-1 holds is α_1 = 0, α_2 = 0, …, α_n = 0, the set of vectors x_1, x_2, …, x_n is linearly independent. In other words, none of the vectors can be written in terms of a linear combination of the others. Given any set of vectors, Equation 11-1 always holds for α_1 = 0, α_2 = 0, …, α_n = 0. Therefore, to show the linear independence of the set, you must show that α_1 = 0, α_2 = 0, …, α_n = 0 is the only set of α_i for which Equation 11-1 holds.

For example, first consider the vectors

    x = | 1 |    y = | 3 |
        | 2 |        | 4 |

α_1 = 0 and α_2 = 0 are the only values for which the relation α_1·x + α_2·y = 0 holds true. Therefore, these two vectors are linearly independent of each other. Now consider the vectors

    x = | 1 |    y = | 2 |
        | 2 |        | 4 |

If α_1 = −2 and α_2 = 1, then α_1·x + α_2·y = 0. So in this case, these two vectors are linearly dependent on each other. You must understand this definition of linear independence of vectors to fully appreciate the concept of the rank of a matrix.

Matrix Rank

The rank of a matrix A, denoted by ρ(A), is the maximum number of linearly independent columns in A. If you look at the example matrix A, you find that all the columns of A are linearly independent of each other. That is, none of the columns can be obtained by forming a linear combination of the other columns. Therefore, the rank of the matrix is 3. Consider one more example matrix, B, where

    B = | 0 1 1 |
        | 1 2 3 |
        | 2 0 2 |
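The rank can be computed by Gaussian elimination: row-reduce the matrix and count the nonzero pivot rows. Here is a minimal sketch using exact rational arithmetic (illustrative code, not the Linear Algebra VIs, which work numerically).

```python
from fractions import Fraction

def rank(matrix):
    # Gaussian elimination; rank = number of nonzero pivot rows.
    m = [[Fraction(v) for v in row] for row in matrix]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # move pivot row into place
        for i in range(r + 1, len(m)):
            factor = m[i][col] / m[r][col]
            m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

B = [[0, 1, 1],
     [1, 2, 3],
     [2, 0, 2]]
print(rank(B))  # 2 -- the third column is the sum of the first two
```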
This matrix has only two linearly independent columns because the third column of B is linearly dependent on the first two columns. Hence, the rank of B is 2. A square matrix has full rank only if its determinant is different from zero, so matrix B is not a full-rank matrix. It can be shown that the number of linearly independent columns of a matrix is equal to the number of linearly independent rows, so the rank can never be greater than the smaller dimension of the matrix. Consequently, if A is an n × m matrix, then

    ρ(A) ≤ min(n, m)

    where min denotes the minimum of the two numbers

In matrix theory, the rank of a square matrix pertains to the highest order nonsingular submatrix that can be formed from it. A matrix is singular if its determinant is zero. So the rank pertains to the highest order submatrix that you can obtain whose determinant is not zero. As an example, consider the 4 × 4 matrix

    B = | 1 2  3 4 |
        | 0 1 −1 0 |
        | 1 0  1 2 |
        | 1 1  0 2 |

For this matrix, det(B) = 0, but the determinant of the 3 × 3 submatrix

    | 1 2  3 |
    | 0 1 −1 |
    | 1 0  1 |

is −4, which is nonzero. Hence, the rank of this matrix is 3.

Magnitude (Norms) of Matrices

You must develop a notion of the magnitude of vectors and matrices to measure errors and sensitivity in solving a linear system of equations. As an example, such linear systems can be obtained from applications in control systems and computational fluid dynamics. A vector norm is a way to assign a scalar quantity to vectors so that they can be compared with each other. It is similar to the concept of magnitude, modulus, or absolute value for scalar numbers. In two dimensions, for example, you cannot directly compare two vectors x = [x_1 x_2] and y = [y_1 y_2], because you might have x_1 > y_1 but x_2 < y_2.
There are several ways to compute the norm of a matrix. These include the 2-norm (Euclidean norm), the 1-norm, the Frobenius norm (F-norm), and the Infinity norm (inf-norm). Each norm has its own physical interpretation. Consider a unit ball containing the origin. The Euclidean norm of a vector is simply the factor by which the ball must be expanded or shrunk in order to encompass the given vector exactly, as shown in Figure 11-1.

Figure 11-1. Euclidean Norm of a Vector (a: a unit ball of radius 1; b: a vector of length 2√2; c: the ball expanded by 2√2)

Figure 11-1a shows a unit ball of radius = 1 unit. Figure 11-1b shows a vector of length √(2² + 2²) = √8 = 2√2. As shown in Figure 11-1c, the unit ball must be expanded by a factor of 2√2 before it can exactly encompass the given vector. Hence, the Euclidean norm of the vector is 2√2.

The norm of a matrix is defined in terms of an underlying vector norm. It is the maximum relative stretching that the matrix does to any vector. With the matrix 2-norm, the unit ball might become an ellipse (an ellipsoid in 3D), with some axes longer than others. The longest axis determines the norm of the matrix.

Some matrix norms are much easier to compute than others. The 1-norm is obtained by finding the sum of the absolute values of all the elements in each column of the matrix. The largest of these sums is the 1-norm. In mathematical terms, the 1-norm is simply the maximum absolute column sum of the matrix.

    ‖A‖₁ = max over j of Σ_(i=0)^(n−1) |a(i,j)|
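The Euclidean vector norm and the column-sum rule above can be sketched in a few lines of Python (illustrative only; the Matrix Norm VI computes these and the harder norms).

```python
import math

def euclidean_norm(v):
    # Factor by which the unit ball must be expanded to reach v exactly.
    return math.sqrt(sum(x * x for x in v))

def one_norm(A):
    # Maximum absolute column sum of the matrix.
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

print(abs(euclidean_norm([2, 2]) - 2 * math.sqrt(2)) < 1e-12)  # True
print(one_norm([[1, 3], [2, 4]]))                              # 7
```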
For example, for the matrix

    A = | 1 3 |
        | 2 4 |        (11-2)

‖A‖₁ = max(3, 7) = 7.

To compute the inf-norm, you add the magnitudes of all the elements in each row of the matrix, and the maximum value that you get is the inf-norm. In other words, the inf-norm of a matrix is the maximum absolute row sum of the matrix.

    ‖A‖∞ = max over i of Σ_(j=0)^(n−1) |a(i,j)|

For the Equation 11-2 example matrix,

    ‖A‖∞ = max(4, 6) = 6

The 2-norm is the most difficult to compute because it is given by the largest singular value of the matrix. Refer to the Matrix Factorization section of this chapter for more information about singular values.

Determining Singularity (Condition Number)

Whereas the norm of the matrix provides a way to measure the magnitude of the matrix, the condition number of a matrix is a measure of how close the matrix is to being singular. The condition number of a square nonsingular matrix is defined as

    cond(A) = ‖A‖_p · ‖A⁻¹‖_p

    where p can be one of the four norm types described in the Magnitude (Norms) of Matrices section of this chapter

The inverse of a square matrix A is a square matrix B such that AB = I, where I is the identity matrix. For example, to find the condition number of a matrix A with p = 2, you can find the 2-norm of A and the 2-norm of the inverse of the matrix A, denoted by A⁻¹, and then multiply them together.
As described earlier in this chapter, the 2-norm is difficult to calculate on paper. You can use the Matrix Norm VI to compute the 2-norm. For example,

    A = | 1 2 |      A⁻¹ = | −2    1   |
        | 3 4 |            |  1.5 −0.5 |

    ‖A‖₂ = 5.4650    ‖A⁻¹‖₂ = 2.7325    cond(A) = 14.9331

The condition number can vary between 1 and infinity. A matrix with a large condition number is nearly singular, while a matrix with a condition number close to 1 is far from being singular. Remember that the condition number of a matrix is always greater than or equal to one, with equality holding for identity and permutation matrices. A permutation matrix is an identity matrix with some rows and columns exchanged. The condition number is a very useful quantity in assessing the accuracy of solutions to linear systems.

The matrix A above is nonsingular. However, consider the matrix

    B = | 1    0.99 |
        | 1.99 2    |

The condition number of this matrix is 47.168, and hence the matrix is close to being singular. A matrix is singular if its determinant is equal to zero. However, the determinant is not a good indicator for assessing how close a matrix is to being singular. For the matrix B above, the determinant (0.0299) is nonzero, yet the large condition number indicates that the matrix is close to being singular.

Basic Matrix Operations and Eigenvalues-Eigenvector Problems

In this section, consider some very basic matrix operations. Two matrices, A and B, are equal if they have the same number of rows and columns and their corresponding elements all are equal. Multiplication of a matrix A by a scalar α is equal to multiplication of all its elements by the scalar. That is,

    C = αA  ⇒  c(i,j) = α·a(i,j)
For example,

    2 × | 1 2 | = | 2 4 |
        | 3 4 |   | 6 8 |

Two (or more) matrices can be added or subtracted only if they have the same number of rows and columns. If both matrices A and B have m rows and n columns, their sum C is an m × n matrix defined as C = A ± B, where c(i,j) = a(i,j) ± b(i,j). For example,

    | 1 2 | + | 2 4 | = | 3 6 |
    | 3 4 |   | 5 1 |   | 8 5 |

For multiplication of two matrices, the number of columns of the first matrix must be equal to the number of rows of the second matrix. If matrix A has m rows and n columns and matrix B has n rows and p columns, their product C is an m × p matrix defined as C = AB, where

    c(i,j) = Σ_(k=0)^(n−1) a(i,k)·b(k,j)

That is, to calculate the element in the ith row and the jth column of C, multiply the elements in the ith row of A by the corresponding elements in the jth column of B, and then add them all. For example,

    | 1 2 | × | 2 4 | = | 12  6 |
    | 3 4 |   | 5 1 |   | 26 16 |

So you multiply the elements of the first row of A by the corresponding elements of the first column of B and add all the results to get the element in the first row and first column of C, and so on.
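The row-by-column rule above can be sketched as a short Python function (an illustration; the Linear Algebra VIs perform these operations on LabVIEW arrays). The second product also demonstrates that matrix multiplication is not commutative.

```python
def matmul(A, B):
    # c[i][j] = sum over k of a[i][k] * b[k][j]; A's column count
    # must equal B's row count.
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[2, 4], [5, 1]]
print(matmul(A, B))  # [[12, 6], [26, 16]]
print(matmul(B, A))  # [[14, 20], [8, 14]] -- AB != BA
```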
This is shown pictorially in Figure 11-2.

Figure 11-2. Matrix Multiplication (row R_i of the first matrix combines with column C_j of the second to give element R_i · C_j of the product)

Matrix multiplication, in general, is not commutative; that is, AB ≠ BA. Also, multiplication of a matrix by an identity matrix results in the original matrix.

Dot Product and Outer Product

If X represents one vector and Y represents another vector, the dot product of these two vectors is obtained by multiplying the corresponding elements of each vector and adding the results. This is denoted by

    X • Y = Σ_(i=0)^(n−1) x_i·y_i

    where n is the number of elements in X and Y

Both vectors must have the same number of elements. The dot product is a scalar quantity and has many practical applications. For example, consider the vectors a = 2i + 4j and b = 2i + j in a two-dimensional rectangular coordinate system, as shown in Figure 11-3.
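The dot product formula above can be sketched in Python; the angle between the two vectors follows from the standard relation cos α = (a • b)/(|a||b|) (illustrative code, not a Linear Algebra VI).

```python
import math

def dot(x, y):
    # Multiply corresponding elements and add the results.
    return sum(a * b for a, b in zip(x, y))

a = [2, 4]   # a = 2i + 4j
b = [2, 1]   # b = 2i + j
cos_alpha = dot(a, b) / (math.hypot(*a) * math.hypot(*b))
alpha = math.degrees(math.acos(cos_alpha))
print(dot(a, b))        # 8
print(round(alpha, 1))  # 36.9
```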
Figure 11-3. Vectors a and b (a = 2i + 4j and b = 2i + j, separated by the angle α)

The dot product of these two vectors is given by

    d = | 2 | • | 2 | = (2 × 2) + (4 × 1) = 8
        | 4 |   | 1 |

The angle α between these two vectors is given by

    α = inv cos((a • b) / (|a| |b|)) = inv cos(8/10) = 36.86°

    where |a| denotes the magnitude of a

As a second application, consider a body on which a constant force a acts, as shown in Figure 11-4. The work W done by a in displacing the body is defined as the product of |d| and the component of a in the direction of the displacement d. That is,

    W = |a| |d| cos α = a • d

Figure 11-4. Force Vector (force a applied at angle α to the displacement d)

On the other hand, the outer product of these two vectors is a matrix. The (i,j)th element of this matrix is obtained using the formula

    a(i,j) = x_i × y_j

For example,

    | 1 | × | 3 4 | = | 3 4 |
    | 2 |             | 6 8 |

Eigenvalues and Eigenvectors

To understand eigenvalues and eigenvectors, start with the classical definition. Given an n × n matrix A, the problem is to find a scalar λ and a nonzero vector x such that

    Ax = λx        (11-3)

In Equation 11-3, λ is an eigenvalue, and x is the eigenvector that corresponds to the eigenvalue. Calculating the eigenvalues and eigenvectors are fundamental principles of linear algebra and allow you to solve many problems, such as systems of differential equations, when you understand what they represent. Consider an eigenvector x of a matrix A as a nonzero vector that does not rotate when x is multiplied by A, except perhaps to point in precisely the opposite direction. x may change length or reverse its direction, but it will not turn sideways. Consider the following example. One of the eigenvectors of the matrix A, where

    A = | 2 3 |
        | 3 5 |

is

    x = | 0.62 |
        | 1.00 |
Multiplying the matrix A and the vector x simply causes the vector x to be expanded by a factor of 6.85. Hence, the value 6.85 is one of the eigenvalues of the matrix A. In other words, an eigenvector of a matrix determines a direction in which the matrix expands or shrinks any vector lying in that direction by a scalar multiple, and the expansion or contraction factor is given by the corresponding eigenvalue.

The following are some important properties of eigenvalues and eigenvectors:
• The eigenvalues of a matrix are not necessarily all distinct. In other words, a matrix can have multiple eigenvalues.
• Eigenvectors can be scaled arbitrarily. For any constant α, the vector αx also is an eigenvector with eigenvalue λ because A(αx) = αAx = λαx.
• All the eigenvalues of a real matrix need not be real. However, complex eigenvalues of a real matrix must occur in complex conjugate pairs.
• A real symmetric matrix always has real eigenvalues and eigenvectors.
• The eigenvalues of a diagonal matrix are its diagonal entries, and the eigenvectors are the corresponding columns of an identity matrix of the same dimension.

There are many practical applications of the eigenvalue problem in the field of science and engineering. For example, the stability of a structure and its natural modes and frequencies of vibration are determined by the eigenvalues and eigenvectors of an appropriate matrix. Eigenvalues also are very useful in analyzing numerical methods, such as convergence analysis of iterative methods for solving systems of algebraic equations and the stability analysis of methods for solving systems of differential equations.

A generalized eigenvalue problem is to find a scalar λ and a nonzero vector x such that Ax = λBx, where B is another n × n matrix.

The EigenValues and Vectors VI has an Input Matrix input, which is an N × N real square matrix. The matrix type input specifies the type of the input matrix: a value of 0 indicates a general matrix, and a value of 1 indicates a symmetric matrix.
It is computationally expensive to compute both the eigenvalues and the eigenvectors. Depending on your particular application, you might want to compute just the eigenvalues or both the eigenvalues and the eigenvectors. The output option input specifies what needs to be computed: a value of 0 indicates that only the eigenvalues need to be computed, and a value of 1 indicates that both the eigenvalues and the eigenvectors should be computed. So it is important that you use the output option input of the EigenValues and Vectors VI carefully. Also, a symmetric matrix needs less computation than a nonsymmetric matrix, so choose the matrix type carefully. A general matrix has no special property such as symmetry or triangular structure.

Matrix Inverse and Solving Systems of Linear Equations

The inverse of a square matrix A, denoted by A⁻¹, is a square matrix such that

    A⁻¹A = AA⁻¹ = I

where I is the identity matrix. In general, you can find the inverse of only a square matrix. However, you can compute the pseudoinverse of a rectangular matrix. Refer to the Matrix Factorization section of this chapter for more information about the pseudoinverse of a rectangular matrix.

The inverse of a matrix exists only if the determinant of the matrix is not zero, that is, only if the matrix is nonsingular. A matrix is singular if it has any one of the following equivalent properties:
• The inverse of the matrix does not exist.
• The determinant of the matrix is zero.
• The rows or columns of A are linearly dependent.
• Az = 0 for some vector z ≠ 0.

Solutions of Systems of Linear Equations

In matrix-vector notation, a system of linear equations has the form Ax = b, where A is an n × n matrix and b is a given n-vector. The aim is to determine x, the unknown solution n-vector. Whether such a solution exists and whether it is unique depends on the singularity or nonsingularity of the matrix A.
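For a 2 × 2 matrix, the equivalence between a zero determinant and a missing inverse is easy to see directly. The helper below is an illustrative sketch of mine, not a recommended way to solve systems (explicit inverses are numerically fragile, as discussed below):

```python
def det2(M):
    """Determinant of a 2 x 2 matrix [[a, b], [c, d]] = ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inverse2(M):
    """Inverse of a 2 x 2 matrix; it exists only when det(M) != 0."""
    d = det2(M)
    if d == 0:
        raise ValueError("matrix is singular: no inverse exists")
    a, b = M[0]
    c, e = M[1]
    return [[e / d, -b / d], [-c / d, a / d]]

singular = [[1.0, 2.0], [2.0, 4.0]]   # rows linearly dependent
assert det2(singular) == 0            # determinant is zero, so no inverse

A = [[2.0, 3.0], [3.0, 5.0]]
Ainv = inverse2(A)                    # det(A) = 1, so A is nonsingular
```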
If the matrix is nonsingular, its inverse A⁻¹ exists, and the system Ax = b has a unique solution, x = A⁻¹b, regardless of the value of b. On the other hand, if the matrix is singular, the number of solutions is determined by the right-hand-side vector b. If A is singular and Ax = b, then A(x + γz) = b for any scalar γ, where the vector z is as in the previous definition. Thus, if a singular system has a solution, the solution cannot be unique.

Explicitly computing the inverse of a matrix is prone to numerical inaccuracies. Therefore, you should not solve a linear system of equations by multiplying the inverse of the matrix A by the known right-hand-side vector. The general strategy to solve such a system of equations is to transform the original system into one whose solution is the same as that of the original system but is easier to compute. One way to do so is to use the Gaussian Elimination technique, which has three basic steps.

First, express the matrix A as a product

    A = LU

where L is a unit lower triangular matrix and U is an upper triangular matrix. Such a factorization is LU factorization. Given this, the linear system Ax = b can be expressed as LUx = b. Such a system then can be solved by first solving the lower triangular system Ly = b for y by forward-substitution. This is the second step in the Gaussian Elimination technique. For example, if

    L = |a 0|    y = |p|    b = |r|
        |b c|        |q|        |s|

then

    p = r/a,  q = (s − bp)/c
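The forward-substitution step generalizes from this 2 × 2 example to any lower triangular system. A minimal sketch, with example values I chose for illustration:

```python
def forward_substitution(L, b):
    """Solve the lower triangular system L y = b by forward substitution:
    each y[i] uses only the already-computed entries y[0..i-1]."""
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

# The 2 x 2 case from the text: L = [[a, 0], [b, c]], b = [r, s],
# giving p = r/a and q = (s - b*p)/c.
a, bb, c = 2.0, 4.0, 5.0
r, s = 6.0, 22.0
y = forward_substitution([[a, 0.0], [bb, c]], [r, s])
# y[0] = 6/2 = 3, then y[1] = (22 - 4*3)/5 = 2
```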
The first element of y can be determined easily due to the lower triangular nature of the matrix L. Then you can use this value to compute the remaining elements of the unknown vector sequentially, hence the name forward-substitution.

The final step involves solving the upper triangular system Ux = y by back-substitution. For example, if

    U = |a b|    x = |m|    y = |p|
        |0 c|        |n|        |q|

then

    n = q/c,  m = (p − bn)/a

In this case, the last element of x can be determined easily and then used to determine the other elements sequentially, hence the name back-substitution.

So far, this chapter has described the case of square matrices. Because a nonsquare matrix is necessarily singular, the system of equations must have either no solution or a nonunique solution. In such a situation, you usually find a unique solution x that satisfies the linear system in an approximate sense. You can use the Linear Algebra VIs to compute the inverse of a matrix, compute the LU decomposition of a matrix, and solve a system of linear equations. If the input matrix is square but does not have a full rank (a rank-deficient matrix), the VI finds the least square solution x. The least square solution is the one that minimizes the norm of Ax − b. The same also holds true for nonsquare matrices.

Matrix Factorization

The Matrix Inverse and Solving Systems of Linear Equations section of this chapter describes how a linear system of equations can be transformed into a system whose solution is simpler to compute. The basic idea was to factorize the input matrix into the multiplication of several simpler matrices, which in turn helps to minimize numerical inaccuracies. The LU decomposition technique factors the input matrix as a product of upper and lower triangular matrices. Other commonly used factorization methods are Cholesky, QR, and the Singular Value Decomposition (SVD).

It is important to identify the input matrix properly, as it helps avoid unnecessary computations. The four possible matrix types are general matrices, positive definite matrices, and lower and upper triangular matrices. A real matrix is positive definite only if it is symmetric and its quadratic form xᵀAx is positive for all nonzero vectors x.
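Returning to the final step of the Gaussian Elimination technique, back-substitution on an upper triangular system can be sketched the same way as forward-substitution, working from the last unknown upward. The example values are assumptions of mine:

```python
def back_substitution(U, y):
    """Solve the upper triangular system U x = y by back substitution:
    each x[i] uses only the already-computed entries x[i+1..n-1]."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

# The 2 x 2 case from the text: U = [[a, b], [0, c]], y = [p, q],
# giving n = q/c and m = (p - b*n)/a.
a, b, c = 2.0, 3.0, 4.0
p, q = 14.0, 8.0
x = back_substitution([[a, b], [0.0, c]], [p, q])
# x[1] = 8/4 = 2, then x[0] = (14 - 3*2)/2 = 4
```

Combining an LU factorization with these two routines solves Ax = b without ever forming A⁻¹.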
If the input matrix A is symmetric and positive definite, an LU factorization can be computed such that A = UᵀU, where U is an upper triangular matrix. This is Cholesky factorization. This method requires only about half the work and half the storage compared to LU factorization of a general matrix by Gaussian Elimination. You can determine if a matrix is positive definite by using the Test Positive Definite VI.

The QR factorization technique factors a matrix as the product of an orthogonal matrix Q and an upper triangular matrix R, that is, A = QR. A matrix Q is orthogonal if its columns are orthonormal, that is, if QᵀQ = I, the identity matrix. QR factorization is useful for both square and rectangular matrices. A number of algorithms are possible for QR factorization, such as the Householder transformation, the Givens transformation, and the Fast Givens transformation.

The Singular Value Decomposition (SVD) method decomposes a matrix into the product of three matrices:

    A = USVᵀ

U and V are orthogonal matrices, and S is a diagonal matrix whose diagonal values are called the singular values of A. The singular values of A are the nonnegative square roots of the eigenvalues of AᵀA, and the columns of U and V, which are called the left and right singular vectors, are orthonormal eigenvectors of AAᵀ and AᵀA, respectively. SVD is useful for solving analysis problems such as computing the rank, norm, condition number, and pseudoinverse of matrices.

You can use these factorization methods to solve many matrix problems, such as solving a linear system of equations, inverting a matrix, and finding the determinant of a matrix.

Pseudoinverse

The pseudoinverse of a scalar σ is defined as 1/σ if σ ≠ 0, and zero otherwise. In the case of scalars, the pseudoinverse is the same as the inverse. You now can define the pseudoinverse of a diagonal matrix by transposing the matrix and then taking the scalar pseudoinverse of each entry. Then the pseudoinverse of a general real m × n matrix A, denoted by A†, is given by

    A† = VS†Uᵀ

The pseudoinverse exists regardless of whether the matrix is square or rectangular. If A is square and nonsingular, the pseudoinverse is the same as the usual matrix inverse. You can use the PseudoInverse Matrix VI to compute the pseudoinverse of real and complex matrices.
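The scalar and diagonal-matrix pseudoinverse rules can be written out directly. The following sketch is my own, not the PseudoInverse Matrix VI; it builds S† for a diagonal m × n matrix from the scalar rule:

```python
def scalar_pinv(sigma, tol=1e-12):
    """Pseudoinverse of a scalar: 1/sigma if sigma != 0, else 0."""
    return 0.0 if abs(sigma) < tol else 1.0 / sigma

def diag_pinv(S, m, n):
    """Pseudoinverse of an m x n diagonal matrix given by its diagonal
    entries S: transpose the shape to n x m, then take the scalar
    pseudoinverse of each diagonal entry."""
    out = [[0.0] * m for _ in range(n)]
    for i, s in enumerate(S):
        out[i][i] = scalar_pinv(s)
    return out

# A 3 x 2 diagonal matrix with singular values 2 and 0: its
# pseudoinverse is the 2 x 3 diagonal matrix with entries 0.5 and 0.
Sp = diag_pinv([2.0, 0.0], 3, 2)
```

With an SVD A = USVᵀ in hand, the full pseudoinverse A† = VS†Uᵀ is then just two matrix products.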
Chapter 12 Optimization

This chapter describes basic concepts and methods used to solve optimization problems. Refer to Appendix A, References, for a list of references to more information about optimization.

Introduction to Optimization

Optimization is the search for a set of parameters that minimize a function. For example, you can use optimization to define an optimal set of parameters for the design of a specific application, such as the optimal parameters for designing a control mechanism for a system or the conditions that minimize the cost of a manufacturing process.

Generally, optimization problems involve a set of possible solutions X and the objective function f(x), also known as the cost function. f(x) is the function of the variable or variables you want to minimize or maximize, and X is the constraint set. The optimization process either minimizes or maximizes f(x) until reaching the optimal value for f(x). This chapter describes optimization in terms of minimizing f(x).

When minimizing f(x), the optimal solution x* ∈ X satisfies the following condition:

    f(x*) ≤ f(x)  ∀x ∈ X    (12-1)

The optimization process searches for the value of x* that minimizes f(x), subject to the constraint x* ∈ X. A value that satisfies the conditions defined in Equation 12-1 is a global minimum. Refer to the Local and Global Minima section of this chapter for more information about global minima.

In the case of maximization, x* satisfies the following condition:

    f(x*) ≥ f(x)  ∀x ∈ X

A value satisfying the preceding condition is a global maximum.
Constraints on the Objective Function

The presence and structure of any constraints on the value of f(x) influence the selection of the algorithm you use to solve an optimization problem. Certain algorithms solve only unconstrained optimization problems. If the value of f(x) has any of the following constraints, the optimal value of f(x) must satisfy the condition the constraint defines:
• Equality constraints, such as G_i(x) = 4 (i = 1, …, m_e)
• Inequality constraints, such as G_i(x) ≤ 4 (i = m_e + 1, …, m)
• Lower and upper level boundaries, such as x_l, x_u

Note: Currently, LabVIEW does not include VIs you can use to solve optimization problems in which the value of the objective function has constraints.

Discrete Optimization Problems

A finite solution set X and a combinatorial nature characterize discrete optimization problems. A combinatorial nature refers to the fact that several solutions to the problem exist, each consisting of a different combination of parameters. Planning a route to several destinations so you travel the minimum distance typifies a combinatorial optimization problem. Linear programming problems are discrete optimization problems.

Continuous Optimization Problems

An infinite and continuous set X characterizes continuous optimization problems. Nonlinear programming problems are continuous optimization problems.

Linear and Nonlinear Programming Problems

The most common method of categorizing optimization problems is as either a linear programming problem or a nonlinear programming problem. Linear and nonlinear programming are subsets of mathematical programming, and the objective of mathematical programming is the same as optimization: maximizing or minimizing f(x).

Note: In the context of optimization, the term programming does not refer to computer programming. Programming also refers to scheduling or planning.

However, whether an optimization problem is linear or nonlinear does influence the selection of the algorithm you use to solve the problem.
Solving Problems Iteratively

Algorithms for solving optimization problems use an iterative process. Beginning at a user-specified starting point, the algorithms establish a search direction. Each iteration of the algorithm proceeds along the search direction toward the optimal solution by solving subproblems. Finding the optimal solution terminates the iterative process of the algorithm. Use the accuracy input of the Optimization VIs to specify the accuracy of the optimal solution.

As the number of design variables increases, the complexity of the optimization problem increases, and computational overhead increases due to the size and number of subproblems the optimization algorithm must solve to find the optimal solution. Because of the computational overhead associated with highly complex problems, consider limiting the number of iterations allocated to find the optimal solution.

Linear Programming

Linear programming problems have the following characteristics:
• Linear objective function
• Solution set X with a polyhedron shape defined by linear inequality constraints
• Continuous f(x)
• Partially combinatorial structure

Solving linear programming problems involves finding the optimal value of f(x) where f(x) is a linear combination of variables, as shown in Equation 12-2:

    f(x) = a_1x_1 + … + a_nx_n    (12-2)

The value of f(x) in Equation 12-2 can have the following constraints:
• Primary constraints of x_1 ≥ 0, …, x_n ≥ 0
• Additional constraints of M = m_1 + m_2 + m_3, consisting of:
• m_1 constraints of the form a_i1x_1 + … + a_inx_n ≤ b_i, (b_i ≥ 0), i = 1, …, m_1
• m_2 constraints of the form a_j1x_1 + … + a_jnx_n ≥ b_j, (b_j ≥ 0), j = m_1 + 1, …, m_1 + m_2
• m_3 constraints of the form a_k1x_1 + … + a_knx_n = b_k, (b_k ≥ 0), k = m_1 + m_2 + 1, …, M

Any vector x that satisfies all the constraints on the value of f(x) constitutes a feasible answer to the linear programming problem. The vector yielding the best result for f(x) is the optimal solution. The following relationship represents the standard form of the linear programming problem:

    min { cᵀx : Ax = b, x ≥ 0 }

where x ∈ IRⁿ is the vector of unknowns, c ∈ IRⁿ is the cost vector, and A ∈ IR^(m×n) is the constraint matrix.

Linear Programming Simplex Method

A simplex describes the solution set X for a linear programming problem. The constraints on the value of f(x) define the hyperplanes that form the polygonal surface of the simplex. The hyperplanes intersect at vertices along the surface of the simplex. At least one member of solution set X is at a vertex of the polyhedron that describes X, and the linear nature of f(x) means the optimal solution is at one of the vertices of the simplex. The linear programming simplex method iteratively moves from one vertex to the adjoining vertex until moving to an adjoining vertex no longer yields a more optimal solution.

Note: Although both the linear programming simplex method and the nonlinear downhill simplex method use the concept of a simplex, the methods have nothing else in common. Refer to the Downhill Simplex Method section of this chapter for information about the downhill simplex method.

Nonlinear Programming

Nonlinear programming problems have either a nonlinear f(x) or a solution set X defined by nonlinear equations and inequalities. Nonlinear programming is a broad category of optimization problems and includes the following subcategories:
• Quadratic programming problems
• Least-squares problems
• Convex problems
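Returning to linear programming, the fact that the optimum lies at a vertex of the polyhedron can be seen directly in a small two-variable problem. The brute-force sketch below enumerates the vertices rather than walking them the way the simplex method does; it is my own illustration, and the constraint values are made up for the example:

```python
from itertools import combinations

# Constraints as a*x + b*y <= c, with x >= 0 and y >= 0 rewritten
# as -x <= 0 and -y <= 0.
constraints = [
    (-1.0, 0.0, 0.0),   # x >= 0
    (0.0, -1.0, 0.0),   # y >= 0
    (1.0, 1.0, 4.0),    # x + y <= 4
    (1.0, 2.0, 6.0),    # x + 2y <= 6
]

def feasible(pt, eps=1e-9):
    px, py = pt
    return all(a * px + b * py <= c + eps for a, b, c in constraints)

def intersect(c1, c2):
    """Intersection of the boundary lines a*x + b*y = c, by Cramer's rule."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None          # parallel boundaries: no vertex
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

# Vertices of the polyhedron are feasible intersections of constraint pairs.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]

f = lambda p: 3 * p[0] + 2 * p[1]    # linear objective to maximize
best = max(vertices, key=f)
# best is the vertex (4, 0), where f = 12
```

The simplex method reaches the same vertex without enumerating all of them, which matters when the number of constraints is large.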
In general, nonlinear programming problems are continuous optimization problems, so the solution set X for a nonlinear programming problem might be infinitely large.

Impact of Derivative Use on Search Method Selection

When you select a search method, consider whether the method uses derivatives, which can help you determine the suitability of the method for a particular optimization problem. The search methods that use derivatives, such as the gradient search methods, work best with problems in which the objective function is continuous in its first derivative. The downhill simplex method, also known as the Nelder-Mead method, uses only evaluations of f(x) to find the optimal solution. Because it uses only evaluations of f(x), the downhill simplex method is a good choice for problems with pronounced nonlinearity or with problems containing a significant number of discontinuities.

Line Minimization

The process of iteratively searching along a vector for the minimum value on the vector is line minimization, or line searching. Nonlinear programming search algorithms use line minimization to solve the subproblems leading to an optimal value for f(x). The search algorithm searches along a vector until it reaches the minimum value on the vector. After the search algorithm reaches the minimum on one vector, the search continues along another vector, usually orthogonal to the first vector. The line search continues along the new vector until reaching its minimum value. The line minimization process continues until the search algorithm finds the optimal solution.

Local and Global Minima

The goal of any optimization problem is to find a global optimal solution. In practice, however, you might not be able to find a global optimum for f(x). Therefore, you solve most nonlinear programming problems by finding a local optimum for f(x).
Line minimization can help establish a search direction or verify that the chosen search direction is likely to produce an optimal solution.

Global Minimum

In terms of solution set X, x* is a global minimum of f over X if it satisfies the following relationship:

    f(x*) ≤ f(x)  ∀x ∈ X

Local Minimum

A local minimum is a minimum of the function over a subset of the domain. In terms of solution set X, x* is a local minimum of f over X if x* ∈ X and an ε > 0 exists so that the following relationship is true:

    f(x*) ≤ f(x)  ∀x ∈ X with ‖x − x*‖ < ε

where ‖x‖ = √(x′x) denotes the vector norm. Figure 12-1 illustrates a function of x where the domain is any value between 32 and 65.

Figure 12-1. Domain of X (32, 65)

In Figure 12-1, B is the global minimum because f(x*) ≤ f(x) for all x ∈ [32, 65]. A is a local minimum because you can find an ε > 0, for example ε = 1, such that f(x*) ≤ f(x) for all x in the domain with ‖x − x*‖ < ε. Similarly, C is a local minimum.

Downhill Simplex Method

The downhill simplex method, developed by Nelder and Mead, uses a simplex and performs function evaluations without derivatives.

Note: Although the downhill simplex method and the linear programming simplex method use the concept of a simplex, the methods have nothing else in common. Refer to the Linear Programming Simplex Method section of this chapter for information about the linear programming simplex method and the geometry of the simplex.

A nondegenerate simplex encloses a finite volume of N dimensions. If you take any point of the nondegenerate simplex as the origin of the simplex, the remaining N points of the simplex define vector directions spanning the N-dimensional space. Most practical applications involve solution sets that are nondegenerate simplexes.

The downhill simplex method requires that you define an initial simplex by specifying N + 1 starting points. No effective means of determining the initial starting point exists, so you must use your judgement about the best location from which to start. After deciding upon an initial starting point P0, you can use Equation 12-3 to determine the other points needed to define the initial simplex:

    P_i = P_0 + λe_i    (12-3)

where e_i is a unit vector and λ is an estimate of the characteristic length scale of the problem.

Starting with the initial simplex defined by the points from Equation 12-3, the downhill simplex method performs a series of reflections. A reflection moves from a point on the simplex through the opposite face of the simplex to a point where the function f is smaller. The configuration of the reflections conserves the volume of the simplex, which maintains the nondegeneracy of the simplex. The method continues to perform reflections until the function value reaches a predetermined tolerance.

Because of the multidimensional nature of the downhill simplex method, the value it finds for f(x) might not be the optimal solution. You can verify that the value for f(x) is the optimal solution by repeating the process. When you repeat the process, reinitialize the method to N + 1 starting points using Equation 12-3, and use the optimal solution from when you first ran the method as P0.
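A compact, simplified sketch of the downhill simplex iteration is shown below, including the initial simplex of Equation 12-3. The reflection, expansion, contraction, and shrink rules follow the general Nelder-Mead scheme rather than the exact implementation in the Optimization VIs, and the test function is one I chose for illustration:

```python
def downhill_simplex(f, p0, lam=1.0, tol=1e-10, max_iter=1000):
    """Simplified Nelder-Mead: build the initial simplex P_i = P_0 + lam*e_i
    (Equation 12-3), then repeatedly move the worst point through the
    centroid of the remaining face."""
    n = len(p0)
    pts = [list(p0)] + [
        [x + (lam if i == j else 0.0) for j, x in enumerate(p0)]
        for i in range(n)
    ]
    for _ in range(max_iter):
        pts.sort(key=f)                       # best first, worst last
        if abs(f(pts[-1]) - f(pts[0])) < tol:
            break
        centroid = [sum(p[i] for p in pts[:-1]) / n for i in range(n)]
        worst = pts[-1]
        refl = [2 * c - w for c, w in zip(centroid, worst)]   # reflection
        if f(refl) < f(pts[0]):
            exp = [3 * c - 2 * w for c, w in zip(centroid, worst)]
            pts[-1] = exp if f(exp) < f(refl) else refl       # expansion
        elif f(refl) < f(pts[-2]):
            pts[-1] = refl
        else:
            contr = [(c + w) / 2 for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                pts[-1] = contr                               # contraction
            else:  # shrink the whole simplex toward the best point
                best = pts[0]
                pts = [best] + [[(x + b) / 2 for x, b in zip(p, best)]
                                for p in pts[1:]]
    pts.sort(key=f)
    return pts[0]

quad = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
opt = downhill_simplex(quad, [0.0, 0.0])
# converges to the minimum near (1, -2)
```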
Golden Section Search Method

The golden section search method finds a local minimum of a 1D function by bracketing the minimum. Bracketing a minimum requires a triplet of points

    a < b < c  such that  f(b) < f(a) and f(b) < f(c)    (12-4)

Because the relationship in Equation 12-4 is true, the minimum of the function is within the interval (a, c). The search method starts by choosing a new point x either between a and b or between b and c. For example, choose a point x between b and c and evaluate f(x). If f(b) < f(x), the new bracketing triplet is a < b < x. If f(b) > f(x), the new bracketing triplet is b < x < c. In each instance, the middle point, b or x, is the current optimal minimum found during the current iteration of the search.

Choosing a New Point x in the Golden Section

Given that a < b < c, point b is a fractional distance W between a and c, as shown in the following equations:

    (b − a)/(c − a) = W
    (c − b)/(c − a) = 1 − W

A new point x is an additional fractional distance Z beyond b, as shown in Equation 12-5:

    (x − b)/(c − a) = Z    (12-5)

Given Equation 12-5, the next bracketing triplet has either a length of W + Z relative to the current bracketing triplet or a length of 1 − W. To minimize the possible worst case, choose Z such that the two lengths are equal, as shown in Equation 12-6:

    W + Z = 1 − W
    Z = 1 − 2W    (12-6)

Given Equation 12-6, Equation 12-7 is true, so the new x is the point in the interval symmetric to b:

    c − x = b − a    (12-7)

You can infer from Equation 12-7 that x is within the larger segment, because Z is positive only if W < 1/2. Scale similarity requires that W, the optimal fraction from the previous iteration, and Z, the fraction for the current iteration, place x at the same fraction of the segment (b, c) as b occupies in (a, c), as shown in Equation 12-8:

    Z/(1 − W) = W    (12-8)
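The bracketing update and the golden-ratio placement of the new point x can be sketched as follows; the test function is an arbitrary parabola I chose for illustration:

```python
import math

def golden_section_minimize(f, a, b, c, tol=1e-8):
    """Narrow a bracketing triplet a < b < c with f(b) < f(a) and
    f(b) < f(c) until the interval (a, c) is smaller than tol."""
    W = (3 - math.sqrt(5)) / 2          # golden fraction ~ 0.38197
    while c - a > tol:
        # place x a fraction W into the larger of (a, b) and (b, c)
        if (c - b) > (b - a):
            x = b + W * (c - b)
            if f(x) < f(b):
                a, b = b, x             # new triplet b < x < c
            else:
                c = x                   # new triplet a < b < x
        else:
            x = b - W * (b - a)
            if f(x) < f(b):
                c, b = b, x             # new triplet a < x < b
            else:
                a = x                   # new triplet x < b < c
    return b

m = golden_section_minimize(lambda t: (t - 2.0) ** 2 + 1.0, 0.0, 1.0, 5.0)
# m is close to 2, the minimum of (t - 2)^2 + 1
```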
Equations 12-6 and 12-8 yield the following quadratic equation:

    W² − 3W + 1 = 0

so that

    W = (3 − √5)/2 ≈ 0.38197    (12-9)

Therefore, the middle point b of the optimal bracketing interval a < b < c is a fractional distance of 0.38197 from one of the end points and a fractional distance of 0.61803 from the other end point. The fractions 0.38197 and 0.61803 comprise the golden mean, or golden section, of the Pythagoreans. On each iteration, the golden section search method uses the bracketing triplet and measures from point b to find a new point x a fractional distance of 0.38197 into the larger interval, either (a, b) or (b, c). After the search method converges to the self-replicating golden section, each new function evaluation brackets the minimum to an interval only 0.61803 times the size of the preceding interval. Even when starting with an initial bracketing triplet whose segments are not within the golden section, the process of successively choosing a new point x at the golden mean quickly causes the method to converge linearly to the correct, self-replicating golden section.

Gradient Search Methods

Gradient search methods determine a search direction by using information about the slope of f(x). The search direction points toward the most probable location of the minimum. After the gradient search method establishes the search direction, it uses iterative descent to move toward the minimum. The iterative descent process starts at a point x0, which is an estimate of the best starting point, and successively produces vectors x1, x2, … that attempt to decrease f to its minimum, as shown in the following relationship:

    f(x_{k+1}) < f(x_k),  k = 0, 1, …

where k is the iteration number, f(x_{k+1}) is the objective function value at iteration k + 1, and f(x_k) is the objective function value at iteration k. Thus f decreases with each iteration, and successively decreasing f improves the current estimate of the solution.
The following equations and relationships provide a general definition of the gradient search method of solving nonlinear programming problems:

    x_{k+1} = x_k + α_k d_k,  k = 0, 1, …    (12-10)

where d_k is the search direction and α_k is the step size. In Equation 12-10, if the gradient of the objective function ∇f(x_k) ≠ 0, the gradient search method needs a positive value for α_k and a value for d_k that fulfills the following relationship:

    ∇f(x_k)′d_k < 0

Iterations of gradient search methods continue until x_{k+1} = x_k, that is, until the method reaches a stationary point. When a gradient search method begins on or encounters a stationary point, the method stops at the stationary point.

Caveats about Converging to an Optimal Solution

A global minimum is a value for f(x) that satisfies the relationship described in Equation 12-1. Ideally, iteratively decreasing f converges to a global minimum for f(x). If f is convex, the stationary point is a global minimum. However, if f is not convex, the stationary point might not be a global minimum. In practice, convergence rarely proceeds to a global minimum for f(x) because of the presence of local minima that are not global. Local minima attract gradient search methods because the form of f near the current iterate, and not the global structure of f, determines the downhill course the method takes. Therefore, if you have little information about the locations of local minima, you might have to start the gradient search method from several starting points.

Terminating Gradient Search Methods

Because a gradient search method does not produce convergence at a global minimum, you must decide upon an error tolerance ε that assures that the point at which the gradient search method stops is at least close to a local minimum. Unfortunately, no explicit rules exist for determining an absolutely accurate ε. The selection of a value for ε is somewhat arbitrary and based on an estimate about the value of the optimal solution.
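A minimal steepest-descent sketch of Equation 12-10 follows; the fixed step size and the test function are assumptions of mine. The iteration stops when the gradient is close to zero, that is, near a stationary point:

```python
def gradient_descent(grad, x0, alpha=0.1, eps=1e-10, max_iter=10000):
    """Iterate x_{k+1} = x_k + alpha * d_k with the steepest-descent
    choice d_k = -grad(x_k), which satisfies grad(x_k)' d_k < 0."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(v) for v in g) < eps:     # gradient ~ 0: stationary point
            break
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# Minimize f(x, y) = (x - 3)^2 + 2*(y + 1)^2, so grad = (2(x-3), 4(y+1)).
grad = lambda p: [2 * (p[0] - 3.0), 4 * (p[1] + 1.0)]
opt = gradient_descent(grad, [0.0, 0.0])
# converges to the stationary point (3, -1); because this f is convex,
# the stationary point is the global minimum
```

With a nonconvex f, the same loop would stop at whichever local minimum attracts the chosen starting point, which is why restarting from several points is advised above.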
Use the accuracy input of the Optimization VIs to specify a value for ε. The nonlinear programming optimization VIs iteratively compare the difference between the highest and lowest input values to the value of accuracy until two consecutive approximations do not differ by more than the value of accuracy. When two consecutive approximations do not differ by more than the value of accuracy, the VI stops.

Conjugate Direction Search Methods

Conjugate direction methods attempt to find the minimum of f(x) by defining a direction set of vectors such that minimizing along one vector does not interfere with minimizing along another vector, which prevents indefinite cycling through the direction set. When you minimize a function f along direction u, the gradient of f is perpendicular to u at the line minimum.

If P is the origin of a coordinate system with coordinates x, you can approximate f by the Taylor series of f, as shown in Equation 12-11:

    f(x) = f(P) + Σ_i (∂f/∂x_i)x_i + (1/2)Σ_{i,j} (∂²f/∂x_i∂x_j)x_i x_j + …
         ≈ c − b·x + (1/2)x·A·x    (12-11)

where

    c ≡ f(P),  b ≡ −∇f|_P,  [A]_ij ≡ (∂²f/∂x_i∂x_j)|_P

The components of the matrix A are the second partial derivatives of f; matrix A is the Hessian matrix of f at P. In this approximation, the following equation gives the gradient of f:

    ∇f = Ax − b
The following equation shows how the gradient ∇f changes as you move by a small step δx:

    δ(∇f) = A(δx)

After the search method reaches the minimum by moving in direction u, it moves in a new direction v. To fulfill the condition that minimization along one vector does not interfere with minimization along another vector, the gradient of f must remain perpendicular to u, as much as possible, as shown in Equation 12-12:

    0 = u·δ(∇f) = u·A·v    (12-12)

When Equation 12-12 is true for two vectors u and v, u and v are conjugate vectors. When Equation 12-12 is true pairwise for all members of a set of vectors, the set of vectors is a conjugate set. Performing successive line minimizations of a function along a conjugate set of vectors prevents the search method from having to repeat the minimization along any member of the conjugate set. If a conjugate set of vectors contains N linearly independent vectors, performing N line minimizations arrives at the minimum for functions having the quadratic form shown in Equation 12-11. If a function does not have exactly the form of Equation 12-11, repeated cycles of N line minimizations eventually converge quadratically to the minimum.

Conjugate Gradient Search Methods

At an N-dimensional point P, the conjugate gradient search methods calculate the function f(P) and the gradient ∇f(P). The conjugate gradient search method attempts to find the minimum of f(x) by searching along a gradient that is conjugate to the previous gradient and, as far as possible, to all previous gradients. The Fletcher-Reeves method and the Polak-Ribiere method are the two most common conjugate gradient search methods. The following theorems serve as the basis for each method.

Theorem A

Theorem A has the following conditions:
• A is a symmetric, positive-definite, n × n matrix.
• g0 is an arbitrary vector, and h0 = g0.
• The following equations define the two sequences of vectors for i = 0, 1, 2, ….

g_{i+1} = g_i − λ_i A h_i   (12-13)
h_{i+1} = g_{i+1} + γ_i h_i   (12-14)

where the chosen values for λ_i and γ_i make g_{i+1}·g_i = 0 and h_{i+1}·A·h_i = 0, as shown in the following equations.

λ_i = (g_i · g_i)/(g_i · A · h_i)
γ_i = −(g_{i+1} · A · h_i)/(h_i · A · h_i)

If the denominators equal zero, take λ_i = 0 and γ_i = 0.

• The following equations are true for all i ≠ j.

g_i · g_j = 0   (12-15)
h_i · A · h_j = 0   (12-16)

The elements in the sequence that Equation 12-13 produces are mutually orthogonal. The elements in the sequence that Equation 12-14 produces are mutually conjugate. Because Equation 12-15 is true, you can rewrite λ_i and γ_i as the following equations.

λ_i = (g_i · h_i)/(h_i · A · h_i) = (g_i · g_i)/(h_i · A · h_i)   (12-17)

γ_i = (g_{i+1} · g_{i+1})/(g_i · g_i) = ((g_{i+1} − g_i) · g_{i+1})/(g_i · g_i)   (12-18)

Theorem B

The following theorem defines a method for constructing the vector sequences from Equations 12-13 and 12-14 when the Hessian matrix A is unknown:
• g_i is the vector sequence defined by Equation 12-13.
• h_i is the vector sequence defined by Equation 12-14.
• Approximate f as the quadratic form given by the following relationship.

f(x) ≈ c − b·x + (1/2) x·A·x

• g_i = −∇f(P_i) for some point P_i.
• Proceed from P_i in the direction h_i to the local minimum of f at point P_{i+1}.
• Set the value for g_{i+1} according to Equation 12-19.

g_{i+1} = −∇f(P_{i+1})   (12-19)

The vector g_{i+1} that Equation 12-19 yields is the same as the vector that Equation 12-13 yields when the Hessian matrix A is known. Therefore, you can optimize f without knowledge of the Hessian matrix A and without the computational resources to calculate and store the Hessian matrix A. You construct the direction sequence h_i with line minimization of the gradient vector and the latest vector in the g sequence.

Difference between Fletcher-Reeves and Polak-Ribiere

Both the Fletcher-Reeves method and the Polak-Ribiere method use Theorem A and Theorem B. However, the Fletcher-Reeves method uses the first term from Equation 12-18 for γ_i, as shown in Equation 12-20.

γ_i = (g_{i+1} · g_{i+1})/(g_i · g_i)   (12-20)

The Polak-Ribiere method uses the second term from Equation 12-18 for γ_i, as shown in Equation 12-21.

γ_i = ((g_{i+1} − g_i) · g_{i+1})/(g_i · g_i)   (12-21)

Equation 12-20 equals Equation 12-21 for functions with exact quadratic forms. However, most functions in practical applications do not have exact quadratic forms. Therefore, after you find the minimum for the quadratic form, you might need another set of iterations to find the actual minimum.
When the Polak-Ribiere method reaches the minimum for the quadratic form, it resets the direction h along the local gradient, essentially starting the conjugate-gradient process again. Therefore, the Polak-Ribiere method can make the transition to additional iterations more efficiently than the Fletcher-Reeves method.
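The Theorem A sequences can be exercised directly when the Hessian A is known. The following Python sketch (an illustrative analogue, not the NI implementation; the function name `cg_quadratic` is invented) minimizes the quadratic form of Equation 12-11 using the λ_i of Equation 12-17 and the Fletcher-Reeves γ_i of Equation 12-20; for an exact quadratic, the Polak-Ribiere form of Equation 12-21 produces the same value.

```python
import numpy as np

def cg_quadratic(A, b, x0, max_iter=100, tol=1e-16):
    """Minimize f(x) = (1/2) x.A.x - b.x for a symmetric positive-definite A.

    g is the negated gradient, h the conjugate direction (Theorem A).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float)
    g = b - A @ x                      # g_0 = -grad f(x_0)
    h = g.copy()                       # h_0 = g_0
    for _ in range(max_iter):
        gg = g @ g
        if gg < tol:                   # gradient ~ 0: at the minimum
            break
        lam = gg / (h @ (A @ h))       # Equation 12-17
        x = x + lam * h                # line minimum along h
        g_new = g - lam * (A @ h)      # Equation 12-13
        gamma = (g_new @ g_new) / gg   # Equation 12-20 (Fletcher-Reeves)
        h = g_new + gamma * h          # Equation 12-14
        g = g_new
    return x

# For a quadratic form, N line minimizations reach the minimum of f,
# which satisfies A x = b (the gradient A x - b vanishes there).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_min = cg_quadratic(A, b, np.zeros(2))
```

For a 2 × 2 system the loop terminates in at most two line minimizations, consistent with the N-step property of conjugate sets.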
Chapter 13
Polynomials

Polynomials have many applications in various areas of engineering and science, such as curve fitting, system identification, and control design. This chapter describes polynomials and operations involving polynomials.

General Form of a Polynomial

A univariate polynomial is a mathematical expression involving a sum of powers in one variable multiplied by coefficients. Equation 13-1 shows the general form of an nth-order polynomial.

P(x) = a_0 + a_1 x + a_2 x^2 + … + a_n x^n   (13-1)

where P(x) is the nth-order polynomial, the highest power n is the order of the polynomial if a_n ≠ 0, and a_0, a_1, …, a_n are the constant coefficients of the polynomial, which can be either real or complex.

You can rewrite Equation 13-1 in its factored form, as shown in Equation 13-2.

P(x) = a_n(x − r_1)(x − r_2) … (x − r_n)   (13-2)

where r_1, r_2, …, r_n are the roots of the polynomial. The root r_i of P(x) satisfies the following equation.

P(x)|_{x = r_i} = 0,   i = 1, 2, …, n

In general, P(x) might have repeated roots, such that Equation 13-3 is true.

P(x) = a_n (x − r_1)^{k_1} (x − r_2)^{k_2} … (x − r_l)^{k_l} (x − r_{l+1})(x − r_{l+2}) … (x − r_{l+j})   (13-3)

The following conditions are true for Equation 13-3:
• r_1, r_2, …, r_l are the repeated roots of the polynomial
• k_i is the multiplicity of the root r_i, i = 1, 2, …, l
• r_{l+1}, r_{l+2}, …, r_{l+j} are the nonrepeated roots of the polynomial
• k_1 + k_2 + … + k_l + j = n
A polynomial of order n must have n roots, counting multiplicity. If the polynomial coefficients are all real, the roots of the polynomial are either real or occur in complex conjugate pairs.
Basic Polynomial Operations
The basic polynomial operations include the following operations:
• Finding the order of a polynomial
• Evaluating a polynomial
• Adding, subtracting, multiplying, or dividing polynomials
• Determining the composition of a polynomial
• Determining the greatest common divisor of two polynomials
• Determining the least common multiple of two polynomials
• Calculating the derivative of a polynomial
• Integrating a polynomial
• Finding the number of real roots of a real polynomial
The following equations define two polynomials used in the following sections.

P(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 = a_3(x − p_1)(x − p_2)(x − p_3)   (13-4)
Q(x) = b_0 + b_1 x + b_2 x^2 = b_2(x − q_1)(x − q_2)   (13-5)
Order of Polynomial
The largest exponent of the variable determines the order of a polynomial. The order of P(x) in Equation 13-4 is three because of the x^3 term. The order of Q(x) in Equation 13-5 is two because of the x^2 term.
Polynomial Evaluation
Polynomial evaluation determines the value of a polynomial for a particular value of x, as shown by the following equation.

P(x)|_{x = x_0} = a_0 + a_1 x_0 + a_2 x_0^2 + a_3 x_0^3 = a_0 + x_0(a_1 + x_0(a_2 + x_0 a_3))
Evaluating an nth-order polynomial in this nested form requires n multiplications and n additions.
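The nested evaluation above is Horner's scheme. A minimal Python sketch (illustrative only; the function name `horner_eval` is invented, and coefficients are stored in ascending order of power, the convention this chapter recommends):

```python
def horner_eval(coeffs, x0):
    """Evaluate a0 + a1*x + ... + an*x**n at x0 with n multiplies and n adds.

    coeffs holds the coefficients in ascending order of power.
    """
    result = 0.0
    for a in reversed(coeffs):     # nest from the highest power inward
        result = result * x0 + a
    return result

# P(x) = 5 - 3x - x^2 + 2x^3 evaluated at x0 = 2: 5 - 6 - 4 + 16 = 11
value = horner_eval([5.0, -3.0, -1.0, 2.0], 2.0)
```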
Polynomial Addition
The addition of two polynomials involves adding together coefficients whose variables have the same exponent. The following equation shows the result of adding together the polynomials defined by Equations 13-4 and 13-5.

P(x) + Q(x) = (a_0 + b_0) + (a_1 + b_1)x + (a_2 + b_2)x^2 + a_3 x^3
Polynomial Subtraction
Subtracting one polynomial from another involves subtracting coefficients whose variables have the same exponent. The following equation shows the result of subtracting the polynomials defined by Equations 13-4 and 13-5.

P(x) − Q(x) = (a_0 − b_0) + (a_1 − b_1)x + (a_2 − b_2)x^2 + a_3 x^3
Polynomial Multiplication
Multiplying one polynomial by another polynomial involves multiplying each term of one polynomial by each term of the other polynomial. The following equations show the result of multiplying the polynomials defined by Equations 13-4 and 13-5.

P(x)Q(x) = (a_0 + a_1 x + a_2 x^2 + a_3 x^3)(b_0 + b_1 x + b_2 x^2)
= a_0(b_0 + b_1 x + b_2 x^2) + a_1 x(b_0 + b_1 x + b_2 x^2) + a_2 x^2(b_0 + b_1 x + b_2 x^2) + a_3 x^3(b_0 + b_1 x + b_2 x^2)
= a_3 b_2 x^5 + (a_3 b_1 + a_2 b_2)x^4 + (a_3 b_0 + a_2 b_1 + a_1 b_2)x^3 + (a_2 b_0 + a_1 b_1 + a_0 b_2)x^2 + (a_1 b_0 + a_0 b_1)x + a_0 b_0
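On ascending-order coefficient arrays, addition is element-wise with padding and multiplication is a convolution of the two arrays. A small illustrative sketch (function names invented):

```python
def poly_add(p, q):
    """Add coefficient lists (ascending powers), padding the shorter one."""
    n = max(len(p), len(q))
    p = list(p) + [0] * (n - len(p))
    q = list(q) + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply coefficient lists: the result is the convolution of p and q."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b       # term a*x^i times b*x^j lands at x^(i+j)
    return out

P = [1, -3, 4, 2]   # 1 - 3x + 4x^2 + 2x^3
Q = [1, -2, 1]      # 1 - 2x + x^2
sum_pq = poly_add(P, Q)
prod_pq = poly_mul(P, Q)
```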
Polynomial Division
Dividing the two polynomials P(x) and Q(x) results in the quotient U(x) and remainder V(x), such that the following equation is true.

P(x) = Q(x)U(x) + V(x)

For example, the following equations define polynomials P(x) and Q(x).

P(x) = 5 − 3x − x^2 + 2x^3   (13-6)
Q(x) = 1 − 2x + x^2   (13-7)

Complete the following steps to divide P(x) by Q(x).

1. Divide the highest order term in Equation 13-6 by the highest order term in Equation 13-7, as shown in Equation 13-8.

   2x^3 / x^2 = 2x   (13-8)

2. Multiply the result of Equation 13-8 by Q(x) from Equation 13-7, as shown in Equation 13-9.

   2xQ(x) = 2x − 4x^2 + 2x^3   (13-9)

3. Subtract the product of Equation 13-9 from P(x).

   (2x^3 − x^2 − 3x + 5) − (2x^3 − 4x^2 + 2x) = 3x^2 − 5x + 5

   The highest order term becomes 3x^2.

4. Repeat step 1 through step 3 using 3x^2 as the highest term of the remaining dividend.
   a. Divide 3x^2 by the highest order term in Equation 13-7, as shown in Equation 13-10.

      3x^2 / x^2 = 3   (13-10)

   b. Multiply the result of Equation 13-10 by Q(x) from Equation 13-7, as shown in Equation 13-11.

      3Q(x) = 3 − 6x + 3x^2   (13-11)

   c. Subtract the result of Equation 13-11 from 3x^2 − 5x + 5.

      (3x^2 − 5x + 5) − (3x^2 − 6x + 3) = x + 2

The accumulated quotient is 2x + 3 and the remaining term is x + 2.
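The steps above can be carried out mechanically on ascending-order coefficient arrays. A minimal sketch (the name `poly_divmod` is invented; it assumes the divisor has order at least one):

```python
def poly_divmod(p, q):
    """Divide p by q (coefficient lists, ascending powers).

    Returns (quotient, remainder) such that p = q*quotient + remainder,
    mirroring the step-by-step long division described in the text.
    """
    rem = list(p)
    dq = len(q) - 1                     # order of the divisor
    quot = [0.0] * max(len(p) - dq, 1)
    while len(rem) - 1 >= dq and any(rem):
        shift = len(rem) - 1 - dq
        factor = rem[-1] / q[-1]        # divide the leading terms
        quot[shift] = factor
        for i, c in enumerate(q):       # subtract factor * x^shift * q(x)
            rem[i + shift] -= factor * c
        rem.pop()                       # leading term is now exactly zero
    return quot, rem or [0.0]

# P(x) = 5 - 3x - x^2 + 2x^3 divided by Q(x) = 1 - 2x + x^2
quotient, remainder = poly_divmod([5.0, -3.0, -1.0, 2.0], [1.0, -2.0, 1.0])
```

For the worked example this reproduces the quotient 3 + 2x and the remainder 2 + x.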
Because the order of the remainder x + 2 is lower than the order of Q(x), the polynomial division procedure stops. The following equations give the quotient polynomial U(x) and the remainder polynomial V(x) for the division of Equation 13-6 by Equation 13-7.

U(x) = 3 + 2x
V(x) = 2 + x

Polynomial Composition

Polynomial composition involves replacing the variable x in a polynomial with another polynomial. For example, replacing x in Equation 13-4 with the polynomial from Equation 13-5 results in the following equation.

P(Q(x)) = a_0 + a_1 Q(x) + a_2 (Q(x))^2 + a_3 (Q(x))^3 = a_0 + Q(x){a_1 + Q(x)[a_2 + a_3 Q(x)]}

where P(Q(x)) denotes the composite polynomial.

Greatest Common Divisor of Polynomials

The greatest common divisor of two polynomials P(x) and Q(x) is a polynomial R(x) = gcd(P(x), Q(x)) that has the following properties:
• R(x) divides P(x) and Q(x).
• Any polynomial C(x) that divides both P(x) and Q(x) also divides R(x).

Suppose R(x) is a common factor of polynomials P(x) and Q(x), as shown in Equations 13-12 and 13-13.

P(x) = U(x)R(x)   (13-12)
Q(x) = V(x)R(x)   (13-13)

The following conditions are true for Equations 13-12 and 13-13:
• U(x), V(x), and R(x) are polynomials.
• U(x) and R(x) are factors of P(x), so P(x) is a multiple of U(x) and of R(x).
• V(x) and R(x) are factors of Q(x), so Q(x) is a multiple of V(x) and of R(x).
• R(x) is a common factor of polynomials P(x) and Q(x).
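The greatest common divisor defined above can be computed by Euclid's division algorithm, which the text describes next. A minimal Python sketch (names `poly_rem` and `poly_gcd` are invented; tolerances are arbitrary illustrative choices):

```python
def poly_rem(p, q):
    """Remainder of polynomial division, coefficients in ascending powers."""
    rem = list(p)
    dq = len(q) - 1
    while len(rem) - 1 >= dq and any(abs(c) > 1e-12 for c in rem):
        shift = len(rem) - 1 - dq
        factor = rem[-1] / q[-1]
        for i, c in enumerate(q):
            rem[i + shift] -= factor * c
        rem.pop()                       # leading term eliminated
    while len(rem) > 1 and abs(rem[-1]) < 1e-12:
        rem.pop()                       # strip negligible leading terms
    return rem

def poly_gcd(p, q):
    """Greatest common divisor via Euclid's algorithm, returned monic."""
    p, q = list(p), list(q)
    while any(abs(c) > 1e-9 for c in q):
        p, q = q, poly_rem(p, q)        # R_{n-1} = R_n * Q_{n+1} + R_{n+1}
    return [c / p[-1] for c in p]       # normalize the leading coefficient

# gcd of (x - 1)(x + 2) = -2 + x + x^2 and (x - 1)(x + 3) = -3 + 2x + x^2
g = poly_gcd([-2.0, 1.0, 1.0], [-3.0, 2.0, 1.0])
```

Here the shared factor x − 1 is recovered as the monic coefficient list [−1, 1].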
If P(x) and Q(x) have the common factor R(x), and if R(x) is divisible by any other common factor of P(x) and Q(x) such that the division does not result in a remainder, R(x) is the greatest common divisor of P(x) and Q(x). If the greatest common divisor R(x) of polynomials P(x) and Q(x) is equal to a constant, P(x) and Q(x) are coprime.

You can find the greatest common divisor of two polynomials by using Euclid's division algorithm, an iterative procedure of polynomial division. If the order of P(x) is larger than the order of Q(x), you can complete the following steps to find the greatest common divisor R(x).

1. Divide P(x) by Q(x) to obtain the quotient polynomial Q1(x) and remainder polynomial R1(x).

   P(x) = Q(x)Q1(x) + R1(x)

2. Divide Q(x) by R1(x) to obtain the new quotient polynomial Q2(x) and new remainder polynomial R2(x).

   Q(x) = R1(x)Q2(x) + R2(x)

3. Divide R1(x) by R2(x) to obtain Q3(x) and R3(x), and continue in the same way.

   R1(x) = R2(x)Q3(x) + R3(x)
   R2(x) = R3(x)Q4(x) + R4(x)
   …
   Rn−1(x) = Rn(x)Qn+1(x)

If the remainder polynomial becomes zero, the greatest common divisor R(x) of polynomials P(x) and Q(x) equals Rn(x).

Least Common Multiple of Two Polynomials

Finding the least common multiple of two polynomials involves finding the smallest polynomial that is a multiple of each polynomial. If L(x) is a multiple of both P(x) and Q(x), L(x) is a common multiple of P(x) and Q(x). In addition, if L(x) has the lowest order among
all the common multiples of P(x) and Q(x), L(x) is the least common multiple of P(x) and Q(x). If L(x) is the least common multiple of P(x) and Q(x) and R(x) is the greatest common divisor of P(x) and Q(x), dividing the product of P(x) and Q(x) by R(x) obtains L(x), as shown by the following equation.

L(x) = P(x)Q(x)/R(x) = U(x)R(x)V(x)R(x)/R(x) = U(x)V(x)R(x)

Derivatives of a Polynomial

Finding the derivative of a polynomial involves finding the sum of the derivatives of the terms of the polynomial. The Newton-Raphson method of finding the zeros of an arbitrary equation is an application where you need to determine the derivative of a polynomial. Equation 13-14 defines an nth-order polynomial.

T(x) = c_0 + c_1 x + c_2 x^2 + … + c_n x^n   (13-14)

The first derivative of T(x) is a polynomial of order n − 1, as shown by the following equation.

(d/dx) T(x) = c_1 + 2c_2 x + 3c_3 x^2 + … + n c_n x^{n−1}

The second derivative of T(x) is a polynomial of order n − 2, as shown by the following equation.

(d^2/dx^2) T(x) = 2c_2 + 6c_3 x + … + n(n−1) c_n x^{n−2}

The following equation defines the kth derivative of T(x).

(d^k/dx^k) T(x) = k! c_k + ((k+1)!/1!) c_{k+1} x + ((k+2)!/2!) c_{k+2} x^2 + … + (n!/(n−k)!) c_n x^{n−k}

where k ≤ n.
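The differentiation rule above, together with the corresponding integration rule of the next section (each coefficient c_i maps to c_i/(i+1)), acts term-by-term on an ascending-order coefficient array. An illustrative sketch (function names invented):

```python
def poly_derivative(coeffs):
    """First derivative of c0 + c1 x + ... + cn x^n: each c_i picks up i."""
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

def poly_integral(coeffs, constant=0.0):
    """Indefinite integral; `constant` is the arbitrary constant c."""
    return [constant] + [c / (i + 1) for i, c in enumerate(coeffs)]

def poly_definite_integral(coeffs, a, b):
    """Definite integral from a to b via the indefinite integral."""
    F = poly_integral(coeffs)
    eval_at = lambda x: sum(c * x**i for i, c in enumerate(F))
    return eval_at(b) - eval_at(a)

# T(x) = 1 + 2x + 3x^2
d = poly_derivative([1.0, 2.0, 3.0])                       # 2 + 6x
area = poly_definite_integral([1.0, 2.0, 3.0], 0.0, 1.0)   # x + x^2 + x^3 at 1
```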
Integrals of a Polynomial

Finding the integral of a polynomial involves the summation of the integrals of the terms of the polynomial.

Indefinite Integral of a Polynomial

The following equation yields the indefinite integral of the polynomial T(x) from Equation 13-14.

∫ T(x) dx = c + c_0 x + (1/2) c_1 x^2 + … + (1/(n+1)) c_n x^{n+1}

where c can be an arbitrary constant, because the derivative of a constant is zero. For convenience, you can set c to zero.

Definite Integral of a Polynomial

Subtracting the evaluations of the indefinite integral at the two limits obtains the definite integral of the polynomial, as shown by the following equation.

∫_a^b T(x) dx = [c_0 x + (1/2) c_1 x^2 + … + (1/(n+1)) c_n x^{n+1}]|_{x=a}^{x=b}

Number of Real Roots of a Real Polynomial

For a real polynomial, you can find the number of real roots of the polynomial over a certain interval by applying the Sturm function. If P_0(x) = P(x) and

P_1(x) = (d/dx) P(x),
the following equation defines the Sturm function.

P_i(x) = −[P_{i−2}(x) − P_{i−1}(x) · ⌊P_{i−2}(x)/P_{i−1}(x)⌋],   i = 2, 3, …

where P_i(x) is the Sturm function and ⌊P_{i−2}(x)/P_{i−1}(x)⌋ represents the quotient polynomial resulting from the division of P_{i−2}(x) by P_{i−1}(x). You can calculate P_i(x) until it becomes a constant.

For example, the following equations show the calculation of the Sturm function over the interval (−2, 1).

P_0(x) = P(x) = 1 − 4x + 2x^3

P_1(x) = (d/dx) P(x) = −4 + 6x^2

P_2(x) = −[P_0(x) − P_1(x) · ⌊P_0(x)/P_1(x)⌋] = −1 + (8/3)x

P_3(x) = −[P_1(x) − P_2(x) · ⌊P_1(x)/P_2(x)⌋] = 101/32

Because P_3(x) is a constant, the calculation stops.
To evaluate the Sturm functions at the boundaries of the interval (−2, 1), you do not have to calculate the exact values. You only need to know the signs of the values of the Sturm functions. Table 13-1 lists the signs of the Sturm functions for the interval (−2, 1).

Table 13-1. Signs of the Sturm Functions for the Interval (−2, 1)

x	P0(x)	P1(x)	P2(x)	P3(x)	Number of Sign Changes
−2	−	+	−	+	3
1	−	+	+	+	1

In Table 13-1, notice the number of sign changes for each boundary. For x = −2, the evaluation of P_i(x) results in three sign changes. For x = 1, the evaluation of P_i(x) results in one sign change. The difference in the number of sign changes between the two boundaries corresponds to the number of real roots that lie in the interval. For the calculation of the Sturm function over the interval (−2, 1), the difference in the number of sign changes is two, which means two real roots of polynomial P(x) lie in the interval (−2, 1).

Figure 13-1 shows the result of evaluating P(x) over (−2, 1). In Figure 13-1, the two real roots lie at approximately −1.5 and 0.26.

Figure 13-1. Result of Evaluating P(x) over (−2, 1)
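Sturm's sign-change count is easy to check numerically. The sketch below hard-codes the Sturm chain derived above for P(x) = 1 − 4x + 2x^3 (names invented; this is an illustration, not a general Sturm-chain builder):

```python
def eval_poly(coeffs, x):
    """Evaluate an ascending-order coefficient list at x."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def sign_changes(values):
    """Count sign changes in a sequence, ignoring exact zeros."""
    signs = [v for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# Sturm chain for P(x) = 1 - 4x + 2x^3, as derived in the text
chain = [
    [1.0, -4.0, 0.0, 2.0],   # P0 = P
    [-4.0, 0.0, 6.0],        # P1 = P'
    [-1.0, 8.0 / 3.0],       # P2
    [101.0 / 32.0],          # P3 (constant: the chain stops)
]

def real_roots_in(chain, a, b):
    """Number of real roots of P in (a, b) by the sign-change difference."""
    at_a = sign_changes([eval_poly(p, a) for p in chain])
    at_b = sign_changes([eval_poly(p, b) for p in chain])
    return at_a - at_b

n_roots = real_roots_in(chain, -2.0, 1.0)
```

The count reproduces the two real roots reported for the interval (−2, 1).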
Rational Polynomial Function Operations

Rational polynomial functions have many applications, such as filter design, system theory, and digital image processing. In particular, rational polynomial functions provide the most common way of representing the z-transform. A rational polynomial function takes the form of the division of two polynomials, as shown by the following equation.

F(x) = B(x)/A(x) = (b_0 + b_1 x + b_2 x^2 + … + b_m x^m)/(a_0 + a_1 x + a_2 x^2 + … + a_n x^n)

where F(x) is the rational polynomial function, B(x) is the numerator polynomial, A(x) is the denominator polynomial, and A(x) cannot equal zero. The roots of B(x) are the zeros of F(x). The roots of A(x) are the poles of F(x).

The following equations define two rational polynomials used in the following sections.

F_1(x) = B_1(x)/A_1(x)
F_2(x) = B_2(x)/A_2(x)   (13-15)

Rational Polynomial Function Addition

The following equation shows the addition of two rational polynomials.

F_1(x) + F_2(x) = (B_1(x)A_2(x) + B_2(x)A_1(x))/(A_1(x)A_2(x))

Rational Polynomial Function Subtraction

The following equation shows the subtraction of two rational polynomials.

F_1(x) − F_2(x) = (B_1(x)A_2(x) − B_2(x)A_1(x))/(A_1(x)A_2(x))
Rational Polynomial Function Multiplication

The following equation shows the multiplication of two rational polynomials.

F_1(x)F_2(x) = (B_1(x)B_2(x))/(A_1(x)A_2(x))

Rational Polynomial Function Division

The following equation shows the division of two rational polynomials.

F_1(x)/F_2(x) = (B_1(x)A_2(x))/(A_1(x)B_2(x))

Negative Feedback with a Rational Polynomial Function

Figure 13-2 shows a diagram of a generic system with negative feedback.

Figure 13-2. Generic System with Negative Feedback

For the system shown in Figure 13-2, the following equation yields the transfer function of the system.

H(x) = F_1(x)/(1 + F_1(x)F_2(x)) = (B_1(x)A_2(x))/(A_1(x)A_2(x) + B_1(x)B_2(x))

Positive Feedback with a Rational Polynomial Function

Figure 13-3 shows a diagram of a generic system with positive feedback.
Figure 13-3. Generic System with Positive Feedback

For the system shown in Figure 13-3, the following equation yields the transfer function of the system.

H(x) = F_1(x)/(1 − F_1(x)F_2(x)) = (B_1(x)A_2(x))/(A_1(x)A_2(x) − B_1(x)B_2(x))

Derivative of a Rational Polynomial Function

The derivative of a rational polynomial function also is a rational polynomial function. Using the quotient rule, you obtain the derivative of a rational polynomial function from the derivatives of the numerator and denominator polynomials. According to the quotient rule, the following equation yields the first derivative of the rational polynomial function F_1(x) defined in Equation 13-15.

(d/dx) F_1(x) = (A_1(x)·(d/dx)B_1(x) − B_1(x)·(d/dx)A_1(x))/(A_1(x))^2

You can derive the second derivative of a rational polynomial function from the first derivative. You continue to derive rational polynomial function derivatives such that you derive the jth derivative of a rational polynomial function from the (j − 1)th derivative.

Partial Fraction Expansion

Partial fraction expansion involves splitting a rational polynomial into a summation of lower order rational polynomials. Partial fraction expansion is a useful tool for z-transform and digital filter structure conversion.
Heaviside Cover-Up Method

The Heaviside cover-up method is the easiest of the partial fraction expansion methods. The following actions and conditions illustrate the Heaviside cover-up method:

• Define a rational polynomial function F(x) with the following equation.

F(x) = B(x)/A(x) = (b_0 + b_1 x + b_2 x^2 + … + b_m x^m)/(a_0 + a_1 x + a_2 x^2 + … + a_n x^n)

where m < n, meaning, without loss of generality, the order of B(x) is lower than the order of A(x).

• Assume that A(x) has one repeated root r_0 of multiplicity k and use the following equation to express A(x) in terms of its roots.

A(x) = a_n (x − r_0)^k (x − r_1)(x − r_2) … (x − r_{n−k})

• Rewrite F(x) as a sum of partial fractions, as shown by the following equation.

F(x) = B(x)/[a_n (x − r_0)^k (x − r_1) … (x − r_{n−k})]
     = β_0/(x − r_0) + β_1/(x − r_0)^2 + … + β_{k−1}/(x − r_0)^k + α_1/(x − r_1) + α_2/(x − r_2) + … + α_{n−k}/(x − r_{n−k})

where

α_i = [(x − r_i)F(x)]|_{x = r_i},   i = 1, 2, …, n − k

β_j = (1/(k − j − 1)!) · (d^{(k−j−1)}/dx^{(k−j−1)}) [(x − r_0)^k F(x)]|_{x = r_0},   j = 0, 1, …, k − 1
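For distinct poles (k = 1), the α_i formula above is the whole story: cover up the factor (x − r_i) and evaluate the rest at r_i. A minimal sketch for that case (the function name `coverup_residues` is invented; repeated roots would additionally need the β_j derivatives):

```python
def eval_poly(coeffs, x):
    """Evaluate an ascending-order coefficient list at x."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def coverup_residues(num, roots, an=1.0):
    """Residues alpha_i for F(x) = num(x) / (an * prod(x - r_i)),
    assuming all roots are distinct: alpha_i = (x - r_i) F(x) at x = r_i."""
    res = []
    for i, r in enumerate(roots):
        denom = an
        for j, s in enumerate(roots):
            if j != i:                  # "cover up" the (x - r_i) factor
                denom *= (r - s)
        res.append(eval_poly(num, r) / denom)
    return res

# F(x) = (5 + 3x) / ((x - 1)(x + 2)) = alpha1/(x - 1) + alpha2/(x + 2)
alphas = coverup_residues([5.0, 3.0], [1.0, -2.0])
```

Here α_1 = 8/3 and α_2 = 1/3, which can be checked by recombining the fractions.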
Orthogonal Polynomials

A set of polynomials P_i(x) are orthogonal polynomials over the interval a < x < b if each polynomial in the set satisfies the following equations.

∫_a^b w(x) P_n(x) P_m(x) dx = 0,   n ≠ m
∫_a^b w(x) P_n(x) P_n(x) dx ≠ 0,   n = m

The interval (a, b) and the weighting function w(x) vary depending on the set of orthogonal polynomials. One of the most important applications of orthogonal polynomials is to solve differential equations.

Chebyshev Orthogonal Polynomials of the First Kind

The following recurrence relationship defines Chebyshev orthogonal polynomials of the first kind, T_n(x).

T_0(x) = 1
T_1(x) = x
T_n(x) = 2x T_{n−1}(x) − T_{n−2}(x),   n = 2, 3, …

Chebyshev orthogonal polynomials of the first kind satisfy the following equations.

∫_{−1}^{1} (1/√(1 − x^2)) T_n(x) T_m(x) dx = 0,   n ≠ m
∫_{−1}^{1} (1/√(1 − x^2)) T_n(x) T_n(x) dx = π/2 (n ≠ 0);  π (n = 0)
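The three-term recurrence above can be run directly on ascending-order coefficient arrays to generate T_n explicitly. An illustrative sketch (the name `chebyshev_T` is invented):

```python
def chebyshev_T(n):
    """Coefficients (ascending powers) of T_n via the recurrence
    T_n = 2x*T_{n-1} - T_{n-2}, with T_0 = 1 and T_1 = x."""
    t_prev, t_curr = [1], [0, 1]
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        shifted = [0] + [2 * c for c in t_curr]          # 2x * T_{n-1}
        padded = t_prev + [0] * (len(shifted) - len(t_prev))
        t_prev, t_curr = t_curr, [a - b for a, b in zip(shifted, padded)]
    return t_curr

T3 = chebyshev_T(3)   # T_3(x) = -3x + 4x^3
```

The same scheme applies to the other orthogonal families in this section by swapping in their recurrence coefficients.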
Chebyshev Orthogonal Polynomials of the Second Kind

The following recurrence relationship defines Chebyshev orthogonal polynomials of the second kind, U_n(x).

U_0(x) = 1
U_1(x) = 2x
U_n(x) = 2x U_{n−1}(x) − U_{n−2}(x),   n = 2, 3, …

Chebyshev orthogonal polynomials of the second kind satisfy the following equations.

∫_{−1}^{1} √(1 − x^2) U_n(x) U_m(x) dx = 0,   n ≠ m
∫_{−1}^{1} √(1 − x^2) U_n(x) U_n(x) dx = π/2,   n = m

Gegenbauer Orthogonal Polynomials

The following recurrence relationship defines Gegenbauer orthogonal polynomials C_n^a(x).

C_0^a(x) = 1
C_1^a(x) = 2ax
C_n^a(x) = (2(n + a − 1)/n) x C_{n−1}^a(x) − ((n + 2a − 2)/n) C_{n−2}^a(x),   n = 2, 3, …,  a ≠ 0

Gegenbauer orthogonal polynomials satisfy the following equations.

∫_{−1}^{1} (1 − x^2)^{a−1/2} C_n^a(x) C_m^a(x) dx = 0,   n ≠ m
∫_{−1}^{1} (1 − x^2)^{a−1/2} C_n^a(x) C_n^a(x) dx = (π 2^{1−2a} Γ(n + 2a))/(n! (n + a) Γ^2(a))  (a ≠ 0);  2π/n^2  (a = 0)
where Γ(z) is a gamma function defined by the following equation.

Γ(z) = ∫_0^∞ t^{z−1} e^{−t} dt

Hermite Orthogonal Polynomials

The following recurrence relationship defines Hermite orthogonal polynomials H_n(x).

H_0(x) = 1
H_1(x) = 2x
H_n(x) = 2x H_{n−1}(x) − 2(n − 1) H_{n−2}(x),   n = 2, 3, …

Hermite orthogonal polynomials satisfy the following equations.

∫_{−∞}^{∞} e^{−x^2} H_n(x) H_m(x) dx = 0,   n ≠ m
∫_{−∞}^{∞} e^{−x^2} H_n(x) H_n(x) dx = √π 2^n n!,   n = m

Laguerre Orthogonal Polynomials

The following recurrence relationship defines Laguerre orthogonal polynomials L_n(x).

L_0(x) = 1
L_1(x) = −x + 1
L_n(x) = ((2n − 1 − x)/n) L_{n−1}(x) − ((n − 1)/n) L_{n−2}(x),   n = 2, 3, …

Laguerre orthogonal polynomials satisfy the following equations.

∫_0^∞ e^{−x} L_n(x) L_m(x) dx = 0,   n ≠ m
∫_0^∞ e^{−x} L_n(x) L_n(x) dx = 1,   n = m
Associated Laguerre Orthogonal Polynomials

The following recurrence relationship defines associated Laguerre orthogonal polynomials L_n^a(x).

L_0^a(x) = 1
L_1^a(x) = −x + a + 1
L_n^a(x) = ((2n + a − 1 − x)/n) L_{n−1}^a(x) − ((n + a − 1)/n) L_{n−2}^a(x),   n = 2, 3, …

Associated Laguerre orthogonal polynomials satisfy the following equations.

∫_0^∞ e^{−x} x^a L_n^a(x) L_m^a(x) dx = 0,   n ≠ m
∫_0^∞ e^{−x} x^a L_n^a(x) L_n^a(x) dx = Γ(a + n + 1)/n!,   n = m

Legendre Orthogonal Polynomials

The following recurrence relationship defines Legendre orthogonal polynomials P_n(x).

P_0(x) = 1
P_1(x) = x
P_n(x) = ((2n − 1)/n) x P_{n−1}(x) − ((n − 1)/n) P_{n−2}(x),   n = 2, 3, …

Legendre orthogonal polynomials satisfy the following equations.

∫_{−1}^{1} P_n(x) P_m(x) dx = 0,   n ≠ m
∫_{−1}^{1} P_n(x) P_n(x) dx = 2/(2n + 1),   n = m
Evaluating a Polynomial with a Matrix

The matrix evaluation of a polynomial differs from 2D polynomial evaluation. The following equations define a second-order polynomial P(x) and a square 2 × 2 matrix G.

P(x) = a_0 + a_1 x + a_2 x^2   (13-16)

G = [g_1  g_2; g_3  g_4]   (13-17)

In 2D polynomial evaluation, you evaluate P(x) at each element of matrix G, as shown by the following equation.

P(G) = [P(x)|_{x=g_1}  P(x)|_{x=g_2}; P(x)|_{x=g_3}  P(x)|_{x=g_4}]

When performing matrix polynomial evaluation, you must use a square matrix. When performing matrix evaluation of a polynomial, you replace the variable x with matrix G, as shown by the following equation.

P([G]) = a_0 I + a_1 G + a_2 GG

where I is the identity matrix of the same size as G.

In the following equations, actual values replace the variables a and g in Equations 13-16 and 13-17.

P(x) = 5 + 3x + 2x^2   (13-18)

G = [1  2; 3  4]   (13-19)
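Matrix polynomial evaluation as defined above can be sketched numerically with the sample values of Equations 13-18 and 13-19 (the function name `poly_matrix_eval` is invented; this is an illustration, not the LabVIEW VI):

```python
import numpy as np

def poly_matrix_eval(coeffs, G):
    """Evaluate P([G]) = a0*I + a1*G + a2*G@G + ... for a square matrix G."""
    G = np.asarray(G, dtype=float)
    result = np.zeros_like(G)
    power = np.eye(G.shape[0])       # G**0 = I
    for a in coeffs:
        result = result + a * power  # add a_k * G**k
        power = power @ G            # advance to the next matrix power
    return result

# P(x) = 5 + 3x + 2x^2 with G = [[1, 2], [3, 4]]
PG = poly_matrix_eval([5.0, 3.0, 2.0], [[1.0, 2.0], [3.0, 4.0]])
```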
The following equation shows the matrix evaluation of the polynomial P(x) from Equation 13-18 with matrix G from Equation 13-19.

P([G]) = 5[1  0; 0  1] + 3[1  2; 3  4] + 2[1  2; 3  4][1  2; 3  4]
       = [5  0; 0  5] + [3  6; 9  12] + [14  20; 30  44]
       = [22  26; 39  61]

Polynomial Eigenvalues and Vectors

For every operator, a collection of functions exists that, when operated on by the operator, produces the same function, modified only by a multiplicative constant factor. The members of the collection of functions are eigenfunctions. The multiplicative constants modifying the eigenfunctions are eigenvalues. The following equation illustrates the eigenfunction/eigenvalue relationship.

Â f(x) = a f(x)

where f(x) is an eigenfunction of Â and a is the eigenvalue of f(x).

Some applications lead to a polynomial eigenvalue problem. Given a set of square matrices, the problem becomes determining a scalar λ and a nonzero vector x such that Equation 13-20 is true.

Ψ(λ)x = (C_0 + λC_1 + … + λ^{n−1}C_{n−1} + λ^n C_n)x = 0   (13-20)

The following conditions apply to Equation 13-20:
• Ψ(λ) is the matrix polynomial whose coefficients are square matrices.
• C_i is a square matrix of size m × m, i = 0, 1, …, n.
• λ is the eigenvalue of Ψ(λ).
• x is the corresponding eigenvector of Ψ(λ) and has length m.
• 0 is the zero vector and has length m.
You can write the polynomial eigenvalue problem as a generalized eigenvalue problem, as shown by the following equation.

Az = λBz

where

A = [0  I  0  …  0;
     0  0  I  …  0;
     …
     0  0  0  …  I;
     −C_0  −C_1  −C_2  …  −C_{n−1}]

is an nm × nm block companion matrix,

B = diag(I, I, …, I, C_n)

is an nm × nm block diagonal matrix,

z = [x; λx; λ^2 x; …; λ^{n−1} x]

is a vector of length nm, I is the identity matrix of size m × m, and 0 is the zero matrix of size m × m.
Entering Polynomials in LabVIEW

Use the Polynomial and Rational Polynomial VIs to perform polynomial operations. LabVIEW uses 1D arrays for polynomial inputs and outputs. The 1D array stores the polynomial coefficients. When entering polynomial coefficient values into an array, maintain a consistent method for entering the values. National Instruments recommends entering polynomial coefficient values in ascending order of power. The order in which LabVIEW displays the results of polynomial operations reflects the order in which you enter the input polynomial coefficient values.

For example, the following equations define polynomials P(x) and Q(x).

P(x) = 1 − 3x + 4x^2 + 2x^3
Q(x) = 1 − 2x + x^2

You can describe P(x) and Q(x) by vectors P and Q, as shown in the following equations.

P = [1, −3, 4, 2]
Q = [1, −2, 1]

Figure 13-4 shows the front panel of a VI that uses the Add Polynomials VI to add P(x) and Q(x).

Figure 13-4. Adding P(x) and Q(x)
Also. © National Instruments Corporation 1323 LabVIEW Analysis Concepts . the VI displays the results of the addition in P(x) + Q(x) in ascending order of power. you enter the polynomial coefficients into the array controls. based on the order of the two input arrays. P(x) and Q(x). in ascending order of power.Chapter 13 Polynomials In Figure 134.
Part III
Point-By-Point Analysis

This part describes the concepts of point-by-point analysis, answers frequently asked questions about point-by-point analysis, and describes a case study that illustrates the use of the Point By Point VIs.
Chapter 14
Point-By-Point Analysis

This chapter describes the concepts of point-by-point analysis, answers frequently asked questions about point-by-point analysis, and describes a case study that illustrates the use of the Point By Point VIs. Use the NI Example Finder to find examples of using the Point By Point VIs.

Introduction to Point-By-Point Analysis

Point-by-point analysis is a method of continuous data analysis in which analysis occurs for each data point. Point-by-point analysis is ideally suited to real-time data acquisition, because you work with a single signal instantaneously. Real-time performance is a reality for data acquisition, analysis, and output. When your data acquisition system requires real-time, deterministic performance, data analysis also can utilize real-time performance.

The discrete stages of array-based analysis, such as buffer preparation, can make array-based analysis too slow for higher speed, deterministic, real-time systems. With point-by-point analysis, you can build a program that uses point-by-point versions of array-based LabVIEW analysis VIs. Point-by-point analysis enables you to accomplish the following tasks:
• Track and respond to real-time events.
• Connect the analysis process directly to the signal for speed and minimal data loss.
• Synchronize analysis with data acquisition automatically.
• Perform programming tasks more easily, because you do not allocate arrays and you make fewer adjustments to sampling rates.
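The contrast with array-based analysis can be illustrated with a running mean that updates on every incoming sample instead of waiting for a full buffer. The following sketch is a hypothetical Python analogue of a point-by-point VI (the class name `MeanPtByPt` and its interface are invented for illustration, not NI code):

```python
class MeanPtByPt:
    """Running mean over the most recent `sample_length` points,
    updated one data point at a time."""

    def __init__(self, sample_length):
        self.sample_length = sample_length
        self.buffer = []

    def __call__(self, x, initialize=False):
        if initialize:
            self.buffer = []          # reset internal state; data keeps flowing
        self.buffer.append(x)
        if len(self.buffer) > self.sample_length:
            self.buffer.pop(0)        # keep only the area of interest
        return sum(self.buffer) / len(self.buffer)

mean = MeanPtByPt(sample_length=3)
outputs = [mean(x) for x in [1.0, 2.0, 3.0, 4.0]]
```

Each call consumes one sample and immediately produces one analyzed value, so the analysis stays synchronized with acquisition without allocating result arrays.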
Using the Point By Point VIs

The Point By Point VIs correspond to each array-based analysis VI that is relevant to continuous data acquisition. You usually have fewer programming tasks when you use the Point By Point VIs. However, you must account for programming differences. Table 14-1 describes characteristic inputs and outputs of the Point By Point VIs.

Table 14-1. Characteristic Inputs and Outputs for Point By Point VIs

Parameter	Description
input data	Incoming data
output data	Outgoing, analyzed data
initialize	Routine that resets the internal state of a VI
sample length	Setting for your data acquisition system or computation system that best represents the area of interest in the data

Refer to the Case Study of Point-By-Point Analysis section of this chapter for an example of a point-by-point analysis system.

Initializing Point By Point VIs

This section describes when and how to use the point-by-point initialize parameter of many Point By Point VIs. This section also describes the First Call? function.

Purpose of Initialization in Point By Point VIs

Using the initialize parameter, you can reset the internal state of Point By Point VIs without interrupting the continuous flow of data or computation. You can reset a VI in response to events such as the following:
• A user changing the value of a parameter
• The application generating a specific event or reaching a threshold

For example, the Value Has Changed PtByPt VI can respond to change events such as the following:
• Receiving the input data
• Detecting the change
• Generating a Boolean TRUE value that triggers initialization in another VI
• Transferring the input data to another VI for processing

Figure 14-1 shows the Value Has Changed PtByPt VI triggering initialization in another VI and transferring data to that VI.
• Generating a Boolean TRUE value that triggers initialization in another VI
• Transferring the input data to another VI for processing

Figure 14-1 shows the Value Has Changed PtByPt VI triggering initialization in another VI and transferring data to that VI. In this case, the input data is a parameter value for the target VI.

Figure 14-1. Typical Role of the Value Has Changed PtByPt VI

Many point-by-point applications do not require use of the initialize parameter because initialization occurs automatically whenever an operator quits an application and then starts again.

Using the First Call? Function

Where necessary, use the First Call? function to build point-by-point VIs. The value of the initialize parameter in the First Call? function is always TRUE the first time you call the VI. The value remains FALSE for the remainder of the time you run the VI. In a VI that includes the First Call? function, the internal state of the VI is reset once, the first time you call the VI. Figure 14-2 shows a typical use of the First Call? function with a While Loop.

Figure 14-2. Using the First Call? Function with a While Loop

Error Checking and Initialization

The Point By Point VIs generate errors to help you identify flaws in the configuration of the applications you build. Several point-by-point error codes exist in addition to the standard LabVIEW error codes.
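The First Call? behavior, TRUE exactly once and FALSE thereafter, can be mimicked in text form. The FirstCall name and the loop around it are illustrative inventions standing in for the block diagram of Figure 14-2, not an NI API.

```python
class FirstCall:
    """Mimics the First Call? function: returns TRUE on the first
    call after creation and FALSE for every later call."""

    def __init__(self):
        self._first = True

    def __call__(self):
        was_first, self._first = self._first, False
        return was_first

first_call = FirstCall()
state = None
for x in [4, 7, 2]:          # stands in for the While Loop
    if first_call():         # internal state is reset once,
        state = 0            # on the first iteration only
    state += x
print(state)                 # 13
```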
Error codes usually identify invalid parameters and settings. A Point By Point VI generates an error code once, at the initial call to the VI or at the first call to the VI after you initialize your application. The Point By Point VIs generate an error code to inform you of any invalid parameters or settings when they detect an error during the first call. In subsequent calls, the Point By Point VIs set the error code to zero and continue running, generating no error codes. Because Point By Point VIs generate error codes only once, they can perform optimally in a real-time, deterministic application.

You can program your application to take one of the following actions in response to the first error:
• Report the error and stop.
• Report the error and continue running.
• Ignore the error and continue running. This is the default behavior.

For higher-level error checking, configure your program to monitor and respond to irregularities in data acquisition or in computation. For example, you create a form of error checking when you range check your data.

The following programming sequence describes how to use the Value Has Changed PtByPt VI to build a point-by-point error checking mechanism for Point By Point VIs that have an error parameter.
1. Choose a parameter that you want to monitor closely for errors.
2. Wire the parameter value as input data to the Value Has Changed PtByPt VI. The Value Has Changed PtByPt VI outputs a TRUE value whenever the input parameter value changes.
3. Pass the TRUE event generated by the Value Has Changed PtByPt VI to the target VI to trigger initialization, as shown in Figure 14-1.
4. Transfer the output data, which is always the unchanged input data in the Value Has Changed PtByPt VI, to the target VI. For the first call that follows initialization of the target VI, LabVIEW checks for errors.

Initialization of the target VI and error checking occur every time the input parameter changes.
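The four-step wiring above can be sketched in Python (LabVIEW wiring has no text form): a VI-like object passes its input through unchanged and also reports a change event, which would be wired to the target VI's initialize input. Whether the very first call counts as a change is an assumption here; this sketch reports FALSE.

```python
class ValueHasChanged:
    """Sketch of the Value Has Changed PtByPt VI: passes input data
    through unchanged and reports TRUE whenever the value differs
    from the value on the previous call."""

    _UNSET = object()

    def __init__(self):
        self._last = self._UNSET

    def __call__(self, value):
        changed = self._last is not self._UNSET and value != self._last
        self._last = value
        return value, changed   # (output data, change event)

vhc = ValueHasChanged()
events = [vhc(cutoff)[1] for cutoff in (0.1, 0.1, 0.25)]
print(events)                   # [False, False, True]
```

The TRUE event would trigger re-initialization (and therefore a fresh round of error checking) in the target VI, while the unchanged output value becomes the target VI's parameter.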
Frequently Asked Questions

This section answers frequently asked questions about point-by-point analysis.

What Are the Differences between Point-By-Point Analysis and Array-Based Analysis in LabVIEW?

Tables 14-2 and 14-3 compare array-based LabVIEW analysis to point-by-point analysis from multiple perspectives. In Table 14-2, the differences between two automotive fuel delivery systems, carburation and fuel injection, demonstrate the differences between array-based data analysis and point-by-point analysis.

Table 14-2. Comparison of Traditional and Newer Paradigms

Automotive Technology
  Traditional Paradigm: Carburation
  • Fuel accumulates in a float bowl.
  • Engine vacuum draws fuel through a single set of metering valves that serve all combustion chambers.
  • Somewhat efficient combustion occurs.
  Newer Paradigm: Fuel Injection
  • Fuel flows continuously from gas tank.
  • Fuel sprays directly into each combustion chamber at the moment of combustion.
  • Responsive, precise combustion occurs.

Data Analysis Technology
  Traditional Paradigm: Array-Based Analysis
  • Prepare a buffer unit of data.
  • Analyze data.
  • Produce a buffer of analyzed data.
  • Generate report.
  Newer Paradigm: Point-By-Point Analysis
  • Receive continuous stream of data.
  • Filter and analyze data continuously.
  • Generate real-time events and reports continuously.

Table 14-3 presents other comparisons between array-based and point-by-point analysis.
Table 14-3. Comparison of Array-Based and Point-By-Point Data Analysis

Characteristic               Array-Based Analysis              Point-By-Point Analysis
Compatibility                Limited compatibility with        Compatible with real-time systems;
                             real-time systems                 backward compatible with
                                                               array-based systems
Data typing                  Array-oriented                    Scalar-oriented
Interruptions                Interruptions critical            Interruptions tolerated
Operation                    You observe, offline              You control, in real time, online
Performance and programming  Compensate for startup data       Startup data loss does not occur;
                             loss (4–5 seconds) with           initialize the data acquisition
                             complex "state machines"          system once and run continuously
Point of view                Reflection of a process,          Direct, natural flow of a process
                             like a mirror
Programming                  Specify a buffer                  No explicit buffers
Results                      Output a report                   Output a report and an event
                                                               in real time
Runtime behavior             Delayed processing                Real time
Runtime behavior             Stop                              Continue
Runtime behavior             Wait                              Now
Work style                   Asynchronous                      Synchronous

Why Use Point-By-Point Analysis?

Point-by-point analysis works well with computer-based real-time data acquisition. In array-based analysis, the input-analysis-output process takes place for subsets of a larger data set. In point-by-point analysis, the input-analysis-output process takes place continuously.
What Is New about Point-By-Point Analysis?

When you perform point-by-point analysis, keep in mind the following concepts:
• Initialization—You must initialize the point-by-point analysis application to prevent interference from settings you made in previous sessions of data analysis.
• Re-Entrant Execution—You must enable LabVIEW reentrant execution for point-by-point analysis. Reentrant execution allocates fixed memory to a single analysis process, guaranteeing that two processes that use the same analysis function never interfere with each other. Reentrant execution is enabled by default in almost all Point By Point VIs.

Note If you create custom VIs to use in your own point-by-point application, be sure to enable reentrant execution.

• Deterministic Performance—Point-by-point analysis is the natural companion to many deterministic systems because it efficiently integrates with the flow of a real-time data signal.

How Is It Possible to Perform Analysis without Buffers of Data?

Analysis functions yield solutions that characterize the behavior of a data set. In array-based data acquisition and analysis, you might analyze a large set of data by dividing the data into 10 smaller buffers. Analyzing those 10 sets of data yields 10 solutions, and you can further resolve those 10 solutions into one solution that characterizes the behavior of the entire data set. In point-by-point analysis, a sample unit of a specific length replaces a buffer. The point-by-point sample unit can have a length that matches the length of a significant event in the data set that you are analyzing.

What Is Familiar about Point-By-Point Analysis?

The approach used for most point-by-point analysis operations in LabVIEW remains the same as array-based analysis. You use filters, mean value algorithms, and so on, in the same situations and for the same reasons that you use these operations in array-based data analysis. In contrast, some array-based operations, such as the computation of zeroes in polynomial functions, analyze an entire data set at once. The computation of zeroes in polynomial functions is not relevant to point-by-point analysis, and point-by-point versions of these array-based VIs are not necessary.
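The buffer-resolution idea above, ten buffer solutions combined into one solution for the whole data set, can be made concrete with a mean: per-buffer means, weighted by buffer size, reproduce the global mean exactly. The data and buffer sizes below are illustrative.

```python
# Resolve 10 per-buffer solutions into one solution for the whole set.
data = list(range(100))
buffers = [data[i:i + 10] for i in range(0, 100, 10)]

# One solution (mean, count) per buffer: 10 solutions in total.
partial = [(sum(b) / len(b), len(b)) for b in buffers]

# Weighting each buffer mean by its sample count recovers the
# solution that characterizes the entire data set.
overall = sum(m * n for m, n in partial) / sum(n for _, n in partial)
print(overall)   # 49.5, identical to sum(data) / len(data)
```

A point-by-point sample unit plays the same role as one of these buffers, except that its length is chosen to match a significant event in the signal rather than a storage block.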
A typical point-by-point analysis application analyzes a long series of sample units, but you are likely to have interest in only a few of those sample units. To identify those crucial samples of interest, the point-by-point application focuses on transitions, such as the end of the relevant signal. The train wheel detection application in the Case Study of Point-By-Point Analysis section of this chapter uses the end of a signal to identify crucial samples of interest. The instant the application identifies the transition point, it captures the maximum amplitude reading of the current sample unit. This particular amplitude reading corresponds to the complete signal for the wheel on the train whose signal has just ended. You can use this real-time amplitude reading to generate an event or a report about that wheel and that train. The input data for the train wheel application comes from the signal generated by a train that is moving at 60 km to 70 km per hour. The sample length corresponds to the minimum distance between wheels.

Why Is Point-By-Point Analysis Effective in Real-Time Applications?

In general, when you must process continuous, rapid data flow, point-by-point analysis can respond. For example, in industrial automation settings, control data flows continuously, and computers use a variety of analysis and transfer functions to control a real-world process. Point-by-point analysis can take place in real time for these engineering tasks. Point-by-point analysis offers simplicity and dependability. The point-by-point approach simplifies the design, implementation, and testing process, because the flow of a point-by-point application closely matches the natural flow of the real-world processes you want to monitor and control. Because you do not allocate arrays explicitly, data analysis flows naturally and continuously.

Do I Need Point-By-Point Analysis?

As you increase the samples-per-second rate by factors of ten, the need for point-by-point analysis increases. For example, the application in the Case Study of Point-By-Point Analysis section of this chapter acquires a few thousand samples per second to detect defective train wheels. Some real-time applications do not require high-speed data acquisition and analysis. Instead, they require simple, dependable programs.
You can continue to work without point-by-point analysis as long as you can control your processes without high-speed, deterministic, point-by-point data acquisition. However, if you dedicate resources in a real-time data acquisition application, use point-by-point analysis to achieve the full potential of your application.

What Is the Long-Term Importance of Point-By-Point Analysis?

Real-time data acquisition and analysis continue to demand more streamlined and stable applications. Point-by-point analysis is streamlined and stable because it directly ties into the acquisition and analysis process. Streamlined and stable point-by-point analysis allows the acquisition and analysis process to move closer to the point of control in field-programmable gate array (FPGA) chips, DSP chips, embedded controllers, dedicated CPUs, and ASICs.

Case Study of Point-By-Point Analysis

The case study in this section uses the Train Wheel PtByPt VI and shows a complete point-by-point analysis application built in LabVIEW with Point By Point VIs. The Train Wheel PtByPt VI is located in labview\examples\ptbypt\PtByPt_No_HW.llb.

Point-By-Point Analysis of Train Wheels

In this example, the maintenance staff of a train yard must detect defective wheels on a train. The current method of detection consists of a railroad worker striking a wheel with a hammer and listening for a different resonance that identifies a flaw. Automated surveillance must replace manual testing, because manual surveillance is too slow, too prone to error, and too crude to detect subtle defects. An automated solution also adds the power of dynamic testing, because the train wheels can be in service during the test. The automated solution to detect potentially defective train wheels needs to have the following characteristics:
• Detect even subtle signs of defects quickly and accurately.
• Gather data when a train travels during a normal trip, instead of standing still.
• Collect and analyze data in real time to simplify programming and to increase speed and accuracy of results.

The Train Wheel PtByPt VI is a real-time data acquisition application that detects defective train wheels and demonstrates the simplicity and flexibility of point-by-point data analysis.
The Train Wheel PtByPt VI offers a solution for detecting defective train wheels. Figures 14-3 and 14-4 show the front panel and the block diagram, respectively, for the Train Wheel PtByPt VI.

Figure 14-3. Front Panel of the Train Wheel PtByPt VI
Figure 14-4. Block Diagram of the Train Wheel PtByPt VI

Note This example focuses on implementing a point-by-point analysis program in LabVIEW. The issues of ideal sampling periods and approaches to signal conditioning are beyond the scope of this example.

Overview of the LabVIEW Point-By-Point Solution

As well as Point By Point VIs, the Train Wheel PtByPt VI requires standard LabVIEW programming objects, such as Case structures, While Loops, numeric controls, and numeric operators. The data the Train Wheel PtByPt VI acquires flows continuously through a While Loop. The process carried out by the Train Wheel PtByPt VI inside the While Loop consists of five analysis stages that occur sequentially. The following list reflects the order in which the five analysis stages occur, briefly describes what occurs in each stage, and corresponds to the labeled portions of the block diagram in Figure 14-4.
1. In the data acquisition (DAQ) stage, waveform data flows into the While Loop.
2. In the Filter stage, separation of the low- and high-frequency components of the waveform occurs.
3. In the Analysis stage, detection of the train, the wheel, and the energy level of the waveform for each wheel occurs.
4. In the Events stage, responses to signal transitions of trains and wheels occur.
5. In the Report stage, the logging of trains, wheels, and trains that might have defective wheels occurs.

Characteristics of a Train Wheel Waveform

The characteristic waveform that train wheels emit determines how you analyze and filter the waveform signal point by point. A train wheel in motion emits a signal that contains low- and high-frequency components. If you mount a strain gauge in a railroad track, you detect a noisy signal similar to a bell curve as the wheel passes over the strain gauge. The lowest points of the bell curve represent the beginning and end of the wheel. The peak of the curve represents the moment when the wheel moves directly above the strain gauge. Figure 14-5 shows the low- and high-frequency components of this curve.

Figure 14-5. Low- and High-Frequency Components of a Train Wheel Signal (lowpass and highpass components of a typical train wheel signal)

The low-frequency component of train wheel movement represents the normal noise of operation. Defective and normal wheels generate the same low-frequency component in the signal.
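For offline experimentation without a strain gauge, a signal with the shape described above, a bell-curve bump plus a high-frequency component whose amplitude grows for a defective wheel, can be synthesized. All amplitudes, frequencies, and the function name below are illustrative assumptions, not values from the manual.

```python
import math
import random

def wheel_signal(n=100, defect=False, seed=0):
    """Hypothetical test signal: a bell-shaped low-frequency bump
    (one wheel passing the strain gauge) plus a high-frequency
    component with larger amplitude for a defective wheel."""
    rng = random.Random(seed)             # repeatable "sensor noise"
    hf_amp = 0.8 if defect else 0.2       # assumed amplitudes
    out = []
    for i in range(n):
        t = (i - n / 2) / (n / 6)         # center and scale the bump
        low = math.exp(-t * t / 2)        # bell-curve component
        high = (hf_amp * math.sin(2 * math.pi * 12 * i / n)
                + 0.05 * rng.uniform(-1, 1))
        out.append(low + high)
    return out

good = wheel_signal(defect=False)
bad = wheel_signal(defect=True)
print(len(good), max(bad) > max(good))   # 100 True
```

The defective wheel's trace peaks higher only because of its high-frequency content, which is exactly why the case study separates the two components before measuring energy.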
The signal for a train wheel also contains a high-frequency component that reflects the quality of the wheel. In operation, the high-frequency component for a defective wheel has greater amplitude. In other words, a defective train wheel generates more energy than a normal train wheel.

Analysis Stages of the Train Wheel PtByPt VI

The waveform of all train wheels, including defective ones, falls within predictable ranges. This predictable behavior allows you to choose the appropriate analysis parameters. These parameters apply to the five stages described in the Overview of the LabVIEW Point-By-Point Solution section of this chapter. This section discusses each of the five analysis stages and the parameters used in each analysis stage.

Note You must adjust parameters for any implementation of the Train Wheel PtByPt VI because the characteristics of each data acquisition system differ.

DAQ Stage

Data moves into the Point By Point VIs through the input data parameter. The point-by-point detection application operates on the continuous stream of waveform data that comes from the wheels of a moving train. For a train moving at 60 km to 70 km per hour, a few hundred to a few thousand samples per second are likely to give you sufficient information to detect a defective wheel.

Filter Stage

The Train Wheel PtByPt VI must filter the low- and high-frequency components of the train wheel waveform. Two Butterworth Filter PtByPt VIs perform the following tasks:
• Extract the low-frequency components of the waveform.
• Extract the high-frequency components of the waveform.

In the Train Wheel PtByPt VI, the Butterworth Filter PtByPt VIs use the following parameters:
• order specifies the amount of the waveform data that the VI filters at a given time and is the filter resolution. 2 is acceptable for the Train Wheel PtByPt VI.
• fl specifies the low cutoff frequency. 0.01 is acceptable for the Train Wheel PtByPt VI.
• fh specifies the high cutoff frequency. 0.25 is acceptable for the Train Wheel PtByPt VI.

Analysis Stage

The point-by-point detection application must analyze the low- and high-frequency components separately. Three separate Array Max & Min PtByPt VIs perform the following discrete tasks:
• Identify the maximum high-frequency value for each wheel.
• Identify the end of each wheel.
• Identify the end of each train.

The Array Max & Min PtByPt VI extracts waveform data that reveals the level of energy in the waveform for each wheel.

Note The name Array Max & Min PtByPt VI contains the word array only to match the name of the array-based form of this VI. You do not need to allocate arrays for the Array Max & Min PtByPt VI.

In the Train Wheel PtByPt VI, the Array Max & Min PtByPt VIs use the following parameters and functions:
• sample length specifies the size of the portion of the waveform that the Train Wheel PtByPt VI analyzes. The Train Wheel PtByPt VI uses sample length to calculate values for all three Array Max & Min PtByPt VIs. To calculate the ideal sample length, consider the speed of the train, the minimum distance between wheels, and the number of samples you receive per second. 100 is acceptable for the Train Wheel PtByPt VI.
• threshold is wired to the Greater? function. threshold provides a comparison point to identify when no train wheel signals exist in the signal that you are acquiring and is the minimum signal strength that identifies the departure of a train wheel from the strain gauge, that is, the end of high-frequency waveform information. 3 is an acceptable setting for threshold in the Train Wheel PtByPt VI.
• The Multiply function sets a longer portion of the waveform to analyze. When this longer portion fails to display signal activity for train wheels, the Array Max & Min PtByPt VIs identify the end of the train. 4 is acceptable for the Train Wheel PtByPt VI.
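The Butterworth Filter PtByPt VIs themselves exist only as LabVIEW blocks, but the Filter stage's stateful, one-sample-at-a-time pattern can be sketched with a simpler stand-in. The one-pole smoother below is an assumption for illustration (a true second-order Butterworth filter responds differently); alpha loosely plays the role of a normalized cutoff such as fl, and subtracting the low-pass output from the input yields the high-frequency component.

```python
class OnePoleFilterPtByPt:
    """Simplified stand-in for the point-by-point filter stage:
    a one-pole low-pass whose residual is the high-pass component."""

    def __init__(self, alpha):
        self.alpha = alpha   # loosely: normalized cutoff frequency
        self.y = 0.0         # internal filter state

    def __call__(self, x, initialize=False):
        if initialize:
            self.y = 0.0                 # reset state mid-stream
        self.y += self.alpha * (x - self.y)   # low-pass update
        return self.y, x - self.y             # (low, high) split

lp = OnePoleFilterPtByPt(alpha=0.25)
for x in [0.0, 1.0, 1.0, 1.0]:           # one sample per loop pass
    low, high = lp(x)
print(round(low, 4), round(high, 4))     # 0.5781 0.4219
```

As in the real VI, each call consumes exactly one sample and carries its state forward, so the While Loop never waits for a buffer to fill.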
Events Stage

After the Analysis stage identifies maximum and minimum values, the Events stage detects when these values cross a threshold setting. The Boolean Crossing PtByPt VIs respond to transitions. Two Boolean Crossing PtByPt VIs perform the following tasks:
• Generate an event each time the Array Max & Min PtByPt VIs detect the transition point in the signal that indicates the end of a wheel.
• Generate an event every time the Array Max & Min PtByPt VIs detect the transition point in the signal that indicates the end of a train.

When the amplitude of a single wheel waveform falls below the threshold setting, the end of the wheel has arrived at the strain gauge. When the signal strength falls below the threshold setting, the Boolean Crossing PtByPt VIs recognize a transition event and pass that event to a report. For the Train Wheel PtByPt VI, 3 is a good threshold setting to identify the end of a wheel.

In the Train Wheel PtByPt VI, the Boolean Crossing PtByPt VIs use the following parameters:
• initialize resets the VI for a new session of continuous data acquisition.
• direction specifies the kind of Boolean crossing.

Report Stage

The Train Wheel PtByPt VI reports on all wheels for all trains that pass through the data acquisition system. The Train Wheel PtByPt VI logs every wheel and every train that it detects. Every time a wheel passes the strain gauge, the Train Wheel PtByPt VI captures its waveform, analyzes it, and reports the event. The Train Wheel PtByPt VI also reports any potentially defective wheels. Analysis of the high-frequency signal identifies which wheels, if any, might be defective. When the Train Wheel PtByPt VI encounters a potentially defective wheel, the VI passes the information directly to the report at the moment the end-of-wheel event is detected. Table 14-4 describes the components of a report on a single train wheel.
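A miniature of the Analysis, Events, and Report stages can be sketched as a single pass over the high-frequency samples. The threshold of 3 comes from the text; DEFECT_LEVEL and the sample values are invented for illustration, and the Greater? and Boolean Crossing PtByPt blocks are approximated with plain comparisons and a tracked TRUE-to-FALSE transition.

```python
THRESHOLD = 3.0     # minimum strength that marks the end of a wheel
DEFECT_LEVEL = 8.0  # assumed peak above which a wheel looks defective

def detect_wheels(samples):
    """Track the running peak of the high-frequency signal while it
    stays above threshold; a falling crossing is an end-of-wheel
    event, which emits one report entry for that wheel."""
    reports = []
    above, peak, wheel = False, 0.0, 0
    for x in samples:
        if x > THRESHOLD:        # Greater? function role
            peak = max(peak, x)  # Array Max & Min PtByPt role
            above = True
        elif above:              # Boolean Crossing role: the
            wheel += 1           # TRUE->FALSE transition is the event
            reports.append((wheel, peak, peak > DEFECT_LEVEL))
            above, peak = False, 0.0
    return reports

signal = [0, 4, 6, 5, 0, 0, 5, 9, 7, 0]   # two wheel signatures
print(detect_wheels(signal))
# [(1, 6, False), (2, 9, True)]
```

The report entry is produced at the instant of the falling crossing, which mirrors how the VI passes defect information to the report the moment the end-of-wheel event is detected.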
Table 14-4. Example Report on a Single Train Wheel

Information Source                         Meaning of Results
Counter mechanism for waveform events      Stage One: Wheel number four has passed the strain gauge.
Analysis of high-pass filter data          Stage Two: Wheel number four has passed the strain gauge, and the wheel might be defective.
Counter mechanism for end-of-train events  Stage Three: Wheel number four in train number eight has passed the strain gauge, and the wheel might be defective.

Conclusion

When acquiring data with real-time performance, point-by-point analysis helps you analyze data in real time. Point-by-point analysis occurs continuously and instantaneously. While you acquire data, you filter and analyze it, point by point, to extract the information you need and to make an appropriate response. The Train Wheel PtByPt VI uses point-by-point analysis to generate a report, not to control an industrial process. However, the Train Wheel PtByPt VI acquires data in real time, and you can modify the application to generate real-time control responses, such as stopping the train when the Train Wheel PtByPt VI encounters a potentially defective wheel. This case study demonstrates the effectiveness of the point-by-point approach for generation of both events and reports in real time.
Appendix B
Technical Support and Professional Services

Visit the following sections of the National Instruments Web site at ni.com for technical support and professional services:
• Support—Online technical support resources include the following:
  – Self-Help Resources—For immediate answers and solutions, visit our extensive library of technical support resources available in English, Japanese, and Spanish at ni.com/support. These resources are available for most products at no cost to registered users and include software drivers and updates, a KnowledgeBase, product manuals, step-by-step troubleshooting wizards, conformity documentation, example code, tutorials and application notes, instrument drivers, discussion forums, a measurement glossary, and so on.
  – Assisted Support Options—Contact NI engineers and other measurement and automation professionals by visiting ni.com/support. Our online system helps you define your question and connects you to the experts by phone, discussion forum, or email.
• Training and Certification—Visit ni.com/training for self-paced training, eLearning virtual classrooms, interactive CDs, and Certification program information. You also can register for instructor-led, hands-on courses at locations around the world.
• System Integration—If you have time constraints, limited in-house technical resources, or other project challenges, NI Alliance Program members can help. To learn more, call your local NI office or visit ni.com/alliance.

If you searched ni.com and could not find the answers you need, contact your local office or NI corporate headquarters. Phone numbers for our worldwide offices are listed at the front of this manual. You also can visit the Worldwide Offices section of ni.com/niglobal to access the branch office Web sites, which provide up-to-date contact information, support phone numbers, email addresses, and current events.