
Chapter 4 - Measurement Accuracy

Measurement Accuracy – Terminology
 Definitions of Accuracy
– Closeness with which an instrument reading approaches the true value of the variable being measured.
– The maximum error in the measurement of a physical quantity in terms of the output of an instrument when referred to the individual instrument calibrations.
– The degree of conformance of a test instrument to absolute standards.
– The ability to produce an average measured value which agrees with the true value or standard being used.

Measurement Accuracy – Terminology
 Precision
– Also called repeatability or reproducibility.
– The ability to repeatedly measure the same product or service and obtain the same results.
– Precision is a measure of the degree to which successive measurements differ from one another.
– The degree to which repeated measurements of a given quantity agree when obtained by the same method and under the same conditions.
– A measure of the reproducibility of the measurements.
• Given a fixed value of a variable, the consistency with which that measurement can be made.

Measurement Accuracy – Book Terminology
 Accuracy – refers to the overall closeness of an averaged measurement to the true value.
 Repeatability – the consistency with which that measurement can be made.
– The word precision will be avoided.
 Accuracy takes all error sources into account:
– Systematic Errors
– Random Errors
– Resolution (Quantization Errors)

Measurement Accuracy – Terminology
 Systematic Errors
– Errors that appear consistently from measurement to measurement
• Ideal Value = 100mV
• Measurements: 101mV, 103mV, 102mV, 101mV, 102mV, 103mV, 103mV, 101mV, 102mV
• Average Error: 2mV
– Caused by DC offsets, gain errors, non-linearities in the DVM
– Systematic errors can often be reduced through calibrations.
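A minimal Python sketch of the arithmetic on this slide: the readings average 2mV above the ideal value, and that systematic offset, once measured against a known reference, can simply be subtracted out as a calibration factor.

```python
# Minimal sketch: estimating a systematic (offset) error from repeated
# readings of a known reference, then correcting later measurements.
readings_mv = [101, 103, 102, 101, 102, 103, 103, 101, 102]  # DVM readings
ideal_mv = 100.0                                             # known reference value

offset_mv = sum(readings_mv) / len(readings_mv) - ideal_mv   # systematic error: 2 mV

def corrected(raw_mv: float) -> float:
    """Remove the calibrated DC offset from a raw meter reading."""
    return raw_mv - offset_mv

print(f"offset = {offset_mv:.1f} mV, corrected 102 mV reading = {corrected(102):.1f} mV")
```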

Measurement Accuracy – Terminology
 Random Errors
– Notice that the list of numbers on the last slide varies from 101mV to 103mV.
– Random errors are perfectly normal in analog and mixed-signal measurements.
– All measurement tools have random errors, even $2 million automated test instruments.
– The big challenge is in determining whether the random error is caused by a bad DIB design, a bad DUT design, or by the tester itself.

Measurement Accuracy – Terminology
 Resolution (Quantization Errors)
– Notice that in the previous list of numbers, the measurement was always rounded off to the nearest millivolt.
– Limited resolution results from the fact that continuous analog signals must be converted to digital format (using ADCs) before a computer can evaluate the test results.
– The inherent error in ADCs and measurement instrumentation is called quantization error.
– Quantization error is a result of the conversion from an infinitely variable input voltage to a finite set of possible outputs from the ADC.
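A minimal sketch of where quantization error comes from, assuming a hypothetical ideal 10-bit ADC with a 2 V full-scale range: any input is forced to the nearest of 1024 output codes, so the error is bounded by ±LSB/2.

```python
# Minimal sketch: quantization of a continuous voltage by an ideal ADC.
# Full-scale range and resolution below are illustrative assumptions.
FULL_SCALE_V = 2.0   # 0 V to 2 V input range (assumption)
BITS = 10            # ADC resolution (assumption)
LSB = FULL_SCALE_V / 2**BITS

def adc_code(v_in: float) -> int:
    """Round the input to the nearest of the 2**BITS possible output codes."""
    code = round(v_in / LSB)
    return max(0, min(2**BITS - 1, code))  # clamp to the valid code range

v = 1.2345
err = adc_code(v) * LSB - v
print(f"quantization error = {err*1e3:.3f} mV (bounded by ±LSB/2 = ±{LSB/2*1e3:.3f} mV)")
```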

Measurement Accuracy – Terminology
 Repeatability
– Non-repeatable answers are a fact of life for the mixed-signal test engineer.
– Could be caused by random noise or other external influences.
– If a test engineer gets the same value multiple times in a row, it should raise the question of the ranging of the measurement tool.
– Repeatability is desirable, but it does not in itself guarantee accuracy.

Measurement Accuracy – Terminology
 Stability
– The degree to which a series of supposedly identical measurements remains constant over time, temperature, humidity, and all other time-varying factors is referred to as stability.
– Caution must be exercised in the power-up of the tester, since the temperature of the tester electronics must stabilize before calibrations are accurate.
– Also, if the test cabinet or test head are opened, the temperature must stabilize before any calibrations can be performed.
– Testers are equipped with temperature sensors to allow recalibration if a certain change in temperature occurs.

Measurement Accuracy – Terminology
 Correlation
– The ability to get the same answer using different pieces of hardware or software.
• Tester-to-Bench Correlation
• Tester-to-Tester Correlation
• Program-to-Program Correlation
• DIB-to-DIB Correlation
• Day-to-Day Correlation

Measurement Accuracy – Terminology
 Reproducibility
• Reproducibility is often incorrectly used interchangeably with repeatability.
• Repeatability is used to describe the ability of a single tester and DIB board to get the same answer multiple times as the test program is repeatedly executed.
• Reproducibility is defined as the statistical deviations between a particular measurement taken by any operator on any group of testers on any given day using any DIB board.
• If a measurement is highly repeatable, but not reproducible, then the test program may consistently pass a particular DUT one day but then may consistently fail the same DUT on another day or on another tester.

Calibration and Checkers – Traceability to Standards
 National Institute of Standards and Technology (NIST)
 Thermally stabilized standardized instrument
– periodically replaced by a freshly calibrated source

Calibration and Checkers – Hardware Calibration
 Any mechanical process which brings a piece of equipment back into agreement with calibration standards
– usually not a convenient process
– Robotic manipulations can be used to automate the process, but it is still not optimal.

Calibration and Checkers – Software Calibration
 The basic idea behind software calibration is the separation of the instrument's ideal operation from its non-idealities, so that a model of the instrument's non-ideal operation can be constructed, followed by a correction of the non-ideal behavior using a mathematical routine written in software.
 Most testers have extensive calibration processes for each measurement range in the tester instrumentation.
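As a rough illustration of this idea (not any particular tester's calibration routine), the sketch below fits a simple gain/offset error model from two hypothetical reference measurements and then inverts that model in software.

```python
# Minimal sketch of software calibration: model the instrument's non-ideal
# behavior (here, just gain and offset) from measurements of known reference
# sources, then invert the model mathematically. All values are hypothetical.
v_ref = (0.0, 2.0)        # known calibration source voltages
v_meas = (0.012, 2.031)   # what the instrument actually reported

gain = (v_meas[1] - v_meas[0]) / (v_ref[1] - v_ref[0])   # ~1.0095
offset = v_meas[0] - gain * v_ref[0]                      # ~12 mV

def calibrated(reading: float) -> float:
    """Invert the linear error model: reading = gain * true + offset."""
    return (reading - offset) / gain

print(f"raw 1.022 V -> calibrated {calibrated(1.022):.4f} V")
```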

Calibration and Checkers – System Calibrations & Checkers
 Checkers verify the functionality of the hardware instruments in the tester.
 Calibration and checkers are often found in the same program.
– Several levels of checkers and calibrations are used:
• Calibration reference source replacement and recalibration is performed approximately every six months.
• An extensive performance verification (PV) process is used to verify the tester is in compliance with its published specifications.
• Automated calibrations are run on the test floor as conditions warrant them.

Calibration and Checkers – Focussed Instrument Calibrations
 Accuracy of faster instruments can be improved by periodically referencing them back to slower, more accurate instruments.
 Test-specific calibration focuses on the exact parameters of the test.
 Tester-focussed calibration may no longer be necessary on all tests, yet DIB-focussed calibrations will remain a major task of the test engineer.

Calibration and Checkers – Focussed DIB Circuit Calibrations
 Often, circuits are added to the DIB board to improve the accuracy of a particular test, or to buffer a weak output of a device before it is tested.
 It is critical that the test engineer have a clear understanding of which characteristics of each DIB circuit affect the test being performed.
 Since DIB circuits are added in series between the DUT and the tester, the contribution of calibration factors must be treated accordingly (see the sketch below).
 Review of Example 4-3, found on p. 4-17.
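A minimal sketch of how series calibration factors might be combined, assuming a simple gain/offset model for both the DIB circuit and the meter; all numbers are hypothetical.

```python
# Minimal sketch: a DIB circuit in series between the DUT and the tester means
# its calibration factors combine with the meter's. Undo the models in reverse
# order of the signal path. All values are hypothetical.
DIB_GAIN, DIB_OFFSET_V = 2.005, 0.003        # from focussed DIB calibration
METER_GAIN, METER_OFFSET_V = 0.998, -0.001   # from the meter's own calibration

def dut_output(meter_reading: float) -> float:
    """Undo the meter model first, then the DIB model, to recover the DUT voltage."""
    at_meter_input = (meter_reading - METER_OFFSET_V) / METER_GAIN
    return (at_meter_input - DIB_OFFSET_V) / DIB_GAIN

print(f"meter reads 2.101 V -> DUT output {dut_output(2.101):.4f} V")
```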

Calibration and Checkers – DIB Checkers
 Verify the basic functionality of the DIB circuits.
 Performed in the first run of the test program, along with the calibration.
 Every possible relay and circuit path should be checked to produce a go/no-go response verifying the functionality of as much of the DIB board as possible.
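A minimal sketch of the go/no-go idea, with hypothetical relay names and stand-in helper functions in place of a real tester API.

```python
# Minimal sketch of a DIB checker loop: close each relay path, force a known
# signal, and compare the reading against a coarse expected value for a
# go/no-go result. The relay table and both callbacks are hypothetical
# stand-ins for real tester API calls.
PATHS = {"K1": 1.00, "K2": 0.50}   # relay name -> expected volts (assumed)

def check_dib(close_relay, measure_v, tol_v=0.05):
    """Return True (go) only if every relay path reads near its expected value."""
    for relay, expected in PATHS.items():
        close_relay(relay)
        if abs(measure_v(relay) - expected) > tol_v:
            print(f"DIB check failed on path {relay}")
            return False               # no-go: stop at the first bad path
    return True

# Stand-alone demo with stubbed hardware calls:
print(check_dib(lambda r: None, lambda r: PATHS[r] + 0.01))   # True (go)
```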

Calibration and Checkers – Tester Specifications
 Test engineers must determine if the tester instrument is capable of making the measurements they require.
 Due to the lack of information about specification values from the manufacturer, the test engineer needs to understand the spec conditions and the variations from that spec which will affect the performance of the instrument.
– A good example is a specification of the “noise floor” of a tester: in a professional shielded room with no digital circuits operating, the noise floor will be totally different from when the same tester is operating at a university.

Calibration and Checkers – Tester Specifications
 Example of a DC meter:
– five output ranges (set by a PGA internally and calibrated)
– accuracy is specified as a percentage of the measured value, with a limit of 1 mV or 2.5 mV
• Assumes the measurement is made 100 times and averaged.
• A single measurement may have greater measurement error along with repeatability error.
– The meter may also pass the signal through a low-pass filter, which can be either enabled or disabled.
• Indicates extra settling time.
• If the filter is disabled, is the spec still true?
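A sketch of how such a spec might be evaluated, assuming the accuracy means "the larger of the percentage term and a range-dependent floor"; the 0.1% figure and the per-range floors are illustrative assumptions, not the actual meter spec.

```python
# Minimal sketch of interpreting a spec like "accuracy: 0.1% of measured
# value, with a limit of 1 mV or 2.5 mV (range dependent)". The percentage
# and the per-range floors below are assumptions for illustration.
RANGE_FLOOR_MV = {0.5: 1.0, 5.0: 2.5}   # hypothetical floors per range (V -> mV)
PCT_OF_READING = 0.001                   # 0.1% (assumption)

def worst_case_error_mv(reading_v: float, meter_range_v: float) -> float:
    """Spec error is the larger of the percentage term and the range's floor."""
    return max(abs(reading_v) * PCT_OF_READING * 1e3, RANGE_FLOOR_MV[meter_range_v])

print(worst_case_error_mv(0.3, 0.5))   # floor dominates: 1.0 mV
print(worst_case_error_mv(4.0, 5.0))   # percentage term dominates: 4.0 mV
```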

Dealing with Measurement Error – Filtering
 Acts as a hardware averaging circuit and allows only the desired frequencies to pass.
– The closer the cutoff frequency is to the measurement frequency, the more noise is removed.
– Unfortunately, the lower the cutoff frequency, the longer the test time required for settling; settling time is inversely proportional to the cutoff frequency.
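For a first-order low-pass filter this trade-off is easy to quantify: the time constant is 1/(2πf_c), and settling to within 0.1% takes about 6.9 time constants (ln 1000 ≈ 6.9). A minimal sketch:

```python
# Minimal sketch: settling time of a first-order low-pass filter is inversely
# proportional to its cutoff frequency.
import math

def settling_time_s(cutoff_hz: float, settle_fraction: float = 0.001) -> float:
    tau = 1.0 / (2.0 * math.pi * cutoff_hz)    # RC time constant
    return -math.log(settle_fraction) * tau    # time to decay to the fraction

for fc in (10.0, 100.0, 1000.0):
    print(f"fc = {fc:6.0f} Hz -> settle to 0.1% in {settling_time_s(fc)*1e3:.2f} ms")
```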

Dealing with Measurement Error – Averaging
 A form of discrete-time filtering that can be used to improve the repeatability of a measurement.
 To reduce the effect of noise on a voltage measurement by a factor of two, one has to take four times as many readings and average them.
 This quickly results in a point of diminishing returns with respect to test time.
– Note: Do not average values in dB; always convert to linear form, average, then return the result to dB form.
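A minimal sketch of both points: the square-root rule for noise reduction, and averaging dB values correctly by converting to linear form first (here treating the dB values as voltage ratios, 20·log10).

```python
# Minimal sketch: averaging N readings reduces random noise by sqrt(N), so
# halving the noise costs 4x the readings. Also shown: averaging dB values
# the right way.
import math

def readings_needed(noise_reduction: float) -> int:
    return math.ceil(noise_reduction ** 2)        # 2x reduction -> 4 readings

def average_db(values_db):
    """Convert dB -> linear, average, convert back (never average dB directly)."""
    linear = [10 ** (v / 20.0) for v in values_db]   # 20*log10 for voltage ratios
    return 20.0 * math.log10(sum(linear) / len(linear))

print(readings_needed(2.0))             # 4
print(average_db([-0.2, -0.1, -0.15]))  # close to, but not equal to, the dB mean
```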

Dealing with Measurement Error – Guardbanding
 If a particular measurement is known to be accurate and repeatable with a worst-case uncertainty of ±ε, then the final test limits should be tightened by ε to ensure that no bad devices are shipped to the customer.
– Guardbanded Positive Test Limit = Positive Test Limit - ε
– Guardbanded Negative Test Limit = Negative Test Limit + ε
 The only way to reduce guardbanding is to increase accuracy and repeatability; this increases test time and may not be a viable option.
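A minimal sketch of applying the guardband, with hypothetical limits in mV.

```python
# Minimal sketch: tighten both test limits inward by the worst-case
# measurement uncertainty epsilon so no out-of-spec device can pass.
def guardbanded_limits(lo: float, hi: float, epsilon: float):
    """Shrink the datasheet limits by the measurement uncertainty."""
    return lo + epsilon, hi - epsilon

lo_gb, hi_gb = guardbanded_limits(-100.0, 100.0, epsilon=5.0)  # mV, hypothetical
print(f"test against [{lo_gb}, {hi_gb}] mV instead of [-100, 100] mV")
```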

Basic Data Analysis – Datalogs
 A concise list of results generated by the test program:
– test number
– test category
– test description
– maximum and minimum test limits
– measured result
– pass/fail indication

[Example datalog: sequencer blocks for continuity (S_continuity, PPMU Cont, 0 failing pins), DAC signal-to-noise tests (S_VDAC_SNR and S_UDAC_SNR: Gain Error, S/2nd, S/3rd, S/THD, S/N, S/N+THD, in dB), and DAC linearity tests (S_UDAC_Linearity: POS/NEG ERR, LSB SIZE, Offset V, POS/NEG INL, POS/NEG DNL, Code Width, in mV and LSBs). Each row lists test number, test name, limits, and measured result; one linearity result is flagged failing (F), and the device is assigned Bin: 10.]

Basic Data Analysis – Histograms
 A graphical method used to view the repeatability of numerical data.
– Ideally the values of the acquired data should be closely packed.
– Statistical relevance of the data is determined by the number of samples taken; in test engineering, the minimum for statistical relevance is 100.
– Histograms also give numerical values which indicate the fit to the standard bell curve, including the mean and standard deviation.
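A minimal sketch using simulated Gaussian data; the mean and sigma are borrowed from the Gaussian-distribution example discussed below.

```python
# Minimal sketch: histogram of 100 repeated measurements, with the mean and
# standard deviation that indicate the fit to the bell curve. Data simulated.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=-0.130, scale=0.0029, size=100)   # e.g. gain in dB

counts, edges = np.histogram(samples, bins=10)
print("mean = %.4f dB, std = %.4f dB" % (samples.mean(), samples.std(ddof=1)))
for c, lo in zip(counts, edges):
    print(f"{lo:+.4f} dB | " + "*" * int(c))               # crude text histogram
```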


Basic Data Analysis – Normal (Gaussian) Distributions
 Any summation of a large number of random variables results in a Gaussian distribution.
 The variations in a typical mixed-signal measurement come from a summation of many different sources of noise and crosstalk in both the device and the tester instrument.
– The standard deviation of the Gaussian distribution is roughly equal to one sixth of the total variation from the minimum value to the maximum value.
– In the example the standard deviation is 0.0029 dB, so we would expect to see values ranging from -0.139 dB to -0.121 dB. These values are labeled as “Mean -3 sigma” and “Mean +3 sigma”.
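A short sketch of the arithmetic, taking the mean as the midpoint of the quoted range (about -0.130 dB, an inference from this slide rather than a stated value).

```python
# Minimal sketch: the +/- 3 sigma range quoted in the example. A Gaussian
# leaves only ~0.27% of samples outside mean +/- 3*sigma.
mean_db, sigma_db = -0.130, 0.0029   # sigma from the example; mean inferred
lo = mean_db - 3 * sigma_db          # -0.1387 dB, labeled "Mean -3 sigma"
hi = mean_db + 3 * sigma_db          # -0.1213 dB, labeled "Mean +3 sigma"
print(f"expect nearly all readings in [{lo:.4f}, {hi:.4f}] dB")
```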

Basic Data Analysis – Non-Gaussian Distributions
 Bimodal distributions
 Outliers

Basic Data Analysis – Noise, Test Time and Yield
 Yield = total good devices / total tested devices
 There is a definite trade-off between test time and production yield.
 The designer controls the design margins, which reduce the need for guardbanding.
 Centering of the design within the specifications:
– may cost extra silicon or extra power
– may make the test unnecessary
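A minimal sketch of the yield bookkeeping, and of how a guardband turns some good devices into test failures; all device values are hypothetical.

```python
# Minimal sketch: yield calculation, plus the cost of guardbanding. Devices
# whose true value sits between the guardbanded limit and the datasheet limit
# are good parts that nevertheless fail the tightened test.
def yield_pct(good: int, tested: int) -> float:
    return 100.0 * good / tested

true_values_mv = [96.0, 99.0, 94.0, 97.5, 90.0]   # hypothetical DUT results
spec_hi_mv, epsilon_mv = 100.0, 5.0                # datasheet limit, uncertainty
passing = [v for v in true_values_mv if v <= spec_hi_mv - epsilon_mv]
print(f"yield = {yield_pct(len(passing), len(true_values_mv)):.0f}%")  # 40%
```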