
CHAPTER 2

Measurement Uncertainty
2.1 Definitions Relating to Measuring
Instruments
1. True or actual value: the actual magnitude of a signal input
to a measuring system, which can only be approached and never
exactly evaluated.
2. Indicated value: the magnitude of a variable indicated by a
measuring instrument.
3. Correction: the revision applied to the indicated value so
that the final result better represents the true value.
4. Scale readability: indicates the closeness with which the
scale can be read in analog instruments.
5. Tolerance: the range of inaccuracy that can be tolerated
in measurements.
6. Backlash: the maximum distance or angle through
which any part of a mechanical system may be moved
in one direction without applying appreciable force or
motion to the next part of the system.
7. Range: the lowest and highest values of the stimulus.
8. Span: the arithmetic difference between the highest and
lowest values of the input being sensed.
For example:
(i) a load cell for the measurement of forces might have a range of
0 to 50 kN and a span of 50 kN;
(ii) range: 2 kN/m² to 50 kN/m²;
span: 50 − 2 = 48 kN/m²;
(iii) range: −5 °C to 90 °C;
span: 90 − (−5) = 95 °C.
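As a minimal sketch (not from the slides), the span calculation above can be written in Python; the `span` helper name is an assumption for illustration:

```python
# Span = highest value of the range - lowest value of the range,
# as defined in the slides above.
def span(low, high):
    """Arithmetic difference between the highest and lowest range values."""
    return high - low

print(span(2, 50))    # pressure example: 48 (kN/m^2)
print(span(-5, 90))   # temperature example: 95 (degrees C)
```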
2.3 Static Characteristics
Static characteristics apply to measurements in which the
parameter of interest is constant or varies only slowly.
The main static characteristics may be summed up as
follows:
1. Accuracy
2. Sensitivity
3. Reproducibility
4. Drift
5. Static error
6. Dead zone
7. Hysteresis error
8. Linearity
9. Reliability
10. Resolution
Accuracy and Inaccuracy
 Accuracy:
a measure of how close the output reading of the
instrument is to the correct value.
 Inaccuracy or measurement uncertainty:
 is the extent to which a reading might be wrong and is
often quoted as a percentage of the full-scale (f.s.)
reading of an instrument.
Accuracy of the measured signal depends upon the following
factors:
Variation of the signal being measured
Intrinsic accuracy of the instrument itself
Accuracy of the observer
Whether or not the quantity is truly impressed upon the
instrument.
 Relative accuracy: A = 1 − |e| / Vt
 Accuracy in percentage: a = 100% − % error, or a = A × 100
where e = absolute error = Vm − Vt, Vm = measured value, and
Vt = true value.
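The accuracy relations above can be sketched in Python; the function names and the sample readings are assumptions for illustration:

```python
# e = Vm - Vt (absolute error)
def absolute_error(v_measured, v_true):
    return v_measured - v_true

# A = 1 - |e| / Vt (relative accuracy)
def relative_accuracy(v_measured, v_true):
    return 1 - abs(v_measured - v_true) / v_true

# Hypothetical voltmeter reading: Vm = 112.68 V, Vt = 112.6 V
A = relative_accuracy(112.68, 112.6)
a = A * 100   # accuracy in percent
```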
Accuracy and Precision
 Accuracy: The closeness with which an instrument reading
approaches the true value of the quantity being measured is
called accuracy.
 Precision: A measure of the consistency or repeatability of
measurements.
 Consider the results of tests on three industrial robots
programmed to place components at a particular point on a table.
The target point was at the center of the concentric circles shown,
and the black dots represent the points where each robot actually
deposited components on each attempt.

(a) Low precision, low accuracy (b) High precision, low accuracy (c) High precision, high accuracy
2.3.2 Sensitivity
 Sensitivity is the relationship indicating how much
output there is per unit input, i.e. 𝒐𝒖𝒕𝒑𝒖𝒕/𝒊𝒏𝒑𝒖𝒕.
 Sensitivity has a wide range of units and these
depend upon the instrument or measurement system
being investigated.
For example, a resistance thermometer may have a sensitivity of
0.5Ω/°𝐶.
 This term is also frequently used to indicate the sensitivity to
inputs other than that being measured, i.e. environmental
changes.
A transducer for the measurement of pressure might be quoted
as having a temperature sensitivity of ±0.1%of the reading per
°C change in temperature.
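A hedged sketch of the sensitivity definition above (the function name and the 5 Ω over 10 °C figures are illustrative assumptions, consistent with the 0.5 Ω/°C example):

```python
# Sensitivity = change in output per unit change in input.
def sensitivity(delta_output, delta_input):
    return delta_output / delta_input

# Resistance thermometer: 5 ohm change over a 10 degC rise -> 0.5 ohm/degC
s = sensitivity(5.0, 10.0)
```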
Repeatability and Reproducibility
Repeatability: It pertains to the closeness of output
readings when the same input is applied repetitively over a
short period of time with the same measurement
conditions, same instrument and observer, same location
and same conditions of use maintained throughout.
Reproducibility: It relates to the closeness of output
readings for the same input when there are changes in the
method of measurement, observer, measuring instrument,
location, conditions of use and time of measurement.
Usually associated with calibration.
Given as a percentage of input full scale: the maximum
difference between two readings taken at different times
under identical input conditions.

Repeatability = (max − min values) / full range × 100
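The repeatability formula above can be sketched as follows; the helper name and sample readings are assumptions:

```python
# Repeatability (%) = (max - min of repeated readings) / full range * 100
def repeatability_percent(readings, full_range):
    return (max(readings) - min(readings)) / full_range * 100

# e.g. repeated readings of a nominally constant input on a 0-10 V scale
r = repeatability_percent([5.02, 4.98, 5.01], full_range=10.0)
```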
Drift
 Undesired gradual departure of the instrument output
over a period of time that is unrelated to changes in input,
operating conditions or load.
 An instrument is said to have no drift if it reproduces the
same readings at different times for the same variation in the
measured variable.
 The drift may be caused by the following factors:
High mechanical stresses developed in some parts of
instruments and systems
Wear and tear
Mechanical vibrations
Temperature changes
Stray electric and magnetic fields
Thermal e.m.fs.
Classification of drift:
1. Zero drift: the whole of the instrument calibration
gradually shifts by the same amount.
It may be due to permanent set or slippage and can
be corrected by resetting the pointer position.
2. Span drift (sensitivity drift): the calibration changes
proportionally over the range. It may be due to a change
in spring gradient, etc.
3. Zonal drift: the drift occurs only over a
portion of the span of an instrument.
 In industrial instruments, drift is an undesirable
quality since it is rarely apparent and cannot be easily
compensated for.
 Drift occurs very slowly and can be checked only by
periodic inspection and maintenance.
Static error
 The difference between the measured value and the
true value of the quantity.
 The absolute value of the error alone does not indicate the
accuracy of measurement precisely.
 Thus another term, relative static error, is introduced.
 The relative static error is the ratio of absolute static error to
the true value of the quantity under measurement.
Relative static error: Er = Es / Vt = Absolute error / True value
 Percentage static error: % Er = Er × 100
 Static correction: the difference between the true
value and the measured value of the quantity,
i.e. δC = Vt − Vm
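The static-error relations can be sketched in Python; the function names are assumptions, and the conventions follow the definitions above (Es = Vm − Vt, correction = Vt − Vm):

```python
# Es = Vm - Vt (absolute static error)
def static_error(measured, true):
    return measured - true

# Er = Es / Vt (relative static error)
def relative_static_error(measured, true):
    return (measured - true) / true

# dC = Vt - Vm = -Es (static correction)
def static_correction(measured, true):
    return true - measured
```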
Dead Band and Dead Time
 The dead band or dead space of an instrument is the
range of input values for which there is no output.
 For example, bearing friction in a flow-meter using a rotor
might mean that there is no output until the input has
reached a particular velocity threshold.
 The dead time is the length of time from the application
of an input until the output begins to respond and change.
 A device should not operate in this range unless this
insensitivity is acceptable.
Hysteresis Error
 The maximum difference in output at
any measured value when the point is approached
first with increasing and then
with decreasing input.
 Caused by electrical or mechanical
systems
 Magnetization
 Thermal properties
 Loose linkages
 If temperature is measured, then at a rated temperature of 50 °C the
output might be 4.95 V when the temperature is increasing but 5.05 V
when the temperature is decreasing.
 This is an error of ±0.5% (for an output full scale of 10 V in this
idealized example).
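The worked figure above can be checked numerically; this sketch assumes the quoted ±0.5% is half the up/down difference expressed as a percentage of full scale (the helper name is an assumption):

```python
# Hysteresis error (%) = |down reading - up reading| / 2 / full scale * 100
def hysteresis_error_percent(reading_up, reading_down, full_scale):
    return abs(reading_down - reading_up) / 2 / full_scale * 100

# 4.95 V rising vs 5.05 V falling on a 10 V full-scale output -> ~0.5 %
e = hysteresis_error_percent(4.95, 5.05, full_scale=10.0)
```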
Linearity
 The ability to reproduce the input characteristics symmetrically.
 A measure of the maximum deviation of any of the calibration points
from the straight line.
 Any departure from straight line relationship is non-linearity.
 May be due to the following factors:
Viscous flow or creep
Non-linear elements in the measurement device
Mechanical hysteresis
The elastic after-effects in the mechanical system.
 Can be of two types:
1. Terminal linearity: the deviation from a straight
line through the end points.
2. Best-fit linearity: the deviation from the
straight line that gives minimum errors both plus
and minus.

Figure: (a) end-range values (b) best straight line for all values
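Terminal (end-point) nonlinearity can be sketched numerically; this helper is an assumption, expressing the maximum deviation of the calibration points from the end-point line as a percentage of span:

```python
# Maximum deviation from the straight line through the first and last
# calibration points, as a percentage of the output span.
def terminal_nonlinearity_percent(inputs, outputs):
    x0, x1 = inputs[0], inputs[-1]
    y0, y1 = outputs[0], outputs[-1]
    slope = (y1 - y0) / (x1 - x0)
    deviations = [abs(y - (y0 + slope * (x - x0)))
                  for x, y in zip(inputs, outputs)]
    return max(deviations) / (y1 - y0) * 100

# e.g. a mid-scale point 0.1 high on a 0-2 output span -> 5 %
nl = terminal_nonlinearity_percent([0, 1, 2], [0.0, 1.1, 2.0])
```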
Reliability
 Reliability: a statistical measure of quality of a device
which indicates the ability of the device to perform its
stated function, under normal operating conditions
without failure for a stated period of time or number of
cycles.
 Given in hours, years or in MTBF
 Usually provided by the manufacturer
 Based on accelerated lifetime testing
Resolution
The smallest change of input from which there will be a
change of output.
In case of analog instruments, resolution is determined by
the observer’s ability to judge the position of a pointer on
scale.
In digital systems, resolution is generally specified as
1/2^N of full scale (N is the number of bits).
Example: a digital voltmeter with resolution of 0.1V is
used to measure the output of a sensor.
The change in input (temperature, pressure, etc.) that
will provide a change of 0.1V on the voltmeter is the
resolution of the sensor/voltmeter system.
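The digital-resolution rule above can be sketched as follows; the function name and the 10 V, 12-bit figures are illustrative assumptions:

```python
# Smallest distinguishable output step of an N-bit system
# = full scale / 2**N.
def digital_resolution(full_scale, n_bits):
    return full_scale / 2 ** n_bits

# 10 V full scale, 12-bit converter -> 10 / 4096 V per step
r = digital_resolution(10.0, 12)
```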
Measurement Uncertainty
 Errors are a property of the measurement.
 Error causes a difference between the value assigned by
measurement and the true value of the population of the variable.
 Uncertainty is a property of the result.
 The outcome of a measurement is a result, and the uncertainty
quantifies the quality of that result.
 Uncertainty analysis provides a powerful design tool for
evaluating different measurement systems and methods,
designing a test plan, and reporting uncertainty.
Errors Vs Uncertainty
 Errors are effects, and uncertainties are numbers.
 While errors are the effects that cause a measured
value to differ from the true value, the uncertainty is
an assigned numerical value that quantifies the
probable range of these errors.
Types of Measurement Error
 Errors arising during the measurement process can be
divided into systematic errors and random errors.
1. Systematic errors: describe errors in the output
readings of a measurement system that are consistently on
one side of the correct reading, that is, either all errors are
positive or are all negative.
Sources of systematic error
 The main sources of systematic error are:
 Effect of environmental disturbances, often called
modifying inputs
 Disturbance of the measured system by the act of
measurement
 Changes in characteristics due to wear in instrument
components over a period of time
 Resistance of connecting leads
Reduction of Systematic Errors
 The prerequisite for the reduction of systematic errors is a
complete analysis of the measurement system that
identifies all sources of error.
1. Careful Instrument Design: the most useful
weapon against environmental inputs is to reduce
the sensitivity of an instrument to
environmental inputs to as low a level as possible.
2. Calibration: the maximum error that exists just
before an instrument is recalibrated can be
made smaller by increasing the frequency of
recalibration, so that the amount of drift between
calibrations is reduced.
3. Method of Opposing Inputs: compensates for the effect of
an environmental input in a measurement system by introducing an
equal and opposite environmental input that
cancels it out.
4. Manual Correction of Output Reading: a good
measurement technician can substantially reduce errors at the
output of a measurement system by calculating the effect of
such systematic errors and making appropriate correction to
the instrument readings.
5. Intelligent Instruments: contain extra sensors that
measure the value of environmental inputs and automatically
compensate the value of the output reading.
Random Error
 Random errors in measurements are caused by
unpredictable variations in the measurement system.
 They are known by the alternative name precision errors.
Typical sources of random error are:
Measurements taken by human observation of an analogue meter,
especially where this involves interpolation between scale points.
 Electrical noise.
 Random environmental changes, for example, sudden draught of air.
Examples
1. A voltmeter reads 112.68 V. If the true value of the voltage is 112.6 V, determine the
following:
i) the static error;
ii) the static correction for the voltmeter.
2. A thermometer reads 92.35 °C and the static correction given in the correction
curve is −0.07 °C. Determine the true value of the temperature.
3. An analog indicating instrument with a scale range of 0 - 5.0 V shows a voltage of
2.65 V. The true value of the voltage is 2.70 V.
a) What are the values of the absolute error and correction?
b) Express the error as a fraction of the true value and of the full-scale deflection.
4. A pressure gauge of range 50 bar is stated to have an error of ±0.15 bar when
calibrated by the manufacturer. Determine
a) the percentage error on the basis of the maximum full-scale value;
b) the possible error as a percentage of the indicated value when a
reading of 10 bar is obtained.
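As a cross-check, the numerical parts of these examples can be verified with a short script; the formulas assumed are e = Vm − Vt and correction = Vt − Vm, as defined earlier in the chapter:

```python
# Example 1: voltmeter, Vm = 112.68 V, Vt = 112.6 V
e1 = 112.68 - 112.6        # static error, +0.08 V
c1 = -e1                   # static correction, -0.08 V

# Example 2: thermometer, reading 92.35 C, correction -0.07 C
t_true = 92.35 + (-0.07)   # true value, 92.28 C

# Example 4: gauge range 50 bar, stated error +/-0.15 bar
pct_full_scale = 0.15 / 50 * 100   # 0.3 % of full scale
pct_at_10_bar = 0.15 / 10 * 100    # 1.5 % of a 10 bar reading
```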