Choosing an appropriate instrument for a particular application requires knowledge of the
characteristics of the different classes of instruments and, in particular, of how these classes of
instrument perform in different applications and operating environments.
Classification of Instruments
Instruments can be subdivided into separate classes according to several criteria. These
subclassifications are useful in broadly establishing several attributes of particular instruments
such as accuracy, cost, and general applicability to different applications.
Instrument Characteristics
Instrument characteristics are the various attributes of instruments that determine
their performance and suitability for different measurement requirements and applications. Static
and dynamic are the two classifications of instrument characteristics.
Static Characteristics
These are the steady-state attributes of an instrument, that is, those that apply once the output
measurement value has settled to a constant reading after any initial transient. These attributes are
collectively known as the static characteristics of instruments and are given in the data sheet for a particular
instrument. It is important to note that values quoted for instrument characteristics in such a data
sheet only apply when the instrument is used under specified standard calibration conditions. Due
allowance must be made for variations in the characteristics when the instrument is used in other
conditions. The various static performance characteristics are:
A. Accuracy
The accuracy of an instrument is a measure of how close the output reading of the instrument is to
the true value of the measurand; the difference between the true value and the measured value is
the measurement error. In practice, it is more usual to quote the
inaccuracy or measurement uncertainty value rather than the accuracy value for an instrument.
Inaccuracy (measurement uncertainty) is the extent to which a reading might be wrong and is often
quoted as a percentage of the full-scale reading of an instrument. Accuracy can also be expressed
as the percentage of span, percentage of reading, or an absolute value.
B. Range and Span
The range of an instrument specifies the lowest (minimum) and highest (maximum) values of a
quantity that the instrument is designed to measure. For example, a thermometer whose scale goes
from −40°C to 100°C has a range of −40°C to 100°C. The span of an instrument is the difference
between its maximum and minimum scale values; for example, a thermometer whose scale goes from −40°C to 100°C
has a span of 140°C. When accuracy is expressed as a percentage of span, it is the deviation
from true expressed as a percentage of the span. Reading accuracy is the deviation from true at the
point the reading is being taken and is expressed as a percentage. The absolute accuracy of an
instrument is the deviation from true as a number not as a percentage.
Example 1. A pressure gauge ranges from 0 to 50 N/m², and the worst-case spread in readings is ±4.35
N/m². What is the %FSD accuracy?
Solution
%FSD = ± (4.35 /50) × 100 = ±8.7%
Example 3. In the data sheet of a scale capable of weighing up to 200 kg, the accuracy is given as
±2.5% of a reading. What is the deviation at the 50 and 100 kg readings, and what is the %FSD
accuracy?
Solution
Deviation at 50 kg = ± (50 × 2.5/100) kg = ±1.25 kg
Deviation at 100 kg = ± (100 × 2.5/100) kg = ±2.5 kg
Maximum deviation occurs at FSD, that is, ±5 kg or ±2.5% FSD
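The accuracy calculations in Examples 1 and 3 can be sketched as a short script; the function names below are illustrative only, not from any particular library.

```python
def fsd_accuracy(max_deviation, full_scale):
    """Worst-case deviation expressed as a percentage of full-scale deflection."""
    return 100.0 * max_deviation / full_scale

def deviation_of_reading(reading, percent_of_reading):
    """Absolute deviation when accuracy is quoted as a percentage of reading."""
    return reading * percent_of_reading / 100.0

# Example 1: pressure gauge, 0-50 N/m^2, worst-case spread +/-4.35 N/m^2
print(round(fsd_accuracy(4.35, 50), 2))   # 8.7 (%FSD)

# Example 3: 200 kg scale, accuracy +/-2.5% of reading
print(deviation_of_reading(50, 2.5))      # 1.25 (kg)
print(deviation_of_reading(100, 2.5))     # 2.5 (kg)
print(deviation_of_reading(200, 2.5))     # 5.0 (kg) -- the maximum, at FSD
```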
C. Precision
Precision describes the degree of freedom of an instrument from random errors, that is, the limits
within which a signal can be read. If a large number of readings are taken of
the same quantity by a high-precision instrument, then the spread of readings will be very small.
Precision is often, although incorrectly, confused with accuracy. High precision does not imply
anything about measurement accuracy. A high-precision instrument may have a low accuracy.
Low accuracy measurements from a high-precision instrument are normally caused by a bias in
the measurements, which is removable by recalibration.
The terms repeatability and reproducibility mean approximately the same but are applied in
different contexts. Repeatability describes the closeness of output readings when the same input is
applied repetitively over a short period of time, with the same measurement conditions, same
instrument and observer, same location, and same conditions of use maintained throughout.
Reproducibility describes the closeness of output readings for the same input when there are
changes in the method of measurement, observer, measuring instrument, location, conditions of
use, and time of measurement. Equivalently, reproducibility can be seen as the ability of an
instrument to reproduce the same reading of a fixed value over time under these varying conditions;
lack of reproducibility creates an uncertainty in the reading. Both terms
thus describe the spread of output readings for the same input. This spread is referred to as
repeatability if the measurement conditions are constant and as reproducibility if the measurement
conditions vary. The degree of repeatability or reproducibility in measurements from an instrument
is an alternative way of expressing its precision. Figure 3 illustrates this more clearly by showing
results of tests on three industrial robots programmed to place components at a particular point on
a table. The target point was at the center of the concentric circles shown, and black dots represent
points where each robot actually deposited components at each attempt. Both the accuracy and the
precision of Robot 1 are shown to be low in this trial. Robot 2 consistently puts the component
down at approximately the same place but this is the wrong point. Therefore, it has high precision
but low accuracy. Finally, Robot 3 has both high precision and high accuracy because it
consistently places the component at the correct target position.
E. Linearity
Linearity is a measure of the proportionality between the actual value of a variable being measured
and the output of the instrument over its operating range. The deviation from true for an instrument
may be caused by one or several factors affecting accuracy, and can determine the choice of
instrument for a particular application. Figure 4 shows a linearity curve for a flow sensor, which
is the output from the sensor versus the actual flow rate. The curve is compared to a best-fit straight
line. The deviation from the ideal is 4 cm/min, which gives a linearity of ±4% of FSD.
Nonlinearity is then defined as the maximum deviation of any output reading from this best-fit
straight line. Nonlinearity is usually expressed as a percentage of full-scale
reading.
Offset is the reading of an instrument with zero input. Drift is the change in the reading of an
instrument of a fixed variable with time. The sensitivity of measurement is the change in instrument
output per unit change in the measured quantity, i.e., the slope of the best-fit straight line drawn
on Figure 4. If, for example, a pressure of 2 bar produces a deflection
of 10 degrees in a pressure transducer, the sensitivity of the instrument is 5 degrees/bar (assuming
that the deflection is zero with zero pressure applied).
Example 5. The following resistance values of a platinum resistance thermometer were measured
at a range of temperatures. Determine the measurement sensitivity of the instrument in Ω/°C.
S/n Resistance (Ω) Temperature (°C)
1 307 200
2 314 230
3 321 260
4 328 290
Solution
If these values are plotted on a graph, the straight-line relationship between resistance change and
temperature change is obvious.
For a change in temperature of 30 °C, the change in resistance is 7 Ω. Hence the measurement
sensitivity S = 7/30 = 0.233 Ω/°C.
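A minimal check of Example 5, computing the slope directly from the tabulated data:

```python
# Example 5: sensitivity of the platinum resistance thermometer, taken as the
# slope of the (linear) resistance-temperature characteristic.
resistance = [307, 314, 321, 328]     # ohms
temperature = [200, 230, 260, 290]    # deg C

# Slope between each pair of adjacent points; the data are exactly linear,
# so every slope is the same and the mean equals the sensitivity.
slopes = [(resistance[i + 1] - resistance[i]) / (temperature[i + 1] - temperature[i])
          for i in range(len(resistance) - 1)]
sensitivity = sum(slopes) / len(slopes)
print(round(sensitivity, 3))          # 0.233 (ohm per deg C)
```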
G. Threshold
If the input to an instrument is increased gradually from zero, the input will have to reach a certain
minimum level before the change in the instrument output reading is of a large enough magnitude
to be detectable. This minimum level of input is known as the threshold of the instrument.
Manufacturers vary in the way that they specify threshold for instruments. Some quote absolute
values, whereas others quote threshold as a percentage of full-scale readings. As an illustration, a
car speedometer typically has a threshold of about 15 km/h. This means that, if the vehicle starts
from rest and accelerates, no output reading is observed on the speedometer until the speed reaches
15 km/h.
H. Resolution
When an instrument is showing a particular output reading, there is a lower limit on the magnitude
of the change in the input measured quantity that produces an observable change in the instrument
output. Like threshold, resolution is sometimes specified as an absolute value and sometimes as a
percentage of full-scale deflection. One of the major factors influencing the resolution of an
instrument is how finely its output scale is divided into subdivisions. Using a car speedometer as
an example again, this has subdivisions of typically 20 km/h. This means that when the needle is
between the scale markings, we cannot estimate speed more accurately than to the nearest 5 km/h.
This value of 5 km/h thus represents the resolution of the instrument. Resolution is the smallest
amount of a variable that an instrument can resolve, i.e., the smallest change in a variable to which
the instrument will respond.
Example 6. A digital meter has 10-bit accuracy. What is the resolution on the 16V range?
Solution
Number of quantization levels for 10 bits = 2¹⁰ = 1,024
Resolution = 16/1,024 = 0.0156 V = 15.6 mV
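Example 6 generalizes to any n-bit converter; a minimal sketch (the function name is illustrative only):

```python
def adc_resolution(full_scale_volts, n_bits):
    """Smallest voltage step an n-bit converter can resolve:
    full-scale range divided by the number of quantization levels."""
    return full_scale_volts / (2 ** n_bits)

print(adc_resolution(16, 10))   # 0.015625, i.e. about 15.6 mV
```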
I. Sensitivity to Disturbance
All calibrations and specifications of an instrument are only valid under controlled conditions of
temperature, pressure, and so on. These standard ambient conditions are usually defined in the
instrument specification. As variations occur in the ambient temperature, certain static instrument
characteristics change, and the sensitivity to disturbance is a measure of the magnitude of this
change. Such environmental changes affect instruments in two main ways, known as zero drift and
sensitivity drift.
Zero drift, sometimes known as bias, describes the effect whereby the zero reading of an instrument
is modified by a change in ambient conditions. This causes a constant error that exists over the full
range of measurement of the instrument. The mechanical form of a bathroom scale is a common
example of an instrument prone to zero drift. It is quite usual to find that there is a reading of
perhaps 1 kg with no one on the scale. If someone of known weight 70 kg were to get on the scale,
the reading would be 71 kg. Zero drift is normally removable by calibration. The typical unit in
which such zero drift is measured is volts/°C. This is often called the zero-drift coefficient related
to temperature changes. If the characteristic of an instrument is sensitive to several environmental
parameters, then it will have several zero drift coefficients, one for each environmental parameter.
Sensitivity drift (also known as scale factor drift) defines the amount by which an instrument’s
sensitivity of measurement varies as ambient conditions change. It is quantified by sensitivity drift
coefficients that define how much drift there is for a unit change in each environmental parameter
that the instrument characteristics are sensitive to. Many components within an instrument are
affected by environmental fluctuations, such as temperature changes: for instance, the modulus of
elasticity of a spring is temperature dependent.
Example 7. A spring balance is calibrated in an environment at a temperature of 20 °C, and has the
following deflection/load characteristic:
Load (kg) 0 1 2 3
Deflection (mm) 0 20 40 60
It is then used in an environment at a temperature of 30 °C, and the following deflection/load
characteristic is measured:
Load (kg) 0 1 2 3
Deflection (mm) 5 27 49 71
Determine the sensitivity at 20 °C and 30 °C, and the zero drift and sensitivity drift per °C change in
ambient temperature.
Solution
a. At a temperature of 20 °C, sensitivity S20 = (20 − 0)/(1 − 0) = 20 mm/kg.
Similarly, at a temperature of 30 °C, sensitivity S30 = (27 − 5)/(1 − 0) = 22 mm/kg.
b. Zero drift (bias) is the constant difference between the pairs of output readings at a
particular temperature: the no-load deflection of 5 mm. Sensitivity drift S = S30 − S20 = 22 − 20 = 2 mm/kg.
c. The zero drift coefficient is the magnitude of the drift divided by the magnitude of the
temperature change causing the drift:
Zero drift/°C = 5/(30 − 20) = 0.5 mm/°C
Sensitivity drift coefficient = 2/(30 − 20) = 0.2 (mm/kg)/°C
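The drift calculations of Example 7 can be checked with a short script; the 20 °C deflections (0, 20, 40, 60 mm) used here are those implied by the quoted calibration-temperature sensitivity of 20 mm/kg.

```python
# Zero drift and sensitivity drift for Example 7.
load = [0, 1, 2, 3]                    # kg
deflection_20 = [0, 20, 40, 60]        # mm, at the 20 deg C calibration temperature
deflection_30 = [5, 27, 49, 71]        # mm, measured at 30 deg C

# Sensitivity = slope of the deflection/load characteristic
s20 = (deflection_20[1] - deflection_20[0]) / (load[1] - load[0])  # 20 mm/kg
s30 = (deflection_30[1] - deflection_30[0]) / (load[1] - load[0])  # 22 mm/kg

zero_drift = deflection_30[0] - deflection_20[0]   # 5 mm shift in no-load reading
sensitivity_drift = s30 - s20                      # 2 mm/kg

dT = 30 - 20
print(zero_drift / dT)          # 0.5 (mm per deg C)
print(sensitivity_drift / dT)   # 0.2 ((mm/kg) per deg C)
```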
J. Hysteresis Effects
Hysteresis is the difference in readings obtained when an instrument approaches a signal from
opposite directions (i.e., going from low up to the value, or high down to the value). Figure 5
illustrates the output characteristic of an instrument that exhibits hysteresis. If the input measured
quantity to the instrument is increased steadily from a negative value, the output reading varies in
the manner shown in curve A. If the input variable is then decreased steadily, the output varies in
the manner shown in curve B. The noncoincidence between these loading and unloading curves is
known as hysteresis. Two quantities are defined, maximum input hysteresis and maximum output
hysteresis, as shown in Figure 5. These are normally expressed as a percentage of the full-scale
input or output reading, respectively. Hysteresis is found most commonly in instruments that
contain springs because they exhibit friction in moving parts and also in instruments that contain
electrical windings formed round an iron core, due to magnetic hysteresis in the iron.
Figure 5: Instrument characteristic with hysteresis
K. Dead Space
Dead space is defined as the range of different input values over which there is no change in
output value. Any instrument that exhibits hysteresis also displays dead space, as marked on
Figure 5. Backlash in gears is a typical cause of dead space and results in the sort of instrument
output characteristic shown in Figure 6. Backlash is commonly experienced in gear sets used to
convert between translational and rotational motion (which is a common technique used to
measure translational velocity).
Figure 6: Instrument characteristic with dead space.
Dynamic Characteristics
The static characteristics of measuring instruments are concerned only with the steady-state
reading that the instrument settles down to, such as accuracy of the reading. Measurement
outcomes are rarely static over time. They will possess a dynamic component that must be
understood for correct interpretation of the results. The dynamic characteristics of a measuring
instrument describe its behaviour between the time a measured quantity changes value and the
time when the instrument output attains a steady value in response. As with static characteristics,
any values for dynamic characteristics quoted in instrument data sheets only apply when the
instrument is used under specified environmental conditions. Outside these calibration conditions,
some variation in the dynamic parameters can be expected.
To properly appreciate instrumentation design and its use, it is necessary to develop insight into
the most commonly encountered types of dynamic response and to develop the mathematical
modeling basis that allows us to make concise statements about responses. In any Linear Time-
Invariant (LTI) measuring system, the relationship between the input (𝑞𝑖 ) and the output (𝑞𝑜 ) for
time(𝑡) > 0 is given by
a_n \frac{d^n q_o}{dt^n} + a_{n-1} \frac{d^{n-1} q_o}{dt^{n-1}} + \cdots + a_1 \frac{dq_o}{dt} + a_0 q_o = b_m \frac{d^m q_i}{dt^m} + b_{m-1} \frac{d^{m-1} q_i}{dt^{m-1}} + \cdots + b_1 \frac{dq_i}{dt} + b_0 q_i    (1)
Where 𝑞𝑖 is the measured quantity (input), 𝑞𝑜 is the output reading, 𝑎0 … 𝑎𝑛 , 𝑏0 … 𝑏𝑚 are constants.
Considering a step change in the measured quantity only, the equation (1) reduces to equation (2)
a_n \frac{d^n q_o}{dt^n} + a_{n-1} \frac{d^{n-1} q_o}{dt^{n-1}} + \cdots + a_1 \frac{dq_o}{dt} + a_0 q_o = b_0 q_i    (2)
Further simplification can be made by taking certain special cases of Equation (2), which collectively
apply to nearly all measurement systems.
A. Zero-Order Instrument
If all the coefficients a_1 ... a_n other than a_0 in Equation (2) are assumed zero, then a_0 q_o = b_0 q_i, or

q_o = \frac{b_0}{a_0} q_i = K q_i    (3)

where K = b_0 / a_0 is a constant known as the instrument sensitivity.
Any instrument that behaves according to Equation (3) is said to be of a zero-order type. A
potentiometer, which measures motion, is a good example of such an instrument: the output
voltage changes instantaneously as the slider is displaced along the potentiometer track. Following
a step change in the measured quantity at time t, the instrument output moves immediately to its
new value at the same time instant t, as shown in Figure 7.
C. Second-order Instrument
If all the coefficients a_3 ... a_n other than a_0, a_1, and a_2 are assumed zero in Equation (2), then

a_2 \frac{d^2 q_o}{dt^2} + a_1 \frac{dq_o}{dt} + a_0 q_o = b_0 q_i    (8)

If d/dt is replaced by the S operator in Equation (8), we get

a_2 S^2 q_o + a_1 S q_o + a_0 q_o = b_0 q_i    (9)

q_o = \frac{b_0 q_i}{a_2 S^2 + a_1 S + a_0}    (10)
Equation (10) can be rearranged in terms of three parameters, the static sensitivity (K), undamped
natural frequency (\omega), and damping ratio (\varepsilon), by dividing the numerator and denominator by a_0:

q_o = \frac{(b_0/a_0)\, q_i}{(a_2/a_0) S^2 + (a_1/a_0) S + 1}    (11)

where

K = \frac{b_0}{a_0}    (12)

\omega = \sqrt{\frac{a_0}{a_2}} \implies \omega^2 = \frac{a_0}{a_2}    (13)

\varepsilon = \frac{a_1}{2\sqrt{a_0 a_2}} = \frac{a_1 \omega}{2 a_0}    (14)

Substituting Equations (12), (13), and (14) into (11):

q_o = \frac{K q_i}{\frac{1}{\omega^2} S^2 + \frac{2\varepsilon}{\omega} S + 1}    (15)
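As a quick sanity check on Equations (12)-(14), the three parameters can be computed directly from a set of coefficients; the numbers below are illustrative only, not taken from the text.

```python
import math

def second_order_params(a2, a1, a0, b0):
    """Static sensitivity K, undamped natural frequency w, and damping ratio e
    for the system a2*q_o'' + a1*q_o' + a0*q_o = b0*q_i (Equations 12-14)."""
    K = b0 / a0                          # Eq. (12)
    w = math.sqrt(a0 / a2)               # Eq. (13)
    e = a1 / (2 * math.sqrt(a0 * a2))    # Eq. (14)
    return K, w, e

# Illustrative coefficients (hypothetical instrument)
params = second_order_params(1, 6, 25, 50)
print(params)   # K = 2, w = 5 rad/s, e = 0.6
```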
Equation (15) is the standard equation for a second-order system, and any instrument whose
response can be described by it is known as a second-order instrument. If Equation (15) is solved
analytically, the shape of the step response obtained depends on the value of the damping ratio
parameter ε. The output responses of a second-order instrument for various values of ε following
a step change in the value of the measured quantity at time t are shown in Figure 9. For case A,
where 𝜀 = 0, there is no damping and the instrument output exhibits constant amplitude
oscillations when disturbed by any change in the physical quantity measured. For light damping
of 𝜀 = 0.2, represented by case B, the response to a step change in input is still oscillatory but the
oscillations die down gradually. A further increase in the value of 𝜀 reduces oscillations and
overshoots still more, as shown by curves C and D, and finally the response becomes very
overdamped, as shown by curve E, where the output reading creeps up slowly toward the correct
reading. Clearly, the extreme response curves A and E are grossly unsuitable for any measuring
instrument. If an instrument were only ever subjected to step inputs, then the design strategy
would be to aim toward a damping ratio of 0.707, which gives the fast, slightly overshooting response of curve C.
Unfortunately, most of the physical quantities that instruments are required to measure do not
change in the mathematically convenient form of steps, but rather in the form of ramps of varying
slopes. As the form of the input variable changes, so the best value for 𝜀 varies, and choice of 𝜀
becomes one of compromise between those values that are best for each type of input variable
behavior anticipated. Commercial second-order instruments, of which the accelerometer is a
common example, are generally designed to have a damping ratio (𝜀) somewhere in the range of
0.6–0.8.
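The influence of ε on the step response described above can be reproduced numerically. This is a minimal sketch, not a production solver: it integrates the second-order equation with a simple fixed-step scheme, assuming ω = 1 rad/s and unit static sensitivity.

```python
def step_response(eps, w=1.0, K=1.0, q_i=1.0, t_end=30.0, dt=1e-3):
    """Integrate (1/w^2) q'' + (2*eps/w) q' + q = K*q_i for a unit step input,
    returning the sampled output q_o(t)."""
    q, v = 0.0, 0.0              # output reading and its rate of change
    out = []
    for _ in range(int(t_end / dt)):
        # Rearranged second-order equation: q'' = w^2 (K q_i - q) - 2 eps w q'
        acc = w * w * (K * q_i - q) - 2 * eps * w * v
        v += acc * dt            # semi-implicit Euler step
        q += v * dt
        out.append(q)
    return out

light = step_response(0.2)       # oscillatory, large overshoot (curve B)
heavy = step_response(0.707)     # fast, only slight overshoot (curve C)
print(max(light), max(heavy))    # overshoot shrinks as eps grows
```

Both responses settle toward the same steady-state reading K·q_i; only the transient behaviour, and hence the settling time, differs with ε.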
Note that both first- and second-order instruments take time to settle to a steady-state reading when the
measured quantity changes. It is therefore necessary to wait until the dynamic motion has ended
before a reading is recorded. This places a serious limitation on the use of first- and second-order
instruments to make repeated measurements. Clearly, the frequency of repeated measurements is
limited by the time taken by the instrument to settle to a steady-state reading.
Figure 9: Response characteristics of second-order instruments.