
Ch. 1 - General principles

EET 238 Instrumentation
Contents

1) Basic concepts of measurement
2) Measurement/Instrumentation standards
3) Elements of a measuring instrument (measurement system)
4) Classifications/types of instruments
5) Performance characteristics - dynamic responses and static characteristics, measurement errors
6) Typical applications of measurements

Objectives

By the end of the presentation, you will be able to:
 Define metrology and some of its terms
 Explain the various standards of measurement and instrumentation
 Identify the elements of an instrumentation (measurement) system
 Classify instruments
 Define the various performance characteristics of measuring instruments
 Indicate applications of measurements
1. Basic concepts of measurement

 The primary objective in any measurement system is to establish the value or the tendency of some variable.
 Measurement provides quantitative information on the actual state of physical variables and processes that otherwise could only be estimated. It determines the dimension, quantity or capacity of something.
 Everything has to do with measurement; it can be seen everywhere, allowing people to plan their lives and make commercial exchanges with confidence.
 Q: What is the science that deals with measurement?
Metrology

 It is the science and "grammar" of measurement.
 It is defined as: "the field of knowledge concerned with measurement".
 It is concerned with standardized measurement units so that scientific and economic figures can be understood, reproduced, and converted with a high degree of certitude.
Metrology covers three main
tasks:

 The definition of internationally accepted units of measurement
 The realization of units of measurement by scientific methods
 The establishment of traceability chains documenting the accuracy of a measurement

"Metrology is essential in scientific research"
Measurement

 The International Vocabulary of Basic and General Terms in Metrology (VIM), using International Organization for Standardization (ISO) norms, has defined measurement as: "a set of operations having the object of determining the value of a quantity".
 In other words, a measurement is the evaluation of a quantity made after comparing it to a quantity of the same type which we use as a "unit".
Instrumentation

 It refers to a group of permanent systems which help us to measure objects and maintain retroactive control of a process.
 In abstract terms, an instrument is a device that transforms a physical variable of interest (the measurand) into a form that is suitable for recording (the measurement).
 An example of a basic instrument is a ruler. In this case the measurand is the length of some object and the measurement is the number of units (meters, inches, etc.) that represent the length.
 In this sense, instruments and systems of measurement constitute the "tools" of measurement and metrology.
 Q: Are measurements truly exact or perfect?
Uncertainty (Accuracy)

 A value assigned to a measurement result that characterizes how well the result is known.
 It arises due to the imperfections in the measurement system, i.e. there is always a certain error in measurement.
 Since the error for any particular error source is unknown and unknowable, its limits, at a given confidence, must be estimated. This estimate is called the uncertainty.

"One thing you learn in science is that there is no perfect answer, no perfect measure." A. O. Beckman

 Q: How do we know the accuracy of a certain measuring instrument from a user's perspective?
Calibration
 The relationship between the input information and the system output is
established by a calibration.
 The quantity to be measured being the measurand, m, the sensor must convert m into an electrical variable called s. The expression s = F(m) is established by calibration. By using a standard or unit of measurement, we record, for known physical input values of m (m1, m2, … mi), the electrical signals sent by the sensor (s1, s2, … si) and we trace the curve s(m), called the sensor calibration curve.
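The procedure above can be sketched in code. This is only an illustration: the (m, s) pairs below are hypothetical readings for an imagined linear voltage-output sensor, and a plain least-squares fit stands in for "tracing the curve s(m)".

```python
# Sketch of constructing a sensor calibration curve s = F(m).
# The data points are hypothetical; a real calibration would use
# readings taken against a reference standard.

def fit_line(m_vals, s_vals):
    """Least-squares fit s = a*m + b through the calibration points."""
    n = len(m_vals)
    mean_m = sum(m_vals) / n
    mean_s = sum(s_vals) / n
    num = sum((m - mean_m) * (s - mean_s) for m, s in zip(m_vals, s_vals))
    den = sum((m - mean_m) ** 2 for m in m_vals)
    a = num / den
    b = mean_s - a * mean_m
    return a, b

# Known physical inputs m1..mi and the electrical signals s1..si they produced
m_vals = [0.0, 25.0, 50.0, 75.0, 100.0]   # e.g. temperature in deg C
s_vals = [0.02, 1.24, 2.51, 3.76, 5.01]   # e.g. sensor output in volts

a, b = fit_line(m_vals, s_vals)
print(f"calibration curve: s = {a:.4f}*m + {b:.4f}")

# Invert the curve to convert a later reading back to the measurand
s_reading = 3.0
m_estimate = (s_reading - b) / a
print(f"{s_reading} V corresponds to about {m_estimate:.1f} deg C")
```

Once the calibration curve is known, any later sensor output can be converted back to the measurand by inverting it, as the last two lines show.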

Calibration (contd.)

 It is a set of operations that establishes the relationship between values of quantities indicated by a measuring instrument and a reference standard.
 Calibration affords the opportunity to check the instrument against a known standard and subsequently to reduce errors in accuracy.
 Example: calibration of a flow-meter
– Comparison with a standard flow-measurement facility.
– Comparison with a flow-meter of known accuracy, higher than that of the instrument to be calibrated.
– Using indirect measurements, e.g. weighing a certain amount of water in a tank and recording the time elapsed for this quantity to flow.
Traceability

 An unbroken chain of comparisons to the SI units, all having stated uncertainties.
 The property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty.

(Figure: the traceability chain from the NMI down to you, the consumer; a broken chain results in inaccurate measurements.)
Metrologist

 Designs and runs measurement calibrations & tests
 Analyzes the results
 Determines the final accuracy of the device under test

(Figure: hierarchy of calibration roles - Metrologist, Calibration Engineer, Calibration Technician.)
Metrology Categories

 Scientific Metrology
– Organization for the development of measurement standards and their maintenance (highest level)
– NIST atomic clock - accurate to 1 s in 20 million years
– Test weights and volume standards for pharmaceutical companies
– Test standards for many military and defense companies
– Test standards for many companies that provide parts of the space shuttle

 Q: Where is such service provision or facility in Ethiopia?
Metrology Categories (contd.)

 Industrial Metrology
– Adequate functioning of measurement instruments used in industry as well as in production and testing processes.
– Mechanical Metrology - realizes, maintains and disseminates the national measurement standards in the areas of mass, volume, pressure and dimension
– Electrical Metrology - realizes, maintains and disseminates the national measurement standards in the areas of AC/DC, low frequency, time & frequency and temperature

 Q: Where is such type of service provision or facility in Ethiopia?
Metrology Categories (contd.)

 Legal Metrology
– Measurements that influence economic transactions, health and safety
– State metrology laboratories test the standards used to test retail scales and meters
 Services offered by legal metrology are:
– Mass measurement verification: verification of all mass measuring instruments (balances, trade masses, etc.)
– Volume measuring instruments: verification of fuel dispensers, tankers, meters, etc.
– Prepackaging control: verification of quantities in prepackaged products (mass, volume, length, number, etc.)
 Q: Where is such service provision or facility in Ethiopia?
Interested in Metrology?

Academic Subjects (Q: Where are we?)
Physical Science
 Physics
 Material Properties
Math
 Formulas
 Statistical Analysis
Engineering and Computer Science
 Automation
 Programming
 Database Building
 Management Systems
2. Measurement/Instrumentation
standards

 Measurement standards definitions
 Measurement units standards
 Standards organizations
 The need for standards
 Standard instrumentation signal levels
Measurement standard definitions

 Measurement standards are those devices, artifacts, procedures, instruments, systems, protocols, or processes that are used to define (or to realize) measurement units and on which all lower echelon (less accurate) measurements depend.
 A measurement standard may also be said to store, embody, or otherwise provide a physical quantity that serves as the basis for the measurement of the quantity.
 Another definition of a standard is the physical embodiment of a measurement unit, by which its assigned value is defined, and to which it can be compared for calibration purposes.
 Yet another definition is a unit of known quantity or dimension to which other measurement units can be compared.
Motive to measurement unit
standards

 Humans see their destiny in the cognition of the world.
 Along the way, experience has led us to the quantitative comparison of physical quantities, that is, to the conclusion that one should measure something to know it.
 Logic led us further: to measure, one should choose a measure, i.e. establish a single quantity - a unit.
 Therefore a demand arose for a standard: a material, accurate and stable embodiment of this unit.
 Standardization of measurement began at the start of human civilization, in barter trade and in the use of parts of the human body as units.
Measurement units standards

 Imperial system of units (yards, feet and inches)
– Cumbersome multiplication factors (1 mile = 1760 yards, 1 yard = 3 feet, 1 foot = 12 inches)
 Metric system (meter, centimeter and millimeter)
– Units are related to the base by factors of ten and are therefore much easier to use
– However, in the case of derived units such as velocity, the number of alternative ways in which these can be expressed in the metric system can lead to confusion.
 SI units (Système International d'Unités)
– An internationally agreed set of standard units
– 7 standard units defined for seven physical quantities (class activity)
– All the other units are called derived units
Primary and secondary standards

 Primary standards: the seven basic measurement SI units from which all other units are derived.
– The unit of mass is defined as 1 kilogram (kg). It is also unique in that it is the only unit currently based on an artifact. The U.S. kilogram, and hence all other standards of mass, are based on one particular platinum/iridium cylinder kept at the BIPM in France. If that International Prototype Kilogram were to change, all other mass standards throughout the world would be wrong.
 Secondary standards: all standards other than primary standards
– They are traceable to primary standards at the BIPM
– E.g. the national kilogram at the Ethiopian Standards Authority and its reference and auxiliary equipment


Standard organizations

 One problem with standards is that there are several kinds, produced by the various standards bodies.
 International standards
– International Organization for Standardization (ISO)
– International Electrotechnical Commission (IEC)
 National standards
– American National Standards Institute (ANSI), the Standards Council of Canada (SCC), BSI, DIN
 Industry standards
– At the national level (ASTM, ASME, ISA)
– At the international level (IEEE)
The need for
standards/calibration/traceability

 Standards define the units and scales in use, and allow comparison of measurements made in different times and places. For example, buyers of sugar are charged by a unit of weight; in Ethiopia this would be the kilogram.
 It is important for the buyer that the quantity ordered is actually received, and the store expects to be paid for the quantity delivered.
 Both parties are interested in accurate measurements of the weight and, therefore, need to agree on the units, conditions, and method(s) of measurement to be used.
 Q: How can we ensure that both parties agree upon the transaction?
The need for….

 Persons needing to measure a mass cannot borrow the primary standard maintained in France, or even the national standard from the Ethiopian Standards Authority.
 They must use lower-level standards that can be checked (calibrated) against those national or international standards.
 Periodically, measuring devices such as scales and balances should be checked against working-level mass standards to verify their accuracy.
 These working-level standards are, in turn, calibrated against higher-level mass standards. This chain of calibrations or checking is called "traceability." A proper chain of traceability must include a statement of uncertainty at every step.
 In this way, both parties can have confidence in the transaction.
Instrumentation signal level
Standards

 Process industries utilize two types of signals to transmit measurement and control information:
– Pneumatic (air pressure) signals
– Electronic signals
 Pneumatic - before 1960, utilized almost exclusively
– These devices make use of mechanical force-balance elements to generate signals in the range of 3-15 psig
– They utilize physical displacement in the measurement medium, such as movement of a diaphragm
– They are intrinsically safe (even in hazardous and explosive environments)
– Though they continue to be used, electronic instruments offer more features (functions)
Pneumatic instruments
 The figure illustrates a pressure transmitter designed to output a variable air pressure according to its calibration over a range of 0 to 250 psi.
 Such a transmitter would have to be supplied with a source of constant-pressure compressed air (20 psi), and the resulting output signal (3-15 psi) would be conveyed to the indicator via tubing.
 An output pressure of 3 psi represents the low end of the process measurement scale and an output pressure of 15 psi represents the high end. The face of the indicator would be labeled from 0 to 250 psi.

Electronic instrumentation signal
levels standard

 Since about 1960, electronic instrumentation has come into widespread use.
 At one time or another, signal ranges of 1 to 5 mA, 4 to 20 mA, 10 to 50 mA, 0 to 5 VDC, ±10 VDC, and several others have been used.
 Most industrial analog instrumentation now has a standard 4 to 20 mA range, although controllers and transmitters are often available with multiple output ranges.
 Electronic instruments offer more features than pneumatic ones, such as a greater degree of flexibility, more accuracy, and a wider area of use.
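The 4-20 mA scaling can be sketched as a pair of linear conversions. The 4 mA "live zero" and 16 mA span are the standard parts; the 0-250 psi process range below is only an assumed example.

```python
# Sketch of the standard 4-20 mA current-loop scaling: 4 mA represents
# the bottom of the measurement span and 20 mA the top.

def value_to_current(value, low, high):
    """Map a process value in [low, high] to a 4-20 mA loop current."""
    fraction = (value - low) / (high - low)
    return 4.0 + 16.0 * fraction

def current_to_value(current_ma, low, high):
    """Recover the process value from the loop current."""
    fraction = (current_ma - 4.0) / 16.0
    return low + fraction * (high - low)

print(value_to_current(125.0, 0.0, 250.0))   # mid-scale -> 12.0 mA
print(current_to_value(12.0, 0.0, 250.0))    # 12 mA -> 125.0 psi
```

One practical benefit of the live zero: a loop current that falls below 4 mA cannot be a valid reading, so it signals a broken wire or failed transmitter.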
3. Elements of measuring
instrument
 A measuring system in general consists of several separate elements, as shown in the figure.
 These elements form a bridge between the input to the measurement system and the system output.
Sensor stage

 The first element in any measuring system is the primary sensor.
 It gives an output that is a function of the measurand (the input applied to it), i.e. the physical variable to be measured is detected.
 For most, but not all, sensors this function is at least approximately linear.
 Some examples of primary sensors are:
– a liquid-in-glass thermometer,
– a thermocouple, and
– a strain gauge.
Variable conversion elements
 They are needed where the output variable of a primary sensor
is in an inconvenient form and has to be converted to a more
convenient/usable form.
 For instance, a strain gauge has an output in the form of a
varying resistance. The resistance change cannot be easily
measured and so it is converted to a change in voltage by a
bridge circuit, which is a typical example of a variable
conversion element.
 In some cases, the primary sensor and variable conversion
element are combined, and the combination is known as a
transducer.

Signal processing elements
 They exist to improve the quality of the output of a measurement system in some way.
 A very common type is the electronic amplifier, which amplifies
the output of the sensor or transducer, thus improving the
sensitivity and resolution of measurement.
 This element is particularly important where the primary
transducer has a low output. For example, thermocouples have
a typical output of only a few millivolts.
 Other types of signal processing element are those that filter out
induced noise and remove mean levels etc.
 In some devices, signal processing is incorporated into a
transducer, which is then known as a transmitter.
 In some cases, the word ‘sensor’ is used generically to refer to both transducers and transmitters.
Final element

 It is the component that displays or records the measurement signal if it is not fed automatically into a feedback control system.
 Thus the final element takes one of the following forms:
– A signal presentation/display unit
– A signal-recording unit
– Part of an automatic control scheme
 The presentation and display unit takes many forms according to the requirements of the particular measurement application. (Ch. 5)
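The chain of elements described in this section can be sketched as a pipeline of functions. The numbers are illustrative assumptions only, loosely modeled on a Pt100 resistance thermometer followed by a bridge circuit and an amplifier; they are not values from the text.

```python
# Minimal sketch of the element chain in a measuring system:
# sensor -> variable conversion -> signal processing -> final element.

def sensor(temperature_c):
    """Primary sensor: temperature -> resistance (approximately linear)."""
    return 100.0 + 0.385 * temperature_c            # ohms

def variable_conversion(resistance_ohm):
    """Bridge circuit: resistance change -> small voltage."""
    return (resistance_ohm - 100.0) * 1e-3          # volts

def signal_processing(voltage_v, gain=100.0):
    """Amplifier: improves sensitivity and resolution of the weak signal."""
    return voltage_v * gain

def final_element(signal_v):
    """Display unit: convert the processed signal back to a reading."""
    temperature = signal_v / (0.385 * 1e-3 * 100.0)
    print(f"indicated temperature: {temperature:.1f} deg C")
    return temperature

indicated = final_element(signal_processing(variable_conversion(sensor(25.0))))
```

Composing the functions in order mirrors the block diagram: the output of each element is the input to the next.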

Class activity

 With a resistance thermometer, element X takes the


temperature signal and transforms it into resistance
signal, element Y transforms the resistance signal
into a current signal, element Z transforms the
current signal into a display of a movement of a
pointer across a scale.
 Q1: Which of these elements is the sensor, the
transducer, the signal processor, the transmitter,
the data presentation?
 Q2: Draw the block diagram representation of the
measuring system
4. Instrument types

 Instruments can be subdivided into separate classes according to several criteria.
 Instruments can be classified into:
– Contact and non-contact (loading effect)
– Active and passive instruments
– Null-type and deflection-type instruments
– Analogue and digital instruments
– Smart and non-smart instruments
 These sub-classifications are useful in broadly establishing several attributes of particular instruments, such as accuracy, cost, and general applicability to different applications.
Loading effects

 Measurement operations may require connection (in situ invasive, semi-invasive or contact measurement) or may be made without contact.
 This linking of an instrument to an object or site of investigation means that a transfer of energy and/or information, termed "a loading effect", takes place.
 An example of this is the insertion of a measuring probe into a cup of tea, which takes some heat from the tea, leading to a difference between the "true" value and the measured value.
Active and passive instruments

 Passive - the instrument output is entirely produced by the quantity being measured.
– Normally of simpler construction, thus cheaper to manufacture
 Active - the quantity being measured simply modulates the magnitude of some external power source (usually in electrical form, but in some cases pneumatic or hydraulic).
– The scope for improving measurement resolution is much greater
 Therefore, the choice between active and passive instruments for a particular application involves carefully balancing the measurement resolution requirements against cost.
Passive instrument example

 A pressure-measuring device is shown.
 The pressure of the fluid is translated into a movement of a pointer against a scale. The energy expended in moving the pointer is derived entirely from the change in pressure measured: there are no other energy inputs to the system.
Active instrument example

 A float-type petrol tank level indicator is shown.
 The change in petrol level moves a potentiometer arm, and the output signal consists of a proportion of the external voltage source.
 The primary transducer float system is merely modulating the value of the voltage from this external power source.
Class activity

 How can the measurement resolution be increased in the two examples above? Which one allows much greater control over resolution?
– In the 1st: by making the pointer longer, so that the pointer tip moves through a longer arc; the scope for such improvement is clearly restricted by the practical limit of how long the pointer can conveniently be.
– In the 2nd: adjustment of the magnitude of the external energy input allows much greater control over measurement resolution.
 The scope for improving measurement resolution is much greater in the active instrument, with due consideration given to heating effects and safety.
Class activity-Liquid-in-glass
thermometer

 The fluid used is usually either mercury or colored alcohol, and this is contained within a bulb and capillary tube.
 As the temperature rises, the fluid expands along the capillary tube and the meniscus level is read against a calibrated scale etched on the tube.
 Is it an active or a passive instrument? Make an argument.
Null-type and deflection-type
instruments

 Null-type: uses the null method for measurement.
– The instrument exerts an influence on the measured system so as to oppose the effect of the measurand.
– The influence and the measurand are compared until they are equal but opposite in value, yielding a null measurement.
– More accurate than deflection types, as the null method minimizes interaction between the measuring system and the measurand
– A disadvantage of null instruments is that the iterative balancing operation requires more time to execute
 Deflection-type: influenced by the measurand so as to bring about a proportional response within the instrument.
– The response is an output reading that is a deflection or a deviation, such as a pointer or other type of readout, from the initial condition of the instrument.
Null-type example

 The deadweight gauge shown is an example of a null-type instrument.
– Here, weights are put on top of the piston until the downward force balances the fluid pressure.
– Weights are added until the piston reaches a datum level, known as the null point.
– The pressure measurement is made in terms of the value of the weights needed to reach this null position.
Deflection type example

 A spring scale is a good, simple example of a deflection instrument.
– The input weight (measurand) acts on a plate-spring. The original position of the spring is influenced by the applied weight and responds with a translational displacement, a deflection x.
– A mechanical coupler is connected, directly or by linkage, to a pointer. The pointer position is mapped onto a corresponding scale that serves as the readout.
Comparison on accuracy

 The accuracy of the two instruments in the examples above depends on different things.
 For the deflection type it depends on the linearity and calibration of the spring, whilst for the null type it relies on the calibration of the weights.
 As calibration of weights is much easier than careful choice and calibration of a linear-characteristic spring, the null type of instrument will normally be the more accurate.
Comparison on usage

 In terms of usage, the deflection-type instrument is clearly more convenient. It is far simpler and faster to read the position of a pointer against a scale than to add and subtract weights until a null point is reached. A deflection-type instrument is therefore the one that would normally be used in the workplace.
 However, for calibration duties, the null-type instrument is preferable because of its superior accuracy. The extra effort required to use such an instrument is perfectly acceptable in this case because of the infrequent nature of calibration operations.
Analogue and digital instruments
 Analogue instruments: give an output that varies continuously as the quantity being measured changes.
– The output can have an infinite number of values within the range that the instrument is designed to measure.
– The deflection type of pressure gauge described earlier is a good example. As the input value changes, the pointer moves with a smooth continuous motion and can thus take an infinite number of positions within its range of movement; however, the number of different positions that the eye can discriminate between is strictly limited, this discrimination being dependent upon how large the scale is and how finely it is divided.
 Digital instruments: have an output that varies in discrete steps and so can only take a finite number of values.
An example of a digital instrument
 The rev counter shown is an example of a digital instrument.
 A cam is attached to the revolving body whose motion is being measured; on each revolution the cam opens and closes a switch, and the switch operations are counted by an electronic counter. This system can only count whole revolutions and cannot discriminate any motion that is less than a full revolution.
Comparison in application with
microcomputer

 There has been rapid growth in the application of microcomputers to automatic control systems nowadays.
 Digital instruments are advantageous in such applications, as they can be interfaced directly to the control computer.
 Analogue instruments must be interfaced to the microcomputer by an analogue-to-digital (A/D) converter. This conversion has several disadvantages:
– Firstly, the A/D converter adds a significant cost to the system.
– Secondly, a finite time is involved in the process of converting, and this time can be critical in the control of fast processes, where the accuracy of control depends on the speed of the controlling computer; the conversion delay thus impairs accuracy.
Smart and non-smart instruments

 The advent of the microprocessor has created a new division in instruments between those that incorporate a microprocessor (smart) and those that don’t.
5. Performance characteristics

 Measurement system responses/characteristics
– Dynamic responses
– Static characteristics
 Dynamic responses
– Amplitude response
– Frequency response
– Phase response
– Time response
 Static characteristics
– Accuracy and precision of measurement
– Terms used in instrument rating
– Measurement errors
Dynamic response of
measurement system

 Amplitude response: a linear response to various input amplitudes within range. Beyond the linear range, the system is said to be overdriven.
 Frequency response: the ability of the system to treat all frequencies the same, so that the gain amplitude remains the same over the desired frequency range.
 Phase response: important for complex waveforms; lack of good phase response may result in severe distortion.
 Time response: delay, rise time, slew rate:
– Delay or rise time is the time required to respond to an input quantity.
– Slew rate is the maximum applicable rate of change.
Static characteristics
 The various static characteristics are:
– Accuracy and inaccuracy (measurement uncertainty),
tolerance
– Precision/repeatability/reproducibility
– Linearity
– Sensitivity of measurement
– Sensitivity to disturbance
– Hysteresis effects
– Resolution
– Dead space
– Range or span

Accuracy

 It can be estimated during calibration. If the input value used in calibration is known exactly, then it can be called the true value.
 The accuracy of a measurement system refers to its ability to indicate the true value exactly.
 Accuracy is related to the absolute error, ɛ, defined as:

ɛ = true value − indicated value

 From this, the percent accuracy, A, is found by:

A = (1 − |ɛ| / true value) × 100
Precision

 Also called the repeatability/reproducibility of a measuring system.
 It refers to the ability of the system to indicate a particular value upon repeated but independent applications of a specific input value.
 Precision is a term that describes an instrument’s degree of freedom from random errors.
 It describes the spread of output readings for the same input.
 If a large number of readings are taken of the same quantity by a high-precision instrument, then the spread of readings will be very small.
Precision (contd.)

 Precision is often confused with accuracy. High precision does not imply anything about measurement accuracy; a high-precision instrument may have low accuracy.
 Low-accuracy measurements from a high-precision instrument are normally caused by a bias in the measurements, which is removable by recalibration.
 The precision of a measurement also reflects the fineness of the units used to measure something. It is impossible to make a perfectly precise measurement.
 Accuracy can be improved by calibration up to, but not beyond, the precision of the instrument.
Precision Example: How long is
the pencil?

The best you can say is ‘about 9 cm’.

The best you can say is ‘about 9.5 cm’.

 Which one is more precise?
 The second measurement, because it used a smaller unit to measure with.
Accuracy, Precision, Errors

 See the difference between the dart throws:
– High repeatability gives low random error, but is not an indication of systematic error (accuracy).
– Systematic and random errors together lead to poor accuracy.
– High accuracy means low random and systematic error.
Measurement Errors

 Measurement errors are deviations of measurements from their true values. They are generally divided into two categories: random and systematic.
– Random error is a measure of the random variation found during repeated measurements.
– Bias (systematic) error is the difference between the average value in a series of repeated calibration measurements and the true value.
Random/precision error
 If a measurement is repeated many times and a graph is plotted of the number of occurrences against the values obtained, it may look as follows. There will be a spread of values obtained, and the spread occurs around an average value. Such a spread shows the presence of random errors.
 Random errors occur in an unpredictable manner; they often arise due to natural fluctuations in processes or environmental conditions, such as changes in temperature or pressure.
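The spread of repeated readings around an average can be quantified by the sample standard deviation, as in this minimal sketch. The readings below are hypothetical.

```python
# Repeated readings of the same quantity scatter around a mean; the
# standard deviation quantifies the size of the random error.
import math

readings = [9.78, 9.83, 9.80, 9.79, 9.84, 9.81, 9.77, 9.82]

n = len(readings)
mean = sum(readings) / n
# sample standard deviation (n - 1 in the denominator)
std = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))

print(f"mean = {mean:.3f}, spread (std dev) = {std:.3f}")
```

A small standard deviation relative to the mean indicates a high-precision instrument; it says nothing by itself about accuracy, since every reading could still be offset by the same bias.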

Systematic error
 Systematic errors are said to occur when repeated measurements
of the same quantity under the same conditions give rise to errors
of the same magnitude and sign.
 Systematic errors are consistent errors that cause all
measurements to be incorrect by the same amount. They are
predictable since they follow some fixed rule or pattern. Repeating
the experiment under the same conditions will yield the same
errors.
 Typical systematic errors include:
– zero errors (e.g. using an ammeter with zero reading of -0.2
A will result in all readings taken to be 0.2 A too small),
– incorrect calibration of instruments,

Effects of precision and bias
errors on calibration readings

Class activity
 The figure shows the results of tests on three industrial robots
that were programmed to place components at a particular point
on a table. The target point was at the center of the
concentric circles shown, and the black dots represent the
points where each robot actually deposited components at each
attempt.
 Compare their accuracy and precision by expressing as ‘low’
or ‘high’

Class activity
 Consider the following two groups of five measurements for the value of g, whose true value is 9.81 m/s².

 Which group is relatively more accurate?
 Which group is relatively more precise?
Class activity
 Given the below two sets of experimental results A and B obtained
for a particular measured quantity,

 Which reading is relatively more precise?
 Which reading is relatively more accurate?
Inaccuracy (measurement
uncertainty)
 The accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value. In practice, it is more usual to quote the inaccuracy.
 Inaccuracy is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale (f.s.) reading of an instrument.
 If, for example, a pressure gauge of range 0–10 bar has a quoted inaccuracy of ±1.0% f.s. (±1% of full-scale reading), then the maximum error to be expected in any reading is 0.1 bar.
 The inaccuracy of some instruments is sometimes quoted as a tolerance figure.
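The percent-of-full-scale rating can be turned into a worst-case error with one multiplication. This sketch uses the 0-10 bar, ±1.0% f.s. gauge from the text.

```python
# Inaccuracy quoted as a percentage of full-scale reading: for a
# 0-10 bar gauge rated +/-1.0% f.s., the maximum error in any reading
# is 0.1 bar, regardless of the value actually being read.

def max_error_fs(range_low, range_high, percent_fs):
    """Worst-case error implied by a percent-of-full-scale rating."""
    return (range_high - range_low) * percent_fs / 100.0

err = max_error_fs(0.0, 10.0, 1.0)
print(f"maximum error: {err} bar")

# Consequence: at a reading of 1 bar, the same 0.1 bar error is 10% of
# the reading, so instruments should be chosen such that typical
# readings fall in the upper part of the range.
print(f"relative error at a 1 bar reading: {100 * err / 1.0}%")
```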
Simplified Error Estimation

 Consider the calculation of electrical power, P = EI, with
E = 100 V ± 5 V, I = 10 A ± 0.1 A
 The nominal value of the power is 100 × 10 = 1000 W, and
Pmax = (100+5)(10+0.1) = 1060.5 W
Pmin = (100−5)(10−0.1) = 940.5 W
 The uncertainty in the power is +6.05%, −5.95%.
 If it is quite unlikely that the power would be in error by these amounts, a more comprehensive uncertainty analysis can be performed.
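The worst-case calculation above is short enough to check directly in code, using the same numbers as the slide:

```python
# Worst-case power estimate for P = E*I with
# E = 100 V +/- 5 V and I = 10 A +/- 0.1 A.

E, dE = 100.0, 5.0
I, dI = 10.0, 0.1

P_nom = E * I
P_max = (E + dE) * (I + dI)
P_min = (E - dE) * (I - dI)

print(f"nominal: {P_nom:.1f} W")                      # 1000.0 W
print(f"max: {P_max:.1f} W, min: {P_min:.1f} W")      # 1060.5 W, 940.5 W
print(f"uncertainty: +{100 * (P_max - P_nom) / P_nom:.2f}%, "
      f"-{100 * (P_nom - P_min) / P_nom:.2f}%")       # +6.05%, -5.95%
```

This assumes both errors take their extreme values simultaneously, which is why the result is pessimistic compared with a statistical uncertainty analysis.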

Linearity
 It is normally desirable that the
output reading of an instrument is
linearly proportional to the quantity
being measured. The Os marked
show a plot of the typical output
readings of an instrument when a
sequence of input quantities are
applied to it. Normal procedure is to
draw a good fit straight line through
the Os, as shown.
 The non-linearity is then defined as
the maximum deviation of any of the
output readings marked O from this
straight line. Non-linearity is usually
expressed as a percentage of full-scale
reading.
68
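The procedure can be sketched in Python, assuming hypothetical calibration readings (the slide's actual O points are in the figure and are not reproduced here); the least-squares fit is written out by hand to keep the sketch dependency-free:

```python
def nonlinearity_pct_fs(xs, ys):
    """Fit a least-squares straight line through the readings, then return
    the maximum deviation of any reading from that line as % of the
    full-scale output."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    max_dev = max(abs(y - (slope * x + intercept)) for x, y in zip(xs, ys))
    return 100.0 * max_dev / max(ys)

# Hypothetical input/output readings (illustrative only):
readings_in = [0.0, 1.0, 2.0, 3.0, 4.0]
readings_out = [0.0, 2.1, 3.9, 6.2, 8.0]
print(nonlinearity_pct_fs(readings_in, readings_out))  # ~1.9 % of f.s.
```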
Sensitivity
 The sensitivity of measurement is
a measure of the change in
instrument output that occurs
when the quantity being measured
changes by a given amount.
 The sensitivity of measurement is
therefore the slope of the straight
line drawn on.
 If, for example, a pressure of 2 bar
produces a deflection of 10
degrees in a pressure transducer,
the sensitivity of the instrument is
5 degrees/bar (assuming that the
deflection is zero with zero
69 pressure applied).
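As a tiny Python sketch of the definition (the function name is illustrative):

```python
def sensitivity(delta_output, delta_input):
    """Static sensitivity = change in output per unit change in input,
    i.e. the slope of the calibration line (zero offset assumed)."""
    return delta_output / delta_input

# Slide example: 2 bar of pressure deflects the transducer by 10 degrees
print(sensitivity(10.0, 2.0))   # 5.0 degrees/bar
```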
Class activity
 The following resistance values of a platinum resistance
thermometer were measured at a range of temperatures.
Determine the measurement sensitivity of the instrument in
ohms/°C.

70
Sensitivity to disturbance

 All calibrations and specifications of an instrument are
only valid under controlled conditions of temperature,
pressure etc.
 These standard ambient conditions are usually defined
in the instrument specification.
 As variations occur in the ambient temperature etc.,
certain static instrument characteristics change, and
the sensitivity to disturbance is a measure of the
magnitude of this change.
 Such environmental changes affect instruments in two
main ways, known as zero drift (bias) and sensitivity
drift (scale factor drift).
71
Zero drift
 Describes the effect where the zero reading of an instrument is
modified by a change in ambient conditions.
 This causes a constant error that exists over the full range of
measurement of the instrument.
 Example: a mechanical bathroom scale that reads
1 kg with no one standing on it. If someone of known
weight 70 kg were to get on the scale, the reading would be 71 kg.
 Zero drift is normally removable by calibration. In the case of the
bathroom scale, a thumbwheel is usually provided that can be
turned until the reading is zero with the scales unloaded, thus
removing the bias.

72
Sensitivity drift
 It defines the amount by which an instrument’s sensitivity of
measurement varies as ambient conditions change.
 It is quantified by sensitivity drift coefficients that define how
much drift there is for a unit change in each environmental
parameter that the instrument characteristics are sensitive to.
 Many components within an instrument are affected by
environmental fluctuations, such as temperature changes: for
instance, the modulus of elasticity of a spring is temperature
dependent.
 Sensitivity drift is measured in units of the form (angular
degree/bar)/°C.

73
Disturbance effect curves
 Typical changes in the output characteristic of pressure gauges.

74
Disturbance effect curves (contd.)
 If an instrument suffers both zero drift and sensitivity drift at the same
time, then the typical modification of the output characteristic is as shown in the figure.

75
Example
 A spring balance is calibrated in an environment at a temperature
of 20°C and has the following deflection/load characteristic.

 It is then used in an environment at a temperature of 30°C and the
following deflection/load characteristic is measured.

 Determine the zero drift and sensitivity drift per °C change in
ambient temperature.
76
Solution
 The curves drawn are shown below.

77
Solution….

 The curves show that there are:
– A zero drift
– A scale factor drift
 At 20°C, deflection/load characteristic is a straight line.
Sensitivity = 20 mm/kg.
 At 30°C, deflection/load characteristic is still a straight
line.
– Sensitivity = 22 mm/kg.
– Bias (zero drift) = 5 mm (the no-load deflection)
– Sensitivity drift = 2 mm/kg
– Zero drift/°C = 5/10 = 0.5 mm/°C
– Sensitivity drift/°C = 2/10 = 0.2 (mm per kg)/°C
78
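The whole example can be reproduced in Python. The slide's actual deflection/load tables are in the figures, so the data below are reconstructed to be consistent with the stated results (20 mm/kg at 20°C; 22 mm/kg and a 5 mm bias at 30°C), not copied from the slide:

```python
def line_fit(loads, defls):
    """Slope (sensitivity, mm/kg) and intercept (zero reading, mm) of a
    straight-line deflection/load characteristic, via least squares."""
    n = len(loads)
    mx, my = sum(loads) / n, sum(defls) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(loads, defls))
             / sum((x - mx) ** 2 for x in loads))
    return slope, my - slope * mx

loads = [0.0, 1.0, 2.0, 3.0]          # kg (assumed calibration points)
defl_20C = [0.0, 20.0, 40.0, 60.0]    # mm, measured at 20 degC
defl_30C = [5.0, 27.0, 49.0, 71.0]    # mm, measured at 30 degC

s20, z20 = line_fit(loads, defl_20C)  # 20 mm/kg sensitivity, 0 mm zero
s30, z30 = line_fit(loads, defl_30C)  # 22 mm/kg sensitivity, 5 mm zero

dT = 30.0 - 20.0
print((z30 - z20) / dT)   # zero drift: 0.5 mm/degC
print((s30 - s20) / dT)   # sensitivity drift: 0.2 (mm/kg)/degC
```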
Hysteresis effects
 Fig. illustrates characteristic of
an instrument that exhibits
hysteresis. If the input
measured quantity to the
instrument is steadily
increased from a negative
value, the output reading
varies in the manner shown in
curve (a). If the input variable
is then steadily decreased, the
output varies in the manner
shown in curve (b). The non-
coincidence between these
loading and unloading curves
is known as hysteresis.
79
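One common way to quantify hysteresis is the maximum separation between the loading and unloading curves, expressed as a percentage of full scale. A sketch with hypothetical readings (the slide's curves are only in the figure):

```python
def hysteresis_pct_fs(outputs_up, outputs_down):
    """Maximum separation between the loading (increasing-input) and
    unloading (decreasing-input) curves, as % of full-scale output.
    Both lists are assumed sampled at the same input values."""
    full_scale = max(max(outputs_up), max(outputs_down))
    max_sep = max(abs(u - d) for u, d in zip(outputs_up, outputs_down))
    return 100.0 * max_sep / full_scale

# Hypothetical readings at the same five input points:
up   = [0.0, 1.0, 2.1, 3.2, 4.0]   # curve (a): input increasing
down = [0.3, 1.4, 2.5, 3.5, 4.0]   # curve (b): input decreasing
print(hysteresis_pct_fs(up, down))  # ~10 % of f.s.
```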
Dead space
 Dead space is defined as the
range of different input values
over which there is no change in
output value. Any instrument
that exhibits hysteresis also
displays dead space. Some
instruments that do not suffer
from any significant hysteresis
can still exhibit a dead space in
their output characteristics,
however.
 Backlash in gears is a typical
cause of dead space, and
results in the sort of instrument
output characteristic shown in
the figure.
80
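A toy Python model of gear backlash makes the dead-space behaviour concrete (the model and all numbers are illustrative assumptions, not from the slide):

```python
def backlash(inputs, dead_band):
    """Toy model of gear backlash: the output only moves once the input
    has taken up the free play of width `dead_band`, so any input
    reversal smaller than the dead band produces no output change."""
    y = 0.0
    trace = []
    for x in inputs:
        if x > y + dead_band / 2:      # driving gear pushes from below
            y = x - dead_band / 2
        elif x < y - dead_band / 2:    # driving gear pushes from above
            y = x + dead_band / 2
        trace.append(y)
    return trace

# A reversal of 0.5 inside a dead band of 1.0 leaves the output frozen:
print(backlash([0.0, 1.0, 2.0, 3.0, 2.5, 2.6, 1.0], dead_band=1.0))
# [0.0, 0.5, 1.5, 2.5, 2.5, 2.5, 1.5]
```

The two middle readings (2.5, 2.6) move the input but not the output: that flat region is the dead space.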
Terms used in instrument rating
 Resolution: The smallest increment of change in the measured
value that can be determined from the instrument’s readout scale.
The resolution is often on the same order as the precision;
sometimes it is smaller.
 Sensitivity: The change of an instrument’s output per unit change
in the measured quantity. Typically, an instrument with higher
sensitivity will also have finer resolution, better precision, and
higher accuracy.
 Range: The proper procedure for calibration is to apply known
inputs ranging from the minimum to the maximum values for which
the measurement system is to be used. These limits define the
operating range of the system.
 Hysteresis: An instrument is said to exhibit hysteresis when there
is a difference in reading depending on whether the value of the
measured quantity is approached from above or below.
81
6. Typical application of
measurements

 Significant results of measurements are:


– Use in regulating trade and data for safe and
economic performance of systems,
– Fundamental data for research, design and
development,
– Basic input data for control of processes and
operations.
82
Industrial application

 Monitoring functions to provide the information
necessary to allow the operator to control some
industrial operation or process.
– In a chemical process for instance, the progress of chemical
reactions is indicated by the measurement of temperatures
and pressures at various points, and such measurements
allow the operator to take correct decisions regarding the
electrical supply to heaters, cooling water flows, valve
positions etc.
 Use as part of automatic feedback control systems

83
Automatic feedback control
systems
 Figure shows a functional block diagram of a simple
temperature control system in which the temperature Ta of a
room is maintained at a reference value Td.

 Q: Discuss the operation.


 Q: Why should sufficient emphasis be given to the measurement
instrument during control system design?
84
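As one way to explore these questions, a minimal discrete-time Python simulation loosely following the block diagram (all coefficients and the plant model are illustrative assumptions) shows the loop settling near the setpoint. Since the controller acts only on the measured Ta, any sensor error passes straight into the controlled temperature, which is why the measuring instrument deserves emphasis during control system design:

```python
def simulate_room(t_ref, t_outside, steps=400, dt=0.5,
                  k_loss=0.1, k_heat=0.5, kp=2.0):
    """Minimal sketch of a temperature control loop: a sensor feeds back
    the room temperature Ta, a proportional controller drives the heater
    from the error (Td - Ta), and the room loses heat to the outside.
    All coefficients are illustrative, not from the slide."""
    ta = t_outside                      # room starts at outside temperature
    for _ in range(steps):
        error = t_ref - ta              # what the measurement provides
        heater = max(0.0, kp * error)   # heater power; it cannot cool
        ta += dt * (-k_loss * (ta - t_outside) + k_heat * heater)
    return ta

# Proportional control settles near (not exactly at) the 21 degC setpoint:
print(simulate_room(t_ref=21.0, t_outside=10.0))   # ~20.0 degC
```

The steady-state offset (about 1°C here) is a property of pure proportional control; a biased sensor would add its own offset on top, invisibly to the controller.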