
Fair Use Notice

The material used in this presentation (pictures, graphs, text, etc.) is solely intended for educational/teaching purposes, offered free of cost to students for use under the special circumstances of online education due to the COVID-19 lockdown, and may include copyrighted material, the use of which may not have been specifically authorised by the copyright owners. Its application constitutes fair use of any such copyrighted material as provided in the globally accepted law of many countries. The contents of the presentations are intended only for the attendees of the class being conducted by the presenter.
INSTRUMENTATION
AND
MEASUREMENT

Necessity for Calibration

Calibration is a comparison between two measurements: one of known magnitude or correctness, and another made in as similar a way as possible with a second device. The device with the known or assigned correctness is called the standard. It can normally be assumed that a new instrument will have been calibrated when it is obtained from an instrument manufacturer and will therefore initially behave according to the characteristics stated in the specifications.

Calibration Curve

[Figure: calibration curve]
Necessity for Calibration

During use, however, its behaviour will gradually diverge from the stated specification for a variety of reasons. Such reasons include mechanical wear and the effects of dirt, dust, fumes, and chemicals in the operating environment. The rate of divergence from standard specifications varies according to the type of instrument, the frequency of usage, and the severity of the operating conditions. However, there will come a time, determined by practical knowledge, when the characteristics of the instrument will have drifted from the standard specification by an unacceptable amount.
Necessity for Calibration

When this situation is reached, it is necessary to recalibrate the instrument back to standard specifications. Such recalibration is performed by adjusting the instrument at each point in its output range until its output readings are the same as those of a second standard instrument to which the same inputs are applied. This second instrument is one kept solely for calibration purposes, whose specifications are accurately known.
Necessity for Calibration

Principles of Calibration
Calibration consists of comparing the output of the instrument
or sensor under test against the output of an instrument of
known accuracy when the same input (the measured quantity)
is applied to both instruments. This procedure is carried out
for a range of inputs covering the whole measurement range of
the instrument or sensor.
Calibration ensures that the measuring accuracy of all
instruments and sensors used in a measurement system is
known over the whole measurement range, provided that the
calibrated instruments and sensors are used in environmental
conditions that are the same as those under which they were
calibrated.
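The comparison itself is simple to automate. As a minimal sketch in Python (the reference and device readings below are hypothetical, not taken from the course material), the instrument under test is checked against a known-accuracy reference at several points covering the whole measurement range:

# Hypothetical calibration check: compare a device under test (DUT)
# against a reference instrument over the whole measurement range.
reference = [0.0, 25.0, 50.0, 75.0, 100.0]   # readings of the known-accuracy standard
dut       = [0.3, 25.4, 50.2, 75.6, 100.5]   # readings of the device under test

for ref, meas in zip(reference, dut):
    error = meas - ref                        # deviation at this calibration point
    print(f"input {ref:6.1f} -> reading {meas:6.1f}, error {error:+.2f}")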
Necessity for Calibration

Instrument calibration has to be repeated at prescribed intervals because the characteristics of any instrument change over a period of time. Changes in instrument characteristics are brought about by such factors as mechanical wear and the effects of dirt, dust, fumes, chemicals, and temperature change in the operating environment. To a great extent, the magnitude of the drift in characteristics depends on the amount of use an instrument receives, and hence on the amount of wear and the length of time that it is subjected to the operating environment. However, some drift also occurs even in storage as a result of aging effects in components within the instrument.
Necessity for Calibration

Calibration Chain and Traceability

The calibration facilities provided within the instrumentation department of a company provide the first link in the calibration chain. Instruments used for calibration at this level are known as working standards. As long as working standard instruments are kept by the instrumentation department of a company solely for calibration duties, and for no other purpose, it can be assumed that they will maintain their accuracy over a reasonable period of time, because use-related deterioration in accuracy is largely eliminated. However, over the longer term, the characteristics of even such standard instruments will drift, mainly due to aging effects in components within them. Therefore, over this longer term, a program must be instituted for calibrating working standard instruments at appropriate intervals of time against instruments of yet higher accuracy. The instrument used for calibrating working standard instruments is known as a secondary reference standard. This must obviously be a very well-engineered instrument that gives high accuracy and is stabilized against drift in its performance with time. This implies that it will be an expensive instrument to buy. It also requires that the environmental conditions in which it is used be controlled carefully in respect of ambient temperature, humidity, and so on. When the working standard instrument has been calibrated by an authorized standards laboratory, a calibration certificate will be issued. The calibration chain is shown in the following diagram.
Necessity for Calibration

[Figure: calibration chain diagram]
Problems

What is the range of the calibration data in the table?

(Find both the input and output ranges.)
Problems

For the same table, plot the data using a rectangular (linear) plot.
Problems

For the same table, plot the data using a log-log plot.
Problems

For the same table, plot the data using both rectangular and log-log plots.
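The calibration table itself is not reproduced in these notes, so the Python sketch below uses hypothetical input/output data purely to show the method: the ranges are the minimum-to-maximum spans of each column, and the same data are drawn on a rectangular (linear) plot and a log-log plot.

import matplotlib.pyplot as plt

# Hypothetical calibration data (the actual table is not reproduced here).
x = [1, 2, 5, 10, 20, 50, 100]                 # input values
y = [0.8, 1.7, 4.4, 9.1, 18.5, 46.0, 93.0]     # output readings

# Range = minimum to maximum of each column.
print("input range :", min(x), "to", max(x))
print("output range:", min(y), "to", max(y))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, y, "o-")                           # rectangular (linear) plot
ax1.set(title="Rectangular plot", xlabel="Input", ylabel="Output")
ax2.loglog(x, y, "o-")                         # log-log plot
ax2.set(title="Log-log plot", xlabel="Input", ylabel="Output")
plt.tight_layout()
plt.show()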

ERRORS

Measurement errors are impossible to avoid, although we can minimize their magnitude by good measurement system design accompanied by appropriate analysis and processing of measurement data.
ERRORS

The starting point in the quest to reduce the incidence of errors arising during the measurement process is to carry out a detailed analysis of all error sources in the system. Each of these error sources can then be considered in turn, looking for ways of eliminating or at least reducing the magnitude of errors. Errors arising during the measurement process can be divided into two groups, known as systematic errors and random errors.
ERRORS

Systematic errors describe errors in the output readings of a measurement system that are consistently on one side of the correct reading; that is, either all the errors are positive or all are negative.
ERRORS

Systematic error in the physical sciences commonly occurs when the measuring instrument has a zero error. A zero error is when the initial value shown by the measuring instrument is a non-zero value when it should be zero.
ERRORS

For example, a voltmeter might show a reading of 1 volt even when it is disconnected from any electromagnetic influence. This means the systematic error is 1 volt, and all measurements shown by this voltmeter will be 1 volt higher than the true value.

This type of error can be offset by simply deducting the value of the zero error. In this case, if the voltmeter shows a reading of 53 volts, then the actual value would be 52 volts. Here the systematic error is a constant value (known as bias).
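A minimal sketch of this bias correction in Python, assuming the zero error has already been measured as 1 volt with no input applied:

ZERO_ERROR = 1.0                      # measured bias: reading with no input (volts)

def correct(reading: float) -> float:
    """Remove a constant systematic error (bias) from a raw reading."""
    return reading - ZERO_ERROR

print(correct(53.0))                  # -> 52.0, the actual value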
ERRORS

EXAMPLES: The cloth tape measure that you use to measure the length of an object had been stretched out from years of use. (As a result, all of your length measurements were too small.) The electronic scale you use reads 0.05 g too high for all your mass measurements (because it is improperly tared throughout your experiment).
ERRORS

Random errors are perturbations of the measurement on either side of the true value, caused by random and unpredictable effects, such that positive errors and negative errors occur in approximately equal numbers for a series of measurements made of the same quantity. Such perturbations are mainly small, but large perturbations occur from time to time, again unpredictably.
What is Systematic Error?

• Systematic error (also called systematic bias) is consistent, repeatable error associated with faulty equipment or a flawed experiment design. These errors are usually caused by measuring instruments that are incorrectly calibrated or are used incorrectly. However, they can creep into your experiment from many sources, including:
• A worn-out instrument. For example, a plastic tape measure becomes slightly stretched over the years, resulting in measurements that are slightly too high,
• An incorrectly calibrated or tared instrument, like a scale that doesn't read zero when nothing is on it,
• A person consistently taking an incorrect measurement. For example, they might think the 3/4″ mark on a ruler is the 2/3″ mark.
What is Random Error?
• Random error (also called unsystematic error,
system noise or random variation) has no
pattern. One minute your readings might be
too small. The next they might be too large.
You can’t predict random error and these
errors are usually unavoidable.

Preventing Errors

Random error can be reduced by:
• using an average measurement from a set of measurements, or
• increasing the sample size.
It's difficult to detect (and therefore prevent) systematic error. In order to avoid these types of error, know the limitations of your equipment and understand how the experiment works. This can help you identify areas that may be prone to systematic errors.
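The averaging approach can be illustrated with a short simulation (the true value and the noise level below are assumptions chosen for illustration):

import random

TRUE_VALUE = 100.0                          # quantity being measured (assumed)
# Simulated noisy readings: random error scatters them around the true value.
readings = [TRUE_VALUE + random.gauss(0, 0.5) for _ in range(50)]

mean = sum(readings) / len(readings)        # averaging cancels much of the random error
print(f"single reading: {readings[0]:.2f}, mean of 50 readings: {mean:.2f}")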
Systematic vs. Random Errors

• The main differences between these two error types are:
• Random errors are (like the name suggests) completely random. They are unpredictable and can't be replicated by repeating the experiment.
• Systematic errors produce consistent errors, either of a fixed amount (like 1 lb) or a proportion (like 105% of the true value). If you repeat the experiment, you'll get the same error.
• Systematic errors are consistently in the same direction (e.g. they are always 50 g, 1% or 99 mm too large or too small). In contrast, random errors produce different values in random directions. For example, you use a scale to weigh yourself and get 148 lbs, 153 lbs, and 132 lbs.
Signal Conditioning or Processing

The term signal conditioning means manipulating a sensor signal to prepare it for further processing.
Signal Conditioning or Processing

The signals come from transducers or sensors, which usually convert various physical quantities (like temperature, strain, or displacement) into a change in resistance, capacitance, or inductance. These output signals are usually too small, or too noisy due to electromagnetic effects. Therefore they are generally processed in some way to make them suitable for the next stage of operation.
Signal Conditioning or Processing

The signal may, for example:
• be too small and have to be amplified
• contain interference which has to be removed
• be nonlinear and require linearization
• be a change in resistance which has to be converted into a current or voltage

All these changes are referred to as signal conditioning.
Signal Conditioning or Processing
Signal Conditioning Operations:

1. Signal Amplification
Sometimes the output from a transducer is very small, so the magnitude of the signal has to be increased. For this purpose various amplifiers, such as levers, gears, or electronic operational amplifiers, are used. The amount by which a signal is increased in magnitude is referred to as the gain, amplification, or magnification.
Signal Conditioning or Processing

2. Impedance Matching
The signal conditioner acts as a buffer stage, lessening the "shock" between the transducing and recording elements; the input and output impedances of the matching device are arranged to prevent loading of the transducer and to maintain a high signal level at the recorder.
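The loading effect that the buffer prevents can be shown with a simple voltage-divider calculation (the signal level and impedance values below are hypothetical):

def loaded_voltage(v_source: float, z_out: float, z_in: float) -> float:
    """Voltage actually seen by the recorder when a source with output
    impedance z_out drives an input of impedance z_in (voltage divider)."""
    return v_source * z_in / (z_in + z_out)

# Hypothetical transducer: 1 V signal, 10 kilo-ohm output impedance.
print(loaded_voltage(1.0, 10e3, 10e3))   # equal impedances: only 0.5 V reaches the recorder
print(loaded_voltage(1.0, 10e3, 10e6))   # buffered high-impedance input: about 0.999 V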
Signal Conditioning or Processing

3. Signal Linearization
Some transducers have outputs which are nonlinear; for example, in a thermocouple the thermoelectric e.m.f. is not a linear function of temperature. One approach that can often be used to turn a nonlinear output into a linear one involves an operational amplifier circuit.
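Besides the op-amp approach, linearization is often done numerically. The sketch below fits a polynomial to hypothetical thermocouple calibration points and uses it to convert raw e.m.f. back to temperature (the data are illustrative, not real thermocouple tables):

import numpy as np

# Hypothetical calibration points: temperature (deg C) vs thermocouple e.m.f. (mV).
temp_c = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
emf_mv = np.array([0.0, 4.1, 8.5, 13.3, 18.5])   # deliberately slightly nonlinear

# Fit temperature as a polynomial in e.m.f., then use the fit to linearize readings.
coeffs = np.polyfit(emf_mv, temp_c, deg=2)
to_temperature = np.poly1d(coeffs)

print(to_temperature(8.5))   # roughly 200 deg C recovered from a raw e.m.f. reading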
Signal Conditioning or Processing

AMPLIFIERS

Amplifiers are devices which increase the magnitude of (amplify) the input signal. Amplifiers are frequently used as signal conditioners in order to make signals from a transducer big enough for further processing or display.

The input signal qi is amplified by an amount G; the resulting output qo is given by

qo = G · qi

so that

G = qo / qi

where G is the gain (also called the amplification or magnification), the ratio of the output signal to the input signal.

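A trivial sketch of the gain relation qo = G · qi (the numbers are arbitrary):

def amplify(qi: float, gain: float) -> float:
    """Output of an ideal amplifier: qo = G * qi."""
    return gain * qi

qi = 0.02                     # small transducer signal
qo = amplify(qi, gain=100.0)  # G = 100
print(qo, qo / qi)            # -> 2.0, and qo/qi recovers the gain G = 100.0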
Signal Conditioning or Processing

1. Mechanical Amplifiers

(a) Lever:
Levers are used as displacement amplifiers in instruments like dial test indicators, extensometers, and pressure gauges.

From the figure, the two triangles are similar; hence the ratios of their corresponding sides will be equal, as given below.
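The original figure is not reproduced in these notes, so the relation below is a standard reconstruction: for a lever pivoted at a fulcrum, with input arm length a and output arm length b (names assumed here for illustration), the similar triangles give the displacement gain G = qo / qi = b / a. A minimal numeric sketch:

# Lever displacement amplifier: similar triangles give qo / qi = b / a.
# Arm lengths a and b are assumed names, since the original figure is missing.
a, b = 10.0, 50.0             # input-side and output-side arm lengths (mm)
gain = b / a                  # mechanical amplification G = 5
qi = 0.1                      # input displacement (mm)
print(gain * qi)              # amplified output displacement: 0.5 mm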
Signal Conditioning or Processing

This can be shown in block diagram form as:

[Figure: block diagram of the lever as a displacement amplifier]
Signal Conditioning or Processing

Electrical Amplifiers
Most sensors output an electrical signal (for example, a voltage or current). Hence electrical amplifiers are widely used. An amplifier is an electronic device, or group of devices, which increases the size of (amplifies) a voltage or current signal without altering the signal's basic characteristics. It is made up of active and passive components, and has a power supply separate from the signal it is acting on.
Signal Conditioning or Processing

Operational amplifier
Operational amplifiers are a special type of amplifier. They are termed
'operational' because they were originally developed for early
computers to perform basic mathematical operations such as adding and
subtracting. They are the basic building blocks of most active electronic
signal conditioning circuits. In the form of integrated circuits, they are
relatively inexpensive, precise, and reliable.

Signal Conditioning or Processing

[Figure: schematic symbol of an operational amplifier, with two inputs and one output]
Signal Conditioning or Processing

Operational amplifier
Figure shows the schematic diagram of an operational
amplifier. The internal detail of the integrated circuits
which make up operational amplifiers consists of a
complex arrangement of transistors and resistors.
However, it is the function of the integrated circuit as
a whole which the user needs to know, and this is
how they are represented here. Operational amplifiers
are often referred to in their abbreviated form as 'op
amps'. Operational amplifiers have two inputs and
one output.
Signal Conditioning or Processing

Non-inverting amplifier

A non-inverting amplifier amplifies the input voltage without inverting the signal. The configuration of this type of circuit is shown in the figure.

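The circuit figure is not reproduced here, so the sketch below assumes the standard non-inverting configuration, whose ideal closed-loop gain is G = 1 + R2/R1 (the resistor names are the conventional ones, not taken from the slide's figure):

def noninverting_gain(r1: float, r2: float) -> float:
    """Ideal closed-loop gain of the standard non-inverting op-amp stage."""
    return 1.0 + r2 / r1

vin = 0.5                                   # input voltage (V)
g = noninverting_gain(r1=1e3, r2=9e3)       # G = 1 + 9k/1k = 10
print(g * vin)                              # output: 5.0 V, same polarity as the input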
Signal Conditioning or Processing

Wheatstone bridge
Previously we have seen that several sensors (which ones?) measure parameters in terms of a change in resistance. When the resistance of a circuit changes, the current through it and the voltage across it also change.
Signal Conditioning or Processing
Wheatstone bridge
[Figure: Wheatstone bridge circuit]

The figure shows a Wheatstone bridge circuit. Usually one of the resistors is a sensor. For example, R1 may be a metal resistance thermometer: because its resistance changes with temperature, its resistance is unknown. Similarly, it may be a bonded resistance strain gauge, whose resistance changes when it is under strain.
Signal Conditioning or Processing
Wheatstone bridge
When the output voltage Vo is zero, the potential at B is equal to the potential at D. Thus

VR1 = VR3, giving I1 · R1 = I2 · R3

Similarly,

VR2 = VR4, giving I1 · R2 = I2 · R4

Dividing the first equation by the second gives the balance condition:

R1 / R2 = R3 / R4
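A minimal numeric check of the balance condition (the resistor values are hypothetical):

def unknown_resistance(r2: float, r3: float, r4: float) -> float:
    """Solve the balance condition R1/R2 = R3/R4 for the unknown R1."""
    return r2 * r3 / r4

# Hypothetical balanced bridge: R2 = 100 ohm, R3 = 220 ohm, R4 = 200 ohm.
print(unknown_resistance(100.0, 220.0, 200.0))   # R1 = 110.0 ohm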
