
ELECTRICAL MEASUREMENTS AND INSTRUMENTATION

1. UNITS AND DIMENSIONS


1.1. SI MECHANICAL UNITS
Système International d'Unités (SI), or the International System of Units.
The three basic units in the SI system are:
Unit of length: meter (m)
Unit of mass: kilogram (kg)
Unit of time: second (s)
Other units derived from the fundamental units are termed derived units.
For example, the unit of area is the square meter (m²), which is derived from the meter.
• FORCE
The SI unit of force is the newton (N), defined as that force which will give a mass of 1 kg an
acceleration of 1 meter per second per second (1 m/s²).
Force = mass * acceleration; F = ma
• WORK
The work done in moving a body is the product of the force and the distance through which the
body is moved in the direction of the force.
The SI unit of work is the joule (J), defined as the amount of work done when a force of 1 newton
acts through a distance of 1 meter.
Work = force * distance; W = F * d
• ENERGY
Energy is defined as the capacity to do work.
It is measured in the same units as work.
• POWER
Power is defined as the rate of doing work.
Power = work / time; P = W / t
The SI unit of power is the watt (W), defined as the power developed when 1 joule of work is
done in 1 second.
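To make these definitions concrete, the following minimal Python sketch (with arbitrary
illustrative values) computes force, work and power from the relations above:

    mass = 2.0          # kg
    acceleration = 3.0  # m/s^2
    distance = 5.0      # m
    time = 10.0         # s

    force = mass * acceleration  # F = m*a, in newtons (N)
    work = force * distance      # W = F*d, in joules (J)
    power = work / time          # P = W/t, in watts (W)

    print(force, work, power)    # 6.0 N, 30.0 J, 3.0 W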
1.2. SCIENTIFIC NOTATION & METRIC PREFIXES
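Very large and very small quantities are conveniently written in scientific notation, that is, as a
number multiplied by a power of ten (for example, 0.000001 A = 1 × 10⁻⁶ A = 1 µA). Standard
metric prefixes denote these powers of ten. The most common are:
• tera (T) = 10¹², giga (G) = 10⁹, mega (M) = 10⁶, kilo (k) = 10³
• milli (m) = 10⁻³, micro (µ) = 10⁻⁶, nano (n) = 10⁻⁹, pico (p) = 10⁻¹²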
1.3. DIMENSIONS
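Every derived quantity can be expressed in terms of the fundamental dimensions of length [L],
mass [M] and time [T]. For example, velocity has the dimensions [LT⁻¹], acceleration [LT⁻²]
and force [MLT⁻²]. Dimensional analysis provides a check on the consistency of physical
equations, since every term in an equation must have the same dimensions.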
2. STANDARDS AND THEIR CLASSIFICATION
A standard of measurement is a physical representation of a unit of measurement. The term
‘standard’ is applied to a piece of equipment having a known measure of physical quantity. For
example, the fundamental unit of mass in the SI system is the kilogram, defined as the mass of one
cubic decimeter of water at its temperature of maximum density (4°C). This unit of mass is
represented by a material standard: the mass of the International Prototype Kilogram, a platinum–
iridium alloy cylinder. This prototype is preserved at the International Bureau of Weights and
Measures at Sèvres, near Paris, and is the material representation of the kilogram. Similar standards
have been developed for other units of measurement, including fundamental units as well as for
some of the derived mechanical and electrical units.
The classifications of standards are:
1. International standards
2. Primary standards
3. Secondary standards
4. Working standards
1. International standards
The international standards are defined by international agreement. They represent certain units of
measurement to the closest possible accuracy that production and measurement technology allow.
International standards are periodically checked and evaluated by absolute measurements in terms
of the fundamental units. These standards are maintained at the International Bureau of Weights
and Measures and are not available to the ordinary user of measuring instruments for purposes of
comparison or calibration.
2. Primary Standards
The primary standards are maintained by national standards laboratories in different parts of the
world. The National Bureau of Standards (NBS) in Washington is responsible for maintenance of
the primary standards in North America. Other national laboratories include the National Physical
Laboratory (NPL) in Great Britain and the oldest in the world, the Physikalisch-Technische
Reichsanstalt in Germany. The primary standards, again representing the fundamental units and
some of the derived mechanical and electrical units, are independently calibrated by absolute
measurements at each of the national laboratories. The results of these measurements are compared
with each other, leading to a world average figure for the primary standard. Primary standards are
not available for use outside the national laboratories. One of the main functions of primary
standards is the verification and calibration of secondary standards.
3. Secondary Standards
Secondary standards are the basic reference standards used in the industrial measurement
laboratories. These standards are maintained by the particular industry involved and are checked
locally against other reference standards in the area. The responsibility for maintenance and
calibration rests entirely with the industrial laboratory itself. Secondary standards are generally
sent to the national standards laboratory on a periodic basis for calibration and comparison against
the primary standards. They are then returned to the industrial user with a certification of their
measured value in terms of the primary standard.
4. Working Standards
Working standards are the principal tools of a measurement laboratory. They are used to check
and calibrate general laboratory instruments for accuracy and performance or to perform
comparison measurements in industrial applications. A manufacturer of precision resistances, for
example, may use a standard resistor in the quality control department of his plant to check his
testing equipment. In this case, the manufacturer verifies that his measurement setup performs
within the required limits of accuracy.
3. MEASUREMENTS & MEASUREMENT SYSTEMS
3.1. MEASUREMENTS
The measurement of a given quantity is essentially an act or the result of comparison between the
quantity (whose magnitude is unknown) and a predefined standard with the result expressed in
numerical values. The measuring process is one in which the property of an object or system under
consideration is compared to an accepted standard unit, a standard defined for that particular
property. The number of times the unit standard fits into the quantity being measured is the
numerical measure. The numerical measure is meaningless unless it is followed by the unit used,
since the unit identifies the characteristic or property measured.
3.2. METHODS OF MEASUREMENT
The methods of measurement may be broadly classified into two categories:
(i) Direct Methods
(ii) Indirect Methods.
(i) Direct Methods
In these methods, the unknown quantity (also called the measurand) is directly compared against
a standard. The result is expressed as a numerical value and a unit.
Direct methods are quite common for the measurement of physical quantities like length, mass and
time.
Direct methods of measurement are of two types:
(a) deflection methods (b) comparison methods
Deflection methods: the value of the unknown quantity is measured with the help of a measuring
instrument having a calibrated scale that indicates the quantity under measurement directly, e.g.,
measurement of current by an ammeter.
Comparison methods: the value of the unknown quantity is determined by direct comparison with
a standard of the given quantity e.g., measurement of emf by comparison with the emf of a
standard cell.
(ii) Indirect Methods
In indirect measurement methods, the comparison is made with a standard through the use of a
calibrated system. These methods are used in those cases where the desired parameter is difficult
to measure directly, but has a known relation to some other parameter which can be easily
measured. For instance, the
elimination of bacteria from some fluid is directly dependent upon its temperature. Thus, the
bacteria elimination can be measured indirectly by measuring the temperature of the fluid. In
indirect methods of measurement, it is general practice to establish an empirical relation between
the actual measured quantity and the desired parameter.
3.3. CLASSIFICATION OF INSTRUMENTS
Broadly, instruments are classified into two categories:
(i) Absolute Instruments, and (ii) Secondary Instruments.
1. Absolute instruments give the magnitude of the quantity under measurement in terms of
physical constants of the instrument and its deflection. Examples of this class of instruments are
Tangent Galvanometer and Rayleigh's Current Balance.
2. Secondary instruments. These instruments are so constructed that the quantity being measured
can only be determined by observing the output indicated by the instrument. These instruments are
calibrated by comparison with another secondary instrument which has already been calibrated
against an absolute instrument. Working with absolute instruments is time consuming, therefore
secondary instruments are most commonly used.
Electrical measuring instruments may also be classified according to the kind of quantity
measured, the kind of current, or the principle of operation of the moving system.
Secondary instruments can be classified into three types:
i. Indicating instruments;
ii. Recording instruments;
iii. Integrating instruments.
(i) Indicating Instruments: these indicate the magnitude of an electrical quantity at the time it is
being measured. The indication is given by a pointer moving over a graduated dial.
(ii) Recording Instruments keep a continuous record of the variations of the magnitude of an
electrical quantity over a defined period of time.

(iii) Integrating Instruments measure the total amount of either quantity of electricity or electrical
energy supplied over a period of time. An example is the energy meter.

3.4. TYPES OF INSTRUMENTATION SYSTEMS


The introduction of microprocessors has resulted in a new classification of instrumentation
systems: intelligent instrumentation systems and dumb instrumentation systems.
1. Intelligent instrumentation. An instrumentation system that evaluates a physical variable using
a digital computer to perform all or nearly all of the signal and information processing. In this
system, after a measurement of the variable has been made, further processing, whether in digital
or analog form, is carried out to refine the data for presentation to an observer or to other
computers.
2. Dumb instrumentation. In this system, once the measurement is made, the data must be
processed by the observer.
3.5. ANALOG AND DIGITAL INSTRUMENTS
Analog Instruments
The signals of an analog unit vary in a continuous fashion and can take on an infinite number of
values in a given range. Examples: fuel gauge, ammeters and voltmeters, wrist watch,
speedometer.
Digital Instruments
Signals varying in discrete steps and taking on a finite number of different values in a given
range are digital signals, and the corresponding instruments are of the digital type. Digital
instruments have high accuracy and a high speed of operation, and are capable of storing results
for future reference. Example: digital multimeter.
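The distinction can be made concrete with a small illustrative Python sketch: a digital instrument
quantizes a continuously varying input to the nearest step of its resolution, so its reading can take
only a finite number of values in a given range.

    def digital_reading(analog_value, resolution=0.01):
        # Quantize a continuous input to discrete steps of size `resolution`,
        # as a digital instrument does
        return round(analog_value / resolution) * resolution

    print(digital_reading(3.14159))  # 3.14 -- one of a finite set of readings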
3.6. MECHANICAL, ELECTRICAL AND ELECTRONIC INSTRUMENTS
1. Mechanical Instruments
Mechanical instruments are very reliable under static and stable conditions. They are unable to
respond rapidly to measurements of dynamic and transient conditions because their moving parts
are rigid, heavy and bulky, giving rise to mass-inertia problems.
Advantages of Mechanical Instruments
• Relatively cheaper in cost
• More durable due to rugged construction
• Simple in design and easy to use
• No external power supply required for operation
• Reliable and accurate for measurement of stable, time-invariant quantities
Disadvantages of Mechanical Instruments
• Poor frequency response to transient and dynamic measurements
• Large force required to overcome mechanical friction
• Unsuitable where remote indication and control are needed
• Cause noise pollution
2. Electrical Instruments
In electrical instruments, the pointer deflection is produced by electrical means. The electrical
system still depends upon a mechanical movement as the indicating device, and this mechanical
movement has some inertia, due to which the frequency response of these instruments is poor.
3. Electronic Instruments
Electronic instruments use semiconductor devices to facilitate measurement. The response time
is extremely small owing to the very small inertia of the electrons. Very weak signals can be
detected by using pre-amplifiers and amplifiers.
Advantages of Electrical/Electronic Instruments
• Non-contact measurements are possible
• Consume less power
• Compact in size, more reliable in operation and offer great flexibility
• Good frequency and transient response
• Remote indication and recording possible
3.7. ELEMENTS OF A GENERALIZED MEASUREMENT SYSTEM
Most measurement systems contain the following main functional elements:
1. Primary sensing element
2. Variable conversion element
3. Variable manipulation element
4. Data transmission element
5. Data presentation element

1. Primary Sensing Element


The physical quantity to be measured is first sensed and detected by the primary sensing element,
which gives an output in a different analogous form. This output is then converted into an
electrical signal by a transducer. The first stage of a measurement system is therefore known as
the detector–transducer stage.
2. Variable Conversion Element
After passing through the primary sensing element, the output is in the form of an electrical
signal, which may be a voltage, current or frequency, and which may or may not be acceptable to
the rest of the system. For performing the desired operation, it may be necessary to convert this
output to some other suitable form while retaining the information content of the original signal.
For example, if the output is in analog form and the next stage of the system accepts only digital
form, then an analog-to-digital converter will be employed.
3. Variable Manipulation Element
The function of this element is to manipulate the signal presented to it while preserving the
original nature of the signal. An electronic amplifier accepts a small voltage signal as input and
produces an output signal which is also a voltage but of greater magnitude; thus, a voltage
amplifier acts as a variable manipulation element. Signal conditioning comprises operations on
the signal before it is transmitted. These processes may be amplification, attenuation, integration,
differentiation, addition and subtraction. Some nonlinear processes, such as modulation,
detection, sampling, filtering, chopping and clipping, are also performed on the signal to bring it
to the desired form accepted by the next stage of the measurement system.
4. Data Transmission Element
When the elements of an instrument are physically separated, it becomes necessary to transmit
data from one to another. The element that performs this function is called a data transmission
element. For example, spacecraft are physically separated from the earth, where the control
stations guiding their movements are located. Therefore, control signals are sent from these
stations to the spacecraft by complicated telemetry systems using radio signals. The signal
conditioning and transmission stage is commonly known as the intermediate stage.
5. Data Presentation Element
The information about the quantity under measurement has to be conveyed to the personnel
handling the instrument or the system for monitoring, control, or analysis purposes. The
information conveyed must be in a form intelligible to the personnel or to the intelligent
instrumentation system. This function is performed by the data presentation element. If the data
is to be monitored, visual display devices are needed; these may be analog or digital indicating
instruments such as ammeters and voltmeters. If the data is to be recorded, recorders such as
magnetic tape, high-speed cameras and T.V. equipment, storage-type C.R.T.s, printers, and
analog or digital computers may be used. For control and analysis purposes, microprocessors or
computers may be used. This final stage is known as the terminating stage.
Example of a measurement system

Bourdon tube pressure gauge

• The bourdon tube acts as the primary sensing element and a variable conversion element.
• The closed end of the bourdon tube is displaced due to pressure. Thus, the pressure is
converted into a small displacement.
• The closed end of the bourdon tube is connected through a mechanical linkage to a gearing
arrangement which amplifies the small displacement and makes the pointer rotate
through a large angle.
• The mechanical linkage thus acts as a data transmission element while the gearing
arrangement acts as a data manipulation element.
• The data presentation stage consists of the pointer and dial arrangement, which when
calibrated with known pressure inputs, gives an indication of the pressure signal applied to
the bourdon tube.
4. CHARACTERISTICS OF INSTRUMENTS & MEASUREMENT SYSTEMS
STATIC CHARACTERISTICS
These are a set of criteria that describe the quality of measurement in applications that involve the
measurement of quantities that are either constant or vary slowly with time. Static characteristics
include:
(i) Accuracy
(ii) Precision
(iii) Resolution
(iv) Reproducibility
(v) Repeatability
(vi) Drift
(vii) Dead zone
(viii) Tolerance
(ix) Speed of Response
1. Accuracy
Accuracy is the closeness with which the instrument reading approaches the true value of the
variable under measurement. Thus, accuracy of a measurement means conformity to truth. The
true value can never be indicated exactly by any measurement system, due to loading effects, lags
and mechanical problems (e.g., wear, hysteresis, noise, etc.).
2. Precision
Precision is a measure of the degree to which successive measurements differ from one another.
3. Resolution
The smallest change in the input signal (quantity under measurement) which can be detected with
certainty by an instrument.
4. Reproducibility
It is the degree of closeness with which a given value may be repeatedly measured. It is specified
in terms of scale readings over a given period of time.
5. Repeatability
It is defined as the variation of scale readings when the same input is applied repeatedly; it is
random in nature.

6. Drift
Drift is the variation of the indicated value over time for a given constant input. Drift may be
classified into the following categories:
(i) Zero drift
A gradual shift of the whole calibration, due to slippage, permanent set, or undue warming up of
electronic circuits. It can be prevented by zero setting.

[Figure: input–output characteristics with zero drift]


(ii) Span drift or sensitivity drift
If there is a proportional change in the indication all along the upward scale, the drift is called
span drift or sensitivity drift.
[Figure: input–output characteristics with span drift]
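The two forms of drift can be expressed simply (an illustrative formulation). With a pure zero
drift d0, the indication for every input x becomes indicated = x + d0, so the whole calibration
curve shifts parallel to itself. With a pure span (sensitivity) drift of fraction k, the indication
becomes indicated = (1 + k) · x, so the error is zero at zero input and grows in proportion up the
scale.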
7. Dead Zone
The largest change of input quantity for which there is no output from the instrument. The input
applied to the instrument may not be sufficient to overcome friction, in which case the pointer
will not move at all. It moves only when the input produces a driving force large enough to
overcome the friction forces.
8. Tolerance
The maximum allowable error in a measurement. It is typically expressed as a ± value about a
nominal specification.
9. Speed of Response
The time elapsed between the start of a measurement and the taking of the reading; the quickness
with which an instrument responds to changes in the measured variable.
5. ERRORS IN MEASUREMENTS
It is impossible to measure the exact value of the measurand. There is always a difference between
the measured value and the true value of the measurand.
(a) Absolute Error
The difference between the measured value and the true value of the unknown quantity. If δA is
the absolute error of the measurement, and Am and A are the measured and true values of the
unknown quantity, then δA may be expressed as:
δA or ε0 = Am − A
(b) Relative Error
The ratio of absolute error to the true value of the unknown quantity to be measured
εr = δA / A = ε0 / A = absolute error / true value (× 100% when expressed as a percentage)
(c) Limiting errors (Guarantee Error)
In instrument development, manufacturers have to specify the deviations from the nominal value
of a particular quantity in order to enable the purchaser to make proper selection according to their
requirements. The limits of these deviations from specified values are defined as limiting or
guarantee errors. We can say that the manufacturer guarantees or promises that the error in the
item they are selling is no greater than the limit set.
The magnitude of a quantity having a nominal value As and a maximum or limiting error of ±δA
must lie between the limits As − δA and As + δA.
Actual value of the quantity: Aa = As ± δA
(d) Relative (Fractional) Limiting Error
The relative (fractional) limiting error is defined as the ratio of the error to the specified
(nominal) magnitude of the quantity:
εr = δA / As = ε0 / As, or ε0 = εr · As
The limiting values are:
Aa = As ± δA = As ± εr·As = As(1 ± εr)
Example
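(Illustrative values, chosen only to show the use of the formulas above.) A resistor has a nominal
value As = 100 Ω with a limiting error of ±10 Ω. The actual resistance therefore lies between
As − δA = 90 Ω and As + δA = 110 Ω, i.e., Aa = 100 ± 10 Ω. The relative limiting error is
εr = δA / As = 10 / 100 = 0.1, or 10%.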
5.1. TYPES OF ERRORS
• Gross error
• Systematic error
• Random error
1. Gross Error
This class of errors mainly covers human mistakes in reading instruments, recording and
calculating measurement results. The responsibility of the mistake normally lies with the
experimenter. The experimenter may grossly misread the scale. For example, he may, due to an
oversight, read the temperature as 31.5°C while the actual reading is 21.5°C, or he may read
25.8°C but record it as 28.5°C.
Gross errors can be avoided by:
• Taking great care in reading and recording the data
• Taking two, three or even more readings of the quantity under measurement, preferably by
different experimenters and at different reading points, to avoid repeating the same error
• Taking a large number of readings, since close agreement between readings gives
assurance that no gross error has been committed
2. Systematic errors
These types of errors are divided into:
1. Instrument errors
2. Environmental errors
1. Instrument errors
Instrumental errors are inherent in measuring instruments because of their mechanical structure
and the calibration or operation of the apparatus used. For example, in a D'Arsonval movement,
friction in the bearings of various moving components may cause incorrect readings. Improper
zero adjustment has a similar effect. Poor construction, irregular spring tensions and variations in
the air gap may also cause
instrumental errors. Calibration error may also result in the instrument reading either being too
low or too high.
Such instrumental errors may be avoided by
• Selecting a proper measuring device for the particular application
• Calibrating the measuring device or instrument against a standard
• Applying correction factors after determining the magnitude of instrumental errors
2. Environmental errors
Environmental errors are much more troublesome, as they change with time in an unpredictable
manner. These errors are introduced by using an instrument under conditions different from those
in which it was assembled and calibrated. Change in temperature is the major cause of such
errors, since temperature affects the properties of materials in different ways, including
dimensions, resistivity, spring effect and many more. Other environmental changes also affect the
results given by the instruments, such as humidity, altitude, the earth's magnetic field, gravity,
and stray electric and magnetic fields.
Environmental errors can be eliminated or reduced by taking the following precautions:
• Use the measuring instrument in the same atmospheric conditions in which it was
assembled and calibrated.
• If the above precaution is not possible, then the deviations in local conditions must be
determined and suitable compensations applied to the instrument readings
• Automatic compensation, employing sophisticated devices for such deviations, is also
possible.
3. Random Errors
These errors are of variable magnitude and sign and do not obey any known law. The presence
of random errors becomes evident when different results are obtained on repeated measurements
of the same quantity.
The effect of random errors is minimized by:
• Measuring the given quantity many times under the same conditions and calculating the
arithmetical mean of the results obtained.
• The mean value can justly be considered the most probable value of the measured
quantity, since random errors of equal magnitude but opposite sign occur with
approximately equal frequency when a great number of measurements is made
6. STATISTICAL ANALYSIS
1. Arithmetic Mean
When a number of measurements of a quantity are made and the measurements are not all exactly
equal, the best approximation to the actual value is found by calculating the arithmetic mean
x̄ = (x1 + x2 + x3 + … + xn) / n
2. Average deviation
• This is calculated as the average of the absolute values of the deviations.
• If the measured quantity is assumed to be constant, the average deviation might be regarded
as an indicator of the measurement precision.
D = (|d1| + |d2| + |d3| + … + |dn|) / n, where di = xi − x̄ is the deviation of the i-th reading
from the mean
3. Standard deviation

σ = √[ (d1² + d2² + d3² + … + dn²) / n ]
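The three statistics above can be applied directly to a set of repeated readings; the following
minimal Python sketch (with illustrative values) implements the formulas:

    import math

    readings = [101.2, 101.5, 101.3, 101.7, 101.4]  # repeated measurements
    n = len(readings)

    mean = sum(readings) / n                         # arithmetic mean
    deviations = [x - mean for x in readings]        # d_i = x_i - mean
    avg_dev = sum(abs(d) for d in deviations) / n    # average deviation D
    std_dev = math.sqrt(sum(d * d for d in deviations) / n)  # standard deviation

    print(mean, avg_dev, std_dev)

Note that, following the formula above, the standard deviation here divides by n; for small
samples many texts divide by n − 1 instead.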

7. CALIBRATION
The calibration of all instruments is important since it affords the opportunity to check the
instrument against a known standard and subsequently to find errors and accuracy. Calibration
procedures involve a comparison of the particular instrument with either (1) a primary standard,
(2) a secondary standard of higher accuracy than the instrument being calibrated, or (3) an
instrument of known accuracy. In practice, all working instruments must be calibrated against
reference instruments of higher accuracy. These reference instruments must in turn be calibrated
against instruments of a still higher grade of accuracy, or against primary standards, or against
other standards of known accuracy. It is essential that any measurement made be ultimately
traceable to the relevant primary standards.
