ECE 427
PRINCIPLES OF MEASUREMENT
This course provides a solid groundwork in the basics of electronic measurements
and instrumentation. Starting at an introductory level, the course aims to make students
aware of the importance of proper procedures and techniques in measuring different values and
parameters, and of appropriate interfacing circuits, so that students acquire the skills
necessary to enter professional practice in the electronics and instrumentation world.
Introduction:
The measurement of any quantity plays a very important role not only in science but in all
branches of engineering, medicine and almost all human day-to-day activities. The
technology of measurement is the basis of the advancement of science. The role of science and
engineering is to discover new phenomena, new relationships and the laws of nature, and to apply
these discoveries to human as well as other scientific needs. Science and engineering are also
responsible for the design of new equipment. The operation, control and maintenance of
such equipment and processes are also among the important functions of the science and
engineering branches. All these activities depend on the proper measurement and recording of
physical, chemical, mechanical, optical and many other types of parameters. The measuring
instrument should not affect the quantity to be measured.
Measurement is the act, or the result, of a quantitative comparison between a given
quantity and a quantity of the same kind chosen as a unit. The result of the measurement is
expressed by a pointer deflection over a predefined scale or a number representing the ratio
between the unknown quantity and the standard.
A standard is defined as the physical embodiment of the unit of measurement or
of its submultiple or multiple values.
The device or instrument used for comparing the unknown quantity with the unit of
measurement or a standard quantity is called a measuring instrument.
4. The measured signals can be transmitted over long distances with the help
of cables or radio links, without any loss of information.
6. Electronic circuits can detect and amplify very weak signals and can
measure the events of very short duration as well.
Often, certain operations must be performed on the signal before its further
transmission, so that interfering sources are removed and the signal does not get
distorted. The operations may be linear, such as amplification, attenuation, integration,
differentiation, addition and subtraction, or nonlinear, such as modulation, detection, sampling,
filtering, chopping and clipping. This process is called signal conditioning.
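The distinction between linear and nonlinear conditioning can be sketched in a few lines of code. This is an illustrative sketch only; the function names and sample values are invented, not from the course text.

```python
# A minimal sketch of one linear conditioning step (amplification) followed
# by one nonlinear step (clipping); all values are illustrative.

def amplify(samples, gain):
    """Linear conditioning: scale every sample by a fixed gain."""
    return [gain * s for s in samples]

def clip(samples, limit):
    """Nonlinear conditioning: limit each sample to +/- limit."""
    return [max(-limit, min(limit, s)) for s in samples]

raw = [0.5, -1.0, 1.5, 5.0]   # weak sensor samples (assumed units)
conditioned = clip(amplify(raw, gain=2.0), limit=2.5)
print(conditioned)            # [1.0, -2.0, 2.5, 2.5]
```

Note that the order matters: a linear stage commutes with other linear stages, but clipping before amplification would give a different result.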
The stages of a typical measurement system are summarised below with the
help of a flow diagram in Figure 1.2.
Measurement methods can be classified as
• Direct comparison methods
• Indirect comparison methods
Calibration:
Calibration is the process of making an adjustment, or marking a scale, so that the
readings of an instrument agree with an accepted and certified standard. Calibration
guarantees that the device or instrument is operating with the required accuracy under
the stipulated environmental conditions, and it gives the user confidence in a properly
calibrated instrument. Periodic calibration of an instrument is essential.
1. Accuracy
Accuracy is the closeness with which the instrument reading approaches the true
value of the variable under measurement. Accuracy is determined as the maximum amount
by which the result differs from the true value. It is almost impossible to determine
experimentally the true value. The true value is not indicated by any measurement system
due to the loading effect, lags and mechanical problems (e.g., wear, hysteresis, noise, etc.).
2. Precision
Precision is a measure of the reproducibility of the measurements, i.e., precision is a
measure of the degree to which successive measurements differ from one another.
Precision is indicated from the number of significant figures in which it is expressed.
Significant figures actually convey the information regarding the magnitude and the
measurement precision of a quantity. More significant figures imply greater precision of the
measurement.
3. Resolution
If the input is slowly increased from some arbitrary value it will be noticed that the output
does not change at all until the increment exceeds a certain value called the resolution or
discrimination of the instrument. Thus, the resolution or discrimination of any instrument is the
smallest change in the input signal (quantity under measurement) which can be detected by the
instrument. It may be expressed as an actual value or as a fraction or percentage of the full-scale
value. Resolution is sometimes referred to as sensitivity. The largest change of input quantity for
which there is no output of the instrument is called the dead zone of that instrument.
The sensitivity gives the relation between the input signal to an instrument or a part of the
instrument system and the output. Thus, the sensitivity is defined as the ratio of output signal or
response of the instrument to a change of input signal or the quantity under measurement.
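The sensitivity definition above is a simple ratio, which can be expressed directly. The voltmeter figures below are invented for illustration.

```python
# A sketch of the sensitivity definition: the ratio of the change in output
# (e.g., pointer deflection) to the change in input quantity.

def sensitivity(delta_output, delta_input):
    return delta_output / delta_input

# A voltmeter whose pointer moves 10 scale divisions for a 2 V input change
# (invented numbers):
s = sensitivity(delta_output=10.0, delta_input=2.0)
print(s)  # 5.0 divisions per volt
```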
Example 1.1
A moving-coil ammeter has a uniform scale with 50 divisions and gives a full-scale reading of
5 A. The instrument can be read up to 1/4 of a scale division with a fair degree of certainty.
Determine the resolution of the instrument in mA.
Solution
Full-scale reading = 5 A
Number of divisions on scale = 50
1 scale division = 5/50 = 0.1 A = 100 mA
Resolution = (1/4) × 100 mA = 25 mA
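The arithmetic of Example 1.1 can be checked with a short script. The assumption that 1/4 of a scale division can be read with certainty is the usual one for this type of problem; it is an assumption here, since the original statement is partly illegible.

```python
# Checking Example 1.1 numerically. The readable fraction of a scale
# division (1/4) is assumed, not taken verbatim from the text.

full_scale_mA = 5 * 1000      # full-scale reading, converted to mA
divisions = 50                # number of scale divisions
readable_fraction = 0.25      # smallest readable part of one division (assumed)

one_division_mA = full_scale_mA / divisions          # 100.0 mA per division
resolution_mA = one_division_mA * readable_fraction  # 25.0 mA
print(resolution_mA)
```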
4. Speed of Response
The quickness of an instrument to read the measurand variable is called the speed of
response. Alternately, speed of response is defined as the time elapsed between the start of
the measurement to the reading taken. This time depends upon the mechanical moving
system, friction, etc.
CLASSIFICATION OF INSTRUMENTS
1. Absolute Instruments
The instruments of this type give the value of the measurand in terms of instrument
constant and its deflection. Such instruments do not require comparison with any other standard.
The example of this type of instrument is tangent galvanometer, which gives the value of the
current to be measured in terms of tangent of the angle of deflection produced, the horizontal
component of the earth’s magnetic field, the radius and the number of turns of the wire used.
Rayleigh current balance and absolute electrometer are other examples of absolute instruments.
Absolute instruments are mostly used in standard laboratories and in similar institutions as
standardising instruments.
The classification of measuring instruments is shown in Figure 1.7.
Indicating Instruments
Indicating instruments are those which indicate the magnitude of an electrical quantity at
the time when it is being measured. The indications are given by a pointer moving over a
calibrated (pregraduated) scale. Ordinary ammeters, voltmeters, wattmeters, frequency meters,
power factor meters, etc., fall into this category.
Integrating Instruments
Integrating instruments are those which measure the total amount of either quantity of
electricity (ampere-hours) or electrical energy supplied over a period of time. The summation,
given by such an instrument, is the product of time and an electrical quantity under
measurement. The ampere hour meters and energy meters fall in this class.
Recording Instruments
Recording instruments are those which keep a continuous record of the variation of the
magnitude of an electrical quantity to be observed over a definite period of time. In such
instruments, the moving system carries an inked pen which touches lightly a sheet of paper
wrapped over a drum moving with uniform slow motion in a direction perpendicular to that of
the direction of the pointer. Thus, a curve is traced which shows the variations in the magnitude
of the electrical quantity under observation over a definite period of time. Such instruments are
generally used in powerhouses where the current, voltage, power, etc., are to be maintained
within certain acceptable limits.
Mechanical, Electrical and Electronics Instruments
1. Mechanical Instruments
Mechanical instruments are very reliable for static and stable conditions. They are unable
to respond rapidly to the measurement of dynamic and transient conditions due to the fact that
they have moving parts that are rigid, heavy and bulky and consequently have a large mass.
Mass presents inertia problems, and hence these instruments cannot faithfully follow the rapid
changes involved in dynamic measurements. Also, most mechanical instruments
cause noise pollution.
2. Electrical Instruments
When the instrument pointer deflection is caused by the action of some electrical methods
then it is called an electrical instrument. The time of operation of an electrical instrument is
more rapid than that of a mechanical instrument. Unfortunately, an electrical system normally
depends upon a mechanical measurement as an indicating device. This mechanical movement
has some inertia due to which the frequency response of these instruments is poor.
3. Electronic Instruments
Electronic instruments use semiconductor devices. Most scientific and industrial
instrumentation requires very fast responses. Such requirements cannot be met by
mechanical and electrical instruments. In electronic devices, since the only movement involved is
that of electrons, the response time is extremely small owing to the very small inertia of the
electrons. With the use of electronic devices, a very weak signal can be detected by using
pre-amplifiers and amplifiers.
The power-operated instruments are those in which some external power such as
electricity, compressed air, hydraulic supply is required for operation. In such cases, the input
signal supplies only an insignificant portion of the output power. Electromechanical instruments
shown in Figure 1.8 fall in this category.
In practice, it is impossible to measure the exact value of the measurand. There is always
some difference between the measured value and the absolute or true value of the unknown
quantity (measurand), which may be very small or may be large. The difference between the
true or exact value and the measured value of the unknown quantity is known as the absolute
error of the measurement.
If δA is the absolute error of the measurement, and Am and A are the measured and absolute
values of the unknown quantity, then δA may be expressed as

δA = Am − A

The relative error εr is the ratio of the absolute error to the true value of the unknown
quantity to be measured:

εr = δA / A

When the absolute error δA is negligible, i.e., when the difference between the
true value A and the measured value Am of the unknown quantity is very small,
the relative error may be expressed as

εr ≈ δA / Am
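The two definitions can be sketched numerically; the 100 Ω resistor figures below are invented for illustration.

```python
# A numerical sketch of the absolute- and relative-error definitions;
# the measured and true resistance values are invented.

def absolute_error(measured, true_value):
    return measured - true_value

def relative_error(measured, true_value):
    return (measured - true_value) / true_value

dA = absolute_error(100.5, 100.0)   # 0.5 ohm
er = relative_error(100.5, 100.0)   # 0.005, i.e., 5 parts in 1000
print(dA, er)
```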
The relative error is generally expressed as a fraction (e.g., 5 parts in 1000) or as a percentage.
The measured value of the unknown quantity may be more than or less than the true value
of the measurand. So the manufacturers have to specify the deviations from the specified value
of a particular quantity in order to enable the purchaser to make proper selection according to his
requirements. The limits of these deviations from the specified values are defined as limiting or
guarantee errors. A quantity having a specified magnitude Am and a maximum or limiting
error ±δA must have a true magnitude between the limits Am − δA and Am + δA, i.e., A = Am ± δA.
For example, if the measured value of a resistance is 100 Ω with a limiting error of ±0.5 Ω,
then the true value of the resistance lies between the limits 100 ± 0.5 Ω, i.e., between 99.5 Ω
and 100.5 Ω.
Example 1.2
A 0-25 A ammeter has a guaranteed accuracy of 1 percent of full scale reading. The
current measured by this instrument is 10 A. Determine the limiting error in percentage.
Solution The magnitude of limiting error of the instrument from Eq. (1.1),
δA = εr × A = 0.01 × 25 = 0.25 A
The magnitude of the current being measured is 10 A. The relative error at this
current is εr = 0.25/10 = 0.025. Therefore, the limiting error is 2.5 percent.
Example 1.3
The inductance of an inductor is specified as 20 H ± 5 percent by a manufacturer.
Determine the limits of inductance between which it is guaranteed.
Solution
δL = 0.05 × 20 = 1 H
The inductance is therefore guaranteed to lie between 20 − 1 = 19 H and 20 + 1 = 21 H.
Example 1.4
A 0-250 V voltmeter has a guaranteed accuracy of 2% of full-scale reading. The
voltage measured by the voltmeter is 150 volts. Determine the limiting error in
percentage.
Solution
δV = 0.02 × 250 = 5 V
The relative error at 150 V is εr = 5/150 ≈ 0.0333. Therefore, the limiting error is 3.33 percent.
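Examples 1.2 and 1.4 follow the same pattern: the guaranteed accuracy applies to the full-scale reading, so the percentage error at a smaller reading is proportionally larger. A small helper (a sketch, not from the original text) captures this:

```python
# Limiting error at an arbitrary reading, given a guaranteed accuracy
# specified as a percentage of full scale.

def limiting_error_percent(full_scale, accuracy_percent, reading):
    delta = accuracy_percent / 100 * full_scale   # absolute limiting error
    return delta / reading * 100                  # percent error at this reading

print(limiting_error_percent(25, 1, 10))     # Example 1.2: 2.5 percent
print(limiting_error_percent(250, 2, 150))   # Example 1.4: about 3.33 percent
```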
Types of Errors
Errors may originate in a variety of ways. They are categorised into three main types:
• Gross error
• Systematic error
• Random error
1. Gross Error
These errors occur because of mistakes in reading instruments and in recording and
calculating measurement results. They usually arise from human mistakes, may be of any
magnitude, and cannot be subjected to mathematical treatment.
One common gross error is committed through improper use of the measuring
instrument. Any indicating instrument changes the circuit conditions to some extent when
connected, so that the reading of the measured quantity is altered by the method used.
A multirange instrument has a different scale for each range. During measurements, the
operator may use a scale which does not correspond to the setting of the range selector of the
instrument. Gross error may also arise from improper setting of zero before the
measurement, and this will affect all the readings taken. The gross error
cannot be treated mathematically, so great care should be taken during measurement to avoid this
error. A pictorial illustration of different types of gross error is shown in Figure 1.10.
In general, to avoid gross error, at least two, three or more readings of the measurand
quantity should be taken by different observers. Then if the readings differ by an
unacceptably large amount, the situation can be investigated and the more erroneous readings
eliminated.
2. Systematic Error
These are the errors that remain constant or change according to a definite law on repeated
measurement of the given quantity. These errors can be evaluated and their influence on the
results of measurement can be eliminated by the introduction of proper correction.
Environmental errors, an important class of systematic errors, are much more troublesome,
as they change with time in an unpredictable manner. These errors are introduced by using an
instrument under conditions different from those in which it was assembled and calibrated.
Change in temperature is the major cause of such errors, as temperature affects the properties
of materials in different ways, including dimensions, resistivity, spring stiffness and many more.
Other environmental changes also affect the results given by instruments, such as humidity,
altitude, the earth's magnetic field, gravity, stray electric and magnetic fields, etc.
3. Random Errors
These errors are of variable magnitude and sign and follow no known law. The
presence of random errors becomes evident when different results are obtained on repeated
measurements of one and the same quantity. The effect of random errors is minimised by
measuring the given quantity many times under the same conditions and calculating the
arithmetic mean of the results obtained. The mean value can justly be considered the most
probable value of the measured quantity, since random errors of equal magnitude but opposite
sign occur with approximately equal frequency when a great number of measurements is made.
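The averaging procedure described above is exactly what the standard-library mean computes. The repeated readings below are invented for illustration.

```python
# Suppressing random error by averaging repeated readings of the same
# quantity; the readings are invented example data.
from statistics import mean

readings = [10.03, 9.98, 10.01, 9.97, 10.02]
best_estimate = mean(readings)
print(best_estimate)  # close to 10.002
```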
LOADING EFFECTS
Under ideal conditions, an element used for signal sensing, conditioning, transmission and
detection should not change/distort the original signal. The sensing element should not use any
energy or take least energy from the process so as not to change the parameter being measured.
However, under practical conditions, it has been observed that the introduction of any element in
a system results invariably in extraction of the energy from the system, thereby distorting the
original signal. This distortion may take the form of attenuation, waveform distortion, phase shift,
etc., and consequently, the ideal measurements become impossible.
The incapability of the system to faithfully measure the input signal in undistorted form is
called loading effect. This results in loading error.
The loading effects, in a measurement system, not only occur in the detector–transducer
stage but also occur in signal conditioning and signal presentation stages as well. The loading
problem is carried right down to the basic elements themselves. The loading effect may occur on
account of both electrical and mechanical elements. These are due to impedances of the various
elements connected in a system. The mechanical impedances may be treated similar to electrical
impedances.
STANDARDS AND THEIR CLASSIFICATIONS
Analogue instruments
An analog device is one in which the output or display is a continuous function of
time and bears a constant relation to its input. Measuring instruments are classified according
to both the quantity measured by the instrument and the principle of operation.
Three general principles of operation are available:
(i) electromagnetic, which utilises the magnetic effects of electric currents;
(ii) electrostatic, which utilises the forces between electrically charged conductors;
(iii) electro-thermal, which utilises the heating effect.
Problem 1. A moving-coil instrument gives a f.s.d. when the current is 40 mA and its
resistance is 25 Ohms. Calculate the value of the shunt to be connected in parallel with the
meter to enable it to be used as an ammeter for measuring currents up to 50 A.
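The text gives no worked solution for Problem 1; the following is a sketch of the usual approach. At full-scale deflection, the shunt and the movement see the same voltage, so the shunt resistance is that voltage divided by the current the shunt must carry.

```python
# A sketch of Problem 1 (not the book's worked solution):
# R_shunt = (Im * Rm) / (I - Im).

Im = 40e-3   # meter full-scale current, A
Rm = 25.0    # meter resistance, ohms
I = 50.0     # total current to be measured, A

V_fsd = Im * Rm             # voltage across the movement at f.s.d. (1 V)
R_shunt = V_fsd / (I - Im)  # about 0.02002 ohm
print(round(R_shunt, 5))
```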
Problem 2. A moving-coil instrument having a resistance of 10 Ω, gives a f.s.d. when the
current is 8 mA. Calculate the value of the multiplier to be connected in series with the
instrument so that it can be used as a voltmeter for measuring p.d.s. up to 100 V.
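Similarly for Problem 2 (again a sketch, not the book's worked solution): the series multiplier must drop whatever voltage the movement cannot, at the full-scale current.

```python
# A sketch of Problem 2: R_multiplier = V / Im - Rm.

Im = 8e-3    # meter full-scale current, A
Rm = 10.0    # meter resistance, ohms
V = 100.0    # required voltmeter range, V

R_multiplier = V / Im - Rm   # about 12490 ohms
print(round(R_multiplier))
```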
Loading effect:
When selecting a meter for a particular measurement, the sensitivity rating is very
important. A meter of low sensitivity may give an accurate reading in a low-resistance circuit
but will produce a totally inaccurate reading in a high-resistance circuit.
The voltmeter is always connected across the two points between which the potential
difference is to be measured. If it is connected across a low resistance then, as the voltmeter
resistance is high, most of the current will pass through the low resistance and produce a
voltage drop that is nothing but the true reading. But if the voltmeter is connected
across a high resistance then, with two comparably high resistances in parallel, the current will
divide almost equally between the two paths. The meter will therefore record a voltage drop
across the high resistance that is much lower than the true value. Thus a low-sensitivity
instrument used in a high-resistance circuit gives a lower-than-true reading. This is
called the loading effect. It is mainly caused by low-sensitivity instruments.
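The loading effect can be made concrete with a numerical sketch. The source, resistor and meter values below are invented for illustration; the point is that a finite meter resistance in parallel with a high circuit resistance pulls the reading below the true value.

```python
# A numerical sketch of voltmeter loading: the meter sits in parallel with
# the load resistance, forming a voltage divider with the series resistance.

def measured_voltage(V_source, R_series, R_load, R_meter):
    R_parallel = R_load * R_meter / (R_load + R_meter)
    return V_source * R_parallel / (R_series + R_parallel)

ideal = 100.0 * 100e3 / (100e3 + 100e3)                # 50 V without a meter
loaded = measured_voltage(100.0, 100e3, 100e3, 200e3)  # reads low, about 40 V
print(ideal, loaded)
```

Here a 200 kΩ meter across a 100 kΩ resistor drops the reading from the true 50 V to about 40 V, a 20 percent loading error.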