
Generalized Measurement System

The generalized measurement system consists of three main functional elements. They are:
1. Primary sensing element, which senses the quantity under measurement,
2. Variable conversion element, which suitably modifies the output of the primary sensing element, and
3. Data presentation element, which renders the indication on a calibrated scale.

Primary Sensing Element - The quantity to be measured first comes into contact with the primary sensing element, where the initial conversion takes place. This is done by a transducer, which converts the measured quantity into a usable electrical output. The transduction may be from a mechanical, electrical, optical or any related form.

Variable Conversion Element - The output of the primary sensing element is an electrical signal in a form suitable for control, recording and display. For the instrument to perform the desired function, it may be necessary to convert this output to some other form while preserving the original information. This function is performed by the variable conversion element. A system may require one or more variable conversion elements.

Variable Manipulation Element - Here the signal is manipulated while its original nature is preserved. For example, an amplifier accepts a small voltage signal as input and produces an output voltage of greater magnitude; the output carries the same information, only at a higher level. The voltage amplifier therefore acts as a variable manipulation element, since it amplifies the voltage. The element that follows the primary sensing element in a measurement system is called the signal conditioning element; the variable conversion element and the variable manipulation element are collectively called the data conditioning element or signal conditioning element.

Data Transmission Element - The transmission of data from one element to another is performed by the data transmission element. In the case of a spacecraft, for example, control signals are sent from ground control stations using radio signals. The signal conditioning element and the data transmission element together constitute the intermediate stage of the measurement system.

Data Presentation Element - The display or readout devices which present the required information about the measurement form the data presentation element. Here the information about the measured quantity has to be conveyed for monitoring, control or analysis purposes. If the data are to be monitored, visual display devices such as ammeters and voltmeters are used. If the data are to be recorded, recorders such as magnetic tape, TV equipment, storage-type CRTs, printers and so on are used.
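
As a rough illustration only, the functional elements described above can be pictured as stages in a processing chain. The Python sketch below is purely illustrative; the conversion factors and stage functions are assumptions, not part of any standard system.

# Illustrative model of a generalized measurement system as a chain of stages.
# All numbers are assumed for the example (10 mV/degC sensor, 12-bit conversion over 5 V).

def primary_sensing(temperature_c):
    # Transducer: convert the measurand (degC) into a small voltage (V).
    return temperature_c * 0.010

def variable_conversion(voltage_v):
    # Variable conversion: turn the voltage into a proportional digital count.
    return int(voltage_v / 5.0 * 4095)

def variable_manipulation(counts):
    # Variable manipulation: amplify (scale) the signal without changing its nature.
    return counts * 10

def data_presentation(value):
    # Data presentation: show the result on a readout.
    print(f"Indicated value: {value}")

# Measurement chain for an assumed 25 degC input.
data_presentation(variable_manipulation(variable_conversion(primary_sensing(25.0))))
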
STATIC CHARACTERISTICS OF MEASURING INSTRUMENTS

The static characteristics of instruments are attributes that change slowly with time. Static
characteristics can be divided into desirable and undesirable characteristics.

Desirable characteristics - what we want to achieve - are

• Accuracy
• Sensitivity
• Repeatability
• Reproducibility

1) Accuracy
Accuracy is

• the closeness of a measurement to the true value


Relative accuracy can be expressed as

ar = (ymax - x) / x     (1)

where

ar = relative accuracy (unit/unit)

x = true value of the input (unit)

ymax = instrument output with the largest deviation from the true value (unit)

Example - Accuracy

The true length of a steel beam is 6 m. Three repeated readings with a laser meter indicate lengths of 6.01 m, 6.0095 m and 6.015 m. The accuracy based on the maximum difference can be calculated

as an absolute value

aa = 6.015 m - 6 m

= 0.015 m

or as a relative value

ar = (6.015 m - 6 m) / 6 m

= 0.0025 m/m
or as a relative value in percentage

a% = ((6.015 m - 6 m) / 6 m) 100%

= 0.25 %
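
The same arithmetic can be written as a short Python sketch (values taken from the example above; the variable names are illustrative):

# Accuracy of the laser meter readings from the example above.
true_value = 6.0                      # true length of the beam (m)
readings = [6.01, 6.0095, 6.015]      # repeated laser-meter readings (m)

# Reading with the largest deviation from the true value (6.015 m here)
y_max = max(readings, key=lambda y: abs(y - true_value))

absolute_accuracy = y_max - true_value                # 0.015 m
relative_accuracy = absolute_accuracy / true_value    # 0.0025 m/m
percent_accuracy = relative_accuracy * 100            # 0.25 %

print(f"absolute: {absolute_accuracy:.4f} m")
print(f"relative: {relative_accuracy:.4f} m/m")
print(f"percent:  {percent_accuracy:.2f} %")
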

The accuracy of an instrument can be related to

• the maximum measured value possible for the instrument
• the maximum range of the instrument
• the actual output of the instrument

Terms commonly used in connection with accuracy are precision, trueness and calibration.

2) Precision

Precision is a measure of consistency or repeatability, i.e. successive readings do not differ significantly. It is defined as the capability of an instrument to show the same reading when used each time (reproducibility of the instrument). An instrument which is precise may not necessarily be accurate.

Calibration

The precision of the laser meter used in the example above is good, and the accuracy of the meter can be improved with a calibration correction such as

calibration = 6.01 m - 6 m

= 0.01 m
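
A minimal sketch of applying such a calibration correction (assuming the offset is simply subtracted from subsequent readings):

# Correct subsequent readings with the calibration offset determined above.
calibration_offset = 6.01 - 6.0        # 0.01 m, determined against the known true value

raw_readings = [6.01, 6.0095, 6.015]   # uncorrected meter readings (m)
corrected = [round(r - calibration_offset, 4) for r in raw_readings]
print(corrected)                       # [6.0, 5.9995, 6.005]
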

3) Sensitivity
Sensitivity is the ratio of the increment of the output signal (or response) to the increment of the measured input signal, and can be expressed as

s = dy / dx     (2)

where

s = sensitivity (output unit / input unit)

dy = change in instrument output value (output unit)

dx = change in input true value (input unit)

Example - Temperature measurement with a Pt100 Platinum Resistance Thermometer

When the temperature is changed from 0 °C to 50 °C, the resistance of a Pt100 thermometer changes from 100 ohm to 119.4 ohm. The sensitivity for this range can be calculated as

s = (119.4 ohm - 100 ohm) / (50 °C - 0 °C)

= 0.388 ohm/°C
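
The same calculation as a short Python sketch (values from the Pt100 example; the variable names are illustrative):

# Sensitivity of the Pt100 over the 0-50 degC range.
r_low, r_high = 100.0, 119.4     # resistance at 0 degC and at 50 degC (ohm)
t_low, t_high = 0.0, 50.0        # temperature range (degC)

sensitivity = (r_high - r_low) / (t_high - t_low)    # ohm per degC
print(f"sensitivity = {sensitivity:.3f} ohm/degC")   # 0.388 ohm/degC
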

4) Repeatability
Repeatability describes the closeness of output readings when the same input is applied
repetitively over a short period of time, with the same measurement conditions, same
instrument and observer, same location and same conditions of use maintained throughout.

5) Reproducibility
It describes the closeness of output readings for the same input when there are
changes in the method of measurement, observer, measuring instrument,
location, conditions of use and time of measurement.

6) Range
The range of an instrument defines the minimum and maximum values of a
quantity that the instrument is designed to measure.

Span is the difference between the maximum and minimum values that the instrument is designed to measure.

Example: For a 0-20 V range, the span is 20 V; for a 25 °C to 100 °C range, the span is 75 °C.

7) Resolution
Resolution is the smallest amount of input signal change that the instrument can detect reliably. It is the lower limit on the magnitude of the change in the measured input quantity that produces an observable change in the instrument output.

MEASUREMENT ERRORS

An error may be defined as the difference between the measured and actual values. For example, if two operators use the same device or instrument for a measurement, it is not necessary that both operators get similar results; the difference between the two measurements is referred to as an error.

To understand the concept of measurement errors, you should know the two terms that define the error: true value and measured value. The true value is impossible to find by experimental means. It may be defined as the average value of an infinite number of measured values. The measured value is a single measurement of the object, taken to be as accurate as possible.
Types of Errors
There are three types of errors, classified based on the source they arise from. They are:

• Gross Errors
• Random Errors
• Systematic Errors

Gross Errors
This category basically takes into account human oversight and other mistakes while reading, recording and calculating measurement data. The most common human errors in measurement fall under this category. For example, the person taking a reading from the meter of the instrument may read 23 as 28. Gross errors can be avoided by taking two suitable measures:

• Proper care should be taken in reading and recording the data, and the calculation of error should be done accurately.
• Rather than depending on only one reading, one or more persons should be involved in taking at least three or even more readings.

Random Errors
Random errors are those errors which occur irregularly and hence are random. They can arise due to random and unpredictable fluctuations in experimental conditions (for example, unpredictable fluctuations in temperature, voltage supply or mechanical vibrations of the experimental set-up) or due to errors by the observer taking readings. For example, when the same person repeats the same observation, he is likely to get different readings every time.

Systematic Errors:
Systematic errors can be better understood if we divide them into subgroups. They are:

• Environmental Errors
• Observational Errors
• Instrumental Errors

Environmental Errors: This type of error arises in the measurement due to the effect of external conditions on the measurement. The external conditions include temperature, pressure and humidity, and can also include an external magnetic field. For example, if you measure your temperature under the armpit and the electricity goes out during the measurement so that the room gets hot, this will affect your body temperature and hence the reading.
Observational Errors: These are the errors that arise due to an individual’s bias, lack of proper
setting of the apparatus, or an individual’s carelessness in taking observations. The measurement
errors also include wrong readings due to parallax errors.
Instrumental Errors: These errors arise due to faulty construction and calibration of the measuring
instruments. Such errors arise due to the hysteresis of the equipment or due to friction. Lots of the
time, the equipment being used is faulty due to misuse or neglect, which changes the reading of the
equipment. The zero error is a very common type of error. This error is common in devices like
Vernier callipers and screw gauges. The zero error can be either positive or negative. Sometimes
the scale readings are worn off, which can also lead to a bad reading.

a) Shortcomings of instruments: friction in the bearings of various moving parts, irregular spring tension, etc.

b) Misuse of instruments: A good instrument, if not properly used, gives abnormal readings.

c) Loading of instruments: Consider a pressure gauge designed to measure a maximum pressure of 40 bar. If it is subjected to a pressure of more than 40 bar, the spring inside will lose its tension due to the overloading effect, and in subsequent measurements it will show an error over any range. Errors due to loading can be corrected by using measuring instruments intelligently and correctly.
Instrumental error takes place due to:

• An inherent constraint of devices
• Misuse of apparatus
• Effect of loading

Errors Calculation
Different measures of errors include:
Absolute Error
The difference between the measured value of a quantity and its actual value gives the absolute
error. It is the variation between the actual values and measured values. It is given by
• Absolute Error = |Experimental Measurement – Actual Measurement|

Percent Error
It is another way of expressing the error in measurement. This calculation allows us to gauge how
accurate a measured value is with respect to the true value. Percent error is given by the formula

Percentage Error = ((Experimental Value – Actual Value) / Actual Value) x 100

Relative Error
The ratio of the absolute error to the accepted measurement gives the relative error. The relative
error is given by the formula:
Relative Error = Absolute error / Actual value

• Absolute Error = |Experimental Measurement – Actual Measurement|
• Relative Error = Absolute Error / Actual Measurement
• Percentage Error = Relative Error (in decimal form) x 100
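
A short Python sketch of these three measures (the numeric values are illustrative, not from the text):

# Error measures for a single measurement against a known actual value.
experimental = 9.8     # measured value (illustrative)
actual = 10.0          # actual (true) value (illustrative)

absolute_error = abs(experimental - actual)      # about 0.2
relative_error = absolute_error / actual         # about 0.02
percentage_error = relative_error * 100          # about 2 %

print(f"absolute: {absolute_error:.2f}, relative: {relative_error:.3f}, percent: {percentage_error:.1f} %")
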

How To Reduce Errors In Measurement


Keeping an eye on the procedure and following the points listed below can help to reduce errors.

• Make sure the formulas used for measurement are correct.
• Cross-check the measured value of a quantity for improved accuracy.
• Use the instrument that has the highest precision.
• It is suggested to pilot test measuring instruments for better accuracy.
• Use multiple measures for the same construct.
• Note the measurements under controlled conditions.
