
INTRODUCTION TO METROLOGY

MEASUREMENT
“Whatever exists, exists in some amount”
• The determination of the amount is what measurement is all
about.
• Measurement is a process of obtaining a quantitative
comparison between a predefined standard and a
measurand.
OR
• Measurement is the process of comparing quantitatively
an unknown magnitude with a predefined standard.
* The word measurand is used to designate the input quantity
to the measuring process.
FUNDAMENTAL MEASURING PROCESS

Standards – Length, time, pressure, angle, mass ….

The International System of Units, or SI System.

Derived Units in SI

U.S. SYSTEM              METRIC SYSTEM
1 mile = 5280 feet       1 kilometer  = 1000 meters
1 mile = 1760 yards      1 hectometer = 100 meters
1 rod  = 5.5 yards       1 decameter  = 10 meters
1 yard = 3 feet          1 decimeter  = 1/10 meter
1 foot = 12 inches       1 centimeter = 1/100 meter
                         1 millimeter = 1/1000 meter
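The conversion factors above can be wrapped in a small lookup table, sketched here in Python (the table layout and helper function are my own, not from the notes):

```python
# A small lookup for the tables above. Each factor expresses the unit
# in the system's base unit: meters for metric, feet for the U.S. system.
TO_METERS = {
    "millimeter": 1e-3, "centimeter": 1e-2, "decimeter": 1e-1,
    "meter": 1.0, "decameter": 10.0, "hectometer": 100.0, "kilometer": 1000.0,
}
TO_FEET = {
    "inch": 1.0 / 12.0, "foot": 1.0, "yard": 3.0,
    "rod": 5.5 * 3.0,   # 1 rod = 5.5 yards = 16.5 feet
    "mile": 5280.0,
}

def convert(value, src, dst, table):
    """Convert within one system by going through the base unit."""
    return value * table[src] / table[dst]
```

For example, convert(1, "mile", "yard", TO_FEET) reproduces the 1 mile = 1760 yards entry in the table.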
 Metrology (from Ancient Greek metron (measure) and
logos (study of)) is the science of measurement.
Metrology includes all theoretical and practical aspects of
measurement.

 A measured quantity is incomplete without its units.

 Measurements play a vital role in every field of R & D, and present-day progress has enhanced their importance.

 Metrology is the science of measurement concerned with the correctness of results, the evaluation of measurement uncertainty, and the validation of results by specifying their limitations.
Objectives of Metrology

• The basic objective of metrology is to determine whether a component has been manufactured to the required specification. Advances in metrology have made possible the mass production of modern ultra-precise apparatus. Metrology is an essential part of the development of technology.

The basic objectives of metrology are as follows:

1. To provide the required accuracy at minimum cost.

2. To thoroughly evaluate newly developed products and ensure that components are within the specified dimensions.

3. To reduce the cost of rejections and rework by applying statistical quality control techniques.

4. To reduce the cost of inspection by effective and efficient utilization of available facilities.

5. To maintain accuracies of measurement through periodical calibration of the measuring instruments.

6. To prepare designs for gauges and special inspection fixtures.

7. To standardize measuring methods by adopting proper inspection methods at the development stage itself.

8. To assess the measuring instrument capabilities and ensure that they are adequate for their specific measurements.
Metrology is concerned with

1. Establishing the units of measurements

2. Reproducing these units in the form of standards

3. Ensuring the uniformity of measurements

4. Developing the methods of measurements

5. Analyzing the accuracy of methods of measurements

6. Establishing uncertainty of measurements

7. Developing methods of identifying the causes of measuring errors and eliminating them
Terminologies
1. Calibration: It is the process of determining the values
of the quantity being measured corresponding to a pre
established arbitrary scale.

2. Repeatability: It is the ability of the measuring instrument to give the same value every time the measurement of the given quantity is repeated.

3. Precision: Precision is the repeatability of a measuring process. A measuring process is said to be precise when repeating the measurement of the given quantity gives the same result every time.
4. Accuracy: It is associated with the correctness or the
agreement of the result of a measurement with the true
value of the measured quantity.

5. Error: It is the difference between the measured value and the true value. The lesser the error, the higher the accuracy.

[Figure: target diagrams contrasting high precision with low accuracy, and high accuracy with low precision]
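The distinction can be illustrated numerically. In the sketch below the readings are made up for illustration: bias (mean offset from the true value) indicates accuracy, while spread (standard deviation) indicates precision.

```python
import statistics

# Illustrative (made-up) readings of a nominally 10.00 mm dimension.
true_value = 10.00
precise_but_biased = [10.21, 10.22, 10.20, 10.21, 10.22]    # small spread, large offset
accurate_but_scattered = [9.85, 10.15, 9.95, 10.10, 9.95]   # mean near truth, wide spread

for name, data in [("high precision, low accuracy", precise_but_biased),
                   ("high accuracy, low precision", accurate_but_scattered)]:
    bias = statistics.mean(data) - true_value   # accuracy: closeness to the true value
    spread = statistics.stdev(data)             # precision: repeatability of the readings
    print(f"{name}: bias = {bias:+.3f} mm, spread = {spread:.3f} mm")
```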
Classification of Standards
1 Line & End Standards:
In the Line standard, the length is the distance between
the centres of engraved lines whereas in End standard, it
is the distance between the end faces of the standard.
Examples: a measuring scale is a line standard; a block (slip) gauge is an end standard.

2 Primary, Secondary, Tertiary & Working Standards:


Primary standard: Only one such material standard exists; it is preserved under the most careful conditions and is used only for comparison with the secondary standard.
Secondary standard:
It is similar to Primary standard as nearly as possible and
is distributed to a number of places for safe custody and
is used for occasional comparison with tertiary standards.

Tertiary standard:
It is used for reference purposes in laboratories and
workshops and is used for comparison with working
standard.

Working standard: It is used daily in laboratories and workshops. Lower-grade materials may be used.
Errors in Measurement
Error in measurement is the difference between the
measured value and the true value of the measured
dimension.

Error in measurement = Measured value – True value

The error in measurement may be expressed as an absolute error or as a relative error.
1 Absolute error: It is the algebraic difference between the
measured value and the true value of the quantity
measured. It is further classified as;
a. True absolute error: It is the algebraic difference
between the measured average value and the conventional
true value of the quantity measured.
b. Apparent absolute error: It is the algebraic difference
between one of the measured values of the series of
measurements and the arithmetic mean of all measured values
in that series.
2 Relative error: It is the quotient of the absolute error and the
value of comparison which may be true value, conventional
true value or arithmetic mean value of a series of
measurements used for the calculation of that absolute error.
Example : If the actual true value is 5,000 and estimated
measured value is 4,500, find absolute and relative errors.
Solution : Absolute error = True value – Measured value
= 5,000 – 4,500
= 500 units
Relative error = Absolute error / Measured value
= 500 / 4,500
= 0.11, i.e., about 11%
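The worked example translates directly into a short sketch (following the slide's convention of dividing by the measured value; function names are my own):

```python
def absolute_error(true_value, measured_value):
    """Absolute error = true value - measured value (as in the example)."""
    return true_value - measured_value

def relative_error(true_value, measured_value):
    """Relative error = absolute error / measured value (slide convention)."""
    return absolute_error(true_value, measured_value) / measured_value

abs_err = absolute_error(5000, 4500)   # 500 units
rel_err = relative_error(5000, 4500)   # ~0.11, i.e. about 11%
```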
Methods of measurement

1. Direct method: The value of the quantity to be measured is obtained directly, without the necessity of carrying out supplementary calculations based on a functional dependence of the quantity to be measured in relation to the quantities actually measured.
Example : Weight of a substance is measured directly using a
physical balance.

2. Indirect method: The value of the quantity is obtained from measurements carried out by the direct method on other quantities, connected with the quantity to be measured by a known relationship.
Example: The weight of a substance is obtained by directly measuring its length, breadth, and height and then using the relation Weight = Length × Breadth × Height × Density.
3. Comparison method: Based on the comparison of the value of a quantity to be measured with a known value of the same quantity (direct comparison), or with a known value of another quantity which is a function of the quantity to be measured (indirect comparison).
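The indirect-method example can be sketched as follows. The block dimensions and steel density are illustrative, and, strictly speaking, the L × B × H × density product yields a mass; the slide's "weight" wording is kept:

```python
def indirect_weight(length_m, breadth_m, height_m, density):
    """Indirect method: the quantity is computed from directly measured
    quantities via a known relationship (the slide's
    Weight = Length x Breadth x Height x Density)."""
    return length_m * breadth_m * height_m * density

# Illustrative: a 0.2 m x 0.1 m x 0.05 m steel block, density ~7850 kg/m^3
mass_kg = indirect_weight(0.2, 0.1, 0.05, 7850.0)   # ~7.85 kg
```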
DEFINITION OF STANDARDS
† A standard is defined as “something that is set up
and established by an authority as rule of the
measure of quantity, weight, extent, value or
quality”.
† For example, a meter is a standard established by an
international organization for measurement of
length.
† Industry, commerce, and international trade in modern civilization would be impossible without a good system of standards.
ROLE OF STANDARDS
† The role of standards is to achieve uniform, consistent
and repeatable measurements throughout the world.
† Today our entire industrial economy is based on the interchangeability of parts, the modern method of manufacture.
† To achieve this, a measuring system adequate to define
the features to the accuracy required & the standards of
sufficient accuracy to support the measuring system are
necessary.

STANDARDS OF MEASUREMENTS
† There are two standard measurement systems being used
throughout the world, i.e. English and Metric (Yard and
meter).
† Due to advantages of metric system most of the countries
are adopting metric standard with meter as the
fundamental unit of linear measurement.
Length can be measured by
1. Line standard
2. End standard
3. Wavelength standard
LINE STANDARD

According to this standard, the yard or meter is defined as the distance between scribed lines on a bar of metal under certain conditions of temperature and support.
CHARACTERISTICS OF LINE STANDARDS
1. Scales can be accurately engraved.
Example: A steel rule can be read to about ± 0.2 mm of true
dimension.
2. A scale is quick and easy to use over a wide range of
measurements.
3. The scale markings are not subject to wear, although significant wear on the leading ends results in "undersizing".
4. Scales are subject to the parallax effect, which is a source of both positive and negative reading errors.
5. Scales are not convenient for close-tolerance length measurements except in conjunction with microscopes.
IMPERIAL STANDARD YARD
† The imperial standard yard is a bronze bar of one inch
square cross section and 38 inches long.
† A round recess is cut one inch from each end, extending down to the central plane of the bar.
† A gold plug 0.1 inch in diameter, having three lines engraved transversely and two lines longitudinally, is inserted into each of these holes so that the lines lie in the neutral plane.
† The yard is then defined as the distance between the two central transverse lines of the plugs at 62 °F.
† The gold plug lines are kept in the neutral axis because, when the bar bends, the neutral axis remains unaffected.
1 yard = 0.9144 meter

INTERNATIONAL PROTOTYPE METER
† This is the distance between the center portions of the two lines engraved on the polished surface of a bar of pure platinum (90%)-iridium (10%) alloy, which is non-oxidizable and retains a good polished surface.
† The bar is kept at 0 °C and under normal atmospheric pressure.
† It is supported by two rollers of at least 1 cm diameter, symmetrically situated in the same horizontal plane at a distance of 571 mm, so as to give minimum deflection.
† It has the shape of a winged section (Tresca cross section) having a web whose surface lines are on the neutral axis.
† The shape gives the maximum rigidity.
† The overall width and depth are 16 mm each.
† This standard is kept at the BIPM (Bureau International des Poids et Mesures, the International Bureau of Weights and Measures) at Sèvres, near Paris.
† Thus one yard was equal to 0.91439841 m. As the American yard was longer by four parts in a million, the international yard was adopted as 0.9144 m.
AIRY POINTS
In order to minimize the error in the neutral axis due to the supports, the supports must be placed such that the slope at the ends is zero and the flat end faces of the bar are mutually parallel.
Sir G.B. Airy showed that this condition is obtained when the distance between the supports is

d = L / sqrt(n^2 - 1)

where
n → number of supports
L → length of the bar

For a bar on two supports (n = 2), the expression becomes

d = L / sqrt(3) = 0.577 L

These points of support are known as "Airy" points. In other words, the distance of each support from the end of the bar is (L - 0.577 L) / 2 ≈ 0.211 L.
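The Airy-point expressions d = L / sqrt(n² − 1) and, for two supports, an end distance of about 0.211 L can be checked numerically (a sketch; function names are mine):

```python
import math

def airy_support_spacing(L, n=2):
    """Airy condition: distance between supports d = L / sqrt(n^2 - 1)."""
    return L / math.sqrt(n ** 2 - 1)

def airy_end_distance(L):
    """For two supports, the distance of each support from the bar end:
    (L - d) / 2, which works out to about 0.211 L."""
    return (L - airy_support_spacing(L, 2)) / 2

d = airy_support_spacing(1.0)   # ~0.577 m between supports for a 1 m bar
e = airy_end_distance(1.0)      # ~0.211 m from each end
```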
END STANDARDS

† When the length being measured is expressed as the distance between two parallel end faces, it is called an "end standard".
† End standards can be made to a very high degree of accuracy.
† They consist of standard blocks or bars used to build up the required length.
† Examples: slip gauges, gap gauges, ends of micrometer anvils, etc.
CHARACTERISTICS OF END STANDARDS
1. End standards are highly accurate and are well suited to
measurements of close tolerances.
2. They are time-consuming to use and measure only one dimension at a time.
3. Dimensional tolerances as small as 0.0005 mm can be obtained.
4. End standards are subjected to wear on their measuring faces.
5. They are not subjected to the parallax effect, since their use depends on "feel".
6. Groups of blocks are "wrung" together to build up any length; faulty wringing leads to damage.
7. The accuracy of both end and line standards is affected by temperature change.
WAVELENGTH STANDARD (1960)
† Because of the problems of variation in length of
material length standards, the possibility of using light
as a basic unit to define primary standard has been
considered.
† The wavelength of the selected radiation was measured
and used as the basic unit of length.
† Since wavelength standard is not a physical one, it need
not be preserved.
† Further, it is easily reproducible and the error of
reproduction is in the order of one part in 100 million.
Definitions according to wavelength standard
† The meter is defined as 1,650,763.73 wavelengths of the orange radiation in vacuum of the krypton-86 isotope.
† The yard is defined as 1,509,458.35 wavelengths of the orange radiation in vacuum of the krypton-86 isotope.
† The yard is also defined as 0.9144 meter.
† The substance krypton-86 is used because it produces sharply defined interference lines and its wavelength was the most uniform known at that time.
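A quick consistency check of the two wavelength counts, using only the figures quoted above:

```python
METER_WAVELENGTHS = 1_650_763.73   # Kr-86 orange line: 1960 definition of the meter
YARD_WAVELENGTHS = 1_509_458.35    # corresponding definition of the yard

# The two counts should agree with 1 yard = 0.9144 meter:
ratio = YARD_WAVELENGTHS / METER_WAVELENGTHS        # ~0.9144

# The implied wavelength of the Kr-86 orange radiation:
wavelength_nm = 1e9 / METER_WAVELENGTHS             # ~605.78 nm
```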
Advantages of using Wavelength (light) Standard

1. The length does not change.

2. It can be reproduced easily if destroyed.

3. This primary unit is accessible to any physical laboratory.

4. It can be used for making comparative measurements.

5. It offers much higher accuracy compared with material standards.

6. The wavelength standard can be reproduced consistently at any time and at any place.
Calibration

The calibration of all instruments is important, for it affords the opportunity to check
the instrument against a known standard and subsequently to reduce errors in accuracy.
Calibration procedures involve a comparison of the particular instrument with either (1)
a primary standard, (2) a secondary standard with a higher accuracy than the instrument
to be calibrated, or (3) a known input source. For example, a flowmeter might be
calibrated by (1) comparing it with a standard flow-measurement facility of the National
Institute for Standards and Technology (NIST), (2) comparing it with another flowmeter
of known accuracy, or (3) directly calibrating with a primary measurement such as
weighing a certain amount of water in a tank and recording the time elapsed for this
quantity to flow through the meter.
The importance of calibration cannot be overemphasized because it is calibration that
firmly establishes the accuracy of the instruments. Rather than accept the reading of an
instrument, it is usually best to make at least a simple calibration check to be sure of the
validity of the measurements.

Standards

In order that investigators in different parts of the country and different parts of the
world may compare the results of their experiments on a consistent basis, it is necessary
to establish certain standard units of length, weight, time, temperature, and electrical
quantities. NIST has the primary responsibility for maintaining these standards in the
United States. The meter and the kilogram are considered fundamental units upon
which, through appropriate conversion factors, the English system of length and mass
is based. At one time, the standard meter was defined as the length of a platinum-iridium
bar maintained under very accurate conditions at the International Bureau of Weights and
Measures in Sèvres, France. Similarly, the kilogram was defined in terms of a
platinum-iridium mass maintained at this same bureau. The conversion factors for the
English and metric systems in the United States are fixed by law as
1 meter = 39.37 inches
1 pound-mass = 453.59237 grams
Standards of length and mass are maintained at NIST for calibration purposes. In 1960
the General Conference on Weights and Measures defined the standard meter in terms
of the wavelength of the orange-red light of a krypton-86 lamp. The standard meter was
thus 1 meter = 1,650,763.73 wavelengths. In 1983 the definition of the meter was
changed to the distance light travels in 1/299,792,458 of a second. For the
measurement, light from a helium-neon laser illuminates iodine, which fluoresces at a
highly stable frequency.
The inch is exactly defined as
1 inch = 2.54 centimeters
Standard units of time are established in terms of known frequencies of oscillation of
certain devices. One of the simplest devices is a pendulum. A torsional vibrational
system may also be used as a standard of frequency. Prior to the introduction of quartz
oscillator–based mechanisms, torsional systems were widely used in clocks and
watches. Ordinary 60-hertz (Hz) line voltage may be used as a frequency standard under
certain circumstances. The fundamental unit of time, the second (s), has been defined in
the past as 1/86,400 of a mean solar day. The solar day is measured as the time interval
between two successive transits of the sun across a meridian of the earth. The time
interval varies with location on the earth and the time of year; however, the mean solar day for one year is
constant. The solar year is the time required for the earth to make one revolution around
the sun. The mean solar year is 365 days 5 h 48 min 48 s. The above definition of the
second is quite exact but is dependent on astronomical observations in order to establish
the standard. In October 1967 the Thirteenth General Conference on Weights and
Measures adopted a definition of the second as the duration of 9,192,631,770 periods
of the radiation corresponding to the transition between the two hyperfine levels of the
fundamental state of the atom of cesium-133. This standard can be readily duplicated
in standards laboratories throughout the world.

Dimensions and Units

Despite strong emphasis in the professional engineering community on standardizing
units with an international system, a variety of instruments will be in use for many years,
and an experimentalist must be conversant with the units which appear on the gages and
readout equipment. The main difficulties arise in mechanical and thermal units because
electrical units have been standardized for some time. It is hoped that the SI (Système
International d'Unités) set of units will eventually prevail, and we shall express
examples and problems in this system as well as in the English system employed in the
United States for many years. Although the SI system is preferred, one must recognize
that the English system is still very popular. One must be careful not to confuse the
meaning of the term “units” and “dimensions.” A dimension is a physical variable used
to specify the behavior or nature of a particular system. For example, the length of a rod
is a dimension of the rod.
In like manner, the temperature of a gas may be considered one of the thermodynamic
dimensions of the gas. When we say the rod is so many meters long, or the gas has a
temperature of so many degrees Celsius, we have given the units with which we choose
to measure the dimension. In our development we shall use the dimensions
L = length
M = mass
F = force
τ = time
T = temperature
All the physical quantities used may be expressed in terms of these fundamental
dimensions. The units to be used for certain dimensions are selected by somewhat
arbitrary definitions which usually relate to a physical phenomenon or law. For
example, Newton’s second law of motion may be written

Force ∼ time rate of change of momentum

F = k d(mv)/dτ   (2.1)

where k is the proportionality constant. If the mass is constant,

F = kma   (2.2)

where the acceleration is a = dv/dτ. Equation (2.1) may also be written with 1/gc = k.
Equation (2.2) is used to define our systems of units for mass, force, length, and time.
Some typical systems of units are:
1. 1 pound-force will accelerate 1 pound-mass 32.174 feet per second squared.
2. 1 pound-force will accelerate 1 slug-mass 1 foot per second squared.
3. 1 dyne-force will accelerate 1 gram-mass 1 centimeter per second squared.
4. 1 newton (N) force will accelerate 1 kilogram-mass 1 meter per second squared.
5. 1 kilogram-force will accelerate 1 kilogram-mass 9.80665 meters per second squared.
The kilogram-force is sometimes given the designation kilopond (kp). Since Eq. (2.2)
must be dimensionally homogeneous, we shall have a different value of the constant gc
for each of the unit systems in items 1 to 5 above. These values are:
1. gc = 32.174 lbm · ft/lbf · s2
2. gc = 1 slug · ft/lbf · s2
3. gc = 1 g · cm/dyn · s2
4. gc = 1 kg · m/N · s2
5. gc = 9.80665 kgm · m/kgf · s2
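The role of gc in making F = ma/gc dimensionally consistent can be demonstrated with a small sketch (values follow items 1 and 4 above; the helper name is mine):

```python
def force(mass, acceleration, gc):
    """Newton's second law in gc form: F = m * a / gc."""
    return mass * acceleration / gc

# SI (item 4): gc = 1 kg·m/(N·s^2) -> 1 kg at 1 m/s^2 gives 1 N
f_newton = force(1.0, 1.0, 1.0)
# English engineering (item 1): gc = 32.174 lbm·ft/(lbf·s^2)
# -> 1 lbm at 32.174 ft/s^2 gives 1 lbf
f_lbf = force(1.0, 32.174, 32.174)
```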
It does not matter which system of units is used so long as it is consistent with the above
definitions.

The Generalized Measurement System

Most measurement systems may be divided into three parts:

1. A detector-transducer stage, which detects the physical variable and performs either
a mechanical or an electrical transformation to convert the signal into a more usable
form. In the general sense, a transducer is a device that transforms one physical effect
into another. In most cases, however, the physical variable is transformed into an
electric signal because this is the form of signal that is most easily measured. The signal
may be in digital or analog form. Digital signals offer the advantage of easy storage in
memory devices, or manipulations with computers.
2. Some intermediate stage, which modifies the direct signal by amplification, filtering,
or other means so that a desirable output is available.
3. A final or terminating stage, which acts to indicate, record, or control the variable
being measured. The output may also be digital or analog. As an example of a
measurement system, consider the measurement of a low voltage signal at a low
frequency. The detector in this case may be just two wires and possibly a resistance
arrangement, which are attached to appropriate terminals. Since we want to indicate or
record the voltage, it may be necessary to perform some amplification. The
amplification stage is then stage 2, designated above. The final stage of the
measurement system may be either a voltmeter or a recorder that operates in the range
of the output voltage of the amplifier.
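The three stages above can be sketched as a chain of functions (a minimal illustration; the transduction factor, gain, and calibration constant are made up, not from the text):

```python
def detector_transducer(pressure_kpa):
    """Stage 1: convert the physical variable into a usable signal
    (here, a hypothetical linear pressure-to-voltage transduction)."""
    return 0.01 * pressure_kpa          # volts

def intermediate_stage(signal_v, gain=100.0):
    """Stage 2: modify the direct signal, e.g. by amplification."""
    return gain * signal_v

def terminating_stage(signal_v, volts_per_kpa=1.0):
    """Stage 3: indicate the measured variable on a calibrated scale."""
    return signal_v / volts_per_kpa     # indicated pressure, kPa

reading = terminating_stage(intermediate_stage(detector_transducer(50.0)))
```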

Consider the simple bourdon-tube pressure gage shown in Fig.1. This gage offers a
mechanical example of the generalized measurement system. In this case the bourdon
tube is the detector-transducer stage because it converts the pressure signal into a
mechanical displacement of the tube. The intermediate stage consists of the gearing
arrangement, which amplifies the displacement of the end of the tube so that a relatively
small displacement at that point produces as much as three-quarters of a revolution of
the center gear. The final indicator stage consists of the pointer and the dial
arrangement, which, when calibrated with known pressure inputs, gives an indication
of the pressure signal impressed on the bourdon tube. A schematic diagram of the
generalized measurement system is shown in Fig. 1.

Fig. 1 Bourdon-tube pressure gauge as a generalized measurement system

Basic Concepts in Dynamic Measurements

A static measurement of a physical quantity is performed when the quantity is not
changing with time. The deflection of a beam under a constant load would be a static
deflection. However, if the beam were set in vibration, the deflection would vary with
time, and the measurement process might be more difficult. Measurements of flow
processes are much easier to perform when the fluid is in a nice steady state and become
progressively more difficult to perform when rapid changes with time are encountered.
Many experimental measurements are taken under such circumstances that ample time
is available for the measurement system to reach steady state, and hence one need not
be concerned with the behavior under non-steady-state conditions. In many other
situations, however, it may be desirable to determine the behavior of a physical variable
over a period of time. Sometimes the time interval is short, and sometimes it may be
rather extended. In any event, the measurement problem usually becomes more
complicated when the transient characteristics of a system need to be considered. In this
section we wish to discuss some of the more important characteristics and parameters
applicable to a measurement system under dynamic conditions.

Zeroth-, First-, and Second-Order Systems

A system may be described in terms of a general variable x(t) written in differential
equation form as

a_n d^n x/dt^n + a_(n-1) d^(n-1) x/dt^(n-1) + ... + a_1 dx/dt + a_0 x = F(t)
where F(t) is some forcing function imposed on the system. The order of the system is
designated by the order of the differential equation.
A zeroth-order system would be governed by

a_0 x = F(t)

a first-order system by

a_1 dx/dt + a_0 x = F(t)

and a second-order system by

a_2 d^2 x/dt^2 + a_1 dx/dt + a_0 x = F(t)

We shall examine the behavior of these types of systems to study some basic concepts of
dynamic response. We shall also give some concrete examples of physical systems which
exhibit the different orders of behavior.
The zeroth-order equation indicates that the system variable x(t) will track the input
forcing function instantly, scaled by a constant: that is,

x = F(t) / a_0

The constant 1/a_0 is called the static sensitivity of the system. If a constant force were
applied to the beam mentioned above, the static deflection of the beam would be F/a_0.
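The zeroth-order relation, together with the classic step response of a first-order system, can be sketched as follows. The first-order solution x(t) = (F0/a0)(1 − e^(−t/τ)) with τ = a1/a0 is the standard result for a step input from rest; the function names and values are mine:

```python
import math

def zeroth_order(F, a0):
    """Zeroth order: x = F / a0 -> output tracks the input instantly."""
    return F / a0

def first_order_step(t, F0, a0, a1):
    """Step response of a1*dx/dt + a0*x = F0 from x(0) = 0:
    x(t) = (F0/a0) * (1 - exp(-t/tau)), with time constant tau = a1/a0."""
    tau = a1 / a0
    return (F0 / a0) * (1.0 - math.exp(-t / tau))

x_static = zeroth_order(3.0, 2.0)                   # static sensitivity 1/a0 = 0.5
x_one_tau = first_order_step(1.0, 2.0, 2.0, 2.0)    # tau = 1: ~63.2% of final value
```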
System Response

We have already discussed the meaning of frequency response and observed that in
order for a system to have good response, it must treat all frequencies the same within
the range of application so that the ratio of output-to-input amplitude remains the same
over the frequency range desired. We say that the system has linear frequency response
if it follows this behavior.
Amplitude response pertains to the ability of the system to react in a linear way to
various input amplitudes. In order for the system to have linear amplitude response, the
ratio of output-to-input amplitude should remain constant over some specified range of
input amplitudes. When this linear range is exceeded, the system is said to be
overdriven, as in the case of a voltage amplifier where too high an input voltage is used.
Overdriving may occur with both analog and digital systems.
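Overdriving can be illustrated with a minimal clipping model (the gain and rail voltage are illustrative, not from the text):

```python
def amplifier(v_in, gain=10.0, v_sat=5.0):
    """Idealized amplifier: linear up to the saturation rails (+/- v_sat),
    clipped beyond them -- the overdriven condition."""
    return max(-v_sat, min(v_sat, gain * v_in))

v_linear = amplifier(0.3)      # 3.0 V: within the linear range
v_clipped = amplifier(1.0)     # clipped at 5.0 V instead of 10.0 V
```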

Distortion

Suppose a harmonic function of a complicated nature, that is, one composed of many
frequencies, is transmitted through the mechanical system. If the frequency spectrum of
the incoming waveform were sufficiently broad, there would be different amplitude and
phase-shift characteristics for each of the input frequency components, and the output
waveform might bear little resemblance to the input. Thus, as a result of the frequency-
response characteristics of the system, distortion in the waveform would be
experienced. Distortion is a very general term that may be used to describe the variation
of a signal from its true form. Depending on the system, the distortion may result from
either poor frequency response or poor phase-shift response. In electronic devices
various circuits are employed to reduce distortion to very small values. For pure
electrical measurements distortion is easily controlled by analog or digital means. For
mechanical systems the dynamic response characteristics are not as easily controlled
and remain a subject for further development. For example, the process of sound
recording may involve very sophisticated methods to eliminate distortion in the
electronic signal processing; however, at the origin of the recording process, complex
room acoustics and microphone placement can alter the reproduction process beyond
the capabilities of electronic correction. Finally, at the terminal stage, the loudspeaker
and its interaction with the room acoustics can introduce distortions and unwanted
effects.

Analysis of Experimental Data

Some form of analysis must be performed on all experimental data. The analysis may
be a simple verbal appraisal of the test results, or it may take the form of a complex
theoretical analysis of the errors involved in the experiment and matching of the data
with fundamental physical principles. Even new principles may be developed in order
to explain some unusual phenomenon. Our discussion in this chapter will consider the
analysis of data to determine errors, precision, and general validity of experimental
measurements. The correspondence of the measurements with physical principles is
another matter, quite beyond the scope of our discussion. Some methods of graphical
data presentation will also be discussed. The interested reader should consult the
monograph by Wilson [4] for many interesting observations concerning correspondence
of physical theory and experiment.
The experimentalist should always know the validity of data. The automobile test
engineer must know the accuracy of the speedometer and gas gage in order to express
the fuel-economy performance with confidence. A nuclear engineer must know the
accuracy and precision of many instruments just to make some simple radioactivity
measurements with confidence. In order to specify the performance of an amplifier, an
electrical engineer must know the accuracy with which the appropriate measurements
of voltage, distortion, and so forth, have been conducted. Many considerations enter
into a final determination of the validity of the results of experimental data, and we
wish to present some of these considerations in this chapter. Errors will creep into all
experiments regardless of the care exerted. Some of these errors are of a random nature,
and some will be due to gross blunders on the part of the experimenter. Bad data due to
obvious blunders may be discarded immediately. But what of the data points that just
“look” bad? We cannot throw out data because they
do not conform with our hopes and expectations unless we see something obviously
wrong.

Types of Errors

At this point we mention some types of errors that may cause uncertainty in an
experimental measurement. First, there can always be those gross blunders in apparatus
or instrument construction which may invalidate the data. Hopefully, the careful
experimenter will be able to eliminate most of these errors. Second, there may be certain
fixed errors which will cause repeated readings to be in error by roughly the same
amount but for some unknown reason. These fixed errors are sometimes called
systematic errors, or bias errors. Third, there are the random errors, which may be
caused by personal fluctuations, random electronic fluctuations in the apparatus or
instruments, various influences of friction, and so forth. These random errors usually
follow a certain statistical distribution, but not always. In many instances it is very
difficult to distinguish between fixed errors and random errors. The experimentalist may
sometimes use theoretical methods.

General Considerations in Data Analysis

Our discussions in this chapter have considered a variety of topics: statistical analysis,
uncertainty analysis, curve plotting, and least squares, among others. With these tools
the reader is equipped to handle a variety of circumstances that may occur in
experimental investigations. As a summary to this chapter let us now give an
approximate outline of the manner in which one would go about analyzing a set of
experimental data:
1. Examine the data for consistency.
No matter how hard one tries, there will always be some data points that appear to be
grossly in error. If we add heat to a container of water, the temperature must rise, and
so if a particular data point indicates a drop in temperature for a heat input, that point
might be eliminated. In other words, the data should follow commonsense consistency,
and points that do not appear proper should be eliminated. If very many data points fall
in the category of “inconsistent,” perhaps the entire experimental procedure should be
investigated for gross mistakes or miscalculation.
2. Perform a statistical analysis of data where appropriate.
A statistical analysis is only appropriate when measurements are repeated several
times. If this is the
case, make estimates of such parameters as standard deviation, and so forth. In those
cases where the uncertainty of the data is to be prescribed by statistical analysis, a
calculation should be performed using the t-distribution. This may be used to determine
levels of confidence and levels of significance. The number of measurements to be
performed may be determined for different levels of confidence.
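Step 2 can be sketched as follows; the readings are made up, and the t value (2.776 for 4 degrees of freedom at 95% confidence) would normally be read from a t-table:

```python
import statistics

def confidence_interval(data, t_value):
    """Mean +/- t * s / sqrt(n). The t value comes from a t-table for
    n - 1 degrees of freedom at the chosen confidence level."""
    n = len(data)
    mean = statistics.mean(data)
    half_width = t_value * statistics.stdev(data) / n ** 0.5
    return mean - half_width, mean + half_width

# Five repeated (made-up) readings; t = 2.776 for 4 dof at 95% confidence
readings = [10.02, 9.98, 10.01, 9.99, 10.00]
low, high = confidence_interval(readings, t_value=2.776)
```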
3. Estimate the uncertainties in the results.
We have discussed uncertainties at length. Hopefully, these calculations will have been
performed in advance, and the investigator will already know the influence of different
variables by the time the final results are obtained.
4. Anticipate the results from theory.
Before trying to obtain correlations of the experimental data, the investigator should
carefully review the theory appropriate to the subject and try to glean some information
that will indicate the trends the results may take. Important dimensionless groups,
pertinent functional relations, and other information may lead to a fruitful interpretation
of the data. This step is particularly important in determining the graphical form(s) to
be selected for presentation of data.
5. Correlate the data.
The word “correlate” is subject to misinterpretation. In the context here we mean that
the experimental investigator should make sense of the data in terms of physical theories
or on the basis of previous experimental work in the field. Certainly, the results of the
experiments should be analyzed to show how they conform to or differ from previous
investigations or standards that may be employed for such measurements.
