
Experiment ____

Measurements and Uncertainty


Introduction

Measurement and Error Analysis

Physics is, by nature, an experimental science. One must take measurements when performing physics
experiments. These measurements are described by numbers, and the physical phenomena described by
these measurements are called physical quantities. Examples of physical quantities are the length of an
object, the weight of a wooden block, the height of a person, and so on.

No measurement is absolutely precise. There is an uncertainty associated with every measurement.


This can be due to the limited accuracy of the measuring instrument, the inability of an instrument to read
some fraction of its smallest division, and the personal bias of the experimenter. When giving the result of a
measurement, it is helpful to report the estimated uncertainty in the measurement. For example, the
width of a book might be written as 5.2 ± 0.1 cm. The 0.1 cm (“plus or minus 0.1 cm”) represents the
estimated measurement uncertainty. This means that the actual width most likely lies between 5.1 and
5.3 cm.

“True values” are different from standard or textbook values. We can think of a “true value” as
that value which we would measure if all sources of error could be eliminated from the experiment. However,
since we cannot totally eliminate all sources of error, true values cannot be measured. Standard values are
NOT true values. A usual practice is to compare a measured value with the standard value of a certain
physical quantity. This comparison is more appropriately called the experimental discrepancy, not the
error or uncertainty of the measurement. For most simple measuring devices, the following rule of thumb
is usually applied: in making a measurement, the uncertainty is taken to be at least ±1 of the smallest
division on the instrument used.

The percent uncertainty is the ratio of the uncertainty to the measured value, multiplied by 100.
For example, if the measurement is 5.2 cm and the uncertainty about 0.1 cm, the percent uncertainty is

(0.1 / 5.2) × 100% ≈ 1.9%

where ≈ means “is approximately equal to.”
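The calculation above can be sketched as a short Python helper; the numbers are the book-width example from the text:

```python
def percent_uncertainty(value, uncertainty):
    """Percent uncertainty: the ratio of the uncertainty to the
    measured value, multiplied by 100."""
    return uncertainty / value * 100

# Book-width example: 5.2 cm measured with an uncertainty of 0.1 cm.
print(round(percent_uncertainty(5.2, 0.1), 1))  # 1.9 (percent)
```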

Precision refers to the repeatability of a measurement using a given instrument. For example,
if the width of a book is measured many times, with results such as 8.81 cm, 8.85 cm,
8.78 cm, and 8.82 cm, you could say the measurements give a precision a bit better than 0.1 cm. On the other
hand, accuracy refers to how close a measurement is to the true value. Estimated uncertainty is meant to
take both accuracy and precision into account.
Figure 1. Illustration of accuracy and precision on the number line
(mathsisfun.com/accuracy-precision.html)

Random and Systematic Errors

Errors can be classified as either random (indeterminate) or systematic (determinate). Random


errors arise from unpredictable or unknown variations in the experimental environment. They
include operator errors, fluctuating experimental conditions, and the inherent variability of the
measuring instruments. The effect of random errors can be minimized by repeating the measurements and
taking the average value.

Systematic errors are errors caused by a particular instrument or experimental technique. These
errors are called “systematic" because when the measurement is repeated many times, the error has the
same size and algebraic sign for each measurement. These errors are more serious because they are harder
to detect. Getting the average of several measurements does not minimize the effect of systematic errors.
But once they are detected, the size and sign of the errors can be determined. Examples of systematic
errors are miscalibrated instruments, using an incorrect constant in the equations, and reading a scale
incorrectly.

At this point, we would like to distinguish between precision and accuracy. A high-precision
measurement is one with a relatively small random error. A high-accuracy measurement is one with both small
random error and small systematic error. Precision does not necessarily imply accuracy: a precise
measurement may be inaccurate if it has a systematic error. The difference between precision and accuracy
is illustrated by the dartboards in Figure 2.

Figure 2. Difference between systematic and random errors


Figure 2(a) shows high accuracy and high precision; (b) shows some accuracy but low
precision; (c) is inaccurate but highly precise; and (d) is neither accurate nor precise.

The deviations of the individual measurements from the average give an indication of the
reliability of that average value. The typical value of this deviation is a measure of the precision. This
average deviation is calculated from the absolute values of the deviations, since otherwise the fact that
there are both positive and negative deviations means that they will cancel. If one finds the average of the
absolute values of the deviations, this “average deviation from the mean” may serve as a measure of
reliability. For example, let column 1 below represent 10 readings of the length of a rod, taken at one place so
that variations in the rod do not come into consideration; column 2 then gives the absolute value of
each reading's deviation from the mean.

MEASUREMENT            DEVIATION FROM AVERAGE

9.943 cm               0.000
9.942 cm               0.001
9.944 cm               0.001
9.941 cm               0.002
9.943 cm               0.000
9.943 cm               0.000
9.945 cm               0.002
9.943 cm               0.000
9.941 cm               0.002
9.942 cm               0.001
Average = 9.943 cm     Average = 0.0009 cm ≈ 0.001 cm

Length = 9.943 ± 0.001 cm

Expressed algebraically, the average deviation from the mean (ADM) is

ADM = (Σ|xᵢ − x̄|) / n

where xᵢ is the i-th of the n measurements taken and x̄ is the mean or arithmetic average of the readings.
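The rod-length table above can be reproduced with a few lines of Python; note that the exact mean is 9.9427 cm, which rounds to the 9.943 cm reported in the table:

```python
# Ten readings of the length of a rod, from the table above (cm).
readings = [9.943, 9.942, 9.944, 9.941, 9.943,
            9.943, 9.945, 9.943, 9.941, 9.942]

# Arithmetic mean of the readings.
mean = sum(readings) / len(readings)

# Average deviation from the mean: average of the absolute deviations,
# so that positive and negative deviations do not cancel.
adm = sum(abs(x - mean) for x in readings) / len(readings)

print(f"Length = {mean:.3f} ± {adm:.3f} cm")  # Length = 9.943 ± 0.001 cm
```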

Operations on uncertainties

Given two quantities x and y measured with uncertainties Δx and Δy, respectively, the basic operations
between x and y, with their corresponding uncertainties, are as follows:

Operation      Uncertainty
x + y          Δx + Δy
x − y          Δx + Δy
x · y          (Δx/x × 100%) + (Δy/y × 100%)
x / y          (Δx/x × 100%) + (Δy/y × 100%)
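These rules can be sketched as two small helpers: absolute uncertainties add for sums and differences, while percent uncertainties add for products and quotients. The example values here are assumed for illustration, not taken from the manual:

```python
def add_sub_uncertainty(dx, dy):
    """Sum/difference rule: absolute uncertainties add."""
    return dx + dy

def mul_div_uncertainty(x, dx, y, dy):
    """Product/quotient rule: percent uncertainties add.
    Returns the combined percent uncertainty."""
    return (dx / x * 100) + (dy / y * 100)

# Example: (5.2 ± 0.1) cm + (3.0 ± 0.1) cm  ->  8.2 ± 0.2 cm
print(5.2 + 3.0, "±", add_sub_uncertainty(0.1, 0.1), "cm")

# Example: percent uncertainty of the product (5.2 ± 0.1)(3.0 ± 0.1)
print(round(mul_div_uncertainty(5.2, 0.1, 3.0, 0.1), 1), "%")  # 5.3 %
```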

Vernier Caliper

A vernier caliper is a measuring device more precise than an ordinary ruler. It consists of a fixed scale and
a moving vernier scale as shown in Figure 3. The vernier scale is used to read fractions of small divisions
on the main scale.

In a metric vernier, the fixed scale is marked in centimeters and millimeters. The vernier scale, on
the other hand, usually consists of ten equally spaced marks, but their spacing is not the same as that of
the marks on the main scale (See Figure 4). If you look at these closely you will see that ten divisions of
the vernier scale occupy a length interval equal to nine of the smallest main scale divisions (millimeters).
Thus, we can see that the vernier scale markings are engraved 9/10 millimeter apart.

Figure 3. Vernier Caliper and its parts

Figure 4. Principle of the Vernier scale

In taking the measurement of an object using the vernier caliper, write down where the index mark
(zero of the vernier scale) is located on the main scale. Then locate which of the marks on the vernier scale
lines up best with a main scale mark. The number of that vernier scale mark represents the fraction of a
main scale division which must be added. It doesn't matter which main scale mark lines up best with a
vernier mark. It is the vernier mark number which you record.

To illustrate this, suppose a particular centimeter vernier caliper gives a measurement as shown below:

Figure 5. A Vernier caliper reading

We see that the index mark falls between 9.1 and 9.2 cm. So, we write down 9.1 cm. Now, looking
at the vernier scale, we see that the 3rd vernier scale mark coincides with a main scale mark. Thus, our
measurement is 9.13 cm.
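The two-step reading procedure can be sketched as a function, assuming a ten-division metric vernier where each vernier mark is worth 0.01 cm:

```python
def vernier_reading(main_scale_cm, vernier_mark):
    """Combine the main-scale reading at the index mark (cm) with the
    number of the vernier mark that best lines up; on a ten-division
    metric vernier each mark adds 0.01 cm."""
    return main_scale_cm + vernier_mark * 0.01

# The Figure 5 example: index mark between 9.1 and 9.2 cm, and the
# 3rd vernier mark coincides with a main-scale mark -> 9.13 cm.
print(round(vernier_reading(9.1, 3), 2))  # 9.13
```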
Read the following Vernier caliper measurements in metric units.

(a) Reading: _______

(b) Reading: _______

(c) Reading: _______

(d) Reading: _______

(e) Reading: _______


Micrometer Caliper

The micrometer caliper is a measuring device widely used for accurate measurement of components in
science, engineering, and machining. Like the vernier caliper, it is more precise than an ordinary ruler.

Figure 6. A micrometer caliper

Figure 6 shows a typical micrometer caliper. The object to be measured is placed between the
fixed jaw and the movable jaw and the jaw is gently closed on the object. The movable jaw of the
micrometer caliper is driven by a precise and uniform screw.

A typical metric instrument (Figure 7) has the main scale marked to 1/2 millimeter. The
circumference of the rotating handle (the thimble) is subdivided into 50 equal subdivisions. One rotation
of the handle carries the screw a distance of 1/2 mm along the main scale. Therefore the markings on the
thimble allow one to read hundredths of millimeters. That is, 0.5 mm/50 div = 0.01 mm/div. In Figure 6,
the reading on the main scale is more than 5.5 mm but less than 5.6 mm. The thimble reading is 27.5, so
the instrument's length reading is 5.5 mm + (27.5 div)(0.01 mm/div) = 5.5 mm + 0.275 mm = 5.775 mm.

Figure 7. A metric micrometer caliper

Always check whether full closure of the jaws actually gives a zero reading. Special wrenches
are available to set the zero reading exactly. Alternatively, the zero reading may be treated as a
correction value to be added to (or subtracted from) all readings made with the instrument. This value is
called a "zero correction."
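The micrometer arithmetic, including the zero correction, can be sketched as follows; the zero-correction value in the usage line is an assumed example:

```python
def micrometer_reading(main_scale_mm, thimble_div, zero_correction_mm=0.0):
    """Combine the main-scale reading (mm, read to the nearest half
    millimeter) with the thimble reading (each division worth 0.01 mm),
    then apply the instrument's zero correction."""
    return main_scale_mm + thimble_div * 0.01 + zero_correction_mm

# The worked example above: 5.5 mm on the main scale, thimble at 27.5.
print(round(micrometer_reading(5.5, 27.5), 3))  # 5.775 (mm)

# Hypothetical instrument that reads +0.01 mm when fully closed, so
# the zero correction is -0.01 mm.
print(round(micrometer_reading(5.5, 27.5, zero_correction_mm=-0.01), 3))
```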
Read the following Micrometer caliper measurements in metric units.

(a) Reading: _______

(b) Reading: _______

(c) Reading: _______

(d) Reading: _______


(e) Reading: _______

Density is an intensive property of a material; that is, it does not depend on the amount of
material present. The density of a homogeneous material is defined as its mass per unit volume:

ρ = m / V

The SI unit for density is kg/m³, but densities are sometimes given in g/cm³.
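The definition and the unit conversion can be sketched in a few lines; the sample mass and volume are assumptions for illustration, not measured data:

```python
def density(mass_g, volume_cm3):
    """Density rho = m / V, in g/cm^3."""
    return mass_g / volume_cm3

# Hypothetical block: mass 540 g, volume 200 cm^3.
rho = density(540.0, 200.0)
print(rho, "g/cm^3")            # 2.7 g/cm^3

# Conversion to SI: 1 g/cm^3 = 1000 kg/m^3.
print(round(rho * 1000), "kg/m^3")  # 2700 kg/m^3
```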

Activity

• In this activity, the student should be able to calculate the volume and total surface area of
any three rectangular objects using a ruler and report their data with uncertainties in SI units.

Data Sheet

• Using a ruler, measure the dimensions of three different rectangular objects. Take 3
measurements for each object.

Table 1. Measurements of the three rectangular objects


Object    Length                       Width                        Height
          Trial 1  Trial 2  Trial 3    Trial 1  Trial 2  Trial 3    Trial 1  Trial 2  Trial 3

Table 2. Average measurements of the three rectangular objects


Object Length Width Height
± ± ±
± ± ±
± ± ±
Table 3. Volume and total surface area
Object Volume Total Surface Area
± ±
± ±
± ±
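Once Table 2 is filled in, the volume entry of Table 3 can be computed from the averaged dimensions with the product rule from the operations table. A minimal sketch, using hypothetical dimensions rather than measured data:

```python
def volume_with_uncertainty(l, dl, w, dw, h, dh):
    """Volume V = l*w*h; by the product rule the percent uncertainties
    of the factors add, and the result is converted back to an
    absolute uncertainty."""
    v = l * w * h
    pct = (dl / l + dw / w + dh / h) * 100
    return v, v * pct / 100

def surface_area(l, w, h):
    """Total surface area of a rectangular object."""
    return 2 * (l * w + l * h + w * h)

# Hypothetical object: 10.0 ± 0.1 cm x 5.0 ± 0.1 cm x 2.0 ± 0.1 cm.
v, dv = volume_with_uncertainty(10.0, 0.1, 5.0, 0.1, 2.0, 0.1)
print(f"V = {v:.0f} ± {dv:.0f} cm^3")   # V = 100 ± 8 cm^3
print(f"A = {surface_area(10.0, 5.0, 2.0):.0f} cm^2")  # A = 160 cm^2
```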

• What is the density of each object?

REFERENCES:
1. Physics and Geology Unit, UP Manila. Laboratory Manual for Physics 71.1 (2006).
2. Snapir, N. and Perek, M. (1969). Evaluation of various methods of measuring egg shell
quality. Annales de zootechnie, INRA/EDP Sciences, 18(4), pp. 399–405. <hal-00886980>
3. http://www.lhup.edu/~dsimanek/errorsx.htm
4. http://www.lhup.edu/~dsimanek/scenario/labman1/measure.htm
5. http://badger.physics.wisc.edu/lab/manual/
6. http://www.sonoma.edu/users/d/delcorra/delcorral/p209b04/index.html
7. http://www.phy.uct.ac.za/courses/c1lab (vernier photo)
8. http://instructor.physics.lsa.umich.edu/ip-labs/tutorials/errors/
