

By the end of this lesson, students will be able to:

- measure physical quantities using appropriate instruments
- explain accuracy and consistency
- explain sensitivity
- explain the types of experimental error
- use appropriate techniques to reduce errors

When measuring a physical quantity, the following steps need to be considered:

- Estimate the magnitude of the quantity to be measured.
- Choose a suitable instrument so that the magnitude of the quantity does not exceed the maximum capacity of the instrument.
- The instrument must be sensitive enough to detect and give a meaningful measurement of the quantity.


Precision is the ability of an instrument to measure a quantity consistently, with only a small relative deviation between readings. The precision of a reading can be indicated by its relative deviation.

The relative deviation is the mean deviation expressed as a percentage of the mean value for a set of measurements, and it is defined by the following formula:

Relative deviation = (mean deviation / mean value) × 100%
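As a minimal sketch, the mean, mean deviation, and relative deviation can be computed as follows; the readings are hypothetical values in cm:

```python
# Sketch: mean deviation and relative deviation for a set of readings
# (hypothetical values, in cm).
readings = [2.31, 2.33, 2.30, 2.34, 2.32]

mean = sum(readings) / len(readings)
mean_deviation = sum(abs(r - mean) for r in readings) / len(readings)
relative_deviation = (mean_deviation / mean) * 100  # as a percentage

print(f"mean               = {mean:.3f} cm")
print(f"mean deviation     = {mean_deviation:.3f} cm")
print(f"relative deviation = {relative_deviation:.2f} %")
```

A smaller relative deviation indicates a more precise set of readings.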

The accuracy of a measurement is how close the measurement is to the actual value of the physical quantity. A measurement reported to a greater number of significant figures is a more accurate measurement.

The table below shows that the micrometer screw gauge is more accurate than the other measuring instruments.

The accuracy of a measurement can be increased by:

- taking a number of repeated readings and calculating the mean value of the readings
- avoiding end errors or zero errors
- taking into account zero and parallax errors
- using a more sensitive instrument, such as a vernier caliper in place of a ruler

The difference between precision and accuracy can be illustrated by the spread of shots on a target (as shown in the diagram below).

The sensitivity of an instrument is its ability to detect small changes in the quantity being measured. Thus, a sensitive instrument can quickly detect a small change in the quantity. Instruments with smaller scale divisions are more sensitive. A sensitive instrument is not necessarily accurate.

Error is the difference between the actual value of a quantity and the value obtained by measurement. There are two main types of error:

- systematic error
- random error

Systematic Error
Systematic errors are errors that shift all measurements in a systematic way, so that their mean value is displaced from the true value. Systematic errors can be compensated for if they are known.

Examples of systematic errors are:

- zero error, which is caused by an incorrect position of the zero point
- incorrect calibration of the measuring instrument
- consistently improper use of equipment

Systematic error can be reduced by:

- conducting the experiment with care
- repeating the experiment using different instruments

Zero Error
A zero error arises when the measuring instrument does not start from exactly zero. Zero errors are consistently present in every reading of a measurement. The zero error can be positive or negative.

(NO ZERO ERROR: The pointer of the ammeter rests on zero when no current flows through it.)

(NEGATIVE ZERO ERROR: The pointer of the ammeter rests on a negative value, not on zero, when no current flows through it.)

(POSITIVE ZERO ERROR: The pointer of the ammeter rests on a positive value, not on zero, when no current flows through it.)

Random Error
Random errors arise from unknown and unpredictable variations in conditions; they fluctuate from one measurement to the next. Random errors are caused by factors that are beyond the control of the observer.

Random error can be caused by:

- personal errors, such as human limitations of sight and touch
- lack of sensitivity of the instrument: the instrument fails to respond to a small change
- natural errors, such as changes in temperature or wind while the experiment is in progress
- wrong measurement technique

One example of random error is the parallax error. Random error can be reduced by:

- taking repeated readings
- finding the average value of the readings
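A minimal sketch of why averaging helps, assuming readings scattered randomly and symmetrically about a hypothetical true value of 5.00 cm with a spread of ±0.05 cm (all values invented for illustration):

```python
# Sketch: simulating how averaging repeated readings reduces random error.
import random

random.seed(1)                # fixed seed so the run is reproducible
true_value = 5.00             # hypothetical true length in cm
spread = 0.05                 # random error of up to ±0.05 cm per reading

readings = [true_value + random.uniform(-spread, spread) for _ in range(50)]

single_error = abs(readings[0] - true_value)
mean_error = abs(sum(readings) / len(readings) - true_value)

print(f"error of a single reading: {single_error:.4f} cm")
print(f"error of the mean of 50  : {mean_error:.4f} cm")
```

Because the random errors are as likely to be positive as negative, they tend to cancel in the mean, so the mean of many readings typically lies closer to the true value than any single reading.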

Parallax Error
A parallax error is an error in reading an instrument that occurs when the observer's eye and the pointer are not in a line perpendicular to the plane of the scale.

Metre Rule
A metre rule is 1 metre (100 cm) long. The smallest division is 1 mm or 0.1 cm. Therefore, it can measure length with an accuracy of up to 0.1 cm or 0.001 m.

(Diagrams: an inaccurate measurement and an accurate measurement.)

Vernier Callipers
A pair of vernier callipers consists of a main scale and a vernier scale. The main scale is numbered in centimetres but has millimetre divisions. The vernier scale is 9 mm long and is subdivided into 10 equal divisions. Therefore, each division on the vernier scale has a length of 0.9 mm.

The difference between one division on the main scale and one division on the vernier scale is 1.0 mm − 0.9 mm = 0.1 mm. Therefore, the vernier callipers can measure with an accuracy of up to 0.1 mm or 0.01 cm.

Parts of a vernier caliper:

1. Outside jaws: used to measure the external diameter or width of an object
2. Inside jaws: used to measure the internal diameter of an object
3. Depth probe: used to measure the depth of an object or a hole
4. Main scale: scale marked every mm
5. Main scale: scale marked in inches and fractions
6. Vernier scale: gives interpolated measurements to 0.1 mm or better
7. Vernier scale: gives interpolated measurements in fractions of an inch
8. Retainer: used to lock the movable part, allowing a measurement to be transferred easily

Steps to obtain a reading

1. Read the main scale directly opposite the zero mark on the vernier scale. In this case, the reading on the main scale is 22 mm or 2.2 cm.
2. The 7th vernier marking coincides with a marking on the main scale. This gives a reading of 0.7 mm or 0.07 cm to be added to the main scale reading.
3. The internal diameter of the beaker is obtained by adding the main scale reading to the vernier scale reading. Reading = 22.0 mm + 0.7 mm = 22.7 mm = 2.27 cm
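The calculation in the steps above can be sketched as follows, using the values from the worked example:

```python
# Sketch: combining the main scale and vernier scale readings of a
# vernier caliper (values from the worked example).
main_scale_mm = 22         # main scale mark just before the vernier zero
vernier_division = 7       # vernier marking that coincides with a main scale mark
vernier_mm = vernier_division * 0.1   # each vernier division adds 0.1 mm

reading_mm = main_scale_mm + vernier_mm
print(f"reading = {reading_mm:.1f} mm = {reading_mm / 10:.2f} cm")
# reading = 22.7 mm = 2.27 cm
```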

Zero error in vernier callipers

a) Positive zero error

Positive zero error = +0.02 cm
Actual reading = Observed reading − zero error
= Observed reading − (+0.02 cm)
= Observed reading − 0.02 cm

b) Negative zero error

Negative zero error = −(0.10 − 0.07) cm = −0.03 cm
Actual reading = Observed reading − zero error
= Observed reading − (−0.03 cm)
= Observed reading + 0.03 cm
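Both cases follow the same rule: actual reading = observed reading − zero error. A minimal sketch, using the zero errors from (a) and (b) with a hypothetical observed reading of 2.27 cm:

```python
# Sketch: correcting a vernier caliper reading for zero error.
# Actual reading = observed reading - zero error.
def correct(observed_cm, zero_error_cm):
    return observed_cm - zero_error_cm

observed = 2.27  # hypothetical observed reading in cm

# (a) positive zero error of +0.02 cm: subtract 0.02 cm
print(f"{correct(observed, +0.02):.2f} cm")   # 2.25 cm

# (b) negative zero error of -0.03 cm: add 0.03 cm
print(f"{correct(observed, -0.03):.2f} cm")   # 2.30 cm
```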

Micrometer Screw Gauge

The micrometer screw gauge also consists of two scales: the main scale on the sleeve and the thimble scale on the thimble. One complete turn of the thimble moves the spindle by 0.5 mm. There are 50 divisions on the thimble. Hence, each division represents a distance of 0.5 mm / 50 = 0.01 mm. Therefore, a micrometer screw gauge can measure length with an accuracy of 0.01 mm.

Steps to obtain a reading:

1. Read the main scale at the edge of the thimble. In this case, it is 4 mm.

2. Take the thimble reading opposite the datum line of the main scale. In this case, it is 46 divisions, which gives a value of 0.46 mm (46 × 0.01 mm).

3. The reading is taken by adding the main scale reading to the thimble reading. Reading = 4.00 mm + 0.46 mm = 4.46 mm = 0.446 cm
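The micrometer calculation above can be sketched the same way, using the values from the worked example:

```python
# Sketch: combining the sleeve (main scale) and thimble readings of a
# micrometer screw gauge (values from the worked example).
sleeve_mm = 4.0            # main scale reading at the edge of the thimble
thimble_divisions = 46     # thimble division opposite the datum line
thimble_mm = thimble_divisions * 0.01   # each division is 0.5 mm / 50 = 0.01 mm

reading_mm = sleeve_mm + thimble_mm
print(f"reading = {reading_mm:.2f} mm = {reading_mm / 10:.3f} cm")
# reading = 4.46 mm = 0.446 cm
```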


Negative zero error

If the 0 mark on the thimble scale is above the datum line on the main scale, the zero error is negative.

Positive zero error

If the 0 mark on the thimble scale is below the datum line on the main scale, the zero error is positive.

Vernier callipers:
- external diameter of a test tube
- internal diameter of a test tube
- diameter of an Artline marker

Micrometer screw gauge:
- thickness of a Physics practical book
- diameter of a 2B pencil