
Dr. Sc. Syahril, S.Si, MT

PAPER ON MEASUREMENT AND UNCERTAINTIES

By :

Atika Friska Lumban Gaol


1805111010
Study Program: Physics Education

PHYSICS EDUCATION

DEPARTMENT OF MATHEMATICS AND NATURAL SCIENCE

FACULTY OF TEACHER TRAINING AND EDUCATION

RIAU UNIVERSITY

PEKANBARU

2021
Foreword

Praise be to the one and only God, whose blessings and mercy made it possible to complete this paper. I also give thanks to all parties, especially our lecturer Mr. Syahril, who gave us the motivation that allowed this paper to be finished on time.

This paper was compiled from various sources, both from the Internet and from print media, and the formulas and theory presented here are taken from those sources, with all my limitations as someone who still has much to learn.

My hope is that this paper can provide practical concepts for both practitioners and fellow students in understanding measurement and uncertainty. I realize that the presentation of this paper is far from perfect, so I welcome constructive advice and criticism from our readers and our beloved lecturer so that I can develop this paper further.

Pekanbaru, March 19, 2021

Compiler

ATIKA FRISKA LUMBAN GAOL


1805111010
CHAPTER I
(INTRODUCTION)

1.1 Issue Background

Measurement is a process by which a value of a particular quantity, such as the temperature of a water bath or the pH of a solution, is obtained. In the case of length measurement, this might involve measuring the atomic-scale topography of a surface using an instrument such as an atomic-force microscope (AFM), or measuring the length of a pendulum using a metre rule. Values obtained through measurement form the foundation upon which we are able to:

 test both new and established scientific theories;
 decide whether a component, such as a resistor, is within specification;
 compare values of a particular quantity, such as the thickness of the ozone layer of the atmosphere, obtained by workers around the world;
 quantify the amount of a particular chemical species, such as the amount of steroid in a sample of urine taken from an athlete; and
 establish the proficiency of laboratories involved with the testing and calibration of equipment.

A measurement tells us about a property of something. It might tell us how


heavy an object is, or how hot, or how long it is. A measurement gives a number
to that property. Measurements are always made using an instrument of some
kind. Rulers, stopwatches, weighing scales, and thermometers are all measuring
instruments. The result of a measurement is normally in two parts: a number and a
unit of measurement, e.g. ‘How long is it? ... 2 metres.’

The uncertainty of a measurement tells us something about its quality.


Uncertainty of measurement is the doubt that exists about the result of any
measurement. You might think that well-made rulers, clocks and thermometers
should be trustworthy, and give the right answers. But for every measurement -
even the most careful - there is always a margin of doubt. In everyday speech, this
might be expressed as ‘give or take’ ... e.g. a stick might be two metres long ‘give
or take a centimetre’.

Since there is always a margin of doubt about any measurement, we need


to ask ‘How big is the margin?’ and ‘How bad is the doubt?’ Thus, two numbers
are really needed in order to quantify an uncertainty. One is the width of the
margin, or interval. The other is a confidence level, and states how sure we are
that the ‘true value’ is within that margin. For example: We might say that the
length of a certain stick measures 20 centimetres plus or minus 1 centimetre, at
the 95 percent confidence level. This result could be written: 20 cm ±1 cm, at a
level of confidence of 95%.
The statement says that we are 95 percent sure that the stick is between 19
centimetres and 21 centimetres long. There are other ways to state confidence
levels. It is important not to confuse the terms ‘error’ and ‘uncertainty’. Error is
the difference between the measured value and the ‘true value’ of the thing being
measured. Uncertainty is a quantification of the doubt about the measurement
result. Whenever possible we try to correct for any known errors: for example, by
applying corrections from calibration certificates. But any error whose value we
do not know is a source of uncertainty.

An uncertainty analysis of experimental measurements is necessary for the


results to be used to their fullest value. Authors submitting papers for publication
to this Journal are expected to describe the uncertainties in their experimental
measurements and in the results calculated from those measurements. The
presentation of experimental data should include the following information:

(1) The precision limit, P. The ±P interval about a result (single or averaged) is the experimenter’s 95 percent confidence estimate of the band within which the mean of many such results would fall, if the experiment were repeated many times under the same conditions and using the same equipment. The precision limit is thus an estimate of the scatter (or lack of repeatability) caused by random errors and unsteadiness.

(2) The bias limit, B. The bias limit is an estimate of the magnitude of the fixed, constant error. When the true bias error in a result is defined as b, the quantity B is the experimenter’s 95 percent confidence estimate such that |b| ≤ B.

(3) The uncertainty, U. The ±U interval about the result is the band within which the experimenter is 95 percent confident the true value of the result lies.

(4) A brief description of, or reference to, the methods used for the uncertainty
analysis. (If estimates are made at a confidence level other than 95 percent,
adequate explanation of the techniques used must be provided.) The estimates
of precision limits and bias limits should be made corresponding to a time
interval appropriate to the experiment.

It is preferred that the following additional information also be included:

(1) The precision limit and bias limits for the variables and parameters used in
calculating each result.

(2) A statement comparing the observed scatter in results on repeated trials (if
performed) with the expected scatter (±P) based on the uncertainty analysis.
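A commonly used convention for combining these quantities (a standard practice in experimental uncertainty reporting, not stated explicitly in the policy above) is to form the 95 percent uncertainty from the bias and precision limits in quadrature:

U = √(B² + P²)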
1.2 Problem Formulation

1. How to perform simple measurements?


2. How to find experimental uncertainties in your measurements?
3. How to correctly write down your measurements with their uncertainties?
4. How to obtain the uncertainty in a calculated value (for example a
perimeter, an area or average)?
5. What is the difference between precision and accuracy?

1.3 Limitation of Problems

1) Learn how to perform simple measurements.


2) Learn how to find experimental uncertainties in your measurements.
3) Learn how to correctly write down your measurements with their
uncertainties.
4) Learn how to obtain the uncertainty in a calculated value (for example
a perimeter, an area or average).
5) Learn the difference between precision and accuracy.
CHAPTER II
(DISCUSSION)

2.1 Uncertainty of measurement

You may be interested in uncertainty of measurement simply because you


wish to make good quality measurements and to understand the results. However,
there are other more particular reasons for thinking about measurement
uncertainty. You may be making the measurements as part of a:

 calibration - where the uncertainty of measurement must be reported on the certificate;
 test - where the uncertainty of measurement is needed to determine a pass or fail; or to meet a
 tolerance - where you need to know the uncertainty before you can decide whether the tolerance is met.

Or you may need to read and understand a calibration certificate or a written specification for a test or measurement.

When repeated measurements give different results, we want to know how


widely spread the readings are. The spread of values tells us something about the
uncertainty of a measurement. By knowing how large this spread is, we can begin
to judge the quality of the measurement or the set of measurements. Sometimes it
is enough to know the range between the highest and lowest values. But for a
small set of values this may not give you useful information about the spread of
the readings in between the highest and the lowest. For example, a large spread
could arise because a single reading is very different from the others.

The usual way to quantify spread is standard deviation. The standard


deviation of a set of numbers tells us about how different the individual readings
typically are from the average of the set. As a ‘rule of thumb’, roughly two thirds
of all readings will fall between plus and minus (±) one standard deviation of the
average. Roughly 95% of all readings will fall within two standard deviations.
This ‘rule’ applies widely although it is by no means universal. The ‘true’ value
for the standard deviation can only be found from a very large (infinite) set of
readings. From a moderate number of values, only an estimate of the standard
deviation can be found. The symbol s is usually used for the estimated standard
deviation.
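As an illustration, here is a minimal Python sketch (the ten readings are made-up values, not from this paper) of how the estimated standard deviation s, and the standard uncertainty of the average, could be computed:

    import math

    # Hypothetical repeated readings of the same length, in centimetres (made-up values)
    readings = [20.1, 19.8, 20.0, 20.3, 19.9, 20.2, 20.0, 19.7, 20.1, 20.0]

    n = len(readings)
    mean = sum(readings) / n

    # Estimated standard deviation s (divide by n - 1 because this is only an estimate)
    s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

    # Standard uncertainty of the average: s divided by the square root of n
    u_mean = s / math.sqrt(n)

    print(f"mean = {mean:.2f} cm, s = {s:.2f} cm, u(mean) = {u_mean:.2f} cm")

Roughly two thirds of such readings should fall within plus or minus one s of the mean, as the rule of thumb above suggests.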

2.2 Where do errors and uncertainties come from?

Many things can undermine a measurement. Flaws in the measurement may be


visible or invisible. Because real measurements are never made under perfect
conditions, errors and uncertainties can come from:
 The measuring instrument - instruments can suffer from errors including
bias, changes due to ageing, wear, or other kinds of drift, poor readability,
noise (for electrical instruments) and many other problems.
 The item being measured - which may not be stable. (Imagine trying to
measure the size of an ice cube in a warm room.)
 The measurement process - the measurement itself may be difficult to
make. For example measuring the weight of small but lively animals
presents particular difficulties in getting the subjects to co-operate.
 ‘Imported’ uncertainties - calibration of your instrument has an
uncertainty which is then built into the uncertainty of the measurements
you make. (But remember that the uncertainty due to not calibrating
would be much worse.)
 Operator skill - some measurements depend on the skill and judgement
of the operator. One person may be better than another at the delicate
work of setting up a measurement, or at reading fine detail by eye. The
use of an instrument such as a stopwatch depends on the reaction time
of the operator. (But gross mistakes are a different matter and are not to
be accounted for as uncertainties.)
 Sampling issues - the measurements you make must be properly
representative of the process you are trying to assess. If you want to
know the temperature at the work-bench, don’t measure it with a
thermometer placed on the wall near an air conditioning outlet. If you are
choosing samples from a production line for measurement, don’t always
take the first ten made on a Monday morning.
 The environment - temperature, air pressure, humidity and many other
conditions can affect the measuring instrument or the item being
measured.

Where the size and effect of an error are known (e.g. from a calibration
certificate) a correction can be applied to the measurement result. But, in general,
uncertainties from each of these sources, and from other sources, would be
individual ‘inputs’ contributing to the overall uncertainty in the measurement.

Any experimental measurement or result has an uncertainty associated with it. In today’s lab you will perform a set of very simple measurements. You will have to estimate the random uncertainty associated with each of them. As a rule of thumb, the precision of your measuring device (for example a ruler) is always a very good starting value for your uncertainty. Furthermore, you will be asked to perform some calculations using the values you just measured. The results of those calculations will also have an uncertainty associated with them. To obtain those values you will have to follow a set of rules. They are explained in this book in the “Making Measurements in Physics” section.

Finally, you will collect a set of 10 measurements of the same quantity from your classmates and be asked to calculate their average. That result also has an uncertainty associated with it. Your instructor will tell you how to calculate it.

All measurements always have some uncertainty. We refer to the uncertainty as


the error in the measurement. Errors fall into two categories:

 Systematic Error - errors resulting from measuring devices being out of


calibration. Such measurements will be consistently too small or too
large. These errors can be eliminated by pre-calibrating against a known,
trusted standard.
 Random Errors - errors resulting in the fluctuation of measurements of
the same quantity about the average. The measurements are equally
probable of being too large or too small. These errors generally result
from the fineness of scale division of a measuring device.

2.3 Expressing uncertainty of measurement

Physics is a quantitative science and that means a lot of measurements and


calculations. These calculations involve measurements with uncertainties and thus
it is essential for the physics student to learn how to analyze these uncertainties
(errors) in any calculation. Systematic errors are generally “simple” to analyze
but random errors require a more careful analysis and thus it will be our focus.
There is a statistical method for calculating random uncertainties in
measurements. This requires taking at least 10 measurements of a quantity. We
will consider such method later on in the lab. For now we will consider the
uncertainty associated with one single measurement. The following general rules
of thumb are often used to determine the uncertainty in a single measurement
when using a scale or digital measuring device.

1. Uncertainty in a Scale Measuring Device is equal to the smallest increment divided by 2:

σ = (smallest increment) / 2

2. Uncertainty in a Digital Measuring Device is equal to the smallest increment:

σ = smallest increment

 Ex. Meter Stick (scale device)

σ = 1 mm / 2 = 0.5 mm = 0.05 cm

 Ex. Digital Balance (digital device)

reading: 5.7513 kg
σ = 0.0001 kg

When stating a measurement the uncertainty should be stated explicitly so


that there is no question about the uncertainty in the measurement. However, if
the uncertainty is not stated explicitly, an uncertainty is still implied. For example, if we
measure a length of 5.7 cm with a meter stick, this implies that the length can be
anywhere in the range 5.65 cm ≤ L ≤ 5.75 cm. Thus, L = 5.7 cm measured with a
meter stick implies an uncertainty of 0.05 cm. A common rule of thumb is to take
one-half the unit of the last decimal place in a measurement to obtain the
uncertainty.

In general, any measurement can be stated in the following preferred form:

measurement = x_best ± σ_x

x_best = best estimate of the measurement

σ_x = uncertainty (error) in the measurement

Rule For Stating Uncertainties – Experimental uncertainties should be stated to 1 significant figure.

Ex. v = 31.25 ± 0.034953 m/s

v = 31.25 ± 0.03 m/s (correct)

The uncertainty is just an estimate and thus it cannot be more precise (more
significant figures) than the best estimate of the measured value.

Rule For Stating Answers – The last significant figure in any answer should be in
the same place as the uncertainty.

Ex. a = 1261.29 ± 200 cm/s²

a = 1300 ± 200 cm/s² (correct)

Since the uncertainty is stated to the hundreds place, we also state the answer to
the hundreds place. Note that the uncertainty determines the number of
significant figures in the answer.
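As a rough Python sketch of these two rules (the helper function below is my own illustration, not a standard library routine): round the uncertainty to one significant figure, then round the best value to the same decimal place.

    import math

    def round_with_uncertainty(value, uncertainty):
        # Position of the first significant figure of the uncertainty
        exponent = math.floor(math.log10(abs(uncertainty)))
        # Keep one significant figure in the uncertainty ...
        u_rounded = round(uncertainty, -exponent)
        # ... and state the value to the same decimal place
        v_rounded = round(value, -exponent)
        return v_rounded, u_rounded

    print(round_with_uncertainty(31.25, 0.034953))   # -> (31.25, 0.03)
    print(round_with_uncertainty(1261.29, 200))      # -> (1300.0, 200)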

Significant Figures

Calculating uncertainties in calculations involving measurements (error propagation) can sometimes be time consuming. A quicker and approximate method that is used to determine the number of significant figures in a calculation is to use a couple of rules.
DEF: A significant figure is a reliably known digit.

 Because zeros serve as counters and to set the decimal point, they
present a problem when determining significant figures in a
number.

A. Rules for Determining Significant Figures in a Number

a. All non-zero numbers are significant.


b. Zeros within a number are always significant.
c. Zeros that do nothing but set the decimal point are not significant.
Both 0.000098 and 0.98 contain two significant figures.
d. Zeros that aren’t needed to hold the decimal point are significant. For
example, 4.00 has three significant figures.
e. Zeros that follow a number may be significant.

B. Rule for Adding and Subtracting Significant Figures

When measurements are added or subtracted, the number of decimal places in the
final answer should equal the smallest number of decimal places of any term.

Ex. 256.5895 g + 8.1 g

M = 264.6895 g

M = 264.7 g (answer)

C. Rule for Multiplying/Dividing Significant Figures

When measurements are multiplied or divided, the number of significant figures in the final answer should be the same as the term with the lowest number of significant figures.

Ex. L1 = 2.2 cm ; L2 = 38.2935 cm

A = L1 × L2 = 84.2457 cm²

A = 84 cm² (answer)

D. Stating a number in scientific notation removes all ambiguities with


regard to how many significant figures a number has.

Accuracy and Precision

The terms accuracy and precision are often mistakenly used interchangeably. In
error analysis there is a clear distinction between the two.

Accuracy – an indication of how close a set of measurements is to the exact (true)


value.
Precision – a measure of the closeness of a set of measurements (sometimes it is used to specify the fineness of detail to which a measurement is made, i.e. the significant figures).

To get a better feeling for the difference between accuracy & precision and random & systematic errors, let’s consider the following shooting-target analogy: the experiment is to shoot a set of rounds at a stationary target and analyze the results. A tight cluster of shots indicates high precision, and a cluster centred on the bull’s-eye indicates high accuracy.

Random or Systematic

The effects that give rise to uncertainty in measurement can be either:

a) Random - where repeating the measurement gives a randomly different


result. If so, the more measurements you make, and then average, the
better estimate you generally can expect to get.
b) Systematic - where the same influence affects the result for each of the
repeated measurements (but you may not be able to tell). In this case,
you learn nothing extra just by repeating measurements. Other methods
are needed to estimate uncertainties due to systematic effects, e.g.
different measurements, or calculations.

To calculate the uncertainty of a measurement, firstly you must identify the


sources of uncertainty in the measurement. Then you must estimate the size of the
uncertainty from each source. Finally the individual uncertainties are combined to
give an overall figure. There are clear rules for assessing the contribution from
each uncertainty, and for combining these together. All contributing uncertainties
should be expressed at the same confidence level, by converting them into
standard uncertainties. A standard uncertainty is a margin whose size can be
thought of as ‘plus or minus one standard deviation’. The standard uncertainty
tells us about the uncertainty of an average (not just about the spread of values). A
standard uncertainty is usually shown by the symbol u (small u), or u(y) (the
standard uncertainty in y).
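For independent inputs, the root-sum-square rule referred to later in this paper is the usual way of combining the individual standard uncertainties (a standard result, stated here for completeness):

u = √(u₁² + u₂² + … + uₙ²)

where u₁, u₂, …, uₙ are the standard uncertainties of the individual contributions and u is the combined standard uncertainty.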

Calibration

In order that an instrument or artefact should accurately indicate the value of a quantity, the instrument or artefact requires calibration. This procedure is essential for establishing the traceability of the instrument or artefact to a primary standard. There is no hard-and-fast distinction between ‘instrument’ and ‘artefact’, but in general an instrument measures a quantity, whereas an artefact provides a quantity. For example, a digital multimeter (DMM) is an instrument that measures voltage, resistance or current and displays it as a number. An instrument may also measure a quantity by means of the position of a pointer on a dial. By contrast, standard weights and gauge blocks are artefacts, also known as artefact standards or standard artefacts. An example is a standard artefact with a low temperature coefficient of resistance, designed and manufactured at the National Measurement Institute of Australia (Pritchard 1997). During calibration, a value measured by an instrument or provided by an artefact is compared with that obtained from a standard instrument or artefact. If there is a discrepancy between the value as indicated by the instrument or artefact and the corresponding standard, then the difference between the two is quoted as a correction to the instrument or artefact. This process is referred to as calibration, and the correction always has a stated associated uncertainty. Over time it is possible for the values indicated by an instrument or provided by an artefact to ‘drift’. This makes recalibration necessary. Manufacturers often advise that calibration be carried out at regular intervals (say every 12 months).

Traceability

The result of a measurement is said to be traceable if, through an unbroken chain of comparisons often involving working and secondary standards, the result can be compared with a primary standard. Any instrument or artefact used as part of the measurement process must recently have been calibrated by reference to a standard that is traceable to a primary standard. A requirement of traceability is that the chain of comparisons be documented. The consequences of lack of traceability, in some instances, can be severe. For example, if a component manufacturer cannot satisfy a regulatory authority that results of measurements on its components can be traced back to a primary standard, then that manufacturer may be prohibited from selling its products in its own country or elsewhere.

Value

The process of measurement yields a value of a particular quantity. As examples,

a. the value of the period of a pendulum, T =2.37 s;


b. the value of the length of a pendulum, l =1.35 m; and
c. the value of the mass of a steel ball, m =67.44 g.

A value may be regarded as the product of a number and the unit in which the
particular quantity is measured.
Uncertainty

Errors are key and unavoidable ingredients of the measurement process. Their net effect is to create an uncertainty in the value of a measurand. As with the word ‘error’, the word ‘uncertainty’ is used widely in everyday language, such as ‘There is some uncertainty as to whether it will rain today.’ When used in the context of measurement, uncertainty has a number and (most often) a unit associated with it. More specifically, measurement uncertainty has the same unit as the measurand. The manner by which an uncertainty is calculated depends on the circumstances, but it is usual to apply established statistical methods in order to calculate uncertainty.

Random errors

The distinction between random and systematic errors is best seen by considering the notion of ‘repeating the measurement under unchanging conditions’, or as closely as we can arrange such conditions. By ‘unchanging conditions’ we mean a well-defined measurand, a tightly controlled environment and the same measuring instrument. Often when we repeat the measurement in this way, we will obtain a different value. The reason for this lack of perfect repeatability is that the instrument we use or the measurand, or both, will be affected by uncontrollable and small changes in the environment or within the measurand itself. Such changes may be due, for example, to electrical interference, mechanical vibration or changes in temperature. So if we make the measurement ten times, we are likely to get ten values that, although similar, vary by a small amount. When our intention is to obtain a single value for the measurand, we interpret such variations as the effect of errors. The errors fluctuate, otherwise we would see no variation in our values. Errors that fluctuate, because of the variability in our measurements even under what we consider to be the same conditions, are called random errors. In brief, random errors arise because of our lack of total control over the environment or measurand.

Systematic errors

During any measurement, there will probably be an error that remains constant when the measurement is repeated under the same conditions. An example of such an error is a constant offset in a measuring instrument. Unlike random errors, such systematic errors cannot be reduced by repeating the measurements and taking their mean; they resist statistical attack. A systematic error may be revealed by one of two general methods. In the following discussion, we use the term ‘device’ to refer to either an instrument or an artefact. We may look up previously obtained information on the devices used in a measurement. This information may take the form of specifications by a manufacturer or supplier, or look-up tables of physical constants of materials, and previously reported measurements against higher-accuracy devices. We note especially the latter resource: any device, particularly if used in an accurate measurement, should have been calibrated recently. There are laboratories that perform calibrations and issue a calibration report for a specified device. The devices of higher accuracy used in the calibration are themselves calibrated against devices of yet higher accuracy. In this manner, all devices are traceable to the ‘top of the food chain’ – the international primary standard for the particular quantity. We may call this general class of information ‘specific information’, since it is specific to the actual measurand that is of immediate concern. Any discrepancy between this specific information and the result of the present measurement suggests that there is a systematic error in the present measurement. The other method of identifying a systematic error is by changing the experimental set-up. The change may be intentional in order to seek out any systematic error, or may occur for other reasons, with the systematic error being discovered ‘by accident’ as a result of the change. The change may also take place as a slow natural process, generating an increasing and significant systematic error, which, however, remains unsuspected for a prolonged period. In high-accuracy electrical measurements, the slow deterioration in the insulating property of materials, permitting increasing leakage currents, is such a process. Here are four examples of intentional change that may uncover a systematic error.

a) In high-accuracy electrical measurements of voltage, swapping the electrical leads connecting a source of constant voltage to a high-accuracy DMM can reveal the systematic errors arising from the DMM’s ‘zero-offset’ and from small thermal voltages caused by the Seebeck effect. The zero-offset error is a non-zero DMM reading when it should be exactly zero (as when a short-circuiting wire is connected across the input terminals), and is due to imperfections in the DMM’s internal electronics. The Seebeck effect creates small voltages at junctions between different metals at different temperatures.
b) Exchanging one instrument for another that is capable of the same accuracy and preferably made by a different manufacturer.
c) Having a different person perform the measurement. Thus the exact position of a marker on a scale or of a pointer on a dial will be read differently by different people (a case of so-called ‘parallax’ error, caused by differences in the positioning of the eye relative to an observed object). In high-accuracy length measurements, using gauge blocks of standard thicknesses, the blocks must often be wrung together to form a stack, and the wringing process, which will determine the overall length of the stack, varies with the operator.
d) An established method of measurement and a novel method that promises higher accuracy may give discrepant results, which will be interpreted as revealing a systematic error in the older method.

EXAMPLES

1. The velocity of a wave, v, is written in terms of the frequency, f, and the wavelength, λ, as v = f λ. An ultrasonic wave has f = 40.5 kHz with a standard uncertainty of 0.15 kHz and λ = 0.822 cm with a standard uncertainty of 0.022 cm. Assuming that there is no correlation between errors in f and λ, calculate the velocity of the wave and its standard uncertainty.

2. Exercise D (1): The flow rate of blood, Q, through an aorta is found to be 81.5 cm³/s with a standard uncertainty of 1.5 cm³/s. The cross-sectional area, A, of the aorta is 2.10 cm² with a standard uncertainty of 0.10 cm². Find the flow speed of the blood, v, and the standard uncertainty in the flow speed using the relationship Q = Av.
(2) The velocity, v, of a wave on a stretched string is v = √(F/µ), where F is the tension in the string and µ is the mass per unit length of the string. Given that F = 18.5 N with a standard uncertainty of 0.8 N and µ = 0.053 kg/m with a standard uncertainty of 0.007 kg/m, calculate the velocity of the wave and its standard uncertainty.
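For Example 1, a minimal Python sketch of the propagation (for a product of uncorrelated quantities the fractional standard uncertainties add in quadrature; the numerical answer below is my own working, not quoted from the source) might look like:

    import math

    f, u_f = 40.5e3, 0.15e3          # frequency in Hz and its standard uncertainty
    lam, u_lam = 0.822e-2, 0.022e-2  # wavelength in m and its standard uncertainty

    v = f * lam                      # v = f * lambda
    # For a product of uncorrelated quantities, fractional uncertainties add in quadrature
    u_v = v * math.sqrt((u_f / f) ** 2 + (u_lam / lam) ** 2)

    print(f"v = {v:.0f} m/s, standard uncertainty = {u_v:.0f} m/s")  # about 333 ± 9 m/s

The same quadrature recipe applies to the quotient v = Q/A in Exercise D.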
It used to be the common practice, before the introduction of the GUM, for measurement and testing laboratories to report uncertainties as so-called ‘errors’. It was also common to report separately the random and systematic errors in the measurand. This often created the complication that, in any subsequent use of the report by others, a single number for the uncertainty, though desirable, was not immediately apparent. There was no consensus regarding the measure of uncertainty: whether this should be the standard deviation or a small multiple of this. Instead of the root-sum-square rule, errors and/or uncertainties were often simply summed linearly. This linear sum applies strictly to perfectly positively correlated input quantities, and if there is little or no correlation the linear sum gives a needlessly pessimistic estimate of the uncertainty in the measurand.

Eight main steps to evaluating uncertainty

i. Decide what you need to find out from your measurements. Decide what
actual measurements and calculations are needed to produce the final
result.
ii. Carry out the measurements needed.
iii. Estimate the uncertainty of each input quantity that feeds into the final
result. Express all uncertainties in similar terms.
iv. Decide whether the errors of the input quantities are independent of
each other. If you think not, then some extra calculations or information
are needed.
v. Calculate the result of your measurement (including any known
corrections for things such as calibration).
vi. Find the combined standard uncertainty from all the individual aspects.
vii. Express the uncertainty in terms of a coverage factor, together with the size
of the uncertainty interval, and state a level of confidence.
viii. Write down the measurement result and the uncertainty, and state how
you got both of these (a short numerical sketch of these steps is given below).
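As a minimal numerical sketch of steps iii–viii in Python (the input values are made up purely for illustration, and the coverage factor k = 2 is the common choice for roughly 95 % confidence):

    import math

    # Step iii: standard uncertainties of the input quantities (made-up values, in millimetres)
    u_inputs = [0.05, 0.12]

    # Step v: the measurement result after any known corrections (made-up value, in millimetres)
    result = 20.00

    # Step vi: combined standard uncertainty, root-sum-square (inputs assumed independent, step iv)
    u_c = math.sqrt(sum(u ** 2 for u in u_inputs))

    # Step vii: expanded uncertainty using a coverage factor k = 2 (about 95 % confidence)
    k = 2
    U = k * u_c

    # Step viii: state the result together with its uncertainty and how it was obtained
    print(f"result = {result:.2f} mm ± {U:.2f} mm (k = 2, approx. 95 % confidence)")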
CHAPTER III
(CONCLUSION)

A measurement tells us about a property of something. It might tell us how


heavy an object is, or how hot, or how long it is. A measurement gives a number
to that property. Measurements are always made using an instrument of some
kind. Rulers, stopwatches, weighing scales, and thermometers are all measuring
instruments. Measurement uncertainty is a measure of the spread of measurement results. Uncertainty of measurement is the doubt that exists about the
result of any measurement. You might think that well-made rulers, clocks and
thermometers should be trustworthy, and give the right answers.

Uncertainty is a quantification of the doubt about the measurement result.


Whenever possible we try to correct for any known errors: for example, by
applying corrections from calibration certificates. But any error whose value we
do not know is a source of uncertainty.

Specifications are not uncertainties. A specification tells you what you can expect from a product. It may be very wide-ranging, including ‘non-technical’ qualities of the item, such as its appearance. Accuracy (or rather inaccuracy) is not the same as uncertainty. Unfortunately, usage of these words is often confused. Correctly speaking, ‘accuracy’ is a qualitative term (e.g. you could say that a measurement was ‘accurate’ or ‘not accurate’). Uncertainty is quantitative. When a ‘plus or minus’ figure is quoted, it may be called an uncertainty, but not an accuracy.
REFERENCES

1. Kandil, “Measurement Uncertainty in Material Testing: Differences and Similarities between ISO, CEN and ASTM Approaches”, Guide of Euro Test Solutions Ltd (2009).

2. Liang et al., “A new measure of uncertainty based on knowledge granulation for rough sets”, Information Sciences, Elsevier, 179, 458–47.

3. Dahlberg G., “Materials Testing Machines: investigation of error sources and determination of measurement uncertainty”, EUROLAB International Workshop: Investigation and Verification of Materials Testing Machines, pp. 21–32.

4. Clark J.P., “Evaluation of Methods for Estimating the Uncertainty of Electronic Balance Measurements”.

5. Alkhatib and Kutterer, “Towards an advanced estimation of Measurement Uncertainty using Monte-Carlo Methods: case study kinematic TLS Observation Process”, FIG Working Week 2011: Bridging the Gap between Cultures, Marrakech, Morocco.

6. Gajghate, “Uncertainty estimation in analysis of particulate-bound mercury in different size fractions of PM10 in ambient air”, Accred Qual Assur.

7. BIPM/IEC/IFCC/ISO/OIML/IUPAC, Guide to the Expression of Uncertainty in Measurement, ISBN 92-67-10188-9, 1993–95.

8. JCGM 100:2008, Evaluation of Measurement Data – Guide to the Expression of Uncertainty in Measurement (GUM 1995 with minor corrections).

9. Eurolab Technical Report 1/2002, June 2002, on Measurement Uncertainty in Testing, Eurolab Germany.

10. Mandavgade et al., “Determination of uncertainty in gross calorific value of coal using bomb calorimeter”, International Journal of Measurement Technologies and Instrumentation Engineering, ISSN: 2156-1737, (4), 45–52.

11. Mandavgade et al., “Measurement uncertainty evaluation of automatic Tan Delta and resistivity test set for transformer oil”, International Journal of Metrology and Quality Engineering (IJMQE), ISSN: 2107-6839, EISSN: 2107-6847, Cambridge University journal, volume 3, issue 1, 39–45, DOI: 10.1051/ijmqe/2012004.

12. Mandavgade et al., “Mathematical modeling of effects of various factors on uncertainty of measurement in material testing”, Proceedings of the International Conference on Mechanical Engineering and Technology, DOI: http://dx.doi.org/10.1115/1.859896.paper43.

13. Awachat and Mandavgade, “Comparative Analysis of Measurement Uncertainty Associated With Brinell Hardness Test and Rockwell Hardness Test”, 5th International Conference ICAME-2011.

14. https://www.sciencedirect.com/science/article/pii/0034487779900727, accessed 18 March 2021, 20:55.

15. https://iopscience.iop.org/article/10.1088/0026-1394/43/4/S06, accessed 18 March 2021, 20:00.

16. https://ieeexplore.ieee.org/abstract/document/5152887/, accessed 18 March 2021, 22:10.
