
SU3250

SURVEY MEASUREMENTS AND ADJUSTMENTS

Course Notes

Prepared by

Indrajith D. Wijayratne
Associate Professor of Surveying
Michigan Technological University
Houghton, MI 49931

Copyright © 2002 by Indrajith Wijayratne


SURVEY MEASUREMENTS AND ERRORS

Topics to be covered

• Types of survey measurements
• Types and sources of errors
• Estimating the precision of measurements
• Behavior of random errors
• Controlling random errors
• Rejection of measurements
SURVEY MEASUREMENTS

Angles, distances, elevation differences, gravity, photo coordinates, etc.

Read verniers, micrometers, scales

Sometimes estimation (judgment) is necessary

Some Facts about Measurements

No measurement is exact, as opposed to a count

Repeated measurements do not agree, in general

The more refined the measurement, the more prominent the variation

The necessary degree of refinement depends on the expected precision


MEASUREMENT ERRORS

Error = Measured value - True value

True value can never be known and can only be estimated

Consequently, true error can only be estimated

Residual = Measured value - Estimated true value

Residual is an estimate of the error


Sources of Errors (Chap 1.3)

• Natural
• Personal
• Instrumental

Natural Errors

Caused by changes in the environment in which the measurements are made, e.g. wind, temperature changes, etc.

Can only be reduced by controlling conditions and/or applying corrections, e.g. temperature correction in taping

Personal Errors

Mostly due to the limitations of the observer's skills

Limitations may be personal dexterity (e.g. inability to remove parallax from telescope optics), inexperience (e.g. not applying correct tension in taping), or carelessness (e.g. not centering the instrument correctly over the point), etc.

Instrumental Errors

These are caused by imperfections in the instruments

May be due to construction (e.g. irregular circle graduations in angle measuring instruments) or adjustments (e.g. incorrect prism constant in EDM), etc.
Types of Errors (Chap 1.4)

• Mistakes or blunders
• Systematic errors
• Random errors

Mistakes or Blunders

Cannot be considered as errors


Generally large in magnitude
Are due to carelessness, inexperience or fatigue of the observer
Should be avoided or detected and eliminated

Systematic Errors

Constant or proportional to the measurement

Mostly due to instruments and sometimes due to nature
Generally cumulative
A correction can be computed and applied to the measurement

Random Errors

Unavoidable, generally small

Plus and minus errors are equally probable, i.e. the frequency (probability) of occurrence of plus and minus errors is approximately the same
Small errors occur more frequently than large ones
Very large errors seldom occur
The exact magnitude or sign cannot be determined
Cannot be corrected, but can be controlled by exercising care
Adjustment techniques are required for proper treatment
Systematic Errors in Distance Measurements

Taping

• calibration
• temperature
• tension
• sag
• slope and alignment

EDM

• instrument and reflector constants
• frequency drift
• temperature, humidity etc.
• centering (instr. and refl.)
• error in vert. or zenith angle for slope reduction

Random Errors in Distance Measurements

Taping

• reading
• marking
• random variations in temp., tension, etc.

EDM

• frequency fluctuations
• changes in phase center
• temp., humidity variations
Systematic Errors in Angle Measurements

• plate bubble
• collimation (horiz. and vert.)
• horizontal axis
• index error (vertical or zenith angles)

Random Errors in Angle Measurements

• pointing
• reading
• centering over point (instr. and target)
• bubble centering

Systematic Errors in Differential Leveling

• collimation
• curvature and refraction
• bubble
• rod length
• rod not vertical

Random Errors in Differential Leveling

• rod reading
• bubble centering
• unbalanced sights
Self-Study Problems

List the different errors encountered in each of the following measurements and classify them as to source (i.e. natural, instrumental or personal) and type (i.e. mistake, systematic or random)

(a) Measuring a distance with a steel tape. Assume the distance to be more than 100 feet.
(b) Measuring a distance with an EDM
(c) Measuring an angle with a conventional transit/theodolite
(d) Measuring an angle with an electronic theodolite
(e) Running a level line with a conventional level, i.e. level with
a bubble that needs to be centered before reading the rod
(f) Running a level line with an autolevel
SIGNIFICANT FIGURES (DIGITS)

The number of digits to which a measurement can be made is limited by the equipment used

A reading can be made with certainty to the least count available on the instrument

One extra digit can be estimated with reliability in most cases, e.g. scales, but not verniers

All digits in a measurement, including this estimated digit, are significant

If a measurement is made to a certain number of significant digits, all of them must be recorded exactly, e.g. 34.10, not 34.1 or 34.100

The precision of a measurement, if not stated, is generally considered +/- half the last significant digit

All the significant digits plus at least one extra digit must be used in all computations involving measurements

Final answers must be rounded to the appropriate number of significant digits, because too few indicate false crudeness in the measurements and too many indicate false precision

If intermediate values are rounded, the rules of significant digits must be followed and at least one extra digit must be retained in intermediate values

When using constants such as π, an appropriate number of decimal digits must be used

When using trigonometric functions (e.g. sine, cosine, etc.) with angles, 7 or more decimal places must be used
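As a quick illustration of recording to a fixed number of significant digits, a short Python sketch (the reading and digit count below are made-up examples); the `#` flag of the format mini-language keeps trailing zeros:

```python
# Record a reading to 4 significant digits; the '#' flag keeps the
# trailing zero so 34.10 is not written as 34.1 (values are illustrative).
reading = 34.104
recorded = format(reading, '#.4g')   # -> '34.10'

# An unstated precision is taken as +/- half the last significant digit,
# so a recorded 34.10 implies an uncertainty of +/- 0.005
implied_uncertainty = 0.005
```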
ANALYSIS OF MEASUREMENTS (Chap 1.5 and Chap 2)

• How to identify a good measurement from a bad one

• Precision or accuracy?

• Mean and Standard Deviation

Theoretically, there is only one value, called the true value, for any measurement, but in general there is an infinite number of possible values (the population) due to random errors

We, as observers, can only collect a finite number of measurements, called a sample, which is used to estimate the true value

Even a single measurement is an estimate of the true value of that measurement

However, a single measurement cannot tell how close it is to the true value unless the true value is already known

Information about the quality of measurements can be obtained by repeating the measurement, i.e. collecting a sample

Spread or scatter or variation is an indication of the quality of the set of measurements

In order to use this as a true assessment of the quality of the measurement, all repetitions must be done under the same conditions. This includes instruments and operators as well as environmental conditions

If the individual values of repeated measurements are close to one another, then we say it is a precise set of measurements

Precision is a measure of the degree of refinement in a set of measurements

The achievable precision depends on the quality of instruments, skill of observers, care exercised and the stability of the environment

Precise measurements tend to reveal discrepancies and crude ones tend to hide them

Accuracy is an indication of the closeness of a measurement or a value to the true value

Since the true value of a measurement is unknown, its accuracy is also unknown

A set of repeated measurements does not have any information about its accuracy

A precise measurement is also accurate in the absence of systematic errors

Both accuracy and precision are desirable in any measurement

The only way to assure accuracy is by comparison with previously established standards or values

If a traverse or level line, run between known controls, closes within expected tolerance, then that is indicative of both precision and accuracy, in general.
Measures of Precision

• Range
• Average error
• Maximum error
• Standard deviation
• Probable error

Estimates of the true value or the best value or the most probable value

• Mode
• Mid-range
• Median
• Mean (average)

Example:

Consider 10 measurements

9.5544 11.3642
9.8675 8.8039
9.6379 8.5621
9.8689 9.2755
10.8669 11.1126
Measurements arranged in ascending order:

8.5621 9.8675
8.8039 9.8689
9.2755 10.8669
9.5544 11.1126
9.6379 11.3642

Range = 11.3642 − 8.5621 = 2.8021

Mid-range = (11.3642 + 8.5621)/2 = 9.9632

Median = (9.6379 + 9.8675)/2 = 9.7527

Mean (average) = sum / number of values
x̄ = (Σx)/n = 98.9139/10 = 9.8914

Residual (v) = x − x̄

-0.3370 1.4728
-0.0239 -1.0875
-0.2535 -1.3293
-0.0225 -0.6159
0.9755 1.2212

Note: the order of the residuals is the same as that of the initial measurements

Average error = Σ|v|/n = 7.3391/10 = 0.7339

Maximum error = maximum absolute residual = 1.4728

Standard deviation (s) = √(Σv²/(n−1)) = √(8.1200/(10−1)) = 0.9498

Probable error = 0.6745·s = 0.6407

The mean value of a measurement, although accepted as the best value representing the true value, has its own variation (error)

This is called the standard deviation of the mean and is computed by s/√n (equation 2.8 of text)

In the above example

Sx̄ = 0.9498/√10 = 0.3004
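The statistics of the example above can be reproduced with a short Python sketch (values agree with the notes after rounding; `statistics.stdev` gives 0.9499 where the notes round to 0.9498):

```python
from math import sqrt
from statistics import mean, median, stdev

data = [9.5544, 11.3642, 9.8675, 8.8039, 9.6379,
        8.5621, 9.8689, 9.2755, 10.8669, 11.1126]

n = len(data)
xbar = mean(data)                                  # 9.8914
rng = max(data) - min(data)                        # 2.8021
mid_range = (max(data) + min(data)) / 2            # 9.9632
med = median(data)                                 # 9.7527
residuals = [x - xbar for x in data]
avg_error = sum(abs(v) for v in residuals) / n     # 0.7339
max_error = max(abs(v) for v in residuals)         # 1.4728
s = stdev(data)                                    # ~0.9499
probable_error = 0.6745 * s                        # ~0.6407
s_of_mean = s / sqrt(n)                            # ~0.3004
```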

The error, or uncertainty as it is sometimes called, can be expressed as a ratio of the error to the measurement, e.g. 1:50,000 or 1/50,000 or 20 ppm

This is sometimes called relative precision

Example-1

If a distance measurement is 597.12 feet and the error is 0.03 feet, then the relative precision is

0.03/597.12 = 1/19904 ≈ 1/20000

Example-2

The RMS error of an EDM is expressed as 5 mm + 2 ppm. If the above distance is measured with this EDM, the expected uncertainty of the distance is:

SQRT[(5/304.8)² + ((2/1,000,000)(597.12))²] = 0.016 feet
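Both the relative precision and the 5 mm + 2 ppm EDM uncertainty can be checked numerically (the conversion 1 ft = 304.8 mm is exact):

```python
import math

distance_ft = 597.12
error_ft = 0.03
relative_precision = error_ft / distance_ft        # 1/19904

# EDM spec: 5 mm constant part + 2 ppm proportional part, both in feet
constant_ft = 5 / 304.8                            # 5 mm in feet
proportional_ft = (2 / 1_000_000) * distance_ft    # 2 ppm of the distance
uncertainty_ft = math.sqrt(constant_ft**2 + proportional_ft**2)  # ~0.016 ft
```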

The relative precision of an angle is the same as the angle expressed in radians

This is because the error introduced in the position at the end of a line is equal to the product of the angular error in radians and the length of the line

[Diagram: a line from A to B of length s; an angular error ϑ at A displaces B to B′ by ∆s]

∆s = (ang. error in radians)(length) = ϑ·s
Now,

relative precision = positional error / length = ∆s/s

Since ∆s = ϑ·s,

relative precision = (ϑrad)(s)/(s) = ϑrad = angular error in radians

Example:

If an angle has an error of 5” then the relative error of this angle is

5/206265 = 1/41253
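The 5″ example works out as below (206265 is the usual number of arc-seconds per radian, 180·3600/π):

```python
# Relative precision of an angular error is the error in radians
arc_sec_per_radian = 206265          # ~ 180 * 3600 / pi
angle_error_sec = 5
theta_rad = angle_error_sec / arc_sec_per_radian   # error in radians
denominator = round(1 / theta_rad)                 # relative error ~ 1/41253
```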

It is important to have consistency between distance and angle measurements in surveying

A 20″ transit for angles with an EDM for distances is not a consistent combination, nor is a 1″ theodolite with taping

In a statistical sense, random errors have probabilities associated with them, and therefore an error expressed as an absolute or relative value without its probability does not give much information
Self-study Problems

1. Using the first 25 data values given in Example 2.2, page 27 of the text, determine

i. mode, median and mean
ii. average error, standard deviation, and probable error
iii. standard deviation of the mean

Note: Organize the data, in a table, in ascending or descending order along with their residuals.

2. Comment on the randomness and symmetry of the sample


MORE ON RANDOM ERRORS AND ANALYSIS OF
MEASUREMENTS

• Randomness of measurements and/or residuals

• Statistical distribution of random errors

Recall that random errors have the following properties:

Are equally positive and negative, i.e. the frequency (probability) of occurrence of plus and minus errors is approximately the same

Small errors occur more frequently than large ones

Very large errors seldom occur

How do we know if a set of measurements (or residuals) is truly random?

There are statistical methods to determine if a sample is truly random

If a set of measurements is truly random, then the mode, median and mean are all the same or very close to each other

These are the measures of central tendency


A set of measurements can be analyzed by the following methods

• logical
• graphical
• statistical

Logical

Determine a single probable value, e.g. mode, median, mean

Examine to see if the measurements are symmetrical about this value

Examine the total spread

Graphical (Bar Graph, Frequency Polygon, Histogram)

A graphical representation of the measurements (or residuals) can be used for this purpose

Construction:

• axes
• class interval
• frequency (or relative frequency)
• scale

Bar graph

Vertical lines (bars) are drawn at the center of each class interval

The height of each bar is proportional to the frequency (or relative frequency) in that class

A frequency polygon is obtained by connecting the tops of the bars

A histogram consists of rectangular blocks instead of bars

The boundary of each block is that of the class interval, and the height is proportional to the frequency (or relative frequency)

All the above graphical forms can be used to observe visually

• if the measurements (or residuals) are distributed symmetrically about some central value
• total spread or dispersion
• frequency of occurrence
• steepness or flatness, which is an indication of the precision

This can be used to test the performance of instruments

A histogram is a graph showing the frequency of occurrence of a value or a range of values

The area of a block is proportional to the probability of a value falling in that range

If relative frequency is used, the area of a block is the probability of all the values in that class. This way, the total area will be one, as is the total probability

If we increase the sample size infinitely for a truly random set, the histogram approaches a very symmetrical form

If the histogram was constructed using relative frequency then, as the sample size increases indefinitely, each relative frequency approaches a constant value

This limiting value of the relative frequency is the probability of occurrence of values in that class
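A minimal sketch of the class-interval bookkeeping behind a histogram, using the ten measurements of the earlier example (the four-class split is an arbitrary illustrative choice):

```python
data = [9.5544, 11.3642, 9.8675, 8.8039, 9.6379,
        8.5621, 9.8689, 9.2755, 10.8669, 11.1126]

low, high = min(data), max(data)
n_classes = 4
width = (high - low) / n_classes

freq = [0] * n_classes
for x in data:
    # place each value in its class; the maximum lands in the last class
    i = min(int((x - low) / width), n_classes - 1)
    freq[i] += 1

# relative frequencies sum to 1, matching the total probability
rel_freq = [f / len(data) for f in freq]
```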

Normal Distribution

Random errors usually behave as statistical variables

Truly random variations (errors) follow a known statistical distribution called the normal or Gaussian distribution (bell curve)

The behavior of random errors can be known from the properties of the normal distribution

The normal distribution is defined only for continuous variables, whereas a histogram is a representation of discrete measurements
The probability density function of the normal distribution is given by

y = [1/(σ√2π)] exp{−(x−µ)²/(2σ²)}

where µ and σ are the mean and standard deviation (standard error) of the theoretical population

This function is defined for a continuous random variable x, where −∞ < x < ∞, and therefore we cannot talk about a single value but only a range

The probability of a value falling in a given range dx is the area under the curve within that range, i.e.

p = ∫ y·dx

The total area under the curve and the x-axis is equal to 1, as this represents the total probability

The smaller the standard deviation, the taller and steeper the curve

The curve is symmetrical about the mean, and the points of inflexion, one on each side of the mean, represent the std. deviation

Two parameters, µ and σ, describe a normal distribution function, and therefore there can be an infinite number of normal distributions

It has been found that measurements are best represented by the normal distribution when their variation is due only to random errors

There are several components of random errors compounded together in a measurement

Each component of random error may have a different distribution, but the combined effect (sum) of all the random errors is generally normally distributed; the proof of this is given by the central limit theorem

Computing Probabilities

The probability of occurrence of errors of a certain magnitude can be computed using the normal distribution, or, given the probability, the corresponding value can be calculated

Recall that the normal distribution is defined for a continuous variable, and therefore we can only calculate a range

The probability is the area under the curve and can be determined by integrating the probability density function between the required limits

As this is not convenient, a set of tables has been prepared

Since there is an infinite number of normal distributions, tables cannot be prepared for every one of them

Hence we define a standard normal distribution in which the mean is zero and the standard error is one

Any normally distributed variable can be converted to a standard normal variable by moving the origin of the coordinate system to the mean and changing the units of measurement to the std. error of the distribution, i.e. we define a new random variable

z = (x−µ)/σ

'z' has the same properties as x except that its µ = 0 and σ = 1

Example:

Find the probability of an error less than or equal to the std. error in magnitude,

i.e. z = (x−µ)/σ = ±1

This is the area under the curve between +1 and −1 on the standard normal curve, or

p = (area up to +1) − (area up to −1)
= 0.8413 − 0.1587 (Table D1, pp. 502-503)
= 0.6826 or 68.26%
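The same table lookup can be done with Python's standard normal distribution (the tables give 0.6826 only because of four-place rounding):

```python
from statistics import NormalDist

z = NormalDist()                 # standard normal: mu = 0, sigma = 1
p = z.cdf(1) - z.cdf(-1)         # area between -1 and +1, ~0.6827
```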

Example:

Find the value such that errors equal to or less than it in magnitude have 50% probability

We have to find a value such that 50% or 0.50 of the area is outside this range, i.e. 0.25 on each end of the curve, since it is symmetric

Now find the corresponding 'z' value so that the probability given in the body of the table is either 0.25 or 0.75. By simple interpolation, we find that this is equal to ±0.6745. Noting that 'z' is expressed in units of std. deviations,

x = ±0.6745 σ

Now the value is in the original units of measurement and is usually given as

E50 = 0.6745 σ

Note that the ± sign is generally not written with random errors, but it is implied

We can calculate an error which has any given probability in a similar way; see Table 3.2, p. 45 of the text
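Instead of interpolating in the table, the inverse CDF of the standard normal gives the same 0.6745 factor directly (and any other E-value, e.g. E95):

```python
from statistics import NormalDist

# z value leaving 0.25 in each tail, i.e. 50% of the area in the middle
z50 = NormalDist().inv_cdf(0.75)     # ~0.6745, the E50 factor

# the E95 factor used later as a rejection limit
z95 = NormalDist().inv_cdf(0.975)    # ~1.96
```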

Even though the sample of measurements collected in the field may not have the same std. error as the parent population, this estimate works well in practice

Rejection of Measurements

The use of the normal distribution to analyze measurements presents a way to detect blunders

Although theoretically a random error can be of any magnitude, large errors have a very small probability

If an error is larger than a certain magnitude, then we can reject that measurement. Usually we use E90 or E95 as rejection limits, i.e. 1.645σ or 1.96σ ≈ 2σ

After rejecting the measurements suspected of blunders, the mean of the remaining measurements is accepted as the most probable value (highest probability)
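The rejection procedure can be sketched as an iterative loop (the observation list and the 1.96σ ≈ E95 limit below are illustrative):

```python
from statistics import mean, stdev

def reject_blunders(obs, k=1.96):
    """Iteratively drop the worst value whose residual exceeds k*s (E95)."""
    obs = list(obs)
    while len(obs) > 2:
        xbar, s = mean(obs), stdev(obs)
        worst = max(obs, key=lambda x: abs(x - xbar))
        if abs(worst - xbar) > k * s:
            obs.remove(worst)        # suspected blunder
        else:
            break                    # everything within the rejection limit
    return obs

measurements = [10.01, 10.02, 9.99, 10.00, 9.98, 12.50]  # 12.50 is a blunder
kept = reject_blunders(measurements)
```

After the blunder is removed, the mean of the remaining values is taken as the most probable value.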

Self-study Problems (Use the data given on page 29)

(a) Compute E90, E95 and E99 for this data set. Analyze the data set
to check if the actual number of measurements containing
errors equal to or less than each of the above limits agrees with
theoretical values. Show actual and theoretical values in a table.

(b) Using E95 as a rejection criterion, determine and show which observations, if any, are considered to be outliers (blunders).

(c) Recompute the mean and standard deviation after removing outliers. Check the remaining data for any more outliers at the E99 level. Once all outliers have been removed, compute the mean and the E95 of the newly computed mean.

(d) What would be the expected standard deviation of the mean for a set of 10 repetitions made with the same instrument under the same conditions in the future?
Why Least Squares

If the measurements are truly random, the value that has the highest probability is the mean, which is therefore the most probable value

Also, if the measurements are truly random, the deviations (errors) from the mean are generally small

Conversely, if a single value can be found from a set of measurements such that the absolute deviation of each measurement is the smallest possible, then this value is the most probable value

This is the basis of least squares, which is used for the adjustment of observations

If every measurement in a set of measurements has the same precision, then the mean is the least squares estimate of the quantity measured

If measurements made under different conditions are used for the calculation of the mean, then weights, based on the precision of each measurement, can be applied to the individual measurements
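A weighted mean with weights taken as 1/σᵢ² can be sketched as follows (the values and their standard deviations are made up for illustration):

```python
# Weighted mean: weights 1/sigma^2, so more precise measurements
# pull the result harder (numbers are illustrative).
values = [100.02, 100.05, 99.98]
sigmas = [0.01, 0.02, 0.01]          # assumed precision of each value

weights = [1 / s**2 for s in sigmas]
weighted_mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)
```

The less precise middle value (σ = 0.02) gets a quarter of the weight of the others, so the result stays close to the two precise measurements.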

Unlike in direct measurements, the mean value cannot be computed directly for computed quantities, especially when different types of measurements with different precision are used, e.g. coordinates

Therefore, the calculations are done with the least squares condition imposed
Sampling Distributions

For the calculation of the error limits and probabilities discussed above, we use the std. deviation derived from the sample, which is only an estimate of the std. error of the population

The sample mean and sample standard deviation are considered unbiased estimates of the population parameters only for an infinitely large sample

In practice, however, it is not possible to have a large enough sample, so the sample parameters are not true estimates of the population parameters

For this reason, when the sample mean and variance are used in place of the population mean and variance, slightly different distributions called sampling distributions are used, e.g. for computing confidence intervals and statistical testing

Sample Mean

The mean, computed from a small set of repeated measurements, although better than a single measurement, is not necessarily the same as the population mean

The mean computed from different sets of measurements of the same quantity is also generally different for each set

The mean of a set of independent measurements belongs to a different distribution and has a standard deviation = s/√n, where s is the sample standard deviation and n is the sample size

Note that the standard deviation of the sample mean depends on the sample size

Since the mean value of a sample is subject to variation with a standard deviation of s/√n, it is customary to compute a confidence interval for the population mean, for example 95%, that is, a range in which the population mean falls with 95% probability

For the calculation of a confidence interval for the population mean, a sampling distribution called the 't' distribution is used

When the sampling is done from a normal population with mean = µ and standard error = σ, the random variable defined by

t = (x̄ − µ)/(s/√n)

is said to have a 't' distribution with (n−1) degrees of freedom

This is also a symmetrical distribution, similar to the normal distribution. It depends on the degrees of freedom, which in turn depend on the sample size (n), but it approaches the normal distribution for large values of n
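For the ten measurements of the earlier example (n = 10, x̄ = 9.8914, s ≈ 0.9499), a 95% confidence interval for the population mean can be sketched as below; the value 2.262 is the two-sided 95% t value for 9 degrees of freedom, taken from standard t tables:

```python
from math import sqrt
from statistics import mean, stdev

data = [9.5544, 11.3642, 9.8675, 8.8039, 9.6379,
        8.5621, 9.8689, 9.2755, 10.8669, 11.1126]

n = len(data)
xbar, s = mean(data), stdev(data)
t_95 = 2.262                          # t for 9 d.o.f., 95% two-sided (tables)
half_width = t_95 * s / sqrt(n)       # ~0.679
ci = (xbar - half_width, xbar + half_width)   # ~ (9.212, 10.571)
```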

Sample Variance

The Chi-Square distribution is used to test and compute confidence intervals for the population variance (σ²) using the sample variance (s²), if the sample was collected from a normal distribution

The quantity

χ² = (n−1)s²/σ²

is said to be distributed as a chi-square distribution
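Continuing the same example, a 95% confidence interval for σ² follows from the chi-square quantiles for 9 degrees of freedom (2.700 and 19.023, taken from standard chi-square tables):

```python
from statistics import stdev

data = [9.5544, 11.3642, 9.8675, 8.8039, 9.6379,
        8.5621, 9.8689, 9.2755, 10.8669, 11.1126]

n = len(data)
s2 = stdev(data) ** 2                    # sample variance, ~0.902
chi2_lo, chi2_hi = 2.700, 19.023         # 0.025 and 0.975 quantiles, 9 d.o.f.

# 95% confidence interval for the population variance sigma^2
ci_var = ((n - 1) * s2 / chi2_hi, (n - 1) * s2 / chi2_lo)
```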
The mean value computed from a sample will only be the best estimate of the true value if the measurements are free of systematic errors or biases and blunders

Standard deviation is a measure of precision and is an indication of the repeatability or reproducibility of the measurement; it does not indicate the presence of any systematic errors

Mean Square Error (MSE) is often used as a measure of accuracy, i.e.

MSE = s² + (bias)²

If no bias is present

MSE = s²

RMS (Root Mean Square) error is the square root of the MSE

Another use of this type of analysis is the ability to predict the precision of a set of measurements made later
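MSE and RMS can be illustrated with a small sketch (the precision and bias values here are hypothetical):

```python
from math import sqrt

s = 0.9499        # standard deviation (precision) of a measurement set
bias = 0.25       # hypothetical uncorrected systematic error

mse = s**2 + bias**2      # accuracy measure: variance plus squared bias
rms = sqrt(mse)

# with no bias, MSE reduces to the variance
mse_no_bias = s**2
```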
Summary

The mean and standard error represent the distribution of a random variable

Sample values are estimates of population parameters

These values can be used to obtain information about the population, e.g. the probability of errors equal to or smaller than a certain magnitude

This can be used for the detection of blunders

A confidence level can be attached to the mean value

The uncertainty of similar measurements can be predicted

In all the above, the underlying assumption is that each value in the sample is independent, i.e. one value has no influence on another, or the values are not correlated

It is also assumed that the measurements are made under the same conditions so that they come from the same population, i.e. the measurements have equal chances of having similar errors

Covariance is a term that is used to describe the inter-dependence of two random variables
