
LEAST SQUARES ADJUSTMENT

 The principle of least squares


For a group of equally weighted observations, least squares adjustment (L.S.A.) is based on statistical techniques for obtaining the most probable values of a set of observations or measurements through the fundamental principle of least squares, which states that the most probable value of a measurement or observation is obtained when the sum of the squares of the residuals is minimized, i.e.

∑e² = e₁² + e₂² + e₃² + … + eₙ² = minimum

For a group of non-equally weighted observations (that is, when the measurements were not made with the same precision), the most probable value is obtained when the sum of the weights times their corresponding squared residuals is minimized, i.e.

∑we² = w₁e₁² + w₂e₂² + … + wₙeₙ² = minimum

where the wᵢ are the weights.
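As a quick illustration, both criteria can be evaluated numerically. The sketch below (Python with NumPy; the residual and weight values are hypothetical, chosen only for illustration) computes the unweighted and the weighted sum of squared residuals:

```python
import numpy as np

# Hypothetical residuals of four measurements (illustrative values only)
e = np.array([0.012, -0.008, 0.003, -0.005])
# Hypothetical weights, e.g. inversely proportional to each observation's variance
w = np.array([1.0, 2.0, 4.0, 2.0])

# Equally weighted case: the quantity being minimized is the sum of squared residuals
sum_sq = np.sum(e**2)

# Non-equally weighted case: the quantity being minimized is the weighted sum
weighted_sum_sq = np.sum(w * e**2)

print(sum_sq, weighted_sum_sq)
```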
 Assumptions and conditions for L.S.A
 For L.S.A to be performed on a set of measurements, the following conditions and assumptions
must exist:
1. Mistakes (blunders) and systematic errors have been eliminated, so that only random errors exist
2. The random errors are normally distributed variables, i.e.
a) Smaller errors occur at a higher frequency than larger errors
b) The mean of the errors tends to zero as the number of measurements increases
c) Positive and negative errors occur at equal frequency
3. There exist redundant measurements, i.e. the number of observations/measurements is larger than
the number of unknown parameters being determined (the system is overdetermined)
4. There exist some controls or constraints, i.e. some control points of higher accuracy (known parameters)
5. There exist some form of precision estimates or weight assignment

 Basic observation adjustment methods


All the adjustment methods are equivalent, meaning that they lead to the same
adjustment results regardless of the specific method
The choice of one or another method depends on the type of the functional
model
 There are three basic least-squares adjustment methods, namely:
 the method of observation equations (the method of parameters or the method of
indirect observations),
 the method of condition equations (the method of direct observations)
 the method of mixed or compound equations


 The method of observation equation


In any LS adjustment problem, there must be a minimum number of observations that gives a
solution, i.e., that makes possible the determination of all unknown parameters. This minimum
number is called the parametric degree r
 The necessary and sufficient condition to characterize a problem as an adjustment problem
is that the number of observations n must be greater than the parametric degree r (n > r)
 For a unique solution of unknowns, the number of equations must equal the number of
unknowns. Usually, there are more observations (and hence equations) than unknowns, and
this permits determination of the most probable values for the unknowns based on the
principle of least squares

 The method of observation equation
The method of observation equations relates the measurements (observed quantities) to both the unknown
parameters being determined and the observational errors. One equation is written for each
measurement/observation related to the unknown parameters.
This is the most common least-squares adjustment method, with the number of equations equal
to the number of measurements
The general form of the functional model/observation equation is

y = Ax + e
which in matrix form is

y₁     a₁₁ a₁₂ a₁₃ … a₁ᵤ   x₁     e₁
y₂     a₂₁ a₂₂ a₂₃ … a₂ᵤ   x₂     e₂
y₃  =  a₃₁ a₃₂ a₃₃ … a₃ᵤ   x₃  +  e₃
⋮       ⋮    ⋮    ⋮  …  ⋮     ⋮      ⋮
yₙ     aₙ₁ aₙ₂ aₙ₃ … aₙᵤ   xᵤ     eₙ

where u is the number of unknown parameters being determined and n is the number of observations/measurements


where

y = [y₁ y₂ y₃ … yₙ]ᵀ is the vector of the observations

x = [x₁ x₂ x₃ … xᵤ]ᵀ is the vector of the unknown parameters being determined

e = [e₁ e₂ e₃ … eₙ]ᵀ is the vector of the residuals being minimized
     a₁₁ a₁₂ a₁₃ … a₁ᵤ
     a₂₁ a₂₂ a₂₃ … a₂ᵤ
A =  a₃₁ a₃₂ a₃₃ … a₃ᵤ   is referred to as the design matrix
      ⋮    ⋮    ⋮  …  ⋮
     aₙ₁ aₙ₂ aₙ₃ … aₙᵤ
Under the LS optimization constraint, and without going into proofs, the (u × u) system of
NORMAL EQUATIONS is formed:

AᵀAx = Aᵀy

And the L.S.A. estimate of the unknown parameters x is given by:

x̂ = (AᵀA)⁻¹Aᵀy

Then the estimate of the adjusted observations y, termed ŷ, is given by

ŷ = Ax̂ = A(AᵀA)⁻¹Aᵀy

The estimate of the residuals (corrections to the observations) e, termed ê, is given as:

ê = ŷ − y = A(AᵀA)⁻¹Aᵀy − y
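These estimates can be computed directly with NumPy. A minimal sketch (the function name least_squares_adjust is ours, introduced here for illustration):

```python
import numpy as np

def least_squares_adjust(A, y):
    """Least squares adjustment by the method of observation equations.

    Solves the normal equations A^T A x = A^T y and returns the estimated
    parameters x_hat, the adjusted observations y_hat and the residuals e_hat.
    """
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    N = A.T @ A                          # normal matrix (u x u)
    x_hat = np.linalg.solve(N, A.T @ y)  # safer than explicitly forming the inverse
    y_hat = A @ x_hat                    # adjusted observations
    e_hat = y_hat - y                    # residuals (corrections to the observations)
    return x_hat, y_hat, e_hat
```

In practice np.linalg.solve (or a QR factorization of A) is preferred over explicitly inverting AᵀA, although the result is the same as the textbook formula x̂ = (AᵀA)⁻¹Aᵀy.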

Example 1:
Figure 3 shows six distances between points situated on a straight line that have been measured as shown. Use the observation
equation method of least squares adjustment to determine the distances between points A, B, C and D.
Example 1:
 Solution
 The steps of least-squares adjustment are as follows:
1. Identify control points or constraints, i.e. identify the known points (coordinates, benchmarks (elevations),
bearings, azimuths, distances, angles, etc.)
In this case we do not have control points or constraints
2. Identify the parameters to be determined, i.e. the coordinates, elevations, azimuths, bearings, distances, angles, etc.
to be determined
The parameters to be determined are the three distances between points A, B, C & D, i.e. let us say D₁,
D₂ and D₃, or call them x₁, x₂ and x₃
3. Ensure that there are redundant measurements (ensure there is some degree of freedom), i.e. the number of
measurements is greater than the number of parameters being determined
Note that n = 6 (number of observations) and u = 3 (parameters to be estimated), hence there exist some
degrees of freedom:

degrees of freedom f = no. of observations (n) − no. of unknown parameters (u)
f = 6 − 3 = 3
4. Form the observation equations for the least squares solution.
Let the unknown parameters be x₁, x₂ and x₃; our six observation equations are then, in two forms:

D₁ = x₁ + e₁           = 1x₁ + 0x₂ + 0x₃ + e₁
D₂ = x₂ + e₂           = 0x₁ + 1x₂ + 0x₃ + e₂
D₃ = x₃ + e₃           = 0x₁ + 0x₂ + 1x₃ + e₃
D₄ = x₁ + x₂ + e₄      = 1x₁ + 1x₂ + 0x₃ + e₄
D₅ = x₁ + x₂ + x₃ + e₅ = 1x₁ + 1x₂ + 1x₃ + e₅
D₆ = x₂ + x₃ + e₆      = 0x₁ + 1x₂ + 1x₃ + e₆

where D₁, …, D₆ are the observations and x₁, x₂ and x₃ are the unknown parameters being determined
Then in the matrix form y = Ax + e we get:

3.17     1 0 0          e₁
1.12     0 1 0   x₁     e₂
2.25  =  0 0 1   x₂  +  e₃
4.31     1 1 0   x₃     e₄
6.51     1 1 1          e₅
3.36     0 1 1          e₆

5. Obtain the normal equations that will lead to the least squares solution
The normal equations (AᵀAx = Aᵀy) are formed with

      1 0 0 1 1 0
Aᵀ =  0 1 0 1 1 1
      0 0 1 0 1 1

together with A and y above.
Giving the NORMAL EQUATIONS (AᵀAx = Aᵀy) as:

3 2 1   x₁     13.99
2 4 2   x₂  =  15.30
1 2 3   x₃     12.12

6. Solve normal equations
The least squares solution x̂ = (AᵀA)⁻¹Aᵀy gives

x₁     3.1700     3.170
x₂  =  1.1225  ≈  1.123
x₃     2.2350     2.235

7. Analyze results (adjusted observations/measurements)

          1 0 0              3.1700
          0 1 0   3.1700     1.1225
ŷ = Ax̂ =  0 0 1 · 1.1225  =  2.2350
          1 1 0   2.2350     4.2925
          1 1 1              6.5275
          0 1 1              3.3575

            3.1700     3.17     +0.0000
            1.1225     1.12     +0.0025
e = ŷ − y = 2.2350  −  2.25  =  −0.0150
            4.2925     4.31     −0.0175
            6.5275     6.51     +0.0175
            3.3575     3.36     −0.0025

σ = √(eᵀe) = √(0.0000² + 0.0025² + 0.0150² + 0.0175² + 0.0175² + 0.0025²) = ±0.0292
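The whole of Example 1 can be checked numerically. A short NumPy sketch reproducing the computation above:

```python
import numpy as np

# Design matrix A and observed distances y from Example 1
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
y = np.array([3.17, 1.12, 2.25, 4.31, 6.51, 3.36])

N = A.T @ A                          # normal matrix [[3,2,1],[2,4,2],[1,2,3]]
x_hat = np.linalg.solve(N, A.T @ y)  # estimated distances, ≈ [3.1700, 1.1225, 2.2350]
y_hat = A @ x_hat                    # adjusted observations
e_hat = y_hat - y                    # residuals
sigma = np.sqrt(e_hat @ e_hat)       # ≈ 0.0292, as on the slide
```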


Example 2:
In the levelling network shown in Figure 1 below, six independent height differences were
measured using spirit levelling, starting from a benchmark Q of known height. The measured height differences are:

from Q to A: ∆h_QA = 0.905 m     from C to Q: ∆h_CQ = 5.864 m
from A to B: ∆h_AB = 1.675 m     from Q to B: ∆h_QB = 2.578 m
from C to B: ∆h_CB = 8.445 m     from C to A: ∆h_CA = 6.765 m
 Solution
The steps of least-squares adjustment are as follows:
1. Identify control points or constraints, i.e. identify the known points (coordinates, benchmarks (elevations), bearings, azimuths,
distances, angles, etc.)
The control point or constraint is the given height of point Q, i.e. H_Q = 1502.5 m
2. Identify the parameters to be determined, i.e. the coordinates, elevations, azimuths, bearings, distances, angles, etc. to be
determined
The parameters to be determined are the three heights of points A, B & C, i.e. let us say H_A, H_B and H_C, or call
them x₁, x₂ and x₃
3. Ensure that there are redundant measurements (ensure there is some degree of freedom), i.e. the number of measurements is
greater than the number of parameters being determined
Note that n = 6 (number of observations) and u = 3 (parameters to be estimated), hence there exist some
degrees of freedom:

degrees of freedom f = no. of observations (n) − no. of unknown parameters (u)
f = 6 − 3 = 3
4. Form the observation equations for the least squares solution.
There are 6 observations and 3 unknown parameters H_A, H_B and H_C, hence 3 degrees of freedom.
The observation equations are in the two forms below:

∆h_QA = H_A − H_Q + e₁  →  H_Q + ∆h_QA = H_A + e₁   →  H_Q + ∆h_QA = 1H_A + 0H_B + 0H_C + e₁
∆h_AB = H_B − H_A + e₂  →  ∆h_AB = H_B − H_A + e₂   →  ∆h_AB = −1H_A + 1H_B + 0H_C + e₂
∆h_CB = H_B − H_C + e₃  →  ∆h_CB = H_B − H_C + e₃   →  ∆h_CB = 0H_A + 1H_B − 1H_C + e₃
∆h_CQ = H_Q − H_C + e₄  →  ∆h_CQ − H_Q = −H_C + e₄  →  ∆h_CQ − H_Q = 0H_A + 0H_B − 1H_C + e₄
∆h_QB = H_B − H_Q + e₅  →  H_Q + ∆h_QB = H_B + e₅   →  H_Q + ∆h_QB = 0H_A + 1H_B + 0H_C + e₅
∆h_CA = H_A − H_C + e₆  →  ∆h_CA = H_A − H_C + e₆   →  ∆h_CA = 1H_A + 0H_B − 1H_C + e₆

where ∆h_QA, ∆h_AB, ∆h_CB, ∆h_CQ, ∆h_QB and ∆h_CA are the observations and H_A, H_B and H_C are the unknown
parameters being determined
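The design matrix of a levelling network can be built mechanically from the list of observed height differences: each row gets +1 in the column of the "to" station and −1 in the column of the "from" station, and any known height (here H_Q) is moved over to the observation side. A sketch under those rules (the dictionary/list layout is our own choice, not from the text):

```python
import numpy as np

# Column index of each unknown height in x = [H_A, H_B, H_C]
unknowns = {"A": 0, "B": 1, "C": 2}
H_Q = 1502.5  # known benchmark height of Q (the constraint)

# Each observation: (from_station, to_station, measured height difference dh)
obs = [("Q", "A", 0.905), ("A", "B", 1.675), ("C", "B", 8.445),
       ("C", "Q", 5.864), ("Q", "B", 2.578), ("C", "A", 6.765)]

A = np.zeros((len(obs), len(unknowns)))
y = np.zeros(len(obs))
for i, (frm, to, dh) in enumerate(obs):
    y[i] = dh                      # model: dh = H_to - H_from + e
    if to in unknowns:
        A[i, unknowns[to]] = 1.0   # unknown "to" height stays on the right
    else:
        y[i] -= H_Q                # known "to" height moves left: dh - H_Q
    if frm in unknowns:
        A[i, unknowns[frm]] = -1.0
    else:
        y[i] += H_Q                # known "from" height moves left: H_Q + dh
```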
Then in the matrix form y = Ax + e we get:

H_Q + ∆h_QA     1503.405      1  0  0          e₁
∆h_AB              1.675     −1  1  0   H_A    e₂
∆h_CB        =     8.445  =   0  1 −1   H_B  + e₃
∆h_CQ − H_Q    −1496.636      0  0 −1   H_C    e₄
H_Q + ∆h_QB     1505.078      0  1  0          e₅
∆h_CA              6.765      1  0 −1          e₆

5. Obtain the normal equations that will lead to the least squares solution
The NORMAL EQUATIONS (AᵀAx = Aᵀy) are formed with

      1 −1  0  0  0  1
Aᵀ =  0  1  1  0  1  0
      0  0 −1 −1  0 −1

together with A and y above.
Giving the NORMAL EQUATIONS (AᵀAx = Aᵀy) as:

 3 −1 −1   H_A     1508.495
−1  3 −1   H_B  =  1515.198
−1 −1  3   H_C     1481.426

6. Solve normal equations
The least squares solution x̂ = (AᵀA)⁻¹Aᵀy gives

H_A      3 −1 −1 ⁻¹  1508.495     1503.4035     1503.404
H_B  =  −1  3 −1     1515.198  =  1505.0792  ≈  1505.079
H_C     −1 −1  3     1481.426     1496.6363     1496.636

7. Analyze results

          1503.4035                −0.0015
             1.6757                +0.0007
ŷ = Ax̂ =     8.4430 ,  e = ŷ − y =  −0.0020 ,  σ = √(eᵀe) = ±0.0037
         −1496.6363                −0.0003
          1505.0792                +0.0012
             6.7672                +0.0022
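As with Example 1, the result can be verified numerically with a short NumPy sketch:

```python
import numpy as np

# Design matrix A and right-hand side y from Example 2
A = np.array([[ 1, 0,  0],
              [-1, 1,  0],
              [ 0, 1, -1],
              [ 0, 0, -1],
              [ 0, 1,  0],
              [ 1, 0, -1]], dtype=float)
y = np.array([1503.405, 1.675, 8.445, -1496.636, 1505.078, 6.765])

N = A.T @ A                          # normal matrix [[3,-1,-1],[-1,3,-1],[-1,-1,3]]
x_hat = np.linalg.solve(N, A.T @ y)  # [H_A, H_B, H_C] ≈ [1503.4035, 1505.0792, 1496.6363]
e_hat = A @ x_hat - y                # residuals
sigma = np.sqrt(e_hat @ e_hat)       # ≈ 0.0037, as on the slide
```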
