
University of Bahrain

Civil Engineering Department


Adjustment Computations
CENG 523
Principles of Least Squares
Introduction
• In surveying, we often have geometric
constraints for our measurements
– Differential leveling loop error of closure = 0
– Sum of interior angles of a polygon = (n-2)*180°
– Closed traverse: Σ latitudes = Σ departures = 0
• Because of measurement random errors, these
constraints are generally not met exactly, so an
adjustment should be performed
Random Error Adjustment
• We assume (hope?) that our measurements are
blunder-free and that all systematic errors have
been removed. Therefore, only random errors
remain.
• We mentioned before that:
– random errors conform to the laws of probability
and follow a normal distribution pattern. Therefore,
the measurements should be adjusted accordingly.
Definition of a Residual
If M represents the most probable value of a measured quantity,
and zi represents the ith measurement, then the ith residual, vi is:
vi = M – zi
The normal distribution function of any random variable x (with mean μ and standard deviation σ) is:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma^2}$$

The normal distribution function of residuals v (which have zero mean) is:

$$f(v) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-v^2/2\sigma^2}$$

It can be shown that to maximize the joint probability of a set of residuals, the sum of the vi² must be minimized. (Book, Ch. 11, eq. 11.1 to 11.9.)
Fundamental Principle of Least Squares
In order to obtain most probable values (MPVs), the sum
of squares of the residuals must be minimized. (See book
for derivation.) In the weighted case, the weighted squares
of the residuals must be minimized.

$$\sum v^2 = v_1^2 + v_2^2 + v_3^2 + \cdots + v_n^2 \rightarrow \text{minimum}$$

$$\sum wv^2 = w_1v_1^2 + w_2v_2^2 + w_3v_3^2 + \cdots + w_nv_n^2 \rightarrow \text{minimum}$$

Technically the weighted form shown assumes that the measurements are independent, but the general case involving covariance can also be handled.
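As a concrete illustration (my own sketch with made-up numbers, not from the book): for repeated measurements of a single quantity, the value minimizing Σv² is the arithmetic mean, and the value minimizing Σwv² is the weighted mean.

```python
# Illustration only (hypothetical measurements and weights): minimizing
# the sum of squared residuals for repeated measurements of one quantity
# yields the mean; the weighted form yields the weighted mean.
import numpy as np

z = np.array([42.35, 42.38, 42.33, 42.36])  # repeated measurements
w = np.array([1.0, 2.0, 1.0, 4.0])          # assumed weights

M = np.linspace(42.30, 42.40, 100001)       # candidate values for the MPV
ssr = ((M[:, None] - z) ** 2).sum(axis=1)        # sum of v^2 for each M
wssr = (w * (M[:, None] - z) ** 2).sum(axis=1)   # sum of w * v^2

print(M[ssr.argmin()], z.mean())                  # both ~42.3550
print(M[wssr.argmin()], (w * z).sum() / w.sum())  # both ~42.3600
```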
Mathematical Model
• The mathematical model is a set of one or more
equations that define an adjustment condition
• Examples are the constraints mentioned earlier
• Models also include collinearity equations in
photogrammetry and the equation of a line in linear
regression
• It is important that the model properly represents reality
– for example, the angles of a plane triangle should total
180°, but if the triangle is large, spherical excess causes a
systematic error, so a more elaborate model is needed.
Types of Mathematical Models
Conditional and Parametric
• A conditional model enforces geometric conditions on
the measurements and their residuals
• A parametric model expresses equations in terms of
unknowns that were not directly measured, but relate to
the measurements (e.g. a distance expressed by
coordinate inverse)
• Parametric models are more commonly used because it
can be difficult to express all of the conditions in a
complicated measurement network
Observation Equations
• Observation equations are written for the
parametric model
• One equation is written for each observation
• Each equation sets a function of unknown
variables (such as coordinates) equal to a
measurement plus a residual
• We want more measurements than unknowns,
which gives a redundant adjustment
Elementary Example
Consider the following three equations involving two unknowns. If
Equations (1) and (2) are solved, x = 1.5 and y = 1.5. However, if
Equations (2) and (3) are solved, x = 1.3 and y = 1.1 and if
Equations (1) and (3) are solved, x = 1.6 and y = 1.4.

(1) x + y = 3.0
(2) 2x – y = 1.5
(3) x – y = 0.2
If we consider the right side terms to be measurements, they have
errors and residual terms must be included for consistency.
Example - Continued
x + y – 3.0 = v1
2x – y – 1.5 = v2
x – y – 0.2 = v3
To find the MPVs for x and y we use a least squares solution by
minimizing the sum of squares of residuals.

$$f(x, y) = \sum v^2 = (x + y - 3.0)^2 + (2x - y - 1.5)^2 + (x - y - 0.2)^2$$
Example - Continued
To minimize, we take partial derivatives with respect to each of
the variables and set them equal to zero. Then solve the two
equations.
$$\frac{\partial f}{\partial x} = 2(x + y - 3.0) + 2(2x - y - 1.5)(2) + 2(x - y - 0.2) = 0$$

$$\frac{\partial f}{\partial y} = 2(x + y - 3.0) + 2(2x - y - 1.5)(-1) + 2(x - y - 0.2)(-1) = 0$$

These equations simplify to the following normal equations.


6x – 2y = 6.2
-2x + 3y = 1.3
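The differentiation can be cross-checked symbolically; a small sketch using sympy (my own illustration, not part of the slides):

```python
# Symbolic check that the partial derivatives reduce to the normal
# equations 6x - 2y = 6.2 and -2x + 3y = 1.3.
import sympy as sp

x, y = sp.symbols('x y')
f = (x + y - 3.0)**2 + (2*x - y - 1.5)**2 + (x - y - 0.2)**2

fx = sp.expand(sp.diff(f, x))  # 12.0*x - 4.0*y - 12.4  ->  6x - 2y = 6.2
fy = sp.expand(sp.diff(f, y))  # -4.0*x + 6.0*y - 2.6   -> -2x + 3y = 1.3
print(fx, fy)
print(sp.solve([fx, fy], [x, y]))  # {x: 1.514..., y: 1.442...}
```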
Example - Continued
Solve by matrix methods.

 6  2  x  6.2
 2 3   y   1.3 
    
 x  1 3 2 6.2 1.514
 y   14 2 6 1.3   1.443
      
We can also compute residuals:
v1 = 1.514 + 1.443 – 3.0 = -0.043
v2 = 2(1.514) – 1.443 – 1.5 = 0.086
v3 = 1.514 – 1.443 – 0.2 = -0.129
Systematic Formation of Normal Equations
Resultant Equations
Following the derivation in the book, observation equations of the form ax + by = l + v lead to the normal equations:

$$\left(\sum a^2\right)x + \left(\sum ab\right)y = \sum al$$

$$\left(\sum ab\right)x + \left(\sum b^2\right)y = \sum bl$$
Example – Systematic Approach
Now let’s try the systematic approach to the example.
(1) x + y = 3.0 + v1
(2) 2x – y = 1.5 + v2
(3) x – y = 0.2 + v3
Create a table:
a     b     l     a²    ab    b²    al     bl
1     1    3.0    1     1     1    3.0    3.0
2    -1    1.5    4    -2     1    3.0   -1.5
1    -1    0.2    1    -1     1    0.2   -0.2
                 Σ=6   Σ=-2   Σ=3  Σ=6.2  Σ=1.3
Note that this yields the same normal equations.
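The column sums map directly to code; a minimal numpy sketch of the tabular formation:

```python
# The normal-equation coefficients are just the column sums of
# a^2, ab, b^2, a*l, and b*l from the table above.
import numpy as np

a = np.array([1.0, 2.0, 1.0])    # coefficients of x
b = np.array([1.0, -1.0, -1.0])  # coefficients of y
l = np.array([3.0, 1.5, 0.2])    # measured right-hand sides

N = np.array([[(a*a).sum(), (a*b).sum()],
              [(a*b).sum(), (b*b).sum()]])  # [[6, -2], [-2, 3]]
rhs = np.array([(a*l).sum(), (b*l).sum()])  # [6.2, 1.3]
print(N, rhs)
```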
Matrix Method
Matrix form for linear observation equations:
AX = L + V
Where:

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \quad X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \quad L = \begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_m \end{bmatrix} \quad V = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_m \end{bmatrix}$$

Note: m is the number of observations and n is the number of unknowns. For a redundant solution, m > n.
Least Squares Solution
Applying the condition of minimizing the sum of squared residuals gives the normal equations:

$$A^T A X = A^T L \quad \text{or} \quad NX = A^T L$$

The solution is:

$$X = (A^T A)^{-1} A^T L = N^{-1} A^T L$$

and residuals are computed from:

$$V = AX - L$$
Example – Matrix Approach
1 1  3.0  v1 
   x    
AX  2  1    1.5  v2  L  V
  y    
1  1 0.2 v3 
1 1 
1 2 1     x   6  2  x 
A AX  
T
 2 1       y
1  1  1    y
    2 3  
1  1
3.0
1 2 1    6.2
A L
T
 1.5   
1  1  1   1.3 
0.2
Matrix Form with Weights

Weighted linear observation equations:

WAX = WL + WV
Normal equations:

ATWAX = NX = ATWL
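The weighted counterpart in numpy (a sketch; the weights below are hypothetical, chosen only to show the pattern):

```python
# Weighted least squares: X = (A^T W A)^{-1} A^T W L,
# with W a diagonal weight matrix here.
import numpy as np

A = np.array([[1.0, 1.0], [2.0, -1.0], [1.0, -1.0]])
L = np.array([3.0, 1.5, 0.2])
W = np.diag([1.0, 2.0, 0.5])  # hypothetical weights

X = np.linalg.solve(A.T @ W @ A, A.T @ W @ L)
print(X)
```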
Matrix Form – Nonlinear System
We use a Taylor series approximation. We will need the Jacobian
matrix and a set of initial approximations.
The observation equations are:
JX = K + V
Where: J is the Jacobian matrix (partial derivatives)
X contains corrections for the approximations
K has observed minus computed values
V has the residuals
The least squares solution is: $X = (J^T J)^{-1} J^T K = N^{-1} J^T K$
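A generic iteration skeleton (my own sketch; the function names are mine, not the book's):

```python
# Gauss-Newton style iteration: repeatedly solve for corrections
# X = (J^T J)^{-1} J^T K until they become negligible.
# f(x) computes model values, jac(x) the Jacobian, l the observations.
import numpy as np

def gauss_newton(f, jac, l, x0, tol=1e-8, max_iter=50):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        K = l - f(x)                            # observed minus computed
        J = jac(x)
        dx = np.linalg.solve(J.T @ J, J.T @ K)  # corrections
        x += dx
        if np.abs(dx).max() < tol:              # negligible corrections
            break
    return x
```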
Weighted Form – Nonlinear System
The observation equations are:

$$WJX = WK + WV$$

The least squares solution is:

$$X = (J^T W J)^{-1} J^T W K = N^{-1} J^T W K$$
Example 2
Determine the least squares solution for the following:

$$F(x, y) = x + y - 2y^2 = -4$$
$$G(x, y) = x^2 + y^2 = 8$$
$$H(x, y) = 3x^2 - y^2 = 7.7$$
Use x0 = 2, and y0 = 2 for initial approximations.
Example 2 - Continued
Take partial derivatives and form the Jacobian matrix.

F G H
1  2x  6x
x x x
F G H
 1 4 y  2y  2 y
y y y

 1 1  4 y0   1  7 
J  2 x0 2 y0    4 4 
   
6 x0  2 y0  12  4
Example 2 - Continued
Form the K matrix and set up the least squares solution.

$$K = \begin{bmatrix} -4 - F(x_0, y_0) \\ 8 - G(x_0, y_0) \\ 7.7 - H(x_0, y_0) \end{bmatrix} = \begin{bmatrix} -4 - (-4) \\ 8 - 8 \\ 7.7 - 8 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -0.3 \end{bmatrix}$$

$$J^T J = \begin{bmatrix} 1 & 4 & 12 \\ -7 & 4 & -4 \end{bmatrix} \begin{bmatrix} 1 & -7 \\ 4 & 4 \\ 12 & -4 \end{bmatrix} = \begin{bmatrix} 161 & -39 \\ -39 & 81 \end{bmatrix}$$

$$J^T K = \begin{bmatrix} 1 & 4 & 12 \\ -7 & 4 & -4 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ -0.3 \end{bmatrix} = \begin{bmatrix} -3.6 \\ 1.2 \end{bmatrix}$$
Example 2 - Continued

$$X = \begin{bmatrix} 161 & -39 \\ -39 & 81 \end{bmatrix}^{-1} \begin{bmatrix} -3.6 \\ 1.2 \end{bmatrix} = \begin{bmatrix} -0.02125 \\ 0.00458 \end{bmatrix}$$

Add the corrections to get new approximations and repeat:

x0 = 2.00 – 0.02125 = 1.97875   y0 = 2.00 + 0.00458 = 2.00458

$$X = \begin{bmatrix} 157.61806 & -38.75082 \\ -38.75082 & 81.40354 \end{bmatrix}^{-1} \begin{bmatrix} -0.01710 \\ 0.00304 \end{bmatrix} = \begin{bmatrix} -0.00011 \\ -0.00002 \end{bmatrix}$$

Add the new corrections to get better approximations:

x0 = 1.97875 – 0.00011 = 1.97864   y0 = 2.00458 – 0.00002 = 2.00456

Further iterations give negligible corrections, so the final solution is:

x = 1.98   y = 2.00
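Running Example 2 through this iteration numerically (a check of the hand computation; the helper names are mine):

```python
# Gauss-Newton iteration for Example 2, starting from (2, 2).
import numpy as np

def f(p):
    x, y = p
    return np.array([x + y - 2*y**2, x**2 + y**2, 3*x**2 - y**2])

def jac(p):
    x, y = p
    return np.array([[1.0, 1.0 - 4*y],
                     [2*x, 2*y],
                     [6*x, -2*y]])

l = np.array([-4.0, 8.0, 7.7])
x = np.array([2.0, 2.0])
for _ in range(10):
    J = jac(x)
    x = x + np.linalg.solve(J.T @ J, J.T @ (l - f(x)))
print(x)  # converges to about [1.9786, 2.0046]
```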
Linear Regression
Fitting x,y data points to a straight line: y = mx + b
Observation Equations
$$y_A + v_{y_A} = mx_A + b$$
$$y_B + v_{y_B} = mx_B + b$$
$$y_C + v_{y_C} = mx_C + b$$
$$y_D + v_{y_D} = mx_D + b$$

In matrix form: AX = L + V

$$\begin{bmatrix} x_A & 1 \\ x_B & 1 \\ x_C & 1 \\ x_D & 1 \end{bmatrix} \begin{bmatrix} m \\ b \end{bmatrix} = \begin{bmatrix} y_A \\ y_B \\ y_C \\ y_D \end{bmatrix} + \begin{bmatrix} v_{y_A} \\ v_{y_B} \\ v_{y_C} \\ v_{y_D} \end{bmatrix}$$
Example 3
Fit a straight line to the points in the table. Compute m and b by least squares.

point    x      y
  A    3.00   4.50
  B    4.25   4.25
  C    5.50   5.50
  D    8.00   5.50

In matrix form:

$$\begin{bmatrix} 3.00 & 1 \\ 4.25 & 1 \\ 5.50 & 1 \\ 8.00 & 1 \end{bmatrix} \begin{bmatrix} m \\ b \end{bmatrix} = \begin{bmatrix} 4.50 \\ 4.25 \\ 5.50 \\ 5.50 \end{bmatrix} + \begin{bmatrix} v_A \\ v_B \\ v_C \\ v_D \end{bmatrix}$$
Example 3 - Continued

$$X = \begin{bmatrix} m \\ b \end{bmatrix} = (A^T A)^{-1}(A^T L) = \begin{bmatrix} 121.3125 & 20.7500 \\ 20.7500 & 4.0000 \end{bmatrix}^{-1} \begin{bmatrix} 105.8125 \\ 19.7500 \end{bmatrix} = \begin{bmatrix} 0.246 \\ 3.663 \end{bmatrix}$$

$$V = AX - L = \begin{bmatrix} 3.00 & 1 \\ 4.25 & 1 \\ 5.50 & 1 \\ 8.00 & 1 \end{bmatrix} \begin{bmatrix} 0.246 \\ 3.663 \end{bmatrix} - \begin{bmatrix} 4.50 \\ 4.25 \\ 5.50 \\ 5.50 \end{bmatrix} = \begin{bmatrix} -0.10 \\ 0.46 \\ -0.48 \\ 0.13 \end{bmatrix}$$
Standard Deviation of Unit Weight

$$S_0 = \sqrt{\frac{\sum v^2}{m - n}} = \sqrt{\frac{0.47}{4 - 2}} = 0.48$$

Where: m is the number of observations and n is the number of unknowns.

Question: What about the x-values? Are they observations?
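Example 3 end to end in numpy (my sketch), treating only the y-values as observations, as the observation equations above do:

```python
# Straight-line fit and standard deviation of unit weight for Example 3.
import numpy as np

x = np.array([3.00, 4.25, 5.50, 8.00])
y = np.array([4.50, 4.25, 5.50, 5.50])

A = np.column_stack([x, np.ones_like(x)])
X = np.linalg.solve(A.T @ A, A.T @ y)  # [m, b] = [0.246, 3.663]
V = A @ X - y                          # [-0.10, 0.46, -0.48, 0.13]
S0 = np.sqrt((V @ V) / (len(y) - 2))   # ~0.485 (0.48 after the rounding above)
print(X, V, S0)
```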
Fitting a Parabola to a Set of Points

Equation: $Ax^2 + Bx + C = y$
This is still a linear problem in terms of the
unknowns A, B, and C.

Need more than 3 points for a redundant solution.


Example - Parabola
Fit a parabola to the points (0, 103.84), (1, 105.43), (2, 104.77), (3, 102.21), (4, 98.43), (5, 93.41).
Parabola Fit Solution - 1
Set up matrices for observation equations

0 2 0 1
 2  103.84
1 1 1 105.43
2 2 2 1  
A 2  104.77 
3 3 1 L
4 2 102.21
4 1  
 2   98.43 
5 5 1  
 93. 41 
Parabola Fit Solution - 2
Solve by unweighted least squares solution
1
979 225 55 5354.53   0.813 
x  ( AT A) 1 AT L  225 55 15  1482.37    1.902 
     
 55 15 6   608.09  104.046

 0.206 
  0.295
Compute  
 0.172
residuals V  AX  L  
0.225 
 
 0.216 
 
  0. 180 
Condition Equations
• Establish all independent, redundant
conditions
• Residual terms are treated as unknowns in the
problem
• Method is suitable for “simple” problems
where there is only one condition (e.g. interior
angles of a polygon, horizon closure)
Condition Equation Example

Note that the angle with the smallest standard deviation receives the smallest residual, and the angle with the largest standard deviation receives the largest residual.
Example Using Observation Equations
The same three angles closing the horizon are adjusted: 134°38'56" (σ = ±6.7"), 83°17'35" (σ = ±9.9"), and 142°03'14" (σ = ±4.3").
Observation Example - Continued

$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & -1 \end{bmatrix} \qquad L = \begin{bmatrix} 134^\circ 38'56'' \\ 83^\circ 17'35'' \\ 142^\circ 03'14'' - 360^\circ \end{bmatrix} \qquad W = \begin{bmatrix} 1/6.7^2 & 0 & 0 \\ 0 & 1/9.9^2 & 0 \\ 0 & 0 & 1/4.3^2 \end{bmatrix}$$

$$A^T W A = \begin{bmatrix} 0.07636 & 0.05408 \\ 0.05408 & 0.06429 \end{bmatrix} \qquad A^T W L = \begin{bmatrix} 14.7867721 \\ 12.6370848 \end{bmatrix}$$
Observation Example - Continued

$$X = (A^T W A)^{-1}(A^T W L) = \begin{bmatrix} 134^\circ 39'00.2'' \\ 83^\circ 17'44.1'' \end{bmatrix}$$

$$a_3 = 360^\circ - 134^\circ 39'00.2'' - 83^\circ 17'44.1'' = 142^\circ 03'15.7''$$

Note that the answer is the same as that obtained with condition equations.
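A numpy check of the weighted angle adjustment (my sketch; it works in decimal degrees and reports residuals in seconds):

```python
# Weighted adjustment of three angles closing the horizon.
# Unknowns are angles 1 and 2; angle 3 = 360 - x1 - x2.
import numpy as np

a = np.array([134 + 38/60 + 56/3600,   # measured angles, decimal degrees
              83 + 17/60 + 35/3600,
              142 + 3/60 + 14/3600])
sigma = np.array([6.7, 9.9, 4.3])      # standard deviations (arc-seconds)

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
L = np.array([a[0], a[1], a[2] - 360.0])
W = np.diag(1.0 / sigma**2)

X = np.linalg.solve(A.T @ W @ A, A.T @ W @ L)
V = (A @ X - L) * 3600                 # residuals in arc-seconds
print(V)                               # ~[4.2, 9.1, 1.7]
```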
Simple Method for Angular Closure
Given a set of angles and associated variances and a
misclosure, C, residuals can be computed by the following:

C i2
vi  n

 i
 2

i 1
Angular Closure – Simple Method
$$\sum_{i=1}^{3} \sigma_i^2 = 6.7^2 + 9.9^2 + 4.3^2 = 161.39$$

$$v_1 = \frac{15''(6.7)^2}{161.39} = 4.2''$$

$$v_2 = \frac{15''(9.9)^2}{161.39} = 9.1''$$

$$v_3 = \frac{15''(4.3)^2}{161.39} = 1.7''$$
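The proportional distribution takes only a few lines of numpy (my sketch):

```python
# Distribute the misclosure in proportion to each angle's variance.
import numpy as np

C = 15.0                          # misclosure in arc-seconds
sigma = np.array([6.7, 9.9, 4.3])

v = C * sigma**2 / (sigma**2).sum()
print(v)                          # [4.2, 9.1, 1.7]
```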
