
STATISTICAL ADJUSTMENT AND ANALYSIS OF DATA

(With Applications in Geodetic Surveying and Photogrammetry)

Olubodun O. Ayeni, Ph.D.


Professor of Surveying,
Department of Surveying,
University of Lagos,
Lagos, Nigeria.
CONTENTS
Preface..........................................................................................
Acknowledgements...........................................................................
PART I...............................................................................................

CHAPTER 1 INTRODUCTION
1.1 Principles of Least Squares method.............................................
1.2 Brief Historical Developments.....................................................
Exercise 1....................................................................................

CHAPTER 2 MATRICES
2.1 Definitions...................................................................................
2.2 Matrix Addition and Subtraction..................................................
2.21 Laws of Addition................................................................
2.3 Matrix Multiplication....................................................................
2.31 Laws of Multiplication........................................................
2.4 Transposition with Matrix Addition and Multiplication................
2.5 Derivative of a Matrix..................................................................
2.6 Determinants..............................................................................
2.61 Definition...........................................................................
2.62 Properties of Determinants................................................
2.63 Rank of a Matrix................................................................
2.7 The Adjoint of a Square Matrix....................................................
2.71 Definition...........................................................................
2.72 Theorems of Adjoint...........................................................
2.8 Eigenvalues and Eigenvectors....................................................
2.81 Definitions.........................................................................
2.82 Theorems on Eigenvalues and Eigenvectors......................
2.9 Inverse of a Matrix......................................................................
2.91 Singular and non-singular Matrices....................................
2.92 Theorems on Inverse of a Matrix.......................................
2.10 Generalized Inverse of a Matrix...................................................
2.101 Definitions................................................................
2.102 Properties of Pseudo Inverse....................................
2.11 The Trace of a Matrix...................................................................
2.12 Special Matrices..........................................................................
2.121 Positive Semidefinite Matrix..................................
2.122 Non-negative Matrix.................................................
2.123 Characteristics of non-negative Matrix.....................
2.124 Similar Matrices.......................................................
2.125 Periodic Matrices......................................................
Exercise 2.................................................................

CHAPTER 3 SYSTEMS OF LINEAR EQUATIONS


3.1 Introduction................................................................................
3.11 Existence of a solution.......................................................
3.12 Matrix Inversion and Solution of Systems of Linear Equations............................................
3.2 Adjoint Method............................................................................
3.3 Cramer's Rule..............................................................................
3.4 Elementary Operations...............................................................
3.5 Gaussian Elimination...................................................................
3.6 Gauss-Jordan Method..................................................................
3.7 Matrix Partitioning.......................................................................
3.8 Bordering Method.......................................................................
3.9 Choleski Square-root Method......................................................
3.10 Jacobi's Method...........................................................................
3.101 Matrix Approach to Jacobi's Method.........................
3.11 Gauss-Seidel Method..............................................................
3.111 Matrix Approach to the Gauss-Seidel Method...........
3.12 Successive Overrelaxation Method..............................................
3.121 Matrix Approach to Overrelaxation Method..............
3.13 Conjugate Gradient's Method......................................................
3.14 Convergence of Iterative Methods..............................................
Exercise 3
CHAPTER 4 ADJUSTMENT CALCULUS
4.1 Introduction
4.2 Basic statistical Concepts............................................................
4.21 Summarising Data...................................................
4.211 Frequency Distribution.............................................
4.212 Grouped Distribution................................................
4.213 Continuous Distribution............................................
4.22 Measures of Dispersion............................................
4.221 Variance of a Grouped Distribution...........................
4.222 Variance of Continuous Distribution.........................
4.23 Measures of Correlation...........................................
4.3 Propagation of Errors...................................................................
4.31 Observational Errors.......................................
4.32 Laws of Expectation........................................
4.33 Laws of Variance and Covariance.................
4.331 Propagation of Variance: Linear Function........
4.332 Taylor’s Series Expansion for non-linear
function
4.333 Propagation of Variances: non-linear function.
4.34 Law of Propagation of Covariance Matrices.....................
4.35 Weight Matrix of Observations...........................................
4.4 Maximum and Minimum Problem.......................................
Exercise 4..........................................................................
PART II
CHAPTER 5 METHOD OF OBSERVATION EQUATIONS
5.1 Introduction................................................................................
5.2 Linear Mathematical Model.........................................................
5.3 Properties of Least Squares as a Linear Estimator.......................
5.4 Non-linear Mathematical Model...................................................
5.5 Addition (Subtraction) of Observations........................................
5.51 Sequential Approach..........................................................
5.52 Generalization of Sequential Formulas..............................
5.6 Addition of new observations and new parameters.....................
5.7 Error Ellipse and Error Ellipsoid....................................................
Exercise 5....................................................................................

CHAPTER 6 METHOD OF CONDITION EQUATIONS


6.1 Introduction................................................................................
6.2 Non-linear Model.........................................................................
6.3 Partitioned Model........................................................................
6.4 Addition (Subtraction) of Observations......................................
6.5 Pope's Pitfall for non-linear Model...............................................
Exercise 6....................................................................................

CHAPTER 7 COMBINATION OF OBSERVATION EQUATIONS AND CONDITION EQUATIONS

7.1 Mixed Model................................................................................
7.2 Addition (Subtraction) of Observations......................................
7.21 Sequential Approach..........................................................
7.3 Addition of New Observations and New Parameters....................
7.4 Pope's Pitfall for non-linear Model.............................................
Exercise 7....................................................................................

PART III
CHAPTER 8 APPLICATION OF CONSTRAINTS IN LEAST SQUARES ADJUSTMENT
8.1 Observation Equations with Functional Constraints.....................
8.2 Combination of Observation and Condition Equations................
with functional Constraints..........................................................
8.3 Observation Equations with weight Constraints on Parameters...
8.4 Weight Constraints on the Parameters of the Mixed Model.........
Exercise 8....................................................................................

CHAPTER 9 ADVANCED LEAST SQUARES METHODS


9.1 Introduction................................................................................
9.2 Least Squares Collocation...........................................................
9.21 Stepwise Collocation..........................................................
Exercise 9....................................................................................

PART IV
CHAPTER 10 SOME STATISTICAL DISTRIBUTIONS
10.1 The Normal Distribution..............................................................
10.11 Normalization of a Normal Random Variable............
10.12 Fitting a Normal Curve..............................................
10.13 Multivariate Normal Distribution..............................
10.2 Student's t Distribution....................................................................
10.3 Chi-squared Distribution..............................................................
10.4 F Distribution..............................................................................
Exercise 10

CHAPTER 11 UNIVARIATE INTERVAL ESTIMATION AND HYPOTHESIS TESTING
11.1 Introduction................................................................................
11.2 Univariate Interval Estimation.....................................................
11.21 Interval Estimation of $\mu$ when $\sigma^2$ is known.................
11.22 Estimation of $\mu$ when $\sigma^2$ is unknown..........................
11.23 Interval Estimation of difference between two
means......................................................................
11.24 Interval Estimation of population variance...............
11.25 Interval Estimation of ratio of two variances............
11.3 Univariate Hypothesis Testing
11.31 Testing of hypothesis on the mean: Single
Population................................................................
11.32 Test of hypotheses on the mean: Two Populations....
11.33 Test of hypotheses on the mean: Three or more
Populations (ANOVA one-way)..................................
11.331 Orthogonal Contrast.....................................
11.34 ANOVA two-way Classification..................................
11.341 ANOVA two-way with repeated observations
11.35 Test of hypothesis on the variance: Single
Population................................................................
11.36 Test of hypothesis on the variance: Two Populations
11.37 Test of hypothesis on the variance: Three or more
Populations..............................................................
11.38 Goodness of fit.........................................................
Exercise 11...............................................................

CHAPTER 12 MULTIVARIATE INTERVAL ESTIMATION AND HYPOTHESIS TESTING
12.1 Multivariate Confidence Interval.................................................
12.11 Interval Estimation and the Distribution of $\hat{V}^T P \hat{V}$..........................
12.12 Interval Estimation and the Distribution of the Ratio of two $\hat{V}^T P \hat{V}$'s..........................
12.13 Interval Estimation and the Distribution of the Quadratic Form..........................
12.2 Multivariate Hypothesis Testing...................................................
12.21 Hypothesis Testing on $\hat{V}^T P \hat{V}$..........................
12.22 Equality of Mean Vectors..........................................


12.221 Testing Hypothesis that the Mean Vector is
Equal to a Given Vector..................................
12.222 Testing Hypothesis on the Equality of Two
Mean Vectors..................................................
12.23 Multivariate Analysis of Variance (MANOVA).............
12.24 Test for Homogeneity of Covariance Matrices........
Exercise 12...............................................................
PART V
CHAPTER 13 APPLICATION IN GEODETIC SURVEYING
13.1 Adjustment of Angles..................................................................
13.2 Adjustment of Triangulation Network..........................................
13.21 Method of Condition Equations................................
13.22 Method of Observation Equations............................
13.3 Adjustment of Trilateration Network............................................
13.4 Adjustment of Triangulation Network with Measured angles and
Distances....................................................................................
13.41 Method of Condition Equations................................
13.42 Method of Observation Equations............................
13.5 Traverse Adjustment...................................................................
13.51 Traverse Adjustment by Condition Equations...........
13.52 Traverse Adjustment by Observation Equations with Constraints...............................................
13.53 Discussion of Results................................................
13.6 Resection Problem.......................................................................
13.61 Resection by Observation Equations........................
13.62 Resection by observation Equations with weight
constraints...............................................................
13.63 Resection by Mixed Model........................................
13.7 Calibration of EDM Instrument....................................................
13.8 Adjustment of a level network.....................................................
13.81 Condition Equations.................................................
13.82 Observation Equations.............................................
13.9 Relative weights..........................................................................
Exercise 13..................................................................................
CHAPTER 14 APPLICATION IN PHOTOGRAMMETRY
14.1 Introduction................................................................................
14.2 Perspective Centre Determination...............................................
14.21 Affine Transformation...............................................
14.22 Straight line Condition Method.................................
14.3 Relative Orientation...........................................................
14.31 Numerical Relative Orientation.................................
14.311 Numerical Relative Orientation.......................
14.312 Numerical Relative Orientation using the ω element..........................................
14.32 Analytical Relative Orientation.................................
14.321 Collinearity Equations...........................
14.322 Coplanarity Equations...........................
14.4 Absolute Orientation..........................................................
14.5 Strip and block Formation..................................................
14.6 Polynomial Strip Adjustment..............................................
14.7 Polynomial Block Adjustment of Strips...............................
14.8 Removal of film Distortion.................................................
14.9 Single Photo Resection......................................................
14.10 Application of weight Constraint........................................
14.11 Post Block Aerial Survey (PBAS).........................................
14.12 Simultaneous Adjustment of Photogrammetric and
Geodetic Observations (SAPGO)...........................................
Exercise 14

Appendix A Statistical Tables


Table A1: Areas Under the Normal Curve.................................
Table A2: Percentage Points of the t Distribution.....................
Table A3: Percentage Points of the Chi-square Distribution......
Table A4: Percentage Points of the $F(\nu_1, \nu_2)$ Distribution...........
Appendix B Glossary of Symbols and Summary of Least Squares formulas
(Chapters 5-9).............................................................................
Bibliography.................................................................
Index...........................................................................................
PREFACE
This book is intended primarily to provide a text on adjustment
computations and statistical data analysis, suitable for mature
undergraduate as well as Master's students in Geodesy, Surveying and
Photogrammetry in Universities, Polytechnics and Colleges of Technology.
Ph.D. students in these and related disciplines will also find it useful as an
introductory text. The book also offers a variety of tools in least squares
and statistics necessary for investigative research in some aspects of the
chemical, physical, biological and engineering sciences. This book should
also be a valuable companion to geodesists, surveyors, photogrammetrists,
engineers, geologists, chemists, physicists, and biologists in their
professional practice.
The derivations of important results in the method of least squares
presented in this book are based on matrix algebra. Chapter 2 on
matrices provides the necessary background to these derivations. It is the
author's belief that analytical methods are best handled by matrix algebra
if the advantages of digital computers are to be maximised. Another
advantage of the matrix approach is its conciseness, which enables the student
to understand what is happening at every stage of a derivation or
computational procedure.
A survey of both iterative and direct methods for solving linear
systems of equations is presented in Chapter 3. One of the traditional
procedures of the least squares method is the formation of a system of linear
equations (normal equations) by minimizing the sum of squares of residuals
(or weighted residuals). If one is faced with a large system of normal
equations, their solution becomes a formidable task in terms of speed,
accuracy, cost and storage requirements on the computer. Chapter 3
provides a variety of alternative methods for solving normal systems of
equations.
Chapter 4 briefly reviews four important aspects of adjustment
calculus - basic concepts, error propagation, Taylor's series expansion and
minimization by Lagrange's multipliers. Chapter 4, together with the
previous ones, forms Part I of this book and may be regarded as the
mathematical and statistical background necessary for understanding the
matrix approach to least squares methods. The student who is faithful to
the exercises given at the end of Chapters 2, 3 and 4 will be well prepared to
handle the enormous computational efforts involved in solving problems by
the method of least squares with matrices.
Part two of this book treats the three classical methods of least
squares - the method of observation equations, the method of condition
equations and a combination of observation and condition equations - in
Chapters 5, 6 and 7 respectively. The sequential approach, as it relates to
the addition of observations and the addition of parameters, is also discussed in
these chapters.
Part three explores some modifications of these standard methods.
Chapter 8, for example, deals with the introduction of functional and weight
constraints in least squares. A brief review of least squares collocation is
presented in Chapter 9.
Part four consists of a review of statistical distributions (Chapter 10) as
well as a brief outline of parametric statistical tests in Chapter 11
(univariate) and Chapter 12 (multivariate). These tests will be found
extremely useful in post-adjustment data analyses.
Part five is devoted to specific applications in geodetic surveying
(Chapter 13) and photogrammetry (Chapter 14). Although attention is
focussed on geodetic as well as photogrammetric applications in these last
two chapters, one of the main objectives of this book is to demonstrate the
practical applications of least squares principles in solving a variety of
problems which cut across disciplines. Examples in Chapters 5, 6, 7, 8 and 9
are therefore not restricted to any particular discipline.
Students will find the exercises which follow each chapter very
crucial in grasping some of the theoretical concepts. Supplementary
exercises are designed primarily to serve as training in computer
programming. The FORTRAN IV computer programs used in this book are not
included in the text so as to keep the cost of publication at a reasonable
level. A solution manual, which is now in preparation, will incorporate a
listing of these computer programs.
It is not possible within the constraints of time, space and economy to
illustrate with examples each method of least squares discussed in this
book. The book therefore contains a bibliography of work done by various
authors whose experience will enrich the knowledge of readers looking for
additional materials on various applications.
ACKNOWLEDGEMENTS
The author wishes to express his sincere appreciation to the various
individuals and organizations who have contributed directly or indirectly to
this book. Special thanks are due to Dr. F.A. Fajemirokun and Dr. F.O.A.
Egberongbe of Lagos University who reviewed the manuscript and made
several suggestions. The author is particularly grateful to Prof. G. Obenson,
also of Lagos University, who first instructed him on the subject of least
squares and who gave him the initial encouragement when the idea of this
book was proposed. The author is equally indebted to Prof. U. Uotila of
the Ohio State University who taught him many profound concepts in least
squares theory. Special appreciation is also expressed for the inspiration
received from other professors at Ohio State who taught the author various
applications of least squares principles: Prof. R. Rapp, Prof. I. Mueller, Prof.
D.C. Merchant and Prof. S.K. Ghosh (now of Laval University). Grateful
thanks are also due to Prof. J.S. Rustagi, the author's statistics teacher at
Ohio State.
Some of the examples and exercises in this book are based on
laboratory assignments given to students at Ohio State and at Lagos
University. Sincere appreciation must therefore be expressed for the
knowledge gained from my laboratory instructors at Ohio State - Dr. (Major)
Spinski and Dr. Rampal. Credit must also be given to my students at Lagos
University, particularly Messrs. Owolabi (P.G. student), Oyinloye and Olaleye,
for the assistance received from them. The author has also received
inspiration from the published work of several authors too numerous to
mention. Special mention must however be made of the following: Mr. G.W.
Schut of the National Research Council, Canada; Prof. P.R. Wolf of the University
of Wisconsin; Prof. E.M. Mikhail of Purdue University; Prof. F. Ackermann of
Stuttgart University; Profs. Karara and K.W. Wong of the University of Illinois;
the late Prof. H. Thompson of University College, London; Prof. W. Faig of the
University of New Brunswick; Prof. S.A. Veress of the
University of Washington, Seattle; Prof. K. Torlegård of the Royal Institute of
Technology, Stockholm; Mr. Duane Brown, Adjunct Professor at the Ohio State
University; Prof. A.J. Brandenberger of Laval University, Quebec; and Prof.
D.M.J. Fubara of the University of Science and Technology in Port Harcourt.
The author also wishes to express his gratitude to Miss Okparaocha
and Mrs. Ibajesomo for their dedication and perseverance in typing the
manuscript. Finally, special recognition is due to the author's wife, Esther,
and children, Tayo, Kunle, Dupe and Tumi, who in spite of their suffering due
to neglect on my part, sustained and encouraged me throughout the
preparation of this book.

Olubodun O. Ayeni
1981
CHAPTER 1
INTRODUCTION
1.1 Principles of Least Squares Method
It is generally accepted that the precision of a measurement may be
improved by increasing the number of observations. The redundant
observations arising therefrom, however, create a number of problems. For
example, discrepancies may occur between repeated observations, since
each observation has a certain amount of uncertainty (precision) attached to
it. Such discrepancies therefore have to be reconciled (adjusted) so as to
obtain the most satisfactory (most probable or adjusted) values of the
unknown quantities. In another situation, redundant observations may lead
to redundant but inconsistent equations in which there are more equations
than unknown quantities. There is therefore the need not only to obtain the
most probable values of the unknown quantities but also to find a unique
solution for these quantities. The method of least squares may be defined
as a method which makes use of redundant observations in the
mathematical modelling of a given problem with a view to minimizing the
sum of squares of discrepancies between the observations and their most
probable (adjusted) values, subject to the prevailing mathematical model.
The discrepancies between the observations and their most probable
values are known as residuals. The need for a least squares adjustment
therefore arises from superfluous observations, and it aims at minimizing
the sum of squares of residuals in order to obtain the "best" estimate, or
the most probable values, of the unknown quantities.
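Stated compactly in the matrix notation developed in Part I of this book (the same quadratic form $\hat{V}^T P \hat{V}$ reappears in Chapter 12), the least squares criterion is

$$\phi = V^{T} P V = \text{minimum},$$

where $V$ is the vector of residuals and $P$ is the weight matrix of the observations (Section 4.35); for observations of equal weight, $P$ reduces to the identity matrix.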
It is important at the outset to be familiar with the properties of the
method of least squares, which may be stated as follows (see Section 5.3
for the proofs of these properties):
1. Least squares estimate is the best in the sense that it is a
minimum variance estimate;
2. Least squares estimate is an unbiased estimate;
3. The method of least squares gives a unique solution; and
4. The method of least squares is a distribution-free method.

The sample mean*, as a least squares estimate, will now be used to
illustrate these least squares properties.

* The sample mean is the average of a sample taken from a population (see Section 4.2).
Consider a sample of n independent observations of a single quantity
x. The problem is to find a sample mean which has the above least squares
properties. In other words, we are looking for a sample mean $\bar{x}$ which has
the minimum sum of squares of residuals $V_i$; that is,

$$\sum_{i=1}^{n} V_i^2 = \text{minimum}$$

The minimum of $\sum_{i=1}^{n} V_i^2$ may be found by equating to zero its partial
derivative with respect to $\bar{x}$:

$$\frac{\partial \sum_{i=1}^{n} V_i^2}{\partial \bar{x}} = \frac{\partial \sum_{i=1}^{n} (x_i - \bar{x})^2}{\partial \bar{x}} = -2 \sum_{i=1}^{n} (x_i - \bar{x}) = 0$$

therefore

$$\sum_{i=1}^{n} x_i - n\bar{x} = 0$$

$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \qquad (1)$$
Equation (1) is the familiar formula for computing a sample mean. The
above derivation proves that the sample mean minimizes the sum of
squares of residuals $V_i$, which is the minimum variance property. The sample
mean is therefore the best estimate of the population mean ($\mu$).
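As a quick numerical check of this result, the short sketch below (written in Python rather than the FORTRAN IV used for the programs mentioned in the Preface; the observation values are hypothetical) evaluates the sum of squares of residuals at the sample mean and at nearby candidate estimates:

```python
import numpy as np

# Hypothetical repeated observations of a single quantity x
# (illustrative values, not taken from the text).
x = np.array([10.02, 10.05, 9.98, 10.01, 10.04])

xbar = x.mean()  # least squares estimate, eqn. (1)

def sum_sq_residuals(m):
    """Sum of squares of residuals for a candidate estimate m."""
    return float(np.sum((x - m) ** 2))

# Any candidate other than the sample mean yields a larger sum of squares.
for trial in (xbar - 0.01, xbar, xbar + 0.01):
    print(f"estimate {trial:.4f} -> sum of squared residuals {sum_sq_residuals(trial):.6f}")
```

Running the sketch shows the smallest sum of squares at the sample mean itself, as eqn. (1) predicts.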
The sample mean ($\bar{x}$) is an unbiased estimate of the population
mean ($\mu$) if

$$E(\bar{x}) = \mu \qquad (2)$$

Equation (2) in statistical terms is described as: the expected value of $\bar{x}$ is
equal to $\mu$.

From eqn. (1),

$$E(\bar{x}) = E\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(x_i) = \frac{1}{n}\sum_{i=1}^{n} \mu = \mu$$
The sample mean $\bar{x}$ is therefore an unbiased least squares estimate of the
population mean $\mu$. The uniqueness of $\bar{x}$ is obvious from the derivation,
since there exists only one possible solution to the partials leading to eqn.
(1). It is important to note that no form of statistical distribution is imposed
on the observations $x_i$ or the residuals $V_i$ in deriving eqn. (1). The least
squares estimate is therefore independent of any restrictions on the
distribution of the observations. Hamilton [1964] has pointed out that the
only assumption made is that the observations are drawn from populations
with finite second moments. It is sometimes erroneously stated in some
textbooks that the principles of least squares depend on the fact that the
residuals or observations should be normally distributed. If the normality
assumption were necessary for least squares, it would constitute a serious
drawback on its applications to a variety of problems, since in most cases
there is no statistical method for testing whether the observations are normally
distributed. The truth is that the least squares estimate is distribution-free.
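This distribution-free property is easy to verify empirically. The sketch below (an illustration with hypothetical parameters, not an example from the text) draws repeated samples from a normal population and from a markedly skewed one, both with the same mean, and shows that the sample mean is unbiased in either case:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
mu, n, trials = 5.0, 10, 200_000  # illustrative constants

# Sample means from two very different parent populations, both with
# population mean mu.
normal_means = rng.normal(loc=mu, scale=2.0, size=(trials, n)).mean(axis=1)

# A shifted exponential: exponential(scale=1.0) has mean 1, so adding
# (mu - 1) gives a skewed, non-normal population whose mean is also mu.
skewed_means = ((mu - 1.0) + rng.exponential(scale=1.0, size=(trials, n))).mean(axis=1)

# In both cases the average of the sample means approaches mu,
# i.e. E(xbar) = mu holds with no normality assumption.
print(f"normal parent: mean of xbar = {normal_means.mean():.4f}")
print(f"skewed parent: mean of xbar = {skewed_means.mean():.4f}")
```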
It should, however, be borne in mind that the principle of maximum
likelihood estimation leads to the same result if the normality assumption is
imposed on the observations. It is important also to know that post-least
squares adjustment analyses of data may require the application of
classical statistical tests, which in most cases are based on the properties of
the normal distribution, in order to obtain, statistically speaking, valid results.

1.2 Brief Historical Developments.


The principle of least squares was first published by Adrien Marie
Legendre [1806] in relation to the problem of estimating the orbits of
comets. Karl Friedrich Gauss, however, is said to have used the same principle
since 1795, at the age of 18, in connection with the calculation of the orbits
of planets from telescopic measurements, Wells and Krakiwsky [1971]. The
first publication of Gauss [1809] established a sound statistical basis for the
theory of least squares. A series of important results on the theory of least
squares, such as the Gaussian unbiased estimation of the variance of
residuals, were published in his subsequent publications, Gauss [1810, 1821,
1963].
In 1812 Laplace [1812] applied some important results derived from
his work on the theory of probabilities to the method of least squares. In
1859 Chebyshev [1859], while developing his Chebyshev polynomials and
mini-max theory, investigated the use of these polynomials in the least squares
method for interpolation. Markov [1898] also made a notable contribution
in relation to the theory of estimation of parameters in mathematical statistics.
Helmert's [1907] application of least squares in astronomy and geodesy is
noteworthy; so also is the geometrical presentation of the least squares
method by Kolmogorov [1946].
Tienstra [1956] showed how a least squares problem could be solved in
"phases" (parts). This has led to the development of the sequential
approach, which was further advanced by Schmid [1968], Richardus [1966],
Uotila [1967] and Krakiwsky [1968]. It should be noted that the flexibility of
the method of least squares, manifested in sequential procedures for the
addition of observations or parameters, is due to the introduction of the matrix
approach popularised by Uotila [1967] and Hirvonen [1971].
Kalman [1960] introduced a new concept (the Kalman filter) in least
squares which allows the "state vector" (vector of parameters) to vary with
time. Krarup [1969] and Moritz [1972] developed the method of least
squares collocation, which permits the introduction of two random
components in the observations - the signal and the noise (residuals).
Bjerhammar [1973] has developed a completely generalised model for
least squares of which the Kalman filter and collocation may be regarded as
special cases. Krakiwsky [1975] has published a synthesis of recent
advances in the method of least squares which includes a review of Kalman
filtering, Bayes filtering, the Tienstra phase, the sequential approach and
collocation.

EXERCISE 1
1. What is a least squares solution?
2. What are the properties of a least squares estimate?
3. Show that the sample mean is a least squares solution.
4. Define the following terms: adjusted observations, most
probable value, adjusted parameters, residuals, accuracy, precision,
minimum variance property, and unbiased estimate.
5. What is meant by saying that a least squares estimate is distribution-free?
What is the relationship between a least squares estimate and a
maximum likelihood estimate?

