CHAPTER 1 INTRODUCTION
1.1 Principles of Least Squares Method
1.2 Brief Historical Developments
Exercise 1
CHAPTER 2 MATRICES
2.1 Definitions
2.2 Matrix Addition and Subtraction
2.21 Laws of Addition
2.3 Matrix Multiplication
2.31 Laws of Multiplication
2.4 Transposition with Matrix Addition and Multiplication
2.5 Derivative of a Matrix
2.6 Determinants
2.61 Definition
2.62 Properties of Determinants
2.63 Rank of a Matrix
2.7 The Adjoint of a Square Matrix
2.71 Definition
2.72 Theorems on the Adjoint
2.8 Eigenvalues and Eigenvectors
2.81 Definitions
2.82 Theorems on Eigenvalues and Eigenvectors
2.9 Inverse of a Matrix
2.91 Singular and Non-singular Matrices
2.92 Theorems on the Inverse of a Matrix
2.10 Generalized Inverse of a Matrix
2.101 Definitions
2.102 Properties of the Pseudo-inverse
2.11 The Trace of a Matrix
2.12 Special Matrices
2.121 Positive Semidefinite Matrix
2.122 Non-negative Matrix
2.123 Characteristics of a Non-negative Matrix
2.124 Similar Matrices
2.125 Periodic Matrices
Exercise 2
PART III
CHAPTER 8 APPLICATION OF CONSTRAINTS IN LEAST SQUARES ADJUSTMENT
8.1 Observation Equations with Functional Constraints
8.2 Combination of Observation and Condition Equations with Functional Constraints
8.3 Observation Equations with Weight Constraints on Parameters
8.4 Weight Constraints on the Parameters of the Mixed Model
Exercise 8
PART IV
CHAPTER 10 SOME STATISTICAL DISTRIBUTIONS
10.1 The Normal Distribution
10.11 Normalization of a Normal Random Variable
10.12 Fitting a Normal Curve
10.13 Multivariate Normal Distribution
10.2 Student's t Distribution
10.3 Chi-squared Distribution
10.4 F Distribution
Exercise 10
Olubodun O. Ayeni
1981
CHAPTER 1
INTRODUCTION
1.1 Principles of Least Squares Method
It is generally accepted that the precision of a measurement may be
improved by increasing the number of observations. The redundant
observations arising therefrom, however, create a number of problems. For
example, discrepancies may occur between repeated observations, since
each observation has a certain amount of uncertainty (limited precision)
attached to it. Such discrepancies therefore have to be reconciled
(adjusted) so as to obtain the most satisfactory (most probable or
adjusted) values of the unknown quantities. In another situation, redundant
observations may lead to redundant but consistent equations, in which
there are more equations than unknown quantities. There is therefore the
need not only to obtain the most probable values of the unknown quantities
but also to find a unique solution for these quantities. The method of least
squares may be defined as a method which makes use of redundant
observations in the mathematical modelling of a given problem with a view
to minimizing the sum of squares of the discrepancies between the
observations and their most probable (adjusted) values, subject to the
prevailing mathematical model. The discrepancies between the
observations and their most probable values are known as residuals. The
need for a least squares adjustment therefore arises from superfluous
observations, and it aims at minimizing the sum of squares of the residuals
in order to obtain the "best" estimate, or the most probable values, of the
unknown quantities.
It is important at the outset to be familiar with the properties of the
method of least squares, which may be stated as follows (see section 5.3
for the proofs of these properties):
1. A least squares estimate is the best in the sense that it is a
minimum variance estimate.
2. A least squares estimate is an unbiased estimate.
3. The method of least squares gives a unique solution.
4. The method of least squares is a distribution-free method.
The sample mean*, as a least squares estimate, will now be used to
illustrate these least squares properties.
* Sample mean is the average of a sample taken from a population (see section
4.2)
Consider a sample of n independent observations x_1, x_2, ..., x_n of a
single quantity x. The problem is to find a sample mean which has the
above least squares properties. In other words, we are looking for a sample
mean \( \bar{x} \) which has the minimum sum of squares of the residuals
v_i; that is,

\[ \sum_{i=1}^{n} v_i^2 = \text{minimum}. \]

The minimum of \( \sum_{i=1}^{n} v_i^2 \) may be found by differentiating
with respect to \( \bar{x} \) and setting the derivative equal to zero:

\[ \frac{\partial \sum_{i=1}^{n} v_i^2}{\partial \bar{x}}
   = \frac{\partial \sum_{i=1}^{n} (x_i - \bar{x})^2}{\partial \bar{x}}
   = -2 \sum_{i=1}^{n} (x_i - \bar{x}) = 0 \]

therefore

\[ \sum_{i=1}^{n} x_i - n\bar{x} = 0 \]

\[ n\bar{x} = \sum_{i=1}^{n} x_i \]

\[ \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \qquad (1) \]

Equation (1) is the familiar formula for computing a sample mean. The
above derivation proves that the sample mean minimizes the sum of
squares of the residuals v_i, which is the minimum variance property. The
sample mean is therefore the best estimate of the population mean (\( \mu \)).
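The minimum-variance property derived above can be checked numerically. The following Python sketch (illustrative only; the observation values are invented for the example) computes the sample mean by equation (1) and confirms that no trial value gives a smaller sum of squared residuals:

```python
def sum_sq_residuals(xs, a):
    """Sum of squares of residuals v_i = x_i - a about a trial value a."""
    return sum((x - a) ** 2 for x in xs)

def sample_mean(xs):
    """Equation (1): x_bar = (1/n) * sum(x_i)."""
    return sum(xs) / len(xs)

# Five hypothetical repeated observations of a single quantity.
xs = [10.2, 9.8, 10.1, 10.0, 9.9]
x_bar = sample_mean(xs)

# The sum of squares at x_bar is no larger than at any other trial value.
best = sum_sq_residuals(xs, x_bar)
for a in [9.5, 9.9, 10.0, 10.5]:
    assert best <= sum_sq_residuals(xs, a)
```

Trying denser grids of trial values gives the same result: the minimum always falls at the mean, as the derivative argument requires.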
The sample mean (\( \bar{x} \)) is an unbiased estimate of the population
mean (\( \mu \)) if

\[ E(\bar{x}) = \mu \qquad (2) \]

Equation (2), in statistical terms, is described as: the expected value of
\( \bar{x} \) is equal to \( \mu \).

From eqn. (1),

\[ E(\bar{x}) = E\left( \frac{1}{n} \sum_{i=1}^{n} x_i \right)
   = \frac{1}{n} \sum_{i=1}^{n} E(x_i)
   = \frac{1}{n} \sum_{i=1}^{n} \mu
   = \frac{1}{n}(n\mu) = \mu \]
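The unbiasedness in equation (2) can also be illustrated by simulation. The sketch below (not from the text; the population parameters mu and sigma are invented for the example) draws many samples of size n from a normal population and averages their sample means, which should approach \( \mu \):

```python
import random

# Monte Carlo illustration of E(x_bar) = mu: draw many samples of size n
# from a population with known mean mu, and average the sample means.
random.seed(0)  # fixed seed so the run is repeatable
mu, sigma, n, trials = 5.0, 2.0, 10, 20000

mean_of_means = sum(
    sum(random.gauss(mu, sigma) for _ in range(n)) / n  # one sample mean
    for _ in range(trials)
) / trials

# The average of the sample means should be close to mu = 5.0.
assert abs(mean_of_means - mu) < 0.05
```

Increasing the number of trials drives the average of the sample means ever closer to \( \mu \), which is exactly what unbiasedness asserts about the expectation.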
normal distribution in order to obtain, statistically speaking, valid results.
time. Krarup [1969] and Moritz [1972] developed the method of least
squares collocation, which permits the introduction of two random
components in the observations - the signal and the noise (residuals).
Bjerhammar [1973] has developed a completely generalised model for
least squares, of which the Kalman filter and collocation may be regarded
as special cases. Krakiwsky [1975] has published a synthesis of recent
advances in the method of least squares, which includes a review of Kalman
filtering, Bayes filtering, the Tienstra phase, the sequential approach and
collocation.
EXERCISE 1
1. What is a least squares solution?
2. What are the properties of a least squares estimate?
3. Show that the sample mean is a least squares solution.
4. Define the following terms: adjusted observations, most
probable value, adjusted parameters, residuals, accuracy, precision,
minimum variance property, and unbiased estimate.
5. What is meant by saying that a least squares estimate is a
distribution-free estimate? What is the relationship between a least
squares estimate and a maximum likelihood estimate?