Libro Pag. 211-329
Expanding the sum of squared deviations and taking expectations term by term, we get

$$E\Big[\sum_{i=1}^{n}(X_i-\bar X)^2\Big]=\sum_{i=1}^{n}E[X_i^2]-n\,E[\bar X^2]. \quad (8-22)$$

Since $E[X_i^2]=\sigma^2+\mu^2$ and $E[\bar X^2]=\sigma^2/n+\mu^2$, we get

$$E\Big[\sum_{i=1}^{n}(X_i-\bar X)^2\Big]=n(\sigma^2+\mu^2)-n\Big(\frac{\sigma^2}{n}+\mu^2\Big)=(n-1)\sigma^2, \quad (8-23)$$

and therefore

$$E[S^2]=E\Big[\frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar X)^2\Big]=\frac{(n-1)\sigma^2}{n-1}=\sigma^2, \quad (8-24)$$
which shows that $S^2$ is an unbiased estimator of $\sigma^2$. This is the reason why the divisor $n-1$ is used for the sample variance. If the sum of the squares of the deviations were to be divided by $n$ rather than by $n-1$, the resulting estimator would be biased.
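The role of the $n-1$ divisor can be checked numerically. The following sketch (mine, not from the text) draws many samples from a population with $\sigma^2=4$ and averages both estimators:

```python
import random

# Monte Carlo check of unbiasedness: average S^2 (divisor n-1) versus the
# biased estimator (divisor n) over many samples. Population: N(0, 2^2).
random.seed(42)
n, trials, sigma = 5, 20000, 2.0
sum_unbiased = sum_biased = 0.0
for _ in range(trials):
    x = [random.gauss(0.0, sigma) for _ in range(n)]
    xbar = sum(x) / n
    ss = sum((xi - xbar) ** 2 for xi in x)
    sum_unbiased += ss / (n - 1)   # S^2, divisor n-1
    sum_biased += ss / n           # divisor n: biased
mean_unbiased = sum_unbiased / trials   # close to sigma^2 = 4
mean_biased = sum_biased / trials       # close to (n-1)/n * sigma^2 = 3.2
print(round(mean_unbiased, 2), round(mean_biased, 2))
```

The divisor-$n$ average lands systematically below $\sigma^2$ by the factor $(n-1)/n$, which is exactly the bias Eq. (8-23) predicts.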
If the population from which the sample is drawn is normally distributed, it can be shown that $S^2$ is also a consistent estimator of $\sigma^2$ and that the distribution of $S^2$ is related to the chi-square distribution; specifically,

$$\frac{(n-1)S^2}{\sigma^2}\sim\chi^2_{n-1}. \quad (8-25)$$
If the population mean is $\mu$ and the population variance is $\sigma^2$, the standardized sample mean

$$Z=\frac{\bar X-\mu}{\sigma/\sqrt{n}} \quad (8-26)$$

has a standard normal distribution with zero mean and unit variance, and it follows that

$$P[-z\le Z\le z]=2\Phi(z)-1, \quad (8-27)$$

where $\Phi(z)$ is the value of the standard normal distribution function, obtainable from Table I of Appendix B.
Rearranging the inequality inside the brackets of Eq. (8-27), we get

$$P\Big[\bar X-\frac{z\sigma}{\sqrt n}\le\mu\le\bar X+\frac{z\sigma}{\sqrt n}\Big]=2\Phi(z)-1. \quad (8-28)$$

Thus the probability that $\mu$ lies between $\bar X-z\sigma/\sqrt n$ and $\bar X+z\sigma/\sqrt n$ is $2\Phi(z)-1$.

EXAMPLE 8-5

For a 0.95 (95%) confidence level, $2\Phi(z)-1=0.95$, so $\Phi(z)=0.975$ and, from Table I of Appendix B, $z=1.96$. For the 20 distance measurements of Example 8-3 ($\bar x=537.615$ m), with the standard deviation of a single measurement known to be $\sigma=0.033$ m:
$$\bar x-\frac{z\sigma}{\sqrt n}=537.615-\frac{1.96(0.033)}{\sqrt{20}}=537.601\ \mathrm{m}$$

and

$$\bar x+\frac{z\sigma}{\sqrt n}=537.615+\frac{1.96(0.033)}{\sqrt{20}}=537.629\ \mathrm{m}.$$

Thus, we can say with 95% confidence that $\mu$ lies in the interval 537.601 m to 537.629 m.
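The interval of Example 8-5 follows directly from Eq. (8-28); a minimal sketch, assuming (as in the example) a known $\sigma=0.033$ m:

```python
import math

# 95% confidence interval for the mean with known sigma, Eq. (8-28).
xbar, sigma, n, z = 537.615, 0.033, 20, 1.96
half_width = z * sigma / math.sqrt(n)
lower, upper = xbar - half_width, xbar + half_width
print(round(lower, 3), round(upper, 3))  # 537.601 537.629
```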
In Example 8-5 the standard deviation $\sigma$ of the population was known. More often, however, $\sigma$ is unknown and must be estimated, usually by the sample standard deviation $S$. Thus, instead of using Eq. (8-26), we must use
$$T=\frac{\bar X-\mu}{S/\sqrt{n}}. \quad (8-29)$$

It is easily shown from Eqs. (8-7), (8-25), (8-26), and (8-29) that $T$ has a $t$ distribution with $n-1$ degrees of freedom. Thus, instead of Eq. (8-28), we have
$$P\Big[\bar X-\frac{tS}{\sqrt n}\le\mu\le\bar X+\frac{tS}{\sqrt n}\Big]=2F(t)-1, \quad (8-30)$$

so that the confidence interval for $\mu$ has limits $\bar X-tS/\sqrt n$ and $\bar X+tS/\sqrt n$.
EXAMPLE 8-6
With reference to Example 8-3, in which the sample mean of 20 independent
measurements of a distance is calculated to be 537.615 m, and to Example 8-4 in which
the sample standard deviation of the same 20 measurements is calculated to be 0.035 m,
construct a 0.95 (95%) confidence interval for the population mean, $\mu$.
Solution

For $2F(t)-1=0.95$, $F(t)=0.975$. Degrees of freedom $=n-1=20-1=19$. From Table III, Appendix B, $t_{0.975,19}=2.09$. Thus,

$$\bar x-\frac{ts}{\sqrt n}=537.615-\frac{(2.09)(0.035)}{\sqrt{20}}=537.599\ \mathrm{m}$$

and

$$\bar x+\frac{ts}{\sqrt n}=537.615+\frac{(2.09)(0.035)}{\sqrt{20}}=537.631\ \mathrm{m}.$$
Thus, we can say with 95% confidence that $\mu$ lies in the interval 537.599 m to 537.631 m.
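The same computation with the $t$ multiplier of Example 8-6 ($t_{0.975,19}=2.09$ is taken from Table III rather than computed here):

```python
import math

# 95% confidence interval for the mean with sigma unknown, Eq. (8-30).
xbar, s, n, t = 537.615, 0.035, 20, 2.09
half_width = t * s / math.sqrt(n)
lower, upper = xbar - half_width, xbar + half_width
print(round(lower, 3), round(upper, 3))  # 537.599 537.631
```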
In establishing a confidence interval for the mean of a distribution it has been assumed that
the random sample is drawn from a normal distribution. If the population distribution is
not normal, but the sample size is large, X will have a distribution that is approximately
normal, and Eqs. (8-28) and (8-30) are still valid for all practical purposes.
Finally, as the sample size increases, we see that the values of $t$ approach the corresponding values of $z$. Indeed, for a sample size of 30 or larger, the $t$ distribution is, for all practical purposes, indistinguishable from the standard normal distribution.

The quantity $(n-1)S^2/\sigma^2$, which by Eq. (8-25) has a chi-square distribution with $n-1$ degrees of freedom, can be used to construct a confidence interval for the population variance, $\sigma^2$. Thus,

$$P\Big[\chi^2_{a,n-1}\le\frac{(n-1)S^2}{\sigma^2}\le\chi^2_{b,n-1}\Big]=b-a, \quad (8-31)$$

where $\chi^2_{a,n-1}$ and $\chi^2_{b,n-1}$ are the $a$th and $b$th percentiles, respectively, of the chi-square distribution with $n-1$ degrees of freedom. Rearranging the inequality inside the brackets of Eq. (8-31), we get

$$P\Big[\frac{(n-1)S^2}{\chi^2_{b,n-1}}\le\sigma^2\le\frac{(n-1)S^2}{\chi^2_{a,n-1}}\Big]=b-a, \quad (8-32)$$

and when a specific numerical value $s^2$ is provided for $S^2$, we obtain a confidence interval with limits $(n-1)s^2/\chi^2_{b,n-1}$ and $(n-1)s^2/\chi^2_{a,n-1}$ and degree of confidence $b-a$.
EXAMPLE 8-7

With reference once more to Example 8-4, in which a sample variance of 0.00121 m² is calculated from 20 independent measurements of a distance, construct a 0.95 confidence interval for the population variance, $\sigma^2$.

Solution

For $b-a=0.95$ and complementary percentiles, $a=0.025$ and $b=0.975$. Degrees of freedom $=n-1=19$. From Table II, Appendix B, $\chi^2_{0.025,19}=8.91$ and $\chi^2_{0.975,19}=32.9$. Thus,

$$\frac{(n-1)s^2}{\chi^2_{0.975,19}}=\frac{(19)(0.00121)}{32.9}=0.00071\ \mathrm{m}^2$$

and

$$\frac{(n-1)s^2}{\chi^2_{0.025,19}}=\frac{(19)(0.00121)}{8.91}=0.00258\ \mathrm{m}^2.$$

Thus, we can say with 95% confidence that $\sigma^2$ lies in the interval 0.00071 m² to 0.00258 m².
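The variance interval above ($s^2=0.00121$ m², $n=20$) can be sketched the same way; the chi-square percentiles are copied from Table II, not computed:

```python
# Confidence interval for the population variance, Eq. (8-32).
n, s2 = 20, 0.00121
chi2_lo, chi2_hi = 8.91, 32.9      # chi^2_{0.025,19} and chi^2_{0.975,19}
lower = (n - 1) * s2 / chi2_hi     # about 0.0007 m^2
upper = (n - 1) * s2 / chi2_lo     # about 0.00258 m^2
print(round(lower, 5), round(upper, 5))
```

Note that the lower limit rounds the interval limits slightly differently than the text's 0.00071 m²; the difference is in the last digit only.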
A test of a statistical hypothesis $H_0$ is based upon a sample drawn from a population rather than upon the entire population itself. Four possible outcomes can occur:

1. $H_0$ is accepted, when $H_0$ is true.
2. $H_0$ is rejected, when $H_0$ is true.
3. $H_0$ is accepted, when $H_0$ is false.
4. $H_0$ is rejected, when $H_0$ is false.

If outcome (1) or outcome (4) occurs, no error is made in that the correct course of action has been taken. Outcome (2) is known as a Type I error; outcome (3) is known as a Type II error.

The size of the Type I error, designated $\alpha$, is defined as the probability of rejecting $H_0$ when $H_0$ is true, i.e.,

$$\alpha=P[\text{reject } H_0\mid H_0 \text{ is true}]. \quad (8-33)$$
When $\alpha$ is fixed at some level for the test of $H_0$ and is expressed as a percentage, it is known as the significance level of the test. Although the choice of significance level is arbitrary, common practice designates a significance level of 5% as "significant" and 1% as "highly significant".
To test the hypothesis $H_0:\mu=\mu_0$ against the alternative $H_1:\mu\ne\mu_0$, the distribution of the sample mean $\bar X$ is used. From Eq. (8-28), the following probability statement applies if $\sigma$ is known:

$$P[(\mu_0-c)\le\bar X\le(\mu_0+c)]=2\Phi(z)-1, \quad (8-34)$$

where $c=z\sigma/\sqrt n$. If $\sigma$ is unknown, the following probability statement can be derived from Eq. (8-30):

$$P[(\mu_0-c)\le\bar X\le(\mu_0+c)]=2F(t)-1, \quad (8-35)$$

where $c=ts/\sqrt n$.
$H_0$ is accepted if $\bar x$ lies between $\mu_0-c$ and $\mu_0+c$; otherwise it is rejected. The regions of acceptance and rejection are shown in Fig. 8-4. If $\alpha$ is the probability that $H_0$ is rejected when it is true, then $1-\alpha$ must be the probability that $H_0$ is accepted when it is true. It follows, then, that

$$1-\alpha=2\Phi(z)-1 \quad \text{for } \sigma \text{ known} \quad (8\text{-}36a)$$

$$1-\alpha=2F(t)-1 \quad \text{for } \sigma \text{ unknown}. \quad (8\text{-}36b)$$
Fig. 8-4.
Solving for $\Phi(z)$ or $F(t)$, we get

$$\Phi(z)=1-\frac{\alpha}{2} \quad (8\text{-}37a)$$

or

$$F(t)=1-\frac{\alpha}{2}. \quad (8\text{-}37b)$$

Thus, the value of $z$ or $t$ is obtained from the significance level of the test, $\alpha$. Specifically, Table I of Appendix B is used to evaluate $z$; Table III of Appendix B is used to evaluate $t$. [Note: In Table III, $t=t_{p,n-1}$, where $p=F(t)$.]
EXAMPLE 8-8

An angle is measured 10 times. Each measurement is independent and made with the same precision, i.e., the 10 measurements constitute a random sample of size 10. The sample mean and sample standard deviation are calculated from the measurements: $\bar x=42°12'14.6''$, $s=3.7''$. Test at the 5% level of significance the hypothesis that $\mu$, the population mean of the measurements, is $42°12'16.0''$ against the alternative that $\mu$ is not $42°12'16.0''$.
Solution

Here $\mu_0=42°12'16.0''$. For a 5% level of significance, $\alpha=0.05$. Thus, $p=F(t)=1-\alpha/2=0.975$. Degrees of freedom $=n-1=9$, and from Table III, $t_{0.975,9}=2.26$. Thus,

$$c=\frac{ts}{\sqrt n}=\frac{(2.26)(3.7'')}{\sqrt{10}}=2.6''$$

and so

$$\mu_0-c=42°12'13.4'' \quad\text{and}\quad \mu_0+c=42°12'18.6''.$$

Since $\bar x=42°12'14.6''$ lies between $\mu_0-c$ and $\mu_0+c$, the hypothesis $H_0$ is accepted.
To test the hypothesis $H_0$ that the population variance is $\sigma_0^2$ against the alternative that it is not $\sigma_0^2$, the distribution of $(n-1)S^2/\sigma_0^2$ is used. From Eq. (8-31),

$$P\Big[\chi^2_{a,n-1}\le\frac{(n-1)S^2}{\sigma_0^2}\le\chi^2_{b,n-1}\Big]=b-a, \quad (8-38)$$

which can be rearranged to give

$$P\Big[\frac{\chi^2_{a,n-1}\,\sigma_0^2}{n-1}\le S^2\le\frac{\chi^2_{b,n-1}\,\sigma_0^2}{n-1}\Big]=b-a. \quad (8-39)$$

$H_0$ is accepted if $s^2$ lies between $\chi^2_{a,n-1}\sigma_0^2/(n-1)$ and $\chi^2_{b,n-1}\sigma_0^2/(n-1)$; otherwise $H_0$ is rejected. The regions of acceptance and rejection are shown in Fig. 8-5. The significance level of the test is

$$\alpha=1-(b-a), \quad (8-40)$$

Fig. 8-5.

and for $a$ and $b$ chosen as complementary percentiles we obtain

$$a=\frac{\alpha}{2} \quad (8-41)$$

and

$$b=1-\frac{\alpha}{2}. \quad (8-42)$$
This test procedure can, of course, be used to test the standard deviation as well as the
variance.
EXAMPLE 8-9
Referring to the data in Example 8-8, test at the 5% level of significance the hypothesis that $\sigma$, the population standard deviation of the measurements, is 2.0'' against the alternative that $\sigma$ is not 2.0''.
Solution

Here $\sigma_0=2.0''$. For $\alpha=0.05$, $a=\alpha/2=0.025$ and $b=0.975$. Degrees of freedom $=n-1=9$. From Table II, Appendix B, $\chi^2_{0.025,9}=2.70$ and $\chi^2_{0.975,9}=19.0$. Thus,

$$\frac{\chi^2_{0.025,9}\,\sigma_0^2}{n-1}=\frac{(2.70)(2.0)^2}{9}=1.20\ \text{(seconds of arc)}^2$$

and

$$\frac{\chi^2_{0.975,9}\,\sigma_0^2}{n-1}=\frac{(19.0)(2.0)^2}{9}=8.44\ \text{(seconds of arc)}^2.$$

Now $s^2=(3.7)^2=13.7$ (seconds of arc)², which lies outside the acceptance interval 1.20 to 8.44, and so the hypothesis $H_0$ is rejected.
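Example 8-9 in the same style (chi-square percentiles 2.70 and 19.0 from Table II):

```python
# Two-sided test of the variance, Eq. (8-39).
n, s, sigma0 = 10, 3.7, 2.0
lo = 2.70 * sigma0**2 / (n - 1)   # lower acceptance limit
hi = 19.0 * sigma0**2 / (n - 1)   # upper acceptance limit
s2 = s * s
accept = lo <= s2 <= hi
print(round(lo, 2), round(hi, 2), round(s2, 2), accept)  # 1.2 8.44 13.69 False
```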
The joint density function of two normally distributed random variables $X$ and $Y$ is

$$f(x,y)=\frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}\exp\Big\{-\frac{1}{2(1-\rho^2)}\Big[\Big(\frac{x-\mu_x}{\sigma_x}\Big)^2-2\rho\Big(\frac{x-\mu_x}{\sigma_x}\Big)\Big(\frac{y-\mu_y}{\sigma_y}\Big)+\Big(\frac{y-\mu_y}{\sigma_y}\Big)^2\Big]\Big\}, \quad (8-43)$$

in which $\mu_x$, $\mu_y$, $\sigma_x$, and $\sigma_y$ are, respectively, the means and standard deviations of $X$ and $Y$, and $\rho$ is the correlation coefficient. The marginal density function of $X$ is

$$f(x)=\frac{1}{\sigma_x\sqrt{2\pi}}\exp\Big\{-\frac{1}{2}\Big(\frac{x-\mu_x}{\sigma_x}\Big)^2\Big\} \quad (8-44)$$
Fig. 8-6.
and

$$f(y)=\frac{1}{\sigma_y\sqrt{2\pi}}\exp\Big\{-\frac{1}{2}\Big(\frac{y-\mu_y}{\sigma_y}\Big)^2\Big\}, \quad (8-45)$$
which are the usual density functions for individual normally distributed random variables.
The two marginal density functions are also shown in Fig. 8-6.
A plane that is parallel to the x, y coordinate plane will cut the bivariate density surface in
an ellipse (see Fig. 8-6). The equation of this ellipse is obtained by setting f (x, y) in Eq.
(8-43) equal to the height K of the intersecting plane above the x, y plane, and simplifying.
The result is
$$\Big(\frac{x-\mu_x}{\sigma_x}\Big)^2-2\rho\Big(\frac{x-\mu_x}{\sigma_x}\Big)\Big(\frac{y-\mu_y}{\sigma_y}\Big)+\Big(\frac{y-\mu_y}{\sigma_y}\Big)^2=(1-\rho^2)c^2, \quad (8-46)$$

where $c^2=-\ln\big[4\pi^2K^2\sigma_x^2\sigma_y^2(1-\rho^2)\big]$, a constant.
For example, with $\mu_x=4$, $\mu_y=5$, $\sigma_x=1$, $\sigma_y=0.5$, $\rho=0.5$, and $K=0.1$,

$$c^2=-\ln\big[4\pi^2(0.1)^2(1)^2(0.5)^2(1-0.25)\big]=2.60$$

and

$$(1-\rho^2)c^2=(1-0.25)(2.60)=1.95,$$

so Eq. (8-46) becomes

$$\Big(\frac{x-4}{1}\Big)^2-2(0.5)\Big(\frac{x-4}{1}\Big)\Big(\frac{y-5}{0.5}\Big)+\Big(\frac{y-5}{0.5}\Big)^2=1.95.$$

Simplifying, we get

$$(x-4)^2-2(x-4)(y-5)+4(y-5)^2=1.95.$$

Letting $u=x-4$ and $v=y-5$, we have

$$u^2-2uv+4v^2=1.95.$$

Solving for $u$ in terms of $v$, we get

$$u=v\pm\sqrt{1.95-3v^2}.$$

Thus,

$$x=4+(y-5)\pm\sqrt{1.95-3(y-5)^2}$$

or

$$x=y-1\pm\sqrt{1.95-3(y-5)^2}.$$
Values for x and y are listed in Table 8-4, and the ellipse is plotted in Fig. 8-7.
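The constant of Eq. (8-46) and points of the ellipse can be generated directly; a sketch using the parameter values of this example:

```python
import math

# c^2 from Eq. (8-46) and the two x values on the ellipse at y = 5.2.
sx, sy, rho, K = 1.0, 0.5, 0.5, 0.1
c2 = -math.log(4 * math.pi**2 * K**2 * sx**2 * sy**2 * (1 - rho**2))
rhs = (1 - rho**2) * c2                 # (1 - rho^2) c^2, about 1.95
v = 5.2 - 5.0                           # v = y - mu_y
root = math.sqrt(rhs - 3 * v**2)
x_hi, x_lo = 5.2 - 1 + root, 5.2 - 1 - root   # x = y - 1 +/- sqrt(...)
print(round(rhs, 2), round(x_hi, 2), round(x_lo, 2))
```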
It can be shown through appropriate differentiation of Eq. (8-46) that the extreme points of
the ellipse (A, B, C, and D in Fig. 8-7), have the following coordinates:
Table 8-4. [Paired x and y values on the ellipse, for y = 4.2, 4.4, ..., 5.8; plotted in Fig. 8-7.]
Fig. 8-7.
If the ellipse is enclosed within an imaginary box, indicated by the broken lines in Fig. 8-7, we see that the half-dimensions of the box are $c\sigma_x$ and $c\sigma_y$. We can also see that $\rho$ acts as a proportioning factor in locating A, B, C, and D.

Points E and G (Fig. 8-7) on the ellipse can be located by setting $x-\mu_x$ in Eq. (8-46) equal to zero and solving for $y$:

$$y=\mu_y\pm c\sigma_y\sqrt{1-\rho^2}.$$

Similarly, setting $y-\mu_y$ equal to zero and solving for $x$ gives

$$x=\mu_x\pm c\sigma_x\sqrt{1-\rho^2}.$$
When $\rho=0$ and $\sigma_x=\sigma_y=\sigma$, the ellipse degenerates into the circle

$$(x-\mu_x)^2+(y-\mu_y)^2=\sigma^2c^2. \quad (8-47)$$

When $\mu_x=\mu_y=0$, the density function of Eq. (8-43) becomes

$$f(x,y)=\frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}\exp\Big\{-\frac{1}{2(1-\rho^2)}\Big[\frac{x^2}{\sigma_x^2}-\frac{2\rho xy}{\sigma_x\sigma_y}+\frac{y^2}{\sigma_y^2}\Big]\Big\} \quad (8-48)$$

and the equation of the ellipse becomes

$$\frac{x^2}{\sigma_x^2}-\frac{2\rho xy}{\sigma_x\sigma_y}+\frac{y^2}{\sigma_y^2}=(1-\rho^2)c^2. \quad (8-49)$$
Equation (8-49) represents a family of error ellipses centered on the origin of the x, y
coordinate system. When c 1 , Eq. (8-49) is the equation of the standard error ellipse.
The size, shape and orientation of the standard error ellipse are governed by the
distribution parameters x , y and . Six examples illustrating the effects of different
combinations of distribution parameters are shown in Fig. 8-8.
A typical standard error ellipse is shown in Fig. 8-9. Since $c=1$, the box (broken line) that encloses the ellipse has half-dimensions $\sigma_x$ and $\sigma_y$. In general, the principal axes of the ellipse, $x'$ and $y'$, do not coincide with the $x$ and $y$ axes.
Fig. 8-8.
Let $X'$ and $Y'$ be the components of the random error vector along the principal axes; the orthogonal rotation through the angle $\theta$ (Fig. 8-9) is

$$\begin{bmatrix}X'\\Y'\end{bmatrix}=\begin{bmatrix}\cos\theta&\sin\theta\\-\sin\theta&\cos\theta\end{bmatrix}\begin{bmatrix}X\\Y\end{bmatrix}. \quad (8-50)$$

The variances of $X'$ and $Y'$ are $\sigma_{x'}^2$ and $\sigma_{y'}^2$, respectively. The off-diagonal terms in the covariance matrix for $[X'\;Y']^t$ are zero because $X'$ and $Y'$ are uncorrelated ($x'$ and $y'$ are the principal axes). Propagating the covariance matrix of $[X\;Y]^t$ through Eq. (8-50) gives

$$\begin{bmatrix}\sigma_{x'}^2&0\\0&\sigma_{y'}^2\end{bmatrix}=\begin{bmatrix}\cos\theta&\sin\theta\\-\sin\theta&\cos\theta\end{bmatrix}\begin{bmatrix}\sigma_x^2&\sigma_{xy}\\\sigma_{xy}&\sigma_y^2\end{bmatrix}\begin{bmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{bmatrix}. \quad (8-51)$$
Expanding Eq. (8-51), we obtain

$$\sigma_{x'}^2=\sigma_x^2\cos^2\theta+2\sigma_{xy}\sin\theta\cos\theta+\sigma_y^2\sin^2\theta \quad (8-52)$$

$$\sigma_{y'}^2=\sigma_x^2\sin^2\theta-2\sigma_{xy}\sin\theta\cos\theta+\sigma_y^2\cos^2\theta \quad (8-53)$$

$$\tfrac{1}{2}\big(\sigma_y^2-\sigma_x^2\big)\sin2\theta+\sigma_{xy}\cos2\theta=0, \quad (8-54)$$

from which we get

$$\tan2\theta=\frac{2\sigma_{xy}}{\sigma_x^2-\sigma_y^2}. \quad (8-55)$$

The quadrant of $2\theta$ is determined by the signs of the numerator and denominator of Eq. (8-55).
Eliminating $\theta$ from Eqs. (8-52) and (8-53) results in the following expressions for the variances of $X'$ and $Y'$:

$$\sigma_{x'}^2=\frac{\sigma_x^2+\sigma_y^2}{2}+\Big[\Big(\frac{\sigma_x^2-\sigma_y^2}{2}\Big)^2+\sigma_{xy}^2\Big]^{1/2} \quad (8-56)$$

$$\sigma_{y'}^2=\frac{\sigma_x^2+\sigma_y^2}{2}-\Big[\Big(\frac{\sigma_x^2-\sigma_y^2}{2}\Big)^2+\sigma_{xy}^2\Big]^{1/2}. \quad (8-57)$$

The standard deviations $\sigma_{x'}$ and $\sigma_{y'}$ are the semimajor axis and semiminor axis, respectively, of the standard error ellipse.
It can be demonstrated that the variances $\sigma_{x'}^2$ and $\sigma_{y'}^2$ are the eigenvalues of the covariance matrix of the random vector $[X\;Y]^t$.
EXAMPLE 8-11

The random error in the position of a survey station is expressed by a bivariate normal distribution with parameters $\mu_x=\mu_y=0$, $\sigma_x=0.22$ m, $\sigma_y=0.14$ m, and $\rho=0.80$. Evaluate the semimajor axis, semiminor axis, and orientation of the standard error ellipse associated with this position error.
Solution

$$\sigma_{xy}=\rho\sigma_x\sigma_y=(0.80)(0.22)(0.14)=0.0246\ \mathrm{m}^2,$$

$$\frac{\sigma_x^2+\sigma_y^2}{2}=\frac{(0.22)^2+(0.14)^2}{2}=0.0340\ \mathrm{m}^2,$$

and

$$\Big[\Big(\frac{\sigma_x^2-\sigma_y^2}{2}\Big)^2+\sigma_{xy}^2\Big]^{1/2}=\Big[\Big(\frac{(0.22)^2-(0.14)^2}{2}\Big)^2+(0.0246)^2\Big]^{1/2}=0.0285\ \mathrm{m}^2.$$

Thus, from Eqs. (8-56) and (8-57),

$$\sigma_{x'}^2=0.0340+0.0285=0.0625\ \mathrm{m}^2 \quad\text{and}\quad \sigma_{y'}^2=0.0340-0.0285=0.0055\ \mathrm{m}^2,$$

so the semimajor axis is

$$\sigma_{x'}=\sqrt{0.0625}=0.25\ \mathrm{m}$$

and the semiminor axis is

$$\sigma_{y'}=\sqrt{0.0055}=0.074\ \mathrm{m}.$$

Now, from Eq. (8-55),

$$\tan2\theta=\frac{2\sigma_{xy}}{\sigma_x^2-\sigma_y^2}=\frac{2(0.0246)}{(0.22)^2-(0.14)^2}=1.711.$$

Since $\sigma_{xy}$ and $\sigma_x^2-\sigma_y^2$ are both positive, $2\theta$ lies in the first quadrant. Thus,

$$2\theta=\arctan(1.711)=59.7°, \qquad \theta=29.8°.$$
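The computations of Example 8-11 condense to a few lines; `math.atan2` keeps the quadrant of $2\theta$ straight automatically:

```python
import math

# Standard error ellipse from Eqs. (8-55)-(8-57), data of Example 8-11.
sx, sy, rho = 0.22, 0.14, 0.80
sxy = rho * sx * sy
mean = (sx**2 + sy**2) / 2.0
root = math.sqrt(((sx**2 - sy**2) / 2.0) ** 2 + sxy**2)
semi_major = math.sqrt(mean + root)      # sigma_x'
semi_minor = math.sqrt(mean - root)      # sigma_y'
theta = 0.5 * math.degrees(math.atan2(2 * sxy, sx**2 - sy**2))
print(round(semi_major, 2), round(semi_minor, 3), round(theta, 1))  # 0.25 0.074 29.8
```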
When $\rho=0$, Eq. (8-49) reduces to

$$\frac{x^2}{\sigma_x^2}+\frac{y^2}{\sigma_y^2}=c^2. \quad (8-58)$$
Now consider the position of a point defined by the two random errors X and Y. This point will lie on or within the error ellipse if

$$\frac{X^2}{\sigma_x^2}+\frac{Y^2}{\sigma_y^2}\le c^2. \quad (8-59)$$

Since X and Y are two independent normal random variables with zero means, the random variable

$$U=\frac{X^2}{\sigma_x^2}+\frac{Y^2}{\sigma_y^2} \quad (8-60)$$

has a chi-square distribution with two degrees of freedom. The probability density function of U can be easily derived from the general chi-square density function, Eq. (8-3), noting that here there are two degrees of freedom:

$$f(u)=\tfrac{1}{2}e^{-u/2} \quad\text{for } u\ge0. \quad (8-61)$$

The probability that the position given by values of X and Y lies on or within the error ellipse is

$$P\Big[\frac{X^2}{\sigma_x^2}+\frac{Y^2}{\sigma_y^2}\le c^2\Big]=P[U\le c^2]=\int_0^{c^2}\tfrac{1}{2}e^{-u/2}\,du=1-e^{-c^2/2}. \quad (8-62)$$

Values of $P[U\le c^2]$ for selected values of $c$ are given in Table 8-5. Since for the standard error ellipse $c=1$, we see from Table 8-5 that the probability is 0.394 that the position of a point plotted from the two random errors will lie on or within the standard error ellipse.
Table 8-5

  c        P[U <= c²]
  1.000    0.394
  1.177    0.500
  1.414    0.632
  2.000    0.865
  2.146    0.900
  2.447    0.950
  3.000    0.989
  3.035    0.990
  3.500    0.998
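Table 8-5 is generated by Eq. (8-62); a one-function sketch:

```python
import math

# P[U <= c^2] = 1 - exp(-c^2 / 2), Eq. (8-62).
def p_inside(c):
    return 1.0 - math.exp(-c * c / 2.0)

for c in (1.000, 1.414, 2.146, 2.447, 3.035):
    print(c, round(p_inside(c), 3))
```

The values agree with the table entries to the rounding used there.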
EXAMPLE 8-12
For the random position error given in Example 8-11, evaluate the semimajor and semiminor axes of the error ellipse within which it is 0.90 probable that the error in position will lie.
Solution

For $P[U\le c^2]=0.90$, Table 8-5 gives $c=2.146$. For $\sigma_x=0.22$ m and $\sigma_y=0.14$ m, the standard error ellipse has semimajor axis $\sigma_{x'}=0.25$ m and semiminor axis $\sigma_{y'}=0.074$ m (Example 8-11). The 0.90 probability error ellipse is the standard error ellipse scaled by $c$, so its semimajor axis is $(2.146)(0.25)=0.54$ m and its semiminor axis is $(2.146)(0.074)=0.16$ m.
PROBLEMS

8-1

Show that the distribution function for the chi-square random variable with two degrees of freedom is

$$F(y)=1-e^{-y/2} \quad\text{for } y\ge0.$$

Evaluate F(y) for y = 0.297, 3.36, and 9.49 and compare with the corresponding entries in Table II of Appendix B.
8-2

Show that the distribution function for the random variable T in Example 8-2 is

$$F(t)=\frac{1}{2}+\frac{t(t^2+6)}{2(t^2+4)^{3/2}}.$$

Evaluate F(t) for t = 0.134, 0.741, and 2.13 and compare with the corresponding entries in Table III of Appendix B.
231.354 m
231.312 m
231.320 m
231.361 m
231.322 m
Evaluate the sample mean, sample median, sample midrange, sample range, sample mean
deviation, sample variance, and sample standard deviation.
8-5
The following 20 observations of a pair of angles, α and β, are obtained:
OBSERVATION      α               β
     1      31°14'16.2"    42°08'24.0"
     2      31°14'15.2"    42°08'24.4"
     3      31°14'15.6"    42°08'24.5"
     4      31°14'14.5"    42°08'23.8"
     5      31°14'14.0"    42°08'25.7"
     6      31°14'15.8"    42°08'26.1"
     7      31°14'16.0"    42°08'21.8"
     8      31°14'14.1"    42°08'23.3"
     9      31°14'16.4"    42°08'24.8"
    10      31°14'13.6"    42°08'23.2"
    11      31°14'16.7"    42°08'25.7"
    12      31°14'14.1"    42°08'22.7"
    13      31°14'15.3"    42°08'26.2"
    14      31°14'15.2"    42°08'25.8"
    15      31°14'12.9"    42°08'25.3"
    16      31°14'17.9"    42°08'24.0"
    17      31°14'16.2"    42°08'27.4"
    18      31°14'14.2"    42°08'25.8"
    19      31°14'14.1"    42°08'23.7"
    20      31°14'14.8"    42°08'24.0"

From these data compute the sample means, the sample variances $s_\alpha^2$ and $s_\beta^2$, and the sample covariance $s_{\alpha\beta}$.
8-6

Angles α and β in Problem 8-5 are added to form a new angle γ = α + β. Compute a sample of 20 values for γ directly from the 20 pairs of α and β values in Problem 8-5. Then compute from these data the sample variance $s_\gamma^2$. Compare this value with the value obtained by propagation, using the sample variances and covariance for α and β, computed in Problem 8-5, as elements of the covariance matrix for $[\alpha\;\beta]^t$.
8-7
A distance is measured 10 times. All measurements are independent and have the same precision. The standard deviation of each measurement is known to be 0.025 m. The following values are obtained: 307.532 m, 307.500 m, 307.474 m, 307.549 m, 307.490 m, 307.527 m, 307.556 m, 307.502 m, 307.489 m, and 307.514 m. Evaluate the sample mean for the distance and the standard deviation of this sample mean. Construct 50% and 95% confidence intervals for the population mean of the distance.
8-8
If in Problem 8-7 the standard deviation of each measurement is not known, construct 50% and 95% confidence intervals for the unknown standard deviation, under the assumption that the measurements are normally distributed. Is there any significant difference between the sample standard deviation and the standard deviation (0.025 m) given in Problem 8-7?
8-9 An angle is measured six times with the following results:
4010'15.6"
4010'10.8"
4010'08.9"
4010'16.4"
4010' 13.5"
4010'11.0"
 1.  52°35'24"
 2.  52°35'28"
 3.  52°35'22"
 4.  52°35'20"
 5.  52°35'25"
 6.  52°35'29"
 7.  52°35'18"
 8.  52°35'26"
 9.  52°35'24"
10.  52°35'29"
11.  52°35'35"
12.  52°35'31"
13.  52°35'29"
14.  52°35'26"
15.  52°35'30"
16.  52°35'31"
1412.85 mm
1412.50 mm
1412.84 mm
1412.84 mm
1413.02 mm
1412.87 mm
1412.80
mm
mm
1412.66 mm
1412.84 mm
1412.72 mm
(c) $H_0:\sigma=0.08$ mm against $H_1:\sigma\ne0.08$ mm
(d) $H_0:\sigma=0.20$ mm against $H_1:\sigma\ne0.20$ mm
8-13
Two independent calibrations of a 50 m steel tape yield two different values for
its length: 50.0026 m, and 50.0008 m. The standard deviation of a single tape calibration is
known to be 0.7 mm.
(a) Test at the 2% level of significance for any significant difference between the two
calibration values.
(b) Assuming there is no significant difference between the two calibration values,
construct a 99% confidence interval for the length of the tape based upon the mean of the
two calibration values.
8-14
Plane coordinates X and Y of a survey station have a bivariate normal distribution. The mean and standard deviation of X are 1700.50 m and 0.20 m, respectively; the
mean and standard deviation of Y are 810.65 m and 0.10 m, respectively. The coefficient
of correlation between X and Y is 0.60. Evaluate the principal dimensions (semimajor and
semiminor axes) and orientation of the standard error ellipse associated with this survey
station position.
8-15
The following covariance matrix is associated with the random error in the
horizontal (x, y) position of a point:
$$\begin{bmatrix}0.090&0.096\\0.096&0.160\end{bmatrix}\ \mathrm{m}^2$$
8-16

Suppose $\mu_x=\mu_y=0$, $\sigma_x=\sigma_y=\sigma$, and $\sigma_{xy}=0$. Show that the distance $R=\sqrt{X^2+Y^2}$ has the density function

$$f(r)=\frac{r}{\sigma^2}\exp\Big(-\frac{r^2}{2\sigma^2}\Big) \quad\text{for } r\ge0.$$

This is the density function of the Rayleigh distribution. Evaluate $P[R\le\sigma]$ and compare with the corresponding entry in Table 8-5.
8-17

The computation of a closed traverse results in the following misclosure in the x and y coordinates:

$$x_c-x_0=3.3\ \mathrm{cm}, \qquad y_c-y_0=6.9\ \mathrm{cm},$$

where $x_0$ and $y_0$ are the given coordinates of the origin of the survey and $x_c$ and $y_c$ are the computed coordinates. The covariance matrix of the misclosure is

$$\begin{bmatrix}4.63&0.87\\0.87&8.83\end{bmatrix}\ \mathrm{cm}^2.$$

Is the misclosure acceptable in the sense that it lies within the 0.95 probability error ellipse?
8-18

The x, y position of a survey station is computed by the method of least squares. The initial (approximate) position of the station is given by $x_0=1040.60$ m and $y_0=2143.50$ m, and the normal equations of the solution are

$$\begin{bmatrix}1.125&0.250\\0.250&0.500\end{bmatrix}\begin{bmatrix}\Delta x\\\Delta y\end{bmatrix}=\begin{bmatrix}0.40\\0.20\end{bmatrix}.$$

The reference variance is 0.040 m². Assume the solution requires only one iteration.
Evaluate the least squares position of the survey station and the principal dimensions and
orientation angle of its 99% confidence region (an ellipse identical in size, shape and
orientation to the corresponding 0.99 probability error ellipse but centered on the least
squares position).
8-19

The principal axes of a standard error ellipse coincide with the x and y axes (Fig. 8-10). The standard deviations $\sigma_x$ and $\sigma_y$ are as shown in the figure. Let $\sigma_0$, represented by the length OP, be the standard deviation in any direction $\theta$. In general, P does not lie on the ellipse; instead, its locus is the so-called pedal curve, shown as a broken line in Fig. 8-10.

(a) Show that

$$\sigma_0^2=\sigma_x^2\cos^2\theta+\sigma_y^2\sin^2\theta.$$

(Hint: Put the x' axis in the direction of OP.)

(b) Then show that the equation of the pedal curve in the x, y coordinate system is

$$(x^2+y^2)^2-\sigma_x^2x^2-\sigma_y^2y^2=0.$$
8-20

A steel tape of length $l$ is used to measure the distance between two survey stations. A total of $n$ full tapelengths are required to measure the distance. Random errors $E_1, E_2, \ldots, E_n$ in aligning the tape introduce a positive error V in the observed distance. Assume these alignment errors are independent and normally distributed with zero mean and standard deviation $\sigma$, and that the effect of each error $E_i$ in the direction of taping can be approximated by $E_i^2/2l$, so that

$$V=\sum_{i=1}^{n}\frac{E_i^2}{2l}.$$

Evaluate $v$ such that $P[V\le v]=0.95$.
Fig. 8-10
9.1. INTRODUCTION
In Examples 3-5 and 4-6 a straight line is fitted through three points. In these examples,
the y-coordinates were assumed to be the observations, while the x-coordinates were
considered as constants. Based on the straight-line equation
$$y-ax-b=0 \quad (9-1)$$

the condition equations took the form for the adjustment of indirect observations

$$v+B\Delta=f. \quad (9-2)$$
The elements of the unknown parameters vector are the slope a and the y-intercept b, and
the residuals are those associated with the observed y-coordinates.
Let us now consider the case in which not only the y-coordinate but also the x-coordinate
of each point is an observed quantity. The straight-line equation (9-1), written for any
point, will then contain two observations, x and y, and two parameters, a and b. In this
form, it does not fit the condition equations form of either one of the two techniques of
least squares adjustment discussed in Chapter 4. In adjustment of indirect observations,
where the form of the condition equations is given by Eq. (9-2), each condition equation
contains only one observation. In adjustment of observations only, where the condition
equations are of the form
$$Av=f, \quad (9-3)$$
each condition equation contains observations only, with no parameters.

EXAMPLE 9-1

A straight line $y=ax+b$ is to be fitted to three points whose measured coordinates, and the variances of those coordinates, are as follows:

POINT   x (cm)   y (cm)   σx² (cm²)   σy² (cm²)
  1      2.00     3.20      0.04        0.10
  2      4.00     4.00      0.04        0.08
  3      6.00     5.00      0.04        0.08
These are precisely the same data as given for Example 4-6, Part (2), with variances for the
x-coordinates added. All measured coordinates are assumed to be uncorrelated. It is
required, under these new conditions, to find least squares estimates for the two parameters
a and b.
Solution
For any one of the three points, the equation of the straight line is
$$F=(y+v_y)-a(x+v_x)-b=0.$$

Since $a$ and $v_x$ are both unknown, this equation is nonlinear in the unknowns. It cannot therefore be used directly, as was the case in Example 4-6, but must be linearized as follows:

$$(y-a^0x-b^0)+\frac{\partial F}{\partial x}v_x+\frac{\partial F}{\partial y}v_y+\frac{\partial F}{\partial a}\Delta a+\frac{\partial F}{\partial b}\Delta b=0$$

or

$$a^0v_x-v_y+x\,\Delta a+\Delta b=y-a^0x-b^0,$$

where $a^0$ and $b^0$ are approximate values for the two unknown parameters. Combining the terms in the residuals together, and those in the parameter corrections together, and expressing the result in matrix notation, we have

$$\begin{bmatrix}a^0&-1\end{bmatrix}\begin{bmatrix}v_x\\v_y\end{bmatrix}+\begin{bmatrix}x&1\end{bmatrix}\begin{bmatrix}\Delta a\\\Delta b\end{bmatrix}=y-a^0x-b^0.$$
The linearized equations for all three points can be written in this form and combined as follows:

$$\begin{bmatrix}a^0&-1&0&0&0&0\\0&0&a^0&-1&0&0\\0&0&0&0&a^0&-1\end{bmatrix}\begin{bmatrix}v_{x1}\\v_{y1}\\v_{x2}\\v_{y2}\\v_{x3}\\v_{y3}\end{bmatrix}+\begin{bmatrix}x_1&1\\x_2&1\\x_3&1\end{bmatrix}\begin{bmatrix}\Delta a\\\Delta b\end{bmatrix}=\begin{bmatrix}y_1-a^0x_1-b^0\\y_2-a^0x_2-b^0\\y_3-a^0x_3-b^0\end{bmatrix},$$

which, symbolically, is

$$Av+B\Delta=f.$$
Values for the approximations $a^0$ and $b^0$ can be obtained by finding the equation of a straight line that passes through any two of the three points. Taking points 2 and 3, for example, we have

$$y_2=a^0x_2+b^0 \quad\text{and}\quad y_3=a^0x_3+b^0.$$

Solving for $a^0$ and $b^0$,

$$a^0=\frac{y_3-y_2}{x_3-x_2}=\frac{5.00-4.00}{6.00-4.00}=0.500$$

and

$$b^0=y_2-a^0x_2=4.00-(0.500)(4.00)=2.00.$$

Thus

$$A=\begin{bmatrix}0.5&-1&0&0&0&0\\0&0&0.5&-1&0&0\\0&0&0&0&0.5&-1\end{bmatrix}, \qquad B=\begin{bmatrix}2&1\\4&1\\6&1\end{bmatrix},$$

and the covariance matrix of the observations is

$$\Sigma=\mathrm{diag}(0.04,\,0.10,\,0.04,\,0.08,\,0.04,\,0.08)\ \mathrm{cm}^2.$$
The misclosure vector is

$$f=\begin{bmatrix}y_1-a^0x_1-b^0\\y_2-a^0x_2-b^0\\y_3-a^0x_3-b^0\end{bmatrix}=\begin{bmatrix}3.20-(0.5)(2.00)-2.00\\4.00-(0.5)(4.00)-2.00\\5.00-(0.5)(6.00)-2.00\end{bmatrix}=\begin{bmatrix}0.20\\0\\0\end{bmatrix}\ \mathrm{cm}.$$

Introducing the equivalent residuals

$$v_c=Av,$$

the linearized condition equations become

$$v_c+B\Delta=f,$$
which is the form of Eq. (4-7) for the technique of adjustment of indirect observations.
From the general law of propagation of variances and covariances, expressed by Eq. (6-19), we can obtain the covariance matrix of the equivalent observations:

$$\Sigma_c=A\Sigma A^t=\begin{bmatrix}0.11&0&0\\0&0.09&0\\0&0&0.09\end{bmatrix}\ \mathrm{cm}^2.$$

Letting $\sigma_0^2=0.09\ \mathrm{cm}^2$, the weight matrix is

$$W_c=\sigma_0^2\,\Sigma_c^{-1}=\begin{bmatrix}0.818&0&0\\0&1&0\\0&0&1\end{bmatrix},$$
and the least squares solution, according to Eqs. (4-28), (4-29), and (4-38), is

$$N=B^tW_cB=\begin{bmatrix}55.272&11.636\\11.636&2.818\end{bmatrix}, \qquad t=B^tW_cf=\begin{bmatrix}0.3272\\0.1636\end{bmatrix},$$

$$N^{-1}=\begin{bmatrix}0.1384&-0.5715\\-0.5715&2.7147\end{bmatrix}, \qquad \Delta=N^{-1}t=\begin{bmatrix}-0.0482\\0.2571\end{bmatrix}.$$

The corrected parameter values are

$$a^1=0.500-0.0482=0.4518, \qquad b^1=2.00+0.2571=2.2571\ \mathrm{cm}.$$

These values are used as the new approximations for a second iteration:

$$A_1=\begin{bmatrix}0.4518&-1&0&0&0&0\\0&0&0.4518&-1&0&0\\0&0&0&0&0.4518&-1\end{bmatrix}, \qquad B_1=\begin{bmatrix}2&1\\4&1\\6&1\end{bmatrix}\ \text{(as before)},$$

$$f_1=\begin{bmatrix}0.0393\\-0.0643\\0.0321\end{bmatrix}\ \mathrm{cm}, \qquad \Sigma_{c1}=A_1\Sigma A_1^t=\mathrm{diag}(0.1082,\,0.0882,\,0.0882)\ \mathrm{cm}^2, \qquad W_{c1}=\mathrm{diag}(0.8151,\,1,\,1),$$

$$N_1=B_1^tW_{c1}B_1=\begin{bmatrix}55.260&11.630\\11.630&2.815\end{bmatrix}, \qquad t_1=B_1^tW_{c1}f_1=\begin{bmatrix}-0.0005\\-0.0002\end{bmatrix},$$

$$N_1^{-1}=\begin{bmatrix}0.1387&-0.5729\\-0.5729&2.7219\end{bmatrix}, \qquad \Delta_1=N_1^{-1}t_1=\begin{bmatrix}0.00005\\-0.00026\end{bmatrix}.$$

Since the values in $\Delta_1$ are sufficiently small, the iterative procedure is terminated, and the final estimates of the parameters are

$$a=0.452, \qquad b=2.257\ \mathrm{cm}.$$
We have seen in the preceding example that in some problems the condition equations are
more readily written in the form $Av+B\Delta=f$ than in either of the two simpler forms discussed in Chapter 4. While it is possible in principle to solve any problem by any technique of least squares, it is quite often more convenient to select the technique whose condition-equation form arises most naturally.
9.2. DERIVATION
Given a mathematical model, the minimum number of observations necessary for its
unique determination is n 0 . When n measurements which are consistent with the model
are acquired, such that $n>n_0$, the redundancy is

$$r=n-n_0. \quad (9-4)$$
This means that among the n observational variables there exist r independent condition
equations (see Sections 3.3, 4.1, and 4.4). If we wish to carry unknown parameters into the
adjustment, then we must write an additional condition equation for each parameter.
Hence, if u parameters are carried, the number of condition equations will be
$$c=r+u. \quad (9-5)$$
The lower limit for the value of u is obviously zero, in which case no parameters are
carried and there are $c=r$ conditions among the observations only. On the other hand, the upper limit for the value of $u$ is $n_0$, for which $c=r+n_0=n$. Thus

$$0\le u\le n_0 \quad (9-6)$$

and

$$r\le c\le n, \quad (9-7)$$
the lower limits of which define the case for adjustment of observations only, and the
upper limits define the case for adjustment of indirect observations, the two special cases
treated in Chapter 4 (see Section 9.4). In many problems, the number of parameters carried
in the adjustment is neither zero nor n 0 . In such cases, neither one of the two special
techniques of Chapter 4 would be directly suitable. Instead, c condition equations exist
which, when they are originally linear, take the form

$$Av+B\Delta=d, \quad (9-8)$$

where $A$ is a $c\times n$ rectangular coefficient matrix, $B$ is a $c\times u$ rectangular coefficient matrix, $v$ is the $n\times1$ vector of residuals, $\Delta$ is the $u\times1$ vector of parameters, and $d$ is a $c\times1$ vector of constants. Let

$$\ell_c=A\ell \quad (9-9)$$

be a set of $c$ equivalent observations, and let

$$v_c=Av \quad (9-10)$$

be the corresponding $c$ residuals. If $Q$ is the cofactor matrix of the given observations, then $Q_c$, the cofactor matrix of the equivalent observations, is, according to Eq. (6-27), given by

$$Q_c=AQA^t. \quad (9-11)$$
In terms of the equivalent observations, the condition equations given by Eq. (9-8) become

$$v_c+B\Delta=d-\ell_c=f. \quad (9-12)$$

If the weight matrix of the equivalent observations is

$$W_c=Q_c^{-1}=(AQA^t)^{-1}, \quad (9-13)$$

then, following Eqs. (4-28) and (4-29),

$$N=B^tW_cB=B^t(AQA^t)^{-1}B \quad (9-14)$$

and

$$t=B^tW_cf=B^t(AQA^t)^{-1}f, \quad (9-15)$$
and the least squares estimates of the parameters are, from Eq. (4-38),
$$\Delta=N^{-1}t. \quad (9-16)$$
The reader should recognize that Eq. (9-13) is identical to Eq. (4-53) in Chapter 4, and
should understand that the use of the symbols Q c and Wc in Chapter 4 was justified since
they, indeed, represent cofactor and weight matrices, respectively, of the set of equivalent
observations.
In the second derivation procedure we shall apply the minimum criterion directly. Recall
from Chapter 4 that the minimum criterion for observations with a weight matrix W is
$$v^tWv=\text{minimum}. \quad (9-17)$$

The condition equations, Eq. (9-8), can be rewritten as

$$Av+B\Delta=f, \quad (9-18)$$

where

$$f=d-A\ell. \quad (9-19)$$
The more common case is to have the conditions nonlinear, with $c$ functions of the general form

$$F(\ell,x)=0, \quad (9-20)$$

in which $\ell$ is the vector of $n$ observations and $x$ the vector of $u$ parameters. Thus, the linearization of Eq. (9-20) leads to Eq. (9-18), with

$$A=\frac{\partial F}{\partial\ell}, \qquad B=\frac{\partial F}{\partial x}, \qquad f=-F(\ell,x^0), \quad (9-21)$$

where the three matrices A, B, and f are evaluated at the numerical values $\ell$ for the observations and a set of $u$ approximate values $x^0$ for the unknown parameters.
In a manner similar to the technique of adjustment of observations only (see Section 4.4), a
vector $k$ of $c$ Lagrange multipliers is used. The minimum criterion then becomes

$$\varphi'=v^tWv-2k^t(Av+B\Delta-f)=\text{minimum}. \quad (9-22)$$

To achieve a minimum, the partial derivatives of $\varphi'$ with respect to both $v$ and $\Delta$ must be equated to zero, or

$$\frac{\partial\varphi'}{\partial v}=2v^tW-2k^tA=0 \quad\text{and}\quad \frac{\partial\varphi'}{\partial\Delta}=-2k^tB=0,$$

from which

$$Wv=A^tk \quad (9-23)$$

and

$$B^tk=0. \quad (9-24)$$

From Eq. (9-23),

$$v=W^{-1}A^tk=QA^tk. \quad (9-25)$$
(9-25)
Substituting Eq. (9-25) into Eq. (9-18) gives

$$AQA^tk+B\Delta=f, \quad\text{i.e.,}\quad Q_ck=f-B\Delta. \quad (9-26)$$

Thus

$$k=Q_c^{-1}(f-B\Delta)=W_c(f-B\Delta), \quad (9-27)$$

and substituting this into Eq. (9-24) gives

$$B^tW_c(f-B\Delta)=0, \quad\text{or}\quad (B^tW_cB)\Delta=B^tW_cf, \quad (9-28)$$

i.e.,

$$N\Delta=t, \quad (9-29)$$

where
$$N=B^tW_cB=B^t(AQA^t)^{-1}B \quad\text{and}\quad t=B^tW_cf=B^t(AQA^t)^{-1}f,$$
which are precisely the relationships given by Eqs. (9-14) and (9-15). Equation (9-16) can
then be used to solve for $\Delta$.
If the condition equations are nonlinear, the vector estimate of the parameters is

$$x=x^0+\Delta. \quad (9-30)$$

The vector of residuals, $v$, is obtained by substituting the right-hand side of Eq. (9-27) for $k$ in Eq. (9-25):

$$v=QA^tW_c(f-B\Delta), \quad (9-31)$$

and the vector of adjusted observations is

$$\hat\ell=\ell+v. \quad (9-32)$$
EXAMPLE 9-2
Using the data of Example 9-1, calculate the adjusted coordinates of the three points.
Solution
From Example 9-1, $\sigma_0^2=0.09\ \mathrm{cm}^2$ and

$$\Sigma=\mathrm{diag}(0.04,\,0.10,\,0.04,\,0.08,\,0.04,\,0.08)\ \mathrm{cm}^2,$$

so that

$$Q=\frac{1}{\sigma_0^2}\Sigma=\mathrm{diag}(0.444,\,1.111,\,0.444,\,0.889,\,0.444,\,0.889).$$

From the final iteration of Example 9-1,

$$A=\begin{bmatrix}0.4518&-1&0&0&0&0\\0&0&0.4518&-1&0&0\\0&0&0&0&0.4518&-1\end{bmatrix}, \qquad B=\begin{bmatrix}2&1\\4&1\\6&1\end{bmatrix},$$

$$f=\begin{bmatrix}0.0393\\-0.0643\\0.0321\end{bmatrix}\ \mathrm{cm}, \qquad W_c=\begin{bmatrix}0.8151&0&0\\0&1&0\\0&0&1\end{bmatrix}, \qquad \Delta\approx\begin{bmatrix}0\\0\end{bmatrix}.$$
Thus, from Eq. (9-31),

$$v=QA^tW_c(f-B\Delta)=\begin{bmatrix}0.01\\-0.04\\-0.01\\0.06\\0.01\\-0.03\end{bmatrix}\ \mathrm{cm}$$

and, from Eq. (9-32), the adjusted coordinates are

$$\hat\ell=\ell+v=\begin{bmatrix}2.00+0.01\\3.20-0.04\\4.00-0.01\\4.00+0.06\\6.00+0.01\\5.00-0.03\end{bmatrix}\ \mathrm{cm},$$
POINT   x (cm)   y (cm)
  1      2.01     3.16
  2      3.99     4.06
  3      6.01     4.97
The reader can verify that these adjusted positions lie on the line $y=0.452x+2.26$.
To derive the cofactor matrix of the parameter estimates, write Eq. (9-16) as

$$\Delta=N^{-1}B^tW_cf. \quad (9-33)$$

The vector $f$ is then replaced by $d-A\ell$ to give

$$\Delta=N^{-1}B^tW_c(d-A\ell). \quad (9-34)$$

The only random vector in Eq. (9-34) is $\ell$. Its cofactor matrix is $Q$. Thus, the Jacobian of $\Delta$ with respect to $\ell$ is

$$J=-N^{-1}B^tW_cA, \quad (9-35)$$

and

$$Q_{\Delta\Delta}=JQJ^t=\big(N^{-1}B^tW_cA\big)Q\big(N^{-1}B^tW_cA\big)^t=N^{-1}B^tW_c(AQA^t)W_cBN^{-1}=N^{-1}, \quad (9-36)$$
since $W_c$ and $N$ are symmetric and, from Eqs. (9-13) and (9-14), $W_c=(AQA^t)^{-1}$ and $N=B^tW_cB$, respectively.
If the condition equations are originally linear, then $\Delta$ is the vector of estimated parameters, and $Q_{\Delta\Delta}$, given by Eq. (9-36), is the corresponding cofactor matrix. If, however, the condition equations are nonlinear, they are linearized by a series expansion and the estimated parameters are given by Eq. (9-30); the cofactor matrix of $x$ is then

$$Q_{xx}=Q_{\Delta\Delta}=N^{-1}, \quad (9-37)$$

and the covariance matrix of $x$ is

$$\Sigma_{xx}=\sigma_0^2Q_{xx}=\sigma_0^2N^{-1}. \quad (9-38)$$
EXAMPLE 9-3
With reference to Example 9-1, calculate the covariance matrix for the two parameter
estimates a and b .
Solution
From Example 9-1, the cofactor matrix of the parameter estimates is

$$Q_{xx}=Q_{\Delta\Delta}=N_1^{-1}=\begin{bmatrix}0.1387&-0.5729\\-0.5729&2.7219\end{bmatrix}.$$

With $\sigma_0^2=0.09\ \mathrm{cm}^2$, the covariance matrix is

$$\Sigma_{xx}=\sigma_0^2N_1^{-1}=\begin{bmatrix}0.0125&-0.0516\\-0.0516&0.2450\end{bmatrix}\ \mathrm{cm}^2.$$
The a posteriori estimate of the reference variance is

$$\hat\sigma_0^2=\frac{v^tWv}{c-u}=\frac{v^tWv}{r}, \quad (9-39)$$

and if $\sigma_0^2$ is not known a priori, this estimate is used in place of it in Eq. (9-38):

$$\Sigma_{xx}=\hat\sigma_0^2Q_{xx}=\hat\sigma_0^2N^{-1}. \quad (9-40)$$

The a priori value of $\sigma_0^2$, if given, can be statistically tested by using $\hat\sigma_0^2$ in place of $s^2$ and $r$ in place of $n-1$ in the chi-square test of the variance. $\hat\sigma_0^2$ is only one estimate from one data set with limited redundancy, while $\sigma_0^2$ is presumed to be far better known. Should $\hat\sigma_0^2$ turn out to be inconsistent with $\sigma_0^2$, then several steps are taken to determine the reason. [This topic is outside the scope of this book; for those interested, see Mikhail (1976).]
An expression for $v^tWv$ can be obtained by substituting the right-hand side of Eq. (9-31) for $v$:

$$v^tWv=\big[QA^tW_c(f-B\Delta)\big]^tW\big[QA^tW_c(f-B\Delta)\big],$$

which reduces to

$$v^tWv=f^tW_cf-t^t\Delta \quad (9-41)$$

if we note that $W_c=(AQA^t)^{-1}$, $N=B^tW_cB$, and $t=B^tW_cf$, that $Q$, $W$, and $W_c$ are symmetric, and that $Q=W^{-1}$.

When the condition equations are nonlinear, the iterative procedure is usually carried out until the final value of $\Delta$ is so small as to be essentially zero. Hence, for the nonlinear case, Eq. (9-41) reduces to

$$v^tWv=f^tW_cf. \quad (9-42)$$
*It is interesting to note that when $W=I$, $n_0=1$, and $u=1$, Eq. (9-39) gives $\hat\sigma_0^2=v^tv/(n-1)$, the familiar sample variance.
Substituting $\Delta=N^{-1}t$ into Eq. (9-27), we get

$$k=W_c(f-BN^{-1}t)=W_c\big(I-BN^{-1}B^tW_c\big)f=W_c\big(I-BN^{-1}B^tW_c\big)(d-A\ell). \quad (9-43)$$

Applying the general law of cofactor propagation to Eq. (9-43), and noting that $d$ is a vector of constants, we obtain

$$Q_{kk}=W_c\big(I-BN^{-1}B^tW_c\big)AQA^t\big(I-BN^{-1}B^tW_c\big)^tW_c,$$

which reduces to

$$Q_{kk}=W_c\big(I-BN^{-1}B^tW_c\big), \quad (9-44)$$

since $W_c(I-BN^{-1}B^tW_c)$ is idempotent* with respect to $Q_c=AQA^t$. Then, from $v=QA^tk$,

$$Q_{vv}=QA^tQ_{kk}AQ=QA^tW_c\big(I-BN^{-1}B^tW_c\big)AQ. \quad (9-45)$$

Similarly,

$$v=QA^tW_c\big(I-BN^{-1}B^tW_c\big)(d-A\ell), \quad (9-46)$$

and the adjusted observations are

$$\hat\ell=\ell+v=\big[I-QA^tW_c\big(I-BN^{-1}B^tW_c\big)A\big]\ell+QA^tW_c\big(I-BN^{-1}B^tW_c\big)d.$$
Applying cofactor propagation to this expression gives

$$Q_{\hat\ell\hat\ell}=\big[I-QA^tW_c(I-BN^{-1}B^tW_c)A\big]Q\big[I-QA^tW_c(I-BN^{-1}B^tW_c)A\big]^t,$$

which reduces to

$$Q_{\hat\ell\hat\ell}=Q-QA^tW_c\big(I-BN^{-1}B^tW_c\big)AQ, \quad (9-47)$$

i.e.,

$$Q_{\hat\ell\hat\ell}=Q-Q_{vv}. \quad (9-48)$$

*An idempotent matrix has the property that it is equal to its square.
As a numerical illustration, consider once more the line-fit adjustment of Examples 9-1 and 9-2, for which

$$Q=\mathrm{diag}(0.444,\,1.111,\,0.444,\,0.889,\,0.444,\,0.889),$$

$$A=\begin{bmatrix}0.4518&-1&0&0&0&0\\0&0&0.4518&-1&0&0\\0&0&0&0&0.4518&-1\end{bmatrix}, \qquad B=\begin{bmatrix}2&1\\4&1\\6&1\end{bmatrix},$$

$$W_c=\mathrm{diag}(0.8151,\,1,\,1), \qquad N^{-1}=\begin{bmatrix}0.1387&-0.5729\\-0.5729&2.7219\end{bmatrix}.$$

Thus,

$$BN^{-1}B^tW_c=\begin{bmatrix}0.8030&0.3941&-0.1969\\0.3212&0.3579&0.3217\\-0.1605&0.3217&0.8403\end{bmatrix}$$

and

$$I-BN^{-1}B^tW_c=\begin{bmatrix}0.1970&-0.3941&0.1969\\-0.3212&0.6421&-0.3217\\0.1605&-0.3217&0.1597\end{bmatrix}.$$
Also,

$$QA^t=\begin{bmatrix}0.2006&0&0\\-1.1110&0&0\\0&0.2006&0\\0&-0.8890&0\\0&0&0.2006\\0&0&-0.8890\end{bmatrix}.$$
Using Eq. (9-45), the cofactor matrix of the residual vector $v$ can be calculated:

$$Q_{vv}=QA^tW_c\big(I-BN^{-1}B^tW_c\big)AQ=\begin{bmatrix}0.006&&&&&\\-0.036&0.198&&&\text{Symmetric}&\\-0.013&0.071&0.026&&&\\0.057&-0.317&-0.114&0.507&&\\0.006&-0.036&-0.013&0.057&0.006&\\-0.029&0.159&0.057&-0.254&-0.028&0.126\end{bmatrix}.$$
From Eq. (9-48) we can get the cofactor matrix of the adjusted coordinates,

$$Q_{\hat\ell\hat\ell}=Q-Q_{vv}=\begin{bmatrix}0.438&&&&&\\0.036&0.913&&&\text{Symmetric}&\\0.013&-0.071&0.418&&&\\-0.057&0.317&0.114&0.382&&\\-0.006&0.036&0.013&-0.057&0.438&\\0.029&-0.159&-0.057&0.254&0.028&0.763\end{bmatrix}.$$

Finally, since $\sigma_0^2=0.09\ \mathrm{cm}^2$, the covariance matrix of the adjusted coordinates is
$$\Sigma_{\hat\ell\hat\ell}=\sigma_0^2Q_{\hat\ell\hat\ell}=\begin{bmatrix}0.0394&&&&&\\0.0032&0.0822&&&\text{Symmetric}&\\0.0012&-0.0064&0.0376&&&\\-0.0051&0.0285&0.0103&0.0344&&\\-0.0005&0.0032&0.0012&-0.0051&0.0394&\\0.0026&-0.0143&-0.0051&0.0229&0.0025&0.0687\end{bmatrix}\ \mathrm{cm}^2.$$
This special case is achieved when $u$ is equal to its upper limit, $n_0$, making $c$ equal to its upper limit, $n$. The condition equations are then

$$A_0v+B_0\Delta=f_0, \quad (9-49)$$

where $A_0$, $B_0$, and $f_0$ denote the coefficient matrices and constant vector for this case. When $n=c$, $A_0$ must be nonsingular, i.e., $A_0^{-1}$ exists. Premultiplying Eq. (9-49) by $A_0^{-1}$ and denoting $A_0^{-1}B_0$ and $A_0^{-1}f_0$ by $B$ and $f$, respectively, we get

$$v+B\Delta=f, \quad (9-50)$$

which is identical to Eq. (9-2), the form of the condition equations for adjustment of indirect observations. The distinctive feature of this equation is that the coefficient matrix of $v$ is an identity matrix.
With A replaced by the identity matrix, we have, from Eqs. (9-11), (9-13), (9-14), and (9-15),

$$Q_c=IQI^t=Q \quad (9-51)$$

$$W_c=Q_c^{-1}=Q^{-1}=W \quad (9-52)$$

$$N=B^tW_cB=B^tWB \quad (9-53)$$

$$t=B^tW_cf=B^tWf. \quad (9-54)$$
Equations (9-53) and (9-54) are identical to Eqs. (4-28) and (4-29), respectively,
developed in Chapter 4.
When $A=I$ and $W_c=W$, Eq. (9-31) for the residuals reduces to

$$v=f-B\Delta, \quad (9-55)$$

and Eqs. (9-45) and (9-47) reduce to

$$Q_{vv}=Q-BN^{-1}B^t \quad (9-56)$$

and

$$Q_{\hat\ell\hat\ell}=BN^{-1}B^t. \quad (9-57)$$
With reference once more to Section 9.2, this special case is achieved when u is equal to
its lower limit, zero, making c equal to its lower limit, r (the redundancy). With no
parameters carried in the adjustment, the $B\Delta$ term in Eq. (9-18) vanishes, leaving
Av f ,
(9-58)
which is identical to Eq. (9-3), the form of the condition equations for adjustment of
observations only.
To make the $B\Delta$ term vanish we can simply set $B=0$. If this is done, Eq. (9-27) reduces to

$$k=W_cf, \quad (9-59)$$

which is identical to Eq. (4-52). Equation (9-25), which is

$$v=QA^tk,$$

is unchanged, and Eqs. (9-45) and (9-47) reduce to

$$Q_{vv}=QA^tW_cAQ \quad (9-60)$$

and

$$Q_{\hat\ell\hat\ell}=Q-QA^tW_cAQ, \quad (9-61)$$

so that Eq. (9-48),

$$Q_{\hat\ell\hat\ell}=Q-Q_{vv},$$

is consistent with Eqs. (9-60) and (9-61).
This should make it clear that the technique of adjustment of observations only is also a
special case of the general procedure.
In principle, a problem that can be solved by the general method of adjustment can also be
solved by using the special procedures, provided appropriate transformations are made.
The necessary transformations may be relatively complicated, however, and it is not
advocated that they be attempted. Instead, it is suggested that the most appropriate of the
three available techniques be selected to solve a specific adjustment problem.
Selection of the most appropriate technique is based on experience.
The basic symbols and equations of the least squares method are summarized for quick
reference when solving adjustment problems.
Symbols

v — vector of residuals
Δ — vector of corrections to the parameter approximations
x — vector of parameters
x⁰ — vector of parameter approximations
d, f — constant vectors of the condition equations
n — number of observations
n₀ — minimum number of observations needed to determine the model
r — redundancy (degrees of freedom)
u — number of parameters carried in the adjustment
c — number of condition equations
A — coefficient matrix of v in the condition equations
B — coefficient matrix of Δ in the condition equations
Q — cofactor matrix of the observations
W — weight matrix of the observations, W = Q⁻¹
Qc — cofactor matrix of the equivalent observations
Wc — Wc = Qc⁻¹
N — coefficient matrix of the normal equations
t — constant vector of the normal equations
k — vector of correlates (Lagrange multipliers)
QΔΔ, Qxx — cofactor matrices of the estimated corrections and parameters
Qvv — cofactor matrix of the residuals
σ₀² — reference variance
Σxx, Σvv — covariance matrices of the estimated parameters and residuals

Basic relations:

r = n − n₀    (9-4)

c = r + u    (9-5)

0 ≤ u ≤ n₀    (9-6)

r ≤ c ≤ n    (9-7)

General case:

Av + BΔ = f    (9-18)

For linear condition equations, f = d − Aℓ, where ℓ is the vector of observations.    (9-19)

For nonlinear condition equations F(ℓ, x) = 0, linearized about ℓ and x⁰,

A = ∂F/∂ℓ, B = ∂F/∂x, both evaluated at (ℓ, x⁰),    (9-20)

and

f = −F(ℓ, x⁰).    (9-21)

Then

Qc = AQAᵗ    (9-11)

Wc = Qc⁻¹ = (AQAᵗ)⁻¹    (9-13)

N = BᵗWcB = Bᵗ(AQAᵗ)⁻¹B    (9-14)

t = BᵗWcf = Bᵗ(AQAᵗ)⁻¹f    (9-15)

Δ = N⁻¹t    (9-16)

x = x⁰ + Δ    (9-30)

v = QAᵗWc(f − BΔ)    (9-31)

l̄ = ℓ + v    (9-32)

QΔΔ = N⁻¹    (9-36)

Qxx = QΔΔ = N⁻¹    (9-37)

σ̂₀² = vᵗWv/(c − u) = vᵗWv/r    (9-39)

vᵗWv = fᵗWcf − tᵗΔ    (9-41)

Qvv = QAᵗWc(I − BN⁻¹BᵗWc)AQ    (9-47)

Q̄ = Q − Qvv    (9-48)

Σxx = σ₀²Qxx = σ₀²N⁻¹, if σ₀² is known a priori; Σ̂xx = σ̂₀²Qxx = σ̂₀²N⁻¹, if it is not.    (9-38)

Σvv = σ₀²Qvv, if σ₀² is known a priori; Σ̂vv = σ̂₀²Qvv, if it is not.    (9-40)
Adjustment of indirect observations: u = n₀ (upper limit in Eq. (9-6)), c = n (upper limit in Eq. (9-7)).

v + BΔ = f    (9-50)

Wc = Qc⁻¹ = Q⁻¹ = W    (9-52)

N = BᵗWcB = BᵗWB    (9-53)

t = BᵗWcf = BᵗWf    (9-54)

Δ = N⁻¹t    (9-16)

x = x⁰ + Δ (nonlinear case)    (9-30)

v = f − BΔ    (9-55)

l̄ = ℓ + v    (9-32)

QΔΔ = N⁻¹    (9-36)

Qxx = QΔΔ = N⁻¹    (9-37)

Qvv = Q − BN⁻¹Bᵗ    (9-56)

Q̄ = BN⁻¹Bᵗ    (9-57)

Adjustment of observations only: u = 0 (lower limit in Eq. (9-6)), c = r (lower limit in Eq. (9-7)).

Av = f    (9-58)

Qc = AQAᵗ    (9-11)

Wc = Qc⁻¹    (9-13)

k = Wcf    (9-59)

v = QAᵗk    (9-25)

l̄ = ℓ + v    (9-32)

Qvv = QAᵗWcAQ    (9-60)

Q̄ = Q − Qvv    (9-48)
PROBLEMS
9-1
Figure 9-1 depicts a plane isosceles triangle ABC. The sides and height of the triangle are observed (Fig. 9-1). Write suitable condition equations for this problem in the form Av + BΔ = f.
9-2 In Fig. 9-2, distances OA, AB, BC, and CO are observed. The observed values are ℓ₁, ℓ₂, ℓ₃, and ℓ₄. Required are least squares estimates for the coordinates of C (x₀ and y₀). (a) Write suitable condition equations for this problem in the form Av = f. (b) Write the condition equations in the form Av + BΔ = f, carrying x₀ and y₀ as the parameters.
9-3
Two sides and the three angles of the triangle in Fig. 9-3 are measured. The measured values are ℓ₁, ℓ₂, α₁, α₂, and α₃, as shown. Least squares estimates for side x and altitude h are required. Write suitable condition equations for this problem in the form Av + BΔ = f, carrying x and h as the parameters.
9-4
If in Problem 9-1, ℓ₁ = 1000.00 m, ℓ₂ = 1000.10 m, and ℓ₃ = 800.25 m, all uncorrelated and with equal precision, find the least squares estimate for x.
Fig. 9-2.
Fig. 9-3.
9-5 If in Problem 9-2, ℓ₁ = 1000.20 m, ℓ₂ = 500.55 m, ℓ₃ = 707.75 m, and ℓ₄ = 1118.60 m, and the covariance matrix of the observations is

Σ = [200 100 0 0; 100 200 0 0; 0 0 100 50; 0 0 50 100] cm²,

find the least squares estimates for x₀ and y₀. Also determine the principal dimensions and orientation of the standard error ellipse for the computed position of C.
9-6 If in Problem 9-3, ℓ₂ = 1550.00 m, α₁ = 40°00'00", and α₂ = 95°00'00", the observations are uncorrelated, the standard deviation of each observed angle is 15", and the standard deviation of each observed side is 0.10 m. Find least squares estimates for x and h. Evaluate also the covariance matrix for x and h, and construct 95% confidence intervals for x and h.
9-7
With reference to Fig. 9-4, several angles are measured. The covariance matrix of the observed angles is symmetric, with diagonal elements 4, adjacent off-diagonal elements 2, and remaining elements 0, in seconds of arc squared.
Fig. 9-4.
Find the least squares estimate for angle COD. Construct also a 99% confidence interval
for the angle COD.
9-8
The following data are observed:
x (m): 1.00  2.05  4.10  4.85  8.00  9.10
y (m): 2.95  3.05  3.60  4.30  4.80  5.25

All observations (x and y) are independent and have the same precision. Find least squares estimates for the parameters of the straight line y = ax + b fitted to the data. Find also the a posteriori estimate for the reference variance and use this value to estimate the standard deviation of each observation and the standard deviations of the estimates for a and b. Test at the 2% level of significance the hypothesis that b = 2.70 m.
9-9
Rework Example 4-8, Chapter 4, using the time as an observed variable as well as the altitude, with standard deviations of 1.0 s and 20", respectively.
9-10
Calculate the covariance matrix for the least squares estimates of the maximum
altitude and the time at which maximum altitude occurs, as determined in Problem 9-9.
Evaluate also the standard deviations and coefficient of correlation for these least squares
estimates.
9-11
The angles shown in Fig. 9-5 are measured with a theodolite. The observed
values and their weights are
ANGLE  OBSERVED VALUE  WEIGHT

70°14'30"
62°27'14"
103°38'26"
61°52'04"
109°10'04"
76°56'36"

Fig. 9-5.
(a) Use the method of least squares to determine adjusted values for these angles. (b)
Construct 95% confidence intervals for angles b and f.
9-12
The following data are given for the level net in Fig. 9-6
LINE  FROM  TO  OBSERVED ELEVATION DIFFERENCE (m)  DISTANCE (km)
1     A     B   +34.090                            2.90
2     B     C   +15.608                            6.15
3     C     A   −49.679                            17.32
4     A     D   +16.010                            15.84
5     D     B   +18.125                            9.22
6     D     C   +33.704                            10.50

Fig. 9-6.
Applications in Plane Coordinate Surveys
10.1. INTRODUCTION
Many survey projects are based upon two-dimensional positioning within a plane
rectangular coordinate system. This chapter addresses the application of least squares
adjustment in such plane coordinate surveys. It includes formulation and linearization of
the three basic condition equations (distance, azimuth, and angle) encountered in the
adjustment of plane coordinates by the method of indirect observations (also known as the
method of variation of coordinates), least squares position adjustment for a typical
procedure employed in plane coordinate surveying, and least squares transformation of
coordinates from one plane rectangular system to another.
10.2. THE DISTANCE CONDITION AND ITS LINEARIZATION
The adjusted distance, S ij , between two points i and j is given by
S̄ij = [(X̄j − X̄i)² + (Ȳj − Ȳi)²]^1/2,    (10-1)

which, linearized by Taylor series about approximate coordinate values, becomes

S̄ij = S⁰ij + (∂Sij/∂Xi)ΔXi + (∂Sij/∂Yi)ΔYi + (∂Sij/∂Xj)ΔXj + (∂Sij/∂Yj)ΔYj,    (10-2)

where

S⁰ij = [(X⁰j − X⁰i)² + (Y⁰j − Y⁰i)²]^1/2,    (10-3)

and (X⁰i, Y⁰i) and (X⁰j, Y⁰j) are the approximate coordinates of points i and j, respectively.
Equation (10-2) assumes that all four coordinates are unknown variables for which approximations are necessary. If one of the two points is a control point, the coordinates of this point would be known and would thus be taken as constants. In this case, Eq. (10-2) would include only two partial derivatives; namely, the derivatives with respect to the two coordinates of the unknown point. For example, if point j is a control point, Eq. (10-2) reduces to
S̄ij = S⁰ij + (∂Sij/∂Xi)ΔXi + (∂Sij/∂Yi)ΔYi,    (10-4)

in which

S⁰ij = [(Xj − X⁰i)² + (Yj − Y⁰i)²]^1/2,    (10-5)

noting that (Xj, Yj) are the known coordinates of the control point j, and (X⁰i, Y⁰i) are approximate values for the coordinates of the unknown point i. For the remainder of this section we will consider the general case of having all four coordinates as unknowns.
Now, according to Eq. (9-32), the adjusted distance is

S̄ij = Sij + vij,    (10-6)

where Sij is the observed value of the distance and vij is the corresponding residual. Substituting Eq. (10-6) into Eq. (10-2) and rearranging, we get

vij − (∂Sij/∂Xi)ΔXi − (∂Sij/∂Yi)ΔYi − (∂Sij/∂Xj)ΔXj − (∂Sij/∂Yj)ΔYj = S⁰ij − Sij.    (10-7)

Denoting the coefficients of the corrections by

b₁ = −∂Sij/∂Xi,  b₂ = −∂Sij/∂Yi,  b₃ = −∂Sij/∂Xj,  b₄ = −∂Sij/∂Yj,

and letting

fij = S⁰ij − Sij,    (10-8)

we obtain

vij + b₁ΔXi + b₂ΔYi + b₃ΔXj + b₄ΔYj = fij.    (10-9)

If we let

v = [vij],  B = [b₁ b₂ b₃ b₄],  Δ = [ΔXi ΔYi ΔXj ΔYj]ᵗ,  f = [fij],

we see that Eq. (10-9) can be written in the familiar matrix form

v + BΔ = f.    (10-10)
The coefficients are obtained by differentiating Eq. (10-1). For b₁,

b₁ = −∂Sij/∂Xi = −(1/2)[(Xj − Xi)² + (Yj − Yi)²]^−1/2 · 2(Xj − Xi)(−1) = (Xj − Xi)/Sij.    (10-11)

Since b₁ must be evaluated before the final (adjusted) coordinates of i and j are known, approximate values must be used. Thus,

b₁ = (X⁰j − X⁰i)/S⁰ij.    (10-12)

Similarly,

b₂ = (Y⁰j − Y⁰i)/S⁰ij,    (10-13)

b₃ = −b₁,    (10-14)

and

b₄ = −b₂.    (10-15)
10.3. THE AZIMUTH CONDITION AND ITS LINEARIZATION

The adjusted azimuth of the line from point i to point j (Fig. 10-1) is

ᾱij = arctan[(X̄j − X̄i)/(Ȳj − Ȳi)],    (10-16)

in which the signs of the numerator (X̄j − X̄i) and denominator (Ȳj − Ȳi) must be taken into account. If a computational aid, such as a hand calculator or computer, is used and it yields only a positive or negative acute angle θij, an appropriate constant must be added to θij in order to obtain the azimuth. Table 10-1 provides the
Fig. 10-1.
Xj − Xi   Yj − Yi   QUADRANT        AZIMUTH ᾱij
Positive  Positive  I (0°–90°)      ᾱij = θij
Positive  Negative  II (90°–180°)   ᾱij = θij + 180°
Negative  Negative  III (180°–270°) ᾱij = θij + 180°
Negative  Positive  IV (270°–360°)  ᾱij = θij + 360°
The linearized form of the azimuth condition is

ᾱij = α⁰ij + (∂αij/∂Xi)ΔXi + (∂αij/∂Yi)ΔYi + (∂αij/∂Xj)ΔXj + (∂αij/∂Yj)ΔYj,    (10-17)

where

α⁰ij = arctan[(X⁰j − X⁰i)/(Y⁰j − Y⁰i)].    (10-18)

Fig. 10-2.
According to Eq. (9-32), the adjusted azimuth is

ᾱij = αij + vij,    (10-19)

where αij is the observed value of the azimuth and vij is the corresponding residual. Substituting Eq. (10-19) into Eq. (10-17) and rearranging, we get

vij − (∂αij/∂Xi)ΔXi − (∂αij/∂Yi)ΔYi − (∂αij/∂Xj)ΔXj − (∂αij/∂Yj)ΔYj = α⁰ij − αij.    (10-20)

Denoting the coefficients of the corrections by b₁, b₂, b₃, b₄, as before, and letting

fij = α⁰ij − αij,    (10-21)

we obtain

vij + b₁ΔXi + b₂ΔYi + b₃ΔXj + b₄ΔYj = fij,    (10-22)
which can also be expressed in the usual matrix form as given by Eq. (10-10). To evaluate the coefficients b₁, b₂, b₃, and b₄, we use the derivative formula

d(arctan t)/dx = [1/(1 + t²)](dt/dx).    (10-23)

With t = (Xj − Xi)/(Yj − Yi), we have

b₁ = −∂αij/∂Xi = −[1/(1 + t²)](∂t/∂Xi) = [(Yj − Yi)²/S²ij][1/(Yj − Yi)] = (Yj − Yi)/S²ij.    (10-24)
Replacing the coordinates in b₁ by their approximations, we have

b₁ = (Y⁰j − Y⁰i)/(S⁰ij)².    (10-25)

In similar fashion,

b₂ = −(X⁰j − X⁰i)/(S⁰ij)²,    (10-26)

b₃ = −b₁,    (10-27)

and

b₄ = −b₂.    (10-28)
It should be pointed out that all four coordinates are being carried as unknown variables. This is the general case. If either i or j is a control point, having known coordinates, the two terms of Eq. (10-20) that correspond to the known coordinates are dropped.
10.4. THE ANGLE CONDITION AND ITS LINEARIZATION
In surveying, horizontal angles may be turned to the right (clockwise) or to the left
(counterclockwise). Since the azimuth has been defined as a clockwise angle, we shall
consider here angles that are clockwise only.
In Fig. 10-3, ᾱjik is the adjusted horizontal angle at point i, measured clockwise from line ij to line ik. It is obvious that

ᾱjik = ᾱik − ᾱij,    (10-29)

where ᾱik is the adjusted azimuth of line ik, and ᾱij is the adjusted azimuth of line ij. If the rectangular plane coordinates of i, j, and k are (X̄i, Ȳi), (X̄j, Ȳj), and (X̄k, Ȳk), respectively, it follows that

ᾱjik = arctan[(X̄k − X̄i)/(Ȳk − Ȳi)] − arctan[(X̄j − X̄i)/(Ȳj − Ȳi)].    (10-30)
Fig. 10-3.
In a manner similar to the presentation of Eq. (10-22) for the azimuth condition, the linearized form of the angle condition can be written as

vjik + b₁ΔXi + b₂ΔYi + b₃ΔXj + b₄ΔYj + b₅ΔXk + b₆ΔYk = fjik.    (10-31)

Note that

ᾱjik = αjik + vjik,    (10-32)

where αjik is the observed value of the angle and vjik is the corresponding residual. Note also that

fjik = α⁰jik − αjik,    (10-33)
where

α⁰jik = arctan[(X⁰k − X⁰i)/(Y⁰k − Y⁰i)] − arctan[(X⁰j − X⁰i)/(Y⁰j − Y⁰i)],    (10-34)

in which (X⁰i, Y⁰i), (X⁰j, Y⁰j), and (X⁰k, Y⁰k) are approximate values for the coordinates of points i, j, and k.
The coefficients of the corrections in Eq. (10-31) are

b₁ = (Y⁰k − Y⁰i)/(S⁰ik)² − (Y⁰j − Y⁰i)/(S⁰ij)²    (10-35)

b₂ = −(X⁰k − X⁰i)/(S⁰ik)² + (X⁰j − X⁰i)/(S⁰ij)²    (10-36)

b₃ = (Y⁰j − Y⁰i)/(S⁰ij)²    (10-37)

b₄ = −(X⁰j − X⁰i)/(S⁰ij)²    (10-38)

b₅ = −(Y⁰k − Y⁰i)/(S⁰ik)² = −(b₁ + b₃)    (10-39)

b₆ = (X⁰k − X⁰i)/(S⁰ik)² = −(b₂ + b₄),    (10-40)

where

(S⁰ij)² = (X⁰j − X⁰i)² + (Y⁰j − Y⁰i)²    (10-41)

and

(S⁰ik)² = (X⁰k − X⁰i)² + (Y⁰k − Y⁰i)².    (10-42)
(10-42)
The information provided by Table 10-1 is applicable as well to Eqs. (10-30) and (10-34).
In order to be general, the coordinates of all three points, i, j, and k, are carried as
unknown variables in Eq. (10-31). In practice, one or two of the three points may be
control points, in which case the corresponding correction terms in Eq. (10-31) would be
dropped.
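A small sketch evaluating the angle-condition coefficients at assumed coordinates, and confirming the internal checks b₅ = −(b₁ + b₃) and b₆ = −(b₂ + b₄) (the function name and test coordinates are hypothetical):

```python
def angle_coefficients(xi, yi, xj, yj, xk, yk):
    """Coefficients b1..b6 of the linearized angle condition, Eq. (10-31),
    evaluated at approximate coordinates of points i, j, k
    (Eqs. 10-35 through 10-42). Units: radians per unit of correction."""
    sij2 = (xj - xi)**2 + (yj - yi)**2   # Eq. (10-41)
    sik2 = (xk - xi)**2 + (yk - yi)**2   # Eq. (10-42)
    b1 = (yk - yi)/sik2 - (yj - yi)/sij2
    b2 = -(xk - xi)/sik2 + (xj - xi)/sij2
    b3 = (yj - yi)/sij2
    b4 = -(xj - xi)/sij2
    b5 = -(yk - yi)/sik2    # equals -(b1 + b3)
    b6 = (xk - xi)/sik2     # equals -(b2 + b4)
    return b1, b2, b3, b4, b5, b6

# a right angle at i = (0, 0), with j due east and k due north of i
b = angle_coefficients(0.0, 0.0, 100.0, 0.0, 0.0, 100.0)
```

The two identities hold for any geometry, because moving point i shifts both azimuths while moving j or k shifts only one of them.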
10.5. POSITION FIXING BY DISTANCE
Position fixing by distance is a typical plane coordinate survey procedure. In this
procedure, the measurements are the distances between a set of known control points and a
point whose position is unknown. Since we need to determine two coordinates for the
unknown point, at least two distances must be measured. Whenever more than two
distances are measured, the least squares procedure is used for calculating the adjusted
position of the unknown point.
Since for each measured distance there is one known point (j) and one unknown point (i), the form of the linearized condition equation is as given by Eq. (10-4):

vij + b₁ΔXi + b₂ΔYi = fij,    (10-43)

where, from Eq. (10-8),

fij = S⁰ij − Sij,

and from Eqs. (10-12) and (10-13),

b₁ = (Xj − X⁰i)/S⁰ij  and  b₂ = (Yj − Y⁰i)/S⁰ij,

in which

S⁰ij = [(Xj − X⁰i)² + (Yj − Y⁰i)²]^1/2.
Since Eq. (10-43) is in the linearized form of the condition equation for adjustment of indirect observations, this least squares adjustment procedure can be used to solve for the position of i. In principle, the least squares solution should be iterated in order to compensate for the higher order terms that are dropped in the linearization. However, if the initial approximations X⁰i and Y⁰i are close to the final values, as they normally should be, one iteration is usually sufficient.
The following example demonstrates the procedure.
EXAMPLE 10-1
With reference to Fig. 10-4, the position of an unknown point P is to be determined by
measuring distances with an EDM instrument from P to five control points A, B, C, D, and
E. The known positions of the control points and the observed distances are
CONTROL POINT  x (m)    y (m)    OBSERVED DISTANCE (m)
A              698.41   1005.07  4122.109
B              580.14   2207.37  3444.530
C              5482.77  8503.22  4897.717
D              6191.16  7160.26  4129.233
E              6095.81  4920.30  2739.177
The a priori standard deviation of each distance measurement is given (in meters) by

σ = (0.020² + 0.040²S²)^1/2,

Fig. 10-4

where S is the distance in km. [See Eq. (7-22).] All measurements are assumed to be independent. The reference variance is taken to be σ₀² = 0.0020 m², the value of σ² for S = 1 km, i.e., σ₀² = 0.020² + 0.040².

1. Compute an approximate position for P using the observed distances to C and E.
2. Determine the least squares (adjusted) position of P using all five observed distances.
3. Determine the adjusted values of the distances.
4. Evaluate the covariance matrix of the adjusted position of P based upon the a priori reference variance.
5. Evaluate the a posteriori estimate of the reference variance, and test this value against the given reference variance at the 5% level of significance.
6. Evaluate the semimajor axis, semiminor axis, and orientation of the standard error ellipse for P using the covariance matrix evaluated in Part 4.
7. Plot the 95% confidence region for the position of P.
Solution
1. Calculation of the approximate position of P. Figure 10-4 depicts the general layout of the problem. The distance EC is computed from the control coordinates:

EC = 3634.987 m.

From the triangle PEC, by the law of cosines,

cos(PEC) = (EC² + PE² − PC²)/(2·EC·PE) = (3634.987² + 2739.177² − 4897.717²)/(2 × 3634.987 × 2739.177) = −0.1642790.

Therefore (PEC) = 99.45535°. Now the azimuth of EC is

αEC = arctan(−613.04/3582.92) + 360° = 350.29067°.
From these, the approximate position of P is computed as

X⁰P = 3508.44 m,  Y⁰P = 4021.07 m.

2. Determination of the least squares position of P. The quantities needed for the condition equations are calculated and listed in Table 10-2.

LINE  Xj (m)   Yj (m)   Xj − X⁰P (m)  Yj − Y⁰P (m)  S⁰ (m)    S (m)     f = S⁰ − S (m)  b₁        b₂
PA    698.41   1005.07  −2810.03      −3016.00      4122.199  4122.109  0.090           −0.68168  −0.73165
PB    580.14   2207.37  −2928.30      −1813.70      3444.481  3444.530  −0.049          −0.85014  −0.52655
PC    5482.77  8503.22  1974.33       4482.15       4897.719  4897.717  0.002           0.40311   0.91515
PD    6191.16  7160.26  2682.72       3139.19       4129.346  4129.233  0.113           0.64967   0.76021
PE    6095.81  4920.30  2587.37       899.23        2739.178  2739.177  0.001           0.94458   0.32828

From this table the condition equations v + BΔ = f are assembled, with

B = [−0.68168 −0.73165; −0.85014 −0.52655; 0.40311 0.91515; 0.64967 0.76021; 0.94458 0.32828],

Δ = [ΔXP; ΔYP],  f = [0.090; −0.049; 0.002; 0.113; 0.001].
The weights of the observed distances are obtained from the given function for σ² and the relationship w = σ₀²/σ², noting that σ₀² = 0.0020 m² and σ² = 0.020² + 0.040²S² (m², with S in km):

LINE  S (km)  σ² (m²)  w = σ₀²/σ²
PA    4.12    0.0276   0.0725
PB    3.44    0.0193   0.1036
PC    4.90    0.0388   0.0515
PD    4.13    0.0277   0.0722
PE    2.74    0.0124   0.1613

Since the observations are independent, the weight matrix is

W = diag(0.0725, 0.1036, 0.0515, 0.0722, 0.1613).
Thus,

BᵗW = [−0.04942 −0.08807 0.02076 0.04691 0.15236; −0.05304 −0.05455 0.04713 0.05489 0.05295],

N = BᵗWB = [0.29133 0.18721; 0.18721 0.16977],

t = BᵗWf = [0.00536; 0.00425],

N⁻¹ = [11.781 −12.992; −12.992 20.217],

Δ = N⁻¹t = [0.008; 0.016] m,

and the least squares position of P is

XP = 3508.44 + 0.008 = 3508.448 m,  YP = 4021.07 + 0.016 = 4021.086 m.

3. Determination of the adjusted distances. The residual vector is

v = f − BΔ = [0.107; −0.034; −0.016; 0.096; −0.012] m,

and so the adjusted distances are, for example,

S̄PC = 4897.717 − 0.016 = 4897.701 m
S̄PD = 4129.233 + 0.096 = 4129.329 m
S̄PE = 2739.177 − 0.012 = 2739.165 m,

and similarly for S̄PA and S̄PB.
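Parts 2 and 3 of this example can be reproduced numerically. The following sketch assumes the condition form v + BΔ = f used above; variable names are illustrative:

```python
import numpy as np

# control points A..E (x, y) and observed distances from P (Example 10-1)
ctrl = np.array([[698.41, 1005.07], [580.14, 2207.37], [5482.77, 8503.22],
                 [6191.16, 7160.26], [6095.81, 4920.30]])
s_obs = np.array([4122.109, 3444.530, 4897.717, 4129.233, 2739.177])
p0 = np.array([3508.44, 4021.07])            # approximate position of P

s0 = np.linalg.norm(ctrl - p0, axis=1)       # computed distances S0
B = (ctrl - p0) / s0[:, None]                # b1, b2 of Eq. (10-43)
f = s0 - s_obs                               # f = S0 - S, Eq. (10-8)
var = 0.020**2 + 0.040**2 * (s_obs/1000.0)**2   # a priori variances (m^2)
W = np.diag(0.0020 / var)                    # weights w = sigma0^2 / sigma^2

N = B.T @ W @ B
t = B.T @ W @ f
delta = np.linalg.solve(N, t)                # corrections to P, about (0.008, 0.016) m
v = f - B @ delta                            # residuals
p_adj = p0 + delta                           # adjusted position of P
```

Small differences in the last digit relative to the text arise from rounding the weights to four decimals in the hand computation.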
4. Evaluation of the covariance matrix of the adjusted position of P:

QXX = QΔΔ = N⁻¹ = [11.781 −12.992; −12.992 20.217],

ΣXX = σ₀²QXX = 0.0020 N⁻¹ = [0.0236 −0.0260; −0.0260 0.0404] m².

5. Evaluation and testing of the a posteriori reference variance. The redundancy is

r = n − n₀ = 5 − 2 = 3 (degrees of freedom),

and so

σ̂₀² = vᵗWv/r = 0.00165/3 = 0.00055 m²

and σ̂₀ = 0.023 m. To test at the 5% level of significance, set α = 0.05. Then α/2 = 0.025, 1 − α/2 = 0.975, χ²₀.₀₂₅,₃ = 0.216, and χ²₀.₉₇₅,₃ = 9.35. Thus

(χ²₀.₀₂₅,₃/r)σ₀² = (0.216/3)(0.0020) = 0.00014 m²

and

(χ²₀.₉₇₅,₃/r)σ₀² = (9.35/3)(0.0020) = 0.00623 m².

Now σ̂₀² = 0.00055 m². Since 0.00014 < σ̂₀² < 0.00623, σ̂₀² is consistent with the a priori value, and σ₀² = 0.0020 m² is accepted.
6. Evaluation of the semimajor axis, semiminor axis, and orientation of the standard error ellipse for P. From Part 4,

σx² = 0.0236 m², σy² = 0.0404 m², and σxy = −0.0260 m²,

so

(σx² + σy²)/2 = 0.0320 m²

and

{[(σx² − σy²)/2]² + σxy²}^1/2 = 0.0273 m².

Thus, [Eq. (8-56)]

σx'² = 0.0320 + 0.0273 = 0.0593 m², so σx' = 0.244 m,

and [Eq. (8-57)]

σy'² = 0.0320 − 0.0273 = 0.0047 m², so σy' = 0.069 m.

Now, [Eq. (8-55)]

tan 2θ = 2σxy/(σx² − σy²) = −0.0520/(−0.0168) = 3.095,

from which θ = 126°.

7. Plot of the 95% confidence region. The semimajor axis of the 95% confidence ellipse is 2.447 × 0.244 = 0.597 m, and the semiminor axis is 2.447 × 0.069 = 0.169 m. Again, the orientation of the ellipse is 126°. The 95% confidence ellipse is plotted in Fig. 10-5.
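The semiaxes and orientation can be computed from any 2 × 2 covariance matrix; a sketch (function name hypothetical), assuming Eqs. (8-55) through (8-57) take the standard form used above:

```python
import math

def error_ellipse(sx2, sy2, sxy):
    """Semimajor axis, semiminor axis, and orientation (degrees from the
    x axis, in [0, 180)) of the standard error ellipse for a 2x2
    covariance matrix [[sx2, sxy], [sxy, sy2]]."""
    mean = 0.5 * (sx2 + sy2)
    half = math.hypot(0.5 * (sx2 - sy2), sxy)   # radius of the Mohr circle
    semi_major = math.sqrt(mean + half)
    semi_minor = math.sqrt(mean - half)
    # orientation from tan(2*theta) = 2*sxy/(sx2 - sy2), quadrant-safe
    theta = 0.5 * math.degrees(math.atan2(2.0 * sxy, sx2 - sy2)) % 180.0
    return semi_major, semi_minor, theta

# covariance matrix of Example 10-1, Part 4
a, bmin, theta = error_ellipse(0.0236, 0.0404, -0.0260)
```

Using atan2 resolves the quadrant of 2θ automatically, reproducing the 126° orientation found above.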
Figure 10-6 depicts the relationship between two plane rectangular coordinate systems
Fig. 10-5.
If point i has polar coordinates (ri, φi) in the x, y system, then

xi = ri cos φi    (10-44)

yi = ri sin φi.    (10-45)

In a second system whose axes are rotated by the angle θ (Fig. 10-6),

si = ri cos(φi − θ)    (10-46)

ti = ri sin(φi − θ).    (10-47)

Expanding,

si = xi cos θ + yi sin θ    (10-48)

and

ti = −xi sin θ + yi cos θ.    (10-49)

The x', y' system can have a scale that is different from that of (x, y); denoting the scale factor by λ,

x'i = λsi    (10-50)

y'i = λti,    (10-51)

and thus

x'i = (λ cos θ)xi + (λ sin θ)yi    (10-52)

y'i = −(λ sin θ)xi + (λ cos θ)yi.    (10-53)

Since there are two parameters, λ and θ, involved in Eqs. (10-52) and (10-53), and since it is more convenient not to work with trigonometric functions, this transformation from the (x, y) system to the (x', y') system is expressed in terms of the two parameters

a = λ cos θ    (10-54)

and

b = λ sin θ.    (10-55)

Then

x'i = axi + byi    (10-56)

and

y'i = −bxi + ayi.    (10-57)

Estimates of λ and θ can be recovered from a and b as follows:

λ = (a² + b²)^1/2    (10-58)

θ = arctan(b/a).    (10-59)
If only one point has known coordinates in both systems, Eqs. (10-56) and (10-57) can be solved directly for a and b. When two or more points have known coordinates in both systems, redundancy exists and a least squares solution is needed. If both sets of coordinates are observed variables, the general technique of adjustment presented in Chapter 9 is the appropriate approach to take. If, however, only the x, y coordinates or the x', y' coordinates are considered as observed variables, the technique of least squares adjustment of indirect observations can be applied. The technique of adjustment of observations only can also be applied after some preliminary manipulation of the condition equations.
For the general adjustment technique, the condition equations are written as

F1i = axi + byi − x'i = 0    (10-60)

F2i = −bxi + ayi − y'i = 0.    (10-61)
Linearizing by Taylor series about the approximations a⁰, b⁰ and the observed coordinates, we get

(∂F1i/∂xi)vxi + (∂F1i/∂yi)vyi + (∂F1i/∂x'i)vx'i + (∂F1i/∂y'i)vy'i + (∂F1i/∂a)Δa + (∂F1i/∂b)Δb = x'i − a⁰xi − b⁰yi    (10-62)

(∂F2i/∂xi)vxi + (∂F2i/∂yi)vyi + (∂F2i/∂x'i)vx'i + (∂F2i/∂y'i)vy'i + (∂F2i/∂a)Δa + (∂F2i/∂b)Δb = y'i + b⁰xi − a⁰yi.    (10-63)
In Eqs. (10-62) and (10-63), a⁰ and b⁰ are the parameter approximations, and Δa and Δb are their corresponding corrections. The partial derivatives are evaluated as follows:
∂F1i/∂xi = a⁰,  ∂F1i/∂yi = b⁰,  ∂F1i/∂x'i = −1,  ∂F1i/∂y'i = 0,  ∂F1i/∂a = xi,  ∂F1i/∂b = yi,

∂F2i/∂xi = −b⁰,  ∂F2i/∂yi = a⁰,  ∂F2i/∂x'i = 0,  ∂F2i/∂y'i = −1,  ∂F2i/∂a = yi,  ∂F2i/∂b = −xi.

In matrix form, for point i,

[a⁰ b⁰ −1 0; −b⁰ a⁰ 0 −1][vxi; vyi; vx'i; vy'i] + [xi yi; yi −xi][Δa; Δb] = [x'i − a⁰xi − b⁰yi; y'i + b⁰xi − a⁰yi],    (10-64)
so that for all n points the condition equations Av + BΔ = f have

A (2n × 4n): block diagonal, with one block [a⁰ b⁰ −1 0; −b⁰ a⁰ 0 −1] per point,

v = [vx1, vy1, vx'1, vy'1, …, vxn, vyn, vx'n, vy'n]ᵗ (4n × 1),

B = [x1 y1; y1 −x1; …; xn yn; yn −xn] (2n × 2),

Δ = [Δa; Δb] (2 × 1),

f = [x'1 − a⁰x1 − b⁰y1; y'1 + b⁰x1 − a⁰y1; …; x'n − a⁰xn − b⁰yn; y'n + b⁰xn − a⁰yn] (2n × 1).
Let the following assumptions be made:

1. All x and y coordinates have the same standard deviation, σ₁.
2. All x' and y' coordinates have the same standard deviation, σ₂.
3. All coordinates are uncorrelated.

Then

Σ = diag(σ₁², σ₁², σ₂², σ₂², …, σ₁², σ₁², σ₂², σ₂²) (4n × 4n)    (10-65)*

and

Q = diag(Q₁, Q₁, Q₂, Q₂, …, Q₁, Q₁, Q₂, Q₂) (4n × 4n),    (10-66)

where Q₁ = σ₁²/σ₀² and Q₂ = σ₂²/σ₀², σ₀² being the reference variance.
Since Q is diagonal and A is block diagonal,

Qc = AQAᵗ = [(a⁰² + b⁰²)Q₁ + Q₂] I (2n × 2n) (a scalar matrix),    (10-67)

and so

Wc = Qc⁻¹ = wc I (2n × 2n) (a scalar matrix),    (10-68)

where

wc = 1/[(a⁰² + b⁰²)Q₁ + Q₂].    (10-69)
*The symbol "diag(a₁₁, a₂₂, …, amm)" represents a diagonal matrix whose main diagonal elements are a₁₁, a₂₂, …, amm. All other elements in the matrix are, of course, zero.
Thus,

N = BᵗWcB = wcBᵗB = wc[Σ(xi² + yi²)  0; 0  Σ(xi² + yi²)]    (10-70)

and

t = BᵗWcf = wcBᵗf = wc[Σ(xix'i + yiy'i) − a⁰Σ(xi² + yi²); Σ(yix'i − xiy'i) − b⁰Σ(xi² + yi²)],    (10-71)

all sums running from i = 1 to n.
Thus, letting

p = Σ(xi² + yi²)    (10-72)

q = Σ(xix'i + yiy'i)    (10-73)

and

q' = Σ(yix'i − xiy'i),    (10-74)

we have

N = wc[p 0; 0 p],  t = wc[q − a⁰p; q' − b⁰p],    (10-75)

and

Δ = [Δa; Δb] = N⁻¹t = [q/p − a⁰; q'/p − b⁰].    (10-76)
Consequently,

Δa = q/p − a⁰ and Δb = q'/p − b⁰,    (10-77)

so that

â = a⁰ + Δa = q/p    (10-78)

and

b̂ = b⁰ + Δb = q'/p,    (10-79)

which indicates that â and b̂ can be computed directly from functions of the observed coordinates, regardless of the approximations a⁰ and b⁰. The only requirement is that σ₁ and σ₂ cannot both be zero; otherwise wc would be undefined. If σ₁ = 0 and σ₂ ≠ 0, the x, y coordinates are fixed (errorless) and all adjustment goes into the x', y' coordinates; if σ₁ ≠ 0 and σ₂ = 0, the x', y' coordinates are fixed and all adjustment goes into the x, y coordinates. In either case, the same values for â and b̂ are obtained.
Although σ₁ and σ₂ may not have any influence on the evaluation of â and b̂ themselves, they do affect evaluation of the covariance matrix of â and b̂, which is

Σ = σ₀²QΔΔ = σ₀²N⁻¹ = [σ₀²/(wcp)] I.    (10-80)

The minimum number of observations is

n₀ = 4 + 2(n − 1) = 2n + 2,    (10-81)

i.e., all four coordinates of one point are needed to find a and b, and two coordinates are needed to specify the position of each one of the remaining n − 1 points. The redundancy is thus

r = 4n − n₀ = 2n − 2.    (10-82)
EXAMPLE 10-2
Following are the coordinates of three points in two plane survey coordinate systems that
have a common origin:
POINT  x (m)    y (m)    x' (m)   y' (m)
1      1314.31  2540.26  2102.35  1936.44
2      2078.70  3511.23  3152.60  2587.35
3      4900.60  5000.39  6311.60  3021.20
Each one of the x and y coordinates has a standard deviation of 0.20 m, and each one of
the x' and y' coordinates has a standard deviation of 0.05 m. All coordinates are assumed to
be uncorrelated.
Calculate the least squares estimates for the transformation parameters a and b that are used in the transformation of the x, y coordinates into the x', y' coordinates. Also calculate the covariance matrix and standard deviations of â and b̂, and the scale and rotation parameter estimates, λ̂ and θ̂.
Solution
From Eqs. (10-72), (10-73), and (10-74),

p = Σ(xi² + yi²) = 73849842 m²,
q = Σ(xix'i + yiy'i) = 69358096 m²,
q' = Σ(yix'i − xiy'i) = 25241381 m².

Thus, from Eqs. (10-78) and (10-79),

â = q/p = 69358096/73849842 = 0.939177

and

b̂ = q'/p = 25241381/73849842 = 0.341793.

Now

σ₁² = 0.20² = 0.0400 m² and σ₂² = 0.05² = 0.0025 m².

With σ₀² = 0.0100 m²,

Q₁ = σ₁²/σ₀² = 0.0400/0.0100 = 4

and

Q₂ = σ₂²/σ₀² = 0.0025/0.0100 = 0.25.
Thus, from Eq. (10-69), with â and b̂ substituted for a⁰ and b⁰, respectively,

wc = 1/[(â² + b̂²)Q₁ + Q₂] = 1/[(0.998876)(4) + 0.25] = 0.2355,

and the covariance matrix of â and b̂ is

Σ = σ₀²N⁻¹ = [σ₀²/(wcp)] I = (0.0100/0.2355)(1.35 × 10⁻⁸) I = (5.73 × 10⁻¹⁰ m²) I,

so that

σâ = σb̂ = (5.73 × 10⁻¹⁰)^1/2 = 2.39 × 10⁻⁵.
From Eqs. (10-58) and (10-59), the scale and rotation parameter estimates are

λ̂ = (â² + b̂²)^1/2 = (0.998876)^1/2 = 0.999438

and

θ̂ = arctan(b̂/â) = arctan(0.341793/0.939177) ≈ 20°.
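The computation of this example can be verified numerically; a sketch using the sums p, q, and q' defined in Eqs. (10-72) through (10-74):

```python
import math

# point coordinates in the two systems (Example 10-2)
xy   = [(1314.31, 2540.26), (2078.70, 3511.23), (4900.60, 5000.39)]
xpyp = [(2102.35, 1936.44), (3152.60, 2587.35), (6311.60, 3021.20)]

p  = sum(x*x + y*y for x, y in xy)                              # Eq. (10-72)
q  = sum(x*xp + y*yp for (x, y), (xp, yp) in zip(xy, xpyp))     # Eq. (10-73)
qp = sum(y*xp - x*yp for (x, y), (xp, yp) in zip(xy, xpyp))     # Eq. (10-74)

a_hat = q / p                                   # Eq. (10-78)
b_hat = qp / p                                  # Eq. (10-79)
scale = math.hypot(a_hat, b_hat)                # Eq. (10-58)
theta = math.degrees(math.atan2(b_hat, a_hat))  # Eq. (10-59), in degrees
```

The estimates agree with the hand computation to the printed number of digits.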
If the x'i and y'i coordinates are the observations with xi and yi fixed, the appropriate condition equations are Eqs. (10-56) and (10-57). If, however, xi and yi are the observations and x'i and y'i are fixed, Eqs. (10-56) and (10-57) are not in the appropriate form because each equation contains two observations. However, we can use an equivalent pair of condition equations that are in the appropriate form. These equations are obtained by first expressing Eqs. (10-56) and (10-57) in matrix form,

[x'i; y'i] = [a b; −b a][xi; yi],    (10-83)

and then premultiplying both sides of Eq. (10-83) by the inverse of the coefficient matrix to yield

[xi; yi] = [a b; −b a]⁻¹[x'i; y'i] = [1/(a² + b²)][a −b; b a][x'i; y'i],    (10-84)
or

xi = [a/(a² + b²)]x'i − [b/(a² + b²)]y'i    (10-85)

and

yi = [b/(a² + b²)]x'i + [a/(a² + b²)]y'i.    (10-86)
It should be obvious that Eqs. (10-85) and (10-86) are condition equations that satisfy the
basic requirements for application of the special technique of adjustment of indirect
observations. However, these condition equations are no longer linear in the parameters a
and b. While it is possible to linearize these equations and then proceed with the least
squares solution, a much simpler way to attack the problem is to replace the original two
parameters a and b with two new parameters c and d such that
c = a/(a² + b²)    (10-87)

and

d = b/(a² + b²).    (10-88)
The condition equations then become

xi = cx'i − dy'i    (10-89)

and

yi = dx'i + cy'i.    (10-90)

Adding residuals to the observed coordinates, we have

xi + vxi = ĉx'i − d̂y'i    (10-91)

and

yi + vyi = d̂x'i + ĉy'i,    (10-92)

so that, in the matrix form of condition equations for adjustment of indirect observations,

[vxi; vyi] = [x'i −y'i; y'i x'i][ĉ; d̂] − [xi; yi].    (10-93)
For all n points,

v = [vx1, vy1, …, vxn, vyn]ᵗ,

B = [x'1 −y'1; y'1 x'1; …; x'n −y'n; y'n x'n],

Δ = [ĉ; d̂],  f = [x1, y1, …, xn, yn]ᵗ.
Under the assumptions that all x i and y i coordinates are uncorrelated and have the same
standard deviation, the weight matrix W can be set equal to an identity matrix I. Thus,
according to Eq. (9-53), the coefficient matrix of the normal equations is
N = BᵗWB = BᵗB = [Σ(x'i² + y'i²)  0; 0  Σ(x'i² + y'i²)]    (10-94)

and, from Eq. (9-54),

t = BᵗWf = Bᵗf = [Σ(xix'i + yiy'i); Σ(yix'i − xiy'i)].    (10-95)
Letting

p' = Σ(x'i² + y'i²),    (10-96)

with q as in Eq. (10-73) and q' as in Eq. (10-74), the normal equations are

[p' 0; 0 p'][ĉ; d̂] = [q; q'],    (10-97)

from which

ĉ = q/p'  and  d̂ = q'/p'.    (10-98)
Least squares estimates for the original parameters a and b can then be obtained through inversion of Eqs. (10-87) and (10-88), i.e.,

â = ĉ/(ĉ² + d̂²)    (10-99)

and

b̂ = d̂/(ĉ² + d̂²).    (10-100)

The covariance matrix of ĉ and d̂ is

Σĉd̂ = σ₀²Qĉd̂ = σ₀²N⁻¹ = (σ₀²/p') I.    (10-101)

The Jacobian of the transformation from (c, d) to (a, b) is

J = [∂a/∂c ∂a/∂d; ∂b/∂c ∂b/∂d] = [1/(c² + d²)²][d² − c² −2cd; −2cd c² − d²].    (10-102)

Then the general law of propagation of variances and covariances, Eq. (6-20), is applied:

Σâb̂ = JΣĉd̂Jᵗ = (σ₀²/p')(â² + b̂²)² I.    (10-103)
EXAMPLE 10-3
Use the coordinate data given in Example 10-2 to calculate least squares estimates for the parameters a and b in accordance with the method of adjustment of indirect observations, given that the standard deviation of each xi and yi coordinate is 0.20 m, the xi and yi are uncorrelated, and the x'i and y'i coordinates are errorless. Calculate also the standard deviations of the estimates of a and b, and compare these values with those obtained from the general adjustment method based upon the statistical data given for this example.
Solution
From Eqs. (10-96), (10-73), and (10-74),
p' = Σ(x'i² + y'i²) = 73766886 m²,

with q = 69358096 m² and q' = 25241381 m², as before. Thus, from Eq. (10-98),

ĉ = q/p' = 69358096/73766886 = 0.9402335

and

d̂ = q'/p' = 25241381/73766886 = 0.3421777,

and from Eqs. (10-99) and (10-100),

â = ĉ/(ĉ² + d̂²) = 0.9402335/(0.9402335² + 0.3421777²) = 0.939177

and

b̂ = d̂/(ĉ² + d̂²) = 0.3421777/(0.9402335² + 0.3421777²) = 0.341793.
Note that these values for â and b̂ are identical to the values obtained in Example 10-2. The standard deviations of â and b̂ are obtained directly from Eq. (10-103), noting that σ₀ = σx = σy = 0.20 m and â² + b̂² = 0.939177² + 0.341793² = 0.998876:

σâ = σb̂ = [σ₀/(p')^1/2](â² + b̂²) = [0.20/(73766886)^1/2](0.998876) = 2.33 × 10⁻⁵.
To compare these values with those obtained from the general adjustment method, we first note that in this example σ₀ = σ₁ = 0.20 m and σ₂ = 0. Thus Q₁ = 1, Q₂ = 0, and, from Eq. (10-69),

wc = 1/[(â² + b̂²)Q₁ + Q₂] = 1/0.998876 = 1.001125.

From Eq. (10-80), the standard deviations of â and b̂ are

σâ = σb̂ = [σ₀²/(wcp)]^1/2 = 0.20/[(1.001125)(73849842)]^1/2 = 2.33 × 10⁻⁵,

which agrees with the result obtained above.
The problem can also be solved using the special technique of adjustment of observations only. Let coordinates xi and yi be taken as the observed variables while coordinates x'i and y'i are considered fixed. As already mentioned, one point with known coordinates in both coordinate systems (x, y and x', y') is sufficient to determine a and b. If n points are used to find least squares estimates for a and b, the total number of observations involved is 2n and the redundancy is

r = 2n − 2.    (10-104)

For point 1, Eqs. (10-56) and (10-57) give

x'1 = ax̄1 + bȳ1  and  y'1 = −bx̄1 + aȳ1,    (10-105)

which can be written as

[x̄1 ȳ1; ȳ1 −x̄1][a; b] = [x'1; y'1].    (10-106)

Inverting the coefficient matrix, we get

[a; b] = [1/(x̄1² + ȳ1²)][x̄1 ȳ1; ȳ1 −x̄1][x'1; y'1],    (10-107)

from which

a = (x̄1x'1 + ȳ1y'1)/(x̄1² + ȳ1²)    (10-108)

and

b = (ȳ1x'1 − x̄1y'1)/(x̄1² + ȳ1²).    (10-109)
For point 2,

x'2 = ax̄2 + bȳ2    (10-110)

and

y'2 = −bx̄2 + aȳ2.    (10-111)

Substituting Eqs. (10-108) and (10-109) into Eqs. (10-110) and (10-111) and clearing the denominator yields two condition equations in terms of the adjusted observations only:

(x̄1² + ȳ1²)x'2 = (x̄1x̄2 + ȳ1ȳ2)x'1 + (ȳ1x̄2 − x̄1ȳ2)y'1    (10-112)

and

(x̄1² + ȳ1²)y'2 = (x̄1ȳ2 − ȳ1x̄2)x'1 + (x̄1x̄2 + ȳ1ȳ2)y'1.    (10-113)

Similar equations can be written for the remaining points, yielding a total of 2n − 2 condition equations in terms of observations only.
Although it is possible to manipulate the transformation equations into a system of
condition equations that can be used as basis for this special technique of adjustment, it is
also obvious that these equations are much more complicated than the condition equations
used in the other techniques of adjustment. Moreover, this technique includes inversion of a matrix Qc of order (2n − 2), which can be much larger than the simple 2 × 2 matrix N that is inverted in the other adjustment techniques. Clearly, the technique of least squares adjustment of observations only is not as well suited as the other techniques to this type of problem.
If this technique of adjustment is applied to the problem, the solution yields adjusted observations which can then be used to calculate â and b̂; for example, using the adjusted coordinates x̄1, ȳ1 for point 1:

â = (x̄1x'1 + ȳ1y'1)/(x̄1² + ȳ1²)    (10-114)

and

b̂ = (ȳ1x'1 − x̄1y'1)/(x̄1² + ȳ1²).    (10-115)
It should be pointed out that the same values for a and b will be obtained using the
adjusted coordinates of any one of the other points.
It is left as an exercise for the reader to apply least squares adjustment of observations only
to solve for a and b using the data given in Example 10-2.
The two-parameter transformation can be extended by adding two shifts:

x'i = axi + byi + k₁

and

y'i = −bxi + ayi + k₂,    (10-116)

in which k₁, k₂ are two shifts which represent the coordinates of the origin of the x, y system in the x', y' system (Fig. 10-7). Carrying a, b, k₁, and k₂ as parameters in an adjustment analogous to that of Section 10.6 leads to the normal equations NΔ = t with

N = w₀[p 0 Σxi Σyi; 0 p Σyi −Σxi; Σxi Σyi n 0; Σyi −Σxi 0 n] (symmetric)    (10-117)

and

t = w₀[q − a⁰p − k₁⁰Σxi − k₂⁰Σyi; q' − b⁰p − k₁⁰Σyi + k₂⁰Σxi; Σx'i − a⁰Σxi − b⁰Σyi − nk₁⁰; Σy'i + b⁰Σxi − a⁰Σyi − nk₂⁰],    (10-118)
where w 0 is the scalar given by Eq. (10-69). Since N is not a diagonal matrix, the general
solution would be to calculate
Δ = [Δa, Δb, Δk₁, Δk₂]ᵗ = N⁻¹t,    (10-119)
add these corrections to the approximations a⁰, b⁰, k₁⁰, k₂⁰, and iterate the solution. A considerable simplification is possible when we replace the xi, yi coordinates by ui, vi, whose origin is the centroid of all coordinates xi, yi; that is,

ui = xi − x̄ = xi − (1/n)Σxi

and

vi = yi − ȳ = yi − (1/n)Σyi.    (10-120)
When ui and vi are used in place of xi and yi,

Σui = 0  and  Σvi = 0,

and with p, q, and q' computed from ui, vi, and

r = Σx'i,  r' = Σy'i,

the normal equations uncouple:

N = w₀ diag(p, p, n, n)

and

t = w₀[q − pa⁰, q' − pb⁰, r − nk₁⁰, r' − nk₂⁰]ᵗ.
Finally,

â = a⁰ + Δa = a⁰ + (1/p)(q − pa⁰) = q/p [see also Eq. (10-78)]    (10-121)

b̂ = b⁰ + Δb = b⁰ + (1/p)(q' − pb⁰) = q'/p [see also Eq. (10-79)]    (10-122)

k̂₁ = k₁⁰ + Δk₁ = k₁⁰ + (1/n)(r − nk₁⁰) = r/n    (10-123)

k̂₂ = k₂⁰ + Δk₂ = k₂⁰ + (1/n)(r' − nk₂⁰) = r'/n.    (10-124)
Notice, again, that this result holds under the assumption that all xi, yi (and consequently ui, vi) are uncorrelated and have equal variance σ₁², and that all x'i, y'i are uncorrelated and have equal variance σ₂².
If the centroid of the xi, yi coordinates is used as the origin, the two shifts k₁ and k₂ drop out, and the four-parameter transformation reduces to the two-parameter transformation discussed in Section 10.6. The reader should ascertain this fact by analyzing Eqs. (10-117) through (10-124). It is also left as a recommended exercise for the reader to develop the case of least squares adjustment of indirect observations for the four-parameter transformation. [Those interested in further study of transformations and their adjustment should consult Mikhail (1976).]
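The centroid-reduction solution of Eqs. (10-120) through (10-124) can be sketched as follows, under the sign conventions assumed in this chapter (x' = ax + by + k₁, y' = −bx + ay + k₂); the function name and test data are hypothetical:

```python
def fit_four_parameter(xy, xpyp):
    """Four-parameter similarity transformation x' = a x + b y + k1,
    y' = -b x + a y + k2, solved via centroid reduction
    (Eqs. 10-120 through 10-124)."""
    n = len(xy)
    xbar = sum(x for x, _ in xy) / n
    ybar = sum(y for _, y in xy) / n
    u = [(x - xbar, y - ybar) for x, y in xy]          # Eq. (10-120)
    p  = sum(ui*ui + vi*vi for ui, vi in u)
    q  = sum(ui*xp + vi*yp for (ui, vi), (xp, yp) in zip(u, xpyp))
    qp = sum(vi*xp - ui*yp for (ui, vi), (xp, yp) in zip(u, xpyp))
    a, b = q / p, qp / p                               # Eqs. (10-121), (10-122)
    k1 = sum(xp for xp, _ in xpyp) / n                 # Eq. (10-123), u,v frame
    k2 = sum(yp for _, yp in xpyp) / n                 # Eq. (10-124), u,v frame
    # refer the shifts back to the original x, y origin
    k1 -= a * xbar + b * ybar
    k2 -= -b * xbar + a * ybar
    return a, b, k1, k2

# synthetic check: data generated with a=0.8, b=0.6, k1=10, k2=-5
params = fit_four_parameter([(0, 0), (100, 0), (0, 100)],
                            [(10, -5), (90, -65), (70, 75)])
```

Because Σu = Σv = 0, the shift and rotation/scale unknowns decouple and no iteration is needed for this linear model.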
PROBLEMS
10-1
The positions of stations B and C in Fig. 10-8 are known and fixed:

STATION  x (m)     y (m)
B        1000.000  1000.000
C        714.754   1380.328

Fig. 10-8.

The following observations are made:

Angles: 17°11'15", 119°09'39", and 43°38'54", each with standard deviation 10".
Distances: b = 1404.615 m, σ = 0.080 m; c = 1110.082 m, σ = 0.040 m.
10-2 With reference to Fig. 10-9, the positions of A (421.420, 1862.977) and C (352.115, 195.987) are given, together with the azimuths of lines AA' (327°32'50") and CC' (16°32'09"), the observed distances d₁ = 890.455 m and d₂ = 921.300 m, and an observed angle of 64°14'23".

Fig. 10-9.

All observations are independent. The standard deviation of each angle observation is 10"; the standard deviation of each distance observation is 0.050 m.

(a) Determine least squares estimates for the coordinates of B. (b) Determine the standard error ellipse for the position of B. (c) Compute 90% and 99% confidence intervals for the adjusted distances d₁ and d₂. (d) Compute the coefficient of correlation between d₁ and d₂.
10-3  Angles α, β, and γ in Fig. 10-10 are observed at P with a theodolite. The observations are independent and have standard deviation σ = 2.5". The positions of A, B, C, and D are known and fixed. The given data are:

    ANGLE   OBSERVED
    α       35°16'00"
    β       16°52'33"
    γ       32°24'51"

    POINT   X(m)       Y(m)
    A       5000.000   5371.180
    B       5266.841   5330.315
    C       5256.564   5191.664
    D       5236.650   4979.677
(a) Use the observed values for α and β and an appropriate resection procedure to obtain approximate coordinates for P. (b) Determine the least squares position for P. (c) Evaluate the covariance matrix for the position determined in (b) and compute the principal dimensions and orientation of the standard error ellipse. (d) Evaluate the a posteriori reference variance and use it to test the given value of σ at the 5% level of significance.
10-4  Following are the coordinates of four points in two plane coordinate systems that have a common origin:

Fig. 10-10.

    POINT   x(m)      y(m)      x'(m)    y'(m)
    1        779.94    250.15   317.15   191.71
    2       2045.66   1743.46   718.23   981.58
    3        902.51   1789.47   210.07   882.31
    4       1548.31   -135.24   695.82   102.40

10-5

    POINT   x(m)      y(m)      x'(m)     y'(m)
    1       1018.77    104.33   1221.04   3633.22
    2       1016.60    935.85   2010.36   3597.72
    3       2002.35    128.62   1195.02   2701.71
    4       2000.99   1043.58   2061.10   2658.20
    5       2994.24     99.42   1118.30   1765.28
    6       2979.03   1050.87   2018.98   1732.26
APPENDIX A
An Introduction
to Matrix Algebra
A.1. DEFINITIONS
A matrix is a group of numbers or symbols collected in array form. The following are examples of matrices:

    (1)  [ 2  0  1 ]     (2)  [ 1 ]     (3)  [ 1  9  3 ]     (4)  [ a  b ]
         [ 4  3  5 ]          [ 6 ]                               [ c  d ]

Every matrix has a specified number of rows and a specified number of columns. Thus matrix (1), above, has 2 rows and 3 columns and is said to be a 2×3 matrix. Similarly, (2) is a 2×1 matrix, (3) is a 1×3 matrix, and (4) is a 2×2 matrix. The two numbers representing the
rows and columns are referred to as the matrix dimensions.
A matrix is designated by a boldface capital Roman letter. Thus, an m×n matrix can be symbolically written as:

    A     = [ a_11  a_12  ...  a_1n ]
     m,n    [ a_21  a_22  ...  a_2n ]
            [  .     .          .   ]
            [ a_m1  a_m2  ...  a_mn ]
a_ij represents a typical element of the matrix A. The first subscript, i, refers to the number of the row in which a_ij lies, starting with 1 at the top and proceeding down to m at the bottom. The second subscript, j, refers to the number of the column containing a_ij, starting with 1 at the left and proceeding to n at the right. Thus a_ij lies in the ith row and jth column. For example, a_23 in matrix (1) is 5, and a_12 in matrix (3) is 9.
A.2. TYPES OF MATRICES

A matrix A (of dimensions m×n) is square when m = n. The following are examples of square matrices:

    A = [ 1  2 ]     B = [ a  b  c ]
        [ 3  4 ]         [ d  e  f ]
                         [ g  h  k ]

The main diagonal of A is composed of the elements 1 and 4, while that of B contains the elements a, e, and k.
A row matrix, or row vector, is a matrix composed of only one row. It is designated by a lowercase boldface Roman letter. For example,

    a     = [a_1, a_2, ..., a_n]   and   c     = [1, 2, 4].
     1,n                                  1,3
A column matrix, or column vector, is a matrix composed of only one column. For example,

    b     = [ b_1 ]   and   d     = [ 1 ].
     m,1    [ b_2 ]          2,1    [ 3 ]
            [  .  ]
            [ b_m ]
A diagonal matrix is a square matrix in which all elements not on the main diagonal are zero. For example,

    D = [ d_11   0    ...   0   ]
        [  0    d_22  ...   0   ],
        [  .                .   ]
        [  0     0    ...  d_mm ]

where d_ij = 0 for all i ≠ j. For example,

    G = [ 1  0  0 ]   and   H = [ p  0  0 ]
        [ 0  0  0 ]             [ 0  q  0 ]
        [ 0  0  3 ]             [ 0  0  r ]

are diagonal matrices.
A scalar matrix is a diagonal matrix in which all main diagonal elements are equal, i.e.,

    [ a  0  0 ]
    [ 0  a  0 ],   with a_ij = 0 for all i ≠ j and a_ij = a for all i = j.
    [ 0  0  a ]

For example,

    H = [ 2  0  0 ]
        [ 0  2  0 ]
        [ 0  0  2 ]

is a scalar matrix.
A unit or identity matrix is a diagonal matrix whose main diagonal elements are all equal to 1. A unit matrix will always be referred to by I. Thus,

    I = [ 1  0  0 ]
        [ 0  1  0 ],   with a_ij = 0 for all i ≠ j and a_ij = 1 for all i = j.
        [ 0  0  1 ]
A null or zero matrix is a matrix whose elements are all zero. It is denoted by a boldface
zero, 0.
A triangular matrix is a square matrix whose elements above (or below), but not including, the main diagonal are all zero. An upper triangular matrix takes the form

    A = [ a_11  a_12  ...  a_1n ]
        [  0    a_22  ...  a_2n ],   with a_ij = 0 for i > j.
        [  .     .          .   ]
        [  0     0    ...  a_nn ]

The matrix

    A = [ 1  3  4 ]
        [ 0  1  0 ]
        [ 0  0  7 ]

is an example of an upper triangular matrix of order 3. A lower triangular matrix is of the form

    B = [ a_11   0    ...   0   ]
        [ a_21  a_22  ...   0   ],   where a_ij = 0 for i < j.
        [  .     .          .   ]
        [ a_m1  a_m2  ...  a_mm ]
The matrix

    B = [ 18   0 ]
        [  2  11 ]

is a lower triangular matrix of order 2.
A.3. EQUALITY OF MATRICES
Two matrices A and B of the same dimensions are equal if each element a_ij = b_ij for all i and j. Matrices of different dimensions cannot be equated. Some relationships which apply to matrix equality include:

    If A = B, then B = A for all A and B.              (A-1)
    If A = B and B = C, then A = C for all A, B, C.    (A-2)
For example, the matrices

    A = [ 1  2 ]   and   B = [ b_11  b_12 ]
        [ 3  4 ]             [ b_21  b_22 ]

are equal if and only if a_ij = b_ij, i.e., b_11 = 1, b_12 = 2, b_21 = 3, and b_22 = 4.

A.4. MATRIX ADDITION AND SUBTRACTION

Two matrices of the same dimensions may be added (or subtracted) by adding (or subtracting) corresponding elements; if C = A + B, then c_ij = a_ij + b_ij for all i and j. The following relationships hold:

    A + B = B + A (commutative law)                            (A-3)
    (A + B) + C = A + (B + C) = A + B + C (associative law)    (A-4)
    A + 0 = 0 + A = A                                          (A-5)
    A + (-A) = 0, in which -A has elements -a_ij.              (A-6)
To illustrate, let

    A = [ 1  2  0 ],   B = [ 6  4  2 ],   and   C = [ x  y  z ],
        [ 0  3  5 ]        [ 3  2  7 ]              [ u  v  w ]

and suppose C = B - A. Then

    B - A = [ 6  4  2 ] - [ 1  2  0 ] = [ 5   2  2 ].
            [ 3  2  7 ]   [ 0  3  5 ]   [ 3  -1  2 ]

Thus,

    C = [ x  y  z ] = [ 5   2  2 ],
        [ u  v  w ]   [ 3  -1  2 ]

and x = 5, y = 2, z = 2, u = 3, v = -1, and w = 2.
A.5. MULTIPLICATION OF A MATRIX BY A SCALAR

The product of a scalar α and a matrix A is the matrix B = αA whose elements are b_ij = α a_ij for all i and j. (In general, a scalar is denoted by a lowercase Greek letter.) For example, with α = 3 and

    A = [ 7  2 ],
        [ 3  4 ]

we get

    B = 3A = [ 21   6 ].
             [  9  12 ]

The following relations hold for scalar multiplication:

    α(A + B) = αA + αB        (A-7)
    (α + β)A = αA + βA        (A-8)
    α(AB) = (αA)B = A(αB)     (A-9)
    α(βA) = (αβ)A.            (A-10)
A.6. MATRIX MULTIPLICATION

Two matrices A (of dimensions m×k) and B (of dimensions k×n) may be multiplied together as C = AB, where C is of dimensions m×n, only when the number of columns of A equals the number of rows of B. Each element of C is formed from a row of A and a column of B:

    c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_ik b_kj,          (A-11)

or, more compactly,

    c_ij = Σ (r=1..k) a_ir b_rj.                              (A-12)
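The row-by-column rule of Eq. (A-11) translates directly into code. A minimal sketch (the helper name is illustrative):

```python
def matmul(A, B):
    """Product C = AB of an m*k matrix A and a k*n matrix B (lists of rows).
    Each element is c_ij = sum over r of a_ir * b_rj, as in Eq. (A-11)."""
    m, k, n = len(A), len(B), len(B[0])
    assert all(len(row) == k for row in A), "columns of A must equal rows of B"
    return [[sum(A[i][r] * B[r][j] for r in range(k)) for j in range(n)]
            for i in range(m)]

# e.g. matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) -> [[19, 22], [43, 50]]
```

Note that the conformability check mirrors the dimension requirement stated above.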
To illustrate the multiplication of matrices, let

    A     = [ 1  2 ],   B     = [ 1  1 ],   C     = [ 1  0  1 ].
     3,2    [ 0  1 ]     2,2    [ 2  0 ]     3,3    [ 0  1  2 ]
            [ 1  0 ]                                [ 1  0  1 ]

Then

    AB = [ 1·1+2·2  1·1+2·0 ]   [ 5  1 ]
         [ 0·1+1·2  0·1+1·0 ] = [ 2  0 ]
         [ 1·1+0·2  1·1+0·0 ]   [ 1  1 ]

and

    CA = [ 1·1+0·0+1·1  1·2+0·1+1·0 ]   [ 2  2 ]
         [ 0·1+1·0+2·1  0·2+1·1+2·0 ] = [ 2  1 ].
         [ 1·1+0·0+1·1  1·2+0·1+1·0 ]   [ 2  2 ]
Matrix multiplication is, in general, not commutative, so the order of the factors must be preserved. For example, if

    T = [ 1  2 ]   and   S = [ 3  4 ],
        [ 5  0 ]             [ 0  2 ]

then

    TS = [ 1  2 ][ 3  4 ] = [  3   8 ],
         [ 5  0 ][ 0  2 ]   [ 15  20 ]

while

    ST = [ 3  4 ][ 1  2 ] = [ 23  6 ],
         [ 0  2 ][ 5  0 ]   [ 10  0 ]

with the obvious result that TS ≠ ST.

The following relationships regarding matrix multiplication hold:

    AI = IA = A, in which I is the unit or identity matrix   (A-13)
    A(BC) = (AB)C (associative law)                          (A-14)
    A(B + C) = AB + AC (distributive law)                    (A-15)
    (A + B)C = AC + BC.                                      (A-16)
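The non-commutativity just shown is easy to verify in a few lines (a sketch; the one-line multiplication helper is illustrative):

```python
# Checking TS != ST with the matrices of the example above.
def matmul(A, B):
    # zip(*B) iterates over the columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

T = [[1, 2], [5, 0]]
S = [[3, 4], [0, 2]]
TS = matmul(T, S)   # [[3, 8], [15, 20]]
ST = matmul(S, T)   # [[23, 6], [10, 0]]
```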
One important property of matrix multiplication which distinguishes it from scalar multiplication is that the product of two matrices can be the null or zero matrix without either factor being a null matrix. For example,

    [ 3  0  0 ] [ 0  0 ]   [ 0  0 ]
    [ 0  0  2 ] [ 3  5 ] = [ 0  0 ].
                [ 0  0 ]
A.7. THE TRANSPOSE OF A MATRIX

The transpose of a matrix A, denoted by A^t, is the matrix obtained from A by interchanging rows and columns. If B = A^t, it follows that b_ij = a_ji for all i and j. For example, if

    A = [ 3  2 ],   then   A^t = [ 3  1 ];
        [ 1  1 ]                 [ 2  1 ]

if

    B = [ 1  6 ],   then   B^t = [ 1  0  5 ];
        [ 0  4 ]                 [ 6  4  0 ]
        [ 5  0 ]

if

    C = [ a  b  c ],   then   C^t = [ a ].
                                    [ b ]
                                    [ c ]

The following relationships hold for the transpose:

    (A + B)^t = A^t + B^t      (A-17)
    (AB)^t = B^t A^t           (A-18)
    (A^t)^t = A                (A-19)
    (αA)^t = α A^t.            (A-20)
Eq. (A-17) can be readily verified by recalling that matrix addition is element by element. Thus, whether transposing follows addition, or addition follows transposing, the result will be the same. Equation (A-18) is quite important, since it shows that transposing a matrix product amounts to transposing each matrix and then reversing the sequence before performing the multiplication. If A is m×k and B is k×n, then AB is an m×n matrix and (AB)^t is n×m, which is the same as the dimensions of the product B^t A^t of B^t (n×k) and A^t (k×m).
As an example, let

    A     = [ 1  1  0 ]   and   B     = [ 1 ];
     2,3    [ 0  2  3 ]          3,1    [ 1 ]
                                        [ 2 ]

then

    C     = AB = [ 1  1  0 ] [ 1 ]   [ 2 ]
     2,1         [ 0  2  3 ] [ 1 ] = [ 8 ],
                             [ 2 ]

while

    B^t A^t = [ 1  1  2 ] [ 1  0 ] = [ 2  8 ],
                          [ 1  2 ]
                          [ 0  3 ]

which is equal to C^t.
Equations (A-19) and (A-20) are straightforward and should be verified by the reader,
using numerical examples.
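One such numerical verification of Eq. (A-18), using the matrices of the example above, might look like this (a sketch; the helper names are illustrative):

```python
# Verifying (AB)^t = B^t A^t with A (2x3) and B (3x1) from the text.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 1, 0], [0, 2, 3]]
B = [[1], [1], [2]]
C = matmul(A, B)                           # [[2], [8]]
lhs = transpose(C)                         # (AB)^t
rhs = matmul(transpose(B), transpose(A))   # B^t A^t
```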
When the original matrix is square, the operation of transposition does not affect the elements of the main diagonal. For example, for

    A = [ a  b ],
        [ c  d ]

the transpose is

    A^t = [ a  c ].
          [ b  d ]

It follows that if the matrix is the identity matrix I, a diagonal matrix D, or a scalar matrix K, each is equal to its transpose. Hence,

    I^t = I,   D^t = D,   and   K^t = K.

If x is a column matrix, also called a column vector, then x^t x is a positive scalar which is equal to the sum of the squares of the vector components, or the square of its length. For example, if

    x = [ x_1 ],   then   x^t x = [ x_1  x_2  x_3 ] [ x_1 ] = x_1² + x_2² + x_3².
        [ x_2 ]                                     [ x_2 ]
        [ x_3 ]                                     [ x_3 ]
A.8. SYMMETRIC MATRICES

A square matrix is symmetric if it is equal to its transpose, i.e., if a_ij = a_ji for all i and j. For example,

    [ 3  2  1 ]        [ 7  2  0 ]
    [ 2  5  6 ]   and  [ 2  4  0 ]
    [ 1  6  4 ]        [ 0  0  5 ]

are symmetric matrices.
If B is a symmetric matrix, then for any conformable matrix A the product A^t B A is also symmetric. For example, let

    A = [ 1  1  0 ]   and   B = [ 3  1 ];
        [ 0  2  1 ]             [ 1  4 ]

then

    A^t B A = [ 1  0 ] [ 3  1 ] [ 1  1  0 ]   [ 3   5  1 ]
              [ 1  2 ] [ 1  4 ] [ 0  2  1 ] = [ 5  23  9 ],
              [ 0  1 ]                        [ 1   9  4 ]

which is obviously symmetric.
A.9. THE INVERSE OF A MATRIX
Division of matrices is not defined. In fact, we may have AB = AC without having B = C. This implies that the operation of "dividing" by A, even if A ≠ 0, is not possible. As an example, let

    A = [ 2  0 ],   B = [ 2  2 ],   C = [ 2  2 ],
        [ 4  0 ]        [ 5  3 ]        [ 1  4 ]

where obviously B ≠ C. Yet

    AB = [ 4  4 ] = AC.
         [ 8  8 ]
In place of division, the concept of matrix inversion is used. The inverse of a square matrix A, if it exists, is the unique matrix A^(-1) such that

    A A^(-1) = A^(-1) A = I.          (A-21)

For example, for

    A = [ 3  1 ],
        [ 2  1 ]

the matrix

    A^(-1) = [  1  -1 ]
             [ -2   3 ]

is its inverse because

    [  1  -1 ] [ 3  1 ]   [ 1  0 ]
    [ -2   3 ] [ 2  1 ] = [ 0  1 ].
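The defining relation (A-21) can be confirmed numerically for this example (a sketch; the multiplication helper is illustrative):

```python
# Verifying A A^{-1} = A^{-1} A = I for the 2x2 example above.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A    = [[3, 1], [2, 1]]
Ainv = [[1, -1], [-2, 3]]
I2   = [[1, 0], [0, 1]]
# both products should reproduce the identity matrix
left  = matmul(A, Ainv)
right = matmul(Ainv, A)
```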
The following relationships hold for the matrix inverse:

    (AB)^(-1) = B^(-1) A^(-1)        (A-22)
    (A^t)^(-1) = (A^(-1))^t          (A-23)
    (A^(-1))^(-1) = A                (A-24)
    (αA)^(-1) = (1/α) A^(-1).        (A-25)
A square matrix which has an inverse is called nonsingular, while a matrix which does not have an inverse is called singular.
It was shown previously that AB can equal 0 without either A = 0 or B = 0. If, however, AB = 0 and either A or B is nonsingular, then the other matrix must be a null matrix. Hence, the product of two nonsingular matrices cannot be a null or zero matrix.
In order to present a method for computing a matrix inverse, the concept and properties of determinants are first introduced.

A.10. DETERMINANTS

Associated with each square matrix A is a unique scalar value called the determinant of A. It is denoted either by det A or by |A|. Thus, for

    A = [ 3  1 ],
        [ 1  2 ]

the determinant is written as

    |A| = | 3  1 |,
          | 1  2 |

the value of which is computed as shown below. The student should be careful to differentiate between the square brackets used for the matrix and the vertical lines used for the determinant.
The determinant of order n (for an n×n square matrix) can be defined in terms of determinants of order n−1 and less. In order to apply this procedure, the determinant of a 1×1 matrix must be defined. Accordingly, for a matrix consisting of a single element, the determinant is defined as the value of the element, i.e., for A = [a_11],

    |A| = det A = a_11.

If A is an n×n matrix, and one row and one column of A are deleted, the resulting matrix is an (n−1)×(n−1) submatrix of A. The determinant of this submatrix is designated by m_ij, where i and j correspond to the deleted row and column, respectively. More specifically, m_ij is known as the minor of the element a_ij in A. For example, consider

    A = [ a_11  a_12  a_13 ]
        [ a_21  a_22  a_23 ].
        [ a_31  a_32  a_33 ]

Each element of A has a minor. The minor of a_11, for example, is obtained by deleting the first row and first column from A and taking the determinant of the 2×2 submatrix that remains, i.e.,
    m_11 = | a_22  a_23 |.
           | a_32  a_33 |

In similar fashion, the minors of a_12 and a_13 are

    m_12 = | a_21  a_23 |   and   m_13 = | a_21  a_22 |.
           | a_31  a_33 |                [ a_31  a_32 |

The cofactor c_ij of an element a_ij is defined as

    c_ij = (-1)^(i+j) m_ij.          (A-26)

Obviously, when the sum of the row number i and column number j is even, c_ij = m_ij; and when i + j is odd, c_ij = -m_ij. The determinant of A is then given by

    |A| = a_11 c_11 + a_12 c_12 + ... + a_1n c_1n,          (A-27)
which states that the determinant of A is the sum of the products of the elements of the first row of A and their corresponding cofactors. (It is equally possible to define |A| in terms of any other row or column, but for simplicity the first row only is used.) On the basis of this definition, the 2×2 matrix

    A = [ a_11  a_12 ]
        [ a_21  a_22 ]

has cofactors c_11 = m_11 = a_22 and c_12 = -m_12 = -a_21, so that

    |A| = a_11 a_22 - a_12 a_21.

If

    A = [ 3  1 ],
        [ 1  2 ]

then

    |A| = 3·2 - 1·1 = 5.
For the 3×3 matrix

    A = [ a_11  a_12  a_13 ]
        [ a_21  a_22  a_23 ],
        [ a_31  a_32  a_33 ]

the cofactors needed are

    c_11 =  | a_22  a_23 | = a_22 a_33 - a_23 a_32,
            | a_32  a_33 |

    c_12 = -| a_21  a_23 | = -(a_21 a_33 - a_23 a_31),
            | a_31  a_33 |

    c_13 =  | a_21  a_22 | = a_21 a_32 - a_22 a_31.
            | a_31  a_32 |

Thus, for

    A = [  1  0  1 ]
        [  0  2  3 ],
        [ -1  0  1 ]

    |A| = 1(2 - 0) - 0(0 + 3) + 1(0 + 2) = 4.
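The cofactor expansion of Eq. (A-27) lends itself to a short recursive routine. This is a sketch only; it is fine for the small matrices of this appendix but far too slow for large n:

```python
# Determinant by cofactor expansion along the first row, Eq. (A-27).
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]                                   # 1x1 case: |A| = a_11
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]   # delete row 1, column j
        total += (-1) ** j * A[0][j] * det(minor)        # cofactor c_1j = (-1)^(1+j) m_1j
    return total
```

Applied to the 3×3 example above, the routine reproduces the hand computation.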
A.11. COFACTOR AND ADJOINT MATRICES

The cofactor matrix C of a matrix A is the square matrix of the same order as A in which each element a_ij is replaced by its cofactor c_ij. For example, the cofactor matrix of

    A = [  1  2 ]
        [ -3  4 ]

is

    C = [  4  3 ].
        [ -2  1 ]
The adjoint matrix of A, denoted by adj A, is the transpose of its cofactor matrix, i.e.,

    adj A = C^t.          (A-28)

An important property of the adjoint matrix is that

    A adj A = (adj A) A = |A| I.          (A-29)

Thus, for

    A = [  1  2 ],
        [ -3  4 ]

we have

    |A| = 1·4 - 2·(-3) = 10,

    adj A = C^t = [ 4  -2 ],
                  [ 3   1 ]

and

    A adj A = [  1  2 ] [ 4  -2 ]   [ 10   0 ]
              [ -3  4 ] [ 3   1 ] = [  0  10 ] = 10 I,

or

    (adj A) A = [ 4  -2 ] [  1  2 ]   [ 10   0 ]
                [ 3   1 ] [ -3  4 ] = [  0  10 ] = 10 I.
A.12. THE INVERSE BY THE ADJOINT MATRIX

It follows from Eq. (A-29) that, provided |A| ≠ 0,

    A^(-1) = adj A / |A|.          (A-30)

Thus, the inverse of

    A = [  1  2 ]
        [ -3  4 ]

is

    A^(-1) = (1/10) [ 4  -2 ] = [ 0.4  -0.2 ].
                    [ 3   1 ]   [ 0.3   0.1 ]

As a further example, consider

    A = [ 3  -1  -1 ]
        [ 2   1   0 ].
        [ 1   2   1 ]
First, the determinant of A is

    |A| = 3(1 - 0) + 1(2 - 0) - 1(4 - 1) = 2,

and the elements of the cofactor matrix are

    c_11 = 1,    c_12 = -2,   c_13 = 3,
    c_21 = -1,   c_22 = 4,    c_23 = -7,
    c_31 = 1,    c_32 = -2,   c_33 = 5.

Thus,

    C = [  1  -2   3 ],   adj A = C^t = [  1  -1   1 ],
        [ -1   4  -7 ]                  [ -2   4  -2 ]
        [  1  -2   5 ]                  [  3  -7   5 ]

and

    A^(-1) = (1/2) adj A = [  0.5  -0.5   0.5 ]
                           [ -1.0   2.0  -1.0 ].
                           [  1.5  -3.5   2.5 ]
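The adjoint-matrix procedure of Eq. (A-30) can be coded directly. A minimal sketch for small matrices (names illustrative):

```python
# Inverse by the adjoint-matrix method: A^{-1} = adj A / |A|, Eq. (A-30).
def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

def inverse(A):
    n = len(A)
    d = det(A)                               # must be nonzero
    # cofactor matrix: c_ij = (-1)^(i+j) * minor m_ij
    C = [[(-1) ** (i + j)
          * det([row[:j] + row[j+1:] for k, row in enumerate(A) if k != i])
          for j in range(n)] for i in range(n)]
    adj = [list(col) for col in zip(*C)]     # adj A = C^t
    return [[adj[i][j] / d for j in range(n)] for i in range(n)]
```

Running it on the 3×3 example above reproduces the inverse computed by hand.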
A.13. SOLUTION OF LINEAR EQUATIONS

A system of n linear equations in n unknowns may be written as

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
      .................................
    a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n.          (A-31)

In Eq. (A-31), the a_ij are numerical coefficients, the b_i are constants, and the x_j are the unknowns. Equation (A-31) may be expressed in matrix form as

    A x = b          (A-32)

in which

    A = [ a_11  a_12  ...  a_1n ]        [ x_1 ]        [ b_1 ]
        [ a_21  a_22  ...  a_2n ],   x = [ x_2 ],   b = [ b_2 ].
        [  .     .          .   ]        [  .  ]        [  .  ]
        [ a_n1  a_n2  ...  a_nn ]        [ x_n ]        [ b_n ]
If the determinant of A is nonzero, the solution to Eq. (A-31), or to Eq. (A-32), is a unique set of n numerical values for the x_j that satisfy all equations simultaneously. This solution is obtained by premultiplying both sides of Eq. (A-32) by A^(-1) (which exists because |A| ≠ 0) to give

    A^(-1) A x = A^(-1) b

or

    x = A^(-1) b,          (A-33)

since A^(-1) A = I by definition. Thus, the solution is equivalent to finding the inverse of the coefficient matrix. As an example, consider the following set of three equations in three unknowns:
    3x_1 - x_2 - x_3 = 2
    2x_1 + x_2       = 1
     x_1 + 2x_2 + x_3 = 3

or

    [ 3  -1  -1 ] [ x_1 ]   [ 2 ]
    [ 2   1   0 ] [ x_2 ] = [ 1 ].
    [ 1   2   1 ] [ x_3 ]   [ 3 ]

The inverse of the coefficient matrix has already been computed in Section A.12. Thus, by Eq. (A-33),

    x = A^(-1) b = [  0.5  -0.5   0.5 ] [ 2 ]   [  2 ]
                   [ -1.0   2.0  -1.0 ] [ 1 ] = [ -3 ]
                   [  1.5  -3.5   2.5 ] [ 3 ]   [  7 ]

is the solution to the given set of equations. It is easily seen that the values x_1 = 2, x_2 = -3, and x_3 = 7 satisfy all three equations.
Computing the inverse by the adjoint matrix method becomes quite tedious and time-consuming when n is greater than 3. In such cases, more efficient procedures are usually employed for finding the inverse and solving the equations. These procedures are not within the scope of this appendix.
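The worked system above can be replayed as follows. This sketch simply mirrors the text's x = A^(-1) b computation; in practice an elimination routine would be preferred over forming the explicit inverse:

```python
# Solving the 3x3 system above via x = A^{-1} b, Eq. (A-33).
A = [[3, -1, -1],
     [2,  1,  0],
     [1,  2,  1]]
Ainv = [[ 0.5, -0.5,  0.5],
        [-1.0,  2.0, -1.0],
        [ 1.5, -3.5,  2.5]]
b = [2, 1, 3]
x = [sum(Ainv[i][j] * b[j] for j in range(3)) for i in range(3)]
# substitute back into A x = b as a check
residual = [sum(A[i][j] * x[j] for j in range(3)) - b[i] for i in range(3)]
```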
A.14. BILINEAR AND QUADRATIC FORMS
If x is a vector of m variables, y is a vector of n variables, and A is an m×n matrix, the scalar function

    u = x^t A y          (A-34)

is known as a bilinear form. For example, if

    x = [ x_1 ],   y = [ y_1 ],   and   A = [ 3  1 ],
        [ x_2 ]        [ y_2 ]              [ 2  1 ]
        [ x_3 ]                             [ 1  2 ]

then

    u = x^t A y = 3x_1 y_1 + 2x_2 y_1 + x_3 y_1 + x_1 y_2 + x_2 y_2 + 2x_3 y_2

is a bilinear form.
If x is a vector of n variables, and A is a square symmetric matrix of order n, the scalar function

    q = x^t A x          (A-35)

is known as a quadratic form. For example, if

    x = [ x_1 ]   and   A = [ 3  2 ],
        [ x_2 ]             [ 2  1 ]

then

    q = [ x_1  x_2 ] [ 3  2 ] [ x_1 ] = 3x_1² + 4x_1 x_2 + x_2²
                     [ 2  1 ] [ x_2 ]

is a quadratic form.

A good example of a quadratic form is the weighted sum of the squared residuals,

    v^t W v,          (A-36)

which is minimized in least squares. In this particular quadratic form, v is the vector of observational residuals and W is the symmetric weight matrix of the observations.
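Evaluating a quadratic form is a double summation over the elements of A. A minimal sketch using the 2×2 example above (names illustrative):

```python
# Evaluating q = x^t A x, Eq. (A-35), both directly and via the
# expanded expression 3*x1**2 + 4*x1*x2 + x2**2 from the example.
A = [[3, 2], [2, 1]]

def quad(x):
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

x1, x2 = 1.0, 2.0
q = quad([x1, x2])                      # 3 + 8 + 4 = 15
expanded = 3*x1**2 + 4*x1*x2 + x2**2    # same value
```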
A.15. DIFFERENTIATION OF VECTORS, BILINEAR FORMS, AND QUADRATIC FORMS

If y is a vector of m functions of the n variables x_1, x_2, ..., x_n, the derivative of y with respect to x is the m×n Jacobian matrix, which is composed of elements that are the partial derivatives of the individual components of y with respect to the individual components of x, i.e.,

    J_yx = ∂y/∂x = [ ∂y_1/∂x_1  ∂y_1/∂x_2  ...  ∂y_1/∂x_n ]
                   [     .          .               .      ]          (A-37)
                   [ ∂y_m/∂x_1  ∂y_m/∂x_2  ...  ∂y_m/∂x_n ]
For example, if

    y = [ x_1² + 2x_2 - 3x_3 ]
        [ 7 - 2x_1 + 5x_3    ],
        [ 2x_2² + 3x_3²      ]

then

    J_yx = [ 2x_1    2     -3   ]
           [ -2      0      5   ].
           [  0     4x_2   6x_3 ]
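A Jacobian like the one above can be checked numerically by central differences. The specific function components used here are an assumed example for illustration:

```python
# Finite-difference check of a Jacobian matrix (central differences).
def y(x1, x2, x3):
    # assumed example vector function of three variables
    return [x1**2 + 2*x2 - 3*x3,
            7 - 2*x1 + 5*x3,
            2*x2**2 + 3*x3**2]

def jacobian(x, h=1e-6):
    J = []
    for i in range(3):            # rows: components of y
        row = []
        for j in range(3):        # columns: variables x_j
            xp = list(x); xp[j] += h
            xm = list(x); xm[j] -= h
            row.append((y(*xp)[i] - y(*xm)[i]) / (2 * h))
        J.append(row)
    return J
```

At x = (1, 2, 3) the analytic Jacobian evaluates to [[2, 2, -3], [-2, 0, 5], [0, 8, 18]], and the numerical version agrees to within rounding.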
The partial derivatives of the bilinear form u = x^t A y of Eq. (A-34) with respect to x and y are

    ∂u/∂x = [ ∂u/∂x_1, ∂u/∂x_2, ..., ∂u/∂x_m ] = y^t A^t          (A-38)

and

    ∂u/∂y = [ ∂u/∂y_1, ∂u/∂y_2, ..., ∂u/∂y_n ] = x^t A.          (A-39)

To derive Eq. (A-38), set Ay = b, a column vector of m constants, so that

    u = x^t A y = x^t b = x_1 b_1 + x_2 b_2 + ... + x_m b_m.

It is then easily seen that

    ∂u/∂x = [ b_1, b_2, ..., b_m ] = b^t = y^t A^t.

The partial derivative with respect to y is derived in similar fashion by setting A^t x = c, a column vector of n constants, and then expressing u as follows:

    u = x^t A y = (A^t x)^t y = c^t y = c_1 y_1 + c_2 y_2 + ... + c_n y_n.

Thus,

    ∂u/∂y = [ c_1, c_2, ..., c_n ] = c^t = x^t A.
The partial derivative of the quadratic form q = x^t A x of Eq. (A-35), in which A is symmetric, with respect to x is

    ∂q/∂x = [ ∂q/∂x_1, ∂q/∂x_2, ..., ∂q/∂x_n ] = 2 x^t A.          (A-40)

To verify this relation, expand the quadratic form:

    q = x^t A x = a_11 x_1² + a_22 x_2² + ... + a_nn x_n²
        + 2a_12 x_1 x_2 + ... + 2a_1n x_1 x_n + 2a_2n x_2 x_n + ....

Thus,

    ∂q/∂x_1 = 2a_11 x_1 + 2a_12 x_2 + ... + 2a_1n x_n = 2 x^t a_1,

where a_1 = [a_11, a_12, ..., a_1n]^t is the first column of A (A being symmetric), and

    ∂q/∂x_2 = 2a_21 x_1 + 2a_22 x_2 + ... + 2a_2n x_n = 2 x^t a_2,

where a_2 = [a_21, a_22, ..., a_2n]^t. Hence,

    ∂q/∂x = [ 2x^t a_1, 2x^t a_2, ..., 2x^t a_n ] = 2 x^t [ a_1, a_2, ..., a_n ] = 2 x^t A.
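Eq. (A-40) can also be confirmed numerically: for a symmetric A, central differences of q should reproduce 2 x^t A. A short sketch (names illustrative):

```python
# Numerical check of dq/dx = 2 x^t A for symmetric A, Eq. (A-40).
A = [[3, 2], [2, 1]]

def q(x):
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

x = [1.0, 2.0]
analytic = [2 * sum(x[i] * A[i][j] for i in range(2)) for j in range(2)]  # 2 x^t A
h = 1e-6
numeric = []
for k in range(2):
    xp = list(x); xp[k] += h
    xm = list(x); xm[k] -= h
    numeric.append((q(xp) - q(xm)) / (2 * h))   # central difference
```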
APPENDIX B
Tables
TABLE I. STANDARD NORMAL DISTRIBUTION FUNCTION

    Φ(z) = (1/√(2π)) ∫ (from -∞ to z) e^(-u²/2) du = P[Z ≤ z]

  z     .00   .01   .02   .03   .04   .05   .06   .07   .08   .09
 -3.  .0013 .0010 .0007 .0005 .0003 .0002 .0002 .0001 .0001 .0000
 -2.9 .0019 .0018 .0017 .0017 .0016 .0016 .0015 .0015 .0014 .0014
 -2.8 .0026 .0025 .0024 .0023 .0023 .0022 .0021 .0021 .0020 .0019
 -2.7 .0035 .0034 .0033 .0032 .0031 .0030 .0029 .0028 .0027 .0026
 -2.6 .0047 .0045 .0044 .0043 .0041 .0040 .0039 .0038 .0037 .0036
 -2.5 .0062 .0060 .0059 .0057 .0055 .0054 .0052 .0051 .0049 .0048
 -2.4 .0082 .0080 .0078 .0075 .0073 .0071 .0069 .0068 .0066 .0064
 -2.3 .0107 .0104 .0102 .0099 .0096 .0094 .0091 .0089 .0087 .0084
 -2.2 .0139 .0136 .0132 .0129 .0126 .0122 .0119 .0116 .0113 .0110
 -2.1 .0179 .0174 .0170 .0166 .0162 .0158 .0154 .0150 .0146 .0143
 -2.0 .0228 .0222 .0217 .0212 .0207 .0202 .0197 .0192 .0188 .0183
 -1.9 .0287 .0281 .0274 .0268 .0262 .0256 .0250 .0244 .0238 .0233
 -1.8 .0359 .0352 .0344 .0336 .0329 .0322 .0314 .0307 .0300 .0294
 -1.7 .0446 .0436 .0427 .0418 .0409 .0401 .0392 .0384 .0375 .0367
 -1.6 .0548 .0537 .0526 .0516 .0505 .0495 .0485 .0475 .0465 .0455
 -1.5 .0668 .0655 .0643 .0630 .0618 .0606 .0594 .0582 .0570 .0559
 -1.4 .0808 .0793 .0778 .0764 .0749 .0735 .0722 .0708 .0694 .0681
 -1.3 .0968 .0951 .0934 .0918 .0901 .0885 .0869 .0853 .0838 .0823
 -1.2 .1151 .1131 .1112 .1093 .1075 .1056 .1038 .1020 .1003 .0985
 -1.1 .1357 .1335 .1314 .1292 .1271 .1251 .1230 .1210 .1190 .1170
 -1.0 .1587 .1562 .1539 .1515 .1492 .1469 .1446 .1423 .1401 .1379
  -.9 .1841 .1814 .1788 .1762 .1736 .1711 .1685 .1660 .1635 .1611
  -.8 .2119 .2090 .2061 .2033 .2005 .1977 .1949 .1922 .1894 .1867
  -.7 .2420 .2389 .2358 .2327 .2297 .2266 .2236 .2206 .2177 .2148
  -.6 .2743 .2709 .2676 .2643 .2611 .2578 .2546 .2514 .2483 .2451
  -.5 .3085 .3050 .3015 .2981 .2946 .2912 .2877 .2843 .2810 .2776
  -.4 .3446 .3409 .3372 .3336 .3300 .3264 .3228 .3192 .3156 .3121
  -.3 .3821 .3783 .3745 .3707 .3669 .3632 .3594 .3557 .3520 .3483
  -.2 .4207 .4168 .4129 .4090 .4052 .4013 .3974 .3936 .3897 .3859
  -.1 .4602 .4562 .4522 .4483 .4443 .4404 .4364 .4325 .4286 .4247
  -.0 .5000 .4960 .4920 .4880 .4840 .4801 .4761 .4721 .4681 .4641

Reprinted with permission of Macmillan Publishing Co., Inc. from Introduction to Probability and Statistics by B.W. Lindgren and G.W. McElrath. Copyright 1969 by B.W. Lindgren and G.W. McElrath.
  z     .00   .01   .02   .03   .04   .05   .06   .07   .08   .09
  .0  .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
  .1  .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
  .2  .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
  .3  .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
  .4  .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
  .5  .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
  .6  .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
  .7  .7580 .7611 .7642 .7673 .7703 .7734 .7764 .7794 .7823 .7852
  .8  .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
  .9  .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
 1.0  .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
 1.1  .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
 1.2  .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
 1.3  .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
 1.4  .9192 .9207 .9222 .9236 .9251 .9265 .9278 .9292 .9305 .9319
 1.5  .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
 1.6  .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
 1.7  .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
 1.8  .9641 .9648 .9656 .9664 .9671 .9678 .9686 .9693 .9700 .9706
 1.9  .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9762 .9767
 2.0  .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
 2.1  .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
 2.2  .9861 .9864 .9868 .9871 .9874 .9878 .9881 .9884 .9887 .9890
 2.3  .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
 2.4  .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
 2.5  .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
 2.6  .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
 2.7  .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
 2.8  .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
 2.9  .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
 3.   .9987 .9990 .9993 .9995 .9997 .9998 .9998 .9999 .9999 1.0000
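Entries of Table I can be reproduced from the error function, since Φ(z) = ½[1 + erf(z/√2)]. A quick spot check (not part of the original table):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal distribution function, Phi(z) = P[Z <= z]."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(round(phi(1.96), 4))   # 0.975
print(round(phi(-1.0), 4))   # 0.1587
```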
TABLE II. PERCENTILES OF THE CHI-SQUARE DISTRIBUTION

Tabulated values are χ²_p such that P[χ² ≤ χ²_p] = p for the indicated degrees of freedom.

DEGREES OF
FREEDOM   χ²_.005  χ²_.01  χ²_.025  χ²_.05  χ²_.10  χ²_.20  χ²_.30
  1        .000    .000    .001     .004    .016    .064    .148
  2        .010    .020    .051     .103    .211    .446    .713
  3        .072    .115    .216     .352    .584    1.00    1.42
  4        .207    .297    .484     .711    1.06    1.65    2.20
  5        .412    .554    .831     1.15    1.61    2.34    3.00
  6        .676    .872    1.24     1.64    2.20    3.07    3.83
  7        .989    1.24    1.69     2.17    2.83    3.82    4.67
  8        1.34    1.65    2.18     2.73    3.49    4.59    5.53
  9        1.73    2.09    2.70     3.33    4.17    5.38    6.39
 10        2.16    2.56    3.25     3.94    4.87    6.18    7.27
 11        2.60    3.05    3.82     4.57    5.58    6.99    8.15
 12        3.07    3.57    4.40     5.23    6.30    7.81    9.03
 13        3.57    4.11    5.01     5.89    7.04    8.63    9.93
 14        4.07    4.66    5.63     6.57    7.79    9.47    10.8
 15        4.60    5.23    6.26     7.26    8.55    10.3    11.7
 16        5.14    5.81    6.91     7.96    9.31    11.2    12.6
 17        5.70    6.41    7.56     8.67    10.1    12.0    13.5
 18        6.26    7.01    8.23     9.39    10.9    12.9    14.4
 19        6.83    7.63    8.91     10.1    11.7    13.7    15.4
 20        7.43    8.26    9.59     10.9    12.4    14.6    16.3
 21        8.03    8.90    10.3     11.6    13.2    15.4    17.2
 22        8.64    9.54    11.0     12.3    14.0    16.3    18.1
 23        9.26    10.2    11.7     13.1    14.8    17.2    19.0
 24        9.89    10.9    12.4     13.8    15.7    18.1    19.9
 25        10.5    11.5    13.1     14.6    16.5    18.9    20.9
 26        11.2    12.2    13.8     15.4    17.3    19.8    21.8
 27        11.8    12.9    14.6     16.2    18.1    20.7    22.7
 28        12.5    13.6    15.3     16.9    18.9    21.6    23.6
 29        13.1    14.3    16.0     17.7    19.8    22.5    24.6
 30        13.8    15.0    16.8     18.5    20.6    23.4    25.5
 40        20.7    22.1    24.4     26.5    29.0    32.3    34.9
 50        28.0    29.7    32.3     34.8    37.7    41.4    44.3
 60        35.5    37.5    40.5     43.2    46.5    50.6    53.8

Reprinted with permission of Macmillan Publishing Co., Inc. from Introduction to Probability and Statistics by B.W. Lindgren and G.W. McElrath. Copyright 1969 by B.W. Lindgren and G.W. McElrath. The table was adapted from Table VIII of Biometrika Tables for Statisticians, Vol. 1, 3rd Edition (1966) by E.S. Pearson and H.O. Hartley, originally prepared by Catherine M. Thompson, and is reprinted with the kind permission of the Biometrika Trustees.
DEGREES OF
FREEDOM   χ²_.50  χ²_.70  χ²_.80  χ²_.90  χ²_.95  χ²_.975  χ²_.99  χ²_.995
  1        .455    1.07    1.64    2.71    3.84    5.02     6.63    7.88
  2        1.39    2.41    3.22    4.61    5.99    7.38     9.21    10.6
  3        2.37    3.66    4.64    6.25    7.81    9.35     11.3    12.8
  4        3.36    4.88    5.99    7.78    9.49    11.1     13.3    14.9
  5        4.35    6.06    7.29    9.24    11.1    12.8     15.1    16.7
  6        5.35    7.23    8.56    10.6    12.6    14.4     16.8    18.5
  7        6.35    8.38    9.80    12.0    14.1    16.0     18.5    20.3
  8        7.34    9.52    11.0    13.4    15.5    17.5     20.1    22.0
  9        8.34    10.7    12.2    14.7    16.9    19.0     21.7    23.6
 10        9.34    11.8    13.4    16.0    18.3    20.5     23.2    25.2
 11        10.3    12.9    14.6    17.3    19.7    21.9     24.7    26.8
 12        11.3    14.0    15.8    18.5    21.0    23.3     26.2    28.3
 13        12.3    15.1    17.0    19.8    22.4    24.7     27.7    29.8
 14        13.3    16.2    18.2    21.1    23.7    26.1     29.1    31.3
 15        14.3    17.3    19.3    22.3    25.0    27.5     30.6    32.8
 16        15.3    18.4    20.5    23.5    26.3    28.8     32.0    34.3
 17        16.3    19.5    21.6    24.8    27.6    30.2     33.4    35.7
 18        17.3    20.6    22.8    26.0    28.9    31.5     34.8    37.2
 19        18.3    21.7    23.9    27.2    30.1    32.9     36.2    38.6
 20        19.3    22.8    25.0    28.4    31.4    34.2     37.6    40.0
 21        20.3    23.9    26.2    29.6    32.7    35.5     38.9    41.4
 22        21.3    24.9    27.3    30.8    33.9    36.8     40.3    42.8
 23        22.3    26.0    28.4    32.0    35.2    38.1     41.6    44.2
 24        23.3    27.1    29.6    33.2    36.4    39.4     43.0    45.6
 25        24.3    28.2    30.7    34.4    37.7    40.6     44.3    46.9
 26        25.3    29.2    31.8    35.6    38.9    41.9     45.6    48.3
 27        26.3    30.3    32.9    36.7    40.1    43.2     47.0    49.6
 28        27.3    31.4    34.0    37.9    41.3    44.5     48.3    51.0
 29        28.3    32.5    35.1    39.1    42.6    45.7     49.6    52.3
 30        29.3    33.5    36.2    40.3    43.8    47.0     50.9    53.7
 40        39.3    44.2    47.3    51.8    55.8    59.3     63.7    66.8
 50        49.3    54.7    58.2    63.2    67.5    71.4     76.2    79.5
 60        59.3    65.2    69.0    74.4    79.1    83.3     88.4    92.0
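The percentiles above can be spot-checked by computing the chi-square distribution function, P[χ²(k) ≤ x] = P(k/2, x/2), where P is the regularized lower incomplete gamma function. The following series-based sketch is illustrative only, adequate for table-sized arguments:

```python
from math import exp, log, gamma

def chi2_cdf(x, k, terms=200):
    """P[chi-square with k degrees of freedom <= x], via the series
    P(a, t) = t^a e^{-t} / Gamma(a+1) * sum_{n>=0} t^n / ((a+1)...(a+n)),
    with a = k/2 and t = x/2."""
    a, t = k / 2.0, x / 2.0
    if t <= 0.0:
        return 0.0
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= t / (a + n)
        total += term
    return exp(a * log(t) - t - log(gamma(a + 1.0))) * total
```

For example, chi2_cdf(5.99, 2) and chi2_cdf(3.84, 1) both come out close to 0.95, matching the χ²_.95 column of the table.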