
8000 North Ocean Drive

Dania, Florida 33004

A NOTE ON ADJUSTMENT OF FREE NETWORKS

Abstract

The present paper deals with the least-squares adjustment where the design matrix (A) is rank-deficient. The adjusted parameters (x) as well as their variance-covariance matrix (Σ_x) can be obtained as in the "standard" adjustment where A has the full column rank, supplemented with constraints, Cx = w , where C is the constraint matrix and w is sometimes called the "constant vector". In this analysis only the inner adjustment constraints are considered, where C has the full row rank equal to the rank deficiency of A , and AC^T = 0 . Perhaps the most important outcome points to the three kinds of results:

1) A general least-squares solution where both x and Σ_x are indeterminate corresponds to w = arbitrary random vector.

2) A minimum trace solution where Σ_x is determined (and trace Σ_x = minimum) corresponds to w = arbitrary constant vector.

3) The minimum norm (least-squares) solution where both x and Σ_x are determined corresponds to w = 0.

1. Introduction

A network is said to be free if its geometrical shape has been determined, as during triangulation, but it is essentially unattached (to some well defined coordinate

axes) in a space of appropriate dimensions. The scale of the network could also be free, or

could be a part of an observational process ; for example, one or more baselines in a

triangulation network can be measured, or all sides of a network can be measured as

during trilateration, etc. These concepts of classical geodesy can be, of course, generalized

and extended to other fields. In the present context of geometric networks one realizes

that unless external information is supplied, a unique least-squares solution in terms of

coordinates is impossible without some further stipulations. The theoretical aspects of

such "further stipulations" form the backbone of this paper.

The subject of adjusting free networks is not without useful applications in

practice. Although external information may not be readily available, one could still be

compelled to form the observation equations and carry out the least-squares solution, in

a preliminary coordinate system, for the sake of an analysis of the residuals and for other

Bull. Géod. 56 (1982) pp. 281-299.


G. BLAHA

reasons. There are an infinite number of ways in which this preliminary coordinate system

can be realized mathematically. However, some definitions could lead to the variance-

covariance characteristics favoring certain coordinates while impairing others, to the point

that numerical difficulties could imperil the solution itself. Furthermore, should the

preliminary coordinates serve in their own right for any length of time, their error

characteristics should be balanced as much as possible. Therefore, a useful definition of the least-squares adjustment of free networks could entail a minimum trace of the variance-covariance matrix of the adjusted parameters (here coordinates). This can serve

as an example of the stipulations mentioned above.

A preliminary adjustment as just discussed could be of interest, especially for an

analysis of the residuals, even if external information were available from the outset. In

particular, if a network were "attached" to a coordinate system via more information

than is strictly necessary, stresses in its structure would be produced which could

significantly affect the residuals (in general increasing their magnitude) and mask their

true consistency. This problem would be aggravated by a lack of consistency within the

external information as well as between such information and the observations. One could

eliminate any difficulty of this kind by adjusting the network separately as free, i.e., by

temporarily disregarding all of the external information.

Before theoretical aspects of adjusting free networks can be addressed in this

paper, its scope and limitations should be firmly established. For example, the least-squares method used will be that known as the observation equation method (also called

the method of variation of parameters), as contrasted to the condition method or to more

general methods. The constraints, when used, will be absolute rather than weighted.

Further limitations are :

a) The general class of minimal constraints resulting in a stress-free adjustment of

free networks will not be used in its entirety. For the sake of simplicity, its sub-class called the inner adjustment constraints will be utilized instead.

b) When applying the inner adjustment constraints to a free network, no weighting

of parameters will take place. This weighting would essentially amount to introducing

additional external information and, as such, could produce undue stress affecting the

residuals.

c) In the derivations leading to the minimum trace and other properties, the

parameter set will be considered in its entirety. One could, of course, minimize only that

part of the trace associated with some preferred parameters, but such an approach would

require a separate treatment. (It was considered in [Blaha, 1971], where the preferred

parameters were the ground station coordinates and the other parameters were the

coordinates of the satellite targets; consequently, the entries corresponding to these

targets were replaced by zeros in the pertinent constraint matrix which thus lost its

original quality of an inner adjustment constraint matrix with regard to the complete

observation equation matrix, making it necessary to partition most of the vectors and

matrices).

d) The weight matrix of observations will be assumed to be a unit matrix throughout.

Clearly, the "original" observation equations could always be normalized upon the pre-multiplication by an upper-triangular matrix T computed by the well-known Choleski algorithm, such that T^T T is the "original" positive-definite weight matrix. (In practice the latter would often be diagonal and thus T would also be diagonal, composed of the square roots of the "original" weights; this would correspond to dividing each observation equation by the appropriate standard deviation).

e) No consideration will be given to the numerical analysis aspects of the solution.

This problem area, subject of extensive research in its own right, addresses a number of

situations such as the sparseness of the observation and normal equation matrices, the

pivotal search, the computation of the variances and covariances for selected parameters

(all the possible variances and covariances are hardly ever needed in practice), an

automation of these and other processes, etc. The subject of well- and ill-conditioned regular matrices also belongs in this category.

This paper is intended to be almost entirely self-contained. Only one reference

on the subject is listed and even that is not indispensable. A major source of outside

information has proved to be the private communication the author had with the late Professor Peter Meissl, whose contribution is briefly outlined in the Acknowledgement.

2. Mathematical Background

We begin with a description of the observation equations. With n observations

and n residuals represented by the vectors y and v , respectively, with m parameters

represented by the vector x , and with the observation equation matrix A of dimensions

n x m , also called the design matrix, the observation equations can be written in the

linear (or linearized) form essentially as

A x = y + v . (2.1)

The least-squares results will later be attributed the symbol "^" ( x and v will then become x̂ and v̂ ). Possible constraints associated with (2.1) are symbolized by

Cx = w , (2.2)

where C is the constraint matrix and w is sometimes called the "constant vector."

In free networks the rank of A is considered to be m − s , where s is the rank

deficiency. An important role in carrying out the least-squares solution of free networks

will be played by special kinds of constraints called "inner adjustment constraints." The

latter are identified through their matrix of dimensions s × m , called the inner adjustment

constraint matrix, having two basic properties :

rank C = s , (2.3a)

A C^T = 0 . (2.3b)

A consequence of (2.3a, b) is

rank | A | = m , (2.4)
     | C |

stating that the augmented design matrix of dimensions ( n + s ) × m has the full column rank m (due to 2.3b the resulting rank is the sum of the ranks of A and C , and due to 2.3a this rank is m − s + s = m ). We note that the more general "minimal constraints" would be defined through (2.3a) and (2.4) only ; the condition (2.3b) is sufficient for (2.4) to


hold true, but it is not necessary. However, due to (2.3b) the approach using the inner

adjustment constraints is much simpler and, as such, is adopted in this paper.
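As a numerical illustration of (2.3a, b) and (2.4), not part of the original paper, the following NumPy sketch uses a hypothetical one-dimensional leveling network (observed height differences between three points; rank deficiency s = 1, the free datum being a common shift of all heights). All variable names are illustrative assumptions:

```python
import numpy as np

# Toy free network: a 1-D leveling net with m = 3 heights and n = 3
# observed height differences.  Every observation row sums to zero,
# so the design matrix A is rank-deficient with s = 1.
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [-1.0, 0.0, 1.0]])
C = np.array([[1.0, 1.0, 1.0]])   # inner adjustment constraint matrix

m = A.shape[1]
s = m - np.linalg.matrix_rank(A)          # rank deficiency of A
assert np.linalg.matrix_rank(C) == s      # (2.3a): full row rank s
assert np.allclose(A @ C.T, 0.0)          # (2.3b): A C^T = 0
aug = np.vstack([A, C])                   # augmented design matrix
assert np.linalg.matrix_rank(aug) == m    # (2.4): full column rank m
```

The same toy network is reused in the sketches that follow, so that the numerical checks stay comparable.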

A specific inner adjustment constraint matrix, denoted for the moment as C̄ , can be used to generate a whole family of inner adjustment constraint matrices through arbitrary nonsingular matrices D of dimensions s × s . Recalling that C̄ satisfies (2.3a, b) and forming

C = D C̄ , (2.5)

one can readily assert that any C in (2.5) also satisfies (2.3a, b) and is therefore an inner adjustment constraint matrix. The product of any such C with its pseudo-inverse C+ is invariant ; in particular, we have

C̄ C̄+ = C C+ = I , (2.6a)

C̄+ C̄ = C+ C = C^T ( C C^T )^{-1} C , (2.6b)

the pseudo-inverse of the full row-rank matrix C being

C+ = C^T ( C C^T )^{-1} . (2.7)

Having noticed the above properties we no longer need to distinguish between "original" and "derived" C matrices and can discard the overbars.

One way to generate a C matrix is through partitioning of the design matrix,

A = [ A_a , A_b ] , (2.8)

where A_a contains the first s columns of A ; the matrix A_b is assumed to have the full column rank m − s . This suggests caution when numbering the parameters, whose arrangement corresponds to the arrangement of the columns in A . With this provision the columns of A_a are linear combinations of the columns in A_b ; in other words, an ( m − s ) × s matrix R exists such that

A_a = A_b R . (2.9a)

When R = ( A_b^T A_b )^{-1} A_b^T A_a , obtained from (2.9a), is substituted back into (2.9a), the identity

A_a = A_b ( A_b^T A_b )^{-1} A_b^T A_a (2.9b)

is obtained. This identity will be used below to verify the inner adjustment constraint character of the matrix C defined as

C = [ I , − R^T ] . (2.10)

The condition (2.3a) is satisfied by the presence of the unit matrix I of dimensions s × s . And the condition (2.3b) follows from (2.9b) considered in conjunction with the product of (2.8) and (2.10) transposed.
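The construction (2.8) through (2.10) can be sketched numerically as follows (an illustrative NumPy example on the toy leveling network assumed earlier; not part of the paper):

```python
import numpy as np

# A is split as A = [A_a, A_b] with A_b of full column rank m - s.
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [-1.0, 0.0, 1.0]])
s = 1
Aa, Ab = A[:, :s], A[:, s:]

# R from (2.9a): A_a = A_b R.  The normal-equation solve is exact here
# because the columns of A_a lie in the span of A_b.
R = np.linalg.solve(Ab.T @ Ab, Ab.T @ Aa)
assert np.allclose(Aa, Ab @ R)                 # (2.9a) holds exactly

# C = [I, -R^T] as in (2.10); the unit block guarantees (2.3a).
C = np.hstack([np.eye(s), -R.T])
assert np.allclose(A @ C.T, 0.0)               # (2.3b): A C^T = 0
```

For this network the construction reproduces C = [1, 1, 1], the free-datum (common shift) constraint.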


Numerical difficulties could present a problem when forming C as in (2.10). However, an appeal to geometry may represent a more expedient avenue in obtaining a C matrix in practice. In [Blaha, 1971], Section 8, this avenue leads to the following C matrix for three-dimensional range networks (with s = 6):

    |  1    0    0     1    0    0   ... |
    |  0    1    0     0    1    0   ... |
C = |  0    0    1     0    0    1   ... | , (2.11)
    |  0    z1  −y1    0    z2  −y2  ... |
    | −z1   0    x1   −z2   0    x2  ... |
    |  y1  −x1   0     y2  −x2   0   ... |

where x1 , y1 , z1 are the initial coordinates of the first point in the network, etc. For convenience, the coordinates may be scaled by a suitable constant since this would correspond to applying a diagonal D matrix (see 2.5) with the first three elements equal to unity and the remaining elements equal to that constant. We may add that if a network should be "free" also with respect to its scale, in addition to its position and orientation, the matrix C in (2.11) would be augmented by the row [ x1 y1 z1 , x2 y2 z2 , ... ].

The conditions that can be imposed on a generalized inverse of a matrix A (such as the design matrix herein) are the r-, s-, m-, and g-condition, giving rise to the r-, s-, m-, and g-inverse, respectively. They are symbolically recapitulated:

r ... A_r A A_r = A_r ,
s ... ( A A_s )^T = A A_s ,
m ... ( A_m A )^T = A_m A ,
g ... A A_g A = A ;

thus

A+ = A_rsmg ,

the pseudo-inverse A+ being the unique matrix satisfying all four conditions.

The following well-known identities involving the pseudo-inverse will be useful on several occasions. They are listed without proof, and they will be used below without explicit reference to their equation numbers.

A^T = A^T A A+ = A+ A A^T , (2.12)

A+ = A+ A+^T A^T = A^T A+^T A+ , (2.13)


( A^T A )+ = A+ A+^T . (2.14)
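The identities (2.12) through (2.14) are easy to spot-check numerically; the following NumPy sketch (an illustration, not part of the paper) does so on a random rank-deficient matrix:

```python
import numpy as np

# Random 6 x 4 matrix of rank 2, mimicking a rank-deficient design matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))
Ap = np.linalg.pinv(A)                    # Moore-Penrose pseudo-inverse

assert np.allclose(A.T, A.T @ A @ Ap)     # (2.12): A^T = A^T A A+
assert np.allclose(A.T, Ap @ A @ A.T)     #         A^T = A+ A A^T
assert np.allclose(Ap, Ap @ Ap.T @ A.T)   # (2.13): A+ = A+ A+^T A^T
assert np.allclose(Ap, A.T @ Ap.T @ Ap)   #         A+ = A^T A+^T A+
assert np.allclose(np.linalg.pinv(A.T @ A), Ap @ Ap.T)   # (2.14)
```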

These identities could be expanded upon transposition (such formulas need not be written down) and/or upon their various combinations, for example

( A A^T )+ = A+^T A+ . (2.15)

With the aid of the above formulas, we could derive a number of other identities some of which could be rendered computationally appealing through the use of the inner adjustment constraint matrix. As an example which may be of interest in practice, we first write the identity

A^T A A+ A+^T + K ( I − A+ A ) A+ A+^T = A+ A ,

where K is a completely arbitrary matrix of dimensions m × m . This is expressed as

[ A^T A + K ( I − A+ A ) ] ( A^T A )+ = A+ A ,

yielding

( A^T A )+ = [ A^T A + K ( I − A+ A ) ]^{-1} A+ A , (2.16)

A+ = [ A^T A + K ( I − A+ A ) ]^{-1} A^T . (2.17)

However, K in (2.16) and (2.17) is no longer completely arbitrary but subject to the restriction that the matrix within the brackets should be nonsingular. But without further modifications such as those designed below these two equations would be of little use.

Equation (2.3b) entails the relationships:

A C^T = 0 , C A^T = 0 , (2.18a)

A C+ = 0 , C A+ = 0 , (2.18b)

( A^T A )+ C^T = 0 , C ( A^T A )+ = 0 . (2.18c)

We next form

I − A+ A − C+ C = Y

and observe that A Y = 0 (due to A A+ A = A and A C+ = 0 ) and C Y = 0 (due to C A+ = 0 and C C+ C = C ). But this means that

| A | Y = 0 ,
| C |

where the matrix within the brackets has the full (column) rank as in (2.4). Accordingly, Y = 0 and

A+ A = I − C+ C . (2.19a)


This result would hold in general for any matrix C whose rank equals the rank deficiency

of A and which satisfies A C^T = 0 . In the case of inner adjustment constraints

considered presently (the number of rows in C equals the rank of C and not more)

C+ is given by (2.7) and (2.19a) becomes

A+ A = I − C^T ( C C^T )^{-1} C . (2.19b)

Substituting (2.19a) into (2.16) and (2.17), these become

( A^T A )+ = [ A^T A + K C+ C ]^{-1} ( I − C+ C ) , (2.20)

A+ = [ A^T A + K C+ C ]^{-1} A^T . (2.21)

To render these formulas even more advantageous one can choose K simply as

K = k I , k > 0 . (2.22)

The matrix within the brackets of (2.20), the same matrix appearing in (2.21), then becomes positive-definite since it can be expressed as [ A^T , √k C^T M^T ] of full (row) rank m post-multiplied by its transpose, where M^T M = ( C C^T )^{-1} is positive-definite. That the matrix just mentioned has the full (row) rank m , or its transpose has the full (column) rank m , follows immediately from (2.4). One of the advantages of choosing K in practice as in (2.22), the simplest case being K = I , is that it allows the use of efficient computer algorithms designed for the inversion of positive-definite matrices.
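A minimal NumPy sketch of this computational device (illustrative only; the leveling-network matrices are assumptions, not taken from the paper). With K = I the bracketed matrix becomes positive-definite and an ordinary inversion reproduces the pseudo-inverse:

```python
import numpy as np

A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [-1.0, 0.0, 1.0]])
C = np.array([[1.0, 1.0, 1.0]])

# With K = k I (k = 1), the bracket of (2.21) becomes
# N = A^T A + C^T (C C^T)^{-1} C, a positive-definite matrix.
N = A.T @ A + C.T @ np.linalg.solve(C @ C.T, C)
Ap = np.linalg.solve(N, A.T)              # A+ = N^{-1} A^T, eq. (2.21)
assert np.allclose(Ap, np.linalg.pinv(A))

# (2.19b) as a by-product: A+ A = I - C^T (C C^T)^{-1} C.
P = np.eye(3) - C.T @ np.linalg.solve(C @ C.T, C)
assert np.allclose(Ap @ A, P)
```

In practice the solve against N could use a Cholesky factorization, which is exactly the advantage the text points out.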

The following identities will help to prove some of the relations in this paper.

With G1 through G4 defined by their generalized-inverse properties, namely

G1 = A_mg , G2 = A_gs , G3 = A_gsr , G4 = A_mgr ,

we have

G1 A G2 = A+ , (2.23a)

G1 A = A+ A , (2.23b)

A G2 = A A+ , (2.23c)

G3 A A+ = G3 , (2.23d)

A+ A G4 = G4 . (2.23e)

The symbols A_mg , etc., make allowance for the complete sets (but once G3 is chosen it is the same matrix on both sides of 2.23d ; a similar statement applies also for G4 ). The identity (2.23a) can be proved by showing that all the four conditions ( g , s , m , and r ) are satisfied ; (2.23b, c) then follow from post-multiplying and pre-multiplying (2.23a) by A . The product G3 A A+ in (2.23d) can be written as G3 A G3 (upon using


A A+ = A G3 following from 2.23c), which equals G3 due to the r-condition. The

proof for (2.23e) proceeds along similar lines.

Next, consider a set of matrices U such that

U = ( I − A+ A ) Z , (2.24a)

where the matrix Z is completely arbitrary. Let Ũ denote a complete set of matrices such that, symbolically, A Ũ = 0 . Since it holds true for any Z in (2.24a) that A U = 0 , U is included in Ũ and we write U ⊂ Ũ . On the other hand, a subset of U in (2.24a) can be chosen such that Z is restricted to run through Ũ . Due to A Ũ = 0 , U in (2.24a) with all such matrices Z covers the whole set Ũ and, accordingly, Ũ ⊂ U . These two inclusions indicate that the above U represents the complete set Ũ . The symbol "~" will be omitted in the sequel and U will be understood as any possible matrix satisfying

A U = 0 . (2.24b)

For vectors, equations (2.24a, b) reduce to

u = ( I − A+ A ) z , (2.25a)

A u = 0 . (2.25b)

In complete analogy, a set of matrices V is generated by

V = Z' ( I − A A+ ) , (2.26a)

representing the complete set of matrices satisfying

V A = 0 . (2.26b)

Wherever U and V appear in the derivations below they can be replaced by the expressions of the kind ( I − A+ A ) Z1 and Z2 ( I − A A+ ) , respectively, as follows from (2.24a) and (2.26a). For the g-inverse we have

A_g = A+ + U + V . (2.27)

The first inclusion (in the sense of the discussion that followed 2.24a) is readily established using the properties of the U and V matrices. The second inclusion is arrived at by choosing U = ( I − A+ A ) A_g A A+ and V = A_g ( I − A A+ ) , A_g in these two expressions being the same matrix (it can take on any values from the complete set). Since the first inclusion in all the cases considered results from straightforward testing, only the second inclusion will be worth elaborating upon.

It is readily confirmed that

A_gs = A+ + U . (2.28)

(The second inclusion follows from the choice U = A_gs − A+ ; such U satisfies A U = 0 due to 2.23c. A_gs in 2.28 can again run through the complete set.)


A_gsr = A+ + U A+ , (2.29a)

A_gsr = A+ + U A^T . (2.29b)

The second inclusion in (2.29a) follows from the choice U = ( I − A+ A ) A_gsr A and from (2.23d), and the second inclusion in (2.29b) follows from the choice U = ( I − A+ A ) A_gsr A+^T , from A+^T A^T = A A+ , and from (2.23d).

Although A_mg , A_mgr (two equivalent formulas) and A_sm will not serve in the course of this study, they are listed for the sake of interest as

A_mg = A+ + V ,

A_mgr = A+ + A+ V or A_mgr = A+ + A^T V ,

A_sm = A+ + W ... A W = 0 , W A = 0 .

To describe how these matrices map one space to another, a few definitions are presented:

R(A) ... range of A , the space spanned by the columns of A ;
N(A) ... null space of A , a space of all vectors orthogonal to the rows of A .

The orthogonal complements of these spaces are denoted R(A)⊥ and N(A)⊥ , respectively. It is seen that the matrix A maps N(A)⊥ onto R(A) , and N(A) into the zero vector:

x ∈ N(A)⊥ ... A x ∈ R(A) , (2.30a)

x ∈ N(A) ... A x = 0 . (2.30b)

On the other hand, A^T maps R(A) onto N(A)⊥ , and R(A)⊥ into the zero vector:

y ∈ R(A) ... A^T y ∈ N(A)⊥ , (2.31a)

y ∈ R(A)⊥ ... A^T y = 0 . (2.31b)

It can be shown that the same description applies also for the (unique) mapping by A+ :

y ∈ R(A) ... A+ y ∈ N(A)⊥ , (2.32a)

y ∈ R(A)⊥ ... A+ y = 0 . (2.32b)


One can also demonstrate the properties of the following projection operators:

A A+ ... projection operator on R(A) ;
I − A A+ ... projection operator on R(A)⊥ ;
A+ A ... projection operator on N(A)⊥ ;
I − A+ A ... projection operator on N(A) .

For stating certain results in terms of mapping, brackets will be utilized to single

out this purpose in the text. [We had, for example, A x e R(A) .] If this interpretation is

not of interest, one may imagine such expressions deleted.
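These projector properties can be verified numerically; the following NumPy sketch is an illustration, not part of the paper:

```python
import numpy as np

# The four projection operators, checked on a random rank-deficient matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
Ap = np.linalg.pinv(A)

P_R  = A @ Ap               # projector on R(A)
P_Rp = np.eye(5) - P_R      # projector on R(A)-perp
P_Np = Ap @ A               # projector on N(A)-perp (row space)
P_N  = np.eye(4) - P_Np     # projector on N(A)

for P in (P_R, P_Rp, P_Np, P_N):
    assert np.allclose(P, P @ P)     # idempotent
    assert np.allclose(P, P.T)       # symmetric: orthogonal projector
assert np.allclose(A @ P_N, 0.0)     # A maps N(A) into the zero vector
assert np.allclose(P_Rp @ A, 0.0)    # columns of A lie in R(A)
```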

In the first step of this development a solution for the inconsistent system A x ≠ y will be derived through the least-squares criterion applied to the consistent system A x = ( y + v ) . The least-squares solution x̂ will be derived using two different approaches, I and II. In the first approach, a general solution of the initial consistent system is formulated and then subjected to the restriction of being the solution of normal equations. In the second approach the normal equations represent the consistent system whose general solution is sought.

I. The consistent system is

A x = ( y + v ) . (3.1)

Since the vector ( y + v ) lies in R(A) , it holds true that

A A_g ( y + v ) = A A+ ( y + v ) = ( y + v ) . (3.2)

[ A A_g , equal in its effect to A A+ , acts as a projection operator on R(A) . ]

The general solution of (3.1) is written as

x = A_g ( y + v ) + u ;

this solution satisfies (3.1) as is confirmed through (3.2) and (2.25a, b). Upon using (2.27) we obtain

x = A+ ( y + v ) + U ( y + v ) + u , (3.3a)

[the first term on the right-hand side is in N(A)⊥ , while the second and third terms are in N(A) .] Upon using (2.24a, b) and (2.25a, b) explicitly, (3.3a) is written as

x = A+ ( y + v ) + ( I − A+ A ) [ Z ( y + v ) + z ] . (3.3b)


Requiring that x and v satisfy the least-squares criterion (thus becoming x̂ and v̂ ), we have

A^T A x̂ = A^T y , A^T v̂ = 0 . (3.4)

Due to (2.31b) and (2.32b), the second of these relations is equivalent to

A+ v̂ = 0 , (3.5)

while, in conjunction with v̂ = A x̂ − y , the normal equations also yield

y + v̂ = A A+ y . (3.6)

Upon utilizing (3.5) and (3.6) in (3.3b) with the new notations x̂ , v̂ , we have

x̂ = A+ y + ( I − A+ A ) ( Z A+^T A^T y + z ) .

In using complete sets it can be shown that Z A+^T A^T y can be replaced, without any loss of generality, by W A^T y where W is an arbitrary matrix of dimensions m × m . The above least-squares solution is thus rewritten as

x̂ = A+ y + ( I − A+ A ) ( W A^T y + z ) . (3.7)

[The first term on the right-hand side of (3.7) is in N(A)⊥ and the second term is in N(A) ; in fact, it is the projection of the vector inside the second parentheses on N(A) . ]

II. The starting point of the second, more familiar approach is represented by the consistent system of normal equations already seen in (3.4):

A^T A x̂ = A^T y . (3.8)

Its general solution is written as

x̂ = ( A^T A )_g A^T y + u ;

this solution satisfies (3.8) due to A ( A^T A )_g A^T A = A . In analogy to (2.27) one can write

( A^T A )_g = ( A^T A )+ + U + V , (3.9)

the latter criteria following from ( A^T A ) U = 0 , V ( A^T A ) = 0 , which are equivalent to A U = 0 , V A = 0 . Accordingly, the general solution becomes

x̂ = A+ y + U A^T y + u . (3.10)

Upon using the explicit expressions for U and u , the above solution is written as

x̂ = A+ y + ( I − A+ A ) ( Z A^T y + z ) , (3.11)


which is the same result as (3.7) except that the symbol Z has replaced W.
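The result just derived can be checked numerically: for any Z and z, the vector of the form (3.11) satisfies the normal equations. A NumPy sketch (with the toy leveling network as an illustrative assumption):

```python
import numpy as np

# Every x = A+ y + (I - A+ A)(Z A^T y + z) solves A^T A x = A^T y,
# whatever the arbitrary Z and z.
rng = np.random.default_rng(2)
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [-1.0, 0.0, 1.0]])
y = rng.standard_normal(3)
Ap = np.linalg.pinv(A)

for _ in range(5):                       # several arbitrary Z and z
    Z = rng.standard_normal((3, 3))
    z = rng.standard_normal(3)
    x = Ap @ y + (np.eye(3) - Ap @ A) @ (Z @ A.T @ y + z)
    assert np.allclose(A.T @ A @ x, A.T @ y)   # normal equations hold
```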

[In considering (3.10), x̂ is seen to consist of two parts. The first part, A+ y , is contained in N(A)⊥ and is unique (if y ∈ R(A)⊥ , it is zero) ; the second, remaining part is contained in N(A) and is arbitrary. The solution (3.10) is expressed below using three different formulations. In the first formulation, it is rewritten with a new notation (the second part is grouped into the vector u1 ). The second formulation, given in terms of A_gsr , is simply (3.10) with the information embodied in (2.29b) taken into account. And the third formulation brings A_g into the picture. Any result obtained through any of the three formulations can be reproduced through the other two, upon properly choosing the U's and u's (arbitrary except for the requirements 2.24b, 2.25b). With regard, in particular, to the u's , one can symbolically write u_i ∈ N(A) , arbitrary, where i = 1, 2, 3 . The three formulations are represented by

x̂ = A+ y + u1 ;

x̂ = A_gsr y + u2 ;

x̂ = A_g ( y + v̂ ) + u3 .

To summarize, all three expressions represent the complete set of least-squares solutions and they are equivalent ; only the form of the arbitrary vector in N(A) differs from one expression to the next. A similar relation could also be written in terms of A_sm . ]

[Similarly to the other operators above, A_gsr maps the vectors y that are in R(A) into the space of all m-vectors, the part mapped into N(A)⊥ being unique (it is A+ y ) and the part mapped into N(A) being arbitrary (it is U A^T y ) ; it maps all the vectors y that are in R(A)⊥ into the zero vector. The mapping by A_g of the vectors y that are in R(A) is similar to the mapping by A_gsr , except that the form of the arbitrary part mapped into N(A) is now U' y ; however, if the vector y is in R(A)⊥ , it is not mapped into the zero vector in general, but into an arbitrary vector U' y contained in N(A) . ]

Let us now consider (3.11) in as general a form as possible, focusing especially

on the arbitrary vector z. We allow it to be an arbitrary random vector, an arbitrary

constant (non-random) vector, or a mixture of the two. The random part, in turn, can

be linearly dependent on y , linearly independent of y , or a mixture of the two. In

addition, the variance-covariance matrix associated with z when it is not a constant

vector can also be arbitrary. These properties are fulfilled if we stipulate that

z = K1 y + K' y' + K'' c , z1 = K1 y , z2 = K' y' , (3.12a, b)

where K1 , K' , K'' are matrices of arbitrary coefficients, y is the vector of observations (with Σ_y = I according to limitation d), y' is an additional random vector associated for simplicity (but without loss of generality) with Σ_y' = I ,

and c is an arbitrary constant vector. Since the vectors y and y' are assumed to be stochastically independent, one has

Σ_{y,y'} = | I   0 |
           | 0   I | , (3.13)

so that the corresponding random vectors z1 and z2 are also stochastically independent.

In grouping the terms containing y , (3.11) and (3.12a, b) lead us to denote

Z A^T y + K1 y = K y , (3.14)

where the new coefficient matrix K is completely arbitrary ; (3.11) then becomes

x̂ = A+ y + ( I − A+ A ) ( K y + K' y' + K'' c ) . (3.15a)

[The first vector forming x̂ above is in N(A)⊥ and is unique, random ; the second vector is in N(A) and is arbitrary, random or constant. ]

We state without proof that due to the arbitrary character of y' , both K' y' and Σ_{K'y'} can be made equal to any desired vector and to any desired positive (semi-)definite matrix, respectively. This serves as an indication that the expression in the second parentheses in (3.15a) might be in fact more general than needed for the purpose of this study. However, if this expression were replaced by some K y the solution would be restricted ; for example, x̂ would be restricted to zero whenever the (observed) vector y were zero. In the subsequent derivations of the minimum trace and the minimum norm solutions the general expression (3.15a) will be used as it stands.

The law of variance-covariance propagation applied to (3.15a) yields

Σ_x̂ = D D^T + D' D'^T , (3.15b)

where

D = A+ + ( I − A+ A ) K , D' = ( I − A+ A ) K' ,

and where (3.13) together with the fact that c is a constant vector have been taken into account.

The minimum trace criterion, Tr ( Σ_x̂ ) = minimum, is now applied to (3.15b). If the latter is developed and the rule Tr ( A B ) = Tr ( B A ) is used, it follows that

Tr ( Σ_x̂ ) = Tr ( A^T A )+ + Tr ( W W^T ) + Tr ( W' W'^T ) ,

where

W = ( I − A+ A ) K , W' = ( I − A+ A ) K' .

Since Tr ( W W^T ) is the sum of the squares of all the elements in W and a similar statement holds true for Tr ( W' W'^T ) , the minimum is produced when

W ≡ 0 , W' ≡ 0 .

The resulting least-squares solution,

x̂ = A+ y + ( I − A+ A ) K'' c , (3.16a)

Σ_x̂ = A+ A+^T = ( A^T A )+ , (3.16b)

represents the minimum trace solution. [The first vector forming x̂ in (3.16a) is in N(A)⊥ and is unique, random ; the second vector is in N(A) and is arbitrary, constant. ]

To derive the minimum norm solution we consider the norm of x̂ in (3.15a), composed of two orthogonal parts ; upon carrying out the indicated operations we obtain for the part contained in N(A)

x̃ = ( I − A+ A ) ( K y + K' y' + K'' c ) .

The norm of x̂ is clearly smallest when x̃ = 0 , yielding

x̂ = A+ y , (3.17a)

Σ_x̂ = A+ A+^T = ( A^T A )+ . (3.17b)

When comparing (3.16a, b) with (3.17a, b) we realize that the only difference between the two results is the indeterminacy in x̂ in (3.16a), imputable to the presence of the arbitrary constant vector K'' c . Clearly, the pre-multiplication of the right-hand sides of (3.15a) and (3.16a) by A+ A [the projection operator on N(A)⊥ ] transforms them into A+ y , the result in (3.17a). Accordingly, if the constraint

x̂ = A+ A x̂ (3.18)

were included as a part of an algorithm derived to yield (3.15a, b) or (3.16a, b), the indeterminacy in the solution would be suppressed and the result would be (3.17a, b). [The constraint (3.18) implies that x̂ is in N(A)⊥ ( A+ A is an identity operator on N(A)⊥ only) and thus it can be only A+ y . ] The constraint (3.18) would thus

guarantee that the minimum norm solution, a special case of the minimum trace solution

and, of course, of the general least-squares solution, has been arrived at. In accordance with (2.19b) this constraint could also be written as C^T ( C C^T )^{-1} C x̂ = 0 , which, the matrix C^T ( C C^T )^{-1} having the full column rank, leads to the simplest possibility of writing the constraint as

C x̂ = 0 .

We return to the general least-squares solution and try to express the second

term on the right-hand side of (3.15a). For this purpose we denote

K y + K' y' + K'' c = t , (3.19)

where t is an arbitrary m-vector ; it can be random (correlated or uncorrelated with y ) or constant, etc. Since the rows of A and C together span the

whole space of m -vectors, t can be written as

t = A^T p + C^T q , (3.20)

where the n - v e c t o r p and the s-vector q may again be arbitrary (random or constant,

etc.). In fact, t could have been written in a more restrictive, but still completely general, form, namely

t = [ A_o^T , C^T ] | p_o |
                    |  q  | ,

where A_o consists of m − s linearly independent rows of A , so that the matrix [ A_o^T , C^T ] of dimensions m × m is nonsingular. Then for every t one can determine uniquely p_o and q , and a similar statement can be made also with regard to the respective variance-covariance matrices.

With (3.20), the second term in equation (3.15a) becomes

( I − A+ A ) t = C^T q , (3.21)

where use has been made of ( I − A+ A ) A^T = 0 and ( I − A+ A ) C^T = C^T . We further denote

w = C C^T q , i.e., q = ( C C^T )^{-1} w . (3.22)

For every q and Σ_q one can uniquely determine w and Σ_w , and vice-versa. We thus have

( I − A+ A ) t = C+ w (3.23)

and, according to (3.15a),

x̂ = A+ y + C+ w . (3.24)

Recalling the possible characteristics of the vector t , we see that (3.24) can be used to symbolize three distinct types of least-squares adjustment:


w = arbitrary random vector ... general least-squares solution ;
w = arbitrary constant vector ... minimum trace solution ;
w = 0 ... minimum norm solution.

Each of these categories (starting with the second) represents a special case of the category preceding it.
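The three categories can be illustrated numerically with (3.24); the NumPy sketch below (not part of the paper; the toy leveling network is an assumption) shows that every choice of w yields the same residuals, while w = 0 minimizes the norm of the solution:

```python
import numpy as np

# x = A+ y + C+ w: all choices of w give a least-squares solution
# (identical residuals); w = 0 gives the minimum norm solution.
rng = np.random.default_rng(3)
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [-1.0, 0.0, 1.0]])
C = np.array([[1.0, 1.0, 1.0]])
y = rng.standard_normal(3)
Ap, Cp = np.linalg.pinv(A), np.linalg.pinv(C)

x_min = Ap @ y                          # w = 0: minimum norm solution
for _ in range(5):
    w = rng.standard_normal(1)
    x = Ap @ y + Cp @ w
    # every x is a least-squares solution: identical adjusted observables
    assert np.allclose(A @ x, A @ x_min)
    # ... but none has a smaller norm than the w = 0 solution
    assert np.linalg.norm(x) >= np.linalg.norm(x_min) - 1e-12
```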

It may be desirable to arrive at the formulation (3.24), with all its clearcut characteristics, through the usual least-squares approach. First, if (3.24) is pre-multiplied by A one obtains

A x̂ = A A+ y . (3.25)

This relation confirms the normal equations (3.8) which eventually lead to the formulation (3.24) itself. (The equivalence between 3.25 and 3.8 can be seen on one hand upon pre-multiplying the former by A^T and arriving at the latter, and on the other hand upon pre-multiplying the latter by A+^T and arriving at the former.) Next, we pre-multiply (3.24) by C which yields

C x̂ = w . (3.26)

The above two steps point to a regular least-squares adjustment with constraints. By

choosing the characteristics of the vector w in (3.26), one effectively chooses the type

of solution according to the classifications in the preceding paragraph. Clearly, the minimum norm solution corresponds to

C x̂ = 0 . (3.26')

The equations of the type (3.26) or (3.26') have been called the inner adjustment constraints. According to the above suggestion a least-squares adjustment of free networks can be formulated with the aid of these constraints as follows:

A x̂ = y + v̂ ,

C x̂ = w ,

where the vector w characterizes the flexibility of the solution. The (augmented) normal equations for this set-up are

| A^T A   C^T | | x̂ |   | A^T y |
| C       0   | | k̂ | = |   w   | , (3.27)

where k̂ is the vector of Lagrange multipliers, and the inverse of the matrix of this system is

| A^T A   C^T |^{-1}   | ( A^T A )+   C+ |
| C       0   |      = | C+^T         0  | . (3.28)

The above assertion is readily verified upon post-multiplying the matrix in (3.27) by its inverse in (3.28) and obtaining the unit matrix ; in this process the earlier-derived properties, such as A+ A + C+ C = I and C C+ = I , have been utilized.


The solution of the system (3.27) is then

x̂ = A+ y + C+ w , (3.29a)

k̂ = 0 . (3.29b)
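The bordered system (3.27) and the properties (3.29a, b) and (3.28) can be sketched numerically as follows (illustrative NumPy example; the matrices are assumptions):

```python
import numpy as np

# Augmented (bordered) normal equations of a constrained adjustment,
# reproducing x = A+ y + C+ w and zero Lagrange multipliers.
rng = np.random.default_rng(4)
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [-1.0, 0.0, 1.0]])
C = np.array([[1.0, 1.0, 1.0]])
y = rng.standard_normal(3)
w = np.array([0.0])                      # minimum norm choice (3.26')

m, s = 3, 1
M = np.block([[A.T @ A, C.T],
              [C, np.zeros((s, s))]])    # matrix of (3.27)
sol = np.linalg.solve(M, np.concatenate([A.T @ y, w]))
x_hat, k_hat = sol[:m], sol[m:]

Ap = np.linalg.pinv(A)
assert np.allclose(x_hat, Ap @ y)        # (3.29a) with w = 0
assert np.allclose(k_hat, 0.0)           # (3.29b): multipliers vanish
# (3.28): the leading m x m block of M^{-1} is (A^T A)+, i.e. Sigma_x.
Minv = np.linalg.inv(M)
assert np.allclose(Minv[:m, :m], np.linalg.pinv(A.T @ A))
```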


In case w should be a constant vector (whether arbitrary or zero), we can take advantage of the known result from the adjustment calculus that the variance-covariance matrix for x̂ occupies the corresponding location in the matrix of the augmented normal equations after the inversion. In particular, from (3.28) we have

Σ_x̂ = ( A^T A )+ . (3.30)

If w should be a random (arbitrary) vector, upon retracing the steps (3.22), (3.21), (3.19) we find

w = C K y + C K' y' + C K'' c

and

Σ_{y,w} = | I       K^T C^T                     |
          | C K     C K K^T C^T + C K' K'^T C^T | .

Upon using this matrix in the variance-covariance propagation applied to (3.28), we get

Σ_{x̂,k̂} = | Q   0 |
           | 0   0 | ,

where

Q = ( A+ + C+ C K ) ( A+ + C+ C K )^T + C+ C K' ( C+ C K' )^T ,

which, with C+ C = I − A+ A according to (2.19a), coincides with Σ_x̂ of (3.15b). In case w is a constant vector, K and K' above are replaced by zeros and the result is (3.30). Thus an earlier outcome is again confirmed.

4. Conclusions

In this paper a basic adjustment of free networks has been discussed. The

discussion has been based on the usual least-squares criterion v^T v = minimum where, without loss of generality, the weight matrix has been assumed to be the unit matrix, and on the notion of inner adjustment constraints. These constraints, considered in the usual sense of absolute constraints with constant terms (as opposed to some random terms), have led to an important class of the general least-squares solution for x having the minimum trace property, Tr ( Σ_x̂ ) = minimum.


This property holds true in spite of the rank deficiency in the coefficient matrix of observation equations and thus also in the matrix of normal equations. Although Σ_x̂ in the minimum trace solution is unique, the solution itself, x̂ ,

is not. This indeterminacy stems from the property that the constant vector w ,

containing the constant terms of the constraint equations, can be an arbitrary s-vector.

(Although there are infinitely many inner adjustment constraint matrices C, the results

do not depend on a particular choice of one such matrix.)

The constrained least-squares adjustment having the minimum trace property is formulated as

A x̂ = y + v̂ ,

C x̂ = w ,

and the corresponding augmented normal equations read

| A^T A   C^T | | x̂ |   | A^T y |
| C       0   | | k̂ | = |   w   | . (4.1)

The variance-covariance matrix Σ_x̂ occupies the location of the matrix of normal equations, A^T A , after the matrix inversion indicated in (4.1) has been carried out.

The solution of (4.1) can be expressed analytically by

x̂ = A⁺ y + C⁺ w ,   (4.2a)

Σ_x̂ = (Aᵀ A)⁺ .   (4.2b)
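Equations (4.1) and (4.2) can be cross-checked numerically: the solution of the augmented system coincides with the analytic expression A⁺y + C⁺w, and the Lagrange multipliers vanish. The example below again uses a hypothetical three-point levelling network with simulated data.

```python
import numpy as np

A = np.array([[-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 1.0],
              [-1.0,  0.0, 1.0]])
C = np.array([[1.0, 1.0, 1.0]])
y = np.array([1.2, 0.9, 2.0])            # simulated observations
w = np.array([0.7])                      # an arbitrary constant vector

# Augmented normal equations (4.1):
M = np.block([[A.T @ A, C.T],
              [C, np.zeros((1, 1))]])
sol = np.linalg.solve(M, np.concatenate([A.T @ y, w]))
x_aug, minus_k = sol[:3], sol[3:]

# Analytic solution (4.2a):
x_analytic = np.linalg.pinv(A) @ y + np.linalg.pinv(C) @ w

assert np.allclose(x_aug, x_analytic)
assert np.allclose(minus_k, 0.0)         # the Lagrange multipliers are zero
```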

The expected value of the observations satisfies

A x = E(y) ,   (4.3)

where x represents the "true" parameters. Thus, regardless of w, (4.3) implies that

E(x̂) ≠ x .   (4.4a)

However, from (4.3) and from the "g" property of the pseudo-inverse it also follows that

A E(x̂) = A x ,   (4.4b)

which again holds true regardless of the constant vector w (w is eliminated due to
A C⁺ = 0).
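Both (4.4a) and (4.4b) can be verified with noise-free simulated data, where y = A x so that y coincides with its expectation and the estimate x̂ plays the role of E(x̂). The heights below are hypothetical.

```python
import numpy as np

A = np.array([[-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 1.0],
              [-1.0,  0.0, 1.0]])
C = np.array([[1.0, 1.0, 1.0]])

x_true = np.array([10.0, 11.5, 12.3])    # hypothetical "true" heights
y = A @ x_true                           # noise-free: y = E(y) = A x
w = np.array([3.0])

x_hat = np.linalg.pinv(A) @ y + np.linalg.pinv(C) @ w

assert not np.allclose(x_hat, x_true)        # (4.4a): the estimate is biased
assert np.allclose(A @ x_hat, A @ x_true)    # (4.4b): adjusted observables unbiased
```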

The special case of the minimum trace solution of the greatest practical

significance is the minimum norm solution. It is the simplest case where the arbitrary

constant terms in the constraint equations are set to zero, i.e.,



w = 0 .   (4.5)

All of the above equations could now be imagined written with this particular constant

vector w . Although the indeterminacy in the solution has thus been eliminated, it is not

so with the bias. As has been already indicated in (4.4a), the parameter estimates in free

networks are biased regardless of the numerical value of the constant vector w . This is

imputable to the rank deficiency which is, of course, what makes such networks free. On

the other hand, equation (4.4b) implies that the functions A x̂ or R A x̂, i.e. the
adjusted observables or their linear combinations, are indeed unbiased.
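With w = 0 the estimate reduces to the minimum norm least-squares solution x̂ = A⁺y: every other least-squares solution differs from it by a null-space vector, reproduces the same adjusted observables, and has a strictly larger norm. A small numerical check with hypothetical data:

```python
import numpy as np

A = np.array([[-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 1.0],
              [-1.0,  0.0, 1.0]])
y = np.array([1.2, 0.9, 2.0])

x_mn = np.linalg.pinv(A) @ y             # minimum-norm solution (w = 0)
null_vec = np.array([1.0, 1.0, 1.0])     # spans null(A)

for t in (-2.0, -0.5, 0.5, 2.0):
    x_other = x_mn + t * null_vec        # another least-squares solution
    assert np.allclose(A @ x_other, A @ x_mn)   # same adjusted observables
    assert np.linalg.norm(x_other) > np.linalg.norm(x_mn)
```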

Acknowledgement

This paper is dedicated to the memory of the late Professor Peter Meissl who, as one of the original reviewers, inspired the author to look into the problems of free networks from an unorthodox perspective. Although he refused any credit or even acknowledgement for his selfless help, the memory of Prof. Meissl deserves that at least two areas where he offered new ideas be mentioned:

and the insight it offers in the analysis of free-network adjustments. The various steps in the least-squares process can be visualized in terms of the subspaces associated with the design matrix A (range of A, null space of A, etc.), thus complementing, and at times offering an alternative to, the interpretation which makes use of several types of non-unique generalized inverses.

(non-random) vectors, in the development of a least-squares solution for free networks which would be the most general possible.

and personal preference, his openmindedness as a scientist, and his selflessness as a teacher and a colleague can probably never be fully appreciated.


REFERENCE

G. BLAHA : Inner Adjustment Constraints with Emphasis on Range Observations. Department of Geodetic Science, Report No. 148, The Ohio State University, Columbus, 1971.

Received : 10.05.1979

Accepted : 05.08.1982

