
J Geod (2010) 84:751–762

DOI 10.1007/s00190-010-0408-0

ORIGINAL ARTICLE

Generalization of total least-squares on example of unweighted and weighted 2D similarity transformation

Frank Neitzel

Received: 23 December 2008 / Accepted: 18 August 2010 / Published online: 17 September 2010
© Springer-Verlag 2010

Abstract  In this contribution it is shown that the so-called "total least-squares estimate" (TLS) within an errors-in-variables (EIV) model can be identified as a special case of the method of least-squares within the nonlinear Gauss–Helmert model. In contrast to the EIV-model, the nonlinear GH-model does not impose any restrictions on the form of the functional relationship between the quantities involved in the model. Even more complex EIV-models, which require specific approaches like "generalized total least-squares" (GTLS) or "structured total least-squares" (STLS), can be treated as nonlinear GH-models without any serious problems. The example of a similarity transformation of planar coordinates shows that the "total least-squares solution" can be obtained easily from a rigorous evaluation of the Gauss–Helmert model. In contrast to weighted TLS, weights can then be introduced without further limitations. Using two numerical examples taken from the literature, these solutions are compared with those obtained from certain specialized TLS approaches.

Keywords  Method of least-squares · Total least-squares (TLS) · Structured total least-squares (STLS) · Weighted total least-squares (WTLS) · Gauss–Helmert model · Gauss–Markov model · Coordinate transformation (2D)

F. Neitzel
University of Applied Sciences Mainz, Mainz, Germany

F. Neitzel (corresponding author)
School of Earth Sciences, The Ohio State University, Columbus, OH, USA
e-mail: neitzel.5@osu.edu

1 Introduction

A considerable part of the literature on least-squares estimation distinguishes between standard "least-squares" (LS) and "total least-squares" (TLS); cf., e.g., Golub and van Loan (1980), or van Huffel and Vandewalle (1991, p. 27 ff.). First, an over-determined linear model

y ≈ Aξ  (1)

is considered, in which the functional matrix A links the m × 1 vector of unknowns ξ with the n × 1 vector of observations y. Due to inevitable measurement errors, this equation system can be fulfilled only approximately. Under the assumptions that the measurement errors have a merely stochastic character without bias, and that only the components of the observation vector y are affected by these errors, it is appropriate to introduce the n × 1 vector e of random errors, which leads to the observation equations

y = Aξ + e  (2)

where, under the principle of ordinary least-squares estimation (OLSE), the objective function

e^T e = Σ_{i=1}^{n} e_i^2  (3)

is to be minimized, the number of observations being denoted by n. In the case that different variances are associated with the observations y_i, which, moreover, could possibly be correlated, a weight matrix P can be introduced following Aitken (1935). If P is chosen proportionate to the inverse variance–covariance matrix, this sort of adjustment is called "weighted least-squares" (WLS) and is associated with the linear Gauss–Markov model (GM-model); cf. Niemeier (2008, p. 137 ff.).
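As a minimal illustration of these two estimators, the following sketch contrasts the OLSE estimate minimizing (3) with the WLS estimate in the GM-model; the function names and the use of dense normal equations are choices made here, not taken from the paper:

```python
import numpy as np

def ols_estimate(A, y):
    """OLSE: minimize e^T e in the model y = A xi + e, cf. (2)-(3)."""
    xi, *_ = np.linalg.lstsq(A, y, rcond=None)
    return xi

def wls_estimate(A, y, P):
    """WLS in the linear Gauss-Markov model: minimize e^T P e,
    i.e. solve the normal equations (A^T P A) xi = A^T P y."""
    return np.linalg.solve(A.T @ P @ A, A.T @ P @ y)
```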


Obviously, the notion of "method of least-squares" comprises, since its very beginning about two centuries ago, certain nonlinear problems as well; cf. Gauss (1809, p. 215).

In the case that, after defining the functional matrix A, a decision is made that it is necessary to regard also the elements of this matrix as observations, the inconsistency of the equation system (1) has to be repaired in a way different from (2). Consequently, this can be done in a meaningful way only by introducing random errors both for the vector y and for the elements a_ij of the functional matrix A. This results in a consistent equation system

y = A*ξ + e  (4)

with

A* = [ a_11 − e_11   a_12 − e_12   ⋯   a_1m − e_1m ]
     [ a_21 − e_21   a_22 − e_22   ⋯   a_2m − e_2m ]
     [      ⋮             ⋮        ⋱        ⋮      ]  = A − E_A.  (5)
     [ a_n1 − e_n1   a_n2 − e_n2   ⋯   a_nm − e_nm ]

Here, m denotes the number of unknowns. In the absence of weights, the objective function to be minimized obtains the form

e^T e + e_A^T e_A = Σ_{i=1}^{n} e_i^2 + Σ_{i=1}^{n} Σ_{j=1}^{m} e_ij^2 ;   e_A = vec E_A.  (6)

Here, the "vec" operator stacks the columns of E_A, one underneath the other, into a vector. Corresponding weight matrices could also be taken into account if necessary. This adjustment has been named "total least-squares" (TLS) by Golub and van Loan (1980), although it is known to be the standard LS-method as applied to the errors-in-variables (EIV) model; see Schaffrin and Snow (2010), for instance. It should be noted that, in some cases, only some of the columns of the functional matrix are subject to random errors, depending on the problem definition, a case that can be handled by zero blocks in the respective weight matrix.

In order to solve TLS problems, one of the most elegant algorithms ever proposed is based on singular value decomposition (SVD); cf. Golub and van Loan (1980), or van Huffel and Vandewalle (1991, p. 29 ff.), among others. Whether it can be extended to weighted TLS problems, however, is unclear.
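For the plain (unstructured, equally weighted) TLS problem (4)–(6), that SVD-based solution can be sketched in a few lines. This is an illustration only, under the usual assumptions that the smallest singular value of the augmented matrix is simple and that the corresponding right singular vector has a nonzero last component; the function name is chosen here, not taken from the literature cited above:

```python
import numpy as np

def tls_estimate(A, y):
    """Unstructured, unweighted TLS via SVD (after Golub and van Loan
    1980): choose E_A and e of minimal Frobenius norm such that y - e
    lies in the column space of A - E_A, cf. (4)-(6)."""
    n, m = A.shape
    C = np.column_stack([A, y])   # augmented matrix [A | y]
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                    # right singular vector belonging to
                                  # the smallest singular value of C
    if np.isclose(v[m], 0.0):
        raise ValueError("TLS solution does not exist for this data")
    return -v[:m] / v[m]          # TLS estimate of xi
```

Structured or weighted variants, as discussed below, are not covered by this sketch.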
A look at the recent geodetic literature shows that the application of TLS enjoys increasing popularity there as well. This popularity is occasionally justified by the claim that the results of a TLS adjustment are "better" compared with the results of a standard LS adjustment; "better" in the sense that, in general, a TLS adjustment can be expected to provide "satisfactory" or "more realistic" estimates for the unknown parameters due to increased model flexibility; cf., e.g., Schaffrin et al. (2006), who base this judgment on their interpretation of TLS as "standard LS in a more suitable model".
Starting from this state of the discussion and taking, as an example, the similarity transformation in the plane into consideration, the present study aims at the following objectives:

1. The view of Schaffrin et al. (2006) that TLS does not represent a new adjustment method, but merely uses another adjustment model in the frame of the method of least-squares, should be confirmed.
2. It should be checked under which conditions the statement is justified that TLS indeed can provide "better" results in comparison with standard LS, considering that the EIV-model is a special case of the GH-model.
3. Furthermore, it should be shown that the solution of the so-called TLS problem can be obtained by a rigorous evaluation in a nonlinear Gauss–Helmert model (GH-model).

The example of a planar coordinate transformation in 2D was chosen because it is one of the most frequent applications of adjustment in the fields of geodesy, engineering surveying, photogrammetry, computer vision, and geographical information science (GIS). Specific TLS solutions for coordinate transformations have been presented earlier, e.g., in Felus and Schaffrin (2005), Akyilmaz (2007), as well as in Schaffrin and Felus (2008).

2 Total least-squares as special case of the method of least-squares

The functional model of the similarity transformation (4-parameter transformation) in the plane is considered. Taking as transformation parameters

ξ0, η0 … translation of the coordinate origin,
α … rotation angle,
μ … scale factor,

the well-known transformation law follows the approximation

[ X_i ]   [ cos α   −sin α ] [ μ  0 ] [ x_i ]   [ ξ0 ]
[ Y_i ] ≈ [ sin α    cos α ] [ 0  μ ] [ y_i ] + [ η0 ].  (7)

By multiplying out the corresponding expressions, (7) obtains the form

X_i = (μ cos α) x_i − (μ sin α) y_i + ξ0,
Y_i = (μ sin α) x_i + (μ cos α) y_i + η0.  (8)

Substituting

ξ2 = μ cos α,   ξ3 = μ sin α,  (9)

results in an approximate linear equation system

X_i ≈ ξ2 x_i − ξ3 y_i + ξ0,
Y_i ≈ ξ3 x_i + ξ2 y_i + η0  (10)

with i = 1, …, k, where k denotes the number of homologous points. If there are k > 2 homologous points, the unknowns can be determined through an adjustment process. This can essentially result in three different problem formulations.

Problem 1  The transformation parameters are to be determined under the assumption that the coordinates X_i, Y_i are observations and hence subjected to random errors, whereas the coordinates x_i, y_i represent error-free quantities. Obviously, the variances and covariances of the observations are to be taken into account.

In this problem formulation it is necessary to introduce random errors e_Xi, e_Yi, which results in linear observation equations

X_i − e_Xi = ξ2 x_i − ξ3 y_i + ξ0,
Y_i − e_Yi = ξ3 x_i + ξ2 y_i + η0.  (11)

These can be written in matrix notation (2) by denoting ξ1 := η0 and

y = [ ⋯ X_i Y_i ⋯ ]^T,   e = [ ⋯ e_Xi e_Yi ⋯ ]^T,   ξ = [ ξ0 ξ1 ξ2 ξ3 ]^T,

A = [ ⋯  ⋯   ⋯     ⋯   ]
    [ 1   0   x_i   −y_i ]
    [ 0   1   y_i    x_i ].  (12)
    [ ⋯  ⋯   ⋯     ⋯   ]

Taking into account the weight matrix P for the observations X_i, Y_i, the objective function to be minimized obtains the form

e^T P e = min_{e, ξ}   subject to (11)–(12).  (13)

The estimation of the parameters can be accomplished by weighted least-squares within a linear Gauss–Markov model (GM-model).

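For later reference, this GM-model solution of Problem 1 can be coded directly from (11)–(13); a sketch only, with the function name chosen here (it is also a convenient source of approximate values for the iterative scheme of Sect. 3):

```python
import numpy as np

def problem1_estimate(XY, xy, P=None):
    """Weighted LS estimate of xi = (xi0, xi1, xi2, xi3) in the linear
    GM-model (11)-(13); XY, xy are (k, 2) arrays of target and start
    coordinates, P an optional (2k, 2k) weight matrix."""
    XY, xy = np.asarray(XY, float), np.asarray(xy, float)
    k = len(XY)
    y = XY.reshape(-1)                     # (X_1, Y_1, X_2, Y_2, ...)
    A = np.zeros((2 * k, 4))
    for i, (x, yi) in enumerate(xy):
        A[2 * i]     = [1, 0,  x, -yi]     # row for X_i, cf. (12)
        A[2 * i + 1] = [0, 1, yi,   x]     # row for Y_i
    if P is None:
        P = np.eye(2 * k)
    return np.linalg.solve(A.T @ P @ A, A.T @ P @ y)
```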
Problem 2  Transformation parameters are to be determined under the assumption that the coordinates x_i, y_i are observations and hence subjected to random errors, whereas the coordinates X_i, Y_i represent error-free quantities. Again, variances and covariances of the observations have to be taken into account.

In this case it is necessary to introduce random errors e_xi, e_yi, which results in the identities

ξ2 (x_i − e_xi) − ξ3 (y_i − e_yi) = X_i − ξ0,
ξ3 (x_i − e_xi) + ξ2 (y_i − e_yi) = Y_i − η0.  (14)

By multiplying the first equation with ξ2, the second with ξ3, and adding the resulting expressions, it follows that

x_i − e_xi = (ξ2/μ^2)(X_i − ξ0) + (ξ3/μ^2)(Y_i − η0),  (15)

and in analogy to this

y_i − e_yi = −(ξ3/μ^2)(X_i − ξ0) + (ξ2/μ^2)(Y_i − η0).  (16)

Substituting

ξ̄2 = ξ2/μ^2,   ξ̄3 = ξ3/μ^2,  (17)

and

ξ̄0 = −ξ̄2 ξ0 − ξ̄3 η0,   ξ̄1 = ξ̄3 ξ0 − ξ̄2 η0  (18)

yields linear observation equations in the new parameters

x_i − e_xi = ξ̄2 X_i + ξ̄3 Y_i + ξ̄0,
y_i − e_yi = −ξ̄3 X_i + ξ̄2 Y_i + ξ̄1.  (19)

These can be brought into the matrix notation (2) by denoting

ȳ = [ ⋯ x_i y_i ⋯ ]^T,   ē = [ ⋯ e_xi e_yi ⋯ ]^T,   ξ̄ = [ ξ̄0 ξ̄1 ξ̄2 ξ̄3 ]^T,

Ā = [ ⋯  ⋯   ⋯     ⋯   ]
    [ 1   0   X_i    Y_i ]
    [ 0   1   Y_i   −X_i ].  (20)
    [ ⋯  ⋯   ⋯     ⋯   ]

Taking into account the weight matrix P̄ for the observations x_i, y_i results in the objective function

ē^T P̄ ē = min_{ē, ξ̄}   subject to (19)–(20).  (21)

The parameter estimation of type least-squares can be performed in the frame of a linear GM-model. The original unknowns ξ2 and ξ3 can be obtained from the nonlinear relationships

ξ2 = ξ̄2 / (ξ̄2^2 + ξ̄3^2),   ξ3 = ξ̄3 / (ξ̄2^2 + ξ̄3^2),  (22)

and the values of ξ0, η0 from the solution of the equation system

[ −ξ̄2  −ξ̄3 ] [ ξ0 ]   [ ξ̄0 ]
[  ξ̄3  −ξ̄2 ] [ η0 ] = [ ξ̄1 ].  (23)

The corresponding estimates can no longer be claimed as least-squares estimates, due to the nonlinear nature of the identities (22) and (23).
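Recovering the original parameters via (22)–(23) is a purely mechanical step; a small sketch, with the helper name chosen here:

```python
import numpy as np

def recover_parameters(xi_bar):
    """Invert the substitutions (17)-(18): map (xi0_bar, xi1_bar,
    xi2_bar, xi3_bar) back to (xi0, eta0, xi2, xi3), cf. (22)-(23)."""
    b0, b1, b2, b3 = xi_bar
    s = b2**2 + b3**2                  # equals 1 / mu^2
    xi2, xi3 = b2 / s, b3 / s          # relation (22)
    M = np.array([[-b2, -b3],
                  [ b3, -b2]])
    xi0, eta0 = np.linalg.solve(M, [b0, b1])   # equation system (23)
    return xi0, eta0, xi2, xi3
```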
Problem 3  Transformation parameters are to be determined under the assumption that both the coordinates X_i, Y_i, as well as x_i, y_i, represent observed quantities, thus containing random errors. As always, variances and covariances of the observations have to be taken into account.
In this problem, both the random errors e_Xi, e_Yi and e_xi, e_yi have to be introduced, which results in the identities

X_i − e_Xi = ξ2 (x_i − e_xi) − ξ3 (y_i − e_yi) + ξ0,
Y_i − e_Yi = ξ3 (x_i − e_xi) + ξ2 (y_i − e_yi) + η0.  (24)

Putting all corrections into the vector

e_ext := [ e^T, ē^T ]^T = [ ⋯ e_Xi e_Yi ⋯ | ⋯ e_xi e_yi ⋯ ]^T,  (25)

and the accuracy relations into a corresponding weight matrix P, results again in the objective function

e_ext^T P e_ext = min_{e_ext, ξ}   subject to (24).  (26)

This adjustment problem cannot be solved directly in the frame of the linear GM-model, since the functional model cannot be given the form (2). Therefore, there is no standard LS solution of this problem. Rearranging (24), however, an implicit form of the functional relation follows:

X_i − e_Xi − ξ2 (x_i − e_xi) + ξ3 (y_i − e_yi) − ξ0 = 0,
Y_i − e_Yi − ξ3 (x_i − e_xi) − ξ2 (y_i − e_yi) − η0 = 0.  (27)

This form is an example of a functional relationship that leads to an adjustment of (nonlinear) condition equations with unknowns (Helmert 1924, p. 285 ff.); see also Schaffrin and Snow (2010) for a different application using this rearranging. In this context it is of little importance that the functional relationship is nonlinear. The solution of this adjustment by the method of least-squares can be achieved through an evaluation within the GH-model, as will be shown in Sect. 3.

However, with the exception of Schaffrin and Snow (2010), this possibility for solving the problem was rather neglected in the literature discussed previously. After realizing that the model underlying Problem 3 is of type errors-in-variables (EIV), the total least-squares (TLS) approach is introduced. The starting point for the TLS adjustment is the definition of a quasi-linear model. While in model (11) only the quantities X_i, Y_i are regarded as observations, resulting in a functional matrix

A = [ ⋯  ⋯   ⋯     ⋯   ]
    [ 1   0   x_i   −y_i ]
    [ 0   1   y_i    x_i ],  (28)
    [ ⋯  ⋯   ⋯     ⋯   ]

the situation changes rather fundamentally in Problem 3. Beside the coordinates X_i, Y_i, also the quantities x_i, y_i are being regarded as observations, and hence are subject to errors. Thus, it is necessary to associate the elements of the third and fourth columns of the functional matrix with random errors. Consequently, a new functional model results as described in (4), namely

y = A*ξ + e,  (29)

where the respective quantities are defined as

y = [ ⋯ X_i Y_i ⋯ ]^T,   e = [ ⋯ e_Xi e_Yi ⋯ ]^T,   ξ = [ ξ0 ξ1 ξ2 ξ3 ]^T,

A* = [ ⋯  ⋯       ⋯              ⋯        ]
     [ 1   0   x_i − e_xi   −(y_i − e_yi) ]
     [ 0   1   y_i − e_yi     x_i − e_xi  ].  (30)
     [ ⋯  ⋯       ⋯              ⋯        ]

The objective function to be minimized is defined in (26), with the vector of random errors defined by (25). Since the product of A* with ξ now includes inherently nonlinear terms, the method of least-squares will necessarily lead to nonlinear normal equations, which may be solved by linear iteration. One possible algorithm may follow these steps:

• First, the least-squares solution for the coordinate transformation within model (11) is computed as approximation.
• The fact that the coordinates x_i, y_i are now treated as observations as well, and hence are subject to random errors, is taken into account afterwards by formally comparing the vector ξ in (29) with its approximation.

Finally, the TLS principle applied to Problem 3 again results in an adjustment problem with nonlinear normal equations. The least-squares solution of this problem should be identical with the result of the evaluation within the original GH-model, as long as in both cases the identical objective function (26) of the random error vector (25) is minimized subject to an identical functional relationship. Note that the structure of the matrix A*, where some observations appear twice, see (30), has to be considered within an appropriate TLS approach.
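To make this structure concrete, the following sketch (not taken from the paper) assembles A* for given error estimates; note that each e_xi and e_yi enters two matrix entries, which is exactly the structure an appropriate TLS algorithm must respect:

```python
import numpy as np

def build_A_star(xy, e_xy):
    """Assemble the structured matrix A* of (30); xy holds the start
    coordinates, e_xy the current error estimates, both of shape (k, 2)."""
    xy, e_xy = np.asarray(xy, float), np.asarray(e_xy, float)
    k = len(xy)
    A = np.zeros((2 * k, 4))
    for i in range(k):
        x, y = xy[i] - e_xy[i]           # reduced coordinates
        A[2 * i]     = [1, 0, x, -y]     # e_xi, e_yi appear here ...
        A[2 * i + 1] = [0, 1, y,  x]     # ... and here a second time
    return A
```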
It follows that TLS adjustment does not represent a novel type of adjustment method per se, but merely an additional adjustment model (EIV-model) in the general frame of the method of least-squares. In the considered example, this model can be regarded as a special case of the nonlinear GH-model.

It should be noted that, in the frame of the TLS approach, methods have been developed which find the solution without iteration, thereby paying special attention to stability, favorable numerical features, and efficiency; cf., e.g., Golub and van Loan (1980), or van Huffel and Vandewalle (1991, p. 29 ff.). These aspects are very important, and the respective contribution of the quoted sources can hardly be overestimated. However, which algorithm is used for solving a nonlinear equation system does not depend on the chosen adjustment model. In spite of the fact that, in the frame of adjustment calculus, the Gauss–Newton iteration is established as one of the preferred solution methods, it was never the exclusive solution method; cf., e.g., Schwarz et al. (1968, p. 78 ff.), or Lawson and Hanson (1974).
Prospects of the minimization of the objective function (26) by an evaluation in the EIV-model are investigated in this study by considering the example of the planar coordinate transformation. This investigation is based on numerical examples from Felus and Schaffrin (2005) and Akyilmaz (2007).

Finally, the opinion from the literature that the results of a TLS adjustment in model (24) tend to be "better" than the results of a LS adjustment in model (11) or (19), in the sense that this method supplies "more realistic" estimates for the unknown parameters, is discussed. Looking at the LS adjustment resulting from Problem 1 and at the TLS adjustment resulting from Problem 3, it is directly obvious that LS and TLS are not two different methods, but applications of the same method (the method of least-squares) to two different problems. Any discussion of which of the two approaches is "better" is therefore unnecessary. It is only necessary always to model the given problem, and not something completely different from it. A statement to this extent can already be found in Petrovic (2003, p. 56).

3 TLS solution generated by least-squares adjustment within the nonlinear Gauss–Helmert model

In this section, the solution of Problem 3 from Sect. 2 will be based on "classical" procedures, specifically on an iterative evaluation of the nonlinear normal equations as derived for the nonlinear GH-model by least-squares adjustment. A very popular approach replaces the original GH-model by a sequence of linearized GH-models, which obviously requires a correct linearization of the condition equations. This means that the linearization has to be done both at the approximate values ξ^0 for the unknowns and at approximate values e^0 for the random errors; alternatively, the linearization can be performed at (y − e^0). Such iterative solution procedures are regarded as "rigorous" evaluations of the nonlinear normal equations, although no formal proof seems to exist.

An extensive presentation of an evaluation of this kind can already be found in Böck (1961) and Pope (1972). Lenzmann and Lenzmann (2004) pick up this problem once more and present another rigorous treatment of a nonlinear GH-model as well. In doing so, they show very clearly which terms, when neglected, will produce merely approximate formulas.

Unfortunately, these approximate formulas, which can yield an unusable solution, are being found in all too many popular textbooks, among them Mikhail and Gracie (1981), Wolf and Ghilani (1997), Benning (2007), and Niemeier (2008). Therefore, they are widely spread in practical applications. A comparison between the approximate and the rigorous solutions when fitting a straight line can be found, e.g., in Neitzel and Petrovic (2008).

The rigorous treatment of the iteratively linearized GH-model, as presented in the following, is based on Lenzmann and Lenzmann (2004). Here, general formulas are given as far as they are necessary for the considered problem. Next, the formulas for the determination of the transformation parameters of a planar coordinate transformation are given for the case when both the coordinates X_i, Y_i and x_i, y_i are regarded as observations which are subject to random errors.

The vector of observations is denoted by y. The random error vector e and the vector of unknowns ξ are connected by r nonlinear differentiable condition equations of the form

ψ_i(e, ξ) = h_i(y − e, ξ) = 0  (31)

with i = 1, …, r. Introducing appropriate approximate values e^0 and ξ^0, the linearized condition equations can be written as

ψ(e, ξ) ≈ B^0 (e − e^0) + A^0 (ξ − ξ^0) + ψ(e^0, ξ^0) = 0,  (32)

involving the matrices of partial derivatives

B^0(e, ξ) = ∂ψ(e, ξ)/∂e^T |_{e^0, ξ^0}  (33)

and

A^0(e, ξ) = ∂ψ(e, ξ)/∂ξ^T |_{e^0, ξ^0}.  (34)

These derivatives have to be formed at the approximate values e^0 and ξ^0. It should be noted that the symbol A^0 is now used in a different way than in the preceding sections. With the vector of misclosures

w^0 = −B^0 e^0 + ψ(e^0, ξ^0),  (35)

the solution ξ̂_1 for the unknowns in the first iteration step is obtained from the equation system

[ B^0 Q B^0T   A^0 ] [    λ̂_1     ]   [ w^0 ]
[ A^0T          0  ] [ ξ̂_1 − ξ^0 ] + [  0  ] = 0,  (36)

where Q denotes the cofactor matrix of the observations, and λ is a vector of auxiliary "Lagrange multipliers"; hats indicate estimates. The first residual vector can now be obtained from

ẽ_1 = Q B^0T λ̂_1.  (37)

The solutions ẽ_1, ξ̂_1, after stripping them of their random character, are to be substituted as new approximate values as long as necessary, until a sensibly chosen break-off condition is met. For the choice of break-off conditions refer, e.g., to Böck (1961) and Lenzmann and Lenzmann (2004). Experience shows that, after convergence, the final solution fulfills the nonlinear least-squares normal equations. Note that oftentimes the update in (35) is being mishandled, in which case convergence may occur, but not to the nonlinear least-squares solution.
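The scheme (32)–(37), including the frequently mishandled misclosure (35), translates almost line by line into code. The following single iteration step is a sketch under the stated definitions; the function names and the dense solver are choices made here, with psi, B, and A as callables evaluating the condition equations and the two Jacobians:

```python
import numpy as np

def gh_iteration_step(psi, B, A, Q, e0, xi0):
    """One step of the iteratively linearized GH-model, cf. (32)-(37):
    psi(e, xi) returns the r condition equations, B(e, xi) and A(e, xi)
    the Jacobians (33)-(34); Q is the cofactor matrix of the observations."""
    B0, A0 = B(e0, xi0), A(e0, xi0)
    w0 = -B0 @ e0 + psi(e0, xi0)          # misclosures (35)
    r, m = A0.shape
    # bordered normal-equation system (36)
    K = np.block([[B0 @ Q @ B0.T, A0],
                  [A0.T, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([w0, np.zeros(m)]))
    lam, xi1 = sol[:r], xi0 + sol[r:]
    e1 = Q @ B0.T @ lam                   # residual vector (37)
    return e1, xi1
```

The returned pair, stripped of its random character, is substituted as the new approximation (e^0, ξ^0) until the break-off condition is met.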

Considering the problem that, using the coordinates X_i, Y_i and x_i, y_i as observations, the parameters of a planar coordinate transformation are to be determined, then according to (31) the conditions

          [ ⋯                                                   ]
ψ(e, ξ) = [ X_i − e_Xi − ξ2 (x_i − e_xi) + ξ3 (y_i − e_yi) − ξ0 ] = 0  (38)
          [ Y_i − e_Yi − ξ3 (x_i − e_xi) − ξ2 (y_i − e_yi) − ξ1 ]
          [ ⋯                                                   ]

are to be satisfied. Taking appropriate approximate values e_Xi^0, e_Yi^0 and e_xi^0, e_yi^0, as well as ξ0^0, ξ1^0, ξ2^0, ξ3^0, it is possible to build the matrices

B1^0 = [ 1  0  ⋯  0 ]
       [ 0  1  ⋯  0 ]
       [ ⋮  ⋮  ⋱  ⋮ ]  (39)
       [ 0  0  ⋯  1 ]

B2^0 = [ −ξ2^0   ξ3^0     0       0     ⋯     0       0    ]
       [ −ξ3^0  −ξ2^0     0       0     ⋯     0       0    ]
       [   0       0    −ξ2^0   ξ3^0    ⋯     0       0    ]
       [   0       0    −ξ3^0  −ξ2^0    ⋯     0       0    ]  (40)
       [   ⋮       ⋮      ⋮       ⋮     ⋱     ⋮       ⋮    ]
       [   0       0      0       0     ⋯   −ξ2^0   ξ3^0   ]
       [   0       0      0       0     ⋯   −ξ3^0  −ξ2^0   ]

A^0 = [         ⋯                  ⋯          ⋯   ⋯  ]
      [ −(x_i − e_xi^0)     y_i − e_yi^0     −1    0  ]
      [ −(y_i − e_yi^0)   −(x_i − e_xi^0)     0   −1  ]  (41)
      [         ⋯                  ⋯          ⋯   ⋯  ]

and the vector of misclosures

w^0 = [ ⋯ ]
      [ e_Xi^0 − ξ2^0 e_xi^0 + ξ3^0 e_yi^0 + X_i − e_Xi^0 − ξ2^0 (x_i − e_xi^0) + ξ3^0 (y_i − e_yi^0) − ξ0^0 ]
      [ e_Yi^0 − ξ3^0 e_xi^0 − ξ2^0 e_yi^0 + Y_i − e_Yi^0 − ξ3^0 (x_i − e_xi^0) − ξ2^0 (y_i − e_yi^0) − ξ1^0 ]
      [ ⋯ ]

    = [ ⋯ ]
      [ X_i − ξ2^0 x_i + ξ3^0 y_i − ξ0^0 ]
      [ Y_i − ξ3^0 x_i − ξ2^0 y_i − ξ1^0 ].  (42)
      [ ⋯ ]

Here, B1^0 and B2^0 denote the matrices of partial derivatives according to (33). Applying the cofactor matrices Q_XY and Q_xy of the coordinates in the target and start systems, and assuming no correlation between them, it is possible to obtain the estimates for the unknowns from the solution of the linear equation system

[ Q_XY + B2^0 Q_xy B2^0T   A^0 ] [    λ̂_1     ]   [ w^0 ]
[ A^0T                      0  ] [ ξ̂_1 − ξ^0 ] + [  0  ] = 0.  (43)

The first residual vector follows from

ẽ_1 = [ Q_XY       ]
      [ Q_xy B2^0T ] λ̂_1.  (44)

After stripping the solution ẽ_1, ξ̂_1 of its random character, it is then used in the next iteration step as the approximation e^1, ξ^1. It is important to note that the second (simplified) identity in (42) is only valid for the initial step; in all later iteration steps, the first identity in (42) must be used.

For the considered example, the description of the adjustment problem in the frame of the nonlinear GH-model is equivalent to the corresponding formulation using the TLS approach within an EIV-model. This follows from the fact that, at all places in the respective matrices where observations appear, the corresponding approximate values for the random errors are to be substituted; cf. (41) and (42). The fact that, in the second identity of (42), the vector of misclosures does not contain any approximate values for the random errors is a special case for the initial step only. It cannot be generalized to later iteration steps.
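For the concrete conditions (38), the building blocks needed by the generic step sketched above may be coded as follows. This is again an illustration only: the parameter vector is ordered (ξ0, ξ1, ξ2, ξ3), and the Jacobians are written down directly from the definitions (33)–(34), so an overall sign convention differing from (39)–(41) cancels in (36)–(37):

```python
import numpy as np

def make_similarity_model(XY, xy):
    """Condition equations (38) and Jacobians (33)-(34) for the planar
    4-parameter similarity transformation; the error vector is ordered
    e = (e_X1, e_Y1, ..., e_x1, e_y1, ...), the parameter vector
    xi = (xi0, xi1, xi2, xi3)."""
    XY, xy = np.asarray(XY, float), np.asarray(xy, float)
    k = len(XY)

    def split(e):
        return e[:2 * k].reshape(-1, 2), e[2 * k:].reshape(-1, 2)

    def psi(e, xi):                      # conditions (38)
        t0, t1, t2, t3 = xi
        eXY, exy = split(e)
        X, Y = (XY - eXY).T              # reduced target coordinates
        x, y = (xy - exy).T              # reduced start coordinates
        out = np.empty(2 * k)
        out[0::2] = X - t2 * x + t3 * y - t0
        out[1::2] = Y - t3 * x - t2 * y - t1
        return out

    def B(e, xi):                        # d psi / d e^T, cf. (33)
        _, _, t2, t3 = xi
        B2 = np.kron(np.eye(k), [[t2, -t3], [t3, t2]])
        return np.hstack([-np.eye(2 * k), B2])

    def A(e, xi):                        # d psi / d xi^T, cf. (34)
        x, y = (xy - split(e)[1]).T
        Am = np.empty((2 * k, 4))
        Am[0::2] = np.column_stack([-np.ones(k), np.zeros(k), -x,  y])
        Am[1::2] = np.column_stack([np.zeros(k), -np.ones(k), -y, -x])
        return Am

    return psi, B, A
```

With Q block-diagonal in Q_XY and Q_xy, starting from the Problem 1 parameters and e^0 = 0, repeated calls of gh_iteration_step(psi, B, A, Q, e0, xi0) until the parameter update becomes negligible implement the iteration described above; with equal weights (Q = I) this is the setting of the first case study below.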
4 Case study I: transformation of equally weighted coordinates

4.1 Transformation with two parameters

The first numerical example, with the coordinates listed in Table 1, comes from Felus and Schaffrin (2005). In this example, the coordinates in the target system X_i, Y_i and the coordinates in the start system x_i, y_i are regarded as equally weighted uncorrelated observations. In Felus and Schaffrin (2005), the solutions for the rotation angle α and the scale factor μ are to be determined taking into account random errors for both the coordinates in the target ("calibrated coordinates") and start ("measured coordinates") systems.

Table 1  Numerical example from Felus and Schaffrin (2005)

Point no.   Calibrated coordinates        Measured coordinates
            X_i [mm]      Y_i [mm]        x_i [mm]     y_i [mm]
1           −117.478         0             17.856      144.794
2            117.472         0            252.637      154.448
3              0.015      −117.41         140.089       32.326
4             −0.014       117.451        130.40       267.027

Fig. 1  Coordinate systems XY and xy plotted one on top of the other

The transformation formula applied in Felus and Schaffrin (2005) reads

[ X_i ]   [  cos α   sin α ] [ μ  0 ] [ x_i ]
[ Y_i ] ≈ [ −sin α   cos α ] [ 0  μ ] [ y_i ].  (45)

It should be noted that the rotation direction in the rotation matrix is reversed compared with (7), and no translation parameters are taken into account. With the substitutions (9), the linear functional model receives the form

X_i ≈ ξ2 x_i + ξ3 y_i,
Y_i ≈ −ξ3 x_i + ξ2 y_i.  (46)

By introducing the random errors e_Xi, e_Yi and e_xi, e_yi, it follows that

X_i − e_Xi = ξ2 (x_i − e_xi) + ξ3 (y_i − e_yi),
Y_i − e_Yi = −ξ3 (x_i − e_xi) + ξ2 (y_i − e_yi).  (47)

The choice of this functional model is somewhat extraordinary because, from the numerical values in Table 1, it is directly visible that there must be a large translation between both coordinate systems. This is illustrated in Fig. 1, where these two coordinate systems, XY and xy, are superimposed. A suitable consideration with additional translations is presented later, in Sect. 4.3. First, the solution based on (47) is considered, independently of the question whether this functional model is appropriate.

From Felus and Schaffrin (2005) it follows that a solution is sought which satisfies the equation system (47) while minimizing the function (26) for the random errors in (25). Due to the structure of the matrix A*, where some observations appear twice, see (30), Felus and Schaffrin (2005) proposed a new technique and called it the "structured TLS procedure" (STLS). It is, however, noted that this procedure is generated from the standard TLS solution by imposing the appropriate structure on it, without claiming that it is the TLS solution among all other structured solutions. The numerical results for the auxiliary quantities ξ2 and ξ3, as well as the resulting outcome for the scale factor μ and the rotation angle α, are listed in Table 2. The corresponding residuals are listed in Table 3.

Table 2  STLS solution from Felus and Schaffrin (2005)

Obtained parameters          STLS solution
Parameter ξ̂2                 0.30579145769903
Parameter ξ̂3                 0.01254378090726
Scale factor μ̂               0.3060486
Rotation angle α̂             2° 20′ 56.39″
Variance component σ̂0²       6656.6

Table 3  Residuals from Felus and Schaffrin (2005)

Point no.   Calibrated coordinates       Measured coordinates
            ẽ_Xi [mm]    ẽ_Yi [mm]       ẽ_xi [mm]   ẽ_yi [mm]
1           −119.155     −42.075          18.014       7.204
2             36.562     −42.082          −5.873       6.225
3            −41.288    −119.903           5.579      18.654
4            −41.298      35.752           6.560      −5.224

It needs to be pointed out, however, that the algorithm used by Felus and Schaffrin (2005) to generate these results does not exactly follow the analytical formulas provided in their very paper. After correcting the algorithm, both the estimated parameters as well as the residuals do change, with a surprising answer to be discussed below.

Now let us solve the so-called STLS problem by iteratively linearizing the nonlinear Gauss–Helmert model, subject to the same requirements. This means that the same solution is sought that fulfills the equation system (47) while minimizing the function (26) of the random errors in (25). The required approximate values for the unknowns can be computed in the first step from a transformation with error-free values x_i, y_i (Problem 1 in Sect. 2). These are ξ2^0 = 0.25 and ξ3^0 = 0.01. As approximate values for the random errors, e_xi^0 = e_yi^0 = 0 can be chosen. Applying the formulas introduced in Sect. 3 (but with the translation parameters set to zero) yields, after several iterations, the solution as presented in Table 4.

The objective function (26) at its minimum obtains the value ẽ^T P ẽ = 38229.2. The corresponding residuals are listed in Table 5.

Table 4  Solution within an iteratively linearized GH-model

Obtained parameters          GH solution
Parameter ξ̂2                 0.30686619800718
Parameter ξ̂3                 0.01258786751144
Scale factor μ̂               0.3071243
Rotation angle α̂             2° 20′ 56.39″
Variance component σ̂0²       6371.5

Table 5  Residuals within the iteratively linearized GH-model

Point no.   Calibrated coordinates       Measured coordinates
            ẽ_Xi [mm]    ẽ_Yi [mm]       ẽ_xi [mm]   ẽ_yi [mm]
1           −114.025     −40.397          34.482      13.832
2             34.726     −40.404         −11.165      11.961
3            −39.641    −114.743          10.720      35.710
4            −39.651      33.949          12.595      −9.919

4.2 Discussion of the results

Comparing the results for the transformation parameters in Tables 2 and 4, considerable differences are detected. Merely the values for the rotation angle agree. Furthermore, it should be noted that the outcomes for the variance factor disagree, actually not an unexpected result. Comparing the size of the residuals in Tables 3 and 5, quite large differences between the results attract attention.

Now, a natural question arises: what is the cause for the results deviating so much from one another? Let us first take a look at the value of the estimated variance component for the STLS solution in Table 2, which is σ̂0² = 6656.6. Computing this factor in the well-known way, which follows from (26) by applying

σ̂0² = [ Σ_{i=1}^{k} (ẽ_Xi² + ẽ_Yi²) + Σ_{i=1}^{k} (ẽ_xi² + ẽ_yi²) ] / (2k − m)  (48)

with k = 4 and m = 2 to the residuals from Table 3, yields a value of σ̂0² = 6506.7. The inconsistency may be explained as follows: for the computation of the variance component, Felus and Schaffrin (2005) use a matrix representation of the residuals,

Ẽ_A = [ ẽ_x1    ẽ_y1 ]
      [ ẽ_y1   −ẽ_x1 ]
      [   ⋮       ⋮   ]  (49)
      [ ẽ_xk    ẽ_yk ]
      [ ẽ_yk   −ẽ_xk ]

For the computation of the variance component, the formula

σ̂0² = [ ẽ^T ẽ + (vec Ẽ_A)^T (vec Ẽ_A) ] / (2k − m)  (50)

may have been applied, which is standard for unstructured TLS problems. In order to understand this formula, it should be noted that the operator vec stacks the columns of a matrix one beneath the other, taking them from the matrix from left to right; hence,

vec Ẽ_A = [ ẽ_x1  ẽ_y1  ⋯  ẽ_xk  ẽ_yk | ẽ_y1  −ẽ_x1  ⋯  ẽ_yk  −ẽ_xk ]^T.  (51)

Building the product (vec Ẽ_A)^T · (vec Ẽ_A) yields

(vec Ẽ_A)^T (vec Ẽ_A) = 2 · Σ_{i=1}^{k} (ẽ_xi² + ẽ_yi²).  (52)

In classical notation, (50) can be written as

σ̂0² = [ Σ_{i=1}^{k} (ẽ_Xi² + ẽ_Yi²) + 2 · Σ_{i=1}^{k} (ẽ_xi² + ẽ_yi²) ] / (2k − m),  (53)

which obviously uses half of the residuals twice if compared with (48). This means that the expression (50) used by Felus and Schaffrin (2005) should be replaced by

σ̂0² = [ ẽ^T ẽ + 0.5 · (vec Ẽ_A)^T (vec Ẽ_A) ] / (2k − m),  (54)

due to the very structure of the matrix Ẽ_A. Using this expression yields the same variance component of σ̂0² = 6506.7 as derived from (48). For the value of the objective function (26), it follows that ẽ^T P ẽ = 39040.1. Comparing this amount with the value of the objective function ẽ^T P ẽ = 38229.2 that follows from the solution of the same adjustment problem by iterated linearization, it is visible at once that the STLS way of solution, as presented in Felus and Schaffrin (2005), does not minimize the objective function (26) as intended. Consequently, the proposed STLS procedure may not generate the TLS solution among all structured solutions. In all fairness, Felus and Schaffrin (2005) had never claimed this.
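The bookkeeping behind (48)–(54) is easy to mechanize; a small sketch with function names chosen here:

```python
import numpy as np

def sigma02_eq48(e_XY, e_xy, m):
    """Variance component (48): every residual counted exactly once;
    e_XY and e_xy are (k, 2) arrays of residuals."""
    k = len(e_XY)
    return (np.sum(np.square(e_XY)) + np.sum(np.square(e_xy))) / (2 * k - m)

def sigma02_eq50(e_XY, E_A, m):
    """Unstructured TLS formula (50); for the structured matrix (49) the
    vec-term counts the start-system residuals twice, cf. (52)-(53)."""
    k = len(e_XY)
    vecE = np.asarray(E_A).reshape(-1, order="F")   # stack columns, cf. (51)
    return (np.sum(np.square(e_XY)) + vecE @ vecE) / (2 * k - m)
```

Applied to the residuals of Table 3 with k = 4 and m = 2, the first function returns the value 6506.7 derived above, while the second comes out near the 6656.6 of Table 2 (exact agreement would require unrounded residuals); halving the vec-term, as in (54), restores the value of (48).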
In summary, the solution strategy developed in Sect. 3 makes an appropriate solution of the considered problem possible, though only by iteration. This problem consists in the determination of the transformation parameters, rotation and scale factor, when taking into account the random errors for the coordinates in both the target and the start system. The STLS way of solving the problem, originally proposed by Felus and Schaffrin (2005), has meanwhile been modified accordingly and will be published by Schaffrin and Neitzel (2011) soon.

4.3 Transformation with four parameters

From the coordinates in Table 1 and the graphical presentation of the points to be transformed in Fig. 1, it is directly visible that there is a large translation between the two coordinate systems. An application of the functional model (46), which neglects the translation parameters, thus leads inevitably to unrealistic model parameter estimates, especially for the scale factor. Therefore, for an appropriate solution, let us now take the functional model (10) as the basis.

Hence, the goal is to find a solution for the rotation angle α, the scale factor μ, and the translation parameters ξ0, ξ1, considering both the coordinates in the target system X_i, Y_i and the coordinates in the start system x_i, y_i as equally weighted uncorrelated observations. The solution has to fulfill the equation system (24) and minimize the function (26) of the random errors (25). As approximate values for the unknowns, ξ2^0 = 0.25, ξ3^0 = 0.01, as well as ξ0^0 = ξ1^0 = 0, are chosen, and as approximate values for the random errors, e_xi^0 = e_yi^0 = 0. Using the formulas developed in Sect. 3, the solution listed in Table 6 is obtained after several iterations.

Table 6  Solution within an iteratively linearized GH-model

Obtained parameters          GH solution
Parameter ξ̂2                  0.99900748077781
Parameter ξ̂3                 −0.04109806319405
Scale factor μ̂                0.99985248784424
Rotation angle α̂             −2° 21′ 20.72″
Shifting ξ̂0                  −141.2628 mm
Shifting ξ̂1                  −143.9316 mm
Variance component σ̂0²        0.00016081

The value of the objective function (26) amounts to ẽ^T P ẽ = 0.00064325 (more than 10^7 times smaller than for the two-parameter solution from Sect. 4.1); the corresponding residuals are listed in Table 7.

Table 7  Residuals obtained from the iteratively linearized GH-model

Point no.   Calibrated coordinates     Measured coordinates
            ẽ_Xi [mm]   ẽ_Yi [mm]      ẽ_xi [mm]   ẽ_yi [mm]
1           −0.0021      0.0076        0.0024     −0.0075
2            0.0005      0.0099       −0.0001     −0.0099
3           −0.0004     −0.0074       −0.0000      0.0075
4            0.0020     −0.0101       −0.0024      0.0100

4.4 Discussion of the results

Using the solution strategy developed in Sect. 3, it is straightforward to treat the coordinate transformation with the translation parameters ξ0, ξ1 included as well. In contrast to the STLS approach, the circumstance that the corresponding columns of the functional matrix do not contain any random errors ("fixed" or "frozen" columns in TLS terminology) poses no problems and can be treated in an appropriate way. The choice of a more suitable functional model, which takes the translation parameters into account as well, yields drastically reduced residuals in the considered example. This is obvious from a comparison of Tables 5 and 7. The residuals for the 4-parameter transformation are at least 1,000 times smaller than the respective residuals from the 2-parameter transformation. Furthermore, the estimated model parameters are now much more realistic than the parameters estimated in Sect. 4.1.

5 Case study II: transformation of weighted coordinates

5.1 Transformation with four parameters

The second numerical example is based on the coordinates listed in Table 8 and originates from Akyilmaz (2007). In this example, the coordinates in both the target system X_i, Y_i and the start system x_i, y_i are regarded as observations, associated with the weight matrices

P_XY = Diag[ 10  14.2857  0.8929  1.4286  7.1429  10  2.2222  3.2259  7.6923  11.1111 ],
P_xy = Diag[ 5.8824  12.5  0.9009  1.7241  7.6923  16.6667  4.1667  6.6667  8.3333  16.6667 ].  (55)

Table 8  Numerical example from Akyilmaz (2007)

Point no.   Target system                     Start system
            X_i [m]          Y_i [m]          x_i [m]          y_i [m]
3           4540134.2780     382379.8964      4540124.0940     382385.9980
185         4539937.3890     382629.7872      4539927.2250     382635.8691
2796        4539979.7390     381951.4785      4539969.5670     381957.5705
2996        4540326.4610     381895.0089      4540316.2940     381901.0932
5005        4539216.3870     382184.4352      4539206.2110     382190.5278
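If the cofactor matrices required in (43) are taken as the inverses of the weight matrices (55), the sketches given earlier carry over unchanged. Note that the ordering of the diagonal entries relative to the observation vector is an assumption of this sketch:

```python
import numpy as np

# Weights from (55); the entries are assumed here to follow the order of
# the observation vectors (X_1, Y_1, ... and x_1, y_1, ..., respectively).
P_XY = np.diag([10, 14.2857, 0.8929, 1.4286, 7.1429,
                10, 2.2222, 3.2259, 7.6923, 11.1111])
P_xy = np.diag([5.8824, 12.5, 0.9009, 1.7241, 7.6923,
                16.6667, 4.1667, 6.6667, 8.3333, 16.6667])

# Cofactor matrices entering (43):
Q_XY = np.linalg.inv(P_XY)
Q_xy = np.linalg.inv(P_xy)
```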

Table 9 GTLS solution from Akyilmaz (2007) Table 11 Solution from an iteratively linearized GH-model
Obtained parameters GTLS solution Obtained parameters GH solution

Parameter ξ̂2 0.9999974364 Parameter ξ̂2 0.9999953579


Parameter ξ̂3 −0.0000086397 Parameter ξ̂3 −0.0000042049
Shifting ξ̂0 18.5145m Shifting ξ̂0 29.6432m
Shifting ξ̂1 34.1062m Shifting ξ̂1 14.7696m
Scale factor μ̂ 0.9999974362809 Scale factor μ̂ 0.9999953578895
Rotation angle α̂ −0.0005500gon Rotation angle α̂ −0.0002677gon
Variance component σ̂02 0.000179

Table 10 Residuals for the target system from Akyilmaz (2007)


Point no. Target system Table 12 Residuals obtained from an iteratively linearized GH-model
Point no. Target system Start system
ẽ X i [m] ẽYi [m]
ẽ X i [m] ẽYi [m] ẽxi [m] ẽ yi [m]
3 −0.0048 0.0023
185 0.0179 −0.0163 3 0.0032 −0.0025 −0.0055 0.0029
2796 0.0039 −0.0049 185 −0.0066 0.0080 0.0066 −0.0066
2996 0.0075 −0.0154 2796 −0.0011 0.0010 0.0010 −0.0006
5005 0.0039 0.0017 2996 −0.0035 0.0070 0.0018 −0.0034
5005 −0.0014 −0.0007 0.0013 0.0005

The goal is to determine the parameters α, μ, ξ0 , ξ1 by an Table 13 Residuals in the start system for the GTLS solution
adjustment taking into account the random errors for the
Point no. Start system
coordinates in both the target system and the start system.
From Akyilmaz (2007) it follows that a solution is sought ẽxi [m] ẽ yi [m]
that satisfies the equation system (24) while minimizing the
3 0.0001 0.0001
function (26) of the random errors (25). In order to solve
185 0.0001 0.0001
this adjustment problem, Akyilmaz (2007) elaborates on a
2796 0.0001 0.0001
so-called “generalized TLS procedure” (GTLS) which is
2996 0.0000 0.0001
known to not necessarily furnish the weighted TLS solution
5005 0.0001 0.0001
according to Schaffrin and Wieser (2008). The GTLS solu-
tion for the transformation parameters is given in Table 9.
The result for the variance component σ̂02 was not given in
Akyilmaz (2007). The residuals, which were computed only 5.2 Discussion of the results
for the coordinates of the target system are listed in Table 10.
Now we solve the weighted TLS problem within an itera- Comparing the results for the transformation parameters from
tively linearized Gauss–Helmert model, subject to the same Tables 9 and 11 and the residuals in the target system from
requirements. This means that again the solution is sought Tables 10 and 12, an obvious difference in the results can be
that fulfils the equation system (24) while the function (26) detected.
minimizes the random errors in (25), taking into account the In order to make a deeper comparison of the results, the
weights (55). First of all, the necessary approximate values residuals for the coordinates in the start system as result-
for the unknowns are computed based on a transformation for ing from Akyilmaz (2007) are reconstructed using (24); cf.
error-free values xi , yi (Problem 1 of Sect. 2) without taking Table 13.
into account the weights. This results in ξ20 = 1, ξ30 = 0 Since the value of the objective function (26) for the GTLS
and ξ00 = 19.9, ξ10 = −11.7. As approximate values for the solution is not given in Akyilmaz (2007), this value is com-
random errors we choose e0xi = e0yi = 0. Applying the for- puted from the residuals in Tables 10 and 13 while taking
mulas as developed in Sect. 3, several iterations lead to the into account the weight matrices PX Y and Px y . This yields
results as presented in Table 11. the value ẽT P ẽ = 0.002360, and the estimated variance com-
The value of the objective function (26) is ẽT P ẽ = ponent becomes σ̂02 = 0.000393. Comparison with the value
0.001073; the corresponding residuals can be found in ẽT P ẽ = 0.001073 of the objective function, as obtained from
Table 12. the solution of the same adjustment problem in the iteratively

123
Total least-squares on example of unweighted and weighted 2D similarity transformation 761

Special attention should be paid to the residuals in Table 13. Obviously, the GTLS results in Akyilmaz (2007) represent a solution which assigns very small random errors to the coordinates in the start system. The fact that in Table 13 some small nonzero values appear in the fourth decimal place is probably due merely to some rounding effects, e.g., when multiplying the parameters ξ̂2 resp. ξ̂3, given to ten decimal places, with seven-digit coordinate values. Furthermore, it should be noted that, in spite of this, the solution as given in Akyilmaz (2007) is not identical with the solution of Problem 1 from Sect. 2 (the "LS solution") either, since it does not minimize the objective function (13) of the random errors (12).

In summary, the solution strategy developed in Sect. 3 makes it possible to also consider, without any problems, weight matrices for the coordinates in the start system as well as in the target system, similar to Schaffrin and Wieser (2009) for the affine transformation. Beside the diagonal matrices considered in this example, it is quite possible to introduce completely filled matrices as well. The GTLS solution as applied by Akyilmaz (2007) was never designed to minimize the sum of weighted squared residuals. Additionally, the coordinates in the start system do not obtain any corrections. Thus, the deficiencies of the GTLS solution procedure by Akyilmaz (2007), which had been indicated by Schaffrin (2008) already, can be regarded as confirmed.

6 Conclusions and outlook

After a short introduction into the TLS terminology in the context of errors-in-variables (EIV) models, the example of a planar similarity transformation is discussed. Depending on whether the coordinates in the target system, the coordinates in the start system, or the coordinates in both systems are regarded as observations subjected to random errors, three different problems result. All three can be solved appropriately by an adjustment according to the method of least-squares, either in a standard GM-model (Problems 1 and 2 of Sect. 2) or in a nonlinear GH-model (Problem 3 of Sect. 2).

Various solutions have been considered in the literature on mathematical statistics and, in recent times, also in the geodetic literature. The case in which the coordinates in both systems are observations subjected to random errors, if treated as a TLS problem, may lead to an alternative algorithm. However, since in a TLS adjustment within an EIV-model the same objective function is minimized as in an adjustment by the method of least-squares within a nonlinear GH-model, TLS adjustment may not be regarded as a new adjustment method per se, but rather as an additional possibility to formulate a new algorithm in the frame of the general method of least-squares.

The discussion whether TLS yields "better" results than a LS adjustment is always meant to refer to the EIV-model in the case of TLS, and to the standard GM-model in the case of LS, where the random error matrix is set to zero right away. Consequently, essentially two different models are compared, not two different adjustment methods. This can already be learned from Schaffrin and Snow (2010), and is in virtually complete analogy to why LS-collocation is just the old LS-adjustment when applied to a model with prior information.

The solution of the so-called TLS problem for the planar similarity transformation is demonstrated by means of an evaluation within an iteratively linearized GH-model. In doing so, special attention should be paid to an appropriate linearization and iteration. Special caution is necessary here, since the treatment of the linearized GH-model in many textbooks presents merely an approximate solution. In contrast, the elegance of the TLS algorithm consists in the lack of the need for iteration.

Using two examples, it is shown that the STLS procedure by Felus and Schaffrin (2005) and the GTLS approach proposed by Akyilmaz (2007) for a planar similarity transformation do not minimize the chosen objective function that corresponds to the problem at hand. Based on a comparison of the numerical results as provided in the literature with the results from the iteratively linearized GH-model, neither the STLS approach from Felus and Schaffrin (2005) nor the GTLS approach from Akyilmaz (2007) will produce the optimal answer right away. While the GTLS technique favored by Akyilmaz (2007) must be dismissed, since it neither handles the weights properly nor can it maintain the very structure, the Cadzow step in the algorithm by Felus and Schaffrin (2005) can be modified in such a way that it generates the optimal TLS solution; for more details, see Schaffrin and Neitzel (2011).

Regarding the problem formulation treated in this contribution, it can be concluded that the method of least-squares covers the case of TLS as well, something that the experts obviously knew all along. They only distinguish between different models to which the method of least-squares is applied, or between different algorithms. Thus, by using an iteratively linearized GH-model as presented here, the correct solution of the adjustment problem can be achieved in a reasonable way, although alternative algorithms are of interest, too. A treatment of the affine transformation in 2D, for which in Schaffrin and Felus (2008) a very elegant "multivariate total least-squares" approach (MTLS) was developed, is possible with the solution strategy presented here as well. The basic decision has to be made as to whether the EIV-model ought to be treated as such or within a nonlinear GH-model. The user has the choice.

Acknowledgements  The author would like to acknowledge the support of a Feodor Lynen research fellowship from the Alexander von Humboldt Foundation (Germany), and the School of Earth Sciences at The Ohio State University (USA), with Prof. Schaffrin as his host.

References

Aitken AC (1935) On least squares and linear combinations of observations. Proc R Soc Edinburgh 55:42–48
Akyilmaz O (2007) Total least squares solution of coordinate transformation. Surv Rev 39(303):68–80
Benning W (2007) Statistics in geodesy, geoinformation and civil engineering (in German), 2nd edn. Herbert Wichmann Verlag, Heidelberg
Böck R (1961) Most general formulation of least-squares adjustment computations (in German). Z für Vermessungswesen 86:43–45 (see also pp 98–106)
Felus Y, Schaffrin B (2005) Performing similarity transformations using the errors-in-variables model. In: Proceedings of the ASPRS Meeting, Washington, DC, May 2005, on CD
Gauss CF (1809) Theoria motus corporum coelestium in sectionibus conicis solem ambientium. F. Perthes und I.H. Besser, Hamburg
Golub GH, van Loan C (1980) An analysis of the total least-squares problem. SIAM J Numer Anal 17(6):883–893
Helmert FR (1924) Adjustment computation with the least-squares method (in German), 3rd edn. Teubner-Verlag, Leipzig
Lawson CL, Hanson RJ (1974) Solving least-squares problems. Prentice-Hall, Englewood Cliffs
Lenzmann L, Lenzmann E (2004) Rigorous adjustment of the nonlinear Gauss–Helmert model (in German). Allgem Verm Nachr 111:68–73
Mikhail EM, Gracie G (1981) Analysis and adjustment of survey measurements. Van Nostrand Reinhold Company, New York
Neitzel F, Petrovic S (2008) Total least-squares (TLS) in the context of least-squares adjustment on the example of straight-line fitting (in German). Z für Vermessungswesen 133:141–148
Niemeier W (2008) Adjustment computations (in German), 2nd edn. Walter de Gruyter, New York
Petrovic S (2003) Parameter estimation for incomplete functional models in geodesy (in German). German Geodetic Commission, Publ No C-563, Munich
Pope AJ (1972) Some pitfalls to be avoided in the iterative adjustment of nonlinear problems. In: Proceedings of the 38th Annual Meeting of the American Society of Photogrammetry, Washington, DC, pp 449–477
Schaffrin B (2008) Correspondence: coordinate transformation. Surv Rev 40(307):102
Schaffrin B, Felus Y (2008) On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms. J Geod 82(6):373–383
Schaffrin B, Lee I, Felus Y, Choi Y (2006) Total least-squares (TLS) for geodetic straight-line and plane adjustment. Boll Geod Sci Affini 65(3):141–168
Schaffrin B, Neitzel F (2011) Modifying Cadzow's algorithm to generate the TLS solution for structured EIV-models (submitted)
Schaffrin B, Snow K (2010) Total least-squares regularization of Tykhonov type and an ancient racetrack in Corinth. Linear Algebra Appl 432(8):2061–2076
Schaffrin B, Wieser A (2008) On weighted total least-squares adjustment for linear regression. J Geod 82(7):415–421
Schaffrin B, Wieser A (2009) Empirical affine reference frame transformations by weighted multivariate TLS adjustment. In: Drewes H (ed) International Association of Geodesy Symposia, vol 134: Geodetic Reference Frames, IAG Symposium Munich, Germany, 9–14 October 2006, pp 213–218
Schwarz HR, Rutishauser H, Stiefel E (1968) Numerics of symmetric matrices (in German). B. G. Teubner, Stuttgart
van Huffel S, Vandewalle J (1991) The total least-squares problem: computational aspects and analysis. SIAM, Philadelphia
Wolf PR, Ghilani CD (1997) Adjustment computations: statistics and least squares in surveying and GIS. Wiley, New York
