SIAM J. SCI. STAT. COMPUT. Vol. 2, No. 3, September 1981
© 1981 Society for Industrial and Applied Mathematics 0196-5204/81/0203-0009 $01.00/0

ANALYSIS OF MEASUREMENTS BASED ON THE SINGULAR VALUE DECOMPOSITION*

RICHARD J. HANSON† AND MICHAEL J. NORRIS‡

Abstract. The problem of maintaining quality control of manufactured parts is considered. This involves matching points on the parts with corresponding points on a drawing. The difficulty in this process is that the measurements are often in different coordinate systems. Using the assumption that the relation between the two sets of coordinates is a certain rigid transformation, an explicit least squares solution is obtained. This solution requires the singular value decomposition of a related matrix. Other topics in the paper include an appropriate angular representation of the resulting orthogonal transformation matrix, and a computational algorithm for the various required quantities.

Key words. orthogonal matrices, singular value decomposition, orthogonal Procrustes problem, analysis of measurements, quality control, part matching, least squares

1. Introduction. The following problem arises in the analysis of measurements made on manufactured parts for the purpose of quality control. A part is to be machined in accordance with specifications on drawings. To check conformance of this part with the drawings, a set of distinguished points is taken from the drawings and a corresponding set of points is taken from the part. If discrepancies between the two sets of coordinates are all within tolerance, the part is acceptable. Otherwise the part is deemed unacceptable and is discarded. Discrepancies between the two sets of corresponding points can come from inaccuracies in manufacturing the part, or inaccuracy in measurements with respect to coordinate systems for the part or the drawing. Furthermore, it may be necessary to measure the set of points on the part in a different coordinate system than the points on the drawing. This introduces (possibly) large discrepancies between the points on the part and the points on the drawing. We will not discuss in detail the problem of where or how the distinguished set of measurements should be taken on both the drawing and the part. This is a difficult matter that involves judgment from the individual responsible for quality control of the manufacturing process. To salvage acceptable parts, one must account for the different coordinate systems in the two sets of measurements. It seems plausible that (to within error whose effect is negligible relative to tolerances) the coordinate axes of the set of drawings are orthogonal. Also, errors in any measurements made on the drawings are similarly negligible. One also expects that the measuring device, to within acceptable errors, yields measurements relative to a set of orthogonal coordinate axes. However, in addition to any intended change of coordinate system, the part undergoing the test may be improperly positioned relative to the coordinate system of the measuring device.
This could result from some designated point on the part not being set at its prescribed location or from the entire part being misoriented relative to the coordinate axes. To accommodate these possibilities the relationship between the coordinate

* Received by the editors August 18, 1980. This work was sponsored by the U.S. Department of Energy under contract DE-AC04-76DP00789.
† Numerical Mathematics Division, Sandia National Laboratories, Albuquerque, New Mexico 87185 (a U.S. Department of Energy facility).
‡ Applied Mathematics Department, Sandia National Laboratories, Albuquerque, New Mexico 87185 (a U.S. Department of Energy facility).

"


systems should have the general form of a rigid transformation:

drawing points = (orthogonal matrix) × (part points) + (fixed translation).
(This form of transformation also seems consistent with what one might do to fit two parts together without forcing. In fact, one could consider one of the parts as the "drawing." Then the transformation can be used to mate the two parts.) It is possible to minimize the corresponding discrepancy between drawing points and transforms of part points in the least squares sense. One can then base the accept or reject tests on the residuals of these equations. Perhaps the most natural choice for an approximation criterion would be to minimize the maximum residual. No direct solution to this "min-max" problem, with the simplicity of the solution for the least squares criterion, is known to the authors. Using least squares, it is possible to give a simple algorithm for computing the (unknown) orthogonal matrix and the fixed translation of this transformation. The expression for this orthogonal matrix involves quantities that derive from the singular value decomposition (SVD) [1, pp. 238-239] of a related matrix. This is fully developed in §2. Section 3 outlines the computations to be performed, and §4 discusses the numerical analysis of the process and describes a computer program. Section 5 presents an appropriate angular representation for the transformation matrix. Although the original problem was in two or three space dimensions (and it is difficult to contemplate the problem in higher dimensions), the treatment in §§2 and 3 is applicable to any finite-dimensional space.
2. Analysis. In the following development the vectors are real, n-dimensional, and are also considered as n × 1 matrices. For our derivation of the optimal transformation, we use the functional

(M, N) ≡ trace(M^T N).

For M = N, this dot (or inner) product defines the Frobenius or Euclidean norm on the linear vector space of m × n real matrices. (Here M^T = transpose of M, and trace(M^T N) = sum of the diagonal terms of M^T N.) The dot product has a number of easily proved properties, stated as (1)-(5). Those properties allow for various formal interchanges in terms involving dot products of matrices. These rules are used freely within this section.

(1)    (M, N) = (N, M),
(2)    (M, N) = (N^T, M^T),
(3)    (MN, PQ) = (NQ^T, M^T P).

In (3), the indicated products must be defined in order for this to make sense. Also,

(4)    (M, N + P) = (M, N) + (M, P),
(5)    (M, aN) = a(M, N) = (aM, N)

for scalar values of a. The set of drawing points and the set of part points are respectively denoted by {Y_i : 1 ≤ i ≤ K} and {X_i : 1 ≤ i ≤ K}. Since the coordinate systems for the drawing points


Y_i and part points X_i are related by a rigid transformation, we have

(6)    Y = AX + B.

In (6), the n × n matrix A is orthogonal, A^T A = I, while B is an n-vector.
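The interchange rules (1)-(5) are easy to confirm numerically. Below is a minimal sketch of ours, using NumPy (the helper name `dot` is our own), checking the rules and the Frobenius-norm identity on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)


def dot(M, N):
    """The matrix dot product (M, N) = trace(M^T N)."""
    return np.trace(M.T @ N)


M, N, P = (rng.standard_normal((4, 3)) for _ in range(3))
Q = rng.standard_normal((3, 3))
a = 2.5

assert np.isclose(dot(M, N), dot(N, M))                       # (1)
assert np.isclose(dot(M, N), dot(N.T, M.T))                   # (2)
assert np.isclose(dot(M @ Q, N @ Q), dot(Q @ Q.T, M.T @ N))   # (3), products defined
assert np.isclose(dot(M, N + P), dot(M, N) + dot(M, P))       # (4)
assert np.isclose(dot(M, a * N), a * dot(M, N))               # (5)
assert np.isclose(dot(M, M), np.linalg.norm(M, "fro") ** 2)   # M = N gives the Frobenius norm
```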

Other authors have considered problems and used methods that are closely related to ours. For example, Green [5], Bar-Itzhack [6] and especially Schönemann [7] have solved the problem of minimizing (AX̂ − Ŷ, AX̂ − Ŷ), where X̂ and Ŷ are N × K real matrices and A is an N × N orthogonal matrix. Here we solve the problem of minimizing (AX̂ + Be^T − Ŷ, AX̂ + Be^T − Ŷ), where X̂, Ŷ and A are as stated above, B is an N-vector, and determinant of A = +1. (The vector e = (1, ..., 1)^T.) The use of the singular value decomposition of square matrices, as utilized in this paper, simplifies derivations of the formulas for A and the required computations. The methods we present also apply to the above-mentioned related problems where A is only required to be orthogonal. To obtain the best fit in the least squares sense, we want to minimize

f(A, B) = Σ_{i=1}^{K} (B − (Y_i − AX_i), B − (Y_i − AX_i)).

This expression can be simplified by eliminating B as an unknown. To do this we use Lemma 1.

LEMMA 1. If {R_i : 1 ≤ i ≤ K} is a set of n-vectors, then φ(R) = Σ_{i=1}^{K} (R − R_i, R − R_i) is minimized if and only if R = R̄ = (Σ_{i=1}^{K} R_i)/K.

This lemma is easily proved by noting that, for all R,

φ(R) = Σ_{i=1}^{K} [(R − R̄, R − R̄) + (R_i − R̄, R_i − R̄)].
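Lemma 1 is the familiar fact that the centroid minimizes the sum of squared distances. A quick numerical confirmation of the lemma and of the identity used in its proof (a sketch of ours, using NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
K, n = 10, 3
R_i = rng.standard_normal((K, n))        # the K given n-vectors


def phi(R):
    """phi(R) = sum_i (R - R_i, R - R_i)."""
    return float(np.sum((R - R_i) ** 2))


R_bar = R_i.mean(axis=0)                  # R-bar = (sum_i R_i) / K

for _ in range(100):
    d = rng.standard_normal(n)
    # phi(R-bar + d) = phi(R-bar) + K (d, d): the cross terms vanish,
    # so the centroid is the unique minimizer.
    assert np.isclose(phi(R_bar + d), phi(R_bar) + K * (d @ d))
    assert phi(R_bar + d) >= phi(R_bar)
```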
For any specific choice of A, including the optimal value of A which we do not know yet, Lemma 1 shows that f(A, B) is minimized with

(7)    B = B̄ = Ȳ − AX̄,

where

Ȳ = (Σ_{i=1}^{K} Y_i)/K  and  X̄ = (Σ_{i=1}^{K} X_i)/K.

Substituting this expression for B̄ into f(A, B) and defining the two sets of points (each with mean zero)

(8)    x_i = X_i − X̄,    y_i = Y_i − Ȳ,    i = 1, ..., K,

we obtain

f(A, B̄) = Σ_{i=1}^{K} (Ax_i − y_i, Ax_i − y_i).

We want to choose an orthogonal n × n matrix A (possibly with the additional restriction det(A) = 1) to minimize f(A, B̄). Using (1), (4) and (5),
f(A, B̄) = Σ_{i=1}^{K} [(Ax_i, Ax_i) − 2(y_i, Ax_i) + (y_i, y_i)].


An application of (3)-(5), together with the fact that A is an orthogonal matrix, shows that
f(A, B̄) = −2 Σ_{i=1}^{K} (y_i, Ax_i) + Σ_{i=1}^{K} [(x_i, x_i) + (y_i, y_i)].

Finding A to minimize f(A, B̄) requires finding an orthogonal matrix that maximizes the related functional
g(A) = Σ_{i=1}^{K} (y_i, Ax_i).

Again, (3) and (4) show that, with C = Σ_{i=1}^{K} y_i x_i^T,

(9)    g(A) = Σ_{i=1}^{K} (y_i x_i^T, A) = (C, A).

It is worthwhile to point out that the inclusion of weights in the definition of f(A, B) adds no essential complication or change to the development given here. In fact, if

f(A, B) = Σ_{i=1}^{K} γ_i^2 (B − (Y_i − AX_i), B − (Y_i − AX_i)),

then an obvious modification of Lemma 1 shows that

B = B̄ = Ȳ − AX̄,

where

Ȳ = Σ_{i=1}^{K} γ_i^2 Y_i / Σ_{i=1}^{K} γ_i^2  and  X̄ = Σ_{i=1}^{K} γ_i^2 X_i / Σ_{i=1}^{K} γ_i^2.

Equation (8) is still valid, and minimizing

f(A, B̄) = Σ_{i=1}^{K} γ_i^2 (Ax_i − y_i, Ax_i − y_i)

is equivalent to maximizing

g(A) = (Σ_{i=1}^{K} γ_i^2 y_i x_i^T, A) = (C, A).

Our point here is that the use of weights merely changes the computation of X̄, Ȳ and C in a trivial way. The arguments that follow remain unchanged. To aid in the choice of A that maximizes g(A), we introduce the singular value decomposition, SVD, of the n × n matrix C,

(10)    C = USV^T.

In (10) the n × n matrices U and V are orthogonal. Further, the n × n diagonal matrix S = diag(s_1, ..., s_n) has the singular values of C as its diagonal terms, with s_1 ≥ s_2 ≥ ... ≥ s_n ≥ 0. Substituting the SVD of C from (10) into g(A) of (9) results in

g(A) = (USV^T, A) = (S, U^T A V) = (S, W),

where W = U^T A V = {w_ij} is an n × n orthogonal matrix.
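The chain g(A) = (C, A) = (S, W) can be verified numerically. In the sketch below (ours), a random orthogonal A is drawn via a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
C = rng.standard_normal((n, n))
U, s, Vt = np.linalg.svd(C)                       # C = U S V^T, s_1 >= ... >= s_n >= 0

A, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random orthogonal matrix
W = U.T @ A @ Vt.T                                # W = U^T A V, again orthogonal

g = np.trace(C.T @ A)                             # g(A) = (C, A)
assert np.isclose(g, s @ np.diag(W))              # (S, W) = sum_i s_i w_ii
assert g <= s.sum() + 1e-12                       # the Case I bound below
assert np.isclose(np.trace(C.T @ (U @ Vt)), s.sum())  # equality at W = I, i.e. A = U V^T
```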


We now consider two cases for the optimal choice of A. In the first case A may be any orthogonal matrix, whereas in the second case A must also have determinant equal to the value 1.

Case I. Determinant of A has either sign. Since W is an n × n orthogonal matrix, each diagonal term satisfies |w_ii| ≤ 1, and thus

(S, W) = Σ_{i=1}^{n} s_i w_ii ≤ Σ_{i=1}^{n} s_i.

This inequality becomes an equality if and only if w_ii = 1 for each s_i > 0. If r is the number of nonzero singular values, then g(A) = (S, W) is maximized precisely when

W = diag(I_r, Z).

Here Z is an arbitrary (n − r) × (n − r) orthogonal matrix. In case r = n the matrix Z is absent, and then W = I. One is always free to choose W = I for all values of r. This is effectively the result obtained by Schönemann [7].

Case II. Determinant of A = +1. The required condition is equivalent to det(W) = det(U^T V). If either det(U^T V) = 1 or s_n = 0, then the maximum of (S, W) is Σ_{i=1}^{n} s_i. An optimal choice of W is the same as in Case I with Z essentially arbitrary except for the requirement det(Z) = det(U^T V). We will need the following lemma for the remaining development.

LEMMA 2. Suppose that W is an n × n real orthogonal matrix with det(W) = −1.
Then
(a) trace(W) ≡ tr(W) ≤ n − 2;
(b) tr(W) = n − 2 if and only if W = P diag(1, ..., 1, −1) P^T for some n × n orthogonal matrix P.

Proof. Partition the eigenvalues of W into three groups: (i) λ_1, λ̄_1, ..., λ_p, λ̄_p (2p complex roots); (ii) 1, ..., 1 (q roots, real and positive); (iii) −1, ..., −1 (r roots, real and negative). Since

det(W) = |λ_1|^2 ··· |λ_p|^2 (−1)^r = (−1)^r = −1,

we see that r is an odd integer with r ≥ 1. Using Theorem B.9 of [2, p. 285], we note that there is a real orthogonal matrix P such that P^T W P is the direct sum of the p individual 2 × 2 matrices

[ cos(θ_i)  −sin(θ_i) ]
[ sin(θ_i)   cos(θ_i) ]

and the scalar matrices I_q followed by −I_r. Each angle θ_i is a principal argument of the complex eigenvalue λ_i of W, 1 ≤ i ≤ p. Now computing tr(W) = tr(PP^T W) = tr(P^T WP) shows that

tr(W) = 2 Σ_{i=1}^{p} cos(θ_i) + q − r.

The facts that r ≥ 1, cos(θ_i) ≤ 1, and 2p + q + r = n then show that

tr(W) ≤ n − 2r ≤ n − 2.

This completes the proof of part (a) of the conclusion. The "if" clause of part (b) is


obviously true. To prove the "only if" clause of part (b), suppose that tr(W) = n − 2. This can occur only if r = 1 and

2 Σ_{i=1}^{p} cos(θ_i) + q = n − 1.

The last of these equations says that p = 0 and q = n − 1, so

P^T WP = diag(1, ..., 1, −1)

and the condition on W follows. This completes the proof of part (b) of the lemma.

Suppose then that det(U^T V) = −1 and s_n > 0. In this case we show that the maximum of (S, W) is Σ_{i=1}^{n−1} s_i − s_n and that W is essentially the matrix diag(1, ..., 1, −1). To this end we write the inequalities

Σ_{i=1}^{n−1} s_i − s_n − (S, W) = Σ_{i=1}^{n−1} (1 − w_ii) s_i − (1 + w_nn) s_n
                               ≥ Σ_{i=1}^{n−1} (1 − w_ii) s_n − (1 + w_nn) s_n
                               = [(n − 2) − tr(W)] s_n
                               ≥ 0.

The first inequality here follows from the elementary fact that no entry of W exceeds one, and that the nonnegative singular values {s_i} are ordered. The second inequality uses Lemma 2, part (a). Suppose now that equality holds throughout these relations. If j is the smallest index such that s_j = s_n, then equality holds throughout if and only if w_ii = 1 for i < j and tr(W) = n − 2. Thus W maximizes (S, W) if and only if W = diag(I_{j−1}, W_{n−j+1}), where W_{n−j+1} is orthogonal, det(W_{n−j+1}) = −1, and tr(W_{n−j+1}) = (n − j + 1) − 2 = n − j − 1. By Lemma 2, part (b), W maximizes (S, W) if and only if

W_{n−j+1} = P diag(1, ..., 1, −1) P^T

for some orthogonal matrix P. A particular choice of W that always results in the maximum of g(A) for Case II is

W = diag(1, ..., 1, det(U^T V)).
With this particular choice we can evaluate the orthogonal matrix

A = UWV^T

that maximizes g(A) or, equivalently, minimizes f(A, B̄). The phenomenon of "ill-conditioning" is present in the computation of A in a subtle form. The dimensions of the arbitrary matrices Z and P of Cases I and II are related to the multiplicity of the smallest singular value. If s_n is the only zero singular value, then the extra condition det(A) = 1 fixes the previously arbitrary scalar orthogonal matrix Z. Multiple zero singular values may occur when a poor choice of coordinate pairs is chosen on the part and drawing. This in turn may lead to discontinuous dependence of the matrix A on the data, in the sense that small changes in the data yield large (but bounded) changes in the elements of A.
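The role of the determinant condition can be seen on a small reflective data set. In this sketch of ours the drawing points are an exact mirror image of the part points, so the Case I optimum A = UV^T is a reflection, while the Case II choice W = diag(1, 1, det(U^T V)) forces a proper rotation:

```python
import numpy as np

rng = np.random.default_rng(3)
K, n = 6, 3
x = rng.standard_normal((K, n))                  # centered part points (rows x_i^T)
D = np.diag([1.0, 1.0, -1.0])
y = x @ D                                        # drawing points: a mirror image, y_i = D x_i

C = y.T @ x                                      # C = sum_i y_i x_i^T
U, s, Vt = np.linalg.svd(C)

A1 = U @ Vt                                      # Case I optimum (any orthogonal A)
sigma = np.linalg.det(U) * np.linalg.det(Vt)     # = det(U^T V) = +-1
A2 = U @ np.diag([1.0, 1.0, sigma]) @ Vt         # Case II optimum, det(A2) = +1

assert np.allclose(A1, D)                        # the exact fit is the (improper) reflection
assert np.isclose(np.linalg.det(A1), -1.0)
assert np.isclose(np.linalg.det(A2), 1.0)        # the best proper rotation instead
```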


A similar type of discontinuity could occur in Case II when s_n > 0 is a multiple singular value. These remarks should be considered with the disclaimer in the Introduction regarding the choice of a distinguished set of points. First, the pair {A, B} may reduce the size of the residuals and hence be of value even when a large variation in A may result from small variations in the data. Next, it is possible to evaluate the choice of distinguished points under certain circumstances that we now describe. Suppose the part is expected to match the drawing rather closely, so that x_k is close to y_k. One can take the matrix G based only on drawing points,

G = Σ_{k=1}^{K} y_k y_k^T,

and obtain the singular values (= eigenvalues) of G. Let δ be the minimum distance between two singular values of G. Now C = G + H with H = Σ_{k=1}^{K} y_k (x_k − y_k)^T. If ||H|| < δ/2, then the singular values of C differ from those of G by less than δ/2 and also are distinct, [1, p. 25]. (Here ||·|| = largest singular value.) Should the choice of a distinguished set be a poor one on the basis of the suggested evaluation, it might be possible to change to a completely different distinguished set or to significantly modify the matrix C by adding points to the distinguished set. Such possibilities depend of course on the nature of the part and its intended usage.

Frequently a drawing for a part is in a different scale than the part itself. This means that the set of part points {X_i} may be derived from a set of actual measurements on the part {X̂_i} by a scale factor, X_i = α̂ X̂_i, i = 1, ..., K. To provide a check for consistency of these measurements, one can slightly extend the type of transformation allowed, namely, to a rigid transformation combined with a change of scale, or

Y = αAX̂ + B.

Here α ≥ 0 is a scale factor to be determined, A is orthogonal, and B is an n-vector. The optimal value of α does not affect the optimal choice for A. We will not prove this here, but one chooses A as outlined above, using the data {X̂_i} instead of {X_i}. The optimal choice for α can be shown to be based on the quantity ĝ(A) = Σ_{i=1}^{K} (Ax̂_i, y_i) as follows:

α = ĝ(A) / Σ_{i=1}^{K} (x̂_i, x̂_i)    if ĝ(A) > 0,
α = 0                                  otherwise.

Now, for example, one can examine the size of both sets of residuals, 1 ≤ i ≤ K,

r_i = Y_i − AX_i − B,    B = Ȳ − AX̄,

and

r̂_i = Y_i − αAX̂_i − B̂,    B̂ = Ȳ − αA(mean of the X̂_i).

This consistency check on the scaling of measurements can catch the following type of blunder: a part is manufactured half as large as the drawing scale when it should have been twice as large as the drawing scale. Incidentally, it is not necessarily true that the residuals using α will be smaller than the residuals with α = 1. What is true, of course, is that the sum of squares of residuals is not larger with the optimal α than with α = 1.
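The closing claim, that the sum of squares with the optimal α never exceeds that for α = 1, holds for any fixed orthogonal A. A numerical sketch of ours with synthetic half-scale data:

```python
import numpy as np

rng = np.random.default_rng(5)
K, n = 7, 3
x = rng.standard_normal((K, n))                      # centered measurements x-hat_i
A, _ = np.linalg.qr(rng.standard_normal((n, n)))     # some fixed orthogonal A
y = 0.5 * x @ A.T + 0.01 * rng.standard_normal((K, n))
y -= y.mean(axis=0)                                  # drawing points at half scale, centered

g_hat = np.sum(y * (x @ A.T))                        # g-hat(A) = sum_i (A x_i, y_i)
alpha = max(g_hat, 0.0) / np.sum(x * x)              # optimal scale (0 if g-hat <= 0)

ss_opt = np.sum((alpha * x @ A.T - y) ** 2)          # residual sum of squares, optimal alpha
ss_one = np.sum((x @ A.T - y) ** 2)                  # the same with alpha = 1
assert ss_opt <= ss_one + 1e-12                      # never larger than with alpha = 1
```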


Monitoring the value of α can provide information of potential use for process control. If α is usually significantly above (below) unity, then there is probably a bias in the process. If, in addition, the residuals using the optimum value of α are significantly smaller than those for α = 1, then there is possibly a removable source of error in the process.
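The evaluation of a distinguished point set described earlier (comparing the gap δ between singular values of G against the size of H) is also straightforward to code. A sketch of ours with synthetic data (||·|| is the spectral norm, as in the text):

```python
import numpy as np

rng = np.random.default_rng(4)
K, n = 8, 3
y = rng.standard_normal((K, n))
y -= y.mean(axis=0)                              # centered drawing points y_k
x = y + 0.01 * rng.standard_normal((K, n))       # part points close to the drawing

G = y.T @ y                                      # G = sum_k y_k y_k^T, symmetric PSD
H = y.T @ (x - y)                                # H = sum_k y_k (x_k - y_k)^T
C = y.T @ x                                      # C = sum_k y_k x_k^T
assert np.allclose(C, G + H)                     # C = G + H

delta = np.min(np.diff(np.sort(np.linalg.eigvalsh(G))))  # least gap of G's singular values
H_norm = np.linalg.norm(H, 2)                    # largest singular value of H

# If ||H|| < delta/2, the singular values of C remain distinct, so A depends
# continuously on the data; otherwise the point set may be a poor choice.
well_separated = H_norm < delta / 2
```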
3. Computational steps. The given data are the set {Y_i} of drawing points and the set {X_i} of part points. An algorithm is outlined for determining the orthogonal matrix A, the n-vector B, and the optimum scale factor α.

Step 1. Compute the means Ȳ = (Σ_{i=1}^{K} Y_i)/K, X̄ = (Σ_{i=1}^{K} X_i)/K.
Step 2. Compute the differences y_i = Y_i − Ȳ, x_i = X_i − X̄, i = 1, ..., K.
Step 3. Compute the matrix C = Σ_{i=1}^{K} y_i x_i^T.
Step 4. Compute the SVD of C, C = USV^T.
Step 5. Compute the product U^T V.
Step 6. Compute the value σ = det(U^T V).
Step 7. Compute the orthogonal matrix A = U diag(1, ..., 1, σ) V^T.
Step 8. Compute the points Ax_i and y_i, i = 1, ..., K.
Step 9. Compute the scalar ĝ = Σ_i (Ax_i, y_i).
Step 10. Compute the optimum scale factor α: α = ĝ / Σ_i (x_i, x_i) if ĝ > 0; α = 0 otherwise.
Step 11. Compute the translation vector B = Ȳ − AX̄.
Step 12. Compute the residuals r_i = y_i − Ax_i, i = 1, ..., K (= Y_i − AX_i − B).


Following Step 12, the components of the residual vector r_i are checked for size. If they are all smaller in magnitude than a specified tolerance, the part is accepted. Otherwise the part is rejected. The optimum scale factor can be used to generate another set of residuals, r̂_i = Y_i − αAX̂_i − B̂, where B̂ = Ȳ − αA(mean of the X̂_i). These components should usually be smaller in magnitude than the specified tolerance for acceptance of the part. The (rare) situation may occur where some r̂_i are larger than the corresponding r_i. In the case where the r_i are within tolerances but the r̂_i are not, it seems that the part should still be accepted. Our justification for this statement is that the user of this analysis (most likely) has the two sets of points in the same scale, yielding the residuals r_i. Thus some rigid transformation of one set of points to the other has been shown to exist with sufficiently small tolerances. This is, we think, the important point behind this type of analysis in the first place.

4. Numerical analysis. In §3 the main steps of an algorithm are given that allow one to compute the orthogonal matrix A, the vector B, and the scalar α of the transformation Y = αAX + B. The computation of A, B and the residuals Y_I − (AX_I + B) is done in the provided subprogram MAP(·) for the special cases of n = 2 or n = 3. Each of these steps is straightforward with the exception of Step 4. In that step the SVD of an n × n matrix C is required. For general values of n, one can use the Golub-Reinsch algorithm as implemented in subroutine SVDRS(·), [1, p. 250]. However, as we noted in the Introduction, this particular problem is most likely to occur for values of n ≤ 3. For this reason a special and succinct version of the Golub-Reinsch algorithm was written for a 3 × 3 matrix. This subroutine, SVD3B3(·), is called by MAP(·) and implements that algorithm in this special case. By avoiding the most expensive loops and using plane rotations computed by SROTG(·), [3, p. 314], for all the elimination steps, a robust and efficient code was obtained. The exclusive use of SROTG(·) was made because of the few elimination steps required for a 3 × 3 matrix. This version of the SVD for 3 × 3 matrices is at least a factor of five times faster than SVDRS(·) using the CDC 6600 computer. The package of subroutines and a sample test program are available directly from the authors. The primary subroutine of the package is MAP(·). This subroutine accepts data triples {X_I} and {Y_I} as input and returns the matrix A, the vector B, and the residuals R_I = Y_I − AX_I − B as output. There are approximately 796 card images in this package. The program units are all written in highly portable Fortran. We believe that it can be used on most computers that support Fortran, with no changes required. It is anticipated that the primary interest in this package is with MAP(·). The usage instructions for this subroutine are as follows. The user must have the following declaration statement in a program that calls

MAP (.):
DIMENSION X(3, K), Y(3, K), R (3, K), A(3, 3), B(3).
Assign values to K, X(*, *), and Y(*, *) and execute the subroutine call

CALL MAP (K, X, Y, R, A, B).
The parameters in the calling sequence fall into two classes: Input and Output.


Input
K, X(3, K), Y(3, K)   The integer K contains the number of triples of points, {X_I} and {Y_I}. The arrays X(3, K) and Y(3, K) contain the X_I and Y_I, respectively. The first coordinate of X_I is stored in X(1, I); the second is stored in X(2, I); the third is stored in X(3, I). The same convention holds for Y_I. (Note that for n = 2 variables, the third coordinate of both the X_I and Y_I should be fixed at the same value, say zero.)

Output
R(3, K)   The residuals R_I = Y_I − AX_I − B are returned in the array R(3, K). The coordinates of R_I are stored in the same convention as the coordinates of the X_I and Y_I.
A(3, 3), B(3)   The optimal orthogonal transformation matrix, A, and translation vector, B, are returned in the arrays A(3, 3) and B(3). The transformation has the form Y = AX + B. The elements a_IJ of A and the entries b_I of B are returned in A(I, J) and B(I), respectively.

As an illustration of the process we used MAP(·) to compute A and B from a set of data for both {X_I} and {Y_I}.

TABLE 1. An example, for n = 3, computed using MAP(·): the part points X_I (components ±1.0), the measured drawing points Y_I (components near ±1, e.g. 0.99962512), the scaled residuals 1000 × (R_I = Y_I − AX_I − B), and the computed A (close to the identity, with off-diagonal entries of order 10^-4) and B.

5. Angular representation of the orthogonal matrix A for n = 3. Occasionally one needs to represent a 3 × 3 orthogonal matrix as a product of plane rotations, or by the angles of these rotations. In a sense, the 3 × 3 = 9 pieces of data can be represented with just three pieces of data. (Often machines are manufactured so that achieving a rigid coordinate transformation can only be done by rotating about one coordinate axis, then another, etc.) It seems to the authors that there are compelling reasons to suggest that the use of Euler angles [4, pp. 259-270] is the wrong approach for representing a broad class of orthogonal matrices. Briefly, the Euler angles are


derived from the three plane rotation matrices that diagonalize the orthogonal matrix A, determinant of A = 1, as in Fig. 1. Here c_i = cos(θ_i) and s_i = sin(θ_i), i = 1, 2, 3. First, c_1 and s_1 are chosen to eliminate the entry at the intersection of row 1 and column 3 of A. This rotation is applied to A to form the matrix product R_1 A. Next, c_2 and s_2 are chosen to eliminate row 2, column 3 of R_1 A. This rotation is applied to R_1 A to form the product R_2 R_1 A. Finally, c_3 and s_3 are chosen to eliminate row 1, column 2 of R_2 R_1 A. The product R_3 R_2 R_1 A is the identity matrix by virtue of the fact that the R_i and A are orthogonal, and determinant of A = 1. The central problem with this representation for A in our application is that the angles θ_i may not depend continuously on the data. This is true because typically A = identity matrix + small terms. These small terms uniquely determine the θ_i, but an arbitrarily small change to these terms can make a change in the θ_i of as much as 2π.

R_3 R_2 R_1 A = I.

FIG. 1. Deriving Euler angles from an orthogonal matrix.

P_3 P_2 P_1 A = I.

FIG. 2. Deriving a stable set of angles from an orthogonal matrix.

Our suggestion for a set of angle coordinates for A in our application is based on triangularizing A as shown in Fig. 2. The first plane rotation is chosen to eliminate row 2, column 1. This plane rotation is applied to A to form P_1 A. Next, the second plane rotation is determined that eliminates row 3, column 1 of P_1 A. This is applied to P_1 A to form P_2 P_1 A. Finally, the third plane rotation is applied to eliminate row 3, column 2 of P_2 P_1 A. This yields the (continuous) representation A = P_1^T P_2^T P_3^T. The principal angles θ_i satisfying c_i = cos(θ_i), s_i = sin(θ_i), i = 1, 2, 3, are determined using the two-argument arctangent function. This elementary function is available in most Fortran systems as ATAN2(·,·). One can also use the idea of Stewart [3, p. 314] to essentially store just one number for the pair (c_i, s_i), i = 1, 2, 3. This is equivalent to storing the angles θ_i, but it avoids computing the arctangent, sine and cosine functions to reconstruct the rotation matrices.
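The triangularization scheme of Fig. 2 can be sketched directly. The NumPy rendering below is our own (the paper's code uses SROTG(·) and ATAN2): three plane rotations zero the (2,1), (3,1) and (3,2) entries in turn, and A is rebuilt as P_1^T P_2^T P_3^T.

```python
import numpy as np

PLANES = [(0, 1), (0, 2), (1, 2)]          # zero entries (2,1), (3,1), (3,2) in turn


def plane_rotation(i, j, theta):
    """Rotation in the (i, j) coordinate plane."""
    c, s = np.cos(theta), np.sin(theta)
    P = np.eye(3)
    P[i, i] = P[j, j] = c
    P[i, j], P[j, i] = s, -s
    return P


def angles_from_rotation(A):
    """Angles theta_1..3 with P3 P2 P1 A = I, via the two-argument arctangent."""
    T = A.copy()
    thetas = []
    for i, j in PLANES:
        th = np.arctan2(T[j, i], T[i, i])  # chosen to zero entry (j, i) of T
        T = plane_rotation(i, j, th) @ T
        thetas.append(th)
    return thetas


def rotation_from_angles(thetas):
    """Reassemble A = P1^T P2^T P3^T from the three angles."""
    P1, P2, P3 = (plane_rotation(i, j, th) for (i, j), th in zip(PLANES, thetas))
    return P1.T @ P2.T @ P3.T


# Round trip on a random proper rotation (det = +1):
rng = np.random.default_rng(7)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]
assert np.allclose(rotation_from_angles(angles_from_rotation(Q)), Q)
```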
REFERENCES

[1] C. L. LAWSON AND R. J. HANSON, Solving Least Squares Problems, Prentice-Hall, Englewood Cliffs, NJ, 1974.
[2] G. W. STEWART, Introduction to Matrix Computations, Academic Press, New York-London, 1973.
[3] C. L. LAWSON, R. J. HANSON, F. T. KROGH AND D. R. KINCAID, Basic linear algebra subprograms for Fortran usage, ACM Trans. Math. Software, 5 (1979), pp. 308-323.
[4] I. N. BRONSHTEIN AND K. A. SEMENDYAYEV, A Guide-book to Mathematics for Technologists and Engineers, Pergamon Press, New York, 1964.
[5] B. F. GREEN, The orthogonal approximation of an oblique structure in factor analysis, Psychometrika, 17 (1952), pp. 429-440.
[6] I. Y. BAR-ITZHACK, Iterative optimal orthogonalization of the strapdown matrix, IEEE Trans. Aero. and Electronic Systems, 11 (1975), pp. 30-37.
[7] P. H. SCHÖNEMANN, A generalized solution of the orthogonal Procrustes problem, Psychometrika, 31 (1966), pp. 1-10.