By K. G. JÖRESKOG
Educational Testing Service, Princeton
and D. N. LAWLEY
Department of Statistics, University of Edinburgh
1. INTRODUCTION
In the field of psychology, factor analysis is most often employed to study
the measurements that arise from the use of a battery of tests. It will be
convenient to discuss factor analysis with particular reference to this type of
data, though most of the remarks in this paper are relevant to a much wider
context. First of all, a distinction is made between exploratory and confirmatory
factor analysis. In confirmatory analysis the experimenter has already obtained
a certain amount of knowledge about the variates measured and is therefore in a
position to formulate a hypothesis that specifies the factors on which the variates
depend. Factor analysis may then be used to test this hypothesis. In exploratory
analysis, on the other hand, no such knowledge is available, and the main
object is to find a simple but meaningful interpretation of the experimental
results. An exploratory analysis is usually performed in two steps. The first
step is to decide how many factors are needed to account adequately for the data
and to estimate the loadings on the factors, which are initially defined in a
somewhat arbitrary manner. A second step consists of a rotation or a linear
transformation of these factors into others which can be given a more meaningful
interpretation.
In practice, the above distinction is not always clear-cut. Many investigations
are to some extent both exploratory and confirmatory, since they involve
some variates of known and other variates of unknown factorial composition.
The former should be chosen with great care in order that as much information
as possible about the latter may be extracted. It is highly desirable that a
hypothesis that has been suggested by mainly exploratory procedures should
subsequently be confirmed, or disproved, by obtaining new data and subjecting
these to more rigorous statistical techniques.
2. PRELIMINARY CONSIDERATIONS
The basic model in factor analysis is

x = Λf + e,  (1)

where x is a column vector of p variates, f is a vector of k common factors, e is a
vector of p residuals, which represent the combined effect of specific factors and
random error, and Λ = [λ_ir] is a p × k matrix of factor loadings.

The residuals e are assumed to be independent of each other and of the
common factors f. It is also assumed that the elements of f, e and x are all
normally distributed with zero means. The dispersion or covariance matrices
of f, e and x are denoted respectively by Φ, Ψ and Σ. The matrix Ψ is diagonal
with elements ψ_ii (i = 1, ..., p), which are termed either residual or unique
variances. It is further assumed, without loss of generality, that the common
factors have unit variances, so that the diagonal elements of Φ are unities. If,
in addition, for k > 1, the common factors are orthogonal or uncorrelated, then
the non-diagonal elements of Φ are zeros and thus Φ becomes the unit matrix
of order k. In view of eqn. (1) and of the assumptions that have been made,
Σ is given in terms of the other matrices by the equation

Σ = ΛΦΛ′ + Ψ.  (2)

This relationship can be tested statistically, unlike eqn. (1), which cannot be
verified directly.
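As an illustrative aside (our addition, not part of the original paper), the covariance structure of eqn. (2) can be checked by simulating the model of eqn. (1); the particular loadings, factor correlations and unique variances below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

p, k = 6, 2
Lam = rng.uniform(0.3, 0.8, size=(p, k))      # factor loadings (arbitrary)
Phi = np.array([[1.0, 0.4], [0.4, 1.0]])      # factor covariances, unit variances
psi = rng.uniform(0.2, 0.5, size=p)           # unique (residual) variances

# implied covariance matrix: Sigma = Lam Phi Lam' + Psi
Sigma = Lam @ Phi @ Lam.T + np.diag(psi)

# simulate x = Lam f + e and compare the sample covariance with Sigma
n = 200_000
f = rng.multivariate_normal(np.zeros(k), Phi, size=n)
e = rng.normal(0.0, np.sqrt(psi), size=(n, p))
x = f @ Lam.T + e
S = np.cov(x, rowvar=False)
```

With a large sample the entries of S agree with Σ to within sampling error, which is the sense in which eqn. (2), unlike eqn. (1), is open to statistical test.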
Suppose that a random sample of n + 1 sets of observations of x is obtained
and that S is the matrix whose elements are the usual unbiased sample estimates
of the elements of Σ. In view of the assumptions of normality, the elements of
S follow a Wishart distribution with n degrees of freedom. This means that
the log-likelihood function L corresponding to the information provided by S
is, neglecting a function of the observations, given by

L = −½n{log_e |Σ| + tr(SΣ⁻¹)}.

In order to obtain efficient estimates (for large n) of all unknown parameters,
L is maximized with respect to these parameters. In practice, it is slightly more
convenient to minimize the function

F(Λ, Φ, Ψ) = log_e |Σ| + tr(SΣ⁻¹) − log_e |S| − p.  (3)

Minimizing F is clearly equivalent to maximizing L, and the minimum value of
F multiplied by a constant is later used as a ' goodness of fit ' χ² criterion.
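The function F of eqn. (3) is straightforward to evaluate numerically. The following sketch (ours, in Python with numpy, not the authors' FORTRAN programs; the helper name `fit_function` is an assumption) makes the defining property visible, namely that F vanishes when Σ = S and is positive otherwise:

```python
import numpy as np

def fit_function(S, Sigma):
    """F = log|Sigma| + tr(S Sigma^{-1}) - log|S| - p  (eqn. 3)."""
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p
```

Because F ≥ 0 with equality only at Σ = S, its minimum over the model parameters measures the discrepancy between the fitted and observed covariance matrices.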
New Methods in Maximum Likelihood Factor Analysis
When k > 1, and there is more than one common factor, it is necessary to
remove an element of indeterminacy in the basic model before the procedure for
minimizing F can be applied. This indeterminacy arises from the fact that a
non-singular linear transformation of the common factors changes Λ, and in
general also Φ, but leaves Σ, and therefore also the function F, unaltered.
Hence, in order to obtain a unique set of parameters and a corresponding unique
set of estimates, some additional restrictions must be imposed. These have the
effect of selecting a particular set of factors and thus of defining the parameters
uniquely.
In each iteration the method of Fletcher & Powell requires the calculation
of the function value and also the partial derivatives ∂f/∂ψ_ii. The latter
are easily found, since they are in fact the diagonal elements of the matrix

Ψ⁻¹(ΛΛ′ + Ψ − S)Ψ⁻¹.

The value of f is computed as a function of the latent roots of Ψ^(−½)SΨ^(−½). In
addition, the calculation of a positive definite symmetric matrix E of order p
is required. As the iterative procedure converges, the sequence of E matrices
converges to the inverse of the matrix of second-order partial derivatives
∂²f/∂ψ_ii ∂ψ_jj, evaluated at the minimum.
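For the exploratory case (Φ = I), the computation described above can be sketched as follows. This is our reconstruction of the standard eigenvalue formulation of the concentrated function f(Ψ), in the spirit of Jöreskog (1967); the function name, interface, and the exact form of f are assumptions rather than a transcription of the authors' program:

```python
import numpy as np

def concentrated_f(psi, S, k):
    """Value and gradient of f(psi), the fit function minimized over the
    loadings Lambda for fixed unique variances psi (exploratory case, Phi = I)."""
    p = len(psi)
    d = 1.0 / np.sqrt(psi)
    S_star = d[:, None] * S * d[None, :]          # Psi^{-1/2} S Psi^{-1/2}
    gamma, omega = np.linalg.eigh(S_star)         # latent roots, ascending order
    # f depends only on the p - k smallest latent roots
    small = gamma[: p - k]
    f = np.sum(small - np.log(small) - 1.0)
    # conditional ML loadings from the k largest roots
    Lam = (np.sqrt(psi)[:, None] * omega[:, p - k:]
           * np.sqrt(np.maximum(gamma[p - k:] - 1.0, 0.0)))
    # gradient: diagonal of Psi^{-1}(Lam Lam' + Psi - S)Psi^{-1}
    Sigma = Lam @ Lam.T + np.diag(psi)
    grad = np.diag(Sigma - S) / psi ** 2
    return f, grad, Lam
```

At the true parameter values with S equal to the population Σ, both f and its gradient vanish, which is a convenient check on an implementation.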
The number of iterations required is considerably reduced by the provision
of a good initial estimate of E. A method of obtaining this has been given by
Lawley (1967). By standard estimation theory, the final matrix E multiplied
by 2/n provides estimates of the sampling variances and covariances of the
estimates ψ̂_ii. In the same paper Lawley has shown how the variances and
covariances of the elements of Λ may also be found.

Various stopping rules for the above procedure could be adopted. In
practice, it seems best to stop when the value of each of the first-order partial
derivatives is less than a small prescribed value.
When the maximum-likelihood estimates of Λ and Ψ have been found,
and f has been minimized, it is possible to test the hypothesis represented by
eqn. (4). The minimum value of f is multiplied by the factor

n − (2p + 5)/6 − 2k/3,

and the result is treated as a χ² variate for which the number of degrees of
freedom is

½{(p − k)² − (p + k)}.

This number is positive provided that inequality (5) is satisfied. The hypothesis
is accepted or rejected according to whether the value of χ² is below or above a
prescribed significance level. This χ² test is valid provided that the value of n
is reasonably large. A safe rule is that n > 50.
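The test statistic and its degrees of freedom are simple arithmetic on p, k and the minimized f; a minimal sketch (ours, with a hypothetical helper name):

```python
def exploratory_chi_square(f_min, n, p, k):
    """Chi-square statistic and degrees of freedom for the hypothesis of
    k common factors, using the large-sample multiplier given above."""
    chi2 = (n - (2 * p + 5) / 6 - 2 * k / 3) * f_min
    df = ((p - k) ** 2 - (p + k)) // 2   # always an integer
    return chi2, df
```

For the nine-test, three-factor example discussed later (p = 9, k = 3) the degrees of freedom come to ½{36 − 12} = 12.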
In the above discussion one important point has not been mentioned.
Certain data give rise to what has often been termed a Heywood case. In such
a case the function f has a minimum only at some point where one or more of
the residual variances are negative. To overcome this difficulty, the function f
is considered only within the region R_ε, where each ψ_ii ≥ ε, for some small
positive value ε. In practice, for standardized variates, we have taken ε to be
0.005. If the smallest value of f within R_ε is attained on the boundary, so that at
least one of the ψ_ii is equal to ε, the solution is called improper, since in this case
f has not attained a true minimum within R_ε.
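The bounded region R_ε is easy to enforce in an implementation; the following sketch (ours, with assumed names) projects a trial Ψ onto the region and flags boundary solutions as improper:

```python
import numpy as np

EPS = 0.005  # lower bound for unique variances of standardized variates

def project_and_flag(psi, eps=EPS):
    """Keep the search inside the region psi_i >= eps; a solution with any
    psi_i on the boundary is reported as improper (a Heywood case)."""
    psi_bounded = np.maximum(psi, eps)
    improper = bool(np.any(psi_bounded <= eps))
    return psi_bounded, improper
```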
Suppose that an improper solution is obtained in which m of the ψ_ii are
equal to ε. The hypothesis is then reframed, and it is assumed that these
4. CONFIRMATORY FACTOR ANALYSIS
In this section it is assumed that the experimenter has been able to set up a
hypothesis that defines the parameters uniquely. The hypothesis must specify
the values of certain elements in Λ and in Φ. It may, in addition, specify the
values of some or all of the residual variances ψ_ii, though this would be unusual
in practice. As a rule, the values specified for the elements of Λ or the non-
diagonal elements of Φ are zeros, but other values could be used. Let the
numbers of specified or fixed parameters in Λ, Φ and Ψ (including diagonal
elements of Φ) be denoted respectively by n_Λ, n_Φ and n_Ψ. Then a necessary,
though not sufficient, condition for uniqueness is that

n_Λ + n_Φ ≥ k².

In general, it is difficult to give sufficient conditions for uniqueness, since the
positions of the fixed parameters are important as well as the numbers.
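The necessary condition can be checked mechanically from the pattern of fixed elements. The sketch below is our addition (the helper name and mask convention are assumptions, with the fixed elements of Φ counted once over its distinct lower-triangular entries); the test uses the fixed pattern of the confirmatory example reported in Table 5:

```python
import numpy as np

def identification_check(lam_fixed, phi_fixed, psi_fixed):
    """Count fixed parameters and test the necessary condition
    n_Lambda + n_Phi >= k^2.  The masks mark fixed (specified) elements;
    phi_fixed covers the distinct elements of Phi (lower triangle + diagonal)."""
    k = lam_fixed.shape[1]
    n_lam = int(np.sum(lam_fixed))
    n_phi = int(np.sum(phi_fixed))
    q = n_lam + n_phi + int(np.sum(psi_fixed))
    return q, n_lam + n_phi >= k * k
```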
A common type of hypothesis is that certain of the loadings, at least k − 1 in
each column of Λ, are zeros and that the diagonal elements of Φ are unities;
the factors are then correlated or oblique. Another common type of hypothesis
is that certain loadings, at least ½k(k − 1) in number, are zero and that Φ is the
unit matrix of order k, in which case the factors are uncorrelated or orthogonal.
It is possible, however, to have hybrid cases in which one group of factors may
be correlated while the remaining factors are assumed to be uncorrelated with
this group or with each other. The generality of the method gives it great
flexibility, since it will deal with all such kinds of hypotheses. Technical details
and several examples illustrating the usefulness of the method are given by
Jöreskog (1967 b).
The total number of parameters in Λ, Φ and Ψ is

pk + ½k(k + 1) + p = ½(2p + k)(k + 1).

The number q of fixed parameters is given by

q = n_Λ + n_Φ + n_Ψ,

and the number of free parameters is

½(2p + k)(k + 1) − q.

For the hypothesis to be non-trivial this number must be less than ½p(p + 1).
This is equivalent to the inequality

q > ½(p + k)(p + k + 1) − p².  (6)
To apply the method of maximum likelihood, it is necessary to maximize
the likelihood, or to minimize the function F, with respect to all free parameters.
As before, F is given by eqn. (3), and Σ is given in terms of Λ, Φ and Ψ by
eqn. (2). Previous methods for maximizing the likelihood in situations of this
kind are referred to by Lawley & Maxwell (1963). In all of these methods,
partial derivatives with respect to the free parameters are equated to zero and,
after some algebraic simplification, an iterative procedure for solving the
equations is employed. Recent work has shown, however, that such procedures
do not always converge. Even when convergence does occur it is usually very
slow. A better method, for which ultimate convergence is assured, was given
by Jöreskog (1966 a). Experience with this method has made it clear that it is
still sometimes difficult to obtain a very accurate solution unless many iterations
are performed. Efficient minimization of the function F seems impossible
without the use of second-order derivatives.
The present procedure again uses the method of Fletcher & Powell.
Unfortunately, a two-stage minimization procedure such as that described in
the previous section is here not possible, except in special cases. The function F
has therefore to be minimized simultaneously with respect to all free parameters.
The E matrix evaluated in each iteration converges finally to the inverse of the
matrix of second-order derivatives with respect to the free parameters. A
method of providing a good initial approximation for E, and thus of reducing
the number of iterations required, has been given by Lawley (1967). This
involves the calculation and inversion of a symmetric matrix G, whose elements
are approximations to the second-order derivatives. In subsequent iterations
no further matrix inversion is required, since only simple modifications to E
are necessary.
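The "simple modifications" to E in the method of Fletcher & Powell take the form of a rank-two update; the paper does not reproduce the formula, but a standard statement of the Davidon-Fletcher-Powell update is sketched below (our addition):

```python
import numpy as np

def dfp_update(E, delta, gamma):
    """Davidon-Fletcher-Powell update of the approximate inverse Hessian E.
    delta = change in the parameter vector, gamma = change in the gradient."""
    Eg = E @ gamma
    return (E
            + np.outer(delta, delta) / (delta @ gamma)
            - np.outer(Eg, Eg) / (gamma @ Eg))
```

The update preserves symmetry and enforces the secant condition E_new γ = δ, which is why the sequence of E matrices can converge to the inverse Hessian without any explicit matrix inversion.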
The order of the matrices G and E is the number of free parameters. If
this is not too large, the above calculations are easily performed. But if, for
example, there were 40 variates and 10 common factors, the number of free
parameters might well be almost 400. The inversion and storage of matrices
whose order is as large as this present considerable difficulties. With the
development of computers having greater storage capacity than those of today,
these difficulties may well disappear. It has been found that with a G matrix
of large order a considerable number of non-diagonal elements may reasonably
be neglected. This means that a fairly good initial estimate of E can be obtained
by inverting only a number of relatively small submatrices of G.

The elements of the final E matrix multiplied by 2/n provide estimates of
the sampling variances and covariances of the estimates of the free parameters.
The minimization procedure starts with initial estimates of Λ, Φ and Ψ.
The better these are, the fewer iterations will be required. For the most common
types of hypotheses good initial estimates are given by the factor transformation
methods proposed by Lawley & Maxwell (1964). From the initial point, it is
usually best to perform a few steepest descent iterations before employing the
method of Fletcher & Powell. Steepest descent iterations have been found to
be very effective at the beginning when one is not very close to the minimum.
They enable one to obtain better approximations for G and for the initial E
matrix.

When maximum-likelihood estimates of the free parameters have been
found and F has been minimized, it is possible to test the hypothesis represented
by eqn. (2) with its specified values for the fixed parameters. The minimum
value of F is multiplied by the factor

n − (2p + 5)/6,

and the result is treated as a χ² variate for which the number of degrees of
freedom is

p² − ½(p + k)(p + k + 1) + q.

This number is positive provided that inequality (6) is satisfied. The hypothesis
is accepted or rejected according to whether the value of χ² is below or above the
chosen significance level.
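As with the exploratory test, the confirmatory statistic is simple arithmetic on p, k, q and the minimized F; a minimal sketch (ours, with an assumed helper name), checked against the confirmatory example for which p = 9, k = 3 and q = 20 give 23 degrees of freedom:

```python
def confirmatory_chi_square(f_min, n, p, k, q):
    """Chi-square statistic and degrees of freedom for a confirmatory
    hypothesis with q fixed parameters."""
    chi2 = (n - (2 * p + 5) / 6) * f_min
    df = p * p - (p + k) * (p + k + 1) // 2 + q
    return chi2, df
```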
The method of this section has been programmed in FORTRAN IV by
Jöreskog & Gruvaeus (1967). The program has been tested on an IBM 7044
computer.
interpretation of the data this solution was rotated orthogonally using the varimax
method of Kaiser (1958). The varimax solution is given in Table 3. Since
the sample size is rather small, sampling variability is very large. Hence only
factor loadings larger than 0.30 in absolute magnitude are interpreted. It then
seems that the first factor, determined by tests 1, 2, 3 and 9, is a visual factor,
the second factor, determined by tests 4, 5 and 6, is a verbal factor, and that the
third factor, determined by tests 7, 8 and 9, is a speed factor.
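The varimax criterion of Kaiser (1958) maximizes the variance of the squared loadings within each column of the rotated matrix. A generic sketch using the common SVD-based iteration is given below; this is our illustration, not the program used by the authors:

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-8):
    """Orthogonal varimax rotation (Kaiser, 1958) by the standard
    SVD iteration; returns the rotated loading matrix."""
    p, k = L.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # gradient of the varimax criterion with respect to the rotation
        B = L.T @ (Lr ** 3 - Lr * (Lr ** 2).sum(axis=0) / p)
        u, s, vt = np.linalg.svd(B)
        R = u @ vt                      # nearest orthogonal matrix
        d_new = s.sum()
        if d_new <= d_old * (1.0 + tol):
            break
        d_old = d_new
    return L @ R
```

Because the rotation matrix is orthogonal, the communality of each variate (the row sum of squared loadings) is unchanged by the rotation; only the distribution of each communality across the factors alters.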
TABLE 2. UNROTATED MAXIMUM-LIKELIHOOD SOLUTION FOR EXPLORATION SAMPLE

Variate    λ̂_1     λ̂_2     λ̂_3      ψ̂
1          0.59   −0.14    0.37    0.49
2          0.37   −0.19    0.45    0.62
3          0.42   −0.32    0.53    0.44
4          0.71   −0.37   −0.27    0.29
5          0.71   −0.26   −0.23    0.37
6          0.74   −0.33    0.17    0.33
7          0.50    0.58   −0.30    0.32
8          0.65    0.54    0.13    0.27
9          0.64    0.34    0.27    0.40
TABLE 3. VARIMAX ROTATED SOLUTION FOR EXPLORATION SAMPLE

                 Factors
Variate   Visual   Verbal   Speed
1          0.61     0.30    0.23
2          0.60     0.12    0.06
3          0.72     0.18   −0.02
4          0.17     0.82    0.11
5          0.18     0.75    0.21
6          0.26     0.76    0.16
7         −0.22     0.22    0.76
8          0.21     0.12    0.82
9          0.40     0.14    0.65
The matrix of Table 2 was transformed into an oblique solution giving as good
agreement as possible with this target. This was accomplished by use of the
method of Lawley & Maxwell (1964). The method transforms the factors in
such a way that, for any column of the target matrix, the ratio of the sum of
squares of loadings corresponding to zeros in the target to the total sum of
squares is minimized. The solution is given in Table 4; it is evidently a refinement
of that given in Table 3. With a few exceptions the small loadings have
become smaller and the large loadings have become larger.
TABLE 4. TRANSFORMED SOLUTION FOR EXPLORATION SAMPLE

                 Factors
Variate   Visual   Verbal   Speed
1          0.60     0.14    0.14
2          0.63    −0.02   −0.00
3          0.74     0.03   −0.11
4         −0.02     0.88   −0.06
5          0.01     0.77    0.05
6          0.10     0.78   −0.00
7         −0.24     0.14    0.78
8          0.26    −0.09    0.82
9          0.44    −0.08    0.63

Factor correlations
          Visual   Verbal   Speed
Visual     1.00
Verbal     0.45     1.00
Speed      0.13     0.36    1.00
TABLE 5. RESTRICTED MAXIMUM-LIKELIHOOD SOLUTION FOR CONFIRMATION SAMPLE

                 Factors            Residual
Variate   Visual   Verbal   Speed   variance
1          0.68     0*       0*      0.54
2          0.34     0*       0*      0.88
3          0.66     0*       0*      0.57
4          0*       0.91     0*      0.18
5          0*       0.87     0*      0.25
6          0*       0.82     0*      0.32
7          0*       0*       0.65    0.58
8          0*       0*       0.93    0.15
9          0.67     0*       0.19    0.39

Factor correlations
          Visual   Verbal   Speed
Visual     1.00*
Verbal     0.55     1.00*
Speed      0.47     0.09    1.00*

Asterisks denote parameter values specified by hypothesis.
The results obtained suggest the hypothesis that the nine tests can be
explained in terms of three correlated factors with unit variances such that the
loading matrix Λ is of the form specified by the target matrix, where 0 now
denotes an exact zero loading and x denotes a loading to be estimated from the
data. This hypothesis is tested on the confirmation sample, using the RMLFA
program of Jöreskog & Gruvaeus (1967). The maximum-likelihood solution
under the hypothesis is given in Table 5. The hypothesis is finally accepted,
since the value of χ² is 29.96 with 23 degrees of freedom, which corresponds to a
probability of 0.15.
It should be noted that the solutions of Tables 2–4 are three alternative
unrestricted solutions in the same factor space. They fit the observed correlations
equally well. The solution of Table 5, on the other hand, is a restricted
one. The number of fixed parameters is 20, which is 11 more than that
necessary for uniqueness. The restrictions affect the estimation of the residual
variances. The differences between the residual variances in Tables 2 and 5
are therefore not entirely due to sampling errors.

The above example has been given mainly to show how the methods
described may be put to practical use in cases where the hypothesis is not specified
prior to the analysis of the data. If the hypothesis were set up in advance, one
could proceed directly to the confirmatory stage.
ACKNOWLEDGEMENT
Part of this work was supported by a grant (NSF-GB 1985) from the
National Science Foundation to Educational Testing Service.
REFERENCES
FLETCHER, R. & POWELL, M. J. D. (1963). A rapidly convergent descent method for
minimization. Computer J. 6, 163–168.
HOLZINGER, K. J. & SWINEFORD, F. (1939). A Study in Factor Analysis: The Stability
of a Bifactor Solution. University of Chicago: Supplementary Educational
Monographs, No. 48.
JÖRESKOG, K. G. (1966 a). Testing a simple structure hypothesis in factor analysis.
Psychometrika 31, 165–178.
JÖRESKOG, K. G. (1966 b). UMLFA — a computer program for unrestricted maximum
likelihood factor analysis. Research Memorandum 66-20. Princeton, N.J.:
Educational Testing Service.
JÖRESKOG, K. G. (1967 a). Some contributions to maximum likelihood factor analysis.
Psychometrika 32, 443–482.
JÖRESKOG, K. G. (1967 b). A general approach to confirmatory maximum likelihood
factor analysis. Research Bulletin. Princeton, N.J.: Educational Testing Service.
JÖRESKOG, K. G. & GRUVAEUS, G. (1967). RMLFA — a computer program for restricted
maximum likelihood factor analysis. Research Memorandum 67-21. Princeton, N.J.:
Educational Testing Service.
KAISER, H. F. (1958). The varimax criterion for analytic rotation in factor analysis.
Psychometrika 23, 187–200.
LAWLEY, D. N. (1940). The estimation of factor loadings by the method of maximum
likelihood. Proc. Roy. Soc. Edinb. (A) 60, 64–82.
LAWLEY, D. N. (1967). Some new results in maximum likelihood factor analysis. Proc.
Roy. Soc. Edinb. (A) 67, 256–264.
LAWLEY, D. N. & MAXWELL, A. E. (1963). Factor Analysis as a Statistical Method.
London: Butterworths.
LAWLEY, D. N. & MAXWELL, A. E. (1964). Factor transformation methods. Br. J.
statist. Psychol. 17, 97–103.
MCDONALD, R. P. (1967). Factor interaction in nonlinear factor analysis. Br. J. math.
statist. Psychol. 20, 205–215.
MATTSSON, A., OLSSON, U. & ROSÉN, M. (1966). The maximum likelihood method in
factor analysis with special consideration to the problem of improper solutions.
(Research Report, Institute of Statistics, University of Uppsala, Sweden.)