
METHODS

Finding the magic number

Paul Wilson and Colin Cooper investigate methods used to extract the number of factors in a factor analysis

Factor analysis is the most widely used (and abused) family of data-structuring techniques to be found in psychological research. One of the most important considerations when performing factor analysis is determining how many factors are present. It is important because if either too few or too many factors are extracted, the rotated solution may make little sense, or even be misleading. A number of procedures are currently available to help make this decision; however, those most commonly used are inappropriate and not based on sound statistical theory. We will review four common methods, discuss the risks and benefits associated with each, and consider the practicalities of performing them. All are based on an initial principal components analysis of the data.

‘Greater than one’
Recently, Costello and Osborne (2005) found no fewer than 1700 studies that used some form of exploratory factor analysis in a two-year review of PsycINFO. The majority of these used the Kaiser–Guttman ‘eigenvalues greater than one’ criterion (Guttman, 1954; Kaiser, 1960, 1970) to determine the number of factors. Such a revelation would raise many psychometricians’ eyebrows.

In the latest edition of the psychometrician’s bible Psychometric Theory, Nunnally and Bernstein (1994) reason that the Kaiser–Guttman rule is simply ‘that a factor must account for at least as much variance as an individual variable’. In other words, the average of all eigenvalues is one, and factor analysis should extract those factors with an eigenvalue greater than this average value. For simplicity, it may be useful to think of eigenvalues as indicators of the variance explained by a factor. The Kaiser–Guttman rule, therefore, is arbitrarily based on the assumption that factors with ‘better than average’ variance explanation are significant, and those with ‘below average’ variance explanation are not. Indeed, we should expect from any dataset (importantly, this includes random datasets) that there will be factors that explain ‘greater than average’ variance, and similarly some that explain ‘below average’ variance. Therefore, with random data, the number of factors with eigenvalues greater than one will be roughly half the number of items making up that dataset. In other words, with random data comprising x items, Kaiser–Guttman will find about x/2 factors… It’s time to worry about this method!

The technique has also been shown to be sensitive to properties of a dataset other than the number of factors, with a tendency to consistently overestimate the number of factors. Nunnally and Bernstein highlighted that the more variables a dataset contains, the weaker the Kaiser–Guttman threshold becomes: a factor whose eigenvalue equals 1.0 accounts for 10 per cent of the variance in a dataset of 10 variables, but only 5 per cent in a dataset of 20. Given such vulnerability, it is no wonder that this method is not recommended for use (see Cooper, 2002, p.124; Nunnally & Bernstein, 1994, p.482; and Pallant, 2005, p.183).
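
To see how mechanical the rule is, here is a minimal base R sketch (our own illustration, not code from any of the programs discussed in this article; ‘mydata’ stands for a hypothetical participants-by-variables dataset):

    # Kaiser-Guttman rule: count the eigenvalues of the correlation
    # matrix that exceed 1 ('mydata' is a hypothetical numeric dataset)
    eigenvalues <- eigen(cor(mydata))$values
    sum(eigenvalues > 1)   # the number of factors the rule would retain

Applied to purely random data, this count tends towards roughly half the number of variables, which is exactly the problem described above.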

Finding the elbow
Cattell’s (1966) scree test is another factor estimation technique reliant on eigenvalues. Also known as the ‘above the elbow’ approach, it uses relative, rather than absolute eigenvalues. Factors have their eigenvalues plotted alongside each other (y-axis) in order of magnitude. Insignificant factors explaining little variance (and therefore low eigenvalues) will form a near-straight line towards the right of the graph. Factors explaining large amounts of variance will appear above the line to the left of the graph (see Figure 1). The number of factors contained within the data is indicated by the number of points ‘above the elbow’ of the straight line. The obvious criticism of this method is its subjectivity, which is all too often frowned upon by the dogma of modern-day psychology. Nonetheless, it can be useful, as it allows a visual examination of a data structure, but only in accompaniment to a more statistically robust technique to provide that magic number of factors.

Figure 1: Cattell’s scree plot. Where is the elbow of this plot? Some may see it at Factor 4, others may see it at Factor 9: subjectively it could be either!
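
For those who would like to draw the plot themselves, a base R sketch follows (again our own illustration; ‘mydata’ is a hypothetical dataset):

    # Cattell's scree plot: eigenvalues of the correlation matrix plotted
    # in decreasing order against component number ('mydata' is hypothetical)
    eigenvalues <- eigen(cor(mydata))$values
    plot(seq_along(eigenvalues), eigenvalues, type = "b",
         xlab = "Component number", ylab = "Eigenvalue", main = "Scree plot")

The elbow still has to be judged by eye, of course, which is precisely the subjectivity criticised above.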

Mapping the right direction?
Velicer’s (1976) minimum average partial, or MAP, method differs from the methods mentioned so far in that it has a much sounder theoretical rationale and is consequently more complex to compute. MAP produces a one-factor solution to a dataset and calculates an associated index based on the (average squared) residual correlations of that one-factor solution. A residual correlation can best be thought of as a correlation indicating ‘left-over’ variance that could not be explained by the single-factor solution. The higher this index, the more variance is left unexplained by the factor. This process is then repeated for a two-factor extraction, then a three-factor extraction, and so on, with the index of residual correlations providing an indication of the amount of variance that goes unaccounted for in an extraction of x factors. This index will show the number of factors (x) that can be extracted to account for the maximum amount of variance within the dataset (i.e. the lowest residual correlation index). This is a primary objective of factor analysis: to account for, and structure appropriately, as much of the variation within a dataset as possible.

So should we all use Velicer’s MAP then? Unfortunately, MAP has been shown to underestimate the true number of factors (Hayton et al., 2004), but it may still be more accurate than Kaiser–Guttman or Cattell’s scree (Zwick & Velicer, 1986, p.440).
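
The procedure is easier to follow in code than in prose. The function below is our own base R sketch of the usual way the test is computed, not code taken from Velicer or from any of the programs mentioned later; ‘mydata’ is a hypothetical participants-by-variables dataset, and for brevity the sketch omits the zero-component comparison included in full implementations.

    # Velicer's minimum average partial (MAP) test - a simplified base R sketch
    velicer_map <- function(data) {
      R <- cor(data)
      p <- ncol(R)
      pca <- eigen(R)
      loadings <- pca$vectors %*% diag(sqrt(pca$values))  # principal component loadings
      map_index <- numeric(p - 1)
      for (m in 1:(p - 1)) {
        A <- loadings[, 1:m, drop = FALSE]
        partial_cov <- R - A %*% t(A)              # what is left after partialling out m components
        d <- diag(1 / sqrt(diag(partial_cov)))
        partial_cor <- d %*% partial_cov %*% d     # rescale to partial correlations
        # MAP index for m components: average squared off-diagonal partial correlation
        map_index[m] <- (sum(partial_cor^2) - p) / (p * (p - 1))
      }
      which.min(map_index)                         # number of components giving the smallest index
    }

    velicer_map(mydata)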

The crème de la crème
Finally, we look at what seems to be the crème de la crème of tests for the number of factors: Horn’s (1965) parallel analysis. This generates many, many sets of random data of the same appearance as the experimental data (same number of participants, same number of variables, etc.). It then factor analyses each set of random data and collates the resulting eigenvalues. This shows how big the first, second, third, etc. eigenvalues typically are when the null hypothesis is actually true (i.e. it shows how large a first eigenvalue one can expect to find by chance, when in reality there are no factors present in the data). If the eigenvalue for the first factor is larger for the experimental dataset than for the random data, one can conclude that there is at least one factor present in the experimental dataset. If so, one considers whether the second eigenvalue from the experimental dataset is greater than its simulated counterpart, and so on.

Rather than just checking whether the eigenvalue from the experimental dataset is larger than the average of the simulated eigenvalues, it is becoming more common to scrutinise the sampling distribution of the simulated eigenvalues. This allows one to determine whether there is less than a 5 per cent chance that the first eigenvalue from the dataset could have occurred if, in reality, there are no factors in the data. If there appears to be one factor present, the real and simulated eigenvalues for the second, third, etc. factors are compared, until sooner or later the real dataset produces an eigenvalue that is no larger than one would expect by chance. Thus the number of eigenvalues before this point is indicative of the number of ‘true’ factors contained within the experimental data – three factors in the example shown in the table below.

    Component number    Experimental eigenvalues          Eigenvalue occurring less than 5 per cent
                        (observed in the real data)       of the time when factoring random data
           1                      6.2                                      4.0
           2                      5.6                                      3.9
           3                      4.2                                      3.7
           4                      3.5                                      3.6
           5                      3.1                                      3.4
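
Parallel analysis is also easy to sketch in a few lines of base R. The function below is our own illustration (not the Watkins or Dinno implementation); it assumes a hypothetical participants-by-variables dataset called ‘mydata’ and uses the 95th centile of the simulated eigenvalues, as described above.

    # Horn's parallel analysis (95th-centile version) - a base R sketch
    parallel_analysis <- function(data, n_sims = 1000, centile = 0.95) {
      n <- nrow(data)
      p <- ncol(data)
      observed <- eigen(cor(data))$values                  # eigenvalues of the real data
      simulated <- replicate(n_sims, {
        random <- matrix(rnorm(n * p), nrow = n, ncol = p) # random data, same size as the real data
        eigen(cor(random))$values
      })
      # 95th centile of the first, second, third, etc. simulated eigenvalues
      threshold <- apply(simulated, 1, quantile, probs = centile)
      # retain factors until the first observed eigenvalue that does not beat chance
      first_fail <- which(observed <= threshold)[1]
      if (is.na(first_fail)) p else first_fail - 1
    }

    parallel_analysis(mydata)

With the eigenvalues in the table, the first three observed values (6.2, 5.6 and 4.2) exceed their chance thresholds, while the fourth (3.5) falls below the 3.6 expected by chance, so three factors are retained.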

Time to go beyond the default
The conclusions of this review are far from new. Zwick and Velicer (1986) compared these four methods using simulated data with various dataset properties and found MAP and parallel analysis to be the most accurate methods, with Kaiser–Guttman the least accurate, consistently overestimating. So why have two theoretically grounded methods of estimating the number of factors more often than not been tossed by the wayside in favour of lesser methods such as Kaiser–Guttman or Cattell’s scree?

Maybe it is because these lesser methods are commonly defaults within statistics software packages. For example, SPSS by itself can only offer Cattell’s scree plot and the Kaiser–Guttman method. So ‘if SPSS can’t do it, I can’t do it’? Think again! There are numerous psychologist-friendly factor analysis programs out there, for free. ‘FACTOR’ (Lorenzo-Seva & Ferrando, 2006) is a very straightforward freeware program that computes MAP and parallel analysis at the tick of a box. You can even do parallel analysis with SPSS by downloading a macro from the internet. If macros aren’t your thing, you may consider a freeware program by Watkins (2000) that will calculate random eigenvalues to compare with SPSS’s output: it simply requires the number of participants and variables in your experimental data and how many random datasets you want calculated before averaging. Those who use R for their statistics will find parallel analysis packages such as the very recent ‘paran’ package (Dinno, 2008) on the CRAN website.
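
For example, assuming the package has been installed from CRAN, a call along the following lines runs parallel analysis in one step (the argument names are as we understand them from the package documentation and should be checked against the paran help pages; ‘mydata’ is again a hypothetical dataset):

    # Parallel analysis via the 'paran' package (Dinno, 2008) - argument
    # names should be verified against the package's own documentation
    # install.packages("paran")
    library(paran)
    paran(mydata, iterations = 5000, centile = 95, graph = TRUE)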

Journal editorial policies are coming up to speed with factor analytic theory, with many now not accepting papers that use the Kaiser–Guttman and Cattell’s scree methods alone. Our hope for this article is to encourage those not yet sure about the MAP and parallel analysis methods to give them a try – they are not as daunting as they first seem.

Paul Wilson is in the School of Psychology, Queen’s University, Belfast. pwilson23@qub.ac.uk
Colin Cooper is in the School of Psychology, Queen’s University, Belfast. c.cooper@qub.ac.uk

References
Cattell, R.B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245–276.
Cooper, C. (2002). Individual differences (2nd edn). London: Arnold.
Costello, A.B. & Osborne, J. (2005). Best practices in exploratory factor analysis. Practical Assessment Research & Evaluation, 10, 1–9.
Dinno, A. (2008). The Paran Package. Available at: http://cran.r-project.org/
Guttman, L. (1954). Some necessary conditions for common factor analysis. Psychometrika, 19, 149–161.
Hayton, J.C., Allen, D.G. & Scarpello, V. (2004). Factor retention decisions in exploratory factor analysis: A tutorial on parallel analysis. Organizational Research Methods, 7, 191–205.
Horn, J.L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185.
Kaiser, H.F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141–151.
Kaiser, H.F. (1970). A second generation Little Jiffy. Psychometrika, 35, 401–417.
Lorenzo-Seva, U. & Ferrando, P.J. (2006). FACTOR: A computer program to fit the exploratory factor analysis model. Behavioral Research Methods, Instruments and Computers, 38, 88–91. Available at tinyurl.com/67rowz
Nunnally, J.C. & Bernstein, I.H. (1994). Psychometric theory (3rd edn). New York: McGraw-Hill.
Pallant, J. (2005). SPSS survival manual (2nd edn). Maidenhead: McGraw-Hill Education.
Velicer, W.F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika, 41, 321–327.
Watkins, M.W. (2000). Monte Carlo PCA for parallel analysis [computer software]. State College, PA. Available at tinyurl.com/6emtay
Zwick, W.R. & Velicer, W.F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99, 432–442.
