Robin Beaumont
robin@organplayers.co.uk
Acknowledgment:
The original version of this chapter was written several years ago by Chris Dracup
Factor analysis and Principal Component Analysis (PCA)
Contents
1 Learning outcomes
2 Introduction
2.1 Holzinger & Swineford 1939
3 Overview of the process
3.1 Data preparation
3.2 Do we have appropriate correlations to carry out the factor analysis?
3.3 Extracting the factors
3.4 Giving the factors meaning
3.5 Reification
3.6 Obtaining factor scores for individuals
3.6.1 Obtaining the factor score coefficient matrix
3.6.2 Obtaining standardised scores
3.6.3 The equation
3.7 What do the individual factor scores tell us?
4 Summary - to factor analyse or not
5 A typical exam question
5.1 Data layout and initial inspection
5.2 Carrying out the Principal Component Analysis
5.3 Interpreting the output
5.4 Descriptive statistics
5.5 Communalities
5.6 Eigenvalues and scree plot
5.7 Unrotated factor loadings
5.8 Rotation
5.9 Naming the factors
5.10 Summary
6 PCA and factor analysis with a set of correlations or covariances in SPSS
7 PCA and factor analysis in R
7.1 Using a matrix instead of raw data
8 Summary
9 References
1 Learning outcomes
Working through this chapter you will gain the following knowledge and skills. After you have worked through it you should come back to these points, ticking off those with which you feel happy.
[Checklist of learning outcomes omitted in this extraction]
If, after you have worked through this chapter, you feel you have learnt something not mentioned above, please add it below:
2 Introduction
This chapter provides details of two methods that can help you to restructure your data, specifically by reducing the number of variables; such an approach is often called a "data reduction" or "dimension reduction" technique. What this basically means is that we start off with a set of variables, say 20, and by the end of the process we have a smaller number which still reflects a large proportion of the information contained in the original dataset. The 'information contained' is measured by considering the variability within and co-variation across variables, that is the variance and covariance (i.e. correlation). The reduction might come about either by discovering that a particular linear combination of our variables accounts for a large percentage of the total variability in the data, or by discovering that several of the variables reflect another 'latent variable'.
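To make the idea of a 'linear combination' concrete, here is a minimal R sketch. It is hedged: it uses USArrests, a built-in R dataset chosen purely as a stand-in; our own data appear later in the chapter. prcomp() finds the linear combinations of the variables (the principal components) that account for the most variability:

# PCA on a built-in dataset, standardising the variables first
pca <- prcomp(USArrests, scale. = TRUE)
summary(pca)  # 'Proportion of Variance' = share of total variability per component
pca$rotation  # the weights that define each linear combination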
This process can be used in broadly three ways: firstly, to simply discover the linear combinations that reflect the most variation in the data; secondly, to discover if the original variables are organised in a particular way reflecting a 'latent variable' (called Exploratory Factor Analysis, EFA); thirdly, to confirm a belief about how the original variables are organised in a particular way (Confirmatory Factor Analysis, CFA). It must not be thought that EFA and CFA are mutually exclusive: often what starts as an EFA becomes a CFA.
I have used the term Factor in the above and we need to understand this concept a little more.
A factor in this context (its meaning is different from that found in Analysis of Variance) is equivalent to what is known as a latent variable, which is also called a construct.
construct = latent variable = factor
A latent variable is a variable that cannot be measured directly but is measured indirectly through several observable variables (called manifest variables). Some examples will help. If we were interested in measuring intelligence (= latent variable) we would measure people on a battery of tests (= observable variables) including short term memory, verbal, writing, reading, motor and comprehension skills etc.
Similarly we might have an idea that patient satisfaction (= latent variable) with a person's GP can be measured by asking questions such as those used by Cope et al (1986), and quoted in Everitt & Dunn 2001 (page 281). Each question is presented as a five point scale from strongly agree to strongly disagree (i.e. a Likert scale, scored 1 to 5):
1. My doctor treats me in a friendly manner
2. I have some doubts about the ability of my doctor
[Remaining questionnaire items and the accompanying path diagram, showing each observed variable Xi with an arrow from the latent variable and an arrow labelled 'error', omitted in this extraction]
You might be thinking that you could group some of the questions together yourself to reflect different aspects of satisfaction.
Secondly, you will notice in the diagram above that besides the line pointing towards the observed variable Xi from the latent variable, representing its degree of correlation with the latent variable, there is another line pointing towards it labelled error. This error line represents the unique contribution of the variable, that is, the portion of the variable that cannot be predicted from the remaining variables. This uniqueness value is equal to 1 − R², where R² is the standard multiple R squared value. We will look much more at this in the following sections, considering a dataset that has been used in many texts concerned with factor analysis; using a common dataset will allow you to compare this exposition with that presented in other texts.
2.1 Holzinger & Swineford 1939
In this chapter we will use a subset of data from the Holzinger and Swineford (1939) study, in which they collected data on 26 psychological tests from seventh and eighth grade children in a suburban school district of Chicago (file called grnt_fem.sav). Our subset consists of data from 73 girls from the Grant-White School. The six variables represent scores from six tests of different aspects of educational ability: visual perception, cube and lozenge identification, word meanings, sentence structure and paragraph understanding.
[Descriptive statistics table produced in SPSS omitted in this extraction]
Correlations
wordmean sentence paragrap lozenges cubes visperc
wordmean 1.000
sentence .696 1.000
paragrap .743 .724 1.000
lozenges .369 .335 .326 1.000
cubes .184 .179 .211 .492 1.000
visperc .230 .367 .343 .492 .483 1.000
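As a minimal sketch (assuming R with the psych package, which is used later in this chapter), the correlation matrix above can be typed in by hand, and the squared multiple correlation of each variable with the rest, the R² that underlies the 'uniqueness' idea described above, obtained with smc():

library(psych)
vars <- c("wordmean", "sentence", "paragrap", "lozenges", "cubes", "visperc")
R <- matrix(c(1.000, .696, .743, .369, .184, .230,
              .696, 1.000, .724, .335, .179, .367,
              .743, .724, 1.000, .326, .211, .343,
              .369, .335, .326, 1.000, .492, .492,
              .184, .179, .211, .492, 1.000, .483,
              .230, .367, .343, .492, .483, 1.000),
            nrow = 6, byrow = TRUE, dimnames = list(vars, vars))
smc(R)      # squared multiple correlations (R squared) for each variable
1 - smc(R)  # uniqueness = 1 - R squared, as described above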
Exercise 1.
Consider how you might use the above information to assess the data concerning:
The shape of the various distributions
Any relationships that may exist between the variables
Any missing / dodgy(!) values
Could some additional information help?
Exercise 2.
Considering each of the following matrices, complete the table below:
[Matrices and table omitted in this extraction]
While eyeballing is a valid method of statistical analysis (!) obviously some type of statistic, preferably with an
associated probability density function to produce a p value, would be useful to help us make this decision. Two
such statistics are the Bartlett test of Sphericity and the Kaiser-Meyer-Olkin Measure of Sampling Adequacy
(usually called the MSA).
The Bartlett Test of Sphericity compares the correlation matrix with a matrix of zero correlations (technically called the identity matrix, which consists of all zeros except the 1's along the diagonal). From this test we are looking for a small p value, indicating that it is highly unlikely that we would have obtained the observed correlation matrix from a population with zero correlation. However, there are many problems with the test: a large p value indicates that you should not continue, but a small p value does not guarantee that all is well (Norman & Streiner p 198).
The MSA does not produce a p value, but we are aiming for a value over 0.8; below 0.5 is considered to be miserable! Norman & Streiner (p 198) recommend that you consider removing variables with an MSA below 0.7.
In SPSS we can obtain both statistics by selecting the menu option Analyse -> Dimension Reduction, placing the variables in the variables dialog box, then clicking the Descriptives button and selecting the Anti-image option, to show the MSA for each variable, and the KMO and Bartlett's test option, for the overall MSA as well:
KMO and Bartlett's Test
Kaiser-Meyer-Olkin Measure of Sampling Adequacy: .763
Bartlett's Test of Sphericity: Approx. Chi-Square 180.331; df 15; Sig. .000
Anti-image Matrices

Anti-image Covariance:
            visperc   cubes   lozenges  paragraph  sentence  wordmean
visperc      .613    -.204     -.177      -.065      -.101      .091
cubes       -.204     .676     -.210      -.017       .042     -.008
lozenges    -.177    -.210      .615       .022      -.012     -.100
paragraph   -.065    -.017      .022       .354      -.145     -.176
sentence    -.101     .042     -.012      -.145       .399     -.133
wordmean     .091    -.008     -.100      -.176      -.133      .371

Anti-image Correlation:
            visperc   cubes   lozenges  paragraph  sentence  wordmean
visperc      .734a   -.317     -.289      -.140      -.204      .191
cubes       -.317     .732a    -.326      -.034       .082     -.015
lozenges    -.289    -.326      .780a      .047      -.025     -.209
paragraph   -.140    -.034      .047       .768a     -.385     -.486
sentence    -.204     .082     -.025      -.385       .803a    -.346
wordmean     .191    -.015     -.209      -.486      -.346      .743a

a. Measures of Sampling Adequacy (MSA)

We can see that we have good values for all variables for the MSA (the diagonal of the anti-image correlation matrix), although the overall value is a bit low at 0.763. However, Bartlett's Test of Sphericity has an associated p value ('Sig.' in the table) of <0.001; by default SPSS reports p values of less than 0.001 as 0.000! So from the above results we know that we can now continue and perform a valid factor analysis.
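The same two checks can be reproduced in R as a minimal sketch, reusing the correlation matrix R entered earlier. (One hedge: the R script later in this chapter notes it could not compute the KMO, but recent versions of the psych package do provide a KMO() function.)

library(psych)
cortest.bartlett(R, n = 73)  # Bartlett's test: expect chi-square about 180.3 on 15 df
KMO(R)                       # overall MSA about .76, plus the per-variable MSAs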
Finally, I mentioned that we should exclude from the analysis variables that are just simple derivations of one another, say variable A = variable B + 4. A similar problem occurs with variables that are very highly correlated (this is called multicollinearity); when this occurs the computation breaks down and can't produce valid factor loading values. A simple way of assessing this is to inspect a particular summary measure of the correlation matrix called the determinant and check that it is greater than 0.00001 (Field 2012 p771). Clicking on the determinant option in the above dialog box produces a determinant value of 0.0737 for our dataset.
Communalities (Extraction)
            Principal Axis Factoring   Principal Component Analysis
visperc            .487                        .650
cubes              .511                        .712
lozenges           .504                        .650
paragraph          .764                        .826
sentence           .689                        .797
wordmean           .714                        .812
You will notice that both methods extracted 2 factors. However, the factor loadings (or strictly speaking the component loadings for the PCA) for the PCA are larger in absolute value, as are the communalities, and as a consequence the total variance explained is also greater. Here are a few pointers to help you interpret the above:
Factor loadings for the PA = the correlation between a specific observed variable and a specific factor. Higher values mean a closer relationship. They are equivalent to standardised regression coefficients (β weights) in multiple regression. The higher the value the better.
Communality for the PA = the total influence on a single observed variable from all the factors associated with it. It is equal to the sum of all the squared factor loadings for all the factors related to the observed variable, and this value is the same as R² in multiple regression. The value ranges from zero to 1, where 1 indicates that the variable can be fully defined by the factors and has no uniqueness. In contrast, a value of 0 indicates that the variable cannot be predicted at all from any of the factors. The communality can be derived for each variable by taking the sum of the squared factor loadings for each of the factors associated with the variable. So for visperc = 0.555² + 0.423² = 0.487, and for cubes = 0.452² + 0.553² = 0.510. These values can be interpreted the same way as R squared values in multiple regression, that is, they represent the % of variability attributed to the model; inspecting the total variance explained table in the above analyses you will notice that this is how the % of variance column is produced. Because we are hoping that the observed dataset is reflected in the model, we want this value to be as high as possible: the nearer to one the better.
Uniqueness = for each observed variable it is the portion of the variable that cannot be predicted from the other variables (i.e. the latent variables). Its value is 1 − communality. So for wordmean we have 1 − 0.714 = 0.286, and as the communality can be interpreted as the % of the variability that is predicted by the model, we can say this is the % of variability in a specific observed variable that is NOT predicted by the model. This means that we want this value for each observed variable to be as low as possible. Referring back to the diagram in the introduction, it is the 'error' arrow.
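As a minimal sketch of the arithmetic just described, using the two unrotated PA loadings for visperc and cubes quoted above:

# communality = sum of squared loadings across factors; uniqueness = 1 - communality
L <- rbind(visperc = c(0.555, 0.423),
           cubes   = c(0.452, 0.553))
rowSums(L^2)      # communalities: about 0.487 and 0.510, as above
1 - rowSums(L^2)  # the corresponding uniqueness values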
Total variance explained = this indicates how much of the variability in the data has been modelled by the extracted factors. You might think that, given that the PCA analysis models 74% of the variability compared to just 61% for the PA analysis, we should go for the PCA results. However, the estimate is higher because in the PCA analysis the initial estimates for the communalities are all set to 1, which is higher than for the PA analysis, which uses an estimate of the R² value. Also, whereas the PCA makes use of all the variability available in the dataset, in the PA analysis the unique variability for each observed variable is disregarded, as we are only really interested in how each relates to the latent variable(s). What is an acceptable level of variance explained by the model? Well, one would hope for the impossible, which would be 100%; often analyses are reported with 60-70%.
Besides using the eigenvalue > 1 criterion, we could have inspected a scree plot and worked out where the factors levelled off – we will look at this approach later.
3.4 Giving the factors meaning
Now we have our factors we need to find a way of interpreting them; to enable this we carry out a process called factor rotation.
Structure Matrix
            Factor 1   Factor 2
visperc       .368       .696
cubes         .226       .706
lozenges      .404       .704
paragraph     .874       .404
sentence      .830       .403
wordmean      .845       .358
Extraction Method: Principal Axis Factoring.
Rotation Method: Promax with Kaiser Normalization.
Exercise 3.
Make some suggestions for the names of the two latent variables (factors) identified.
3.5 Reification
Although the computer presents us with what appears to be a lovely organised set of variables that make up a factor, there is no reason that this subset of variables should equate to something in reality. This is also called the fallacy of misplaced concreteness. Basically it is assuming something exists because it appears so, for example a latent variable.
Exercise 4.
Do a Google search on reification – a good place to start is the Wikipedia article.
3.6 Obtaining factor scores for individuals
The above shows the estimated factor scores for the FA analysis and opposite for the PCA analysis.
How are the above factor scores for each case calculated? The answer is that an equation is used where the dependent variable is the predicted factor score and the independent variables are the observed variables. We can check this, but to do so we need two more pieces of information: the factor score coefficient matrix and the standardised scores for the observed variables.
So, returning to the PCA factors: as we have two factors we have two factor score equations, using the values from the component/factor score coefficient matrix:
FS1 = (0.207)visperc + (0.170)cubes + (0.216)lozenges + (0.265)paragraph + (0.262)sentence + (0.256)wordmean
FS2 = (0.363)visperc + (0.489)cubes + (0.332)lozenges + (-0.288)paragraph + (-0.277)sentence + (-0.317)wordmean
Now considering the first case, that is the first row in the SPSS datasheet, we can plug in their standardised scores:
FS1subject1 = (0.207)0.53282 + (0.170)(-0.59535) + (0.216)0.27359 + (0.265)(-0.72679) + (0.262)(-0.45532) + (0.256)(-0.96328)
In R:
Answer <- (0.207)*0.53282 + (0.170)*(-0.59535) + (0.216)*0.27359 +
  (0.265)*(-0.72679) + (0.262)*(-0.45532) + (0.256)*(-0.96328)
Answer
# -0.4903132
This is the same as the answer produced by SPSS, shown again opposite. Obviously you do not need to go through this process of checking how SPSS produced the individual factor score values; I just did it to show you how they are produced.
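The same calculation can be done for every subject at once in matrix form. Here is a minimal sketch, with one assumption flagged: it uses the raw data frame hozdata loaded in the R section later in this chapter, and assumes its columns are in the order visperc, cubes, lozenges, paragraph, sentence, wordmean. W is copied from the SPSS component score coefficient matrix quoted above.

# standardise the raw data, then post-multiply by the score coefficient matrix
W <- cbind(FS1 = c(0.207, 0.170, 0.216, 0.265, 0.262, 0.256),
           FS2 = c(0.363, 0.489, 0.332, -0.288, -0.277, -0.317))
Z <- scale(hozdata)  # standardised scores (mean 0, sd 1) for each variable
scores <- Z %*% W    # one row of factor scores per subject
head(scores)         # subject 1, FS1 should be about -0.49, as above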
Descriptive Statistics
                                      N    Minimum   Maximum   Mean   Std. Deviation
REGR factor score 1 for analysis 1    73    -2.63      2.56     .00        1.00
REGR factor score 2 for analysis 1    73    -3.29      3.05     .00        1.00
Valid N (listwise)                    73
Exercise 5.
1. Why would we expect the factors not to be correlated in the above scatterplot?
2. Would you expect the scatterplot to always show uncorrelated variables for any type of factor
extraction/rotation strategy? Hint: you may need to run the analysis several times with different
extraction/rotation methods to see what you get to confirm your suspicions.
3. Given that the factors are 'standardised' scores, and assuming that they are also normally distributed, what would a score of around -2 for the first factor suggest? (Refer back to the first statistics course concerning Z values if necessary.)
4 Summary - to factor analyse or not
Loehlin 2004 (pp. 230-6) provides an excellent in-depth criticism of latent variable modelling (of which factor analysis is one example). In contrast to these criticisms, a more positive approach can be found in many books about factor analysis, for example chapter 7, entitled Factor Analysis, in Bartholomew, Steele, Moustaki & Galbraith 2008; also the proceedings of the conference held in 2004, entitled 100 Years of Factor Analysis, are available at http://www.fa100.info/
Factor analysis forms the basis of a more complex technique called Structural Equation Modelling (SEM), and all we have done here, and much more, can be achieved using SEM. SEM provides much more sophistication than traditional exploratory factor analysis, although a traditional EFA is often the first step to a full SEM analysis; notably, we can compare models and also analyse the overall fit of a model. This will be discussed in a chapter re-analysing this data using a SEM framework.
The next section provides a worked example of a typical PCA/factor analysis exam question. I have also provided two practical sections: one describing how to carry out a PCA/factor analysis using a correlation matrix as the basis rather than raw data, and another showing how to carry out the equivalent analysis in R.
5 A typical exam question
In this section I will provide an answer to a typical exam question based on the data below (scores for ten subjects on six tests; the column order is taken from the correlation table that appears later in the output):

Subject   anx   agora   arach   adven   extra   social
1          71     68      80      44      54      52
2          39     30      41      77      90      80
3          46     55      45      50      46      48
4          33     33      39      57      64      62
5          74     75      90      45      55      48
6          39     47      48      91      87      91
7          66     70      69      54      44      48
8          33     40      36      31      37      36
9          85     75      93      45      50      42
10         45     35      44      70      66      78

The exam question: Conduct a principal component analysis to determine how many important components are present in the data. To what extent are the important components able to explain the observed correlations between the variables? Rotate the components in order to make their interpretation more understandable in terms of a specific theory. Which tests have high loadings on each of the rotated components? Try to identify and name the rotated components.
5.1 Data layout and initial inspection
It is possible, as we have seen before, to look at the scatterplots of all the variables with one another. As before, we are looking for significant correlations, and possibly clusters of them. We also want to check that there are no perfectly correlated variables (which would need removing). The following output was generated by the Graphs, Scatterplot, Matrix command.
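Before looking at the SPSS output, the same inspection can be sketched in R. Two hedges: only the ten rows shown above are available in this extract (the original dataset may have more cases), and the column names are assumed to match the variables in the output below.

# enter the exam data shown above and inspect it
examdata <- data.frame(
  anx    = c(71, 39, 46, 33, 74, 39, 66, 33, 85, 45),
  agora  = c(68, 30, 55, 33, 75, 47, 70, 40, 75, 35),
  arach  = c(80, 41, 45, 39, 90, 48, 69, 36, 93, 44),
  adven  = c(44, 77, 50, 57, 45, 91, 54, 31, 45, 70),
  extra  = c(54, 90, 46, 64, 55, 87, 44, 37, 50, 66),
  social = c(52, 80, 48, 62, 48, 91, 48, 36, 42, 78))
pairs(examdata)          # scatterplot matrix, the R analogue of the SPSS command
round(cor(examdata), 3)  # the correlations themselves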
[Scatterplot matrix and the full correlation table for anx, agora, arach, adven, extra and social omitted in this extraction]
5.2 Carrying out the Principal Component Analysis
If Covariance matrix is selected, more weight is given to variables with higher standard deviations. With Correlation matrix, all the variables are given equal weight (by standardising them).
Click on Continue and then on Scores to select which type of factor scores you want saved in the data set; select Regression:
Exercise 6.
Add some notes below about some of the various options in the dialogue
boxes shown above.
The KMO value indicates that what we have is pretty poor – just above miserable; however, Bartlett's test of sphericity, with an associated p value of <0.001, indicates that we can proceed.

5.5 Communalities
Next is a table of estimated communalities (i.e. estimates of that part of the variability in each variable that is shared with the others, and which is not due to measurement error or influences unique to the observed variable). The initial values can be ignored.

Communalities
          Initial   Extraction
Anxiety    1.000       .976
Agora      1.000       .942
Arachno    1.000       .982
Advent     1.000       .954
Extrav     1.000       .942
Sociab     1.000       .980
Extraction Method: Principal Component Analysis.
5.6 Eigenvalues and scree plot

Total Variance Explained
            Initial Eigenvalues               Extraction Sums of Squared Loadings   Rotation Sums of Squared Loadings
Component   Total  % of Variance  Cumulative %  Total  % of Variance  Cumulative %  Total  % of Variance  Cumulative %
1           4.164     69.397        69.397      4.164     69.397        69.397      2.895     48.251        48.251
2           1.612     26.862        96.259      1.612     26.862        96.259      2.881     48.008        96.259
3            .144      2.396        98.655
4            .052       .867        99.522
5            .023       .383        99.905
6            .006       .095       100.000
Extraction Method: Principal Component Analysis.

Only the first two components have eigenvalues greater than 1, so two components are retained.
This conclusion is supported by the scree plot (which is actually simply displaying the same data visually):

[Scree plot omitted in this extraction: eigenvalue plotted against component number, levelling off sharply after the second component]

5.7 Unrotated factor loadings
The unrotated factor loadings are presented next. These show the expected pattern, with high positive and high negative loadings on the first factor:

Component Matrix(a)
           Component 1   Component 2
Anxiety       .824          .545
Agora         .878          .415
Arachno       .799          .586
Advent       -.818          .533
Extrav       -.804          .544
Sociab       -.873          .467
Extraction Method: Principal Component Analysis.
a. 2 components extracted.

The next table shows the extent to which the original correlation matrix can be reproduced from two factors:
Reproduced Correlations
                         Anxiety   Agora   Arachno   Advent   Extrav   Sociab
Reproduced    Anxiety     .976b     .949     .978     -.383    -.365    -.464
Correlation   Agora       .949      .942b    .944     -.497    -.479    -.572
              Arachno     .978      .944     .982b    -.341    -.323    -.423
              Advent     -.383     -.497    -.341      .954b    .948     .963
              Extrav     -.365     -.479    -.323      .948     .942b    .956
              Sociab     -.464     -.572    -.423      .963     .956     .980b
Residual(a)   Anxiety               -.028     .002     -.006     .000     .002
              Agora      -.028               -.023      .036    -.028     .003
              Arachno     .002     -.023               -.025     .022    -.002
              Advent     -.006      .036    -.025               -.042     .003
              Extrav      .000     -.028     .022     -.042              -.022
              Sociab      .002      .003    -.002      .003    -.022
Extraction Method: Principal Component Analysis.
a. Residuals are computed between observed and reproduced correlations. There are 0 (.0%) nonredundant residuals with absolute values greater than 0.05.
b. Reproduced communalities

The small residuals show that there is very little difference between the reproduced correlations and the correlations actually observed between the variables. The two factor solution provides a very accurate summary of the relationships in the data. We have now answered the second part of the question: "To what extent are the important components able to explain the observed correlations between the variables?"
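You can verify the reproduced correlations yourself: with uncorrelated components, the reproduced matrix is simply the loadings matrix multiplied by its transpose. A minimal R sketch using the Component Matrix values above (small differences from the SPSS table are just rounding of the loadings):

L <- rbind(Anxiety = c( .824, .545),
           Agora   = c( .878, .415),
           Arachno = c( .799, .586),
           Advent  = c(-.818, .533),
           Extrav  = c(-.804, .544),
           Sociab  = c(-.873, .467))
round(L %*% t(L), 3)  # e.g. Anxiety-Agora: .824*.878 + .545*.415 ~ .949
                      # diagonal = reproduced communalities (e.g. Anxiety .976)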
5.8 Rotation
The next table shows the factor loadings that result from Varimax rotation:
[Rotated Component Matrix omitted in this extraction]
These two rotated factors are just as good as the initial factors in explaining and reproducing the observed
correlation matrix (see the table below). In the rotated factors, Adventure, Extraversion and Sociability all have
high positive loadings on the first factor (and low loadings on the second), whereas Anxiety, Agoraphobia, and
Arachnophobia all have high positive loadings on the second factor (and low loadings on the first).
Total Variance Explained
            Initial Eigenvalues               Extraction Sums of Squared Loadings   Rotation Sums of Squared Loadings
Component   Total  % of Variance  Cumulative %  Total  % of Variance  Cumulative %  Total  % of Variance  Cumulative %
1           4.164     69.397        69.397      4.164     69.397        69.397      2.895     48.251        48.251
2           1.612     26.862        96.259      1.612     26.862        96.259      2.881     48.008        96.259
3            .144      2.396        98.655
4            .052       .867        99.522
5            .023       .383        99.905
6            .006       .095       100.000
Extraction Method: Principal Component Analysis.
Above is the table showing the eigenvalues and percentage of variance explained again. The middle part of the
table shows the eigenvalues and percentage of variance explained for just the two factors of the initial solution
that are regarded as important. Clearly the first factor of the initial solution is much more important than the
second. However, in the right hand part of the table, the eigenvalues and percentage of variance explained for
the two rotated factors are displayed. Whilst, taken together, the two rotated factors explain just the same
amount of variance as the two factors of the initial solution, the division of importance between the two rotated
factors is very different. The effect of rotation is to spread the importance more or less equally between the two
rotated factors. You will note in the above table that the eigenvalues of the rotated factors are 2.895 and 2.881,
compared to 4.164 and 1.612 in the initial solution. I hope that this makes it clear how important it is that you
extract an appropriate number of factors. If you extract more than are needed, then rotation will ensure that the
variability explained is more or less evenly distributed between them. If the data are really the product of just
two factors, but you extract and rotate three, the resulting solution is not likely to be very informative.
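A minimal R sketch of that last point, reusing the loadings matrix L entered above: varimax rotation (here stats::varimax, whose Kaiser normalisation is on by default) redistributes the explained variance between the components but leaves the total untouched:

colSums(L^2)                      # before rotation: about 4.164 and 1.612
rot <- varimax(L)                 # orthogonal rotation with Kaiser normalisation
colSums(unclass(rot$loadings)^2)  # after rotation: about 2.895 and 2.881
sum(colSums(L^2))                 # the total explained, 5.776, is unchanged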
5.10 Summary
In answering the question requiring us to conduct a principal component analysis we went through a series of
clearly defined stages:
1. Data preparation (most of it had already been done in this example)
2. Observed correlation matrix inspection
3. Statistics to assess suitability of dataset for basis of PCA (KMO, Bartlett’s and determinant measures)
4. Factor extraction - PCA
5. Factor rotation – to allow interpretation
6. Factor name attribution
7. Factor score interpretation
Exercise 7.
For each of the above stages add a sentence stating its purpose along with another giving the finding(s) from
the analysis above.
------------------------------------ end of exam answer -----------------------------------------------
6 PCA and factor analysis with a set of correlations or covariances in SPSS
The first thing to notice is the set of correlations, which you will find in section 2.1. A quick explanation is given below:

MATRIX DATA VARIABLES = Rowtype_ wordmean sentence paragrap lozenges cubes visperc.
BEGIN DATA
CORR 1.000
CORR .696 1.000
CORR .743 .724 1.000
CORR .369 .335 .326 1.000
CORR .184 .179 .211 .492 1.000
CORR .230 .367 .343 .492 .483 1.000
N 73 73 73 73 73 73
END DATA.
Warning: once you have used SPSS syntax to define the data as a correlation matrix, you cannot use the SPSS dialog boxes to carry out any subsequent analysis. If you do, you just get rubbish out.
To actually run the SPSS syntax you need to highlight the code you wish to run and then click on the run button
(below right).
FACTOR
/MATRIX = IN (CORR=*)
/PRINT KMO EXTRACTION ROTATION
/PLOT EIGEN ROTATION
/CRITERIA MINEIGEN(1) ITERATE(25)
/EXTRACTION PC
/CRITERIA ITERATE(25)
/ROTATION VARIMAX.
You can always use the dialog boxes to get a rough idea of what the syntax should look like, by clicking on the Paste button, before trying to write it yourself. For example, the syntax above for the factor analysis was largely generated from the factor analysis dialog boxes, except the line /MATRIX = IN (CORR=*), which instructs SPSS to use as the data the correlation matrix we previously defined, also using SPSS syntax.
You can get help about SPSS syntax in various ways, but I personally prefer looking up the entry in the Command Syntax Reference, which is a pdf file accessed from the help menu. Don't try printing this out: the FACTOR entry alone is nearly 20 pages long, providing numerous examples, and often it is just a case of adapting one of them to your own particular needs.
install.packages("psych", dependencies=TRUE)
library(psych)
hozdata <-read.delim(file=file.choose()) #the required file is available to download named grnt_fem.dat
# put the data into a matrix of correlations
hozdatamatrix <- cor(hozdata)
# print out the correlation matrix but ask for numbers to 4 decimal places
round(hozdatamatrix,4)
# bartlett test - want a small p value here to indicate the correlation matrix is not all zeros
cortest.bartlett(hozdata)
# unable to calculate the KMO - see Field 2012 p776
# but we can compute the determinant, which needs to be above 0.00001
# for us to be able to continue
det(hozdatamatrix)
# appropriate value therefore can continue
# do a pca analysis use the principal function in the psych package
model1<- principal(hozdata, nfactors = 6, rotate = "none")
model1
# get the scree plot
plot(model1$values, type = "b")
# now know how many components we want to extract = 2
# rerun the analysis specifying this
model2 <- principal(hozdata, nfactors = 2, rotate = "none")
model2
# can find the reproduced correlations and the communalities (the diagonals)
factor.model(model2$loadings)
# can also find the differences between the observed and model estimated correlations
# the diagonals represent the uniqueness values (1- R squared):
residuals <- factor.residuals(hozdatamatrix, model2$loadings)
residuals
# nice to plot the residuals to check they are normally distributed
hist(residuals)
# now to the rotation
model3 <- principal(hozdata, nfactors = 2, rotate = "varimax")
model3
# we can tell the loading matrix to stop printing loadings below
# a specific value, say 0.3, and can also get it sorted by size of loading
# h2 is the communality; u2 is the uniqueness
print.psych(model3, cut = 0.3 , sort = TRUE)
# now to do a principal axis factor analysis
# fm specifies the factoring method; rotate options = none/varimax/oblimin/promax etc.
model4 <- fa(hozdata, nfactors = 2,fm = "pa", rotate = "none")
model4
# repeat the analysis with a varimax rotation
model5 <- fa(hozdata, nfactors = 2,fm = "pa", rotate = "varimax")
model5
# repeat the analysis with a promax rotation (correlated factors)
model6 <- fa(hozdata, nfactors = 2,fm = "pa",rotate="promax")
model6
# the factor loadings in the above are not the same as that in SPSS
# this is because SPSS scales the values using something called Kaiser normalisation
# the psych package provides a function to do this
# best to input the non rotated form into the function (info. from help file)
model7 <- kaiser(model4, rotate="promax")
model7
# the above output is slightly more like that of SPSS
# to obtain factor scores
# then for PCA we just add scores = TRUE
model3a <- principal(hozdata, nfactors = 2, rotate = "varimax", scores = TRUE)
model3a
# to print out all the scores:
model3a$scores
# to print out just the top 10 scores:
head(model3a$scores, 10)
# to save the above values we need to add them to a dataframe
factorscores <- cbind(model3a$scores)
# then we can produce a plot of the scores:
plot(factorscores)
If you run the above script you will notice that there are many more statistics than those produced by SPSS. This is because R takes a different approach to the factor modelling process, considering of primary importance how well the model (i.e. the model correlations) fits the observed correlations. This is in the spirit of the SEM approach, which I will discuss in another chapter.
7.1 Using a matrix instead of raw data
The analysis above used raw data, but it is easy to adapt the R code to use a correlation matrix instead, for both PCA and factor analysis.
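As a minimal sketch, both principal() and fa() accept a correlation matrix in place of raw data; supply n.obs so the sample-size-dependent statistics can be computed. Here hozdatamatrix is the matrix created from the raw data earlier, but a matrix typed in by hand (as in the SPSS MATRIX DATA example) works just the same:

model2m <- principal(hozdatamatrix, nfactors = 2, rotate = "none", n.obs = 73)
model2m  # compare with model2 from the raw-data analysis
model6m <- fa(hozdatamatrix, nfactors = 2, fm = "pa", rotate = "promax", n.obs = 73)
model6m  # compare with model6 from the raw-data analysis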
You can also use the table2matrix() function in the psych package to convert an R table to a matrix. Also in the psych package are various read.clipboard() functions, which allow you to copy a matrix of correlations from something like Excel or Word and then paste it directly into R (see the psych package manual for details).
Optional Exercise 8.
Returning to the patients' satisfaction with their GP discussed in the introduction, here are the correlations for the 14 items discussed there.
Item 1 2 3 4 5 6 7 8 9 10 11 12 13 14
1 1.00
2 0.56 1.00
3 0.63 0.58 1.00
4 0.64 0.46 0.35 1.00
5 0.52 0.44 0.50 0.52 1.00
6 0.70 0.51 0.49 0.52 0.54 1.00
7 0.45 0.48 0.28 0.34 0.38 0.63 1.00
8 0.61 0.68 0.44 0.43 0.56 0.64 0.49 1.00
9 0.79 0.58 0.66 0.55 0.66 0.64 0.34 0.70 1.00
10 0.57 0.63 0.40 0.55 0.54 0.58 0.65 0.62 0.62 1.00
11 0.32 0.27 0.33 0.21 0.13 0.26 0.22 0.24 0.17 0.25 1.00
12 0.55 0.72 0.51 0.49 0.63 0.62 0.47 0.75 0.70 0.67 0.31 1.00
13 0.69 0.51 0.60 0.54 0.51 0.73 0.44 0.50 0.66 0.53 0.24 0.65 1.00
14 0.62 0.42 0.33 0.47 0.38 0.58 0.51 0.49 0.53 0.56 0.23 0.51 0.56 1.00
Quoting Everitt and Dunn 2001 p.283: "The results [Principal factor analysis + varimax rotation] suggests that we should use a three-factor solution. The rotated factors might be labelled 'trust in doctor', 'confidence in doctor's ability' and 'confidence in recommended treatment'." Carry out an appropriate analysis and demonstrate that this is indeed the case.
You might wish to try using one of the read.clipboard() functions.
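As a starter, here is a minimal hedged sketch using read.clipboard.lower() from the psych package: copy just the numbers of the lower triangle above (without the Item header row and column) to the clipboard first. The sample size for the Cope et al data is not given in this extract, so n.obs is left as a placeholder you will need to replace:

library(psych)
satR <- read.clipboard.lower()               # paste the lower-triangular correlations
satFA <- fa(satR, nfactors = 3, fm = "pa",   # principal (axis) factor analysis
            rotate = "varimax", n.obs = NA)  # replace NA with the real sample size
print.psych(satFA, cut = 0.4, sort = TRUE)   # show the three rotated factors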
8 Summary
This chapter has provided a large amount of information, focusing on the practical aspects of carrying out a PCA or factor analysis in SPSS and R. The first section focused on interpreting the output at each stage; we then considered a typical exam question and finally the matrix input approach, using both SPSS and R.
9 References
Bartholomew D J, Steele F, Moustaki I, Galbraith J I. 2008 (2nd ed.) Analysis of Multivariate Social Science Data. CRC Press.
Everitt B S, Dunn G. 2001 (2nd ed.) Applied Multivariate Data Analysis. Arnold.
Everitt B S, Hothorn T. 2010 (2nd ed.) A Handbook of Statistical Analyses Using R. CRC Press.
Everitt B S, Hothorn T. 2011 An Introduction to Applied Multivariate Analysis with R. Springer.
Field A. 2012 Discovering Statistics Using R. Sage.
Kinnear P R, Gray C D. 2004 SPSS 12 Made Simple. Hove: Psychology Press.
Loehlin J C. 2004 (4th ed.) Latent Variable Models: An Introduction to Factor, Path, and Structural Equation Analysis. Erlbaum.