
EXPLORATORY FACTOR ANALYSIS

ORIGINALLY PRESENTED BY: DAWN HUBER FOR THE COE FACULTY RESEARCH CENTER
MODIFIED AND UPDATED FOR EPS 624/725 BY: ROBERT A. HORN & WILLIAM MARTIN (SP. 08)

The purpose of this lesson on Exploratory Factor Analysis is to understand and apply statistical techniques to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. Factors are thought to reflect underlying processes that have created the correlations among variables.

INTRODUCTION
The dataset (FACTOR.sav) that we will be using is part of a larger data set from Tabachnick and Fidell (2007). The study involved 369 middle-class, English-speaking women between the ages of 21 and 60 who completed the Bem Sex Role Inventory (BSRI). Respondents attribute traits to themselves by assigning numbers between 1 (never or almost never true of me) and 7 (always or almost always true of me) to each of the items. Forty-four items from the BSRI were selected for this research example.

DATA SCREENING
SAMPLE SIZE

A general rule of thumb is to have at least 300 cases for factor analysis. Solutions that have several high-loading marker variables (> .80) do not require such large sample sizes (about 150 cases should be sufficient) as solutions with lower loadings (Tabachnick & Fidell, 2007, p. 613). *Our data set has an adequate sample size of 369 cases. Bryant and Yarnold (1995) state that one's sample should be at least five times the number of variables. The subjects-to-variables ratio should be 5 or greater. Furthermore, every analysis should be based on a minimum of 100 observations regardless of the subjects-to-variables ratio (p. 100).

MISSING DATA

To check for missing data:
Click Analyze → Descriptive Statistics → Frequencies
Click over all 44 items to the Variable(s): box (except Subno)
De-select [ ] Display frequency tables (this will produce a warning message; simply click OK)
Click OK

The first table of the output identifies missing values for each item. Scrolling across the output, you will notice that there are no missing values for this set of data. If there were missing data, you would choose one option for handling it: estimate the missing values, delete cases or variables, or analyze a pairwise-complete correlation matrix. If the pattern of missingness is nonrandom or the sample size is small, consider estimation, but be aware that estimation can lead to overfitting the data, resulting in correlations that are too high. Please refer to Tabachnick and Fidell (2007) to obtain more information about deleting and dealing with missing data.

DETECTING MULTIVARIATE OUTLIERS


For the sake of this training, we will start with an assessment of multivariate outliers. However, we would usually begin by screening for univariate outliers and checking assumptions. Many statistical methods are sensitive to outliers, so it is important to identify outliers and make decisions about what to do with them. Recall that a multivariate outlier is a case with an unusual combination of scores on two or more variables.

REASONS FOR OUTLIERS (TABACHNICK & FIDELL, 2007)
1. Incorrect data entry.
2. Failure to specify missing values in the computer syntax, so missing values are read as real data.
3. The outlier is not a member of the population that you intended to sample.
4. The outlier is representative of the population you intended to sample, but the population has more extreme scores than a normal distribution.

To check for multivariate outliers:
Click Analyze → Regression → Linear
Dependent: subno
Independent(s): All remaining 44 items
Click Save
Under Distances, check [] Mahalanobis
Click Continue
Click OK


An output page will be produced. Minimize the output page and go to the Data View page. Once there, scroll over to the last column to see the Mahalanobis distance computed across all 44 variables for each case. To detect whether a case is a multivariate outlier, you must know the critical value that the Mahalanobis distance must exceed. Using the criterion of α = .001 with 44 df (the number of variables), the critical χ² = 78.75. According to Tabachnick and Fidell (2007), we are not using N − 1 for df because Mahalanobis distance is evaluated as χ² with degrees of freedom equal to the number of variables (p. 99). Thus, all Mahalanobis distance values must be examined to see whether they exceed the critical value of χ² = 78.75.

Due to the large number of cases to examine, an easy way to analyze all the Mahalanobis distance values is to:
Click Data → Sort Cases
Scroll down the variable list to the last variable, highlight the Mahalanobis Distance variable (MAH_1), and click it over to the Sort by: box
Under Sort Order, select Descending
Click OK

We can also sort by moving the cursor over the variable of interest (e.g., MAH_1), right-clicking the mouse, and clicking Sort Descending. The values in the Mahalanobis (MAH_1) column will then be arranged in descending order from highest to lowest. On the Data View page, examine the top values and determine how many cases meet the criterion for a multivariate outlier (i.e., > 78.75). For this set of data there should be 25 cases that are considered multivariate outliers, leaving 344 non-outlying cases, still an acceptable number of cases. We are opting to delete the 25 outlying cases. To delete the cases, highlight the gray row numbers 1 through 25 (on the left of the screen) and press the Delete key. Save As the modified data set: FACTORMINUSMVOUTLIERS.

OPTIONS FOR DEALING WITH OUTLIERS (TABACHNICK & FIDELL, 2007)
1. Delete a variable that may be responsible for many outliers, especially if it is highly correlated with other variables in the analysis.
2. If you decide that cases with extreme scores are not part of the population you sampled, then delete them.

3. If cases with extreme scores are considered part of the population you sampled, then a way to reduce the influence of a univariate outlier is to transform the variable to change the shape of the distribution to be more normal. Tukey said you are merely reexpressing what the data have to say in other terms (Howell, 2007).
4. Another strategy for dealing with a univariate outlier is to assign the outlying case(s) a raw score on the offending variable that is one unit larger (or smaller) than the next most extreme score in the distribution (Tabachnick & Fidell, 2007, p. 77).
5. Univariate transformations and score alterations often help reduce the impact of multivariate outliers, but such cases can still be a problem. These cases are usually deleted (Tabachnick & Fidell, 2007).

All transformations, changes to scores, and deletions are reported in the results section with the rationale and with citations.
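For readers who want to verify the Mahalanobis screening described above outside of SPSS, the following is a minimal Python sketch of the same computation. It is not part of the original SPSS workflow; the file name factor.csv and the ID column name subno are assumptions about how the data might have been exported.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Assumed CSV export of FACTOR.sav; "subno" is the case ID column (hypothetical names).
data = pd.read_csv("factor.csv")
items = data.drop(columns=["subno"])          # the 44 BSRI items

X = items.to_numpy(dtype=float)
diff = X - X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

# Squared Mahalanobis distance of each case from the centroid of the 44 items
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Critical chi-square at alpha = .001 with df = number of variables (44) -> 78.75
critical = stats.chi2.ppf(1 - .001, df=X.shape[1])
print(f"critical chi-square = {critical:.2f}")

outliers = data.loc[d2 > critical, "subno"]
print(f"{len(outliers)} multivariate outliers flagged")
```

With this data set the flagged count should agree with the 25 outlying cases identified in SPSS, though small numerical differences are possible depending on how the export was done.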

MULTICOLLINEARITY AND SINGULARITY


Multicollinearity occurs when the IVs are highly correlated. Singularity occurs when you have redundant variables. To test for multicollinearity and singularity, use the following SPSS commands:
Click Analyze → Regression → Linear
Click Reset
Dependent: subno
Independent(s): All 44 items

Be sure not to include MAH_1
Click Statistics
Check [] Collinearity diagnostics
Click Continue
Click OK

This will produce an output page. If the determinant of R and the eigenvalues associated with some factors approach 0, multicollinearity or singularity may exist. To investigate further, look at the SMCs for each variable where it serves as DV with all other variables as IVs (Tabachnick & Fidell, 2007, p. 614).


Looking at the output table that follows, under Collinearity Statistics, look at the Tolerance values for each item on the test. We want the Tolerance values to be high, closer to 1.0. Next, we want to explore the SMCs (squared multiple correlations) of each variable where it serves as DV with the rest as IVs in multiple correlation (Tabachnick & Fidell, 2007). Many programs, including SPSS, convert the SMC for each variable to tolerance (1 − SMC) and report tolerance instead of SMC. Thus, we have to calculate the SMCs ourselves. Next to the tolerance values in the table, calculate the SMCs for the first ten items (SMC = 1 − Tolerance). We want the SMCs to be low, closer to .00. If any of the SMCs are one (1), then singularity is present. If any of the SMCs are very large (i.e., near one), then multicollinearity is present (Tabachnick & Fidell, 2007).

The tolerance and SMC values were fine for this group of data. However, if the tolerance values were too low, we would want to scroll down to the next table and examine the Condition Index for each dimension. According to Tabachnick and Fidell (2007), we do not want Condition Index values to be greater than 30. Examine the Condition Index for all dimensions. As you can see, the last 25 dimensions have Condition Indexes that are greater than 30. Because of these high Condition Indexes, you would next need to examine the Variance Proportions for those high Condition Index dimensions, which are located next to the Condition Index. According to Tabachnick and Fidell (2007), we do not want two Variance Proportions to be greater than .50 for any dimension. To explain further, look at the Variance Proportions for Dimension 45. Scroll across the page and see whether there are two items with Variance Proportions greater than .50 for Dimension 45.

Next, you have to make some decisions about multicollinearity. Because we did not find two Variance Proportions greater than .50 for any dimension, we may decide that we do not have evidence of multicollinearity. However, one can also combine evidence (explore the SMCs, Tolerance values, Condition Indexes, and Variance Proportions) and decide whether there is combined evidence of multicollinearity. Generally, if both the Condition Index and the Variance Proportions are high, then there is evidence of multicollinearity. For this set of data we have no evidence that multicollinearity or singularity exists. Save the output as MULTICOLLINEARITY.


The SPSS Coefficients output (Dependent Variable: Subject identification) reports the unstandardized coefficients (B, Std. Error), standardized coefficients (Beta), t, and Sig. for each item, along with the Collinearity Statistics used here. Only the Tolerance and VIF columns are reproduced below.

Coefficients (Collinearity Statistics)
Item                             Tolerance    VIF
helpful                            .626      1.598
self reliant                       .539      1.856
defend beliefs                     .583      1.715
yielding                           .770      1.299
cheerful                           .508      1.969
independent                        .462      2.167
athletic                           .742      1.347
shy                                .675      1.481
assertive                          .462      2.163
strong personality                 .407      2.459
forceful                           .434      2.302
affectionate                       .447      2.235
flatter                            .704      1.420
loyal                              .609      1.641
analyt                             .758      1.319
feminine                           .642      1.557
sympathy                           .547      1.828
moody                              .619      1.615
sensitiv                           .514      1.946
undstand                           .383      2.608
compassionate                      .351      2.852
leadership ability                 .237      4.214
eager to soothe hurt feelings      .565      1.770
willing to take risks              .578      1.731
makes decisions easily             .511      1.958
self sufficient                    .367      2.722
conscientious                      .601      1.664
dominant                           .438      2.284
masculin                           .684      1.462
willing to take a stand            .427      2.341
happy                              .464      2.154
soft spoken                        .598      1.672
warm                               .385      2.599
truthful                           .644      1.553
tender                             .395      2.529
gullible                           .703      1.422
act as a leader                    .239      4.190
childlik                           .704      1.421
individualistic                    .621      1.610
use foul language                  .887      1.128
love children                      .716      1.397
competitive                        .535      1.868
ambitious                          .541      1.847
gentle                             .420      2.378

a. Dependent Variable: Subject identification
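As a cross-check on the SPSS collinearity diagnostics, the sketch below computes the SMC, tolerance, and VIF for each item by regressing it on the remaining items. It is illustrative only and assumes an items DataFrame of the 44 BSRI items (for example, the one built in the earlier Mahalanobis sketch).

```python
import numpy as np
import pandas as pd

def collinearity_table(items: pd.DataFrame) -> pd.DataFrame:
    """For each column, regress it on all other columns and report
    SMC (R-squared), Tolerance (1 - SMC), and VIF (1 / Tolerance)."""
    rows = []
    for col in items.columns:
        y = items[col].to_numpy(dtype=float)
        X = items.drop(columns=[col]).to_numpy(dtype=float)
        X = np.column_stack([np.ones(len(X)), X])      # add intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
        resid = y - X @ beta
        smc = 1 - resid.var() / y.var()                # squared multiple correlation
        tol = 1 - smc
        rows.append({"item": col, "SMC": smc, "Tolerance": tol, "VIF": 1 / tol})
    return pd.DataFrame(rows)

# Example: the first ten rows should mirror the hand calculation of SMC = 1 - Tolerance
# print(collinearity_table(items).head(10))
```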


NORMALITY
If Principal Factor Analysis is used descriptively, then assumptions about distributions are not essential. However, normality of variables enhances the solution (Tabachnick & Fidell, 2007). When the number of factors is determined using statistical inference, multivariate normality is assumed. Normality among single variables is assessed by skewness and kurtosis (Tabachnick & Fidell, 2007, p. 613) and, as such, the distributions of the 44 variables need to be examined for skewness and kurtosis.

To obtain the skewness and kurtosis of the 44 variables:
Click Analyze → Descriptive Statistics → Frequencies
Click Reset
Click over all 44 items to the Variable(s): box (be sure not to include Subno and MAH_1)
Click Statistics
Under Dispersion, check [] all
Under Central Tendency, check [] all
Under Distribution, check [] all
Click Continue
Click Charts
Select Histograms and check [] With normal curve
Click Continue
De-select [ ] Display frequency tables
Click OK

An output will be produced. Scroll to the top of the output to Frequencies. You will see the skewness values and their standard error values for all 44 items.


Skewness: A distribution that is not symmetric but has more cases (more of a tail) toward one end of the distribution than the other is said to be skewed (Norusis, 1994).
Value of 0 = normal
Positive value = positive skew (tail going out to the right)
Negative value = negative skew (tail going out to the left)

Divide the skewness statistic by its standard error. We want to know whether this standard score significantly departs from normality. Concern arises when the skewness statistic divided by its standard error exceeds ±3.29 (p < .001, two-tailed test) (Tabachnick & Fidell, 2007). To illustrate, calculate the standardized skewness of the item labeled helpful and provide the information asked for below. Keep in mind that you would do this for each of the 44 items.

helpful
Skewness Value ÷ Std. Error = Skewness Standard Score:

Direction of the Skewness:

Significant Departure? (yes, no):

Scroll to the top of the output to Frequencies. You will see the kurtosis values and their standard error values for all 44 items.

Kurtosis: The relative concentration of scores in the center, the upper and lower ends (tails), and the shoulders (between the center and the tails) of a distribution (Norusis, 1994).
Value of 0 = mesokurtic (normal, symmetric)
Positive value = leptokurtic (shape is more narrow, peaked)
Negative value = platykurtic (shape is more broad, widely dispersed, flat)

Divide the kurtosis statistic by its standard error. We want to know whether this standard score significantly departs from normality. Concern arises when the kurtosis statistic divided by its standard error exceeds ±3.29 (p < .001, two-tailed test) (Tabachnick & Fidell, 2007). To illustrate, calculate the standardized kurtosis of the item labeled helpful and provide the information asked for below. Keep in mind that you would do this for each of the 44 items.

helpful
Kurtosis Value ÷ Std. Error = Kurtosis Standard Score:

Direction of the Kurtosis:

Significant Departure? (yes, no):
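The same standardized skewness and kurtosis checks can be scripted. The sketch below is a minimal illustration, again assuming the items DataFrame from the earlier sketches; note that it uses the large-sample approximations sqrt(6/n) and sqrt(24/n) for the standard errors, so the z values may differ slightly from SPSS, which uses exact small-sample formulas.

```python
import pandas as pd
from scipy import stats

def normality_check(items: pd.DataFrame, z_crit: float = 3.29) -> pd.DataFrame:
    """Standardized skewness and kurtosis per item, flagged against the 3.29 criterion."""
    n = len(items)
    se_skew = (6 / n) ** 0.5            # approximate standard error of skewness
    se_kurt = (24 / n) ** 0.5           # approximate standard error of kurtosis
    out = pd.DataFrame({
        "skew": items.apply(lambda s: stats.skew(s, bias=False)),
        "kurtosis": items.apply(lambda s: stats.kurtosis(s, bias=False)),
    })
    out["z_skew"] = out["skew"] / se_skew
    out["z_kurt"] = out["kurtosis"] / se_kurt
    out["flag"] = (out["z_skew"].abs() > z_crit) | (out["z_kurt"].abs() > z_crit)
    return out

# Example (assumes a column literally named "helpful"):
# print(normality_check(items).loc[["helpful"]])
```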


Overall, many of the variables are negatively skewed and a few are positively skewed. However, because the BSRI is already published and in use, no deletion of variables or transformations of them is performed (Tabachnick & Fidell, 2007, p. 652). Save the output as NORMALITY.

LINEARITY
Multivariate normality implies linearity, so linearity among pairs of variables is assessed through inspection of scatterplots (Tabachnick & Fidell, 2007, p. 613). With 44 variables, however, examination of all pairwise scatterplots (about 1,000 plots) is impractical. Therefore, to spot-check for linearity, we will examine Loyal (with strong negative skewness) and Masculin (with strong positive skewness).

To create a scatterplot:
Click Graphs → Legacy Dialogs → Scatter/Dot
Click Simple Scatter (this should be the default)
Click Define
Y-Axis: Masculin
X-Axis: Loyal
Click OK

An output (graph) will then be produced. Save the output as LINEARITY. The scatterplot should show a balanced spread of scores. According to Tabachnick and Fidell (2007), when assessing bivariate scatterplots, if they are oval-shaped, the variables are normally distributed and linearly related. Although the plot is far from pleasing, and shows departure from linearity as well as the possibility of outliers, there is no evidence of true curvilinearity. And again, transformations are viewed with disfavor considering the variable set and the goals of analysis (Tabachnick & Fidell, 2007, p. 652).
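The same spot check can be drawn with matplotlib. This is a minimal sketch, assuming the items DataFrame from the earlier sketches and that the two columns are named loyal and masculin (hypothetical column names).

```python
import matplotlib.pyplot as plt

# Spot check for linearity between two skewed items (column names assumed).
plt.scatter(items["loyal"], items["masculin"], alpha=0.4)
plt.xlabel("loyal")
plt.ylabel("masculin")
plt.title("Spot check for linearity")
plt.show()
```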


CONDUCTING A PRINCIPAL FACTOR ANALYSIS


Click Analyze → Data Reduction → Factor
Highlight all 44 items and click them over to the Variable(s): box (be sure not to include Subno and MAH_1)
Click Descriptives
Under Statistics, check [] Univariate descriptives and [] Initial solution (default)


Under Correlation Matrix, check [] Coefficients, [] Determinant, and [] KMO and Bartlett's test of sphericity
Click Continue
Click Extraction
Change Method to Principal axis factoring
Under Display, check [] Unrotated factor solution (default) and [] Scree plot
Click Continue
Click OK

An output will then be produced.

INTERPRETATION OF THE EXPLORATORY FACTOR ANALYSIS


To review the study, a sample of 369 middle-class, English-speaking women between the ages of 21 and 60 completed the Bem Sex Role Inventory (BSRI), and 44 items (variables) were used in the analysis. The research question is: Will the factor structure of the BSRI be similar to previous research indicating the presence of between three and five factors underlying the items of the BSRI for this sample of women?

The purpose of factor analysis is to study a set of variables and discover subsets of variables that are relatively independent from one another. The subsets of variables that correlate with each other are combined as factors (linear combinations of observed variables) and are thought to reflect underlying processes (latent variables) that have created the correlations among the observed variables.

Principal components analysis (PCA) uses the total variance (common variance + unique variance + error variance) to derive components (Hair et al., 2006). PCA is an empirical summary of the data set: it aggregates the correlated variables, and the variables produce the components. Common variance is variance in a variable that is shared with all other variables in the analysis. A variable's communality is the estimate of such shared variance. Unique variance is variance associated only with a specific variable and is not explained by correlations with other variables. Error variance is not explained by correlations with other variables either, but it is due to unreliability in data gathering, measurement error, or random selection.


Factor Analysis (FA) focuses only on the common variance (covariance, communality) that each observed variable shares with other observed variables. FA excludes unique and error variance, which confuse the understanding of underlying processes (latent variables). FA is the choice if a theoretical solution of factors is thought to cause or produce scores on the variables. The steps of the analysis are (1) selecting and measuring variables, (2) preparing the correlation matrix, (3) determining the factorability of R, (4) assessing the adequacy of extraction and determining the number of factors, (5) rotating the factors to increase interpretability, and (6) interpreting the results. Once an initial final solution is selected, validation continues using cross-validation, confirmatory factor analysis, and criterion validation methods (Tabachnick & Fidell, 2007).

FACTORABILITY OF R:
There are several sources of information to determine if the R matrix is likely to produce linear combinations of variables as factors. Look at the Correlation Matrix (R) produced on the output page. A matrix that is factorable should include several sizable correlations. The expected size depends, to some extent, on N (larger sample sizes tend to produce smaller correlations), but if no correlation exceeds .30, use of FA is questionable because there is probably nothing to factor analyze (Tabachnick & Fidell, 2007, p. 614). We want the correlations between items to be greater than .30. Interpret the correlation matrix:

High bivariate correlations, however, are not ironclad proof that the correlation matrix contains factors. It is possible that the correlations are between only two variables and do not reflect underlying processes that are simultaneously affecting several variables. For this reason, it is helpful to examine matrices of partial correlations where pairwise correlations are adjusted for effects of all other variables (Tabachnick & Fidell, 2007, p. 614). To examine partial correlations, look at the KMO on the output page. The Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy is the ratio of the sum of all the squared correlation coefficients to the sum of all the squared correlation coefficients plus the sum of all the squared partial correlation coefficients (Norusis, 2003). A partial correlation is a value that measures the strength of the relationship between a dependent variable and a single independent variable when the effects of other independent variables are held constant (Hair et al., 2006).


The following criteria are used to assess and describe the sampling adequacy (Kaiser, 1974):
.90 = Marvelous
.80 = Meritorious
.70 = Middling
.60 = Mediocre
.50 = Miserable
Below .50 = Unacceptable

If the KMO is small, it is a good idea not to do a factor analysis. Please interpret the KMO below:
KMO Value:
Sampling Adequacy Criteria Rating:

Next, look at Bartlett's Test of Sphericity on the output page. Bartlett's (1954) Test of Sphericity is a notoriously sensitive test of the hypothesis that the correlations in a correlation matrix are zero. According to Tabachnick and Fidell (2007), the test is likely to be significant with samples of substantial size even if correlations are very low. Therefore, use of the test is recommended only if there are fewer than, say, five cases per variable (p. 614). Overall, we want Bartlett's Test of Sphericity to be significant so that we can reject the null hypothesis that the correlations are zero. Interpret Bartlett's Test of Sphericity by providing the information asked for below.

Bartlett's Test of Sphericity:
Approx. Chi-Square:
Significance:
What was your decision about the null hypothesis?
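For readers working outside SPSS, the third-party factor_analyzer package provides both checks. This is a minimal sketch, not part of the original handout, and again assumes the items DataFrame of the 44 BSRI items.

```python
# pip install factor_analyzer   (third-party package; not part of the SPSS workflow)
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)

print(f"Bartlett's test: chi-square = {chi_square:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_total:.3f}")   # compare against Kaiser's adequacy criteria above
```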

ADEQUACY OF EXTRACTION AND NUMBER OF FACTORS:


An initial factor analysis is run using principal axis factoring with an unrotated factor solution to determine the adequacy of extraction and to identify the likely number of factors in the solution. PCA is often used for the same initial purpose.


Look at the communalities from the output. A communality of a variable is the proportion of its variance explained by the common factors. The initial communalities are the SMCs of each variable as DV with the others in the sample as IVs. Extraction communalities are SMCs between each variable as DV and the factors as IVs. Communalities range from 0 to 1, where 0 means that the factors don't explain any of the variance and 1 means that all of the variance is explained by the factors. Variables with small extraction communalities cannot be predicted by the factors, and you should consider eliminating them if the communalities are too small (< .20). How many extraction communalities are below .20?

A first check of the number of factors is obtained from the sizes of the eigenvalues reported as part of an initial run with principal axis factoring extraction. An eigenvalue (latent root) represents the amount of variance accounted for by a factor. Because the variance that each standardized variable contributes to a principal factor extraction is 1, a factor with an eigenvalue less than 1 is not as important, from a variance perspective, as an observed variable. Look at the output under the heading Total Variance Explained, then under the heading Initial Eigenvalues. Examine the Initial Eigenvalues and, under Total, count how many factors have values above one (1). How many factors are above an initial eigenvalue of 1.0? There should be 11 factors above one. However, having 11 factors is not parsimonious. Thus, you may use eigenvalues over two (2) as the criterion for specifying which factors are the most worthy of further exploration. Tabachnick and Fidell (2007) say Eigenvalues for the first four factors are all larger than two, and, after the sixth factor, changes in successive eigenvalues are small. This is taken as evidence that there are probably between 4 and 6 factors (p. 657).

A second criterion is the scree test of eigenvalues plotted against factors. Factors, in descending order, are arranged along the abscissa with eigenvalues as the ordinate. Usually the scree plot is negatively decreasing: the eigenvalue is highest for the first factor and moderate but decreasing for the next few factors before reaching small values for the last several factors. Examine the Scree Plot on your output page. According to Norusis (2003), the plot most often will show a distinct break between the steep slope of the large factors and the gradual trailing off of the rest of the factors, the scree that forms at the foot of a mountain. One should use only the factors before the scree begins. According to Hair et al. (2006), starting with the first factor, the plot slopes steeply downward initially and then slowly becomes an approximately horizontal line; the point at which the curve first begins to straighten out is considered to indicate the maximum number of factors to extract (p. 120). You look for the point where the slope of the line drawn through the points changes.


Unfortunately, the scree test is not exact; it involves judgment of where the discontinuity in eigenvalues occurs and researchers are not perfectly reliable judges (Tabachnick & Fidell, 2007). In the example, a single straight line can comfortably fit the first four eigenvalues. After that, another line, with a noticeably different slope, best fits the remaining eight points. Therefore, there appears to be about four (4) factors in the data. Once you have determined the number of factors by these criteria, it is important to look at the rotated loading matrix to determine the number of variables that load on each factor.
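The eigenvalue and scree checks can also be scripted. The sketch below computes the eigenvalues of the item correlation matrix, which should correspond to SPSS's Initial Eigenvalues, and draws a scree plot; it assumes the items DataFrame from the earlier sketches.

```python
import numpy as np
import matplotlib.pyplot as plt

# Eigenvalues of the item correlation matrix, sorted from largest to smallest
corr = np.corrcoef(items.to_numpy(dtype=float), rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

print("factors with eigenvalue > 1:", int((eigvals > 1).sum()))
print("factors with eigenvalue > 2:", int((eigvals > 2).sum()))

# Scree plot: look for the point where the curve straightens out
plt.plot(range(1, len(eigvals) + 1), eigvals, marker="o")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()
```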

CREATING 4 FACTORS:
Click Analyze → Data Reduction → Factor
Click Reset
Highlight all 44 items and click them over to the Variable(s): box (be sure not to include Subno and MAH_1)
Click Extraction
Change Method to Principal axis factoring
Under Extract, select Number of factors: and type in the number 4 (four)
Click Continue
Click Rotation
Under Method, select Varimax
Click Continue
Click OK

An output should be produced.
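An equivalent run can be sketched with the factor_analyzer package. This is illustrative only: the package's "principal" extraction approximates SPSS's principal axis factoring but may not reproduce it exactly, and the items DataFrame is the same assumed object as in the earlier sketches.

```python
# pip install factor_analyzer
import pandas as pd
from factor_analyzer import FactorAnalyzer

# 4-factor solution with varimax rotation
fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=[f"Factor {i}" for i in range(1, 5)])
communalities = pd.Series(fa.get_communalities(), index=items.columns)

print(loadings.round(3))
print("extraction communalities below .20:", int((communalities < .20).sum()))
```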

EXTRACTION AND ROTATING THE FACTORS TO INCREASE INTERPRETABILITY:


We are now looking for the most parsimonious final solution of factors representing the R matrix and the theory of the problem related to the presence of between three and five factors underlying the items of the BSRI. We specified 4 factors for the run. Again, we will use principal axis factoring which maximizes variance extracted by orthogonal

factors. It estimates communalities in an attempt to eliminate unique and error variance from the variables (Tabachnick & Fidell, 2007). Principal axis factoring is the most commonly used FA extraction method and is often the one researchers begin with. There are several other extraction procedures available (see Tabachnick & Fidell, 2007, p. 633). It is common to try other procedures, perhaps varying the number of factors, communality estimates, and rotational methods with each run. Analysis terminates when the researcher decides on the preferred solution (Tabachnick & Fidell, 2007, p. 634). You want to maximize the variance explained by the most parsimonious set of factors.

An extraction procedure is usually accompanied by rotation to improve the interpretability and scientific utility of the solution. The purpose of rotation is to achieve a simple structure in which each factor has large loadings in absolute value for only some of the variables, making it easier to identify (Norusis, 2003). If effective, rotation amplifies high loadings (correlations) of variables with factors and reduces low loadings. A geometric illustration of rotation is on page 641 of Tabachnick and Fidell (2007). Orthogonal factor rotation is used when each factor is independent (orthogonal) of all other factors; the factors are extracted so that their axes are maintained at 90 degrees. Oblique factor rotation is used when the extracted factors are correlated with each other, and it identifies the extent to which the factors are correlated. We chose to use the most common orthogonal rotation method, known as varimax. A varimax rotation minimizes the complexity of factors by maximizing the variance of loadings on each factor (Tabachnick & Fidell, 2007). Again, there are several rotational techniques for both orthogonal and oblique solutions, and they are identified on page 639 of Tabachnick and Fidell (2007). As with extraction methods, it is acceptable and common to experiment with various extraction and rotation procedures before deciding upon the preferred solution (Tabachnick & Fidell, 2007). We are using principal axis factoring extraction and varimax rotation with Kaiser normalization. Kaiser normalization is automatically a part of the analysis; it rescales the rotated matrix to restore the original row sums of squares.

INTERPRETING THE RESULTS:


Next, we will interpret the results. In actuality, we may run several different FAs using differing numbers of factors, extractions, and rotations to find the most parsimonious solution. Moreover, especially in the development of a new instrument, it is likely that cases and variables will be deleted as you make several FA runs. Deletion of variables is done when underlying assumptions are not met, and decisions are made by looking at communalities, loadings, inter-correlations, and coefficient alphas. Here, however, we show the final solution chosen. If cases or variables are deleted, then the KMO, Bartlett's Test of Sphericity, and communalities should be assessed for each run with a change.

Look at the Communalities table in your output. Under the Extraction heading, we want values to be greater than .20. Looking at the output, you can see that there are several variables below .20. Identify the number of extraction communalities below .20:

Having many communalities less than .20 indicates that the items are not loading properly on the factors. However, Tabachnick and Fidell (2007) explain that factorial purity was not a consideration in the development of the BSRI, which means that when the BSRI was developed there was no concern with items loading on certain factors.

Next, examine the table labeled Total Variance Explained in your output. Under Rotation Sums of Squared Loadings, you can see that the four factors have eigenvalues greater than two (2). Record them below.

Factor    Total    % of Variance    Cumulative %
1
2
3
4

Finally, examine the Rotated Factor Matrix table in your output. Factors are interpreted through their factor loadings. Factor loadings are the correlations between the original variables and the factors. Squaring these loadings indicates what percentage of the variance in an original variable is explained by a factor. Tabachnick and Fidell (2007) decided to use a loading of .45 (20% variance overlap between variable and factor). Factors appear as columns and items appear as rows. Tabachnick and Fidell also recommend a minimum factor loading of .32. The greater the loading, the more the variable is a pure measure of the factor. Comrey and Lee (1992) suggest that loadings in excess of .71 (50% overlapping variance) are considered excellent, .63 (40% overlapping variance) very good, .55 (30% overlapping variance) good, .45 (20% overlapping variance) fair, and .32 (10% overlapping variance) poor.

The choice of the cutoff for the size of loading to be interpreted is a matter of researcher preference (Tabachnick & Fidell, 2007). Look at the output for the Rotated Factor Matrix. For each factor column (there should be four of them), circle the values that exceed .45. There should be twelve (12) items circled for Factor 1, six (6) under Factor 2, five (5) under Factor 3, and three (3) under Factor 4. Examine the items circled and label the factors accordingly.

Rotated Factor Matrix (a)

Item                             Factor 1   Factor 2   Factor 3   Factor 4
helpful                            .311       .270       .296       .160
self reliant                       .365       .083       .167       .480
defend beliefs                     .413       .280       .001       .017
yielding                          -.139       .113       .345      -.009
cheerful                           .167       .088       .559       .109
independent                        .466       .043       .036       .484
athletic                           .324      -.122       .247      -.054
shy                               -.383      -.074      -.050      -.042
assertive                          .643       .142      -.084       .014
strong personality                 .701       .102      -.052      -.054
forceful                           .645       .055      -.210      -.013
affectionate                       .300       .392       .391      -.289
flatter                            .165       .095       .215      -.343
loyal                              .200       .388       .319      -.038
analyt                             .277       .233      -.055       .131
feminine                           .057       .189       .324       .111
sympathy                          -.042       .649       .129      -.024
moody                              .030       .103      -.374      -.346
sensitiv                           .056       .660       .069       .028
undstand                           .024       .731       .177       .124
compassionate                      .052       .811       .151       .036
leadership ability                 .739       .086       .046       .146
eager to soothe hurt feelings      .070       .540       .297      -.060
willing to take risks              .496       .082       .154       .016
makes decisions easily             .483       .085       .130       .345
self sufficient                    .418       .110       .136       .657
conscientious                      .203       .285       .235       .416
dominant                           .675      -.064      -.281       .036
masculin                           .308      -.105      -.287      -.032
willing to take a stand            .593       .244       .043       .162
happy                              .122       .069       .641       .106
soft spoken                       -.290       .129       .388       .160
warm                               .149       .483       .594      -.150
truthful                           .139       .320       .139       .166
tender                             .107       .446       .551      -.142
gullible                          -.041       .085       .110      -.448
act as a leader                    .727      -.024       .020       .106
childlik                           .005      -.068      -.111      -.418
individualistic                    .435       .094       .069       .182
use foul language                 -.017       .032       .150       .030
love children                      .024       .201       .282      -.127
competitive                        .541      -.083       .162      -.109
ambitious                          .466       .000       .199       .087
gentle                             .023       .447       .554      -.084

Extraction Method: Principal Axis Factoring.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 9 iterations.
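The circling exercise can also be done programmatically. Continuing the hypothetical Python sketches above, the following filters the rotated loadings against the .45 cutoff (it assumes the loadings DataFrame produced by the FactorAnalyzer sketch).

```python
# List, for each factor, the items whose rotated loadings exceed the .45 cutoff
cutoff = 0.45
for factor in loadings.columns:
    marked = loadings[factor][loadings[factor] > cutoff].sort_values(ascending=False)
    print(f"\n{factor} ({len(marked)} items):")
    print(marked.round(2))
```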


LABELING FACTORS:
One of the most important reasons for naming a factor is to communicate to others. The name should capsulize the substantive nature of the factor and enable others to grasp its meaning (Rummel, 1970). The choice of factor names should be related to the basic purpose of the factor analysis. If the goal is to describe or simplify the complex interrelationships in the data, a descriptive factor label can be applied. The descriptive approach to factor naming involves selecting a label that best reflects the substance of the variables loaded highly and near zero on a factor. The factors are classificatory, and names to define each category are sought (Rummel, 1970). There are a number of considerations involved in descriptively naming factors:

1. Those variables with zero or near-zero loadings are unrelated to the factor. In interpreting a factor, these unrelated variables should also be taken into consideration. The name should reflect what is as well as what is not involved in a factor (Rummel, 1970).

2. The loading squared gives the variance of a variable explained by an orthogonal factor. Squaring the loadings on a factor helps determine the relative weight the variables should have in interpreting a factor (Rummel, 1970).

3. The naming of factors with high positive and high negative loadings should reflect this bipolarity. One term may be appropriate, as is temperature for a hot-cold bipolar factor. Additionally, each pole may be interpreted separately and the factor named by its opposites, e.g., hot versus cold (Rummel, 1970).

Review the names of the variables that have loadings circled for each factor and look for a theme of the variable names for each factor and choose a name to represent each factor.

Factor    Name (Label)
1
2
3
4


INTERNAL CONSISTENCY OF FACTORS


Click Analyze → Scale → Reliability Analysis
Click over the 44 items into the Items: box (be sure not to include Subno and MAH_1)
For the Model: box, be sure that Alpha is selected
Click OK

Cronbach's coefficient alpha is a measure of the internal consistency of the items of a total test, or of the scales of a test, based upon the scores of the particular sample. Values range from 0 to 1. Values toward the higher end of the scale (> .70) suggest that the items of the total test or scale are measuring the same thing. Interpret Cronbach's alpha by providing the information asked for below:

Cronbach's Alpha for all 44 items:
N of items:

Interpretation:
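Cronbach's alpha is straightforward to compute directly from the formula alpha = (k / (k − 1)) × (1 − sum of item variances / variance of the total score). The sketch below is a minimal illustration, not SPSS output; the items DataFrame and the three Factor 4 column names ("self reliant", "independent", "self sufficient") are assumptions carried over from the earlier sketches and the rotated loadings above.

```python
import pandas as pd

def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = scale_items.shape[1]
    item_vars = scale_items.var(axis=0, ddof=1)
    total_var = scale_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Alpha for all 44 items, then for the items loading > .45 on Factor 4 (column names assumed)
print(round(cronbach_alpha(items), 3))
print(round(cronbach_alpha(items[["self reliant", "independent", "self sufficient"]]), 3))
```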

FOR EACH FACTOR (SCALE)


Next, examine the internal consistency of the items which have high factor loadings on each of the four factors (i.e., > .45). These are the item loadings you circled for each of the four factors in the Rotated Factor Matrix.

Click Analyze → Scale → Reliability Analysis
Click Reset
Click over the items for that factor into the Items: box
For the Model: box, be sure that Alpha is selected
Click OK


Cronbach's Alpha for Factor 1:
N of items:
Interpretation:

Do the same procedure for the next three factors and interpret Cronbach's alpha by providing the information asked for below:

Cronbach's Alpha for Factor 2:
N of items:
Interpretation:

Cronbach's Alpha for Factor 3:
N of items:
Interpretation:

Cronbach's Alpha for Factor 4:
N of items:
Interpretation:


References

Bartlett, M. S. (1954). A note on the multiplying factors for various chi square approximations. Journal of the Royal Statistical Society, Series B, 16, 296-298.

Bryant, F. B., & Yarnold, P. R. (1995). Principal-components analysis and exploratory and confirmatory factor analysis. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 99-136). Washington, DC: American Psychological Association.

Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hair, J. F., Jr., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Howell, D. C. (2007). Statistical methods for psychology (6th ed.). Belmont, CA: Thomson Wadsworth.

Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39, 31-36.

Norusis, M. J. (1994). SPSS advanced statistics 6.1. Chicago, IL: SPSS Inc.

Norusis, M. J. (2003). SPSS 12.0 statistical procedures companion. Upper Saddle River, NJ: Prentice Hall.

Rummel, R. J. (1970). Applied multivariate statistics for the social sciences. Mahwah, NJ: Lawrence Erlbaum Associates.

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston, MA: Allyn and Bacon.

