Educational and Psychological Measurement
DOI: 10.1177/0013164406299125
http://epm.sagepub.com

A Confirmatory Factor Analysis of the Student Adaptation to College Questionnaire

Melinda A. Taylor
Dena A. Pastor
James Madison University

The construct validity of scores on the Student Adaptation to College Questionnaire (SACQ) was examined using confirmatory factor analysis (CFA). The purpose of this study was to test the fit of the SACQ authors' proposed four-factor model using a sample of university students. Results indicated that the hypothesized model did not fit. Additional CFAs specifying one-factor models for each subscale were performed to diagnose areas of misfit, and results also indicated lack of fit. Exploratory factor analyses were then conducted, and a four-factor model, different from the model proposed by the authors, was examined to provide information for future instrument revisions. It was concluded that researchers need to return to the first stage of instrument development, which would entail examining not only the theories behind adjustment to college in greater detail but also how the current conceptualization of the SACQ relates to such theories.

Keywords: adjustment to college; validity; factor analysis; SACQ

For the past 35 years, there have been annual increases in the percentage of
students attending college. From 1970 to 2000, college attendance at both 2- and
4-year institutions increased 44% (National Center for Education Statistics, 2003). It
is projected that college attendance will continue to grow by 12% between now and
2012 to include 17.6 million people enrolled in college courses (National Center for
Education Statistics, 2003). With enlarged attendance comes an increased proportion
of students who might face difficulties adjusting to the college environment. It is
imperative that students facing obstacles be identified so they can be provided with
support services.
There are a variety of ways in which to go about identifying students who are
having trouble adjusting to college. For instance, adjustment may be measured by

Authors’ Note: Results from this study were presented at the May 2005 Annual Forum of the Associa-
tion of Institutional Research in San Diego, CA. Please address correspondence to Melinda A. Taylor,
North Carolina Department of Public Instruction, 301 N. Wilmington St., Raleigh, NC 27601; email:
mtaylor@dpi.state.nc.us.


acquiring students' self-reports of their attachment to a university, participation in campus activities, psychological well-being, and academic standing. Most researchers who study adjustment would advocate that all such indicators be used simultaneously so that a more comprehensive picture of a student's adjustment can be obtained
(Spady, 1971; Terenzini & Pascarella, 1977; Tinto, 1996). In fact, the Student Adap-
tation to College Questionnaire (SACQ) is a self-report instrument created with
the intention of capturing such a multifaceted view of adjustment (Baker & Siryk,
1999). In the paragraphs that follow, we describe (a) how the original and revised
versions of the SACQ were created, (b) the results of internal domain studies used
to assess the dimensionality of the SACQ scores, and (c) the purpose of this article,
which is to assess the dimensionality of the SACQ item scores using confirmatory
factor analytic techniques.

Development of the SACQ

The creation of the SACQ began in 1980 after Baker and Nisenbaum (1979)
implemented an unsuccessful 2-year intervention program designed to aid students
in the transition from high school to college. Baker and Siryk (1980) attributed the program's failure to its reliance on voluntary participation: the students most in need of the intervention were precisely the ones who chose not to participate. The
authors determined that a better approach would be, first, to identify students in
need of intervention or counseling services through the use of a measurement tool
and, second, to implement the services only for such students. To identify students
at risk for transition difficulties, the authors created a 52-item self-report measure
of adjustment. The only information provided for how this 52-item measure of
adjustment was created is as follows: "To measure success of transition, [a] self-rating scale was devised that consisted of 52 statements related to various aspects of students' adjustment to the college situation" (Baker & Siryk, 1980, p. 439). In
1984, Baker and Siryk provided brief descriptions for the subscales to which each
of the 52 items uniquely belonged: Academic Adjustment (18 items), Social
Adjustment (14 items), Personal-Emotional Adjustment (10 items), and General
Adjustment (10 items).
In 1985, Baker, McNeil, and Siryk used a revised version of the instrument
when studying freshmen’s adjustment to college. The instrument used in this study
was described as "an expanded version of the original instrument" (Baker et al., 1985, p. 95). This expanded version comprised 67 items with 24, 20, and
15 items belonging to the Academic, Social, and Personal-Emotional subscales,
respectively. It is unclear whether any changes in the original items were made
(either in wording or to which subscale they were assigned) or how new items were
developed. Another difference between the revised and original version was the
removal of the General Adjustment subscale and the addition of an Institutional


Attachment subscale. Unlike the other three subscales, this subscale was created to
share eight items with the Social subscale and one with the Academic subscale. In
this version, there were also two items that were not associated with any of the four
subscales but instead were used along with the other items to create an overall
adjustment score. The constructs measured by the four subscales of the 67-item
SACQ are described in Table 1. This version represents the final version published
for commercialization.
In the manual, created in 1989 and revised in 1999, the authors state that the SACQ
can be used for research purposes or as a diagnostic tool to identify students with poor
adjustment to college. The extent to which the SACQ is used as a diagnostic tool
is unknown, but there has been an increase in the number of authors using the SACQ
for research purposes, especially during the past 5 years (e.g., Hook, 2004; Meehan &
Negy, 2003). Because of its widespread use and the implications involved with its
use, it is necessary to continue providing validity evidence for the SACQ scores.
Based on the information provided by the authors, it appears that little theory
was used in the creation and revision of the SACQ. Item creation occurred prior to
any investigation of the adjustment literature, and no information was provided
regarding how and why new items were developed for the commercially available
67-item version. The lack of a strong theoretical basis for the instrument is perhaps
most evidenced by how the authors went about creating the Institutional Attach-
ment subscale (Baker et al., 1985). Instead of defining institutional attachment prior
to item development and citing how such a facet was important to the representa-
tion of the adjustment construct, the subscale was developed by combining items
from the other subscales that were most related to attrition at one particular univer-
sity. Also, the existence of two items contributing only to the full-scale score and
not to any subscale is another example of how theory was not utilized during
instrument development because the authors never state why these two items are
necessary to represent the construct of overall adjustment.

Dimensionality of the SACQ

After an instrument has been developed, Benson (1998) suggests pursuing the
structural validation of the instrument through the use of internal domain studies.
Analyses typical in this stage include computation of reliability coefficients as well
as item and factor analysis. There have been several studies of the reliability of the
SACQ scores (e.g., Baker et al., 1985; Baker & Siryk, 1984, 1986, 1999; Krotseng,
1992; Mooney, Sherman, & LoPresto, 1991); however, there is only one study that
attempts to examine the scale’s dimensionality. In the SACQ manual, Baker and
Siryk submitted the intercorrelations among the subscales to a principal compo-
nents analysis (PCA) to determine the legitimacy of the four-subscale structure ver-
sus the existence of a single overall adjustment construct.

Table 1
Subscale Descriptions and Means, Standard Deviations, and Internal
Consistency (With Confidence Intervals) of Subscale Scores

Academic Adjustment (24 items; possible range 24-216; M = 147.23, SD = 24.33, α = .88, 95% CI [.87, .89])
  Description: Students' success in coping with various educational demands of college (e.g., motivation to do well academically, academic performance).
  Items: 3, 5, 6, 10, 13, 17, 19, 21, 23, 25, 27, 29, 32, 36, 39, 41, 43, 44, 50, 52, 54, 58, 62, 66

Social Adjustment (20 items; possible range 20-180; M = 136.66, SD = 22.51, α = .89, 95% CI [.88, .90])
  Description: Students' success in coping with interpersonal-societal demands of college (e.g., social activities, relationships with others).
  Items: 1, 4, 8, 9, 14, 16, 18, 22, 26, 30, 33, 37, 42, 46, 48, 51, 56, 57, 63, 65

Personal-Emotional Adjustment (15 items; possible range 15-135; M = 87.63, SD = 19.55, α = .85, 95% CI [.84, .86])
  Description: Students' intrapsychic state during adjustment to college and the degree to which he or she is experiencing psychological distress and/or any somatic problems.
  Items: 2, 7, 11, 12, 20, 24, 28, 31, 35, 38, 40, 45, 49, 55, 64

Institutional Attachment (15 items; possible range 15-135; M = 107.81, SD = 17.47, α = .88, 95% CI [.87, .89])
  Description: Students' degree of commitment to educational and institutional goals as well as the degree of attachment to the particular institution he or she is attending.
  Items: 1, 4, 15, 16, 26, 34, 36, 42, 47, 56, 57, 59, 60, 61, 65

Note: N = 861. Items 1, 4, 16, 26, 42, 56, 57, and 65 are shared between the Institutional Attachment and Social Adjustment subscales, and Item 36 is shared between the Institutional Attachment and Academic Adjustment subscales. The full-scale score based on all 67 items includes all items shown in the table as well as Items 53 and 67. The 95% confidence interval around α was computed in the manner proposed by Fan and Thompson (2001).

One disadvantage of this approach is that PCA is a data reduction method attempting to explain as much total variance as possible in a set of observed variables using a smaller number of components. Essentially, components are created that are simply transformations of the observed variables. Principal axis factor analysis (FA) would have been a more appropriate statistical technique because it is suitable in situations in which latent constructs or factors are thought to cause variable
responses. Also, FA is advantageous over PCA in that it analyzes only the common
variance that a variable shares with other variables. PCA analyzes total variance,
which includes not only common variance but also specific variance that is unique
to the variable as well as error. If the variables submitted to a PCA include a large
amount of measurement error, the results from a PCA will likely look very different
from the results using the same data and FA techniques. In fact, the results from a
PCA and FA will look similar only when either the observed variables are measured
with little measurement error or a large number of observed variables are used as
input into the analysis (Benson & Nasser, 1998; Gorsuch, 1983, 1990).
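To make the common-versus-total variance distinction concrete, the following sketch (ours, not the SACQ authors'; written in Python with scikit-learn, and note that scikit-learn's FactorAnalysis uses maximum likelihood rather than principal axis extraction, although the contrast with PCA is the same) fits both methods to simulated items with substantial unique variance:

    import numpy as np
    from sklearn.decomposition import PCA, FactorAnalysis

    # Simulate 6 hypothetical items driven by one common factor plus
    # sizable unique (error) variance, mimicking noisy questionnaire items.
    rng = np.random.default_rng(42)
    n = 861
    factor = rng.normal(size=(n, 1))
    loadings = np.array([[.7, .7, .6, .6, .5, .5]])
    items = factor @ loadings + rng.normal(scale=1.0, size=(n, 6))

    # PCA decomposes total variance (common + unique + error) ...
    pca = PCA(n_components=1).fit(items)
    # ... whereas FA models only the common variance.
    fa = FactorAnalysis(n_components=1).fit(items)

    print("PCA component weights:", pca.components_.round(2))
    print("FA factor loadings:   ", fa.components_.round(2))
    # With this much unique variance the two sets of weights diverge;
    # they converge only as error shrinks or the number of items grows.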
Another major problem with the authors’ use of PCA is that only intercorrela-
tions among the subscale scores were submitted to the PCA. It would be more
appropriate to submit the intercorrelations among actual items to this sort of analysis
so as to observe the strength of an association between each item and each compo-
nent. By analyzing the items as opposed to the subscales, one can observe whether
the items assigned to a given subscale are related to the same component. Also, one
can examine if the item is related to more than one component, which may be desir-
able because several of the SACQ items serve as indicators for multiple subscales.
Even if one is content with the authors’ use of PCA, there are still problems with
their interpretation of the PCA results. The estimation technique used for the PCA (maximum likelihood) yielded a significance test of model fit that Baker and Siryk (1999) reported as rejecting a one-component model. Despite this finding, they
continued to report pattern coefficients from that solution. Additionally, the authors
concluded that the lack of fit for the one-component solution implied evidence in
favor of the four-component solution. It could be, however, that a two-component
solution or a different four-component solution would best represent the data, but
these solutions were not examined by the authors. Also puzzling is the authors’ use
of exploratory techniques when they clearly have in mind particular structures
(one-factor, four-factor) for the dimensionality of the data. Confirmatory factor
analysis (CFA) would have provided them with a more rigorous test of the dimen-
sionality of their scale.
According to Benson (1998), external domain studies should be pursued only
after extensive study of the internal domain. External domain studies are used to
examine if the constructs measured by an instrument relate to external variables
in ways anticipated by theory. A number of external domain studies do exist for the
SACQ (e.g., Conti, 2000; Hertel, 2002; Mooney et al., 1991; Schwitzer, Robbins,
& McGovern, 1993), but perhaps these studies were executed prematurely because


internal domain studies, particularly those investigating the dimensionality of the instrument, are lacking.

Purpose of the Present Study

The purpose of this study, therefore, is to examine the fit of Baker and Siryk’s
(1999) proposed structure of adjustment to college. Specifically, CFA is used to test
the fit of the proposed four-factor structure. In addition to examining the fit of the
proposed four-factor model, we are also interested in the plausibility of calculating a
full-scale score from the SACQ items. Calculation of a full-scale score, in addition
to four subscales, implies that there may be a higher order factor structure underlying
responses with a single second-order factor giving rise to four first-order factors. The
notion of a full-scale score also implies that a more parsimonious, single first-order
factor model may be used to describe item responses. Finally, because of the large
number of items that are shared by the Social Adjustment and Institutional Attach-
ment subscales, a three-factor model will also be tested that combines the two sub-
scales into a single factor.

Method
Participants and Procedure
The sample used in this study consisted of 878 sophomores at a midsized south-
eastern university. The SACQ was administered to these students during a large-scale
assessment performed annually in February to examine student learning outcomes
for students with 45 to 70 credit hours. Because the sample used in this study was
randomly drawn from the sophomore class, its demographic makeup mirrors that of
the university population, which is 61% female and 85% Caucasian, with all other
ethnicities representing less than 5% of the student population.

Materials
Participants responded to each of the 67 items using a 9-point scale ranging
from 1 (applies very closely to me) to 9 (doesn’t apply to me at all). See Table 1
for information regarding item assignment to subscales. To allow comparisons of
nested models, we used only the 65 items that contribute to the subscale scores.

Model Comparisons
Model comparisons for nested models were used to determine the best fitting
and most parsimonious models. The order of models in the present study from most


to least complex is the four-factor model, higher order factor model, three-factor
model, and one-factor model. The latter three models are all nested within the four-
factor model. It should be noted that the higher order model should be tested only
if acceptable fit is found for the four-factor model and the intercorrelations among
the factors imply the existence of a second-order factor.
Models that are nested can be compared using a χ² difference test, which indicates whether the simpler model fits statistically significantly worse than the more complex model. It is important to note, however, that the effects of sample size on the χ² test of model fit apply equally to the χ² difference test for nested models (Cheung & Rensvold, 2002; Kelloway, 1995). Specifically, large sample sizes tend to result in more Type I errors for the χ² and χ² difference tests; therefore, additional fit indices were examined for all models tested.
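As an illustration, a χ² difference test can be computed directly from the values reported later in Table 2; the sketch below (ours, in Python) compares the one-factor model to the four-factor model:

    from scipy.stats import chi2

    # One-factor model: chi2 = 14927.83, df = 2015;
    # four-factor model: chi2 = 10932.53, df = 2000 (values from Table 2).
    delta_chi2 = 14927.83 - 10932.53   # 3995.30
    delta_df = 2015 - 2000             # 15
    p_value = chi2.sf(delta_chi2, delta_df)
    print(f"delta chi2 = {delta_chi2:.2f}, df = {delta_df}, p = {p_value:.3g}")
    # A significant result means the simpler (one-factor) model fits
    # statistically significantly worse than the four-factor model.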

Assessing Model Fit


The models tested in this study were estimated using maximum likelihood esti-
mation (MLE). Several authors advocate the use of MLE over other methods
because of its sensitivity to model misspecification. Less sensitivity to model mis-
specification can lead to higher Type II error rates (Olsson, Foss, Troye, & Howell,
2000; Olsson, Troye, & Howell, 1999). Model fit was assessed by a number of
indices. First, model fit was determined using the minimum fit function χ². As this
index is extremely sensitive to sample size (Hu & Bentler, 1995), it was supple-
mented with additional fit indices.
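To make concrete what MLE minimizes here, the sketch below (ours, in Python; the six-item data are simulated and purely hypothetical) fits a one-factor model by minimizing the standard ML discrepancy function F = log|Σ(θ)| + tr(SΣ(θ)⁻¹) − log|S| − p and converts the minimum to the fit function χ²; the factor variance is fixed to one for identification, as in the models reported below.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical stand-in for item responses: 6 items, one common factor.
    rng = np.random.default_rng(0)
    n, p = 861, 6
    eta = rng.normal(size=(n, 1))
    X = eta @ np.array([[.7, .6, .8, .5, .65, .75]]) \
        + rng.normal(scale=.6, size=(n, p))
    S = np.cov(X, rowvar=False)

    def ml_discrepancy(theta):
        # One-factor model: Sigma = lam lam' + diag(psi), with the factor
        # variance fixed to 1 for identification; exp() keeps psi positive.
        lam, psi = theta[:p], np.exp(theta[p:])
        Sigma = np.outer(lam, lam) + np.diag(psi)
        return (np.linalg.slogdet(Sigma)[1]
                + np.trace(S @ np.linalg.inv(Sigma))
                - np.linalg.slogdet(S)[1] - p)

    res = minimize(ml_discrepancy,
                   x0=np.concatenate([np.full(p, .5), np.zeros(p)]),
                   method="BFGS")
    chi_sq = (n - 1) * res.fun            # minimum fit function chi-square
    df = p * (p + 1) // 2 - 2 * p         # 21 unique moments - 12 parameters
    print(f"chi2 = {chi_sq:.2f} on {df} df")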
Two absolute fit indices are reported here: the standardized root mean square
residual (SRMR) and the root mean square error of approximation (RMSEA). The
SRMR is recommended by Hu and Bentler (1998) as an index to report because of
its sensitivity to simple model misspecification with acceptable model fit indicated
by values less than .08.
The RMSEA is particularly useful as an absolute fit index in detecting complex
model misspecification, which is a likely source of misspecification in this study
due to the large number of items that are assigned to multiple factors. Hu and Ben-
tler (1998) recommend that this value not exceed .06.
Finally, the comparative fit index (CFI), an incremental fit index, was also con-
sulted. This fit index compares model fit of the proposed model to that of an inde-
pendence model and is particularly sensitive to complex model misspecification.
Hu and Bentler (1998, 1999) suggest that CFI values indicating adequate model fit
should exceed .95. Currently, there are disagreements about the strict application
of cutoffs (see Marsh, Hau, & Grayson, 2005; Marsh, Hau, & Wen, 2004), but in
order to aid in decision making about model fit, we chose to use cutoffs in conjunc-
tion with examination of standardized residuals.
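For readers who want to see how these indices are assembled, the sketch below (ours, in Python) implements the standard formulas. Because the article does not report the independence-model χ², the numbers passed in at the bottom are purely illustrative; note also that software packages can differ in which χ² enters these formulas, so hand-computed values may not match reported ones exactly.

    import numpy as np

    def rmsea(chi2, df, n):
        # Root mean square error of approximation.
        return np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    def cfi(chi2, df, chi2_base, df_base):
        # Comparative fit index: improvement over the independence model.
        return 1 - max(chi2 - df, 0.0) / max(chi2_base - df_base,
                                             chi2 - df, 0.0)

    def srmr(S, Sigma):
        # Standardized root mean square residual: root of the average
        # squared residual, in correlation metric, over unique elements.
        d = np.sqrt(np.diag(S))
        resid = (S - Sigma) / np.outer(d, d)
        idx = np.tril_indices_from(resid)
        return np.sqrt(np.mean(resid[idx] ** 2))

    # Illustrative values only (not taken from Table 2).
    print(rmsea(chi2=1200.0, df=400, n=861))
    print(cfi(chi2=1200.0, df=400, chi2_base=9000.0, df_base=435))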


Results
Data Screening
The data were examined for out-of-range responses (i.e., responses greater than
9), and none were detected. Following recommendations by the authors of the instru-
ment, any student with three or more missing responses on any one subscale was excluded
from the data set. This exclusion, along with the deletion of a record due to the pres-
ence of a response set, resulted in a valid sample of 865 students. Missing data for
the remaining students were handled using Baker and Siryk’s (1999) recommenda-
tion to impute values for students with two or fewer missing responses using the
mean response for that student on the subscale for which the response was missing.
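A sketch of these exclusion and imputation rules (ours, in Python with pandas; the data frame layout and column names are assumptions for illustration, not the authors' code):

    import pandas as pd

    def apply_missing_rules(data: pd.DataFrame,
                            subscale_items: dict) -> pd.DataFrame:
        # `data` holds one column per item; `subscale_items` maps each
        # subscale name to its list of item columns (hypothetical names).
        # Exclude students with three or more missing responses on a subscale.
        for items in subscale_items.values():
            data = data[data[items].isna().sum(axis=1) < 3]
        data = data.copy()
        # Impute each remaining missing response (two or fewer per subscale)
        # with that student's mean on the rest of the same subscale.
        for items in subscale_items.values():
            row_means = data[items].mean(axis=1)  # NaNs ignored by default
            for col in items:
                data[col] = data[col].fillna(row_means)
        return data

    # Usage with truncated, hypothetical item lists:
    subscale_items = {"academic": ["item3", "item5", "item6"],
                      "social": ["item1", "item4", "item8"]}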
The data were screened with respect to multicollinearity and outliers. Bivariate
correlations, tolerance, and variance inflation values (Tabachnick & Fidell, 2001)
indicated that neither bivariate nor multivariate multicollinearity was present.
Although no univariate outliers were identified after inspection of the items’ boxplots
and histograms, four multivariate outliers were detected using Mahalanobis distance
and were excluded from further analyses, resulting in a valid sample of 861.
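The multivariate-outlier screen can be sketched as follows (ours, in Python; flagging cases whose Mahalanobis distance is extreme relative to a χ² distribution, with the p < .001 criterion conventional in Tabachnick & Fidell, 2001; the array X of item responses is hypothetical):

    import numpy as np
    from scipy.stats import chi2

    def mahalanobis_outliers(X: np.ndarray, alpha: float = .001) -> np.ndarray:
        centered = X - X.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
        # Squared Mahalanobis distance for every case.
        d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
        # Under multivariate normality, d2 is approximately chi-square
        # with p degrees of freedom; flag cases beyond the 1 - alpha quantile.
        return d2 > chi2.ppf(1 - alpha, df=X.shape[1])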
Means and standard deviations for each subscale and measures of internal con-
sistency (Cronbach's coefficient α) using the final sample of 861 students are pre-
sented in Table 1. Means and standard deviations are somewhat higher for this
sample than those presented in the SACQ manual. Values for the internal consis-
tency of the scores for this sample are similar to those reported in the manual.
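A sketch of how coefficient α and its confidence interval can be computed (ours, in Python; the interval follows our reading of the F-based Feldt procedure that Fan and Thompson, 2001, recommend, so the quantile choices should be treated as an assumption):

    import numpy as np
    from scipy.stats import f

    def cronbach_alpha_ci(items: np.ndarray, level: float = .95):
        # `items` is an n-persons by k-items array of scores.
        n, k = items.shape
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        alpha = k / (k - 1) * (1 - item_vars / total_var)
        # (1 - rho) / (1 - alpha_hat) follows an F(n - 1, (n - 1)(k - 1))
        # distribution, which yields the interval below.
        gamma = 1 - level
        df1, df2 = n - 1, (n - 1) * (k - 1)
        lower = 1 - (1 - alpha) * f.ppf(1 - gamma / 2, df1, df2)
        upper = 1 - (1 - alpha) * f.ppf(gamma / 2, df1, df2)
        return alpha, (lower, upper)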
Because MLE assumes multivariate normality of the observed variables, the
data were examined with respect to univariate and multivariate normality. No items
showed skew or kurtosis that exceeded the cutoffs of |3| or |8| (Kline, 1998),
respectively, indicating no problems with univariate nonnormality. Finally, multi-
variate normality was examined using Mardia’s normalized multivariate kurtosis
value. Bentler and Wu (1993) suggest that this value should not exceed three. The
Mardia’s value for this study was greater than 100, suggesting extreme multivariate
kurtosis. In situations like this, it is appropriate to use the Satorra-Bentler scaled
χ² and robust standard error corrections (West, Finch, & Curran, 1995). Unfortu-
nately, the number of observed variables for this study was so large that an asymp-
totic covariance matrix (needed for Satorra-Bentler corrections) could not be
produced with the sample size available. Thus, MLE without the Satorra-Bentler
corrections was used.
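The screening steps just described can be sketched as follows (ours, in Python; X is a hypothetical n × p array of item responses, and scipy's kurtosis returns excess kurtosis, which matches the kurtosis-index convention behind Kline's cutoff):

    import numpy as np
    from scipy.stats import skew, kurtosis

    def normality_screen(X: np.ndarray):
        # Univariate screen: Kline's (1998) cutoffs of |3| for skew
        # and |8| for kurtosis.
        uni_ok = (np.abs(skew(X, axis=0)) < 3).all() \
                 and (np.abs(kurtosis(X, axis=0)) < 8).all()
        # Mardia's multivariate kurtosis b2p, normalized against its
        # expectation p(p + 2) under multivariate normality.
        n, p = X.shape
        centered = X - X.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
        d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
        b2p = np.mean(d2 ** 2)
        mardia_z = (b2p - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
        return uni_ok, mardia_z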

Model Fit
The covariance matrices submitted for analysis were produced using PRELIS,
and the various models were tested using LISREL 8.54 (Jöreskog & Sörbom,
2003). All models were identified by fixing the variance of the latent variables to
one. Table 2 shows results for the various models tested. The results indicate that


Table 2
Model Fit of Originally Hypothesized Models

Model          χ² (df)             p        SRMR    RMSEA (90% CI)       CFI
Four-factor    10,932.53 (2000)    < .001   .085    .089 (.087, .090)    .91
Three-factor   12,010.36 (2011)    < .001   .087    .094 (.093, .096)    .90
One-factor     14,927.83 (2015)    < .001   .094    .120 (.120, .120)    .87

Note: SRMR = standardized root mean square residual; RMSEA = root mean square error of approximation; CI = confidence interval; CFI = comparative fit index.

the four-factor model did not fit the data. The fit of the alternative models, although shown in Table 2, is not of interest because the four-factor model, a more complex model, did not fit. Also, the higher order factor model was not fit to the data
because fit of the first-order factor structure was lacking. Table 2 shows that the
RMSEA and CFI, which are both particularly sensitive to complex model misspeci-
fication, were the most indicative of model misfit for the four-factor model.
When models do not fit, rather than interpreting the factor solution, the focus shifts to diagnosing model misfit. Because the most complex model, the four-factor model, did not fit, it is this model that we used to investigate areas of misfit. Standardized residuals can be examined to diagnose misfit, with values exceeding |3| indicative of model misfit (Byrne, 1998). Out of 1,007 possible standardized residuals, 642 were greater than |3|, meaning there are large differences between the
observed covariance matrix and the model-implied covariance matrix. Often, a pat-
tern of large standardized residuals can be discerned for a set of items. In this case,
however, the large number of residuals exceeding the cutoff prohibited our ability
to uncover any pattern that would aid in diagnosing misfit.
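A simplified sketch of this diagnostic follows (ours, in Python). Note that LISREL's standardized residuals divide each raw residual by its estimated standard error; the version below uses the normal-theory standard error of a sample covariance and ignores parameter-estimation uncertainty, so the counts it produces are approximations for illustration only.

    import numpy as np

    def standardized_residuals(S: np.ndarray, Sigma: np.ndarray, n: int):
        # Approximate standardized residuals between the observed (S) and
        # model-implied (Sigma) covariance matrices. Under normality,
        # Var(s_ij) is roughly (sigma_ii * sigma_jj + sigma_ij^2) / (n - 1).
        se = np.sqrt((np.outer(np.diag(Sigma), np.diag(Sigma))
                      + Sigma ** 2) / (n - 1))
        return (S - Sigma) / se

    def count_large(S, Sigma, n, cutoff=3.0):
        z = standardized_residuals(S, Sigma, n)
        idx = np.tril_indices_from(z)         # unique elements only
        return int((np.abs(z[idx]) > cutoff).sum()), idx[0].size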

Additional Analyses
One-factor models for each subscale. Because results for the four-factor model
were not useful in determining the areas of misfit, we decided to test four one-
factor models corresponding to each subscale separately. The belief was that anal-
yses of the one-factor models separately would reveal whether the problems with
model fit arose from cross-loadings, highly related items, and/or from certain prob-
lematic subscales.
Only the one-factor model for Personal-Emotional Adjustment showed tolerable
fit (SRMR = .049), although the RMSEA and the lower bound of the 90% confi-
dence interval for the RMSEA exceeded the .06 cutoff recommended (Hu & Bentler,
1998). Because this index is sensitive to complex model misspecification, it is likely
that it exceeds the cutoff because of the sizable percentage (17%) of standardized
residuals exceeding |3|. Because the residuals are associated with a large number of


items, it was concluded that the one-factor model did not do a satisfactory job of
explaining the relationships among the items on the Personal-Emotional Adjustment
subscale.
For the remaining three one-factor models, the standardized residuals were
examined in an effort to determine sources of misfit. However, the large number of
standardized residuals exceeding |3| made it difficult to simply describe the results.
For instance, 31%, 45%, and 42% of the standardized residuals for the Social
Adjustment, Institutional Attachment, and Academic Adjustment subscales, respec-
tively, exceeded a value of |3|.

Exploratory factor analyses (EFAs). Because examination of the individual one-factor CFA results did not adequately reveal the causes of misfit for the various models, an EFA was also used to determine if there were plausible models that could explain the relationships among the items. The data were submitted to an EFA using SPSS 12.0 (SPSS, 2003).
The eigenvalues (and percentage of variance explained) associated with the first
five factors prior to rotation were 14.77 (22.73%), 4.79 (7.37%), 3.62 (5.56%), 2.74
(4.22%), and 1.85 (2.84%). Although the sizable drop in the percentage of total
variance from the first to the second factor favors retention of a one-factor solution,
inspection of the scree plot favors a four-factor solution, supporting the original
number of factors proposed by Baker and Siryk (1999). Parallel analysis, an addi-
tional method to determine the number of factors to retain (Henson & Roberts,
2006; Thompson & Daniel, 1996), indicated retention of a seven-factor solution.
Using the results from the scree plot, percentage of variance explained, and parallel
analysis, we decided to rotate and interpret models specifying between one and
seven factors.
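Parallel analysis itself is straightforward to sketch (ours, in Python; this is the generic Horn procedure, not necessarily the implementation used for the analyses reported here):

    import numpy as np

    def parallel_analysis(X: np.ndarray, n_reps: int = 100, seed: int = 0):
        # Retain factors whose observed eigenvalues exceed the mean
        # eigenvalues of random data with the same dimensions.
        rng = np.random.default_rng(seed)
        n, p = X.shape
        obs_eigs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        rand_eigs = np.zeros((n_reps, p))
        for r in range(n_reps):
            Z = rng.normal(size=(n, p))
            rand_eigs[r] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
        threshold = rand_eigs.mean(axis=0)
        return int(np.sum(obs_eigs > threshold)), obs_eigs, threshold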
Solutions specifying two or more factors were rotated for interpretability using direct oblimin rotation (δ = 0). Because these analyses were exploratory in nature, we felt that the factors should be allowed to correlate. Results of the analysis supported this judgment, with interfactor correlations ranging from −.35 to .26 for the four factors. Because simple structure was not achieved with the six- and seven-factor solutions, we focused on interpreting the more parsimonious solutions specifying one to five factors. Even though the four-factor solution explained less than half of the total variance, it emerged as the most informative and is interpreted below.
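For readers wishing to reproduce this kind of solution outside SPSS, a sketch using the Python factor_analyzer package follows; treat it as an assumption on our part that the package's attribute names (such as phi_ for the interfactor correlations) and its support for principal axis extraction with oblimin rotation are as we recall them, since the original analyses used SPSS 12.0.

    import numpy as np
    from factor_analyzer import FactorAnalyzer

    def oblimin_solution(X: np.ndarray, n_factors: int = 4):
        fa = FactorAnalyzer(n_factors=n_factors, method="principal",
                            rotation="oblimin")
        fa.fit(X)
        pattern = fa.loadings_        # pattern coefficients
        phi = fa.phi_                 # interfactor correlations
        structure = pattern @ phi     # structure = pattern x phi
        return pattern, structure, phi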
When looking at the four-factor solution, our goal was to determine if the sets of
items proposed by the authors to belong to a given subscale actually loaded on the
same factor. Contrary to what the authors proposed, three factors contained items
from more than one subscale (see Table 3). Each of these factors did, however,
contain a large number of items from a particular subscale. Factors I, II, and IV
were defined by a large number of Social Adjustment, Personal-Emotional Adjust-
ment, and Institutional Attachment items, respectively. Only Factor III consisted of


Table 3
Exploratory Factor Analysis Pattern and (Structure)
Coefficients and Communalities for the Four-Factor Model
Item No. I II III IV Communalities

1 .63 (.73) .05 (.24) −.02 (.17) −.26 (−.49) .65
4 .75 (.75) .10 (.23) .03 (.21) .07 (−.23) .62
8 .55 (.54) −.04 (.07) .06 (.17) .04 (−.15) .58
9 .52 (.66) .09 (.31) .04 (.24) −.35 (−.57) .65
18 .62 (.67) .01 (.16) −.05 (.11) −.18 (−.39) .55
26 .39 (.44) .09 (.19) .06 (.18) −.04 (−.22) .27
30 .49 (.55) .02 (.17) .12 (.25) −.09 (−.29) .41
37 .57 (.64) .09 (.24) .01 (.18) −.14 (−.37) .53
46 .61 (.62) .08 (.19) .06 (.20) .06 (−.19) .62
63 .44 (.54) .06 (.22) −.04 (.12) −.28 (−.45) .50
65 .72 (.75) .20 (.33) −.07 (.14) −.04 (−.35) .70
2 .07 (.15) .63 (.60) −.15 (.03) .04 (−.17) .51
7 .14 (.26) .70 (.70) −.11 (.11) −.03 (−.28) .66
10 −.08 (.07) .46 (.52) .23 (.34) −.04 (−.20) .46
11 −.06 (.03) .54 (.52) .08 (.19) .09 (−.07) .40
12 −.03 (.13) .42 (.49) .08 (.21) −.17 (−.31) .37
20 .10 (.21) .70 (.70) −.09 (.12) −.01 (−.25) .56
21 −.06 (.12) .41 (.51) .21 (.34) −.19 (−.33) .52
22 .14 (.22) .49 (.52) −.18 (.01) −.17 (−.34) .61
28 .05 (.17) .46 (.50) −.03 (.12) −.11 (−.27) .38
31 .04 (.18) .41 (.48) −.07 (.09) −.23 (−.36) .34
38 .07 (.19) .62 (.64) .04 (.21) .00 (−.23) .46
39 −.08 (.05) .52 (.56) .39 (.48) .14 (−.07) .58
40 .09 (.18) .52 (.54) .05 (.20) .03 (−.18) .40
41 −.03 (.12) .48 (.53) .10 (.24) −.10 (−.26) .53
42 .36 (.48) .48 (.57) −.09 (.14) −.16 (−.42) .57
45 .04 (.16) .59 (.62) .15 (.30) .07 (−.16) .47
48 .14 (.24) .36 (.42) .00 (.14) −.10 (−.26) .31
51 .34 (.43) .63 (.66) −.18 (.07) −.06 (−.35) .65
56 .23 (.37) .47 (.55) −.11 (.10) −.22 (−.44) .54
64 .11 (.25) .61 (.65) .04 (.23) −.03 (−.27) .50
3 .06 (.22) .03 (.22) .59 (.62) −.07 (−.20) .48
13 .12 (.25) .12 (.28) .49 (.55) −.01 (−.17) .72
17 −.23 (−.06) .18 (.30) .56 (.56) −.04 (−.11) .52
19 .24 (.40) −.03 (.19) .49 (.56) −.16 (−.32) .49
25 −.17 (−.02) .28 (.37) .53 (.56) .05 (−.07) .61
29 −.15 (.01) .32 (.41) .49 (.54) .02 (−.12) .58
36 .23 (.37) −.03 (.16) .33 (.41) −.21 (−.34) .55
43 .26 (.40) −.07 (.14) .35 (.42) −.23 (−.36) .65
50 .13 (.27) .00 (.19) .53 (.57) −.07 (−.21) .44
52 −.08 (.09) .37 (.49) .50 (.58) .02 (−.16) .58

54 .15 (.31) −.03 (.17) .45 (.51) −.18 (−.30) .57
62 .17 (.29) .05 (.21) .36 (.43) −.09 (−.23) .39
66 .22 (.37) .12 (.31) .49 (.58) −.07 (−.26) .63
15 .23 (.47) −.10 (.19) .15 (.29) −.64 (−.72) .73
16 .32 (.50) −.10 (.15) .05 (.19) −.55 (−.64) .73
23 .05 (.27) −.12 (.13) .19 (.27) −.59 (−.60) .57
32 −.07 (.22) .22 (.44) .14 (.28) −.62 (−.69) .62
34 .13 (.34) .13 (.32) −.06 (.10) −.57 (−.65) .70
57 .22 (.38) .26 (.38) −.19 (.00) −.43 (−.55) .61
59 .02 (.26) .11 (.30) −.09 (.06) −.66 (−.69) .73
60 −.12 (.18) .07 (.31) .13 (.25) −.72 (−.73) .69
61 −.11 (.18) .09 (.33) .12 (.24) −.70 (−.71) .69
5 .15 (.30) −.01 (.17) .32 (.38) −.22 (−.33) .37
6 −.16 (−.10) .38 (.35) .06 (.12) .06 (−.02) .38
14 .24 (.25) −.14 (−.04) .31 (.31) .09 (.00) .17
24 .24 (.31) .07 (.17) .14 (.22) −.07 (−.20) .31
27 .20 (.16) −.06 (−.04) .17 (.16) .19 (.10) .21
33 .30 (.33) .15 (.21) −.01 (.09) −.02 (−.17) .24
35 .01 (.09) .36 (.39) .07 (.17) .00 (−.13) .27
44 .01 (.15) −.04 (.12) .29 (.33) −.26 (−.30) .26
47 .10 (.21) −.05 (.08) −.01 (.06) −.34 (−.36) .19
49 .01 (.06) .26 (.27) .10 (.17) .03 (−.07) .11
55 .31 (.36) .18 (.27) .18 (.29) .05 (−.15) .41
58 −.10 (.10) .19 (.34) .30 (.38) −.26 (−.34) .36

Note: Italicized item numbers are items with structure coefficients that did not reach the specified criteria
to be designated as loading on a certain factor. Structure coefficients in bold met the specified criteria for
an item to be designated as loading on a certain factor.

items from a single subscale, Academic Adjustment. Not all items from the Aca-
demic Adjustment subscale were represented by this factor; the remaining Aca-
demic Adjustment items were dispersed throughout Factors II and IV.
We attempted to interpret the substantive meaning of the items associated with
each of the four factors that emerged from the EFA. Any pattern that we could dis-
cern, based on our empirical results and our reading of the items, revealed such
complex interpretations that we felt it might be more useful instead to suggest areas
of improvement for the instrument. There were 13 items that loaded on at least two
factors, suggesting that these items might be the first to receive attention during
instrument revision. Items with low structure coefficients should also be considered
for revision or deletion. There were 12 items that yielded low structure coefficients
(less than |.40|) consistently across the five solutions, including the four-factor
solution (Items 5, 6, 14, 24, 27, 33, 35, 44, 47, 49, 55, and 58). If the four-factor
model is the desired structure, perhaps these 25 items should be the initial focus for


those wanting to pursue instrument revisions. It should be noted, however, that results from the current EFA will likely change with any revision to the instrument.
Therefore, data obtained using a revised instrument should be submitted to its own
set of analyses to examine the structure of the revised version.

Discussion

The primary purpose of this study was to use confirmatory factor analytic tech-
niques to explore the fit of the four-factor model proposed by the authors of the
SACQ. Using a large sample of college sophomores, we did not find evidence
supporting the fit of the four-factor model. Although alternative models were pro-
posed, these models were more parsimonious and, as such, would not exceed the
already inadequate fit of the four-factor model. We examined the standardized residuals of the four-factor solution in an attempt to identify misfit, with disappointing results. The large quantity of standardized residuals greater than |3| prevented us from easily detecting sources of misfit. Four separate one-factor models, one for each
SACQ subscale, were then fit to the data in an attempt to identify problematic sub-
scales or items. Only moderate support for fit was found for the Personal-Emotional subscale. However, after inspection of the standardized residuals, we concluded that the one-factor model did not reproduce the relationships among the items on this subscale satisfactorily.
We then used principal axis FA in an attempt to reveal other plausible models
that might explain the relationships among the items. Although a four-factor model
was discussed, it explained less than half of the total variance in items, indicating
that more than half of the total variance among items was unexplained by the fac-
tors. This unexplained variance is due either to random measurement error or, more
likely, to systematic error arising from other unmeasured factors. Inspection of the
structure coefficients of the four-factor solution led us to conclude that (a) several
items should be omitted or revised because, across solutions, they consistently did
not load on any factor or loaded on more than one factor and (b) the authors’
assignment of items to subscales should be reconsidered. Specifically, each of the
four factors consisted of a large proportion of items from a given subscale, with
Factors I, II, III, and IV each being largely represented by Social Adjustment,
Personal-Emotional Adjustment, Academic Adjustment, and Institutional Attach-
ment items, respectively. Contrary to what the SACQ authors proposed, items
currently shared between the Institutional Attachment and Social Adjustment sub-
scales did not have split loadings between Factor I and Factor IV but instead loaded
either only on those factors or on the Personal-Emotional Adjustment factor.
Whereas Factor III consisted only of Academic-Adjustment items, other Academic
Adjustment items were found on Factors II and IV.


In sum, our attempts to identify a plausible factor structure for the SACQ items
were thorough but unsuccessful. Although we were not able to provide a useful
model to explain the intercorrelations among the items, we were able to reject the
four-factor model proposed by the authors for 65 of the SACQ items. We also were
able to reject separate one-factor models for each of the subscales. We do not advo-
cate the four-factor structure found using the EFA but instead provide the results
for readers to have some sense as to how the items were relating to each other and
to provide information for those wanting to revise the instrument.
After consideration of the literature on college student adjustment, we concluded
that revision of the instrument is necessary. It was not particularly surprising that the
four-factor and one-factor models, those models advocated most strongly by the
instrument’s creators, did not fit the data given the lack of theory that was used in
instrument development. Little information is provided about how the items were
developed in 1980 or about the revisions in 1985. Assuming that theory did guide
instrument development, the most current version of the instrument was developed
20 years ago and may not reflect the current understanding of a student’s adjustment
to college. Possible directions for future researchers include further examination of
the college adjustment literature to determine if Baker and Siryk’s conceptualization
is plausible. If Baker and Siryk’s conceptualization still holds in today’s adjustment
literature, another direction would be to use backward translation procedures on the
current set of items. This would require experts in the field of college adjustment to
examine the items and assign them to the subscale they feel is appropriate.
It is prudent to mention some of the limitations of the current study. First, Baker
and Siryk (1999) recommend use of the instrument for first-year students. The sample presented here consisted mostly of second-year students, that is, students with between 45 and 70 credit hours. This limitation may be of little consequence, as
Baker and Siryk pointed out that the instrument was used in a number of studies with
non-first-year students and showed similar results. Nevertheless, future research
could replicate this study with a sample of first-year students. In addition, the sample
used here was extremely homogeneous in terms of race and retention. A more
heterogeneous sample would likely produce more generalizable results. Furthermore,
replication of this study at an institution where the retention rate is not so high and
where levels of adjustment might be lower would further add to the construct validity
of the scores obtained from the SACQ.
Second, there were issues with nonnormality, particularly multivariate non-
normality. Although univariate examination of skew and kurtosis indicated no
problems with normality, when examining Mardia’s normalized coefficient, we
determined that the multivariate normality assumption was most likely violated.
These issues could not be addressed using corrections based on the asymptotic co-
variance matrix (i.e., Satorra-Bentler corrections). Although parameter estimates
are not affected by this problem, standard errors and significance tests are likely to
be biased. The violation of the multivariate normality assumption should be kept in


mind when interpreting the fit indices found in the present study, yet the large num-
ber of standardized residuals greater than |3| for the four-factor model indicate that
the model would not fit the data even if the Satorra-Bentler corrections had been
utilized. We applied cutoffs for the model in determining fit, but given the current
debate regarding how fit indices should be used, readers are encouraged to examine
the fit indices we have provided in Table 2 along with information regarding
(a) standardized residuals, (b) fit of the one-factor model, and (c) results of the
EFA to decide whether the model showed adequate fit. If the fit indices are inter-
preted less rigidly or if one takes into consideration the effect of nonnormality on
the fit indices, it could be concluded that the four-factor model marginally fits the
data. However, the large number of standardized residuals and the additional FAs
pursued with the data encourage revision of the instrument, not interpretation of
the originally proposed four-factor model. Future research pursued in order to
use Satorra-Bentler corrections would require a much larger sample size (∼2,500, to compensate for the large number of observed variables) in order to produce the asymptotic covariance matrix used to apply the corrections.
As mentioned earlier, it is important for those students facing difficulty with the
transition from high school to college to make use of whatever resources they have
available. But, as Baker and Nisenbaum (1979) found, these students most likely
need to be targeted for services as they will not seek them out on their own. Thus,
an instrument for diagnosing maladjustment or identifying students in need of
services is a worthy goal. At this point, however, we would not recommend use of
the SACQ as such a tool. The results of our study do not lend support to the internal
validity of the SACQ full-scale and subscale scores. Our recommendations would
be, first, to pursue further theoretical development of the construct and, second,
either to revisit the structure of this instrument or create a new instrument to mea-
sure adjustment to college.

References
Baker, R., McNeil, O., & Siryk, B. (1985). Expectation and reality in freshman adjustment to college.
Journal of Counseling Psychology, 32, 94-103.
Baker, R., & Nisenbaum, S. (1979). Lessons from an attempt to facilitate freshman transition into
college. Journal of the American College Health Association, 28, 79-81.
Baker, R., & Siryk, B. (1980). Alienation and freshman transition into college. Journal of College
Student Personnel, 21, 437-442.
Baker, R., & Siryk, B. (1984). Measuring adjustment to college. Journal of Counseling Psychology, 31,
179-189.
Baker, R., & Siryk, B. (1986). Exploratory intervention with a scale measuring adjustment to college.
Journal of Counseling Psychology, 33, 31-38.
Baker, R., & Siryk, B. (1999). SACQ: Student Adaptation to College Questionnaire manual.
Los Angeles: Western Psychological Services.
Benson, J. (1998). Developing a strong program of construct validation. Educational Measurement:
Issues and Practice, 17, 10-17, 22-23.


Benson, J., & Nasser, F. (1998). On the use of factor analysis as a research tool. Journal of Vocational
Education Research, 23, 13-33.
Bentler, P. M., & Wu, E. J. C. (1993). EQS/Windows user’s guide: Version 4. Los Angeles: BMDP
Statistical Software.
Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS: Basic
concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum.
Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement
invariance. Structural Equation Modeling, 9, 233-255.
Conti, R. (2000). College goals: Do self-determined and carefully considered goals predict intrinsic
motivation, academic performance, and adjustment during the first semester? Social Psychology
of Education, 4, 189-211.
Fan, X., & Thompson, B. (2001). Confidence intervals about score reliability coefficients, please:
An EPM guidelines editorial. Educational and Psychological Measurement, 61, 517-531.
Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Gorsuch, R. L. (1990). Common factor analysis versus component analysis: Some well and little known
facts. Multivariate Behavioral Research, 25, 33-39.
Henson, R. K., & Roberts, J. K. (2006). Use of exploratory factor analysis in published research:
Common errors and some comment on improved practice. Educational and Psychological Measure-
ment, 66, 393-416.
Hertel, J. (2002). College student generational status: Similarities, differences, and factors in college
adjustment. Psychological Record, 52, 3-18.
Hook, R. J. (2004). Students’ anti-intellectual attitudes and adjustment to college. Psychological
Reports, 94, 909-914.
Hu, L., & Bentler, P. M. (1995). Evaluating model fit. In R. H. Hoyle (Ed.), Structural equation model-
ing: Concepts, issues, and applications (pp. 76-99). Thousand Oaks, CA: Sage.
Hu, L., & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to underpa-
rameterized model misspecification. Psychological Methods, 3, 424-453.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conven-
tional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.
Jöreskog, K., & Sörbom, D. (2003). LISREL (Version 8.54) [Computer software]. Chicago: Scientific
Software International.
Kelloway, E. K. (1995). Structural equation modeling in perspective. Journal of Organizational
Behavior, 16, 215-224.
Kline, R. B. (1998). Principles and practice of structural equation modeling. New York: Guilford.
Krotseng, M. V. (1992). Predicting persistence from the Student Adaptation to College Questionnaire:
Early warning or siren song? Research in Higher Education, 33, 99-111.
Marsh, H. W., Hau, K.-T., & Grayson, D. A. (2005). Goodness of fit in structural equation modeling.
In A. Maydeu-Olivares & J. McArdle (Eds.), Contemporary psychometrics: A festschrift for Rod
McDonald (pp. 275-340). Mahwah, NJ: Lawrence Erlbaum.
Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis testing
approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s
(1999) findings. Structural Equation Modeling, 11, 320-341.
Meehan, D.-C.-M., & Negy, C. (2003). Undergraduate students’ adaptation to college: Does being
married make a difference? Journal of College Student Development, 44, 670-690.
Mooney, S. P., Sherman, M. F., & LoPresto, C. T. (1991). Academic locus of control, self-esteem, and
perceived distance from home as predictors of college adjustment. Journal of Counseling and Devel-
opment, 69, 445-448.
National Center for Education Statistics. (2003). The condition of education: Section 1, Participation in
education. Retrieved January 20, 2004, from http://nces.ed.gov/pubs2003/2003067_1.pdf


Olsson, U. H., Foss, T., Troye, S. V., & Howell, R. D. (2000). The performance of maximum likelihood,
generalized least squares, and weighted least squares estimation in structural equation modeling
under conditions of misspecification and nonnormality. Structural Equation Modeling, 7, 557-595.
Olsson, U. H., Troye, S. V., & Howell, R. D. (1999). Theoretic fit and empirical fit: The performance
of maximum likelihood versus generalized least squares estimation in structural equation models.
Multivariate Behavioral Research, 34, 31-58.
Schwitzer, A. M., Robbins, S. B., & McGovern, T. V. (1993). Influences of goal instability and social
support on college adjustment. Journal of College Student Development, 34, 21-25.
Spady, W. (1971). Dropouts from higher education: Toward an empirical model. Interchange, 2, 38-62.
SPSS. (2003). Statistical package for the social sciences [Computer software]. Chicago: Author.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston: Allyn &
Bacon.
Terenzini, P., & Pascarella, E. (1977). Voluntary freshman attrition and patterns of social and academic
integration in a university: A test of a conceptual model. Research in Higher Education, 6, 25-43.
Thompson, B., & Daniel, L. G. (1996). Factor analytic evidence for the construct validity of scores: A
historical overview and some guidelines. Educational and Psychological Measurement, 56, 197-208.
Tinto, V. (1996). Reconstructing the first year of college. Planning for Higher Education, 25, 1-6.
West, S. G., Finch, J. F., & Curran, P. J. (1995). Structural equation models with nonnormal variables.
In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 57-75).
Thousand Oaks, CA: Sage.
