

Publisher: Routledge

Communication Research Reports


Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/rcrr20

Confirmatory Factor Analysis and Scale Validation in Communication Research
Timothy R. Levine
Published online: 17 Feb 2007.

To cite this article: Timothy R. Levine (2005) Confirmatory Factor Analysis and Scale
Validation in Communication Research, Communication Research Reports, 22:4, 335-338, DOI:
10.1080/00036810500317730

To link to this article: http://dx.doi.org/10.1080/00036810500317730


Communication Research Reports
Vol. 22, No. 4, December 2005, pp. 335–338

Confirmatory Factor Analysis and Scale Validation in Communication Research
Timothy R. Levine

Confirmatory factor analysis (CFA) is an under-used and often misunderstood statistical tool. CFA provides useful information about scale dimensionality and validity. This paper offers a brief and accessible introduction to the role of CFA in communication research. Some common issues with CFA, including dimensionality and model fit, are
addressed. More frequent and informed use of CFA would likely improve the quality of
measurement in quantitative communication research.

Keywords: Confirmatory factor analysis; CFA; Measurement validation

Research results are no more valid than the measures used to collect the data. Thus,
valid measurement is absolutely essential for meaningful empirical research. Yet, the
quality of the measurement is unknown in much published and submitted research.
More often than not, little information beyond reliability is reported. Reliability,
however, is not sufficient for measurement validity, and some measurement
confounds can even inflate reliability (Shevlin, Miles, Davies, & Walker, 2000).
Such problems cannot be detected with exploratory factor analysis (EFA), and EFA is
often reported in situations where confirmatory factor analysis (CFA) would be a
better statistical decision.
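The point that reliability is not sufficient can be illustrated with a small, entirely hypothetical computation: coefficient alpha calculated from an item covariance matrix in which the six items actually measure two distinct constructs still reaches a conventionally "acceptable" level. All numbers below are invented for illustration.

```python
# Cronbach's alpha from an item covariance matrix:
# alpha = k/(k-1) * (1 - trace(C) / sum(C)).
# Hypothetical 6-item scale: items 1-3 tap one construct, items 4-6 another
# (within-construct r = .6, cross-construct r = .2, unit variances).

def cronbach_alpha(cov):
    k = len(cov)
    total = sum(sum(row) for row in cov)
    trace = sum(cov[i][i] for i in range(k))
    return (k / (k - 1)) * (1 - trace / total)

def block_cov(k=6, within=0.6, between=0.2):
    # First k//2 items belong to construct A, the rest to construct B.
    cov = [[1.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i != j:
                same = (i < k // 2) == (j < k // 2)
                cov[i][j] = within if same else between
    return cov

alpha = cronbach_alpha(block_cov())
print(round(alpha, 2))  # 0.77: "acceptable" alpha from a two-construct item set
```

An alpha near .77 would routinely be reported as evidence of a sound scale, yet the covariance structure here is plainly two-dimensional; only a factor-analytic check would reveal that.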
Even the publication of validation studies does not provide a sufficient guarantee
of validity. For example, consider the widely used Machiavellianism (Christie & Geis,
1970), verbal aggressiveness (Infante & Wigley, 1986), and self-construal (Singelis,
1994) scales. Each of the scales was published along with validity tests. Yet,
subsequent research has shown data inconsistent with the measurement models
proposed by each (see Bresnahan et al., 2005; Hunter, Gerbing, & Boster, 1982; Levine et al., 2003; Levine et al., 2004).
Ideally, a program of validation research would vet all research measures. A stable
factor structure consistent with a thoughtful conceptual definition would be

Timothy R. Levine (PhD Michigan State University, 1992) is a Professor in the Department of Communication
at Michigan State University. Correspondence to Tim Levine, Department of Communication, Michigan State
University, East Lansing, MI 48824-1212, USA (Email: levinet@msu.edu).

ISSN 0882-4096 (print)/ISSN 1746-4099 (online) © 2005 Eastern Communication Association


DOI: 10.1080/00036810500317730
established, replicated, and shown to be invariant across samples and applications.
Multiple methods (Campbell & Fiske, 1959) would be used to show convergent,
discriminant, and predictive validity in addition to the more commonly provided
nomological network validation (Cronbach & Meehl, 1955). In practice, however,
such programs of research are rare. Although such work is highly desirable in the case
of frequently used scales, such laborious efforts are simply not practical in many
cases. Researchers must often make do with less.
CFA is a useful statistical procedure for providing validity information (see Hunter
& Gerbing, 1982). It requires little effort and no additional data. CFA is used when
constructs are measured with multiple items, when the scale items have a linear
relationship to the scale average or total, and when a researcher has an a priori idea of
which items measure which constructs. These three criteria are met in much communication research.
While CFA does not provide sufficient information for a strong validity case, it
does provide necessary and useful information. CFA provides information on the
number of constructs measured and tests which items measure the same construct
and which items measure different constructs. Such information is needed in order to
meaningfully sum or average a set of items as a measure of some construct and to
meaningfully interpret a reliability coefficient. CFA does not test whether a unidimensional set of items assesses some specific substantive construct. That requires additional assessments involving face validity, nomological networks, and multi-method, multi-trait validation. So, CFA provides information on the number of constructs measured and on which items assess the same construct, but not on the substantive meaning of a construct.
A useful distinction can be made between CFA and EFA. Whether factor analysis is
exploratory or confirmatory is often confused with the type of statistical software
used. However, one could use LISREL, AMOS, or EQS in an exploratory way just as
one could seek to confirm a model with SPSS or SAS. As the name implies, EFA
involves using factor analysis to find patterns of inter-correlations in the data (see
Park, Dailey, & Lemus, 2002). EFA is data driven and is best used when there is little
preconception about how the items will factor. In CFA, one has reason to believe that
some specified number of constructs is being measured, and that specific items are a
function of specified constructs. That is, CFA is used when the researcher has an
expectation of how the items will factor, and CFA is used to test this expectation
against the data.
In CFA, the data are compared to a proposed measurement model, goodness of fit
is assessed, and an acceptable fit is a prerequisite for validity. Fit can be assessed with
various global fit indices (e.g., GFI [goodness of fit index], CFI [comparative fit
index], RMSEA [root mean square error of approximation], root mean squared
error), by examining specific deviations from the model (e.g., standardized residuals or
modification indexes), or both (see Hu & Bentler, 1995 for a detailed discussion).
Global indicators of fit are attractive because they provide a single value reflecting fit
of the entire model that can be compared to rules of thumb, but examination of
specific deviations is more informative about which particular items or factors might
be problematic. When testing model fit, it is generally desirable to test multiple
factors in a single model so that cross-dimension relationships can be assessed in
addition to within dimension associations. Further, loosening constraints or
correlating error terms to obtain fit is generally a sign of invalidity. Correlated error
terms can reflect confounded measurement. A model that only fits when confounds
are built in is not consistent with valid measurement.
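As a sketch of what "comparing the data to a proposed measurement model" means, the fragment below computes the covariance matrix implied by a hypothetical one-factor model (Sigma = lambda * lambda' + diag(theta)) and its residuals against an invented sample covariance matrix; a large residual flags a specific deviation of the kind standardized residuals and modification indices report. All loadings and covariances are illustrative assumptions, not output from any SEM package.

```python
# One-factor measurement model: the implied covariance matrix is
# Sigma = lambda * lambda' + diag(theta), where lambda holds the factor
# loadings and theta the unique (error) variances.

def implied_cov(loadings, uniquenesses):
    p = len(loadings)
    return [[loadings[i] * loadings[j] + (uniquenesses[i] if i == j else 0.0)
             for j in range(p)] for i in range(p)]

def residuals(sample_cov, sigma):
    p = len(sigma)
    return [[sample_cov[i][j] - sigma[i][j] for j in range(p)] for i in range(p)]

loadings = [0.8, 0.7, 0.6]
uniquenesses = [1 - l * l for l in loadings]  # unit-variance items
sigma = implied_cov(loadings, uniquenesses)

# An invented sample covariance matrix whose (2, 3) entry exceeds what the
# model predicts, e.g. because items 2 and 3 share a wording confound.
s = [[1.00, 0.56, 0.48],
     [0.56, 1.00, 0.62],
     [0.48, 0.62, 1.00]]

resid = residuals(s, sigma)
print(round(resid[1][2], 2))  # localized misfit between items 2 and 3
```

Fitting a correlated error between items 2 and 3 would absorb this residual and "improve" global fit, which is exactly the kind of built-in confound the paragraph above warns against.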
The concept of a confirmatory approach to data analysis may be unfamiliar to
many social scientists. In most common uses of inferential statistics, the null
hypothesis reflects a negation of the research hypothesis, and statistical significance is interpreted as providing support for the research hypothesis. The reverse is true of model
fit in confirmatory research and CFA. The proposed model is treated as the null, and statistically significant results reflect a discrepancy between the predicted model and
the data. Thus, non-significant results indicate that the data are consistent with the
model, and significance in a goodness-of-fit test indicates that the data are
inconsistent with the model. Whereas this may seem strange to those indoctrinated
in traditional significance testing, the confirmatory analysis actually allows for strong
knowledge claims (cf. Meehl, 1967).
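The reversed logic can be made concrete with a toy goodness-of-fit test. For a chi-square statistic with 2 degrees of freedom, the p-value has the closed form exp(-x/2), so no statistics library is needed; the two chi-square values below are invented.

```python
import math

# Chi-square survival function for df = 2: P(X > x) = exp(-x / 2).
def chi2_sf_df2(x):
    return math.exp(-x / 2)

# In a goodness-of-fit test, a NON-significant result supports the model.
close_fit = chi2_sf_df2(1.5)   # p ~ .47: data consistent with the model
poor_fit = chi2_sf_df2(12.0)   # p ~ .002: model rejected

print(close_fit > 0.05, poor_fit < 0.05)  # True True
```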
A common source of confusion involves the idea of dimensionality. Whereas
people sometimes talk of a multidimensional construct, there can be no such thing
(with the exception of second-order unidimensionality, see Hunter & Gerbing, 1982).
If a scale is multidimensional, there is more than one construct. Each dimension is
associated with an empirically distinct construct, and scales are scored so that there is
a one-to-one correspondence between the number of constructs-dimensions
measured and the number of scores obtained from the scales. For example, if
empathy is specified to have three dimensions (perspective taking, empathic concern,
and emotional contagion), then the scales will be scored so as to yield three scores,
each with its own estimate of reliability. In this example, a CFA would specify three
factors with items measuring each dimension loading on the intended dimension.
Model fit would indicate that the set of items reflects three different constructs, with the items specified as measuring each of the three dimensions clustering on the specified dimensions.
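The one-score-per-dimension scheme described above can be sketched as follows. The three subscale names follow the empathy example in the text, but the item assignments, responses, and two-items-per-dimension structure are invented for illustration; coefficient alpha stands in for the per-dimension reliability estimate.

```python
# Score a hypothetical three-dimension empathy measure: one mean score and
# one reliability estimate per dimension, matching the one-to-one
# correspondence between dimensions and scores.

def cronbach_alpha(items):
    # items: list of per-item response lists (respondents in the same order)
    k = len(items)
    n = len(items[0])
    var = lambda xs: sum((x - sum(xs) / n) ** 2 for x in xs) / (n - 1)
    totals = [sum(col) for col in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# Invented responses: 4 respondents x 2 items per dimension.
subscales = {
    "perspective_taking":  [[4, 5, 2, 3], [4, 4, 2, 3]],
    "empathic_concern":    [[5, 3, 4, 2], [5, 4, 4, 1]],
    "emotional_contagion": [[2, 2, 4, 5], [1, 3, 4, 5]],
}

# One score per respondent per dimension (item mean), plus one alpha each.
scores = {name: [sum(resp) / len(items) for resp in zip(*items)]
          for name, items in subscales.items()}
alphas = {name: round(cronbach_alpha(items), 2)
          for name, items in subscales.items()}
print(len(scores), sorted(alphas))  # three score sets, three reliabilities
```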
Given the distinction between EFA and CFA, one would think that CFA
applications would be common. Most researchers include items and scales for a
reason, and usually they have an expectation that a set of items measures some
specific construct. Few scales are so thoroughly validated that assurance of validity is
unnecessary. Researchers also want to show that scales are reliable, and to
meaningfully interpret reliability estimates, unidimensionality is a prerequisite
(Shevlin et al., 2000). CFA provides stronger evidence for dimensionality than EFA
because EFA can under-factor correlated constructs and because model fit is typically
tested with CFA. Thus, CFA is preferred over EFA in many research situations.
Communication researchers are encouraged to use and report CFA when using
multiple-item linear scales.

References
Bresnahan, B. J., Levine, T. R., Shearman, S., Lee, S. Y., Park, C. Y., & Kiyomiya, T. (2005). A multimethod-multitrait validity assessment of self-construal in Japan, Korea, and the U.S. Human Communication Research, 31, 33–59.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105.
Christie, R., & Geis, F. L. (1970). Studies in Machiavellianism. New York: Academic Press.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.
Hu, L. T., & Bentler, P. M. (1995). Evaluating model fit. In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 76–99). Thousand Oaks, CA: Sage.
Hunter, J. E., & Gerbing, D. W. (1982). Unidimensional measurement, second order factor analysis, and causal models. Research in Organizational Behavior, 4, 267–320.
Hunter, J., Gerbing, D., & Boster, F. (1982). Machiavellian beliefs and personality: Construct invalidity of the Machiavellianism dimension. Journal of Personality and Social Psychology, 43, 1293–1305.
Infante, D. A., & Wigley, C. J., III. (1986). Verbal aggressiveness: An interpersonal model and measure. Communication Monographs, 53, 63–69.
Levine, T. R., Beatty, M. J., Limon, S., Hamilton, M. A., Buck, R., & Chory-Asada, R. M. (2004). The dimensionality of the verbal aggressiveness scale. Communication Monographs, 71, 245–268.
Levine, T. R., Bresnahan, M., Park, H. S., Lapinski, M. K., Wittenbaum, G., Shearman, S., Lee, S. Y., Chung, D. H., & Ohashi, R. (2003). Self-construal scales lack validity. Human Communication Research, 29, 210–252.
Meehl, P. E. (1967). Theory testing in psychology and physics: A methodological paradox. Philosophy of Science, 34, 103–115.
Park, H. S., Dailey, R., & Lemus, D. (2002). The use of exploratory factor analysis and principal components analysis in communication research. Human Communication Research, 28, 562–577.
Shevlin, M., Miles, J. N. V., Davies, M. N. O., & Walker, S. (2000). Coefficient alpha: A useful indicator of reliability? Personality and Individual Differences, 28, 229–237.
Singelis, T. M. (1994). The measurement of independent and interdependent self-construals. Personality and Social Psychology Bulletin, 20, 580–591.
