Introduction to Cronbach’s Alpha

Cronbach’s alpha is certainly among the most used statistics in the social sciences, but many students and researchers don’t really know what it tells us – or how to interpret it. Fortunately, Chad Marshall wrote a wonderful introduction to Cronbach’s Alpha, below. Chad Marshall is currently a DBA student in the Mitchell College of Business at the University of South Alabama. If you have any questions about Cronbach’s Alpha, you can email me (Dr. Matt C. Howard – MHoward@SouthAlabama.edu) or Chad Marshall (CJM1423@JagMail.SouthAlabama.edu).
Overview
Cronbach’s alpha (Cronbach, 1951), also known as coefficient alpha, is a measure of reliability, specifically internal consistency reliability or item interrelatedness, of a scale or test (e.g., questionnaire). Internal consistency refers to the extent that all items on a scale or test contribute positively towards measuring the same construct. As such, internal consistency reliability is relevant to composite scores (i.e., the sum of all items of the scale or test). Also important to note, reliability pertains to the data, not the scale or test measure itself. The simplified formula for Cronbach’s alpha is as follows:
α = (N · c̄) / [v̄ + (N – 1) · c̄]

where N is the number of scale or test items, c̄ is the average inter-item covariance among the items, and v̄ is the average item variance.
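As a rough illustration of this formula, the sketch below computes alpha directly from a respondents-by-items matrix of scores. It is only a minimal example: the function name, the NumPy-based implementation, and the sample data are illustrative and not part of the original post.

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha via the simplified formula above.

    `items` is a respondents-by-items array of scores.
    """
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    # Covariance matrix with each item (column) treated as a variable.
    cov = np.cov(items, rowvar=False)
    v_bar = cov.diagonal().mean()                      # average item variance
    c_bar = cov[~np.eye(n_items, dtype=bool)].mean()   # average inter-item covariance
    return (n_items * c_bar) / (v_bar + (n_items - 1) * c_bar)

# Example: five respondents answering a three-item scale.
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [1, 2, 1],
]
print(round(cronbach_alpha(scores), 3))
```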

Cronbach’s alpha typically ranges from 0 to 1. Values closer to 1.0 indicate greater internal consistency of the variables in the scale. In other words, higher Cronbach’s alpha values show greater scale reliability. A value of 1.0 indicates that all of the variability in test scores is due to true score differences (i.e., reliable variance) with no measurement error. Conversely, a value of 0.0 indicates that no true score (i.e., no reliable variance) is present and only measurement error exists in the items. Stated differently, a Cronbach’s alpha of 1.0 represents perfect consistency in measurement, while an alpha value of 0.0 represents no consistency in measurement. Although rare, the coefficient may be negative in certain instances. For example, if a negatively worded survey item is not properly reverse scored, the resulting Cronbach’s alpha may be negative (Cho & Kim, 2015).
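As a side note on that last point, reverse scoring a negatively worded item is a simple transformation. A minimal sketch, assuming a 1-to-5 response format (the bounds and function name are illustrative):

```python
def reverse_score(item_scores, low=1, high=5):
    """Reverse a negatively worded item so it runs in the same direction
    as the rest of the scale (e.g., on a 1-5 scale, 5 -> 1 and 4 -> 2)."""
    return [low + high - x for x in item_scores]

# Skipping this step leaves the item negatively related to the other items,
# which can drag Cronbach's alpha down or even below zero.
print(reverse_score([5, 4, 1, 2, 3]))  # [1, 2, 5, 4, 3]
```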

Utility of Cronbach’s Alpha

Cronbach’s alpha has several benefits for researchers, including that it can be used with both dichotomously and continuously scored variables. Compared to other strategies for estimating reliability that require samples taken at two different points in time (e.g., test-retest reliability or parallel-forms reliability), Cronbach’s alpha can be calculated from a single sample. This is particularly beneficial for constructs where shifts in responses may occur (e.g., states). With respect to other measures of internal consistency (e.g., split-half reliability), Cronbach’s alpha provides the average of all possible split-halves, rather than forcing the researcher to select only one for analysis.

The major concern with Cronbach’s alpha is its assessment when the number of scale or test items is large. Hair, Black, Babin, and Anderson (2010) warn that researchers should place more rigorous requirements on scales with large numbers of items, because increasing the number of items on a scale, even without changing the intercorrelations, will result in an increase of the alpha value. Research by Cortina (1993) suggests that scales with over 20 items can have a Cronbach’s alpha above 0.70 even when item intercorrelations are very small.
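This effect can be illustrated with the same simplified formula applied to standardized items (average variance of 1), so that c̄ becomes the average inter-item correlation. In the sketch below, the correlation of .14 and the item counts are illustrative values only:

```python
def standardized_alpha(n_items: int, mean_r: float) -> float:
    """The simplified alpha formula with standardized items: the average
    variance is 1 and c-bar equals the average inter-item correlation."""
    return (n_items * mean_r) / (1 + (n_items - 1) * mean_r)

# With the average inter-item correlation held at an illustrative .14,
# alpha still climbs past the conventional .70 cutoff as items are added.
for n in (3, 10, 20, 40):
    print(n, round(standardized_alpha(n, 0.14), 2))
```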

Coefficient Interpretation

Interpretation of Cronbach’s alpha is muddled by a lack of agreement regarding the appropriate range of acceptability. A frequently cited acceptable range of Cronbach’s alpha is a value of 0.70 or above. This is derived from the work of Nunnally (1978). However, as noted by Lance, Butts, and Michels (2006), this norm is misleading.

So what is an acceptable cut-off? The answer is that it depends: rather than there being a universal standard, the appropriate level of reliability depends on how a measure is being applied in research. As more succinctly put by Cho and Kim (2015), “One size does not fit all” (p. 218). Nunnally (1978) held that the lower cut-off (i.e., 0.70) was appropriate in the early stages of research (i.e., exploratory), such as during scale development, and that more stringent cut-offs should be used for basic research (0.80 or higher) and applied research (0.90 or higher) (Lance et al., 2006). In fact, for applied research, Nunnally (1978) argued that 0.95 should be considered the standard.

Other researchers have provided different lower limits of acceptability for Cronbach’s alpha, including Nunnally (1967), who in the first edition of his book suggested that values as low as 0.50 are appropriate for exploratory research. For another example, Hair et al. (2010) state that while a value of 0.70 is generally agreed upon as acceptable, values as low as 0.60 may be acceptable for exploratory research. Additionally, George and Mallery (2003) suggest a tiered approach consisting of the following: “≥ .9 – Excellent, ≥ .8 – Good, ≥ .7 – Acceptable, ≥ .6 – Questionable, ≥ .5 – Poor, and ≤ .5 – Unacceptable” (p. 231).
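If it helps to keep these tiers straight, they can be written as a small lookup. The thresholds below are George and Mallery’s (2003); the function itself is only an illustrative sketch:

```python
def george_mallery_label(alpha: float) -> str:
    """Map an alpha value onto George and Mallery's (2003) descriptive tiers."""
    if alpha >= 0.9:
        return "Excellent"
    if alpha >= 0.8:
        return "Good"
    if alpha >= 0.7:
        return "Acceptable"
    if alpha >= 0.6:
        return "Questionable"
    if alpha >= 0.5:
        return "Poor"
    return "Unacceptable"

print(george_mallery_label(0.82))  # Good
```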

While researchers disagree on the appropriate lower cut-off values of Cronbach’s alpha, some researchers (e.g., Cho & Kim, 2015; Cortina, 1993) caution against applying any arbitrary or automatic cutoff criteria. Rather, it is suggested that any minimum value should be determined on an individual basis, considering the purpose of the research, the importance of the decision involved, and/or the stage of research (i.e., exploratory, basic, or applied). Cortina (1993) contends that researchers should not make decisions about scale adequacy based solely on the level of Cronbach’s alpha, and that the adequate level of reliability depends on the decision that is made with the scale. For example, the reliability of the Graduate Management Admission Test is more than adequate for distinguishing between a score of 650 and a score of 420. However, its reliability is not adequate for distinctions between very close scores (e.g., 660 and 650).

Conclusion

Cronbach’s alpha continues to be the most widely used measure of internal consistency reliability among researchers. Researchers should be cautious when applying arbitrary cutoff values for Cronbach’s alpha, and should instead select the appropriate level of reliability based on the specific research purpose and the importance of the decision associated with the scale’s use. Researchers should also remember that while Cronbach’s alpha is an indicator of consistency, it is not a measure of homogeneity or unidimensionality.

References

Cho, E., & Kim, S. (2015). Cronbach’s coefficient alpha: Well known but poorly understood. Organizational Research Methods, 18(2), 207.

Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78(1), 98.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334.

George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple
guide and reference. 11.0 update (4th ed.). Boston: Allyn & Bacon.

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis. Pearson College Division.

Lance, C. E., Butts, M. M., & Michels, L. C. (2006). The sources of four commonly reported cutoff criteria: What did they really say? Organizational Research Methods, 9(2), 202.

Nunnally, J. C. (1967). Psychometric theory. New York: McGraw-Hill.

Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.
