

Evaluating Training Programs: The Four Levels,
by Donald L. Kirkpatrick, Berrett-Koehler
Publishers, San Francisco, CA, 1996, 229 pp.

Reviewed by: SALVATORE V. FALLETTA

Salvatore V. Falletta • Regional Manager, Training & Development, Alltel, 4000 Regency Parkway, Suite 400, Cary, NC 27511.

American Journal of Evaluation, Vol. 19, No. 2, 1998, pp. 259-261. All rights of reproduction in any form reserved. ISSN: 1098-2140. Copyright © 1998 by American Evaluation Association.

Evaluating Training Programs: The Four Levels is written for practitioners and administra-
tors who are interested in a practical approach to evaluating training programs. The model pre-
sented in the book was originally introduced by the author in 1959; since then, Kirkpatrick’s
evaluation model has been considered to be the most useful framework in the evaluation of
training (Basarab & Root, 1992; Phillips, 1991; Rothwell & Sredl, 1992). Kirkpatrick’s model
allows for the measurement of potential effects of training at four levels: (a) participants’ reac-
tion to the training, (b) participants’ learning as a result of the training, (c) participants’ change
in behavior as a result of the training, and (d) the subsequent impact on the organization as a
result of participants’ behavior change. Following Kirkpatrick’s model, practitioners may
determine the extent to which participants are satisfied with a training program, whether par-
ticipants learned from the program, whether participants were able to apply the learning on the
job, and/or the impact on the organization.
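
For readers who find it helpful to see the framework in concrete terms, the four levels can be sketched as a simple data structure. The short Python fragment below is only an illustrative sketch: the class names, fields, and example measures are assumptions made for this illustration, not instruments prescribed in the book.

from dataclasses import dataclass
from enum import IntEnum
from typing import List, Optional

class KirkpatrickLevel(IntEnum):
    # The four levels in the sequence Kirkpatrick presents them.
    REACTION = 1   # participants' reaction to the training
    LEARNING = 2   # participants' learning as a result of the training
    BEHAVIOR = 3   # change in on-the-job behavior
    RESULTS = 4    # subsequent impact on the organization

@dataclass
class TrainingEvaluation:
    # Evidence collected at each level; None means the level was not measured.
    # The measures suggested in the comments are assumptions for illustration.
    reaction: Optional[float] = None   # e.g., mean satisfaction rating
    learning: Optional[float] = None   # e.g., post-test score or score gain
    behavior: Optional[float] = None   # e.g., observed transfer to the job
    results: Optional[float] = None    # e.g., an organizational outcome measure

    def levels_measured(self) -> List[KirkpatrickLevel]:
        # Report which of the four levels have data, in sequence.
        observed = [self.reaction, self.learning, self.behavior, self.results]
        return [level for level, value in zip(KirkpatrickLevel, observed) if value is not None]

A call such as TrainingEvaluation(reaction=4.2, learning=0.85).levels_measured() would return the first two levels, mirroring an evaluation effort that stopped after Level 2.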
The author provides three basic reasons for evaluating training: (a) to justify the existence
of a training function by showing how it contributes to organizational goals and objectives, (b)
to decide whether to continue a training program, and (c) to improve training. Kirkpatrick’s
book is organized into two sections: concepts and techniques for evaluating the impact of
training according to the model, and case studies illustrating the use of the model. In the first
section, Kirkpatrick describes a process of planning and implementing a sound training pro-
gram that will later lead to positive outcomes. With the exception of two quasi-public institu-
tions, the case studies included in the second section represent corporate entities. While the
book is not an edited text, the case studies, which comprise a major portion of the book, are
written by evaluation specialists from Motorola, Intel, Arthur Andersen, and other leading
corporations.
According to Kirkpatrick, the four levels of his model provide a sequential framework
from which to evaluate training. Kirkpatrick contends that each level is important and should
not be overlooked in an attempt to measure more important outcomes which occur later in
time (e.g., Level 4: the impact on the organization). He contends that without collecting eval-
uation data at each of the levels, the evaluator loses valuable information. For example, if an
evaluator neglected to collect data related to Level 2 (i.e., the participants’ learning), the eval-
uator would be limited in the interpretation of null findings at Level 3 (i.e., the extent to which
participants applied what was learned in training on the job). It could be the case that partici-
pants did not learn anything in the training or that participants learned something but that the
skills associated with this learning did not transfer to the work setting.
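
The reasoning in this example can be made explicit. The short sketch below is again only an illustration: the function name, the numeric scores, and the 0.7 pass mark are assumptions introduced here, not values from the book.

from typing import Optional

def interpret_null_level3(learning_score: Optional[float], pass_mark: float = 0.7) -> str:
    # Interpret a null Level 3 (behavior) finding in light of Level 2 (learning) evidence.
    # The pass mark is an arbitrary threshold assumed for illustration.
    if learning_score is None:
        return ("Ambiguous: without Level 2 data, a failure to learn cannot be "
                "distinguished from a failure to transfer learning to the job.")
    if learning_score < pass_mark:
        return "Participants did not learn the material, so transfer could not be expected."
    return "Participants learned the material, but the skills did not transfer to the work setting."

print(interpret_null_level3(None))   # Level 2 skipped: the null Level 3 finding is ambiguous
print(interpret_null_level3(0.55))   # learning did not occur
print(interpret_null_level3(0.90))   # learning occurred; the problem lies in transfer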
A number of variants of Kirkpatrick’s model have emerged in the evaluation literature
since the introduction of the model:

Ultimate Value: Hamblin (1974) suggested adding a fifth level to the model to account
for the economic benefit or human good of training.
Formative evaluation of training process: Brinkerhoff (1987) incorporated two forma-
tive evaluation stages into the model to make it a six-stage process of evaluation.
Societal Value: Kaufman and Keller (1994) suggested adding the societal value of the
training as a fifth level.
Return on Investment: Phillips (1994) suggested focusing on return on investment as a
fifth level of the model.

The Brinkerhoff model was published by the American Society for Training and Devel-
opment (ASTD) (Brinkerhoff, 1995); in other words, the training industry has endorsed Kirk-
patrick’s model, or a variant of it, for the past three decades. Such models provide a practical
framework to easily understand the impact of training and compare the effectiveness of train-
ing within similar organizations.
Kirkpatrick’s model is not universally accepted, however. Holton (1996) argues that
Kirkpatrick’s model is not a model at all, but rather, a taxonomy of outcomes (i.e., a classifi-
cation scheme). According to Holton: (1) taxonomies such as Kirkpatrick’s evaluation
model are too loose because they do not fully identify the constructs underlying the phenom-
ena of interest, (2) in a genuine model, a causal relationship or interdependence between vari-
ables is assumed, and (3) a model must stand the test of time through empirical validation.
Kirkpatrick does imply a causal relationship between the levels in his model, although it is
apparent that the model does not represent causality and cannot be tested as such. While the semantic
difference between model and taxonomy may be significant to the theoretician, training prac-
titioners and administrators are not likely to concern themselves with the distinction, nor will
the remainder of this review.
Another critique of Kirkpatrick’s model is that it is entirely outcome oriented. As men-
tioned, Brinkerhoff (1987) expanded the use of the taxonomy by adding formative stages.
Similarly, Burrow (1996) contends that the model can be construed to function as either a pro-
cess evaluation or an outcome evaluation. For example, he suggests that any evaluation ele-
ment can be evaluated in a formative as well as summative manner within Kirkpatrick’s
model.
While Holton (1996) and Brinkerhoff (1995) assert that an integrative model should be
developed for training evaluation, it does not appear that training practitioners, administrators,
or stakeholders are ready for such a model. For example, much of the training evaluation lit-
erature begins with what has become a cliché in the field: “Everyone’s talking about it. No one
is doing it.” While Kirkpatrick’s model has limitations, it has yet to be fully implemented to
the extent possible. If organizations are incapable of implementing a simple four-level evalu-
ation framework to assess the impact of training, they are not likely to understand or use an
empirically tested, integrative causal model. Until a workable integrative model of training
evaluation is conceptualized and validated, Kirkpatrick’s model will continue to provide a
practical framework for practitioners to evaluate the effectiveness of their training programs;
this book enables practitioners to use the Kirkpatrick model to understand and implement
training evaluation.

REFERENCES

Basarab, D. J., Sr., & Root, D. K. (1992). The training evaluation process. Boston: Kluwer.
Brinkerhoff, R. O. (1987). Achieving results from training. San Francisco, CA: Jossey-Bass.
Brinkerhoff, R. O. (1995). Using evaluation to improve the quality of technical training. In L. Kelly
(Ed.), The ASTD technical and skills training handbook (pp. 385-409). New York: McGraw-Hill.
Burrow, J. (1996, November). Evaluation: Perception to transfer. Paper presented at the meeting of the
International Society for Performance Improvement, Research Triangle Park, NC.
Hamblin, A. C. (1974). Evaluation and control of training. New York, NY: McGraw-Hill.
Holton, E. F., III. (1996). The flawed four-level evaluation model. Human Resource Development Quar-
terly, 7, 5-21.
Kaufman, R., & Keller, J. M. (1994). Levels of evaluation: Beyond Kirkpatrick. Human Resource
Development Quarterly, 5, 371-380.
Phillips, J. J. (1991). Handbook of training evaluation and measurement methods (2nd ed.). Houston:
Gulf.
Phillips, J. J. (Ed.). (1994). Measuring return on investment (Vol. 1). Alexandria, VA: American Society
for Training and Development.
Rothwell, W. J., & Sredl, H. J. (1992). The ASTD reference guide to professional human resource devel-
opment roles and competencies (2nd ed., Vol. II). Amherst, MA: HRD Press.
