
Corral GTD-7013 V2 W2

Compare Models of Evaluation

Raymond M. Corral  

School of Education, Northcentral University 


Dr. Bob Bulik

Jan 2, 2022 


Comparing Evaluation Models

This paper provides a critical evaluation and comparison of Kirkpatrick's Learning Evaluation model (KM) (Kirkpatrick, 1959) against the Context, Input, Process, Product (CIPP) model (Worthen & Sanders, 1987), the Training Validation System (TVS) approach (Fitz-Enz, 1994), and the Input, Process, Output, Outcome (IPO) evaluation model (Bushnell, 1990). In addition to the differences among the models, this paper shows how each model supports the evaluation purpose and presents their pros and cons and their potential uses in the evaluation process.

The Kirkpatrick Learning Evaluation Model

Don Kirkpatrick (1959) created the Learning Evaluation Model for classifying training evaluation. Over time, it has become the best known and most widely used framework for training evaluation. The original model consisted of four steps, which are now described as levels (Kirkpatrick & Kirkpatrick, 2006).

According to D. and J. Kirkpatrick (2009), there are three principal reasons for evaluating training programs. First, evaluation indicates ways to improve future programs. Second, it helps determine whether a program should be continued or discontinued. Finally, evaluation helps justify the existence of the institution's training department and its budget. Training professionals are urged to use firm guidelines for evaluating programs at all four levels, and to substantiate their evaluation findings using more than reaction sheets at the end of their programs (Kurt, 2018; Kirkpatrick & Kirkpatrick, 2009).


Continued Relevance of the Kirkpatrick Model

The Kirkpatrick Model continues to be relevant and applicable because it is continuously revised and updated in response to feedback from researchers using the model. Critiques are many; however, despite their number, they are addressed with consideration for the improvement of the model (Kirkpatrick, 1996). Fortunately, the simple foundation of the model is strong and flexible. The new knowledge and acquired skills needed to succeed in the training program should be congruent with those required to be successful on the job, and the evaluation will be relevant if it is able to measure those factors.

Criticism of the model includes ambiguity about how to operationalize the measurement levels, and a failure to incorporate the latest psychological findings on skill acquisition and learning (Tamkin, Yarnall, & Kerrin, 2002; Kirkpatrick, 1996; Kirkpatrick & Kirkpatrick, 2009). Others say that the model fails to account for the various intervening variables that can affect the transfer of teaching, learning, and training (Tamkin, Yarnall, & Kerrin, 2002).

The New World Kirkpatrick Model (NWKM) is a new version of the four-level Kirkpatrick Model, developed in response to critiques of the original KM, which is widely used for evaluating continuing education. The NWKM expands the scope of the original KM by adding concepts and process measures that enable educators to interpret the results of evaluation (Liao & Hsu, 2019).


The Four Levels

The first is the Reaction level, which monitors what participants thought of the program and uses questionnaires, forms, or surveys to measure their reactions. Evaluators need to find out whether the participants were happy with the instructors, with the presentation, and with the educational tools used (such as PowerPoint slides and handouts) and, most importantly, whether the training met the participants' needs. Honest responses via written comments and feedback are most valued (Kirkpatrick, 1959).

Level 2 is the Learning level, which tracks the degree to which training objectives are met. Performance tests are used to measure any increases in knowledge and skills, and any attitudinal changes, congruent with those objectives. This level is more time consuming and challenging than the first. At this level, reliable strategies relevant to program objectives are essential, and a pre-test provides data with which to gauge learning. Evaluation at this level requires a distinct scoring process that is clear and consistent, to reduce the possibility of inconsistent evaluation reports (Kirkpatrick, 1959).
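The pre-test/post-test comparison described above can be sketched as a simple calculation. The function name and the normalized-gain metric below are illustrative assumptions for this discussion, not part of Kirkpatrick's model:

```python
def learning_gain(pre_score: float, post_score: float, max_score: float = 100.0) -> float:
    """Normalized gain: points gained as a fraction of the points
    still available at pre-test (an illustrative scoring metric)."""
    room_to_improve = max_score - pre_score
    if room_to_improve == 0:
        return 0.0  # trainee was already at ceiling before training
    return (post_score - pre_score) / room_to_improve

# A trainee scoring 40/100 before training and 70/100 after
# gained 30 of the 60 available points:
gain = learning_gain(40, 70)  # → 0.5
```

Reporting gains relative to each trainee's pre-test baseline is one way to apply the clear, consistent scoring process this level requires.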

Level 3 is the Behavior level. Evaluators make post-training assessments through observation and by gathering productivity data to measure job-related behavioral changes. Evaluation at this level potentially allows individuals to determine the usefulness of the training program as it relates to their attitude and approach, as well as their ability to apply new knowledge and implement new skills, all of which contribute to their sense of self-efficacy (Kirkpatrick, 1959).


Finally, Level 4 is the Results level, which assesses the overall impact and contribution of the training to the organization's bottom line. At this level, cost and quality are measured to determine the return on investment (ROI) (Barnett & Mattox, 2010; Kirkpatrick & Kirkpatrick, 2006).
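The ROI calculation at this level follows the standard formula of net benefits relative to costs; the dollar figures below are purely illustrative, not drawn from the cited sources:

```python
def training_roi_percent(program_benefits: float, program_costs: float) -> float:
    """Standard ROI formula: (benefits - costs) / costs * 100."""
    return (program_benefits - program_costs) / program_costs * 100.0

# Hypothetical example: a program costing $20,000 that produced
# $50,000 in measured benefits returns 150% on the investment.
roi = training_roi_percent(50_000, 20_000)  # → 150.0
```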

Context, Input, Process, Product (CIPP) Model

The CIPP model was created by Daniel Stufflebeam in the 1960s and is designed to identify strengths and weaknesses within a four-stage framework (Yale, 2021). The model's focus is on continuous improvement through the evaluation of context, input, process, and product.

Context

In this model, contextual elements such as needs, problems, opportunities, and gains are evaluated along with conditions and dynamics. Context evaluation serves planning decisions by identifying unmet needs and missed opportunities; once these discoveries are made, decision makers can set achievement targets and prioritize needs and problems (Tokmak, Baturay, & Fadde, 2013).

Input

Input evaluation primarily focuses on the allocation of resources, support, and the management of the apprenticeship projects, and it serves to judge their feasibility and effectiveness. Content themes, participant views on instructional designs, educational materials, and how the training was facilitated might also be considered (Bushnell, 1990).

Process


Process evaluation allows evaluators to monitor, document, study, and report on how the program plans were applied. Evaluators provide feedback on the program's implementation and, once the program is completed, on whether it continued as targeted and according to plan. The way the instructor managed the process, along with the activities, methods, and techniques used, is also examined (Bushnell, 1990).

Product

Having this evaluation at the end allows all of the program's achievements to be determined and reviewed. Product evaluation determines the degree to which objectives have been met and how they were met (AL-Ajlouni, Athamneh, & Jaradat, 2010). The following questions serve to conduct a thorough evaluation: (1) Has the program succeeded in reaching its desired targets? (2) Were the targeted needs and problems handled effectively? (3) What side effects, if any, did the program have? (4) Were any conflicts created alongside the positive results? (5) Were the program achievements worth the expense? Similar self-evaluation questions are also recommended (Bushnell, 1990).

Recommended Application

This model lends itself well to diverse educational training scenarios and to such disciplines as health services, public management, and more.

Training Validation System (TVS) Approach

The standardized Training Validation System (TVS) was created in 1994 by J. Fitz-Enz. This model is based on a four-step process and utilizes a set of analytic tools. The four steps are situation, intervention, impact, and value. This methodology helps to reveal specific, current, and potential values, as well as acquired value, and it also helps identify training program shortfalls (Dahiya & Jha, 2011).

However, this approach, along with the CIPP model (Worthen & Sanders, 1987) and the Input, Process, Output, Outcome (IPO) model (Bushnell, 1990), seems more useful for understanding the overall context and situation, and it may not provide sufficient detail. Systems-based models such as these provide little insight into, and may not represent, the interfacing dynamics between the training design and the evaluation itself (Kavgaoğlu & Alcı, 2016).

These models rely heavily on the evaluator's creativity and implementation, as they provide very little detail or description of a step-by-step process and no tools for evaluation. It is important to note that these approaches utilize collaborative evaluation processes with various roles for participants to manage, yet there is little detail on how this should be done (Dahiya & Jha, 2011).

Furthermore, these models do not address the collaborative process of evaluation, that is, the different roles and responsibilities that people may play during an evaluation. The model is seen as a "road map" or "planning process" for the designer. An effective model helps the user to understand what is essentially a complicated process, presenting reality in a simplified and comprehensible form (Dahiya & Jha, 2011).

The last two steps of the TVS approach are very similar to Kirkpatrick's (1959), with less detail. The four steps are as follows:

Step 1: Situation analysis — managers are questioned regarding work processes, and their answers are probed until a tangible desired outcome is revealed.

Step 2: Intervention — once the problem is revealed, a remedial training design is created.

Step 3: Impact — variables impacting performance, and any shortfalls, are examined and measured.

Step 4: Value — a monetary value is placed on the anticipated performance improvement.

A strong collaborative relationship is highly suggested for this approach to work (Chen & Dai, 2021).

Recommendation

This approach would be greatly enhanced by alternative sets of questions to be used at each step to aid in discovering performance gaps and the revenue lost to performance shortfalls, along with implementation strategies (Ward & Parker, 2006).

Input, Process, Output, Outcome Model

Bushnell (1990) developed the Input, Process, Output, Outcome (IPO) model, which focuses on the elements that go into training. With this model, employee progress can be monitored by setting performance indicators at each stage (Dahiya & Jha, 2011).

The four stages are described as follows:

Input — key elements such as the instructor's experience level, trainee qualifications, and available resources.

Process — the development, design, planning, and delivery of the training.

Outputs — the trainees' reactions, acquired knowledge, developed skills, and any improvements in behavior and performance on the job.

Outcomes — an enhanced bottom line, additional revenue, profits, customer satisfaction, and employee productivity.
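Monitoring progress by setting performance indicators at each stage, as Dahiya and Jha (2011) describe, could be organized as in the sketch below. The stage names follow Bushnell's model, while the metric names, thresholds, and the `stage_met` helper are hypothetical illustrations:

```python
# Hypothetical performance indicators for each IPO stage; the metric
# names and target values are illustrative, not from Bushnell (1990).
indicators = {
    "input":   {"trainer_years_experience": 5, "trainee_prerequisites_met": 0.9},
    "process": {"sessions_delivered_on_schedule": 0.95},
    "output":  {"avg_post_test_score": 80, "satisfaction_rating": 4.0},
    "outcome": {"productivity_gain_percent": 10},
}

def stage_met(actuals: dict, targets: dict) -> bool:
    """A stage passes when every measured metric meets its target."""
    return all(actuals.get(name, 0) >= target for name, target in targets.items())

# Example: checking the Outputs stage against measured results
measured = {"avg_post_test_score": 84, "satisfaction_rating": 4.2}
print(stage_met(measured, indicators["output"]))  # True
```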

The output and outcome stages of this model also relate closely to Kirkpatrick's four levels (Fitz-Enz, 1994). The benefits associated with this model include, but are not limited to:

• Flexibility for the trainer
• Simple to use
• Easy to manage
• Accountability

Some limitations associated with this model are:

• Lack of design structure
• Lack of references
• Lack of evaluation structure
• Lack of implementation detail

Recommendations

This model is recommended for small enterprises or new operations that have a lot of operational flexibility. Gap analysis and cost analysis matrices can be used to determine the profit/loss margins related to performance and training issues.


References
AL-Ajlouni, M., Athamneh, S., & Jaradat, A. (2010). Methods of Evaluation: Training
Techniques. International Research Journal of Finance and Economics, 56-65.
Barnett, K., & Mattox, J. R. (2010). Measuring Success and ROI in Corporate Training. Journal
of Asynchronous Learning Networks, 14(2), 28–44.
Bushnell, D. S. (1990, March). Input, process, output: a model for evaluating training. Training
& Development Journal, 44(3). Retrieved from
https://link.gale.com/apps/doc/A8254390/AONE?
u=mlin_oweb&sid=googleScholar&xid=1acef663
Dahiya, S., & Jha, A. (2011). Review of Training Evaluation. International Journal of Computer
Science and Communication, 2(1), 11-16.
Fitz-Enz, J. (1994). Yes...you can Weigh Training’s Value. Training, 31(7), 54-58.
Kavgaoğlu, D., & Alcı, B. (2016, September 10). Application of context input process and
product model. Educational Research and Reviews, 11(17), 1659-1669.
doi:10.5897/ERR2016.2911
Kirkpatrick, D. (1959). Techniques for Evaluating Training Programs. Journal of the American
Society of Training Directors, 13, 3-26.
Kirkpatrick, D. (1996, January). Great ideas revisited. Training & Development, 50(1), 54+.
https://link.gale.com/apps/doc/A18063280/AONE?u=mlin_oweb&sid=googleScholar&xid=8c27e382
Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating Training Programs : The Four Levels
(3rd ed.). Berrett-Koehler.
Kirkpatrick, D., & Kirkpatrick, J. (2009). Evaluating Training Programs. Berrett-Koehler.
Kurt, S. (2018). Kirkpatrick Model: Four Levels of Learning Evaluation. The International
Journal of Educational Technology. Retrieved from Frameworks & Theories.
Tamkin, P., Yarnall, J., & Kerrin, M. (2002). Kirkpatrick and Beyond: A review of models of
training evaluation. Sussex Univ. Brighton: Institute for Employment Studies.
Tokmak, H. S., Baturay, H. M., & Fadde, P. (2013, 07). Applying the Context, Input, Process,
Product Evaluation Model for Evaluation, Research, and Redesign of an Online Master’s
Program. International Review of Research in Open & Distance Learning, 14(3), 273-
292. doi:10.19173/irrodl.v14i3.1485
Ward, S., & Parker, G. (2006, April 7). Evaluation: Do You Have a Strategy? Alter Inc. Falls
Church, VA.


Chen, W., & Dai, F. (2021). Evaluation of Talent Cultivation Quality of Modern Apprenticeship
Based on Context-Input-Process-Product Model. International Journal of Emerging
Technologies in Learning, 16(14), 197–212. doi:10.3991/ijet.v16i
Worthen, B., & Sanders, J. (1987). Educational Evaluation: Alternative Approaches and
Practical Guidelines. New York, NY: Longman Press.
Yale University. (2021). CIPP Model. Retrieved from Yale Poorvu Center for Teaching and Learning:
https://poorvucenter.yale.edu/CIPP
