
Module 4
TRAINING EVALUATION
Instr. Revenlie G. Galapin

Objectives
As a result of reading and discussing this chapter, students should be able to:
1. Explain why evaluation is important.
2. Identify and choose outcomes to evaluate a training program.
3. Discuss the process used to plan and implement a good training evaluation.
4. Discuss the strengths and weaknesses of different evaluation designs.
5. Choose the appropriate evaluation design based on the characteristics of the company and the
importance and purpose of the training.
6. Conduct a cost-benefit analysis for a training program.

I. Introduction
A. Training effectiveness refers to the benefits that the company and the trainees experience as a
result of training. Benefits for the trainees include learning new knowledge, skills, and
behaviors. Potential benefits for the company include increased sales, improved quality and
more satisfied customers.
B. Training outcomes or criteria refer to measures that the trainer and the company use to
evaluate training programs.
C. Training evaluation refers to the process of collecting data regarding outcomes needed to
determine if training objectives were met.
D. Evaluation design refers to from whom, what, when and how information is collected to
determine the effectiveness of the training program.
II. Reasons for Evaluating Training
A. Because companies have made large dollar investments in training and education and view
training as a strategy to be successful, they expect the outcomes or benefits related to training
to be measurable.
B. Training evaluation provides a way to understand the return that investments in training
produce and provides the information needed to improve training.
C. Formative evaluation refers to the evaluation of training that takes place during program
design and development. It is conducted to improve the training process, ensuring that the
training program is well-organized and runs smoothly and that trainees learn and are
satisfied with the training.
1. As a result of the formative evaluation, training content may be changed to be more
accurate, easier to understand, or more appealing; the training method can be adjusted to
improve learning.

2. Introducing the training program as early as possible to managers and customers helps in
getting them to buy into the program, which is critical for their role in helping employees
learn and transfer skills; it also allows their concerns to be addressed before the program
is implemented.

3. Pilot testing is the process of previewing a training program with potential trainees and
their managers, or other customers. The pilot testing group is then asked to provide
feedback about the content of the training as well as the methods of delivery. This
feedback enables the trainer to make needed improvements to the training.

D. Summative evaluation is evaluation conducted to determine the extent to which trainees have
improved or acquired knowledge, skills, attitudes, behaviors, or other outcomes specified in
the learning objectives, as a result of the training.
E. Reasons training programs should be evaluated:
1. To identify the program’s strengths and weaknesses, including whether the program is
meeting the learning objectives, the quality of the learning environment, and if transfer of
training back to the job is occurring.
2. To assess whether the various features of the training context and content contribute to
learning and the transfer of learning back to the job.
3. To identify which trainees benefited most or least from the program and why.
4. To gather information, such as trainees’ testimonials, to use for marketing training
programs.
5. To determine financial benefits and costs of the program.
6. To compare the costs and benefits of training versus other human resource investments.
7. To compare the costs and benefits of various training programs in order to choose the most
effective programs.

III. Overview of the Evaluation Process

IV. Outcomes Used in the Evaluation of Training Programs

A. One of the original frameworks for identifying and categorizing training outcomes was developed
by Kirkpatrick.
1. Levels 1 and 2 measures are collected before trainees return to their jobs.
2. Levels 3, 4, and 5 criteria measure the extent to which the training transfers back to the job.
3. The framework has been criticized for the following reasons:
a. Research has not found that each level is caused by the level that precedes it in the
framework, nor does evidence suggest that the levels differ in importance.
b. The approach does not take into account the purpose of the evaluation.
c. Use of the approach suggests that outcomes can and should be collected in an orderly
manner.
B. Training outcomes can be classified into the following categories:
1.) Cognitive outcomes demonstrate the extent to which trainees are familiar with
information, including principles, facts, techniques, procedures, and processes
covered in the training program.
2.) Skill-based outcomes assess the level of technical or motor skills and behaviors
acquired or mastered. This incorporates both the learning of skills and the
application of them (i.e., transfer).
a.) Skill learning is often assessed by observing performance in work samples
such as simulators.
b.) Skill transfer is typically assessed by observing trainees on the job or
managerial and peer ratings.
3.) Affective outcomes include attitudes and motivation. Affective outcomes that
might be collected in an evaluation include tolerance for diversity, motivation to
learn, safety attitudes, and customer service orientation. The attitude of interest
depends on training objectives.
4.) Reaction outcomes refer to the trainees’ perceptions of the training experience,
including the content, the facilities, the trainer and the methods of delivery. These
perceptions are typically obtained at the end of the training session via a
questionnaire completed by trainees, but usually are only weakly related to
learning or transfer.
a.) An instructor evaluation measures a trainer’s or instructor’s success.

5.) Results are those outcomes used to determine the benefits of the training program
to the company. Examples include reduced costs related to employee turnover or
accidents, increased production, and improved quality or customer service.
6.) Return on Investment involves comparing the training program’s benefits in
monetary terms to the program’s costs, both direct and indirect.

a.) Direct costs include salaries and benefits of trainees, trainers, consultants, and
any others involved in the training; program materials and supplies;
equipment and facilities; and travel costs.
b.) Indirect costs include office supplies, facilities, equipment and related
expenses not directly related to the training program; travel and expenses not
billed to one particular program; and training department management and
staff support salaries.
c.) Benefits are the value the company receives from the training.
d.) Training Quality Index (TQI) is a computer application that collects data
about training department performance, productivity, budget, and courses and
allows for detailed analysis of the data. TQI organizes all department training
data into five categories: effectiveness, quantity, perceptions, financial
impact, and operational impact.

V. Determining Whether Outcomes are Appropriate

A. Relevance
1. Criteria relevance refers to the extent to which training outcomes appropriately reflect the
content of the training program. The learned capabilities needed to successfully complete the
training program should be the same as those required to successfully perform one’s job.

2. Criterion contamination refers to the extent to which training outcomes measure
inappropriate capabilities or are affected by extraneous conditions.
3. Criterion deficiency refers to the failure of the training evaluation measures to reflect all
that was covered in the training program.
B. Reliability is the degree to which training outcomes can be measured consistently over time.
A reliable test contains items whose meaning and interpretation do not change over time
(a brief illustration follows this list).
C. Discrimination refers to the degree to which trainees’ performance on the outcome actually
reflects true differences in performance; that is, we want the test to discriminate on the basis
of performance and nothing else.
D. Practicality is the ease with which the outcome measures can be collected. Learning, job
performance, and results level measures can be somewhat difficult to collect.
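To make reliability concrete, here is a minimal sketch (the scores, and the use of Python, are illustrative assumptions, not part of the module) that estimates test-retest reliability as the Pearson correlation between trainees' scores on the same test given at two points in time:

    # Test-retest reliability as a Pearson correlation (hypothetical scores).
    import statistics

    def pearson(x, y):
        # Pearson correlation between two equal-length lists of scores.
        mx, my = statistics.mean(x), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    time1 = [72, 85, 90, 64, 78, 88]  # first administration
    time2 = [70, 88, 91, 66, 75, 90]  # same trainees, retested later
    print(f"test-retest reliability = {pearson(time1, time2):.2f}")

A value near 1.0 suggests the outcome is being measured consistently over time.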

VI. Evaluation Practices


A. Reactions and cognitive outcomes are the most frequently used outcomes in training
evaluation.
B. To ensure adequate training evaluation, companies should collect outcome measures related to
both learning and transfer of training.
C. Outcome measures are largely independent of each other; you cannot assume that positive
reactions to the training program mean that trainees learned more and will apply what they
learned back on the job.
D. To the extent possible, evaluations should include measuring job behavior and results level
outcomes to determine whether transfer of the training has occurred.
E. Learning, behavior, and results should be measured after sufficient time has elapsed to
determine whether training has had an influence on these outcomes.
F. There are three types of transfer:
1. Positive transfer is demonstrated when learning occurs and job performance and positive
changes in skill-based, affective, or results outcomes are also observed. This is the
desirable type of transfer.
2. No transfer of training is demonstrated if learning occurs, but no changes are observed in
skill-based, affective, or results outcomes.
3. Negative transfer is evident when learning occurs, but skills, affective outcomes, or
results are less than at pretraining levels.

VII. Evaluation Designs: The design of the training evaluation determines the confidence that can be
placed in the results. No evaluation design can establish with absolute certainty that the
results of the evaluation are true.

A. Threats to validity: Alternative explanations for evaluation results.


1. Threats to validity refer to factors that will lead an evaluator to question either (1) the
believability of the study results or (2) the extent to which the evaluation results are
generalizable to other groups of trainees and situations.
2. Internal validity is the believability of the study.
a. An evaluation study needs internal validity to provide confidence that the results of the
evaluation are due to the training program and not to another factor.

3. External validity refers to the generalizability of the evaluation results to other groups and
other situations.
4. Methods to control for threats to validity:
a. Use pre- and post-tests to determine the extent to which trainees’ knowledge, skills or
behaviors have changed from pre-training to post-training measures. The
pretraining measure essentially establishes a baseline.
b. Use a comparison (or control) group (i.e., a group that participates in the evaluation
study, but does not receive the training) to rule out factors other than training as the
cause of changes in the trainees. The group that does receive the training is referred
to as the training group or treatment group. Often employees in an evaluation will
perform higher just because of the attention they are receiving. This is known as the
Hawthorne effect.
c. Random assignment refers to assigning employees to the control and training groups
on the basis of chance. Randomization helps to ensure that members of the control
group and training group are of similar makeup prior to the training. It can be
impractical or even impossible to employ in company settings.
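As a simple illustration of random assignment (a hypothetical sketch; the employee names and group sizes are invented, not taken from the module), chance alone determines who lands in the training group and who in the control group:

    # Random assignment of employees to training and control groups.
    import random

    employees = ["Ana", "Ben", "Carla", "Dee", "Eli", "Fay", "Gus", "Hana"]
    random.shuffle(employees)            # ordering is now due to chance alone
    half = len(employees) // 2
    training_group = employees[:half]    # receives the training
    control_group = employees[half:]     # participates in the evaluation only
    print("training:", training_group)
    print("control: ", control_group)

With enough employees, randomization makes the two groups of similar makeup before training begins.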

B. Types of evaluation designs vary as to whether they include a pretest and posttest, a control or
comparison group and randomization.

1. The posttest only design involves collecting only posttraining outcome measures. It would
be strengthened by the use of a control group, which would help to rule out alternative
explanations for changes in performance.
2. The pretest/posttest design involves collecting both pretraining and posttraining outcome
measures to determine whether a change has occurred, but it lacks a control group that
would help rule out alternative explanations for any change that does occur.
3. The pretest/posttest with comparison group design includes pretraining and posttraining
outcome measurements as well as a comparison group in addition to the group that
receives training. If the posttraining improvement is greater for the group that receives
training, as we would expect, this provides evidence that training was responsible for the
change (a worked sketch of this comparison appears after this list).
4. The time series design involves collecting outcome measurements at periodic intervals
pre- and posttraining. A comparison group may also be used. The strength of this design
can be improved by using reversal, which refers to a time period in which participants no
longer receive the training intervention. Its advantages are that it allows an analysis of the
stability of training outcomes over time, and that using both the reversal and comparison
group helps to rule out alternative explanations for the evaluation results.
Table 6.10 shows a time series design that was used to evaluate how much a training
program improved the number of safe work behaviors in a food manufacturing plant.
5. The Solomon Four-Group design combines the pretest/posttest comparison group design
and the posttest-only control group design. It involves the use of four groups: a training
group and comparison group for which outcomes are measured both pre and posttraining
and a training group and comparison group for which outcomes are measured only after
training. This design provides the greatest control over threats to internal and external validity.
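To illustrate how the pretest/posttest with comparison group design is commonly analyzed, the sketch below compares each group's average gain; the scores are hypothetical, and this difference-of-gains analysis is one common choice rather than a method prescribed by the module:

    # Pretest/posttest with comparison group: compare average gains.
    from statistics import mean

    trained_pre  = [55, 60, 58, 62, 57]   # hypothetical pretest scores
    trained_post = [75, 82, 78, 80, 76]
    control_pre  = [56, 59, 61, 60, 58]
    control_post = [60, 62, 63, 64, 61]

    trained_gain = mean(trained_post) - mean(trained_pre)
    control_gain = mean(control_post) - mean(control_pre)

    # If the trained group's gain clearly exceeds the control group's gain,
    # that supports attributing the improvement to the training itself.
    print(f"trained gain: {trained_gain:.1f}")
    print(f"control gain: {control_gain:.1f}")
    print(f"estimated training effect: {trained_gain - control_gain:.1f}")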

C. Considerations in choosing an evaluation design


1. Several factors influence the type of evaluation design used. A more rigorous design
should be considered if any of the following conditions are true:
a. The evaluation results can be used to change the program.
b. The training program is ongoing and has the potential to affect many employees.
c. The training program involves multiple classes and a large number of trainees.
d. Cost justification for training is based on numerical indicators.
e. Trainers or others in the company have the expertise to design and evaluate the data
collected from an evaluation study.
f. The cost of the training creates a need to show that it works.
g. There is sufficient time for conducting an evaluation; that is, information regarding
training effectiveness is not needed immediately.
h. There is interest in measuring change from pretraining levels or in comparing two or
more different programs.

2. Evaluation designs without pretesting or comparison groups are most appropriate when
you are interested only in whether a specific level of performance has been achieved,
and not in how much change has occurred.
VIII. Determining Return on Investment

A. Cost-benefit analysis of training is the process of determining the net economic benefits of
training using accounting methods. Training cost information is important for several
reasons:
1. To understand total expenditures for training, including direct and indirect costs.
2. To compare the costs of alternative training programs.
3. To evaluate the proportion of the training budget spent on the development of training,
administrative costs, and evaluation as well as how much is spent on various types of
employees e.g., exempt versus nonexempt.
4. To control costs.

B. There is an increased interest in measuring the ROI of training and development programs
because of the need to show the results of these programs to justify funding and to increase
the status of the training and development function.
C. The process of determining ROI:
1. Understand the objectives of the training program.
2. Isolate the effects of training from other factors that might influence the data.
3. Convert the data to a monetary value and calculate the ROI.
D. Because ROI analysis can be costly, it should be limited to certain training programs.

E. Determining costs

1. The resource requirements model compares equipment, facilities, personnel, and materials
costs across different stages of the training process (needs assessment, development,
training design, implementation, and evaluation).
2. There are seven categories of cost sources: costs related to program development or
purchase; instructional materials; equipment and hardware; facilities; travel and lodging;
salaries of the trainer and support staff; and the cost of either lost productivity or of
replacement workers while trainees are away from their jobs for the training.

F. Determining benefits can be done via a number of methods, including:


1. Technical, practitioner, and academic literature summarizes the benefits of training programs.
2. Pilot training programs assess the benefits from a small group of trainees before a
company commits more resources.
3. Observing successful job performers can help to determine what they do differently from
unsuccessful performers.
4. Asking trainees and their managers to provide estimates of training benefits.

G. To calculate return on investment, follow these steps (a worked sketch follows the list):


1. Identify outcomes.
2. Place a value on the outcomes.
3. Determine the change in performance after eliminating other potential influences on
training results.
4. Obtain an annual amount of benefits from training by comparing results after training to
results before training.
5. Determine the training costs.
6. Calculate the total savings by subtracting the training costs from benefits.
7. Calculate the ROI by dividing benefits by costs. The ROI gives an estimate of the dollar
return expected from each dollar invested in training.
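A worked sketch of these steps follows; every figure is hypothetical, and expressing ROI as dollars returned per dollar invested follows step 7 above:

    # Worked ROI calculation following the seven steps (hypothetical figures).
    annual_benefits = 220_000.0  # step 4: annual value of improved results
    direct_costs    = 90_000.0   # trainers, materials, facilities, travel
    indirect_costs  = 30_000.0   # overhead not billed to this particular program
    total_costs     = direct_costs + indirect_costs       # step 5

    net_savings = annual_benefits - total_costs           # step 6
    roi         = annual_benefits / total_costs           # step 7

    print(f"total costs: ${total_costs:,.0f}")
    print(f"net savings: ${net_savings:,.0f}")
    print(f"ROI: {roi:.2f} (each $1 invested returns ${roi:.2f})")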

H. Other methods of cost-benefit analysis


1. Utility analysis assesses the dollar value of training based on estimates of the difference in
job performance between trained and untrained employees, the number of employees
trained, the length of time the program is expected to influence performance, and the
variability in job performance in the untrained group of employees. This sophisticated
formula requires the use of a pretest and posttest with a comparison group (a sketch
follows this list).
2. Other types of economic analysis evaluate training as it benefits the firm or government
using direct and indirect costs, incentives paid by the government for training, wage
increases received by trainees as a result of the training, tax rates, and discount rates.
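As an illustration of the kind of formula utility analysis uses, the sketch below implements the widely cited Brogden-Cronbach-Gleser form, which matches the inputs listed in item 1 above; treating it as the module's intended formula, and all of the numbers, are assumptions:

    # Utility analysis (Brogden-Cronbach-Gleser form), hypothetical inputs:
    #   delta_U = T * N * d_t * SD_y - N * C
    T    = 2.0       # years the training is expected to influence performance
    N    = 100       # number of employees trained
    d_t  = 0.5       # true difference in job performance, in SD units
    SD_y = 10_000.0  # dollar variability of performance among the untrained
    C    = 1_500.0   # cost of training one employee

    delta_U = T * N * d_t * SD_y - N * C
    print(f"estimated dollar value of training: ${delta_U:,.0f}")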
I. Practical considerations in determining return on investment
1. Training programs best suited for ROI analysis have clearly identified outcomes, are not
one-time events, are highly visible in the company, are strategically focused, and have
effects that can be isolated.
2. The demand for measuring ROI is high. As a result, companies are using creative ways to
measure the costs and benefits of training.
3. Success cases refer to concrete examples of the impact of training that show how learning
has led to results that the company finds worthwhile and the managers find credible.

IX. Measuring Human Capital and Training Activity

A. Metrics are valuable for benchmarking purposes, for understanding the current amount of
training activity in a company, and for tracking historical trends in training activity.
B. Collecting the metrics does not address such issues as whether training is effective or whether
the company is using the data to make strategic training decisions.
C. There is no one accepted method for measuring intellectual or human capital.
