
Training Evaluation

Training

• Involves learning
• Implies learning to do something
• Results in things being done differently
Evaluation

• It is a process of establishing the worth of something.
• ‘Worth’ means the value, merit or excellence of the thing.
Purpose of Evaluation

• Feedback - on the effectiveness of the training activities
• Control - over the provision of training
• Intervention - into the organizational processes that affect training
Benefits of Evaluation
• Improved quality of training activities
• Improved ability of the trainers to relate inputs to outputs
• Better discrimination of training activities between those that
are worthy of support and those that should be dropped
• Better integration of training offered and on-the job
development
• Better co-operation between trainers and line-managers in
the development of staff
• Evidence of the contribution that training and development
are making to the organization
What can be evaluated

Remember the 3 Ps:

• The Plan
• The Process
• The Product
How to Evaluate the Plan

• Course Objectives
• Appropriate selection of participants
• Timeframe
• Teaching Methods
How to Evaluate the Process

• Planning vs. implementation
• Appropriate participants
• Appropriate time
• Effective use of time
• Teaching according to set objectives
Methods for Process Evaluation

• Observation by the teacher him/herself
• Observation by other teachers
• Questionnaire completed by students
• Evaluation discussion by students
• Staff meetings
How to Evaluate the Product
• Is only evaluation of the product sufficient?
• Time
• Ultimately all stages require evaluation in any case
• Triangulation technique
• Changes in effectiveness
• Impact Analysis
• Achieving Targets
• Attracting Resources
• Satisfying Interested Parties
Achieving Targets

• Productivity
• Processing time
• Profit
• Operating cost
• Rates of meeting deadlines
• Cost/income ratio
• % of tasks incorrectly done
• Level of variation in product
• Ability to cope with circumstances
• Time to reach job competency
• Levels of supervision required
• Frequency and costs of accidents
Attracting Resources

• Increase in number of clients
• New markets entered
• New branches opened
• Ability to cope with external changes
• Increase in the pool of trained staff
• Skills for future job requirements developed
• Flexibility in meeting changing customers’ requirements
• Improvements in competencies
Training and the workplace
Framework of Kirkpatrick

Levels 1 (Reactions) and 2 (Learning) are assessed within the training itself; Levels 3 (Behavior) and 4 (Results) are assessed in the workplace.
Reasons for Evaluating Training

• Companies are investing millions of dollars in training programs to help gain a competitive advantage.
• Training investment is increasing because learning creates knowledge, which differentiates the companies and employees that are successful from those that are not.
Reasons for Evaluating Training
(continued)

Because companies have made large dollar investments in training and education and view training as a strategy to be successful, they expect the outcomes or benefits related to training to be measurable.
Why Should A Training Program Be
Evaluated?

• To identify the program’s strengths and weaknesses.
• To assess whether content, organization, and administration of the program contribute to learning and the use of training content on the job.
• To identify which trainees benefited most or least from the program.
Why Should A Training Program Be
Evaluated? (continued)

• To gather data to assist in marketing training programs.
• To determine the financial benefits and costs of the programs.
• To compare the costs and benefits of training versus non-training investments.
• To compare the costs and benefits of different training programs to choose the best program.
The Evaluation Process
Conduct a Needs Analysis

Develop Measurable Learning Outcomes

Develop Outcome Measures

Choose an Evaluation Strategy

Plan and Execute the Evaluation


Training Outcomes: Kirkpatrick’s
Four-Level Framework of Evaluation
Criteria
Level Criteria Focus
1 Reactions Trainee satisfaction

2 Learning Acquisition of knowledge, skills, attitudes, behavior

3 Behavior Improvement of behavior on the job

4 Results Business results achieved by trainees


Outcomes Used in Evaluating Training Programs
(A conceptually based classification scheme of learning)

• Cognitive outcomes
• Skill-based outcomes
• Affective outcomes
• Results
• Return on investment
Outcomes Used in Evaluating Training
Programs: (continued)

• Cognitive Outcomes
– Verbal knowledge, organization of knowledge, cognitive strategies.
– Determine the degree to which trainees are familiar with the principles, facts, techniques, procedures, or processes emphasized in the training program.
– Measure what knowledge trainees learned in the program.
• Skill-Based Outcomes
– Compilation and automaticity.
– Assess the level of technical or motor skills.
– Include acquisition or learning of skills and use of skills on the job.
Outcomes Used in Evaluating Training
Programs: (continued)

• Affective Outcomes
– Include attitudes and motivation.
– Trainees’ perceptions of the program including the
facilities, trainers, and content.
• Results
– Determine the training program’s payoff for the
company.
How do you know if your outcomes are
good?

Good training outcomes need to be:

• Relevant
• Reliable
• Discriminating
• Practical
Good Outcomes (continued)
• Relevance – degree to which outcomes are related to the learned capabilities emphasized in the training program.
• Reliability – degree to which outcomes can be measured consistently over time.
• Discrimination – degree to which trainees’ performance on the outcome actually reflects true differences in performance.
• Practicality – refers to the ease with which the outcome measures can be collected.
Evaluation Designs: Threats to Validity

• Threats to validity are factors that lead one to question either:
– The believability of the study results (internal validity), or
– The extent to which the evaluation results are generalizable to other groups of trainees and situations (external validity)
Threats to Validity

• Threats To Internal Validity
– Company
– Persons
– Outcome Measures
• Threats To External Validity
– Reaction to pretest
– Reaction to evaluation
– Interaction of selection and training
– Interaction of methods
Methods to Control for Threats to
Validity

Pre- and Posttests

Use of Comparison Groups

Random Assignment
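
A minimal sketch in Python of how these three controls fit together; the group sizes, score distributions, and the 10-point training effect are illustrative assumptions, not figures from the source:

```python
import random
from statistics import mean

random.seed(0)  # for a reproducible illustration

# Random assignment of eight employees to trained and comparison groups.
employees = list(range(8))
trained = random.sample(employees, 4)
comparison = [e for e in employees if e not in trained]

# Pretest scores; posttest scores drift up slightly for everyone
# (history, testing effects) and gain an extra 10 points if trained.
pre = {e: random.gauss(60, 5) for e in employees}
post = {e: pre[e] + random.gauss(2, 1) + (10 if e in trained else 0)
        for e in employees}

def gain(group):
    """Mean posttest-minus-pretest gain for a group."""
    return mean(post[e] - pre[e] for e in group)

# The comparison group's gain absorbs the non-training influences;
# the difference in gains estimates the training effect.
effect = gain(trained) - gain(comparison)
print(f"Estimated training effect: {effect:.1f} points")
```

Because the randomly assigned comparison group experiences everything except the programme, its gain absorbs threats such as history and testing, and the difference in gains isolates the training effect.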
Types of Evaluation Designs

• Posttest-only
• Pretest/posttest
• Posttest-only with comparison group
• Pretest/posttest with comparison group
• Time series
• Time series with comparison group and reversal
• Solomon four-group
Factors That Influence the Type of
Evaluation Design
Factor  How Factor Influences Type of Evaluation Design

Change potential  Can program be modified?

Importance  Does ineffective training affect customer service, product development, or relationships between employees?

Scale  How many trainees are involved?

Purpose of training  Is training conducted for learning, results, or both?

Organization culture  Is demonstrating results part of company norms and expectations?

Expertise  Can a complex study be analyzed?

Cost  Is evaluation too expensive?

Time frame  When do we need the information?


To calculate return on investment
(ROI), follow these steps:

1. Identify outcome(s) (e.g., quality, accidents)
2. Place a value on the outcome(s)
3. Determine the change in performance after eliminating other potential influences on training results
4. Obtain an annual amount of benefits (operational results) from training by comparing results after training to results before training (in dollars)
To calculate return on investment
(ROI), follow these steps: (continued)

5. Determine training costs (direct costs + indirect costs + development costs + overhead costs + compensation for trainees)
6. Calculate the total savings by subtracting the training costs from benefits (operational results)
7. Calculate the ROI by dividing benefits (operational results) by costs
 The ROI gives you an estimate of the dollar return expected from each dollar invested in training.
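
A minimal sketch of steps 4-7 in Python; every figure below is an illustrative assumption, not data from the source:

```python
# All figures below are illustrative assumptions.
annual_benefits = 220_000   # step 4: results after training minus results before, in dollars

training_costs = {          # step 5: cost components
    "direct": 45_000,
    "indirect": 12_000,
    "development": 20_000,
    "overhead": 8_000,
    "trainee_compensation": 15_000,
}
total_cost = sum(training_costs.values())

total_savings = annual_benefits - total_cost   # step 6
roi = annual_benefits / total_cost             # step 7: dollar return per dollar invested

print(f"Total savings: {total_savings:,}")
print(f"ROI = {roi:.1f}:1")
```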
Example of Return on Investment
Industry  Training Program  ROI
Bottling company  Workshops on managers’ roles  15:1
Large commercial bank  Sales training  21:1
Electric & gas utility  Behavior modification  5:1
Oil company  Customer service  4.8:1
Health maintenance organization  Team training  13.7:1
Objective and Subjective Measures

Measures that require statements of opinion, belief, or judgment are considered subjective. For example, rating scales are subjective measures, whereas measures of absenteeism are more objective. Other examples of objective measures include rate of production.
Barriers and contributions in the evaluation process
Barriers to training evaluation
1. Top management does not emphasize training evaluation.
2. Training directors often do not have the skills to conduct training evaluation.
3. It is often not clear to training and human resource people what should be evaluated and what questions should be answered by an evaluation.
4. There is a view that training evaluation can be a risky and expensive enterprise.
Contributions of training evaluation

1. Training evaluation can serve as a diagnostic technique permitting the revision of programmes to meet the large number of goals and objectives.
2. Good evaluation information can demonstrate the usefulness of the training enterprise.
3. Evaluation data help to establish the job-relatedness of training.
The evaluation of criteria
1. Criterion relevancy
- The fundamental requirement that transcends all other considerations related to criterion development is relevance.
Fig: Criterion deficiency, relevance and contamination. The KSAs and organizational goals identified by the needs assessment overlap with the KSAs and organizational goals represented in the criteria: the overlap is criterion relevance, what the needs assessment identifies but the criteria omit is criterion deficiency, and what the criteria contain beyond the needs assessment is criterion contamination.
Criterion Deficiency

Criterion deficiency refers to the degree to which components identified in the needs assessment are not represented in the actual criteria.
Criterion contamination

Criterion contamination refers to extraneous elements present in the criteria that result in the measure inaccurately representing the construct identified in the needs assessment.
KSA determined by the needs assessment vs. represented in the criteria:
• A: not determined, not represented – criterion relevance
• B: determined, not represented – criterion deficiency
• C: not determined, represented – criterion contamination
• D: determined, represented – criterion relevance

Fig 5.2: The relationship between criteria and needs assessment
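
Read as a classification rule, the quadrants can be sketched in a few lines of Python; the KSA names and flags below are hypothetical:

```python
def classify_ksa(in_needs_assessment: bool, in_criteria: bool) -> str:
    """Place a KSA in a quadrant of Fig 5.2."""
    if in_needs_assessment and not in_criteria:
        return "criterion deficiency"      # needed but not measured (B)
    if not in_needs_assessment and in_criteria:
        return "criterion contamination"   # measured but extraneous (C)
    return "criterion relevance"           # criteria and needs agree (A, D)

# Hypothetical KSAs: (determined by needs assessment, represented in criteria).
ksas = {
    "fault diagnosis": (True, True),
    "safety procedure": (True, False),
    "general trivia": (False, True),
}
for name, flags in ksas.items():
    print(f"{name}: {classify_ksa(*flags)}")
```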


Criterion reliability
Refers to the consistency of criterion measures; it is necessary for stable measurement of criteria. Many factors affect the reliability of criteria, for example:
1. The rating scale used.
2. Lack of specification detail (e.g., in measuring leadership).
3. The rater not having enough time to observe the behaviour.
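
One common way to quantify this consistency is a test-retest correlation between repeated ratings of the same trainees. A minimal sketch, assuming Python 3.10+ (for statistics.correlation) and hypothetical ratings:

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical ratings of the same six trainees collected twice;
# a high correlation suggests a stable (reliable) criterion measure.
ratings_time1 = [3.0, 4.5, 2.5, 5.0, 3.5, 4.0]
ratings_time2 = [3.5, 4.0, 2.0, 5.0, 3.0, 4.5]

r = correlation(ratings_time1, ratings_time2)
print(f"Test-retest reliability estimate: r = {r:.2f}")
```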
The interrelationship of reaction, learning, behavior and results

Kirkpatrick's taxonomy of reaction, learning, behavior and results, and the augmented framework:

Kirkpatrick's Taxonomy  Augmented framework
Reaction  Reactions: affective reactions, utility judgments
Learning  Learning: immediate knowledge, knowledge retention, behavior/skill demonstration
Behavior  Transfer
Results  Results
Outcome criteria and summative
evaluation
Outcome measures refer to criteria, like learning and performance, that represent various levels of achievement.
Summative evaluation describes assessment using outcome measures that focus on the effectiveness of completed interventions.
The first type of summative evaluation addresses whether a particular training programme produces the expected outcomes, e.g., a comparison between trained and untrained groups.
The second type addresses which of two or more training methods produces the greatest benefits.
Formative evaluation

Formative evaluation focuses on process criteria to provide information that helps in understanding the training system, so that the originally intended objectives are achieved. Its benefits include collecting information to change training programmes so that they reach their objectives. In addition, information may be collected from all the stakeholders to ensure the programme meets their needs. It also provides feedback on how to improve the programme.
Time Dimension
Criteria also vary according to the time of collection. Thus, learning criterion measures are taken early in training, while behavior criterion measures are taken after the individual has completed the training programme and transferred to the new activity.
The time dimensions of criteria are:
1. Immediate criteria
2. Proximal criteria
3. Distal criteria
Time dimensions of criteria (arranged along the time axis):
• Immediate criteria: obtained in the training programme
• Proximal criteria: obtained in advanced training or early in the transfer setting
• Distal criteria: obtained after considerable time in the transfer setting
Types of criteria
1. Criterion-referenced measures
2. Norm-referenced measures

Criterion-referenced measures provide a standard of achievement for the individual as compared with specific behavioral objectives, and therefore provide an indication of the degree of competence attained by the trainee.
Norm-referenced measures help to compare the capabilities of the individual with those of other trainees.
To evaluate the training programme properly it is necessary to obtain criterion-referenced measures that provide information about the skill level of the trainee in relation to expected programme achievement levels.
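
A minimal sketch contrasting the two kinds of measures on hypothetical scores; the trainee names and the mastery cutoff are illustrative assumptions:

```python
from statistics import mean

# Hypothetical trainee scores; 80 is an assumed programme achievement level.
scores = {"Anu": 82, "Ben": 74, "Carl": 91, "Devi": 68}
mastery_cutoff = 80

for trainee, score in scores.items():
    # Criterion-referenced: compare the trainee to the behavioural standard.
    mastered = score >= mastery_cutoff
    # Norm-referenced: compare the trainee to the other trainees.
    peer_mean = mean(s for t, s in scores.items() if t != trainee)
    print(f"{trainee}: mastered={mastered}, above peer mean={score > peer_mean}")
```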
Methodological considerations in the use of experimental designs

1. Pretesting and posttesting
- The pretest is administered before the instructional program begins, and the posttest is given after exposure to the instructional program.
The variables measured in the pretest and posttest must be associated with the objectives of the training program.
Control Groups
A control group is a group equivalent to the trained group on all the variables that might contribute to differences between pre- and post-tests, except for the actual instructional program. It is used to eliminate other possible explanations for the change between pre- and post-test scores: the control group helps establish whether the differences are due to training or to other factors, such as the passage of time.
Internal & external validity

Internal validity refers to the basic question: did the treatment make a difference in this particular situation? Unless internal validity has been established, interpreting the effects of any experiment, training or otherwise, is not possible.

External validity refers to the generalizability of results to other populations, settings and treatment variables.
Threats to Internal & external validity
• Threats to validity are factors that lead one to question either:
– The believability of the study results (internal validity), or
– The extent to which the evaluation results are generalizable to other groups of trainees and situations (external validity)
Threats to Internal validity

Includes:
1. History
2. Testing
3. Instrumentation
4. Statistical regression
5. Differential selection of participants
6. Experimental mortality
Threats to External validity
1. Reactive effects of pretesting
2. Reactive effects from the group receiving the treatment
3. Reactive effects of the experimental settings
Experimental Design

Methods:

1. The one-group posttest-only design
2. The one-group pretest/posttest design
3. The pretest/posttest control-group design
Quasi-experimental design
1. The time-series design
2. The non-equivalent control-group design
Examples

Examples for the quasi-experimental (time-series) design:
 Proper use of tools and equipment
 Use of safety equipment
 Housekeeping
 General safety procedures
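
A minimal sketch of how such a time-series comparison might be computed; the weekly counts of observed safe behaviours are hypothetical:

```python
from statistics import mean

# Hypothetical weekly counts of observed safe behaviours (proper tool use,
# safety equipment worn, housekeeping, etc.) before and after training.
before = [12, 14, 11, 13, 12, 14]   # baseline observation series
after = [18, 19, 17, 20, 19, 21]    # post-training observation series

# Repeated pre-training observations establish the baseline trend, so a
# sustained shift after the intervention is harder to attribute to chance
# or to a one-off event.
shift = mean(after) - mean(before)
print(f"Average shift after training: {shift:.1f} safe behaviours per week")
```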
Utility considerations

Utility analysis is an important method of training evaluation, similar to the evaluation a production manager performs when requesting a new machine and supporting the request with projected increases in productivity and reductions in production cost.
Translating the validity evidence into monetary terms is called utility analysis. It supports the costing of a programme and allows comparisons of different programmes.
Utility considerations involve:
1. Use of capital budgeting methodology to analyze the minimum annual benefits, in rupees, required from any program.
2. Use of break-even analysis to estimate the minimum effect size for a program.
3. Use of data across multiple studies to estimate the expected actual payoff from the program.
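
For items 1 and 2, one widely cited approach is the Schmidt-Hunter utility formula, ΔU = T × N × dt × SDy − N × C; the sketch below uses illustrative figures that are not from the source:

```python
# Schmidt-Hunter utility: Delta-U = T*N*dt*SDy - N*C (all values illustrative).
T = 2          # years the training effect is assumed to persist
N = 50         # number of employees trained
SDy = 40_000   # rupee value of one standard deviation of job performance
C = 25_000     # training cost per trainee

def utility(dt):
    """Total programme utility in rupees for a true effect size dt (in SD units)."""
    return T * N * dt * SDy - N * C

# Break-even analysis (item 2): the effect size at which utility is zero.
dt_breakeven = C / (T * SDy)
print(f"Break-even effect size: {dt_breakeven:.2f} SD")
print(f"Utility at an effect size of 0.5 SD: {utility(0.5):,.0f} rupees")
```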
Fig: Industrial Training Cost-Effectiveness Model. Structured and unstructured training are each costed through training development, training materials, training time, and production losses (training cost), and assessed through time to reach job competency, job performance, and worker attitudes (training return). The evaluation then compares the two approaches on training time, production rate, performance tests, product quality, raw-material efficiency, worker attitudes, and cost conversions/comparisons.
Other methods of evaluation
1. Individual differences models of predictive validity
2. Content validity model
3. Practical, statistical and scientific significance
Individual differences models of predictive validity

Fig: Hypothetical scatter of scores on a sales test at the end of training against sales volume after one year on the job
Content Validity Model

Job importance vs. training emphasis:
• A: not important, not emphasized
• B: important, not emphasized
• C: important, emphasized
• D: not important, emphasized

Fig: A conceptual diagram of content validity of training programs


Need assessment to training validity

1. Training validity
2. Transfer validity
3. Intra-organizational validity
4. Inter-organizational validity
