
Training & Evaluation

10.1 Evaluation and Control Process (Fig. 10.1)

Figure 10.1 shows a five-step evaluation and control process:
1. Determine what to measure.
2. Establish predetermined standards.
3. Measure performance.
4. Does performance match the standards? If yes, stop.
5. If no, take corrective action.

(Prentice Hall, 2000, Chapter 10)


Effectiveness

• The degree to which a training (or other
HRD) program achieves its intended purpose.
• Measures are relative to some starting
point.
• Measures how well the desired goal is
achieved.
HRD Evaluation
definition:
“The systematic collection of descriptive
and judgmental information necessary to
make effective training decisions related
to the selection, adoption, value, and
modification of various instructional
activities.”
In Other Words…
Are we training:
• the right people
• the right “stuff”
• the right way
• with the right materials
• at the right time?
Evaluation Needs
• Descriptive and judgmental information
needed.
– Objective and subjective data
• Information gathered according to a
plan and in a desired format.
• Gathered to provide decision-making
information.
Purposes of Evaluation
• Determine whether the program is meeting
the intended objectives.
• Identify strengths and weaknesses.
• Determine cost-benefit ratio.
• Identify who benefited most or least.
• Determine future participants.
• Provide information for improving HRD
programs.
Purposes of Evaluation – 2
• Reinforce major points to be made.
• Gather marketing information.
• Determine if training program is
appropriate.
• Establish management database.
Evaluation Bottom Line
• Is HRD a revenue contributor or a revenue
user?
• Is HRD credible to line and upper-level
managers?
• Are benefits of HRD readily evident to all?
How Often are HRD Evaluations
Conducted?
• Not often enough!!!
• Frequently, only end-of-course participant
reactions are collected.
• Transfer to the workplace is evaluated less
frequently.
Why HRD Evaluations are Rare
• Reluctance to have HRD programs
evaluated.
• Evaluation needs expertise and resources.
• Factors other than HRD cause performance
improvements, e.g.,
– Economy
– Equipment
– Policies, etc.
Need for HRD Evaluation
• Shows the value of HRD.
• Provides metrics for HRD efficiency.
• Demonstrates value-added approach for
HRD.
• Demonstrates accountability for HRD
activities.
• Everyone else has it… why not HRD?
Assessing the Impact of HRD
• Money is the language of business.
• You MUST talk dollars, not HRD jargon.
• No one (except maybe you) cares about
“the effectiveness of training
interventions as measured by an
analysis of formal pretest/posttest
control-group data.”
HRD Program Assessment
• HRD programs and training are
investments.
• Line managers often see HR and HRD as
costs, i.e., revenue users, not revenue
producers.
• You must prove your worth to the
organization –
– Or you’ll have to find another
organization….
Two Basic Methods for
Assessing Financial Impact
• Evaluation of training costs
• Utility analysis
Evaluation of Training Costs
• Cost-benefit analysis
– Compares cost of training to benefits gained
such as attitudes, reduction in accidents,
reduction in employee sick-days, etc.
• Cost-effectiveness analysis
– Focuses on increases in quality, reduction in
scrap/rework, productivity, etc.
Return on Investment
• Return on investment = Results/Costs
Types of Training Costs
• Direct costs
• Indirect costs
• Development costs
• Overhead costs
• Compensation for participants
Direct Costs
• Instructor
– Base pay
– Fringe benefits
– Travel and per diem
• Materials
• Classroom and audiovisual equipment
• Travel
• Food and refreshments
Indirect Costs
• Training management
• Clerical/Administrative
• Postal/shipping, telephone, computers,
etc.
• Pre- and post-learning materials
• Other overhead costs
Development Costs
• Fee to purchase program
• Costs to tailor program to organization
• Instructor training costs
Overhead Costs
• General organization support
• Top management participation
• Utilities, facilities
• General and administrative costs, such
as HRM
Compensation for Participants
• Participants’ salary and benefits for time
away from job
• Travel, lodging and per-diem costs
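
To make the five cost categories concrete, here is a minimal worksheet sketch in Python (all figures and item names invented, not from the text):

    # Hypothetical cost worksheet for one training program (all figures invented).
    costs = {
        "direct": {                    # instructor pay, materials, classroom/AV, travel, food
            "instructor_pay_and_benefits": 12_000,
            "materials": 3_000,
            "classroom_and_av": 2_500,
            "travel_and_food": 4_500,
        },
        "indirect": {                  # training management, clerical, shipping/phone
            "administration": 5_000,
            "shipping_and_communications": 1_000,
        },
        "development": {               # purchase fee, tailoring, instructor training
            "program_purchase": 8_000,
            "customization": 4_000,
        },
        "overhead": {                  # general support, utilities, G&A allocation
            "allocated_overhead": 6_000,
        },
        "participant_compensation": {  # salary/benefits for time away, travel, per diem
            "salary_and_benefits": 20_000,
            "travel_and_per_diem": 7_000,
        },
    }

    total = sum(sum(items.values()) for items in costs.values())
    for category, items in costs.items():
        print(f"{category:>26}: ${sum(items.values()):>8,}")
    print(f"{'TOTAL':>26}: ${total:>8,}")   # $73,000 for these invented figures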
Measuring Benefits
• Change in quality per unit, measured in dollars
• Reduction in scrap/rework, measured in dollar
cost of labor and materials
• Reduction in preventable accidents, measured
in dollars
• ROI = Benefits/Training costs
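
A worked example using the invented figures from the cost sketch above: benefits of $120,000 against total training costs of $73,000 give ROI = 120,000 / 73,000 ≈ 1.64, i.e., roughly $1.64 returned per dollar spent on training.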
Utility Analysis
• Uses a statistical approach to support
claims of training effectiveness:
– N = Number of trainees
– T = Length of time benefits are expected to last
– dt = True performance difference resulting from
training
– SDy = Dollar value of untrained job performance (in
standard deviation units)
– C = Cost of training

U = (N)(T)(dt)(SDy) – C
Critical Information for Utility
Analysis
• dt = difference in units produced between
trained and untrained workers, divided by
the standard deviation in units produced
by the trained group.
• SDy = standard deviation, in dollars, of
the organization’s overall productivity.
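
A minimal sketch of the utility calculation (all inputs invented for illustration):

    # Utility analysis: U = (N)(T)(dt)(SDy) - C. All inputs are invented.
    def utility(n_trainees, years, d_t, sd_y, cost):
        """Dollar value of a training program."""
        return n_trainees * years * d_t * sd_y - cost

    # dt: trained/untrained performance difference in standard-deviation units.
    trained_mean, untrained_mean, sd_units = 52.0, 48.0, 8.0  # units produced
    d_t = (trained_mean - untrained_mean) / sd_units          # 0.5 SD

    # SDy: dollar value of one SD of job performance; C: total training cost.
    U = utility(n_trainees=50, years=2, d_t=d_t, sd_y=10_000, cost=73_000)
    print(f"dt = {d_t:.2f}, U = ${U:,.0f}")  # 50*2*0.5*10000 - 73000 = $427,000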
Ways to Improve HRD
Assessment
• Walk the walk, talk the talk: MONEY.
• Involve HRD in strategic planning.
• Involve management in HRD planning and
estimation efforts.
– Gain mutual ownership
• Use credible and conservative estimates.
• Share credit for successes and blame for
failures.
HRD Evaluation Steps
• Analyze needs.
• Determine explicit evaluation strategy.
• Insist on specific and measurable training
objectives.
• Obtain participant reactions.
• Develop criterion measures/instruments to
measure results.
• Plan and execute evaluation strategy.
Summary
• Training results must be measured
against costs.
• Training must contribute to the “bottom
line.”
• HRD must justify itself repeatedly as a
revenue enhancer, not a revenue
waster.
Ensure Transfer of Training

• Make self-management a part of training


• Opportunities to use training
• Peer and manager support
Evaluating Training Programs

Four categories of evaluation include:


• Affective
– reaction of trainee to program
• Cognitive
– knowledge of program content
• Skill-based
– technical skills or behavior
• Results
– effect on company performance
Reasons for Evaluating Training

• Determine if program met objectives


• Determine trainee’s reaction to program
content and administration
• Determine benefits / costs of program
• Help select the best program
Training Evaluation Designs

• Pretest / Posttest with control group


• Pretest / Posttest
• Posttest only
• Time series
Factors in Choosing an
Evaluation Design

• Size of program
• Purpose
• Implications if program fails
• Company norms
• Costs of conducting evaluation
• Speed needed in obtaining data on
program effectiveness
Make or Buy Evaluation
• “I bought it, therefore it is good.”
• “Since it’s good, I don’t need to post-test.”
• Who says it’s:
– Appropriate?
– Effective?
– Timely?
– Transferable to the workplace?
Evolution of Evaluation Efforts

1. Anecdotal approach: Talk to other users.
2. Try before buy: Borrow and use samples.
3. Analytical approach: Match research
data to training needs.
4. Holistic approach: Look at overall HRD
process, as well as individual training.
Models and Frameworks of
Evaluation
• Table 7-1 lists nine frameworks for
evaluation.
• The most popular is that of D. Kirkpatrick:
– Reaction
– Learning
– Job Behavior
– Results
Kirkpatrick’s Four Levels
• Reaction
– Focus on trainee’s reactions
• Learning
– Did they learn what they were supposed to?
• Job Behavior
– Was it used on job?
• Results
– Did it improve the organization’s
effectiveness?
Issues Concerning Kirkpatrick’s
Framework
• Most organizations don’t evaluate at all
four levels.
• Focuses only on post-training.
• Doesn’t treat inter-stage improvements.
• WHAT ARE YOUR THOUGHTS?
Other Frameworks/Models – 1
• CIPP: Context, Input, Process, Product
• CIRO: Context, Input, Reaction, Outcome
• Brinkerhoff:
– Goal setting
– Program design
– Program implementation
– Immediate outcomes
– Usage outcomes
– Impacts and worth
Other Frameworks/Models – 2
• Kraiger, Ford, & Salas:
– Cognitive outcomes
– Skill-based outcomes
– Affective outcomes
• Phillips:
– Reaction
– Learning
– Applied learning on the job
– Business results
– ROI
A Suggested Framework – 1
• Reaction
– Did trainees like the training?
– Did the training seem useful?
• Learning
– How much did they learn?
• Behavior
– What behavior change occurred?
A Suggested Framework – 2
• Results
– What were the tangible outcomes?
– What was the return on investment
(ROI)?
– What was the contribution to the
organization?
Data Collection for HRD
Evaluation
Possible methods:
• Interviews
• Questionnaires
• Direct observation
• Written tests
• Simulation/Performance tests
• Archival performance information
Interviews
Advantages:
• Flexible
• Opportunity for clarification
• Depth possible
• Personal contact
Limitations:
• High reactive effects
• High cost
• Face-to-face threat potential
• Labor intensive
• Trained observers needed
Questionnaires
Advantages:
• Low cost to administer
• Honesty increased
• Anonymity possible
• Respondent sets the pace
• Variety of options
Limitations:
• Possible inaccurate data
• Response conditions not controlled
• Respondents set varying paces
• Uncontrolled return rate
Direct Observation
Advantages:
• Non-threatening
• Excellent way to measure behavior change
Limitations:
• Possibly disruptive
• Reactive effects are possible
• May be unreliable
• Need trained observers
Written Tests
Advantages:
• Low purchase cost
• Readily scored
• Quickly processed
• Easily administered
• Wide sampling possible
Limitations:
• May be threatening
• Possibly no relation to job performance
• Measures only cognitive learning
• Relies on norms
• Concern for racial/ethnic bias
Simulation/Performance Tests
Advantages:
• Reliable
• Objective
• Close relation to job performance
• Includes cognitive, psychomotor and affective domains
Limitations:
• Time consuming
• Simulations often difficult to create
• High cost to develop and use
Archival Performance Data
Advantages:
• Reliable
• Objective
• Job-based
• Easy to review
• Minimal reactive effects
Limitations:
• Criteria for keeping/discarding records
• Information system discrepancies
• Indirect
• Not always usable
• Records prepared for other purposes
Choosing Data Collection Methods

• Reliability
– Consistency of results, and freedom from
collection method bias and error.
• Validity
– Does the device measure what we want to
measure?
• Practicality
– Does it make sense in terms of the
resources used to get the data?
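
One practical check for reliability is a test-retest correlation on pilot data. A small sketch (invented scores; assumes Python 3.10+ for statistics.correlation):

    # Test-retest check: same trainees, same instrument, two administrations.
    from statistics import correlation

    test_1 = [70, 82, 65, 90, 75, 88, 60, 79]
    test_2 = [72, 80, 68, 93, 71, 85, 63, 81]

    r = correlation(test_1, test_2)                # Pearson r
    print(f"test-retest reliability r = {r:.2f}")  # near 1.0 = consistent measure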
Type of Data Used/Needed
• Individual performance
• System-wide performance
• Economic
Individual Performance Data
• Individual knowledge
• Individual behaviors
• Examples:
– Test scores
– Performance quantity, quality, and timeliness
– Attendance records
– Attitudes
System-Wide Performance Data
• Productivity
• Scrap/rework rates
• Customer satisfaction levels
• On-time performance levels
• Quality rates and improvement rates
Economic Data
• Profits
• Product liability claims
• Avoidance of penalties
• Market share
• Competitive position
• Return on Investment (ROI)
• Financial utility calculations
Use of Self-Report Data
• Most common method
• Pre-training and post-training data
• Problems:
– Mono-method bias
• Desire to be consistent between tests
– Socially desirable responses
– Response Shift Bias:
• Trainees adjust expectations to training
Research Design
Specifies in advance:
• the expected results of the study.
• the methods of data collection to be used.
• how the data will be analyzed.
Research Design Issues
• Pretest and Posttest
– Shows trainee what training has
accomplished.
– Helps eliminate pretest knowledge bias.
• Control Group
– Compares performance of group with training
against the performance of a similar group
without training.
Recommended Research
Design
• Pretest and posttest with control group.
• Whenever possible:
– Randomly assign individuals to the test group
and the control group to minimize bias.
– Use “time-series” approach to data collection
to verify performance improvement is due to
training.
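
A minimal sketch of how such a design is analyzed (invented scores): the estimated training effect is the trained group's gain minus the control group's gain, which nets out non-training factors like the economy or new equipment.

    # Pretest/posttest with control group (invented scores).
    from statistics import mean

    trained_pre  = [60, 55, 70, 65, 58]
    trained_post = [78, 74, 85, 80, 75]
    control_pre  = [62, 57, 68, 66, 59]
    control_post = [65, 60, 70, 69, 61]

    trained_gain = mean(trained_post) - mean(trained_pre)  # training + other factors
    control_gain = mean(control_post) - mean(control_pre)  # other factors only
    effect = trained_gain - control_gain                   # attributable to training
    print(f"trained gain {trained_gain:.1f}, control gain {control_gain:.1f}, "
          f"estimated training effect {effect:.1f}")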
Ethical Issues Concerning
Evaluation Research
• Confidentiality
• Informed consent
• Withholding training from control groups
• Use of deception
• Pressure to produce positive results
Background:
• Many organizations spend very little on
training (less than 1% of payroll)
• Some “over-spend” (IBM = 15%)
• “Hot” training issues in the 1990s were
– Cultural Diversity
– Quality
– Computer Skills
– Team Building
– Cross-training skills
Key Terms to Remember:
Training – the systematic process of attempting to
develop job-specific KSAs for current or future jobs.
Development – learning the KSAs for future jobs
and career opportunities.
Education – the development of more general KSAs
related to a person’s career or job, but not
necessarily tailored to a specific job.
Learning – a relatively permanent change in
cognition that results from experience and directly
influences behavior.
KSAs….
• Knowledge – information we acquire and place into memory,
how it’s organized into our knowledge structure, and our
understanding of how/when it’s used
– Declarative, Procedural
• Skills – capacities needed to perform a set of tasks, developed
from a training experience
– Compilation & Automaticity stages
• Attitudes – reflections of employee beliefs/opinions that
support/inhibit behavior
– Learned and affect motivation re: training
• Abilities – general capacities related to performing a set of
tasks, developed over time as a result of heredity and experience
Role of Training in Organizations
• Regardless of where training lies in an organization, its
role is to improve the organization’s effectiveness by:
– Providing employees with necessary KSAs
– Providing personal enrichment
– Increasing competitive advantage
– Responding to specific organizational needs
– Increasing organizational strategic capability
– Improving quality
– While staying within the budget!
Factors affecting the training
structure:
• Management Philosophy
• Organizational Strategy
• Organizational Structure
• Size
• Technology Requirements
• Industry demands
Ultimate Goal of Training:
• To provide and facilitate effective and
efficient organizational learning that
improves organizational performance
A Training Process Model
• Utilizing Input-Process-Output Model

– Needs Analysis/Assessment
– Design
– Development
– Implementation
– Evaluation
– Follow-up
Key Training Roles
• Researcher
• Needs Analyst
• Evaluator
• Program Designer
• Materials Developer
• Manager
• Marketer
• Counselor
• Change Agent
• Instructor
• Communicator
TRAINING COMPETENCIES
• Computer and data analysis skill
• Research skills
• Understanding of Adult Learning
• Oral and Written Communication Skills
• Goal setting
• Understanding how careers develop
• Ability to coach and give feedback
• Cost/benefit analysis
• Project Management/Records Management
• Delegation skills
• Logistics
• Strategic Planning
• Negotiation
• General business and industry understanding
The Learning Environment
• Key questions
– Is the individual trainable?
– How should the training program be
arranged to facilitate learning?
– What can be done to ensure that what was
learned in training will be retained and
transferred to the job?
Overview
• Learning
– Trainability
– Motivation
– Learning Environment
• Retention
– Meaningfulness
– Interference
• Transfer
– Positive, Negative, none
Learning – What is it?
• Cognitive vs. behavioral approaches
• Changes in behavior vs. changes in information
processing/storage, etc.
• Implications for learning theory
– Impacts learner’s role, instructor’s role, training
content, learner motivation, training climate,
instructional goals and instructional activities.
• Our definition: a relatively permanent change
in cognition, resulting from experience and
directly influencing behavior.
Learning: Trainability
• Ability/Trainee Readiness
– what are pre-requisite trainee characteristics and
skills needed for training
– these can be assessed during person analysis
phase of needs assessment or from pretest
• How to motivate trainees to learn KSAs
– Issues
• motivation is an individual phenomenon
• need to deal with attitudes
• need to deal with assumptions
• need to deal with expectations
Motivation Theories: Training
Applications
• Need Theories (Maslow, McClelland)
– lower level needs must be met first
– spot trainees with “high need for achievement” - give them challenging
tasks and feedback
– different individuals have different needs
• Equity Theory (Adams) and Justice Theory (Greenberg,
Folger)
– training may be seen as an input -- make sure outcomes are fair
– training may be seen as output -- make sure access to training is fair
• Goal Setting Theory (Locke and Latham)
– set specific, challenging and obtainable goals for trainees
– match goals to ability of individual
– give feedback on progress
Designing the Learning
Environment
• Training sites and times
– Agenda and Training Objectives
• Training objectives have 3 components
– Statement of what employee is expected to do
– Statement of the quality or level of performance that is acceptable
– Statement of the conditions under which the trainee is expected to
perform
– Room, seating, climate, breaks and schedule
• Practice: Give trainees the opportunity to practice
– Overlearning
• makes response reflexive
• increases retention and positive transfer
• use when learning is simulated, response not natural
– Massed v. Distributed practice
• Whole vs. Part Learning
– Part learning: Task complex or independent
– Whole learning: task easy or interdependent
• Feedback
– Important for learning and motivation, related to
goal setting
– Effective use
• immediate
• specific
• behavioral
• positive/shaping
• amount: not too little too late/too much too soon
– Intrinsic feedback: from task itself
• *Information gathered from task analysis
Retention
• Learning
– degree of original learning
– practice
– overlearning
• Increase Meaningfulness of material
– show relation to valued outcomes
– use real world examples
– sequence material in logical order
– break down complex skills to simple skills
• Motivation
• Retention Increased by Reducing
Interference
– Proactive
• Old material interferes with learning new material
• (Driving in England)
– Retroactive
• new material interferes with retention of old
material
• (Back in U.S.)
– Stimulus-Response Similarity
• more similar S-R in new and old situation: less
interference
• (Driving new car)
Transfer of Learning
• Types: Positive, Negative, Zero
• Stimulus Generalization Theory
– General principles are applicable to many work settings
– applicable when work environment is unpredictable and changing
– example: training in interpersonal skills
• Identical Elements Theory
– Training environment is identical to work environment
– applicable when work environment is predictable and stable
– example: training to use equipment
• Cognitive Theory
– meaningful material and coding schemes enhance storage and
recall of training content
– cognitive maps useful in all types of training
Climate for Training Transfer

• Management support
• Peer support
• Technological support and resources
• Opportunity to perform
• Positive reinforcement and feedback
• Organizational culture: Learning Organization
– encourages flexibility and experimentation
– values critical thinking and sharing of knowledge
– continuous learning ingrained in system
– employees developed and valued
Guidelines for increasing
Learning, Retention and
Transfer
• Maximize the similarity between the training and the job
• Design training so that trainees can see its applicability
• Allow trainees to practice task
• Label or identify important features of task
• Use a variety of examples and show relevance of
training for valued outcomes
• Make sure general principles are understood before
expecting much transfer
• Build trainees’ self-efficacy and cognitive maps
• Make certain trained behaviors are rewarded on the job
• Use Needs Assessment...
