
Evaluating HRD Programs
By Si-Hosseini
Effectiveness

 The degree to which a training (or other HRD program) achieves its intended purpose
 Measures are relative to some starting point
 Measures how well the desired goal is achieved
2
Evaluation

3
HRD Evaluation
Textbook definition:
“The systematic collection of descriptive
and judgmental information necessary to
make effective training decisions related to
the selection, adoption, value, and
modification of various instructional
activities.”

4
In Other Words…
Are we training:
 the right people
 the right “stuff”
 the right way
 with the right materials
 at the right time?

5
Evaluation Needs
 Descriptive and judgmental information needed
 Objective and subjective data
 Information gathered according to a plan
and in a desired format
 Gathered to provide decision making
information

6
Purposes of Evaluation
 Determine whether the program is meeting
the intended objectives
 Identify strengths and weaknesses
 Determine cost-benefit ratio
 Identify who benefited most or least
 Determine future participants
 Provide information for improving HRD
programs

7
Purposes of Evaluation – 2
 Reinforce major points to be made
 Gather marketing information
 Determine if training program is appropriate
 Establish management database

8
Evaluation Bottom Line
 Is HRD a revenue contributor or a revenue
user?
 Is HRD credible to line and upper-level
managers?
 Are benefits of HRD readily evident to all?

9
How Often are HRD Evaluations
Conducted?
 Not often enough!!!
 Frequently, only end-of-course participant
reactions are collected
 Transfer to the workplace is evaluated less
frequently

10
Why HRD Evaluations are Rare
 Reluctance to have HRD programs evaluated
 Evaluation requires expertise and resources (it costs time and money, and HR staff are often not expert enough to do it alone, so tools must be outsourced)
 Factors other than HRD can cause performance improvements – e.g.,
 The economy
 Equipment
 Policies and other criteria

11
Need for HRD Evaluation
 Shows the value of HRD
 Provides metrics for HRD efficiency
 Demonstrates value-added approach for
HRD
 Demonstrates accountability for HRD
activities
 Everyone else has it… why not HRD?

12
Make or Buy Evaluation
 “I bought it, therefore it is good.”
 “Since it’s good, I don’t need to post-test.”
 Who says it’s:
 Appropriate?
 Effective?
 Timely?
 Transferable to the workplace?

13
Evolution of Evaluation Efforts

1. Anecdotal (story) approach – talk to other users
2. Try before buy – borrow and use samples
3. Analytical approach – match research data to training needs
4. Holistic approach – look at the overall HRD process, as well as individual training

14
Models and Frameworks of
Evaluation
 Table 7-1 lists six frameworks for evaluation
 The most popular is that of D. Kirkpatrick:

 Reaction
 Learning
 Job Behavior
 Results

15
Kirkpatrick’s Four Levels (1994)
 Reaction
 Focus on trainees' reactions immediately after the program (e.g., Did you like the program?)
 Learning
 Did they learn what they were supposed to?
 Job Behavior
 Was it used on the job?
 Results
 Did it improve the organization's effectiveness?

16
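In practice, an evaluation plan often lists one or more measures for each of these levels. A minimal Python sketch (the level names come from the slide above; the example measures are hypothetical):

# Hypothetical sketch: example measures mapped to Kirkpatrick's four levels.
# The level names follow the framework; the measures are illustrative only.
kirkpatrick_plan = {
    "Reaction": ["end-of-course satisfaction survey"],
    "Learning": ["pre-test and post-test scores"],
    "Job Behavior": ["supervisor observation 90 days after training"],
    "Results": ["scrap/rework rate", "customer satisfaction level"],
}

for level, measures in kirkpatrick_plan.items():
    print(f"{level}: {', '.join(measures)}")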
Issues Concerning Kirkpatrick’s
Framework
 Most organizations don't evaluate at all four levels
 Focuses only on post-training
 Doesn’t treat inter-stage improvements
 WHAT ARE YOUR THOUGHTS?

17
Other Frameworks/Models
 CIPP: Context, Input, Process, Product (Galvin,
1983)
 Brinkerhoff (1987):
 Goal setting
 Program design
 Program implementation
 Immediate outcomes
 Usage outcomes
 Impacts and worth

18
Other Frameworks/Models – 2
 Kraiger, Ford, & Salas (1993):
 Cognitive outcomes
 Skill-based outcomes
 Affective outcomes
 Holton (1996): Five Categories:
 Secondary Influences
 Motivation Elements
 Environmental Elements
 Outcomes
 Ability/Enabling Elements

19
Other Frameworks/Models – 3
 Phillips (1996):
 Reaction and Planned Action
 Learning
 Applied Learning on the Job
 Business Results
 ROI

20
A Suggested Framework – 1
 Reaction
 Did trainees like the training?
 Did the training seem useful?

 Learning
 How much did they learn?
 Behavior
 What behavior change occurred?

21
A Suggested Framework – 2
 Results
 What were the tangible outcomes?
 What was the return on investment
(ROI)?
 What was the contribution to the
organization?

22
Data Collection for HRD
Evaluation
Possible methods:
 Interviews
 Questionnaires
 Direct observation
 Written tests
 Simulation/Performance tests
 Archival performance information

23
Interviews
Advantages:
 Flexible
 Opportunity for clarification
 Depth possible
 Personal contact

Limitations:
 High reactive effects
 High cost
 Face-to-face threat potential
 Labor intensive
 Trained observers needed

24
Questionnaires
Advantages:
 Low cost to administer
 Honesty increased
 Anonymity possible
 Respondent sets the pace
 Variety of options

Limitations:
 Possible inaccurate data
 Response conditions not controlled
 Respondents set varying paces
 Uncontrolled return rate

25
Direct Observation
Advantages:
 Nonthreatening
 Excellent way to measure behavior change

Limitations:
 Possibly disruptive
 Reactive effects are possible
 May be unreliable
 Need trained observers

26
Written Tests
Advantages:
 Low purchase cost
 Readily scored
 Quickly processed
 Easily administered
 Wide sampling possible

Limitations:
 May be threatening
 Possibly no relation to job performance
 Measures only cognitive learning
 Relies on norms
 Concern for racial/ethnic bias

27
Simulation/Performance Tests
Advantages:
 Reliable
 Objective
 Close relation to job performance
 Includes cognitive, psychomotor, and affective domains

Limitations:
 Time consuming
 Simulations often difficult to create
 High costs to develop and use

28
Archival Performance Data
Advantages:
 Reliable
 Objective
 Job-based
 Easy to review
 Minimal reactive effects

Limitations:
 Criteria for keeping/discarding records
 Information system discrepancies
 Indirect
 Not always usable
 Records prepared for other purposes

29
Choosing Data Collection
Methods
 Reliability
 Consistency of results, and freedom from
collection method bias and error
 Validity
 Does the device measure what we want to
measure?
 Practicality
 Does it make sense in terms of the resources
used to get the data?

30
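One common way to check an instrument's reliability is a test-retest correlation: administer the same instrument twice and correlate the scores. A minimal sketch using only the Python standard library (all scores invented):

# Minimal sketch: test-retest reliability as a Pearson correlation between
# two administrations of the same instrument. Scores are invented.
from statistics import mean, stdev

first_administration = [72, 85, 90, 64, 78, 88]
second_administration = [70, 83, 92, 60, 80, 85]

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"Test-retest reliability: r = {pearson_r(first_administration, second_administration):.2f}")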
Type of Data Used/Needed
 Individual performance
 Systemwide performance
 Economic

31
Individual Performance Data
 Individual knowledge
 Individual behaviors
 Examples:
 Test scores
 Performance quantity, quality, and timeliness
 Attendance records
 Attitudes

32
Systemwide Performance Data
 Productivity
 Scrap/rework rates (waste)
 Customer satisfaction levels
 On-time performance levels
 Quality rates and improvement rates

33
Economic Data
 Profits
 Product liability claims
 Avoidance of penalties
 Market share
 Competitive position
 Return on investment (ROI)
 Financial utility calculations

34
Use of Self-Report Data
 Most common method
 Pre-training and post-training data
 Problems:
 Mono-method bias – desire to be consistent between tests
 Socially desirable responses
 Response shift bias – trainees adjust their expectations to the training
35
Research Design
Specifies in advance:
 the expected results of the study
 the methods of data collection to be used
 how the data will be analyzed

36
Research Design Issues
 Pretest and Posttest
 Shows the trainee what the training has accomplished
 Helps eliminate pretest knowledge bias

 Control Group
 Compares performance of group with
training against the performance of a similar
group without training

37
Recommended Research Design
 Pretest and posttest with control group
 Whenever possible:
 Randomly assign individuals to the test
group and the control group to minimize bias
 Use “time-series” approach to data collection
to verify performance improvement is due to
training

38
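A back-of-the-envelope way to read this design: estimate the training effect as the difference between the average gain (posttest minus pretest) of the trained group and that of the control group. A minimal Python sketch with invented scores:

# Minimal sketch: pretest/posttest with control group (scores are invented).
# The training effect is estimated as the difference in average gains.
from statistics import mean

trained_pre, trained_post = [60, 55, 70, 65], [80, 78, 88, 84]
control_pre, control_post = [62, 58, 68, 66], [66, 60, 71, 69]

trained_gain = mean(post - pre for pre, post in zip(trained_pre, trained_post))
control_gain = mean(post - pre for pre, post in zip(control_pre, control_post))

print(f"Average gain (trained): {trained_gain:.1f}")
print(f"Average gain (control): {control_gain:.1f}")
print(f"Estimated training effect: {trained_gain - control_gain:.1f} points")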
Ethical Issues Concerning
Evaluation Research
 Confidentiality
 Informed consent
 Withholding training from control groups
 Use of deception
 Pressure to produce positive results

39
Assessing the Impact of HRD
 Money is the language of business.
 You MUST talk dollars, not HRD jargon.
 No one (except maybe you) cares about “the effectiveness of training interventions as measured by an analysis of formal pretest/posttest control group data.”

40
HRD Program Assessment
 HRD programs and training are
investments
 Line managers often see HR and HRD as
costs – i.e., revenue users, not revenue
producers
 You must prove your worth to the
organization –
 Or you’ll have to find another
organization…
41
Two Basic Methods for
Assessing Financial Impact
 Evaluation of training costs
 Utility analysis

42
Evaluation of Training Costs
 Cost-benefit analysis
 Compares the cost of training to benefits gained, such as improved attitudes, fewer accidents, fewer employee sick days, etc.
 Cost-effectiveness analysis
 Focuses on increases in quality, reduction
in scrap/rework, productivity, etc.

43
Return on Investment
 Return on investment = Results/Costs

44
Calculating Training Return On Investment

Operational Results Area: Quality of panels
  How measured: % rejected
  Results before training: 2% rejected (1,440 panels per day)
  Results after training: 1.5% rejected (1,080 panels per day)
  Difference (+ or –): 0.5% (360 panels per day)
  Expressed in $: $720 per day; $172,800 per year

Operational Results Area: Housekeeping
  How measured: Visual inspection using a 20-item checklist
  Results before training: 10 defects (average)
  Results after training: 2 defects (average)
  Difference (+ or –): 8 defects
  Expressed in $: Not measurable in $

Operational Results Area: Preventable accidents
  How measured: Number of accidents; direct cost of each accident
  Results before training: 24 per year; $144,000 per year
  Results after training: 16 per year; $96,000 per year
  Difference (+ or –): 8 per year; $48,000
  Expressed in $: $48,000 per year

Total savings: $220,800

ROI = Return / Investment = Operational results / Training costs = $220,800 / $32,564 = 6.8

SOURCE: From D. G. Robinson & J. Robinson (1989). Training for impact. Training and Development Journal, 43(8), 41. Printed by permission.
45
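The arithmetic behind the table can be reproduced in a few lines; the figures below are taken directly from the slide:

# Minimal sketch reproducing the ROI arithmetic from the table above.
panel_savings_per_year = 172_800     # quality-of-panels improvement ($/year)
accident_savings_per_year = 48_000   # preventable-accidents improvement ($/year)
training_costs = 32_564              # total training costs ($)

total_savings = panel_savings_per_year + accident_savings_per_year
roi = total_savings / training_costs

print(f"Total savings: ${total_savings:,}")   # $220,800
print(f"ROI = {roi:.1f}")                     # about 6.8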
Types of Training Costs
 Direct costs
 Indirect costs
 Development costs
 Overhead costs
 Compensation for participants

46
Direct Costs
 Instructor
 Base pay
 Fringe benefits
 Travel and per diem
 Materials
 Classroom and audiovisual equipment
 Travel
 Food and refreshments

47
Indirect Costs
 Training management
 Clerical/Administrative
 Postal/shipping, telephone, computers,
etc.
 Pre- and post-learning materials
 Other overhead costs

48
Development Costs
 Fee to purchase program
 Costs to tailor program to organization
 Instructor training costs

49
Overhead Costs
 General organization support
 Top management participation
 Utilities, facilities
 General and administrative costs, such
as HRM

50
Compensation for Participants
 Participants’ salary and benefits for time
away from job
 Travel, lodging, and per-diem costs

51
Measuring Benefits
 Change in quality per unit measured in
dollars
 Reduction in scrap/rework measured in
dollar cost of labor and materials
 Reduction in preventable accidents
measured in dollars
 ROI = Benefits/Training costs

52
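Putting the cost categories from slides 46–51 together with the benefit measures above, a sketch of the benefits/costs calculation (every dollar figure is invented for illustration):

# Hypothetical sketch: total training costs by category and ROI = benefits / costs.
# All dollar amounts are invented.
costs = {
    "direct": 12_000,         # instructor pay and benefits, materials, travel, food
    "indirect": 4_000,        # training management, clerical, shipping
    "development": 6_000,     # program purchase and tailoring, instructor training
    "overhead": 2_500,        # facilities, utilities, general support
    "compensation": 8_000,    # participants' salary and benefits while away from the job
}
benefits = {
    "quality_improvement": 20_000,
    "scrap_rework_reduction": 15_000,
    "accident_reduction": 10_000,
}

total_costs = sum(costs.values())
total_benefits = sum(benefits.values())
print(f"Total training costs: ${total_costs:,}")
print(f"Total benefits: ${total_benefits:,}")
print(f"ROI = {total_benefits / total_costs:.2f}")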
Utility Analysis (Brogden-Cronbach-Gleser Model, from Personnel Psychology)
 Uses a statistical approach to support claims of training effectiveness:
 N = Number of trainees
 T = Length of time benefits are expected to last
 dt = True performance difference resulting from training
 SDy = Dollar value of untrained job performance (in standard deviation units)
 C = Cost of training

 ΔU = (N)(T)(dt)(SDy) – C

53
Critical Information for Utility
Analysis
 dt = difference in units produced between trained and untrained employees, divided by the standard deviation in units produced by the trained group
 SDy = standard deviation in dollars, or overall productivity of the organization

54
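A minimal sketch of the ΔU = (N)(T)(dt)(SDy) – C calculation from the two slides above; every input value is invented for illustration:

# Minimal sketch of the Brogden-Cronbach-Gleser utility formula (invented inputs).
N = 50        # number of trainees
T = 2.0       # years the benefits are expected to last
C = 75_000    # total cost of training ($)

# dt: difference in units produced between trained and untrained employees,
# divided by the standard deviation of units produced by the trained group.
trained_mean_units, untrained_mean_units, trained_sd_units = 110.0, 100.0, 20.0
dt = (trained_mean_units - untrained_mean_units) / trained_sd_units

SDy = 10_000  # dollar value of one standard deviation of job performance

delta_U = N * T * dt * SDy - C
print(f"dt = {dt:.2f}")
print(f"Estimated utility gain: ${delta_U:,.0f}")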
Ways to Improve HRD
Assessment
 Walk the walk, talk the talk: MONEY
 Involve HRD in strategic planning
 Involve management in HRD planning and
estimation efforts
 Gain mutual ownership
 Use credible and conservative estimates
 Share credit for successes and blame for
failures
55
HRD Evaluation Steps
1. Analyze needs.
2. Determine explicit evaluation strategy.
3. Insist on specific and measurable training
objectives.
4. Obtain participant reactions.
5. Develop criterion measures/instruments to
measure results.
6. Plan and execute evaluation strategy.
56
Summary
 Training results must be measured
against costs
 Training must contribute to the “bottom
line”
 HRD must justify itself repeatedly as a
revenue enhancer, not a revenue waster

57
