Chapter 6
Training Evaluation

Training doesn't cost; it pays. HRD is an investment, not an expense. Training is one of the most important activities in any business, and companies are willing to invest considerable resources in it. Naturally, its outcomes have to be measured: organizations cannot manage what they do not measure. It is therefore important to establish the right performance measures for every key investment.
Training process
Step 1: Identify the Needs
Step 2: Design the Training
Step 3: Implement the Training
Step 4: Evaluate the Training

Meaning
Training evaluation refers to activities aimed at finding out the effectiveness of training programmes after they are conducted, measured against the objectives for which such programmes were organized. Training evaluation techniques help answer questions such as: where was the capability level of the learners before the programme and where is it now; what was a particular programme intended to achieve and what has it actually achieved; and what is the monetary value of the training outcome against the cost incurred in conducting the programme. Training evaluation brings rationality, objectivity, accountability and credibility to HRD by insisting on tangible and verifiable outcomes. It also enables HRD functionaries to demonstrate why their function should be retained even during a market downturn.

Different Evaluation Models

Sl. No. | Author and year | Evaluation criteria
1 | Kirkpatrick (1967, 1987, 1994) | Four levels: reaction, learning, job behaviour and results
2 | CIPP (Galvin, 1983) | Four levels: context, input, process and product
3 | CIRO (Warr, 1970) | Context, input, reaction and outcomes
4 | Brinkerhoff (1987) | Six stages: goal setting, program design, program implementation, immediate outcomes, intermediate or usage outcomes, and impacts and worth
5 | Systems approach (Bushnell, 1990) | Four sets of activities: inputs, process, outputs and outcomes
6 | Kraiger, Ford, and Salas (1993) | A classification scheme that specifies three categories of learning outcomes (cognitive, skill-based, affective) suggested by the literature and proposes evaluation measures appropriate for each category of outcomes
7 | Kaufman and Keller (1994) | Five levels: enabling and reaction, acquisition, application, organizational outputs and societal outcomes
8 | Holton (1996) | Identifies five categories of variables and the relationships among them: secondary influences, motivation elements, environmental elements, outcomes, ability/enabling elements
9 | Phillips (1996) | Five levels: reaction and planned action, learning, applied learning on the job, business results, return on investment

Donald Kirkpatrick's Evaluation Model
The four-level training evaluation model advocated about half a century ago by Donald Kirkpatrick (1967) has helped HRD professionals worldwide to a great extent in demystifying the understanding of training outcomes. The four levels of Kirkpatrick's (1967) model are:
Level I - Reaction
Level II - Learning
Level III - Behaviour
Level IV - Results

Donald Kirkpatrick

Level 1 - Reaction: At the reaction level, evaluation focuses on how the trainees felt and their personal reactions to the training or learning experience. For example: did the trainees like and enjoy the training? Did they consider the training relevant? Was it a good use of their time? Did they like the venue, the style, the timing? Other indicators include the level of participation, the ease and comfort of the experience, the effort required to make the most of the learning, and the perceived practicability and potential for applying the learning. This is useful information, but evaluation at this level conveys only the satisfaction level of the trainees, not what they have learnt. Examples of reaction-level evaluation: typical 'happy sheets' or feedback forms based on subjective personal reaction to the training experience, verbal reactions which can be noted and analyzed, post-training surveys or questionnaires, grading by delegates, and subsequent verbal or written reports given by delegates to their managers back at their jobs.
[Figure: What to look for in training evaluation - indicators of training success at five levels: Level 1 Reaction, Level 2 Learning, Level 3 Behaviour, Level 4 Results, Level 5 Return on Investment]

Level 2 - Learning: At the learning level, evaluation aims to measure the increase in knowledge or intellectual capability after the training. Evaluation at this level asks whether the trainees learnt what was expected of a particular programme. This is an important criterion, which many people in the organization would expect an effective training programme to satisfy. Measuring the learning may involve a quiz or a test. Typically, assessments or tests are administered before and after the training; interviews or observation can also be used before and after, although this is time-consuming and can be inconsistent. Methods of assessment need to be closely related to the aims of the learning, and reliable, clear scoring and measurement procedures need to be established to limit the risk of inconsistent assessment. Hard-copy, electronic, online tests or interview-style assessments are all possible.

Level 3 - Behaviour: Behaviour evaluation measures the extent to which the trainees applied the learning and changed their workplace behaviour. This can be assessed immediately or several months after the training, depending upon the situation. Did the trainees put their learning into effect when they returned to the job? Were the relevant skills and knowledge used? Was there a noticeable and measurable change in the activity and performance of the trainees? Was the change in behaviour and the new level of knowledge sustained? Would the trainee be able to transfer the learning to another person? Is the trainee aware of changes in his or her behaviour, knowledge and skill? This is also a critical measure of training success. We have all come across employees who know how to do a job well but choose not to do it. If learning does not result in positive workplace behaviour, the training effort is wasted. Measuring at this level may involve observing employees' behaviour at work or gathering feedback from customers, suppliers, bosses, peers and others.

Level 4 - Results: At this level, evaluation focuses on the business or environmental results arising from the improved performance of the trainee; it is the acid test. Evaluation at this level aims to find out whether the training initiative has improved the organization's performance effectiveness. Is the organization more efficient, more profitable and better able to serve its clients or customers as a result of the training programme? Meeting this norm is considered the bottom line. It is also the most challenging level to assess, given that many things beyond employee performance can affect organizational performance. At this level, business and financial data are analyzed to evaluate the training. Measures are typically business or organizational key performance indicators: volumes, values, percentages, timescales, return on investment and other quantifiable aspects of organizational performance, for instance the number of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards, accreditations, growth, retention, etc.

Level 5: Organizations expect much more than results from training. Thus, Jack Phillips (1996) has suggested that evaluation must go beyond Level IV and focus on the real measurement of return on investment. Robinson (1989), whose writing redirected the attention of trainers to business impact, exhorts trainers to become "performance consultants" and de-emphasizes training as an intervention. Robert Brinkerhoff (1987) uses data gathering and evaluation to make the training function more customer-focused and to support the practice of continuous improvement. Many trainers are of the view that ROI can easily be included in Kirkpatrick's original fourth level, 'Results'. The inclusion of a fifth level is therefore arguably relevant only if the assessment of return on investment might otherwise be ignored or forgotten when referring simply to the 'Results' level.

Jack Phillips


The table below gives a quick idea of how to find the outcome of training at the five levels discussed above.
Techniques for finding training outcomes
Level of Evaluation | How to find the Outcome?
Level 1 - Reaction of the trainees | Enquire from trainees orally, or use a feedback form at the end of the programme or at the end of each day/session.
Level 2 - Learning occurred or not | Ask questions to trainees orally, or give a written test at the end of the programme or at the end of each session.
Level 3 - Behaviour changed or not | Observe on the job, or seek reports from the supervisors, peers, customers or subordinates who are familiar with the post-training performance of the trainee.
Level 4 - Results produced or not | Look for outcomes such as an increase in sales or productivity, or improvement in product quality, customer service or profitability.
Level 5 - Return on investment obtained or not | Calculate the cost of the training and the monetary value of the performance outcome that resulted from it (a worked sketch follows the table).
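To make the Level 5 calculation concrete, the following is a minimal sketch, not part of the original text, that computes the benefit-cost ratio and the ROI percentage from a programme's monetary benefits and costs. The function name and the figures used are hypothetical.

```python
def training_roi(programme_benefits: float, programme_costs: float) -> dict:
    """Compute benefit-cost ratio and ROI (%) for a training programme.

    programme_benefits: monetary value of the performance improvement
        attributed to the training (e.g. extra sales margin, cost savings).
    programme_costs: fully loaded cost of the programme (design, delivery,
        trainee time, facilities, evaluation).
    """
    net_benefits = programme_benefits - programme_costs
    return {
        "benefit_cost_ratio": programme_benefits / programme_costs,
        "roi_percent": (net_benefits / programme_costs) * 100,
    }


# Hypothetical example: a programme costing 200,000 that yields
# 260,000 worth of measurable performance improvement.
print(training_roi(260_000, 200_000))
# -> benefit_cost_ratio 1.3, roi_percent approximately 30
```

The difficult part in practice is isolating the benefits attributable to the training from other influences, which is why the criteria listed later insist that an ROI process account for other factors affecting the output variables.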


Phillips (2005) suggests the following dimensions for assessing ROI. In each entry below, the name of the training programme is followed by the measures used to assess its outcome.
- Absenteeism control/reduction: Absenteeism, customer satisfaction, job satisfaction
- Business coaching: Productivity/output, quality, time savings, efficiency, costs, employee satisfaction, customer satisfaction
- Career development/career management: Turnover, promotions, recruiting expense, employee satisfaction
- Communications: Errors, stress, conflicts, productivity, employee satisfaction
- Compensation plans: Costs, productivity, quality, employee satisfaction
- Compliance programmes: Penalties/fines, charges, settlements, losses
- Diversity: Turnover, absenteeism, complaints, charges, settlements, losses
- E-learning: Cost savings, productivity improvement, quality improvement, cycle times, error reductions, employee satisfaction
- Employee benefits plans: Costs, time savings, employee satisfaction
- Employee relations programme: Turnover, absenteeism, employee satisfaction, engagement
- Gain sharing plans: Production costs, productivity, turnover
- Labour-management cooperation programmes: Work stoppages, grievances, absenteeism, employee satisfaction
- Leadership development: Productivity/output, quality, efficiency, cost/time savings, employee satisfaction, engagement
- Marketing and advertising: Sales, market share, customer loyalty, cost of sales, wallet share, customer satisfaction
- Meeting planning: Sales, productivity/output, quality, time savings, employee satisfaction, customer satisfaction
- Orientation: Early turnover, training time, productivity
- Personal productivity/time management: Time savings, productivity, stress reduction, employee satisfaction
- Project management: Time savings, quality improvement, budgets
- Recruiting source (new): Costs, yield, early turnover
- Retention management: Turnover, engagement, employee satisfaction
- Safety incentive plan: Accident frequency rates, accident severity rates, first-aid treatments
- Selection tool (new): Early turnover, training time, productivity
- Self-directed teams: Productivity/output, quality, customer satisfaction, turnover, absenteeism, employee satisfaction
- Sexual harassment prevention: Complaints, turnover, employee satisfaction
- Six Sigma: Defects, rework, response time, cycle time, costs
- Skill-based pay: Labour costs, turnover, absenteeism
- Strategy/policy: Productivity/output, sales, market share, customer service, quality/service levels, cycle times, cost savings, employee satisfaction
- Stress management: Medical costs, turnover, absenteeism, job satisfaction
- Technical training (job-related): Productivity, sales, quality, time, costs, customer service, turnover, absenteeism, employee satisfaction
- Technology implementation: Cycle times, error rates, productivity, efficiency, customer satisfaction
- Wellness/fitness: Turnover, medical costs, accidents, absenteeism


Davidson (1998) also suggests a similar approach to measuring ROI, as shown in the table below. The table indicates some of the areas to look at when trying to demonstrate results.
Measuring ROI in HR

HRD Programmes | Possible Measurements
Training Programmes | Productivity, sales, quality, time, costs, customer satisfaction, turnover, absenteeism, employee satisfaction
Compensation Programmes (pay for performance) | Labour costs, turnover, absenteeism
Modified Work Structures (teams, project committees, etc.) | Productivity, quality, customer satisfaction, turnover, absenteeism, employee satisfaction, time to deliver
Recruiting Programmes | Cost per hire, yield (percentage of candidates recruited), time-to-fill ratios
Total Quality Management | Defects, rework, response time
Employee Support Programmes | Absenteeism, employee satisfaction, employee referrals, productivity

Phillips and Whalen (2000) have suggested the following criteria for an effective ROI process:
1. The ROI process must be simple, without complex formulas, lengthy equations and complicated methodologies.
2. The ROI process must be economical, with the capacity to be implemented easily.
3. The assumptions, methodology and outcomes must be credible.
4. From a research perspective, the ROI process must be theoretically sound.
5. The ROI process must account for other factors that may have influenced the output variables.
6. The ROI process must be appropriate for a variety of programmes.
7. The ROI process must have the flexibility to be applied on a pre-programme basis as well as a post-programme basis.
8. The ROI process must be applicable to all types of data, including hard data and soft data.
9. The ROI process must include the costs of the programme.
10. The ROI process must have a successful track record in a variety of applications.

Data collection for Training Evaluation
Good evaluation depends upon good data, so collecting appropriate and valid data using scientific methods is what makes an evaluation acceptable. The table below shows the major sources and techniques of data collection. Different types of data are available for training evaluation, such as individual performance details, performance details of an entire department or group, and the increase in the economic value of the organization. It is not necessary for evaluation to be done at all four or five levels from reaction to return on investment; depending upon convenience and purpose, evaluation at any one or two levels may be sufficient. At the reaction level, data can be collected from the oral reactions of the respondents, or through a questionnaire administered during the last session of the training programme. Such a questionnaire should ask questions, session-wise, on the subject expertise as well as the teaching methodology of the trainers. Other questions should cover the adequacy and quality of seating, lunch, tea, study material, stationery and audio-visual equipment, and the relevance of the training.

Methods of data collection for training evaluation
Method | Advantages | Limitations
Interview | Flexible, opportunity for clarification, depth possible, personal interaction | High reactive effects, high cost, face-to-face threat potential, labour-intensive, time-consuming
Questionnaire | Low cost, honesty increased if anonymous, respondent sets pace, variety of options | Possibly inaccurate data, on-the-job responding conditions not controlled, respondents set varying paces, return rate of the questionnaire difficult to control
Direct observation | Non-threatening, excellent way to measure behaviour change | Possibly disruptive, reactive effect possible, may be unreliable, trained observers needed
Written test | Low purchase cost, readily scored, quickly processed, easily administered, wide sampling possible | May be threatening, possible low relation to job performance, reliance on norms may distort individual performance, possible cultural bias
Performance test | Reliable, objective, close relation to job performance | Time-consuming, simulation development often difficult, high cost
Performance data | Reliable, objective, job-based, easy to review, minimal reactive effects | Lack of knowledge of criteria for keeping or discarding records, information system discrepancies

Designs of Training Evaluation
One-shot case study: The trainees are evaluated only once, at the end of the programme.
One-group pre-test/post-test design: A test is conducted prior to the training to assess the existing level of knowledge, skill or attitude of the trainees; another test is conducted after the training, and the difference indicates the actual effectiveness of the training (a minimal numerical sketch follows this list).
One-group time series: The trainees are tested more than once before and after the training, using different types of tests such as paper-and-pencil tests, oral tests, performance tests, etc.
Randomized non-equivalent control group design: The scores of two groups are tested and compared once before the training and again after the training.
Randomized equivalent control group design: Participants are selected from identical groups and allotted randomly to control and experimental groups.
Post-test only control group design: This design helps prevent the contamination effects of pre-test sensitivities.
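As an illustration only (not part of the original text), the sketch below shows how the one-group pre-test/post-test design and a control-group design might be scored: the average gain of the trained group indicates learning, and comparing it against an untrained control group helps rule out improvement that would have happened anyway. All scores and group names are hypothetical.

```python
from statistics import mean

def average_gain(pre_scores, post_scores):
    """Mean improvement per trainee between pre-test and post-test."""
    return mean(post - pre for pre, post in zip(pre_scores, post_scores))

# Hypothetical test scores (out of 100) for the same trainees before and after training.
trained_pre  = [52, 61, 47, 58, 65]
trained_post = [74, 80, 69, 77, 83]

# A control group that took both tests but received no training.
control_pre  = [55, 60, 50, 57, 63]
control_post = [58, 61, 53, 59, 64]

gain_trained = average_gain(trained_pre, trained_post)   # one-group pre/post result
gain_control = average_gain(control_pre, control_post)   # baseline drift without training

# Under a control-group design, the difference in gains is the effect
# attributable to the training itself.
print(f"Trained group gain: {gain_trained:.1f}")
print(f"Control group gain: {gain_control:.1f}")
print(f"Training effect:    {gain_trained - gain_control:.1f}")
```

In practice a statistical test (for example, a paired t-test for the pre/post comparison) would normally be applied to check whether the observed gain is larger than could arise by chance.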
