
Unit-7

Training evaluation
Evaluation Helps Ensure Guests Find Paradise on Hawaii’s North Coast

• Turtle Bay Resort is located on the stunning coastline of Oahu, Hawaii. Guests can choose from
many activities while enjoying the beautiful surroundings at the resort, including
dining on farm-to-table meals, enjoying the ocean while surfing and paddling, relaxing with
a massage, or taking an invigorating horseback ride. New management made a $40 million
investment in renovations to revitalize the resort and inspire its guests to become part
of the local community. They recognized that the physical changes to the resort were
necessary and important. But they also believe that investing in training leads to happier
and more engaged employees and, in turn, to satisfied guests. Training at Turtle Bay
includes classroom learning, role-plays, and social learning. All training incorporates Turtle
Bay’s six values, which provide the standard by which employees work and serve guests. The
values relate to time (Manawa), goodness (Pono), caring for others (Malama), support of
family (Hanai), kindness (Aloha), and local engagement and culture (Kama’aina). The values
include underlying behaviors and practices such as greeting guests promptly (Manawa),
holding others accountable (Malama), demonstrating interest in peers (Hanai), engaging guests
and peers (Aloha), and treating locals as guests and guests as
locals (Kama’aina). Every employee is required to attend a training program that
focuses on the values. Managers are asked to complete an individual development
plan based on their self-rating as well as ratings from employees, peers, and their
manager on how well they applied these values at work. Employees also complete a
self-assessment and personal improvement plan based on the values. Laulima
(many hands working together), a service quality training program, is an extension
of the values. The program includes modules on greeting guests, service delivery,
service recovery, and knowledge of service, food and beverages, history, and
culture. The program was developed using input from employees who were chosen
as the best service providers at Turtle Bay. Each module has a workbook that guides
employees through a series of exercises. New employees attend a scavenger hunt to
help them understand the property and its plants and animals. Managers are
expected to help teach employees and reinforce what they learn. Employees also
have to learn how to use Guidepost, a lobby experience center that provides
concierge and guest services in an interactive space. Guidepost includes iPads and
touchscreen panels for viewing activities and reviewing, learning about, and
booking local activities.
• To reinforce delightful customer service and emphasize the importance of
training, Turtle Bay has several rewards programs. The Ho’ohana Awards
recognize employees for exceptional service to guests and fellow employees. Ali’i
and Ilima Awards for exceptional service are given each quarter to a
manager and two other employees. The Best of the Best Award is given to
outstanding employees who continuously demonstrate exceptional
service. Turtle Bay collects several different types of data to determine the
success of training. The most important measure is guest satisfaction,
which includes using social media tools like TripAdvisor, Revinate, and
Market Metrix. A values feedback system is used to determine how well
employees are applying the value practices. This data is evaluated for
individual employees, managers, departments, and functional areas. Occupancy
rates, market share, sales performance, internal promotions, and turnover
are used as financial measures. To assess employee engagement, two
surveys are conducted each year.
Training outcomes at Turtle Bay
• Turtle Bay’s training outcomes include engagement, guest satisfaction, and financial
measures such as occupancy rates.
Training effectiveness
• Training effectiveness refers to the benefits that the company and the
trainees receive from training.
• Benefits for trainees may include learning new skills or behaviors.
Benefits for the company may include increased sales and more
satisfied customers.
• A training evaluation measures specific outcomes or criteria to
determine the benefits of the program.
Training outcomes
• Training outcomes or criteria refer to measures that the trainer and
the company use to evaluate training programs. To determine the
effectiveness of training, an evaluation needs to occur.
Training evaluation
• Training evaluation refers to the process of collecting the outcomes needed to
determine whether training is effective.
• For Turtle Bay, the outcomes included engagement, guest satisfaction, and financial
measures such as occupancy rates.
• Although not discussed in the vignette, Turtle Bay also has to be confident that its
information-gathering process is providing accurate data for drawing conclusions
about the effectiveness of its training programs.
• The evaluation design refers to the collection of information—including what, when,
how, and from whom—that will be used to determine the effectiveness of the
training program.
• Any organization that evaluates training has to be confident that training—rather
than some other factor—is responsible for changes in the outcomes of interest (e.g.,
turnover, productivity).
Why is training evaluation required?
i. The evaluation enables the effectiveness of an investment in training to be
appraised, which can help to justify expenditure on future programmes.
ii. It allows the effectiveness of differing approaches to be compared.
iii. It provides feedback for the trainers about their performance and
methods.
iv. It enables improvements to be made, either on the next occasion, or if
the evaluation is ongoing, as the training proceeds.
v. Recording learning achievements can be motivational for learners.
vi. The evaluation indicates to what extent the objectives have been met
and therefore whether any further training needs remain.
Purpose of training evaluation
• The purposes of training evaluation are as follows:
• 1. To justify the role of training, considering budget availability and
cutback situations
• 2. To improve the quality of training for employee development,
training delivery, trainer deployment, duration, methodology, etc.
• 3. To assess the effectiveness of the overall programme, quality, and
competency of the trainer
• 4. To justify the course through cost-benefit analysis and ROI
approach
• It can also be used to do the following:
• 1. Provide feedback on whether the training or development activity
is effective in achieving its aims.
• 2. Indicate the extent to which trainees apply what they have learned
back in the workplace (transfer of training), an issue with which many
organizations struggle
• 3. Provide information on how to increase the effectiveness of current
or later development activities
• 4. Demonstrate the overall value and worth of development activities.
2. What should be evaluated, and when?
• Evaluation of a training programme becomes necessary to find out how far
the training programme has been able to achieve its aims and objectives.
Such an evaluation provides useful information about the effectiveness of
training and the design of future training programmes.
• Training evaluation involves both formative and summative evaluation.
Levels/Stages:
1. Pre-Training Evaluation, 
2. Intermediate Training Evaluation and 
3. Post-Training Evaluation
The levels can also be: Formative and Summative
Formative & Summative evaluations
Difference between formative (FA) and
summative assessments (SA)
• Formative assessment is used to monitor students’ learning and to
provide ongoing feedback that instructors or teachers can use to
improve their teaching and students can use to improve their learning.

• Summative assessment, however, is used to evaluate students’
learning at the end of an instructional unit by comparing it against
some standard or benchmark.
• In formative assessment, evaluation takes place during the learning
process, not just once but several times. A summative evaluation takes
place at a completely different point: not during the process but after
it, once a course or unit is complete.

• With formative assessments, you try to figure out whether a student is
doing well or needs help by monitoring the learning process. With
summative assessments, you assign grades, and the grades tell you
whether or not the student achieved the learning goal.
• The purposes of the two assessments are quite different. The purpose
of formative assessment is to improve students’ learning; the purpose
of summative assessment is to evaluate students’ achievement.
• Formative assessment covers small content areas, for example, three
formative evaluations of one chapter.
• Summative assessment covers complete chapters or content areas, for
example, a single evaluation at the end of a chapter, so the body of
material assessed is much larger.
Levels/stages of evaluation
1. Pre-Training Evaluation:

• In this stage, an evaluation is made at the beginning of the training
programme in order to understand the trainees’ expectations of the
programme and the extent to which they have understood its
objectives. This step enables the training segment to modify the
training curricula so that the objectives of the training programme are
aligned with those of the trainees.
2. Intermediate Training Evaluation:

• The training and development segment wants to ensure that training is
progressing as expected. Mid-course corrections can be made in the
event of deviation from the envisaged objectives. For example, if
trainees perceive that a training programme aimed at building
communication skills is more theory-oriented than practice-oriented,
the feedback may be used to modify the instruction method. Thus, it
serves as a verifying tool.
3. Post-Training Evaluation: Kirkpatrick’s model-RLBR

• The criteria used for assessing the impact of a training programme include Reaction,
Learning, Behaviour, and Results (RLBR).
• a. Reactions:

• This measures the degree of trainee satisfaction with the training programme,
namely its subject matter and content, the environment, the methods of training,
and so on. The outcome of the reaction evaluation may be useful in further
strengthening the areas participants find most useful and in modifying the areas
they find less useful. Negative reactions may dampen enthusiasm for participating
in future training programmes. However, positive reactions may not provide
complete information about the effectiveness of the programme.
Reaction
• Questionnaires, interviews, group discussion, or asking trainees to
write a report can be used.
• Care must be taken with all of these methods. Very often participants
have enjoyed a course, even if they learned very little. Factors such as
the quality of the lunch provided, or the comfort of the chairs, may
influence the assessment of the training given.
• The other participants may have spoilt a basically sound course, or
conversely saved a basically poor course.
• Trainees are not always in a position to know immediately whether
what they have learned will be useful and it may be best to wait some
considerable time before asking for an opinion.
• Sometimes a trainee may have felt unfairly criticised during a course,
and so may ‘rubbish’ it in retaliation.
• The more training a person receives, the more critical he or she is likely
to become.
• Standards and expectations rise with experience.
• Using more than one technique can be helpful to gain a broader picture.
• Also look out for cues such as an increase or decrease in demand for
the training (where there is a choice), or if the line managers start
asking for one particular trainer in preference to another.
• b. Learning:
It measures the degree to which trainees have acquired new knowledge,
skills, or competencies. The trainer has to measure the knowledge and skill
level of trainees at the beginning of the programme; this serves as the
baseline or standard. The level of knowledge and skills attained at the end
of the training is then measured and compared against this baseline.
Thus a pre- and post-training comparison helps to assess the level of
improvement, as in the sketch below.
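A minimal, hypothetical Python sketch of such a pre/post comparison is shown below; the trainee names and scores are illustrative, not drawn from the text.

```python
# Hypothetical pre- and post-training test scores (0-100) for a small group of trainees.
pre_scores = {"trainee_a": 55, "trainee_b": 62, "trainee_c": 70}
post_scores = {"trainee_a": 78, "trainee_b": 74, "trainee_c": 85}

# Gain per trainee: post-training score minus the pre-training baseline.
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}

# Average improvement across the group, one simple indicator of learning.
average_gain = sum(gains.values()) / len(gains)

print(gains)                                        # {'trainee_a': 23, 'trainee_b': 12, 'trainee_c': 15}
print(f"Average gain: {average_gain:.1f} points")   # Average gain: 16.7 points
```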
Level 2 – Learning (Knowledge and skill):

• Tests, examinations, workplace-based assessments of competence,
projects, or attitude questionnaires are the key techniques here. Some
learning situations are easy to test for (e.g. typing ability), whereas others
necessarily involve a good deal of subjectivity (e.g. counselling skills).
• Yet other learning is so long-term in its nature that direct methods are
frankly not appropriate. For example, if a newly appointed supervisor
attends a course, then an end test or examination can only tell us if he or
she has learned certain terms, concepts or models. It cannot tell us if he or
she will become a good supervisor by applying that learning in the work
situation.
• The processes used at level 2 are often termed validation.
Level 3 – Behaviour (performance):

• This level requires assessment of improved performance on the job. This
is easiest in jobs where before-and-after measures can easily be made
(e.g. the speed at which an insurance proposal form can be processed).
• It becomes more difficult to evaluate performance in jobs which are less
prescribed and where measurement is imprecise (e.g. training design).
There may be a time-lag between training and the appearance of
indicators of performance improvement.
• For instance, upon returning to work after attending a course on
leadership, a manager may immediately practise what he or she has
learned – but the results of this take two or three months to become
apparent.
• During that time other factors in the situation may have changed –
there may have been some new staff recruited, or some redundancies
have affected morale. If we were to instigate a long-term assessment
process, we would also find it difficult to separate out the influence of
day-to-day experience from the influence of the formal training
course; it is often impossible to isolate the precise influence of the
training. Often the trainer has to resort to indirect performance
assessment measures to gauge the influence of the training.
• d. Results:
• Generally, it is difficult to measure precisely the impact of training on
business performance, which depends on several other factors such as
the economic climate, marketing, size of investment, etc. However,
certain measures such as productivity, sales volume, and profit may
be compared before and after the training episode, and any
improvement may be partially attributed to the training imparted.
Besides these, return on investment, cost-benefit analysis, and
benchmarking are other methods used to assess the value of training.
• Level 4 – Results:
• Because departmental and organisational results depend upon many
people and it is difficult to attribute improvements to the efforts of
specific individuals, evaluation at this level often has to be conducted in a
more general way.
• Does the overall training programme result in greater efficiency,
profitability, or some other outcome of interest? If we want to look at the
impact of a large training programme on part of a large organisation, we
can take an experimental approach.
• Ideally, we take two identical units. One is given lots of training, the other
is given none. Two years later, the difference in performance is apparent!
• Obviously such an approach is not one which can be easily advocated.
If we really believe that the training is likely to be of value, it is unfair,
perhaps even unethical, to withhold it from one of the units in order
to conduct an experiment.
• However, it is sometimes possible to obtain historical information
which shows a correlation between spending (or some other measure)
on training and organisational performance. Perhaps two similar units
within the same organisation can be compared and the relationship
between past training activity and other measures can be assessed
(e.g. accident rate, machine downtime, customer complaints).
Why don’t companies use results outcomes to evaluate their
training programs?

• Results outcomes are level 4 criteria in Kirkpatrick’s framework. They
are the benefits of the training program for the company, such as
lower employee turnover, increased sales, better customer service,
and the like. These data may be difficult to collect, and companies
may not have the expertise to statistically link training to these outcomes.
Level 5: Training evaluation: Measuring ROI of training
programme
• Return on investment (ROI) refers to comparing the training’s monetary benefits with the cost of the training.
• ROI is often referred to as level 5 evaluation.
• ROI can be measured and communicated based on a percentage or a ratio.
• For example, assume that a new safety training program results in a decline of 5 percent in a company’s accident rate.
This provides a total annual savings (the benefit) of $150,000 in terms of lost workdays, material and equipment damage,
and workers’ compensation costs. The training program costs $50,000 to implement (including both direct and indirect
costs). To calculate the ROI, you need to subtract the training costs from the benefits, divide by the costs, and multiply by
100. That is, ROI = ((150,000 – 50,000) ÷ 50,000) × 100% = 200%. The ROI for this program is 200 percent.
• Another way to think about ROI is to consider it as a ratio based on the return for every dollar spent (see the sketch after this list). In this example, the
company gained a net benefit of $2 for every dollar spent. This means the ROI is 2:1.
• Training costs can be direct and indirect.
• Direct costs include salaries and benefits for all employees involved in training, including trainees, instructors,
consultants, and employees who design the program; program material and supplies; equipment or classroom rentals or
purchases; and travel costs.
• Indirect costs are not related directly to the design, development, or delivery of the training program. They include
general office supplies, facilities, equipment, and related expenses; travel and expenses not directly billed to one
program; training department management and staff salaries not related to any one program; and administrative and staff
support salaries.
• Benefits are the value that the company gains from the training program.
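To make the arithmetic above concrete, here is a minimal Python sketch of the ROI calculation using the figures from the safety-training example in the text; the function name training_roi is hypothetical, not part of any standard library.

```python
def training_roi(benefits: float, costs: float) -> tuple[float, float]:
    """Return (ROI as a percentage, net benefit per dollar spent).

    benefits: total monetary benefit attributed to the training
    costs:    total direct plus indirect training costs
    """
    net_benefit = benefits - costs
    roi_percent = (net_benefit / costs) * 100   # ((benefits - costs) / costs) x 100
    benefit_per_dollar = net_benefit / costs    # e.g. 2.0 corresponds to a 2:1 ratio
    return roi_percent, benefit_per_dollar


# Figures from the safety-training example: $150,000 annual benefit, $50,000 cost.
roi_pct, per_dollar = training_roi(benefits=150_000, costs=50_000)
print(f"ROI = {roi_pct:.0f}%")                        # ROI = 200%
print(f"Net benefit per dollar = ${per_dollar:.2f}")  # $2.00, i.e. a 2:1 ratio
```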
TCS ROI
• Tata Consultancy Services Ltd., a global information technology
service company headquartered in India, measures ROI for its
technology training programs. To calculate the ROI, revenues earned
as a result of training are calculated based on the billing rates (sales
to clients) of participants who attend the training and use the new
the technical programs is 483 percent.
Other ways to Evaluate the Effectiveness of a Training Programme?

• 1. Assessment of trainees’ comments and reactions to the training programme
after the training is over.
• 2. Observation of trainees during the training programme.
• 3. Comparing on-the-job performance of the trainees before and after training.
• 4. Collection of opinions and judgements of trainers, superiors and peers.
• 5. Giving oral and written tests to trainees to find out how far they have learnt
through the training programme.
• 6. Cost-benefit analysis of the training programme.
• 7. Measurement of levels of employee absenteeism, turnover, productivity,
wastage or scrap of materials, accidents, and breakage of machinery during the
pre-training and post-training periods.
• 8. Evaluation of trainees’ skill level before and after training.
• 9. Collection of opinions of the trainees’ subordinates regarding their
job performance and behaviour.
• 10. Collection of information through evaluation forms duly filled up
by the trainees.
• 11. Knowing trainees’ expectations before training and collecting their
views regarding the attainment of the expectations after training.
The output of training evaluation will serve:

• 1. To illustrate the real worth of a training programme
• 2. To pinpoint where improvement is required in forthcoming training
programmes
• 3. To assess effectiveness of the overall course, trainer, and the training
methods
• 4. To carry out cost-benefit analysis to justify the amount spent; to prove that
the benefits outweigh the cost
• 5. To formulate a basis for making rational decisions about future training plans
• 6. To justify the role of training for budget purposes and in cutback situations
of budget crunch.
EVALUATION PRACTICES

• Below are percentage estimates of organizations examining different
training outcomes.
• Reactions 92%
• Cognitive (learning) 81%
• Behavior 55%
• Results 37%
• ROI 18%
• Reactions and cognitive (learning) outcomes are the most frequently used
outcomes in training evaluation. Despite the less frequent use of
behavioral and results outcomes, research suggests that training can
have a positive effect on these outcomes.
•  There are a number of reasons why companies fail to evaluate
training.
• Learning professionals report that access to data and tools needed to
obtain them are the most significant barriers.
Principles of Training Evaluation:

• 1. The evaluation specialist must be clear about the training program and
about the goals and purposes of the evaluation.
• 2. Evaluation should be continuous.
• 3. Evaluation must be specific.
• 4. Evaluation must provide the means and focus for trainers to be able
to appraise themselves, their practices, and their products.
• 5. Evaluation must be based on objective methods and standards.
• 6. Realistic target dates must be set for each phase of the evaluation
process.
