
Basics of Follow-up in Training

NTT 12-13: It's More Than Trainings

Basics of Follow-up
"Evaluation is a science and an art. It is a blend of concepts, theory, principles, and techniques. It is up to you to do the application." – Dr. Don Kirkpatrick

During this presentation, we will go through some very important points that we should take into consideration when we talk about the Follow-up stage:
1. The classical training cycle;
2. The purpose of evaluation;
3. Kirkpatrick's four levels of evaluation;
4. Examples of evaluation methods.

Before planning any evaluation activities, we should first ask ourselves some very important questions:
o Why is evaluation critical to the training process?
o Why do I want to do it?
o What do I expect to achieve from the evaluation?
o How can I use what I glean from the evaluation process to make improvements?
o What kinds of follow-up activities are most beneficial?

Let's now go through the most important sections of this booklet:

1. A classical training cycle.
A classical training cycle follows the five stages listed below. It starts with the Needs Assessment, continues with the Development of your training's Objectives and then the Development of the program itself (based on those objectives). Then comes the actual Implementation of the program by delivering the session, and last but not least is the Follow-up, one of the most important stages of all: it brings closure to your learning program, but at the same time may open opportunities to create and deliver another one.

Stage 5 of The Training Cycle: Evaluate performance.

2. The purpose of the evaluation.
Many authors explain that the purpose of evaluation is very simple: to change the participants, that is, their behavior, opinions, knowledge, or level of skill. The purpose of evaluation is to determine whether the objective was met and whether these changes have taken place. In this context, the feedback received from the participants, in different ways, can be very revealing. Using different kinds of evaluation methods, you can provide feedback to many types of stakeholders:
o Team leaders, regarding their success in mastering new knowledge, attitudes, and skills;
o Members, concerning their work-related strengths and weaknesses;
o Participants, for whom evaluation results can be a source of positive reinforcement and an incentive for motivation;
o Trainers, for developing future interventions and identifying program needs or modifications to the current training efforts;
o Team leaders, as to whether there is observable change in members' effectiveness or performance as a result of participating in the training program;
o The organization, regarding return on investment in training.

3. Kirkpatrick's four levels of evaluation.
Don Kirkpatrick originally developed his four levels of training evaluation (reaction, learning, behavior, and results) almost half a century ago, and they are as applicable today as they were in the 1950s.

I. Reaction measures participants' satisfaction with the training.

It is the most often used type of evaluation, because it is the easiest way to get a very quick reaction from the public. Why?
o It provides information about the trainer's performance.
o It is an easy and sustainable process.
o If the tools are constructed well, the data can identify what needs to be improved.
o The satisfaction level provides guidance to management about whether to continue to invest in the training.
o If conducted immediately following the training session, the return rate is generally close to 100 percent, providing a complete database.

Examples of Follow-up methods at this level: First, use two flipchart pages. At the top of one write "positives" and at the top of the other write "changes". Then ask participants to provide suggestions about what went well that day (the positives) and what needs to change. Capture ideas as they are suggested. A second method is to pass out an index card to each participant. Ask them to anonymously rate the day on a 1 to 7 scale, with 1 being low and 7 being high, and then to add one comment about why they rated it at that level.

II. Learning measures the extent to which learning has occurred.

At this level you have to create methods to assess the knowledge, skills, and attitudes (KSA) that participants acquired from the training, and whether they are able to implement and use them. Most training programs have at least one objective aimed at improving knowledge, be it practical, cognitive, or symbolic (the latter designed to change attitudes). At this stage, the trainers may consider whether the learning objectives have been met; this may affect future activities and ways to improve them. Examples of Follow-up methods at this level: a self-assessment for participants to compare what they gained as a result of the training.

An assessment of a member's knowledge and skills related to the job description's requirements. If an attitude survey is conducted, it provides an indication of a member's attitude about the content. An assessment of whether participants possess the knowledge to safely perform their duties; this is especially critical in a manufacturing setting.

III. Behavior measures whether the skills and knowledge are being implemented.

As you can see, this type of evaluation can't be applied immediately after training, because members need time to find the right situations that give them the opportunity to apply the new information received. To conduct a Level III evaluation correctly, you must find time to observe the participants on the job, create questionnaires, speak to supervisors, and correlate data. Behavior analysis must take all the stakeholders involved into account for a better evaluation. Thus, the situation must be evaluated and measured by the team leader, through his remarks following the training, and by the NGO as a whole. Even though measuring at Level III may be difficult, the benefits of measuring behaviors are very clear:
o The measure may encourage a behavioral change on the job.
o When possible, Level III can be quantified and tied to other outcomes on the job.
o When a lack of transfer of skills is clearly defined, it can clearly point to a required change in the training design.
o A before-and-after measurement provides data that can be used to understand other events.
o Sometimes, Level III evaluations help to determine reasons, unrelated to the training, why change has not occurred.

IV. Results measures the business impact.

The role of this type of evaluation is to analyze the cost-benefit ratio of the overall training (sometimes incorrectly called cost-benefit analysis, or return on investment) and to determine whether the work was worth all the costs. Actual results are very difficult to assess from the data collected, but in an organizational environment it is easier if, at the end of the project or at an important point in it, you measure overall performance against the rate of target achievement (management by objectives, MBO). A cost-benefit analysis is usually completed before a training program is created, to decide whether it is worth the investment of resources required to develop the program. Return on investment (ROI) is calculated after the training has been completed, to determine whether it was worth the investment.

Measurements focus on the actual results for the business as participants successfully apply the program material. Typical measures include output, quality, time, costs, and customer satisfaction.
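The cost-benefit ratio and ROI described above come down to simple arithmetic; the hard part in practice is isolating and monetizing the benefits attributable to the training. A minimal sketch, using entirely hypothetical cost and benefit figures:

```python
# Hypothetical figures for illustration only
program_costs = 4000.0      # design, delivery, materials, participant time
program_benefits = 10000.0  # monetized gains attributed to the training

# Cost-benefit ratio: benefits earned per unit of cost
benefit_cost_ratio = program_benefits / program_costs

# ROI: net benefits expressed as a percentage of costs
roi_percent = (program_benefits - program_costs) / program_costs * 100

print(f"Benefit-cost ratio: {benefit_cost_ratio:.1f}")  # Benefit-cost ratio: 2.5
print(f"ROI: {roi_percent:.0f}%")                       # ROI: 150%
```

In this example every unit of cost returns 2.5 units of benefit, so the training "pays back" its cost with a 150 percent net return.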

4. Examples of methods of evaluation.

Quizzes
This method measures how well trainees learn program content. An instructor administers paper-and-pencil or computer tests in class to measure participants' progress. The test should measure the learning specified in the objective. Tests should be valid and reliable: valid means that an item measures what it is supposed to measure; reliable means that the test gives consistent results from one application to another.
o Multiple-choice questions take time and consideration to prepare. However, they maximize test-item discrimination while minimizing the effect of guessing. They provide an easy format for the participants and an easy method for scoring.
o True-false tests are more difficult to write than you may imagine. They are easy to score.
o Matching tests are easy to write and to score. They require a minimum amount of writing but still offer a challenge.
o Fill-in-the-blank or short-answer questions require knowledge without any memory aids. A disadvantage is that scoring may not be as objective as you may think: if the questions do not have one specific answer, the scorer may need to be more flexible than originally planned. Guessing is reduced because there are no choices available.
o Essays are the most difficult to score, although they measure achievement at a higher level than any of the other paper-and-pencil tests. Scoring is the most subjective.

Attitude surveys
These question-and-answer surveys determine what changes in attitude have occurred as a result of training. Practitioners use these surveys to gather information about employees' perceptions, work habits, motivation, values, beliefs, and working relations. Attitude surveys are more difficult to construct because they measure less tangible items. There is also the potential for participants to respond with what they perceive to be the "right" answer.
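Part of why objective formats such as multiple-choice or matching are "easy to score" is that scoring can be automated. A minimal sketch, with a hypothetical answer key and one participant's hypothetical responses:

```python
# Hypothetical answer key and one participant's responses
answer_key = {"Q1": "b", "Q2": "d", "Q3": "a", "Q4": "c"}
responses = {"Q1": "b", "Q2": "a", "Q3": "a", "Q4": "c"}

# Count the questions where the response matches the key
correct = sum(1 for q, a in answer_key.items() if responses.get(q) == a)
score_percent = correct / len(answer_key) * 100

print(f"Score: {correct}/{len(answer_key)} ({score_percent:.0f}%)")  # Score: 3/4 (75%)
```

Essays and open short-answer items resist this kind of mechanical scoring, which is exactly why their scoring is the most subjective.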

Simulation and on-site observation
Instructors' or managers' observations of performance on the job or in a job simulation indicate whether a learner is demonstrating the desired skills as a result of the training. Facilitate this process by developing a checklist of the desired behaviors. This is sometimes the only way to determine whether skills have transferred to the workplace. Some people panic or behave differently if they think they are being observed. Observations of actual or simulated performance can be time-consuming, and they require a skilled observer to decrease subjectivity.

Criteria checklists
Also called performance checklists or performance evaluation instruments, criteria checklists are surveys using a list of the performance objectives required to evaluate observable performance. The checklists may be used in conjunction with observations.

Productivity or performance reports
Hard production data, such as sales reports and manufacturing totals, can help managers and instructors determine actual performance improvement on the job. An advantage of using productivity reports is that no new evaluation tool must be developed, and the data is quantifiable. Disadvantages include a lack of contact with the participant and records that may be incomplete.

Post-training surveys
Progress and proficiency assessments by both managers and participants indicate perceived performance improvement on the job. Surveys may not be as objective as necessary.

Needs/objectives/content comparison
Training managers, participants, and supervisors compare needs-analysis results with course objectives and content to determine whether the program was relevant to participants' needs. Relevancy ratings at the end of the program also contribute to the comparison.
Evaluation forms
Sometimes called response sheets, end-of-program evaluation forms let participants indicate what they liked and disliked about the training delivery, content, logistics, location, and other aspects of the training experience. The form lets participants know that their input is desired. Both quantitative and qualitative data can be gathered.

A very important thing we shouldn't forget is that Follow-up can be done even during the training itself, so an element not to be overlooked is the debriefing and review after each activity, which helps members better understand the information already given, its importance, and its purpose. Other ways of doing Follow-up activities during the session are simulations, quizzes, or any other games based on the theoretical information the trainer has already delivered.

Evaluation is the final stage in The Training Cycle, but it is certainly only the beginning of improving training. It will be up to you to take your training efforts to the next level, relying on evaluation to help you decide what to improve.

NTT 12-13 wishes you good luck in creating the best training sessions and the most creative Follow-up methods you need to bring added value to your trainings.

This material was created using the Training For Dummies training manual by Elaine Biech.