
Stakeholders to Training & Evaluation Strategy

Evaluation

 It is the process of establishing the worth of something.

 It simply means the act of judging whether the activity being evaluated is worthwhile in terms of set criteria.

 'Worth' here means the value, merit or excellence of the thing.

Key questions to be answered through Training Evaluation

Was it worth the investment in Time, Effort & Cost?

Stakeholders

 Participants

 Organization

 Training Agency

 Trainers

Resistance to Training Evaluation

 There is nothing to evaluate.

 No one really cares about it.

 Evaluation is a threat to my job.

Purpose of Evaluation

 Feedback - on the effectiveness of the training activities

 Control - over the provision of training

 Intervention - into the organizational processes that affect training

Benefits of Evaluation

 Improved quality of training activities

 Improved ability of the trainers to relate inputs to outputs

 Better discrimination of training activities between those that are worthy of support and those
that should be dropped
 Better integration of training offered and on-the job development

 Better co-operation between trainers and line-managers in the development of staff

 Evidence of the contribution that training and development are making to the organization

Definition

 Hamblin (1970) -

"Any attempt to obtain information (feedback) on the effects of a training programme and to assess the value of the training in the light of that information, with a view to improving the training."

 Bramley (1996) -

"Evaluation of training is a process of gathering information with which to make decisions about training activities."

Validation

 Internal- Focus is on the immediacy of the training and learning resulting in the improvement of
the trainee.

 External- Series of tests designed to ascertain whether the objectives of internally valid
programme are based on an accurate identification of training needs.

Formative Evaluation

 Continuous monitoring of training effectiveness to facilitate ongoing correction and modification.

Summative Evaluation

Takes place at the end of training. It examines the outcomes of training with an eye on fresh training needs and initiatives.

Need for Evaluation

 Participants

 Training Agency

 Trainers

 Organization

 TNI (Training Need Identification) for the future

 Identification of gaps
 Validity of training

Training Evaluation:

 Why to evaluate?

 When to evaluate?

 What to evaluate?

 How to evaluate?

Why Evaluate?

1) Cost benefit returns from training investment:

 Training department

 Senior managers / clients

 Trainees

2) Enable improvements in the assessments of training needs:

3) Self correcting feedback.

4) Feedback on performance of the trainers.

5) Feedback on performance of the trainees.

The question we should be asking is not 'Why evaluate training?' but 'Can we afford not to evaluate training activities?'

 Conclusive feedback- To establish links between training and organizational needs and goals

 Directive feedback- To improve quality, design and delivery of the present and future training.

 Intervention feedback- Facilitate transfer of training to the job by identifying support systems
required by training utilizers.

Principles of Evaluation

1) Clarity: Be clear about the purpose of evaluation in order to set the standards and criteria of evaluation.

2) Objectivity:

• Measurable standards of assessment.

• Designing of valid and reliable research instruments.


• Detached analysis and interpretation of data.

3) Reliability: For evaluation to be reliable, the results should be consistent:

• Irrespective of the method used to gather the data.

• When repeated by the same trainer again.

• When interpreted by any other person.

4) Feasibility:

• Is it cost effective?

• Is your methodology practical?

• Is there utility of the data?

A feasible evaluation approach is YOUR ideal design.

5) Process and not end product

 Evaluation is a process and it has to be continuous.

 Evaluation has to begin before the training activity and end well after the conclusion of the visible training activity.

 It should guide trainers on current training effectiveness and help improve subsequent training.

6) Evaluation design to be tailor-made

 It should suit specific training levels and standards.

 Generalizations drawn from one evaluation may not identify the strengths and weaknesses of training meant for a different set of objectives.

Role of Evaluator

 The evaluator's role should be based on a sound working relationship with those directly and indirectly involved in training:

1. The trainees

2. The trainees' managers, to whom they report.

3. The other training faculty


4. Those responsible for providing the training budget like the HRD policy makers or some
members of the company’s top team.

In this context the evaluator has to:

a) Be acceptable to others.

b) Possess the skill of getting involvement and commitment of others

c) Have interpersonal sensitivity and trust for frank sharing of feedback.

d) Encourage understanding and selling the benefits of evaluation.


Theories & Models of Evaluation

Paper VII

Training Evaluation

 Why to evaluate?

 When to evaluate?

 What to evaluate?

 How to evaluate?

MODELS OF EVALUATION

The evaluator should know ‘When’ & ‘What’ to evaluate.

1) Hamblin Model of Evaluation

 Reaction

 Learning

 Job Behavior

 Functioning

 Ultimate Value

2) Kirkpatrick’s Design of Evaluation

 Reaction – How well did the trainees like the programme?

 Learning – What principles, facts, techniques were learned?

 Behaviour – What changes in job-behaviour resulted from the programme?

 Results – What were the tangible results of the programme in terms of reduced cost, improved quality, etc.?

3) Peter Warr's Framework of Evaluation

 Context Evaluation (C)

 Input Evaluation (I)

 Process Evaluation (P)

 Outcome Evaluation (O)

a) Immediate outcome

b) Intermediate outcome

c) Long term outcome

4) Virmani & Premila's Model of Evaluation:

 Pre-Training Evaluation

 Context and Input Evaluation

 Post-Training Evaluation

i) Reaction Evaluation

ii) Learning

iii) Job improvement Plan (JIP)

 Evaluating Transfer of training to the job

 Follow-up of Evaluation

5) Peter Bramley’s Model of Evaluation

 Evaluation Before Designing Learning Event

 Evaluation during the Event

 Evaluation after the event

i) Organisational Level

ii) Team Level

iii) Individual Level

 Changes in Behaviour

 Changes in Learning

6) David Reay's Approach to Evaluation

 The Trial Phase

i) Developmental Stage
ii) Pilot Testing

 The Ongoing Phase

i) Validation

ii) Formative Evaluation

 The Final Phase

Stages of Evaluation

A) Pre-Training Evaluation

i) Assessing Training Needs

ii) Performance standards expected from the trainee

iii) Do the training objectives concur with the training needs?

iv) Profile of Trainees

v) Input Evaluation

B) Evaluation During Training

i) Learning on that day

ii) Factors that helped/hindered the learning process

iii) Specific changes desired in the day's schedule.

C) Post Training Evaluation

i) Reaction Evaluation

ii) Learning Evaluation

iii) Job Improvement Plan (JIP)

iv) Evaluating Training at the Job-Behaviour stage

v) Ultimate Value

Evaluation Methods

A) Pre-Training Evaluation

i) Identification of Training Needs


a) Where does the training department, vis-à-vis the line manager, come into focus for the identification of training needs?

b) What role does the line manager / immediate boss play?

c) How does the training department promote the process of training need identification?

d) Method used – proactive or reactive?

ii) Evaluate Performance standards

 Information related to standards of performance helps in setting realistic objectives

 Needs of the trainee are person specific, performance standards are job specific

 Performance standards are identified based on KRAs, e.g. productivity, profit, accident rate, machine downtime, etc.

iii) Evaluate Training Objectives

 To assess whether there is good congruence between the trainee’s needs and training
objectives

iv) Evaluate Trainee's Profile

 Some measure of pre-training knowledge, skill and attitude is desirable to compare with post-training performance.

v) Content Evaluation

 Committee approach has been found to be fairly successful for input evaluation

 Brainstorming session

 Content should match the trainee's pre-training profile.

B) Evaluation During Training

i) Observation

ii) Behaviour Analysis

iii) Course audits

 At the end of each day (Short duration training)

 Midway through the course (long duration programme)


iv) Session Assessment

Normally this is done using a semantic differential scale of three or five points.

C) Post training Evaluation

i) Reaction Evaluation

At the end of the course, reactions are collected using five-point or seven-point rating scales. Semantic-differential words such as 'extremely', 'totally', 'very well' and 'very bad' carry different meanings for different respondents.

ii) Learning Evaluation

a) Knowledge Learning

i) Objective tests

ii) Examination - written or oral

Learning Index = (Post-training score − Pre-training score) / (100 − Pre-training score) × 100
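As a quick sketch of this formula (assuming pre- and post-training scores are percentages out of 100; the example scores are hypothetical):

```python
def learning_index(pre_score: float, post_score: float) -> float:
    """Learning Index = (post - pre) / (100 - pre) * 100.

    Expresses the trainee's gain as a percentage of the
    maximum possible gain from their starting score.
    """
    if pre_score >= 100:
        raise ValueError("pre-training score must be below 100")
    return (post_score - pre_score) / (100 - pre_score) * 100

# A trainee scoring 40 before and 70 after training closed
# half of the possible gap, so the index is 50:
print(learning_index(40, 70))  # 50.0
```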

Levels of knowledge

1) Information – rules and regulations of the organisation, etc.

2) Factual knowledge – principles, statistical data etc.

3) Dynamic – Intellectual or analytical understanding of theories, issues etc.

Tool and instruments

• True and false statements

• Short answer items

• Multiple choice questions

• Matching items

• Writing essays

b) Skill Learning

a) Knowledge is important mainly as a pre-requisite to skill; therefore, in evaluating skills we are also evaluating knowledge.

b) No generalisation can be made.

c) Skills are acquired by actual practice, so they can be evaluated by observing and analysing actual performance.

d) It is easier to measure technical skills or physical actions than to assess managerial or social skills.

e) Knowledge of a skill does not guarantee actual performance.

c) Attitude Learning

 Attitudes exist only in the mind and cannot be directly inferred from people's behaviour.

 A common way of evaluating attitudes and opinions is to hand out a questionnaire at the start and at the end of the programme.

D) Job Behaviour Evaluation

 It is the crucial halfway stage between training and its ultimate effects.

Methods: watching and asking, i.e. observation and questionnaire/interview.

E) Job Improvement Plan

 It is an individual action plan & hence has to be trainee specific

 Processing the job improvement plan is possible only through content analysis

F) Ultimate Value Evaluation

 The trainee’s boss can enable the trainee to translate the improved behaviour into perceptible
measurable benefits by ensuring post training debriefing discussion with special reference to the
trainee's job improvement plan

 Direct observation, activity sampling, semi-structured interviews

G) Follow Up Results

 This is basically to seek information on the degree of application of learning & job Improvement
plan

 Cost Benefit analysis


i) Criteria and Approaches for Selection of Evaluation Methods

ii) Techniques of Measurement

Training Evaluation:

 Why to evaluate?

 When to evaluate?

 What to evaluate?

 How to evaluate?

Reactions of the trainees are important, but organisers should look for:

i. Signs of transfer of training to the client's organization

ii. Content analysis of Job Improvement Plan

iii. Ensuring proper selection of participants

iv. Encouraging supervisors and line managers for their involvement.

Difficulties in End Term Evaluation

• Mental and emotional state of the participants.

• Participant’s attitude towards program and trainer.

• Social and cultural factors affect the participant’s response.

• Expectations of the participants from Trainer e.g. personal attention.

• Intellectual level of the participants; frustration.

• Evaluation lacks comprehensiveness, as it is not possible to quantify every aspect of learning.

• In long-term programmes, participants' views are influenced by later activities.

• Learning that happens at a subconscious level is not expressed.

• In reaction-level evaluation, participants give general comments and impressions.

• Lobbying with participants by the Trainer.

• Learning varies across participants, based on the value they attach to each topic or module according to their own requirements and perceptions.

• It is difficult to maintain focus if a large number of guest speakers are involved.

i) Criteria and Approaches for Selection of Evaluation Methods

1) Ultimate Value Approach

 Who- Top Management, Sponsors

 Why- Total Impact, Returns-on-Investment

 Outcome- Control

e.g. Sales training with expected outcome of increase in sales efficiency.

2) Trainee Centered Approach

 Who- line managers, trainee’s boss, trainees

 Why- improved job performance, learning index

 Outcome- feedback

In this approach, the growth and development of the individual is primary and the cost incurred on training is secondary.

e.g. GST, ISO

3) Training Centered Approach

 Who- training manager, trainer, training institutions.

 Why- transfer of training

 Outcome- information

e.g. Induction Training

ii) Techniques of Measurement

Measurement is a set of rules for assigning numbers to objects, entities or individuals

1) Observation :

 Observation is a fundamental method of measurement in training evaluation.

 It is the most direct, objective and reliable form of measurement for understanding human behavior.

 Systematic observation is considered the hallmark strategy for planning and evaluation of
training programmes.

Important Aspects of Observation


 What behavior to observe and whether the sample behavior is representative of the broad
domain of the training programme.

 The setting in which the behavior will be observed.

 The length of the time for observing the behavior

 Who will observe the behavior

 Coding and analysis of the data

Types of observation

 Ranges from the most natural, uncontrolled observation to the most exact film recording of training sessions.

 Participant and non participant.

Aids in observation

 Schedules of information may be used to guide the observation process.

 Checklists or groupings of behavioral variables can be used as an aid to provide numerical measures of observation.

 Modern tools like video recording can also be used.

Pros and cons of observation

Strengths-

i. Flexible

ii. Simple

iii. Wide range of applicability

Limitations –

i. Expensive and labor intensive

ii. Cannot access inner experiences

iii. Gives limited snapshot of behavior

2) Interview

 Interview is fundamentally a process of social interaction in training program.

 Most ubiquitous method of obtaining information from people.


 Powerful scientific tool for data collection in training evaluation.

 Involves both verbal and non verbal communication like gestures, facial expressions, glances
and pauses which reveal subtle feelings

 An art and science

 Interviews are used to elicit both quantitative and qualitative information on complex issues like
beliefs, attitudes, feelings and behaviors of the trainees.

 It can provide a basis for prediction, understanding and action, thereby providing an active tool
for assessment of the ultimate value of training.

 A good interview is one that is planned, executed skillfully and goal oriented.

Types of interviews

i. Structured interview

ii. Unstructured interview

iii. Selection interview

iv. Focused interview

v. Case history interview

vi. Depth interview

vii. Repeated interview

viii. Crisis interview

ix. Research interview

x. Pre test and post test interview

Essentials of good interview

i. Relationship between trainer and trainee.

ii. Good rapport.

iii. Attitude of understanding

iv. Sincerity

v. Acceptance

vi. Empathy
vii. Good communication skills (verbal and non verbal)

viii. Appropriate language

ix. Appropriate questions

x. Listening

Merits and demerits

Merits :

 Ensures qualitative as well as quantitative measures.

 Interview is a superior technique for exploration of deeper and more complex areas of training evaluation.

 Interviewer can act as a catalyst.

 Personal interviews yield good response rates and correct, relevant information.

Demerits:

 Heavy in cost, time and energy.

 Skills of evaluation.

 Interviewer bias.

3) Questionnaire

 A questionnaire is a measurement tool consisting of a set of questions designed to elicit information about the subject.

 By using a questionnaire one can document the effect of training and establish individual profiles and sets of group profiles.

 A questionnaire is a useful tool for obtaining reliable and valuable information about the beliefs, feelings and attitudes of trainees.

 Questionnaires are often used to measure participants' attitudes about the areas which may be affected by training.

 Questionnaires can be structured or unstructured, close-ended or open-ended.

 Questions should be properly worded, arranged, codified, and duly pre-tested and approved by experts in the field.

 Any questionnaire must be limited in its length and scope.


Merits and demerits

Merits :

 Economical in terms of time, effort and cost for the trainer and trainees.

 Easy to plan, construct, administer and analyze.

 It can cover a large universe of the training world.

Demerits:

 Non-response rates in questionnaires are known to be high.

 Questionnaire data may give biased samples.

4) Rating scales

 Rating scale methods are primarily used for systematizing and structuring the collection of data on training results.

 The great ease with which they can be administered gives them unusual appeal.

5) Paper and pencil test

 Paper and pencil tests may be used to measure work skills, general intelligence, special achievement and personality.

 These tests are more appropriate for assessing specific attributes to be incorporated in the
training package.

 They are widely used in personnel selection, placement and predicting job performance.

6)Work samples

 A work sample is a miniature replica of the job. The task is actually a part of the work to be performed on the job.

 This can yield numerical information about how well training participants can perform a
particular task.

 Work samples are used in both blue and white collar jobs.

7) Simulation

 The work sample is a minor part of the job, while simulation is a microcosm of the entire job.

8) Job performance
 Performance measures are used in assessing training needs, providing feedback to the
participants and in training evaluation.

9) Individual and Group performance measures

 The basic categories of individual and group performance measurement are quantity, quality and timeliness.

10) Individual and group behavior measures

 Desirable behaviors like team work, maintaining good interpersonal relationships etc.

 Undesirable behaviors are absenteeism, lateness, voluntary quits, grievances, etc.

Problems of measurement

1)The Hawthorne effect-

 Hawthorne plant of Western Electric Co., America (1930)

 The people behaved differently precisely because they were the subject of research.

2) Measuring trainee’s reaction to training

 Trainee acts as an evaluator – lesser learning.

 Trainee acts as an appraiser – the trainer tries to please the trainee.

 The group is swayed by a few leaders who emerge during the course of training.

3) Measuring training in quantitative terms –

 It is difficult to quantify all data.

4) Measuring managerial job behavior.

5) Measurement of change.

6) Measuring cost-benefit returns from training.


Tools & Methods for Evaluation
Return on Investment:
Training and Development

Evaluating Training Programs: 4 Levels

1: Reaction – the trainees' experience of the programme

2: Learning – knowledge, skills and attitudes (KSA)

3: Behavior – change in behavior on the job

4: Results – final results for the organization

General remarks for Training Budget

 The Training Budget is too high

 We need to decrease our Operation Expenses (OPEX)

 There is no guarantee that this investment will lead to the result we are looking for

 Sending all these employees to training will reduce the working force during the period

 Can't they learn by practice while they are working?

Return on Investment
 Rooted in Manufacturing.

 Advanced to Banking, Health Care, Non-profit, Public and Education sectors.

 Part of Quality and Efficiency Methodologies.

ROI is Used To:

◦ Quantify the Effectiveness of Training.

◦ Manage the Training Budget.

◦ Provide Evidence to Management and other Stakeholders.

◦ Build Trust and Respect for Trainings.

◦ Earn the Recognition from Senior Management.

◦ Identify Areas for Improvement.

◦ Provide Data Requested by Senior Management.

Major Hurdle in Cost-Benefit Analysis

 It is Difficult to Measure the Returns Immediately after Training.

 Chances of trained people leaving the company.

 Outcomes can be attributed to other factors.

Cost-benefit criteria help in:

i. Relating training policy to organizational goals.

ii. Estimating the cost of providing training.

iii. Deciding whether a particular training activity is worth sponsoring at all.

Measuring Benefits of Training

 Direct- Time Utilization, Reduced Number of Rejections, etc.

 Indirect- Improvement in Punctuality, Discipline, etc.

 Long term benefits- Improved Human Relations, Better Communication Ability, Teamwork etc

ROI Calculation Steps:


1. Create an ROI Measurement Plan

2. Collect Data

3. Isolate the Effects of Training

4. Convert Data to Monetary Value

5. Calculate ROI

Training Cost

 Fixed Cost

 Supportive cost- Trainers, Learners

 Opportunity cost

Training Costs

 Developmental Costs.

 Direct Costs.

 Indirect Costs.

 Participant’s Compensation.

 Evaluation Costs.

Training Cost

 Development costs

 Program materials

 Instructor/facilitator costs

 Facilities costs

 Travel/lodging/meals

 Participant salaries and benefits

 Administrative/overhead costs

 Evaluation costs

Costs, Budgets, Accounting


 Quantifying ROI means accounting for all the costs of the program.

◦ Fixed costs: independent of the number of participants.

◦ Variable costs: Dependent on the number of participants.

◦ There are costs at every step – make sure to account for them all.
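A minimal sketch of that fixed/variable split (all figures are hypothetical):

```python
def total_training_cost(fixed_cost: float,
                        variable_cost_per_participant: float,
                        participants: int) -> float:
    """Total cost = fixed costs (independent of head count)
    plus variable costs that scale with the number of participants."""
    return fixed_cost + variable_cost_per_participant * participants

# Hypothetical figures: Rs 50,000 fixed (development, facilities)
# and Rs 2,000 per participant (materials, meals):
print(total_training_cost(50_000, 2_000, 25))  # 100000
```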

Methods and Instruments

• Questionnaire

• Surveys

• Tests

• Interviews

• Focus groups

• Observation

• Performance records

• Knowledge and skills testing

• Program follow up

• Project assignments

The Four Major Categories of Hard Data

Hard data are the primary measurements of improvement.

Characteristics of Hard Data

 Objectively Based

 Easy to Measure and Quantify

 Relatively easy to Assign Monetary Values

 Common Measures of Organizational Performance

 Very Credible with Management

Examples of Hard Data

OUTPUT
• Units Produced
• Tons Manufactured
• Items Assembled
• Money Collected
• Items Sold
• Forms Processed
• Loans Approved
• Inventory Turnover
• Patients Visited
• Applications Processed
• Students Graduated
• Tasks Completed
• Output Per Period
• Productivity
• Work Backlog
• Incentive Bonus
• Shipments
• New Accounts Generated

TIME
• Equipment Downtime
• Overtime
• On-Time Shipments
• Time to Project Completion
• Processing Time
• Supervisory Time
• Break-in Time for New Employees
• Training Time
• Meeting Schedules
• Repair Time
• Efficiency
• Work Stoppages
• Order Response
• Late Reporting
• Lost Time Days

COSTS
• Budget Variances
• Unit Costs
• Cost by Account
• Variable Costs
• Fixed Costs
• Overhead Cost
• Operating Costs
• Number of Cost Reductions
• Project Cost Savings
• Accident Costs
• Program Costs
• Sales Expense

QUALITY
• Scrap
• Waste
• Rejects
• Error Rates
• Rework
• Shortages
• Product Defects
• Deviation from Standard
Major Categories of Soft Data

Soft data are typical measures of improvement, e.g. work habits and new skills.
Characteristics of Soft Data

 Subjectively Based in Many Cases

 Difficult to Measure and Quantify Directly

 Difficult to Assign Monetary Values

 Less Credible as a Performance Measure

 Behaviorally Oriented

Benefits and Soft Skills

 Change in:

◦ Attitude, work climate, leadership, teamwork.

◦ We desire these changes because they ultimately affect productivity.

◦ Allow time for change in attitude or behavior, then measure these changes and report
qualitatively.

◦ Allow time for change in productivity, then measure for data and report quantitatively.

The ROI Process Provides - Six Types of Data

• Reaction and Satisfaction

• Learning

• Application and Implementation

• Business Impact

• Return on Investment

• Intangible Measures

Common Intangible Variables Linked with Training and Performance Improvement Initiatives

 Job satisfaction

 Organizational commitment

 Work climate

 Employee complaints

 Employee grievances
 Employee stress reduction

 Employee tenure

 Employee absenteeism

 Employee turnover

 Employee lateness

 Innovation

 Customer satisfaction /dissatisfaction

 Community image

 Investor image

 Customer complaints

 Customer response time

 Customer loyalty

 Teamwork

 Cooperation

 Conflict

 Decisiveness

 Communication

Models

 Benefit/Cost Ratio

BCR = Program Benefits / Program Costs

 Return on Investment

ROI (%) = (Program Benefits − Program Costs) / Program Costs × 100

Benefit/Cost Ratio Example

 Data entry clerks’ average wage: Rs 130/hr.


 Five hours per week were spent correcting errors before training.

 After Training 20 percent less time spent correcting errors.

 40 clerks.

 No of weeks-1

 Total Training Cost- Rs 12000

 Calculate Benefit / Cost Ratio

 Ans- 0.433

Now with ROI %

 Data entry clerks’ average wage: Rs 130/hr.

 Five hours per week were spent correcting errors before training.

 After Training 20 percent less time spent correcting errors.

 40 clerks.

 No of weeks- 4

 Total Training Cost- Rs 12000

 Calculate ROI %

 Ans- 73.33%
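Both worked examples can be checked with a short sketch (the function names are my own; the wage, hours, head count and cost figures come from the examples above):

```python
def weekly_saving(clerks: int, wage_per_hour: float,
                  error_hours_per_week: float, reduction: float) -> float:
    """Money saved per week = hours no longer spent correcting errors x wage."""
    return clerks * error_hours_per_week * reduction * wage_per_hour

def bcr(benefit: float, cost: float) -> float:
    """Benefit/Cost Ratio = programme benefits / programme costs."""
    return benefit / cost

def roi_pct(benefit: float, cost: float) -> float:
    """ROI (%) = (benefits - costs) / costs x 100."""
    return (benefit - cost) / cost * 100

# One week of savings against the Rs 12,000 programme cost:
benefit_1wk = weekly_saving(40, 130, 5, 0.20) * 1   # Rs 5,200
print(round(bcr(benefit_1wk, 12_000), 3))           # 0.433

# Four weeks of savings against the same cost:
benefit_4wk = weekly_saving(40, 130, 5, 0.20) * 4   # Rs 20,800
print(round(roi_pct(benefit_4wk, 12_000), 2))       # 73.33
```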

 Training Investment Analysis

Increased sales = Additional sales per employee × Revenue (or margin) per sale × No. of employees

Revenue produced by training = Revenue after training − Revenue without training

Total return on training investment = Revenue produced by training − Cost of training
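As a minimal sketch of the training investment analysis above (the revenue figures are hypothetical):

```python
def revenue_produced_by_training(revenue_after: float,
                                 revenue_without: float) -> float:
    """Revenue attributable to training = revenue after - revenue without."""
    return revenue_after - revenue_without

def total_return_on_training(revenue_produced: float,
                             cost_of_training: float) -> float:
    """Total return = revenue produced by training - cost of training."""
    return revenue_produced - cost_of_training

# Hypothetical: revenue rises from Rs 9,00,000 to Rs 10,50,000
# after a programme costing Rs 60,000:
gain = revenue_produced_by_training(1_050_000, 900_000)  # 150000
print(total_return_on_training(gain, 60_000))            # 90000
```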
Data Analysis & Statistical Methods.

Training Evaluation:

 Why to evaluate?

 When to evaluate?

 What to evaluate?

 How to evaluate?

Data Analysis & Statistical Methods.

• Statistics is the method of analysing quantitative data obtained on training results from groups of trainees.

• A statistic is a measure computed from a sample.

• The main aim of statistics is to reduce large quantities of data to manageable and
understandable form.

Introduction:
Some Basic concepts

 Statistics is a field of study concerned with

i) Collection, organization, summarization and analysis of data.

ii) Drawing of inferences about a body of data when only a part of the data is observed.

 Statisticians try to interpret and communicate the results to others.

Role of statisticians

 To guide the design of an experiment or survey prior to data collection

 To analyze data using proper statistical procedures and techniques

 To present and interpret the results to researchers and other decision makers

A population:
It is the largest collection of values of a random variable for which we have an interest at a particular
time.

For example:

The weights of all the employees of the Organisation.

Populations may be finite or infinite.

Sample:

It is a part of a population.

For example:

The trainees who are having post graduate qualification.

Data

 The raw material of Statistics is data.

 We may define data as figures. Figures result from the process of counting or from taking a
measurement.

 For example:

 - When Training Staff counts the number of Trainees (counting).

 - When a Trainer evaluates Trainees by conducting test (measurement)

Sources of data: records, surveys (comprehensive or sample), and experiments.
Sources of Data:

We search for suitable data to serve as the raw material for our investigation.

Such data are available from one or more of the following sources:

i) Routinely kept records.

For example:

- HR department keeps records of all employees related to education, skill etc.

- Training Department keeps record related to Trainings imparted.

ii) External sources.

The data needed to answer a question may already exist in the form of published reports, commercially available data banks, or the research literature, i.e. someone else has already asked the same question.


iii) Surveys:

The source may be a survey, if the data needed is about answering certain questions.

For example:

If the administrator of a Training Institution wishes to obtain information regarding the mode of transportation used by Trainees to come to the Institution, then a survey may be conducted among the Trainees to obtain this information.

iv) Experiments.

Frequently the data needed to answer a question are available only as the result of an experiment.

For example:

If a Training Institution wishes to know which of several strategies is best for maximizing attendance of Trainees, they might conduct an experiment in which the different strategies are tried with different Training Programmes.

Important Characteristics of Data

1. Center: A representative or average value that indicates where the middle of the data set is located.

2. Variation: A measure of the amount that the values vary among themselves.

3. Distribution: The nature or shape of the distribution of data (such as bell-shaped, uniform, or skewed).

4. Outliers: Sample values that lie very far away from the vast majority of other sample values.

5. Time: Changing characteristics of the data over time.

A variable:

It is a characteristic that takes on different values in different persons, places, or things.

For example:

- heart rate,

- the heights of adult males,

- the weights of preschool children,


Types of variables

Variables may be quantitative (continuous or discrete) or qualitative (nominal or ordinal).
Quantitative Variables

It can be measured in the usual sense.

For example:

- the heights of workers,

- the weights of athletes,

- the ages of patients seen in a dental clinic.

Qualitative Variables

Many characteristics are not capable of being measured. Some of them can be ordered or ranked.

For example:

- classification of people into socio-economic groups,

- social classes based on income, education.

A discrete variable

is characterized by gaps or interruptions in the values that it can assume.

For example:

- The number of trainees who attended the training course.

A continuous variable

can assume any value within a specified relevant interval of values assumed by the variable.

For example:

- Height,

- weight,

No matter how close together the observed heights of two people, we can find another person whose
height falls somewhere in between.

Measure of Central Tendency.

Mean – The arithmetic mean is the sum of all the scores divided by the number of scores.

The mean is affected by outliers.

Median – The median is the midpoint of the distribution of scores. The median is not affected by outliers.

Mode – The mode is the most frequent score in the distribution. It is the most commonly occurring value.
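A quick illustration of the three measures using Python's statistics module (the scores are hypothetical):

```python
import statistics

# Hypothetical post-test scores for ten trainees:
scores = [55, 60, 60, 65, 70, 70, 70, 75, 80, 95]

print(statistics.mean(scores))    # arithmetic mean: 70
print(statistics.median(scores))  # midpoint of the sorted scores: 70
print(statistics.mode(scores))    # most frequent score: 70

# An outlier pulls the mean upward but leaves the median untouched:
print(statistics.median(scores + [200]))  # still 70
```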

Measure of Dispersion

• The Range.

• The Mean deviation.

• The Standard Deviation.

• The Variance.
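The four measures can be computed as follows (hypothetical scores; population formulas are used for the variance and standard deviation):

```python
import statistics

scores = [55, 60, 65, 70, 70, 75, 80, 85]

data_range = max(scores) - min(scores)   # 85 - 55 = 30
mean = statistics.mean(scores)           # 70

# Mean (absolute) deviation from the mean:
mean_deviation = sum(abs(x - mean) for x in scores) / len(scores)  # 7.5

variance = statistics.pvariance(scores)  # population variance: 87.5
std_dev = statistics.pstdev(scores)      # square root of the variance

print(data_range, mean_deviation, variance)
```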

Scales of Measurement

 Measurement is the process of assigning values to observations obtained in the evaluation of a training programme. There are four basic scales of measurement.

1) Nominal Scales – The term nominal pertains to the act of naming. Nominal scales, often called categorical scales, are the simplest type, e.g. sex, or a yes/no survey response.

Nominal scales provide little quantitative information.


2) Ordinal Scales – The term ordinal refers to order. This suggests a quantifiable ordering from most to least along a variable. We can rank trainees from highest to lowest on a dimension of training, but the scale does not tell how far apart each observation is from the next.

e.g. Course Grades A, B, C…

3) Interval Scales – Interval scales provide far more information about observations and can be mathematically manipulated with far greater confidence and precision than nominal and ordinal scales. Intelligence tests are one good example of an interval scale.

4) Ratio Scales – Ratio scales possess the attributes of ordinal and interval scales, and in addition have a true zero point, e.g. height, weight, or time taken to complete a task.
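One standard way to see the interval/ratio distinction (an illustration, not from the course text) is temperature: Celsius is an interval scale with an arbitrary zero, while Kelvin has a true zero, so only Kelvin supports meaningful ratios:

```python
# Interval vs ratio: Celsius has an arbitrary zero point, so the claim
# "40 °C is twice as hot as 20 °C" is not meaningful. Kelvin has a true
# zero, so ratios of Kelvin values are physically meaningful.
c1, c2 = 20.0, 40.0
k1, k2 = c1 + 273.15, c2 + 273.15

ratio_celsius = c2 / c1   # 2.0 -- an artefact of the arbitrary zero
ratio_kelvin = k2 / k1    # ~1.07 -- the physically meaningful ratio
```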

Sampling – A sample is a subset of a population. The characteristics of a population's scores are called parameters, and the characteristics of sample scores drawn from a larger population are called statistics.

Types of Samples –

A. Probability Sampling –

i. Simple Random Sampling –

ii. Stratified Sampling – The population is divided into strata, from which random samples are
drawn. e.g. Men and Women.

iii. Cluster Sampling – It is successive random sampling of units, or sets and subsets. This is useful when the target population is dispersed throughout a large geographic region.

iv. Systematic Sampling – In systematic sampling the first sample element is randomly chosen from the numbers 1 through k, and subsequent elements are chosen at every k-th interval.
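The probability designs above can be sketched with Python's `random` module; the population of trainee IDs and the stratum sizes are hypothetical:

```python
import random

random.seed(42)  # reproducible illustration

population = list(range(1, 101))  # hypothetical trainee IDs 1..100

# Simple random sampling: every element has an equal chance of selection.
simple = random.sample(population, 10)

# Systematic sampling: a random start among the first k elements,
# then every k-th element after that.
k = 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: random samples drawn within each stratum
# (hypothetical strata of 60 men and 40 women, sampled proportionally).
men, women = population[:60], population[60:]
stratified = random.sample(men, 6) + random.sample(women, 4)
```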

B) Non-Probability Sampling-

i. Quota Sampling – In quota sampling, knowledge of strata of the population (e.g. sex, region, etc.) is used to select sample members that are representative, typical and suitable for a certain purpose.

ii. Purposive Sampling – It is characterised by the use of judgement and a deliberate effort to obtain representative samples by including typical groups of trainees in the samples.

iii. Accidental Sampling – Accidental sampling is the weakest form of sampling. In this, one takes the available sample at hand.

Application of Statistical Methods of Evaluation –

i. Statistical evaluation of group differences before and after training.

ii. Measures of relationship or correlation between two sets of data (before–after, experimental–control).

iii. Graphic methods for displaying differences between two groups over a period of time.
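A minimal sketch of the first two methods, using hypothetical before/after scores for five trainees and the Pearson correlation computed by hand:

```python
from math import sqrt

# Hypothetical before/after scores for five trainees.
before = [52, 60, 55, 70, 63]
after = [58, 66, 59, 78, 69]

# Group difference: mean gain from before to after training.
mean_gain = sum(a - b for a, b in zip(after, before)) / len(before)

# Pearson correlation between the two sets of scores.
n = len(before)
mb, ma = sum(before) / n, sum(after) / n
cov = sum((b - mb) * (a - ma) for b, a in zip(before, after))
r = cov / sqrt(
    sum((b - mb) ** 2 for b in before) * sum((a - ma) ** 2 for a in after)
)
```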

Graphic Display Techniques –

One of the most appealing features of modern statistics.

“A picture is worth a thousand words.”

i. Bar Chart – The bar chart is often used to compare relative amounts of the same trait, such as intelligence or achievement, possessed by two or more groups.

ii. Pie Diagram – The pie diagram is useful to illustrate proportions of the total in a striking way.

iii. Line Graph – A line graph is used to depict age-progress curves or trend lines.

iv. Histogram – Frequency distributions of metric variables can be portrayed effectively in histograms.
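Even without a charting library, the idea behind a bar chart can be sketched as a plain-text rendering; the group names and mean scores below are hypothetical:

```python
# A plain-text bar chart comparing hypothetical mean scores of two groups.
groups = {"Trained": 82, "Control": 64}

def bar_chart(data, scale=10):
    """Render one '#' per `scale` score points, one row per group."""
    rows = []
    for name, value in data.items():
        rows.append(f"{name:>8} | {'#' * round(value / scale)} {value}")
    return "\n".join(rows)

print(bar_chart(groups))
```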
Paper VII

i) Tests and Evaluation
ii) Report Writing

i)Tests and Evaluation

A test is essentially an objective and standardized measure of a sample of behavior.

Test –

 Sample of behavior

 Sample obtained under standardized condition.

 There are established rules for scoring, i.e. obtaining quantitative information from, the behavior sample.

Types of tests

1. Tests of Performance- Multiple choice test, Job performance evaluation etc.

2. Behavior Observation – Observation of social interaction of Manager with Union.

3. Self Report – The trainee expresses his or her own feelings, beliefs, interests and attitudes.

Tests can be classified as:

1) In terms of attributes being measured-

Ability, Intelligence, Aptitude, etc.

2) Nature of response of Trainee-

Verbal vs Performance tests

Oral & Written tests

3) Mode of administration-

Individual vs Group tests

4) Time allotted for completion of test-

Speed vs Power tests.


In a speed test, time is the constraint, whereas in a power test the items are arranged in order of increasing difficulty.

Steps in test construction

1) Item selection-

 To determine what domain and what stage of the training evaluation the test responses will
represent.

 What a given test is expected to produce, e.g. a test for managers would include items on decision-making and problem-solving skills.

 Initially, more items than needed are selected.

 Items are finalized on the basis of validity.

2) Item formats-

 Open-ended or Restricted – Restricted items are forced-choice, e.g. True/False; open-ended items allow more projective responses.

 Objective or Projective – Objective items include multiple choice. Projective items are deliberately vague and ambiguous; an interpretive system is a must to derive meaningful scoring, e.g. impact analysis of organisational effectiveness in terms of KRAs and TQM.

3) Standardization-

a) Standard Administration

b) Standard Scoring
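Standard scoring amounts to applying one fixed answer key and one fixed rule identically to every response sheet; a hypothetical multiple-choice sketch:

```python
# Standard scoring: one fixed answer key and one fixed rule applied
# identically to every trainee (the key and answers are hypothetical).
ANSWER_KEY = ["b", "d", "a", "c", "b"]

def score(responses, key=ANSWER_KEY):
    """One point per answer matching the key; anything else scores zero."""
    return sum(1 for given, correct in zip(responses, key) if given == correct)

score_1 = score(["b", "d", "a", "a", "b"])  # 4 correct
score_2 = score(["a", "d", "a", "c", "c"])  # 3 correct
```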

Characteristics of tests

1. Reliability – The reliability index is the degree to which a test yields consistent results on testing and retesting.

2. Validity – The validity index refers to the degree to which a test measures what it purports to measure.

3. Cross-validation – Independent determination of the validity of the entire test is known as cross-validation. It is computed on a different sample of persons.

4. Norms – A norm is normal or average performance. No single set of norms can be used with all types of trainees and training groups.
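Norms are often expressed as percentile ranks, i.e. the percentage of the norm group scoring below a given trainee; a sketch with hypothetical norm-group data:

```python
# Percentile-rank norm: the percentage of the norm group scoring below
# a given score (the norm-group data here is hypothetical).
norm_group = [45, 50, 55, 60, 65, 70, 75, 80, 85, 90]

def percentile_rank(score, norms):
    """Percentage of norm-group scores strictly below `score`."""
    below = sum(1 for s in norms if s < score)
    return 100 * below / len(norms)

rank = percentile_rank(72, norm_group)  # 6 of 10 norm scores fall below 72
```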

Critical evaluation of tests


Tests must be selected for the specific purpose of assessing training results and for the training situation in which they are to be used.

Tests have to be evaluated for:

objectivity, clarity of directions, ease of application, convenience of scoring, adequacy of norms, cost, etc.

Evaluating Training function/ staff

1) Evaluating the Individual Trainer.

a) Accountability for standard of effective performance.

b) MBO – By setting KRA against which performance is measured.

c) Economic impact the Trainer makes on the Organisation.

d) Solving human performance problems.

e) Reputation of Trainer within the Organisation.

f) External reputation of Trainer.

2) Evaluating the total staff

1. Accomplishment of departmental objectives

2. Economic accomplishment of the department

3. Effective use of resources

4. Reputation of department

EVALUATION AUDIT

1. Are procedures established for continual evaluation and quality control of training even if you
are not present on the training scene?

2. Is evaluation focusing on results rather than on the effort expended in conducting training?

3. Is evaluation comprehensive enough to cover methods, trainees’ progress, attitudes, degree


of job behavior change, knowledge gained and its impact on the group and the organization as
a whole?


4. Is collection of data and interpretation of results done by personnel qualified for the job?

5. Are evaluators trained in the techniques of observation and interview?

6. Is evaluation an orderly and flexible process?

7. Is evaluation specific and not vague?

8. Is evaluation an aid to future planning? Is it directive and constructive, and not merely conclusive?

9. Do trainees participate in the evaluation of their own progress? Are evaluation procedures
reviewed and revised periodically?


10. Are the tests/examinations used derived from training objectives, and are they consistent with the coverage of inputs?

11. Are other methods like observation, ratings, opinion surveys, interviews used to supplement
tests?

12. Are scoring, grading and reporting practices standardized, economical, practical and acceptable (SEPA)?

13. Are the results used:

 To interpret quality of instructional material


 To estimate effectiveness of the tests in measuring trainee’s achievement

 To provide trainers with data needed to improve the training

 To identify group/ individual who need close guidance and coaching

14. Is the evaluation exercise worth the time, money and effort?

ii) REPORT WRITING

COVER PAGE

 Name of Organization
 Title of Training

 Time duration of Training

 Name of Evaluator

 Report Date

CONTENTS

 Executive Summary

 Introduction

 Participant’s Reaction

 Learning

 Job Impact

 Business Impact

 Return on Training Investment

 Additional Information

 Conclusion and Recommendations

 Acknowledgements

 References

1) EXECUTIVE SUMMARY

 Summarise the main points from the evaluation, including:

 Purpose

 Objectives

 Methodology

 Main findings

 Key recommendations

2) INTRODUCTION

 Background

 Description of training

 Training Objectives

 Methodology and scope of the evaluation report

 Training participants - Overview

 Number starting/completing

 Job roles/departments

 Educational background by highest qualification

 Age

 Gender

 Background

3) PARTICIPANT’S REACTION

 Overview of evaluation results and key issues identified 

Summarise/insert data from your Participant’s Reaction evaluation(s), e.g. statistics, graphs and/or
lists of comments.

 Additional information and comments

4) LEARNING

 Overview of evaluation results and key issues identified

Summarise/insert data from your Learning evaluation(s), e.g. statistics, graphs and/or lists of
comments.

 Additional information and comments

5) JOB IMPACT

 Overview of evaluation results and key issues identified

Summarise/insert data from your Job Impact evaluation(s), e.g. statistics, graphs and/or lists of
comments.

 Additional information and comments


Eg. Summarise key factors in determining outcomes such as barriers to and promoters of the
application of learning in the workplace. Identify other factors which may have influenced
performance outcomes.

6) BUSINESS IMPACT

 Overview of evaluation results and key issues identified 

Summarise/insert data from your Organisational Results evaluation(s), e.g.:

 Impact on specific performance indicators/outcomes

 Benefits that can be ascribed a financial value

 Other benefits identified

 Overall assessment of impact 

 Other comments from evaluation respondents 

 Additional information/comments

Eg. Summarise key factors in determining outcomes such as barriers and promoters of performance
improvement. Identify other factors which may have influenced performance outcomes.

7) RETURN ON TRAINING INVESTMENT

 Overview and key issues identified

Summarise/insert data from your ROTI Report(s), for example:

 Training Costs and Financial Benefits

 Return on Investment percentage

 Benefit to Cost Ratio

 Payback Period

 Additional information/comments

Eg. Summarise any limitations on data and identify other factors which may have influenced ROTI
outcomes.
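The three ROTI figures named above follow directly from cost and benefit estimates; a sketch with hypothetical figures (assuming, for the payback calculation, that benefits accrue evenly over the year):

```python
# Hypothetical figures; the formulas are the standard ROI, benefit-cost
# and payback definitions listed in the report outline.
training_cost = 40_000.0    # total programme cost
annual_benefit = 100_000.0  # financial benefit attributed to the training

roi_percent = (annual_benefit - training_cost) / training_cost * 100
benefit_cost_ratio = annual_benefit / training_cost
payback_months = training_cost / (annual_benefit / 12)  # benefits spread evenly
```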

8) ADDITIONAL INFORMATION

 Add any further information that you feel may be relevant to the impact evaluation.

9) CONCLUSION AND RECOMMENDATIONS


 Consider the broader lessons from your evaluation and list recommendations on the basis of
the findings. Recommendations should identify what should be maintained or expanded, and
where changes to practices and policies seem necessary.

 Also consider the impacts in relation to the costs and benefits of the investment.

10) Acknowledgements

11) References
