Evaluation methods
1. Evaluation methodology
Methodology is no longer a concern per se; rather, it is considered an element that
has to contribute to transparency, and hence to the use that will be made of the
evaluation results. Methodology has become one of the arguments to convince users of
the reliability and validity of the information provided.
Two elements are important in the definition of an evaluation methodology: the approach
to be followed and the methods to collect the necessary data. These two issues will be
dealt with subsequently.
2. Evaluation approach
The evaluation approach concerns the overall strategy that is followed to conduct the
evaluation study. This strategy is derived from the desired type of evaluation results.
Strategy in this sense is to be understood as the set of basic principles that govern the
entire evaluation exercise. If highly reliable information is required, the approach will have
to focus more on objectivity and significance. On the other hand, if more analytical depth
is required a different strategy may be followed. In the everyday reality of project
implementation and supervision, a mixed approach somewhere in-between these
extremes will be adopted.
Evaluation approaches have changed over the past decades. Four generations of
evaluation approaches can be distinguished. These generations reflect changes in the
role of the evaluator.
MDF copyright 2013
www.mdf.nl
The primary responsibility of the evaluator is to measure progress and compare it with the
intentions as laid down in the plan. Evaluation is then limited to the more tangible aspects
of the project.
The role of the evaluator is to describe the process that takes place and the overall
environment in which the project operates, and to indicate strong and weak points. The
original project plan serves as input for the evaluation, but does not limit its scope.
The evaluator is requested to give a judgement on the project, based on the original plan,
and on his/her knowledge and experience.
In this view, the legitimate interests of the different parties concerned with the project
determine its ultimate success. The role of the evaluator is to stimulate a discussion
among the various stakeholders of the project. This discussion should feed the learning
process, and contribute to the success of the project for all stakeholders concerned. There
is an element of negotiation, but the problem for the evaluator is that his/her mandate to
negotiate is unclear.
3. Data collection methods
The selection of the most appropriate method for data collection is an important issue in
every evaluation. Decision-makers need information that is relevant, timely, accurate and
usable. A wide variety of data collection methods exists, each method with its particular
uses, advantages and disadvantages. The following is a list of possible data collection
methods, with a brief indication of the strong and weak points of each method.
Before selecting the methods to be used the evaluator has to ask the following questions:
1. Who is the information for and who will use the findings of the evaluation?
2. What kinds of information are needed?
3. How is the information to be used? For what purposes is evaluation being done?
4. When is the information needed?
5. What resources are available to conduct the evaluation?
Answers to these questions will determine the kinds of data that are appropriate in a
particular evaluation.
The challenge in evaluation is getting the best possible information to the people who
need it and then getting those people to actually use the information in decision making.
The type of evaluation to be carried out (and other factors) will determine whether the
accent is on quantitative or qualitative information.
Qualitative methods permit the evaluator to study selected issues, cases, or events in
depth and in detail. The fact that data collection is not constrained by predetermined
categories of analysis contributes to the depth and detail of qualitative data. Qualitative
methods produce a lot of detailed information about a relatively small group of people and
cases. Qualitative data provide depth and detail through direct quotation and careful
description of programme situations, events, people etc. This information is collected as
open-ended narrative, without attempting to fit programme activities or people's
experiences into predetermined, standardised categories such as the response choices
that constitute typical questionnaires or tests. Qualitative responses are often long,
detailed, and variable in content; analysis may be difficult because responses are neither
systematic nor standardised. Qualitative data typically include:
- quotations
- open-ended responses on questionnaires
- observations and/or observational description
- participation
- integrating observation and interviewing skills
Below, three ways of data collection are described: case studies, tracer studies and rapid
appraisal.
Rapid appraisal relies on the combination of different techniques (like secondary data
collection, (group) interviews, visual observation and measurement). The advantage of
rapid appraisal is that it indeed leads to rapid results. A problem is that bias creeps in
very easily, and that it has to be done by highly experienced, and therefore usually
expensive, professionals.
Chambers mentions the following basic principles for rapid appraisal:
- off-setting biases
- being unimportant
- listening and learning
- multiple approaches
Annet Lingen (1992) mentions the following methods for RRA/PRA, in addition to
Chambers:
1. Review of secondary data: collection and review of existing data and other
information: official records, survey documents, (un)published studies etc.
5. Folk songs, stories and poetry: these may reveal norms and values about, for
example, the roles of men and women and other issues
Access and control profile: records women's and men's access to and control over
resources and benefits.
7. Group discussions/workshop
Brainstorming, analysis, presentation or discussion sessions in the field or meeting
room with members of the target-group:
Instruments to be used:
- verbal presentation of collected materials
- visual presentation of collected materials
- aerial photographs
- critical incident analysis
- cartoons
- plays, videos, puppet shows
- role play
9. Wealth ranking: this technique uses the perception of informants to rank households
within a village according to wealth.
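As an illustrative sketch only (not part of the original technique description), the aggregation step of wealth ranking can be mimicked by averaging each household's rank position across informants; the household names and rankings below are invented:

```python
from statistics import mean

# Hypothetical rankings: each informant orders the same households
# from wealthiest (first) to poorest (last).
rankings = [
    ["Abebe", "Chala", "Bekele", "Desta"],
    ["Abebe", "Bekele", "Chala", "Desta"],
    ["Chala", "Abebe", "Bekele", "Desta"],
]

def aggregate_wealth_ranking(rankings):
    """Average each household's rank position across informants."""
    households = rankings[0]
    avg_rank = {h: mean(r.index(h) for r in rankings) for h in households}
    # Sort from wealthiest (lowest average rank) to poorest.
    return sorted(households, key=lambda h: avg_rank[h])

print(aggregate_wealth_ranking(rankings))
```

In field practice the ranking itself is done with cards or seeds by the informants; the sketch only shows how their separate rankings could be combined afterwards.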
Inductive analysis: an evaluation approach is inductive to the extent that the evaluator
attempts to make sense of the situation without imposing pre-existing expectations on the
programme setting. Evaluation can be inductive in two ways:
Within programs an inductive approach can start with questions about the individual
experience of participants:
Between programs the inductive approach looks for institutional characteristics that
make each setting a case unto itself.
At either level extrapolations may emerge, but the initial focus is on full understanding of
the individual cases before the unique cases are combined or aggregated.
Going into the field: going into the field means having direct and personal contact with
people in the program in their own environment. Qualitative evaluators question the
necessity and utility of distance and detachment, assuming that without empathy and
sympathetic introspection derived from personal encounters, the observer cannot fully
understand human behaviour. This is in sharp contrast to the style of evaluation that
emphasises detachment and distance, which are presumed to contribute to objectivity and
to reduce bias.
Quantitative data collection methods use standardised measures that fit various opinions
and experiences into predetermined response categories. The advantage of the
quantitative approach is that it measures the reactions of many people to a limited set of
questions, thus facilitating comparison and statistical aggregation of the data. Quantitative
measurements are concise, economical and easily aggregated for analysis; they are
systematic, standardised and easily presented in a short space.
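A minimal sketch of what such aggregation looks like in practice (the question and responses are invented for illustration): standardised answers tally directly into frequencies and percentages:

```python
from collections import Counter

# Hypothetical responses to one closed question
# ("How useful was the training?"), on a fixed response scale.
responses = ["very", "somewhat", "very", "not at all", "somewhat", "very"]

counts = Counter(responses)  # tally each predetermined category
total = len(responses)
for category in ["very", "somewhat", "not at all"]:
    share = counts[category] / total
    print(f"{category}: {counts[category]} ({share:.0%})")
```

Because every respondent answers within the same categories, such tallies can be compared across groups or aggregated across sites without further interpretation.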
3.3 Pitfalls
Chambers warns especially against the anti-poverty biases of quick-and-dirty appraisal by
urban-based professionals ("development tourism"):
i) Spatial: urban, tarmac and roadside biases: poor people are often out of sight of the
roadside etc.
ii) Project: linking in with project networks in rural areas where something is happening
to the neglect of non-project areas.
iii) Biases of personal contact: those met by the visitors tend to be the less poor, men
rather than women etc. In all cases the bias is against perceiving the extent of
deprivation.
iv) Dry season bias: urban-based professionals travel mostly in the dry season, whereas
the wet season is mostly the worst time of the year for the poorest people.
v) Biases of politeness and protocol: these might take a lot of time, which may prevent
the "professional", who is always short of time, from meeting the poorer people.
- Misleading replies
- Failure to listen
- Reinforced misperception and prejudice
- Visible against invisible
- Snapshots, not trends
Long and dirty/clean surveys: the longer the research takes, the longer and less usable
the report tends to be. Most long surveys do generate a lot of information, but many of
them do not generate much information in the early stages, or lengthy questionnaires are
never processed, etc.
Optimal ignorance: the importance of knowing what is not worth knowing. It requires great
courage to implement: it is easier to demand more information than it is to abstain from
demanding it.
Proportionate accuracy: especially in surveys, much of the data collected has a degree
of accuracy that is unnecessary. Orders of magnitude and directions of change are
often all that is needed or that will be used.
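As an illustration of proportionate accuracy (the figures are invented), reporting can be reduced to an order of magnitude and a direction of change:

```python
import math

def order_of_magnitude(x):
    """Power of ten below x, e.g. 4,700 households -> about 10**3."""
    return 10 ** math.floor(math.log10(x))

def direction_of_change(before, after):
    """Report only the direction, not the exact figures."""
    if after > before:
        return "up"
    if after < before:
        return "down"
    return "stable"

# Hypothetical survey figures: "about 1,000, and going up" may be
# all the precision a decision-maker needs or will use.
print(order_of_magnitude(4_700))        # 1000
print(direction_of_change(812, 1_034))  # up
```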
An evaluation report is not a scientific article or book. The description of the evaluation
methodology therefore has to be functional and avoid unnecessary detail and complexity.
The function of the methodological chapter in an evaluation study can be one of the
following:
2 The description of the methodology also plays a role in the accountability of the
evaluator to the evaluation principal. It confirms the opinion of the evaluation principal
that the selection of the evaluator was appropriate.
3 A sound methodology inspires awe. It makes the outcome of the study more
convincing, and the information can be used in an argument. The results are easier
to defend against criticism.
The description of the methodology should be geared towards one of those functions. If
the evaluation takes place in a charged atmosphere and is likely to be used in a
discussion where the participants are strongly opposed to each other, then the
methodological description should be more elaborate than would otherwise be the case.
References
Patton, Michael Quinn, How to use qualitative methods in evaluation, Sage Publications,
New York, 1987.
Lingen, Annet, Note on gender impact study (GIS), ISSAS, The Hague, 1992.