
Guidelines for Health Programme & Project
Evaluation Planning

Swiss Federal Office of Public Health
1997
Evaluation Unit
Federal Office of Public Health
CH-3003 Bern
Tel. +41 (0)31 323 87 61 or 66
Fax. +41 (0)31 323 88 05
E-mail: Marlene.Laeubli@BAG.admin.ch
Copyright

Only the Checklists may be photocopied without written permission.
Motto

I know of no safe depository of the ultimate powers of the society but the people themselves; and if we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion.
Thomas Jefferson, Letter to William Charles Jarvis, September 28, 1820

Evaluation is an investment in people and in progress.
(Egon Guba and Yvonna Lincoln, 1989)
Table of Contents

Introduction and Overview of Contents
Part 1: Introducing Evaluation
Part 2: Planning an Evaluation
Checklist 2.1 Assessing the Need for External Evaluation
Checklist 2.2 Planning Evaluation of a Health Project
Checklist 2.3 Self-Evaluation: Basic Elements to be Included
Checklist 2.4 Training and Continuing Education Courses: Self-Evaluation Basic Elements to be Included
Part 3: Commissioning and Monitoring Evaluation Contracts
Checklist 3.1 Drafting the Evaluation Mandate: An FOPH Checklist
Checklist 3.2 Drafting an Evaluation Proposal: The Evaluator's Checklist
Checklist 3.3 Assessing the Evaluation Proposal
Checklist 3.4 Assessing the Evaluator
Checklist 3.5 Monitoring the Evaluation Contract
Part 4: Assessing the Technical Evaluation Report
Checklist 4.1 What the Technical Evaluation Report Should Cover
Checklist 4.2 Assessing the Technical Evaluation Report
Part 5: Promoting and Using Evaluation Findings
Checklist 5.1 Identifying Key Messages, Key Target Groups and Appropriate Ways to Tell Them
Annexes:
1. References and Recommended Reading List
2. Glossary of Evaluation Terms
3. Characteristics of Conventional and Interpretive Approaches to Social Science Research and Evaluation
4. Evaluation Questions: An Example of Different Question Types
5. Guidelines for Developing or Assessing Medico-Social Training/Evaluation Projects. Swiss Federal Office of Public Health, 1995. Evaluation, Research and Training Section

N.B. A complete set of the Checklists is provided separately so that they can be photocopied and used in contract negotiations with external partners.
Foreword

The Swiss Federal Office of Public Health is committed to assuring a high standard of health for the country's population. Its work is continually assessed by measuring the overall impact on the health of the nation.
Corporate Philosophy of the Swiss Federal Office of Public Health (FOPH), 1992
Evaluation of its activities is an integral part of the corporate philosophy of the Swiss Federal Office of Public Health (Guiding Principles, 1992). Its objective is to help improve the planning and implementation processes.
To this end a special unit for evaluation was established within the Federal
Office of Public Health (FOPH) in 1992. Its main function is to ensure that the
FOPH's strategies, measures and activities in disease prevention, health promotion and health protection are evaluated on a continuous basis.
These Guidelines were written by Marlène Läubli Loud of the FOPH Evaluation Unit in collaboration with colleagues from this Unit. They are intended to help FOPH staff and project/programme partners plan an evaluation study effectively.
We hope that this manual will foster a stimulating and fruitful collaboration
between the FOPH and its external partners. We welcome any comments that
may help improve the ideas and checklists contained in these Evaluation
Guidelines.
Bern, 1996
The Evaluation Unit
Federal Office of Public Health
Acknowledgements

In designing and writing these guidelines, we are deeply indebted to the many people who, each in their own way, contributed to this final product.
Several organisations and individuals were asked to review the original drafts of these Guidelines, and we are grateful to everyone who helped turn the original ideas into the final version. First, our thanks go to our many colleagues in the Federal Office of Public Health Medical Division: the Evaluation, Research and Further Education Section; and the AIDS, Illegal and Legal Drugs Sections of the Health Promotion Division.
Our thanks, too, for the helpful comments received from our external partners: the Prevention Programme Evaluation Unit of the Social and Preventive Medicine Institute, University of Lausanne (UEPP de l'IUMSP, Lausanne); the Drugs Research Institute in Zurich (ISF); and the Institute for Social and Preventive Medicine, Zurich (ISPM).
Translating ideas into a published set of guidelines equally involves the efficient and creative help of graphic designers and desktop publishers. We are indebted to Satzart AG, Bern, who turned our basic word-processed document into its current polished, professional format.
To all the above, our many thanks.
Introduction and Overview of Contents

Why we have produced these Guidelines and what they are about
Aims and objectives of the Guidelines
Where to look for what information
Yet another book on evaluation principles and theory? No! That is certainly not what you will find in this book! These Guidelines have been written to help staff of the Federal Office of Public Health (FOPH) and its external partners reflect on what needs to be included when planning an evaluation of their projects/programmes. They are therefore of interest to the following:
staff of FOPH health promotion, prevention and campaign sections;
project/programme planners;
project/programme managers and implementers;
external evaluators.
These guidelines deal with the evaluation of the FOPH's prevention and health promotion activities and therefore address issues relevant to project and programme evaluation(1) only.
A reading list is provided for those interested in finding out more (Annex 1).
Readers are encouraged to discuss more complex needs with the Evaluation
Unit of the FOPH.
The manual provides practical information on how evaluations are best planned,
organised, commissioned and used. We accept that for reasons beyond our
control, putting these principles into practice will sometimes prove difficult. As
guidelines, however, they set the standards we should strive to attain.
(1) Refer to the Glossary of Evaluation Terms for definitions of Programme and Project. Despite differences, the evaluation principles outlined in this manual apply to both. For ease of reading, therefore, from now on we will use only the term Project for both.
The purpose is therefore:
to learn more about what evaluation is and how it can best be used;
to set standards for planning, commissioning, presenting and using evalua-
tion results;
to provide practical guidelines on how this can be done.
These Guidelines are therefore not intended to be a do-it-yourself evaluation
kit. Rather their purpose is to draw your attention to a range of questions and
issues to consider when planning project evaluation.
We have attempted to summarise the most pertinent aspects of what evaluation is about and how it should be used to yield the best results. For this reason, we urge you to read these Guidelines through from start to finish.
The contents are divided into the following five parts:
Part One An Introduction to Evaluation: covering the basic principles of what
evaluation is about, and why the FOPH wants its projects evaluat-
ed.
Part Two Planning an Evaluation: This part prompts readers to identify why
an evaluation may be needed and what they want the evaluation
to examine. It then provides some general guidelines on when and
how evaluation should be planned to come up with the required
answers. A set of Checklists is included as pointers to the appro-
priate questions and issues to be addressed.
Part Three Commissioning External Evaluation: This section prompts readers
to make a critical assessment of what needs to be addressed
before an external evaluation is commissioned. What items should
be included in the contract itself, and what can be expected dur-
ing the course of the evaluation are also discussed. Again, Check-
lists are included as reminders of the key points.
Part Four Assessing the Technical Evaluation Report: provides hints on what
the Evaluation Report should cover and how to judge its findings
and conclusions. Checklists are provided for assistance.
Part Five Promoting and Using the Evaluation Results: The final part of the
Guidelines deals with identifying the key messages, key audi-
ences to be informed and what we can do to use the results to
their best effect.
Annexes Here you will find:
A reference to the literature used to produce this manual.
(Those keen to find out more about the theoretical aspects of
evaluation are recommended to refer to this list.)
A glossary of evaluation terms as we understand and define
them.
A table comparing characteristics of different theoretical para-
digms relevant to evaluation.
An example of the different evaluation questions which can be
asked about health projects.
Guidelines for Developing or Assessing Medico-Social Trai-
ning/Education Projects (FOPH, 1995)
N.B. A complete set of the Checklists is provided separately so that they can be photocopied and used in contract negotiations with external partners.
Part 1: Introducing Evaluation

Contents of this Part

Some key concepts defined:
What is a Project?
What is a Programme?
What is the difference between Monitoring and Evaluation?
What is Project/Programme Evaluation?
What is Global Evaluation, Meta-evaluation and Meta-analysis?
External vs. Self-Evaluation
Why evaluate? The role of evaluation in planning and managing health projects and programmes
Principal Evaluation Methodologies
Linking project, programme and global evaluation into evaluation planning
The terminology used in the professional world of evaluation can be confusing
as there are often subtle differences in the way the same term is used and
applied. For this reason we have set out our own definitions in the Glossary of
Evaluation Terms (Annex 2) to help the reader understand what we mean by
each.
In this section we explain some of the key concepts used throughout these
Guidelines.
Evaluation helps you to take the
right decisions for your mission
What is a Project?

A Project consists of activities aimed at achieving pre-defined goals during a defined period of time. Often it is a means of testing an innovative approach or measure ultimately to be used as part of a wider programme.

What is a Programme?

We define a Programme as a collection of co-ordinated projects aimed at achieving a set of common objectives. A programme is also delimited in terms of time, scope and budget.

Evaluation and Monitoring

It is the inquisitive, probing aspect of evaluation that demarcates monitoring from evaluation. Monitoring implies keeping a watchful eye on what is happening through reference to data collected specifically for this purpose. It is the routine checking of progress against plan, often contributing much useful information, and is in fact an essential part of the evaluation process. Evaluation, however, suggests that there is an analytical, interpretative process involved when reviewing the data, which in turn may require the collection of additional, more evaluation-specific data. Evaluation requires a critical and detached look at what is happening. For example, a range of data can be regularly collected on health training courses: numbers of participants; types of professional groups who attend, such as nurses, doctors and social workers; number of training days per course; and so on. This will help us monitor the progress of the course and detect trends, but it won't give us much evaluative information about how well the course is received, why it attracts certain groups of participants and not others, its effectiveness and other such information. Once the monitoring data is reviewed, it is likely to throw up many questions for which other, more specific evaluative data will need to be collected. (Refer to the Glossary section for definitions of monitoring and evaluation.)

What is Project/Programme Evaluation?

So, as we have defined, evaluation implies examining, interpreting and analysing. It can take place at the same time as and throughout the life of a particular prevention project, training course etc., or towards the end, or even after it has ended. To a large degree, this depends on its purpose. (More about this in Part 2.)

For our purposes we have defined evaluation as the systematic collection and analysis of information, not necessarily routinely available, about a specific project or programme to enable its critical appraisal.

What is Global Evaluation?

This refers to the evaluation of a total package: the global aims and objectives, and the strategy, measures and actions taken towards attaining the global policy objectives. Analysis covers:
the strategy, measures, structural support and actions used to achieve the policy aims and objectives;
key environmental factors (the socio-political and economic context);
the end results on target populations and/or settings;
the inter-relationship between the above.

What is Meta-evaluation?

Meta-evaluation is the evaluation of others' evaluations. It provides a critical analysis of HOW WELL evaluation studies have been conducted and is therefore a form of quality control which the Federal Office of Public Health (FOPH) commissions from time to time. It should not be confused with meta-analysis.

What is Meta-analysis?

Meta-analysis is an overall analysis of the CONTENT of several studies on a similar topic/field of interest. To a large extent it is a synthesis and secondary analysis of others' work.

What's External Evaluation? And Self-Evaluation?

Not all interventions need, or indeed should, be professionally evaluated by outside specialists. Every project manager is expected to provide an internal self-evaluation of the project's development and results as part of his/her management tasks, for example to justify the use of public funding. Was the project implemented according to plan? Were the objectives attained? Were there differences between what the project set out to do compared with what clients really needed? These are just some of the questions which project managers need to address in their interim and final reports. Self-evaluation is therefore an integral part of good management, and project planners should include a budget of approximately 5% of total project costs for evaluation. Where needed, some
of this money can then be used to call upon professional evaluators to help set
it up (what to do and how to do it).
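As a quick illustration of the budgeting guideline above, the sketch below computes the self-evaluation reserve for a hypothetical project; only the 5% share comes from the text, the cost figure is invented:

```python
def self_evaluation_budget(total_project_cost: float, share: float = 0.05) -> float:
    """Amount to reserve for self-evaluation: approximately 5% of
    total project costs, per the guideline above."""
    return total_project_cost * share

# Hypothetical project costed at CHF 200,000: reserve CHF 10,000,
# part of which may pay professional evaluators to help set up
# the self-evaluation.
print(self_evaluation_budget(200_000))  # 10000.0
```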
A checklist of the main elements to be included in project self-evaluation is provided at the end of Part 2, together with a proposed list of headings for the reports.
Many of the questions to be addressed in self-evaluations are, of course, the
same as those needed for external evaluations. So what is the difference?
Essentially, the main differences between self and external evaluations are the
evaluators (those who do it) and the scope covered by the evaluation study.
Project staff conduct self-evaluations, but they are constrained by time since their principal task is to develop the project. The scope of what can be reviewed during self-evaluations is therefore likely to be limited. Managers' assessment of the project's relevance, progress and effects is also likely to be influenced by their interest in the success of the project itself. In other words, their closeness to the project may make it difficult for them to provide an objective appraisal of the situation. External evaluators, on the other hand, are specifically paid to devote time and resources to this work. As externals, they should have no vested interests (e.g. financial, professional or personal) in the project being evaluated, and thus no self-interest in its outcome. In theory, therefore, they should be more objective in their analysis.
Ideally, however, external and self-evaluations should be designed to help answer both project-specific and the more global evaluation questions. The latter, of course, will vary according to the specific prevention package and the level of prevention/target audience. Thus each level of evaluation should, in turn, contribute towards answering some particular and global questions. In this way the external global evaluation team should be able to synthesise and analyse the collated information from a range of project self- and external evaluations. The FOPH's Evaluation Unit should be consulted for more information.
Why evaluate?

Essentially we (the FOPH) require evaluation for four reasons:
1. to improve our strategies and actions so that they meet clients' needs;
2. to help us make informed decisions in future planning;
3. to clarify the options available; and
4. to account for the expenditure of public funds.
Evaluation is not new, but it is being given greater emphasis and more sys-
tematic attention as part of the drive within the Civil Service world-wide to
improve accountability and financial management in general. Historically, pro-
fessional programme evaluation developed in the USA during the 1960s. Its
major impetus came from the requirement for evaluation of the Great Society
Education Legislation in 1965. Similar demands were placed on other social re-
forms of the time.
With such ever-increasing demands, over the past three decades US evaluation
has moved from being a peripheral activity of academics, to a recognised pro-
fession and career with its own theories, standards and code of practice. A
similar trend can be traced in many other countries. The requirement for evalu-
ation within the Swiss Civil Service has been a much more recent, but growing
phenomenon. A national research programme (no. 27) was funded to examine,
test and improve evaluation methods and methodologies relevant to the study
of state policy, strategies and measures, and their effects.
The need to assess and evaluate actions taken by the FOPH has now become
one of its guiding management principles (see Foreword). Within the Medical
Division, a range of projects are funded which aim at preventing disease, and
improving the populations general health by promoting healthy lifestyles. As
managers of public funds, we are responsible for ensuring that this money is
well spent. We are therefore accountable not only to the FOPH and the government, but equally to taxpayers and, more particularly, the specific clients
for whom our activities are targeted. This means that we need to be assured
that appropriate systems are established to supply us with information which
can help us assess the development, acceptability, effectiveness and impact of
the projects we fund.
Principal evaluation methodologies

There are many views on how evaluation should be approached, ranging from quantitative systems analysis to qualitative case studies. There is not just one best way to evaluate health projects: some models are more appropriate than others depending on the evaluation's purpose and the questions to be addressed. The FOPH Evaluation Unit in fact routinely commissions a range of different evaluation approaches, each selected to suit a particular need. To a large degree, the choice of evaluation design depends on the purpose, and therefore the types of questions to be answered.
Generally speaking, however, there is a lack of consensus within the evaluation
community about which approach yields the most useful results. In general this
relates to the various philosophical assumptions held by evaluators, theoreti-
cians and practitioners. At one extreme there is a strong belief in the need for
"hard" data and statistical proof. This type of approach (sometimes referred to as the "conventional" approach) has its philosophical roots in logical positivism
and is characterised by quantitative measurement and experimental design to
test performance against objectives and a priori assumptions. Its strength is
that it can supply us with statistically measurable information from large popu-
lation samples on a limited number of predetermined items. We therefore can
gain a broad set of findings from which, it is argued, we can make generalis-
able conclusions.
The relevance and applicability of this approach to real social settings have,
however, been increasingly questioned (e.g. Guba and Lincoln, 1989; Patton,
1979, 1980, 1987; Parlett and Hamilton, 1974; etc.). Some of its major beliefs
have been found untenable within the philosophy of science. Proponents of an
alternative evaluation methodology emphasise the need to provide description and understanding of social phenomena in their natural setting, with no manipulation, control or elimination of situational variables. This type of approach is
grounded in phenomenology and adheres to the principles of inductive strate-
gies for developing theory from the data during the course of the research
(Glaser and Strauss, 1967). In contrast to deductive strategies based on testing
pre-conceived assumptions, key evaluation issues and hypotheses emerge
from intensive on-site, qualitative, case study investigation and are systemati-
cally worked out in relation to the data during the course of the research. This
model is often referred to as the interpretative approach.
These two models, conventional (or positivistic) evaluation and interpretative evaluation, are therefore theoretical paradigms (methodologies). Whilst the conventional paradigm largely employs quantitative methods (the tools, not the methodology), the interpretative approach mainly draws on qualitative methods. However, neither necessarily relies only on quantitative or qualitative methods, and a combination of both can be, and is, used by both paradigms. The distinction between these two paradigms is steeped in their philosophical tenets: deductive versus inductive. Characteristics of the two approaches are provided in table form as Annex 3.
Summary

In this section we have set out our working definition of what we believe evaluation to be in order to meet our specific purposes. We have indicated that
whilst there is an array of strategies which can be adopted for conducting evalu-
ations, each with its own merits and shortcomings, ultimately the appropriate-
ness of an approach is determined by the specific questions and issues we
want addressed.
Linking project, programme and global evaluation into evaluation planning

From our definitions you can see that each type of evaluation has its own focus. Project evaluation concentrates on the project, programme evaluation on the programme, and global evaluation on the more global issues. With careful planning, each can be linked one to the other. Programme evaluation can benefit from
analysing what has been learned from associated project evaluations, and glob-
al evaluations from the project and programme evaluations. Evaluation planning
should therefore be co-ordinated to include some general as well as more spe-
cific evaluation questions to ensure that each level of evaluation can contribute
to another.
With careful planning, each type of
evaluation can be linked one to the
other
Part 2: Planning an Evaluation

Contents of this Part

When should you evaluate? Integrating evaluation into planning
What can be evaluated?
Clarifying the purpose of the evaluation
Identifying evaluation audiences
Defining the evaluation questions
What to evaluate? scope and selection of information needs
Recognising what it can and cannot do: limitations of the evaluation
When feedback on the evaluation's progress and results is needed: timeliness of report-back
Budgeting
Mistakes to be avoided
Relevant Checklists

Assessing the need for external evaluation: the criteria
Planning an external evaluation
Internal, self-evaluation: basic elements to be included
Evaluation helps planning by foreseeing future problems.(1)

(1) Murphy's Law says that if things can go wrong, they will!
Having looked at what evaluation is and why we do it, in the following para-
graphs we have set out some points for you to consider when planning an
evaluation study. Our objective is to get timely yet comprehensive and inte-
grated information on how a project or programme is working. To do this we
have to work out what we need, why we need it and when feedback will prove
to be of most benefit.
All projects funded by the Federal Office of Public Health (FOPH) are contrac-
tually obliged to review, assess and report on their achievements. Each year
managers are asked to submit an Annual Report of their activities. This report
should be considered as a self-assessment of the project's progress.
Sometimes, however, an internal, or self-assessment is not enough. The
FOPH may decide that an external evaluation is needed. But how do we
determine whether an external evaluation is needed? What criteria should we
use for making this decision? Checklist 2.1 at the end of Part 2 helps you decide whether an external evaluation is warranted. If so, this should then be discussed with the FOPH's evaluation specialists at the earliest opportunity, preferably during the early stages of negotiating the intervention to be evaluated.
When to evaluate?

Ideally projects should incorporate some degree of monitoring and evaluation
right from their start-up. Some of the reasons for this are obvious:
everything is fresh in the mind of both project sponsors and managers, particu-
larly which key elements of the project will need to be carefully looked at;
the purpose and expectations of the evaluation can be defined and the
appropriate funds can be budgeted;
arrangements for collecting the necessary information from the relevant groups can be set up early on;
questions relating to the overall prevention strategy, measures and actions
can be included;
a good feedback procedure can be planned.
Since the aim of evaluation is to help improve the planning, development and
management of FOPH strategies and interventions, ideally it should provide us
with ongoing feedback throughout the various stages of a projects life: con-
ception, development and implementation. In this manner, it can prove instru-
mental in the following ways:
to reduce the risk of projects generating unrealisable or inappropriate objec-
tives and/or strategies;
to ensure that target populations receive relevant and effective health care
programmes;
to secure sustained political and financial support.
Evaluation: a cyclical feedback process

Evaluation should therefore be regarded as a cyclical process which involves
responding to a variety of clients on different aspects of the project throughout
its life cycle. Monitoring and evaluation feed into all stages of this development
cycle to help improve, re-orient or even stop an unsuccessful intervention. The
feedback process is therefore vital. Figure 2.1 below illustrates the key fea-
tures.
Figure 2.1 shows how ideally evaluation can help appraise the life cycle of a
project. A similar procedure can be put into effect for programme evaluation
(i.e. the evaluation of several projects which share matching goals). However,
as the project(s) are refined and repeated on a regular basis, there will be more
need for the regular monitoring of progress and achievements and less need
for intensive evaluation. Periodic evaluation should, however, take place to reassess what's happening.
[Figure 2.1: The Optimal Evaluation Feedback Process. The diagram places evaluation at the centre of the policy cycle (agenda-setting, formulation, implementation, accountability), assessing relevance, progress, efficiency and effectiveness. Evaluation tasks ("Know your customer!"): research (analysing issues, problems, context, past); development of solution options and decision criteria; formulating objectives and measures; monitoring and assessing process as well as outcomes and impact; controlling and accounting (cause and effect; cost-effectiveness); plus information dissemination and valorisation, networking (incl. lobbying), negotiating and co-ordinating. Stakeholders involved: politicians, public/taxpayers, media, target populations, fieldworkers, special interest groups, researchers and others.]
What can be evaluated? Aspects and focal points

Evaluation is principally concerned with four basic aspects of the intervention project, its:
relevance: Is the project doing the right thing, for the right people?
progress: Is it being put into effect as planned?
efficacy/effectiveness: Is it being done in an appropriate way, and is it having an effect?
efficiency: Is it doing it economically?
Evaluations can focus on the outputs of an intervention, for example, counting
products such as how many calls were received by an AIDS hotline over a spe-
cific period, or how many leaflets a project produced. Equally, they can concentrate on the processes, taking account of such elements as infrastructural and financial support, management and organisational structure, etc. Or they can look at the final results: the outcomes and impacts.
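Counting outputs of this kind is straightforward tallying. A minimal sketch, using an invented call log rather than real hotline data, might look like this:

```python
from collections import Counter
from datetime import date

# Invented call log for a hotline: one entry per call received.
calls = [
    date(1997, 1, 14), date(1997, 1, 30),
    date(1997, 2, 3), date(1997, 2, 21), date(1997, 2, 28),
    date(1997, 3, 9),
]

# Output indicator: number of calls received per month.
calls_per_month = Counter(d.strftime("%Y-%m") for d in calls)
for month in sorted(calls_per_month):
    print(month, calls_per_month[month])
```

Such counts monitor activity levels; as stressed earlier, they say nothing evaluative about why people call or how useful the service is to them.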
However, projects do not operate in a vacuum. Thus the context(2) in which it is set should also be examined to see if and how it influences the project's development (processes) and results.
Ideally, the evaluation might take a holistic approach and be able to take ac-
count of all these different focal levels.
If a holistic evaluation model cannot be adopted, the choice of evaluation focus should depend to a large degree on the developmental stage of the project itself. For instance:
During the initial stage (project planning and development), evaluation is often needed to help define the needs:
Who needs what, in what situation?
Once these have been identified, planning can then tackle which strategy and measures are likely to be effective:
What type of intervention is needed?
Via which methods can the audience best be reached, and which might be the most cost-effective way?
During the pilot stage, evaluation looks at how intervention methods and
activities are being put into effect, and with what results.
Is it reaching its audience, how many etc.?
Is it being put into effect as planned?
Is it an appropriate method for reaching its objectives?
What are the supports and constraints?
How could it be improved?
Do the aims and objectives need to be modified?
What unexpected outcomes are there?
What changes are taking place? How? Why?
An evaluation also provides a preliminary assessment of the project's immediate effects and the likelihood that it can ultimately meet its aims and objectives.
As the project becomes established, we pay more attention to such ques-
tions as
Is it meeting its aims and objectives? and
under what conditions?
How has it changed from the original plan?
What impact is the project having?
(2) The context of a project includes information on where it is located, the political and social climate in which it is set (i.e. is it conducive or hostile towards the project), the economic constraints, etc. An understanding of the context and its changes during the project's life, and its inter-relationship with the project's development, is essential. This helps audiences interpret evaluation findings, and judge whether the project's context is similar to others, or quite specific. It therefore reduces the likelihood of claiming wider applicability of findings than is otherwise justifiable.
Thus an analysis of the results, outputs and impacts grows in importance as the
intervention advances.
Whilst ideally evaluation should be planned to take place in parallel
with the development of a project, this does not always happen. At worst,
evaluations are tagged on as an afterthought and expected to make a
summative appraisal, in hindsight, of what has happened. The focus of the
evaluation should relate to the stage the project has reached.
Classifying the purpose of the evaluation
As we said above, evaluations can serve different purposes according to
different needs. We therefore need to consider the following:
– What is the project trying to achieve?
– What do we want to use the evaluation results for? e.g. to re-orient, to
modify or to focus the project's aims and objectives, or to make decisions
about the continuation of the project?
Aims and Objectives
The starting point in planning any evaluation, be it of an innovative or a
well-established project, is a review of its aims and objectives. In order to
understand what a project is doing and how, it is vital that the evaluator(s)
understand what the project is attempting to achieve. Because of the
importance of aims and objectives, we shall devote the next few paragraphs
to describing what they are. The aim(s) is a general statement about what
the project is attempting to achieve; in other words, it is the project's end
goal. Objectives, on the other hand, are much more specific and outline the
step-by-step processes needed to achieve the overall aim. They should be
realistic, and expressed in such a way that it will later be possible to tell
whether they have been achieved.
SMART objectives are those which are relevant, of course, but also:
– Specific,
– Measurable,
– Appropriate,
– Realistic, and achievable within a defined
– Time.
Far too often, objectives are not clearly set out, or are set out in so many
different places within the text that it proves difficult to pick them out or
rank them in any rational order.
By the time a project's funding contract is drawn up, its objectives should
have been refined as far as circumstances permit. An evaluation at this
stage can, however, sometimes bring to light ambiguities or inconsistencies
in the project proposal which can then be corrected before the project is
launched.
Determining the project's objectives at the outset need not prevent their
being changed later. Indeed, it is to be expected that the external
environment will change and that, in turn, the project's objectives may need
to be modified. The existence of a plan enables such changes to be noted
explicitly and allows the evaluation to take them into account. It is very
difficult to evaluate a project whose objectives have shifted if the changes,
and the reasons for them, have not been documented.
So, an evaluation can help clarify, re-define or even focus the project's aims
and objectives. Equally, it can describe and analyse the reasons for such
change, and the consequences.
But even if the project's aims and objectives do not change, we may want to
commission an evaluation to learn more about how certain projects develop
under certain conditions. In other words, we may want an evaluation to
document and analyse how projects actually get put into practice. Our
purpose for the evaluation in this case is to increase our understanding so
that we can improve our future planning.
On the other hand, we may call upon an evaluation to help us make decisions.
For example, a project may not be doing what it set out to do. Or maybe it is
doing something different from what was expected. In such cases an
evaluation could be asked to identify how and why this has happened so that
we can decide what to do. Is it relevant but too expensive? Is it operating in a
hostile climate? Is it an ineffective method of prevention for the particular
setting or target population? How could it be modified? Should it be closed
down? Is the contract due for renewal? What kind of decision will have to be
made once we have the evaluation results? Once we have clarified why we
want an evaluation, we can then determine what questions we need to ask.
Identifying evaluation audiences
But before looking at the evaluation questions, we should also remember that
the FOPH is not the only beneficiary of evaluations. The project planners,
managers and implementers have as much interest as the FOPH in learning
about their prevention efforts. Equally, those who are directly or indirectly
affected by the work are also potential evaluation audiences.
Who will be involved and/or affected by the project?
The full range of groups likely to be involved and/or affected should be
identified in the evaluation proposal. This will help focus its purpose, scope
and ultimate audience. Such groups will include those interested in using the
evaluation results to make decisions, as well as the users, present and
potential, of the intervention itself. Groups likely to be affected are:
– the project planners
– project funders and sponsors
– project implementers
– the targeted population(s)
– those who work directly and indirectly with the target group(s)
– other prevention planners
– decision-takers
What is each group likely to want to learn from the evaluation findings? Whilst
it is unlikely that we can consult all stakeholders when planning the evaluation,
we should, at the very least, involve the key project staff. We should equally
try to predict what type of information and findings will be of interest to other
potential evaluation audiences. In short, we should determine in advance not
only WHO will be interested in the evaluation findings, but also WHAT they are
likely to want the evaluation to inform them about.
Defining the evaluation questions
We expect evaluators to provide us with some answers, so we need to take
this task seriously and allocate time to reflect on the following: what do we
want to know, and what do we really need to know (i.e. what is a priority, e.g.
for decision-making, and what is feasible within the constraints of time and
available resources)? Equally, we must consider what we expect to do with the
results. For instance, are we looking for conclusive answers which can allow
for national generalisability? Or do we want to learn more about how a specific
project performs and develops over a given time, under what conditions, and
which factors help and which don't? Different types of evaluations are needed
to provide different types of decision-making information.
Basically, the evaluation questions we ask should be about the project's
relevance, progress, effectiveness and efficiency and, at the same time,
co-ordinate with the overall global evaluation questions on the prevention
strategy concerned. For this, we might want to learn about what processes
were used by the project to achieve its results, and how, if at all, the context
(e.g. the social and political setting) influenced the project's development and
results. The overall impact or outcome could also be taken into account. Such
evaluation (impact/outcome evaluation) should be encouraged to look out for
the unplanned as well as the planned outcomes.
For example, in an evaluation study of AIDS prevention in Uganda, it was found
that one unintended outcome of a protective awareness campaign for sex
workers and their clients was an unplanned shift in client behaviour: instead of
having protected sex with commercial sex workers, some turned their
attention towards schoolgirls instead.³
But we should also be aware that the way the question is posed will determine
the type of information we can expect. Take, for example, the effectiveness of a
project designed to train drug users as peer educators (mediators) amongst a
subset of drug users, such as intravenous drug users who work as prostitutes.
In this case we may want to know what motivates the mediators to remain
active for more than six months. The question about motivation could be posed
in a variety of ways. One could look at the personality factors of those who
remain and those who don't (e.g. extrovert, shy, intelligent, domineering, etc.).
Equally, one could consider the social conditions of these two groups (e.g.
social class, employment status, family life, schooling, etc.). The type of data
and data sources used for each would be different and lead to a different type
of analysis.
Formulating the right questions should start by looking at a wide range of
potential questions, for example about different aspects of the project (such as
the aims and objectives, inputs, management, infrastructure, social context,
etc.). Broad questions about these aspects can then be considered in terms of
priorities, how feasible they would be to answer, and indeed how expensive
that might be. The Evaluation Unit can help formulate the initial evaluation
questions, pointing out what might be easily answered, what information would
be needed, and what could and could not be expected from the answers to each.
However, we must remember that whilst we should determine the overall
questions we want addressed, ultimately it is the task of the evaluator to refine
these questions and identify the relevant sub-questions too. The evaluation
questions s/he poses will then help determine the design of the study and the
approach needed. Thus, once an evaluator is appointed (see Part 3 for how this
is done), the questions we initially put forward can be refined and further
discussed until mutual agreement is reached.
What to evaluate? The scope of the study
The scope of information to be collected should be tailored to address the
agreed evaluation questions. (But the questions should not be determined by
the scope of information available!) However, to some degree this will depend on:
– what information needs to be analysed to answer the questions;
– what relevant data sources already exist which can be used. For example,
are there evaluation reports on similar projects which can be analysed? Does
the project itself collect relevant data? Is survey data available about the
same target group, about similar questions?
– what else needs to be collected;
– what information can and cannot be feasibly collected;
– what implications this has for the analysis.
In principle, the evaluation focus should not be narrowed too far too quickly
(see more on this in Part 3).
When do we need feedback?
Thinking about when evaluation results will be needed, and by whom, should
take place at the planning stage so that a feedback schedule can be written
into the evaluation contract. Obviously, as a general guideline, findings should
be communicated to intended users at times when the information can best be
used, and in an appropriate format. Once again, we need to emphasise the
importance of identifying the potential users of the evaluation in advance. This
will help determine the different types of reporting formats and approaches
which are appropriate for the different intended user audiences. Authority to
fulfil this responsibility needs to be determined and written into the contractual
agreement with the evaluators at the outset of the evaluation.
³ Moodie R, Katahoire A, Kaharuya F, et al. An Evaluation Study of the Uganda National AIDS Control
Programme's (NACP) Information, Education and Communication Activities. NACP/WHO, Entebbe,
December 1991.
Budgeting an evaluation
In general, between 10% and 15% of the total project budget should be set
aside for financing an external evaluation study (approx. 5% for
self-evaluations). In exceptional cases, and in consultation with the Evaluation
Unit, the budget could even be set slightly higher. For example, an intensive,
in-depth evaluation study of a low-budget project may be recommended to
obtain the required answers to questions. In this case, more time input from
the evaluator will be required and will therefore cost more. Budget items
should include staff costs, travel, equipment needs, and evaluation output
costs (e.g. report translation and production charges).
The budget should reflect realistic costings of the proposed scope, methods
and procedure.
A separate budget for projected valorisation activities should also be
itemised in the evaluation proposal budget. It should cover charges for the
evaluator's time and transportation costs only. This is needed because, once
the final report has been accepted, and depending on the findings, the FOPH
may organise targeted activities to disseminate and discuss the results with
a range of stakeholders.
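The rule of thumb above is simple arithmetic. The following sketch is purely illustrative (the function name, parameter names and the rounding to whole francs are our own, not part of the FOPH guideline):

```python
def evaluation_budget(project_budget_chf,
                      external_low=0.10, external_high=0.15,
                      self_eval_share=0.05):
    """Rule-of-thumb allocations: 10-15% of the total project budget
    for an external evaluation study, approx. 5% for self-evaluation.
    Amounts are rounded to whole francs."""
    return {
        "external_min": round(project_budget_chf * external_low),
        "external_max": round(project_budget_chf * external_high),
        "self_evaluation": round(project_budget_chf * self_eval_share),
    }

# e.g. for a sFr. 200,000 project:
allocation = evaluation_budget(200_000)
# external evaluation: sFr. 20,000 to sFr. 30,000; self-evaluation: approx. sFr. 10,000
```

The percentages are defaults only; an exceptional case agreed with the Evaluation Unit would simply pass different rates.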
Recognising what the evaluation can and cannot do
The evaluation will certainly have to take place within the limitations of the
budget, time and resources available. Whilst it would be interesting to have all
our questions answered, it is likely that such factors will constrain the scope
and depth of the study. That is why we have to prioritise, and be aware of what
can and will not be accomplished.
But even when the study has been clearly delineated at the time of contract,
the evaluator may find that what seemed feasible in the beginning does not
prove possible in reality. For example, it may well be that the evaluator cannot
interview all the different levels of those involved in a project as originally
planned (people are absent, have left the project, are too busy, etc.).
Alternatives have to be considered, and if no feasible options can be exploited,
the limitations to the analysis should be explained and discussed in reports
(verbal or otherwise) to the FOPH.
Mistakes to be avoided
– Only thinking about commissioning an evaluation for the first time when the
project is nearing its end. Evaluations are more helpful when they are
planned concurrently with the actual project.
– Not discussing the evaluation purpose with the project staff.
– Defining and prioritising the evaluation questions without consulting key
partner clients.
– Neglecting to identify, at the planning stage, who is directly and indirectly
involved in and/or affected by the evaluation results.
– Expecting access to data and project co-operation without negotiating what
is feasible and how it can be achieved. Data gathering should be achieved
without over-burdening the project staff and with minimal disruption to
project activities.
– Neglecting to discuss the purpose of the evaluation and negotiate access to
data with ALL relevant persons/groups. Planning and discussing the
evaluation and data needs with the project manager does not mean that
others involved will be aware of what is happening!
– Expecting too much from the evaluation with respect to the time and budget
available.
– Expecting the evaluation to make decisions. It is not the task of the evaluator
to make decisions based on the evaluation findings: this is the contractor's
responsibility, i.e. the FOPH's.
These Checklists are an integral part of the FOPH's
Guidelines for Health Programme and
Project Evaluation Planning.
For further information please contact:
Swiss Federal Office of Public Health, Evaluation Unit
Sägestrasse 65
CH-3098 Köniz
Contact Person:
Marlène Läubli Loud
Tel. +41 (0)31 323 87 61
Fax +41 (0)31 323 88 05
E-Mail: Marlene.Laeubli@BAG.admin.ch
Checklist 2.1: Assessing the Need for External Evaluation – The Criteria

All Federal Office of Public Health funded projects are expected to produce
an annual report on their activities. This report should describe and discuss
the project's development, progress and achievements: it therefore serves
as a self-evaluation of the project.
The following questions should help determine whether a project needs an
independent evaluation, that is, one by evaluation experts who are not
employed by the project. The possible need should then be discussed with
the Federal Office of Public Health Evaluation Unit before a final decision is
reached.

☐ Does the project offer new measures, and/or a new way of dealing with
an FOPH priority health problem?
☐ Are wider measures likely to be adopted as a result of this project?
☐ Does it have potential national importance?
☐ Does the project share characteristics or have objectives which are
similar to other past/present projects?
☐ Is the project politically sensitive?
Is the project being reviewed, due to e.g.:
☐ staff problems such as conflicts, absences, etc.;
☐ falling behind schedule;
☐ out-of-date aims and objectives;
☐ going off target, i.e. not sticking to agreed aims and objectives?
Does the project require an external evaluation for reasons such as:
☐ project expansion;
☐ threatened termination;
☐ legitimisation (e.g. political/economic);
☐ sponsorship (e.g. continued sponsorship at local/cantonal level)?
☐ Is the project's budget more than sFr. 100,000?
☐ Is the project relevant to comparative international studies?
☐ Is the project relevant to global evaluation priorities (e.g. in terms of
target group/situation or strategic method)?
Checklist 2.2: Planning Evaluation of a Health Project

What should be evaluated, what questions will be answered and how it will
be done are determined by the following: (1) the nature of the project itself;
(2) its stage of development; and (3) why the evaluation is being
commissioned (the purpose of the evaluation).
The following checklist is intended to help the Federal Office of Public
Health and its project partners collate the information needed to formulate
an Evaluation Mandate.

The Project to be Evaluated
☐ Is it a project or a programme (i.e. is it a specific intervention, or a set of
interventions/activities with common characteristics and objectives)?
☐ At what stage of development is it (i.e. how long has it been running)?
☐ Is the target group(s)/target setting clearly defined?
☐ Are the overall aims and objectives clearly stated? i.e. is it clear what
the project is trying to achieve?
☐ Are the operational objectives SMART? i.e. specific? measurable?
appropriate? realistic? and delineated in time?
☐ Has the project identified a set of criteria for assessing its progress,
results and medium- to longer-term effects (process, results and impact
indicators)?
☐ Has the project established routine monitoring and assessment
procedures for its self-evaluation?
☐ Has the purpose of an external evaluation been discussed with the
project staff? At which levels?
☐ Has the project staff's support and co-operation for the evaluation been
secured?

The Evaluation
The Purpose
What is the main purpose of the evaluation? e.g.:
☐ to help clarify/redefine the project's aims and objectives;
☐ to re-orient the project;
☐ to analyse whether or not the intervention works, and under what
conditions;
☐ to make decisions about the project's continuation;
☐ to make a retrospective judgement of achievements and performance;
☐ to measure what impact has been achieved.

Time Framework
Should an external evaluation therefore be designed to take place (in what
time framework) as:
☐ a) Formative Evaluation (takes place concurrently with the project's
development and/or implementation, to highlight how/where
improvements might be made);
☐ b) Summative Evaluation (a look back over work achieved; judgemental
rather than developmental)?
The Evaluation Focus and Questions
☐ Have reports on similar projects/evaluations been reviewed to help
identify what needs to be studied?
☐ Have the main evaluation questions been defined (e.g. in relation to
project progress, relevance, effectiveness and efficiency)?
☐ Have the main evaluation questions been checked with the Evaluation
Unit, e.g. to include global evaluation questions too?

Support Needs for Conducting Evaluation
☐ Has the evaluation been discussed with the project managers?
☐ Have all other key partners been consulted, e.g. about the purpose of
the evaluation and the questions to be addressed?
☐ Did project staff help formulate the evaluation questions?
☐ Is the project collecting data for self-evaluation?
☐ Has the project agreed that the evaluator can have access to these data?
If none, what data could be collected internally (i.e. by the project) which
could then be used by the external evaluation?

Evaluation Audiences and Timely Feedback
(N.B. the best time to provide evaluation feedback is when the information
will prove most useful to the project)
☐ Have we identified which groups are likely to be directly and indirectly
interested in the evaluation results?
Definite Evaluation Audiences
– project funders, e.g. the FOPH
– project planners
– project managers/implementers
– the evaluation team of the relevant global evaluation programme
Potential Evaluation Audiences
– the population(s) targeted by the project
– those who work directly and indirectly with the target group(s)
– prevention planners, implementers and funders of similar projects
– relevant policy makers (e.g. at federal, cantonal or communal level)
☐ Have we considered whether there might be any reason to restrict
feedback on the evaluation results to certain audiences only, e.g.
political sensitivity etc.? If yes, this needs to be specified in our
Evaluation Mandate (see Checklist 3.1).
Timely Feedback
☐ Have we defined when we (FOPH and project managers) need to know
about evaluation findings? e.g.:
– in relation to project changes under planning;
– in relation to the project's future planning;
– to help modify the project as and where necessary (and earlier rather
than later);
– in relation to decisions being taken in the operational context which
may affect the project (changes in budget, local support, etc.);
– in relation to the development of other projects sharing similar
characteristics and/or objectives.
☐ Have we identified when other potential evaluation audiences are likely
to need feedback on the results?
☐ If yes, what information? When? Why?
Checklist 2.3: Self-Evaluation – Basic Elements to be Included

All Federal Office of Public Health funded projects are expected to produce
an annual report on their activities. This report should describe and discuss
the project's development, progress and achievements: it serves as a
self-evaluation of the project. It is important, therefore, that right from the
start the project's development is systematically documented. A log book
should be kept of what actions, decisions and changes were made, when,
by whom and why. Similarly, records of what services or goods were
produced, how many, for whom and, for example, at what unit cost, should
also be systematically kept and summarised in the self-evaluation.
A self-evaluation budget is included in every FOPH project contract. This
money is provided to support evaluation activities. It can be used, for
example, to purchase evaluation consultancy to help establish recording
methods and procedures.
The following should be used as a checklist of items which should be
systematically assessed and reported in the self-evaluation (annual report):

1. Description of the project design and development
What the project planned to do
☐ what did it set out to do? (aims and objectives)
☐ for whom? for which situation?
☐ how was it going to do this?
☐ how did it plan its structure, organisation and management?
What the project actually did
This should include tables and/or graphs to show, for example:
☐ a chronological list of major actions, decisions and changes;
☐ a table of the contacts made, e.g. with WHOM (type of
group/institution, etc.), the nature of the contact, and its purpose;
☐ a description of which goods/services were produced, for whom, etc.;
☐ simple statistics/frequency counts of, e.g.:
– the number of goods/services produced;
– how many were used, by which type of groups, over what period of
time, for what purpose, etc.
N.B. examples of project documents, e.g. on training, brochures etc.,
should be annexed to the report.
☐ What were the major problems met, e.g. unforeseen events, and how
were these resolved?
☐ How was the project actually structured, organised and managed, and
with which organisations/groups did it work? (provide an organigram of
the management structure and of the relationships between the project
and key external partners)
☐ What human and financial resources were actually made available?

2. Discussion and Lessons Learned
☐ Did the project achieve what it tried to do?
– How?
– Why? or Why not? and
– What helped and what didn't?
☐ What were the strengths of the project?
☐ What were its weaknesses?
☐ Summary of main lessons learned
3. Recommendations
☐ WHAT recommendations/advice would you give to WHICH groups/people
about:
– the future of your project?
– setting up a similar project within the same canton/setting?
– setting up a similar project within a different canton/setting?
☐ In particular, what would you recommend to the Federal Office of Public
Health about supporting a similar project?
Checklist 2.4: Training and Continuing Education Courses – Self-Evaluation
Basic Elements to be Included

All managers of Federal Office of Public Health funded Training and
Continuing Education projects are expected to produce an annual report on
their activities. This report should describe and discuss the project's
development, progress and achievements: it serves as a self-evaluation of
the overall project.
It is important, therefore, that right from the start the project's development
is systematically documented. A log book should be kept of what actions,
decisions and changes were made, when, by whom and why. Records of
course provision and attendance should also be systematically maintained.
(Refer to the Guidelines for Developing or Assessing Medico-Social
Training/Education Projects published by the Federal Office of Public
Health, included here as Annex No. 5.)
A self-evaluation budget is included in every FOPH project contract. This
money is provided to support evaluation activities. It can be used, for
example, to purchase evaluation consultancy to help establish recording
methods and procedures.
The following should be used as a checklist of items which should be
systematically assessed and reported in the self-evaluation (annual report)
on Training and Continuing Education projects.

1. Description of the project's design and development
i) What did the Training/Continuing Education project look like during
planning?
☐ what did it set out to do? (aims and objectives)
☐ for whom (which group of health carers)? for which situation?
☐ how was it going to do this?
☐ how long would it take to become self-funded?
☐ what sources of funds were to be used over the short, medium and
longer term?
☐ how did it plan its structure, organisation and management?
ii) What did the project look like in action?
This part should COMPARE PLANS with what actually took place. It should
include:
☐ a description of the major problems met by the project, e.g. unforeseen
events, and how these were resolved;
☐ a description of how the project was actually structured, organised and
managed. With which organisations/groups did it work? (provide an
organigram of the management structure and of the relationships
between the project and key external partners);
☐ details of the human and financial resources actually made available,
and from which sources;
☐ a description of which courses took place, for whom, etc.;
☐ a description of which courses DID NOT take place, and why;
☐ a description of how the courses were evaluated (e.g. see section 5 in
the Guidelines for the Development or Review of Training/Continuing
Education Projects published by the Federal Office of Public Health,
included here as Annex No. 5).
N.B. examples of course documents, e.g. on training, brochures etc.,
should be annexed to the report.
☐ a description and analysis of course costs to show the price per course
day, per participant: e.g.
– number of course days by number of participants;
– cost ratio of inputs by number of course days per year, per number
of participants.
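The cost ratios asked for above are simple divisions. As an illustration only (the function name and the example figures are invented, not FOPH prescriptions):

```python
def course_cost_ratios(total_input_costs_chf, course_days, participants):
    """Two simple ratios for a training course's self-evaluation report:
    cost per course day and cost per participant."""
    return {
        "cost_per_course_day": round(total_input_costs_chf / course_days, 2),
        "cost_per_participant": round(total_input_costs_chf / participants, 2),
    }

# e.g. a course costing sFr. 12,000, run over 4 course days for 20 participants:
ratios = course_cost_ratios(12_000, 4, 20)
# cost per course day: sFr. 3,000; cost per participant: sFr. 600
```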
This section (describing the project in action) should be supplemented by
tables and/or graphs to show, for example:
☐ a chronological list of major actions, decisions and changes;
☐ a table of the contacts made, e.g. with WHOM (type of
group/institution, etc.), the NATURE of the contact, and its PURPOSE;
☐ simple statistics/frequency counts of, e.g.:
– the number of courses planned compared with the number of
courses held;
– the number who registered per course, compared with the number
who actually attended.
☐ aggregated characteristics of participants per course, e.g.:
– professional occupations (e.g. by current employment);
– attendance on previous courses/on the same course at this institute
(and/or at any other institute);
– cantons of employment;
– employer institutes;
– course fees paid by whom, e.g. self-funded or paid by employer.

2. Discussion and Lessons Learned
☐ Did this Training/Continuing Education project achieve what it tried
to do?
– How?
– Why? or Why not? and
– What helped and what didn't?
☐ What were the strengths of the project?
☐ What were its weaknesses?
☐ Summary of main lessons learned

3. Recommendations
☐ WHAT recommendations/advice would you give to WHICH groups/people
about:
– the future of this project?
– setting up a similar project within the same canton/setting?
– setting up a similar project within a different canton/setting?
☐ In particular, what would you recommend to the Federal Office of Public
Health about supporting a similar project in the future?
Part 3: Commissioning and Monitoring Evaluation Contracts

Contents of this Part
– Drawing up an evaluation mandate
– Identifying potential evaluators
– Reviewing the evaluation proposal
– Setting up the Contractual Agreement
– Refining the evaluation en route
– Keeping in touch during the evaluation process
– Mistakes to be avoided

Relevant Checklists
– Drafting the FOPH evaluation mandate: the FOPH's checklist
– Drafting the evaluation proposal: the evaluator's checklist
– Assessing the evaluation proposal
– Assessing the evaluator
– Monitoring the evaluation contract

Continual contact with the external evaluators helps you understand
what's happening in good time.
This section provides our partners with an overview of the principles used by
the Federal Office of Public Health (FOPH) when commissioning external
evaluations.
The Evaluation Unit provides FOPH staff with guidance on the step-by-step
procedures used when commissioning evaluation studies, and is ultimately
responsible for the quality control of all external evaluation contracts. Its staff
help FOPH partners determine their evaluation needs and purposes, an
appropriate evaluation approach, and a useful feedback schedule. It also
advises on the choice of external evaluators.
The following paragraphs briefly describe the procedures which
should be adopted for setting up the contractual agreement.
Once the need for an evaluation has been agreed (see the Criteria Checklist for
external evaluations, Part 2), the basic requirements for the evaluation study
are set out in writing. In some cases (e.g. where the evaluation budget is less
than frs. 100,000), this may only need to be a brief outline of the purpose, the
general evaluation questions, the potential evaluation audiences, the time
scale and the budget. For more substantial studies, it should be more detailed
(see Checklist 3.1 for what information should be provided). The evaluation
mandate is then discussed with the key partners involved (e.g. FOPH staff,
such as specialists from the relevant prevention section and the Evaluation
Unit, and the project managers).
Once agreement is reached, a Call for Proposals for the evaluation may be
circulated to the evaluation community. The Call for Proposals package must
include not only the evaluation mandate, but also:
the major documents about the project;
an outline of the FOPHs relevant prevention strategy plus a short statement
on where the particular project to be evaluated is in relation to this;
copies of previous evaluation reports, research reports and/or theoretical
papers relevant to the proposed evaluation study (if available) or at least
details of where they can be obtained;
and the Evaluation Units Drafting a Proposal checklist (Checklist 3.2 of
these Guidelines).
Evaluators should take care to refer to these documents in drawing up their ini-
tial evaluation proposal.
Potential evaluators will then be invited to submit their proposals. The evalua-
tion proposal is a key component of the evaluation contract as it sets out the
main elements of what will be done, by when, and how this will be achieved.
It is therefore annexed to the contract and forms an integral part of the agree-
ment.
Evaluation proposals will be assessed by the Evaluation Unit according to a set
of criteria which include the feasibility, ethics, and relevance of the described
procedures and approach in relation to the general evaluation purpose and
questions. At the discretion of the Evaluation Unit, other specialists/expert
committees may also be invited to give a critical review of the proposals.
The Evaluation Unit draws up the necessary contractual agreement. The
accepted evaluation proposal sets out the purpose, methods, procedure, time
scale, budget, and plans for the feedback and dissemination of findings to
potential audiences. This is then annexed to the contract. Items such as owner-
ship of data, obligations of contractor and contracted, payment procedures etc.
are detailed in the actual contract.
On average it takes 68 weeks to have a contract processed.
A good evaluation design is the result of direct negotiation between the key
stakeholders and the evaluator. It is based on a sound knowledge of the pro-
ject, its context and the differing stakeholders concerns. Once commissioned,
the evaluators should therefore spend an initial period getting to know more
Step 1 The Evaluation Mandate
Step 2 Identifying potential
evaluators
Step 3 Reviewing evaluation
proposals
Step 4 Setting up the evaluation
contract
Step 5 Refining the evaluation
en route
34
about the project, the context in which it is set and the key partners. A wider
range of project documents should be reviewed, and discussions with all key
client groups should be held in order to help focus and prioritise the questions
to be addressed. This is likely to lead to the production of a refined evaluation
proposal and workplan. What can and cannot be addressed should be clearly
described to indicate the limitations of the study.
Any refinement to the original proposal needs to be submitted in writing. Once
agreed by all parties concerned, it will be used as an integral part of the con-
tract and annexed to the original proposal.
Once the evaluation is underway, it is a serious mistake to believe that nothing
needs to be done until the interim or final report comes in. It is possible, and
indeed highly likely in the case of experimental and pilot projects, that the orig-
inal evaluation plan, i.e. its purpose and procedures, will change during the
course of the evaluation. This may be due, for example, to unanticipated issues which come to light only once the evaluation is underway, to significant changes in the project's setting, or to a conflict of interests between key partners.
The contractors and project managers need to be kept informed! Any modification to, or reorientation of, the original evaluation plan should be mutually agreed, preferably in writing, between the contractual partners. It is the FOPH's responsibility to ensure that potential changes are fully discussed with the relevant partners, i.e. the relevant FOPH sections (such as the Prevention Section and the Evaluation Unit) and the project under study. Significant changes will require a formal amendment to the contractual agreement, requiring the same signatories as the original contract.
Keeping in touch during the evaluation
But we don't only want to hear about changes to the evaluation design: important findings, issues or concerns which come to light through the evaluation process should also be fed back to us. Remember, we expect evaluations to help us improve our prevention efforts: this means that we should be kept up to date on findings, even interim findings, as and when they become available.
The project team is busy, the evaluator is busy and we, the FOPH, are also busy. But keeping in touch is important! So be sure that a regular contact schedule is set up between the various partners right from the beginning, and keep to it!
Mistakes to be avoided
– Not identifying and consulting the key partners during the initial evaluation planning. (FOPH)
– Accepting the evaluation proposal without having referred it to key partners for comment. (FOPH)
– Not identifying the range of potential evaluation audiences during evaluation planning. (FOPH and external project partners)
– Appointing an evaluator without prior assurance of his/her integrity and competence through, e.g., reference to past evaluations and past contractors. (FOPH)
– Appointing an evaluator without having first established mutual trust. (FOPH)
– Failing to ensure that the appointed investigator actually does the work, i.e. allowing an inexperienced assistant to be assigned the work rather than the person appointed. (FOPH)
– Assigning evaluations to those who have little knowledge or experience of the specific study setting (e.g. prisons, the school system, government administrations, etc.). (FOPH)
– Not establishing preliminary agreement between the FOPH, evaluators and key project staff on feedback arrangements: to whom, how and when?
– Assuming that key partners will participate in the evaluation (and possibly also provide evaluators with project-collected data) without having first secured their agreement. (FOPH and evaluators)
– Assuming that the accepted evaluation proposal covers all aspects of the contractual agreement. The draft contract and the annexed evaluation proposal together should be carefully checked by both parties (the FOPH Evaluation Unit and the evaluators themselves) to ensure that all details are included, understood and agreed.
– Neglecting to provide for periodic review and amendment of the contract. (FOPH and evaluators)
– Failing to include time within the contract period for checking the draft copy of final evaluation reports. (FOPH and evaluators)
– Neglecting to itemise potential valorisation plans and budget within the contract. (evaluators and FOPH)
– Failing to secure agreement from key partners to changes to the original evaluation purpose and procedure. (FOPH)
– Planning a data collection process which is over-demanding and disruptive to the project. (evaluators)
– Failing to take into account the ethics of proposed data collection. (all key project partners and evaluators)
– Neglecting to incorporate within the work plan the time needed for discussing draft reports with the FOPH/projects before printing the final product. (FOPH and evaluators)
– Guaranteeing anonymity and/or confidentiality when this is not possible. (evaluators)
– Not checking from time to time on the FOPH's and the project's information requirements. (evaluators)
These Checklists are an integral part of the FOPH's Guidelines for Health Programme and Project Evaluation Planning.
For further information please contact:
Swiss Federal Office of Public Health, Evaluation Unit
Sägestrasse 65
CH-3098 Köniz
Contact Person: Marlène Läubli Loud
Tel. +41 (0)31 323 87 61
Fax +41 (0)31 323 88 05
E-mail: Marlene.Laeubli@BAG.admin.ch
Checklist 3.1: Drafting the Evaluation Mandate (An FOPH Checklist)

The FOPH's Evaluation Mandate should describe what we need, and be set out according to the sections described in this checklist (see Part 3 of the FOPH Guidelines for Project and Programme Evaluation Planning). Use this list to check what the evaluation mandate does and does not yet cover.

Section 1: Introduction and Background
□ A brief description of the project, including its aims and objectives, budget, time period, and relationship to the FOPH's global prevention strategy
□ Legal basis for commissioning the study

Section 2: The Evaluation Mandate
□ The purpose of the evaluation, and the intended use of results according to which types of evaluation audience groups
□ The initial evaluation questions as defined by the FOPH and the relevant project manager (both the project-specific questions and those of interest to the relevant global evaluation study)
□ Major areas and levels of interest and concern for the evaluation focus
□ Data currently being collected (e.g. by the project) and what other data and data sources are available
□ List of evaluation outputs expected

Section 3: Time Plan
□ Time period for the evaluation study
□ Timetable of when critical decisions will be taken about project development, or the timing of other factors which could affect the project (vital to help the evaluator organise the study and project feedback schedule)

Section 4: Diffusion and Valorisation of Evaluation Findings
□ List of intended evaluation audiences, i.e. key audience groups to be informed of the evaluation results, grouped according to definite and potential groups
□ Possible formats for report-back to which types of groups
Section 5: Organisation Chart of Evaluation Management and Responsibilities

Section 6: Budget
□ Budget guidelines, including a separate and specific budget for valorisation

Annexes:
□ All relevant documents should be attached to the mandate to help the evaluator prepare his/her proposal (e.g. reports on similar projects, evaluation studies, etc.). If not available, at least give references to what these are and where they may be found.
□ Also include Evaluation Guidelines Checklist 3.2 (Drafting an Evaluation Proposal: The Evaluator's Checklist) and Checklist 3.3 (Assessing the Evaluation Proposal).
Checklist 3.2: Drafting an Evaluation Proposal (The Evaluator's Checklist)

When responding to the FOPH's Call for Proposals, evaluators should use this checklist to determine what to include in their proposal and, wherever possible, how it should be set out. This list is based on information set out in the FOPH Guidelines for Project and Programme Evaluation Planning. Use this list to check what your proposal does and does not yet cover.
□ i) Cover page with the title of the proposed evaluation, the evaluator's name, address, etc., and the date of submission
□ ii) Summary of the main points of the proposed study

Section 1: Introduction and Background
□ A brief description of the project, including its aims and objectives and its relationship to the FOPH's global prevention strategy
□ Why the evaluation is being commissioned and how it is intended to be used (e.g. decision-making, project expansion, etc.)
□ Intended evaluation audiences, i.e. key audience groups to be informed of the evaluation results, grouped according to definite and potential audiences
□ Major areas and levels of interest and concern for the evaluation focus

Section 2: Proposed Evaluation Design (Methodology, Approach and Methods)
□ The preliminary evaluation questions to be answered, as set out in the FOPH mandate
□ What is already known about previous work relevant to the project (the status quo in research/evaluation of this area; the theoretical reference framework for the proposal). The proposer should draw on her/his own experience and/or knowledge of the field.
□ The evaluator's proposed evaluation questions (these should include the FOPH's and the project's questions, but may introduce others and/or modifications to the original FOPH questions)
Where relevant, the proposal may well note that additional questions and unanticipated information could come to light during the course of the study. If so, it should propose that a refined evaluation design will be made, and by when this should be expected.
□ The methodological framework, and
  □ proposed data collection methods,
  □ proposed samples and sampling techniques,
  □ proposed methods of analysis, and
  □ support needed to conduct the proposed study, e.g. what type of clearance will be needed for data access, from whom, and to collect what type of data; and what alternatives would be possible if this could not be guaranteed
□ Brief summary of the evaluation outputs/products to be expected (e.g. types of reports (oral/written; interim/final/abridged), guidelines, posters, publications, etc.)
□ Description of the proposed evaluation team and who will do what (e.g. supervision, main investigator, data collector, secretarial support); details of what infrastructural support is available, and what may be needed to support the evaluation proposed (e.g. hardware/software, recording equipment, etc.)

Section 3: Workplan and Time Schedule
□ Table of the proposed workplan and time scale for completion of the study (to include the overall length of the study, start and finish dates according to the date the contract is signed, proposed reporting times, planned time for diffusion and valorisation of findings, etc.)
□ In cases where, in keeping with the methodology, the proposed evaluation design may need to be refined after a certain period of fieldwork, the time period needed should be specified
Please note that a period of one month is needed for the FOPH/Project to review the DRAFT of the final report. The FOPH's/Project's comments have to be taken into account in producing the final report.

Section 4: Valorisation Plans
□ Proposals for the feedback process: what kind, for whom, and when
  □ during the evaluation
  □ at study completion

Section 5: Budget
□ i) For conducting the evaluation
  □ Personnel costs: number of people in the evaluation team, at what percentage of time, and at what salary grade
  □ Operating costs: capital equipment needs (e.g. software/hardware, recorders, etc.); travel and subsistence, etc.
  □ Production costs: e.g. for the production of questionnaires, reports, slides, etc., and translation costs
□ ii) For dissemination and valorisation of results
  □ A separate and specific budget for the evaluator's time needs for such activities as writing non-academic articles and contributing to oral presentations

N.B. A curriculum vitae and a statement highlighting the qualifications, experience and interest relevant to the project of the evaluator (and of other members of the proposed team) should be annexed to the proposal. An example of his/her work should be included wherever possible, as well as the name, address and telephone/fax details of previous clients for reference.
Checklist 3.3: Assessing the Evaluation Proposal

An FOPH review panel will assess all evaluation proposals submitted in response to a Call for Evaluation Proposals. The standard criteria for scientific rigour, as well as the relevance of the design, will be used as the basis of the assessment. However, the panel will also take into account the points set out in this checklist.
□ Is the proposal set out according to the FOPH guidelines?
□ Are all items covered? If not, is this explained?
□ Is the proposal clear and easy to read?
□ Does it show a sound understanding of how the project complements the relevant FOPH prevention strategy?
□ Does it demonstrate a sound knowledge of relevant prevention projects and/or evaluation studies?
□ Are the questions to be addressed clearly stated, relevant and appropriate? (i.e. are our needs well understood?)
□ Does the evaluator offer additional questions or new insights into ways of assessing the project?
□ Has an initial prioritisation of evaluation questions been made?
□ Is the proposed methodology appropriate to the evaluation questions?
□ Is it likely that the evaluation questions can be adequately answered by using the proposed methodology?
□ Is the proposed evaluation approach likely to succeed? (i.e. not too disruptive to project activities, not too demanding of project staff, likely to engage cooperation, etc.)
□ Are the described scope, approach and expected outcomes realistic and achievable within the budget and time constraints?
□ Is there some discussion of possible valorisation plans?
□ Does the work plan identify when the initial proposal and evaluation strategy, questions, etc. will be reviewed and refined?
□ Does the work plan include time for valorisation?
□ Does the work plan include extra time (approximately two months) to discuss draft reports with the FOPH and others, and to make amendments where needed?
□ Do the initial proposals for feedback meet our needs? (i.e. in terms of timeliness, and the format/type of feedback relevant to different evaluation audiences' needs)
□ Is the proposed budget fair, realistic and comprehensive? (e.g. does it cover all necessary costs, such as report translation and production, materials, equipment, personnel, and travel and subsistence costs relevant to the proposed work? Does it also include money for valorisation, e.g. communicating the results to the range of evaluation audiences through publications, workshops, seminars, etc.? Does it do all this at a fair price?)
Checklist 3.4: Assessing the Evaluator

An FOPH review panel will assess all evaluation proposals submitted in response to a Call for Evaluation Proposals. The standard criteria for scientific rigour, as well as the relevance of the design, will be used as the basis of the assessment. However, the panel will also take into account the points set out in this checklist.

□ Has the evaluator(s) done similar work before?
□ Have we referred to examples of his/her past evaluation work?
□ Have we referred to former clients for their opinion?
□ Is the evaluator familiar with relevant work done by others? (e.g. to draw on major lessons learned, areas of priority concern, etc.)
□ Does the evaluator have the relevant competence and experience? If not, is there an evaluation team? Do the members together offer all the necessary skills? e.g.
  □ technical competence
  □ relevant experience
  □ languages
  □ public relations and communications
□ Has the evaluator sufficient infrastructural support to do the evaluation? (equipment, secretarial assistance, etc.)
□ Has the evaluator got sufficient time to complete this study?
□ Is the evaluator (and all team members) totally independent of the project? (i.e. has no personal/material interest in its success, no relationship with key project staff, etc.)
□ Is the evaluator likely to engage the cooperation of project staff?
Checklist 3.5: Monitoring the Evaluation Contract

Even when an evaluation contract has been carefully negotiated and satisfactorily agreed, it is still necessary to monitor its implementation attentively: (a) because there may be unanticipated events or changes that will affect the evaluation, as a consequence of which the original evaluation design may need to be re-negotiated and modified; and (b) because either the Office or the evaluator may become dissatisfied with contract implementation; if this starts to happen, it is important to identify and redress the problem before any breach of contract occurs which may ultimately lead to contract termination.

This checklist prompts you to think about which elements you will need to look at when monitoring the implementation of your evaluation contract. These concern the procedures and expectations set out in the contract and evaluation proposal. In addition, you will have to consider the quality of the work and performance. Checklists 3.3, 3.4, 4.1 and 4.2 provide help on what should be considered in making such a judgement.
□ Have the agreed procedures for data collection been put into practice? (including collecting data from all target groups/activities identified in the evaluation design)
□ Has the evaluator met the agreed deadlines for each stage of the work?
□ Has the Office kept to the agreed timetable in, for example, providing information, attending meetings, etc.?
□ Has the evaluator provided feedback on important concerns and/or findings AS AND WHEN they become available?
□ Did the Office keep the evaluators informed of any changes likely to affect the project and/or the evaluation?
□ Has the evaluator delivered the contracted evaluation products/outputs?
□ Were these products/outputs delivered on time, that is, according to the agreed deadlines?
□ If NO to any of the above, was the Office kept informed of any problems with, for example, data collection procedures, meeting agreed deadlines, or producing evaluation products/outputs?
□ Were new agreements satisfactorily negotiated?
Part 4: Assessing the Technical Evaluation Report

Contents of this Part
– What it should cover
– How it should be presented
– Reviewing the results and recommendations

Relevant Checklists
– What the report should include
– Assessing the technical evaluation report

The evaluation report should communicate the key findings in a clear and simple way.
The checklists provided with this section set out the principal requirements of
what should be covered in a technical evaluation report, and the criteria for its
overall appraisal.
In the main, we are interested in the following:
– Does the report address the questions we asked (and if not, was there some good reason for not doing so)?
– Does it present the information clearly?
– Are the findings and recommendations useful to our future planning?
What the report should cover
The technical evaluation report needs to provide a clear description of the project itself and the context in which it operates. This is important for our understanding of what the project is trying to achieve and whether or not the climate is suitably conducive towards this end. It is important, therefore, that the evaluation should collect examples of the project's documents (such as brochures produced, a copy of the teaching/training programme, questionnaires produced by the project, etc.). Such documentation about the project and its work should be appended to the evaluation report.
Having set the scene, the report then needs to provide us with an adequate
description of how the evaluation went about its task: what it did, how it did it,
what problems were met and how these were addressed. A clear description
of what was taken into account and what was not helps identify the potential
limitations of the analysis. In addition, we need sufficient information on the
reliability and validity of the data collected. An explanation of the methods used
to analyse the different data sets is equally important for the same reasons.
Examples of the different data collection tools and results should also be included to show clearly how the work was executed. For example, in the case of qualitative analysis, an extract of the verbatim responses to a particular question (presented in such a way as to safeguard anonymity wherever agreed) should be given to help us understand the full range of opinions on, and understanding of, the particular issue.
The conclusions and recommendations should be defended by reference to the data obtained.
Criteria for assessing the report
The scientific rigour of the report is obviously an important criterion for judging it; its relevance and utility are equally important. The evaluation should report information clearly enough for it to be easily understood. The discussion should be comprehensive, but direct and focused on the evaluation questions and issues. A good balance between text and graphic representations should be provided. However, graphics should not be included just for their own sake: they should be used to add value and understanding to the descriptive text.
Messages about the weaknesses and strengths of the project and its implementation have to be easily identifiable, succinct and well defended. We need to understand:
– what worked well,
– what did not,
– what the helpful factors were,
– which factors were inhibiting, and
– how these factors ultimately influenced and shaped the project's development.
The lessons learned through the experience should be identified and discussed to help us understand how to improve our future strategies and measures.
Mistakes to be avoided
– Not addressing what the key partners want to know about the project.
– Concentrating on theoretical rather than practical issues in the report.
– Overuse of technical jargon rather than clear, simple explanations.
– Not providing a description of the project and the context in which it operates.
– Not analysing how the operating context may have influenced/shaped its development and ultimate results.
– Using graphics, tables and figures that add little value or understanding to the descriptive text (and/or vice versa!).
– Providing insufficient information about the methodology and the strengths and weaknesses of the methods used, as well as their effects on the analysis.
– Not maintaining confidentiality of information when this had been agreed.
– Confusing what is meant by anonymity and confidentiality.
– Modifying conclusions or recommendations to suit partner interests when not justified by the data.
– Not providing a clear summary of what was addressed and how, what the project's strengths and weaknesses were, and what the main lessons learned were, particularly with respect to future project planning.
– Making indefensible generalisations.
– Not recognising or discussing the possible limitations of the study: why these occurred, what alternatives were considered, what the consequences were, and how these might have affected the analysis and overall findings.
– Assuming that the whole report will be read in detail.
Checklist 4.1: What the Technical Evaluation Report Should Cover

(N.B. Report-back should be in varied forms to meet different audience needs; below we describe the essentials to include in a written, technical report.)

An FOPH review panel will assess all evaluation reports submitted as part of the evaluation contract. The standard criteria for scientific rigour, as well as the relevance of the design, will be used as the basis of the assessment. Use this list to check what has and has not been covered in the report.
The report should cover the following items:
□ List of contents
□ Acknowledgements
□ Glossary of technical terms, abbreviations, etc. used in the report
□ Summary of purpose, methods and the principal conclusions (particularly the lessons learned) and recommendations

Part 1: Introduction (what the project was meant to do; what the evaluation was asked to do, and why)
□ Brief description of the project's aims and objectives and its operational context
□ Terms of reference for the evaluation (purpose of the evaluation, over what period of time, principal questions to be addressed and, if modified, the questions and issues actually addressed)

Part 2: Evaluation Methodology (what was evaluated; how it was done; what data were obtained from which sources)
□ A tabled summary of data collected, sources, frequency, methods used, etc. should be included to illustrate the scope and weight of the data collected
□ Which methods were used to answer which evaluation questions should also be shown in table form
□ The limitations of the study should be discussed in detail (e.g. the implications for the analysis of restricted data access, etc.)

Part 3: Results and Discussion
□ How did the project actually develop?
□ Were changes made to the original plan? If yes:
  □ What were the changes?
  □ Why were these changes made?
□ What did the project achieve?
Part 4: Conclusions and Summary of Main Lessons Learned
□ Did the project achieve what it tried to do?
  □ How?
  □ Why? or Why not?
  □ What helped and what didn't?
□ What were the strengths of the project?
□ What were its weaknesses?
□ How did this affect the project, e.g. in meeting its objectives?

Part 5: Recommendations
□ WHAT recommendations/advice would you give WHICH groups/people about:
  □ the future of this project?
  □ setting up a similar project within the same setting?
  □ setting up a similar project within a different setting?
□ WHAT would you advise/recommend to the Federal Office of Public Health about supporting a similar project?

Annexes: These should include at least the following:
□ The original evaluation mandate and, where relevant, the authorised changes
□ Examples of the evaluation tools used for data collection and of the data analysed, e.g. excerpts from qualitative interviews, questionnaires used, a list of documents analysed, etc.
□ Examples of the project documents, especially those referred to in the report, e.g. brochures, teaching materials, etc.
Checklist 4.2: Assessing the Technical Evaluation Report

An FOPH review panel will assess all evaluation reports submitted as part of the evaluation contract. The standard criteria for scientific rigour, as well as the relevance of the design, will be used as the basis of the assessment. However, the points set out in this checklist will also be taken into consideration.
I Structure and presentation of the report
I Is it clear, well structured, easy-to-read and
comprehensive?
I Does it provide both text and graphics to convey
messages?
I Is it focused on, and structured to reply to the main
evaluation questions?
I Scientific content
I Was the methodology well applied?
I Is there sufficient information on the project,
the evaluation and their operational contexts?
I Does it discuss the influence of contextual
conditions on the development of the project?
I Is it technically competent?
I Does it meet the criteria of scientific rigour?
I Are the conclusions credible?
I Is a good overview provided of scope, type and
sources of data collected?
I Are the limitations of the study identified, explained
and their implications discussed?
I Usefulness
I Does it do what it said it would?
I Were the right questions asked and answered?
I Does the analysis of the projects strengths and
weaknesses improve our understanding?
I Recommendations
I Are they feasible, practical and useful for improving
this project/other projects?
I Are they relevant to future developments at a
national level?
I Do they help us determine what needs to be done to
improve our overall strategy, measures and actions?
Checklist 4.2: Assessing the Technical Evaluation
Report
Part 5: Promoting and Using Evaluation Findings

To attract an audience, we should determine what findings will be of interest to which stakeholder group, and in which manner these will be best communicated.

Contents of this Part:
- When we should be informed
- Who should be informed
- Suitable modes of reporting
- Making effective use of evaluation findings
- Mistakes to be avoided

Relevant Checklists:
- Identifying key messages
- Identifying who is affected/interested
- Deciding the best way each should be informed
By now you will have understood that we encourage evaluations to be designed in consultation with the various partners involved or likely to be affected by
the results. The more this happens, the higher the chances of having the
results accepted and used by the various stakeholders.
From the design stage therefore, we have argued that the key partners and
evaluators need to identify the full range of possible evaluation audiences.
Ultimately this helps the evaluators reflect on which results have implications
for which target groups. Once results are available, evaluators together with
the FOPH and other key partners need to determine which findings need to be
brought to the attention of which target groups, and by which means. (What are the key messages for which groups?) Who are the key decision-makers?
Who can best act on which findings?
When providing feedback on evaluation findings, evaluators need to review the
following:
- Who needs to know?
  - Which groups?
  - Which key people?
  - Who can ultimately take action/decisions?
- What information is likely to be relevant and/or of interest?
  - In what sequence?
  - In which type of format?
  - When?
- What problems are likely to arise?
  - Can these be minimised? How?
In Part 1 we stated that essentially the FOPH requires project evaluation for
four reasons:
1. to improve our strategies and actions so that they meet clients' needs;
2. to help us make informed decisions;
3. to clarify the options available; and
4. to account for the expenditure of public funds.
For this we need feedback from evaluators at an appropriate time. For example, if our strategies or measures are proving ineffective, inappropriate or inefficient, we want to know as soon as possible so that we can modify our actions, redress the problems or even cancel contracts wherever necessary.
Evaluators should therefore be encouraged to provide timely feedback and
highlight how these findings apply to and/or affect different participant groups
or audiences. The evaluation design should have taken into account when key decisions about the project would be taken, and consequently planned to provide feedback (wherever possible and relevant) accordingly.
In our contractual agreement with evaluators we sometimes request an interim as well as a final report. However, during preliminary negotiations, we also emphasise our need to have feedback reported as and when significant findings come to light.
We should ensure that we are available and interested in receiving feedback, and willing to assess with the evaluators alternative courses of action and likely consequences.
Deciding who should be informed is determined by what the findings are and
what key messages are detected for which audience group. As a starting point,
however, we can say that all those who participated in the evaluation should,
wherever possible, be informed of the results. Too often reporting tends to be
restricted to the sponsors and project managers. Significant results/messages
should also be communicated to the various groups affected by the outcome
through, for example, the mass media, publications etc. That is why we have previously stressed the need to identify the range of potential users of evaluation results, as well as those likely to be affected, during evaluation planning.
When should we be informed of evaluation findings?
Who should be informed?
This not only helps focus the evaluation study, but also serves as a point of reference when reviewing the findings.
Evaluations should be designed to have an impact on our prevention strategies
and measures. In other words, they should provide us with useful, pertinent
and clear information based on the use of sound scientific method and analysis. The conclusions and recommendations should help different interest
groups see for example what achievements have been accomplished, where
improvements can be made, which more cost-beneficial approaches might be
employed or even that support for wasteful, unproductive efforts should be
withdrawn.
The FOPH, project managers, researchers and evaluators will be interested in
receiving written, technical reports. Assuming that the conclusions are fully justified and well defended, various audiences will be interested in learning what
was discovered and in some cases, what should be done next.
Key partners, together with the evaluators, should identify the various audiences to be addressed and which messages are likely to be of interest, and, based on this information, devise the most appropriate formats for transmitting the messages to the targeted groups.
A range of different modes should be planned to report evaluation findings to
meet the needs of the different audiences. It is a mistake to believe that all audiences will read reports. It may be more appropriate to communicate findings to targeted groups via workshops, seminars, oral presentations, etc.
Articles can also serve as a useful medium to reach a wide range of targeted
readers. But obviously different types will be needed for different groups. For
instance, an article in the FOPH bulletin requires a different style from that of
an article placed in a popular journal or newspaper.
We have previously argued that evaluation is a cyclical process which starts and
ends at the planning stage. Work does not stop once the evaluation results are
to hand. On the contrary, it is the starting point for the FOPH to reflect on its
overall prevention strategy, measures and actions, and then determine how
best to proceed.
For example, the majority of FOPH funded projects are aimed at preventing
health problems. As such they are directed at social and therefore behavioural
change. The relevant evaluations should highlight the favourable conditions
needed to bring about such change. To do justice to the work performed on our
behalf, the FOPH should consider the evaluation recommendations and act
upon the results in terms of:
- What conditions does the evaluation suggest are needed to help bring about change?
- What can be done to create the conditions needed to bring about such change?
- Which groups/institutions/associations/organisations are in the best position to help this process?
- What measures might be adopted which are feasible and appropriate to engage their co-operation and support?
Written articles alone may not be sufficient! Information should also be presented viva voce to target evaluation audiences. Workshops, meetings and seminars should be organised by the FOPH to get the message over to those who can act on the information. This is where the valorisation budget provided in the original evaluation contract can finally be exploited! Be imaginative about using the evaluation to its best advantage!
Suitable modes of reporting
Using evaluation results effectively: the task of the Federal Office of Public Health
Mistakes to be avoided
- Assuming the evaluation work stops when the report is completed and delivered. (FOPH and evaluators)
- Neglecting to determine which findings might benefit which target group: what key messages are there for which audience groups? This should be systematically worked out between the evaluators, FOPH staff and the key project staff.
- Neglecting to address the political audiences: which decision-makers should know about which evaluation findings? (FOPH)
- Not agreeing who will do what to promote the results. Evaluators should provide the right material (e.g. articles, oral presentations etc.), but the sponsors and project partners need to organise, co-ordinate and ensure that results get fed to the right decision-makers and other interested parties.
- Thinking that publications alone will suffice. Promoting the findings means adopting interactive strategies to present and discuss the evaluation findings with those who can best act upon the results. (FOPH)
- Neglecting to present and discuss findings in a manner appropriate to the target audience. The messages need to be clear, to the point and in the cultural style of the target group. For example, the style used to address a scientific audience will not be appropriate when addressing a parents' association. (evaluators)
Checklist 5.1: Identifying Key Messages, Key Target Groups and Appropriate Ways to Tell Them

If evaluations are to be useful, they need to highlight (a) what has been learned to improve prevention planning and implementation, and (b) how it affects the work of all those involved in this process. These lessons should not be buried in the text, but brought out clearly in the conclusions, recommendations and/or options proposed.

Once the evaluation report is written, it is up to the Federal Office of Public Health to translate the evaluation messages into effective action. To do this, we have to decide what follow-up action is needed, by whom, and when. We should therefore consider carefully the points set out in this checklist.

- Have we determined what is important in the evaluation findings for prevention planning?
- Have we resolved what is important for prevention implementation?
- Have we distinguished the likely effects on the work of the different groups involved?
- Have we considered what action might be taken as a result?
- Have we worked out what would need to be done to have this happen?
- Have we identified which people/groups would be the most effective for getting things done?
- Have we determined how best to get the message over to each of these groups?
- Have we identified who would be the most appropriate person/group to convey the message?
- Have we defined what help will be needed (human/financial resources)?
- Have we prioritised which people/groups should be approached (strategic planning from the ideal to the feasible)?
Annex 1: References and Recommended Reading List

Joint Committee on Standards for Educational Evaluation: The Program Evaluation Standards, 2nd Edition, Sage Publications, NY, 1994.

Berk, Richard A. and Rossi, Peter H.: Thinking About Program Evaluation, Sage Publications, London, 1990.

Bussmann, Werner: Accompagner et mettre au point avec succès les évaluations des mesures étatiques: Guide de réflexion, Editions Georg S.A., Geneva, 1995 (French version).

Bussmann, Werner: Evaluationen staatlicher Massnahmen erfolgreich begleiten und nutzen: Ein Leitfaden, Verlag Rüegger AG, Chur/Zurich, 1995 (German version).

Fitz-Gibbon, Carol Taylor and Morris, Lynn Lyons (Eds): Program Evaluation Kit, 10th Edition, Sage Publications, London, 1995.

Fetterman, David M.; Kaftarian, Shakeh J. and Wandersman, Abraham (Eds): Empowerment Evaluation: Knowledge and Tools for Self-Assessment & Accountability, Sage Publications, NY, 1995.

Freeman, Howard E.; Rossi, Peter H. and Sandefur, Gary D.: Workbook for Evaluation: A Systematic Approach, 5th Edition, Sage Publications, London, 1993.

Fink, Arlene: Evaluation Fundamentals: Guiding Health Programs, Research and Policy, Sage Publications, London, 1993.

Guba, Egon G. and Lincoln, Yvonna: Fourth Generation Evaluation, Sage Publications, London, 1989.

Hammersley, Martyn: Social Research: Philosophy, Politics and Practice, Sage Publications, London, 1994.

House, Ernest R.: Professional Evaluation: Social Impact and Political Consequences, Sage Publications, London, 1993.

Imfeld, Josef et al.: Manuel de l'auto-évaluation / Externe Evaluation von Entwicklungsprojekten, Direction de la coopération au développement et de l'aide humanitaire, Service Evaluation, Bern, 1990 (German version) and 1994 (French version).

Parlett, Malcolm and Hamilton, David: Evaluation as Illumination: A New Approach to the Study of Innovatory Programmes, University of Edinburgh, Occasional Paper, 1972.

Patton, Michael Quinn: Utilization-Focused Evaluation, 2nd Edition, Sage Publications, London, 1986.

Patton, Michael Quinn: Qualitative Evaluation and Research Methods, 2nd Edition, Sage Publications, London, 1990.

Rossi, Peter H. and Freeman, Howard E.: Evaluation: A Systematic Approach, 5th Edition, Sage Publications, London, 1993.

Scrimshaw, Nevin and Gleason, Gary (Eds): Rapid Assessment Methodologies for Planning and Evaluation of Health Related Programmes, INFDC, New York, 1992.

Scriven, Michael: Evaluation Thesaurus, 4th Edition, Sage Publications, London, 1991.

Vogt, W. Paul: Dictionary of Statistics and Methodology: A Nontechnical Guide for the Social Sciences, Sage Publications, NY, 1993.

Working Party on Policy and Program Evaluation: Multi-language Evaluation Glossary, IIAS, English Edition, 1994, originating from the European programme MEANS (Méthodes d'Evaluation des Actions de Nature Structurelle).

Worthen, Blaine R. and Sanders, James R.: Educational Evaluation: Alternative Approaches and Practical Guidelines, Longman Inc., NY, 1987.
Annex 2: Glossary of Evaluation Terms

Given that EVALUATION is a relatively new discipline, there is, as yet, no one widely agreed-upon set of EVALUATION terms. Yet the meanings of words are critical because they influence what we do and how we do it. We have therefore provided definitions for the key terms we use in the EVALUATION Unit of the Federal Office of Public Health to convey what we understand by EVALUATION: its tasks, work and responsibilities. The glossary deals with evaluation terms only; it does not deal with those of statistical METHODS, since these are well-known, standardised terms.

For the most part, we have drawn upon existing definitions from a range of sources, but mainly from those developed in the field of PROGRAMME EVALUATION (see Reading Reference List, Annex 1). These have, however, where necessary, been adapted to suit the specific work of the Federal Office of Public Health.

To minimise confusion, EVALUATORS working under contract with the Swiss Federal Office of Public Health are urged to base their use of EVALUATION terms on the definitions supplied in this Glossary.

N.B. Terms which are cross-referenced in the glossary are indicated in small capital letters, e.g. CROSS-REFERENCED TERMS.
Achievement(s) (of the Intervention)
DE - Erreichtes
FR - réalisations
Refers to what the intervention has been able to achieve overall: its OUTPUTS, its RESULTS, IMPACT, etc.

Aim
DE - Gesamtziel/übergeordnetes Ziel
FR - but
This is the general statement about what the PROJECT/PROGRAMME/activity etc. is attempting to achieve. It is the overall (often long-term) end GOAL.

Assessment
DE - Leistungsabschätzung/Bewertung
FR - examen
The process of judging performance/RESULTS based on predetermined criteria. Similar to EVALUATION, but narrower in FOCUS. For example: assessing student performance through such measures as specific assignments and/or the RESULTS of standardised tests.

Audit
DE - Controlling
FR - audit, révision
An AUDIT checks that the means used to produce RESULTS were put into practice according to professional rules and standards. It does not comment on, nor question, the quality, RELEVANCE, EFFECTIVENESS, etc. of the IMPACTS or RESULTS of a measure.

Auto-evaluation
DE - Auto-Evaluation
FR - auto-évaluation
Synonymous with SELF-EVALUATION.

Bias
DE - Bias/Verzerrung
FR - biais
The extent to which a measurement (e.g. use of a t-test), a sampling technique or an analytic METHOD (e.g. analysis of QUALITATIVE DATA) systematically yields non-VALID RESULTS. This can arise from errors occurring during any of the processes from the planning of a study through to the interpretation of its RESULTS and the conclusions reached. Errors arise through subjectivity, prejudice, technical and/or METHODological mistakes.

Content analysis
DE - Inhaltsanalyse
FR - analyse de contenu
The process of systematically examining the content of a volume of material (written documents, films, pictures etc.) or procedures and practices (e.g. tasks and performance in the classroom, doctors' consultation rooms etc.) in order to determine key characteristics. (see also DOCUMENTARY ANALYSIS)

Deductive approach
DE - Deduktiver Ansatz
FR - approche déductive
An approach based on testing a pre-conceived hypothesis (very often experimental) in order to draw conclusions about its VALIDITY and/or GENERALISABILITY.

Delphi technique/survey
DE - Delphi-Technik
FR - méthode/technique delphi
Procedure used for problem solving/EVALUATION based on achieving consensus through the repeated drafting of a written paper by members of an expert group. The process is characterised by the co-ordinated work of a group of recognised experts who provide individual (i.e. non-peer-influenced), written feedback on an initial document; the SYNTHESIS, ordering and ranking of responses; and a repeated redrafting of the background paper for re-circulation by a co-ordinator. The process continues until consensus is reached.

Documentary analysis
DE - Dokumentenanalyse
FR - analyse de documents
Systematic analysis of the CONTENT of (written) documents, e.g. memos, minutes, PROJECT descriptions, training curricula, etc.

Effect
DE - Effekt
FR - effet
Any change, intended or unintended, which can be attributed to the intervention being evaluated. Synonymous with OUTCOME, RESULT, IMPACT. Examples of unintended EFFECTS are the ripple EFFECT, halo EFFECT, Hawthorne EFFECT, etc.

Effectiveness
DE - Effektivität
FR - effectivité
A measure of the extent to which an activity, strategy, PROJECT etc. induces change that may or may not be part of the original AIMS and OBJECTIVES. It is essentially a measure of GOAL ACHIEVEMENT.

Efficiency
DE - Effizienz
FR - efficience/rendement
A measure of how well resources (human, financial, material etc.) are used to produce desired OUTCOMES and/or OUTPUTS. Includes the analysis of the input-output cost ratio. Implies the absence of wastage in the process of achieving GOALS. Efficiency analysis tries to answer the question: is it possible to produce more OUTPUTS using fewer inputs, or using alternative, less expensive ones?
Evaluation
DE - Evaluation
FR - évaluation
The systematic collection and analysis of information, not necessarily routinely available, on various aspects of a given object of study (such as a specific PROJECT, PROGRAMME, intervention etc.) to enable its critical appraisal. In short, the process of determining the value, merit, justification and/or worthiness of something.

Evaluator
DE - Evaluator/in
FR - évaluateur/trice
The person/team conducting the EVALUATION.

Evaluability appraisal
DE - Machbarkeitsstudie (der Evaluation!)
FR - étude de faisabilité (de l'évaluation!)
Analysis of the feasibility of answering the EVALUATION questions using a proposed design or procedure, and/or the feasibility of answering the questions per se. In short, checking to see that what is planned can actually be done.

External evaluation
DE - Externe Evaluation
FR - évaluation externe
EVALUATION by EVALUATORS who are responsible neither for the financing, nor for the managing or implementation of the intervention under study. In short, EVALUATION by those who have no personal, financial or other self-interest in the object being evaluated.

Feasibility study
DE - Machbarkeitsstudie
FR - étude de faisabilité
The analysis of the likelihood that an intervention can be implemented as planned. Includes the study of contextual conditions as well as the characteristics and resources of the intervention under plan. A FEASIBILITY STUDY does not test for EFFECTIVENESS, EFFICIENCY or desirability. (see PILOT STUDY)

Fields (of evaluation)
DE - Anwendungsbereiche
FR - domaines (d'évaluation)
The major FIELDS of EVALUATION are: product, personnel, performance, PROJECT/PROGRAMME and POLICY EVALUATIONS. Whilst each has been practised for some decades, the development and discussion of METHODological issues is much more recent. PROGRAMME EVALUATION is one of the most developed in terms of METHODOLOGY and theory.

Findings (of evaluation)
DE - Befunde
FR - résultats/constatations de l'évaluation
The sum total of what an EVALUATION finds out about the intervention under analysis, e.g. the PROJECT's context, EFFECTS/RESULTS, IMPACTS, processes, EFFICIENCY etc.

Focus
DE - Fokus
FR - point focal/point de focalisation
The area or aspect(s) on which the EVALUATION and its analysis will concentrate. For example, the EVALUATION of a school health education PROGRAMME may choose to FOCUS on the acceptability of the PROGRAMME to different groups rather than on its end RESULTS. Equally, it may focus on the RELEVANCE or EFFICIENCY of the PROGRAMME. It may well choose to FOCUS on a much wider SCOPE.

Formative evaluation
DE - Formative Evaluation
FR - évaluation formative (pas de terme équivalent ni en allemand ni en français)
Conducted during the development of an intervention or strategic measure with the intent to improve performance by means of continuous feedback to key STAKEHOLDERS. Usually produces reports for internal use. FORMATIVE EVALUATION is contrasted with SUMMATIVE EVALUATION.

Generalisability
DE - Generalisierbarkeit
FR - généralisabilité
The degree to which information about a tested group or setting may be extrapolated to the greater POPULATION or to different settings. GENERALISABILITY is directly linked to external VALIDITY, in that non-valid data will produce non-generalisable FINDINGS.

Global evaluation
DE - Globalevaluation
FR - évaluation globale
This refers to the EVALUATION of a total prevention package: the global strategy, measures and actions taken towards obtaining the prevention package's overall AIMS and OBJECTIVES.

Goal
DE - Gesamtziel/übergeordnetes Ziel
FR - but
Synonymous with AIM.
Goal-free evaluation
DE - Zieloffene Evaluation
FR - évaluation sans objectifs déclarés
In its pure form, the EVALUATOR is not told the AIM and OBJECTIVES of the PROGRAMME/PROJECT/activity etc. under EVALUATION, so that s/he is free to judge what is going on and what is being achieved without being influenced by any predetermined criteria.

Holistic approach
DE - Holistischer Ansatz
FR - approche holistique
Philosophical tenet arguing the necessity to consider the whole. Grounded in the belief that the whole is greater than and different from the sum of its parts. It rejects the feasibility and usefulness of breaking down the whole into isolated parts (as in Gestalt psychology).

Impact
DE - Wirkung
FR - impact
In EVALUATION terms, this refers to the sum total of the individual RESULTS and EFFECTS/OUTCOMES of an intervention or measure, be they intended or unintended. IMPACT analysis can limit itself in time, e.g. to immediate EFFECTS, and in FOCUS, e.g. to the target POPULATION. It can, however, broaden its analysis in terms of (a) time, e.g. examining EFFECTS over the medium to longer term, and (b) FOCUS, e.g. going beyond the directly targeted POPULATION. (In market research, IMPACT EVALUATION, e.g. of a campaign, is usually restricted to an analysis of its visibility, acceptability, recall etc.)

Indicator
DE - Indikator
FR - indicateur
An INDICATOR serves as a proxy measure for information about a phenomenon which in itself is not directly measurable. For example: the total amount of alcohol consumed per capita in a country over a year indicates the degree of alcohol abuse. In general, indicators represent one class of data only.

Inductive approach
DE - Induktiver Ansatz
FR - approche inductive
Generates hypotheses from and during field work ("grounded theory"; see Glaser & Strauss, The Discovery of Grounded Theory: Strategies for Qualitative Research, Weidenfeld & Nicolson, London, 1968). Hypotheses are formulated on the basis of the data gathered, as opposed to gathering data in order to test a preconceived hypothesis (DEDUCTIVE APPROACH).

Interview
DE - Interview
FR - interview/entretien
Technique used to draw verbal information from an individual/group about a predetermined topic. Can be structured (i.e. asking standardised questions which elicit only responses which are pre-determined and of limited range), semi-structured (i.e. the range of questions is pre-determined but the way they are asked and/or the expected responses are not necessarily so), or unstructured (i.e. no standardisation of open-ended questions, sequence and responses, but centred around a pre-determined topic(s)).

Meta-analysis
DE - Meta-Analyse
FR - méta-analyse
The overall analysis of information arising from several studies on a similar topic/field of interest, involving, as the first step, the standardisation of the relevant information. Analysis therefore takes place once the disparate information is standardised and thereby transformed into comparable values. To a large extent, it relies on a SYNTHESIS of other studies/EVALUATIONS.

Meta-evaluation
DE - Meta-Evaluation
FR - méta-évaluation
EVALUATION of others' EVALUATIONS: an analysis of other EVALUATION studies. It provides a critical analysis of how well EVALUATION studies have been conducted.

Methodology
DE - Methodologie/Vorgehensweise
FR - méthodologie
The working plan (theoretical framework and design) for organising and conducting the selection, collection and analysis of data, including the approach/strategy to be used (e.g. conventional, positivist, interpretative, naturalistic, phenomenological etc.) and the choice of METHODS (e.g. INTERVIEWS, survey, observation, etc.).

Method
DE - Methode
FR - méthode
Formalised technique used for collecting, organising or processing data, e.g. semi-structured INTERVIEW, QUESTIONNAIRE, observation, CONTENT ANALYSIS, multiple regression analysis, multi-criteria analysis, etc.

Monitoring
DE - Monitoring
FR - monitoring/surveillance
In the context of PROJECT/PROGRAMME EVALUATION, the routine checking of progress against plan, for example the annual counting of participants on a given course. Normally MONITORING activities do not question the OBJECTIVES, processes or actions involved.

Objective
DE - Ziel/Zielsetzung
FR - objectif
A set of discrete, specific and measurable sub-GOALS which need to be attained in order to achieve the end GOAL. They should be SMART, i.e. specific, measurable, appropriate, realistic and attainable within a defined time period.
Outcome (of the project/intervention)
DE - Resultat
FR - résultat
Synonymous with EFFECT when referring to the individual and/or sum of the EFFECTS/RESULTS (of the intervention). Mainly refers to immediate, post-treatment EFFECTS, but one should consider the medium- and longer-term OUTCOMES too. (See also IMPACT)

Outputs
DE - Output/Produkt
FR - produits
The activities, goods and services directly produced by an intervention/EVALUATION, e.g. brochures, reports, workshops, a hotline service, a computer program etc.

Pilot project/study
DE - Pilotprojekt/-Studie
FR - projet/étude pilote
A PROJECT/study intended to trial its practicability in a real setting (not to be confused with FEASIBILITY).

Population
DE - Population
FR - population
The total number of subjects/elements from which a SAMPLE is drawn, or about which a conclusion is stated.

Process evaluation
DE - Prozessevaluation
FR - évaluation de processus
Concentrates specifically on the implementation aspects of an intervention. It is less concerned with inputs, OUTPUTS, OUTCOMES etc. than with the procedures, practices and operations used to attain the project's OBJECTIVES.

Programme
DE - Programm
FR - programme
A collection of co-ordinated PROJECTS, measures, processes or services aimed at achieving a set of common OBJECTIVES. A PROGRAMME is delimited in terms of time, SCOPE and budget.

Project
DE - Projekt
FR - projet
A PROJECT consists of a set of similar, co-ordinated activities aimed at achieving pre-defined GOALS. It too is limited to a defined period of time, SCOPE and budget. Often it is a means of testing an innovative approach or measure ultimately to be used as part of a wider PROGRAMME.

Qualitative data
DE - Qualitative Daten
FR - donnée qualitative
Data in the form of words, images, maps, sounds.

Quantitative data
DE - Quantitative Daten
FR - donnée quantitative
Numerical data.

Questionnaire
DE - Fragebogen
FR - questionnaire
A list of questions to be asked, often with pre-determined wording and sequence. The respondent may or may not be required to give structured responses. Answers may be given in writing or orally. Can be structured or semi-structured. (see INTERVIEW)

Relevance
DE - Relevanz
FR - pertinence
The degree to which a measure or action etc. matches an identified need.

Reliability
DE - Reliabilität/Zuverlässigkeit
FR - fiabilité
Refers to the consistency of the RESULTS yielded when the same process and METHODS are used during repeated applications and/or by different observers. Not to be confused with VALIDITY.

Representativeness
DE - Repräsentativität
FR - représentativité
The degree to which an observation made on a SAMPLE applies to the system/the POPULATION as a whole.

Result(s)
DE - Resultat/Ergebnis
FR - résultat
Refers to the intended/unintended changes RESULTing from an intervention. Similar to EFFECTS, OUTCOME. See also FINDINGS (for EVALUATION RESULTS).

Sample
DE - Stichprobe
FR - échantillon
A group of subjects/items selected from a larger group. Studying the smaller group (the SAMPLE) is intended to reveal important things about the larger group (the POPULATION).
Scope
DE - Reichweite
FR - portée
The breadth of what will be taken into account by the EVALUATION, e.g. what issues and aspects will be addressed, which (sub)groups will be observed/INTERVIEWed and over what time period, etc.

Secondary analysis
DE - Sekundäranalyse
FR - analyse secondaire
The re-working and analysis of existing data and/or reconsideration of its interpretation and FINDINGS.

Secondary evaluation
DE - Sekundärevaluation
FR - évaluation secondaire
Similar to SECONDARY ANALYSIS (reanalysis of original EVALUATION data and FINDINGS) but integrates new data as and when needed. Its aim is to produce a new EVALUATION of a particular PROJECT, often by broadening the SCOPE or depth of the original analysis. (Compare with META-EVALUATION.)

Self-evaluation
DE - Selbstevaluation
FR - auto-évaluation
An EVALUATION by those who are administering and/or managing a PROGRAMME/PROJECT/intervention etc. in the field.

Stakeholders
DE - Beteiligte/Betroffene
FR - protagonistes = les stakeholders directement impliqués (pas de terme universel qui engloberait aussi ceux qui ne sont pas directement impliqués par le projet)
Individuals, groups or organisations who have a defined interest in the activity being evaluated and are therefore held to be to some degree at risk with it (e.g. PROGRAMME staff, sponsors and others not necessarily involved in its day-to-day operation). Can also include the intervention's direct and indirect recipients (e.g. TARGET GROUP, family of TARGET GROUP, taxpayers, etc.).

Summative evaluation
DE - Bilanz-Evaluation
FR - évaluation sommative (pas de terme équivalent en français)
An EVALUATION carried out during the concluding phase of a PROJECT/PROGRAMME/activity etc., with the intention of passing a judgement that contributes towards decision-making on the future of the PROJECT etc. Compare and contrast with FORMATIVE EVALUATION.

Synthesis
DE - Synthese
FR - synthèse
Combining the FINDINGS of multiple studies into one overall picture. In EVALUATION this is most often done by compounding a set of criteria/INDICATORS/performances on several dimensions and attributing an overall judgement. (See also META-ANALYSIS.)

Target group/population
DE - Zielgruppe/Zielpopulation
FR - groupe cible/population cible
Those groups/individuals at which the health intervention, measure, strategy etc. is aimed.

Triangulation
DE - Triangulation
FR - triangulation
The use of several different instruments (e.g. observation, INTERVIEWS, tests, etc.) and/or classes of respondents (e.g. managers, participants, sponsors, etc.) to obtain information about the same aspect/subject (e.g. acceptability/RELEVANCE etc. of a PROJECT).

Validity
DE - Validität
FR - validité
Refers to the degree to which whatever is claimed holds true. For example, a test is valid if it measures what it purports to measure. Valid EVALUATIONS are ones that take into account all relevant factors, given the whole context of the EVALUATION (particularly including the client's needs), and appraise them appropriately in the synthesis process. (See Scriven, 1991.)

Valorisation
DE - Valorisierung
FR - valorisation
The combination of activities used to make EVALUATION FINDINGS known (dissemination) and translated into practical use (thereby adding value).
Annex 3: Characteristics of Conventional and
Interpretive Approaches to Social Scientific
Research and Evaluation
Credit due to: Yvonna Lincoln & Egon Guba, Bogdan & Biklen, Michael Quinn Patton and Ray Rist
Associated phrases
Conventional: experimental, quantitative, outer (etic) perspective, social facts, statistical, predictive, a priori, deductive
Interpretive: ethnographic, field work, qualitative, inner (emic) perspective, descriptive, ecological, phenomenological, emergent, inductive, holistic

Key concepts
Conventional: variable, operationalisation*, hypothesis, reliability, validity, replication, statistical significance
Interpretive: understanding, meaning, social construction, everyday life, verstehen, confirmability, working hypotheses

Associated names
Conventional: A. Comte, J. Mill, Emile Durkheim, Donald Campbell, Lee Cronbach, Peter Rossi, L. Guttman, Thomas Cook, Gene Glass, Robert Travers, Fred Kerlinger, Robert Bales, Edward Thorndike, Julian Stanley, Ralph Tyler
Interpretive: Dilthey, H. Rickert, Max Weber, Estelle Fuchs, Charles Cooley, Herbert Blumer, Everett Hughes, Harold Garfinkel, Margaret Mead, Erving Goffman, Rosalie Wax, Eleanor Leacock, George H. Mead, Barney Glaser, C. Wright Mills, William Filstead, Ray Rist, Malcolm Parlett, Egon Guba, Robert Stake, Yvonna Lincoln, Robert Burgess, Howard Becker

Associated disciplines
Conventional: behavioural psychology, economics, sociology, political science (experimental physics)
Interpretive: anthropology, sociology, history (ethnography)

* The conventional approach to social scientific inquiry is still practised by many social scientists and still viewed as real science by many consumers of evaluation and research results. This is despite the fact that major tenets of conventional social science have been found untenable within the philosophy of science. The most important of these major tenets have been asterisked in this handout.

Design:

Purpose
Conventional: develop and test theories; establish the facts; predict and control phenomena
Interpretive: describe multiple realities; explain, develop experiential understanding; develop grounded theory

Basis
Conventional: goals, objectives, hypotheses
Interpretive: subject/audience concerns and issues, activities and interactions

When developed
Conventional: beginning of study
Interpretive: continuously evolving

Nature
Conventional: pre-ordinate, structured, formal, specific
Interpretive: emergent, evolving, flexible, general

Style
Conventional: intervention, manipulation
Interpretive: selection, participation

Sample
Conventional: large, stratified, randomly selected
Interpretive: small, non-representative, theoretically or purposively selected

Setting
Conventional: laboratory (context unrelated*)
Interpretive: nature, field (context relevant)

Treatment
Conventional: stable, fixed
Interpretive: variable, dynamic
Control
Conventional: high: of antecedents, extraneous variables, possible outcomes
Interpretive: low: holistic understanding sought

Examples
Conventional: experiments, quasi-experiments, survey research
Interpretive: case studies, life histories, ethnographies

Methods:

Nature
Conventional: predetermined, structured, standardised, objective, quantitative
Interpretive: open-ended, unstructured, variable, subjective, qualitative

Focus
Conventional: reliability, replication
Interpretive: validity, meaning

Specification of data collection/analysis rules
Conventional: before inquiry
Interpretive: during and after inquiry

Researcher/evaluator role
Conventional: stimulator of subjects to test critical performance, taking readings
Interpretive: stimulated by subjects and their activities, negotiations and interactions

Researcher/evaluator relationship to data
Conventional: distant, detached
Interpretive: close, involved

Researcher/evaluator relationship to subjects
Conventional: circumscribed, short-term, detached, distant
Interpretive: empathetic, trustworthy, egalitarian, intense contact

Instruments/techniques
Conventional: paper-and-pencil or physical devices, e.g. questionnaires, checklists, scales, computers, tests, structured interviews and observations
Interpretive: researcher/evaluator, interviews, observations (tape recorder, transcriber)

Analysis:

Data
Conventional: numerical, coded, counts and measures, succinct, systematic, standardised
Interpretive: descriptions, records and documents, observational field notes, photographs, people's own words, direct quotations

Nature
Conventional: componential, explanatory, reductionist
Interpretive: holistic, descriptive, depth and detail

Units
Conventional: variable
Interpretive: patterns in context

Analysis
Conventional: statistical, deductive, conducted at end
Interpretive: analytical, inductive, ongoing, evolving

Focus
Conventional: uniformity
Interpretive: diversity
Communication of results:

When
Conventional: usually once, at end of study
Interpretive: ongoing, continuous, as needed

How
Conventional: formal, written reports
Interpretive: informal, oral and written portrayals, case studies

Content
Conventional: identification of variables and their interrelationships, numerical plus interpretation
Interpretive: portraying what experience is like, narrative, direct quotations, negotiated constructions

Paradigm:

Affiliated theories
Conventional: structuralism, functionalism, realism, positivism*, behaviourism, logical empiricism*, systems theory
Interpretive: phenomenology, symbolic interactionism, ethnography (culture), ethnomethodology, idealism

Assumptions about:

Reality/truth
Conventional: singular, convergent, fragmentable, exists out there, can be empirically verified and then predicted and controlled
Interpretive: multiple, divergent, interrelated, socially constructed, can be understood through verstehen

Nature of truth statements and generalisations
Conventional: time- and context-free* generalisations, nomothetic statements, focus on similarities
Interpretive: time- and context-bound working hypotheses, idiographic statements, focus on differences

Relationship between facts and values
Conventional: separable*, facts constrain beliefs*, inquiry is value-free*
Interpretive: inextricably intertwined, beliefs determine what should count as facts, inquiry is value-bound

Human nature
Conventional: humans are engaged in a continuing process of interacting with the environment; humans and environment influence each other in ways that are law-governed and thus predictable
Interpretive: humans are intentional beings, directing their energy and experience in ways that enable them to constitute the world in a meaningful form

Human behaviour
Conventional: law-governed, result of concentration of many antecedent variables
Interpretive: wholly context-dependent

Relationship between inquirer and subject of inquiry
Conventional: independent, separable*
Interpretive: interrelated, interactive, not separable

Nature of causal linkages
Conventional: real causes, temporally precedent with effects
Interpretive: mutually simultaneous shaping of and by all entities; all are causes and effects
Postures about:

Quality criteria
Conventional: rigor
Interpretive: relevance

Source of theory
Conventional: a priori
Interpretive: grounded

Stance
Conventional: reductionist
Interpretive: expansionist

Purpose of inquiry
Conventional: verification*; facts, causes, explanation; establish laws that govern human behaviour and link laws into deductively integrated theory
Interpretive: discovery; understanding, verstehen; understand the process by which social reality is created by different people

Knowledge type
Conventional: propositional
Interpretive: propositional, tacit

Value perspective
Conventional: singular*, consensual
Interpretive: pluralistic, divergent

Values in research
Conventional: value-free*; objectivity is critical to reduce bias and the influence of extraneous variables, to enhance replicability
Interpretive: values are an inevitable part of social reality; objectivity as commonality of perspective; subjectivity is important to get involved with subjects, to use personal insight
Annex 4: Evaluation Questions:
An Example of Different Question Types

1. Questions on Relevance
Is the health behaviour model on which the project's intervention is based appropriate for the target group/setting?
Are the project/programme's aims and objectives still relevant? Are they still of the same priority?
Is the intervention being targeted at the right audience?
Is the intervention appropriate for its different target groups?
Is the intervention meeting the target group's needs?

2. Questions on Progress
Is the project/programme being put into operation as planned?
Is there any difference in the understanding of the project/programme's aims and objectives between the different groups involved? If so, how has this influenced the way the project is ultimately being put into practice?
To what extent have any unplanned side effects been taken into account during project/programme implementation?
Is the project/programme receiving positive support from all the various groups concerned?

3. Questions on Effectiveness
Have the objectives been achieved in terms of quality, quantity and time?
To what extent was the achievement the effect of FOPH action?
Has the FOPH stimulated actions and/or measures that would otherwise not have occurred?
To what extent did changes in the environment affect the achievement of project/programme objectives?
To what degree was the intervention implemented according to plan?
Was the project/programme effective in promoting itself to the targeted groups?

4. Questions on Efficiency
Is the intervention the most cost-effective option? What alternatives should be considered?
What are the constraints on using a more cost-effective method?
Do the human and financial resource costs compare favourably with related interventions, e.g. in another area of prevention intervention?
Have the inputs been made according to planned amounts, timing and quality?
What hidden costs have not been taken into account in project/programme budgeting and planning?
Federal Office of Public Health
Evaluation/Research/Education Section
Copyright OFSP/BAG, 1995
Annex 5: Guidelines for Developing or Assessing
Medico-Social Training/Education Projects.
Swiss Federal Office of Public Health, 1995
To FOPH staff responsible for assessing training project proposals; and to persons and institutions submitting training project funding proposals to the FOPH.
Guidelines for Developing or Assessing Medico-Social
Training/Education Projects
General Principles
All FOPH funded medico-social training/education projects for professional or voluntary
workers in the field of HIV/AIDS and/or drug dependence should be designed in line with
public and community health principles.
Among other things, they should:
be developed to meet the needs of the community, the institutions and the individuals included in
the cultural, social and economic context,
take into account the prevailing health and social policies relating to the field to which the training
applies,
take into account future needs and challenges,
encourage interdisciplinary and interprofessional cooperation,
ensure optimal exchange of information between practitioners and researchers,
ensure at least regional coverage,
increase the number of trained practitioners and the quality of the services they provide.
A training/education project should provide answers to the following questions:
Is there a need for the type of training proposed?
Does the project take into account potential resources and available means?
Are the purpose, objectives and content of the project relevant?
Does the proposed teaching method take into account the principles of adult training?
Is evaluation a clearly integral part of the project?
1. A need, or "Is training really necessary?"

Needs can be identified in various ways:
A they can be measured:
by analysing the results of a specific survey, as in a needs assessment (this is a long and exacting task and a project in itself: it requires special skills and quite considerable funds),
by studying and analysing the available literature and on-going research (not forgetting the results of statistical and demographic surveys);
B they can be defined:
by specialists in one field or discipline (development of new techniques, new concepts or methods, a new health problem confronting operational staff, etc.);
C they can be expressed:
by professional associations or unions, by a planning institute, by educationalists, consumers, the trainees themselves or any other person involved.

2. Resources and means, or "Use what you have first"

The (financial and social) cost of the training/educational project should be reasonably proportionate to the funding available and the needs that are to be met.
The project should describe:
A the human and institutional resources available, including specialised teachers or experts in the field, and other existing institutions or programmes in the same field;
B existing conceptual and theoretical resources. Methodological work may well have been done in part or in total in another language or within another context;
C existing material such as documentation, books, brochures, videos. [1]

The real overall cost of the training should be proportionate to the available funds of the organising institution. For example, a small institution should not contemplate investing all its funds in one project.
All possibilities of co-financing and subsidies (including through cantonal and local authorities, professional or consumer associations or foundations) should have been systematically investigated. Do not forget the possibility of premises or logistical services being provided free of charge.
Given the limited funds available from the Confederation and the constant need for training, on no account can the FOPH finance costly projects: it has to assure continuity of training support. A training project does not need to be expensive to be good!
The proposal should include a detailed budget (see Detailed Budget, last page).
The registration fee should not put off potential participants. Employers should therefore be encouraged to pay all or a proportion of the fees as part of their contribution towards the further training/education of their employees. (Different rates may be applied for employer-subsidised and self-paying registrations.)

3. Purpose, aims and content: a simple and comprehensible description

A The purpose of the training must be explained and must be relevant to the needs of the population.
For example, "The project contributes to reducing the incidence of professionally related HIV infections transmitted through blood contact by the systematic application of preventive measures and reduction of risk factors. It is aimed specifically at health carers providing patient home-based care."
Purpose: in terms of its anticipated effects on the target population (those in the care of the training participants);
Relevance: its relation to the health problems of the population at large, and its appropriateness in relation to the resources available.
B The training objectives must be explicit and relevant to the skills required to carry out the function or task(s).
For example, "Participants will be able to provide basic care to patients, in the patient's home, whilst at the same time respecting the application of universal precautions. Each of the measures needed towards this end will be described, explained and discussed. The conditions under which they will be applied will need to be systematically described and put into practice."
General aims: all the knowledge, skills and behavioural attitudes (changes in behaviour!) that the participants will have acquired by the end of the course;
Relevance of the aims: their relevance to the tasks that professional staff will have to fulfil and to the problems with which they will be confronted.
The knowledge and skills taught should be briefly described and explained. They should not be in conflict with the ethics of the professions concerned, or with the doctrine upheld by the FOPH in the relevant field. [2]
In principle, the cost of developing the course should not exceed 5% of the total cost of the project.

4. The method, or "What is important is what the adult has learnt and not what s/he has been taught"

As far as possible, the selection of teaching methods should be based on accepted knowledge and experience in the field of adult education, e.g.:
A focused on learners' needs and the group's existing knowledge,
B aimed at problem solving,
C methods and tools adapted to the learners' work situation and to whatever resources are available.
A good method is a method that: meets learners' needs, is suitable to the knowledge to be imparted, suits the skills of the instructor, and is proportionate to the resources available.
There is no one ideal teaching method. Even straightforward lecturing can well be the best solution in certain cases. A combination of different methods is often the most successful.
A good teaching method is one that truly enables and encourages participants' learning and is not necessarily the most fashionable method of the day or the one with which they are most familiar.

5. Evaluation, or "Choose a really useful evaluation method"

Evaluation is a dynamic process aimed at (i) improving the quality and the relevance of training/education projects; (ii) on-going adaptation of training to meet current needs; and (iii) improving the conditions under which the projects are run.
An evaluation only makes sense if it is useful and has subsequent practicable application. It is not intended merely to justify itself! In other words, don't just prove, but improve! Training project managers are also responsible for determining evaluation needs within the framework of their project. They, and/or the FOPH, may see a need for the project to be evaluated by an outside body. In this case the external evaluation will be planned and commissioned by the FOPH under a separate evaluation budget.
Before choosing an evaluation, it is very important that the following questions be answered:
A what is the point of the evaluation?
B what questions do we want answered?
C what exactly are we going to evaluate: knowledge, attitudes, an action, a strategy, the implementation process? Will this need quantitative/qualitative data?
D who are the stakeholders? i.e. who is the evaluation's target audience?
E how will the evaluation findings be disseminated and their practical application made evident?
There are several ways of evaluating a project; the choice should be based on the answers to the above questions and the means and skills available.
For example, all of the following are evaluations but serve quite different purposes:
evaluation of the training completed by students with a view to awarding them a certificate (evaluation for certification),
evaluation of the training's relevance to the tasks needed in the field,
evaluation of the knowledge acquired by students with a view to modifying the course along the way,
participants' evaluation of the training with a view to improving the course in the future,
evaluation of observed changes in behaviour after one year's application in professional practice,
a teacher's self-evaluation as part of his/her teaching supervision,
evaluation of the project's overall impact to support applications for future funding, e.g. from cantonal authorities,
estimation of the degree of satisfaction among the students, etc.
The cost of evaluation as shown in the budget may exceed 5% of the total only in exceptional cases.
For more detailed information see the Swiss Federal Office of Public Health's Guide to Project and Programme Evaluation Planning, 1996.

Notes:
[1] The proposal should include key references.
[2] For HIV/AIDS see Prevention of HIV/AIDS in Switzerland, Swiss FOPH, 1993; for drug dependence, see Federal Regulation of 20.2.91.
Federal Office of Public Health
Evaluation/Research/Education Section

Training Projects and Programmes: Drafting the Proposal

Proposals should be structured as follows:
Description of the setting in which the programme will take place
Identified needs
Human and institutional resources
Conceptual and theoretical resources
Material resources (teaching and financial)
Purpose of the training
Target group(s) for which the course is designed; what are the criteria/conditions for enrolment?
Educational aims and summary of course content
Teaching methods
Planned evaluation(s): of the learning, of the project itself
Dissemination of evaluation results and plans to make their practical application evident
Type of certification
Practical organisation and timetable: number of hours teaching, dates, premises, etc.; advertising/publicity; other
Detailed budget: total cost and estimated cost per participant; salaries for instructors (hourly remuneration if possible); administrative costs; percentage of total to be spent on course development, publicity and evaluation; income from registration fees, subsidies

For further information see Teaching Guidelines in the Field of Health Care by J.-J. Guilbert, published by the WHO, 1990, or contact us at the Federal Office of Public Health:

FOPH Training Unit:
Marie-Claude Hofner 031 323 88 06
René Stamm 031 323 87 83
Ellen Dobler-Kane 031 323 80 20

FOPH Evaluation Unit:
Marlène Läubli-Loud 031 323 87 61
Margret Rihs-Middel 031 323 87 65
Marianne Gertsch 031 323 88 03

Copyright OFSP/BAG, 1995