
Master of Science in Development Studies

Development Monitoring and Evaluation

MSDS510
Author: Shepard Mutsau
Master of Science in Rural and Urban Planning (UZ)
Master of Science in Development Studies (NUST)
Post Graduate Diploma in Project Planning and Management (UZ)
Bachelor of Arts General (UZ)
Bachelor of Science in Psychology (ZOU)
Executive Certificate in Monitoring and Evaluation (UZ)
Certificate in Community Development (UZ)
Research Fellow (DHS, USA)

Content Reviewer: Takawira Mumvuma


PhD in Economics (ISS, Netherlands)
Master of Science in Economics (UZ)
Bachelor of Science Honours in Economics (UZ)

Editor: Judith Tafangombe


MPhil Special Needs Education (University of Oslo - Norway)
B.Ed. (University of Zimbabwe)
Diploma in Special Needs Education (Moray House College of Education - UK)
Teacher's Certificate (Howard Institute - Zimbabwe)
Published by: The Zimbabwe Open University

P.O. Box MP1119

Mount Pleasant

Harare, ZIMBABWE

The Zimbabwe Open University is a distance teaching and open learning institution.

Year: 2017

Cover design: T. Ndhlovu

Layout : S. Mapfumo

I.S.B.N:

Typeset in Times New Roman, 12 point on auto leading

© Zimbabwe Open University. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the Zimbabwe Open University.
To the student
The demand for skills and knowledge, and the requirement to adjust and change with changing technology, places on us a need to learn continually throughout life. As all people need an education of one form or another, it has been found that conventional education institutions cannot cope with the demand for education of this magnitude. It has, however, been discovered that distance education and open learning, now also exploiting e-learning technology, itself an offshoot of e-commerce, has become the most effective way of transmitting the appropriate skills and knowledge required for national and international development.

Since attainment of independence in 1980, the Zimbabwe Government has spearheaded the development of distance education and open learning at tertiary level, resulting in the establishment of the Zimbabwe Open University (ZOU) on 1 March, 1999.

ZOU is the first, leading, and currently the only university in Zimbabwe entirely dedicated to teaching by distance education and open learning. We are determined to maintain our leading position by both satisfying our clients and maintaining high academic standards. To achieve the leading position, we have adopted the course team approach to producing the varied learning materials that will holistically shape you, the learner, to be an all-round performer in the field of your own choice. Our course teams comprise academics, technologists and administrators of varied backgrounds, training, skills, experiences and personal interests. The combination of all these qualities inevitably facilitates the production of learning materials that teach successfully any student, anywhere and far removed from the tutor in space and time. We emphasize that our learning materials should enable you to solve both work-related problems and other life challenges.

To avoid stereotyping and professional narrowness, our teams of learning materials producers come from different universities in and outside Zimbabwe, and from Commerce and Industry. This openness enables ZOU to produce materials that have a long shelf life and are sufficiently comprehensive to cater for the needs of all of you, our learners in different walks of life. You, the learner, have a large number of optional courses to choose from so that the knowledge and skills developed suit the career path that you choose. Thus, we strive to tailor-make the learning materials so that they can suit your personal and professional needs. In developing the ZOU learning materials, we are guided by the desire to provide you, the learner, with all the knowledge and skill that will make you a better performer all round, be this at certificate, diploma, undergraduate or postgraduate level. We aim for products that will settle comfortably in the global village and compete successfully with anyone. Our target is, therefore, to satisfy your quest for knowledge and skills through distance education and open learning.

Any course or programme launched by ZOU is conceived from the cross-pollination of ideas from consumers of the product, chief among whom are you, the students, and your employers. We consult you and listen to your critical analysis of the concepts and how they are presented. We also consult other academics from universities the world over and other international bodies whose reputation in distance education and open learning is of a very high calibre. We carry out pilot studies of the course outlines, the content and the programme component. We are only too glad to subject our learning materials to academic and professional criticism with the hope of improving them all the time. We are determined to continue improving by changing the learning materials to suit the idiosyncratic needs of our learners, their employers, research, economic circumstances, technological development, changing times and geographic location, in order to maintain our leading position. We aim at giving you an education that will work for you at any time, anywhere and in varying circumstances, and at ensuring that your performance is second to none.

As a progressive university that is forward looking and determined to be a successful part of the twenty-first century, ZOU has started to introduce e-learning materials that will enable you, our students, to access any source of information, anywhere in the world through the internet, and to communicate, converse, discuss and collaborate synchronously and asynchronously with peers and tutors whom you may never meet in life. It is our intention to bring the computer, email, internet chat-rooms, whiteboards and other modern methods of delivering learning to all the doorsteps of our learners, wherever they may be. For all these developments and for the latest information on what is taking place at ZOU, visit the ZOU website at www.zou.ac.co.zw

Having worked as best we can to prepare your learning path, hopefully like John the Baptist prepared for the coming of Jesus Christ, it is my hope as your Vice Chancellor that all of you will experience unimpeded success in your educational endeavours. We, on our part, shall continually strive to improve the learning materials through evaluation, transformation of delivery methodologies, adjustments and sometimes complete overhauls of both the materials and the organisational structures and culture that are central to providing you with the high quality education that you deserve. Note that your needs, the learner's needs, occupy a central position within ZOU's core activities.

Best wishes and success in your studies.

_____________________
Prof. Primrose Kurasha
Vice Chancellor
The Six Hour Tutorial Session At
Zimbabwe Open University
As you embark on your studies with Zimbabwe Open University (ZOU) by open and distance learning, we need to advise you so that you can make the best use of the learning materials, your time and the tutors who are based at your regional office.

The most important point that you need to note is that in distance education and open learning, there are no lectures like those found in conventional universities. Instead, you have learning packages that may comprise the written hard copy modules, the electronic copy modules which are posted on MyVista, tapes, CDs, DVDs and other referral materials for extra reading. All these, including radio, television, telephone, fax and email, can be used to deliver learning to you. As such, at ZOU, we do not expect the tutor to lecture you when you meet him/her. We believe that that task is accomplished by the learning package that you receive at registration. What then is the purpose of the six hour tutorial for each course on offer?

At ZOU, as at any other distance and open learning university, you the student are at the centre of learning. After you receive the learning package, you study the tutorial letter and other guiding documents before using the learning materials. During the study, it is obvious that you will come across concepts/ideas that may not be that easy to understand or that are not so clearly explained. You may also come across issues that you do not agree with, that actually conflict with the practice that you are familiar with. In your discussion groups, your friends can bring ideas that are totally different from yours and arguments may begin. You may also find that an idea is not clearly explained and you remain with more questions than answers. You need someone to help you in such matters.

This is where the six hour tutorial comes in. For it to work, you need to know that:
· There is insufficient time for the tutor to lecture you.
· Any ideas that you discuss in the tutorial originate from your experience as you work on the materials. All the issues raised above are a good source of topics (as they pertain to your learning) for discussion during the tutorial.
· The answers come from you, while the tutor's task is to confirm, spur further discussion, clarify, explain, give additional information, guide the discussion and help you put together full answers for each question that you bring.
· You must prepare for the tutorial by bringing all the questions and answers that you have found out on the topics to the discussion.
· For the tutor to help you effectively, give him/her the topics beforehand so that in cases where information has to be gathered, there is sufficient time to do so. If the questions can get to the tutor at least two weeks before the tutorial, that will create enough time for thorough preparation.

In the tutorial, you are expected and required to take part all the time by contributing in every way possible. You can give your views even if they are wrong (many students may hold the same wrong views, and the discussion will help correct the errors); they still help you learn the correct thing as much as the correct ideas do. You also need to be open-minded, frank and inquisitive, and should leave no stone unturned as you analyze ideas and seek clarification on any issues. It has been found that those who take part in tutorials actively do better in assignments and examinations because their ideas are streamlined. Taking part properly means that you prepare for the tutorial beforehand by putting together relevant questions and their possible answers, and those areas that cause you confusion.

Only in cases where the information being discussed is not found in the learning package can the tutor provide extra learning materials, but this should not be the dominant feature of the six hour tutorial. As stated, it should be rare because the information needed for the course is found in the learning package, together with the sources to which you are referred. Fully-fledged lectures can, therefore, be misleading, as the tutor may dwell on matters irrelevant to the ZOU course.

Distance education, by its nature, keeps the tutor and student separate. By introducing the six hour tutorial, ZOU hopes to help you come in touch with the physical being who marks your assignments, assesses them, guides you on preparing for writing examinations and assignments, and who runs your general academic affairs. This helps you to settle down in your course, having been advised on how to go about your learning. Personal human contact is, therefore, upheld by ZOU.

The six hour tutorials should be so structured that the tasks for each session are very clear. Work for each session, as much as possible, follows the structure given below.

Session I (Two Hours)


Session I should be held at the beginning of the semester. The main aim of this session is to guide you, the student, on how you are going to approach the course. During the session, you will be given an overview of the course, guidance on how to tackle the assignments, how to organise the logistics of the course, and the formation of the study groups that you will belong to. It is also during this session that you will be advised on how to use your learning materials effectively.

Session II (Two Hours)


This session comes in the middle of the semester to respond to the challenges, queries, experiences, uncertainties and ideas that you encounter as you go through the course. In this session, difficult areas in the module are explained through the combined effort of the students and the tutor. It should also give direction and feedback where you have not done well in the first assignment, as well as reinforce those areas where performance in the first assignment is good.

Session III (Two Hours)


The final session, Session III, comes towards the end of the semester. In this session, you polish up any areas that you still need clarification on. Your tutor gives you feedback on the assignments so that you can use the experience to prepare for the end of semester examination.

Note that in all three sessions, you identify the areas in which your tutor should give help. You also take a very important part in finding answers to the problems posed. You are the most important part of the solutions to your learning challenges.

Conclusion

In conclusion, we should be very clear that six hours is too little for lectures, and that it is not necessary, in view of the provision of fully self-contained learning materials in the package, to turn the little time into lectures. We, therefore, urge you not only to attend the six hour tutorials for this course, but also to prepare yourself to contribute in the best way possible so that you can maximally benefit from them. We also urge you to avoid forcing the tutor to lecture you.

BEST WISHES IN YOUR STUDIES.

ZOU
Contents

Overview __________________________________________________ 1

Unit One: Introduction to Project Monitoring and Evaluation

1.1 _______ Introduction __________________________________________________ 5


1.2 _______ Unit Objectives _______________________________________________ 6
1.3 _______ History of Monitoring and Evaluation ____________________________ 6
1.4 _______ Monitoring and Evaluation ______________________________________ 7
1.5 _______ The Distinction between Monitoring and Evaluation ________________ 9
__________ 1.5.1 Similarities between monitoring and evaluation ______________ 10
__________ Activity 1.1 __________________________________________________ 11
1.6 _______ Why Monitoring and Evaluation? _______________________________ 12
__________ 1.6.1 Justification for monitoring and evaluation __________________ 12
__________ 1.6.2 Relationship between monitoring and evaluation _____________ 13
1.7 _______ Guiding Principles for Monitoring and Evaluation _________________ 14
__________ Activity 1.2 __________________________________________________ 15
1.8 _______ Important Concepts in Monitoring and Evaluation ________________ 15
__________ Activity 1.3 __________________________________________________ 21
1.9 _______ Summary ___________________________________________________ 23
__________ References __________________________________________________ 24

Unit Two: Stakeholder Participation in Monitoring and Evaluation

2.1 _______ Introduction _________________________________________________ 25


2.2 _______ Unit Objectives ______________________________________________ 26
2.3 _______ Stakeholders in Monitoring and Evaluation (M and E) _____________ 26
2.4 _______ Participation _________________________________________________ 26
2.5 _______ Stakeholder Engagement in M and E ____________________________ 27
2.6 _______ Rationale for Stakeholder Analysis in M and E ___________________ 27
__________ Activity 2.1 __________________________________________________ 28
2.7 _______ Identification of Stakeholders __________________________________ 28
2.8 _______ Stakeholders Importance and Influence Table ____________________ 29
2.9 _______ Key Stakeholder Weighting Matrix _____________________________ 30
2.10 ______ Stakeholder Importance-Influence Ranking Matrix _______________ 31
2.11 ______ Interpreting the Stakeholder Importance and Influence Matrix _______
__________ (UNDP, 2009) ________________________________________________ 31
__________ Activity 2.2 __________________________________________________ 33
2.12 ______ Orientation and Training of Stakeholders ________________________ 33
__________ 2.12.1 Orientation on the planning process _______________________ 33
__________ 2.12.2 Orientation ____________________________________________ 34
2.13 ______ Other Players in Monitoring and Evaluation ______________________ 35
2.14 ______ Benefits of Participation of Stakeholders ________________________ 36
__________ Activity 2.3 __________________________________________________ 38
2.15 ______ Summary ___________________________________________________ 38
__________ References __________________________________________________ 39

Unit Three: Monitoring and Evaluation Tools

3.1 _______ Introduction _________________________________________________ 41


3.2 _______ Unit Objectives ______________________________________________ 42
3.3 _______ Goal of Monitoring and Evaluation ______________________________ 42
3.4 _______ The Log Frame ______________________________________________ 44
__________ 3.4.1 Definition of the key terms and components of a classic 4 x 5 log _
__________ frame matrix ________________________________________________ 45
__________ Activity 3.1 __________________________________________________ 47
3.5 _______ Indicators ___________________________________________________ 47
__________ 3.5.1 Designing indicators _____________________________________ 47
__________ 3.5.2 Factors to consider when constructing indicators/indicator traps 48
3.6 _______ The Indicator Matrix __________________________________________ 48
3.7 _______ The Use of a Log Frame in M and E _____________________________ 53
__________ 3.7.1 The anatomy of a log frame _______________________________ 53
__________ 3.7.2 Tracing the logic of log frames _____________________________ 53
__________ 3.7.3 Constructing descriptive statements _______________________ 54
__________ Activity 3.2 __________________________________________________ 55
3.8 _______ Indicators ___________________________________________________ 55
__________ 3.8.1 Types of indicators _______________________________________ 56
__________ Activity 3.3 __________________________________________________ 56
3.9 _______ Log Frame Vertical Logic Indicators ____________________________ 57
__________ 3.9.1 Activity indicators _______________________________________ 57
__________ 3.9.2 Output indicators ________________________________________ 57
__________ 3.9.3 Objectives indicators _____________________________________ 57
__________ 3.9.4 Goal indicators __________________________________________ 58
3.10 ______ Sources of Verification ________________________________________ 58
3.11 ______ Advantages of Using a Log Frame ______________________________ 59
__________ Activity 3.4 __________________________________________________ 60
3.12 ______ Summary ___________________________________________________ 60
__________ References __________________________________________________ 61
Unit Four: Monitoring and Evaluation Tools: Gantt Chart

4.1 _______ Introduction _________________________________________________ 63


4.2 _______ Unit Objectives ______________________________________________ 64
4.3 _______ Origins of the Gantt chart _____________________________________ 64
4.4 _______ The Gantt Chart: A Conceptual View ____________________________ 64
__________ 4.4.1 Advantages of a Gantt chart _______________________________ 65
__________ 4.4.2 Disadvantages of Gantt charts _____________________________ 66
__________ Activity 4.1 __________________________________________________ 67
__________ 4.4.3 Steps in creating a Gantt chart using Excel _________________ 68
4.5 _______ Creating a Gantt Chart: Example 2 _____________________________ 92
__________ Activity 4.2 __________________________________________________ 98
4.6 _______ Summary ___________________________________________________ 98
__________ References __________________________________________________ 99

Unit Five: Program Evaluation Review Technique and the Critical Path
Method

5.1 _______ Introduction ________________________________________________ 101


5.2 _______ Unit Objectives _____________________________________________ 102
5.3 _______ Limitations of the PERT ______________________________________ 104
__________ Activity 5.1 _________________________________________________ 105
5.4 _______ Key Element of the Critical Path Method _______________________ 105
5.5 _______ Applying the Program Evaluation Review Technique ______________ 106
5.6 _______ Important Attributes of the PERT Chart ________________________ 108
5.7 _______ Examination and Interpretation of PERT _______________________ 109
5.8 _______ Critical Path Analysis ________________________________________ 110
5.9 _______ Float Determination _________________________________________ 112
5.10 ______ Early Start & Early Finish Calculation __________________________ 113
5.11 ______ Late Start and Late Finish Calculation _________________________ 114
5.12 ______ Question and Solution Activities _______________________________ 114
5.13 ______ Constructing the Network Diagram ____________________________ 116
5.14 ______ The Use of Dummy Activities _________________________________ 118
__________ Activity 5.1 _________________________________________________ 120
__________ Activity 5.2 _________________________________________________ 121
__________ Activity 5.3 _________________________________________________ 122
__________ Activity 5.4 _________________________________________________ 123
5.15 ______ Summary __________________________________________________ 125
__________ References _________________________________________________ 126
Unit Six: Setting Up a Monitoring System: Monitoring and Evaluation
as a Process

6.1 _______ Introduction ________________________________________________ 129


6.2 _______ Unit Objectives _____________________________________________ 130
6.3 _______ Monitoring System __________________________________________ 130
__________ Activity 6.1 _________________________________________________ 131
6.4 _______ Steps in Designing a Monitoring System ________________________ 131
__________ Activity 6.2 _________________________________________________ 135
6.5 _______ Reporting and Communicating M and E Information _____________ 135
6.6 _______ Report Writing _____________________________________________ 136
__________ 6.6.1 Purpose of the report ____________________________________ 136
6.7 _______ Main Components of the Report _______________________________ 137
__________ Activities 6.3 _______________________________________________ 137
__________ 6.7.1 Cover page ____________________________________________ 137
__________ 6.7.2 Summary and recommendations __________________________ 138
__________ 6.7.3 Introduction ___________________________________________ 138
__________ 6.7.4 Objectives and methodology ______________________________ 138
__________ 6.7.5 Findings _______________________________________________ 139
__________ 6.7.6 Discussion and conclusion ________________________________ 139
__________ 6.7.7 Recommendations ______________________________________ 139
6.8 _______ Issues to Remember _________________________________________ 141
__________ Activity 6.4 _________________________________________________ 142
6.9 _______ Summary __________________________________________________ 142
__________ References _________________________________________________ 143

Unit Seven: Performance Measurement and Management

7.1 _______ Introduction ________________________________________________ 145


7.2 _______ Unit Objectives _____________________________________________ 146
7.3 _______ Performance Management ____________________________________ 146
__________ 7.3.1 Dimensions of performance management __________________ 147
__________ 7.3.2 Dimensions of performance management __________________ 147
7.4 _______ Rating System ______________________________________________ 148
__________ Activity 7.1 _________________________________________________ 149
__________ 7.4.1 Key elements of the common rating system ________________ 149
__________ 7.4.2 Rating outputs _________________________________________ 149
7.5 _______ Selecting Indicators__________________________________________ 150
__________ 7.5.1 Key steps in the selection of indicators _____________________ 150
__________ 7.5.2 Use disaggregated data __________________________________ 152
__________ 7.5.3 Involve stakeholders ____________________________________ 152
__________ 7.5.4 Distinguish between quantitative and qualitative indicators ___ 153
__________ 7.5.5 Try to limit the number of indicators ______________________ 153
__________ 7.5.6 Ensure timeliness _______________________________________ 153
__________ Activity 7.3 _________________________________________________ 154
7.6 _______ Other Tips on Selecting Indicators _____________________________ 154
7.7 _______ Using Indicators _____________________________________________ 154
__________ 7.7.1 Involving stakeholders __________________________________ 154
__________ 7.7.2 Using indicators for monitoring ___________________________ 155
__________ 7.7.3 Output indicators _______________________________________ 155
__________ 7.7.4 Impact indicators _______________________________________ 155
__________ Activity 7.4 _________________________________________________ 156
7.8 _______ Summary __________________________________________________ 156
__________ References _________________________________________________ 157

Unit Eight: Typology of Evaluation Approaches

8.1 _______ Introduction ________________________________________________ 159


8.2 _______ Unit Objectives _____________________________________________ 160
8.3 _______ Evaluation Approaches _______________________________________ 160
8.4 _______ Evaluation Approaches Based on Scope _________________________ 160
__________ 8.4.1 Community-based evaluation _____________________________ 161
__________ 8.4.2 Sectoral evaluations _____________________________________ 161
__________ 8.4.3 Geographical evaluations ________________________________ 161
__________ 8.4.4 Policy evaluation _______________________________________ 161
__________ 8.4.5 Programme and project evaluation ________________________ 162
__________ 8.4.6 Product evaluation ______________________________________ 162
__________ 8.4.7 Input evaluation ________________________________________ 162
__________ 8.4.8 Process or ongoing evaluation ____________________________ 162
__________ 8.4.9 Output evaluation ______________________________________ 162
__________ 8.4.10 Outcome evaluation ____________________________________ 163
__________ 8.4.11 Impact evaluation or impact assessment __________________ 163
__________ 8.4.12 Systemic evaluation ____________________________________ 163
__________ 8.4.13 Meta-evaluation _______________________________________ 163
__________ Activity 8.1 _________________________________________________ 164
8.5 _______ Formal Substantive Theory Base Evaluation Approaches __________ 164
__________ 8.5.1 Theory-based evaluation _________________________________ 165
8.6 _______ Deductive Evaluation Approaches ______________________________ 165
__________ 8.6.1 Illuminative evaluation __________________________________ 166
__________ 8.6.2 Realist evaluation ______________________________________ 166
__________ 8.6.3 Cluster evaluation and multisite evaluations _______________ 166
__________ 8.6.4 Goal-free evaluation ____________________________________ 166
__________ 8.6.5 Participatory evaluation _________________________________ 167
__________ 8.6.6 Responsive evaluation ___________________________________ 167
__________ 8.6.7 Naturalistic, constructivist, interpretivist or fourth-generation ___
__________ evaluation __________________________________________________ 167
__________ 8.6.8 Utilisation-focused evaluation ____________________________ 168
__________ 8.6.9 Appreciative inquiry _____________________________________ 168
__________ 8.6.10 Evaluative inquiry _____________________________________ 168
__________ 8.6.11 Critical theory evaluation _______________________________ 169
__________ 8.6.12 Empowerment evaluation _______________________________ 169
__________ 8.6.13 Democratic evaluation __________________________________ 169
__________ Activity 8.2 _________________________________________________ 170
__________ 8.6.14 Evaluation design and methodology ______________________ 170
8.7 _______ The Main Evaluation Designs Applicable to Monitoring and Evaluation __ 170
__________ 8.7.1 Quantitative evaluation approaches _______________________ 170
__________ 8.7.2 Classic experimental design ______________________________ 171
__________ 8.7.3 Quasi-experimental evaluation ___________________________ 171
8.8 _______ Qualitative Evaluation Approaches that are Non- ___________________
__________ Experimental Approaches _____________________________________ 171
__________ 8.8.1 Qualitative evaluation ___________________________________ 172
__________ 8.8.2 Case study evaluation ___________________________________ 172
__________ 8.8.3 Participatory action research _____________________________ 172
__________ 8.8.4 Grounded theory _______________________________________ 172
8.9 _______ Evaluation Justification ______________________________________ 173
__________ 8.9.1 External evaluation _____________________________________ 175
__________ Activity 8.3 _________________________________________________ 176
8.10 ______ Evaluation Challenges _______________________________________ 178
__________ Activity 8.4 _________________________________________________ 179
8.11 ______ Summary __________________________________________________ 179
__________ References _________________________________________________ 181

Unit Nine: Data Gathering and Analysis for Monitoring and Evaluation

9.1 _______ Introduction ________________________________________________ 183


9.2 _______ Unit objectives ______________________________________________ 184
9.3 _______ Sampling in Monitoring and Evaluation Context _________________ 184
__________ Activity 9.1 _________________________________________________ 187
9.4 _______ Sources of Data for Monitoring and Evaluation __________________ 187
__________ Activity 9.2 _________________________________________________ 189
9.5 _______ Some Practical Considerations in Planning for Data Collection _____ 189
__________ 9.5.1 Prepare data collection guidelines _________________________ 189
__________ 9.5.2 Pre-test data collection tools _____________________________ 189
__________ 9.5.3 Train data collectors _____________________________________ 190
__________ 9.5.4 Address ethical concerns _________________________________ 190
9.6 _______ Reducing Data Collection Costs _______________________________ 190
9.7 _______ Tools for Data Gathering _____________________________________ 191
9.8 _______ Analysing Information _______________________________________ 195
__________ 9.8.1 Data analysis __________________________________________ 196
9.9 _______ Taking Action _______________________________________________ 197
9.10 ______ Information Reporting and Utilisation __________________________ 197
__________ Activity 9.3 _________________________________________________ 199
9.11 ______ M and E Staffing and Capacity Building _________________________ 199
__________ 9.11.1 Suggestions for ensuring adequate M and E support ________ 199
9.12 ______ Summary __________________________________________________ 200
__________ References _________________________________________________ 201
Unit Ten: Impact Assessment for Monitoring and Evaluation

10.1 ______ Introduction ________________________________________________ 203


10.2 ______ Unit Objectives _____________________________________________ 204
10.3 ______ Impact Assessment __________________________________________ 204
10.4 ______ Importance of Impact Assessment _____________________________ 204
10.5 ______ Determining Whether or Not to Carry Out an Evaluation _________ 205
__________ Activity 10.1 ________________________________________________ 205
10.6 ______ Main Steps in Designing and Implementing Impact Evaluations ____ 205
__________ 10.6.1 Exploring data availability _______________________________ 206
__________ 10.6.2 Key points for identifying data resources for impact evaluation _ 207
__________ 10.6.3 Designing the evaluation _______________________________ 208
__________ 10.6.4 Evaluation question ____________________________________ 208
__________ 10.6.5 Timing and budget concerns _____________________________ 209
__________ 10.6.6 Implementation capacity ________________________________ 209
__________ 10.6.7 Formation of the evaluation team ________________________ 209
__________ Activity 10.2 ________________________________________________ 210
10.7 ______ Responsibilities and Roles of the Team Members ________________ 210
__________ 10.7.1 Evaluation manager ____________________________________ 210
__________ 10.7.2 Policy analysts ________________________________________ 211
__________ 10.7.3 Sampling expert _______________________________________ 211
__________ 10.7.4 Survey designer _______________________________________ 211
__________ 10.7.5 Fieldwork manager and staff ____________________________ 211
__________ 10.7.6 Data managers and processors ___________________________ 212
10.8 ______ Data Development ___________________________________________ 213
10.9 ______ Deciding What to Measure ____________________________________ 213
10.10 _____ Developing Data Collection Instruments and Approaches _________ 213
10.11 _____ Training ____________________________________________________ 214
10.12 _____ Pilot Testing ________________________________________________ 214
10.13 _____ Sampling ___________________________________________________ 214
10.14 _____ Data Instruments ___________________________________________ 216
__________ 10.14.1 Questionnaires _______________________________________ 216
__________ 10.14.2 Fieldwork issues ______________________________________ 216
10.15 _____ Data Management and Access _________________________________ 218
10.16 _____ Analysis, Reporting and Dissemination _________________________ 218
__________ 10.16.1 Content analysis ______________________________________ 219
__________ 10.16.2 Case analysis ________________________________________ 219
__________ Activity 10.2 ________________________________________________ 220
10.17 _____ Summary __________________________________________________ 220
__________ References _________________________________________________ 221

Unit Eleven: Monitoring and Evaluation in the Context of Results


Based Management (RBM)

11.1 ______ Introduction ________________________________________________ 223


11.2 ______ Unit Objectives _____________________________________________ 224
11.3 ______ What is Results Based Management? __________________________ 224
11.4 ______ Putting Planning, Monitoring and Evaluation _______________________
__________ Together: Results Based Management __________________________ 224
__________ Activity 11.1 ________________________________________________ 226
11.5 ______ The Results Based Management Lifecycle Approach ______________ 226
11.6 ______ Planning ___________________________________________________ 227
__________ 11.6.1 Justification for planning _______________________________ 227
11.7 ______ Monitoring _________________________________________________ 228
11.8 ______ Evaluation __________________________________________________ 229
__________ Activity 11.2 ________________________________________________ 229
11.9 ______ Principles of Results-Based Management _______________________ 230
__________ 11.9.1 Ownership ____________________________________________ 230
__________ 11.9.2 Engagement of stakeholders ____________________________ 231
__________ 11.9.3 Focus on results _______________________________________ 231
__________ 11.9.4 Focus on development effectiveness ______________________ 231
__________ 11.9.5 Exploring the link between planning, monitoring and evaluation _ 233
__________ Activity 11.3 ________________________________________________ 233
11.10 _____ Summary __________________________________________________ 234
__________ References _________________________________________________ 235

Unit Twelve: Annex 1: Evaluation Report Outlines

12.1 ______ Introduction ________________________________________________ 237


12.2 ______ Unit Objectives _____________________________________________ 238
12.3 ______ Annex 1. Evaluation Report Outline ___________________________ 238
12.4 ______ Annex 2: A Sample Of A Full Evaluation Report _________________ 242
__________ 12.4.1 A Detailed Explanation of the Evaluation Report ___________ 243
12.5 ______ ANNEX 3: USAID FORMAT __________________________________ 251
12.6 ______ ANNEX 4: ACEIDA REPORT ON WATER AND SANITATION ______ 253
12.7 ______ Summary __________________________________________________ 255
__________ References _________________________________________________ 256
Module Overview

This module will walk you through the basic concepts, principles and major issues that arise in the monitoring and evaluation of development projects. In Unit 1, we start by introducing the concept of monitoring and evaluation. The unit examines the various aspects of the monitoring and evaluation process as it is used in the management of development projects. We also define the concept and characterise it, indicating the similarities and differences of the twin processes of monitoring and evaluation. We end the unit by highlighting other key concepts in monitoring and evaluation practice. In Unit 2, we introduce stakeholder participation in monitoring and evaluation. We demonstrate that monitoring and evaluation is not a one-man job; it is a process which requires several actors and stakeholders. In this unit, we dwell fully on the importance of stakeholders in the monitoring and evaluation process of development projects. We define who the stakeholders are in the monitoring and evaluation process and discuss the methods of identifying stakeholders, as well as the importance of carrying out a stakeholder analysis.

The process of stakeholder engagement is also visited. In Unit 3, we capture the monitoring and evaluation tools used in the trade of monitoring and evaluation of development projects. The unit acknowledges the fact that various tools are used by evaluation practitioners and development managers in the practice of monitoring and evaluation of projects. These include, but are not limited to, budgets, M&E calendars, log frames, Gantt charts, network analysis and work breakdown structures (WBS), among other tools. The unit underscores that these tools help to collect the information necessary for evaluation to take place and that most of them have a dual purpose. In this unit, we look closely at a very important tool in monitoring and evaluation practice called the log frame. We look at the log frame in the context of the project cycle, define it and characterise it. The anatomy of the log frame is visited and the factors to consider in the construction of a logical framework are outlined. The advantages and disadvantages of using the log frame are presented in this unit.

In Unit 4, we discuss monitoring and evaluation tools, paying special attention to the Gantt chart. The conceptual view is covered, and the advantages and disadvantages of Gantt charts in monitoring and evaluation are discussed. The steps involved in creating a Gantt chart using a computer are also covered. Unit 5 covers a related but different monitoring and evaluation tool called the Program Evaluation Review Technique (PERT). In this unit, we cover areas that include, but are not limited to, the application of the PERT, key elements of the Critical Path Method (CPM), important attributes of the PERT chart, examination and interpretation of PERT, float determination, as well as the construction of network diagrams. In Unit 6, we capture the setting up of a monitoring system. Here we look at monitoring and evaluation as a process. We recognise that setting up a monitoring system is as essential as the monitoring and evaluation process itself, because it lays the foundation for the entire system and has a bearing on the quality of the results. In this unit, we walk you through the process of setting up a monitoring system. We cover defining the monitoring system's objectives, selection of relevant information, presentation of results and use of results. Reporting and communication of monitoring and evaluation results, and the report structure, are presented in detail. In Unit 7, we look at the process of performance measurement in the monitoring and evaluation of development projects; essential aspects of performance management are fully covered in this unit.

In Unit 8, we concentrate on the typology of evaluation approaches. In this unit, we present a plethora of evaluation approaches which are at the disposal of development practitioners. Our main task in this unit is to visit the typology of evaluation approaches in the context of the monitoring and evaluation of development projects. Data gathering for monitoring and evaluation is looked at in Unit 9. In this unit, we look closely at the different kinds of methods that can be used to collect information for monitoring and evaluation purposes. We emphasise that you need to select methods that suit your purposes and your resources. We examine various sources of data and also explore various practical considerations in planning for data collection and the tools used in data collection. Sampling issues are revisited, as well as data analysis in the context of monitoring and evaluation. In Unit 10, we visit impact assessment in monitoring and evaluation. This unit covers the importance of impact assessment, determination of when to carry out an evaluation, and the main steps in designing and implementing impact evaluations, amongst other issues. Monitoring and evaluation in the context of Results Based Management (RBM) is discussed in Unit 11. We reiterate that the main emphasis is put on management by results. The principles of RBM are visited; the link between planning, monitoring and evaluation is established and explained. Finally, we provide examples of evaluation report structures in Unit 12.



Unit One

Introduction to Project
Monitoring and Evaluation

1.1 Introduction

Today, development practice has come under pressure for failing to deliver the expected outcomes of development projects. As a result, there has been increased emphasis on the monitoring and evaluation of development projects. In this unit, we introduce the concept of monitoring and evaluation (M&E) and examine the various aspects of the monitoring and evaluation process as it is used in the management of development projects. We then define the concept and characterise it, indicating the similarities and differences of the twin processes of monitoring and evaluation. We end the unit by highlighting other key concepts in monitoring and evaluation practice.

1.2 Unit Objectives


By the end of this unit, you should be able to:
· define the terms monitoring and evaluation
· explain the concept of monitoring
· explain the concept of evaluation
· examine the history of monitoring and evaluation
· examine the differences between monitoring and evaluation
· discuss the principles of evaluation

1.3 History of Monitoring and Evaluation


This unit briefly traces the genealogy of monitoring and evaluation to date as a way of introducing the subject. In this module, monitoring and evaluation is viewed as a robust activity aimed at collecting, analysing and interpreting information for effectively implementing development projects so that they meet their expected outcomes (Rossi, Freeman and Wright, 1993).

Monitoring and evaluation emerged from the general acceptance of scientific methods as a means of dealing with social problems. However, despite historical roots that extend to the seventeenth century, the widespread use of systematic data-based evaluations (SDBE) is a relatively modern development. It is argued that the application of social research methods to education coincides with the growth and refinement of social research methods (Rossi et al., 1993).

Of key importance are the emergence and increased standing of the social sciences in universities and the increased support for social research. Social science research units in universities became centres of early work in program M&E and have continued to occupy a key role in the field.

A strong commitment to the systematic M&E of programs first became common in the areas of education and public health. Early efforts, beginning prior to World War 1 (WW1), were directed at assessing educational programs concerned with literacy and at public health initiatives to reduce mortality and morbidity from infectious diseases. The period following WW1 saw the rise of large-scale programs and is viewed as the boom period of monitoring and evaluation. By the end of the 1950s, large-scale monitoring and evaluation programs were commonplace, especially in the military. According to Nagel (1986), the computer revolution was an important stimulus to the growth of evaluation, as it helped in the efficient collection and storage of data as well as its analysis (Gray, 1988). Since then, more and more methods and computer-aided applications have made monitoring and evaluation a robust project management activity for development projects.

1.4 Monitoring and Evaluation


Several definitions have been advanced by different schools of thought for the concept and practice of monitoring and evaluation. Here we proffer several definitions as tendered by scholars in different fields.

Shapiro (1996) defines monitoring as the systematic collection and analysis of information as a project progresses. Monitoring is aimed at improving the efficiency and effectiveness of a project or organisation. It is based on the targets set and the activities planned during the planning phases of work; it helps to keep the work on track and can let management know when things are going wrong. Monitoring is a tool for good management that provides a useful base for evaluation. It enables you to determine whether the resources you have available are sufficient and are being well used, whether the capacity you have is sufficient and appropriate, and whether you are doing what you planned to do (Shapiro, 1996).

Monitoring can also be defined as a continuing function that aims primarily to provide the management and main stakeholders of an ongoing intervention with early indications of progress, or lack thereof, in the achievement of results. An ongoing intervention might be a project, programme or other kind of support to an outcome (United Nations Development Programme, 2002).

In other words, monitoring is concerned with asking the questions, “Are we taking the actions we said we would take?” and “Are we making progress on achieving the results that we said we wanted to achieve, in the manner we had planned?” In the broader sense, monitoring also involves tracking the strategies and actions being taken by partners and non-partners, and figuring out what new strategies and actions need to be taken to ensure progress towards the results.
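To make these monitoring questions concrete, here is a minimal sketch in Python (all activity names and figures are hypothetical illustrations, not from this module) of the routine comparison of actual progress against planned targets that monitoring involves:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    planned: int  # target units for the reporting period
    actual: int   # units actually delivered so far

def monitor(activities):
    """Flag activities that are behind plan: 'Are we taking the
    actions we said we would take?'"""
    for a in activities:
        progress = a.actual / a.planned if a.planned else 0.0
        status = "on track" if progress >= 1.0 else "behind plan"
        print(f"{a.name}: {a.actual}/{a.planned} ({progress:.0%}) {status}")

monitor([
    Activity("Boreholes drilled", planned=10, actual=7),
    Activity("Health workers trained", planned=40, actual=40),
])
```

Routine checks of this kind feed the early warnings to management that the definitions above describe; they do not by themselves explain why an activity is behind plan, which is where evaluation comes in.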

UNDP (2009) defines monitoring as the ongoing process by which stakeholders obtain regular feedback on the progress being made towards achieving their pre-set goals and objectives. Contrary to other definitions that treat monitoring as merely reviewing the progress made in implementing actions or activities, this definition used by the United Nations Development Programme (UNDP) (2009) focuses on reviewing progress against the achievement of goals and objectives.


Evaluation is the comparison of actual project impacts against the agreed strategic plans. It looks at what you set out to do, at what you have accomplished, and at how you accomplished it. Evaluation can be formative, taking place during the life of a project or organisation with the intention of improving the strategy or way of functioning of the project or organisation. It can also be summative, drawing lessons from a completed project or an organisation that is no longer functioning. Someone once described this as the difference between a check-up and an autopsy! (Shapiro, 1996; Olive, 2002).

According to UNDP (2009), evaluation is a rigorous and independent assessment of either completed or ongoing activities to determine the extent to which they are achieving stated objectives and contributing to decision-making.

The International Union for Conservation of Nature (IUCN) (2004) views evaluation as a broad term that can mean different things in different organisations. It defines evaluation as a periodic assessment, as systematic and impartial as possible, of the relevance, effectiveness, efficiency, impact and sustainability of a policy, programme, project, commission or organisational unit in the context of stated objectives. An evaluation may also include an assessment of unintended impacts.

Evaluations are seen as formal activities comprising applied research techniques to generate systematic information that can help to improve the performance of the secretariat. Evaluation studies are usually undertaken as an independent examination of the background, strategy, objectives, results, activities and means deployed, with a view to drawing lessons that may guide future work (IUCN, 2004).

According to UNDP (2002), evaluation is a selective exercise that attempts to systematically and objectively assess progress towards, and the achievement of, an outcome. Evaluation is not a one-time event, but an exercise involving assessments of differing scope and depth, carried out at several points in time in response to evolving needs for evaluative knowledge and learning during the effort to achieve an outcome. All evaluations, even project evaluations that assess relevance, performance and other criteria, need to be linked to outcomes, as opposed to only implementation or immediate outputs.


1.5 The Distinction between Monitoring and Evaluation

Although the term “monitoring and evaluation” tends to be run together as if it were one thing, monitoring and evaluation are, in fact, two distinct but related sets of organisational activities. They are complementary processes and cannot be separated (https://www.oecd.org/dac/peer-reviews/World%20bank%202004%2010_Steps_to_a_Results_Based_ME_System.pdf, accessed 22/05/2017).

Table 1.1: Complementarity between monitoring and evaluation

Frequency
  Monitoring: periodic, regular
  Evaluation: episodic

Main action
  Monitoring: keeping track; oversight
  Evaluation: assessment

Basic purpose
  Monitoring: improve efficiency; adjust work plan
  Evaluation: improve effectiveness; impact; future programming

Focus
  Monitoring: inputs, outputs, processes, outcomes, work plans
  Evaluation: effectiveness, relevance, impact, cost-effectiveness

Information sources
  Monitoring: routine or sentinel systems, field observation, progress reports, rapid assessments
  Evaluation: the same, plus survey studies

Undertaken by
  Monitoring: programme managers, community workers, the community (beneficiaries), supervisors and funders
  Evaluation: programme managers, community workers, the community (beneficiaries), supervisors, funders and external evaluators

Reporting to
  Monitoring: programme managers, community workers, the community (beneficiaries), supervisors and funders
  Evaluation: programme managers, community workers, the community (beneficiaries), supervisors, funders, external evaluators and policy makers

(Source: UNICEF Guide for Monitoring and Evaluation, www.unicef.org/reseval/index.html)


According to UNDP (1997), some of the key distinctions between monitoring and evaluation are that:

· evaluations are done independently, to provide managers and staff with an objective assessment of whether or not they are on track;
· evaluations are more rigorous in their procedures, designs and methodology;
· evaluation comes after monitoring; it is during monitoring that data is gathered;
· evaluations generally involve more intensive analysis;
· monitoring is ongoing while evaluation is periodic; and
· monitoring is about data gathering while evaluation is about data analysis.

1.5.1 Similarities between monitoring and evaluation


However, the main aim of both monitoring and evaluation is the same: the two work in complementarity to provide information that can help inform decisions, improve performance and achieve planned results. Evaluation, like monitoring, can apply to many areas, including activities, projects, programmes, strategies and policies (UNDP, 2009). What monitoring and evaluation have in common is that they are both geared towards learning from what you are doing and how you are doing it, by focusing on efficiency, effectiveness and impact.

Efficiency

Efficiency measures whether the input into the work is appropriate in terms of the output. The input could be money, time, staff, equipment and so on. Efficiency focuses on the extent to which resources are used cost-effectively, that is, whether the quality and quantity of the results achieved justify the resources used. It asks the question, “Are there more cost-effective methods of achieving the same result?” (Shapiro, 1996).
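As a minimal illustration of this question (the delivery methods and all figures below are hypothetical, not from this module), efficiency can be reduced to a cost-per-unit-of-output comparison:

```python
# Hypothetical figures: compare the cost per unit of output of two
# delivery methods; the method with the lower unit cost is the more
# efficient one, provided the quality of the output is comparable.
methods = {
    "Door-to-door training": {"cost": 12000, "teachers_trained": 150},
    "Cluster workshops": {"cost": 9000, "teachers_trained": 180},
}

for name, m in methods.items():
    unit_cost = m["cost"] / m["teachers_trained"]
    print(f"{name}: ${unit_cost:.2f} per teacher trained")
```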

Effectiveness

Effectiveness is a measure of the extent to which a development programme or project achieves the specific objectives it set. If, for example, we set out to improve the qualifications of all the high school teachers in a particular area, did we succeed? Effectiveness is the extent to which intended outputs (products, services and deliverables) are achieved. It asks the question, “To what extent are these outputs used to bring about the desired outcomes?” (IUCN, 2004; Shapiro, 1996).
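A minimal sketch of the effectiveness question, using hypothetical figures for the teacher-qualification example above: effectiveness can be expressed as the share of the stated objective actually achieved.

```python
# Hypothetical figures: effectiveness as the share of the stated
# objective actually achieved.
target_teachers = 200      # teachers we set out to upgrade
upgraded_teachers = 164    # teachers actually upgraded

effectiveness = upgraded_teachers / target_teachers
print(f"Objective achieved: {effectiveness:.0%}")  # Objective achieved: 82%
```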

Activity 1.1

1. Define the following terms in the context of monitoring and evaluation:
(a) monitoring
(b) evaluation
2. Distinguish between monitoring and evaluation. Give examples for your
answer.
3. Discuss issues that monitoring and evaluation have in common giving
examples.
Impact

Impact tells you whether or not what you did made a difference to the problem situation you were trying to address. An example is the changes in the conditions of people and ecosystems that result from an intervention (that is, a policy, programme or project). It asks the question, “What are the positive, negative, direct, indirect, intended or unintended effects?” In other words, was your strategy useful? Did ensuring that teachers were better qualified improve the pass rate in the final year of school? Before you decide to get bigger, or to replicate the project elsewhere, you need to be sure that what you are doing makes sense in terms of the impact you want to achieve (IUCN, 2004; Shapiro, 1996).

Following the above explanations, it should be clear that monitoring and evaluation are best done when there has been proper planning against which to assess progress and achievements.
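For illustration only (a simple difference-in-differences sketch with hypothetical pass rates, not a method prescribed by this module): one common way to estimate impact is to compare the change observed in the project area with the change in a similar comparison area, so that background trends affecting both areas are netted out.

```python
# Hypothetical pass rates: estimate impact as the change in the project
# area minus the change in a similar comparison area (a simple
# difference-in-differences).
pass_rate = {
    "project": {"before": 0.52, "after": 0.68},
    "comparison": {"before": 0.50, "after": 0.55},
}

change_project = pass_rate["project"]["after"] - pass_rate["project"]["before"]
change_comparison = pass_rate["comparison"]["after"] - pass_rate["comparison"]["before"]
impact_estimate = change_project - change_comparison  # 0.16 - 0.05 = 0.11

print(f"Estimated impact: {impact_estimate:+.0%} points on the pass rate")
```

The comparison area stands in for what would have happened without the intervention; the more similar it is to the project area, the more credible the estimate.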

Relevance

Relevance is the extent to which the policy, programme, project or organisational unit contributes to the strategic direction of the members and partners. Is it appropriate in the context of its environment?


1.6 Why Monitoring and Evaluation?


In this section, we look at the reasons for monitoring and evaluation. The following reasons are the justification for monitoring and evaluation of development projects (UNDP, 2009; IUCN, 2004).

1.6.1 Justification for monitoring and evaluation


Monitoring and evaluation:
•	Check the relevance of initiatives, strategies, policies and programmes (designed to combat poverty and support desirable changes) to national development goals within a given national, regional or global context.
•	Check the effectiveness of development assistance initiatives, including partnership strategies.
•	Look at the contribution and worth of this assistance to national development outcomes and priorities, including the material conditions of programme countries, and how this assistance visibly improves the prospects of people and their communities.
•	Determine the drivers or factors enabling successful, sustained and scaled-up development initiatives, alternative options and the comparative advantages of UNDP.
•	Identify the efficiency of development assistance, partnerships and coordination to limit transaction costs.
•	Identify risk factors and risk management strategies to ensure success and effective partnership.
•	Show the level of national ownership and measures to enhance national capacity for sustainability of results.
•	While monitoring provides real-time information required by management, evaluation provides a more in-depth assessment. The monitoring process can generate questions to be answered by evaluation. Also, evaluation draws heavily on data generated through monitoring during the programme and project cycle, including, for example, baseline data, information on the programme or project implementation process and measurements of results (UNDP, 2009).
•	Donor requirements: an evaluation may be part of a donor contractual obligation.


•	Accountability: An evaluation may be necessary for you as a manager to fulfil your fiduciary role, for example, a contractually required evaluation.
•	Policy and programme relevance and renewal: An evaluation may help you improve policies and/or programme delivery. When a programme has reached the end of one phase and another is to be planned, an interim evaluation might be a useful input to planning the next phase.
•	Innovation: A review of a new, innovative programme or project may help to determine whether to apply the approach with confidence elsewhere (IUCN, 2004).
•	Monitoring and evaluation help improve performance and achieve results. More precisely, the overall purpose of monitoring and evaluation is the measurement and assessment of performance in order to manage more effectively the outcomes and outputs known as development results.
•	Traditionally, monitoring and evaluation focused on assessing inputs and implementation processes. Today, the focus is on assessing the contributions of various factors to a given development outcome, with such factors including outputs, partnerships, policy advice and dialogue, advocacy and brokering/coordination. Programme managers are being asked to actively apply the information gained through monitoring and evaluation to improve strategies, programmes and other activities.

Monitoring and evaluation help to:

•	complete the project within the budgeted resources and time, and check on wastage;
•	maintain focus on objectives by checking on performance specificity;
•	know when to terminate the project if something goes wrong; and
•	identify areas which need change or improvement.

1.6.2 Relationship between monitoring and evaluation


Monitoring is an ongoing process. The lessons from monitoring are discussed periodically and used to inform actions and decisions. Evaluations should be done for programmatic improvement while the programme is still ongoing, and should also inform the planning of new programmes. This ongoing process of doing, learning and improving is what is referred to as the RBM life-cycle approach (UNDP, 2009).


1.7 Guiding Principles for Monitoring and Evaluation

Like any other process, monitoring and evaluation is guided by some principles. These principles are based on best practice in the development evaluation field as found in Organisation for Economic Cooperation and Development (OECD), bilateral and multilateral agency evaluation practices. These principles act as the foundation for the standards for monitoring and evaluation practice. Managers are expected to promote and safeguard these principles in the management of evaluation processes. It is ethical to respect these principles when doing monitoring and evaluation (IUCN, 2004).

Table 1.2 Principles for Monitoring and Evaluation of Development Projects

Results-oriented accountability: Evaluations should make relevant links to the overall results and outputs of the development programme and policies.

Improving planning and delivery: Evaluations must provide useful findings and recommendations for action by managers.

Quality control: Evaluations must meet certain standards for acceptable process and products.

Supporting an evaluation culture: Evaluation is seen as a tool to help staff improve their work and results. It should be incorporated into ongoing work processes and incentive systems.

Participatory/work in partnership: Evaluations should consider the participation of members and all stakeholders.

Transparency: Evaluations require clear communication with all those involved in and affected by the evaluation.

Accessibility: The results of the evaluation should be readily available to all members, partners, donors and other stakeholders.

Ethical: Managers should carefully assess if evaluation is the appropriate tool to use in a given situation. Managers should remain open to the results and consider the welfare of those involved in and affected by the evaluation.

Impartiality: Evaluations should be fair and complete and should review strengths and weaknesses. The procedures should minimise distortion caused by personal biases.

Credibility: Evaluations should adhere to standards or best practices developed to regulate the practice and should strive to improve the quality of evaluations over time.

Utility: Evaluations must serve the information needs of the intended users.

(Source: IUCN, 2004)

Activity 1.2

1. Justify the relevance of monitoring and evaluation in development work. Illustrate your answer using practical examples.
2. Discuss the principles of monitoring and evaluation.
3. To what extent do monitoring and evaluation principles inhibit or enhance the quality of results in development?

1.8 Important Concepts in Monitoring and Evaluation

Most of the terms appearing in this section are in general use in the development community and by evaluation practitioners. In this publication, all terms are defined in the context of monitoring and evaluation, although they may have additional meanings in other contexts. Definitions in this section were developed with the intention of harmonising the use of terminology throughout the development community. The UNDP Evaluation Office (2002) developed the definitions. Some of the more common terms you may have come across are:


•	Self-evaluation: This involves an organisation or project holding up a mirror to itself and assessing how it is doing, as a way of learning and improving practice. It takes a very self-reflective and honest organisation to do this effectively, but it can be an important learning experience.

•	Participatory evaluation: This is a form of internal evaluation. The intention is to involve as many people with a direct stake in the work as possible. This may mean project staff and beneficiaries working together on the evaluation. If an outsider is called in, it is to act as a facilitator of the process, not an evaluator (UNDP, 2002).

•	Rapid participatory appraisal: Originally used in rural areas, the same methodology can, in fact, be applied in most communities. This is a qualitative way of doing evaluations. It is semi-structured and carried out by an interdisciplinary team over a short time. It is used as a starting point for understanding a local situation and is a quick, cheap, useful way to gather information. It involves the use of secondary data review, direct observation, semi-structured interviews, key informants, group interviews, games, diagrams, maps and calendars (Gray, 1988). In an evaluation context, it allows one to get valuable input from those who are supposed to be benefiting from the development work. It is flexible and interactive.

•	External evaluation: This is an evaluation done by a carefully chosen outsider or outsider team.

•	Interactive evaluation: This involves a very active interaction between an outside evaluator or evaluation team and the organisation or project being evaluated. Sometimes an insider may be included in the evaluation team (UNDP, 2002).

•	Feedback: This is a process within the framework of monitoring and evaluation by which information and knowledge are disseminated and used to assess overall progress towards results or confirm the achievement of results. Feedback may consist of findings, conclusions, recommendations and lessons from experience. It can be used to improve performance and as a basis for decision-making and the promotion of learning in an organisation.


•	Outcome monitoring: This is a continual and systematic process of collecting and analysing data to measure the performance of UNDP interventions towards the achievement of outcomes at country level. While the process of outcome monitoring is continual in the sense that it is not a time-bound activity, outcome monitoring must be periodic, so that change can be perceived. In other words, country offices will accumulate information on an ongoing basis regarding progress towards an outcome, and then will periodically compare the current situation against the baseline for outcome indicators and assess and analyse the situation (UNDP, 2002).

•	Outcome evaluation: This is an evaluation that covers a set of related projects, programmes and strategies intended to bring about a certain outcome. Such evaluations assess how and why outcomes are or are not being achieved in a given project. They may also help to clarify underlying factors affecting the situation, highlight unintended consequences (positive and negative), recommend actions to improve performance in future programming, and generate lessons learned. These periodic and in-depth assessments use “before and after” monitoring data (IUCN, 2004).

•	Planning: This can be defined as the process of setting goals, developing strategies, outlining the implementation arrangements and allocating resources to achieve those goals.

•	Accountability: Responsibility for the justification of expenditures, decisions or the results of the discharge of authority and official duties, including duties delegated to a subordinate unit or individual. In regard to programme and project managers, it is the responsibility to provide evidence to stakeholders that a programme or project is effective and conforms with planned results, legal and fiscal requirements. In organisations that promote learning, accountability may also be measured by the extent to which managers use monitoring and evaluation findings. Accountability is also an obligation to provide a true and fair view of performance and the results of operations. It relates to the obligations of development partners to act according to clearly defined responsibilities, roles and performance expectations and to ensure credible monitoring, evaluation and reporting (UNDP, 2002; UNDP, 2004).

•	Baseline data: Data that describe the situation to be addressed by a programme or project and that serve as the starting point for measuring the performance of that programme or project. A baseline study would be the analysis describing the situation prior to receiving assistance. This is used to determine the results and accomplishments of an activity and serves as an important reference for evaluation.

•	Beneficiaries: Individuals and/or institutions whose situation is supposed to improve (the target group), and others whose situation may improve. The term also refers to a limited group among the stakeholders who will directly or indirectly benefit from the project.

•	Best practices: Planning and/or operational practices that have proven successful in particular circumstances. Best practices are used to demonstrate what works and what does not, and to accumulate and apply knowledge about how and why they work in different situations and contexts.

•	Capacity development: The process by which individuals, groups, organisations and countries develop, enhance and organise their systems, resources and knowledge, all reflected in their abilities (individually and collectively) to perform functions, solve problems and set and achieve objectives. Capacity development is also referred to as capacity building or strengthening.

•	Effectiveness: The extent to which a development outcome is achieved through interventions; the extent to which a programme or project achieves its planned results (goals, purposes and outputs) and contributes to outcomes.

•	Efficiency: The optimal transformation of inputs into outputs.

•	Evaluation: A time-bound exercise that attempts to assess systematically and objectively the relevance, performance and success of ongoing and completed programmes and projects. Evaluation can also address outcomes or other development issues. Evaluation is undertaken selectively to answer specific questions to guide decision-makers and/or programme managers, and to provide information on whether underlying theories and assumptions used in programme development were valid, what worked and what did not work, and why. Evaluation commonly aims to determine relevance, efficiency, effectiveness, impact and sustainability. Evaluation is a vehicle for extracting cross-cutting lessons from operating unit experiences and determining the need for modifications to the strategic results framework. Evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process.

•	Evaluation scope: The focus of an evaluation in terms of questions to address, limitations, what to analyse and what not to analyse.

•	Ex-post evaluation: A type of summative evaluation of an intervention, usually conducted two years or more after it has been completed. Its purpose is to study how well the intervention (programme or project) served its aims and to draw conclusions for similar interventions in the future.

•	Feedback: As a process, feedback consists of the organisation and packaging, in an appropriate form, of relevant information from monitoring and evaluation activities, the dissemination of that information to target users and, most importantly, the use of the information as a basis for decision-making and the promotion of learning in an organisation. Feedback as a product refers to information that is generated through monitoring and evaluation and transmitted to parties for whom it is relevant and useful. It may include findings, conclusions, recommendations and lessons from experience.

•	Impact: The overall and long-term effect of an intervention. Impact is the longer-term or ultimate result attributable to a development intervention, in contrast to output and outcome, which reflect more immediate results from the intervention. The concept of impact is close to “development effectiveness”.

•	Impact evaluation: A type of evaluation that focuses on the broad, longer-term impact or results, whether intended or unintended, of a programme or outcome. For example, an impact evaluation could show that a decrease in a community’s overall infant mortality rate was the direct result of a programme designed to provide high-quality pre- and post-natal care and deliveries assisted by trained health care professionals.

•	Independent evaluation: An evaluation carried out by persons separate from those responsible for managing, making decisions on, or implementing the project. It could include groups within the donor organisation. The credibility of an evaluation depends in part on how independently it has been carried out, that is, on the extent of autonomy and the ability to access information, carry out investigations and report findings free of political influence or organisational pressure.
•	Indicator: A signal that reveals progress (or lack thereof) towards objectives; a means of measuring what actually happens against what has been planned in terms of quantity, quality and timeliness. An indicator is a quantitative or qualitative variable that provides a simple and reliable basis for assessing achievement, change or performance.

•	Monitoring: A continuing function that aims primarily to provide managers and main stakeholders with regular feedback and early indications of progress, or lack thereof, in the achievement of intended results. Monitoring tracks the actual performance or situation against what was planned or expected according to pre-determined standards. Monitoring generally involves collecting and analysing data on implementation processes, strategies and results, and recommending corrective measures.

•	Outcome: An actual or intended change in development conditions that UNDP interventions are seeking to support. It describes a change in development conditions between the completion of outputs and the achievement of impact. Examples: increased rice yield, increased income for the farmers (UNDP, 2004).

•	Outcome evaluation: An evaluation that covers a set of related projects, programmes and strategies intended to bring about a certain outcome. An outcome evaluation assesses “how” and “why” outcomes are or are not being achieved in a given country context, and the contribution of UNDP outputs to the outcome. It can also help to clarify the underlying factors that explain the achievement, or lack thereof, of outcomes; highlight unintended consequences (both positive and negative) of interventions; and recommend actions to improve performance in future programming cycles and generate lessons learned.

•	Outcome monitoring: A process of collecting and analysing data to measure the performance of a programme, project, partnership, policy reform process and/or “soft” assistance towards the achievement of development outcomes at country level. A defined set of indicators is constructed to track regularly the key aspects of performance. Performance reflects effectiveness in converting inputs to outputs, outcomes and impacts.


•	Outputs: Tangible products (including services) of a programme or project that are necessary to achieve the objectives of a programme or project. Outputs relate to the completion (rather than the conduct) of activities and are the type of results over which managers have a high degree of influence. Example: agricultural extension services provided to rice farmers (UNICEF, n.d.).

•	Participatory evaluation: The collective examination and assessment of a programme or project by the stakeholders and beneficiaries. Participatory evaluations are reflective, action-oriented and seek to build capacity. They are primarily oriented to the information needs of the stakeholders rather than the donor; the evaluator acts as a facilitator.

•	Performance indicator: A particular characteristic or dimension used to measure intended changes defined by an organisational unit’s results framework. Performance indicators are used to observe progress and to measure actual results compared to expected results. They serve to answer “how” or “whether” a unit is progressing towards its objectives, rather than “why” or “why not” such progress is being made. Performance indicators are usually expressed in quantifiable terms, and should be objective and measurable (for example, numeric values, percentages, scores and indices) (UNDP, 2004).

Activity 1.3

1. Discuss briefly the following concepts in the context of monitoring and evaluation:
(a) self-evaluation
(b) participatory evaluation
(c) outcome evaluation
(d) interactive evaluation
2. Account for the difference between outcome evaluation and outcome monitoring. Give examples from real development projects in your community.


•	Relevance: The degree to which the objectives of a programme or project remain valid and pertinent as originally planned, or as subsequently modified owing to changing circumstances within the immediate context and external environment of that programme or project. For an outcome, it is the extent to which the outcome reflects key national priorities and receives support from key partners.

•	Results: A broad term used to refer to the effects of a programme or project and/or activities. The terms “outputs”, “outcomes” and “impact” describe more precisely the different types of results at different levels of the log-frame hierarchy.

•	Results-based management (RBM): A management strategy or approach by which an organisation ensures that its processes, products and services contribute to the achievement of clearly stated results. Results-based management provides a coherent framework for strategic planning and management by improving learning and accountability (UNDP, 2009; UNDP, 2002). It is also a broad management strategy aimed at achieving important changes in the way agencies operate, with improving performance and achieving results as the central orientation, by defining realistic expected results, monitoring progress towards the achievement of expected results, integrating lessons learned into management decisions and reporting on performance.

•	Target groups: The main beneficiaries of a programme or project that are expected to gain from the results of that programme or project; sectors of the population that a programme or project aims to reach in order to address their needs, based on gender considerations and their socio-economic characteristics.

•	Terminal evaluation: An evaluation conducted after the intervention has been in place for some time, or towards the end of a project or programme, to measure outcomes, demonstrate the effectiveness and relevance of interventions and strategies, indicate early signs of impact, and recommend what interventions to promote or abandon.


1.9 Summary

In this unit, we looked at the concept of monitoring and evaluation as it is used in development. We traced the history of monitoring and evaluation. We defined the key terms, which are the nuts and bolts of monitoring and evaluation, thereby setting a strong background for the unit. We characterised monitoring as well as evaluation. The similarities and differences between monitoring and evaluation were discussed. Finally, we discussed the principles for evaluation, which are an important component of monitoring and evaluation practice.


References

Gray, R.J. (1988). Microcomputers in Evaluation. Evaluation Practice, 9(3), 47-53.
International Union for Conservation of Nature (IUCN). (2004). Managing Evaluations: A Guide for IUCN Programme and Project Managers. Gland and Cambridge: IUCN.
Nagel, S. (1986). Microcomputers and Evaluation Research. Evaluation Review, 10(5), 563-577. https://doi.org/10.1177/0193841X8601000501
Olive Publications. (2002). Planning for Monitoring and Evaluation. South Africa: Olive Publications.
Rossi, P.H., Freeman, H.E. and Wright, S.R. (1979). Evaluating Social Projects in Developing Countries. Paris: Development Centre of the Organisation for Economic Co-operation and Development.
Rossi, P.H. (1993). Evaluation: A Systematic Approach. London: Sage Publications.
Shapiro, J. (1996). Evaluation: Judgment Day or Management Tool? Olive Publications.
United Nations Development Programme. (1997). Human Development Report. New York: Oxford University Press.
United Nations Development Programme. (2002). Handbook on Monitoring and Evaluation for Results. New York: UNDP Evaluation Office.
United Nations Development Programme. (2009). Planning, Monitoring and Evaluation for Development Results. New York: UNDP.
United Nations Children’s Fund (UNICEF). (n.d.). Guide for Monitoring and Evaluation. www.unicerforg/resrval/indexh/html. Accessed on 12 May 2016.

Unit Two

Stakeholder Participation in
Monitoring and Evaluation

2.1 Introduction

Monitoring and evaluation is not a one-man job. It requires several actors and stakeholders. In this unit, we discuss the importance of stakeholders in the monitoring and evaluation process of development projects. We also define who the stakeholders are in the monitoring and evaluation process. We discuss the methods of identifying stakeholders, as well as the importance of carrying out a stakeholder analysis. The process of stakeholder engagement is examined. Finally, we analyse the importance of participation in the context of monitoring and evaluation of development projects.

2.2 Unit Objectives

By the end of this unit, you should be able to:
•	define the term “stakeholder” in the context of monitoring and evaluation
•	identify the key stakeholders in monitoring and evaluation
•	carry out a stakeholder analysis
•	create a stakeholder matrix
•	explain the importance of stakeholders in monitoring and evaluation
•	discuss the concept of participation in the context of monitoring and evaluation

2.3 Stakeholders in Monitoring and Evaluation (M and E)

In this section, we are going to define who the stakeholders are. We are going to entertain several definitions from different scholars with the aim of unpacking and illuminating what stakeholders are. Stakeholders are persons or groups who are directly or indirectly affected by an intervention, as well as those who may have interests in an intervention and/or the ability to influence its outcome, either positively or negatively. Stakeholders may include locally affected communities or individuals and their formal and informal representatives, national or local government authorities, politicians, religious leaders, civil society organisations and groups with special interests, the academic community, or other businesses (IFC, 2007). Understanding who is involved, and in what way, is key to carrying out effective monitoring and evaluation, because it is the work of these stakeholders that will be monitored and that will produce data for evaluation. Stakeholder analysis thus becomes a very important part of the management of development projects as well as of the monitoring and evaluation of those projects. The engagement and interaction of stakeholders regulate participation.

2.4 Participation

In this part, we are going to define what is meant by participation, entertaining several definitions from different scholars with the aim of unpacking the concept. Participation is seen as the means for a widening and redistribution of opportunities to take part in societal decision-making, in contributing to development and in benefiting from its fruits (Oakley and Marsden, 1984). Chambers (1983) describes participation as an empowering process that enables local people to do their own analysis and to make their own decisions. According to Nelson and Wright (1995), it means that “we” participate in “their” project, not “they” in “ours”. Marsden (1991) believes participation is about understanding and working with the realities of others in order to enhance effective development through the pooling of expertise, while Hira and Parfitt (2004) define participation as an active process by which beneficiary groups influence the direction and execution of a development project with a view to enhancing their well-being in terms of income, personal growth, self-reliance or other values they cherish.

2.5 Stakeholder Engagement in M and E

Inadequate stakeholder involvement is one of the most common reasons why programmes and projects fail. Therefore, every effort should be made to encourage broad and active stakeholder engagement in the planning, monitoring and evaluation processes (Wanja and Iravo, 2017). This is particularly relevant in crisis situations, where people’s sense of security and vulnerability may be heightened and where tensions and factions may exist. In these situations, the planning process should aim to ensure that as many stakeholders as possible are involved, especially those who may be least able to promote their own interests, and that opportunities are created for the various parties to hear each other’s perspectives in an open and balanced manner. In crisis situations this is not just good practice, but is fundamental to ensuring that programming ‘does no harm’ at the least and, hopefully, reduces inherent or active tensions. According to the United Nations Development Programme (2009), at times the success of the programme or project depends on representatives of the different main stakeholder groups (including those relating to different parties of the tension) being equally consulted. In some situations, a planning platform that brings stakeholders together so that they can hear each other’s views may itself be a mechanism for reducing tensions (UNDP, 2009).

2.6 Rationale for Stakeholder Analysis in M and E

Any given programme, project or development plan is likely to have a number of important stakeholders. Effective planning is done with the participation of these stakeholders. Stakeholders are the people who will benefit from the development activity or whose interests may be affected by that activity. Therefore, a simple stakeholder analysis is generally recommended for all planning processes. According to UNDP (2009), a stakeholder analysis can help identify the following:
•	potential risks, conflicts and constraints that could affect the programmes, projects or activities being planned;
•	opportunities and partnerships that could be explored and developed; and
•	vulnerable or marginalised groups that are normally left out of planning processes.

There are three important steps in stakeholder engagement in monitoring and evaluation:

(a) identification of stakeholders;
(b) profiling their influence on the project and/or programme; and
(c) training, orientation or induction, so that all actors operate on the same wavelength and find common ground.

Activity 2.1

1. Define the term “stakeholder” in the context of monitoring and evaluation.
2. Identify any five stakeholders in monitoring and evaluation.
3. Discuss the rationale for stakeholder engagement in monitoring and evaluation.

2.7 Identification of Stakeholders

Identification of the right stakeholders is key to a successful monitoring and evaluation exercise. Many tools can be used to identify stakeholders and determine the type of involvement that they should have at the different stages of the process, which are planning, implementation, monitoring, reporting and evaluation. There are three common tools used in stakeholder analysis:

(a) the key stakeholder and interest table;
(b) the key stakeholder weighting matrix; and
(c) the stakeholder importance-influence ranking matrix.

These tables and matrices can be helpful in communicating about the stakeholders and their role in the programme or activities that are being planned (UNDP, 2009).

2.8 Key Stakeholder and Interest Table

Table 2.1 seeks to identify the stakeholders who may have an interest in the programme or project being planned, and to determine the nature of that interest.

Table 2.1 Identification of Key Stakeholders and Their Interests

Stakeholder (example)            | Interest in activity                  | Nature of interest (+ve or -ve)
Religious umbrella organisations | Ethics, fairness                      | Positive
Watchdog NGO                     | Fairness, greater influence           | Positive
Private sector organisations     | Opportunities for influence, fairness | Positive/negative
Youth umbrella organisations     | Opportunities to participate          | Positive
International observer group     | Fairness                              | Positive
Citizens’ organisations          | Rights of citizens                    | Positive
Women’s organisations            | Rights of women and fairness          | Positive
Informal political leaders       | Threats to power                      | Negative

(Source: UNDP, 2009)


2.9 Key Stakeholder Weighting Matrix


Table 2.2 assesses the importance and influence of those stakeholders in the
programme or project. Here, importance relates to who the programme or
project is intended for, which may be different from the level of influence they
may have.

Table 2.2 Importance and Influence of Stakeholders

Stakeholder (example)            | Importance (1-5) | Influence (1-5)
Religious umbrella organisations | 3                | 4
Watchdog NGO                     | 5                | 1
Private sector organisations     | 5                | 1
Youth umbrella organisations     | 4                | 3
International observer group     | 1                | 3
Citizens’ organisations          | 5                | 2
Women’s organisations            | 5                | 2
Informal political leaders       | 2                | 4

(Source: UNDP, 2009)


2.10 Stakeholder Importance-Influence Ranking Matrix

Figure 2.1 Stakeholder Importance and Influence Matrix (Source: UNDP, 2009)

2.11 Interpreting the Stakeholder Importance and Influence Matrix

The stakeholder importance and influence matrix, which is the second deliverable in the planning process, becomes the main tool used to determine who should be involved in the planning session and how other stakeholders should be engaged in the overall process (UNDP, 2009).

Group 1

These stakeholders are very important to the success of the activity but may have little influence on the process. For example, the success of an electoral project or referendum in Zimbabwe will often depend on how well women and minorities are able to participate in the elections, but these groups may not have much influence on the design and implementation of the project or the conduct of the elections. In this case, they are highly important but not very influential. They may require special emphasis to ensure that their interests are protected and that their voices are heard.

Group 2

These stakeholders are central to the planning process as they are both important and influential. They should be key stakeholders for partnership building. For example, political parties involved in a national elections programme in Zimbabwe may be both very important (as mobilisers of citizens) and influential (without their support the programme may not be possible).

Group 3

These stakeholders are not the central stakeholders for an initiative and have little influence on its success or failure. They are unlikely to play a major role in the overall process. One example could be an international observer group that has little influence on elections. Similarly, they are not the intended beneficiaries of, and will not be impacted by, those elections.

Group 4

These stakeholders are not very important to the activity but may exercise significant influence. For example, an informal political leader may not be an important stakeholder for an elections initiative aimed at increasing voter participation, but she or he could have major influence on the process due to informal relations with power brokers and the ability to mobilise people or influence public opinion. These stakeholders can sometimes create constraints to programme implementation or may be able to stop all activities. Even if they are not involved in the planning process, there may need to be a strategy for communicating with these stakeholders and gaining their support.

Based on the stakeholder analysis, and on what is practical given cost and
location of various stakeholders, the identified stakeholders should be brought
together in a planning workshop or meeting.
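The four-group logic can be expressed as a simple rule over the importance and influence scores of Table 2.2. The Python sketch below is a minimal illustration, not part of the UNDP (2009) guidance; in particular, the cut-off treating scores of 4 and above as “high” is an assumption chosen for demonstration.

def classify(importance: int, influence: int) -> int:
    # Treat a score of 4 or 5 on the 1-5 scale as "high"; this threshold
    # is an illustrative assumption, not part of the UNDP (2009) guidance.
    high_importance = importance >= 4
    high_influence = influence >= 4
    if high_importance and high_influence:
        return 2  # central: key stakeholders for partnership building
    if high_importance:
        return 1  # important but with little influence: protect their interests
    if high_influence:
        return 4  # influential but not important: communicate and manage
    return 3      # peripheral: unlikely to play a major role

# Scores taken from Table 2.2.
stakeholders = {
    "Watchdog NGO": (5, 1),
    "International observer group": (1, 3),
    "Informal political leaders": (2, 4),
}
for name, (importance, influence) in stakeholders.items():
    print(f"{name}: Group {classify(importance, influence)}")

Run on the Table 2.2 scores, this reproduces the groupings discussed above: the watchdog NGO falls into Group 1, the international observer group into Group 3 and the informal political leaders into Group 4.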


Activity 2.2

1. List three important steps in stakeholder identification.
2. How useful is each of these steps in the identification of stakeholders in monitoring and evaluation?
3. Discuss various challenges you may face in creating a stakeholder matrix.
4. Using real-life examples, identify a project and attempt to create a stakeholder matrix for its monitoring and evaluation.

2.12 Orientation and Training of Stakeholders

The orientation and training of stakeholders involves the following:

2.12.1 Orientation on the planning process

Stakeholders should be made aware of what the planning process will involve. The planning team should provide the stakeholders with a copy of the draft issue note and work plan at the initial meeting. The work plan should include sufficient time for preparing the results framework and the monitoring and evaluation plan. It should also allow for potential challenges in conducting stakeholder meetings in crisis settings, where meetings between different parties can be sensitive and time consuming (UNDP, 2002; UNDP, 2009).

This initial session is intended to raise awareness of cross-cutting thematic issues and enable participants to adopt a more rigorous and analytical approach to the planning process. Some of the ways in which this can be done include:
•	Having a gender expert provide an overview to participants on the importance of gender and how to look at development programming through a gender lens. This session would include an introduction to the gender analysis methodology.
•	Including a gender expert as a stakeholder in the workshop as an additional means of ensuring that gender and women’s empowerment issues receive attention.
•	Having a presenter address the group on capacity development methodology as a tool to enhance programme effectiveness and promote more sustainable development.
•	Having a presenter address the group on promoting inclusiveness and a rights-based approach to development.

2.12.2 Orientation

Orientation ensures that all stakeholders start from the same point. They should all understand:
•	why it is important for them to work together;
•	why they have been selected for the planning, monitoring and evaluation exercise; and
•	the rules of the planning exercise and how stakeholders should dialogue, especially in crisis settings, where these fora could be the first time different parties have heard each other’s perspectives and goals for development. It is important to bring stakeholders together not only for the resources they have but also because each has a unique perspective on the causes of the problems and what may be required to solve them (IFC, 2007).

A government minister, a community member, a social worker, an economist, a businessperson, a woman and a man may have different views on what they are confronting and what changes they would like to see occur. It is common in the early stages of planning for facilitators to use anecdotes to get stakeholders to see how easy it is to look at the same issue and yet see it differently. This is why stakeholder orientation and induction is very important in forging common ground.

According to UNDP (2009), the orientation and induction should encourage stakeholders to:

•	Suspend judgment: Stakeholders should not start the process with any pre-set ideas and should not rush to conclusions. They should be prepared to hear different points of view before coming to conclusions.

•	Be open to all points of view: In the planning exercise, all points of view are equally valid, not just those of persons considered important. The planning exercise should be conducted in such a way that everyone (men, women and marginalised individuals) feels free to express their views. The views expressed by stakeholders are neither ‘right’ nor ‘wrong’.

•	Be creative: Stakeholders should understand that long-standing challenges are unlikely to be solved by traditional approaches, many of which may have been tried before. They should therefore be open to fresh ideas, especially those that may, at first, seem unworkable or unrealistic.

2.13 Other Players in Monitoring and Evaluation

Other players in monitoring and evaluation include the following, listed in Table 2.3:

Table 2.3 Other Stakeholders in Monitoring and Evaluation Practice

Government coordinating authority and other central ministries (for example, Planning or Finance):
•	Usually have overall responsibility for monitoring and evaluating development activities.
•	Are in a good position to coordinate the design and support of monitoring and evaluation activities, particularly the annual review, and to take action based on the findings of evaluations.

UN agencies:
•	Provide baseline socio-economic information on populations and beneficiary groups in locations where UNDP is newly arrived or has a small presence.
•	Provide technical support for evaluations and monitoring, and may also provide information about the status of outcomes.

Executing agents (the institutions designated to manage a project):
•	Provide technical information about the relevance and the quality of outputs or services through stakeholder meetings and consultations.
•	Also provide technical support during evaluations.

Target beneficiaries:
•	Provide information about the relevance and the quality of outputs or services through stakeholder meetings and consultations.
•	Also provide boundary support during evaluations.
•	Provide local information useful for evaluation.

National statistical offices:
•	Are key providers of data as well as expertise in data collection and analysis.

Development assistance agencies:
•	May develop capacity for monitoring and evaluation through the provision of technical assistance, including advice, expertise, the organisation of seminars, training, the identification of qualified consultants and the preparation of guidance material including case study examples. Such agencies also provide information on outcomes and outputs, and exercise policy influence (UNDP, 2002).

(Source: UNDP, 2009)

2.14 Benefits of Participation of Stakeholders

The many claimed benefits of stakeholder participation have to some extent driven its widespread incorporation into national and international policy. At the same time, disillusionment has been growing amongst practitioners, stakeholders and the wider public, who feel let down when these claims are not realised. However, according to the Danish International Development Agency (DANIDA) (2005), some of the benefits include the following:

•	Inclusion in decision-making

Normative claims focus on benefits for democratic society, citizenship and equity. For example, it is argued that stakeholder participation reduces the likelihood that those on the periphery of the decision-making context or society are marginalised. In this way, more relevant stakeholders can be included in decisions that affect them and active citizenship can be promoted, with benefits for wider society.

•	Public trust

Stakeholder participation in environmental management may increase public trust in decisions and civil society, if participatory processes are perceived to be transparent and to consider conflicting claims and views (Byrne, Gray-Felder, Hunt and Parks, 2005).


•	Empowers communities and youths

It is argued that stakeholder participation in environmental management can empower stakeholders through the co-generation of knowledge with researchers and by increasing participants’ capacity to use this knowledge. It is claimed that stakeholder participation may increase the likelihood that environmental decisions are perceived to be holistic and fair, accounting for a diversity of values and needs and recognising the complexity of human-environmental interactions.

•	Promotes social and vicarious learning

Participation may also promote social and vicarious learning, whereby youths, other stakeholders and the wider society in which they live learn from each other through the development of new relationships, building on existing relationships and transforming adversarial relationships as individuals learn about each other’s trustworthiness and learn to appreciate the legitimacy of each other’s views.

•	It is argued that participation enables interventions and technologies to be better adapted to local socio-cultural and environmental conditions. Youths gain technological knowledge through participation. This may enhance the rate of adoption and diffusion of interventions among target groups, and their capacity to meet local needs and priorities. Participation may also make research more robust by providing higher quality information inputs.

•	By taking local interests and concerns into account at an early stage, it may be possible to inform project design with a variety of ideas and perspectives, and in this way increase the likelihood that local needs and priorities are successfully met.

•	High quality decisions

It is argued that participatory processes should lead to higher quality decisions, as they can be based on more complete information, anticipating and ameliorating unexpected negative outcomes before they occur. Participation helps to establish common ground and trust between participants and helps them to appreciate the legitimacy of each other’s viewpoints. Participatory processes have the capacity to transform adversarial relationships and find new ways for youths and other participants to work together towards sustainable environmental management solutions.


•	Creates a sense of ownership

Participation leads to a sense of ownership over the process and its outcomes. If this is shared by a broad coalition of stakeholders, long-term support and active implementation of decisions may be enhanced. Depending on the nature of the initiative, this may significantly reduce implementation costs. However, there is growing concern that stakeholder participation is not living up to many of the claims that are being made.

Activity 2.3

1. Discuss the benefits of the participation of stakeholders in the monitoring and evaluation process. Illustrate using examples.
2. Identify factors that may impede participation in your area with respect to monitoring and evaluation.
3. As a development practitioner, how would you improve stakeholder participation in the monitoring and evaluation of development projects? Give real examples.
4. Examine various ways in which target beneficiaries may participate in the monitoring and evaluation of development projects.
5. Discuss the ways in which participants can adopt a more rigorous and analytical approach to the monitoring and evaluation process. Use examples to illustrate your points.

2.15 Summary

In this unit, we defined who the stakeholders are in the monitoring and evaluation process. We discussed the methods of identifying stakeholders, as well as the importance of carrying out a stakeholder analysis. The processes of stakeholder engagement were examined. We also analysed the importance of participation in the context of monitoring and evaluation of development projects. The next unit focuses on the various tools which are used by evaluation practitioners and development managers in the practice of monitoring and evaluation of development projects.


References

Byrne, A., Gray-Felder, D., Hunt, J. and Parks, W. (2005). Measuring Change: A Guide to Participatory Monitoring and Evaluation of Communication for Social Change. New Jersey: Communication for Social Change Consortium.
Chambers, R. (1983). Rural Development: Putting the Last First. London: Longman.
Danish International Development Agency (DANIDA). (2005). Monitoring and Indicators for Communication for Development. Copenhagen: Technical Advisory Service, Royal Danish Ministry of Foreign Affairs.
Hira, A. and Parfitt, T. (2004). Development Projects for a New Millennium. London: Praeger.
International Finance Corporation (IFC). (2007). Stakeholder Engagement: A Good Practice Handbook for Companies Doing Business in Emerging Markets. Washington, DC: IFC.
Nelson, N. and Wright, S. (1995). Participation and Power. London: Intermediate Technology Publications.
Oakley, P. and Marsden, D. (1984). Approaches to Participation in Rural Development. Geneva: ILO.
United Nations Development Programme. (2002). Handbook on Monitoring and Evaluation for Results. New York: UNDP Evaluation Office.
United Nations Development Programme. (2009). A Manager’s Guide to Gender Equality and Human Rights Responsive Evaluation. New York: UN Women Evaluation Unit. http://unifem.org/evaluation_manual/ Accessed on 23 December 2016.
United Nations Development Programme. (2009). Guidance Note on Carrying Out an Evaluation Assessment: A Manager’s Guide to Gender Equality and Human Rights Responsive Evaluation. New York: UN Women Evaluation Unit.
United Nations Development Programme. (2009). Handbook on Planning, Monitoring and Evaluating for Development Results. New York: UNDP. http://undp.org/eo/handbook
United Nations Children’s Fund (UNICEF). (2003). Planning Participatory Evaluation. In M and E Training Modules. New York: UNICEF.
Wanja, M. and Iravo, M. (2017). Factors Affecting Project Scheduling of Non-Governmental Organizations’ Projects in Mogadishu, Somalia (A Case Study of International Rescue Committee). American Based Research Journal, 6(4).



Unit Three

Monitoring and Evaluation Tools

3.1 Introduction

There are various tools which are used by evaluation practitioners or development managers in the practice of monitoring and evaluation of projects. These include, but are not limited to, budgets, monitoring and evaluation calendars, log frames, Gantt charts and network analysis, among other tools. These help to collect information which is necessary for evaluation to take place. Most of these tools have a dual purpose: they are both planning tools and monitoring and evaluation tools in development. In this unit, we look closely at a very important tool in monitoring and evaluation practice called the log frame. We look at the log frame in the context of the project cycle. We define the log frame and characterise it. The anatomy of the log frame is visited and the factors to consider in the construction of a logical framework are outlined. The advantages and disadvantages of using the log frame are also presented in this unit.

3.2 Unit Objectives

By the end of this unit, you should be able to:
•	define what a log frame is
•	list the characteristics of a log frame
•	explain the components of the log frame
•	discuss factors to consider when constructing a log frame
•	examine the advantages and disadvantages of a log frame
•	apply the log frame as a monitoring and evaluation tool

3.3 Goal of Monitoring and Evaluation

Before we look into the tools used in monitoring and evaluation, we would like to remind you of the goals of monitoring and evaluation. This will help you to associate the usefulness of each tool with the task the tool is supposed to fulfil. In other words, the tool should contribute to the attainment of the monitoring and evaluation goals. Herewith are some of the monitoring and evaluation goals, adapted from Shapiro (2002).


Figure 3.1 shows the monitoring and evaluation/project cycle: plan, implement, monitor, reflect/learn/decide/adjust, implement and monitor again, and finally evaluate/learn/decide.

Figure 3.1 Monitoring and Evaluation/Project Cycle (Source: Shapiro, 2002; Civicus, 2006)

It is important to recognise that monitoring and evaluation are not magic wands that can be waved to make problems disappear, or to cure them, or to miraculously make changes without a lot of hard work being put in by the project or organisation. In themselves, they are not a solution, but they are valuable tools. Monitoring and evaluation can:
•	help you identify problems and their causes;
•	suggest possible solutions to problems;
•	raise questions about assumptions and strategy;
•	push you to reflect on where you are going and how you are getting there;
•	provide you with information and insight;
•	encourage you to act on the information and insight; and
•	increase the likelihood that you will make a positive development difference.


3.4 The Log Frame

A log frame, or logical framework, is a tool which shows the conceptual foundation upon which the project’s M and E system is built. The log frame is a matrix that specifies what the project is intended to achieve (objectives) and how this achievement will be measured (indicators). It is essential to understand the differences between project inputs, outputs, outcomes and impact, since the indicators to be measured under the M and E system reflect this hierarchy (Chaplowe, 2008).

It should be noted at the outset that the structures of log frames are not cast in concrete. Various organisations in the development community use different formats and terms for the types of objectives in a log frame. A clear understanding of the log frame’s hierarchy of objectives is important for M and E planning, because it informs the key questions that will guide the evaluation of project processes and impacts.

The log frame is characterised by five categories or components, namely the goal, outcomes, outputs, activities and inputs. These make up what is called the vertical logic, which we deal with later in this unit. These key categories of the log frame attempt to answer the following questions:

Goal:
 To what extent has the project contributed towards its longer term
goals?
 Why or why not?
 What unanticipated positive or negative consequences did the project
have?
 Why did they arise?
Outcomes:
 What changes have occurred as a result of the outputs and to what
extent are these likely to contribute towards the project purpose and
desired impact? Has the project achieved the changes for which it can
realistically be held accountable?
Outputs:
 What direct tangible products or services has the project delivered as
a result of activities?
Activities:
 Have planned activities been completed on time and within the budget?


 What unplanned activities have been completed?


Inputs:

 Are the resources being used efficiently?


These form the important components of the logical framework (Chaplowe, 2008).

3.4.1 Definition of the key terms and components of a classic 4 x 5 log frame matrix

In this section, we define key terms and components of the logical framework. This time we have added the horizontal logic (indicators, means of verification and assumptions) to the vertical logic to make a full log frame. This then becomes the conceptual structure of the log frame.


Table 3.1 Log Frame Definition Table and Conceptual Structure

Each level of the log frame is described under four column headings: Project Objectives, Indicators, Means of Verification and Assumptions.

Goal
 Project objective: a simple, clear statement of the impact or results to be achieved by the project.
 Indicator (impact indicator): quantitative or qualitative means to measure achievement or to reflect the changes connected to the stated goal.
 Means of verification: measurement method, data source, and data collection frequency for the stated indicator.
 Assumptions: external factors necessary to sustain the long-term impact, but beyond the control of the project.

Outcomes
 Project objective: the set of beneficiary and population-level changes needed to achieve the goal (usually knowledge, attitudes and practices).
 Indicator (outcome indicator): quantitative or qualitative means to measure achievement or to reflect the changes connected to the stated outcomes.
 Means of verification: measurement method, data source, and data collection frequency for the stated indicator.
 Assumptions: external conditions necessary if the outcomes are to contribute to achieving the goal.

Outputs
 Project objective: the products or services needed to achieve the outcomes.
 Indicator (output indicator): quantitative or qualitative means to measure completion of stated outputs (measures the immediate product of an activity).
 Means of verification: measurement method, data source, and data collection frequency for the stated indicator.
 Assumptions: factors out of the project's control that could restrict or prevent the outputs from achieving the outcomes.

Activities
 Project objective: the regular efforts needed to produce the outputs.
 Indicator (process indicator): quantitative or qualitative means to measure completion of stated activities, for example, attendance at the activities.
 Means of verification: measurement method, data source, and data collection frequency for the stated indicator.
 Assumptions: factors out of the project's control that could restrict or prevent the activities from achieving the outcomes.

Inputs
 Project objective: the resources used to implement activities (financial, material, human).
 Indicator (input indicator): quantitative or qualitative means to measure utilisation of stated inputs (resources used for activities).
 Means of verification: measurement method, data source, and data collection frequency for the stated indicator.
 Assumptions: factors out of the project's control that could restrict or prevent access to the inputs.

(Source: UNDP, 1997)
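To make the structure of Table 3.1 concrete, a log frame row can be thought of as structured data. The following is a minimal sketch in Python; the field names and example strings are illustrative assumptions, not a standard format:

    # A minimal sketch of log frame rows as structured data, using the
    # four column headings of Table 3.1. Example content is invented.
    COLUMNS = ["objective", "indicator", "means_of_verification", "assumptions"]

    log_frame = {
        "goal": {
            "objective": "Reduced incidence of malaria in the district",
            "indicator": "Malaria incidence rate among children under 5",
            "means_of_verification": "Ministry of Health surveillance data, annual",
            "assumptions": "No major epidemic or natural disaster occurs",
        },
        "outputs": {
            "objective": "Caretakers trained in insecticide-treated net use",
            "indicator": "Number of caretakers completing the training",
            "means_of_verification": "Workshop attendance rosters, quarterly",
            # "assumptions" deliberately left out to show the check below
        },
    }

    # Flag any level whose row is missing one of the four columns.
    for level, row in log_frame.items():
        missing = [c for c in COLUMNS if c not in row]
        if missing:
            print(f"{level} row is missing: {', '.join(missing)}")
    # prints: outputs row is missing: assumptions

A complete log frame would contain one such row for each of the five levels (goal, outcomes, outputs, activities and inputs).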


Activity 3.1

?
1. (a) Define the term log frame.
(b) Distinguish inputs from outputs.
(c) Distinguish outcomes from outputs.
2. List five components of a log frame and justify each component in the monitoring and evaluation process.
3. Discuss the usefulness of log frame components. Pay particular attention to the relationships between the components.

3.5 Indicators
Effective indicators are a critical log frame element. The indicators you measure in monitoring produce evidence of whether or not change has occurred after an intervention. They are a very important part of the log frame in monitoring and evaluation practice. Indicators must always be realistic and feasible, and must meet users' information needs.

3.5.1 Designing indicators


When designing indicators there is need to consider the following questions
(Chaplowe, 2008; UNDP, 2009).
 Are the indicators SMART (specific, measurable, achievable, realistic and time-bound)? Indicators should be easy to interpret and explain, timely, cost-effective, and technically feasible. Each indicator should have validity (be able to measure the intended concept accurately) and reliability (yield the same data in repeated observations of a variable). (A simple programmatic checklist for this question is sketched after this list.)

 Are there international or industry standard indicators?

For example, indicators developed by UNAIDS, the UNDP Millennium Development Goals, and the Demographic and Health Surveys have been used and tested extensively.

 Are there indicators required by the donor, grant or programme?

This can be especially important if the project-level indicator is expected to roll up to a larger accountability framework at the programme level.


 Are there secondary indicator sources?

It may be cost-effective to adopt indicators for which data have been or will be collected by a government ministry, international agency, and so on.
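The SMART question above can be turned into a simple design-time checklist. The sketch below is illustrative only: the attribute names are assumptions made for this example, and there is no standard library for SMART checking.

    from dataclasses import dataclass

    # Illustrative sketch: record the design decisions for one indicator
    # and flag any SMART attribute left undocumented. Field names are
    # assumptions for this example, not a standard.
    @dataclass
    class IndicatorDesign:
        statement: str
        specific_definition: str = ""   # what exactly is measured, for whom
        measurement_method: str = ""    # how it will be measured
        achievable_rationale: str = ""  # why the target is attainable
        realistic_rationale: str = ""   # why it suits resources and context
        time_bound_by: str = ""         # deadline or reporting frequency

        def missing_smart_attributes(self):
            checks = {
                "specific": self.specific_definition,
                "measurable": self.measurement_method,
                "achievable": self.achievable_rationale,
                "realistic": self.realistic_rationale,
                "time-bound": self.time_bound_by,
            }
            return [name for name, value in checks.items() if not value.strip()]

    indicator = IndicatorDesign(
        statement="Number of children under 5 who slept under a treated bed-net",
        specific_definition="Children under 5 in target wards, previous night",
        measurement_method="Household survey question",
        time_bound_by="Measured at baseline and end-line",
    )
    print(indicator.missing_smart_attributes())  # ['achievable', 'realistic']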

3.5.2 Factors to consider when constructing indicators: indicator traps

The following are indicator traps you must take note of when constructing indicators:

Indicator overload

 Indicators do not need to capture everything in a project, but only what


is necessary and sufficient for monitoring and evaluation.
Output fixation
 Counting myriad activities or outputs is useful for project management
but does not show the project’s impact. For measuring project effects,
it is preferable to select a few key output indicators and focus on out-
come and impact indicators whenever possible.
Indicator imprecision
 Indicators need to be specific so that they can be readily measured.
For example, it is better to ask how many children under age 5 slept
under an insecticide-treated bed-net the previous night than to inquire
whether the household practices protective measures against malaria.
Excessive complexity
 Complex information can be time-consuming, expensive, and difficult
for local staff to understand, summarise, analyse, and work with. Keep
it simple, clear, and concise (Chaplowe, 2008).

3.6 The Indicator Matrix


An indicator matrix is a critical tool for planning and managing data collection,
analysis, and use. It expands the log frame to identify key information require-
ments for each indicator and summarises the key M and E tasks for the project.
While the names and formats of the indicator matrix may vary, (for example,
M and E plan, indicator planning matrix, or data collection plan), the overall
function remains the same. Often, the project donor will have a required for-
mat (Chaplowe, 2008).


Table 3.2 An Example of an Indicator Matrix

Example output indicator (Output II.a): Number of caretakers participating in Polio Immunisation Awareness workshops.
 Indicator definition: "Caretakers" refers to community beneficiaries identified by the Local Government Agent who are participating in project activities. "Polio Immunisation Awareness Workshop" refers to a one-day training designed to convey knowledge on polio immunisation according to the Ministry of Health recognised standard curriculum. Numerator: number of beneficiaries who participate in and complete the one-day workshop.
 Methods/sources: Polio Immunisation Workshop attendance roster.
 Person responsible: Education Field Officer (EFO).
 Frequency/schedules: attendance roster data collected at the workshop and reported quarterly.
 Data analysis: quarterly project data reporting and reflection meetings of the project management team.
 Information use: project implementation with community beneficiaries; monitoring of the community outreach training process for project management with the Sri Lankan Red Cross Society Tsunami Recovery Program management; impact evaluation to justify the intervention to the Ministry of Health and donors.

Example outcome indicator (Outcome 2a): Percent of target schools that successfully conduct a minimum of one school disaster drill per quarter.
 Indicator definition: "schools" refers to schools in Mata District. Criteria of "success": complete an unannounced disaster drill through the early warning system; response time under 20 minutes; school members report to the designated area per the School Crisis Response Plan. Numerator: number of schools with a successful scenario per quarter. Denominator: total number of targeted schools.
 Methods/sources: pre-arranged site visits during the disaster drill; disaster drill checklist entered into the quarterly project report (QPR); school focus group discussions (teachers, students, administration); scenario checklist.
 Person responsible: School Field Officer (SFO); FGDs with teachers, students and administration.
 Frequency/schedules: checklist data collected quarterly; focus group discussions every six months; data collection to begin on 4/15/06; scenario checklist completed by 3/8/06.
 Data analysis: post-drill meeting with the School Disaster Committee, facilitated by the SFO; project management team during the quarterly reflection meeting.
 Information use: project implementation with School Disaster Committees; monitoring of the school outreach training process for project management with the Sri Lankan Red Cross Society Tsunami Recovery Program management; impact evaluation to justify the intervention to the Ministry of Education, Ministry of Disaster Relief and donors.

(Chaplowe, 2008; UNDP, 2009)


Table 3.2 above provides a sample format for an indicator matrix, with illus-
trative rows for outcome and output indicators. The following are the major
components or column headings of the indicator matrix (Chaplowe, 2008;
UNDP, 2009).

 Indicators

The indicators provide clear statements of the precise information needed to


assess whether proposed changes have occurred. Indicators can be either
quantitative (numeric) or qualitative (descriptive observations). Typically, the
indicators in an indicator matrix are taken directly from the log frame.

 Indicator definitions

Each indicator needs a detailed definition of its key terms, including an expla-
nation of specific aspects that will be measured (such as who, what, and
where the indicator applies). The definition should explain precisely how the
indicator will be calculated, such as the numerator and denominator of a per-
cent measure. This column should also note if the indicator is to be
disaggregated by sex, age, ethnicity, or some other variable.

 Methods/sources

This column identifies sources of information and data collection methods or


tools, such as use of secondary data, regular monitoring or periodic evalua-
tion, baseline or end-line surveys, Participatory rural appraisal (PRA) and
focus group discussions. This column should also indicate whether data col-
lection tools (questionnaires, checklists) are pre-existing or will need to be
developed. Note that while the log frame column on "Means of Verification" may list a source or method, such as "household survey", the M and E plan requires much more detail, since the M and E work will be based on the specific methods noted.

 Frequency/schedules

This column states how often the data for each indicator will be collected,
such as monthly, quarterly, or annually. It is often useful to list the data collec-
tion timing or schedule, such as start-up and end dates for collection or dead-
lines for tool development. When planning for data collection timing, it is important to consider factors such as seasonal variations, school schedules, holidays, and religious observances (for example, Ramadan).


 Person(s) responsible

This column lists the people responsible and accountable for the data collection and analysis, for example, community volunteers, field staff, project managers, local partner(s), and external consultants. In addition to specific people's names,
use the position title to ensure clarity in case of personnel changes. This col-
umn is useful in assessing and planning for capacity building for the M and E
system.

 Data analysis

This column describes the process for compiling and analysing the data to
gauge whether the indicator has been met or not. For example, survey data
usually require statistical analysis, while qualitative data may be reviewed by
research staff or community members.

 Information use

This column identifies the intended audience and use of the information. For
example, the findings could be used for monitoring project implementation,
evaluating the interventions, planning future project work, or reporting to policy
makers or donors. This column should also state ways that the findings will be
formatted (for example, tables, graphs, maps, histograms, and narrative re-
ports) and disseminated (for example, Internet Web sites, briefings, commu-
nity meetings and mass media).

The indicator matrix can be adapted to information requirements for project


management. For example, separate columns can be created to identify data
sources, collection methods and tools, information use and audience, or
person(s) responsible for data collection and analysis. It may also be prefer-
able to use separate matrices for M and E indicators.

It is critical that the indicator matrix be developed with the participation of


those who will be using it. Completing the matrix requires detailed knowledge
of the project and context provided by the local project team and partners.
Their involvement contributes to data quality because it reinforces their un-
derstanding of what data they are to collect and how they will collect them
(Chaplowe, 2008)
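Because the indicator matrix is essentially a table with fixed column headings, it can also be kept as structured data and shared with the team as a spreadsheet. The following is a minimal sketch in Python under that assumption; the file name and example content are invented:

    import csv

    # Illustrative sketch: store indicator matrix rows under the column
    # headings described above and export them as a CSV file. The file
    # name and example content are invented.
    COLUMNS = ["indicator", "indicator_definition", "methods_sources",
               "persons_responsible", "frequency_schedule",
               "data_analysis", "information_use"]

    rows = [{
        "indicator": "Number of caretakers attending awareness workshops",
        "indicator_definition": "Beneficiaries identified by the local government agent",
        "methods_sources": "Workshop attendance roster",
        "persons_responsible": "Education Field Officer",
        "frequency_schedule": "Collected at each workshop, reported quarterly",
        "data_analysis": "Reviewed at the quarterly reflection meeting",
        "information_use": "Project reporting to management and donors",
    }]

    with open("indicator_matrix.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        writer.writerows(rows)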


3.7 The Use of a Log Frame in M and E


3.7.1 The anatomy of a log frame
A log frame is made up of two important features, which are (a) the horizontal
logic and (b) the vertical logic.

The horizontal logic consists of the following sub-features: the project description, indicators, sources of verification (SOV)/means of verification (MOV) and assumptions. The vertical logic consists of the project/programme goal, objectives/outcomes, deliverables/outputs and activities (Osborne, 2004).

These features are crafted in what is technically termed the log frame matrix, which is indicated in Figure 3.2.

Figure 3.2 Log Frame Matrix (Source: Osborne, 2004)

3.7.2 Tracing the logic of log frames


Figure 3.2 tells a complete story of how a log frame can summarise a project. It indicates that if adequate resources are provided, then it becomes possible for the activities to be conducted. As we go up the logic, if the activities are conducted, then results can be produced as outputs of the activities. If the deliverables are produced, then the objectives of the project are accomplished. If the objectives are accomplished, then this should contribute to the overall goal.

3.7.3 Constructing descriptive statements

Writing a descriptive statement is very important in the construction of the log frame. The statements must be clear, to the point and precise, without ambiguity.

Figure 3.3: Description of Statements (Source: Osborne, 2004)

If the log frame's logic is followed, and the assumptions hold true and do not change fundamentally in a negative direction, then the project is likely to succeed. This is indicated in Figure 3.4 below.


Figure 3.4 Logic of Log Frames (Source: Osborne, 2004)

Activity 3.2

?
1. (a) Identify components of the horizontal logic on a log frame.
(b) Identify components of the vertical logic on a log frame.
(c) Discuss the relationship that exists between the horizontal and vertical logic of a log frame.

3.8 Indicators
An indicator is an instrument that gives you information: a quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect changes connected to an intervention, or to help assess the performance of a development actor. In short, an indicator is a variable that measures change in a phenomenon or process (OECD, 2002).


3.8.1 Types of indicators


There are two types of indicators: direct indicators and indirect indicators (proxy indicators) (OECD, 2002).

Direct indicators

These indicators point directly to the subject of interest. This is often the case with operational and more technical subjects. What the manager wants to know can be (and generally is) measured directly.

Indirect indicators (Proxy indicators)

Indirect indicators (or proxy-indicators) refer in an indirect way to the subject


of interest. There can be several reasons to formulate indirect indicators
(UNDP, 2009):
 The subject of interest cannot be measured directly. This is particularly
the case for more qualitative subjects, like behavioural change, and so
on.
 The subject of analysis can be measured directly, but it is too sensitive
to do so, for example level of income, “safe sex”, and so forth.
 The use of an indirect indicator can be more cost-effective than the use
of a direct one. An indirect indicator may very well represent the right
balance between level of reliability of information and the efforts needed
to obtain the data.
(Eldis, 2001)

Activity 3.3
?
1. (a) Discuss various sources of verification on a log frame in a development project of your choice.
(b) What are the advantages and disadvantages of the sources of verifica-
tion you have chosen?
(c) How would you resolve the shortfalls of the sources of verification you
have identified?
(d) How can you increase the reliability and validity of the information
from the sources of verification?
2. Discuss ways in which a log frame can be improved as a tool for monitor-
ing and evaluation.


3.9 Log Frame Vertical Logic Indicators


The log frame’s vertical logic indicators include:

3.9.1 Activity indicators


Activity indicators focus on implementation progress as reflected in project
and partner staff work plans, project events, and corresponding budget ex-
penditures (UNDP, 2009). They answer basic questions like:
 Was the Activity completed with acceptable quality?
 Was it completed as planned regarding numbers and types of items
purchased and distributed?
 Were the meetings held?
 Were the numbers and gender of people in the target groups trained or
otherwise involved?
Activity indicators are typically measured through administrative, manage-
ment, trainer, and financial tracking and record-keeping systems, supplemented
with written summaries and reports (UNDP, 2009).

3.9.2 Output indicators


Output indicators allow project management to track what is to be delivered,
when, and, most importantly, to what effect. They are generally measured in
terms of immediate effects of goods and services delivered, such as:
 pre/post-training scores on tests (written or verbal skills, simple as-
sessments, and so on) and
 creation of certain structures, documents, systems (kilometres of roads
or number of schools rehabilitated), and so on.
(UNDP, 2009)

3.9.3 Objectives indicators


Objectives indicators focus on demonstrable evidence of a behavioural change,
such as adoption or uptake, coverage or reach of outputs. Objectives indica-
tors normally can only be collected by the project itself – because they are
specific to behavioural changes in response to interventions by/in the specific
project and its action area. Secondary sources rarely exist at this level. Tracking
objectives indicators begins as soon as results have begun being delivered


and have had a reasonable amount of time to take effect. Start with “light”
monitoring, then do more, or more targeted monitoring depending on your
findings (UNDP, 2009).

3.9.4 Goal indicators


Many organisations do not require that the project manager measure the impact of the project against the goal, asserting that project managers generally have no direct influence over the contribution the project makes to the overall objective. They can only be expected to monitor the broader policy and programme environment to help ensure the project continues to be contextually relevant (UNDP, 2009).

3.10 Sources of Verification


This part of the horizontal logic concerns the sources of verification in the log frame. It requires you to articulate how you are going to obtain the data for your indicators: it asks where you are going to obtain the data to measure your indicators. The following are common sources of verification used in a log frame in most development projects, depending on the nature of the programme (UNDP, 2009; Chaplowe, 2008):
 household surveys,
 interviews,
 records (medical records),
 registers,
 vaccine records,
 evaluators’ reports,
 quarterly reports and scenario checklists.
The information obtained from the above sources is compared with the baseline, and the difference or change between the new record and the baseline indicates the direction towards success or failure of the project. The information from these sources will also indicate how far the project is from meeting its pre-set indicator targets.
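As a worked illustration of comparing a new record with the baseline and the pre-set target, consider the following sketch (all figures are invented):

    # Illustrative sketch: express progress toward a pre-set indicator
    # target relative to the baseline. All figures are invented.
    def progress_toward_target(baseline, current, target):
        """Return the share of the planned change achieved so far."""
        if target == baseline:
            raise ValueError("Target must differ from the baseline.")
        return (current - baseline) / (target - baseline)

    # Example: coverage was 40% at baseline, is 55% now, target is 70%.
    share = progress_toward_target(40, 55, 70)
    print(f"{share:.0%} of the planned change achieved")  # 50%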


3.11 Advantages of Using a Log Frame


World Bank (2003) identified several advantages and weaknesses of the log
frame. The following are some of the advantages and limitations of using the
Logical Framework as a tool for monitoring and evaluation.

Table 3.3 Advantages and Disadvantages of Using Log Frames

Advantages
 It helps you ask the right questions.
 It guides systematic and logical analysis of the key interrelated ele-
ments that constitute a well-designed project.
 It defines linkages between the project and external factors.
 It facilitates common understanding and better communication among decision-makers, managers and other parties involved in the project.
 It prepares us for replication of successful results.
 It ensures continuity of approach when the original project staff is re-
placed.
 It provides a shared methodology and terminology among governments,
donor agencies, contractors and clients.
 Widespread use of the logical framework format makes it easier to
undertake both sector studies and comparative studies in general.

Limitations

 Organisations may promote a blueprint, rigid or inflexible approach,


making the log frame a straitjacket to creativity and innovation.
 The strong focus on results can miss the opportunity to define and im-
prove processes.
 The log frame is only one of several tools to be used during project preparation, implementation and evaluation, and it does not replace target group analysis, time planning, impact analysis and so on.
 The log frame is a general analytic tool. It is policy-neutral on questions of income distribution, employment opportunities, access to resources, local participation, cost and feasibility of strategies and technology, or effects on the environment.
(The World Bank, 2003)


Activity 3.4

?
1. Analyse the advantages and disadvantages of using a log frame as a tool for monitoring and evaluation of development projects.
2. Discuss ways in which a log frame can be used.
3. How applicable are log frames as a monitoring and evaluation tool for managing development projects?
4. (a) Define indicators.
(b) In groups, attempt to make a list of 10 (i) poverty indicators and (ii) health indicators of your choice.
(c) Discuss the utility of the identified health and poverty indicators in monitoring and evaluation.
5. Examine the factors that you need to consider when selecting indicators.
6. (a) As a group, think of an example of a development project of your choice. Attempt a typology of indicators, giving examples.
(b) Using an example of a project, draw a log frame and indicate activities and associated indicators. Explain the nexus between the indicators you have identified and the activities.

3.12 Summary
In this unit, we looked closely at a very important tool in monitoring and evaluation practice: the log frame. We looked at the log frame in the context of the project cycle, defined and characterised it, visited its anatomy, and outlined the factors to consider in the construction of a logical framework. We also presented the advantages and disadvantages of using the log frame. The next unit will focus on another monitoring and evaluation tool, the Gantt chart.


References
Chaplowe, S. (2008). Monitoring and Evaluation Planning: Guidelines and Tools. Baltimore: American Red Cross/CRS.
Civicus. (2006). Community Monitoring and Evaluation. http://www.civicus.org/documents/toolkits/PGX_H_Community%20M&E.pdf (accessed 20/04/2017).
Eldis. (2001). A Participatory Monitoring and Evaluation Guide: Indicators. http://nt1.ids.ac.uk/eldis/hot/pm4.
Organisation for Economic Co-operation and Development (OECD). (2002). DAC Glossary of Key Terms in Evaluation. https://www.oecd.org/dac/evaluation/2754804.pdf (accessed 03/06/2017).
Osborne, C. (2004). A Presentation Workshop on Monitoring and Evaluation: Challenges of the 21st Century Planning. www.nasv.orgb (accessed 6 June 2011).
Shapiro, J. (1996). Evaluation: Judgment Day or Management Tool? Olive Publications.
United Nations Development Programme. (2009). Selecting Key Results Indicators. http://stone.undp.org/undpweb/evalnet/docstore1/index_final/methodology/documents/indicators.
World Bank. (1996). The Log Frame Handbook: A Logical Framework Approach to Project Cycle Management. Washington: World Bank.



Unit Four

Monitoring and Evaluation Tools: Gantt Chart

4.1 Introduction

The process of monitoring and evaluation is a multi-stakeholder exercise. It requires creating a framework for gathering and analysing the data, and in that regard it calls for the use of a plethora of tools to facilitate the monitoring and evaluation processes. Various tools can be used for monitoring and evaluation in development practice. In this unit, we are going to look at the Gantt chart as a monitoring and evaluation tool. This unit explores how to create and use a Gantt chart for monitoring and evaluation.

4.2 Unit Objectives


By the end of this unit, you should be able to:
 define a Gantt chart
 list components of a Gantt chart
 interpret a Gantt chart
 construct a Gantt chart
 use a Gantt chart to monitor and evaluate a project

4.3 Origins of the Gantt Chart


Karol Adamiecki, a Polish engineer who ran a steelworks in southern Poland and had become interested in management ideas and techniques, devised the first such chart in the mid-1890s. Some 15 years after Adamiecki, Henry
Gantt, an American engineer and project management consultant, devised his
own version of the chart and it was this that became widely known and popu-
lar in western countries. Consequently, it was Henry Gantt whose name was
to become associated with charts of this type. Originally Gantt charts were
prepared laboriously by hand; each time a project changed it was necessary
to amend or redraw the chart and this limited their usefulness, continual change
being a feature of most projects. Nowadays, however, with the advent of
computers and project management software, Gantt charts can be created,
updated and printed easily (http://www.gantt.com/creating-gantt-charts.htm).

4.4 The Gantt Chart: A Conceptual View


The Gantt chart bears the name of Henry Gantt, the American mechanical engineer and management consultant who devised his version of the chart in the 1910s. A Gantt diagram represents projects or tasks in the form of cascading horizontal bar charts, and it is helpful when monitoring a project's progress. It is a bar chart that illustrates a project schedule. Gantt charts illustrate the start and finish dates of the terminal elements and summary elements of a project, and are used to plan how long a project should take. The tool lays out the order in which the tasks need to be carried out. The chart allows you to see immediately what should have been achieved at any point in time, and it lets you see how remedial action may bring the project back on course. Most Gantt charts also include "milestones", which are not part of the traditional chart; however, for representing deadlines and other significant events, it is very useful to include this feature.


(http://www.envisionsoftware.com/articles/Gantt_Chart.html, accessed 16/05/2016)

A Gantt chart illustrates the breakdown structure of the project by showing


the start and finish dates as well as various relationships between project
activities, and in this way helps you track the tasks against their scheduled
time or predefined milestone. It is both a monitoring and evaluation tool for
any project.

A project activity might occur over several months and involve several tasks. Furthermore, not all tasks start and finish at the same time; some tasks cannot start until other tasks are finished. You can write this down in words, but sometimes it can be hard to grasp the meaning of a document that sets out a project like this. One technique for dealing with the management of a project is a Gantt chart. This provides you with a pictorial method of managing your project and improves the monitoring and evaluation of the tasks being undertaken, as well as of the development of the project as a whole. As a development practitioner, you will find the Gantt chart an important tool to use in the monitoring and evaluation of projects.

4.4.1 Advantages of a Gantt chart


A Gantt chart allows you to monitor the following:
 What the various activities are
 When each activity begins and ends
 How long each activity is scheduled to last
 Where activities overlap with other activities, and by how much
 The start and end date of the whole project
(http://www.gantt.com/)
 It organises thoughts. A Gantt chart helps you to plan and organise your thoughts. It helps to simplify complexity by breaking the work down into individual tasks that can be easily monitored and evaluated in line with expected outcomes.
 It helps you to set realistic time frames. The bars on the chart
indicate in which period a particular task or set of tasks will be com-
pleted. This can help you to get things in perspective properly. In addi-
tion, when you do this, make sure that you think about events in your
organisation that have nothing to do with the project that might con-
sume resources and time.


 It can be highly visible. It can be useful to place the chart, or a large


version of it, where everyone can see it. This helps to remind people of
the objectives and when certain things are going to happen. It is useful
if everyone in your organisation can have a basic level of understanding
of what is happening with the project even if they may not be directly
involved with it. Further advantages of a Gantt chart include the following:
 easy to draw;
 easy to decipher and understand;
 provides clarity of a project’s status;
 easy to use as a briefing tool in a meeting covering a whole set of
activities;
 one chart can illustrate many activities;
 one of the easiest ways of reporting and documenting a project’s
progress;
 it is a tool that can be used across all streams of work.
(http://www.brighthubpm.com/templates-forms/98481-network-analysis-and-
gantt-charts)

4.4.2 Disadvantages of Gantt charts


The disadvantages of Gantt Charts are as follows:
 They can become extraordinarily complex. Except for the simplest
projects, there will be large numbers of tasks undertaken and resources
employed to complete the project. There are software applications that
can manage all this complexity (e.g., Mavenlink, Wrike,
Smartsheet, AceProject). However, when a project gets to this level, it must be managed by a small number of people (perhaps only one) who can stay on top of all the details. Sometimes this does not work so well in a business
that is not used to this type of management. Big businesses will fre-
quently employ one or more project managers who are very skilled in
this. For a range of reasons, this may not work so well in a smaller
enterprise.
 The size of the bar does not indicate the amount of work. Each
bar on the chart indicates the period over which a particular set of
tasks will be completed. However, by looking at the bar for a particu-
lar set of tasks, you cannot tell what level of resources are required to
achieve those tasks. So, a short bar might take 500 man hours while a


longer bar may only take 20 man hours. The longer bar may indicate to
the uninformed that it is a bigger task, when in fact it is not.
 They need to be constantly updated. As you get into a project,
things will change. If you’re going to use a Gantt chart you must have
the ability to change the chart easily and frequently. If you don’t do this,
it will be ignored. Again, you will probably need software to do this
unless you’re keeping your project management at a high level.
 Difficult to see on one sheet of paper. The software products that
produce these charts need to be viewed on a computer screen, usually
in segments, to be able to see the whole project. It then becomes diffi-
cult to show the details of the plan to an audience. Further, you can
print out the chart, but this will normally entail quite a large “cut and
paste” exercise. If you are going to do this frequently, it can be very
time-consuming.

Activity 4.1

?
1. Define a Gantt chart.
2. Discuss the advantages and disadvantages of using a Gantt chart in project management. Use examples.

Figure 4.1 An example of a Gantt chart

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


Regrettably, Microsoft Excel does not have a built-in Gantt chart template as
an option. However, you can quickly create a Gantt chart in Excel by using
the bar graph functionality and a bit of formatting.

Please follow the steps below closely and you will make a simple Gantt chart in under 3 minutes. We will be using Excel 2010 for this Gantt chart example, but you can simulate Gantt diagrams in Excel 2007 and Excel 2013 in exactly the same way.

4.4.3 Steps in creating a Gantt Chart using Excel


Step 1. Create a project table

You start by entering your project's data in an Excel spreadsheet. List each task in a separate row and structure your project plan by including the Start date, End date and Duration, that is, the number of days required to complete the tasks.

Tip. Only the Start date and Duration columns are really necessary for
creating an Excel Gantt chart. However, if you enter the End Dates too, you
can use a simple formula to calculate Duration, as you can see in the screenshot
below.
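For example, assuming the Start date sits in cell B2 and the End date in cell C2, a hypothetical Duration formula in D2 would be =C2-B2; because Excel stores dates as day numbers, the subtraction returns the elapsed days. Adjust the cell references to your own layout.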

Figure 4.2: Creating a Gantt chart using Excel

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


Step 2. Make a standard Excel Bar chart based on Start date

You begin making your Gantt chart in Excel by setting up a usual Stacked Bar
chart.
 Select a range of your Start Dates with the column header, it’s B1:B11
in our case. Be sure to select only the cells with data, and not the entire
column.
 Switch to the Insert tab >Charts group and click Bar.
 Under the 2-D Bar section, click Stacked Bar.


Figure 4.3 A standard Excel Bar chart based on Start date


Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


As a result, you will have the following Stacked bar added to your worksheet:

Figure 4.4 Stacked bar added to worksheet

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


Step 3. Add Duration data to the chart

Now you need to add one more series to your Excel Gantt chart-to-be.

1. Right-click anywhere within the chart area and choose Select Data
from the context menu.

Figure 4.5 Adding Duration data to the chart

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

The Select Data Source window will open. As you can see in the screenshot
below, Start Date is already added under Legend Entries (Series). And
you need to add Duration there as well.
2. Click the Add button to select more data (Duration) you want to plot
in the Gantt chart.


Figure 4.6 Locating the Add button.

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

3. The Edit Series window opens and you do the following:

 In the Series name field, type “Duration” or any other name of your
choosing. Alternatively, you can place the mouse cursor into this field
and click the column header in your spreadsheet, the clicked header
will be added as the Series name for the Gantt chart.
 Click the range selection icon next to the Series Values field.


Figure 4.7 Locating the range selection icon

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)
4. A small Edit Series window will open. Select your project Duration
data by clicking on the first Duration cell (D2 in our case) and dragging
the mouse down to the last duration (D11). Make sure you have not
mistakenly included the header or any empty cell.

Figure 4.8 Determining duration.

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)
5. Click the Collapse Dialog icon to exit this small window. This will bring
you back to the previous Edit Series window with Series name and
Series values filled in, where you click OK.


Figure 4.9 Determining duration.

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)
6. Now you are back at the Select Data Source window with both Start
Date and Duration added under Legend Entries (Series). Simply
click OK for the Duration data to be added to your Excel chart.

Figure 4.10 Start Date and Duration added under Legend Entries (Series)

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


The resulting bar chart should look similar to this:

Figure 4.11 The resulting bar chart

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

Step 4. Add task descriptions to the Gantt chart

Now you need to replace the days on the left side of the chart with the list of
tasks.

1. Right-click anywhere within the chart plot area (the area with blue and
orange bars) and click Select Data to bring up the Select Data Source
window again.
2. Make sure the Start Date is selected on the left pane and click the
Edit button on the right pane, under Horizontal (Category) Axis La-
bels.


Figure 4.12 Adding Task Descriptions to the chart (1).

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)
3. A small Axis Label window opens and you select your tasks in the same fashion as you selected Durations in the previous step: click the range selection icon, then click on the first task in your table and drag the mouse down to the last task. Remember, the column header should not be included. When done, exit the window by clicking on the range selection icon again.


Figure 4.13 Adding Task Descriptions to the chart (2)

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

4. Click OK twice to close the open windows.


5. Remove the chart labels block by right-clicking it and selecting Delete
from the context menu.

Figure 4.14 Adding Task Descriptions to the chart (3)

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


At this point your Gantt chart should have task descriptions on the left side
and look something like this:

Figure 4.15 Chart with task descriptions added

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

Step 5. Transform the bar graph into the Excel Gantt chart

What you have now is still a stacked bar chart. You have to add the proper
formatting to make it look more like a Gantt chart. Our goal is to remove the
blue bars so that only the orange parts representing the project’s tasks will be
visible. In technical terms, we won’t really delete the blue bars, but rather
make them transparent and therefore invisible.
1. Click on any blue bar in your Gantt chart to select them all, right-click
and choose Format Data Series from the context menu.


Figure 4.16 Transforming the bar graph into the Excel Gantt chart (1)

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)
2. The Format Data Series window will show up and you do the follow-
ing:
 Switch to the Fill tab and select No Fill.

 Go to the Border Colour tab and select No Line.


Figure 4.17 Transforming the bar graph into the Excel Gantt chart (2)

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)
Note. You do not need to close the dialog because you will use it again in the
next step.
3. As you have probably noticed, the tasks on your Excel Gantt chart are
listed in reverse order. Now we are going to fix this.


Click on the list of tasks in the left-hand part of your Gantt chart to select
them. This will display the Format Axis dialog for you. Select the Catego-
ries in reverse order option under Axis Options and then click the Close
button to save all the changes.

Figure 4.18 Transforming the bar graph into the Excel Gantt chart (3)

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


The results of the changes you have just made are:


 Your tasks are arranged in a proper order on a Gantt chart.
 Date markers are moved from the bottom to the top of the graph.
Your Excel chart is starting to look like a normal Gantt chart, isn’t it? For
example, my Gantt diagram looks like this now:

Figure 4.19 Resultant chart

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

Step 6. Improve the design of your Excel Gantt chart

Though your Excel Gantt chart is beginning to take shape, you can add a few
more finishing touches to make it really stylish.

1. Remove the empty space on the left side of the Gantt chart

As you remember, originally the starting date blue bars resided at the start of
your Excel Gantt diagram. Now you can remove that blank white space to
bring your tasks a little closer to the left vertical axis.


 Right-click on the first Start Date in your data table and select Format Cells > General. Write down the number that you see: this is a numeric representation of the date, in my case 41730. As you probably know, Excel stores dates as numbers based on the number of days since 1-Jan-1900 (a short check of this serial number is sketched after this list). Click Cancel because you do not actually want to make any changes here.

Figure 4.20 Improving Chart Design

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


 Click on any date above the task bars in your Gantt chart. One click
will select all the dates; you right click them and choose Format Axis
from the context menu.

Figure 4.21 Improving Chart Design: choosing format

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)
 Under Axis Options, change Minimum to Fixed and type the number
you recorded in the previous step.
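If you want to check such a serial number outside Excel, the small sketch below (assuming Python is available) reproduces it. Note that Excel's 1900 date system wrongly treats 1900 as a leap year, which is why the epoch used below is 30 December 1899 rather than 1 January 1900:

    from datetime import date

    # Illustrative check of Excel's 1900 date-system serial numbers.
    # For dates after 28 February 1900, Excel's serial number equals
    # the days since 1899-12-30 (the offset absorbs Excel's fictitious
    # 29 February 1900).
    EXCEL_EPOCH = date(1899, 12, 30)

    def excel_serial(d):
        return (d - EXCEL_EPOCH).days

    print(excel_serial(date(2014, 4, 1)))  # 41730, as in the example above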
2. Adjust the number of dates on your Gantt chart

In the same Format Axis window that you used in the previous step, change
Major unit and Minor unit to Fixed too, and then add the numbers you
want for the date intervals. Typically, the shorter your project’s timeframe is
the smaller numbers you use. For example, if you want to show every other
date, enter 2 in the Major unit. You can see my settings in the screenshot
below.

Note. In Excel 2013 and Excel 2016, there are no Auto and Fixed radio buttons, so you simply type the number in the box.


Figure 4.22 Adjusting the number of dates on your Gantt chart

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

Tip. You can play with different settings until you get the result that works best for you. Do not be afraid of doing something wrong, because you can always revert to the default settings by switching back to Auto in Excel 2010 and 2007, or clicking Reset in Excel 2013.

3. Remove excess white space between the bars

Compacting the task bars will make your Gantt graph look even better.


 Click any of the orange bars to get them all selected, right click and
select Format Data Series.
 In the Format Data Series dialog, set Separated to 100% and Gap
Width to 0% (or close to 0%).

Figure 4.23 Removing excess white space between the bars

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

Here is the result of our efforts - a simple but nice-looking Excel Gantt chart:


Figure 4.24 Resultant chart

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

Remember, though your Excel chart simulates a Gantt diagram very closely, it
still keeps the main features of a standard Excel chart:

 Your Excel Gantt chart will resize when you add or remove tasks.
 You can change a Start date or Duration; the chart will reflect the changes
and adjust automatically.
 You can save your Excel Gantt chart as an image.
Tips:
 You can design your Excel Gantt chart in different ways by changing the fill colour, border colour, shadow and even applying the 3-D format. All these options are available in the Format Data Series window (right-click the bars in the chart area and select Format Data Series from the context menu).


Figure 4.25 3-D formatting

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


 When you have created an awesome design, it might be a good idea to


save your Excel Gantt chart as a template for future use. To do this,
click the chart, switch to the Design tab on the ribbon and click Save
as Template.
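If you prefer to script the chart rather than build it by hand, the same stacked-bar trick can be reproduced with a general-purpose plotting library. The following is a minimal sketch using Python's matplotlib; the task names and dates are invented for illustration:

    import matplotlib.pyplot as plt
    from datetime import date

    # Minimal Gantt sketch: an invisible offset up to each start date,
    # then a visible bar for the duration - the same idea as the Excel
    # stacked-bar method above. All data are invented examples.
    tasks = [
        ("High level analysis", date(2014, 4, 1), 7),
        ("Select server hosting", date(2014, 4, 8), 1),
        ("Configure server", date(2014, 4, 9), 14),
        ("Develop core modules", date(2014, 4, 23), 21),
    ]

    origin = min(start for _, start, _ in tasks)
    names = [name for name, _, _ in tasks]
    offsets = [(start - origin).days for _, start, _ in tasks]
    durations = [days for _, _, days in tasks]

    fig, ax = plt.subplots()
    ax.barh(names, durations, left=offsets)
    ax.invert_yaxis()  # first task at the top, as in a Gantt chart
    ax.set_xlabel(f"Days from {origin.isoformat()}")
    plt.tight_layout()
    plt.savefig("gantt_sketch.png")

Because the data live in ordinary variables, updating the chart when the project changes is a matter of editing the task list and re-running the script.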
The Online Gantt template

Gantt charts can also be created using online templates, which are programmed to build the chart automatically. Online Gantt templates are fast and easy to use; for example, you can practise with the Interactive Online Gantt Chart Creator from smartsheet.com, which offers a 30-day free trial, so you can sign up with your Google account and start making your first Gantt diagram online straight away.

The process is very straightforward: you enter your project details in the left-hand table and, as you type, a Gantt chart is built in the right-hand part of the screen. Figure 4.26 illustrates the process.


Figure 4.26 The Online Gantt template

Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)


4.5 Creating a Gantt Chart: Example 2

Figure 4.27 shows an example of a finished Gantt chart, which can be used to monitor a project so that it is able to meet its project goals and expected outcomes. This figure is the outcome of the several steps that we are going to follow in producing a Gantt chart for monitoring and evaluating development projects.

Figure 4.27: An example of a completed Gantt Chart

Source: https://www.mindtools.com/pages/article/newPPM_03.htm (accessed 22/05/2016)


To create one for your project, follow these steps, using our example as a guide. The steps presented in this section were obtained from www.mindtools.com/pages/article/newPPM_03.htm.

Step 1: Identify Essential Tasks

Gantt charts don’t give useful information unless they include all of the activi-
ties needed for a project or project phase to be completed. So, to start, list all
of these activities. Use a work breakdown structure if you need to establish
what the tasks are. Then, for each task, note its earliest start date and its
estimated duration.

For example:

Your organization has won a tender to create a new “Software as a Service”


product, and you’re in charge of the project. You decide to use a Gantt chart
to organize all of the necessary tasks, and to calculate the likely overall timescale
for delivery. You start by listing all of the activities that have to take place, and
you estimate how long each task should take to complete. Your list looks as
follows:


Table 4.1: Scheduled Tasks


TASK LENGTH

A. High level analysis 1 week

B. Selection of server hosting 1 day

C. Configuration of server 2 weeks

D. Detailed analysis of core modules 2 weeks

E. Detailed analysis of supporting modules 2 weeks

F. Development of core modules 3 weeks

G. Development of supporting modules 3 weeks

H. Quality assurance of core modules 1 week

I. Quality assurance of supporting modules 1 week

J. Initial client internal training 1 day

K. Development and QA of accounting reporting 1 week

L. Development and QA of management reporting 1 week

M. Development of management information system 1 week

N. Client internal user training 1 week

Source: https://www.mindtools.com/pages/article/newPPM_03.htm (accessed 22/05/2016)

Step 2: Identify Task Relationships

The chart shows the relationship between the tasks in a project. Some tasks
will need to be completed before you can start the next one, and others can’t
end until preceding ones have ended. These dependent activities are called
“sequential” or “linear” tasks. Other tasks will be “parallel” – i.e. they can


be done at the same time as other tasks. You don’t have to do these in se-
quence, but you may sometimes need other tasks to be finished first. So, for
example, the design of your brochure could begin before the text has been
edited (although you won’t be able to finalize the design until the text is per-
fect.) Identify which of your project’s tasks are parallel, and which are se-
quential. Where tasks are dependent on others, note down the relationship
between them. This will give you a deeper understanding of how to organize
your project, and it will help when you start scheduling activities on the chart.

Note:

In Gantt charts, there are three main relationships between sequential tasks:
 Finish to Start (FS) – FS tasks cannot start before a previous (and
related) task is finished. However, they can start later.
 Start-to-Start (SS) – SS tasks cannot start until a preceding task starts.
However, they can start later.
 Finish-to-Finish (FF) – FF tasks cannot end before a preceding task
ends. However, they can end later.
 A fourth type, Start to Finish (SF), is very rare.
Tip 1:

Tasks can be sequential and parallel at the same time – for example, two
tasks (B and D) may be dependent on another one (A), and may be com-
pleted at the same time. Task B is sequential in that it follows on from A, and
it is parallel, with respect to D.

Tip 2:

To minimise delivery times, you will need to do as much work in parallel as


you sensibly can. You also need to keep the scope of the project as small as
possible.

Example


Table 4.2: Task Relationships

Task | Length | Type* | Dependent on
A. High level analysis | 1 week | S | -
B. Selection of server hosting | 1 day | S | A
C. Configuration of server | 2 weeks | S | B
D. Detailed analysis of core modules | 2 weeks | S, P to B, C | A
E. Detailed analysis of supporting modules | 2 weeks | S, P to F | D
F. Development of core modules | 3 weeks | S, P to E | D
G. Development of supporting modules | 3 weeks | S, P to H, J | E
H. Quality assurance of core modules | 1 week | S, P to G | F
I. Quality assurance of supporting modules | 1 week | S | G
J. Initial client internal training | 1 day | S, P to G | C, H
K. Development and QA of accounting reporting | 1 week | S | E
L. Development and QA of management reporting | 1 week | S | E
M. Development of Management Information System | 1 week | S | L
N. Client internal user training | 1 week | S | I, J, K, M

* P = parallel, S = sequential

Source: https://www.mindtools.com/pages/article/newPPM_03.htm (accessed 22/05/2016)
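To see how a dependency table like Table 4.2 drives the schedule, the sketch below computes each task's earliest start and finish by a simple forward pass. It is a deliberate simplification that treats every dependency as finish-to-start and covers only a subset of the tasks; durations are in weeks, with one working day taken as 0.2 weeks:

    # Illustrative forward pass over finish-to-start dependencies: a
    # task's earliest start is the latest finish among the tasks it
    # depends on. A simplified subset of Table 4.2; durations in weeks.
    tasks = {
        # task: (duration_in_weeks, depends_on)
        "A": (1, []),
        "B": (0.2, ["A"]),
        "C": (2, ["B"]),
        "D": (2, ["A"]),
        "F": (3, ["D"]),
        "H": (1, ["F"]),
    }

    earliest_finish = {}

    def finish(task):
        if task not in earliest_finish:
            duration, deps = tasks[task]
            start = max((finish(d) for d in deps), default=0)
            earliest_finish[task] = start + duration
        return earliest_finish[task]

    for name in tasks:
        duration, _ = tasks[name]
        end = finish(name)
        print(f"{name}: start week {end - duration:.1f}, finish week {end:.1f}")

Extending this pass with a backward sweep over the same data yields latest start times and slack, which is how the critical path method introduced in the next unit identifies the tasks that cannot slip.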


Step 3: Input Activities into Software or a Template

You can draw your charts by hand or use specialist software, such as Gantto,
Matchware, or Microsoft Project. Some of these tools are cloud-based,
meaning that you and your team can access the document simultaneously,
from any location. (This helps a lot when you are discussing, optimising, and
reporting on a project.)

Figure 4.28: An example of a completed Gantt Chart with tasks

Source: https://www.mindtools.com/pages/article/newPPM_03.htm (accessed 22/05/2016)


Step 4: Chart Progress

As your project moves along, it will evolve. For example, in our scenario, if
quality assurance of core modules revealed a problem, then you may need to
delay training, and halt development of the management information system
until the issue is resolved.

Update your chart to reflect changes as soon as they occur. This will help you
to keep your plans, your team, and your sponsors up to date.

Activity 4.2

1. With reference to Figure 4.1, how many tasks are done at the same time in a given week?
2. Using Figure 4.27:
(a) Identify any parallel activities.
(b) List any sequential activities.
(c) List activities that must be finished by the end of the 7th week.
(d) Which activities start and finish between weeks 4 and 7?
3. Choose a project of your choice and create a Gantt chart for monitoring the project activities.

4.6 Summary
Gantt charts are useful for planning and scheduling projects. They help you assess how long a project should take, determine the resources needed, and plan the order in which you will complete tasks. Gantt charts help you to monitor and evaluate whether activities of the project are within the planned parameters, which makes them very important monitoring and evaluation tools in development practice. They are also helpful for managing the dependencies between tasks. Gantt charts are useful for monitoring a project's progress once it is underway. You can immediately see what should have been achieved by a certain date and, if the project is behind schedule, you can take action to bring it back on course.


References
https://www.mindtools.com/pages/article/newPPM_03.htm (accessed 22/05/2016)
https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)

Books for further reading

Clark, W. (2012). The Gantt Chart: A Working Tool of Management. Nabu Press.
Lock, D. (2007). Project Management, 9th Edition. Gower Publishing Limited, Hampshire.
Montgomery, J. (2003). How to Create Gantt Charts Anyone Can Follow. The Idea Interpreter.
Thomsett, M.C. (2010). The Little Black Book of Project Management, 3rd Edition. AMACOM, Toronto.



Unit Five

Program Evaluation Review Technique and the Critical Path Method

5.1 Introduction

A Program Evaluation Review Technique (PERT) chart is a project management tool used to schedule, organise, and coordinate tasks within a project. PERT is a methodology developed by the U.S. Navy in the 1950s to manage a nuclear submarine missile programme (Stretton, 2007). The Program Evaluation Review Technique is a project monitoring and evaluation tool that can be used for managing projects in order to complete them within the planned parameters. There are many tools and steps included in a PERT chart but, most importantly, it covers the scheduling, coordinating and organising of the different steps of a project. In simple words, a PERT chart is a diagram that shows the beginning, duration, steps and completion of a project. In this unit, we look at PERT as a monitoring and evaluation tool. We will cover the advantages of using the Program Evaluation Review Technique in monitoring and evaluation, the methodology for creating network diagrams, task scheduling, as well as the determination of critical paths in project management.

5.2 Unit Objectives


By the end of this unit, you should be able to:
 explain the program evaluation review technique
 describe the main components of the program evaluation review technique
 discuss the utility of the program evaluation review technique in monitoring and evaluation
 construct network diagrams from a given set of project data
 create scheduled tasks from project data
 execute the critical path analysis computations

Although it originated in the late 1950s, the critical path method is still incredibly important to project managers today. It provides a visual representation of project activities, clearly presents the time required to complete tasks, and tracks activities so you don't fall behind (Kelley, 1989). The critical path method also reduces uncertainty, because you must calculate the shortest and longest time of completion of each activity. This forces you to consider unexpected factors that may impact your tasks, and reduces the likelihood that an unexpected surprise will occur during your project. According to Kelley (1989), the critical path method also has three main benefits for project managers, as follows:
 Identifies the Most Important Tasks: First, it clearly identifies the tasks that you will have to closely manage. If any of the tasks on the critical path take more time than their estimated durations, start later than planned, or finish later than planned, then your whole project will be affected.

 Helps Reduce Timelines: Secondly, if, after the initial analysis predicts a completion time, there is interest in completing the project in a shorter period, it is clear which task or tasks are candidates for duration reduction.

 When the results from a critical path method are displayed as a bar chart, like a Gantt chart, it is easy to see where the tasks fall in the overall timeframe. You can visualise the critical path activities (they are usually highlighted), as well as task durations and their sequences. This provides a new level of insight into your project's timeline, giving you more understanding about which task durations you can modify, and which must stay the same.

 Compares Planned with Actual: The critical path method can also be used to compare planned progress with actual progress. As the project proceeds, the baseline schedule developed from the initial critical path analysis can be used to track schedule progress.
 Throughout a project, a manager can identify tasks that have already been completed, the predicted remaining durations for in-progress tasks, and any planned changes to future task sequences and durations. The result will be an updated schedule which, when displayed against the original baseline, will provide a visual means of comparing planned with actual progress.

In addition, the critical path method will:
 help you identify the activities that must be completed on time in order to complete the whole project on time;
 show you which tasks can be delayed, and for how long, without impacting the overall project schedule;
 calculate the minimum amount of time it will take to complete the project; and
 tell you the earliest and latest dates each activity can start on in order to maintain the schedule.

(Fondahl, 1987)

Critical Path Analysis is an effective and powerful method of assessing:
 Tasks which must be carried out.
 Where parallel activity can be carried out.
 The shortest time in which a project can be completed.
 Resources needed to achieve a project.
 The sequence of activities, scheduling, and timings involved.
 Task priorities.
The Critical Path Method (CPM) will help you to answer the following monitoring questions:

 What is the expected project completion date?
 What is the potential "variability" in this date?
 What are the scheduled start and completion dates for each specific activity?
 What activities are critical, in the sense that they must be completed exactly as scheduled in order to meet the target for overall project completion?
 How long can noncritical activities be delayed before a delay in the overall completion date is incurred?
 How might resources be concentrated most effectively on activities in order to speed up project completion?
 What controls can be exercised on the flows of expenditures for the various activities throughout the duration of the project, in order that the overall budget can be adhered to?
(Stretton, 2007)

5.3 Limitations of PERT

The following are some of PERT's weaknesses:
 The activity time estimates are somewhat subjective and depend on judgement. In cases where there is little experience in performing an activity, the numbers may be only a guess. In other cases, if the person or group performing the activity estimates the time, there may be bias in the estimate.
 Even if the activity times are well estimated, PERT assumes a beta distribution for these time estimates, but the actual distribution may be different.
 Even if the beta distribution assumption holds, PERT assumes that the probability distribution of the project completion time is the same as that of the critical path. Because other paths can become the critical path if their associated activities are delayed, PERT consistently underestimates the expected project completion time.
(http://projectmanagementandmicrosoftproject.blogspot.my/2016/11/pert-chart.html, accessed 20/05/2017)
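For context, the beta-distribution assumption mentioned above enters through PERT's standard three-point estimate: an optimistic time O, a most likely time M and a pessimistic time P are combined as expected time = (O + 4M + P) / 6, with variance ((P - O) / 6) squared. The following minimal Python sketch (our own illustration, with invented O/M/P values) shows the calculation.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Standard PERT three-point estimate and variance."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return expected, variance

# Invented estimates (in days) for two hypothetical activities.
for name, o, m, p in [("Design", 2, 4, 9), ("Build", 5, 7, 15)]:
    te, var = pert_estimate(o, m, p)
    print(f"{name}: expected {te:.2f} days, variance {var:.2f}")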


Activity 5.1

1. Examine the utility of the Critical Path Method in project management. Use examples.

5.4 Key Elements of the Critical Path Method

The CPM has four key elements, which are:
 Critical Path Analysis
 Float Determination
 Early Start & Early Finish Calculation
 Late Start & Late Finish Calculation
The following notations are important and you must understand what they mean:
ES = Early Start
EF = Early Finish
LS = Late Start
LF = Late Finish
DU = Duration


5.5 Applying the Program Evaluation Review Technique

Using the Program Evaluation Review Technique in monitoring and evaluation can be complicated. However, if one grasps the technique, it can be a very simple method of monitoring and evaluating projects. In many cases, this can be done using computer software, which makes it very easy to use. However, for our purposes we are doing it manually, so that we get to know the concept of the method and it becomes easy when using a computer to run it. We will proceed by giving examples, and you are expected to follow the examples and get to know PERT. We will restrict ourselves to the determination of the identified key elements, which are:
 Critical Path Analysis
 Float Determination
 Early Start & Early Finish Calculation
 Late Start & Late Finish Calculation
(Fondahl, 1987)

Figure 5.1 (a): An example of a PERT chart drawn to show the development of a system

Source: http://www.pmexamsmartnotes.com/how-to-calculate-critical-path (accessed 18/05/2017)


With reference to the above figure, the following steps must be noted in the PERT chart technique:
 Determine the steps or tasks included in the project:
This is the first step that needs to be done before the PERT chart is constructed. At this stage, the person in charge or the project supervisor needs to review the entire project and list the tasks or different stages that need to be done in order to deliver the results.

 Define and explain the first task that needs to be done:
This is the second stage, where one needs to define what will be the first task to complete in order to begin the whole project. This may or may not include the duration to complete the task, which depends on each project.

 Define the next task that can be started simultaneously with the first task:
This includes the explanation of the second task that can be started at the same time as the first task.

 Determine the second task that needs to be done:
This includes the second task that will be started as soon as the first task is completed, or it can also be started along with the other stages if that is possible in the project.

 Define the duration or completion time for each task or step:
There are two options for applying the PERT chart to your project; you can explain the duration or deadline for each stage on each step with the individual task, or you can mention the required completion duration of each task separately on the chart.

 Define the critical path on the PERT chart:
This includes the tasks or steps in the project that need to be done on a short-term basis; as these tasks are the most important ones in the project, it is important that they be done on time in order to eliminate any delays in the project.

(http://www.pmexamsmartnotes.com/how-to-calculate-critical-path, accessed 18/05/2017)


5.6 Important Attributes of the PERT Chart

In order to understand both the construction and the interpretation of the PERT chart, you must understand the following parameters.
 The circles mark the beginnings and ends of tasks to be done in the project and are called nodes.
 The arrows are the tasks themselves. Letters A to I identify them. In a real PERT chart, the actual names of tasks would be used instead of letters. The lengths of the arrows do not relate to their lengths in time.
 The numbers after the task names are the durations of the tasks. The time interval may be anything from seconds to years. Let's assume these timings are in days.
 An important point to remember: the arrows are tasks, not the circles (nodes).
 When a node has two or more tasks branching from it, it means those tasks can be done concurrently (at the same time).
 When a node has incoming arrows, it means the incoming task must be completed before progress may continue to any arrows heading away from the node. For example, task A must be completed before tasks B or G may begin.
(https://www.coursehero.com/file/6803126/wwwvceitcom-ganttpert-pert-tute-pert-tutehtm/, accessed 20/05/2017)


5.7 Examination and Interpretation of the PERT Chart

Using Figure 5.1, let us attempt an interpretation of the PERT chart.

Figure 5.1 (b): Interpretation of the PERT chart

Source: https://www.coursehero.com/file/6803126/wwwvceitcom-ganttpert-pert-tute-pert-tutehtm/ (accessed 20/05/2017)

You need to be able to examine and interpret charts like this PERT chart.
Task A is the first task and takes 2 days.
When it is done, tasks B and G can begin.
If we follow the task G line, it takes 2 days to reach task H, which takes 5 days.
Task H leads to the final task, I.
The total time for following this path is 2 + 2 + 5 + 3 = 12 days.
The path would be described as A, G, H, I.

When task G began, so did task B (with another team of workers).
When task B finished, after 3 days, there was another opportunity to run some tasks concurrently.
Therefore, after B, tasks C and D began at the same time.
If we follow task C, it takes 1 day to reach task E, which takes 4 days and leads to the final task, I.
The total time for this path is 2 + 3 + 1 + 4 + 3 = 13 days.


If we follow task D, which takes 3 days, it leads to task F (also 3 days) before reaching the final task, I.
The total time for this path is 2 + 3 + 3 + 3 + 3 = 14 days.
Note that tasks E, F and H must all be finished before task I can begin.
You will have noticed that there are several paths through from task A to task I.
Each of these paths takes a different amount of time.

Thus, we ask ourselves the most important question, which is:

What is the shortest possible time for the project to take (without leaving any tasks out)?

It is 14 days (the longest possible path). Yes, it sounds odd that the shortest time is the longest path, but in the chart above, the shortest project time would be 14 days. That is the critical path of the project: the sequence of tasks from beginning to end that takes the longest time. No task on the critical path can take more time without affecting the end date of the project. In other words, none of the tasks on the critical path has any slack. Slack is the amount of extra time a task can take before it affects a following task. For example, if making breakfast could take another eight minutes before it affected your leaving time, the breakfast task would have eight minutes' slack. Tasks on the critical path are called critical tasks. No critical task can have any slack (by definition).

(https://www.coursehero.com/file/6803126/wwwvceitcom-ganttpert-pert-tute-pert-tutehtm/, accessed 20/05/2017)
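To make the "longest path" idea concrete, here is a minimal Python sketch of our own that enumerates the paths of the Figure 5.1 network and reports the longest (critical) one. The durations and successor links are taken from the worked example above.

durations = {"A": 2, "B": 3, "C": 1, "D": 3, "E": 4,
             "F": 3, "G": 2, "H": 5, "I": 3}
successors = {"A": ["B", "G"], "B": ["C", "D"], "C": ["E"], "D": ["F"],
              "E": ["I"], "F": ["I"], "G": ["H"], "H": ["I"], "I": []}

def paths_from(task, prefix=()):
    """Yield every path from this task to the final task."""
    prefix = prefix + (task,)
    if not successors[task]:          # reached the final task
        yield prefix
        return
    for nxt in successors[task]:
        yield from paths_from(nxt, prefix)

for path in paths_from("A"):
    print(", ".join(path), "=", sum(durations[t] for t in path), "days")

critical = max(paths_from("A"), key=lambda p: sum(durations[t] for t in p))
print("Critical path:", ", ".join(critical))   # A, B, D, F, I (14 days)

Running this prints the same three path totals worked out above (12, 13 and 14 days) and confirms A, B, D, F, I as the critical path.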

5.8 Critical Path Analysis

Example 2

The critical path is the sequence of activities with the longest duration. A delay in any of these activities will result in a delay for the whole project. Below are some critical path examples to help you understand the key elements.


Figure 5.2: Using the Critical Path Method (CPM)

Source: http://www.project-management-skills.com/critical-path-method.html (accessed 21/05/2017)

To find your critical path using Critical Path Analysis, you use the following method:
1. The duration of each activity is listed above each node in the diagram.
2. For each path, add the duration of each node to determine its total duration.
3. The critical path is the one with the longest duration.
There are three paths through this project; the critical path has a duration of 14, as indicated in the example. Follow the method carefully and understand it.

(http://www.project-management-skills.com/critical-path-method.html, accessed 21/05/2017)


5.9 Float Determination

Once you have identified the critical path for the project, you can determine the float for each activity. Float is the amount of time an activity can slip before it causes your project to be delayed. Float is sometimes referred to as slack (http://www.project-management-skills.com/critical-path-method.html, accessed 21/05/2017).

Figuring out the float using the Critical Path Method is easy. You will start with the activities on the critical path. Each of those activities has a float of zero. If any of those activities slips, the project will be delayed.

Then you take the next longest path. Subtract its duration from the duration of the critical path. That is the float for each of the activities on that path.

You will continue doing the same for each subsequent longest path until each activity's float has been determined. If an activity is on two paths, its float will be based on the longer path that it belongs to.

Figure 5.3: Determining Float

Source: http://www.project-management-skills.com/critical-path-method.html (accessed 21/05/2017)

Using the critical path diagram from the previous section, Activities 2, 3, and 4 are on the critical path, so they have a float of zero.

The next longest path is Activities 1, 3, and 4. Since Activities 3 and 4 are also on the critical path, their float will remain zero. For any remaining activities, in this case Activity 1, the float will be the duration of the critical path minus the duration of this path: 14 - 12 = 2. Therefore, Activity 1 has a float of 2.


The next longest path is Activities 2 and 5. Activity 2 is on the critical path, so it will have a float of zero. Activity 5 has a float of 14 - 9, which is 5. Therefore, as long as Activity 5 does not slip more than 5 days, it will not cause a delay to the project.
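The path-based float calculation just described can be sketched in a few lines of Python. The path totals below (14, 12 and 9) are the ones from the worked example; the code itself is our own illustration.

# Path durations from the example: the critical path (Activities
# 2, 3 and 4) takes 14, the next longest (1, 3, 4) takes 12, and
# (2, 5) takes 9.
paths = [([2, 3, 4], 14), ([1, 3, 4], 12), ([2, 5], 9)]
critical_duration = max(duration for _, duration in paths)

floats = {}
# Work from the longest path downwards; an activity keeps the float
# of the longest path it belongs to, hence setdefault().
for activities, duration in sorted(paths, key=lambda p: -p[1]):
    for activity in activities:
        floats.setdefault(activity, critical_duration - duration)

print(floats)   # {2: 0, 3: 0, 4: 0, 1: 2, 5: 5}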

5.10 Early Start & Early Finish Calculation

The Critical Path Method includes a technique called the Forward Pass, which is used to determine the earliest date an activity can start and the earliest date it can finish. These dates are valid as long as all prior activities in that path started on their earliest start date and did not slip.

Starting with the critical path, the Early Start (ES) of the first activity is one. The Early Finish (EF) of an activity is its ES plus its duration minus one. Using our earlier example, Activity 2 is the first activity on the critical path: ES = 1, EF = 1 + 5 - 1 = 5.

Figure 5.4: Critical Path Schedules

Source: http://www.project-management-skills.com/critical-path-method.html (accessed 21/05/2017)

You then move to the next activity in the path, in this case Activity 3. Its ES is the previous activity's EF + 1: Activity 3 ES = 5 + 1 = 6. Its EF is calculated the same as before: EF = 6 + 7 - 1 = 12. If an activity has more than one predecessor, to calculate its ES you will use the predecessor with the latest EF.
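The Forward Pass can also be expressed in a few lines of Python. This minimal sketch of our own uses the durations from the worked example (Activities 2, 3 and 4 take 5, 7 and 2; per the float calculation earlier, Activities 1 and 5 are taken to last 3 and 4); the predecessor links are our reading of the example diagram.

durations = {1: 3, 2: 5, 3: 7, 4: 2, 5: 4}
predecessors = {1: [], 2: [], 3: [1, 2], 4: [3], 5: [2]}

early_start, early_finish = {}, {}
for act in sorted(durations):   # activities happen to be in dependency order
    preds = predecessors[act]
    # With more than one predecessor, use the latest Early Finish.
    early_start[act] = max((early_finish[p] for p in preds), default=0) + 1
    early_finish[act] = early_start[act] + durations[act] - 1

print(early_start[2], early_finish[2])   # 1 5  (ES = 1, EF = 1 + 5 - 1)
print(early_start[3], early_finish[3])   # 6 12 (ES = 5 + 1, EF = 6 + 7 - 1)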


5.11 Late Start and Late Finish Calculation

1. The Backward Pass is a Critical Path Method technique you can use to determine the latest date an activity can start and the latest date it can finish before it delays the project.

2. You will start once again with the critical path, but this time you begin from the last activity in the path.

3. The Late Finish (LF) for the last activity in every path is the same as the last activity's EF in the critical path. The Late Start (LS) is the LF - duration + 1.

4. In our example, Activity 4 is the last activity on the critical path. Its LF is the same as its EF, which is 14. To calculate the LS, subtract its duration from its LF and add one: LS = 14 - 2 + 1 = 13.

5. You then move on to the next activity in the path. Its LF is determined by subtracting one from the previous activity's LS.
6. In our example, the next activity in the critical path is Activity 3. Its LF is equal to Activity 4's LS - 1: Activity 3 LF = 13 - 1 = 12.
7. Its LS is calculated the same as before, by subtracting its duration from the LF and adding one: Activity 3 LS = 12 - 7 + 1 = 6.
You will continue in this manner, moving along each path, filling in LF and LS for activities that do not have them already filled in.
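Continuing the earlier sketch, the Backward Pass can be coded the same way. This is again our own illustration; the successor links and the project finish of 14 follow the worked example.

durations = {1: 3, 2: 5, 3: 7, 4: 2, 5: 4}
successors = {1: [3], 2: [3, 5], 3: [4], 4: [], 5: []}
project_finish = 14   # the EF of the last activity on the critical path

late_start, late_finish = {}, {}
for act in sorted(durations, reverse=True):   # reverse dependency order
    succs = successors[act]
    # The LF is one less than the earliest LS of the successors; the
    # last activity on each path gets the project finish date.
    late_finish[act] = (min(late_start[s] for s in succs) - 1
                        if succs else project_finish)
    late_start[act] = late_finish[act] - durations[act] + 1

print(late_start[4], late_finish[4])   # 13 14
print(late_start[3], late_finish[3])   # 6 12
# The float of each activity can now be read off as LS - ES.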

The Critical Path Method is an important tool for managing your project's schedule. As you can see, it is not very difficult to determine its key elements. However, once your project has more than a few activities, critical path scheduling can become tedious.

5.12 Question and Solution Activities


Other Illustrative Activities and Examples

When dealing with the Critical Path Method, it is important to remember that:
(i) the first step in the process is to define the activities in the project; and
(ii) the second is to establish the proper precedence relationships.

Comment

This is an important first step, since errors or omissions at this stage can lead to a disastrously inaccurate schedule. Table 5.1 shows the first activity list (the columns labelled "Time" and "Resources" are indications of things to come). This is the most important part of any PERT or CPM project. Important activities must not be missed, and this must be a group effort, not done in isolation.
(iii) Conceptually, Table 5.1 shows that each activity is placed on a separate line, and its immediate predecessors are recorded on the same line. The immediate predecessors of an activity are those activities that must be completed prior to the start of the activity in question.

Comment

For example, note in Table 5.1 that the organisation cannot start activity C, determine personnel requirements, until activity B, create the organisational and financial plan, is completed. Similarly, activity G, hire new employees, cannot begin until activity F, select the Global personnel that will move from Texas to Iowa, is completed. This activity, F, in turn, cannot start until activity C, determine personnel requirements, is completed.


Table 5.1: Table of Tasks

Source: https://www2.kimep.kz/bcb/omis/our_courses/is4201/Chap14.pdf (accessed 22/05/2017)

We shall shortly see how PERT and CPM are used to produce these answers.

5.13 Constructing the Network Diagram

Figure 5.5 shows a network diagram for activities A through C. We emphasise at the outset that the numbers assigned to the nodes are arbitrary. They are simply used to identify events and do not imply anything about precedence relationships. Indeed, we shall renumber the node that terminates activity C several times as we develop the network diagram for this project, but correct precedence relationships will always be preserved. In the network diagram, each activity must start at the node at which its immediate predecessors ended.

For example, in Figure 5.5, activity C starts at node ? because its immediate predecessor, activity B, ended there. We see, however, that complications arise as we attempt to add activity D to the network diagram.


Both A and C are immediate predecessors to D, and since we want to show any activity such as D only once in our diagram, nodes ? and ? in Figure 5.5 must be combined, and D should start from this new node. This is shown in Figure 5.6.

Node ? now represents the event that both activities A and C have been completed. Note that activity E, which has only D as an immediate predecessor, can be added with no difficulty. However, as we attempt to add activity F, a new problem arises. Since F has C as an immediate predecessor, it would emanate from node ? (of Figure 5.6). We see, however, that this would imply that F also has A as an immediate predecessor, which is incorrect.

Figure 5.5: Network Diagram for Activities A through C

Source: https://www2.kimep.kz/bcb/omis/our_courses/is4201/Chap14.pdf (accessed 22/05/2017)

Figure 5.6: A Partial Network Diagram

Source: https://www2.kimep.kz/bcb/omis/our_courses/is4201/Chap14.pdf (accessed 22/05/2017)


5.14 The Use of Dummy Activities

This diagramming dilemma is solved by introducing a dummy activity, which is represented by a dashed line in the network diagram in Figure 5.7. This dummy activity is fictitious in the sense that it requires no time or resources. It merely provides a pedagogical device that enables us to draw a network representation that correctly maintains the appropriate precedence relationships. Thus, Figure 5.7 indicates that activity D can begin only after both activities A and C have been completed. Similarly, activity F can occur only after activity C is completed.

We can generalise the procedure of adding a dummy activity as follows. Suppose that we wish to add an activity A to the network starting at node N, but not all of the activities that enter node N are immediate predecessors of the activity. Create a new node M with a dummy activity running from node M to node N. Take those activities that are currently entering node N and that are immediate predecessors of activity A and reroute them to enter node M. Now make activity A start at node M. (Dummy activities can be avoided altogether if, instead of associating activities with arcs (commonly known as activity on the arc [AOA]), we associate them with nodes.)
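The activity-on-node idea mentioned in passing above can be made concrete with a minimal Python sketch of our own: each activity simply carries its list of immediate predecessors, so no dummy activity is needed. The links follow the example in the text (D needs both A and C; F needs only C).

predecessors = {
    "A": [],
    "B": [],
    "C": ["B"],
    "D": ["A", "C"],   # D can begin only after both A and C
    "E": ["D"],
    "F": ["C"],        # F needs C but, unlike D, not A
}

def ready(activity, completed):
    """An activity may start once all its predecessors are done."""
    return all(p in completed for p in predecessors[activity])

print(ready("F", {"B", "C"}))   # True: C is done
print(ready("D", {"B", "C"}))   # False: A is still outstanding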

Figure 5.7: Introducing a Dummy Activity

Source: https://www2.kimep.kz/bcb/omis/our_courses/is4201/Chap14.pdf (accessed 22/05/2017)

Figure 5.8 shows the network diagram for the first activity list, as presented in Table 5.1. We note that activities G and H both start at node ? and terminate at node ?. This does not present a problem in portraying the appropriate precedence relationships, since only activity J starts at node ?.


Figure 5.8: Network Diagram for the First Activity List for the Move to Des Moines

Source: https://www2.kimep.kz/bcb/omis/our_courses/is4201/Chap14.pdf (accessed 22/05/2017)


Examples, Activities and Solutions

The following are some examples that will help you understand the determination of the critical path. The questions and solutions are provided. You are expected to study them and see how each solution comes about. With more practice, you will be able to understand. More information and exercises will be found on the MyVista account for this course. We will now apply both CPM and PERT to the following example.

Activity 5.2

1. A publisher has a contract with an author to publish a textbook. The (simplified) activities associated with the production of the textbook are given. The author is required to submit to the publisher a hard copy and a computer file of the manuscript.
(i) Based on Table 5.2 below, construct a network diagram.
(ii) Determine the critical path.

CPM Method

A list of activities is provided. Using the CPM method, we can find out the completion time of the project. We will also determine which activities are critical activities (activities that, if delayed, would delay the completion of the project) and the slack time for each non-critical activity (the time by which it can be delayed, or run late, without affecting the completion time of the project).


Table 5.2: Table of Scheduled Tasks

Activity 5.3

1. Note that the critical path is the path with the longest total time. Using the table below, construct a network diagram and determine the critical path of the project.
2. (a) Create a network diagram for the following project.
(b) Determine the critical path.


Note that:
1. You determine all possible paths to the finish.
2. You add up the durations of the activities along each path.
3. The longest path is the critical path.

Activity 5.4

1. Using the information in the table below, and assuming that the project team will work a standard working week (5 working days in 1 week) and that all tasks will start as soon as possible:


(i) Determine the critical path of the project.
(ii) Calculate the planned duration of the project in weeks.
(iii) Identify any non-critical tasks.

Activity 5.5

Using the figure below, determine the critical path.

Solutions for Activities

Solution for Activity 5.2

(a) The network diagram that you must come up with should be as indicated in the figure below.

(b) The critical path is (1,2), (2,3), (3,4), (4,6), (6,7), (7,8), (8,9).


Solution for Activity 5.3

Solution for Activity 5.4

The critical path of the project can be ascertained as follows:
(i) The critical path runs through Tasks A, B, C, F, G, H and I.
(ii) The sum of the critical task durations is 75 days (notice that you add up the days along the critical path).
(iii) Tasks D and E are non-critical.

Solution for Activity 5.5

The expected project duration is 21 weeks (7 + 5 + 5 + 4).


5.15 Summary
In this unit, we looked at the CPM as a project management tool that can be used to monitor and evaluate project activities. We looked at the important components of the CPM. We defined and characterised the CPM. The advantages of using the CPM in project monitoring and evaluation were indicated. The four key elements of the CPM were covered. Lastly, examples and solutions were proffered in this unit. Students were also reminded that much of what was covered in this unit is done using computer software. Students are encouraged to try the CPM computations using computer programmes. However, this unit has created a sound basis for students to engage in computer application of the CPM. The thrust of the examination of the CPM is in the context of monitoring and evaluation of projects.


References
Fondahl, J. W. (1987). The History of Modern Project Management - Precedence Diagramming Methods: Origins and Early Development. Project Management Journal, Volume XVIII, No. 2, June.
http://www.pmexamsmartnotes.com/how-to-calculate-critical-path (accessed 18/05/2017)
http://www.project-management-skills.com/critical-path-method.html (accessed 21/05/2017)
https://www2.kimep.kz/bcb/omis/our_courses/is4201/Chap14.pdf (accessed 22/05/2017)
Kelley, J. (1989). "The Origins of CPM: A Personal History". PM Network, Vol. III, No. 2, February, pp. 7-22.
Project Management PERT and CPM. https://www.coursehero.com/file/10473079/2-PERT/ (accessed 20/05/2017)
Stretton, A. (2007). A Short History of Modern Project Management. Published in PM World Today, October 2007 (Vol. IX, Issue X).



Unit Six

Setting Up a Monitoring System: Monitoring and Evaluation as a Process

6.1 Introduction

Setting up a monitoring system is as essential as the monitoring and evaluation process itself. It lays the foundation for the entire system and has a bearing on the quality of the results. In this unit, we walk you through the process of setting up a monitoring system. We cover defining monitoring system objectives, selection of relevant information, presentation of results and the use of results. Reporting and communication of monitoring and evaluation results, and report structure, are presented in detail.

6.2 Unit Objectives


By the end of this unit, you should be able to:
 develop a monitoring system
 formulate monitoring objectives
 discuss how the data is analysed and used

6.3 Monitoring System

A monitoring system is designed to meet the specific needs of the government, donors, and the community. Before designing the system, the priorities of the system must be clearly defined. A monitoring system should be seen as a means of communication, where the information collected is analysed and management decisions are made based on the system. The monitoring system itself will need to be monitored and evaluated to assess whether it is meeting its objectives, and it can be adjusted if necessary (Save the Children, 1993). A monitoring system is not simply a means of collecting information. It must be a communication system, in which information flows in different directions between all the people involved. According to the International Red Cross (IRC) (1987), a monitoring system is a basic monitoring and evaluation tool. It is put in place before the monitoring exercise starts, and it thus guides the monitoring process when it starts. It is important to understand that everything we are going to discuss in this unit should be done before we start the actual monitoring process.

A common failing of monitoring systems is the lack of feedback to the levels producing the data, the result being that there is no perceived use for the data, the data become less reliable, and management of the activity is less effective than it could be. In most development projects, the problem is that a lot of information is collected and, in the end, it is not utilised. As we stated earlier, we should not monitor projects for higher levels, but should monitor to improve our own performance.

It is important to start by identifying the minimum requirements of the monitoring system, and this in turn will determine:
 who is involved in designing the system;
 what information is needed and where the information is obtained;
 how the information is collected, analysed and presented;
 the degree of accuracy required; and
 the timing and frequency of information collection and analysis.

Designing a monitoring system is a daunting task. It requires some serious hard work, but once you have set up the monitoring framework, the whole process becomes easier. There are some issues one must consider when formulating a monitoring and evaluation framework. In this unit, we consider the steps provided by Oakley (2001) for designing any monitoring system.

Activity 6.1

1. Explain the "monitoring system" concept.
2. In your opinion, why is it important to clearly identify the people involved in the monitoring and evaluation system?
3. Why is it important to define monitoring objectives clearly?
4. How useful is the collection of relevant information in setting up a monitoring system?

6.4 Steps in Designing a Monitoring System

Monitoring systems do not have a structure cast in concrete. Although they may differ between organisations and institutions, they have common features, which are found in all systems. The following is an example of steps (steps 1-8) which can be followed when designing a monitoring system (UNDP, 2009).

Step one: Defining the monitoring system objectives

Step 1 consists of defining the monitoring objectives. This is important in that setting out the monitoring objectives acts as a guide to the whole process of monitoring. In this exercise, we ask questions such as the following:
 Why are you doing it, and for whom?
 What is to be monitored, by whom, and how will it be organised?
 How will the results be used?
Step two: Selection of relevant information

One of the key steps in the formulation of the monitoring system is the determination of how you are going to select the relevant information. This is a very important part because, in many cases, practitioners performing the monitoring and evaluation fail to attend to this aspect and end up having a lot of irrelevant data. This retards the monitoring process, in that too much data does not only make processing difficult, but is also costly, as it delays the publication of results and inflates fieldwork expenses. It is, therefore, very important that before you set out to perform the monitoring exercise, thorough work is done to determine exactly which information is relevant and in line with the objectives set in step one. To achieve this, the following questions may be asked by the practitioner formulating the system:
 What information should be collected?
 What process indicators should we choose which will give us the information we want?
 Is the information we will get in line with our set objectives for the evaluation process?
 Do we have the right impact indicators?
One needs to be selective to make sure that only useful information is collected, and that it is of a reasonable quality.

Step three: The collection of data for monitoring

Data collection is another important step in the monitoring system. Before you get into the field, you need to be certain of how you are going to collect data. There is need to take caution in choosing the best methods of collecting data. From the plethora of data-gathering methods, there is need to choose the methods which are most cost effective and which will give you the amount and quality of data you need. The choice of the data-gathering method is dependent on the type of project being evaluated. There is also need to provide sufficient training and support for the people collecting the data.

Step four: Data analysis

Analysis of the data will depend on the nature of the data collected. Where it is quantified, statistical analysis is possible. The sort of analysis that might be undertaken would be for associations and relationships, for trends, and for significant changes; even the simpler calculations, such as the average (or mean), maximum/minimum and range, can be invaluable despite their simplicity. Too often there is costly and detailed collection of data without the same attention given to analysis, so that the lessons contained in it are simply not noted. The methods of analysis should be identified as part of the process of preparing to gather data (Welsh et al., 2005). In interpreting information and assessing results, the following questions need to be asked frequently:


 Who should analyse the data?
 What methods are suitable for analysis?
 When should it be analysed?
The use of computer packages has to be decided at this time. The monitoring team needs to be familiar with the package to be used to analyse the data, and the package should suit the nature of the data to be collected. Data analysis packages like ATLAS.ti and the Statistical Package for the Social Sciences (SPSS) are used in such exercises.
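As a small, hypothetical illustration of the simple calculations mentioned above, the following Python sketch of our own computes the mean, maximum/minimum and range of some invented monitoring data, using only the standard library.

from statistics import mean

# Invented annual household incomes (USD) from one monitoring round.
incomes = [420, 515, 380, 610, 455, 500]

print("mean:", mean(incomes))
print("max:", max(incomes), "min:", min(incomes))
print("range:", max(incomes) - min(incomes))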

Step five: The presentation of information

Just as important as analysis is the presentation of data. There are different forms of presentation for a range of different users. The secret is to keep it simple and focused on a single message. Avoid showing too much in any one presentation. Tables are a simple means of presentation; more visually pleasing, and often clearer, are pie charts, columns and graphs. Diagrams and flow charts are other effective ways of depicting data. Well selected photographs and even multimedia presentations can be used to great effect to show the impacts of projects (Welsh, Schans and Dethrasaving, 2005).

Step six: The use of information/results

The goal of monitoring and evaluation is to find information that will improve the project so that the project can meet its expected outcomes. When the project's expected outcomes are met, the lives of the intended beneficiaries are improved. Thus, the most important thing is not only to get the information, but also to use the information. There is need to pre-determine how the information can be used for informing and improving the work. It is also important to provide opportunities for discussing the findings with all involved, so that there are no surprises and everyone is in the same boat. Ensuring that monitoring information is incorporated into existing planning procedures is very important.

Step seven: Maintaining the system: Resources, training, support and supervision

Resources for implementing M and E

According to Narayan (2001), resources are needed for implementing M and E activities. These are both human resources and financial resources, and some material resources will also be necessary, although many of these things are likely to be available in a project for use in other activities as well as in M and E, for example, GPS instruments. At the time of designing the M and E system, resource needs can only be discussed in general terms, until indicators are finalised and the methods of measurement agreed upon.

Human resources

It is important to identify a person in the project office who serves as the coordinator for all M and E activities. Often a project will have an international consultant to set up the system, assisted by an organisation's monitoring and evaluation manager. If you have contracted external consultants, it is most important that the external implementation agency has a designated M and E officer to liaise with staff in other offices, especially when the project is implemented in several provinces (or similar administrative units). During the preparatory phase of implementing the M and E system, informants will need to be identified; for example, households should be selected from which to collect income and expenditure data. In addition, key contacts amongst the various stakeholders should be identified, for example, in government agencies and non-governmental organisations (Welsh et al., 2005).

Financial resources

M and E should have a separate budget. Some projects have a specific budget for M and E activities; in others a specified per cent of the total budget might be set aside, whilst in others nothing is provided and all activities must be funded from the "regular" budget. According to UNDP (2009), a number of items that should be included in an M and E budget are listed below:
 field data collection – fees and per diems for enumerators;
 incentive payments for informal data collectors/informants;
 travel expenses for project staff engaged in M and E activities;
 fees, per diems and expenses for the midterm review, plus materials; and
 fees, per diems and expenses for ex-post evaluation.
Step eight: Participants in monitoring

It is important to pre-determine who should be involved in the monitoring exercise (Narayan, 2001). It is important that participants be chosen on the basis of competence and areas of specialisation. The following questions need to be asked:


 Who should be involved in the monitoring?
 How can all the people involved in the work benefit from a monitoring system?
 Should someone from outside design the monitoring system?

Activity 6.2

Using an example of a project:
1. List the steps involved in setting up a monitoring and evaluation system.
2. Discuss what is involved in maintaining the monitoring system.
3. Discuss the aspects that should be considered in the monitoring process.

6.5 Reporting and Communicating M and E Information

Communicating information concerns the findings of the M and E process and is for action and accountability (Welsh et al., 2005). Where the M and E system incorporates reporting on completion of milestones, this is normally adequate for the monitoring of actual activities versus planned activities. For evaluation, however, something more is needed, and many projects fail to integrate progress reports on the achievement of their objectives into regular report formats, instead leaving it until the project completion report. Preferably, an annual report should be made (Welsh et al., 2005). Annual reports have tended to be concerned with recording the activities undertaken and the funds spent; that is, they are more of a monitoring report. However, an annual report can be more, and it should report on changes in status from the previous period and highlight any issues arising from the review. It should also discuss impacts to date; for example, where annual household income is collected, it is easy to see if changes are occurring. To whom, then, should M and E information be communicated?

According to Welsh et al. (2005), there has been a tradition that regular reports are given to the funding agency and to the implementation partners. The broader community, the stakeholders with most at stake, are rarely the recipients of detailed reports on progress. They may get snippets, perhaps results of trials and demonstrations, and of course, they do have their own ideas of success. Nevertheless, they have probably never had the expected impacts explained. For example, if the project is to impact upon poverty, it would be easy to call a meeting of affected communities and discuss the findings of the latest household income survey. Project management might even benefit from the feedback that communities would give. When projects are truly "participatory", that does not just mean having communities do things; it also means taking communities into your confidence and sharing information with them, information that could lead to even better outcomes for the supposed beneficiaries (Welsh et al., 2005).

6.6 Report Writing

This section focuses on writing a report on an evaluation. However, all reports are similar, and even very short ones have an introduction, background/findings and a conclusion or recommendations. Therefore, following the basic principles outlined here will improve all report writing.

6.6.1 Purpose of the report

The purpose of an evaluation report is to communicate the findings and recommendations of the evaluation to others. Often it will serve as a very important tool for formal decision making, thus requiring clear formulation of conclusions, recommendations and proposed actions. The length and style of the report will depend largely on the intended readers. Policy makers and planners are interested in a short, to-the-point report summarising the findings, conclusions and recommendations (UNICEF, 1996). Project staff are interested in detailed technical information to improve project implementation. Therefore, it is necessary to define for whom the report is intended. This is unlikely to be an issue, as these people will have been identified at the start of the evaluation. In most cases, the evaluation will have been conducted or organised by you, the manager. The report should be adequate to serve your needs and provide a reference for the future. The report will probably be circulated to your staff and your superiors (UNICEF, 1996).

Reports are produced to communicate the results. In producing the reports, several options could be considered. The first is a short and concise report for people such as policy makers, planners and managers, and a more detailed report for other groups, such as health staff and engineers. The second option is to produce only one report, and to include a number of annexes, each of which provides more detailed information on a particular subject. A third option is to produce an Executive Summary, whose length and content are appropriate for readers with limited time.


6.7 Main Components of the Report

The report should contain the following components (Wang, 1997):
 cover page;
 summary and recommendations;
 introduction;
 objectives;
 methodology;
 findings or results;
 discussion and conclusion;
 references of literature used;
 annexes; and
 acknowledgements.
As the evaluation has been carried out as a management tool, the amount of introduction and background necessary may be very little. However, in all reports it is essential that the sections on objectives, methodology and results are presented in enough detail for the reader to be able to assess what was done, why, and whether it was done properly.

Activity 6.3

1. Give an outline of the structure and components of a monitoring and evaluation report.
2. Justify the significance of communicating information regarding monitoring and evaluation to stakeholders.

6.7.1 Cover page

The cover page should contain the title, the names of the authors with their titles and positions, and the institution that publishes the report. This will, most likely, be the institution that administered the project.


6.7.2 Summary and recommendations

The summary can only be written after the first, or even the second, draft of the report has been completed. It should contain:
 a very brief description of the problem;
 the main objectives;
 the place of study;
 the type of study and methods used; and
 all main findings and conclusions.
The summary will be the first part of your study that is read. Therefore, its writing demands thorough reflection and is time consuming. As a summary, it should be brief, probably only one or two pages. The recommendations should come from the discussion and conclusion. It is important that they be brought to the front of the report, as they represent the major reason for carrying out a monitoring and evaluation process.

6.7.3 Introduction
The introduction is a relatively easy part, which may be written after the first draft of the findings. The evaluation report requires an introduction which sets out basic information about the project and the evaluation itself. This provides the context for the report. According to Wang (1997), the introduction should provide the following information, although other information may be added if felt necessary:
 a brief statement of the main features of the project being evaluated;
 the reasons for the evaluation; and
 the composition of the evaluation team.
The introduction should be kept short and to the point, at about two pages long.

6.7.4 Objectives and methodology

For a long and complex evaluation covering a number of aspects and project areas, it may be necessary to set out the evaluation objectives and methodology in separate chapters. However, for most evaluation reports it may be sufficient to combine the objectives and the methodology in one chapter (Wang, 1997). The information should then be set out under the following major headings:
 evaluation objectives; and
 methods of data collection and analysis.
The methodology you followed for the collection of your data should be described in detail. It should include:
 sources of data (record cards, households, clinic registers, and so forth); and
 how the data was collected and by whom.

6.7.5 Findings
The systematic presentation of your findings in relation to the evaluation objectives is the crucial part of your report. Tables or graphs that summarise the findings may complement the description of the findings. How many of these you include depends on the readers you are aiming at. If your principal target group consists of managers rather than researchers, you may decide to include only the most essential tables in the text and merely refer to others presented in annexes (Narayan, 2001). It is important to set out the findings under clear and precise headings.

6.7.6 Discussion and conclusion

The findings can be discussed per objective, with conclusions as to what the findings indicate. The recommendations flow logically from the discussion and should be evident in this section. The discussion should also include findings from other related studies that support or contradict your conclusions. It is also important to present and discuss the limitations of the study.

The text and annexes should include sufficient details for professionals to enable them to follow how you substantiate your findings and conclusions. The report should be so self-explanatory that it should be possible to repeat the study, if desired. If any references have been used, then they are quoted here. A reference shall always include the author's name, the year, the title of the publication and the publisher (Wang, 1997).

6.7.7 Recommendations
The interpretation of findings requires decisions to be made on the relative success or failure of different aspects of the project. It may follow from these decisions that some changes should be made or some successes should be repeated in other projects. To ensure that conclusions are clearly identified and obvious to anyone reading an evaluation report, they are usually summarised as recommendations. No evaluation report is complete without recommendations. As mentioned above, the recommendations should reasonably follow from the findings and interpretation. A recommendation cannot be added because you 'think it is a good idea' unless it is supported by the findings of the evaluation. A recommendation is expected to be implemented and therefore should always state WHO is expected to do WHAT and WHEN.

The following guidelines may aid the formulation of conclusions and recom-
mendations, (Wang, 1997; Narayan, 2001):
 Do not jump to conclusions: Conclusions and recommendations have
to flow from the evaluation findings. Take care not to jump to conclu-
sions or make too sweeping statements.
 Do not suppress conflicting findings: Conflicting findings should not
be suppressed or spirited away. Instead, they should be carefully con-
sidered. If no explanation can be found, this should be stated in the
conclusions.
 Include unexpected findings: During the evaluation you may collect
information that you were not looking for, but which proves to be very
important. Even if these unexpected findings do not serve a particular
evaluation objective, they should nevertheless, be incorporated in the
conclusions and recommendations.
 Make practical and feasible recommendations: During the dis-
cussion and formulation of recommendations, attention needs to be
given to making practical and feasible recommendations which are
possible to implement. Recommendations that cannot or will not be
implemented are not worth making.
 Be as clear as possible: To increase their impact, conclusions and
recommendations need to be stated clearly. Therefore, each conclu-
sion or recommendation should cover one message only, and the level
or organisation to which it is directed should be precisely indicated.
 Finalisation of recommendations: Conclusions and recommenda-
tions should be arranged in order of importance, from the general to
the more specific. Before finalisation, check whether they meet the evalu-
ation objective and thus the purpose of the evaluation.
The recommendations of an evaluation, whether formulated by an individual or in committee, should be stated in a concise and useful manner and fed back or delivered to the appropriate persons. In the case of formal, external evaluation studies, the final product is a comprehensive evaluation report, which is submitted to the donors, with additional copies distributed to national and international agencies as the case may be.

Unfortunately, the evaluation report often does not find its way back to the
project implementers and communities and the other information providers,
and when it does, it is probably too late and in a form and style that is of
limited use to field personnel (Narayan, 2001). In order to overcome these
potential difficulties, it is recommended that external evaluators should present
their conclusions and recommendations to project implementers at the con-
clusion of the evaluation. This requirement could be included in the terms of
reference of the external evaluator.

6.7.8 Annexes

The annexes should contain essential additional information enabling profes-


sionals to follow your research and data analysis. If you used questionnaires,
these should be annexed to the report. Tables referred to in the text but not
included in order to keep the report short should also appear as annexes.

6.7.9 Acknowledgements

You may wish to thank those who supported you technically or financially in
the drafting and implementation of your study. In addition, your employer
who allowed you to invest time in the study and the respondents may be
acknowledged. Acknowledgements are usually placed straight after the title
page, before the table of contents, or at the end of the report before the
references.

6.8 Issues to Remember


Remember that your reader:
 is short of time,
 has many other urgent matters demanding his or her interest and attention, and
 is probably not knowledgeable in project implementation "jargon".
Therefore, the rules are:
 keep to the essentials,
 make no statement which is not based on facts and data,
 quantify,


 strive to be precise and specific all the time and avoid exaggeration,
 avoid using adverbs and adjectives,
 remember that your goal is to inform and not to impress,
 aim for clarity, logic and a sequential presentation.

Activity 6.4

?
1. Discuss ways in which the monitoring and evaluation reporting and
communication processes can be improved. Use examples to support
your facts.

6.9 Summary
We have established that setting up a monitoring system is as essential as the monitoring and evaluation process itself. In this unit, we laid the foundation for the entire monitoring and evaluation system for managing development projects. We discussed the step-by-step process of setting up a monitoring system and defined monitoring system objectives. We covered the selection of relevant information, and the presentation and use of results. Lastly, the reporting and communication of monitoring and evaluation results and report structures were presented in detail.


References
International Red Cross. (1987). Evaluating Water Supply and Sanitation Projects. Geneva: International Red Cross.
Narayan, D. (2001). Participatory Evaluation. North Wind: Nodav Publishers.
Norwegian Agency for Development Cooperation (NORAD). (1996). The Logical Framework Approach. Oslo: NORAD.
Oakley, P. (2003). Projects with People: The Practice of Participation in Rural Development. New Delhi: McGraw-Hill.
Save The Children. (1993). Assessment, Monitoring, Review and Evaluation Toolkits. SCF.
United Nations Children's Fund (UNICEF). (1996). A UNICEF Guide for Monitoring and Evaluation: Making a Difference? Geneva: UNICEF.
United Nations Development Programme (UNDP). (2009). Handbook on Planning, Monitoring and Evaluating for Development Results. New York: UNDP. http://undp.org/eo/handbook
Wang, C. (1997). Logical Framework Workshop. Maseru.
Welsh, N., Schans, M. & Dethrasaving, C. (2005). Monitoring and Evaluation Systems Manual (M&E Principles). Publication of the Mekong Wetlands Biodiversity Conservation and Sustainable Use Programme.



Unit Seven

Performance Measurement and Management

7.1 Introduction

Performance management is a crucial part of the management of development projects. Monitoring and evaluation practitioners and project managers need to know how a project is performing so that they can take appropriate action to correct or reinforce activities. In this unit, we cover methods used in performance measurement in monitoring and evaluation. We introduce the use of indicators, the setting of targets, data collection systems and quantitative and qualitative analysis. The unit should help us to apply indicators in ways that enhance the ability to judge progress towards results and performance when monitoring and evaluating development projects. Thus, in this unit we cover performance measurement, the selection of indicators, key steps in selecting indicators and using indicators to measure performance.

7.2 Unit Objectives


By the end of this unit, you should be able to:
 define performance management
 list methods used in performance management
 describe the use of indicators in performance management
 identify dimensions of performance management
 discuss steps in selection of performance indicators

7.3 Performance Management


Performance management is the generation, use and application of performance information for continuous improvement. It includes "performance measurement". Performance measurement is the collection, interpretation of, and reporting on data for performance indicators that measure how well programmes or projects deliver outputs and contribute to the achievement of higher-level aims (purposes and goals). Performance measures are most useful when used for comparisons over time or among units performing similar work. Performance measurement can also be described as a system for assessing the performance of development initiatives against stated goals, or as the process of objectively measuring how well a project is meeting its stated goals or objectives.

Indicators are part of performance measurement, but they are not the only part. To assess performance, it is necessary to know more than the actual achievements. Also required is information about how targets were achieved, the factors that influenced this positively or negatively, whether the achievements were exceptionally good or bad, and who was mainly responsible for the achievement or failure.

Traditionally, it has been easier to measure financial or administrative per-


formance such as efficiency. Results-based management today lays the basis
for substantive accountability and performance assessment or effectiveness.


7.3.1 Dimensions of performance management

Figure 7.1: Dimensions of Performance Assessment (Source: adapted from UNDP, 2002)

Results-based management may inform the assessment of the performance of projects, programmes, programme areas, groups of staff and individuals. Figure 7.1 illustrates the linkages between performance measurement, rating and indicators as elements of performance assessment.

There are basically three dimensions of performance assessment, and these dovetail as shown in Figure 7.1. These are:
 performance measurement,
 rating and
 indicators.
Performance measurement

This is the systematic analysis of performance against goals, taking into account the reasons behind performance and influencing factors (Organisation for Economic Co-operation and Development (OECD), 1998; United Nations Development Programme, 2002).


Rating

This is a judgement of progress, good or bad, made on the basis of indicators. Rating can also cover other performance dimensions.

Indicators

Indicators provide verification that progress towards results has taken place.

7.4 Rating System


The growing internalisation of Results Based Management (RBM) within de-
velopment projects is gradually allowing for an expanded use of reported
performance results for internal management and oversight functions. A key
area for such expanded use involves the development of a common rating
system for all results reported by the organisation under the RBM frame-
work. Such a system allows managers to rate performance at the results level,
and to analyse and compare trends by thematic category (for example, gov-
ernance, poverty or environment); level of intervention (for example, project,
output or outcome); geographic area (for example, Africa, Asia or Latin
America) or organisational unit (UNDP, 2002).

A common rating system would build on a three-point scale and may be used for all key monitoring and evaluation tools to compare performance across results. With this approach, there are two kinds of ratings: self-ratings and independent ratings. Having two kinds of ratings that use the same rating criteria allows a richer picture of how progress towards results is achieved. It also provides the basis for dialogue within and between the implementing organisations, beneficiaries and the government if ratings for the same outputs or outcomes vary.


Activity 7.1
?
1. Define the following terms in the context of development monitoring and evaluation:
a) performance
b) management
c) performance management indicators
2. List three dimensions of performance assessment.
3. Discuss how each of these dimensions influences measurement.
4. Analyse the key elements of a common rating system. How is it important in performance measurement?

7.4.1 Key elements of the common rating system


For outcomes, the rating system has three points: positive change, negative change and unchanged (no change). The three ratings reflect progress on outcomes, without attributing the progress to any partner. They are meant to reflect the degree to which progress has been made towards or away from achieving the outcome (World Bank, 1996).

The methodology in all three ratings is to compare, as measured by the outcome indicators, the evidence of movement from the baseline either towards or away from the target, as the sketch after this list illustrates.
 Positive change: positive movement from the baseline towards the set target, as measured by the outcome indicator(s).
 Negative change: reversal to a level below the baseline, as measured by the outcome indicator(s).
 Unchanged: no perceptible change between the baseline and the set target, as measured by the outcome indicator(s).
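To make the comparison rule concrete, consider the short Python sketch below. It is our own illustration, not part of the World Bank or UNDP methodology; the function name and the enrolment figures are hypothetical.

def rate_outcome(baseline: float, current: float, target: float) -> str:
    """Three-point outcome rating: compare movement from the baseline
    towards (or away from) the target, as measured by an outcome indicator."""
    direction = 1 if target >= baseline else -1   # which way counts as "towards the target"?
    movement = (current - baseline) * direction   # positive = movement towards the target
    if movement > 0:
        return "positive change"   # movement from the baseline towards the target
    if movement < 0:
        return "negative change"   # reversal relative to the baseline
    return "unchanged"             # no perceptible change from the baseline

# Hypothetical school-enrolment indicator: baseline 55%, target 80%
print(rate_outcome(baseline=55, current=62, target=80))  # positive change
print(rate_outcome(baseline=55, current=50, target=80))  # negative change

Note that the sketch also handles indicators where the target lies below the baseline (for example, reducing malnutrition), by first working out which direction counts as progress.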

7.4.2 Rating outputs


The rating system for outputs also has three points: no, partial and yes. The three ratings reflect the degree to which an output's targets have been met. This serves as a proxy assessment of how successful an organisational unit has been in achieving its outputs.

According to the World Bank (1996), the three ratings are meant to reflect the degree of achievement of outputs by comparing the baseline (the non-existence of the output) with the target (the production of the output). The partially achieved category is meant to capture those en-route or particularly ambitious outputs that may take considerable inputs and time to come to fruition. The three points are listed below, followed by a short illustrative sketch.


 No: not achieved.
 Partial: only if two-thirds or more of a quantitative target is achieved.
 Yes: achieved.
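The two-thirds rule above translates directly into a small decision function. The sketch below is an illustration only, assuming a purely quantitative target; the function name and the borehole example are invented.

def rate_output(achieved: float, target: float) -> str:
    """Three-point output rating: yes / partial / no.
    'Partial' applies when two-thirds or more of a quantitative target
    has been achieved but the target is not yet fully met."""
    fraction = achieved / target
    if fraction >= 1.0:
        return "yes"      # target achieved
    if fraction >= 2 / 3:
        return "partial"  # two-thirds or more of the target achieved
    return "no"           # not achieved

# Hypothetical example: 700 of a targeted 900 boreholes rehabilitated
print(rate_output(achieved=700, target=900))  # partial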
Evaluations at a minimum rate outcome and output progress. Outcome evaluations, which are undertaken by independent assessment teams, may also rate key performance dimensions such as sustainability, relevance and cost-effectiveness. Other types of assessments should also provide ratings where appropriate, such as assessments of development results by the evaluation offices. The ratings are used for trend analysis and lessons learned corporately, as well as for validation of results and debate on the performance of projects.

Selected monitoring reports rate outcome and output progress for projects on a voluntary basis. For the Annual Project Report (APR), the rating on progress towards outputs is made annually by the project manager and the programme manager. It forms the basis of a dialogue in which consensus ratings for the outputs are produced. If there is disagreement between the project and programme staff on how outputs are rated, both ratings are included in the report, with proper attribution. The programme manager makes the rating on progress towards outcomes in the APR (World Bank, 1996). For field visits, programme managers periodically rate progress towards both outputs and outcomes, discussing their ratings with the project staff. The ratings are used to assess project performance and for trend analysis and lessons learned. They may also be used corporately for validation and lessons learned.

7.5 Selecting Indicators


Programme managers and senior managers are concerned with more complex indicators that reflect progress towards outcomes. UNDP (2002) provides key steps for managers working with indicators, as outlined below.

7.5.1 Key steps in the selection of indicators


The following are steps that can be followed in the selection of indicators (UNDP, 2002).

Set baseline data and target:


An outcome indicator has two components: a baseline and a target. The base-
line is the situation before a programme or activity begins. It is the starting
point for results monitoring. The target is what the situation is expected to be
at the end of a programme or activity. (Output indicators rarely require a
baseline since outputs are being newly produced and the baseline is that they
do not exist).

Hypothetical example 1
 If wider access to education is the intended result, for example, school enrolment may provide a good indicator. Monitoring of results may start with a baseline of 55 percent enrolment in 1997 and a target of 80 percent enrolment in 2002. Between the baseline and the target there may be several milestones that correspond to expected performance at periodic intervals (see the sketch after this example).
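If one assumes, purely for illustration, that milestones are evenly spaced between the baseline and the target, the expected value at each interval can be interpolated linearly. The sketch below applies this assumption to the hypothetical enrolment figures; the function name and the linearity assumption are ours, not UNDP's.

def milestones(baseline: float, target: float,
               start_year: int, end_year: int) -> dict:
    """Linearly interpolate expected indicator values (milestones)
    for each year between the baseline year and the target year."""
    span = end_year - start_year
    step = (target - baseline) / span
    return {start_year + i: round(baseline + i * step, 1)
            for i in range(span + 1)}

# Enrolment: baseline 55% in 1997, target 80% in 2002
print(milestones(55, 80, 1997, 2002))
# {1997: 55.0, 1998: 60.0, 1999: 65.0, 2000: 70.0, 2001: 75.0, 2002: 80.0}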
Baseline data provides information that can be used when designing and im-
plementing interventions. It also provides an important set of data against
which success (or at least change) can be compared, thereby making it pos-
sible to measure progress towards a result. The verification of results de-
pends upon having an idea of change over time. It requires a clear under-
standing of the development problem to be addressed, before beginning any
intervention. A thorough analysis of the key factors influencing a development
problem complements the development of baseline data and target setting
(UNDP, 2002).

What do you do when no baseline is identified?


 A baseline may exist even when none was specified at the time a programme or project was formulated. In some cases, it may be possible to find estimates of approximately where the baseline was when the programme started, through annual review exercises and national administrative sources (Allen, 1996).
Hypothetical examples 2
 For example, implementation of a local governance project has already begun, but no baseline data can be found. It may still be possible to obtain a measure of change over time. Ask a number of people: "Compared to three years ago, do you now feel more or less involved in local decision-making?" A clear tendency among respondents either towards "more" or towards "less" provides an indication of whether or not change has occurred (Kellogg Foundation, 1998; UNDP, 2009). A simple tally of responses, as sketched below, can summarise such a tendency.
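Here is a minimal sketch of such a tally, assuming responses are recorded as the strings "more", "less" or "same". All names and data are hypothetical, and this is an illustration of the idea rather than a validated survey method.

from collections import Counter

def change_tendency(responses: list) -> str:
    """Summarise 'more' / 'less' / 'same' answers to a retrospective
    question about involvement in local decision-making."""
    counts = Counter(responses)
    more, less = counts["more"], counts["less"]
    if more > less:
        return f"tendency towards 'more' ({more} vs {less})"
    if less > more:
        return f"tendency towards 'less' ({less} vs {more})"
    return "no clear tendency"

answers = ["more", "more", "less", "same", "more", "less", "more"]
print(change_tendency(answers))  # tendency towards 'more' (4 vs 2)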


Sometimes it is not possible to ascertain any sense of change. In this case, establish a measure of the current situation so that an assessment of change may take place in the future. Refer to the project document for more information about the context and the problems to be resolved. Use proxy indicators when necessary: cost, complexity and/or the timeliness of data collection may prevent a result from being measured directly. In this case, proxy indicators may reveal performance trends and make managers aware of potential problems or areas of success.
 For example, the outcome “fair and efficient administration of justice”
is often measured by surveying public confidence in the justice system.
Although high public confidence does not prove that the system actu-
ally is fair, there is very likely a correlation. In another example, in an
environmental protection programme where a target result is the im-
provement in the health of certain lakes, the level of toxins in duck eggs
may serve as a proxy indicator of that improvement (Allen, 1996).

7.5.2 Use disaggregated data


Good indicators are based on basic disaggregated data specifying location,
gender, income level and social group. This is also necessary for good project
and programme management. Such information, sometimes in the form of
estimates, may be drawn from governmental and non-governmental adminis-
trative reports and surveys. Regular quality assessments using qualitative and
participatory approaches may be used to corroborate, clarify and improve
the quality of data from administrative sources.
 For the outcome "effective legal and policy framework for decentralisation", for example, the indicator "proportion of total public revenues allocated and managed at sub-national level" may demonstrate an increased overall distribution of resources to the local level but hide large disparities in distribution to some regions (Allen, 1996; UNDP, 2002). The short sketch below shows how an aggregate can mask such disparities.
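A few lines of code make the point. All region names and revenue figures below are invented for illustration; the "healthy-looking" national proportion hides one badly lagging region.

# Revenues (allocated at sub-national level, total) per hypothetical region
regions = {
    "Region A": (48, 100),
    "Region B": (44, 100),
    "Region C": (4, 100),   # a large disparity hidden in the aggregate
}

allocated = sum(a for a, _ in regions.values())
total = sum(t for _, t in regions.values())
print(f"National: {allocated / total:.0%} managed at sub-national level")  # 32%

for name, (a, t) in regions.items():
    print(f"{name}: {a / t:.0%}")  # 48%, 44%, 4%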

7.5.3 Involve stakeholders


Participation should be encouraged in the selection of both output and out-
come indicators. Participation tends to promote ownership of, and responsi-
bility for, the planned results and agreement on their achievement. A prelimi-
nary list of output indicators should be selected at the project formulation
stage, with the direct involvement of the institution designated to manage the
project and with other stakeholders. Partners are involved in the selection of
outcome indicators and project formulation processes (Allen, 1996). It is
important that partners agree on which indicators to use for monitoring and on their respective responsibilities for data collection and analysis. This establishes a foundation for any future changes in the implementation strategy, should the indicators show that progress is not on track.

7.5.4 Distinguish between quantitative and qualitative indicators
Both quantitative and qualitative indicators should be selected based on the
nature of the particular aspects of the intended result. Efficiency lends itself
easily to quantitative indicators, for example. Measuring dynamic sustainability,
in contrast, necessitates some qualitative assessment of attitudes and behav-
iours because it involves people’s adaptability to a changing environment.
Methodologies such as beneficiary assessment, rapid rural appraisal (RRA)
and structured interviews may be used to convert qualitative indicators into
quantitative indicators (OECD, 1996).

7.5.5 Try to limit the number of indicators


Too many indicators usually prove to be counter-productive. From the available information, develop a few credible and well-analysed indicators that substantively capture positive changes in the development situation. The office needs to select from among a variety of indicators, since several projects may contribute to one strategic outcome. Be selective by striking a good balance between what should be and what can be measured. Narrow the list using the SMART principles (specific, measurable, achievable, relevant and time-bound) and additional criteria to sharpen indicators.

7.5.6 Ensure timeliness


The usefulness of an indicator depends on timeliness and clear actions so that
an indicator target date corresponds to the expected progress of the assist-
ance. If changes take place, such as the modification of outputs or outcomes,
new sets of indicators would need to be established to reflect the actual tar-
gets.


Activity 7.3

?
1. Discuss the steps which one can follow in the selection of indicators for performance measurement. Use examples to buttress your answer.
2. "Results-oriented monitoring of development performance involves looking at results at the level of outputs, outcomes and impact". Discuss the importance of output, outcome and input indicators in the measurement of performance. Use examples to buttress your answer.

7.6 Other Tips on Selecting Indicators


The following are some tips one can use when selecting indicators.
 Make sure that the meaning of the indicator is clear.
 Data for the indicator are available.
 The effort to collect the data is within the power of the project management and does not require experts for analysis.
 The indicator is substantially representative of the total intended result (that is, outcome or output).
 The indicator is tangible and can be observed.
 The indicator is difficult to quantify but so important that it should be considered (proxy indicator).

7.7 Using Indicators


7.7.1 Involving stakeholders
The programme manager should establish mechanisms for sharing information generated from indicators with primary stakeholders. This is particularly true for outcome indicators. It ensures that the analysis of progress is locally relevant and uses local knowledge, while fostering "ownership" and building group decision-making skills (UNDP, 2002; UNFPA, 2000).

It is worth noting, however, that stakeholder or partner participation in the analysis of the indicator data may significantly alter the interpretation of that data. Participatory observation and in-depth participatory reviews with implementation partners and beneficiaries are integral to visual on-site verification of results, which is a reliable form of assessment. More "top-down" and less participatory approaches to assessment may be used to achieve analytical rigour, independence, technical quality, uniformity and comparability (UNFPA, 2000). Ultimately, the information gained through the analysis of indicators feeds into evaluations. This data helps assess progress towards outputs and outcomes, and includes a measure of stakeholder satisfaction with results.

7.7.2 Using indicators for monitoring


Results-oriented monitoring of development performance involves looking at results at the level of outputs, outcomes and, eventually, impact. Indicators are used periodically to validate partners' perceptions of progress and achievement, to keep projects and programmes on track and to provide early warning signals of problems in progress. Indicators only indicate; they do not explain (OECD, 1999). Qualitative analysis is therefore needed to interpret what the indicators say about progress towards results.

7.7.3 Output and outcome indicators


For output indicators, the Programme Manager uses day-to-day monitor-
ing to verify progress, as well as field visits and reports or information re-
ceived from the project management. The Annual Project Report (APR) is
too infrequent to allow early action in case there are delays or problems in the
production of outputs (OECD, 1999).

For outcome indicators, annual monitoring is more appropriate and is ac-


complished through input from the technical project experts in the APR, dis-
cussions at the Steering Committee and the Annual Review. Since outcomes
are less tangible than outputs, indicators are indispensable for an informed
analysis of progress.

7.7.4 Impact indicators


For impact indicators, also called situational indicators, discussion may take
place annually if information is available but is often done less frequently.


Activity 7.4

?
1. Discuss how performance measurement can be improved to enhance monitoring and evaluation of development projects in developing countries.
2. "The basis of successful monitoring and evaluation rests on efficient performance measurement." Discuss.
3. "If tasks cannot be measured, then they cannot be evaluated." Discuss in line with different monitoring and evaluation methodologies.

7.8 Summary
In this unit, we demonstrated the importance of performance management. We emphasised that monitoring and evaluation practitioners and project managers need to know how a project is performing so that they can take appropriate action to correct or reinforce activities. We covered methods used in performance measurement in monitoring and evaluation, and introduced the use of indicators, the setting of targets, data collection systems and quantitative and qualitative analysis. Performance measurement, the selection of indicators, key steps in selecting indicators and the use of indicators in measuring performance were also covered.


References
Allen, J.R. (1996). Performance Measurement. Atlanta: AEA.
Kellogg Foundation. (1998). Evaluation Handbook. http://www.WKKF.org/.
Operations Evaluation Department (OED). (1996). Performance Monitoring Indicators: A Handbook for Task Managers. OED.
Organisation for Economic Co-operation and Development (OECD)/Development Assistance Committee. (1998). Review of the DAC Principles for Evaluation of Development Assistance. http://www.oecd.org/dac/Evaluation/pdf/eval.
Organisation for Economic Co-operation and Development (OECD)/Public Management Service. (1999). Improving Evaluation Practices: Best Practice Guidelines for Evaluation and Background Paper. http://www.oecd.org/puma.
United Nations Development Programme (UNDP). (2002). A Handbook on Monitoring and Evaluation for Results. New York: UNDP.
United Nations Population Fund (UNFPA). (2000). Monitoring and Evaluation Methodologies: The Programme Manager's M&E Toolkit. UNFPA. http://bbs.unfpa.org/ooe/me_methodologies.htm (accessed 1/4/2017).
United States Agency for International Development (USAID), Centre for Development Information and Evaluation (CDIE). (2004). Performance Monitoring and Evaluation Tips. http://www.dec.org/usaid_eval/004 (accessed 22/04/2017).
World Bank. (1996). http://www.worldbank.org/html/oed/evaluation/ (accessed 22/04/2017).



Unit Eight

Typology of Evaluation Approaches

8.1 Introduction

Evaluation has emerged as the cornerstone of successful development programmes. The evaluation or determination of the relative worth of something must be undertaken in order to compare alternatives before making choices amongst them. Evaluation literally means 'to work out the value (of something)'. Informal evaluations inform daily decisions on how good or bad, desirable or undesirable something is. Formal evaluations involve the same kind of judgement, but are more systematic and rigorous than their informal counterparts, with appropriate controls for the validity and reliability of the findings and conclusions (Rabie and Cloete, 2009). In this unit we revisit the concept of evaluation. Our main task is to examine the typology of evaluation approaches in the context of monitoring and evaluation of development projects. We also look into the different types of evaluation approaches and the main evaluation designs in monitoring and evaluation of development projects; we consider external evaluation in detail, as well as the advantages and disadvantages of external evaluations. Lastly, we touch on evaluation challenges.

8.2 Unit Objectives


By the end of this unit, you should be able to:
 classify evaluation in terms of its characteristics
 list the main evaluation designs
 explain the concept of external evaluation
 appraise external evaluation with respect to monitoring and evaluation
of development projects
 discuss evaluation challenges

8.3 Evaluation Approaches


Rossi, Lipsey and Freeman's (2004) and Mouton's (2008) classification systems link evaluation to the programme life cycle (design, implementation and outcomes). Similarly, Owen (2006) distinguishes between proactive evaluation aimed at synthesising previous evaluation findings, clarificative evaluation to clarify the underlying logic and intended outcomes of the intervention, interactive evaluation to improve the evaluation design, monitoring evaluation to track progress and refine the programme and, finally, impact evaluation for learning and accountability purposes.

While this typology is useful in identifying the design, tracking implementation and evaluating final results, it ignores other variables and choices that need to be taken into consideration in deciding what approach should be followed during the evaluation exercise. The alternative classification system proposed in this unit attempts to address the weaknesses of the other classification schemes. It uses three main classification categories, namely the scope of the evaluation study, the underpinning philosophy of the evaluation study and, lastly, the evaluation study design and methodology, which provides the parameters for collecting and assessing data to inform the evaluation. The main evaluation approaches that have emerged over the years of evaluation research are discussed below within these three categories of the proposed classification system.

8.4 Evaluation Approaches Based on Scope


The functional, geographic or behavioural parameters of the evaluation determine and delimit its focus. The evaluation may be very broad, encompassing several of the dimensions or attributes of performance listed below, as is done during a comprehensive organisational performance review (Rabie and Cloete, 2009). A comprehensive evaluation therefore focuses on more than one, and in extreme cases on all, of the aspects of the evaluation (integrated evaluation). Alternatively, the evaluation may be focused on a particular intervention, be that a policy, a programme, a project or a product; limited to a particular development sector (for example, the economic, political, financial, technological, cultural, environmental, educational, transport, health or other sectors of a community or society), geographical area or community; confined to a particular phase or stage of an intervention (such as its inputs, resource conversion or management processes, outputs, outcomes or impacts); or focused on the performance of individual staff members within the organisation or intervention. Only organisation-level evaluations are, however, dealt with below (Rabie and Cloete, 2009).

According to Rabie and Cloete (2009), the main evaluation approaches based on scope are the following:

8.4.1 Community-based evaluation


This type of evaluation focuses on a particular community, which may be
geographically based, or spatially spread, but with similar characteristics such
as ethnicity, interest or ideology.

8.4.2 Sectoral evaluations


Sectoral evaluation evaluates different sectoral policies, programmes and/or
projects like transport, education, health and welfare.

8.4.3 Geographical evaluations


Geographical evaluations evaluate the consequences of specific location based
policies, programmes and/or projects like integrated community, local gov-
ernment, regional, provincial or national developmental initiatives.

8.4.4 Policy evaluation


According to Owen (2006), policy evaluation focuses on either policy process assessment (how and why policies are devised and implemented) or policy content assessment (what interventions are considered or made), or both.


8.4.5 Programme and project evaluation


According to Rossi, et al. (2004), programme and project evaluation sys-
tematically investigates the effectiveness of social intervention programmes/
projects in ways that are adapted to their political and organisational environ-
ments to inform social actions that may improve social conditions. Programme
evaluation also assesses the programme results and the extent to which the
programme caused those results.

8.4.6 Product evaluation


Product evaluation entails the evaluation of only the product (not the process) against quality assurance standards. In the social context, product evaluation measures, interprets and judges the achievements to ascertain the extent to which the evaluand met the needs of the rightful beneficiaries.

8.4.7 Input evaluation


Input evaluation assesses only the required financial, human, physical, time,
information and commitment resources. It enables decision makers to exam-
ine the feasibility of alternative strategies for addressing identified needs of
targeted beneficiaries to prevent failure or waste of resources (Stufflebeam,
2004).

8.4.8 Process or ongoing evaluation


Ongoing or process evaluation investigates only the implementation of the programme, including whether the administrative and service objectives of the programme are being met; whether services are delivered in accordance with the goals of the programme; whether services are delivered to appropriate recipients and whether eligible persons are omitted from the delivered service; whether clients are satisfied; whether the administrative, organisational and personnel functions are managed well; whether service delivery is well organised and in line with programme design and other specifications; and whether the project runs within the projected budgetary and time frames (Rossi, et al., 2004).

8.4.9 Output evaluation


Output evaluation assesses the tangible product or service produced by the intervention in terms of the quantity, quality and diversity of services delivered. It is the easiest and most straightforward focus for evaluation.


8.4.10 Outcome evaluation


Outcome evaluation entails the positive, neutral or negative intermediate sectoral results or consequences of a project/programme, that is, the progress made towards achieving the strategic goals (Chen, 2005). Outcome evaluations may focus on four levels: the individual, organisational, community and government levels.
 Individual level: focusing on changes in knowledge, skills and attitudes
 Organisational level: focusing on changes in policies, practices and capacity
 Community level: focusing on changes in, for example, employment rates, school achievement and recycling
 Government level: focusing on changes in laws, regulations and sources of funding

8.4.11 Impact evaluation or impact assessment


Impact evaluation focuses on the final, long-term, multi-sectoral consequences of the project/programme (that is, progress towards achieving the transformative vision). It determines the extent to which a programme produces the intended improvements in the social conditions it addresses (Weiss, 1998). Impact evaluation tests whether the desired effects on the social conditions that the programme intended to change were attained, and whether those changes included unintended side effects (Rossi et al., 2004).

8.4.12 Systemic evaluation


Systemic evaluation analyses the entire system, including the effect of external factors on the system, with the aim of improving its functioning.

8.4.13 Meta-evaluation
Meta-evaluation evaluates the evaluation focus, content and process, as well as the evaluators themselves (Scriven in Mathison, 2005). Interpretations by evaluators and others should be scrutinised by colleagues and selected stakeholders to identify shortcomings in design and poor interpretations.


Activity 8.1

?
1. List the main evaluation types based on scope.
2. What do you understand by evaluation based on scope?
3. With examples, discuss the main evaluation approaches based on scope.
4. How does input evaluation differ from process or ongoing evaluation?
5. Distinguish between input evaluation and impact evaluation. Illustrate
your answer using examples.

8.5 Formal Substantive Theory-Based Evaluation Approaches
The various theoretical approaches to evaluation range from a largely positivistic perspective on the one hand to constructivist approaches on the other. The positivistic perspective uses quantitative approaches to generate information about measurable and calculable behaviour patterns, analysed on the basis of so-called scientific criteria (for example, the analysis of huge quantitative datasets). The constructivist approaches are more normative and interpretative, and prioritise the identification and generation of local knowledge, learning and use within the context of different situations and cultures, for example, the assessment of similarities and differences between specific case studies (Rabie and Cloete, 2009).

According to Naidoo (2007), this broad distinction of the two polar oppo-
sites of approaches in this category classifies adherents into two camps: the
quantitative or ‘scientific’ versus the qualitative or interpretative.

Some of the previous evaluation approach classifications attempt to distinguish between value-driven and use-driven evaluation approaches. The problem with this distinction is that all evaluations inherently entail a value judgement (good or bad) and that all evaluations are goal-directed, with a particular end-use or purpose in mind. A clearer distinction in terms of the underpinning philosophy of an evaluation is theory-driven versus participation-driven approaches. Theory-driven evaluation philosophies lean towards a more scientific approach to evaluation research, with the general aim of expanding knowledge. Participation-driven evaluation philosophies lean towards a more applied social improvement approach to evaluation research, with the general aim of development, empowerment and the creation of a shared understanding of the programme between the evaluators, beneficiaries and decision-makers (Rabie and Cloete, 2009).


8.5.1 Theory-based evaluation


Theory-based evaluation entails the identification of the critical success factors of the evaluation, linked to an in-depth understanding of the workings of a programme or activity. According to Donaldson and Lipsey (2006), theory-driven evaluation is, therefore, the systematic use of substantive knowledge about the phenomena under investigation, and of scientific methods, to improve, to produce knowledge and feedback about, and to determine the merit, worth and significance of evaluands: for example, assessing sectoral or integrated governmental interventions to reduce poverty, unemployment, crime and insecurity, and to improve health, education, quality of life and community development. The approaches in this category are all based on an implicit 'theory of change', that is, how to reduce crime, poverty and disease and achieve growth and development, which links the evaluation with intended improvements in practice. It does not assume simple linear cause-and-effect relationships, but allows for the mapping and design of complex programmes. Where evaluation data indicate that the critical success factors of a programme have not been achieved, it is concluded that the programme will be less likely to succeed (Donaldson and Lipsey, 2006). These evaluations can be approached in a deductive or an inductive way.

8.6 Deductive Evaluation Approaches


The following are specialised deductive approaches:

Clarification evaluation

Clarification evaluation helps to:
 clarify or develop the programme plan;
 analyse the programme assumptions and theory;
 determine the reasonableness and ethics of the programme;
 determine the feasibility and appropriateness, and improve the coherence, of the programme (Owen, 2006; Rossi, et al., 2004);
 test the deductive or inductive causal logic of the intervention and the feasibility of the design; and
 encourage consistency between design and implementation (Owen, 2006).
It draws the causal "logic model" for the intervention to provide a picture of how it is believed the intervention will work to bring about desired results through a specific sequence of activities.


8.6.1 Illuminative evaluation


Illuminative evaluation is basically the same as clarification evaluation. It assesses the significant features, recurring issues, themes and critical processes of a programme to provide a comprehensive understanding of the complex reality surrounding a programme: in short, to 'illuminate'. In contrast to clarification evaluation, which is a deductive approach from within the perspective of a specific theoretical paradigm, illuminative evaluation generally follows an inductive approach.

8.6.2 Realist evaluation


Realist evaluation tries to establish why, where, and for whom programmes
work or fail by identifying the mechanisms that produce observable pro-
gramme effects. It can also test the mechanisms as well as other contextual
factors that may have caused the observed effect (Henry in Mathison, 2005).
It thus tests whether there is an unequivocal causal relationship between a
programme and its outcomes to establish beyond doubt that it was the actual
programme which caused the measurable change, and not some other, uni-
dentified, variable which may not exist in another social setting.

8.6.3 Cluster evaluation and multisite evaluations


Cluster evaluations and multisite evaluations look across a group of projects to identify common threads and themes. Cluster evaluation tries to establish impact by aggregating outcomes from multiple sites or projects, whereas multisite evaluation seeks to determine outcomes by aggregating indicators from multiple sites. Both approaches try to clarify and verify the validity of the theory of change concerned. Goal-free evaluation, discussed next, is an example of an inductive theory-driven approach (Rabie and Cloete, 2009).

8.6.4 Goal-free evaluation


According to Posavac and Carey (1997), goal-free evaluation studies all as-
pects of the programme and notes all positive and negative aspects without
focusing only on information that supports the goals. The evaluator remains
purposely ignorant of a programme’s goals, searching for all effects of a pro-
gramme regardless of its developer’s objectives. If the programme is doing
what it is supposed to do, the evaluation should confirm this, but the evaluator
will also be more likely to uncover unanticipated effects that the goal-based
evaluations would miss because of the preoccupation with stated goals. Con-
ceptualised in this way, goal-free evaluation is seen as the opposite of a de-
ductive theory-driven approach to evaluation.


8.6.5 Participatory evaluation


Participatory evaluation is an overarching term for any evaluation approach
that involves programme staff or participants actively in decision-making and
other activities related to the planning and implementation of evaluation stud-
ies (King in Mathison, 2005). In participatory evaluation the evaluation team
consists of the evaluator (either as team leader, or as supportive consultant)
and representatives from stakeholder groups, who together plan, conduct
and analyse the evaluation. The degree of participation can range from shared
evaluator-participant responsibility for evaluation questions and activities, to
participants’ complete control of the evaluation process. With shared respon-
sibility, the evaluator is responsible for the quality of the process and the out-
comes, but designing and conducting the evaluation is done in collaboration
with stakeholders. In evaluations where participants control the evaluation,
the evaluator becomes a coach or facilitator who offers technical skills where
needed. In a sense, all evaluations have some participation from stakeholders, as evaluators need to interact with stakeholders to obtain information. However, a study has a participatory philosophy when the relationship between the evaluator and the participants provides participants with a substantial role in making decisions about the evaluation process (Rabie and Cloete, 2009).

8.6.6 Responsive evaluation


Responsive evaluation is not particularly responsive to programme theory or
stated goals but more to stakeholder concerns. In contrast to pre-ordinate
goal-focused evaluation, where the evaluator predetermines the evaluation
plan, based on the programme goals, responsive evaluation orients the evalu-
ation to the programme activities as opposed to the goals, thereby respond-
ing to various information needs and values with appropriate methods that
emerge during the course of the programme implementation. Responsive evalu-
ation searches for pertinent issues and questions throughout the study and
attempts to respond in a timely manner by collecting and reporting useful
information, even if the need for such information had not been anticipated at
the start of the study (Stufflebeam and Shinkfield, 2007).

8.6.7 Naturalistic, constructivist, interpretivist or fourth-generation evaluation
Naturalistic, constructivist, interpretivist or fourth-generation evaluation attempts to blend the evaluation process into the lives of the people involved by focusing on both the tangible, countable reality and the intangible, socially constructed reality (what people believe to be real). The merit or worth of the evaluand is judged in ways appropriate to the setting, expectations, values, assumptions and dispositions of the participants, with minimal modifications due to the inquiry processes used and assumptions held by the evaluator (Rabie and Cloete, 2009). Values are assigned a central role in the evaluation, as they provide the basis for determining merit. The values of stakeholders, values inherent to the context or setting of the situation and conflicts in values are critical in formulating judgements and conclusions about the evaluand.

8.6.8 Utilisation-focused evaluation


This is based on the premise that evaluations should be judged by their utility and actual use. Therefore, evaluators should facilitate the evaluation process and design the evaluation with careful consideration of how everything that is done, from beginning to end, will affect its use. A group of representative stakeholders clarifies the outcomes, indicators, performance targets, the data collection plan and how the findings will be used. The group's values (not the evaluator's) thus determine the nature of the recommendations arising from the evaluation. Patton (2004) argues that, as evaluation cannot be value-free, utilisation-focused evaluation answers the question of whose values will frame the evaluation by working with clearly identified, primary intended users who have the responsibility to apply evaluation findings and implement recommendations.

8.6.9 Appreciative inquiry


Appreciative inquiry focuses on the strengths of a particular organisation or intervention, with the assumption that focusing attention on the strengths will strengthen them further. Appreciative inquiry is based on the social constructivist notion that what you look for is what you will find, and where you think you are going is where you will end up.

8.6.10 Evaluative inquiry


Evaluative inquiry responds to a range of decision-makers’ information needs,
of which determining the worth of the programme may be one. Evaluative
inquiry consists of collecting data, including relevant variables and standards,
resolving inconsistencies in the values, clarifying misunderstandings and mis-
representations, rectifying false facts and factual assumptions, distinguishing
between wants and needs, identifying all relevant dimensions of merit, finding
appropriate measures of these dimensions, weighing the dimensions, validat-
ing the standards and arriving at an evaluative conclusion (Owen, 2006). It emphasises the importance of individual, team and organisational learning as a result of participating in the evaluation process.

8.6.11 Critical theory evaluation


Critical theory evaluation aims to determine the merit, worth or value of something by unveiling false, culturally based perspectives through a process of systematic inquiry. The evaluation is influenced by an explicit value position: that we operate beneath layers of false consciousness which contribute to our own and others' exploitation and oppression. In response, critical theory evaluation seeks to engage evaluation participants in a dialectic process of questioning the history of their ideas and thinking about how privileged narratives of the past and present will influence future value judgements (Rabie and Cloete, 2009).

8.6.12 Empowerment evaluation


This form of evaluation uses the evaluation process to foster self-determination, with the evaluator acting as coach or critical friend. The evaluator helps the group to determine their mission, take stock of the current reality through evaluation tools, and set goals and strategies based on the self-assessment. The evaluator needs to capacitate stakeholders to enable them to conduct independent evaluations, thereby altering the balance of power in the programme context by enhancing the influence of stakeholders.

8.6.13 Democratic evaluation


Democratic evaluation considers all relevant interests, values and perspectives to arrive at conclusions that are impartial to values. It allows the multiple realities of a programme to be portrayed, providing decision-makers with a variety of perspectives and judgements to consider. House (1991) argues that evaluation is never value-neutral; it should tilt in the direction of social justice by specifically addressing the needs and interests of the powerless, thereby promoting social justice for the poor and marginalised through the evaluation process. Evaluation thus becomes a democratising force, with evaluators advocating on behalf of disempowered groups.


Activity 8.2

?
1. Explain the concept of theory based evaluation.
2. List any six deductive evaluation approaches. In what ways do they differ from theory-based evaluation approaches?

8.6.14 Evaluation design and methodology


The evaluator can choose a quantitative, a qualitative or a mixed-methods design approach, trying to find a workable balance between the emphasis placed on procedures that ensure the validity of findings and those that make findings timely, meaningful and useful to consumers. The choice of design is determined by the purposes of the evaluation, the nature of the programme and the political or decision-making context (Rossi et al., 2004: 25). Rossi refers to this as the "good-enough" rule, which entails choosing the best possible design, taking into account practicality and feasibility. While a particular evaluation approach, such as the classic experimental study, may be ideal, it may not be feasible. Given the advantages and disadvantages of different approaches to evaluation, the OECD (2007) argues for "the use of a plurality of approaches that are able to gain from the complementarities in the information they can provide". Different methodologies may be applied in different evaluation designs.

8.7 The Main Evaluation Designs Applicable to Monitoring and Evaluation

The following are the main evaluation designs that are applicable to monitoring and evaluation.

8.7.1 Quantitative evaluation approaches


Quantitative evaluation approaches normally take the form of experimental designs. An experimental design advocates "a social experimental approach to reform where social programmes are retained, imitated, modified or discarded on the basis of apparent effectiveness on the multiple imperfect criteria available" (Rossi et al., 2004). When a clear statement of the programme objective to be evaluated has been formulated, the evaluation may be viewed as a study of change. The programme to be evaluated constitutes the causal or independent variable, and the desired change corresponds to the effect or dependent variable. The project may be formulated in terms of a series of hypotheses that state that certain activities will produce certain stated results.

8.7.2 Classic experimental design


This entails the random assignment of subjects to treatment and non-treatment conditions, and the pre- and post-measurement of both groups. The impact of the programme is assessed by comparing the outcomes of the two groups to determine whether the intervention has produced the desired outcome (OECD, 2007).
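As a concrete illustration of comparing pre- and post-measurements of both groups, the sketch below computes a simple difference-in-differences estimate. This is our own teaching sketch, not the only analysis used in such designs: the data are invented, and a real evaluation would add significance testing.

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Impact estimate for a two-group pre/post design: the change in the
    treatment group minus the change in the (randomly assigned) control group."""
    mean = lambda xs: sum(xs) / len(xs)
    treat_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treat_change - control_change

# Hypothetical household income scores before and after an intervention
impact = diff_in_diff(
    treat_pre=[40, 42, 38, 41], treat_post=[50, 53, 47, 52],
    control_pre=[39, 41, 40, 42], control_post=[42, 44, 41, 45],
)
print(f"Estimated programme effect: {impact:.2f} points")  # 7.75 points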

8.7.3 Quasi-experimental evaluation


Quasi-experimental evaluation attempts to overcome the problems of randomly assigning participants to interventions in real life (Mouton, 2007). The term 'quasi-experimental' refers to approximations of randomised experiments; while their control of internal validity is not as reliable as that of true experimental designs, they nevertheless provide valuable answers to cause-and-effect questions. The validity of the quasi-experiment may be undermined by historical or seasonal events, maturation of the subjects, the effect of the test or instruments used on the subjects' behaviour, attrition of subjects from the programme and statistical regression that would have occurred naturally without any intervention (Reichardt and Mark, 2004). Forms of quasi-experimental design include the pretest-posttest non-equivalent comparison group design, the pretest-posttest no-comparison-group design, interrupted time series designs, comparison group designs and the regression-discontinuity design, where the conditions for being part of the experimental group are known and, therefore, 'controllable'.

8.8 Qualitative Evaluation Approaches that are Non-Experimental

These focus on the constructed nature of social programmes, the contextuality of social interventions and the importance of focusing on processes of implementation, in addition to assessing programme outcomes and effects (Mouton, 2008).


8.8.1 Qualitative evaluation


Qualitative evaluation answers 'why' and 'how' questions. It is ideal when non-causal questions form the basis for the evaluation; when contextual knowledge of the perspectives and values of the evaluand is required before finalising the evaluation design; when the focus is on implementation rather than outcomes; when the purpose of the evaluation is formative; and when it is important to study the intervention in its natural setting by means of unobtrusive measures (Pierre, 2004).

8.8.2 Case study evaluation


Case study evaluation approaches see the evaluator analysing the goals, plans, resources, needs and problems of the case in its natural setting (as opposed to imposed experimental conditions) to prepare an in-depth report on the case, with descriptive and judgemental information, the perceptions of various stakeholders and experts, and summary conclusions. In the case study approach, the evaluator seeks patterns in the data to develop issues, triangulates key observations and bases for interpretation, selects alternative interpretations to pursue and develops assertions or generalisations about the case. The success case method compares the experiences of successful and unsuccessful participants to identify key factors that allowed successful participants to benefit from a particular intervention (Rabie and Cloete, 2009).

8.8.3 Participatory action research


Participatory action research combines the investigative research process with education of less powerful stakeholders and subsequent action on the research results. The cycle starts with observation and reflection, which leads to a plan of change to guide action. The approach is best suited to action-orientated evaluation questions (Rogers and Williams, 2006).

8.8.4 Grounded theory


Grounded theory provides an open-ended evaluation design in which the evaluator's inductive sensitivity, ontology and epistemology constitute the preferred methodological paradigm (Scriven, 2003). Grounded theory is particularly helpful in goal-free evaluations, where it assists in developing substantive theoretical propositions and extrapolations from the classification or coding of empirical data that might lead to theory building or change, rather than the testing of a theory as happens in a deductive, theory-driven approach to evaluation.


Civicus (2002) offers another classification of evaluation approaches. Table 8.1 presents this classification of monitoring and evaluation approaches. It is interesting to identify differences and similarities with the classification tendered in this unit.

Table 8.1: Other Approaches to Evaluation

Goal-based
Major purpose: Assessing achievement of goals and objectives.
Typical focus questions: Were the goals achieved? Efficiently? Were they the right goals?
Likely methodology: Comparing baseline and progress data; finding ways to measure indicators.

Decision-making
Major purpose: Providing information.
Typical focus questions: Is the project effective? Should it continue? How might it be modified?
Likely methodology: Assessing the range of options related to the project context, inputs, process and product; establishing some kind of decision-making consensus.

Goal-free
Major purpose: Assessing the full range of project effects, intended and unintended.
Typical focus questions: What are all the outcomes? What value do they have?
Likely methodology: Independent determination of needs and standards to judge project worth; qualitative and quantitative techniques to uncover any possible results.

Source: Civicus, 2002.

Our feeling is that the best evaluators use a combination of all these approaches,
and that an organisation can ask for a particular emphasis but should not
exclude findings that make use of a different approach.

8.9 Evaluation Justification


Let us go back and remind ourselves of the reasons for evaluation so that we can keep in mind the goal of evaluating as we look more deeply into the process and concept of external evaluation. We discussed that evaluation involves the following:
 Looking at what the project or organisation intended to achieve – what difference did it want to make? What impact did it want to make?
 Assessing its progress towards what it wanted to achieve, its impact targets.
 Looking at the strategy of the project or organisation. Did it have a strategy? Was it effective in following its strategy? Did the strategy work? If not, why not?
 Looking at how it worked. Was there an efficient use of resources? What were the opportunity costs of the way it chose to work? How sustainable is the way in which the project or organisation works? What are the implications for the various stakeholders in the way the organisation works?
In an evaluation, we look at efficiency, effectiveness and impact. There are
many different ways of doing an evaluation. Some of the more common terms
you may have come across are:
 Self-evaluation: This involves an organisation or project holding up a mirror to itself and assessing how it is doing, as a way of learning and improving practice. It takes a very self-reflective and honest organisation to do this effectively, but it can be an important learning experience.
 Participatory evaluation: This is a form of internal evaluation. The intention is to involve as many people with a direct stake in the work as possible. This may mean project staff and beneficiaries working together on the evaluation. If an outsider is called in, it is to act as a facilitator of the process, not an evaluator.
 Rapid participatory appraisal: Originally used in rural areas, the same methodology can, in fact, be applied in most communities. This is a qualitative (see Glossary of Terms) way of doing evaluations. It is semi-structured and carried out by an interdisciplinary team over a short time. It is used as a starting point for understanding a local situation and is a quick, cheap, useful way to gather information. It involves the use of secondary data review, direct observation, semi-structured interviews, key informants, group interviews, games, diagrams, maps and calendars. In an evaluation context, it allows one to get valuable input from those who are supposed to be benefiting from the development work. It is flexible and interactive.
 External evaluation: This is an evaluation done by a carefully chosen outsider or outsider team.
 Interactive evaluation: This involves a very active interaction between an outside evaluator or evaluation team and the organisation or project being evaluated. Sometimes an insider may be included in the evaluation team.

174 Zimbabwe Open University


Unit 8 Typology of Evaluation Approaches

8.9.1 External evaluation


This is an evaluation done by a carefully chosen outsider or outsider team. Sometimes it is necessary to consider contracting an external evaluator in order to get another opinion from a different perspective. Care should be taken when selecting an external evaluator or external evaluation team. When you have decided to solicit the services of an external evaluator, the following are the qualities to look for in an external evaluator or evaluation team (Rossi et al., 2004).

According to Civicus (2002), the following qualities should be found in an external evaluator or evaluation team:
 an understanding of development issues,
 an understanding of organisational issues,
 experience in evaluating development projects, programmes or organisations,
 a good track record with previous clients,
 research skills,
 a commitment to quality,
 a commitment to deadlines,
 objectivity, honesty and fairness,
 logic and the ability to operate systematically,
 ability to communicate verbally and in writing,
 a style and approach that fits with your organisation,
 values that are compatible with those of the organisation and
 reasonable rates (fees), measured against the going rates.

When you decide to use an external evaluator, you should make sure that you do the following before you offer a contract to the evaluators:
 check his/her/their references,
 meet with the evaluators before making a final decision,
 communicate what you want clearly – good terms of reference (ToR) are the foundation of a good contractual relationship,
 negotiate a contract which makes provision for what will happen if time frames and output expectations are not met,
 ask for a work plan with outputs and timelines,
 maintain contact – ask for interim reports, either verbal or written, as part of the contract and
 build in formal feedback times.

Activity 8.3

?
1. You are a manager of a dam development project in Masvingo. Discuss how you would select an external evaluator to evaluate the progress made so far, so that you can draft a progress report to the Minister of State (Provincial Affairs).
2. Discuss the various challenges you may confront in undertaking an external evaluation of a development project. Use examples.
3. Examine the various qualities that should be found in an external evaluator or evaluation team.

Do not expect any evaluator to be completely objective. There is need to view their work with a critical eye. Remember, the goal is not to get a positive report from the external evaluator, but to get a report with valid conclusions and observations that will change the project in a positive manner in order to achieve its development goals.


Table 8.2: Advantages and Disadvantages of Internal and External Evaluations

Internal evaluation

Advantages:
 The evaluators are very familiar with the work, the organisational culture and the aims and objectives.
 Sometimes people are more willing to speak to insiders than to outsiders.
 An internal evaluation is very clearly a management tool, a way of self-correcting, and much less threatening than an external evaluation. This may make it easier for those involved to accept findings and criticisms.
 An internal evaluation will cost less than an external evaluation.

Disadvantages:
 The evaluation team may have a vested interest in reaching positive conclusions about the work or organisation. For this reason, other stakeholders, such as donors, may prefer an external evaluation.
 The team may not be specifically skilled or trained in evaluation.
 The evaluation will take up a considerable amount of organisational time – while it may cost less than an external evaluation, the opportunity costs (see Glossary of Terms) may be high.

External evaluation (done by a team or person with no vested interest in the project)

Advantages:
 The evaluation is likely to be more objective as the evaluators will have some distance from the work.
 The evaluators should have a range of evaluation skills and experience.
 Sometimes people are more willing to speak to outsiders than to insiders.
 Using an outside evaluator gives greater credibility to findings, particularly positive findings.

Disadvantages:
 Someone from outside the organisation or project may not understand the culture or even what the work is trying to achieve.
 Those directly involved may feel threatened by outsiders and be less likely to talk openly and co-operate in the process.
 External evaluation can be very costly.
 An external evaluator may misunderstand what you want from the evaluation and not give you what you need.

Source: Civicus, 2002.


8.10 Evaluation Challenges


Evaluation is faced with many challenges, which emanate from several sources. Some of the challenges arise from a poor choice of evaluation approach and design. Other challenges are a result of operational processes. The following are some of the evaluation challenges faced by monitoring and evaluation practitioners in the field of development studies. It is interesting to find out how we can practically resolve these challenges. If you are in the field of development work, you may assist by giving work examples of how you have attempted to work around challenges that include the following:
 data gaps,
 insufficient data,
 falsified information,
 lack of committed participation and
 political or organisational interference.

Activity 8.4

?
1. Using examples, discuss how participatory evaluation can increase the chances of success of a project.
2. Discuss the main evaluation designs applicable to the monitoring and evaluation of community development projects.
3. Examine the advantages and disadvantages of internal and external evaluations.

8.11 Summary
In this unit, we have presented the different classes of evaluation approaches. Classification of evaluation approaches is important and necessary to enable evaluators to understand the different approaches to evaluation and how they relate to each other, overlap or differ from one another. From the summaries of these approaches it is clear that some are mutually exclusive, others overlap and many are related or complementary. The diversity in approaches needs to be viewed as an asset, as it provides evaluators with many choices. In order to get the most accurate perspective of whatever we are trying to evaluate, it is necessary to consider and apply different approaches. Thus, an outcome evaluation study may take a participatory approach to clarify the multiple aims and intended uses of the evaluation results, followed by a more theory-driven approach in the summative evaluation to determine whether the predetermined goals were reached, as well as to identify potential unintended consequences. The nature of the evaluation will determine the appropriate quantitative or qualitative data gathering techniques, which will inform the design of the study in addition to the stated goals of the evaluation. As the different approaches emphasise different aspects of the evaluand, it can be argued that a combination of approaches will provide 'richer' evaluation data through a multi-faceted evaluation focus. However, each additional approach implies more resources (including time) to bring it to fruition. It is the task of the evaluator to select the most appropriate balance of approaches to ensure the most accurate evaluation results within the limited resources available.


References
Chen, H. (2005). Practical Program Evaluation: Assessing and Improving Planning, Implementation and Effectiveness. California: Sage Publications.
Civicus. (2002). Monitoring and Evaluation. http://www.civicus.org/documents/toolkits.df (accessed 20/04/2017).
Mathison, S. (Ed.). (2005). Encyclopedia of Evaluation. California: Sage Publications.
Mouton, J. (2007). Approaches to Programme Evaluation Research. Journal of Public Administration, Vol 42, No 6.
Mouton, J. (2008). Class and slide notes from the "Advanced Evaluation Course" presented by the Evaluation Research Agency in Rondebosch, 20-24 October 2008.
Naidoo, I. A. (2007). Unpublished research proposal, submitted to the Graduate School of Public and Development Management, University of Witwatersrand.
Organisation for Economic Co-operation and Development (OECD). (2007). OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes. Paris, France: OECD.
Owen, J. M. (2006). Program Evaluation: Forms and Approaches (3rd Edition). New York: The Guilford Press.
Patton, M. Q. (2004). "The Roots of Utilization-Focused Evaluation." In Alkin, M. C. (Ed.). (2004). Evaluation Roots: Tracing Theorists' Views and Influences. California: Sage Publications.
Rabie, B. and Cloete, F. (2009). A New Typology of Monitoring and Evaluation Approaches. Administration Publications, Vol 17(3):76-97.
Rossi, P. H., Lipsey, M. W. and Freeman, H. E. (2004). Evaluation: A Systematic Approach (Seventh Edition). London: Sage Publications.
Scriven, M. (2003). Michael Scriven on the Difference between Evaluation and Social Science Research. The Evaluation Exchange, Vol IX, No 4, p. 7.
Stufflebeam, D. L. and Shinkfield, A. J. (2007). Evaluation Theory, Models and Applications. San Francisco: Jossey-Bass.
Stufflebeam, D. L. (2004). The 21st Century CIPP Model: Origins, Development and Use. In Alkin, M. C. (Ed.). (2004). Evaluation Roots: Tracing Theorists' Views and Influences. California: Sage Publications.
Weiss, G. (1998). Using Randomized Experiments. In Handbook of Practical Program Evaluation (Second Edition). San Francisco: Jossey-Bass, John Wiley and Sons Inc. Publications.



Unit Nine

Data Gathering and Analysis for Monitoring and Evaluation

9.1 Introduction

One of the most important processes in monitoring and evaluation is data gathering. This process is very important because all other processes rely on it. To have valid and reliable results, one has to get the process of data gathering right. It is in this context that we look closely, in this unit, at the process of data gathering for monitoring and evaluation. We are going to look at the different kinds of methods that can be used to collect information for monitoring and evaluation purposes. You need to select methods that suit your purposes and your resources. In this unit, we examine various sources of data. We also explore various practical considerations in planning for data collection and the tools used in data collection. Sampling issues are revisited, as well as data analysis in the context of monitoring and evaluation.

9.2 Unit Objectives


By the end of this unit, you should be able to:
 discuss several sampling methods
 explain different methods of data gathering
 list practical considerations in planning for data collection
 discuss tools used in data gathering
 carry out data analysis

9.3 Sampling in Monitoring and Evaluation Context


Sampling is an important concept when using various tools for a monitoring or evaluation process. Sampling is not really a tool in itself, but used with other tools it is very useful. It is a way of narrowing down the number of possible respondents to make data collection manageable and affordable. Sometimes it is necessary to be comprehensive. This means getting to every possible household, or school, or teacher, or clinic and so on. In an evaluation, you might well use all the information collected in every case during the monitoring process in an overall analysis. Usually, however, unless numbers are very small, for in-depth exploration you will use a sample. Table 9.1 summarises the main sampling methods.

Table 9.1: Summary of Sampling Methods

Evaluating or estimating attributes or characteristics of the entire system, process, product or project through a representative sample can be more efficient while still providing the required information. To legitimately be able to use a sample to extrapolate the results to the whole population requires the use of one of four statistical sampling methods.

Random sampling

The first statistical sampling method is simple random sampling. In this method, each item in the population has the same probability of being selected as part of the sample as any other item. For example, a tester could randomly select 5 inputs to a test case from the population of all possible valid inputs within a range of 1-100 to use during test execution. To do this, the tester could use a random number generator, or simply put each number from 1-100 on a slip of paper in a hat, mix them up and draw out 5 numbers. Random sampling can be done with or without replacement. If it is done without replacement, an item is not returned to the population after it is selected and thus can only occur once in the sample.
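
As a minimal Python sketch of the example above (the 1-100 range and the sample of 5 are the example's own figures), drawing both without and with replacement:

```python
# A minimal sketch of simple random sampling (example figures from the text).
import random

random.seed(42)  # fixed seed so the illustration is reproducible

population = list(range(1, 101))  # all valid inputs in the range 1-100

# Without replacement: each item can appear at most once in the sample
sample_without = random.sample(population, k=5)

# With replacement: the same item may be drawn more than once
sample_with = random.choices(population, k=5)

print("Without replacement:", sample_without)
print("With replacement:   ", sample_with)
```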

Systematic sampling

Systematic sampling is another statistical sampling method. In this method, every kth element from the list is selected for the sample, starting with an element randomly selected from the first k elements. For example, if the population has 1 000 elements and a sample size of 100 is needed, then k would be 1 000/100 = 10. If number 7 is randomly selected from the first ten elements on the list, the sample would continue down the list, selecting the 7th element from each group of ten elements. Care must be taken when using systematic sampling to ensure that the original population list has not been ordered in a way that introduces any non-random factors into the sampling. An example of systematic sampling would be if the auditor of the acceptance test process selected the 14th acceptance test case out of the first 20 test cases in a random list of all acceptance test cases to retest during the audit process. The auditor would then keep adding twenty, selecting the 34th test case, 54th test case, 74th test case and so on to retest until the end of the list is reached.
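
A minimal Python sketch of the same procedure, using the 1 000-element, k = 10 example from the paragraph above:

```python
# A minimal sketch of systematic sampling (figures from the text's example).
import random

random.seed(7)

population = list(range(1, 1001))  # 1 000 elements
sample_size = 100
k = len(population) // sample_size  # sampling interval: 1000 / 100 = 10

start = random.randrange(k)  # random start within the first k elements
sample = population[start::k]  # every kth element from the random start

print("Interval k:", k)
print("First five sampled elements:", sample[:5])
print("Sample size:", len(sample))
```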

Stratified sampling

The statistical sampling method called stratified sampling is used when representatives from each subgroup within the population need to be represented in the sample. The first step in stratified sampling is to divide the population into subgroups (strata) based on mutually exclusive criteria. Random or systematic samples are then taken from each subgroup. The sampling fraction for each subgroup may be taken in the same proportion as the subgroup has in the population. For example, the person conducting a customer satisfaction survey could select random customers from each customer type in proportion to the number of customers of that type in the population: if 40 samples are to be selected, and 10% of the customers are managers, 60% are users, 25% are operators and 5% are database administrators, then 4 managers, 24 users, 10 operators and 2 administrators would be randomly selected. Stratified sampling can also sample an equal number of items from each subgroup; for example, a development lead might randomly select three modules out of each programming language used to examine against the coding standard.
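
The proportional allocation in the customer survey example can be sketched in Python as follows; the population counts are hypothetical but follow the stated 10/60/25/5 percent proportions:

```python
# A minimal sketch of proportional stratified sampling (example figures from the text).
import random

random.seed(3)

# Hypothetical population of 1 000 customers in the stated proportions
population = (["manager"] * 100 + ["user"] * 600 +
              ["operator"] * 250 + ["dba"] * 50)
strata = {"manager": [], "user": [], "operator": [], "dba": []}
for customer_id, customer_type in enumerate(population):
    strata[customer_type].append(customer_id)

total_sample = 40
sample = []
for customer_type, members in strata.items():
    # Allocate each stratum's share of the sample in proportion to its size
    n = round(total_sample * len(members) / len(population))
    sample.extend(random.sample(members, n))
    print(f"{customer_type}: {n} selected")

print("Total sample size:", len(sample))  # 4 + 24 + 10 + 2 = 40
```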


Cluster sampling

The fourth statistical sampling method is called cluster sampling, also called block sampling. In cluster sampling, the population being sampled is divided into groups called clusters. Instead of these subgroups being homogeneous based on selected criteria, as in stratified sampling, a cluster is as heterogeneous as possible so as to match the population. A random sample is then taken from within one or more selected clusters. For example, if an organisation has 30 small projects currently under development, an auditor looking for compliance with the coding standard might use cluster sampling to randomly select 4 of those projects as representatives for the audit and then randomly sample code modules for auditing from just those 4 projects. Cluster sampling can tell us a lot about a particular cluster, but unless the clusters are selected randomly and a lot of clusters are sampled, generalisations cannot always be made about the entire population. For example, random sampling from all the source code modules written during the previous week, or all the modules in a particular subsystem, or all modules written in a particular language, may introduce biases into the sample that would not allow statistically valid generalisation.
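
The 30-project audit example can be sketched in Python as a two-stage draw; the number of modules per project and the modules-per-project sample are hypothetical figures added for the illustration:

```python
# A minimal sketch of two-stage cluster sampling (the 30-project audit example;
# module counts per project are hypothetical).
import random

random.seed(11)

# Each project (cluster) holds a hypothetical list of 20 code modules
projects = {f"project_{i}": [f"project_{i}/module_{j}" for j in range(20)]
            for i in range(1, 31)}

# Stage 1: randomly select 4 of the 30 projects as clusters
selected_projects = random.sample(list(projects), 4)

# Stage 2: randomly sample modules for audit from just those projects
audit_modules = []
for name in selected_projects:
    audit_modules.extend(random.sample(projects[name], 3))

print("Selected clusters:", selected_projects)
print("Modules to audit:", len(audit_modules))
```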

Haphazard sampling

There are also other types of sampling that, while non-statistical (information about the entire population cannot be extrapolated from the sample), may still provide useful information. In haphazard sampling, samples are selected based on convenience, but preferably should still be chosen as randomly as possible. For example, the auditor may ask to see a list of all of the source code modules, then close his or her eyes and point at the list to select a module to audit. The auditor could also grab one of the listing binders off the shelf, flip through it and "randomly" stop on a module to audit. Haphazard sampling is typically quicker and uses smaller sample sizes than other sampling techniques. The main disadvantage of haphazard sampling is that, since it is not statistically based, generalisations about the total population should be made with extreme caution.

Judgmental sampling

Another non-statistical sampling method is judgmental sampling. In judgmental sampling, the person doing the sampling uses his/her knowledge or experience to select the items to be sampled. For example, based on experience, an auditor may know which types of items are more apt to have non-conformances, which types of items have had problems in the past, or which items pose a higher risk to the organisation. In another example, the acceptance tester might select test cases that exercise the most complex features, mission-critical functions or the most used sections of the software.

Source: Excerpt from The Certified Software Quality Engineers Handbook, by Linda Westfall, www.westfallteam.com (accessed 05/05/2017).

Activity 9.1

?
1. What do you understand by the term sampling?
2. List five types of sampling.
3. Explain briefly the following sampling types:
a) Stratified sampling.
b) Cluster sampling.
c) Judgmental sampling.
d) Random sampling.

9.4 Sources of Data for Monitoring and Evaluation


There are several data sources for monitoring and evaluation. Practitioners, depending on the nature and type of data needed, use these sources selectively. In many cases, data sources are triangulated to increase the reliability of the information. Chiplowe (2008) compiled the major sources of data and information for project monitoring and evaluation, and these include the following:


Table 9.2: Sources of Data

Secondary data: Secondary data is information obtained from other research, such as surveys and other studies previously conducted or planned at a time consistent with the project's M and E needs, in-depth assessments, and project reports. Secondary data sources include government planning departments, university or research centres, international agencies, other projects/programmes working in the area, and financial institutions (Chiplowe, 2008).

Sample surveys: A survey based on a random sample taken from the beneficiaries or target audience of the project is usually the best source of data on project outcomes and effects. Although surveys are laborious and costly, they provide more objective data than qualitative methods. Many donors expect baseline and end-line surveys to be done if the project is large and alternative data are unavailable.

Project output data: Most projects collect data on their various activities, such as the number of people served and the number of items distributed. Related sources include mapping, key informant interviews, focus group discussions and observation.

Checklists: A systematic review of specific project components can be useful in setting benchmark standards and establishing periodic measures of improvement.

External assessments: Project implementers as well as donors often hire outside experts to review or evaluate project outputs and outcomes. Such assessments may be biased by brief exposure to the project and over-reliance on key informants. Nevertheless, this process is less costly and faster than conducting a representative sample survey, and it can provide additional insight, technical expertise, and a degree of objectivity that is more credible to stakeholders.

Participatory assessments: The use of beneficiaries in project review or evaluation can be empowering, building local ownership, capacity, and project sustainability. However, such assessments can be biased by local politics or dominated by the more powerful voices in the community. In addition, training and managing local beneficiaries can take time, money, and expertise, and it necessitates buy-in from stakeholders. Nevertheless, participatory assessments may be worthwhile as people are likely to accept, internalise, and act upon findings and recommendations that they identify themselves.

Source: Chiplowe, 2008.


Activity 9.2

?
1. List any five data sources for monitoring and evaluation of development projects.
2. Choose any three data sources and explain how you can use sampling in data gathering processes from these sources.
3. Justify the use of eclectic methods in gathering data for monitoring and evaluation.
4. Go to the library and find five monitoring and evaluation reports. Look for the section where each report indicates the name of the project, the monitoring objectives and the methodology. Compile a table and identify the type of data gathering methods used for each project. Suggest reasons why the methods may be similar or different.

9.5 Some Practical Considerations in Planning for Data Collection
Many people think that data collection is very easy. This is not true. There is need to take extreme caution in data collection. If data collection is handled recklessly, it will contaminate the results of the analysis made from the data. The following guidelines can help you when preparing and planning for data collection in the monitoring and evaluation of development projects; they represent some of the practical considerations in planning for data collection (Chiplowe, 2008).

9.5.1 Prepare data collection guidelines


This helps to ensure standardisation, consistency, and reliability over time and
among different people in the data collection process. Double-check that all
the data required for indicators are being captured through at least one data
source.

9.5.2 Pre-test data collection tools


Pretesting helps to detect problematic questions or techniques, verify collection time, identify potential ethical issues, and build the competence of data collectors.


9.5.3 Train data collectors


Provide an overview of the data collection system, data collection techniques,
tools, ethics, and culturally appropriate interpersonal communication skills.
Give trainees practical experience collecting data.

9.5.4 Address ethical concerns


Identify and respond to any concerns expressed by the target population. Ensure that the necessary permission or authorisation has been obtained from local authorities, that local customs and attire are respected, and that confidentiality and voluntary participation are maintained.

9.6 Reducing Data Collection Costs


Data collection can be costly. One of the best ways to reduce data collection costs is to reduce the amount of data collected (Civicus, 1996; Rossi, 1993; Bamberger et al., 2006). The following questions can help simplify data collection and reduce costs:
 Is the information necessary and sufficient?
Collect only what is necessary for project management and evaluation. Limit information needs to the stated objectives, indicators, and assumptions in the log frame.
 Are there reliable secondary data sources?
This can save costs for primary data collection.
 Is the sample size adequate but not excessive?
Determine the sample size that is necessary to estimate or detect change; a common rule-of-thumb calculation is sketched at the end of this list. Consider using stratified and cluster samples.
 Can the data collection instruments be simplified?
Eliminate extraneous questions from questionnaires and checklists. In addition to saving time and cost, this has the added benefit of reducing "survey fatigue" among respondents.
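
As a hedged illustration of the sample-size question above, the sketch below applies Cochran's widely used formula for estimating a proportion, with a finite-population correction. The confidence level, margin of error and population size are hypothetical choices, not prescriptions from this module.

```python
# A minimal sketch of a sample-size calculation using Cochran's formula
# (hypothetical survey parameters; not a prescription from this module).
import math

def cochran_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion, with finite-population correction.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative assumption about the true proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                # finite-population correction
    return math.ceil(n)

# Hypothetical example: a beneficiary population of 2 000 households
print(cochran_sample_size(2000))  # roughly 323 households
```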


9.7 Tools for Data Gathering


The following section summaries different tools used in gathering a data dur-
ing monitoring and evaluation. It appraises the tools as it is used in the prac-
tice of monitoring and evaluation in deferent projects. This section is neces-
sary for it prepares the ground for triangulation in order to increase data vola-
tility and reliability.

Table 9.3: Tools for Gathering Data for Monitoring and Evaluation

Interviews
Description: These can be structured, semi-structured or unstructured (see Glossary of Terms). They involve asking specific questions aimed at getting information that will enable indicators to be measured. Questions can be open-ended or closed (yes/no answers). Interviews can be a source of qualitative and quantitative information.
Usefulness: Can be used with almost anyone who has some involvement with the project. Can be done in person, on the telephone or even by e-mail. Very flexible.
Disadvantages: Requires some skill in the interviewer. For more on interviewing skills, see later in this toolkit.

Key informant interviews
Description: These are interviews carried out with specialists in a topic, or someone who may be able to shed a particular light on the process.
Usefulness: As these key informants often have little to do with the project or organisation, they can be quite objective and offer useful insights. They can provide something of the "big picture" where people more involved may focus at the micro (small) level.
Disadvantages: Needs a skilled interviewer with a good understanding of the topic. Be careful not to turn something into an absolute truth (which cannot be challenged) because it has been said by a key informant.

Focus groups
Description: In a focus group, a group of about six to 12 people is interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue.
Usefulness: This can be a useful way of getting opinions from quite a large sample of people.
Disadvantages: It is quite difficult to do random sampling for focus groups, which means findings may not be generalised. Sometimes people influence one another, either to say something or to keep quiet about something. If possible, focus group interviews should be recorded and then transcribed; this requires special equipment and can be very time consuming.

Community meetings
Description: This involves a gathering of a fairly large group of beneficiaries to whom questions, problems and situations are put for input, to help in measuring indicators.
Usefulness: Community meetings are useful for getting a broad response from many people on specific issues. They are also a way of involving beneficiaries directly in an evaluation process, giving them a sense of ownership of the process. They are useful at critical points in community projects.
Disadvantages: Difficult to facilitate – requires a very experienced facilitator. May require breaking into small groups followed by plenary sessions when everyone comes together again.

Field worker reports
Description: Structured report forms that ensure that indicator-related questions are asked and answers recorded, and observations recorded, on every visit.
Usefulness: Flexible; an extension of normal work, so cheap and not time-consuming.
Disadvantages: Relies on field workers being disciplined and insightful.

Ranking
Description: This involves getting people to say what they think is most useful, most important, least useful and so on.
Usefulness: It can be used with individuals and groups, as part of an interview schedule or questionnaire, or as a separate session. Where people cannot read and write, pictures can be used.
Disadvantages: Ranking is quite a difficult concept to get across and requires very careful explanation, as well as testing, to ensure that people understand what you are asking. If they misunderstand, your data can be completely distorted.

Visual/audio stimuli
Description: These include pictures, movies, tapes, stories, role plays and photographs, used to illustrate problems or issues or past events or even future events.
Usefulness: Very useful together with other tools, particularly with people who cannot read or write.
Disadvantages: You have to have appropriate stimuli, and the facilitator needs to be skilled in using such stimuli.

Rating scales
Description: This technique makes use of a continuum, along which people are expected to place their own feelings, observations and so on. People are usually asked to say whether they agree strongly, agree, don't know, disagree or disagree strongly with a statement. You can use pictures and symbols in this technique if people cannot read and write.
Usefulness: Useful for measuring attitudes, opinions and perceptions.
Disadvantages: You need to test the statements very carefully to make sure that there is no possibility of misunderstanding. A common problem is when two concepts are included in the statement and you cannot be sure whether an opinion is being given on one, or the other, or both.

Critical event analysis
Description: This method is a way of focusing interviews with individuals or groups on particular events or incidents. The purpose of doing this is to get a very full picture of what actually happened.
Usefulness: Very useful when something problematic has occurred and people feel strongly about it. If all those involved are included, it should help the evaluation team to get a picture that is reasonably close to what actually happened and to be able to diagnose what went wrong.
Disadvantages: The evaluation team can end up submerged in a vast amount of contradictory detail and lots of "he said/she said". It can be difficult not to take sides and to remain objective.

Participant observation
Description: This involves direct observation of events, processes, relationships and behaviours. "Participant" here implies that the observer gets involved in activities rather than maintaining a distance.
Usefulness: It can be a useful way of confirming, or otherwise, information provided in other ways.
Disadvantages: It is difficult to observe and participate at the same time. The process is very time-consuming.

Self-drawings
Description: This involves getting participants to draw pictures, usually of how they feel or think about something.
Usefulness: Can be very useful, particularly with younger children.
Disadvantages: Can be difficult to explain and interpret.

Source: Shapiro and Civicus, 1996.

9.8 Analysing Information


Whether you are looking at monitoring or evaluation, at some point you are going to find yourself with a large amount of information, and you will have to decide how to make sense of it or analyse it. If you are using an external evaluation team, it will be up to this team to do the analysis; but sometimes in evaluation, and certainly in monitoring, you, the organisation, have to do the analysis.

Analysis is the process of turning detailed information into an understanding of patterns, trends and interpretations (Olive, 2002). The starting point for analysis in a project or organisational context is quite often very unscientific: it is your intuitive understanding of the key themes that come out of the information gathering process. Once you have the key themes, it becomes possible to work through the information, structuring and organising it. The next step is to write up your analysis of the findings as a basis for reaching conclusions and making recommendations.


So, your process may look something like the sequence shown in Figure 9.1:

1. Determine key indicators for the evaluation/monitoring process.
2. Collect information around the indicators.
3. Develop a structure for your analysis, based on your intuitive understanding of emerging themes and concerns, and where you suspect there have been variations from what you had hoped and/or expected.
4. Go through your data, organising it under the themes and concerns.
5. Identify patterns, trends and possible interpretations.
6. Write up your findings and conclusions.
7. Work out possible ways forward (recommendations).

Figure 9.1: Analysis of the Findings (Source: Civicus, 1996)

9.8.1 Data analysis


A data analysis plan should identify the following (Rossi, 1993; Westfall, 2010):
 When data analysis will occur - It is not an isolated event at the end of data collection, but an ongoing task from project start. Data analysis can be structured through meetings and other forums to coincide with key project implementation and reporting benchmarks.
 To what extent the analysis will be quantitative and/or qualitative - and any specialised skills and equipment required for the analysis.
 Who will do the analysis? – That is, external experts, project staff, beneficiaries and/or other stakeholders.


 If and how subsequent analysis will occur - Such analysis may be needed to verify findings, to follow up on research topics for project extension and additional funding, or to inform future programming.
An important consideration in planning for data collection and analysis is to identify any limitations, biases, and threats to the accuracy of the data and analysis. Data distortion can occur due to limitations or errors in design, sampling, field interviews, and data recording and analysis (Prim Research Group (PRG), 2009). It is best to monitor the research process carefully and seek expert advice when needed.

It is also important to carefully plan for the data management of the M and E system. This includes the set of procedures, people, skills, and equipment necessary to systematically store and manage M and E data. If this step is not carefully planned, data can be lost or incorrectly recorded, which compromises not only data quality and reliability, but also subsequent data analysis and use. Poorly managed data wastes time and resources (PRG, 2009).

9.9 Taking Action


Monitoring and evaluation have little value if the organisation or project does not act on the information that comes out of the analysis of the data collected. Once you have the findings, conclusions and recommendations from your monitoring and evaluation process, you need to:
 Report to your stakeholders - Reporting is closely related to monitoring and evaluation work, since data are needed to support the major findings and conclusions presented in a project report (Chiplowe, 2008).

9.10 Information Reporting and Utilisation


Reporting project achievements and evaluation findings serves many important functions (Olive, 2002), namely to:
 advance learning among project staff as well as the larger development community,
 improve the quality of the services provided,
 inform stakeholders on the project benefits and engage them in work that furthers project goals,


 inform donors, policy makers and technical specialists of effective interventions (and those that did not work as hoped) and
 develop a project model that can be replicated.

Practical considerations in information reporting and utilisation planning include the following:

Design the M and E communication plan around the information needs of the
users

The content and format of data reports will vary, depending on whether the reports are to be used to monitor processes, conduct strategic planning, comply with requirements, identify problems, justify a funding request, or conduct an impact evaluation.

Identify the frequency of data reporting needs

Project managers may want to review M and E data frequently to assess


project progress and make decisions, whereas donors may need data only
once or twice a year to ensure accountability.

Tailor reporting formats to the intended audience

Reporting may entail different levels of complexity and technical language; the
report format and media should be tailored to specific audiences and different
methods used to solicit feedback.

Identify appropriate outlets and media channels for communicating


M and E data

Consider both internal reporting, such as regular project reports to management and progress reports to donors, as well as external reporting, such as public forums, news releases, briefings, and Internet Web sites.

Source: www.united.fn.3non-profit.nl/info (accessed 18/4/2016).


Activity 9.3

?
1. Discuss the practical considerations in planning for data collection in the monitoring and evaluation of development projects.
2. Explain how a project manager can reduce data collection costs. Give examples.
3. Discuss the strengths and weaknesses of any five data gathering tools. Suggest ways in which these tools can be improved.
4. Examine the nexus between the quality of data gathered and the quality of results produced during monitoring and evaluation.

9.11 M and E Staffing and Capacity Building


Staffing is a special concern for M and E work because it demands special
training and a combination of research and project management skills. Also,
the effectiveness of M and E work often relies on assistance from staff and
volunteers who are not M and E experts. Thus, capacity building is a critical
aspect of implementing good M and E work (Olive, 2002).

9.11.1 Suggestions for ensuring adequate M and E support


Suggestions for ensuring M and E support include the following (Olive, 2002):
 Identify the various tasks and related skills that are needed, such as ensuring adequate data collection systems in the field, research design, and data entry and analysis
 Assess the relevant skills of the project team, partner organisations, and the community beneficiaries
 Specify to what extent local stakeholders will (or will not) participate in the M and E process
 Assign specific roles and responsibilities to team members and designate an overall M and E manager
 Recruit consultants, students, and others to fill in the skill gaps and special needs such as translation, statistical analysis, and cultural knowledge
 Identify the topics for which formal training is needed and hold training sessions


 Encourage staff to provide informal training through on-the-job guidance and feedback, such as commenting on a report or showing how to use computer software programmes
 Give special attention to building local capacity in M and E.
Cultivating nascent M and E skills takes time and patience, but in the end the contributions of various collaborators will enrich M and E work and lead to greater acceptance of M and E's role in project implementation.
Remember to:
 learn from the overall process,
 make effective decisions about how to move forward and, if necessary,
 deal with resistance to the necessary changes within the organisation or project, or even among other stakeholders.

9.12 Summary
In this unit, we have looked at data collection for monitoring and evaluation. We have presented sampling issues and highlighted practical considerations in information reporting and utilisation planning. We have also looked at the different methods that can be used to collect information for monitoring and evaluation purposes. The need to select methods that suit our purposes and our resources is emphasised. We also examined various sources of data, and we explored various practical considerations in planning for data collection and the tools used in data collection.


References
Chiplowe, S. (2008). Monitoring and Evaluation Planning Guidelines and Tools. Baltimore: American Red Cross, CRS.
Civicus. (1996). Monitoring and Evaluation Handbook. Civicus.
Olive. (2002). Planning for Monitoring and Evaluation. Olive Publications.
Prim Research Group (PRG). (2009). Engaging in Data Collection. Available online at: www.sdprgorg/depov/strategies/engaging.htm (December 2005).
Rossi, P. H. (1993). Evaluation: A Systematic Approach (5th Edition). London: Sage Publications.
Shapiro, J. (2006). Evaluation: Judgement Day or Management Tool? Olive Publications.
Strategies and Means: Data Collection in Research (May 2001). (DA). Available online at: www.sdcn.org/strategies/integrating.htm (December 2005).
Westfall, L. (2010). The Certified Software Quality Engineers Handbook. www.westfallteam.com (accessed 8 May 2011).
www.united.fn.3non-profit.nl/info (accessed 18/4/2016).



Unit Ten

Impact Assessment for Monitoring and Evaluation

10.1 Introduction

Public programmes are designed to reach certain goals and beneficiaries. Methods to understand whether such programmes actually work, as well as the level and nature of their impacts on intended beneficiaries, are the main focus of this unit. Despite the billions of dollars spent on development assistance each year, there is still very little known about the actual impact of projects on the poor. For a specific programme or project in a given country, is the intervention producing the intended benefits, and what was the overall impact on the population? Could the programme or project be better designed to achieve the intended outcomes? How can we know about this? Are resources being spent efficiently? How can we assess this? These are the types of questions that can only be answered through an impact assessment, an approach that measures the outcomes of a programme intervention in isolation of other possible factors (Backer, 2000; Shahidur, 2010). In this unit, we take you through the processes of impact assessment for monitoring and evaluation.
We visit the importance of impact assessment, as well as the main steps in designing an impact assessment framework. Implementation processes and the roles of the key impact assessment team members are articulated in this unit. Lastly, reporting and communication procedures are attended to.

10.2 Unit Objectives


By the end of this unit, you should be able to:
 define what impact assessment is in the context of monitoring and evalu-
ation of development projects
 discuss the significance of impact assessment
 describe steps in designing impact assessment
 identify the roles of impact assessment team members
 explain the process of reporting and communicating results
 critique the process of impact assessment

10.3 Impact Assessment


Impact assessment is a process which is undertaken to estimate whether or
not interventions produce their intended effects (Hulme, 1997).

Impact tells us whether or not we did make a difference to the challenging


situation we were trying to address. In other words, did our strategy work or
was it useful? If your programmes are not making any positive impact on the
target group, there is no reason why those projects should continue. They
must be either terminated or restructured (Rossi, 1993).
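
To illustrate what "measuring outcomes in isolation of other possible factors" means in practice, the sketch below contrasts a naive before-and-after estimate with one that uses a comparison group as the counterfactual; all figures are hypothetical.

```python
# A minimal sketch contrasting a naive before/after impact estimate with a
# counterfactual-based estimate (all figures are hypothetical).

# Hypothetical average household incomes (in US$ per month)
beneficiaries_before, beneficiaries_after = 80.0, 110.0
comparison_before, comparison_after = 82.0, 100.0  # similar households, no programme

# Naive estimate: attributes ALL change to the programme
naive_impact = beneficiaries_after - beneficiaries_before

# Counterfactual estimate: subtracts the change that would have occurred anyway
background_change = comparison_after - comparison_before
impact = naive_impact - background_change

print(f"Naive before/after estimate:  {naive_impact:.1f}")
print(f"Change without the programme: {background_change:.1f}")
print(f"Estimated programme impact:   {impact:.1f}")
```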

10.4 Importance of Impact Assessment


Programmes might appear potentially promising before implementation yet fail to generate expected impacts or benefits. The obvious need for impact evaluation is to help policy makers decide whether programmes are generating intended effects; to promote accountability in the allocation of resources across public programmes; and to fill gaps in understanding what works, what does not, and how measured changes in well-being are attributable to a particular project or policy intervention. Effective impact evaluation should, therefore, be able to assess precisely the mechanisms by which beneficiaries are responding to the intervention (Shahidur, Gayatri, Hussain and Samad, 2010).

10.5 Determining Whether or Not to Carry Out an Evaluation
A first determination is whether or not an impact evaluation is required. Impact evaluations differ from other evaluations in that they are focused on assessing causality. Given the complexity and cost of carrying out an impact evaluation, the costs and benefits should be assessed, and consideration should be given to whether another approach would be more appropriate, such as monitoring of key performance indicators or a process evaluation (Atkinson, 1987). These approaches should not be seen as substitutes for impact evaluations; indeed, they often form critical complements to them. Perhaps the most important inputs to the decision of whether or not to carry out an evaluation are strong political and financial support. The additional effort and resources required for conducting impact evaluations are best mobilised when the project is innovative, is replicable, involves substantial resource allocations, and has well-defined interventions (Atkinson, 1987).

Activity 10.1

?
1. Define the term impact assessment in the context of monitoring and
evaluation.
2. Provide a justification for impact assessment.
3. How would you determine whether or not to carry out an evaluation?

10.6 Main Steps in Designing and Implementing Impact Evaluations
Backer (2000) provides the main steps in designing and implementing impact evaluations. These are:

During project identification and preparation:
1. Determining whether or not to carry out an evaluation
2. Clarifying the objectives of the evaluation
3. Exploring data availability
4. Designing the evaluation
5. Forming the evaluation team
6. If data will be collected:
(a) sample design and selection,
(b) data collection instrument development,
(c) staffing and training of fieldwork personnel,
(d) pilot testing,
(e) data collection and
(f) data management and access.

During project implementation:
1. Ongoing data collection,
2. Analysing the data,
3. Writing up the findings and discussing them with policymakers and other stakeholders and
4. Incorporating the findings in the project design.

10.6.1 Exploring data availability


Many types of data can be used to carry out impact evaluation studies. These can range from cross-sectional or panel surveys to qualitative open-ended interviews. Ideally, this information is available at the individual level to ensure that true impact can be assessed. Household-level information can conceal intra-household resource allocation, which affects women and children because they often have more limited access to household productive resources (Backer, 2000). In many cases, the impact evaluation will take advantage of some kind of existing data or piggyback on an ongoing survey, which can save considerably on costs. With this approach, however, problems may arise in the timing of the data collection effort and with the flexibility of the questionnaire design.

The following are some key points to remember in exploring the use of existing data resources for the impact evaluation (Abdie, 1988; cf. Backer, 2000).


10.6.2 Key points for identifying data resources for impact evaluation

The following are key points for identifying data resources for impact evaluation:
 Know the programme well. It is risky to embark on an evaluation
without knowing a lot about the administrative and institutional details
of the program; that information typically comes from the programme
administration.
 Collect information on the relevant “stylised facts” about the
setting. The relevant facts might include the poverty map, the way the
labour market works, the major ethnic divisions, and other relevant
public programmes.
 Be eclectic about data. Sources can embrace both informal and un-
structured interviews with participants in the programme and quantita-
tive data from representative samples. However, it is extremely difficult
to ask counterfactual questions in interviews or focus groups; try ask-
ing someone who is currently participating in a public programme: “What
would you be doing now if this programme did not exist?” Talking to
programme participants can be valuable, but it is unlikely to provide a
credible evaluation on its own.
 Ensure that there is data on the outcome indicators and relevant
explanatory variables. The latter need to deal with heterogeneity in
outcomes conditional on programme participation. Outcomes can dif-
fer depending, for example, on whether one is educated. It may not be
possible to see the impact of the programme unless one controls for
that heterogeneity.
 Depending on the methods used, data might also be needed on variables that influence participation but do not influence outcomes given
participation. These instrumental variables can be valuable in sorting
out the likely causal effects of non-random programmes (a minimal
sketch follows this list).
 The data on outcomes and other relevant explanatory variables
can be either quantitative or qualitative. But it has to be possible
to organise the information in some sort of systematic data structure. A
simple and common example is that one has values of various variables
including one or more outcome indicators for various observation units
(individuals, households, firms, communities).
 The variables one has data on and the observation units one
uses are often chosen as part of the evaluation method. These

choices should be anchored to the prior knowledge about the programme (its objectives, of course, but also how it is run) and the setting
in which it is introduced.
 The specific source of the data on outcomes and their determinants,
including programme participation, typically comes from survey data
of some sort. The observation unit could be the household, firm, or
geographic area, depending on the type of programme one is studying.
 Survey data can often be supplemented with other useful data on the
programme (such as, from the project monitoring database) or setting
(such as, from geographic databases).
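To make the instrumental-variables point concrete, here is a minimal sketch (not from the source text) of two-stage least squares on simulated data. A hypothetical randomised encouragement z shifts participation d but affects the outcome y only through d, so it can recover a programme effect that naive regression misstates; all variable names and values are illustrative assumptions.

import numpy as np

# Illustrative simulation: z is an instrument (e.g., random encouragement
# to enrol), d is programme participation, u is an unobserved confounder.
rng = np.random.default_rng(0)
n = 5000
z = rng.binomial(1, 0.5, n)
u = rng.normal(size=n)
d = ((0.9 * z + u + rng.normal(size=n)) > 0.8).astype(float)
y = 2.0 * d + u + rng.normal(size=n)   # true programme effect = 2.0

X = np.column_stack([np.ones(n), d])   # regressors: constant + participation
Z = np.column_stack([np.ones(n), z])   # instruments: constant + encouragement

# Naive OLS is biased upward because u drives both d and y
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
# Just-identified IV estimator: (Z'X)^-1 Z'y
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)
print(f"OLS estimate: {beta_ols[1]:.2f}, IV estimate: {beta_iv[1]:.2f}")

With a large sample, the IV estimate recovers the true effect of 2.0, while the naive estimate is pulled upward by the unobserved confounder.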

10.6.3 Designing the evaluation


Once the objectives and data resources are clear, it is possible to begin the
design phase of the impact evaluation study. The choice of methodologies will
depend on the evaluation question, timing, budget constraints, and implemen-
tation capacity (Bamberger, 2000). The pros and cons of the different design
types should be balanced to determine which methodologies are most appro-
priate and how quantitative and qualitative techniques can be integrated to
complement each other. Even after the evaluation design has been deter-
mined and built into the project, evaluators should be prepared to be flexible
and make modifications to the design as the project is implemented. In addi-
tion, provisions should be made for tracking the project interventions if the
evaluation includes baseline and follow-up data so that the evaluation effort is
parallel with the actual pace of the project. In defining the design, it is also
important to determine how the impact evaluation will fit into the broader
monitoring and evaluation strategy applied to a project. All projects must be
monitored so that administrators, finance providers, and policy makers can
keep track of the project as it unfolds (Bamberger, 2000). The evaluation
effort, as argued above, must be tailored to the information requirements of
the project.

10.6.4 Evaluation question


The evaluation questions being asked are very much linked to the design of
the evaluation in terms of the type of data collected, unit of analysis, method-
ologies used, and timing of the various stages. In clarifying the evaluation
questions, Grosh and Muñoz (1996) state that it is also important to consider the gender implications of project impact. At the outset this may not
always be obvious; however, during project implementation there may be
secondary effects on the household that would not necessarily be captured
without specific data collection and analysis efforts.

10.6.5 Timing and budget concerns


According to Grossman (1994), the most critical timing issue is whether
it is possible to begin the evaluation design before the project is implemented
and when the results will be needed. It is also useful to identify up front at
which points during the project cycle information from the evaluation effort
will be needed so that data collection and analysis activities can be linked.
Having results in a timely manner can be crucial to policy decisions, for exam-
ple, during a project review, around an election period, or when decisions
regarding project continuation are being made. Some methods require more
time to implement than others. Random assignment and before-and-after
methods take longer to implement than ex-post matched-comparison ap-
proaches. When using before-and-after approaches that utilise baseline and
follow-up assessments, time must be allowed for the last member of the treat-
ment group to receive the intervention, and then usually more time is allowed
for post-programme effects to materialise and be observed. Grossman (1994)
suggests that 12 to 18 months after sample enrolment in the intervention is a
typical period to allow before examining impacts.

10.6.6 Implementation capacity


A final consideration in the scale and complexity of the evaluation design is the
implementation capacity of the evaluation team. Implementation issues can be
very challenging, particularly in developing countries where there is little experience with applied research and programme evaluations (Baker, 2000).
The composition of the evaluation team is very important, as well as team
members’ experience with different types of methodologies and their capac-
ity relative to other activities being carried out by the evaluation unit. This is
particularly relevant when working with public sector agencies with multiple
responsibilities and limited staff. Awareness of the unit’s workload is impor-
tant in order to assess not only how it will affect the quality of evaluation being
conducted but also the opportunity cost of the evaluation with respect to
other efforts for which the unit is responsible.

10.6.7 Formation of the evaluation team


A range of skills is needed in evaluation work. The quality and eventual utility
of the impact evaluation can be greatly enhanced with coordination between


team members and policy makers from the outset. It is therefore important to
identify team members as early as possible, agree upon roles and responsi-
bilities, and establish mechanisms for communication during key points of the
evaluation. Among the core team is the evaluation manager, analysts, both
economist and other social scientists, and, for evaluation designs involving
new data collection, a sampling expert, survey designer, fieldwork manager
and fieldwork team, and data managers and processors (Grosh and Muñoz,
1996). Depending on the size, scope, and design of the study, some of these
responsibilities will be shared or other staffing needs may be added to this
core team. In cases in which policy analysts may not have had experience
integrating quantitative and qualitative approaches, it may be necessary to
spend additional time at the initial team building stage to sensitise team mem-
bers and ensure full collaboration.

Activity 10.2

1. Identify various steps involved in designing impact assessment.
2. Account for the key issues involved in identifying data resources for
impact assessment.
3. Discuss the importance of creating evaluation questions in impact as-
sessment.
4. Discuss the rationale behind impact assessment of development
projects.

10.7 Responsibilities and Roles of the Team Members
The broad responsibilities of team members include the following:

10.7.1 Evaluation manager


The evaluation manager is responsible for establishing the information needs
and indicators for the evaluation (which are often established with the client
by using a logical framework approach), drafting terms of reference for the
evaluation, selecting the evaluation methodology, and identifying the evalua-
tion team. In many cases, the evaluation manager will also carry out policy
analysis (Grosh and Muñoz, 1996).


10.7.2 Policy analysts


An economist is needed for the quantitative analysis, as well as a sociologist
or anthropologist for ensuring participatory input and qualitative analysis at
different stages of the impact evaluation. Both should be involved in writing
the evaluation report.

10.7.3 Sampling expert


The sampling expert can guide the sample selection process. For quantitative
data, the sampling expert should be able to carry out power calculations to
determine the appropriate sample sizes for the indicators established, select
the sample, review the results of the actual sample versus the designed sam-
ple, and incorporate the sampling weights for the analysis. For qualitative
data, the sampling expert should guide the sample selection process in coor-
dination with the analysts, ensuring that the procedures established guarantee
that the correct informants are selected. The sampling expert should also be
tasked with selecting sites and groups for the pilot test, and will often need to
be paired with a local information coordinator responsible for collecting, on
the sampling expert's behalf, the data from which the sample will be drawn
(Grosh and Muñoz, 1996).
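As an illustration of the power calculations mentioned above, the following is a minimal sketch using the standard normal-approximation formula for the sample size per group needed to detect a given difference in means. The 5% significance level, 80% power and effect size are conventional illustrative assumptions, not figures from the source.

import math
from scipy.stats import norm

def n_per_group(effect, sd, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference `effect`
    between treatment and comparison means (two-sided test, equal groups)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_power) * sd / effect) ** 2)

# e.g., to detect a 0.25 standard-deviation change in an outcome indicator:
print(n_per_group(effect=0.25, sd=1.0))  # roughly 252 per group

Smaller detectable effects require sharply larger samples, which is one of the trade-offs the sampling expert must make explicit to the evaluation team.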

10.7.4 Survey designer


This could be a person or team, whose responsibility is designing the data
collection instruments, accompanying manuals and codebooks, and coordi-
nating with the evaluation manager(s) to ensure that the data collection instru-
ments will indeed produce the data required for the analysis. This person or
team should also be involved in pilot testing and refining the questionnaires.

10.7.5 Fieldwork manager and staff


The manager should be responsible for supervising the entire data collection
effort, from planning the routes for the data collection to forming and sched-
uling the fieldwork teams, generally composed of supervisors and interview-
ers. Supervisors generally manage the fieldwork staff (usually interviewers,
data entry operators and drivers) and are responsible for the quality of data
collected in the field. Interviewers administer the questionnaires. In some cul-
tures, it is necessary to ensure that male and female interviewers carry out the
surveys and that they are administered separately for men and women.


10.7.6 Data managers and processors


These team members design the data entry programmes, enter the data, check
the data’s validity, provide the needed data documentation, and produce ba-
sic results that can be verified by the data analysts. In building up the evalua-
tion team, there are also some important decisions that the evaluation man-
ager must make about local capacity and the appropriate institutional arrange-
ments to ensure impartiality and quality in the evaluation results.

First is whether there is local capacity to implement the evaluation, or parts of
it, and what kind of supervision and outside assistance will be needed. Evalu-
ation capacity varies greatly from project to project and institution to institu-
tion. The general practice for World Bank supported projects seems to be to
implement the evaluation using local staff while providing a great deal of inter-
national supervision (Baker, 2000). Therefore, it is necessary to critically
assess local capacity and determine who will be responsible for what aspects
of the evaluation effort. Regardless of the final composition of the team, it is
important to designate an evaluation manager who will be able to work effec-
tively with the data producers as well as the analysts and policymakers using
the data and the results of the evaluation. If this person is not based locally, it
is recommended that a local manager be designated to coordinate the evalu-
ation effort in conjunction with the international manager (Baker, 2000).

Second is whether to work with a private firm or public agency. Private firms
can be more dependable with respect to providing results on a timely basis,
but capacity building in the public sector is lost and often private firms are
understandably less amenable to incorporating elements into the evaluation
that will make the effort costlier. Whichever counterpart or combination of
counterparts is finally crafted, a sound review of potential collaborators’ past
evaluation activities is essential to making an informed choice (Baker, 2000).

And third is what degree of institutional separation to put in place between the
evaluation providers and the evaluation users. There is much to be gained
from the objectivity provided by having the evaluation carried out independ-
ently of the institution responsible for the project being evaluated. However,
evaluations can often have multiple goals, including building evaluation capac-
ity within government agencies and sensitising programme operators to the
realities of their projects once these are carried out in the field. At a minimum,
the evaluation users, who can range from policymakers in government agencies in client countries to NGOs, bilateral donors, and international development institutions, must remain sufficiently involved in the evalu-
ation to ensure that the evaluation process is recognised as being legitimate


and that the results produced are relevant to their information needs. Other-
wise, the evaluation results are less likely to be used to inform policy. In the
final analysis, the evaluation manager and his or her clients must achieve the
right balance between involving the users of evaluations and maintaining the
objectivity and legitimacy of the results.

10.8 Data Development


Having adequate and reliable data is a necessary input to evaluating project
impact. High-quality data are essential to the validity of the evaluation results.
Assessing what data exist is a first important step before launching any new
data collection efforts.

10.9 Deciding What to Measure


The main output and impact indicators should be established in planning the
evaluation, possibly as part of a logical framework approach. To ensure that
the evaluation is able to assess outcomes during a period of time relevant to
decision makers’ needs, a hierarchy of indicators might be established, rang-
ing from short-term impact indicators to longer-term indicators (Bamberger,
2000). This ensures that even if final impacts are not picked up initially, pro-
gramme outputs can be assessed. In addition, the evaluator should plan on
measuring the delivery of intervention as well as taking account of exogenous
factors that may have an effect on the outcome of interest. Evaluation manag-
ers can also plan to conduct the evaluation across several time periods, al-
lowing for more immediate impacts to be picked up earlier while still tracking
final outcome measures. In addition, the evaluator may also want to include
cost measures in order to do some cost-effectiveness analysis or other com-
plementary assessments not strictly related to the impact evaluation.
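A hierarchy of indicators of the kind described above can be operationalised as a small data structure recording each indicator's level and measurement frequency. The programme and the indicators below are purely hypothetical.

# Hypothetical indicator hierarchy for a school-feeding programme.
# Short-term outputs can be reported early, while impact indicators
# are tracked over a longer horizon.
indicators = [
    {"level": "output",  "name": "meals delivered per pupil per term", "frequency": "quarterly"},
    {"level": "outcome", "name": "school attendance rate",             "frequency": "annual"},
    {"level": "impact",  "name": "learning achievement score",         "frequency": "endline"},
]
for ind in indicators:
    print(f"{ind['level']:>7}: {ind['name']} ({ind['frequency']})")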

10.10 Developing Data Collection Instruments and Approaches
Developing appropriate data collection instruments that will generate the re-
quired data to answer the evaluation questions can be tricky. This will require
having the analysts involved in the development of the questions, in the pilot
test, and in the review of the data from the pilot test. Involving both the field
manager and the data manager during the development of the instruments, as
well as local staff (preferably analysts) who can provide knowledge of the
country and the programme, can be critical to the quality of information collected
(Grosh and Muñoz, 1996). It is also important to ensure that the data col-
lected can be disaggregated by gender to explore the differential impact of
specific programmes and policies.

Quantitative evaluations usually collect and record information either in numeric form or as precoded categories. Some qualitative studies use precoded
classification of data as well (Bamberger, 2000).

10.11 Training
For both qualitative and quantitative data collection, even experienced staff
must be trained to collect the data specific to the evaluation, and all data
collection should be guided by a set of manuals that can be used as orienta-
tion during training and as a reference during the fieldwork. Depending on the
complexity of the data collection task, training can range from three days to
several weeks.

10.12 Pilot Testing


Pilot testing is an essential step because it will reveal whether the instrument
can reliably produce the required data and how the data collection proce-
dures can be put into operation. The pilot test should mimic the actual field-
work as closely as possible. For this reason, it is useful to have data entry
programmes ready at the time of the pilot to test their functionality as well as
to pilot test across the different populations and geographical areas to be
included in the actual fieldwork.

10.13 Sampling
Sampling is an art best practiced by an experienced sampling specialist. The
design need not be complicated, but it should be informed by the sampling
specialist’s expertise in the determination of appropriate sampling frames, sizes,
and selection strategies. The sampling specialist should be incorporated in the
evaluation process from the earliest stages to review the available information


needed to select the sample and determine whether any enumeration work
will be needed, which can be time consuming (Baker, 2000). As with other
parts of the evaluation work, coordination between the sampling specialist
and the evaluation team is important. This becomes particularly critical in con-
ducting matched comparisons because the sampling design becomes the ba-
sis for the “match” that is at the core of the evaluation design and construction
of the counterfactual. In these cases, the sampling specialist must work closely
with the evaluation team to develop the criteria that will be applied. There are
many tradeoffs between costs and accuracy in sampling that should be made
clear as the sampling framework is being developed. For example, conducting a sample in two or three stages will reduce the costs of both the sampling
and the fieldwork, but the sampling errors will increase and the precision of
the estimates will therefore be reduced.

After developing the sampling strategy and framework, the sampling special-
ist should also be involved in selecting the sample for the fieldwork and the
pilot test to ensure that the pilot is not conducted in an area that will be in-
cluded in the sample for the fieldwork. Often initial fieldwork will be required
as part of the sample selection procedure (Baker, 2000).

And finally, the sampling specialist should produce a sampling document de-
tailing the sampling strategy, including:
a) From the sampling design stage, the power calculations using the im-
pact variables, the determination of sampling errors and sizes, the use
of stratification to analyse populations of interest.
b) From the sample selection stage, an outline of the sampling stages and
selection procedures.
c) From the fieldwork stage to prepare for analysis, the relationship be-
tween the size of the sample and the population from which it was
selected, non-response rates, and other information used to inform sam-
pling weights, and any additional information that the analyst would
need to inform the use of the evaluation data. This document can be
used to maintain the evaluation project records and should be included
with the data whenever it is distributed to help guide the analysts in
using the evaluation data.
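The sampling weights referred to in item (c) can be illustrated with a brief sketch: units drawn with unequal selection probabilities, and with differing response rates, are weighted by the inverse of those probabilities before estimates are computed. All numbers below are invented for illustration.

import numpy as np

# Hypothetical two-stratum sample: stratum A was over-sampled relative
# to stratum B, and the two strata responded at different rates.
outcome       = np.array([12.0, 15.0, 9.0, 20.0, 11.0, 18.0])
p_selection   = np.array([0.10, 0.10, 0.10, 0.02, 0.02, 0.02])
response_rate = np.array([0.80, 0.80, 0.80, 0.50, 0.50, 0.50])

# Design weight = 1 / (selection probability x response rate)
weights = 1.0 / (p_selection * response_rate)

print(f"unweighted mean:      {outcome.mean():.2f}")
print(f"design-weighted mean: {np.average(outcome, weights=weights):.2f}")

The gap between the two means shows why the analyst needs the sampling document: without the weights, over-sampled strata dominate the estimate.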


10.14 Data Instruments


The following instruments are used for data collection:

10.14.1 Questionnaires
The design of the questionnaire is important to the validity of the information
collected. There are four general types of information required for an impact
evaluation (Bamberger, 2000).

These include:
 Classification of nominal data with respondents classified according to
whether they are project participants or belong to the comparison
group.
 Exposure to treatment variables recording not only the services and
benefits received but also the frequency, amount, and quality—assess-
ing quality can be quite difficult.
 Outcome variables to measure the effects of a project, including imme-
diate products, sustained outputs or the continued delivery of services
over a long period, and project impacts such as improved income and
employment.
 Intervening variables that affect participation in a project or the type of
impact produced, such as individual, household, or community characteristics; these variables can be important for exploring biases.
The way in which the question is asked, as well as the ordering of the questions, is also quite important in generating reliable information. A relevant
example is the measurement of welfare, which would be required for measuring the direct impact of a project on poverty reduction.
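A minimal sketch of how these four types of information might sit together in a single dataset follows; all field names and values are hypothetical.

import pandas as pd

# Each row is one respondent; columns illustrate the four types of
# information listed above.
survey = pd.DataFrame({
    "respondent_id":   [101, 102, 103, 104],
    "participant":     [1, 1, 0, 0],              # classification variable
    "months_exposed":  [12, 6, 0, 0],             # exposure to treatment
    "monthly_income":  [95.0, 80.0, 60.0, 72.0],  # outcome variable
    "years_schooling": [11, 7, 7, 10],            # intervening variable
})

# A naive comparison of mean outcomes by participation status:
print(survey.groupby("participant")["monthly_income"].mean())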
Among the elements noted for a good questionnaire are keeping it short and
focused on important questions, ensuring that the instructions and questions
are clear, limiting the questions to those needed for the evaluation, including a
“no opinion” option for closed questions to ensure reliable data, and using
sound procedures to administer the questionnaire, which may indeed be dif-
ferent for quantitative and qualitative surveys.

10.14.2 Fieldwork issues


Working with local staff who have extensive experience in collecting data
similar to that needed for the evaluation can greatly facilitate fieldwork opera-
tions. Not only can these staff provide the required knowledge of the geo-

graphical territory to be covered, but their knowledge can also be critical to
developing the norms used in locating and approaching informants. Field staff
whose expertise is in an area other than the one required for the evaluation
effort can present problems (UNDP, 2002).

The type of staff needed to collect data in the field will vary according to the
objectives and focus of the evaluation. For example, a quantitative impact
evaluation of a nutrition programme might require the inclusion of an
anthropometrist to collect height-for-weight measures as part of a survey team,
whereas the impact evaluation of an educational reform would most likely
include staff specialising in the application of achievement tests to measure the
impact of the reform on academic achievement. According to UNDP (2002),
most quantitative surveys will require at least a survey manager, data man-
ager, field manager, field supervisors, interviewers, data entry operators, and
drivers. Depending on the qualitative approach used, field staff may be similar
with the exception of data entry operators. The skills of the interviewers,
however, would be quite different, with qualitative interviewers requiring spe-
cialised training, particularly for focus groups, direct observation, and so forth.

Three other concerns are useful to remember when planning survey opera-
tions.

First, it is important to take into consideration temporal events that can affect
the operational success of the fieldwork and the external validity of the data
collected, such as:
 the school year calendar,
 holidays,
 rainy seasons and seasonality,
 harvest times, or migration patterns.
Second, it is crucial to pilot test data collection instruments, even if they are
adaptations of instruments that have been used previously, both to test the
quality of the instrument with respect to producing the required data and to
familiarise fieldwork staff with the dynamics of the data collection process.
Pilot tests can also serve as a proving ground for the selection of a core team
of field staff to carry out the actual survey.

Finally, communications are essential to field operations. For example, if local
conditions permit their use, fieldwork can be enhanced by providing supervi-
sors with cellular phones so that they can be in touch with the survey manager,
field manager, and other staff to answer questions and keep them informed of
progress.


10.15 Data Management and Access


The objectives of a good data management system should be to ensure the
timeliness and quality of the evaluation data. Timeliness will depend on having
as much integration as possible between data collection and processing so
that errors can be verified and corrected prior to the conclusion of fieldwork.
The quality of the data can be ensured by applying consistency checks to test
the internal validity of the data collected both during and after the data are
entered and by making sure that proper documentation is available to the
analysts who will be using the data.
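As an illustration of such consistency checks, the sketch below flags records that violate simple internal-validity rules. The field names and rules are invented for illustration.

import pandas as pd

def flag_inconsistencies(df):
    """Return the records failing simple internal-validity checks."""
    bad_age    = ~df["age_years"].between(0, 110)
    bad_income = df["monthly_income"] < 0
    # Skip-pattern check: under-fives should have no schooling recorded
    bad_skip   = (df["age_years"] < 5) & (df["years_schooling"] > 0)
    return df[bad_age | bad_income | bad_skip]

records = pd.DataFrame({
    "age_years":       [34, 250, 3, 41],
    "monthly_income":  [80.0, 55.0, 0.0, -10.0],
    "years_schooling": [9, 6, 2, 12],
})
print(flag_inconsistencies(records))   # flags the age, skip and income errors

Running checks of this kind while fieldwork is still in progress allows errors to be corrected before the teams leave the field.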

Documentation should consist of two types of information:

(a) Information needed to interpret the data, including codebooks, data
dictionaries, guides to constructed variables, and any needed translations.
(b) Information needed to conduct the analysis, which is often included in a
basic information document that contains a description of the focus and
objective of the evaluation, details on the evaluation methodology, sum-
maries or copies of the data collection instruments, information on the
sample, a discussion of the fieldwork, and guidelines for using the data.
It is recommended that the data produced by evaluations be made openly
available given the public good value of evaluations and the possible need to
do additional follow-up work to assess long-term impacts by a team other
than the one that carried out the original evaluation work (Taschereau, 1998).
To facilitate the data-sharing process, at the outset of the evaluation an open
data access policy should be agreed upon and signed, establishing norms and
responsibilities for data distribution. An open data access policy puts an added
burden on good data documentation and protecting the confidentiality of the
informants.

10.16 Analysis, Reporting and Dissemination


As with other stages of the evaluation process, the analysis of the evaluation
data, whether quantitative or qualitative, requires collaboration between the
analysts, data producers, and policymakers to clarify questions and ensure
timely, quality results. Problems with the cleaning and interpretation of data
will almost surely arise during analysis and require input from various team
members. There are also many techniques for analysing qualitative data (Miles


and Huberman, 1994). Two commonly used methods for impact evaluation
are mentioned—content analysis and case analysis (Taschereau, 1998).

10.16.1 Content analysis


Content analysis is used to analyse data drawn from interviews, observations,
and documents. In reviewing the data, the evaluator develops a classification
system for the data, organising information based on:

(a) the evaluation questions for which the information was collected,

(b) how the material will be used

(c) the need for cross-referencing the information.

The coding of data can be quite complex and may require many assumptions.
Once a classification system has been set up, the analysis phase begins. This
involves looking for patterns in the data and moving beyond description to-
ward developing an understanding of programme processes, outcomes, and
impacts. This is best carried out with the involvement of team members. New
ethnographic and linguistic computer programmes are also now available,
designed to support the analysis of qualitative data (UNDP, 2002).
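The classification step described above can be sketched, in deliberately simplified form, as keyword-based coding of interview responses. The coding frame and responses below are invented, and dedicated qualitative analysis software is far more sophisticated.

from collections import Counter

# Hypothetical coding frame mapping themes to indicative keywords
coding_frame = {
    "access":  ["distance", "transport", "far"],
    "quality": ["staff", "medicine", "wait"],
}

def code_response(text):
    """Return the themes whose keywords appear in an interview response."""
    text = text.lower()
    return [theme for theme, keywords in coding_frame.items()
            if any(word in text for word in keywords)]

responses = [
    "The clinic is too far and transport costs too much.",
    "Staff are kind but we wait all day for medicine.",
]
theme_counts = Counter(t for r in responses for t in code_response(r))
print(theme_counts)   # e.g., Counter({'access': 1, 'quality': 1})

Counting coded themes across respondents is one simple way of moving from individual statements to the patterns the evaluator is looking for.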

10.16.2 Case analysis


Case analysis is based on case studies designed for in-depth study of a par-
ticular group or individual. The high level of detail can provide rich informa-
tion for evaluating project impact. The processes of collecting and analysing
the data are carried out simultaneously as evaluators make observations as
they are collecting information. They can then develop and test explanations
and link critical pieces of information (Barbara, 1998).

Several practical points apply to the analysis and reporting stage. First, analysis commonly takes longer than anticipated, particularly if the data are not as
clean or accessible as expected at the beginning of the analysis, if the analysts
are not experienced with the type of evaluation work, or if there is an emphasis on capacity building through collaborative work.

Second, the evaluation manager should plan to produce several products as
outputs from the analytical work, keeping in mind two elements. The first is to
ensure the timing of outputs around key events when decisions regarding the
future of the project will be made, such as mid-term reviews, elections, or
closings of a pilot phase. The second is the audience for the results.

Products should be differentiated according to the audience for which they
are crafted, including government policymakers, programme managers, donors, the general public, journalists, and academics.

Third, the products will have the most policy relevance if they include clear
and practical recommendations stemming from the impact analysis. These
can be broken into short- and long-term priorities, and when possible, should
include budgetary implications. Decision makers will be prone to look for the
“bottom line.”

Finally, the reports should be planned as part of a broader dissemination
strategy, which can include presentations for various audiences, press re-
leases, feedback to informants, and making information available on the web.
Such a dissemination strategy should be included in the initial stages of the
planning process to ensure that it is included in the budget and that the results
reach the intended audience (Barbara, 1998).

Activity 10.3

1. Create a list of roles for impact assessment team members.
2. Discuss the roles and responsibilities of the team members in an im-
pact assessment team.
3. Analyse the utility of case analysis in impact assessment.

10.17 Summary
In this unit we have taken you through the processes of impact assessment for
monitoring and evaluation. We visited the importance of impact assessment,
as well as the main steps in designing impact assessment framework. Imple-
mentation processes and the roles of the key impact assessment team mem-
bers are articulated in this unit. Lastly, reporting and communication procedures
were attended to. We have encouraged you to view the impact assessment in
the context of monitoring and evaluation of development projects. We have
also reminded you that the goal of impact assessment is to see if the projects
are yielding the intended pre-determined development outcomes. We en-
courage you to read through this material and critique some of the processes
tendered in this unit.


References
Abadie, A., Angrist, J. and Imbens, G. (1998). "Instrumental Variables Estimation of Quantile Treatment Effects." National Bureau of Economic
Research Technical Working Paper No. 229.
Atkinson, A. (1987). "On the Measurement of Poverty." Econometrica 55:
749-64.
Baker, J. (2000). Evaluating the Impact of Development Projects on Poverty:
A Handbook for Practitioners. Directions in Development. Washington, D.C.: The International Bank for Reconstruction and Development/
The World Bank.
Bamberger, M. (2000). Integrating Quantitative and Qualitative Methods
in Development Research. Washington, D.C.: World Bank.
Barbara, M. (1998). "Practitioner-Led Impact Assessment: A Test in Mali."
USAID AIMS Brief. Washington, D.C.: USAID.
Grosh, M.E. and Muñoz, J. (1996). "A Manual for Planning and Implementing the Living Standards Measurement Study Survey." LSMS Working Paper No. 126. Washington, D.C.: World Bank.
Grossman, J.B. (1994). "Evaluating Social Policies: Principles and U.S. Experience." The World Bank Research Observer 9 (July): 159-80.
Heckman, J. and Robb, R. (1985). "Alternative Methods of Evaluating the
Impact of Interventions: An Overview." Journal of Econometrics
30: 239-67.
Hulme, D. (1997). "Impact Assessment Methodologies for Microfinance:
Theory, Experience and Better Practice." Institute for Development
Policy and Management, University of Manchester.
Khandker, S.R., Koolwal, G.B. and Samad, H.A. (2010). Handbook on
Impact Evaluation: Quantitative Methods and Practices. Washington, D.C.: The International Bank for Reconstruction and Development/
The World Bank.
Miles, M.B. and Huberman, A.M. (1994). Qualitative Data Analysis: An
Expanded Sourcebook. Thousand Oaks, CA: Sage Publications.
Rossi, P.H. (1993). Evaluation: A Systematic Approach. London: Sage
Publications.
Taschereau, S. (1998). Evaluating the Impact of Training and Institutional Development Programs: A Collaborative Approach. Economic Development Institute of the World Bank, January.
United Nations Development Programme. (2002). The Evaluation of
Results-Based Management at UNDP. New York, NY.



Unit Eleven

Monitoring and Evaluation in the Context of Results Based Management (RBM)

11.1 Introduction

An increasing emphasis on results is bringing about some major changes
in the focus, approach and application of monitoring and evaluation
within development practice. Central to these changes is results-based man-
agement. In this unit we visit the concept of Results Based Management (RBM)
as it is used in Monitoring and evaluation. In this unit we define the concept
and link it to monitoring and evaluation of development projects. The princi-
ples of RBM are looked at, the link between planning, monitoring and evalu-
ation is established and explained. We also look at the objectives of RBM as
well as the justification for planning is explored fully in this unit.

11.2 Unit Objectives


By the end of this unit, you should be able to:

 define the following terms:
(a) results-based management (RBM)
(b) planning
 list the main objectives of RBM
 explain the RBM life cycle
 justify the reasons for planning
 discuss the principles of RBM
 explain the link between planning and monitoring and evaluation

11.3 What is Results Based Management?


Results-Based Management (RBM) is a management strategy or approach
by which an organisation ensures that its processes, products and services
contribute to the achievement of clearly stated results. Results-Based Man-
agement provides a coherent framework for strategic planning and manage-
ment by improving learning and accountability. It is also a broad management
strategy aimed at achieving important changes in the way agencies operate,
with improving performance and achieving results as the central orientation,
by defining realistic expected results, monitoring progress toward the achieve-
ment of expected results, integrating lessons learned into management deci-
sions and reporting on performance (United Nations Development Pro-
gramme, 2002).

11.4 Putting Planning, Monitoring and Evaluation


Together: Results Based Management
Planning, monitoring and evaluation come together as Results Based Man-
agement (RBM). RBM is defined as “a broad management strategy aimed at
achieving improved performance and demonstrable results” (United Nations
Evaluation Group, 2007) and has been adopted by many multilateral devel-
opment organisations, bilateral development agencies and public administra-
tions throughout the world. Some of these organisations now refer to RBM
as Management for Development Results (MfDR) to place the emphasis on
development rather than organisational results (United Nations Development
Programme, 2009).


Good RBM is an ongoing process. This means that there is constant feed-
back, learning and improving. Existing plans are regularly modified based on
the lessons learned through monitoring and evaluation, and future plans are
developed based on these lessons. Monitoring is also an ongoing process.
The lessons from monitoring are discussed periodically and used to inform
actions and decisions. Evaluations should be done for programmatic improve-
ments while the programme is still ongoing and also inform the planning of
new programmes. This ongoing process of doing, learning and improving is
what is referred to as the RBM life-cycle approach, which is depicted in
Figure 11.1 (the RBM cycle).

RBM is concerned with learning, risk management and accountability. Learning not only helps improve results from existing programmes and projects, but
also enhances the capacity of the organisation and individuals to make better
decisions in the future and improves the formulation of future programmes
and projects. Since there are no perfect plans, it is essential that managers,
staff and stakeholders learn from the successes and failures of each pro-
gramme or project. There are many risks and opportunities involved in pur-
suing development results. RBM systems and tools should help promote
awareness of these risks and opportunities, and provide managers, staff,
stakeholders and partners with the tools to mitigate risks or pursue opportu-
nities.

RBM practices and systems are most effective when they are accompanied
by clear accountability arrangements and appropriate incentives that promote
desired behaviour. In other words, RBM should not be seen simply in terms
of developing systems and tools to plan, monitor and evaluate results. It must
also include effective measures for promoting a culture of results-orientation
and ensuring that persons are accountable for both the results achieved and
their actions and behaviour.

According to UNDP (2009), the main objectives of good planning, monitoring and evaluation (RBM) are to:
 support substantive accountability to governments, beneficiaries, donors, other partners and stakeholders, and the UNDP Executive Board,
 prompt corrective action,
 ensure informed decision making,
 promote risk management, and
 enhance organisational and individual learning.
These objectives are linked together in a continuous process.


Activity 11.1

1. Define the following terms:
(a) results-based management (RBM)
(b) planning
(c) evaluation
2. List the main objectives of Results-Based Management (RBM).

11.5 The Results Based Management Lifecycle Approach
Planning, monitoring and evaluation should not necessarily be approached in
a sequential manner. The conduct of an evaluation does not always take place
at the end of the cycle. Evaluation can take place at any point in time during
the programming cycle. Figure 11.1 illustrates the interconnected nature of
planning, monitoring and evaluation to support MfDR. Planning for monitor-
ing and evaluation must take place at the planning stage.

Figure 11.1 The RBM Lifecycle Approach (Source: an extract from UNDP, 2009)


11.6 Planning
According to UNDP (2009) planning can be defined as the process of setting
goals, developing strategies, outlining the implementation arrangements and
allocating resources to achieve those goals. It is important to note that plan-
ning involves looking at a number of different processes such as:
 identifying the vision, goals or objectives to be achieved,
 formulating the strategies needed to achieve the vision and goals,
 determining and allocating the resources (financial and other) required
to achieve the vision and goals and
 outlining implementation arrangements, which include the arrangements
for monitoring and evaluating progress towards achieving the vision
and goals.
There is an expression that failing to plan is planning to fail. While it is not
always true that those who fail to plan will eventually fail in their endeavours,
there is strong evidence to suggest that having a plan leads to greater effec-
tiveness and efficiency. Not having a plan, whether for an office, programme
or project, is in some ways similar to attempting to build a house without a
blueprint, that is, it is very difficult to know what the house will look like, how
much it will cost, how long it will take to build, what resources will be re-
quired, and whether the finished product will satisfy the owner’s needs. In
short, planning helps us define what an organisation, programme or project
aims to achieve and how it will go about it (UNDP 2009).

11.6.1 Justification for planning


Four main benefits that make planning worthwhile are as follows:

Planning enables us to know what should be done when

Without proper planning, projects or programmes may be implemented at the
wrong time or in the wrong manner and result in poor outcomes. A classic
example is that of a development agency that offered to help improve the
conditions of rural roads. The planning process was controlled by the agency
with little consultation. Road repair began during the rainy season and much
of the material used for construction was unsuitable for the region. The project
suffered lengthy delays and cost overruns. One community member com-
mented during the evaluation that the community wanted the project, but if
there had been proper planning and consultation with them, the donors would


have known the best time to start the project and the type of material to use
(UN, 2008).

Planning helps mitigate and manage crises and ensure smoother im-
plementation

There will always be unexpected situations in programmes and projects.
However, a proper planning exercise helps reduce the likelihood of these and
prepares the team for dealing with them when they occur. The planning proc-
ess should also involve assessing risks and assumptions and thinking through
possible unintended consequences of the activities being planned (UN, 2008).
The results of these exercises can be very
helpful in anticipating and dealing with problems. Some planning exercises
also include scenario planning that looks at ‘what ifs’ for different situations
that may arise (UNDP 2009).

Planning improves focus on priorities and leads to more efficient use of time, money and other resources

Having a clear plan or roadmap helps focus limited resources on priority
activities, that is, the ones most likely to bring about the desired change. With-
out a plan, people often get distracted by many competing demands. Simi-
larly, projects and programmes will often go off track and become ineffective
and inefficient.

Planning helps determine what success will look like

A proper plan helps individuals and units to know whether the results achieved
are those that were intended and to assess any discrepancies. Of course, this
requires effective monitoring and evaluation of what was planned. For this
reason, good planning includes a clear strategy for monitoring and evaluation
and use of the information from these processes (UN, 2008).

11.7 Monitoring
Monitoring can be defined as the ongoing process by which stakeholders
obtain regular feedback on the progress being made towards achieving their
goals and objectives. Contrary to many definitions that treat monitoring as
merely reviewing progress made in implementing actions or activities, the
definition used in this unit focuses on reviewing progress against achiev-

ing goals. In other words, monitoring in this unit is not only concerned with
asking “Are we taking the actions we said we would take?” but also “Are we
making progress on achieving the results that we said we wanted to achieve?”
The difference between these two approaches is extremely important. In the
more limited approach, monitoring may focus on tracking projects and the
use of the agency’s resources. In the broader approach, monitoring also in-
volves tracking strategies and actions being taken by partners and non-part-
ners, and figuring out what new strategies and actions need to be taken to
ensure progress towards the most important results (United Nations Popula-
tion Fund, 2001).

11.8 Evaluation
Evaluation is a rigorous and independent assessment of either completed or
ongoing activities to determine the extent to which they are achieving stated
objectives and contributing to decision making. Evaluations, like monitoring,
can apply to many things, including an activity, project, programme, strategy,
policy, topic, theme, sector or organisation (Organisation for Economic Co-
operation and Development, 2002). The key distinction between the two is
that evaluations are done independently to provide managers and staff with
an objective assessment of whether or not they are on track. They are also
more rigorous in their procedures, design and methodology, and generally
involve more extensive analysis. While monitoring provides real-time infor-
mation required by management, evaluation provides more in-depth assess-
ment. The monitoring process can generate questions to be answered by
evaluation. Also, evaluation draws heavily on data generated through moni-
toring during the programme and project cycle, including, for example, base-
line data, information on the programme or project implementation process
and measurements of results (UNDP, 1998).

Activity 11.2

1. Draw the Results-Based Management lifecycle and identify its major
components.
2. Identify the main processes involved in planning.
3. Provide a justification for planning in development projects. Illustrate
your answers using practical examples.


11.9 Principles of Results-Based Management


This section addresses some of the principles that readers should have in
mind throughout the entire process of planning, monitoring and evaluation
(RBM).

11.9.1 Ownership
Ownership is fundamental in formulating and implementing programmes and
projects to achieve development results. According to UNDP (1998; 2009),
there are two major aspects of ownership to be considered: the depth, or
level, of ownership of plans and processes, and the breadth of ownership.

 Depth of ownership:

Many times, units or organisations go through the planning process to fulfil
requirements of their governing or supervisory bodies, such as a board of
directors or headquarters. When this is the case, plans, programmes or projects
tend to be neatly prepared for submission, but agencies and individuals return
to business as usual once the requirements are met (UNDP, 1998). When
these plans are formulated to meet a requirement and are not used to guide
ongoing management actions, organisations have greater risk of not achieving
the objectives set out in the plans. Ownership is also critical for effectively
carrying out planned monitoring and evaluation activities and linking the infor-
mation generated from monitoring and evaluation to future programme im-
provements and learning.

The process is not about compliance and meeting requirements; while it is
important to have the systems, it is more important that people understand
and appreciate why they are doing the things they are doing and adopt a
results-oriented approach in their general behaviour and work.

 Breadth of ownership:

There are two questions to address with respect to breadth of ownership:
who does the development programme or project benefit or impact, and do
a sufficient number of these agencies and persons feel ownership of the pro-
gramme or project? Programme countries are ultimately responsible for achiev-
ing development results, which is why all development plans, programmes
and projects should be owned by national stakeholders. Ownership by pro-
gramme countries does not mean implementing agents are not accountable


for the results. Implementing agents’ accountability generally applies to the
contributions agents (for example, UNDP) make to a country’s results and
the use of financial resources.

A key aim of managing for results is to ensure that ownership goes beyond a
few select persons to include as many stakeholders as possible. For this rea-
son, monitoring and evaluation activities and the findings, recommendations
and lessons from ongoing and periodic monitoring and evaluation should be
fully owned by those responsible for results and those who can make use of
them (UNDP, 2009).

11.9.2 Engagement of stakeholders


Throughout all stages of planning, monitoring, evaluating, learning and im-
proving, it is vital to engage stakeholders, promote buy-in and commitment,
and motivate action (UNEG, 2008).

A strong results-management process aims to engage stakeholders in thinking as openly and creatively as possible about what they want to achieve
and encourage them to organise themselves to achieve what they have agreed
on, including putting in place a process to monitor and evaluate progress and
use the information to improve performance.

11.9.3 Focus on results


RBM processes should be geared towards ensuring that results are achieved,
not towards ensuring that all activities and outputs get produced as planned.

11.9.4 Focus on development effectiveness


Results management also means focusing on achieving development effec-
tiveness. Meaningful and sustainable development results require more than
just a generic plan of outcomes, outputs and activities. How we do develop-
ment is often equally if not more important than what we do in development
work. For this reason, many development agencies attempt to incorporate
various themes into their planning, monitoring and evaluation processes to
improve the overall effectiveness of their efforts. For example, planning, moni-
toring and evaluation must focus on sustainability. This conclusion was reached
after years of experience with projects and programmes that had short-term
impact but failed to alter the development conditions of countries or commu-
nities in any meaningful manner. Similarly, there is now a focus on gender in
planning, monitoring and evaluation (UNEG, 2008).


Many projects and programmes often fail to achieve their objectives because
there is little or no analysis of, and attention to, the differences between the
roles and needs of men and women in society. Inequalities, discriminatory
practices and unjust power relations between groups in society are often at
the heart of development problems. The same applies to the concept of na-
tional or community ownership of development programmes. There is greater
pride and satisfaction, greater willingness to protect and maintain assets, and
greater involvement in social and community affairs when people have a vested
interest in something that is, when they feel ‘ownership’.

Applying these principles to planning, monitoring and evaluation in a concrete
manner means that these processes should be designed in such a way that
they ensure the following (UNEG, 2008):

 Ensure or promote national ownership

Ensure that, as appropriate, processes are led or co-led by the government
and/or other national or community partners and that all plans, programmes,
projects, and monitoring and evaluation efforts are aimed primarily at sup-
porting national efforts, rather than agency objectives. Important questions to
ask include: “Are the people for whom the plan, programme or project is
being developed involved in the planning, monitoring and evaluation proc-
ess?”; “Do they feel that they are a part of the process?”; and “Do they feel
ownership of the processes and the plan or programme?”

 Promote national capacity development

Ask throughout the processes: “Will this be sustainable?”; “Can national sys-
tems and processes be used or augmented?”; “What are the existing national
capacity assets in this area?”; “Are we looking at the enabling environment,
the organisation or institution, as well as the individual capacities?”; and “Can
we engage in monitoring and evaluation activities so that we help to strengthen
national M and E systems in the process?”
 Promote inclusiveness, gender mainstreaming and women’s
empowerment
Ensure that men, women and traditionally marginalised groups are involved in
the planning, monitoring and evaluation processes. For example, ask ques-
tions such as:
 “Does this problem or result as we have stated it reflect the interests,
rights and concerns of men, women and marginalised groups?”


 “Have we analysed this from the point of view of men, women and
marginalised groups in terms of their roles, rights, needs and
concerns?” and
 “Do we have sufficiently disaggregated data for monitoring and evalu-
ation?”

11.9.5 Exploring the link between planning, monitoring and evaluation
According to UNEG (2008), the following are some of the inter-linkages and
dependences between planning, monitoring and evaluation:
 Without proper planning and clear articulation of intended results, it is
not clear what should be monitored and how; hence monitoring cannot
be done well.
 Without effective planning (clear results frameworks), the basis for evaluation is weak; hence evaluation cannot be done well.
 Monitoring is necessary but not sufficient for evaluation.
 Monitoring facilitates evaluation but evaluation uses additional new data
collection and different frameworks for analysis.
 Monitoring and evaluation for programmes will often lead to changes in
programme plans. This may mean further changing or modifying data
collection for monitoring purposes.

Activity 11.3

1. Explore the link between planning and monitoring and evaluation.
2. (a) Discuss the principles of Results-Based Management. (b) How
feasible are these principles in monitoring and evaluation practice?
3. Discuss the outcomes of applying the principles of Results-Based Man-
agement effectively in development projects.


11.10 Summary
In this unit we covered issues concerning monitoring and evaluation in the
context of RBM. We recognised that the increasing emphasis on results is
bringing about some major changes in the focus, approach and application of
monitoring and evaluation within development circles. We noted that central
to these changes is Results-Based Management. In this unit we also defined
the concept of RBM and linked it to monitoring and evaluation of development
projects. The principles of RBM were visited. We established and explained
the link between planning, monitoring and evaluation. We also looked at the
objectives of RBM, and the justification for planning was explored fully in this
unit.


References
Evaluation Resource Centre (ERC). (n.d.). Website: erc.undp.org. Accessed
6 September, 2016.
International Development Association (IDA). (2002). "Measuring Outputs
and Outcomes in IDA Countries." IDA 13. Washington, D.C.: World
Bank.
Organisation for Economic Co-operation and Development. (2002). "Glossary of Key Terms in Evaluation and Results-Based Management."
Development Assistance Committee (DAC), Paris, France. Available
at: http://www.oecd.org/dataoecd/29/21/2754804. Accessed 7 June,
2012.
United Nations. (2008). "Programme and Operations Policies and Procedures." Available at: http://content.undp.org/go/userguid. Accessed
15 September, 2010.
United Nations Development Programme. (1998). Evaluation of the Governance Programme for Latin America and the Caribbean. Available at: http://intra.UNDP.org/eog/publications/publixations.html.
Accessed 16 December, 2016.
United Nations Development Programme. (2002). The Evaluation of
Results-Based Management at UNDP. New York, NY.
United Nations Development Programme. (2009). "Assessment of Development Results (ADR) Guidelines." Evaluation Office, New York, NY.
Available at: http://intra.undp.org/eo/documents/ADR/ADR-Guide.
Accessed 26 June, 2012.
United Nations Development Programme. (n.d.). Accountability Framework.
Available at: http://content.undp.org/go/userguide/results-management-accountability/. Accessed 22 December, 2016.
United Nations Development Programme Evaluation Office. (1998). Internal
(intra.undp.org/eo) and external (www.undp.org/eo) websites. Accessed
5 October, 2016.
United Nations Evaluation Group (UNEG). (2000). Monitoring and Evaluation in Development. Website: www.unevaluation.org. Accessed 19
March, 2017.
United Nations Evaluation Group (UNEG). (2008). "Standards for Evaluation in the UN System." Available at: http://www.unevaluation.org/
unegstandards. Accessed 22 September, 2008.
United Nations Population Fund (UNFPA). (2001). RBM and M&E. Geneva.

Unit Twelve

Annex 1: Evaluation Report Outlines
12.1 Introduction

This unit covers different structures of evaluation reports. The structures presented in this unit were taken from real evaluation reports submitted for professional consideration. We have done this to show you how actual professional reports are written and how they are structured.

12.2 Unit Objectives

By the end of this unit, you should be able to:
• identify different monitoring and evaluation reporting structures
• create a monitoring and evaluation report outline
• arrange the different sections of a monitoring and evaluation report in line with its objectives
• compare different report structures
• discuss the different components of an evaluation report’s sections

12.3 Annex 1. Evaluation Report Outline

This evaluation report structure was adapted from the work of Morris (1987). It is a useful guideline for anyone having to write an evaluation report. Too often we rush into the actual evaluation without giving due consideration to how we are going to communicate our findings once the evaluation is complete. This framework is useful when planning an evaluation, as it covers all the areas that could potentially be involved in conducting one. Sections 1 to 7 below present one example of a report structure outline, indicating what should be contained under each section.
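Before walking through the sections, the following minimal sketch, written in Python purely for illustration, shows one way of holding such an outline as a simple data structure and rendering a report skeleton from it. The section names follow the outline below; the function and variable names are our own and are not part of Morris’s framework.

# A sketch of the report outline held as data, rendered as a plain-text
# skeleton for the report writer to fill in.
OUTLINE = [
    ("Summary", ["What was evaluated?", "Why was it conducted?",
                 "Major findings and recommendations"]),
    ("Background", ["Origin of the program", "Aims", "Participants"]),
    ("Description of the Evaluation", ["Purposes", "Design", "Instruments"]),
    ("Results", ["Results of the study"]),
    ("Discussion", ["Interpretation of the results"]),
    ("Costs and Benefits", ["Costs", "Benefits"]),
    ("Conclusions", ["Major conclusions", "Recommendations"]),
]

def render_skeleton(outline):
    """Return a numbered plain-text skeleton of the report."""
    lines = []
    for i, (section, prompts) in enumerate(outline, start=1):
        lines.append(f"Section {i} - {section}")
        lines.extend(f"  * {prompt}" for prompt in prompts)
    return "\n".join(lines)

print(render_skeleton(OUTLINE))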

Section 1 – Summary

Make this a short summary for people who will not read the whole report. Give the reasons why the evaluation was conducted and who it is targeted at, together with any conclusions and recommendations (Crompton, n.d.).

It should cover:
• What was evaluated?
• Why was the evaluation conducted?
• What are the major findings and recommendations?
• Who is the report aimed at?
• Were there any major restrictions placed on the evaluation, and by whom?
Section 2 - Background

In this part, cover the background to the evaluation and what it was meant to achieve. The program should be described, and the depth of description will depend on whether the intended audiences have any knowledge of the program or not. Do not assume that everybody will know. Do not leave things out, but at the same time do not burden readers with detail.

It should cover:
• origin of the program
• aims of the program
• participants in the program
• characteristics of the materials
• staff involved in the program
Section 3 - Description of the Evaluation

This covers why the evaluation was conducted and what it was and was not intended to accomplish. State the methodology and any relevant technical information, such as how the data was collected and what evaluation tools were used.

It should cover:
• purposes of the evaluation
• evaluation design
• outcome measures
• instruments used
• data collection procedures
• implementation measures
Section 4 - Results

This will cover the results of the work from Section 3 and can be supplemented by any other evidence collected. Try to use graphics (charts, tables and so on) to illustrate the information, but use them sparingly to increase their effectiveness.
It should cover:
• results of the study
• How many participants took any tests?
• What were the results of the tests?
• If there was a comparative group, how do they compare?
• Are any differences statistically significant? (a simple check is sketched below)
• If there was no control group, did performance change from test to test?
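To illustrate the significance question, here is a minimal sketch in Python using the scipy library: it compares test scores between a programme group and a comparison group with an independent-samples t-test. The scores are invented for the example, and a real evaluation would choose a test to match its actual design.

from scipy import stats

# Hypothetical post-test scores for the two groups.
programme_scores = [72, 68, 75, 80, 77, 74, 69, 83]
comparison_scores = [65, 70, 62, 68, 71, 66, 64, 69]

# Independent-samples t-test: are the mean scores different?
t_stat, p_value = stats.ttest_ind(programme_scores, comparison_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Conventional 5% significance level (illustrative only).
if p_value < 0.05:
    print("The difference is statistically significant.")
else:
    print("No statistically significant difference was detected.")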
Section 5 - Discussion

This should discuss your findings and your interpretation of them. Always interpret your results in terms of your stated goals.

This section should cover the interpretation of all the results in Section 4. If the evaluation is not a large one, then Sections 4 and 5 could be combined. The results should always be related back to the purpose of the evaluation, something that does not always happen. Do not forget the unexpected results, as they can often be the most interesting.

It should cover:
• Are there alternative explanations to the results from the data?
• Are these results generalisable?
• What were the strengths and weaknesses of the intervention?
• Are certain parts of the program better received by certain groups?
• Are any results related to certain attitudes or learner characteristics?
• Were there any unexpected results?
Section 6 - Costs and Benefits

This is an optional section and would only be included if this had been part of the evaluation plan. As there is no definitive approach to investigating this whole area, there will be a need to justify the approach taken. Not many evaluations look at costs, but there is a growing need to include some information about this area. Evaluations and program interventions do not happen for free.
It should cover:
• What was the method used to calculate costs and effects/benefits? (a simple cost-effectiveness calculation is sketched below)
• How were costs and outcomes defined?
• What costs were associated with the program?
• How were costs distributed (for example, start-up costs, operating costs, etc.)?
• Were there any hidden costs (for example, in-kind contributions)?
• What benefits were associated with the program?
• What were the measures of effectiveness (test scores, program completion and so on)?
• Were there any unexpected benefits?
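As an illustration of one common approach, the following Python sketch computes a simple cost-effectiveness ratio (cost per unit of outcome). All figures and category names are hypothetical; a real analysis would define costs and outcomes in whatever way the evaluation plan agreed.

# Hypothetical programme costs, broken down as the section suggests.
costs = {
    "start_up": 15_000.00,    # one-off set-up costs
    "operating": 42_000.00,   # recurrent delivery costs
    "in_kind": 6_500.00,      # hidden/in-kind contributions
}

# Hypothetical measure of effectiveness: programme completions.
completions = 850

total_cost = sum(costs.values())
cost_per_completion = total_cost / completions

print(f"Total cost: {total_cost:,.2f}")
print(f"Cost per completion: {cost_per_completion:,.2f}")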
Section 7 - Conclusions

This section can be the most important section in the report, apart from the summary. Some people will only read the summary and the conclusion section. Conclusions and recommendations should be stated clearly and precisely, and they might be presented as a list, as readers can easily scan them. Do not expect everyone to read your report from cover to cover. Make sure that you get your main points across in the opening summary and in the conclusion.

It should cover:
• What are the major conclusions of the evaluation?
• How sure are you of the conclusions?
• Are all the results reliable?
• What are the recommendations regarding the program?
• Can any predictions or hypotheses be put forward?
• Are there any recommendations as to future evaluations?

Adapted from: Philip Crompton, Research Fellow, Institute for Education, University of Stirling (https://www.sampletemplates.com/business-templates/sample-evaluation-report.html).
12.4 Annex 2: A Sample of a Full Evaluation Report

(https://www.oecd.org/development/evaluation/dcdndep/47069197.pdf, accessed 20/05/2017)
Acronyms
Executive Summary
1 Introduction
1.1 Background and purposes of the evaluation
1.2 Methodology of the evaluation
2 Context of the project/programme
3 Analysis of project concept and design
3.1 Concept and Design
4 Analysis of the implementation process
4.1 Project/programme Management
4.2 Financial resources management
4.3 Efficiency and effectiveness of the institutional arrangements including Government’s participation
5 Analysis of results and contribution to stated objectives
5.1 Achievements at Outputs level
5.2 Achievements at Outcome level
5.3 Gender equality
5.4 Capacity development
5.5 Human-Rights Based Approach
5.6 Partnerships and Alliances
5.7 Humanitarian principles (emergency projects)
6 Analysis by evaluation criteria
6.1 Relevance
6.2 Efficiency
6.3 Effectiveness
6.4 Sustainability
6.5 Impact
7 Conclusions and Recommendations
8 Lessons Learned
Annexes to the evaluation report
Annex 1. Evaluation Terms of Reference
Annex 2. Brief profile of evaluation team members
Annex 3. List of documents reviewed
Annex 4. List of institutions and stakeholders met during the evaluation process
Annex 5. List of project outputs
Annex 6. Evaluation tools (not mandatory)

12.4.1 A Detailed Explanation of the Evaluation Report

The following section provides a detailed explanation of the components of the report presented in Annex 2.

Introduction

Background and purposes of the evaluation

• This section will include:
• the purpose of the evaluation, as stated in the Terms of Reference;
• the project/programme title, starting and closing dates, and initial and current total budget; and
• the dates of implementation of the evaluation.
• It will also mention that Annex 1 of the evaluation report is the evaluation Terms of Reference.
Methodology of the evaluation

• This section will comprise a description of the methodology and tools used and the evaluation criteria that were applied in the evaluation. It should also note any limitations and constraints encountered in applying the planned approach or methods defined in the ToRs by the evaluation team, and the extent to which and how these were overcome. As appropriate, the tools will be included as an annex of the report.
Context of the project/programme

• This section will include a description of the developmental context relevant to the project/programme (global/regional/national as appropriate), including major challenges in the area of the intervention, political and legislative issues, etc.
• It will also describe the process by which the project/programme was identified and developed, and cite other related UN (including FAO) and bilateral interventions, if relevant.
Analysis of project concept and design

• Programmes and projects are built on assumptions about how and why they are supposed to achieve the agreed objectives through the selected strategy; this set of assumptions constitutes the programme theory or ‘theory of change’ and can be explicit (for example, in a logical framework matrix) or implicit in a project/programme document. This section will analyse the theory of change, or the strategy underpinning the project, including objectives and assumptions, and assess its robustness and realism. In so doing, the evaluation team will refer to the following characteristics, as appropriate:
• the relevance of stated development goals and outcomes (immediate objectives);
• the adequacy of the approach and methodology of implementation to achieve intended results;
• the adequacy of the time-frame and total resources, including human and financial, allocated for implementation;
• the quality of the identification of stakeholders and beneficiaries;
• the appropriateness of both the institutional set-up and the management arrangements.
• This section will also critically analyse the clarity and coherence of the Logical Framework of the project, including:
• the causal relationship between inputs, activities, outputs, expected outcomes (immediate objectives) and impact (development objectives);
• the validity of indicators, assumptions and risks.
Analysis of the Implementation Process

Project/programme Management

• This section will analyse the performance of the operational management function, including, as appropriate:
• effectiveness of strategic decision-making by project/programme management, including the quality, realism and focus of annual work plans;
• efficiency and effectiveness of operations management, including timeliness of delivery, gaps and delays (if any) between planned and achieved outputs, the causes and consequences of delays, and an assessment of any remedial measures taken;
• efficiency and effectiveness of the monitoring system and internal review processes;
• effectiveness of staff management; and
• elaboration, quality and progress in the implementation of an exit strategy.
Financial resources management

• This section will analyse whether available financial resources and programme delivery were efficiently managed. In so doing, the evaluation team will refer to the following aspects, as appropriate:
• the relevance and adequacy of budget allocations to achieve intended results;
• the coherence and soundness of budget revisions in matching implementation needs and project/programme objectives; and
• an assessment of the rate of delivery and the budget balance at the time of the evaluation, compared to the initial plan (a simple delivery-rate calculation is sketched below).
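As a simple illustration of the delivery-rate assessment, the sketch below (in Python, with invented figures) compares cumulative expenditure against the plan at the time of the evaluation:

# Hypothetical budget figures at the time of the evaluation.
planned_budget = 1_200_000.00      # total approved budget
planned_to_date = 800_000.00       # amount planned to be spent by now
actual_expenditure = 620_000.00    # amount actually spent by now

# Delivery rate: actual expenditure as a share of the plan to date.
delivery_rate = actual_expenditure / planned_to_date
budget_balance = planned_budget - actual_expenditure

print(f"Delivery rate to date: {delivery_rate:.0%}")
print(f"Remaining budget balance: {budget_balance:,.2f}")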
Efficiency and effectiveness of the institutional arrangements including Government’s participation

• This section will analyse the extent to which institutional arrangements have supported programme delivery, including the following aspects:
• administrative and technical support by Food and Agriculture Organisation Headquarters and the regional, sub-regional and country offices, as appropriate;
• delivery by ‘FAO as One’;
• the role and effectiveness of the project’s institutional set-up, including coordination and steering bodies.
• The section will also analyse the Government’s commitment and support to the project/programme, in particular:
• the financial and human resources made available for project/programme operations;
• the uptake of outputs and outcomes through policy or investment for up-scaling.
Analysis of Results and Contribution to Stated Objectives

Achievements at Outputs level

• This section will critically analyse the extent to which planned project/programme outputs have been achieved. Ideally, the evaluation team should directly assess all of these, but this is not always feasible due to time and resource constraints. Thus, the detailed analysis should be done on a representative sample of outputs that were assessed directly, while a complete list of outputs prepared by the project/programme team should be included as an annex. If appropriate, the section will also include an analysis of gaps and delays and their causes and consequences. Unexpected outputs should also be included.
Achievements at Outcome level

• This section will critically analyse to what extent the expected outcomes (specific/immediate objectives) were achieved, or are likely to be achieved during the project/programme’s lifetime. It will also identify and analyse the main factors influencing their achievement and the contributions of the various stakeholders to them.
• This analysis should encompass the use made by the project of FAO’s normative and knowledge products, and the actual and potential contribution of the project to the normative and knowledge function of the Organization.
Gender equality

• This section will analyse if and how the project/programme mainstreamed gender issues. The assessment will cover:
• how gender issues were reflected in objectives, design, identification of beneficiaries and implementation;
• the extent to which gender equality considerations were taken into account in project management; and
• how gender relations, equality and processes of women’s inclusion were and are likely to be affected by the initiative.
Capacity development

• This section will assess the extent to which the project has integrated capacity development measures in its design and implementation, and what results it has achieved in that regard at the individual, organizational and enabling environment levels. This will include the perspectives for institutional uptake and mainstreaming of the newly acquired capacities, and/or diffusion beyond the beneficiaries or the project/programme.

Human-Rights Based Approach

• This section will analyse how the project integrated the principle of the Right to Food and decent rural employment in its design and implementation, and what results were achieved.
Partnerships and Alliances

• This section will assess the extent to which the partnerships and alliances FAO developed within the project contributed to efficient programme delivery; their focus and strength; and their effect on project results and sustainability.
Humanitarian principles (emergency projects)

• In the case of emergency projects, this section will assess the findings on the adherence of the project, from design and throughout its implementation, to the humanitarian principles and the Minimum Standards as defined in the Sphere handbook.
Analysis by evaluation criteria

• This section is standard for purposes of accountability and lesson learning. It should include the analysis of the project against the evaluation criteria identified in the ToR; it paves the way to the conclusions and recommendations and will provide the evidence for the quantitative scoring of the project in the OED project performance questionnaire.
Relevance

• This section will analyse the extent to which the project/programme’s objectives and strategy were coherent with the country’s expressed priorities and policies, with beneficiaries’ needs, and with other major aid programmes, at the time of approval and at the time of the evaluation.
• It will assess how, through its implementation and results, the project has been relevant to (select as applicable from the following list):
• national/regional development priorities, programmes and the needs of the population;
• the UNDAF, Consolidated Appeal or other UN programming framework;
• the FAO Country Programming Framework;
• FAO Global Goals and Strategic Objectives/Core Functions; and
• other aid programmes in the sector.
Efficiency

• This section will synthesise and discuss all evidence about efficiency in project implementation, with particular focus on delivery and management.
Effectiveness

• This section will synthesise and discuss all evidence about the effectiveness of the project, actual or potential, in pursuing its intermediate/specific objectives.
Sustainability

• This section will assess the prospects for sustaining and up-scaling the project’s results by the beneficiaries and the host institutions after the termination of the project. It will include, as appropriate:
• institutional, technical, social and economic sustainability of proposed technologies, innovations and/or processes;
• the expectation of institutional uptake and mainstreaming of the newly acquired capacities, and/or diffusion beyond the beneficiaries or the project;
• environmental sustainability: the project’s contribution to sustainable natural resource management, in terms of maintenance and/or regeneration of the natural resource base.
• In the case of emergency projects, where the concept of sustainability may not be fully appropriate, findings related to the project’s connectedness will be reported in this section.
Impact

• This section will assess the current and foreseeable positive and negative impacts produced as a result of the project/programme, directly or indirectly, intended or unintended.
• It will assess the actual or potential contribution of the project/programme to the planned development objective and to FAO’s Strategic Objectives, Core Functions and Organizational Results.

Conclusions and Recommendations

• Conclusions need to be substantiated by findings consistent with the data collected and the methodology, and represent insights into the identification and/or solution of important problems or issues. They may address specific evaluation questions raised in the Terms of Reference and should provide a clear basis for the recommendations which follow.
• Conclusions will synthesise the main findings from the preceding sections: main achievements, major weaknesses and gaps in implementation, factors affecting strengths and weaknesses, prospects for follow-up, and any emerging issues. They will consolidate the assessment of various aspects to judge the extent to which the project/programme has attained, or is expected to attain, its intermediate/specific objectives. Considerations about relevance, costs, implementation strategy and the quantity and quality of outputs and outcomes should be brought to bear on the aggregate final assessment. Conclusions will also assess to what extent FAO delivered as One.
• Recommendations should be firmly based on evidence and analysis, be relevant and realistic, and clearly indicate the actions to be taken upon their acceptance. They can tackle strategic, thematic or operational issues and, insofar as possible, should aim at producing measurable outputs and outcomes. Each recommendation should tackle one set of issues at a time, in particular when different levels of decision-making and action are involved.
• Each recommendation should be introduced by the rationale for it; alternatively, it should be referenced to the paragraphs in the report to which it is linked.
• Each recommendation should be clearly addressed to the appropriate party(ies), that is, the Government, the resource partner, FAO at its different levels (HQ, regional, sub-regional, national) and the project/programme management. Responsibilities and the time frame for implementation should be stated, to the extent possible.
• Although it is not possible to identify a ‘correct’ number of recommendations in an evaluation report, the evaluation team should focus its recommendations on those aspects that, in its view, will make a substantial and real difference to the project/programme and/or to FAO’s work.

Lessons Learned

• Not all evaluations generate lessons. Lessons should only be drawn if they represent original contributions to general knowledge.
• Where this is the case, the evaluation will identify lessons and good practices on substantive, methodological or procedural issues which could be relevant to the design, implementation and evaluation of similar projects or programmes. Such lessons/practices must have been innovative, demonstrated success, had an impact, and be replicable.
Annexes to the evaluation report

Annex 1. Evaluation Terms of Reference.

Annex 2. Brief profile of evaluation team members.

Annex 3. List of documents reviewed.

Annex 4. List of institutions and stakeholders met during the evaluation process. The team will decide whether to report the full name and/or the function of the people who were interviewed.

Annex 5. List of project outputs. This includes training events, meetings, reports/publications and initiatives supported through the project/programme. It should be prepared by the project/programme staff, in a format decided by the evaluation team, when details cannot be provided in the main text because they would be too cumbersome.

Annex 6. Evaluation tools (not mandatory).

(https://www.oecd.org/development/evaluation/dcdndep/47069197.pdf, accessed 20/05/2017)
12.5 Annex 3: USAID Format

Report should be: Thoughtful, well-researched and well-organised, and should objectively evaluate what worked, what did not, and why.

Executive Summary: Include a 3 to 5-page Executive Summary that provides a brief overview of the evaluation purpose, project background, evaluation questions, methods, findings, and conclusions.

Evaluation Questions: Address all evaluation questions in the statement of work.

Methods:
▪ Explain the evaluation methodology in detail.
▪ Disclose evaluation limitations, especially those associated with the evaluation methodology (e.g. selection bias, recall bias, unobservable differences between comparator groups, etc.).
Note: A summary of the methodology can be included in the body of the report, with the full description provided as an annex.

Findings:
▪ Present findings as analyzed facts, evidence and data supported by strong quantitative or qualitative evidence, and not anecdotes, hearsay or people’s opinions.
▪ Include findings that assess outcomes and impacts on males and females.

Recommendations:
▪ Support recommendations with specific findings.
▪ Provide recommendations that are action-oriented, practical, specific, and define who is responsible for the action.

Annexes: Include the following as annexes, at a minimum:
▪ Statement of Work;
▪ full description of evaluation methods;
▪ all evaluation tools (questionnaires, checklists, discussion guides, surveys, etc.);
▪ a list of sources of information (key informants, documents reviewed, other data sources).
Only if applicable, include as an annex Statement(s) of Differences regarding any significant unresolved differences of opinion on the part of funders, implementers, and/or members of the evaluation team.

Quality Control: Assess reports for quality by including an in-house peer technical review, with comments provided to evaluation teams.

Transparency:
▪ Submit the report to the Development Experience Clearinghouse (DEC) within three months of completion.
▪ Share the findings from evaluation reports as widely as possible, with a commitment to full and active disclosure.

Use: Integrate findings from evaluation reports into decision-making about strategies, program priorities, and project design.

(Adapted from USAID, How-to Note: Preparing Evaluation Reports, Number 1, Version 1.0, November 2012)
12.6 Annex 4: ICEIDA Report on Water and Sanitation

Structure of a Final Evaluation Report
CONTENTS

ACRONYMS
ACKNOWLEDGEMENTS
EXECUTIVE SUMMARY
A.1. ICEIDA WatSan Project in Mangochi, Malawi
A.2. Evaluation methodology
A.3. Summary of findings and main recommendations
1.0 INTRODUCTION
1.1 Introduction
1.2 The purpose of the report
1.3 The scope of the evaluation
1.4 The scope of the project
2.0 COUNTRY AND PROGRAMME PROFILE
2.1 Context for development
2.2 The economic, cultural and political dimensions of Malawi
2.3 State of infrastructure that characterises the context for development
2.4 Link to poverty reduction
2.5 Link to sustainable development and local needs
2.6 Gender equality, environment, and other programming priorities
2.7 Financial resourcing
2.8 Project milestones and achievements to date
2.9 Stakeholder participation
3.0 EVALUATION PROFILE
3.1 Methodology
3.2 Sources of data
3.3 Sampling methods
3.4 Enumeration
3.5 Techniques of data collection
3.6 Data analysis
4.0 EVALUATION FINDINGS
4.A.0 Relevance
4.A.1 Needs assessment and choice of the beneficiaries
4.A.2 Consistency of the programme’s objectives with beneficiaries’ needs and expectations
4.A.3 Consistency of the programme’s strategy and activities with the programme’s objectives
4.A.4 Alignment with national policies, strategies and priorities
4.B.0 Effectiveness
4.B.1 Objective 1: Increase the number of boreholes in the Monkey Bay Health Zone
4.B.2 Objective 2: Build up capacity among communities in the maintenance of boreholes and pumps
4.B.3 Objective 3: Increase knowledge in hygiene and sanitation among the target groups
4.B.4 Objective 4: Increase the number of protected and improved shallow wells
4.B.5 Objective 5: Putting to use two natural springs in Mvunguti village
4.B.6 Objective 6: Improve community-based management
4.B.7 Objective 7: Establishing a functional co-ordination, monitoring and reporting system between stakeholders
4.C.0 Efficiency
4.C.1 Significant improvement in access to drinking water
4.C.2 Increased knowledge in sanitation and hygiene
4.C.3 Practice of hand washing with soap
4.D.0 Impact
4.D.1 A remarkable decrease in water-related diseases
4.D.2 Performing community management structures
4.E.0 Sustainability
4.E.1 Current functioning status of the programme’s outputs
4.E.2 Financial sustainability
4.E.3 Technical sustainability
4.E.4 Institutional sustainability
4.E.5 Environmental sustainability
5.0 CONCLUSION
6.0 RECOMMENDATIONS
6.1 Recommendations of the evaluation with respect to relevance
6.2 Recommendations of the evaluation with respect to effectiveness
6.3 Recommendations of the evaluation with respect to efficiency
6.4 Recommendations of the evaluation with respect to impact
6.5 Recommendations of the evaluation with respect to sustainability
7.0 LESSONS LEARNED
APPENDICES
APPENDIX 1: TERMS OF REFERENCE (ToRs)
APPENDIX 2: EVALUATION RESULTS MATRIX
APPENDIX 3: GIS WATER POINT RESULTS MAP
APPENDIX 4: EVALUATION ACTIVITIES AND TIME FRAME
APPENDIX 5: LIST OF PEOPLE INTERVIEWED
APPENDIX 6: QUESTIONNAIRES
APPENDIX 7: LIST OF COMMUNITIES/VILLAGES SURVEYED
BIBLIOGRAPHY/REFERENCES

http://www.iceida.is/media/pdf/ICEIDA-FINAL-EVALUATION-REPORT-FINAL--submitted.pdf (Accessed 26/06/2017)
12.7 Summary

In this unit, we presented real examples of professional evaluation report structures. We hope you were able to see how the reports are structured and how the different sections of each report are arranged. You will find the full versions of the reports on your My-Vista course account. Alternatively, you may find any evaluation report on the internet and download it for analysis. A full report could not be inserted in this unit because of limited space.
References

Crompton, P. (n.d.). Sample Evaluation Report. Available at: https://www.sampletemplates.com/business-templates/sample-evaluation-report.html. Accessed 2016.
ICEIDA. (n.d.). Final Evaluation Report. Available at: http://www.iceida.is/media/pdf/ICEIDA-FINAL-EVALUATION-REPORT-FINAL--submitted.pdf. Accessed 26/06/2017.
OECD. (n.d.). Available at: https://www.oecd.org/development/evaluation/dcdndep/47069197.pdf. Accessed 20/05/2017.
USAID. (2012). How-to Note: Preparing Evaluation Reports. Available at: https://www.usaid.gov/sites/default/files/documents/1870/How-to-Note_Preparing-Evaluation-Reports.pdf. Accessed 26/05/2017.