Development Studies
MSDS510
Author: Shepard Mutsau
Master of Science in Rural and Urban Planning (UZ)
Master of Science in Development Studies (NUST)
Post Graduate Diploma in Project Planning and Management (UZ)
Bachelor of Arts General (UZ)
Bachelor of Science in Psychology (ZOU)
Executive Certificate in Monitoring and Evaluation (UZ)
Certificate in Community Development (UZ)
Research Fellow (DHS, USA)
Mount Pleasant
Harare, ZIMBABWE
Year: 2017
Layout : S. Mapfumo
I.S.B.N:
as you analyse ideas and seek clarification on any issues. It has been found that those who take part in tutorials actively do better in assignments and examinations because their ideas are streamlined. Taking part properly means that you prepare for the tutorial beforehand by putting together relevant questions and their possible answers and those areas that cause you confusion.

Only in cases where the information being discussed is not found in the learning package can the tutor provide extra learning materials, but this should not be the dominant feature of the six-hour tutorial. As stated, it should be rare because the information needed for the course is found in the learning package together with the sources to which you are referred. Fully-fledged lectures can, therefore, be misleading as the tutor may dwell on matters irrelevant to the ZOU course.

Distance education, by its nature, keeps the tutor and student separate. By introducing the six-hour tutorial, ZOU hopes to help you come in touch with the physical being who marks your assignments, assesses them, guides you on preparing for writing examinations and assignments, and who runs your general academic affairs. This helps you to settle down in your course, having been advised on how to go about your learning. Personal human contact is, therefore, upheld by ZOU.
Note that in all three sessions, you identify the areas where your tutor should give help. You also take a very
important part in finding answers to the problems posed.
You are the most important part of the solutions to your
learning challenges.
Overview
Unit Five: Program Evaluation Review Technique and the Critical Path
Method
Unit Nine: Data Gathering and Analysis for Monitoring and Evaluation
This module will walk you through the basic concepts, principles and major issues that arise in the monitoring and evaluation of development projects. In Unit 1, we start by introducing the concept of monitoring and evaluation. The unit examines the various aspects of the monitoring and evaluation process as it is used in the management of development projects. We also define the concept and characterise it, indicating similarities and differences of the twin processes of monitoring and evaluation. We end the unit by highlighting other key concepts in monitoring and evaluation practice. In Unit 2, we introduce Stakeholder Participation in Monitoring and Evaluation. We demonstrate that monitoring and evaluation is not a one-man job; it is a process which requires several actors and stakeholders. In this unit, we fully dwell on the importance of stakeholders in the monitoring and evaluation process of development projects. We define who the stakeholders are in the monitoring and evaluation process and discuss the methods of identifying stakeholders, as well as the importance of creating a stakeholder analysis.
Development Monitoring and Evaluation MSDS510
Introduction to Project
Monitoring and Evaluation
1.1 Introduction
Today, development practice has come under pressure for failing to deliver the expected outcomes in development projects. As such, there has been more emphasis on the monitoring and evaluation of development projects. In this unit, we introduce the concept of monitoring and evaluation (M&E). We also examine the various aspects of the monitoring and evaluation process as it is used in the management of development projects. We then define the concept and characterise it, indicating similarities and differences of the twin processes of monitoring and evaluation. We end the unit by highlighting other key concepts in monitoring and evaluation practice.
Monitoring and evaluation emerged from the general acceptance of the scien-
tific methods as a means of dealing with social problems. However, despite
historical roots that extend to the seventeenth century, the widespread use of
systematic data based evaluations (SDBE) is a relatively modern develop-
ment. It is argued that the application of social research methods to education
coincides with the growth and refinement of social research methods (Rossi,
et al., 1993).
Of key importance are the emergence and increased standing of the social sciences in universities and the increased support for social research. Social science research in universities became the centre of early work in programme M&E and has continued to occupy a key role in the field of evaluation. The computer also aided evaluation work: it helped in the efficient collection and storage of data as well as its analysis (Gray, 1988). Since then, more and more methods and computer-aided applications have made monitoring and evaluation a robust project management activity for development projects.
(https://www.oecd.org/dac/peer-reviews/
World%20bank%202004%2010_Steps_to_a_Results_Based_ME_System.pdf
accessed 22/05/2017)
Efficiency
Efficiency measures whether the input into the work is appropriate in terms of
the output. This could be input in terms of money, time, staff, equipment and
so on. It focuses on the extent to which resources are used cost-effectively.
It considers whether the quality and quantity of results achieved justify the resources used. It asks
the question, “Are there more cost-effective methods of achieving the same
result?” (Shapiro, 1996).
Effectiveness
Effectiveness measures the extent to which the work achieves its set objectives. It asks the question, “Were these outputs used to bring about the desired outcomes?” (IUCN, 2004; Shapiro, 1996).
Activity 1.1
1. Define the following terms in the context of monitoring and evaluation:
(a) monitoring
(b) evaluation
2. Distinguish between monitoring and evaluation. Give examples for your
answer.
3. Discuss issues that monitoring and evaluation have in common giving
examples.
Impact
Impact tells you whether or not what you did made a difference to the problem situation you were trying to address. An example is the changes in conditions of people and ecosystems that result from an intervention (that is, a policy,
programme or project). It asks the question, “What are the positive, nega-
tive, direct, indirect, intended or unintended effects?” In other words, was
your strategy useful? Did ensuring that teachers were better qualified improve
the pass rate in the final year of school? Before you decide to get bigger, or to
replicate the project elsewhere, you need to be sure that what you are doing
makes sense in terms of the impact you want to achieve (IUCN, 2004; Shapiro,
1996).
Following the above explanations, it should be clear that monitoring and evalu-
ation are best done when there has been proper planning against which to
assess progress and achievements.
Relevance
Relevance is the extent to which the policy, programme, project or the or-
ganisational unit contributes to the strategic direction of the members and
partners. Is it appropriate in the context of its environment?
Activity 1.2
and the ability to access information, carry out investigations and re-
port findings free of political influence or organisational pressure.
Indicator: Signal that reveals progress (or lack thereof) towards ob-
jectives; means of measuring what actually happens against what has
been planned in terms of quantity, quality and timeliness. An indicator is
a quantitative or qualitative variable that provides a simple and reliable
basis for assessing achievement, change or performance.
Activity 1.3
1. Discuss briefly the following concepts in the context of monitoring and
evaluation:
a) self-evaluation
b) participatory evaluation
c) outcome evaluation
d) interactive evaluation
2. Account for the difference between outcome evaluation and outcome
monitoring. Give examples of real development projects in your com-
munity.
1.9 Summary
In this unit, we looked at the concept of monitoring and evaluation as it is used
in development. We traced the history of monitoring and evaluation. We managed to define the key terms, which are the nuts and bolts of monitoring and evaluation, thereby setting a strong background for the unit. We characterised monitoring as well as evaluation. The similarities and differences of monitoring and evaluation were discussed in this unit. In this unit, we also discussed the principles for evaluation, which are important concepts in monitoring and evaluation practice.
References
Gray, R.J. (1988). Microcomputers in Evaluation. Evaluation Practice, 9(3), 47-53.
International Union for Conservation of Nature (IUCN). (2004). Managing
Evaluations Guide for IUCN Programme and Project Managers, Gland,
Cambridge: IUCN.
Nagel, S. (1986). Microcomputers and Evaluation Research. Evaluation
Review Volume: 10 Issue: 5, page(s): 563-577, DOI: https://doi.org/
10.1177/0193841X8601000501
Olive Publications. (2002). Planning for Monitoring and Evaluation, South
Africa: Olive Publications.
Rossi, P. H., Freeman, H. E., & Wright, S. R. (1979). Evaluating social
projects in developing countries. Paris: Development Centre of the
Organization for Economic Co-operation and Development
Rossi, P.H. (1993). Evaluation: A Systematic Approach. London: Sage Publications.
Shapiro, J. (1996). Evaluation: Judgment Day or Management Tool? Olive
Publications.
United Nations Development Programme. (2002). Handbook on Monitor-
ing and Evaluation for Results, New York: UNDP Evaluation Of-
fice.
United Nations Development Programme. (1997). Human Development
Report, UNDP, New York: Oxford University Press.
United Nations Development Programme. (2009). Planning, Monitoring and
Evaluation for Development Results, New York: United Nations De-
velopment Programme, UNDP.
The United Nations Children’s Fund (UNICEF). (n.d.). Guide for Monitoring and Evaluation. www.unicef.org/resrval/indexh/html. Accessed on 12 May 2016.
Stakeholder Participation in
Monitoring and Evaluation
2.1 Introduction
2.4 Participation
In this part, we are going to define what is meant by participation. We are
going to entertain several definitions from different scholars with the aim of
unpacking participation. Participation is seen as a means for the widening and redistribution of opportunities to take part in societal decision-making, in contributing to development and in benefiting from its fruits (Oakley and Marsden, 1984). Chambers (1983) describes participation as an empowering process that enables local people to do their own analysis and to make their own decisions. According to Nelson and Wright (1995), it means that “we” participate in “their” project, not “they” in “ours”. Marsden (1991)
believes participation is about understanding and working with the realities of
others in order to enhance effective development through the pooling of ex-
pertise, while Hira and Parfitt (2004) define participation as an active process
by which beneficiary groups influence the direction and execution of a devel-
opment project with a view to enhancing their well-being in terms of income,
personal growth, self-reliance or other values they cherish.
Activity 2.1
1. Define the term “stakeholder” in the context of monitoring and evaluation.
2. Identify any five stakeholders in monitoring and evaluation.
3. Discuss the rationale for stakeholder engagement in monitoring and
evaluation.
These tables and matrices can be helpful in communicating about the stakeholders
and their role in the programme or activities that are being planned (UNDP,
2009).
Stakeholder               Importance   Influence
Watchdog NGO                   5            1
Citizens organisations         5            2
Women’s organisations          5            2
Group 1
Stakeholders are very important to the success of the activity but may have
little influence on the process. For example, the success of an electoral project
or referendum in Zimbabwe will often depend on how well women and mi-
norities are able to participate in the elections, but these groups may not have
much influence on the design and implementation of the project or the con-
duct of the elections. In this case, they are highly important but not very influ-
ential. They may require special emphasis to ensure that their interests are
protected and that their voices are heard.
Group 2
Stakeholders are central to the planning process as they are both important
and influential. These should be key stakeholders for partnership building.
For example, political parties involved in a national elections programme in
Zimbabwe may be both very important (as mobilisers of citizens) and influen-
tial (without their support the programme may not be possible).
Group 3
Stakeholders are not the central stakeholders for an initiative and have little
influence on its success or failure. They are unlikely to play a major role in the
overall process. One example could be an international observer group that
has little influence on elections. Similarly, they are not the intended beneficiar-
ies of, and will not be impacted by those elections.
Group 4
Stakeholders are not very important to the activity but may exercise signifi-
cant influence. For example, an informal political leader may not be an impor-
tant stakeholder for an elections initiative aimed at increasing voter participa-
tion, but she or he could have major influence on the process due to informal
relations with power brokers and the ability to mobilise people or influence
public opinion. These stakeholders can sometimes create constraints to pro-
gramme implementation or may be able to stop all activities. Even if they are
not involved in the planning process, there may need to be a strategy for
communicating with these stakeholders and gaining their support.
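The four groups described above amount to a two-by-two classification of stakeholders by importance and influence. A minimal sketch of that classification, assuming a 1-5 score on each axis and a hypothetical cut-off of 3 (neither the scale nor the cut-off is prescribed by the UNDP guidance):

```python
def classify_stakeholder(importance, influence, cutoff=3):
    """Assign a stakeholder, scored 1-5 for importance and influence,
    to one of the four groups described above. The cut-off of 3 is an
    illustrative assumption."""
    important = importance >= cutoff
    influential = influence >= cutoff
    if important and influential:
        return "Group 2"  # key stakeholders for partnership building
    if important:
        return "Group 1"  # important, but with little influence on the process
    if influential:
        return "Group 4"  # influential, but not central to the activity
    return "Group 3"      # unlikely to play a major role

# Scores from the sample table: a watchdog NGO rated 5 for importance, 1 for influence.
print(classify_stakeholder(5, 1))  # prints "Group 1"
```

Under this cut-off, the women’s organisations scored 5 and 2 in the sample table would likewise fall into Group 1.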
Based on the stakeholder analysis, and on what is practical given cost and
location of various stakeholders, the identified stakeholders should be brought
together in a planning workshop or meeting.
Activity 2.2
1. List three important steps in stakeholder identification.
2. How useful is each of these steps in the identification of stakeholders in monitoring and evaluation?
3. Discuss various challenges you may face in creating a stakeholder matrix.
4. Using real life examples, identify a project and attempt to create a stakeholder matrix for its monitoring and evaluation.
2.12.2 Orientation
Orientation ensures that all stakeholders start at the same point. They should
all understand:
Why it is important for them to work together
Why they have been selected for the planning monitoring and evalua-
tion exercise
The rules of the planning exercise and how stakeholders should dia-
logue, especially in crisis settings, where these fora could be the first
time different parties have heard each other’s perspectives and goals
for development. It is important to bring stakeholders together not only
for the resources they have but also because each has a unique per-
spective on the causes of the problems and what may be required to
solve them (International Finance Corporation (IFC), 2007).
A government minister, a community member, a social worker, an economist,
a businessperson, a woman and a man may have different views on what they
are confronting and what changes they would like to see occur. It is common
in the early stages of planning for persons to use anecdotes to get stakeholders
to see how easy it is to look at the same issue and yet see it differently. This is
why stakeholder orientation and induction is very important to forge a com-
mon ground.
Stakeholder: Government coordinating authority and other central ministries (for example, Planning or Finance)
Role:
• Usually have overall responsibility for monitoring and evaluating development activities
• Are in a good position to coordinate the design and support for monitoring and evaluation activities, particularly the annual review, and to take action based on the findings of evaluations.
Inclusion in decision-making
Public trust
Participation may also promote social and vicarious learning. This is where
the youths and other stakeholders and the wider society in which they live,
learn from each other through the development of new relationships, building
on existing relationships and transforming adversarial relationships as indi-
viduals learn about each other’s trustworthiness and learn to appreciate the
legitimacy of each other’s views.
It is argued that participation enables interventions and technologies to
be better adapted to local socio-cultural and environmental conditions.
Youths gain technological knowledge through participation. This may
enhance their rate of adoption and diffusion among target groups, and
their capacity to meet local needs and priorities. Participation may make
research more robust by providing higher quality information inputs.
Activity 2.3
1. Discuss the benefits of participation of stakeholders in the monitoring
and evaluation process. Illustrate using examples.
2. Identify factors that may impede participation in your area with re-
spect to monitoring and evaluation.
3. As a development practitioner, how would you improve stakeholder
participation in monitoring and evaluation of development projects?
Give real examples.
4. Examine various ways in which target beneficiaries may participate in
monitoring and evaluation of development projects.
5. Discuss the ways in which participants can adopt a more rigorous and
analytical approach to the monitoring and evaluation process. Use ex-
amples to illustrate your points.
2.15 Summary
In this unit, we defined who the stakeholders are in the monitoring and evaluation process. We discussed the methods of identifying stakeholders, as well as the importance of creating a stakeholder analysis. The processes of stakeholder
engagement were visited. In this unit, we also analysed the importance of
participation in the context of monitoring and evaluation of development
projects. The next unit focuses on the various tools, which are used by evalu-
ation practitioners or development managers in the practice of monitoring and
evaluation of development projects.
References
Byrne, A., Gray-Felder, D., Hunt, J. and Parks, W. (2005). Measuring
Change: A Guide to Participatory Monitoring and Evaluation of
Communication for Social Change. New Jersey: Communication
for Social Change Consortium.
Chambers, R. (1983). Rural Development: Putting the First Last, Longman,
London
Danish International Development Agency (DANIDA) (2005). Monitoring
and Indicators for Communication for Development. Copenhagen:
Technical Advisory Service, Royal Danish Ministry of Foreign Affairs.
International Finance Corporation (IFC). (2007). Stakeholder Engagement:
A Good Handbook for Companies Doing Business in Emerging Mar-
kets. Washington, IFC.
Hira, A. and Parfitt, T. (2004). Development Projects for a New Millennium. London: Praeger.
Nelson, N. and Wright, S. (1995). Participation and Power. London: Intermediate Technology Publications.
Oakley, P. and Marsden, D. (1984). Approaches to Participation in Rural Development. Geneva: ILO.
United Nations Development Programme. (2002). Handbook on Monitor-
ing and Evaluation for Results, New York: UNDP Evaluation Of-
fice, UNDP.
United Nations Development Programme. (2009). A Manager’s Guide to
Gender Equality and Human Rights Responsive Evaluation. New York: UN Women Evaluation Unit. http://unifem.org/evaluation_manual/ accessed 23/12/2016
United Nations Development Programme. (2009). Guidance Note on Car-
rying Out an
Evaluation Assessment. A Manager’s Guide to Gender Equality and Human
Rights Responsive Evaluation. New York: UN Women Evaluation Unit.
United Nations Development Programme. (2009). Handbook on Planning,
Monitoring and Evaluating for Development Results. New York:
UNDP. http://undp.org/eo/handbook.
United Nations Children’s Funds. (2003). Planning Participatory Evaluation.
In M and E Training Modules. New York, UNICEF.
Wanja, M. and Iravo, M. (2017). Factors Affecting Project Scheduling of
Non-Governmental Organizations’ Projects in Mogadishu, Somalia.
(A Case Study of International Rescue Committee), American Based
Research Journal Vol-6-Issue-4 April-2017
3.1 Introduction
[Figure: the project cycle, showing repeated rounds of Plan -> Implement -> Monitor -> Reflect/learn/decide/adjust, ending in Evaluate/learn/decide.]
It is important to recognise that monitoring and evaluation are not magic wands
that can be waved to make problems disappear, or to cure them, or to mi-
raculously make changes without a lot of hard work being put in by the project
or organisation. In themselves, they are not a solution, but they are valuable
tools. Monitoring and evaluation can:
help you identify problems and their causes,
suggest possible solutions to problems,
raise questions about assumptions and strategy,
push you to reflect on where you are going and how you are getting
there,
provide you with information and insight,
encourage you to act on the information and insight and
increase the likelihood that you will make a positive development dif-
ference
It should be noted at the outset that the structures of log frames are not cast in
concrete. Various organisations in the development community use different
formats and terms for the types of objectives in a log frame. A clear under-
standing of the log frame’s hierarchy of objectives is important for M and E
planning. This is because it will inform the key questions that will guide the
evaluation of project processes and impacts.
Goal:
To what extent has the project contributed towards its longer term
goals?
Why or why not?
What unanticipated positive or negative consequences did the project
have?
Why did they arise?
Outcomes:
What changes have occurred as a result of the outputs and to what
extent are these likely to contribute towards the project purpose and
desired impact? Has the project achieved the changes for which it can
realistically be held accountable?
Outputs:
What direct tangible products or services has the project delivered as
a result of activities?
Activities:
Have planned activities been completed on time and within the budget?
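For M and E planning, the hierarchy of objectives and its guiding questions can be held in a simple data structure. A minimal sketch, with the questions condensed from the list above (the structure itself is illustrative, not a standard log frame format):

```python
# Log frame hierarchy, highest level first, each paired with its key M&E question.
LOG_FRAME_LEVELS = [
    ("Goal", "To what extent has the project contributed towards its longer term goals?"),
    ("Outcomes", "What changes have occurred as a result of the outputs?"),
    ("Outputs", "What direct tangible products or services has the project delivered?"),
    ("Activities", "Have planned activities been completed on time and within the budget?"),
]

def key_question(level):
    """Return the guiding evaluation question for a given log frame level."""
    for name, question in LOG_FRAME_LEVELS:
        if name.lower() == level.lower():
            return question
    raise KeyError(f"unknown log frame level: {level}")
```

Keeping the levels ordered from Goal down to Activities mirrors the vertical logic of the log frame discussed later in the unit.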
Activity 3.1
1 (a) Define the term log frame.
(b) Distinguish inputs from outputs.
(c) Distinguish outcomes from outputs.
2. List five components of a log frame and justify each component in
monitoring and evaluation process.
3. Discuss the usefulness of log frame components. Pay particular atten-
tion to the relationships of the components.
3.5 Indicators
Effective indicators are a critical log frame element. The indicators you meas-
ure in monitoring produce evidence that there has been a change or not after
an intervention. They are a very important part of the log frame in monitoring
and evaluation practice. Indicators must always be realistic and feasible and
meet user informational needs.
It may be cost-effective to adopt indicators for which data have been or will
be collected by a government ministry, international agency, and so on.
Indicator overload
Table 3.2 above provides a sample format for an indicator matrix, with illus-
trative rows for outcome and output indicators. The following are the major
components or column headings of the indicator matrix (Chaplowe, 2008;
UNDP, 2009).
Indicators
Indicator definitions
Each indicator needs a detailed definition of its key terms, including an expla-
nation of specific aspects that will be measured (such as who, what, and
where the indicator applies). The definition should explain precisely how the
indicator will be calculated, such as the numerator and denominator of a per-
cent measure. This column should also note if the indicator is to be
disaggregated by sex, age, ethnicity, or some other variable.
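As a sketch of such a definition in practice, a percent indicator with an explicit numerator and denominator can be computed and disaggregated by sex. The variable name and survey records below are hypothetical:

```python
# Hypothetical survey records: one dict per respondent.
records = [
    {"sex": "F", "completed_training": True},
    {"sex": "F", "completed_training": False},
    {"sex": "M", "completed_training": True},
    {"sex": "M", "completed_training": True},
]

def percent_indicator(rows):
    """Numerator: respondents who completed training.
    Denominator: all respondents in the group."""
    if not rows:
        return None
    return 100 * sum(r["completed_training"] for r in rows) / len(rows)

overall = percent_indicator(records)  # 75.0
by_sex = {s: percent_indicator([r for r in records if r["sex"] == s])
          for s in ("F", "M")}        # {'F': 50.0, 'M': 100.0}
```

Writing the indicator definition down to this level of precision makes the calculation reproducible by whoever collects the next round of data.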
Methods/sources
Frequency/schedules
This column states how often the data for each indicator will be collected,
such as monthly, quarterly, or annually. It is often useful to list the data collec-
tion timing or schedule, such as start-up and end dates for collection or dead-
lines for tool development. When planning for data collection timing, it is im-
portant to consider factors such as seasonal variations, school schedules,
holidays, and religious observances (for example, Ramadan).
Person(s) responsible
This column lists the people responsible and accountable for the data collec-
tion and analysis, that is, community volunteers, field staff, project managers,
local partner/s, and external consultants. In addition to specific people’s names,
use the position title to ensure clarity in case of personnel changes. This col-
umn is useful in assessing and planning for capacity building for the M and E
system.
Data analysis
This column describes the process for compiling and analysing the data to
gauge whether the indicator has been met or not. For example, survey data
usually require statistical analysis, while qualitative data may be reviewed by
research staff or community members.
Information use
This column identifies the intended audience and use of the information. For
example, the findings could be used for monitoring project implementation,
evaluating the interventions, planning future project work, or reporting to policy
makers or donors. This column should also state ways that the findings will be
formatted (for example, tables, graphs, maps, histograms, and narrative re-
ports) and disseminated (for example, Internet Web sites, briefings, commu-
nity meetings and mass media).
The horizontal logic consists of the following sub-features: the project description, indicators, sources of verification (SOV) or means of verification (MOV), and assumptions. The vertical logic consists of the project/programme goal, objectives/outcomes, deliverables/outputs and activities (Source: Osborne, 2004).
These features are crafted in what is technically termed log frame matrix which
is indicated in Figure 3.2.
If the horizontal logic is followed and assumptions hold true and do not change
fundamentally in the negative direction, then the project is likely to succeed.
This is indicated in the Figure 3.4 below.
Activity 3.2
1. (a) Identify components of the horizontal logic on a log frame.
(b) Identify components of the vertical logic on a log frame.
(c) Discuss the relationship that exists between the horizontal and vertical logic of a log frame.
3.8 Indicators
Indicators are instruments that give you information. They are quantitative or qualitative factors or variables that provide a simple and reliable means to measure achievement, to reflect changes connected to an intervention, or to help assess the performance of a development actor. An indicator is a variable that measures change in a phenomenon or process (OECD, 2002).
Direct indicators
These indicators point directly at the subject of interest. This is often the case with operational and more technical subjects. What the manager wants to know can be (and generally is) measured directly.
Activity 3.3
1. (a) Discuss various sources of verification on a log frame in a development project of your choice.
(b) What are the advantages and disadvantages of the sources of verifica-
tion you have chosen?
(c) How would you resolve the shortfalls of the sources of verification you
have identified?
(d) How can you increase the reliability and validity of the information
from the sources of verification?
2. Discuss ways in which a log frame can be improved as a tool for monitor-
ing and evaluation.
and have had a reasonable amount of time to take effect. Start with “light”
monitoring, then do more, or more targeted monitoring depending on your
findings (UNDP, 2009).
Advantages
It helps you ask the right questions.
It guides systematic and logical analysis of the key interrelated ele-
ments that constitute a well-designed project.
It defines linkages between the project and external factors.
It facilitates common understanding and better communication among decision-makers, managers and other parties involved in the project.
It prepares us for replication of successful results.
It ensures continuity of approach when the original project staff is re-
placed.
It provides a shared methodology and terminology among governments,
donor agencies, contractors and clients.
Widespread use of the logical framework format makes it easier to
undertake both sector studies and comparative studies in general.
Limitations
Activity 3.4
1. Analyse advantages and disadvantages of using a log frame as a tool for monitoring and evaluation of development projects.
2. Discuss ways in which a log frame can be used.
3. How applicable are log frames as a monitoring and evaluation tool for
managing development projects?
4. (a) Define indicators.
(b) In groups, attempt to make a list of 10 (i) poverty indicators and (ii) health indicators of your choice, and
(c) Discuss the utility of the identified poverty and health indicators in monitoring and evaluation.
5. Examine the factors that you need to consider when selecting indicators.
6. (a) As a group, think of an example of a development project of your
choice. Attempt a typology of indicators, giving examples.
(b) Using an example of a project, draw a log frame and indicate activities and associated indicators. Explain the nexus between the indicators you have identified and the activities.
3.12 Summary
In this unit, we looked closely at a very important tool in monitoring and
evaluation practice, which is called the log frame. We looked at the log frame
in the context of the project cycle. In this unit, we defined the log frame and
characterised it. The anatomy of the log frame was visited and the factors to
consider in the construction of a logical framework were outlined. We also
presented the advantages and disadvantages of using the log frame in this unit.
The next unit will focus on setting up a monitoring system.
References
Chaplowe, S. (2008). Monitoring and Evaluation Planning. Guidelines
and Tools. Baltimore: American Red Cross, CRS.
Civicus. (2006). Community Monitoring and Evaluation.
http://www.civicus.org/documents/toolkits/PGX_H_Community%20M&E.pdf (accessed 20/04/2017)
Eldis. (2001). A Participatory Monitoring and Evaluation Guide: Indicators.
http://nt1.ids.ac.uk/eldis/hot/pm4.
Organisation for Economic Co-operation and Development (OECD). (2002). DAC Glossary of Key Terms in Evaluation. https://www.oecd.org/dac/evaluation/2754804.pdf, accessed 03/06/2017
Osborne, C. (2004). A Presentation Workshop on Monitoring and Evalua-
tion. Challenges of the 21st Century Planning. www.nasv.orgb. 6 June
2011.
Shapiro, J. (1996). Evaluation: Judgment Day or Management Tool? Olive
Publications.
United Nations Development Programme. (2009). Selecting Key Results Indicators. http://stone.undp.org/undpweb/evalnet/docstore1/index_final/methodology/documents/indicators
World Bank, (1996). The Log Frame Handbook. A Logical Framework
Approach to Project Cycle Management. Washington: World Bank.
4.1 Introduction
The activity of the project might occur over several months and have several
tasks. Furthermore, not all tasks start and finish at the same time. That is,
some of the tasks cannot start until other tasks are finished. You can write this
down in words but sometimes it can be hard to grasp the meaning of a docu-
ment that sets out a project like this. One technique for dealing with the man-
agement of a project is a Gantt chart. This provides you with a pictorial method
of managing your project and improves the monitoring and evaluation of the
tasks being undertaken as well as the whole development of project. As de-
velopment practitioners, a Gantt chart is an important tool for you to use in
the monitoring and evaluation of projects.
a longer bar may take only 20 man-hours. The longer bar may indicate to the uninformed that it is a bigger task, when in fact it is not.
They need to be constantly updated. As you get into a project, things will change. If you are going to use a Gantt chart, you must be able to change the chart easily and frequently; if you do not, it will be ignored. Again, you will probably need software to do this unless you are keeping your project management at a high level.
Difficult to see on one sheet of paper. The software products that produce these charts usually need to be viewed on a computer screen, in segments, to see the whole project. It then becomes difficult to show the details of the plan to an audience. You can print out the chart, but this will normally entail quite a large "cut and paste" exercise; if you do this frequently, it can be very time-consuming.
Activity 4.1
1. Define a Gantt chart.
2. Discuss the advantages and disadvantages of using a Gantt chart in
project management.
Use examples.
Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)
Regrettably, Microsoft Excel does not have a built-in Gantt chart template as
an option. However, you can quickly create a Gantt chart in Excel by using
the bar graph functionality and a bit of formatting.
Please follow the steps below closely and you will make a simple Gantt chart in under 3 minutes. We will use Excel 2010 for this Gantt chart example, but you can simulate Gantt diagrams in Excel 2007 and Excel 2013 in exactly the same way.
You start by entering your project's data in an Excel spreadsheet. List each task in a separate row and structure your project plan by including the Start date, End date and Duration, that is, the number of days required to complete each task.
Tip. Only the Start date and Duration columns are really necessary for
creating an Excel Gantt chart. However, if you enter the End Dates too, you
can use a simple formula to calculate Duration, as you can see in the screenshot
below.
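The Duration calculation the tip refers to (End date minus Start date) can also be illustrated outside Excel. The sketch below is not part of the source tutorial; the task names and dates are invented for the example:

```python
from datetime import date

# Hypothetical project plan: (task, start date, end date)
tasks = [
    ("Draft proposal", date(2017, 5, 1), date(2017, 5, 5)),
    ("Collect data", date(2017, 5, 6), date(2017, 5, 16)),
    ("Write report", date(2017, 5, 17), date(2017, 5, 24)),
]

# Duration in days, the equivalent of the Excel formula =EndDate-StartDate
for name, start, end in tasks:
    print(f"{name}: {(end - start).days} days")
```

As in the Excel tip, only the start dates and the durations derived this way are strictly needed to draw the chart.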
You begin making your Gantt chart in Excel by setting up a usual Stacked Bar
chart.
1. Select the range of your Start Dates with the column header; it's B1:B11 in our case. Be sure to select only the cells with data, and not the entire column.
2. Switch to the Insert tab > Charts group and click Bar.
3. Under the 2-D Bar section, click Stacked Bar.
As a result, you will have the following Stacked bar added to your worksheet:
Now you need to add one more series to your Excel Gantt chart-to-be.
1. Right-click anywhere within the chart area and choose Select Data
from the context menu.
The Select Data Source window will open. As you can see in the screenshot
below, Start Date is already added under Legend Entries (Series). And
you need to add Duration there as well.
2. Click the Add button to select more data (Duration) you want to plot
in the Gantt chart.
3. In the Series name field, type "Duration" or any other name of your choosing. Alternatively, you can place the mouse cursor in this field and click the column header in your spreadsheet; the clicked header will be added as the Series name for the Gantt chart.
Click the range selection icon next to the Series Values field.
4. A small Edit Series window will open. Select your project Duration
data by clicking on the first Duration cell (D2 in our case) and dragging
the mouse down to the last duration (D11). Make sure you have not
mistakenly included the header or any empty cell.
5. Click the Collapse Dialog icon to exit this small window. This will bring
you back to the previous Edit Series window with Series name and
Series values filled in, where you click OK.
6. Now you are back at the Select Data Source window with both Start
Date and Duration added under Legend Entries (Series). Simply
click OK for the Duration data to be added to your Excel chart.
Now you need to replace the days on the left side of the chart with the list of
tasks.
1. Right-click anywhere within the chart plot area (the area with blue and
orange bars) and click Select Data to bring up the Select Data Source
window again.
2. Make sure the Start Date is selected on the left pane and click the
Edit button on the right pane, under Horizontal (Category) Axis La-
bels.
3. A small Axis Label window opens and you select your tasks in the
same fashion as you selected Durations in the previous step - click the
range selection icon , then click on the first task in your table and
drag the mouse down to the last task. Remember, the column header
should not be included. When done, exit the window by clicking on the
range selection icon again.
At this point your Gantt chart should have task descriptions on the left side
and look something like this:
Step 5. Transform the bar graph into the Excel Gantt chart
What you have now is still a stacked bar chart. You have to add the proper
formatting to make it look more like a Gantt chart. Our goal is to remove the
blue bars so that only the orange parts representing the project’s tasks will be
visible. In technical terms, we won’t really delete the blue bars, but rather
make them transparent and therefore invisible.
1. Click on any blue bar in your Gantt chart to select them all, right-click
and choose Format Data Series from the context menu.
Figure 4.16 Transforming the bar graph into the Excel Gantt chart (1)
2. The Format Data Series window will show up and you do the follow-
ing:
Switch to the Fill tab and select No Fill.
Figure 4.17 Transforming the bar graph into the Excel Gantt chart (2)
Note. You do not need to close the dialog because you will use it again in the
next step.
3. As you have probably noticed, the tasks on your Excel Gantt chart are
listed in reverse order. Now we are going to fix this.
Click on the list of tasks in the left-hand part of your Gantt chart to select
them. This will display the Format Axis dialog for you. Select the Catego-
ries in reverse order option under Axis Options and then click the Close
button to save all the changes.
Figure 4.18 Transforming the bar graph into the Excel Gantt chart (3)
Though your Excel Gantt chart is beginning to take shape, you can add a few
more finishing touches to make it really stylish.
1. Remove the empty space on the left side of the Gantt chart
As you remember, originally the starting date blue bars resided at the start of
your Excel Gantt diagram. Now you can remove that blank white space to
bring your tasks a little closer to the left vertical axis.
Right-click on the first Start Date in your data table, select Format
Cells > General. Write down the number that you see - this is a nu-
meric representation of the date, in my case 41730. As you probably
know, Excel stores dates as numbers based on the number of days
since 1-Jan-1900. Click Cancel because you don’t actually want to
make any changes here.
Click on any date above the task bars in your Gantt chart. One click
will select all the dates; you right click them and choose Format Axis
from the context menu.
Under Axis Options, change Minimum to Fixed and type the number
you recorded in the previous step.
2. Adjust the number of dates on your Gantt chart
In the same Format Axis window that you used in the previous step, change
Major unit and Minor unit to Fixed too, and then add the numbers you
want for the date intervals. Typically, the shorter your project’s timeframe is
the smaller numbers you use. For example, if you want to show every other
date, enter 2 in the Major unit. You can see my settings in the screenshot
below.
Note. In Excel 2013 and Excel 2016, there are no Auto and Fixed radio buttons, so you simply type the number in the box.
Tip. You can play with different settings until you get the result that works best for you. Do not be afraid of doing something wrong, because you can always revert to the default settings by switching back to Auto in Excel 2010 and 2007, or by clicking Reset in Excel 2013.
3. Compact the task bars
Compacting the task bars will make your Gantt chart look even better.
Click any of the orange bars to get them all selected, right-click and select Format Data Series.
In the Format Data Series dialog, set Separated to 100% and Gap Width to 0% (or close to 0%).
Here is the result of our efforts - a simple but nice-looking Excel Gantt chart:
Remember, though your Excel chart simulates a Gantt diagram very closely, it
still keeps the main features of a standard Excel chart:
Your Excel Gantt chart will resize when you add or remove tasks.
You can change a Start date or Duration; the chart will reflect the changes
and adjust automatically.
You can save your Excel Gantt chart as an image.
Tips:
You can design your Excel Gantt chart in different ways by changing the fill colour, border colour, shadow and even applying the 3-D format. All these options are available in the Format Data Series window (right-click the bars in the chart area and select Format Data Series from the context menu).
Gantt charts can also be created using online templates, which are programmed to create the chart automatically. One example is the Interactive Online Gantt Chart Creator from smartsheet.com. Online Gantt templates are fast and easy to use; smartsheet.com offers a 30-day free trial, so you can sign in with your Google account and start making your first Gantt diagram online straight away. The process is very straightforward: you enter your project details in the left-hand table and, as you type, a Gantt chart is built in the right-hand part of the screen. Figure 4.26 illustrates the process.
Figure 4.27 shows an example of a finished Gantt chart which can be used to monitor a project so that it meets its project goals and expected outcomes. This figure is the outcome of the several steps that we follow in producing a Gantt chart for monitoring and evaluating development projects.
Source: https://www.mindtools.com/pages/article/newPPM_03.htm (accessed 22/05/2016)
To create one for your project, follow these steps, using our example as a
guide. The following steps presented in this section were obtained from
www.mindtools.com/pages/article/newPPM_03.htm
Gantt charts don’t give useful information unless they include all of the activi-
ties needed for a project or project phase to be completed. So, to start, list all
of these activities. Use a work breakdown structure if you need to establish
what the tasks are. Then, for each task, note its earliest start date and its
estimated duration.
For example:
Source: https://www.mindtools.com/pages/article/newPPM_03.htm (accessed 22/05/2016)
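An activity list of this kind can be represented as a simple data structure. This is a minimal sketch, not taken from the source article; the task names echo the scenario discussed later in this section, and the week numbers are invented:

```python
# Each task records its earliest start (week number) and estimated duration in weeks
tasks = [
    {"name": "High-level analysis", "earliest_start": 1, "duration": 1},
    {"name": "Select hardware", "earliest_start": 2, "duration": 1},
    {"name": "Core module coding", "earliest_start": 3, "duration": 4},
]

# The earliest week in which each task can finish
for t in tasks:
    t["earliest_finish"] = t["earliest_start"] + t["duration"] - 1
```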
The chart shows the relationship between the tasks in a project. Some tasks
will need to be completed before you can start the next one, and others can’t
end until preceding ones have ended. These dependent activities are called
“sequential” or “linear” tasks. Other tasks will be “parallel” – i.e. they can
be done at the same time as other tasks. You don’t have to do these in se-
quence, but you may sometimes need other tasks to be finished first. So, for
example, the design of your brochure could begin before the text has been
edited (although you won’t be able to finalize the design until the text is per-
fect.) Identify which of your project’s tasks are parallel, and which are se-
quential. Where tasks are dependent on others, note down the relationship
between them. This will give you a deeper understanding of how to organize
your project, and it will help when you start scheduling activities on the chart.
Note:
In Gantt charts, there are three main relationships between sequential tasks:
Finish-to-Start (FS) – FS tasks cannot start before a previous (and related) task is finished. However, they can start later.
Start-to-Start (SS) – SS tasks cannot start until a preceding task starts. However, they can start later.
Finish-to-Finish (FF) – FF tasks cannot end before a preceding task ends. However, they can end later.
A fourth type, Start-to-Finish (SF), is very rare.
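The dependency types above can be expressed in a few lines of code. This is a minimal sketch, not from the source text: the tasks and day numbers are invented, and only the Finish-to-Start rule is checked.

```python
# Hypothetical schedule: task name -> (start_day, finish_day)
schedule = {
    "A": (0, 2),  # edit the text
    "B": (2, 5),  # design the brochure; FS: cannot start before A finishes
    "C": (2, 4),  # SS with B: starts when B starts, runs in parallel
}

def violates_fs(pred: str, succ: str) -> bool:
    """Finish-to-Start: the successor may not start before the predecessor finishes."""
    return schedule[succ][0] < schedule[pred][1]

# B starts exactly when A finishes, so the FS constraint holds
assert not violates_fs("A", "B")
```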
Tip 1:
Tasks can be sequential and parallel at the same time: for example, two tasks (B and D) may be dependent on another one (A) and may be completed at the same time. Task B is sequential in that it follows on from A, and it is parallel with respect to D.
Tip 2:
The example activity list here sets out each task together with its length, its type (P = Parallel, S = Sequential) and the task it is dependent on.
You can draw your charts by hand or use specialist software, such as Gantto,
Matchware, or Microsoft Project. Some of these tools are cloud-based,
meaning that you and your team can access the document simultaneously,
from any location. (This helps a lot when you are discussing, optimising, and
reporting on a project.)
As your project moves along, it will evolve. For example, in our scenario, if
quality assurance of core modules revealed a problem, then you may need to
delay training, and halt development of the management information system
until the issue is resolved.
Update your chart to reflect changes as soon as they occur. This will help you
to keep your plans, your team, and your sponsors up to date.
Activity 4.2
1. With reference to Figure 4.1, how many tasks are done at the same time in a week?
2. Using Figure 4.27:
(a) Identify any parallel activities.
(b) List any sequential activities.
(c) List activities that must be finished by the end of the 7th week.
(d) Which activities start and finish between weeks 4 and 7?
3. Choose a project of your choice and create a Gantt chart for monitoring the project activities.
4.6 Summary
Gantt charts are useful for planning and scheduling projects. They help you assess how long a project should take, determine the resources needed, and plan the order in which you will complete tasks. Gantt charts help you to monitor and evaluate whether the activities of the project are within the planned parameters; they are therefore a very important monitoring and evaluation tool in development practice. They are also helpful for managing the dependencies between tasks. Gantt charts are useful for monitoring a project's progress once it is underway: you can immediately see what should have been achieved by a certain date and, if the project is behind schedule, you can take action to bring it back on course.
References
https://www.mindtools.com/pages/article/newPPM_03.htm (accessed 22/05/2016)
https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/ (accessed 2/05/2017)
Books for further reading
Clark, W. (2012). The Gantt Chart: A Working Tool of Management. Nabu Press.
Lock, D. (2007). Project Management (9th ed.). Hampshire: Gower Publishing.
Montgomery, J. (2003). How to Create Gantt Charts Anyone Can Follow. The Idea Interpreter.
Thomsett, M. C. (2010). The Little Black Book of Project Management (3rd ed.). Toronto: AMACOM.
5.1 Introduction
Although it originated in the late 1950s, the critical path method is still incredibly important to project managers today. It provides a visual representation of project activities, clearly presents the time required to complete tasks, and tracks activities so you do not fall behind (Kelly, 1989). The critical path method also reduces uncertainty, because you must calculate the shortest and longest completion time of each activity. This forces you to consider unexpected factors that may impact your tasks and reduces the likelihood that an unexpected surprise will occur during your project. According to Kelly (1989), the critical path method has three main benefits for project managers:
Identifies the Most Important Tasks: First, it clearly identifies the
tasks that you will have to closely manage. If any of the tasks on the
critical path take more time than their estimated durations, start later
than planned, or finish later than planned, then your whole project will
be affected.
Helps Reduce Timelines: Secondly, if, after the initial analysis pre-
dicts a completion time, there is interest in completing the project in a
shorter period, it is clear which task or tasks are candidates for dura-
tion reduction.
When the results from a critical path method are displayed as a bar
chart, like a Gantt chart, it is easy to see where the tasks fall in the
overall timeframe. You can visualise the critical path activities (they are
usually highlighted), as well as task durations and their sequences. This
provides a new level of insight into your project’s timeline, giving you
more understanding about which task durations you can modify, and
which must stay the same.
Compares Planned with Actual: the critical path method can also be
used to compare planned progress with actual progress. As the project
proceeds, the baseline schedule developed from the initial critical path
analysis can be used to track schedule progress.
Throughout a project, a manager can identify tasks that have already
been completed, the predicted remaining durations for in-progress tasks,
and any planned changes to future task sequences and durations. The
result will be an updated schedule, which, when displayed against the
original baseline, will provide a visual means of comparing planned with
actual progress.
In addition, critical path analysis can:
Help you identify the activities that must be completed on time in order to complete the whole project on time.
Show you which tasks can be delayed, and for how long, without impacting the overall project schedule.
Calculate the minimum amount of time it will take to complete the project.
Tell you the earliest and latest dates each activity can start on in order to maintain the schedule.
(Fondal, 1987)
Critical path analysis also helps answer questions such as:
What activities are critical in the sense that they must be completed exactly as scheduled in order to meet the target for overall project completion?
How long can non-critical activities be delayed before a delay in the overall completion date is incurred?
How might resources be concentrated most effectively on activities in order to speed up project completion?
What controls can be exercised on the flows of expenditures for the various activities throughout the duration of the project in order that the overall budget can be adhered to?
(Stretton, 2007)
Activity 5.1
Figure 5.1 (a) An example of a PERT chart drawn to show the devel-
opment of a system.
Source: http://www.pmexamsmartnotes.com/how-to-calculate-critical-path (accessed 18/05/2017)
With reference to the above figure, the following steps must be noted in PERT
chart technique:
Determine the steps or tasks included in the project:
This is the first step that needs to be done before the PERT chart is con-
structed. At this stage, the person in charge or the project supervisor needs to
review the entire project and enlist the tasks or different stages that need to
be done in order to deliver the results.
Determine the first task to be completed:
This is the second stage, where one needs to define what will be the first task to complete in order to begin the whole project. This may or may not include the duration to complete the task, which depends on each project.
Define the next task that can be started simultaneously with the first
task:
This includes the explanation of the second task that can be started with the
first task at the same time.
Define the task that follows the first task:
This includes the second task, which will be started as soon as the first task is completed, or it can also be started along with the other stages if that is possible in the project.
Mention the duration or deadline of each task:
There are two options for you in order to implement the PERT chart for your project: you can explain the duration or deadline for each stage on each step with the individual task, or you can mention the required completion duration of each task separately on the chart.
Identify the critical tasks:
This includes the tasks or steps in the project that need to be done on a short-term basis; as these tasks are the most important ones in the project, it is important that they be done on time in order to eliminate any delays in the project.
(http://www.pmexamsmartnotes.com/how-to-calculate-critical-path, accessed 18/05/2017)
Source: (https://www.com/file/6803126/wwwvceitcom-ganttpert-pert-tute-
pert-tutehtm/accessed 20/05/2017)
You need to be able to examine and interpret charts like this PERT.
Task A is the first task and takes 2 days.
When it is done, tasks B and G can begin.
If we follow the task G line, it takes 2 days to reach task H that takes 5 days.
Task H leads to the final task, I.
Total time for following this path is 2 + 2 + 5 + 3 = 12 days.
The path would be described as A, G, H, I.
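The path arithmetic above can be reproduced programmatically. This small sketch uses only the durations quoted in the text:

```python
# Task durations (in days) from the PERT example
duration = {"A": 2, "G": 2, "H": 5, "I": 3}

# The path described in the text
path = ["A", "G", "H", "I"]
total = sum(duration[task] for task in path)
print(total)  # 2 + 2 + 5 + 3 = 12 days
```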
What is the shortest time in which the whole project can be completed? 14 days (the longest possible path). It sounds odd that the shortest project time is the longest path, but in the chart above the shortest project time would be 14 days. That is the critical path of the project: the sequence of tasks from beginning to end that takes the longest time. No task on the critical path can take more time without affecting the end date of the project. In other words, none of the tasks on the critical path has any slack. Slack is the amount of extra time a task can take before it affects a following task. In the breakfast example above, the breakfast could take another eight minutes before it affected the leaving time, so it has eight minutes' slack. Tasks on the critical path are called critical tasks. No critical task can have any slack (by definition).
(<https://www.com/file/6803126/wwwvceitcom-ganttpert-pert-tute-pert-
tutehtm/accessed 20/05/2017>)
The critical path is the sequence of activities with the longest duration.
A delay in any of these activities will result in a delay for the whole project.
Below are some critical path examples to help you understand the key elements.
Source: http://www.project-management-skills.com/critical-path-method.html
accessed 21/05/2017
To use Critical Path Analysis to find your critical path, you use the following method:
1. The duration of each activity is listed above each node in the diagram.
2. For each path, add the duration of each node to determine its total duration.
3. The critical path is the one with the longest duration.
In this example, there are three paths through the project, and the critical path duration is 14, as indicated. Follow the method carefully and make sure you understand it.
Figuring out the float using the Critical Path Method is easy. You start with the activities on the critical path; each of those activities has a float of zero, and if any of them slips, the project will be delayed.
Then you take the next longest path. Subtract its duration from the duration of the critical path; that is the float for each of the activities on that path.
You continue doing the same for each subsequent longest path until each activity's float has been determined. If an activity is on two paths, its float will be based on the longer path that it belongs to.
Source: (http://www.project-management-skills.com/critical-path-
method.html. accessed 21/05/2017)
Using the critical path diagram from the previous section, Activities 2, 3, and
4 are on the critical path so they have a float of zero.
The next longest path is Activities 1, 3, and 4. Since Activities 3 and 4 are also on the critical path, their float will remain zero. For any remaining activities, in this case Activity 1, the float will be the duration of the critical path minus the duration of this path: 14 - 12 = 2. Therefore, Activity 1 has a float of 2.
The next longest path is Activities 2 and 5. Activity 2 is on the critical path so
it will have a float of zero. Activity 5 has a float of 14 - 9, which is 5. There-
fore, as long as Activity 5 does not slip more than 5 days, it will not cause a
delay to the project.
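The float procedure in this worked example can be sketched in code. Note one assumption: the durations of Activities 1 and 5 (3 and 4) are not stated directly in the text but are inferred from the path totals it quotes (12 and 9).

```python
# Activity durations; those for 1 and 5 are reconstructed from the path totals
duration = {1: 3, 2: 5, 3: 7, 4: 2, 5: 4}

# The three paths through the project, as in the worked example
paths = [(2, 3, 4), (1, 3, 4), (2, 5)]

path_len = {p: sum(duration[a] for a in p) for p in paths}
critical = max(path_len.values())  # 14: the critical path duration

# Work from the longest path down; an activity keeps the float
# assigned by the longest path it belongs to.
float_of = {}
for p in sorted(paths, key=lambda p: -path_len[p]):
    for a in p:
        float_of.setdefault(a, critical - path_len[p])

print(float_of)  # {2: 0, 3: 0, 4: 0, 1: 2, 5: 5}
```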
Starting with the critical path, the Early Start (ES) of the first activity is one.
The Early Finish (EF) of an activity is its ES plus its duration minus one.
Using our earlier example, Activity 2 is the first activity on the critical path: ES
= 1, EF = 1 + 5 -1 = 5.
You then move to the next activity in the path, in this case Activity 3. Its ES is the previous activity's EF + 1: Activity 3 ES = 5 + 1 = 6. Its EF is calculated as before: EF = 6 + 7 - 1 = 12. If an activity has more than one predecessor, you use the predecessor with the latest EF to calculate its ES.
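The forward pass just described can be sketched as follows, using the durations and ordering of the critical-path activities from the example:

```python
# Durations and immediate predecessors of the critical-path activities
duration = {2: 5, 3: 7, 4: 2}
predecessors = {2: [], 3: [2], 4: [3]}

ES, EF = {}, {}
for a in (2, 3, 4):  # activities in precedence order
    # ES is 1 for a first activity; otherwise the latest predecessor EF + 1
    ES[a] = 1 if not predecessors[a] else max(EF[p] for p in predecessors[a]) + 1
    EF[a] = ES[a] + duration[a] - 1  # EF = ES + duration - 1

print(ES, EF)  # {2: 1, 3: 6, 4: 13} {2: 5, 3: 12, 4: 14}
```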
2. You will start once again with the critical path, but this time you begin
from the last activity in the path.
3. The Late Finish (LF) for the last activity in every path is the same as
the last activity’s EF in the critical path. The Late Start (LS) is the LF
- duration + 1.
4. In our example, Activity 4 is the last activity on the critical path. Its LF
is the same as its EF, which is 14. To calculate the LS, subtract its
duration from its LF and add one. LS = 14 - 2 + 1 = 13.
5. You then move on to the next activity in the path. Its LF is determined
by subtracting one from the previous activity’s LS.
6. In our example, the next Activity in the critical path is Activity 3. Its LF
is equal to Activity 4 LS - 1. Activity 3 LF = 13 -1 = 12.
7. Its LS is calculated the same as before by subtracting its duration from
the LF and adding one. Activity 3 LS = 12 - 7 + 1 = 6.
You will continue in this manner moving along each path filling in LF and LS
for activities that do not have it already filled in.
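The backward pass can be sketched in the same way, again using the figures from the worked example. Activities whose LS equals their ES have zero float, confirming that they are critical:

```python
# Durations and immediate successors of the critical-path activities
duration = {2: 5, 3: 7, 4: 2}
successors = {2: [3], 3: [4], 4: []}
project_end = 14  # the EF of the last activity on the critical path

LF, LS = {}, {}
for a in (4, 3, 2):  # walk the path backwards
    # LF is the project end for a final activity; otherwise the earliest successor LS - 1
    LF[a] = project_end if not successors[a] else min(LS[s] for s in successors[a]) - 1
    LS[a] = LF[a] - duration[a] + 1  # LS = LF - duration + 1

print(LS, LF)  # {4: 13, 3: 6, 2: 1} {4: 14, 3: 12, 2: 5}
```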
The Critical Path Method is an important tool for managing your project’s
schedule. As you can see, it is not very difficult to determine its key elements.
However, once your project has more than a few activities, critical path
scheduling can become tedious.
This is an important first step, since errors or omissions at this stage can lead to a disastrously inaccurate schedule. Table 5.1 shows the first activity list (the columns labelled "Time" and "Resources" are indications of things to come). This is the most important part of any PERT or CPM project: important activities must not be missed. This must be a group effort, not done in isolation.
Conceptually, Table 5.1 shows that each activity is placed on a separate line, and its immediate predecessors are recorded on the same line. The immediate predecessors of an activity are those activities that must be completed prior to the start of the activity in question.
Comment
For example, note that in Table 5.1 we see that the organisation cannot start
activity C, determine personnel requirements, until activity B, create the or-
ganizational and financial plan, is completed. Similarly, activity G, hire new
employees, cannot begin until activity F, select the Global personnel that will
move from Texas to Iowa, is completed. This activity, F, in turn, cannot start
until activity C, determine personnel requirements, is completed.
Source: https://www2.kimep.kz/bcb/omis/our_courses/is4201/Chap14.pdf (accessed 22/05/2017)
We shall shortly see how PERT and CPM are used to produce these an-
swers.
For example, in Figure 5.5, activity C starts at node ? because its immediate
predecessor, activity B, ended there. We see, however, that complications
arise as we attempt to add activity D to the network diagram.
Node ? now represents the event that both activities A and C have been
completed. Note that activity E, which has only D as an immediate predeces-
sor, can be added with no difficulty. However, as we attempt to add activity
F, a new problem arises. Since F has C as an immediate predecessor, it
would emanate from node ? (of Figure 5.6). We see, however, that this would
imply that F also has A as an immediate predecessor, which is incorrect.
Figure 5.8 shows the network diagram for the first activity list, as presented in Table 5.1. We note that activities G and H both start at node ? and terminate at node ?. This does not present a problem in portraying the appropriate precedence relationships, since only activity J starts at node ?.
Figure 5.8 Network Diagram for the First Activity List for the Move
to Des Moines
The following examples will help you understand the determination of the critical path. The questions and solutions are provided; you are expected to study them and see how each solution comes about. With more practice, you will understand the method better. More information and exercises can be found on your MyVista account for this course. We will now apply both CPM and PERT to the following example:
Activity 5.1
1. A publisher has a contract with an author to publish a textbook. The
(simplified) activities associated with the production of the textbook
are given. The author is required to submit to the publisher a hard copy
and a computer file of the manuscript.
(i) Based on the Table 5.2 below, construct a network diagram.
(ii) Determine the critical path.
CPM Method
A list of activities is provided. Using the CPM method, we can find the
completion time of the project. We will also determine which activities are
critical activities, that is, activities that, if delayed, would delay the completion
of the project, and the slack time, which is how long a non-critical activity can
be delayed (run late) without affecting the completion time of the project.
Activity 5.2
?
1. Critical path
Note that the Critical Path is the path with the longest total time.
Using the table below, construct a network diagram and determine the
critical path of the project.
2. (a) Create a network diagram for the following project
(b) Determine the critical path
Note that
1. Determine all possible paths to the finish.
2. Add the combination value of the paths.
3. The longest path is the critical path.
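The three steps above can be sketched in Python. The activity list below is hypothetical (the names, predecessors and durations are illustrative, not taken from the exercise tables):

```python
# Find the critical path by enumerating every path from start to
# finish and picking the one with the longest total duration.
# Hypothetical activity list: {name: (duration_in_days, predecessors)}.
activities = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
    "E": (6, ["C"]),
    "F": (1, ["D", "E"]),
}

def all_paths(act, node, path):
    """Walk backwards from `node` through its predecessors to the start."""
    preds = act[node][1]
    if not preds:
        yield [node] + path
    for p in preds:
        yield from all_paths(act, p, [node] + path)

finish = "F"  # the terminal activity
lengths = {tuple(p): sum(activities[a][0] for a in p)
           for p in all_paths(activities, finish, [])}
critical = max(lengths, key=lengths.get)
for p, d in lengths.items():
    print("-".join(p), "takes", d, "days")
print("Critical path:", "-".join(critical), "=", lengths[critical], "days")
```

Delaying any activity on the critical path (here A-B-D-F, 13 days) delays the whole project; each other path has slack equal to the difference between its duration and the critical one.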
Activity 5.3
?
1. Using the information in table below, assuming that the project team
will work a standard working week (5 working days in 1 week) and
that all tasks will start as soon as possible:
Activity 5.4
Using the figure below, determine the critical path.
?
5.15 Summary
In this unit, we looked at the CPM as a project management technique that
can be used to monitor and evaluate project activities. We looked at the
important components of the CPM, and defined and characterised it. The
advantages of using the CPM in project monitoring and evaluation were
indicated, and the four key elements of the CPM were covered. Lastly,
examples and solutions were proffered. Students were also reminded that
much of what was covered in this unit is done using computer software, and
are encouraged to try the CPM computations using computer programmes.
This unit has created a sound basis for students to engage with computer
applications of the CPM. The thrust of the examination of the CPM is in the
context of monitoring and evaluation of projects.
References
Fondahl, J. W. (1987). The History of Modern Project Management – Prec-
edence Diagramming Methods: Origins and Early Development. Project
Management Journal, Volume XVIII, No. 2, June.
http://www.pmexamsmartnotes.com/how-to-calculate-critical-path, accessed
18/05/2017
http://www.project-management-skills.com/critical-path-method.html, accessed
21/05/2017
https://www2.kimep.kz/bcb/omis/our_courses/is4201/Chap14.pdf, accessed
22/05/2017
Kelly, J. (1989). "The Origins of CPM: A Personal History". PMNETwork,
Vol III, No 2, February, pp 7-22.
Project Management PERT and CPM. https://www.coursehero.com/file/
10473079/2-PERT/, accessed 20/05/2017
Stretton, A. (2007). A Short History of Modern Project Management. Pub-
lished in PM World Today, October 2007 (Vol. IX, Issue X).
Setting Up a Monitoring System: Monitoring and Evaluation as a Process
6.1 Introduction
Activity 6.1
?
1. Explain the “monitoring system” concept.
2. In your opinion, why is it important to clearly identify the people in-
volved in the monitoring and evaluation system?
3. Why is it important to define clearly monitoring objectives?
4. How useful is the collection of relevant information in setting up a moni-
toring system?
One of the key steps in the formulation of the monitoring system is the deter-
mination of how you are going to select the relevant information. This is a very
important part because in most cases many practitioners performing the moni-
toring and evaluation fail to attend to this aspect and end up having a lot of
irrelevant data. This retards the monitoring process in that too much data
does not only make processing difficult but is also costly, as it delays the
publication of results. You also waste a lot of time and money on field work
expenses. It is, therefore, very important that before you set out to perform
the monitoring exercise, thorough work is done in determining exactly which
information is relevant and in line with the objectives set in step one. To
achieve this, the following questions may be asked by the practitioner formu-
lating the system.
What information should be collected?
What process indicators should we choose which will give us the infor-
mation we want?
Is the information we will get in line with our set objectives of the evalu-
ation process?
Do we have the right impact indicators?
One needs to be selective to make sure that only useful information is col-
lected, and that it is of reasonable quality.
Analysis of the data will depend on the nature of the data collected. Where it
is quantified, statistical analysis is possible. The sort of analysis that might be
undertaken would be for associations and relationships, for trends and for
significant changes; even simpler calculations such as the average (or mean),
maximum/minimum and range can be invaluable despite their simplicity. Too often
there is costly and detailed collection of data without the same attention given
to analysis so that the lessons contained in it are just not noted. The methods
of analysis should be identified as part of the process of preparing to gather
data (Welsh, et al., 2005). In interpreting information and assessing results,
the following questions need to be asked frequently.
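The simpler calculations mentioned above (average, maximum/minimum and range) can be sketched in Python; the monitoring figures below are invented for illustration:

```python
# Hypothetical monitoring data: monthly attendance at a project clinic.
attendance = [120, 135, 98, 150, 142, 110]

mean = sum(attendance) / len(attendance)   # average level of attendance
maximum = max(attendance)                  # best month
minimum = min(attendance)                  # worst month
value_range = maximum - minimum            # spread between best and worst

print(f"mean={mean:.1f} max={maximum} min={minimum} range={value_range}")
```

Even these simple summaries can reveal a trend or an outlier month worth investigating before any formal statistical analysis.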
Just as important as analysis is the presentation of data. There are different
forms of presentation for a range of different users. The secret is to keep it
simple and
focused on a single message. Avoid showing too much in any one presenta-
tion. Tables are a simple means of presentation; more visually pleasing and
often clearer are pie charts, columns and graphs. Diagrams and flow charts
are other effective ways of depicting data. Well selected photographs; even
multimedia presentations can be used to great effect to show the impacts of
projects (Welsh, Schans and Dethrasaving, 2005).
The goal of monitoring and evaluation is to find information that will improve
the project so that the project can meet its expected outcomes. When the
project expected outcomes are met, then the lives of the intended beneficiar-
ies are improved. Thus, the most important thing is not only to get the infor-
mation, but also to use the information. There is need to pre-determine how
the information can be used for informing and improving the work. It is also
important to provide opportunities for discussing the findings with all involved
so that there are no surprises and everyone is in the same boat. Ensuring that
monitoring information is incorporated into existing planning procedures is
very important.
things are likely to be available in a project for use in other activities as well as
in M and E, for example, GPS instruments. At the time of the designing of the
M and E system resource needs can only be discussed in general terms until
indicators are finalised and the methods of measurement agreed upon.
Human resources
Financial resources
M and E should have a separate budget. Some projects have a specific budget
for M and E activities; in others a specified per cent of the total budget might
be set aside, whilst in others nothing is provided and all activities must be
funded from the "regular" budget. According to UNDP (2009), items that
should be included in an M and E budget are listed below:
field data collection – fees and per diems for enumerators,
incentive payments for informal data collectors/informants,
travel expenses for project staff engaged in M and E activities,
fees, per diems and expenses for midterm review, materials and
fees, per diems and expenses for ex-post evaluation.
Step eight: Participants in monitoring
Activity 6.2
?
Using an example of a project:
1. List the steps involved in setting up a monitoring and evaluation sys-
tem.
2. Discuss what is involved in maintaining the monitoring system.
3. Discuss the aspects that should be considered in the monitoring process.
According to Welsh et al. (2005), there has been a tradition that regular
reports are given to the funding agency, and to the implementation partners.
The broader community, the stakeholders with most at stake, rarely are the
recipients of detailed reports on progress. They may get snippets, perhaps
results of trials and demonstrations, and of course, they do have their own
ideas of success. Nevertheless, they have probably never had the expected
cover page,
summary and recommendations,
introduction,
objectives,
methodology,
findings or results,
discussion and conclusion,
reference of literature used,
annexes and
acknowledgements.
As the evaluation has been carried out as a management tool, the amount of
introduction and background necessary may be very little. However, in all
reports it is essential that the sections on objectives, methodology and results
are presented in enough detail for the reader to be able to assess what was
done, why, and whether it was done properly.
Activities 6.3
?
1. Give an outline of the structure and components of a monitoring and
evaluation report.
2. Justify the significance of communicating information regarding moni-
toring and evaluation to stakeholders.
6.7.3 Introduction
The introduction is a relatively easy part, which may be written after the first
draft of findings. The evaluation report requires an introduction, which sets
out basic information about the project and the evaluation itself. This provides
the context for the report. According to Wang (1997), the introduction should
provide the following information although other information may be added, if
felt necessary:
brief statement of the main features of the project being evaluated,
reasons for the evaluation,
composition of the evaluation team.
The introduction should be kept short and to the point, about two pages
long.
6.7.5 Findings
The systematic presentation of your findings in relation to the evaluation ob-
jectives is the crucial part of your report. Tables or graphs that summarise the
findings may complement the description of the findings. How many of these you
include depends on the readers you are aiming at. If your principal target
group consists of managers rather than researchers, you may decide to in-
clude only the most essential tables in the text and merely refer to others
presented in annexes (Narayan, 2001). It is important to set out the findings
under clear and precise headings.
The text and annexes should include sufficient details for professionals to
enable them to follow how you substantiate your findings and conclusions.
The report should be so self-explanatory that it should be possible to repeat
the study, if desired. If any references have been used, then they are quoted
here. A reference shall always include the author’s name, the year, the title of
the publication and the publishers (Wang, 1997).
6.7.7 Recommendations
The interpretation of findings requires decisions to be made on the relative
success or failure of different aspects of the project. It may follow from these
decisions, that some changes should be made or some successes should be
repeated in other projects. To ensure that conclusions are clearly identified
and obvious to anyone reading an evaluation report, they are usually summarised.
The following guidelines may aid the formulation of conclusions and recom-
mendations (Wang, 1997; Narayan, 2001):
Do not jump to conclusions: Conclusions and recommendations have
to flow from the evaluation findings. Take care not to jump to conclu-
sions or make overly sweeping statements.
Do not suppress conflicting findings: Conflicting findings should not
be suppressed or spirited away. Instead, they should be carefully con-
sidered. If no explanation can be found, this should be stated in the
conclusions.
Include unexpected findings: During the evaluation you may collect
information that you were not looking for, but which proves to be very
important. Even if these unexpected findings do not serve a particular
evaluation objective, they should nevertheless, be incorporated in the
conclusions and recommendations.
Make practical and feasible recommendations: During the dis-
cussion and formulation of recommendations, attention needs to be
given to making practical and feasible recommendations which are
possible to implement. Recommendations that cannot or will not be
implemented are not worth making.
Be as clear as possible: To increase their impact, conclusions and
recommendations need to be stated clearly. Therefore, each conclu-
sion or recommendation should cover one message only, and the level
or organisation to which it is directed should be precisely indicated.
Finalisation of recommendations: Conclusions and recommenda-
tions should be arranged in order of importance, from the general to
the more specific. Before finalisation, check whether they meet the evalu-
ation objective and thus the purpose of the evaluation.
The recommendations of an evaluation, whether formulated by an individual
or in committee, should be stated in a concise and useful manner and fed-
back or delivered to the appropriate persons. In the case of formal, external
evaluation studies, the final product is a comprehensive evaluation report.
Unfortunately, the evaluation report often does not find its way back to the
project implementers and communities and the other information providers,
and when it does, it is probably too late and in a form and style that is of
limited use to field personnel (Narayan, 2001). In order to overcome these
potential difficulties, it is recommended that external evaluators should present
their conclusions and recommendations to project implementers at the con-
clusion of the evaluation. This requirement could be included in the terms of
reference of the external evaluator.
Annexes
Acknowledgements
You may wish to thank those who supported you technically or financially in
the drafting and implementation of your study. In addition, your employer
who allowed you to invest time in the study and the respondents may be
acknowledged. Acknowledgements are usually placed straight after the title
page, before the table of contents, or at the end of the report before the
references.
strive to be precise and specific all the time and avoid exaggeration,
avoid using adverbs and adjectives,
remember that your goal is to inform and not to impress,
we encourage you to aim for clarity, logic and a clear sequence.
Activity 6.4
?
1. Discuss ways in which the monitoring and evaluation reporting and
communication processes can be improved. Use examples to support
your facts.
6.9 Summary
We have established that setting up a monitoring system is as essential as the
monitoring and evaluation process itself. In this unit, we laid the foundation
for the entire monitoring and evaluation system for managing development
projects, and discussed the step-by-step process of setting up a monitoring
system. We defined monitoring system objectives, and covered the selection
of relevant information and the presentation and use of results. Lastly,
reporting and communication of monitoring and evaluation results and report
structures were presented in detail.
References
International Red Cross. (1987). Evaluating Water Supply and Sanitation
Projects. Geneva.
Narayan, D. (2001). Participatory Evaluation. North Wind, Nodav Pub-
lishers.
North American Aerospace Defense Command (NORAD). (1996). The
Logical Framework Approach. Oslo: NORAD.
Oakley, P. (2003). Projects with People: The Practice of Participation in
Rural Development. New Delhi, McGraw-Hill.
Save The Children. (1993). Assessment Monitoring Review and Evalua-
tion, Toolkits, STCF.
United Nations Children Fund. (1996). A UNICEF Guide for Monitoring
and Evaluation. Making a Difference? Geneva, UNICEF.
United Nations Development Programme. (2009). Handbook on Planning,
Monitoring and Evaluating for Development Results. New York:
UNDP. http://undp.org/eo/handbook
Wang, C. (1997). Logical Framework Workshop. Maseru.
Welsh, N., Schans, M. & Dethrasaving, C. (2005). Monitoring and Evalua-
tion Systems
Manual (M&E Principles). Publication of the Mekong Wetlands Biodiversity
Conservation and Sustainable Use Programme.
7.1 Introduction
Indicators are part of performance measurement but they are not the only
part. To assess performance, it is necessary to know about more than actual
achievements. Also required is information about how targets were achieved,
factors that influenced this positively or negatively, whether the achievements
were exceptionally good or bad and who was mainly responsible for the
achievement or failure.
This is the systematic analysis of performance against goals, taking into ac-
count the reasons behind performance and influencing factors (Organisation
for Economic Co-operation and Development (OECD), 1998; United Na-
tions Development Programme, 2002).
Rating
Indicators
Activity 7.1
?
1. Define the following terms in the context of development monitoring
and evaluation:
a) performance
b) management
c) performance management indicators
2. List three dimensions of performance assessment.
3. Discuss how each of these dimensions influences measurement.
4. Analyse the key elements of a common rating system. How is it im-
portant in performance measurement?
According to the World Bank (1996), the three ratings are meant to reflect
the degree of achievement of outputs by comparing the baseline (the non-
existence of the output) with the target (the production of the output). The
partially achieved category is meant to capture those en-route or particularly
ambitious outputs that may take considerable inputs and time to come to
fruition.
Selected monitoring reports rate outcome and output progress for projects,
on a voluntary basis. For the Annual Project Report (APR), the rating on
progress towards outputs is made annually by the project manager and the
programme manager. It forms the basis of a dialogue in which consensus
ratings for the outputs are produced. If there is disagreement between the
project and programme staff on how outputs are rated, both ratings are in-
cluded in the report, with proper attribution. The Programme Manager (World
Bank, 1996) makes the rating on progress towards outcomes in the APR.
For field visits, programme managers periodically rate progress towards both
outputs and outcomes, discussing their ratings with the project staff. The rat-
ings are used to assess project performance and for trend analysis and les-
sons learned. They may also be used corporately for validation and lessons
learned.
An outcome indicator has two components: a baseline and a target. The base-
line is the situation before a programme or activity begins. It is the starting
point for results monitoring. The target is what the situation is expected to be
at the end of a programme or activity. (Output indicators rarely require a
baseline since outputs are being newly produced and the baseline is that they
do not exist).
Hypothetical example 1
If wider access to education is the intended result, for example, school
enrolment may provide a good indicator. Monitoring of results may
start with a baseline of 55 percent enrolment in 1997 and a target of 80
percent enrolment in 2002. Between the baseline and the target there
may be several milestones that correspond to expected performance
at periodic intervals.
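The example above can be turned into a simple progress calculation: the share of the baseline-to-target gap closed so far. The interim enrolment figure of 65 percent below is an invented milestone observation, not from the text:

```python
baseline = 55.0   # percent enrolment in 1997 (starting point)
target = 80.0     # percent enrolment expected in 2002
observed = 65.0   # hypothetical milestone observation along the way

# Fraction of the baseline-to-target gap that has been closed.
progress = (observed - baseline) / (target - baseline)
print(f"{progress:.0%} of the way from baseline to target")
```

Here progress works out to 0.4, that is, 40 percent of the gap between baseline and target has been closed at this milestone.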
Baseline data provides information that can be used when designing and im-
plementing interventions. It also provides an important set of data against
which success (or at least change) can be compared, thereby making it pos-
sible to measure progress towards a result. The verification of results de-
pends upon having an idea of change over time. It requires a clear under-
standing of the development problem to be addressed, before beginning any
intervention. A thorough analysis of the key factors influencing a development
problem complements the development of baseline data and target setting
(UNDP, 2002).
Activity 7.3
?
1. Discuss the steps one can follow in the selection of indicators for
performance measurement. Use examples to buttress your answer.
2. "Results-oriented monitoring of development performance involves look-
ing at results at the level of outputs, outcomes and impact". Discuss the
importance of output, outcome and input indicators in performance
measurement. Use examples to buttress your answer.
Activity 7.4
?
1. Discuss how performance measurement can be improved to enhance
monitoring and evaluation of development projects in developing coun-
tries.
2. "The basis of successful monitoring and evaluation rests on efficient
performance measurement." Discuss.
3. "If tasks cannot be measured, then they cannot be evaluated." Discuss
in line with different monitoring and evaluation methodologies.
7.8 Summary
In this unit, we have demonstrated the importance of performance manage-
ment. We emphasised that monitoring and evaluation practitioners and project
managers need to know how the project is performing so that they can take
appropriate action to correct or reinforce activities. In this unit, we covered
methods used in performance measurement in monitoring and evaluation. We
introduced the use of indicators, the setting of targets and data collection
systems, and discussed quantitative and qualitative analysis issues. Perform-
ance measurement, the selection of indicators, key steps in selecting indica-
tors and using indicators in measuring performance were also covered.
References
Allen, J.R. (1996). Performance Measurement. Atlanta: AEA.
Kellogg Foundation. (1998). Evaluation Handbook. http://www.WKKF.org/.
Operations Evaluation Department (OED). (1996). Performance Monitoring
Indicators: A Handbook for Task Managers. OED.
Organisation for Economic Co-operation and Development (OECD)/Devel-
opment Assistance Committee. (1998). Review of the DAC Principles
for Evaluation of Development Assistance. http://www.oecd.org/dac/
Evaluation/pdf/eval.
Organisation for Economic Co-operation and Development (OECD)/Public
Management Service. (1999). Improving Evaluation Practices: Best
Practice Guidelines for Evaluation and Background Paper. http://
www.oecd.org/puma.
United Nations Development Programme. (2002). A Handbook on Moni-
toring and Evaluation for Results. USA: UNDP.
United Nations Population Fund (UNFPA). (2000). Monitoring and Evalua-
tion Methodologies: The Programme Manager's M&E Toolkit. UNFPA.
http://bbs.unfpa.org/ooe/me_methodologies.htm, accessed 1/4/2017.
United States Agency for International Development (USAID), Centre for
Development Information and Evaluation (CDIE). (2004). Performance
Monitoring and Evaluation Tips. http://www.dec.org/usaid_eval/004,
accessed 22/04/2017.
World Bank. (1996). http://www.worldbank.org/html/oed/evaluation/, ac-
cessed 22/04/2017.
Typology of Evaluation Approaches
8.1 Introduction
According to Rabie and Cloete (2009), the main evaluation approaches based
on scope are the following:
8.4.13 Meta-evaluation
Meta-evaluation evaluates the evaluation focus, content and process, as well
as the evaluators themselves (Scriven in Mathison, 2005). Interpretations by
evaluators and others should be scrutinised by colleagues and selected
stakeholders to identify shortcomings in design and poor interpretations.
Activity 8.1
?
1. List the main evaluation types based on scope.
2. What do you understand by evaluation based on scope?
3. With examples, discuss the main evaluation approaches based on scope.
4. How does input evaluation differ from process or ongoing evaluation?
5. Distinguish between input evaluation and impact evaluation. Illustrate
your answer using examples.
According to Naidoo (2007), this broad distinction of the two polar oppo-
sites of approaches in this category classifies adherents into two camps: the
quantitative or ‘scientific’ versus the qualitative or interpretative.
Clarification evaluation,
Activity 8.2
?
1. Explain the concept of theory based evaluation.
2. List any six deductive evaluation approaches. In what way do they dif-
fer from theory-based evaluation approaches?
series of hypotheses that state that certain activities will produce certain stated
results.
Our feeling is that the best evaluators use a combination of all these approaches,
and that an organisation can ask for a particular emphasis but should not
exclude findings that make use of a different approach.
When you decide to use an external evaluator, you should make sure that you
do the following before you offer a contract to the evaluators:
check his/her/their references,
meet with the evaluators before making a final decision,
communicate what you want clearly – good terms of reference (ToR)
are the foundation of a good contractual relationship,
negotiate a contract which makes provision for what will happen if time
frames and output expectations are not met,
ask for a work plan with outputs and timelines,
Activity 8.3
?
(Note: an external evaluator may misunderstand what you want from the
evaluation and not give you what you need.)
You may assist by giving worked examples of how you have attempted to get
around challenges that include the following:
data gaps,
insufficient data,
falsified information,
lack of committed participation and
political or organisational interference.
Activity 8.4
?
1. Using examples discuss how participatory evaluation can increase the
success chances of a project.
2. Discuss the main evaluation designs applicable to monitoring and evalu-
ation of community development projects.
3. Examine the advantages and disadvantages of internal and external
evaluations.
8.11 Summary
In this unit, we have presented the different classes of evaluation approaches.
Classification of evaluation approaches is important and necessary to enable
evaluators to understand the different approaches to evaluation and how they
relate to each other, overlap or differ from one another. In the summaries of
these approaches it is clear how some are mutually exclusive, others overlap
and many are related or complementary. The diversity in approaches needs
to be viewed as an asset, as it provides evaluators with many choices. In
order to get the most accurate perspective of whatever we are trying to evaluate
it is necessary to consider and apply different approaches. Thus, an outcome
evaluation study may take a participatory approach to clarify the multiple
aims and intended uses of the evaluation results, followed by a more theory-
driven approach in the summative evaluation to determine whether the prede-
termined goals were reached as well as identifying potential unintended con-
sequences. The nature of the evaluation will determine the appropriate quan-
titative or qualitative data gathering techniques, which will inform the design of
the study in addition to the stated goals of the evaluation. As the different
approaches emphasise different aspects of the evaluand, it can be argued that
a combination of approaches will provide 'richer' evaluation data.
References
Chen, H. (2005). Practical Program Evaluation. Assessing and Improv-
ing Planning, Implementation and Effectiveness. California: Sage
Publications.
Civicus. (2002). Monitoring and Evaluation. http://www.civicus.org/
documents/toolkits.df, accessed 20/04/2017.
Mathison, S. (editor). 2005. Encyclopedia of Evaluation. California: Sage
Publications.
Mouton, J. (2007). Approaches to Programme Evaluation Research. Jour-
nal of Public Administration. Vol 42, No 6.
Mouton, J. (2008). Class and slide notes from the “Advanced Evaluation
Course” presented by the Evaluation Research Agency in Rondebosch,
20 – 24 October 2008.
Naidoo, I. A. (2007). Unpublished research proposal, submitted to the Gradu-
ate School of Public and Development Management, University of
Witwatersrand.
Organisation for Economic Co-operation and Development (OECD). (2007).
OECD Framework for the Evaluation of SME and Entrepreneur-
ship Policies and Programmes. Paris, France: OECD.
Owen, J. M. (2006). Program Evaluation. Forms and Approaches, (3rd
Edition). New York: The Guilford Press.
Patton, M.Q. (2004.) “The Roots of Utilization-Focused Evaluation.” In Alkin,
C.M. (ed.) (2004). Evaluation Roots, Tracing Theorists’ Views and
Influences. California: Sage Publications.
Rabie, B. and Cloete, F. (2009). A New Typology of Monitoring and
Evaluation Approaches. Administration Publications, Vol 17(3): 76-
97.
Rossi, P. H., Lipsey, M.W. and Freeman, H. E. (2004). Evaluation A Sys-
tematic Approach, (Seventh Edition). London: Sage Publications.
Scriven, M. (2003). Michael Scriven on the Difference between Evaluation
and Social
Science Research. The Evaluation Exchange. Vol IX, No 4. p7.
Stufflebeam, D.L. and Shinkfield, A.J. (2007). Evaluation Theory, Models
and Applications. San Francisco: Jossey-Bass.
Stufflebeam, D.L. (2004.) The 21st Century CIPP Model: Origins, Develop-
ment and Use. In Alkin, C.M. (ed.) (2004.) Evaluation Roots, Trac-
ing Theorists’ Views and Influences. California: Sage Publications.
Weiss, G. (1998). Using Randomized Experiments. Handbook of Practical
Program Evaluation, (Second Edition). San Francisco: Jossey-Bass,
John Wiley and Sons Inc. Publications.
9.1 Introduction
Random sampling
The first statistical sampling method is simple random sampling. In this method,
each item in the population has the same probability of being selected as part
of the sample as any other item. For example, a tester could randomly select
5 inputs to a test case from the population of all possible valid inputs within a
range of 1-100 to use during test execution, to do this, the tester could use a
random number generator or simply put each number from 1-100 on a slip of
paper in a hat, mixing them up and drawing out 5 numbers. Random sampling
can be done with or without replacement. If it is done without replacement,
an item is not returned to the population after it is selected and thus can only
occur once in the sample.
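The tester's procedure can be sketched with Python's standard `random` module; `sample` draws without replacement and `choices` draws with replacement:

```python
import random

population = range(1, 101)  # all valid inputs, 1-100

# Without replacement: each input can appear at most once in the sample.
no_replacement = random.sample(population, 5)

# With replacement: the same input may be drawn more than once.
with_replacement = random.choices(population, k=5)

print("without replacement:", sorted(no_replacement))
print("with replacement:   ", sorted(with_replacement))
```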
Systematic sampling
Stratified sampling
The statistical sampling method called stratified sampling is used when repre-
sentatives from each subgroup within the population need to be represented
in the sample. The first step in stratified sampling is to divide the population
into subgroups (strata) based on mutually exclusive criteria. Random or sys-
tematic samples are then taken from each subgroup. The sampling fraction
for each subgroup may be taken in the same proportion as the subgroup has
in the population. For example, the person conducting a customer satisfac-
tion survey might select random customers from each customer type in propor-
tion to the number of customers of that type in the population. If
40 samples are to be selected, and 10% of the customers are managers, 60%
are users, 25% are operators and 5% are database administrators, then 4
managers, 24 users, 10 operators and 2 administrators would be randomly
selected. Stratified sampling can also sample an equal number of items from
each subgroup. For example, a development lead randomly selected three
modules out of each programming language used to examine against the cod-
ing standard.
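The proportional allocation in the customer-survey example can be reproduced as follows (the customer types and percentages are those given in the text):

```python
# Proportional stratified allocation: each stratum's share of the
# population determines its share of the 40-item sample.
total_sample = 40
strata = {"managers": 0.10, "users": 0.60, "operators": 0.25,
          "database administrators": 0.05}

allocation = {name: round(total_sample * share)
              for name, share in strata.items()}
print(allocation)
```

This yields 4 managers, 24 users, 10 operators and 2 database administrators; a random or systematic sample would then be drawn within each group.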
Cluster sampling
The fourth statistical sampling method is called cluster sampling, also called
block sampling. In cluster sampling, the population that is being sampled is
divided into groups called clusters. Instead of these subgroups being homo-
geneous based on selected criteria, as in stratified sampling, a cluster is as
heterogeneous as possible, matching the population. A random sample is
then taken from within one or more selected clusters. For example, if an
organisation has 30 small projects currently under development, an auditor
looking for compliance to the coding standard might use cluster sampling to
randomly select 4 of those projects as representatives for the audit and then
randomly sample code modules for auditing from just those 4 projects. Clus-
ter sampling can tell us a lot about that particular cluster, but unless the clus-
ters are selected randomly and a lot of clusters are sampled, generalisations
cannot always be made about the entire population. For example, random
sampling from all the source code modules written during the previous week,
or all the modules in a particular subsystem, or all modules written in a par-
ticular language may cause biases to enter the sample that would not allow
statistically valid generalisation.
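The audit example above can be sketched as follows (a minimal sketch; the project and module names, and the counts per cluster, are hypothetical):

```python
import random

def cluster_sample(clusters, n_clusters, n_per_cluster):
    """Randomly select whole clusters, then randomly sample items
    from within each selected cluster."""
    chosen = random.sample(list(clusters), n_clusters)  # pick cluster names
    return {name: random.sample(clusters[name],
                                min(n_per_cluster, len(clusters[name])))
            for name in chosen}

# Hypothetical data: 30 small projects, each a cluster of code modules.
projects = {f"project-{i:02d}": [f"p{i:02d}-module-{j}" for j in range(12)]
            for i in range(30)}
# Audit 4 randomly chosen projects, 3 randomly chosen modules from each.
audit = cluster_sample(projects, n_clusters=4, n_per_cluster=3)
```

Because only four clusters are examined, any generalisation beyond those projects carries exactly the risk the text describes.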
Haphazard sampling
There are also other types of sampling that, while non-statistical (information
about the entire population cannot be extrapolated from the sample), may still
provide useful information. In haphazard sampling, samples are selected based
on convenience but preferably should still be chosen as randomly as possible.
For example, the auditor may ask to see a list of all of the source code mod-
ules, and then closes his eyes and points at the list to select a module to audit.
The auditor could also grab one of the listing binders off the shelf, flip through
it and “randomly” stop on a module to audit. Haphazard sampling is typically
quicker and uses smaller sample sizes than other sampling techniques.
The main disadvantage of haphazard sampling is that since it is not
statistically based, generalisations about the total population should be made
with extreme caution.
Judgmental sampling
In judgmental sampling, samples are chosen on the basis of expert judgment
rather than statistical criteria. For example, an auditor might concentrate on
the modules that are a higher risk to the organisation. In another example, the acceptance tester
might select test cases that exercise the most complex features, mission critical
functions or most used sections of the software.
Activity 9.1
1. What do you understand by the term sampling?
2. List five types of sampling.
3. Explain briefly the following sampling types:
a) Stratified sampling.
b) Cluster sampling.
c) Judgmental sampling.
d) Random sampling.
Activity 9.2
1. List any five data sources for monitoring and evaluation of development
projects.
2. Choose any three data sources and explain how you can use sampling
in data gathering processes from these sources.
3. Justify the use of eclectic methods in gathering data for monitoring
and evaluation.
4. Go into the library and find five monitoring and evaluation reports. Look
for the section where each report indicates the name of the project,
the monitoring objectives and the methodology. Compile a table and
identify the type of data gathering methods used for each project. Suggest
reasons why the methods may be similar or different.
Table 9.3 Tools for Gathering Information/Data for Monitoring and
Evaluation

Key informant interviews
Description: These are interviews that are carried out with specialists in a topic or someone who may be able to shed a particular light on the process.
Strengths: As these key informants often have little to do with the project or organisation, they can be quite objective and offer useful insights. They can provide something of the “big picture” where people more involved may focus at the micro (small) level.
Weaknesses: Needs a skilled interviewer with a good understanding of the topic. Be careful not to turn something into an absolute truth (which cannot be challenged) simply because it has been said by a key informant.

Focus group
Description: In a focus group, a group of about six to 12 people are interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue.
Strengths: This can be a useful way of getting opinions from quite a large sample of people.
Weaknesses: It is quite difficult to do random sampling for focus groups, and this means findings may not be generalised. Sometimes people influence one another either to say something or to keep quiet about something. If possible, focus group interviews should be recorded and then transcribed; this requires special equipment and can be very time consuming.

Visual/audio stimuli
Description: These include pictures, movies, tapes, stories, role plays and photographs, used to illustrate problems, issues, past events or even future events.
Strengths: Very useful together with other tools, particularly with people who cannot read or write.
Weaknesses: You have to have appropriate stimuli, and the facilitator needs to be skilled in using such stimuli.

Rating scales
Description: This technique makes use of a continuum along which people are expected to place their own feelings, observations and so on. People are usually asked to say whether they agree strongly, agree, don’t know, disagree or disagree strongly with a statement. You can use pictures and symbols in this technique if people cannot read and write.
Strengths: Useful to measure attitudes, opinions and perceptions.
Weaknesses: You need to test the statements very carefully to make sure that there is no possibility of misunderstanding. A common problem is when two concepts are included in the statement and you cannot be sure whether an opinion is being given on one, the other or both.

Critical event analysis
Description: This method is a way of focusing interviews with individuals or groups on particular events or incidents. The purpose of doing this is to get a very full picture of what actually happened.
Strengths: Very useful when something problematic has occurred and people feel strongly about it. If all those involved are included, it should help the evaluation team to get a picture that is reasonably close to what actually happened and to diagnose what went wrong.
Weaknesses: The evaluation team can end up submerged in a vast amount of contradictory detail and lots of “he said/she said”. It can be difficult not to take sides and to remain objective.
So, your process may look something like the one shown in Figure 9.1.
It is also important to carefully plan for the data management of the M and E
system. This includes the set of procedures, people, skills, and equipment
necessary to systematically store and manage M and E data. If this step is not
carefully planned, data can be lost or incorrectly recorded, which compro-
mises not only data quality and reliability, but also subsequent data analysis
and use. Poorly managed data wastes time and resources (PRG, 2009).
Design the M and E communication plan around the information needs of the
users
The content and format of data reports will vary, depending on whether the
reports are to be used to monitor processes, conduct strategic planning, com-
ply with requirements, identify problems, justify a funding request, or conduct
an impact evaluation.
Reporting may entail different levels of complexity and technical language; the
report format and media should be tailored to specific audiences and different
methods used to solicit feedback.
Activity 9.3
1. Discuss any practical considerations in the planning for data collection
in the monitoring and evaluation of development projects.
2. Explain how a project manager can reduce data collection costs. Give
examples.
3. Discuss the strengths and weaknesses of any five data gathering tools.
Suggest ways in which these tools can be improved.
4. Examine the nexus between the quality of data gathered and the qual-
ity of results produced during monitoring and evaluation.
9.12 Summary
In this unit, we have looked at data collection for monitoring and evaluation.
We have presented sampling issues and highlighted practical considerations
in information reporting and utilisation planning. We have also looked at the
different methods that can be used to collect information for monitoring and
evaluation purposes. The need to select methods that suit your purposes and
your resources is emphasised. We also examined various sources of data.
We explored various practical considerations in planning for data collection
and the tools used in data collection.
References
Chaplowe, S. (2008). Monitoring and Evaluation Planning Guidelines
and Tools. Baltimore: American Red Cross/CRS.
Civicus. (1996). Monitoring and Evaluation Handbook. Civicus.
Olive. (2002). Planning for Monitoring and Evaluation. Olive Publications.
Prim Research Group (PRG). (2009). Engaging in Data Collection. Available
online at: www.sdprgorg/depov/strategies/engaging.htm (December 2005).
Rossi, P.H. (1993). Evaluation: A Systematic Approach. 5th ed. London: Sage
Publications.
Shapiro, J. (2006). Evaluation: Judgement Day or Management Tool?
Olive Publications.
Strategies and Means: Data Collection in Research (May 2001). Available
online at: www.sdcn.org/strategies/integrating.htm (December 2005).
Westfall, L. (2010). The Certified Software Quality Engineer Handbook.
www.westfallteam.com. Accessed on 8 May 2011.
www.united.fn.3non-profit.nl/info. Accessed 18/4/2016.
10.1 Introduction
Activity 10.1
1. Define the term impact assessment in the context of monitoring and
evaluation.
2. Provide a justification for impact assessment.
3. How would you determine whether or not to carry out an evaluation?
The following are some key points to remember in exploring the use of existing
data resources for the impact evaluation (Abadie, 1998; Baker, 2000).
team members and policy makers from the outset. It is therefore important to
identify team members as early as possible, agree upon roles and responsi-
bilities, and establish mechanisms for communication during key points of the
evaluation. Among the core team is the evaluation manager, analysts, both
economist and other social scientists, and, for evaluation designs involving
new data collection, a sampling expert, survey designer, fieldwork manager
and fieldwork team, and data managers and processors (Grosh and Muñoz,
1996). Depending on the size, scope, and design of the study, some of these
responsibilities will be shared or other staffing needs may be added to this
core team. In cases in which policy analysts may not have had experience
integrating quantitative and qualitative approaches, it may be necessary to
spend additional time at the initial team building stage to sensitise team mem-
bers and ensure full collaboration.
Activity 10.2
1. Identify the various steps involved in designing an impact assessment.
2. Account for the key issues involved in identifying data resources for
impact assessment.
3. Discuss the importance of creating evaluation questions in impact as-
sessment.
4. Discuss the rationale behind impact assessment of development
projects.
Second is whether to work with a private firm or public agency. Private firms
can be more dependable with respect to providing results on a timely basis,
but capacity building in the public sector is lost and often private firms are
understandably less amenable to incorporating elements into the evaluation
that will make the effort costlier. Whichever counterpart or combination of
counterparts is finally crafted, a sound review of potential collaborators’ past
evaluation activities is essential to making an informed choice (Baker, 2000).
And third is what degree of institutional separation to put in place between the
evaluation providers and the evaluation users. There is much to be gained
from the objectivity provided by having the evaluation carried out independ-
ently of the institution responsible for the project being evaluated. However,
evaluations can often have multiple goals, including building evaluation capac-
ity within government agencies and sensitising programme operators to the
realities of their projects once these are carried out in the field. At a minimum,
the evaluation users, who can range from policymakers in government agen-
cies in client countries to NGO organizations, bilateral donors, and interna-
tional development institutions, must remain sufficiently involved in the evalu-
ation to ensure that the evaluation process is recognised as being legitimate
and that the results produced are relevant to their information needs. Other-
wise, the evaluation results are less likely to be used to inform policy. In the
final analysis, the evaluation manager and his or her clients must achieve the
right balance between involving the users of evaluations and maintaining the
objectivity and legitimacy of the results.
manager and the data manager during the development of the instruments, as
well as local staff, preferably analysts who can provide knowledge of the country
and the programme, can be critical to the quality of information collected
(Grosh and Muñoz, 1996). It is also important to ensure that the data col-
lected can be disaggregated by gender to explore the differential impact of
specific programmes and policies.
10.11 Training
For both qualitative and quantitative data collection, even experienced staff
must be trained to collect the data specific to the evaluation, and all data
collection should be guided by a set of manuals that can be used as orienta-
tion during training and as a reference during the fieldwork. Depending on the
complexity of the data collection task, training can range from three days to
several weeks.
10.13 Sampling
Sampling is an art best practiced by an experienced sampling specialist. The
design need not be complicated, but it should be informed by the sampling
specialist’s expertise in the determination of appropriate sampling frames, sizes,
and selection strategies. The sampling specialist should be incorporated in the
evaluation process from the earliest stages to review the available information
needed to select the sample and determine whether any enumeration work
will be needed, which can be time consuming (Baker, 2000). As with other
parts of the evaluation work, coordination between the sampling specialist
and the evaluation team is important. This becomes particularly critical in con-
ducting matched comparisons because the sampling design becomes the ba-
sis for the “match” that is at the core of the evaluation design and construction
of the counterfactual. In these cases, the sampling specialist must work closely
with the evaluation team to develop the criteria that will be applied. There are
many tradeoffs between costs and accuracy in sampling that should be made
clear as the sampling framework is being developed. For example, conducting
a sample in two or three stages will reduce the costs of both the sampling
and the fieldwork, but the sampling errors will increase and the precision of
the estimates will therefore be reduced.
After developing the sampling strategy and framework, the sampling special-
ist should also be involved in selecting the sample for the fieldwork and the
pilot test to ensure that the pilot is not conducted in an area that will be in-
cluded in the sample for the fieldwork. Often initial fieldwork will be required
as part of the sample selection procedure (Baker, 2000).
And finally, the sampling specialist should produce a sampling document de-
tailing the sampling strategy, including:
a) From the sampling design stage, the power calculations using the im-
pact variables, the determination of sampling errors and sizes, the use
of stratification to analyse populations of interest.
b) From the sample selection stage, an outline of the sampling stages and
selection procedures.
c) From the fieldwork stage to prepare for analysis, the relationship be-
tween the size of the sample and the population from which it was
selected, non-response rates, and other information used to inform sam-
pling weights, and any additional information that the analyst would
need to inform the use of the evaluation data. This document can be
used to maintain the evaluation project records and should be included
with the data whenever it is distributed to help guide the analysts in
using the evaluation data.
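The power and sampling-error determinations mentioned in (a) typically rest on standard formulas rather than anything specific to this unit. As an illustrative sketch, the classic sample size for estimating a proportion p with margin of error e at roughly 95% confidence is n = z²p(1−p)/e², optionally shrunk by a finite population correction (the function names and default values here are illustrative, not taken from the text):

```python
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
    """n = z^2 * p * (1 - p) / e^2; p=0.5 is the most conservative
    choice, and z=1.96 corresponds to ~95% confidence."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

def with_fpc(n, population):
    """Finite population correction: the required n shrinks when the
    population itself is small relative to the sample."""
    return math.ceil(n / (1 + (n - 1) / population))

n = sample_size_for_proportion()        # 385 for p=0.5, e=0.05
n_small = with_fpc(n, population=1000)  # 279 once the correction is applied
```

This also illustrates the cost-accuracy tradeoff noted earlier: widening the margin of error from 5% to 10% cuts the required sample roughly fourfold.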
10.14.1 Questionnaires
The design of the questionnaire is important to the validity of the information
collected. There are four general types of information required for an impact
evaluation (Bamberger, 2000).
These include:
Classification of nominal data with respondents classified according to
whether they are project participants or belong to the comparison
group.
Exposure to treatment variables recording not only the services and
benefits received but also the frequency, amount, and quality—assess-
ing quality can be quite difficult.
Outcome variables to measure the effects of a project, including imme-
diate products, sustained outputs or the continued delivery of services
over a long period, and project impacts such as improved income and
employment.
Intervening variables that affect participation in a project or the type of
impact produced, such as individual, household, or community characteristics;
these variables can be important for exploring biases.
The way in which the question is asked, as well as the ordering of the questions,
is also quite important in generating reliable information. A relevant
example is the measurement of welfare, which would be required
for measuring the direct impact of a project on poverty reduction.
Among the elements noted for a good questionnaire are keeping it short and
focused on important questions, ensuring that the instructions and questions
are clear, limiting the questions to those needed for the evaluation, including a
“no opinion” option for closed questions to ensure reliable data, and using
sound procedures to administer the questionnaire, which may indeed be dif-
ferent for quantitative and qualitative surveys.
The type of staff needed to collect data in the field will vary according to the
objectives and focus of the evaluation. For example, a quantitative impact
evaluation of a nutrition programme might require the inclusion of an
anthropometrist to collect height-for-weight measures as part of a survey team,
whereas the impact evaluation of an educational reform would most likely
include staff specialising in the application of achievement tests to measure the
impact of the reform on academic achievement. According to UNDP (2002),
most quantitative surveys will require at least a survey manager, data man-
ager, field manager, field supervisors, interviewers, data entry operators, and
drivers. Depending on the qualitative approach used, field staff may be similar
with the exception of data entry operators. The skills of the interviewers,
however, would be quite different, with qualitative interviewers requiring spe-
cialised training, particularly for focus groups, direct observation, and so forth.
Three other concerns are useful to remember when planning survey opera-
tions.
First, it is important to take into consideration temporal events that can affect
the operational success of the fieldwork and the external validity of the data
collected, such as:
the school year calendar,
holidays,
rainy seasons and seasonality
harvest times, or migration patterns.
Second, it is crucial to pilot test data collection instruments, even if they are
adaptations of instruments that have been used previously, both to test the
quality of the instrument with respect to producing the required data and to
familiarise fieldwork staff with the dynamics of the data collection process.
Pilot tests can also serve as a proving ground for the selection of a core team
of field staff to carry out the actual survey.
and Huberman, 1994). Two commonly used methods for impact evaluation
are mentioned—content analysis and case analysis (Taschereau, 1998).
(a) the evaluation questions for which the information was collected,
The coding of data can be quite complex and may require many assumptions.
Once a classification system has been set up, the analysis phase begins. This
involves looking for patterns in the data and moving beyond description to-
ward developing an understanding of program processes, outcomes, and
impacts. This is best carried out with the involvement of team members. New
ethnographic and linguistic computer programmes are also now available,
designed to support the analysis of qualitative data (UNDP, 2002).
First, analysis commonly takes longer than anticipated, particularly if the data
are not as clean or accessible as expected at the start of the analysis, if the
analysts are not experienced with this type of evaluation work, or if there is
an emphasis on capacity building through collaborative work.
Third, the products will have the most policy relevance if they include clear
and practical recommendations stemming from the impact analysis. These
can be broken into short- and long-term priorities, and when possible, should
include budgetary implications. Decision makers will be prone to look for the
“bottom line.”
Activity 10.3
1. Create a list of roles for impact assessment team members.
2. Discuss the roles and responsibilities of the team members in an im-
pact assessment team.
3. Analyse the utility of case analysis in impact assessment.
10.17 Summary
In this unit we have taken you through the processes of impact assessment for
monitoring and evaluation. We visited the importance of impact assessment,
as well as the main steps in designing an impact assessment framework.
Implementation processes and the roles of the key impact assessment team
members are articulated in this unit. Lastly, reporting and communication
procedures were attended to. We have encouraged you to view impact
assessment in the context of monitoring and evaluation of development
projects. We have also reminded you that the goal of impact assessment is to
see if projects are yielding the intended, pre-determined development
outcomes. We encourage you to read through this material and critique some
of the processes tendered in this unit.
References
Abadie, A., Angrist, J. and Imbens, G. (1998). “Instrumental Variables
Estimation of Quantile Treatment Effects.” National Bureau of Economic
Research Working Paper Series, No. 229.
Atkinson, A (1987). “On the Measurement of Poverty.” Econometrica 55:
749-64.
Baker, J. (2000). Evaluating the Impact of Development Projects on
Poverty: A Handbook for Practitioners. Directions in Development.
Washington, D.C.: The International Bank for Reconstruction and
Development/The World Bank.
Bamberger,M.(2000).Integrating Quantitative and Qualitative Methods
in Development Research. Washington, D.C.: World Bank.
Barbara,M.(1998).“Practitioner-Led Impact Assessment: A Test in Mali.”
USAID AIMS Brief. Washington, D.C.: USAID.
Grosh, M.E. and Muñoz, J. (1996). “A Manual for Planning and Implement-
ing the Living Standards Measurement Study Survey.” LSMS Work-
ing Paper No. 126. Washington, D.C.: World Bank.
Grossman,J.B.(1994).“Evaluating Social Policies: Principles and U.S. Expe-
rience.” The World Bank Research Observer 9 (July): 159-80.
Hulme, D. (1997). “Impact Assessment Methodologies for Microfinance:
Theory, Experience and Better Practice.” Institute for Development
Policy and Management, University of Manchester.
James, H. and Richard, R. (1985). “Alternative Methods of Evaluating the
Impact of Interventions: An Overview.” Journal of Econometrics
30: 239-67.
Rossi, P.H. (1993). Evaluation A Systematic Approach, London: Sage
Publications.
Khandker, S.R., Koolwal, G.B. and Samad, H.A. (2010).
Handbook on Impact Evaluation: Quantitative Methods and Practices.
Washington, D.C.: The International Bank for Reconstruction and Development/
The World Bank.
Taschereau, S. (1998). Evaluating the Impact of Training and Institu-
tional Development Programs, a Collaborative Approach. Eco-
nomic Development Institute of the World Bank, January.
United Nations Development Programme. (2002). The Evaluation of
Results-Based Management at UNDP. New York, NY.
11.1 Introduction
Good RBM is an ongoing process. This means that there is constant feed-
back, learning and improving. Existing plans are regularly modified based on
the lessons learned through monitoring and evaluation, and future plans are
developed based on these lessons. Monitoring is also an ongoing process.
The lessons from monitoring are discussed periodically and used to inform
actions and decisions. Evaluations should be done for programmatic improve-
ments while the programme is still ongoing and also inform the planning of
new programmes. This ongoing process of doing, learning and improving is
what is referred to as the RBM life-cycle approach, which is depicted in
Figure 11.1 (RBM cycle).
RBM practices and systems are most effective when they are accompanied
by clear accountability arrangements and appropriate incentives that promote
desired behaviour. In other words, RBM should not be seen simply in terms
of developing systems and tools to plan, monitor and evaluate results. It must
also include effective measures for promoting a culture of results-orientation
and ensuring that persons are accountable for both the results achieved and
their actions and behaviour.
Activity 11.1
1. Define the following terms:
(a) Results-Based Management (RBM)
(b) planning
(c) evaluation
2. List the main objectives of Results-Based Management (RBM).
11.6 Planning
According to UNDP (2009) planning can be defined as the process of setting
goals, developing strategies, outlining the implementation arrangements and
allocating resources to achieve those goals. It is important to note that plan-
ning involves looking at a number of different processes such as:
identifying the vision, goals or objectives to be achieved,
formulating the strategies needed to achieve the vision and goals,
determining and allocating the resources (financial and other) required
to achieve the vision and goals and
outlining implementation arrangements, which include the arrangements
for monitoring and evaluating progress towards achieving the vision
and goals.
There is an expression that failing to plan is planning to fail. While it is not
always true that those who fail to plan will eventually fail in their endeavours,
there is strong evidence to suggest that having a plan leads to greater effec-
tiveness and efficiency. Not having a plan, whether for an office, programme
or project, is in some ways similar to attempting to build a house without a
blueprint: it is very difficult to know what the house will look like, how
much it will cost, how long it will take to build, what resources will be re-
quired, and whether the finished product will satisfy the owner’s needs. In
short, planning helps us define what an organisation, programme or project
aims to achieve and how it will go about it (UNDP 2009).
have known the best time to start the project and the type of material to use
(UN, 2008).
Planning helps mitigate and manage crises and ensure smoother im-
plementation
A proper plan helps individuals and units to know whether the results achieved
are those that were intended and to assess any discrepancies. Of course, this
requires effective monitoring and evaluation of what was planned. For this
reason, good planning includes a clear strategy for monitoring and evaluation
and use of the information from these processes (www.plandev,of,org/go/
usergiude).
11.7 Monitoring
Monitoring can be defined as the ongoing process by which stakeholders
obtain regular feedback on the progress being made towards achieving their
goals and objectives. Contrary to many definitions that treat monitoring as
merely reviewing progress made in implementing actions or activities, the
definition used in this Handbook focuses on reviewing progress against achiev-
ing goals. In other words, monitoring in this unit is not only concerned with
asking “Are we taking the actions we said we would take?” but also “Are we
making progress on achieving the results that we said we wanted to achieve?”
The difference between these two approaches is extremely important. In the
more limited approach, monitoring may focus on tracking projects and the
use of the agency’s resources. In the broader approach, monitoring also in-
volves tracking strategies and actions being taken by partners and non-part-
ners, and figuring out what new strategies and actions need to be taken to
ensure progress towards the most important results (United Nations Popula-
tion Fund, 2001).
11.8 Evaluation
Evaluation is a rigorous and independent assessment of either completed or
ongoing activities to determine the extent to which they are achieving stated
objectives and contributing to decision making. Evaluations, like monitoring,
can apply to many things, including an activity, project, programme, strategy,
policy, topic, theme, sector or organisation (Organisation for Economic Co-
operation and Development, 2002). The key distinction between the two is
that evaluations are done independently to provide managers and staff with
an objective assessment of whether or not they are on track. They are also
more rigorous in their procedures, design and methodology, and generally
involve more extensive analysis. While monitoring provides real-time infor-
mation required by management, evaluation provides more in-depth assess-
ment. The monitoring process can generate questions to be answered by
evaluation. Also, evaluation draws heavily on data generated through moni-
toring during the programme and project cycle, including, for example, base-
line data, information on the programme or project implementation process
and measurements of results (UNDP, 1998).
Activity 11.2
1. Draw the Results-Based Management life-cycle and identify its major
components.
2. Identify the main processes involved in planning.
3. Provide a justification for planning in development projects. Illustrate
your answers using practical examples.
11.9.1 Ownership
Ownership is fundamental in formulating and implementing programmes and
projects to achieve development results. According to UNDP (1998; 2009),
there are two major aspects of ownership to be considered which are, the
depth, or level, of ownership of plans and processes and the breadth of own-
ership.
Depth of ownership:
Breadth of ownership:
A key aim of managing for results is to ensure that ownership goes beyond a
few select persons to include as many stakeholders as possible. For this rea-
son, monitoring and evaluation activities and the findings, recommendations
and lessons from ongoing and periodic monitoring and evaluation should be
fully owned by those responsible for results and those who can make use of
them (UNDP, 2009).
Many projects and programmes often fail to achieve their objectives because
there is little or no analysis of, and attention to, the differences between the
roles and needs of men and women in society. Inequalities, discriminatory
practices and unjust power relations between groups in society are often at
the heart of development problems. The same applies to the concept of na-
tional or community ownership of development programmes. There is greater
pride and satisfaction, greater willingness to protect and maintain assets, and
greater involvement in social and community affairs when people have a vested
interest in something that is, when they feel ‘ownership’.
Ask throughout the processes: “Will this be sustainable?”; “Can national sys-
tems and processes be used or augmented?”; “What are the existing national
capacity assets in this area?”; “Are we looking at the enabling environment,
the organisation or institution, as well as the individual capacities?”; and “Can
we engage in monitoring and evaluation activities so that we help to strengthen
national M and E systems in the process?”
Promote inclusiveness, gender mainstreaming and women’s
empowerment
Ensure that men, women and traditionally marginalised groups are involved in
the planning, monitoring and evaluation processes. For example, ask ques-
tions such as:
“Does this problem or result as we have stated it reflect the interests,
rights and concerns of men, women and marginalised groups?”
“Have we analysed this from the point of view of men, women and
marginalised groups in terms of their roles, rights, needs and
concerns?” and
“Do we have sufficiently disaggregated data for monitoring and evalu-
ation?”
Activity 11.3
1. Explore the link between planning and monitoring and evaluation.
2. (a) Discuss the principles of Results-Based Management. (b) How
feasible are these principles in monitoring and evaluation practice?
3. Discuss the outcomes of applying the principles of Results-Based Man-
agement effectively in development projects.
11.10 Summary
In this unit we covered issues concerning monitoring and evaluation in the
context of RBM. We recognised that the increasing emphasis on results is
bringing about some major changes in the focus, approach and application of
monitoring and evaluation within development circles. We noted that central
to these changes is Results-Based Management. In this unit we also defined
the concept of RBM and linked it to monitoring and evaluation of development
projects. The principles of RBM were visited. We established and explained
the link between planning and monitoring and evaluation. We also looked
at the objectives of RBM, and the justification for planning was explored
fully in this unit.
12.1 Introduction
Section 1 – Summary
Make this a short summary for people who will not read the whole report. Give the reasons why the evaluation was conducted, state who it is targeted at, and include the main conclusions and recommendations (Crompton, n.d.).
Section 2 - Background
In this part, cover the background to the evaluation and what it was meant to
achieve. The program should be described and the depth of description will
depend on whether the intended audiences have any knowledge of the pro-
gram or not. Do not assume that everybody will know. Do not leave things
out but at the same time do not burden them with detail.
It should cover:
origin of the program,
aims of the program,
participants in the program,
characteristics of the materials,
staff involved in the program.
Section 3 - Description of the Evaluation
This covers why the evaluation was conducted and what it was and was not
intended to accomplish. State the methodology and any relevant technical
information such as how the data was collected and what evaluation tools
were used.
Section 4 - Results
This section covers the results of the work described in section 3 and can be supplemented by any other evidence collected. Use graphics (charts, tables and so on) to illustrate the information, but use them sparingly to increase their effectiveness.
It should cover:
Results of the study:
How many participants took the tests?
What were the results of the tests?
If there was a comparison group, how do the groups compare?
Are any differences statistically significant?
If there was no control group, did performance change from test to test?
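As a rough sketch of the statistical-significance question above, the following computes Welch's t statistic for two hypothetical groups of test scores using only the Python standard library. The scores and the |t| > 2 rule of thumb are illustrative only, not a substitute for a properly reported p-value:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples of unequal variance."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical post-test scores for a programme group and a comparison group.
programme = [72, 68, 75, 80, 66, 74, 71, 69]
comparison = [64, 61, 70, 66, 59, 63, 67, 62]

t = welch_t(programme, comparison)
# As a crude rule of thumb, |t| > 2 suggests the difference is unlikely to be
# due to chance alone; a full evaluation should report a proper p-value.
print(f"t = {t:.2f}")
```

In practice an evaluator would use a statistical package for this, but the sketch shows what "statistically significant" is asking: whether the gap between group means is large relative to the variability within each group.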
Section 5 - Discussion
This should discuss your findings and your interpretation of them. Always
interpret your results in terms of your stated goals.
This section should cover the interpretation of all the results in section 4. If the
evaluation is not a large one, then sections 4 and 5 could be combined. The
results should always be related back to the purpose of the evaluation, some-
thing that does not always happen. Do not forget the unexpected results as
they can often be the most interesting.
It should cover:
Are there alternative explanations to the results from the data?
Are these results generalisable?
What were the strengths and weaknesses of the intervention?
Are certain parts of the program better received by certain groups?
Are any results related to certain attitudes or learner characteristics?
Were there any unexpected results?
Section 6 - Costs and Benefits
This is an optional section and would only be included if this had been part of
the evaluation plan. As there is no definitive approach to investigating this
whole area there will be a need to justify the approach taken. Not many
evaluations look at costs but there is a growing need to include some informa-
tion about this area. Evaluations and program interventions do not happen for
free.
It should cover:
What was the method used to calculate costs and effects/benefits?
How were costs and outcomes defined?
What costs were associated with the program?
How were costs distributed (for example start-up costs, operating costs
etc.)?
Were there any hidden costs (for example in-kind contributions)?
What benefits were associated with the program?
What were measures of effectiveness (test scores; program comple-
tion and so on)?
Were there any unexpected benefits?
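The cost questions above can be made concrete with a simple cost-effectiveness calculation. In the sketch below every figure is invented for illustration, and the chosen measure of effectiveness (programme completions) is just one of the possible measures the section mentions:

```python
# Hypothetical cost figures for an evaluated programme.
start_up_costs = 12_000.0        # e.g. equipment, training materials
operating_costs = 3_500.0        # e.g. fuel and spare parts over the period
in_kind_contributions = 1_800.0  # hidden costs such as community labour

# Total cost includes the hidden in-kind contributions, which evaluations
# often overlook.
total_cost = start_up_costs + operating_costs + in_kind_contributions

programme_completions = 240  # the chosen measure of effectiveness

# Cost per unit of outcome: a simple basis for comparing alternatives.
cost_per_completion = total_cost / programme_completions
print(f"Total cost: {total_cost:.2f}")
print(f"Cost per completion: {cost_per_completion:.2f}")
```

The ratio only becomes meaningful when compared against the same ratio for an alternative programme, which is why the section asks how costs and outcomes were defined.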
Section 7 - Conclusions
This section can be the most important section in the report apart from the
summary. Some people will only read the summary and the conclusion sec-
tion. Conclusions and recommendations should be stated clearly and pre-
cisely and these might be presented as a list as readers can easily scan them.
Don’t expect everyone to read your report from cover to cover. Make sure
that you get your main points across in the opening summary and in the con-
clusion.
It should cover:
What are the major conclusions of the evaluation?
How sure are you of the conclusions?
Are all the results reliable?
What are the recommendations regarding the program?
Can any predictions or hypotheses be put forward?
Are there any recommendations as to future evaluations?
Adapted from: Philip Crompton, Research Fellow, Institute for Education, University of Stirling (https://www.sampletemplates.com/business-templates/sample-evaluation-report.html).
Introduction
Effectiveness
This section will synthesise and discuss all evidence about effectiveness
of the project, actual or potential, in pursuing its intermediate/specific
objectives.
Sustainability
This section will assess the prospects for sustaining and up-scaling the
project’s results by the beneficiaries and the host institutions after the
termination of the project. It will include, as appropriate:
Institutional, technical, social and economic sustainability of proposed
technologies, innovations and/or processes;
Expectation of institutional uptake and mainstreaming of the newly ac-
quired capacities, and/or diffusion beyond the beneficiaries or the
project;
Environmental sustainability: the project’s contribution to sustainable
natural resource management, in terms of maintenance and/or regen-
eration of the natural resource base.
In the case of emergency projects, where the concept of sustainability
may not be fully appropriate, findings related to the project’s
connectedness will be reported in this section.
Impact
This section will assess the current and foreseeable positive and nega-
tive impacts produced as a result of the project/programme, directly or
indirectly, intended or unintended.
It will assess the actual or potential contribution of the project/pro-
gramme to the planned development objective and to FAO’s Strategic
Objectives, Core Functions and Organizational Results.
Lessons Learned
Not all evaluations generate lessons. Lessons should only be drawn if
they represent original contributions to general knowledge.
Where this is the case, the evaluation will identify lessons and good
practices on substantive, methodological or procedural issues, which
could be relevant to the design, implementation and evaluation of simi-
lar projects or programmes. Such lessons/practices must have been
innovative, demonstrated success, had an impact, and be replicable.
The team will decide whether to report the full names and/or the functions of the people who were interviewed.
(https://www.oecd.org/development/evaluation/dcdndep/47069197.pdf, accessed 20/05/2017)
CONTENTS
ACRONYMS ........................................................................................................................... 4
ACKNOWLEDGEMENTS ................................................................................................... 5
EXECUTIVE SUMMARY .................................................................................................... 6
A.1. ICEIDA WatSan Project in Mangochi, Malawi ............................................................... 6
A.2. Evaluation methodology ................................................................................................... 6
A.3. Summary of findings and main recommendations ............................................................ 7
1.0 INTRODUCTION ........................................................................................................... 12
1.1 Introduction ....................................................................................................................... 12
1.2 The purpose of the report................................................................................................... 12
1.3 The scope of evaluation. ................................................................................................... 12
1.4 The scope of the project .................................................................................................... 12
2.0 COUNTRY AND PROGRAMME PROFILE .............................................................. 13
2.1 Context for development ............................................................................................... 13
2.2 The economic, cultural and political dimensions of Malawi ..........................................13
2.3 State of Infrastructure that Characterize the Context for Development ..........................14
2.4 Link to poverty reduction ............................................................................................... 14
2.5 Link to Sustainable Development and Local Needs ...................................................... 15
2.6 Gender equality, Environment, and other programming priorities ........... .................... 15
2.7 Financial Resourcing ..................................................................................................... 16
2.8 Project Milestones and Achievements to date ................... ........................................... 17
2.9 Stakeholder Participation ............................ .................................................................. 19
3.0 EVALUATION PROFILE .......................................................................................... 21
3.1 Methodology .................................................................................................................. 21
3.2 Sources of data ............................................................................................................... 21
3.3 Sampling methods .......................................................................................................... 21
3.4 Enumeration ................................................................................................................... 22
3.5 Techniques of data collection ......................................................................................... 22
3.6 Data analysis ................................................................................................................... 23
4.0 EVALUATION FINDINGS ......................................................................................... 24
4.A.0 Relevance .................................................................................................................... 24
4.A.1 Needs assessment and choice of the beneficiaries. ................................................... 24
4.A.2 Consistency of program's objectives with beneficiaries needs and expectations ....... 26
4.A.3 Consistency of program's strategy and activities with program's objectives .............. 27
4.A.4 Alignment with National Policies, Strategies and Priorities ....................................... 27
4.B.0 Effectiveness ............................................................................................................... 28
4.B.1 Objective 1: Increase the number of boreholes in the Monkey Bay Health Zone ........28
4.B.2 Objective 2: Build up capacity among communities in maintenance of boreholes and
pumps ……………………………………………………………………………………..….29
4.B.3 Objective 3: Increase knowledge in hygiene and sanitation among the target groups .. 31
4.B.4 Objective 4: Increase the number of protected and improved shallow wells ................ 32
4.B.5 Objective 5: Putting to use 2 natural springs in Mvunguti village ................................ 33
4.B.6 Objective 6: Improve Community Based Management ................................................ 33
4.B.7 Objective 7: Establishing functional co-ordination, monitoring and reporting system
between stakeholders .............................................................................................................. 34
4.C.0 Efficiency ...................................................................................................................... 34
4.C.1 Significant improvement in access to drinking water ................................................... 35
4.C.2 Increased knowledge in sanitation and hygiene............................................................. 37
4.C.3 Practice of hand washing with soap .............................................................................. 38
4.D.0 Impact ........................................................................................................................... 38
4.D.1 A remarkable decrease in water-related diseases ....... ................................................. 38
4.D.2 Performing community management structures .......................................................... 41
4.E.0 Sustainability ................................................................................................................ 42
4.E.1 Current functioning status of the program’s outputs .................................................... 42
4.E.2 Financial sustainability ................................................................................................. 43
4.E.3 Technical sustainability................................................................................................. 43
4.E.4 Institutional sustainability ............................................................................................. 44
4.E.5 Environmental sustainability .................................................................................. 45
5.0 CONCLUSION ........................................................................................................ 47
6.0 RECOMMENDATIONS......................................................................................... 48
6.1 Recommendations of the evaluation with respect to relevance: ............................... 48
6.2 Recommendations of the evaluation with respect to Effectiveness: ......................... 49
6.3 Recommendations of the evaluation with respect to efficiency: ............................... 50
6.4 Recommendations of the evaluation with respect to impact: .................................... 50
6.5 Recommendations of the evaluation with respect to sustainability: .......................... 51
7.0 LESSONS LEARNED ............................................................................................. 52
APPENDICES ................................................................................................................. 53
APPENDIX 1: TERMS OF REFERENCE (ToRs) ......................................................... 53
APPENDIX 2: EVALUATION RESULTS MATRIX ................................................... 65
APPENDIX 3: GIS WATER POINT RESULTS MAP .................................................. 72
APPENDIX 4: EVALUATION ACTIVITIES AND TIME FRAME ............................ 74
APPENDIX 5: LIST OF PEOPLE INTERVIEWED ................................................... 75
APPENDIX 6: QUESTIONNAIRES .............................................................................. 77
APPENDIX 7: LIST OF COMMUNITIES/VILLAGES SURVEYED .......................... 95
BIBLIOGRAPHY/REFERENCES ................................................................................. 100
http://www.iceida.is/media/pdf/ICEIDA-FINAL-EVALUATION-REPORT-FINAL--submitted.pdf (Accessed 26/06/2017)
12.7 Summary
In this unit, we presented real examples of professional evaluation report structures. We hope you were able to see how such reports are structured and how their different sections are arranged. You will find the full versions of the reports on your MyVista course account. Alternatively, you may find any evaluation report on the internet and download it for analysis. A full report could not be inserted in this unit because of limited space.
References
Crompton, P. (n.d.). Sample Evaluation Report. Available at: https://www.sampletemplates.com/business-templates/sample-evaluation-report.html (Accessed 2016).
Organisation for Economic Co-operation and Development. (n.d.). Available at: https://www.oecd.org/development/evaluation/dcdndep/47069197.pdf (Accessed 20/05/2017).
Icelandic International Development Agency (ICEIDA). Available at: http://www.iceida.is/media/pdf/ICEIDA-FINAL-EVALUATION-REPORT-FINAL--submitted.pdf (Accessed 26/06/2017).
United States Agency for International Development (USAID). How-To Note: Preparing Evaluation Reports. Available at: https://www.usaid.gov/sites/default/files/documents/1870/How-to-Note_Preparing-Evaluation-Reports.pdf (Accessed 26/05/2017).