
MONITORING AND EVALUATION SYSTEM TO AID IN IMPROVING THE EFFICIENCY OF THE MONITORING AND EVALUATION PROCESS

SCHOOL: COMPUTING AND INFORMATION TECHNOLOGY

UNIT NAME: PROJECT RESEARCH AND DEFINITION

Abstract
Monitoring and evaluation is a systematic process used to evaluate projects, institutions, and
programmes with the goal of improving current and future outputs. It is a continuous
assessment based on information about the progress, or delay, of the activities being assessed.
It helps in determining whether a programme is on track and whether changes to the
programme may be needed.
The monitoring and evaluation process can be used to demonstrate that a programme's
efforts have had a measurable impact on the expected outcomes and to tell whether they have
been implemented effectively. It is critical in helping managers, planners, implementers, policy
makers and donors acquire the information and understanding they need to make informed
decisions about a programme's operations.
This proposal focuses on the use of monitoring and evaluation software to make the
monitoring and evaluation process more efficient and the inferences drawn from it more
reliable.
1. INTRODUCTION
1.1 Background Information
Monitoring is the systematic process of collecting, analysing and using information to track a
programme’s progress toward reaching its objectives and to guide management decisions.
Monitoring usually focuses on processes, such as when and where activities occur, who
delivers them and how many people or entities they reach.

Monitoring is conducted after a programme has begun and continues throughout the
programme implementation period.

The four main purposes of monitoring and evaluation are:

• To learn from experiences to improve practices and activities in the future;
• To have internal and external accountability of the resources used and the results obtained;
• To take informed decisions on the future of the initiative;
• To promote empowerment of beneficiaries of the initiative.

Evaluation is the systematic assessment of an activity, project, programme, strategy, policy,
topic, theme, sector, operational area or institution's performance. Evaluation focuses on
expected and achieved accomplishments, examining the inputs, activities, outputs, outcomes,
impacts, processes, contextual factors and causality, in order to understand achievements or
the lack of achievements. Evaluation aims at determining the relevance, impact,
effectiveness, efficiency and sustainability of interventions and the contributions of the
intervention to the results achieved.
Evaluations should help to draw conclusions about five main aspects of the project:

• relevance
• effectiveness
• efficiency
• impact
• sustainability

An evaluation should provide evidence-based information that is credible, reliable and useful.
The findings, recommendations and lessons of an evaluation should be used to inform the
future decision-making processes regarding the programme.
Monitoring and evaluation are critical for identifying and documenting successful programmes
and approaches and tracking progress toward common indicators across related projects.
Monitoring and evaluation form the basis for strengthening understanding of the many
multi-layered factors underlying a project.

This is especially relevant in resource-poor areas, where difficult decisions need to be made
with respect to funding priorities.

At the programme level, the purpose of monitoring and evaluation is to track implementation
and outputs systematically, and measure the effectiveness of programmes. It helps determine
exactly when a programme is on track and when changes may be needed. Monitoring and
evaluation forms the basis for modification of interventions and assessing the quality of
activities being conducted.

Monitoring and evaluation can be used to demonstrate that programme efforts have had a
measurable impact on expected outcomes and have been implemented effectively. It is
essential in helping managers, planners, implementers, policy makers and donors acquire the
information and understanding they need to make informed decisions about programme
operations.

Monitoring and evaluation help with identifying the most valuable and efficient use of
resources. It is critical for developing objective conclusions regarding the extent to which
programmes can be judged a “success”. Monitoring and evaluation together provide the
necessary data to guide strategic planning, to design and implement programmes and projects,
and to allocate, and re-allocate resources in better ways.

1.2 Monitoring and Evaluation Approaches


1.2.1 The Logical Framework Approach
The logical framework (LogFrame) is an approach that helps to clarify the objectives
of any project, program, or policy. It aids in the identification of the expected causal
links—the “program logic”—in the following results chain: inputs, processes,
expected outputs, outcomes, and impact. It leads to the identification of performance
indicators at each stage in this chain, as well as risks which might impede the
attainment of the objectives. The LogFrame is also a vehicle for engaging partners in
clarifying objectives and designing activities. During implementation the LogFrame
serves as a useful tool to review progress and take corrective action.
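To make the results chain concrete, the sketch below shows one possible way the proposed tool might represent a logframe in code. It is only an illustrative data model: the class names, stages, indicator values and risks are assumptions made for this example, not part of the LogFrame methodology or of any existing system.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A performance indicator attached to one stage of the results chain."""
    name: str
    baseline: float
    target: float

@dataclass
class LogFrameStage:
    """One level of the results chain: input, process, output, outcome or impact."""
    level: str
    description: str
    indicators: list = field(default_factory=list)
    risks: list = field(default_factory=list)

# A miniature, hypothetical logframe for a training activity.
logframe = [
    LogFrameStage("input", "Funds and trainers allocated",
                  [Indicator("Budget disbursed (KES)", 0, 500_000)]),
    LogFrameStage("output", "Staff trained in M&E",
                  [Indicator("Staff trained", 0, 40)],
                  risks=["Low attendance during planting season"]),
    LogFrameStage("outcome", "Projects report against indicators on time",
                  [Indicator("Reports submitted on schedule (%)", 20, 90)]),
]

for stage in logframe:
    print(stage.level, "->", [i.name for i in stage.indicators])
```

Holding indicators and risks alongside each stage in this way would make it straightforward to review progress stage by stage during implementation.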
1.2.2 Rapid Appraisal Approach
Rapid appraisal methods are quick, low-cost ways to gather the views and feedback of
beneficiaries and other stakeholders, in order to respond to decision-makers’ needs
for information.
Use

• Providing rapid information for management decision-making, especially at the project or program level.
• Providing qualitative understanding of complex socioeconomic changes, highly interactive social situations, or people's values, motivations, and reactions.
• Providing context and interpretation for quantitative data collected by more formal methods.

This could be achieved by use of questionnaires, key informant interviews, focus group discussions and direct observation.
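As a rough illustration of how quickly such feedback can be turned into management information, the sketch below tallies hypothetical questionnaire responses on a 1-to-5 satisfaction scale. The data and the scale are invented purely for this example.

```python
from collections import Counter

# Hypothetical questionnaire responses (1 = very dissatisfied, 5 = very satisfied)
# gathered during a rapid appraisal of project beneficiaries.
responses = [4, 5, 3, 2, 4, 5, 5, 3, 4, 1]

counts = Counter(responses)
average = sum(responses) / len(responses)

print("Response distribution:", dict(sorted(counts.items())))
print(f"Average satisfaction: {average:.1f} / 5")
print("Share satisfied (4 or 5): "
      f"{sum(v for k, v in counts.items() if k >= 4) / len(responses):.0%}")
```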

1.2.3 Impact Evaluation


Impact evaluation is the systematic identification of the effects – positive or negative,
intended or not – on individual households, institutions, and the environment caused by
a given development activity such as a program or project. Impact evaluation helps us
better understand the extent to which activities reach the poor and the magnitude of
their effects on people’s welfare. Impact evaluations can range from large scale
sample surveys in which project populations and control groups are compared before
and after, and possibly at several points during program intervention; to small-scale
rapid assessment and participatory appraisals where estimates of impact are obtained
from combining group interviews, key informants, case studies and available
secondary data.
Use
• Measuring outcomes and impacts of an activity and distinguishing these
from the influence of other, external factors.
• Helping to clarify whether costs for an activity are justified.
• Informing decisions on whether to expand, modify or eliminate projects, programs or policies.
• Drawing lessons for improving the design and management of future
activities.
• Comparing the effectiveness of alternative interventions.
• Strengthening accountability for results.
This is achieved through randomized pre-test/post-test evaluation, quasi-experimental design and rapid assessment evaluation.
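The before-and-after comparison with a control group described above can be illustrated with a simple difference-in-differences calculation. The numbers below are invented, and the calculation is only a minimal sketch of the idea, not a full evaluation design.

```python
# Hypothetical mean values of a welfare indicator measured before and after
# the intervention, for the project (treatment) group and a control group.
treatment_before, treatment_after = 52.0, 68.0
control_before, control_after = 50.0, 55.0

# Simple pre-test/post-test change in each group.
treatment_change = treatment_after - treatment_before   # 16.0
control_change = control_after - control_before         # 5.0

# Difference-in-differences: the change attributable to the intervention
# after netting out the change that happened anyway in the control group.
impact_estimate = treatment_change - control_change
print(f"Estimated impact: {impact_estimate:.1f} points")  # 11.0 points
```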

1.3 Related works


Tola Data
Tola Data aims to simplify the monitoring and evaluation process. It allows one to collect
and manage data in an easy-to-use interface, and it supports project tracking and
management as well as result visualisation. It does not, however, offer any analytics.
Delta
Delta monitoring and evaluation software provides an array of features: it is open to
integration with other systems and provides analytics, logframe support and project
monitoring tools. It has a wide array of dashboards for multiple scenarios and is thus
considered one of the best monitoring and evaluation software packages.
OTB Africa Monitoring and Evaluation Tool
This tool aims to provide automated and transparent reporting. It is custom-tailored
software and thus has to be made specifically for each client.

1.4 Proposed Solution

The proposed solution is monitoring and evaluation software capable of handling the
analytics and the basic monitoring and evaluation steps. The system should be able to guide
the user as to the next steps to perform and what is needed to make the monitoring and
evaluation process effective. It should be able to perform analysis based on the indicators
set and generate reports which can then be submitted for review.
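A minimal sketch of the kind of indicator analysis and reporting the proposed software could perform is shown below. The indicator names, values and the 80% "on track" threshold are assumptions made purely for illustration, not a specification of the final system.

```python
# Hypothetical indicator records the proposed tool might hold:
# (name, baseline, target, latest actual value).
indicators = [
    ("Households reached", 0, 1_000, 640),
    ("Staff trained in M&E", 0, 40, 35),
    ("Reports submitted on time (%)", 20, 90, 55),
]

def progress(baseline, target, actual):
    """Share of the distance from baseline to target achieved so far."""
    if target == baseline:
        return 1.0
    return (actual - baseline) / (target - baseline)

print("M&E PROGRESS REPORT")
for name, baseline, target, actual in indicators:
    pct = progress(baseline, target, actual)
    flag = "on track" if pct >= 0.8 else "needs attention"
    print(f"- {name}: {actual} of {target} ({pct:.0%}, {flag})")
```

A report like this, generated automatically from the indicators the user has set, is the kind of output that could then be submitted for review.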
2. Problem Statement
Most governmental and NGO projects which are meant to aid society usually take longer than
proposed or sometimes fail. This is because the monitoring and evaluation of the goals and
indicators set when the project was initiated was not done efficiently. This is mainly because
staff are not well trained in the monitoring and evaluation process and find its learning curve
too steep, which leads to inefficiency in the monitoring and evaluation of the project. Hence
there is a need for a tool to aid in the monitoring and evaluation process by performing the
analysis needed and reducing the learning curve required to perform effective monitoring and
evaluation.

2.1 Research questions


1. What are the main challenges faced in the monitoring and evaluation process in the
Africa-ai-Japan Project?
2. Can the use of monitoring and evaluation software improve the efficiency of the
monitoring and evaluation process?
2.2 Research Objectives
1. To analyse the way the existing monitoring and evaluation system works at the
Africa-ai-Japan Project.
2. To design a new system that meets the user requirements in M&E of projects in the
Africa-ai-Japan Project.
3. To develop and implement a monitoring and evaluation software tool that will aid in
the monitoring and evaluation process in the Africa-ai-Japan Project.
4. To test the system.
2.3 Scope
The study will be limited to the Africa-ai-Japan Project at the Jomo Kenyatta University of
Agriculture and Technology, as it is in need of a monitoring and evaluation tool.

3. Justification
Monitoring and evaluation has become an integral part of a project, as it helps determine
whether goals are being achieved and what action to take next in order to achieve them.
Even so, in developing countries the monitoring and evaluation of projects is often not done
efficiently because of many issues, one of them being the lack of staff sufficiently trained in
the monitoring and evaluation process. This leads to the need for a tool that will help lessen
the learning curve required to perform monitoring and evaluation and help make the process
more effective. The proposed tool will help users perform the monitoring and evaluation
process in an easier and more effective way.
4. Literature relevant to the proposal
Lahey (2015) states that the main challenges facing monitoring and evaluation of ILO
projects were:

• The log frame identification of expected results generally fails to clearly identify the
full set of results and often confuses the articulation of ‘outputs’ with ‘outcomes’;
• The clarity and completeness of performance indicators to measure project progress
and success are frequently problematic;
• The performance measurement strategy in general tends to have serious gaps, in
particular, lack of relevant data/information sources and feasible measurement
strategies;
• There is too little or no monitoring of ‘other influencers’ that influence movement
along the results chain and ultimately, attainment of success. Recognition of such
‘influencers’ may bring to light the non-linear relationship inherent in a project’s
theory of change and the true complexity of the initiative;
• Most M&E plans generally need a more systematic, structured and comprehensive
approach to the collecting, reporting and analysis of data, including assigning
responsibility;
• M&E Plans frequently are neglected or are not implemented effectively.
Maimula (2017), in his case study, states that the challenges in practicing M&E include
political influence, a weak management team in M&E practice, and a lack of technical staff,
with staff being unqualified and untrained.
Mthethwa & Jili (2016), in their case study of the Mfolozi Municipality in South Africa, say
that the main challenge faced by the municipality is that the knowledge, skills and competence
required of those aspiring to and performing duties related to M&E of public projects are
limited. Municipal officials fail to understand the importance of M&E of the various projects
at the local government level. Therefore, they have failed to develop an institutional M&E
system (including M&E plans, indicators and tools).
Muzinda (2007) determined that the monitoring and evaluation practices of the local NGOs
fell short of best practice: most best practices were applied inconsistently and others not at
all. Planning for monitoring and evaluation was done inadequately and inconsistently by
respondents, and the monitoring and evaluation process was not implemented effectively.
Frankel & Gage (2016), in their mini-course, expound on the fundamentals of monitoring and
evaluation. They explain the basic monitoring and evaluation concepts and show how to
develop a monitoring and evaluation plan, frameworks, indicators and data sources.
Gage & Dunn (2010) define the fundamentals of monitoring and evaluation and give reasons
why monitoring and evaluation is essential during a project's lifetime. They offer useful
insights on monitoring and evaluation and go on to differentiate monitoring from evaluation,
from terminology to expectations.
5. Research Methods and Design
The research design is intended to answer the research questions mainly through qualitative
methods:

1. Interviews

Interview respondents with the intent of finding out the current monitoring and
evaluation techniques, the challenges they face and how they try to solve them. We
will also investigate the need for ICT in the monitoring and evaluation process.

2. Participant observation

Observe the methods used by institutions in monitoring and evaluating their projects.
This research method will help gain insightful data as to how they carry out their
monitoring and evaluation.

3. Case studies

A case study is a research strategy and an empirical inquiry that investigates a
phenomenon within its real-life context. Case studies are based on an in-depth
investigation of a single individual, group or event to explore the underlying causes
and principles.

6. Schedule
No. | Task Name | Duration (days) | Start | End | Deliverable
1 | Literature Review | 25 | 01/03/2020 | 26/03/2020 | Literature review document
2 | Literature Review Presentation | 1 | 27/03/2020 | 27/03/2020 |
3 | Research | 20 | 30/03/2020 | 19/04/2020 |
4 | Conducting Interviews | 13 | 30/03/2020 | 12/04/2020 | Research document
5 | Collecting gathered data | 7 | 12/04/2020 | 19/04/2020 | Research document
6 | Analysis | 13 | 20/04/2020 | 03/05/2020 |
7 | Analysis of gathered data | 6 | 20/04/2020 | 26/04/2020 |
8 | Review of analysis | 7 | 26/04/2020 | 03/05/2020 | Analysis
9 | System Design | 21 | 03/05/2020 | 24/05/2020 | System specification document
10 | System Implementation | 1 | 24/05/2020 | 25/05/2020 | System prototype
11 | Testing | | 06/06/2020 | 06/06/2020 |
12 | Requirements testing | 14 | 06/06/2020 | 20/06/2020 | Test report
13 | Usability testing | 16 | 20/06/2020 | 06/07/2020 | Test report

7. Budget

Item | Cost (KES)
Virtual machine for hosting | 4,500
Licenses for software | 1,200
Interview materials | 800
Transportation costs | 2,000
Meeting facilities for hosting interviews | 1,500
Domain name | 500
Total | 10,500

8. Conclusion
In conclusion, it is evident that the monitoring and evaluation process is critical in ensuring
that projects are conducted effectively and successfully; the process should therefore be as
effective and reliable as possible. This can be attained through the use of software and tools
made specifically for this purpose, and this research aims to create such a tool to aid
monitoring and evaluation in the future.

References
1. Lahey, R. (2015). Common issues affecting monitoring and evaluation of large ILO projects.
2. Maimula, S. (2017). Challenges in practicing monitoring and evaluation: The case of local government water projects in Mkuranga, Tanzania.
3. Mthethwa, R. M., & Jili, N. N. (2016). Challenges in implementing monitoring and evaluation (M&E): The case of the Mfolozi Municipality, South Africa.
4. Muzinda, M. (2007). Monitoring and evaluation practices and challenges of Gaborone-based local NGOs implementing HIV/AIDS projects, Botswana.
5. Frankel, N., & Gage, A. (2016). M&E Fundamentals: A Self-Guided Minicourse. U.S. Agency for International Development, MEASURE Evaluation, Interagency Gender Working Group, Washington, DC.
6. Gage, A. J., & Dunn, M. (2010). Monitoring and Evaluating Gender-Based Violence Prevention and Mitigation Programs.

CHAPTER 2 LITERATURE REVIEW


2.1 Background Information
Monitoring is the systematic process of collecting, analysing and using information to track a
programme’s progress toward reaching its objectives and to guide management decisions.
Monitoring usually focuses on processes, such as when and where activities occur, who
delivers them and how many people or entities they reach.
Monitoring is conducted after a programme has begun and continues throughout the
programme implementation period.
The four main purposes of monitoring and evaluation are:
1. To learn from experiences to improve practices and activities in the future;
2. To have internal and external accountability of the resources used and the results obtained;
3. To take informed decisions on the future of the initiative;
4. To promote empowerment of beneficiaries of the initiative.
Evaluation is the systematic assessment of an activity, project, programme, strategy, policy,
topic, theme, sector, operational area or institution’s performance. Evaluation focuses on
expected and achieved accomplishments, examining the inputs, activities, outputs, outcomes,
impacts, processes, contextual factors and causality, in order to understand achievements or
the lack of achievements. Evaluation aims at determining the relevance, impact,
effectiveness, efficiency and sustainability of interventions and the contributions of the
intervention to the results achieved.
Evaluations should help to draw conclusions about five main aspects of the project:

• relevance
• effectiveness
• efficiency
• impact
• sustainability

An evaluation should provide evidence-based information that is credible, reliable and useful.
The findings, recommendations and lessons of an evaluation should be used to inform the
future decision-making processes regarding the programme.
Monitoring and evaluation are critical for identifying and documenting successful
programmes and approaches and tracking progress toward common indicators across related
projects. Monitoring and evaluation form the basis for strengthening understanding of the
many multi-layered factors underlying a project.
This is especially relevant in resource-poor areas, where difficult decisions need to be made
with respect to funding priorities.
At the programme level, the purpose of monitoring and evaluation is to track implementation
and outputs systematically, and measure the effectiveness of programmes. It helps determine
exactly when a programme is on track and when changes may be needed. Monitoring and
evaluation forms the basis for modification of interventions and assessing the quality of
activities being conducted.
Monitoring and evaluation can be used to demonstrate that programme efforts have had a
measurable impact on expected outcomes and have been implemented effectively. It is
essential in helping managers, planners, implementers, policy makers and donors acquire the
information and understanding they need to make informed decisions about programme
operations.
Monitoring and evaluation help with identifying the most valuable and efficient use of
resources. It is critical for developing objective conclusions regarding the extent to which
programmes can be judged a “success”. Monitoring and evaluation together provide the
necessary data to guide strategic planning, to design and implement programmes and
projects, and to allocate, and re-allocate resources in better ways.
The development of M&E has been marred by a series of challenges ever since the 1960s.
The 1980s is recorded as the worst decade for the M&E tenet of projects. By then, the IMF
(International Monetary Fund) and the World Bank were at the forefront of advocating that
market forces be followed. The cost of debt contracted in the 1970s escalated again in the
1980s, forcing many economies open. Consequently, most governments then chose to adjust
to the economic tantrums that followed (Cameron, 1993).
The success of projects from multiple sectors such as health, agriculture, community
empowerment, and human rights, among others, depends on monitoring and evaluation
(M&E). According to the World Health Organization (2006), monitoring the progress of
critical goals and evaluating the effect of interventions and actions are vital in improving
performance and the achievement of satisfactory results. Prabhakar notes that monitoring and
feedback are critical factors that fuel the success of any given project. However, it is worth
noting that the targeted beneficiaries often do not feel the effectiveness of most of these
projects. This is true in some developing nations where, within the maternal health sector, the
percentage of mothers who do not make it through childbirth is far higher than in most
developed nations (United Nations, 2015). A closer look into the M&E systems of most NGOs
and governments depicts weaknesses in the systems that lag behind project results. In some
countries, like Canada, accountability is held highly as a responsibility of government as well
as of charity organizations.
In M&E the main concern is not only to know that a programme/project performs well but
how well it performs. Thus, it is important to devote substantial efforts (time and resources)
to monitor and evaluate the performance of development projects (Mackay, 2008).

2.2 Traditional Monitoring And Evaluation Systems

Traditional M&E systems are designed and implemented to assess the accomplishment of
activities/tasks, which relates to the "did they do it" question. The implementation approach
focuses on monitoring and assessing how well a project, program, or policy is being
executed, and it often links the implementation to a particular unit of responsibility. However,
this approach does not provide policymakers, managers, and stakeholders with an
understanding of the success or failure of that project, program, or policy. Results-based
M&E systems are designed to address the “so what” question. A results-based system
provides feedback on the actual outcomes and goals of interventions (Holzer, 2000).
Moreover, beneficiaries of research projects now demand that the implementers of these
projects be accountable for results, be transparent, and provide more efficient and effective
services (Kusek and Rist, 2001; Kusek and Rist, 2004). Furthermore, there are internal and
external (donor communities') demands to measure accurately the results of aid-financed
development activities. Results-based M&E is a powerful management tool that can help
different stakeholders or actors (e.g. ordinary users, policy makers and decision makers) to
track progress and demonstrate the impact of a given project, program, or policy (Hendricks
et al., 2008).
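The contrast between implementation-focused monitoring ("did they do it?") and results-based monitoring ("so what?") can be sketched as follows, using invented activity and outcome data. The structure shown is only one assumption about how such data might be recorded, not a prescribed format.

```python
# Hypothetical monitoring data for one project.
activities = {
    "train staff": True,
    "install boreholes": True,
    "hold community workshops": False,
}
outcome = {
    "indicator": "households with clean water (%)",
    "baseline": 30, "target": 80, "actual": 45,
}

# Implementation monitoring: "did they do it?" - count completed activities.
completed = sum(activities.values())
print(f"Activities completed: {completed}/{len(activities)}")

# Results-based monitoring: "so what?" - movement toward the intended outcome.
achieved = (outcome["actual"] - outcome["baseline"]) / (outcome["target"] - outcome["baseline"])
print(f"{outcome['indicator']}: {achieved:.0%} of the targeted improvement achieved")
```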
2.3 Building a Monitoring and Evaluation System
There has never been agreement on how many steps one should follow when building a
monitoring and evaluation system, but Kusek and Rist (2004) identify several steps, namely
(a minimal sketch following these steps is given after the list):
1. Formulation of goals: This step requires one to identify outcomes before establishing
performance indicators, because outcomes illustrate what success looks like and, when
assessed thoroughly, can tell whether interventions are successful.
2. Selection of outcome indicators to monitor: In general, these indicators should be
representative; simple and easy to interpret; capable of indicating time trends; sensitive
to changes within or outside the organisation; easy to collect and process data for; and
easy to update. In summary, a key performance indicator (KPI) should be Clear,
Relevant, Economical, Adequate and easy to Monitor (Adam et al., 2004). The indicator
is measured objectively from its qualitative or quantitative characteristics.
3. Set specific targets to reach and dates for reaching them: A target is a "specified
objective that indicates the number, timing and location of what is expected to be
realized" (IFAD, 2002). Thus, it is useful to set explicit targets to be achieved
throughout the programme cycle, along with indicators that will be used to judge these
achievements (Binnendijik, 2000).
4. Development of the system and monitoring of the results: Performance monitoring
systems should be developed to allow regular collection of data on actual results
(Binnendijik, 2000). Such a system should also support monitoring on a systematic or
continuous basis to track progress and gauge whether targets are met (Holzer, 1999).
5. Analyse and report the results: Analysing and reporting the evaluation of actual
results vis-à-vis the targets set, in order to make judgements about performance, is the
final step in developing a monitoring and evaluation system (Binnendijik, 2000).
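The sketch referred to above walks through the five steps in miniature: a goal, an outcome indicator, a dated target, periodically collected actuals, and a simple analysis against the target. All names, dates and values are hypothetical and serve only to illustrate the flow.

```python
from datetime import date

# Steps 1-3: a goal, an outcome indicator, and an explicit, dated target (all hypothetical).
goal = "Improve timeliness of project reporting"
indicator = "Reports submitted on schedule (%)"
target_value, target_date = 90, date(2020, 12, 31)

# Step 4: regular collection of actual results during implementation.
observations = {
    date(2020, 3, 31): 40,
    date(2020, 6, 30): 60,
    date(2020, 9, 30): 75,
}

# Step 5: analyse actuals against the target and report the judgement.
latest_date = max(observations)
latest_value = observations[latest_date]
status = "target met" if latest_value >= target_value else "below target"
print(f"{goal} | {indicator}")
print(f"Latest ({latest_date}): {latest_value}% vs target {target_value}% "
      f"by {target_date} -> {status}")
```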

2.4 Advantages of using Monitoring and Evaluation systems


According to Kusek and Rist (2004), using a performance-based monitoring and evaluation system can
help stakeholders and policy makers gauge whether goals were achieved. Mackay (2008) suggests that
such a system is useful in enhancing transparency and accountability because it reveals the extent to
which results have been attained. This system can also assist organizations to manage the activities of
programmes or projects at different levels. Furthermore, it helps policy-makers in decision making,
for example through the use of performance-based budgeting (Olubode-Awosola et al., 2008).

M&E, in itself, should not be seen as having an inherent value. The value of M&E does not
come from conducting M&E or having such information available; rather the value comes
from using the information to monitor, guide and control implementation for enhanced
performance and better results. (World Meteorological organization, 2009).

2.5 Challenges Faced in the Monitoring and Evaluation Process


Lahey (2015) posits that more than two-thirds of ILO independent evaluations have
inadequate monitoring and evaluation approaches and practices that hinder projects'
effectiveness. For this reason, the ILO Evaluation Office (EVAL) has targeted the growing
number of higher-budget projects for support in planning and implementing monitoring and
evaluation. Evaluation assessments of multiple ILO projects done between 2014 and 2015
have shown weaknesses that profoundly impact management measures and the expected
results. In response, a systematic approach leaning on the ILO Development Cooperation
internal guidance manual is being incorporated during the project design phase. Through
logframe development during the front-end preparation of the project document, it sets out
clear objectives together with the relevant activities related to their achievement.
Consequently, there is enormous potential for monitoring the progress of a project. However,
some serious gaps exist in areas associated with the framework and the related theory behind
the M&E plan. Lahey (2015) notes several challenges related to ILO projects in monitoring
and evaluation.
The articulation of the project's theory of change is limited or, where it exists, inadequate.
Lahey (2015) emphasizes the modification and enhancement of the current logframes, for
instance the addition of causal links, assumptions and risks for the programme's success.
Too little monitoring, or the absence of monitoring, of other variables that influence results
and the achievement of success may also be a serious gap. Identifying such gaps might shed
some light on the non-linear relationships inherent in a project's theory of change. There is
also a need for a systematic assignment of responsibilities in most M&E plans, such as for the
collection, reporting and analysis of data, if projects are to be successful. Lastly, many M&E
plans are neglected or do not have feasible objectives.
Maimula (2017), in his case study "Challenges in practicing monitoring and evaluation,"
asserts that M&E is hampered by political influence, a weak management team in M&E, a lack
of technical expertise in M&E, the limited strength of the monitoring team, and staff who are
not competent enough to lead M&E. Maimula (2017) also asserts that the findings indicate
that the project faces challenges in the form of inadequate technical experience influencing
the assessment of M&E, political issues influencing the assessment of M&E, the low strength
of the monitoring staff and inadequate management of M&E.
This is found to be true in the Mkuranga water project in Tanzania. As noted from Maimula's
(2017) findings, most M&E projects fail due to government influence on all activities,
including the selection of the task force, among others. It appears that there is also a lack of
transparency within government and NGO M&E of projects. It can also be noted that
governments introduce M&E in their projects mainly to promote transparency, strengthen
accountability, improve performance, and give information on outcomes to the public and
higher policy levels.
In the case study by Mthethwa and Jili (2016) on the challenges of implementing monitoring
and evaluation in the Mfolozi Municipality, myriad challenges are raised. The Mfolozi
Municipality is faced with the challenge that the knowledge, skills, and experience of those
assigned M&E duties over public resources are limited. This follows from the fact that
municipal officials fail to understand the relevance of M&E of the various projects at the local
government level. Consequently, they have not developed an M&E protocol system in the
form of M&E plans, tools, and indicators.
Mthethwa and Jili (2016) observe that M&E is a powerful management tool that helps the
government and the state to improve the way projects are undertaken in line with their mission
and vision. The data the government needs in making decisions and implementing policy
depends heavily on results from a performance feedback system used to make tactical,
strategic, and operational decisions (Mackay, 2007). The PSC (Public Service Commission)
has, however, noted that departments, as well as organs of state, do not use M&E as a
management criterion due to the lack of an M&E system to evaluate projects and programmes
(PSC, 2008).
In line with the Mfolozi municipality, there is a need to attract and retain a highly-skilled task
force from the angle of diversity. This is critical since having a diversified workforce elevates
productivity by having problems solved in different ways. Also, the task force should be
diverse and capable of dealing with the work for the benefit of the community and the
organization. The municipality must ensure that only capable people with the right skills are
placed in the project, since they can perform their duties adequately for the organization's
benefit. In this case, it can be argued that employing persons with adequate skills will benefit
the monitoring and evaluation of projects at the local and national government levels.
IFRC (2011) posits that M&E planning is a crucial component of the M&E system, involving
empirical planning for the project to monitor and evaluate the logframe's objectives as well as
its indicators. A monitoring and evaluation plan aids in managing the process of assessing and
reporting progress on ongoing project results and identifies the questions to be answered
through evaluation (USAID, 2016). More precisely, the monitoring and evaluation plan
describes the indicators, the personnel given responsibility, the forms and tools to be utilized,
and how the data will flow through the system (Bullen, 2014). It is therefore clear that, in the
absence of monitoring and evaluation plans, most M&E systems will fall into discontinuity,
since too little attention is paid at the planning stage (Simister, 2015).
Nalianya and Wanyonyi (2017) note that M&E plans need to be written down and shared with
the stakeholders and donors of any given project. This is so far the best practice, as it engages
a wide variety of stakeholders, and anyone who has a task within the project needs to be
engaged productively. However, it is essential to note that limited studies have been done on
stakeholders' involvement in monitoring and evaluation and its influence on the performance
of the project.
It can be seen that all M&E efforts face similar challenges. On a closer look, the challenges
witnessed in the Mkuranga water project are the same as those reported by Nyakundi (2014)
in a study dubbed "Factors influencing implementation of monitoring and evaluation
processes on donor-funded projects". In the study, Nyakundi (2014) revealed several
challenges, such as the small number of stakeholders involved in the implementation of M&E
of projects, the allocation of inadequate finances to project M&E, poorly trained personnel for
M&E, inadequate resources for M&E, a low level of technical skills in M&E and poorly
developed project reports.
A study conducted by Githika (2013) on HIV projects of civil society organizations (CSOs)
and stakeholders' involvement in M&E showed that most of them were not willing to take up
monitoring and evaluation. The study, which used a descriptive research design, found that the
involvement of donors, staff, community, and project beneficiaries in M&E planning was 16%,
11%, 48%, and 26%, respectively. Broadly, the M&E challenges in projects can be grouped
into four: lack of experience; inadequate financial and staff resources; a technical knowledge
gap regarding performance indicators and data retrieval, collection, and analysis; and
inadequate monitoring and evaluation practices.
According to Kusek and Rist (2004), developing countries have poor capacity in technical and
managerial skills, and their governments are often not only loosely interconnected but also
lack strong administrative cultures and function without the discipline of transparent financial
systems. As a result, performance is not linked to public expenditure, thereby hindering the
adoption of results-based systems. Thus, there is a need to develop unique systems to monitor
and evaluate projects in developing countries. This, therefore, implies the need to develop the
monitoring and evaluation system locally for the Africa-ai-Japan Project.

2.6 Relevance To The Research


The literature was reviewed to identify the problems faced during the monitoring and
evaluation of a project and to find possible ways to improve the process. Examining the
challenges described in the literature makes it evident that there are challenges in monitoring
and evaluation that the introduction of software can help address.

2.7 References
1. Githika, M. S. (2013). Influence of Project Management Practices on Implementation of
HIV and AIDS Projects: A Case of Civil Society Organizations in Imenti North
Subcounty, Meru County, Kenya. Master's Thesis, 1-96.
2. Jili, N. N., & Mthethwa, R. M. (2016). Challenges in implementing monitoring and
evaluation (M&E): The case of the Mfolozi Municipality.
3. Mackay, K. (2007). How to build M&E systems to support better government.
Washington, DC: World Bank.
4. Nyakundi, A. (2014). Factors influencing implementation of monitoring and evaluation
processes on donor-funded projects: A case of Gruppo per le Relazioni Transculturali
(GRT) project in Nairobi, Kenya. Research project report, Master of Arts in Project
Planning and Management, University of Nairobi, Kenya.
5. Public Service Commission (PSC). (2008). A fundamental concept in monitoring and
evaluation.
6. Lahey, R. (2015, November). Common issues affecting monitoring and evaluation of
large ILO projects: Strategies to address them. i-eval THINK Piece, No. 9.
7. Simister, N. (2015). M&E Plans. INTRAC Publications, 1-3.
8. IFRC. (2011). Project/programme monitoring and evaluation (M&E) guide. Geneva.
9. Cameron, J. (1993). The challenges for monitoring and evaluation in the 1990s. Project
Appraisal, 8(2), 91-96.
10. Frankel, N., & Gage, A. (2016). M&E fundamentals: A self-guided mini-course.
MEASURE Evaluation, MS-07-20. https://www.globalhealthlearning.org/course/me-fundamentals
11. Dunn, M., & Gage, A. (2010). M&E of constructive men's engagement (CME) programs.
Chapel Hill, NC, USA: MEASURE Evaluation, University of North Carolina.
12. Sanga, C., Fue, K., Nicodemus, N., & Kilima, N. (2013). Web-based system for
monitoring and evaluation of agricultural projects. Interdisciplinary Studies on
Information Technology and Business (ISITB), 1, 17-43.
13. Hendricks, M., Plantz, M. C., & Pritchard, K. J. (2008). Measuring outcomes of United
Way–funded programs: Expectations and reality. In Carman, J. G., & Fredericks, K. A.
(Eds.), Nonprofits and Evaluation. New Directions for Evaluation, USA.
14. Holzer, M. (2000). Public performance evaluation and improvement. Evaluation
Capacity Development in Asia, National Center for Public Productivity, Rutgers
University, NJ, USA.
15. Kusek, J. Z., & Rist, R. C. (2004). Ten steps to a results-based monitoring and evaluation
system: A handbook for development practitioners. World Bank, Washington, D.C.
16. Sanga, C., Kadeghe, F., & Kilima, F. T. M. (2012). Projects Monitoring and Evaluation
Information System: Case study of EPINAV Programme, Sokoine University of
Agriculture, Tanzania. LAP Lambert Academic Publishing, Germany.
