
THE INFLUENCE OF MONITORING & EVALUATION ON PROJECT

PERFORMANCE AMONG NON-GOVERNMENTAL


ORGANIZATIONS IN HARGEISA SOMALILAND
___________________________________________________________________________________

HAMSE ABDIRAHMAAN AHMED

A thesis submitted to the Department of Project Management in partial fulfillment of the
requirements for the award of the degree of Master of Project Management, Abaarso Tech University.

AUGUST 2021
DECLARATION

I, Hamse Abdirahmaan Ahmed, declare that this thesis is my original work and has not been
presented for a degree in any other university.

Hamse Abdirahmaan Ahmed

Signature -----------------------------------------------------Date----------------------------------------

DECLARATION BY THE SUPERVISORS

This thesis has been submitted for examination with my approval as the Abaarso Tech
University supervisor.

Supervisor(s)

Mr. John Otieno

Signature -----------------------------------------------------Date --------------------------------------------

ACKNOWLEDGEMENT

First and foremost, heartfelt thanks to Allah, who deserves to be thanked at all times and in every
condition of life. During my research, I enjoyed considerable support from many people who are
very important in my life; they were cooperative, knowledgeable, creative, and constantly
encouraging. Second, I would like to express my gratitude to John Otieno, my respected
supervisor, for his advice and kind assistance. Special thanks also to my family for their parental
love, moral assistance, advice, and guidance; I would not have reached this stage without them. I
salute them all. I wish to acknowledge my colleagues, students, and friends who, consciously or
unconsciously, have helped this work to be done.

Contents
DECLARATION.......................................................................................................................................... I
ACKNOWLEDGEMENT ......................................................................................................................... II
LIST OF FIGURES ................................................................................................................................... V
LIST OF TABLES .................................................................................................................................... VI
ABSTRACT ............................................................................................................................................. VII
LIST OF ABBREVIATIONS AND ACRONYMS ............................................................... VIII
DEFINITION OF TERMS....................................................................................................................... IX
CHAPTER ONE ......................................................................................................................................... 1
1.0 Introduction ......................................................................................................................................... 1
1.1 Background of The Study ................................................................................................................... 1
1.3 Objectives of The Study...................................................................................................................... 4
1.3.1 General Objectives ....................................................................................................................... 4
1.3.2 Specific Objectives ...................................................................................................................... 4
1.4 Research Questions ............................................................................................................................. 4
1.7 Justification of the Study................................................................................................................. 5
1.8 Scope of The Study ......................................................................................................................... 5
1.9 Limitation............................................................................................................................................ 5
CHAPTER TWO ........................................................................................................................................ 6
2.0 Introduction ......................................................................................................................................... 6
2.2 Theoretical Review ............................................................................................................................. 6
2.2.1 Program Theory ........................................................................................................... 6
2.2.2 Theory of Change ........................................................................................................ 7
2.2.3 Evaluation Theory ........................................................................................................................ 7
2.3 Empirical Review................................................................................................................................ 8
2.3.1 Types of Monitoring .................................................................................................................... 8
2.3.2 Monitoring and Evaluation Practices ........................................................................................... 9
2.3.2.9 Tools and Methods in Monitoring and Evaluation ................................................................ 13
2.3.2.10 Project performance .............................................................................................................. 15
2.3.2.11 Monitoring and Evaluation Practices Related Studies .......................................................... 16
2.3.2.12 Tools and Methods in Monitoring and Evaluation Related Studies..................................... 17

2.3.2.13 Project Performance Related Studies .................................................................... 18
2.4 Summary and Research gaps ............................................................................................................ 19
CHAPTER THREE .................................................................................................................................. 20
3.0 Introduction ....................................................................................................................................... 20
3.1 Research Design................................................................................................................................ 20
3.2 Research Population.......................................................................................................................... 20
3.3 Sample Size....................................................................................................................................... 20
3.4 Sampling Procedure .......................................................................................................................... 21
3.5 Research Instruments ........................................................................................................................ 21
3.6 Data Collection Procedure ................................................................................................................ 21
3.7.1 Before the administration of the questionnaires......................................................................... 21
3.7.2 During the administration of the questionnaires ........................................................................ 21
3.7.3 After the administration of the questionnaires ........................................................................... 22
3.8 Data Processing and analysis ............................................................................................................ 22
3.9 Ethical Considerations ...................................................................................................................... 22
CHAPTER FOUR..................................................................................................................................... 23
DATA PRESENTATION AND ANALYSIS ........................................................................................ 23
4.0 INTRODUCTION ........................................................................................................................ 23
4.1. Socio-demographic characteristics of the respondents ................................................................ 23
4.2 Monitoring and Evaluation ............................................................................................................... 24
4.3 Tools and Methods used by M&E .................................................................................................... 28
4.4 Project Performance .......................................................................................................................... 30
CHAPTER FIVE ...................................................................................................................................... 33
5.0 Introduction ....................................................................................................................................... 33
5.1 Summary ........................................................................................................................................... 33
5.2 Conclusion ........................................................................................................................................ 34
5.3 Recommendations ............................................................................................................................. 35
REFERENCES .......................................................................................................................................... 36
APPENDIX I: INTRODUCTION LETTER .......................................................................................... 39
APPENDIX 2 - QUESTIONNAIRE ........................................................................................................ 40
STUDENT LETTER OF VERIFICATION ............................................................................ 45

LIST OF FIGURES

Figure 2.1: Relationship Between the Variables ........................................................................... 19
Figure 4.2.1: Practical experience in monitoring and evaluation system in your organization ..... 24
Figure 4.2.2: Do you have direct involvement in the Monitoring and Evaluation System of the organizations .................................................................................................................................. 25
Figure 4.2.3: Organization have a plan that guides monitoring and evaluation when implementing the program/project ......................................................................................................................... 25
Figure 4.2.4: Stakeholders involved in planning monitoring and evaluation of projects ............... 26
Figure 4.2.5: Guided monitoring and evaluation plan activities in organization ........................... 26
Figure 4.2.6: The monitoring and evaluation activities have ......................................................... 27
Figure 4.2.7: Separate budget is allocated for monitoring and evaluation activities ..................... 27
Figure 4.2.8: Organization use the logical framework approach (log frame) so as to plan about M&E activities in the organization .................................................................................................. 28
Figure 4.3.1: The tools and methods used in Monitoring and evaluation in the organization ....... 29
Figure 4.3.2: Do you have any difficulties in using the M&E system? .......................................... 30
Figure 4.4.1: How do you rate the performance of your project compared to other NGOs? ......... 31
Figure 4.4.2: Relationship between monitoring & evaluation and project performance ................ 32

LIST OF TABLES

Table 3.3.1: Population and sampling in the study ......................................................................... 20
Table 4.1.1: Socio-demographic characteristics of respondents (N=50) ........................................ 23
Table 4.2.1: How often do you document lessons learned on the project implementation ............ 28
Table 4.3.1: The applicability of these tools and methods used by M&E in your organization ..... 29
Table 4.4.1: In which stage do you make Monitoring and Evaluation in the project ..................... 30
Table 4.4.2: The factors necessary for improving project performance in NGOs ......................... 30

ABSTRACT

Monitoring and Evaluation (M&E) is one of the critical elements of the project management
cycle; without it, it is difficult to meet project objectives and maintain a high level of employee
performance. This study focuses on the influence of monitoring and evaluation on project
performance in NGOs. The specific objectives of the study were to identify the monitoring and
evaluation practices carried out by NGOs, to identify the tools and techniques used in monitoring
and evaluation, and to establish the relationship between monitoring and evaluation and the
project performance of non-governmental organizations. The study design was descriptive.
Primary data were collected for the analysis; the target population comprised 33 monitoring and
evaluation team members and 17 project team leaders, and the sample was selected using a
stratified random sampling method. The study found that the monitoring and evaluation plan is
prepared with the involvement of project managers, M&E experts, team leaders, and consultants.
Respondents were divided on budgeting: most said there is a separate budget for monitoring and
evaluation activities, while others said no specific budget is allocated to them. The tools and
techniques in common use are logical frameworks, theory-based evaluations, formal surveys,
rapid appraisal, and participatory methods. The main factor affecting the use of the M&E system
is the role of management in M&E operations. Regarding the relationship between monitoring
and evaluation and performance, a positive relationship was found: 85% of respondents agreed
that M&E and project performance are related. The researcher concluded that the factors
necessary for improving project performance in NGOs are project planning and quality,
employee skills, identification of problems during planning, and environmental scanning. The
researcher recommends that all NGOs focus on M&E to improve project performance, because
monitoring and evaluation helps projects understand the environment in which they operate and
how to proceed.

LIST OF ABBREVIATIONS AND ACRONYMS

M&E : Monitoring and Evaluation


NGOs : Non-Governmental Organizations
CPA : Critical Path Analysis
CPM : Critical Path Method
CV : Cost Variance
CPI : Cost Performance Index
UNAIDS : Joint United Nations Programme on HIV/AIDS
PASSIA : Palestinian Academic Society for the Study of International Affairs
WVI : World Vision International
OECD : Organisation for Economic Co-operation and Development
LFA : Logical Framework Approach
DDG : Danish Demining Group
AVR : Armed Violence Reduction
MIS : Management Information System
IFRC : The International Federation of Red Cross and Red Crescent Societies
UNDP : United Nations Development Programme
SPSS : Statistical Packages for Social Sciences

DEFINITION OF TERMS

Monitoring planning: Part of project management that relates to the use of schedules, such as Gantt charts, to plan and report progress within the project environment (PMBOK Guide, 2017).

Monitoring and Evaluation: Monitoring is defined as the routine, continuous tracking of the key elements of project implementation performance (inputs, activities, and outputs) through record-keeping and regular reporting, while evaluation is the episodic assessment of an ongoing or completed project to determine its actual impact against the planned impact, efficiency, sustainability, and effectiveness (McCoy, Ngari, & Krumpe, 2005).

Monitoring: The collection of project performance data with respect to a plan or a practice to produce performance measures, and the reporting and dissemination of performance information (McCoy et al., 2005).

Performance: Success in meeting pre-defined objectives, targets, and goals; in simple terms, it refers to getting the job done or producing the result that you aim at (Harish, 2010). Performance of a project is multifaceted and may include unit cost, delivery speed, and the level of client satisfaction (Ling, 2004).

Practice: A specific type of professional or management activity that contributes to the execution of a process and that may employ the adoption of a plan, technique, and tools (PMBOK Guide, 2017).

Project: A temporary endeavor undertaken to create a unique product, service, or result (PMBOK Guide, 2008).

Project Management: The application of knowledge, skills, tools, and techniques to undertake a project successfully and add value (PMBOK Guide, 2008).

Technique: A defined systematic procedure employed by a human resource to perform an activity to produce a product or result or deliver a service, and that may employ one or more tools (PMBOK Guide, 2008).

Tool: Something tangible, such as a template or software program, used in performing an activity to produce a product or result (PMBOK Guide, 2008).

CHAPTER ONE
1.0 Introduction
This chapter covers the background of the study, the problem statement, the purpose of the study,
the objectives of the study, the research questions, the scope of the study, and the significance of the study.

1.1 Background of The Study

Monitoring and evaluation activities are said to have originated in the United States in the 1950s.
They were started by an institute of higher learning, the Urban Institute of the USA, which wanted
to evaluate the efficiency of government programs against what the government was promising
to do. The Institute's exercise to evaluate government activities was named "Promise and
Performance" and was publicized in 1979. Since then, monitoring and evaluation programs have
spread all over the world, including Africa, and have mostly attracted the attention of institutes of
higher learning and research centers (Ngarambe, 2015).

Participatory monitoring and evaluation is one of many approaches for ensuring that the
implementation of the different projects within an action plan, or of smaller individual projects,
leads to the expected outcomes. Monitoring and evaluation are separate practices dedicated to the
assessment of an NGO's overall performance. Monitoring is a systematic and long-term process
that gathers information regarding the progress made by an implemented project. Evaluation is
time-specific, and it is performed to judge whether a project has reached its goals and delivered
what was expected according to its original plan (Philip et al., 2008).

Developed countries, particularly those of the Organisation for Economic Co-operation and
Development (OECD), have had as many as 20 or more years of experience in M&E, while many
developing countries are just beginning to use this key public management tool. The experiences
of the developed countries are instructive and can provide important lessons for developing
countries (Kusek & Rist, 2004).

Developing countries now perform regular monitoring and evaluation activities. These range from
comprehensive national evaluation systems in countries such as India and Malaysia to basic
monitoring of selected projects in many countries in Africa and the Middle East (Tache, 2011).
In Africa, the South African government has placed increasing importance on Monitoring and
Evaluation during its third term of office since democracy. The imperative was to focus and
strengthen monitoring and evaluation capacity across all spheres of government (Mackay, 2006).

There are several non-governmental organizations in Rwanda, and World Vision International is
one of them. As a strong and big organization, World Vision International Rwanda has to move
with the modern trend of doing things, especially the formal way of presenting accountability
both in terms of funds and performance. Monitoring and evaluation are among the most
appropriate ways this can be done, especially as the organization has donors and benefactors to
be accountable to. To understand this well, we need to know what monitoring and evaluation are
(Ngarambe, 2015).

A project is a temporary endeavor undertaken to create a unique product, service, or result. A
project requires an organized set of work efforts that are planned at a level of detail that is
progressively elaborated as more information is discovered. To establish and operate an effective
project, all project managers perform several major functions. These functions enable project
managers to create a positive work environment and to provide opportunities and incentives, and
they are critical to the performance of any project manager and of the project in general. In many
projects, monitoring and evaluation is one of the pillars that support effective productivity. Plans
are essential, but they are not set in concrete: if they are not working, or if the circumstances
change, then plans need to change too. Monitoring and evaluation are both tools that help a project
or organization know when plans are not working and when circumstances have changed
(Kloppenborg, 2008).

In Somalia, the Monitoring and Evaluation Directorate was established in 2016. It has four units,
namely the Performance, Monitoring and Review; Evaluation and Research; Management
Information System (MIS); and Reporting and Coordination units. The department is at the center
of sound governance arrangements and is crucial for the achievement of evidence-based
policymaking, results-based decision-making, management, and accountability within the Somali
Government. The Directorate for Monitoring and Evaluation is in charge of the overall monitoring,
review, and evaluation of national plans, programs, and projects to determine whether they are
achieving their intended objectives. It tracks the progress of the implementation of national plans,
programs, and projects through systematic monitoring, reviews, assessments, and evaluations
(MPIEDS, 2018).

In 2009, the Danish Demining Group (DDG) published a Manual on Impact Monitoring for staff
involved in its mine action programs, which is being adapted and used by DDG's Armed Violence
Reduction (AVR) programs. Monitoring and Evaluation (M&E) is a key component of the
Community Safety Programme (CSP) in Somaliland, although DDG acknowledges that it needs
to be strengthened. M&E takes place during the following stages of the program: (1) the
Participatory Needs Assessment and baseline survey (during the community entry phase); (2)
ongoing monitoring, which involves a mix of staff and volunteer reporting and monitoring visits
by national and international managers; and (3) the Participatory Impact Assessment, which takes
place once the nine-month CSP cycle has concluded; the M&E team uses household
questionnaires, focus group discussions, and interviews with key informants (Naidoo, 2012).

First of all, monitoring and evaluation are essential for assessing whether your project is achieving
its set targets. For instance, by monitoring the project's development, you will easily understand
whether strategic changes need to be made and act accordingly. Second, monitoring and
evaluation are relevant to donors who need to assess whether your NGO is a reliable partner. By
reviewing the milestones and final outcomes of your project, donors will decide on the
accountability of your NGO, upon which further collaborations could be established (Lewis, 2005).

1.2 Problem Statement

In projects, Monitoring and Evaluation is one of the critical elements of the project management
cycle. However, organizations, especially NGOs, often implement project M&E just to adapt to
demands and pressures from funding agencies rather than as a measure to contribute to project
performance (Zall Kusek & Rist, 2004). From many organizations' perspectives, M&E is provided
by organizations to track, analyze, and report relevant information and data throughout the life
cycle of a project. Monitoring and evaluation significantly improve project performance (Kihuha,
2018). According to the International Federation of Consultancy (2012), monitoring and
evaluation help those involved with community development projects to assess whether progress
is being achieved in line with expectations.
Monitoring and evaluation, when carried out correctly and at the right time and place, are two of
the most important aspects of ensuring the success of many projects. Unfortunately, although
these two are known to many project developers, they tend to be given little priority and, as a
result, are done simply for the sake of fulfilling the requirements of most funding agencies,
without the intention of using them as a mechanism for ensuring the success of the projects
(Otieno, 2000).

The common problem facing Somaliland NGOs, as in many other developing countries, is how
to run a project successfully and efficiently and to determine whether they have achieved their
stated goals. In many NGOs, monitoring and evaluation is something that is used by stakeholders
to assess project requirements and project performance. This study therefore aims to explore the
impact of monitoring and evaluation in non-governmental organizations and the benefits of
monitoring and evaluation for NGOs.

1.3 Objectives of The Study


1.3.1 General Objectives

The general objective of this study is to examine the influence of monitoring and evaluation on
project performance in Somaliland NGOs.

1.3.2 Specific Objectives

1. To identify the monitoring and evaluation practices carried out by NGOs.
2. To identify the tools and techniques used in monitoring and evaluation in NGOs.
3. To establish the relationship between monitoring and evaluation practices and the project
performance of NGOs.

1.4 Research Questions

1. What are the monitoring and evaluation practices carried out by NGOs?
2. What tools and techniques are used in monitoring and evaluation in NGOs?
3. What is the relationship between monitoring and evaluation and the project performance of
NGOs?

1.7 Justification of the Study

This study will contribute to the provision of knowledge that has generally been lacking on the
monitoring and evaluation exercise in non-governmental organizations. It will also help project
managers to understand how and when to carry out monitoring and evaluation in their projects
and help them to review and analyze their monitoring and evaluation systems to maintain or
increase project performance.

1.8 Scope of The Study

Geographically, the study of monitoring and evaluation and project performance was carried out
among NGOs in Hargeisa, Somaliland, especially in the 26 June and Koodbuur districts. In terms
of content, the study aimed at discovering information about the influence of monitoring and
evaluation on project performance. In terms of time, the study focused on monitoring and
evaluation of project performance, considering both completed projects and projects in progress.

1.9 Limitation

In some cases, researchers are challenged with a number of problems while conducting their
research. These include, but are not limited to, the following:

Some people may refuse to divulge information. To avoid this, the questionnaires and interviews
were set out clearly and concisely to encourage the sampled population to cooperate.

Access to some information on monitoring and evaluation might be very difficult for the
researcher to obtain. The researcher tried to be tactful and diplomatic in order to collect all the
needed information.

Extraneous variables beyond the researcher's control, such as respondents' honesty, personal
biases, and the uncontrolled settings of the study.

CHAPTER TWO

2.0 Introduction
This chapter discusses the literature that in one way or another relates to the study being
undertaken. It critically analyzes written sources such as library-based research, internet sources,
reports, theories, and other already documented information related to the topic under study. The
use of related literature helps the researcher to gain clarity on the issue under investigation and
also identifies the gap that this research fills.

2.2 Theoretical Review

2.2.1 Program Theory
Program theory was developed by Huey Chen, Peter Rossi, Michael Quinn Patton, and Carol
Weiss (1995). The theory is concerned with how to bring about change and who is responsible for
the change. Logic models, often used to represent program theory, show the overall logic used in
an intervention. The theory sits within the body of the theory of change and is applied in the
development evaluation field. Its proponents applied it for several years to the question of how to
relate program theories to evaluation (Weiss). For many years, program theory served as a useful
tool for monitoring evaluations; it was well known for its conclusive mechanism for solving
problems and for the requirement to carry out assessments to supplement the findings. It also
provides tools to control influential areas in evaluation (Müller & Turner, 2007). Most NGOs deal
with human service programs designed to improve society, which are at times designed and
redesigned in due course (Hosley, 2005). As a result, program theory uses the logical framework
methodology. Program theory is a comprehensive version of the logic model, and the logic model
is used to guide stakeholders' participation, management, and outcome evaluation (Hosley, 2005).

In this study, program theory provides a way to frame monitoring and evaluation practices.
According to Rossi (2012), program theory consists of an organizational plan for how to utilize
resources and organize the program's activities. Uitto (2004) illustrates the benefits of using
program theory in monitoring and evaluation: it includes the ability to attribute outcomes to
specific projects or activities and to identify anticipated and undesired program outcomes.
Program theory enables the evaluator to understand why and how the program is working
(Rossi, 2012).

2.2.2 Theory of Change

The theory of change is a component of program theory that emerged in the 1990s to improve
evaluation theory (Seith, 2012). A theory of change is used for discovering how to solve
complicated social problems. It gives a complete picture of the early and intermediate-term
changes that are needed to reach a long-term set goal (Anderson, 2005). It therefore provides a
model of how a project should be run, which can be tested and refined through monitoring and
evaluation. A theory of change is also a specific and measurable description of the change that
forms the basis for planning, implementation, and evaluation. Most projects have a theory of
change, although it is usually implicit (CPWF, 2012). The theory of change helps in the
development of understandable monitoring and evaluation frameworks. NGOs and donors mainly
use it to express the long-term impact of projects (James, 2011).

2.2.3 Evaluation Theory

Evaluation theory outlines useful ways of addressing issues that arise throughout the evaluation
process. Lessons are learned about what does not work, saving program designers' and evaluators'
time and resources (Donaldson, 2001). Evaluation theory assesses a project's effectiveness in
achieving its goals and determines the relevance and sustainability of an ongoing project
(J. Abalang, 2016).

There are two sorts of evaluations, depending on when they take place: summative and formative
evaluations. Formative evaluation is primarily concerned with ensuring that resources are used
efficiently to produce outputs. It focuses on the project's strengths, weaknesses, and problems,
and even on whether the current project plan will be able to meet the project's objectives or
whether it needs to be redesigned (Nabris, 2002). Formative evaluations are sometimes called
interim or midterm evaluations. A summative evaluation is carried out at the end of the project
and aims to determine how the project progressed, what went right and wrong, and to capture any
lessons learned.

Macdowall (2000) identifies two types of summative evaluation geared towards guiding future
projects by facilitating organizational learning through documenting good practices and mistakes.
Outcome evaluation is concerned with the extent to which the set objectives were achieved and
how we can attribute the project's role to the outcomes. To carry out monitoring and evaluation
effectively, there are some critical factors that must be taken into account. These include relevant
skills, sound methods, adequate resources, and transparency for the evaluation to be of quality
(Fitzgerald, Posner, Workman, & Institute, 2012). The resources here include skilled personnel
and financial resources.

Concerning this study, evaluation theory is essential for evaluating practice in NGOs, especially
NGOs that deal with human service programs, since evaluation theory prescribes the underlying
frameworks of evidence-based practice. According to Shadish (1991), the fundamental purpose
of evaluation theory is to specify feasible practices that evaluators can use to construct knowledge
about the value of social programs.

2.3 Empirical Review

2.3.1 Types of Monitoring

The IFRC manual (IFRC, 2011) gives a summary of the different types of monitoring commonly
used in project and program monitoring systems. It is important to remember that these monitoring
types often occur simultaneously as part of an overall monitoring system.

Results monitoring tracks effects and impacts. This is where monitoring merges with evaluation
to determine whether the project or program is on target towards its intended results (outputs,
outcomes, impact) and whether there may be any unintended impact (positive or negative). For
example, a psychosocial project may monitor that its community activities achieve the outputs
that contribute to community resilience and the ability to recover from a disaster.

Process (activity) monitoring tracks the use of inputs and resources, the progress of activities, and
the delivery of outputs. It examines how activities are delivered – the efficiency in time and
resources. It is often conducted in conjunction with compliance monitoring and feeds into the
evaluation of impact. For example, a water and sanitation project may monitor that targeted
households receive septic systems according to schedule.

Compliance monitoring ensures compliance with donor regulations and expected results, grant
and contract requirements, local governmental regulations and laws, and ethical standards. For
example, shelter projects may monitor that shelters adhere to agreed national and international
safety standards in construction.

Context (situation) monitoring tracks the setting in which the project/program operates, especially
as it affects identified risks and assumptions, but also any unexpected considerations that may
arise. It includes the field as well as the larger political, institutional, funding, and policy context
that affect the project/program. For example, a project in a conflict-prone area may monitor
potential fighting that could not only affect project success but also endanger project staff and
volunteers.

Beneficiary monitoring tracks beneficiary perceptions of a project/program. It includes beneficiary
satisfaction or complaints with the project/program, including their participation, treatment, access
to resources, and their overall experience of change. Sometimes referred to as beneficiary contact
monitoring (BCM), it often includes stakeholder complaints and feedback mechanisms. It should
take account of different population groups, as well as the perceptions of indirect beneficiaries
(e.g. community members not directly receiving a good or service). For example, a cash-for-work
program assisting community members after a natural disaster may monitor how they feel about
the selection of program participants, the payment of participants, and the contribution the
program is making to the community (IFRC, 2011).

Financial monitoring accounts for costs by input and activity within predefined categories of
expenditure. It is often conducted in conjunction with compliance and process monitoring. For
example, a livelihoods project implementing a series of micro-enterprises may monitor the money
awarded and repaid, and ensure implementation is according to the budget and time frame.
Organizational monitoring tracks the sustainability, institutional development, and capacity
building in the project/programme and with its partners. It is often done in conjunction with the
monitoring processes of the larger, implementing organization. For example, a National Society's
headquarters may use organizational monitoring to track communication and collaboration in
project implementation among its branches and chapters (IFRC, 2011).

2.3.2 Monitoring and Evaluation Practices

Through research and practice, the following practices have come to be known as effective in
achieving monitoring and evaluation objectives. The best practices associated with monitoring
and evaluation are described below.

2.3.2.1 Monitoring and Evaluation Plan

The project should have a monitoring and evaluation plan. The plan should be prepared as an
integral part of the project plan and design (Nabris, 2002, Palestinian Academic Society for the
Study of International Affairs (PASSIA); McCoy et al., 2005). This integration allows the clear
identification of project objectives against which performance can be measured.

2.3.2.2 Coherent Framework

Monitoring and evaluation should be aided by a coherent, structured conceptual framework. The
framework aids in identifying the logic behind project elements and performance measurement,
how they are related, and the underlying assumptions. One of the best practices that has been
adopted because of its structured approach is the use of the logical framework approach (LFA) as
a tool to aid both the planning and the monitoring and evaluation functions during implementation
(Aune, 2000; FHI, 2011). Vann open (1994), as quoted by Aune (2000), argues that the LFA
makes the planners of the project think, from the start, in terms of measuring performance by
identifying the measures and criteria for success during the planning stage. This gives it great
leverage in that, from the beginning, project design and hence implementation are integrated with
performance measurement through the identification of indicators that demonstrate how the
project is performing during implementation.

2.3.2.3 Monitoring and Evaluation budget

The project budget should make clear and adequate provision for monitoring and evaluation
activities. The monitoring and evaluation budget can be delineated within the overall project
budget to give the monitoring and evaluation function the recognition due to the role it plays in
project management (McCoy et al., 2005). Some authors argue for a monitoring and evaluation
budget of about 5 to 10 percent of the total budget (Kelly & Magongo, 2004). The intention of
this practice is not to prescribe what percentage is adequate but to set aside sufficient funds to
facilitate the monitoring and evaluation activities. The provision of a budget for monitoring and
evaluation ensures that monitoring and evaluation activities take place when they are due. It also
ensures that monitoring and evaluation are not treated as peripheral functions.

2.3.2.4 Schedule of Monitoring and Evaluation

The monitoring and evaluation activities of the project should be included in the project schedule
so that they are given the importance they require and are not done only at the whims of the
project manager (Miller, 2010; McCoy et al., 2005).

Individuals for Monitoring and Evaluation Activities

There should also be an individual who is directly in charge of monitoring and evaluation as their
main function (Kelly & Magongo, 2004), as well as identification of different personnel for the
different monitoring and evaluation activities, such as data collection, analysis, report writing, and
dissemination of the monitoring and evaluation findings (AusAID, 2006; McCoy et al., 2005).

2.3.2.5 Stakeholder Involvement

The involvement of all stakeholders (beneficiaries, implementation staff, donors, and wider
communities) in the monitoring and evaluation process of the project is very important. A
participatory approach to monitoring and evaluation is viewed as an empowerment tool for the
beneficiaries and other stakeholders of the project, who in most cases are not consulted in this
function. It is also a demonstration of downward accountability, i.e. accountability to the
beneficiaries. There is a lot of emphasis on upward accountability (Aune, 2000). This obsession
with upward accountability creates a barrier between the project and other stakeholders in terms
of monitoring and evaluation. This results in the process being geared towards satisfying the
donor's demands at the expense of the other stakeholders. Involvement of the beneficiaries in
monitoring and evaluation gives them a sense of ownership and contributes to long-term
sustainability long after the project donor has ceased financing the project, and it also increases
the chance of more beneficiaries taking up the services of the project. Other key neglected
stakeholders are the field staff involved in implementing the project.

2.3.2.5.1 Inputs
The different inputs of the project need to be monitored effectively to ensure that they are used
optimally on project activities to produce the desired outputs. The recommended practices for
monitoring each of the inputs as identified by the log frame approach include the following:

2.3.2.5.2 Financial Resources


Financial resources should be tracked against a project budget in which the project activities have
costs attached to them, comparing what has been spent on project activities with what should
have been spent as per the planned expenditure in the budget (Crawford & Bryce, 2003). This
expenditure information is obtained from the individual in charge of project accounts. The
comparison of actual expenditure versus planned expenditure should be made regularly to
determine whether the project is going over budget.
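
As a minimal illustration of this practice, the sketch below compares planned and actual expenditure per activity and flags overspending. The activity names and figures are hypothetical and are not drawn from the cited sources.

```python
# Hypothetical budget-monitoring sketch: planned vs. actual expenditure per activity.
planned = {"training": 12000, "field visits": 8000, "baseline survey": 5000}  # USD, illustrative
actual = {"training": 13500, "field visits": 7200, "baseline survey": 5000}

for activity, budgeted in planned.items():
    spent = actual.get(activity, 0)
    variance = budgeted - spent  # positive = under budget, negative = over budget
    status = "over budget" if variance < 0 else "within budget"
    print(f"{activity}: planned {budgeted}, spent {spent}, variance {variance} ({status})")
```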

2.3.2.5.3 Human Resources


Human resources on the project should be given transparent job allocations and designations
suitable to their expertise; if skills are inadequate, then training for the requisite skills should be
arranged. For projects with staff sent out into the field to carry out project activities on their own,
there is a need for constant and intensive onsite support to the field staff (Reijer, Chalimba, &
Nakwagala, 2002).

2.3.2.5.3 Activities
There are activities that are very important for the practicality of the monitoring and evaluation
system; these are described below.

2.3.2.5.4 Project schedule


Processes or activities to be done on the project are tracked with a project schedule or timeline. At
regular intervals, the actual schedule of activities completed is compared with the planned schedule
to determine whether the project is on schedule or behind schedule (Crawford & Bryce, 2003).

2.3.2.5.5 Outputs
For monitoring the outputs of the project, it is important to use a mix of both qualitative and
quantitative indicators.

2.3.2.6 Midterm and End of Project Evaluations

There is usually a midterm evaluation and another at the end of project implementation; in
addition, an impact assessment should be scheduled after the project has ended to determine what
the impact of the project was and what the project contributed to the attainment of the goal
(Gyorkos, 2003). The midterm evaluation and the evaluation at the end of the implementation
process (process evaluation) determine how the project fared in terms of input use, carrying out
the scheduled activities, and the level of outputs achieved in relation to the targeted outputs
(Gilliam et al., 2003). The short-term outcomes can also be evaluated at this point.

2.3.2.7 Capture and Documentation of Lessons Learned

Lessons learned from the implementation should be captured and documented for incorporation
into the subsequent projects and sharing with other stakeholders. The lessons would include what
went right in implementation, what went wrong, and why so that the mistakes are not repeated in
subsequent projects (Reijer et al., 2002). These lessons should be shared with the implementing
staff. The sustainability of the project should be determined. It is not easy to determine
sustainability, but the level of the communities' involvement can indicate the continuation of the
project activities even at the end of the funding period.

2.3.2.8 Dissemination of Monitoring and Evaluation Findings

There should be a dissemination plan for monitoring and evaluation findings. Monitoring and
evaluation findings should be disseminated to each stakeholder by way of a report according to
their requirements, and through communication or reports to the community, the beneficiaries,
and the implementing staff so that they can improve their implementation practices and strategies
(McCoy et al., 2005).

2.3.2.9 Tools and Methods in Monitoring and Evaluation

M&E systems use different tools and approaches, some of which are complementary or
substitutes for each other, while others are either broad or narrow (Bank, 2004). An evaluator may
choose to use a combination of methods and sources of information to cross-validate data
(Whitaker, 2002). The M&E system's tools include performance indicators, the logical framework
approach, theory-based evaluation, formal surveys, rapid appraisal methods, participatory
methods, public expenditure tracking surveys, impact evaluation, and cost-benefit and
cost-effectiveness analysis. The selection of these tools depends on the information needed, the
stakeholders, and the cost involved (Bank, 2004).

Organizations like the United States Agency for International Development (USAID) require their
grant recipients to document their M&E system in a Performance Management Plan, which is a
tool designed to help them set up and manage the process of monitoring, analyzing, evaluating,
and reporting progress towards achieving objectives. The Performance Management Plan also
serves as a reference document containing targets, a detailed definition of each project indicator,
the methods and frequency of data collection, and who is responsible for collecting the data. It
also provides details on how data will be analyzed and on the evaluations required to complement
monitoring data (UNDP, 2012).
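
As a rough sketch of what a single indicator entry in such a plan might contain, the example below lists typical fields (indicator, baseline, target, collection method, frequency, responsible person). The field names and values are hypothetical and do not reproduce USAID's prescribed format.

```python
# Hypothetical structure for one indicator entry in a performance management plan.
indicator_entry = {
    "indicator": "Number of households with access to safe water",
    "definition": "Households within 500 m of a functioning protected water point",  # assumed wording
    "baseline": 1200,
    "target": 2000,
    "data_collection_method": "household survey",
    "frequency": "quarterly",
    "responsible": "M&E officer",
}

# Print a one-line summary of how this indicator will be tracked.
print(f"{indicator_entry['indicator']}: target {indicator_entry['target']}, "
      f"collected {indicator_entry['frequency']} via {indicator_entry['data_collection_method']}")
```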

NGOs mainly use two principal frameworks: the results framework and the logical framework
(Jaszczolt K, 2010). A framework is an essential guide to monitoring and evaluation, as it explains
how the project should work by laying out the steps needed to achieve the desired results. A
framework therefore increases understanding of the project goals and objectives by defining the
relationships between the factors key to implementation and articulating the internal and external
elements that could affect the project's success. A good M&E framework can assist in thinking
through the project strategies and objectives and whether they are the most appropriate to
implement (Ending Violence against Women and Girls Programming Essentials 2, 2013). The
M&E framework should also include details on budgeting and the allocation of technical expertise,
as well as inform donors and project management on its implementation (Guijt & Woodhill, 2002).

The results framework (also known as the strategic framework) principle, endorsed in the Rome
Declaration on Harmonization in February 2003 and advanced by the Organisation for Economic
Co-operation and Development (OECD), applies a coherent framework to develop effective
practical tools for strategic planning, risk management, progress monitoring, and outcome
evaluation (Jaszczolt K, 2010). The logical framework, recognized internationally, is a matrix that
makes use of M&E indicators at each stage of the project and identifies possible risks. Hence, the
logical framework shows the conceptual foundation on which the project M&E system is built
(Chaplowe, 2008). It also works well with other M&E tools (Jaszczolt K, 2010). The log-frame
(logical framework) has four columns and rows that link the project goals and objectives to the
inputs, processes, and outputs required to implement the project.
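
A minimal sketch of such a matrix is shown below, assuming the conventional row labels (goal, outcome, outputs, activities) and column labels (narrative summary, indicators, means of verification, assumptions); the cell contents are hypothetical examples rather than material from the cited sources.

```python
# Illustrative log-frame skeleton: four rows by four columns.
columns = ["narrative_summary", "indicators", "means_of_verification", "assumptions"]

logframe = {
    "goal":       ["Improved community health", "Under-5 mortality rate", "Health ministry statistics", "Stable security situation"],
    "outcome":    ["Increased access to safe water", "% of households using safe sources", "Household survey", "Water points are maintained"],
    "outputs":    ["20 wells constructed", "Number of functioning wells", "Site inspection reports", "Spare parts remain available"],
    "activities": ["Drill wells; train water committees", "Budget spent vs. plan; activities on schedule", "Financial and progress reports", "Contractors deliver on time"],
}

# Print each row of the matrix with its column labels.
for row, cells in logframe.items():
    print(row + ":", dict(zip(columns, cells)))
```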

Preparing a logical framework: it is essential to know what the program/project is intended to do
and how it is expected to operate before forming an opinion on how it should be monitored and
evaluated. A careful description of its objectives and work plans must be the first step in designing
the procedures for monitoring and evaluation. Such a methodical description will result in a
logical framework for the program/project. Preparing such a framework requires undertaking
three main tasks: (1) definition of the content of the objectives (what is the project intended to
do?); (2) definition of the premises (hypotheses) underlying the program/project plan; and (3)
selection of indicators of program/project inputs, activities, and outcomes. Indicators of
program/project performance and outcomes depend on the objectives pursued and the strategies
adopted, which vary from program to program. The logical framework is a crucial instrument for
monitoring and evaluation as well as an important program/project planning device.

2.3.2.10 Project performance

When a successful company invests time, money, and other resources in a project, its primary
concern is always what it gets in return for its investment. It is the project manager's responsibility
to ensure that projects stay on schedule and within their approved budget. Performance
measurement provides the manager with the visibility to make sure the project is operating within
the approved time and cost constraints and is performing according to plan. It also alerts
management if a project begins to run over budget or behind schedule so that actions can quickly
be taken to get the project back on track (Verweire, Van Den Berghe, & Berghe, 2004).

For Chong (2008), it is the responsibility of the Project Management Office (PMO) manager to
ensure that project performance is being captured and reported. It is also his or her responsibility
to ensure that the level of reporting is achievable and does not unnecessarily overburden or distract
the project managers. Additionally, the PMO should have a standard for applying credit for work
performed. Most projects track the Schedule Variance (SV), Cost Variance (CV), Schedule
Performance Index (SPI), and Cost Performance Index (CPI). These four values provide a reliable
measurement of the project's performance.

Schedule Variance (SV): if SV is zero, the project is perfectly on schedule; if SV is greater than
zero, the project is earning more value than planned and is therefore ahead of schedule; if SV is
less than zero, the project is earning less value than planned and is behind schedule. Cost Variance
(CV): if CV is zero, the project is perfectly on budget; if CV is greater than zero, the project is
earning more value than planned and is therefore under budget; if CV is less than zero, the project
is earning less value than planned and is over budget. Schedule Performance Index (SPI): if SPI
is one, the project is perfectly on schedule; if SPI is less than one, the project is behind schedule;
if SPI is greater than one, the project is ahead of schedule. A well-performing project should have
its SPI as close to one as possible. Cost Performance Index (CPI): if CPI is one, the project is
perfectly on budget; if CPI is less than one, the project is over budget; if CPI is greater than one,
the project is under budget. A well-performing project should have its CPI as close to one as
possible (Nirere, 2014).
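
The rules above reflect the standard earned value relationships SV = EV - PV, CV = EV - AC, SPI = EV / PV, and CPI = EV / AC, where PV is the planned value, EV the earned value, and AC the actual cost. The short sketch below, using illustrative figures only, computes and interprets these four metrics.

```python
# Earned value sketch: planned value (PV), earned value (EV), and actual cost (AC) are illustrative.
pv, ev, ac = 50000.0, 45000.0, 48000.0

sv = ev - pv    # schedule variance: negative means behind schedule
cv = ev - ac    # cost variance: negative means over budget
spi = ev / pv   # schedule performance index: < 1 means behind schedule
cpi = ev / ac   # cost performance index: < 1 means over budget

print(f"SV = {sv:.0f} ({'ahead of' if sv > 0 else 'behind' if sv < 0 else 'on'} schedule)")
print(f"CV = {cv:.0f} ({'under' if cv > 0 else 'over' if cv < 0 else 'on'} budget)")
print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")
```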

Measuring project performance is an important part of project and program management. It allows
the PMO and project manager to identify cost and schedule problems early and take steps for
remediation quickly. It starts with setting the standards for the size of work packages, applying
credit for work performed, and deciding which earned value metrics to track, all of which should
be included in the project's cost management plan (Verweire et al., 2004).

Evaluating project performance provides the organization with a clear picture of the health of its
projects and can instill confidence in the project teams. Additionally, these performance measures
can help the PMO establish continuous improvement initiatives in areas where projects commonly
perform at lower levels. The usefulness of measuring project performance is evident and as long
as organizations do not become overwhelmed with them, these measures will remain important
contributors to organizational success (Nirere, 2014).

2.3.2.11 Monitoring and Evaluation Practices Related Studies

A study conducted by Workie (2018) focused on assessing the practices of monitoring and
evaluation in Commercial Bank of Ethiopia (CBE) projects. The main objective of the research
was to describe the monitoring and evaluation of projects in CBE. In addition, the study reported
on the monitoring and evaluation practices relating to finances, activities, and schedule. The study
used descriptive, qualitative, and quantitative methods and applied a purposive sampling
technique to determine the sample and sample size.

Muchelule (2018) investigated the influence of monitoring practices on the project performance
of Kenya state corporations, specifically examining whether monitoring planning, tools,
techniques, and their adoption influence the project performance of Kenya state corporations. The
study found that monitoring techniques and their adoption contribute significantly to project
performance, and that monitoring planning and tools contribute to organizational performance.
On the basis of the findings, the study concluded that monitoring best practices have a positive
impact on project performance in Kenya state corporations. It also proposed that organizations
require a monitoring tools framework policy, facilitated by developing specific tools and
instruments that can be used to mitigate the risks associated with the tools used. The study also
revealed that monitoring tools have no significant effect on project performance.

A study conducted by Kissi et al. (2019) focused on examining the impact of project M&E
practices on construction project success criteria. Structured questionnaires were used to solicit
the views of project professionals in the Ghanaian construction industry. The study found that
M&E practices had a positive, statistically significant relationship with construction project
success criteria. In addition, health and safety performance and project scope showed a strong,
significant relationship with M&E practice, implying that, in developing countries, these two main
constructs should be given critical attention in achieving project success.

2.3.2.12 Tools and Methods in Monitoring and Evaluation Related Studies

J. Abalang (2016) found that Caritas Torit used six tools and methods while conducting M&E. In
order of popularity, these ranged from the logical framework approach, impact evaluation
assessments, cost-benefit and cost-effectiveness analyses, theory-based evaluation methods, and
performance indicators to public expenditure tracking surveys. The main objective of the research
was to assess the performance of monitoring and evaluation systems at Caritas Torit in South
Sudan. The study concentrated on four parameters: the tools and methods used in M&E systems
at Caritas Torit in South Sudan, the influence of management on M&E systems at Caritas Torit in
South Sudan, the relationship between the training of M&E employees and performance, and
stakeholders' involvement.

A study conducted by Wanjiru and Kimutai (2013) looked at the determinants influencing the
effectiveness of M&E systems in NGOs within Nairobi County, Kenya. The study's objective was
to find out how the selection of tools and techniques, the role of management, M&E training, and
the technical expertise of staff contribute to the effectiveness of the M&E system. The findings
are presented in tables and charts. The results indicated that there are difficulties in the application
of the M&E systems, which were mainly attributed to the tools and techniques used. The study
reported that the logical framework was cited as one of the most popular tools used by the sampled
NGOs. It also proposed that M&E tools and techniques, and their limitations, should be identified
when preparing an M&E plan. The study concludes that NGOs should be flexible enough to allow
the modification of their M&E systems, including tools and techniques; that management should
have the know-how to run the project and the M&E system; that a capacity-building policy should
be put in place to emphasize M&E training across the NGO sector; and that a professional
association of M&E experts needs to be started.

2.3.2.13 Project Performance Related Studies

Rumenya and Kisimbi (2020) investigated the influence of monitoring and evaluation systems on the performance of projects in non-governmental organizations. The study's objective was to assess how organizational structures and human capacity for monitoring and evaluation influence project performance in non-governmental organizations in Mombasa County, and it focused on how a project M&E plan and work planning for M&E activities influence project performance in these organizations. The study used a descriptive research design, with structured questionnaires used to collect the data. The results showed that the performance of projects in the education sector was significantly and positively correlated with organizational structures for M&E, human resource capacity for M&E, and the project M&E plan.

Nirere (2014) carried out research to establish the contribution of monitoring and evaluation to project performance in the Hand in Hand project. The study was designed to assess how and when the Hand in Hand project carried out monitoring and evaluation of its project and to establish the relationship between monitoring and evaluation in the project and its performance. The study indicated that a majority of respondents agreed that project planning and design, data collection and analysis, implementation, and project closure are the monitoring and evaluation activities carried out by the Hand in Hand project, and that these lead to project performance.

2.4 Summary and Research gaps

Generally, M&E is an important element of project management for NGOs. However, monitoring and evaluation in NGOs is heavily reliant on donor funds, in contrast to profit-oriented organizations that are self-supporting in their projects. This research therefore examines the impact of monitoring and evaluation on project performance in NGOs. The gap this research fills is that no such study has previously been conducted in Somaliland, and in particular among NGOs in Hargeisa.

Figure 2. 1: Relationship Between the Variables

(Diagram: the independent variable, Monitoring and Evaluation, comprising monitoring practices and evaluation practices, is linked to the dependent variable, Project Performance.)

A conceptual framework is a diagram that illustrates the relationships among relevant factors that may influence the achievement of goals and objectives. This research examines the impact of monitoring and evaluation on project performance in NGOs. The independent variables are monitoring practices and evaluation practices, while project performance is the dependent variable. The connection between the dependent and independent variables is summarised in Figure 2.1 above.

CHAPTER THREE
3.0 Introduction

This chapter describes the methodology used by the researcher to attain the purpose of this research, which is to examine the effect of monitoring and evaluation on project performance. The chapter covers the research design, research population, sample size, sampling procedure, instruments used for collecting data, data processing and analysis, and ethical considerations.

3.1 Research Design

The research used a descriptive design, which is suitable for this kind of research problem. According to Cooper and Schindler (2000), descriptive research finds out who, what, where, when, and how much. Descriptive research is designed to obtain data that describe the characteristics of the topic of interest; accordingly, this study was designed to gather facts and information on the influence of monitoring and evaluation on project performance in NGOs.

3.2 Research Population


Population refers to the entire group of people, events, or organizations that a researcher wants to study. The population of this research is the staff of the 55 NGOs operating in Hargeisa (Consortium, 2016). According to some of the NGO managers, each project has one project manager and a two-member monitoring and evaluation team. The target population therefore consists of 110 monitoring and evaluation team members and 55 project managers, a total of 165 personnel drawn from the 55 NGOs that completed a project between 2018 and 2020 and are monitoring and evaluating their projects using a defined M&E system.

3.3 Sample Size


The researcher used a rule-of-thumb formula to calculate the sample size: if the population is less than or equal to 1,000, take 30% of the population. This gives 165 × 0.30 = 49.5, rounded up to 50.
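Written out as a formula (restating the figures above, with no new data):

\[
n \;=\; 0.30 \times N \;=\; 0.30 \times 165 \;=\; 49.5 \;\approx\; 50
\]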

In this study, a sample size of 50 was therefore used. The sample was composed of 33 monitoring and evaluation team members and 17 project managers, all selected from the 55 NGOs registered in Hargeisa.

Table 3.3.1: Population and sampling in the study

Respondent category     Target population   Sample   Sampling method   Data collection instrument
1. M&E team                   110              33       Stratified        Questionnaire
2. Project managers            55              17       Stratified        Questionnaire
TOTAL                         165              50

3.4 Sampling Procedure


Sampling is the process of selecting a sufficient number of elements from a population (Raval, 2009); it also refers to the techniques and procedures applied in choosing a sample. The sample for this research was selected using probability sampling, specifically the stratified random sampling method, to obtain the correct numbers of monitoring and evaluation team members and project managers.
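To make the sampling arithmetic concrete, the sketch below reproduces the 30% rule of thumb and a proportional allocation across the two strata. It is an illustration only: the proportional, largest-remainder allocation is an assumption of this sketch, since the study reports only the resulting stratum sizes (33 M&E team members and 17 project managers).

```python
import math

# Illustrative sketch (not part of the original study): a 30% rule-of-thumb
# sample drawn from the target population, allocated proportionally to stratum size.
strata = {"M&E team members": 110, "Project managers": 55}

N = sum(strata.values())        # target population: 165
n = math.ceil(0.30 * N)         # 30% rule of thumb -> 50

# Proportional allocation with largest-remainder rounding so the strata sum to n.
quotas = {name: n * size / N for name, size in strata.items()}
alloc = {name: int(q) for name, q in quotas.items()}
leftover = n - sum(alloc.values())
for name in sorted(quotas, key=lambda k: quotas[k] - alloc[k], reverse=True)[:leftover]:
    alloc[name] += 1

print(N, n, alloc)  # 165 50 {'M&E team members': 33, 'Project managers': 17}
```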

3.5 Research Instruments

This research used different instruments for collecting data; data collection and analysis were mainly supported through structured questionnaires and observation, designed together to collect the required information.

3.6 Data Collection Procedure

3.6.1 Before the administration of the questionnaires


The researcher requested an introduction letter from the university, addressed to the authorities of the organizations under study, so that he would be permitted to conduct the study. The letter contained the criteria for selecting the respondents and a request to be provided with a list of qualified respondents.

3.6.2 During the administration of the questionnaires


Specifically, the researcher and his assistants were particular in requesting the following of the respondents: (1) to answer all questions and not leave any item unanswered; and (2) to avoid bias and be objective in answering the questionnaires.

3.6.3 After the administration of the questionnaires
The data collected were organized, collated, summarized, statistically treated, and presented in tables using the Statistical Package for the Social Sciences (SPSS).

3.8 Data Processing and Analysis

The data collection procedure consisted of both quantitative and qualitative methods, carried out by visiting the target group. Data from the completed questionnaires were edited, categorized, coded, entered into SPSS, and summarized using simple frequency tables and percentage distributions for analysis. In the qualitative analysis, the researcher drew on the information collected from respondents to establish patterns and relationships in the area being studied. Quantitatively, the researcher summarized the data using descriptive statistics such as graphs, percentages, and frequencies, which made it possible to describe the distribution of scores and measurements meaningfully. These techniques made the presentation, analysis, and interpretation of the findings easier to comprehend and allowed the researcher to draw conclusions based on the findings.
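The analysis itself was carried out in SPSS. Purely as an illustration of the descriptive step described above, a hypothetical equivalent in Python/pandas might look like the sketch below; the column names and coded responses are invented for the example and are not the study's data.

```python
import pandas as pd

# Hypothetical, invented responses standing in for the coded questionnaire data;
# the actual study entered its data into SPSS.
responses = pd.DataFrame({
    "gender": ["Male", "Female", "Male", "Female", "Male"],
    "me_plan_exists": ["Yes", "Yes", "No", "Yes", "No"],
})

# Frequency and percentage distribution for each questionnaire item,
# mirroring the simple frequency tables reported in Chapter Four.
for column in responses.columns:
    freq = responses[column].value_counts()
    percent = (freq / len(responses) * 100).round(1)
    summary = pd.DataFrame({"Frequency": freq, "Percent": percent})
    print(f"\n{column}\n{summary.to_string()}")
```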

3.9 Ethical Considerations

The study gave important ethical consideration to the participants during data collection. It was carried out with the permission of the respondents, protected respondents' names, and kept all information confidential. The study also respected participants' dignity and used the information for academic purposes only. Finally, the researcher ensured that data collection followed acceptable research standards consistent with the objectives of the research.

CHAPTER FOUR

DATA PRESENTATION AND ANALYSIS


4.0 INTRODUCTION
This chapter focuses on the presentation, analysis, and interpretation of findings from the data collected through the questionnaire. The data are presented in tables and figures, from which percentages and frequencies were ascertained to provide the foundation for analysis and interpretation. During data collection, the researcher distributed 50 questionnaires, and all of them were returned, giving a 100% response rate.

4.1. Socio-demographic characteristics of the respondents

Table 4. 1. 1 Socio-demographic characteristics of Respondents (N=50)


Gender                 Frequency   Percent
Male                       27        55%
Female                     22        45%
Total                      49       100%
Educational level
Primary                     0         0%
Secondary                   0         0%
Diploma                     2         4%
Degree                     17        34%
Master's degree            31        62%
Total                      50       100%
Age
Less than 25 years         12        24%
26-35 years                25        50%
36-45 years                12        24%
46-50 years                 1         2%
51 years and above          0         0%
Total                      50       100%
Marital status
Single                     26        52%
Married                    22        44%
Divorced/Separated          2         4%
Total                      50       100%

The table shows that most of the respondents (55%) were male and 45% were female. In terms of marital status, most respondents were single (52%), followed by married (44%) and divorced/separated (4%). Regarding education, most respondents held a master's degree (62%), 34% held a bachelor's degree, and 4% held a diploma.

The table also indicates that most respondents were in the 26-35 year age group (50%), followed by the 36-45 and under-25 groups (24% each), with 2% aged 46-50.

4.2 Monitoring and Evaluation

Figure 4.2.1: Practical experience in monitoring and evaluation systems in the organization (pie chart: Yes 94%, No 6%)

This figure shows respondents' practical experience with monitoring and evaluation systems: 94% of respondents have practical experience with such systems, while 6% do not.

Figure 4.2.2: Direct involvement in the Monitoring and Evaluation System of the organization (pie chart: Yes 94%, No 6%)

This figure shows direct involvement in the monitoring and evaluation system: 94% of respondents are directly involved, while 6% are not.

Figure 4.2.3: Whether organizations have a plan that guides monitoring and evaluation when implementing the program/project (pie chart: Yes 62%, No 38%)

Of the 50 respondents, 62% responded that there is a monitoring and evaluation plan to refer to when carrying out monitoring and evaluation. The remaining 38% said there is no monitoring and evaluation plan, either because some projects are too small or because they do not know how to design one.

Figure 4.2.4: Stakeholders involved in planning monitoring and evaluation of projects (bar chart: yes/no/partially responses for project managers, team leaders, monitoring and evaluation experts, and consultants)

This figure shows the different stakeholders involved in planning the monitoring and evaluation of projects. Most respondents replied that project managers, team leaders, and monitoring and evaluation experts were the most involved in M&E planning. Consultants are also involved in M&E planning, though to a lesser extent.

Figure 4.2.5: Aspects specified in the plan that guides monitoring and evaluation activities in the organization (bar chart: yes/no/partially responses for data to be collected, frequency of data collection, an individual in charge of M&E, plan for dissemination of findings, and individuals for specific M&E activities)

This figure shows the aspects specified in the organizations' monitoring and evaluation plans. A majority of respondents indicated that their plans specify the data to be collected, the frequency of data collection, the individual in charge of M&E, the plan for dissemination of findings, and the individuals responsible for specific M&E activities.

Figure 4.2.6: Budget arrangements for monitoring and evaluation activities (pie chart: a separate budget 75%, no special budget 19%, no idea 6%)

As shown in the chart, 75% of the respondents reported that there is a separate budget for monitoring and evaluating projects, 19% reported that no special budget is set aside for monitoring and evaluation, and only 6% said they had no idea.

Figure 4.2.7: Percentage of the total project budget allocated for monitoring and evaluation activities (pie chart: not specific 44%, more than 10% 29%, less than 5% 15%, 5-10% 12%)

As shown in the chart, 44% of the respondents said no specific percentage is allocated for monitoring and evaluating projects, 29% reported more than 10%, 15% reported less than 5%, and only 12% said 5-10% of the project budget is allocated for monitoring and evaluation activities.

Figure 4.2.8: Use of the logical framework approach (log frame) to plan M&E activities in the organization (pie chart: Yes 96%, No 4%)

This figure shows that almost all organizations use the logical framework approach: 96% of respondents reported using a log frame, and only 4% do not.
Table 4.2.1: How often lessons learned are documented on project implementation

No. of projects        Frequency   Percent
All projects                7         64%
For some projects           3         27%
Few projects                1          9%
Never                       0          0%
Total                      11        100%

This table concerns the lessons-learned document prepared after project evaluation at the end of a project. Respondents were asked to share their experience of documenting lessons learned: 64% reported that a lessons-learned document is prepared for all projects, 27% for some projects, and 9% for only a few projects.

4.3 Tools and Methods used by M&E

Figure 4.3.1: Tools and methods used in monitoring and evaluation in the organization (bar chart: Logical Framework 18.36%, Performance Indicator 17.87%, Theory-Based Evaluation 14.49%, Formal Surveys 14.01%, Rapid Appraisal 10.14%, Participatory Method 10.14%, Public Expenditure Tracking Survey 7.73%, Cost-Benefit and Cost-Effectiveness 7.25%)

This figure shows the tools and methods used in monitoring and evaluation in the organizations. Most respondents replied that the tools and methods used are performance indicators, logical frameworks, theory-based evaluations, and formal surveys. Some respondents also reported using rapid appraisal and participatory methods, while only a few said that public expenditure tracking surveys and cost-benefit and cost-effectiveness analyses are used.

Table 4.3.1: Rated applicability of the tools and methods used by M&E in the organization

Response         Frequency   Percent
Very easy            23        47%
Easy                 13        27%
Difficult             8        16%
Very difficult        5        10%
Total                49       100%

This table shows the rated applicability of the tools and methods used for M&E in the organizations. 47% of the respondents indicated that the tools and techniques were very easy to apply, 27% indicated that they were easy, 16% found them difficult, and a minority (10%) found them very difficult.

Figure 4.3.2: Whether respondents have difficulties in using the M&E system (pie chart: No 70%, Yes 30%)

As shown in the figure, 70% of respondents do not have difficulties using the M&E system. The remaining 30% report difficulties, mainly attributed to the limited role of management in the operations of the M&E system.

4.4 Project Performance

Table 4.4.1: Stage at which monitoring and evaluation is carried out in the project

Response                                   Frequency   Percent
In planning                                    12         24%
In the implementation of a project             10         20%
In the closing phase of the project             8         16%
In all stages of the project                   20         40%
Total                                          50        100%

This table shows the stage at which monitoring and evaluation is carried out in the project. 40% of the respondents said monitoring and evaluation is carried out at all stages of the project, 24% said during planning, and 20% said during implementation. Only 16% said it is carried out in the closing phase of the project.

Table 4.4.2: Factors necessary for improving project performance in NGOs

Response                                                   Frequency   Percent
Employee skills                                                15         30%
Identification of problems in planning and implementation     13         26%
Improve project planning and quality                          17         34%
Environmental scanning                                          5         10%
Total                                                          50        100%

This table shows the factors considered necessary for improving project performance in NGOs. 34% of the respondents cited improved project planning and quality, 30% cited employee skills, and 26% cited the identification of problems in planning and implementation. Only 10% of the respondents cited environmental scanning. The researcher observed that all of these factors are present among NGOs in Hargeisa, indicating potential for improved project performance.

Figure 4.4.1: Rating of project performance compared to other NGOs (pie chart: Better 76%, Poor 12%, Not sure 12%)

This figure shows how respondents rated their projects' performance compared to other NGOs: 76% of respondents rated their project performance as better, 12% rated it as poor, and 12% were not sure.

Figure 4.4.2: Relationship between monitoring & evaluation and project performance (pie chart: Yes 85%, Not sure 10%, No 5%)

This figure shows respondents' perceptions of the relationship between monitoring & evaluation and project performance: 85% of respondents agreed that M&E and project performance are related, 10% were not sure, and only 5% said there is no relationship between monitoring & evaluation and performance.

CHAPTER FIVE
5.0 Introduction

In this chapter, the researcher presents a summary of the findings based on respondents' views, organized in line with the objectives of the research. The researcher also presents the conclusions of the study and gives recommendations.

5.1 Summary

This section summarizes the research findings in view of the research objectives, relating monitoring & evaluation to the performance of NGOs. The responses collected from 50 respondents are analyzed and presented here. Regarding the monitoring and evaluation system, 94% of respondents are involved in their organization's monitoring and evaluation system. Concerning the monitoring and evaluation plan, 62% of the respondents indicated that there is a plan to refer to while carrying out monitoring and evaluation, prepared with the involvement of project managers, M&E experts, team leaders, and consultants. On the budget for monitoring and evaluation, 75% of the respondents indicated that there is a separate budget, although 44% said no specific percentage of the project budget is allocated to monitoring and evaluation activities.

The lessons-learned document is prepared after completing project monitoring and evaluation at the end of the project; it records the positive and negative experiences encountered during the project. 64% of the respondents replied that a lessons-learned document is prepared for all projects.

NGOs used different tools and techniques in their M&E systems. The most common are performance indicators, logical frameworks, theory-based evaluations, formal surveys, rapid appraisal, and participatory methods. The factor contributing most to the difficulties faced in using the M&E system is the role of management in the operations of the M&E. A large number of the project managers and M&E staff also indicated that the tools and techniques were applicable: 47% rated their applicability as very easy and 27% as easy.

Respondents also revealed a relationship between monitoring & evaluation and project performance in NGOs: 85% of respondents agreed that M&E and project performance are related, while 10% were not sure. This led the researcher to understand that monitoring & evaluation and performance in NGOs are related. Regarding the factors necessary for improving project performance in NGOs, 34% of the respondents cited improved project planning and quality, and 30% cited employee skills. The researcher concludes that most respondents agree there is a relationship between monitoring and evaluation and project performance in NGOs.

5.2 Conclusion

The researcher concluded that in NGO projects there is a monitoring and evaluation plan prepared with the involvement of project managers, monitoring and evaluation experts, team leaders, and consultants. However, a small number of respondents stated that there is no monitoring and evaluation plan to refer to, mainly because their projects are too small. Regarding the budget for monitoring and evaluation, most respondents say there is a separate budget for monitoring and evaluating projects, although many also said that no specific percentage of the budget is allocated to monitoring and evaluation activities.

The common tools and techniques used are logical frameworks, theory-based evaluations, formal surveys, rapid appraisal, and participatory methods. The main factor affecting the use of the M&E system is the role of management in the operations of the M&E. NGOs should therefore be flexible enough to allow modification of their M&E systems, including the tools and techniques used, and should consider experiences from other organizations.

The researcher concluded that the factors necessary for improving project performance in NGOs are project planning and quality, employee skills, the identification of problems in planning, and environmental scanning. The contribution of monitoring and evaluation to NGOs is improved performance, and there is a positive relationship between monitoring & evaluation and the performance of NGOs.

5.3 Recommendations

The researcher recommends that all NGOs focus on M&E to improve project performance, because monitoring and evaluation helps projects understand the environment in which they operate and how to proceed. Monitoring and evaluation is used to ensure that the project is planned correctly and ultimately achieves its objectives. The results from monitoring and evaluation, together with lessons learned, are also valuable for the proper planning and accomplishment of future projects. Based on this assessment of monitoring and evaluation in NGO projects, the following recommendations are put forward so that projects can perform better in monitoring and evaluation in particular and be more effective in general: NGOs should have a separate budget for the monitoring and evaluation of projects, and their monitoring and evaluation systems should be flexible enough to allow modification, including of the tools and techniques used, while taking into account the experiences of other organizations.

REFERENCES

Abalang, J. (2016). Assessment of performance of monitoring and evaluation systems at CARITA Torit in
South Sudan.
Abalang, J. J. S. S., Nairobi. (2016). Assessment of Performance of Monitoring and Evaluation Systems at
Caritas Torit.
Anderson, A. (2005). An introduction to theory of change. The Evaluation Exchange 11 (2): 12. In.
Aune, B. (2000). Logical framework approach and Participatory Rural Appraisal mutually exclusive or
complementary tools for planning. 10, 5.
AusAID, A. A. (2006). Monitoring and Evaluation Framework Good Practice Guide Retrieved from
http://www.ausaid.gov.au/ngos/pages/ancp.aspx
Bank, W. (2004). Monitoring & Evaluation: some tools, methods and approaches. Retrieved from The
World Bank, Washington, D.C.:
Chaplowe, S. G. (2008). Monitoring and evaluation planning. American Red Cross/CRS M&E Module
Series, American Red Cross and Catholic Relief Services (CRS), Washington, DC and Baltimore,
MD.
Chong, S. J. J. o. E. I. M. (2008). Success in electronic commerce implementation.
Consortium , S. N. (2016). Somalia NGO Consortium Annual Report. Retrieved from Somalia NGO
Consortium: http://somaliangoconsortium.org/membership/current-members/
Cooper, D., & Schindler, P. (2000). Business Research Methods. Saddle Brook. In: NJ: McGraw-Hill.
CPWF. (2012). M&E guide: Theories of change. Retrieved from https://sites.google.com/a/cpwf.info/m-
e-guide/background/theory-of-change
Crawford, P., & Bryce, P. J. I. j. o. p. m. (2003). Project monitoring and evaluation: a method for
enhancing the efficiency and effectiveness of aid project implementation. 21(5), 363-373.
FHI , F. H. I. (2011). Monitoring and Evaluating Behavior Change Communication Programs. Retrieved
from USAID: http://teampata.org/wp-content/uploads/2017/06/Monitoring-and-Evaluating-
Behaviour-Change-Communication-Programs.pdf
Fitzgerald, M., Posner, J., Workman, A. J. J. R., & Institute, T. (2012). A Guide to Monitoring and
Evaluation of NGO Capacity Building Interventions in Conflict Affected Settings.
Gilliam, A., Barrington, T., Davis, D., Lacson, R., Uhl, G., Phoenix, U. J. E., & Planning, P. (2003). Building
evaluation capacity for HIV prevention programs. 26(2), 133-142.
Guijt, I., & Woodhill, J. J. I. F. f. A. D. (2002). A guide for project M&E: Managing for impact in rural
development.
Gyorkos, T. J. A. t. (2003). Monitoring and evaluation of large scale helminth control programmes. 86(2-
3), 275-282.
Hosley, C. J. W. R. h. s. g. c. a. c. i. m.-g. b. t.-o.-c. (2005). Whats Your Theory–Tips for Conducting
Program Evaluation-Issue 4.
IFRC, I. F. o. R. C. (2011). Project/programme monitoring and evaluation (M&E) guide. Retrieved from
Red-Cross Internationnal:
International Federation of Consultancy, I. (2012). Project Management. Retrieved from World Bank:
James, C. J. C. R. (2011). Theory of change review.
Jaszczolt K, P. T., Stanislaw A. (2010). Internal Project M&E System and Development of Evaluation Capacity – Experience of the World Bank – Funded Rural Development Program. Retrieved from World Bank:
Kelly, K., & Magongo, B. (2004). Report on assessment of the monitoring and evaluation capacity of
HIV/AIDS organisations in Swaziland: National Emergency Response Council on HIV/AIDS.
Kihuha, P. (2018). Monitoring and Evaluation Practices and Performance of Global Environment Facility
Projects in Kenya, a Case of United Nations Environment Programme. Doctoral Dissertation,
Kenyatta University,
Kissi, E., Agyekum, K., Baiden, B. K., Tannor, R. A., Asamoah, G. E., & Andam, E. T. (2019). Impact of
project monitoring and evaluation practices on construction project success criteria in Ghana.
Built Environment Project and Asset Management.
Kloppenborg, T. J. (2008). Project Management: A Contemporary Approach: Organize, Plan, Perform:
Thomson Learning EMEA, Limited.
Kusek, J. Z., & Rist, R. C. (2004). Ten steps to a results-based monitoring and evaluation system: a
handbook for development practitioners: World Bank Publications.
Lewis, J. P. (2005). Project planning, scheduling, and control: A hands-on guide to bringing projects in on
time and on budget (Third Edition ed.): Irwin.
Mackay, K. (2006). Institutionalization of monitoring and evaluation systems to improve public sector
management. Retrieved from
McCoy, K. L., Ngari, P. N., & Krumpe, E. E. (2005). Building monitoring, evaluation and reporting systems
for HIV/AIDS programs.
Miller, D. S. J. D. (2010). Handbook of Disaster and Emergency Policies and Institutions. 2(34), 586-588.
MPIEDS , m. o. p. i. a. e. d. s. (2018). M&E Department. Retrieved from
http://mop.gov.so/index.php/the-ministry/directorates/me-department/
Muchelule, Y. W. (2018). Influence of Monitoring Practices on Projects Performance of Kenya State
Corporations. JKUAT-COHRED,
Müller, R., & Turner, R. J. E. m. j. (2007). The influence of project managers on project success criteria
and project success by type of project. 25(4), 298-309.
Nabris, K. (2002). Civil Society Empowerment: Monitoring and Evaluation. Based on a PASSIA Training
Course, Jerusalén, Palestinian Academic Society for the Study of International Affairs.
Naidoo, V. S. (2012). Danish Demining Group Community Safety Programme, Somaliland, Mine Action
and Armed Violence Reduction Case Study, septembre 2012, CIDHG. In.
Ngarambe, C. (2015). INFLUENCE OF MONITORING AND EVALUATION ON PERFORMANCE OF NON
GOVERMENTAL ORGANIZATIONS IN RWANDA: A CASE OF WORLD VISION INTERNATIONAL.
Mount Kenya University,
Nirere, V. (2014). MONITORING AND EVALUATION AND PROJECT PERFORMANCE IN RWANDA A CASE
STUDY OF HAND IN HAND PROJECT IN CARE INTERNATIONAL IN RWANDA. Mount Kenya
University,
Otieno, F. (2000). The roles of monitoring and evaluation in projects. Paper presented at the 2nd
International Conference on Construction in Developing Countries: Challenges facing the
construction industry in developing countries.
Philip, R., Anton, B., Bonjean, M., Bromley, J., Cox, D., Smits, S., . . . Monggae, F. J. F. I. E. S. G. (2008).
Local Government and Integrated Water Resources Management (IWRM) Part III Engaging in
IWRM Practical Steps and Tools for Local Government.
PMBOK Guide, I., P.M. (2008). A Guide to the Project Management Body of Knowledge (PMBOK® Guide)–
Fourth Edition: Project Management Institute.
PMBOK Guide, I., P.M. (2017). A Guide to the Project Management Body of Knowledge (PMBOK® Guide)–
Sixth Edition: Project Management Institute.
Reijer, P., Chalimba, M., & Nakwagala, A. A. (2002). Malawi goes to scale with anti-AIDS clubs and
popular media. 25(4), 357-363.

Rietbergen-McCracken, J., & Narayan-Parker, D. (1998). Participation and social assessment: tools and
techniques (Vol. 1): World Bank Publications.
Rossi, P. H. J. E. r. (2012). Evaluating with sense: The theory-driven approach. 7(3), 283-302.
Rumenya, H., Kisimbi, J. M. J. J. o. E., & Management, P. (2020). Influence of Monitoring and Evaluation
Systems on Performance of Projects in Non-Governmental Organizations: A Case of Education
Projects in Mombasa County, Kenya. 5(2), 46-66.
Seith, S. a. P. I. (2012). Evaluation and Theory of change. Presented at workshop on randomized
evaluation to improve financial capability innovation for
poverty action (ipa)
Shadish, W. R. J., Cook, T. D., & Leviton, L. C. (1991). Chapter 2: Good theory for social program
evaluation. Foundations of Program Evaluation: Theories of Practice Newbury Park, CA: Sage.
Tache, F. J. E. S. M. (2011). Developing an integrated monitoring and evaluation flow for sustainable
investment projects. 14(2), 380-391.
Uitto, J. I. J. G. e. c. (2004). Multi-country cooperation around shared waters: role of monitoring and
evaluation. 14, 5-14.
UNDP. (2012). Development Assistance Framework for the Republic of South Sudan, 2011-2012.
Retrieved from United Nations Country Team (South Sudan), UNCT:
https://planipolis.iiep.unesco.org/en/2012/development-assistance-framework-republic-south-
sudan-2012-2013-5483
Verweire, K., Van Den Berghe, L., & Berghe, L. (2004). Integrated performance management: a guide to
strategy implementation: Sage.
Wanjiru, W. E., & Kimutai, G. J. U. t. (2013). Determinants of Effective Monitoring and Evaluation
Systems in Non-Governmental Organizations within Nairobi County, Kenya.
Whitaker, B. J. J. S. (2002). The Palestinian Academic Society for the Study of International Affairs (www.
passia. org). 3(2), 301-302.
Workie, T. (2018). Assessing The Practice Of Project Monitoring And Evaluation: The Case Of Commercial
Bank Of Ethiopia Projects. (Master ), ADDIS ABABA UNIVERSITY, Retrieved from
https://nadre.ethernet.edu.et/record/12042/export/hx#.YM-UYOFR3Dc
Zall Kusek, J., & Rist, R. (2004). Ten steps to a results-based monitoring and evaluation system: A
handbook for development practitioners. In: The World Bank.

APPENDIX I: INTRODUCTION LETTER

Dear respondent, I am a Master of Project Management student at Abaarso Tech University.

I therefore request you to fill in this questionnaire with information to the best of your understanding. The information you provide will be kept confidential and is purely for academic purposes.

Please take your time while answering in order to give well-thought-out answers.

Thank you in advance.

Hamse Abdirahmaan Ahmed

Appendix 2 - Questionnaire
FACE SHEET PART 1: Profile of the Respondents
1: Gender
1. Male
2. Female
2: Educational Level
Primary
Secondary
Diploma
Degree
Master’s degree
3: Age
Less than 25 years
26 -35 years
36 - 45 years
46 - 50 years
51 above years
5: Marital Status
Single
Married
Divorced/Separated
Widowed
Section A
1. Your position in the organization
1. Top Management
2. Middle Management
3. Project Team Leader
4. M&E Expert
5. Other Expert

2. Is there practical experience in monitoring and evaluation system in your organization

Yes No

3. Do you have direct involvement in the Monitoring and Evaluation System of the
organizations?

Yes No

Section B: Monitoring and Evaluation Practices

1. Does your organization have a plan that guides monitoring and evaluation when
implementing the program/project?

Yes No

2. If your answer is no for the above question what is the reason behind not having the plan?
We don’t know how to design Projects are too small
Not important to us

3. Which of the following stakeholders do you think were involved in the planning of the
monitoring and evaluation of the activities of your organization?

yes no Partially
Project managers
Team leaders
Middle and top management
Consultants

4. Which of the following aspects were specified in the plan that guided monitoring and
evaluation (M&E) activities of your organization?

Yes No Partially
Data to be collected
Frequency of data collection
An individual in charge of M&E
Plan for dissemination of findings
Individuals for specific M&E
activities

5. In your organization the monitoring and evaluation activities have:

A separate budget        No special budget        I have no idea

6. If a separate budget is allocated for monitoring and evaluation activities, what percentage of
the total project budget allocated for this purpose?

Less than 5% 5-10% More than 10% not specific

7. Does your organization use the logical framework approach (log frame) to plan M&E activities in your organization?
(Log frame: provide a streamlined linear interpretation of a project’s planned use of
resources and its desired ends)

Yes No

8. How often do you document lessons learned on the project implementation?

For all projects for some projects

For a few projects Never

Section C: Tools and Methods used by M&E

1. What are some of the tools and methods used in Monitoring and evaluation in your
organization

1. Performance Indicator 2. Logical Framework 3. Theory-Based Evaluation

4. Formal Surveys 5. Rapid Appraisal 6. Participatory Method

7. Public Expenditure Tracking Survey 8. Cost-Benefit and Cost-Effectiveness

2. How would you rate the applicability of these tools and methods used by M&E in your
organization?

Very Easy Easy Difficult Very difficult

3. Do you have any difficulties in using the M&E system?

Yes No

a) If yes, what do you think is contributing to the difficulty?

Tick where appropriate


Selected Tools and techniques
The role of management in the operations of the M&E
The adequacy of M&E training
Technical expertise of the staff

Section D: Project Performance

1. At which stage do you carry out Monitoring and Evaluation in the project?

a) In planning

b) In the implementation of a project

c) In the closing phase of the project

d) In all stages of the project mentioned above

2. What are the factors necessary for improving project performance in NGOs?

a) Employee skills

b) Identification problems in planning and Implementation

c) Improve Project Planning and Quality

d) Environmental scanning

3. How do you rate the performance of your project compared to other NGOs?
a) Poor

b) Better
c) Not sure

4. Which of the following monitoring and evaluation activities carried out by NGOs led to project performance?
a) Collect and analyzing data

b) Review progress

c) Identification problems in planning and Implementation

d) All the above

e) None of the above

5. Do you think there is a relationship between monitoring & evaluation and project
performance

a) Yes.

b) No.

c) Not sure.

STUDENT LETTER OF VERIFICATION
