
THE UNIVERSITY OF ZAMBIA

SCHOOL OF HUMANITIES AND SOCIAL SCIENCES

DEPARTMENT OF POPULATION STUDIES

NAME : ADAH BITIAH CHEMBO

COMPUTER NUMBER : 2016131946

COURSE : MONITORING AND EVALUATION

COURSE CODE : DEM 9114

LECTURER : MR EMMANUEL TEMBO

TASK : ASSIGNMENT 1

DUE DATE : 30TH MAY, 2020

QUESTION: Discuss the differences between monitoring and evaluation (M&E) and present a
historical development of M&E.

This essay will discuss the differences between monitoring and evaluation (M&E) and present a historical development of M&E, after which a conclusion will be drawn.

Monitoring and Evaluation are two separate yet important complementary sets of organizational
activities that collect information and report on the effectiveness of a project, program or an
intervention. Monitoring and evaluation are important management tools that are necessary to
track the progress and facilitate decision making for present and future interventions (Adhikari,
2017).

In illustrating the importance of Monitoring and Evaluation, Kusek and Rist (2004) asked: “If you don’t measure your results, how can you tell success from failure? If you can’t tell success from failure, how can you reward success? How do you know you’re not rewarding failure? How can you learn? How can you demonstrate results for public support?” Despite the complementary nature of these two organizational activities, they are two distinct undertakings, as will be outlined in the following paragraphs.

OECD DAC (2010) defines Monitoring as “a continuing function that uses systematic collection
of data on specified indicators to provide management and the main stakeholders of an ongoing
development intervention with indications of the extent of progress, achievement of objectives
and progress in the use of allocated funds.” Evaluation is defined as “the systematic and
objective assessment of an on-going or completed project, programme or policy, its design,
implementation and results.”

From the above definitions, it can be seen that monitoring is a repetitive process of collecting
information relating to an intervention aimed at ascertaining whether the program is doing what
it set out to do while evaluation is the general assessment of data or experience to establish to
what extent the initiative or intervention has achieved its objectives. It is for this reason that
scholars have said that the key question that should be asked during monitoring is “How do we
know if we are getting there?” whereas the key question that is asked during evaluation is “How
do we know we got there?” It can be stated that monitoring looks at how your activity is being
implemented while it is being implemented and it is something that is part of the activity while
evaluation looks at how you implemented the activity or project at the end of the undertaking
(Adhikari, 2017).

Monitoring provides project managers with the information needed to understand the current
project situation and assess where it is relative to specified targets and objectives – identifying
project trends and patterns, keeping project activities on schedule, and measuring progress
toward expected outcomes. Evaluation, on the other hand, gives information about why the
project is or is not achieving its targets and objectives. Some evaluations are carried out to
determine whether a project has met (or is meeting) its goals. Others examine whether or not the
project hypothesis was valid, and whether or not it addressed priority needs of the target
population (Frankel & Gage, 2007).

A key distinction between Monitoring and Evaluation is in relation to their frequency.
Monitoring is an ongoing or continuous process (can be weekly, monthly, quarterly, etc.) which
tracks the progress of the project throughout the duration of the project. It collects and analyses
information to compare how well a project is performing in relation to the expected results. The
collection of data at multiple points gives ongoing information through selected indicators on the
direction of change, pace of change and the magnitude of change. Evaluation on the other hand is
the episodic assessment of the design, implementation, outcomes and impact of a development
intervention which is usually performed either at mid-term or end of the project, on conclusion of
all activities. It occurs at pre-determined points during implementation; other smaller evaluations
may be undertaken to meet specific information needed throughout the process. An evaluation
performed at mid-term is called a formative evaluation and it is aimed at improving the
functioning of the project while it is still possible to do so by investigating why and whether
targets are or are not being achieved, whereas the evaluation done at the end of the project is
called a summative evaluation which only makes it possible to draw lessons once the project has
been completed (IFRC, 2002).
Due to its continuous nature, monitoring usually requires building a system to continuously track
performance of the project throughout its duration, rather than being a one-off exercise carried out
at a specific point in time as it is the case with evaluation. To serve a useful purpose, this system
needs to generate information that is relevant, accurate and timely for a range of different
stakeholders (Gosling & Edwards, 2003).

Additionally, a notable difference between monitoring and evaluation lies in the main action of the two, which captures the principal process of what happens during each activity. Through its periodic and continuous data collection, monitoring tracks the progress of a project or intervention, helping to improve the efficiency and effectiveness of the project while it is running, as opposed to evaluation, which is aimed at assessing the results of an intervention (Civicus, 2011).

The tracking nature of monitoring makes it possible to follow a course of movement which
provides regular information on progress relative to targets and outcomes and also helps identify
problems so that solutions can be proposed while the program is ongoing. In contrast, the
assessment nature of evaluation makes it possible to estimate the value or worth of a program or
project at a given point, rather than following the trail of the project, which makes it almost
impossible to make adjustments to the project in a timely and methodical way. Evaluation makes
a judgement of the relevance, impact, sustainability, effectiveness and efficiency of an
intervention (Hunter, 2009).

Through the tracking process, monitoring is concerned with verifying that project activities are being undertaken, services are being delivered, and the project is leading to the desired changes described in the project proposal, while evaluation assesses higher-level outcomes and impact and may verify some of the findings from the monitoring. Evaluations therefore explore anticipated and unanticipated results (Stetson, 2011).

An example to differentiate tracking from assessing: if you are carrying out training for farmers, you can track the progress of the training by looking at how many farmers are attending on a daily basis, while an assessment can be made by looking at how well the farmers were trained, which can be seen in how they have applied what they learnt and in the yields produced as a result of the training.

Further, a significant difference between monitoring and evaluation is the center of interest or
activity of the two. Monitoring focuses more on inputs, outputs and outcomes of an
intervention, while evaluation focuses on the efficiency, relevance, impact and cost-effectiveness
of a program or project. Some of the indicators that are used for monitoring such as activities
look at how the programs are conducted and if the intervention is on track or on budget.
Additionally, the indicators look at the program’s level of performance such as whether it is
reaching the desired number of people in the target group. Evaluation indicators on the other
hand track the outcomes and impacts of programs or projects at the larger population level as
opposed to the program or project level. An example of this would be assessing the impact (long-term effects) of a project through special studies with wide district or regional coverage.
Evaluations are conducted to find out what has happened as a result of a project or program or a
set of projects and programs.

It can therefore be stated that monitoring looks at the short-term goals and short-term process
concerned with the collection of information regarding the success of the project as it is still
ongoing while evaluation is a long-term process which not only records the information but
assesses the outcomes and impact of the project. Due to this, monitoring is an essential part of
the good day to day management practice whereas evaluation is an essential activity in a longer-
term dynamic learning process (Surbhi, 2017).

Additionally, monitoring is different from evaluation in relation to its basic purpose. Monitoring
is aimed at improving efficiency and adjusting the work plan if necessary, while evaluation is aimed at improving effectiveness, impact and future programming by studying the past experience of the project’s performance.

A notable difference between monitoring and evaluation is who undertakes these activities.
Monitoring is usually undertaken by project staff in conjunction with beneficiaries. It can
therefore be anchored and carried out at various institutional levels such as donor agencies, third
sector, project staff, line ministries, etc. Evaluation, on the other hand, despite involving the active participation of project staff, is usually external and typically undertaken by donor agencies or an independent party.
Furthermore, because monitoring is usually done by project staff, it provides an opportunity for staff development and learning as they undertake the monitoring process using their own knowledge and experience, while the learning resulting from evaluations is often focused on the needs and perceptions of outside agencies.

In addition, monitoring usually collects quantitative data, given that it is descriptive and aimed at identifying actual or potential successes and problems as early as possible, while evaluation involves intense data collection, both qualitative and quantitative. Evaluations are more analytical and seek to address causality.

An example to better explain this difference: if you want to study the DEM 9114 2020
cohort, you can do this by observing the attendance rate of students in this class during tutorials.
You will want to know how many people go for the tutorials and further what time they go, that
is, how many students go for the tutorials on time, how many are late and by how many minutes.
During the evaluation process, in addition to the above quantitative data collected, focus is also
given to the impact of the tutorial. You will hence want to know if there is a positive correlation
between tutorial attendance and the pass rate in DEM 9114.

Further, while monitoring provides real-time information and looks at the detail of activities required by management, evaluation provides a more in-depth assessment. The monitoring process can generate questions to be answered by evaluation. Evaluation does not look at the detail of activities but rather at the bigger picture, which makes its findings easier to understand, even by non-experts in a particular field or project.

Additionally, another difference between monitoring and evaluation is in the sources of information. The main source of data for monitoring is primary data, while sources of information for evaluation can be both primary and secondary data.

Information sources for monitoring are usually derived within the program and systems that have
been put in place during an intervention while evaluation is much broader as it goes beyond the
information sources of monitoring. Evaluation extensively makes use of social research methods such as special studies, focus group discussions and surveys, and sometimes requires introducing a control or comparison group to best understand the effectiveness of an intervention and analyze the outcome of a program, that is, whether and how well a program or project worked (FHI, 2004).

In concluding this part of the essay, the differences between Monitoring and Evaluation can be summed up by stating that Monitoring is an organizational activity aimed at describing what is observed and measuring the progress of a project while it is in action. This is done in order to identify the strengths and weaknesses of the project, identify problems that had not been anticipated, and identify and implement solutions to those problems as quickly as possible. Evaluation, on the other hand, is an organizational activity that makes value judgements that go beyond what is described. It is a means by which evaluators and donors are able to assess how successful the project was, whether it was able to attain its goals, what the outcomes were and what issues helped or hindered the success of the project (Frankel & Gage, 2007).

The next part of this essay will look at the historical development of monitoring and evaluation.

The historical development of evaluation is difficult, if not impossible, to describe due to its informal utilization by humans for thousands of years. One of the complications we face is the fact that Monitoring and Evaluation mean diverse things to different people, and that they are disciplines that have been in a state of evolution over the past quarter century.

Historically, Monitoring and Evaluation can be traced to several points in the past. Nevertheless,
it is important to distinguish between modern-day Monitoring and Evaluation and traditional
Monitoring and Evaluation, which has been practiced by different generations and societies as
the world continues to advance.

Every society in the past seems to have executed some form of performance-tracking system. An
example can be seen in the ancient Egyptians, who regularly monitored their country’s outputs in
grain and livestock production more than 5,000 years ago. In this sense, Monitoring and
Evaluation is certainly not a new concept (Kanyamuna, 2019).

From the days of the Ancient Egyptians, there has been a great deal of evolution in the philosophical alignment and conceptualization of Monitoring and Evaluation.

Globally, the international status of M&E research remains theoretically and methodologically
influenced by the American tradition. The United States (US) is regarded as the motherland of
the field in terms of its trends, number of authors and their academic and professional influence,
degree of professionalisation, focus of academic programmes, legislation and institutionalisation
of evaluation, development of models and approaches for evaluation, evaluation capacity
building initiatives, evaluation standards and guiding principles, number and attendees of
evaluation conferences and workshops, publications and their impact factor, guides and
evaluation handbooks (Auriacombe, 2013).

Seven development periods of program evaluation have been designated and they will be
explained in detail. The first period is prior to 1900, which authors have called Age of Reform;
second, from 1900 until 1930, is called the Age of Efficiency; the third period is from 1930 to
1945, called the Tylerian Age; fourth, from 1946 to about 1957, called the Age of Innocence;
fifth, from 1958 to 1972, the Age of Development; sixth, from 1973 to 1983, the Age of
Professionalization; and finally the seventh period, from 1983 to 2000 the Age of Expansion and
Integration (Madaus, et al., 2000).

Age of Reform (1792-1900): The first documented official use of evaluation took place in 1792
when William Farish utilized the quantitative mark to assess students’ performance. The
quantitative mark permitted objective ranking of examinees and the averaging and aggregating of
scores. During this period in Great Britain, education was transformed through evaluation
(Madaus & Kellaghan, 1982).
Further, studies were undertaken in the 19th century by government-appointed commissions to
measure initiatives in the educational, law and health sectors. Their US counterparts – presidential
commissions – examined evidence in an effort to assess various kinds of programmes.
Inspectorates in Britain also came onto the scene in this era. These inspectorates would typically
conduct site visits and submit reports to explain their findings. In the United States a system of
regulations established by the Army Ordnance Department is recorded as one of the first formal
evaluation activities and took place in 1815 (ibid).
The second period of development as earlier alluded to was called the Age of Efficiency and
Testing (1900-1930): Frederick W. Taylor’s work on scientific management became significant to
administrators in education. Taylor’s scientific management was based on observation,
measurement, analysis, and most importantly, efficiency. During this era, educators regarded
measurement and evaluation as synonyms, with evaluation thought of as summarizing student
test performance and assigning grades (Worthen, et al., 1997).
The subsequent period of development was known as The Tylerian Age (1930-1945): Ralph
Tyler, who is considered the father of educational evaluation, made considerable contributions to
evaluation. Tyler directed an Eight-Year Study (1932-1940) which assessed the outcomes of
programs in 15 progressive high schools and 15 traditional high schools (Horgan, 2007).
The Age of Innocence (1946-1957) was the fourth period of development for program evaluation. Starting in the mid-1940s, Americans moved mentally beyond the war (World War II) and the Great Depression. According to Madaus et al. (1984), society experienced a period of great growth;
there was an advancement and expansion of educational offerings, staffs, and facilities. Due to
this national optimism, little interest was given to accountability of national funds spent on
education; hence the label of this evaluation time period, the Age of Innocence. It is, however, important to note that this period institutionalized programme evaluation much faster in the US due to the strong foundation of applied social sciences that emerged after World War II. In particular, strategies such as survey research and large-scale statistical analysis were developed and used to better understand the population.
The Age of Development (1958-1972) is the next period of development. In 1957, Russia’s successful launch of Sputnik I led to a national crisis in the United States. Subsequently, legislation was passed to improve instruction in areas that were considered critical to national defence and security.
In 1958, Congress endorsed the National Defence Education Act (NDEA) which poured millions
of dollars into new curriculum development ventures and provided for new educational programs
in mathematics, sciences, and foreign languages (Madaus, et al., 2000).
The concluding part of the 1950s and throughout the 1960s was a slow period of country-level focus on M&E as the United Nations “promoted building of national development planning capabilities.” Building capacity in Monitoring and Evaluation was intended to increase ownership over the development process for the governments and citizens in the countries where development programs were being implemented (Horgan, 2007).
In the 1960s, Monitoring and Evaluation practice underwent an extensive paradigm shift. Practice was predominantly quantitative in focus, reflecting the social-scientific trend of the era. This orientation continued in the social sciences into the 1970s, when more emphasis was put on empowerment evaluation. The stress on empowerment approaches was premised on lived experiences in order to represent and provide a voice to as many stakeholders as possible. Nevertheless, in the decades that followed, Monitoring and Evaluation methodologies shifted from an emphasis on quantitative approaches to more qualitative, participatory approaches and empowerment techniques (ibid).

The Age of Professionalization (1973-1983) was the period that followed. During the
1970’s, evaluation developed as a profession. Notable journals including: Educational
Evaluation and Policy Analysis, Studies in Educational Evaluation, CEDR Quarterly,
Evaluation Review, New Directions for Program Evaluation, Evaluation and Program Planning,
and Evaluation News were published. Additionally, universities began to recognize the
importance of evaluation by offering courses in evaluation methodology. Among such
universities were: The University of Illinois, Stanford University, Boston College, UCLA,
University of Minnesota, and Western Michigan University (Madaus, et al., 2000).
The demands on the evaluators and evaluation in general changed from examining operational
and measurable aims in the 1950s to producing useful information for the decision-makers and
even to shaping the actual intervention in the 1970s (Scriven, 1996).

The final period of development is the Age of Expansion and Integration (1983-Present). In the
early 1980s, evaluation struggled under the Reagan administration. Cutbacks in funding for evaluation took place and an emphasis on cost cutting emerged. Most funding for new social initiatives was drastically cut. Fortunately, in the early 1990s, evaluation rebounded with the economy. The field expanded and became more integrated. Professional associations were
established along with evaluation standards (Horgan, 2007).
Since the 1990s, there has been a major shift in the delivery of aid assistance away from donor-designed and donor-managed projects, associated with the end of the Cold War, theoretical critiques of development from the right and the left, globalization, the increased importance of trade and private investment, aid fatigue among donors, and structural adjustment. The push for aid effectiveness
and accountability drives the current emphasis on Monitoring and Evaluation in the development
field. Funders of development programs, governments, foundations and charities alike, have a
development agenda and need the executing organizations to show that objectives are being met
and that the overall human condition is being impacted as a result of development aid in order to
justify the expenditure of funds (ibid).
Further, modern Monitoring and Evaluation practices have their roots in the Results Based
Management (RBM) approach, which is a management strategy focused on performance and
achievement of outputs, outcomes and impacts for a policy, programme or project. M&E systems
are also regarded as toolkits of management meant to support institutions of development to
realize intervention efficiency through the delivery of results. The Results Based Management
approach was propagated first among private sector organizations, development agencies and
multilateral organizations, and later moved on to the public sector as part of reform efforts in the
1980s and 1990s. To date, most development interventions have adopted the RBM approach to
inform processes such as planning, budgeting, strategic prioritization, policy and decision
making (Kusek & Rist, 2004).

In conclusion, it can be noted that Evaluation has been used for different purposes over the years.
In the OECD countries, for example, early evaluations in the 1960s and 1970s studied ways of
improving social programs. Later in the 1980s and 1990s, governments used evaluation to
conduct budgetary management, for example, by examining ways to reduce expenditures and cut
public programs (ibid). Efforts to grow M&E systems have spread to most developing countries, many driven by the desire to meet specific donor requirements, international development goals, or, in some cases, both external and internal social and economic pressures.

REFERENCES

Adhikari, S., 2017. Public Health Notes. [Online]
Available at: https://www.publichealthnotes.com/difference-monitoring-evaluation/
[Accessed 11 April 2020].

Auriacombe, C., 2013. In search of an analytical evaluation framework to meet the needs of governance.
Journal of Public Administration, 4(48), pp. 715-729.

Civicus, 2011. Monitoring and Evaluation, Johannesburg: CIVICUS: World Alliance for Citizen
Participation.

FHI, 2004. Monitoring HIV/AIDS Programs: A Facilitator's Training Guide. 1st ed. Arlington: Family Health
International.

Frankel, N. & Gage, A., 2007. M&E Fundamentals: A Self-Guided Minicourse, New York: U.S. Agency for
International Development (USAID).

Gosling, L. & Edwards, M., 2003. Toolkits: A practical guide to assessment, monitoring, review and
evaluation. 2nd ed. London: Save the Children.

Horgan, L. R., 2007. The Historical Development of Program Evaluation: Exploring the Past and Present.
Online Journal of Workforce Education and Development, 2(4), p. 14.

Hunter, J., 2009. Monitoring and Evaluation: Are We Making a Difference?. 1st ed. Windhoek: John
Meinert Printing.

IFRC, 2002. Handbook for Monitoring and Evaluation. 1st ed. Geneva: International Federation of Red Cross
and Red Crescent Societies.

Kanyamuna, V., 2019. The Mast Online. [Online]
Available at: https://www.themastonline.com/2019/09/11/historical-context-of-monitoring-and-evaluation/
[Accessed 21 April 2020].

Kusek, J. & Rist, R., 2004. Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for
Development Practitioners, Washington D.C: World Bank.

Madaus, G. & Kellaghan, T., 1982. Trends in standards in Great Britain and Ireland. 1st ed. New York:
Academic Press.

Madaus, G., Stufflebeam, D. & Kellaghan, T., 2000. Evaluation models: Viewpoints on educational and
human services evaluation. 2nd ed. Hingham: Kluwer Academic Publishers.

Madaus, G., Scriven, M. & Stufflebeam, D., 1984. Educational evaluation and accountability: A review of
quality assurance efforts. The American, 5(27), pp. 649-673.

OECD DAC, 2010. Glossary of Key Terms in Evaluation and Results Based Management, Geneva: OECD DAC.

Scriven, M., 1996. The Theory Behind Practical Evaluation. Evaluation, 2(4), pp. 393-404.

Stetson, V., 2011. Monitoring and Evaluation Guide. In: G. Sharrock, ed. Key Aspects of Monitoring and
Evaluation (M&E) for Humanitarian and Socio-economic Development Programs. Baltimore: American
Red Cross and Catholic Relief Service, p. 71.

Surbhi, S., 2017. Key Differences. [Online]
Available at: https://keydifferences.com/difference-between-monitoring-and-evaluation.html
[Accessed 20 April 2020].

Worthen, B., Sanders, J. & Fitzpatrick, J., 1997. Educational Evaluation: Alternative approaches and
practical guidelines. 2nd ed. New York: Longman.
