
BASIC M & E CONCEPTS
DR CHIKAFUNA BANDA
BASIC M & E CONCEPTS
 When you hear or read that the prevalence of low birth weight in a country is 20%, have you ever wondered how this calculation was derived? Or when you hear that the percentage of married women of reproductive age in a rural area using a modern contraceptive method rose from 52% to 73%, do you wonder how they know this?
 When you hear or read that the prevalence of COVID-19 vaccination in a country is 50%, have you ever wondered how this calculation was derived? Or when you hear that the percentage of people on ART rose from 65% to 80%, do you wonder how they know this?
WHAT IS MONITORING
 Monitoring of a program or intervention involves the collection of routine data that measure
progress toward achieving program objectives. It is used to track changes in program
performance over time.
 Its purpose is to permit stakeholders to make informed decisions regarding the effectiveness of programs and the efficient use of resources.
 Monitoring is sometimes referred to as process evaluation, because it focuses on the
implementation process and asks key questions:
• How well has the program been implemented?
• How much does implementation vary from site to site?
• Did the program benefit the intended people? At what cost?
WHAT IS MONITORING
Examples of program elements that can be monitored are:
• Supply inventories
• Number of vaccine doses administered monthly
• Quality of service
• Service coverage
• Patient outcomes (changes in behavior, morbidity, etc.)
A GRAPHIC ILLUSTRATION OF PROGRAM MONITORING OVER TIME COULD LOOK LIKE THIS
[Figure: a program indicator (Y axis) plotted against time (X axis), from program start to program end.]
PROGRAM MONITORING
 The program indicator being measured on the “Y” axis could be any element of the program that needs tracking, such as the cost of supplies, the number of times the staff provide certain information to clients, or the percentage of clients who are pleased with the services they received.
 Time before and after the program is shown on the “X” axis.
 Monitoring:
 Is an ongoing, continuous process
 Requires the collection of data at multiple points throughout the program cycle, including at the beginning, to provide a baseline
 Can be used to determine if activities need adjustment during the intervention to improve desired outcomes
EVALUATION
 Evaluation measures how well the program activities have met expected objectives and/or the extent to which changes in outcomes can be attributed to the program or intervention. The difference in the outcome of interest between having or not having the program or intervention is known as its “impact,” and measuring that is commonly referred to as “impact evaluation.”
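Expressed as a simple formula (a sketch of the counterfactual logic, not a prescribed notation):

$$\text{Impact} = Y_{\text{with program}} - Y_{\text{without program}}$$

where $Y$ is the outcome of interest and $Y_{\text{without program}}$, the counterfactual, must be estimated, for example from a comparison group.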
EVALUATION
[Figure: a line chart over time (January 2021 to December 2021) comparing the program indicator with the program against the indicator without the program; the gap between the two lines at the time of evaluation represents the change in program outcome, i.e. the program's achievement.]

WHAT IS EVALUATION?
Did you know?
 Evaluation is fundamentally an exercise to help decision makers understand how, and to what extent, a program is responsible for particular, measured results.
Evaluations require:
1. Data collection at the start of a program (to provide a baseline) and again at the end, rather than at repeated intervals during program implementation.
2. A control or comparison group, in order to measure whether the changes in outcomes can be attributed to the program.
3. A well-planned study design.


EVALUATION
 Evaluation is a process that systematically and objectively assesses all the elements of a program (e.g. design, implementation and results achieved) to determine its overall worth or significance. The objective is to provide credible information for decision-makers to identify ways to achieve more of the desired results. Broadly speaking, there are two main types of evaluation:
PERFORMANCE
EVALUATIONS
1. Performance evaluations focus on the quality of service delivery and the outcomes (results)
achieved by a program.
 They typically cover short-term and medium-term outcomes (e.g. vaccination coverage achievement levels, or maternal deaths).
 They are carried out on the basis of information regularly collected through the program monitoring system.
 Performance evaluation is broader than monitoring. It attempts to determine whether the progress achieved is the result of the intervention, or whether another explanation is responsible for the observed changes.
IMPACT EVALUATIONS
 Impact evaluations look for changes in outcomes that can be directly attributed to the program
being evaluated.
 They estimate what would have occurred had beneficiaries not participated in the program.
 The determination of causality between the program and a specific outcome is the key feature
that distinguishes impact evaluation from any other type of assessment.
MONITORING AND
EVALUATION
 Monitoring and evaluation usually include information on the cost of the program being monitored or evaluated. This makes it possible to judge the benefits of a program against its costs and to identify which intervention has the highest rate of return. Two tools are commonly used.
COST-BENEFIT ANALYSIS
 A cost-benefit analysis estimates the total benefit of a program compared to its total costs. This
type of analysis is normally used ex-ante, to decide among different program options.
 The main difficulty is to assign a monetary value to “intangible” benefits. For example, the main benefit of an ART program is the increase in the percentage of people living with HIV who are on ART and have undetectable viral loads.
 These are tangible benefits to which a monetary value can be assigned. However, being on ART, being in good health, and having a job also increase people’s self-esteem, which is more difficult to express in monetary terms as it has different values for different persons.
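A common way to summarize such an analysis is the benefit-cost ratio; the formulation below is a standard one, not specific to this course:

$$\text{Benefit-cost ratio} = \frac{\sum \text{monetized benefits}}{\sum \text{costs}}$$

A program whose ratio exceeds 1 yields more in monetized benefits than it costs.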
COST-EFFECTIVENESS ANALYSIS
 A cost-effectiveness analysis compares the costs of two or more programs in yielding the same outcome. Take, for example, routine HIV screening at health facilities versus targeted testing of high-risk groups. Each has the objective of placing HIV-positive people on ART, but routine testing does so at a cost of K500 per individual tested, while targeted testing costs K50. In cost-effectiveness terms, routine HIV testing can still perform better than targeted testing, because it results in a large percentage of the population knowing their status, whereas targeted testing concentrates on people at high risk only.
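A minimal sketch of how such a comparison might be computed follows. The function, positivity rates and linkage rates are hypothetical illustrations, not real program data, and which strategy "performs better" depends on which outcome is costed (cost per person placed on ART below, versus the population coverage of status knowledge argued above):

# Hypothetical cost-effectiveness comparison of two HIV testing strategies.
# All figures are illustrative assumptions, not real program data.

def cost_per_outcome(cost_per_test, people_tested, positivity_rate, linkage_rate):
    """Cost per HIV-positive person identified and placed on ART."""
    total_cost = cost_per_test * people_tested
    placed_on_art = people_tested * positivity_rate * linkage_rate
    return total_cost / placed_on_art

# Routine facility screening: higher cost per test, broad reach, lower positivity.
routine = cost_per_outcome(cost_per_test=500, people_tested=10_000,
                           positivity_rate=0.02, linkage_rate=0.9)

# Targeted testing of high-risk groups: cheaper per test, narrow reach, higher positivity.
targeted = cost_per_outcome(cost_per_test=50, people_tested=1_000,
                            positivity_rate=0.20, linkage_rate=0.9)

print(f"Routine:  K{routine:,.0f} per person placed on ART")
print(f"Targeted: K{targeted:,.0f} per person placed on ART")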
THEORY OF CHANGE
 A theory of change describes how an intervention will deliver the planned results. A causal/result chain (or logical framework) outlines how the sequence of inputs, activities and outputs of a program will attain specific outcomes (objectives).
 This in turn will contribute to the achievement of the overall aim. A causal chain maps: (i) inputs (financial, human and other resources); (ii) activities (actions or work performed to translate inputs into outputs); (iii) outputs (goods produced and services delivered); (iv) outcomes (use of outputs by the target groups); and (v) aim (or final, long-term outcome of the intervention).
RESULTS CHAIN

INPUTS → ACTIVITIES → OUTPUTS → OUTCOMES → IMPACT

 Inputs: available resources, including the budget and staff
 Activities: action taken/work performed to transform inputs into outputs
 Outputs: tangible goods or services the programme produces or delivers
 Outcomes: results likely to be achieved when beneficiaries use outputs
 Impact: final programme goals, typically achieved in the long-term

In the diagram, monitoring covers implementation (inputs, activities and outputs), while evaluation covers results (outcomes and impact).
RESULTS CHAINS
In the results chain above, the monitoring system would continuously track:
 (i) the resources invested in/used by the program;
 (ii) the implementation of activities in the planned timeframe; and
 (iii) the delivery of goods and services.
 A performance evaluation would, at a specific point in time, judge the inputs-outputs relationship and the immediate outcomes.
 An impact evaluation would provide evidence on whether the changes observed were caused by the intervention, and by this alone.


PERFORMANCE MANAGEMENT SYSTEMS AND PERFORMANCE MEASUREMENT
1. Performance management (or results-based management) is a strategy designed to achieve changes in the way organizations operate, with improving performance (better results) at the core of the system.
2. Performance measurement (performance monitoring) is concerned more narrowly with the production of information on performance. It focuses on defining objectives, developing indicators, and collecting and analyzing data on results. Results-based management systems typically comprise seven stages:
1. Formulating objectives: identifying in clear, measurable terms the results being sought and developing a conceptual framework for how the results will be achieved.
2. Identifying indicators: for each objective, specifying exactly what is to be measured along a scale or dimension.
3. Setting targets: for each indicator, specifying the expected level of results to be achieved by specific dates, which will be used to judge performance.
4. Monitoring results: developing performance-monitoring systems that regularly collect data on the results achieved.
5. Reviewing and reporting results: comparing actual results against the targets (or other criteria for judging performance).
6. Integrating evaluations: conducting evaluations to gather information not available through performance monitoring systems.
7. Using performance information: using information from monitoring and evaluation for organizational learning, decision making and accountability.
In the original diagram, the early stages fall under Strategic Planning, the middle stages under Performance Measurement, and all seven together constitute Results-Based Management.
PERFORMANCE INDICATORS
Performance indicators are concise quantitative and qualitative measures of program
performance that can be easily tracked on a regular basis.
 Quantitative indicators measure changes in a specific value (number, mean or median) or a percentage.
 Qualitative indicators provide insights into changes in attitudes, beliefs, motives and
behaviors of individuals.
 Although important, information on these indicators is more time-consuming to collect,
measure and analyze, especially in the early stages of program implementation.
PERFORMANCE MONITORING
SYSTEM
 Setting up a performance monitoring system for ART programs therefore requires: clarifying program objectives; identifying performance indicators; setting the baseline and targets; monitoring results; and reporting. (See the sketch after this slide.)
 In many instances, the objectives of an ART program are implied rather than expressly stated. In such cases, the first task of performance monitoring is to articulate what the program intends to achieve in measurable terms. Without clear objectives, it becomes difficult to choose the most appropriate measures (indicators) and express the program targets.
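As a minimal sketch of the monitoring-results and reporting steps, the snippet below compares reported indicator values against their baselines and targets; the indicator names, baselines, targets and actuals are all hypothetical:

# Hypothetical performance-monitoring review: compare actual results to targets.
# Indicator names and all figures are illustrative assumptions.

indicators = [
    # (indicator, baseline %, target %, actual %)
    ("People living with HIV who are on ART", 65, 80, 78),
    ("People on ART with an undetectable viral load", 70, 90, 91),
]

for name, baseline, target, actual in indicators:
    # Share of the baseline-to-target distance covered so far.
    progress = (actual - baseline) / (target - baseline) * 100
    status = "target met" if actual >= target else "below target"
    print(f"{name}: {actual}% "
          f"({progress:.0f}% of the way from baseline {baseline}% "
          f"to target {target}%; {status})")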
“MONITORING” OR “EVALUATION”?
Check whether the following situations call for monitoring or evaluation:
1. Ministry of Health wants to know if the programs being carried out in Province A are
reducing unintended pregnancy among adolescents in that province.
2. National Aids Council wants to know how many sex workers have been reached by your
program this year.
3. The PS MOH is interested in finding out if the post-abortion care provided in public clinics meets national standards of quality.
ANSWERS
1. This is evaluation, because it is concerned with the impact of particular programs.
2. This is monitoring, because it is concerned with counting the number of something (sex workers reached).
3. This is monitoring, because it requires tracking something (quality of care).
WHY M & E?
Monitoring and evaluation helps program implementers:
 Make informed decisions regarding program operations and service delivery based on
objective evidence.
 Ensure the most effective and efficient use of resources.
 Objectively assess the extent to which the program is having or has had the desired impact, in
what areas it is effective, and where corrections need to be considered.
 Meet organizational reporting and other requirements, and convince donors that their
investments have been worthwhile or that alternative approaches should be considered.
WHEN SHOULD M&E TAKE PLACE?

 M&E is a continuous process that occurs throughout the life of a program.


 To be most effective, M&E should be planned at the design stage of a program, and the time,
money, and personnel that will be required should be calculated and allocated in advance.
 Monitoring should be conducted at every stage of the program, with data collected, analyzed,
and used on a continuous basis.
 Evaluations are usually conducted at mid-term or at the end of programs. However, they should be planned at the start, because they rely on data collected throughout the program, with baseline data being especially important.
Did You Know?
 One rule of thumb is that 5-10% of a project budget should be allocated for M&E.
EXAMPLES OF QUESTIONS THAT M&E
CAN ANSWER:
 Was the program implemented as planned?
 Did the target population benefit from the program, and at what cost?
 Can improved health outcomes be attributed to program efforts?
 Which program activities were more effective and which less effective?
INDICATORS
 Input indicators measure the contributions necessary to enable the program to be
implemented (e.g., funding, staff, key partners, infrastructure).
 Process indicators measure the program’s activities and outputs (direct products/deliverables
of the activities). Together, measures of activities and outputs indicate whether the program is
being implemented as planned. Many people use output indicators as their process indicators;
that is, the production of strong outputs is the sign that the program’s activities have been
implemented correctly.
 Others may collect measures of the activities and separate output measures of the
products/deliverables produced by those activities. Regardless of how you slice the process
indicators, if they show the activities are not being implemented with fidelity, then the
program risks not being able to achieve the intended outcomes.
INDICATORS
 Output indicators measure or describe the delivery of products (the direct deliverables of program activities).
 Outcome indicators measure whether the program is achieving the expected effects/changes (objectives) in the short, intermediate, and long term.
 Because outcome indicators measure the changes that occur over time, indicators should be measured at least at baseline (before the program/project begins) and at the end of the project.
 There is often confusion about the differences between project outputs (products) and outcomes (the short- and medium-term benefits that those products deliver). One easy way to distinguish between them is to consider whether the indicator describes what the project delivered (an output) or the project's effectiveness (an outcome).
 Impact indicators measure the goal and are expected to be achieved in the medium to long term. They should be based on goals/aims outlined in the national nutrition policy.
TYPES OF INDICATORS
 Indicators need to measure physical and visible (measurable) outcomes, but also changes in attitudes and behavior, which are often less tangible and not always easy to count.
 While quantitative indicators are emphasized in mainstream M&E approaches, for
communication for development, and especially Communication for Social Change, they often
need to be qualitative to be most effective and appropriate.
 Qualitative indicators can help us to assess the impacts of our projects and the extent to which
change has occurred. They are generally more descriptive. Quantitative indicators can help to
assess if our projects are on track. Indicators can take different formats such as pictures or stories
of social change.
 This is particularly important to consider when we are working with people who have low levels
of education or literacy.
SPICED AND SMART
INDICATORS
While there are no set rules for selecting indicators, one popular guideline has been to use the acronym:
 ‘SMART’: indicators should be Specific, Measurable, Attainable and action-oriented, Relevant, and Time-bound. This guideline tends to suit quantitative indicators in particular. Another acronym, suggested more recently, is
 ‘SPICED’: Subjective, Participatory, Interpreted (and communicable), Cross-checked, Empowering and Disaggregated.
 SMART describes the properties of the indicators themselves, while SPICED relates more to how indicators should be used.
INDICATORS

SMART:
 Specific (to the change being measured)
 Measurable (and unambiguous)
 Attainable (and sensitive)
 Relevant (and easy to collect)
 Time bound (with term dates for measurement)

SPICED:
 Subjective
 Participatory
 Interpreted (and communicable)
 Cross-checked
 Empowering
 Diverse and disaggregated
SPICED INDICATORS
 SPICED: Subjective - Participatory - Interpreted and communicable - Cross-checked and
compared - Empowering - Diverse and disaggregated
 Subjective: Informants have a special position or experience that gives them unique insights which may yield a very high return on the investigator’s time. In this sense, what others see as 'anecdotal' becomes critical data because of the source’s value.
 Participatory: Objectives and indicators should be developed together with those best placed
to assess them. This means involving a project's ultimate beneficiaries, but it can also mean
involving local staff and other stakeholders.
 Interpreted and communicable: Locally defined objectives/indicators may not mean much to
other stakeholders, so they often need to be explained.
SPICED INDICATORS
 Cross-checked and compared: The validity of assessment needs to be cross-checked, by
comparing different objectives/indicators and progress, and by using different informants,
methods, and researchers.
 Empowering: The process of setting and assessing objectives/indicators should be
empowering in itself and allow groups and individuals to reflect critically on their changing
situation.
 Diverse and disaggregated: There should be a deliberate effort to seek out different objectives/indicators from a range of groups, especially men and women. This information needs to be recorded in such a way that these differences can be assessed over time.
SPICED INDICATORS
 The SPICED approach is a very useful tool for thinking about how to set participatory
objectives and indicators.
 It is qualitative; it appreciates local understandings of change and is a good tool for thinking
about why it is important to work with communities.
 It identifies that different people have different ideas about what change means.
 Developing indicators that help us understand what change means at the community level is
challenging, but several steps can be taken that make the process simpler to understand and
implement.
INDICATORS
Examples of indicators are:
 The number of maternal deaths per 100,000 live births in a specific period (1 year).
 The number of under-five deaths per 1,000 live births in a specific period (1 year).
 Number of health workers trained in IUD insertion in the past 12 months (1 year).
 Percentage of women of reproductive age who are using a contraceptive method at a particular point in time.
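As a worked example of the first indicator (the figures below are hypothetical), the maternal mortality ratio (MMR) is calculated as:

$$\text{MMR} = \frac{\text{maternal deaths in a given year}}{\text{live births in the same year}} \times 100{,}000$$

For instance, 450 maternal deaths among 300,000 live births would give an MMR of $450/300{,}000 \times 100{,}000 = 150$ per 100,000 live births.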
INDICATORS OF MATERNAL AND NEWBORN HEALTH
 Maternal mortality ratio;
 Under-five child mortality, with the proportion of newborn deaths;
 Children under five who are stunted;
 Proportion of demand for family planning satisfied (met need for contraception);
 Antenatal care coverage (at least four visits during pregnancy).
END
Thank you
