
MEAL Glossary – updated June 2015

Accountability
is how an organization responds to and balances the needs of all stakeholders
(including beneficiaries, donors, partners and CRS itself) in its decision making and
activities, and delivers against this commitment. (Source: ECB 2010 and CRS July
2014)
After action review
is a simple, quick and versatile option for facilitating the continual assessment of
organizational performance, looking at successes and failures, and ensuring that
learning takes place to support continuous improvement. It works by bringing together
a team to discuss a recently completed task, event, activity or project, in an open and
honest fashion. (Source: Adapted from Better Evaluation)
Analysis
is a process of probing and investigating the constituent parts, and their
interrelationships, of the underlying causes and effects of selected issues to gain
deeper insights. Analysis helps transform data and other forms of evidence into usable
information that supports interpretation. Analysis has both a qualitative and a
quantitative dimension. In project design, assessment data is analyzed by:
• making comparisons
• ranking and prioritizing issues
• identifying similarities, differences, trends, gaps, and cause-and-effect relationships
The opposite of analysis is synthesis. Both are important in a learning organization.
(Source: Adapted from Encyclopedia of Evaluation, 2005.)
Appropriateness
See relevance
Approved evaluation report
is an evaluation report that has received the approval of the delegated authority, in
most cases the Country Representative or Manager, in order to be released for wider
dissemination.
Assessment
is an exercise, often using a mix of qualitative and quantitative data collection methods,
to gather information on priority needs and the current context in a particular area to
inform project design. (Source: Adapted from the CRS Guidance on Participatory
Assessments)
Attribution
is the ability to ascribe changes to specific interventions, rather than just assessing
what happened.
Audit
is a procedure in which an independent third party systematically examines the
evidence of adherence to a set of standards (e.g., the MEAL procedures) for a project
or program, and issues a professional opinion that, in CRS, is usually in the form of
“findings”.
Audit trail requirement
is the documentation required to demonstrate that a project, program or emergency
response has adhered to a particular standard (e.g., a specific MEAL procedure).
Average
is a number that is calculated by adding quantities together and then dividing the total
by the number of quantities. The average may also be known as the mean value.
(Source: Adapted from Merriam Webster)
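To illustrate the arithmetic, here is a minimal Python sketch (the quantities shown are hypothetical):

```python
# Add the quantities together, then divide the total by the
# number of quantities.
quantities = [12, 15, 9, 14]  # hypothetical values
average = sum(quantities) / len(quantities)
print(average)  # 12.5
```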

Baseline survey
is the systematic collection of data required to measure project indicators in a
(typically representative) sample of target respondents and locations at the time
of project start-up. (Source: CRS Guidance on Participatory Assessments)
Beneficiary
is an individual, group, or organization, whether targeted or not, that benefits,
directly or indirectly, from a development intervention or emergency response.
See direct beneficiary, extended beneficiary and indirect beneficiary
Beneficiary accountability
focuses on accountability to specific stakeholders (women, men, girls and
boys). It is a two-way communication process, using various channels, that
prioritizes the involvement of beneficiaries in project decision making. It involves
listening to beneficiaries, establishing trust, understanding their needs and
reflecting those needs in the project’s decision-making processes and activities
(CRS June 2014 and IFRC 2011).
Beneficiary and Service Delivery Indicators
(BSDI) is the collection of prescribed information that the agency uses to
monitor the scale of its interventions. The set of data includes the names of
beneficiaries, their age, gender, location, and the assistance they received. The
data are then organized into a standardized system that can track and compare
the services CRS is providing—not only from project to project but also from
country program to country program. This system allows CRS to document who
its projects are reaching, what services they are receiving, and how well the
programs are reaching the people CRS intends to assist. (Source: Adapted from
BSDI Orientation Module, forthcoming)
Bias
See general bias or sample bias.

Communication plan
documents the approach that a program will use to communicate key project
information and MEAL findings with communities and other stakeholders. It helps
ensure systematic information sharing and two-way communication. (Source: Adapted
from CRS Haiti Communication Toolbox)
Comparison group
is a set of non-program beneficiaries or “untreated” individuals (or other units of study)
with which program beneficiaries (intervention group) are compared. The term
“comparison group” is associated with a quasi-experimental design (See quasi-
experimental design). Individuals have not been randomly assigned to the comparison
group or the intervention group. (Source: Adapted from the World Bank Handbook on
Impact Evaluation, 2010)
Competency
is a cluster of interrelated knowledge, skills, and attitudes that enables a person to do
his or her job effectively. (Source: CRS Global Competency-Based Development site)
Competency model
(also known as a competency framework) defines the skills, knowledge and attitudes
that a CRS staff member needs to be effective in his or her job. Competencies are
applied to recruitment, performance management, learning, career development and
succession planning. At CRS, a competency model has three parts: a competency title,
an accompanying definition, and behavioral indicators. (Source: CRS Global
Competency-Based Development site)
Complaint
is a specific grievance of anyone who has been negatively affected by an organization’s
action or who believes that an organization has failed to meet a stated commitment.
(Source: HAP, The 2010 HAP Standard in Accountability and Quality Management,
2010)
Confounding factor
is a variable, either observed or unobserved, that contributes to change in a desired
result over time, independently of the intervention. Potential confounding factors can
include other interventions in the program area, extreme events or disasters,
government policy changes, population characteristics, or natural changes that happen
in an individual or community over time.
Control group
is the randomly assigned set of non-program participants or "untreated" individuals (or
other units of study) with which an intervention group (or treatment group) is contrasted.
It consists of units of study (e.g., individuals) that did not receive the intervention
under evaluation. The term control group is used when the evaluation employs an
experimental design (See experimental design). (Source: Adapted from Encyclopedia of
Survey Research Methods and the World Bank Impact Evaluation Handbook)
Core competency
is a specific capability that is central to the success of an organization and is applicable
across a broad range of programs. As part of the 2014-2018 agency strategy, CRS has
identified five specific core competencies for deeper cultivation and investment during
the period: Partner Collaboration and Support; Justice and Peacebuilding Integration;
Monitoring and Evaluation, Accountability and Learning; Information and
Communications Technology for Development; and Global Brand Management.
Counterfactual
is what would have occurred in the absence of the intervention; it is compared with
what has actually occurred with the intervention implemented. (Source: Leeuw, F. and Vaessen,
J. (2009) Impact Evaluations and Development: NoNIE Guidance on Impact
Evaluation. Washington, DC: The Network of Networks on Impact Evaluation)

Data gathering forms
(also known as data collection forms) are forms to be filled out by project participants
or staff to collect data. (Source: ProPack III).
Data Quality Assessment
(DQA) provides an in-depth appraisal of data quality and M&E systems in selected
projects. Ideally, the assessment is led by an independent third party. (Source:
Adapted from The Global Fund (2014) Data Quality Tools and Mechanisms
http://www.theglobalfund.org/en/me/documents/dataquality/)
Detailed Implementation Plan
(DIP) is the document that will guide managers in project implementation. Detailed
Implementation Plans include detailed timelines for the implementation of project
activities and other information, such as the person/people responsible for an activity,
to support project management. (Source: Adapted from ProPack I).
Development plans
document an employee’s plan for advancing their individual learning according to
stated objectives, and making improvements related to competencies.
Direct beneficiary
is a countable, identifiable individual who directly benefits and who receives project
services.

Effectiveness
is the measure of the extent to which an aid activity attains its objectives. (Source:
DAC Criteria for Evaluating Development Assistance)

Efficiency
measures the outputs—qualitative and quantitative—in relation to the inputs. It is an
economic term which signifies that the aid uses the least costly resources possible in
order to achieve the desired results. This generally requires comparing alternative
approaches to achieving the same outputs, to see whether the most efficient process
has been adopted. (Source: DAC Criteria for Evaluating Development Assistance)
Emergency response
(or emergency response strategy) often includes multiple projects that share a
common goal and strategic objective(s) for meeting the needs of affected populations
following a disaster.
Evaluation
is a periodic, systematic assessment of a project’s relevance, efficiency, effectiveness,
impact and sustainability on a defined population. Evaluation draws from data
collected via the monitoring system, as well as any other more detailed data (e.g., from
additional surveys or studies) gathered to understand specific aspects of the project in
greater depth. The term “evaluation” in the CRS context often implies the engagement
of an external third party to act as team leader. (Source: ProPack II)
Evaluation events
is a collective phrase representing different types of evaluations (See evaluation) and
reviews (See review).
Evaluative (or critical) thinking
is a cognitive process important to MEAL, requiring an attitude of inquiry and a belief in
the value of evidence. It involves identifying assumptions, asking thoughtful questions
to elicit alternative interpretations, pursuing deeper understanding and learning
through reflection and perspective-taking, and making informed decisions in
preparation for adaptation and action. It is embedded in a model of change that is
dynamic, reflective and responsive. (Source: Jones 2011 and USAID 2013)
Experimental design
(or randomized control trial or RCT) is a type of research or impact evaluation method
whereby two samples or groups from the same population of interest are randomly
selected, and one is given the intervention (i.e., the intervention group), and the other
not (i.e., the control group). Once groups have been identified, identical measurements
are taken at (at least) two points in time, including before the intervention is
administered (i.e., baseline) and at the close of the program or later (i.e., endline). Any
change over time between the two groups is compared, with the expectation that if the
intervention is effective, the desired change—detectable through statistical methods—
will be more extreme among the intervention group.
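As an illustration only, the following Python sketch shows the change-over-time comparison described above; all values are hypothetical, and in practice the difference would be tested for statistical significance on the underlying sample data:

```python
# Hypothetical baseline and endline means for the two randomly
# assigned groups (e.g., an average household food security score).
intervention = {"baseline": 40.0, "endline": 55.0}
control = {"baseline": 41.0, "endline": 46.0}

# Change over time within each group.
change_intervention = intervention["endline"] - intervention["baseline"]  # 15.0
change_control = control["endline"] - control["baseline"]                 # 5.0

# If the intervention is effective, the change should be more
# extreme in the intervention group than in the control group.
estimated_effect = change_intervention - change_control
print(estimated_effect)  # 10.0
```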
Explicit knowledge
is formal, systematic, documented, and thus easily communicated and shared. It exists
in the form of words, sentences, documents, organized data, computer programs and
other explicit forms. Contrast explicit knowledge with tacit knowledge (See tacit
knowledge). (Source: Adapted from King, Knowledge Management and Organizational
Learning, 2009)
Extended beneficiary
is an individual whom partners serve directly (thus not an indirect beneficiary) in a
given fiscal year by implementing projects that are not funded by CRS. The number of
extended beneficiaries is an estimate provided by the partners themselves. It is
considered a proxy indicator for the growth of institutional capacity of a partner over
time.
Extenuating circumstances
are existing conditions beyond the reasonable control of CRS or partner staff that may
prevent planned actions or adherence to a set of standards (e.g., the MEAL
procedures). Adequate documentation of extenuating circumstances in which CRS
staff were not able to adhere to a MEAL procedure will be considered by the Internal
Audit team and may prevent an audit finding.

Feedback
is information about stakeholders’ reactions to the content and delivery of a project’s
interventions that is used as a basis for collaboration, accountability, learning and
improvement.
Final evaluation
is a type of evaluation (See evaluation) conducted at the end of a project and that
includes an endline survey to allow for comparison with baseline data, if available, of
relevant project indicators.
Focus group discussion
is a data collection method that involves six to 12 people with specific characteristics
who are invited to discuss a specific topic in detail. Participants should have something
in common depending on the focus group topic (e.g., a particular problem, they are all
marginalized, or they share a social status or sectoral interest). The discussion should
be planned and facilitated to ensure maximum participation and in-depth discussion.
(Source: Adapted from CRS Guidance on Participatory Assessments)
Formative evaluation or review
is designed with an explicit objective to improve ongoing programming (See evaluation
and review).

General bias
is an inclination of temperament or outlook to present or hold a partial perspective,
often accompanied by a refusal to even consider the possible merits of alternative
points of view. People may be biased toward or against an individual, a race, a
religion, a social class, a political party, or a species. Biased means one-sided, lacking
a neutral viewpoint, not having an open mind. (Source: Wikipedia)
Global MEAL Community
refers to program staff interested in advancing MEAL and with MEAL-related
responsibilities in their job descriptions. This includes both MEAL and non-MEAL
program staff, in particular heads of programs, program managers, MEAL staff in country
programs, MEAL advisors, and deputy regional directors for program quality.
Global MEAL Team
consists of MEAL Advisors and PIQA MEAL staff and collectively works to advance
MEAL priorities.
Good practice
(or promising practice) is a method or technique that has consistently shown results
superior to those achieved with other means in a variety of contexts and that is used to
guide quality improvement. Good practices can evolve as improvements are identified.

Hard competency
is a technical or concrete skill set and knowledge base that directly contributes to the
ability of a CRS staff member to perform a given job effectively. (Source: Adapted from the
Limerick Institute of Technology definition)
Human subjects
include living individuals about whom an investigator obtains (1) data through
intervention or interaction, or (2) identifiable private information. (Source: Adapted from
Protection of Human Subjects in Research Supported by USAID: A Mandatory
Reference for ADS Chapter 200 and The Belmont Report).
Human subjects research
is a systematic investigation involving human subjects designed to test a hypothesis,
permit conclusions to be drawn, and thereby to develop or contribute to generalizable
knowledge. Research can include a wide variety of activities, including but not limited
to experiments, observational studies, surveys, tests, and recordings designed to
contribute to a wider audience. (Source: Adapted from Protection of Human Subjects
in Research Supported by USAID: A Mandatory Reference for ADS Chapter 200 and
The Belmont Report).

ICT4D
is the application of Information and Communication Technologies for International
Development. Similarly, ICT4E and ICT4MEAL are the application of such
technologies in an emergency setting or for the specific purpose of supporting MEAL
activities, respectively. (Source: Adapted from Heeks, The ICT4D 2.0 Manifesto:
Where Next for ICTs and International Development? 2009)
Impact
refers to the positive and negative changes produced by a development intervention,
directly or indirectly, intended or unintended. (Source: DAC Criteria for Evaluating
Development Assistance). Note: Other organizations, including USAID, consider that
impact must be demonstrated through rigorous evaluations which include a
counterfactual.
Impact evaluation
is a type of evaluation that assesses changes in the well-being of individuals,
households, communities or firms that can be attributed (See attribution) to a particular
project, program or policy. The central impact evaluation question asks what would
have happened to those receiving the intervention if they had not received the
program. To this end, an impact evaluation must estimate the counterfactual (See
counterfactual), which attempts to define a hypothetical situation that would occur in
the absence of the program, and to measure the welfare levels of individuals or other
identifiable units that correspond with this hypothetical situation. This comparison
allows for the establishment of definitive causality – attributing observed changes in
welfare to the program, while removing confounding factors (See confounding factors).
There are other types of program assessments including organizational reviews,
performance evaluations (See performance evaluations), and process monitoring (See
monitoring), but these do not estimate the magnitude of effects with clear causation.
(Source: Adapted from World Bank, Handbook on Impact Evaluation: Quantitative
Methods and Practices, 2010).
Implementation science
is the inquiry into questions concerning the implementation of a program or policy. The
intention behind implementation science is to understand how and why interventions
work in “real world” settings and to test approaches to improve them. (Adapted from
Peters, D. et al. 2013. Implementation Research: What it is and how to do it. BMJ
347:f6753)
Indicator
is a quantitative or qualitative factor or variable that provides a simple and reliable
means to measure achievement, to reflect the changes connected to an intervention,
or to help assess the performance of a development actor. (Source: OECD Glossary
of Key Terms in Evaluation and Results Based Management)
Indicator Performance Tracking Table
(IPTT) provides a simple, standardized way of presenting M&E project data; the IPTT
is the table used to track, document, and display indicator performance data. Although
individual donors may specify the format they want projects to use, most tracking
tables include a list of all official project performance indicators, baseline values and
benchmarks of these indicators, and targets for each indicator. Representative data
are included in the IPTT during the life of the project in order to calculate
achievements against initial targets. (Source: IPTT Guidelines: Guidelines and Tools
for the Preparation and Use of Indicator Performance Tracking Tables)
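As a hypothetical illustration (the indicators, field names and values below are invented, not a prescribed donor format), the core calculation an IPTT supports can be sketched in Python:

```python
# Each row holds an official project indicator with its baseline
# value, life-of-project target, and achievement to date.
iptt = [
    {"indicator": "% of farmers adopting improved seed",
     "baseline": 10, "target": 50, "achieved": 35},
    {"indicator": "# of households with year-round water access",
     "baseline": 120, "target": 300, "achieved": 280},
]

# Calculate achievement against the initial target for each indicator.
for row in iptt:
    pct_of_target = row["achieved"] / row["target"] * 100
    print(f'{row["indicator"]}: {pct_of_target:.0f}% of target')
```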
Indicator Tracking Table
(ITT) is a tool to periodically, often quarterly, track progress in implementing project
activities and achieving project outputs against the targets set in the Detailed
Implementation Plan (DIP) (See Detailed Implementation Plan) and in the indicator
performance tracking table (See Indicator Performance Tracking Table). The ITT
supports the use of monitoring data during quarterly reflection sessions with partners
and often informs progress and results reports submitted to donors.
Indirect beneficiary
is a countable, but not identifiable, individual or group of individuals who indirectly
benefit from project services (e.g. households).
Institutional Review Board
(IRB) is a constituted review body specifically established or designated by an entity to
protect the rights and welfare of human subjects recruited to participate in behavioral,
social science or biomedical research. The primary responsibility of an IRB is to review
research to ensure the protection of human participants through the application of the
principles of research ethics. IRB approval is required to publish study results in a
peer-reviewed journal. (Source: Adapted from Mayo Clinic IRB Definition of Terms and
FHI360 Research Ethics Training)
Integrity
of data refers to whether data are protected from improper manipulation. Integrity is one of five
USAID Data Quality Standards. (Source: USAID Performance Monitoring & Evaluation
Tips: Data Quality Standards, 2009)
Intermediate result
states the expected change(s) in identifiable behaviors by participants in response to
the successful delivery and reception of outputs. (Source: ProPack I)
Interpretation
involves explaining findings, attaching significance to particular results, making
inferences, drawing conclusions, and presenting patterns within a clear and orderly
framework. (Source: Encyclopedia of Evaluation)
Interviews
or key informant interviews (KIIs) gather information from individuals who are usually
selected based on particular characteristics. Interviews may be structured or
unstructured and often follow a list of open-ended questions or a checklist. (Source:
Adapted from CRS Guidance on Participatory Assessments)

Knowledge management
is the planning, organizing, motivating and controlling of people, processes and
systems in an organization to ensure that its knowledge-related assets are improved
and effectively employed. These assets include knowledge in printed documents,
knowledge stored electronically (e.g., on CRS Global), employees’ knowledge, team/community
knowledge, and knowledge embedded in the organization’s products, processes and
relationships (CRS July 2014 and King 2009).

Learning
is a continuous process of analyzing a wide variety of information sources and
knowledge (including evaluation findings, monitoring data, innovations, stories,
person-to-person exchanges and new learning) that bring to light new best practices or
call into question received wisdom. Learning leads to iterative adaptation of project
design steps, the project strategy and/or project implementation, in order to sustain the
most effective and efficient path to achieving project success (CRS July 2014).
Learning to Action Discussions
(LADs) are times set aside to understand and analyze project-related data and to
discuss their implications for the management of the project. (Source: ProPack III)
Lesson learned
is an experience that can be generalized from a specific project context to improve
programming in broader situations. (Source: CRS Asia Improving our Lessons
Learned Practice)

M&E Plan
builds upon a project’s Proframe to detail in tabular format the key M&E requirements
for each indicator and assumption, thereby enabling projects to collect comparable
data over time. Within the M&E Plan, indicators are defined and summary information
is provided on how data will be collected, analyzed and reported, and on the respective
allocation of responsibilities for each. The M&E Plan contributes to stronger
performance management (See Performance Management Plan) and to better
transparency and accountability within and outside of CRS.
MEAL narrative
is the text in the project proposal that describes planned MEAL activities.
MEAL operating manual
is the centralized documentation (soft or hard copy) of key project MEAL documents
ranging from MEAL design documents to final evaluations.
MEAL Procedure Point Persons
are individuals within the PIQA MEAL team responsible for managing the resources for
individual procedures.
MEAL system
comprises the people, processes, structures and resources that work together as an
interconnected whole to identify, generate, manage and analyze programmatic
information which is communicated to specified audiences.
MEAL Task Forces
are composed of Global MEAL Team members and are created to advance particular
MEAL priorities. In FY15, MEAL task forces focus on MEAL system design, eValuate,
evaluation, data management, BSDI, learning, beneficiary accountability, MEAL
competencies, and communication.
Mid-term evaluation
is an evaluation (See evaluation), usually improvement-oriented in nature, performed
toward the middle of the period of implementation of the intervention.
Mid-term review
is a review (See review), usually improvement-oriented in nature, performed toward
the middle of the period of implementation of the intervention. Mid-term reviews are
often less rigorous than mid-term evaluations.
Monitoring
is the systematic collection, analysis and documentation of information about progress
towards achieving project objectives and changes in operational contexts in order to
inform timely decision making and contribute to project accountability and learning.

Non-probability sampling methods
include a number of approaches that are not based on probability sampling theory
(See probability sampling methods). It is intended, or at least hoped, that the sample is
representative enough for the purposes of the data collection, but this cannot be
known with any measurable degree of certainty. Quota and purposive sampling are
two examples of the several forms of non-probability sampling that exist.

Observation
is a data collection method in which the enumerator or staff person visually confirms
and documents a context, characteristic, behavior, or action. Observations can be
formal, such as checklists, or informal, such as describing what has been seen.
Observation is often used to triangulate data collected through other methods.
Operations research
See implementation science
Organizational learning
is the “L” in MEAL and is primarily concerned with organizational, rather than
individual, learning. Organizational learning is a continuous process that enhances an
organization’s collective ability to accept, make sense of, and respond to internal and
external change. Organizational learning is more than the sum of information held by
employees. It requires systematic integration and collective interpretation of new
knowledge that leads to collective action and experimentation.
Outcome
is a result or effect that is caused by, or attributable to, the project, program or policy.
Many organizations and donors associate outcomes with more immediate and
intended effects that are equivalent to intermediate results and strategic objectives in
the Proframe. (Source: Adapted from USAID Glossary of Evaluation Terms)
Outputs
are the goods, services, knowledge, skills, attitudes and enabling environment that are
delivered by the project as a result of the activities undertaken. (Source: ProPack I)

Partner
is an organization with which CRS is in a relationship based on mutual commitment
and complementary purpose and values that is often supported by shared resources
and which results in positive change and increased social justice. (Source: CRS
partnership documentation)
Percentage
is a number or rate that is expressed as a certain number of parts of something
divided into 100 parts. For example, if a goalie saves 96 out of 100 shots, his save
percentage is 96 percent. (Source: Adapted from Merriam Webster)
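The goalie example, expressed as a minimal Python sketch:

```python
# Percentage = (part / whole) * 100
saves, shots = 96, 100
save_percentage = saves / shots * 100
print(save_percentage)  # 96.0
```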
Performance evaluations
simply compare data from indicators over time against baseline values, in contrast to
evaluations that employ experimental or quasi-experimental designs. Performance
evaluations demonstrate only whether change has occurred over the life of a project,
but cannot definitively establish what actually caused the observed change,
because of the absence of control or comparison groups and the other potential
confounding factors.
Performance Management Plan
(PMP) is a tool designed to assist in the setting up and managing of the process of
monitoring, analyzing, evaluating, and reporting progress toward achieving a project’s
strategic objectives. In contrast to the M&E Plan (See M&E Plan), the PMP has a
broader scope and, critically, includes explicit plans for both accountability and
learning. The PMP organizes performance management tasks and data over the
life of a program. It is intended to be a living document that is developed, used, and
updated by project staff. Specifically, it: i) Articulates plans for accountability and
learning; ii) Supports institutional memory of definitions, assumptions, and decisions;
iii) Alerts staff to imminent tasks, such as data collection, data quality assessments,
and evaluation planning; and, iv) Provides documentation to help mitigate audit risks.
(Source: USAID Performance Monitoring & Evaluation Tips: Preparing a Performance
Management Plan, 2010)
Policy
is a mandate for action and decision making under a given set of circumstances.
Compliance with policy is expected of all relevant staff. Policies assure consistency,
fairness and quality of result within the framework of agency values and management
philosophy. (Source: CRS HR Policy, Policy Development, Review, Approval and
Dissemination)
Pre-agreement letter
is a letter between an implementing organization and a donor that makes explicit
conditions and expectations prior to the signing of the contract or full award letter.
Precision
of data is a sufficient level of detail to present a fair picture of performance and enable
management decision making. Precision is one of five USAID Data Quality Standards.
(Source: USAID Performance Monitoring & Evaluation Tips: Data Quality Standards,
2009)
Probability sampling methods
are formal sampling techniques where each individual or sampling unit (household,
organization, etc.) in the population has a known non-zero chance of being selected.
In other words, each member of the target population has some opportunity of being
included in the sample, and the mathematical probability that any one of them will be
selected can be calculated. Probability sampling also tends in practice to be
characterized by (1) the use of lists or sampling frames to select the sample, (2)
clearly defined sample selection procedures, and (3) the possibility of estimating
sampling error from the survey data. If applied properly, probability sampling methods
result in a representative sample of the target population of interest. (Source: Adapted
from FANTA Sampling Guide, 1997)
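A minimal Python sketch of these properties, using a hypothetical sampling frame of household IDs (for a simple random sample, every unit's selection probability is n/N and is known in advance):

```python
import random

# Hypothetical sampling frame: a list of 500 household IDs.
frame = [f"HH-{i:03d}" for i in range(500)]
n = 50  # desired sample size

# Defining property: each unit's chance of selection is known and
# non-zero. For simple random sampling it is n/N for every unit.
selection_probability = n / len(frame)
print(selection_probability)  # 0.1

# Clearly defined selection procedure: simple random sampling
# without replacement from the frame.
sample = random.sample(frame, n)
```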
Procedures
are steps to fulfill policy requirements. For the purposes of the MEAL Policies,
procedures are required auditable practices. (Source: CRS HR Policy, Policy
Development, Review, Approval and Dissemination)
Proframe
is a logical planning tool for generating a Project or Program Framework. The
Proframe provides information about not only higher-level objectives, but also outputs
and activities, the performance indicators, and the critical assumptions that have been
made about project performance and plans. (Source: Adapted from ProPack I)
Program staff
(or key program staff) is defined for the purposes of the MEAL Policies and
Procedures as Heads of Programming, Program Managers, Country Program MEAL
Staff, MEAL Advisors, and Deputy Regional Directors for Program Quality.
Project
is the implementation of a set of funded, time-bound and managed activities that is
linked to one or more donor source and project number (DSPN).
Project strategy
describes what the project will do and with whom to address identified problems and
opportunities and achieve higher-level objectives, in particular the strategic objectives
(SOs). Similar terms are intervention, approach or response. Project strategy is
sometimes called the project’s “design”. Project strategies may involve behavior and
social change, service delivery, institution and systems strengthening, training,
capacity building, facilitation of networks or processes, infrastructure, advocacy,
community empowerment, product distribution or some combination. (Source: Funnell
and Rogers 2011)
Project value
(or total project value) is the total budget approved for the life of a project awarded to CRS.

Purposive sampling
(or purposeful sampling) is the selection of participants based on their knowledge,
perspective or other characteristics of interest (e.g., women or men, young or old, very
poor or better off). Purposive sampling is appropriate for qualitative data collection,
such as focus group discussions or semi-structured interviews. (Source: CRS
Guidance on Monitoring and Evaluation)

Qualitative data
are open-ended, text-based or narrative data that provide detailed descriptions of
contexts and challenges, events, types of people or households, and observed
behaviors (Source: Adapted from CRS Guidance on Monitoring and Evaluation).
Qualitative methods
generally do not generate specific numbers. They concern themselves with exploring
meanings, contexts, processes, reasons, and explanations. This is then captured in
text or diagrams, but generally not in numbers. Examples of qualitative methods
include focus group discussion, key informant interviews, etc. (Source: Rapid Rural
Appraisal (RRA) and Participatory Rural Appraisal (PRA): A Manual for CRS
Fieldworkers and Partners)
Quantitative data
are a type of data that can be counted, coded, or otherwise represented numerically
(Source: Adapted from CRS Guidance on Monitoring and Evaluation)
Quantitative methods
generate information that can be captured numerically. These refer to mathematically
based methods, particularly statistics that summarize the data or analyze relationships
between data elements. They are thus particularly useful for describing the scope of a
problem. (Source: Adapted from Rapid Rural Appraisal (RRA) and Participatory Rural
Appraisal (PRA): A Manual for CRS Fieldworkers and Partners)
Quasi-experimental design
is a type of research or evaluation design virtually identical to the randomized control
trial (See experimental design), with the exception that it opts for a comparison group
that is identified out of convenience (but still ensuring members of the group are as
similar to the intervention group as possible) rather than using randomization. Quasi-
experimental designs are often employed where it is not feasible or ethical to randomly
assign groups to not receive an intervention. As with the experimental design, the
counterfactual (See counterfactual) is represented in the comparison between the
intervention and the non-intervention group.

Random sampling
is a type of probability sampling (See probability sampling methods) in which all
sampling units have an equal chance of being selected.
Range
is a series of numbers that includes the highest and lowest possible amounts. (Source:
Adapted from Merriam Webster)
Real-time evaluation
(RTE) is an internal rapid review carried out early on in an emergency response
(usually six to eight weeks after the onset of the emergency, depending on
the scale of the emergency). It helps to identify what is being done, what is working,
what is not working, and what needs to change to improve the appropriateness and
effectiveness of the emergency response program. An RTE looks at where the
response is at a given point in time and provides an opportunity for staff to step back
and reflect on an emergency response. It is used to gain quick feedback on
operational performance and identify systemic problems. (Source: CRS RTE
Guidance)
Recommendation
is a specific change identified to improve an ongoing project which may not be broadly
applicable in other contexts. (Source: CRS Guidance on Monitoring and Evaluation)
Reflection event
is the intentional use of monitoring or evaluation data to improve ongoing or future
programming or to generate lessons learned. Reflection events are generally held with
a variety of stakeholders and may range from short meetings to events lasting several
days. (Source: Adapted from CRS Guidance on Monitoring and Evaluation)
Relevance
(or appropriateness) is the extent to which the aid activity is suited to the priorities and
policies of the target group, recipient and donor. (Source: DAC Criteria for Evaluating
Development Assistance)
Reliability of data
is the stability and consistency of data collection processes and analysis methods over
time. Reliability is one of five USAID Data Quality Standards. (Source: USAID
Performance Monitoring & Evaluation Tips: Data Quality Standards, 2009)
Response
See Emergency Response.
Results framework
is an easy-to-read diagram that gives a snapshot of the top levels of a project’s
objectives hierarchy (means-to-end relationship). It describes the change the project
wants to bring about (strategic objective or SO), why this change is important (goal)
and what needs to happen (intermediate result) for this change to occur.
Review
is an event held to reflect on a completed activity or process or on project progress to
date. Reviews are often qualitative in nature and participatory, and do not follow a
prescribed methodology or format. The term “review” in the CRS context often implies
a process that is conducted by internal CRS staff.

Sample
is a small group of people or items taken from a larger group and used to represent
the larger group. (Source: Merriam-Webster)
Sample bias
occurs when a statistic based on a sample systematically misestimates the equivalent
characteristic of the population from which the samples were drawn. (Source: Sage
Definition of Statistics)
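A small hypothetical Python simulation of this idea: if enumerators only reach the best-off households (say, those near a main road), the sample mean systematically overestimates the population mean.

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical population of 1,000 household income values (mean ~100).
population = [random.gauss(100, 20) for _ in range(1000)]

# Biased sample: only the 200 best-off households are reached.
biased_sample = sorted(population)[-200:]

print(sum(population) / len(population))        # close to 100 (true mean)
print(sum(biased_sample) / len(biased_sample))  # well above 100: misestimation
```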
Scope of work
(SOW) is the division of work to be performed under a contract in the completion of an
activity, typically broken into specific tasks with deadlines.

SMILER
(Simple Measurement of Indicators for Learning and Evidence-based Reporting) is a
comprehensive and practical approach to develop an M&E system; the objectives and
their indicators are linked to a system to collect, analyze, and report on data. SMILER
includes mechanisms to turn data into useful knowledge that supports sound project
decision making and ensures that all staff have a clear understanding of the project,
and their role in M&E. The process of developing a SMILER M&E system is called the
SMILER coaching session. The primary output is the M&E Operating Manual for the
project. (Source: ProPack III)
Social learning
reflects the theory and view that learning is a social process, occurring in the context
of person-to-person relationships and is based on dialogue and reflection among and
between people or groups. (Source: ProPack I)
Soft competency
(or behavioral competency) is related to how a CRS staff member does the job, and is
likely common to many different jobs. (Source: Adapted from the Limerick Institute of
Technology definition)
Stakeholders
are individuals, groups and institutions important to the success of the project. Project
stakeholders have an interest in or an influence over a project. Interest involves what
stakeholders might gain or lose in the project, their expectations or the resources they
commit. Influence refers to power that stakeholders have over a project, such as
decision-making authority. Stakeholders include those directly affected by the project
(e.g. out-of-school girls, a collaborative partner, a government service) and those with
power to influence the project (e.g. religious leader, national government institutions,
CRS staff at all levels, and donors). (Source: ProPack I)
Start date
is the date of initiation of project or response activities as dictated by the terms of the
donor agreement. If not mentioned in the donor agreement, the start date should be
determined by the Country Representative or Regional Director.
Survey
is the act of asking (many people) a question or a series of questions in order to gather
information about what most people do or think about something. (Source: Merriam-
Webster)
Sustainability
is concerned with measuring whether the benefits of an activity are likely to continue
after donor funding has been withdrawn. (Source: DAC Criteria for Evaluating
Development Assistance)

Tacit knowledge
is the vast unwritten storehouse of knowledge held by individuals, based on their
emotions, experience, insights, intuition, observations and internalized information. It is
highly personal, hard to formalize, and rooted in a specific context. Put simply, it is
‘what is in your head’. It is acquired largely through association with others and
requires joint or shared activities to become explicit/codified knowledge so that it can
be imparted from one person to another. Tacit knowledge is more difficult to
communicate because, “We know more than we can say; and we say more than we
can write.” (Source: Adapted from Business Dictionary and CRS September 2014).
Terms of reference
(TOR) provide an important overview of what is expected in an evaluation. In an
external evaluation, the TOR document provides the basis for a contractual
arrangement between the commissioners of an evaluation and a consultant/evaluation
team, and establishes the parameters against which the success of an evaluation
assignment can be assessed. (Source: Better Evaluation)
Theory of change
(TOC) makes clear how and why you and others expect or assume that certain actions
will produce desired changes in the environment where the project will be
implemented. A robust TOC draws from research-based theories, conceptual
frameworks and/or deep experience and lessons learned – and not from leaps of faith
and assumptions. A TOC is a concise, explicit explanation of: “If we do X, then Y,
because Z.” (Source: Funnell and Rogers 2011; Babbitt, Chigas and Wilkinson 2013)
Timeliness
of data is the availability of sufficiently up-to-date data to meet management needs.
Timeliness is one of five USAID Data Quality Standards. (Source: USAID Performance
Monitoring & Evaluation Tips: Data Quality Standards, 2009)
Triangulation
refers to the diversification of perspectives that comes about when a set of issues is
investigated by a diverse, multidisciplinary team, using multiple tools and techniques,
with individuals and groups of people who represent the diversity of the community. In
order to understand the importance of triangulation, it is necessary to think about the
issue of bias (see definitions for general bias and sample bias). (Source: CRS
PRA/RRA Guide)
Validity
of data is the extent to which a measure actually represents what we intend to
measure. Validity is one of five USAID Data Quality Standards. (Source: USAID
Performance Monitoring & Evaluation Tips: Data Quality Standards, 2009)
