MONITORING AND EVALUATION SERIES

GLOSSARY OF COMMONLY USED M&E TERMINOLOGIES

By: Enock Warinda

©2011

This is a living document and will be updated periodically as required.


Planning, Monitoring and Evaluation definitions
Many ASARECA staff and partners bring a wealth of PM&E experience to bear on the programs and projects they are responsible for. A frequent obstacle to effective discussion of PM&E is the misunderstanding that results from a lack of agreed terminology. Many donor and implementing organizations have their own specific definitions of the terms commonly associated with PM&E. To facilitate communication inside ASARECA, the following section lists some key terms and establishes a common definition for each.

The growth of Monitoring and Evaluation (M&E) units in government, together with an increased supply of M&E expertise from the private sector and public institutions, calls for a common language on M&E. M&E is a relatively new practice and tends to be informed by varied ideologies and concepts; the danger for stakeholders is that these diverse ideological and conceptual approaches can exacerbate confusion and misalignment. The standardization of concepts and approaches in ASARECA is therefore particularly crucial for the enhancement of service delivery. Please note that this glossary is not exhaustive; it should rather be viewed as an attempt to give ASARECA members a shared understanding of key terminology used in M&E. ASARECA is in the process of refining its M&E systems to improve the performance of its system of governance and the quality of its outputs, thus providing early warning systems and mechanisms to respond speedily to problems as they arise.


Detailed Glossary
Accountability
The obligation to demonstrate that work has been conducted in compliance with agreed rules and standards, or to report fairly and accurately on performance results vis-à-vis mandated roles and/or plans. In development terms, it refers to the obligation of partners to act according to clearly defined responsibilities, roles and performance expectations, often with respect to the prudent use of resources. For evaluators, it connotes the responsibility to provide accurate, fair and credible monitoring reports and performance assessments. It also refers to planning for, and the monitoring, evaluation and reporting of, performance and compliance against agreed organizational standards and outcomes. Accountability enables ASARECA to answer to all stakeholders for results, impacts and the use of resources, and requires the fullest communication between the different programs and ASARECA core service units about their responsibilities.

Action Research
An interactive inquiry process that balances problem-solving actions implemented in a collaborative context with data-driven collaborative analysis or research, in order to understand underlying causes and enable future predictions about personal and organizational change.

Activities
The actions taken to achieve the required outputs and to accomplish the planned objective. Examples of activities include:
- conducting needs assessments and training on …
- establishing trials on …
- marketing value-added products of …
- negotiations and dialogue with …
- monitoring/evaluating program results
- allocating funds to …
- providing technical assistance to …
- conducting information sessions, etc.

Activity Schedule
A graphic representation that sets out the timing, sequence and duration of project activities. It can also be used to identify milestones for monitoring progress and to assign responsibility for the achievement of milestones.

Advocacy
The act of representing or defending others (individuals, communities, etc.) and of using evaluation results to promote and inform.

Aggregate
To aggregate is to put together (collapse) data from different sectors (such as men, women, households, communities, management practices, regions, locations, technologies, innovations, etc.) into one category. For example: putting together data from men and women to obtain household-level data, or collapsing data from numerous households into community-level data. This requires organization beforehand, at the levels of data coding, collection and computer input (a short aggregation sketch follows this block of entries).

Agricultural Development Domain
A development domain refers to the spatial representation of preconditions or factors considered important for rural development. It can be characterized using stratification criteria that, based on theory and previous research, determine the comparative advantage of rural areas with respect to frequently occurring livelihood strategies. It is constructed by the intersection of three spatial variables (agricultural potential, market access and population density) using a geographic information system (GIS).
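A minimal sketch of the aggregation step described under Aggregate above. The record fields (village, household_income) and all figures are hypothetical, and only the Python standard library is used.

    from collections import defaultdict

    # Hypothetical household-level records (field names are illustrative only).
    households = [
        {"village": "A", "household_income": 120},
        {"village": "A", "household_income": 80},
        {"village": "B", "household_income": 200},
    ]

    # Aggregate (collapse) household-level data into community-level means.
    grouped = defaultdict(list)
    for h in households:
        grouped[h["village"]].append(h["household_income"])

    community_level = {v: sum(vals) / len(vals) for v, vals in grouped.items()}
    print(community_level)  # {'A': 100.0, 'B': 200.0}

As the entry notes, deciding up front which grouping variables to code (village, gender, etc.) is what makes this collapse possible later.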


Agricultural Performance Indicator (API)
The extent or level of contribution of agriculture to an economy or to the region. It is computed as follows (a small computational sketch follows this block of entries):

    API = Observed level of contribution / Target level of contribution

where the observed level of contribution is the contribution that agriculture makes to the economy in a particular time period (usually one year), and the target level of contribution is the maximum expected or planned contribution that agriculture could make to the economy given the resource base of the economy.

Analysis of Objectives
The identification and verification of future desired benefits to which the beneficiaries attach priority. The output of an analysis of objectives is the objective tree.

Analysis of Strategies
The critical assessment of the alternative ways of achieving objectives, and the selection of one or more for inclusion in the proposed project.

Analytical Methods
Methods used to process and interpret information during an evaluation.

Appraisal
Within the context of ASARECA, appraisal refers to an overall assessment of the relevance, feasibility and potential sustainability of a development intervention prior to a decision on funding. Its purpose is to enable decision-makers to decide whether the activity represents an appropriate use of resources.

Archival Records
The gleaning of information from existing records that are kept by your own or another institution, in order to gather data for your evaluation.

Assessment
The process of making a judgment on the basis of the analysis of available information.

Assumptions
Hypotheses about factors or risks which could affect the progress or success of a development intervention. They are made explicit in theory-based evaluations, where the evaluation systematically tracks the anticipated results chain. They represent the fourth column of the Logframe matrix.

Attribution
The ascription of a causal link between observed (or expected to be observed) changes and a specific intervention. It represents the extent to which observed development effects can be attributed to a specific intervention or to the performance of one or more partners, taking account of other interventions, (anticipated or unanticipated) confounding factors, or external shocks. Attribution refers to that which is to be credited for the observed changes or results achieved.

Attrition
A situation in which some members of the treatment or control group, or both (e.g. farmers and cluster groups), drop out of the sample. It also refers to the failure to collect data from a unit in subsequent rounds of a panel data survey. Attrition in the treatment group is generally higher the less desirable the intervention.
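A minimal computational sketch of the API ratio defined above, assuming observed and target contributions are already expressed in the same units; the figures are hypothetical.

    def agricultural_performance_indicator(observed: float, target: float) -> float:
        # API = observed level of contribution / target level of contribution,
        # as defined in the glossary entry above.
        if target == 0:
            raise ValueError("target level of contribution must be non-zero")
        return observed / target

    # Example: agriculture contributed 24 units in the year against a
    # planned maximum of 30 units given the economy's resource base.
    print(agricultural_performance_indicator(24, 30))  # 0.8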

Audit
An independent, objective assurance activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to assessing and improving the effectiveness of risk management, control and governance processes. Note: it is worth noting the distinction between regularity (financial) auditing, which focuses on compliance with applicable statutes and regulations, and performance auditing, which is concerned with relevance, economy, efficiency and effectiveness. Internal auditing provides an assessment of internal controls undertaken by a unit reporting to management, while external auditing is conducted by an independent organization.

Baseline Data
A set of data that measures specific conditions (almost always the indicators chosen through the design process) before a project, initiative or program starts, or shortly after implementation begins; for example, the level of income at the start of a project. Baseline data provide a starting point against which to compare project performance over the life of the project. Example: if you are on a diet, your baseline is your weight on the day you begin. If reliable historical data on your performance indicator exist, they should be used; otherwise, you will have to collect a set of baseline data at the first opportunity (a baseline comparison is sketched after this block of entries).

Baseline Study
An analysis describing the situation prior to a development intervention, against which progress can be assessed or comparisons made. The following questions should be considered when planning a baseline study:
- What information is already available?
- What will the study measure?
- Which data will effectively measure the indicators?
- Which methodology should be used to measure progress and results achieved against the project objectives?
- What logistical preparations are needed for collecting, analyzing, storing and sharing data?
- How will the data be analyzed?
- Who should be involved in conducting the studies? Does the team have all the skills needed to conduct the study? If not, how will additional expertise be obtained?
- What will the financial and management costs of the study be? Are the estimated costs of the studies proportionate to the overall project costs?
- Are adequate quality control procedures in place?
- How will the study results/recommendations be used?
Point to remember: if an end-line study is planned, then both the baseline and end-line studies should use the same methods of sampling, data collection and analysis, and collect the same data (set of indicators) for comparison.

Benchmark
A reference point or standard against which performance or achievements can be assessed. A benchmark refers to the performance that has been achieved in the recent past by other comparable organizations, or to what can be reasonably inferred to have been achieved in the circumstances.

Beneficiaries
The individuals, groups or organizations who, in their own view and whether targeted or not, benefit directly or indirectly from the interventions of ASARECA.
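A minimal sketch of comparing end-line values against baseline data on the same set of indicators, as the entries above recommend; the indicator names and values are hypothetical.

    # Baseline and end-line values for the same set of indicators (illustrative).
    baseline = {"maize_yield_t_per_ha": 1.8, "household_income_usd": 450}
    endline  = {"maize_yield_t_per_ha": 2.4, "household_income_usd": 540}

    # Percent change against the baseline starting point for each indicator.
    for indicator, before in baseline.items():
        after = endline[indicator]
        change = 100 * (after - before) / before
        print(f"{indicator}: {change:+.1f}%")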

Bias
The extent to which the estimate of impact differs from the true value as a result of problems in the evaluation or sample design, and not due to sampling error. Bias in sampling, for example, means ignoring or underrepresenting parts of the target population.

Capacity-building
The process through which capacity is created. This is an increasingly important cross-cutting issue in poverty reduction interventions.

Case Study
A methodological approach to describing a situation, individual, etc. that typically incorporates a number of data-gathering activities (e.g. interviews, observations, questionnaires) at selected sites or programs.

Causality Analysis
An analysis used in program formulation to identify the root causes of development challenges. It organizes the main data, trends and findings into relationships of cause and effect, and identifies root causes and their linkages as well as the differentiated impact of the selected development challenges. A causality framework or causality tree analysis (or problem tree) can be used as a tool to cluster contributing causes and examine the linkages among them and their various determinants.

Coherence
Compliance with the policies, priorities, guidelines and approaches set by an institution.

Community
A group of people living in the same locality and sharing common characteristics.

Community of Practice
A network of people who work on similar processes or in similar disciplines and who come together to develop and share their knowledge in that field, for the benefit of both themselves and their organization. It may be created formally or informally, and members can interact online or in person.

Comparison Group
Individuals whose characteristics (such as race/ethnicity, gender and age) are similar to those of your program participants but who do not receive the program (services, products or activities) you are evaluating. These individuals may not receive any services, or they may receive a different set of services; in no instance do they receive the same service(s) as those you are evaluating.

Control
A verification that financial documents are exact and that expenditures conform to norms and authorization procedures (financial control), or a management function to determine whether materials conform to technical specifications and to international norms (technical control).

Control Group
A group of individuals whose characteristics (such as race/ethnicity, gender and age) are similar to those of your program participants but who do not receive the services, products or activities you are evaluating. Participants are randomly assigned to either the treatment (or program) group or the control group. The same information is collected for people in the control group as for those in the experimental group. As part of the evaluation process, the experimental (or treatment) group and the control/comparison group are assessed to determine which type of services, activities or products provided by your program produced the expected changes. A control group is thus used to assess the effect of your program on participants as compared with similar individuals not receiving the services.

Correlation
The strength of the relationship between two (or more) variables. It can take two directions (a correlation sketch follows this block of entries):
- Positive correlation: one variable tends to increase together with another variable.
- Negative correlation: one variable decreases as the other one increases.

Cost-Benefit Analysis
A form of economic analysis that takes into account benefits and costs in commensurable, actual monetary values and arrives at a single index to determine the value of a project. The financial cost-benefit analysis is made from the perspective of the project, while an economic cost-benefit analysis is made from the perspective of the entire economy of which the aid activity is part; a social cost-benefit analysis also includes distributional considerations.

Cost-Effectiveness Analysis
An economic or social cost-benefit analysis that quantifies benefits without translating them into monetary terms. This analysis allows one to compare alternative ways of accomplishing the same objective(s). It also allows the selection of the activity, among those feasible, that will allow the attainment of the objective at the least cost.

Counterfactual
An estimate of what the outcome (Y) would have been for a program or project participant in the absence of the program (P). In other words, it is the situation or condition which would hypothetically prevail for individuals, organizations or groups were there no development intervention. By definition, the counterfactual cannot be observed; therefore, it must be estimated using comparison groups.

Critical Assumption
A hypothesis about factors or risks which could affect the progress or success of a development intervention. It is an important factor outside management control that can strongly influence project implementation and success. Note: assumptions can also be understood as hypothesized conditions that bear on the validity of the evaluation itself, e.g. about the characteristics of the population when designing a sampling procedure for a survey. Assumptions are made explicit in theory-based evaluations, where the evaluation systematically tracks the anticipated results chain.

Data
Information stored in either hard or soft form:
- Hard data is precise, numerical information.
- Soft data is less precise, verbal information.
- Raw data is the survey information before it has been processed and analyzed.
- Missing data are values or responses which fieldworkers were unable to collect (or which were lost before analysis).
- Gender-segregated data are information used to promote gender-balanced analyses.
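A minimal sketch of the correlation concept above using the Pearson formula; the data are hypothetical and only the Python standard library is used.

    from statistics import mean

    def pearson_correlation(x, y):
        # Positive result: variables rise together; negative: one falls as the other rises.
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    training_days = [1, 2, 3, 4, 5]
    crop_yield    = [1.1, 1.9, 3.2, 3.8, 5.1]   # tends to rise with training_days
    print(pearson_correlation(training_days, crop_yield))  # close to +1: positive correlation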

Data Collection Method
The strategy and approach used to collect data. The methods include informal and formal surveys, direct and participatory observation, community interviews, focus groups, expert opinion, case studies, literature searches, etc. In collecting data, the following questions should be addressed: What type of data should we collect? When should we collect data (and how often)? What methods and tools will we use to collect data? Where do we get the data from? How do we ensure good-quality data?

What type of data should we collect? There are two main types of data, qualitative and quantitative, and the type of data most appropriate for a project will depend on the indicators developed. Qualitative data consist of perceptions, experience and opinions, and common questions to collect qualitative information might begin with 'How did…?', 'In what way…?', 'In your opinion…?', etc. Qualitative data can be categorized and quantified for the purpose of data analysis. Quantitative data involve numbers, percentages and ratios, and common questions to collect quantitative information might start with 'How many?', 'What proportion of…?', etc. The most common methods used in qualitative data collection are observation, focus group discussions and in-depth interviews; the most common methods of collecting quantitative data are quantitative surveys and secondary data reviews.

What methods and tools will we use to collect data? There are a variety of methods used to collect data, and the most common methodologies are:

a) Surveys: These are used by evaluators to gather data on specific questions. Performance data, demographic information, satisfaction levels and opinions can be collected through surveys, which usually involve pre-set questions in a particular order or flow, open or close-ended in format. Surveys can be conducted face to face, by email or telephone, and they can also be self-administered.

b) Interviews: An interview is used when interpersonal contact is important and when follow-up of any interesting comments provided is desired. Interviews are best conducted face to face, although in some situations telephone or online interviewing can be successful. The questions can be structured or semi-structured. In structured interviews, a carefully worded questionnaire is administered, and interviewers are trained to deviate only minimally from the question wording to ensure uniformity of interview administration; the emphasis is on obtaining answers to pre-prepared questions. For in-depth interviews, no rigid format is followed, although a series of open-ended questions is usually used to guide the conversation. In-depth interviews capture the respondent's perceptions in his or her own words, which allows the evaluator to understand the experience from the respondent's perspective. There may be a trade-off between comprehensive coverage of topics and in-depth exploration of a more limited set of issues. It is good practice to prepare an interview guide, to hold mock interviews to estimate the time required, and to ensure that none of the questions are leading or prompting a specific answer. In terms of qualitative data that can be used to gain insight into the achievements and challenges of a project, interviews with project staff and beneficiaries can be especially useful for answering questions like: How has the project affected you personally? Which aspects of the project have been successful, and why? Which aspects of the project have not been successful, and why? What needs to be done to improve the project?

c) Observation: In observation, data is gathered on activities, processes and behavior, interactions between individuals or groups, non-verbal cues, or the non-occurrence of something that is expected. It is a useful way to collect data on physical settings, and it can provide evaluators with an opportunity to understand the context within which the project operates and to learn about issues that the participants or staff may themselves be unaware of or unwilling to discuss in an interview or focus group. Data collected through observation should be documented immediately; no information should be trusted to future recall. The date and time of the observation should be recorded, and everything that the observer believes to be worth noting should be included. The descriptions must be factual, accurate and thorough without being opinionated or too detailed.

d) Focus group discussions: This method combines elements of both interviewing and observation. Focus groups involve a gathering of 8 to 12 people who share characteristics relevant to the evaluation questions, and the explicit use of group interaction is believed to generate data and insights that are unlikely to emerge without the interaction found in the group. The discussions should take a maximum of 90 minutes. The facilitator can also observe group dynamics and gain insight into the respondents' behaviors, attitudes and relationships with one another. FGDs are useful in answering the same type of questions as those posed in in-depth interviews, but within a social, rather than individual, setting. Originally used as a market research tool to investigate the appeal of various products, the focus group technique has been adapted as a tool for data collection in many other sectors. Specific applications of the focus group method in evaluations include:
- identifying and defining achievements and constraints in project implementation
- identifying project strengths, weaknesses and opportunities
- assisting with the interpretation of quantitative findings
- obtaining perceptions of project effects
- providing recommendations for similar future interventions
- generating new ideas

e) Document studies: Reviews of various documents that were not prepared for the purposes of the evaluation can provide insights into a setting and/or group of people that cannot be observed or noted in any other way. Examples of internal public records are organizational accounts, institutional mission statements, annual reports, budgets and policy manuals; they can help the evaluator understand the institution's resources, values, processes, priorities and concerns. External public records include census and vital statistics reports, county office records, newspaper archives and local business records that can assist an evaluator in gathering information about the larger community and relevant trends; these may be helpful in understanding the characteristics of the project participants and in making comparisons between communities. Personal documents are first-person accounts of events and experiences, such as diaries, field notes, photographs, scrapbooks, poetry, letters to the paper, quotes, artwork, portfolios and schedules; they can help the evaluator understand an individual's perspective with regard to the project. Document studies are inexpensive, quick and unobtrusive. However, issues of authenticity and access always need to be considered.

Data Quality
The extent to which data adheres to the key dimensions of quality, namely:
- Accuracy
- Reliability
- Completeness
- Precision
- Timeliness
- Integrity: the security or protection of information from unauthorized access or revision
- Utility: the usefulness of the information for its intended users
- Objectivity: whether information is accurate, reliable and unbiased, and whether it is presented in an accurate, clear and unbiased manner

Data Quality Assessment and/or Assurance
A set of internal and external mechanisms and processes to ensure that data meets the key dimensions of quality. It allows an organization to see how the data quality procedures put in place have caused the quality of the data to improve.

Data Quality Management
The establishment and deployment of roles, responsibilities, policies and procedures concerning the acquisition, maintenance, dissemination and disposition of data.

Delphi Technique
The Delphi technique enables experts who live in different locations to engage in dialogue and reach consensus through an iterative process. Experts are asked specific questions; their answers are sent to a central source, summarized, and fed back to the experts. The experts then comment on the summary. They are free to challenge particular points of view or to add new perspectives by providing additional information. Because no one knows who said what, conflict is avoided.

Double Difference
The difference in the change in the outcome observed in the treatment group compared with the change observed in the control group; equivalently, it is the change in the difference in the outcome between treatment and control. Also called difference-in-difference. Double differencing removes selection bias resulting from time-invariant unobservables (a worked example follows this block of entries).

Effects
Intended or unintended changes resulting directly or indirectly from a development intervention.
- Primary effects: the changes brought about by an assistance effort to accomplish the specific objective of the intervention.
- Direct effects: the immediate costs and benefits of both the contributions to and the results of a project.
- Indirect effects: the costs and benefits which are unleashed by the contributions to a project and by its results; these effects are taken into account by sociological analyses.
- External effects: the costs and benefits not taken into account in determining the expenditures and financial revenue of the aid programme.
- Intangible effects: costs and benefits which are thought to be pertinent but which cannot be measured, and which therefore cannot be included in the economic analysis.

Evaluability Assessment
A brief preliminary study undertaken to determine whether an evaluation would be useful and feasible. This type of preliminary study helps clarify the goals and objectives of the program or project, identify data resources available, pinpoint gaps and identify data that need to be developed, and identify key stakeholders and clarify their information needs. It may also redefine the purpose of the evaluation and the methods for conducting it. Evaluability assessments are often conducted by a group, including stakeholders such as implementers, evaluators and administrators. To conduct an evaluability assessment, the team:
- reviews materials that define and describe the intervention
- identifies modifications to the intervention
- interviews managers, scientists, Principal Investigators and other staff on their perceptions of the intervention's goals and objectives
- interviews stakeholders on their perceptions of, and level of satisfaction with, the intervention's goals and objectives
- develops and refines a theory-of-change model
- identifies sources of data and data collection methods
- identifies people and organizations that can implement any possible recommendations from the evaluation
By looking at the intervention as implemented on the ground and the implications for the timing and design of the evaluation, an evaluability assessment can save time and help avoid costly mistakes.

Evaluation
A systematic and objective examination of a planned, ongoing or completed project or initiative at a given point in time. It commonly seeks to determine the efficiency, effectiveness, impact, sustainability and relevance of a project's or organization's objectives. It verifies whether project objectives have been achieved or not, and requires an in-depth review at specific points in the life of the project, usually at the mid-point or end of the project. It is a management tool which can assist in evidence-based decision-making, and which provides valuable lessons for implementing organizations and their partners. Evaluation helps to answer questions such as:
- How relevant was our work in relation to the primary stakeholders and beneficiaries?
- To what extent were the project objectives achieved? What contributed to and/or hindered these achievements?
- Were the available resources (human, financial) utilized as planned and used in an effective way?
- What are the key results, including intended and unintended results?
- What evidence is there that the project has changed the lives of individuals and communities?
- How has the project helped to strengthen the management and institutional capacity of the organization?
- What is the potential for sustainability, expansion and replication of similar interventions?
- What are the lessons learned from the intervention, and how should those lessons be utilized in future planning and decision-making?
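A minimal worked example of the double difference defined above; the outcome values are hypothetical.

    # Double difference (difference-in-difference):
    # (change in treatment group) minus (change in control group).
    treatment_before, treatment_after = 10.0, 18.0
    control_before,   control_after   = 10.0, 13.0

    did = (treatment_after - treatment_before) - (control_after - control_before)
    print(did)  # 5.0: the change attributable to the intervention,
                # net of the trend seen in the control group

Computing it as (treatment_after - control_after) - (treatment_before - control_before) gives the same 5.0, which is the "change in the difference" reading of the definition.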

There are several approaches to development evaluation.

Evaluation Criteria
When evaluating development programs and projects, it is useful to consider the following criteria:

1. Effectiveness
The extent to which an organization, policy, program or initiative is meeting its expected results. It is the degree of achievement of the planned specific objectives, and thus the extent to which the beneficiaries have reaped the planned benefits. In evaluating the effectiveness of a program or project, it is useful to consider the following questions:
- To what extent were the planned resources used to meet project objectives?
- What were the major factors influencing the achievement or non-achievement of the objectives?
- What was the rate of disbursement of project resources?
- To what extent did the interventions address the capacity needs identified?
- What was the quality of capacity built?
- To what extent were the capacity-building skills acquired utilized?
Related term: cost-effectiveness, the extent to which an organization, program or initiative is using the appropriate and efficient means to achieve its expected results, relative to alternative design and delivery approaches.

2. Efficiency
Efficiency measures the outputs (qualitative and quantitative) in relation to the inputs. It is an economic term used to assess the extent to which aid uses the least costly resources possible in order to achieve the desired results. This generally requires comparing alternative approaches to achieving the same outputs, to see whether the most efficient process has been adopted. When evaluating the efficiency of a project or program, it is useful to consider the following questions:
- Were the activities cost-effective?
- Were objectives achieved on time?
- Was the project or program implemented in the most efficient way compared with alternatives?

3. Relevance
Relevance refers to the extent to which the supported interventions are suited to the priorities and policies of the target group, recipient and donors. In evaluating the relevance of a project or program, it is useful to consider the following questions:
- To what extent are the objectives of the program or project still valid?
- Are the activities and outputs of the program or project consistent with the overall goal and the attainment of its objectives?
- Are the activities and outputs of the program or project consistent with the intended impacts and effects?

4. Impacts
The positive or negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental and other development indicators. The examination should be concerned with both intended and unintended results, and must also include the positive and negative impact of external factors, such as changes in terms of trade and financial conditions. When evaluating the impact of a program or project, it is useful to consider the following:
- What happened as a result of the program or project?
- What real difference has the activity made to the beneficiaries?
- How many people have been affected?

5. Sustainability
Sustainability is concerned with measuring whether the benefits and effects generated by a project or program will continue after donor funding has been withdrawn and the project has terminated. Projects need to be environmentally as well as financially sustainable. When evaluating the sustainability of a program or project, it is useful to consider the following questions:
- To what extent did the benefits of a program or project continue after donor funding ceased?
- What were the major factors which influenced the achievement or non-achievement of sustainability of the program or project?

Evaluation Framework Study
An assessment conducted at the start-up of a project to verify that the conditions allowing monitoring and evaluation of the project are in place. It includes the review of data availability and the possible collection of baseline data, the final selection of the indicators, the selection/provision of tools for data collection according to the selected or available sources of verification, and agreement on the targets to be achieved and their measurement on the basis of the selected indicators.

Evaluative Research
A type of study that uses standard social research methods for evaluative purposes, as a specific research methodology, and as an assessment process that employs special techniques unique to the evaluation of social programs.

Ex-ante Evaluation (Prospective Evaluation)
A prospective evaluation is conducted ex ante; that is, a proposed program is reviewed before it begins, in an attempt to analyze its likely success, predict its cost, and analyze alternative proposals and projections. Most prospective evaluations involve the following kinds of activities:
- a contextual analysis of the proposed program or policy
- a review of evaluation studies on similar programs or policies, and a synthesis of the findings and lessons from the past
- a prediction of likely success or failure, given a future context that is not too different from the past, and suggestions on strengthening the proposed program and policy if decision-makers want to go forward

Expected Results
An outcome that a program, policy or initiative is designed to produce.

Ex-post Evaluation (Analysis)
An evaluation of a development intervention after it has been completed. It may be undertaken directly after or long after completion. The intention is to identify the factors of success or failure, to assess the sustainability of results and impacts, and to draw conclusions that may inform other interventions. It includes not only the summative evaluation of the project itself (typically in terms of processes and outputs) but also an analysis of the project's impact on its environment and its contribution to wider (economic/societal/educational/community, etc.) goals and policies. It should also lay down a framework for future action leading, in turn, to the next ex-ante study. In reality, ex-post evaluations often take so long to produce (in order to measure long-term impact) that they are too late to influence future planning.

External Evaluation
The evaluation of a development intervention conducted by entities and/or individuals outside the donor and implementing organizations. Note: an externally conducted evaluation is not necessarily an independent evaluation. If the evaluation is conducted externally but is funded by, and under the general oversight of, Program Managers and Principal Investigators, it is an internal evaluation and should not be deemed independent.

Feasibility Study
An assessment conducted during the appraisal phase to verify whether the proposed project is well founded and is likely to meet the needs of its intended target groups/beneficiaries. It should take into account all policy, technical, economic, financial, institutional, management, environmental, socio-cultural and gender-related aspects.

Feedback
The transmission of findings generated through the evaluation process to parties for whom they are relevant and useful, so as to facilitate learning. This may involve the collection and dissemination of findings, conclusions, recommendations and lessons from experience.

Formative Evaluation
An evaluation intended to improve performance, most often conducted during the implementation phase of projects or programs. It can also be conducted for other reasons, such as compliance, legal requirements or as part of a larger evaluation initiative. Learning how the program is being implemented, including the challenges and strong points, can serve as useful information for improving practice, rethinking how to go about things, and identifying future action steps. It includes several evaluation types, e.g.:
- Needs assessment determines who needs the program, how great the need is, and what might work to meet the need
- Evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
- Structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes
- Implementation evaluation monitors the fidelity of the program or technology delivery
- Process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures

Gender
The social roles assigned to men and women based on their sex.

Gender Analysis
An assessment of the likely differences in the impacts of proposed policies, programmes or projects on women and men. It includes attention to: the different roles, interests and problems of women and men; the differential access to and use of resources and their specific needs; and the barriers to the full and equitable participation of women and men in project activities and the equitable distribution of the benefits obtained.

Goal
The sectoral, national or organizational objectives to which the project is designed to contribute. It is a statement of intention that defines the main reason for undertaking the project. It can also be thought of as describing the expected impact of the project.

Ground-truth
A test run, a pilot, or a pre-test study. It implies testing a technology, an innovation or any activity in the setting where it will be used. It also refers to checking ideas or methods in the real world.

Hierarchy of Objectives
A tool that helps to analyze and communicate programme objectives and shows how local interventions should contribute to global objectives. It organizes these objectives into different levels (objectives, sub-objectives) in the form of a hierarchy or tree, thus showing the logical links between the objectives and their sub-objectives. It presents in a synthetic manner the various intervention logics, derived from the regulation, that link individual actions and measures to the overall goals of the intervention.

Horizontal Logic
Indicates the relation between the resources and the results of a project or programme, through the identification of objectively verifiable indicators and the means of verification for these indicators.

Inception Phase
The period from project start-up until the finalization of the updating of the work plan. It extends between one and three months and ends with a first project report.

Independent Evaluation
An evaluation carried out by entities and persons free of the control of those responsible for the design and implementation of the development intervention. It is characterized by full access to information and by full autonomy in carrying out investigations and reporting findings. Independence implies freedom from political influence and organizational pressure. Note: the credibility of an evaluation depends in part on how independently it has been carried out.

Indicator
An indicator is a marker of performance, showing progress and helping measure change. The word comes from the Latin "in" (towards) and "dicare" (make known).

Types of Indicators:

1. Input indicators: measure the provision of resources, for example the number of full-time staff working on the project.

2. Output indicators: demonstrate the change at project level as a result of activities undertaken. What has been done? Examples include training outlines, policies/procedures developed, number of demand-driven technologies generated, number of varieties produced, number of partner organizations involved, etc.

3. Outcome indicators: illustrate the change with regard to the beneficiaries of the project in terms of knowledge, attitudes, skills or behavior. Examples include the number of new-variety users in a community, number of policy options presented for legislation or decree, etc.

4. Impact indicators: measure the long-term effect of a program, often at the national or population level. Examples include percent change in total factor productivity, percent change in selected crops, etc. These indicators can usually be monitored after a medium- to long-term period. Impact measurement requires rigorous evaluation methods, e.g. a longitudinal study and an experimental design involving control groups, in order to assess the extent to which any change observed can be directly attributed to project activities.

5. Process indicators: provide evidence of whether the project is moving in the right direction to achieve the set objectives. They relate to the multiple activities that are carried out to achieve project objectives. How well have things been done? Examples include the proportion of participants who report they are satisfied with the service or information provided. Who and how many people have been involved? Examples include number of participants, proportion of ethnic groups, age groups, etc.

Other types of indicators:

1. Proxy indicators: provide supplementary information where direct measurement is unavailable or impossible to collect.

2. Quantitative and qualitative indicators: all the indicators discussed above can be categorized as qualitative or quantitative on the basis of the way they are expressed. Quantitative indicators are essentially numerical and are expressed in terms of absolute numbers, percentages, ratios, binary values (yes/no), etc. (simple quantitative indicators are sketched below). Qualitative indicators are narrative descriptions of phenomena, measured through people's opinions, attitudes, beliefs and perceptions and the reality of people's lives in terms of non-quantitative facts, e.g.: What are the reasons for low levels of technology adoption? Why do so few men use the introduced varieties? What are the cultural determinants that contribute to the need for appropriate information packages? Qualitative information supplements quantitative data with a richness of detail that brings a project's results to life, and often provides information which explains the quantitative evidence.

As there is no standard list of indicators, each project will require a collaborative planning exercise to develop indicators related to each specific objective, on the basis of the needs, theme and requirements of the project.
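A minimal sketch of quantitative indicators expressed as absolute numbers and percentages, as described above; the indicator names and figures are hypothetical.

    # Illustrative output and outcome indicators (names are hypothetical).
    participants_trained = 240        # output indicator: absolute number
    adopters = 96                     # farmers using the introduced varieties
    adoption_rate = 100 * adopters / participants_trained

    print(f"farmers trained: {participants_trained}")
    print(f"adoption rate:   {adoption_rate:.0f}%")  # outcome indicator: percentage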
It is important to select a limited number of key indicators that will best measure any change in the project objectives and that will not impose unnecessary data collection.

Criteria of a Strong Performance Indicator
- Validity: Does the performance indicator actually measure the result?
- Reliability: Is the performance indicator a consistent measure over time?
- Sensitivity: When the result changes, will the performance indicator be sensitive to those changes?
- Simplicity: How easy will it be to collect and analyze the data? Does it present challenges, i.e. is it complex? Does it need technical expertise to understand?
- Utility: Will the information be useful for program management (decision-making, learning and adjustment)?
- Affordability: Can the program afford to collect the information?

Innovation
A creative and interactive process of making improvements by successfully introducing something new into social and economic practices. It goes far beyond the confines of research labs to users, suppliers and consumers everywhere: in government, business and non-profit organizations, across sectors, across borders, and across institutions. The Oslo Manual defines four types of innovation: product innovation, process innovation, marketing innovation and organizational innovation.
- Product innovation: a good or service that is new or significantly improved. This includes significant improvements in technical specifications, components and materials, software in the product, user-friendliness or other functional characteristics.
- Process innovation: a new or significantly improved production or delivery method. This includes significant changes in techniques, equipment and/or software.
- Marketing innovation: a new marketing method involving significant changes in product design or packaging, product placement, product promotion or pricing.
- Organizational innovation: a new organizational method in business practices, workplace organization or external relations.

Innovation Policies (Agricultural)
Policies designed to enhance the stakeholders' capacity to innovate in the agricultural sector. Based on the Innovation Systems Framework, innovation policies are hereby classified into three categories:
- policies designed to create and strengthen the formal organizations and institutions needed to generate and apply new or existing information
- policies that support and facilitate innovation among system actors, including farmers
- policies that integrate and intermediate among public, private and civil society actors engaged in innovation processes
Potential indicators on agricultural innovation policy include: expert assessments of policies on agricultural research, education, and extension/advisory services; average distance of farm households to markets; and membership in international regimes, e.g. the International Union for the Protection of New Varieties of Plants (UPOV) or the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA).

Innovation System
A network of organizations, enterprises and individuals focused on bringing new products, new processes and new forms of organization into social and economic use, together with the institutions and policies that affect their behavior and performance. The IS concept embraces not only the science suppliers but the totality and interaction of actors involved in innovation; these actors operate on both the formal and informal sources of innovation. It gives more attention to:
- the interaction between research and related economic activities
- the attitudes and practices that promote interaction and the learning that accompanies it
- the creation of an enabling environment that encourages interaction and helps to put knowledge into socially and economically productive use

Inputs
The human, financial, material/physical and information resources used to produce outputs through activities and accomplish outcomes.

Internal Evaluation
An evaluation of a development intervention conducted by a unit and/or individuals reporting to the management of the donor, partner or implementing organization.

Interval Scale
Measurements with defined and constant intervals between successive values (e.g. attitude measures and rankings). In an interval scale, all the values are continuous (a comparison of measurement scales is sketched at the end of this glossary).

Intervention Logic
The strategy underlying the project. It is the narrative description of the project at each of the four levels of the hierarchy of objectives used in the Logframe.

Joint Evaluation
An evaluation in which different donor agencies and/or partners participate. Note: there are various degrees of "jointness", depending on the extent to which individual partners cooperate in the evaluation process, merge their evaluation resources and combine their evaluation reporting. Joint evaluations can help overcome attribution problems in assessing the effectiveness of programs and strategies, the complementarity of efforts supported by different partners, the quality of aid coordination, etc.

Key Informants
People in a community, organization or region who, because of their position, are able to provide information or insights on some aspects relevant to the project. These informants play a key role in evaluation, especially in qualitative evaluation, though it is important to bear in mind that they also provide a subjective/one-sided perspective; therefore the evaluators will have to obtain the information from a large number of key informants.

Key Performance Indicator (KPI)
A variable that allows the verification of changes in the development intervention or shows results relative to what was planned. Key performance indicators may be selected from the overall objectively verifiable indicators, but should adequately and sufficiently measure the intended change, either singly or in combination.

Learning
The process by which knowledge and experience directly influence changes in behavior, which in turn provides the basis for another cycle of learning. This can be individual or group-based. It also refers to reflection on experiences to identify how a situation or future actions could be improved; learning involves applying lessons learned to future actions. Through learning, we learn to:
- increase effectiveness and efficiency
- increase the ability to initiate and manage change
- utilize institutional knowledge and promote organizational learning
- improve cohesion among different units of the organization
- increase adaptability to opportunities, challenges and unpredictable events
- increase motivation, confidence and proactive learning

Lessons Learned
Conclusions extracted from reviewing a development program or project, or even activities, by participants, managers, beneficiaries or evaluators, with implications for effectively addressing similar issues or problems in another setting. Frequently, lessons highlight strengths or weaknesses in preparation, design and implementation that affect performance, outcome and impact.

Logical Framework
A management tool used to improve the design of interventions, most often at the project level. It involves identifying strategic elements (inputs, outputs, outcomes, impact) and their causal relationships, indicators, and the assumptions or risks that may influence success and failure. It thus facilitates planning, execution and evaluation of a development intervention.

The following is the layout of the Logical Framework Matrix. The numbers indicate the usual order in which the cells are completed; the columns are: Intervention logic; Objectively verifiable indicators (OVIs) of achievement; Sources and means of verification; and Assumptions.

Impact row:
- Intervention logic (13): What is the overall impact of the project?
- OVIs (14): What are the key indicators related to the impact?
- Sources and means of verification (15): What are the sources of information for these indicators?

Outcomes row:
- Intervention logic (9): What specific outcome is the action intended to achieve?
- OVIs (10): Which indicators clearly show that the outcome has been achieved?
- Sources and means of verification (11): What sources of information exist or can be collected? What methods are required to get this information?
- Assumptions (12): Which risks should be taken into consideration?

Outputs row:
- Intervention logic (5, 6): Outputs are the results envisaged to achieve the specific objective; enumerate the outputs.
- OVIs: What are the indicators to measure whether, and to what extent, the action achieves the expected outputs?
- Sources and means of verification (7): What are the sources of information for these indicators?
- Assumptions (8): What external conditions must be met to obtain the expected outputs on schedule?

Activities row:
- Intervention logic (1): What are the key activities to be carried out, and in what sequence, in order to produce the expected results? (Group the activities by result.)
- Means (2): What are the means required to implement these activities, e.g. personnel, equipment, training, studies, supplies, operational facilities, etc.?
- Sources and means of verification (3): What are the sources of information about action progress?
- Assumptions (4): What pre-conditions are required before the action starts? What conditions outside the beneficiary's direct control have to be met for the implementation of the planned activities?

Logical Framework Approach (LFA)
A methodology for planning, managing and evaluating programmes and projects, involving stakeholder analysis, problem analysis, analysis of objectives, analysis of strategies, and the preparation of the Logframe Matrix and the activity and resource schedules.

Management Information System (MIS)
The creation, through a well-designed monitoring system, of regular feedback to management at the project and central levels on all key aspects of a project.

Means of Verification (MoV)
The expected source of the information we need to collect. MoVs should clearly specify this source. They ensure that the indicators can be measured effectively, by specifying the types of data, sources of information and methods of collection.

Meta Evaluation
An evaluation designed to aggregate findings from a series of evaluations. It can also be used to denote the evaluation of an evaluation, to judge its quality and/or assess the performance of the evaluators.

Mid-Term Review/Evaluation
The point at which progress to date is formally measured, to see whether the original environment has changed in a way which impacts the relevance of the original objectives. It is an opportunity to review these objectives if necessary, to decide whether the project is on target in terms of its projected outputs, to adjust working practices if necessary or, in certain circumstances, to re-negotiate timescales or outputs. It is often carried out not at the 'mid-point' at all, but at the end of a significant phase.

Milestones
Milestones correspond to the process indicators. They are an indication of short- and medium-term objectives (usually activities) which facilitate the measurement of achievements throughout the project, rather than just at the end. They also indicate times when decisions should be taken or an action should be finished.

Monitoring
An ongoing and systematic collection and analysis of information to assist timely decision-making, ensure accountability and provide part of the data for evaluation and learning. It provides project and program managers with important information on progress, or lack of progress, in relation to project/program objectives. It helps to answer the following questions:
- How well are we doing?
- Are we doing the activities we planned to do?
- Are we following the designated timeline?
- Are we over/under-spending?
- What are the strengths and weaknesses in the project?

Monitoring and Evaluation (M&E) Framework
A holistic approach that can address the program needs, monitor program processes and outputs, and evaluate goals and program/project objectives. It addresses key M&E questions: what indicators to measure; the sources, frequency and method of indicator data collection; baselines, targets and assumptions; how to analyze or interpret the data; the frequency and method for report development and distribution of the indicators; and how the components of the M&E system will function.

Monitoring and Evaluation (M&E) Plan
A comprehensive narrative document on all M&E activities (a summary of the M&E Framework). It encompasses the program planning processes right down to the documentation and dissemination plan.

Monitoring and Evaluation (M&E) System
A framework of M&E principles, practices and standards to be used throughout ASARECA. It is also envisaged to function as an apex-level information system which draws from program and project systems to deliver useful M&E products for stakeholders.

Most Significant Change (MSC)
A system designed to record and analyze change in projects or programs where it is not possible to precisely predict changes beforehand, and where it is therefore difficult to set predefined indicators. It aims to identify significant changes brought about by a development intervention, especially in those areas where changes are qualitative and therefore not susceptible to statistical treatment. It relies on people at all stages of a project or program meeting to identify what they consider to be the most significant changes within predefined areas or domains. It is designed around purposive sampling (sampling to find the most interesting or revealing stories), and its strength lies in its ability to produce information-rich stories that can be analyzed for lesson learning. It also involves a transparent process for the generation of stories that shows why and how each story was chosen, and it is designed to ensure that the process of analyzing and recording change is as participatory as possible.
Monitoring and Evaluation (M&E) System M&E system is a framework of M&E principles. universities. It aims to identify significant changes brought about by a development intervention. Livestock and Fisheries. High Value Non-staple crops.

Nominal Scale
Refers to classifications that form no natural sequence. Numbers are sometimes assigned to characteristics for identification, but they have no mathematical value and cannot be used for mathematical functions.

Objective
A specific statement detailing the desired accomplishments of a project. It is specified in terms of desired changes in behaviors and practices as a result of training or services provided by a project. Examples: reduction of malnutrition, improvement in the environment, increase in income, etc.

Objective Tree
A diagrammatic representation of the situation in the future once problems have been remedied, following a problem analysis and showing a means-to-ends relationship.

Objectively Verifiable Indicators (OVI)
Indicators of the different levels of objectives; they represent the second column of the logical framework. OVIs provide the basis for designing an appropriate monitoring system.

Ordinal Scale
Refers to measurements using classifications with a natural sequence (lowest to highest), but with undefined intervals. The values are discontinuous.
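As an illustration of the Nominal Scale and Ordinal Scale entries above, a minimal sketch using the pandas library; the column names and categories are invented for demonstration.

import pandas as pd

df = pd.DataFrame({
    "crop": ["maize", "beans", "maize"],         # nominal: no natural order
    "food_security": ["low", "high", "medium"],  # ordinal: ordered, undefined intervals
})
df["crop"] = pd.Categorical(df["crop"])          # codes identify, but have no mathematical value
df["food_security"] = pd.Categorical(
    df["food_security"], categories=["low", "medium", "high"], ordered=True
)
print(df["food_security"].min())  # ordering is meaningful: prints "low"
# df["crop"].min() would raise an error: nominal codes cannot be used for mathematical functions.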

Outcome Mapping
A methodology for planning and assessing development programming that is oriented towards change and social transformation. It puts people and learning at the centre of development and accepts unanticipated changes as potential for innovation. It provides a set of tools to design and gather information on the outcomes, defined as behavioral changes, of the change process, and therefore helps those in the assessment process think more systematically and pragmatically about what they are doing and to adaptively manage variations in strategies to bring about the desired outcomes. It helps a project or program learn about its influence on the progression of change in its direct partners. It is based largely on systematized self-assessment. It consists of three phases:

Intentional design: The program or project frames its activities based on the changes it intends to help bring about, and its actions are purposely chosen so as to maximize the effectiveness of its contributions to development. Its elements include:
o Vision Statement: describes why the program or project is engaged in development and provides an inspirational focus, establishing a vivid beacon to motivate staff and highlight the ultimate purpose of their day-to-day work. In drafting the vision, project implementers must be visionary.
o Mission Statement: describes how the program or project intends to support the vision, by stating the areas in which it will work toward the vision; it does not list all the planned activities. In developing the mission statement, the project implementers should consider not only how the program will support the achievement of outcomes by its boundary partners, but also how it will keep itself effective, efficient, and relevant.
o Boundary Partners: refer to individuals, groups, or organizations with whom the project or program interacts directly and with whom the program can anticipate some opportunities for influence (e.g. community leaders, indigenous groups, churches, NGOs, international institutions, regional administration, private sector, academic and research institutions, etc). They are assumed to control change, since they operate within different logic and responsibility systems.
o Outcome Challenge: Once the boundary partners have been identified, an outcome challenge statement is developed for each of them. Outcome challenges are phrased in a way that emphasizes behavioral change, with a focus on how the behavior, relationships, activities, or actions of an individual, group, or institution will change if the program or project is extremely successful.
o Progress Markers: A graduated set of statements describing a progression of changed behaviors in the boundary partner that will lead to the outcome challenge (e.g. what are the changes you expect to see, like to see, love to see).
o Strategy Maps: A combination of strategies or activities aimed at the boundary partner (outputs, new skills, information availability, support needs) and at the environment of the partner ("rules of the game").
o Organizational Practices: Refers to practices that determine an organization's effectiveness; practices that foster creativity and innovation, assist partners and maintain the organization's "niche".

Outcome and performance monitoring: Provides a framework for the ongoing monitoring of the program's or project's actions in support of the outcomes and of the boundary partners' progress towards the achievement of outcomes.
o Monitoring Priorities: Refers to what information is needed; who will collect it and who will use it; when it should be collected; and how it will be collected and used. This information is then gathered using the following three tools:
o Outcome Journals: Data collection tools for monitoring the progress of a boundary partner in achieving progress markers over time. Each journal describes the level of change as low, medium, or high, and includes information explaining the reasons for the change, evidence of the change, the people and circumstances that contributed to the change, and a place to record who among the boundary partners exhibited the change. Unanticipated changes, expected or unexpected effects, and lessons for the program or project are also recorded, in order to keep a running track of the context for future analysis or evaluation.
o Strategy Journals: Data collection tools for monitoring the strategies a program uses to encourage change in the boundary partner. Some of the planning and management questions that project implementers might want to consider during monitoring meetings after completing the strategy journal include: What are we doing well and what should we continue doing? What are we doing "okay" or badly and what can we improve? What strategies or practices do we need to add? What strategies or practices do we need to give up (those that have produced no results, or require too much effort or too many resources relative to the results obtained)? How are/should we be responding to the changes in boundary partners' behavior? Who is responsible? What are the timelines? Has any issue come up that we need to evaluate in greater depth? What? When? Why? How?
o Performance Journal: A data collection tool for monitoring how well the program or project is carrying out its organizational practices. It records data on how the program is operating as an organization to fulfill its mission. A single performance journal is created for the program and filled out during the regular monitoring meetings.

Evaluation planning: Helps the program or project identify evaluation priorities and develop an evaluation plan.
o Evaluation Plan: A short description of the main elements of an evaluation study to be conducted. It identifies who will use the evaluation, how and when, what questions should be answered, and how much it will cost.
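The journal tools above lend themselves to simple structured records. A minimal sketch in Python, assuming hypothetical field names and example entries; only the "expect/like/love to see" graduations follow the Progress Markers entry above.

from dataclasses import dataclass, field

@dataclass
class OutcomeJournalEntry:
    """One monitoring-period record for a single boundary partner."""
    boundary_partner: str
    progress_marker: str
    graduation: str          # "expect to see", "like to see", or "love to see"
    level_of_change: str     # "low", "medium", or "high"
    evidence: str
    contributing_factors: list = field(default_factory=list)
    unanticipated_changes: list = field(default_factory=list)

entry = OutcomeJournalEntry(
    boundary_partner="District farmer cooperative",
    progress_marker="Co-op tests improved varieties on member plots",
    graduation="like to see",
    level_of_change="medium",
    evidence="Trial plots established in 3 of 5 villages",
)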

Outcomes
Outcomes are the changes – within the community or among the researchers – that can be attributed, at least in part, to the research process. Outcomes are the effects of the program "being there." They can be negative or positive, expected or unexpected. Outcomes result both from meeting research objectives (outputs) and from the participatory research process itself, and encompass both the functional effects of participatory research (e.g. greater adoption and diffusion of new technologies, changed farming practices, changes in institutions or management regimes) and the empowering effects (e.g. increased community capacity, improved confidence or self-esteem, and improved ability to resolve conflict or solve problems). The desired outcomes of participatory research in natural resource management projects, for example, generally involve social transformation, such as new techniques or technologies developed through farmer experimentation, new management regimes for common resources, new community institutions and organizations, or community development plans. Many are diffuse, long term, and notoriously difficult to measure or to attribute to a particular research project or activity. Three types of outcomes related to the logic model are defined as:
1. Immediate Outcome: an outcome that is directly attributable to a policy, program or initiative's outputs. In terms of time frame and level, these are short-term outcomes, often at the level of an increase in awareness of a target population. Examples include: increase in awareness/skills of …, access to ….
2. Intermediate Outcome: an outcome that is expected to logically occur once one or more immediate outcomes have been achieved, thus helping to achieve intermediate changes that result from accessing and using inputs. In terms of time frame and level, these are medium-term outcomes, often at the change-of-behavior level among a target population. Examples include: increased adoption of crop varieties in Country X …, increased area under technology or management practice in ….
3. Final Outcome: the highest-level outcome that can be reasonably attributed to a policy, program or initiative in a causal manner, and is the consequence of one or more intermediate outcomes having been achieved. These outcomes usually represent the raison d'être of a policy, program or initiative. They are long-term outcomes that represent a change of state of a target population. Ultimate outcomes of individual programs, policies or initiatives contribute to the higher-level departmental Strategic Outcomes.

Outputs
Direct products and services stemming from the activities of an organization, policy, program or initiative, usually within the control of the organization itself. These products and services are delivered to the project participants. Examples of outputs are: the research activities undertaken as well as the tangible products of the research; information (organized in a report, such as a profile of a community, for example); and products, such as documentation of indigenous knowledge of plant species or local management practices. Measures such as the number of people trained, pamphlets produced, the number of farmers involved in on-farm experiments, research studies conducted, and the number of reports or publications of the research are commonly used to track outputs. Evaluators will also assess the quality of the outputs (e.g. What was the nature of the activities? Were all those interested in the project able to participate? Are the outputs useful? For whom? etc).

Overall Objective
It explains why the project is important to society in terms of long-term benefits to final beneficiaries as well as wider benefits to other groups. It may also help to show how a programme fits into the regional/sectoral policies of the government/organization concerned and of the donor community.

Participatory Approach
It refers to the involvement of project participants in the design, implementation, monitoring and evaluation of a project. It is particularly suitable for process projects, but requires specific skills to be implemented and is more time-consuming than other approaches. On the other hand, the use of a participatory approach increases beneficiaries' ownership and therefore the potential sustainability of project results.

Participatory Evaluation
Refers to the evaluation method in which representatives of agencies and stakeholders (including beneficiaries) work together in designing, carrying out and interpreting an evaluation.

Participatory Process
One or more processes in which the key stakeholders take part in specific decision-making and action, and over which they may exercise specific controls. It is often used to refer specifically to processes in which primary stakeholders take an active part in planning and decision-making, implementation, learning and evaluation. This often has the intention of sharing control over the resources generated and responsibility for their future use.

Partners
These are individuals and/or organizations with whom/which ASARECA works cooperatively to achieve mutually agreed upon objectives, outputs and outcomes. Partners include: universities, CG Centres, Farmer Organizations, governments, non-governmental organizations, community-based organizations, multilateral organizations, private companies, professional and business associations, civil society, the private sector, etc.

Performance
The degree to which a development intervention or a development partner operates according to specific criteria/standards/guidelines or achieves results in accordance with stated goals or plans. Performance reflects effectiveness in converting inputs into outputs, outcomes and impacts.

Performance Indicator
A particular characteristic or dimension used to measure intended changes defined by an organization's Logframe or results framework. Performance indicators are used to observe progress and to measure actual results compared to expected results. They help to answer "whether" a project is progressing toward its objective, rather than why/why not such progress is being made. They are usually expressed in quantifiable terms, and should be objective and measurable (numeric values, percentages, scores and indices). Quantitative indicators are preferred in most cases, although in certain circumstances qualitative indicators are appropriate.

Performance Measurement
A system for assessing the performance of development interventions against stated goals. It also refers to the collection, interpretation of, and reporting on data for performance indicators which measure how well programs or projects deliver outputs and contribute to the achievement of goals.

Performance Monitoring
Refers to the continuous process of collecting and analyzing data to measure the performance of a program, project, process or activity against expected results. A defined set of indicators is constructed to regularly track the key aspects of performance.
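A minimal sketch of the actual-versus-target comparison described under Performance Indicator and Performance Monitoring above; the indicator names, baselines and figures are invented for illustration.

# Hypothetical indicators: (name, baseline, target, actual-to-date)
indicators = [
    ("Farmers trained", 0, 500, 420),
    ("Demonstration trials established", 2, 12, 7),
]

for name, baseline, target, actual in indicators:
    # Progress measured as the share of the baseline-to-target distance covered.
    progress = (actual - baseline) / (target - baseline)
    print(f"{name}: {progress:.0%} of target")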

Performance Monitoring Framework (PMF)
A plan to systematically collect relevant data over the lifetime of an investment to assess and demonstrate progress made in achieving expected results. It documents the major elements of the monitoring system and ensures that performance information is collected on a regular basis. It contains information on expected results, indicators, targets, baseline data, data sources, data collection methods, frequency, and the responsibility for data collection. For each performance indicator that has been selected, identify:
Data Sources: Individuals, Organizations, or Publications from which data about your performance indicator will be obtained. Focus also on existing sources to maximize value from existing data {e.g. beneficiaries, partner organizations, ASARECA staff, consultants, local professionals, government documents, partner statistical reports, etc}.
Data Collection Methods: Represent HOW data about performance indicators is collected {e.g. survey, questionnaire, pre- and post-intervention survey, interviews, focus group, observing participants, analysis of records or documents, literature review, comparative study, tracking sheets, collection of anecdotal evidence, etc}.
Frequency: How often will the information about each performance indicator be collected? Some indicators may be looked at regularly as part of ongoing performance management, while others will only be collected periodically for baseline, mid-term, or final evaluations.
Responsibility: Who is responsible for collecting and validating the data? {e.g. beneficiaries, partner organizations, ASARECA staff, consultants, etc}.

Performance Monitoring Plan (PMP)
The PMP is a detailed plan for managing the collection of data in order to monitor performance. It identifies the indicators to be tracked; specifies the source, method of collection, and schedule of collection for each piece of datum required; and assigns responsibility for collection to a specific office, team, project, or individual. It contributes to the effectiveness of the performance monitoring system by assuring that comparable data will be collected on a regular and timely basis. It clearly spells out the desired results, whether attained or not attained, to advance organizational learning. It is mainly used for: planning to monitor the achievement of program implementation; collecting and analyzing performance information to track progress towards planned results and outcomes; using performance information to improve management decision making and resource allocation; and communicating results achieved.

Performance Monitoring System
Refers to an organized approach or process for systematically monitoring the performance of a program, project, process or activity toward its objectives over time. Performance monitoring systems at ASARECA consist of, inter alia: performance indicators, performance baselines, performance targets for all result areas, means for tracking critical assumptions, performance monitoring plans to assist in managing the processes of data collection, analysis, reporting and utilization, and the regular collection of actual results data.
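Returning to the PMP entry above: a minimal sketch of how one plan row – indicator, source, method, schedule and responsibility – could be captured and shared; all example values are hypothetical.

import csv, io

# One plan row per indicator: source, method, schedule, and responsibility.
pmp_rows = [
    {"indicator": "Farmers trained", "data_source": "Partner training records",
     "method": "Tracking sheets", "frequency": "Quarterly", "responsible": "M&E Unit"},
]

# Writing the plan out as CSV keeps it shareable with partners.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(pmp_rows[0]))
writer.writeheader()
writer.writerows(pmp_rows)
print(buf.getvalue())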

Planning
A broad description of the activities that would normally be carried out as part of project development, from start to finish, and the milestones that would generally be achieved along the way, such as the signing of sub-grant agreements or the approval of a specific policy/law by the government. The sequence of and relationship between main activities and milestones should also be described. The plan should also explain the different aspects that need to be addressed as part of project development (capacity building details, etc.) and illustrate the basic principles that are to be followed. It typically involves the existence of funds from the donor agency.

Portfolio Review
A required systematic analysis, by the M&E Units, of the progress of an Objective, Output, or Outcome, and of expectations regarding future results' achievements. It focuses on both operational and strategic issues and examines the robustness of the underlying development hypotheses and the impact of activities on results. It is intended to bring together various expertise and points of view to arrive at a conclusion as to whether the program is "on track" or whether new actions are needed to improve the chances of achieving results. At a minimum, a portfolio review must examine the following:
a) Progress towards achievement of Objectives, Outputs, or Outcomes.
b) Evidence that outputs of activities are adequately supporting the relevant outcomes and ultimately contributing to the achievement of the purpose and goals.
c) Adequacy of inputs for producing activity outputs and efficiency of processes leading to output, along with the related implications for performance towards outcomes and goals.
d) Status and timeliness of input mobilization efforts.
e) Status of critical assumptions and causal relationships defined in the results framework.
f) Status of related partner efforts that contribute to the achievement of results.
g) Pipeline levels and future research requirements.

Power
The power is the probability of detecting an impact if one has occurred. An impact evaluation has high power if there is a low risk of not detecting real program impacts, i.e. of committing a type II error. The power of a test is equal to 1 minus the probability of a type II error, ranging from 0 to 1. Popular levels of power are 0.8 and 0.9. High levels of power are more conservative and decrease the likelihood of type II error.

Power Calculations
Power calculations indicate the sample size required for an evaluation to detect a given minimum desired effect. The required sample size depends on parameters such as power (or the likelihood of type II error), significance level, variance, and the intra-cluster correlation of the outcome of interest. (A worked example appears after the Problem Tree entry below.)

Pre-Conditions
Conditions that have to be met before the project can commence.

Pre-Planning
The process of understanding the status, condition, trends and key issues affecting people, communities, ecosystems and institutions in a given geographic context at any level (local, national, regional, international).

Problem Analysis
A structured investigation of the negative aspects of a situation in order to establish cause-effect relationships.

Problem Tree
A diagrammatic representation of a negative situation, showing a cause-effect relationship. It is the visual result of a problem analysis.
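Returning to the Power Calculations entry above: a worked sketch using the standard normal approximation for a two-arm comparison of means. The effect size, standard deviation and parameter values below are illustrative assumptions, not ASARECA defaults.

from scipy.stats import norm

def sample_size_per_arm(mde, sd, alpha=0.05, power=0.8):
    """Sample size per arm to detect a minimum detectable effect (MDE)
    on a continuous outcome, two-sided test, equal-sized arms."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # z-score corresponding to the desired power
    return ((z_alpha + z_beta) ** 2) * 2 * sd ** 2 / mde ** 2

# Illustrative numbers: detect a 0.2 t/ha yield gain, outcome SD of 0.8 t/ha.
n = sample_size_per_arm(mde=0.2, sd=0.8)
print(f"Households needed per arm: {round(n)}")  # roughly 252 per arm
# For cluster samples, multiply n by the design effect 1 + (m - 1) * ICC,
# where m is the cluster size and ICC the intra-cluster correlation.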

Process-Based Evaluation
Process-based evaluations are aimed at understanding how a program or project works: its service delivery mechanisms, management practices, policy instruments, the internal dynamics of implementing organizations, and the linkages among these. It is an evaluation that tries to establish the level of quality or success of the processes of a program – for example, the clarity of the information campaign, the adequacy of the administrative processes, and the accessibility of the program benefits. It is helpful in obtaining early warnings of operational difficulties in newly implemented programs or projects, and can also be conducted at regular intervals to check that operation remains on track and follows established procedures. It seeks to answer the following key questions: What are the actual steps and activities involved in delivering a good or service? How close are they to agreed operation? Is the program efficient? There are numerous further questions that might be asked in a process-based evaluation, including: Is the program being implemented according to design? Are operational procedures appropriate to ensure the timely delivery of quality products or services? What is the level of compliance with the Operations Manual? Are there adequate resources (money, equipment, facilities, training, etc) to ensure the timely delivery of quality products or services? Are there adequate systems (human resources, financial, management information, etc) in place to support program operations? Are program clients receiving quality products and services? What is the general process that project beneficiaries go through with the products or projects? Are project beneficiaries satisfied with the processes and services? Are there any operational bottlenecks? Is the program or project reaching the intended population? Are program or project reach-out activities adequate to ensure the desired level of target population participation?

Program Evaluation
Evaluation of a set of interventions marshaled to attain specific global, regional, country, or sector development objectives. Note: a development program is a time-bound intervention involving multiple activities that may cut across sectors, themes and/or geographic areas.

Project
An intervention that consists of a set of planned and interrelated activities and tasks designed to achieve defined objectives within a given budget and a specified period of time.

Project Evaluation
Refers to a technique to review the current status of a project against plan and to provide practical, comprehensive and forward-looking recommendations for corrective action where necessary. In general terms, project evaluation should consider: 1) Project objectives in terms of cost, time and quality; 2) Management; 3) Organization; 4) Systems and Procedures; 5) Suitability of contracts; 6) Performance of consultants; 7) Work to-date in terms of cost, time and quality measured against plan.

Project Goal
The Project Goal (Purpose, long-term development objective) refers to what the project is expected to achieve in terms of significant improvements in the lives of the target population beyond the life of the project. Examples: demonstration of community solidarity as expressed through engagement in a number of defined activities; increased household income and reduced malnutrition in children, leading to improved quality of life as demonstrated by improved nutrition of all members of the household. Projects in International Programs are to address specific needs identified by communities and families.

Project or Program Objective
The intended physical, financial, institutional, social, environmental, or other development results to which a project or program is expected to contribute.

Project Partner
The organization in the project country with which the Heifer country program collaborates to achieve mutually agreed upon objectives. The organization works closely with the beneficiary group/community and may handle financial and operational aspects of the project. Partners may include host country governments, local and international NGOs, universities, professional and business associations, private businesses, etc.

Project Purpose
It is the central objective of the project and represents what the project is expected to achieve by the end of the project with the resources available. It relates only to the beneficiaries, a specific area and a timeframe. By achieving its purpose the project contributes to the overall objective.

Project Strategy
An overall framework of what a project will achieve and how it will be implemented.

Propensity Score Matching (PSM)
PSM is used to measure a program's effect on project participants relative to non-participants with similar characteristics. To use this technique, evaluators must first collect baseline data. They must then identify observable characteristics that are likely to link to the evaluation question {for example, "Do farmers living near the experimental plots have higher yields from their farms than those further away?"}. The observable characteristics may include gender, age, marital status, distance from home to experimental sites, etc. Once the variables are selected, the treatment group and the comparison group can be constructed by matching each person in the treatment group with the one in the comparison group that is most similar on the identified observable characteristics. The result is pairs of individuals or households that are as similar to one another as possible, except on the treatment variable.

Proxy Indicator
An appropriate indicator that is used to represent a less easily measurable one. For example, the condition of the house is a proxy indicator for income.

Purpose
Refers to what the project is expected to achieve in terms of its development outcome. It also refers to the publicly stated objectives of the development program or project.

Qualitative Data
Qualitative data deal with descriptions. They are data that can be observed, or self-reported, but not necessarily precisely measured, and are not usually summarized in numerical form. They normally describe people's knowledge, attitudes or behaviors. Examples of qualitative data include: the leadership role of women in a community; minutes from community meetings; general notes from observations; etc. Most qualitative studies rely on descriptive rather than numerical or statistical analysis.

Qualitative Methods
They belong to the social science tradition and are based on the observation of people in their own territory, and interaction with them in their own language, on their own terms. Qualitative methods emphasize understanding reality as the persons being studied construe it.

Quality Assurance
It encompasses any activity that is concerned with assessing and improving the merit or the worth of a development intervention, or its compliance with given standards. Examples include: appraisal, RBM, reviews during implementation, evaluations, etc. It may also refer to the assessment of the quality of a portfolio and its development effectiveness.
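Returning to the Propensity Score Matching entry above: a minimal sketch of the steps on synthetic data, using the scikit-learn library. The covariates, outcome and built-in effect size are invented for illustration; this is a sketch, not a full evaluation workflow.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Illustrative synthetic baseline data: columns are observable characteristics.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # e.g. age, distance to plots, farm size
treated = rng.integers(0, 2, size=500).astype(bool)
yields_ = rng.normal(loc=2.0 + 0.3 * treated, scale=0.5)  # outcome with a built-in 0.3 effect

# 1) Estimate propensity scores: probability of treatment given the covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Match each treated unit to the nearest comparison unit on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# 3) The average treated-minus-matched-control difference estimates the effect.
att = (yields_[treated] - yields_[~treated][idx.ravel()]).mean()
print(f"Estimated effect on participants (ATT): {att:.2f}")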

Quantitative Data
Quantitative data are data relating to, or concerned with, quantity and expressed in numbers or quantities. They can be precisely measured, or are measurable, by number, length, area, volume, weight, cost, etc. Examples include data on age, weight, and income.

Quasi-Experimental Design
Impact evaluation designs which create a control group using statistical procedures. The intention is to ensure that the characteristics of the treatment and control groups are identical in all respects other than the intervention, as would be the case in an experimental design.

Rapid Appraisal
Methods first developed in agriculture and rural development (where they are known as Rapid Rural Appraisal, RRA) to provide rapid and cost-effective means of assessing the conditions of a community or area at the time a project is planned. Since then, they have been extended to provide a rapid method of impact assessment and are now being used in other sectors such as health and nutrition. They are based on qualitative methods such as observation, semi-structured interviews, etc. Currently one speaks of Participatory Rapid Appraisal (PRA), which puts more focus on process and on ownership by the participants.

Reach
Reach refers to who is influenced (by the research) and who acts because of this influence. Reach is closely related to the concept of equity. Participatory research is assumed to improve reach to disadvantaged groups and communities by including them in defining research priorities and capacity-building activities, rather than treating them as passive objects intended to benefit from the research results, and by mobilizing them to act in their own interests.

Recommendations
These refer to proposals aimed at enhancing the effectiveness, quality, or efficiency of a development intervention, at redesigning the objectives, and/or at the reallocation of resources. Recommendations should be linked to conclusions.

Regression
In statistics, regression analysis includes any techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. In impact evaluation, regression analysis helps us understand how the typical value of the outcome indicator (Y, the dependent variable) changes when the assignment to treatment or comparison group (P, the independent variable) is varied, while the characteristics of the beneficiaries (other independent variables) are held fixed.

Regression Discontinuity Design
Regression Discontinuity Design (RDD) is a non-experimental evaluation method. It is suited to programs that use a continuous index to rank potential beneficiaries and that have a threshold along the index determining whether potential beneficiaries receive the program or not. The cutoff threshold for program eligibility provides a dividing point between the treatment and comparison groups.

Reliability
Refers to the consistency or dependability of data and evaluation judgments, with reference to the quality of the instruments, procedures and analyses used to collect and interpret evaluation data. Note: evaluation information is reliable when repeated observations using similar instruments under similar conditions produce similar results.
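Returning to the Regression entry above: a minimal sketch in which the estimated impact is the coefficient on the treatment dummy P, with a beneficiary characteristic held fixed in the model. The data are simulated and the variable names are illustrative.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
P = rng.integers(0, 2, size=400)            # treatment assignment dummy (independent variable)
age = rng.normal(40, 10, size=400)          # beneficiary characteristic held fixed in the model
Y = 1.5 + 0.4 * P + 0.02 * age + rng.normal(0, 0.5, size=400)  # outcome with true effect 0.4

X = sm.add_constant(np.column_stack([P, age]))
model = sm.OLS(Y, X).fit()
print(model.params[1])  # estimated impact: the coefficient on P, close to 0.4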

Reporting
Refers to the systematic and timely provision of essential information at periodic intervals. Progress reports are essential mechanisms for project implementation, informing partners and donors of the progress, difficulties and problems encountered, and the lessons learned during the implementation of project activities. A report is also an official record of a given period in the life of a project, presenting a summary of project implementation and performance. Reports are designed to: enable the assessment of progress in the implementation process and the achievement of results; focus activities and therefore improve subsequent workplans; and facilitate the replenishment of funds by donors. ASARECA projects not only report on performance, but also show how they are contributing to the key results of global programs – CAADP, the MDGs, etc.

Results
Refer to the output, outcome or impact (intended or unintended, positive and/or negative) of a development intervention. The term also refers to the changes that occur as an effect of a development intervention – thus implying that a change of behavior by individuals, groups of people, organizations, communities, government bodies or society has taken place.

Results Chain
Refers to the causal sequence for a development intervention that stipulates the necessary sequence to achieve desired objectives: beginning with inputs, moving through activities and outputs, and culminating in outcomes, impacts, and feedback. A results chain is an iterative process. In some agencies, the planning starts with a clear view of the project purpose and outcomes, planning backwards to the inputs, and then implementing the project from the inputs to the outcomes.

Results Framework
The results framework represents the development hypothesis, including those results necessary to achieve intended results and their causal relationships and underlying assumptions. It is typically presented both in narrative form and as a graphical representation. The framework also establishes an organizing basis for measuring, analyzing and reporting ASARECA results.

Results-Based Management (RBM)
A comprehensive, lifecycle approach to management that integrates strategy, people, resources, processes and measurements to improve decision-making and drive change. It is a shift from focusing on the inputs and activities (the resources and procedures) to focusing on the outputs, impact and the need for sustainable benefits (the results of what you do). Broadly, RBM involves:
a) Identifying clear and measurable results, aided by logical frameworks and based on appropriate problem analyses.
b) Selecting indicators that will be used to measure progress towards results.
c) Setting explicit targets for each indicator.
d) Developing performance monitoring systems to regularly collect data on actual results.
e) Reviewing, analyzing and reporting actual results vis-à-vis the targets.
f) Integrating evaluations to provide complementary performance information not readily available from performance monitoring systems.
g) Using performance information for internal management accountability, learning and decision-making processes, and also for external performance reporting to stakeholders and partners.

Results Statement
It outlines what a policy, program or investment is expected to achieve or contribute to. It describes the change stemming from ASARECA's contribution to a development activity in cooperation with others. Examples: Increased generation and uptake of demand-driven, gender-responsive agricultural technologies and innovations; Enhanced utilization of agricultural research and development innovations in ECA; Facilitated policy options for enhancing the performance of the agricultural sector in ECA.

Review
An assessment of the performance of an intervention, periodically or on an ad hoc basis. Note: frequently, "evaluation" is used for a more comprehensive and/or more in-depth assessment than "review". Reviews tend to emphasize operational aspects. Sometimes the terms "review" and "evaluation" are used as synonyms.

Risk Analysis
An analysis or assessment of factors (called assumptions in the logframe) that affect, or are likely to affect, the successful achievement of an intervention's objectives. It is a detailed examination of the potential unwanted and negative consequences to human life, health, property, or the environment posed by development interventions; a systematic process to provide information regarding such undesirable consequences; and the process of quantifying the probabilities and expected impacts of identified risks.

Risk Register
A list of the most important risks, the results of their analysis, and a summary of additional risk response strategies. This register should be continuously updated and reviewed throughout the project life. Useful risk terminology:
Risk refers to the effect of uncertainty on results (ISO 31000).
Impact is the effect of the risk on the achievement of results.
Likelihood is the perceived probability of occurrence of an event or circumstance.
Risk level is Impact multiplied by Likelihood.
Risk response is the plan to manage a risk (by avoiding, reducing, sharing, transferring or accepting it).
Risk Owner is the person who owns the process of coordinating, mitigating and gathering information about the specific risk, as opposed to the person who enacts the controls. Stated otherwise, it is the person or entity with the accountability and authority to resolve a risk incident (ISO 31000).
Operational Risk is the potential impact on ASARECA's ability to operate effectively or efficiently.
Financial Risk is the potential impact on the ability to properly protect the funds.
Development Risk is the potential impact on the ability to achieve expected development results.
Reputation Risk is the potential impact arising from a reduction in ASARECA's reputation and in stakeholder confidence in ASARECA's ability to fulfill its mandate.
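A minimal sketch of the "Risk level is Impact multiplied by Likelihood" scoring above, assuming a hypothetical 1–5 scale; the risks and scores are invented for demonstration.

# Hypothetical risk register entries: (risk, impact 1-5, likelihood 1-5)
register = [
    ("Delayed sub-grant disbursement", 4, 3),
    ("Key partner staff turnover", 3, 4),
    ("Drought in trial sites", 5, 2),
]

for risk, impact, likelihood in register:
    level = impact * likelihood  # Risk level is Impact multiplied by Likelihood
    print(f"{risk}: level {level}")

# Sorting by level helps a risk owner prioritize responses during reviews.
register.sort(key=lambda r: r[1] * r[2], reverse=True)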

Sample
A number of people, households, communities, or other units that have been selected to estimate the characteristics of the population from which the units were drawn. Samples are generally used when carrying out impact evaluations. Sampling methods include:
Cluster sample: Groups are selected as blocks (a sector or other definable areas). The use of cluster samples reduces the time and cost of data collection, although it might lead to less precise statistical estimates.
Purposive sample: Respondents are selected according to given characteristics that are particularly relevant for the study. It is a very economical way to obtain information, but caution has to be used in making generalizations.
Random sample: Each unit has an equal chance to be selected. Generalizations can therefore be made directly from the sample to the total population without introducing a bias.
Stratified sample: The primary units (households, individuals, organizations, etc) are classified into groups (strata) according to characteristics, and a sample is extracted from each stratum. This reduces the number of interviews required to achieve a given level of statistical precision in the estimation of population attributes.

Sampling Frame
The most comprehensive list of units from the population of interest (universe) that can be obtained. Differences between the sampling frame and the population of interest create a coverage (sampling) bias. In the presence of coverage bias, results from the sample do not have external validity for the entire population of interest.

Selection Bias
Selection bias occurs when the reasons for which an individual participates in a program are correlated with outcomes. This bias commonly occurs when the comparison group is ineligible, or self-selects out of treatment.

Significance Level
The significance level is usually denoted by the Greek symbol α. Popular levels of significance are 5% (0.05), 1% (0.01), and 0.1% (0.001). If a test of significance gives a p-value lower than the α level, the null hypothesis is rejected. Such results are informally referred to as "statistically significant." The lower the significance level, the stronger the evidence required. Choosing the level of significance is an arbitrary task, but for many applications a level of 5% is chosen for no better reason than that it is convenient.

Situation Definition or Needs Assessment or Feasibility Study
A process or set of processes of gathering information, analyzing it, and then making a judgment, on the basis of that information, about the current scenario of the community or theme that requires participation/intervention from ASARECA. It is a good snapshot of the current situation, necessary to design a project. The conclusions of this analysis are then integrated in the project design.

Sources of Verification
They represent the third column of the Logframe matrix and indicate where, and in what form, information on the achievement of the overall objective, the project purpose and the results can be found.

Spillover Effects
Also known as contamination of the comparison group. A spillover effect occurs when the comparison group is affected by the treatment administered to the treatment group, even though the treatment is not administered directly to the comparison group. If the spillover effect on the comparison group is positive (i.e. they benefit), then the straight difference between outcomes in the treatment and comparison groups will yield an underestimation of the program impact. By contrast, if the spillover effect on the comparison group is negative (i.e. they suffer because of the project), then it will yield an overestimation of the program impact.

Stakeholders
Agencies, organizations, groups or individuals who have a direct or indirect stake or commitment in the programme or project design, implementation, benefits, or in its evaluation.

Stakeholders Analysis
Analysis that involves: the identification of all stakeholder groups likely to be affected by the proposed intervention; and the identification and analysis of their interests, problems, potentials, etc.
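Returning to the Sample entry above: a minimal sketch contrasting a simple random sample with a proportionally allocated stratified sample, using Python's standard library. The sampling frame and strata are hypothetical.

import random

random.seed(42)
# Hypothetical sampling frame: 600 households tagged by agro-ecological zone (the strata).
frame = [{"id": i, "zone": random.choice(["highland", "midland", "lowland"])} for i in range(600)]

# Simple random sample: every household has an equal chance of selection.
srs = random.sample(frame, k=60)

# Stratified sample: classify households into strata, then draw within each stratum.
stratified = []
for zone in ["highland", "midland", "lowland"]:
    stratum = [h for h in frame if h["zone"] == zone]
    k = max(1, round(60 * len(stratum) / len(frame)))  # allocate proportionally to stratum size
    stratified.extend(random.sample(stratum, k))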

Statistical Power
The power of a statistical test is the probability that the test will reject the null hypothesis when the alternative hypothesis is true (i.e. that it will not make a type II error). The probability of a type II error is referred to as the false negative rate (β). Therefore, power is equal to 1 − β. As power increases, the chances of a type II error decrease.

Strategic Outcome
A long-term and enduring benefit to ASARECA and its partners that stems from its mandate, vision and efforts. It represents the difference the organization wants to make for its stakeholders, and should be a clear, measurable outcome that is within its sphere of influence.

Summative Evaluation
A study conducted at the end of an intervention (or a phase of that intervention) to determine the extent to which anticipated outcomes were produced. Summative evaluation is intended to provide information about the worth of the program. Related term: impact evaluation. Summative evaluation can be subdivided:
outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes;
impact evaluation is broader and assesses the overall or net effects – intended or unintended – of the program or technology as a whole;
cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values;
secondary analysis reexamines existing data to address new questions or to use methods not previously employed;
meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgment on an evaluation question.

SWOT Analysis
Analysis of an organization's Strengths and Weaknesses, and of the Opportunities and Threats that it faces. It is a tool that can be used during all phases of the project cycle.

Target
A measurable performance or success level that an organization, program or initiative plans to achieve within a specified time period. This sets the expectations for performance over a fixed period of time. Targets can be either quantitative or qualitative and are appropriate for both outputs and outcomes (e.g. 70% of targeted farmers will use identified technologies in 2013). Each organization should therefore establish realistic targets for each performance indicator in relation to the baseline data already identified. Targets belong only in the PMP and should not appear in outcome statements themselves: when targets are included in a result statement, they limit the ability to report against the achievement of that result by restricting success to an overly narrow window (i.e. the target itself). Reporting, in this context, becomes an exercise of justifying why the target was not met or was exceeded, instead of comparing expected outcomes to actual outcomes and discussing variance.

Target Group
The specific group for whom the intervention is planned and undertaken.

Technology (Agricultural)
Agricultural technology refers to all aspects of technology in agricultural production, processing, distribution, storage and exchange.
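Returning to the Statistical Power and Significance Level entries above: a minimal simulation sketch tying them together. The rejection rate under a real effect approximates power, and under no effect it approximates the type I error rate. Sample sizes and effect size are illustrative assumptions.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

def rejection_rate(effect, n=100, sims=2000, alpha=0.05):
    """Share of simulated trials where a two-sample t-test rejects the null."""
    rejections = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect, 1.0, n)
        if ttest_ind(treated, control).pvalue < alpha:
            rejections += 1
    return rejections / sims

print(rejection_rate(effect=0.4))  # power: roughly 0.8 for this effect size and n
print(rejection_rate(effect=0.0))  # type I error rate: roughly 0.05 (the significance level)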

Terms of Reference {Scope of Work or Evaluation Mandate}
This refers to a structured document explaining the purpose and guiding principles of an evaluation exercise. It specifies the methods to be used, the resources and time allocated, the reporting requirements, and the standard against which performance is to be assessed or analyses are to be conducted. It should answer questions such as: Why is the exercise necessary? Why now? What will be covered and what will not be covered? How will the findings and learning be used? Who should be involved? What questions should be asked? What methodologies should be used for data collection and analysis? How much time is available for planning, implementing and reporting? What resources are needed/available? How should the findings and recommendations be presented and disseminated? How will the results be used? How will learning from the evaluation be implemented? It is always a good idea to include the following with terms of reference:
Conceptual framework of the project, or the project logical framework
Budget details
Map of project sites
List of projects/sites to be visited
Evaluation mission schedule
List of people to be interviewed
Project statistics and reports already available

Thematic Evaluation
Evaluation of a selection of development interventions, all of which address a specific development priority that cuts across countries, regions, and sectors.

Time Series Analysis
Quasi-experimental designs that rely on relatively long series of repeated measurements of the outcome/output variables, taken before, during and after an intervention, in order to reach conclusions about the effect of the intervention.

Triangulation
The use of three or more theories, sources or types of information, or types of analysis to verify and substantiate an assessment. Note: by combining multiple data sources, methods, analyses or theories, evaluators seek to overcome the bias that comes from single-informant, single-method, single-observer or single-theory studies.

Type I Error
The error committed when rejecting a null hypothesis even though the null hypothesis actually holds. In the context of an impact evaluation, a type I error is made when an evaluation concludes that a program has had an impact (i.e. the null hypothesis of "no impact" is rejected) even though in reality the program had no impact (i.e. the null hypothesis holds). The significance level determines the probability of committing a type I error.

Type II Error
The error committed when accepting (not rejecting) the null hypothesis even though the null hypothesis does not hold. In the context of an impact evaluation, a type II error is made when concluding that a program has no impact (i.e. the null hypothesis of "no impact" is not rejected) even though the program did have an impact (i.e. the null hypothesis does not hold). The probability of committing a type II error is 1 minus the power level.

Unit of Analysis
The level at which analysis is done. For example: the level of individual men or individual women; the community or regional level; or the level of individual agricultural fields (household fields, or a village's communal land).

Validity
The extent to which the data collection strategies and instruments measure what they purport to measure.

Value Chain (Agricultural)
The full range of activities that are required to bring a product (goods and services) through the different phases of production, delivery to final consumers, and final disposal after use. It includes design, production, marketing, distribution, and support to get the product to the final consumer, including both input supply and output marketing systems, and it incorporates a range of activities within each phase. The activities that comprise a value chain can be contained within a single firm or spread across many firms.

Variable
In statistical terminology, a variable is a symbol that stands for a value that may vary.

Vertical Logic
It designates the causal relationships between each level of a narrative summary (inputs–activities, activities–results, results–purpose, purpose–overall objective) and the critical assumptions affecting these linkages.

Work Plan
Contains a detailed list of activities to be performed to reach the project objectives, with clearly defined timeframes, responsibilities and resources. Financial allocations can also be made against the activities defined in the work plans.

References
1. Spielman, D.J. 2008. How Innovative Is Your Agriculture? Using Innovation Indicators and Benchmarks to Strengthen National Agricultural Innovation Systems. IFPRI Discussion Paper 00732.
2. CIDA. RBM Tools at CIDA: How-to Guide.
3. USAID. ADS 203.3.3.
4. DAC Network on Development Evaluation. 2010. Glossary of Key Terms in Evaluation and Results Based Management.
5. Dal Poz, Mario R., et al. (eds.). 2009. Handbook on Monitoring and Evaluation of Human Resources for Health: With Special Applications for Low- and Middle-Income Countries.
6. CIDA. 2008. Results-Based Management Policy Statement.
