
April 2007 UI Response to UNDP Comments


Prepared for
UNDP, Bratislava Regional Centre
Grosslingova 35, 811 09 Bratislava, Slovak Republic
Contract No. PS-2006/10

Prepared by
Harry Hatry, Katharine Mark, Ritu Nayyar-Stone, Sonia Ignatova

THE URBAN INSTITUTE
2100 M Street, NW, Washington, DC 20037
+1 (202) 833-7200

March 2007
UI Project 08013-000

Table of Contents

INTRODUCTION AND SCOPE
    The Millennium Development Goals and their Role in this Guide
    Focus of the Guide
    Remainder of Guide

STEP 1: ORGANIZE THE EFFORT AND DETERMINE THE SCOPE
    What is Performance Management? How Can It Help Local Governments?
    Focus on Service Outcomes
    Limitations
    Select the Scope and Coverage of the Performance Measurement Process

STEP 2: IDENTIFY OUTCOMES AND OUTCOME INDICATORS
    Identify the service/program objectives and customers
    Select the important outcomes for your service/program
    Categorize performance indicators
    Select outcome indicator breakouts (disaggregation) of each outcome indicator by key characteristics

STEP 3: SELECT THE DATA COLLECTION SOURCES AND PROCEDURES
    Use Agency Records
    Survey Citizens (including Businesses)
    Data Quality Control
    The Cost of Performance Measurement

STEP 4: ANALYZE PERFORMANCE DATA
    The Importance of Analyzing Performance Data
    How Performance Data Can Help
    Do Some Preliminary Work
    Examine the Aggregate Outcome Data
    Examine Breakout Data
    Examine Findings Across Indicators
    Make Sense of the Numbers

STEP 5: REPORT PERFORMANCE RESULTS
    The Importance of Good Reporting
    Internal Reporting
    External Reporting
    Other Information That Should Be Included When Reporting Performance Data in Both Internal and External Reports
    What If the Performance News Is Bad?
    Dissemination of Performance Reports

STEP 6: SET MUNICIPAL TARGETS
    The Importance of Setting Targets
    Set Targets
    Connect Local Targets to National Targets
    Relation of Municipality Targets to Millennium Development Goals (MDGs) and Strategic Plans
    Concerns About Targets and Target Setting
    Final Comment

STEP 7: USE THE PERFORMANCE INFORMATION
    The Importance of Using the Information
    Undertake Service Improvement Action Plans
    Analyze Options/Establish Priorities
    Hold "How Are We Doing?" Sessions
    Performance Budgeting
    Capital Budgeting
    Strategic Planning
    Motivate Your Employees
    Performance Contracting
    Contribute to National and Regional Information Sources
    Final Comment on Using Performance Information

STEP 8: BUILD MUNICIPAL CAPACITY
    Decide what training is required, for whom, and how much

FINAL WORDS

APPENDIX A: Set of sample service outcome/quality indicators for a variety of municipal services
APPENDIX B: The Millennium Development Goals and likely data collection sources
APPENDIX C: Sample customer questionnaire for municipal services, user survey for water service, patient feedback form
APPENDIX D: Sample procedures for rating certain quality elements of municipal services using trained observer ratings: street cleanliness
APPENDIX E: Health status, mortality, country data for MDG 7, Target 10

PREFACE

Preparation of this guide has been sponsored by UNDP as part of an effort to support the achievement of the Millennium Development Goals. The Millennium Development Goals (MDGs) were derived from the United Nations Millennium Declaration, adopted by 189 countries in 2000. The MDGs focus on the most critical aspects of poverty and on the factors that contribute to both income and human poverty. They were reconfirmed in 2005 as the Global Development Agenda.

The MDGs are a set of global goals whose achievement depends on the implementation by countries of national MDG agendas aimed at achieving nationally adjusted global goals. The MDGs have played a special role in developing and transition countries, providing one set of goals that can help these countries, and their cities as well, set concrete targets and concentrate resources on meeting them. A large number of other examples and resources are now available around the world, with more and more national and city governments choosing to measure their performance and improve the services they provide to their citizens.

UNDP advocates the adoption by both national and local governments of an MDG agenda (see the Millennium Declaration, UN, 2000). While this Guide was first targeted at those countries, and specifically the municipalities of those countries, that have decided to adopt an MDG agenda (for further discussion of the relationship between national and local MDG agendas, see Capacity Development for Localizing the MDGs, UNDP 2006), the performance management methodology in this Guide can be used by all municipalities, whether or not they have adopted the achievement of the MDGs as their goals.

UNDP defines localizing the MDGs as the process of designing (or adjusting) and implementing local development strategies to achieve the MDGs (or, more specifically, to achieve locally adapted MDG targets). This implies either adapting and sequencing the indicators and targets of existing local development strategies as needed, or elaborating an MDG-based development strategy that reflects local priorities and realities. For this approach to be successful, it should be locally owned and participatory. UNDP and other UN agencies have produced toolkits and guides to help municipalities localize the MDGs and thus address regional disparities and marginalization at the sub-national level (link to UNDP and UN-Habitat toolkits).

There is a compelling logic to believe that unless the types of goals included in the MDGs are brought to the local level (localized), national and global achievements will be skewed. National targets and indicators represent national averages. Achieving them requires targeted interventions in pockets of poverty, which are often very context specific. To improve the lives of people, goals such as those in the MDGs need to be adapted to the current level of development, translated into local realities, and embedded into local planning processes.

A Special Note to Municipalities: Integrating the MDGs into the municipal performance framework can bring several advantages to your city and to your country, such as:
- Making sure that the performance of services related to key development problems, such as poverty, health, education, and the environment, is monitored and that weak performance is identified and addressed;
- Linking your local development agenda to the national MDG agenda (if it exists), thus ensuring that the work of your city contributes to nationally set goals (in the setting of which you should ideally have participated);
- Ensuring that the work of your city contributes to the global development agenda and to a better and safer world, which also benefits your city (poverty, with its related crime and disease, and environmental problems have no frontiers nowadays);
- Increasing the chances that your city will benefit from financial support from the central government or the donor community.

This Guide is intended to support you in this effort.


This guide is aimed at helping all municipalities accomplish their goals by monitoring their own performance and using the information they get to improve the lives of all their citizens.

Why establish a performance monitoring system to improve results? Many countries of Europe, the CIS region, and elsewhere in the world have initiated political and administrative decentralization processes. Decentralization means transparency and accountability to taxpayers, as well as local governments that strive to continually improve the services they provide to their citizens.

Local authorities play a vital role in improving the well-being of their citizens. They provide the most basic everyday services, such as solid waste collection, road maintenance, and access to water, and work in many other ways to help their citizens out of poverty and improve their quality of life. Often faced with very limited resources, poor quality infrastructure, and historically weak trust and communication between citizens and local government, local government officials and staff in Eastern Europe and elsewhere frequently feel especially handicapped in trying to implement improved services that meet citizen needs. Integrating a sound performance measurement process is only a first step, but an important one.
Setting up a system to monitor the performance of municipal programs and policies will enable the municipality and its agencies to:
- Establish strategic plans, such as City Development Strategies
- Regularly monitor progress in meeting strategic plan and annual performance targets
- Use performance budgeting as a means to link resources with results
- Identify weak performing services and programs so that the necessary actions can be taken to improve performance
- Allocate their own resources (not just city budget funds, but also city staff and equipment) in the most effective way
- Help motivate public employees to continually focus on improving the quality of their services to citizens
- Identify best practices in order to learn from high performing entities
- Compare performance across localities, regions, and countries to help identify achievable targets for the municipality's own performance and identify areas that need additional strengthening or resources

As suggested above, and as will be made clearer in this Guide, establishing a performance monitoring process can have multiple benefits for the work of a municipality, ranging from strengthening strategic planning, learning which service approaches work well and which do not, improving budgeting, and justifying the allocation of funds for initiatives to improve service delivery, to encouraging the reporting of results to citizens. The Guide suggests steps a municipality can take both to improve collection of information on performance (performance measurement) and to use that information to help get better results (performance management). The use of performance indicators and targets to improve conditions for citizens has increased over the last decade, as local and national governments around the world have become increasingly aware of the value of results-based decision-making.


The MDGs provide a clear framework for national and local development efforts, taking a holistic, multidimensional approach to poverty reduction and human development. They link the global, national, and local levels through the same set of goals and provide a target-based, measurable framework for accounting for national and local development results. As already indicated, the MDGs focus on critical aspects of poverty and on the factors that contribute to poverty. These include essential areas such as health, education, access to drinking water, and adequate shelter. The eight MDGs are listed below:

Goal 1: Eradicate extreme poverty and hunger
Goal 2: Achieve universal primary education
Goal 3: Promote gender equality and empower women
Goal 4: Reduce child mortality
Goal 5: Improve maternal health
Goal 6: Combat HIV/AIDS, malaria and other diseases
Goal 7: Ensure environmental sustainability
Goal 8: Develop a global partnership for development

The Goals are accompanied by indicators and targets that are more specific, to allow countries to focus on particular areas that are important to monitor. Governments and municipalities that have chosen to adopt an MDG agenda can adapt the global targets and indicators to national and local circumstances. They should also include additional outcomes and indicators important to the specific local conditions. All the MDGs, with associated targets and indicators, are listed in Appendix B. We have marked the indicators that might be of particular interest to local governments as part of the indicators they choose to monitor. Some examples of MDGs and the indicators used to monitor their progress are:

Goal 2: Achieve universal primary education
    Indicator 7: Proportion of pupils starting grade 1 who reach grade 5
Goal 5: Improve maternal health
    Indicator 17: Proportion of births attended by skilled health personnel
Goal 7: Ensure environmental sustainability
    Indicator 29: Proportion of population with sustainable access to an improved water source
    Indicator 30: Proportion of people with access to improved sanitation

Because of the international interest in the MDGs, they have helped focus national attention on the importance of identifying priority outcomes and using performance measurement tools to reach those outcomes. Most international donors, including the UN system as a whole, naturally support the MDGs and are aware of the importance, more broadly, of monitoring performance in key areas of public service. The adoption of an MDG agenda, together with initiatives related to improving public service delivery, can act as an effective advocacy tool and help mobilize resources for local development. As cities begin to establish their own performance measurement systems, it will be useful to have allies at the national and international level who may be able to provide resources and data to strengthen the local efforts.

This Guide will refer to the MDGs throughout, identifying associated resources for local governments and suggesting opportunities for using some aspects of the MDGs.
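Indicators like these reduce to simple proportions computed from counts in agency records. As a minimal illustration, using entirely hypothetical data and field names, the sketch below computes Indicator 7 both city-wide and broken out by district (the kind of disaggregation, or "breakout," discussed under Step 2):

```python
# Hypothetical cohort records for MDG Indicator 7: proportion of pupils
# starting grade 1 who reach grade 5. All figures and names are illustrative.
pupils = [
    {"district": "North", "started_grade_1": 120, "reached_grade_5": 96},
    {"district": "South", "started_grade_1": 80,  "reached_grade_5": 52},
]

def proportion(started, reached):
    """Indicator value as a percentage of the starting cohort."""
    return 100.0 * reached / started

# City-wide (aggregate) value of the indicator.
total_started = sum(p["started_grade_1"] for p in pupils)
total_reached = sum(p["reached_grade_5"] for p in pupils)
print(f"City-wide: {proportion(total_started, total_reached):.1f}%")  # 74.0%

# Breakout (disaggregation) by district reveals where performance lags.
for p in pupils:
    value = proportion(p["started_grade_1"], p["reached_grade_5"])
    print(f'{p["district"]}: {value:.1f}%')  # North: 80.0%, South: 65.0%
```

The aggregate figure alone (74 percent) hides the gap between the two districts; the breakout shows that the South lags and may need targeted attention, which is exactly why Step 2 recommends selecting breakouts for each outcome indicator.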

This guide shows, first, how municipalities can use performance measurement and performance management to improve their services and their responsiveness to citizens; second, how they might use outcome indicators, including MDG indicators, as part of their performance monitoring; third, how they can set local targets for each of their indicators and reach the desired outcomes; and, finally, how they can also contribute to the efforts of the country itself to set and meet country-wide targets to mitigate poverty and improve the quality of life at the national level.


The material in this guide focuses on Governing for Results. Most suggestions contained here are intended to encourage municipalities and their agencies to seek and use information on the results (the benefits) that their services and programs are achieving for their citizens. This means explicitly considering the specific likely effects on citizens when making policy and program decisions. Achieving results at as low a financial cost as possible (that is, being efficient) is another important area of municipal performance. However, this is a subject for another guide.

This guide focuses on the process of regularly monitoring the outcomes of a municipality's services. By regularly, we mean at least annually for purposes such as budgeting, with more frequent monitoring of performance by individual agency managers, such as quarterly or monthly. The focus is on developing a practical process that municipalities can adapt to their own situation for regularly, and reliably, tracking their own progress on outcomes of importance to their citizens. The Guide does not discuss more in-depth evaluation studies, in which, if resources are available, the government can sponsor a more in-depth look into how well a particular program or service is performing and why. Such in-depth studies can on occasion be very useful to municipalities. However, the regular monitoring of outcomes discussed here can often provide considerable information for such studies.

This guide is complementary to two guides that have been produced by UN agencies: (1) Toolkit for Localizing the MDGs (UNDP); and (2) Localizing the MDGs: A Guide for Municipalities and Local Partners (UN-Habitat). The two publications deal with strategic planning that integrates and localizes the MDGs. As will become clear from this guide, performance management can be an element of an MDG-based strategic planning and implementation process, or of any strategic planning process, and a tool for monitoring the implementation of strategic plans. Readers of this guide are therefore encouraged to also consult at least one of the two documents mentioned above.

This Guide will help you develop high quality performance management systems or improve the ones already in place. The guide is organized in accordance with the basic steps for instituting performance management in your local government, as listed below.

Step 1. Organize the Effort and Determine the Scope
Step 2. Identify Outcomes and Outcome Indicators
Step 3. Select the Data Collection Sources and Procedures
Step 4. Analyze the Performance Data
Step 5. Report Performance Results

Step 6. Set Municipality Targets for Each Performance Indicator
Step 7. Use the Performance Information to Improve Services and Establish Accountability
Step 8. Build Municipal Capacity for Governing for Results

This guide takes the reader through each of these steps, with each chapter corresponding to one step. The flow chart below depicts the usual order of these steps. It is important to note, however, that in many cases these steps are iterative. For example, after setting targets (Step 6), data will usually again be collected (Step 3) and analyzed (Step 4) in order to monitor progress, and then reported (Step 5), perhaps leading to the establishment of new targets (Step 6).

STEP 1: Select Services → STEP 2: Choose Outcomes & Identify Indicators → STEP 3: Data Collection → STEP 4: Analyze → STEP 5: Report → STEP 6: Set Targets → STEP 7: Use → STEP 8: Build Municipal Capacity

A number of Appendices are provided. These include:

Appendix A: A candidate set of outcome indicators for a number of municipal services. Use these to give you ideas for the kinds of indicators you might want to select in your municipality.
Appendix B: Millennium Development Goals with associated targets and indicators, and likely data collection sources for indicators applicable to municipalities.
Appendix C: Examples of a sample customer questionnaire for municipal services, an illustrative user survey for a water service, and a patient feedback form from a hospital in India.
Appendix D: Procedures for rating certain quality elements of municipal services using trained observer ratings.
Appendix E: Data for a number of performance indicators to illustrate data that could help a municipality set its own targets for those indicators.

STEP 1: ORGANIZE THE EFFORT AND DETERMINE THE SCOPE

What is Performance Management? How Can It Help Local Governments?
Performance management, sometimes referred to as governing for results, is a system of regularly measuring the results (outcomes) of public sector programs, organizations, or individuals, and using this information to increase efficiency in service delivery. Public officials need regular feedback on the effectiveness of their services in order to make improvements, while at the same time the public wants to know that the government is spending their tax money in the best way to meet citizens' needs.

What is the difference between performance measurement and performance management? Performance measurement is the regular collection of information on the results of services; performance management goes one step further, using those measurements to manage services better. Traditionally, this kind of information has been hard to get, emerging only piecemeal through complaints or occasional anecdotes. Over the last three decades, performance management has become an increasingly popular way for governments around the world, both local and national, to manage their programs and services to achieve the results their citizens expect.

BENEFITS OF A PERFORMANCE MANAGEMENT SYSTEM TO LOCAL GOVERNANCE
- Improving service quality and outcomes;
- Improving resource allocation and justifying agency budgets or service cuts;
- Making public agencies accountable for results to elected officials and the public;
- Increasing citizens' trust in the local government; and
- Making work more interesting and satisfying for public employees because of its citizen focus.

Governing for Results has encouraged governments both to make the results they are seeking explicit and to design programs and policies to actively and directly seek those results. Monitoring progress towards those results provides constant feedback into the policy and implementation process, improving efforts to achieve objectives.



Performance management is based on a simple concept: a focus on service outcomes, or actual results, rather than only on the quantity of service that an agency provides. This implies assessing the performance of the government based on the implications of services for customers, not on physical outputs. The work of the local government is measured by what the citizen or user of the service actually experiences: are the roads in good enough condition so that children can get to school in the rainy season? Do the children stay in school until they graduate? Do pregnant women visit the primary health clinic during pregnancy, and does that result in healthier babies? Is garbage collected regularly, and does that have an effect on health?

This simple idea, however, means that many people in local government need to think in a different way. It will not be enough to measure how many kilometers are paved, or whether the clinic has the right staff. It will be important to see what the results of those efforts are in order to know whether they are working well. Through tracking performance indicators, and clearly linking those indicators to the results that the local government wants to see, the system provides decision makers with better information. With this information they can make better decisions, and show why they made those decisions. Using performance management, local governments can demonstrate their commitment to providing quality service.

This way of thinking can proceed at several levels. For example, a city may believe it is very important to provide a safe, healthy, and clean environment for all the citizens of the municipality. That may lead to the identification of a number of outcomes that will be sought in several different services: good quality service at the primary health clinic, public awareness about health hazards, better solid waste collection, clean water sources. Each of those in turn can require a number of different outcomes. For example, the head of solid waste management may want to ensure that streets are clean, that citizens are satisfied with collection service, that landfills are well managed, and that the service has full cost recovery so that good service is sustainable.

Municipal officials need to recognize important limitations of the performance information that would come from the steps discussed here. These include:
- The regularly collected annual performance information discussed in this report will not tell you WHY the recorded performance levels were good or bad. (However, a well-designed performance measurement process can provide useful clues. For performance indicators that show unexpectedly low, or high, results, more in-depth evaluations will be needed to get at the causes.)
- Similarly, the performance information does not tell municipality officials what has to be done to correct problems identified by the performance measurement data.
- Performance measurement provides information about past performance. It does not by itself tell what future results will be. However, information on past performance provides a major source of information for estimating future results, such as is needed for making budget, policy, and program decisions.


Depending on their size, governance system, and capacity constraints, governments around the world are using different tools to govern for results. One or more of the following approaches might be used to start developing a performance management system in your city:
- Develop and track selected performance indicators in each service sector. Make policy decisions based on the information and disclose this information in city performance reports and the budget.
- Develop service improvement action plans in priority sectors.
- Apply performance management to the internal processes of the local government, for example to increase municipal revenue or to reduce the time it takes for citizens to register births or marriages.
- Implement a comprehensive performance management system in your city that combines strategic planning, setting goals and objectives for each service sector, citizen participation, and the use of performance information.

A municipality can choose to adopt an MDG agenda and integrate the MDG Goals into its strategic plan. For each of these goals, the municipality should set targets and indicators that reflect local circumstances.

Who should be involved?

Many stakeholders play a role in the process of implementing and using a performance management system. Some of the key actors are described here.

Mayor. The Mayor should be a principal user of performance information, especially in establishing major policies and in reviewing city programs and the budget. In addition, the Mayor will play a major role in setting the climate for the shift to a results orientation. The Mayor's support is important for making sure that adequate resources are allocated to implementing and, later, sustaining the process.

City Council. The support of the council will be essential to the success of the enterprise, not only through the provision of funding when necessary, but also through underlining the importance of performance information by requesting it and using it. Elected council members will find outcome information to be very useful in carrying out their responsibilities, enabling them to more easily understand the impact of city services on their constituents, and to make decisions in their appropriation and oversight roles.

Department Heads. The heads of different departments or institutions play a crucial role in facilitating and using performance data.

National and Regional Government. These entities should have their own performance management processes. Many of their agencies are likely to need performance data (such as on health, education, and welfare) to make their own policy and program decisions and to provide information that will enable them to set national targets (such as for MDG indicators).


National and regional agencies may also use the performance indicators as a basis for identifying local areas that require special assistance, training, and help in achieving equity in the country, or in identifying best practices that can be shared with other localities to improve service everywhere. International Donors Donor loans or grants sometimes stipulate the condition that infrastructure projects or grants-in-aid be subject to detailed performance monitoring, or tied to achievable results. For example, the World Banks output-based-aid involves delegating service delivery to a third party (private firm or NGO) under contracts that tie payment to particular outputs or results delivered. Other agencies like USAID, CIDA, and DFID also use performance monitoring for specific projects, own inter-agency performance, and in some cases link them to country plans. A number of international donors, including the UN system, are focusing assistance efforts on helping countries attain the MDG targets and may want to see the connections between municipal programs and those targets. These donors can provide valuable assistance in capacity building within municipalities, by providing training on performance management systems, and creating incentives for performance monitoring in service delivery. Non-Governmental Organizations Different types of NGOs can play two valuable roles in improving service delivery: (1) in providing important quality public services themselves -- in which case they should themselves have their own performance management process, and (2) playing a watchdog or advocacy role in increasing citizen awareness of their rights to better quality public services. Business Community Businesses are a major consumer of government services (such as water, transportation, and, economic development services). In addition, many services are delivered through contracts. 
To ensure quality services, the municipality can use performance contracts with these businesses. Data for performance monitoring will thus also have to be obtained from, or at least with the cooperation of, these businesses.

Citizens. Citizens are the major consumers of public services, and they pay (via their taxes or fees) for many services. Citizens are also a major source of the information needed to evaluate services and an important source for identifying the outcomes that should be tracked, both annually and for strategic plans.


Select the Scope and Coverage of the Performance Measurement Process

Which services should you include? You might choose to start with one service (or program), several, or cover all municipal services and programs. It is recommended here that you attempt to cover all your services so that all municipal staff are encouraged to focus on results. Realistically, however, you may need to start with a few services at a time, so that successes in one area can serve as motivation to introduce performance management more widely.


There are several different ways to decide where to start. One method is to identify the departments that might be easiest, for example, areas where data are easily available or where some inexpensive improvements are most likely to yield rapid results. Another might be to start with departments whose leadership is already very interested in adopting the new approach, which is also likely to make the pilot effort easier. Another approach to choosing a starting place is to look at citizen priorities. Doing so might slow down the process, but it identifies an area that is likely to yield improvements in citizen satisfaction in the short term. International donors are often extremely supportive of such consultations with citizens and can help fund such efforts. More detail about such an approach is provided in Step 7 on Using Performance Management, under the description of strategic planning. If there is strategic planning, the strategic goals identified will determine the services that are to be monitored. The Millennium Development Goals can also provide guidance on what outcomes to select.
A step-by-step approach for a municipality to select services that contribute to the MDGs might be as follows:

Step 1. Review the MDG Goals and select those global Goals which are relevant to local realities (in a transition or developing country context, all should normally be relevant).
Step 2. Identify the services which are related to those Goals.
Step 3. Adapt the indicators to local circumstances.
Step 4. Review the national (if they exist) and global indicators related to the selected (and adapted) targets and select those which are relevant to local circumstances.
Step 5. Identify additional outcomes and indicators which are relevant to local circumstances and contribute to the MDG goals.
Step 6. Collect baseline data for all indicators.
Step 7. Set for each indicator targets appropriate for the locality, given the priorities, citizen preferences, needs, and available resources, bearing in mind possible benchmarks such as national MDG targets and performance in other localities, and adapting those targets to local circumstances.
Step 8. Identify non-MDG-related city goals, identify indicators, and set targets.

Local governments in different countries are responsible for different types of functions. In addition, in some countries those functions may be subject to change, especially where a process of decentralization is underway. It makes sense to start with services that are fully under the control of the locality, because that is where improved decisions will have the greatest impact, but there have also been instances where performance measurement has been applied to functions that are mixed (i.e., shared between central and local government) or even largely central functions. It can be especially difficult in shared functions to ascertain the effectiveness of the service, and measuring performance can provide useful input on the various aspects of the service. Thus, the central government might gain information about how well local governments are carrying out a task, or performance information might show that central funding or regulations aren't yielding the results that were expected. Exhibit 1-1 provides a list of exclusive and shared functions in Albania as of 2006. While most cities that have used performance management in Albania have focused first on exclusive functions, such as solid waste collection, parks, street cleaning, or water provision, there have also been several efforts to measure performance in areas of shared functions, such as education and social assistance. Some of those examples will be provided in this Guide.

Exhibit 1-1. Functions of Communes and Municipalities in Albania


Exclusive Functions of Communes and Municipalities

I. Infrastructure and Public Services
- Water supply
- Sewage and drainage system and [flood] protection canals in the residential areas
- Construction, rehabilitation and maintenance of local roads, sidewalks and squares
- Public lighting
- Public transport
- Cemeteries and funeral services
- City/village decoration
- Parks and public spaces
- Waste management
- Urban planning, land management and housing according to the manner described in the law

II. Social, Cultural and Recreational Functions
- Saving and promoting the local culture and historic values, organization of activities and management of relevant institutions
- Organization of recreational activities and management of relevant institutions
- Social services, including orphanages, day care, elderly homes, etc.

III. Local Economic Development
- The preparation of programs for local economic development
- The setting [regulation] and functioning of public market places and trade networks
- Small business development, as well as the carrying out of promotional activities, such as fairs and advertisement in public places
- Performance of services in support of the local economic development, such as information, necessary structures and infrastructure
- Veterinary service
- The protection and development of local forests, pastures and natural resources of local character

IV. Civil Security
- The protection of public order to prevent administrative violations and enforce the implementation of commune or municipality acts
- Civil security

Shared Functions of Communes and Municipalities
- Pre-school and pre-university education
- Priority health service and protection of public health
- Social assistance and poverty alleviation and ensuring the functioning of relevant institutions
- Public order and civil protection
- Environmental protection
- Other shared functions as described by law

Source: Law on Organization and Functioning of Local Governments, No. 8652, dated 31.07.2000

A Good Approach: Establish a Municipal Steering Committee and Working Groups

Once you have determined the scope of your performance measurement process, a good way to begin implementing it is to establish a high-level, across-government Steering Committee to oversee the process. The Steering Committee can then establish a Working Group to lead work on the details of implementation. The Steering Committee should include such persons as:
- A representative of the Mayor
- A high-level official of the finance/budget office
- A high-level official of the human resources (personnel) office
- Several department heads
- A high-level information technology official


The Working Group should have representatives from the departments carrying out or overseeing the work in question, from the financial department, and from a number of related areas. Encourage each participating municipal department to have its own working group. Exhibit 1-2 provides examples of the types of people that might be included in these department working groups. Such groups should consider including a representative from outside the government to obtain a broader, consumer perspective.

Exhibit 1-2. Example of Working Group Composition

Solid Waste Working Group
- City Manager
- Social economic department
- Director of financial department
- Head of the solid waste collection company
- Sanitation team
- Health offices administration
- Municipality-level social team
- Municipality-level economic team
- Environmental protection authority
- Representative from an NGO or citizen group interested in city cleanliness

Education Working Group
- Deputy Mayor
- School principal
- Representative from the parent-teacher association
- Representative from the Education Committee of the City Council

Land Management Working Group
- Head of Technical and Land Administration
- Municipal Services Manager
- Head of Planning and Information
- Construction and Design Team
- Municipality administrators
- Representatives from business firms
- Financial Management team

Construction and Maintenance of Asphalt and Gravel Roads Working Group
- Technical and Land Administration Department
- Technical team
- Administrative support services
- Urban Development and Construction Bureau, local branch

Municipality-Wide Working Group
- Deputy Mayor
- Head of public works department
- Director of finance department
- Other department heads
- Representatives of selected NGOs

What should be the functions of these working groups? The government-wide working group should have such tasks as:
- Developing a government-wide timetable for implementation
- Identifying and defining the types of indicators to be included
- Identifying staff training needs and making arrangements for the initial training efforts for both management and line personnel
- Developing a communication strategy
- Communicating with local government bodies, the City Council, civil society, and ordinary citizens
- Arranging for the development of guidelines for major data collection procedures
- In general, guiding such steps as those described in this Guide in Steps 2-8


The department working groups should have similar tasks but focused on their specific needs.


STEP 2: IDENTIFY OUTCOMES AND OUTCOME INDICATORS



Identify the Service/Program Objectives and Customers

For each service or program included in the municipality's performance measurement process, the municipality should start by identifying the service's objectives. What is the service intended to do for the city and its citizens? What are the primary benefits desired? A good statement of objectives should identify the key intended benefits and the intended beneficiaries (such as all the municipality's citizens or some particular segment). This process should also identify possible unintended effects, both beneficial and negative. Each of these will help formulate the outcomes that will be tracked. Ask such questions as:
- Who benefits from the program, and in what ways?
- Which demographic groups are particularly affected by the program?
- Who might be hurt by program activities, and in what ways?
- What persons whom the program does not directly target might be significantly affected by the program?
- Is the public-at-large likely to have a major interest in what the program accomplishes?

Exhibit 2-1 provides examples of the objectives and affected citizen groups for a few services.

Exhibit 2-1. Service, Objective, and Customers

Program or Service | Objective | Customers or Users
Solid waste collection | Clean city and neighborhoods | City residents
Schools | Better education | Children and parents; employers
Financial Department | Increase municipal revenue | All municipal services and citizens
Road maintenance | Safe and rideable roads | City residents and city visitors
All services | Improved collection of fees | All municipal services and citizens
Land management; housing | Adequate housing | City residents
Water authority; health | Healthy population | City residents
Social services; NGOs | Healthy and secure elderly people | Elderly people, their families, their caretakers

Examples of key customer groups in different programs are shown in Exhibit 2-2.


Exhibit 2-2. Examples of Key Customer Groups in Different Programs

A road construction program | Citizens and transportation companies
A water treatment plant | Citizens, businesses, and visitors to the community
A vocational school program | Students, parents, and local businesses who recruit the school's graduates
A sports facility | Athletes and the general public
A municipal park | Adults, children, and senior citizens in the community, and visitors


Identifying the specific outcomes that you will try to achieve, given the service's objectives, is one of the most important parts of this process.

What is an outcome?

An outcome is the result of a service, from the point of view of the citizens, especially the customers of the service. We can start by thinking about the various steps that go into delivering a service:
- First, there are inputs: the resources we use, for example, money or employees of the municipality.
- Second, there are outputs: the products that the city department, contractor, or agency produces, such as kilometers of road repaired or tons of garbage collected.
- Third, there are outcomes: the results of the service, such as roads in good condition or clean city streets.

It is useful to identify two primary levels of outcomes: intermediate outcomes and end outcomes. We can think of the higher outcomes, the end outcomes or ultimate outcomes, as the real purposes of what we are doing: for instance, the improved health of citizens that comes from a clean city, or the ability to go to work or school quickly and safely that is made possible by good roads. An intermediate outcome is also a result, not just an output, but the accomplishment of something that is likely to lead to an end outcome.

Exhibit 2-3 diagrams the causal relationship among these categories. Funding and people are needed to implement activities. Those activities are expected to produce outputs, which are expected to lead to intermediate outcomes and then to end outcomes.

Exhibit 2-3. Building Towards Results


Inputs -> Activities -> Outputs -> Intermediate Outcomes -> End Outcomes



Below are some sources of information that can help you identify what outcomes your municipality should track. Each source is likely to have its own perspective on what is, or should be, important to citizens and the community as a whole. Most, probably all, services and programs will each need to consider multiple outcomes in order to be comprehensive as to what is important to citizens and the community.
- Discussions or meetings with customers and service providers
- Customer complaint information
- Legislation and regulations
- Community policy statements contained in budget documents
- Strategic plans
- Program descriptions and annual reports
- Discussions with upper-level officials and their staff, to identify future directions, new responsibilities, and new standards at the national or regional level
- Discussions with legislators and their staff
- Input from program personnel
- Goal statements by other governments for similar programs
- Poverty Reduction Strategy Document (or other national strategy)
- Sector Strategies
- Regional Development Strategies

You can obtain information on program results through meetings with customers (known as focus groups), meetings with program staff, and meetings with other local government personnel. Exhibit 2-4 provides several different examples of outputs, intermediate outcomes, and end outcomes.

Exhibit 2-4. Examples of Outputs, Intermediate Outcomes, and End Outcomes

Output | Intermediate Outcome | End Outcome
Roads are repaired | Roads are in good condition | Citizens can reach work, school, markets and services
Clinics are built and staffed | Pregnant women visit clinic | Children are born healthy
Garbage is collected | Neighborhoods are clean | Lower incidence of disease
Customers are billed | Fees are paid | Cost recovery enables adequate services to be provided
Water is supplied | Citizens have access to water | Citizens are healthy
Schools have desks and textbooks | Children attend school | Children are educated

How are outputs and outcomes different? An important element of performance measurement is that it differentiates between outputs and outcomes. In measuring what government does, the traditional focus has been on tracking expenditures, the number of employees, and sometimes physical outputs.

An Output or an Outcome? Sometimes people are confused about the difference between an output and an outcome. A key question is how likely the item is to be important to citizens and service customers. Outputs are usually the physical things that services and their employees did (e.g., paved 200 square meters of road), while outcomes are what those things are expected to accomplish from the viewpoint of the recipient of the service (e.g., road condition is good).

The outcome focus of performance measurement connects performance to the benefits of services for citizens and the community. For example, performance measurement is concerned not with the number of teachers employed, but with the reduction in the student dropout rate. Of course, focusing on outcomes does not mean that you neglect outputs. Instead, a focus on outcomes provides a framework for you to analyze outputs in a meaningful way. In the above example, hiring more teachers or increasing the number of lessons taught does not necessarily reduce the number of students dropping out of school. It may mean that you also need special programs to improve the employment opportunities of the parents of students who are dropping out of school. Or you might set up a preventive counseling program to help those students who are the most likely to drop out. Measuring the performance of programs targeted at decreasing the dropout rate would then tell you how successful or unsuccessful these programs are.
Another example: Focusing on the percentage of your municipalitys roads that are in good, rideable condition, rather than on the number of square meters of road maintained, helps identify specific areas that most need maintenance attention.



It is not always obvious what outcomes should be selected, but the best way to decide is to think about what is most important. Whoever is responsible for selecting the outcomes should brainstorm about outcomes before making a final decision. Remember, there can be several layers of outcomes, ranging from the end outcomes (e.g., the health of citizens) to a number of intermediate outcomes (e.g., access to water, sufficient water pressure, and adequate cost recovery). Usually, you will want to track both intermediate outcomes and end outcomes, to determine both whether the end outcome has been reached and which intermediate outcomes have been successful or might need to be adjusted. End outcomes are more important, but intermediate outcomes provide services/programs with earlier information on progress and, thus, usually need to be tracked.

One important source of outcomes that should not be neglected is consultation with end-users, that is, those who will benefit from the service. This might include meetings with citizens in different neighborhoods, or the use of information from a citizen survey.

How do you brainstorm for outcomes? Brainstorming is a technique to help a group think creatively to come up with new ideas. The central rule is that everyone should say what he or she thinks openly and without inhibition. No one will be critical. A good way to start might be to have a large piece of paper (maybe on a flip chart) and ask everyone to suggest outcomes they would like to see. Go ahead and shout them out. Write everything down. After the brainstorm, the group will discuss the choices to decide which outcomes they will focus on. Multiple outcomes are to be expected for any public service.
Here are some examples of outcomes that have been selected in cities in Eastern Europe for selected services:

Solid waste collection:
- Areas around collection points are rated clean
- Citizens are satisfied with cleanliness in their neighborhoods
- Full cost recovery via collection of garbage fees

Water:
- More households are connected to the city water system
- Citizens feel they have enough water when they need it
- Increased water quality
- Full cost recovery through water tariffs

Outcomes contributing to the Millennium Development Goals

Some local governments may want to consider in what ways their local functions contribute to reaching the Millennium Development Goals. As a starting point, it is useful to note that most local government services are essential contributions to the Millennium Development Goals, although they are not specifically identified by the Goals, the Targets, or the Indicators. For instance, maintaining the adequacy of roads, a key local function in most countries, is essential to ensuring access to many primary services (clinics, schools, markets, water). Appendix B provides an annotated version of the MDGs, suggesting ways in which local services might be contributors to the Goals.


Local governments may choose to start their performance management efforts in areas related to one or more Millennium Development Goals, choosing outcomes over which the local government has some control and that are important to the community. An example might be:

Goal 2: Achieve universal primary education
Supporting outcomes:
- Roads in good condition to allow access to schools
- School facilities are in good condition (in countries where local governments are responsible for school facilities)

For each outcome you identify, you also need one or more specific outcome indicators: specific ways to measure progress toward that outcome. Outcome indicators are at the heart of performance management. They are the elements you will measure and track to see whether your local government is achieving the results it wants. For each outcome that is sought, measurable indicators need to be selected that permit the government to assess the progress being made toward the outcome.

An indicator must first of all be measurable. Not all outcomes of programs are measurable, or at least directly measurable. You need to translate each outcome of the program into performance indicators that specify what you will measure. In some cases you may want several indicators for one outcome. Typically, indicators start with the words "number of" or "percent of." In some cases, you will want to measure both the number and the percent; for instance, you might want to measure the total number of children who have received vaccinations, and also the percent of children that represents, so that it is clear how many children are still at risk.

Sometimes, when you cannot directly measure a particular outcome, you can use a substitute indicator, a proxy indicator. For example, for outcomes that seek to prevent something from occurring, measuring the number of incidents prevented can be very difficult, if feasible at all. Instead, governments track the number of incidents that do occur, a proxy indicator. These proxies are not ideal, but they can be the only practical approach.

Each indicator needs to be fully and clearly defined so that data collection can be done properly and produce valid data. For example, consider the important indicator "Proportion of population with sustainable access to an improved water source." What do the words "sustainable," "access," and "improved water source" mean?
Different people responsible for collecting the data for the indicator can easily define each of these terms differently. Next year, different staff might interpret the terms differently than those who collected the data last year. An excellent source of definitions, especially for MDG indicators, is that provided by the United Nations Development Group. The block below presents the definition provided by UNDG for the water-access indicator used in the above example. The MDG indicators most likely to be directly relevant to municipalities are listed in Appendix B.


Definition of the Indicator "Proportion of Population With Sustainable Access to an Improved Water Source, Urban and Rural"

The percentage of the population who use any of the following types of water supply for drinking: piped water, public tap, borehole or pump, protected well, protected spring, or rainwater. Improved water sources do not include vendor-provided water, bottled water, tanker trucks, or unprotected wells and springs. Access to safe water refers to the percentage of the population with reasonable access to an adequate supply of safe water in their dwelling or within a convenient distance of their dwelling. The Global Water Supply and Sanitation Assessment 2000 Report defines reasonable access as the availability of 20 litres per capita per day at a distance no longer than 1,000 metres. However, access and volume of drinking water are difficult to measure, so sources of drinking water that are thought to provide safe water are used as a proxy.
Source for this and the other MDG indicator definitions: Indicators for Monitoring the Millennium Development Goals: Definitions, Rationale, Concepts, and Sources. 2003. New York: United Nations. This publication is available in six languages.

Such available definitions should provide your municipality with a very good starting point. However, as the definition in the block above indicates, at least some tailoring to your own local situation is likely to be necessary to fully define each indicator. Appendix A provides a candidate set of outcome indicators for a number of typical municipal services.

(The MDG indicators included in Appendix B are included in the set of candidate outcome indicators presented in Appendix A.)
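The point above about reporting both the number and the percent can be shown with a short calculation. The sketch below (in Python, using hypothetical vaccination figures; it is not part of the Guide's method) illustrates how both forms of the indicator come from the same two counts:

```python
def vaccination_indicators(children_vaccinated, children_total):
    """Return the indicator in both forms: the raw count and the percent.

    Reporting both makes clear how many children were reached and how
    many remain at risk. All figures used here are hypothetical.
    """
    percent = 100.0 * children_vaccinated / children_total
    return children_vaccinated, round(percent, 1)

count, pct = vaccination_indicators(children_vaccinated=1800, children_total=2400)
# 1800 of 2400 children vaccinated -> 75.0%, so 600 children are still at risk
print(f"Children vaccinated: {count} ({pct}% of all children)")
```

The same pattern applies to any "number of / percent of" indicator pair: the count shows the scale of the service delivered, while the percent shows how much of the need remains unmet.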
You need to consider several factors when selecting performance indicators. Exhibit 2-5 suggests a set of criteria for selecting them. Rate each indicator according to these criteria.

Exhibit 2-5. Criteria for Selecting Performance Indicators

Relevance. Choose indicators that are relevant to the mission/objectives of the service and to what they are supposed to measure.
Importance/Usefulness. Select indicators that provide useful information on the program and that are important to help you determine progress in achieving the service's objectives.
Availability. Choose indicators for which data can likely be obtained, and within your budget.
Uniqueness. Use indicators that provide information not duplicated by other indicators.
Timeliness. Choose indicators for which you can collect and analyze data in time to make decisions.


Ease of Understanding. Select indicators that the citizens and government officials can easily understand. Costs of Data Collection. Choose indicators for which the costs of data collection are reasonable.
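One simple way to apply these criteria is to score each candidate indicator against each criterion and compare totals. The sketch below (Python; the candidate indicators and all scores are hypothetical, chosen only for illustration) shows the idea:

```python
# Rate candidate indicators against the Exhibit 2-5 criteria,
# each scored from 1 (poor) to 5 (excellent). Scores are hypothetical.
CRITERIA = ["relevance", "importance/usefulness", "availability", "uniqueness",
            "timeliness", "ease of understanding", "data collection cost"]

candidate_scores = {
    "Percent of road surface in good condition": [5, 5, 4, 5, 4, 5, 3],
    "Number of citizen complaints about roads":  [4, 3, 5, 3, 5, 5, 5],
}

def total_score(scores):
    """Sum the per-criterion ratings into a single comparison score."""
    assert len(scores) == len(CRITERIA)  # one rating per criterion
    return sum(scores)

# List the candidates from highest-rated to lowest-rated.
for indicator, scores in sorted(candidate_scores.items(),
                                key=lambda item: -total_score(item[1])):
    print(f"{indicator}: {total_score(scores)} / {5 * len(CRITERIA)}")
```

A simple total treats all criteria as equally important; a working group could instead weight criteria it considers critical (such as relevance) more heavily.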

Exhibit 2-6 provides examples of indicators that you could use as a starting point for two different programs. This exhibit also contains a fourth major category of performance indicator: efficiency indicators. These are usually defined as the ratio of the cost of a particular service to the amount of product produced with that expenditure. The unit of product traditionally has been one of the outputs, so the efficiency indicator usually takes the form "cost per unit of output." However, a sole focus on output efficiency can tempt employees to speed up their work, sacrificing quality. A municipality that also collects outcome data can in many cases use a much truer indicator of efficiency: cost per unit of outcome. For example, the public works agency can then, in addition to tracking the cost per meter of road repaired, also track the cost per meter of road that was improved from an unsatisfactory condition to a good condition.

Exhibit 2-6. Illustrative Performance Indicators, City of Bangalore, India

Water Supply
Input:
- Cost
- Staff
- Materials, equipment
Output:
- Average number of hours of water supply per day
- Ratio of number of stand-posts in slums to total slum households
- Daily consumption of water in litres per capita per day (LPCD)
Outcome:
- Percentage of water lost during distribution (of total water supply)
- Average citizen satisfaction rating with water quality
- Percentage of households having a safe or potable water source located within 200 meters of the dwelling
Efficiency:
- Cost of installing water harvesting equipment (per kilolitre)
- Cost per metered household

Environment
Input:
- Cost
- Staff
- Materials, equipment
Output:
- Number of persons per hospital bed, including both government and private sector hospitals
- Percentage distribution of waste water treated by each method
- Percent of waste water treated and re-cycled for non-consumption purposes
Outcome:
- Noise pollution in decibels at selected locations
- Percentage of population suffering from pollution-resultant respiratory diseases
- Percentage of population suffering from pollution-resultant water-borne diseases
- Pollution load per capita per day
Efficiency:
- Average cost, per kilolitre, of waste water treatment
- Cost per person treated in hospitals for pollution-resultant diseases


Adapted from Bangalore City Indicators Programme. (December 2000). Government of Karnataka, Bangalore Metropolitan Region Development Authority
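The distinction between output-based and outcome-based efficiency can be made concrete with a small worked example. The figures below are hypothetical, chosen only to show how the two ratios diverge when some of the repaired road does not actually reach good condition:

```python
# Hypothetical figures for a road-repair service, illustrating the two
# forms of efficiency indicator: cost per unit of output versus the
# truer cost per unit of outcome.
total_cost = 500_000        # annual road-repair spending (hypothetical)
meters_repaired = 10_000    # output: meters of road repaired
meters_improved = 8_000     # outcome: meters brought from unsatisfactory
                            # to good condition

cost_per_meter_repaired = total_cost / meters_repaired   # output-based
cost_per_meter_improved = total_cost / meters_improved   # outcome-based

print(f"Cost per meter repaired (output):  {cost_per_meter_repaired:.2f}")
print(f"Cost per meter improved (outcome): {cost_per_meter_improved:.2f}")
```

Here the outcome-based figure is higher than the output-based one because only part of the repaired road reached good condition; tracking both ratios over time shows whether faster work is coming at the expense of quality.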


It is good practice for a municipality and its agencies to categorize each of its indicators using categories such as those given above. This will help users of the performance information keep in mind the relative importance to the city and its citizens of the individual indicators. Input, output, and efficiency indicators are relatively familiar to program managers. Governments regularly use them to track program expenditures and services provided. Indicators of outcomes are much rarer, even though they are more helpful in determining the consequences or results of a program. Categories of performance indicators are described below, and examples are shown in Exhibit 2-7.

It is important for you to recognize the differences between the following categories of information:

Inputs. Input data indicate the amount of resources (expenditures and personnel) used in delivering a service.

Outputs. Output data show the quantity of work activity completed. A program's outputs are expected to lead to desired outcomes, but outputs do not by themselves tell you anything about the outcomes of the work done. To help identify outcomes that you should track, ask yourself what result you expect from a program's outputs.

Outcomes (intermediate and end outcomes). Outcomes indicate not the quantity of service provided but the results and accomplishments of those services. Outcomes provide information on events, occurrences, conditions, or changes in attitudes and behavior (intermediate outcomes) that indicate progress toward achievement of the objectives of the program (end outcomes). Outcomes happen to groups of customers (e.g., students or elderly persons) or to other organizations (e.g., individual schools and/or businesses) who are affected by the program or whose satisfaction the government wishes to attain.

Efficiency and Productivity. These categories relate the amount of input to the amount of output (or outcome). Traditionally, the ratio of the amount of input to the amount of output (or outcome) is labeled efficiency. The inverse, the ratio of the amount of output (or outcome) to the amount of input, is labeled productivity. The two convey the same information in different forms.

Exhibit 2-7. Examples of Performance Indicators

Input:
- Number of positions required for a program
- Cost
- Supplies used
- Equipment needed

Output:
- Number of classes held
- Number of projects completed
- Number of people served
- Number of letters answered
- Number of applications processed
- Number of inspections made

Outcome:
- Crime rate
- Employment rate
- Incidence of disease
- Average student test scores
- Percent of youth graduating from high school
- Number of successful rehabilitations
- Number of traffic accidents

Efficiency:
- Cost per kilometer of road repaired (output based)
- Cost per million gallons of drinking water delivered to customers (output based)
- Cost per school building improved from poor to good condition (outcome based)



Exhibit 2-8 contrasts output and outcome indicators for specific services or activities.

Exhibit 2-8. Contrast Between Output and Outcome Indicators

1. Output: Number of clients served. Outcome: Clients whose situation improved.
2. Output: Lane kilometers of road repaired. Outcome: Percentage of lane kilometers in good condition.
3. Output: Number of training programs held. Outcome: Number of trainees who were helped by the program.
4. Output: Number of crimes investigated. Outcome: Conviction rates of serious crimes, and crime rate.
5. Output: Number of calls answered. Outcome: Number of calls that led to an adequate response.

To summarize the selection of performance indicators, Exhibit 2-9 provides an example of objectives, outcomes, and indicators for a road maintenance program. The example also provides targets for improving performance. Targets are addressed later in this manual, in Step 5, Data Analysis.

Exhibit 2-9. Example of an Objective, Outcomes, Indicators, and Targets: Road Maintenance Program

Objective: Provide safe, rideable roads to the citizens, by regular renovation and maintenance of existing roads and by upgrading of any unpaved roads in the municipality.


Outcomes:
(1) Maintain the municipality's road surface in good, better, or excellent condition.
(2) Reduce traffic injuries or deaths by improving the condition and clarity of road signs.

Indicators for Outcome (1):
Input: cost of paving the road; personnel; amount of equipment used.
Output: kilometers of road paved; number of households having paved roads.
Outcome: kilometers of road surface in good or excellent condition; percent of citizens satisfied with road conditions.
Efficiency: cost per kilometer of road paved; cost per kilometer of road in excellent condition.

Indicators for Outcome (2):
Input: cost of new road signs; personnel costs.
Output: number of road signs improved; number of new road signs installed.
Outcome: number of traffic injuries or deaths; number of road signs in good or excellent condition.
Efficiency: cost per new or improved road sign.

Target for Outcome (1): Ensure that 90 percent of the road surface is in good or excellent condition.
Target for Outcome (2): Reduce traffic injuries or deaths during the year by 10 percent through improved road condition and clarity of road signs.

An important element of selecting performance indicators is to define each indicator thoroughly so that measurements will be made in a consistent way by different personnel and over time. For example, in the road condition measurement above, the municipal agency needs to define how to determine whether a meter of road is in excellent, good, fair, or poor condition.



Your municipality and your agencies will find the outcome information considerably more useful for making improvements if you break out the outcome data by key customer and service characteristics. This will much better enable users of the data to identify more precisely where problems, and successful practices, are present. Consider breaking out the outcome data into categories such as the following:
x By geographical location;
x By organizational unit/project;
x By customer characteristics;
x By degree of difficulty (in carrying out the task in question); and
x By type of process or procedure you use to deliver the service.

Each of these recommendations for indicator breakouts is discussed below.

By geographical location
Break out data by district, neighborhood, etc. The presentation of data by geographical area gives users information about where service outcomes are doing well and where they are not.


Exhibit 2-10 shows the percentage of respondents who rated the cleanliness of their neighborhood in Püspökladány, Hungary, as very clean or somewhat clean. Overall (for the entire city), 45 percent of respondents stated their neighborhood was very clean or somewhat clean. However, when you break out responses geographically (by district), you begin to see interesting variation. While most of the districts received similar ratings on neighborhood cleanliness, only 26 percent of respondents in district 1 rated their neighborhood as very clean or somewhat clean. This shows that district 1 is a problem area, and the city needs to examine why residents in that district rated cleanliness so low. (Note: The seven districts in the city were categorized based on socioeconomic conditions. Respondents were asked, "How would you rate the cleanliness of the neighborhood you reside in from 1 to 5, where 1 is very dirty, and 5 is very clean?")

By organizational unit/project
Separate outcome information on individual supervisory units is much more useful than information on several projects lumped together. For example, it is useful to have separate performance information on each public works departmental unit, not only for all the units together. Another useful application of breakouts by organizational unit would be to have separate performance information on the different units of the police department. For example, response times could be examined for individual units that specialize in particular crimes or other emergencies.

By customer characteristics
Breakouts by categories of customers (e.g., age, gender, education) can be very useful in highlighting categories of customer services that are or are not achieving desired outcomes.
For example, if the government finds that the daytime hours of operation for reporting a problem with city services are too limited, the government may consider opening an evening hotline for citizens who are not able to call during the day. For another example, park staff may find that they have put too much effort into satisfying parents with children and that their parks lack facilities that the elderly can enjoy.

By degree of difficulty
All programs have tasks that vary in difficulty. A more difficult program will have a harder time achieving the results you desire, and therefore distinguishing the degree of difficulty of a program can substantially change your perception of its outcomes. To show good performance, an organization is sometimes tempted to attract easier-to-help customers while discouraging service to more difficult (and more expensive) customers. Reporting breakouts by difficulty will reduce this temptation. Exhibit 2-11 gives an example of considering the difficulty factor in presenting performance information.

By type of process or procedure you use to deliver the service
Presenting performance information by the type and magnitude of activities or projects being supported by the program is very useful. For example, a street cleaning program can comprise sweepers, garbage cans and dumpsters, and garbage trucks. You should present data on each project in the program by (1) the type and amount of each activity; and (2) the indicators resulting from each project's efforts.
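As an illustration of how a breakout such as the geographic one in Exhibit 2-10 might be tabulated from raw survey records, here is a minimal sketch; the district numbers and ratings are hypothetical:

```python
from collections import defaultdict

# Hypothetical survey records: district and cleanliness rating
# (1 = very dirty ... 5 = very clean).
records = [
    {"district": 1, "rating": 2}, {"district": 1, "rating": 3},
    {"district": 1, "rating": 5}, {"district": 2, "rating": 5},
    {"district": 2, "rating": 4}, {"district": 2, "rating": 2},
]

# Break out "percent rating 4 or 5" by district.
totals = defaultdict(int)
clean = defaultdict(int)
for r in records:
    totals[r["district"]] += 1
    if r["rating"] >= 4:
        clean[r["district"]] += 1

for d in sorted(totals):
    print(f"District {d}: {100 * clean[d] / totals[d]:.0f}% rated 4 or 5")
```

The same loop works for any breakout field: organizational unit, customer age group, or case difficulty.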


Exhibit 2-10. Geographic Location Breakout


Exhibit 2-11. Workload (Client) Difficulty Breakout

                      Unit No. 1    Unit No. 2
Total Clients                500           500
  Number Helped              300           235
  Percent Helped             60%           47%
Difficult Cases              100           300
  Number Helped                0            75
  Percent Helped              0%           25%
Non-Difficult Cases          400           200
  Number Helped              300           160
  Percent Helped             75%           80%

Note: If you only looked at aggregate outcomes for Units 1 and 2 together, you would unfairly evaluate Unit 2, which had a higher proportion of difficult cases.
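The arithmetic behind Exhibit 2-11 can be verified with a short sketch:

```python
# Data from Exhibit 2-11: (clients, number helped) per difficulty category.
units = {
    "Unit No. 1": {"difficult": (100, 0), "non-difficult": (400, 300)},
    "Unit No. 2": {"difficult": (300, 75), "non-difficult": (200, 160)},
}

for name, cases in units.items():
    clients = sum(c for c, _ in cases.values())
    helped = sum(h for _, h in cases.values())
    # The aggregate rate hides how well each unit did with its harder caseload.
    print(f"{name}: {100 * helped / clients:.0f}% helped overall")
    for category, (c, h) in cases.items():
        print(f"  {category} cases: {100 * h / c:.0f}% helped")
```

Unit No. 2 shows only 47 percent helped overall against Unit No. 1's 60 percent, yet it outperforms Unit No. 1 within both difficulty categories.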

You can use breakouts for purposes such as the following:
x To help pinpoint where problems exist, as a first step toward identifying corrective action;
x As a starting point for identifying best practices that might be disseminated to other program areas, by identifying where especially good outcomes have been occurring; and
x As a way to assess the equity with which services have been serving specific population groups.

A summary checklist of these breakout categories is given in Exhibit 2-12.

Exhibit 2-12. Possible Client and Service Characteristic Breakouts

Client Characteristics
Gender: Examine outcomes for men and women separately.
Age: Examine outcomes for different age ranges. Depending on the program, the age groups might span a large range of ages (such as examining clients under 21, between 21 and 59, and 60 and older), or the program might focus on a much smaller age range (such as a youth program wanting to compare outcomes for youth under 12, 13-14, 15-16, and 17 or older).
Race/Ethnicity: Examine outcomes for clients based on race/ethnicity.
Disability: Examine outcomes based on client disability. For example, some programs might want to determine whether clients with disabilities rate services differently than those without disabilities, as well as the outcomes for clients with various types of disabilities.
Educational level: Examine outcomes for each client based on the educational level achieved before starting service.
Income: Examine outcomes for clients grouped into specific income ranges based on the latest annual household income at the time clients began service.
Household: Examine outcomes for households of various sizes, generations, and numbers of children.
Difficulty of problem at intake: Examine outcomes by intake status, based on the expected difficulty of helping the client. Inevitably, some clients are more difficult to help than others. For example, an employment program might want to consider the literacy level of its new clients. An adoption program might want to relate outcomes to the age and health of the children.

Service Characteristics
Facility/Office: Examine outcomes for individual facilities or offices.
Service provider: Examine outcomes for clients of individual service providers, such as caseworkers.
Type of procedure: Examine outcomes for clients who were served using each distinct procedure. For example, a youth program might have used workshops, field trips, classes, etc.
Amount of service: Examine outcomes for clients who received varying amounts of service. This might be expressed as the number of sessions a client attended, the number of hours of service provided to each client, or whatever level of service measurement the program uses.

Source: Analyzing Outcome Information: Getting the Most from Data, The Urban Institute, 2004.


STEP 3: SELECT DATA COLLECTION PROCEDURES



A performance indicator is not very useful until a feasible data collection method has been identified. For MDG indicators, a United Nations publication (2003) provides general suggestions for data collection and sources. (See Appendix B for a list of the MDG indicators likely to be directly applicable to municipalities, and their data sources.) However, your municipality will need to work out the data collection procedure details.

For the MDG indicator used as an example in Step 2, "Proportion of the population with access to an improved water source," the UN report notes that the usual sources have been administrative records on facilities and surveys of households. (It states that the evidence suggests that data from surveys are more reliable than administrative records and "provide information on facilities actually used by the population.") We note that another possible data collection procedure is the use of trained observer rating procedures to help determine what is available to households.

Data for most of the MDG indicators are obtained from national censuses and surveys (usually conducted every two to five years) or from the records of national line ministries. In some cases data are also computed directly by the country's National Statistical Office, the World Bank, or the UNESCO Institute for Statistics. In addition to these national surveys or agency records, you will need to track disaggregated values for the indicators for your municipality.

There are four primary sources of performance data:
x Agency records
x Surveys of citizens
x Ratings by trained observers
x Use of special measuring equipment

In this guide we discuss the first three in some detail below. Several factors will affect your decisions about which sources to use for which indicators:
x How applicable is the source to the information you seek? (For instance, outcome information such as citizen satisfaction or ratings of service quality can only be obtained from surveys of citizens.)
x What is the availability of sources from which you can obtain the information?
x How much time and how many resources would it take to regularly collect the data?
x What is the likelihood that reasonably accurate data can be obtained from the procedure?


USE AGENCY RECORDS

Examples of performance data obtainable from agency records (sometimes called administrative records) include the following (some of these records will be available locally, others from the national government):
x Incidence of illnesses and deaths in a hospital (end outcome indicator)
x Results of test scores in schools (end outcome indicator)
x Total percent of owed fees collected (intermediate outcome indicator)
x Number of complaints received (intermediate outcome indicator)
x Percent of time equipment is operational, such as street cleaning equipment or public transit vehicles (internal intermediate outcome indicator)
x Time taken to respond to citizen requests for a service, such as determining eligibility for a public welfare benefit, obtaining a business permit, or receiving emergency medical attention (intermediate outcome indicator)
x Cost per kilometer of road maintained (efficiency indicator)
x Size of workload, for example the number of buildings needing inspection or the number of kilometers of street needing repair (used for calculating outcome and efficiency indicator values)

Tracking Citizen Calls and Response Times in Indjija, Serbia
The key feature of Indjija's Sistem48 is a call center for citizens to make complaints, comments, or requests concerning any local government service. After a call is received, several things take place:
x Callers are guaranteed a response within 48 hours
x The complaint or request is forwarded to the service in question for resolution
x Data about the call are logged and reported, including time of call, content, and length of time until resolution
Data on the calls are reviewed by the Mayor in biweekly meetings with the departments. Receiving and recording citizen calls provides an important measure of citizen satisfaction, as well as pointing to specific areas of particular concern.

Why is it useful to use agency records? The advantages of using agency records as data sources are their availability, low cost, and program personnel's familiarity with the procedures. Since agency record data are already collected and available, they have been the major source of performance data used by local governments. This information can thus form the starting point for your performance measurement system. For some performance indicators, an agency might need to obtain information from another municipal agency or even from another level of government. For example, one of your public welfare agencies might need record data from the health department to process an application for disability benefits.

Can you use existing processes? For performance measurement purposes, however, your agencies are likely to need to modify their existing processes. For example, you may have to modify forms and procedures to enable you to calculate service response times. This involves:
x Recording the time of receipt of a request for service;


x Defining when completion of the response has occurred; x Recording the time of completion of the response; x Establishing data processing procedures to calculate and record the time between these two events; and x Establishing data processing procedures for aggregating the data on individual requests.
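The response-time calculation described in the steps above can be sketched briefly; the timestamps below are hypothetical examples of what a request log might contain:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

# Hypothetical log: (time request received, time response completed).
requests = [
    ("2007-03-01 09:15", "2007-03-01 14:45"),
    ("2007-03-02 10:00", "2007-03-05 10:00"),
]

# Time between receipt and completion, in hours, for each request.
hours = [
    (datetime.strptime(done, FMT) - datetime.strptime(received, FMT)).total_seconds() / 3600
    for received, done in requests
]

# Aggregate the individual response times for reporting.
print(f"Average response time: {sum(hours) / len(hours):.1f} hours")
print(f"Longest response time: {max(hours):.1f} hours")
```

Reporting both the average and the longest (or a percentile) response time guards against a few very slow cases being hidden by the average.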

Are there drawbacks to using agency records? Agency records have a major limitation. Records alone seldom provide sufficient information on major aspects of program quality and outcomes.


USE CITIZEN SURVEYS

Citizen surveys are a very important procedure for obtaining many key elements of the outcomes and quality of many, probably most, of your municipality's services. They may be the only way to obtain certain information for some, if not many, of your outcome indicators.

Why is it useful to use customer surveys? The advantages of customer surveys are that they provide information not available from other sources and they obtain information directly from program customers. The disadvantages are that they are unfamiliar to agency personnel and require special expertise or training; they can be costly; and they are based on respondents' perceptions and memory and are therefore subjective. Examples of the types of information you can obtain from customer surveys include:
x Ratings of overall satisfaction with a service and of the results achieved
x Ratings of specific service quality characteristics
x Data on actual customer experiences and the results of those experiences
x Data on the customer actions/behaviors that the program's service seeks to encourage or reduce
x Extent of service use
x Extent of awareness of services
x Reasons for dissatisfaction or non-use of services
x Suggestions for improving the service
x Demographic information about customers

Exhibit 3-1 illustrates the service outcome data that a survey of citizens can provide.


Exhibit 3-1. Example of Water Service Indicators Derived from a Customer Survey

Indicator: %
Percent who receive drinking water most often from a private connection to the city pipeline: 49.3
Of those who sometimes need water from a different source, percent who get it from a river, lake, pond, stream, or other surface water: 24.2
Percent who have access to water no more than once every three days on average during the past 12 months: 13.1
Percent who have access to water four hours or less per day on average during the past 12 months: 25.4
Percent who report that they always or usually have sufficient water when they need it: 87.1
Percent reporting that sometime in the past 12 months the water had a bad odor: 8.6
Percent reporting that sometime in the past 12 months the water had a bad taste: 16.8
Percent reporting that sometime in the past 12 months the water had a different appearance: 34.4
Percent reporting that the pressure or flow of water was not enough in the past 12 months: 19.7
Percent who report they pay for water service: 97.4
Of those who don't pay at present, percent who say they would be willing to pay a fee to receive a better water supply: 33.3
Of those who pay at present, percent who say they would be willing to pay a higher fee if they were to receive better service: 56.7

Source: Data from Kombolcha, Ethiopia City Survey, 2005. In Using Performance Management to Strengthen Local Services: A Manual for Local Governments in Ethiopia. Katharine Mark. July 2006. Washington, D.C.


Unlike opinion polls, surveys of citizens focus on respondents' past actual experience with services, not their opinions about the future. (However, you can include in any of these surveys a few questions to solicit citizen opinions on issues of particular importance. This will make the surveys of even greater use to the municipality.) Surveys are especially useful if they are taken periodically, so that the local government can see what has improved (or weakened) over time. These surveys come in two major forms:
x Surveys of samples of households (or businesses) in the municipality, commonly called a household survey. Such a survey can be used to provide feedback simultaneously on multiple services.
x Surveys of those citizens (or businesses) that have actually used the particular service, a user survey. User surveys are likely to be most useful to your individual agencies and programs since they can obtain more detailed information on the particular service. In household surveys covering multiple services, normally only a few questions can be asked about each service.

Survey results can also be very effective in informing the public about city performance. They can, for example, be used in citizen report cards, as illustrated in the box, to publicize city performance in clear, understandable terms in order to galvanize the government to improve services.

Using Citizen Report Cards in India
In 1993-94 the Public Affairs Center in Bangalore, concerned about the deteriorating quality of public services, developed and implemented a citizen satisfaction survey that measured user perceptions of the quality, efficiency, and adequacy of basic services delivered by 12 municipal agencies. The results of the survey were translated into a quantitative measure of citizen satisfaction and presented in various media in the form of a report card. The 1994 survey was followed up in 1999.
x Eight of the 12 agencies covered in the 1994 report card made attempts to respond to public dissatisfaction. The worst-rated agency, the Bangalore Development Authority, reviewed its internal systems for service delivery, trained junior staff, and began to co-host a forum for NGOs and public agencies to consult on solving high-priority civic problems such as waste management.
x The report cards were also successful in generating political momentum for reforms. Popular local and regional media carried regular stories on the report card findings. Citizens have also been inspired to take initiative toward improving services and have subsequently engaged in the state-citizen Swabhimana partnership in Bangalore, a program to arrive at sustainable solutions to longstanding city-level problems. The Chief Minister of Karnataka has also established a Bangalore Agenda Task Force of prominent citizens to make recommendations for the improvement of basic services.
x Report cards have also been used as an effective tool in Ukraine and the Philippines.


You may need to contract for these surveys, especially household surveys. In this case, the contractor pretests the final questionnaire, conducts the survey, tabulates the results, and reports them to the municipality. However, it is the responsibility of the municipality and its agencies to make sure that the


questionnaire and survey process obtains the information it needs and that the survey is done in a valid manner.

Task 1: Prepare a draft questionnaire
Decide what service outcomes are best obtained from citizens. Decide what descriptive information is desirable from respondents that would help you interpret the outcome data. For example, would information from the respondent on reasons for not liking the service, or on how much service the respondent received from the program, be helpful in later interpreting the outcome information? Obtain professional help to prepare the question wording so that it is clear and unbiased. A good way to jump-start the process of drafting a questionnaire is to use an existing questionnaire from another government as a starting point. Examples will often be available from other governments.

Task 2: Identify the major population subgroups for which the local government wants to obtain data
This decision should be guided in part by the resources available to conduct the survey. Try to obtain responses from at least 100 respondents in any category. For example, if the survey seeks to compare four different geographic areas in the municipality, the number of completed interviews needs to be at least 400. If more precise data are needed, a larger sample size is required. (Determination of what sample sizes are needed to obtain various degrees of precision in the survey findings will need to be done by a statistician.) If, however, the requisite sample size is not affordable, it is better to go ahead with what is affordable so that at least approximate results can be obtained. It is better to be roughly right than precisely ignorant!

Customer Survey in Georgia
A customer survey was conducted in seven cities in Georgia in 2001, followed by two additional surveys in 2002 and 2004. The survey included questions like:
x About how many hours during each day do you have access to the water supply during the winter?
x Would you say the city is generally very clean, fairly clean, average in cleanliness, ...?
x Speaking of everyday household garbage (as opposed to bulk refuse), in the past month (meaning, in the past thirty days) did the garbage collectors ever miss picking up your garbage?
x How satisfied are you with the services provided to you by the local government?
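The rule of thumb in Task 2 (at least 100 respondents per category) reflects the statistical precision of a proportion estimated from a simple random sample; a rough sketch of the underlying arithmetic, assuming simple random sampling:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a
    proportion p estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Roughly +/-10 points with 100 respondents per subgroup; halving the
# margin requires roughly quadrupling the sample.
for n in (100, 400):
    print(f"n = {n}: +/-{margin_of_error(n):.1f} percentage points")
```

This is only a first approximation; as the text notes, a statistician should determine the sample sizes actually required for a given survey design.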

Task 3: Determine the mode of administration and who will be responsible for each survey task
Several factors need to be considered, especially cost and the likely rate of response. In many developing countries these surveys will best be administered using in-person interviews. Interviewing time is inexpensive, and alternative approaches such as telephone interviews or mailed questionnaires may not be feasible. In more developed countries, telephone interviews and mailed questionnaires will likely be less expensive than in-person interviews. Careful record keeping is important while the survey is administered so that the findings can be compared to future administrations of the same survey. The frequency at which the survey is administered should be determined in advance.

Survey results were used by the Georgian cities to identify priority areas for service improvement. (A sample multi-service survey instrument is attached in Appendix A.) Source: USAID Local Government Reform Initiative (LGRI) in Georgia.

Task 4: If the survey is to be contracted out, hold a competition, select the contractor, and undertake the survey
There are several reasons to have the survey carried out by a contractor, such as greater objectivity or the lack of in-house resources. If a contractor is needed, you will need to prepare a scope of work


and then select the contractor. A competitive process will help you obtain the best possible contractor for the job. If the survey is to be done in-house, carry out the implementation steps established in Task 3 above. For user surveys, the government may be able to administer the surveys itself, especially if potential survey respondents can be expected to come to the service facility, where they can complete the questionnaire. This last procedure, however, would need to be done in a way that protects the confidentiality of each respondent so that the respondent can answer the questions frankly.

Task 5: Provide for analysis and reporting of the data
After the survey has been completed, the responses need to be tabulated and the findings summarized for the relevant stakeholders. (This is discussed further in Steps 4 and 5.) Exhibit 3-2 is an example from a 2004 survey in Georgia. It depicts citizens' ratings of city cleanliness in each of seven cities, comparing results in 2002 and 2004.

Exhibit 3-2. Percent of Citizens Rating the Entire City as Fairly Dirty or Dirty

0.50 0.40 0.30 0.20

0.10 0.24





0.45 0.35 0.23



0.13 0.05 0.02

0.10 0.00
a ekh i Po ti



Oz urg et






Source: Georgia Customer Survey 2004. By Ritu Nayyar-Stone and Lori Bishop. June 2004. Washington, D.C.: The Urban Institute, Project No. 06901-012.

Appendix C of this Guide provides an example of a multi-service customer questionnaire to assess citizens satisfaction with a number of services that might be provided by a local government. The questionnaire can serve as a starting point for any municipality. However, it should, of course, be modified based on the specific conditions and services in your city. Substantial modifications will be required to adapt this questionnaire for a small town or agglomeration of villages.





Appendix D contains examples of user questionnaires: one for water service delivery and one on patients' satisfaction with the quality of services in a hospital. Each asks service users to provide comments and suggestions for improvement, and also asks about end outcomes. If resources are limited, other options a government has include sampling a smaller number of users, asking fewer questions, or using mail instead of personal interviews. Note that these examples include a few open-ended questions. Open-ended questions can obtain richer information about the opinions of citizens, but the responses take considerably more time to process and analyze, and it is difficult to know how representative the responses are unless a high proportion of the respondents give quite similar answers.

Relative advantages of user and household surveys
User surveys can be especially useful to municipal agencies because they can provide more in-depth information on a particular service from citizens who have used the service. User surveys are also likely to be easier to administer than household surveys, because contact information is likely to be already available for users, and users are more likely to respond because they are interested in the service.

Household surveys have their own advantages. They can obtain information about several services at once, survey costs can be shared among the agencies involved, and they can obtain information from non-users. Information obtained from non-users is helpful in estimating rates of participation among different types of households. Also, non-users can indicate why they do not use a service, and improvements can then be made as appropriate.


USE TRAINED OBSERVER RATINGS

This data collection procedure can be highly useful for assessing any service outcome that can be measured by direct physical observation, especially by visual means. Different observers rate, in a systematic manner, physically observable conditions using a pre-selected rating scale. The rating scales need to be clear and specific so that different raters would give approximately the same ratings to the observed condition. This also means that changes in condition over time can be reliably detected by comparing later findings made by either the same or different trained raters. This can be a highly accurate, reliable, and useful procedure if you have a clearly defined rating system, adequate training of the observers, adequate supervision of the rating process, and a procedure for periodically checking the quality of the ratings. Examples of applications include:

x Cleanliness of streets, alleys, and recreation areas;
x Condition of trash receptacles in public areas;
x Presence of offensive odors from solid waste;
x Condition of roads (potholes, sidewalks, paved area, etc.);
x Condition and visibility of street signs;
x Condition of public facilities, such as school buildings and health clinics;
x Condition of safety equipment in buildings (fire extinguishers, hoses, sprinklers);
x Cleanliness of public baths;
x Conditions in waiting rooms;
x Waiting times; and
x Ability of citizens with developmental disabilities to undertake basic activities of daily living.


What types of rating systems can you use? Trained observers can use three major types of rating systems:
x Written descriptions
x Photographs
x Other visual scales, such as drawings or videos

Written Descriptions
This is the simplest and most familiar type of rating system. It depends on specific written descriptions of each grade used in the rating scale. Exhibit 3-3 is a written rating scale for street cleanliness. Visual ratings of cleanliness can be made from a car or by observers on foot.

Photographic Rating Scales
Photographic scales can be more precise than written scales in providing clear definitions of each rating grade, and they make ratings easier to explain. Photos are used to represent each of the grades on the rating scale. Observers are trained to use that set of photos to determine the appropriate grade. An example of a photographic rating scale, based on photographs taken in a city in Armenia, is shown in Exhibit 3-5.

Other Visual Scales
Visual rating scales can also use drawings or sketches that represent each grade on a rating scale. An example is sketches representing the conditions of school buildings or classroom walls. This kind of rating scale was used by the New York City school system to track the physical condition of its schools and to help make decisions about building repairs. Assessing the need for repairs (determining needed action) is an additional, very important use for outcome information from observer ratings. The City of Toronto used the information obtained from the scale in Exhibit 3-4 below not only to help track road conditions but also to determine what repairs were needed in each location.

Exhibit 3-3. Rating Cleanliness of Streets, Squares, and Sidewalks

Rating 1. Streets Almost or Completely Clean: Up to two pieces of litter are allowed.
Rating 2. Streets Generally Clean: Some litter observed in the form of items thrown here and there; or a separate pile, not placed in the container, with a volume equal to or smaller than a shopping bag.
Rating 3. Dirty Street: Garbage scattered here and there along the street, or a big pile not large enough to be considered a garbage collection area; or, in a generally clean block, a single pile bigger than a shopping bag but smaller than a 120-liter standard container (1 m3) that was not put out for pickup by the cleaning team.
Rating 4. Very Dirty Streets: Piles of garbage or lots of litter scattered everywhere or almost throughout the block; or a pile in the block with a volume much bigger than a 120-liter standard garbage container.


Source: Adapted from Kavaja Municipality, Albania. November 2006.

City cleanliness rating using the trained observer approach.

Exhibit 3-4. Road Condition Rating Scale

Rating 9 (Excellent): No fault whatsoever. Recently constructed work.
Rating 8 (Good): No damage; normal wear and small cracks. Average rating for City of Toronto pavements and sidewalks.
Rating 7: Slight damage; crack fill or minor leveling required. Pavement requires preventive overlay.
Rating 6: 10% of complete replacement cost.
Rating 5 (Repair): 25% of complete replacement cost. Eligible for reconstruction programme.
Rating 4 (Repair): 50% of complete replacement cost. Total reconstruction probably indicated.
Rating 3 (Repair): 75% of complete replacement cost. Requires complete reconstruction.
Rating 2 (Repair): More than 75% of complete replacement cost. Impossible to repair.

Source: Adapted from Performance Measurement: Getting Results, 2nd edition. Washington, DC: The Urban Institute Press, p. 104.

How can you establish a trained observer rating system? The basic steps to establish a trained observer rating system are as follows:


Task 1: Determine what conditions to rate and where the ratings will be made.

Your municipality first needs to choose which conditions to rate. You will also need to decide which areas of the city to cover: whether to cover the whole city (all streets or all facilities) or to concentrate on specific neighborhoods, a sample of blocks, or only some facilities. Starting with just one area can help develop skills, and may also be motivating if it shows positive results. A final decision is how often to carry out the ratings. This depends on a number of factors, particularly what is being rated (for example, how frequently observable conditions are likely to change) and the cost of the ratings. Building ratings might be done only once a year, while ratings of litter conditions might be done considerably more frequently, such as monthly or every two weeks.

Task 2: Develop a rating scale with explicit definitions for the grades of each condition to be measured.

After determining the scope of the effort as described above, you will need to decide specifically what to rate and create a measurable rating scale for each condition, such as the rating scales in Exhibits 3-3 and 3-4. If you use a photographic rating scale, you will need to take additional steps:

x Take a large number of photographs in settings representative of the full range of conditions expected to be present. (One rural commune started with a group of about fifty photos.) Take care that the set of photographs encompasses the full range, and multiple variations, of each condition. (Exhibit 3-5 below shows some photographs of street cleanliness conditions taken in Yerevan, Armenia, that might be used to establish a photographic rating system.)
x Select a panel of persons with varied backgrounds to act as judges; these should be persons who will not be part of the performance measurement activities.
x Select labels, each representing a condition that the program expects to find (for example: smooth, somewhat smooth, bumpy, very bumpy), and ask the judges to sort the photographs into groups that represent each condition.
x For each condition, select the four or five photographs that the largest number of judges identified as representative. These sets of photographs then become the rating scale.
x Develop written guidelines to accompany the photographs.
x Package the guidelines and copies of the photographs selected for the final scale in a kit for each observer.

Exhibit 3-5 shows examples of street cleanliness conditions. Visual ratings by trained observers can be based on a scale described both photographically and in writing. This reduces the subjectivity of the ratings, so that different observers using the rating guidelines would give the same rating to similar street conditions. The exhibit shows photographs representing four rating levels:

Condition 1: Very clean. No noticeable litter, or one or two scattered items.
Condition 2: Clean. A few littered items in a relatively contained area. No large items.
Condition 3: Dirty. Many littered items covering a fairly large area.
Condition 4: Very dirty. A large number of littered items; one or more large piles of trash.


Exhibit 3-5. Sample Rating Scale for Street Cleanliness

Condition 1

Condition 2

Condition 3

Condition 4
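The judge-sorting step in Task 2 above (keep only the photographs that most judges assign to the same condition) can be sketched in a few lines of Python. The photo names, labels, and the 80% agreement threshold are illustrative assumptions, not from the guide:

```python
from collections import Counter

# Hypothetical judge ratings: each judge assigns one scale label to each photo.
judge_ratings = {
    "photo_01": ["very clean", "very clean", "clean", "very clean", "very clean"],
    "photo_02": ["clean", "clean", "very clean", "clean", "dirty"],
    "photo_03": ["dirty", "dirty", "dirty", "very dirty", "dirty"],
    "photo_04": ["very dirty", "very dirty", "very dirty", "very dirty", "dirty"],
}

def agreement(labels):
    """Return the most common label and the share of judges who chose it."""
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)

# Keep only photos on which judges largely agree; these become the scale.
scale = {}
for photo, labels in judge_ratings.items():
    label, share = agreement(labels)
    if share >= 0.8:
        scale.setdefault(label, []).append(photo)
```

Photos on which the judges split (photo_02 above) are discarded rather than forced into a grade, which is the point of using a panel.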

Task 3: Develop and document procedures for selecting inspection locations, recording data, and processing data. You will need to be sure that every aspect of the process is decided ahead of time by thinking through each step. You will need to decide:

x How will the observations be recorded? On a paper form? In a handheld computer? With a...?
x How will blocks/facilities be assigned to observers? How long will they work each day, and how many blocks/facilities will they be expected to rate each day? How will they be transported to the blocks/facilities they are rating? Will they be paid, and if so, how much?
x What provision should be made for raters to supplement their numerical ratings with comments that provide specific information on the nature and extent of the problems they have observed?
x What do observers do with their data after rating? Record it in a central database? Will someone else process it, and who will that be?

It is also important to think through the data analysis ahead of time, to be sure data are collected in a form compatible with the analysis methods and reporting formats. See Step 4 for more information on analysis. Procedures for rating certain quality elements of municipal services (street cleanliness) using trained observer ratings are provided in Appendix D.

Task 4: Select and train observers.

Observers might be staff members, part-time employees, students, community members, or other volunteers. (See the box below for some of the advantages of using volunteers.) Training sessions need to be designed carefully so that observers receive complete and consistent training, but the training itself need not be very long. Depending on the complexity of the conditions to be rated and the experience of the raters and trainers, even one or two days might be enough.

Using Volunteers as Trained Observers

There are many examples of successful rating systems that rely on volunteer raters, including neighborhood residents, pensioners, representatives of NGOs, and youth groups or high school/university students. Beyond cost savings, the use of volunteers has several advantages. Citizens bring a fresh eye to their assessments. They will notice aspects that might be missed by others, and are likely to think more broadly, coming up with ideas about the causes of problems and possible solutions. In addition, with citizens as raters, the process is likely to be more trusted by the public as a genuine and objective assessment of public services. If the citizen volunteers add comments in their ratings about citizen behavior (for example, that littering should be reduced), those comments are more likely to be accepted.

The observers first need to learn what each of the grading scales means and how the local government expects them to be interpreted.
Then the trainees should work together as a team to test the process of rating blocks/facilities. The field training locations should be selected to ensure that they include a range of conditions similar to those the raters are expected to encounter. This group practice is important for developing in each observer the same understanding of the rating scales. If the program is just starting, this is also a good time to review the rating scale and revise it as necessary. For instance, consistent divergences among the ratings given may indicate that the scale needs refinement or more elaboration.

Task 5: Set up a procedure to check the quality of the ratings.

It is inevitable that some variation in rating quality will occur over time. Moreover, even if the raters are all consistent, there will always be those who doubt whether such ratings are reliable. For both reasons it is important to have systematic procedures for checking the ratings of the trained observers. One relatively easy way to do this is to have an experienced rater, usually the rating supervisor, verify a random selection of about 10-15% of the areas/facilities rated. Knowing that this will happen will also help raters work to maintain the consistency of their ratings.
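The supervisor's spot-check described in Task 5 can be sketched as a simple random draw. The block names, the 10% share, and the fixed seed are illustrative assumptions:

```python
import random

# Hypothetical list of rated locations; in practice this would come from the
# rating records.
rated_blocks = [f"block_{i:03d}" for i in range(1, 201)]  # 200 rated blocks

def quality_check_sample(rated, share=0.10, seed=None):
    """Select a random share (e.g. 10-15%) of rated locations for
    independent re-rating by the rating supervisor."""
    rng = random.Random(seed)
    n = max(1, round(len(rated) * share))
    return rng.sample(rated, n)

recheck = quality_check_sample(rated_blocks, share=0.10, seed=42)
```

Using a random draw, rather than letting the supervisor choose, prevents raters from predicting which locations will be re-checked.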

What are the advantages of trained observer ratings?

Some advantages of trained observer ratings are:
x They provide reliable, reasonably accurate ratings of conditions that are otherwise difficult to measure;
x If ratings are done several times a year, you can adjust the allocation of program resources throughout the year;
x You can present ratings in an easy-to-understand format to public officials and citizens; and
x They are relatively inexpensive.

A major advantage of trained observer ratings is that they can give real-time operational feedback to service managers on where service problems are present: in which streets and in which facilities. Your municipality can use those ratings, and any supplementary comments provided by the raters, as a basis for work orders to service personnel. This contrasts with citizen surveys and even agency record data, which do not lend themselves as much to identifying specific service improvement needs.

What about any disadvantages?

Some disadvantages of trained observer ratings are:
x They are a labor-intensive method that requires time and training of observers;
x You need to check ratings periodically to ensure that observers are adhering to procedures;
x Program personnel may not feel comfortable with trained observer procedures because they do not use them often; and
x They are applicable only to conditions that can be measured by physical observation.


Youth in Georgia as Trained Observer Raters

In Georgia, local governments worked with youth (high school students) to develop, conduct, and analyze trained observer ratings. Announcements were posted in schools asking youth to participate in the training. Interested students were selected based on previous volunteer experience and basic familiarity with local government, and were given hands-on training in rating, including fieldwork exercises, by a US expert. Youth trained in the initial round then trained youth in other cities.

In Ozurgeti, the Youth Group rated the cleanliness of the streets and the areas around garbage bins several times and presented the results to the mayor and city council, followed by a joint presentation to the public by the Youth Group and city officials. In 2003, the audience for the presentation included local government representatives, citizens, communal service departments, and other interested groups. The presentations were covered by the media. In Gori, the volunteers not only rated the streets but also collected comments from citizens during the rating process and incorporated these into their recommendations for the Communal Service Department.

In several cities, youth groups organized volunteer cleaning days to clean up the center of town or specific parks. In Zestaponi, for example, the Youth Group mobilized more than 1,000 participants. Eleven Youth Groups participated in the Global Youth Volunteer Day Cleaning Action, which was sponsored by 15 organizations, including UI, IRI, and IFES. More than 8,000 people participated in the 11 cities, with (in most cases) the active participation of local governments, businesses, and local NGOs.

These experiences showed that youth groups represent an exciting mechanism for increasing the transparency and sustainability of reforms at the local level. The young people who participated were enthusiastic about involvement in activities to improve their communities, proved to have a great deal of influence on older members of their communities, and were able to engage political leaders successfully in discussions about local decision-making.
Source: Georgia Local Government Reform Initiative, Final Report, January 2005.


DATA QUALITY CONTROL

When performance data begin to be used to help make important decisions, such as budget decisions, users of the data become very concerned about their quality and credibility. From the beginning, municipalities should consider ways to ensure reasonable quality in the performance measurement process. This means building accuracy into the design of the measurement system and into the training of personnel.

As agencies implement performance measurement systems, they need to ensure that the data are sufficiently complete, accurate, and consistent to support decision making. The data must be sufficiently free of bias and other significant errors that would affect conclusions about the extent to which the outcomes sought have been achieved. A particular source of potential bias that needs to be guarded against is the incentive for employees to game the system by manipulating indicator values to make their performance look good.

Following are some tasks that your municipality, and each of its agencies, can take:

Task 1: Assign responsibility for data quality

Make it clear that each agency, and each of the agency's programs, is responsible for the quality of the data it submits. If possible, also provide some form of periodic review by an independent office (perhaps a management or audit office) of at least samples of the data collection procedures and data. Program managers are the first line of defense against poor quality data and should be accountable for data quality. However, the performance measurement system should also be subject to periodic assessment by other, more independent, offices or organizations.

Task 2: Require that each performance indicator be fully and clearly defined so that users will know what is being reported

Both the program collecting the data and those using the data need to be clear about what is being measured and the time period covered by the data.
A classic example is measuring response times. Indicators of response time are likely to be included in the performance indicators for many, if not most, programs. But response times can usually be defined in many ways. When should the clock be started: when the request is first made to the agency, when the appropriate person in the agency has received the request, or at some other point? When should the clock be stopped: when a formal written response has been mailed to the customer, when some appropriate action has been started, or at some other time?

Task 3: Require written procedures describing how the data are to be collected

Documenting data collection procedures can be a cumbersome task. However, it is clearly good practice to write the procedures down. This helps ensure that data collected by different (perhaps new) staff will follow the same procedures that other staff have used.

Task 4: Train persons who collect or record the data to do it the same way, as specified in the documented data collection procedure

A classic example occurs in police reporting of the category of crime committed. Police officers have discretion in labeling certain crimes, depending on estimates of the amount of property stolen or other factors.

Task 5: Make sure the material accompanying the performance data provides sufficient information for users to understand the data


Task 6: Identify the source of the data in the performance reports

Sources can be presented in notes at the bottom of each table or, if extensive, in an appendix at the back of the report. This is good practice.

Task 7: Identify limitations of the data

The agency may still want to report the data, but should make the limitations of the data clear. For example, when performance information is based on surveys whose findings rest on only a small number of responses, users of the data should be alerted to this limitation. When reporting the results of citizen surveys, confidence intervals and response rates (the proportion of the persons the agency attempted to reach who actually responded) should normally be identified.

Task 8: Make certain that the time period covered by the data for each indicator is clearly identified

Individual performance indicators may cover differing time periods. For example, agencies that survey their customers may do so at various times of the year. The time period when surveys were administered should be identified. Are the data current or old? Users should be alerted to this. Substantial time lags can occur before the data for an indicator become available. However, agencies should attempt to speed the process so that timely data can be provided. (Internet web sites often provide data that are older than desirable, because of the time it takes to post them or because the site is not kept up to date.) One way to reduce this problem is to encourage agencies to obtain and report preliminary data, identifying the data as preliminary and indicating when the final version will be forthcoming.

Task 9: Avoid changing the performance indicators from one year to the next

Some change is to be expected due to improvements in measurement, and is justified.
Changes in indicators can involve the addition of brand-new indicators, deletions, or indicators whose data collection procedure has changed so much that current measurements can no longer be compared to previous ones. However, too much change means that comparisons over time cannot be made. And users will become suspicious that the changes are intended to ensure that primarily indicators with favorable outcomes are measured in a given year.

Task 10: Protect your files, whether electronic or not, from tampering

Accidental or intentional breaches of confidentiality or security need to be guarded against, especially for data that are likely to have major implications for the agency, program, or staff.
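Task 7 above recommends reporting confidence intervals alongside citizen survey findings. A minimal sketch of how a 95% interval for a survey proportion can be computed, using the normal approximation and assuming a simple random sample (the respondent counts are invented for illustration):

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a survey proportion
    (normal approximation; assumes a simple random sample)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Illustrative figures: 180 of 400 respondents rated a service "good".
low, high = proportion_ci(180, 400)
```

With 180 of 400 respondents, the point estimate is 45% with an interval of roughly 40% to 50%, which is exactly the kind of caveat users of small-sample survey results need to see.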

Task 11: Make sure the staff responsible for the collection and processing of the data have been adequately trained

Reviewers of a performance measurement system should assess the extent to which persons collecting, entering, or otherwise involved in processing the data are doing so correctly. To the extent that the program has documented its data collection procedures, this will be easier to assess, by comparing the documented procedures to those actually being used. A typical concern is that staff regularly turn over; new staff need training in proper data collection and recording. For trained observer ratings (discussed above), the agency should periodically re-check samples of ratings to assess whether the observers have, over time, telescoped their ratings or otherwise begun deviating from the rating standards. For citizen surveys (discussed above), if done in-house, the reviewers should check the procedures being used, including the work of the persons responsible for sample selection, for administering the survey, and for processing completed questionnaires. If the survey is contracted out, you should at least check the reputation of the survey firm before finalizing the contract.
Task 12: Require that the agency providing performance data identify the particular office and primary person responsible for the data


Some agencies have used an outcome indicator specification form that identifies who is responsible for the indicator. The form could identify the responsible person by name, or only identify the responsible office. See Exhibit 3-6 for a sample of such a form. Use of a form like this provides an agency with a good summary description of each of its performance indicators.

Exhibit 3-6. Outcome Indicator Specification Sheet

Program: ______________________  Date: _______________

1. Outcome
2. Outcome indicator
3. Category of indicator (e.g., intermediate or end outcome)
4. Data source and collection procedures
5. Breakouts of the outcome indicator that are needed
6. Frequency of collection
7. Who is responsible for data collection and its quality

Source: Performance Measurement: Getting Results. Washington, DC: The Urban Institute Press, 2006.

Task 13: Establish a formal municipality-wide policy clearly identifying the importance of data quality and the respective responsibilities of managers and staff

This includes responsibility for:
x Training personnel in data collection;
x Implementing procedures for double-checking data entries;
x Examining the reasonableness of the data: are they in an appropriate range? (whether the examination is done manually or automatically through computer programming);
x Checking on data outliers;
x Sampling a subset of the data to assess its accuracy; and
x Establishing a process for periodically checking data quality.

Agencies should establish some procedure for periodically checking completeness and accuracy. For trained observer ratings, a supervisor should check a sample of ratings to assess whether they are reasonably complete and accurate. For citizen surveys, a supervisor in the organization administering the survey should check that interviewers, data entry persons, and computer programmers have been thorough and accurate in their work. For agency record data, entries made by human beings should be periodically sampled to assess their completeness and accuracy. Computer checks can often be included that look for certain types of mistakes, such as out-of-range numbers, unusual patterns in the recorded data (an approach that has been used with school test score data to help detect cheating), and missing data.

No performance measurement system is, or ever will be, perfect. The most important question is whether the performance data are sufficiently complete, accurate, and consistent to document performance and support decision making at various organizational levels. If the answer is positive, the system can be considered adequate in technical quality.
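The automated computer checks just described (out-of-range values, missing data) can be sketched as follows. The field names, plausible ranges, and records are illustrative assumptions, not from the guide:

```python
# Hypothetical agency records; field names and values are invented.
records = [
    {"district": "A", "response_days": 12, "complaints": 30},
    {"district": "B", "response_days": -3, "complaints": 25},   # out of range
    {"district": "C", "response_days": 8,  "complaints": None}, # missing value
]

def check_record(rec, field, low, high):
    """Flag missing values and values outside the plausible range."""
    value = rec.get(field)
    if value is None:
        return f"{rec['district']}: {field} missing"
    if not (low <= value <= high):
        return f"{rec['district']}: {field}={value} out of range [{low}, {high}]"
    return None

problems = []
for rec in records:
    for field, low, high in [("response_days", 0, 365), ("complaints", 0, 10000)]:
        issue = check_record(rec, field, low, high)
        if issue:
            problems.append(issue)
```

Running such checks automatically at data entry, rather than during report preparation, catches mistakes while the original records are still at hand.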


THE COST OF PERFORMANCE MEASUREMENT

The cost of performance measurement is always a significant issue. Agencies must balance the cost of performance data against the value added by the information obtained from the process. Finding the appropriate balance is the key.

The largest single cost of a performance measurement system usually occurs during the start-up period, when costs include the management and staff time used in designing and testing the measurement system. Once the system is operating, annual costs include: the staff time required to collect and analyze the performance data; the cost of data collection and analysis; the costs of any performance-measurement-related contracts (such as surveys conducted for the municipality by contractors) and contract oversight; and the costs of checking data quality.

In assessing performance measurement systems, therefore, the ultimate question is whether the performance data are sufficiently useful to justify the cost of the system. Does the information significantly help the municipality improve its services to the public, and thereby the outcomes of those services? And is the increased accountability achieved worth the costs? The use of performance information is described in Step 7.


Step 4.


THE IMPORTANCE OF ANALYZING PERFORMANCE DATA

After an agency has collected all these data, it needs to examine and analyze them to identify appropriate actions. Analyzing the performance data after they have been collected is a vital part of any outcome measurement system, yet it has often been done in an overly casual way. The suggestions in this chapter are aimed at transforming this key outcome measurement step into a more systematic, and considerably more useful, source of information for making service improvements. This section does not, however, discuss the subsequent analysis needed to relate the performance information to cost data, which is also needed for planning and budgeting; that is discussed later under Step 7.

HOW PERFORMANCE DATA CAN HELP

Analysis of data from a well-conceived performance measurement system can help an agency:
x Identify the conditions under which a program is doing well or poorly, and thus stimulate remedial actions;
x Raise key questions regarding a service that can help staff develop and carry out improvement strategies;
x Provide clues to problems, and sometimes to what can be done to improve future outcomes; and
x Help assess the extent to which remedial actions have succeeded.

The focus in this step is on ways municipalities can examine information to help agencies determine what changes and steps toward improvement, if any, should be taken. The focus is on basic tasks that all agencies can undertake, not on more sophisticated approaches such as extensive statistical analyses or in-depth impact evaluations. Exhibit 4-1 lists a number of tasks for analyzing program outcome data. Each of these tasks is discussed below.


Exhibit 4-1. Basic Tasks for Analyzing Program Outcome Data

Preliminary Task
Task 1. Tabulate the data for each performance indicator

Examine the Aggregate Outcome Data
Task 2. Compare the latest overall outcomes to outcomes from previous time periods
Task 3. Compare the latest overall outcomes to pre-established aggregate targets
Task 4. Compare the program's outcomes to those of similar programs, and to any outside benchmarks, such as performance levels achieved by other local governments

Examine Breakout Data
Task 5. Break out and compare outcomes by various categories of the workload, especially important characteristics of service customers (such as the city district in which they live, their age group, and/or their income group)
Task 6. Break out and compare outcomes by service characteristics, such as the type and amount of service the customer received
Task 7. Compare the latest outcomes for each breakout group with outcomes from previous reporting periods and with targets

Examine Findings Across Indicators
Task 8. Examine consistency and interrelationships among inputs, outputs, and outcomes
Task 9. Examine the outcome indicators together to obtain a more comprehensive perspective on performance

Make Sense of the Numbers
Task 10. Identify and highlight key findings
Task 11. Seek explanations for unexpected findings
Task 12. Provide recommendations to officials for future actions, including experimentation with new service delivery approaches


DO SOME PRELIMINARY WORK

Task 1. Tabulate the data for each performance indicator

Compute the values of each performance indicator for the reporting period. The analysts need to decide, for each indicator, the numeric form in which the results will be presented. For most performance indicators this means deciding whether to express the indicator as a number, as a percentage or rate, or both. (For example, should infant mortality be presented as the number of infant deaths, as the rate of infant deaths per 1,000 births, or both?)

EXAMINE THE AGGREGATE OUTCOME DATA

Task 2. Compare the latest overall outcomes to outcomes from previous time periods

Examine changes over time. After data become available for more than one reporting period for a performance indicator, the latest findings can be compared with findings from prior reporting periods to detect trends and other significant changes. If the data indicate substantial worsening or improvement, the agency should attempt to identify why this occurred. The following questions might be asked to help identify reasons for changes:
x Have external factors significantly affected outcomes?
x Have special events significantly affected outcomes during the reporting period?
x Have resources been reduced (or increased) to a degree that affected outcomes?
x Have legislative (central or local government) requirements changed in ways that affected the ability of the program to produce outcomes?
x Have program staff changed their procedures in a way that affected outcomes?

For example, if the number of housing fires and losses of life and property have been increasing in recent years, the relevant agency needs to determine the causes and the extent to which they can be prevented, such as through housing inspections, building codes, or educating smokers about cigarette disposal.
Reminder: When comparing reporting periods of less than one year, seasonal factors can be present and can affect outcomes, such as the condition of roads and the rate of unemployment. In such cases, the program should compare performance data for a given season with data for the same season in previous years.

This task is particularly helpful when an agency wants to assess the success of a change it has made in a service delivery procedure. In such cases, the agency can compare performance values from before the new procedure was introduced to values from afterwards.

Example: The municipality might have changed the application process for new businesses in order to speed up the process. Comparing the average (or median) time before the change to the time afterwards indicates whether a significant improvement occurred, as expected. If not, the municipality would need to consider other actions. Exhibit 4-2 shows that average response times for processing business loan requests declined from 52.4 to 46.8 days after an automated process was introduced. Whether this improvement is sufficient to justify changing the process for other types of loans is a judgment for the agency to make.


Exhibit 4-2. Comparison of Outcomes Before and After Process Change

Average response time (days), by quarter:
1997: Q1 53, Q2 51, Q3 56, Q4 49
1998: Q1 53, Q2 53, Q3 49, Q4 47
1999: Q1 43, Q2 45, Q3 44, Q4 N/A

The automated process was introduced after the first quarter of 1998.
Average before change: 52.4 days. Average after change: 46.8 days.

Source: Performance Measurement: Getting Results. Washington, DC: The Urban Institute Press, 2006, p. 157.
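The averages in Exhibit 4-2 can be reproduced with a short calculation. (The assignment of quarters to "before" and "after" is inferred from the reported averages of 52.4 and 46.8 days; the "N/A" quarter is omitted.)

```python
# Quarterly average response times (days) from Exhibit 4-2.
before = [53, 51, 56, 49, 53]      # 1997 Q1 through 1998 Q1, before the change
after = [53, 49, 47, 43, 45, 44]   # 1998 Q2 through 1999 Q3, after the change

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)
improvement = avg_before - avg_after  # days saved per request, on average
```

The same few lines apply to any before-and-after comparison of a service delivery change, provided the reporting periods on each side are comparable.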

Caution: The evidence provided by such before-and-after values is weak. It should not be relied on exclusively in making agency decisions about change. Other factors may have been present that could have caused the change, and they should be considered before deciding what to do next. Task 3. Compare the latest overall outcomes to pre-established targets. Targets might be those set specifically for each service or identified in a city-wide strategic planning process, such as an MDG-based strategy. The municipality and its departments might select the targets based on the results achieved in previous years, by examining the targets included in the municipalitys strategic plan, by examining the targets included in MDG plans established by the national government, by those targets used in other governments, and by a combination of these. Setting targets is discussed further in Step 6. Task 4. Compare the programs outcomes to those of similar programs and to any outside benchmarks, such as to performance levels achieved by other local governments. Actual values for each performance indicator should be compared to targets set by the government, such as may be done as part of the municipalitys budgeting process. Indicators for which the actual values are much worse, or much better, than the targets should be identified and an attempt made to identify why the difference occurred. (Target setting is discussed further in Step 6.) If comparable data are available on a performance indicator from other similar programs, these data can be used for comparisons. These other programs might be located within the agency, in other agencies in the municipality, or in other jurisdictions. If substantially better outcomes have been achieved elsewhere, ask program staff to assess why. For MDG performance indicators, data are becoming increasingly available on the internet, at least at the country level. In the future such data may also become available on individual municipalities. 
For example, Appendix E shows that the available country-level value for the indicator "proportion of population using improved sanitation facilities" in urban Kyrgyzstan was 75% in 2004. This value gives at least a rough idea of what has been achieved in other countries and thus can be used as a benchmark by a municipality.

For some indicators, external standards against which to compare a jurisdiction's own values may be, or may become, available, such as a central government's drinking water and air quality standards. These standards can even be built into the performance indicator. For example, a municipality


might use an indicator such as "Percent of days on which water quality did not meet national water quality standards." If the standard is not built into the performance indicator, the level measured, such as the amount of a certain pollutant in the municipality's air or water, can be compared to the national standard.

EXAMINE BREAKOUT DATA

Tasks 5, 6, and 7. Break Out and Compare Outcomes by Workload and Service Characteristics (Task 5), to Previous Reporting Periods (Task 6), and to Targets (Task 7)

These three tasks are likely to be done jointly and so are discussed together. Examine the breakouts for each outcome indicator to assess where performance is good, fair, or poor. Compare the outcomes for various breakouts such as:

- Customer characteristics (Do females have substantially more, or fewer, health problems, or more or less education, than males?);
- Organizational units (Does street cleanliness differ substantially among geographical areas served by different waste collection crews?);
- Workload difficulty (Does it take substantially longer for municipal employees to process certain types of business-permit applications than other types?); and
- Type and amount of service (Are there substantial differences in outcomes for clients of the municipality's employment training program across training approaches that vary in the type or amount of training provided?).

For any of these subgroups where the service outcomes appear to have been particularly bad, the agency should seek out the reasons and take corrective action. (This is discussed further under Step 7.) For subgroups whose performance appears to have been particularly good, the municipal agency should seek explanations to help it assess whether these successes can be transferred to other groups.
For example, if the outcomes for younger clients of a particular employment training program are particularly good, the agency might consider actions directed toward improving the outcomes for other age groups, or it might reconsider whether that particular type of program is appropriate for the other age groups.

Comparing breakouts across organizational units will indicate which units have particularly weak outcomes and need attention, such as training or technical assistance. This information can also be used as a basis for rewards (whether monetary or non-monetary) to persons or organizations with particularly good outcomes or efficiency levels.

Exhibit 2-11 listed a number of breakout categories likely to be relevant to municipal programs for which people are the primary workload (and not, for example, road maintenance programs, for which other characteristics, such as average daily travel or soil conditions, are likely to be the appropriate breakout categories). Exhibit 4-3 provides an example of a report for a health program that provides comparisons across three client demographic characteristics (gender, age group, and race/ethnicity) and three service characteristics (the number of sessions clients attended, the facility used, and the attending caseworker).


Exhibit 4-3. Sample Comparison of Multiple Breakout Characteristics
Clients That Reported Improved Functioning After Receiving Health Care

                          Number of   Considerable      Some              Little            No
Characteristic            Clients     Improvement (%)   Improvement (%)   Improvement (%)   Improvement (%)
Gender
  Female                  31          10                19                55                16
  Male                    43          30                40                21                 7
Age Group
  21-30                   13          23                31                31                15
  31-39                   28          21                32                36                11
  40-49                   24          21                29                38                13
  50-59                    9          22                33                33                11
Race/Ethnicity
  African-American        25          32                20                32                16
  Asian                    5           0                60                20                20
  Hispanic                20          15                40                40                 5
  White/Caucasian         24          21                29                38                13
Number of Visits
  1-2                     13          15                 8                54                23
  3-4                     21          24                33                33                10
  5+                      40          23                38                30                10
Facility
  Facility A              49          24                27                35                14
  Facility B              25          16                40                36                 8
Caseworker
  Health Care Worker A    19          26                26                42                 5
  Health Care Worker B    18          11                39                33                17
  Health Care Worker C    18           6                17                56                22
  Health Care Worker D    19          42                42                11                 5
All Clients               74          22                31                35                12

Source: Analyzing Outcome Information: Getting the Most from Data, The Urban Institute, 2004.

Comparisons can be made of outcomes both within and among the characteristics. For example, Exhibit 4-3 indicates that, for the reporting period, the health care program achieved considerably poorer outcomes for females than for males. The agency should ask such questions as: Why did this occur? Has this also been the case in previous years? How close were these actual results to the targets set for these groups? The exhibit also indicates that most clients who had made only one or two visits showed little or no improvement. In addition, most of the clients of health care worker C showed little or no


improvement. It appears likely that many females made only one or two visits and were seen by health care worker C. (These possibilities can be checked by cross-tabulating the data for these characteristics.)

This example also illustrates the danger of jumping to conclusions too soon. Do the data in Exhibit 4-3 show that health care worker C is a poor health provider? Not necessarily. For example, health care worker C might have assisted many females under circumstances in which they could attend only one or two visits. As is typical, the data indicate what has happened, but more information on why it happened is almost always needed before actions should be taken.

To make comparisons more meaningful, the analysis should examine each organizational unit's outcomes by a variety of relevant breakout characteristics, such as customer demographic characteristics and difficulty of the incoming workload. Exhibit 4-4 illustrates a breakout for a program providing assistance to small businesses. The analysis examines the outcomes for each of three small business assistance offices for each of three levels of difficulty the client businesses were experiencing at the time they came in for help. The footnotes in the exhibit indicate the type of action a program might take in light of such outcome data.

Exhibit 4-4. Outcomes by Organizational Unit and Difficulty of Pre-service Problems (Small Business Assistance Program)

Percent of Clients Whose Outcomes Have Improved 12 Months After Intake (a)

                                          Difficulty of problems at intake
                                          Minor   Moderate   Major   Total
Small business assistance office 1        52      35         58      48
Small business assistance office 2 (b)    35      30         69      44
Small business assistance office 3 (c)    56      54         61      57
All units                                 47      39         63      50

(a) Tables such as these should also identify the number of clients in each cell. If a number is very small, the percentages may not be meaningful.
(b) Office 2 clients with minor problems at intake have not shown as much improvement as hoped. Office 2 should look into this (such as by identifying what the other offices are doing to achieve their considerably higher success rates), report on its difficulties, and provide recommendations for corrective actions.
(c) A substantial proportion of office 3 clients with moderate problems at intake showed improvement. The program should attempt to find out what is leading to this higher rate of improvement so offices 1 and 2 can use the information. Office 3 should be congratulated for these results.

Source: Performance Measurement: Getting Results. 2nd Edition. Washington DC: The Urban Institute Press, 2006, p. 125.
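A breakout table like Exhibit 4-4 can be generated directly from client-level records with a simple cross-tabulation. The sketch below is illustrative only; the record fields (office, difficulty, improved) and the sample data are assumptions, not taken from the guide:

```python
# Sketch: cross-tabulate percent of clients improved by office and
# intake difficulty. Records and field names are illustrative assumptions.

clients = [
    {"office": 1, "difficulty": "Minor", "improved": True},
    {"office": 1, "difficulty": "Minor", "improved": False},
    {"office": 2, "difficulty": "Major", "improved": True},
    {"office": 2, "difficulty": "Major", "improved": True},
    {"office": 2, "difficulty": "Major", "improved": False},
]

def percent_improved(clients):
    """Percent improved for each (office, difficulty) cell, plus the
    cell count, so very small cells can be spotted (see footnote a)."""
    cells = {}
    for c in clients:
        key = (c["office"], c["difficulty"])
        total, improved = cells.get(key, (0, 0))
        cells[key] = (total + 1, improved + (1 if c["improved"] else 0))
    return {key: (round(100.0 * imp / tot), tot) for key, (tot, imp) in cells.items()}

for (office, difficulty), (pct, n) in sorted(percent_improved(clients).items()):
    print(f"Office {office}, {difficulty}: {pct}% improved (n={n})")
```

Reporting the cell count alongside each percentage follows the caution in the exhibit's footnote about small cells.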

Finally, relating outcomes to the type and level of service provided to individual customers can be very helpful to program managers in determining which procedures work and which do not. To do this, the program will need to identify the type or amount of service provided to each customer and connect that information to each of the relevant outcome indicators.

For example, suppose an employment training program is trying different ways to train unemployed persons, such as varying the degree to which the training relies on small versus large group sessions. The program should record which clients were trained in each way and then relate that information to success in getting each of those clients employed. The overall success rate for clients in


each group can then be compared to identify which, if any, of the approaches had a substantially higher rate of employment success.

Exhibit 4-5 illustrates a report comparing employment success for two lengths of training programs and, at the same time, compares these for clients with different levels of education at the time of entry into the employment training program. It indicates that persons with little education were helped much more by the long program, while those with more education did about equally well under either program. Therefore, the training program could save money, and obtain more overall benefit, by using the short program only for persons with more education and the long program only for those with little education.

Exhibit 4-5. Comparison of Different Program Variations

Percent of Clients Employed Three Months after Completing Service

Education Level at Entry          N     Short Program                  Long Program                   Total
Completed high school             100   62% employed (of 55 clients)   64% employed (of 45 clients)   63% (of 100 clients)
Did not complete high school      180   26% employed (of 95 clients)   73% employed (of 85 clients)   48% (of 180 clients)
Total                             280   39% (of 150 clients)           70% (of 130 clients)           54% (of 280 clients)

Is action needed? Encourage clients who have not completed high school, rather than those who have, to attend the long program. Use these figures to help convince clients of the longer program's success in helping clients secure employment.
Adapted from: Analyzing Outcome Information: Getting the Most From Data. Washington DC: The Urban Institute, 2004, p. 20.
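The totals in Exhibit 4-5 are client-weighted averages of the cell percentages. The quick check below reproduces the short-program and long-program totals from the cell values shown in the exhibit; it is a verification sketch only:

```python
# Sketch: recompute the column totals in Exhibit 4-5 as client-weighted
# averages of the cell rates. Cells are (percent employed, number of clients).

short = {"completed_hs": (62, 55), "no_hs": (26, 95)}
long_ = {"completed_hs": (64, 45), "no_hs": (73, 85)}

def weighted_rate(cells):
    """Overall percent employed across cells, weighted by client counts."""
    employed = sum(pct * n / 100.0 for pct, n in cells.values())
    clients = sum(n for _, n in cells.values())
    return round(100.0 * employed / clients)

print(weighted_rate(short))   # prints 39
print(weighted_rate(long_))   # prints 70
```

Such a check guards against the arithmetic slips that flattened or retyped tables often introduce.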

Another example: Suppose a road repair program has a choice among various road repair materials. It could use each type of material on a random selection of streets (or on groups of streets selected to have similar traffic and soil conditions). If, several months later, the program assessed the condition of each group of streets, it would have evidence as to which type of material held up better. Combined with cost information on each type of material, the program could then make an informed decision about which type to use in the future.

EXAMINE FINDINGS ACROSS INDICATORS

Task 8. Examine Consistency and Interrelationships Among Inputs, Outputs, and Outcomes

The amount of input (e.g., funds and staffing) should be consistent with the amount of output. The amount of output, in turn, should be consistent with the amount of intermediate and end outcomes achieved. If an agency has not been able to produce the amount of output anticipated, the amount of outcome achieved is also likely to be less than expected. Similarly, if the expected intermediate outcomes did not occur as hoped, end outcomes can be expected to suffer as well. These relationships do not always hold, but they can sometimes help explain why measured outcomes were not as expected. For example, if a program unexpectedly lost staff during the year, this would likely be an important


reason for a smaller number of customers helped. Similarly, if fewer clients came in during the year than budgeted for, this would also likely be an important reason for fewer customers helped.

Task 9. Examine the Outcome Indicators Together to Obtain a More Comprehensive Perspective on Performance

Most programs will need to track more than one outcome indicator. It is tempting to examine these indicators only one at a time. However, programs should also examine the set of outcome indicators together in order to obtain a better understanding of performance and, thus, of what improvements may be needed. A program might, for example, examine the extent to which improvements in intermediate outcomes, such as reduced agency response times, are associated with greater subsequent successes, such as fewer contagious disease victims or reduced fire losses.

Another example: If the number of businesses assisted by program staff has declined, it can be expected that the number of assisted businesses whose situation improved would also decline. However, declines in the number assisted might also lead to improvements in other outcomes. Smaller numbers assisted might lead to higher rates of success, because program staff might then be able to spend more time with each business.

Another example: The manager of a traffic safety program might find that an indicator based on trained observer ratings showed traffic signs and pavement markings to be in satisfactory condition, while another indicator, based on findings from a citizen survey, found that a substantial percentage of citizens had problems with the signs. A third indicator showed increasing traffic accidents. A fourth indicator reported a high percentage of delayed responses to requests to fix traffic sign problems. The agency would need to consider all of these findings (and others) in determining what action, if any, is needed.

A program sometimes has directly competing objectives.
In such cases, the outcomes relating to these multiple objectives need careful examination to achieve a reasonable balance. For example, reducing school dropouts might lower average test scores because more students with academic difficulties are tested. Improved water quality might be associated with reduced economic performance in an agricultural industry. Agencies need to examine such competing outcomes together to assess overall program performance.

MAKE SENSE OF THE NUMBERS

Task 10. Identify and Highlight Key Findings

Performance measurement systems are likely to provide large amounts of data each reporting period, too much for many, probably most, managers and staff to absorb. Therefore, an important element of the data analysis process is to establish a procedure that highlights the data most warranting attention.

A simple step is to ask someone in the program to examine the comparisons, such as those described above (Tasks 2-9), and make judgments as to which of the performance findings are important. The examiners might prepare written highlights or merely flag the data (such as by circling it or marking it in red).

A more formal procedure is to establish an exception reporting process. The program establishes target ranges within which it expects the values for its indicators to fall and concentrates on indicators whose values fall outside those ranges. (This approach is an adaptation from the field of statistical quality control, sometimes used by manufacturing organizations.)
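The exception-reporting idea described above can be sketched in a few lines; the indicator names and target ranges below are illustrative assumptions:

```python
# Sketch: exception reporting -- surface only the indicators whose latest
# values fall outside pre-set target ranges. Names and ranges are
# illustrative assumptions.

target_ranges = {
    "Response time to complaints (days)": (2.0, 5.0),
    "Street cleanliness rating (1-4)": (2.5, 4.0),
    "Percent of clients employed after training": (45.0, 100.0),
}

def exceptions(values, target_ranges):
    """Return only the indicators whose latest value is outside its range."""
    return {
        name: value
        for name, value in values.items()
        if not (target_ranges[name][0] <= value <= target_ranges[name][1])
    }

latest = {
    "Response time to complaints (days)": 6.5,
    "Street cleanliness rating (1-4)": 3.1,
    "Percent of clients employed after training": 39.0,
}
print(exceptions(latest, target_ranges))
```

Only the out-of-range indicators (here, response time and employment rate) would appear in the highlighted section of the report.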


Once chosen, the target ranges can be programmed into performance reporting software so that performance indicator values falling outside them are automatically highlighted for program attention.

Task 11. Seek Explanations for Unexpected Findings

A performance measurement system should explicitly call for explanatory information along with outcome and efficiency data. This is particularly important in situations where the latest outcome data are considerably worse than anticipated. The municipality's chief executive officer and its elected officials might require explanations for below-target performance (as has been done by New Zealand and some states in the United States). A municipality might obtain explanations from such sources as the following:

- Discussions with program personnel
- Discussions/focus groups with program customers
- Responses to open-ended questions on customer surveys
- Examination of the breakout data (as suggested under Tasks 5-7)
- Special examinations by teams selected by the program
- In-depth program evaluations

Explanatory information can take many forms. Probably the form most used is qualitative judgments by program personnel as to why the outcomes were what they were. Such judgments might be mere rationalizations and excuses; however, program personnel should be encouraged to provide meaningful information. At the other extreme, special studies, such as in-depth program evaluations, can be expensive and time-consuming and thus can be done on only a small fraction of an agency's programs in any given reporting period. Between these two extremes, program personnel should usually be able to provide a variety of information, some quantitative and some qualitative, that will reveal the reasons for problems. Likely reasons include the following, each of which requires a different program response:

- Staff and/or funding changes, such as cutbacks
- Legislation or regulatory requirements that have changed or been found inappropriate
- Poor implementation (for example, inadequate training, inexperience, or poor motivation of staff)
- External factors over which the program has limited or no control, such as an increasingly difficult workload; significant change in the international, national, provincial, or local economy; unusual weather conditions (e.g., unusually heavy rains can increase runoff, leading to increased pollution of rivers and lakes); new international pressure or competition; new businesses starting up in, or leaving, the jurisdiction (thus affecting outcomes such as employment and earnings); and/or changes in the composition of the relevant population
- Problems in the program's own practices and policies

Another source of explanatory information is the responses to open-ended questions on customer surveys.
As noted in Step 3, if an agency surveys its customers to obtain performance data, the questionnaire should give respondents the opportunity to explain the reasons for the ratings they gave (particularly any poor ratings) and to provide suggestions for improving the service. Tabulations of responses provide clues as to the causes of poorer-than-desired performance and may provide useful suggestions for improvement.


Sometimes breakouts themselves can provide explanations. If, for example, the program breaks out failures by their likely reasons, this information can be tabulated across all customers to identify the likely causes of less-than-satisfactory performance. For example, traffic safety agencies might identify the causes of traffic accidents. Performance indicators should identify the total number of accidents and disaggregate the total into categories by cause, allowing the agency to focus on causes it can change. Traffic accidents due primarily to mechanical failure or bad weather, for example, are much less controllable by municipal agencies than accidents related to problem intersections or poor traffic signs and signals.

Another example: Surveys that seek information on citizen participation rates (such as citizen use of public transit, libraries, parks, and other services) can ask non-participating citizens why they did not use the service. Such reasons might include:

a. Did not know about the service
b. Service times were inconvenient
c. Service locations were inconvenient
d. Heard that the service was not good
e. Had previous bad experiences with the service
f. Can't afford to pay for the service
g. Don't need the service
h. Don't have time for the service

Responses (a) through (f) refer to things the municipality can potentially correct. For example, if a substantial proportion of respondents indicated that the hours of operation fell when they had to be at work, the agency could consider whether changes in its service hours are feasible. The last two responses, don't need and don't have time for the service, are reasons over which the agency probably has little or no influence; no action by the municipality is likely to be available for these categories. Note that even when an agency cannot take direct action itself, it may still want to provide suggestions to other parts of the government or to higher-level governments. For example, while vehicle mechanical failures may not be within the control of local governments, the central government can take action if significant patterns of such failures occur.

Municipalities should recognize that responsibility for some indicators is actually shared with other organizations. Sharing may be among multiple agencies within the government, with other levels of government (such as the district, provincial, and central levels), and even with other sectors of the economy, such as businesses, churches, and individual citizens (for example, the responsibility of citizens to get their children to school).

The important point here is that, by properly designing data-gathering instruments and analyzing the resulting data, the program can obtain important clues as to what the problems are and what the program can do about them.

Additional suggestion: Categorize each outcome indicator by the degree of influence the program has over it, and include this information in performance reports. The degree of influence might be expressed in three categories, such as "little or no influence," "modest or some influence," and "considerable influence." An agency using such categories should define them as specifically as possible and provide illustrations of each.
Such categorization helps users of the information understand the extent to which the agency is likely to be able to affect outcomes. (For an outcome indicator to be included in a program's set, the program should have some influence over it, even if small.) The outcomes can then be broken out by these influence categories. This will provide users of the outcome information with better, and fairer, information for interpreting the outcome data.


For most outcome indicators, agencies and their programs will be less able to influence end outcomes than intermediate outcomes. Even most intermediate outcomes are not likely to be fully controllable by any agency. This does not absolve agencies of the responsibility to recognize the amount of influence they do have and to take action to attempt to improve outcomes for citizens.

Task 12. Provide Recommendations to Officials for Future Actions, Including Experimentation with New Service Delivery Approaches

Those who examine the data, whether professional analysts, staff, or managers, should, where possible, also provide recommendations to other officials based on what the performance information shows. Those who have examined the data in some detail are likely to gain insights as to what should, or should not, be done. Elsewhere, we have emphasized that users of the information should not jump to conclusions and take action based solely on the performance data. Normally the appropriate recommendations will be to undertake further examinations of causes, of ways to alleviate those causes, and of costs. When performance data indicate the presence of problems, the solutions are often not clear. However, if, based on findings from procedures such as those discussed above, analysts identify performance problems, they should be able to suggest one or more of the following types of actions:

- A wait-and-see strategy, in the expectation that the unsatisfactory outcomes are aberrations, the result of a temporary problem rather than a trend, and will correct themselves in the future
- Specific corrective procedures (with the provision that, when future outcome reports become available, they should be assessed to determine whether the actions appear to have resolved the problems)
- Further examination of program elements that the explanatory information indicated might be causing problems
- An in-depth evaluation to identify causes and the corrective actions that should be taken
- An experiment to test a new procedure against the current one

The experimental option may be particularly interesting to officials who like to innovate and try out new ways to provide services. Sometimes it will be practical to design a simple experiment with a new, or modified, service approach and use performance data to assess the results before making a commitment to the new approach. A program that has an ongoing performance measurement system can use data from that system to help evaluate the new procedure. An example is provided in Exhibit 4-6. The exhibit compares computerized with manual processing of eligibility determinations in a program that applied the new process to part of its incoming work.


The program tracked two outcomes separately for each procedure for a few months and then compared the outcomes for the two procedures.

A simpler and more common, but less powerful, approach to examining new or modified processes is to compare outcomes under the old service procedures with the outcomes that occurred after introduction of the new procedures. Exhibit 4-2 (the table showing Comparison of Outcomes Before and After Process Change, shown earlier in this section) illustrates this approach.

Exhibit 4-6. Computer versus Manual Procedures for Processing Eligibility Determinations*

                        Error       Applications Taking More Than
Processing Procedure    Rate (%)    One Day to Process (%)
Computer                9           18
Manual                  8           35

* About 250 applications were processed by each procedure.

Source: Performance Measurement: Getting Results. Washington DC: The Urban Institute Press, 2006, p. 144.
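A comparison like Exhibit 4-6 can be computed from application-level records. The sketch below is illustrative; the record structure and sample values are assumptions, not data from the guide:

```python
# Sketch: compare two processing procedures on error rate and
# slow-processing rate, as in Exhibit 4-6. Sample records are
# illustrative assumptions.

applications = [
    {"procedure": "computer", "error": False, "days": 0.5},
    {"procedure": "computer", "error": True,  "days": 2.0},
    {"procedure": "manual",   "error": False, "days": 3.0},
    {"procedure": "manual",   "error": False, "days": 0.8},
]

def procedure_summary(applications):
    """Percent with errors and percent taking more than one day, per procedure."""
    summary = {}
    for proc in {a["procedure"] for a in applications}:
        group = [a for a in applications if a["procedure"] == proc]
        n = len(group)
        summary[proc] = {
            "error_rate_pct": round(100.0 * sum(a["error"] for a in group) / n),
            "over_one_day_pct": round(100.0 * sum(a["days"] > 1 for a in group) / n),
        }
    return summary

print(procedure_summary(applications))
```

With such records, the same summary can be rerun each reporting period as the experiment accumulates cases.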

The above approaches are similar to a variety of standard program evaluation procedures. Exhibit 4-2 is an illustration of a pre-post program evaluation. The example in Exhibit 4-6 is an illustration of an experimental approach. If the incoming applications for eligibility can be assigned randomly to the two different processing procedures, this would be a particularly powerful evaluation approach, an example of a random assignment controlled experiment. A full program evaluation would use the outcome data but add such steps as more extensive statistical analysis and an intensive search for explanations for the outcomes.


Step 5.


THE IMPORTANCE OF GOOD REPORTING

The importance of good presentation of performance information has only recently begun to be fully recognized. Technological advances are making it much easier to provide clear, attractive reports. A variety of graphic forms, and even color, are becoming easier and less costly to use. In addition to traditional tables, performance data can be provided in the form of bar charts, trend charts, pie charts, maps, and other graphics. Photographs can be added to illustrate performance results.

How the findings are reported is likely to be as important as what is reported.

Of course, report presentation also vitally needs attention to content. What information needs to be provided to the various audiences? The focus in this step is on written, not oral, reporting. Oral reporting techniques are also important but are beyond the scope of this manual. Below, we first address internal reporting and then external reporting (when reports become public).

INTERNAL REPORTING

Internal performance reporting is vital to stimulating service improvements. The form, substance, and dissemination of performance reports play a major role in providing useful feedback. Key questions to ask yourself regarding internal performance reporting include:

- Are the reports clear?
- Do they contain useful, important information?
- Are they timely? Are the data reported sufficiently frequently? When reported, are the data reasonably up to date? (Some performance indicators may need to be reported more frequently than others. For example, data from household surveys are likely to be needed less frequently, perhaps only annually or quarterly, while reports on the incidence of crimes, fires, water main breaks, etc., need to be produced considerably more frequently, to enable managers to take timely corrective actions.)
- Are the reports adequately summarized or highlighted to allow very busy managers to digest the information in a reasonable amount of time?
- Are they disseminated to all those who need and can use the information? Often missing is dissemination to the persons who are most involved and can do something about the data: the first-line staff!


A program is likely to find it useful to track a large number of outcome indicators for internal use. For external reporting, however, a considerably shorter list is likely to be appropriate. The municipality's and departments' highest officials and the local legislative body are likely to want a relatively short list of indicators.

SOME EXAMPLES OF REPORT FORMATS

Throughout the world, it is surprising how difficult performance reports have been to read. A number of formats are illustrated here. The first formats use tables. The intent of each is to illustrate the presentation of comparisons, a key way to make performance data useful and more interesting to readers. These tabular formats (which use hypothetical data) can be used for both internal and external reports. They are a sample of the many formats that can be constructed based on the special needs of a program.

FORMAT 1, EXHIBIT 5-1, compares actual outcomes to targets for at least one earlier period, and sets new targets, for each of several outcome indicators. This is a very useful format when setting new targets for the future.

FORMAT 2, EXHIBIT 5-2, compares actual outcomes to targets for both the last and current reporting periods. It does this for each of a number of outcome indicators. This format is likely to be a key one for most programs.

FORMAT 3, EXHIBIT 5-3, is similar to Format 1 but shows both values for the current reporting period and cumulative values for the year. This format is useful for outcome measurement systems that provide data more frequently than once per year (as is usually desirable).

FORMAT 4, EXHIBIT 5-4, compares the latest outcomes for various geographical locations. This format is useful for making comparisons across any breakout categories identified by the program. For example, a program may want to illustrate comparisons across managerial units or particular customer characteristics. To do this, the program would change the column labels in Exhibit 5-4 to correspond to the relevant breakouts.

FORMAT 5, EXHIBIT 5-5, displays outcome data for one indicator across a number of client demographic and service characteristics (in this case, satisfaction with several services across several cities).
This format permits a number of comparisons for assessing the categories of clients, and the forms of service, for which results have been good or poor. The format is likely to be highly useful for internal reports in identifying where improvements are likely to be needed. Such multiple cross-tabulation enables program staff to identify which respondent characteristics show unusually positive or negative results for a particular outcome indicator. The format can be used to report on any indicator for which data on a variety of customer or program characteristics have been obtained. (In the health program example of Exhibit 4-3, for instance, such findings suggested that the program should seek to find out why it had such low success rates with females and with the clients of health care worker C. Data obtained in later reporting periods would indicate whether any changes made improved outcomes for patients in those groups.)

The above formats use tables to present the data. Other graphic presentations can be more attractive to users, especially for external consumption. A picture is often worth 1,000 words (or numbers)! Options include:


Graphs. Graphs are especially good for showing trends: the values of an indicator plotted against time, perhaps by month, quarter, or year. Exhibit 5-6 is an example.

Bar charts. These are an excellent way to show comparisons. Exhibit 5-7 displays a series of bar charts rating a number of outcome indicators for New York City's transit system. Similar ratings were provided for each of the subway system's 19 lines. The published report used color, making the presentation considerably more attractive than this exhibit. (These data were obtained, assembled, and analyzed by a citizens' group, the Straphangers Campaign.)

Maps. Mapping performance information has become very popular as inexpensive mapping software has emerged. Maps are a dramatic way to present geographical data, such as by neighborhoods or districts within municipalities. To make clear which neighborhood is which, a map might show numbers inserted in each geographical area rated, use shading or colors to distinguish various rating levels, do both (as shown here), or be accompanied by a table displaying the values for each neighborhood. Exhibit 5-8, through the use of map shading, compares ranges of low-weight births across neighborhoods. Next to each of these maps in the report, a table provided the actual values for each neighborhood. For some map presentations, if ample room is available, the actual values might be included on the map itself. The geographical areas might display such indicators as the percent in each neighborhood who are employed, are healthy, are satisfied with particular services, feel safe walking around their neighborhood during the day or at night, and so on.

Some caveats on graphics: with the rise of easy computer graphics has come a tendency to overdo presentations, sacrificing clarity for artistic effect.
For example, pie charts look nice, but they do not make it easy for readers to judge size differences. Sometimes performance reports present bar charts in three dimensions; again, this can make differences hard to judge. If the display is only intended to give an overall impression, this may be acceptable, but not if the intention is to support careful comparisons. Similarly, when using maps, the temptation exists to put too much information on the map, reducing readability.
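As a minimal illustration of the bar-chart advice above, a comparison display can be mocked up even in plain text. The services and percentages below are hypothetical examples, not figures from any exhibit:

```python
# Minimal sketch: a plain-text bar chart for comparing services.
# All service names and percentages here are made-up illustration data.
def bar_chart(ratings, width=40):
    """Return one bar line per service, scaled to the largest value."""
    peak = max(ratings.values())
    lines = []
    for service, value in ratings.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{service:<18}{bar} {value}%")
    return "\n".join(lines)

print(bar_chart({"Water": 75, "Solid Waste": 48, "Street Cleaning": 70}))
```

Dedicated charting tools produce far more polished output, but the principle is the same: bars scaled against a common maximum make comparisons easy to read at a glance.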


A considerable danger in performance measurement is overwhelming public officials, on both the executive and legislative sides, with indicator data. Whoever is responsible for preparing the performance report (whether in the agency, in a central management or budget office, or in the legislature) needs to identify and extract what they believe to be the key findings from the data. This becomes even more important as agencies expand their reporting of indicator breakout data. What are the important issues, problems, successes, failures, and progress indicated by the data? What is likely to be of concern and interest to the audience of the report? Any important missing outcome should also be identified. Thus, performance reports should contain not only the data but also a summary of the report's highlights (emphasizing information that warrants particular attention) to help users focus


quickly on the important findings. These highlights should include both success and failure (problem) stories. The summary should be a balanced account to reduce the likelihood that readers will consider the report self-serving. Explanatory information should be included as part of the report and should be clear, concise, and to the point. Reports, especially those going outside the program, should also identify any actions the program has taken, or plans to take, to correct problems identified in the outcome report. This step, along with explanatory information, can help avert, or at least reduce, unwarranted criticism.

Highlighting can be done by:
- Writing out the key findings and issues raised
- Physically highlighting the data, such as by circling or marking in red the data that raise flags (as illustrated in Exhibit 4-3 in Step 4)
- Combinations of these

Preferably, each performance report would contain selected performance comparisons (such as breakouts showing relevant differences in outcomes for various demographic groups) to help in identifying the key findings. (See Step 4 for a discussion of comparison options.)


Explanatory information helps readers interpret the data. It also gives program managers and their staffs an opportunity to explain unexpected, undesirable outcomes, thus potentially reducing their concern that the data will be misused (and used against them). Explanations can:
1. Be qualitative (including judgments), quantitative, or a combination.
2. Be provided when any of the comparisons show unexpected differences in outcome values, for example when: (a) the actual value for an outcome indicator deviates substantially from the target value (better or worse); or (b) the outcome values show major differences among operating units, categories of customers, or other workload units.
3. Distinguish internal from external explanatory factors. Program personnel are likely to have at least some influence over internal factors, such as a significant unexpected loss of program personnel (or other resources) during the reporting period. External factors might include unexpected changes in national economic conditions, highly unusual weather conditions, or the unexpected loss (or gain) of industry within a particular jurisdiction.
4. Incorporate the findings from any special evaluations that provide an in-depth examination of the program and its achievements. Such findings are likely to supersede the outcome data collected as part of the routine outcome measurement process. At the very least, recent program evaluation findings should be given prominence in the presentation of a program's outcomes. Such studies are likely to provide considerably more information about the


impacts of the program than outcome data alone can reveal.

EXTERNAL REPORTING

External reporting of performance information is a major way for a municipality to become accountable. It enables elected officials, interest groups, and citizens to see what they are getting for their money, at least to some extent, since the information inevitably will be filtered, at least somewhat, by the internal organization. These reports can be called "How-is-the-Municipality-Doing?" reports.

External reporting also has the potential operational use of motivating the municipal government to do better on the performance indicators being reported. Such motivation is likely to become stronger as more municipalities provide external performance reports, permitting each municipality to compare its service outcomes to those of other similar agencies. These comparisons, however, have dangers. They can be misleading and unfair for a variety of reasons, such as comparing agencies that operate under considerably different environmental conditions or that use very different data collection procedures. On the whole, however, comparisons will be made and can serve a useful motivational, as well as accountability, function.

Web-based reporting is beginning to replace at least some paper reports. Many local governments have their own web sites, as do individual agencies in many of the larger governments. Some of these have placed performance data on their web sites. This trend is likely to continue, making electronic reporting a major way citizens and interest groups will obtain performance information. Municipalities are even beginning to include data on selected performance indicators for each of their neighborhoods or districts. (For example, New York City posts data for each of its 59 citizen community board areas.) Citizens who have computers can enter their addresses and find the relevant data.
Key issues for web-based performance reporting are:
- Many citizens, some of whom are likely to be the most concerned about low levels of service, do not have ready access to the internet.
- Many persons who have access are not likely to look for performance information unless some particular issue faces them.
- The information is often not summarized in any way, leaving it to the user to extract the highlights.
- Often the performance information on the web sites is not kept up to date in a timely manner; it does not contain the latest available data.

The following are suggestions for reporting outside the agency:
- Be selective as to which, and how many, indicators are included. Focus on those indicators most likely to be of interest to the audience (probably not output or efficiency indicators). Selectivity does not mean selecting only those indicators that make the agency look good; reporting needs to be balanced in order to be credible.
- Pay particular attention to making the reports easily understandable. As discussed earlier, use charts and graphs, perhaps to supplement tables. Use color if practical.
- Obtain feedback on the reports periodically from major constituencies, such as elected officials, funders, and the public (perhaps obtaining feedback from the public through focus groups). Ask about the usefulness and readability of the performance reports. Use the feedback to help tailor


future performance reports to the particular audience.

OTHER INFORMATION THAT SHOULD BE INCLUDED WHEN REPORTING PERFORMANCE DATA IN BOTH INTERNAL AND EXTERNAL REPORTS

In addition to explanations for unexpected findings, performance reports should also contain information such as the following. This information is needed for both external and internal reports; some of it would likely be provided in footnotes.
- Each performance indicator should be clearly defined, including what the indicator covers and the time period covered by the performance data.
- Any important uncertainties or limitations in the data should be identified.
- For indicators based on survey data (whether surveys of citizens or ratings made by trained observers), the following additional information should also be provided:
  - The total number of items surveyed, such as the number of respondents, both in total and for each category of respondents for whom data are presented in the report (such as each gender or each racial/ethnic group)
  - The response rates (an important indicator of the likelihood of non-sampling error)
  - The dates when the survey was conducted
  - How the survey was conducted (e.g., in-person, phone, mail, web-based, etc.)
  - What organization conducted the survey
- Any substantial changes in the performance indicators and the data collection procedures from previous reporting periods should be identified. (Changing these without giving good reasons can give the appearance of selectivity to hide unpleasant findings.)
- If indices are reported: (a) the individual elements that comprise the index should be clearly and fully identified; and (b) the values for each element should be readily available.
- Programs may also want to include information on the extent to which the program can influence the indicator values.

WHAT IF THE PERFORMANCE NEWS IS BAD?
Almost certainly, every performance report will include some indicators showing results significantly below expectations (such as compared to the targets for the reporting period). A major function of performance measurement systems is to surface below-par outcomes so that those who can do something about them are alerted and, after corrective actions are taken, can assess whether the actions have produced the desired results. Agency and program officials should include with their performance reports both explanations as to why any poor outcomes occurred and the steps taken, or being planned, to correct the problem. One city agency head once said, "If the data look good, I will take the credit. If the data look bad, I will ask for more money." That is another approach.


DISSEMINATION OF PERFORMANCE REPORTS

Performance reports should be disseminated to everyone on the program's staff as soon as possible after the data become available. Program personnel should be given the opportunity to provide any additional relevant explanatory information for the final formal report before it is released outside. This will encourage all program members to feel they are part of a team whose purpose is to produce outcomes that are as good as possible. As will be discussed further in Step 7, after each performance report the program manager might hold staff meetings on the outcome data to identify any actions that the performance data indicate are needed.

The performance report (including data, explanatory information, and the highlights or summary) should then be provided to offices outside the program. A key question to consider is how much detail should be given to those outside the program. Avoid overloading outside readers with detail. Select indicators that are likely to be of most interest to those outside the program. Breakout data also need to be provided, but selectively, to avoid overwhelming readers with too many numbers. The breakouts provided should be those considered most important to report users. Outcomes broken out by customer demographic characteristics, such as race or ethnicity, for example, are often quite important and of considerable interest. More breakout detail can be included in appendixes.

Newspapers, radio, television, and (increasingly) the Internet are all ways to disseminate the performance material. Here again, the program will need to decide what detail will be of interest to these groups. Inclusion of explanatory information and statements of corrective actions already taken, or planned, can help defuse negative reactions to data that appear to represent poor performance.
External performance measurement reporting is of special concern to agency officials, who can be expected to be particularly apprehensive about performance reports provided to the news media. The objective should be to provide media representatives with an understanding of what the data tell and what the data's limitations are. Avoid choosing just the data that make the agency look good, tempting as this may be. Over the long run, the media, special interest groups, and the public (possibly already suspicious of government) will catch on, and the reports will lose credibility. Summary annual performance reports can be an effective way to communicate with citizens and increase public credibility, as long as they are user-friendly, timely, and provide a balanced assessment of performance.


Exhibit 5-1 Reporting Format 1: Performance versus Targets and Setting New Targets
Indicator | Survey 2004 | Target 2005 | Survey 2005 | +/- | Target 2006
Percent of citizens satisfied with cleanliness in the street | 75% | 82% | 91% | +24% | 93%
Percent of citizens satisfied with cleanliness in the neighborhoods | 48% | 60% | 72% | +14% | 75%
Percent of households receiving regular garbage collection service | 70% | 78% | 75% | +5% | 78%
Percent of cleaning service cost recovery | 54% | 75% | 76% | +22% | 85%

Source: Presentation on Street Cleanliness from Budget Presentation for 2006. Pogradec, Albania.

Exhibit 5-2 Reporting Format 2: Actual Outcomes versus Targets

Outcome indicator | Last Period Target | Last Period Actual | Last Period Difference | This Period Target | This Period Actual | This Period Difference
Percent of children returned to home within 12 months | 35 | 25 | -10 | 35 | 30 | -5
Percent of children who had over two placements within the past 12 months | 20 | 20 | 0 | 15 | 12 | +3
Percent of children whose adjustment level improved during the past 12 months | 50 | 30 | -20 | 50 | 35 | -15
Percent of clients reporting satisfaction with their living arrangements | 80 | 70 | -10 | 80 | 85 | +5

Note: This format compares actual outcomes to targets for both the last and current periods. Plus (+) indicates improvement; minus (-) indicates worsening. Source: Performance Measurement: Getting Results. 2nd Edition, Washington, DC: The Urban Institute Press, 2006, p. 181.
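The "Difference" columns in a format like Exhibit 5-2 follow that sign convention: plus always means improvement, so indicators where a lower value is better (such as the number of placements) flip the sign. A small sketch of the arithmetic, using figures taken from the exhibit:

```python
# Signed difference between actual and target, per the note to Exhibit 5-2:
# plus = improvement, minus = worsening, so lower-is-better indicators
# reverse the raw delta.
def difference(target, actual, higher_is_better=True):
    delta = actual - target
    return delta if higher_is_better else -delta

# "Returned home within 12 months" (higher is better): target 35, actual 30
print(difference(35, 30))  # -5
# "Over two placements" (lower is better): target 15, actual 12
print(difference(15, 12, higher_is_better=False))  # 3
```

Computing the sign this way, rather than reporting the raw actual-minus-target value, spares readers from remembering which direction is "good" for each indicator.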


Exhibit 5-3 Reporting Format 3: Actual Values versus Targets

Outcome indicator | Current Period Target | Current Period Actual | Cumulative for Year Target | Cumulative for Year Actual | Year's Target
Percentage of parents reporting knowledge or awareness of local parental resource center activities | 75 | 70 | 70 | 65 | 70
Percentage of parents reporting that parental resource centers led to their taking a more active role in their child's development or education | 50 | 65 | 50 | 60 | 50

Note: This format shows cumulative values for a year rather than for previous reporting periods. It will only be useful for outcome measurement systems that provide data more than once a year. Source: Performance Measurement: Getting Results. 2nd Edition, Washington, DC: The Urban Institute Press, 2006, p. 181.

Exhibit 5-4 Reporting Format 4: Outcomes by Geographical Location

Outcome indicator | Eastern | Central | Mountain | Pacific | United States
Percent of schools participating in the program | 30% | 15% | 20% | 35% | 29%
Number of students enrolled in courses that had not been available previously | 1,500,000 | 600,000 | 850,000 | 1,950,000 | 4,900,000
Percentage of students reporting increased interest in school because of distance-learning activities in their classes | | | | |






Note: The format makes comparisons across any breakout categories identified by the program, such as managerial units, individual projects, schools, school districts, or particular student characteristics. Source: Adapted from Performance Measurement: Getting Results. 2nd Edition, Washington, DC: The Urban Institute Press, 2006, p. 182.


Exhibit 5-5 2004 Survey: Citizens' Rating of Satisfaction with the Most Important Services (Averages)

[Bar chart: average satisfaction ratings (1 = very satisfied; 5 = very dissatisfied) for Water, Solid Waste, Street Cleaning, Sewage, Street Lights, Storm Sewer, and Public Transport.]

Note: Services included in this chart were listed on the most important services lists for at least three of the five pilot cities. Source: USAID Local Government Reform Initiative (LGRI) in Georgia, Georgia Customer Survey 2004.

Exhibit 5-6 Example of Use of Graphs

Source: Progress Report on Regional Development Strategy of Fier Region, UNDP Albania, November 2005


Exhibit 5-7 Example of Use of Bar Charts

Source: State of the Subways Report Card, NYPIRG Straphangers Campaign, Summer 2004


Exhibit 5-8 Example of Use of Maps (Percent of Low-Weight Births by Neighborhood Cluster, Washington, D.C., 2004)

Source: Every KID COUNTS in the District of Columbia: 13th Annual Fact Book, 2006, D.C. KIDS COUNT Collaborative for Children and Families.


STEP 6: SET TARGETS


THE IMPORTANCE OF SETTING TARGETS

Setting municipal targets for each performance indicator can be of considerable use to public managers, elected officials, and the public. Annual and long-range targets provide a roadmap for the jurisdiction and can be a powerful motivational tool for the government and its managers to improve service outcomes. This is especially so if: your municipality has some form of multi-year strategic plan; annual targets are set; and sub-targets are set for each reporting period during the year (such as quarterly or monthly). Out-year targets, perhaps for five years into the future, can encourage long-range thinking by program personnel and reduce the temptation to over-emphasize current results at the expense of future progress.

This step discusses the process for setting targets for individual performance indicators. Targets are the specific numerical goals for individual performance indicators for some future period, such as the coming budget year. Governments throughout the world that use any form of program, performance, or results-based budgeting include performance indicators in budget submissions and typically include target values for the budget year. Setting targets, at least annually, for each of your performance indicators can also be a highly useful managerial and policy-making tool. For example, if quarterly targets are set for each of your public services at the beginning of each year, and the actual values for the quarter are calculated and reported, managers have the opportunity to review progress with their staffs and make decisions as to needed corrections.

Exhibit 6-1 is an example (from Albania) of a table of actual (2006) and targeted values (2009, 2012, and 2015) for five performance indicators, showing the country's latest available values, the values of one of 12 Albanian regions, and the latest available European Union values.


Exhibit 6-1 Table of Actual and Targeted Values for the National and Regional Government

Indicator | Latest national value | Latest regional value | 2006 | 2009 | 2012 | 2015 | EU value
1. Unemployment rate (%) | 14.6 (INSTAT 2002) | 22.06 (INSTAT 2002) | 22 | 20 | 14 | 14 | 8% (2003 EU average)
2. % of families benefiting from social assistance | 29.24 (2002, INSTAT) | 56.6 (2001 district level) | 50 | 40 | 30 | 20 | 5.97% (1992, based on average of 12 EU countries)
3. Infant mortality rate/1000 | 16.3 (2002, INSTAT and MoH) | 20.5 (2000, MSH) | 13 | 10 | 7 | 4.5 | 4.5% (2002)
4. Water supply within dwelling (%) | 31.7 (2001, INSTAT) | 46.9 (2002 NHDR) | 40 | 50 | 60 | 70 | 97.14% (1984)
5. Water running in average day (% of the 24 hours) | 38 (2001 district level) | | 15 | 40 | 75 | 100 | 98% (2000)

Source: Albania National Report on Progress Towards Achieving the Millennium Development Goals, August 2004.


Here we discuss the various issues involved, particularly:
- Guidelines that municipalities can use to help them set target values (including what factors should be considered in setting realistic and challenging targets);
- Who should participate; and
- Concerns about setting targets.

Target-setting is more an art than a science, since it inevitably requires making assumptions about the future. However, below we provide guidelines to help governments and their agencies establish appropriate targets. To be useful to your municipality, challenging targets need to be set that consider your municipality's own priorities, situation, and demographic conditions.

SET TARGETS

Below are suggested guidelines for setting targets for your individual performance indicators.

Consider the amount of funding and number of employees expected to be available during the target period (usually a year). The resources available to your municipality will be a major constraint on making substantial improvement on your key outcome indicators. This funding should include not only your own expected local revenues but also revenues expected from outside sources, such as the central government, provincial governments, and donor organizations. Because major uncertainties may exist as to the revenues from some, or all, of these, consider setting conditional (variable) targets, with the projected value for the indicators dependent on the amount of resources actually received. Such conditional targets are discussed further later.

Consider your own municipality's previous performance. This baseline will almost always be a major factor in determining targets. Recent-year performance has often been the primary basis on which government agencies have set their next period's targets. Use not only the most recent data but also data on the indicators for prior periods to identify trends that might be expected to continue into the future.
The quality of such targets depends considerably on the quality of the data the municipality has been collecting and the timeliness with which the data become available.

Consider the performance levels reported by your central government and by other jurisdictions with similar activities and workload or customer compositions, when such information is available. Such information is becoming increasingly available, especially at the countrywide level. This is so for Millennium Development Goal (MDG) performance indicators. For instance, see Exhibit 6-2 for data related to access to water as provided in Armenia's Poverty Reduction Strategy Progress Report.

Exhibit 6-2 National Level Data on Relevant Indicators

Indicator | 2002 | 2003 | 2004
Share of people having sustainable access to safe drinking water, percentage | 94.8 | 94.1 | 95.4
Share of households using springs (and/or wells, rivers), percentage | 3.6 | 2.9 | 3.8
Share of households using water delivered by water tankers | 5.2 | 5.9 | 4.6

Localities in Armenia may want to use such national information to set their own targets. In addition, for service areas in fields such as health and education, various international organizations are likely to be collecting and reporting data on performance indicators similar to those in which your municipality is interested. It will probably be most useful to focus on countries that have characteristics similar to your own. Appendix E provides an extract from the World Health Organization's report World Health Statistics and an extract from a 2006 report on the MDGs. Each extract reports data from a number of countries on a number of outcome indicators. Exhibit 6-1 illustrates the data that might be available from other jurisdictions, in this case showing a country's latest available national average and that for one of its regions (as well as the region's future-year targets). In the future, it seems likely that information on some performance indicators from other municipalities will become more available. (Clearly, the targets set at the various levels of government are related; provision is needed to achieve at least a basic compatibility among targets for the various levels.)

Using information from other governments has problems. In particular, the definitions and data collection procedures used may be at least somewhat different from the ones you use. Second, the available data may be a few years old.

Consider any targets set by your central government on performance indicators that your municipality is using. The country's national targets should ideally be fully compatible with municipal and other subnational government targets. Coordination in target-setting between levels of government is likely to be needed, if not essential, especially in such key areas as health and education.
For any targets set at the central level for local performance, work collaboratively with the central government to ensure that targets are realistic and useful. See Exhibit 6-3 for an example of this kind of useful collaboration between central and local governments.

Exhibit 6-3 Targets for Maintenance and Operation of Education Facilities

In 2004 the maintenance and operation of education facilities was formally delegated to local governments in Albania. In setting standards for the maintenance of facilities, a working group including the Ministry of Education and Science and local government representatives established eight standards for critical National Interest Areas, concerning primarily the health and safety of school facilities. Both parties worked on the development of those standards and on their design, application, and assessment. In an ongoing pilot program, compliance with the standards is being tested in three localities, and a decision has been made that targets are established by considering at least:
- Existing baseline conditions of schools, and
- Available financial resources at both the national and local level.


Source: Summary of Conclusions. Developing Draft Standards for Maintenance and Operation of Pre-University Education Facilities in Albania. 2005.

Identify any new developments, internal and external, that may affect the program's ability to achieve desired outcomes. New developments include such factors as:
- Demographic changes in the population of your municipality;
- Legislative changes (such as from the central government) affecting policy or funding that have recently occurred or are expected to occur soon;
- Known major expected shifts, in or out, of businesses; and
- New technological or medical advances, such as in malaria, tuberculosis, or HIV/AIDS prevention or treatment (which may enable the jurisdiction to improve its target values).

Consider the outcomes achieved in the past for different customer or workload categories and the projected future mix. Since aggregate citywide targets are based, at least implicitly, on some assumption about the distribution of workload by difficulty category (see Steps 2 and 4), the program should explicitly estimate the percentage of customers expected to fall into each difficulty category in the next reporting period. Preferably, targets should initially be established for each outcome indicator for each category of customer or workload. Then, based on your projections of how many customers or how much workload is expected in each category, an overall, aggregate target can be set. This should lead to more realistic and appropriate targets. (Having different targets for different categories will help reduce the temptation for program personnel to concentrate on easier-to-help customers or workload in order to show high performance.) For example, the values achievable for an indicator of success of training programs in helping disadvantaged citizens obtain jobs will likely be affected to a substantial extent by the literacy of the clients of the training program.
If (a) the municipality is able to estimate the number of clients expected to fall into each of, say, three literacy levels, and (b) the performance measurement system has provided data for each of those three levels on the percent who got jobs the past year after completing the program, then it can use that information to help calculate a more accurate aggregate target.

Consider benchmarking against the best. If the program has more than one unit that provides the same service for the same types of customers, consider using the performance level achieved by the most successful managerial unit as the target for all units. Alternatives are to set the target at, or close to, the average value achieved in the past by all the units. For example, you might choose as your five-year target for your municipality's infant mortality rate the best value achieved by any country as reported in the latest World Health Organization report. The exhibit in Appendix E shows that the best known available value is 2 deaths per 1,000 live births (in Singapore). However, a municipality would more likely choose a considerably more reachable target value based only on countries with economic and environmental conditions similar to its own. Your municipality should, of course, consider the latest value available for your city and develop annual targets that appear reachable on the way to a five-year target.
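The aggregate-target arithmetic described above, weighting per-category targets by the projected client mix, can be sketched as follows. The literacy levels, placement rates, and client counts are all hypothetical illustration figures, not data from the guide:

```python
# Weight per-category targets by the projected customer mix to get an
# overall aggregate target. All category names and numbers below are
# assumed illustration values.
def aggregate_target(category_targets, projected_counts):
    """Mix-weighted average of per-category target rates."""
    total = sum(projected_counts.values())
    return sum(rate * projected_counts[cat] / total
               for cat, rate in category_targets.items())

# Job-placement rate targets by client literacy level, and the number of
# clients projected in each level for the coming year.
targets = {"high literacy": 0.70, "medium literacy": 0.50, "low literacy": 0.30}
expected = {"high literacy": 200, "medium literacy": 500, "low literacy": 300}

print(round(aggregate_target(targets, expected), 2))  # 0.48
```

If the projected mix shifts toward harder-to-serve clients, the aggregate target falls automatically, which is exactly the realism the per-category approach is meant to provide.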


Make sure the targets chosen are feasible, given the program's budget and staffing plan for the year. For example, retaining a past target for the year despite a reduced budget can probably be achieved up to a point, but eventually substantial cutbacks in resources should be reflected in reduced targets for the agency. Thus, you should consider reviewing the targets after the final budget has been established, or at any time during the year when a major change in a program's situation has occurred.

Set targets not only for the year as a whole but also for portions of the year, such as each quarter, or even each month for some performance indicators. This is a good management practice. Having such targets provides a basis for regular reviews throughout the year of the progress agency programs are making in meeting their targets. This can encourage mid-year corrections if significant shortfalls in meeting targets occur. Such targets should reflect seasonal factors that might affect performance, and thus the targets, for particular time periods for some performance indicators. For example, more job opportunities are likely to occur during the tourist season. Thus, the number of job placements and the percent of persons served by the municipality's job-training services who become employed can be expected to increase during such times. Targets for these two performance indicators should be higher for tourist-season months. Similarly, other seasonal differences in outcomes can be expected during months with normal weather differences and service demands (such as the number of patients with seasonally related health conditions).

If an outcome indicator is new, defer setting firm targets until the program has collected enough data to be confident of setting plausible values. However, a program might set rough targets for the initial data collection period, explicitly labeling them as pilot targets.

If appropriate, use a range for the target.
While preferable, targets do not have to be a single value. A range is a reasonable alternative, especially if a great amount of uncertainty exists. For example, the target might be expressed as the most likely achievable value plus or minus 10 percentage points. Another option that might be appropriate for some indicators arises where the performance indicator value is highly dependent on some external factor over which the municipality has no control. This may frequently be the case if the amount and timing of major funding from the central government is highly uncertain. You might identify two or three scenarios as to the amount and timing of funding that would be forthcoming. For each scenario, you would identify the performance targets you believe could be achieved with those funds.

Who Should Set the Targets?

Typically, the municipality's program managers make the initial selection of target values for each performance indicator (preferably with the help of each manager's staff). The program manager is normally the person with the most knowledge of what factors need consideration when setting the targets. However, a program's targets should be reviewed by higher-level managers to assure that the targets are appropriate and compatible with higher-level concerns. An option sometimes used is to have higher-level officials and/or the public set the targets. These are persons without much detailed knowledge of the workings of the program or service. The advantage of this option is that it incorporates the views of the customers for the service and is more likely to


establish targets that push the program to higher levels of achievement (even if program personnel participate in the group sessions). The disadvantage is that the targets these outsiders set might not be realistic and may set the program up for failure. Ideally, a wide consensus would be achieved in the municipality, involving citizen and business groups as well as agency managers and elected officials.

At times, high-level political considerations will override the judgments of the program managers. This may lead in some cases to very optimistic targets (to improve the chances of success in a near-future election, before the actual values become available) or, in other cases, to very conservative targets (to improve the chances of success in the next election after the actual values have been calculated).

An example of outside participation in setting targets is given in Albania's National Report on Progress Toward Achieving the Millennium Development Goals. It reported that the central government formed seven working groups that included government institutions, NGOs, and the private sector to reach a national consensus on Albania-relevant MDG goals, targets, and indicators [Albania, 2004, page 3]. This same approach can be adapted to the municipal level.

CONNECT LOCAL TARGETS TO NATIONAL TARGETS

When your agencies select targets for their performance indicators, they should consider any national targets the central government may have set for performance indicators similar to the agencies' indicators. The national targets can be considered another set of benchmarks you can use to set your own targets. Preferably, the central government will have previously obtained your municipality's input as part of its selection of national targets, and will have considered targets set by its local governments as part of its target-setting process. 
Ideally, the central government and its municipalities would work together in setting targets for those performance indicators that are common to both levels of government. For example, infant mortality rates are a shared responsibility of all levels of government. The central government should work with local agencies to work out strategies, resource needs, performance measurement and reporting procedures, and targets.

RELATION OF MUNICIPALITY AND STRATEGIC PLAN TARGETS TO MILLENNIUM DEVELOPMENT GOALS (MDGS)

For each MDG goal, the UN has established one or more outcome indicators, and for each indicator a target may have been set (see the table in Appendix B). However, MDG targets are not usually provided for the years leading up to 2015. Far out-year targets are important for strategic planning, but are much less useful for operations; targets are also needed for earlier years, particularly years in the near future.

If you formulate a strategic plan for your city that integrates MDG goals, you should establish annual MDG targets. Strategic plans should break down the far-out strategic targets into annual targets for the purpose of annual performance budgeting and monitoring. The annual targets set during your strategic planning process should be used in the annual budgeting process of your city so that you can link results with resources and demonstrate effectiveness, or identify a lack thereof. (Performance budgeting is discussed in more detail in Step 7.) If your municipality is using any form of program, performance, or results-based budgeting, your annual budget submission should already include projected targets for your performance indicators for the budget year.

For those of your performance indicators that are similar to MDG indicators, it is good practice when


setting targets to consider the most recently available historical MDG data from individual countries as well as the international MDG targets. Unfortunately, MDG values are not available at the present time for individual cities or other local governments, and some of the available country-level data are somewhat old. Nevertheless, you can use some of the MDG data to help develop your own targets; that data provides one set of benchmarks. We have provided some samples of such data for your own target setting in Appendix B.

It is, of course, very important that you set your targets considering your own local situation. The MDG goal-setters also recognize that each country, and presumably each municipality, needs to establish its own targets reflecting its own unique situation. The basic philosophy is that targets should push the municipality forward but should be reasonably attainable.

Finally, perhaps the most important limitation of the MDG indicators and their targets is that they cover only a portion, and probably only a small portion, of the issues and concerns that municipalities have. As indicated in the sample list of outcome indicators in Appendix A, only a small percent are MDG indicators. However, the broad MDG goal statements can be interpreted to include the need for many more indicators, such as those included in Appendix A.

CONCERNS ABOUT TARGETS AND TARGET SETTING

Public officials understandably are often concerned that failure to meet targets can become threatening to them, such as by threatening their job security. This is especially of concern in situations where an agency's failure to meet its key targets is due to factors outside the agency's control. It can lead to game-playing with targets, such as setting targets that are overly easy to achieve. 
Thus, public managers and their employees are concerned that they may become political scapegoats if important targets are not met. One way to lessen this concern is to give agencies and their programs the opportunity to include in their performance reports their explanations for missed targets and to identify what they are doing, or planning to do, to remedy the problem.

A second concern arises when circumstances change substantially during the year after targets have been established, making it very difficult if not impossible to meet those targets. This problem can be lessened by permitting agencies to modify their targets during the year when circumstances outside the responsibility of the program change so substantially that the program is not likely to come close to meeting the target.

FINAL COMMENT

Setting municipal targets for each performance indicator can be of considerable use to public managers, elected officials, and the public. Target-setting can be a highly useful management tool and encourage program improvements, especially if sub-targets are set for each reporting period during the year, such as quarterly or monthly, depending on the particular indicator. For example, if a department surveys its customers only once a year, or if the value of a performance indicator is not likely to change significantly on a monthly or quarterly basis, quarterly or monthly reporting on that indicator is not appropriate and only annual targets would be


needed. However, other indicators, such as response times for providing a service, are likely to be subject to short-term changes, and the data should be useful if collected more frequently, such as monthly or quarterly. Then, quarterly or monthly targets should be useful to department management for tracking progress and identifying the need for interim changes.

Annual and long-range targets provide a roadmap for the jurisdiction and can be a powerful motivational tool for the government and its managers for improving service outcomes. Long-range target-setting, such as that called for in the MDGs, can be very helpful if the municipality has some form of multi-year strategic plan and the plan includes annual targets so that progress toward the plan can be tracked. This enables the municipality to identify the need for mid-course corrections where necessary. In addition, establishing targets for out-years, perhaps five years into the future, can encourage long-range thinking by program personnel and reduce the temptation to over-emphasize current results at the expense of future progress.
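As a rough numerical illustration of the quarterly target-setting discussed above, an annual target can be apportioned across reporting periods using seasonal weights. The sketch below is hypothetical: the annual target of 400 job placements and the seasonal weights are invented for illustration, and in practice would be based on a program's own historical data.

```python
# Hypothetical sketch: splitting an annual target across quarters using
# seasonal weights, so that tourist-season quarters carry higher targets.
# The annual target (400 placements) and the weights are invented values.

def quarterly_targets(annual_target, seasonal_weights):
    """Apportion an annual target in proportion to seasonal weights."""
    total = sum(seasonal_weights)
    return [round(annual_target * w / total) for w in seasonal_weights]

# Heavier weights for Q2 and Q3, the assumed tourist season.
targets = quarterly_targets(400, [0.15, 0.30, 0.35, 0.20])
print(targets)  # [60, 120, 140, 80]
```

The quarterly targets sum back to the annual target, so progress reviews during the year remain consistent with the year-end goal.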


STEP 7: USE THE PERFORMANCE INFORMATION


THE IMPORTANCE OF USING THE INFORMATION

Performance measurement is of little value if nothing is done with the information it produces. Surprisingly, there are many examples around the world of performance measurement systems that produce data that are seldom, if ever, used for improving services. The use of the data should be central to the performance management system from the start so that the effort is not wasted.

The primary use of performance information around the world has been to provide accountability of governments to their citizens. This is very important. However, probably the most important use of performance information is the role it can play in improving local services. It is clear that the data can be very informative to service managers, but how can it actually be put to use in an active and systematic way? In this section, we suggest a number of important ways to use performance information.

UNDERTAKE SERVICE IMPROVEMENT ACTION PLANS

Service improvement action planning is a straightforward process that incorporates performance management tools into a framework to improve service outcomes. This process is essentially a focused version of the steps described in this manual, applied to a particular service, program, or issue, and often can produce measurable improvements in a fairly short time. Service Improvement Action Plans (SIAPs) have already been put into action in several cities in Eastern Europe, and have helped identify priorities and set up systems leading to improvement. The use of a SIAP introduces the discipline managers need to think through a problem and its solutions.

Desired Outcomes Can Cut Across Boundaries
A SIAP focuses on a limited number of outcomes. But these do not need to be confined to one service; they can cut across several departments, for example, to address such broad issues as youth, the environment, health, or economic development. 
However, SIAPs provide other advantages as well, such as providing the foundation for program budgeting, long-range financial planning, and strategic planning.


Improving Citizen Satisfaction with Cleanliness
Several cities in Georgia undertook SIAPs focused on cleanliness. All resulted in measurable improvements in cleanliness and in citizen satisfaction. In the city of Ozurgeti, for example, the number of blocks receiving high ratings on cleanliness rose from 12% to 47% in the first year, while citizen satisfaction with the cleaning service rose from 54% to 82% over the same period. This increase in satisfaction with services was also accompanied by increases in fee collections; in one city the increase in collections was as high as 50% in the first year.

The tasks required for completing a SIAP are described below. It is not necessary to carry out each step of the process when first adopting it. However, as city staff become more familiar with the SIAP, they are likely to find it a useful tool in many aspects of their job and expand its application. The tasks are laid out in the sequence in which they usually occur, but in a number of cases it will be useful to go back to an earlier step to update information or rethink the direction the SIAP will be taking.

Task 1. Identify the Focus Area for the SIAP
Many cities have focused on a traditional service area or department, for example, solid waste collection or public lighting. However, others have found that the SIAP process is well suited to address a more complex problem or issue that does not necessarily reside within one city department. Service areas that cities in Russia, Albania, and Kyrgyzstan have chosen include: juvenile delinquency, avian flu, tourism, economic development, tax and fee collection, and traditional holidays (community pride). The selection of service areas may be based on feedback from surveys identifying citizen priorities, city council input, as well as deliberations among city technical staff and leadership.

Albanian City Uses Performance Management to Improve Street Cleaning
In 2004, the Municipality of Kavaja, Albania, identified street cleaning as an issue deserving attention. A citizen survey conducted with local funding from USAID revealed that only 15% of citizens surveyed viewed the city as clean or very clean. The city opted to create a Service Improvement Action Plan in an attempt to improve upon these results and truly achieve its desired outcome of a clean city. Kavaja established a local government-citizen working group whose task was to study the problem, including analyzing performance information, setting targets for improvement, devising actions for improvement, and monitoring annually. Through a variety of actions, including purchasing new garbage bins, approving a new fee schedule, expanding garbage service, and reallocating existing resources, in one year's time Kavaja was able to improve its citizen satisfaction ratings with cleanliness by 46%, and within two years, by over 100%.

Task 2. Form a Working Group
Try to form an inclusive group that encompasses key stakeholders. Start by identifying the individuals who are interested in or involved in the area of focus. There is no need to limit the working group to one department or even to the local government. For example, a working group addressing city cleanliness might include a health-focused NGO as well as the public works and health departments and the contractor that collects solid waste. Consider including city council members, other departments, NGOs, other government agencies, experts, or other individuals. There are two reasons for the scope of these groups: (1) to be sure the group is aware of the many, often complicated, dimensions of the issue being addressed and to get the perspective and feedback of different stakeholders, and (2) to be sure the


Working Group will have the avenues, resources, and skills to implement the actions the group will be recommending.

Task 3. Prepare a Situation/Issue Analysis
Staff and other core group members should record what they think are the key issues or concerns. What appear to be the nature and causes of the problems identified? The latest available performance data should be examined to help identify the scope of the main issues the group thinks are important. This is not an in-depth analysis but an outline of the key directions and issues the SIAP should address. Note that this is a preliminary step; later, as more data become available, it is likely that the issues identified will change and the situation will look different.

Task 4. Prepare a Table Presenting the Expected Level of Service, Outcomes (Results), and Indicators
This table (see the example in Exhibit 7-1) is the core of the SIAP. The Working Group should identify several desired outcomes, select the indicators to measure progress toward those outcomes, and fill in the latest set of available data for the outcomes, outputs, and inputs relating to the service. The past outcome data should also be broken out by demographic characteristics (such as district, age group, gender, etc.; see Step 2 for further discussion of breakouts). This will enable the working group to much better pinpoint key problem areas. All these data become the baseline values for the indicators. In some cases where data are not available, it may be necessary to obtain new data before the next steps, using one or more of the data collection procedures discussed in Step 3.

Exhibit 7-1 provides an example of a table with outcomes and outcome indicators associated with the aim of meeting one of the MDG targets. In this case, the city has identified four outcomes that it can affect in order to contribute to ensured environmental sustainability, and has set targets for next year that it believes to be realistic. 
Exhibit 7-1
Outcomes and Outcome Indicators for a SIAP Linked to a Millennium Development Goal

Goal 7: Ensure environmental sustainability
Target 10: Decrease the proportion of people without sustainable access to safe drinking water and basic sanitation.
Target 11: Improvement in the lives of at least 100 million slum dwellers.

Access to safe drinking water:
- % of citizens who have access to water four hours or less per day. Baseline (this year): 28%. Target (next year): 15%. Source: survey.
- % reporting that sometime in the past 12 months the water had a bad taste. Baseline: 23%. Target: 10%. Source: survey.
- % non-revenue water, i.e., (volume of water supplied from all sources (m3) minus volume of water billed) / volume of water supplied from all sources. Baseline: 48%. Target: 30%. Source: water agency records.

Access to basic sanitation:
- % of citizens with access to improved sanitation. Source: survey.

Clean streets:
- % of citizens who report having seen animals/small pests in the uncollected garbage. Baseline: 47%. Target: 25%. Source: survey.
- % of streets rated as clean or very clean (score 3 or 4). Baseline: 24%. Target: 60%. Source: trained observer ratings.
- % cost recovery for solid waste collection. Baseline: 63%. Target: 75%. Source: municipal records.

Access to adequate housing:
- % of low-income families living in adequate housing. Baseline: 45%. Target: 60%. Source: survey.

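The non-revenue water indicator in Exhibit 7-1 can be computed directly from the formula given in the table. The sketch below uses invented volumes chosen to reproduce the 48% baseline; real figures would come from water agency records.

```python
# Non-revenue water, as defined in Exhibit 7-1:
# (volume supplied from all sources - volume billed) / volume supplied.
# The volumes below are invented to reproduce the 48% baseline value.

def non_revenue_water_pct(supplied_m3, billed_m3):
    return 100 * (supplied_m3 - billed_m3) / supplied_m3

baseline = non_revenue_water_pct(supplied_m3=1_000_000, billed_m3=520_000)
print(f"{baseline:.0f}% non-revenue water")  # 48% non-revenue water
```

Reaching the 30% target would mean either billing more of the water supplied (reducing leakage and unbilled use) or supplying less to deliver the same billed volume.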
Task 5. Set Targets
For each indicator the working group should come up with targets. These should be for the next year, but the group may want to consider out-year targets as well. These targets may later change based on the particular course of action decided on. Moreover, the final selection of targets should probably involve input from other decision-makers, such as the Mayor or the city council.

Task 6. Identify What Options Are Available to Correct the Problem
Actions may encompass one or more of the following types; usually more than one way to make improvements will be available:
- Shifts in resource allocation within the service (e.g., bins are relocated from one zone to another)
- The introduction of new technologies (for example, the installation of water meters or the acquisition of new solid waste collection equipment)
- Public awareness campaigns to help reduce the size of the problem (for example, an anti-litter campaign)
- Appeals to regional or federal government for regulatory changes
- Additional staff or training
- Steps to research best practices
- Other analysis
- Improved monitoring

Task 7. Estimate the Benefits and Costs of Each Option
Estimate the likely effects of each option on each of the outcome indicators. Also estimate the costs of each option. Finally, consider the feasibility of implementing the options: are there technical, political, or other constraints that would make some options very difficult to implement? Then select the option that appears to offer the best outcomes for the expenditures required.

Taborsky, Oblast of Perm, Russia, Considers Options to Improve Street Lighting
The community identified street lighting as a critical area, with the greatly deteriorated system contributing to a number of other problems, such as difficulty traveling, doctors' reluctance to make house calls after dark, and the encouragement of petty crime. Based on the priority of maximizing coverage, with a target of illuminating 85% of the streets, they shifted resources from planned light meters and light-sensitive photocell devices to installing lighting throughout the village. A volunteer team of high schoolers was formed to promote the maintenance of the new street lights and prevent youth vandalism.

Identify in detail the expected cost of the selected option. These costs will usually include such items as wages, fuel for operating a machine or vehicle, and materials used for repairs and maintenance, plus


any additional equipment or facilities that might be needed. If the planned actions will have budget implications, identify them. (Note: not all service improvements will require budget increases!) Based on what appears achievable if the selected option is implemented, select final targets for each performance indicator. Depending on the problem being addressed, targets for more than one year into the future may be appropriate.

Task 8. Develop an Action Plan for the Selected Option
Identify responsibilities for implementing the selected option and the dates by which each task will be completed. The action plan is a critical tool for measuring progress in implementing the Service Improvement Action Plan; without assigned responsibilities and deadlines, action plans rarely get implemented.

The information generated on the cost of inputs such as labor, materials, and equipment forms the basis for budgeting. Because the SIAP method also usually requires multi-year planning, the expenditure estimates can be factored into a long-term financial plan for the city. Finally, because SIAPs require outcomes to be stated, they form the basis for a practical strategic plan, which provides local leaders a road map for the future and the costs to traverse it. Overall, municipalities are likely to find that SIAPs become an integral part of their planning, management, and budgeting activities.

Poti, Georgia, Increases Collection of Water Tariffs
In the city of Poti, Georgia, a working group was formed including three people from the water company, one from the Sakrebulo, one from a local NGO, and one from the local television station. The SIAP addressed such issues as improving billing procedures, signing service contracts between the service provider and households, and installing metering in some neighborhoods. One result was an increase in collections from 18% to 29% within the same calendar year.

Task 9. 
Monitor, Report, and Implement
Once the action plan is put into motion, it is important to track progress. This is where the performance indicators come into play. Departments will need to be sure that data collection proceeds in a timely manner so that, at appropriate intervals (at least once a year), information will be available on performance. Results should be reported to city managers and leadership, as well as to the staff involved. Reporting to citizens should take place on a regular basis, along with information on local government plans to address any particular needs or problems.

Managers can use the performance information as they receive it to make needed changes. This cycle of action, monitoring, reporting, adjusting, and action will continue over time. Action plans should be reviewed on an annual basis; ideally they should be reviewed shortly before the preparation of the annual budget in order to incorporate changes in the cost of service delivery and additional expenditures into the next budget cycle. Further, review at this time gives the department director, the municipal manager, and the Council the ability to inform the citizenry on how service delivery has improved and how resources will be used to continue the upward trajectory in service improvements.

Each city will find its own best way of carrying out this process. Cities may not be able to carry out all these steps perfectly from the start, but the effort is likely to be well worth it. In Exhibit 7-2 below


is one example, based on the worksheet completed by a real city carrying out a SIAP on solid waste collection. Local governments have found it useful to use this form throughout the process.

Exhibit 7-2
Sample Worksheet for Selected Tasks in Preparing a Service Improvement Plan

Working Group: list members.

Situation Analysis: describe the current service, including delivery mechanisms, potential problem areas, and ideas about possible improvements.

Identify important outcomes and outcome indicators. Identify the data collection source and, where possible, provide baseline data.

Streets are clean:
- % of citizens who report having seen animals/small pests in the uncollected garbage. Baseline 2007: 47%. Target 2008: 25%. Source: survey.
- % of streets rated as clean or very clean (score 3 or 4). Baseline 2007: 24%. Target 2008: 60%. Source: trained observer ratings.
- % of citizens who say they do not know the garbage collection schedule. Baseline 2007: 87%. Target 2008: 50%. Source: survey.

Service is financially sustainable:
- % cost recovery for solid waste collection. Baseline 2007: 54%. Target 2008: 75%. Source: municipal records.

Identify tasks to be completed by the working group:
1. Look at breakouts of citizen satisfaction data to identify which neighborhoods have the most complaints about cleanliness.
2. Verify placement of garbage bins in the priority neighborhoods to determine if it might be useful to reallocate them.
3. Plan an information campaign to educate citizens about the garbage collection schedule and encourage them to deposit garbage into bins.
4. Look into the practices of other municipalities on fee/fine collection procedures. Review legislation concerning garbage fees.
5. Review billing and collection procedures and consider areas for streamlining.

Priority Actions, Timing, and Responsibility: drawing on the tasks outlined above, list major actions (WHAT), a deadline (BY WHEN), and staff assignments (WHO).

ANALYZE OPTIONS / ESTABLISH PRIORITIES

A major task for public service organizations is to decide among options and establish priorities among competing claims for scarce resources. No organization can do everything it would like to do. This applies both to operational resource allocation and to making longer-term choices, such as in


strategic planning and developing a multi-year capital investment program. Information from the organization's performance measurement system can usually help make these choices. The following steps can be used for such analysis. Here we illustrate how an agency might address the choice problem, focusing on choices involving the construction, repair, and maintenance of physical infrastructure, such as roads, bridges, water supply, sewer systems, and buildings or other facilities (such as parks and other public recreational areas). Repair and maintenance of school buildings will be used to illustrate the steps leading up to final choices.*

Step 1. Assess the condition of each existing element of infrastructure (intermediate outcomes), using information from the agency's performance measurement system. In the school example, trained observers might rate each sub-system of each school building in the community using a well-defined rating scale, such as acceptable, not acceptable, or hazardous.

Step 2. Estimate the cost to bring each infrastructure element to an acceptable condition. Exhibit 7-3 illustrates the summary table from these two steps. In this example, the cost estimates are those needed to bring the condition levels from hazardous or unacceptable to acceptable.

Step 3. Compare the total cost to the available resources. Inevitably, the total costs will be considerably greater than the available funds. In the example, which schools and which infrastructure elements should be repaired? Also, consider the various sources of funding and which funds can be used for which activities. Capital costs often come from a different fund than operation and maintenance (O&M) expenditures, and the availability of funding is likely to differ between the two sources. 
In the school example, some of the large-cost repairs are likely to be considered capital expenditures, for which funds might be more available than for other needed repairs that are O&M activities.

Step 4. Consider other important factors. For example, the severity of the unacceptable condition level is likely to differ among the infrastructure elements. A natural choice would be to fix hazardous conditions first (what can be called the worst first option). This might require much of the available resources, as is the case for the cost numbers in Exhibit 7-3 (45% of the total need and probably a much larger percent of the total dollars available).

Step 5. Estimate the number of persons (in the school example, the number of students plus school staff) affected adversely by each unacceptable condition. The agency might then calculate the ratio of the number of persons benefited (by bringing the infrastructure element up to an acceptable condition level) per estimated repair dollar. These ratios provide one useful perspective for prioritizing the repairs.

Step 6. If funds are very tight (as usually will be the case), consider interim, as well as complete, fixes. For example, if some needed work has particularly large costs, can a temporary, partial fix be used until more funds become available? If only a few persons are affected by particularly costly repairs, are there other ways those persons (the students and staff in the example) can be served using less costly means?

* This example is adapted from work done by the Urban Institute with several towns in Albania. The local and central governments both participated in this process.
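The Step 5 ratio can be sketched numerically. The affected-person counts below are invented for illustration (the cost figures are borrowed from the scale of Exhibit 7-3); real values would come from the condition assessment in Steps 1 and 2.

```python
# Hypothetical sketch of the Step 5 prioritization: persons benefited per
# repair dollar. Elements with higher ratios are candidates for earlier
# funding. All figures below are invented for illustration.

repairs = [
    # (infrastructure element, persons affected, estimated repair cost)
    ("School A temperature system", 600, 341_000),
    ("School D water supply",       450,  20_000),
    ("School K lighting",           300,   1_900),
]

ranked = sorted(repairs, key=lambda r: r[1] / r[2], reverse=True)
for element, persons, cost in ranked:
    print(f"{element}: {persons / cost:.4f} persons benefited per dollar")
```

Note that this ratio is only one perspective: as Steps 4 and 6 point out, severity (hazardous conditions) and the availability of interim fixes should also shape the final priority order.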


None of the above steps directly addresses political considerations. These will often affect resource allocation decisions. The use of such data as that identified above can help reduce such pressures on public officials, such as the temptation to spread the available funding around regardless of need. Note also that performance measurement information provides only part, but a vital part, of the information needed for these resource allocation decisions.

HOLD "HOW ARE WE DOING?" SESSIONS

Cities can periodically hold sessions to review performance information internally with staff. This is an excellent way to focus staff attention on the importance and use of performance information. The Mayor may want to preside at these sessions, or they could be internal staff meetings within a single department. As a first step, identify particular indicators to review on a regular basis. It is most useful if printed copies listing the relevant indicators with up-to-date data are available at each session. In a number of instances, cities have chosen to project indicator data on a screen, so that everyone is looking at the same numbers during the discussion.

ILLUSTRATIVE QUESTIONS FOR "HOW ARE WE DOING?" SESSIONS
- For which indicators have we met or exceeded our targets? Are there some lessons learned from those successes that might be useful elsewhere?
- For which indicators did we fail to meet our targets? Why? And how could we improve performance? (Request a written performance improvement plan.)
- Are there other unexpected outcomes that should be discussed?
- In later meetings, request an update on progress in carrying out the improvement plan and meeting new targets. Ask relevant staff to provide updates on performance and explain any deviation from targets.

This is also a good opportunity to review breakouts of the performance data for particular areas of the city, or the outcomes for particular citizen groups, to see if one area is experiencing problems and might benefit from more assistance, or if another is doing exceptionally well and might be used as a model for others. These should be sessions intended to improve service, not occasions for attacking poor performance.

These sessions can be highly constructive. Having the performance information in view will help keep the discussion focused and creative. Several cities in the United States (perhaps most famously, CitiStat in Baltimore, and New York City) and elsewhere use this technique on a regular basis. In Baltimore's CitiStat system, department representatives answer the Mayor's questions regarding performance at bi-weekly meetings, while indicator data are projected on screens behind them.


Exhibit 7-3. Cost Estimates for Bringing Each School Element up to an Acceptable Level*

No. | School   | Hazardous Conditions | Fire Protection | Lighting | Temperature | Water Supply | Bathrooms | Sanitation | Communications | Total
1   | School A | 1,710,000 | 45,000  | 16,450 | 341,000   | --      | 18,000  | 96,500  | 27,000  | 2,253,950
2   | School B | 25,000    | 90,000  | 5,300  | 457,400   | 49,000  | 59,300  | 20,500  | 81,000  | 787,500
3   | School C | 45,000    | 45,000  | 2,600  | 91,000    | 28,000  | 39,000  | 18,500  | 27,000  | 296,100
4   | School D | 1,000,000 | 45,000  | 1,850  | 80,000    | 20,000  | 15,000  | 11,900  | 27,000  | 1,200,750
5   | School E | --        | 45,000  | 800    | 37,500    | 2,250   | 11,000  | 8,200   | 27,000  | 131,750
6   | School F | --        | 45,000  | 7,000  | 150,000   | 25,000  | 15,000  | 10,100  | 27,000  | 279,100
7   | School G | --        | 45,000  | 1,160  | 8,000     | 25,000  | 15,000  | 8,000   | 27,000  | 129,160
8   | School H | --        | 45,000  | 1,950  | 134,200   | 30,000  | 22,400  | 20,700  | 27,000  | 281,250
9   | School I | --        | 45,000  | 9,500  | 199,000   | 27,160  | 15,000  | 6,500   | 27,000  | 329,160
10  | School J | --        | 45,000  | 4,000  | 123,600   | 35,000  | 63,220  | 11,800  | 27,000  | 309,620
11  | School K | --        | 45,000  | 1,900  | 79,500    | 17,000  | 11,000  | 8,800   | 27,000  | 190,200
Total |        | 2,780,000 | 540,000 | 52,510 | 1,701,200 | 258,410 | 283,920 | 221,500 | 351,000 | 6,188,540

* Adapted from work done by one Albanian community.


Indjija and Paracin (Serbia): CitiStats

Both Serbian cities have introduced a CitiStat process: Indjija in 2003 (called Sistem48) and Paracin in 2004 (called InfoStat). In both cities, all municipal service providers (including both city departments and municipal enterprises) attend bi-weekly meetings with the Mayor, where they report on pre-selected indicators. At each session, indicator data are reviewed for each service, covering both outcome indicators (such as the time taken to complete responses to citizen complaints and total parking fees collected) and output indicators (such as the number of work orders completed, the number of trees trimmed, and the number of square meters of road swept). The meetings in Indjija are open to the media.

Both cities have identified a number of improvements to their operations that they attribute to these meetings. In Indjija, these include clean-up of illegal dumping, fixing sewage problems caused by precipitation, and replacement of traffic lights to reduce breakdowns. Paracin attributes to these meetings such improvements as increased attendance at cultural events due to improved communication with citizens (identified by a citizen survey), and major cost savings in heating expenses in kindergartens (based on comparisons of the costs in individual schools).
Source: DAI/SLGRP CitiStat Implementation, Development Alternatives, Inc., working paper, undated but probably 2006.

PERFORMANCE BUDGETING

How can you get there if you don't know where you are going? Using performance information for budgeting, known as performance-based budgeting or results-based budgeting, is probably the best known potential use of performance information. While performance measurement looks backward to see what has been accomplished, performance budgeting can be thought of as looking forward. This presents additional challenges.

Performance-based budgeting has several strong advantages. First, it improves decision-making. City administrators can use performance information to think more carefully about the choices implicit in the budget they prepare. And the city council receives better information about the implications (the outcomes) that their decisions will have, and is therefore better able to make effective choices.

Second, it can lead to better resource allocation. Developing a results-based approach to budgeting enables you to monitor, and better understand, the accomplishments of each program and what you are getting for the resources allocated to it. While some local governments may have limited authority to reallocate funds across departments, there is always some scope to better allocate limited resources within a department. A key element of performance-based budgeting is linking program objectives and accomplishments to the budget request justification for the next year.


It also provides accountability in an especially important area: spending taxpayers' money in a way that explicitly links funding to the outcomes it is expected to produce. Including outcomes with expenditures greatly enhances the transparency of the budget. Citizens and councilors have found budgets much easier to understand when they include performance information.

Governments at all levels in many countries have begun to use performance-based budgeting. In practice, this has taken many forms, across a fairly large range of possibilities. Each city should consider doing some or all of the following:
- List objectives in the budget for each service area. This will require departments to think carefully about the purpose of their work, and to present the proposed budget figures in connection with those purposes for everyone to consider.
- List outputs, outcomes, and indicators based on service objectives in the budget. It will be easier to include outputs at first, as these are much more directly linked to planned expenditures. However, identifying the relevant outcomes will make the budget figures considerably more meaningful. Including data from recent years will describe the current context and will provide council members with information on which to judge how the department will perform in the future.
- Include targets for key outcome indicators, linking outcome targets to estimated expenditures. These will describe most clearly how the funding being requested is expected to be used.
- Use outcome indicators and targets to prepare the budget. This information should play an important role in the organization's budget choices. For example, different outcome targets can be considered relative to the funding levels requested.
- Use explanations about past performance levels to help justify proposed budget changes.
- Have the city council use performance information to make appropriation decisions.
Although it is likely to be a significant change for council members used to seeing a budget in traditional format, the council will find the inclusion of indicator data, especially outcomes, invaluable. On the one hand, it will make decisions clearer once the implications in terms of results are included in the budget. On the other, it will be easier for them to explain to citizens why they made the decisions they did. Exhibit 7-4 provides a list of questions that department heads, the mayor, or council members can ask when developing or considering budget requests, in order to consider the performance implications of the proposed budget.
Exhibit 7-4. Basic Questions to Ask During the Budget Process
1. What are the key results that should be expected from your department or service?
2. Who is your service intended to serve? Who else is affected by your program?
3. What important performance indicators do you use to track progress in attaining these results? If none at the moment, what would make sense to use?
4. What do these performance indicators show for the past several years?
5. What values for these performance indicators do you expect to achieve with the budget you propose?
6. To what extent have you met your most recent targets? For targets that were not achieved, why were they missed? What does this latest budget do to correct the problems?
7. What actions does your proposed budget include that will improve the quality of your services for our citizens?
8. Where you have proposed efficiency (cost-saving) improvements, what effects will they have on the quality and effectiveness of the service?
9. What major factors influence the results you are trying to achieve? What are you doing to try to address those factors?
10. What are the major challenges facing your program/service?
11. How would results change if funding were increased by 10 percent? Decreased by 10 percent?

Performance-based budgeting presents some problems, in particular estimating the costs of achieving various levels of outcomes. A few tips may help the process:
1. Start with historical costs from the last few years.
2. Think about expected changes in:
   a. resources available
   b. the complexity of the task
   c. staffing available
   d. external factors, such as the weather, new regulations, or population
   e. policy changes
3. Do not expect absolute precision. It may be useful to plan to adjust targets during the year, depending on performance in meeting semi-annual or quarterly targets.
4. Consider setting variable targets that explicitly depend on different factors, as a way of formalizing the uncertainty. Some options are:
   a. Setting different targets depending on expected future workload.
      Indicator: Number of hours it takes to complete a service call
      Target: 24 hours, if no more than 50 work orders per week come in during the budget year;
              48 hours, if over 50 work orders per week come in during the budget year
   b. Setting a range, rather than a single value, for the target, such as:
      Indicator: Percent of roads to be asphalted
      Target: 40-50%, depending on future weather conditions

Exhibit 7-5 is an excerpt from a performance-based budget prepared by the city of Fier, in Albania.
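The two kinds of variable targets above can be written down as simple rules, which makes end-of-year assessment unambiguous. The sketch below uses the service-call and road-asphalting examples from the text; the function names are our own, but the thresholds and target values are those given above.

```python
# Sketch of the two variable-target options described in the text.
# Function names are hypothetical; thresholds come from the examples above.

def service_call_target_hours(weekly_work_orders: int) -> int:
    """Option (a): the target depends on the workload actually experienced
    during the budget year."""
    if weekly_work_orders <= 50:
        return 24  # no more than 50 work orders per week: 24-hour target
    return 48      # over 50 work orders per week: target relaxes to 48 hours

def asphalt_target_met(percent_asphalted: float) -> bool:
    """Option (b): the target is a range (40-50% of roads asphalted),
    formalizing uncertainty about future weather conditions."""
    return 40.0 <= percent_asphalted <= 50.0
```

Writing the rule out in advance, rather than arguing about it at year's end, is the point of a variable target: both the city and the department know exactly which target applies once the actual workload or conditions are known.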


Exhibit 7-5. Example of Performance-Based Budgeting from Fier, Albania
Excerpt from the Parks and Greenery section of the 2007 Budget

Performance strategic objective a.1: Citizens satisfied with the greenery in the city and neighborhoods
  Annual performance goal a.1.1: 5% increase in citizens' satisfaction with the greenery in the city and neighborhoods
  Annual performance goal a.1.2: 5% increase in the number of citizens who use the parks of the city and neighborhoods
  Activities:
  - Increase the number of parks from 86 in 2006 to 90 in 2007
  - Plant trees and flowers

Performance strategic objective a.2: Citizens satisfied with parks
  Annual performance goal a.2.1: 5% increase in citizens rating the cleaning, safety, maintenance, and lighting in parks "very good" or "good"
  Activities:
  - Supply six parks with garbage containers and lighting
  - Supply five parks with benches
  - Maintain the existing parks
  - Add two employees to the greenery service

Outcome Indicators for the Greenery Service                                                              | Baseline 2006 | Target 2007
% of citizens satisfied with the quality of parks and green areas in the city                            | 19.1%         | 25%
% of citizens who use parks and green areas in the city and neighborhoods                                | 27.3%         | 33%
% of citizens who rate the cleanliness of parks / green areas "very good" or "good"                      | 52.5%         | 57%
% of citizens who rate the safety of parks / green areas "very good" or "good"                           | 56.8%         | 62%
% of citizens who rate "very good" or "good" the maintenance of grass and trees in parks / green areas   | 43.2%         | 45%
% of citizens who rate "very good" or "good" the benches and tables in parks and green areas             | 33.9%         | 40%
% of citizens who rate "very good" or "good" the lighting in parks and green areas                       | 27.9%         | 32%
% of citizens who rate the works of art in parks / green areas "very good" or "good"                     | 18%           | 22%

Budget allocated to the Greenery Service and the service enterprise (in thousand leks)

No. | Item                  | 2004   | 2005   | 2006   | 2007
1.  | Service enterprise    | 43 906 | 52 152 | 61 374 | 70 400
2.  | Flores & Co. Company  | 7 062  | 5 569  | 3 452  | 4 056
3.  | Investments           | 0      | 0      | 0      | 10 072
    | Total                 | 50 968 | 57 721 | 64 826 | 84 528
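Budget tables like the one above lend themselves to simple trend checks: recomputing the column totals confirms the table is internally consistent, and year-over-year growth puts the 2007 increase in context. The figures below are taken from the Fier table (in thousand leks); the trend calculation itself is our illustrative sketch, not part of the original budget document.

```python
# Figures from the Greenery Service budget table above (thousand leks).
# The trend check is an illustrative sketch added here, not part of the budget.

budget = {
    2004: {"service_enterprise": 43906, "flores_co": 7062, "investments": 0},
    2005: {"service_enterprise": 52152, "flores_co": 5569, "investments": 0},
    2006: {"service_enterprise": 61374, "flores_co": 3452, "investments": 0},
    2007: {"service_enterprise": 70400, "flores_co": 4056, "investments": 10072},
}

# Recompute the Total row from the line items.
totals = {year: sum(items.values()) for year, items in budget.items()}

def growth_percent(year: int) -> float:
    """Year-over-year growth of the total allocation, in percent."""
    previous = totals[year - 1]
    return round((totals[year] - previous) / previous * 100, 1)
```

Run against the table, the recomputed totals match the Total row (50 968, 57 721, 64 826, and 84 528), and the 2007 total comes out roughly 30 percent above 2006, driven largely by the new investment line.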



USING OUTCOME INFORMATION FOR CAPITAL PROJECTS

Many capital expenditures, such as those for road, water, and sewerage rehabilitation, are intended to provide improved public services. These capital expenditure decisions should be based in part on estimates of the extent of the improvement in services that would result. Using outcome information for selecting, and later justifying, capital projects can help:
- Determine how proposed capital projects relate to the long-term capital plan.
- Identify how proposed capital projects support major goals and objectives in the strategic plan.
- Provide information to the public about how an investment will benefit them in the future.
- Hold agencies more accountable for the results they achieve with the capital projects they propose, by following up on expected results after projects are completed.

Useful steps include:
- Use the latest performance information to determine to what extent a capital facility is needed. For example, information on response times for emergency vehicles can be used to determine whether more vehicles need to be purchased.
- For capital projects intended to directly provide services, provide annual estimates of the changes in results expected for that service after the project is completed. This will enable the city to assess the value of each project to the municipality.
- Hold public discussion sessions with citizens to obtain their opinions on the projects being proposed, including the results they would expect from those projects.
- When preparing public service capital investment proposals, estimate the expected outcome values for each relevant outcome indicator. Exhibit 7-6 provides an example.

Exhibit 7-6. Example of Outcome Information for a Capital Project
During the budget year, the proposed project will complete much-needed reconstruction and signalization of Center Street from X to Y. This work will speed up traffic and reduce congestion in this area.
We expect that peak-hour driving time from one end of the work to the other will improve from its current average of 32 minutes to approximately 19 minutes. In addition, the project is expected to reduce traffic accidents by about one-half, from the 213 accidents recorded over the past 12 months.

- When seeking citizen approval of a capital project, include information on expected outcomes. This information can be very useful in justifying the capital expenditures to citizens.


STRATEGIC PLANNING

If the municipality has a strategic plan in place, it is important to link performance measurement efforts to the plan's strategic goals and objectives or, conversely, to modify those objectives if they are inconsistent with the outcomes selected as city priorities. Monitoring performance against key strategic objectives, including establishing specific targets and tracking compliance with them, is an essential part of ensuring that the city meets those objectives. The performance measurement process plays the following roles in strategic planning:
- Provides the baseline data for the plan
- Provides historical information for estimating the likely future outcomes of the various service options examined in the strategic planning process
- Provides the data for tracking progress against the plan, thus indicating whether mid-course changes are needed

There are direct links between performance management and strategic management. Strategic planning harnesses your city's strengths and anticipates and plans for its future. It involves specifying a vision for the community, defining a strategy (objectives) to achieve the vision, and finally identifying concrete tools (programs) to achieve the objectives. Performance management evaluates the performance of the various programs to see whether objectives or targets are being achieved. It also involves reporting and using performance information to make your city's programs more effective. Strategic planning is a major topic in itself and is not covered in detail in this Guide.

MOTIVATE YOUR EMPLOYEES

Performance data can be very useful for motivating municipal employees. Typically, the most feasible methods are those that rely primarily on recognition rather than financial incentives. Performance-based motivation can be especially successful if it is focused on teams, not just individuals, reinforcing outcome-orientation and rewarding innovation and success.
Linking salary to performance is an appealing concept but has many difficulties, such as the difficulty of objectively assessing an individual's contribution to outcomes without making value judgments. It is, of course, also essential to ensure there is no incentive to reduce the quality of performance by focusing only on the outputs that are being measured. In addition, individual monetary rewards can easily build resentment if they are not perceived as being completely fair. More feasible financial incentives are likely to be rewards for a whole team or department for meeting specific performance targets, rewards in the form of additional funding for training or professional development, or more flexibility and less oversight in some of their activities. Some specific incentives include the following:

Non-monetary Incentives
- Using recognition awards


- Providing access to training or study tours abroad for staff or departments meeting performance targets
- Providing regular performance reports to all program personnel (this can be done by posting results on a bulletin board, for example, and can include breakdowns by region or by customer group)
- Setting performance targets and regularly reviewing achievements in relation to targets (especially effective for shorter reporting periods)
- Giving managers more flexibility in exchange for more accountability for performance
- Making performance information an explicit part of the agency's individual performance appraisal process (all persons in a group would receive the same rating on this part of the appraisal)

Monetary Incentives
- Linking pay to performance (note the difficulties described above, as well as the fact that external factors can greatly affect outcomes)
- Allocating discretionary funds to agencies for programs with high performance (such as providing extra resources for classroom equipment for a high-performing teacher, or returning a part of cost savings to the program's budget)

PERFORMANCE CONTRACTING

If you contract out services (such as street cleaning, solid waste collection, street lighting, and road maintenance) to private service providers, consider using performance contracting. If you provide grants to non-government organizations for services (such as a variety of social services), consider specifying performance targets in the grant. In either case, the agreement between the two parties should include outcome targets so that outcomes can be compared against them.

The outcome indicators should be included in requests for proposals (RFPs), along with desired targets for each indicator. Organizations that indicate they can produce higher levels of outcomes can be given higher ratings during proposal evaluation. A combination of rewards and penalties can be included in these agreements, such as:
- Increased fees for meeting or exceeding targets
- Reduced fees for failing to meet targets

Many service contracts include termination options for poor performance or nonperformance, but these generally apply to extreme circumstances, are usually only vaguely defined, and do not appear to provide much incentive for improving performance. A private organization's performance on previous contracts or grants might be considered as an explicit criterion for future awards.

In the city of Ozurgeti, Georgia, indicators on street cleanliness were included in the performance-based contract signed with the first private-sector solid waste collection contractor to provide communal services for the city. Trained observer ratings were used to monitor the company's performance.
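The reward/penalty idea above can be made concrete as a simple fee-adjustment rule written into the contract. In the sketch below, the 5 percent bonus and penalty rates and the street-cleanliness indicator are hypothetical assumptions chosen for illustration; they are not terms from the Ozurgeti contract or any other agreement cited in the text.

```python
# Hypothetical sketch of a contract fee-adjustment clause. The 5% bonus and
# penalty rates and the cleanliness indicator are illustrative assumptions.

def adjusted_fee(base_fee: float, target: float, actual: float,
                 bonus_rate: float = 0.05, penalty_rate: float = 0.05) -> float:
    """Raise the contractor's fee when the outcome target (e.g., percent of
    streets rated clean by trained observers) is met or exceeded; reduce it
    when the target is missed. Amounts are rounded to two decimals."""
    if actual >= target:
        return round(base_fee * (1 + bonus_rate), 2)
    return round(base_fee * (1 - penalty_rate), 2)
```

A rule of this kind only works if the outcome data are trustworthy, which is why the oversight and data-audit provisions discussed below matter as much as the incentive itself.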

Outcome-based performance contracting with clearly defined performance measures can be attractive to both the government and contractors if the contract also gives the contractor greater flexibility in how the work is performed as long as the outcome targets are met.


Exhibit 7-7 lists a number of key questions that should be considered when establishing outcome-based performance contracting. The following elements are important to a successful contracting process:
- Incentive provisions need to be fair both to the public and to the contractor or grantee when monetary bonuses or penalties are included in contracts or grants.
- Strong contract oversight needs to be maintained to make the contract more effective, by either collecting the performance data yourself or regularly checking the quality of performance data provided by the contractor or grantee.
- Encouragement and help to grantees and contractors is likely to be needed so they can maintain their own outcome measurement processes. Contractors and grantees should be required to allow the government to undertake periodic quality control audits of their data systems.
- Post-service measurement will be needed before final payments are made for services whose important outcomes cannot be assessed until a considerable period of time after the contractor's services have been completed.

Exhibit 7-7. Outcome-Based Performance Contracting Questions
- To what extent should contractors be involved in the selection of performance indicators?
- What incentives should be included in the contract?
- How should the size of penalties and positive incentives be related to outcomes?
- Should initial performance contracts include a hold-harmless clause? (This refers to a period of time in which the contracting entity holds off any adverse action against the contractor or grantee so that they have some time to get on their feet.)
- What role should the contractor play in data collection?
- How should external factors be considered when determining rewards and sanctions (e.g., through escape clauses)?
- How can outcome incentives be used to encourage contractor innovation?
- Should performance on past contracts help determine future contract awards?

Source: Performance Measurement: Getting Results, Second edition 2006. Washington, D.C.: The Urban Institute.

CONTRIBUTE TO NATIONAL AND REGIONAL INFORMATION SOURCES

Performance information collected at the local level can make a major contribution to the regional and national levels of government, which also need to assess the progress of their programs and objectives. They face many challenges in getting the detailed information on outcomes that can help them select the best policies. In some cases they adopt overarching development plans with their own monitoring systems. Most national governments believe monitoring progress is a valuable tool, but there are often fairly serious data gaps, and countries are still working to improve their capacity to collect data. While country-level data are the simplest to use for national planning, the findings often reveal profound differences across the country, or even within a region. For those purposes, national or regional government plans could benefit greatly from local data.

A number of countries have adopted national plans to meet the Millennium Development Goals, in order to concentrate development policies and resources on a few selected priorities. Many countries


also have Poverty Reduction Strategies that are required by multilateral donors such as the World Bank. These strategies include monitoring and evaluation components, and the indicators they use often are based on, or include, MDG indicators. For instance, Albania has built its National Strategy for Socioeconomic Development (its Poverty Reduction Strategy) around the achievement of the Millennium Development Goals. Many of the MDG indicators appear among the NSSED's main objectives, and others are regularly updated in NSSED progress reports. Exhibit 7-8 shows a table from the 2004 Progress Report for Albania's NSSED, providing actual data and target values for MDG indicators.

In Albania, as in many other countries tracking progress toward the MDGs, it is clear that there are inequities across the country, with some regions doing much better than others, or urban areas outpacing rural areas. In such cases, local data can be especially useful, and international donors and governments alike recognize the role that local governments will need to play in meeting those goals. Performance information collected by local governments can be very helpful to the national government in tracking progress toward national targets, and can also suggest refinements to the indicators being monitored at the national level.

FINAL COMMENT ON USING PERFORMANCE INFORMATION

Ultimately, all the above uses of performance information are intended to improve the government's services and their benefits to citizens and the community as a whole, and to provide better accountability of the government to its citizens.


Exhibit 7-8. MDG Indicators from Albania's NSSED Progress Report


Step 8.


The key to building capacity for performance measurement and performance management in most municipalities is likely to be training. In addition, some outside technical assistance is likely to be needed. Suggestions on both are presented below.
- Municipalities should request funding from the national government and donor organizations to build their own capacity, so as to better contribute to the MDGs.
- Donors need to recognize that building the capacity of municipalities will create synergies between the indicators regularly tracked by the municipalities and the MDG indicators.


DECIDE WHAT TRAINING IS REQUIRED, FOR WHOM, AND HOW MUCH

Adopting performance management requires staff training to provide the special skills and understanding of performance measurement and performance management concepts and procedures. This includes training both in how to undertake performance measurement (the "technical" side) and in how to use the information obtained ("managerial" training).

Technical training

Managers and professional staff within both national and line agencies (including those in administrative offices such as finance, procurement, and personnel offices) will need some degree of technical training. This includes exposure to such subjects as:
- Awareness of the MDGs and how municipal performance management can contribute to achieving MDG targets.

SPECIAL NOTE TO LOCAL GOVERNMENTS

This chapter provides a fairly comprehensive list of the kinds of capacity building that would be useful for local governments undertaking performance management. It is not likely that any one local government will need the full range of training described in this section. Moreover, it will be difficult for most local governments to obtain funding for such a complete program. It is practical to think of it in terms of a progression. While most cities will decide their own way of proceeding, this box offers some suggestions of how these items might be prioritized.

Most essential:
- Key concepts: outcomes versus outputs, the importance of monitoring performance
- Data collection: choosing sources that are appropriate and feasible
- Using performance information: how this information can be used in the short term to improve results

With additional resources:
- Technical assistance to support each of the above steps
- Special topics in data collection
- Data analysis: how to look at data in greater detail and with more accuracy
- Data quality

- The distinctions between indicators of inputs, outputs, efficiency, intermediate outcomes, and end outcomes, including the use of such tools as outcome sequence charts (logic models) and focus groups (of both staff and service clients) to obtain information on how to identify appropriate performance indicators.
- Ways to identify needed outcome indicators for a service, including the use of focus groups and outcome sequence charts to illustrate the relationships among outputs, intermediate outcomes, and end outcomes.
- Ways to measure performance, including data sources and data collection procedures such as agency records, surveys (household and user surveys), and trained observer ratings. This includes the basics of surveying service clients, with a brief exposure to sampling concepts to enable agencies to avoid, where appropriate, the need and added cost of surveying large numbers of clients. It also includes the elements and applications of "trained observer" approaches, in which personnel are trained to make systematic ratings of physical conditions as a way to measure various aspects of service quality.
- Breaking out outcome data by key customer and service characteristics, obtaining baseline data, and setting targets.


- Analysis of the performance data. This includes such topics as selecting appropriate comparisons to help interpret how good performance has been, and searching for explanations of why unusual results (unexpectedly high or low) have occurred.

Training in using the performance data

Badly overlooked in many, if not most, governments throughout the world is training managers in using performance information to make program improvements, not only to satisfy higher-level reporting requirements. This neglect has meant that managers and officials tend to look on performance measurement as primarily something to get done in order to satisfy higher-level authorities. The important need is to transition managers from performance measurement to performance management. This means training in such elements as the following, each of which can considerably increase the value of the data to managers and their staff:
- Reviewing performance data that identify and compare the performance achieved for key citizen demographic groups the service serves, such as age, gender, income, and ethnic groups, and citizens living in the various neighborhoods/sections of the municipality.
- Reviewing performance data that identify and compare the performance achieved for key service characteristics, such as comparisons among individual organizational units providing similar services, and comparisons of outcomes across different service delivery mechanisms where different amounts and/or types of a service are delivered to citizens.
- Reviewing performance data on service quality, such as information on the timeliness, helpfulness, courteousness, and accessibility of services.
- Regular and reasonably frequent collection and reporting of the performance data. Typically, higher-level officials examine performance data on an annual basis, usually during the budget/appropriation process. However, annual reporting is not likely to be adequate for managerial purposes.
For some services, reporting performance data at least quarterly is likely to be desirable for managers and their staff to track what is happening. This added frequency will make it more feasible for managers to make mid-course corrections, and then to determine in later quarters whether or not the changes introduced have led to the desired improvements in performance.
- Holding regular managerial performance reviews with staff after each performance report has been issued. Such reviews can be used to identify where performance is on target and where it is not, to identify possible explanations for unusually good or unusually disappointing service levels, and to suggest ways to improve future performance (such as by identifying practices that have been particularly successful for some organizational units or client groups and that might be transferred to other units or client groups).
- Undertaking searches for explanations of performance that is unexpectedly bad or good. A search for explanations should be an explicit part of the performance management process. This topic is discussed in more detail in Step 4: Analyzing Performance Data.
- Ensuring that data become available and are processed in a reasonably timely way, such as within no more than one month after the end of a reporting period. Data that take years, or even months, to obtain will lessen the ability of managers to react to the data and to make improvements.

Several levels of training are required: for elected officials; for department heads, managers, and supervisors; and for many, if not most, technical and support staff. It can be argued that everyone in the government should have exposure to the concepts of citizen-focused services, a central concept of performance management. Performance measurement training appears to work best when it includes small-group exercises, in which participants can work on a specific service and go through the process of actually identifying outcomes and indicators, data collection procedures, and uses of the performance information. These exercises help participants apply the theoretical concepts of the training to practical real-life scenarios, making the training less abstract and more effective.

Who should be trained? All municipal managers and supervisors should eventually be provided training, including administrative managers, not only direct line operating managers. In addition, most municipal staff should also receive some training. Ideally, all staff would be provided at least brief training to encourage all public employees to work to produce the best possible results for the citizens of the municipality. If your municipality has already made a significant start on performance management, less training will be needed. Initially, the training will need to focus on those managers and staff who will be involved in the performance measurement effort. Because of inevitable staff turnover, provision also needs to be made for the training of new municipal employees, both managers and staff.

How much training is needed? Some municipal personnel will require only a brief introduction to performance measurement, perhaps two hours, especially to encourage a results orientation. However, those who will be responsible for implementing the performance measurement process will likely need about two to three days of initial training. Training should be provided not only at the start of the Performance Management Strategy but also on an ongoing, regular basis to reinforce techniques and introduce new approaches. Training will also be required for new staff joining the departments.

What technical assistance is likely to be needed? Hands-on outside technical assistance can be very helpful in developing and implementing a performance measurement and performance management process. Training will likely have considerably more impact if accompanied by such assistance, especially on technical issues that arise. The technical assistance might come from external consultants, local universities, or even internal staff who have experience with and knowledge of particular steps in the process. For example, some municipalities have planning or statistical staff with considerable experience in undertaking surveys; these persons may be available to help your individual agencies survey their customers. In addition, some of your agencies are likely to have staff who already have experience in performance data collection and analysis, such as staff in health, education, and transportation agencies.


Who can provide the training? Training, while highly desirable, can put a strain on the resources of a government. Of course, not all the training needs to be done at once; it can take place over many months or years. Much of the training can ultimately be done by internal municipal staff who have gained experience with and knowledge of performance measurement and performance management issues. As with technical assistance, some agencies in your municipality are likely to have staff who already have experience in performance data collection and analysis, such as staff in health, education, and transportation agencies. These persons and their experience might be drawn on for training. One strategy that has been used is to have persons who have completed the training help train others. In the long run, providing adequate training is the responsibility of each agency itself. An agency can choose to use outside consultants, in-house government personnel, or some combination. Increasingly, written materials are becoming available that can be used in the training; one of the purposes of this manual is to serve in such training programs. You may need to translate some or all of the available material into the local language to enable effective training of line staff.

How much is such training (and technical assistance) likely to cost? This is a critical issue. Overall, the cost of training and technical assistance need not be large. As noted above, the primary out-of-pocket cost is likely to occur at the initial start-up of the performance measurement process, when outside assistance is likely to be needed. Later, your municipality should be able to use internal personnel who gained experience during the initial stages of implementation to provide subsequent training and technical assistance. Early start-up training and technical assistance might be available, if needed, from your national government or from international donors.

FINAL WORDS

Implementing a useful performance management process is not easy. Numerous technical and political issues need to be addressed, such as those discussed throughout this guide. The key for your municipality is to place greater focus on outcomes of importance to your citizens, including the international Millennium Development Goals and other outcomes relating to your citizens' quality of life. The resulting ability to track key outcomes of vital importance to your citizens, and then to use that information to improve the quality and effectiveness of your services to the public, is what will make all this effort well worth it.


BIBLIOGRAPHY

Albania National Report on Progress Towards Achieving the Millennium Development Goals. August 2004.
Analyzing Outcome Information: Getting the Most from Data. Washington, D.C.: The Urban Institute Press, 2004.
Approaching Performance Management in Local Government: A Guide. Council of Europe, Centre of Expertise for Local Government Reform, Directorate of Cooperation for Local and Regional Democracy, Strasbourg.
Armenia Poverty Reduction Strategy Paper: Progress Report (2004-2005), First Term. Yerevan, 2006.
Bangalore City Indicators Programme. Government of Karnataka, Bangalore Metropolitan Region Development Authority, December 2000.
DAI/SLRP CitiStat Implementation. Development Alternatives, Inc. Working paper (undated, probably 2006).
Every KID COUNTS in the District of Columbia: 13th Annual Fact Book. D.C. KIDS COUNT Collaborative for Children and Families, 2006.
Georgia Customer Survey 2004. USAID Local Government Reform Initiative (LGRI) in Georgia.
Georgia Local Government Reform Initiative Final Report. USAID Georgia Local Government Reform Initiative, January 2005.
How Effective Are Community Services? Procedures for Performance Measurement. Washington, D.C.: The Urban Institute and the International City/County Management Association, 2006.
Hatry, Harry. Performance Measurement: Getting Results, 2nd Edition. Washington, D.C.: The Urban Institute Press, 2006.
Indicators for Monitoring the Millennium Development Goals: Definitions, Rationale, Concepts, and Sources. New York: United Nations, 2003.
Kavaja Municipality, Albania. City Cleanliness Rating by Trained Observer Ratings Approach. November 2006.
Key Steps in Outcome Management. Washington, D.C.: The Urban Institute Press, 2003.
Localizing the Millennium Development Goals: A Guide for Municipalities and Local Partners. UN-HABITAT, United Nations Human Settlements Programme, Nairobi, Kenya, March 2006.
Mark, Katharine. Using Performance Management to Strengthen Local Services: A Manual for Local Governments in Ethiopia. Washington, D.C.: July 2006.
The Millennium Development Goals Report 2006. United Nations, New York, 2006.
Progress Report on Regional Development Strategy of Fier Region. UNDP Albania, November 2005.
Republic of Albania. Law on Organization and Functioning of Local Governments, July 31, 2000.
State of the Subways Report Card. NYPIRG Straphangers Campaign, Summer 2004. Available at
Surveying Clients About Outcomes. Washington, D.C.: The Urban Institute Press, 2003.
Urban Indicators Guidelines: Monitoring the Habitat Agenda and the Millennium Development Goals. United Nations Human Settlements Programme, August 2004.
Using Outcome Information: Making Data Pay Off. Washington, D.C.: The Urban Institute Press, 2004.