
The current issue and full text archive of this journal is available at www.emeraldinsight.com/1741-0401.htm

IJPPM 55,6

Comprehensive performance assessment in English local government


Chris Game
Institute of Local Government Studies (INLOGOV), The University of Birmingham, Birmingham, UK
Abstract
Purpose – The purpose of this paper is to provide, at a particularly significant point in its short history, an overview of a unique system of performance management to which all principal local authorities in England have been subject for the past three years.
Design/methodology/approach – Comprehensive performance assessment (CPA) is the controversial centrepiece of a system of performance measurement and improvement management that has involved the external classification of each individual local authority as Excellent, Good, Fair, Weak or Poor. It is a system that, as comparative data on the scale of local government demonstrate, could only be attempted in the UK. The article is written as a non-technical and evaluative narrative of the introduction, early operation and impact of this system, concluding with the changes in methodology introduced to counter the phenomenon of too many of the nation's local authorities becoming officially "too good" for the existing measurement framework.
Findings – Key points that the article brings out concern the exceptional circumstances of UK local government that make such a performance management system even contemplatable, the improvement and recovery part of the regime, and the inherent implications of a system geared to providing regular statistical evidence of continuous performance improvement.
Originality/value – The originality lies in the CPA system itself, aspects of which at least will be of interest both to specialists in performance measurement and management and to those with an interest in decentralized government and intergovernmental relations.
Keywords Performance measurement (quality), Performance management, Decentralised government, Local government, England
Paper type Viewpoint


The problem with English local government? It's just too good!
"It's official! Two-thirds of major councils are Good or Excellent"
"52 promotions, 2 relegations: only one Poor council left"
"Islington transformed from Poor to Good in just two years"
"Coventry's social service scores propel them out of bottom division"
"Warrington's corporate assessment holds them back again"
International Journal of Productivity and Performance Management, Vol. 55 No. 6, 2006, pp. 466-479. © Emerald Group Publishing Limited, 1741-0401. DOI 10.1108/17410400610682497

This article is a heavily edited version of a paper, "Comprehensive performance assessment in English local government: has life on Animal Farm really improved under Napoleon?", delivered at the Conference of the European Group of Public Administration (EGPA) in Bern, Switzerland, September 2005.

Since December 2002, headlines like these have become standard fare in British local government magazines and journals, in what are often their last pre-Christmas issues. UK readers have become used to them, yet to anyone unfamiliar with the uniquenesses of our governmental system, they must seem rather extraordinary. Indeed, but for the odd references to "councils", "services", and "corporate assessment" – and the winners outnumbering losers by a ratio of 26 to 1 – the whole thing might seem to have escaped from the sports pages. In fact, of course, these headlines have nothing to do with sport, at least in the conventional, organised sense (although, as in all performance measurement exercises, gaming skills are at a premium – see Pollitt, 1989). Rather, they are the public manifestation of our comprehensive performance assessment (CPA) system: the means introduced by the Labour Government in 2002, by which all local authorities would have their overall performance regularly externally evaluated and scored, thereby, it was argued, prompting a continuous improvement in the quality of their service provision. This article presents an overview of the CPA regime – the way it works, its origins, its methodology, its results, and its impact – at a particularly significant point in its life cycle. For the problem with a system that has to justify itself by producing regular evidence of continuous improvement is that sooner or later – in this case sooner – almost everyone arrives at or near the top of the scale, with apparently nowhere else to go. When the great majority become "good" or "excellent", the terms lose any meaning they once may have had and almost no one is left in need of improvement. This is the dilemma that, after just three years of the CPA's operation, confronted the Government in 2005: England's local government, as judged by its own appointed assessors, was just too good!
Snakes and ladders – but mainly ladders

The idea of CPA is that all principal local authorities (councils) in England[1], regardless of size or range of responsibilities, can and should be comprehensively assessed by the independent (though ministerially appointed) Audit Commission and then assigned to one of just five performance categories: initially Excellent, Good, Fair, Weak, Poor; now 4 stars to 0 stars. These overall ratings, through a methodology detailed later in the article, are derived from weighted assessments of a council's core services – education, social care, housing, the environment, libraries and leisure facilities, welfare benefits – a general assessment of its use of resources, and a grading of its corporate capacity to improve. CPA results for England's 150 major local authorities – London, metropolitan, unitary and county councils – are published annually in mid-December. Excellent councils – just 22 in 2002, 41 by 2004 – thus receive, as it were, an early Christmas present. For, as an integral part of the assessment and improvement system, their rating qualifies them for various Government goodies in the form of "freedoms and flexibilities". They are excused, for example, from producing certain statutory service plans for ministerial approval, less of their grant funding is ring-fenced, and they are subjected to a lighter touch inspection regime. So, as if they were well-behaved children, these Excellent authorities are trusted a little more, given a little more discretion in spending their pocket money, and suffer a little less constant pestering
from their ministerial parents. Good councils too – 54 initially, 60 by 2004 – are rewarded in a similar, but scaled down, fashion. Even in the first CPA round in December 2002, Excellent and Good councils totalled 76 – or, conveniently for a government keen not to alienate any further a sceptical local government world, 51 per cent of all 150 councils assessed. You would scarcely have noticed it, though, from the national media headlines, as opposed to those of the local government press. For few were the stories that led by noting that a majority of the country's biggest and most important councils had been independently judged either Good or Excellent. As ever, the far commoner practice was to express shock and horror at the unsurprising finding that a small minority – 13, or 9 per cent – had been judged Poor and a further 22 (15 per cent) Weak. Some of these latter councils accepted their ratings with a mixture of resignation and the resolution hoped for by the Audit Commission itself, whose metaphor had been that these assessments should be viewed less as a league table than as a game of snakes and ladders:
Some councils are climbing ladders, others are slipping down snakes. Our job is to kill the snakes and build the ladders.

CPAs, in other words, were a tool for diagnosing which councils needed to improve their performance and in which key service areas. In contrast to the good and excellent, the Christmas present for these poorly performing authorities would consist not of more freedoms and flexibilities, but quite the reverse. With varying forms of ministerial and external oversight, they would be assisted in drawing up and implementing recovery plans. For some disappointed councils, though, the first instinct has been to dispute and challenge the Audit Commission's ratings. There have been injunctions, court cases, complaints about the CPA's "flawed" methodology, scoring systems, and "irrationally simplistic" judgements, and accusations of attempted interference from the Prime Minister's office. There has also emerged stronger evidence than the Audit Commission would initially admit of the impact of external constraints on service performance: a clear relationship between CPA scores and the characteristics – such as economic deprivation and ethnic diversity – of a local authority area (e.g. Andrews, 2004; Andrews et al., 2005). Large, prosperous and ethnically homogeneous councils (like many counties) were relatively more likely to be scored excellent, and the most deprived and ethnically diverse councils (like metropolitan and Inner London boroughs) more likely to be weak or poor. Subsequently, both the Government and the Audit Commission have acknowledged the need for the CPA exercise to be refined. Some adjustments were made, and, as described below, more substantial changes were introduced for CPA 2005, making the overall assessment more demanding. Abandonment of the underlying principle, however, or of the essential methodology, has been ruled out – at least, following Labour's third election victory in May 2005, for the foreseeable future.
Indeed, the head of the civil service, Sir Gus O'Donnell, has begun applying a version of CPA to central government departments, in the form of Departmental Capability Assessments – significantly, though, without the external validation element and with no mention either of the sanction of intervention, both of which are considered necessary for local government.

By contrast, the main opposition party, the Conservatives, favour abolishing the whole "bloated" CPA regime (Conservative Party, 2004), as do some far from Conservative commentators (e.g. Stewart, 2003, pp. 252-3). But the dominant view, even early on, was that the Audit Commission had completed a contentious and complicated job in a fearfully short time period about as satisfactorily as could reasonably have been expected (e.g. Travers, 2005, p. 78). There was a strong feeling across the local government world that the process should be less centralist and more reflective of local priorities and circumstances (Wilson, 2004, p. 65). But the public position of the Local Government Association (LGA) was that it had set a useful baseline of performance, had re-energised local government's commitment to improvement, and that overall "the results provide a reasonably accurate picture of the distribution of council performance" (LGA, 2002). That the LGA, under both Labour and, since 2004, Conservative leadership, has been prepared to incorporate its reservations in "a continuing contribution to the development of the proposals for CPA" (LGA, 2005, p. 1) undoubtedly owes much to the apparent evidence of local government's efficiency and steady improvement that the CPA classifications have annually produced. In December 2003 and 2004 the Audit Commission published updated assessments for all 150 single-tier and county councils, showing significant overall improvement in their performance (Audit Commission, 2003, 2004). As indicated in the opening headlines, 52 of these councils in 2003-2004 moved up at least one CPA category, with just two moving down. As a result, 41 councils were officially categorised as Excellent and over two-thirds (101) as either Excellent or Good.
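The headline arithmetic here is easy to verify. A quick sketch, using only the counts quoted in the text, confirms the "over two-thirds" claim and the growth in Excellent councils cited later in the article:

```python
# Sanity-check the headline figures quoted in the text.
total_councils = 150
excellent_2002, excellent_2004 = 22, 41
good_or_excellent_2004 = 101

# "over two-thirds (101) as either Excellent or Good"
share = good_or_excellent_2004 / total_councils
print(f"Excellent or Good in 2004: {share:.1%}")    # just over two-thirds

# Excellent councils nearly doubled between 2002 and 2004,
# matching the "increased by 86 per cent" figure quoted later.
growth = (excellent_2004 - excellent_2002) / excellent_2002
print(f"Growth in Excellent councils: {growth:.0%}")
```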
December 2004 also marked the completion of the first CPAs for English lower-tier authorities – the 238 non-metropolitan/shire district councils – and their results proved noticeably similar to the first assessments of the single-tier and county councils, with 49 per cent in the top two categories: 14 per cent Excellent and 35 per cent Good; 36 per cent were judged Fair, leaving just 12 per cent Weak and 4 per cent (9) Poor.

More interventionist than Thatcher

Within a four-year Parliament, therefore, CPA had become a major element in the relationship between the Government and local councils – a remarkable achievement for something that had not been publicly mentioned even in Labour's June 2001 election manifesto, let alone that of 1997; also for an exercise that, on logistical grounds alone, could simply not be contemplated in almost any other large Western European country. As shown in Table I, UK local government is organised on a scale several times larger than that of its European neighbours, and it is the resulting modest number of councils – just 388 in England for a population of nearly 50 million – that provides the structural precondition for an intensive external monitoring exercise like CPA. The other preconditions, of course, are the presence of a political culture and a governmental will to make acceptable the idea that central government should concern itself in this degree of directive detail with the activities of democratically elected local councils. Which in turn raises the question of whether it should be seen primarily as a force for localism or as a further turn of the centralist screw in a relationship in which the centre has had the overwhelmingly dominant hand under governments of both major parties in recent years (see Wilson and Game, 2006, esp. ch. 9).

Table I. Britain's large-scale local government

Country            Population    Number of principal             Average population
                   (millions)    local councils                  per council
France                 60.7      36,782 Communes                          1,650
Austria                 8.2       2,380 Gemeinden                         3,440
Spain                  40.3       8,108 Municipios                        4,970
Germany                82.4      12,434 Gemeinden                         6,630
Italy                  58.1       8,101 Comuni                            7,170
Greece                 10.7       1,033 Dimoi, Kinotites                 10,360
Norway                  4.6         435 Kommuner                         10,500
Finland                 5.2         444 Kunta                            11,710
Belgium                10.4         589 Communes, Gemeenten              17,660
Denmark                 5.4         271 Kommuner                         19,930
Sweden                  9.0         290 Kommuner                         31,000
Portugal               10.6         309 Municipios                       34,300
The Netherlands        16.4         467 Gemeenten                        35,120
Ireland                 4.0          39 Counties, cities, boroughs      103,000
(England)              49.9         388 Counties, districts, etc.       129,000
UK                     60.4         468 Counties, districts, etc.       129,000

Sources: Wilson and Game, 2006, Exhibit 12.3, based on The Local Channel, 2005, pp. 4, 7, and individual countries' sources
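The final column of Table I is simply population divided by the number of councils. A quick sketch, using a handful of the quoted figures, reproduces the published column to within the table's rounding:

```python
# Recompute the "average population per council" column of Table I
# from the population and council counts quoted in the text.
table = {
    # country: (population, number of principal councils)
    "France":  (60_700_000, 36_782),
    "Germany": (82_400_000, 12_434),
    "Sweden":  (9_000_000, 290),
    "England": (49_900_000, 388),
    "UK":      (60_400_000, 468),
}

for country, (population, councils) in table.items():
    avg = population / councils
    print(f"{country}: {avg:,.0f} people per council")
```

England's 388 councils for nearly 50 million people give roughly 129,000 residents per council – some 78 times the French figure – which is precisely the scale gap the article identifies as CPA's structural precondition.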

CPA has been presented by the Blair Government as a key contribution to its decentralization policy. The first part of the title of the White Paper in which it was conceived (DTLR, 2001) was Strong Local Leadership, and the introductory paragraphs were peppered with assertions about ministers' commitment to "vibrant local democracy" and to removing unnecessary controls which "stifle innovation". CPA is, though, an unambiguously centrally run, management-focused reform: to quote the Deputy Prime Minister, John Prescott, "one of the most ambitious exercises in performance management ever undertaken by central and local government". It is thus a direct successor to previous managerialist programmes – Compulsory Competitive Tendering (CCT), introduced by the Thatcher/Major Conservative Governments, and its Labour replacement, Best Value (Wilson and Game, 2006, ch. 17). The central direction is the same, concessions to local priorities about as minimal, and the propensity to take over – or, in the favoured euphemism, "engage with" – poorly performing authorities enormously greater. Any relaxation of control is confined largely to those authorities judged, by appointed outside experts, to be the highest performers, while those at the other end of the scale are subject to a degree of central intervention that Conservative Governments would not have dared.

CPA's antecedents – CCT and Best Value

Of all the local government reforms introduced by the 1979-1997 Conservative administrations, arguably the most far-reaching were those associated with Compulsory Competitive Tendering (CCT). CCT required councils to compare the costs of continuing to provide specified services in-house with those of any interested private contractors, and to award the service contract to the most competitive bidder. That meant the lowest bidder, and councils were prohibited from imposing conditions on such issues as trade union rights, employment protection, sickness benefit, pensions, training, and equal opportunities that might have "the effect of restricting, distorting or preventing competition" (Local Government Act 1988, s.7(7)). Cost was always the ultimate criterion, rather than quality, and this centralist rigidity – and the privatising ideology underpinning it – caused it, not surprisingly, to be abhorred by all Labour politicians, local and national alike. The abolition of CCT became a central imperative in Labour's 1997 manifesto, but the New Labour Government elected under moderniser Tony Blair was concerned, proverbially, not to throw the baby out with the bath water. CCT had to go, but ministers were insistent that it be replaced with a regime that, while more concerned with quality and performance improvement, would continue to emphasise economy and efficiency in service delivery. That regime was Best Value (BV) service provision, subject of the Government's first major local government legislation, the Local Government Act 1999. Councils were required to produce an annual BV Performance Plan based in part on regular service-specific and cross-cutting reviews, themselves driven by the "four Cs": each review should challenge the purpose of the service, compare the authority's performance with others', consult the community, and provide for competition where appropriate. With this information to hand, the council should then select the delivery method that would give best value to local people. This BV regime, monitored by external, independent audit and inspection arrangements, should secure continuous improvement in the way the council undertook all its service responsibilities. However, persistent performance failure would be referred to the Minister, with a view to appropriate intervention and, ultimately, the removal of responsibility for the failing service from the authority altogether.
Best Value, Mark II: Comprehensive Performance Assessment

Following the Labour Government's re-election in June 2001, a new team of ministers took over responsibility for local government. Significantly, their first major publication – the December 2001 White Paper, Strong Local Leadership: Quality Public Services – conceded and sought to respond to the criticisms of excessive centralist bureaucracy, micro-management and interventionism that were felt to characterise government policy in general and Best Value in particular:
Over the course of this Parliament we will give councils more space to innovate, to respond in ways that are appropriate to local circumstances, and to provide more effective leadership. We will provide greater freedom for councils to borrow, invest, trade, charge and set spending priorities (paras. 4.6-4.7).


This "space", however, and the new freedoms would have to be earned – through ministers' latest managerial invention, Comprehensive Performance Assessment. High performing councils would indeed be rewarded with fewer inspections, fewer policy plans to be submitted for ministerial approval, and some loosening of constraints over their current and capital spending. Poor performers, by contrast, should expect few such freedoms and a great deal of remedial attention, even if the earlier interventionist language had been subtly modified to that of relational "engagement" (Skelcher et al., 2004, esp. ch. 2). The radical innovation of CPA was that every council – from Birmingham, with its £2.5 billion budget and responsibility for literally hundreds of different services, to the
smallest district council – would be assigned to one of just five categories. The assessment system would be similar in principle to that used in the Audit Commission's stand-alone Best Value inspections, which scored each inspected service on two four-point scales:

(1) Quality of the service: excellent, good, fair, poor.
(2) Prospects for improvement: yes, probable, unlikely, no.

Best Value, however, operated on a service-by-service basis, whereas CPA would use the five-point scale already noted: Excellent, Good, Fair, Weak, Poor. Initial misgivings were widespread, and easy to understand. Best Value inspections may have been hugely time-demanding, intrusive, sometimes even confrontational, but at least the inspectors were focussing on one service at a time – about which they might be expected to have some specialist knowledge. CPA seemed to demand the impossible: a single snapshot judgement of the performance of the whole authority, on the apparent assumption that that performance will be uniform across its dozens of service areas and thousands of employees. Such an assumption flew in the face of empirical evidence, both from and independent of Best Value assessments, which indicated the very reverse:
There is very little evidence to suggest that levels of performance vary together across services . . . results imply that performance is not driven by the general characteristics of local councils, but by the circumstances, organisation or ethos of specic service departments. It is therefore inappropriate to categorise councils into high performing and low performing groups across all services (Boyne, 1997, p. 40, cited in Wilson, 2004, p. 66).

The CPA methodology – the performance measurement framework

The Audit Commission, however, was assigned the task of devising a measurement framework that would do precisely that (Audit Commission, 2002, p. 2). Like Best Value, it would measure two key elements of a council's activities:

(1) Core service performance, which covers six principal service areas, differentially weighted – education, social care for children and adults, housing, the environment, libraries and leisure facilities, and welfare benefits – plus a general assessment of the council's use of resources. Assessment evidence would include judgments from the Audit Commission's own and other inspectorates, performance indicators, and government assessments of council plans. Individual service assessments are combined to provide an overall service score out of 4.

(2) Ability to improve, defined as a council's ability to lead its community and to improve services. It is a corporate assessment, comprising two elements: a self-assessment of what the council is trying to achieve, its success in delivering those priorities, and its plans for the future, followed by an external assessment carried out by a small team, including an auditor and inspector plus officers and members from "peer" councils. The outcome is a detailed report on the council's strengths and weaknesses, and a grading, again on a 4-point scale, of its ability to improve.

The core service performance scores and ability judgements are then assembled into a matrix that produces the overall assessment of Excellent, Good, Fair, Weak or Poor. The claim for CPA is that, by pulling together for the first time in a single framework information held by councils themselves, government departments, auditors and inspectors, it provides the most complete picture yet of their current performance, their strengths and weaknesses, and their ability to improve. While emphasising that there is no such entity as a "typical" council, the Audit Commission does try to summarise what a council in each of the five CPA categories might look like, and the areas on which it might need to focus attention in its future improvement planning (Audit Commission, 2002, pp. 3-4). Excellent councils, for example, will have shown overall that they deliver high quality services, especially in national priority areas such as education and social services. They have effective leadership and management arrangements, and are clear about their priorities, which are linked to local needs and aspirations. Their finances are well managed and are directed at key priorities. Excellent councils are good at achieving more for their communities through the delivery of cross-cutting projects, often in partnership with others. Poor councils, in almost complete contrast, are likely to offer inadequate services and to lack, too, the leadership and managerial capacity to improve them. Performance management is ineffective and resources are not used to their best advantage. Most poor councils are trying to make service improvements, but lack the focus and clarity of priorities to do so effectively. Engagement with local people does not translate into positive changes or better services to the community. Without external support, the efforts that many poor councils are making to improve services for their citizens are unlikely to lead to lasting change.
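The Audit Commission's actual service weights and matrix rules are not reproduced in the article, so the following is only an illustrative sketch of the two-element logic described above: hypothetical weights produce a combined service score out of 4, which is then crossed with the 4-point "ability to improve" grade to yield one of the five categories. Every numeric value below is an assumption, not the Commission's methodology.

```python
# Illustrative sketch of the CPA scoring logic described in the text.
# The weights and the category thresholds are HYPOTHETICAL: the Audit
# Commission's actual values are not given in the article.

SERVICE_WEIGHTS = {
    "education": 4,              # assumed heavier weighting for the
    "social care": 4,            # national priority services
    "housing": 2,
    "environment": 2,
    "libraries and leisure": 1,
    "welfare benefits": 1,
    "use of resources": 2,
}

def core_service_score(scores: dict) -> float:
    """Weighted average of per-service scores, each on a 1-4 scale."""
    total_weight = sum(SERVICE_WEIGHTS.values())
    weighted = sum(SERVICE_WEIGHTS[s] * scores[s] for s in SERVICE_WEIGHTS)
    return weighted / total_weight

def cpa_category(scores: dict, ability_to_improve: int) -> str:
    """Combine the service score (out of 4) and the corporate
    'ability to improve' grade (1-4) into one of five categories."""
    combined = core_service_score(scores) + ability_to_improve  # range 2-8
    if combined >= 7.5:
        return "Excellent"
    if combined >= 6.0:
        return "Good"
    if combined >= 4.5:
        return "Fair"
    if combined >= 3.0:
        return "Weak"
    return "Poor"

solid_council = {s: 3 for s in SERVICE_WEIGHTS}   # 3s across the board
print(cpa_category(solid_council, ability_to_improve=3))
```

The design point the sketch captures is the one the article stresses: a single corporate grade is folded into the service scores, so a council strong on services but judged incapable of improving can still land in a low category.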
Reactions to the early rounds of CPAs

If, following this process, your council is categorised as Excellent or Good, your natural instinct is to praise the Audit Commission's perspicacity – for overlooking or under-weighting the deficiencies that even you know exist in certain aspects of your service provision. If, however, you are judged Weak or Poor, you will probably see things rather differently, particularly if you happen to work in a service area previously assigned a Best Value rating of Good or Excellent. This is the fault line running through the whole CPA process. Everyone who works in or has anything much to do with local government – not least we ourselves as service users – knows that the worst managed councils have their areas of strength and even excellence, just as the very best have weaknesses. Indeed, that is precisely what the last two years of Best Value inspections, prior to the introduction of CPA, had been reporting. Large authorities would have had up to a dozen service areas inspected in that time, with results typically ranging across at least three of the four BV ratings: excellent, good, fair, poor. CPAs focus specifically on core services and corporate performance, so no council should have expected its CPA rating to amount to an aggregate of its previous individual service BV scores. Even so, any comprehensive or overall rating of a large, multifunctional, multi-service organisation is bound to conceal almost as much as it reveals. As already noted, though, much of the potential vehemence of criticism from within local government was undoubtedly lanced by the favourable distribution of the overall
CPA rankings and the largely positive comments that these drew from ministers. In the second round of CPAs for single-tier and county authorities, for example, Good and Excellent councils increased from 51 per cent of the total to nearly 55 per cent, the Poor and Weak fell from 23 per cent to under 19 per cent; 26 councils moved up at least one category, while just nine moved down. Any system that can produce so many more apparent winners than losers will not find itself short of defenders – especially when those winners are fulsomely praised by the Minister:

"There have been real improvements in local public services over the past year," Local Government Minister Nick Raynsford said today. "Today's results show councils rising to the challenge of delivering better quality public services. We introduced CPA to drive improvement in local government and that is exactly what it is doing. But CPA is not about scoring points for the sake of it. Better public services make a difference to people's lives. Councils which have raised their CPA performance are delivering real improvements on the ground" (ODPM, 2003).

No, the problem for a system geared to demonstrating continuous improvement – and that therefore has to go on producing more winners than losers – is that the supply of losers begins quite quickly to run out. Which is what manifestly started to happen in the third round of CPAs in December 2004. As we saw in our opening headlines, more than two-thirds of all England's major councils were by 2004 either Good or Excellent, the latter having increased by 86 per cent in just two years. Over a third of councils had moved up at least one category and five had moved up two – Weak becoming Good, or Fair becoming Excellent – in the space of 12 months. Poor councils had fallen by 89 per cent in a year, and in the whole of England there was now but one left. Such figures inevitably prompt incredulity, but also a number of serious questions:

- Are these high and ever-improving standards reflected in the perceptions of the general public and service users? Indeed, to what extent are local residents and electors even aware of their councils' CPA ratings?
- What is the evidence of the performance improvement of, for example, the 13 councils rated Poor in December 2002?
- What is the future of a performance measurement exercise in which the overwhelming majority are good or excellent and there remains only a negligible amount of poor or weak performance?

The first of these questions can be quickly answered, at least in general terms. There has, over the three years of CPA's operation, been a gradually increasing awareness on the part of local media and the public that some new form of council grading system has come into existence, but any correlation between councils' CPA ratings and the levels of satisfaction in their performance that local residents express in opinion surveys is, at best, weak. There are councils with Fair, Weak or even Poor CPA ratings that are well regarded by their residents, while supposedly better performing councils are significantly less popular. It would seem that people's attitudes towards their own councils and towards the performance of local government as a whole are far more influenced by prevailing levels of council tax – the single, property-based tax that is available to UK local authorities. As it happened – largely because of the impact on council budgets of
various central government decisions and policies – council tax in England rose on average by 8.5 per cent in 2002/2003 and by 13.1 per cent in 2003/2004, compared to an average annual increase of 6.3 per cent in the preceding eight years. Over virtually the same time period (2001-2004), the average level of satisfaction with individual councils fell by 10 per cent – at the same time as the CPA exercise was telling those same respondents that local government performance was good and improving. Only in 2004/2005, when the average tax rise fell back to 6.5 per cent, did resident satisfaction return to approximately its 2001 level. Offered the choice of their council achieving a higher CPA ranking or leaving them with more of their own money to spend, there can be little doubt how most residents, if they took the trouble to turn out, would vote.

Recovery of poorly performing councils

Continuous performance improvement, particularly from previously poorly performing local authorities, has been a key Government objective at the heart of both Best Value and CPA. Other organisations have shared this concern – notably the Local Government Association and the Improvement and Development Agency (IDeA). But it has been the Office of the Deputy Prime Minister (ODPM) that has determined the rules of engagement with these authorities and that has commissioned a research programme to track and evaluate their organisational turnaround and recovery (Skelcher et al., 2004; Hughes et al., 2004; Fox, 2003a, b; Boyne et al., 2004; Jas, 2004; Turner and Whiteman, 2005). In total, 15 councils were classed as "poorly performing" following CPA 2002: the 13 assessed as Poor, plus two in the Weak category with the lowest score for their ability to improve. By the nature of the CPA scoring system, the 15 were not a homogeneous group: some were particularly poor on either services or ability to improve, while some had been judged poor on both.
There were, however, certain commonalities behind the different individual stories that between them were identified by the research team as providing a key to understanding the underlying causes of unsatisfactory performance (Hughes et al., 2004, p. 15ff.). These common themes are:

- Ineffective political arrangements: particular political structures and/or behaviours that limit elected members' capacity to exercise effective leadership and take collective action to address shortcomings.
- Ineffective managerial arrangements: the breakdown of effective managerial leadership as the result of either excessive change or inertia.
- Weaknesses in relationships with the external environment: a failure to engage effectively with local communities, or a disengagement from or resistance to mainstream changes taking place in local government.
- Weaknesses in the authority's culture: an organisational culture/self-image significantly at odds with the views of external inspectors and other stakeholders.

The mechanisms that contribute to the recovery process of a poorly performing authority take two distinct, if not in practice entirely separate, forms: the regulatory and the developmental. Regulatory mechanisms are the structures and systems put in place by the ODPM, in association with the Audit Commission and other regulators,

English local government

475

IJPPM 55,6

476

to motivate, supervise, guide and assess the recovery of individual councils (Hughes et al., 2004, p. 31ff.). They include:

• The lead official: usually a former senior local government manager, who has the key role in the day-to-day relationship between the ODPM and the council, assisting the council in formulating its recovery plan, identifying external sources of support, and co-ordinating the Government's monitoring of the implementation of the plan.
• The Audit Commission relationship manager: has the role of co-ordinating inspection activity for any council (not just those subject to ODPM involvement).
• The Government monitoring board: comprises the above two officials, plus other stakeholders with an interest in the council's recovery. The board provides the formal mechanism through which the Government assesses the council's performance recovery, and makes recommendations to the Minister, other government bodies and inspectorates, and the council itself.
• The liaison board: will replace the monitoring board in councils where the Minister has decided that progress has been sufficient to introduce a lower level of oversight. As a reflection of this change in their status, local authorities are members of liaison boards, rather than merely attendees, as at monitoring boards.

Developmental mechanisms are the arrangements put in place by each authority to organise, implement and monitor their recovery. They include recovery plans, political mentors, improvement boards, interim managers, external support, outsourcing, and various other mechanisms (Hughes et al., 2004, ch. 6). In terms of the impact on their CPA gradings, the effect of these recovery plans and processes was impressive. All but one of the 15 councils assessed as poorly performing in December 2002 had, by December 2004, improved their grading by at least one category, and the first of these improvers had in fact been released from ODPM supervision within 14 months of CPA 2002.
On the basis of this admittedly early evidence, an ODPM-commissioned monitoring team gave the following cautious endorsement of the Government's strategy (ODPM, 2004, p. 4):
The approaches being used to secure the recovery of poorly performing councils are producing results in terms of improved performance. The use of graduated pressure (related to the situation in each council) and the commitment to non-statutory engagement by ODPM has minimized resistance by authorities to ODPM involvement. This has enabled ODPM and other external stakeholders to develop an approach to engagement that is tailored to the situation in each council, rather than imposing a uniform strategy.

Raising the bar
To minimise resistance by poorly performing authorities keen to demonstrate their recovery is one thing. Containing the resistance of authorities who are told at one and the same time that their performance is improving but that their CPA grading has fallen is an altogether more difficult proposition. That, however, was the task the Government and the Audit Commission set for themselves in 2005. For, in order to restore some balance to the five-category system and avoid the derisible situation of almost all councils being officially assessed as Good or Excellent, the Audit
Commission, following consultations, sought to turn the whole CPA process into what it termed A Harder Test (Audit Commission, 2005a). The basic framework was retained, but each key element (the service block assessments, the corporate assessment, the categorisation rules) underwent significant, if mainly technical, changes (Game, 2005), with the explicit intention of producing a considerably more stringent test, with more emphasis on outcomes for local people and on value for money (Audit Commission, 2005b, p. 11). For outside observers, the most visible change was to the actual categories. There are still five, but the single-word adjectives have been disingenuously transformed into a scale of 4 stars to 0 stars. There was also a significant addition. Alongside their CPA score, all councils would have a direction of travel label (improving strongly, well, adequately, or inadequately), officially to indicate the state of their arrangements to secure continuous improvement, but possibly also, it was suggested, a small sop to those councils finding themselves downgraded in the new and harder test. It is perhaps to the Government's credit that the object of all these changes was made quite transparent, and the Audit Commission document referred directly to raising the bar by making the CPA test more demanding than it was (Audit Commission, 2005a, p. 12). In truth, though, a more accurate athletics metaphor would be the javelin event, rather than the high jump or pole vault. For this exercise in recalibration was equivalent to the redesigns of the javelin that were undertaken for safety reasons in 1986 for men and 1999 for women. The effects too, in several instances, were similar: top performers, on the basis of official records alone, appearing to have regressed. Of the 41 top-tier and county councils rated Excellent in 2004, only 25 achieved 4-star ratings in December 2005 (Audit Commission, 2005b, pp. 12-13).
The other 16 dropped out of the top category (a fate suffered by just one council over the preceding two years), ten of whom had the small, and possibly confusing, consolation of being able to point to a strongly improving or well improving direction of travel. Though not as extensive as had been anticipated, this cull of top performers was in itself hardly surprising, being a principal objective of the whole reform exercise. What was unexpected was that anything like as many as 12 of the 16 would be replaced by formerly Good councils raising their performance in the harder test to the extent of attaining 4-star status. A similar fluidity was seen throughout the rankings. Of the 66 3-star councils, for example, 41 had previously been Good, but, in addition to the 16 former Excellents, 7 had been Fair and 2 Weak. In total, therefore, there were still in 2005/2006, as in our opening headline, over two-thirds of major English councils (70 per cent) officially judged either good or excellent, and a similar, though not completely overlapping, number improving strongly or well. The early impact of the Harder Test would appear to have been to leave the overall profile of performance little changed, while wakening from a three-year somnolence the snakes in the Audit Commission's snakes and ladders analogy: for the first time, as many authorities are slipping down snakes as are climbing ladders.
Note
1. It should be emphasised that this CPA process of performance management is confined to England, local government now being a devolved responsibility in the rest of the UK. England has a hybrid structure of 388 principal local authorities: 36 metropolitan district councils, 33 London borough councils, and 47 unitary councils (all of which are actually or effectively single-tier or unitary), and 34 county councils, which are the upper tier of a two-tier system, the lower tier of which consists of 238 non-metropolitan/shire district councils.

References
Andrews, R. (2004), "Analysing deprivation and local authority performance: the implications of CPA", Public Money and Management, Vol. 24 No. 1, pp. 19-26.
Andrews, R., Boyne, G., Law, J. and Walker, R. (2005), "External constraints on local service standards: the case of comprehensive performance assessment in English local government", Public Administration, Vol. 83 No. 3, pp. 639-56.
Audit Commission (2002), Comprehensive Performance Assessment: Scores and Analysis of Performance, Audit Commission, London.
Audit Commission (2003), Comprehensive Performance Assessment: Scores and Analysis of Performance for Single Tier and County Councils in England, Audit Commission, London.
Audit Commission (2004), Comprehensive Performance Assessment: Scores and Analysis of Performance for Single Tier and County Councils in England, Audit Commission, London.
Audit Commission (2005a), CPA – The Harder Test: Single Tier and County Councils' Framework for 2005, Audit Commission, London.
Audit Commission (2005b), CPA – The Harder Test: Scores and Analysis of Performance for Single Tier and County Councils, Audit Commission, London.
Boyne, G. (1997), "Comparing the performance of local authorities: an evaluation of the Audit Commission indicators", Local Government Studies, Vol. 17 No. 4, pp. 17-43.
Boyne, G., Martin, S. and Reid, S. (2004), Learning from the Experience of Recovery – Policy Paper 3: Strategies for Organizational Recovery in Local Government: Retrenchment, Repositioning and Reorganization, Centre for Local and Regional Government Research, Cardiff.
Conservative Party (2004), Local Conservatives Deliver Better Services and Lower Taxes: Manifesto for the English Local Elections, June 2004, Conservative Party, London.
Department for Transport, Local Government and the Regions (DTLR) (2001), Strong Local Leadership – Quality Public Services, DTLR, London.
Fox, P. (2003a), Learning from the Experience of Recovery – Policy Paper 1: Good Practice in Recovery Planning, University of Birmingham, School of Public Policy, Birmingham.
Fox, P. (2003b), Learning from the Experience of Recovery – Policy Paper 2: Recovery in Poorly Performing Councils, University of Birmingham, School of Public Policy, Birmingham.
Game, C. (2005), "Comprehensive performance assessment in English local government: has life on Animal Farm really improved under Napoleon?", paper delivered at the Conference of the European Group of Public Administration (EGPA), Bern, Switzerland, September 2005, available at: http://soc.kuleuven.be/io/egpa/qual/bern/Game.pdf
Hughes, M., Skelcher, C., Jas, P., Whiteman, P. and Turner, D. (2004), Learning from the Experience of Recovery – Paths to Recovery: Second Annual Report, ODPM, London.
Jas, P. (2004), Learning from the Experience of Recovery – Policy Paper 4: The Role of Interim Management in Local Authorities Recovering from Poor Performance, University of Birmingham, School of Public Policy, Birmingham.
Local Channel (The) (2005), "ICT and e-government development for small first-tier councils in the EU 25", available at: www.thelocalchannel.co.uk/i2010/Docs/IT2010.pdf
Local Government Association (LGA) (2002), Comprehensive Performance Assessment FAQs, LGA, London.
Local Government Association (2005), Proposals for Comprehensive Performance Assessment from 2005: LGA Response to the Audit Commission Consultation Paper, LGA, London.
Office of the Deputy Prime Minister (ODPM) (2003), "Performance assessment driving up council standards", news release 2003/0278, 18 December, ODPM, London.
Office of the Deputy Prime Minister (ODPM) (2004), Learning from the Experience of Recovery – Paths to Recovery: Second Annual Report, Research Summary, ODPM, London.
Pollitt, C. (1989), "Performance indicators in the longer term", Public Money and Management, Vol. 9 No. 3, pp. 51-5.
Skelcher, C., Hughes, M., Jas, P., Turner, D. and Whiteman, P. (2004), Learning from the Experience of Recovery – Foundations for Recovery: First Annual Report, ODPM, London.
Stewart, J. (2003), Modernising British Local Government: An Assessment of Labour's Reform Programme, Palgrave Macmillan, Basingstoke.
Travers, T. (2005), "Local and central government", in Seldon, A. and Kavanagh, D. (Eds), The Blair Effect 2001-5, Cambridge University Press, Cambridge, pp. 68-93.
Turner, D. and Whiteman, P. (2005), "Learning from the experience of recovery: the turnaround of poorly performing local authorities", Local Government Studies, Vol. 31 No. 5, pp. 627-54.
Wilson, D. and Game, C. (2006), Local Government in the United Kingdom, 4th ed., Palgrave Macmillan, Basingstoke.
Wilson, J. (2004), "Comprehensive performance assessment: springboard or dead-weight?", Public Money and Management, Vol. 24 No. 1, pp. 63-8.
About the author
Chris Game is an Honorary Senior Lecturer at the University of Birmingham's Institute of Local Government Studies (INLOGOV). His specialist interests are in the politics of sub-central government, and he has published extensively on the topics of elections and electoral reform, political parties, councillors and political leadership, inter-governmental relations, and political management. He is joint author of Local Government in the United Kingdom, the 4th edition of which is published in Summer 2006. He can be contacted at: c.h.game@bham.ac.uk
