
Performance management in the public sector: fact or fiction?

Zoe Radnor
Warwick Business School, University of Warwick, Coventry, UK, and
Mary McGuire
Anite Public Sector, Milton Keynes, UK

Received November 2003
Revised November 2003
Accepted December 2003

Keywords Performance management systems, Public sector organizations, Health services sector, Central government, Balanced scorecard, Incentive schemes
Abstract Since New Labour came to power in the UK in 1997, there has been a drive to improve the effectiveness of public services through the use of private sector principles, from the Modernising Government White Paper to the development of the Public Services Productivity Panel, which produced a raft of White Papers tackling health, social services, welfare and criminal justice. This paper, through the analysis of two studies, will reflect on some of the general literature on public sector performance management and the findings and recommendations of the Public Services Productivity Panel in order to attempt to answer whether performance management in the public sector is currently fact or fiction. In other words, the paper will aim, to a certain extent, to determine whether it is really possible to raise productivity and performance within public sector organisations through developing performance management systems based on private sector experience.

Introduction
Since New Labour came to power in the UK in 1997, there has been a drive to improve the effectiveness of public services through the use of private sector principles, from the Modernising Government White Paper to the development of the Public Services Productivity Panel, which produced a raft of White Papers tackling health, social services, welfare and criminal justice. New Labour has also sought to bring transparency to the performance of public services through the introduction of targets. This in turn has led to the introduction of performance targets in all areas of the public sector, from local and central government to education, health and community care.
International Journal of Productivity and Performance Management, Vol. 53 No. 3, 2004, pp. 245-260. © Emerald Group Publishing Limited, 1741-0401. DOI 10.1108/17410400410523783

In assessing the broad reasons why organisations have performance management systems (PMSs) in general (Neely, 1998), no grounds can be found to indicate that these reasons are not equally applicable to public sector organisations. This view is underlined by central government's decision to use public service agreements from 1998 to drive continuous improvement in modernising public services (Audit Commission, 1999). In articulating the need for PMSs to be used in order to modernise government services, the Audit Commission (1999) emphasises two key reasons, namely, the need:
(1) to improve public services (i.e. through increased economy, efficiency and effectiveness in service delivery); and
(2) to reinforce accountability, so that organisations are clearly held to
account for the resources they use, and the outcomes achieved.
To address these points the Public Services Productivity Panel was formed, which aimed to improve the productivity and efficiency of public services (Public
Services Productivity Panel, 2000). The members of the panel produced a series
of approximately twelve reports in order to support the improvement of public
services productivity. These included a joint report Public Service Productivity:
Meeting the Challenge (Public Services Productivity Panel, 2000) which
outlined a performance management framework in order to “assist managers
and organisations to strengthen existing systems”. The reports addressed a
number of perspectives and areas of performance management, ranging from particular public services to a report by Makinson (2000) that focused on performance incentives within the public sector.
This paper, through the analysis of two studies, will reflect on some of the
general literature on public sector performance management and the findings
and recommendations of the Public Services Productivity Panel in order to
attempt to answer whether performance management in the public sector is currently fact or fiction. In other words, the paper will aim, to a certain extent, to determine whether it is really possible to raise productivity and performance within public sector organisations through developing PMSs based on private sector experience.

Background
It could be argued that there has been a “revolution” in performance
measurement and performance management over the past 20 years with the
enormous interest reflecting itself in practitioner and academic conferences and
publications (Neely, 1998). The two terms performance measurement and performance management are often used interchangeably. However, it could be argued that performance measurement is the act of measuring the performance, whereas performance management aims to react to the "outcome" measure, using it in order to manage the performance. This is more clearly defined by
Lebas (1995) who stated that “Performance Measurement: includes measures
based on key success factors, measures for detection of deviations, measures to
track past achievements, measures to describe the status potential, measures of
output, measures of input, etc. and Performance Management: involves
training, team work, dialogue, management style, attitudes, shared vision,
employee involvement, multicompetence, incentives and rewards, etc.”. For the
purpose of this paper both terms will be used as, to date, the majority of the debate within the public sector has talked about performance measurement (Johnsen, 2000), although it could be argued that the systems being introduced, for example the balanced scorecard (BSC) and European Foundation Quality Model (EFQM), are more in line with performance management tools (Kaplan and Norton, 1992).

In general, PMSs can be classified into four main groups depending upon the variables they seek to measure, with a view to influencing or controlling their outcome (Simons, 2000). These four classifications are defined as follows:
(1) belief systems (e.g. audit of mission and vision systems via staff surveys);
(2) boundary systems (e.g. systematic internal financial controls);
(3) diagnostic control systems (e.g. profit plans and budgets); and
(4) interactive control systems (e.g. profit planning and project
management).
However, it is important to note that successful strategic control is unlikely to result from the use of just one type of PMS alone (Simons, 2000). Therefore, when organisations typically reach the decision to introduce another PMS (rather than their only PMS), it is also important to remember that the performance management function within an organisation is an "overhead" that diverts resources away from the production of "front line" services. Indeed,
this was one of the explicit reasons given by the government to commence its
modernisation programme for the NHS (Secretary of State for Health, 1997).
In addition, within organisations a new PMS can also create performance
management tensions (as well as benefits) in its own right. Therefore, it is vital
that organisations simply do not automatically add an extra PMS to their
portfolio as soon as performance management issues/needs become apparent.
Multiple PMSs should only be operated out of assessed necessity, explicitly
integrated to complement each other in the drive for specific goals and
strategies (Simons, 2000). Indeed, Neely and Bourne (2000) have indicated that the nature of the "management crisis" afflicting many organisations has changed since the 1980s/1990s. Then the fundamental problem was that the wrong things were being measured; now the problem is that too much measurement is occurring.
This paper will present evidence and argue that, particularly in public sector
organisations, faced by information overload, managers often ignore the output
of PMSs – regardless of the quality of the information they are producing.
Therefore, even though there is a justification for public sector organisations needing PMSs (Radnor and Lovell, 2003), these needs should consistently be met in an efficient and effective manner. This will then minimise the
occurrence of “information overload” and wasteful “bureaucracy” that can arise
when multiple types of PMSs are used in an un-coordinated manner to meet
perceived information needs.
Wilcox and Bourne (2002) suggest that there have been three main phases in performance measurement development. They suggest that traditional performance measurement was developed from cost and management accounting (1850-1925). However, by the 1980s this purely financial perspective of performance measures was felt to be inappropriate, so multi-dimensional performance measurement frameworks were developed (1974-1992), e.g. the BSC (Kaplan and Norton, 1993). Finally, since the mid-1990s, Wilcox and Bourne (2002) feel that the performance measurement literature has been dominated by discussion around strategy maps and using these to show the link between key performance indicators.
This has been reflected in the public as well as the private sector with many
NHS Strategic Health Authorities moving away from merely “performance
indicators” (introduced in the 1980s) and now instead embarking on using
frameworks such as the BSC (Radnor and Lovell, 2003).
Moriarty and Kennedy (2002) and Johnsen (2000) argue that performance measurement has been used in the public sector for decades. Moriarty and Kennedy (2002) suggest that, because public sector service organisations operate without market competition, the implementation of performance measurement is often used as a substitute for market pressures. Pollitt (1986) notes that even in the mid- to late 1980s "most public services have taken on board formal performance measurement". He then goes on to state that the "NHS has evolved an elaborate system of 'performance indicators' . . . Local authorities are obliged to include certain performance information in their annual reports" (Pollitt, 1986). Building on these initially Conservative policies,
Labour, since coming into power in 1997, has manifested the performance
measurement drive into various agreements, e.g. for local government in the
form of local public service agreements (PSAs) and service delivery agreements
(SDAs) (which are then monitored and evaluated). Much of this has been
“promoted” under the banner of the “modernisation agency” or “modernising
government", whose vision was to transform the way public
services are designed, delivered and perceived (Public Services Productivity
Panel, 2000).
In response to “modernising government” in 2000 the Public Services
Productivity Panel was formed. The aim of this panel was to “advise on ways
of improving the productivity and efficiency of the public services” (Public
Services Productivity Panel, 2000). The panel consisted of a team of business
and public sector leaders to provide “a new perspective on some of the difficult
issues that public services face in their drive to improve performance” (Public
Services Productivity Panel, 2000). As a basis for this panel they focused on
productivity in the wider sense to cover economy, efficiency and effectiveness
(the three Es) so that productivity could be defined as “the broad relationship
between inputs and public service outcomes” (Public Services Productivity
Panel, 2000). This three-Es model was actually addressed by Pollitt (1986) in
the 1980s when he suggested to the public sector management community that
“economy” should be replaced with “equity” in order to distinguish between the
customer, consumer and citizen (three Cs) perspective. Interestingly, very little of this debate, which is often referred to in academic writings, seems to have been acknowledged by the Productivity Panel, which produced recommendations within areas including defence, police, education, health and government administration. Building on the three-Es model, the panel produced the recommendations based on a framework (Figure 1) which aimed to represent the five building blocks of performance management (Public Services Productivity Panel, 2000).
This model was developed in the early days of the panel and is based on two fundamental assumptions:
(1) that in order to function optimally all the basic building blocks must be in place; and
(2) that there is a natural sequence in which the blocks need to be addressed (i.e. top down).
Briefly, to describe the model, “bold aspiration” refers to ensuring that
organisations have a clear sense of direction that centrally includes coherence
with the PSAs and SDAs. The measures then need to be SMART and linked
throughout the organisation. There also needs to be ownership for every target
either individually or collectively. The targets and the delivery of them must
also be regularly and rigorously reviewed. Finally, success in delivering
targeted performance should result in reinforcement through incentives (Public
Services Productivity Panel, 2000).
The last building block, “meaningful reinforcement” (Figure 1) was focused
on and addressed by a report titled Incentives for Change (Makinson, 2000)
which outlined new incentives arrangements in four large government
agencies. These arrangements or recommendations were set out under the areas of performance framework, targets and incentives, distribution of incentives and funding of incentives (Makinson, 2000).

Figure 1. Performance management framework
This paper will present two case studies. One case study, within the health sector, considered implementing the BSC as a PMS in order to address, it could be argued, the building blocks in the first three levels defined in Figure 1 (bold aspiration, coherent set of measures and targets, ownership and accountability, rigorous performance review). The second case study is one of the large government agencies in which the recommendations from Makinson's report were implemented. This case study evaluated an incentive scheme which aimed to address mainly the fifth building block (reinforcement) but also hoped to allow a coherent set of performance measures and demanding targets, clear accountability and rigorous review to be developed. From the analysis of
these case studies the paper will present a revised view of Figure 1. Then, building on the work of writers including Pollitt, the paper will argue that a lot of work still needs to be done in order for performance measurement/management to become fact rather than fiction within the public sector. Some of this work includes considering the organisational form of the public sector organisation so that better links can be made between the various elements of performance management (Figure 1). The paper will conclude by presenting a model that could be used to allow a better understanding of the relationship between these links.

Case studies and methodology


The case studies chosen offer two different public sector areas as well as, to a
degree, two different perspectives in order to widen the performance
management debate. The first reflects some research carried out in Bradford
Health Authority which aimed to “assess the balanced scorecard system’s
potential to enhance performance management within a multi agency health
care setting”. The findings presented in the paper will be based on the focus
groups that were held with all the main public sector organisations within the
Bradford health sector that either provided or commissioned health care and
personal social services. These bodies comprised Bradford Health Authority, four Primary Care Trusts (PCTs), two hospital (i.e. secondary care) Trusts, and
Bradford Metropolitan District Council. In all, 46 people attended the eight
focus groups, with attendance determined on the basis of natural sampling
(Hussey and Hussey, 1997). Of the 46 people who attended the focus groups,
approximately 22 per cent were front line clinicians (i.e. consultant medical
staff/general practitioners (GPs) 7 per cent, clinical lead nurses 15 per cent) and
approximately 22 per cent were either chief executive officers or executive
directors. The remaining 56 per cent of focus group attendees were drawn from
management posts at all levels, and from all areas, within each organisation,
and came from both clinical settings and central departments (such as
performance management and information technology).
The second case study is concerned with one of the largest government departments, employing over 100,000 staff. For the purpose of this paper it will be referred to as "central". Although staff dealt with the public, the work could
be described mainly as administrative and transaction intensive. A pilot
scheme involving 14,000 staff was introduced that offered a cash bonus at the
end of a performance year on successful achievement of set targets. The pilot scheme was directly related to the recommendations from the work of the Productivity Panel. The aim of the study was to evaluate this team-based
incentive scheme. In particular, the study aimed to find out:
• Had the opportunity of achieving a bonus motivated staff to increase their performance?
• Had the bonus scheme communicated what the key priorities were?
• Was there a perceived increase in performance of self and colleagues over the period?
• Did a bonus scheme enable managers to motivate staff?
The research tool used in this study was a postal questionnaire, which applied a five-point Likert scale (Remenyi et al., 1998) to a series of 40 statements grouped around five key areas. A random sampling grid was used, as shown in Table I.
Also, individual interviews were carried out with personnel chosen from a
randomly chosen list that followed the same split as the sampling grid. Finally,
focus groups involving up to 15 participants were administered. Within the
interviews and focus groups the discussion was based around the five key
areas from the questionnaire. In total, therefore, over 2,900 staff were involved
in the research (approximately 5 per cent of the population).
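As an aside, the internal consistency of the sampling grid in Table I can be checked with a few lines of code. This is purely an illustrative sketch: the area and grade labels mirror Table I, the counts are those reported in the paper, and the consistency checks are our own, not part of the authors' methodology.

```python
# Stratified sampling grid from Table I: rows are operational areas,
# columns are staff grades "a" to "e".
grid = {
    1: {"a": 300, "b": 800, "c": 550, "d": 220, "e": 24},
    2: {"a": 150, "b": 300, "c": 120, "d": 60,  "e": 12},
    3: {"a": 30,  "b": 35,  "c": 25,  "d": 8,   "e": 4},
    4: {"a": 45,  "b": 40,  "c": 14,  "d": 5,   "e": 0},
}

# Row totals: questionnaire sample size per operational area.
area_totals = {area: sum(grades.values()) for area, grades in grid.items()}

# Column totals: sample size per staff grade.
grade_totals = {g: sum(grid[a][g] for a in grid) for g in "abcde"}

overall = sum(area_totals.values())
print(area_totals)   # {1: 1894, 2: 642, 3: 102, 4: 104}
print(grade_totals)  # {'a': 525, 'b': 1175, 'c': 709, 'd': 293, 'e': 40}
print(overall)       # 2742
```

Running this confirms that the row and column totals in Table I (1,894; 642; 102; 104 and 525; 1,175; 709; 293; 40) both sum to the reported grid total of 2,742 questionnaire recipients.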

Findings
Bradford Health Authority
The main findings from Bradford Health Authority focus group sessions
focused around three main areas. These are outlined briefly below:
(1) Regardless of their pre-existing knowledge of, and exposure to, the BSC
system, individuals from all backgrounds and at all levels within each of
the organisations could quickly identify many of the potential benefits

Area    Grade "a"   Grade "b"   Grade "c"   Grade "d"   Grade "e"   Total
1          300         800         550         220          24      1,894
2          150         300         120          60          12        642
3           30          35          25           8           4        102
4           45          40          14           5           0        104
Total      525       1,175         709         293          40      2,742

Table I. Random sampling grid
from adopting the BSC system that Kaplan and Norton (1996, 2000) themselves highlight. The BSC's perceived benefits according to Kaplan and Norton (1996, 2000) are its contribution in terms of: clarifying and obtaining consensus about strategy; communicating strategy throughout the organisation; aligning departmental and personal goals to strategy; linking strategic objectives to long-term targets and annual budgets; identifying and aligning strategic initiatives; enabling periodic/systematic reviews; providing (double loop) feedback to assist learning/strategy development; and translating better strategic alignment into "better results". As one focus group member commented: "It looks workable, and you need to 'speculate to accumulate'. It would be brilliant if everyone had clear agendas".
(2) Despite widespread recognition of the requirement for improved PMSs,
and the BSC’s potential to meet this need, this did not automatically lead
to acceptance of the BSC as a preferred solution for this problem. Focus
group contributors could also see potential problems as well as benefits
with the BSC system, along with difficulties with implementing the BSC
system.
(3) It was apparent that, even in those organisations where the most cautious views were expressed regarding the anticipated net value that adoption of the BSC system could bring, there would be a willingness to suspend theoretical reservations and adopt the BSC on pragmatic grounds, if a number of conditions were met.
Details of these findings, the conditions and their implications can be found in Radnor and Lovell (2003). Building on the second and third findings in particular, two main points are relevant to this paper:
(1) lack of ownership and accountability; and
(2) "working the system" in order to comply with the requirements.
The first point, lack of ownership, can be illustrated by the following scenario.
Within the research project initially the potential value of the BSC system
within the NHS was confirmed on a theoretical basis. Once this was done,
"strawman" BSCs were developed at multi-agency (Strategic Authority) level and at an organisational Primary Care Trust (PCT) level. Once the "strawman"
BSCs had been developed, two organisations decided to proceed with the BSC:
the multi agency Health Improvement Programme (HIMP) Steering Group
decided to use the BSC on a pilot basis, and one of the Bradford PCTs resolved
to fully implement the BSC system as its main strategic PMS of choice.
However, valuable as these steps were in confirming the BSC’s positive
potential to be deployed within NHS organisations, this evidence was assessed
as being inconclusive on two main grounds. First, the health authority based HIMP's demise was announced by the government (Department of Health, 2001) before the BSC could be implemented and, second, though one PCT had voluntarily decided to implement the BSC after assessing the system's potential, no other organisation within the Bradford health sector had reached this same conclusion.
Much effort within the research project was focused on attempting to "trial" the "strawman" BSCs in order to achieve real evidence on their effectiveness, but reasons such as "time", "not enough information" or "too busy compiling the data for other reports" were often given. Hence the need to run the focus groups as a means of achieving feedback on the BSC usage. This illustrates that, even though the benefits of the PMS were apparent, it was difficult to achieve real ownership and accountability for the framework within the various organisations. This was also hindered by the changing structural nature of large agencies such as the health sector.
Therefore, it could be argued that even if tools or frameworks such as the BSC are introduced, they soon become merely a diagnostic and not an interactive tool within the organisation. This also supports the view that the role of the manager is more about being a good administrator and less about being an effective manager. It could even be argued that "civil servants are administrators not managers" but, in reality, for performance measurement and performance management to be effective, the systems need to be interactive and the managers need to manage!
Within the public sector it is often argued that the reason why the diagnostic approach often becomes the norm is related to the "multi stakeholders" or, as Pollitt (1986) suggests, the different requirements of the customer, consumer and citizen. Within something like the health sector this could be presented as the government (local and central) being the customer, the patient the consumer and the population the citizen. The need to satisfy all of them can potentially create trade-offs in the product delivery (Slack and Lewis, 2002).
However, in order to indicate that all parties are being addressed, parallel
systems are often operated, e.g. business plans, star rating systems, service
agreements, etc. This can often mean that the completion of them becomes a
“form” filling or “box ticking” exercise which again supports a diagnostic not
interactive approach to performance management. In the words of one chief
executive from a hospital Trust, “We will make anything work, as our results
prove in terms of performance indicators . . . and if this group has to use it
[BSC], well we will use it, but then we’ll move on to something else”.

“Central”
The focus of performance measurement within “central” was on specific teams
or operational areas. Although the objective was to bring about overall
performance improvement at an agency or business unit level, equally the
system was trying to motivate administrative staff to increase their individual
performance and so justify the payment of a cash bonus.
The main findings within the department were that:
• less than one-third of all staff accepted the principle of performance bonus schemes as reasonable and fair;
• only one-quarter of staff felt motivated to improve their performance to achieve a bonus payment; and
• only one-third of managers reported any measurable increase in performance of staff during the first year of operation.
At a departmental level it was found that:
• competing pressures on senior management led to a lack of clarity of vision and leadership to position the scheme effectively within the department's overall performance improvement agenda;
• there was resistance to the concept of self-funding schemes through resource (i.e. staff) cuts; and
• there was limited buy-in at the middle manager level to the idea of performance bonus schemes.
The findings within "central" can be summarised as:
• no real target, or understanding of the baseline against which the target is developed and set in the first place; and
• "working the system" to comply with the requirements.
The issue of targets, and what they meant to individuals was well illustrated
during one of the focus group meetings. Some of the staff explained that the
target for processing applications was on a “number per staff basis”. As the
unit fell behind with the targets, greater pressure and push was put onto teams
to complete the work and clear the backlog (some of which had built up well
before the introduction of the bonuses). Therefore, extra staff were brought in
to help improve the clearance rates. However, through the speeding up of the
process in order to meet the target, less of the revenue was collected from each
application and also less time was taken to fully investigate applicants' circumstances. Thus the high-level objective of cash collection was sacrificed
for the more immediate objective of backlog clearance.
Another example, in a call centre area, was a target set around “the number
of calls” answered in any given week. This target, whilst quantitative, failed to
balance with any qualitative measure of satisfaction. Thus there were many
examples given of inexperienced operators answering calls, or cutting people
off which both then led to repeat calls for the same query. Therefore, overall the
demand in the call centre continued to increase despite more staff being made
available. Interestingly, the qualitative measures of customer satisfaction had
not been built in to the target measure.
The other phenomenon that was evident during the evaluation was that of "working the system". In the civil service there is a strong culture of entitlement,
which manifests itself in a number of ways, for example the institutionalisation of sick entitlement as an allocated number of days per year that people are "entitled" to take off. Similarly, appraisal systems within many departments within the civil service are led by indicating that quotas of staff should fit in
with either poor, average or high performing bands. This leads to considerable
pressure for all managers to ensure that they never have too many staff in
either the high or the low performing group. This "dumbing down" of performance, therefore, led to a quite jaded reception to something like a team-based incentive scheme. The reason for this was illustrated well by one of the
teams involved with the scheme. The culture of entitlement meant that not
getting a bonus was an unthinkable conclusion. One of the quotes that was
heard frequently during the evaluation was “If we continue to perform exactly
the same as we did last year, we will get the bonus". This was not due to staff gathering romantic notions about the scheme, but to a direct communication from middle and senior managers, designed to motivate staff to be involved with the scheme.

Discussion
Drawing together the findings and discussion within each of the case studies
presented it could be argued that in reality the framework used by the Public
Services Productivity Panel (Figure 1) could be redrawn with “barriers to
progress” stated rather than building blocks. This revised framework is
represented in Figure 2.
As shown in both case studies, the role of the managers is far more administrative than managerial, particularly in relation to performance measurement/management. In order to achieve or respond to the requirements of the various stakeholders, staff spend their time "form filling" and chasing information rather than changing or managing the process. Therefore, in order to achieve "bold aspiration", the "local" management team is left to develop its own purpose or "mission statement". As stated by Claytonsmith (2003), ". . . indicators do not always reflect the needs of authorities' own specific objectives. Also with much emphasis previously on collecting information, authorities need further encouragement to use performance indicators locally to monitor, action and evaluate their own performance".
The previous statement also links in with the second issue, "working the system". As the quote from the hospital Trust CEO indicates, the desire and support to merely tick the right boxes is strong. If public sector organisations are truly going to use performance management in an interactive way, develop a coherent set of performance measures and consider tools such as the BSC, then they need to embrace them on a behavioural (de Waal, 2002) rather than just an operational level.
Figure 2. The performance framework re-visited

The lack of ownership was indicated in both case studies. Within Bradford Health Authority it was difficult to get anyone to pilot, let alone develop for their own organisation, a BSC even at a very basic level. In "central", the targets
set for one of the areas were almost wholly dependent on an outside body. The
staff, feeling that they had no direct influence on the targets, soon lost interest
in the scheme as providing any relevance to their daily working tasks. However, when the opportunity arose to re-think the targets and make them more
relevant to staff, the targets that were set continued to focus on objectives that
were geared towards the performance of external suppliers. This meant that
the senior managers did not harness the opportunity to engage and motivate
the staff towards the scheme, but merely took the route of least resistance at a
departmental level.
The parallel running of performance measurement systems meant that the review of the performance, let alone of the system, was difficult for the Health Authority and "central". Too much time was spent on collecting the data and
information in order to satisfy “government”/customer requirements rather
than ensuring that they were the right measures in the first place and using
them to meet the needs of the consumer and citizen (Pollitt, 1986).
The research carried out at "central" gave a clear indication that there was
little understanding of baseline targets, shown by one of the units whose
targets were set based on case clearance, expressed as a percentage of
the total number of applications received. The targets were thus vulnerable to any
significant increase in demand for the service. This occurred in the year of the
evaluation, when an increase of over 15 per cent in demand for services led to a
reduction in case clearance levels as a proportion of overall application levels. This then meant
that the target was not achieved, despite evidence emerging that staff had in
fact increased their output in comparison with the previous year's performance.
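The arithmetic behind such a percentage-based target can be sketched briefly; the figures below are hypothetical, chosen only to illustrate the effect described above, not taken from the "central" case.

```python
# Hypothetical figures illustrating a case-clearance target expressed as a
# percentage of applications received (not the actual "central" data).
baseline_apps = 1000
baseline_cleared = 800
target_rate = baseline_cleared / baseline_apps   # 80 per cent target

# Evaluation year: demand rises by 15 per cent, and staff clear *more*
# cases in absolute terms than in the baseline year.
apps = int(baseline_apps * 1.15)                 # 1,150 applications
cleared = 850                                    # output up on last year's 800
achieved_rate = cleared / apps                   # roughly 74 per cent

# Output rose, yet the percentage target is missed.
print(f"target {target_rate:.1%}, achieved {achieved_rate:.1%}, "
      f"output change {cleared - baseline_cleared:+d} cases")
```

Because the denominator grows with demand, the unit can increase its output and still "fail"; a target based on absolute throughput, or one whose baseline is adjusted for demand, would not penalise staff in this way.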
Therefore, the question is: can all the building blocks represented within the
performance management framework (Figure 1) currently be achieved within
the public sector? The answer, shown by the cases analysed in this paper and
represented in Figure 2, is probably not.
Performance management is not going to go away and, to a certain extent,
can be argued to be important for most organisations. However, in
the public sector under its present conditions it will be difficult for any, let alone
all five, of the building blocks shown in Figure 1 to be achieved. Therefore, this
paper will argue that, in order for public sector organisations truly to address
the building blocks (Figure 1), some understanding of the organisational
elements or facets, and the relationships between them, needs to be generated
first. Work by Leavitt (1965), Radnor (1999) and Pettigrew and Whipp (1991) all
suggest that there is a need to understand the context and the balance of the
various organisational facets in order to allow effective change or development.
Building on the model presented by Leavitt (1965), this paper argues that the
facets that need to be considered for performance management in the public
sector are strategy, process, people and system. Their relationship is shown in
Figure 3. The arrows in Figure 3 indicate that this is an interdependent
relationship, so that change in any one of the facets usually results (or should
result) in change in the others. In fact, Leavitt (1965) argues that this
relationship should be considered as a "system", so that solely optimising one
facet could lead to a sub-optimal result in total.
By defining and understanding certain elements for each of the facets and
relating them to performance management it may be possible for the
organisation to support the building blocks outlined in Figure 1.
Strategy can be defined as “the direction of the organisation” (Radnor, 1999)
and, in relation to the framework used by the Public Services Productivity
Panel, it could refer to and support the building block of bold aspiration.
Kaplan and Norton (1996) outline that, for successful implementation of the BSC,
the organisation needs to understand and state its strategic foundation. So,
for example, the strategy of an organisation can help in giving a sense of
direction for performance management.

Figure 3. Organisational diamond for performance management in the public sector
Processes are the “nervous” system of the organisation (Clarke, 1994). They
can be considered to be the “harder” mechanics of the organisation and
described as the business processes as well as the structure of the firm (Radnor,
1999). By understanding and defining the processes within an organisation it
would be possible to develop a coherent set of performance
measures and appropriate targets that support the processes, and vice
versa. For example, if you measure "bed occupancy" in terms of the time a patient
is occupying a bed then there is pressure to discharge as quickly as possible
and therefore length of stay reduces. However, there is evidence of many
occasions when a patient is discharged early (due to the drive to meet the
measure?), only to be readmitted due to complications. The end result is a
resource impact on emergency units, a longer total time in a bed and a
poorer experience for patients! The measures should be designed to suit the
process and the outcomes required.
People are the “blood and guts” of the organisation (Clarke, 1994). The
organisational factors represented here are largely concerned with training,
motivation, culture and skills (Radnor, 1999). The people element is important
in terms of the framework (Figure 1) to ensure ownership, accountability and
improvement in performance. People need to be trained to understand the
purpose and impact of performance management. They should also be
involved in creating and managing the system (de Waal, 2002); only then can
the PMS become something which creates improvement rather than
just a judgement or blaming tool.
The last facet, "system", relates to the actual performance measurement system,
or PMS, itself. It needs to be realistic, and to measure and reinforce the right targets to
ensure the appropriate behaviour (de Waal, 2002). In other words, there needs to
be an understanding of the relationship between the structure of a PMS and the behaviour it drives.
For example, an ophthalmology service introduced direct referral from
optometrists to a consultant, bypassing the GP, in an effort to improve the
referral process for cataract surgery. They were then advised that, since the
referral was no longer coming from a GP, they did not have to include these
patients in the outpatient waiting time return to the Department of Health. As
you can imagine, this had a dramatic effect on the number of patients reported
as waiting over 13 weeks for an appointment! However, the picture given by
the reported measure was not that experienced by the patient, whose
wait was still the same, if not longer. This in turn could lead to a reduction in
resources in the longer term, so affecting the delivery of the service.
By understanding the various facets, and ensuring that there is some balance
between them in relation to performance management within the public sector,
it should be possible to move beyond developing targets, setting
measures and measuring the process, towards developing indicators,
managing performance and understanding outcomes in order to support the
"organisational" needs.
Conclusion
This paper raises the question: is performance management in the public sector
fact or fiction? By considering the framework developed by the Public Services
Productivity Panel and analysing two case studies the paper suggests that
currently it is closer to fiction than fact. Figure 2 illustrates that, currently
within the public sector, performance is about measurement and evaluation, not
management; the system is diagnostic, not interactive or geared towards allowing
improvement; targets are not considered, nor their baselines
appropriately evaluated; and overall there is a lack of ownership.
In order for these issues to be addressed, the paper argues that, instead of
considering performance management as a pyramid or framework, the
organisational facets or elements first need to be understood in relation to
performance management. For example, a clear strategy should be developed
that generates a clear purpose against which performance can be assessed; or
the current skills and motivation of the people within the organisation should be
understood so that the PMS can ensure these are developed and motivated in
the appropriate way. Finally, the processes and systems within the
organisation are clearly defined and the relationships between the various
sub-processes and systems are understood so that meaningful feedback,
targets and performance are measured and rewarded.
As stated by Hernandez (2002), “if performance measurement is simply
viewed as a data-collection and reporting exercise, it will serve little purpose to
a community. It is only through the analysis of data that performance
measurement can become a tool for continuous service improvement”.
However, this paper argues that, to achieve this, there needs to be an
understanding of the relationship between strategy, people, organisational
form/design and performance systems in order for performance management
to be achieved, particularly within the public sector.
References
Audit Commission (1999), Performance Measurement as a Tool for Modernising Government,
Audit Commission, London.
Clarke, L. (1994), The Essence of Change, Prentice-Hall International, Englewood Cliffs, NJ.
Claytonsmith, P. (2003), “Local government performance – two years on”, Management Services,
Vol. 47, pp. 20-1.
Department of Health (2001), Shifting the Balance of Power within the NHS: Securing Delivery,
Department of Health, London.
de Waal, A.A. (2002), The Quest for Balance, Wiley & Sons, New York, NY.
Hernandez, D. (2002), “Local government performance measurement”, Public Management,
Vol. 84, pp. 10-11.
Hussey, R. and Hussey, J. (1997), Business Research: A Practical Guide for Undergraduate and
Postgraduate Students, Macmillan Press, Basingstoke.
Johnsen, A. (2000), "Performance measurement: past, present and future", paper presented at the
Performance Measurement Association Conference, Centre for Business Performance,
Cambridge.
Kaplan, R. and Norton, D. (1992), “The balanced scorecard – measures that drive performance”,
Harvard Business Review, January/February, pp. 71-9.
Kaplan, R. and Norton, D. (1993), "Putting the balanced scorecard to work", Harvard Business
Review, September/October, pp. 134-43.
Kaplan, R. and Norton, D. (1996), “Using the balanced scorecard as a strategic management
system”, Harvard Business Review, January/February, pp. 75-85.
Kaplan, R.S. and Norton, D.P. (2000), "Having trouble with your strategy? Then map it", Harvard
Business Review, September/October, pp. 167-76.
Leavitt, H.J. (1965), “Applied organizational change in industry”, in March, J.G. (Ed.), Handbook
of Organizations, Rand McNally, Chicago, IL.
Lebas, M.J. (1995), “Performance measurement and performance management”, International
Journal of Production Economics, Vol. 41 No. 1, pp. 23-35.
Makinson, J. (2000), Incentives for Change: Rewarding Performance in National Government
Networks, HM Treasury, London.
Moriarty, P. and Kennedy, D. (2002), “Performance measurement in public sector services:
problems and potential”, paper presented at the Performance Management Association
Conference, Centre for Business Performance, Boston, MA.
Neely, A. (1998), Measuring Business Performance – Why, What and How, The Economist Books,
London.
Neely, A. and Bourne, M. (2000), “Why measurement initiatives fail”, Measuring Business
Excellence, Vol. 4 No. 4, pp. 3-6.
Pettigrew, A.M. and Whipp, R. (1991), Managing Change for Competitive Success, Blackwell,
Oxford.
Pollitt, C. (1986), “Beyond the managerial model: the case for broadening performance
assessment in government and the public services”, Financial Accountability and
Management, Vol. 2 No. 3, pp. 155-70.
Public Services Productivity Panel (2000), Public Services Productivity: Meeting the Challenge,
HM Treasury, London.
Radnor, Z. (1999), Lean Working Practices: The Effect on the Organisation, Manchester School of
Management, UMIST, Manchester.
Radnor, Z.J. and Lovell, B. (2003), “Success factors for implementation of the balanced scorecard
in a NHS multi-agency setting”, International Journal of Health Care Quality Assurance,
Vol. 16 No. 2, pp. 99-108.
Remenyi, D., Williams, B., Money, A. and Swartz, E. (1998), Doing Research in Business and
Management, Sage, London.
Secretary of State for Health (1997), The New NHS: Modern – Dependable, The Stationery Office,
London.
Simons, R. (2000), Performance Measurement and Control Systems for Implementing Strategy:
Text and Cases, Prentice-Hall, Englewood Cliffs, NJ.
Slack, N. and Lewis, M. (2002), Operations Strategy, Pearson Education, Harlow.
Wilcox, M. and Bourne, M. (2002), “Performance measurement and management: research and
action”, paper presented at the Performance Management Association Conference, Centre
for Business Performance, Boston, MA.