Administration & Society
42(5) 550–567
© 2010 SAGE Publications
DOI: 10.1177/0095399710378080
http://aas.sagepub.com

Objective and Subjective Performance Measures: A Note on Terminology

Hindy Lauer Schachter1

1New Jersey Institute of Technology, Newark

Corresponding Author:
Hindy Lauer Schachter, New Jersey Institute of Technology, 4019 Central Avenue Building, Newark, NJ 07102
Email: hindy.l.schachter@njit.edu

Abstract
At least since the late 1970s, the performance measurement literature has
used different terminologies to describe agency- and citizen-generated
performance measures. The first type of measure is placed into an objective
category and the second type into a subjective category. This article argues that this
terminology is outdated owing to evidence on the contextual subjectivity of
all performance measures. The analysis also examines the potential impact
of the dichotomization on the role accorded citizens in helping to develop
public sector performance measures.

Keywords
performance measurement, citizen participation, public administration
methodology

This article analyzes the dichotomization of objective and subjective measures
in the public sector performance measurement literature and the relationship of
that dichotomy to the field’s perennial science wars. The past decade and a half
has seen new energy in the epistemological debate over whether public admin-
istration is a science best pursued by quantitative positivist methods or an art
best understood through interpretative narrative methodologies. This debate
has a long history. As Raadschelders (2000) has noted, comparisons to the nat-
ural sciences have dominated successive attempts to define an identity for the
field. These attempts go back at least to William Bennett Munro’s 1928 plea for
a natural science paradigm in public sector research rather than continued reli-
ance on narratives promoting political reform. Such attempts undergirded Her-
bert Simon’s (1957) drive to create an administrative science.
In our own day, the debate over the field’s scientific status has continued
through a number of Public Administration Review articles that questioned or
embraced a natural science approach to education and research in the 1980s and
early 1990s (White & Adams, 1994). Mel Dubnick’s (1999) call for greater con-
ceptual rigor to increase the field’s scientific standing gave the issue enhanced
visibility. This call was backed up by Gill and Meier’s (2000) argument that pub-
lic administration needed to develop its own quantitative, scientific methods.
To these suggestions, proponents of qualitative, interpretative research
replied that narrative inquiry or story telling represented an important alter-
native methodology. Its unique attributes—subjectivity versus objectivity
and involvement versus detachment—were advantages in opening new
inquiries outside of the small area amenable to positivist methods (e.g., Balfour
& Mesaros, 1994; Dodge, Ospina, & Foldy, 2005; Hummel, 2008; Yanow,
2000). Postmodern theorists argued that science was only one among multi-
ple equally relevant discourses on reality (De Zwart, 2002).
Although each side in this debate has its adherents along with its own dis-
semination networks, the two camps have never enjoyed equal status in the
field. People are caught between the practical benefits of getting knowledge
from common experience and the desire to seem scientific (Riccucci, 2007).
Although it is not unusual for policy oriented writers to denigrate the worth
of case study evidence in their articles (e.g., Candreva & Brook, 2008), few
if any writers outside of the methodological imbroglios speak ill of science as
a way of gaining useful knowledge. The science label has widespread cachet.
Although performance measurement is one of public administration’s hot-
test topics (e.g., Behn, 1995; Dalehite, 2008), for a long time the subfield
seemed isolated from the controversy over method. Performance measure-
ment as an activity was indisputably allied with the idea of public administra-
tion as a science and with the notion that quantitative data should take
precedence over narratives in policy development. Proponents of measure-
ment saw the activity as an attempt to rationalize government policy and
move beyond decision making through political bargaining (Townley, Cooper,
& Oakes, 2003). An oft iterated point of performance measurement was that
it would “rationalize the budget process, replacing politics with facts and
figures” (Ammons, 2008, p. 3). Indeed, the 1993 Government Performance
and Results Act, which mandated performance measurement for federal
agencies, required that each program’s activity have objective and quantifi-
able goals (Kravchuk & Schack, 1996).
Only recently have some analysts questioned whether such a rational pro-
cess does or can exist. Radin (2006), for example, noted that congressional
committees do not all use the same measures to evaluate a given service
because committee perspectives vary. Such differences suggest a strong
political component to the measurement enterprise. As administration-as-
science proponents have traditionally defined science to require objective
observation (e.g., Meier, 2005), this declaration thrusts the interpretative
viewpoint into the heart of performance measurement territory.
Those who posit performance measurement as a scientific process seem to
require objective measures, that is to say measures developed and collected
by people who are independent of the matter under observation (Adcroft &
Willis, 2008). Indeed, proponents have conceived of performance measure-
ment’s purpose as to “isolate decisions about allocation of resources from
political pressures by providing objective and undisputed data” (Halachmi,
2004, p. 334). Most performance measurement specialists have assumed that
agency data can meet these criteria if agency objectives are clear (e.g.,
Gooden & McCreary, 2001; Nicholson-Crotty, Theobald, & Nicholson-
Crotty, 2006). The majority of performance measurement scholars, therefore,
have preferred measures generated by administrators or outside experts
rather than through politics—however defined.
For new public management (NPM) advocates, the problem with politi-
cally generated measures is that “politics focuses on perceptions and ideol-
ogy, not performance” (Osborne & Gaebler, 1992). NPM proponents believe
that politicians want to use measurement to support their favorite programs
and therefore are equally open to objective and subjective criteria. They
crave particular results and do not much care about the objectivity of the
measures that obtain them (Osborne & Hutchinson, 2004). This worldview
accepts a politics–administration dichotomy where political figures lack
interest in objectivity whereas effective administrators focus on
objective measures (Osborne & Plastrik, 2000). The effective administrative
mindset mirrors a similar concentration on objectivity that NPM advocates
discern in the private sector. Because NPM focuses on keeping politics out of
measurement, Alkadry and Farazmand (2004, p. 297) argue that its propo-
nents want to use performance measurement to minimize political account-
ability. (This argument does not mean that measurement does in fact limit
political accountability but simply that its NPM proponents hoped it might.)
Apolitical measurement theorists assume that professional administrators
have special knowledge that allows them to choose neutral measures (e.g.,
Williams, McShane, & Sechrest, 1994). Some theorists then privilege these
measures over citizen evaluations of services that are seen as dependent on
the respondent’s sociodemographic background (e.g., Licari, McLean, &
Rice, 2005; Stipak, 1979). The literature in general labels citizen-generated
measures as subjective or context dependent.
The objective–subjective dichotomy has significant implications for the
debate over whether public sector performance measurement is a science or
art. For the endeavor to be a science, someone must be able to identify objec-
tive measures—and the expert has been chosen for the task. But as none of the
articles contains an operational definition of objectivity, we can ask what
differences between expert- (i.e., agency-) and citizen-generated measures
land one category—and one category only—in the objective column.
The objective–subjective distinction also has implications for the relative
roles of professional administrators and lay citizens in choosing and record-
ing performance measures. On one hand, some organizations such as
the Alfred P. Sloan Foundation have funded grants to involve citizens in
developing measures. But, as agencies are often free to choose which perfor-
mance measures they collect, a literature dichotomizing agency- and citizen-
generated measures on an objective–subjective dimension may be one reason
that many cities survey citizens on performance issues but few localities use
these surveys in developing performance measurement systems (Licari et al.,
2005; Sanger, 2008).
The following analysis suggests that the differences implicitly posited to
construct the objective–subjective dichotomy do not actually exist—or, at
least no evidence we have shows they exist. The authors have simply made
different assumptions about each data class and these assumptions result in
differential classification. Categorization as objective or subjective is socially
constructed and can be used to buttress particular theories about the policy
development role citizens should play.
The article consists of three sections. The first two sections identify and
analyze the assumptions made to construct separate objective and subjective
classes for agency and citizen generated data in the performance measure-
ment literature. A final section then attempts to answer the question of why
researchers made different assumptions for agency- and citizen-generated
data. The analysis links these assumptions to their policy implications.

Objective and Subjective Measures
The worth of a performance measurement system hinges on the measures its
administrators use and whether those measures help officials achieve their
purpose(s) in implementing the system in the first place, for example, control
or motivation (Behn, 2003). Effective measures are both valid in terms of
relating to the underlying conceptual goal and reliable, that is, internally con-
sistent and stable over time unless change occurs in the underlying reality.
Valid measures of important program outputs and outcomes are a necessary—
but not a sufficient—prerequisite to having the system actually spur perfor-
mance improvement. First, the agency has to identify goals for its programs
(e.g., educate students or prevent crime). Second, because people can rarely
measure such conceptual objectives directly, the agency must select outputs to
measure, that is to say specific measurable results that the organization itself
produces in a given time period (e.g., standardized test scores for a school or
arrest rates for a police department). Third, the agency must select desirable
outcomes that it does not directly produce (e.g., crime rate decreases) and
monitor relations between output and outcome (Behn, 2001). Fourth, the
agency must conceptualize and implement procedures to reorient outputs if
they do not achieve outcome goals.
Most program administrators face a choice of seemingly logical measure-
ment options at the output and outcome levels. Anyone with an interest in the
social utility of performance measurement also has an interest in the specific
measures chosen, as programs may come out well with one set of measures
and poorly with another. The particular criteria a given agency uses to mea-
sure performance influence its program evaluations and ratings. A training
program, for example, might fare well on participant employment after 13
weeks but poorly on participant employment after 26 weeks or 6 months.
In addition, the measurement enterprise has multiple dimensions before
and after employees count data points, including formulating the decision
criteria that are used to select which facts to measure and that will undergird
administrator perception and interpretation of the counts. These aspects of the
selection process inevitably afford ample scope for politics—broadly
defined—to enter decision making even if elected officials and citizens play
no role in the choice.
Public program measures can come from many sources—governmental,
commercial, academic, and civic, including public interest groups and foun-
dations (e.g., Coe & Brunet, 2006). Data from different sources may yield
different pictures of an agency’s performance; for example, Kelly’s (2003)
multicity police and fire function comparison of agency data and citizen rat-
ings found no relationship between the two measure types.
At least since the late 1970s, a number of public administration scholars
have contrasted two methods to obtain performance measures. When mea-
sures are obtained from official public records, scholars speak of objective
measurement or analyzing “actual service delivery” (Brown & Coulter, 1983,
p. 51). When data are obtained from citizen surveys—whether from clients of
a particular program or a sample of area residents—the information is rou-
tinely labeled subjective (Park, 1984). A recent commentator labeled it “inev-
itable” that citizen-driven measures would be less objective and scientific
than those created by professional managers (Ho, 2007, p. 117).
Objectivity is a concept based on the assumption that the structure of real-
ity provides a basis for making correct judgments independent of the back-
ground characteristics of individual perceivers (Belliotti, 1992). A
well-regarded textbook on behavioral research methods has defined objec-
tive tests as those “in which anyone following the prescribed rules will assign
the same numerals to objects and sets of objects as anyone else” (Kerlinger,
1964, p. 479). All other tests are subjective. This definition focuses test
objectivity on score enumeration rather than on deciding whether to give the
exam in the first place.
Although the performance literature analyzed here does not explicitly
define “objective,” two assumptions, close to the textbook understanding of
objective and subjective terminology, have undergirded its labeling process.
One that relates to measurement reporting is an implicit assumption that
although some transmission errors may occur, most of the material held in
public archives is true, that is, that the response time recorded in a police
record was the actual response time or that the arresting officer recorded in
the archives actually made the relevant arrests.
A second assumption relates to how measures are chosen. Here, the research-
ers’ assumptions are broader than in the Kerlinger definition. They assume that
both the choice of public archive measures and the manner in which adminis-
trators collected them are demographically neutral—they do not relate to the
personal characteristics of agency executives or workers. The relationship
between administrator background and measure selection is not discussed. No
mention of administrator demographic characteristics ever appears. The analy-
sis proceeds as if administrators have no demographic backgrounds or as if,
when people become professional managers, their demographic characteristics
cease to influence their approach to measurement.
Similar assumptions are not made for citizens. Researchers investigate
correlations between individual service evaluations and citizen characteristics
such as age, gender, socioeconomic status, and race. A number of stud-
ies, for example, show that African American citizens in a given city rated the
police lower than did White residents (e.g., Fitzgerald & Durant, 1980), although
DeHoog, Lowery, and Lyons (1990) found that this relationship did not hold
in all localities they studied. From these analyses, researchers conclude that
citizens base their evaluations on their own demographic characteristics
rather than objective service.
These correlations between citizen evaluations and demographic charac-
teristics have led some writers to question how useful citizen-generated
information can be for giving an accurate picture of services in a given
locale. Brown and Coulter (1983) go so far as to argue that increasing service
quality to a given neighborhood would not improve citizen ratings for that
function because citizen evaluations do not respond to objective quality indi-
cators. Even Shingler, van Loon, Alter, and Bridger (2008, p. 1110), who favor
using citizen surveys to evaluate agency performance, say that because of the
demography–ratings relationship, evaluators “must keep in mind that citizen
perceptions and objective data are two different types of information.”
Researchers could have used the evidence that citizen ratings sometimes cor-
relate with ascriptive characteristics to bolster an alternative conclusion: that
different demographic groups have objective needs for different services because
they have different life experiences. This view uses the correlations between rat-
ings and demographic status as evidence on how the jurisdiction satisfies the
objective needs of each group—needs that may differ group by group.
As valid performance measures evaluate services in relation to actual citi-
zen needs, they must reflect gender, age, race, ethnicity, and religion to the
extent that such characteristics affect the purposes for and manner in which
people use public services. The definition of what comprises quality is not
fixed; it varies depending on the needs of different communities. Sometimes
measures that agency personnel assume affect quality are actually unimport-
ant in the life experiences of some communities. Agencies, for example, may
present data on quick response time to show high quality; some communities
will agree that this measure is a suitable component of quality whereas other
communities may care more for polite service once responders arrive. For
members of these latter communities, quick response time only means that
the demeaning behavior occurs sooner. A full understanding of agency per-
formance may require attention to both time and conduct indicators.
In addition, as the demographic profile of a given community changes, its
citizens may want to use services for different purposes—for example, they
want to use transit to access different destinations. Routes that meet one group
of needs may be insufficient to meet others. Thus, people with one set of
demographic characteristics legitimately rate service high whereas those with
another set rate it low because they seek different outcomes from the service.
The literature, however, does not predicate quality on meeting a community’s
needs. It sees demographic–rating correlation as impeding objectivity.
Researchers, in addition, do not extend the same courtesy to citizens that they
extend to agencies, where they implicitly assume that archived data generally
offers factual information. When citizen evaluations do not conform to agency
indicators, authors simply assume that survey ratings are wrong and cannot tell
us anything about service provision (e.g., Stipak, 1979; Brown & Coulter,
1983). In this literature, if citizens rate service quality low when agencies
record that they have exceeded their performance targets, researchers view the
problem as one of faulty lay perception and announce that government needs
to educate the public (Kelly & Swindell, 2002). Citizens are seen as lacking
knowledge about services or as confusing service measurement with providing
an overall appraisal of local government (Brudney & England, 1982).
On the other hand, when citizen and agency ratings mesh, this concor-
dance confirms that “citizens have some ability to perceive the efforts of
service agencies” (Percy, 1986, p. 80). In this view, a concordance with
expert ratings is a prerequisite for citizen knowledge to approximate an
objective designation. Only to the extent that citizen surveys and expert rat-
ings converge can citizen perceptions serve as a proxy for outcome measure-
ments (Van Ryzin, Immerwahr, & Altman, 2008). As Rosentraub, Harlow,
and Thompson (1979) wrote in a communication to Public Administration
Review criticizing an early article on this subject (Stipak, 1979), this type of
analysis implies that bureaucrats can substitute their sense of service appraisal
for that of community members.
Pointing out this discrepancy in the treatment of citizen- and agency-
generated measures should not be taken to imply that the former are always
valid or that in every case, citizens even have sufficient information to judge
agency operations. Sometimes citizens who do not have much contact with a
particular service might base their ratings on media stories from outlets with a
vested interest in playing up the negative and sensational aspects of agency perfor-
mance. Sometimes citizen ratings may express unwarranted prejudices.
In a study of English local authorities, for example, Andrews, Boyne, Meier,
O’Toole, and Walker (2005) suggest that racial unease might be one reason that
citizens rated some local governments with diverse personnel lower than those
with less diversity. The point here is not that citizens are always right and
well informed but rather that a sweeping and indiscriminate difference in
treatment occurs in the literature evaluating the two types of measures.
Citizen-generated data are labeled as subjective a priori whereas agency-
generated measures never receive this label. Is this differential expectation
based on data source justified?

Analysis
As mentioned previously, in the performance measurement articles critiqued
here, the term objective is never defined, but the objective nature of agency data
seems to rest primarily on two assertions: the truth of the reported measure, that
is, its correspondence to what actually happened, and the lack of correlation
between administrator choice of measures and an administrator’s own demo-
graphic characteristics. Each of these assertions is problematic, however, when
applied to actual agency measures and the measurement process.
Although the argument here is not that administrators habitually falsify
data, analysis of actual records supports the contention that public archives
are not invariably correct. Some discrepancies are simply the product of cal-
culation errors. But mistakes occur not only because of random human weak-
ness but also because of systemic agency prejudice. Every society has its
communal or intersubjective agreement on how people should perceive real-
ity and thus behave properly in certain situations (Belliotti, 1992). In any
given era, stereotypes that cloud citizen survey data can also influence how
agencies record information. Williams and Kellough (2006) give a particu-
larly chilling account of how the Washington, D.C., police department his-
torically falsified records on arresting officers to deny African American
police a chance for promotions. The agency substituted the names of White
officers for the African American policemen who actually made the arrests
and then listed the Black officers as complainants. A look at the department’s
log would reflect the prejudices dominating Washington, D.C., in the era—
not objective reality.
Agency records are built and maintained by people who share the preju-
dices of their time. In addition, agency administrators have their own unique
stance on measurement because they bear the brunt of legislative/citizen dis-
pleasure for low performance and may see budgetary increases and other
benefits if indicators are high. This set of incentives gives agency employees
a reason to manipulate performance data. As Downs (1967) has noted,
administrators—like all people—tend to be utility maximizers. They favor
actions that advance their own interests and try to minimize those that will
reveal their own shortcomings. Some of them may, therefore, take action to
record figures that make their performance better than facts would indicate.
Bohte and Meier’s (2000) analysis of organizational cheating includes the
example of the Austin Independent School District, which was caught manip-
ulating test scores by excluding the scores of low-performing students.
Hood (2006, p. 517) reports similar “creative compliance” in English agency
data on hospital waiting times and ambulance response statistics.
Outright fraud is not the only relevant outcome of utility maximization.
Once an agency selects a particular measure for its performance reports,
administrators have a vested interest in getting good scores on this indicator.
To this end, they will devote scarce unit resources to improve their tallies.
Such use of resources is warranted only to the extent that the measure is
actually a good way of learning whether a unit is meeting the agency’s under-
lying goal. If, however, the measure is a poor indicator for the underlying
goal, the transfer of resources is not warranted. Standardized test scores, for
example, are the dominant measure of output in the educational production
literature (Smith, 2003). Yet such scores have only a modest relationship with
individual or aggregate economic success. To the extent that administrators
use these measures to identify their ability to transfer economically useful
knowledge and skills, they exemplify Blau and Meyer’s (1971, p. 50) argu-
ment that there is a “tendency in large bureaucracies, for organizational ide-
ologies to develop that take precedence over original goals.” Scarce resources
are spent administering tests that do not relate well to the underlying goal
while the agency could instead spend those resources on improving indi-
ces that research shows do relate to economic return on schooling, for exam-
ple, pupil–teacher ratio or term length (Card & Krueger, 1992).
Administrators at various levels of the hierarchy or with different func-
tional responsibilities have different perceptions about what is key to agency
success as well as how well the organization currently works to attain its
goals. Brewer (2006) found, for example, that federal agency supervisors
were more optimistic about performance in their organizations than entry-
level workers. Such variations in outlook imply an internal politics of con-
flict among administrators over which measures to use and how to interpret
them. Different actors are likely to back those measures that support their
own sometimes conflicting management agendas or factions in policy dis-
putes. Through interviews, Moynihan (2008) found that many state legisla-
tors were skeptical of agency-generated performance data because they saw
it as reflecting the interests of the executive branch; federal legislators had
similar reservations about the use of the Bush administration’s Program
Assessment Rating Tool (PART).
Moynihan’s research as well as both the Williams and Kellough (2006)
and Bohte and Meier (2000) articles underscore the subjective nature of
agency data collection. In particular, these accounts show how ascrip-
tive characteristics—race, place in the hierarchy, etc.—and organizational
context influence administrative information collection choices and the trust
various actors accord these choices. The relevance of administrator charac-
teristics meshes with a basic premise of the representative bureaucracy litera-
ture: passive representation of groups (e.g., by race or gender) can influence
active representation of group interests in some instances. Thus, Meier and
Nicholson-Crotty (2006) found that local police departments with a greater
presence of women police officers showed greater responsiveness to crimes
against women. Another study found that an increase of female math teachers
correlated with educational benefits for girls (Keiser, Wilkins, Meier, &
Holland, 2002).
This evidence also fits with those interpretative theories that emphasize
that all people are “caught in a web of social, personal, and familial relations,
as well as embedded in particular historical contexts” (Burnier, 2003, p. 533).
As Lindblom (1990) notes, every background socializes people to look at life
and its discontents from one perspective rather than another. Each person
rates services as a full human persona—that is, as a person with specific
racial, gender, and time-dependent cultural attachments. As no ground for
administrative choice exists where these attachments are absent, so no mea-
sure can be pristinely objective. Each is chosen because it has subjective
importance to some arbiter owing to that person’s own needs and understand-
ing. This relationship holds whether the person is a community member or an
agency expert. In both cases, “no escape from feeling can be found” (Lindblom,
1990, p. 217).

Conclusions
Much of the most recent literature on objective and subjective measures cen-
ters on the appropriate role that citizen surveys should play in ascertaining
agency performance. The conclusions often soften earlier analyses of the
1980s that sought to exclude such evidence. Current authors are more likely
either to find similarities between expert and citizen data that they say can
validate the citizen material or to argue that the subjective surveys yield
additional evidence that is important for understanding agency effectiveness.
None of the analyses, however, asks the fundamental question of whether the
objective–subjective dichotomy itself should be retained or questions why
scholars created it in the first place.
A point of entry to answering such queries is trying to understand why
much of the research community dichotomized agency and citizen responses
on an objective–subjective frame. Why the use of these particular terms?
Why the privileging of professional knowledge and minimizing of the prob-
lems associated with agency-generated data? Words matter. As vocabulary
choices nudge policy in one or another direction, scholars need to study the
practical consequences of research heuristics.
One answer to explain the dichotomy’s creation and persistence may lie in
the need to affirm performance measurement’s scientific nature. Researchers
believed that presentation of performance measurement as a scientific enter-
prise required objective data. Highlighting the subjective aspects of agency
information would not reinforce the politics–administration split, with politics
a subjective exercise and administration an objective enterprise above
the fray of bargaining. If managerial—rather than political—entrepreneur-
ship is the desired outcome, the agency’s own data needed to sport the cov-
eted objective brand.
The objective and subjective literature that presents itself as immersed in
methodology may then actually be seen as part of an attempt to map the respec-
tive roles of researchers, practitioners, and citizens in performance definition.
As yet, no consensus exists in practice as to what role each of these groups
should take (e.g., Coplin, Merget, & Bourdeaux, 2002; Heikkila & Isett, 2007).
In fact, no consensus exists on what roles different practitioner levels should
take, that is, whether measure development should be a top–down exercise or
involve line workers at lower levels (Kamensky, 2009). But overreaching by
professionals to differentiate their inquiry products from ordinary knowledge,
and thereby gain professional prestige, is not a new phenomenon. Lindblom and Cohen
(1979) already argued that researchers tended to distinguish too sharply
between professional social inquiry and ordinary modes of knowledge—a ten-
dency nursed at least partly by contempt for politics.
By proposing an objective–subjective dichotomy, the performance mea-
surement literature elevates the role of researchers and administrators and
diminishes the importance of involving citizens or even getting information
from them. To a certain extent, this objective and subjective measures litera-
ture can be viewed as a counterweight to the citizen-generated performance
measurement projects funded by the Alfred P. Sloan Foundation or the
National Center for Civic Innovation. If allowed to go unchallenged, its cen-
tral dichotomized contrast of citizen and expert ratings could prove problem-
atic for efforts to construct and institutionalize forums where citizens
participate in developing public policies (e.g., Leib, 2004).
Tension between citizen and professional input is endemic in many areas
of modern public administration. A set of comparative case studies on citizen
coproduction found that administrator experts often feared citizen involve-
ment and wanted to denigrate its importance (Bovaird, 2007). One way to
limit citizen influence in identifying performance measures is to label such
efforts as subjective because as Andrews, Boyne, and Walker (2006, p. 16)
note, “Objective measures have been viewed as the gold standard in public
management research.” Despite the importance of interpretive studies to
public administration, many researchers and administrators will still view
citizen ratings as a substandard, second-order category if they bear the sub-
jective label.
Once classifications enter a field’s vocabulary, it is often difficult to extri-
cate them even if subsequent analysis finds them wanting. Explicit calls to
reformulate the objective–subjective dichotomy should, however, be part of
the ongoing agenda for people who favor increased citizen involvement in
performance measurement. Researchers should not use objective and subjec-
tive labels to dichotomize agency and citizen formulations. Because the pri-
mary criterion leading to subjective characterization—context specificity—is
equally applicable to both types of data, this analysis urges eliminating the
objective–subjective dichotomy that can separate the worth accorded citizen
and administrator involvement in measuring public service quality. The dis-
tinction is not between objective and subjective measures but rather between
measures developed by a relatively small group of experts and those pro-
duced by the individual judgments of large numbers of citizens. (See Weale,
2009, for the use of this distinction in a very different context.) For both types
of measures, the creator’s ascriptive characteristics matter. For both types of
measures, the key question is the same: How well does the indicator help the
agency move toward attaining the underlying conceptual goal?

Author’s Note
Much of this article was presented at the 2009 American Political Science Association
annual conference in Toronto, Canada. I want to thank Professor Kenneth J. Meier
for useful comments he made on an earlier draft.

Declaration of Conflicting Interests
The author declared no conflicts of interest with respect to the authorship and/or
publication of this article.

Funding
The author received no financial support for the research and/or authorship of this
article.

References
Adcroft, A., & Willis, R. (2008). A snapshot of strategy research 2002-2006. Journal
of Management History, 14, 313-333.
Alkadry, M., & Farazmand, A. (2004). Measurement as accountability. In M. Holzer
& S.-H. Lee (Eds.), Public productivity handbook (2nd ed., pp. 297-310). New
York, NY: Marcel Dekker.
Ammons, D. (2008, March). Performance budgeting success in local government:
Is a new litmus test needed? Paper delivered at the American Society for Public
Administration National Conference, Dallas, Texas.
Andrews, R., Boyne, G., Meier, K., O’Toole, L., Jr., & Walker, R. (2005). Represen-
tative bureaucracy, organizational strategy, and public service performance: An
empirical analysis of English local governments. Journal of Public Administration
Research and Theory, 15, 489-504.
Andrews, R., Boyne, G., & Walker, R. (2006). Subjective and objective measures of
organizational performance: An empirical exploration in public service perfor-
mance. In: G. Boyne, K. Meier, L. O’Toole, & R. Walker (Eds.), Public service
performance: Perspectives on measurement and management (pp. 14-34). New
York, NY: Cambridge University Press.
Balfour, D., & Mesaros, W. (1994). Connecting the local narratives: Public admin-
istration as a hermeneutic science. Public Administration Review, 54, 559-564.
Behn, R. (1995). The big questions of public management. Public Administration
Review, 55, 313-324.
Behn, R. (2001). Rethinking democratic accountability. Washington, DC: Brookings
Institution.
Behn, R. (2003). Why measure performance? Different purposes require different
measures. Public Administration Review, 63, 586-606.
Belliotti, R. (1992). Justifying law: The debate over foundations, goals, and methods.
Philadelphia, PA: Temple University Press.
Blau, P., & Meyer, M. (1971). Bureaucracy in modern society (2nd ed.). New York,
NY: Random House.
Bohte, J., & Meier, K. (2000). Goal displacement: Assessing the motivation for orga-
nizational cheating. Public Administration Review, 60, 173-182.
Bovaird, T. (2007). Beyond engagement and participation: User and community
coproduction of public services. Public Administration Review, 67, 846-860.
Brewer, G. (2006). All measures of performance are subjective: More evidence on
US federal agencies in public service performance. In G. Boyne, K. Meier, L.
O’Toole Jr., & R. Walker (Eds.), Public service performance: Perspectives on
measurement and management (pp. 35-54). New York, NY: Cambridge Univer-
sity Press.
Brown, K., & Coulter, P. (1983). Subjective and objective measures of police service
delivery. Public Administration Review, 43, 50-58.
Brudney, J., & England, R. (1982). Urban policy making and subjective service eval-
uations: Are they compatible? Public Administration Review, 42, 127-135.
Burnier, D. (2003). Other voices/other rooms: Towards a care-centered public admin-
istration. Administrative Theory and Praxis, 25, 529-544.
Candreva, P., & Brook, D. (2008). Transitions in defense management reform: A
review of the government accountability office’s chief management officer rec-
ommendation and comments for the new administration. Public Administration
Review, 68, 1043-1049.
Card, D., & Krueger, A. (1992). Does school quality matter? Returns to education
and the characteristics of public schools in the United States. Journal of Political
Economy, 100, 1-40.
Coe, C., & Brunet, J. (2006). Organizational report cards: Significant impact or much
ado about nothing? Public Administration Review, 66, 90-100.
Coplin, W., Merget, A., & Bourdeaux, C. (2002). The professional researcher as
change agent in the government-performance movement. Public Administration
Review, 62, 699-711.
Dalehite, E. (2008). Political rationality and symbolism: An investigation into the
decision to conduct citizen surveys. Public Administration Review, 68, 891-907.
DeHoog, R. H., Lowery, D., & Lyons, W. (1990). Citizen satisfaction with local gov-
ernance: A test of individual, jurisdictional, and city-specific explanations. Jour-
nal of Politics, 50, 807-837.
De Zwart, F. (2002). Administrative practice and rational inquiry in postmodern pub-
lic administration theory. Administration & Society, 34, 482-498.
Dodge, J., Ospina, S., & Foldy, E. G. (2005). Integrating rigor and relevance in public
administration scholarship: The contribution of narrative inquiry. Public Admin-
istration Review, 65, 286-300.
Downs, A. (1967). Inside bureaucracy. Boston, MA: Little, Brown.
Dubnick, M. (1999, September). Demons, spirits, and elephants: Reflections on the
failure of public administration theory. Presentation at the American Political
Science Association Conference, Atlanta. Retrieved from www.fsu.edu/~localgov/
papers/archive/Dubnick.001.pdf
Fitzgerald, M., & Durant, R. (1980). Citizen evaluations and urban management: Ser-
vice delivery in an age of protest. Public Administration Review, 40, 585-594.
Gill, J., & Meier, K. (2000). Public administration research and practice: A methodologi-
cal manifesto. Journal of Public Administration Research and Theory, 10, 157-199.
Gooden, S. T., & McCreary, S. (2001). That old-time religion: Efficiency, benchmark-
ing and productivity. Public Administration Review, 61, 116-120.
Halachmi, A. (2004). Performance measurements, accountability, and improving per-
formance. In M. Holzer & S.-H. Lee (Eds.), Public productivity handbook (2nd
ed., pp. 333-352). New York, NY: Marcel Dekker.
Heikkila, T., & Isett, K. R. (2007). Citizen involvement and performance management
in special-purpose governments. Public Administration Review, 67, 238-248.
Ho, A. (2007). Citizen participation in performance measurement. In R. Box (Ed.),
Democracy and public administration (pp. 107-118). Armonk, NY: ME Sharpe.
Hood, C. (2006). Gaming in Targetworld: The targets approach to managing British
public services. Public Administration Review, 66, 515-521.
Hummel, R. (2008). Toward Bindlestiff science: Let’s all get off the 3:10 to Yuma.
Administration & Society, 39, 1013-1019.
Kamensky, J. (2009). Web 2.0 meets performance reporting. PA Times, 32(4), 9.
Keiser, L., Wilkins, V., Meier, K., & Holland, C. (2002). Lipstick and logarithms:
Gender, institutional context, and representative bureaucracy. American Political
Science Review, 96, 553-564.
Kelly, J. (2003). Citizen satisfaction and administrative performance measures: Is
there really a link? Urban Affairs Review, 38, 855-866.
Kelly, J., & Swindell, D. (2002). A multiple-indicator approach to municipal service
evaluation: Correlating performance measurement and citizen satisfaction across
jurisdictions. Public Administration Review, 62, 610-621.
Kerlinger, F. (1964). Foundations of behavioral research: Educational and psycho-
logical inquiry. New York, NY: Holt, Rinehart & Winston.
Kravchuk, R., & Schack, R. (1996). Designing effective performance-measurement
systems under the Government Performance and Results Act of 1993. Public
Administration Review, 56, 348-358.
Leib, E. (2004). Deliberative democracy in America: A proposal for a popular branch
of government. University Park, PA: Pennsylvania State University Press.
Licari, M., McLean, W., & Rice, T. (2005). The condition of community streets and
parks: A comparison of resident and nonresident evaluations. Public Administra-
tion Review, 65, 360-368.
Lindblom, C. (1990). Inquiry and change: The troubled attempt to understand and
shape society. New Haven, CT: Yale University Press.
Lindblom, C., & Cohen, D. (1979). Usable knowledge: Social science and social
problem solving. New Haven, CT: Yale University Press.
Meier, K. (2005). Public administration and the myth of positivism: The AntiChrist’s
view. Administrative Theory and Praxis, 27, 650-668.
Meier, K., & Nicholson-Crotty, J. (2006). Gender, representative bureaucracy, and law
enforcement: The case of sexual assault. Public Administration Review, 66, 850-860.
Moynihan, D. (2008). The dynamics of performance management: Constructing
information and reform. Washington, DC: Georgetown University Press.
Munro, W. B. (1928). Physics and politics—An old analogy revised. American Politi-
cal Science Review, 22, 1-11.
Nicholson-Crotty, S., Theobald, N., & Nicholson-Crotty, J. (2006). Disparate mea-
sures: Public managers and performance-measurement strategies. Public Admin-
istration Review, 66, 101-113.
Osborne, D., & Gaebler, T. (1992). Reinventing government. Reading, MA: Addison-
Wesley.
Osborne, D., & Hutchinson, P. (2004). The price of government: Getting the results
we need in an age of permanent fiscal crisis. New York, NY: Basic Books.
Osborne, D., & Plastrik, P. (2000). The reinventor’s fieldbook: Tools for transforming
your government. San Francisco, CA: Jossey-Bass.
Park, R. (1984). Linking objective and subjective measures of performance. Public
Administration Review, 44, 116-127.
Percy, S. (1986). In defense of citizen evaluations as performance measures. Urban
Affairs Quarterly, 22, 66-83.
Raadschelders, J. (2000). Understanding government in society: We see the trees, but
could we see the forest? Administrative Theory and Praxis, 22, 192-225.
Radin, B. (2006). Challenging the performance movement: Accountability, complex-
ity and democratic values. Washington, DC: Georgetown University Press.
Riccucci, N. (2007). The criteria of action. In D. Rosenbloom & H. McCurdy (Eds.),
Revisiting Waldo’s administrative state: Constancy and change in public admin-
istration (pp. 55-70). Washington, DC: Georgetown University Press.
Rosentraub, M., Harlow, K., & Thompson, L. (1979). In defense of surveys as a reli-
able source of evaluation data. Public Administration Review, 39, 302-303.
Sanger, M. B. (2008). From measurement to management: Breaking through the bar-
riers to state and local performance. Public Administration Review, 68(Suppl.),
S70-S85.
Shingler, J., van Loon, M., Alter, T., & Bridger, J. (2008). The importance of sub-
jective data for public agency performance evaluation. Public Administration
Review, 68, 1101-1111.
Simon, H. (1957). Administrative behavior: A study of decision-making processes in
administrative organizations (2nd ed.). New York, NY: Free Press.
Smith, K. (2003). The ideology of education: The commonwealth, the market, and
America’s schools. Albany, NY: State University of New York Press.
Stipak, B. (1979). Citizen satisfaction with urban services: Potential misuse as a per-
formance indicator. Public Administration Review, 39, 46-52.
Townley, B., Cooper, D., & Oakes, L. (2003). Performance measures and the rational-
ization of organizations. Organization Studies, 24, 1045-1071.
Van Ryzin, G., Immerwahr, S., & Altman, S. (2008). Measuring street cleanliness:
A comparison of New York City’s scorecard and results from a citizen survey.
Public Administration Review, 68, 295-303.
Weale, A. (2009). Metrics versus peer review. Political Studies Review, 7, 39-49.
White, J., & Adams, G. (1994). Research in public administration: Reflections on
theory and practice. Thousand Oaks, CA: Sage.
Williams, B., & Kellough, E. (2006). Leadership with an enduring impact: The legacy
of Chief Burtell Jefferson of the Metropolitan Police Department of Washington,
DC. Public Administration Review, 66, 813-822.
Williams, F., III, McShane, M., & Sechrest, D. (1994). Barriers to effective performance
review: The seduction of raw data. Public Administration Review, 54, 537-542.
Yanow, D. (2000). Conducting interpretative policy analysis. Newbury Park, CA:
Sage.

Bio
Hindy Lauer Schachter, PhD, is a professor in the school of management at New
Jersey Institute of Technology. She is the author of Reinventing Government or
Reinventing Ourselves: The Role of Citizen Owners in Making a Better Government
(SUNY Press, 1997), Frederick Taylor and the Public Administration Community: A
Reevaluation (SUNY Press, l989), and Public Agency Communication: Theory and
Practice (Nelson Hall, 1983). Her articles have appeared in Public Administration
Review, Administration and Society, International Journal of Public Administration,
Public Administration Quarterly and other journals. She is currently book review
editor of Public Administration Review.
