Article

American Review of Public Administration
© The Author(s) 2017
DOI: 10.1177/0275074016685804
journals.sagepub.com/home/arp

The Impact of Performance-Based Grants Management on Performance: The Centers for Disease Control and Prevention's National Breast and Cervical Cancer Early Detection Program

Theodore H. Poister (1), Obed Pasha (2), Amy DeGroff (3), and Janet Royalty (3)

(1) Georgia State University, Atlanta, GA, USA
(2) University of Massachusetts Amherst, Amherst, MA, USA
(3) Centers for Disease Control and Prevention, Atlanta, GA, USA

Corresponding Author: Theodore H. Poister, Andrew Young School of Policy Studies, Georgia State University, 114 Marietta Street NW, Atlanta, GA 30302, USA. Email: tpoister@gsu.edu

Abstract
Performance-based grants management is a strategy used by public agencies to improve
performance and strengthen accountability by connecting annual award amounts to performance
information. This study evaluates the impacts of a performance-based grants management
process implemented by the U.S. Centers for Disease Control and Prevention to strengthen
the effectiveness of its National Breast and Cervical Cancer Early Detection Program. The study
uses panel data and interrupted time-series analysis over 10 years for 51 grantees. Results show
partial and conditional effectiveness of the performance-based grants management process in
strengthening performance. In particular, the implementation of the performance-based grants
management system consistently improved the performance of those grantees for whom the
targets were challenging. While prior research has found, in some cases, evidence of a positive
impact of performance management practices in improving programs delivered directly by public
organizations at the local level, this study examines the performance management–performance
relationship in a more challenging context of a federal grants program delivered through a highly
decentralized system.

Keywords
performance based grants management, public health care program evaluation, goal-setting
theory

Performance-based grants management is a form of performance management system that relies on setting goals for grantees, establishing appropriate performance measures with challenging targets, and rewarding the achievement of these goals through incentives in the form of increased awards. It is increasingly used by public agencies whose substantive programmatic activity is carried out through grants awarded to other agencies, typically public or nonprofit organizations,
to provide services. However, the effectiveness of performance-based management practices in
the context of grants management remains an open question. This study contributes to the litera-
ture by (a) evaluating the effectiveness of these systems in a large federal grants program and (b)
testing whether setting challenging goals is an essential condition for their success.
This study evaluates the impact of the performance-based grants management program
adopted by the U.S. Centers for Disease Control and Prevention (CDC) for its National Breast
and Cervical Cancer Early Detection Program (Early Detection Program from here onwards),
using panel data and interrupted time-series analysis over 10 years for 51 grantee states. As
hypothesized, the implementation of the performance-based grants management process proved
effective only for the grantees for whom the targets were challenging, while it had no effect for
those that were already achieving the targets when the process was initiated. This study takes
earlier research by Wright (2007) and Gerrish (2016) forward by examining the impact of chal-
lenging goals as yet another contingency factor necessary for the success of performance man-
agement systems. The implications of these findings on theory, practice, and future research are
discussed in the conclusion section.

Literature Review
The question of whether performance-based approaches to managing grant programs can be
effective in improving their performance is particularly salient given the magnitude of grants
programming in the United States. In Fiscal Year 2011, for example, the last year of the period
examined in this research, Federal grants to state and local governments in areas such as health,
education, transportation, housing and community development, agriculture, energy and the
environment, and social services totaled US$607 billion. This represents 17% of total federal
outlays and approximately 50% of federal spending on discretionary programs, and these grants
accounted for 25% of spending by state and local governments that year (https://www.cbo.gov/
publication/43967, accessed on January 4, 2016).
Although some recent studies examine the impacts of performance-based bonuses (Heinrich,
2007) and performance-based contracting to deliver services (e.g., Heinrich & Choi, 2007;
Koning & Heinrich, 2013; Marvel & Marvel, 2008; Miller, Doherty, & Nadash, 2013), there is a
lack of studies addressing the impact of performance-based approaches to managing programs
that operate primarily through grants to other governmental units. A
study by El-Khawas (1998) on the performance-based grants management system adopted by
Tennessee for its 24 higher education institutions serves as an exception in this regard. Through
a qualitative analysis, El-Khawas found several benefits to the system, including increased politi-
cal support and management efficiency. However, that study did not address the system’s impact
on actual program performance.
Prior quantitative research on the impact of performance management on performance
improvement has generated mixed results to date, with some studies (Andersen, 2008; Boyne &
Gould-Williams, 2003; Patrick & French, 2011) finding negative or no effects of performance
management on actual performance improvement and others (Boyne & Chen, 2007; Heinrich &
Lynn, 2001; Hvidman & Andersen, 2014; Nielsen, 2014; Poister, Pasha, & Edwards, 2013; Sun
& Van Ryzin, 2014; Walker, Damanpour, & Devece, 2011) finding positive effects of perfor-
mance management on organizational performance. Gerrish (2016) argues that the impact of
performance management systems is conditional and depends on other factors such as bench-
marking and bottom–up decision making.
One limitation of this stream of research is that all these studies focus on the relationship of
performance management practices to performance in the context of the direct delivery of service
at the local government level, specifically in the fields of municipal services, local schools, job
training, and public transit. These settings are typically characterized by direct control over per-
sonnel, the use of resources, and operations, and there is a high level of accountability over ser-
vice delivery and customer service, all of which facilitates effective performance management.
On the other hand, programs that are managed indirectly through grants tend by definition to
operate in more complex, decentralized environments, arrangements that depend more heavily
on interactions between managers and “environmental actors” who are not direct-line subordi-
nates (Meier & O’Toole, 2003). Such decentralized systems tend not to lend themselves to con-
trol and accountability, and they may well be characterized by differences in values and priorities
between grantor and grantee agencies as well as between program providers and service delivery
agencies. In these settings, grantees have considerable discretion over priorities and program
operation, and the principal-agent problem may be more difficult to overcome (Beam & Conlan,
2002). All of this makes it more challenging to attain high levels of efficiency and effectiveness
in public programs provided through grants (Salamon, 2002).
Similarly, implementing performance management processes, which require a level of
accountability and control, is likely more challenging as well. As Jennings and Haist (2004) hypothesized, when organizations with relatively less control over program execution institute performance management systems, those systems are less likely to be effective. Indeed,
O’Toole and Meier (2015) argue that operating in a more heterogeneous and dispersed task envi-
ronment will reduce not only the likelihood of program success but also the marginal impact of
externally directed managerial actions. Thus, in a highly complex environment such as that sur-
rounding the Early Detection Program, in which numerous and varied kinds of organizations are
involved, each with some degree of influence on how the program operates and services are
actually delivered, we might not expect a performance-based grants management process to gen-
erate significant improvements in overall performance.
Thus, the purpose of this research is to examine the efficacy of performance-based approaches
to managing grants in a heavily decentralized environment, using CDC’s Early Detection
Program as a case in point. In doing so, we respond to the call by O’Toole and Meier (2015) to
elaborate and test theory regarding the impact of public management practices on performance in
different contexts. This study also contributes to the growing contingency literature (see Gerrish,
2016; Nielsen, 2014) that proposes that the effectiveness of performance management depends
on implementation factors such as benchmarking, bottom–up decision making, and employee
discretion. We argue that challenging goals are yet another important factor that is essential to
the effectiveness of these systems.

Performance-Based Grants Management and Performance Improvement


Grants are administered as an indirect tool of service provision, where the grantee wields consid-
erable discretion over program operation, more so when the grantees are sovereign state govern-
ments (Beam & Conlan, 2002). While few studies have examined the role of
performance-based management practices in the context of grants, earlier research on the man-
agement–performance connection has found that proactive management practices can lead to
improved performance in multi-organizational environments (Goerdel, 2006). In setting direc-
tion through goals, maintaining control through regular reporting requirements, and evaluating
results through analysis of performance information, performance-based grants management
exemplifies a more proactive approach to managing through decentralized environments which
might well be expected to contribute to higher levels of performance on the part of grantees.
Setting goals motivates grantees to direct attention and effort to achieving those goals even in
decentralized environments where imposing penalties for poor performance may not be politi-
cally advisable (Locke & Latham, 1990). The need for direct accountability regarding more
operational matters such as ensuring proper staffing levels and adhering to appropriate protocol
in delivering services diminishes because grantees will themselves make sure that they do their work properly to achieve the established targets.
Feedback in the form of performance information plays an important mediating role between
goal setting and performance improvement (Wright, 2004). And, comparing results across mul-
tiple grantees can engender competition among them as the desire to outperform others and be
recognized as high performers can motivate grantees to strengthen their performance (Miner,
2015). In addition, when grantees recognize that their performance is being monitored by higher
level authorities and resourcing agencies, they are likely to be further motivated to improve per-
formance. Indeed, grantees may promote their accomplishments by reporting on targets that have
been attained and instances in which they have improved performance, in part to justify further
funding (Behn, 2003).
Performance-based grants management can also help grant-making organizations to mitigate
principal-agent problems (Moynihan, 2008). When grantees recognize that future awards are at
least partially dependent on actual performance, they tend more to align their own goals and
activities with the priorities of the funding agency (Murphy, 2000). Essentially, then, by setting
appropriate targets, monitoring progress toward these goals, and rewarding the achievement of
targets, performance-based grants management allows principals to impose their values, priori-
ties, goals, and objectives on their agents (Bouckaert & Balk, 1991; Colby & Pickell, 2010). With
respect to the principal-agent paradigm, then, performance management systems have tradition-
ally emphasized accountability, control, and cost-effectiveness (Heinrich & Marschke, 2010).
More recently, however, the use of these systems, including performance-based grants manage-
ment, has broadened to focus on other purposes as well, such as promoting learning and motivat-
ing improved performance among grantees (Behn, 2003; Moynihan, 2008).
The positive impact of performance management systems, however, is not universal and may
depend on the presence or absence of other managerial practices (Gerrish, 2016; Nielsen, 2014).
In the following section, we discuss challenging goals as an important condition for the success
of these systems in the grant-making context.

Impact of Challenging Goals on Organizational Performance


The presence of a gap between current and desired performance is a fundamental assumption of
goal-setting theory, and organizations have a meaningful incentive to improve only when such a
gap exists. A performance gap between current performance and the targets requires the grantees
to make greater efforts to reach their targets to gain positive self-evaluation as well as recognition
from peers and administrators (Bandura, 1986; Wright, 2004). By way of contrast, easy-to-reach
targets provide little motivation for grantees to work harder, innovate, and strategize to improve
their performance beyond current levels.
Some caution in setting challenging goals in the absence of essential organizational routines
relating to goal setting such as defining and analyzing performance measures is also expressed in
the literature (Linderman, Schroeder, & Choo, 2006). Scholars have argued that targets may cre-
ate pessimism among individuals if they are too difficult to achieve and will subsequently
decrease employee motivation (Erez & Zidon, 1984). The ceiling effect may prevent high per-
formers from improving their performance, while it is relatively easy for low performers to
improve performance as there is room for improvement and they can also borrow innovations
from high performers.
Challenging-yet-achievable goals could also lead to other negative consequences (see Ordóñez,
Schweitzer, Galinsky, & Bazerman, 2009). Employees may adopt riskier strategies and become
aggressive in negotiations to meet challenging targets. Some studies have shown unethical behav-
ior due to stretch goals such as overcharging customers, lying on performance reports, and focusing on the ends rather than the means (Barsky, 2008). Instead of increasing job satisfaction, challenging
goals have a detrimental impact on employee self-efficacy in some cases. Employees might question their own abilities and intelligence when a task proves difficult, which may further lead to
lower engagement and commitment toward future tasks (Mussweiler & Strack, 2000).
On balance, however, both theory and research suggest that setting goals that are challenging
as well as realistic and achievable can contribute to improved performance in grants management
settings as is the case with respect to direct service delivery and contract management processes.
Extensive research in the corporate sector has shown that setting challenging goals has a positive
impact on motivation, organizational commitment, and performance of employees (Whittington,
Goodwin, & Murray, 2004; see Latham & Locke, 2006, for examples). Hence, setting challeng-
ing goals is now considered a hallmark of good change-oriented leadership in business (Crossley,
Cooper, & Wernsing, 2013; Thorgren & Wincent, 2013). In the context of public administration,
Wright (2007) administered a cross-sectional survey of 2,200 employees working with a large
New York State agency and found that employees perceiving their goals as challenging reported
a higher level of motivation. The present study takes this research further by using objective
measures of organizational performance over a period of 11 years to determine whether challeng-
ing targets lead to improved performance.

Hypothesis 1: Performance-based grants management will improve the performance of grantees for whom the goals are challenging.

The National Breast and Cervical Cancer Early Detection Program
In 1990, Congress passed the Breast and Cervical Cancer Mortality Prevention Act establishing
the Early Detection Program with the aim of reducing morbidity and mortality due to breast and
cervical cancer among medically underserved women. Administered by the CDC, the Early
Detection Program currently provides grant funding to all 50 states, Washington D.C., and 16
tribal and territorial health organizations. The goal of the program is to increase screening among
low-income women and reduce disparities through providing high quality screening and diag-
nostic services. In fiscal year 2015, CDC provided over US$152 million in funds to grantees with
awards ranging from US$238,338 to US$8,269,773 (median of US$2,260,314). Since the pro-
gram’s inception, more than 4.8 million women have been screened with more than 67,959 breast
cancers; 3,715 invasive cervical cancers; and 171,174 premalignant cervical lesions diagnosed
(http://www.cdc.gov/cancer/nbccedp/about.htm; accessed on January 8, 2016).
Like many federal programs, the Early Detection Program operates within a complex environ-
ment in which the grantees are autonomous state agencies which provide numerous and some-
times competing programs and often have their own funding for cancer detection that is
independent of the federal grants they receive. In addition, the Early Detection Program is highly
decentralized to accommodate local context and community needs (DeGroff et al., 2010). The 67
grantees typically establish subcontracts with health care providers across the state for delivery
of screening services, and in some instances, grantees subcontract with regional agencies that
again subcontract with providers in their geographic area. Nationwide, it currently includes more
than 10,000 individual clinical sites of diverse types including federally qualified health centers,
individual provider clinics, hospitals and other health care systems affiliated clinics, and Indian
Health Service clinics (DeGroff et al., 2015). Thus, the Early Detection Program relies on numer-
ous and varied public, private, and nonprofit organizations to deliver services that operate well
beyond the control of CDC.
CDC requires grantees to support a comprehensive, organized screening program by imple-
menting a set of core activities. Central to these is contracting with local health care providers for
clinical service delivery. Eligibility for the Early Detection Program is restricted to low-income,
uninsured or underinsured women with emphasis on reaching women where disease is more prev-
alent (i.e., women never or rarely screened for cervical cancer, women age 50 and older for breast
cancer). Consequently, grantees conduct public education and outreach to educate women about
screening and recruit women eligible for the program. To ensure that patients with abnormal
screening results complete diagnostic testing, grantees also implement tracking, follow-up, and
patient navigation activities. Research has demonstrated that completed screening and diagnostics
leads to the prevention of cervical cancer (through removal of precancerous lesions) and early
detection of breast and cervical cancers when treatment is more effective (Nelson et al., 2009;
Vesco et al., 2011). This, in turn, should lead to decreased breast and cervical cancer mortality.

Performance-Based Grants Management Within the Early Detection Program


To strengthen accountability in accordance with the Government Performance and Results Act
(GPRA) and address the leveling of funding for the program after many years of consistent
increases, CDC embarked on developing a performance-based system for managing the Early
Detection Program in 2006. This process is embodied in cooperative agreements with grantees
that provide for performance-based funding augmentations, maintenance of high quality perfor-
mance data, and performance reports generated and reviewed with the grantees, all of which is
intended to encourage and assist the grantees to screen more women in the defined priority popu-
lations, provide higher quality screening and diagnostic services, and ensure that women with
diagnoses are accessing treatment.
As the performance-based grants management strategy for the Early Detection Program
awards funds to grantees partially on the basis of performance, performance measurement and
assessment are central to the grants management process. It is important to note here that the
Early Detection Program supports good performance, while grantees are not penalized for poor
performance. CDC requires grantees to support a data management system and collect client
demographic and clinical outcome data for all women screened through the Early Detection
Program. Semiannually, grantees submit these data to CDC’s data contractor who, in turn, pro-
duces a set of standard data reports used in a CDC-led review process. These include a report on
27 indicators with benchmarks, originally established in 1994, to evaluate both quality of care
and data quality. Grantees are required to regularly monitor data quality through medical chart
abstraction to ensure data validity. A study conducted to evaluate the quality of these data
showed that the data reported to CDC are valid, and consistent with medical record clinical data
(Yancy et al., 2014).
When CDC initiated the performance-based grants management process in 2006, it empha-
sized 11 indicators, all of which had been part of the larger indicator report since 2000. The 11 core
performance measures, shown in Table 1 along with their targets, represent important areas of
clinical performance and closely reflect the Early Detection Program’s goal of providing high
quality screening and diagnostic services. The 11 measures include two emphasizing priority
populations for the program (where disease is most likely to be found) and nine clinical measures
regarding complete and timely care and cancer treatment initiation. Our preliminary analysis
revealed that nearly all grantees were meeting targets on the nine completeness and timeliness
indicators when the performance-based grants management program was implemented. However,
there was a sizable number of grantees that were not meeting targets for the two priority popula-
tion measures and would likely have found them challenging. Thus, our analysis focuses on those
two measures.
Target setting for the core performance measures has been informed by medical expert opin-
ion, stakeholder input, and the realities of program implementation. Some of the targets might be
considered to be soft, and one might argue that better performance could be extracted by setting
more challenging targets (e.g., 90% or 95% of women with abnormal breast cancer screening results must receive complete diagnostic work-up within 60 days rather than the current 75%). The targets that were established for the Early Detection Program, however, were based on strong theoretical and programmatic reasoning. The CDC consulted with all grantees to ensure that the indicators and targets were practical, an important tenet of developing measures. During the study period, CDC typically sponsored an annual business meeting for all grantees where concerns about the performance measures and targets could be discussed. For instance, CDC recognizes that, while desirable, follow-up on every woman with abnormal screening results is impractical given that some women may become insured, seek follow-up elsewhere, or move to a different state or country.

Table 1. Early Detection Program Core Performance Measures.

Screening the priority population
  Priority population for cervical cancer screening: percentage of initial program Pap tests provided to women rarely or never screened for cervical cancer (target: ≥20%)
  Priority population for breast cancer screening: percentage of screening mammograms provided to women ages 50 and older (target: ≥75%)
Complete and timely diagnostic follow-up of abnormal screening results
  Complete cervical diagnostic follow-up: percentage of abnormal cervical screens with diagnostic evaluation completed (target: ≥90%)
  Timely cervical diagnostic follow-up: percentage of abnormal cervical screens with time from screening to final diagnosis >90 days (target: ≤25%)
  Complete breast diagnostic follow-up: percentage of abnormal breast screens with diagnostic evaluation completed (target: ≥90%)
  Timely breast diagnostic follow-up: percentage of abnormal breast screens with time from screening test result to final diagnosis >60 days (target: ≤25%)
Complete and timely initiation of treatment for cancers diagnosed
  Treatment started for cervical cancer: percentage of women diagnosed with invasive cervical cancer or premalignant high-grade lesions who have initiated treatment (target: ≥90%)
  Timely treatment for premalignant cervical lesions: percentage of women diagnosed with premalignant high-grade cervical lesions with time from date of diagnosis to treatment started >90 days (target: ≤20%)
  Timely treatment for invasive cervical cancer: percentage of women diagnosed with invasive cervical cancer with time from date of diagnosis to treatment started >60 days (target: ≤20%)
  Treatment started for breast cancer: percentage of women diagnosed with breast cancer initiating treatment (target: ≥90%)
  Timely treatment for breast cancer: percentage of women diagnosed with breast cancer with time from date of diagnosis to treatment started >60 days (target: ≤20%)

Even though the consultation process resulted in lower targets than might be considered ideal,
such collaboration was necessary to ensure the internalization of the measures by grantees, an
essential condition of goal-setting theory. And, in fact, prior research has shown that the Early
Detection Program grantees perceive the measures to be meaningful, fair, and closely in-line with
overall goals for the program (Self-Cited, 2014). Furthermore, CDC funds often represent only a
part of grantees’ overall cancer prevention budget and participation in the Early Detection
Program is voluntary. Thus, CDC was concerned that setting more aggressive targets might push
some grantees or providers to opt out of the program. Finally, there are valid clinical reasons to
keep targets lower for some measures in the aggregate to leave room for clinical judgment on
individual cases. Based on grantee concerns, CDC has, in rare cases, modified how a particular
indicator is specified to adjust for circumstances outside of program control. Although the per-
formance measures have been modified somewhat over time to better reflect program priorities
and performance (see Table 5), the system has maintained a consistent, concise, and well-defined
set of priorities that can be accurately measured using reliable data sources.
Implementation of performance-based grants management in early 2006 involved a robust
monitoring and feedback process following each data submission. A summary report of the core
performance measures is prepared and discussed by CDC and individual grantees as part of semi-
annual data reviews. Action items for follow-up by the grantee are generated and a narrative
addressing their resolution is included with the next data submission (Self-Cited, 2014). While
the reports are not shared publicly, grantees have access to aggregate results, which allows them
to assess their own performance on each measure compared to others. Grantee performance on
the measures has generally been high over time, making it challenging to separate better perform-
ers from poor ones.
CDC funding for these grants varied somewhat over the first 8 years following implemen-
tation of the performance management system, which is the time period examined in this
study, and funding in FY 2012 was just slightly higher than it was in FY 2005. Although
annual funding decisions for the Early Detection Program grantees are largely dependent on
the overall amount of funds available to distribute and current funding levels, decision mak-
ing is also informed by a process that uses a combination of performance and equity mea-
sures to establish a recommended funding range for each grantee. To establish these ranges,
grantees are placed in performance groups based on a scored assessment of clinical care
(whether the grantee meets the core performance measures), program management (techni-
cal review of grantee continuation applications), and fiscal stewardship (spend rate). They
are also placed in high-to-low equity groups by comparing their current funding level to an
estimated equitable distribution of Early Detection Program funds across grantees based on
the number of program-eligible women in their jurisdiction to inject an indication of need
into the allocation process. Performance and equity groups are then combined and each level
is assigned a recommended funding range, with the intention to reward high performers and
move toward a more equitable distribution of resources.

Method
As discussed above, consistent with goal-setting theory, CDC establishes targets in consultation with grantees to ensure that targets are internalized, and provides semiannual feedback
reports to all grantees. While the conditions of internalization and feedback are met for all the
grantees, the criterion of setting challenging targets was not met consistently across grantees. On
the other hand, since most Early Detection Program grantees were already meeting the CDC
targets at the time the performance-based grants management program was implemented, it can
be argued that the targets were not only challenging but also achievable for the grantees that were
not already meeting them. Thus, to test the impact of the performance-based grants process, we
separated the grantees into two groups, the first group consisting of the grantees that were not
already meeting targets on a particular indicator, and the second group consisting of grantees that
were already meeting targets when the performance-based grants management program was
implemented. The impact of the performance-based grants management program on grantee per-
formance was analyzed separately for these two groups.
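As a concrete illustration of this grouping step, a minimal sketch follows; it is not the authors' code, and the file and column names (edp_panel.csv, grantee, time, perf, target) are hypothetical.

```python
import pandas as pd

# Minimal sketch of the grouping step described above (not the authors' code).
# Assumes a long-format panel with one row per grantee per 6-month period and
# hypothetical columns: grantee, time (0 = intervention point), perf, target.
df = pd.read_csv("edp_panel.csv")

# Grantees at or above the indicator target in the intervention-point period
# form the "already meeting" group; all others form the "not meeting" group.
baseline = df[df["time"] == 0]
meeting = set(baseline.loc[baseline["perf"] >= baseline["target"], "grantee"])
df["not_meeting"] = (~df["grantee"].isin(meeting)).astype(int)
```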
The methodology used in this research follows earlier studies such as Wagner, Soumerai,
Zhang, and Ross-Degnan (2002) and Fowler et al. (2007), which analyze panel data using inter-
rupted time-series analysis for policy interventions in health care. This study compares perfor-
mance trends on the core performance indicators before and after the formal introduction of the
system in early 2006 to evaluate the effectiveness of the Early Detection Program’s performance-
based grants management system in strengthening grantee performance. The analysis covers
twenty-three 6-month periods running from June 2000 through June 2011. Fifty-one of the 68
grantees were included in the analysis. Given the small numbers of women screened and unique
contextual issues, tribal and territorial grantees were excluded while all 50 states and the District
of Columbia were included.
An autoregressive fixed-effects model was adopted, where the standard errors of each grantee
state (for the 23 time periods) were clustered to produce unbiased standard errors and correctly
sized confidence intervals (Penfold & Zhang, 2013). STATA v 12.0 was used to run a generalized
least squares (GLS) regression with robust standard errors owing to the high correlation between
observations over time belonging to the same grantee, as well as a high heteroscedasticity due to
differences in variability among the grantees. The GLS regression model to estimate the level and
trend in mean performance of grantees before the performance-based grants management pro-
gram and the changes in level and trend following the program is specified as follows:

\[
\text{PerfY}_{it} = \beta_0 + \beta_1 \times \text{time}_{it} + \beta_2 \times \text{intervention}_{it} + \beta_3 \times \text{time after intervention}_{it} + \beta_4 \times \text{percent uninsured}_{it} + \beta_5 \times \text{awards}_{it} + \delta_t + \varsigma_i + \varepsilon_{it},
\]

Here, PerfYit is the mean performance of grantee i in 6-month period t, and time is a continu-
ous variable indicating the 6-month period at time t for grantee i from the start of the observation
period in June 2000 (time = −11) to the last observation in June 2011 (time = 11). Intervention is
a dummy variable for grantee i indicating whether the observation is occurring before (interven-
tion = 0) or after (intervention = 1) the performance-based grants management program was
implemented in January 2006 (time = 0). Time after intervention represents an interaction of time
and intervention. Percent uninsured is a control for the percentage of the population which
remained uninsured within grantee state i in a given year. The data on percent of uninsured popu-
lation by state (annual) is derived from the United States Census Bureau’s Annual Social and
Economic Supplement. Award refers to the amount of money CDC awarded to each grantee
annually and is derived from the CDC’s own records.
In this model, β0 estimates the mean performance of grantees at time 0; β1 estimates the
change in the mean performance of grantees that occurs with each 6-month period before the
intervention (baseline trend); β2 estimates the level change in the mean performance of grantees
immediately after the intervention; β3 estimates the change in the trend in the mean performance
of grantees after the performance-based grants management program, relative to the trend before
the intervention.
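To make the specification concrete, the sketch below shows one way such a segmented regression could be fit in Python with statsmodels. It is an illustrative approximation rather than the authors' Stata/GLS estimation: year fixed effects are omitted here so that the intervention term remains identified, and all file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative approximation of the interrupted time-series panel model
# (not the authors' Stata/GLS code); file and column names are hypothetical.
df = pd.read_csv("edp_panel.csv")
df["time_after_intervention"] = df["time"] * df["intervention"]

model = smf.ols(
    "perf ~ time + intervention + time_after_intervention"
    " + percent_uninsured + awards"
    " + C(grantee)",   # grantee fixed effects (the paper also includes year effects)
    data=df,
)
# Standard errors clustered by grantee allow unrestricted error correlation
# within each grantee across the 23 six-month periods.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["grantee"]})
print(result.params[["time", "intervention", "time_after_intervention"]])
```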
This model controls for unobserved contextual factors as well as baseline level and trend, a
major strength of interrupted time-series analysis. ςi controls for the unobserved time-invariant
contextual covariates using fixed-effects at the grantee-level and δt represents the year fixed-
effects controlling for the system-level changes due to time. Finally, εit represents the unob-
served error. In addition to the statistical analysis, time-series graphs were constructed to facilitate
visual inspection of the data. In reporting the results, the 6-month time periods are denoted on the
horizontal axis while the mean performance of the grantees for a particular indicator is denoted
on the vertical axis. A horizontal reference line represents the CDC target for each indicator while
a vertical reference line identifies the intervention point when performance-based grants man-
agement was implemented.
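A plot of this kind can be reproduced with a few lines of code; the sketch below is illustrative only and reuses the hypothetical file and column names from the earlier examples.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative sketch of the time-series plots described above; file and
# column names are hypothetical, and 20 is the target for the cervical measure.
df = pd.read_csv("edp_panel.csv")
means = df.groupby("time")["perf"].mean()   # subset to one grantee group as needed

fig, ax = plt.subplots()
ax.plot(means.index, means.values, marker="o")
ax.axhline(20, color="gray", linestyle="--")   # horizontal reference line: CDC target
ax.axvline(0, color="gray", linestyle=":")     # vertical reference line: intervention point
ax.set_xlabel("6-month period (0 = January 2006 intervention)")
ax.set_ylabel("Mean performance (%)")
plt.show()
```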
Figure 1. Priority Population for Cervical Cancer Screening for grantees meeting and not meeting the target at the intervention point.

Results and Analysis


The performance-based grants management system for CDC’s Early Detection Program uses 11
indicators to measure performance (see Table 1). However, only two of these indicators, which
are both concerned with the extent to which the screening activity focused on the priority popula-
tions targeted by CDC, allow us to test the hypothesis because almost all grantees were meeting
their targets for the remaining nine indicators when the performance-based grants management
program was established. Hence, we will analyze only the Priority Population for Cervical
Cancer Screening and Priority Population for Breast Cancer Screening indicators, which had 11
and 17 grantees not meeting targets at the intervention point, respectively. We analyze the impact
of the performance-based grants management program on the mean performance of grantees
meeting and not meeting their targets separately to observe whether the performance of these two
groups reacts differently after the intervention.
Figure 1 represents the impact of the performance-based grants management program on
the mean performance of grantees on the Priority Population for Cervical Cancer Screening
performance indicator. The solid line represents the mean performance of grantees that were
already meeting the target (n = 40), while the dotted line represents the mean performance
of grantees that were not meeting the target (n = 11) at the intervention point. A visual
inspection reveals no change in the performance trend for grantees already meeting the tar-
gets. However, the performance for grantees not meeting the target was trending downward
in the pre-intervention period, but turns into a somewhat steeper upward trend in the postint-
ervention period.
The regression results in Table 2 support the visual analysis presented in Figure 1. The GLS
regression results for grantees that were already meeting the target for Priority Population for
Cervical Cancer Screening in Model 2 show that the baseline trend (β1) for performance was
increasing by 0.577 percentage points every 6 months and was statistically significant at the .05 level. This suggests a maturation of performance for these grantees. However, the coefficients for the change in level (β2) and change in trend after the intervention (β3) are not significant, which suggests that the hypothesized impact of the performance-based grants management program on the performance of grantees already meeting the target is unfounded.

Table 2. Generalized Least Squares Regression Results for the Grantees Meeting and Not Meeting the Target for Priority Population for Cervical Cancer Screening Prior to the Implementation of the Performance-Based Grants Management Intervention. Models (1) and (2): grantees meeting targets; Models (3) and (4): grantees not meeting targets.

Intercept β0: (1) 28.707*** (0.794); (2) 36.320*** (6.691); (3) 15.875*** (0.889); (4) 10.374 (6.600)
Baseline trend β1: (1) 0.459 (0.198); (2) 0.577* (0.265); (3) −0.198 (0.217); (4) −0.203 (0.205)
Level change after intervention β2: (1) 1.479 (0.992); (2) 1.243 (1.020); (3) 4.423* (1.457); (4) 4.527* (1.562)
Trend change after intervention β3: (1) −0.168 (0.240); (2) −0.218 (0.264); (3) 0.900** (0.250); (4) 0.894** (0.246)
Percent uninsured β4: (2) −0.608 (0.436); (4) 0.213 (0.396)
Awards β5: (2) 0.000001 (0.000003); (4) 0.000001 (0.000001)
R2 within: (1) .178; (2) .188; (3) .409; (4) .416
F-statistics: (1) 7.48***; (2) 5.11***; (3) 13.26***; (4) 8.16**
No. of observations: (1) 920; (2) 920; (3) 253; (4) 253
No. of grantees: (1) 40; (2) 40; (3) 11; (4) 11

Note. Unstandardized coefficients; Huber–White robust standard errors in parentheses allow for unrestricted error correlation within each agency.
*p < .05. **p < .01. ***p < .001, two-tailed test.

Model 4 in Table 2 shows the regression results for grantees not meeting the indicator target
at the intervention point. Although the coefficient for baseline trend (β1) demonstrates a decreas-
ing trend of performance, these estimates are not statistically significant. On the other hand, the
coefficients for change in level (β2) and change in trend after the intervention (β3) are not only
positive but are also statistically significant at .05 and .01 levels, respectively. This suggests that
the performance-based grants management system improved the mean performance of these
grantees (ones not meeting the target) by 4.527 percentage points immediately following the
intervention and 0.894 percentage points every 6 months thereafter.
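To put these two coefficients together, the implied cumulative gain k six-month periods after the intervention, relative to the pre-intervention trend, can be written as follows (an editorial illustration based on the Model 4 estimates, not an additional result reported in the study):

\[
\hat{\Delta}_k = \beta_2 + \beta_3 \, k = 4.527 + 0.894\,k ,
\]

so that, for example, two years after implementation (k = 4) the implied improvement is roughly 4.527 + 0.894 × 4 ≈ 8.1 percentage points.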
For the Priority Population for Breast Cancer Screening indicator, Figure 2 shows the impact
of the performance-based grants management system on the mean performance of grantees meet-
ing (solid line) and not meeting (dotted line) the target at the intervention point. Similar to Figure
1, we observe a rather stable increment in the mean performance (maturation) for grantees meet-
ing the target before and after the intervention point. The trend for the grantees not meeting the
target shows a substantial negative trend until the intervention point, but then reflects a substan-
tial improvement in mean performance going forward.
In Model 2 of Table 3, the coefficients for baseline trend (β1), change in level (β2), and change
in trend (β3) are not statistically significant. This shows that the slightly increasing mean perfor-
mance of grantees meeting their targets for the Priority Population for Breast Cancer Screening
indicator remained quite consistent over the entire 23 periods and was not impacted by the per-
formance-based grants management system. A positive coefficient for percent uninsured (β4),
however, is statistically significant. This shows that a one percentage point increase in the unin-
sured population leads to slightly more than a one percentage point increase in the mean perfor-
mance on this indicator.
In contrast to the trends for grantees that were already meeting the target, the coefficient for
baseline trend (β1) for grantees not meeting targets suggests a depreciating and significant perfor-
mance trend (see Model 4 of Table 3). The change in level (β2) and trend for the mean perfor-
mance (β3) of grantees not meeting the target for this indicator, however, are positive and
statistically significant at the .01 level. These results show that performance increased by 6.877 percentage points immediately after the intervention and continued to rise by 1.448 percentage points every 6 months following implementation of the performance-based grants management program.

Figure 2. Priority Population for Breast Cancer Screening for grantees already meeting and not meeting the target at the intervention point.

Table 3. Generalized Least Squares Regression Results for the Grantees Meeting and Not Meeting the Target for Priority Population for Breast Cancer Screening Prior to the Implementation of the Performance-Based Grants Management Intervention. Models (1) and (2): grantees meeting targets; Models (3) and (4): grantees not meeting targets.

Intercept β0: (1) 84.437*** (1.107); (2) 63.759*** (8.273); (3) 7.144*** (1.770); (4) 76.052*** (6.360)
Baseline trend β1: (1) .319 (.264); (2) .210 (.270); (3) −.282 (.2659); (4) −.641* (.270)
Level change after intervention β2: (1) .781 (1.026); (2) .668 (1.118); (3) 6.652** (2.321); (4) 6.877** (2.336)
Trend change after intervention β3: (1) −.171 (.314); (2) −.197 (.316); (3) 1.182 (.413); (4) 1.448** (.433)
Percent uninsured β4: (2) 1.043** (.385); (4) .710 (.461)
Awards β5: (2) .000004 (.000004); (4) −.00001** (.000003)
R2 within: (1) .0638; (2) .101; (3) .280; (4) .337
F-statistics: (1) 2.33; (2) 3.28*; (3) 7.84***; (4) 8.08***
No. of observations: (1) 781; (2) 781; (3) 391; (4) 391
No. of grantees: (1) 34; (2) 34; (3) 17; (4) 17

Note. Unstandardized coefficients; Huber–White robust standard errors in parentheses allow for unrestricted error correlation within each agency.
*p < .05. **p < .01. ***p < .001, two-tailed test.

Table 4. Generalized Least Squares Regression Results for the Priority Population for Cervical Cancer Screening and Priority Population for Breast Cancer Screening. Models (1) and (2): priority population for cervical cancer screening; Models (3) and (4): priority population for breast cancer screening.

Not Meeting × Time β5: (1) −.657* (0.287); (2) −.689* (0.297); (3) −.601 (0.370); (4) −.672 (0.360)
Not Meeting × Intervention β6: (1) 2.943 (1.714); (2) 2.982 (1.767); (3) 5.871* (2.490); (4) 6.427* (2.499)
Not Meeting × Time After Intervention β7: (1) 1.068** (0.339); (2) 1.100** (0.347); (3) 1.352* (0.511); (4) 1.450*** (0.511)
Percent uninsured β8: (2) −.421 (0.362); (4) 9.850** (0.299)
Awards β9: (2) .000001 (.000001); (4) .000002 (.000003)
R2 within: (1) .217; (2) .223; (3) .171; (4) .188
F-statistics: (1) 1.96***; (2) 8.69***; (3) 5.27***; (4) 5.01***
No. of observations: (1) 1,173; (2) 1,173; (3) 1,173; (4) 1,173
No. of grantees: (1) 51; (2) 51; (3) 51; (4) 51

Note. Unstandardized coefficients. Results for β0 through β4 are suppressed for brevity. Huber–White robust standard errors in parentheses allow for unrestricted error correlation within each agency.
*p < .05. **p < .01. ***p < .001, two-tailed test.

These findings support our hypothesis that performance-based grants management improves
performance only when targets are challenging. For both indicators, the mean pre-intervention
performance trend was negative for grantees not meeting the target but improved both in level
and trend after the performance-based grants management system was implemented.

Robustness Check
In Table 4, we include all the grantees in the same model as a robustness check for the results in
Tables 2 and 3. Here again, we see that the grantees not meeting their targets show a different
performance trend compared to the agencies that were already reaching their targets.

\[
\begin{aligned}
\text{PerfY}_{it} = {} & \beta_0 + \beta_1 \times \text{time}_{it} + \beta_2 \times \text{intervention}_{it} + \beta_3 \times \text{time after intervention}_{it} + \beta_4 \times \text{NotMeeting}_{it} \\
& + \beta_5 \times (\text{NotMeeting} \times \text{time})_{it} + \beta_6 \times (\text{NotMeeting} \times \text{intervention})_{it} \\
& + \beta_7 \times (\text{NotMeeting} \times \text{time after intervention})_{it} + \beta_8 \times \text{percent uninsured}_{it} + \beta_9 \times \text{awards}_{it} + \delta_t + \varsigma_i + \varepsilon_{it}.
\end{aligned}
\]
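For completeness, a compact sketch of how this pooled specification could be expressed in the same illustrative statsmodels form used earlier is shown below. The not_meeting column is assumed to have been constructed as in the earlier grouping sketch; the stand-alone NotMeeting term (β4) is omitted here because it is time-invariant and therefore absorbed by the grantee fixed effects, and year fixed effects are again left out to keep the segmented terms identified.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative pooled robustness model (not the authors' code). Interactions of
# not_meeting with the segmented time terms estimate how the two grantee groups
# differ in pre-intervention trend, level shift, and post-intervention trend.
df = pd.read_csv("edp_panel.csv")   # not_meeting assumed added as in the earlier sketch
df["time_after_intervention"] = df["time"] * df["intervention"]

formula = (
    "perf ~ time + intervention + time_after_intervention"
    " + not_meeting:time + not_meeting:intervention"
    " + not_meeting:time_after_intervention"
    " + percent_uninsured + awards + C(grantee)"
)
result = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["grantee"]}
)
print(result.params.filter(like="not_meeting"))   # group-difference terms
```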

In Model 1 for Priority Population for Cervical Cancer Screening, the grantees not meeting their targets show a decrease of 0.657 percentage points with every passing 6-month period prior to the intervention relative to the agencies meeting targets (β5), whereas in the postintervention period performance on this measure increases by 1.068 percentage points more per period for grantees not meeting targets than for grantees meeting their targets (β7). Similarly, for Priority Population for Breast Cancer Screening in Model 3, the performance of grantees not meeting targets improved by 5.871 percentage points more at the intervention point (β6) and by 1.352 percentage points more per postintervention period (β7) than that of the grantees meeting targets.
Table 5. Impact of Performance-Based Grants Management on Early Detection Program Performance Measures. For each measure: effect on grantees meeting targets; effect on grantees not meeting the target.

Screening the priority population
  Priority population for cervical cancer screening: No effect; Positive
  Priority population for breast cancer screening: No effect; Positive
Complete and timely diagnostic follow-up of abnormal screening results
  Complete cervical diagnostic follow-up: Negative; Not applicable
  Timely cervical diagnostic follow-up (a): No effect; Not applicable
  Complete breast diagnostic follow-up: Negative; Not applicable
  Timely breast diagnostic follow-up (b): No effect; Not applicable
Complete and timely initiation of treatment for cancers diagnosed
  Treatment started for cervical cancer (c): No effect; Not applicable
  Timely treatment for premalignant cervical lesions (d): No effect; Not applicable
  Timely treatment for invasive cervical cancer: No effect; Not applicable
  Treatment started for breast cancer: No effect; Not applicable
  Timely treatment for breast cancer: No effect; Not applicable

a. In October 2009, the target interval increased from 60 to 90 days, and the measurement changed to start the diagnostic interval on the date of enrollment in the program for women referred in for diagnostic evaluation after receiving an abnormal cervical screen provided through another source. For all time periods in this analysis, performance was based on a 60-day interval, and the diagnostic interval started with an enrollment date for referred women when available.
b. In October 2009, the measurement changed to start the diagnostic interval on the date of enrollment in the program for women referred in for diagnostic evaluation after receiving an abnormal breast screening provided through another source. For all time periods in this analysis, the diagnostic interval started with an enrollment date for referred women when available.
c. Although we see a negative overall impact, after excluding the instability/regression exhibited in the four earliest time periods we see a positive trend on this measure.
d. In July 2002, the target interval increased from 60 to 90 days. For all time periods in this analysis, performance was based on a 90-day interval.

To further confirm these results, we analyzed the other nine indicators where almost all
grantees were meeting the targets at the intervention point. The analysis showed similar results,
and performance-based grants management was found to have either no effect or a negative
effect on the performance of these indicators. Summary results for all 11 indicators are pre-
sented in Table 5.

Discussion
These results tentatively support the argument of goal-setting theory that establishing challeng-
ing goals, representing a gap between expected and current performance levels, is helpful for
performance improvement. Given the relative lack of control and authority that characterizes
grant programs, we might expect performance management practices to be less effective
(Jennings & Haist, 2004), but our results indicate nevertheless that setting challenging goals and
monitoring grantee performance against them can lead to performance improvement. These find-
ings are particularly noteworthy given recent theorizing by O’Toole and Meier (2015) that the
marginal impact of management interventions, such as the Early Detection Program's performance-based grants management process, will be
reduced in complex environments.
The Early Detection Program itself embodies numerous complexities such as federal-state rela-
tions, a focus on highly vulnerable women who are uninsured and have low income, and grantees’
reliance on a large number and wide variety of health care providers to deliver services. Implementation
of performance-based grants management in such a scenario is bound by both stakeholder influences and programmatic realities. Our results demonstrate that performance-based grants management can
contribute to improved performance even when it is implemented in situations far from ideal. In part,
this was facilitated by the collaborative relationship that was fostered between CDC and its grantees
through participation in setting targets and reviewing the performance with an eye toward learning
and finding solutions to problems rather than focusing on accountability per se.
The differential impact of performance-based grants management in these two conditions might
also be related to the ease of performance improvement. Related to the concept of diminishing returns, marginal performance improvement for good performers is restricted by the need for dramatic innovation and experimentation. For poorly performing agencies, on the other hand, there is ample room for growth, mostly by borrowing ideas from high performers and learning from their experi-
ences (Murphy, 2000). Boyne, James, John, and Petrovsky (2011) support the contingency model in
a study in which they found changes in management to have a positive impact on the performance of
only the poor performers. The poor performers may also attract more political attention as politicians
may be more interested in avoiding failure than generating success. These findings may also reflect
a diversion of resources by grantees from indicators that are already being met to other areas where
improvement is needed. Ceiling effects may also prevent high performers from improving perfor-
mance, as might be the case in Figure 2 where the highest performance level is close to 100%.
However, we do not observe such an effect on Priority Population for Cervical Cancer Screening
(Figure 1) where the highest performance level of 32% is well below the ceiling of 100%.
There could potentially be other factors contributing to the improvement in grantee performance
on the priority population criteria in conjunction with the implementation of the performance-based grants management program, such as increased awareness of women's health issues, specific breast or cervical cancer health campaigns (e.g., "pink ribbon"), other sources of funding, and information campaigns. However, the authors could not identify any particular history threats that might have had a nationwide impact on grantee performance. The "pink ribbon" campaign initiated in the early 1990s by the Susan G. Komen Foundation, for example, is one of the largest national campaigns
to raise awareness for breast cancer (http://ww5.komen.org/uploadedFiles/Content_Binaries/The_
Pink_Ribbon_Story.pdf; accessed on March 14, 2014). Being more than two decades old, its effects
on women’s awareness of the importance of breast cancer screening were likely established prior to
our intervention point in 2006, and it would have had no bearing on measures regarding cervical
cancer in any event. In addition, even if other history threats exist, it would be surprising to find
that they had a positive impact only for grantees not meeting targets.

Conclusion
This study tests the effect of performance management processes on the performance of a
federal program using grants to deliver services in a decentralized environment. The findings
provide evidence that the implementation of the performance-based grants management
approach helped CDC improve grantee performance in some respects. Specifically, perfor-
mance-based grants management improved the performance of grantees that found the targets
to be challenging or had more room for improvement. In contrast, we observed no effects or
even declines in performance among grantees already meeting their targets or performing
close to maximum performance levels. This study adds to the growing contingency literature
that links the success of performance management systems to conditional factors by
examining the essential role of challenging goals in making performance management sys-
tems effective (Gerrish, 2016; Nielsen, 2014).
These results emphasize the possibility of greater performance improvement in low-perform-
ing agencies through the use of performance management regimes. Performance management
systems help identify the low performers and focus the attention of policy makers and managers
on these agencies. This increase in scrutiny may play a role in motivating low-performing
agencies to improve their performance. However, consistent with the ideas of threshold and ratchet
effects (Deming, 2000; Hood, 2012), once an agency reaches the targeted performance level and
is no longer identified as a poor performer, there may be little incentive to improve performance.
Political and public attention to low performers, thus, may serve as a positive impetus for such
organizations to improve their performance levels. These results may also reflect a negativity bias,
such that grant-makers would be more likely to attribute causal responsibility to poorly performing grantees and thus increase pressure on them to perform (Nielsen & Moynihan, 2016).
The results suggest ways for the CDC and other agencies funding grant programs to strengthen
performance-based grants management efforts. First, wherever practical, targets should be made
challenging. For instance, differential targets based on individual grantees’ past performance
could potentially encourage high performers to attain even higher levels of performance while
incentivizing poorer performers to meet existing targets. In addition, the incentive structure could
be revised to reward improvement beyond the targeted standards as well as the maintenance of already high
levels of performance.

Directions for Future Research


Given the centrality of grants in federal public health programs, as well as numerous other federal
and state service delivery programs, more research is needed on the efficacy of performance-based
grants management approaches with respect to performance improvement. Additional research
should focus on performance-based grant processes which are structured differently or which oper-
ate in different kinds of environments. Given the way the Early Detection Program is structured, it
is impossible to isolate whether the positive impact on performance observed for the low-perform-
ing grantees occurred because those grantees were inspired to achieve the targets, or because they
were motivated by the monetary award associated with the attainment of those targets. Research
that could compare the efficacy of setting challenging goals in managing a grants program, with and
without attendant monetary rewards, would be particularly helpful in this regard.
While this study shows that performance-based grants management processes can lead to
improved performance on the part of grantees operating in highly decentralized contexts such as
that characterizing the Early Detection Program, the same processes might be less effective when
applied to programs operating in even more complex environments. For example, another pro-
gram operated by CDC provides grants to state health departments and some large local health
departments with the goal of reducing the incidence and prevalence of sexually transmitted dis-
eases (STDs) such as syphilis, gonorrhea, and chlamydia. To make this program effective requires
the grantees to enlist active support of a wide range of other agencies such as health insurers,
community-based organizations, schools, and even jails in addition to STD clinics along with
private and nonprofit health care providers. Because results depend on the cooperation of such a diverse range of actors, most of whom lie well beyond the control of the grantees, and because grantees are often unable to offer those actors financial incentives, we might not expect grant awards tied to the achievement of challenging goals to have the same kind of impact they had in the Early Detection Program. Conversely, we might expect federal grant programs in other areas in which the immediate grantees provide services directly, such as state child support enforcement programs or local public transit systems, to be even more responsive to performance-based grants management processes. Furthermore, the indicators used by the Early Detection Program were readily quantifiable compared with those in areas such as social justice or human services; the effectiveness of performance-based grants management in those contexts needs further study.
Another area of investigation would focus on the cost-effectiveness of such performance-
based management programs. In the case of the Early Detection Program, the cost of administer-
ing the program at the federal level is substantial, and CDC has made significant investments in
training and technical assistance, staffing, hardware and software, and data management to sup-
port the performance-based grants management process. While this has contributed to the
success of performance-based grants management in fostering a culture of data use and strong
performance in the overall enterprise (Yancy et al., 2014), the marginal cost per unit of performance improvement has not been determined. Research in this area could help assess the worth
of the investment made in relation to the payoff generated.
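As a purely illustrative sketch of the kind of calculation such research might involve, and not a measure the program currently reports, the marginal cost per unit of performance improvement could be expressed as an incremental cost-effectiveness ratio:

\Delta C / \Delta E = (C_{\text{PBGM}} - C_{\text{baseline}}) / (E_{\text{PBGM}} - E_{\text{baseline}})

where \Delta C is the additional cost of administering the performance-based grants management process (training and technical assistance, staffing, hardware and software, and data management) relative to a counterfactual baseline, and \Delta E is the additional performance attributable to that process on a chosen indicator (for example, additional women screened). Estimating \Delta E credibly is, of course, the harder task and would depend on causal designs of the kind discussed above.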
Finally, research is needed, perhaps beginning with comparative case studies, to investigate the extent to which performance-based processes might be extended down through the networked structure to manage the overall program more effectively and improve its performance. A qualitative analysis could also help identify the initiatives that low-performing grantees adopted to improve their performance and determine to what extent the technical review by the CDC contributed to that improvement. The research question here
might be stated as follows: When grantees are, in turn, employing performance-based approaches to
manage their own programs, does this increase their contractors’ accountability to their principals,
incentivize them to work harder and smarter to attain targets or standards, and ultimately improve
their own performance and, in turn, help strengthen the performance of the overall program?

Declaration of Conflicting Interests


The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or
publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publi-
cation of this article: CDC provided financial support for this project via an Intergovernmental Personnel
Agreement with Georgia State University, #12IPA03129.

References
Andersen, S. C. (2008). The impact of public management reforms on student performance in Danish
schools. Public Administration, 86, 541-558.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Upper Saddle
River, NJ: Prentice Hall.
Barsky, A. (2008). Understanding the ethical cost of organizational goal-setting: A review and theory devel-
opment. Journal of Business Ethics, 81, 63-81.
Beam, D. R., & Conlan, T. J. (2002). Grants. In L. Salamon (Ed.), The tools of government: A guide to the
new governance (pp. 340-380). Oxford, UK: Oxford University Press.
Behn, R. D. (2003). Why measure performance? Different purposes require different measures. Public
Administration Review, 63, 586-606.
Bouckaert, G., & Balk, W. (1991). Public productivity measurement: Diseases and cures. Public Productivity
& Management Review, 15, 229-235.
Boyne, G. A., & Chen, A. A. (2007). Performance targets and public service improvement. Journal of
Public Administration Research and Theory, 17, 455-477.
Boyne, G. A., & Gould-Williams, J. (2003). Planning and performance in public organizations an empirical
analysis. Public Management Review, 5, 115-132.
Boyne, G. A., James, O., John, P., & Petrovsky, N. (2011). Top management turnover and organizational
performance: A test of a contingency model. Public Administration Review, 71, 572-581.
Colby, D. C., & Pickell, S. G. (2010). Investing for good: Measuring nonfinancial performance. Community
Development Investment Review, 6, 64-68.
Crossley, C. D., Cooper, C. D., & Wernsing, T. S. (2013). Making things happen through challenging goals:
Leader proactivity, trust, and business-unit performance. Journal of Applied Psychology, 98, 540-549.
DeGroff, A., Cheung, K., Dawkins-Lyn, N., Hall, M. A., Melillo, S., & Glover-Kudon, R. (2015). Identifying promising practices for evaluation: The National Breast and Cervical Cancer Early Detection Program. Cancer Causes & Control, 26, 767-774.
DeGroff, A., Schooley, M., Chapel, T., & Poister, T. H. (2010). Challenges and strategies in applying
performance measurement to federal public health programs. Evaluation and Program Planning, 33,
365-372.
Deming, W. E. (2000). The new economics: For industry, government, education. Cambridge, MA: The
MIT Press.
El-Khawas, E. (1998). Strong state action but limited results: Perspectives on university resistance.
European Journal of Education, 33, 317-330.
Erez, M., & Zidon, I. (1984). Effect of goal acceptance on the relationship of goal difficulty to performance.
Journal of Applied Psychology, 69, 69-78.
Fowler, S., Webber, A., Cooper, B. S., Phimister, A., Price, K., Carter, Y., . . . Stone, S. P. (2007). Successful
use of feedback to improve antibiotic prescribing and reduce clostridium difficile infection: A con-
trolled interrupted time series. Journal of Antimicrobial Chemotherapy, 59, 990-995.
Gerrish, E. (2016). The impact of performance management on performance in public organizations: A
meta-analysis. Public Administration Review, 76, 48-66.
Goerdel, H. T. (2006). Taking initiative: Proactive management and organizational performance in net-
worked environments. Journal of Public Administration Research and Theory, 16, 351-367.
Heinrich, C. J. (2007). False or fitting recognition? The use of high performance bonuses in motivating
organizational achievements. Journal of Policy Analysis and Management, 26, 281-304.
Heinrich, C. J., & Choi, Y. (2007). Performance-based contracting in social welfare programs. The American
Review of Public Administration, 37, 409-435.
Heinrich, C. J., & Lynn, L. E. (2001). Means and ends: A comparative study of empirical methods for investi-
gating governance and performance. Journal of Public Administration Research and Theory, 11, 109-138.
Heinrich, C. J., & Marschke, G. (2010). Incentives and their dynamics in public sector performance man-
agement systems. Journal of Policy Analysis and Management, 29, 183-208.
Hood, C. (2012). Public management by numbers as a performance-enhancing drug: Two hypotheses.
Public Administration Review, 72(s1), S85-S92.
Hvidman, U., & Andersen, S. C. (2014). Impact of performance management in public and private organi-
zations. Journal of Public Administration Research and Theory, 24, 35-58.
Jennings, E., & Haist, M. (2004). Putting performance measurement in context. In P. W. Ingraham & L. E. Lynn, Jr. (Eds.), The art of governance: Analyzing management and administration (pp. 152-194). Washington, DC: Georgetown University Press.
Koning, P., & Heinrich, C. J. (2013). Cream-skimming, parking and other intended and unintended effects of
high-powered, performance-based contracts. Journal of Policy Analysis and Management, 32, 461-483.
Latham, G. P., & Locke, E. A. (2006). Enhancing the benefits and overcoming the pitfalls of goal setting.
Organizational Dynamics, 35, 332-340.
Linderman, K., Schroeder, R. G., & Choo, A. S. (2006). Six sigma: The role of goals in improvement teams.
Journal of Operations Management, 24, 779-790.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting & task performance. Upper Saddle River,
NJ: Prentice Hall.
Marvel, M. K., & Marvel, H. P. (2008). Government-to-government contracting: Stewardship, agency, and
substitution. International Public Management Journal, 11, 171-192.
Meier, K. J., & O’Toole, L. J. (2003). Public management and educational performance: The impact of
managerial networking. Public Administration Review, 63, 689-699.
Miller, E. A., Doherty, J., & Nadash, P. (2013). Pay for performance in five states: Lessons for the nursing
home sector. Public Administration Review, 73(s1), S153-S163.
Miner, J. B. (2015). Organizational behavior 1: Essential theories of motivation and leadership. London,
UK: Routledge.
Moynihan, D. P. (2008). The dynamics of performance management: Constructing information and reform.
Washington, DC: Georgetown University Press.
Murphy, K. J. (2000). Performance standards in incentive contracts. Journal of Accounting & Economics,
30, 245-278.
Mussweiler, T., & Strack, F. (2000). The “relative self”: Informational and judgmental consequences of
comparative self-evaluation. Journal of Personality and Social Psychology, 79, 23-38.
Nelson, H. D., Tyne, K., Naik, A., Bougatsos, C., Chan, B. K., & Humphrey, L. (2009). Screening for breast cancer: An update for the U.S. Preventive Services Task Force. Annals of Internal Medicine, 151, 727-737.
Nielsen, P. A. (2014). Performance management, managerial authority, and public service performance.
Journal of Public Administration Research and Theory, 24, 431-458. doi:10.1093/jopart/mut025.
Nielsen, P. A., & Moynihan, D. P. (2016). How do politicians attribute bureaucratic responsibility for performance? Negativity bias and interest group advocacy. Journal of Public Administration Research and Theory. doi:10.1093/jopart/muw060
Ordóñez, L. D., Schweitzer, M. E., Galinsky, A. D., & Bazerman, M. H. (2009). Goals gone wild:
The systematic side effects of overprescribing goal setting. Academy of Management Perspectives,
23(1), 6-16.
O’Toole, L. J., & Meier, K. J. (2015). Public management, context, and performance: In quest of a more
general theory. Journal of Public Administration Research and Theory, 25, 237-256. doi:10.1093/
jopart/muu011.
Patrick, B. A., & French, P. E. (2011). Assessing new public management’s focus on performance measure-
ment in the public sector: A look at no child left behind. Public Performance & Management Review,
35, 340-369.
Penfold, R. B., & Zhang, F. (2013). Use of interrupted time series analysis in evaluating health care quality
improvements. Academic Pediatrics, 13(6 Suppl.), S38-S44.
Poister, T. H., Pasha, O. Q., & Edwards, L. H. (2013). Does performance management lead to better outcomes? Evidence from the U.S. public transit industry. Public Administration Review, 73, 625-636.
Salamon, L. M. (2002). The tools of government: A guide to the new governance. Oxford, UK: Oxford
University Press.
Sun, R., & Van Ryzin, G. G. (2014). Are performance management practices associated with better pub-
lic outcomes? Empirical evidence from New York public schools. The American Review of Public
Administration, 44, 324-338. doi:10.1177/0275074012468058.
Thorgren, S., & Wincent, J. (2013). Passion and challenging goals: Drawbacks of rushing into goal-setting
processes. Journal of Applied Social Psychology, 43, 2318-2329.
Vesco, K. K., Whitlock, E. P., Eder, M., Lin, J., Burda, M. B. U., Senger, C. A., . . . Zuber, S. (2011). Screening for cervical cancer: A systematic evidence review for the U.S. Preventive Services Task Force. The Lancet Oncology, 12, 663-672.
Wagner, A. K., Soumerai, S. B., Zhang, F., & Ross-Degnan, D. (2002). Segmented regression analy-
sis of interrupted time series studies in medication use research. Journal of Clinical Pharmacy and
Therapeutics, 27, 299-309.
Walker, R. M., Damanpour, F., & Devece, C. A. (2011). Management innovation and organizational perfor-
mance: The mediating effect of performance management. Journal of Public Administration Research
and Theory, 21, 367-386.
Whittington, J. L., Goodwin, V. L., & Murray, B. (2004). Transformational leadership, goal difficulty, and job
design: Independent and interactive effects on employee outcomes. The Leadership Quarterly, 15, 593-606.
Wright, B. E. (2004). The role of work context in work motivation: A public sector application of goal
and social cognitive theories. Journal of Public Administration Research and Theory, 14, 59-78.
Wright, B. E. (2007). Public service and motivation: Does mission matter? Public Administration Review,
67, 54-64.
Yancy, B., Royalty, J. E., Marroulis, S., Mattingly, C., Benard, V. B., & DeGroff, A. (2014). Using data to
effectively manage a national screening program. Cancer, 120, 2575-2583.

Author Biographies
Theodore H. Poister is professor of Public Management in the Andrew Young School of Policy Studies at
Georgia State University. His research focuses on performance management and stakeholder feedback pro-
cesses in government.
Obed Pasha is a lecturer at the School of Public Policy, University of Massachusetts Amherst. His research
interests focus on public management and the use of performance management in the public sector.
Amy DeGroff is a senior health scientist at the Centers for Disease Control and Prevention (CDC), Division
of Cancer Prevention and Control. Dr. DeGroff leads a team responsible for evaluating CDC’s National
Breast and Cervical Cancer Early Detection Program and the Colorectal Cancer Control Program.
Janet Royalty, MS, is a senior analyst at the Centers for Disease Control and Prevention and provides data
management support for two large cancer screening and control programs with an emphasis on using data
to monitor and direct program activities.
