Article
American Review of Public Administration, 1–19
© The Author(s) 2017
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0275074016685804
journals.sagepub.com/home/arp

The Impact of Performance-Based Grants Management on Performance: The Centers for Disease Control and Prevention's National Breast and Cervical Cancer Early Detection Program
Abstract
Performance-based grants management is a strategy used by public agencies to improve
performance and strengthen accountability by connecting annual award amounts to performance
information. This study evaluates the impacts of a performance-based grants management
process implemented by the U.S. Centers for Disease Control and Prevention to strengthen
the effectiveness of its National Breast and Cervical Cancer Early Detection Program. The study
uses panel data and interrupted time-series analysis over 10 years for 51 grantees. Results show
partial and conditional effectiveness of the performance-based grants management process in
strengthening performance. In particular, the implementation of the performance-based grants
management system consistently improved the performance of those grantees for whom the
targets were challenging. While prior research has found, in some cases, evidence of a positive
impact of performance management practices in improving programs delivered directly by public
organizations at the local level, this study examines the performance management–performance
relationship in a more challenging context of a federal grants program delivered through a highly
decentralized system.
Keywords
performance-based grants management, public health care program evaluation, goal-setting theory
Corresponding Author:
Theodore H. Poister, Andrew Young School of Public Studies, Georgia State University, 114 Marietta Street NW,
Atlanta, GA 30302, USA.
Email: tpoister@gsu.edu
Literature Review
The question of whether performance-based approaches to managing grant programs can be
effective in improving their performance is particularly salient given the magnitude of grants
programming in the United States. In Fiscal Year 2011, for example, the last year of the period
examined in this research, Federal grants to state and local governments in areas such as health,
education, transportation, housing and community development, agriculture, energy and the
environment, and social services totaled US$607 billion. This represents 17% of total federal
outlays and approximately 50% of federal spending on discretionary programs, and these grants
accounted for 25% of spending by state and local governments that year (https://www.cbo.gov/
publication/43967, accessed on January 4, 2016).
Although some recent studies examine the impacts of performance-based bonuses (Heinrich,
2007) and performance-based contracting for service delivery (e.g., Heinrich & Choi, 2007;
Koning & Heinrich, 2013; Marvel & Marvel, 2008; Miller, Doherty, & Nadash, 2013), there is a
lack of studies addressing the impact of performance-based approaches to managing programs
that operate primarily through grants to other governmental units. A
study by El-Khawas (1998) on the performance-based grants management system adopted by
Tennessee for its 24 higher education institutions serves as an exception in this regard. Through
a qualitative analysis, El-Khawas found several benefits to the system, including increased politi-
cal support and management efficiency. However, that study did not address the system’s impact
on actual program performance.
Prior quantitative research on the impact of performance management on performance
improvement has generated mixed results to date, with some studies (Andersen, 2008; Boyne &
Gould-Williams, 2003; Patrick & French, 2011) finding negative or no effects of performance
management on actual performance improvement and others (Boyne & Chen, 2007; Heinrich &
Lynn, 2001; Hvidman & Andersen, 2014; Nielsen, 2014; Poister, Pasha, & Edwards, 2013; Sun
& Van Ryzin, 2014; Walker, Damanpour, & Devece, 2011) finding positive effects of perfor-
mance management on organizational performance. Gerrish (2016) argues that the impact of
performance management systems is conditional and depends on other factors such as bench-
marking and bottom–up decision making.
One limitation of this stream of research is that all these studies focus on the relationship of
performance management practices to performance in the context of the direct delivery of service
at the local government level, specifically in the fields of municipal services, local schools, job
training, and public transit. These settings are typically characterized by direct control over per-
sonnel, the use of resources, and operations, and there is a high level of accountability over ser-
vice delivery and customer service, all of which facilitates effective performance management.
On the other hand, programs that are managed indirectly through grants tend by definition to
operate in more complex, decentralized environments, arrangements that depend more heavily
on interactions between managers and “environmental actors” who are not direct-line subordi-
nates (Meier & O’Toole, 2003). Such decentralized systems tend not to lend themselves to con-
trol and accountability, and they may well be characterized by differences in values and priorities
between grantor and grantee agencies as well as between program providers and service delivery
agencies. In these settings, grantees have considerable discretion over priorities and program
operation, and the principal-agent problem may be more difficult to overcome (Beam & Conlan,
2002). All of this makes it more challenging to attain high levels of efficiency and effectiveness
in public programs provided through grants (Salamon, 2002).
Similarly, implementing performance management processes, which require a level of
accountability and control, is likely more challenging as well. Jennings and Haist (2004) have
hypothesized that when organizations with relatively less control over program execution institute
performance management systems, those systems are less likely to be effective. Indeed,
O’Toole and Meier (2015) argue that operating in a more heterogeneous and dispersed task
environment will reduce not only the likelihood of program success but also the marginal impact of
externally directed managerial actions. Thus, in a highly complex environment such as that
surrounding the Early Detection Program, in which numerous and varied kinds of organizations are
involved, each with some degree of influence on how the program operates and services are
actually delivered, we might not expect a performance-based grants management process to gen-
erate significant improvements in overall performance.
Thus, the purpose of this research is to examine the efficacy of performance-based approaches
to managing grants in a heavily decentralized environment, using CDC’s Early Detection
Program as a case in point. In doing so, we respond to the call by O’Toole and Meier (2015) to
elaborate and test theory regarding the impact of public management practices on performance in
different contexts. This study also contributes to the growing contingency literature (see Gerrish,
2016; Nielsen, 2014) that proposes that the effectiveness of performance management depends
on implementation factors such as benchmarking, bottom–up decision making, and employee
discretion. We argue that challenging goals are yet another important factor essential to
the effectiveness of these systems.
operational matters such as ensuring proper staffing levels and adhering to appropriate protocols
in delivering services diminishes, as grantees will themselves ensure that they do their
work properly to achieve the established targets.
Feedback in the form of performance information plays an important mediating role between
goal setting and performance improvement (Wright, 2004). And, comparing results across mul-
tiple grantees can engender competition among them as the desire to outperform others and be
recognized as high performers can motivate grantees to strengthen their performance (Miner,
2015). In addition, when grantees recognize that their performance is being monitored by higher
level authorities and resourcing agencies, they are likely to be further motivated to improve per-
formance. Indeed, grantees may promote their accomplishments by reporting on targets that have
been attained and instances in which they have improved performance, in part to justify further
funding (Behn, 2003).
Performance-based grants management can also help grant-making organizations to mitigate
principal-agent problems (Moynihan, 2008). When grantees recognize that future awards are at
least partially dependent on actual performance, they are more likely to align their own goals and
activities with the priorities of the funding agency (Murphy, 2000). Essentially, then, by setting
appropriate targets, monitoring progress toward these goals, and rewarding the achievement of
targets, performance-based grants management allows principals to impose their values, priori-
ties, goals, and objectives on their agents (Bouckaert & Balk, 1991; Colby & Pickell, 2010). With
respect to the principal-agent paradigm, then, performance management systems have tradition-
ally emphasized accountability, control, and cost-effectiveness (Heinrich & Marschke, 2010).
More recently, however, the use of these systems, including performance-based grants manage-
ment, has broadened to focus on other purposes as well, such as promoting learning and motivat-
ing improved performance among grantees (Behn, 2003; Moynihan, 2008).
The positive impact of performance management systems, however, is not universal and may
depend on the presence or absence of other managerial practices (Gerrish, 2016; Nielsen, 2014).
In the following section, we discuss challenging goals as an important condition for the success
of these systems in the grant-making context.
on the ends rather than means (Barsky, 2008). Instead of increasing job satisfaction, challenging
goals have a detrimental impact on employee self-efficacy in some cases. Employees may come to
question their own abilities and intelligence when a task proves difficult, which may in turn lead to
lower engagement and commitment toward future tasks (Mussweiler & Strack, 2000).
On balance, however, both theory and research suggest that setting goals that are challenging
as well as realistic and achievable can contribute to improved performance in grants management
settings as is the case with respect to direct service delivery and contract management processes.
Extensive research in the corporate sector has shown that setting challenging goals has a positive
impact on motivation, organizational commitment, and performance of employees (Whittington,
Goodwin, & Murray, 2004; see Latham & Locke, 2006, for examples). Hence, setting challeng-
ing goals is now considered a hallmark of good change-oriented leadership in business (Crossley,
Cooper, & Wernsing, 2013; Thorgren & Wincent, 2013). In the context of public administration,
Wright (2007) administered a cross-sectional survey of 2,200 employees working with a large
New York State agency and found that employees perceiving their goals as challenging reported
a higher level of motivation. The present study takes this research further by using objective
measures of organizational performance over a period of 11 years to determine whether challeng-
ing targets lead to improved performance.
clinical service delivery. Eligibility for the Early Detection Program is restricted to low-income,
uninsured, or underinsured women, with an emphasis on reaching women among whom disease is more
prevalent (i.e., women never or rarely screened for cervical cancer, women aged 50 and older for breast
cancer). Consequently, grantees conduct public education and outreach to educate women about
screening and recruit women eligible for the program. To ensure that patients with abnormal
screening results complete diagnostic testing, grantees also implement tracking, follow-up, and
patient navigation activities. Research has demonstrated that completed screening and diagnostics
leads to the prevention of cervical cancer (through removal of precancerous lesions) and early
detection of breast and cervical cancers when treatment is more effective (Nelson et al., 2009;
Vesco et al., 2011). This, in turn, should lead to decreased breast and cervical cancer mortality.
more challenging targets (e.g., 90% or 95% of women with abnormal breast cancer screening
results must receive complete diagnostic work-up within 60 days rather than the current 75%).
The targets that were established for the Early Detection Program, however, were based on strong
theoretical and programmatic reasoning. The CDC consulted with all grantees to ensure that the
indicators and targets were practical, an important tenet of developing measures. During the
study period, CDC typically sponsored an annual business meeting for all grantees where con-
cerns about the performance measures and targets could be discussed. For instance, CDC recog-
nizes that, while desirable, follow-up on every woman with abnormal screening results is
impractical given that some women may become insured, seek follow-up elsewhere, or move to
a different state or country.
Even though the consultation process resulted in lower targets than might be considered ideal,
such collaboration was necessary to ensure the internalization of the measures by grantees, an
essential condition of goal-setting theory. And, in fact, prior research has shown that Early
Detection Program grantees perceive the measures to be meaningful, fair, and closely in line with
overall goals for the program (Self-Cited, 2014). Furthermore, CDC funds often represent only a
part of grantees’ overall cancer prevention budget and participation in the Early Detection
Program is voluntary. Thus, CDC was concerned that setting more aggressive targets might push
some grantees or providers to opt out of the program. Finally, there are valid clinical reasons to
keep targets lower for some measures in the aggregate to leave room for clinical judgment on
individual cases. Based on grantee concerns, CDC has, in rare cases, modified how a particular
indicator is specified to adjust for circumstances outside of program control. Although the per-
formance measures have been modified somewhat over time to better reflect program priorities
and performance (see Table 5), the system has maintained a consistent, concise, and well-defined
set of priorities that can be accurately measured using reliable data sources.
Implementation of performance-based grants management in early 2006 involved a robust
monitoring and feedback process following each data submission. A summary report of the core
performance measures is prepared and discussed by CDC and individual grantees as part of semi-
annual data reviews. Action items for follow-up by the grantee are generated and a narrative
addressing their resolution is included with the next data submission (Self-Cited, 2014). While
the reports are not shared publicly, grantees have access to aggregate results, which allows them
to assess their own performance on each measure compared with others. Grantee performance on
the measures has generally been high over time, making it challenging to separate better perform-
ers from poor ones.
CDC funding for these grants varied somewhat over the first 8 years following implemen-
tation of the performance management system, which is the time period examined in this
study, and funding in FY 2012 was just slightly higher than it was in FY 2005. Although
annual funding decisions for the Early Detection Program grantees are largely dependent on
the overall amount of funds available to distribute and current funding levels, decision mak-
ing is also informed by a process that uses a combination of performance and equity mea-
sures to establish a recommended funding range for each grantee. To establish these ranges,
grantees are placed in performance groups based on a scored assessment of clinical care
(whether the grantee meets the core performance measures), program management (techni-
cal review of grantee continuation applications), and fiscal stewardship (spend rate). They
are also placed in high-to-low equity groups by comparing their current funding level to an
estimated equitable distribution of Early Detection Program funds across grantees based on
the number of program-eligible women in their jurisdiction to inject an indication of need
into the allocation process. Performance and equity groups are then combined and each level
is assigned a recommended funding range, with the intention to reward high performers and
move toward a more equitable distribution of resources.
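The grouping logic just described can be sketched in a few lines of Python. This is an illustrative reconstruction only: the scoring weights, group cutoffs, and funding-range widths below are hypothetical assumptions, since CDC's actual rubric is not published in this article.

```python
# Illustrative sketch of the funding-range process described above.
# All weights, cutoffs, and ranges are hypothetical, not CDC's actual values.

def performance_group(clinical: int, management: int, fiscal: int) -> str:
    """Combine scored assessments (each 0-100) into a performance group."""
    score = 0.5 * clinical + 0.3 * management + 0.2 * fiscal
    if score >= 80:
        return "high"
    if score >= 60:
        return "medium"
    return "low"

def equity_group(current_award: float, equitable_share: float) -> str:
    """Compare current funding with the estimated equitable distribution."""
    ratio = current_award / equitable_share
    if ratio < 0.9:
        return "underfunded"
    if ratio > 1.1:
        return "overfunded"
    return "equitable"

def recommended_range(perf: str, equity: str) -> tuple:
    """Map combined groups to a recommended funding change (percent range)."""
    base = {"high": 5, "medium": 0, "low": -5}[perf]
    tilt = {"underfunded": 5, "equitable": 0, "overfunded": -5}[equity]
    change = base + tilt
    return (change - 2, change + 2)  # a +/-2 point range around the midpoint

# A high performer currently funded below its equitable share is
# recommended for an increase, rewarding performance and improving equity.
lo, hi = recommended_range(performance_group(90, 85, 80),
                           equity_group(800_000, 1_000_000))
```

The point of the two-dimensional lookup is that performance and equity pull on the recommendation independently, matching the stated intention to reward high performers while moving toward a more equitable distribution.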
Method
As discussed above, consistent with goal-setting theory, CDC establishes targets in consultation
with grantees to ensure that targets are internalized, and provides semiannual feedback
reports to all grantees. While the conditions of internalization and feedback are met for all the
grantees, the criterion of setting challenging targets was not met consistently across grantees. On
the other hand, since most Early Detection Program grantees were already meeting the CDC
targets at the time the performance-based grants management program was implemented, it can
be argued that, for the grantees not already meeting them, the targets were not only challenging
but also achievable. Thus, to test the impact of the performance-based grants process, we
separated the grantees into two groups, the first group consisting of the grantees that were not
already meeting targets on a particular indicator, and the second group consisting of grantees that
were already meeting targets when the performance-based grants management program was
implemented. The impact of the performance-based grants management program on grantee per-
formance was analyzed separately for these two groups.
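The grouping step amounts to a simple split on baseline performance relative to the target. The sketch below illustrates it with hypothetical grantee values (the target value and data are invented for illustration):

```python
# Split grantees by whether they met the target on a given indicator
# at the intervention point (early 2006). Values are hypothetical.

TARGET = 75.0  # illustrative target, in percentage points

baseline = {"A": 82.1, "B": 71.4, "C": 77.0, "D": 68.9}

meeting = [g for g, perf in baseline.items() if perf >= TARGET]
not_meeting = [g for g, perf in baseline.items() if perf < TARGET]
```

The two resulting groups are then modeled separately, so that the estimated intervention effects are not averaged across grantees for whom the target was and was not challenging.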
The methodology used in this research follows earlier studies such as Wagner, Soumerai,
Zhang, and Ross-Degnan (2002) and Fowler et al. (2007), which analyze panel data using inter-
rupted time-series analysis for policy interventions in health care. This study compares perfor-
mance trends on the core performance indicators before and after the formal introduction of the
system in early 2006 to evaluate the effectiveness of the Early Detection Program’s performance-
based grants management system in strengthening grantee performance. The analysis covers
twenty-three 6-month periods running from June 2000 through June 2011. Fifty-one of the 68
grantees were included in the analysis. Given the small numbers of women screened and unique
contextual issues, tribal and territorial grantees were excluded while all 50 states and the District
of Columbia were included.
An autoregressive fixed-effects model was adopted, where the standard errors of each grantee
state (for the 23 time periods) were clustered to produce unbiased standard errors and correctly
sized confidence intervals (Penfold & Zhang, 2013). Stata v12.0 was used to run a generalized
least squares (GLS) regression with robust standard errors owing to the high correlation between
observations over time belonging to the same grantee, as well as high heteroscedasticity due to
differences in variability among the grantees. The GLS regression model to estimate the level and
trend in mean performance of grantees before the performance-based grants management program
and the changes in level and trend following the program is specified as follows:

PerfY_it = β0 + β1(time_it) + β2(intervention_it) + β3(time after intervention_it) + β4(percent uninsured_it) + β5(award_it) + ς_i + δ_t + ε_it
Here, PerfYit is the mean performance of grantee i in 6-month period t, and time is a continu-
ous variable indicating the 6-month period at time t for grantee i from the start of the observation
period in June 2000 (time = −11) to the last observation in June 2011 (time = 11). Intervention is
a dummy variable for grantee i indicating whether the observation is occurring before (interven-
tion = 0) or after (intervention = 1) the performance-based grants management program was
implemented in January 2006 (time = 0). Time after intervention represents an interaction of time
and intervention. Percent uninsured is a control for the percentage of the population which
remained uninsured within grantee state i in a given year. The data on percent of uninsured popu-
lation by state (annual) is derived from the United States Census Bureau’s Annual Social and
Economic Supplement. Award refers to the amount of money CDC awarded to each grantee
annually and is derived from the CDC’s own records.
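The coding of the time, intervention, and interaction variables can be sketched as follows (this is an illustrative reconstruction in Python, not the authors' Stata code):

```python
# Construct the segmented-regression variables for the 23 six-month periods.
# time runs from -11 (June 2000) to 11 (June 2011); the intervention dummy
# switches on at time 0 (January 2006).

rows = []
for time in range(-11, 12):
    intervention = 1 if time >= 0 else 0
    time_after = time * intervention  # interaction: time after intervention
    rows.append((time, intervention, time_after))
```

Coding the interaction as time × intervention means it is zero for every pre-intervention period, so its coefficient isolates the change in slope after implementation.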
In this model, β0 estimates the mean performance of grantees at time 0; β1 estimates the
change in the mean performance of grantees that occurs with each 6-month period before the
intervention (baseline trend); β2 estimates the level change in the mean performance of grantees
immediately after the intervention; β3 estimates the change in the trend in the mean performance
of grantees after the performance-based grants management program, relative to the trend before
the intervention.
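Under this specification, the intervention shifts the fitted line by β2 at time 0 and changes its slope by β3 thereafter. The sketch below makes this concrete; the coefficient values are hypothetical, chosen only to illustrate how the terms combine:

```python
# Predicted mean performance under the segmented-regression specification,
# ignoring the controls and fixed effects. Coefficient values are hypothetical.

b0, b1, b2, b3 = 80.0, -0.5, 4.5, 0.9  # intercept, baseline trend,
                                       # level change, trend change

def predicted(time: int) -> float:
    intervention = 1 if time >= 0 else 0
    return b0 + b1 * time + b2 * intervention + b3 * time * intervention

# The jump at the intervention point equals b2 ...
level_change = predicted(0) - (b0 + b1 * 0)
# ... and each post-intervention period adds b1 + b3 rather than b1.
post_slope = predicted(2) - predicted(1)
```

With these illustrative values, a declining baseline trend (b1 < 0) is reversed after the intervention because b1 + b3 > 0, which is the pattern the Results section reports for grantees not meeting their targets.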
This model controls for unobserved contextual factors as well as baseline level and trend, a
major strength of interrupted time-series analysis. ςi controls for the unobserved time-invariant
contextual covariates using fixed-effects at the grantee-level and δt represents the year fixed-
effects controlling for the system-level changes due to time. Finally, εit represents the unob-
served error. In addition to the statistical analysis, time-series graphs were constructed to facilitate
visual inspection of the data. In reporting the results, the 6-month time periods are denoted on the
horizontal axis while the mean performance of the grantees for a particular indicator is denoted
on the vertical axis. A horizontal reference line represents the CDC target for each indicator while
a vertical reference line identifies the intervention point when performance-based grants man-
agement was implemented.
Figure 1. Priority Population for Cervical Cancer Screening for grantees meeting and not meeting the
target at the intervention point.
Table 2. Generalized Least Square Regression Results for the Grantees Meeting and Not Meeting
Target for Priority Population for Cervical Cancer Screening Prior to the Implementation of
Performance-Based Grants Management Intervention.
Note. Unstandardized coefficients; Huber–White robust standard errors in parentheses allow for unrestricted error
correlation within each agency.
*p < .05. **p < .01. ***p < .001, two-tailed test.
level. This suggests a maturation of performance for these grantees. However, the coefficients for
the change in level (β2) and change in trend after the intervention (β3) are not significant, which
suggests that the hypothesized impact of the performance-based grants management program on
the performance of grantees already meeting the target is not supported.
Model 4 in Table 2 shows the regression results for grantees not meeting the indicator target
at the intervention point. Although the coefficient for baseline trend (β1) demonstrates a decreas-
ing trend of performance, these estimates are not statistically significant. On the other hand, the
coefficients for change in level (β2) and change in trend after the intervention (β3) are not only
positive but are also statistically significant at .05 and .01 levels, respectively. This suggests that
the performance-based grants management system improved the mean performance of these
grantees (ones not meeting the target) by 4.527 percentage points immediately following the
intervention and 0.894 percentage points every 6 months thereafter.
For the Priority Population for Breast Cancer Screening indicator, Figure 2 shows the impact
of the performance-based grants management system on the mean performance of grantees meet-
ing (solid line) and not meeting (dotted line) the target at the intervention point. Similar to Figure
1, we observe a rather stable increment in the mean performance (maturation) for grantees meet-
ing the target before and after the intervention point. The trend for the grantees not meeting the
target shows a substantial negative trend until the intervention point, but then reflects a substan-
tial improvement in mean performance going forward.
In Model 2 of Table 3, the coefficients for baseline trend (β1), change in level (β2), and change
in trend (β3) are not statistically significant. This shows that the slightly increasing mean perfor-
mance of grantees meeting their targets for the Priority Population for Breast Cancer Screening
indicator remained quite consistent over the entire 23 periods and was not impacted by the per-
formance-based grants management system. A positive coefficient for percent uninsured (β4),
however, is statistically significant. This shows that a one percentage point increase in the unin-
sured population leads to slightly more than a one percentage point increase in the mean perfor-
mance on this indicator.
In contrast to the trends for grantees that were already meeting the target, the coefficient for
baseline trend (β1) for grantees not meeting targets suggests a depreciating and significant perfor-
mance trend (see Model 4 of Table 3). The change in level (β2) and trend for the mean perfor-
mance (β3) of grantees not meeting the target for this indicator, however, are positive and
Figure 2. Priority Population for Breast Cancer Screening for grantees already meeting and not meeting
the target at the intervention point.
Table 3. Generalized Least Square Regression Results for the Grantees Meeting and Not Meeting
Target for Priority Population for Breast Cancer Screening Prior to the Implementation of Performance-
Based Grants Management Intervention.
Note. Unstandardized coefficients; Huber–White robust standard errors in parentheses allow for unrestricted error
correlation within each agency.
*p < .05. **p < .01. ***p < .001, two-tailed test.
Table 4. Generalized Least Square Regression Results for the Priority Population for Cervical Cancer
Screening and Priority Population for Breast Cancer Screening.
Note. Unstandardized coefficients. Results for β0 through β4 are suppressed for brevity. Huber–White robust
standard errors in parentheses allow for unrestricted error correlation within each agency.
*p < .05. **p < .01. ***p < .001, two-tailed test.
statistically significant at .01 level. These results show that performance increased by 6.877
percentage points immediately after the intervention and continued to rise by 1.448 percentage
points every 6 months following implementation of the performance-based grants management
program.
These findings support our hypothesis that performance-based grants management improves
performance only when targets are challenging. For both indicators, the mean pre-intervention
performance trend was negative for grantees not meeting the target but improved in both level
and trend after the performance-based grants management system was implemented.
Robustness Check
In Table 4, we include all the grantees in the same model as a robustness check for the results in
Tables 2 and 3. Here again, we see that the grantees not meeting their targets show a different
performance trend compared to the agencies that were already reaching their targets.
In Model 1 for Priority Population for Cervical Cancer Screening, the grantees not meeting
their targets show a decrease of 0.657 percentage points with each passing 6-month period prior
to the intervention compared with the agencies meeting targets (β5), whereas performance on this
measure increases by 1.068 percentage points more for grantees not meeting targets than for the
grantees meeting their targets in the postintervention period (β7). Similarly, for Priority Population
for Breast Cancer Screening in Model 3, the performance of grantees not meeting targets
improved by 5.871 percentage points at the intervention point (β6) and by 1.352 percentage points
more per postintervention period than the grantees meeting targets (β7).
Table 5. Impact of Performance-Based Grants Management on Early Detection Program Performance
Measures.
To further confirm these results, we analyzed the other nine indicators where almost all
grantees were meeting the targets at the intervention point. The analysis showed similar results,
and performance-based grants management was found to have either no effect or a negative
effect on the performance of these indicators. Summary results for all 11 indicators are pre-
sented in Table 5.
Discussion
These results tentatively support the argument of goal-setting theory that establishing challeng-
ing goals, representing a gap between expected and current performance levels, is helpful for
performance improvement. Given the relative lack of control and authority that characterizes
grant programs, we might expect performance management practices to be less effective
(Jennings & Haist, 2004), but our results indicate nevertheless that setting challenging goals and
monitoring grantee performance against them can lead to performance improvement. These findings
are particularly noteworthy given recent theorizing by O’Toole and Meier (2015) that the
marginal impact of management interventions, such as the Early Detection Program’s
performance-based grants management process, will be reduced in complex environments.
The Early Detection Program itself embodies numerous complexities such as federal-state rela-
tions, a focus on highly vulnerable women who are uninsured and have low income, and grantees’
reliance on a large number and wide variety of health care providers to deliver services. Implementation
Conclusion
This study tests the effect of performance management processes on the performance of a
federal program using grants to deliver services in a decentralized environment. The findings
provide evidence that the implementation of the performance-based grants management
approach helped CDC improve grantee performance in some respects. Specifically, perfor-
mance-based grants management improved the performance of grantees that found the targets
to be challenging or had more room for improvement. In contrast, we observed no effects or
even declines in performance among grantees already meeting their targets or performing
close to maximum performance levels. This study adds to the growing contingency literature
that links the success of performance management systems to conditional factors by
examining the essential role of challenging goals in making performance management systems
effective (Gerrish, 2016; Nielsen, 2014).
These results emphasize the possibility of greater performance improvement in low-perform-
ing agencies through the use of performance management regimes. Performance management
systems help identify the low performers and focus the attention of policy makers and managers
on these agencies. This increase in scrutiny may play a role in motivating low-performing
agencies to improve their performance. However, consistent with the ideas of threshold and ratchet
effects (Deming, 2000; Hood, 2012), once an agency reaches the targeted performance level and
is no longer identified as a poor performer, there may be little incentive to improve performance.
Political and public attention to low performers, thus, may serve as a positive impetus for such
organizations to improve their performance levels. These results may also reflect a negativity bias, such that grant-makers are more likely to attribute causal responsibility to poorly performing grantees and thus increase pressure on them to perform (Nielsen & Moynihan, 2016).
The results suggest ways for the CDC and other agencies funding grant programs to strengthen
performance-based grants management efforts. First, wherever practical, targets should be made
challenging. For instance, differential targets based on individual grantees’ past performance
could potentially encourage high performers to attain even higher levels of performance while
incentivizing poorer performers to meet existing targets. In addition, the incentive structure could be revised to reward both improvement beyond the targeted standards and the maintenance of already high levels of performance.
Further research should also examine the costs of such a system: although prior evaluation points to the success of the performance-based grants management approach in fostering a culture of data use and strong performance in the overall enterprise (Self-Cited, 2014), the marginal cost per unit of performance improvement has not been determined. Research in this area could help assess the worth of the investment made in relation to the payoff generated.
Finally, research is needed, perhaps first on a comparative case study basis, to investigate the
extent to which performance-based processes might be promulgated down through the networked
structure to manage more effectively and improve the performance of the overall program. A qualita-
tive analysis could also help investigate the initiatives adopted by the low-performing grantees to
improve their performance. Such an analysis may also help determine to what extent the technical
review by the CDC contributed toward this performance improvement. The research question here
might be stated as follows: When grantees are, in turn, employing performance-based approaches to
manage their own programs, does this increase their contractors’ accountability to their principals,
incentivize them to work harder and smarter to attain targets or standards, and ultimately improve
their own performance and, in turn, help strengthen the performance of the overall program?
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publi-
cation of this article: CDC provided financial support for this project via an Intergovernmental Personnel
Agreement with Georgia State University, #12IPA03129.
References
Andersen, S. C. (2008). The impact of public management reforms on student performance in Danish
schools. Public Administration, 86, 541-558.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Upper Saddle
River, NJ: Prentice Hall.
Barsky, A. (2008). Understanding the ethical cost of organizational goal-setting: A review and theory devel-
opment. Journal of Business Ethics, 81, 63-81.
Beam, D. R., & Conlan, T. J. (2002). Grants. In L. Salamon (Ed.), The tools of government: A guide to the
new governance (pp. 340-380). Oxford, UK: Oxford University Press.
Behn, R. D. (2003). Why measure performance? Different purposes require different measures. Public
Administration Review, 63, 586-606.
Bouckaert, G., & Balk, W. (1991). Public productivity measurement: Diseases and cures. Public Productivity
& Management Review, 15, 229-235.
Boyne, G. A., & Chen, A. A. (2007). Performance targets and public service improvement. Journal of
Public Administration Research and Theory, 17, 455-477.
Boyne, G. A., & Gould-Williams, J. (2003). Planning and performance in public organizations an empirical
analysis. Public Management Review, 5, 115-132.
Boyne, G. A., James, O., John, P., & Petrovsky, N. (2011). Top management turnover and organizational
performance: A test of a contingency model. Public Administration Review, 71, 572-581.
Colby, D. C., & Pickell, S. G. (2010). Investing for good: Measuring nonfinancial performance. Community
Development Investment Review, 6, 64-68.
Crossley, C. D., Cooper, C. D., & Wernsing, T. S. (2013). Making things happen through challenging goals:
Leader proactivity, trust, and business-unit performance. Journal of Applied Psychology, 98, 540-549.
DeGroff, A., Cheung, K., Dawkins-Lyn, N., Hall, M. A., Melillo, S., & Glover-Kudon, R. (2015). Identifying promising practices for evaluation: The National Breast and Cervical Cancer Early Detection Program. Cancer Causes & Control, 26, 767-774.
DeGroff, A., Schooley, M., Chapel, T., & Poister, T. H. (2010). Challenges and strategies in applying
performance measurement to federal public health programs. Evaluation and Program Planning, 33,
365-372.
Deming, W. E. (2000). The new economics: For industry, government, education. Cambridge, MA: The
MIT Press.
El-Khawas, E. (1998). Strong state action but limited results: Perspectives on university resistance.
European Journal of Education, 33, 317-330.
Erez, M., & Zidon, I. (1984). Effect of goal acceptance on the relationship of goal difficulty to performance.
Journal of Applied Psychology, 69, 69-78.
Fowler, S., Webber, A., Cooper, B. S., Phimister, A., Price, K., Carter, Y., . . . Stone, S. P. (2007). Successful use of feedback to improve antibiotic prescribing and reduce Clostridium difficile infection: A controlled interrupted time series. Journal of Antimicrobial Chemotherapy, 59, 990-995.
Gerrish, E. (2016). The impact of performance management on performance in public organizations: A
meta-analysis. Public Administration Review, 76, 48-66.
Goerdel, H. T. (2006). Taking initiative: Proactive management and organizational performance in net-
worked environments. Journal of Public Administration Research and Theory, 16, 351-367.
Heinrich, C. J. (2007). False or fitting recognition? The use of high performance bonuses in motivating
organizational achievements. Journal of Policy Analysis and Management, 26, 281-304.
Heinrich, C. J., & Choi, Y. (2007). Performance-based contracting in social welfare programs. The American
Review of Public Administration, 37, 409-435.
Heinrich, C. J., & Lynn, L. E. (2001). Means and ends: A comparative study of empirical methods for investi-
gating governance and performance. Journal of Public Administration Research and Theory, 11, 109-138.
Heinrich, C. J., & Marschke, G. (2010). Incentives and their dynamics in public sector performance man-
agement systems. Journal of Policy Analysis and Management, 29, 183-208.
Hood, C. (2012). Public management by numbers as a performance-enhancing drug: Two hypotheses.
Public Administration Review, 72(s1), S85-S92.
Hvidman, U., & Andersen, S. C. (2014). Impact of performance management in public and private organi-
zations. Journal of Public Administration Research and Theory, 24, 35-58.
Jennings, E., & Haist, M. (2004). Putting performance measurement in context. In P. W. Ingraham & L. E. Lynn, Jr. (Eds.), The art of governance: Analyzing management and administration (pp. 152-194). Washington, DC: Georgetown University Press.
Koning, P., & Heinrich, C. J. (2013). Cream-skimming, parking and other intended and unintended effects of
high-powered, performance-based contracts. Journal of Policy Analysis and Management, 32, 461-483.
Latham, G. P., & Locke, E. A. (2006). Enhancing the benefits and overcoming the pitfalls of goal setting.
Organizational Dynamics, 35, 332-340.
Linderman, K., Schroeder, R. G., & Choo, A. S. (2006). Six sigma: The role of goals in improvement teams.
Journal of Operations Management, 24, 779-790.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting & task performance. Upper Saddle River,
NJ: Prentice Hall.
Marvel, M. K., & Marvel, H. P. (2008). Government-to-government contracting: Stewardship, agency, and
substitution. International Public Management Journal, 11, 171-192.
Meier, K. J., & O’Toole, L. J. (2003). Public management and educational performance: The impact of
managerial networking. Public Administration Review, 63, 689-699.
Miller, E. A., Doherty, J., & Nadash, P. (2013). Pay for performance in five states: Lessons for the nursing
home sector. Public Administration Review, 73(s1), S153-S163.
Miner, J. B. (2015). Organizational behavior 1: Essential theories of motivation and leadership. London,
UK: Routledge.
Moynihan, D. P. (2008). The dynamics of performance management: Constructing information and reform.
Washington, DC: Georgetown University Press.
Murphy, K. J. (2000). Performance standards in incentive contracts. Journal of Accounting & Economics,
30, 245-278.
Mussweiler, T., & Strack, F. (2000). The “relative self”: Informational and judgmental consequences of
comparative self-evaluation. Journal of Personality and Social Psychology, 79, 23-38.
Nelson, H. D., Tyne, K., Naik, A., Bougatsos, C., Chan, B. K., & Humphrey, L. (2009). Screening for breast cancer: An update for the U.S. Preventive Services Task Force. Annals of Internal Medicine, 151, 727-737.
Nielsen, P. A. (2014). Performance management, managerial authority, and public service performance. Journal of Public Administration Research and Theory, 24, 431-458. doi:10.1093/jopart/mut025
Nielsen, P. A., & Moynihan, D. P. (2016). How do politicians attribute bureaucratic responsibility for performance? Negativity bias and interest group advocacy. Journal of Public Administration Research and Theory. doi:10.1093/jopart/muw060
Ordóñez, L. D., Schweitzer, M. E., Galinsky, A. D., & Bazerman, M. H. (2009). Goals gone wild:
The systematic side effects of overprescribing goal setting. Academy of Management Perspectives,
23(1), 6-16.
O’Toole, L. J., & Meier, K. J. (2015). Public management, context, and performance: In quest of a more general theory. Journal of Public Administration Research and Theory, 25, 237-256. doi:10.1093/jopart/muu011
Patrick, B. A., & French, P. E. (2011). Assessing new public management’s focus on performance measure-
ment in the public sector: A look at no child left behind. Public Performance & Management Review,
35, 340-369.
Penfold, R. B., & Zhang, F. (2013). Use of interrupted time series analysis in evaluating health care quality
improvements. Academic Pediatrics, 13(6 Suppl.), S38-S44.
Poister, T. H., Pasha, O. Q., & Edwards, L. H. (2013). Does performance management lead to better outcomes? Evidence from the U.S. public transit industry. Public Administration Review, 73, 625-636.
Salamon, L. M. (2002). The tools of government: A guide to the new governance. Oxford, UK: Oxford
University Press.
Sun, R., & Van Ryzin, G. G. (2014). Are performance management practices associated with better public outcomes? Empirical evidence from New York public schools. The American Review of Public Administration, 44, 324-338. doi:10.1177/0275074012468058
Thorgren, S., & Wincent, J. (2013). Passion and challenging goals: Drawbacks of rushing into goal-setting
processes. Journal of Applied Social Psychology, 43, 2318-2329.
Vesco, K. K., Whitlock, E. P., Eder, M., Lin, J., Burda, M. B. U., Senger, C. A., . . . Zuber, S. (2011). Screening for cervical cancer: A systematic evidence review for the U.S. Preventive Services Task Force. The Lancet Oncology, 12, 663-672.
Wagner, A. K., Soumerai, S. B., Zhang, F., & Ross-Degnan, D. (2002). Segmented regression analy-
sis of interrupted time series studies in medication use research. Journal of Clinical Pharmacy and
Therapeutics, 27, 299-309.
Walker, R. M., Damanpour, F., & Devece, C. A. (2011). Management innovation and organizational perfor-
mance: The mediating effect of performance management. Journal of Public Administration Research
and Theory, 21, 367-386.
Whittington, J. L., Goodwin, V. L., & Murray, B. (2004). Transformational leadership, goal difficulty, and job
design: Independent and interactive effects on employee outcomes. The Leadership Quarterly, 15, 593-606.
Wright, B. E. (2004). The role of work context in work motivation: A public sector application of goal
and social cognitive theories. Journal of Public Administration Research and Theory, 14, 59-78.
Wright, B. E. (2007). Public service and motivation: Does mission matter? Public Administration Review,
67, 54-64.
Yancy, B., Royalty, J. E., Marroulis, S., Mattingly, C., Benard, V. B., & DeGroff, A. (2014). Using data to
effectively manage a national screening program. Cancer, 120, 2575-2583.
Author Biographies
Theodore H. Poister is professor of Public Management in the Andrew Young School of Policy Studies at
Georgia State University. His research focuses on performance management and stakeholder feedback pro-
cesses in government.
Obed Pasha is a lecturer at the School of Public Policy, University of Massachusetts Amherst. His research
interests focus on public management and the use of performance management in the public sector.
Amy DeGroff is a senior health scientist at the Centers for Disease Control and Prevention (CDC), Division
of Cancer Prevention and Control. Dr. DeGroff leads a team responsible for evaluating CDC’s National
Breast and Cervical Cancer Early Detection Program and the Colorectal Cancer Control Program.
Janet Royalty, MS, is a senior analyst at the Centers for Disease Control and Prevention and provides data
management support for two large cancer screening and control programs with an emphasis on using data
to monitor and direct program activities.