
American Journal of Evaluation

© The Author(s) 2014
DOI: 10.1177/1098214014545828

A Framework for Assessing the Performance of Nonprofit Organizations

Chongmyoung Lee1 and Branda Nowell1

Abstract
Performance measurement has gained increased importance in the nonprofit sector, and contem-
porary literature is populated with numerous performance measurement frameworks. In this article,
we seek to accomplish two goals. First, we review contemporary models of nonprofit performance
measurement to develop an integrated framework in order to identify directions for advancing the
study of performance measurement. Our analysis of this literature illuminates seven focal per-
spectives on nonprofit performance, each associated with a different tradition in performance
measurement. Second, we demonstrate the utility of this integrated framework for advancing theory
and scholarship by leveraging these seven perspectives to develop testable propositions aimed at
explaining variation across nonprofits in the adoption of different measurement approaches. By
better understanding how performance measurement is conceptualized within the sector, the field will
be better positioned to both critique and expand upon normative approaches advanced in the lit-
erature as well as advance theory for predicting performance measurement decisions.

Keywords
performance measurement, performance assessment, performance evaluation, outcome, nonprofit

Introduction
Financial and competitive pressures within the nonprofit sector have led to an increased emphasis on
performance measurement. As a result of a growing emphasis on accountability in government fund-
ing, nonprofits, like their public and private counterparts, are under increasing pressure to demonstrate
excellence in performance in order to secure financial resources (Cairns, Harris, Hutchison, &
Tricker, 2005; Moxham, 2009a). In conjunction, the current economic climate has had an adverse
effect on the value of public donations made to the nonprofit sector (McLean & Brouwer, 2010),
suggesting that government grants and contracts will play a more prominent role in the general
financial portfolio of the nonprofit sector. As a result, the pressure for nonprofit organizations to
demonstrate their performance in order to secure continued funding is acute (Martikke, 2008).

1 Department of Public Administration, North Carolina State University, Raleigh, NC, USA

Corresponding Author:
Chongmyoung Lee, Department of Public Administration, North Carolina State University, Raleigh, NC 27695, USA.
Email: clee18@ncsu.edu


Correspondingly, the measurement of nonprofit performance has received increasing scholarly attention.
Our focus on the nonprofit sector is purposeful. Although perspectives in performance measure-
ment from other sectors offer valuable insights, literature has identified key differences in perfor-
mance measurement across sectors (e.g., Baruch & Ramalho, 2006; Kaplan, 2001; Moore, 2000).
One of the main differences is the complexity of performance measurement. Performance measure-
ment in the nonprofit sector is complicated by the fact that nonprofits often pursue missions whose
achievement is difficult to measure (Bryson, 2011; Drucker, 2010; Forbes, 1998; Kanter & Sum-
mers, 1994; Oster, 1995; Sawhill & Williamson, 2001). Further, although there is a growing body
of scholarship dedicated to performance measurement in the nonprofit sector, this literature is less
established relative to the private and public sectors (Dart, 2004; Moxham, 2009a), suggesting the
need for greater integration to advance this growing field of study.
Scholars have developed a variety of performance measurement approaches for the nonprofit sec-
tor. Representative examples are Kaplan’s (2001) balanced scorecard for nonprofits, Moore’s (2003)
public value scorecard, and Sowa, Selden, and Sandfort’s (2004) multidimensional, integrated
model of nonprofit organizational effectiveness (MIMNOE). Although performance measurement
frameworks generally emphasize only certain aspects of performance, a number of different per-
spectives can be identified across frameworks. To date, there has been limited scholarly attention
paid toward the analysis and integration of these divergent perspectives.
An integrated analysis and synthesis of the different perspectives currently adopted in the aca-
demic literature allows the field to take stock of the diverse approaches to conceiving and evaluating
nonprofit performance in order to both inform practice and advance theory. Although there have
been reviews of organizational performance measurement approaches (e.g., Baruch & Ramalho,
2006; Ritchie & Kolodinsky, 2003), these efforts have tended to be more narrow than holistic in
focus (e.g., focusing on financial performance) and have not attended to the specific performance
dimensions of the nonprofit sector. In this article, we address this gap by reviewing the contempo-
rary literature on nonprofit performance measurement and presenting an integrated framework that
summarizes the different perspectives that can be adopted in conceptualizing and measuring nonpro-
fit performance. Such an integrated framework can advance theory by providing a foundation upon
which to base investigations of managerial choice in the adoption and evolution of different nonpro-
fit measurement approaches. Accordingly, we conclude our article with an eye toward future
research by explicating a set of propositions regarding key contingencies that can influence when
different measurement perspectives may be more salient for nonprofits. In doing so, we examine the
implications of this integrated framework for informing theory development and future research on
strategic choice in performance measurement adoption.

Method
A general review of the literature on nonprofit performance measurement revealed an impressive
array of scholarly efforts over the past decade. Some scholars have focused on understanding the
patterns of performance measurement use in nonprofits (e.g., Barman, 2007; MacIndoe & Barman,
2012; Poole, Nelson, Carnahan, Chepenik, & Tubiak, 2000). Other studies focus on specific tools in
facilitating nonprofit performance measurement such as the use of logic models (e.g., Hatry, Houten,
Plantz, & Taylor, 1996; Russ-Eft & Preskill, 2009; Valley of the Sun United Way, 2008; W.K.
Kellogg Foundation, 2004). Still others address the substantive content of performance measurement
in nonprofits; it is this latter group that is the focus of our review, in which we considered contem-
porary nonprofit performance measurement frameworks, defined as frameworks present in the lit-
erature between 1990 and 2012. To identify an initial population of books and articles, electronic
searches of five bibliographic databases were conducted: (1) Social Science Citation Index, (2)
Academic Search Premier, (3) Public Administration Abstracts, (4) JSTOR Arts and Sciences, and
(5) WorldCat. The search terms used included ‘‘performance evaluation,’’ ‘‘performance measure-
ment,’’ ‘‘performance appraisal,’’ ‘‘outcome measurement,’’ or ‘‘impact measurement’’ and ‘‘non-
profit.’’ To expedite the identification of relevant articles, only the articles that included one or
more of the search terms in the title or abstract were reviewed. Additional resources were identi-
fied with the assistance of knowledgeable colleagues in the field.
Each article was then reviewed to identify only those that contained an explicit performance mea-
surement framework for a nonprofit organization, including a defined scope and constituent factors
of performance specific to the nonprofit sector. This review led to the identification of 18 distinct
nonprofit performance evaluation frameworks that were included for full analysis: (1) Bagnoli and
Megali (2011), (2) Beamon (1999), (3) Berman (2006), (4) Cutt and Murray (2000), (5) Greenway
(2001), (6) Herman and Renz (2008), (7) Kaplan (2001), (8) Lampkin et al. (2006), (9) Land (2001),
(10) Median-Borja and Triantis (2007), (11) Moore (2003), (12) Moxham (2009b), (13) Newcomer
(1997), (14) Penna (2011), (15) Poister (2003), (16) Sawhill and Williamson (2001), (17) Sowa et al.
(2004), and (18) Talbot (2008).
We also reviewed related efforts such as studies of performance measurement focused on a pro-
gram or case level of analysis but not at the organizational level of analysis (e.g., Crook, Mullis,
Cornille, & Mullis, 2005; Hatry, Fisk, Hall, Schaenman, & Snyder, 2006; James, 2001; Poole,
Duvall, & Wofford, 2006; Sobelson & Young, 2013) as well as works that focus on a specific aspect
of nonprofit performance (e.g., Benjamin, 2012; Ebrahim & Rangan, 2010) such as Eckerd and
Moulton’s (2011) investigation of how role heterogeneity and external environment influence the
adoption and uses of performance measures in nonprofits. These related works did not propose a
specific framework of nonprofit performance measurement, which is the focus of this review. None-
theless, they offered valuable insights that helped to inform our analysis. As with any review meth-
odology, we recognize that our resulting sample may not be exhaustive of the population of
frameworks proposed between 1990 and 2012. However, we feel confident that the 18 frameworks
identified provide a representative picture of contemporary nonprofit performance measurement
perspectives in the United States.
In order to develop an integrated framework, each source was reviewed and the measurement
framework adopted by each author was identified. For each source, the dimensions and measures
of performance proposed were summarized in a table along with any available information on meth-
odology, background, and context related to the measurement framework. Last, we content analyzed
the frameworks in order to identify common themes in both evaluands and perspectives of nonprofit
performance adopted by each author. Based on the common perspectives, we constructed Table 1.
Each core perspective of nonprofit performance has several main concepts and associated measure-
ment indicators. In the subsequent section, we elaborate the core perspectives, concepts, and mea-
surement indicators summarized in Table 1.
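To make the content-analysis step more concrete, the sketch below is a minimal, hypothetical illustration in Python; the framework-to-perspective assignments shown are invented for illustration and are not the coding data behind Table 1. It assumes each reviewed framework has already been hand-coded for the perspectives it addresses and simply tallies how many frameworks touch each perspective.

```python
# Minimal, hypothetical illustration: tallying hand-coded framework-by-perspective
# assignments into the kind of counts that underlie an integrated summary table.
from collections import Counter

# Illustrative coding only; not the actual data behind Table 1.
framework_perspectives = {
    "Kaplan (2001)": {"organizational capacity", "client/customer satisfaction"},
    "Moore (2003)": {"organizational capacity", "public value", "network/legitimacy"},
    "Moxham (2009b)": {"inputs", "outputs", "behavioral/environmental outcomes"},
    "Sowa et al. (2004)": {"organizational capacity"},
}

# Count how many frameworks address each perspective.
coverage = Counter(
    perspective
    for perspectives in framework_perspectives.values()
    for perspective in perspectives
)

for perspective, count in coverage.most_common():
    print(f"{perspective}: addressed by {count} framework(s)")
```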

Core Perspectives of Performance Measurement in Nonprofits


Our review revealed a wide variety of perspectives adopted by authors in conceptualizing nonprofit
performance, each focusing on different phases in the value-generation process. Although most per-
formance measurement frameworks focused on multiple phases, no framework represented all of
them. Figure 1 summarizes this value-generation process through the core factors associated with
the performance of nonprofit organizations. First, several frameworks emphasize the importance
of understanding and measuring inputs into an organization. This perspective highlights that nonpro-
fits acquire and utilize resources to achieve financial health and support organizations’ activities
(Bagnoli & Megali, 2011; Beamon, 1999; Cutt & Murray, 2000; Kaplan & Norton, 1996; Kendall &
Knapp, 2000; Median-Borja & Triantis, 2007; Moxham, 2009b; Newcomer, 1997). Then, nonprofits
create organizational capacity through the acquisition of resources that enhance their ability to offer
quality programs and services (Kaplan, 2001; Moore, 2003; Sowa, Selden, & Sandfort, 2004). Next,
nonprofits generate specific products and services by utilizing the capacity and acquired resources
(Bagnoli & Megali, 2011; Berman, 2006; Cutt & Murray, 2000; Kendall & Knapp, 2000; Moxham,
2009b; Newcomer, 1997; Poister, 2003; Sawhill & Williamson, 2001).

Table 1. Core Perspectives of Performance Measurement in Nonprofits.

Inputs (Bagnoli & Megali, 2011; Beamon, 1999; Cutt & Murray, 2000; Kaplan & Norton, 1996; Kendall & Knapp, 2000; Median-Borja & Triantis, 2007; Moxham, 2009b; Newcomer, 1997)
Definition/main focus: The ability of a nonprofit to acquire necessary resources (financial and nonfinancial) and efficiently use those resources to achieve resiliency, growth, and long-term sustainability.
Performance measures or criteria: Increase in revenue from year to year; diversity of revenue streams; net surplus of financial reserves; ability to acquire and manage human resources (e.g., employees, volunteers); strength of the relationship with resource providers (e.g., funders, volunteers).

Organizational capacity (Kaplan, 2001; Moore, 2003; Sowa, Selden, & Sandfort, 2004)
Definition/main focus: Consists of human and structural features that facilitate an organization's ability to offer programs and services.
Performance measures or criteria: Employee satisfaction; employee motivation, retention, capabilities, and alignment; employee education/counseling; staff and executive perspectives on operational capabilities; operating performance (cost, quality, and cycle times) of critical processes; information system capabilities; capacity to innovate.

Outputs (Bagnoli & Megali, 2011; Berman, 2006; Cutt & Murray, 2000; Kendall & Knapp, 2000; Moxham, 2009b; Newcomer, 1997; Poister, 2003; Sawhill & Williamson, 2001)
Definition/main focus: Entails a specification of the scale, scope, and quality of products and services provided by the organization; focuses on organizational targets and activities that have direct linkages to organizational mission accomplishment.
Performance measures or criteria: Frequency and hours of services provided; on-time service deliveries; achieved specified goals in relation to services; number of participants served; client/customer response time; quality of services provided (physical and cultural accessibility, timeliness, courteousness, and physical condition of facilities).

Outcomes: behavioral and environmental changes (Bagnoli & Megali, 2011; Berman, 2006; Greenway, 2001; Lampkin et al., 2006; Moxham, 2009b; Penna, 2011)
Definition/main focus: State of the target population or the condition that a program is intended to affect; focuses on the differences and benefits accomplished through the organizational activities.
Performance measures or criteria: Increased skills/knowledge (increase in skill, knowledge, learning, and readiness); improved condition/status (participant social status, participant economic condition, and participant health condition); modified behavior/attitude (incidence of bad behavior, incidence of desirable activity, and maintenance of new behavior).

Outcomes: client/customer satisfaction (Kaplan, 2001; Median-Borja & Triantis, 2007; Newcomer, 1997; Penna, 2011; Poister, 2003)
Definition/main focus: Extent to which the organization satisfied and met the needs of the population the nonprofit is intended to serve.
Performance measures or criteria: Market share; client/customer satisfaction; client/customer retention; new client/customer acquisition.

Public value accomplishment (Hills & Sullivan, 2006; Greenway, 2001; Lampkin et al., 2006; Land, 2001; Moore, 2003; Penna, 2011)
Definition/main focus: The ultimate value/impact the organization hopes to create for the community/society.
Performance measures or criteria: Quality of life, well-being, and happiness; social capital creation, social cohesion, and social inclusion; safety and security; equality, tackling deprivation, and social exclusion; promoting democracy and civic engagement; citizen engagement and democratization; political advocacy; individual expression.

Network/institutional legitimacy (Bagnoli & Megali, 2011; Herman & Renz, 2008; Moore, 2003; Talbot, 2008)
Definition/main focus: Focuses on positive relationships with other organizations, reputational legitimacy within the community and field, compliance with laws, and best practices.
Performance measures or criteria: Funder relations and diversification; funder/stakeholder satisfaction; cases/activities of successful partnership/collaboration; credibility with other civil society actors; compliance with general/particular laws; institutional coherence; coherence of activities with the stated mission; strength of the relationship with public legitimaters, authorizers, and regulators; image of the organization on the mass media.

Near-term outcomes that are generated as a consequence of a nonprofit’s activities have been
conceptualized and measured in two key ways. Some frameworks focus on the evaluation of beha-
vioral and environmental changes such as increased skills, improved condition, and modified beha-
vior (Bagnoli & Megali, 2011; Berman, 2006; Greenway, 2001; Lampkin et al., 2006; Moxham,

2009b; Penna, 2011). Other performance frameworks highlight customer satisfaction as an impor-
tant near-term outcome (Kaplan, 2001; Median-Borja & Triantis, 2007; Newcomer, 1997; Poister,
2003; Penna, 2011). Finally, several authors have emphasized the importance of considering the
public value that is created at the community level as a result of a nonprofit’s activities (Greenway,
2001; Hills & Sullivan, 2006; Lampkin et al., 2006; Land, 2001; Moore, 2003; Penna, 2011). Last,
some frameworks have highlighted how well the organization has managed relationships with key
stakeholders and gained legitimacy in their field and community as a critical aspect of nonprofit per-
formance (Bagnoli & Megali, 2011; Herman & Renz, 2008; Moore, 2003; Talbot, 2008).

Figure 1. Main perspectives of nonprofits' performance. (The figure depicts the value-generation chain of input, capacity, output, outcome, and public value, with behavioral and environmental changes and client/customer satisfaction as outcome elements and with interorganizational networks and institutional legitimacy spanning these activities.)

In this article, we describe these dimensions as core perspectives of nonprofit performance. As
illustrated in Figure 1, it is important to note that these perspectives do not operate independently of
one another. Performance in the areas of resource acquisition and organizational capacity has down-
stream consequences for what a nonprofit is able to accomplish in terms of outcomes and public
value. In the same way, organizational outcomes can subsequently influence the acquisition of new
resources. In the subsequent section, we will discuss each of these perspectives in greater detail
including key operationalizations and indicators suggested by the current literature.

Input
The public and nonprofit sectors have been dominated by a results-oriented performance perspective,
arguably to the neglect of other complementary perspectives. In response to this, several scholars
have proposed performance evaluation frameworks which take into account that nonprofit organi-
zations are working under budget and resource constraints (e.g., Moxham, 2009b). Thus, mea-
suring how the inputs have been acquired and utilized is argued by some as a key dimension of
nonprofit performance. The main concepts dominating this perspective are resource acquisition
and utilization (Beamon, 1999; Kaplan & Norton, 1996; Kendall & Knapp, 2000; Median-Borja &
Triantis, 2007) and expenditure (Cutt & Murray, 2000; Moxham, 2009b; Newcomer, 1997).
Resource acquisition and utilization focuses on how well nonprofits are able to acquire resources
in order to generate value. Two main approaches have been advocated for in this domain. The first
approach focuses on the acquisition and utilization of money, facilities, equipment, staffing and
training, and the preparation of programs and services (Berman, 2006; Median-Borja & Triantis,
2007). For example, in the three-part framework of Beamon (1999), resource performance metrics
measure the level of resources used to meet the system’s objectives. Examples of resource perfor-
mance metrics include the number of employees/volunteers (or hours) required for an activity,
increase in revenue from year to year, diversity of revenue streams, and net surplus of financial
reserves. Also, the financial perspective in Kaplan’s Balanced Scorecard for Nonprofits framework
is used to examine how the resources have been used to develop necessary internal processes
(Kaplan & Norton, 1996). In addition to focusing on successful resource acquisition, scholars also
emphasize that nonprofits need to make wise use of the resources they have (Median-Borja & Triantis,
2007). Accordingly, a second approach for measuring nonprofit performance advocated in the litera-
ture is through an emphasis on expenditures (Cutt & Murray, 2000; Moxham, 2009b; Newcomer,
1997). Expenditure-focused measurement is commonly used in the public and nonprofit sectors in grant
and contract management and is an integral part of the nonprofit performance evaluation framework
proposed by Moxham (2009b). A common approach to examining nonprofit performance through expen-
ditures is ‘‘end of funding cycle’’ reports that compare expenditures to outputs with an eye toward
evaluating the efficiency of organizational activities (Moxham, 2009b).
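As a concrete illustration of this expenditure-to-output logic, the following minimal sketch uses invented figures (not drawn from any of the reviewed frameworks) to compute the kind of simple efficiency ratios an end-of-funding-cycle report might contain: cost per service hour delivered and cost per participant served.

```python
# Hypothetical end-of-funding-cycle summary: relate expenditures to outputs
# to produce simple efficiency ratios (all numbers are illustrative only).

total_expenditure = 250_000.00    # grant funds spent over the cycle (USD)
service_hours_delivered = 8_400   # output: hours of service provided
participants_served = 1_050       # output: unduplicated participants served

cost_per_service_hour = total_expenditure / service_hours_delivered
cost_per_participant = total_expenditure / participants_served

print(f"Cost per service hour: ${cost_per_service_hour:,.2f}")
print(f"Cost per participant:  ${cost_per_participant:,.2f}")
```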

Organizational Capacity
Several scholars have emphasized the importance of including organizational capacity development
in the evaluation of nonprofits (Kaplan, 2001; Moore, 2003; Sowa et al., 2004). This dimension is
strongly connected with the input perspective. Although the input perspective focuses primarily on
resource acquisition and utilization, the organizational capacity perspective focuses on developing
the capability to generate outputs/outcomes effectively. That is to say, this perspective is about eval-
uating how well a nonprofit has constructed effective internal processes and structures to use the
resources efficiently and effectively toward the advancement of the organization’s mission. It also
includes developing the requisite capacity to deliver the services, adopt necessary innovations, and
expand/alter programs and operations to meet changing needs (Kaplan, 2001). The justification for
adopting an organizational capacity perspective is based on the premise that processes of innovation,
adaptation, and learning strongly influence organizational activities and overall performance. Exam-
ple indicators of high performance for this perspective include employee education/counseling,
employee satisfaction, staff and executive perspectives on operational capabilities, and capacity
to innovate. There are several key concepts represented within this perspective. First, scholars high-
light the need for nonprofits to continually work to streamline and improve internal processes that
deliver value to customers and reduce operating expenses. For instance, abandoning an unnecessary
service provision procedure can be one way to reduce operating expenses (Sowa et al., 2004). Sec-
ond, scholars have emphasized the importance of considering a nonprofit’s performance in strength-
ening their organizational capacity for learning, innovation, and growth. This focus includes the
development of innovations that create entirely new processes, services, or products as well as mak-
ing quality improvements to people and systems within the organization. For example, in the
balanced scorecard, Kaplan (2001) focuses on quality improvements to people and systems within
the organization. Relevant indicators identified by Kaplan (2001, p. 357) include ‘‘employee moti-
vation, retention, capabilities, and alignment, as well as information system capabilities.’’ Moore
(2003, p. 18) also mentions the importance of building operational capacity through learning and
innovation. He argues that through such processes, organizations can enhance ''the technologies
that convert inputs into outputs, and outputs into satisfied clients and desired outcomes.’’ In the long
term, the performance of a nonprofit will depend on ''the rate at which it can learn to improve its
operations as well as continue to carry them out’’ (Moore, 2003, p. 22). Further, Kaplan (2001,
p. 366) highlights the innovative capacity of an organization as the extent to which ‘‘the boards,
investors, fund managers, foundations, and social entrepreneurs bring all their resources to bear
in the right ways to strategic applications.’’


Last, among the dimensions of MIMNOE, Sowa et al. (2004) encourage a focus on organizational
capacity in terms of management and program capacity. By this, they advocate that organizations
evaluate both how well programs are designed and operated and whether they are perceived by
employees, managers, and clients as being designed and operated appropriately. Although outcome
measures are often chosen to represent organizational performance, there is a complex mechanism
hidden behind the outcome. To improve performance, organizations should understand how
management capacity and program capacity influence the resulting outcomes (Sowa et al., 2004).

Output
A focus on outputs represents another perspective for conceptualizing the performance of nonpro-
fits. Outputs refer to the countable goods and services generated by nonprofit activities, the direct
products of the activities undertaken to achieve the mission (Bagnoli & Megali, 2011; Berman, 2006;
Cutt & Murray, 2000; Kendall & Knapp, 2000; Poister, 2003). This perspective focuses on the tar-
get/activities of the organization, generally as these relate and contribute to mission accomplishment
(Sawhill & Williamson, 2001).
Output measurement addresses whether nonprofit activities achieved the specific targets initially
intended (Moxham, 2009b). This approach involves highlighting the physical products of the activ-
ities carried out by nonprofits as a valuation (or quantitative accounting) of outputs. Output measures are thus
generally quantitative and include criteria such as the number of people who have been served and
the number of services that have been offered (Moxham, 2009b). Such information can be analyzed
in relation to inputs, such as relative production costs, in order to measure efficiency
and productivity (Bagnoli & Megali, 2011). Output measures are recognized as valuable in a perfor-
mance measurement portfolio, as they generally have the advantage of being easier and cheaper to
monitor than other types of outcomes. However, they are also known to be powerful drivers of beha-
vior within the organization and therefore need to be considered carefully. Consequently, scholars
(e.g., Sawhill & Williamson, 2001) emphasize the importance of output measures that have clear
linkages to the organizational mission to avoid goal displacement.

Outcome: Behavioral and Environmental Changes


An outcome of a nonprofit organization can be defined as the ‘‘state of the target population or the
social condition that a program is supposed to have changed’’ (Rossi, Lipsey, & Freeman, 2004, p.
204). The different approaches to measuring outcomes in the literature can be broadly grouped into
two categories: (1) a behavioral and environmental changes approach and (2) a customer satisfaction
approach. These two approaches can be used complementarily, but we distinguished them because
their focuses and measurement methods are different. In the former, the outcome is understood
based on whether nonprofits accomplished substantial changes in their target group or an environ-
mental condition (Berman, 2006; Greenway, 2001; Lampkin et al., 2006; Penna, 2011). For exam-
ple, in Moxham's (2009b, p. 4) measurement framework, one definition of a successful project is
one that ‘‘has been able to successfully demonstrate that it has achieved the outcomes that it set out
to achieve at the time that the grant offer was made and which has made a real difference to the lives
of disadvantaged people.’’ An outcome-based perspective differs from the output approach in that it
looks beyond organizational activities and seeks to discern the impact of these activities on the tar-
geted setting or population. This perspective highlights that while organizations may be highly pro-
ductive in the number of people served or projects implemented, it is a different issue whether
organizations made substantial changes in behavior or environmental conditions through these ser-
vices. Therefore, outcomes can be measured independently of the outputs achieved (Bagnoli &
Megali, 2011).


Outcome: Client/Customer Satisfaction


Another approach to outcome measurement is a focus on client/customer satisfaction. Since a large
number of nonprofits are service oriented, measuring the quality of service is an important issue.
However, service quality is an abstract and intangible concept. Because objective
measures are lacking, many performance measurement frameworks advocate for assessing the quality of a non-
profit’s services by measuring consumer perceptions of quality through satisfaction surveys and cus-
tomer complaints (Median-Borja & Triantis, 2007; Penna, 2011; Poister, 2003). For example,
Kaplan’s Balanced Scorecard (2001) focuses on how the organization creates value for its targeted
customers, how much difference the value accomplishes for the targeted customer, and to what
extent the customers are satisfied with the value created for them. When Kaplan developed the
balanced scorecard for the private sector, he identified profit (the financial perspective) as the highest indi-
cator of performance. When he adapted the balanced scorecard to nonprofit organizations, however,
he emphasized creating value for clients and client satisfaction as a main goal of the organization.
Newcomer’s (1997, p. 16) framework takes a similar approach but provides a more comprehensive
discussion on customer or consumer evaluation. She advocates ‘‘physical and cultural accessibility,
timeliness, courteousness, physical condition of facilities, and overall satisfaction’’ as elements of
customer satisfaction. Other metrics of customer satisfaction present in the literature include client
retention and new client acquisition (Kaplan, 2001).

Public Value Accomplishment


Bozeman (1984) suggests ‘‘publicness’’ as a fundamental difference between public and private
organizations, highlighting the importance of public value in considering nonprofit performance.
Accordingly, the concern for public value accomplishment as a dimension of performance illumi-
nates a fundamental difference between for-profit and nonprofit organizations. Although the ulti-
mate value of a private sector firm's operations lies in profit maximization, Moore (2003)
argues the ultimate value of a nonprofit organization should be evaluated by the public values that
it produces for society. Diverse scholars have argued that contributing to public value is a main role
of nonprofit organizations (e.g., Anheier, 2009; Frumkin, 2002; Gronbjerg, 2001;
Moulton & Eckerd, 2012; Salamon, 2002).
Although the client perspective focuses on client satisfaction, the public value perspective
focuses on community-oriented outcomes and broader benefits to society. Also, the public value per-
spective differs from the outcome perspective primarily in the sense that outcome-based approaches
to performance measurement tend to be program or activity specific, whereas scholars of public
value accomplishment are interested in the global contribution of the nonprofit during a certain
period of time.
Two key concepts serve to frame discussions of public value in performance measurement. The
first is that of social ambition. The ultimate goal that nonprofit organizations seek to achieve is the
social ambitions outlined in their mission. These social ambitions cannot be captured in financial
measures. They describe ''particular people to be aided in particular ways, or particular social con-
ditions to be achieved through the work of the nonprofit firm’’ (Moore, 2003, p. 12). The extent of
social ambitions may be influenced by organizational size. Small nonprofit organizations may
have less capacity and fewer resources to change communitywide values than large nonprofits.
Several scholars have sought to operationalize public value as an aspect of organizational perfor-
mance for nonprofits, although there is little consensus across conceptualizations. For example,
according to the public value measurement framework of Hills and Sullivan (2006), examples
of public value are (1) quality of life, well-being, and happiness; (2) social capital, social cohe-
sion, and social inclusion; (3) safety and security; (4) equality, tackling deprivation, and social
exclusion; and (5) promoting democracy and civic engagement. They suggest several methods for
measuring public value, including consensus conferences, citizens' juries/panels, attitudinal sur-
veys, opinion polling, and the public value measurement framework developed by The Work Foun-
dation. Moulton and Eckerd (2012) categorize the public value of nonprofits into the following six
dimensions: (1) service delivery, (2) innovation, (3) advocacy, (4) individual expression, (5) social
capital creation, and (6) citizen engagement. These authors suggest survey items to assess each
dimension of public value of nonprofits. Similarly, among the four outcome types of Lampkin
et al. (2006), community-centered outcomes can be included in this perspective. Although the other
dimensions in Lampkin et al.'s framework (program-centered outcomes and participant-centered out-
comes) are restricted to the organizational level, community-centered outcomes are more oriented
toward public and community value. They suggest community-centered outcomes can be mea-
sured through citizen panel evaluations and opinion polls.

Network/Institutional Legitimacy
Finally, networks and institutional legitimacy may be considered as a key component of a nonpro-
fit’s performance framework. Within this perspective, performance is conceptualized in terms of
how an organization has managed its relations with other stakeholders and established a reputation
for trustworthiness and excellence within this broader network. As shown in Figure 1, network and
legitimacy issues can be examined across all of a nonprofit's activities, such as acquiring and utilizing
resources, developing organizational capacity, and creating value for customers and the community.
This perspective can be broken down further into three core areas of focus. First, our review
of contemporary frameworks of nonprofit performance revealed an increased emphasis on under-
standing nonprofits’ effectiveness from the perspective of interorganizational networks and
network-level effectiveness. That is because the effectiveness of an organization often depends on
the ‘‘effectiveness of other organizations and people with which it is interconnected and the ways
in which they are interconnected’’ (Herman & Renz, 2008, p. 409). Also, networks are of particular
importance to nonprofits today, given government incentives for collaboration to reduce
costs and duplication of efforts (Guo & Acar, 2005; Sharfman, Gray, & Yan, 1991). Therefore, the
working relationship with other organizations should be considered one of the main perspectives of
nonprofit performance. Nonprofit organizations can advance their missions by collaborating with
other organizations who have the same goals or who have resources that they can utilize (Moore,
2003). This kind of interorganizational network/collaboration can be measured through success-
ful partnership cases and stakeholder/board surveys on partnerships. Consistent with this, several
scholars (e.g., Herman & Renz, 2008; Moore, 2003) have advocated for examining network
partnerships as a critical aspect of nonprofit performance.
Second, managing relations with funders, volunteers, and other stakeholders was identified as a key
area of focus in the network perspective. For example, Moore (2003) emphasized nonprofit efficacy
at expanding support and authorization as an aspect of nonprofit performance. By this, he was refer-
ring to funder relations and diversification, legitimacy with the general public, relations with govern-
ment regulators, reputation in the media, and credibility with civil society actors. ''If
nonprofit managers are to keep their attention focused on both the overall success and sustainability
of their strategy, they have to develop and use measures that monitor the strength of their relation-
ship with financial supporters, and public legitimaters and authorizers as well as those that record
their impact on the world'' (Moore, 2003, p. 16). In this sense, funder/stakeholder surveys or the image
of the organization in the mass media are presented as indicators for measuring this perspective.
Finally, institutional legitimacy as an aspect of nonprofit performance focuses on verifying that a
nonprofit has respected ‘‘its self imposed rules (statute, mission, program of action) and the legal
norms applicable to its institutional formula’’ (Bagnoli & Megali, 2011, p. 158). For the legitimacy
dimension, Bagnoli and Megali (2011, p. 162) insist that the following aspects should be considered: ''1)
institutional coherence, thus the coherence of activities with the stated mission; 2) compliance with
general and particular laws applicable; and 3) compliance with secondary norms.’’ Talbot (2008,
p. 4) also emphasizes this concept in his framework. Legitimacy justifies ''the raising of public
funds to carry out collective action projects that the market would not provide.’’ This perspective can
be measured by checking whether an organization has conformed to the institutional norms
and laws in its working environment.
In summary, findings from this review reveal a portrait of nonprofit performance that encom-
passes seven core perspectives. The first perspective emphasizes how important it is that nonprofits
be adept at acquiring, managing, and utilizing resources in light of increasingly challenging envir-
onments. In the second perspective, nonprofit performance is viewed through the lens of achieve-
ments in strengthening the internal capacity of the organization. In the third and fourth,
traditional program evaluation considerations of organizational outputs and outcomes are high-
lighted. Although these perspectives have been central to performance measurement for some time,
contemporary frameworks have grown in sophistication. Scholars and practitioners now have access
to a range of alternative designs such as behavioral, environmental, and client satisfaction
approaches to outcome measurement. In the public value perspective of nonprofit performance,
scholars urge the field to not abandon efforts to assess the substantive value that nonprofits achieve
for society as a whole (Bagnoli & Megali, 2011; Moore, 2003). As stated by Bagnoli and Megali
(2011, p. 156), nonprofits must continue to consider the degree to which their activities have ‘‘con-
tributed to the well-being of the intended beneficiaries and also has contributed to community-wide
goals.'' Last, contemporary frameworks of nonprofit performance provide a basis for adopting
an ecological view of nonprofits, viewing them as embedded in a complex array of stakeholder rela-
tionships and institutional fields. As the social problems nonprofits confront become more complex
and beyond the scope of any single organizational solution, interorganizational networks are likely
to become even more significant as a mechanism for problem solving. Given this increasing impor-
tance of collaboration in the development of new solutions to complex social problems, network
management and institutional legitimacy are likely to become even more prominent among the crit-
ical perspectives of nonprofit performance.

Advancing Nonprofit Performance Measurement Theory and Practice


Although performance measurement is an important area in nonprofit research, limited integration
across normative prescriptions can impede progress in developing more robust theory. By mapping
these various normative approaches to conceptualizing nonprofit performance to phases of the value
creation process, we have created an integrated framework that can aid in advancing both scholar-
ship and practice in the field of nonprofit performance measurement.
Given the diversity of perspectives that can be embraced by nonprofit managers in considering
performance measurement, one key area of scholarly concern is understanding why different non-
profits might embrace different approaches. This question has received comparatively limited atten-
tion in current scholarship. Combined with insights from organizational theory, the integrated
framework developed here can suggest propositions to inform future research on patterns of adop-
tion and evolution in nonprofit performance measurement. While an exhaustive review of contin-
gency factors is outside the scope of this article, here we explore the implications of three
prominent organizational contingencies: (1) funding type, (2) task programmability and observabil-
ity, and (3) environmental turbulence. Although presented separately, we assume that nonprofits'
performance measurement approaches are influenced by multiple contingencies that may interact
in complex ways (Gresov, 1989).
Current scholarship concerning organizational incentives to engage in performance measurement
activity can be characterized as motivated by two broad drivers. The first perspective is based on the
strategic management tradition. In this perspective, the focus is intraorganizational and performance
measurement is viewed as a managerial tool for improving work processes toward the accomplish-
ment of a predefined set of organizational goals (e.g., Kaplan & Norton, 1996; Moynihan, 2005; Pet-
tigrew, Woodman, & Cameron, 2001; Poister, 2010; Russ-Eft & Preskill, 2009; Skinner, 2004). In a
second perspective, performance measurement is viewed from an extraorganizational standpoint as a
symbolic activity aimed at enhancing legitimacy and credibility in the eyes of key organizational
stakeholders such as potential clients, donors, funders, and policy makers (e.g., Modell, 2001,
2009; Moynihan, 2005; Roy & Seguin, 2000; Yang, 2009).
There is ongoing, spirited debate concerning institutional versus operational explanations for why
organizations do what they do (e.g., Bielefeld, 1992; Guo & Acar, 2005; Hill & Lynn, 2003; Moy-
nihan, 2005; Sowa, 2009; Williams, Fedorowicz, & Tomasino, 2010). The same can be true for per-
formance measurement approaches adopted by nonprofits. Resource dependency and institutional
theorists reject the notion that organizational actions such as the adoption of a given performance
measurement approach need to have anything to do with improving work processes. Although
operational improvement is one possible motivation, this body of scholarship argues it is not the
most likely explanation. Rather, these theorists argue that organizations are primarily concerned
with structurally adjusting themselves to manage relational dependencies and conform to institu-
tional norms in order to achieve legitimacy (DiMaggio & Powell, 1983; Meyer & Rowan, 1977;
Roy & Seguin, 2000; Scott, 1995; Sowa, 2009). The organizational mandate to effectively manage
external relations is so paramount that Pfeffer and Salancik (2003) have argued that managerial
actions are often best understood in terms of their symbolic importance rather than operational
value. Performance measurement information is frequently used as part of the public narrative of
a nonprofit, helping the organization to tell its story in reports and other media. In light of this,
we suggest that performance measurement is one such managerial domain which is shaped most
strongly by external pressures.

Proposition 1: Conformity to institutional pressures will have a stronger influence on nonprofit
performance measurement portfolios relative to process improvement motivations.

Funding Type
Nonprofit organizations have long been characterized as operating in resource-scarce environments
and subsequently subject to considerable influence by current and potential funders (Bielefeld, 1992;
Chang & Tuckman, 1994; Froelich, 1999; Hodge & Piccolo, 2005; Pfeffer & Salancik, 2003).
Resource dependency theorists argue that resources are the basis of an organization's strategy, struc-
ture, and survival (Aldrich, 1976; Hodge & Piccolo, 2005; Pfeffer & Salancik, 2003). Understanding
the symbolic importance of performance measurement to external stakeholders (Proposition 1) pro-
vides a foundation to consider how nonprofits may tailor performance measurement portfolios in
accordance with their funding model. There has been significant scholarship aimed at understanding
the implications of different funding models for nonprofit design and performance (e.g., Crittenden,
2000; Carroll & Stater, 2009; Hodge & Piccolo, 2005; Kara, Spillan, & DeShields, 2004); however,
to date, there has been no scholarship aimed at understanding how funding models may influence the
adoption of different performance measurement approaches.
Nonprofit funding comes from five primary sources: (1) private contributions of individ-
uals (e.g., membership fees, donations), (2) corporate donors, (3) government grants and con-
tracts, (4) commercial enterprise (e.g., concession sales), and (5) foundation grants and
contracts (Bielefeld, 1992; Chang & Tuckman, 1994; Froelich, 1999; Gronbjerg, 1993;
Hodge & Piccolo, 2005). Each funding source suggests a different audience and consumer
of performance measurement information.


Consideration of the unique interests, priorities, and values of these different audiences sug-
gests interesting areas for future research concerning their influence on performance measurement
designs. For example, commercial funding models have been described as strongly influenced by
the values of the marketplace and its emphasis on near-term efficiency in the value creation pro-
cess (Froelich, 1999; Peterson, 1986). As stewards of public funds, government funding models
are more likely to be shaped by public sector values of accountability and equity in the outputs
and outcomes of the organization (Chaves, Stephens, & Galaskiewicz, 2004; Froelich, 1999;
O’Regan & Oster, 2002; Peterson, 1986). Corporate philanthropy can serve as a key strategy for
helping a corporation garner greater loyalty and goodwill within its local community and broader
marketplace (DiMaggio, 1986; Froelich, 1999; Kelly, 1998; Powell & Friedkin, 1986). Accord-
ingly, visibility of the outcomes of its philanthropic efforts is an important marketing asset. Simi-
larly, private donors are another market in which nonprofits compete for attention and popularity,
making the ability to effectively communicate impact a critical capability. These suggest the fol-
lowing propositions:

Proposition 2. Performance measurement in commercial funding models will emphasize concerns
of efficiency in inputs, capacity, and outputs.
Proposition 3. Performance measurement in government funding models will emphasize concerns
for accountability and equity in outputs and outcomes as well as public value.
Proposition 4. Performance measurement in corporate funding models will emphasize visibi-
lity in outcomes and public value.
Proposition 5. Performance measurement in private donor funding models will emphasize
impact on public value and behavioral/environmental change.

Task Programmability and Observability


Early organizational scholars (e.g., Burns & Stalker, 1961; Eisenhardt, 1989; Lambright, 2009;
Simon, 1982; Van Slyke, 2007; Williamson, 1981, 1999) identified task programmability and obser-
vability as critical defining features in shaping organizational design. Task programmability refers to
the degree to which appropriate behavior can be specified in advance through predetermined poli-
cies, procedures, and protocols (Eisenhardt, 1989). Since task programmability and observability
may vary by program, performance measurement systems are often designed at the program level.
When tasks are highly programmable, a mechanistic logic can be established within the nonprofit
which outlines the exact ways in which inputs will be procured and transformed into the desired
outputs.
An example of a highly programmable task environment might be a Meals on Wheels program. In
this case, a predefined technology carries out a predetermined set of tasks (distributing vital food to a
targeted population within a specific service area) along a predefined route on a predetermined
schedule. In these types of organizations, each step from input to output can be standardized to a
great extent. Therefore, each step can easily be compared against performance targets to create meaningful
metrics for quality control and process improvement. However, an important consideration in per-
formance measurement is the degree to which the steps along the line from input to output are easily
(i.e., inexpensively) observable. Monitoring performance can have significant costs associated with
it. For example, although the food distribution may be highly programmable with regard to where
each truck stops at what times, the cost of the technology that would be required to monitor this (e.g.,
GPS tracking units in every truck, tracking databases, analysis algorithms) may exceed the value of
the monitoring itself. Because the logical connection between outputs and outcomes is generally
unambiguous in highly programmable tasks, monitoring of the more proximate outputs is given pri-
ority. Therefore, we would predict that:


Proposition 6. Performance measurement under conditions of high task programmability and
high observability will emphasize standardization and efficiency in inputs, capacity, and outputs.
Proposition 7. Performance measurement under conditions of high task programmability but
low observability will emphasize conformity to targets in outputs.

However, nonprofits as a population are more likely to pursue social missions that are character-
istically nonprogrammable (Moore, 2003). Nonprogrammable tasks are those in which the inputs,
appropriate actions, and outputs can vary dramatically. An example of a nonprogrammable task
would be crisis counseling in which what constitutes appropriate action varies hour by hour based
on the client who walks in the door, what they need, and what resources they may have to draw upon.
In nonprogrammable tasks that are easily observable, there is greater opportunity for process
improvement through direct observation approaches to organizational learning. However, in cases
where direct observation is either problematic (such as in the crisis counseling situation) or simply
too costly and the task is nonprogrammable, performance measurement of the value creation process
becomes more difficult. In these cases, nonprofits may focus more on monitoring internal capacity,
client satisfaction, and institutional legitimacy within their field as indicators of high performance.

Proposition 8. Performance measurement under conditions of low task programmability but
high observability will emphasize professional standards in inputs, capacity, outputs, and
behavioral/environmental outcomes.
Proposition 9. Performance measurement under conditions of low task programmability and
low observability will emphasize internal capacity, client satisfaction, and institutional
legitimacy.

Environmental Turbulence
Task programmability can be closely related to the degree of environmental stability. In highly sta-
ble operating environments, it becomes much easier to develop clear and unambiguous technologies
for accomplishing one’s goals. However, in turbulent and dynamic environments, nonprofits face
the additional challenge of sustaining themselves amid ever-changing fluctuations in funding, the
political and policy environment, human resource availability, relevant technologies, and public
priorities. Although the relative turbulence versus stability of nonprofit environments is an area that
has received significant scholarly interest (e.g., Andrews, 2008; Budding, 2004; Hoque, 2004), lim-
ited work has been done on understanding the implications of environmental turbulence for perfor-
mance measurement systems. However, there is ample theoretical basis to suggest that an association
between environmental turbulence and managerial choice in the adoption of performance measure-
ment systems may exist. Nonprofits operating in volatile environments must have greater capacity
to (1) buffer their organization from external shocks (Pfeffer & Salancik, 2003; Rethemeyer &
Hatmaker, 2008), (2) adapt their operations to new opportunities (Bryson, Crosby, & Stone,
2006; Guo & Acar, 2005; Larson, 1992), and (3) maintain relevancy within their domain despite
shifting priorities (DiMaggio & Powell, 1983). In these cases, internal capacity, interorganiza-
tional networks, and institutional legitimacy may be particularly critical to organizational survival
as each serves as a form of capital that can aid the organization during times of upheaval. For
example, interorganizational networks are commonly viewed as organizational resources for gain-
ing information that can give early notice of upcoming changes (Rethemeyer & Hatmaker, 2008;
Pfeffer & Salancik, 2003). Further, there is evidence that an organization can draw upon its network
to identify collaborators to enable the organization to take advantage of new opportunities that it
would not be able to pursue if it were working alone (Bryson et al., 2006; Guo & Acar, 2005;
Larson, 1992). Institutional legitimacy is another critical form of capital that can ensure the non-
profit is able to maintain competitiveness for funding even during times of economic downturn or
shifting political priorities when resources may become scarcer (DiMaggio & Powell, 1983).
Given that we can expect these forms of capital to be most critical to organizations operating in tur-
bulent environments, we propose that:

Proposition 10. Performance measurement for nonprofits in highly turbulent environments
will be more likely to emphasize interorganizational networks and institutional legitimacy.

Directions for Future Research


Taking stock of the array of perspectives that can be brought to bear upon nonprofit performance
can advance the field and suggest important areas for future research. First, the majority of the
frameworks reviewed here were multidimensional in nature and reflected a sentiment expressed
by Kaplan and Norton (1996) that problematized performance measurement that was dominated
by a single perspective (for example, financial measures). Indeed, there is a strong narrative in
contemporary performance measurement that multidimensional holistic approaches to conceptua-
lizing performance are superior. While there is a growing literature focused on understanding the
performance measurement practices of nonprofits (e.g., Campbell, Lambright, & Bronstein, 2012;
Carman, 2007; Carman & Fredericks, 2008; Eckerd & Moulton, 2011; MacIndoe & Barman,
2012; Thomson, 2010), little is known about the extent to which nonprofits are adopting these
more holistic performance measurement practices reviewed in our study. More importantly, we
know almost nothing about the substantive value of adopting a more multidimensional approach.
In other words, the field of performance measurement in nonprofits has been dominated by nor-
mative arguments about what nonprofits should include in their performance measurement port-
folios, but we know little about what nonprofits are actually doing and whether efforts toward
more holistic portfolios of performance measurement are value added. This is a fruitful area for
future research.
A second overarching theme is that contemporary frameworks of nonprofit performance measurement in this literature generally reflect an external perspective, that of an outsider looking in. However, research has started to examine how practitioners perceive the concepts and indicators of performance measurement and use them in practice. For example, Campbell, Lambright, and Bronstein (2012) examined how funders and nonprofit practitioners may value performance measurement information differently. Future research may identify differences in perspective between researchers and practitioners and thereby contribute to constructing better frameworks for both research and practice.
Third, performance measurement is not an end in itself. One of the big questions of public management suggested by Behn (1995) concerns the use of performance measures. Several studies (e.g., Carman, 2007; Carman & Fredericks, 2008; LeRoux & Wright, 2010) have sought to unpack the relationship between the act of performance measurement and actual performance. Also, some scholars (e.g., Ammons & Rivenbark, 2008; Julnes & Holzer, 2001; LeRoux & Wright, 2010; Moynihan & Pandey, 2010; Moynihan, Pandey, & Wright, 2012; Radin, 2006) have examined the use or usefulness of performance information. For example, LeRoux and Wright (2010) investigated how output, outcome, and efficiency performance measures improve strategic decision making. However, the role of other, less mainstream conceptualizations of performance measurement (input, public value, etc.) in influencing nonprofit functioning and effectiveness is less well researched and is an interesting area for future research. By better understanding how performance measurement is both conceptualized and used within the sector, the field will be better positioned to both expand upon normative frameworks and inform practice.

Conclusion
This study synthesizes the varied perspectives advocated by scholars to present an integrated frame-
work of nonprofit performance. Such a review highlights that there is more than one legitimate way
to conceptualize performance. By mapping these perspectives onto the value creation process of
nonprofits, this integrated framework offers scholars and practitioners a more holistic framework
within which to position and critically assess the various normative arguments in the literature.
This review and integrated framework also have important implications for the practice of nonprofit performance measurement: (1) the framework provides a starting point for new nonprofits to design a performance measurement system; (2) for nonprofits that already have a performance measurement system, it allows them to critically consider their own performance evaluation approaches within this broader array of possible approaches; (3) to the extent that nonprofits use such common measures, the framework can facilitate benchmarking and cross-program comparisons (Lampkin et al., 2006); and (4) for stakeholders, the framework provides a guideline for assessing an organization's performance or its performance measurement system.
Further, given the diversity of normative perspectives present in the current literature, greater
integration is necessary for systematically advancing theory and research on nonprofit performance
measurement adoption and use. For example, building on neoinstitutional perspectives of organiza-
tions, we suggest that, despite the common assumption that performance measurement is motivated
by internal needs to improve operational functioning and efficiency, institutional pressures and the
need for external legitimacy are likely to have a stronger influence on what nonprofits actually mea-
sure. Further, we propose that variation in nonprofits' funding and operating environments will
increase the likelihood of certain perspectives being prioritized over others. In doing so, we demon-
strate how an integrated framework can play a critical role in identifying directions for future
research such as how different approaches to performance evaluation are understood, adopted, and
used within the sector. By better understanding how performance measurement is conceptualized
within the sector, the field will be better positioned to both critique and expand upon normative
approaches advanced in the literature as well as advance theory for predicting performance measure-
ment decisions.

Declaration of Conflicting Interests


The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or
publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

References
Aldrich, H. (1976). Resource dependence and interorganizational relations: Local employment service offices and social services sector organizations. Administration & Society, 7, 419–454.
Ammons, D. N., & Rivenbark, W. C. (2008). Factors influencing the use of performance data to improve
municipal services: Evidence from the North Carolina benchmarking project. Public Administration
Review, 68, 304–318.
Andrews, R. (2008). Perceived environmental uncertainty in public organizations: An empirical exploration.
Public Performance & Management Review, 32, 25–50.
Anheier, H. K. (2009). What kind of nonprofit sector, what kind of society? Comparative policy reflections.
American Behavioral Scientist, 52, 1082–1094.

Bagnoli, L., & Megali, C. (2011). Measuring performance in social enterprises. Nonprofit and Voluntary Sector
Quarterly, 40, 149–165.
Barman, E. (2007). What is the bottom line for nonprofit organizations? A history of measurement in the British
voluntary sector. Voluntas: International Journal of Voluntary and Nonprofit Organizations, 18, 101–115.
Baruch, Y., & Ramalho, N. (2006). Communalities and distinctions in the measurement of organizational per-
formance and effectiveness. Nonprofit and Voluntary Sector Quarterly, 35, 39–65.
Beamon, B. M. (1999). Measuring supply chain performance. International Journal of Operations & Produc-
tion Management, 19, 275–292.
Behn, R. D. (1995). The big questions of public management. Public Administration Review, 55, 313–324.
Benjamin, L. M. (2012). Nonprofit organizations and outcome measurement: From tracking program activities
to focusing on frontline work. American Journal of Evaluation, 33, 431–447.
Berman, E. M. (2006). Performance and productivity in public and nonprofit organizations (2nd ed.). Armonk,
NY: M.E. Sharpe.
Bielefeld, W. (1992). Non-profit-funding environment relations: Theory and application. Voluntas: Interna-
tional Journal of Voluntary and Nonprofit Organizations, 3, 48–70.
Bozeman, B. (1984). Dimensions of "publicness": An approach to public organization theory. In B. Bozeman & J. Straussman (Eds.), New directions in public administration. Monterey, CA: Brooks/Cole.
Bryson, J. M. (2011). Strategic planning for public and nonprofit organizations: A guide to strengthening and
sustaining organizational achievement (Vol. 1). San Francisco, CA: John Wiley.
Bryson, J. M., Crosby, B. C., & Stone, M. M. (2006). The design and implementation of cross-sector collaborations: Propositions from the literature. Public Administration Review, 66, 44–55.
Budding, G. T. (2004). Accountability, environmental uncertainty and government performance: Evidence
from Dutch municipalities. Management Accounting Research, 15, 285–304.
Burns, T., & Stalker, G. M. (1961). The management of innovation. London, England: Tavistock.
Cairns, B., Harris, M., Hutchison, R., & Tricker, M. (2005). Improving performance? The adoption and implementation of quality systems in UK nonprofits. Nonprofit Management & Leadership, 16, 135–151.
Campbell, D. A., Lambright, K. T., & Bronstein, L. R. (2012). In the eyes of the beholders: Feedback motiva-
tions and practices among nonprofit providers and their funders. Public Performance & Management
Review, 36, 7–30.
Carman, J. G. (2007). Evaluation practice among community-based organizations: Research into the reality. American Journal of Evaluation, 28, 60–75.
Carman, J. G., & Fredericks, K. A. (2008). Nonprofits and evaluation: Empirical evidence from the field. In J.
G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation: New Directions for Evaluation, 119,
51–71.
Carroll, D. A., & Stater, K. J. (2009). Revenue diversification in nonprofit organizations: Does it lead to financial stability? Journal of Public Administration Research and Theory, 19, 947–966.
Chang, C. F., & Tuckman, H. P. (1994). Revenue diversification among non-profits. Voluntas: International Journal of Voluntary and Nonprofit Organizations, 5, 273–290.
Chaves, M., Stephens, L., & Galaskiewicz, J. (2004). Does government funding suppress nonprofits’ political
activity? American Sociological Review, 69, 292–316.
Crittenden, W. F. (2000). Spinning straw into gold: The tenuous strategy, funding, and financial performance linkage. Nonprofit and Voluntary Sector Quarterly, 29, 164–182.
Crook, W. P., Mullis, R. L., Cornille, T. A., & Mullis, A. K. (2005). Outcome measurement in homeless systems
of care. Evaluation and Program Planning, 28, 379–390.
Cutt, J., & Murray, V. (2000). Accountability and effectiveness evaluation in nonprofit organizations. London, England: Routledge.
Dart, R. (2004). Being ‘‘business-like’’ in a nonprofit organization: A grounded and inductive typology. Non-
profit and Voluntary Sector Quarterly, 33, 290–310.

DiMaggio, P. J. (1986). Can culture survive the marketplace? In P. DiMaggio (Ed.), Nonprofit enterprise and
the arts (pp. 65–93). New York, NY: Oxford University Press.
DiMaggio, P., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48, 147–160.
Drucker, P. F. (2010). Managing the non-profit organization: Principles and practices. New York, NY: HarperCollins.
Ebrahim, A., & Rangan, K. (2010). The limits of nonprofit impact: A contingency framework for measuring
social performance (Working paper, no. 10-099). Boston, MA: Harvard Business School.
Eckerd, A., & Moulton, S. (2011). Heterogeneous roles and heterogeneous practices: Understanding the adop-
tion and uses of nonprofit performance evaluations. American Journal of Evaluation, 32, 98–117.
Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14, 57–74.
Forbes, D. P. (1998). Measuring the unmeasurable: Empirical studies of nonprofit organization effectiveness from 1977 to 1997. Nonprofit and Voluntary Sector Quarterly, 27, 183–202.
Froelich, K. A. (1999). Diversification of revenue strategies: Evolving resource dependence in nonprofit organizations. Nonprofit and Voluntary Sector Quarterly, 28, 246–268.
Frumkin, P. (2002). On being nonprofit: A conceptual and policy primer. Cambridge, MA: Harvard University
Press.
Greenway, M. T. (2001). The emerging status of outcome measurement in the nonprofit human service sector.
In P. Flynn & V. A. Hodgkinson (Eds.), Measuring the impact of the nonprofit sector. New York, NY:
Kluwer Academic/Plenum Publishers.
Gresov, C. (1989). Exploring fit and misfit with multiple contingencies. Administrative Science Quarterly, 34,
431–453.
Gronbjerg, K. A. (1993). Understanding nonprofit funding. San Francisco, CA: Jossey-Bass.
Gronbjerg, K. A. (2001). Foreword. In J. S. Ott (Ed.), The nature of the nonprofit sector. Boulder, CO: Westview.
Guo, C., & Acar, M. (2005). Understanding collaboration among nonprofit organizations: Combining resource
dependency, institutional, and network perspectives. Nonprofit and Voluntary Sector Quarterly, 34,
340–361.
Hatry, H. P., Houten, T. V., Plantz, M. C., & Taylor, M. (1996). Measuring program outcomes: A practical approach. Alexandria, VA: United Way of America.
Hatry, H. P., Fisk, D. M., Hall, J. R., Jr., Schaenman, P. S., & Snyder, L. (2006). How effective are your community services? Procedures for measuring their quality (3rd ed.). Washington, DC: The Urban Institute and International City/County Management Association.
Herman, R. D., & Renz, D. O. (2008). Advancing nonprofit organizational effectiveness research and theory: Nine theses. Nonprofit Management & Leadership, 18, 399–415.
Hill, C., & Lynn, L. (2003). Producing human services: Why do agencies collaborate? Public Management Review, 5, 63–81.
Hills, D., & Sullivan, F. (2006). Measuring public value 2: Practical approach. London, England: The Work
Foundation.
Hodge, M. M., & Piccolo, R. F. (2005). Funding source, board involvement techniques, and financial vulnerability in
nonprofit organizations: A test of resource dependence. Nonprofit Management and Leadership, 16, 171–190.
Hoque, Z. (2004). A contingency model of the association between strategy, environmental uncertainty and perfor-
mance measurement: Impact on organizational performance. International Business Review, 13, 485–502.
James, B. (2001). A performance monitoring framework for conservation advocacy. Department of Conserva-
tion Technical Series 25. Wellington, New Zealand: Department of Conservation.
Julnes, P. D. L., & Holzer, M. (2001). Promoting the utilization of performance measures in public organiza-
tions: An empirical study of factors affecting adoption and implementation. Public Administration Review,
61, 693–708.

Kanter, R. M., & Summers, D. V. (1994). Doing well while doing good: Dilemmas of performance measure-
ment in nonprofit organizations and the need for a multiple-constituency approach (pp. 220–236). London,
United Kingdom: Sage.
Kaplan, R. S. (2001). Strategic performance measurement and management in nonprofit organizations. Nonpro-
fit Management & Leadership, 11, 353–370.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard: Translating strategy into action. Boston, MA:
Harvard Business School Press.
Kara, A., Spillan, J. E., & DeShields, Jr., O. W. (2004). An empirical investigation of the link between market
orientation and business performance in nonprofit service providers. Journal of Marketing Theory and Prac-
tice, 12, 59–72.
Kelly, K. S. (1998). Effective fund-raising management. Mahwah, NJ: Lawrence Erlbaum.
Kendall, J., & Knapp, M. (2000). Measuring the performance of voluntary organizations. Public Management:
An International Journal of Research and Theory, 2, 105–132.
Lambright, K. T. (2009). Agency theory and beyond: Contracted providers’ motivations to properly use service
monitoring tools. Journal of Public Administration Research and Theory, 19, 207–227.
Lampkin, L. M., Winkler, M. K., Kerlin, J., Hatry, H. P., Natenshon, D., Saul, J., Melkers, J., & Sheshadri, A. (2006). Building a common outcome framework to measure nonprofit performance. Washington, DC: The Urban Institute. Retrieved from http://www.urban.org/publications/411404.html
Land, K. C. (2001). Social indicators for assessing the impact of the independent, not-for-profit sector of society. In P. Flynn & V. A. Hodgkinson (Eds.), Measuring the impact of the nonprofit sector. New York, NY: Kluwer Academic/Plenum Publishers.
Larson, A. (1992). Network dyads in entrepreneurial settings: A study of the governance of exchange relation-
ships. Administrative Science Quarterly, 37, 76–104.
LeRoux, K., & Wright, N. S. (2010). Does performance measurement improve strategic decision making? Find-
ings from a national survey of nonprofit social service agencies. Nonprofit and Voluntary Sector Quarterly,
39, 571–587.
MacIndoe, H., & Barman, E. (2012). How organizational stakeholders shape performance measurement in non-
profits: Exploring a multidimensional measure. Nonprofit and Voluntary Sector Quarterly, Advance online
publication. doi: 10.1177/0899764012444351.
Martikke, S. (2008). Commissioning: Possible–Greater Manchester VCS organizations’ experiences in public
sector commissioning. Manchester: Greater Manchester Centre for Voluntary Organization.
McLean, C., & Brouwer, C. (2010). The effect of the economy on the nonprofit sector: A June 2010 survey. Retrieved March 13, 2013, from http://www.guidestar.org/ViewCmsFile.aspx?ContentID=2963
Medina-Borja, A., & Triantis, K. (2007). A conceptual framework to evaluate performance of nonprofit social service organizations. International Journal of Technology Management, 37, 147–161.
Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83, 340–363.
Modell, S. (2001). Performance measurement and institutional processes: A study of managerial responses to public sector reform. Management Accounting Research, 12, 437–464.
Modell, S. (2009). Institutional research on performance measurement and management in the public sector accounting literature: A review and assessment. Financial Accountability & Management, 25, 277–303.
Moore, M. H. (2000). Managing for value: Organizational strategy in for-profit, nonprofit, and governmental
organizations. Nonprofit and Voluntary Sector Quarterly, 29, 183–208.
Moore, M. H. (2003). The ‘public value scorecard’: A rejoinder and an alternative to ‘strategic performance
measurement and management in non-profit organizations’ by Robert Kaplan (Hauser Center for Nonprofit
Organizations Working Paper, no. 18). Boston, MA: Hauser Center for Nonprofit Organizations, Harvard
University.
Moulton, S., & Eckerd, A. (2012). Preserving the publicness of the nonprofit sector: Resources, roles, and public values. Nonprofit and Voluntary Sector Quarterly, 41, 656–685.

Moxham, C. (2009a). Performance measurement: Examining the applicability of the existing body of knowl-
edge to nonprofit organizations. International Journal of Operations & Production Management, 29,
740–763.
Moxham, C. (2009b). Quality or quantity? Examining the role of performance measurement in nonprofit organizations in the UK. Paper presented at the 16th International European Operations Management Association Conference, Goteborg, Sweden.
Moynihan, D. P. (2005). Why and how do state governments adopt and implement ‘‘Managing for Results’’
reforms? Journal of Public Administration Research and Theory, 15, 219–243.
Moynihan, D. P., & Pandey, S. K. (2010). The big question for performance management: Why do managers use performance information? Journal of Public Administration Research and Theory, 20, 849–866.
Moynihan, D. P., Pandey, S. K., & Wright, B. E. (2012). Setting the table: How transformational leadership
fosters performance information use. Journal of Public Administration Research and Theory, 22, 143–164.
Newcomer, K. (1997). Using performance measurement to improve programs. In K. Newcomer (Ed.), Using
performance measurement to improve public and nonprofit programs. San Francisco, CA: Jossey-Bass.
O’Regan, K., & Oster, S. (2002). Does government funding alter nonprofit governance? Evidence from New
York City nonprofit contractors. Journal of Policy Analysis and Management, 21, 359–379.
Oster, S. M. (1995). Strategic management for nonprofit organizations: Theory and cases. New York, NY:
Oxford University Press.
Penna, R. M. (2011). The nonprofit outcomes toolbox: A complete guide to program effectiveness, performance
measurement, and results. Hoboken, NJ: John Wiley.
Peterson, P. A. (1986). From impresario to arts administrator. In P. DiMaggio (Ed.), Nonprofit enterprise and
the arts (pp. 161–183). New York, NY: Oxford University Press.
Pettigrew, A. M., Woodman, R. W., & Cameron, K. S. (2001). Studying organizational change and development:
Challenges for future research. Academy of Management Journal, 44, 697–713.
Pfeffer, J., & Salancik, G. R. (2003). An external perspective on organizations. In The external control of organizations (pp. 1–22). Stanford, CA: Stanford University Press.
Poister, T. H. (2003). Measuring performance in public and nonprofit organizations. San Francisco, CA: Jossey-Bass.
Poister, T. H. (2010). The future of strategic planning in the public sector: Linking strategic management and performance. Public Administration Review, 70, 246–254.
Poole, D. L., Duvall, D., & Wofford, B. (2006). Concept mapping key elements and performance measures in a state nursing home-to-community transition project. Evaluation and Program Planning, 29, 10–22.
Poole, D., Nelson, J., Carnahan, S., Chepenik, N. G., & Tubiak, C. (2000). Evaluating performance measurement systems in nonprofit agencies: The Program Accountability Quality Scale (PAQS). American Journal of Evaluation, 21, 15–26.
Powell, W. W., & Friedkin, R. (1986). Politics and programs: Organizational factors in public television decision making. In P. DiMaggio (Ed.), Nonprofit enterprise in the arts (pp. 245–269). New York, NY: Oxford University Press.
Radin, B. (2006). Challenging the performance movement: Accountability, complexity, and democratic values.
Washington, DC: Georgetown University Press.
Rethemeyer, R. K., & Hatmaker, D. M. (2008). Network management reconsidered: An inquiry into management of network structures in public sector service provision. Journal of Public Administration Research and Theory, 18, 617–646.
Ritchie, W. J., & Kolodinsky, R. W. (2003). Nonprofit organization financial performance measurement: An evalua-
tion of new and existing financial performance measures. Nonprofit Management & Leadership, 13, 367–381.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). London, England: Sage.
Roy, C., & Seguin, F. (2000). The institutionalization of efficiency-oriented approaches for public service
improvement. Public Productivity & Management Review, 23, 449–468.

Russ-Eft, D., & Preskill, H. (2009). Evaluation in organizations: A systematic approach to enhancing learning,
performance, and change (2nd ed.). Cambridge, MA: Perseus Press.
Salamon, L. M. (2002). The resilient sector: The state of nonprofit America. In L. Salamon (Ed.), The state of
nonprofit America (pp. 3–63). Washington, DC: Brookings Institution Press.
Sawhill, J. C., & Williamson, D. (2001). Mission impossible? Measuring success in nonprofit organizations. Nonprofit Management & Leadership, 11, 371–386.
Scott, W. R. (1995). Institutions and organizations (2nd ed.). Thousand Oaks, CA: Sage.
Sharfman, M. P., Gray, B., & Yan, A. (1991). The context of interorganizational collaboration in the garment
industry: An institutional perspective. Journal of Applied Behavioral Science, 27, 181–208.
Simon, H. (1982). Models of bounded rationality: Behavioral economics and business organization. Cam-
bridge, MA: MIT Press.
Skinner, D. (2004). Evaluation and change management: Rhetoric and reality. Human Resource Management Journal, 14, 5–19.
Sobelson, R. K., & Young, A. C. (2013). Evaluation of a federally funded workforce development program:
The Centers for Public Health Preparedness. Evaluation and Program Planning, 37, 50–57.
Sowa, J. E. (2009). The collaboration decision in nonprofit organizations: Views from the front line. Nonprofit and Voluntary Sector Quarterly, 38, 1003–1025.
Sowa, J. E., Selden, S. C., & Sandfort, J. R. (2004). No longer unmeasurable? A multidimensional integrated
model of nonprofit organizational effectiveness. Nonprofit and Voluntary Sector Quarterly, 33, 711–728.
Talbot, C. (2008). Measuring public value: A competing value approach. London, England: The Work
Foundation.
Thomson, D. E. (2010). Exploring the role of funders’ performance reporting mandates in nonprofit perfor-
mance measurement. Nonprofit and Voluntary Sector Quarterly, 39, 611–629.
Valley of the Sun United Way. (2008). Logic model handbook. Retrieved from http://www.vsuw.org/file/logic_
model_handbook_updated_2008.pdf
Van Slyke, D. M. (2007). Agents or stewards: Using theory to understand the government-nonprofit social ser-
vice contracting relationship. Journal of Public Administration Research and Theory, 17, 157–187.
W. K. Kellogg Foundation. (2004). Logic model development guide. Retrieved from http://www.wkkf.org/
knowledge-center/resources/2006/02/wk-kellogg-foundation-logic-model-development-guide.aspx
Williams, C. B., Fedorowicz, J., & Tomasino, A. P. (2010, May). Governmental factors associated with state-
wide interagency collaboration initiatives. In Proceedings of the 11th Annual International Digital Govern-
ment Research Conference on Public Administration Online: Challenges and Opportunities (pp. 14–22).
Digital Government Society of North America.
Williamson, O. (1981). The economics of organization: The transaction cost approach. The American Journal
of Sociology, 87, 548–577.
Williamson, O. (1999). Public and private bureaucracies: A transaction cost economics approach. The Journal
of Law, Economics, and Organization, 15, 306–342.
Yang, K. (2009). Examining perceived honest performance reporting by public organizations: Bureaucratic pol-
itics and organizational practice. Journal of Public Administration Research and Theory, 19, 81–105.
