
Accounting, Organizations and Society 35 (2010) 689–706


The role of performance measurement and evaluation in building organizational capabilities and performance

Jennifer Grafton a, Anne M. Lillis a,*, Sally K. Widener b

a Department of Accounting and BIS, The University of Melbourne, Victoria 3010, Australia
b Jones Graduate School of Business, Rice University, Houston, TX 77005-1892, USA

Abstract

This study examines the processes through which the availability of broad-based strategically relevant performance information impacts on the performance outcomes of organizations. We explore the role of evaluation mechanisms in influencing managers' use of broad-based performance measurement information for feedback and feed-forward control. We hypothesize that these resultant decision-making patterns impact the exploitation and identification of strategic capabilities within an organization, and in turn organizational performance. Using a structural equation model, we find support for a model in which the degree of commonality between measures identified as decision-facilitating and decision-influencing is significantly associated with the use of decision-facilitating measures for both feedback and feed-forward control. In turn, the extent to which decision-facilitating measures are actually used by strategic business unit managers impacts on the strategic capabilities of the organization and subsequently its performance. Overall, the results suggest that to encourage managers to use the multiple financial and non-financial performance indicators increasingly incorporated in contemporary performance measurement systems, it is imperative that performance evaluation schemes are also designed to reflect these measures. To the extent performance evaluation schemes do not reflect such decision-facilitating measures, it is less likely managers will use these indicators to effectively manage performance. The resultant performance implications for the organization arise from the impact of these decision effects on the exploitation of existing capabilities and the search for and identification of new strategic opportunities.

© 2010 Elsevier Ltd. All rights reserved.

* Corresponding author. Tel.: +61 3 8344 5351; fax: +61 3 9349 2397. E-mail addresses: j.grafton@unimelb.edu.au (J. Grafton), alillis@unimelb.edu.au (A.M. Lillis), widener@rice.edu (S.K. Widener).

doi:10.1016/j.aos.2010.07.004

Introduction

Contemporary thinking with respect to the design of performance measurement systems promotes the capture of multiple financial and non-financial performance indicators that reflect the key value-adding activities of an organization (Kaplan & Norton, 1992, 1996). The provision of broad-based, strategically-aligned performance indicators is expected to improve organizational outcomes by enhancing the decision-relevant information available to managers thereby facilitating strategy-consistent decision making. Much of the rhetoric accompanying contemporary performance measurement innovations emphasizes the role of performance measures in directing managers' attention to the longer-term consequences of their actions, by encouraging managers to track effective strategy implementation, and informing the assessment and development of organizational capabilities (Chenhall, 2005; Kaplan & Norton, 1996; Simons, 2000). While empirical evidence demonstrates that the introduction of contemporary broad-based performance measurement systems can result in a range of improved organizational outcomes (Chenhall & Langfield-Smith, 1998; Davis & Albright, 2004; De Geuser, Mooraj, &

Oyon, 2009; Lingle & Schiemann, 1996; Malina & Selto, 2001; Scott & Tiessen, 1999), the results are by no means unequivocal (Ittner, Larcker, & Randall, 2003) and the extant literature offers only limited insights into the means through which these improvements are achieved (De Geuser et al., 2009).

In this paper we explicitly examine the processes through which the availability of broad-based performance measurement information impacts on the performance outcomes of organizations. We do so by developing and testing a model that is reflected in Fig. 1. Specifically, we argue that the extent to which decision-facilitating performance measures are perceived to be captured in evaluation influences the extent to which managers will use those measures. The decision-facilitating role refers to the provision of information to decision makers ex ante to decision making, in order to help resolve uncertainties in decision problems (Demski & Feltham, 1976; Narayanan & Davila, 1998). In contrast, the decision-influencing role refers to the use of information by higher-level management to evaluate the performance of subordinate managers. By forming perceptions about which performance measures will be used by higher-level management to evaluate their performance, subordinate managers are influenced to make decisions that improve performance on those measures (Demski & Feltham, 1976; Narayanan & Davila, 1998). While originating in the economics literature, these notions of decision-facilitating and decision-influencing roles of performance measurement remain important in behavioral studies of the impact of performance measurement system design on managerial decision making (Sprinkle, 2003). "Managers tend to be most affected by areas that senior managers signal as important, with success in these areas potentially determining status and progression in the organization" (Ferreira & Otley, 2009, p. 272). Thus we argue that the more perceived importance given decision-facilitating measures in the evaluation process, the more likely the manager will use those measures for feedback and feed-forward control. These two control uses capture the assessment of actual outcomes, and the formulation and use of predictive information, respectively (Emmanuel & Otley, 1985).

Following the resource-based view of the firm, which emphasizes the management of strategic capabilities for sustained competitive advantage, we argue in turn that the use of performance measurement information for feedback and feed-forward control influences the extent to which an organization is able to exploit and identify its strategic capabilities, respectively. In the current globally competitive, turbulent business environment it is critical to develop our understanding of the ways in which contemporary performance measurement aids or impedes the exploitation of existing capabilities and identification of new strategic capabilities. We model organizational performance as an outcome of the effective exploitation of strategic capabilities. Thus, modelling the exploitation of capabilities as a mediating variable allows us to explore the processes by which performance measurement systems enhance organizational outcomes.

We test our model using a survey of the contemporary performance measurement practices of strategic business units from a broad range of manufacturing and service organizations. We find empirical support for our model in that strategic business unit managers' use of measures for both feedback and feed-forward control is influenced by their perception of the extent to which the measures are used in the evaluation of their performance. In turn, the extent to which decision-facilitating performance measures are actually used by strategic business unit managers for feedback and feed-forward control is demonstrated to impact the ability of the unit to exploit its existing capabilities and identify new capabilities, respectively, and ultimately the performance of the strategic business unit. Consistent with Henri (2006), these findings suggest that the effect of contemporary performance measurement systems on organizational performance is in fact indirect, via both the decision-making patterns resulting from the use of these performance measures and the subsequent impact of this managerial focus on the exploitation of an organization's strategic capabilities.

This study contributes to the literature by building insights into the link between control systems and performance in two ways. First, in common with recent studies such as Henri (2006) and Widener (2007), we explore the process by which performance measurement impacts on the way managers mobilize resources to achieve competitive advantage, and thus contributes indirectly to performance outcomes. The paucity of studies exploring the

[Figure not reproduced. Fig. 1 depicts the hypothesized structural model: the commonality (overlap) between decision-facilitating and decision-influencing performance measures drives the use of decision-facilitating measures for feedback and feed-forward control (H1); feedback control use supports the exploitation of existing capabilities (H2a) and feed-forward control use supports the search for and identification of new capabilities (H2b); existing capabilities support new capabilities (H2c); and capabilities drive current SBU-level performance (H3).]

Fig. 1. Hypothesized structural model.
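The chain of relations in Fig. 1 is estimated in the paper with a structural equation model. As a purely illustrative sketch of that path logic (not the authors' estimation; the variable names, coefficients, and simulated data below are hypothetical), the mediated chain can be mimicked as a sequence of simple ordinary least-squares regressions:

```python
import random

# Hypothetical illustration of the Fig. 1 path logic:
# commonality -> feedback use (H1) -> existing capabilities (H2a)
#   -> SBU performance (H3).
# All coefficients and data are simulated, not drawn from the paper.

random.seed(42)
N = 5000

def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

def noise():
    return random.gauss(0, 0.5)

commonality  = [random.gauss(0, 1) for _ in range(N)]
feedback_use = [0.6 * c + noise() for c in commonality]   # H1 path
existing_cap = [0.5 * f + noise() for f in feedback_use]  # H2a path
performance  = [0.7 * e + noise() for e in existing_cap]  # H3 path

paths = {
    "H1":  ols_slope(commonality, feedback_use),
    "H2a": ols_slope(feedback_use, existing_cap),
    "H3":  ols_slope(existing_cap, performance),
}
for h, b in paths.items():
    print(f"{h}: estimated path coefficient = {b:.2f}")
```

Because each downstream variable in the simulation depends on its upstream cause only through the mediator, the sketch also reproduces the mediation logic of the model: the effect of measure commonality on performance is indirect, operating through measure use and capabilities.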



process by which control system attributes influence outcomes at an organizational level is noted in the literature (Bisbe & Otley, 2004; Chenhall, 2003; De Geuser et al., 2009; Hall, 2008). Both Henri (2006) and Widener (2007) suggest that a more detailed understanding of the role of management control systems as an antecedent to the development of organizational capabilities may help to resolve some of the ambiguous findings from literature that attempts to relate performance measurement innovation and organizational performance.

Second, we build on literature that links performance measurement system innovation with organizational performance by introducing evaluation mechanisms as potentially impacting the way broad-based decision-facilitating measures are utilized. Inferences derived from studies of single, typically financial, performance measures have established a common understanding that performance measurement information impacts managerial actions through inclusion in evaluation mechanisms (Sprinkle, 2003). This strand of established research demonstrates that managers devote effort to performance measures linked to evaluation and incentives (Demski & Feltham, 1978; Feltham & Xie, 1994; Gibbs, Merchant, Van der Stede, & Vargus, 2004; Holmstrom & Milgrom, 1991; Kerr, 1975). While contemporary performance measurement systems are designed and adopted primarily to facilitate strategy-consistent decision making, rather than for evaluation purposes (Kaplan & Norton, 1996), it is problematic to consider the way performance information is used by decision makers without also considering its use by superiors in evaluation processes (Dossi & Patelli, 2008; Ferreira & Otley, 2009; Kelly, 2007; Ullrich & Tuttle, 2004; van Veen-Dirks, 2010).

In the next section we develop the theoretical foundations of this study and a path model to test the hypothesized relations suggested in Fig. 1.

Theoretical model development

The theoretical foundations of this study reflect two bodies of literature. To theorize the first stage of our model (H1) we draw on the literature relating to the decision-facilitating and decision-influencing roles of performance measurement. We examine the joint impact of these two roles on the use of performance measures. The second stage of our model (H2 and H3) focuses on the processes by which the decision-facilitating use of performance measures translates into organizational outcomes. We theorize this process by drawing on literature linking performance measurement, the identification and exploitation of strategic capabilities, and organizational performance outcomes.

Hypothesis one – the influence of evaluation on use of decision-facilitating performance measures

Our first hypothesis links the degree of commonality between decision-facilitating and decision-influencing measures with the extent to which managers use decision-facilitating measures in feedback and feed-forward control. In order to develop the theory underlying this hypothesis, we first discuss the interplay between the two roles of performance measurement and the impact of this interplay on the use of decision-facilitating performance measures. We then draw on literature relating to uses of performance measures to conceptualize the feedback and feed-forward performance measurement uses in our model.

The decision-facilitating and decision-influencing roles of performance measurement

The literature on strategic performance measurement is founded on the premise that performance measurement innovation benefits organizations through the provision of diverse, strategically-aligned metrics that facilitate managerial decision making (Chenhall, 2005; Ittner, Larcker, & Randall, 2003; Kaplan & Norton, 1996; Simons, 2000). It is further argued that offering managers an array of decision-facilitating measures and then using these multiple measures in the evaluation process will reinforce the importance of these measures and thus enhance their efficacy. However, there is limited empirical evidence on this point, and the risks of using multiple broad-based measures in the decision-influencing role are also acknowledged (Ittner, Larcker, & Meyer, 2003; Narayanan & Davila, 1998; Otley, 1999). On the one hand, a broad-based set of measures is generally needed to capture an underlying business model and value drivers (Kaplan & Norton, 1996). Impacts on performance occur as a result of the influence these measures have on managerial decision making. Yet we know that decision making is also significantly influenced by evaluation mechanisms, and the literature is equivocal on the extent to which broad-based sets of decision-facilitating measures should be captured in evaluation.

Both economic and behavioral research recognizes the potential for variation in decision-facilitating and decision-influencing measures. The economics literature treats variation between decision-facilitating and decision-influencing measures as being driven by different risk preferences, contracting difficulties, and the possession of private information (Datar, Kulp, & Lambert, 2001; Gjesdal, 1982; Ittner, Larcker, & Meyer, 2003). In the behavioral literature, explanations for divergence in decision-facilitating and decision-influencing measures include cognitive limitations, evaluation time frames, measurement cost considerations, the importance of comparability of measures, and objectivity (Banker, Chang, & Pizzini, 2004; Ittner, Larcker, & Meyer, 2003; Lipe & Salterio, 2000; Malina & Selto, 2004). For example, Ittner, Larcker, and Meyer (2003) found empirically that the weights placed on non-financial measures in compensation plans (a decision-influencing role) had little to do with their ability to predict future financial performance (a decision-facilitating role). Furthermore, superiors used their subjective judgment to override the system. The contemporary performance measurement system was ultimately discarded in favour of a less complicated single performance measure evaluation system. Lipe and Salterio (2000, 2002) found that top management may collect multiple performance measures but adopt simplifying strategies in determining which information is used in evaluation and how that information is weighted. Malina and Selto (2004) document a reluctance

to persist with 'soft' Balanced Scorecard measures for the purposes of evaluation and a reversion to simple financial measures. This stream of literature suggests that using a contemporary performance measurement system in evaluation may add stress to the evaluation system, and that over time evaluation mechanisms may be simplified and disengaged from the broad-based measures relevant for decision facilitation. However, this literature does not examine the impact of divergence between decision-facilitating and decision-influencing measures on decision making.

While the extant literature typically studies the decision-facilitating and decision-influencing roles of performance measurement systems independently (Kelly, 2007; Sprinkle, 2003), there is some evidence of tensions and decision impacts associated with the interplay between these roles when they are examined jointly. For example, Magee and Dickhaut (1978) find that decision-influencing compensation plans condition information selection for problem solving; Sprinkle (2000) finds that decision-facilitating information enables learning to occur, but the magnitude of learning is responsive to measures used in evaluation and incentives; and Abernethy and Bouwens (2005) identify the tension between decision-influencing and decision-facilitating roles of information as an influence on resistance to change in the context of activity-based costing. These studies provide empirical evidence of the joint influence of decision-facilitating and decision-influencing roles of performance metrics in specific decision contexts.

There is a long history of studies in the accounting literature that inform our understanding of the impact of disengaging evaluation from decision-facilitating information. A key example is the reduced use of decision-facilitating project-life metrics such as net present value in long-term project decisions where annual return-on-assets is used as a managerial performance evaluation metric (Solomons, 1965). In more contemporary performance management settings, recent experimental evidence suggests that managers will shift their attention from measures that are not monitored or rewarded to areas that are (Ullrich & Tuttle, 2004) and are encouraged to incorporate a broader set of performance measures into their decision processes when incentives are added to those measures (Kelly, 2007). Also consistent with this literature, Lipe and Salterio's (2000) concern about the dominance of common measures in evaluation arises from empirical evidence that measures excluded from ex post performance evaluation are unlikely to be used in ex ante decision making (Holmstrom & Milgrom, 1991). Overall, this literature, in conjunction with contracting theory regarding effort allocation, suggests that use of decision-facilitating information may vary depending on the measures used in evaluation.

Feedback and feed-forward uses of decision-facilitating performance information

Performance measurement information is a significant input into an array of managerial decisions and uses. It is a catalyst to problem identification and remedial action, a means of focusing attention on critical processes, a valuable source of information for organizational learning, and a basis for revising plans and strategies (Chenhall, 2005; Kaplan & Norton, 1996; Sim & Killough, 1998; Simons, 2000). Several classifications of the ways in which managers utilize performance measures are evident in the literature in different eras. Important classifications include scorekeeping, problem solving and attention-directing (Simon, Guetzkow, Kozmetsky, & Tyndall, 1954); answer, dialogue or debate, learning and idea generation (Earl & Hopwood, 1979); feedback and feed-forward control (Emmanuel & Otley, 1985; Preble, 1992); and interactive and diagnostic uses (Simons, 1990). Simons' (1990) classification of interactive and diagnostic control use is the dominant framework in recent literature. For example, Widener (2007) investigates both antecedents and consequences of the interactive and diagnostic uses of performance measures. Henri (2006) examines the impact of diagnostic and interactive uses, and the tensions between them, on organizational capabilities. However, both the conceptualization and operationalization of interactive use remain problematic (Bisbe, Batista-Foguet, & Chenhall, 2007).3 Furthermore, interactive use, as described by Simons, refers to a use of performance measures by senior management in a way which is both context- and time-dependent. That is, control system elements are selected for high-level interactive attention at particular times because of their relevance to specific contemporaneous strategic uncertainties (Simons, 1990; Simons, 1995). These context- and time-dependent attributes of interactive use render the diagnostic/interactive distinction problematic in modelling universal distinctions in use of performance measures. In this study we conceptualize the use of performance measures in a feedback/feed-forward dichotomy. As a result of some degree of conceptual overlap, we are able to situate our results within extant literature that uses a range of control system frameworks. However, adopting the feedback/feed-forward distinction allows us to focus on distinctive uses of performance measures by subunit managers and the impact of these measures on individual decision making without reference to engagement by superiors or to varying levels of attention to specific measures at different points in time.

3 Interactive control use is a difficult construct to measure and theorize given, as Bisbe et al. (2007) point out, it consists of five dimensions. The instability of interactive control systems (particularly based on exploratory factor analysis) is evident in a comparison of Bisbe and Otley (2004), Widener (2007), Abernethy and Brownell (1999), and Henri (2006). For example, Widener (2007) separated interactive from diagnostic use based on the extent of top/operating manager involvement while Henri (2006) separated interactive from diagnostic use based on the use of measures.

Both feedback and feed-forward performance measures are part of a cybernetic control system. The primary difference between them is that feedback control focuses on the assessment of actual outcomes while feed-forward control focuses on the formulation and use of predictions (Emmanuel & Otley, 1985). In using measures for feedback control, managers examine variances between actual and expected outcomes, and then work to determine the cause of the variance (Emmanuel, Otley, & Merchant, 1990; Preble, 1992). Thus the use of performance information

as a feedback control mechanism provides managers with information on outcomes that are not meeting expectations, acting as a catalyst for problem identification. It stimulates problem solving, the need for corrective action, and organizational learning, all within the domain of existing activity (Emmanuel et al., 1990; Ferreira & Otley, 2009), thereby focusing managers on the achievement of current goals (Emmanuel & Otley, 1985; Nørreklit, 2000; Simons, 2000).

However, while the use of performance measures as feedback control enhances the ability of the firm to meet current performance expectations (Emmanuel et al., 1990; Simons, 2000) and assists managers in understanding the impact of past decisions, it does little to enhance alertness to the future (Nørreklit, 2000). Moreover, waiting for outcomes to be realized for use in feedback control may generate a time delay too lengthy for effective decision making (Emmanuel et al., 1990; Preble, 1992). Thus managers must supplement feedback control with feed-forward control, in which performance measurement information is used to facilitate goal setting and the development of action plans (Emmanuel et al., 1990; Nørreklit, 2000). In using measures for feed-forward control, managers examine variances between predicted and desired outcomes, and work to minimize the variance (Emmanuel et al., 1990). As Emmanuel et al. (1990) note, planning is a prime example of feed-forward control, and "a major purpose of preparing plans is to communicate intended strategies" (Simons, 2000, p. 32). These plans help guide the successful implementation of an organization's intended strategy. Consensus on the organization's intended strategy positions the organization to effectively recognize strategic opportunities as they arise and successfully seek any new capabilities necessary to capture these opportunities. In contemporary performance measurement settings, current performance information captures performance drivers that predict impacts on future performance outcomes (Kaplan & Norton, 1996; Nørreklit, 2000). Preble (1992) refers to feed-forward controls as "steering controls" where current performance is examined along with external information to predict and assess likely outcomes arising from current courses of action. Thus the use of performance measures as a means of signalling future outcomes, communicating strategy and goals, and as a catalyst to planning and goal-setting enhances the firm's future performance (Bisbe & Otley, 2004; Emmanuel et al., 1990; Kaplan & Norton, 1996).

Our first hypothesis draws together the literature which examines the impact of evaluation on the extent to which managers use decision-facilitating performance information and the literature which explicates the key uses of performance information for feedback and feed-forward purposes. We propose that the extent to which managers use decision-facilitating measures in both feedback and feed-forward uses will be influenced by the extent to which they perceive the decision-facilitating measures to be captured in evaluation. Such a proposition is consistent with both the contracting and behavioral management control literatures that establish the link between evaluation and effort allocation (Demski & Feltham, 1978; Ferreira & Otley, 2009; Gibbs et al., 2004; Kaplan & Norton, 2001; Kerr, 1975; Otley, 1999; Sim & Killough, 1998).

H1. The greater the commonality between decision-facilitating measures and those perceived to be used in evaluation, the greater the use of decision-facilitating measures for feedback and feed-forward control.

Hypothesis two – the impact of performance measure use on strategic capabilities

In order to capture the process by which effective use of decision-facilitating performance measures translates into organizational outcomes, we draw on resource-based theory. Studies adopting a resource-based perspective suggest that performance measurement systems indirectly influence organizational performance via their impact on the strategic capabilities of an organization (Henri, 2006; Teece, Pisano, & Shuen, 1997; Widener, 2007). The resource-based perspective on strategy stresses the pivotal importance of 'capabilities' in providing the organizational means to mobilize resources to achieve competitive advantage (Barney, 1991; Day, 1994; Grant, 1991; Kogut & Zander, 1992). Capabilities are defined as a firm's strengths based on a combination of resources working together (Grant, 1991). They are the difficult-to-imitate managerial, organizational, functional and technological skills that generate entrepreneurial rents (Teece et al., 1997).

To provide a source of competitive advantage, capabilities must be embedded in organizational routines (Grant, 1991) or "patterns of current practice and learning" (Teece et al., 1997, p. 578). Teece et al. (1997) draw attention to the value of coherence in co-ordinative and integrative management routines such as control systems in facilitating the development of capabilities. Performance measures are key organizational routines which act as an antecedent to organizational capabilities (Henri, 2006).

The accounting literature has begun to explore the role of performance measurement and control systems in enabling firms to exploit these fundamental drivers of performance (Henri, 2006; Widener, 2007). This literature explores the attributes of control systems that both enable and constrain the effective exploitation of strategic capabilities. For example, Henri (2006) investigates the role of interactive and diagnostic uses of performance measures in enabling and constraining the exploitation of a range of future-oriented capabilities. We adopt a similar perspective and model an indirect relationship between performance measurement and organizational performance via the impact of performance measurement on decision making and strategic capabilities. However, in addition to modeling the impact of performance measure use on future capabilities, we also recognize the use of performance measurement in managing current capabilities. This is critical as firms seek both stability and dynamism.

The management accounting literature relating to contemporary performance measurement expresses similar themes to the resource-based theory literature in suggesting that well-designed performance measurement systems can encourage a medium to long-term focus, investment in

intangible sources of future growth, as well as the constant reassessment and development of business unit capabilities (Kaplan & Norton, 1996; Kaplan & Norton, 2001; Nørreklit, 2000). Drawing on Kaplan and Norton (1996), Chenhall (2005, p. 400) argues that strategic performance measurement systems "assist in assessing the effectiveness of existing strategies and provide a basis for learning related to effecting successful strategies". Provision of routine performance information is one of the most critical organizational activities that affects managerial decision making and influences perceptions of the positioning of the organization in its competitive environment (Simons, 2000).

The feedback use of performance information facilitates the exploitation of existing capabilities (Maritan, 2001). Feedback on actual performance reports on the outcomes of applying existing capabilities in existing strategic and competitive settings. Feedback on variances between actual and expected outcomes aims to keep performance on track, monitor for exceptions, and allow the organization to learn how to best use its existing capabilities within its existing operating paradigm (Preble, 1992; Stalk, Evans, & Shulman, 1992). However, current performance is built on existing strengths that can decay or erode over time due to mismanagement of activities (Größler, 2007), changes in the environment (Simons, 2000), or other factors. Thus, the resource-based perspective holds that in addition to exploiting existing capabilities, firms must also be committed to searching for new capabilities and being alert to the need for strategic change in order to provide direction for future competitive advantage (Grant, 1991; Maritan, 2001; Teece et al., 1997). The use of performance information for feed-forward control will help facilitate this outcome since new opportunities may come to light during the planning process (Barney, 1991). Rather than monitoring for exceptions and forcing the organization to perform to existing plans, the feed-forward use of performance information focuses on strategic dialogue and informing planning and goal setting.

Capabilities are also important to their own renewal and growth. The dynamic capabilities literature suggests that capabilities are an important cog in a system critical to sustaining competitive advantage, that capabilities can both develop and erode over time, and finally, that they "depend on each other" (Größler, 2007, p. 251). Strategy development results from the understanding of how current resources and capabilities can be applied in ways that address shifts in strategic needs (Eisenhardt & Martin, 2000). Thus, we expect the firm's capacity to exploit existing capabilities will directly inform management of changes in the strategic environment that stimulate the need to search for and identify new opportunities.

To summarize, we hypothesize that the use of decision-facilitating measures will influence the ability of an SBU to exploit its current capabilities and identify new capabilities that underlie its competitive advantage (Chenhall, 2005; Kaplan & Norton, 1996; Simons, 2000). We draw on the strategic capabilities literature to argue that firms seek both stability and adaptability, and that performance measurement both reinforces existing capabilities and signals the need to challenge the status quo. We model both the exploitation of existing capabilities and the search for and identification of new capabilities, which allows us to provide clear insights regarding how the feedback and feed-forward uses of performance measures differentially affect both current and future capabilities. Thus, our second hypothesis predicts differential outcomes based on different uses. The feedback use of performance information focuses on current positioning and facilitates the firm's ability to exploit existing capabilities. In contrast, the feed-forward use of performance information focuses on future positioning and acts as a catalyst to the search for new opportunities. Consistent with Teece et al. (1997), the search is expected to be facilitated further by the firm's ability to exploit its existing capabilities.

H2a. The greater the extent to which decision-facilitating measures are used by SBU-level managers for feedback control, the greater the capacity of the SBU to exploit existing capabilities.

H2b. The greater the extent to which decision-facilitating measures are used by SBU-level managers for feed-forward control, the greater the capacity of the SBU to search for and identify new capabilities.

H2c. The greater the capacity of the SBU to exploit existing capabilities, the greater the capacity of the SBU to search for and identify new capabilities.

Hypothesis three – the impact of exploitation of strategic capabilities on performance
of changing environmental conditions to enhance the cur-
rent strategic stock, and/or identify new stocks, of strategic Consistent with contemporary performance measure-
resources and capabilities. (Größler, 2007). That is, the ment literature and the resource-based perspective, we
search for new capabilities is path dependent such that view performance as a function of a business unit’s ability
responsiveness to threats and opportunities is in part a to exploit its existing capabilities in pursuit of its current
function of the firm’s ability to leverage its existing inter- strategy as well as its potential to adapt to emerging
nal and external capabilities to address changing environ- opportunities and threats (Barney, 1991; Grant, 1991;
ments (Teece et al., 1997). Gathering performance Simons, 1990). Exploiting current capabilities and being
information in relation to existing capabilities also con- poised to take advantage of new capabilities positions
strains excessive change and enhances learning efficiency the firm to establish a competitive advantage that is re-
and effectiveness (Teece et al., 1997). New capabilities quired to sustain both current and future performance
are identified in the synthesis and application of current (Grant, 1991). In a cross-sectional study we are unable to
capabilities (Kogut & Zander, 1992) and promoted through track or anticipate the future performance consequences
the deployment and reconfiguration of existing capabilities associated with current investments in new capabilities.
In effect, the development of new capabilities is in itself a current outcome. We do, however, model the impact of exploiting existing capabilities on current performance.

H3. The greater the capacity of the SBU to exploit existing capabilities, the higher is current SBU-level performance.

Study design and method

Sampling frame

We used a mail survey to collect data from strategic business unit (SBU) managers from a cross section of firms in the manufacturing and service sectors in Australia. We identified 794 respondent business unit managers (from 715 unique firms) using the Kompass Company Directory. We required that the company employ more than 200 individuals, a restriction designed to ensure the firm is sufficiently large to structure according to SBUs and implement a formal performance measurement system. We offered each respondent a $10 charitable donation on his or her behalf to complete the survey. Survey administration took place over the course of six weeks and consisted of the initial survey mail-out, a follow-up reminder postcard and a final reminder letter and replacement survey to non-respondents (Dillman, 2000). A total of 188 completed surveys (representing 180 unique firms) were returned for a response rate of 24.58% (adjusted for undeliverable mail). Of these 188 returned surveys we use 183 complete surveys (from 178 unique firms) in the analyses that follow.4 Although we generally include surveys that may inadvertently miss an answer to one or more of the survey questions, we do not use surveys if respondents fail to indicate the performance measures and/or the weighting on those measures needed to assess the commonality between decision-facilitating measures and those used in evaluation.

Respondent demographics

Our respondents have worked for their current employer for an average of 12 years (median = 10 years), have on average 7.48 years of experience (median = 5.00 years) in their current SBU and have on average held the position as head of this unit for 4.17 years (median = 3.00 years). The mean age of respondents is between 40 and 49 years, and the mean experience of the SBU managers is on average the same as that of their superior (ranging from a minimum of 20 years less experience to a maximum of 24 years more experience). Further, the survey asked respondents to indicate their level of responsibility within the firm using a generic organizational chart. Ninety-eight percent of respondents indicated a position consistent with managing a SBU. The descriptive statistics reported are consistent with respondents occupying the hierarchical position we require to address our research questions.

Non-response bias

Of the 183 useable surveys we identify 29 early and 32 late respondents according to the return date of their survey (approximately 17.5% per group) and investigate non-response bias by comparing the two groups across both the demographic variables and the variables used in the model. The results are presented in Appendix A. We find no statistically significant differences between early and late respondents for any of the demographic variables (e.g. years of experience, years worked for current employer, etc.) or for the variables of interest used in the model. We conclude that the tests support the absence of significant non-response bias.

Variable measurement

To the extent possible, existing survey instruments are used or refined to capture the variables of interest in this study. The nature of our research questions requires that a number of instruments also be purpose developed. All instruments included in the final survey were evaluated for content validity by a seven-member panel of academics. Instruments were refined until consensus was reached that the survey items captured the theoretical construct of interest. The survey was also reviewed for undue complexity, ambiguity and clarity by three financial/accounting managers.

We establish construct validity by further assessing content validity, along with criterion and convergent/discriminant validity (Bollen, 1989; Kline, 1998). We assess content validity by specifying an appropriate domain of observables based on extant literature and through the use of validated measures where possible; criterion validity through correlation analyses which demonstrate that the constructs are behaving in a plausible manner; and convergent/discriminant validity through the use of factor analyses which support the uni-dimensionality of constructs (Nunnally, 1978), a multitrait matrix (Churchill, 1979), and the lack of significant cross-loadings in support of discriminant validity.5 We assess internal reliability through computation of Cronbach's Alphas, which provide an empirical measure of the internal consistency of each construct. To assist in establishing construct validity we first employ factor analyses. To test our hypotheses, we then jointly estimate a structural equation model, which consists of a measurement model and a structural model. In this joint model we confirm the initial measurement of our latent constructs. To assess the extent of common method bias, we run a Harman's one-factor test on the 14 survey questions that form the constructs in the base model (i.e. use of performance measures, strategic capabilities, and performance).

4 Since there is some risk of non-independence, we remove the five observations that we receive from SBUs within a repeated firm and rerun the SEM model on the reduced sample of 178 unique observations. The statistical inferences are unchanged.

5 The diagonal of the multitrait matrix contains the Cronbach's Alpha for each latent construct, while the remainder of the table presents the bi-variate correlation coefficients. Demonstrating that the internal reliability is greater than the bi-variate correlation coefficients is an indicator of discriminant validity (Churchill, 1979).
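The internal-consistency statistic referred to above can be reproduced directly from item-level responses. The following is an illustrative sketch only (the response matrix is invented, not the study's data), assuming the standard formula alpha = k/(k-1) × (1 − sum of item variances / variance of the total score):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of each respondent's total score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Invented 7-point Likert responses: 4 respondents x 3 items of one construct.
responses = np.array([
    [5, 6, 5],
    [3, 4, 3],
    [6, 6, 7],
    [2, 3, 2],
])
print(round(cronbach_alpha(responses), 3))  # → 0.975
```

Highly correlated items drive alpha toward 1; the 0.52 reported later for the two-item performance construct reflects the weaker association between its items.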
The factor solution yields six factors with eigenvalues greater than one. The first factor explains 24% of the total variance. Overall, the results support the absence of significant single-source bias (Podsakoff & Organ, 1986). Descriptive statistics for the multi-item variables are reported in Table 1 and the multitrait matrix is reported in Table 2. Abbreviated survey questions are included in Appendix A.

Table 1
Descriptive statistics for survey items.

                                                    N    Min  Mean  Med.  Max  Std. dev.  Skewness  Kurtosis
Feed-forward control use
Set performance goals                               183  3    6.37  7     7    0.840      1.621     3.240
Guide strategy implementation                       183  1    5.74  6     7    1.155      1.040     1.098
Develop action plans                                183  3    6.04  6     7    0.951      0.890     0.476
Communicate aspects of strategy                     183  2    5.91  6     7    1.045      1.052     1.392
Feedback control use
Promote organizational learning                     182  2    4.91  5     7    1.209      0.076     0.439
Analyze the impact of past decisions                183  1    4.93  5     7    1.359      0.508     0.104
Prompt re-examination of strategies and targets     183  2    5.47  6     7    1.058      0.413     0.169
Identify need for corrective actions                183  3    6.09  6     7    0.875      0.779     0.180
New capabilities
Able to sense the need for strategic change         181  2    5.17  5     7    1.210      0.505     0.140
Able to seek new capabilities                       181  1    4.79  5     7    1.325      0.534     0.043
Existing capabilities
Able to exploit its current capabilities            182  2    5.29  5     7    1.085      0.485     0.348
Able to renew its current capabilities              173  2    4.81  5     7    1.231      0.462     0.034
Performance
Overall financial performance relative to
  competitors (reverse coded)                       176  1    3.06  3     7    1.218      0.294     0.581
Overall performance of business unit relative
  to expectations                                   180  2    4.78  5     7    1.194      0.478     0.082

Table 2
Multitrait matrix.

           FEED-FWD   FEEDBACK   CAP_N      CAP_E     PERF
FEED-FWD   0.770
FEEDBACK   0.549***   0.720
CAP_N      0.293***   0.213***   0.670
CAP_E      0.197**    0.178**    0.270***   0.670
PERF       0.116      0.073      0.157*     0.277**   0.520

FEED-FWD: use of performance information for feed-forward control. FEEDBACK: use of performance information for feedback control. CAP_N: new capabilities. CAP_E: existing capabilities. PERF: business unit performance.
* Significant at p-value <0.10 (two-tailed). ** Significant at p-value <0.05 (two-tailed). *** Significant at p-value <0.01 (two-tailed).

Performance measurement information (commonality)

We asked respondents to list up to five measures considered most informative in managing their business unit and to weight the relative informativeness of each of these measures. We also asked respondents to identify up to five measures they perceive their superiors use to evaluate their performance and to similarly weight the relative importance of these measures in the evaluation process. From these responses we construct a measure of weighted overlap (LAP_W) to capture the extent to which decision-facilitating measures are also used in evaluation.

In determining the extent of commonality between the two sets of measures, the question arises as to whether decision-facilitating measures are 'captured in evaluation' to the extent that they are identical to those perceived to be used to evaluate the manager using them in decision making, or whether some level of conceptual consistency between the two sets of measures is sufficient. Thus, for example, revenue growth and cost reduction through quality improvement may both be important decision-facilitating measures. The use of a profit measure in evaluation is to some extent conceptually consistent with these measures in that the use of revenue growth and quality metrics to facilitate decisions should improve profit. Similarly, profit may be a key decision-facilitating measure that could be considered conceptually consistent with the use of return-on-assets (ROA) in evaluation. Profit improvement should feed through to ROA improvement. In both of these cases the measures are arguably consistent in that there is potential for them to achieve the same decision outcome. However, in both cases the measures do not "match", as the broader performance measures used in evaluation offer SBU managers more degrees of freedom as to how decision outcomes are achieved. The manager evaluated on profit might ignore quality metrics and revenue growth and find other means of increasing profit. If revenue growth and quality metrics are key drivers of long-term business unit performance, then the alternative actions chosen may compromise the fundamental competitive capabilities of the business unit while delivering profit. Similarly, the manager evaluated on ROA may delay or restrict new investment rather than improve profit in order to increase ROA. Alternatively, reliance on "matched" measures for evaluation and decision facilitation offers subunit managers minimal degrees of freedom to achieve desired outcomes. To avoid making any subjective judgements about the level of conceptual consistency between decision-facilitating and decision-influencing measures, we examine the implications of variation in the extent to which measures are identical across the two sets.

Construction of the weighted overlap measure involves the manual matching of the decision-facilitating measures
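The Harman's one-factor test described above asks how much variance a single unrotated factor absorbs across all survey items. The sketch below shows one common implementation of this diagnostic, approximating the first factor by the leading principal component of the item correlation matrix; the data are illustrative, not the study's 14 items:

```python
import numpy as np

def first_factor_share(data: np.ndarray) -> float:
    """Share of total variance captured by the first principal component
    of the item correlation matrix (a common Harman's one-factor proxy)."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # sort descending
    return eigvals[0] / eigvals.sum()

# Illustrative data: two identical items plus one orthogonal item,
# so the first factor is dominant but does not absorb everything.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, -1.0, -1.0, 1.0])  # uncorrelated with x by construction
items = np.column_stack([x, x, y])
print(first_factor_share(items))  # ≈ 0.667
```

In the study a first-factor share of 24% (one of six retained factors) was read as evidence against pervasive single-source bias; the function above only illustrates the computation, not that judgment.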
and decision-influencing measures lists provided by each respondent. Consensus on this 'matching' process was reached between the three researchers. We summed the weights on the identified measures to form LAP_W.6 LAP_W ranges from 0 to 100, with a mean of 62.5 (median = 65).7 We admitted limited discretion in this matching process, treating all dissimilar measures as non-overlapping. The limited discretion we used related only to very minor changes in terminology between the two lists (e.g. 'cost control' in the decision-facilitating list and 'expense control' in the same manager's decision-influencing list were considered a match).8

The calculation of LAP_W, necessary to make the data tractable for use in hypothesis testing, results in the loss of rich information as to the nature of the performance measures reported by respondents. As such, we explore the nature of the reported performance measures in the results section and provide descriptive information to assist the reader in contextualizing the results.

Use of performance measurement information

We measure feedback and feed-forward use of performance measurement information using eight questions based on extant literature (Bisbe & Otley, 2004; Chenhall, 2005; Emmanuel et al., 1990; Ittner, Larcker, & Randall, 2003; Kaplan & Norton, 1996; Simons, 2000; Sprinkle, 2003). Respondents were asked to rate the extent to which they actually use the decision-facilitating measures to: (a) set performance goals for the business unit and/or business unit employees; (b) guide strategy implementation; (c) promote organizational learning; (d) analyze the impact of past decisions; (e) prompt re-examination of strategies and targets; (f) develop action plans; (g) communicate important aspects of the business unit's strategy; and (h) identify the need for corrective actions. Items a, b, f and g load on factor 1, which we label feed-forward control (FEED-FWD), while items c–e and h load on factor 2, which we label feedback control (FEEDBACK). Our feed-forward use construct captures the use of performance measures as a means to alert managers to the future through the communication of strategy and the development of goals and action plans (Emmanuel et al., 1990). This factor contains items consistent with the theoretical attributes of feed-forward control identified by Nørreklit (2000), which include clarifying and communicating strategy and planning and target setting. Our feedback use construct captures the variance between actual outcomes and stated objectives as well as the learning generated by these variances (Emmanuel et al., 1990). This construct is aligned with the attributes of feedback control discussed by Nørreklit (2000), which include supplying strategic feedback, strategy review and learning. Both the feed-forward and feedback constructs are uni-dimensional. The constructs capture 59% of the explained variance, and the Cronbach's Alphas are 0.77 and 0.72, respectively.

Strategic capabilities

We draw on the principles of resource-based theory and the dynamic capabilities literature to purpose develop a measure of the firm's ability to exploit and renew distinctive firm competencies (Day, 1994; Teece et al., 1997). We asked respondents to rate the extent to which their business unit is able to: (a) exploit current capabilities; (b) renew current capabilities; (c) sense the need for strategic change; (d) seek new capabilities in light of the need for strategic change; and (e) take advantage of new opportunities as they arise. A factor analysis of these five items reveals that items c and d load on factor 1, which we call "new capabilities" (CAP_N), while items a and b load on factor 2, which we call "existing capabilities" (CAP_E). We exclude item e as it fails to load on either factor. Overall, the constructs capture 69% of the explained variance and the Cronbach's Alphas are 0.67 and 0.67, respectively. CAP_N and CAP_E are positively correlated, 0.376 (p < 0.01) and 0.352 (p < 0.01), respectively, with questions that measure the extent to which the SBU can respond quickly to changes in the external environment. CAP_N and CAP_E are also positively correlated, 0.212 (p < 0.01) and 0.275 (p < 0.01), respectively, with the extent to which the SBU's available performance information enhances its ability to respond to changes in the external environment. These correlations help demonstrate criterion validity.

Performance

We capture current business unit performance by asking respondents to rate their performance over the past year relative to competitors (reverse scored where 1 = 20% better than competitors, 7 = 20% worse than competitors) and the overall performance of the business unit relative to expectations (where 1 = poor, 7 = outstanding). We use a relative measure of performance as absolute performance is not meaningfully comparable across cases and over time (Andrews, Carpenter, & Gowen, 2001). Our relative performance measure is consistent with the literature that commonly measures performance relative to competitors and/or expectations (Chapman & Kihn, 2009; Henri, 2006; Pizzini, 2006; Widener, 2007).

A factor analysis on the two items reveals one factor which we label "performance" (PERF). The construct captures 67% of the explained variance and the Cronbach's Alpha is 0.52. We validate the use of performance by establishing convergent validity with an alternative measure of performance. We find that PERF is significantly correlated (r = 0.276, p < 0.01) with a measure capturing the

6 To illustrate the calculation of LAP_W consider the following: Respondent A reports that sales, customer satisfaction, staff satisfaction, budget/expenses, and business improvement are the five measures considered most informative for managing their operations, while their superior evaluates them on sales, profit, customer satisfaction, safety, and business improvement. The measures that match are sales, customer satisfaction, and business improvement. The respondent assigned these three measures weights of 30%, 20%, and 15%, respectively, in their decision-facilitating role. Thus LAP_W is 65%, which represents the sum of the weights on the decision-facilitating measures that are also used in evaluation. We also test our model using an "unweighted" overlap variable and find that the statistical inferences are unchanged.

7 Sixteen firms report that there is no overlap in the use of DI and DF performance measures.

8 Some confidence as to the reliability of the matching process is provided by the high consensus among the three authors when independently completing this task. We differed in the coding of 13 out of a total of 884 reported measures across all surveys, for an agreement rate of 98.53%.
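Footnote 6's worked example can be expressed as a small routine. Note that this sketch matches measures by exact name, whereas the authors matched manually and allowed minor terminology differences; the weights shown for Respondent A's two unmatched measures are assumptions, since the paper reports only the weights on the three matched measures:

```python
def weighted_overlap(facilitating: dict, influencing: set) -> float:
    """LAP_W: sum of decision-facilitating weights (in %) on measures
    that also appear in the manager's decision-influencing list."""
    return sum(w for measure, w in facilitating.items() if measure in influencing)

# Respondent A from footnote 6. The 20% and 15% weights on the two
# unmatched measures are illustrative assumptions (weights sum to 100).
facilitating = {
    "sales": 30,
    "customer satisfaction": 20,
    "staff satisfaction": 20,   # assumed weight
    "budget/expenses": 15,      # assumed weight
    "business improvement": 15,
}
influencing = {"sales", "profit", "customer satisfaction", "safety",
               "business improvement"}
print(weighted_overlap(facilitating, influencing))  # → 65, as in footnote 6
```

The "unweighted" variant mentioned in footnote 6 would simply count matched measures instead of summing their weights.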
respondents' performance relative to its competitors with respect to: (a) market share; (b) profit; and (c) rate of sales growth, which demonstrates criterion validity.9 We also test the sensitivity of our results to alternative measures of performance (see next section).

Results

Decision-facilitating and decision-influencing performance measures

We begin by providing a brief descriptive analysis of the nature of the performance measures reported by respondents as decision-facilitating and decision-influencing to assist in contextualizing the findings of this study.

Respondents provided 884 informative performance measures. All respondents listed at least two measures and 87.98% of respondents listed five measures. They provided 832 decision-influencing measures. All respondents listed at least two measures and 72.68% listed five measures. To make the range of reported measures tractable it is necessary to adopt a form of classification. We devised a coding schema that distinguishes five key types of performance measures: aggregate financial (e.g. profit); disaggregate financial (e.g. variances); customer-focused (e.g. customer satisfaction); internal process (e.g. quality); and people, learning and growth (e.g. employee satisfaction). These categories are broadly consistent with those used in the Balanced Scorecard literature and are described more fully in Appendix C.

We find that the types of measures identified by respondents as decision-facilitating and decision-influencing are reflective of the use of broad-based performance measurement systems. The lists of decision-facilitating performance measures include three or more types of performance measures in 66.12% of cases, and the lists of performance measures used in evaluation include three or more types of performance measures in 67.76% of cases. Examining the composition of performance measures used in decision-facilitating versus decision-influencing roles (see Appendix C) reveals that aggregate financial (AF) and people, learning and growth (PLG) measures are used more extensively in the decision-influencing role than the decision-facilitating role, while disaggregate financial (DAF) and customer-focused (CF) measures are used more in the decision-facilitating role than the decision-influencing role (all means significantly different at p < 0.00). Only the mean difference for internal process (IP) measures between the decision-facilitating and decision-influencing roles is insignificant (p = 0.202). These performance measurement patterns are consistent with those observed by Ittner, Larcker, and Randall (2003). While Ittner, Larcker, and Randall (2003) use different categories of measures and restrict their sample to financial services firms, they also document the use of a broad set of financial and non-financial measures for both problem identification (decision facilitation) and performance evaluation (decision influence). Similar to our findings, they find a heavy weight on short-term financial measures for evaluation and note that these measures are significantly less important than customer-focused and operational measures in capturing value drivers of the business. They also document both similarities and differences in the extent of use of particular categories of performance measures between their problem identification and evaluation roles.10

To test whether these patterns are similar across instances where decision-facilitating and decision-influencing measures reported by respondents vary in commonality (i.e. situations of high and low weighted overlap), we split the sample at the mean weighted overlap and repeat the above analysis (see Appendix D).11 We find similar patterns in relative weight on decision-facilitating and decision-influencing uses across the range of categories of measures in conditions of both high and low commonality. We also examine the similarity of the composition of decision-influencing measures and decision-facilitating measures, separately, in high and low levels of weighted overlap. These tests reveal two significant differences. First, where overlap is high the lists of decision-facilitating measures reported by respondents include significantly more AF measures than where overlap is low (t = 2.726, p = 0.007). Second, where overlap is low the lists of decision-influencing measures reported by respondents include significantly more PLG measures than where overlap is high (t = 2.068, p = 0.040).

The above analyses suggest important performance measurement system implications that inform the context of the model testing that follows. First, respondents typically report the use of broad-based performance measurement systems that reflect an array of financial and non-financial measures.12 In this sample AF measures are more important in the decision-influencing role than the decision-facilitating role. Conversely, DAF and CF measures are more evident in the decision-facilitating role than the decision-influencing role. These results are not unexpected. Both results are consistent with AF measures being weighted highly in evaluation, possibly due to their scope in capturing outcomes of a range of decisions. Nonetheless, both SBU managers and their superiors use a range of measures from all categories such that aggregate financial measures are common, but do not dominate either role. It is also evident that when AF measures are considered decision-facilitating, the overlap between decision-facilitating and decision-influencing measures tends to be higher. As we have identified a potential confounding effect of the extent of reliance on AF measures in our model, we examine

9 This is a uni-dimensional variable with explained variance of 58% and a standardized Cronbach's Alpha of 64%.

10 Ittner, Larcker, and Randall (2003) appear to observe more consistency than we do between measures across roles, which is perhaps not surprising given their single-industry focus.

11 The mean weighted overlap value (62.5%) is consistent with the median weighted overlap value of 65%.

12 Note that the number of measures that overlap between the DI and DF roles is significantly and positively correlated with LAP_W (r = 0.867, p < 0.01), thus indicating that higher-overlap cases are drawing on a range of measures in both DF and DI roles and that overlap is not driven by cases with one or two measures in both categories which happen to overlap.
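The subsample comparisons reported above are two-sample t-tests on group means. The paper does not state which variant was used; the sketch below implements Welch's unequal-variance form on toy data purely for illustration:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and its approximate degrees of freedom
    (Welch-Satterthwaite), which do not assume equal group variances."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Toy counts of AF measures listed by two hypothetical overlap groups.
high_overlap = [1, 2, 3, 4]
low_overlap = [3, 4, 5, 6]
t, df = welch_t(high_overlap, low_overlap)
```

The resulting t statistic would then be referred to a t distribution with df degrees of freedom to obtain the kind of p-values reported in the text.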
H2a (0.270**)

Feedback
Existing Capabilities
H1 (0.197**) Control Use H3 (0.454***)
Decision-Facilitating
Performance Measures

Commonality Current SBU-Level


H2c (0.360**)
(Overlap) Performance

Decision-Influencing
Performance Measures
H1 (0.256***) Feed-Forward
New Capabilities
Control Use
H2b (0.340***)

*** **
Fig. 2. Depiction of significant results – structural (base) model, showing standardized coefficients. , p-values significant at p < 0.01, 0.05 respectively
(two-tailed).

the robustness of our results after controlling for this Table 3


effect.13 Measurement model.

Unstandardized Standardized
loading loading
Feed-forward control use
Hypothesis testing
Set performance goals 0.544*** 0.593
Guide strategy implementation 1.000 0.792
We use the AMOS 5.0 software program, with maxi- Develop action plans 0.708*** 0.681
mum likelihood estimation technique, to estimate the base Communicate aspects of strategy 0.743*** 0.651
model illustrated in Fig. 1. We use structural equation Feedback control use
modelling (SEM) since it allows us to explicitly model mea- Promote organizational learning 0.620*** 0.501
surement error of the latent dependent and independent Analyze the impact of past 1.000 0.716
decisions
variables and model multiple latent dependent variables.
Prompt re-examination of 0.768*** 0.707
Our sample is 183 observations14 and we estimate 35 strategies and targets
parameters, resulting in a subjects-to-parameter ratio of Identify need for corrective 0.536*** 0.597
5.23, which exceeds the rule of thumb of 5:1 (Hair, Anderson actions
New capabilities
(Jr.), Tatham, & Black, 1995).15 We show significant results in
Fig. 2 and report the results of the measurement model in Table 3 and the structural model in Table 4.16

Table 3 (continued)
Able to sense the need for strategic change                         0.924***   0.702
Able to seek new capabilities                                       1.000      0.694
Existing capabilities
Able to exploit its current capabilities                            0.935***   0.719
Able to renew its current capabilities                              1.000      0.695
Performance
Overall financial perf. relative to competitors (reverse coded)     0.594**    0.453
Overall performance of business unit relative to expectations       1.000      0.770
*** p-value significant at p < 0.01.
** p-value significant at p < 0.05.

13 We control for the weighted overlap of aggregate financial measures (LAP_W_AF). We implement this control by including paths from LAP_W_AF to each of the performance measure uses. We find that the statistical inferences are unchanged when controlling for LAP_W_AF and that the paths for the base model as presented in Fig. 2 remain qualitatively unchanged. This provides assurance that the weight placed on aggregate financial measures is not driving the results of our model.
14 Boomsma and Hoogland (2001) suggest a sample size of ±200 is sufficient for adequate maximum likelihood estimation.
15 To determine the extent to which our results are sensitive to potential issues related to sample size, we treat our constructs as manifest variables using factor scores created using SPSS and run only the structural model in AMOS. This results in an estimation of 13 parameters and a subjects-to-parameter ratio of 14.1. The statistical inferences are unchanged. We also bootstrap the results (200 samples with replacement), which does not assume multivariate normality (nor does it rely on asymptotic distribution theory), to estimate p-values (Kline, 1998). We find that the results are qualitatively unchanged. Thus we conclude that our results are not sensitive to issues of sample size and normality.
16 The Mahalanobis D2 scores indicate that one observation may exert undue influence on the determination of model parameters. We exclude this observation, rerun the base model, and conclude that statistical inferences are not changed. We note that our data do not differ strongly from univariate normality; the maximum absolute values for skewness and kurtosis are 1.621 and 3.240, respectively, well within suggested ranges (Kline, 1998). The bi-variate correlation coefficients reported in Table 2 show that multi-collinearity is not likely.

Measurement model and assessment of model fit

We assign (i.e. identify) each latent variable a scale by setting the loading of one item to 1.0 since the latent variables are not directly measured (Kline, 1998). In Table 3 we report the results of the measurement model. This confirmatory analysis yields standardized loadings greater than 0.50 (with the exception of one performance item that is 0.453), all of which are significant (p-value <0.05), which provides evidence of convergent validity (Hair et al., 1995). In Table 4, we report the results of the structural model.
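Footnotes 15 and 16 describe robustness checks — bootstrapping with replacement, Mahalanobis D2 screening for influential observations, and univariate skewness/kurtosis — that the authors ran in SPSS and AMOS. A minimal sketch of the same diagnostics on synthetic data; the sample size, items, and resampled statistic here are illustrative assumptions, not the study's variables:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative stand-in for a survey data set (n = 200 observations, 4 items).
X = rng.normal(size=(200, 4))

# Univariate normality screens (cf. footnote 16): skewness and excess kurtosis.
max_skew = np.abs(stats.skew(X, axis=0)).max()
max_kurt = np.abs(stats.kurtosis(X, axis=0)).max()

# Mahalanobis D^2 to flag potentially influential observations.
diff = X - X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
flagged = np.where(d2 > stats.chi2.ppf(0.999, df=X.shape[1]))[0]

# Bootstrap (200 resamples with replacement, cf. footnote 15): here the
# resampled statistic is simply the correlation between two items.
boot = np.empty(200)
for b in range(200):
    idx = rng.integers(0, len(X), len(X))
    boot[b] = np.corrcoef(X[idx, 0], X[idx, 1])[0, 1]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])  # percentile interval
```

Excluding the flagged observations and re-estimating, or comparing the percentile interval with its normal-theory counterpart, mirrors the logic of the two footnotes.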
700 J. Grafton et al. / Accounting, Organizations and Society 35 (2010) 689–706
Table 4
Base structural model. This table presents the results of a structural equation model based on the theoretical model depicted in Fig. 1. The standardized coefficients are presented.

Independent variable   Dependent variable   Direction   Coefficient
LAP_W                  FEEDBACK             +           0.197**
LAP_W                  FEED-FWD             +           0.256***
FEEDBACK               CAP_E                +           0.270**
FEED-FWD               CAP_N                +           0.340***
CAP_E                  CAP_N                +           0.360**
CAP_E                  PERF                 +           0.454***

Model fit
Chi-square   186.709
p-value      0.000
DF           85
CMIN/DF      2.197
RMSEA        0.081
GFI          0.892
CFI          0.831

LAP_W: extent of commonality (weighted) between decision-facilitating measures and decision-influencing measures.
FEED-FWD: use of performance information for feed-forward control.
FEEDBACK: use of performance information for feedback control.
CAP_N: new capabilities.
CAP_E: existing capabilities.
PERF: business unit performance.
*** Significant at p-value < 0.01 (two-tailed).
** Significant at p-value < 0.05 (two-tailed).

In sum, the standard measures of model fit support the hypothesis that the relationships suggested by the measurement and structural equation model are acceptable.17 We perform an additional robustness test for alternative models and model fit later in this section.

In general, we conclude that the extent to which managers perceive decision-facilitating measures are used in their evaluation is positively associated with the use of those performance measures for both feedback and feed-forward control purposes. Feed-forward control use is positively associated with the firm's ability to identify new strategic capabilities, while the use of performance measures for feedback control purposes is positively associated with the firm's ability to exploit its existing capabilities. Existing capabilities also influence the firm's new capabilities as well as drive its current performance. We now turn to a discussion of the specific hypotheses.

Discussion of hypotheses and path coefficients

Table 4 reports the path coefficients for each of our hypotheses of interest and provides evidence broadly consistent with our proposed model. First, the results indicate that greater commonality between decision-facilitating and decision-influencing performance measures encourages managers to use decision-facilitating measures more for both feedback (p < 0.05) and feed-forward (p < 0.01) control. This finding is consistent with literature suggesting that the tendency to only partially capture decision-facilitating performance measurement information in evaluation mechanisms potentially distorts decision making by increasing the salience of those decision-facilitating measures that are incorporated in evaluation relative to those that are not. While the role of evaluation mechanisms in directing effort is widely acknowledged, this study isolates the specific impact of evaluation on the extent of use of measures designed for decision facilitation. Thus, evaluation is an important link in the control structure of organizations (Ferreira & Otley, 2009) and this finding reinforces the need for consistency between the decision-facilitating performance system and those performance measures used by superiors in the evaluation process.

In turn, the increased use of decision-facilitating measures for feedback control is shown to be positively associated with the firm's ability to exploit existing strategic capabilities (p < 0.05), while correspondingly the increased use of such measures for feed-forward control is positively associated with the firm's capacity to identify and develop new strategic capabilities (p < 0.01). These two results provide support for H2a and H2b respectively. As the firm's capacity to exploit existing capabilities increases, so too does its ability to pursue new capabilities, providing support for H2c (p < 0.05). These findings suggest that encouraging managers to incorporate performance measures considered useful for managing their business into their decision-making processes is important in directing their attention toward improving not only the mobilization of resources for immediate results, but also the identification and development of longer-term sources of competitive advantage.

Finally, we find support for H3 (p < 0.01) in that there is a propensity for greater current performance outcomes in firms better able to exploit their existing capabilities.18 Overall these results suggest that higher levels of commonality between decision-facilitating and decision-influencing sets of measures are associated with greater use of performance measurement information, the development of strategic capabilities and higher subunit performance outcomes. Taken together these results provide cross-sectional evidence indicating several implications of variation between decision-facilitating and decision-influencing performance metrics.

17 An insignificant Chi-square (Joreskog, 1969), a CMIN/DF ratio less than 5 (Wheaton, Muthen, Alwin, & Summers, 1977), a CFI and GFI close to 1 (Hu & Bentler, 1999), and an RMSEA of less than 0.08 (Browne & Cudeck, 1993) indicate good fit. Although we report the Chi-square test statistic, recent evidence shows that the Chi-square test statistic is sensitive to sample size and may be misleading (Byrne, 2001).
18 To test the robustness of our model to variations in the performance variable, we also run the model with performance measured as the change in performance over the past year with respect to market share, profit and return on investment; and a second time for performance relative to competitors for profitability, rate of sales growth, and market share. We find that our statistical inferences are qualitatively unchanged.

Additional analyses – base model

We perform a number of additional analyses in order to ensure the robustness of the results of our base model. In hypotheses one through three we model the impact of commonality between decision-facilitating measures and those used in evaluation on performance outcomes via their increased use in decision making, impacting in turn on the capacity of the SBU to exploit current and future capabilities.
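Footnote 17's conventional cutoffs, and the Chi-square difference test used in the nested-model comparisons of this section, are mechanical enough to sketch. The helper names, the 0.90 operationalization of "close to 1", and the example numbers below are our illustrative assumptions, not the authors' code:

```python
from scipy import stats

def screen_fit(chi2_p, cmin_df, gfi, cfi, rmsea):
    """Conventional SEM fit cutoffs (cf. footnote 17): an insignificant
    Chi-square, CMIN/DF < 5, GFI and CFI close to 1 (taken here as > 0.90),
    and RMSEA < 0.08. Returns pass/fail flags per criterion."""
    return {
        "chi2_insignificant": chi2_p > 0.05,
        "cmin_df_ok": cmin_df < 5,
        "gfi_ok": gfi > 0.90,
        "cfi_ok": cfi > 0.90,
        "rmsea_ok": rmsea < 0.08,
    }

def chi2_difference_p(chi2_full, df_full, chi2_constrained, df_constrained):
    """p-value of the Chi-square difference test for two nested models.
    An insignificant difference means the constrained (more parsimonious)
    model does not fit significantly worse (Anderson & Gerbing, 1988)."""
    return stats.chi2.sf(chi2_constrained - chi2_full,
                         df_constrained - df_full)

# Illustrative: trimming one path costs 1 df and raises Chi-square by 1.8;
# the difference is insignificant, so the trimmed model would be retained.
p = chi2_difference_p(chi2_full=180.0, df_full=84,
                      chi2_constrained=181.8, df_constrained=85)
```

Because the overall Chi-square is sensitive to sample size (Byrne, 2001), the screening flags are best read jointly rather than as a single verdict.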
An alternative perspective on the impact of commonality between decision-facilitating measures and those used in evaluation is that commonality directly influences measured performance outcomes. The use of measures in evaluation provides SBU-level managers with a direct incentive to improve their reported performance on these metrics (Ittner, Larcker, & Meyer, 2003). The mechanisms used to improve reported performance may include the decision responses that we have hypothesized. Alternatively, they may also include manipulation, gaming or simple heuristic responses to a performance metric perceived to be "off target". If the measures used in evaluation are also decision-facilitating, then such actions may coincidentally produce desired performance outcomes. To capture the potential for managers to influence performance outcomes by using mechanisms other than those we capture in our measure of the range of decision options, we test the direct link between performance outcomes and the extent of commonality between decision-facilitating measures and those used in evaluation. There is no support for this relation. That is, there is no evidence of a direct impact of incorporating decision-facilitating measures in evaluation on performance outcomes. The impact on performance is indirect, via the impact of performance measures on decision making and the impact of decision making on the ability to exploit current capabilities.

We also perform an additional (more complete) test of alternate models and model fit by estimating a "least constrained" structural model. The least constrained model is a structural model in which all unidirectional paths are estimated (Anderson & Gerbing, 1988). We then estimate a series of nested models by constraining one path coefficient in each subsequent model by setting the path coefficient equal to zero. We choose the least significant path in the structural model to constrain in each subsequent model. Since the models are nested, we use the Chi-square difference test to determine which model is most appropriate (Anderson & Gerbing, 1988; Kline, 1998). An insignificant Chi-square indicates that the model has not been overly trimmed and the more parsimonious model (i.e. the model with the constrained path) is better-fitting. This approach is suggested in the psychology literature (see for example Anderson & Gerbing, 1988) and has been used previously in the accounting literature (Abernethy & Lillis, 2001). One advantage of this method is that it allows alternative models to be tested against the preferred theoretical model (Anderson & Gerbing, 1988; Kline, 1998). The parsimonious model arrived at through this estimation technique is well-fitting, with a GFI = 0.910, a CFI = 0.930, and an RMSEA = 0.052, and substantiates the base model shown in Fig. 2. It also reveals the existence of a relationship between the use of decision-facilitating performance measures for feedback control and the use of those performance measures for feed-forward control (p < 0.01). While adopting different "use" constructs, this additional result is consistent with Widener (2007), who found that the interactive use of performance measures influences the diagnostic use of performance measures. Moreover, it is consistent with Henri (2006), who found empirical evidence of the dynamic tensions between interactive and diagnostic uses of performance measures theorized by Simons (2000).

Finally, as subjectivity, decision rights and information asymmetry are often shown to influence the use of performance measures in evaluation (Baiman & Demski, 1980; Baiman & Rajan, 1993), we check for moderating effects of these variables on the extent to which decision-facilitating measures used in the evaluation process affect the use of those measures.19 We do not find effects strong enough to conclude that decision rights, information asymmetry, or subjectivity is a moderating variable in this relation.20 Alternatively, to the extent that a principal determines the set of decision-facilitating measures to be used by subordinate managers based on their decision rights, the extent of information asymmetry, and the level of subjectivity to be employed in evaluation, as well as determining the measures to be used in evaluation, then the extent of overlap between the two sets of measures may be co-determined by the principal's decisions (Baiman & Demski, 1980; Bouwens & van Lent, 2007). Thus, we test for the impact of decision rights, information asymmetry, and subjectivity on the level of commonality or overlap. None of these variables is found to significantly explain the extent of overlap between decision-facilitating and decision-influencing measures.

19 We measure subjectivity using three questions that ask the extent to which performance is evaluated on performance measures (reverse coded), a subjective evaluation, and effort (Cronbach's Alpha = 0.65); information asymmetry using a six-item instrument developed by Dunk (1993) (Cronbach's Alpha = 0.78); and decision rights using a five-item instrument developed by Gordon and Narayanan (1984) and adapted by Abernethy, Bouwens, and van Lent (2004) (Cronbach's Alpha = 0.74).
20 We find that when decision rights and information asymmetry are high the relation between LAP_W and FEEDBACK is positive and significant (p < 0.10), but when decision rights or information asymmetry are low the relation between LAP_W and FEEDBACK is not significant. We also see that when subjectivity is low the relation between LAP_W and FEEDBACK is positive and significant (p < 0.05), but when subjectivity is high the relation is not significant. However, these effects are not strong enough to make the model better fitting when either path is allowed to vary, thus we cannot conclude that decision rights, information asymmetry, or subjectivity is a moderating variable.

Conclusions

In this study we investigate two means by which contemporary performance measurement systems may be expected to impact firm performance. First, we investigate the way in which the interplay between performance measurement and evaluation impacts on managerial use of performance measurement information. There is relatively little empirical evidence to determine whether such systems will achieve their intended objectives if they are not reflected in evaluation and reward schemes (Kelly, 2007; Ullrich & Tuttle, 2004). We hypothesize that the extent to which decision-facilitating measures are captured in evaluation mechanisms will influence the use of those measures and in turn affect organizational outcomes. Second, we consider the implications of the decision-making use of contemporary performance measurement systems for exploiting and developing strategic capabilities for sustained competitive advantage.
Consistent with studies that adopt a resource-based perspective, we hypothesize that the impact of contemporary performance measurement systems on organizational performance will in fact be indirect, via the effective management of strategic capabilities. The data collected for this study support a structural model in which the extent to which decision-facilitating measures are captured in evaluation mechanisms influences the use of those measures, which in turn impacts on the strategic capabilities and performance of the business unit, thus providing preliminary evidence on the processes through which contemporary performance measurement systems can influence performance outcomes.

The findings of this study are consistent with persistent themes in the strategic performance measurement literature that it is important to broaden evaluation protocols in line with the increasing use of broad-based decision-facilitating performance measurement systems. The inclusion of decision-facilitating measures in evaluation validates the importance of those measures and ensures their use in decision making. These results would caution against empirically documented tendencies either to focus evaluation on financial outcome measures, or to narrow the set of measures used in evaluation in response to concerns regarding the comparability, validity and objectivity of many strategic performance measures (Banker et al., 2004; Lipe & Salterio, 2000; Malina & Selto, 2004). These concerns, in combination with the cognitive limitations of evaluators, may systematically reduce the comprehensiveness of measures used in evaluation (Lipe & Salterio, 2000). Our findings suggest that such tendencies may significantly compromise the effectiveness of strategic performance measurement systems in practice.

This study also demonstrates the important implications of using performance measures for both feedback and feed-forward uses. By modelling both the exploitation of current capabilities and the search for new capabilities as theorized in the dynamic capabilities literature, we are able to clearly see that the two uses of performance measures serve two distinctly different purposes in supporting strategy implementation and development. The feedback use of performance measures significantly supports the exploitation of current capabilities, while the feed-forward use of performance measures supports the search for and identification of new capabilities. This finding complements well the findings of Henri (2006), who documents the potential adverse impact of diagnostic control on the development of new capabilities but does not document the positive impact of feedback control that we identify in relation to the exploitation of existing capabilities. This provides an alternative perspective on the dynamic tension that exists in both performance measurement (diagnostic/interactive; feedback/feed-forward) and in the management of capabilities (current and new). Moreover, in a supplemental analysis, we find that feedback and feed-forward uses of performance measures are related, consistent with the findings of Widener (2007), who finds a relation between diagnostic and interactive uses. Taken together these studies reflect the importance of striking a balance in the use of performance measurement to support both current and future capabilities as a means of securing competitive advantage in a changing competitive environment.

Our findings are of course subject to several limitations. First, while we took great care to validate our survey instrument, we acknowledge the difficulty of obtaining reliable data relating to the influence of information on decision making through the use of a mail survey. In addition, some of our key measures were purpose developed and thus may benefit from further validation in future studies. Other limitations commonly associated with survey research relate to the potential for non-response and common-rater bias. However, we have sought to minimize these effects through both careful instrument design and administration, and through the assessment of evidence of bias in the data collected. Finally, we test for a limited range of performance measures and uses across a broad range of industry and organizational settings. Due to sample size limitations, we are unable to fully test the sensitivity of the model to industry and other contextual effects.

The identification in this study of significant joint effects between decision-facilitating and decision-influencing roles of performance measures suggests the need for further research seeking to understand more of the ways these roles of information impact on decision making, especially in situations where the roles are potentially conflicting rather than mutually reinforcing. It would be interesting to examine in greater depth the interaction among performance measurement roles in the context of specific performance measurement protocols focused more or less intensely on financial measures, as well as the impact of the interaction on specific individual decision uses, such as learning or plan revision, rather than overall use patterns.

Finally, this study adds to the literature which explores the process by which performance measurement systems influence organizational outcomes. The incorporation of strategic capabilities as a mediator in this relationship is emerging as an important development in the literature (Henri, 2006; Widener, 2007). Our study highlights the importance of current capabilities in enhancing firms' strategic readiness but does not explore the process by which management control systems inform the further development of new strategic opportunities into sustainable competitive advantage.

Acknowledgments

We are grateful to Margaret Abernethy, Shannon Anderson, Henri Dekker, Frank Moers, Julia Mundy, Martyn Shoute, the two reviewers, and seminar participants at the University of Arizona, Erasmus University, Michigan State University, the University of Melbourne, Monash University, the University of New South Wales, the University of Technology – Sydney, Vrije Universiteit Amsterdam, the 2007 AAA Annual Meeting (Chicago), and the 2007 EAA Congress (Lisbon) for helpful comments; and to Andrew Leslie for invaluable research assistance. This research is supported by a University of Melbourne, Faculty of Business and Economics research grant.
Appendix A

Test of non-response bias (using late respondents)

Variable                                        Early respondents (n = 29)   Late respondents (n = 32)
Descriptive variables
Years experience in industry                    20.34      18.38
Years worked for current employer               12.31      11.50
Years worked in current SBU                      8.03       6.88
Years held current position                      5.14       3.88
Approx. number of full-time employees in SBU   459.10     499.10
Constructs of interest
Weighted overlap                                70.34      61.25
Feed-forward control use                         6.27       5.98
Feedback control use                             5.24       5.32
Capabilities new                                 5.21       5.13
Capabilities current                             5.33       5.03
Performance                                      5.02       4.66

**, * Means are significantly different at p-value <0.05, 0.10, respectively.
+ Means are significantly different at p-value = 0.103.

Appendix B

Abbreviated survey questions.21

Performance measurement information (commonality)
1. Identify the five (5) performance measures that are most informative to you in managing your business unit. Please weight the measures according to their relative informativeness.
2. Identify the five (5) performance measures that are used by your superior to evaluate your performance. These measures may or may not overlap with those identified earlier as being informative for managing your business unit. Please weight the measures according to their relative importance in evaluation.

Feed-forward use of performance measurement information
3. Please rate the extent to which you actually use the business unit performance measures identified above in Q1 to (scale of 1 = not at all, 4 = to some extent, 7 = to a great extent):
a. Set performance goals for the business unit and/or business unit employee
b. Guide strategy implementation
c. Develop action plans
d. Communicate important aspects of the business unit's strategy

Feedback use of performance measurement information
4. Please rate the extent to which you actually use the business unit performance measures identified above to (scale of 1 = not at all, 4 = to some extent, 7 = to a great extent):
a. Promote organizational learning
b. Analyze the impact of past decisions
c. Prompt re-examination of strategies and targets
d. Identify the need for corrective actions

New strategic capabilities
5. Please rate the extent to which your business unit is (1 = not at all, 4 = to some extent, 7 = to a great extent):
a. Able to sense the need for strategic change
b. Able to seek new capabilities in light of the need for strategic change

Existing strategic capabilities
6. Please rate the extent to which your business unit is (1 = not at all, 4 = to some extent, 7 = to a great extent):
a. Able to exploit its current capabilities
b. Able to renew its current capabilities

Performance
7. How would you rate your overall financial performance relative to your competitors over the past year? (reverse scale of better than competitors by 1 = more than 20%, 2 = 11–20%, 3 = less than 11%, 4 = relatively constant, and worse than competitors by 5 = less than 11%, 6 = 11–20%, and 7 = more than 20%)
8. How would you rate the overall performance of your business unit relative to expectations? (scale of 1 = poor, 4 = about average, 7 = outstanding)

21 Survey questions are shown in factor structure rather than original questionnaire sequence.

Appendix C

Distribution of weighted performance measures by type

Aggregate financial measures (AF) are those that capture financial 'outcomes', such as measures of net profit, profitability and return on funds employed. Disaggregate financial measures (DAF) are financial measures that are not necessarily 'outcomes' and include measures such as dollar measures of efficiency (e.g. variances) and employee productivity (e.g. sales dollars per employee). Thus, DAF measures are somewhat similar to internal process measures, except they are specified in financial terms. Non-financial measures that monitor key aspects of internal functioning are coded as internal process (IP) measures and include measures of efficiency, productivity, quality, safety, lead times, inventory, innovation and new product development. Customer-focused (CF) measures are those focused on assessing the way the business unit appears to customers, for example
customer satisfaction, complaints, market share and also measures focused on orders received. Finally, the category of people, learning and growth (PLG) measures captures those measures related to building knowledge/capabilities, as well as measures relating to employee satisfaction, leadership and communication within the business unit.

Decision-facilitating measures            Decision-influencing measures
Type     Percentage                       Type     Percentage
AF       24.34***              <          AF       31.52***
DAF      30.26***              >          DAF      23.46***
IP       25.08                 >          IP       23.24
CF        9.38***              >          CF        6.91***
PLG       9.17***              <          PLG      11.83***
UN        1.77                            UN        3.05
Totals  100.00                            Totals  100.00

AF: aggregate financial measures.
DAF: disaggregate financial measures.
IP: internal process measures.
CF: customer-focused measures.
PLG: people, learning and growth measures.
UN: unidentified measures.
*** Percentage of type of measures used for decision-facilitating is significantly different from decision-influencing at p-value <0.01.

References

Abernethy, M. A., & Bouwens, J. (2005). Determinants of accounting innovation implementation. Abacus, 41, 217–240.
Abernethy, M. A., Bouwens, J., & van Lent, L. (2004). Determinants of control system design in divisionalized firms. The Accounting Review, 79(3), 545–570.
Abernethy, M. A., & Brownell, P. (1999). The role of budgets in organizations facing strategic change: An exploratory study. Accounting, Organizations and Society, 24(3), 189–204.
Abernethy, M. A., & Lillis, A. M. (2001). Interdependencies in organization design: A test in hospitals. Journal of Management Accounting Research, 13, 107–130.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423.
Andrews, B. H., Carpenter, J. J., & Gowen, T. L. (2001). A new approach to performance measurement and goal setting. Interfaces, 31(3), 44–54.
Baiman, S., & Demski, J. S. (1980). Economically optimal performance evaluation and control systems. Journal of Accounting Research, 18, 184–220.
Baiman, S., & Rajan, M. V. (1993). The informational advantages of discretionary bonus schemes. The Accounting Review, 70(4), 557–580.
Banker, R. D., Chang, H., & Pizzini, M. J. (2004). The balanced scorecard: Judgmental effects of performance measures linked to strategy. The Accounting Review, 79(1), 1–23.
Barney, J. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99–120.
Bisbe, J., Batista-Foguet, J.-M., & Chenhall, R. H. (2007). Defining management accounting constructs: A methodological note on the risks of conceptual misspecification. Accounting, Organizations and Society, 32(7/8), 789–820.
Appendix D
Distribution of weighted performance measures across decision-facilitating and decision-influencing roles in cases of high and low weighted overlap

         Decision-facilitating measures                 Decision-influencing measures
Type     Percentage    Mean      Mean diff.    Type     Percentage    Mean

Panel A – high weighted overlap (mean > 62.502), n = 96
AF       29.05+++      28.82      4.680***     AF       33.03         32.96
DAF      29.22         29.57      4.784***     DAF      27.72         25.10
IP       22.51         22.78      0.326        IP       22.99         23.34
CF        9.06          9.01      1.368*       CF        7.78          7.74
PLG       8.21          7.94      1.947**      PLG       9.98++        9.76
UN        1.96                                 UN        1.50
Totals  100.00                                 Totals  100.00

Panel B – low weighted overlap (mean < 62.502)
AF       18.89+++      18.87     10.904***     AF       29.76         29.76
DAF      31.47         31.43      9.337***     DAF      21.99         21.99
IP       28.07         28.04      4.843        IP       23.53         23.53
CF        9.74          9.73      3.765**      CF        5.90          5.90
PLG      10.28         10.27      3.886*       PLG      13.98++       13.98
UN        1.55                                 UN
Totals  100.00                                 Totals  100.00

AF: aggregate financial measures.
DAF: disaggregate financial measures.
IP: internal process measures.
CF: customer-focused measures.
PLG: people, learning and growth measures.
UN: unidentified measures.
* Means are significantly different at p-value <0.10.
** Means are significantly different at p-value <0.05.
*** Means are significantly different at p-value <0.01.
++ Percentage of type of measures between panels is significantly different at p-value <0.05.
+++ Percentage of type of measures between panels is significantly different at p-value <0.01.
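Appendix D splits observations at the sample mean (62.502) of the weighted-overlap score built from the two five-measure, weighted lists elicited in Appendix B. The paper does not spell out the formula, so the following is one plausible reconstruction, for illustration only: sum, over the measures named in both lists, the smaller of the two weights. The measure names and weights below are hypothetical.

```python
def weighted_overlap(facilitating, influencing):
    """Overlap between two weighted measure sets (weights sum to 100 each).
    For each measure named in both sets, count the smaller of its two
    weights; disjoint sets score 0, identical sets score 100."""
    common = facilitating.keys() & influencing.keys()
    return sum(min(facilitating[m], influencing[m]) for m in common)

# Hypothetical manager: five decision-facilitating vs. five evaluation measures.
fac = {"net profit": 30, "on-time delivery": 25, "customer satisfaction": 20,
       "scrap rate": 15, "employee turnover": 10}
inf = {"net profit": 40, "ROI": 30, "on-time delivery": 20,
       "customer satisfaction": 5, "sales growth": 5}
score = weighted_overlap(fac, inf)   # 30 + 20 + 5 = 55
high_overlap = score > 62.502        # Appendix D's split at the sample mean
```

Identical lists score 100 and disjoint lists 0 under this reconstruction, consistent with the 0–100 range suggested by the Appendix A means (70.34 and 61.25).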
Bisbe, J., & Otley, D. (2004). The effects of the interactive use of management control systems on product innovation. Accounting, Organizations and Society, 29(8), 709–737.
Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley & Sons.
Boomsma, A., & Hoogland, J. J. (2001). The robustness of LISREL modeling revisited. In R. Cudeck, S. du Toit, & D. Sorbom (Eds.), Structural equation models: Present and future. A Festschrift in honor of Karl Joreskog (pp. 139–168). Chicago: Scientific Software.
Bouwens, J., & van Lent, L. (2007). Assessing the performance of business unit managers. Journal of Accounting Research, 45(4), 667–697.
Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models. Newbury Park, CA: Sage.
Byrne, B. M. (2001). Structural equation modeling with AMOS: Basic concepts, applications, and programming. New Jersey: Lawrence Erlbaum Associates.
Chapman, C. S., & Kihn, L.-A. (2009). Information system integration, enabling control and performance. Accounting, Organizations and Society, 34(2), 151–169.
Chenhall, R. H. (2003). Management control systems design within its organizational context: Findings from contingency-based research and directions for the future. Accounting, Organizations and Society, 28(2/3), 141–163.
Chenhall, R. H. (2005). Integrative strategic performance measurement systems, strategic alignment of manufacturing, learning and strategic outcomes: An exploratory study. Accounting, Organizations and Society, 30, 395–422.
Chenhall, R. H., & Langfield-Smith, K. (1998). The relationship between strategic priorities, management techniques and management accounting: An empirical investigation using a systems approach. Accounting, Organizations and Society, 23(3), 1–22.
Churchill, G. A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 64–73.
Datar, S., Kulp, S., & Lambert, R. A. (2001). Balancing performance measures. Journal of Accounting Research, 39(1), 75–92.
Davis, S., & Albright, T. (2004). An investigation of the effect of balanced scorecard implementation on financial performance. Management Accounting Research, 15(2), 135–153.
Day, G. S. (1994). The capabilities of market-driven organizations. Journal of Marketing, 58(4), 37–52.
Gordon, L. A., & Narayanan, V. K. (1984). Management accounting systems, perceived environmental uncertainty and organization structure: An empirical investigation. Accounting, Organizations and Society, 9(1), 33–47.
Grant, R. M. (1991). The resource-based theory of competitive advantage: Implications for strategy formulation. California Management Review, 33(3), 114–135.
Größler, A. (2007). A dynamic view on strategic resources and capabilities applied to an example from the manufacturing strategy literature. Journal of Manufacturing Technology Management, 18(3), 250–266.
Hair, J. F., Anderson, R. E., Jr., Tatham, R. L., & Black, W. C. (1995). Multivariate data analysis. Englewood Cliffs, NJ: Prentice Hall.
Hall, M. (2008). The effect of comprehensive performance measurement systems on role clarity, psychological empowerment and managerial performance. Accounting, Organizations and Society, 33(2), 141–163.
Henri, J.-F. (2006). Management control systems and strategy: A resource-based perspective. Accounting, Organizations and Society, 31(6), 529–558.
Holmstrom, B. R., & Milgrom, P. (1991). Multitask principal-agent analyses: Incentive contracts, asset ownership, and job design. Journal of Law, Economics and Organization, 7(Special issue), 24–52.
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6, 1–55.
Ittner, C. D., Larcker, D. F., & Meyer, M. W. (2003). Subjectivity and the weighting of performance measures: Evidence from a balanced scorecard. The Accounting Review, 78(3), 725–758.
Ittner, C. D., Larcker, D. F., & Randall, T. (2003). Performance implications of strategic performance measurement in financial services firms. Accounting, Organizations and Society, 28(7–8).
Joreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34, 183–202.
Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard – Measures that drive performance. Harvard Business Review, 70(1), 71–79.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard. Boston, MA: Harvard Business School Press.
Kaplan, R. S., & Norton, D. P. (2001). The strategy-focused organization: How balanced scorecard companies thrive in the new business environment. Boston, MA: Harvard Business School Press.
Kelly, K. O. (2007). Feedback and incentives on nonfinancial value drivers:
De Geuser, F., Mooraj, S., & Oyon, D. (2009). Does the balanced scorecard Effects on managerial decision making. Contemporary Accounting
add value? Empirical evidence on its effect on performance. European Research, 24(2), 523–556.
Accounting Review, 18(1), 93–122. Kerr, S. (1975). On the folly of rewarding A, while hoping for B. Academy of
Demski, J. S., & Feltham, G. A. (1976). Cost determination: A conceptual Management Journal, 18(4), 769–784.
approach. Ames: Iowa State University Press. Kline, R. B. (1998). Principles and practice of structural equation modeling.
Demski, J. S., & Feltham, G. A. (1978). Economic incentives in budgetary NY: Guilford Press.
control systems. The Accounting Review, 53(2), 336. Kogut, B., & Zander, U. (1992). Knowledge of the firm, combinative
Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. capabilities and the replication of technology. Organization Science,
Canada: John Wiley & Sons. 3(3), 383–397.
Dossi, A., & Patelli, L. (2008). The decision-influencing use of Lingle, J. H., & Schiemann, W. A. (1996). From balanced scorecard to
performance measurement systems in relationships between strategic gauges: Is measurement worth it? Management Review,
headquarters and subsidiaries. Management Accounting Research, 85(3), 56–61.
19(2), 126–148. Lipe, M. G., & Salterio, S. (2000). The balanced scorecard: Judgmental
Dunk, A. S. (1993). The effect of budget emphasis and information effects of common and unique performance measures. The Accounting
asymmetry on the relation between budgetary participation and Review, 75(3), 283–296.
slack. The Accounting Review, 68(2), 400–410. Lipe, M. G., & Salterio, S. (2002). A note on the judgmental effects of the
Earl, M. J., & Hopwood, A. G. (1979). From Management information to balanced scorecard’s information organization. Accounting,
information management. Paper presented at the proceedings of the Organizations and Society, 27(6), 531–540.
IFIPTC 8.2 working conference on the information systems environment, Magee, R. P., & Dickhaut, J. W. (1978). Effects of compensation plans on
Bonn, West Germany. heuristics in cost variance investigations. Journal of Accounting
Eisenhardt, K. M., & Martin, J. A. (2000). Dynamic capabilities: What are Research, 16(2), 294–314.
they? Strategic Management Journal, 21(10/11), 1105–1121. Malina, M. A., & Selto, F. H. (2001). Communicating and controlling
Emmanuel, C. R., & Otley, D. T. (1985). Accounting for management control. strategy: An empirical study of the effectiveness of the balanced
Worcester: Van Nostrand Reinhold Co. scorecard. Journal of Management Accounting Research, 13, 47–90.
Emmanuel, C., Otley, D. T., & Merchant, K. A. (1990). Accounting for Malina, M. A., & Selto, F. H. (2004). Choice and change of measures in
management control (2nd ed.). London: Chapman and Hall. performance measurement models. Journal of Management Accounting
Feltham, G. A., & Xie, J. (1994). Performance measure congruity and Research, 15, 441–469.
diversity in multi-task principal/agent relations. The Accounting Maritan, C. A. (2001). Capital investment as investing in organizational
Review, 69(3), 429–454. capabilities: An empirically grounded process model. Academy of
Ferreira, A., & Otley, D. (2009). The design and use of performance Management Journal, 44(3), 513–531.
management systems: An extended framework for analysis. Narayanan, V. G., & Davila, A. (1998). Using delegation and control
Management Accounting Research, 20(4), 263–282. systems to mitigate the trade-off between the performance-
Gibbs, M., Merchant, K. A., Van der Stede, W. A., & Vargus, M. E. (2004). evaluation and belief-revision uses of accounting signals. Journal of
Determinants and effects of subjectivity in incentives. The Accounting Accounting & Economics, 25(3), 255–282.
Review, 79(2), 409–436. Nørreklit, H. (2000). The balance on the balanced scorecard – A critical
Gjesdal, F. (1982). Information and incentives: The agency information analysis of some of its assumptions. Management Accounting Research,
problem. Review of Economic Studies, 49, 373–390. 11(1), 65–88.