

Table 2
Intercorrelation matrix.

1 2 3 4 5 6 7 8 9 10 11 12 13 14
1 Functional strategic decision influence –
2 Decision-facilitating use .36 .68
3 Use for accountability .24 .50 .77
4 PM reliability .33 .49 .25 .69
5 PM functional specificity .31 .47 .25 .57 .80
6 Firm size .06 .04 .06 .07 .04 .61
7 Decision-influencing use .29 .47 .34 .33 .22 .01 –
8 Functional background of the CEO .10 .07 .12 .08 .12 .05 .04 –
9 Market environment uncertainty .04 .00 .05 .21 .10 .13 .03 .01 –
10 Strategic focus differentiation .36 .23 .21 .22 .23 .03 .17 .16 .05 –
11 Strategic focus cost leadership .16 .20 .23 .18 .31 .00 .24 .05 .04 .15 –
12 Functional performance .26 .13 .03 .26 .21 .03 .05 .11 .09 .34 .09 .73
13 Functional self-participation in PMS design .04 .02 .00 .08 .04 .05 .11 .05 .16 .06 .02 .09 –
14 VP Marketing professional experience .15 .03 .05 .09 .07 .09 .01 .11 .06 .25 .08 .12 .14 –

Note: PM = performance-measure; PMS = performance-measurement system; VP = Vice President. Sample based on n = 192 firms. Absolute values of cor-
relation coefficients above .13 (.17) are significant on a 5% (1%) level. Diagonal entries (in bold) denote the square root of the average variance extracted (not
calculated for index-based constructs). For adequate discriminant validity, the Fornell–Larcker criterion requires that each diagonal entry exceed the
corresponding off-diagonal entries. All constructs pass this test.
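For reference, the discriminant-validity check described in the note can be written out explicitly. The following is a brief formal restatement of the Fornell–Larcker criterion as it is commonly applied (Fornell & Larcker, 1981); AVE denotes a construct's average variance extracted and r_ij the correlation between constructs i and j:

```latex
% Fornell–Larcker criterion for discriminant validity:
% every construct should share more variance with its own indicators
% than it shares with any other construct in the model.
\sqrt{\mathrm{AVE}_i} \;>\; \max_{j \neq i} \lvert r_{ij} \rvert
\qquad \text{for each construct } i .
```

In Table 2 this amounts to checking that each bold diagonal entry exceeds every off-diagonal correlation in its row and column, which holds for all constructs with a calculated AVE.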

Table 3
Psychometric quality assessment.

Construct Theoretical range Empirical range Mean SD Cronbach’s alpha


Functional strategic decision influence 1–7 2.0–7 4.92 .96 Index
Decision-facilitating use 1–7 1.25–7 5.00 1.03 .77
Decision-influencing use 1–7 1.0–7 4.72 1.13 .81
Use for accountability 1–7 1.0–7 4.52 1.18 .70
Performance-measure reliability 1–7 1.7–7 5.27 1.09 .77
Performance-measure functional specificity 1–7 2.0–7 4.82 1.09 .73
Firm sizea NA 250–34,040 1310 2633 Single item
Functional background of the CEOb 0/1 0/1 NA NA Dummy
Market environment uncertainty 1–7 1.5–6.0 3.62 .97 Index
Strategic focus differentiation 1–7 1.8–7 5.43 .98 Index
Strategic focus cost leadership 1–7 2.0–7 4.59 1.05 Index
Functional performance −3 to +3 −2.2 to 3 1.07 1.00 .88
Industry affiliationc 0/1 0/1 NA NA Dummy
Functional self-participation in PMS design 0–100 0–80 31.74 17.67 Single item
VP Marketing prof’l experience 0/NA 1–62 18.12 8.38 Single item

Note: NA = not applicable; PMS = performance-measurement system; SD = standard deviation; VP = Vice President.
a Firm size is measured by the natural logarithm of the number of employees.
b Measured by a dummy variable. 1 indicates a marketing background and 0 otherwise.
c Measured by a dummy variable indicating the industry affiliation of the company (see Table 1, panel A).
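As a reminder of what the rightmost column of Table 3 reports, Cronbach's alpha for a k-item scale is the standard internal-consistency coefficient; with sigma²_i denoting the variance of item i and sigma²_t the variance of the total score, it is defined as:

```latex
% Cronbach's alpha for a k-item scale
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{i}}{\sigma^{2}_{t}}\right)
```

Values of .70 and above, as reported for all multi-item scales in Table 3, are conventionally regarded as acceptable (Nunnally, 1978).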

performance measures (Abernethy, Bouwens, & van Lent, 2004). Therefore, we developed two novel constructs. With regard to decision-facilitating use, we asked to what extent the marketing function employed performance measures for (1) decision making, (2) budget allocation, (3) variance analyses with regard to planned performance, and (4) tracking progress to pre-defined goals.⁶ For performance-measure use for accountability, we developed a new measure based on the work of Evans et al. (1994). The instrument relies on three items that reflect the extent to which the functions use performance measures to account for differentiated budget spending, functional performance, and contribution to organization performance in quantitative terms.

⁶ As items 3 and 4 could potentially refer to decision-influencing use, we ran a principal component analysis over all eight items for both types of use. Items 3 and 4 loaded strictly on the first component (construct decision-facilitating use) as intended, and not on the second component (construct decision-influencing use). Additionally, we ran all analyses excluding both items when measuring the construct decision-facilitating use. The results remained stable.

Performance-measure properties

We developed the items for functional specificity from the work of Arya et al. (2005) and Lipe and Salterio (2000). The scale consists of four indicators that measure the extent to which performance measures are linked to disaggregated marketing-related objectives and activities. We surveyed reliability with two items that assess whether the performance measures the function employs are precise and actually represent what they purport to represent (Christensen & Demski, 2003; Ittner & Larcker, 2001).

Control variables

To control for contextual factors that might affect the dependent variable of our model, we include a series of covariates. The performance-measurement literature recommends that empirical research consider firm-related factors and industry characteristics (Chenhall, 2003; Ittner & Larcker, 2001). We therefore control for firm size and industry affiliation, which prior research regards as important context factors (Chapman, 1997; Waterhouse & Thiessen, 1978). Additionally, we control for the functional background of the CEO, which may affect decision processes at the top management level (Glaser, Lopez-de-Silanes, & Sautner, 2012). We measure firm size using the natural logarithm of the number of employees, and we assess industry affiliation and the CEO's background by dummy variables that indicate the specific industry type and primary functional background of the CEO. We predict that when the CEO has a marketing background, the marketing function should wield more influence. With respect to firm size and industry affiliation, we make no specific predictions about the signs of effects.

We also control for firms' strategic focus and the level of market environment uncertainty. We assess these variables with scales developed on the basis of prior studies in the literature (Duncan, 1972; Govindarajan & Gupta, 1985). The instrument for strategic focus considers the characteristics of cost leadership and differentiation strategies and allows these dimensions to vary independently rather than classifying firms as either cost leadership or differentiation strategy archetypes.⁷ From a theoretical perspective, market environment uncertainty and a differentiation strategy should relate positively to the influence of the marketing function, because these conditions emphasize exploration of evolving customer needs and adaptation of products and services to effectively meet those needs (Verhoef & Leeflang, 2009). Conversely, if the firm adopts a strategic focus on costs and efficiency, the influence of the marketing function should be lower.

Further, we control for function-specific variables. The first of these is functional performance, measured by a construct employed in prior research to assess the performance of the marketing function (Vorhies & Morgan, 2003). As high-performing functions should have more clout in strategic decision making by virtue of being good performers (Jensen & Zajac, 2004), we expect a positive association with functional influence. Controlling for this effect allows us to explore empirically whether performance-measurement practices affect power structures beyond effects attributable to subunit performance. We also control for the degree of the functional self-participation in performance-measurement system design, which may be related to the independent and interacting variables (e.g., functional specificity of performance measures). Using a scale ranging from 0 (no participation at all) to 100 (complete determination), we measure the level of participation by asking respondents to indicate to what extent their function participated in the system's development and design. We make no specific prediction for the sign of this effect. We also control for decision-influencing use of performance measures. We surveyed how intensively functions employ performance measures in evaluating managerial performance, determining compensation, and applying sanctions concerning budget responsibility and decision rights (Abernethy, Bouwens, & van Lent, 2010). As decision-influencing use should be more relevant for solving "local" incentive and control problems on the functional level, we refrained from making a specific prediction for the sign of the effect.

Finally, we control for respondent-specific variables. To avoid bias resulting from different hierarchical positions, we control for the influence of formal authority, defined in terms of positions on the organization chart (Abernethy & Vagnoni, 2004). As we describe above, our sample collection procedure implicitly controls for this type of bias, not only by surveying a sample of firms whose top management team includes a VP for the examined function but also by discarding questionnaires that were answered by a person other than the functional VP. Additionally, we control for the VP Marketing's professional experience. We expect this variable to have a positive effect, because executives develop more effective interpersonal influence tactics over time and can draw on their observations and experiences from prior decision situations to attain greater sway in strategic decision making (Mowday, 1979).

Analytical approach and model estimation results

To estimate our empirical model, we employed multivariate regression analysis with an ordinary least squares (OLS) estimator and heteroscedasticity-robust standard errors (White, 1980). The variance inflation factors of all constructs indicate no substantial degree of multicollinearity (Wooldridge, 2002). For interpretation purposes, we mean-centered all independent variables, and all interaction terms are the product of their underlying constructs. The estimation of our model followed a three-step approach.

First, we analyzed a controls-only model to assess whether the control variables relate to strategic decision influence as predicted (model 1 in Table 4). By and large, the effects of the covariates reflect prior expectations. While no statistically significant coefficient sign is opposite to our predictions, nonsignificant effects might be attributable to the fact that, so far, no complex model has tested these constructs simultaneously.⁸

⁷ The original taxonomy of business strategies developed by Porter (1980) assumes that firms focus their strategy exclusively on either differentiation or cost leadership. More recent examinations suggest that firms may also follow joint strategies aiming to achieve differentiation and cost leadership simultaneously (Hill, 1988; Miller & Dess, 1993). Following recent work in the accounting literature, our measurement approach takes this view into account and includes variables for a differentiation and a cost leadership focus side-by-side (Lillis & van Veen-Dirks, 2008).

⁸ We conducted an additional analysis to test variations in the extent of marketing's strategic decision influence across different industries and found only modest differences at the industry level. All means for marketing's influence within a particular type of industry are less than half a standard deviation below or above the overall sample mean, indicating that most of the variance in marketing's influence is attributable to firm-specific and environmental drivers, as our research model suggests.
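To make the estimation strategy concrete, the following is a minimal sketch of the three-step OLS procedure described above (mean-centered predictors, interaction terms built as products of the centered variables, heteroscedasticity-robust standard errors, and industry dummies). It is illustrative only: the data file and all column names (influence, df_use, acct_use, and so on) are hypothetical placeholders, not the authors' actual variables.

```python
# Minimal sketch of the three-step OLS estimation described in the text.
# Column names are hypothetical placeholders for the constructs in Table 3.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")  # hypothetical survey data set

# Mean-center the independent variables so that zero is the sample average.
predictors = ["df_use", "acct_use", "di_use", "reliability", "specificity",
              "firm_size", "mkt_uncertainty", "differentiation",
              "cost_leadership", "functional_performance",
              "vp_experience", "self_participation"]
df[predictors] = df[predictors] - df[predictors].mean()

controls = ("di_use + reliability + specificity + firm_size + ceo_marketing "
            "+ mkt_uncertainty + differentiation + cost_leadership "
            "+ functional_performance + vp_experience + self_participation "
            "+ C(industry)")

# Model 1: controls only; Model 2: main effects; Model 3: with interactions.
m1 = smf.ols(f"influence ~ {controls}", data=df).fit(cov_type="HC1")
m2 = smf.ols(f"influence ~ df_use + acct_use + {controls}",
             data=df).fit(cov_type="HC1")
m3 = smf.ols(
    f"influence ~ df_use*reliability + df_use*specificity "
    f"+ acct_use*reliability + acct_use*specificity + {controls}",
    data=df,
).fit(cov_type="HC1")

print(m3.summary())
```

Because the predictors are centered, the main-effect coefficients in models 2 and 3 describe the effect of each use at average levels of the performance-measure properties, which is how the results below are interpreted.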

Table 4
Regression analysis results.

Variable Prediction Model 1 (controls-only) Model 2 (main effects) Model 3 (with interactions)
Decision-facilitating use of performance measures + .125 (.076) .180 (.084)*
Performance-measure use for accountability + .061 (.064) .080 (.059)

Decision-facilitating use × performance-measure reliability + .012 (.060)
Use for accountability × performance-measure reliability + .106 (.057)*
Decision-facilitating use × performance-measure functional specificity Np .127 (.063)*
Use for accountability × performance-measure functional specificity – −.118 (.055)*
Controls
Decision-influencing use of performance measures Np .081 (.062) .057 (.056)
Performance-measure reliability + .138 (.077)* .165 (.070)**
Performance-measure functional specificity Np .057 (.066) .029 (.062)
Firm size Np .066 (.093) .052 (.084) .096 (.088)
Functional background of the CEO (dummy) + .135 (.171) .082 (.168) .146 (.172)
Market environment uncertainty + .097 (.072) .141 (.067)* .123 (.066)*
Strategic focus differentiation + .278 (.078)** .182 (.075)** .185 (.074)**
Strategic focus cost leadership – .078 (.055) .021 (.060) .077 (.061)
Functional performance + .178 (.088)* .129 (.076)* .110 (.074)
VP Marketing professional experience + .008 (.008) .013 (.008) .015 (.009)*
Functional self-participation in PMS design Np .003 (.003) .004 (.003) .004 (.003)
Industry affiliation (dummy) Np Included Included Included
Constant 2.063** 2.857** 2.642**
Number of observations (n) 192 192 192
R-square .29 .40 .44
Partial R-square added (incremental F-test) .11** .04**
Adjusted R-square .20 .31 .34

Note: PMS = performance-measurement system; VP = Vice President. The table reports nonstandardized coefficients. Robust standard errors are shown in parentheses. Significance levels are one-tailed for variables with a directional prediction and two-tailed otherwise. +/–/Np denote positive/negative/non-predicted relations.
* Significance at the 5% level.
** Significance at the 1% level.

In a second step, we included the main effects of performance-measure use (model 2 in Table 4), with the values for the performance-measure properties equal to zero (i.e., without interactions). The estimation of the model does not provide support for H1. Both coefficients are positive, but not significant on a 5% level (H1a: decision-facilitating use: b = .125, p > .05; H1b: use for accountability: b = .061, p > .05). Given that zero values of the performance-measure properties represent the sample average (owing to mean-centering), by implication the use of performance measures does not affect functional strategic decision influence "on average" (Hartmann & Moers, 1999; Moers, 2006).

A third model then examined whether including performance-measure properties provides additional explanatory power (model 3 in Table 4). With respect to the property of reliability, we find significant positive interactions with the use for accountability (b = .106, p < .05), whereas the interaction term with decision-facilitating use is nonsignificant (b = .012, p > .05).⁹ These results provide partial support for H2. Regarding the property of functional specificity, we find a negative interaction effect for the use for accountability (b = −.118, p < .05), as predicted by H3. We also tested the interaction with decision-facilitating use, for which we made no explicit inferences about the signs of the interaction term. We find a positive and significant interaction (b = .127, p < .05). Notably, model 3 explains significant incremental variance relative to both other models, as indicated by an R² of .44 (vs. .29 and .40 in model 1 and model 2, respectively).

A closer examination of the economic interpretations of the interaction effects is worthwhile. To this end, we determined the partial derivatives of functional strategic decision influence with respect to performance-measure use. This analysis allows us to test how different levels of the performance-measure properties affect the baseline effect of performance-measure use on the dependent variable. We analyzed the following equations:

∂(functional strategic decision influence) / ∂(decision-facilitating use) = .180 + .127 × functional specificity + .012 × reliability

∂(functional strategic decision influence) / ∂(use for accountability) = .080 − .118 × functional specificity + .106 × reliability

⁹ As a valid theoretical rationale exists for expecting a significant positive interaction effect, an important question is whether our sample size is powerful enough to find an existing effect. Therefore, we analyzed whether our sample has adequate statistical power to reject the null hypothesis of no effect. We investigated this possibility, dubbed the type II error (Verbeek, 2008), by computing the power of H2a using a critical significance level of α = 5%. We found that we can detect a true effect size of .050 with about 90% power and a true effect size of .037 with about 80% power, a threshold suggested by Cohen (1992) and previously applied in the accounting literature (e.g., Abernethy et al., 2010). The significant interactions (Table 4) have effect sizes greater than .050. Therefore, we conclude that if an effect exists, our setting is powerful enough to find it. We still have to reject H2a.
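The slopes reported in Table 5 follow directly from these two equations: each marginal effect is the baseline coefficient plus the interaction coefficient times the (mean-centered) value of the moderator. A minimal sketch using the rounded model 3 coefficients (hypothetical function names, point estimates only, no standard errors):

```python
# Sketch: evaluate the two partial derivatives at selected (mean-centered)
# values of the moderators, mirroring the layout of Table 5.
# Coefficients are the rounded model 3 estimates reported in the text.
b_df_use, b_df_x_spec, b_df_x_rel = 0.180, 0.127, 0.012
b_acct_use, b_acct_x_spec, b_acct_x_rel = 0.080, -0.118, 0.106

def slope_decision_facilitating(specificity, reliability=0.0):
    """d(influence)/d(decision-facilitating use) at given moderator values."""
    return b_df_use + b_df_x_spec * specificity + b_df_x_rel * reliability

def slope_accountability(specificity=0.0, reliability=0.0):
    """d(influence)/d(use for accountability) at given moderator values."""
    return b_acct_use + b_acct_x_spec * specificity + b_acct_x_rel * reliability

# Example: one standard deviation of functional specificity and of reliability
# is about 1.09 (Table 3), so the slopes at mean + 1 SD of the moderator are:
print(round(slope_decision_facilitating(1.09), 3))       # ~0.318, cf. .319 in Table 5
print(round(slope_accountability(reliability=1.09), 3))  # ~0.196, cf. .199 in Table 5
```

Small differences relative to Table 5 reflect rounding of the published coefficients; the tabulated slopes and their standard errors are computed from the unrounded estimates.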

These partial derivatives describe the impact of the two uses of performance measures on functional strategic decision influence as a function of the performance-measure properties. Table 5 provides the marginal effects of the relationships depending on selected values of the interacting variables. The slope values allow some insightful interpretations. For constant values of reliability at the mean of zero, the data show that for high functional specificity of performance measures, a positive association exists between decision-facilitating use and functional strategic decision influence. Further, this effect is stronger the higher the level of specificity (e.g., .459, p < .01 at the highest sample value). The slopes for lower levels of specificity are negative, but not significant (e.g., −.175, p > .05 at the lowest sample value).

Regarding the use of performance measures for accountability, we find a similar pattern for the interaction with reliability. Negative (nonsignificant) slopes measured for low values (e.g., −.253, p > .05 at the lowest sample value) change to positive, significant slopes for higher values of reliability (e.g., .273, p < .05 at the highest sample value). However, the interaction effect of functional specificity operates in the opposite direction. Slopes are positive for low values (e.g., .408, p < .01 at the lowest sample value) and negative (but nonsignificant) for high values of functional specificity (e.g., −.178, p > .05 at the highest sample value).

Additional analyses

Possibly our results are affected by organizational structures within the firm. In particular, the more homogeneous the various subunits within an organization are, the fewer subunit-specific measures in the whole firm are likely to exist. In such organizations, measures peculiar to a particular function may be better understood, because fewer other measures exist to which outsiders (in our case the top management team) must attend. For our research model, we expect a less negative effect of performance-measure specificity if performance measures are used for accountability purposes in more homogeneous organizations. We empirically tested this proposition by employing service (vs. manufacturing) firms as a proxy variable for homogeneity of business functions.¹⁰ We added a triple interaction term to our model 3 shown in Table 4, including use for accountability, performance-measure specificity, and an indicator variable classifying service firms. In line with our expectation, we find a positive triple interaction (p < .10), which indicates that the negative effect of accountability use for a high level of functional specificity of performance measures is decreasing in the homogeneity of business functions within the firm.

Moreover, in empirical research on performance-measurement system choices, an important factor is the possibility of different causal flows of effects (Chenhall & Moers, 2007). For our study, an alternative explanation could substantiate an inversely specified model in which functional strategic decision influence affects the use of performance measures in the function, and not vice versa. Agency theory predicts that when a firm allocates more decision rights to a particular function, the firm is likely to subject the function to tighter control to ensure that its use of decision rights produces organizationally desirable outcomes (Abernethy et al., 2004). Thus, for an inverse effect (i.e., functional strategic decision influence affects the use of performance measures), we should observe a negative correlation between functional strategic decision influence and the function's self-participation in the design of its performance-measurement system. However, we do not observe this condition empirically. As Table 2 shows, this correlation is close to zero and nonsignificant. In untabulated analyses, we regressed functional self-participation in designing the performance-measurement system on functional strategic decision influence (including controls) and find a nonsignificant association as well. These results provide no evidence for the validity of an inversely specified model.

Additionally, we employed two specification tests to assess whether our model specification fits the data. In the first test, we estimated a model in which all controls and property variables feed back to all three uses of performance measures, thus treating the different uses as endogenous variables. We used four interrelated regressions and the seemingly unrelated estimator (SUR) to account for systematic correlations between the error terms (Wooldridge, 2002). The results of our main predictions do not change substantially, which is consistent with the specification of functional strategic decision influence as the independent variable in the analysis. As a second test, we estimated a recursive model with the three types of performance-measure use as dependent variables and functional strategic decision influence as the independent variable, including all controls. In conformity with our interpretation of results, we find no significant effects when strategic decision influence serves as the independent variable. Thus, our tests do not support reverse causation.

¹⁰ We argue that, in service firms, the operations or research and development function is more closely related to marketing, whereas in manufacturing firms, operations or research and development is substantially different from marketing. We classified financial services, retailing, logistics and transportation, and IT and telecommunication as mainly service-driven firms and automotive, chemicals, pharmacy, mechanical engineering, consumer goods, utilities, metal production, electronics, and building and construction as mainly manufacturing-driven firms. As we could not classify the "other" firms appropriately, we performed the analysis with n = 187 firms.

Table 5
Partial derivatives (slopes) of significant interactions.

Interaction term Value of interaction variable (columns: lowest sample value, sample mean − 1 SD, sample mean (centered at zero), sample mean + 1 SD, highest sample value)
Decision-facilitating use × PM reliability Not tested because of nonsignificant interaction term
Decision-facilitating use × PM functional specificity −.175 (.162) .042 (.086) .180 (.084)* .319 (.127)** .459 (.187)**
Use for accountability × PM reliability −.253 (.184) −.040 (.084) .080 (.059) .199 (.090)* .273 (.122)*
Use for accountability × PM functional specificity .408 (.163)** .208 (.083)** .080 (.059) −.049 (.085) −.178 (.136)

Note: PM = performance-measure; SD = standard deviation. Robust standard errors are shown in parentheses. Significance levels based on two-tailed tests.
* Significance at the 5% level.
** Significance at the 1% level.

Discussion and conclusions

Although prior investigations make a strong conceptual case for an association between a function's performance-measurement practice and its influence on strategic decision making (Markus & Pfeffer, 1983; Saunders, 1981; Wickramasinghe, 2006), empirical research investigating this phenomenon is lacking. Drawing on institutional theory, this study analyzes how the use of performance measures for decision facilitation and accountability in the marketing function affects the function's influence over strategic decision making. Furthermore, this investigation is the first to consider interaction effects of performance-measure properties, extending prior research in this area of inquiry that has so far neglected the role of information properties. The model was tested using a large sample of top executives across a range of industries, thus meaningfully extending extant empirics.

Analysis of the main effects shows that neither decision-facilitating use nor use for accountability has significant associations with functional strategic decision influence when the effects of the properties are not considered. However, including interaction effects shows that the use of performance measures can have either a positive or a negative effect on functional strategic decision influence. We find evidence that the effect of performance-measure use for accountability is positive and significant for high levels of performance-measure reliability. On the other hand, however, the use for accountability has no effect for very low levels of reliability. These findings are consistent with the theoretical rationale of our model, which states that top executives face institutionalized expectations to account for their function's performance and value contribution. In view of these expectations, employing reliable performance measures for accountability may enhance the function's legitimacy and perceived image, giving functional VPs more weight in strategic decision making. However, the use of less reliable measures for accountability does not seem to confer legitimacy in the same way.

With respect to the functional specificity of performance measures, the data support the hypothesized interaction effect with the use for accountability. Using performance measures for accountability has a positive effect on functional influence when the measures are less specific to the particular function and thus better reflect congruity with organizational goals. These findings echo prior accounting research, which argues that standardized measures offer more meaningful opportunities for relative performance evaluation. Managers outside a particular function may be unable to fully exploit the information found in a diverse set of measures unique to that particular function (Arya et al., 2005). Similarly, our findings are consistent with previous experimental evidence suggesting that top managers evaluating multiple subunits place more weight on measures common to many subunits than on those specific to particular subunits (Lipe & Salterio, 2000).

Regarding decision-facilitating use, underlying theory does not clearly point toward the signs of interaction effects with functional specificity. Tests of these interactions show a positive interaction term, indicating that the use of more specific metrics in decision making has a positive effect. This finding lends support to the perspective that customized measures designed specifically for a particular subunit can be better targeted to the subunit's idiosyncratic needs and are therefore more helpful in supporting the function's decision processes (Arya et al., 2005). An alternative line of reasoning might argue that outsiders may see the use of specific performance measures in decision making as an indication of a parochial, self-centered perspective of the function's management practices, resulting in decisions that are less congruent with organizational goals. However, we find no empirical support for this argument.

A further interesting insight is that the property of reliability does not exert a significant interaction effect for decision-facilitating use. Thus, employing precise measures in decision making apparently is not required for a positive effect. Although surprising at first glance, this finding coincides with anecdotal evidence that the use of accounting information for decision making may enhance the legitimacy of individual or group activities, regardless of the actual information value (Markus & Pfeffer, 1983). In other words, for performance-measure use in decision making to confer legitimacy, how decisions and actions were actually affected by the systems may be less relevant. Proponents of institutional theory have also noted that organizational legitimacy may occasionally derive from a superficial adoption of techniques or policies and that for some expectations of the institutional environment, a mere "ceremonial conformity" may placate potential sources of support (Meyer & Rowan, 1977, p. 340; Scott, 2001). This theoretical view might offer a possible interpretation for our results. While reliability is plausibly an important quality for measures used for accountability, which should be beyond debate (Gjesdal, 1981; Ijiri, 1975), top managers may not attach the same importance to reliability with regard to the use for decision making. In a similar vein, prior studies in the accounting literature conducted in different functional settings argue that reliability of performance measures is less critical for decision-facilitating use because the role of behavioral risk in this context is less relevant (e.g., van Veen-Dirks, 2010).

Overall, our findings provide empirical evidence that functional performance-measurement practices may affect the ability of functions' top managers to promote strategic initiatives at the top management level.
Moreover, our study provides evidence that the accountability demand for accounting information (Evans et al., 1994) plays a role for organizational power structures. However, the properties of performance measures need particular attention, as they seem to determine whether the effect of performance-measure use is positive or negative.

Limitations and directions for future research

While our study addresses important research issues, some interesting directions for future research arise from our findings. Perhaps the most challenging task that remains is to extend the present study to other functional contexts. Our study analyzes the strategic decision influence of the marketing function in a cross-industry sample of German business firms. Surveying the same function in each organization facilitated the collection of a sizable sample, and it also mitigates methodological problems related to function-specific unobserved heterogeneity that would occur in a cross-functional sample. Nonetheless, future research should assess the stability of our findings in other functional contexts. In particular, the top management team of all firms in our sample included a VP for the marketing function, which indicates that the function is regarded as influential and important. Further research could explore whether less important functions whose measures possess the same characteristics would benefit in the same way.

Scholars should also more closely examine some of the effects studied in our model. Significant potential seems to exist for further research regarding the use of accounting information for accountability in organizational contexts. An interesting investigation would be to explore what makes the use of information to account for functions' achievements more or less effective. For instance, previous experimental research has analyzed how the organization and presentation of performance measures in a balanced scorecard affect evaluators' perceptions of business unit performance (Cardinaels & van Veen-Dirks, 2010; Lipe & Salterio, 2002). Further investigations could explore in more depth how functions should use performance information within the organization to achieve the strongest impact on their strategic influence. Furthermore, the fact that reliability does not affect the efficacy of decision-facilitating use warrants further examination. Although prior conceptual work offers a plausible explanation for the observed result, our findings contain a residual degree of ambiguity. As, to the best of our knowledge, our study offers the first empirical test of the effect, future research should assess whether our findings can be replicated.

Some other limitations of our study imply further directions for future research. As surveys targeting top management representatives face restrictions regarding the length of the research instrument, we were unable to include all possible antecedents of functional strategic decision influence. While our study takes into account important covariates such as functional performance and respondents' professional experience, future research should explore whether including additional variables explains incremental variance in the dependent variable and affects the stability of our findings. In this regard, a particularly valuable contribution would be the investigation of the effects of personality characteristics such as executives' leadership effectiveness, interpersonal skills, or decision effectiveness.

Additionally, investigators should address three main methodological concerns. First, our measurement instruments require further testing. This study relies on both newly developed constructs and established scales drawn from previous research. Although sound theoretical conceptualizations in the accounting literature facilitated the development of the new scales, further research should provide support for their psychometric properties. Second, the survey-based approach has a potential for measurement error. Our choice of key informants and our specific study design may alleviate this concern to some extent. We restricted the analysis to top management representatives responsible for the marketing function and collected additional data from a second respondent group to assess the validity of key informant perceptions for the most critical constructs. Notwithstanding these efforts, we cannot completely rule out measurement errors. Lastly, cross-sectional data cannot establish the causality of purported relationships. While evidence from previous literature and additional econometric tests lend support to our interpretation of results, any implied causality reflects the theoretical position taken (van Lent, 2007), and substantiating the purported flow of effects, as for instance with a longitudinal analysis design, remains a challenge left to future research.

Acknowledgments

The authors are listed alphabetically and contributed equally to this paper. We appreciate helpful comments of Markus Glaser, Martin Holzhacker, Christian Kunz, Matthias Mahlendorf (our discussant), David Marginson, Alexander Schmidt, Dirk Totzek, and participants of the 3rd Annual Conference for Management Accounting Research (ACMAR) and of the 34th Annual Congress of the European Accounting Association (EAA). We further thank Michael Shields (Editor) and two anonymous reviewers for constructive suggestions for improvement.

Appendix A

Measurement instruments.

I. Functional strategic decision influence
Seven-point scale: "very little influence" to "very strong influence" (1–7).
Please provide your assessment of the influence of the marketing function in the following decision areas:
1. Strategic direction of the company.
2. Expansion in new geographic markets.
3. Customer satisfaction measurement and management.
4. New product development.
5. Major capital expenditures.
6. Pricing decisions.
7. Choice of strategic business partners.
8. Design of customer service and support.

II. Decision-facilitating use of performance measures
Seven-point scale: "totally disagree" to "totally agree" (1–7).
Please indicate whether performance measures are used in your function for the following:
1. Decision making.
2. Budgeting.
3. Variance analyses of planned performance.
4. Tracking progress to pre-defined goals.

III. Decision-influencing use of performance measures
Seven-point scale: "totally disagree" to "totally agree" (1–7).
Please indicate whether performance measures are used in your function for the following:
1. Evaluating employee performance within the function.
2. Rewarding employee performance within the function.
3. Determining compensation practices within the function.
4. Applying sanctions within the function (e.g., concerning decision rights, budgets).

IV. Use of performance measures for accountability
Seven-point scale: "totally disagree" to "totally agree" (1–7).
Please indicate whether performance measures are used in your function for the following:
1. To account for the function's budget spending.
2. To account for the function's performance.
3. To illustrate the function's contribution to firm performance relative to other functions in quantitative terms.

V. Performance-measure reliability
Seven-point scale: "totally disagree" to "totally agree" (1–7).
Please indicate whether the performance measures used in your function show the following characteristics:
1. The performance measures used in our function are reliable.
2. The performance measures used in our function represent what they purport to represent.

VI. Performance-measure functional specificity
Seven-point scale: "totally disagree" to "totally agree" (1–7).
Please indicate whether the performance measures used in your function show the following characteristics:
1. The performance measures are relevant for the marketing function.
2. The performance measures put special weight on customer-, competitor-, and market-related measures.
3. Besides results-oriented measures (e.g., sales, customer satisfaction), the performance measures also include input- (e.g., meeting the marketing budget) and process-related measures (e.g., length of marketing processes).
4. The performance measures provide a balanced picture of the marketing function.

VII. Firm size
What is the approximate number of full-time employees in your company?

VIII. Functional background of the CEO
Dummy variable indicating the primary functional background of the CEO.
1. Marketing.
2. Sales.
3. Research and development.
4. Purchasing/production/logistics.
5. Finance/controlling.

IX. Market environment uncertainty
Seven-point scale: "very rarely" to "very frequently" (1–7).
Please indicate how frequently the following aspects change in the market:
1. Products and services offered by competition.
2. Marketing and sales strategy of competitors.
3. Customers' preferences for product features.
4. The price-to-value ratio customers expect.

X. Strategic focus (differentiation; cost leadership)
Seven-point scale: "totally disagree" to "totally agree" (1–7).
Items 1–4 refer to a differentiation strategy, items 5–8 refer to a cost leadership strategy.
To what degree does the competitive strategy of your company emphasize the following goals?
1. Building up premium product or brand image.
2. Offering highly differentiated/innovative products.
3. Obtaining high prices.
4. Creating superior customer value by knowledge of customers' preferences and customized products.
5. Standardization of products and services with few variants and ancillary services.
6. Standardization of processes in production and sales.
7. Using economies of scale in purchasing volumes.
8. Cost efficiency in overhead functions and general administration.

XI. Functional performance
Seven-point scale from "much worse than competition" [−3] to "much better than competition" [+3].
How would you rate your function's performance in relation to its competition during the past 3 years with respect to the following aspects?
1. Achieving customer satisfaction.
2. Creating customer utility.
3. Achieving customer loyalty.
4. Acquisition of new customers.
5. Achievement of growth targets.
6. Achievement of planned market share.

XII. Industry affiliation
Dummy variable indicating the industry affiliation of the company.

XIII. Functional self-participation in performance-measurement system design
To what extent does your function participate in designing its performance-measurement system compared to other actors outside the function?
Scale ranging from 0 to 100. 0 = no participation at all; 100 = complete determination.

XIV. VP Marketing professional experience
How many years of professional experience do you have? Approx. _________ years.

References

Abernethy, M. A., Bouwens, J., & van Lent, L. (2004). Determinants of control system design in divisionalized firms. The Accounting Review, 79, 545–570.
Abernethy, M. A., Bouwens, J., & van Lent, L. (2010). Leadership and control system design. Management Accounting Research, 21, 2–16.
Abernethy, M. A., & Chua, W. F. (1996). A field study of control system "redesign": The impact of institutional processes on strategic choice. Contemporary Accounting Research, 13, 569–606.
Abernethy, M. A., & Vagnoni, E. (2004). Power, organization design, and managerial behavior. Accounting, Organizations and Society, 29, 207–225.
Ansari, S., & Euske, K. J. (1987). Rational, rationalizing, and reifying uses of accounting data in organizations. Accounting, Organizations and Society, 12, 375–384.
Arya, A., Glover, J., Mittendorf, B., & Ye, L. (2005). On the use of customized versus standardized performance measures. Journal of Management Accounting Research, 17, 7–21.
Banker, R. D., Chang, H., & Pizzini, M. J. (2004). The balanced scorecard: Judgemental effects of performance measures linked to strategy. The Accounting Review, 79, 1–23.
Banker, R. D., & Datar, S. M. (1989). Sensitivity, precision, and linear aggregation of signals for performance evaluation. Journal of Accounting Research, 27, 21–39.
Bariff, M. L., & Galbraith, J. R. (1978). Intraorganizational power considerations for designing information systems. Accounting, Organizations and Society, 3, 15–27.
Birnberg, J. G., & Zhang, Y. (2010). When betrayal aversion meets loss aversion: The effect of economic downturn on internal control system choices. In Proceedings of the American Accounting Association. San Francisco (August 2010).
Birnberg, J. G., Hoffman, V. B., & Yuen, S. (2008). The accountability demand for information in China and the US—A research note. Accounting, Organizations and Society, 33, 20–32.
Brignall, S., & Modell, S. (2000). An institutional perspective on performance measurement and management in the new public sector. Management Accounting Research, 11, 281–306.
Cardinaels, E., & van Veen-Dirks, P. M. G. (2010). Financial versus non-financial information: The impact of information organization and presentation in a balanced scorecard. Accounting, Organizations and Society, 35, 565–578.
Carpenter, M. A. (2011). The handbook of research on top management teams. Cheltenham, UK: Edward Elgar Publishing.
Chapman, C. S. (1997). Reflections on a contingent view of accounting. Accounting, Organizations and Society, 22, 189–205.
Chenhall, R. H. (2003). Management control systems design within its organizational context: Findings from contingency-based research and directions for the future. Accounting, Organizations and Society, 28, 127–168.
Chenhall, R. H., & Langfield-Smith, K. (1998). The relationship between strategic priorities, management techniques, and management accounting: An empirical investigation using a systems approach. Accounting, Organizations and Society, 23, 234–264.
Chenhall, R. H., & Langfield-Smith, K. (2007). Multiple perspectives of performance measures. European Management Journal, 25, 266–282.
Chenhall, R. H., & Moers, F. (2007). The issue of endogeneity within theory-based, quantitative management accounting research. European Accounting Review, 16, 173–195.
Christensen, J. A., & Demski, J. S. (2003). Accounting theory: An information content perspective. Boston, MA: McGraw-Hill.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
Covaleski, M. A., & Dirsmith, M. W. (1986). The budgetary process of power and politics. Accounting, Organizations and Society, 11, 193–214.
Datar, S., Kulp, S. C., & Lambert, R. A. (2001). Balancing performance measures. Journal of Accounting Research, 39, 75–92.
Davis, G. F., & Marquis, C. (2005). Prospects for organization theory in the early twenty-first century: Institutional fields and mechanisms. Organization Science, 16, 322–343.
Demski, J. S. (2008). Managerial uses of accounting information (2nd ed.). New York: Springer.
Demski, J. S., & Feltham, G. A. (1976). Cost determination: A conceptual approach. Ames: Iowa State University Press.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: John Wiley and Sons.
DiMaggio, P., & Powell, W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48, 147–160.
Duncan, R. (1972). Characteristics of organizational environments and perceived environmental uncertainty. Administrative Science Quarterly, 17, 313–327.
Emsley, D. (2000). Variance analysis and performance: Two empirical studies. Accounting, Organizations and Society, 25, 1–12.
Evans, J. H., Heiman-Hoffman, V. B., & Rau, S. E. (1994). The accountability demand for information. Journal of Management Accounting Research, 6, 24–42.
Feltham, G. A., & Xie, J. (1994). Performance measure congruity and diversity in multi-task principal/agent relations. The Accounting Review, 69, 429–453.
Finn, R. H. (1970). A note on estimating the reliability of categorical data. Educational and Psychological Measurement, 30, 71–76.
Fligstein, N. (1987). The intraorganizational power struggle: Rise of finance personnel to top leadership in large corporations. American Sociological Review, 52, 44–58.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39–50.
Gerdin, J. (2005). Management accounting system design in manufacturing departments: An empirical investigation using a multiple contingencies approach. Accounting, Organizations and Society, 30, 99–126.
Gjesdal, F. (1981). Accounting for stewardship. Journal of Accounting Research, 19, 208–231.
Glaser, M., Lopez-de-Silanes, F., & Sautner, Z. (2012). Opening the black box: Internal capital markets and managerial power. Journal of Finance, forthcoming.
Govindarajan, V., & Gupta, A. K. (1985). Linking control systems to business unit strategy: Impact on performance. Accounting, Organizations and Society, 10, 51–66.
Hambrick, D. C., & Mason, P. M. (1984). Upper echelons: The organization as a reflection of its top managers. Academy of Management Review, 9, 193–206.
Hartmann, F. G. H., & Moers, F. (1999). Testing contingency hypotheses in budgetary research: An evaluation of the use of moderated regression analysis. Accounting, Organizations and Society, 24, 291–315.
Henri, J.-F. (2006). Organizational culture and performance measurement systems. Accounting, Organizations and Society, 31, 77–103.
Hill, C. W. L. (1988). Differentiation versus low cost or differentiation and low cost: A contingency framework. Academy of Management Review, 13, 401–412.
Homburg, C., Workman, J. P., & Krohmer, H. (1999). Marketing's influence within the firm. Journal of Marketing, 63, 1–17.
Humphreys, K. A., & Trotman, K. T. (2011). The balanced scorecard: The effect of strategy information on performance evaluation judgments. Journal of Management Accounting Research, 23, 81–98.
Ijiri, Y. (1975). Theory of accounting measurement. Studies in accounting research (Vol. 10). Sarasota, FL: American Accounting Association.
Ittner, C. D., & Larcker, D. F. (2001). Assessing empirical research in managerial accounting: A value-based management perspective. Journal of Accounting and Economics, 32, 349–410.
Jaworski, B. J., & Young, S. M. (1992). Dysfunctional behavior and management control: An empirical study of marketing managers. Accounting, Organizations and Society, 17, 17–35.
Jensen, M. C., & Zajac, E. J. (2004). Corporate elites and corporate strategy: How demographic preferences and structural position shape the scope of the firm. Strategic Management Journal, 25, 507–524.
Kurunmäki, L. (1999). Professional vs. financial capital in the field of health care—Struggles for the redistribution of power and control. Accounting, Organizations and Society, 24, 95–124.
LeBreton, J. M., Burgess, J. R. D., Kaiser, R. B., Atchley, E. K., & James, L. R. (2003). The restriction of variance hypothesis and interrater reliability and agreement: Are ratings from multiple sources really dissimilar? Organizational Research Methods, 6, 80–128.
Lev, B. (2001). Intangibles: Management, measurement, and reporting. Harrisonburg, VA: R.R. Donnelley.
Lillis, A. M., & van Veen-Dirks, P. M. G. (2008). Performance measurement system design in joint strategy settings. Journal of Management Accounting Research, 20, 25–27.
Lipe, M. G., & Salterio, S. E. (2000). The balanced scorecard: Judgmental effects of common and unique performance measures. The Accounting Review, 75, 283–298.
Lipe, M. G., & Salterio, S. E. (2002). A note on the judgmental effects of the balanced scorecard's information organization. Accounting, Organizations and Society, 27, 531–540.
Luft, J. L., & Shields, M. D. (2003). Mapping management accounting: Graphics and guidelines for theory consistent empirical research. Accounting, Organizations and Society, 12, 169–249.
Malina, M. A., & Selto, F. A. (2001). Communicating and controlling strategy: An empirical study of the effectiveness of the balanced scorecard. Journal of Management Accounting Research, 13, 47–90.
Markus, M. L., & Pfeffer, J. (1983). Power and the design and implementation of accounting and control systems. Accounting, Organizations and Society, 8, 205–218.
Mayston, D. J. (1985). Non-profit performance indicators in the public sectors. Financial Accountability & Management, 1, 51–74.
Merchant, K. A., & Van der Stede, W. A. (2012). Management control systems: Performance measurement, evaluation and incentives (3rd ed.). London, UK: Prentice Hall.
Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83, 340–363.
Miller, A., & Dess, G. G. (1993). Assessing Porter's (1980) model in terms of its generalizability, accuracy, and simplicity. Journal of Management Studies, 30, 553–585.
Miller, P. (1994). Accounting as social and institutional practice: An introduction. In A. G. Hopwood & P. Miller (Eds.), Accounting as social and institutional practice (pp. 1–39). Cambridge, UK: Cambridge University Press.
Moers, F. (2006). Performance measure properties and delegation. The Accounting Review, 81, 897–924.
Moll, J., Burns, J., & Major, M. (2006). Institutional theory. In Z. Hoque (Ed.), Methodological issues in accounting research: Theories and methods (pp. 183–206). London, UK: Spiramus Press.
Mowday, R. T. (1979). Leader characteristics, self-confidence, and methods of upward influence in organizational decision situations. Academy of Management Journal, 22, 709–725.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.
Ortega, J. (2003). Power in the firm and managerial career concerns. Journal of Economics and Management Strategy, 12, 1–29.
Pfeffer, J. (1981). Power in organizations. Marshfield, MA: Pitman.
Porter, M. E. (1980). Competitive strategy. New York: Free Press.
Richardson, A. J. (1987). Accounting as a legitimating institution. Accounting, Organizations and Society, 12, 341–355.
Rowe, C., Shields, M. D., & Birnberg, J. G. (2012). Hardening soft accounting information: Games for planning organizational change. Accounting, Organizations and Society, 37, 260–279.
Saunders, C. S. (1981). Management information systems, communications, and departmental power: An integrative model. Academy of Management Review, 6, 431–442.
Scott, W. R. (2001). Institutions and organizations (2nd ed.). Newbury Park, CA: Sage Publications.
Scott, W. R. (2005). Institutional theory: Contributing to a theoretical research program. In K. G. Smith & M. Hitt (Eds.), Great minds in management: The process of theory development (pp. 450–484). New York: Oxford University Press.
Snavely, H. (1967). Accounting information criteria. The Accounting Review, 42, 223–232.
Suchman, M. C. (1995). Managing legitimacy: Strategic and institutional approaches. Academy of Management Review, 20, 571–610.
Van der Stede, W. A., Young, S. M., & Chen, C. X. (2005). Assessing the quality of evidence in empirical management accounting research: The case of survey studies. Accounting, Organizations and Society, 30, 655–684.
Van Lent, L. (2007). Endogeneity in management accounting research: A comment. European Accounting Review, 16, 197–205.
Van Veen-Dirks, P. M. G. (2010). Different uses of performance measures: The evaluation versus reward of production managers. Accounting, Organizations and Society, 35, 141–164.
Verbeek, M. (2008). A guide to modern econometrics. Hoboken, NJ: John Wiley and Sons.
Verhoef, P. C., & Leeflang, P. S. H. (2009). Understanding the marketing department's influence within the firm. Journal of Marketing, 73, 14–37.
Vorhies, D. W., & Morgan, N. A. (2003). A configuration theory assessment of marketing organization fit with business strategy and its relationship with marketing performance. Journal of Marketing, 67, 100–115.
Waterhouse, J., & Thiessen, P. (1978). A contingency framework for management accounting systems research. Accounting, Organizations and Society, 3, 65–76.
White, H. (1980). A heteroscedastic-consistent covariance matrix estimator and a direct test of heteroscedasticity. Econometrica, 48, 817–838.
Wickramasinghe, D. (2006). Power and accounting: A guide to critical research. In Z. Hoque (Ed.), Methodological issues in accounting research: Theories and methods (pp. 339–360). London, UK: Spiramus Press.
Widener, S. K. (2006). Associations between strategic resource importance and performance measure use: The impact on firm performance. Management Accounting Research, 17, 433–457.
Wolk, H. I., Francis, J. R., & Tearney, M. G. (1999). Accounting theory: A conceptual and institutional approach (2nd ed.). Boston, MA: Kent Publishing.
Wooldridge, J. M. (2002). Econometric analysis of cross section and panel data. Cambridge, MA: MIT Press.
Wouters, M., & Wilderom, C. (2008). Developing performance measurement systems as enabling formalization: A longitudinal field study of a logistics department. Accounting, Organizations and Society, 11, 488–516.
Wyatt, A. (2008). What financial and non-financial information on intangibles is value relevant? A review of the evidence. Accounting and Business Research, 38, 217–256.
Young, S. M. (1996). Survey research in management accounting: A critical assessment. In A. Richardson (Ed.), Research methods in accounting: Issues and debates (pp. 55–68). Vancouver, Canada: CGA Canada Research Foundation.
Zucker, L. G. (1987). Institutional theories of organization. Annual Review of Sociology, 13, 443–464.
