So!apbox Essay

What’s holding back empirical research into organizational capabilities? Remedies for common problems

Robert M Grant and Gianmario Verona
Bocconi University, Italy

Strategic Organization
2015, Vol. 13(1) 61–74
© The Author(s) 2015
DOI: 10.1177/1476127014565988
soq.sagepub.com

Introduction
Recent commentary on the state of research in strategic management points to excessive attention to
theory development and deficiencies in the empirical testing of these theories (Bettis, 2012; Miller
and Tsang, 2010; Oxley et al., 2010). This criticism is especially pertinent to research into organi-
zational capabilities, where empirical verification of constructs and relationships has lagged far
behind conceptual and theoretical developments (Arend and Bromiley, 2009; Leiblein, 2011;
Newbert, 2007). Despite a growing methodological sophistication in using statistical tests to ensure
some dimensions of validity, fundamental impediments to reliability and validity in identifying and
measuring organizational capabilities remain unrecognized and unresolved.
In this article, we use a survey of the empirical literature on organizational capability to iden-
tify the principal problems that have impeded empirical research, and, for each problem, we offer
recommendations for either resolving the problem or ameliorating its negative impact. Separate
problems arise in each of the two main categories of empirical research into capabilities. In quan-
titative studies (both cross-sectional and panel sampling), the problems relate mainly to measure-
ment; in qualitative, case study research, the problems relate mainly to identification. The
Appendix describes our methodology in identifying and coding published articles on organiza-
tional capabilities.

Quantitative studies: problems of measurement


The core problem of empirical research into organizational capability is that organizational capa-
bilities are latent constructs that are inherently unobservable. Like individual capability (such as
playing the violin), organizational capability is the capacity to undertake a particular activity or
function; it is observed only when it is performed. Yet, performance will reflect the influence of
contextual and dispositional factors. To measure organizational capability, empirical researchers
have used either questionnaires addressed to key informants to elicit perceptual assessments of capabilities or
observable proxies for organizational capabilities.

Perceptual measures of organizational capabilities


Self-reported questionnaire ratings of organizational capabilities—typically from the senior man-
agers of responding firms—are either single-item ordinal assessments of specific capabilities (e.g.
Ethiraj et al., 2005; Sirmon et al., 2010; Snow and Hrebiniak, 1980) or multiple items that measure
different dimensions of a single capability (e.g. Berchicci et al., 2012; Kusunoki et al., 1998; Zhou
and Wu, 2010).
Inaccuracy and inconsistency in self-reported performance are well known in measuring indi-
vidual competences (Kruger and Dunning, 1999) but have attracted less attention in relation to
organizational capabilities. There are three sources of error: motivational factors that cause informants to deliberately misreport, perceptual and cognitive distortions, and lack of information (Huber and Power, 1985). Motivational and perceptual factors are likely to cause “common method bias,” which arises when the same informants rate both independent and dependent variables. However,
a much more fundamental source of error is respondents’ lack of information. The routinized,
socially embedded, causally ambiguous nature of organizational capabilities results in managers
lacking comprehension of their companies’ levels of capability—especially for capabilities that are
not systematically assessed by standardized performance metrics. This problem is most serious for
capabilities that do not reside in a defined functional unit or where no single executive has direct
management responsibility for the capability.
Denrell et al. (2004) provide salutary evidence of the unreliability of self-reported assessments
of organizational capability. Inconsistent assessments by different executives of their companies’ capabilities had three causes: first, different raters interpreting capabilities in different ways; second, raters lacking direct experience in the tasks being evaluated; and third, cognitive bias created by prior beliefs and selective recall.
Several of the studies we surveyed addressed capabilities whose performance appeared to be inherently
difficult to discern and measure. For example, Drnevich and Kriauciunas (2011) asked respond-
ents to assess “the use of IT in your firm over the last three years to (…) implement new business
process, create new customer relationships [and] change ways of doing business” (p. 265). Zhou
and Wu (2010) asked respondents to assess their firms’ “reconfiguring chains of resources the
firm can use in developing, manufacturing, and delivering its intended products to target mar-
kets” (p. 561).
In some studies, it was doubtful whether informants had the direct involvement that Denrell
et al. (2004) suggest is necessary for reliable assessment. Danneels (2008) asked a single respond-
ent in each firm to assess components of marketing, R&D, and resource accumulation capabilities
with questions that ranged from “setting up new distribution channels” to “recruiting engineers in
technical areas it is not familiar with” (pp. 542–543). Gruber et al. (2010) asked managing direc-
tors or founders to assess their level of assent to the statements: “In the recruiting process, we
check whether a candidate for the sales department has substantial knowledge of our market,” and
“Review meetings after contacting a customer often lead to incisive discussions.” In both studies,
accurate assessments would have required respondents to possess detailed performance knowledge across a wide functional range.
The requirement for respondents to assess their own firms’ capabilities relative to those of com-
petitors greatly extends the knowledge needed to provide accurate assessments. Managers tend to
have limited knowledge of competitors’ performance in activities that are not readily observable or
subject to regular benchmarking. Thus, we anticipate that the senior managers of trucking firms
might have difficulty in assessing: “How well does your organization perform the following activi-
ties relative to competitors: environmental scanning, market planning, marketing skill develop-
ment, internal coordination and communication” (Vorhies et al., 2009: 1334).

Solutions.  Several of the problems that afflict self-reported measures of organizational capabilities
were widely recognized and effectively addressed. In particular, most studies avoided common
method bias by using different sources for dependent and independent variables or—less satisfac-
torily—by collecting data at different times (e.g. Danneels, 2008; Schreiner et al., 2009; Subramaniam and Youndt, 2005). Multi-item measures of organizational capabilities combined with the use
of confirmatory factor analysis and structural equation models ensure convergent and discriminant
validity in measuring organizational capabilities (e.g. Gruber et al., 2010; McEvily and Marcus,
2005; Vorhies et al., 2009; Zhou and Wu, 2010).
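To make the logic of such checks concrete, the following minimal Python sketch computes Cronbach’s alpha for convergent reliability and a simple between-construct correlation check for discriminant validity on simulated ratings. All data, item names, and construct labels are hypothetical, and these summary statistics merely stand in for the full confirmatory factor analysis and structural equation modeling employed in the studies cited.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability of a multi-item scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Simulated survey: three items per construct for two capabilities.
rng = np.random.default_rng(0)
n = 200
tech = rng.normal(size=n)  # latent technological capability (unobserved)
mkt = rng.normal(size=n)   # latent marketing capability (unobserved)
df = pd.DataFrame(
    {f"tech_{i}": tech + rng.normal(scale=0.6, size=n) for i in range(3)}
    | {f"mkt_{i}": mkt + rng.normal(scale=0.6, size=n) for i in range(3)}
)

constructs = {"tech": ["tech_0", "tech_1", "tech_2"],
              "mkt": ["mkt_0", "mkt_1", "mkt_2"]}
for name, cols in constructs.items():
    # Convergent reliability: items of one construct should cohere.
    print(name, round(cronbach_alpha(df[cols]), 2))

# Crude discriminant check: the correlation between construct scores
# should fall well below each scale's internal reliability.
scores = pd.DataFrame({c: df[cols].mean(axis=1) for c, cols in constructs.items()})
print(scores.corr())
```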
However, such methodological sophistication counts for little if respondents lack information
about their own firms’ capabilities. We recommend that questionnaires should, first, seek factual
information that is either readily observable by managers or generated as part of internal perfor-
mance monitoring; second, address informants who are intimately involved in the processes/activi-
ties which underlie the capabilities being measured; third, ensure that the terminology employed is
comprehended by informants. Thus, Weigelt’s (2009) assessment of banks’ capabilities in integrat-
ing web-based technology to support their online banking operations asked precisely targeted
questions concerning the use of information technology to the individuals directly responsible for
online banking operations. Similarly, Hansen and Lovas’s (2004) measurement of the transfer of technological competencies among MNC subsidiaries posed precise factual questions concerning the proportions of hardware, software, and technical know-how obtained from other subsidiaries to the project managers actually engaged in these activities. King and Zeithaml’s (2003) data gathering protocol
is exemplary in its careful design of questionnaires and identification of appropriate respondents.
If managers are to assess their own firms’ capabilities relative to rivals, questions need to
address observable factual information. Thus, rather than asking informants to assess broad capa-
bilities such as “marketing orientation,” Morgan et al. (2009) advocate asking specific questions
concerning narrowly defined capabilities that are based upon “organizational processes (…) whose
manifestations are observable to rivals” (p. 912). Similarly, in assessing alliance capability,
Schreiner et al. (2009: 1206) inquired into specific aspects of “the service provider’s processes,
actions, and behaviors in its relationship with that particular vendor.”
Large-scale, third-party questionnaire surveys such as Banque de France’s Sesame survey
(Sirmon et al., 2010) and Eurostat’s Community Innovation Surveys (Laursen and Salter, 2006;
Leiponen and Helfat, 2010) can improve cross-sectional consistency. In the study by Rockart and Dutt
(2013), consistency and lack of bias in the assessments of investment banks’ underwriting capabil-
ity were enhanced by the fact that the capability assessments (collected by Greenwich Associates)
were provided by the banks’ underwriting clients rather than the banks themselves.

Observable proxies for organizational capabilities


The reliability and validity problems of self-reported capability assessments can, in principle, be
avoided by using observable indicators of the underlying capability—usually measures of the per-
formance arising from the capability. Such proxies include patent counts to measure innovation
capability (e.g. Henderson and Cockburn, 1994), the frequency of new product introductions to
measure “market pioneering” and “market responding” capabilities (Franco et al., 2009), operating
margin and loans per employee to measure “productive capability” in mortgage banking (Jacobides
and Hitt, 2005), and the ratio of total deposits to balance sheet assets to measure the cross-selling
capabilities of online banks (Brush et al., 2012). Alternatively, some studies have used resource
inputs as proxies for capabilities: for example, R&D expenditure as an indicator of technological
capability (Helfat, 1997; Rothaermel and Hess, 2007) and numbers of general management and
functional executives to measure managerial and functional capabilities (Fortune and Mitchell,
2012).
The central challenge here is content validity: “the extent to which an empirical measurement instrument reflects a specific domain of content” (Venkatraman and Grant, 1986: 79), which
depends upon the “correspondence between the definition of the concept and the domain covered
by the chosen pool of items” (ibid: 81). This correspondence relates to the extent to which the
indicator variable, first, captures the underlying capability construct and, second, is independent of
determinants other than the underlying capability. Patent counts as indicators of innovation capa-
bility suffer on both counts: they are just one aspect of innovation and they are determined by fac-
tors other than innovation capability (Devinney, 1993; Hagedoorn and Cloodt, 2003). Other proxies
suffer from similar deficiencies: for example, measuring project management capability by fre-
quency of errors and overruns of time and labor (Ethiraj et al., 2005) fails to account for differences
in the novelty and complexity of different firms’ project portfolios.

Solutions.  Observable proxies for organizational capabilities need to correspond closely to the
underlying capability and be free from the influence of extraneous factors. For single-variable
proxies to meet these criteria requires careful selection of capabilities and industry contexts.
Thus, Makadok and Walker (2000) measure money market funds’ interest rate forecasting capa-
bility by the ratio of the time-to-maturity of a fund’s portfolio at the beginning of each month to the treasury bill rate at the end of the month. This measure achieves correspondence between the manifest behavior of the firm (the maturity of its securities portfolio) and the capability being measured. Similarly, in Franco et al.’s (2009) study of the disk drive industry, average areal density offers a credible indicator of technological capability. In general, however, multiple indicators are better than single indicators in capturing the domain of a capability (e.g.
Ethiraj et al., 2005).
Content validity cannot be assessed by statistical procedures, but it relies on “appeals to reason
that the procedures used to develop an instrument ensure that important content has been ade-
quately sampled and represented” (Edwards, 2003: 334). Citing precedent is inadequate justification for the choice of proxy variables; far more credible is endorsement of the capability indicators by
industry experts (e.g. Ethiraj et al., 2005; King and Zeithaml, 2003).
Industries subject to extensive data gathering by regulatory bodies, benchmarking associations,
industry associations, and standards and certification bodies are valuable sources of large sample,
high reliability performance data that can be used as measures of organizational capabilities.
Berchicci et al. (2012) use data from the Environmental Protection Agency’s Toxics Release
Inventory Program to measure chemical firms’ environmental capabilities; Pisano et al. (2001) and
Huesch (2013) draw upon the rich clinical data relating to surgical inputs, procedures and out-
comes to measure capabilities and capability accumulation in cardiac surgery.

Circumventing capability measurement


Finally, we identify several approaches to estimating the effects of organizational capabilities that
avoid measuring the capabilities themselves. The most widely used approach is that outlined by
Godfrey and Hill (1995: 529–31) for empirically testing the resource-based view. When resources
are unobservable, they propose linking their antecedents to their profit outcomes in order to infer
their role in determining profitability.

This approach—in essence, replacing a structural model with a reduced form model—has been
widely used. Eggers and Kaplan (2013) refer to the ‘standard model’ of capability development in
which ‘capabilities and resources are built through experience, and that these capabilities and
resources in turn drive organizational performance,’ which permits performance to be regressed
directly on experience. In relation to alliance capability, Kale et al. (2002) regress abnormal stock
market returns to alliance announcements on accumulated alliance experience. Zollo and Singh
(2004) use a similar approach to research acquisition integration capability: they regress the post-
merger profitability change on accumulated acquisition experience (and other variables). Studies
relating capabilities to strategic choice also use this type of indirect estimation. Chang (1995) and
King and Tucci (2002) predict that market entry develops capabilities, which then drive subsequent
market entries, a prediction they test by relating market entries in one period to previous market entries.
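A minimal sketch of this reduced-form logic, using simulated data and the statsmodels library (the variable names are ours, not drawn from any of the studies cited), shows how the capability construct drops out of the estimated equation altogether:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated reduced-form test: performance regressed directly on
# accumulated experience, with capability left implicit and unmeasured.
rng = np.random.default_rng(1)
n = 300
experience = rng.poisson(lam=4, size=n)  # e.g. number of prior acquisitions
firm_size = rng.normal(size=n)           # control variable
performance = 0.3 * experience + 0.5 * firm_size + rng.normal(size=n)

X = sm.add_constant(pd.DataFrame({"experience": experience,
                                  "firm_size": firm_size}))
fit = sm.OLS(performance, X).fit()
print(fit.params)
# A positive experience coefficient is consistent with, but does not itself
# demonstrate, an experience-built capability: other causal paths from
# experience to performance remain possible.
```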
Alternatively, the performance impact of organizational capability has been identified by cross-
sectional regression of firm performance on a number of explanatory factors, where performance
residuals or the coefficients of firm dummy variables are interpreted as reflections of the influence
of firms’ differential capabilities. For example, Berchicci et al. (2012) regress output of pollutants
on various predictor variables and use the residuals as measures of organizational capability, while
Clougherty and Moliterno (2010) introduce firm dummies into regressions of airline load factors
on various explanatory variables in order to capture the impact of differential capabilities. An
extension of this use of productivity regressions to estimate firms’ relative capabilities is stochastic
frontier estimation which regresses output on resource inputs, then separates the error term into a
random component and a firm-specific component, the latter representing each firm’s capability
(Dutta et al., 2005). Mahmood et al. (2011) employ a similar approach to study the antecedents of
R&D capabilities.
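The firm-dummy variant can be illustrated with a short sketch on simulated panel data (all names and parameter values are hypothetical); a full stochastic frontier model would go further and replace the symmetric residual with a composed error term:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated production panel: firm dummies absorb persistent firm-specific
# productivity differences, which are then read as relative capabilities.
rng = np.random.default_rng(2)
n_firms, n_years = 40, 8
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "log_labor": rng.normal(size=n_firms * n_years),
})
capability = np.repeat(rng.normal(scale=0.4, size=n_firms), n_years)
df["log_output"] = (0.7 * df["log_labor"] + capability
                    + rng.normal(scale=0.2, size=len(df)))

fit = smf.ols("log_output ~ log_labor + C(firm)", data=df).fit()
firm_effects = fit.params.filter(like="C(firm)")  # relative to baseline firm
print(firm_effects.sort_values().tail())
# Stochastic frontier estimation refines this logic by decomposing the
# residual into symmetric noise and a one-sided (in)efficiency component.
```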
While these indirect approaches may avoid the allegations of tautology aimed at empirical tests
of capability theory (Grant and Verona, 2013), the core issue is one of nomological validity which
“entails the evaluation of a measure within a broader theory that describes the causes, effects, and
correlates of the construct and how they relate to one another” (Edwards, 2003: 331). Once organi-
zational capabilities are eliminated from models that either link their determinants to their out-
comes or identify the impact of organizational capabilities with regression residuals or firm-specific
effects, there is a lack of direct evidence that they “exhibit relationships with measures of other
constructs in accordance with relevant theory” (ibid).
Thus, regressing organizational performance upon the antecedents of organizational capa-
bility reveals little about capabilities unless alternative paths of causation can be legitimately elimi-
nated. For example, Arikan and McGahan (2010) regress (a) the occurrence of acquisition and
alliance deals on abnormal stock market returns from prior deals and (b) the difference between
abnormal returns from the focal deal and abnormal returns from prior deals on the number of prior deals of that type undertaken by the firm. Inferring the presence of alliance and acquisition capabilities from these relationships depends on a number of restrictive assumptions
concerning the behavior of managers and stock markets.

Solutions.  In all these approaches, the critical issue is the credibility of the inferences concerning the
effects that are attributed to the role of organizational capabilities. This credibility depends upon the
ability to reasonably eliminate alternative paths of causation. In studies that regress the outcomes of capabilities on their antecedents, extreme caution must be exercised in interpreting evidence of
association. For example, Arora and Gambardella’s (1997) analysis of the impact of home country
size on the performance of engineering firms draws inferences about the relative role of ‘product-
specific’ and ‘generic’ capabilities; however, the authors are careful not to go further in specifying
the specific capabilities that might comprise these two broad groups of capability types. In the case
of the linkage between experience, capability and performance, Eggers and Kaplan (2013) point to
the complex relationships between these constructs, with particular focus on the role of cognition.
The implication is that empirical investigation of the relationships between the antecedents and
outcomes of organizational capabilities should either draw upon additional sources of evidence in
order to infer the role of organizational capabilities or employ structural equation models.
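As an illustration of the second route, the sketch below places a latent capability explicitly between its antecedent and its outcome in a structural equation model. It uses the third-party semopy package and simulated data; the model syntax and all variable names are ours, not taken from any study discussed above.

```python
import numpy as np
import pandas as pd
import semopy  # third-party SEM package with lavaan-style model syntax

# Simulated data: experience builds an unobserved capability, measured by
# three indicator items, which in turn drives performance.
rng = np.random.default_rng(3)
n = 500
experience = rng.normal(size=n)
capability = 0.6 * experience + rng.normal(scale=0.8, size=n)
data = pd.DataFrame({
    "experience": experience,
    "performance": 0.5 * capability + rng.normal(size=n),
    "item1": capability + rng.normal(scale=0.5, size=n),
    "item2": capability + rng.normal(scale=0.5, size=n),
    "item3": capability + rng.normal(scale=0.5, size=n),
})

# Structural model: capability appears as an explicit latent variable
# rather than being eliminated into a reduced form.
description = """
capability =~ item1 + item2 + item3
capability ~ experience
performance ~ capability
"""
model = semopy.Model(description)
model.fit(data)
print(model.inspect())  # loadings and structural paths, with standard errors
```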

Qualitative studies: problems of identification


Godfrey and Hill (1995) suggest that

qualitative methodologies such as multiple case studies, event histories, and ethnographic inquiries may
represent the best way forward in observing the effects of otherwise unobservable, idiosyncratic effects on
business strategy and performance, such as those predicted by the resource-based view of the firm. (p. 531)

In the case of organizational capabilities, the benefits of case-based research are twofold. First, it
offers penetrating insight into the origins and development of organizational capabilities and their
relationship to strategy. Second, it permits investigation of the idiosyncratic capabilities which the
resource-based view predicts are most strategically important in terms of conferring sustainable
competitive advantage.
However, qualitative case study research encounters its own problems. While the limitations of
case-based research in terms of external validity—the ability to generalize findings to other sam-
ples and settings—are well recognized, less attention has been given to the problems of identifying
capabilities.
In case research, capabilities are identified on the basis of the researchers’ interpretation of the
case data. Three types of problem emanate from this subjectivity. First, accuracy and face validity: To
what extent do the capabilities identified correspond to the underlying reality? Second, discrimi-
nant validity: To what extent does the case data point to the existence of the identified capability as
distinct from another capability? Third, consistency: Would different researchers have identified
the same capabilities within these firms?
Symptomatic of all three problems is the lack of specificity and precision with which capabili-
ties were identified in some studies. In Raff’s (2000) account of capability development at Borders
and Barnes & Noble, Barnes & Noble’s capabilities remained vague and ill defined: they were directed toward “reaching for scale,” establishing “practices to promote volume,” and “mass-
market merchandizing” (pp. 1054–5). Similarly, Montealegre’s (2002) study of an Ecuadorian electronic stock exchange identified the “capability to strategize,” “capability to be flexible,” and
“capabilities to integrate and engender trust.”
This lack of specificity is especially apparent in relation to dynamic capabilities—a concept
“riddled with inconsistencies, overlapping definitions, and outright contradictions” (Zahra et al.,
2006: 917). For example, Rosenbloom (2000) attributed NCR’s transformation to its “latent
dynamic capabilities,” but beyond “decisive action by the top executive” (p. 1102) the content of
these capabilities remained vague. Other broad-based descriptions of dynamic capabilities include
the “continuous morphing” of internet portals Yahoo and Excite (Rindova and Kotha, 2001) and
“resource cognition” at typewriter manufacturer Smith Corona (Danneels, 2011).

Solutions
If case-based research is to achieve face validity, discriminant validity, and consistency in identify-
ing organizational capabilities, then constraints need to be placed on researchers’ subjectivity. The
key is to use multiple sources to triangulate case data. It is desirable that capabilities be empiri-
cally verifiable in relation to the organizational mechanisms through which they emerge and are
operationalized. While organizational capabilities have been identified primarily on the basis of
their manifest performance, research into the microfoundations of organizational capabilities reveals
the roles of cognition and action in their origins and operation (Gavetti, 2005; Winter, 2000). On
this basis, we propose that the identification of organizational capabilities should be based on three
types of empirical data: performance (Is the capability manifest in the performance of a particular
task or function?), cognition (Are those who perform the capability or those who manage them
aware of it and able to articulate it?), and action (Is the capability manifest in observable routines,
processes, decisions, directions, and activities within the organization?). Such a multi-dimensional
approach to identifying capabilities was used in Leonard-Barton’s (1992) study of core capabilities
among five major US industrial firms and in Pandza’s (2011) study of operational capabilities
within a pharmaceutical company.
Evidence of an “identifiable, specific process” is especially relevant to support the identifica-
tion of dynamic capability (Eisenhardt and Martin, 2000: 1107). Without such explicit underpin-
ning, the validity of broadly defined dynamic capabilities such as Teece’s (2007) tripartite
framework of “sensing, seizing, and transforming” capabilities will always be questionable.
Harreld et al.’s (2007) analysis of dynamic capabilities at IBM exemplifies the merits of articulat-
ing the processes and structures which “allowed the elephant to dance.”
Articulation of the structural and cognitive dimensions of organizational capabilities can also
help in the identification of generalities among idiosyncratic capabilities. For example, work on the
linkages between human capital and organizational systems suggests that “effective systems may
be cospecialized to create highly idiosyncratic capabilities composed of distinct structures, systems
and individuals” (Coff and Kryscynski, 2011: 1440).
Finally, as Godfrey and Hill (1995) recognize, the external validity of case research ultimately
rests upon “the need for repeated clinical studies” (p. 530). Such repeatability may be achieved by
small sample case research (Leonard-Barton, 1992; Lorenzoni and Lipparini, 1999), by paired
comparisons (Raff, 2000; Rindova and Kotha, 2001), or by multiple cases within a single firm (e.g.
the 90 product development projects at Alessi studied by Salvato, 2009).

Conclusion
Organizational capability is a central construct in theories of competitive advantage, strategic
choice, firm boundaries, learning and adaptation, and technological change. Yet, if organizational
capabilities are to fulfill their potential to illuminate these phenomena, there is an urgent need to
surmount the problems of identification and measurement that continue to impede empirical
research. The key problems that we identify—the unreliability of self-reported capability assess-
ments, the use of unsuitable proxies for organizational capabilities, and subjectivity in the identifi-
cation of organizational capabilities—have their solutions less in psychometric and econometric
sophistication and more in researchers’ exercise of sound judgement. Thus, in relation to self-
reported assessments of capabilities in cross-sectional studies, we stress the need to ensure that
those who rate their organizations’ capabilities are adequately informed of those capabilities.
Where archival data are used as proxies for organizational capabilities in cross-sectional studies,
the proxy variables need to correspond closely to the underlying capability. We see tremendous
potential for qualitative, case-based research to validate and extend existing theories concerning
the development and deployment of organizational capabilities—especially in relation to the vexed
topic of dynamic capability. Again, we see progress as dependent upon very basic improvements
in research methodology—especially reducing the subjectivity with which capabilities
are identified through recognition of the structural and cognitive foundations of capabilities to
facilitate triangulated observation. Finally, remedying these problems does not require new research
methods: all the solutions we offer are drawn from the existing literature. The essential require-
ment is for researchers (together with the reviewers and editors who assess their output) to acknowl-
edge the problems and recognize and adopt current best practices.

Acknowledgements
Connie Helfat and Davide Ravasi provided insightful comments on previous versions of the manuscript. We are
grateful to them and to the three SO! editors for helping improve the article. Errors remain our own.

Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit
sectors.

References
Adner, R. and Helfat, C. E. (2003) “Corporate Effects and Dynamic Managerial Capabilities”, Strategic
Management Journal 24(10): 1011–25.
Amit, R. and Schoemaker, P. (1993) “Strategic Assets and Organizational Rent”, Strategic Management
Journal 14(1): 33–46.
Ansari, S., Munir, K. and Gregg, T. (2012) “Impact at the ‘Bottom of the Pyramid’: The Role of Social Capital
in Capability Development and Community Empowerment”, Journal of Management Studies 49(4):
813–42.
Arend, R. J. and Bromiley, P. (2009) “Assessing the Dynamic Capabilities View: Spare Change, Everyone?”,
Strategic Organization 7(1): 75–90.
Argyres, N. (1996) “Evidence on the Role of Firm Capabilities in Vertical Integration Decisions”, Strategic
Management Journal 17(1): 129–50.
Arikan, A. M. and McGahan, A. M. (2010) “The Development of Capabilities in New Firms”, Strategic
Management Journal 31(1): 1–18.
Arora, A. and Gambardella, A. (1997) “Domestic Markets and International Competitiveness: Generic and
Product-Specific Competencies in the Engineering Sector”, Strategic Management Journal 18(Summer
Special Issue): 53–74.
Berchicci, L., Dowell, G. and King, A. A. (2012) “Environmental Capabilities and Corporate Strategy:
Exploring Acquisitions among US Manufacturing Firms”, Strategic Management Journal 33(9): 1053–
71.
Bettis, R. A. (2012) “The Search for Asterisks: Compromised Statistical Tests and Flawed Theories”, Strategic
Management Journal 33(1): 108–13.
Brush, T. H. and Artz, K. W. (1999) “Toward a Contingent Resource Based Theory: The Impact of Information
Asymmetry on the Value of Capabilities in Veterinary Medicine”, Strategic Management Journal 20(3):
223–50.
Brush, T. H., Dangol, R. and O’Brien, J. P. (2012) “Customer Capabilities, Switching Costs, and Bank
Performance”, Strategic Management Journal 33(13): 1499–515.
Casciaro, T. and Lobo, M. S. (2008) “When Competence is Irrelevant: The Role of Interpersonal Affect in
Task-Related Ties”, Administrative Science Quarterly 53: 655–84.
Chang, S. J. (1995) “International Expansion Strategy of Japanese Firms: Capability Building through
Sequential Entry”, Academy of Management Journal 38(2): 383–407.
Clougherty, J. A. and Moliterno, T. P. (2010) “Empirically Eliciting Complementarities in Capabilities:
Integrating Quasi-Experimental and Panel Data Methodologies”, Strategic Organization 8(1): 107–31.
Coff, R. (2010) “The Coevolution of Rent Appropriation and Capability Development”, Strategic Management
Journal 31(7): 711–33.
Coff, R. and Kryscynski, D. (2011) “Drilling for the Microfoundations of Human Capital-Based Competitive
Advantages”, Journal of Management 37(5): 1429–43.
Collis, D. J. (1991) “A Resource-Based Analysis of Global Competition: The Case of the Bearings Industry”,
Strategic Management Journal 12(Summer Special Issue): 49–68.
Colombo, M. G. (2003) “Alliance Form: A Test of the Contractual and Competence Perspectives”, Strategic
Management Journal 24(12): 1209–29.
Danneels, E. (2008) “Organizational Antecedents of Second Order Competences”, Strategic Management
Journal 29(5): 519–43.
Danneels, E. (2011) “Trying to Become a Different Type of Company: Dynamic Capability at Smith Corona”,
Strategic Management Journal 32(1): 1–31.
Denrell, J., Arvidsson, N. and Zander, U. (2004) “Managing Knowledge in the Dark: An Empirical Study of
the Reliability of Capability Evaluations”, Management Science 50(11): 1491–503.
Devinney, T. M. (1993) “How Well Do Patents Measure New Product Activity?”, Economic Letters 41(4):
447–50.
Drnevich, P. L. and Kriauciunas, A. P. (2011) “Clarifying the Conditions and Limits of the Contributions
of Ordinary and Dynamic Capabilities to Relative Firm Performance”, Strategic Management Journal
32(2): 254–79.
Dutta, S., Narasimhan, O. and Rajiv, S. (2005) “Conceptualizing and Measuring Capabilities: Methodology
and Empirical Application”, Strategic Management Journal 26(3): 277–85.
Edwards, J. R. (2003) “Construct Validation in Organizational Behavior Research”, in J. Greenberg (ed.)
Organizational Behavior: The State of the Science, 2nd edn. Mahwah, NJ: Erlbaum, pp. 327–71.
Eggers, J. P. and Kaplan, S. (2013) “Cognition and Capabilities”, Academy of Management Annals 4: 1–46.
Eisenhardt, K. M. and Martin, J. (2000) “Dynamic Capabilities: What Are They?”, Strategic Management
Journal 21(10–11): 1105–21.
Ethiraj, S. K., Kale, P., Krishnan, M. S. and Singh, J. V. (2005) “Where Do Capabilities Come from and How
Do They Matter? A Study in the Software Services Industry”, Strategic Management Journal 26(1): 25–45.
Fortune, A. and Mitchell, W. (2012) “Unpacking Firm Exit at the Firm and Industry Levels: The Adaptation
and Selection of Firm Capabilities”, Strategic Management Journal 33: 794–819.
Franco, A. M., Sarkar, M. B. and Agarwal, R. (2009) “Swift and Smart: The Moderating Effects of
Technological Capabilities on the Market Pioneering-Firm Survival Relationship”, Management Science
55(11): 1842–60.
Gavetti, G. (2005) “Cognition and Hierarchy: Rethinking the Microfoundations of Capabilities’ Development”,
Organization Science 16(6): 599–617.
Godfrey, P. C. and Hill, C. W. L. (1995) “The Problem of Unobservables in Strategic Management Research”,
Strategic Management Journal 16(7): 519–33.
Grant, R. M. and Verona, G. (2013) “Measuring Competence”, in M. Augier and D. J. Teece (eds) The
Palgrave Encyclopedia of Strategic Management. Available at: http://www.palgraveconnect.com/esm/
doifinder/10.1057/9781137294678.0419 (accessed 9 November 2014).
Gruber, M., Heinemann, F., Brettel, M. and Hungeling, S. (2010) “Configurations of Resources and
Capabilities and Their Performance Implications: An Exploratory Study on Technology Ventures”,
Strategic Management Journal 31(12): 1337–56.
Hagedoorn, J. and Cloodt, M. (2003) “Measuring Innovative Performance: Is there an Advantage in Using
Multiple Indicators?”, Research Policy 32: 1365–79.
Hamel, G. and Prahalad, C. K. (1992) “Letter”, Harvard Business Review, May–June, pp. 164–5.
Hansen, M. T. and Lovas, B. (2004) “How Do Multinational Companies Leverage Technological
Competencies? Moving from Single to Interdependent Explanations”, Strategic Management Journal
25: 801–22.
Harreld, J. B., O’Reilly, C. B. and Tushman, M. L. (2007) “Dynamic Capabilities at IBM: Driving Strategy
into Action”, California Management Review 49(4): 21–43.
Harzing, A.-W. (2014) “Journal Quality List”. Available at: http://www.harzing.com/jql.htm (accessed June
20, 2014).
Helfat, C. E. (1997) “Know-How and Asset Complementarity and Dynamic Capability Accumulation: The
Case of R&D”, Strategic Management Journal 18(5): 339–60.
Helfat, C. E. and Winter, S. G. (2011) “Untangling Dynamic and Operational Capabilities: Strategy for the
(N)Ever-Changing World”, Strategic Management Journal 32: 1243–50.
Henderson, R. and Cockburn, I. (1994) “Measuring Competence? Exploring Firm Effects in Pharmaceutical
Research”, Strategic Management Journal 15(Winter Special Issue): 63–84.
Hitt, M. A. and Ireland, R. D. (1985) “Corporate Distinctive Competence, Strategy, Industry and Performance”,
Strategic Management Journal 6(3): 273–93.
Holburn, G. L. F. and Zelner, B. A. (2010) “Political Capabilities, Policy Risk, and International Investment
Strategy: Evidence from the Global Electric Power Generation Industry”, Strategic Management Journal
31: 1290–315.
Huber, G. P. and Power, D. J. (1985) “Retrospective Reports of Strategic-Level Managers: Guidelines for
Increasing Their Accuracy”, Strategic Management Journal 6(2): 171–80.
Huesch, M. D. (2013) “Are there Always Synergies between Productive Resources and Resource Deployment
Capabilities?”, Strategic Management Journal 34(11): 1288–313.
Jacobides, M. G. and Hitt, L. M. (2005) “Losing Sight of the Forest for the Trees: Productive
Capabilities and Gains from Trade as Drivers of Vertical Scope”, Strategic Management Journal
26(13): 1209–27.
Jain, A. (2013) “Learning by Doing and the Locus of Innovative Capability in Biotechnology Research”,
Organization Science 24(6): 1683–700.
Kale, P. and Singh, H. (2007) “Building Firm Capabilities through Learning: The Role of the Alliance
Learning Process in Alliance Capability and Firm-Level Alliance Success”, Strategic Management
Journal 28(10): 981–1000.
Kale, P., Dyer, J. H. and Singh, H. (2002) “Alliance Capability, Stock Market Response, and Long-Term
Alliance Success: The Role of the Alliance Function”, Strategic Management Journal 23(8): 747–67.
King, A. A. and Tucci, C. L. (2002) “Incumbent Entry into New Market Niches: The Role of Experience
and Managerial Choice in the Creation of Dynamic Capabilities”, Management Science 48(2): 171–86.
King, A. W. and Zeithaml, C. P. (2001) “Competencies and Firm Performance: Examining the Causal
Ambiguity Paradox”, Strategic Management Journal 22(1): 75–99.
King, A. W. and Zeithaml, C. P. (2003) “Measuring Organizational Knowledge: A Conceptual and
Methodological Framework”, Strategic Management Journal 24(8): 763–72.
Kruger, J. and Dunning, D. (1999) “Unskilled and Unaware of It: How Difficulties in Recognizing One’s
Own Incompetence Lead to Inflated Self-Assessments”, Journal of Personality and Social Psychology
77(6): 1121–34.
Kusunoki, K., Nonaka, I. and Nagata, A. (1998) “Organizational Capabilities in Product Development of
Japanese Firms: A Conceptual Framework and Empirical Findings”, Organization Science 9(6): 699–
718.
Laursen, K. and Salter, A. (2006) “Open for Innovation: The Role of Openness in Explaining Innovation
Performance among UK Manufacturing Firms”, Strategic Management Journal 27(2): 131–50.
Lee, G. (2008) “Relevance of Organizational Capabilities and Its Dynamics: What to Learn From Entrants’
Product Portfolio about the Determinants of Entry Dynamics”, Strategic Management Journal 29(12):
1257–80.
Leiblein, M. J. (2011) “What Do Resource and Capability Theories Propose?”, Journal of Management 37(4):
909–32.
Leiblein, M. J. and Madsen, T. L. (2009) “Unbundling Competitive Heterogeneity: Incentive Structures and
Capability Influences on Technological Innovation”, Strategic Management Journal 30(7): 711–35.
Leiponen, A. and Helfat, C. E. (2010) “Innovation Objectives, Knowledge Sources, and the Benefits of
Breadth”, Strategic Management Journal 31(2): 224–36.
Leonard-Barton, D. (1992) “Core Capabilities and Core Rigidities: A Paradox in Managing New Product
Development”, Strategic Management Journal 13(Summer Special Issue): 111–26.
Lorenzoni, G. and Lipparini, A. (1999) “The Leveraging of Interfirm Relationships as a Distinctive
Organizational Capability: A Longitudinal Study”, Strategic Management Journal 20: 317–38.
McEvily, B. and Marcus, A. (2005) “Embedded Ties and the Acquisition of Competitive Capabilities”,
Strategic Management Journal 26(11): 1033–55.
Mahmood, I. P., Zhu, H. and Zajac, E. J. (2011) “Where Can Capabilities Come From? Network Ties and
Capability Acquisition in Business Groups”, Strategic Management Journal 32: 820–48.
Makadok, R. and Walker, G. (2000) “Identifying a Distinctive Competence: Forecasting Ability in the Money
Fund Industry”, Strategic Management Journal 21(8): 853–64.
Marino, K. E. (1996) “Developing Consensus on Firm Competences and Capabilities”, The Academy of
Management Executive 10(3): 40–5.
Miller, K. D. and Tsang, E. W. K. (2010) “Testing Management Theories: Critical Realist Philosophy and
Research Methods”, Strategic Management Journal 32: 139–58.
Montealegre, R. (2002) “A Process Model of Capability Development: Lessons from the Electronic Commerce
Strategy at Bolsa de Valores de Guayaquil”, Organization Science 13(5): 514–31.
Morgan, N. A., Vorhies, D. W. and Mason, C. H. (2009) “Market Orientation, Marketing Capabilities, and
Firm Performance”, Strategic Management Journal 30(8): 909–20.
Newbert, S. (2007) “Empirical Research of the Resource-Based View of the Firm: An Assessment and
Suggestions for Future Research”, Strategic Management Journal 28(2): 121–46.
Orlikowski, W. J. (2002) “Knowing in Practice: Enacting a Collective Capability in Distributed Organizing”,
Organization Science 13(4): 249–63.
Oxley, J. E., Rivkin, J. W. and Ryall, M. D. (2010) “The Strategy Research Initiative: Recognizing and
Encouraging High-Quality Research in Strategy”, Strategic Organization 8(4): 377–86.
Pandza, K. (2011) “Why and How Will a Group Act Autonomously to Make an Impact on the Development
of Organizational Capabilities?”, Journal of Management Studies 48(5): 1015–43.
Parmigiani, A. and Mitchell, W. (2009) “Complementarity, Capabilities, and the Boundary of the Firm: The
Impact of Within-Firm and Interfirm Expertise on Concurrent Sourcing of Complementary Components”,
Strategic Management Journal 30(10): 1065–91.
Pisano, G. P., Bohmer, R. M. J. and Edmondson, A. C. (2001) “Organizational Differences in Rates of
Learning: Evidence from the Adoption of Minimally Invasive Cardiac Surgery”, Management Science
47(6): 752–68.
Raff, D. M. (2000) “Superstores and the Evolution of Firm Capabilities in American Bookselling”, Strategic
Management Journal 21(10–11): 1043–59.
Ranft, A. L. and Lord, M. D. (2002) “Acquiring New Technologies and Capabilities: A Grounded Model of
Acquisition Implementation”, Organization Science 13(4): 420–41.
Rindova, V. P. and Kotha, S. (2001) “Continuous “Morphing”: Competing Through Dynamic Capabilities,
Form, and Function”, Academy of Management Journal 44(6): 1263–80.
Rockart, S. F. and Dutt, N. (2013) “The Rate and Potential of Capability Development Trajectories”, Strategic
Management Journal 36(1): 53–75.
Rosenbloom, R. S. (2000) “Leadership, Capabilities, and Technological Change: The Transformation of NCR
in the Electronic Era”, Strategic Management Journal 21(10–11): 1083–103.
Rothaermel, F. T. and Hess, A. M. (2007) “Building Dynamic Capabilities: Innovation Driven by Individual-,
Firm-, and Network-Level Effects”, Organization Science 18(6): 898–921.
Salvato, C. (2009) “Capabilities Unveiled: The Role of Ordinary Activities in the Evolution of Product
Development Processes”, Organization Science 20(2): 384–409.
Schreiner, M., Kale, P. and Corsten, D. (2009) “What Really Is Alliance Management Capability and How
Does It Impact Alliance Outcomes and Success?”, Strategic Management Journal 30(13): 1395–419.
Sirmon, D. G., Hitt, M. A., Arregle, J.-L. and Tochman-Campbell, J. (2010) “The Dynamic Interplay of
Capability Strengths and Weaknesses: Investigating the Bases of Temporary Competitive Advantage”,
Strategic Management Journal 31(13): 1386–409.
Snow, C. C. and Hrebiniak, H. G. (1980) “Strategy, Distinctive Competence, and Organizational Performance”,
Administrative Science Quarterly 25(2): 317–36.
Subramaniam, M. and Youndt, M. A. (2005) “The Influence of Intellectual Capital on the Types of Innovative
Capabilities”, Academy of Management Journal 48(3): 450–63.
Teece, D. J. (2007) “Explicating Dynamic Capabilities: The Nature and Microfoundations of (Sustainable)
Enterprise Performance”, Strategic Management Journal 28(13): 1319–50.
Tripsas, M. and Gavetti, G. (2000) “Capabilities, Cognition, and Inertia: Evidence from Digital Imaging”,
Strategic Management Journal 21(10–11): 1147–61.
Venkatraman, N. and Grant, J. H. (1986) “Construct Measurement in Organizational Strategy Research: A
Critique and Proposal”, Academy of Management Review 11(1): 71–87.
Vorhies, D. W., Morgan, R. E. and Autry, C. W. (2009) “Strategy and the Marketing Capabilities of the Firm:
Impact on Market Effectiveness and Cash-Flow Performance”, Strategic Management Journal 30(12):
1310–34.
Weigelt, C. (2009) “The Impact of Outsourcing New Technologies on Integrative Capabilities”, Strategic
Management Journal 30(6): 595–616.
Winter, S. G. (2000) “The Satisficing Principle in Capability Learning”, Strategic Management Journal
21(10–11): 981–96.
Winter, S. G. (2003) “Understanding Dynamic Capabilities”, Strategic Management Journal 24(Winter
Special Issue): 991–5.
Zahra, S. A., Sapienza, H. J. and Davidsson, P. (2006) “Entrepreneurship and Dynamic Capabilities: A
Review, Model and Research Agenda”, Journal of Management Studies 43(4): 917–55.
Zander, U. and Kogut, B. (1995) “Knowledge and the Speed of the Transfer and Imitation of Organizational
Capabilities”, Organization Science 6(1): 76–92.
Zhou, K. Z. and Wu, F. (2010) “Technological Capability, Strategic Flexibility, and Product Innovation”,
Strategic Management Journal 31(5): 547–61.
Zollo, M. and Singh, H. (2004) “Deliberate Learning in Corporate Acquisitions: Post-Acquisition Strategies
and Integration Capability in U.S. Bank Mergers”, Strategic Management Journal 25(13): 1233–56.
Zott, C. (2003) “Dynamic Capabilities and the Emergence of Intraindustry Differential Firm Performance:
Insights from a Simulation Study”, Strategic Management Journal 24(2): 97–125.

Author biographies
Robert M Grant is professor of management at Bocconi University and is a visiting faculty member at City
University’s Cass Business School and Georgetown University’s McDonough School of Business. His
research interests are in organizational capability and firms’ strategy making processes.
Gianmario Verona is professor of management at Bocconi University. He has also been a visiting faculty
member at the Tuck School of Business at Dartmouth College and visiting scholar at the Sloan School of
Management at MIT. His research interests lie at the intersection of competitive strategy and innovation
and more specifically include three areas: dynamic capabilities, knowledge integration, and open and user
innovation.

Appendix
Method for data collection and coding
To delineate the boundaries of our study, we followed Helfat and Winter (2011) in defining
organizational capability as an organization’s “capacity to perform a particular activity in a reli-
able and at least minimally satisfactory manner” (p. 1244).1 As for terminology, we followed
Hamel and Prahalad (1992) and Amit and Schoemaker (1993) in regarding “capability” and
“competence” as synonymous—despite some authors’ efforts to distinguish them (e.g. Collis,
1991; Marino, 1996).
Our point of departure was the 3084 articles with capability or competence (or variants) in their
titles published in the “business” and “management” journals surveyed by ISI Web of Science between January 1980 and January 2014. We then applied the following filters. First, we selected only
those articles published in eight leading strategy and general management journals: those with the highest aggregate scores in the 11 journal rankings published between January 2000 and July 2013 compiled by Harzing (2014).2 These are shown in columns A and B of Table 1.
Table 1.  Results of literature search.

(A) Journal                              (B) Articles with “capabilit*”   (C) Empirical articles with “capabilit*”
                                         or “competenc*” in the title     or “competenc*” in the title
Strategic Management Journal                          120                                90
Organization Science                                   47                                30
Management Science                                     15                                 8
Academy of Management Journal                          13                                 8
Administrative Science Quarterly                        6                                 1
Strategic Organization                                  6                                 2
Journal of Management                                  11                                 6
Journal of Management Studies                          38                                24
Total                                                 256                               169

Source: ISI Web of Science, January 2014.

Table 2.  Categorizing empirical organizational capability research by empirical method and research focus: examples (studies refer to column C of Table 1).

Quantitative cross-sectional and panel data studies, self-reported questionnaires
  Organizational capabilities as independent variables: Hitt and Ireland (1985), Kusunoki et al. (1998), Sirmon et al. (2010), Gruber et al. (2010), Drnevich and Kriauciunas (2011), Parmigiani and Mitchell (2009) and Zhou and Wu (2010). (Total: 49 articles)
  Organizational capabilities as dependent variables: Brush and Artz (1999), McEvily and Marcus (2005), Kale and Singh (2007), Danneels (2008), Vorhies et al. (2009) and Weigelt (2009). (Total: 34 articles)

Quantitative studies, observable indicators
  Organizational capabilities as independent variables: Henderson and Cockburn (1994), Makadok and Walker (2000), King and Zeithaml (2001), Kale et al. (2002), Zollo and Singh (2004), Ethiraj et al. (2005), Franco et al. (2009), Fortune and Mitchell (2012), Brush et al. (2012), King and Tucci (2002), Colombo (2003), Lee (2008), Dutta et al. (2005), Clougherty and Moliterno (2010) and Berchicci et al. (2012). (Total: 50 articles)
  Organizational capabilities as dependent variables: Leiblein and Madsen (2009), Mahmood et al. (2011) and Jain (2013). (Total: 8 articles)

Quantitative studies, reduced form estimation (both categories): Zander and Kogut (1995), Arora and Gambardella (1997), Adner and Helfat (2003), Hansen and Lovas (2004) and Arikan and McGahan (2010). (Total: 12 articles)

Qualitative case studies
  Organizational capabilities as independent variables: Leonard-Barton (1992), Lorenzoni and Lipparini (1999), Argyres (1996), Raff (2000), Rosenbloom (2000) and Coff (2010). (Total: 10 articles)
  Organizational capabilities as dependent variables: Tripsas and Gavetti (2000), Rindova and Kotha (2001), Ranft and Lord (2002), Orlikowski (2002), Montealegre (2002), Salvato (2009), Danneels (2011) and Pandza (2011). (Total: 12 articles)

The numbers of articles in each category do not sum to 169 because some studies fell outside of the seven categories; other studies were included within multiple categories.
Second, we eliminated studies which did not address organizational capabilities. These included studies concerned
with individual-level competencies (as used in the human resource appraisal literature, for exam-
ple, Casciaro and Lobo (2008)), individual liberty capabilities (as used in the welfare economics
and human development literature, for example, Ansari et al. (2012)), and national-level capabili-
ties (e.g. Holburn and Zelner, 2010).3 Third, we eliminated theoretical and methodological articles
(e.g. Winter, 2003) and simulation studies (e.g. Zott, 2003; Gavetti, 2005).4 The remaining 169
articles (column C of Table 1) form the subject matter of our article.5
We categorized these 169 articles according to research method and objective. In terms of meth-
odology, we identified two main types of study: (a) those using quantitative data for cross-sectional
and panel studies and (b) those using qualitative data for case studies of individual firms or small
samples of firms. These two types of studies were each associated with distinctly different problems: the quantitative studies experienced problems of measuring capabilities; the qualitative case studies experienced problems of identifying capabilities. Among the quantitative studies,
we distinguished between studies using self-reported questionnaire responses to measure capabili-
ties and those which used observable proxies. We then identified an additional category: studies
which circumvented the need to measure capabilities either through indirect estimation or using
reduced form regressions of the consequences of organizational capabilities upon their anteced-
ents. We also categorized the studies according to their research objective, distinguishing studies
in which organizational capabilities were the dependent variable, from those where organizational
capabilities were the independent variable (and the dependent variable was either firm perfor-
mance or firm strategy). Table 2 shows the resulting categories, examples of each type of study,
and the number of papers falling into each category. We surveyed the methodologies employed for
all of the articles falling into each category in order to identify problems in identifying and measur-
ing organizational capabilities.

Notes
1.  This view is not universal: for example, Ray, Barney and Muhanna (2004: 24) state “(…) ‘resources’ and
‘capabilities’ are used interchangeably and refer to tangible and intangible assets firms use to develop
and implement their strategies.”
2.  To be more specific, from Harzing’s (2014) “Management and Strategy” journals, we (a) distinguished
between general management and strategy journals, (b) excluded practitioner journals and journals that
did not include original empirical research, (c) added Management Science and Organization Science to
the list of general management journals (Harzing had allocated them to other categories). For the 11 jour-
nal rankings published between January 2000 and July 2013 listed by Harzing (2014), we assigned con-
secutive numbers to each quality category beginning with 1 for the bottom category. We then calculated
aggregate scores for each journal across all 11 rankings. We selected the six general management jour-
nals with the highest aggregate scores. These were (in rank order) the following: Administrative Science
Quarterly, Management Science, Academy of Management Journal, Organization Science, Journal of
Management Studies, and Journal of Management. We added to this group the two strategy journals with
the highest scores: Strategic Management Journal and Strategic Organization.
3.  We regarded team-based capabilities as organizational capabilities.
4.  We included theoretical and simulation articles whose empirical data extended beyond illustration to
address empirical operationalization of constructs, for example, Coff (2010) and Gavetti (2005).
5.  The list of articles in our sample is available from the authors.
