Neeley Business School, TCU, PO Box 298530, Fort Worth, TX 76129, United States
Broad College of Business, Michigan State University, N370 North Business Complex, East Lansing, MI 48824-1122, United States
Article info
Article history:
Received 18 October 2011
Received in revised form 24 February 2012
Accepted 25 May 2012
Available online 4 June 2012
Keywords:
Six sigma
Process innovation
Operating performance
Event study
Abstract
We assess the operational impacts of Six Sigma program adoptions through an event study methodology, comparing financial data for 200 Six Sigma adopting firms against data for matched firms, which serve as control groups for the analyses. We employ various matching procedures using different combinations of pre-adoption return on assets (ROA), industry, and size as matching criteria. By comparing performance outcomes across a hierarchy of operating metrics, we establish a pattern of Six Sigma adoption effects that provides strong evidence of a positive impact on ROA. Interestingly, these ROA improvements arise mostly from significant reductions in indirect costs; significant improvements in direct costs and asset productivity are not evident. We also find small improvements in sales growth due to Six Sigma adoption. Cross-sectional analyses of the performance results reveal that distinctions in Six Sigma impacts across manufacturing and service firms are negligible. Interestingly, we find that the performance impact of Six Sigma adoption is negatively correlated to the firm's quality system maturity (indicated by prior ISO 9000 certification). Further analyses of manufacturing and service firms reveal that Six Sigma benefits are significantly correlated with labor intensity in manufacturing, and with financial performance before adoption in services. We discuss the implications of these findings for practice and for future research.

© 2012 Published by Elsevier B.V.
1. Introduction
Since its origins in the mid-1980s, the Six Sigma program for process improvement has become widely embraced. One report suggests that many Fortune 500 firms have adopted Six Sigma (Nakhai and Neves, 2009). Early successes in high-profile companies such as Motorola, Allied Signal (now Honeywell), and General Electric helped to both popularize and legitimize the approach, and dozens of books have been devoted to the topic.

The practitioner literature documents substantial cost savings and other benefits from Six Sigma program adoptions (Pande et al., 2000; Harry and Schroeder, 2000). However, some still question whether these benefits sufficiently exceed the costs of adoption. Corporate-wide adoption of Six Sigma often involves considerable investments in consulting support, training, organizational restructurings, and associated information and reporting systems. For example, over a four-year period (1996–1999) ...
Limitations might also stem from the context within which Six Sigma is adopted. Like many process improvement programs, Six Sigma originated in manufacturing firms; many of its principles and tenets were developed in a setting of asset-intensive, repeatable processes. The name itself, Six Sigma, refers to limits in measurable variations of outputs that were established in Motorola's manufacturing processes. In addition, researchers maintain that a firm must possess certain resources and make certain commitments in order to make Six Sigma successful (Antony et al., 2008; Schroeder et al., 2008). Hence, Six Sigma methods and tools may be more or less effective in certain technological and operational contexts.
In this article, we examine the operating performance impacts of Six Sigma adoptions. The study seeks answers to the following three research questions. First, does Six Sigma adoption consistently produce a significant net effect on operating performance? Given the widespread adoption and continued popularity of this program, we consider this a very important question. A sizable literature on the efficacy of other process management strategies exists, providing mixed results. However, researchers argue that Six Sigma is different from other process management approaches; it is distinguished by its requisite organizational structures, structured methods, and emphasis on customer-oriented metrics (Linderman et al., 2003; Sinha and Van de Ven, 2005; Schroeder et al., 2008). Given these proposed distinctions, it is important to determine whether or not managers should have reason to expect that Six Sigma will provide benefits that exceed alternative programs for improvement.
Our second research question addresses the nature of Six Sigma's impacts. What types of beneficial impacts are manifested in the operating data of Six Sigma adopters? By examining the components of both profit- and growth-oriented financial outcomes of Six Sigma adopters, we develop insights into the types of impacts provided by the program. These results serve to inform the debate over the roles of process management programs in creating competitive advantages for their adopters; they also point to some interesting propositions for future research.
Our third research question is: are Six Sigma impacts related to operating contexts? As Six Sigma adoptions have grown to include a wider scope of businesses, researchers have begun to question the applicability and effectiveness of related tools and techniques in certain contexts. In addition, case studies and anecdotal evidence suggest factors that may be critical to successful implementation. We study differences in Six Sigma success associated with industry (manufacturing or service), labor intensity, R&D intensity, prior operating performance, and quality maturity. Our examination of these factors provides insights into the sources of, and constraints on, process improvements emerging from Six Sigma adoption.
We address the foregoing questions through an event study methodology, comparing financial data for about 200 Six Sigma adopting firms against data for matched firms, which provide control groups for the analyses. We employ various matching procedures using different combinations of pre-adoption operating performance (measured by return on assets (ROA)), industry, and size as matching criteria. By comparing performance outcomes across a hierarchy of operating metrics, we establish a pattern of Six Sigma adoption effects that provides strong evidence of a positive impact on ROA. Interestingly, these ROA improvements arise mostly from significant reductions in indirect costs. Improvements in direct costs and asset productivities are not evident. We also find small improvements in sales growth due to Six Sigma adoption. From cross-sectional analyses, we determine that performance improvement due to Six Sigma adoption is not significantly related to industry (manufacturing or service) or R&D intensity. However, changes in performance are significantly correlated with the quality maturity of the adopting firms. Interestingly, firms with ...
3. Research method
3.1. Sample collection and description
We used multiple sources (web searches, books, practitioner journals, and academic journals) to identify a preliminary list of over 600 companies named as adopters of the Six Sigma philosophy and methodologies. While the list is certainly not exhaustive, it appears to be fairly representative, as it includes a wide range of industries, firm sizes, and adoption years. Of the identified firms, 421 are publicly traded companies with financial data in Compustat.

To corroborate whether the identified firms actually adopted Six Sigma, and to determine their Six Sigma adoption dates, we used key words such as "Six Sigma" in conjunction with "history" or "adoption" to thoroughly search publicly available documents (e.g., all publication sources in the Factiva database, corporate 10-K reports, corporate websites, the Internet) for each of the 421 publicly traded firms. We retained in the sample only firms that had adopted Six Sigma in 2007 or earlier, in order to have sufficient data to establish post-adoption performance. Using these sources, we found definite, pre-2008 adoption years for 214 of the 421 firms (50.8%). For 143 firms (34.0%), we could not determine a specific adoption year, but we found enough evidence of Six Sigma activity to establish a late bound for adoption. For the remaining 64 firms (15.2%), we did not find sufficient information to establish either an adoption date or a late bound for adoption.
Because the available public information sometimes required interpretation and/or conflicted across sources, each firm was researched independently by two members of our research team. The two independent researchers agreed on 351 of 421 findings (83.4%) of adoption dates, late bounds, or no adoption dates. For the remaining 70 firms with disputed adoption dates or late bounds, the mean (median) difference in designated adoption years was 0.6 (1.0) years. To resolve the differences, we supplied the data sources discovered by the two researchers to a third researcher, who independently weighed the evidence and determined the specific adoption years for use in our analyses.
Early on in this research, we sent a survey to each publicly held Six Sigma adopter for which we could identify a credible contact person. The survey asked about the adoption date and the extent of adoption of the aforementioned practices that are distinct to Six Sigma (centralized team structure, improvement specialists, structured methods/DMAIC, and customer-focused metrics). We secured survey responses from 58 of the 214 publicly traded firms with identified adoption dates (23.8% of our sample). Of the 58 single respondents: 38 (65.5%) agreed with our identified adoption years; 9 (17.6%) were unable to provide a specific adoption year; 7 (12.1%) supplied an adoption date one year earlier than our finding; 3 (5.9%) supplied an adoption date more than one year later than our finding; and 1 (1.7%) supplied an adoption date one year later than our finding. We note that all three respondents with adoption dates more than one year later than our finding were reporting only for their division within the overall firm. To be conservative, we used the earliest adoption year identified. Furthermore, the survey data indicate a remarkably uniform application of Six Sigma practices across the respondents. For example, over 90% of the respondents indicated that they employed a black/green belt structure, and over 95% designated that DMAIC and other Six Sigma tools were used on at least 80% of improvement projects. These results reinforce our overall confidence in the accuracy of our estimates for both the timing and extent of adoptions in our sample firms.
For the 214 firms with specific adoption years, Panel A of Table 1 presents the number of adopting firms by year. The earliest
Table 1
Sample description for 214 firms with specific Six Sigma adoption years.

Panel A: Frequency of Six Sigma adoption years

Year       1986  1987  1988  1989  1990  1991  1992  1993  1994  1995  1996
Frequency     2     0     2     1     1     1     1     0     0     2     3

Year       1997  1998  1999  2000  2001  2002  2003  2004  2005  2006  2007
Frequency    16    12    18    38    40    18    18    15     9    13     4

Panel B: Most frequently occurring SIC codes

2-Digit code   Frequency        3-Digit code   Frequency
35             24 (11.2%)       371            12 (5.6%)
36             23 (10.7%)       602            11 (5.1%)
28             21 (9.8%)        357            10 (4.7%)
37             21 (9.8%)        283             6 (2.8%)
38             14 (6.5%)        367             6 (2.8%)
adoption year in our sample is 1986 and the most frequently occurring adoption year is 2001. We note the drop-off in Six Sigma adoptions in our sample post-2001. Given the continued interest in and relevance of Six Sigma, as evidenced by academic publications, current business school textbooks and curricula, and practitioner seminar offerings, we suspect that the drop-off of Six Sigma adoptions in our sample reflects a lack of newsworthiness rather than declining use. In other words, Six Sigma has become an accepted part of everyday business, much like TQM or Lean. This highlights the importance of rigorously studying the impact of Six Sigma adoption on operating performance.
Table 1 Panel B presents the most frequently occurring SIC codes within the sample firms. The sample contains firms from 47 unique two-digit SIC codes and 101 unique three-digit SIC codes. Though the majority of firms represent manufacturing industries, about one-third of the firms are services. Table 1 provides more information on the most frequently represented industries. Table 2 Panel A presents descriptive statistics for our sample based on the 2001 fiscal year, the most common Six Sigma adoption year in our sample. The median observation in the sample represents a firm with $5.6B in market value, $7.5B in total assets, $6.2B in annual sales, $0.8B in annual operating income, and 28,300 employees. For comparison, Table 2 Panel B presents descriptive statistics for the 207 suspected Six Sigma adopters for which we could not determine a specific adoption year. In addition, Table 2 Panel C presents descriptive statistics for all firms listed on the New York Stock Exchange (NYSE), also for the 2001 fiscal year. In summary, our sample represents a wide variety of industries, and is not significantly different from the suspected Six Sigma adopters for which we could not determine a specific adoption year. However, when compared to all NYSE firms, our sample under-represents smaller enterprises. This outcome raises a question regarding the generalizability of our findings, as the cause of the difference is not known. Research indicates that small and medium sized firms are less likely to adopt Six Sigma, mainly because they lack requisite resources and knowledge (Antony et al., 2008; Kumar and Antony, 2009). Thus, our sample firms might be larger because of sampling bias (i.e., larger firms are more likely to be identified by our sources), but the sample firms might also be larger because they truly represent the population (i.e., larger firms are more likely to adopt Six Sigma). We note that large-firm bias is common in OM research. We discuss this limitation further in Section 6.
Table 2
Descriptive statistics for 2001, the most frequent sample Six Sigma adoption year. [Panels A–C report market value ($M), sales ($M), operating income, and employees (000s) for the sample firms, the suspected adopters without specific adoption years, and all NYSE firms; the panel values were not recoverable from the source.]
The conventional approach to establishing expected operating performance is to use firms similar to the sample firms as benchmarks (controls). In their comprehensive, simulation-based research, Barber and Lyon (1996) described their three most important findings for accurately determining expected operating performance and abnormal operating performance: (1) nonparametric Wilcoxon test statistics are more powerful than parametric t-statistics; (2) test statistics for the change in operating performance relative to an appropriate benchmark are more powerful than those based on the level of operating performance relative to the same benchmark; and (3) benchmark firms are best determined by matching on prior performance and industry. We employed each of these recommendations in our analyses.
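Recommendations (1) and (2) can be made concrete with a small sketch. The helper below applies a sign-test z-statistic (a simple nonparametric test, in the spirit of the "% positive" statistics reported later) to changes in performance net of a benchmark; the function name and the toy data are ours, not the paper's.

```python
import math

def sign_test_z(abnormal_changes):
    """Normal approximation to the sign test: is the share of positive
    abnormal changes different from 50%?  Zero changes are dropped."""
    nonzero = [x for x in abnormal_changes if x != 0]
    n = len(nonzero)
    pos = sum(1 for x in nonzero if x > 0)
    # Under H0 (median change = 0), pos ~ Binomial(n, 0.5),
    # so z = (pos - n/2) / sqrt(n/4).
    return (pos - n / 2) / math.sqrt(n / 4)

# Toy data: each entry is a firm's change in ROA minus the median
# change of its benchmark group (illustrative values only).
abnormal = [0.6, 1.1, -0.2, 0.8, 0.3, -0.5, 0.9, 0.4, 0.7, -0.1]
z = sign_test_z(abnormal)
```

Barber and Lyon's point (1) is that rank- and sign-based statistics of this kind retain power under the heavy-tailed distributions typical of accounting ratios, where t-statistics do not.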
Using multiple criteria such as prior performance and industry presents a challenge when establishing benchmark groups, and sometimes requires trade-offs. The multi-step process we used to establish our benchmark groups follows the recommendations of Barber and Lyon (1996). We label it our performance-industry matching method:
Step 1. For each sample firm, we identified all firms within the same two-digit SIC code as the sample firm, and whose ROA in year −2 (the fiscal year immediately prior to the study period) was within 90–110% of the sample firm's ROA. The 10% requirement provides a tight match on operating performance.
Step 2. If no firms were found in Step 1, we then matched performance within the 90–110% performance range using all firms in the same one-digit SIC code.
Step 3. If no firms were found in Step 2, we then matched performance within the 90–110% performance range regardless of SIC code.
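The three-step search can be sketched as follows. The dictionary keys (`sic`, `roa`) and the sample data are illustrative assumptions, not the paper's implementation; only the step logic comes from the text.

```python
def find_benchmarks(sample, candidates):
    """Three-step benchmark search: match on prior-year ROA within
    90-110%, relaxing the industry criterion at each step."""
    def roa_close(c):
        # sorted() keeps the bounds ordered even when ROA is negative
        lo, hi = sorted((0.9 * sample["roa"], 1.1 * sample["roa"]))
        return lo <= c["roa"] <= hi
    # Step 1: same two-digit SIC code and ROA within 90-110%
    step1 = [c for c in candidates
             if c["sic"][:2] == sample["sic"][:2] and roa_close(c)]
    if step1:
        return step1
    # Step 2: relax to the one-digit SIC code
    step2 = [c for c in candidates
             if c["sic"][:1] == sample["sic"][:1] and roa_close(c)]
    if step2:
        return step2
    # Step 3: the ROA criterion alone, regardless of SIC code
    return [c for c in candidates if roa_close(c)]

sample = {"sic": "3714", "roa": 0.10}
candidates = [{"sic": "3711", "roa": 0.105},   # same 2-digit SIC, close ROA
              {"sic": "3585", "roa": 0.100},   # only 1-digit SIC matches
              {"sic": "2834", "roa": 0.099}]   # different industry
bench = find_benchmarks(sample, candidates)
```

Because Step 1 succeeds here, the looser Steps 2 and 3 are never reached, mirroring the paper's report that the vast majority of groups were formed in Step 1.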
Since we were interested in determining the effects of Six Sigma adoption, we excluded the 421 identified public Six Sigma-adopting firms from our benchmark groups. A limitation of our study is that we could not definitively determine that all benchmark firms were not also Six Sigma adopters in the time frame studied. Because it is possible that at least some of the benchmark firms were adopters, our estimates of abnormal performance should be considered conservative. In order to dilute the effects of adopters who may be present in the benchmark groups, we set our matching criteria to maximize the number of firms in each group, thus limiting the number of one-on-one matching comparisons. In addition, we perform a separate analysis with one-to-one matching of adopters and confirmed non-adopters for a subset of the sample (described later).
In order to evaluate the robustness of the results, we used two other matching methods that added controls to the matching criteria specified above. First, rather than matching on ROA only in year −2, we matched on the median ROA in years −2, −3, and −4, with the requirement that we had data for the sample and benchmark firms in at least two of the three years. We labeled this approach the median-performance-industry matching method; it provides an even stricter performance control. Second, we added the matching criterion of firm size to each step. We measured firm size as median year-end total assets in years −2, −3, and −4, again with the requirement that we had data for the sample and benchmark firms in at least two of the three years. We selected benchmark firms with total assets within a factor of 25 of the assets of the sample firm. We labeled this matching method median-performance-size-industry.

Of the 214 public firms with identified Six Sigma adoption dates, we dropped firms with insufficient data in Compustat in either year −2, or years −2, −3, and −4, to compute the baseline ROA or size metrics. We eliminated five firms from the performance-industry matching analysis, and eight firms from
Table 3
Matching process and benchmark group statistics.

                                    Performance in    Performance in      Performance in years
                                    year −2 and       years −2, −3, −4,   −2, −3, −4, size,
Matching method                     industry          and industry        and industry
Step 1 matches                      203               200                 198
Step 2 matches                      6                 6                   7
Step 3 matches                      0                 0                   1
Total firms matched                 209               206                 206
Mean group size                     34.3              34.1                14.9
Median group size                   18.0              21.5                9.0
Maximum group size                  470               423                 86
No. of groups with a single firm    6                 2                   8
Table 4
Median descriptive statistics for the matching period (years −2, −3, and −4) prior to Six Sigma adoption. [Values (ROA, sales ($M), and employees (000s) for sample and benchmark firms) were not recoverable from the source.]
both the median-performance-industry and median-performance-size-industry matching analyses. Table 3 presents the results of each matching process. In each process, the vast majority of matches were accomplished in Step 1, using the tightest matching criteria. In the performance-industry matching, for example, 203 of the 209 benchmark groups were generated using Step 1 of our matching process, and the remaining six benchmark groups were generated using Step 2. The mean (median) benchmark group size was 34.3 (18.0), and only six firms were benchmarked against a single firm. The other matching methods produced similar results, though notably in the median-performance-size-industry matching, the mean (median) matching group size dropped to 14.9 (9.0), and one firm required Step 3 for matching. Table 4 presents median descriptive statistics for the pre-adoption matching period (years −2, −3, and −4) for both the sample firms and the benchmark firms obtained using the median-performance-size-industry matching method. The sample firms and benchmark firms are well-matched on ROA and assets within the stated criteria (10% and factor of 25, respectively).

Due to the common occurrence of extreme outliers in accounting-based performance measures, we used the median (versus the mean) change in the benchmark firms to estimate the sample firm's expected operating performance without Six Sigma adoption. The sample firm's expected operating performance without Six Sigma adoption is the sum of its actual performance in the base year(s) plus the median change in operating performance for the firms in the benchmark group over the period of interest. The sample firm's abnormal operating performance with Six Sigma adoption is then the difference between the sample firm's actual performance (with Six Sigma adoption) and the sample firm's expected performance (without Six Sigma adoption).
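The expected/abnormal performance computation described above reduces to two lines; a minimal sketch with illustrative numbers (percentage-point ROA values are assumptions, not data from the paper):

```python
import statistics

def abnormal_performance(actual_base, actual_end, benchmark_changes):
    """Expected performance = the firm's base-year level plus the MEDIAN
    change across its benchmark group (the median resists the extreme
    outliers common in accounting ratios); abnormal performance is then
    actual minus expected."""
    expected_end = actual_base + statistics.median(benchmark_changes)
    return actual_end - expected_end

# A firm's ROA rises from 8.0% to 9.5% while its benchmark group's
# median change is +0.4%, so abnormal performance is +1.1 points.
ab = abnormal_performance(8.0, 9.5, [0.4, 1.2, -0.3, 0.4, 0.9])
```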
(SIC codes beginning with 5), finance, insurance, and real estate industries (SIC codes beginning with 6), and other service industries (SIC codes beginning with 7 and 8). We evaluated each of the remaining firms in the agriculture, forestry, and fisheries industries (SIC codes beginning with 0), mining and construction industries (SIC codes beginning with 1), and the public administration industry (SIC codes beginning with 9) to determine whether its primary business is the production of goods or the provision of services. Thus, we identified 144 firms in manufacturing industries and 70 firms in services industries.
To test hypothesis H4, we consider the firm's labor intensity. We measure labor intensity as the firm's number of employees in year −2 divided by its sales in year −2.

We use the firm's R&D intensity to test hypothesis H5. We define R&D intensity as the firm's R&D expenses in year −2 divided by its sales in year −2. We note that this analysis is limited to manufacturing firms, because few service firms report R&D expenditures.
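Both intensity measures are simple year −2 ratios; a sketch follows (function names and units are ours, taking whatever scale the underlying Compustat fields use):

```python
def labor_intensity(employees_y_minus2, sales_y_minus2):
    """H4: employees in year -2 divided by sales in year -2."""
    return employees_y_minus2 / sales_y_minus2

def rd_intensity(rd_expense_y_minus2, sales_y_minus2):
    """H5: R&D expense in year -2 divided by sales in year -2
    (manufacturing firms only, per the text)."""
    return rd_expense_y_minus2 / sales_y_minus2

li = labor_intensity(50.0, 1000.0)   # 50 employees per $1,000 of sales units
ri = rd_intensity(30.0, 1000.0)      # R&D at 3% of sales
```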
To examine the possibility of different response coefficients for profit- and loss-making firms (H6), we follow the method of Hendricks and Singhal (2008). For each adopting firm, we compute industry-adjusted ROA as the firm's ROA in year −2 minus the median ROA of its industry in year −2. Using these scores, we created two prior operating performance variables as follows. If the industry-adjusted ROA is positive, we used the value as our positive financial performance variable, and 0 otherwise. If the industry-adjusted ROA is negative, we used the value as our negative financial performance variable, and 0 otherwise.
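The split into separate positive and negative regressors can be sketched directly from the description above (the function name and example values are illustrative):

```python
def prior_performance_vars(firm_roa, industry_median_roa):
    """Industry-adjusted ROA split into separate positive and negative
    regressors, per the Hendricks and Singhal (2008) approach described
    in the text.  Exactly one of the two returned values is nonzero
    (unless the adjusted ROA is exactly zero)."""
    adj = firm_roa - industry_median_roa
    positive = adj if adj > 0 else 0.0
    negative = adj if adj < 0 else 0.0
    return positive, negative

# An underperformer: 6% ROA against a 9% industry median.
pos, neg = prior_performance_vars(0.06, 0.09)
```

Entering the two pieces separately lets the regression estimate different slopes for firms above versus below their industry median, which is the point of H6.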
Finally, we use ISO 9000 certification status as an indicator of quality maturity (H7). Using the method employed by Corbett et al. (2005), we determined the year of initial ISO certification for each firm in our sample. We then computed the number of years between initial ISO certification and Six Sigma adoption as a proxy for the firm's maturity in applying quality management principles and techniques. We coded firms as 1 if they were not ISO certified, or if they were certified only after Six Sigma adoption.
In addition to the above factors, we include three control variables in the analysis. Larger firms likely have more implementation resources, and economies of scale to better exploit improvements. Conversely, smaller firms tend to be more focused and agile, easing adoption of new improvement philosophies. In addition, if well-implemented, the relative impact of any one improvement program such as Six Sigma is likely to be greater in a small firm. Hence, we control for the effects of firm size, operationalizing it as the natural log of the firm's market value at the end of year −2. Market value provides a suitable measure of size, given that firm sales, assets, and number of employees are already represented in the independent and dependent variables described earlier. We also include the year of Six Sigma adoption as a control for changes in economic, business environment, and other time-related variables that might confound the results. Lastly, we control for leadership changes in adopting firms. It can be argued that, if Six Sigma adoption is instigated by new leadership, then observed beneficial effects might be attributable to new leadership competencies or motivational factors, rather than to Six Sigma adoption per se. To control for this possibility, we used the Execucomp database to determine sample firm CEO changes in years 0, −1, and −2. For the 32 sample firms not listed in Execucomp, we searched all sources in the Factiva database for evidence of CEO turnover in the three-year time period. In total, 90 firms in our sample (42.4%) experienced a CEO change within the specified period. We note that the mean (median) CEO turnover rate in Execucomp is 40.4% (52.9%) for a three-year period, suggesting that our sample firms are not more prone to CEO changes than other firms. Sample firms with a CEO change were coded as 1; other firms were coded as 0.
4. Results
We begin the discussion of our empirical results by focusing on the effects of Six Sigma adoption on the firm profitability measure, ROA, on both an annual basis and for the aggregated multiple-year periods. We then decompose ROA into its constituent components to determine whether changes in ROA are due to changes in ROS, COGS/sales, SG&A/sales, ATO, or a combination thereof. We also examine the impact of Six Sigma adoption on firm growth by considering year-on-year changes in sales. Lastly, we present our results contingent on the contextual factors of industry, and pre-adoption labor intensity, R&D intensity, financial performance, and quality maturity.
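The decomposition follows the standard DuPont identity, ROA = ROS × ATO, with ROS driven by the direct-cost (COGS/sales) and indirect-cost (SG&A/sales) ratios. A sketch, assuming operating income is defined as sales − COGS − SG&A (an assumption about the income definition, not a detail given in the text):

```python
def decompose_roa(sales, cogs, sga, assets):
    """DuPont-style decomposition: return on assets (ROA) equals
    return on sales (ROS) times asset turnover (ATO)."""
    ros = (sales - cogs - sga) / sales   # margin, after direct + indirect costs
    ato = sales / assets                 # asset productivity
    return {"ROS": ros, "ATO": ato, "ROA": ros * ato,
            "COGS/sales": cogs / sales, "SG&A/sales": sga / sales}

# Illustrative firm: $1,000M sales, $700M COGS, $200M SG&A, $800M assets.
d = decompose_roa(sales=1000.0, cogs=700.0, sga=200.0, assets=800.0)
```

Under this identity, the paper's finding that ROA gains come mostly from lower SG&A/sales means the margin term improves through indirect costs while ATO is roughly flat.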
Table 5
Annual abnormal changes in ROA for sample firms for year −1 through year +4. [Panel values (medians, means, % positive, and associated Z- and t-statistics for each matching method) were not recoverable from the source.]
The change per firm over the 5-year period (from year 0 to +4) is also significantly positive for both the mean and median, using all three matching methods. The % of firms with positive 5-year changes is significantly greater than 50% using the median-performance-industry matching method. As expected, the magnitudes of the 5-year changes are generally less than those of the 6-year changes. The change per firm over the 4-year period (from year +1 to +4) follows a similar pattern and is again generally smaller in magnitude and significance than the 5-year changes.

As we noted in Section 3.2, a limitation of our study is that we could not definitively determine that all benchmark firms were not also Six Sigma adopters during the sampling time frame. If many of the benchmark firms were truly adopters, then estimates of abnormal performance would be muted, making significant differences difficult to detect. The fact that we do find significant differences suggests that, in the worst case, our findings of abnormal performance due to Six Sigma adoption are conservative.
To address this limitation, we would ideally match known adopters only with known non-adopters. Such a pure comparison would produce a more reliable estimate of expected performance improvement from Six Sigma adoption. To approximate this pure comparison, we identified known non-adopters as firms from our sample that adopted Six Sigma at least five years later than earlier sample adopter firms. The five-year delay allows us to consider operating performance impacts to the early adopters during the post-adoption period through year +4. Given that we are limited to only the 214 firms in our sample, and to permit the greatest number of comparisons, we matched each adopter against only a single firm, and did not use data from any one firm more than once; this method permitted 41 matches using the ROA, assets, and industry criteria from median-performance-size-industry matching. Table 5 Panel D presents the results for abnormal changes in ROA for year −1 through year +4. The ROA improvements are significant and generally stronger than in our other matching methods. These results should be regarded as somewhat tentative, given the small sample size, and the confounding with time due to this matching methodology (the adopters all adopted prior to 2003). However, the results do strongly reinforce the conclusion that the estimated improvements from our larger analyses are real, albeit conservative.
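The pairing logic behind this pure comparison can be sketched as below. Matching on ROA, assets, and industry is omitted for brevity, and the greedy earliest-first pairing is our assumption about how ties are handled, not a detail given in the text.

```python
def late_adopter_controls(adoption_years):
    """Pair each adopter with a single firm that adopted at least five
    years later, using each firm at most once.  `adoption_years` maps
    firm identifiers to adoption years."""
    used = set()
    pairs = []
    by_year = sorted(adoption_years.items(), key=lambda kv: kv[1])
    for firm, year in by_year:           # earliest adopters pair first
        if firm in used:
            continue
        for other, other_year in by_year:
            # a valid control adopted >= 5 years later and is unused
            if other not in used and other != firm and other_year >= year + 5:
                pairs.append((firm, other))
                used.update({firm, other})
                break
    return pairs

# Toy sample: "A" (1996) pairs with "C" (2002); "B" (1997) with "D" (2003).
pairs = late_adopter_controls({"A": 1996, "B": 1997, "C": 2002, "D": 2003})
```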
Considering both the annual changes and multiple-year changes indicates that Six Sigma adoption produces an immediate and persistent positive effect on ROA. These results provide support for hypothesis H1.
4.2. Decomposition of ROA effects

For brevity in presenting and discussing the remainder of our results, we concentrate on our most conservative matching method, median-performance-size-industry, noting that the pattern of results is similar regardless of matching method. Table 6 Panel A presents the results for the abnormal changes in the level of ROS on an annual basis and for multiple-year periods. For the 6-year
Table 6
Annual abnormal changes in ROS, COGS/sales, SG&A/sales, and ATO for sample firms for year −1 through year +4 using the median-performance-size-industry matching method. [Panel values (medians, means, % positive, and associated Z- and t-statistics) were not recoverable from the source.]
period from year −1 to +4, the median and mean change per adopting firm, and the % of sample firms experiencing positive change, are all positive but insignificant. For the 5-year period from year 0 to +4, the median change is 0.632%, significantly positive at the 10% level. The mean change is 0.490%, positive but insignificant, and 56.44% of firms experience positive changes, significantly greater than 50% at the 10% level. For the 4-year period from year +1 to +4, the median and mean changes are 0.528% and 0.645%, respectively, both significantly positive at the 5% level, and 56.44% of firms experience positive changes, significantly greater than 50% at the 10% level.

To determine whether the improvement in ROS from year +1 to +4 contributes significantly to the improvement in ROA, we employ the method of Kinney and Wempe (2002). For each adopting firm, we compute the effect of ROS on ROA by multiplying the change in abnormal ROS from year +1 to +4 by the firm's ATO at adoption (year 0). The median and mean ROS effects over the 4-year period are 0.528% and 0.645%, respectively, both significantly positive at the 5% level, and 56.44% of sample firms experience positive ROS effects, significantly greater than 50% at the 10% level. These results indicate that the ROS improvement from Six Sigma adoption significantly contributes to the overall improvement in ROA.
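The Kinney and Wempe (2002) style attribution used here is a one-line scaling; a sketch with illustrative numbers (not values from the paper):

```python
def ros_effect_on_roa(abnormal_ros_change, ato_at_adoption):
    """Contribution of a margin (ROS) change to ROA: the abnormal ROS
    change scaled by the firm's asset turnover at adoption (year 0).
    Since ROA = ROS x ATO, holding ATO fixed at its year-0 level
    isolates the margin component of the ROA change."""
    return abnormal_ros_change * ato_at_adoption

# e.g. a +0.528% abnormal ROS change for a firm turning assets 1.25x/year
effect = ros_effect_on_roa(0.00528, 1.25)
```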
Table 7
Annual abnormal changes in sales growth for sample firms for year −1 through year +4. [Panel values (medians, means, % positive, and associated Z- and t-statistics) were not recoverable from the source.]
Table 8
Estimated coefficients (standardized, t-statistics in parentheses) from regressions of abnormal ROA change from year −1 to year +4 using the median-performance-size-industry matching method.
Independent variables (operationalization): Manufacturing or services (1 if manufacturing, 0 if services); Labor intensity (employees/sales); R&D intensity (R&D/sales); Prior performance (industry-adjusted ROA if positive, 0 otherwise; industry-adjusted ROA if negative, 0 otherwise); ISO9000 experience (Six Sigma adoption year minus 1st ISO9000 certification); Firm size (ln(market value)); Adoption year (year 0); New CEO (1 if new CEO in years 0, 1, or 2; 0 otherwise).
Models: Model 1, manufacturing and services (156 observations; F = 2.418**, R² = 11.63%, adjusted R² = 6.82%); Model 2, services only (47 observations; F = 2.346**, R² = 29.63%, adjusted R² = 17.01%); Model 3, manufacturing only (109 observations; F = 2.758**, R² = 16.05%, adjusted R² = 10.23%); Model 4, manufacturing only (97 observations; F = 2.181**, R² = 16.55%, adjusted R² = 8.96%).
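The regressions summarized in Table 8 relate each firm's abnormal ROA change to contextual covariates. As an illustrative sketch on hypothetical data (not the authors' code, and omitting the standardization and multiple covariates they use), a minimal single-predictor OLS with the slope's t-statistic:

```python
import math

def ols_slope_t(x, y):
    """OLS of y on x with intercept; returns (slope, t-statistic of slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    alpha = my - beta * mx
    # Residual sum of squares and the slope's standard error.
    sse = sum((yi - alpha - beta * xi) ** 2 for xi, yi in zip(x, y))
    se_beta = math.sqrt(sse / (n - 2) / sxx)
    return beta, beta / se_beta

# Hypothetical: abnormal ROA change regressed on a labor-intensity proxy.
beta, t_stat = ols_slope_t([0.0, 1.0, 2.0, 3.0], [0.1, 0.9, 2.1, 2.9])
```

In the paper's full specification, all covariates enter one multiple regression per model and coefficients are standardized before comparison.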
the time horizon we studied, as positive ROA changes were frequently evidenced in latter periods (years +3 and +4). Further, the results indicate that overall profitability impacts stem primarily from improved indirect cost efficiencies, rather than from direct cost improvements or from improved asset productivity. The results also reveal marginally significant positive effects on sales growth. These findings hint at potential differences in how Six Sigma programs are being applied in front-office versus back-office contexts.
In addition, our findings identify some important contextual factors. In general, our findings hint that Six Sigma methods may be most beneficial when applied to labor-intensive, repeatable processes. However, the results suggest that less labor-intensive, quality-experienced manufacturing firms will not experience the profit impact from Six Sigma adoption that others will. On the other hand, service firms contemplating Six Sigma adoption should pay close attention to how their current performance can be leveraged to maximize Six Sigma's potential. Managers in these firms would do well to identify and exploit either resource-based or motivational advantages that come from positive or negative prior financial performance, respectively.
Our study is limited in ways that can be addressed in future research. First, our sample of service firms is somewhat small and grossly aggregated across service types (e.g., logistics providers and banks). Future research that directly compares Six Sigma implementations in personal and non-personal services might extend our understanding of its applicability limits and particular sources of impact.
Second, our sample may be biased toward larger firms. We cannot verify the bias, as it may be that only larger firms are likely to adopt Six Sigma due to the sizable investments required. Moreover, in our analysis, our control for firm size was not significant, suggesting that size is not a strong driver of Six Sigma impact. However, our limited sample prevents us from assessing the effects of firm size for relatively small firms. Given the difficulty in obtaining secondary data on small and medium-sized enterprises (SMEs), a primary data collection approach might be beneficial.
Third, our study is limited by its focus on operational performance impacts over a six-year time horizon. As time passes, future studies should be able to examine whether Six Sigma effects persist over even longer periods. Moreover, there is a host of other performance effects that should be examined, including the stock market's valuation of future cash flows and intangible assets associated with Six Sigma, as well as impacts on corporate social responsibility and other concerns. Though we investigated effects on sales growth, this serves as a poor proxy for innovation performance. Future studies might also build on our method by comparing patent activity, new product introductions, and other more direct measures of explorative innovation across adopters and non-adopters of Six Sigma.
We explored associations of abnormal performance due to Six Sigma with manufacturing and service contexts and with a number of other contextual factors. Other factors may also be important. Several researchers have pointed to important differences in the timing of adoption of such programs (Westphal et al., 1997; Yeung et al., 2006; Benner and Veloso, 2008), thus warranting a study comparing early versus late adopters. Though our results did not relate another size variable (market value) to abnormal performance, other size-related factors such as resource slack and inertia may moderate Six Sigma impacts (Hendricks and Singhal, 2001b). In addition, the process management literature points out other potentially important moderators, including diversification (Hendricks and Singhal, 2001b), technology strategy (Ittner and Larcker, 1997; Benner and Veloso, 2008), and technological dynamism (Benner and Tushman, 2002; Benner, 2009; Parast, 2011). Finally, numerous writers have identified implementation
References
Abernathy, W.J., 1978. The Productivity Dilemma. Johns Hopkins University Press, Baltimore.
Anand, G., Ward, P.T., Tatikonda, M.V., Schilling, D.A., 2009. Dynamic capabilities through continuous improvement infrastructure. Journal of Operations Management 27, 444–461.
Antony, J., 2006. Six Sigma for service processes. Business Process Management Journal 12 (2), 234–248.
Antony, F., Kumar, M., Cho, B., 2007. Six Sigma in service organisations: benefits, challenges and difficulties, common myths, empirical observations and success factors. International Journal of Quality & Reliability Management 24 (3), 294–311.
Antony, J., Kumar, M., Labib, A., 2008. Gearing Six Sigma into UK manufacturing SMEs: results from a pilot study. Journal of the Operational Research Society 59, 482–493.
Barber, B.M., Lyon, J.D., 1996. Detecting abnormal operating performance: the empirical power and specification of test statistics. Journal of Financial Economics 41, 359–399.
Barney, M., 2002. Macro, meso, micro: Six Sigma. The Industrial Organizational Psychologist 39 (4), 104–107.
Benner, M.J., 2009. Dynamic or static capabilities? Process management practices and response to technological change. Journal of Product Innovation Management 26, 473–486.
Benner, M.J., Tushman, M., 2002. Process management and technological innovation: a longitudinal study of the photography and paint industries. Administrative Science Quarterly 47, 676–706.
Benner, M.J., Tushman, M.L., 2003. Exploitation, exploration, and process management: the productivity dilemma revisited. Academy of Management Review 28 (2), 238–256.
Benner, M.J., Veloso, F.M., 2008. ISO 9000 practices and financial performance: a technology coherence perspective. Journal of Operations Management 26, 611–629.
Blakeslee Jr., J.A., 1999. Implementing the Six Sigma solution. Quality Progress 32 (7), 77–85.
Brady, D., 2005. Bringing innovation to the home of Six Sigma. Business Week (August), 68.
Braunscheidel, M.J., Hamister, J.W., Suresh, N.C., Star, H., 2011. An institutional theory perspective on Six Sigma adoption. International Journal of Operations & Production Management 31 (4), 423–451.
Brewer, P.C., 2004. Six Sigma helps a company create a culture of accountability. Journal of Organizational Excellence 23 (3), 45–59.
Breyfogle, F.W., 2003. Implementing Six Sigma: Smarter Solutions Using Statistical Methods. John Wiley, Hoboken, NJ.
Byrne, G., Norris, B., 2003. Drive Baldrige level performance. Six Sigma Forum Magazine 2 (3), 13–21.
Carnell, M., 2003. Gathering customer feedback. Quality Progress 36 (1), 60–61.
Chakravorty, S.S., 2009. Six Sigma programs: an implementation model. International Journal of Production Economics 119 (1), 1–16.
Chakravorty, S.S., 2010. Where process improvement projects go wrong. The Wall Street Journal (January), R6.
Choo, A.S., Linderman, K.W., Schroeder, R.G., 2007. Method and psychological effects on learning behaviors and knowledge creation in quality improvement projects. Management Science 53 (3), 437–450.
Clark, K.B., Fujimoto, T., 1991. Product Development Performance. Harvard Business School Press, Boston.
Clifford, L., 2001. Why you can safely ignore Six Sigma. Fortune 143 (2), 140.
Cohen, W.M., Levinthal, D.A., 1990. Absorptive capacity: a new perspective on learning and innovation. Administrative Science Quarterly 35 (1), 128–152.
Corbett, C., Montes-Sancho, M., Kirsch, D., 2005. The financial impact of ISO 9000 certification in the U.S.: an empirical analysis. Management Science 51 (7), 1046–1059.
Detert, J.R., Schroeder, R.G., Mauriel, J.J., 2000. A framework for linking culture and improvement initiatives in organizations. Academy of Management Review 25 (4), 850–863.
Fahmy, D., 2006. When one black belt is enough. Treasury and Risk Management 16 (3), 55–59.
Goh, T.N., Low, P.C., Tsui, K.L., Xie, M., 2003. Impact of Six Sigma implementation on stock price performance. TQM & Business Excellence 14 (7), 753–763.
Gowen, C.R., Tallon, W.J., 2005. Effect of technological intensity on the relationships among Six Sigma design, electronic-business, and competitive advantage: a dynamic capabilities model study. The Journal of High Technology Management Research 16, 59–87.
Gutierrez, L.J.G., Llorens-Montes, F.J., Sanchez, O.F.B., 2009. Six Sigma: from a goal-theoretic perspective to shared-vision development. International Journal of Operations & Production Management 29 (2), 151–169.
Hahn, G.J., Hill, W.J., Hoerl, R.W., Zinkgraf, S.A., 1999. The impact of Six Sigma improvement – a glimpse into the future of statistics. The American Statistician 53 (3), 208–215.
Harry, M., Schroeder, R., 2000. Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations. Doubleday, New York.
Nakhai, B., Neves, J.S., 2009. The challenges of Six Sigma in improving service quality. International Journal of Quality & Reliability Management 26 (7), 663–684.
Naor, M., Goldstein, S.M., Linderman, K.W., Schroeder, R.G., 2008. The role of culture as driver of quality management and performance: infrastructure versus core quality practices. Decision Sciences 39 (4), 671–702.
Naveh, E., Erez, M., 2004. Innovation and attention to detail in the quality improvement paradigm. Management Science 50 (11), 1576–1586.
Pande, P.S., Neuman, R.P., Cavanagh, R.R., 2000. The Six Sigma Way: How GE, Motorola and Other Top Companies are Honing their Performance. McGraw-Hill, New York.
Parast, M.M., 2011. The effect of Six Sigma projects on innovation and firm performance. International Journal of Project Management 29, 45–55.
Powell, T.C., 1995. Total Quality Management as competitive advantage: a review and empirical study. Strategic Management Journal 16, 15–37.
Rowlands, H., 2003. Six Sigma: a new philosophy or repackaging of old ideas? Engineering Management (April), 18–21.
Schmenner, R.G., 1988. Escaping the black holes of cost accounting. Business Horizons 31 (1), 66–72.
Schmenner, R.G., 1991. International factory productivity gains. Journal of Operations Management 10 (2), 229–254.
Schmenner, R.G., Swink, M.L., 1998. On theory in operations management. Journal of Operations Management 17 (1), 97–113.
Schroeder, R.G., Linderman, K., Liedtke, C., Choo, A.S., 2008. Six Sigma: definition and underlying theory. Journal of Operations Management 26, 536–554.
Shewhart, W.A., 1931. Economic Control of Quality of Manufactured Product. D. Van Nostrand, New York.
Sila, I., 2007. Examining the effects of contextual factors on TQM and performance through the lens of organizational theories: an empirical study. Journal of Operations Management 25, 83–109.
Sinha, K.K., Van de Ven, A.H., 2005. Designing work within and between organizations. Organization Science 16 (4), 389–408.
Snee, R.D., Hoerl, R.W., 2003. Leading Six Sigma. Prentice Hall, Englewood Cliffs, NJ.
Teece, D.J., 2007. Explicating dynamic capabilities: the nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal 28 (13), 1319–1350.
Teece, D.J., Pisano, G., Shuen, A., 1997. Dynamic capabilities and strategic management. Strategic Management Journal 18 (7), 509–533.
Tushman, M.L., O'Reilly III, C.A., 1996. Ambidextrous organizations: managing evolutionary and revolutionary change. California Management Review 38 (4), 8–30.
Westphal, J.D., Gulati, R., Shortell, S.M., 1997. Customization or conformity? An institutional and network perspective on the content and consequences of TQM adoption. Administrative Science Quarterly 42, 366–394.
Yeung, A.C.L., Cheng, T.C.E., Lai, K.H., 2006. An operational and institutional perspective on total quality management. Production and Operations Management 15 (1), 156–170.
York, K.M., Miree, C.E., 2004. Causation or covariation: an empirical re-examination of the link between TQM and financial performance. Journal of Operations Management 22, 291–311.
Zahra, S.A., George, G., 2002. Absorptive capacity: a review, reconceptualization, and extension. Academy of Management Review 27 (2), 185–203.
Zollo, M., Winter, S.G., 2002. Deliberate learning and the evolution of dynamic capabilities. Organization Science 13 (3), 339–351.
Zu, X., Fredendall, L.D., Douglas, T.J., 2008. The evolving theory of quality management: the role of Six Sigma. Journal of Operations Management 26, 630–650.