UNIT 7 DESIGN OF EXPERIMENTS, SIX SIGMA AND BENCHMARKING

Structure
7.1 Introduction
Objectives

7.2 Some Experimental Design for Quality Tools


7.2.1 One Factor at a Time Method
7.2.2 The Full Factorial Method
7.2.3 The Fractional Factorial Method

7.3 Taguchi’s Experimental Design Method


7.3.1 Objective
7.3.2 Different Design Stages
7.3.3 Suggested Steps

7.4 Six Sigma


7.4.1 Six Sigma Methodology
7.4.2 Six Sigma Participants

7.5 Benchmarking
7.5.1 Approach
7.5.2 Steps in Formal Benchmarking Process

7.6 Summary
7.7 Key Words
7.8 Answers to SAQs

7.1 INTRODUCTION
The design of experiments is a series of techniques which involves the identification and
control of parameters which have a potential effect on performance and reliability of a
product design and/or the output of a process, with the objective of optimizing product
design, process design and process operation, and limiting the effect of noise
(uncontrollable) factors. The objective is to optimize the value of these design parameters
to make the performance of the system immune to variation. The concept can be applied
to the design of new products and processes or to the redesign of existing ones, in order
to :
• optimize product design, process design and process operation,
• achieve minimum variation of best system performance,
• achieve reproducibility of best system performance in manufacture and use,
• improve the productivity of design engineering activity,
• evaluate the statistical significance of the effect of any controlling factor on
outputs, and
• reduce costs.
These techniques identify and control the parameters or variables which have a potential influence on the output of a process. Each combination of factors and levels is used to evaluate its effect on a defined output. Prediction of system performance can be made based on the statistical analysis.
In this unit, the concept of six sigma has also been introduced. Six sigma is a measure of
quality that strives for near perfection. It is a disciplined and statistical methodology for
eliminating defects (driving towards six standard deviations between the mean and the
nearest specification limit) in any process, from manufacturing to transactional and from
product to service. Benchmarking is another quality improvement tool to improve the
performance of an organization by comparing the practices of a successful organization,
i.e. it is an opportunity to learn from the experience of others. This has also been described in this unit.
Objectives
After studying this unit, you should be able to
• know the design of experiments,
• understand six sigma, and
• describe benchmarking.

7.2 SOME EXPERIMENTAL DESIGN FOR QUALITY TOOLS

7.2.1 One Factor at a Time Method


In this method, the setting of one factor is altered while holding the others constant, and then the
resulting response is measured. This approach is easy to use and understand. However, it
has many shortcomings. It does not uncover interactions among variables. It is also
inefficient, resource intensive and costly. In addition, it is not easy to hold the factors
constant from experiment to experiment, and this itself creates variation.
7.2.2 The Full Factorial Method
Using this approach, all combinations of the factors to be tested are considered and the best
combination is found out. For example, three factors with two levels each (i.e.
level 1 and level 2) would need 2³ = 8 trials, as shown in Table 7.1.
Table 7.1 : The Full Factorial Method

Trial Number   A  B  C
1              1  1  1
2              1  1  2
3              1  2  1
4              1  2  2
5              2  1  1
6              2  1  2
7              2  2  1
8              2  2  2
This approach is feasible only when the number of factors is small and when experimentation is easy,
because even with, say, seven factors at two levels, the minimum number of trials would
be 2⁷ = 128. Even though such trials investigate the factors in a thorough
and pure scientific manner, the associated time and costs for running such a large number
of experiments are usually too high and unrealistic in industrial situations. Also, much of
the information obtained from the trials would be from combinations of factors which are
of little practical value. This problem may be overcome by the use of fractional factorial
designs.
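A full factorial layout is straightforward to enumerate programmatically. The minimal Python sketch below (not part of the original unit; the factor names are simply those of Table 7.1) generates the 2³ = 8 trials and shows how quickly the run count grows:

```python
from itertools import product

# Three control factors, each at two levels, as in Table 7.1.
factors = {"A": [1, 2], "B": [1, 2], "C": [1, 2]}

# Enumerate every combination of levels: 2 ** 3 = 8 trials.
for trial, levels in enumerate(product(*factors.values()), start=1):
    print(trial, dict(zip(factors, levels)))

# The run count grows exponentially with the number of factors:
# seven two-level factors already need 2 ** 7 = 128 full-factorial runs.
print("Runs for 7 factors at 2 levels:", 2 ** 7)
```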
7.2.3 The Fractional Factorial Method
The difficulty associated with full factorial designs in requiring too large a number of
experimental runs led to the evolution of fractional factorial designs, where the chosen
fraction of the full design gives an even and balanced spread throughout all the factors
being studied. Typically, a quarter fraction of the 128 experiments required for seven factors at two
levels would involve just 32 experiments. In this method of experimentation, as suggested
by Fisher (1925) and Plackett and Burman (1946), several factors are changed at the
same time in a systematic way so as to ensure the reliable and independent study of the
main factors and interaction effects. The orthogonal arrays are constructed with a limited
number of runs as a subset of the full factorial layout. The subsets are balanced between
columns to ensure that an even number of each level of each factor is tested during the
running of the experiment. The technique of orthogonal arrays reduces the size of the
experiment to a practicable level. However, some information is sacrificed by following
this method. Hence, while adopting this method, technical knowledge of the people
involved in the experiment is very important to ensure that the loss of information is
relatively insignificant. A typical Fisher array is shown in Table 7.2.
Table 7.2 : Fisher Array

Runs   A  B  C  D  E  F  G
1      1  1  1  1  1  1  1
2      1  1  1  2  2  2  2
3      1  2  2  1  1  2  2
4      1  2  2  2  2  1  1
5      2  1  2  1  2  1  2
6      2  1  2  2  1  2  1
7      2  2  1  1  2  2  1
8      2  2  1  2  1  1  2

In this array, the columns represent the independent variables or factors to be studied and
tested at one of two levels and the rows represent the tests or experiments to be
performed. In an experiment which has eight experimental runs (i.e. L8), the first option or
level of factor A is tested four times, and the second option or level of factor A is also
tested four times. In addition to this, during the experimental run, the array tests all the
combinations of options or levels of any two factors. Thus A1 is tested against both B1 and
B2, and A2 is similarly tested against B1 and B2. The number of interactions that can be
studied is dependent on the size of the array. The analysis of the orthogonal array is done
by averaging the responses applicable to the level of each factor. Factor A at level 1 is
given by averaging the results obtained from running experiments numbers 1 to 4 and
factor A at level 2 by averaging the results obtained from running experiments numbers
5 to 8. The difference between level 1 and level 2 of each factor is an indication of the
significance of that factor in influencing the response measured. Generally, the larger
the difference, the greater the significance. The analysis of the array enables the strength of
each level of each factor to be measured, and their relative significance in influencing the
designated output to be assessed. Analysis of variance is used to estimate the significance
that any factor has in influencing the measured response in relation to error (e.g.
measurement error and inconsistency in the setting of factor levels) in the experimental system.
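The balance and orthogonality properties described above can be verified directly. A small sketch (plain Python, an illustration rather than anything from the unit) that checks, for the L8 array of Table 7.2, that each level of each factor appears equally often and that every pair of columns contains all four level combinations the same number of times:

```python
from itertools import combinations

# L8 orthogonal (Fisher) array of Table 7.2: rows are runs, columns factors A..G.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

cols = list(zip(*L8))  # transpose: one tuple per factor column

# Each level of each factor is tested the same number of times (4 of 8 runs).
assert all(col.count(1) == col.count(2) == 4 for col in cols)

# Every pair of columns contains each of the four level combinations
# (1,1), (1,2), (2,1), (2,2) exactly twice - the orthogonality property.
for c1, c2 in combinations(cols, 2):
    pairs = list(zip(c1, c2))
    assert all(pairs.count(p) == 2 for p in [(1, 1), (1, 2), (2, 1), (2, 2)])

print("L8 array is balanced and orthogonal")
```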
The following experiment, concerning the processes used in the pharmaceutical industry in the
manufacture of medicines in tablet form, outlines the use of the fractional factorial
method with orthogonal arrays. The aim is to produce uniform tablets in terms
of size and content and so the initial process of mixing the drug solution and the carrying
medium is of paramount importance. The particle size, the even distribution of the drug
(content uniformity) and the moisture content are controlled with small variation around
the target value prior to feeding into the tablet-making part of the operation. The three
measured responses are, therefore, particle size, content uniformity, and moisture content.
Table 7.3 shows the layout of the experiment and the results of each experimental run
from the particular combination of the factors in the run are given in Table 7.4. The
experiment will indicate the combination which gives the best result, but there may be a
better combination. This is done by analyzing the effect of each factor. The response of
the relevant experiment where the information occurs is simply added up and averaged so
that comparisons may be made between level 1 and level 2 of each factor, as shown in
Tables 7.5 and 7.6. The average of the experimental run is calculated to be 3.96.
Table 7.3 : Layout of the Experiment

Control Factors              Level 1   Level 2
A  Mixing speed              High      Low
B  Drying temperature        High      Low
C  Chopping speed            Long      Short
D  Drying mechanism          Type A    Type B
E  Drying time               Long      Short
F  Mixing time               Long      Short
G  Solution addition rate    Fast      Slow

Table 7.4 : Results of the Experiment

Run   A     B     C      D       E      F      G      Particle Size
1     High  High  Long   Type A  Long   Long   Fast   3.8
2     High  High  Long   Type B  Short  Short  Slow   4.5
3     High  Low   Short  Type A  Long   Short  Slow   5.3
4     High  Low   Short  Type B  Short  Long   Fast   4.9
5     Low   High  Short  Type A  Short  Long   Slow   4.4
6     Low   High  Short  Type B  Long   Short  Fast   2.9
7     Low   Low   Long   Type A  Short  Short  Fast   2.3
8     Low   Low   Long   Type B  Long   Long   Slow   3.6

Table 7.5 : Analysis of the Effect of Factors and Levels

A1 = 1/4 (3.8 + 4.5 + 5.3 + 4.9) = 18.5/4 = 4.625
A2 = 1/4 (4.4 + 2.9 + 2.3 + 3.6) = 13.2/4 = 3.3
B1 = 1/4 (3.8 + 4.5 + 4.4 + 2.9) = 15.6/4 = 3.9
B2 = 1/4 (5.3 + 4.9 + 2.3 + 3.6) = 16.1/4 = 4.025

Table 7.6 : Comparison between Level 1 and Level 2

             A      B      C      D      E      F      G
Level 1      4.625  3.900  3.550  3.950  3.900  4.175  3.475
Level 2      3.300  4.025  4.375  3.975  4.025  3.750  4.450
Difference   1.325  0.125  0.825  0.025  0.125  0.425  0.975

Table 7.7 : Final Result

A2  Mixing speed         = 3.96 − 3.300 = 0.660
G1  Solution add. rate   = 3.96 − 3.475 = 0.485
C1  Chopping speed       = 3.96 − 3.550 = 0.410
F2  Mixing time          = 3.96 − 3.750 = 0.210
Total below average      = 3.96 − 1.765 = 2.195

The construction of the orthogonal array shows the significance of each of the factors in
relative value to each other in terms of their effect on influencing the value of the output
or response, in this case ‘particle size’. Thus mixing speed, solution addition rate, chopping
speed and mixing time have the greatest effect, in that order, while drying temperature,
drying time and the drying mechanism are, in this example, of little relative significance. The other useful
property the balance of the array gives is the additive effect of each of the main control
factors in the value of the response beyond the experimental average. In this example,
particle size is required to be as small as possible. The effect below average is shown in
Table 7.7. The total below average is used as a prediction of the result if the process is
set up using a combination of factor level settings that reflect their best effect on the
output, in this case A2, C1, F2 and G1 (see Table 7.7). The other factors B, D and E can be set at the level
where least cost is incurred. This may be B2 (lowest temperature), E2 (shortest drying
time) and perhaps either D1 or D2 according to the lower capital cost, or the lower
operating cost. The predicted results are compared with the results obtained by a
confirmation run. The closer the confirmation run is to the prediction, the better the team's thinking in
the construction of the experiment can be considered to have been.
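The level-average analysis of Tables 7.5 to 7.7 is easy to reproduce. The sketch below (an illustration, not part of the original unit) combines the L8 array of Table 7.2 with the particle-size responses of Table 7.4; since small particle size is the goal, it predicts the result for the best settings A2, G1, C1 and F2:

```python
# L8 orthogonal array (Table 7.2) and particle-size responses (Table 7.4).
L8 = [[1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 2, 2, 2, 2],
      [1, 2, 2, 1, 1, 2, 2], [1, 2, 2, 2, 2, 1, 1],
      [2, 1, 2, 1, 2, 1, 2], [2, 1, 2, 2, 1, 2, 1],
      [2, 2, 1, 1, 2, 2, 1], [2, 2, 1, 2, 1, 1, 2]]
y = [3.8, 4.5, 5.3, 4.9, 4.4, 2.9, 2.3, 3.6]
mean = sum(y) / len(y)                        # experimental average, ~3.96

avg = {}                                      # (factor, level) -> level average
for j, name in enumerate("ABCDEFG"):
    for level in (1, 2):
        runs = [y[i] for i in range(8) if L8[i][j] == level]
        avg[name, level] = sum(runs) / len(runs)
    diff = abs(avg[name, 1] - avg[name, 2])   # indicator of significance
    print(name, round(avg[name, 1], 3), round(avg[name, 2], 3), round(diff, 3))

# Particle size should be small, so choose the lower level average of the
# four significant factors and subtract their below-average effects from
# the experimental mean, as in Table 7.7.
best = [("A", 2), ("G", 1), ("C", 1), ("F", 2)]
prediction = mean - sum(mean - avg[f] for f in best)
print("Predicted particle size:", round(prediction, 2))
# ~2.19; Table 7.7 obtains 2.195 by working with the rounded average 3.96.
```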

7.3 TAGUCHI’S EXPERIMENTAL DESIGN METHOD


Professor Genichi Taguchi devised a quality improvement technique that uses
experimental design methods for efficient characterization of a product or process,
combined with a statistical analysis of its variability. This approach allows quality
considerations to be included at an early stage of any new venture: in the design and
prototype phase for a product; during routine maintenance; or during installation and
commissioning of a manufacturing process. In other words, the need for mass inspection
is eliminated by building quality into the product and process at the design stage.
Taguchi’s definition of quality is quite different from that of many people in the field of
quality. He defines quality in a negative way as ‘the loss imparted to society from the time
the product is shipped’. This loss includes the cost of customer dissatisfaction which may
lead to loss of reputation and goodwill for the company. Indeed, apart from the direct loss,
there may be market-share loss and increased marketing efforts needed to overcome the
lack of competitiveness.
Taguchi uses his loss function approach to establish a value base for the development of
quality products. This function recognizes the need for average performance to match
customer requirements, and the fact that variability in this performance should be as small
as possible. According to Taguchi, a product does not cause a loss only when it is outside
specification, but whenever it deviates from its target value. The larger the deviation from
the target, the larger society’s (producer’s and consumer’s) loss will be. This loss can be
approximately evaluated by Taguchi’s loss function, which unites the financial loss with
the functional specification through a quadratic relationship. This loss, as proposed by
Taguchi, is proportional to the square of the deviation from the target. Figure 7.1 provides
the basic formula for the loss function L (Y) and a graphical representation of the loss to
society when the performance (Y) of a product deviates from the desired target t. M is
the producer’s loss (in monetary terms) when the customer’s tolerance D is exceeded.

L(Y) = (M / D²) (Y − t)²

(The figure plots the loss L(Y) against performance Y: the loss is zero at the target t, rises quadratically on either side, and reaches M at the customer tolerance limits t − D and t + D.)
Figure 7.1 : Taguchi’s Loss Function
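A short numerical illustration of the quadratic loss (the target, tolerance and monetary figures below are invented for the example):

```python
def taguchi_loss(y, t, M, D):
    """Quadratic loss L(Y) = (M / D**2) * (Y - t)**2 of Figure 7.1."""
    return (M / D ** 2) * (y - t) ** 2

# Hypothetical values: target t = 10.0 mm, customer tolerance D = 0.5 mm,
# and a producer's loss of M = 200 monetary units when D is exceeded.
t, D, M = 10.0, 0.5, 200.0
for y in (10.0, 10.1, 10.25, 10.5):
    print(f"Y = {y:5.2f}  loss = {taguchi_loss(y, t, M, D):6.1f}")
# The loss is zero only on target and grows to M at the tolerance limit
# Y = t + D, unlike a step function that is zero everywhere inside the
# specification.
```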

7.3.1 Objective
The objective of Taguchi’s design is improvement in the process and product-design
through the identification of easily controllable factors and their settings, which minimize
the variation in product response while keeping the mean response on target. This is
achieved during Taguchi’s parameter-design stage by removing the bad effect of the
cause rather than the cause of the bad effect. Furthermore, since the method is applied in
a systematic way at a pre-production stage (off-line), it can greatly reduce the number of
time-consuming tests needed to determine cost effective process conditions, thus saving in
costs and wasted products.
According to Taguchi, the behaviour of a product or a process is characterized in terms of
two factors – controllable (or design) factors and uncontrollable (or noise) factors.
Controllable factors are those whose values may be easily adjusted or set by the designer
or process engineer whereas uncontrollable (or noise) factors are sources of variation
associated with the production or operational environment and they are difficult or
virtually impossible to control. So, ideally, the overall performance should be insensitive to
their variation. Controllable factors can further be classified as
• Target control factors (TCF) or signal factors, which affect the average
levels of the response of interest,
• Variability control factors (VCF) which affect the variability in the response,
and
• Cost factors which affect neither the mean response nor the variability, and
so can be adjusted to fit economic requirements.
This focus on variability distinguishes the Taguchi approach from traditional tolerance
methods or inspection-based quality control. The idea is to reduce variability by changing
the variability control factors, while maintaining the required average performance through
adjustments to the target control factors.
7.3.2 Different Design Stages
There are three distinct stages of designing in quality, as suggested by Taguchi :
System Design
In system design, technical knowledge and scientific skills are used for the
development of the basic configuration of the system, which involves the selection
of parts and materials, the use of feasibility studies and prototyping.
Parameter Design
In parameter design, while keeping the response of interest on target, settings of
the controllable factors expected to reduce the performance variation (caused by
the noise factors) are identified. Attempt is made to reduce or remove the effect of
the noise factors rather than the noise factors themselves. By systematically
varying the noise factors at each of the various settings of the controllable factors,
the effect (variation) is simulated during the experiment. The controllable factor
settings are represented by the rows of an experimental design (inner array),
usually a fractional orthogonal array, where every level (setting) of a factor occurs
with every level of all other factors the same number of times. At every level-
combination of the controllable factors, some observations are obtained while
changing the settings of the noise factors assuming that the noise factors can be
controlled and changed (at least for the purposes of the experiment). A fractional
orthogonal array can then be utilized to determine the level-combinations of the
noise factors (outer array). In such a case, the experimenter can simulate the
variability (effect) of the noise factors on each controllable-factor setting and
determine the setting which minimizes this variability. Two types of performance
measures, noise performance measure (NPM) and target performance measure
(TPM) are calculated from the observation of each setting of the controllable
factors. The noise performance measure (NPM) reflects the variation in the
response at each setting and its analysis will determine the controllable factors
which can affect (and thus control) this variation. The target performance measure
(TPM) reflects the process average performance at each setting and its analysis
reveals those controllable factors, which are not variability control factors, but have
a large effect on the mean response (the target control factors). These can be
manipulated to bring the mean response to the required target. As a TPM, the sample
mean ȳ of the observations in each trial may be used. Many measures for the
NPM have been suggested. According to Taguchi, when there is a target value to
be achieved for the response, the signal to noise ratio (SNR) is used, which estimates
the inverse of the coefficient of variation, i.e. the ratio of the process mean (µ) to the
process standard deviation (σ). For each experimental trial, the SNR is computed
according to the equation

SNR = 10 log₁₀ (ȳ² / s²)

where ȳ and s are respectively the mean and the standard deviation of the
observations in that trial.
Taguchi also recommends the consideration of the interaction effect between two
factors A and B (i.e. A × B), when the effect of one factor A (on the response)
depends on the settings of another factor B. Depending on the number of factors,
orthogonal arrays can be constructed and the interaction effect can be studied.
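A minimal sketch of the TPM and the target-is-best SNR calculation for one inner-array trial (the observation values are hypothetical):

```python
import math

def snr_target_is_best(obs):
    """SNR = 10 log10(ybar**2 / s**2): the inverse squared coefficient
    of variation, expressed on a decibel scale."""
    n = len(obs)
    ybar = sum(obs) / n
    s2 = sum((y - ybar) ** 2 for y in obs) / (n - 1)  # sample variance
    return 10 * math.log10(ybar ** 2 / s2)

# Observations for one controllable-factor setting, taken across the
# outer (noise) array settings - hypothetical numbers.
trial = [9.8, 10.1, 10.0, 9.7, 10.4]
print("TPM (mean):", sum(trial) / len(trial))            # 10.0
print("NPM (SNR): ", round(snr_target_is_best(trial), 2), "dB")
# A larger SNR means less variation relative to the mean, so optimal
# variability-control-factor settings are those that maximize the SNR.
```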
Allowance or Tolerance Design
Taguchi recommends the use of allowance design to remove the effects of the
(outer or inner) noise factors if parameter design fails to do so. In this, some
additional factors are considered which were excluded earlier on cost grounds.
If that also fails, tolerance re-design is advocated: the optimum nominal levels
for the factors (as identified by parameter design) are retained, but the
tolerances of certain crucial factors (components) are reduced in an optimal and
cost-effective way so that the overall variability in the response is reduced to
acceptable levels. A trade-off can be involved by relaxing the tolerances of certain
non-crucial components. In other words, this is the stage when the decision is taken
on how best to remove the noise factors, having failed to remove their effect.
Therefore the requirement for tolerance design is that it should take place as the
last resort, only after the parameter-design stage.
7.3.3 Suggested Steps
There are certain steps that Taguchi suggests to be taken in carrying out experimental
studies.
Define the Problem
A clear statement of the problem to be solved is provided.
Determine the Objective
The output characteristics (responses) to be studied are identified and eventually
optimized. The method of measurement is determined.
Conduct a Brainstorming Session
This is a very important stage in performing an experimental study. Managers and
operators closely related to the production process or the product under
consideration should get together in order to determine the controllable and
uncontrollable factors, and to define the experimental range and the appropriate
factor levels. Taguchi believes that it is generally preferable to consider as many
factors (rather than many interactions) as is economically feasible for the initial
screening.
Design the Experiment
The appropriate experimental designs are selected by taking into account the
controllable and noise factors.
Conduct the Experiment
Perform the experimental trials and collect the experimental data.
Analyze the Data
The performance measures (TPM and NPM) for each trial run of the inner array
are evaluated and analyzed using the appropriate statistical analysis techniques.
Interpret the Results
Optimal levels for the variability control factors (VCF) and target control factors
(TCF) are identified. For the VCFs, the optimal levels are those which maximize
the NPM (minimize variability in the response), and for the TCFs, they are those
which bring the mean response nearest to the target value. The process
performance under the optimal conditions are then predicted.
Run a Confirmatory Experiment Design of Experiments,
Six Sigma and
It is necessary to confirm, by some follow up experimental trials, that the new Benchmarking
parameter settings improve the performance measures over their value at the initial
settings. A successful confirmation experiment alleviates concerns about the
possibilities of a wrong choice for factor levels and experimental design, wrong
assumptions of no interactions or improper assumptions underlying the response
model. If the predicted results are not confirmed, or the results are otherwise
unsatisfactory, additional experiments have to be carried out.
SAQ 1
(a) What is Design of Experiments?
(b) What are the different factorial experimentation methods? Explain in brief.

7.4 SIX SIGMA


Six Sigma was born at Motorola about 15 years ago. It is a high performance, data-driven
approach for analyzing the root causes of business processes/problems and solving
them. It links customers, process improvements and financial results. Sigma is a Greek
letter used in statistics to denote the standard deviation of a process. More specifically,
sigma measures the capability of the process to perform defect-free work. A defect is
anything that results in customer dissatisfaction, such as a defective component, a wrong
shipment, delayed deliveries, high cycle time or missed calls. The sigma
value indicates how often defects are likely to occur. As sigma increases, cost of poor
quality goes down (as shown in Figure 7.2) while profitability, productivity and customer
satisfaction go up.

(The figure plots the cost of poor quality, on a scale of 5 to 40 per cent, decreasing steadily as the sigma level rises from 2 to 6.)

Figure 7.2 : Variation of Cost of Poor Quality with Sigma Level

Many manufacturing company processes operate at 3 sigma level which translates into
approx. 67,000 defects per million, while service company processes are often at 1 to
2 sigma level, i.e. 690,000 to 308,000 defects per million. Six Sigma’s target is to achieve
less than 3.4 defects or errors per million opportunities, where an opportunity is defined as
a chance for nonconformance, or not meeting the required specifications. The Cost of Poor
Quality (COPQ) must be reduced in order to improve net profit margins by twenty to forty
percent.
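The defect rates quoted above correspond to normal-distribution tail areas computed with the conventional 1.5-sigma long-term shift of the process mean (an assumption built into standard Six Sigma tables, not stated in this unit). A sketch:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level, using
    the conventional 1.5-sigma long-term shift of the process mean."""
    z = sigma_level - shift
    upper_tail = 0.5 * math.erfc(z / math.sqrt(2.0))  # P(Z > z)
    return upper_tail * 1_000_000

for s in (1, 2, 3, 6):
    print(f"{s} sigma: {dpmo(s):>12,.1f} DPMO")
# 1 sigma ~ 691,462; 2 sigma ~ 308,538; 3 sigma ~ 66,807 (the "approx.
# 67,000" above); 6 sigma ~ 3.4 defects per million opportunities.
```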
Six Sigma is not about establishing a separate quality ivory tower within a company or
organization and it is not about cost avoidance. It is an enterprise-wide strategy that
effectively develops employees within a company to have the knowledge and capability to
solve problems, to improve decision-making and subsequently improve the overall
performance of the enterprise from a financial and customer perspective.
When Six Sigma is properly implemented as a roadmap and a management framework, it
consistently delivers breakthrough results throughout the business. As a system, it
combines the best problem solving tools and methods with capable employees under the
umbrella of a comprehensive leadership framework, to rapidly achieve reduced costs,
higher quality, lower cycle times, improved overall customer satisfaction and a lower
investment in equipment and inventory; all leading to increased market share, revenue,
profits, and ultimately shareholder value. The real challenge with six sigma is not the
statistics. It is getting to the point where one can meaningfully measure a business’s
current performance against dynamic customer requirements while developing the internal
abilities to respond to changing market place conditions.
7.4.1 Six Sigma Methodology
The fundamental objective of the Six Sigma methodology is the implementation of a
measurement-based strategy that focuses on process improvement and variation
reduction through the application of Six Sigma tools. As a way of running a business, Six
Sigma is a highly disciplined process which helps companies and individuals develop and
deliver near perfect products and services. It is an enterprise-wide strategy that
effectively develops a capability and a desire within individuals to improve decision
making, solve business problems and improve the overall performance of the enterprise.
The Six Sigma philosophy holds that every process can and should be repeatedly
evaluated and significantly improved in terms of time required, resources used, quality
performance, cost and other aspects relevant to the process. It prepares employees with
the best available problem-solving tools and methods. At its core, Six Sigma utilises a
systematic five-phase problem solving methodology called DMAIC : Define, Measure,
Analyse, Improve and Control. This is illustrated in Figure 7.3.

(The figure shows the five DMAIC phases in sequence – Define, Measure, Analyse, Improve, Control – grouped into a characterization stage and an optimization stage.)

Figure 7.3 : The DMAIC Methodology

Define
At the preliminary stage, we identify poorly performing areas of the company,
define and launch projects with well articulated problem and objective statements
that have a financially beneficial impact to the company.
Measure
Here we identify the true process and determine the most likely contributors
including the statistical determination of the accuracy and repeatability of the data
characterizing the process. We ask, what is the capability of the process? Using
process mapping, flow charts and FMEA (Failure Mode Effects Analysis), original
data is collected that will act as a baseline for monitoring improvements.
Analyse
When, where and why do defects occur? This phase applies appropriate statistical
analysis such as scatter plots, Input/Output matrices and hypothesis testing to
accurately understand exactly what is happening within a given process.
Improve
In this phase, vital factors in the process are identified and experiments are
systematically designed to focus on those that can be modified or adjusted to
achieve the desired level of improvement.
Control
The Control phase incorporates the basic tools of Process Control to manage
processes on a continual basis. Once the DMAIC process has begun, it must be
managed continually to assure that benefits are sustained.
7.4.2 Six Sigma Participants
In the Six Sigma environment, participants from senior management to factory floor
workers assume specific roles in the performance improvement process. The Champion,
Master Black Belt, Black Belt, Green Belt and Yellow Belt (Figure 7.4) each have a unique
perspective on the business’s strategic priorities, key processes and the organization’s
culture.

(The figure shows these roles as a pyramid, with Champions at the apex, followed by Master Black Belts, Black Belts, Green Belts and Yellow Belts at the base.)

Figure 7.4 : Six Sigma Participants


Champions are typically senior executives who own the business aspects of a Six Sigma
project. Champions select and scope projects that are aligned with
the corporate strategy, choose and mentor the right people for the project, and remove
barriers to ensure the highest levels of success.
The Master Black Belt (MBB) sits atop a skill and knowledge hierarchy that includes
Black and Green Belts, with gradually increasing levels of sophisticated tool sets at their
disposal. The primary activity for the MBB is being a leader and teacher. As a leader, the
MBB will have responsibility for overseeing projects with multiple Black Belts and Green
Belts participation that will significantly change the way the organisation does business.
As a teacher, the MBB is responsible for the on-going development of existing Black
Belts, Green Belts and Yellow Belts and the training for new participants.
The Black Belt is a key change agent for the Six Sigma process. Typically drawn from among
the best performers, these individuals lead teams working on chronic issues that are
negatively impacting the company’s performance. The Black Belt is usually assigned to a
two-year dedicated position responsible for executing the Six Sigma process on selected
projects.
Green Belts serve as specially trained team members within a function-specific area of
the organization. This focus allows the Green Belt to work on small carefully defined Six
Sigma projects, requiring less than a Black Belt’s full-time commitment to Six Sigma
throughout the business.
Yellow Belts represent a large percentage of the workforce and are trained with the skills
necessary to identify, monitor and control profit-eating practices in their own processes.
They are also prepared to feed that information to Black Belts and Green Belts working
on larger system projects. The training of Yellow Belts builds and sustains the Six Sigma
culture.

7.5 BENCHMARKING
The concept of benchmarking was popularized by the work of Camp (1989) based on the
experiences of Rank Xerox company when the company started to evaluate its copying
machines against the Japanese competition and found that the Japanese companies were
selling their machines for what it cost Rank Xerox to make them. It was assumed that the
Japanese produced machines were of poor quality, but this proved not to be the case. This
exposure of the corporation’s vulnerability highlighted the need for change.
Benchmarking is a quality improvement tool to improve the performance of an
organization by comparing the practices of another organization, i.e. it is an opportunity to
learn from the experience of others. A simple self-explanatory model for
benchmarking is shown in Figure 7.5. It helps to develop an improvement mindset
amongst staff that facilitates a better understanding of practices and processes often
challenging the existing ones to achieve the goals. It has been defined in a number of
ways (Adam and Vande Walter, 1995) including:
• as a process for identifying and learning from the best practices in the world,
• as a search for and application of significantly better practices that lead to
superior competitive performance, and
• as a process of comparing the business of one organization against another
to gain information about "best practices" that when creatively adapted, can
lead to superior performance.

(The model shows two routes to identifying best practices for the organization: benchmarking within the industry and outside the industry. In both routes the organization compares its own practices with the best practices available, evaluates those practices, and implements the best practices.)
Figure 7.5 : Benchmarking Model

7.5.1 Approach
Benchmarking can be either informal benchmarking or formal benchmarking. The
informal benchmarking is a traditional form of benchmarking which most of the
organizations carry out for years in mainly two forms :
• Visits to other companies to obtain ideas on how to facilitate improvement in
one’s own organization.
• The collection, in a variety of ways, of data about competitors.
This approach is not very effective because of the lack of structure and clear objectives,
and is often branded ‘industrial tourism’. To use benchmarking as a learning
experience as part of a continuous process rather than a one-off exercise, a more formal
approach is required. There are three main types of formal benchmarking:
Internal Benchmarking
This is the easiest and simplest form of benchmarking, which involves
benchmarking between businesses or functions within the same group of companies.
Most companies commence benchmarking with this form of internal comparison. In
this way, the best internal practice and initiatives are shared across the corporate
business.
Competitive Benchmarking
This is a comparison with direct competitors, whether of products, services or
processes within a company’s market. It is often difficult, if not impossible in some
industries, to obtain the data for this form of benchmarking as by the very nature of
being a competitor the company is seen as a threat.
Functional/Generic Benchmarking
‘Functional’ relates to the functional similarities of organizations, while ‘generic’
looks at the broader similarities of businesses. With functional benchmarking, the
partners will usually share common characteristics in the industry, whereas generic
benchmarking partners may belong to quite different industries. It is usually easier to gain
access to other organizations to perform this type of benchmarking. Organizations
are often keen to swap and share information in a network or partnership
arrangement, particularly when no direct threat is presented to a company’s
business or market share.
7.5.2 Steps in Formal Benchmarking Process
There are a number of steps in a formal benchmarking process. They are now briefly
described :
• The subject to be benchmarked is identified. A team is formed, proper
support for it is arranged, and the roles and responsibilities of all the
team members are decided, so that agreement can be reached on the benchmark
measures to be used. A draft project plan is created and communicated to the
required internal parties. The process for benchmarking is chosen in such a
way that it has a significant impact on customer satisfaction and/or
internal efficiency.
• The companies which will be benchmarked will be identified from a set of
selection criteria defined from the critical success factors of the project.
• A data-collection plan is developed by deciding the most appropriate means
of collecting the data, the type of data to be collected, and a plan of action to
obtain the data.
• The data collected are tabulated and analyzed to determine the reasons for the
current gap (positive or negative) in performance between the company and
the best amongst the companies involved in the benchmarking exercise. The
gap is usually expressed in the form of a percentage (a simple calculation is
sketched after this list). The change in performance of the company and the
benchmark company over an agreed time-frame is estimated in order to assess
whether the gap is going to grow or decrease, based on the plans and goals of
the parties concerned.
• Goals are established to close or increase the gap in performance; effective
communication of the benchmarking exercise findings and acceptance of the
data are ensured.
• An action plan is developed to achieve the goals. This step involves gaining
acceptance of the plans by all employees likely to be affected by the
changes.
• By effective project-planning and management, the actions, plans and
strategies are implemented and the results of the action plans are assessed.
• Reassessment or recalibration of the benchmarks is done to check whether the
actual performance/improvement is meeting that which has been projected. This
should be conducted on a regular basis and involves maintaining good links
with the benchmarking partners.
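A minimal sketch of the gap calculation referred to in the list above (the metric and figures are hypothetical):

```python
def benchmark_gap(own, benchmark):
    """Performance gap as a percentage of own performance. Positive
    means the benchmark partner is ahead (a gap to close); negative
    means the company is ahead (a gap to maintain or increase)."""
    return (benchmark - own) / own * 100

# Hypothetical metric: orders shipped per day.
print(f"Gap: {benchmark_gap(own=420, benchmark=525):+.1f}%")  # +25.0%
```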
SAQ 2
(a) Define quality in terms of quality loss function as suggested by Taguchi.
(b) What is six sigma?
(c) What is benchmarking? Briefly describe the steps to be followed in
benchmarking process.
Design of Experiments,
Six Sigma and
Benchmarking
7.6 SUMMARY
This unit presents the concept of experimental design techniques for understanding the effect
of controllable factors, be it a product or a process design, in minimizing variation while
centering the output on a target value. These are major techniques in investigating quality
problems. Besides highlighting the one factor at a time method, full factorial and fractional
factorial method, Taguchi’s off line technique for quality control has been presented
which allows quality considerations to be included at an early stage of any new venture.
Six sigma, a disciplined, data-driven methodology for eliminating defects in any process,
has been discussed in brief. Benchmarking, a quality improvement tool to
improve the performance of an organization by comparing the practices of another
organization, has also been discussed.

7.7 KEY WORDS

Controllable Factors : These are factors that may be controlled during
production (e.g. temperature, speed, pressure,
tension, material type, etc.).
Design Parameters : Those parameters or factors that affect the
performance of a product or process.
Informal Benchmarking : The informal benchmarking is a traditional form of
benchmarking which most of the organizations
carry out mainly in two forms, first is by visiting
other companies to obtain ideas on how to facilitate
improvement in one’s own organization and
secondly by collection, in a variety of ways, of data
about competitors.
Loss Function : Loss function is an approach to establish a value
base for the development of quality products, as
suggested by Taguchi. By this function, loss to the
society can be approximately evaluated by a
function with respect to the performance of the
product deviating from a desired target.
Noise Factors : Factors that either can’t be controlled or for
economic reasons are not to be controlled.
Six Sigma : Six Sigma is a high performance, data driven
approach for analyzing the root causes of business
processes/problems and solving them. It links
customers, process improvements and financial
results.

7.8 ANSWERS TO SAQs

SAQ 1
(a) The design of experiments is a series of techniques which involves the
identification and control of parameters which have a potential effect on
performance and reliability of a product design and/or the output of a
Quality Tools – The objective is to optimize the value of these design parameters to make the
Statistical performance of the system immune to variation. The concept can be applied
to the design of new products and processes or to the redesign of existing
ones, in order to :
• optimize product design, process design and process operation,
• achieve minimum variation of best system performance,
• achieve reproducibility of best system performance in manufacture
and use,
• improve the productivity of design engineering activity,
• evaluate the statistical significance of the effect of any controlling
factor on outputs, and
• reduce costs.
(b) The different factorial methods can broadly be classified into the one factor at a
time method, the full factorial method and the fractional factorial method.
One factor at a time method : In this method, setting of one factor is
altered holding the others constant and then the resulting response is
measured.
The full factorial method : Using this approach, all combinations of the
factors to be tested are considered and the best combination is found
out. For example, three factors with two levels each (i.e. level 1 and level 2)
would need 2³ = 8 trials.
The fractional factorial method : The difficulty associated with full
factorial designs in requiring too large a number of experimental runs led
to the evolution of fractional factorial designs, where the chosen fraction of
the full designs gives an even and balanced spread throughout all the factors
being studied. Several factors are changed at the same time in a systematic
way so as to ensure the reliable and independent study of the main factors
and interaction effects.
SAQ 2
(a) Taguchi defines quality in a negative way as ‘the loss imparted to
society from the time the product is shipped’. This loss includes the cost of
customer dissatisfaction which may lead to loss of reputation and goodwill
for the company. Taguchi uses his loss function approach to establish a value
base for the development of quality products. The loss can be approximately
evaluated by Taguchi’s loss function, which is proportional to the square of
the deviation from the target.
(b) Six Sigma is a high performance, data driven approach for analyzing the root
causes of business processes/problems and solving them. It links Customers,
Process improvements and financial results. Six Sigma’s target is to achieve
less than 3.4 defects or errors per million opportunities where an opportunity
is defined as a chance for nonconformance, or not meeting the required
specifications. Many manufacturing company processes operate at 3
sigma level, which translates into approx. 67,000 defects per million, while
service company processes are often at 1 to 2 sigma level, i.e. 690,000 to
308,000 defects per million.
(c) Answer can be found in the text.
Design of Experiments,
Six Sigma and
Benchmarking

FURTHER READING
Logothetis, N. (1997), Managing for Total Quality, Prentice Hall of India.
Dale, B. G. (2004), Managing Quality, Blackwell Publishing, USA.
Stamatis, D. H. (1997), TQM Engineering Handbook, Marcel Dekker Inc., USA.
Fisher, R. A. (1925), Statistical Methods for Research Workers, Oliver and Boyd, Edinburgh.
Plackett, R. L. and Burman, J. P. (1946), The Design of Optimum Multifactorial Experiments, Biometrika, 33(3), pp 305-325.
Taguchi, G. (1986), Introduction to Quality Engineering : Designing Quality into Products and Processes, Asian Productivity Organization, Tokyo.
Adam, P. and Vande Walter, R. (1995), Benchmarking on the Bottom Line : Translating Business Reengineering into Bottom-line Results, Industrial Engineering, Feb., pp 24-26.
QUALITY TOOLS – STATISTICAL


The use of statistics has found an important place in quality engineering. There are inherent
process variations; hence, statistics can help to quantify process performance.
Statistical process control helps control the process. The aim should be to reduce the
variability in the process. Statistical process control is applicable to any section of an
organization, whether it is manufacturing, service, education or any other.
Statistical Process Control (SPC) is applicable mainly at production stage. Taguchi’s
methods are applicable starting from the design phase. The design stage is the off-line
stage. Taguchi methods use a lot of material from statistics. Taguchi provided the concept
of loss function. According to him, deviation from the target leads to the loss, which can
be quantified in terms of money. Thus, there is a need to reduce the variability. The
variability in a process is due to process parameters and associated noises. The process
parameters are controllable factors, whilst noises are uncontrollable factors. Taguchi
suggested setting the controllable parameters in such a manner that the effect of the
noises is reduced. He also suggested a systematic procedure for carrying out the
experiments, which helps in the statistical analysis.
In the recent past, the six-sigma technique has become very popular. Six sigma is a measure
of quality that strives for near perfection. It is a disciplined and statistical methodology for
eliminating defects (driving towards six standard deviations between the mean and the
nearest specification limit) in any process, from manufacturing to transactional and from
product to service. Benchmarking is another quality improvement tool to improve the
performance of an organization by comparing the practices of a successful organization,
i.e. it is an opportunity to learn from the experience of others.
This Block contains three units. Unit 5 introduces basics of statistics and their application
to quality engineering. Acceptance sampling, in which a lot is accepted or rejected based
on the inspection of a sample, is described in detail. Unit 6 describes online and offline
methods of quality control. Statistical process control is an online method, whereas
Taguchi’s method falls in the category of offline methods. Unit 7 describes design of
experiments, six sigma and benchmarking.
