413 Die Cast Problem Solving
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used
only for identification and explanation without intent to infringe nor endorse the product or corporation.
© 2009 by North American Die Casting Association, Arlington Heights, Illinois. All Rights Reserved.
Neither this book nor any parts may be reproduced or transmitted in any form or by any means,
electronic or mechanical, including photocopying, microfilming, and recording, or by any information
storage and retrieval system, without permission in writing from the publisher.
TABLE OF CONTENTS
I. Introduction
II. Variation and Basic Statistics
III. Problem Solving Methodologies
IV. 8D Step 1: Team Approach
V. 8D Step 2: Describe the Problem
VI. 8D Step 3: Implement and Verify Interim Containment
VII. 8D Step 4: Determine and Verify Root Causes
   A. Identify Potential Causes
   B. Select Likely Causes
   C. Is the Potential Cause a Root Cause?
   D. Identify Alternative Solutions
VIII. 8D Step 5: Verify Corrective Actions
IX. 8D Step 6: Implement Permanent Corrective Actions
X. 8D Step 7: Prevent Recurrence
XI. 8D Step 8: Congratulate Your Team
XII. Problem Solving Examples
   A. Example 1: 2 Factor, 2 Level Full Factorial
   B. Example 2: 5 Factor, 2 Level Fractional Factorial
   C. Example 3: 5 Factor, 2 Level Fractional Factorial
   D. Example 4: 3 Factor, 3 Level Full Factorial
XIII. Appendices
The Solution
The lack of success in die cast problem solving is not likely due to a lack of interest or
effort in solving problems. Rather, it is likely due to a lack of understanding of the nature
of variation, and of the fact that the interactive nature of die casting makes solving process
problems very difficult.
This Text
This die cast problem solving text and the accompanying EC-413 course are unique and
progressive in that they address variation thoroughly and cover the interactive nature of the
die casting process by using problem solving tools that respect this interactivity. In addition,
this book approaches problem solving using the Eight Discipline Problem Solving Process,
which is required under QS-9000 certification and accepted under TS-16949 specification.
Statistical Thinking
“Statistical thinking is a philosophy of learning based upon the following principles:
• All work occurs in a system of interconnected processes,
• Variation exists in all processes, and
• Understanding and reducing variation are keys to success.”
(Special Publication, ASQC Statistics Division; Spring 1996).
Analyzing this statement in detail provides justification for the problem solving strate-
gies proposed in this course…
1) “Learning” – Statistical methods enable us to learn more about the factors that
control the quality of our process. The process knowledge gained using sound
problem solving strategies may directly lead to technological improvements that
provide significant quality and cost benefits.
2) “Work occurs in a system of interconnected processes” – In die casting, “in-
teractive processes” may be a more appropriate term to use than “interconnected
processes.” With that adjustment considered, the statement leads one to realize
that the process and design factors that are required to make quality product must
work in symphony together. In die casting, there is a significant interactive effect
of all processes and design characteristics that come together at one instant to
enable proper filling of the die within 0.005 to 0.200 seconds.
3) “Variation exists in all processes” – Variation occurs naturally in all things to some
degree. The flattest, most perfect cut on the face of a diamond would appear very
rough under our most highly powered microscopes, but this does not mean that the
diamond is not of high value. It only illustrates that variation will always occur, and
understanding the impact of the variation on product quality is what is important.
4) “Understanding and reducing variation are keys to success” – We must first
understand what types of variation are important. Then we can find ways to reduce
these significant process variations, which will likely lead to a decreased production
of sub-standard product and thus an increased production of high-quality product.
Normal Distribution
The normal distribution is the probability density function that describes most
types of variation in any process. A normal distribution appears as shown next:
Within the normal distribution curve, 68 percent of the variation falls within –1 to +1
standard deviation from the average; 95 percent of the variation falls within –2 to +2
standard deviations from the average; 99.7 percent of the variation falls within –3 to +3
standard deviations from the average; and 99.994 percent of the variation falls within –4
to +4 standard deviations from the average. Since ±3 standard deviations accounts for all
but 0.3 percent of the process, we typically use 3 standard deviations to define the
normal variation of the process.
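The coverage percentages above follow directly from the normal cumulative distribution function, and can be checked with a short calculation. The sketch below is illustrative only and uses just the Python standard library:

```python
import math

def coverage_percent(k):
    """Percent of a normal distribution within +/- k standard deviations."""
    return 100 * math.erf(k / math.sqrt(2))

for k in range(1, 5):
    print(k, round(coverage_percent(k), 3))
# k = 1, 2, 3, 4 give approximately 68.3, 95.4, 99.73, and 99.994 percent,
# matching the figures quoted in the text.
```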
Three important die cast problem solving pitfalls may be understood from the
above example:
1) Several factors frequently affect a defect because of the interactive nature of the
die casting process.
2) When several factors contribute to a defect’s variation, significantly improving one
factor may not seem to have a significant effect on the defect.
3) Reducing the variation of several parameters is often the only way to assure
significant improvement.
These pitfalls tend to result in unsuccessful problem solving efforts in die casting. Be-
coming aware of these potential pitfalls will make you a better problem solver.
There are several approaches to problem solving that have been offered to the manu-
facturing community. All of these approaches have some commonalities, which are:
1. Describe the problem.
2. Determine and test potential root causes of the problem.
3. Determine potential corrective actions.
4. Implement corrective action(s).
A good problem solving method should include the above steps.
Dorian Shainin
Dorian Shainin is a problem-solving “Guru” who developed his statistical engineering
methodology while working at Hughes Aircraft. His methods are still being used signifi-
cantly within General Motors and some of his methods will be applied in this text.
Taguchi
Taguchi, a Japanese problem-solving “Guru”, developed a non-interactive design of
experiments methodology. His methods have been successfully used in non-interactive
manufacturing processes for many years. However, his methodologies are of little use
in the die casting industry due to the interactive nature of die casting.
This Book
The uniqueness of this book lies in that it is specific to the interactive nature of die casting.
We consider a variety of existing problem solving tools such as 8D, Shainin, Taguchi, and
Box, Hunter, and Hunter. The specific problem solving tools selected are those that work
well in reducing interactive die casting problems. The new problem solving format, out-
lined below, is linked to the 8D steps for ease of application and understanding.
In general, teams have not been well utilized in the problem solving process in die cast-
ing, and therefore, are an important topic of discussion in this text. Teams are an
effective and justifiable means of solving problems, and if employee involvement is a
goal, team oriented problem solving is the answer.
Benefit of Knowledge
The most significant benefit of utilizing teams in the problem solving process is the
cumulative knowledge that teams are capable of providing. By having a well-rounded
team composed of people with the required product, process, and customer knowledge,
problem solving is most likely to be effective.
Brainstorming
Brainstorming is a team problem solving tool that is critical to success in problem solv-
ing. The goal of brainstorming is to attain the most creative and effective ideas toward
the resolution of a problem. To get these types of ideas, brainstorming should be con-
ducted in the following manner:
1) Discussions should be enthusiastic and people should be commended openly
for all ideas. Ideas that appear silly or outrageous may not seem valuable at
first, however, these types of ideas may lead to creative solutions. The facilitator
should promote these approaches.
2) Allow only one person to speak at a time. The facilitator should control who
speaks so that all have the opportunity to express their thoughts.
3) Do not ever criticize, evaluate, or compare one idea to another. Just write them all
down. The facilitator should write down ideas and assure that the group follows
this rule.
4) The facilitator should make sure the discussion is limited to the problem described.
Storyboard Brainstorming
Storyboard Brainstorming is a team tool to be used for brainstorming when:
• The sequence of events in the process is not necessarily clear.
• The point at which defects occur is not clear.
• There are members within the group who are not likely to become involved or there
are members who will tend to control the problem solving process.
To use Storyboard Brainstorming follow these steps:
1) For each process step or operation, have each team member note potential
causes of the defect that could originate in each step. Use Post-it™ notes.
2) Put these potential causes of the defect on a board or wall in sequence of the process.
3) Follow brainstorming rules otherwise.
The benefits of storyboard brainstorming are as follows:
• It enables structured brainstorming for defect resolution of multiple operation processes.
• It enables the group to learn about the entire process.
• It minimizes the impact of dominant group members and promotes the involvement
of typically uninvolved employees.
Definition Statement
The definition statement is critical to beginning an effective problem solving project. Once
a problem solving team is created, coming up with a commonly understood definition
statement is the first task for the team. Once developed, the definition statement should
assure that the team directly attacks the cause of the problem defined in the definition
statement. Philosophically speaking, the statement, “A problem well described is a prob-
lem half solved,” describes the importance and benefit of a good definition statement.
The definition statement should have three separate components. It should begin
with the part number or description. The statement should then clearly define the defect
or symptom. Once the problem solving project is complete, the statement will conclude
with the third part, which is the proven cause of the defect or symptom. This sounds
very simple, however, it is critical to proper team-based problem solving.
Iso-Plot
An Iso-plot is a tool to compare: 1) two different types of gages, 2) two identical gages,
3) two people using a gage, or 4) any type of comparison of gages. The Iso-plot is a
Dorian Shainin tool that is shown in the example below:
              Day 1                              Day 2
Part  Insp. 1  Insp. 2  Avg  Range    Insp. 1  Insp. 2  Avg  Range    Grand Avg.
  1      1        2     1.5    1         1        1      1     0         1.25
  2      2        2     2      0         2        2      2     0         2
  3      3        3     3      0         4        3     3.5    1         3.25
  4      2        2     2      0         2        2      2     0         2
  5      4        5     4.5    1         4        5     4.5    1         4.5
% Reproducibility = 100 x Reproducibility / Total Variation
                  = 100 x 0.447 / 7.030
                  = 6.4%

% R&R = 100 x R&R / Total Variation
      = 26.8%
The above would be considered an acceptable gage since the Gage R & R % is less
than 30%. Microsoft Excel forms for completing Gage R & R studies are included with
the NADCA information available on-line and a Gage R & R calculation template is dis-
played in Appendix 2.
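Both percentages above follow one pattern: a variation component divided by the total variation. As a minimal illustrative sketch (not part of the NADCA spreadsheets), using the values from the example:

```python
def percent_of_total(component, total_variation):
    """Express a gage study variation component as a percent of total variation."""
    return 100 * component / total_variation

# Values from the Iso-plot example above
print(round(percent_of_total(0.447, 7.030), 1))  # % Reproducibility: 6.4

pct_rr = 26.8  # % R&R from the example above
print(pct_rr < 30)  # True: the gage is acceptable under the < 30% rule
```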
To assure that defective product does not get to the customer, the 8D problem solv-
ing methodology requires that some sort of defect containment be implemented. This
defect containment is only an interim measure, which typically requires extra inspection
activity, to find product defects within the die cast facility. Since containment is required
under the Eight Discipline process and is costly, it should motivate the die caster to rap-
idly resolve the defect problem. The extra inspection may also provide an opportunity
to gather quality data that is difficult to gather without inspection. This data may be very
useful during the problem solving process.
Step 4 in 8D problem solving, which is “Determine and Verify Root Causes,” is the meat
of any problem solving effort. It is broken down into four sub-steps: 1) identify potential
causes, 2) select likely causes, 3) test to determine if the potential root cause(s) is a
root cause, and 4) select potential solutions to eliminate the root cause(s).
To analyze the defect tally sheet you must create a proportion chart (p-chart) for each
defect. You may then easily understand the average defect rate and variability for each
defect studied.
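The p-chart arithmetic itself is simple: the centerline is the average defect proportion, and the control limits sit three subgroup standard deviations on either side. The sketch below is illustrative; the tally data is hypothetical:

```python
import math

# Hypothetical tally: defective castings found in six daily samples of 100
defects = [4, 6, 5, 3, 7, 5]
n = 100  # subgroup size

p_bar = sum(defects) / (n * len(defects))   # average defect rate (centerline)
sigma = math.sqrt(p_bar * (1 - p_bar) / n)  # standard deviation of a subgroup proportion
ucl = p_bar + 3 * sigma                     # upper control limit
lcl = max(0.0, p_bar - 3 * sigma)           # lower control limit, floored at zero

print(p_bar, lcl, ucl)
```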
Multi-Vari Study
The Multi-Vari study, developed by Dorian Shainin, is the most valuable clue generation
tool for chronic cause defect problems. The purpose of the study is to determine the
“Time Mode of Variation” of the product defect problem. The “Time Mode of Variation” is
how a defect varies over time. For example, if the Multi-Vari shows that the defect var-
ies from piece to piece, then the clue is to look for causes that vary from piece to piece.
If the defect would vary from shift to shift, then the clue is that something is changing
from shift to shift. In summary, the reason that the Multi-Vari study is so valuable is that
it can significantly narrow the number of potential causes of a product defect problem.
The procedure for completing a Multi-Vari study is as follows:
1) Determine the number of castings to be collected each hour and each shift, as well
as the length of the study in days. Make sure castings are collected during all shifts.
Difficult to resolve or expensive defects:
4 pieces/hour, 4 hours/shift, and 4 days/study
Moderately easy to resolve or moderately expensive defects:
3 pieces/hour, 3 hours/shift, and 3 days/study
2) Collect castings under normal process conditions. Assure that operators on all
shifts do not change the way that they run the process during the study. This may
be done by either collecting castings without their knowledge, or preferably by
involving them in the problem solving process so they appreciate the purpose of
the study. Assure that castings are collected in consecutive order over consecu-
tive hours, shifts and days.
3) Process the castings to the point where the defect is typically found.
4) Rank the castings using your defect ranking system.
5) Chart the results with the defect rank on the Y-axis and the casting collection
order on the X-axis.
6) Evaluate the results visually or statistically.
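One simple way to carry out step 6 numerically is to compare the average spread of the defect ranks within each time mode; the mode with the largest spread is the dominant one. The sketch below is illustrative only, with hypothetical ranks arranged as day → shift → hour → pieces:

```python
from statistics import mean

# Hypothetical ranks: data[day][shift][hour] = consecutive piece ranks
data = [
    [[[1, 5], [2, 6]],   # day 1: shift 1 (hour 1, hour 2)
     [[1, 4], [2, 5]]],  #        shift 2
    [[[2, 6], [1, 5]],   # day 2
     [[3, 6], [2, 5]]],
]

def rng(values):
    """Range (max minus min) of a list of values."""
    return max(values) - min(values)

# Average range within each time mode
piece_to_piece = mean(rng(h) for d in data for s in d for h in s)
hour_to_hour = mean(rng([mean(h) for h in s]) for d in data for s in d)
shift_to_shift = mean(rng([mean([x for h in s for x in h]) for s in d]) for d in data)
day_to_day = rng([mean([x for s in d for h in s for x in h]) for d in data])

print(piece_to_piece, hour_to_hour, shift_to_shift, day_to_day)
```

With these hypothetical ranks, piece-to-piece variation dominates, so the clue would be to look for causes that vary from piece to piece.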
The following are examples of the results of two Multi-Vari studies:
The first example (on the cover of this book) shows a complete Multi-Vari chart. To
analyze this chart, you must compare the amount of variation within each potential time
mode of variation (i.e., piece-to-piece, hour-to-hour, shift-to-shift, and day-to-day). On
this chart, it appears that piece-to-piece variation is most significant, however shift-to-
shift variation also appears to be somewhat significant.
In contrast, the charts in the second example show a Multi-Vari study broken down
into individual piece-to-piece, hour-to-hour, shift-to-shift, and day-to-day charts.
In the example below, questions five and seven were not answered fully, so they did not
yield differences (D5 and D7) or theories. Seven theories were brainstormed. When
they were compared to the differences, a score was developed for each theory. From
these scores, Theory 2 (T2) would be the most significant theory to try first. The other the-
ories would be tested in descending score order until the defect is satisfactorily resolved.
Likely causes for chronic cause defects also need to be determined. Brainstorming
using the clues that we have generated will accomplish this task. If you have done a
complete job in clue generation, the brainstorming should be fairly easy.
First, the team leader or facilitator should distribute or display all the information found
during clue generation tools. Using the clues, the team members should brainstorm pos-
sible causes. The possible causes are typically process variables that explain the chronic
variation. The leader or facilitator should document all the ideas from brainstorming.
The next phase of chronic cause problem resolution will be to conduct a designed ex-
periment. Designed experiments containing more than five process variables are cum-
bersome in the die casting process; therefore, we must reduce the number of potential
causes to five or less at this point. To reduce the number of process variables below
five, the problem solving team should first collectively discuss the importance of each
process variable in its relation to the defect. It is important to consider the clues that
your team has generated during this discussion. If the team cannot reduce the number
of process variables below five during this process, a correlation analysis must be used.
Correlation Analysis
A correlation analysis is rarely used, but it is a valuable tool in problem solving efforts.
The only time when this analysis must be used is when more than five variables re-
main after brainstorming. The goal of using a correlation is to find which variables vary
with the defect, and eliminate the variables from consideration that do not vary with the
defect. To complete a correlation study, collect a small sample of castings over the time
mode(s) of variation established in the multi-vari study along with the process variable
information. Then, mathematically calculate your correlation using the Excel correlation
spreadsheet provided within the NADCA on-line files.
To evaluate a correlation study, compare the calculated correlation values for all mea-
surable process variables to the defect rank level. The value for a correlation will be
between -1 and 1. If the value is a positive number, it means that you have positive cor-
relation. A positive correlation indicates that when the defect level is lower, the process
variable level is lower, and when the defect level is higher, the process variable level is
higher. If the correlation value is negative, it means that when the defect is lower, the
process variable level is higher and vice-versa. The closer the correlation value is to
1 or -1, the stronger the correlation. A correlation value close to zero indicates a weak
correlation. Therefore, the process variables with correlation values closer to zero are
less important in your problem solving effort. You should eliminate process variables
from consideration until you have five or fewer variables remaining. The following ex-
ample illustrates how to interpret a correlation:
Correlation Analysis

Porosity  Fast Shot  Impact    Vacuum  Intensifier
Level     Velocity   Pressure  Time    Rise Time
   6         84        2120      4         3
   5         84        2120      5         4
   1         80        2150      5         1
   4         83        2130      5         2
   7         85        2100      1         3
   6         85        2110      2         1
   2         81        2140      5         0
   0         80        2150      1         0
   3         81        2150      2         1
   8         85        2110      5         5
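The correlation values for the data above can be reproduced with the standard Pearson formula. The sketch below (illustrative, standard library only) compares porosity against two of the columns:

```python
import math

# Columns from the correlation analysis table above
porosity = [6, 5, 1, 4, 7, 6, 2, 0, 3, 8]
fast_shot = [84, 84, 80, 83, 85, 85, 81, 80, 81, 85]
vacuum_time = [4, 5, 5, 5, 1, 2, 5, 1, 2, 5]

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

print(round(correlation(porosity, fast_shot), 2))   # strong positive correlation
print(round(correlation(porosity, vacuum_time), 2)) # near zero: weak correlation
```

Fast shot velocity correlates strongly with porosity and would be kept; vacuum time correlates weakly and would be a candidate for elimination.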
An interaction is when two or more process factors work together to affect a product
defect. Interactions are independent of the individual effect of each process parameter
involved. For example, assume we have two process variables, fast shot speed and die
temperature, that both significantly affect porosity on a given casting. The parameters
both affect porosity independently. This is not an interaction because they do not work
together to affect porosity.
When we test process factors in an experiment, we will test each factor at two or
three levels. Assuming a two-level experiment is being done, an interaction is when the
defect result differs when the two process factors are at the same level versus when
they are at different levels. We will learn more about interactions in the examples ex-
periments in Chapter 12 (Problem Solving Examples).
Examples of interactions:
The following IS NOT an interaction:
Blocking
Blocking is a powerful tool in designed experimentation. Its value lies in that it reduces un-
explained variation that could be caused by known process variables that are not included
in the designed experiment. Thus, when conducting an experiment, blocking allows the
effect of the experimental variables to be more apparent. When statistically evaluating your
experiment, proper blocking will raise the statistical confidence of your experiment.
The use of blocking is necessary because some process variables may vary with the
time mode of variation and are not possible to control in the long term due to cost or
lack of technology. Examples of such process variables in die casting may include die
water temperature, plant ambient temperature, air pressure, or operator variation. Be-
fore conducting an experiment, any of the selected process variables that are not pos-
sible to control in the long term should be considered blocked variables.
The blocked variable(s) must be constant during the experiment. Blocking is accom-
plished by holding the blocked variable(s) at a nominal value during the experiment,
or by waiting for the nominal value to occur before collecting castings. For example,
assume you have cycle time variation because you have a manually operated machine.
To block for cycle time, you should use a stopwatch to carefully time the machine to
make sure that the cycle rate is the same during the experiment. If you wanted to block
for fast shot velocity, you might wait until you measure the nominal value on a process
monitoring system to collect castings for the experiment.
Randomization
Even after clue generation and several effective team meetings, you will likely still have
process variables that the team did not consider that cause some variation in your de-
fect. To minimize the effect of these factors during an experiment, we must randomize
the order of the experiment.
In order to realize the importance of randomization, we must consider the situation where
we do not randomize. For example, assume we complete a two-factor, two-level experi-
ment. Factor 1 is Fast Shot Velocity and Factor 2 is Intensification Pressure. Fast Shot
Velocity will be tested at 90 and 100 inches per second and Intensification Pressure will be
tested at 1200 and 1500 pounds per square inch. The experiment would appear as follows:
Treatment Fast Shot Velocity Intensification Pressure
1 90 IPS 1200 PSI
2 100 IPS 1200 PSI
3 90 IPS 1500 PSI
4 100 IPS 1500 PSI
Assume that an air line blows on a nearby trim press sometime between treatments
2 and 3 of the experiment, and the machine's air pressure drops from 100 PSI to 25 PSI
for the duration of your experiment. This loss of air pressure causes your die spray to
no longer atomize, and therefore your die temperature changes significantly. You then
analyze the results of your experiment. The castings show that you have significant po-
rosity on the castings from treatments 3 and 4, but treatments 1 and 2 are much better.
Is this porosity difference due to the Intensification Pressure or is it due to the air pres-
sure drop? You will never know because you did not randomize the experiment.
To effectively randomize the experiment, the order of your experimentation should
not have any patterns that would allow other variation to have a detrimental effect on
the experiment. The best way to randomize is to use the random number tables sup-
plied in each Excel experiment spreadsheet file. The random number table will change
every time that you use the software. Therefore, to use the table, begin at the upper left
corner. The first number is the first treatment that should be conducted. The number to
the right should be the second treatment conducted and so on until all treatments in the
experiment are accounted for. If you have a duplicate treatment number, ignore it. If
you complete the first row without completing the experiment, go to the second row.
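If a random number table is not at hand, the same randomization can be produced in software. The sketch below is illustrative, using the treatments from the two-factor example above:

```python
import random

# Treatments from the two-factor, two-level example above
treatments = {
    1: ("90 IPS", "1200 PSI"),
    2: ("100 IPS", "1200 PSI"),
    3: ("90 IPS", "1500 PSI"),
    4: ("100 IPS", "1500 PSI"),
}

order = list(treatments)
random.shuffle(order)  # run the treatments in this random order

for t in order:
    print(t, treatments[t])
```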
Confounding
In fractional factorial designed experiments, some combinations of factors are not
tested. When these experiments are used, the intent is to assume that higher-level
interactions, or sometimes all interactions, do not exist. However, when a fractional
factorial experiment is conducted, these interactions may still occur, and when they do,
the high-level interactions are confounded with main effects or lower-level interactions.
For example, when a 2^(5-1) fractional factorial experiment is evaluated, the result of a
main effect is confounded with a 4-factor interaction. In this case, you cannot tell if the
calculated result is due to the main effect, due
to the 4-factor interaction, or due to some combination of the two effects. Because the
likelihood of the 4-factor interaction being significant is low, you assume that it does not
exist. This assumption is the reason that fractional factorial experiments are statistically
risky. However, the benefit of using these experiments often outweighs the risk.
Experiment Designs
In the past, the statistical community referred to the field of statistical problem solving
as design of experiments, or DOE. This term was used because experimenters toiled
over approaches to designing experiments for problem resolution. The act of design-
ing the experiment is not an issue in die casting due to the fact that we are limited in
the number of effective experimental designs available. This limitation stems from the
interactive nature of die casting, as most experimental designs are for non-interactive
problems. We are further limited in options for effective experimental designs because
experiments with greater than five factors and greater than three levels are too large
and time consuming to use. The experiment selection chart below shows the 12 choic-
es of experiments for five or less factors and for two or three levels. To determine which
experiment to use, find the experiments that have the proper number of factors. Then
choose the number of levels desired and determine if confounding will be a concern.
Replications
More than one casting per treatment must be collected in order to improve the power of
an experiment. Choosing the appropriate number of replications will increase the likeli-
hood that other types of random variation will not lower the confidence of the experi-
mental results, assuming that the proper variables were initially selected for the experi-
ment. A rough guideline for choosing the number of replications is shown below:
Number of     Cost of Problem    Suggested Number
Treatments    per Year           of Replications
2 to 4        < $10,000                  5
2 to 4        $10k to $100k              8
2 to 4        > $100k                   12
8 to 12       < $10,000                  8
8 to 12       $10k to $100k             12
8 to 12       > $100k                   15
16 to 18      < $10,000                 10
16 to 18      $10k to $100k             15
16 to 18      > $100k                   20
> 20          < $10,000                 15
> 20          $10k to $100k             20
> 20          > $100k                   25
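The guideline table can be encoded as a simple lookup. The sketch below is illustrative; note that the table leaves gaps between its treatment ranges (e.g., 5 to 7), so mapping those counts to the nearest lower band is an assumption made here:

```python
def suggested_replications(treatments, annual_cost):
    """Suggested replications per treatment, from the guideline table above.

    Treatment counts falling between the table's listed ranges are mapped to
    the nearest lower band (an assumption; the table itself leaves gaps).
    """
    if treatments <= 4:
        row = (5, 8, 12)
    elif treatments <= 12:
        row = (8, 12, 15)
    elif treatments <= 18:
        row = (10, 15, 20)
    else:
        row = (15, 20, 25)
    if annual_cost < 10_000:
        return row[0]
    if annual_cost <= 100_000:
        return row[1]
    return row[2]

print(suggested_replications(4, 50_000))    # 8
print(suggested_replications(16, 200_000))  # 20
```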
Conducting an Experiment
To complete an experiment, your team should follow these steps:
1) Determine the experiment from the experiment selection chart.
2) Determine factor levels based upon historical data.
3) Decide how the blocks will be applied.
4) Randomize the experiment order.
5) Determine the number of replications for each treatment.
6) Determine a start time for the experiment.
7) Complete the experiment utilizing team members to:
a. collect and mark castings.
b. maintain and monitor blocks.
c. collect process monitoring data.
d. manipulate process factors.
e. document all events during the experiment.
8) If necessary, process the experimental castings to the point where the defect is evident.
9) Measure the defect level using the defect ranking system.
10) Enter the casting defect results into the proper experiment spreadsheet to determine:
a. the best and worst process combinations.
b. the most significant process factors to control.
c. the statistical confidence of your experiment.
Using Spreadsheets
Included with this course is on-line access that contains 12 Microsoft® Excel spread-
sheets for experiment calculations and documentation. Go to diecasting.org/???? to get
the course files. Directions for use and application are provided within the spreadsheet.
When using a spreadsheet, always save a copy of the file first and use this copy as your
working spreadsheet. Examples of the application of the spreadsheets are in chapter 12.
Calculating Effects
The spreadsheets noted above automatically calculate all effects for any of the experi-
ments in the selection chart. However, if you are evaluating an experiment manually,
use the following formulas:
Main Effects:            Effect = (Σ 2's – Σ 1's) / (# of Treatments / 2)

Two-Factor Interactions: Effect = (Σ Same – Σ Different) / (# of Treatments / 2)
For example, if you are calculating the effect for a two-factor interaction, first calcu-
late the sum of the results of the experiment when the levels of the two factors are the
same. Then subtract the sum of the results of the experiment when the levels of the two
factors are different. Divide this result by one-half of the number of treatments.
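These formulas can be checked against a small two-factor, two-level example. The response values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical 2-factor, 2-level results: (level of A, level of B) -> response
results = {
    (1, 1): 10,
    (2, 1): 14,
    (1, 2): 12,
    (2, 2): 20,
}
half = len(results) / 2  # (# of treatments / 2)

# Main effect of A: (sum at level 2 - sum at level 1) / (treatments / 2)
effect_a = (sum(y for (a, b), y in results.items() if a == 2)
            - sum(y for (a, b), y in results.items() if a == 1)) / half

# Main effect of B, computed the same way
effect_b = (sum(y for (a, b), y in results.items() if b == 2)
            - sum(y for (a, b), y in results.items() if b == 1)) / half

# Interaction: (sum where levels are the same - sum where different) / (treatments / 2)
effect_ab = (sum(y for (a, b), y in results.items() if a == b)
             - sum(y for (a, b), y in results.items() if a != b)) / half

print(effect_a, effect_b, effect_ab)  # 6.0 4.0 2.0
```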
Determining Significance
The spreadsheets noted above automatically calculate significance for any of the ex-
periments in the selection chart. However, if you are evaluating an experiment manually
follow these steps:
1) Summarize the results of all the effects.
2) Take the absolute value of each of the effects.
3) Place the effects in descending order.
(The largest absolute value is the most significant effect and so on.)
Process Optimization
When defects exist, even in the best process found by using the problem solving strat-
egies to this point, process optimization may be needed to further minimize defects.
Optimization is often referred to as “response surface optimization” by statisticians. This
reference roots from the way the process is graphed. For example, assume we have
two interacting factors that have been proven critical to a defect. These factors are
Lube Amount and Metal Temperature. We complete a two-factor, two-level experiment
and find the result below:
When reviewing the graph, imagine that the casting rank is represented by height.
To find the peak height of the response surface, move in the direction of highest cast-
ing rank 4 (upper left on the graph). To continue in search of the peak, we must test the
three other points to the upper left.
Once we test these three process combinations, we find that the best direction to find
the peak height is now to the lower left. We then test another three process combina-
tions to the lower left. After this test we find that all points have a casting rank of 4.5. If
all points are equal, then the center is likely the peak, or the optimum process. Next we
test the center point of the box and find that the average casting rank is 4.75. We have
now found the optimum process for these two factors, assuming that a sound problem
solving process has been followed to this point. This may be the best possible process.
To complete the ANOVA, the Sum of Squares Table must first be completed. This
may be done by first finding the Grand Average of the experiment data, then subtracting
the average from each data point in the experiment and squaring the result. These
results are referred to as the "difference squares." Add up all the difference squares to
find the Grand Sum of Squares, which indicates the total amount of variation that
occurred in the experiment.
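The sum-of-squares arithmetic described above, as an illustrative sketch with hypothetical defect-rank data:

```python
from statistics import mean

# Hypothetical experiment data (defect ranks)
data = [3, 4, 2, 5, 4, 3, 5, 2]

grand_average = mean(data)
difference_squares = [(x - grand_average) ** 2 for x in data]
grand_sum_of_squares = sum(difference_squares)  # total variation in the experiment

print(grand_average, grand_sum_of_squares)  # 3.5 10.0
```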
The ANOVA table may then be created by using the above data.
ANOVA Table
Effect Sum of Squares Deg. of Freedom Mean Square F-value Confidence
Int F.T./G.V. 1.2656 1 1.2656 1.661 78.0%
Gate Velocity 0.7656 1 0.7656 1.005 66.6%
Residual 9.9063 13 0.7620
Percent Confidence*
= (1 – FDIST(F-value, Degrees of Freedom of Effect, Degrees of Freedom of Residual)) x 100
*Note: FDIST is an Excel worksheet function; a spreadsheet or a statistics package
is needed to complete this calculation.
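The spreadsheet formula can also be reproduced without Excel. The sketch below computes the F-distribution CDF through the regularized incomplete beta function (a standard identity), using only the Python standard library, and checks it against the F-values and confidences in the ANOVA table above.

```python
from math import exp, lgamma, log

def _betacf(a, b, x, max_iter=200, eps=3e-12):
    # Continued fraction for the incomplete beta function
    # (modified Lentz's method).
    tiny = 1e-30
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    d = 1.0 / (d if abs(d) >= tiny else tiny)
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        d = 1.0 / (d if abs(d) >= tiny else tiny)
        c = 1.0 + aa / c
        c = c if abs(c) >= tiny else tiny
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        d = 1.0 / (d if abs(d) >= tiny else tiny)
        c = 1.0 + aa / c
        c = c if abs(c) >= tiny else tiny
        h *= d * c
        if abs(d * c - 1.0) < eps:
            break
    return h

def _reg_inc_beta(a, b, x):
    # Regularized incomplete beta function I_x(a, b).
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    bt = exp(lgamma(a + b) - lgamma(a) - lgamma(b)
             + a * log(x) + b * log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return bt * _betacf(a, b, x) / a
    return 1.0 - bt * _betacf(b, a, 1.0 - x) / b

def percent_confidence(f_value, df_effect, df_residual):
    # F-distribution CDF, i.e. (1 - FDIST(F, df1, df2)) x 100 in Excel terms.
    x = df_effect * f_value / (df_effect * f_value + df_residual)
    return 100.0 * _reg_inc_beta(df_effect / 2.0, df_residual / 2.0, x)

# Check against the ANOVA table above:
print(percent_confidence(1.661, 1, 13))   # compare with the table's 78.0%
print(percent_confidence(1.005, 1, 13))   # compare with the table's 66.6%
```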
The F-value is used for the F-test, which is a statistical test. The F-test compares the
experimentally controlled variation to the remaining random uncontrolled variation. The
more experimental control over the process, the higher the resulting confidence.
The confidence column shows the confidence from the F-test. If the confidence is 95%,
there is a 5% chance that the results of the experiment are due to random variation. Typi-
cally, you want to have at least a 75% confidence for each effect. If you are concerned
about the confidence of your experiment, you should complete the B vs. C test below.
The B vs. C Test is often more powerful than ANOVA in determining the confidence be-
cause you are directly comparing two processes rather than several processes (treatments).
The B vs. C Test works well in the die casting industry because it is based on a one-
tailed distribution. Die casting defects always have a one-tailed distribution. Porosity,
surface defects, mis-runs, heat checks, blisters, etc., are all one-tailed distributions
because they cannot improve beyond perfect. The goal of this test is to show that the
B, or better process, has a tighter distribution with a shorter tail than the C, or current
(worse) process, that has a wider distribution with a longer tail.
To complete a B vs. C Test, you must collect castings during the best and worst pro-
cesses. You should collect three castings minimum and up to 32 castings for both the
best and worst processes. To prove that the best process is truly better than the worst
process, you must first arrange all of the test castings in order from best to worst. Then
you must evaluate the end count of the worst process. The confidence for the confirma-
tion run may be determined by using the following Confidence Table:
B vs. C Confidence Table

Confidence   Number of Castings   Minimum End Count
99.9%        14 – 16              7
             18 – 28              8
             30 – 62              9
             >64                  10
99%          10 – 12              5
             14 – 38              6
             >40                  7
95%          6                    3
             8 – 30               4
             32 – 36              5
             >38                  6
90%          8 – 16               3
             >18                  4
If we consider the previous example used in the calculation of ANOVA, trial 3 has the
worst casting results and trial 2 has the best results. Using the B versus C analysis,
three out of four of the trial 3 castings are worse than all of the trial 2 castings.
Therefore, since a total of eight castings are taken from trials 2 and 3 and the end
count is 3, the statistical confidence is 90%. This is much higher than the confidence
calculation (78%) from the ANOVA calculation.
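The end-count evaluation in this example takes only a few lines. Following the text's convention, the end count is the number of current (worse) process castings that rank worse than every best-process casting; the ranks below are hypothetical stand-ins for the trial data, with a higher rank meaning a worse casting.

```python
def end_count(best_process, current_process):
    """Count the current (worse) process castings that rank worse
    than every best-process casting (higher rank = worse here)."""
    worst_of_best = max(best_process)
    return sum(1 for rank in current_process if rank > worst_of_best)

# Hypothetical ranks for four castings from each process:
b = [2, 2, 3, 3]          # best ("B") process
c = [4, 5, 4, 3]          # current/worse ("C") process
print(end_count(b, c))    # 3 -> with 8 castings total, the table gives 90%
```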
Statistical Documentation
Documentation is critical to maintaining permanent corrective actions. When complet-
ing an involved problem solving project, documenting the defect ranking system and the
gains received through problem solving will enable you to verify the corrective actions
at any time. If the defect problem occurs again, the best tool to re-evaluate the process
is the Multi-Vari. Completing another Multi-Vari will allow you to evaluate the current
status of your process and provide a foundation for further problem solving if needed.
The team that has helped resolve the defect problem by this point has exemplified the
willingness to work and think above and beyond their typical responsibilities. Because
your company needs to promote involvement, congratulating the team that is involved in
solving a defect problem is critical. If you want problem solving efforts to be respected
in your company, finding ways to promote this activity is necessary.
Thus, the spreadsheet shows that the best conditions are die temperature at Level 1
and gate velocity at Level 2. This result agrees with Treatment 3, where the best result
was found. Also, the spreadsheet shows that the two-factor interaction between die
temperature and gate velocity is the most significant effect. The main effect of die tem-
perature is the second most significant effect.
Trial Fast Shot Speed Shot Pressure Metal Temperature Lube Amount Cycle Time
1 1 1 1 1 2
2 2 1 1 1 1
3 1 2 1 1 1
4 2 2 1 1 2
5 1 1 2 1 1
6 2 1 2 1 2
7 1 2 2 1 2
8 2 2 2 1 1
9 1 1 1 2 1
10 2 1 1 2 2
11 1 2 1 2 2
12 2 2 1 2 1
13 1 1 2 2 2
14 2 1 2 2 1
15 1 2 2 2 1
16 2 2 2 2 2
The team completed 16 replications for all 16 treatments. The team then ranked the
castings using their defect ranking system. The results shown in the calculation spread-
sheet were as follows:
Replications 16
Trial Results 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
1 4.3125 4 4 5 5 5 4 3 4 5 5 5 4 4 4 5 3
2 2.25 2 2 3 3 2 3 2 2 2 3 3 2 2 2 1 2
3 4.3125 4 4 5 4 3 4 5 4 4 5 5 4 4 5 5 4
4 1.75 1 2 2 2 3 2 3 1 2 1 2 2 2 1 1 1
5 1.875 2 2 1 2 2 1 3 2 2 2 1 2 1 2 3 2
6 4.5625 5 5 4 5 4 5 4 5 4 5 4 5 4 5 4 5
7 2.3125 1 2 2 2 3 2 2 3 3 1 3 4 2 2 3 2
8 4.6875 5 5 4 5 4 5 5 4 5 5 4 5 5 4 5 5
9 1.1875 2 2 1 1 1 2 1 0 1 2 1 1 1 2 1 0
10 3.0625 2 3 4 3 2 3 4 3 3 3 4 3 2 3 3 4
11 1.625 2 1 2 1 2 2 2 1 2 2 1 1 3 1 2 1
12 3.75 3 4 4 4 5 4 3 4 4 4 2 4 5 4 2 4
13 4 4 5 3 4 4 4 4 4 4 5 4 3 4 5 4 3
14 1.375 1 2 1 1 1 2 2 2 1 1 1 2 2 2 1 0
15 3.5 3 3 3 4 3 3 3 3 4 3 4 3 5 5 4 3
16 1.5 0 3 2 1 2 1 1 1 1 2 1 2 2 1 2 2
The calculated effects and the significance from the spreadsheet were as follows:
Fast Shot Speed Shot Pressure Metal Temperature Lube Amount Cycle Time
Best 2 2 2 1 1
Worst 1 1 1 2 1
Because there are 16 other potential combinations that were not tested, we need to
evaluate the significance and whether the effect is a positive or negative value to be cer-
tain that we have the actual best and worst conditions. To do this, we first look at the ef-
fect with the highest significance. This is the interaction between Factor 2 (Shot Pressure)
and Factor 5 (Cycle Time). This effect is a negative value. A negative value for an inter-
action means that the levels for the factors in the best process must be opposite. That is,
if Factor 2 has a level of one, then Factor 5 must have a level of two, or vice versa. Fur-
thermore, this means that the levels for the factors in the worst process must be the same.
Fast Shot Speed Shot Pressure Metal Temperature Lube Amount Cycle Time
Best 1 2
Best 2 1
Worst 1 1
Worst 2 2
The second significant effect is the main effect of Factor 4, which is Lube Amount.
This effect is a negative value. A negative value for a main effect means that the level
for Lube Amount in the best process must be one. Also, this means that the level for
Lube Amount in the worst process must be two.
Fast Shot Speed Shot Pressure Metal Temperature Lube Amount Cycle Time
Best 1 1 2
Best 2 1 1
Worst 1 2 1
Worst 2 2 2
The third significant effect is the interaction between Factor 1 (Fast Shot Speed) and
Factor 5 (Cycle Time). This effect is a negative value. A negative value for an inter-
action means that the levels of the factors in the best process must be opposite. The
worst process must be the same level.
Fast Shot Speed Shot Pressure Metal Temperature Lube Amount Cycle Time
Best 1 1 1 2
Best 2 2 1 1
Worst 1 1 2 1
Worst 2 2 2 2
The fourth significant effect is the interaction between Factor 3 (Metal Temperature)
and Factor 5 (Cycle Time). This effect is a positive value. A positive value for an inter-
action means that the levels for the factors in the best process must be the same. The
levels in the worst process must be different.
Fast Shot Speed Shot Pressure Metal Temperature Lube Amount Cycle Time
Best 1 1 2 1 2
Best 2 2 1 1 1
Worst 1 1 2 2 1
Worst 2 2 1 2 2
The fifth significant effect is the main effect of Metal Temperature. The sign is positive;
therefore, the best process must be at level 2 and the worst process at level 1.
Fast Shot Speed Shot Pressure Metal Temperature Lube Amount Cycle Time
Best 1 1 2 1 2
Best 2 2 1 1 1
Worst 1 1 2 2 1
Worst 2 2 1 2 2
Or
Fast Shot Speed Shot Pressure Metal Temperature Lube Amount Cycle Time
Best 1 1 2 1 2
Worst 2 2 1 2 2
Notice that both the best and worst processes were not tested in the experiment.
This shows the power of a fractional factorial design: by using interactions, the
experiment can accurately find the best combination of factors without testing every
combination — even when the best and worst combinations themselves were never run.
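The signed effects used in the deductions above can be recomputed directly from the design matrix and the trial averages. The sketch below codes level 1 as −1 and level 2 as +1; an effect is the average response where the coded product is +1 minus the average where it is −1. The design and averages are copied from the trial tables above, and the factor column indices (e.g. Lube Amount = column 3) follow the table's column order.

```python
# Factor levels (1/2) for the 16 trials, from the design table. Columns:
# Fast Shot Speed, Shot Pressure, Metal Temperature, Lube Amount, Cycle Time
design = [
    (1, 1, 1, 1, 2), (2, 1, 1, 1, 1), (1, 2, 1, 1, 1), (2, 2, 1, 1, 2),
    (1, 1, 2, 1, 1), (2, 1, 2, 1, 2), (1, 2, 2, 1, 2), (2, 2, 2, 1, 1),
    (1, 1, 1, 2, 1), (2, 1, 1, 2, 2), (1, 2, 1, 2, 2), (2, 2, 1, 2, 1),
    (1, 1, 2, 2, 2), (2, 1, 2, 2, 1), (1, 2, 2, 2, 1), (2, 2, 2, 2, 2),
]
averages = [4.3125, 2.25, 4.3125, 1.75, 1.875, 4.5625, 2.3125, 4.6875,
            1.1875, 3.0625, 1.625, 3.75, 4.0, 1.375, 3.5, 1.5]

def coded(level):
    return 1 if level == 2 else -1

def effect(*factors):
    """Main effect (one factor index) or interaction (several indices):
    average response where the coded product is +1, minus where it is -1."""
    plus, minus = [], []
    for trial, y in zip(design, averages):
        sign = 1
        for f in factors:
            sign *= coded(trial[f])
        (plus if sign > 0 else minus).append(y)
    return sum(plus) / len(plus) - sum(minus) / len(minus)

LUBE, PRESSURE, CYCLE = 3, 1, 4
print(effect(LUBE))             # negative: best Lube Amount is level one
print(effect(PRESSURE, CYCLE))  # negative: best levels must be opposite
```

The Lube Amount main effect and the Shot Pressure × Cycle Time interaction both come out negative, matching the signs used in the best/worst deductions above.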
The levels for each factor were chosen using historical data. The levels were:
Lube Amount Shot Pressure Shot Speed Nozzle Temperature Metal Temperature
Level 1 0.10 sec 650 psi 5.5 ips 890 deg F 780 deg F
Level 2 0.12 sec 750 psi 6.5 ips 920 deg F 800 deg F
The team completed the experiment as described above and then ranked the cast-
ings using their defect ranking system. The results as shown in the calculation
spreadsheet were as follows:
Trial Results 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1 4.47 2 7 7 4 8 7 0 2 2 1 5 8 5 7 2
2 5.07 8 1 1 5 0 0 10 8 8 9 8 5 2 10 1
3 4.40 5 9 0 6 10 0 1 7 7 2 0 0 7 5 7
4 6.33 6 2 10 5 0 10 8 5 8 7 10 2 4 8 10
5 4.73 4 7 0 5 8 7 1 2 8 2 8 8 6 0 5
6 4.60 4 1 10 3 0 4 9 4 2 6 10 2 2 2 10
7 4.60 3 9 0 4 10 7 1 10 10 2 1 1 7 2 2
8 4.67 6 2 10 4 1 2 9 9 1 7 4 7 3 1 4
9 2.67 2 8 2 3 10 3 0 0 0 1 0 1 6 2 2
10 5.40 5 1 9 3 0 8 8 6 10 6 10 5 2 8 0
11 4.87 3 8 2 5 10 2 1 6 6 2 2 8 6 2 10
12 5.07 6 2 10 5 0 8 8 4 6 7 8 2 4 6 0
13 5.40 4 8 0 5 10 8 2 3 10 3 10 4 7 2 5
14 5.47 5 1 8 3 0 10 7 10 9 6 10 0 2 1 10
15 5.20 5 9 1 7 10 9 2 1 0 4 9 5 8 0 8
16 5.87 7 2 4 6 0 9 10 7 7 9 10 4 4 9 0
The calculated effects and the significance from the spreadsheet were as follows:
The best and worst processes tested agree with the significance shown in the sig-
nificance table. This is based upon the following analysis:
Significance Factor(s) Sign Best Worst Agrees with Best/Worst
1 1 + 2 1 Yes
2 3–4 + Same Diff. Yes
3 1–3 – Diff. Same Yes
4 5 + 2 1 Yes
5 3–5 – Diff. Same Yes
*Since Factor 2 has not yet been verified, we go to #6 in significance.
6 2 + 2 1 Yes
D. Identify Alternative Solutions
The experiment results showed that the following process changes should be considered:
1. The Lube Amount main effect is positive. Therefore, the Lube Amount should
be set at level two, which is 0.12 seconds.
2. The Shot Speed and Nozzle Temperature have a positive interaction. There-
fore, both should be set at level two or both should be set at level one.
3. The Shot Speed and Lube Amount have a negative interaction. Therefore,
since Lube Amount is at level two, then Shot Speed should be set at level
one, which is 5.5 inches per second. This means that Nozzle Temperature
should also be at level one, which is 890 degrees Fahrenheit.
4. The Metal Temperature main effect is positive. Therefore, Metal Temperature
should be set to level two or 800 degrees Fahrenheit.
5. The Shot Pressure main effect is positive. Therefore, Shot Pressure should
be set to level two or 750 pounds per square inch.
From the B vs. C evaluation table, the above results show a confidence of 99%.
Therefore, the experiment results were not random variation. However, some de-
fects still exist in the best process.
7. Prevent Recurrence
Long Term Verification
The means of long-term verification in this problem solving example was to evalu-
ate the number of defects at the assembly plant and at the customer’s site. Once
the backlog of product made it through the production system, there were no de-
fects found at the assembly plant or at the customer’s site.
Take Advantage of Lessons Learned
Once this cause and effect relationship was established for wormholes, the team
found that defects similar to wormholes existed on other parts. The knowledge
of the resolution to this defect was then communicated to the persons re-
sponsible for the other parts. This resulted in several die and process design
changes on other products at the facility.
The levels for each factor were chosen using historical data. The levels were:
Dwell Time Hot Oil Temp. Vacuum Time
Level 1 5.00 350.00 1.50
Level 2 5.75 400.00 1.75
Level 3 6.50 450.00 2.00
The experiment was completed and the castings were ranked using the previously
established defect ranking system. The results, as shown in the calculation spread-
sheet, were as follows:
Trial Results 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
1 4.321 4 4 5 5 4 4 3 4 4 5 5 4 4 5 3 4 4 5 5 4 4 4 5 4 5 5 5 4
2 4.357 5 5 4 4 3 5 4 4 5 4 5 5 4 3 4 4 5 5 4 5 4 5 5 4 5 3 5 4
3 4.429 5 5 4 4 5 3 5 5 5 4 4 4 5 4 5 4 3 5 5 4 4 5 4 5 5 4 4 5
4 3.714 3 4 4 3 4 4 5 3 3 4 4 4 4 2 3 4 4 3 4 4 5 3 4 4 3 4 4 4
5 4.286 4 4 4 4 5 5 5 4 4 4 4 5 5 5 3 5 5 5 4 4 4 4 5 3 4 4 4 4
6 3.929 5 4 3 2 5 4 5 4 3 2 5 4 5 4 3 5 5 4 3 2 5 4 5 5 4 3 2 5
7 4.214 4 5 4 5 4 3 4 5 4 5 4 3 5 4 5 4 3 4 5 4 2 5 4 5 4 5 4 5
8 3.821 4 3 4 4 4 5 4 4 2 4 4 3 4 5 4 3 4 4 5 2 4 4 3 5 3 4 4 4
9 4.321 4 3 4 5 4 4 5 5 4 4 5 4 5 3 4 5 3 5 4 4 5 4 5 5 4 5 4 5
10 4.214 5 4 5 3 5 4 5 2 5 4 5 3 5 4 5 4 5 3 5 4 5 2 5 4 5 3 5 4
11 3.964 5 4 3 2 1 5 4 3 2 5 4 3 2 5 4 3 5 4 3 5 4 5 5 5 5 5 5 5
12 4.000 2 3 4 5 3 4 5 3 4 5 3 4 5 3 4 5 3 4 5 3 4 5 4 5 4 5 4 4
13 3.714 5 4 3 2 5 4 3 5 4 3 5 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 4 4
14 3.536 1 2 3 4 5 2 3 4 5 2 3 4 5 3 4 5 3 4 4 3 4 3 4 4 4 4 3 4
15 4.250 5 4 3 2 5 4 3 5 4 3 5 4 3 5 4 3 5 4 5 4 5 4 5 5 5 5 5 5
16 3.893 2 3 4 5 2 3 4 5 2 3 4 5 3 4 5 3 4 5 4 5 4 5 4 5 4 4 4 4
17 4.107 5 4 3 5 4 3 5 4 3 5 4 3 5 4 3 5 4 5 4 5 4 4 4 4 4 4 4 4
18 4.250 3 4 5 3 4 5 3 4 5 3 4 5 4 5 4 5 4 5 4 5 4 5 4 5 4 5 4 4
19 4.000 5 4 3 5 4 3 5 4 3 5 4 3 5 4 3 5 4 3 5 4 3 5 4 3 4 4 4 4
20 3.893 3 4 5 3 4 5 3 4 5 3 4 5 3 4 5 3 4 3 4 3 4 4 4 4 4 4 4 4
21 4.107 5 4 3 2 1 5 4 3 2 5 4 5 4 5 4 5 4 5 4 5 4 5 4 5 4 5 4 5
22 3.786 2 3 4 5 2 3 4 5 3 4 5 3 4 5 3 4 5 3 4 5 3 4 3 4 4 4 4 4
23 3.857 5 4 3 2 5 4 3 4 3 4 3 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
24 3.464 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 3 4 5 3 4 5 4 4 4 4 4 4 4
25 3.679 5 4 3 2 5 4 3 2 5 4 3 4 3 4 3 4 3 4 3 4 3 4 4 4 4 4 4 4
26 3.571 1 2 3 4 5 2 3 4 5 3 5 3 4 3 4 3 4 3 4 3 4 4 4 4 4 4 4 4
27 3.750 5 4 3 5 4 3 5 4 3 5 4 3 5 4 3 4 3 4 3 4 3 4 3 4 3 4 3 3
The three-level experiment could not be evaluated directly, so the results had to be
pooled to calculate the effects. To do this, the best and worst conditions for each
factor had to be determined, and all treatments containing a level that was neither
a best nor a worst level had to be eliminated. In essence, the experiment was
converted from a three-factor, three-level experiment into a three-factor, two-level
experiment. The pooled experiment appeared as follows:
From the pooled experiment we calculated the effects and determined significance:
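The pooling step described above — keeping only the treatments where every factor sits at its best or its worst level — can be sketched as follows. The best/worst level assignments and the design shown here are hypothetical placeholders, not the actual experiment data.

```python
def pool_to_two_level(design, results, best, worst):
    """Reduce a three-level design to a two-level one by keeping only
    treatments where every factor is at its best or its worst level."""
    pooled = []
    for treatment, result in zip(design, results):
        if all(level in (best[i], worst[i])
               for i, level in enumerate(treatment)):
            pooled.append((treatment, result))
    return pooled

# Hypothetical: best levels (1, 3, 2) and worst levels (3, 1, 3) for
# Dwell Time, Hot Oil Temp., and Vacuum Time
design = [(1, 1, 1), (1, 3, 2), (2, 2, 2), (3, 1, 3), (1, 3, 3), (3, 3, 1)]
results = [4.3, 4.4, 3.7, 3.5, 4.1, 3.9]
# keeps only the treatments made up entirely of best/worst levels
print(pool_to_two_level(design, results, best=(1, 3, 2), worst=(3, 1, 3)))
```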
[Appendix worksheet: blank two-day inspection form — parts 1 through 10, three inspectors per day, with columns for average, range, and grand average.]