
Chapter 3 Design & Analysis of Experiments 1

7E 2009 Montgomery
What If There Are More Than Two Factor Levels?
• The t-test does not directly apply
• There are lots of practical situations where there are either more than two levels of interest, or there are several factors of simultaneous interest
• The analysis of variance (ANOVA) is the appropriate analysis "engine" for these types of experiments
• The ANOVA was developed by Fisher in the early 1920s, and initially applied to agricultural experiments
• Used extensively today for industrial experiments

An Example (See pg. 61)
• An engineer is interested in investigating the relationship between the RF power setting and the etch rate for this tool. The objective of an experiment like this is to model the relationship between etch rate and RF power, and to specify the power setting that will give a desired target etch rate.
• The response variable is etch rate.
• She is interested in a particular gas (C2F6) and gap (0.80 cm), and wants to test four levels of RF power: 160W, 180W, 200W, and 220W. She decided to test five wafers at each level of RF power.
• The experimenter chooses 4 levels of RF power: 160W, 180W, 200W, and 220W
• The experiment is replicated 5 times – runs made in random order

An Example (See pg. 62)

• Does changing the power change the
mean etch rate?
• Is there an optimum level for power?
• We would like to have an objective
way to answer these questions
• The t-test really doesn't apply here – more than two factor levels

The Analysis of Variance (Sec. 3.2, pg. 62)

• In general, there will be a levels of the factor, or a treatments,


and n replicates of the experiment, run in random order…a
completely randomized design (CRD)
• N = an total runs
• We consider the fixed effects case…the random effects case will be discussed later
• Objective is to test hypotheses about the equality of the a
treatment means

The Analysis of Variance
• The name "analysis of variance" stems from a partitioning of the total variability in the response variable into components that are consistent with a model for the experiment
• The basic single-factor ANOVA model is

y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \quad i = 1, 2, \ldots, a; \; j = 1, 2, \ldots, n

\mu = an overall mean, \tau_i = ith treatment effect,
\varepsilon_{ij} = experimental error, NID(0, \sigma^2)
Models for the Data

There are several ways to write a model for the data:

y_{ij} = \mu + \tau_i + \varepsilon_{ij} is called the effects model

Let \mu_i = \mu + \tau_i, then
y_{ij} = \mu_i + \varepsilon_{ij} is called the means model
Regression models can also be employed
The Analysis of Variance
• Total variability is measured by the total
sum of squares:
SS_T = \sum_{i=1}^{a} \sum_{j=1}^{n} (y_{ij} - \bar{y}_{..})^2

• The basic ANOVA partitioning is:


\sum_{i=1}^{a} \sum_{j=1}^{n} (y_{ij} - \bar{y}_{..})^2 = \sum_{i=1}^{a} \sum_{j=1}^{n} [(\bar{y}_{i.} - \bar{y}_{..}) + (y_{ij} - \bar{y}_{i.})]^2

= n \sum_{i=1}^{a} (\bar{y}_{i.} - \bar{y}_{..})^2 + \sum_{i=1}^{a} \sum_{j=1}^{n} (y_{ij} - \bar{y}_{i.})^2

SS_T = SS_{Treatments} + SS_E
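As a numeric check (a sketch in Python, not part of the text), the partition can be verified directly on the etch rate data of Example 3.1:

```python
# Plasma etching data from Example 3.1: a = 4 power levels, n = 5 wafers each.
etch = {
    160: [575, 542, 530, 539, 570],
    180: [565, 593, 590, 579, 610],
    200: [600, 651, 610, 637, 629],
    220: [725, 700, 715, 685, 710],
}

data = [y for ys in etch.values() for y in ys]
grand_mean = sum(data) / len(data)                      # y..bar = 617.75

ss_total = sum((y - grand_mean) ** 2 for y in data)
ss_treat = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
               for ys in etch.values())
ss_error = sum((y - sum(ys) / len(ys)) ** 2
               for ys in etch.values() for y in ys)

print(round(ss_total, 2), round(ss_treat, 2), round(ss_error, 2))
# 72209.75 66870.55 5339.2
assert abs(ss_total - (ss_treat + ss_error)) < 1e-6     # SS_T = SS_Treatments + SS_E
```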

The Analysis of Variance

SS_T = SS_{Treatments} + SS_E
• A large value of SSTreatments reflects large differences in
treatment means
• A small value of SSTreatments likely indicates no differences in treatment means
• Formal statistical hypotheses are:

H_0: \mu_1 = \mu_2 = \cdots = \mu_a
H_1: At least one mean is different

The Analysis of Variance
• While sums of squares cannot be directly compared
to test the hypothesis of equal means, mean
squares can be compared.
• A mean square is a sum of squares divided by its
degrees of freedom:

df_{Total} = df_{Treatments} + df_{Error}
an - 1 = (a - 1) + a(n - 1)

MS_{Treatments} = \frac{SS_{Treatments}}{a - 1}, \quad MS_E = \frac{SS_E}{a(n - 1)}
• If the treatment means are equal, the treatment and
error mean squares will be (theoretically) equal.
• If treatment means differ, the treatment mean square
will be larger than the error mean square.
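A minimal sketch of the mean-square and F-statistic arithmetic, using the sums of squares from the etch rate example (SS values as reported in the text's ANOVA table):

```python
# Mean squares and F statistic for Example 3.1 (a = 4 treatments, n = 5).
a, n = 4, 5
ss_treat, ss_error = 66870.55, 5339.20

ms_treat = ss_treat / (a - 1)         # a - 1 = 3 treatment df
ms_error = ss_error / (a * (n - 1))   # a(n - 1) = 16 error df
f0 = ms_treat / ms_error

print(round(ms_treat, 2), round(ms_error, 2), round(f0, 2))
# 22290.18 333.7 66.8
```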
The Analysis of Variance is
Summarized in a Table

• Computing…see text, pp. 69
• The reference distribution for F_0 is the F_{a-1, a(n-1)} distribution
• Reject the null hypothesis (equal treatment means) if

F_0 > F_{\alpha, a-1, a(n-1)}

Example: Etch Rate Data (in Å/min) from the Plasma
Etching Experiment

RF Power (W)   Observed Etch Rate (1-5)     Totals y_i.   Average ȳ_i.
160            575  542  530  539  570      2756          551.2
180            565  593  590  579  610      2937          587.4
200            600  651  610  637  629      3127          625.4
220            725  700  715  685  710      3535          707.0
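The totals and averages in the table are easy to reproduce; a quick sketch in Python:

```python
# Etch rate observations by RF power level (W), from Table 3.1.
etch = {
    160: [575, 542, 530, 539, 570],
    180: [565, 593, 590, 579, 610],
    200: [600, 651, 610, 637, 629],
    220: [725, 700, 715, 685, 710],
}

totals = {p: sum(ys) for p, ys in etch.items()}            # y_i.
means = {p: sum(ys) / len(ys) for p, ys in etch.items()}   # y_i. bar

print(totals)  # {160: 2756, 180: 2937, 200: 3127, 220: 3535}
print(means)   # {160: 551.2, 180: 587.4, 200: 625.4, 220: 707.0}
```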

ANOVA Table
Example 3-1

The Reference Distribution:

P-value

ANOVA calculations may be made simpler by coding the observations, e.g., by subtracting 600 from each observation. Based on the coded data, the sums of squares may be computed in the usual manner, and they are identical to those calculated previously. Therefore, subtracting a constant from the original data does not change the sums of squares.
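A quick sketch confirming the invariance on the etch rate data:

```python
# Subtracting a constant (here 600) from every observation leaves SS_T unchanged.
etch = {
    160: [575, 542, 530, 539, 570],
    180: [565, 593, 590, 579, 610],
    200: [600, 651, 610, 637, 629],
    220: [725, 700, 715, 685, 710],
}

def ss_total(groups):
    data = [y for ys in groups.values() for y in ys]
    gm = sum(data) / len(data)
    return sum((y - gm) ** 2 for y in data)

coded = {p: [y - 600 for y in ys] for p, ys in etch.items()}

print(round(ss_total(etch), 2), round(ss_total(coded), 2))
# 72209.75 72209.75
```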
Model Adequacy Checking in the ANOVA
Text reference, Section 3.4, pg. 75

• Checking assumptions is important
• Normality
• Constant variance
• Independence
• Have we fit the right model?
• Later we will talk about what to do if some of these assumptions are violated
The normality assumption could be verified by
constructing a normal probability plot of the residuals. If
the underlying error distribution is normal, this plot will
resemble a straight line.

Model Adequacy Checking in the ANOVA
• Examination of
residuals (see text, Sec. 3.4, pg. 75)

e_{ij} = y_{ij} - \hat{y}_{ij} = y_{ij} - \bar{y}_{i.}

• Computer software
generates the residuals
• Residual plots are very
useful
• Normal probability plot
of residuals
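A sketch of the residual computation, together with the coordinates a normal probability plot would use (standard library only; drawing the plot is left to any plotting package):

```python
from statistics import NormalDist

etch = {
    160: [575, 542, 530, 539, 570],
    180: [565, 593, 590, 579, 610],
    200: [600, 651, 610, 637, 629],
    220: [725, 700, 715, 685, 710],
}

# e_ij = y_ij - y_i.bar: residual = observation minus its treatment average.
residuals = {p: [y - sum(ys) / len(ys) for y in ys] for p, ys in etch.items()}
print(round(residuals[160][0], 1))   # 575 - 551.2 = 23.8

# Residuals sum to zero within each treatment by construction.
for es in residuals.values():
    assert abs(sum(es)) < 1e-8

# Normal probability plot coordinates: sorted residuals vs. normal scores.
flat = sorted(e for es in residuals.values() for e in es)
k = len(flat)
scores = [NormalDist().inv_cdf((i - 0.375) / (k + 0.25)) for i in range(1, k + 1)]
# Plotting `flat` against `scores` should give roughly a straight line
# if the normality assumption holds.
```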

Plot of residuals in time sequence
• Plotting the residuals in time
order of data collection is
helpful in detecting correlation between the residuals.
• A tendency to have runs of positive and negative residuals indicates positive correlation.
This would imply that the
independence assumption on
the errors has been violated.
• It is important to prevent this problem; proper randomization of the experiment is an important step in obtaining independence.
• Skill of the experimenter may improve as the experiment progresses, or the process may drift. This will often result in a change in the error variance over time, depicted by more spread at one end of the plot than at the other.

Plot of residuals versus fitted values
• If the model is correct and the assumptions are satisfied, the residuals should be structureless. A simple check is to plot the residuals versus the fitted values. This plot should not reveal any obvious pattern, as shown in Fig. 3.6.
• Nonconstant variance may show up on this plot in the form of an outward-opening funnel or megaphone.
• One way of dealing with nonconstant variance is to apply a variance-stabilizing transformation.
The Regression Model

Post-ANOVA Comparison of Means
• The analysis of variance tests the hypothesis of equal
treatment means
• Assume that residual analysis is satisfactory
• If that hypothesis is rejected, we don’t know which
specific means are different
• Determining which specific means differ following an
ANOVA is called the multiple comparisons problem
• There are lots of ways to do this…see text, Section 3.5,
pg. 84
• We will use pairwise t-tests on means…sometimes called Fisher's Least Significant Difference (or Fisher's LSD) Method
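A sketch of Fisher's LSD for the etch rate example; MS_E = 333.70 with 16 error degrees of freedom, and t_{0.025,16} = 2.120 is taken from a t table:

```python
from math import sqrt

n, ms_error = 5, 333.70
t_crit = 2.120                       # t_{0.025, 16} from a t table

# LSD = t * sqrt(2 * MS_E / n) for equal sample sizes.
lsd = t_crit * sqrt(2 * ms_error / n)
print(round(lsd, 2))                 # 24.49

means = {160: 551.2, 180: 587.4, 200: 625.4, 220: 707.0}
levels = sorted(means)
for i, p in enumerate(levels):
    for q in levels[i + 1:]:
        diff = abs(means[p] - means[q])
        # every pairwise difference here exceeds the LSD,
        # so all four treatment means are declared different
        print(p, "vs", q, round(diff, 1), diff > lsd)
```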

Design-Expert Output

Graphical Comparison of Means
Text, pg. 88

ANOVA calculations are usually done via computer

• Text exhibits sample calculations from three very popular software packages: Design-Expert, JMP and Minitab
• See pages 98-100
• Text discusses some of the summary statistics provided by these packages

Figure 3.12 (p. 99)
Design-Expert computer output
for Example 3-1.

R^2 = \frac{SS_{Model}}{SS_{Total}} = \frac{66870.55}{72209.75} = 0.9261
• R2 is loosely interpreted as the proportion of the variability in the data explained by the ANOVA model. Values closer to 1 are desirable.
• Adjusted R2 reflects the number of factors in the model and is useful for evaluating the impact of increasing or decreasing the number of model terms.
• Others – see text pg. 98
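Both quantities can be reproduced from the sums of squares; a sketch:

```python
# R^2 and adjusted R^2 from the ANOVA sums of squares of Example 3.1.
ss_model, ss_total = 66870.55, 72209.75
ss_error = ss_total - ss_model
df_error, df_total = 16, 19             # a(n-1) and an-1

r2 = ss_model / ss_total
# Adjusted R^2 penalizes by degrees of freedom:
adj_r2 = 1 - (ss_error / df_error) / (ss_total / df_total)

print(round(r2, 4), round(adj_r2, 4))   # 0.9261 0.9122
```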

Figure 3.13 (p. 100)
Minitab computer output for
Example 3-1.

Why Does the ANOVA Work?
We are sampling from normal populations, so
\frac{SS_{Treatments}}{\sigma^2} \sim \chi^2_{a-1} (if H_0 is true), and \frac{SS_E}{\sigma^2} \sim \chi^2_{a(n-1)}

Cochran's theorem gives the independence of these two chi-square random variables, so

F_0 = \frac{SS_{Treatments}/(a-1)}{SS_E/[a(n-1)]} = \frac{\chi^2_{a-1}/(a-1)}{\chi^2_{a(n-1)}/[a(n-1)]} \sim F_{a-1, a(n-1)}

Finally, E(MS_{Treatments}) = \sigma^2 + \frac{n \sum_{i=1}^{a} \tau_i^2}{a-1} and E(MS_E) = \sigma^2
Therefore an upper-tail F test is appropriate.
Sample Size Determination
Text, Section 3.7, pg. 101
• FAQ in designed experiments
• Answer depends on lots of things, including what type of experiment is being contemplated, how it will be conducted, resources, and desired sensitivity
• Sensitivity refers to the difference in means that the experimenter wishes to detect
• Generally, increasing the number of replications increases the sensitivity, making it easier to detect small differences in means
Sample Size Determination
Fixed Effects Case

• Can choose the sample size to detect a specific


difference in means and achieve desired values of
type I and type II errors
• Type I error – reject H0 when it is true ( )
• Type II error – fail to reject H0 when it is false (  )
• Power = 1 - 
• Operating characteristic curves plot \beta against a parameter \Phi, where

\Phi^2 = \frac{n \sum_{i=1}^{a} \tau_i^2}{a \sigma^2}
Sample Size Determination
Fixed Effects Case---use of OC Curves
• The OC curves for the fixed effects model are in the
Appendix, Table V
• A very common way to use these charts is to define a
difference in two means D of interest, then the minimum
value of \Phi^2 is

\Phi^2 = \frac{n D^2}{2 a \sigma^2}
• Typically work in terms of the ratio D/\sigma and try values of n until the desired power is achieved
• Most statistics software packages will perform power and
sample size calculations – see page 103
• There are some other methods discussed in the text
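A sketch of the trial-and-error calculation. The values D = 75 Å/min and σ = 25 are illustrative assumptions chosen to match the scale of the etch rate example, not values prescribed by this slide:

```python
from math import sqrt

# Minimum Phi^2 for detecting a difference D between any pair of means:
# Phi^2 = n * D^2 / (2 * a * sigma^2). D and sigma below are illustrative.
a, D, sigma = 4, 75.0, 25.0

for n in range(2, 7):
    phi2 = n * D**2 / (2 * a * sigma**2)
    print(n, phi2, round(sqrt(phi2), 2))
# Each (n, Phi^2, Phi) row is then used to enter the OC curve
# (Appendix Table V) with a-1 and a(n-1) degrees of freedom to read beta.
```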

Power and sample size calculations from Minitab (Page 103)

Questions to be attempted

• Chapter 3
No. 3.1, 3.3, 3.6, 3.9
(No. 3.7, 3.8, 3.11, 3.5)

