14.0 Response Surface Methodology (RSM) : (Updated Spring 2001)
Identify a few important variables from a large set of candidate variables, i.e., a screening experiment.
Ascertain how a few variables impact the response.
[Figure: contour plot of a response surface (contours labeled 0 through 90), showing movement from the starting point (x1, x2) toward the optimum.]

The basic RSM procedure:
1. Run a two-level factorial design around the starting point (x1, x2).
2. Fit a linear model (no interaction or quadratic terms) to the data.
3. Determine the path of steepest ascent (PSA) - a quick, gradient-based way to move toward the optimum.
4. Run tests on the PSA until response no longer improves.
Factor          Low Level (-1)   Midpoint (0)   High Level (+1)
(1) % Bleach          2                4               6
(2) Temp             75               80              85
[Figure: 2^2 factorial design in coded units (x1, x2). Corner responses (whiteness): y(-1,-1) = 19, y(+1,-1) = 31, y(-1,+1) = 28, y(+1,+1) = 44; contours y = 29 and y = 32 pass through the design region.]

Average = 30.5
E1 = 14
E2 = 11
E12 = 2 (small relative to E1 and E2, so the response surface is fairly linear)
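The average and effects above can be recomputed from the four corner responses; a minimal sketch (the assignment of the corner values 19, 31, 28, 44 to design points is inferred from the reported effects):

```python
# 2^2 factorial analysis for the first design (whiteness responses).
# Corners in standard order: (-,-), (+,-), (-,+), (+,+).
y_mm, y_pm, y_mp, y_pp = 19, 31, 28, 44

average = (y_mm + y_pm + y_mp + y_pp) / 4
E1  = ((y_pm + y_pp) - (y_mm + y_mp)) / 2   # main effect of x1 (% Bleach)
E2  = ((y_mp + y_pp) - (y_mm + y_pm)) / 2   # main effect of x2 (Temp)
E12 = ((y_mm + y_pp) - (y_pm + y_mp)) / 2   # interaction effect

print(average, E1, E2, E12)   # 30.5 14.0 11.0 2.0

# Model coefficients are half the effects: y = 30.5 + 7*x1 + 5.5*x2
b1, b2 = E1 / 2, E2 / 2
```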
Linear model: y = b0 + b1*x1 + b2*x2 + ε
Fitted model: y = 30.5 + 7*x1 + 5.5*x2
A one-unit change in x1 is a change of 2% in Bleach.
A one-unit change in x2 is a change of 5 deg. in Temperature.

x1 = (%Bleach - 4)/2,  so  %Bleach = 2*x1 + 4
x2 = (Temp - 80)/5,    so  Temp = 5*x2 + 80
Along a contour of the fitted plane, 7x1 + 5.5x2 = constant, so the contour slope is -7/5.5.
The PSA is the line normal to the contours (slope of PSA = -1/(contour slope)):

x2 = (5.5/7) x1
Let's define some points (candidate tests) on the path of steepest ascent (PSA).
Candidate   x1    x2 = (5.5/7)x1   %Bleach      Temp        Whiteness
Test                               = 2x1 + 4    = 5x2 + 80   y
   1        0       0                 4           80          -
   2        0.5     0.39              5           82          -
   3        1.0     0.79              6           84          43
   4        1.5     1.18              7           86          49
   5        2.0     1.57              8           88          52
   6        2.5     1.96              9           90          59
   7        3.0     2.36             10           92          66
   8        3.5     2.75             11           94          75
   9        4.0     3.14             12           96          81
  10        4.5     3.54             13           98          86
  11        5.0     3.93             14          100          80
  12        5.5     4.32             15          102          69
  13        6.0     4.71             16          104          -
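The candidate-test coordinates follow mechanically from the PSA and the coding equations; a short sketch that regenerates them:

```python
# Generate candidate tests along the path of steepest ascent (PSA):
# x2 = (5.5/7)*x1, then decode: %Bleach = 2*x1 + 4, Temp = 5*x2 + 80.
rows = []
for i in range(13):
    x1 = 0.5 * i
    x2 = 5.5 / 7 * x1
    rows.append((i + 1, x1, round(x2, 2), 2 * x1 + 4, round(5 * x2 + 80)))

for test, x1, x2, bleach, temp in rows:
    print(f"test {test:2d}: x1={x1:.1f}  x2={x2:.2f}  "
          f"%Bleach={bleach:.0f}  Temp={temp}")
```

Test 10 reproduces the best point found on this path: %Bleach = 13, Temp = 98.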
Since the response surface appears fairly linear (E12 is small) where we conducted the factorial tests, there is no reason to conduct tests 1 or 2. Run the first test on or beyond the boundary defined by x1 = x2 = +1.
Continue running tests on the PSA until the response no longer increases.
The highest response occurs at (Bleach = 13%, Temp = 98°). Run a new factorial design at this point.
Summary so far: ran the initial factorial design, defined the path of steepest ascent, and moved along the PSA to Bleach = 13%, Temperature = 98°, Whiteness = 86%.
Now conduct another factorial design near this point.
[Figure: 2^2 factorial design in coded units: x1 = ±1 corresponds to Bleach = 12, 14%; x2 = ±1 corresponds to Temp = 92, 102°. Corner responses (whiteness): y(-1,-1) = 75, y(+1,-1) = 88, y(-1,+1) = 77, y(+1,+1) = 76.]

Average = 79
E1 = 6
E2 = -5
E12 = -7 (fairly large)
Fitted model: y = 79 + 3x1 - 2.5x2
At x1 = 0, x2 = 0: y = 79.
The combinations of (x1, x2) that give y = 79 are specified by:

3x1 - 2.5x2 = 0,  i.e.  x2 = (3/2.5) x1 = 1.2 x1   (a contour line)

PSA (normal to the contours): x2 = -(2.5/3) x1 = -0.833 x1, or x1 = -1.2 x2
Let's follow the PSA, taking steps in the x1 direction since |E1| is larger than |E2|.
Candidate   x1     x2 = -0.833x1   %Bleach     Temp         Whiteness
Test                               = 13 + x1   = 97 + 5x2    y
   1        0        0             13.00       97.0           -
   2        0.5     -0.4165        13.50       94.9           -
   3        1.0     -0.833         14.00       92.8          88
   4        1.25    -1.04125       14.25       91.8          90
   5        1.5     -1.2495        14.50       90.8          92
   6        1.75    -1.4578        14.75       89.7          89
   7        2.00    -1.666         15.00       88.7          84
   8        2.25    -1.87425       15.25       87.6           -
   9        2.50    -2.0825        15.50       86.6           -
  10        2.75    -2.29075       15.75       85.5           -
  11        3.00    -2.50          16.00       84.5           -
Note that we weren't able to move as far along the path this time before the response fell off.
We now believe we are fairly close to the optimum, so we must characterize the response surface with more than a plane, perhaps a model of the form:

y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2 + ε

To estimate the b's in this model, we need more than a 2-level factorial design. A Central Composite Design (CCD) is more efficient than a 3-level factorial design.
Given the results of a 2nd-order design, we must use least squares (linear regression) to estimate the b's. We will examine the topic of regression after we finish the RSM procedure for our example.
Fit the following model to the response surface near the point:
%Bleach=14.50, Temp = 90.8, Whiteness=92%
y = B0 + B1*x1 + B2*x2 + B12*x1*x2 + B11*x1^2 + B22*x2^2 + ε

The fitted model will be of the form:

y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2

with the coefficient vector estimated by least squares:

b = [b0, b1, b2, b12, b11, b22]^T = (x^T x)^-1 x^T y
Three-level Factorial Design

[Figure: 3-level factorial layouts for two factors (x1, x2) and three factors (x1, x2, x3), each factor at levels -1, 0, +1.]

Number of tests in a 3^k factorial: 9, 27, 81, 243 for k = 2, 3, 4, 5.
[Figure: Central Composite Design layouts for two factors (x1, x2) and three factors (x1, x2, x3): a 2^k factorial (corners at ±1) plus axial points at ±α on each axis and a center point.]
Number of tests required for a Central Composite Design with k factors:

k                                2      3      4      5      6
Factorial portion F = 2^k        4      8     16     32     64
Axial points (2k)                4      6      8     10     12
α = (F)^(1/4)                  1.414  1.682  2.000  2.378  2.828
Center points (uniform prec.)    5      6      7     10     15
Total tests                     13     20     31     52     91
Center points (orthogonal)       8      9     12     17     24
Total tests                     16     23     36     59    100

α = (F)^(1/4) gives the design the rotatability property: the variance of the predicted response depends only on the distance from the design center.
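The factorial portion, axial-point count, and axial distance follow directly from k; a small sketch:

```python
# Rotatable CCD sizing: factorial portion F = 2**k, 2k axial points,
# and axial distance alpha = F**(1/4) (the rotatability condition).
ccd = []
for k in range(2, 7):
    F = 2 ** k          # factorial portion
    axial = 2 * k       # number of axial (star) points
    alpha = round(F ** 0.25, 3)
    ccd.append((k, F, axial, alpha))
    print(f"k={k}: F={F:2d}  axial points={axial:2d}  alpha={alpha}")
```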
Variable Levels

                -α = -√2   -1     0    +1   +α = +√2
Bleach (%)        13.1    13.5  14.5  15.5    15.9
Temp (deg)         84      86    91    96      98
Test Plan

Test   x1     x2     %Bleach   Temp
 1     -1     -1      13.5      86
 2     +1     -1      15.5      86
 3     -1     +1      13.5      96
 4     +1     +1      15.5      96
 5     -√2     0      13.1      91
 6     +√2     0      15.9      91
 7      0     -√2     14.5      84
 8      0     +√2     14.5      98
 9      0      0      14.5      91
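The natural-unit settings decode from the coded plan via the centers and unit sizes given above (center %Bleach = 14.5, Temp = 91; one coded unit = 1% Bleach or 5 deg); a quick sketch:

```python
# Decode the rotatable k=2 CCD test plan from coded units to natural units.
import math

a = math.sqrt(2)  # axial distance for a rotatable k = 2 CCD
coded = [(-1, -1), (1, -1), (-1, 1), (1, 1),
         (-a, 0), (a, 0), (0, -a), (0, a), (0, 0)]

# Center: %Bleach = 14.5, Temp = 91; units: 1% Bleach and 5 deg per coded unit.
plan = [(round(14.5 + x1, 1), round(91 + 5 * x2)) for x1, x2 in coded]
for i, (bleach, temp) in enumerate(plan, start=1):
    print(f"test {i}: %Bleach={bleach}  Temp={temp}")
```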
The design matrix for the model terms (1, x1, x2, x1*x2, x1^2, x2^2):

    | 1   -1   -1   +1   1   1 |
    | 1   +1   -1   -1   1   1 |
    | 1   -1   +1   -1   1   1 |
x = | 1   +1   +1   +1   1   1 |
    | 1   -√2   0    0   2   0 |
    | 1   +√2   0    0   2   0 |
    | 1    0   -√2   0   0   2 |
    | 1    0   +√2   0   0   2 |
    | 1    0    0    0   0   0 |
    | 87 |
    | 85 |
    | 89 |
    | 83 |
y = | 86 |
    | 82 |
    | 98 |
    | 87 |
    | 92 |
          | 9  0  0  0   8   8 |
          | 0  8  0  0   0   0 |
(x^T x) = | 0  0  8  0   0   0 |
          | 0  0  0  4   0   0 |
          | 8  0  0  0  12   4 |
          | 8  0  0  0   4  12 |

             |  1     0     0     0    -0.5   -0.5  |
             |  0   0.125   0     0     0      0    |
(x^T x)^-1 = |  0     0   0.125   0     0      0    |
             |  0     0     0    0.25   0      0    |
             | -0.5   0     0     0    0.344  0.219 |
             | -0.5   0     0     0    0.219  0.344 |
          |  789    |
          | -13.657 |
(x^T y) = | -15.556 |
          |  -4     |
          |  680    |
          |  714    |
    | b0  |                      |  92     |
    | b1  |                      | -1.707  |
b = | b2  | = (x^T x)^-1 x^T y = | -1.945  |
    | b12 |                      | -1.000  |
    | b11 |                      | -4.563  |
    | b22 |                      | -0.313  |
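The whole CCD fit can be checked numerically; a minimal sketch using NumPy, with the nine responses listed in the order of the test plan above:

```python
# Least-squares fit of the full second-order model to the CCD results:
# y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1**2 + b22*x2**2
import numpy as np

a = np.sqrt(2)
coded = np.array([(-1, -1), (1, -1), (-1, 1), (1, 1),
                  (-a, 0), (a, 0), (0, -a), (0, a), (0, 0)])
y = np.array([87, 85, 89, 83, 86, 82, 98, 87, 92], dtype=float)

x1, x2 = coded[:, 0], coded[:, 1]
X = np.column_stack([np.ones(len(y)), x1, x2, x1 * x2, x1**2, x2**2])

b = np.linalg.solve(X.T @ X, X.T @ y)   # b = (x'x)^-1 x'y
print(b)  # approximately [92, -1.707, -1.945, -1.0, -4.563, -0.313]
```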
[Figure: contour plot of the fitted second-order response surface in (x1, x2), contours from 60 to 95.]
Linear Regression

[Figure: scatter plot of the six observations with a fitted straight line.]

Model: y = b0 + b1*x + ε
Define S(b0, b1) = Σ_{i=1}^{n=6} (yi - ŷi)^2

Minimize S(b0, b1) = Σ (yi - ŷi)^2 = Σ (yi - b0 - b1*xi)^2

∂S/∂b0 = Σ 2(yi - b0 - b1*xi)(-1) = 0
∂S/∂b1 = Σ 2(yi - b0 - b1*xi)(-xi) = 0

-Σyi + n*b0 + b1*Σxi = 0
-Σxi*yi + b0*Σxi + b1*Σxi^2 = 0

Simplifying, we obtain:

n*b0 + b1*Σxi = Σyi
b0*Σxi + b1*Σxi^2 = Σxi*yi

These two equations are known as the Normal Equations.
The values of b0 and b1 that satisfy the Normal Equations are the least-squares estimates; they are the values that give a minimum S.
In matrix form:

| n     Σxi   | | b0 |   | Σyi    |
|             | |    | = |        |
| Σxi   Σxi^2 | | b1 |   | Σxi*yi |

The least-squares estimates are:

| b0* |           1          | Σxi^2   -Σxi | | Σyi    |
|     | = ------------------ |              | |        |
| b1* |   n*Σxi^2 - (Σxi)^2  | -Σxi      n  | | Σxi*yi |

                1            | Σxi^2 * Σyi - Σxi * Σxi*yi |
       = ------------------  |                            |
         n*Σxi^2 - (Σxi)^2   | -Σxi * Σyi + n * Σxi*yi    |
For the example data, the least-squares estimates are:

b0* = 1.4667
b1* = 0.9143

b0* is an estimate of B0; b1* is an estimate of B1.
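These estimates can be checked by solving the Normal Equations directly for the six data points used in this example (x = 1..6, y = 2, 3, 5, 5, 7, 6):

```python
# Solve the Normal Equations for the straight-line fit.
xs = [1, 2, 3, 4, 5, 6]
ys = [2, 3, 5, 5, 7, 6]
n = len(xs)

Sx  = sum(xs)
Sy  = sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))

# n*b0 + b1*Sx = Sy ;  b0*Sx + b1*Sxx = Sxy
den = n * Sxx - Sx * Sx
b1 = (n * Sxy - Sx * Sy) / den
b0 = (Sxx * Sy - Sx * Sxy) / den
print(round(b0, 4), round(b1, 4))  # 1.4667 0.9143
```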
[Figure: the six data points and the fitted line y = 1.4667 + 0.9143x.]
Matrix Approach

y = Vector of Observations = [2, 3, 5, 5, 7, 6]^T = [y1, y2, ..., yn]^T

ŷ = Vector of Predictions = x b

b = Vector of Coefficients = [b0, b1]^T
    | 1  1 |
    | 1  2 |
x = | 1  3 |
    | 1  4 |
    | 1  5 |
    | 1  6 |

e = Vector of Prediction Errors = [e1, e2, ..., en]^T
e = y - ŷ

Least squares makes the errors orthogonal to the columns of x:

x^T (y - x b) = 0 = x^T y - (x^T x) b
(x^T x) b = x^T y

Therefore,

b = (x^T x)^-1 x^T y
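A minimal sketch of this matrix computation for the same six observations:

```python
# Least squares via the matrix formula b = (x'x)^-1 x'y, with x = [1 x].
import numpy as np

x = np.arange(1, 7)
y = np.array([2, 3, 5, 5, 7, 6], dtype=float)
X = np.column_stack([np.ones(6), x])   # columns: intercept, x

b = np.linalg.solve(X.T @ X, X.T @ y)
print(np.round(b, 4))  # [1.4667 0.9143]
```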
This is analogous to:

| n    Σx   | | b0 |   | Σy  |
|           | |    | = |     |
| Σx   Σx^2 | | b1 |   | Σxy |

Because the observations are noisy, the estimates vary from one realization of the data to another, for example

b = | 1.5309 |   or   b = | 1.5512 |
    | 0.9741 |            | 1.0134 |

but the least-squares estimates are unbiased: E(b1) = B1, and in general E[b] = B.
The uncertainty in b about the true coefficients B0, B1 is characterized by its covariance matrix:

Var(b) = (x^T x)^-1 σy^2

Estimate σy^2 by the residual mean square:

s_y^2 = Σ_{i=1}^{n=6} (yi - ŷi)^2 / (n - 2) = 0.67619

Var(b) = (x^T x)^-1 s_y^2 = |  0.586  -0.135 |
                            | -0.135   0.039 |

s_b0 = √0.586 ≈ 0.766   (standard error of b0)
s_b1 = √0.039 ≈ 0.197   (standard error of b1)
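The residual variance and standard errors can be reproduced numerically; a short sketch for the same six data points:

```python
# Standard errors of the regression coefficients for the example fit.
import numpy as np

x = np.arange(1, 7)
y = np.array([2, 3, 5, 5, 7, 6], dtype=float)
X = np.column_stack([np.ones(6), x])

b = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ b
s2 = (resid @ resid) / (len(y) - 2)      # s_y^2, residual mean square
cov = s2 * np.linalg.inv(X.T @ X)        # Var(b) = (x'x)^-1 s_y^2
se = np.sqrt(np.diag(cov))               # standard errors of b0, b1

print(round(s2, 5))      # 0.67619
print(np.round(se, 3))   # [0.766 0.197]
```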
Confidence intervals for the individual parameters take the form:

bi ± t_{ν, 1-α/2} * s_bi,   with ν = n - 2 degrees of freedom

For the example (95% confidence):

b0: 1.4667 ± 2.125
b1: 0.9143 ± 0.546

Note that these confidence intervals do not consider the fact that the parameter estimates may be correlated.
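The interval half-widths follow from the standard errors; a small sketch (using the standard t-table value for 4 degrees of freedom):

```python
# 95% confidence intervals for b0 and b1 (nu = n - 2 = 4).
t = 2.776  # t_{4, 0.975}, standard t-table value

b0, se_b0 = 1.4667, 0.7655
b1, se_b1 = 0.9143, 0.1966

print(f"b0: {b0} +/- {t * se_b0:.3f}")   # +/- 2.125
print(f"b1: {b1} +/- {t * se_b1:.3f}")   # +/- 0.546
```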
Joint Confidence Interval for Parameter Values

A joint confidence region (approximately) is bounded by a sum-of-squares contour, S0. Considering all p coefficients simultaneously,

S0 = SR * [1 + (p/(n - p)) * F_{p, n-p, 1-α}]

where

SR = S(b) = Σ_{i=1}^{n} (yi - ŷi)^2

Sum-of-squares contours are circles only if b0 and b1 are uncorrelated, i.e., (x^T x)^-1 is diagonal. More often the contours appear as ellipses.
Confidence Interval for Mean Response

The predicted response at an arbitrary point x0 is ŷ0 = x0^T b. Due to the uncertainty in the b's, there is uncertainty in ŷ0 (remember, ŷ0 is the best estimate available for the mean response at x0):

Var(ŷ0) = x0^T (x^T x)^-1 x0 * σy^2
Estimated with s_y^2:

Var(ŷ0) = x0^T (x^T x)^-1 x0 * s_y^2

A 100(1-α)% confidence interval for the mean response is:

ŷ0 ± t_{ν, 1-α/2} * [Var(ŷ0)]^(1/2)
[Figure: fitted line with confidence limits for the mean response plotted against x.]
Prediction Interval

Even if we knew ŷ0 exactly for a given x0, there would still be variation in the responses due to system noise (ε).

The variance of the mean of g future trials at x0 is:

Var(ỹ0) = s_y^2 / g + Var(ŷ0)

We can define limits within which the mean of g future observations will fall. These limits for a future outcome are known as a 100(1-α)% prediction interval.
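The two variance terms can be computed for the example fit; a sketch at a hypothetical point x0 = 3.5 (the mean of x, chosen here purely for illustration), with g = 1 future trial:

```python
# Variance of the predicted mean response and of a single future
# observation at a hypothetical point x0 = 3.5 (illustrative choice).
import numpy as np

x = np.arange(1, 7)
y = np.array([2, 3, 5, 5, 7, 6], dtype=float)
X = np.column_stack([np.ones(6), x])
b = np.linalg.solve(X.T @ X, X.T @ y)
s2 = ((y - X @ b) @ (y - X @ b)) / (len(y) - 2)

x0 = np.array([1.0, 3.5])                            # [1, x0]
var_mean = x0 @ np.linalg.inv(X.T @ X) @ x0 * s2     # Var(y-hat_0)
g = 1                                                # one future trial
var_pred = s2 / g + var_mean                         # prediction variance

print(round(var_mean, 4), round(var_pred, 4))  # 0.1127 0.7889
```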
ŷ0 ± t_{ν, 1-α/2} * [s_y^2/g + Var(ŷ0)]^(1/2)
[Figure: fitted line with prediction limits plotted against x.]